issue dict | pr dict | pr_details dict |
---|---|---|
{
"body": "When a snapshot operation on a particular shard finishes, the data node where this shard resides sends an update shard status request to the master node to indicate that the operation on the shard is done. When the master node receives the command it queues cluster state update task and acknowledges the receipt of the command to the data node. \n\nThe update snapshot shard status tasks have relatively low priority, so during cluster instability they tend to get stuck at the end of the queue. If the master node gets restarted before processing these tasks the information about the shards can be lost and the new master assumes that they are still in process while the data node thinks that these shards are already done. \n\nThis might be the root cause of #9924 and #10564\n",
"comments": [
{
"body": "This has happened to us a few times in production. We've added \"check to make sure a snapshot is not running\" during any maintenance we perform to our ES cluster.\n",
"created_at": "2015-06-07T20:26:31Z"
}
],
"number": 11314,
"title": "Snapshot/Restore: restart of a master node during snapshot can lead to hanging snapshots"
} | {
"body": "When a snapshot operation on a particular shard finishes, the data node where this shard resides sends an update shard status request to the master node to indicate that the operation on the shard is done. When the master node receives the command it queues cluster state update task and acknowledges the receipt of the command to the data node.\n\nThe update snapshot shard status tasks have relatively low priority, so during cluster instability they tend to get stuck at the end of the queue. If the master node gets restarted before processing these tasks the information about the shards can be lost and the new master assumes that they are still in process while the data node thinks that these shards are already done.\n\n This commit add a retry mechanism that checks compares cluster state of a newly elected master and the current state of snapshot shards and updates the cluster state on the master again if needed.\n\nCloses #11314\n",
"number": 11450,
"review_comments": [
{
"body": "can we use switch case statements for this it seems to be easier to read?\n",
"created_at": "2015-06-03T10:08:45Z"
}
],
"title": "Sync up snapshot shard status on a master restart"
} | {
"commits": [
{
"message": "Snapshot/Restore: sync up snapshot shard status on a master restart\n\nWhen a snapshot operation on a particular shard finishes, the data node where this shard resides sends an update shard status request to the master node to indicate that the operation on the shard is done. When the master node receives the command it queues cluster state update task and acknowledges the receipt of the command to the data node.\n\nThe update snapshot shard status tasks have relatively low priority, so during cluster instability they tend to get stuck at the end of the queue. If the master node gets restarted before processing these tasks the information about the shards can be lost and the new master assumes that they are still in process while the data node thinks that these shards are already done.\n\n This commit add a retry mechanism that checks compares cluster state of a newly elected master and the current state of snapshot shards and updates the cluster state on the master again if needed.\n\nCloses #11314"
}
],
"files": [
{
"diff": "@@ -549,13 +549,18 @@ public void clusterChanged(ClusterChangedEvent event) {\n \n if (prev == null) {\n if (curr != null) {\n- processIndexShardSnapshots(curr);\n+ processIndexShardSnapshots(event);\n }\n } else {\n if (!prev.equals(curr)) {\n- processIndexShardSnapshots(curr);\n+ processIndexShardSnapshots(event);\n }\n }\n+ if (event.state().nodes().masterNodeId() != null &&\n+ event.state().nodes().masterNodeId().equals(event.previousState().nodes().masterNodeId()) == false) {\n+ syncShardStatsOnNewMaster(event);\n+ }\n+\n } catch (Throwable t) {\n logger.warn(\"Failed to update snapshot state \", t);\n }\n@@ -778,9 +783,10 @@ private boolean removedNodesCleanupNeeded(ClusterChangedEvent event) {\n /**\n * Checks if any new shards should be snapshotted on this node\n *\n- * @param snapshotMetaData snapshot metadata to be processed\n+ * @param event cluster state changed event\n */\n- private void processIndexShardSnapshots(SnapshotMetaData snapshotMetaData) {\n+ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n+ SnapshotMetaData snapshotMetaData = event.state().metaData().custom(SnapshotMetaData.TYPE);\n Map<SnapshotId, SnapshotShards> survivors = newHashMap();\n // First, remove snapshots that are no longer there\n for (Map.Entry<SnapshotId, SnapshotShards> entry : shardSnapshots.entrySet()) {\n@@ -830,7 +836,17 @@ private void processIndexShardSnapshots(SnapshotMetaData snapshotMetaData) {\n for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n IndexShardSnapshotStatus snapshotStatus = snapshotShards.shards.get(shard.getKey());\n if (snapshotStatus != null) {\n- snapshotStatus.abort();\n+ if (snapshotStatus.stage() == IndexShardSnapshotStatus.Stage.STARTED) {\n+ snapshotStatus.abort();\n+ } else if (snapshotStatus.stage() == IndexShardSnapshotStatus.Stage.DONE) {\n+ logger.debug(\"[{}] trying to cancel snapshot on the shard [{}] that is already done, updating status on the master\", entry.snapshotId(), shard.getKey());\n+ updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.snapshotId(), shard.getKey(),\n+ new ShardSnapshotStatus(event.state().nodes().localNodeId(), SnapshotMetaData.State.SUCCESS)));\n+ } else if (snapshotStatus.stage() == IndexShardSnapshotStatus.Stage.FAILURE) {\n+ logger.debug(\"[{}] trying to cancel snapshot on the shard [{}] that has already failed, updating status on the master\", entry.snapshotId(), shard.getKey());\n+ updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.snapshotId(), shard.getKey(),\n+ new ShardSnapshotStatus(event.state().nodes().localNodeId(), State.FAILED, snapshotStatus.failure())));\n+ }\n }\n }\n }\n@@ -878,6 +894,45 @@ public void run() {\n }\n }\n \n+ /**\n+ * Checks if any shards were processed that the new master doesn't know about\n+ * @param event\n+ */\n+ private void syncShardStatsOnNewMaster(ClusterChangedEvent event) {\n+ SnapshotMetaData snapshotMetaData = event.state().getMetaData().custom(SnapshotMetaData.TYPE);\n+ if (snapshotMetaData == null) {\n+ return;\n+ }\n+ for (SnapshotMetaData.Entry snapshot : snapshotMetaData.entries()) {\n+ if (snapshot.state() == State.STARTED || snapshot.state() == State.ABORTED) {\n+ ImmutableMap<ShardId, IndexShardSnapshotStatus> localShards = currentSnapshotShards(snapshot.snapshotId());\n+ if (localShards != null) {\n+ ImmutableMap<ShardId, ShardSnapshotStatus> masterShards = snapshot.shards();\n+ for(Map.Entry<ShardId, IndexShardSnapshotStatus> localShard : 
localShards.entrySet()) {\n+ ShardId shardId = localShard.getKey();\n+ IndexShardSnapshotStatus localShardStatus = localShard.getValue();\n+ ShardSnapshotStatus masterShard = masterShards.get(shardId);\n+ if (masterShard != null && masterShard.state().completed() == false) {\n+ // Master knows about the shard and thinks it has not completed\n+ if (localShardStatus.stage() == IndexShardSnapshotStatus.Stage.DONE) {\n+ // but we think the shard is done - we need to make new master know that the shard is done\n+ logger.debug(\"[{}] new master thinks the shard [{}] is not completed but the shard is done locally, updating status on the master\", snapshot.snapshotId(), shardId);\n+ updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(snapshot.snapshotId(), shardId,\n+ new ShardSnapshotStatus(event.state().nodes().localNodeId(), SnapshotMetaData.State.SUCCESS)));\n+ } else if (localShard.getValue().stage() == IndexShardSnapshotStatus.Stage.FAILURE) {\n+ // but we think the shard failed - we need to make new master know that the shard failed\n+ logger.debug(\"[{}] new master thinks the shard [{}] is not completed but the shard failed locally, updating status on master\", snapshot.snapshotId(), shardId);\n+ updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(snapshot.snapshotId(), shardId,\n+ new ShardSnapshotStatus(event.state().nodes().localNodeId(), State.FAILED, localShardStatus.failure())));\n+\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n /**\n * Updates the shard status\n *",
"filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -22,8 +22,12 @@\n import com.google.common.collect.ImmutableList;\n \n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksResponse;\n+import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n+import org.elasticsearch.cluster.service.PendingClusterTask;\n+import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.repositories.RepositoriesService;\n@@ -37,9 +41,12 @@\n import java.nio.file.Path;\n import java.nio.file.SimpleFileVisitor;\n import java.nio.file.attribute.BasicFileAttributes;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n \n /**\n */\n@@ -121,4 +128,128 @@ public static String blockNodeWithIndex(String index) {\n public static void unblockNode(String node) {\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, node).repository(\"test-repo\")).unblock();\n }\n+\n+ protected void assertBusyPendingTasks(final String taskPrefix, final int expectedCount) throws Exception {\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ PendingClusterTasksResponse tasks = client().admin().cluster().preparePendingClusterTasks().get();\n+ int count = 0;\n+ for(PendingClusterTask task : tasks) {\n+ if (task.getSource().toString().startsWith(taskPrefix)) {\n+ count++;\n+ }\n+ }\n+ assertThat(count, greaterThanOrEqualTo(expectedCount));\n+ }\n+ }, 1, TimeUnit.MINUTES);\n+ }\n+\n+ /**\n+ * Cluster state task that blocks waits for the blockOn task to show up and then blocks execution not letting\n+ * any cluster state update task to be performed unless they have priority higher then passThroughPriority.\n+ *\n+ * This class is useful to testing of cluster state update task batching for lower priority tasks.\n+ */\n+ protected class BlockingClusterStateListener implements ClusterStateListener {\n+\n+ private final Predicate<ClusterChangedEvent> blockOn;\n+ private final Predicate<ClusterChangedEvent> countOn;\n+ private final ClusterService clusterService;\n+ private final CountDownLatch latch;\n+ private final Priority passThroughPriority;\n+ private int count;\n+ private boolean timedOut;\n+ private final TimeValue timeout;\n+ private long stopWaitingAt = -1;\n+\n+ public BlockingClusterStateListener(ClusterService clusterService, String blockOn, String countOn, Priority passThroughPriority) {\n+ this(clusterService, blockOn, countOn, passThroughPriority, TimeValue.timeValueMinutes(1));\n+ }\n+\n+ public BlockingClusterStateListener(ClusterService clusterService, final String blockOn, final String countOn, Priority passThroughPriority, TimeValue timeout) {\n+ this.clusterService = clusterService;\n+ this.blockOn = new Predicate<ClusterChangedEvent>() {\n+ @Override\n+ public boolean apply(ClusterChangedEvent clusterChangedEvent) {\n+ return clusterChangedEvent.source().startsWith(blockOn);\n+ }\n+ };\n+ this.countOn = new Predicate<ClusterChangedEvent>() {\n+ @Override\n+ public boolean apply(ClusterChangedEvent clusterChangedEvent) {\n+ return clusterChangedEvent.source().startsWith(countOn);\n+ }\n+ };\n+ this.latch = new CountDownLatch(1);\n+ 
this.passThroughPriority = passThroughPriority;\n+ this.timeout = timeout;\n+\n+ }\n+\n+ public void unblock() {\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void clusterChanged(ClusterChangedEvent event) {\n+ if (blockOn.apply(event)) {\n+ logger.info(\"blocking cluster state tasks on [{}]\", event.source());\n+ assert stopWaitingAt < 0; // Make sure we are the first time here\n+ stopWaitingAt = System.currentTimeMillis() + timeout.getMillis();\n+ addBlock();\n+ }\n+ if (countOn.apply(event)) {\n+ count++;\n+ }\n+ }\n+\n+ private void addBlock() {\n+ // We should block after this task - add blocking cluster state update task\n+ clusterService.submitStateUpdateTask(\"test_block\", passThroughPriority, new ClusterStateUpdateTask() {\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ while(System.currentTimeMillis() < stopWaitingAt) {\n+ for (PendingClusterTask task : clusterService.pendingTasks()) {\n+ if (task.getSource().string().equals(\"test_block\") == false && passThroughPriority.sameOrAfter(task.getPriority())) {\n+ // There are other higher priority tasks in the queue and let them pass through and then set the block again\n+ logger.info(\"passing through cluster state task {}\", task.getSource());\n+ addBlock();\n+ return currentState;\n+ }\n+ }\n+ try {\n+ logger.info(\"waiting....\");\n+ if (latch.await(Math.min(100, timeout.millis()), TimeUnit.MILLISECONDS)){\n+ // Done waiting - unblock\n+ logger.info(\"unblocked\");\n+ return currentState;\n+ }\n+ logger.info(\"done waiting....\");\n+ } catch (InterruptedException ex) {\n+ logger.info(\"interrupted....\");\n+ Thread.currentThread().interrupt();\n+ return currentState;\n+ }\n+ }\n+ timedOut = true;\n+ return currentState;\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Throwable t) {\n+ logger.warn(\"failed to execute [{}]\", t, source);\n+ }\n+ });\n+\n+ }\n+\n+ public int count() {\n+ return count;\n+ }\n+\n+ public boolean timedOut() {\n+ return timedOut;\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/snapshots/AbstractSnapshotTests.java",
"status": "modified"
},
{
"diff": "@@ -34,13 +34,15 @@\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotStatus;\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse;\n import org.elasticsearch.action.admin.indices.recovery.ShardRecoveryResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ProcessedClusterStateUpdateTask;\n import org.elasticsearch.cluster.AbstractDiffable;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.metadata.MetaData.Custom;\n+import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n import org.elasticsearch.cluster.metadata.MetaDataIndexStateService;\n import org.elasticsearch.common.Nullable;\n@@ -64,6 +66,7 @@\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryPlugin;\n import org.elasticsearch.test.InternalTestCluster;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.rest.FakeRestRequest;\n import org.junit.Ignore;\n import org.junit.Test;\n@@ -791,6 +794,83 @@ public void run() {\n logger.info(\"--> done\");\n }\n \n+ @Test\n+ public void masterShutdownDuringSnapshotTest() throws Exception {\n+\n+ Settings masterSettings = settingsBuilder().put(\"node.data\", false).build();\n+ Settings dataSettings = settingsBuilder().put(\"node.master\", false).build();\n+\n+ logger.info(\"--> starting two master nodes and two data nodes\");\n+ internalCluster().startNode(masterSettings);\n+ internalCluster().startNode(masterSettings);\n+ internalCluster().startNode(dataSettings);\n+ internalCluster().startNode(dataSettings);\n+\n+ final Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(Settings.settingsBuilder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ assertAcked(prepareCreate(\"test-idx\", 0, settingsBuilder().put(\"number_of_shards\", between(1, 20))\n+ .put(\"number_of_replicas\", 0)));\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ final int numdocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test-idx\", \"type1\", Integer.toString(i)).setSource(\"field1\", \"bar \" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+\n+ final int numberOfShards = getNumShards(\"test-idx\").numPrimaries;\n+ logger.info(\"number of shards: {}\", numberOfShards);\n+\n+ final ClusterService clusterService = internalCluster().clusterService(internalCluster().getMasterName());\n+ BlockingClusterStateListener snapshotListener = new BlockingClusterStateListener(clusterService, \"update_snapshot [\", \"update snapshot state\", Priority.HIGH);\n+ try {\n+ clusterService.addFirst(snapshotListener);\n+ logger.info(\"--> snapshot\");\n+ dataNodeClient().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n+\n+ // Await until some updates are in 
pending state.\n+ assertBusyPendingTasks(\"update snapshot state\", 1);\n+\n+ logger.info(\"--> stopping master node\");\n+ internalCluster().stopCurrentMasterNode();\n+\n+ logger.info(\"--> unblocking snapshot execution\");\n+ snapshotListener.unblock();\n+\n+ logger.info(\"--> wait until the snapshot is done\");\n+\n+ } finally {\n+ clusterService.remove(snapshotListener);\n+ }\n+\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ SnapshotsStatusResponse snapshotsStatusResponse = client().admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ ImmutableList<SnapshotStatus> snapshotStatuses = snapshotsStatusResponse.getSnapshots();\n+ assertEquals(1, snapshotStatuses.size());\n+ assertTrue(snapshotStatuses.get(0).getState().completed());\n+ }\n+ });\n+\n+ GetSnapshotsResponse snapshotsStatusResponse = client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertEquals(SnapshotState.SUCCESS, snapshotInfo.state());\n+ assertEquals(snapshotInfo.totalShards(), snapshotInfo.successfulShards());\n+ assertEquals(0, snapshotInfo.failedShards());\n+ }\n+\n+\n private boolean snapshotIsDone(String repository, String snapshot) {\n try {\n SnapshotsStatusResponse snapshotsStatusResponse = client().admin().cluster().prepareSnapshotStatus(repository).setSnapshots(snapshot).get();",
"filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java",
"status": "modified"
},
{
"diff": "@@ -1893,126 +1893,4 @@ public void batchingShardUpdateTaskTest() throws Exception {\n // Check that cluster state update task was called only once\n assertEquals(1, restoreListener.count());\n }\n-\n- private void assertBusyPendingTasks(final String taskPrefix, final int expectedCount) throws Exception {\n- assertBusy(new Runnable() {\n- @Override\n- public void run() {\n- PendingClusterTasksResponse tasks = client().admin().cluster().preparePendingClusterTasks().get();\n- int count = 0;\n- for(PendingClusterTask task : tasks) {\n- if (task.getSource().toString().startsWith(taskPrefix)) {\n- count++;\n- }\n- }\n- assertThat(count, equalTo(expectedCount));\n- }\n- }, 1, TimeUnit.MINUTES);\n- }\n-\n- /**\n- * Cluster state task that blocks waits for the blockOn task to show up and then blocks execution not letting\n- * any cluster state update task to be performed unless they have priority higher then passThroughPriority.\n- *\n- * This class is useful to testing of cluster state update task batching for lower priority tasks.\n- */\n- public class BlockingClusterStateListener implements ClusterStateListener {\n-\n- private final Predicate<ClusterChangedEvent> blockOn;\n- private final Predicate<ClusterChangedEvent> countOn;\n- private final ClusterService clusterService;\n- private final CountDownLatch latch;\n- private final Priority passThroughPriority;\n- private int count;\n- private boolean timedOut;\n- private final TimeValue timeout;\n- private long stopWaitingAt = -1;\n-\n- public BlockingClusterStateListener(ClusterService clusterService, String blockOn, String countOn, Priority passThroughPriority) {\n- this(clusterService, blockOn, countOn, passThroughPriority, TimeValue.timeValueMinutes(1));\n- }\n-\n- public BlockingClusterStateListener(ClusterService clusterService, final String blockOn, final String countOn, Priority passThroughPriority, TimeValue timeout) {\n- this.clusterService = clusterService;\n- this.blockOn = new Predicate<ClusterChangedEvent>() {\n- @Override\n- public boolean apply(ClusterChangedEvent clusterChangedEvent) {\n- return clusterChangedEvent.source().startsWith(blockOn);\n- }\n- };\n- this.countOn = new Predicate<ClusterChangedEvent>() {\n- @Override\n- public boolean apply(ClusterChangedEvent clusterChangedEvent) {\n- return clusterChangedEvent.source().startsWith(countOn);\n- }\n- };\n- this.latch = new CountDownLatch(1);\n- this.passThroughPriority = passThroughPriority;\n- this.timeout = timeout;\n-\n- }\n-\n- public void unblock() {\n- latch.countDown();\n- }\n-\n- @Override\n- public void clusterChanged(ClusterChangedEvent event) {\n- if (blockOn.apply(event)) {\n- logger.info(\"blocking cluster state tasks on [{}]\", event.source());\n- assert stopWaitingAt < 0; // Make sure we are the first time here\n- stopWaitingAt = System.currentTimeMillis() + timeout.getMillis();\n- addBlock();\n- }\n- if (countOn.apply(event)) {\n- count++;\n- }\n- }\n-\n- private void addBlock() {\n- // We should block after this task - add blocking cluster state update task\n- clusterService.submitStateUpdateTask(\"test_block\", passThroughPriority, new ClusterStateUpdateTask() {\n- @Override\n- public ClusterState execute(ClusterState currentState) throws Exception {\n- while(System.currentTimeMillis() < stopWaitingAt) {\n- for (PendingClusterTask task : clusterService.pendingTasks()) {\n- if (task.getSource().string().equals(\"test_block\") == false && passThroughPriority.sameOrAfter(task.getPriority())) {\n- // There are other higher priority tasks in the queue 
and let them pass through and then set the block again\n- logger.info(\"passing through cluster state task {}\", task.getSource());\n- addBlock();\n- return currentState;\n- }\n- }\n- try {\n- logger.info(\"wating....\");\n- if (latch.await(Math.min(100, timeout.millis()), TimeUnit.MILLISECONDS)){\n- // Done waiting - unblock\n- logger.info(\"unblocked\");\n- return currentState;\n- }\n- logger.info(\"done wating....\");\n- } catch (InterruptedException ex) {\n- Thread.currentThread().interrupt();\n- }\n- }\n- timedOut = true;\n- return currentState;\n- }\n-\n- @Override\n- public void onFailure(String source, Throwable t) {\n- logger.warn(\"failed to execute [{}]\", t, source);\n- }\n- });\n-\n- }\n-\n- public int count() {\n- return count;\n- }\n-\n- public boolean timedOut() {\n- return timedOut;\n- }\n- }\n }",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
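
The fix above boils down to a reconciliation step: after a master change, each data node re-reports any snapshot shard that it has already finished but that the new master still considers in progress. Below is a minimal, self-contained sketch of that idea; every class and method name in it is a hypothetical stand-in, not an Elasticsearch API.

```java
import java.util.Map;

// Hypothetical sketch of the sync-on-new-master idea from the PR above.
// None of these types are Elasticsearch classes; they only model the comparison.
public class ShardStatusSync {
    enum LocalStage { STARTED, DONE, FAILURE }

    interface StatusReporter {
        void resend(String shardId, LocalStage finalStage); // re-send final status to the new master
    }

    static void syncOnNewMaster(Map<String, LocalStage> localShards,
                                Map<String, Boolean> completedOnMaster,
                                StatusReporter reporter) {
        for (Map.Entry<String, LocalStage> shard : localShards.entrySet()) {
            LocalStage local = shard.getValue();
            Boolean masterView = completedOnMaster.get(shard.getKey());
            // The new master knows the shard but still thinks it is running,
            // while locally it already finished (successfully or not): re-report it.
            if (Boolean.FALSE.equals(masterView)
                    && (local == LocalStage.DONE || local == LocalStage.FAILURE)) {
                reporter.resend(shard.getKey(), local);
            }
        }
    }
}
```
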
{
"body": "Currently, when trying to determine if a location is within one of the configured repository\npaths, we compare a canonical path against an absolute path. These are not always\nequivalent and this check will fail even when the same directory is used. This changes\nthe logic to to follow that of the http server, where we use normalized absolute path\ncomparisons. A test has been added that failed with the old code and now passes with the\nupdated method.\n\nThis only targets the 1.x branch as the code for handling this is different on master.\n",
"comments": [
{
"body": "@imotov can you review?\n",
"created_at": "2015-05-29T15:30:31Z"
},
{
"body": "@jaymode Nice catch! Thanks! Maybe we can make it to match master: \n\n``` java\n public static File resolve(File[] roots, String path) {\n for (File root : roots) {\n Path rootPath = root.toPath();\n Path normalizedPath = rootPath.resolve(path).normalize();\n if(normalizedPath.startsWith(rootPath)) {\n return normalizedPath.toFile();\n }\n }\n return null;\n }\n```\n\nWhat do you think?\n",
"created_at": "2015-05-29T15:47:01Z"
},
{
"body": "@imotov makes sense! the only thing I did different than your example is normalize the root path, so we are comparing two normalized paths.\n",
"created_at": "2015-05-29T15:57:57Z"
},
{
"body": "LGTM then.\n",
"created_at": "2015-05-29T15:59:54Z"
}
],
"number": 11426,
"title": "Fix check for locations in a repository path"
} | {
"body": "Currently, when trying to determine if a location is within one of the configured repository\npaths, we compare the root path against a normalized path but the root path is never\nnormalized so the check may incorrectly fail. This change normalizes the root path and\ncompares it to the other normalized path.\n\nRelates to #11426\n",
"number": 11446,
"review_comments": [],
"title": "Always normalize root paths during resolution of paths"
} | {
"commits": [
{
"message": "always normalize root paths during resolution of paths\n\nCurrently, when trying to determine if a location is within one of the configured repository\npaths, we compare the root path against a normalized path but the root path is never\nnormalized so the check may incorrectly fail. This change normalizes the root path and\ncompares it to the other normalized path.\n\nRelates to #11426"
}
],
"files": [
{
"diff": "@@ -83,8 +83,9 @@ public static Path get(URI uri) {\n */\n public static Path get(Path[] roots, String path) {\n for (Path root : roots) {\n- Path normalizedPath = root.resolve(path).normalize();\n- if(normalizedPath.startsWith(root)) {\n+ Path normalizedRoot = root.normalize();\n+ Path normalizedPath = normalizedRoot.resolve(path).normalize();\n+ if(normalizedPath.startsWith(normalizedRoot)) {\n return normalizedPath;\n }\n }",
"filename": "src/main/java/org/elasticsearch/common/io/PathUtils.java",
"status": "modified"
},
{
"diff": "@@ -77,13 +77,14 @@ public void testRepositoryResolution() throws IOException {\n Environment environment = newEnvironment();\n assertThat(environment.resolveRepoFile(\"/test/repos/repo1\"), nullValue());\n assertThat(environment.resolveRepoFile(\"test/repos/repo1\"), nullValue());\n- environment = newEnvironment(settingsBuilder().putArray(\"path.repo\", \"/test/repos\", \"/another/repos\").build());\n+ environment = newEnvironment(settingsBuilder().putArray(\"path.repo\", \"/test/repos\", \"/another/repos\", \"/test/repos/../other\").build());\n assertThat(environment.resolveRepoFile(\"/test/repos/repo1\"), notNullValue());\n assertThat(environment.resolveRepoFile(\"test/repos/repo1\"), notNullValue());\n assertThat(environment.resolveRepoFile(\"/another/repos/repo1\"), notNullValue());\n assertThat(environment.resolveRepoFile(\"/test/repos/../repo1\"), nullValue());\n assertThat(environment.resolveRepoFile(\"/test/repos/../repos/repo1\"), notNullValue());\n assertThat(environment.resolveRepoFile(\"/somethingeles/repos/repo1\"), nullValue());\n+ assertThat(environment.resolveRepoFile(\"/test/other/repo\"), notNullValue());\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/env/EnvironmentTests.java",
"status": "modified"
}
]
} |
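
The core of the path fix is easy to reproduce with plain java.nio.file: if the configured root itself contains a `..` segment, a normalized resolved path will not start with the un-normalized root, so both sides have to be normalized before the comparison. A small standalone sketch (the paths are chosen for illustration only):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RepoPathCheck {
    public static void main(String[] args) {
        // A repo root configured with a ".." segment, as in the added test case.
        Path root = Paths.get("/test/repos/../other");
        Path resolved = root.resolve("repo1").normalize();   // -> /test/other/repo1

        // Comparing against the raw root fails even though the location is legitimate...
        System.out.println(resolved.startsWith(root));             // false
        // ...while comparing two normalized paths succeeds, which is what the fix does.
        System.out.println(resolved.startsWith(root.normalize())); // true
    }
}
```
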
{
"body": "It seams that SearchContexts are not released in case the search requests fails due to a rejection (thread pool queue full). I have a test that fails each time here: https://github.com/brwe/elasticsearch/blob/open-search-contexts/src/test/java/org/elasticsearch/search/SearchWithRejectionsTests.java\n",
"comments": [
{
"body": "I think we should fix this for 1.6 if we can though\n",
"created_at": "2015-05-29T12:04:47Z"
},
{
"body": "This only happens with dfs queries. I made a pull request to fix it here: https://github.com/elastic/elasticsearch/pull/11434\n",
"created_at": "2015-05-30T15:55:17Z"
}
],
"number": 11400,
"title": "SearchContext not released when search request is rejected"
} | {
"body": "When the dfs phase runs a SearchContext is created on each node that has a shard\nfor this query. When the query phase (or query and fetch phase) failed that SearchContext\nwas released only if the query was actually executed on the node. If for example\nthe query was rejected because the thread pool queue was full then the search context\nwas not released.\nThis commit adds a dedicated call for releasing the SearchContext in this case.\n\nIn addition, set the docIdsToLoad to null in case the fetch phase failed, otherwise\nsearch contexts might not be released in releaseIrrelevantSearchContexts.\n\ncloses #11400\n",
"number": 11434,
"review_comments": [
{
"body": "is this guaranteed? I mean if the cpu is fast/slow enough we might not reject anything in theory? I think we can omit this assertion?\n",
"created_at": "2015-05-30T20:12:21Z"
},
{
"body": "Even if onSecondPhaseFailure is not supposed to throw exceptions, should we put onSecondPhaseFailure in a try block and sendReleaseSearchContext in the finally to be on the safe side?\n",
"created_at": "2015-05-31T08:49:58Z"
}
],
"title": "Release search contexts after failed dfs or query phase for dfs queries"
} | {
"commits": [
{
"message": "search: release search contexts after failed dfs or query phase for dfs queries\n\nWhen the dfs phase runs a SearchContext is created on each node that has a shard\nfor this query. When the query phase (or query and fetch phase) failed that SearchContext\nwas released only if the query was actually executed on the node. If for example\nthe query was rejected because the thread pool queue was full then the search context\nwas not released.\nThis commit adds a dedicated call for releasing the SearchContext in this case.\n\nIn addition, we must set the docIdsToLoad to null in case the fetch phase failed, otherwise\nsearch contexts might not be released in releaseIrrelevantSearchContexts.\n\ncloses #11400"
},
{
"message": "remove check for failure, might rarely not fail at all"
},
{
"message": "send request to release search context in finally block to be on the save side"
}
],
"files": [
{
"diff": "@@ -91,7 +91,7 @@ protected void moveToSecondPhase() {\n }\n }\n \n- void executeSecondPhase(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, DiscoveryNode node, final QuerySearchRequest querySearchRequest) {\n+ void executeSecondPhase(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, final DiscoveryNode node, final QuerySearchRequest querySearchRequest) {\n searchService.sendExecuteFetch(node, querySearchRequest, new ActionListener<QueryFetchSearchResult>() {\n @Override\n public void onResponse(QueryFetchSearchResult result) {\n@@ -104,7 +104,14 @@ public void onResponse(QueryFetchSearchResult result) {\n \n @Override\n public void onFailure(Throwable t) {\n- onSecondPhaseFailure(t, querySearchRequest, shardIndex, dfsResult, counter);\n+ try {\n+ onSecondPhaseFailure(t, querySearchRequest, shardIndex, dfsResult, counter);\n+ } finally {\n+ // the query might not have been executed at all (for example because thread pool rejected execution)\n+ // and the search context that was created in dfs phase might not be released.\n+ // release it again to be in the safe side\n+ sendReleaseSearchContext(querySearchRequest.id(), node);\n+ }\n }\n });\n }",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryAndFetchAction.java",
"status": "modified"
},
{
"diff": "@@ -100,7 +100,7 @@ protected void moveToSecondPhase() {\n }\n }\n \n- void executeQuery(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, final QuerySearchRequest querySearchRequest, DiscoveryNode node) {\n+ void executeQuery(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, final QuerySearchRequest querySearchRequest, final DiscoveryNode node) {\n searchService.sendExecuteQuery(node, querySearchRequest, new ActionListener<QuerySearchResult>() {\n @Override\n public void onResponse(QuerySearchResult result) {\n@@ -113,7 +113,14 @@ public void onResponse(QuerySearchResult result) {\n \n @Override\n public void onFailure(Throwable t) {\n- onQueryFailure(t, querySearchRequest, shardIndex, dfsResult, counter);\n+ try {\n+ onQueryFailure(t, querySearchRequest, shardIndex, dfsResult, counter);\n+ } finally {\n+ // the query might not have been executed at all (for example because thread pool rejected execution)\n+ // and the search context that was created in dfs phase might not be released.\n+ // release it again to be in the safe side\n+ sendReleaseSearchContext(querySearchRequest.id(), node);\n+ }\n }\n });\n }\n@@ -176,6 +183,11 @@ public void onResponse(FetchSearchResult result) {\n \n @Override\n public void onFailure(Throwable t) {\n+ // the search context might not be cleared on the node where the fetch was executed for example\n+ // because the action was rejected by the thread pool. in this case we need to send a dedicated\n+ // request to clear the search context. by setting docIdsToLoad to null, the context will be cleared\n+ // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done.\n+ docIdsToLoad.set(shardIndex, null);\n onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter);\n }\n });",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryThenFetchAction.java",
"status": "modified"
},
{
"diff": "@@ -35,8 +35,8 @@\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n import org.elasticsearch.search.controller.SearchPhaseController;\n-import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n import org.elasticsearch.search.fetch.FetchSearchResult;\n+import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n import org.elasticsearch.search.internal.InternalSearchResponse;\n import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n import org.elasticsearch.search.query.QuerySearchResultProvider;\n@@ -118,7 +118,10 @@ public void onResponse(FetchSearchResult result) {\n \n @Override\n public void onFailure(Throwable t) {\n- // the failure might happen without managing to clear the search context..., potentially need to clear its context (for example)\n+ // the search context might not be cleared on the node where the fetch was executed for example\n+ // because the action was rejected by the thread pool. in this case we need to send a dedicated\n+ // request to clear the search context. by setting docIdsToLoad to null, the context will be cleared\n+ // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done.\n docIdsToLoad.set(shardIndex, null);\n onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter);\n }",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryThenFetchAction.java",
"status": "modified"
},
{
"diff": "@@ -303,9 +303,7 @@ private void raiseEarlyFailure(Throwable t) {\n for (AtomicArray.Entry<FirstResult> entry : firstResults.asList()) {\n try {\n DiscoveryNode node = nodes.get(entry.value.shardTarget().nodeId());\n- if (node != null) { // should not happen (==null) but safeguard anyhow\n- searchService.sendFreeContext(node, entry.value.id(), request);\n- }\n+ sendReleaseSearchContext(entry.value.id(), node);\n } catch (Throwable t1) {\n logger.trace(\"failed to release context\", t1);\n }\n@@ -329,9 +327,7 @@ protected void releaseIrrelevantSearchContexts(AtomicArray<? extends QuerySearch\n && docIdsToLoad.get(entry.index) == null) { // but none of them made it to the global top docs\n try {\n DiscoveryNode node = nodes.get(entry.value.queryResult().shardTarget().nodeId());\n- if (node != null) { // should not happen (==null) but safeguard anyhow\n- searchService.sendFreeContext(node, entry.value.queryResult().id(), request);\n- }\n+ sendReleaseSearchContext(entry.value.queryResult().id(), node);\n } catch (Throwable t1) {\n logger.trace(\"failed to release context\", t1);\n }\n@@ -340,6 +336,12 @@ protected void releaseIrrelevantSearchContexts(AtomicArray<? extends QuerySearch\n }\n }\n \n+ protected void sendReleaseSearchContext(long contextId, DiscoveryNode node) {\n+ if (node != null) {\n+ searchService.sendFreeContext(node, contextId, request);\n+ }\n+ }\n+\n protected ShardFetchSearchRequest createFetchRequest(QuerySearchResult queryResult, AtomicArray.Entry<IntArrayList> entry, ScoreDoc[] lastEmittedDocPerShard) {\n if (lastEmittedDocPerShard != null) {\n ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[entry.index];",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchTypeAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,90 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search;\n+\n+import com.google.common.base.Predicate;\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.util.concurrent.Future;\n+import java.util.concurrent.TimeUnit;\n+\n+import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+\n+@ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.SUITE)\n+@LuceneTestCase.Slow\n+public class SearchWithRejectionsTests extends ElasticsearchIntegrationTest {\n+ @Override\n+ public Settings nodeSettings(int nodeOrdinal) {\n+ return settingsBuilder().put(super.nodeSettings(nodeOrdinal))\n+ .put(\"threadpool.search.type\", \"fixed\")\n+ .put(\"threadpool.search.size\", 1)\n+ .put(\"threadpool.search.queue_size\", 1)\n+ .build();\n+ }\n+\n+ @Test\n+ public void testOpenContextsAfterRejections() throws InterruptedException {\n+ createIndex(\"test\");\n+ ensureGreen(\"test\");\n+ final int docs = scaledRandomIntBetween(20, 50);\n+ for (int i = 0; i < docs; i++) {\n+ client().prepareIndex(\"test\", \"type\", Integer.toString(i)).setSource(\"field\", \"value\").execute().actionGet();\n+ }\n+ IndicesStatsResponse indicesStats = client().admin().indices().prepareStats().execute().actionGet();\n+ assertThat(indicesStats.getTotal().getSearch().getOpenContexts(), equalTo(0l));\n+ refresh();\n+\n+ int numSearches = 10;\n+ Future<SearchResponse>[] responses = new Future[numSearches];\n+ SearchType searchType = randomFrom(SearchType.DEFAULT, SearchType.QUERY_AND_FETCH, SearchType.QUERY_THEN_FETCH, SearchType.DFS_QUERY_AND_FETCH, SearchType.DFS_QUERY_THEN_FETCH);\n+ logger.info(\"search type is {}\", searchType);\n+ for (int i = 0; i < numSearches; i++) {\n+ responses[i] = client().prepareSearch()\n+ .setQuery(matchAllQuery())\n+ .setSearchType(searchType)\n+ .execute();\n+ }\n+ for (int i = 0; i < numSearches; i++) {\n+ try {\n+ responses[i].get();\n+ } catch (Throwable t) {\n+ }\n+ }\n+ awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object input) {\n+ // we must wait here because the requests to release search contexts might still be in flight\n+ // although the search request has already returned\n+ return 
client().admin().indices().prepareStats().execute().actionGet().getTotal().getSearch().getOpenContexts() == 0;\n+ }\n+ }, 1, TimeUnit.SECONDS);\n+ indicesStats = client().admin().indices().prepareStats().execute().actionGet();\n+ assertThat(indicesStats.getTotal().getSearch().getOpenContexts(), equalTo(0l));\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/search/SearchWithRejectionsTests.java",
"status": "added"
}
]
} |
{
"body": "...iling tokens (this might corrupt client result parsing).\n\nInvalidate document when the source containins trailing tokens:\n\nExample (indexing):\n\n'{}}' => invalid sice it contains an extra '}' after the source object is exited.\n",
"comments": [
{
"body": "I [updated](https://github.com/s1monw/elasticsearch/commit/4dca0960be56c0c5a8cc80c021c1bf6636b91241) this PR a bit with the addition this is only applied to indices that are created with ES v 1.3. \n",
"created_at": "2014-07-14T09:00:59Z"
},
{
"body": "To be completely honest, I don't think the document mapper can decide that when it's done parsing it is also the end of the underlying stream. It should also not mutate it with reading nextToken on it beyond the object scope. Imho it's the responsibility of the caller like the TransportIndexAction to validate the trailing content\n",
"created_at": "2014-07-14T09:49:47Z"
},
{
"body": "@s1monw @bleskes what are the next steps here?\n",
"created_at": "2014-08-22T07:06:11Z"
},
{
"body": "@bleskes can you think of a single place where we can perform this check for all REST requests?\n",
"created_at": "2014-10-16T19:31:22Z"
},
{
"body": "A quick fix would be to do the check in this PR does but only if the parser was created locally, like here: https://github.com/episerver/elasticsearch/blob/ThrowMapperParserExceptionIfSourceIsNotEndedProperly/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java#L518 \n\nA more general solution is to see if it makes sense (not sure) to add a flag to the close method of XContentParser to verify that the underlying stream was fully consumed.\n\nBtw - we should also consider accepting trailing empty spaces and new lines - I'm not sure whether the current check is good or not for that. As a different change, I would trim those down when copying over the bytes in SourceToParse\n",
"created_at": "2014-10-18T12:08:26Z"
},
{
"body": "This is certainly something I would like to see fixed, but we need to fix it in a clean and generic way, as per @bleskes' comments. \n\n@lindstromhenrik would you be interested in taking this further? It may take a few iterations.\n",
"created_at": "2014-10-20T11:38:26Z"
},
{
"body": "@bleskes lets talk about this at some point - we should move forward here \n",
"created_at": "2014-11-21T10:37:30Z"
},
{
"body": "We discussed this issue and PR, it is indeed a nice to have. We would want to expand it a bit probably, e.g. make sure we consume the entire document we are going to index, and add some more tests with different invalid jsons.\n\nWe have some concerns around backwards compatibility though. Even if we apply this validation at index time only (so we don't break searches for invalid documents that might have been previously stored), the additional validation might cause problems anyway because there is no way to fix invalid jsons other than deleting/reindexing them. Once this feature is in, we might rely on it in the future (e.g. assuming stored json is always valid) but that would be wrong as we will never be sure that there are no invalid jsons in the index.\n\nThat said, I am personally for moving forward with this, by looking at it as just a best effort to reject invalid jsons. Some might still get through or might be in the index from before, so it's something that we should never rely on in future features. Marking as adoptme since this is a good starting point but some more work needs to be done on it.\n",
"created_at": "2015-03-28T08:00:49Z"
},
{
"body": "So the summary is, because ES did the wrong thing before, it must always do the wrong thing forever? Doesn't sound right...\n",
"created_at": "2015-04-08T19:10:42Z"
},
{
"body": "This should be simple to solve backcompat. Ignore the check on indexes created before 2.0 (although I think this is silly and we should just always enforce the check).\n",
"created_at": "2015-04-08T19:38:41Z"
},
{
"body": "+1 @rjernst \n",
"created_at": "2015-04-08T20:05:19Z"
},
{
"body": "I've revived this and created a new PR: #11414\n",
"created_at": "2015-05-29T10:21:16Z"
}
],
"number": 2315,
"title": "Verify that source object is ended properly and does not contain any trailing tokens"
} | {
"body": "See #2315\n",
"number": 11414,
"review_comments": [],
"title": "Validate parsed document does not have trailing garbage that is invalid json"
} | {
"commits": [
{
"message": "Mappings: Validate parsed document does not have trailing garbage that is invalid json\n\nSee #2315"
}
],
"files": [
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.util.CloseableThreadLocal;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.settings.Settings;\n@@ -127,6 +128,13 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n parser.nextToken();\n }\n \n+ // try to parse the next token, this should be null if the object is ended properly\n+ // but will throw a JSON exception if the extra tokens is not valid JSON (this will be handled by the catch)\n+ if (Version.indexCreated(indexSettings).onOrAfter(Version.V_2_0_0_beta1)) {\n+ token = parser.nextToken();\n+ assert token == null; // double check, in tests, that we didn't end parsing early\n+ }\n+\n for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n metadataMapper.postParse(context);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -309,4 +309,18 @@ public void testComplete() throws Exception {\n .endObject().endObject().string();\n assertFalse(parser.parse(mapping).sourceMapper().isComplete());\n }\n+\n+ public void testSourceObjectContainsExtraTokens() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").endObject().endObject().string();\n+ DocumentMapper documentMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ documentMapper.parse(\"test\", \"type\", \"1\", new BytesArray(\"{}}\")); // extra end object (invalid JSON)\n+ fail(\"Expected parse exception\");\n+ } catch (MapperParsingException e) {\n+ assertNotNull(e.getRootCause());\n+ String message = e.getRootCause().getMessage();\n+ assertTrue(message, message.contains(\"Unexpected close marker '}'\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/source/DefaultSourceMappingTests.java",
"status": "modified"
}
]
} |
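
The check added in the parser above amounts to asking the underlying JSON parser for one more token after the root object has been closed: for well-formed input that call returns null, while trailing garbage such as `{}}` raises a parse error. A minimal sketch with Jackson (the library Elasticsearch's JSON XContent is built on); the snippet is illustrative and not the actual DocumentParser code:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

public class TrailingTokenCheck {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        try (JsonParser parser = factory.createParser("{}}")) {
            // Consume the root object: START_OBJECT, then END_OBJECT.
            while (parser.nextToken() != JsonToken.END_OBJECT) {
                // skip over the (empty) object contents
            }
            // One more token should be null for valid input; for "{}}" this call
            // throws a JsonParseException ("Unexpected close marker '}'").
            JsonToken trailing = parser.nextToken();
            System.out.println("trailing token: " + trailing);
        }
    }
}
```
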
{
"body": "From an irc conversation, someone did this:\n\n``` bash\ncurl -XPUT \"localhost:9200/my_index/_settings\" -d '{\n \"index\": {\n \"number_of_replicas\": 0\n }\n}'\ncurl -XPOST \"localhost:9200/my_index/_close\"\ncurl -XPUT \"localhost:9200/my_index/_settings\" -d '{\n \"index\": {\n \"number_of_replicas\": 2\n }\n}'\ncurl -XPOST \"localhost:9200/my_index/_open\"\n```\n\nand the index wouldn't open properly. Setting number_of_replicas back to 0 and then opening the index, and then setting number_of_replicas to 2 fixed the issue. It'd be nice if setting the number_of_replicas to something that prevents the index from opening wasn't possible.\n",
"comments": [
{
"body": "Hmm agreed... although I'm not sure how easy it would be to do. While the index is closed, we don't keep track of where or how many shards there are I believe.\n",
"created_at": "2015-02-05T13:48:13Z"
},
{
"body": "Yeah- I dunno. Maybe its better to just stop all replica count changes on\nclosed indexes or only allow it with some dangerous=OK flag or something.\n\nOr more documentation but I doubt that's enough.\nOn Feb 5, 2015 8:48 AM, \"Clinton Gormley\" notifications@github.com wrote:\n\n> Hmm agreed... although I'm not sure how easy it would be to do. While the\n> index is closed, we don't keep track of where or how many shards there are\n> I believe.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9566#issuecomment-73048215\n> .\n",
"created_at": "2015-02-05T14:58:32Z"
},
{
"body": "> Or more documentation but I doubt that's enough.\n\nNot sure how hard it is to fix but agreed that we should document this behaviour until it's fixed.\n",
"created_at": "2015-02-06T09:13:15Z"
},
{
"body": "I tend to say that we should forbid changing these settings while the index is closed as it is not clear what the effect of it would be. I also can't think of a use case where it will be helpful. I briefly looked at the code and it looks easy to add a black list of settings for closed indices.\n",
"created_at": "2015-02-06T09:41:51Z"
},
{
"body": "@bleskes +1 on not allowing to change these settings! Can you open a PR for this? I think the fact that you can do this is actually a bug?\n",
"created_at": "2015-02-13T10:20:52Z"
}
],
"number": 9566,
"title": "Setting number_of_replicas on a closed index can put it in an unopenable state"
} | {
"body": "Setting the number of replicas on a closed index can leave the index\nin an unopenable state since we might not be able to recover a quorum.\nThis commit simply prevents updating this setting on a closed index.\n\nCloses #9566\n",
"number": 11410,
"review_comments": [],
"title": "Prevent changing the number of replicas on a closed index"
} | {
"commits": [
{
"message": "Prevent changing the number of replicas on a closed index\n\nSetting the number of replicas on a closed index can leave the index\nin an unopenable state since we might not be able to recover a quorum.\nThis commit simply prevents updating this setting on a closed index.\n\nCloses #9566"
}
],
"files": [
{
"diff": "@@ -231,9 +231,15 @@ public ClusterState execute(ClusterState currentState) {\n }\n }\n \n+ if (closeIndices.size() > 0 && closeSettings.get(IndexMetaData.SETTING_NUMBER_OF_REPLICAS) != null) {\n+ throw new IllegalArgumentException(String.format(Locale.ROOT,\n+ \"Can't update [%s] on closed indices [%s] - can leave index in an unopenable state\", IndexMetaData.SETTING_NUMBER_OF_REPLICAS,\n+ closeIndices\n+ ));\n+ }\n if (!removedSettings.isEmpty() && !openIndices.isEmpty()) {\n throw new IllegalArgumentException(String.format(Locale.ROOT,\n- \"Can't update non dynamic settings[%s] for open indices[%s]\",\n+ \"Can't update non dynamic settings[%s] for open indices [%s]\",\n removedSettings,\n openIndices\n ));",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
{
"diff": "@@ -773,7 +773,7 @@ public void testUpdateSettings() throws Exception {\n try {\n verify(client().admin().indices().prepareUpdateSettings(\"barbaz\").setSettings(Settings.builder().put(\"e\", \"f\")), false);\n } catch (IllegalArgumentException e) {\n- assertThat(e.getMessage(), equalTo(\"Can't update non dynamic settings[[index.e]] for open indices[[barbaz]]\"));\n+ assertThat(e.getMessage(), equalTo(\"Can't update non dynamic settings[[index.e]] for open indices [[barbaz]]\"));\n }\n verify(client().admin().indices().prepareUpdateSettings(\"baz*\").setSettings(Settings.builder().put(\"a\", \"b\")), true);\n }",
"filename": "src/test/java/org/elasticsearch/indices/IndicesOptionsIntegrationTests.java",
"status": "modified"
},
{
"diff": "@@ -92,6 +92,17 @@ public void testOpenCloseUpdateSettings() throws Exception {\n \n client().admin().indices().prepareClose(\"test\").execute().actionGet();\n \n+ try {\n+ client().admin().indices().prepareUpdateSettings(\"test\")\n+ .setSettings(Settings.settingsBuilder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n+ )\n+ .execute().actionGet();\n+ fail(\"can't change number of replicas on a closed index\");\n+ } catch (IllegalArgumentException ex) {\n+ assertEquals(ex.getMessage(), \"Can't update [index.number_of_replicas] on closed indices [[test]] - can leave index in an unopenable state\");\n+ // expected\n+ }\n client().admin().indices().prepareUpdateSettings(\"test\")\n .setSettings(Settings.settingsBuilder()\n .put(\"index.refresh_interval\", \"1s\") // this one can change",
"filename": "src/test/java/org/elasticsearch/indices/settings/UpdateSettingsTests.java",
"status": "modified"
}
]
} |
{
"body": "Sibling Pipeline Aggregations don't work if their parent aggregation is a SingleBucketAggregation. This is because [1] explicitly casts the parent aggregation to a MultiBucketAggregation. We should check the type of the parent aggregation and process it as now if its a MultiBucketAggregation or process it as a SingleBucketAggregation if not.\n\n[1] https://github.com/elastic/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/aggregations/pipeline/SiblingPipelineAggregator.java#L49\n",
"comments": [],
"number": 11379,
"title": "Sibling Pipeline Aggregations only work if nested in multi-bucket aggregations"
} | {
"body": "Previously this would throw a ClassCastException as we explicitly cast the parent aggregation to a InternalMultiBucketAggregation\n\nCloses #11379\n",
"number": 11380,
"review_comments": [
{
"body": "Should it be called newAggregation like the method it delegates to?\n",
"created_at": "2015-05-27T16:03:19Z"
},
{
"body": "I did think about that but the method in InternalMultiBucketAggregation is called create(InternalAggregations) so I wanted to make it the same as that one since they are equivalent\n",
"created_at": "2015-05-27T16:05:24Z"
},
{
"body": "ok\n",
"created_at": "2015-05-27T16:22:02Z"
}
],
"title": "Sibling Pipeline Aggregations can now be nested in SingleBucketAggregations"
} | {
"commits": [
{
"message": "Aggregations: Sibling Pipeline Aggregations can now be nested in SingleBucketAggregations\n\nCloses #11379"
}
],
"files": [
{
"diff": "@@ -21,6 +21,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.aggregations.Aggregation;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -63,6 +64,18 @@ public InternalAggregations getAggregations() {\n return aggregations;\n }\n \n+ /**\n+ * Create a new copy of this {@link Aggregation} with the same settings as\n+ * this {@link Aggregation} and contains the provided sub-aggregations.\n+ * \n+ * @param subAggregations\n+ * the buckets to use in the new {@link Aggregation}\n+ * @return the new {@link Aggregation}\n+ */\n+ public InternalSingleBucketAggregation create(InternalAggregations subAggregations) {\n+ return newAggregation(getName(), getDocCount(), subAggregations);\n+ }\n+\n /**\n * Create a <b>new</b> empty sub aggregation. This must be a new instance on each call.\n */",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;\n+import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;\n import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket;\n \n import java.util.ArrayList;\n@@ -45,20 +46,34 @@ protected SiblingPipelineAggregator(String name, String[] bucketsPaths, Map<Stri\n @SuppressWarnings(\"unchecked\")\n @Override\n public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext reduceContext) {\n- @SuppressWarnings(\"rawtypes\")\n- InternalMultiBucketAggregation multiBucketsAgg = (InternalMultiBucketAggregation) aggregation;\n- List<? extends Bucket> buckets = multiBucketsAgg.getBuckets();\n- List<Bucket> newBuckets = new ArrayList<>();\n- for (int i = 0; i < buckets.size(); i++) {\n- InternalMultiBucketAggregation.InternalBucket bucket = (InternalMultiBucketAggregation.InternalBucket) buckets.get(i);\n- InternalAggregation aggToAdd = doReduce(bucket.getAggregations(), reduceContext);\n- List<InternalAggregation> aggs = new ArrayList<>(Lists.transform(bucket.getAggregations().asList(), AGGREGATION_TRANFORM_FUNCTION));\n+ if (aggregation instanceof InternalMultiBucketAggregation) {\n+ @SuppressWarnings(\"rawtypes\")\n+ InternalMultiBucketAggregation multiBucketsAgg = (InternalMultiBucketAggregation) aggregation;\n+ List<? extends Bucket> buckets = multiBucketsAgg.getBuckets();\n+ List<Bucket> newBuckets = new ArrayList<>();\n+ for (int i = 0; i < buckets.size(); i++) {\n+ InternalMultiBucketAggregation.InternalBucket bucket = (InternalMultiBucketAggregation.InternalBucket) buckets.get(i);\n+ InternalAggregation aggToAdd = doReduce(bucket.getAggregations(), reduceContext);\n+ List<InternalAggregation> aggs = new ArrayList<>(Lists.transform(bucket.getAggregations().asList(),\n+ AGGREGATION_TRANFORM_FUNCTION));\n+ aggs.add(aggToAdd);\n+ InternalMultiBucketAggregation.InternalBucket newBucket = multiBucketsAgg.createBucket(new InternalAggregations(aggs),\n+ bucket);\n+ newBuckets.add(newBucket);\n+ }\n+\n+ return multiBucketsAgg.create(newBuckets);\n+ } else if (aggregation instanceof InternalSingleBucketAggregation) {\n+ InternalSingleBucketAggregation singleBucketAgg = (InternalSingleBucketAggregation) aggregation;\n+ InternalAggregation aggToAdd = doReduce(singleBucketAgg.getAggregations(), reduceContext);\n+ List<InternalAggregation> aggs = new ArrayList<>(Lists.transform(singleBucketAgg.getAggregations().asList(),\n+ AGGREGATION_TRANFORM_FUNCTION));\n aggs.add(aggToAdd);\n- InternalMultiBucketAggregation.InternalBucket newBucket = multiBucketsAgg.createBucket(new InternalAggregations(aggs), bucket);\n- newBuckets.add(newBucket);\n+ return singleBucketAgg.create(new InternalAggregations(aggs));\n+ } else {\n+ throw new IllegalStateException(\"Aggregation [\" + aggregation.getName() + \"] must be a bucket aggregation [\"\n+ + aggregation.type().name() + \"]\");\n }\n-\n- return multiBucketsAgg.create(newBuckets);\n }\n \n public abstract InternalAggregation doReduce(Aggregations aggregations, ReduceContext context);",
"filename": "src/main/java/org/elasticsearch/search/aggregations/pipeline/SiblingPipelineAggregator.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Bucket;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n@@ -34,11 +35,13 @@\n import java.util.ArrayList;\n import java.util.List;\n \n-import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.maxBucket;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.sum;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n+import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.maxBucket;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n@@ -282,6 +285,55 @@ public void testMetric_asSubAgg() throws Exception {\n }\n }\n \n+ @Test\n+ public void testMetric_asSubAggOfSingleBucketAgg() throws Exception {\n+ SearchResponse response = client()\n+ .prepareSearch(\"idx\")\n+ .addAggregation(\n+ filter(\"filter\")\n+ .filter(termQuery(\"tag\", \"tag0\"))\n+ .subAggregation(\n+ histogram(\"histo\").field(SINGLE_VALUED_FIELD_NAME).interval(interval)\n+ .extendedBounds((long) minRandomValue, (long) maxRandomValue)\n+ .subAggregation(sum(\"sum\").field(SINGLE_VALUED_FIELD_NAME)))\n+ .subAggregation(maxBucket(\"max_bucket\").setBucketsPaths(\"histo>sum\"))).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Filter filter = response.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getName(), equalTo(\"filter\"));\n+ Histogram histo = filter.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? 
extends Bucket> buckets = histo.getBuckets();\n+\n+ List<String> maxKeys = new ArrayList<>();\n+ double maxValue = Double.NEGATIVE_INFINITY;\n+ for (int j = 0; j < numValueBuckets; ++j) {\n+ Histogram.Bucket bucket = buckets.get(j);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) j * interval));\n+ if (bucket.getDocCount() != 0) {\n+ Sum sum = bucket.getAggregations().get(\"sum\");\n+ assertThat(sum, notNullValue());\n+ if (sum.value() > maxValue) {\n+ maxValue = sum.value();\n+ maxKeys = new ArrayList<>();\n+ maxKeys.add(bucket.getKeyAsString());\n+ } else if (sum.value() == maxValue) {\n+ maxKeys.add(bucket.getKeyAsString());\n+ }\n+ }\n+ }\n+\n+ InternalBucketMetricValue maxBucketValue = filter.getAggregations().get(\"max_bucket\");\n+ assertThat(maxBucketValue, notNullValue());\n+ assertThat(maxBucketValue.getName(), equalTo(\"max_bucket\"));\n+ assertThat(maxBucketValue.value(), equalTo(maxValue));\n+ assertThat(maxBucketValue.keys(), equalTo(maxKeys.toArray(new String[maxKeys.size()])));\n+ }\n+\n @Test\n public void testMetric_asSubAggWithInsertZeros() throws Exception {\n SearchResponse response = client()",
"filename": "src/test/java/org/elasticsearch/search/aggregations/pipeline/MaxBucketTests.java",
"status": "modified"
}
]
} |
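The sibling pipeline change above lets an aggregation such as `max_bucket` sit under a single-bucket parent like `filter`. Below is a minimal request sketch mirroring the new `testMetric_asSubAggOfSingleBucketAgg` test; it assumes a connected `Client` named `client`, an index `idx` with `tag` and `value` fields, and the builder API shown in the test diff, so it is a fragment rather than a compilable class.

```java
// Static imports, as listed in the test diff above (shown as comments here):
// import static org.elasticsearch.index.query.QueryBuilders.termQuery;
// import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;
// import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;
// import static org.elasticsearch.search.aggregations.AggregationBuilders.sum;
// import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.maxBucket;

SearchResponse response = client.prepareSearch("idx")
        .addAggregation(
                filter("filter")
                        .filter(termQuery("tag", "tag0"))   // single-bucket parent
                        .subAggregation(
                                histogram("histo").field("value").interval(10)
                                        .subAggregation(sum("sum").field("value")))
                        // sibling pipeline aggregation attached to the single-bucket filter
                        .subAggregation(maxBucket("max_bucket").setBucketsPaths("histo>sum")))
        .execute().actionGet();

Filter filterAgg = response.getAggregations().get("filter");
// the pre-fix reduce code cast the parent to InternalMultiBucketAggregation, which cannot
// succeed for a single-bucket filter; with the change the sibling aggregation resolves normally
```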
{
"body": "The JNA library is listed as optional in the pom file but when running tests in another project without the library, we still try to load it and loading fails with a `ClassNotFoundException`:\n\n```\n > Throwable #1: java.lang.NoClassDefFoundError: com/sun/jna/win32/StdCallLibrary$StdCallCallback\n > at org.elasticsearch.common.jna.Natives.tryVirtualLock(Natives.java:83)\n > at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)\n > at org.elasticsearch.bootstrap.BootstrapForTesting.<clinit>(BootstrapForTesting.java:52)\n > at org.elasticsearch.test.ElasticsearchTestCase.<clinit>(ElasticsearchTestCase.java:97)\n > at java.lang.Class.forName0(Native Method)\n > at java.lang.Class.forName(Class.java:274)\n > Caused by: java.lang.ClassNotFoundException: com.sun.jna.win32.StdCallLibrary$StdCallCallback\n > at java.net.URLClassLoader$1.run(URLClassLoader.java:366)\n > at java.net.URLClassLoader$1.run(URLClassLoader.java:355)\n > at java.security.AccessController.doPrivileged(Native Method)\n > at java.net.URLClassLoader.findClass(URLClassLoader.java:354)\n > at java.lang.ClassLoader.loadClass(ClassLoader.java:425)\n > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)\n > at java.lang.ClassLoader.loadClass(ClassLoader.java:358)\n > ... 7 more\n```\n\nwhich then cascades into:\n\n```\n > Throwable #1: java.lang.NoClassDefFoundError: org.elasticsearch.test.ElasticsearchTestCase\n > at java.lang.Class.getDeclaredMethods0(Native Method)\n > at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)\n > at java.lang.Class.getDeclaredMethods(Class.java:1860)\n > at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:215)\n > at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:212)\n > at com.carrotsearch.randomizedtesting.ClassModel$ModelBuilder.build(ClassModel.java:85)\n > at com.carrotsearch.randomizedtesting.ClassModel.methodsModel(ClassModel.java:224)\n > at com.carrotsearch.randomizedtesting.ClassModel.<init>(ClassModel.java:207)\n > at java.lang.reflect.Constructor.newInstance(Constructor.java:526)\n```\n\nThis seems to be provoked only on Windows by the `Natives#tryVirtualLock` method currently.\n",
"comments": [],
"number": 11360,
"title": "JNA is not optional when testing on windows"
} | {
"body": "Today, JNA is a optional dependency in the build but when running tests or running\nwith mlockall set to true, JNA must be on the classpath for Windows systems since\nwe always try to load JNA classes when using mlockall.\n\nThe old Natives class was renamed to JNANatives, and a new Natives class is\nintroduced without any direct imports on JNA classes. The Natives class checks to\nsee if JNA classes are available at startup. If the classes are available the Natives\nclass will delegate to the JNANatives class. If the classes are not available the\nNatives class will not use the JNANatives class, which results in no additional attempts\nto load JNA classes.\n\nAdditionally, all of the JNA classes were moved to the bootstrap package and made\npackage private as this is the only place they should be called from.\n\nCloses #11360\n",
"number": 11378,
"review_comments": [],
"title": "Make JNA optional for tests and move classes to bootstrap package"
} | {
"commits": [
{
"message": "make JNA optional for tests and move classes to bootstrap package\n\nToday, JNA is a optional dependency in the build but when running tests or running\nwith mlockall set to true, JNA must be on the classpath for Windows systems since\nwe always try to load JNA classes when using mlockall.\n\nThe old Natives class was renamed to JNANatives, and a new Natives class is\nintroduced without any direct imports on JNA classes. The Natives class checks to\nsee if JNA classes are available at startup. If the classes are available the Natives\nclass will delegate to the JNANatives class. If the classes are not available the\nNatives class will not use the JNANatives class, which results in no additional attempts\nto load JNA classes.\n\nAdditionally, all of the JNA classes were moved to the bootstrap package and made\npackage private as this is the only place they should be called from.\n\nCloses #11360"
}
],
"files": [
{
"diff": "@@ -28,8 +28,6 @@\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.CreationException;\n import org.elasticsearch.common.inject.spi.Message;\n-import org.elasticsearch.common.jna.Kernel32Library;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n@@ -48,7 +46,6 @@\n import java.util.concurrent.CountDownLatch;\n \n import static com.google.common.collect.Sets.newHashSet;\n-import static org.elasticsearch.common.jna.Kernel32Library.ConsoleCtrlHandler;\n import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS;\n \n /**\n@@ -122,7 +119,7 @@ public boolean handle(int code) {\n \n // force remainder of JNA to be loaded (if available).\n try {\n- Kernel32Library.getInstance();\n+ JNAKernel32Library.getInstance();\n } catch (Throwable ignored) {\n // we've already logged this.\n }\n@@ -143,6 +140,10 @@ public boolean handle(int code) {\n StringHelper.randomId();\n }\n \n+ public static boolean isMemoryLocked() {\n+ return Natives.isMemoryLocked();\n+ }\n+\n private void setup(boolean addShutdownHook, Settings settings, Environment environment) throws Exception {\n initializeNatives(settings.getAsBoolean(\"bootstrap.mlockall\", false), \n settings.getAsBoolean(\"bootstrap.ctrlhandler\", true),",
"filename": "src/main/java/org/elasticsearch/bootstrap/Bootstrap.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,84 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.bootstrap;\n+\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+\n+/**\n+ * The Natives class is a wrapper class that checks if the classes necessary for calling native methods are available on\n+ * startup. If they are not available, this class will avoid calling code that loads these classes.\n+ */\n+class Natives {\n+ private static final ESLogger logger = Loggers.getLogger(Natives.class);\n+\n+ // marker to determine if the JNA class files are available to the JVM\n+ private static boolean jnaAvailable = false;\n+\n+ static {\n+ try {\n+ // load one of the main JNA classes to see if the classes are available. this does not ensure that native\n+ // libraries are available\n+ Class.forName(\"com.sun.jna.Native\");\n+ jnaAvailable = true;\n+ } catch(ClassNotFoundException e) {\n+ logger.warn(\"JNA not found. native methods will be disabled.\");\n+ }\n+ }\n+\n+ static void tryMlockall() {\n+ if (!jnaAvailable) {\n+ logger.warn(\"cannot mlockall because JNA is not available\");\n+ return;\n+ }\n+ JNANatives.tryMlockall();\n+ }\n+\n+ static boolean definitelyRunningAsRoot() {\n+ if (!jnaAvailable) {\n+ logger.warn(\"cannot check if running as root because JNA is not available\");\n+ return false;\n+ }\n+ return JNANatives.definitelyRunningAsRoot();\n+ }\n+\n+ static void tryVirtualLock() {\n+ if (!jnaAvailable) {\n+ logger.warn(\"cannot mlockall because JNA is not available\");\n+ return;\n+ }\n+ JNANatives.tryVirtualLock();\n+ }\n+\n+ static void addConsoleCtrlHandler(ConsoleCtrlHandler handler) {\n+ if (!jnaAvailable) {\n+ logger.warn(\"cannot register console handler because JNA is not available\");\n+ return;\n+ }\n+ JNANatives.addConsoleCtrlHandler(handler);\n+ }\n+\n+ static boolean isMemoryLocked() {\n+ if (!jnaAvailable) {\n+ return false;\n+ }\n+ return JNANatives.LOCAL_MLOCKALL;\n+ }\n+}",
"filename": "src/main/java/org/elasticsearch/bootstrap/Natives.java",
"status": "added"
},
{
"diff": "@@ -19,10 +19,10 @@\n \n package org.elasticsearch.monitor.process;\n \n+import org.elasticsearch.bootstrap.Bootstrap;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentBuilderString;\n@@ -50,7 +50,7 @@ public class ProcessInfo implements Streamable, Serializable, ToXContent {\n public ProcessInfo(long id, long maxFileDescriptors) {\n this.id = id;\n this.maxFileDescriptors = maxFileDescriptors;\n- this.mlockall = Natives.LOCAL_MLOCKALL;\n+ this.mlockall = Bootstrap.isMemoryLocked();\n }\n \n public long refreshInterval() {",
"filename": "src/main/java/org/elasticsearch/monitor/process/ProcessInfo.java",
"status": "modified"
},
{
"diff": "@@ -21,8 +21,8 @@\n \n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.bootstrap.Bootstrap;\n import org.elasticsearch.client.Client;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -85,8 +85,9 @@ public class ManyMappingsBenchmark {\n \n public static void main(String[] args) throws Exception {\n System.setProperty(\"es.logger.prefix\", \"\");\n- Natives.tryMlockall();\n+ Bootstrap.initializeNatives(true, false, false);\n Settings settings = settingsBuilder()\n+ .put(\"\")\n .put(SETTING_NUMBER_OF_SHARDS, 5)\n .put(SETTING_NUMBER_OF_REPLICAS, 0)\n .build();",
"filename": "src/test/java/org/elasticsearch/benchmark/mapping/ManyMappingsBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -20,10 +20,10 @@\n \n import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;\n import org.elasticsearch.action.admin.indices.recovery.ShardRecoveryResponse;\n+import org.elasticsearch.bootstrap.Bootstrap;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.DiskThresholdDecider;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.settings.Settings;\n@@ -57,7 +57,7 @@ public class ReplicaRecoveryBenchmark {\n \n public static void main(String[] args) throws Exception {\n System.setProperty(\"es.logger.prefix\", \"\");\n- Natives.tryMlockall();\n+ Bootstrap.initializeNatives(true, false, false);\n \n Settings settings = settingsBuilder()\n .put(\"gateway.type\", \"local\")",
"filename": "src/test/java/org/elasticsearch/benchmark/recovery/ReplicaRecoveryBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -26,8 +26,8 @@\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.benchmark.search.aggregations.TermsAggregationSearchBenchmark.StatsResult;\n+import org.elasticsearch.bootstrap.Bootstrap;\n import org.elasticsearch.client.Client;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.SizeValue;\n@@ -66,7 +66,7 @@ public class GlobalOrdinalsBenchmark {\n \n public static void main(String[] args) throws Exception {\n System.setProperty(\"es.logger.prefix\", \"\");\n- Natives.tryMlockall();\n+ Bootstrap.initializeNatives(true, false, false);\n Random random = new Random();\n \n Settings settings = settingsBuilder()",
"filename": "src/test/java/org/elasticsearch/benchmark/search/aggregations/GlobalOrdinalsBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -27,10 +27,10 @@\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.bootstrap.Bootstrap;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.common.StopWatch;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.SizeValue;\n@@ -71,7 +71,7 @@ public class SubAggregationSearchCollectModeBenchmark {\n static Node[] nodes;\n \n public static void main(String[] args) throws Exception {\n- Natives.tryMlockall();\n+ Bootstrap.initializeNatives(true, false, false);\n Random random = new Random();\n \n Settings settings = settingsBuilder()",
"filename": "src/test/java/org/elasticsearch/benchmark/search/aggregations/SubAggregationSearchCollectModeBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -26,9 +26,9 @@\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.bootstrap.Bootstrap;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.Requests;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.SizeValue;\n@@ -71,7 +71,7 @@ public class TermsAggregationSearchAndIndexingBenchmark {\n static Node[] nodes;\n \n public static void main(String[] args) throws Exception {\n- Natives.tryMlockall();\n+ Bootstrap.initializeNatives(true, false, false);\n Settings settings = settingsBuilder()\n .put(\"refresh_interval\", \"-1\")\n .put(SETTING_NUMBER_OF_SHARDS, 1)",
"filename": "src/test/java/org/elasticsearch/benchmark/search/aggregations/TermsAggregationSearchAndIndexingBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -28,10 +28,10 @@\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.bootstrap.Bootstrap;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.common.StopWatch;\n-import org.elasticsearch.common.jna.Natives;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.SizeValue;\n@@ -99,7 +99,7 @@ SearchRequestBuilder addTermsStatsAgg(SearchRequestBuilder builder, String name,\n }\n \n public static void main(String[] args) throws Exception {\n- Natives.tryMlockall();\n+ Bootstrap.initializeNatives(true, false, false);\n Random random = new Random();\n \n Settings settings = settingsBuilder()",
"filename": "src/test/java/org/elasticsearch/benchmark/search/aggregations/TermsAggregationSearchBenchmark.java",
"status": "modified"
}
]
} |
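The new `Natives` class above boils down to one classpath probe plus a guard in every native entry point. Here is a standalone sketch of that pattern under illustrative names; only the `com.sun.jna.Native` class name is taken from the diff, and the delegate class is hypothetical.

```java
// Minimal, self-contained sketch of the guard used by the new Natives class above:
// probe for the optional dependency once, and only delegate to code that imports
// JNA types when the probe succeeds. Class names below are illustrative.
public final class OptionalNativeSupport {

    private static final boolean JNA_AVAILABLE = detectJna();

    private static boolean detectJna() {
        try {
            // Loading a core JNA class proves the jar is on the classpath; it does not
            // guarantee that the native libraries themselves can be loaded.
            Class.forName("com.sun.jna.Native");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void tryMlockall() {
        if (!JNA_AVAILABLE) {
            System.err.println("cannot mlockall because JNA is not available");
            return; // never touches a class that imports JNA types
        }
        // JnaBackedNatives.tryMlockall();  // hypothetical delegate, omitted from this sketch
    }

    public static void main(String[] args) {
        System.out.println("JNA available: " + JNA_AVAILABLE);
        tryMlockall();
    }
}
```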
{
"body": "My query:\n\n``` javascript\nGET weatherdata/_search\n{\n \"size\": 0,\n \"query\": {\n \"bool\": {\n \"must\": [\n {\n \"term\": {\n \"element\": {\n \"value\": \"tmax\"\n }\n }\n }\n ]\n }\n },\n \"aggs\": {\n \"maxTMax\": {\n \"max\": {\n \"field\": \"value\",\n \"script\": \"_value / 10\",\n \"format\": \"###.##\"\n }\n },\n \"maxAvgTMaxMonth\": {\n \"max_bucket\": {\n \"buckets_path\": \"tMaxHisto>avgTMax\",\n \"format\": \"###.##\"\n }\n },\n \"minAvgTMaxMonth\": {\n \"min_bucket\": {\n \"buckets_path\": \"tMaxHisto>avgTMax\",\n \"format\": \"###.##\"\n }\n },\n \"tMaxHisto\": {\n \"date_histogram\": {\n \"field\": \"date\",\n \"interval\": \"quarter\"\n },\n \"aggs\": {\n \"avgTMax\": {\n \"avg\": {\n \"field\": \"value\",\n \"script\": \"_value / 10\",\n \"format\": \"###.##\"\n }\n },\n \"movavg\": {\n \"moving_avg\": {\n \"buckets_path\": \"avgTMax\",\n \"model\": \"holt\",\n \"window\": 12,\n \"gap_policy\": \"skip\",\n \"predict\": 12,\n \"settings\": {\n \"alpha\": 0.8\n }\n }\n }\n }\n }\n }\n}\n```\n\nResponse snippet (the first three bucket are existing buckets, i.e. not predictions. The following 12 buckets are predictions but the bucket keys seem to start at 1986 and decrease by 20 years each time):\n\n``` json\n {\n \"key_as_string\": \"2005-07-01T00:00:00.000Z\",\n \"key\": 1120176000000,\n \"doc_count\": 11,\n \"avgTMax\": {\n \"value\": 28.8,\n \"value_as_string\": \"28.8\"\n },\n \"movavg\": {\n \"value\": 29.15195798473475\n }\n },\n {\n \"key_as_string\": \"2005-10-01T00:00:00.000Z\",\n \"key\": 1128124800000,\n \"doc_count\": 11,\n \"avgTMax\": {\n \"value\": 29.172727272727272,\n \"value_as_string\": \"29.17\"\n },\n \"movavg\": {\n \"value\": 29.024912216300212\n }\n },\n {\n \"key_as_string\": \"2006-01-01T00:00:00.000Z\",\n \"key\": 1136073600000,\n \"doc_count\": 4,\n \"avgTMax\": {\n \"value\": 29.525,\n \"value_as_string\": \"29.52\"\n },\n \"movavg\": {\n \"value\": 29.34023027099398\n }\n },\n {\n \"key_as_string\": \"1986-01-01T00:00:00.000Z\",\n \"key\": 504921600000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 29.34023027099398,\n \"value_as_string\": \"1970-01-01T00:00:00.029Z\"\n }\n },\n {\n \"key_as_string\": \"1966-01-01T00:00:00.000Z\",\n \"key\": -126230400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 29.28650429104976,\n \"value_as_string\": \"1970-01-01T00:00:00.029Z\"\n }\n },\n {\n \"key_as_string\": \"1946-01-01T00:00:00.000Z\",\n \"key\": -757382400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 29.232778311105537,\n \"value_as_string\": \"1970-01-01T00:00:00.029Z\"\n }\n },\n {\n \"key_as_string\": \"1926-01-01T00:00:00.000Z\",\n \"key\": -1388534400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 29.179052331161316,\n \"value_as_string\": \"1970-01-01T00:00:00.029Z\"\n }\n },\n {\n \"key_as_string\": \"1906-01-01T00:00:00.000Z\",\n \"key\": -2019686400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 29.125326351217094,\n \"value_as_string\": \"1970-01-01T00:00:00.029Z\"\n }\n },\n {\n \"key_as_string\": \"1885-12-31T00:00:00.000Z\",\n \"key\": -2650838400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 29.071600371272872,\n \"value_as_string\": \"1970-01-01T00:00:00.029Z\"\n }\n },\n {\n \"key_as_string\": \"1865-12-31T00:00:00.000Z\",\n \"key\": -3281990400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 29.01787439132865,\n \"value_as_string\": \"1970-01-01T00:00:00.029Z\"\n }\n },\n {\n \"key_as_string\": \"1845-12-31T00:00:00.000Z\",\n \"key\": -3913142400000,\n \"doc_count\": 0,\n 
\"movavg\": {\n \"value\": 28.96414841138443,\n \"value_as_string\": \"1970-01-01T00:00:00.028Z\"\n }\n },\n {\n \"key_as_string\": \"1825-12-31T00:00:00.000Z\",\n \"key\": -4544294400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 28.910422431440207,\n \"value_as_string\": \"1970-01-01T00:00:00.028Z\"\n }\n },\n {\n \"key_as_string\": \"1805-12-31T00:00:00.000Z\",\n \"key\": -5175446400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 28.856696451495985,\n \"value_as_string\": \"1970-01-01T00:00:00.028Z\"\n }\n },\n {\n \"key_as_string\": \"1785-12-30T00:00:00.000Z\",\n \"key\": -5806598400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 28.802970471551763,\n \"value_as_string\": \"1970-01-01T00:00:00.028Z\"\n }\n },\n {\n \"key_as_string\": \"1765-12-30T00:00:00.000Z\",\n \"key\": -6437750400000,\n \"doc_count\": 0,\n \"movavg\": {\n \"value\": 28.74924449160754,\n \"value_as_string\": \"1970-01-01T00:00:00.028Z\"\n }\n }\n```\n",
"comments": [
{
"body": "The problem is with [1], as this calculation of the interval does not work if `key < 0` since the first bucket will calculate the interval as a negative value which will always be less than the actual interval. Also, I'm not sure this method works for non-fixed intervals like months, where the interval will change depending on what the key is\n\n[1] https://github.com/elastic/elasticsearch/blob/35deb7efea9552528780082143db469afbcd3812/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregator.java#L137-145\n",
"created_at": "2015-05-27T12:32:18Z"
}
],
"number": 11369,
"title": "Incorrect keys for predicted buckets in Moving Average Aggregation"
} | {
"body": "The Moving average predict code generated incorrect keys if the key for the first bucket of the histogram was < 0. This fix makes the moving average use the rounding class from the histogram to generate the keys for the new buckets.\n\nCloses #11369\n",
"number": 11375,
"review_comments": [],
"title": "Fixed Moving Average prediction to calculate the correct keys"
} | {
"commits": [
{
"message": "Aggregations: Fixed Moving Average prediction to calculate the correct keys\n\nThe Moving average predict code generated incorrect keys if the key for the first bucket of the histogram was < 0. This fix makes the moving average use the rounding class from the histogram to generate the keys for the new buckets.\n\nCloses #11369"
}
],
"files": [
{
"diff": "@@ -111,7 +111,6 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n EvictingQueue<Double> values = EvictingQueue.create(this.window);\n \n long lastKey = 0;\n- long interval = Long.MAX_VALUE;\n Object currentKey;\n \n for (InternalHistogram.Bucket bucket : buckets) {\n@@ -135,10 +134,8 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n \n if (predict > 0) {\n if (currentKey instanceof Number) {\n- interval = Math.min(interval, ((Number) bucket.getKey()).longValue() - lastKey);\n lastKey = ((Number) bucket.getKey()).longValue();\n } else if (currentKey instanceof DateTime) {\n- interval = Math.min(interval, ((DateTime) bucket.getKey()).getMillis() - lastKey);\n lastKey = ((DateTime) bucket.getKey()).getMillis();\n } else {\n throw new AggregationExecutionException(\"Expected key of type Number or DateTime but got [\" + currentKey + \"]\");\n@@ -147,7 +144,6 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n \n }\n \n-\n if (buckets.size() > 0 && predict > 0) {\n \n boolean keyed;\n@@ -159,9 +155,11 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n for (int i = 0; i < predictions.length; i++) {\n List<InternalAggregation> aggs = new ArrayList<>();\n aggs.add(new InternalSimpleValue(name(), predictions[i], formatter, new ArrayList<PipelineAggregator>(), metaData()));\n- InternalHistogram.Bucket newBucket = factory.createBucket(lastKey + (interval * (i + 1)), 0, new InternalAggregations(\n+ long newKey = histo.getRounding().nextRoundingValue(lastKey);\n+ InternalHistogram.Bucket newBucket = factory.createBucket(newKey, 0, new InternalAggregations(\n aggs), keyed, formatter);\n newBuckets.add(newBucket);\n+ lastKey = newKey;\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregator.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,6 @@\n \n import com.google.common.collect.EvictingQueue;\n \n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -32,6 +31,7 @@\n import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.Bucket;\n import org.elasticsearch.search.aggregations.metrics.ValuesSourceMetricsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.avg.Avg;\n import org.elasticsearch.search.aggregations.pipeline.BucketHelpers;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregationHelperTests;\n import org.elasticsearch.search.aggregations.pipeline.SimpleValue;\n@@ -51,16 +51,19 @@\n import java.util.List;\n import java.util.Map;\n \n-import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.movingAvg;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.avg;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.max;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.min;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.range;\n+import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.movingAvg;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.hamcrest.Matchers.*;\n+import static org.hamcrest.Matchers.closeTo;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.lessThanOrEqualTo;\n import static org.hamcrest.core.IsNull.notNullValue;\n import static org.hamcrest.core.IsNull.nullValue;\n \n@@ -154,6 +157,11 @@ public void setupSuiteScopeCluster() throws Exception {\n .field(INTERVAL_FIELD, 49)\n .field(GAP_FIELD, 1).endObject()));\n \n+ for (int i = -10; i < 10; i++) {\n+ builders.add(client().prepareIndex(\"neg_idx\", \"type\").setSource(\n+ jsonBuilder().startObject().field(INTERVAL_FIELD, i).field(VALUE_FIELD, 10).endObject()));\n+ }\n+\n indexRandom(true, builders);\n ensureSearchable();\n }\n@@ -514,6 +522,56 @@ public void holtSingleValuedField() {\n }\n }\n \n+ @Test\n+ public void testPredictNegativeKeysAtStart() {\n+\n+ SearchResponse response = client()\n+ .prepareSearch(\"neg_idx\")\n+ .setTypes(\"type\")\n+ .addAggregation(\n+ histogram(\"histo\")\n+ .field(INTERVAL_FIELD)\n+ .interval(1)\n+ .subAggregation(avg(\"avg\").field(VALUE_FIELD))\n+ .subAggregation(\n+ movingAvg(\"movavg_values\").window(windowSize).modelBuilder(new SimpleModel.SimpleModelBuilder())\n+ .gapPolicy(gapPolicy).predict(5).setBucketsPaths(\"avg\"))).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ InternalHistogram<Bucket> histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? 
extends Bucket> buckets = histo.getBuckets();\n+ assertThat(\"Size of buckets array is not correct.\", buckets.size(), equalTo(25));\n+\n+ for (int i = 0; i < 20; i++) {\n+ Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat((long) bucket.getKey(), equalTo((long) i - 10));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ Avg avgAgg = bucket.getAggregations().get(\"avg\");\n+ assertThat(avgAgg, notNullValue());\n+ assertThat(avgAgg.value(), equalTo(10d));\n+ SimpleValue movAvgAgg = bucket.getAggregations().get(\"movavg_values\");\n+ assertThat(movAvgAgg, notNullValue());\n+ assertThat(movAvgAgg.value(), equalTo(10d));\n+ }\n+\n+ for (int i = 20; i < 25; i++) {\n+ System.out.println(i);\n+ Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat((long) bucket.getKey(), equalTo((long) i - 10));\n+ assertThat(bucket.getDocCount(), equalTo(0l));\n+ Avg avgAgg = bucket.getAggregations().get(\"avg\");\n+ assertThat(avgAgg, nullValue());\n+ SimpleValue movAvgAgg = bucket.getAggregations().get(\"movavg_values\");\n+ assertThat(movAvgAgg, notNullValue());\n+ assertThat(movAvgAgg.value(), equalTo(10d));\n+ }\n+ }\n+\n @Test\n public void testSizeZeroWindow() {\n try {",
"filename": "src/test/java/org/elasticsearch/search/aggregations/pipeline/moving/avg/MovAvgTests.java",
"status": "modified"
}
]
} |
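As a concrete illustration of the bug fixed above, the removed interval heuristic starts from `lastKey = 0`, so a negative first bucket key poisons the derived interval. The demo below is self-contained and uses the -10..9 integer keys from the new test; the actual fix calls the histogram's `Rounding#nextRoundingValue`, which is only referenced in a comment here since it needs a real `Rounding` instance.

```java
// Self-contained demo of the old vs. new prediction keys, using the -10..9 keys
// and interval 1 from the new testPredictNegativeKeysAtStart test above.
public final class PredictedKeyDemo {
    public static void main(String[] args) {
        long[] keys = new long[20];
        for (int i = 0; i < 20; i++) {
            keys[i] = i - 10; // -10, -9, ..., 9
        }

        // Removed heuristic: interval = min over buckets of (key - lastKey), lastKey starting at 0.
        long lastKey = 0;
        long interval = Long.MAX_VALUE;
        for (long key : keys) {
            interval = Math.min(interval, key - lastKey); // first step: -10 - 0 = -10
            lastKey = key;
        }
        System.out.println("derived interval        = " + interval);             // -10 instead of 1
        System.out.println("old first predicted key = " + (lastKey + interval)); // 9 + (-10) = -1

        // Fixed approach, conceptually: newKey = rounding.nextRoundingValue(lastKey),
        // which for a fixed interval of 1 yields 10, 11, 12, ... after the last real bucket.
        System.out.println("expected first predicted key = " + (lastKey + 1));   // 10
    }
}
```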
{
"body": "Elasticsearch redirects site plugin requests when trailing '/' is missing. It's using Header + meta tag to do job. Meta tag is missing escaped \", Jetty plugin loses Location header so redirection fails.\nOne solution is to fix Jetty plugin, other is to fix returned response.\n",
"comments": [
{
"body": "Closed by #11374\n",
"created_at": "2015-11-20T09:26:52Z"
}
],
"number": 11370,
"title": "Redirect response, during missing trailing /, missing escaped \""
} | {
"body": "Fixes issue #11370 by fixing response HTML\n",
"number": 11374,
"review_comments": [
{
"body": "You have an extra `,` before the URL\n",
"created_at": "2015-05-27T16:05:45Z"
}
],
"title": "Fix HTML response during redirection"
} | {
"commits": [
{
"message": "Fix HTML response during redirection"
},
{
"message": "Fix returned HTML when plugin site url is missing last /"
}
],
"files": [
{
"diff": "@@ -155,7 +155,7 @@ void handlePluginSite(HttpRequest request, HttpChannel channel) throws IOExcepti\n sitePath = null;\n // If a trailing / is missing, we redirect to the right page #2654\n String redirectUrl = request.rawPath() + \"/\";\n- BytesRestResponse restResponse = new BytesRestResponse(RestStatus.MOVED_PERMANENTLY, \"text/html\", \"<head><meta http-equiv=\\\"refresh\\\" content=\\\"0; URL=\" + redirectUrl + \"></head>\");\n+ BytesRestResponse restResponse = new BytesRestResponse(RestStatus.MOVED_PERMANENTLY, \"text/html\", \"<head><meta http-equiv=\\\"refresh\\\" content=\\\"0; URL=\" + redirectUrl + \"\\\"></head>\");\n restResponse.addHeader(\"Location\", redirectUrl);\n channel.sendResponse(restResponse);\n return;",
"filename": "src/main/java/org/elasticsearch/http/HttpServer.java",
"status": "modified"
}
]
} |
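For reference, the effect of the one-character fix above can be seen by printing both strings; this small standalone demo uses a made-up `/_plugin/example/` URL.

```java
// Standalone before/after of the redirect body built in HttpServer#handlePluginSite.
// The /_plugin/example/ URL is made up for the demo.
public final class RedirectBodyDemo {
    public static void main(String[] args) {
        String redirectUrl = "/_plugin/example/";

        // Before the fix: the content attribute of the meta tag is never closed.
        String broken = "<head><meta http-equiv=\"refresh\" content=\"0; URL=" + redirectUrl + "></head>";
        // After the fix: the escaped quote closes the attribute, producing well-formed HTML.
        String fixed = "<head><meta http-equiv=\"refresh\" content=\"0; URL=" + redirectUrl + "\"></head>";

        System.out.println(broken);
        System.out.println(fixed);
    }
}
```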
{
"body": "Specifying `fielddata_fields` on the query string seems to be ignored, although the [rest spec](https://github.com/elastic/elasticsearch/blob/master/rest-api-spec/api/search.json#L45) lists it as a valid query parameter.\n\n```\nPUT /twitter/tweet/1\n{\n \"user\": \"gregmarzouka\"\n}\n```\n\n```\nGET /twitter/_search?fielddata_fields=user\n```\n\n``` json\n\"hits\": {\n \"total\": 1,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"twitter\",\n \"_type\": \"tweet\",\n \"_id\": \"1\",\n \"_score\": 1,\n \"_source\": {\n \"user\": \"gregmarzouka\"\n }\n }\n ]\n}\n```\n\nSpecifying it as part of the request body works as expected:\n\n```\nGET /twitter/_search\n{\n \"fielddata_fields\": [\"user\"]\n}\n```\n\n``` json\n\"hits\": {\n \"total\": 1,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"twitter\",\n \"_type\": \"tweet\",\n \"_id\": \"1\",\n \"_score\": 1,\n \"_source\": {\n \"user\": \"gregmarzouka\"\n },\n \"fields\": {\n \"user\": [\n \"gregmarzouka\"\n ]\n }\n }\n ]\n}\n```\n",
"comments": [],
"number": 11025,
"title": "fielddata_fields query string parameter ignored"
} | {
"body": "The RestSearchAction did not parse the fielddata_fields parameter. Added test case and missing parser code.\n\nCloses #11025\n",
"number": 11368,
"review_comments": [],
"title": "`fielddata_fields` query string parameter was ignored."
} | {
"commits": [
{
"message": "Search fix: fielddata_fields query string parameter was ignored.\nThe RestSearchAction did not parse the fielddata_fields parameter. Added test case and missing parser code.\n\nCloses #11025"
}
],
"files": [
{
"diff": "@@ -89,4 +89,10 @@\n query: { match_all: {} }\n - match: { hits.hits.0.fields: { include.field2 : [v2] }}\n - is_true: hits.hits.0._source\n+ \n+ \n+ - do:\n+ search:\n+ fielddata_fields: [\"count\"]\n+ - match: { hits.hits.0.fields.count: [1] } \n ",
"filename": "rest-api-spec/test/search/10_source_filtering.yaml",
"status": "modified"
},
{
"diff": "@@ -195,6 +195,20 @@ public static SearchSourceBuilder parseSearchSource(RestRequest request) {\n }\n }\n }\n+ String sFieldDataFields = request.param(\"fielddata_fields\");\n+ if (sFieldDataFields != null) {\n+ if (searchSourceBuilder == null) {\n+ searchSourceBuilder = new SearchSourceBuilder();\n+ }\n+ if (Strings.hasText(sFieldDataFields)) {\n+ String[] sFields = Strings.splitStringByCommaToArray(sFieldDataFields);\n+ if (sFields != null) {\n+ for (String field : sFields) {\n+ searchSourceBuilder.fieldDataField(field);\n+ }\n+ }\n+ }\n+ }\n FetchSourceContext fetchSourceContext = FetchSourceContext.parseFromRestRequest(request);\n if (fetchSourceContext != null) {\n if (searchSourceBuilder == null) {",
"filename": "src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java",
"status": "modified"
}
]
} |
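The added block above is essentially an optional comma-separated parameter split. The standalone approximation below uses plain `String.split` in place of the Elasticsearch `Strings` helper; the class and method names are illustrative, not the real `RestSearchAction` API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public final class FieldDataFieldsParamDemo {

    // Mirrors the new block in RestSearchAction: ignore a missing or blank parameter,
    // otherwise split on commas and register each field.
    static List<String> parseFieldDataFields(Map<String, String> params) {
        List<String> fields = new ArrayList<>();
        String raw = params.get("fielddata_fields");
        if (raw != null && !raw.trim().isEmpty()) {
            for (String field : raw.split(",")) {
                if (!field.trim().isEmpty()) {
                    fields.add(field.trim());
                }
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        // e.g. GET /twitter/_search?fielddata_fields=user,count
        System.out.println(parseFieldDataFields(Collections.singletonMap("fielddata_fields", "user,count")));
        // prints: [user, count]
    }
}
```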
{
"body": "There is a small window where a type or a field can not be published to the the replica due to a synced mapping update but we are already sending a document of that type to the replica during translog recovery. This is basically the same problem as we have with normal indexing where the first document introducing the type blocks until the mapping update is published but subsequent documents don't introduce the new mapping since the node receiving it already got the update. The window is small but we hit it once in tests today:\n\nhttp://build-us-00.elastic.co/job/es_core_master_centos/4808/consoleFull\n\nresulting in this:\n\n``` Java\n\n1> RemoteTransportException[[node_t2][local[658]][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[2] phase2 failed]; nested: RemoteTransportException[[node_t0][local[656]][internal:index/shard/recovery/translog_ops]]; nested: NullPointerException;\n 1> Caused by: [test][0] Phase[2] phase2 failed\n 1> at org.elasticsearch.indices.recovery.RecoverySourceHandler.recoverToTarget(RecoverySourceHandler.java:145)\n 1> at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:127)\n 1> at org.elasticsearch.indices.recovery.RecoverySource.access$200(RecoverySource.java:53)\n 1> at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:136)\n 1> at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:133)\n 1> at org.elasticsearch.transport.local.LocalTransport$2.doRun(LocalTransport.java:279)\n 1> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> Caused by: RemoteTransportException[[node_t0][local[656]][internal:index/shard/recovery/translog_ops]]; nested: NullPointerException;\n 1> Caused by: java.lang.NullPointerException\n 1> at org.elasticsearch.index.mapper.MapperAnalyzer.getWrappedAnalyzer(MapperAnalyzer.java:48)\n 1> at org.apache.lucene.analysis.DelegatingAnalyzerWrapper$DelegatingReuseStrategy.getReusableComponents(DelegatingAnalyzerWrapper.java:74)\n 1> at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:139)\n 1> at org.elasticsearch.common.lucene.all.AllField.tokenStream(AllField.java:77)\n 1> at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:606)\n 1> at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:344)\n 1> at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:300)\n 1> at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:232)\n 1> at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:458)\n 1> at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1363)\n 1> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1142)\n 1> at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:522)\n 1> at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:448)\n 1> at org.elasticsearch.index.shard.TranslogRecoveryPerformer.performRecoveryOperation(TranslogRecoveryPerformer.java:112)\n 1> at 
org.elasticsearch.index.shard.TranslogRecoveryPerformer.performBatchRecovery(TranslogRecoveryPerformer.java:72)\n 1> at org.elasticsearch.index.shard.IndexShard.performBatchRecovery(IndexShard.java:812)\n 1> at org.elasticsearch.indices.recovery.RecoveryTarget$TranslogOperationsRequestHandler.messageReceived(RecoveryTarget.java:306)\n 1> at org.elasticsearch.indices.recovery.RecoveryTarget$TranslogOperationsRequestHandler.messageReceived(RecoveryTarget.java:297)\n 1> at org.elasticsearch.transport.local.LocalTransport$2.doRun(LocalTransport.java:279)\n 1> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n```\n\nsomehow we also need to wait for a clusterstate update here during translog recovery.\n",
"comments": [],
"number": 11281,
"title": "Translog recovery can fail due to mappings not present on recovery target"
} | {
"body": "In rare occasion, the translog replay phase of recovery may require mapping changes on the target shard. This can happen where indexing on the primary introduces new mappings while the recovery is in phase1. If the source node processes the new mapping from the master, allowing the indexing to proceed, before the target node does and the recovery moves to the phase 2 (translog replay) before as well, the translog operations arriving on the target node may miss the mapping changes. Since this is extremely rare, we opt for a simple fix and simply restart the recovery. Note that in the case the file copy phase will likely be very short as the files are already in sync.\n\nRestarting recoveries in such a late phase means we may need to copy segment_N files and/or files that were quickly merged away on the target again. This annoys the write-once protection in our testing infra. To work around it I have introduces a counter in the termpoary file name prefix used by the recovery code.\n\n***\\* THERE IS STILL AN ONGOING ISSUE ***: Lucene will try to write the same segment_N file (which was cleaned by the recovery code) twice triggering test failures.\n\nDue ot this issue we have decided to change approach and use a cluster observer to retry operations once the mapping have arrived (or any other change)\n\n Closes #11281\n",
"number": 11363,
"review_comments": [
{
"body": "we only call this from one place so we don't need `boolean allowMappingUpdates` but can pass `false` directly?\n",
"created_at": "2015-06-01T14:20:02Z"
},
{
"body": "I felt that since `performRecoveryOperation` is public and allows to control this via parameter, it would be better API (consistent) to expose it here in a similar fashion. Don't feel too strongly about it though\n",
"created_at": "2015-06-02T06:03:39Z"
},
{
"body": "remove it it's confusion IMO\n",
"created_at": "2015-06-02T12:46:58Z"
}
],
"title": "Restart recovery upon mapping changes during translog replay"
} | {
"commits": [
{
"message": "Recovery: restart recovery upon mapping changes during translog replay\n\nIn rare occasion, the translog replay phase of recovery may require mapping changes on the target shard. This can happen where indexing on the primary introduces new mappings while the recovery is in phase1. If the source node processes the new mapping from the master, allowing the indexing to proceed, before the target node does and the recovery moves to the phase 2 (translog replay) before as well, the translog operations arriving on the target node may miss the mapping changes. Since this is extremely rare, we opt for a simple fix and simply restart the recovery. Note that in the case the file copy phase will likely be very short as the files are already in sync.\n\nRestarting recoveries in such a late phase means we may need to copy segment_N files and/or files that were quickly merged away on the target again. This annoys the write-once protection in our testing infra. To work around it I have introduces a counter in the termpoary file name prefix used by the recovery code.\n\n**** THERE IS STILL AN ONGOING ISSUE ***: Lucene will try to write the same segment_N file (which was cleaned by the recovery code) twice triggering test failures.\n\n Closes #11281"
},
{
"message": "Moved to retry performing operations locally on the target node using an observer"
},
{
"message": "revert changes to RecoveryStatus now unneeded"
}
],
"files": [
{
"diff": "@@ -62,9 +62,10 @@ public ClusterStateObserver(ClusterService clusterService, ESLogger logger) {\n /**\n * @param clusterService\n * @param timeout a global timeout for this observer. After it has expired the observer\n- * will fail any existing or new #waitForNextChange calls.\n+ * will fail any existing or new #waitForNextChange calls. Set to null\n+ * to wait indefinitely\n */\n- public ClusterStateObserver(ClusterService clusterService, TimeValue timeout, ESLogger logger) {\n+ public ClusterStateObserver(ClusterService clusterService, @Nullable TimeValue timeout, ESLogger logger) {\n this.clusterService = clusterService;\n this.lastObservedState = new AtomicReference<>(new ObservedState(clusterService.state()));\n this.timeOutValue = timeout;",
"filename": "src/main/java/org/elasticsearch/cluster/ClusterStateObserver.java",
"status": "modified"
},
{
"diff": "@@ -20,15 +20,10 @@\n package org.elasticsearch.index.engine;\n \n import com.google.common.collect.Lists;\n-\n import org.apache.lucene.index.*;\n import org.apache.lucene.index.IndexWriter.IndexReaderWarmer;\n import org.apache.lucene.search.BooleanClause.Occur;\n-import org.apache.lucene.search.BooleanQuery;\n-import org.apache.lucene.search.IndexSearcher;\n-import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.SearcherFactory;\n-import org.apache.lucene.search.SearcherManager;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.store.LockObtainFailedException;\n import org.apache.lucene.util.BytesRef;\n@@ -219,7 +214,7 @@ protected void recoverFromTranslog(EngineConfig engineConfig, Translog.TranslogG\n Translog.Operation operation;\n while ((operation = snapshot.next()) != null) {\n try {\n- handler.performRecoveryOperation(this, operation);\n+ handler.performRecoveryOperation(this, operation, true);\n opsRecovered++;\n } catch (ElasticsearchException e) {\n if (e.status() == RestStatus.BAD_REQUEST) {",
"filename": "src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@\n \n import com.google.common.base.Charsets;\n import com.google.common.base.Preconditions;\n-\n import org.apache.lucene.codecs.PostingsFormat;\n import org.apache.lucene.index.CheckIndex;\n import org.apache.lucene.search.Query;\n@@ -801,8 +800,8 @@ public void prepareForIndexRecovery() {\n \n /**\n * Applies all operations in the iterable to the current engine and returns the number of operations applied.\n- * This operation will stop applying operations once an opertion failed to apply.\n- * Note: This method is typically used in peer recovery to replay remote tansaction log entries.\n+ * This operation will stop applying operations once an operation failed to apply.\n+ * Note: This method is typically used in peer recovery to replay remote transaction log entries.\n */\n public int performBatchRecovery(Iterable<Translog.Operation> operations) {\n if (state != IndexShardState.RECOVERING) {\n@@ -1386,7 +1385,7 @@ public void sync(Translog.Location location) {\n * Returns the current translog durability mode\n */\n public Translog.Durabilty getTranslogDurability() {\n- return translogConfig.getDurabilty();\n+ return translogConfig.getDurabilty();\n }\n \n private static Translog.Durabilty getFromSettings(ESLogger logger, Settings settings, Translog.Durabilty defaultValue) {",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -24,12 +24,7 @@\n import org.elasticsearch.index.cache.IndexCache;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.engine.IgnoreOnRecoveryEngineException;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.MapperAnalyzer;\n-import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.mapper.MapperUtils;\n-import org.elasticsearch.index.mapper.Mapping;\n-import org.elasticsearch.index.mapper.Uid;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.query.IndexQueryParserService;\n import org.elasticsearch.index.translog.Translog;\n \n@@ -62,20 +57,28 @@ protected Tuple<DocumentMapper, Mapping> docMapper(String type) {\n return mapperService.documentMapperWithAutoCreate(type); // protected for testing\n }\n \n- /*\n+ /**\n * Applies all operations in the iterable to the current engine and returns the number of operations applied.\n- * This operation will stop applying operations once an opertion failed to apply.\n+ * This operation will stop applying operations once an operation failed to apply.\n+ *\n+ * Throws a {@link MapperException} to be thrown if a mapping update is encountered.\n */\n int performBatchRecovery(Engine engine, Iterable<Translog.Operation> operations) {\n int numOps = 0;\n for (Translog.Operation operation : operations) {\n- performRecoveryOperation(engine, operation);\n+ performRecoveryOperation(engine, operation, false);\n numOps++;\n }\n return numOps;\n }\n \n- private void addMappingUpdate(String type, Mapping update) {\n+ private void maybeAddMappingUpdate(String type, Mapping update, String docId, boolean allowMappingUpdates) {\n+ if (update == null) {\n+ return;\n+ }\n+ if (allowMappingUpdates == false) {\n+ throw new MapperException(\"mapping updates are not allowed (type: [\" + type + \"], id: [\" + docId + \"])\");\n+ }\n Mapping currentUpdate = recoveredTypes.get(type);\n if (currentUpdate == null) {\n recoveredTypes.put(type, update);\n@@ -85,10 +88,13 @@ private void addMappingUpdate(String type, Mapping update) {\n }\n \n /**\n- * Performs a single recovery operation, and returns the indexing operation (or null if its not an indexing operation)\n- * that can then be used for mapping updates (for example) if needed.\n+ * Performs a single recovery operation.\n+ *\n+ * @param allowMappingUpdates true if mapping update should be accepted (but collected). 
Setting it to false will\n+ * cause a {@link MapperException} to be thrown if an update\n+ * is encountered.\n */\n- public void performRecoveryOperation(Engine engine, Translog.Operation operation) {\n+ public void performRecoveryOperation(Engine engine, Translog.Operation operation, boolean allowMappingUpdates) {\n try {\n switch (operation.opType()) {\n case CREATE:\n@@ -98,21 +104,17 @@ public void performRecoveryOperation(Engine engine, Translog.Operation operation\n .routing(create.routing()).parent(create.parent()).timestamp(create.timestamp()).ttl(create.ttl()),\n create.version(), create.versionType().versionTypeForReplicationAndRecovery(), Engine.Operation.Origin.RECOVERY, true, false);\n mapperAnalyzer.setType(create.type()); // this is a PITA - once mappings are per index not per type this can go away an we can just simply move this to the engine eventually :)\n+ maybeAddMappingUpdate(engineCreate.type(), engineCreate.parsedDoc().dynamicMappingsUpdate(), engineCreate.id(), allowMappingUpdates);\n engine.create(engineCreate);\n- if (engineCreate.parsedDoc().dynamicMappingsUpdate() != null) {\n- addMappingUpdate(engineCreate.type(), engineCreate.parsedDoc().dynamicMappingsUpdate());\n- }\n break;\n case SAVE:\n Translog.Index index = (Translog.Index) operation;\n Engine.Index engineIndex = IndexShard.prepareIndex(docMapper(index.type()), source(index.source()).type(index.type()).id(index.id())\n .routing(index.routing()).parent(index.parent()).timestamp(index.timestamp()).ttl(index.ttl()),\n index.version(), index.versionType().versionTypeForReplicationAndRecovery(), Engine.Operation.Origin.RECOVERY, true);\n mapperAnalyzer.setType(index.type());\n+ maybeAddMappingUpdate(engineIndex.type(), engineIndex.parsedDoc().dynamicMappingsUpdate(), engineIndex.id(), allowMappingUpdates);\n engine.index(engineIndex);\n- if (engineIndex.parsedDoc().dynamicMappingsUpdate() != null) {\n- addMappingUpdate(engineIndex.type(), engineIndex.parsedDoc().dynamicMappingsUpdate());\n- }\n break;\n case DELETE:\n Translog.Delete delete = (Translog.Delete) operation;",
"filename": "src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java",
"status": "modified"
},
{
"diff": "@@ -26,8 +26,12 @@\n import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.store.IndexOutput;\n import org.apache.lucene.store.RateLimiter;\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.ElasticsearchTimeoutException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.ClusterStateObserver;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -41,6 +45,7 @@\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.index.IndexShardMissingException;\n import org.elasticsearch.index.engine.RecoveryEngineException;\n+import org.elasticsearch.index.mapper.MapperException;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -294,13 +299,51 @@ public void messageReceived(RecoveryFinalizeRecoveryRequest request, TransportCh\n class TranslogOperationsRequestHandler implements TransportRequestHandler<RecoveryTranslogOperationsRequest> {\n \n @Override\n- public void messageReceived(RecoveryTranslogOperationsRequest request, TransportChannel channel) throws Exception {\n+ public void messageReceived(final RecoveryTranslogOperationsRequest request, final TransportChannel channel) throws Exception {\n try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final ClusterStateObserver observer = new ClusterStateObserver(clusterService, null, logger);\n final RecoveryStatus recoveryStatus = statusRef.status();\n final RecoveryState.Translog translog = recoveryStatus.state().getTranslog();\n translog.totalOperations(request.totalTranslogOps());\n assert recoveryStatus.indexShard().recoveryState() == recoveryStatus.state();\n- recoveryStatus.indexShard().performBatchRecovery(request.operations());\n+ try {\n+ recoveryStatus.indexShard().performBatchRecovery(request.operations());\n+ } catch (MapperException mapperException) {\n+ // in very rare cases a translog replay from primary is processed before a mapping update on this node\n+ // which causes local mapping changes. 
we want to wait until these mappings are processed.\n+ logger.trace(\"delaying recovery due to missing mapping changes\", mapperException);\n+ // we do not need to use a timeout here since the entire recovery mechanism has an inactivity protection (it will be\n+ // canceled)\n+ observer.waitForNextChange(new ClusterStateObserver.Listener() {\n+ @Override\n+ public void onNewClusterState(ClusterState state) {\n+ try {\n+ messageReceived(request, channel);\n+ } catch (Exception e) {\n+ onFailure(e);\n+ }\n+ }\n+\n+ protected void onFailure(Exception e) {\n+ try {\n+ channel.sendResponse(e);\n+ } catch (IOException e1) {\n+ logger.warn(\"failed to send error back to recovery source\", e1);\n+ }\n+ }\n+\n+ @Override\n+ public void onClusterServiceClose() {\n+ onFailure(new ElasticsearchException(\"cluster service was closed while waiting for mapping updates\"));\n+ }\n+\n+ @Override\n+ public void onTimeout(TimeValue timeout) {\n+ // note that we do not use a timeout (see comment above)\n+ onFailure(new ElasticsearchTimeoutException(\"timed out waiting for mapping updates (timeout [\" + timeout + \"])\"));\n+ }\n+ });\n+ }\n }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n ",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
}
]
} |
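The record above resolves a race where translog replay reaches a node before the mapping update it depends on, by parking the request and retrying it on the next cluster-state change. Below is a minimal standalone sketch of that park-and-retry pattern; the `Op` interface, the queue, and the use of `IllegalStateException` as a stand-in for `MapperException` are all illustrative, not the actual `ClusterStateObserver` API.

```java
// Minimal stand-in for the park-and-retry pattern in the diff above. Op, the queue,
// and IllegalStateException (standing in for MapperException) are illustrative; the
// real code uses ClusterStateObserver and re-invokes messageReceived.
import java.util.ArrayDeque;
import java.util.Queue;

public class RetryOnMappingUpdate {
    interface Op { void run() throws Exception; }

    private final Queue<Op> waitingForMappings = new ArrayDeque<>();

    void handle(Op op) {
        try {
            op.run();
        } catch (IllegalStateException missingMapping) {
            // analogous to catching MapperException: park instead of failing the recovery
            waitingForMappings.add(op);
        } catch (Exception e) {
            throw new RuntimeException(e); // real failures still propagate to the caller
        }
    }

    // called when the next cluster state (i.e. the mapping update) is observed
    void onNewClusterState() {
        Queue<Op> retries = new ArrayDeque<>(waitingForMappings);
        waitingForMappings.clear();
        for (Op op : retries) {
            handle(op); // may park again if the mapping is still missing
        }
    }

    public static void main(String[] args) {
        RetryOnMappingUpdate handler = new RetryOnMappingUpdate();
        boolean[] mappingReady = {false};
        handler.handle(() -> {
            if (!mappingReady[0]) throw new IllegalStateException("mapping not applied yet");
            System.out.println("replayed translog batch");
        });
        mappingReady[0] = true;
        handler.onNewClusterState(); // prints "replayed translog batch"
    }
}
```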
{
"body": "If a query has the parameter for size set to zero it interferes with the scoring logic used by aggregations such as `top_hits` and `sampler` that rely on scoring.\n\nExample GIST here: https://gist.github.com/markharwood/d66ea14a081fa584d3ca\n",
"comments": [
{
"body": "An additional side-effect is that the `min_score` setting is also currently broken if size=0. As a result aggregations that don't need scores e.g. `terms` agg will collect nothing if min_score is >0 and the size=0 is set.\n",
"created_at": "2015-05-27T17:05:26Z"
}
],
"number": 11119,
"title": "Query with size=zero changes aggregations that rely on scores"
} | {
"body": "Aggregations like Sampler and TopHits that require access to scores did not work if the query has size param set to zero. The assumption was that the Lucene query scoring logic was not required in these cases.\nAdded a Junit test to demonstrate the issue and a fix to test if any aggregations require scores to execute - if so a normal search is performed instead of making a count.\n\nCloses #11119\n",
"number": 11358,
"review_comments": [
{
"body": "Can we check that the hits information is correct in the response too? i.e. the hits are empty, the totalHits is correct and the maxScore is zero\n",
"created_at": "2015-05-26T16:39:38Z"
}
],
"title": "Queries with `size:0` break aggregations that need scores"
} | {
"commits": [
{
"message": "Aggregations fix: queries with size=0 broke aggregations that require scores.\nAggregations like Sampler and TopHits that require access to scores did not work if the query has size param set to zero. The assumption was that the Lucene query scoring logic was not required in these cases.\nAdded a Junit test to demonstrate the issue and a fix which relies on earlier creation of Collector wrappers so that Collector.needsScores() calls work in all search operations.\n\nCloses #11119\n\nShifted “needsScores” test away from AggregationPhase to existing centralised registry of collectors in ContextIndexSearcher to allow for other forms of scoring collectors to be accommodated in future.\nAdded Junit test asserts.\n\nAdded (currently failing test) for the other failure scenario - where min_score is set to >0 and size=0 will collect nothing\n\nShifted wrapping of collectors pre the Weight creation\n\nReverted to previous version"
}
],
"files": [
{
"diff": "@@ -133,8 +133,11 @@ public Weight createNormalizedWeight(Query query, boolean needsScores) throws IO\n }\n }\n \n+\n @Override\n- public void search(List<LeafReaderContext> leaves, Weight weight, Collector collector) throws IOException {\n+ public void search(Query query, Collector collector) throws IOException {\n+ // Wrap the caller's collector with various wrappers e.g. those used to siphon\n+ // matches off for aggregation or to impose a time-limit on collection.\n final boolean timeoutSet = searchContext.timeoutInMillis() != -1;\n final boolean terminateAfterSet = searchContext.terminateAfter() != SearchContext.DEFAULT_TERMINATE_AFTER;\n \n@@ -166,8 +169,13 @@ public void search(List<LeafReaderContext> leaves, Weight weight, Collector coll\n collector = new MinimumScoreCollector(collector, searchContext.minimumScore());\n }\n }\n+ super.search(query, collector);\n+ }\n \n- // we only compute the doc id set once since within a context, we execute the same query always...\n+ @Override\n+ public void search(List<LeafReaderContext> leaves, Weight weight, Collector collector) throws IOException {\n+ final boolean timeoutSet = searchContext.timeoutInMillis() != -1;\n+ final boolean terminateAfterSet = searchContext.terminateAfter() != SearchContext.DEFAULT_TERMINATE_AFTER;\n try {\n if (timeoutSet || terminateAfterSet) {\n try {",
"filename": "src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java",
"status": "modified"
},
{
"diff": "@@ -63,7 +63,15 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.hamcrest.Matchers.*;\n+import static org.hamcrest.Matchers.arrayContaining;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.lessThanOrEqualTo;\n+import static org.hamcrest.Matchers.not;\n+import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.nullValue;\n+import static org.hamcrest.Matchers.sameInstance;\n \n /**\n *\n@@ -228,7 +236,9 @@ private String key(Terms.Bucket bucket) {\n \n @Test\n public void testBasics() throws Exception {\n- SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n+ SearchResponse response = client()\n+ .prepareSearch(\"idx\")\n+ .setTypes(\"type\")\n .addAggregation(terms(\"terms\")\n .executionHint(randomExecutionHint())\n .field(TERMS_AGGS_FIELD)\n@@ -264,6 +274,65 @@ public void testBasics() throws Exception {\n }\n }\n \n+ @Test\n+ public void testIssue11119() throws Exception {\n+ // Test that top_hits aggregation is fed scores if query results size=0\n+ SearchResponse response = client()\n+ .prepareSearch(\"idx\")\n+ .setTypes(\"field-collapsing\")\n+ .setSize(0)\n+ .setQuery(matchQuery(\"text\", \"x y z\"))\n+ .addAggregation(terms(\"terms\").executionHint(randomExecutionHint()).field(\"group\").subAggregation(topHits(\"hits\")))\n+ .get();\n+\n+ assertSearchResponse(response);\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(8l));\n+ assertThat(response.getHits().hits().length, equalTo(0));\n+ assertThat(response.getHits().maxScore(), equalTo(0f));\n+ Terms terms = response.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ assertThat(terms.getName(), equalTo(\"terms\"));\n+ assertThat(terms.getBuckets().size(), equalTo(3));\n+\n+ for (Terms.Bucket bucket : terms.getBuckets()) {\n+ assertThat(bucket, notNullValue());\n+ TopHits topHits = bucket.getAggregations().get(\"hits\");\n+ SearchHits hits = topHits.getHits();\n+ float bestScore = Float.MAX_VALUE;\n+ for (int h = 0; h < hits.getHits().length; h++) {\n+ float score=hits.getAt(h).getScore();\n+ assertThat(score, lessThanOrEqualTo(bestScore));\n+ assertThat(score, greaterThan(0f));\n+ bestScore = hits.getAt(h).getScore();\n+ }\n+ }\n+\n+ // Also check that min_score setting works when size=0\n+ // (technically not a test of top_hits but implementation details are\n+ // tied up with the need to feed scores into the agg tree even when\n+ // users don't want ranked set of query results.)\n+ response = client()\n+ .prepareSearch(\"idx\")\n+ .setTypes(\"field-collapsing\")\n+ .setSize(0)\n+ .setMinScore(0.0001f)\n+ .setQuery(matchQuery(\"text\", \"x y z\"))\n+ .addAggregation(terms(\"terms\").executionHint(randomExecutionHint()).field(\"group\"))\n+ .get();\n+\n+ assertSearchResponse(response);\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(8l));\n+ assertThat(response.getHits().hits().length, equalTo(0));\n+ assertThat(response.getHits().maxScore(), equalTo(0f));\n+ terms = response.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ assertThat(terms.getName(), equalTo(\"terms\"));\n+ assertThat(terms.getBuckets().size(), 
equalTo(3));\n+ }\n+\n+\n @Test\n public void testBreadthFirst() throws Exception {\n // breadth_first will be ignored since we need scores",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/TopHitsTests.java",
"status": "modified"
}
]
} |
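The fix in the record above hinges on asking the outermost, already-wrapped collector whether scoring is needed before the weight is created. Below is a simplified standalone model of that idea, with a hand-rolled `Collector` interface rather than the real Lucene one, to show why the order matters.

```java
// Simplified standalone model (not the Lucene API) of why collectors must be wrapped
// before the weight is created: the engine asks the outermost collector whether
// scores are needed, so a scoring sub-collector attached too late is ignored.
import java.util.List;

interface Collector {
    boolean needsScores();
}

// Stand-in for the size=0 "count only" collector: it never needs scores on its own.
class CountingCollector implements Collector {
    @Override public boolean needsScores() { return false; }
}

// Stand-in for an aggregation collector such as top_hits, which does need scores.
class ScoringAggCollector implements Collector {
    @Override public boolean needsScores() { return true; }
}

// Wrapper combining several collectors; it needs scores if any child does.
class MultiCollector implements Collector {
    private final List<Collector> children;
    MultiCollector(List<Collector> children) { this.children = children; }
    @Override public boolean needsScores() {
        return children.stream().anyMatch(Collector::needsScores);
    }
}

public class NeedsScoresDemo {
    public static void main(String[] args) {
        Collector searchCollector = new CountingCollector();   // what size=0 used alone
        Collector aggCollector = new ScoringAggCollector();    // top_hits / sampler
        Collector wrapped = new MultiCollector(List.of(searchCollector, aggCollector));

        System.out.println(searchCollector.needsScores()); // false -> aggs get no scores (bug)
        System.out.println(wrapped.needsScores());          // true  -> scores reach the aggs (fix)
    }
}
```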
{
"body": "If I start a node of version 1.5.0 and another built from 1.x, and create an index, I get several messages like this:\n\n```\n[2015-05-25 21:17:57,012][WARN ][transport.netty ] [Donald Ritter] Message not fully read (request) for [482] and action [internal:index/shard/recovery/start_recovery], resetting\n[2015-05-25 21:17:57,020][WARN ][transport.netty ] [Donald Ritter] Message not fully read (request) for [483] and action [internal:index/shard/recovery/start_recovery], resetting\n[2015-05-25 21:17:57,519][WARN ][transport.netty ] [Donald Ritter] Message not fully read (request) for [485] and action [internal:index/shard/recovery/start_recovery], resetting\n[2015-05-25 21:17:57,524][WARN ][transport.netty ] [Donald Ritter] Message not fully read (request) for [486] and action [internal:index/shard/recovery/start_recovery], resetting\n[2015-05-25 21:17:57,591][WARN ][transport.netty ] [Donald Ritter] Message not fully read (request) for [488] and action [internal:index/shard/recovery/start_recovery], resetting\n```\n\nCould this be to do with the index sealing PR? /cc @brwe ?\n",
"comments": [
{
"body": "fixed in #11347\n",
"created_at": "2015-05-26T13:54:02Z"
}
],
"number": 11335,
"title": "Serialization error for action [internal:index/shard/recovery/start_recovery]"
} | {
"body": "This caused scary warnings in the logs\nMessage not fully read (request) for [157] and action [internal:index/shard/recovery/start_recovery], resetting.\n\ncloses #11335\n\nMust have happened while backporting #11179\n",
"number": 11347,
"review_comments": [],
"title": "Don't write recoveryType twice"
} | {
"commits": [
{
"message": "recovery: don't write recoveryType twice\n\nThis caused scary warnings in the logs\nMessage not fully read (request) for [157] and action [internal:index/shard/recovery/start_recovery], resetting\n\ncloses #11335"
}
],
"files": [
{
"diff": "@@ -127,7 +127,6 @@ public void writeTo(StreamOutput out) throws IOException {\n if (out.getVersion().onOrAfter(Version.V_1_2_2)) {\n out.writeByte(recoveryType.id());\n }\n- out.writeByte(recoveryType.id());\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n \n import java.io.ByteArrayInputStream;\n import java.io.ByteArrayOutputStream;\n+import java.io.EOFException;\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.Map;\n@@ -79,6 +80,7 @@ public void testSerialization() throws Exception {\n } else {\n assertThat(inRequest.recoveryType(), nullValue());\n }\n+ assertThat(in.read(), equalTo(-1));\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/indices/recovery/StartRecoveryRequestTest.java",
"status": "modified"
}
]
} |
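The bug in the record above is a field serialized twice by the sender while the receiver reads it once, leaving stray bytes that trigger the "Message not fully read" warning. A minimal sketch with plain `java.io` streams (field names and values are made up, not Elasticsearch's `StreamOutput`/`StreamInput`) reproduces the symptom that the added `assertThat(in.read(), equalTo(-1))` check guards against.

```java
// Plain java.io sketch of the symptom: the sender writes recoveryType twice, the
// receiver reads it once, and the transport layer then finds unread bytes.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class DoubleWriteDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeLong(157L);  // some earlier field, e.g. a request id
            out.writeByte(2);     // recoveryType written once ...
            out.writeByte(2);     // ... and accidentally written a second time
        }

        try (DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            in.readLong();  // the earlier field
            in.readByte();  // the reader only expects one recoveryType byte
            // The stream should be fully consumed here; the duplicate byte is left over.
            System.out.println("leftover: " + in.read()); // prints 2 instead of -1
        }
    }
}
```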
{
"body": "When trying to upgrade to elasticsearch 1.5.x we encountered this unexpected behaviour.\n\nFor example:\n\n``` json\n{\n \"from\": 3,\n \"size\": 4,\n \"query\": {\n \"term\" : { \"text\" : \"hello\" }\n },\n \"rescore\" : {\n \"window_size\" : 50,\n \"query\" : {\n \"rescore_query\" : {\n \"match\" : {\n \"text\" : {\n \"query\" : \"world\",\n \"slop\" : 2\n }\n }\n },\n \"query_weight\" : 0.7,\n \"rescore_query_weight\" : 1.2\n }\n }\n}\n```\n\nReturns 4 hits on ES 1.4.5, but only one on 1.5.2\n\nSee https://gist.github.com/koenbollen/f7232e3b5e600c66b4b4 for a minimal example that returns unexpected results on elasticsearch >= 1.5.0\n",
"comments": [
{
"body": "Hi @jurriaan \n\nYes, i can replicate this. Setting `from` to anything but zero returns the same list of documents, but skipping the first `from` documents. It looks to be related to https://github.com/elastic/elasticsearch/pull/7707\n\n@mikemccand could you have a look at this please?\n\nA simpler recreation below:\n\n```\nDELETE test\n\nPUT test \n{\n \"settings\": {\n \"number_of_shards\": 1,\n \"number_of_replicas\": 0\n }\n}\n\nPOST test/item/_bulk\n{\"index\": {\"_id\": 1}}\n{ \"text\": \"hello world\"}\n{\"index\": {\"_id\": 2}}\n{ \"text\": \"hello world\"}\n{\"index\": {\"_id\": 3}}\n{ \"text\": \"hello world\"}\n{\"index\": {\"_id\": 4}}\n{ \"text\": \"hello world\"}\n{\"index\": {\"_id\": 5}}\n{ \"text\": \"hello world\"}\n\nPOST /test/item/_search\n{\n \"from\": 1,\n \"size\": 4,\n \"query\": {\n \"term\": {\n \"text\": \"hello\"\n }\n },\n \"rescore\": {\n \"window_size\": 50,\n \"query\": {\n \"rescore_query\": {\n \"match_all\": {}\n }\n }\n }\n}\n```\n",
"created_at": "2015-05-25T15:15:39Z"
},
{
"body": "Thanks @jurriaan this is indeed wrong ... I opened #11342 to fix this.\n",
"created_at": "2015-05-26T08:09:26Z"
},
{
"body": "@mikemccand Thanks for the quick fix! Hopefully we can migrate to a newer version of elasticsearch soon :)\n",
"created_at": "2015-05-26T08:53:00Z"
}
],
"number": 11277,
"title": "Searches do not return expected number of hits when using rescoring"
} | {
"body": "I'm not sure why we had this fragment from #7707 ... it is over-trimming `TopDocs` such that you get `size-from` results instead of `size`, which is wrong when `from != 0`.\n\nCloses #11277 \n",
"number": 11342,
"review_comments": [],
"title": "Don't truncate TopDocs after rescoring"
} | {
"commits": [
{
"message": "don't truncate TopDocs after rescoring"
}
],
"files": [
{
"diff": "@@ -60,11 +60,6 @@ public void execute(SearchContext context) {\n for (RescoreSearchContext ctx : context.rescore()) {\n topDocs = ctx.rescorer().rescore(topDocs, context, ctx);\n }\n- if (context.size() < topDocs.scoreDocs.length) {\n- ScoreDoc[] hits = new ScoreDoc[context.size()];\n- System.arraycopy(topDocs.scoreDocs, 0, hits, 0, hits.length);\n- topDocs = new TopDocs(topDocs.totalHits, hits, topDocs.getMaxScore());\n- }\n context.queryResult().topDocs(topDocs);\n } catch (IOException e) {\n throw new ElasticsearchException(\"Rescore Phase Failed\", e);",
"filename": "src/main/java/org/elasticsearch/search/rescore/RescorePhase.java",
"status": "modified"
},
{
"diff": "@@ -45,6 +45,7 @@\n import java.util.Comparator;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.hamcrest.Matchers.*;\n@@ -206,7 +207,7 @@ public void testMoreDocs() throws Exception {\n RescoreBuilder.queryRescorer(QueryBuilders.matchPhraseQuery(\"field1\", \"lexington avenue massachusetts\").slop(3))\n .setQueryWeight(0.6f).setRescoreQueryWeight(2.0f)).setRescoreWindow(20).execute().actionGet();\n \n- assertThat(searchResponse.getHits().hits().length, equalTo(3));\n+ assertThat(searchResponse.getHits().hits().length, equalTo(5));\n assertHitCount(searchResponse, 9);\n assertFirstHit(searchResponse, hasId(\"3\"));\n }\n@@ -719,4 +720,25 @@ private int indexRandomNumbers(String analyzer, int shards, boolean dummyDocs) t\n ensureGreen();\n return numDocs;\n }\n+\n+ // #11277\n+ public void testFromSize() throws Exception {\n+ Builder settings = Settings.builder();\n+ settings.put(SETTING_NUMBER_OF_SHARDS, 1);\n+ settings.put(SETTING_NUMBER_OF_REPLICAS, 0);\n+ assertAcked(prepareCreate(\"test\").setSettings(settings));\n+ for(int i=0;i<5;i++) {\n+ client().prepareIndex(\"test\", \"type\", \"\"+i).setSource(\"text\", \"hello world\").get();\n+ }\n+ refresh();\n+\n+ SearchRequestBuilder request = client().prepareSearch();\n+ request.setQuery(QueryBuilders.termQuery(\"text\", \"hello\"));\n+ request.setFrom(1);\n+ request.setSize(4);\n+ request.addRescorer(RescoreBuilder.queryRescorer(QueryBuilders.matchAllQuery()));\n+ request.setRescoreWindow(50);\n+\n+ assertEquals(4, request.get().getHits().hits().length);\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/search/rescore/QueryRescorerTests.java",
"status": "modified"
}
]
} |
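The removed fragment in the record above trimmed the rescored `TopDocs` to `size` before the `from` offset was applied, so callers received `size - from` hits. A small plain-array sketch of the arithmetic, with made-up doc ids, illustrates the effect.

```java
// Plain-array sketch of the off-by-`from` effect: truncating the rescored docs to
// `size` before the `from` offset is applied leaves only `size - from` hits.
import java.util.Arrays;

public class RescoreTrimDemo {
    public static void main(String[] args) {
        int from = 1, size = 4;
        int[] rescoredDocs = {10, 11, 12, 13, 14, 15, 16, 17}; // hits inside the rescore window

        // Behaviour removed by the fix: trim to `size` right after rescoring ...
        int[] truncated = Arrays.copyOfRange(rescoredDocs, 0, size);
        // ... then paging is applied later, so only size - from hits remain.
        int[] page = Arrays.copyOfRange(truncated, from, truncated.length);
        System.out.println(page.length); // 3, although the request asked for 4

        // Without the early truncation the same paging returns the full requested size.
        int[] fixedPage = Arrays.copyOfRange(rescoredDocs, from,
                Math.min(rescoredDocs.length, from + size));
        System.out.println(fixedPage.length); // 4
    }
}
```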
{
"body": "When an analyzer is defined that has a name starting with _ then this analyzer is silently dropped from the mapping once the mapping source is propagated through the cluster (https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java#L795). The local document mapper and mapping in the cluster state are then out of sync on some nodes and documents might be analyzed with the default analyzer on some or the analyzer that was defined by the user on others. After restarting the nodes, the analyzer setting is completely lost. Example below.\n\nWhen an analyzer is configured in the index settings, there should be a check to make sure the name does not start with an _. \n\nSee also: https://github.com/elasticsearch/elasticsearch/issues/3544#issuecomment-73203535\n\nExample:\n\n```\nPUT test\n{\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"_some_name\": {\n \"tokenizer\":\"keyword\"\n }\n }\n }\n }\n}\n\nPUT test/_mapping/page\n{\n \"page\": {\n \"properties\": {\n \"text\": {\n \"type\": \"string\",\n \"analyzer\": \"_some_name\"\n }\n }\n }\n}\n\nGET test/_mapping\n\n```\n\nresults in\n\n```\n\n{\n \"test\": {\n \"mappings\": {\n \"page\": {\n \"properties\": {\n \"text\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "+1\n",
"created_at": "2015-02-06T17:53:24Z"
}
],
"number": 9596,
"title": "Analyzers starting with _ are silently ignored"
} | {
"body": "closes #9596\n",
"number": 11303,
"review_comments": [
{
"body": "mit -> with :)\n",
"created_at": "2015-05-23T05:44:24Z"
},
{
"body": "Maybe put single quotes around the `_`?\n",
"created_at": "2015-05-23T05:45:39Z"
}
],
"title": "Custom analyzer names and aliases must not start with _"
} | {
"commits": [
{
"message": "analyzers: custom analyzers names and aliases must not start with _\n\ncloses #9596"
},
{
"message": "spell correct and add single quotes"
}
],
"files": [
{
"diff": "@@ -5,6 +5,7 @@ An analyzer of type `custom` that allows to combine a `Tokenizer` with\n zero or more `Token Filters`, and zero or more `Char Filters`. The\n custom analyzer accepts a logical/registered name of the tokenizer to\n use, and a list of logical/registered names of token filters.\n+The name of the custom analyzer must not start with \"_\".\n \n The following are settings that can be set for a `custom` analyzer type:\n ",
"filename": "docs/reference/analysis/analyzers/custom-analyzer.asciidoc",
"status": "modified"
},
{
"diff": "@@ -251,6 +251,11 @@ public AnalysisService(Index index, @IndexSettings Settings indexSettings, @Null\n defaultSearchAnalyzer = analyzers.containsKey(\"default_search\") ? analyzers.get(\"default_search\") : analyzers.get(\"default\");\n defaultSearchQuoteAnalyzer = analyzers.containsKey(\"default_search_quote\") ? analyzers.get(\"default_search_quote\") : defaultSearchAnalyzer;\n \n+ for (Map.Entry<String, NamedAnalyzer> analyzer : analyzers.entrySet()) {\n+ if (analyzer.getKey().startsWith(\"_\")) {\n+ throw new IllegalArgumentException(\"analyzer name must not start with '_'. got \\\"\" + analyzer.getKey() + \"\\\"\");\n+ }\n+ }\n this.analyzers = ImmutableMap.copyOf(analyzers);\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/analysis/AnalysisService.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.inject.Injector;\n import org.elasticsearch.common.inject.ModulesBuilder;\n+import org.elasticsearch.common.inject.ProvisionException;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.SettingsModule;\n import org.elasticsearch.env.Environment;\n@@ -94,7 +95,7 @@ public void testSimpleConfigurationYaml() {\n Settings settings = loadFromClasspath(\"org/elasticsearch/index/analysis/test1.yml\");\n testSimpleConfiguration(settings);\n }\n- \n+\n @Test\n public void testDefaultFactoryTokenFilters() throws IOException {\n assertTokenFilter(\"keyword_repeat\", KeywordRepeatFilter.class);\n@@ -238,4 +239,36 @@ private Path generateWordList(String[] words) throws Exception {\n return wordListFile;\n }\n \n+ @Test\n+ public void testUnderscoreInAnalyzerName() {\n+ Settings settings = Settings.builder()\n+ .put(\"index.analysis.analyzer._invalid_name.tokenizer\", \"keyword\")\n+ .put(\"path.home\", createTempDir().toString())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, \"1\")\n+ .build();\n+ try {\n+ getAnalysisService(settings);\n+ fail(\"This should fail with IllegalArgumentException because the analyzers name starts with _\");\n+ } catch (ProvisionException e) {\n+ assertTrue(e.getCause() instanceof IllegalArgumentException);\n+ assertThat(e.getCause().getMessage(), equalTo(\"analyzer name must not start with _. got \\\"_invalid_name\\\"\"));\n+ }\n+ }\n+\n+ @Test\n+ public void testUnderscoreInAnalyzerNameAlias() {\n+ Settings settings = Settings.builder()\n+ .put(\"index.analysis.analyzer.valid_name.tokenizer\", \"keyword\")\n+ .put(\"index.analysis.analyzer.valid_name.alias\", \"_invalid_name\")\n+ .put(\"path.home\", createTempDir().toString())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, \"1\")\n+ .build();\n+ try {\n+ getAnalysisService(settings);\n+ fail(\"This should fail with IllegalArgumentException because the analyzers alias starts with _\");\n+ } catch (ProvisionException e) {\n+ assertTrue(e.getCause() instanceof IllegalArgumentException);\n+ assertThat(e.getCause().getMessage(), equalTo(\"analyzer name must not start with _. got \\\"_invalid_name\\\"\"));\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java",
"status": "modified"
}
]
} |
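The check added in the record above is a simple scan of the registered analyzer names. The sketch below reproduces that validation over a plain `Map` of hypothetical names; it is not the real `AnalysisService` wiring.

```java
// Sketch of the added validation over a plain Map of hypothetical analyzer names;
// the real check runs against the AnalysisService registry during index creation.
import java.util.Map;

public class AnalyzerNameCheck {
    static void validate(Map<String, Object> analyzers) {
        for (String name : analyzers.keySet()) {
            if (name.startsWith("_")) {
                throw new IllegalArgumentException(
                        "analyzer name must not start with '_'. got \"" + name + "\"");
            }
        }
    }

    public static void main(String[] args) {
        validate(Map.of("valid_name", new Object())); // passes
        try {
            validate(Map.of("_invalid_name", new Object()));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // analyzer name must not start with '_'. ...
        }
    }
}
```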
{
"body": "The `IndicesTTLService` uses a bulk request to perform the deletions of expired documents. Sometimes these deletions could fail, but the service does not log or provide any indication that the deletions are failing.\n\nThe `onResponse` method could check to see if the response had any failures and if so log the failures with the details. Additionally, the `onFailure` method looks like it will just swallow the Throwable that caused the failure.\n",
"comments": [],
"number": 11019,
"title": "TTL deletion failures are not logged"
} | {
"body": "In order to get some information if the TTL purger thread could\nsuccessfully delete all documents per bulk exection, this commit\nadds some logging. TRACE level logging will potentially contain\na lot of information about all the bulk failures.\n\nCloses #11019\n",
"number": 11302,
"review_comments": [
{
"body": "can we s/Bulk/bulk here? I think we try to stay with lowercase in logs right?\n",
"created_at": "2015-05-22T13:12:52Z"
}
],
"title": "Add logging for failed TTL purges"
} | {
"commits": [
{
"message": "Logging: Add logging for failed TTL purges\n\nIn order to get some information if the TTL purger thread could\nsuccessfully delete all documents per bulk exection, this commit\nadds some logging. TRACE level logging will potentially contain\na lot of information about all the bulk failures.\n\nCloses #11019"
}
],
"files": [
{
"diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.search.SimpleCollector;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.bulk.BulkItemResponse;\n import org.elasticsearch.action.bulk.BulkRequest;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.bulk.TransportBulkAction;\n@@ -280,12 +281,28 @@ private BulkRequest processBulkIfNeeded(BulkRequest bulkRequest, boolean force)\n bulkAction.executeBulk(bulkRequest, new ActionListener<BulkResponse>() {\n @Override\n public void onResponse(BulkResponse bulkResponse) {\n- logger.trace(\"bulk took \" + bulkResponse.getTookInMillis() + \"ms\");\n+ if (bulkResponse.hasFailures()) {\n+ int failedItems = 0;\n+ for (BulkItemResponse response : bulkResponse) {\n+ if (response.isFailed()) failedItems++;\n+ }\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"bulk deletion failures for [{}]/[{}] items, failure message: [{}]\", failedItems, bulkResponse.getItems().length, bulkResponse.buildFailureMessage());\n+ } else {\n+ logger.error(\"bulk deletion failures for [{}]/[{}] items\", failedItems, bulkResponse.getItems().length);\n+ }\n+ } else {\n+ logger.trace(\"bulk deletion took \" + bulkResponse.getTookInMillis() + \"ms\");\n+ }\n }\n \n @Override\n public void onFailure(Throwable e) {\n- logger.warn(\"failed to execute bulk\");\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"failed to execute bulk\", e);\n+ } else {\n+ logger.warn(\"failed to execute bulk: [{}]\", e.getMessage());\n+ }\n }\n });\n } catch (Exception e) {",
"filename": "src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java",
"status": "modified"
}
]
} |
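The record above adds failure-aware logging around the TTL purger's bulk responses. The sketch below models the same branching with a tiny stand-in `Item` type and `java.util.logging` (TRACE roughly maps to `FINEST`, error to `SEVERE` here); the real handler works on Elasticsearch's `BulkResponse` and its logger instead.

```java
// Stand-in for the failure-aware bulk logging: Item models a BulkItemResponse and
// java.util.logging replaces the ES logger.
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class TtlPurgeLogging {
    record Item(boolean failed, String failureMessage) {}

    private static final Logger logger = Logger.getLogger("ttl.purger");

    static void onResponse(List<Item> items, long tookInMillis) {
        long failedItems = items.stream().filter(Item::failed).count();
        if (failedItems > 0) {
            if (logger.isLoggable(Level.FINEST)) {
                // verbose path: include the per-item failure details
                logger.finest("bulk deletion failures for [" + failedItems + "]/["
                        + items.size() + "] items, failures: " + items);
            } else {
                logger.severe("bulk deletion failures for [" + failedItems + "]/["
                        + items.size() + "] items");
            }
        } else {
            logger.finest("bulk deletion took " + tookInMillis + "ms");
        }
    }

    public static void main(String[] args) {
        onResponse(List.of(new Item(false, null), new Item(true, "version conflict")), 12);
    }
}
```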
{
"body": "This test fails since we get into a cycle of trying to recover the primary from replica 1 and replica 2 which are corrupted, and never end up reaching the non corrupted previous primary shard\n",
"comments": [
{
"body": "I had local failures... need to investigate\n",
"created_at": "2015-05-19T15:53:57Z"
}
],
"number": 11226,
"title": "CorruptedFileTest#testReplicaCorruption fails"
} | {
"body": "We fetch the state version to find the right shard to be started as\nthe primary. This can return a valid shard state even if the shard is\ncorrupted and can't even be opened. This commit adds best effort detection\nfor this scenario and returns an invalid version for the shard if it's corrupted\n\nCloses #11226\n",
"number": 11269,
"review_comments": [
{
"body": "cat we add a trace log here with the exception? Thinking it might be helpful to be able to know why... \n",
"created_at": "2015-05-21T06:50:41Z"
},
{
"body": "this is unneeded? we check before we call...\n",
"created_at": "2015-05-21T06:51:11Z"
},
{
"body": "I like the method to be self sufficient to that's why I did that\n",
"created_at": "2015-05-21T07:30:59Z"
}
],
"title": "Check if the index can be opened and is not corrupted on state listing"
} | {
"commits": [
{
"message": "Check if the index can be opened and is not corrupted on state listing\n\nWe fetch the state version to find the right shard to be started as\nthe primary. This can return a valid shard state even if the shard is\ncorrupted and can't even be opened. This commit adds best effort detection\nfor this scenario and returns an invalid version for the shard if it's corrupted\n\nCloses #11226"
}
],
"files": [
{
"diff": "@@ -21,7 +21,6 @@\n \n import com.google.common.collect.Lists;\n import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.action.ActionFuture;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.FailedNodeException;\n import org.elasticsearch.action.support.ActionFilters;\n@@ -30,15 +29,15 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.shard.ShardPath;\n import org.elasticsearch.index.shard.ShardStateMetaData;\n+import org.elasticsearch.index.store.Store;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n@@ -108,6 +107,11 @@ protected NodeGatewayStartedShards nodeOperation(NodeRequest request) {\n logger.trace(\"{} loading local shard state info\", shardId);\n ShardStateMetaData shardStateMetaData = ShardStateMetaData.FORMAT.loadLatestState(logger, nodeEnv.availableShardPaths(request.shardId));\n if (shardStateMetaData != null) {\n+ final IndexMetaData metaData = clusterService.state().metaData().index(shardId.index().name()); // it's a mystery why this is sometimes null\n+ if (metaData != null && canOpenIndex(request.getShardId(), metaData) == false) {\n+ logger.trace(\"{} can't open index for shard\", shardId);\n+ return new NodeGatewayStartedShards(clusterService.localNode(), -1);\n+ }\n // old shard metadata doesn't have the actual index UUID so we need to check if the actual uuid in the metadata\n // is equal to IndexMetaData.INDEX_UUID_NA_VALUE otherwise this shard doesn't belong to the requested index.\n if (indexUUID.equals(shardStateMetaData.indexUUID) == false\n@@ -125,6 +129,18 @@ protected NodeGatewayStartedShards nodeOperation(NodeRequest request) {\n }\n }\n \n+ private boolean canOpenIndex(ShardId shardId, IndexMetaData metaData) throws IOException {\n+ // try and see if we an list unallocated\n+ if (metaData == null) {\n+ return false;\n+ }\n+ final ShardPath shardPath = ShardPath.loadShardPath(logger, nodeEnv, shardId, metaData.settings());\n+ if (shardPath == null) {\n+ return false;\n+ }\n+ return Store.canOpenIndex(logger, shardPath.resolveIndex());\n+ }\n+\n @Override\n protected boolean accumulateExceptions() {\n return true;",
"filename": "src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayStartedShards.java",
"status": "modified"
},
{
"diff": "@@ -382,6 +382,22 @@ public static MetadataSnapshot readMetadataSnapshot(Path indexLocation, ESLogger\n return MetadataSnapshot.EMPTY;\n }\n \n+ /**\n+ * Returns <code>true</code> iff the given location contains an index an the index\n+ * can be successfully opened. This includes reading the segment infos and possible\n+ * corruption markers.\n+ */\n+ public static boolean canOpenIndex(ESLogger logger, Path indexLocation) throws IOException {\n+ try (Directory dir = new SimpleFSDirectory(indexLocation)) {\n+ failIfCorrupted(dir, new ShardId(\"\", 1));\n+ Lucene.readSegmentInfos(dir);\n+ return true;\n+ } catch (Exception ex) {\n+ logger.trace(\"Can't open index for path [{}]\", ex, indexLocation);\n+ return false;\n+ }\n+ }\n+\n /**\n * The returned IndexOutput might validate the files checksum if the file has been written with a newer lucene version\n * and the metadata holds the necessary information to detect that it was been written by Lucene 4.8 or newer. If it has only",
"filename": "src/main/java/org/elasticsearch/index/store/Store.java",
"status": "modified"
},
{
"diff": "@@ -56,6 +56,7 @@\n import java.io.FileNotFoundException;\n import java.io.IOException;\n import java.nio.file.NoSuchFileException;\n+import java.nio.file.Path;\n import java.util.*;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -1235,4 +1236,35 @@ public void testMarkCorruptedOnTruncatedSegmentsFile() throws IOException {\n assertTrue(store.isMarkedCorrupted());\n store.close();\n }\n+\n+ public void testCanOpenIndex() throws IOException {\n+ IndexWriterConfig iwc = newIndexWriterConfig();\n+ Path tempDir = createTempDir();\n+ final BaseDirectoryWrapper dir = newFSDirectory(tempDir);\n+ assertFalse(Store.canOpenIndex(logger, tempDir));\n+ IndexWriter writer = new IndexWriter(dir, iwc);\n+ Document doc = new Document();\n+ doc.add(new StringField(\"id\", \"1\", random().nextBoolean() ? Field.Store.YES : Field.Store.NO));\n+ writer.addDocument(doc);\n+ writer.commit();\n+ writer.close();\n+ assertTrue(Store.canOpenIndex(logger, tempDir));\n+\n+ final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n+ DirectoryService directoryService = new DirectoryService(shardId, ImmutableSettings.EMPTY) {\n+ @Override\n+ public long throttleTimeInNanos() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public Directory newDirectory() throws IOException {\n+ return dir;\n+ }\n+ };\n+ Store store = new Store(shardId, ImmutableSettings.EMPTY, directoryService, new DummyShardLock(shardId));\n+ store.markStoreCorrupted(new CorruptIndexException(\"foo\", \"bar\"));\n+ assertFalse(Store.canOpenIndex(logger, tempDir));\n+ store.close();\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/store/StoreTest.java",
"status": "modified"
}
]
} |
{
"body": "This test fails since we get into a cycle of trying to recover the primary from replica 1 and replica 2 which are corrupted, and never end up reaching the non corrupted previous primary shard\n",
"comments": [
{
"body": "I had local failures... need to investigate\n",
"created_at": "2015-05-19T15:53:57Z"
}
],
"number": 11226,
"title": "CorruptedFileTest#testReplicaCorruption fails"
} | {
"body": "This commit also reenables CorruptedFileTest#testReplicaCorruption which had\na missing `GatewayAllocator.INDEX_RECOVERY_INITIAL_SHARDS: \"one\"` setting.\n\nCloses #11226\n",
"number": 11230,
"review_comments": [
{
"body": "I think it's a leftover import\n",
"created_at": "2015-05-19T15:36:30Z"
}
],
"title": "Ensure we mark store as corrupted if we fail to read the segments info"
} | {
"commits": [
{
"message": "Ensure we mark store as corrupted if we fail to read the segments info\n\nThis commit also reenables CorruptedFileTest#testReplicaCorruption which had\na missing `GatewayAllocator.INDEX_RECOVERY_INITIAL_SHARDS: \"one\"` setting.\n\nCloses #11226"
}
],
"files": [
{
"diff": "@@ -37,6 +37,7 @@\n import org.elasticsearch.index.shard.AbstractIndexShardComponent;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.store.Store;\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.threadpool.ThreadPool;\n \n@@ -53,7 +54,6 @@\n */\n public class IndexShardGateway extends AbstractIndexShardComponent implements Closeable {\n \n- private final ThreadPool threadPool;\n private final MappingUpdatedAction mappingUpdatedAction;\n private final IndexService indexService;\n private final IndexShard indexShard;\n@@ -63,10 +63,9 @@ public class IndexShardGateway extends AbstractIndexShardComponent implements Cl\n \n \n @Inject\n- public IndexShardGateway(ShardId shardId, @IndexSettings Settings indexSettings, ThreadPool threadPool, MappingUpdatedAction mappingUpdatedAction,\n+ public IndexShardGateway(ShardId shardId, @IndexSettings Settings indexSettings, MappingUpdatedAction mappingUpdatedAction,\n IndexService indexService, IndexShard indexShard) {\n super(shardId, indexSettings);\n- this.threadPool = threadPool;\n this.mappingUpdatedAction = mappingUpdatedAction;\n this.indexService = indexService;\n this.indexShard = indexShard;\n@@ -82,16 +81,17 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n long version = -1;\n final Map<String, Mapping> typesToUpdate;\n SegmentInfos si = null;\n- indexShard.store().incRef();\n+ final Store store = indexShard.store();\n+ store.incRef();\n try {\n try {\n- indexShard.store().failIfCorrupted();\n+ store.failIfCorrupted();\n try {\n- si = Lucene.readSegmentInfos(indexShard.store().directory());\n+ si = store.readLastCommittedSegmentsInfo();\n } catch (Throwable e) {\n String files = \"_unknown_\";\n try {\n- files = Arrays.toString(indexShard.store().directory().listAll());\n+ files = Arrays.toString(store.directory().listAll());\n } catch (Throwable e1) {\n files += \" (failure=\" + ExceptionsHelper.detailedMessage(e1) + \")\";\n }\n@@ -106,7 +106,7 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n // it exists on the directory, but shouldn't exist on the FS, its a leftover (possibly dangling)\n // its a \"new index create\" API, we have to do something, so better to clean it than use same data\n logger.trace(\"cleaning existing shard, shouldn't exists\");\n- IndexWriter writer = new IndexWriter(indexShard.store().directory(), new IndexWriterConfig(Lucene.STANDARD_ANALYZER).setOpenMode(IndexWriterConfig.OpenMode.CREATE));\n+ IndexWriter writer = new IndexWriter(store.directory(), new IndexWriterConfig(Lucene.STANDARD_ANALYZER).setOpenMode(IndexWriterConfig.OpenMode.CREATE));\n writer.close();\n recoveryState.getTranslog().totalOperations(0);\n }\n@@ -120,7 +120,7 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n try {\n final RecoveryState.Index index = recoveryState.getIndex();\n if (si != null) {\n- final Directory directory = indexShard.store().directory();\n+ final Directory directory = store.directory();\n for (String name : Lucene.files(si)) {\n long length = directory.fileLength(name);\n index.addFileDetail(name, length, true);\n@@ -143,7 +143,7 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n } catch (EngineException e) {\n throw new IndexShardGatewayRecoveryException(shardId, \"failed to recovery from gateway\", e);\n } finally {\n- indexShard.store().decRef();\n+ 
store.decRef();\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/gateway/IndexShardGateway.java",
"status": "modified"
},
{
"diff": "@@ -136,7 +136,13 @@ public Directory directory() {\n * @throws IOException if the index is corrupted or the segments file is not present\n */\n public SegmentInfos readLastCommittedSegmentsInfo() throws IOException {\n- return readSegmentsInfo(null, directory());\n+ failIfCorrupted();\n+ try {\n+ return readSegmentsInfo(null, directory());\n+ } catch (CorruptIndexException ex) {\n+ markStoreCorrupted(ex);\n+ throw ex;\n+ }\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/index/store/Store.java",
"status": "modified"
},
{
"diff": "@@ -50,6 +50,7 @@\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.discovery.Discovery;\n+import org.elasticsearch.gateway.GatewayAllocator;\n import org.elasticsearch.index.merge.policy.MergePolicyModule;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -513,12 +514,12 @@ public void testCorruptFileThenSnapshotAndRestore() throws ExecutionException, I\n * replica.\n */\n @Test\n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/11226\")\n public void testReplicaCorruption() throws Exception {\n int numDocs = scaledRandomIntBetween(100, 1000);\n internalCluster().ensureAtLeastNumDataNodes(2);\n \n assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(GatewayAllocator.INDEX_RECOVERY_INITIAL_SHARDS, \"one\")\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, cluster().numDataNodes() - 1)\n .put(MergePolicyModule.MERGE_POLICY_TYPE_KEY, NoMergePolicyProvider.class)\n .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose",
"filename": "src/test/java/org/elasticsearch/index/store/CorruptedFileTest.java",
"status": "modified"
},
{
"diff": "@@ -1191,4 +1191,48 @@ public void testStreamStoreFilesMetaData() throws Exception {\n }\n assertThat(outStoreFileMetaData.syncId(), equalTo(inStoreFileMetaData.syncId()));\n }\n+\n+ public void testMarkCorruptedOnTruncatedSegmentsFile() throws IOException {\n+ IndexWriterConfig iwc = newIndexWriterConfig();\n+ final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n+ DirectoryService directoryService = new LuceneManagedDirectoryService(random());\n+ Store store = new Store(shardId, ImmutableSettings.EMPTY, directoryService, new DummyShardLock(shardId));\n+ IndexWriter writer = new IndexWriter(store.directory(), iwc);\n+\n+ int numDocs = 1 + random().nextInt(10);\n+ List<Document> docs = new ArrayList<>();\n+ for (int i = 0; i < numDocs; i++) {\n+ Document doc = new Document();\n+ doc.add(new StringField(\"id\", \"\" + i, random().nextBoolean() ? Field.Store.YES : Field.Store.NO));\n+ doc.add(new TextField(\"body\", TestUtil.randomRealisticUnicodeString(random()), random().nextBoolean() ? Field.Store.YES : Field.Store.NO));\n+ doc.add(new SortedDocValuesField(\"dv\", new BytesRef(TestUtil.randomRealisticUnicodeString(random()))));\n+ docs.add(doc);\n+ }\n+ for (Document d : docs) {\n+ writer.addDocument(d);\n+ }\n+ writer.commit();\n+ writer.close();\n+ MockDirectoryWrapper leaf = DirectoryUtils.getLeaf(store.directory(), MockDirectoryWrapper.class);\n+ if (leaf != null) {\n+ leaf.setPreventDoubleWrite(false); // I do this on purpose\n+ }\n+ SegmentInfos segmentCommitInfos = store.readLastCommittedSegmentsInfo();\n+ try (IndexOutput out = store.directory().createOutput(segmentCommitInfos.getSegmentsFileName(), IOContext.DEFAULT)) {\n+ // empty file\n+ }\n+\n+ try {\n+ if (randomBoolean()) {\n+ store.getMetadata();\n+ } else {\n+ store.readLastCommittedSegmentsInfo();\n+ }\n+ fail(\"corrupted segments_N file\");\n+ } catch (CorruptIndexException ex) {\n+ // expected\n+ }\n+ assertTrue(store.isMarkedCorrupted());\n+ store.close();\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/store/StoreTest.java",
"status": "modified"
}
]
} |
{
"body": "The `detect_noop` flag doesn't work if a field contains a `null` value:\n\n```\nPUT t/t/1\n{\n \"foo\": \"baz\",\n \"bar\": \"baz\"\n}\n```\n\nThis update request always returns the same version:\n\n```\nPOST t/t/1/_update\n{\n \"doc\": {\n \"foo\": \"baz\",\n \"bar\": \"baz\"\n },\n \"detect_noop\": true\n}\n```\n\nWhile this one always returns a new version:\n\n```\nPOST t/t/1/_update\n{\n \"doc\": {\n \"foo\": \"baz\",\n \"bar\": null\n },\n \"detect_noop\": true\n}\n```\n",
"comments": [
{
"body": "Eww. I can have a look in a bit.\n",
"created_at": "2015-05-18T15:57:04Z"
},
{
"body": "low hanging fruit indeed.\n",
"created_at": "2015-05-18T16:53:53Z"
},
{
"body": "@nik9000 @clintongormley Thx for the super quick turnaround on this :+1: \n",
"created_at": "2015-05-18T17:11:11Z"
}
],
"number": 11208,
"title": "detect_noop doesn't work with null values"
} | {
"body": "If the source contrains a null value for a field then detect_noop should\nconsider setting it to null again to be a noop.\n\nCloses #11208\n",
"number": 11210,
"review_comments": [
{
"body": "maybe this could be simplified to `modified = !java.util.Objects.equal(old, changesEntry.getValue())`?\n",
"created_at": "2015-05-18T17:26:21Z"
},
{
"body": "Yeah. I forget about that one.\n",
"created_at": "2015-05-18T17:43:29Z"
}
],
"title": "`detect_noop` now understands `null` as a valid value"
} | {
"commits": [
{
"message": "detect_noop now understands null as a valid value\n\nIf the source contrains a null value for a field then detect_noop should\nconsider setting it to null again to be a noop.\n\nCloses #11208"
}
],
"files": [
{
"diff": "@@ -20,7 +20,9 @@\n package org.elasticsearch.common.xcontent;\n \n import com.google.common.base.Charsets;\n+import com.google.common.base.Objects;\n import com.google.common.collect.Maps;\n+\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -260,11 +262,11 @@ public static boolean update(Map<String, Object> source, Map<String, Object> cha\n if (modified) {\n continue;\n }\n- if (!checkUpdatesAreUnequal || old == null) {\n+ if (!checkUpdatesAreUnequal) {\n modified = true;\n continue;\n }\n- modified = !old.equals(changesEntry.getValue());\n+ modified = !Objects.equal(old, changesEntry.getValue());\n }\n return modified;\n }",
"filename": "src/main/java/org/elasticsearch/common/xcontent/XContentHelper.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.update;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.update.UpdateResponse;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -42,22 +41,34 @@ public void singleField() throws Exception {\n updateAndCheckSource(2, fields(\"bar\", \"bir\"));\n updateAndCheckSource(2, fields(\"bar\", \"bir\"));\n updateAndCheckSource(3, fields(\"bar\", \"foo\"));\n+ updateAndCheckSource(4, fields(\"bar\", null));\n+ updateAndCheckSource(4, fields(\"bar\", null));\n+ updateAndCheckSource(5, fields(\"bar\", \"foo\"));\n \n- assertEquals(2, totalNoopUpdates());\n+ assertEquals(3, totalNoopUpdates());\n }\n \n @Test\n public void twoFields() throws Exception {\n // Use random keys so we get random iteration order.\n String key1 = 1 + randomAsciiOfLength(3);\n String key2 = 2 + randomAsciiOfLength(3);\n+ String key3 = 3 + randomAsciiOfLength(3);\n updateAndCheckSource(1, fields(key1, \"foo\", key2, \"baz\"));\n updateAndCheckSource(1, fields(key1, \"foo\", key2, \"baz\"));\n updateAndCheckSource(2, fields(key1, \"foo\", key2, \"bir\"));\n updateAndCheckSource(2, fields(key1, \"foo\", key2, \"bir\"));\n updateAndCheckSource(3, fields(key1, \"foo\", key2, \"foo\"));\n+ updateAndCheckSource(4, fields(key1, \"foo\", key2, null));\n+ updateAndCheckSource(4, fields(key1, \"foo\", key2, null));\n+ updateAndCheckSource(5, fields(key1, \"foo\", key2, \"foo\"));\n+ updateAndCheckSource(6, fields(key1, null, key2, \"foo\"));\n+ updateAndCheckSource(6, fields(key1, null, key2, \"foo\"));\n+ updateAndCheckSource(7, fields(key1, null, key2, null));\n+ updateAndCheckSource(7, fields(key1, null, key2, null));\n+ updateAndCheckSource(8, fields(key1, null, key2, null, key3, null));\n \n- assertEquals(2, totalNoopUpdates());\n+ assertEquals(5, totalNoopUpdates());\n }\n \n @Test\n@@ -83,6 +94,7 @@ public void map() throws Exception {\n // Use random keys so we get variable iteration order.\n String key1 = 1 + randomAsciiOfLength(3);\n String key2 = 2 + randomAsciiOfLength(3);\n+ String key3 = 3 + randomAsciiOfLength(3);\n updateAndCheckSource(1, XContentFactory.jsonBuilder().startObject()\n .startObject(\"test\")\n .field(key1, \"foo\")\n@@ -108,8 +120,24 @@ public void map() throws Exception {\n .field(key1, \"foo\")\n .field(key2, \"foo\")\n .endObject().endObject());\n+ updateAndCheckSource(4, XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"test\")\n+ .field(key1, \"foo\")\n+ .field(key2, (Object) null)\n+ .endObject().endObject());\n+ updateAndCheckSource(4, XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"test\")\n+ .field(key1, \"foo\")\n+ .field(key2, (Object) null)\n+ .endObject().endObject());\n+ updateAndCheckSource(5, XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"test\")\n+ .field(key1, \"foo\")\n+ .field(key2, (Object) null)\n+ .field(key3, (Object) null)\n+ .endObject().endObject());\n \n- assertEquals(2, totalNoopUpdates());\n+ assertEquals(3, totalNoopUpdates());\n }\n \n @Test\n@@ -199,7 +227,7 @@ public void totallyEmpty() throws Exception {\n \n private XContentBuilder fields(Object... 
fields) throws IOException {\n assertEquals(\"Fields must field1, value1, field2, value2, etc\", 0, fields.length % 2);\n- \n+\n XContentBuilder builder = XContentFactory.jsonBuilder().startObject();\n for (int i = 0; i < fields.length; i += 2) {\n builder.field((String) fields[i], fields[i + 1]);\n@@ -229,6 +257,7 @@ private long totalNoopUpdates() {\n return client().admin().indices().prepareStats(\"test\").setIndexing(true).get().getIndex(\"test\").getTotal().getIndexing().getTotal()\n .getNoopUpdateCount();\n }\n+\n @Before\n public void setup() {\n createIndex(\"test\");",
"filename": "src/test/java/org/elasticsearch/update/UpdateNoopTests.java",
"status": "modified"
}
]
} |
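The one-line fix in the record above swaps a raw `equals` call for a null-safe comparison. The sketch below shows the behavioural difference using the JDK's `java.util.Objects.equals`; the committed change uses Guava's `Objects.equal`, which behaves the same way for this purpose.

```java
// The behavioural difference behind the fix, shown with the JDK's java.util.Objects.
import java.util.Objects;

public class DetectNoopNulls {
    static boolean modified(Object oldValue, Object newValue) {
        // null-safe: two nulls compare equal, and no NullPointerException is possible
        return !Objects.equals(oldValue, newValue);
    }

    public static void main(String[] args) {
        System.out.println(modified("baz", "baz")); // false -> noop
        System.out.println(modified(null, null));   // false -> noop (the previously broken case)
        System.out.println(modified(null, "baz"));  // true  -> a real update
    }
}
```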
{
"body": "In 2.0 as of #10461 we now allocate each shard on a node to whichever path.data has the most free space at the moment the shard is assigned to the node.\n\nBut in the case when a new node just started up, and suddenly gets a bunch of new shards assigned (new index), we will put all those shards onto a single path (the one with the most free space at that moment).\n\nI saw this happen on a node that had 2 identically sized SSDs on the path.data, but on one of them I had installed ES so it had a wee bit less disk space, and then all 5 shards of my new index were assigned to the other SSD.\n",
"comments": [
{
"body": "@mikemccand I made this a blocker\n",
"created_at": "2015-05-12T15:12:55Z"
},
{
"body": "> @mikemccand I made this a blocker\n\nOK, I agree. I can try to fix this ...\n\nI think there are at least 2 cases here: 1) new index (shard) allocated to this node, and 2) shard relocated/recovered from another node.\n\nFor case 2), I think IndexService.createShard needs to be told how large the incoming shard will be (it's not today?), since we must already know this somewhere, but I don't know how to get this information in IndicesClusterStateService.applyInitializingShard when it calls createShard.\n\nFor case 1) we need to be a bit smarter when our node is allocated more than 1 shard for the same index, and use some heuristic e.g. \"the size of the overall index will be 50% of total free space across all path.data paths\", and budget each shard as 1/Nth of that.\n\nMaybe there are other cases, e.g. restoring from a snapshot?\n\nFor both of these cases we need some way to record \"reserved space\" against each path.data, much like how a credit card company pre-charges you to \"authorize\" a purchase but not charge you until you actually need to pay. Maybe this \"tracking of disk space that will be consumed soon\" needs to be in NodeEnvironment?\n",
"created_at": "2015-05-12T16:27:55Z"
},
{
"body": "> For case 2), I think IndexService.createShard needs to be told how large the incoming shard will be (it's not today?), since we must already know this somewhere, but I don't know how to get this information in IndicesClusterStateService.applyInitializingShard when it calls createShard.\n\nI think we could average the shard sizes for all other shards in the cluster. This information can be retrieved from the `ClusterInfoService`, which will periodically fetch it (on the master node).\n",
"created_at": "2015-05-12T16:57:45Z"
},
{
"body": "Thanks for the pointer @dakrone, I'll try to use that for the case when there are already some shards in the cluster.\n",
"created_at": "2015-05-12T20:21:35Z"
},
{
"body": "At the end of the day, the fix here will be heuristicky, since we essentially must \"guess\" how big this shard will grow to in the future. So, there will be adversarial cases which fill up one data path while others remain very empty...\n\nMaybe we should (separately) make the separate path.datas on a given node visible to the cluster state? I think we don't do this today? E.g. DiskThresholdDecider looks at the net free bytes on a node, so it won't notice if one path is nearly full and another is very empty? It will see that node has still having plenty of space?\n\nIf we did that, it would handle the adversarial cases where we \"guessed wrong\", and accidentally put N tiny shards on one path and N huge shards on another ... we'd be able to correct it later by relocating shards from the nearly full path.data ...\n",
"created_at": "2015-05-13T12:41:25Z"
},
{
"body": "I mean we have a general problem here that we don't know on which node we should put the shard if the index is freshly created. I wonder if we should do the same thing we do for shard allocation as well for disk allocaiton and make sure shards of an index are balanced across disks if there is enough space on all the disks and as a second heuristic use the number of shards in total on that node to balance. \n",
"created_at": "2015-05-13T12:58:42Z"
}
],
"number": 11122,
"title": "Shard allocation should work harder to balance new shards across multiple path.data paths"
} | {
"body": "This change adds a simplistic heuristic to try to balance new shard allocations across multiple data paths on one node.\n\nIt very roughly predicts (guesses!) how much disk space a shard will eventually use, as the max of the current avg. size of shards across the cluster, and 5% of current free space across all path.data on the current node, and then reserves space by counting how many shards are now assigned to each path.data.\n\nPicking the best path.data for a new shard is using the same \"most free space\" logic, except it now deducts the reserved space.\n\nI tested this on an EC2 instance with 2 SSDs with nearly the same amount of free space and confirmed we now put 2 shards on one SSD and 3 shards on the other, vs all 5 shards on a single path with master today, but I'm not sure how to make a standalone unit test ... maybe I can use a MockFS to fake up N path.datas with different free space?\n\nThis is just a heuristic, and it easily has adversarial cases that will fill up one path.data while other path.data on the same node still have plenty of space, and unfortunately ES can't recover from that today. E.g., DiskThresholdDecider won't even detect any problem (since it sums up total free space across all path.data) ... I think we should separately think about fixing that, but at least this change improves the current situation.\n\nCloses #11122\n",
"number": 11185,
"review_comments": [
{
"body": "This seems error prone if the structure could change in the future? I dont know a better way though..\n",
"created_at": "2015-05-15T17:59:45Z"
},
{
"body": "any change we can move this heuristic into a utils class or into `ShardPath` as a static method?\n",
"created_at": "2015-05-15T20:06:36Z"
},
{
"body": "I'll move it to a new NodePath method, that way the directory structure details remain hidden in NodeEnv/NodePath abstraction...\n",
"created_at": "2015-05-17T09:41:13Z"
},
{
"body": "It's a little tricky because I'm iterating over IndexService's shards map ... but maybe I could break that out into a loop that first produces a map of shard to data path, and then pass that map to a new static method in ShardPath. I'll try.\n",
"created_at": "2015-05-17T09:44:35Z"
},
{
"body": "I think we should move this above the calculation. If a custom data path is used, we don't need to do all the space calculations so they can be skipped.\n",
"created_at": "2015-05-20T21:36:25Z"
},
{
"body": "Ahh good point, I'll fix.\n",
"created_at": "2015-05-20T21:43:14Z"
},
{
"body": "Hmm, I'm confused: when a custom data path is used, we still pass minUsed as ShardPath.shardStatePath ... is this not used when there is a custom data path? \n",
"created_at": "2015-05-20T21:59:29Z"
},
{
"body": "if we have custom data path it can only be one path so there is nothing to select?\n",
"created_at": "2015-05-21T15:26:47Z"
},
{
"body": "this only works if the custom data path is under the node path right?\n",
"created_at": "2015-05-21T15:32:42Z"
},
{
"body": "> if we have custom data path it can only be one path so there is nothing to select?\n\nThis is what I expected too, but go look at ShardPath.java right now in master: it still passes the loadedPath to the new ShardPath (as shardStatePath) even in the custom data path case. This confuses me :)\n\nI think it means we store shard state (ShardStateMetaData) on the local node's path.data even if custom data path is set?\n",
"created_at": "2015-05-22T08:53:55Z"
},
{
"body": "> this only works if the custom data path is under the node path right?\n\nSee above: because the shardStatePath is now always on the local path.data, this should work in the custom data path case too?\n",
"created_at": "2015-05-22T08:57:02Z"
},
{
"body": "I think this causes a memory leak since IndexService is created per index and if it's closed you need to remove this. you should add this listener in `IndicesService`\n",
"created_at": "2015-06-12T12:25:53Z"
},
{
"body": "I also think `clusterInfoService` is only started on the master\n",
"created_at": "2015-06-12T12:26:27Z"
},
{
"body": "I think for now we should just iterator over all teh shards on this node to get an idea....?\n",
"created_at": "2015-06-12T12:27:56Z"
},
{
"body": "can we do this `for (IndexShard shard : this) {...` not a big deal though\n",
"created_at": "2015-06-16T18:17:42Z"
},
{
"body": "Oh yeah I'll fix that and push, thanks @s1monw \n",
"created_at": "2015-06-16T20:27:29Z"
}
],
"title": "Balance new shard allocations more evenly on multiple path.data"
} | {
"commits": [
{
"message": "add simplistic guess at future reserved space per shard"
},
{
"message": "make static method to get back to NodePath.path from a shard path; push heuristic logic down into ShardPath"
},
{
"message": "skip shard size / disk space checking when shard has a custom data path, and always store state on NodePatshs[0] in that case"
},
{
"message": "don't use ClusterInfo; just add up sizes of shards already on this node based on 'refreshed every 10 sec by default' store stats, to get a 'guess' at this new shard's expected size"
},
{
"message": "use foreach loop"
}
],
"files": [
{
"diff": "@@ -771,4 +771,17 @@ private Path resolveCustomLocation(@IndexSettings Settings indexSettings, final\n public Path resolveCustomLocation(@IndexSettings Settings indexSettings, final ShardId shardId) {\n return resolveCustomLocation(indexSettings, shardId.index().name()).resolve(Integer.toString(shardId.id()));\n }\n+\n+ /**\n+ * Returns the {@code NodePath.path} for this shard.\n+ */\n+ public static Path shardStatePathToDataPath(Path shardPath) {\n+ int count = shardPath.getNameCount();\n+\n+ // Sanity check:\n+ assert Integer.parseInt(shardPath.getName(count-1).toString()) >= 0;\n+ assert \"indices\".equals(shardPath.getName(count-3).toString());\n+ \n+ return shardPath.getParent().getParent().getParent();\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -75,8 +75,10 @@\n \n import java.io.Closeable;\n import java.io.IOException;\n+import java.nio.file.Path;\n import java.util.HashMap;\n import java.util.Iterator;\n+import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n@@ -129,6 +131,7 @@ public IndexService(Injector injector, Index index, @IndexSettings Settings inde\n SimilarityService similarityService, IndexAliasesService aliasesService, IndexCache indexCache,\n IndexSettingsService settingsService,\n IndexFieldDataService indexFieldData, BitsetFilterCache bitSetFilterCache, IndicesService indicesServices) {\n+\n super(index, indexSettings);\n this.injector = injector;\n this.indexSettings = indexSettings;\n@@ -270,6 +273,21 @@ public String indexUUID() {\n return indexSettings.get(IndexMetaData.SETTING_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n }\n \n+ // NOTE: O(numShards) cost, but numShards should be smallish?\n+ private long getAvgShardSizeInBytes() throws IOException {\n+ long sum = 0;\n+ int count = 0;\n+ for(IndexShard indexShard : this) {\n+ sum += indexShard.store().stats().sizeInBytes();\n+ count++;\n+ }\n+ if (count == 0) {\n+ return -1L;\n+ } else {\n+ return sum / count;\n+ }\n+ }\n+\n public synchronized IndexShard createShard(int sShardId, boolean primary) {\n /*\n * TODO: we execute this in parallel but it's a synced method. Yet, we might\n@@ -287,7 +305,7 @@ public synchronized IndexShard createShard(int sShardId, boolean primary) {\n \n ShardPath path = ShardPath.loadShardPath(logger, nodeEnv, shardId, indexSettings);\n if (path == null) {\n- path = ShardPath.selectNewPathForShard(nodeEnv, shardId, indexSettings);\n+ path = ShardPath.selectNewPathForShard(nodeEnv, shardId, indexSettings, getAvgShardSizeInBytes(), this);\n logger.debug(\"{} creating using a new path [{}]\", shardId, path);\n } else {\n logger.debug(\"{} creating using an existing path [{}]\", shardId, path);",
"filename": "src/main/java/org/elasticsearch/index/IndexService.java",
"status": "modified"
},
{
"diff": "@@ -30,7 +30,9 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.ArrayList;\n+import java.util.HashMap;\n import java.util.List;\n+import java.util.Map;\n \n public final class ShardPath {\n public static final String INDEX_FOLDER_NAME = \"index\";\n@@ -110,35 +112,76 @@ public static ShardPath loadShardPath(ESLogger logger, NodeEnvironment env, Shar\n } else {\n dataPath = statePath;\n }\n- logger.debug(\"{} loaded data path [{}], state path [{}]\", shardId, dataPath, statePath);\n+ logger.debug(\"{} loaded data path [{}], state path [{}]\", shardId, dataPath, statePath);\n return new ShardPath(dataPath, statePath, indexUUID, shardId);\n }\n }\n \n- // TODO - do we need something more extensible? Yet, this does the job for now...\n- public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shardId, @IndexSettings Settings indexSettings) throws IOException {\n- final String indexUUID = indexSettings.get(IndexMetaData.SETTING_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n- final NodeEnvironment.NodePath[] paths = env.nodePaths();\n- final List<Tuple<Path, Long>> minUsedPaths = new ArrayList<>();\n- for (NodeEnvironment.NodePath nodePath : paths) {\n- final Path shardPath = nodePath.resolve(shardId);\n- FileStore fileStore = nodePath.fileStore;\n- long usableSpace = fileStore.getUsableSpace();\n- if (minUsedPaths.isEmpty() || minUsedPaths.get(0).v2() == usableSpace) {\n- minUsedPaths.add(new Tuple<>(shardPath, usableSpace));\n- } else if (minUsedPaths.get(0).v2() < usableSpace) {\n- minUsedPaths.clear();\n- minUsedPaths.add(new Tuple<>(shardPath, usableSpace));\n- }\n+ /** Maps each path.data path to a \"guess\" of how many bytes the shards allocated to that path might additionally use over their\n+ * lifetime; we do this so a bunch of newly allocated shards won't just all go the path with the most free space at this moment. */\n+ private static Map<Path,Long> getEstimatedReservedBytes(NodeEnvironment env, long avgShardSizeInBytes, Iterable<IndexShard> shards) throws IOException {\n+ long totFreeSpace = 0;\n+ for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n+ totFreeSpace += nodePath.fileStore.getUsableSpace();\n }\n- Path minUsed = minUsedPaths.get(shardId.id() % minUsedPaths.size()).v1();\n+\n+ // Very rough heurisic of how much disk space we expect the shard will use over its lifetime, the max of current average\n+ // shard size across the cluster and 5% of the total available free space on this node:\n+ long estShardSizeInBytes = Math.max(avgShardSizeInBytes, (long) (totFreeSpace/20.0));\n+\n+ // Collate predicted (guessed!) 
disk usage on each path.data:\n+ Map<Path,Long> reservedBytes = new HashMap<>();\n+ for (IndexShard shard : shards) {\n+ Path dataPath = NodeEnvironment.shardStatePathToDataPath(shard.shardPath().getShardStatePath());\n+\n+ // Remove indices/<index>/<shardID> subdirs from the statePath to get back to the path.data/<lockID>:\n+ Long curBytes = reservedBytes.get(dataPath);\n+ if (curBytes == null) {\n+ curBytes = 0L;\n+ }\n+ reservedBytes.put(dataPath, curBytes + estShardSizeInBytes);\n+ } \n+\n+ return reservedBytes;\n+ }\n+\n+ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shardId, @IndexSettings Settings indexSettings,\n+ long avgShardSizeInBytes, Iterable<IndexShard> shards) throws IOException {\n+\n final Path dataPath;\n- final Path statePath = minUsed;\n+ final Path statePath;\n+ \n+ final String indexUUID = indexSettings.get(IndexMetaData.SETTING_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n+\n if (NodeEnvironment.hasCustomDataPath(indexSettings)) {\n dataPath = env.resolveCustomLocation(indexSettings, shardId);\n+ statePath = env.nodePaths()[0].resolve(shardId);\n } else {\n+\n+ Map<Path,Long> estReservedBytes = getEstimatedReservedBytes(env, avgShardSizeInBytes, shards);\n+\n+ // TODO - do we need something more extensible? Yet, this does the job for now...\n+ final NodeEnvironment.NodePath[] paths = env.nodePaths();\n+ NodeEnvironment.NodePath bestPath = null;\n+ long maxUsableBytes = Long.MIN_VALUE;\n+ for (NodeEnvironment.NodePath nodePath : paths) {\n+ FileStore fileStore = nodePath.fileStore;\n+ long usableBytes = fileStore.getUsableSpace();\n+ Long reservedBytes = estReservedBytes.get(nodePath.path);\n+ if (reservedBytes != null) {\n+ // Deduct estimated reserved bytes from usable space:\n+ usableBytes -= reservedBytes;\n+ }\n+ if (usableBytes > maxUsableBytes) {\n+ maxUsableBytes = usableBytes;\n+ bestPath = nodePath;\n+ }\n+ }\n+\n+ statePath = bestPath.resolve(shardId);\n dataPath = statePath;\n }\n+\n return new ShardPath(dataPath, statePath, indexUUID, shardId);\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/shard/ShardPath.java",
"status": "modified"
}
]
} |
{
"body": "The `script_type` parameter in sigterms is non-standard. \n\nhttp://www.elastic.co/guide/en/elasticsearch/reference/master/search-aggregations-bucket-significantterms-aggregation.html#_scripted\n\nWe should use the same script construct that we use elsewhere.\n",
"comments": [
{
"body": "@markharwood See https://github.com/elastic/elasticsearch/pull/10649 and https://github.com/elastic/elasticsearch/pull/7977\n",
"created_at": "2015-04-26T14:18:38Z"
},
{
"body": "@markharwood this should have been fixed in https://github.com/elastic/elasticsearch/pull/11164. You might need to test to confirm though\n",
"created_at": "2015-06-15T08:22:41Z"
},
{
"body": "Just tried it and it works. Thanks @colings86 \n",
"created_at": "2015-06-15T13:04:33Z"
}
],
"number": 10810,
"title": "Make script parameters in significant terms same as elsewhere"
} | {
"body": "This change unifies the way scripts and templates are specified for all instances in the codebase. It builds on the Script class added previously and adds request building and parsing support as well as the ability to transfer script objects between nodes. It also adds a Template class which aims to provide the same functionality for template APIs.\n\nNote: This PR maintains backwards compatibility with versions 1.5 and before. There will be a separate PR to remove the backwards compatibility in 2.0 once this is merged (this will also include updating the \"Breaking changes in 2.0\" doc.\n\nCloses #11091\nCloses #10810\nCloses #10113\n",
"number": 11164,
"review_comments": [
{
"body": "I'd prefer this to be `@Nullable` as well... relates to xcontent serialization\n",
"created_at": "2015-05-14T12:02:59Z"
},
{
"body": "I'd say yes... if you want to be able to parse script as a string, you want to be able to serialize it as as string. I believe serialization should be symmetric - you write what you read. For this reason, I believe the script type should be nullable. if you read a script like a string, the read state should be preserved for the writing. \n",
"created_at": "2015-05-14T12:05:15Z"
},
{
"body": "> Run TransformOnIndexMapperIntegrationTest.getTransformed() with seed -Dtests.seed=CCF6041A004DDD9D to see why\n\nmaybe you can explain why here? without knowing much.. it smells like a bug in transform\n",
"created_at": "2015-05-14T12:06:08Z"
},
{
"body": "It's because the serialisation isn't symmetric. If we write an inline transform script with no language and params as an object (instead of a string) it gets serialized as a string in this method and errors. Making type `@Nullable` will solve this as we will know the difference between an inline script entered as an object and one entered as a simple string\n",
"created_at": "2015-05-14T12:09:02Z"
},
{
"body": "> Making type @Nullable will solve this as we will know the difference between an inline script entered as an object and one entered as a simple string\n\nthat's exactly my point... I believe the `Script` construct should reflect exactly what the user inputs.. hence the `@Nullable` type approach \n",
"created_at": "2015-05-14T12:51:11Z"
},
{
"body": "Yep, I'll make the change. Thanks for pointing it out\n",
"created_at": "2015-05-14T13:06:05Z"
},
{
"body": "@uboness I have made the type `@Nullable` internally to the class so that the `Script(String script)` constructor can set the type as `null` but the `Script(String script, ScriptType type, String lang, Map<String, Object> params)` constructor still throws an exception if no type is passed in.\n",
"created_at": "2015-05-14T14:00:34Z"
},
{
"body": "this change is not bw compatible for the java api. are we sure we wanna make it against 1.x?\n",
"created_at": "2015-05-15T07:45:12Z"
},
{
"body": "same as above, this breaks bw comp for the java api\n",
"created_at": "2015-05-15T07:46:15Z"
},
{
"body": "s/Unkknown/Unknown\n",
"created_at": "2015-05-15T07:47:28Z"
},
{
"body": "remove this?\n",
"created_at": "2015-05-15T07:48:14Z"
},
{
"body": "we can use Writeable here instead of Streamable so fields can become final and default constructor can go away\n",
"created_at": "2015-05-15T07:49:10Z"
},
{
"body": "can't subclasses just override writeTo and call super before doing anything else?\n",
"created_at": "2015-05-15T07:49:57Z"
},
{
"body": "if we do make it nullable @colings86 we have to make sure some value gets provided as part of the `canExecuteScript` call, where the type is required to decide whether the script can be executed or not.\n",
"created_at": "2015-05-15T07:55:41Z"
},
{
"body": "Is SearchRequest part of the Java API? I kept SearchRequestBuilder bwc as I thought that was the bit that we considered the Java API.\n",
"created_at": "2015-05-15T08:26:43Z"
},
{
"body": "As above I kept UpdateRequestBuilder bwc as I thought that was the bit exposed for the Java API\n",
"created_at": "2015-05-15T08:27:13Z"
},
{
"body": "Haven't come across that yet (actually can't see a Writeable class/interface in my branch?), is it new?\n",
"created_at": "2015-05-15T08:29:06Z"
},
{
"body": "Yep can do but I was following what we tend to do elsewhere. Happy to change it\n",
"created_at": "2015-05-15T08:29:38Z"
},
{
"body": "no both Request and RequestBuilders are java api. one can choose which one to use. you can always do client.search(SearchRequest) without going through the builder.\n",
"created_at": "2015-05-15T08:30:41Z"
},
{
"body": "same as above ;)\n",
"created_at": "2015-05-15T08:31:15Z"
},
{
"body": "yes it is :) but wait that is in master only so if this needs to be backported to 1.x my comment doesn't apply. \n",
"created_at": "2015-05-15T08:32:17Z"
},
{
"body": "never saw this before :) if you do use this approach then the writeTo should be made final ?\n",
"created_at": "2015-05-15T08:33:12Z"
},
{
"body": "ok, in which case I'll add the methods back in here and make it bwc\n",
"created_at": "2015-05-15T09:05:02Z"
},
{
"body": "After this is merged I will need to go back and remove the deprecated stuff from master so I'll change this to use Writable at that point\n",
"created_at": "2015-05-15T09:06:11Z"
},
{
"body": "sounds great thanks\n",
"created_at": "2015-05-15T09:49:08Z"
},
{
"body": "Not sure what to call this as there is already a method called script() which returns String and that needs to be maintained for bwc\n",
"created_at": "2015-05-15T09:50:13Z"
},
{
"body": "This line looks to be redundant\n",
"created_at": "2015-05-22T15:04:47Z"
},
{
"body": "indentation\n",
"created_at": "2015-05-22T15:08:49Z"
},
{
"body": "This doc is left without any examples of non-inline scripts. I presume \"id\" and \"file\" options are possible but the doc doesn't link to examples of these?\nI presume any `params` are global so passed to all init/map/combine/reduce - can their be type-specific params nested under each *_script variant?\n",
"created_at": "2015-05-22T15:30:35Z"
},
{
"body": "Typo? _id instead of id?\n",
"created_at": "2015-05-22T16:46:20Z"
}
],
"title": "Unify script and template requests across codebase"
} | {
"commits": [
{
"message": "Scripting: Unify script and template requests across codebase\n\nThis change unifies the way scripts and templates are specified for all instances in the codebase. It builds on the Script class added previously and adds request building and parsing support as well as the ability to transfer script objects between nodes. It also adds a Template class which aims to provide the same functionality for template APIs\n\nCloses #11091"
}
],
"files": [
{
"diff": "@@ -30,10 +30,10 @@ MetricsAggregationBuilder aggregation =\n AggregationBuilders\n .scriptedMetric(\"agg\")\n .initScript(\"_agg['heights'] = []\")\n- .mapScript(\"if (doc['gender'].value == \\\"male\\\") \" +\n+ .mapScript(new Script(\"if (doc['gender'].value == \\\"male\\\") \" +\n \"{ _agg.heights.add(doc['height'].value) } \" +\n \"else \" +\n- \"{ _agg.heights.add(-1 * doc['height'].value) }\");\n+ \"{ _agg.heights.add(-1 * doc['height'].value) }\"));\n --------------------------------------------------\n \n You can also specify a `combine` script which will be executed on each shard:\n@@ -43,12 +43,12 @@ You can also specify a `combine` script which will be executed on each shard:\n MetricsAggregationBuilder aggregation =\n AggregationBuilders\n .scriptedMetric(\"agg\")\n- .initScript(\"_agg['heights'] = []\")\n- .mapScript(\"if (doc['gender'].value == \\\"male\\\") \" +\n+ .initScript(new Script(\"_agg['heights'] = []\"))\n+ .mapScript(new Script(\"if (doc['gender'].value == \\\"male\\\") \" +\n \"{ _agg.heights.add(doc['height'].value) } \" +\n \"else \" +\n- \"{ _agg.heights.add(-1 * doc['height'].value) }\")\n- .combineScript(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\");\n+ \"{ _agg.heights.add(-1 * doc['height'].value) }\"))\n+ .combineScript(new Script(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\"));\n --------------------------------------------------\n \n You can also specify a `reduce` script which will be executed on the node which gets the request:\n@@ -58,13 +58,13 @@ You can also specify a `reduce` script which will be executed on the node which\n MetricsAggregationBuilder aggregation =\n AggregationBuilders\n .scriptedMetric(\"agg\")\n- .initScript(\"_agg['heights'] = []\")\n- .mapScript(\"if (doc['gender'].value == \\\"male\\\") \" +\n+ .initScript(new Script(\"_agg['heights'] = []\"))\n+ .mapScript(new Script(\"if (doc['gender'].value == \\\"male\\\") \" +\n \"{ _agg.heights.add(doc['height'].value) } \" +\n \"else \" +\n- \"{ _agg.heights.add(-1 * doc['height'].value) }\")\n- .combineScript(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\")\n- .reduceScript(\"heights_sum = 0; for (a in _aggs) { heights_sum += a }; return heights_sum\");\n+ \"{ _agg.heights.add(-1 * doc['height'].value) }\"))\n+ .combineScript(new Script(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\"))\n+ .reduceScript(new Script(\"heights_sum = 0; for (a in _aggs) { heights_sum += a }; return heights_sum\"));\n --------------------------------------------------\n \n ",
"filename": "docs/java-api/aggregations/metrics/scripted-metric-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,7 @@ Or you can use `prepareUpdate()` method:\n [source,java]\n --------------------------------------------------\n client.prepareUpdate(\"ttl\", \"doc\", \"1\")\n- .setScript(\"ctx._source.gender = \\\"male\\\"\" <1> , ScriptService.ScriptType.INLINE)\n+ .setScript(new Script(\"ctx._source.gender = \\\"male\\\"\" <1> , ScriptService.ScriptType.INLINE, null, null))\n .get();\n \n client.prepareUpdate(\"ttl\", \"doc\", \"1\")\n@@ -46,7 +46,7 @@ The update API allows to update a document based on a script provided:\n [source,java]\n --------------------------------------------------\n UpdateRequest updateRequest = new UpdateRequest(\"ttl\", \"doc\", \"1\")\n- .script(\"ctx._source.gender = \\\"male\\\"\");\n+ .script(new Script(\"ctx._source.gender = \\\"male\\\"\"));\n client.update(updateRequest).get();\n --------------------------------------------------\n ",
"filename": "docs/java-api/update.asciidoc",
"status": "modified"
},
{
"diff": "@@ -73,8 +73,6 @@ Some aggregations work on values extracted from the aggregated documents. Typica\n a specific document field which is set using the `field` key for the aggregations. It is also possible to define a\n <<modules-scripting,`script`>> which will generate the values (per document).\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n-\n When both `field` and `script` settings are configured for the aggregation, the script will be treated as a\n `value script`. While normal scripts are evaluated on a document level (i.e. the script has access to all the data\n associated with the document), value scripts are evaluated on the *value* level. In this mode, the values are extracted",
"filename": "docs/reference/aggregations.asciidoc",
"status": "modified"
},
{
"diff": "@@ -128,8 +128,6 @@ It is also possible to customize the key for each range:\n \n ==== Script\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n-\n [source,js]\n --------------------------------------------------\n {\n@@ -148,6 +146,33 @@ TIP: The `script` parameter expects an inline script. Use `script_id` for indexe\n }\n --------------------------------------------------\n \n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"price_ranges\" : {\n+ \"range\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"price\"\n+ }\n+ },\n+ \"ranges\" : [\n+ { \"to\" : 50 },\n+ { \"from\" : 50, \"to\" : 100 },\n+ { \"from\" : 100 }\n+ ]\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n+\n ==== Value Script\n \n Lets say the product prices are in USD but we would like to get the price ranges in EURO. We can use value script to convert the prices prior the aggregation (assuming conversion rate of 0.8)",
"filename": "docs/reference/aggregations/bucket/range-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -358,13 +358,6 @@ Customized scores can be implemented via a script:\n --------------------------------------------------\n \n Scripts can be inline (as in above example), indexed or stored on disk. For details on the options, see <<modules-scripting, script documentation>>. \n-Parameters need to be set as follows:\n-\n-[horizontal]\n-`script`:: Inline script, name of script file or name of indexed script. Mandatory.\n-`script_type`:: One of \"inline\" (default), \"indexed\" or \"file\".\n-`lang`:: Script language (default \"groovy\")\n-`params`:: Script parameters (default empty).\n \n Available parameters in the script are\n ",
"filename": "docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -441,7 +441,27 @@ Generating the terms using a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"genders\" : {\n+ \"terms\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"gender\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n \n ==== Value Script",
"filename": "docs/reference/aggregations/bucket/terms-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -47,7 +47,29 @@ Computing the average grade based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"avg_grade\" : { \n+ \"avg\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -63,9 +85,11 @@ It turned out that the exam was way above the level of the students and a grade\n \"avg_corrected_grade\" : {\n \"avg\" : {\n \"field\" : \"grade\",\n- \"script\" : \"_value * correction\",\n- \"params\" : {\n- \"correction\" : 1.2\n+ \"script\" : {\n+ \"inline\": \"_value * correction\",\n+ \"params\" : {\n+ \"correction\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/avg-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -153,7 +153,28 @@ however since hashes need to be computed on the fly.\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"author_count\" : {\n+ \"cardinality\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"first_name_field\": \"author.first_name\",\n+ \"last_name_field\": \"author.last_name\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Missing value\n ",
"filename": "docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -91,7 +91,29 @@ Computing the grades stats based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"grades_stats\" : { \n+ \"extended_stats\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -107,9 +129,11 @@ It turned out that the exam was way above the level of the students and a grade\n \"grades_stats\" : {\n \"extended_stats\" : {\n \"field\" : \"grade\",\n- \"script\" : \"_value * correction\",\n- \"params\" : {\n- \"correction\" : 1.2\n+ \"script\" : {\n+ \"inline\": \"_value * correction\",\n+ \"params\" : {\n+ \"correction\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,27 @@ Computing the max price value across all document, this time using a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"max_price\" : { \n+ \"max\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"price\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Value Script\n \n@@ -57,9 +77,11 @@ Let's say that the prices of the documents in our index are in USD, but we would\n \"max_price_in_euros\" : {\n \"max\" : {\n \"field\" : \"price\",\n- \"script\" : \"_value * conversion_rate\",\n- \"params\" : {\n- \"conversion_rate\" : 1.2\n+ \"script\" : {\n+ \"inline\": \"_value * conversion_rate\",\n+ \"params\" : {\n+ \"conversion_rate\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/max-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,27 @@ Computing the min price value across all document, this time using a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"min_price\" : { \n+ \"min\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"price\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Value Script\n \n@@ -57,9 +77,11 @@ Let's say that the prices of the documents in our index are in USD, but we would\n \"min_price_in_euros\" : {\n \"min\" : {\n \"field\" : \"price\",\n- \"script\" : \"_value * conversion_rate\",\n- \"params\" : {\n- \"conversion_rate\" : 1.2\n+ \"script\" : \n+ \"inline\": \"_value * conversion_rate\",\n+ \"params\" : {\n+ \"conversion_rate\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/min-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -100,9 +100,11 @@ a script to convert them on-the-fly:\n \"aggs\" : {\n \"load_time_outlier\" : {\n \"percentiles\" : {\n- \"script\" : \"doc['load_time'].value / timeUnit\", <1>\n- \"params\" : {\n- \"timeUnit\" : 1000 <2>\n+ \"script\" : {\n+ \"inline\": \"doc['load_time'].value / timeUnit\", <1>\n+ \"params\" : {\n+ \"timeUnit\" : 1000 <2>\n+ }\n }\n }\n }\n@@ -113,7 +115,27 @@ a script to convert them on-the-fly:\n script to generate values which percentiles are calculated on\n <2> Scripting supports parameterized input just like any other script\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"load_time_outlier\" : {\n+ \"percentiles\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"timeUnit\" : 1000\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n [[search-aggregations-metrics-percentile-aggregation-approximation]]\n ==== Percentiles are (usually) approximate",
"filename": "docs/reference/aggregations/metrics/percentile-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -72,9 +72,11 @@ a script to convert them on-the-fly:\n \"load_time_outlier\" : {\n \"percentile_ranks\" : {\n \"values\" : [3, 5],\n- \"script\" : \"doc['load_time'].value / timeUnit\", <1>\n- \"params\" : {\n- \"timeUnit\" : 1000 <2>\n+ \"script\" : {\n+ \"inline\": \"doc['load_time'].value / timeUnit\", <1>\n+ \"params\" : {\n+ \"timeUnit\" : 1000 <2>\n+ }\n }\n }\n }\n@@ -85,7 +87,28 @@ a script to convert them on-the-fly:\n script to generate values which percentile ranks are calculated on\n <2> Scripting supports parameterized input just like any other script\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"load_time_outlier\" : {\n+ \"percentile_ranks\" : {\n+ \"values\" : [3, 5],\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"timeUnit\" : 1000\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Missing value\n \n@@ -108,3 +131,4 @@ had a value.\n --------------------------------------------------\n \n <1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`.\n+",
"filename": "docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -45,6 +45,42 @@ The response for the above aggregation:\n }\n --------------------------------------------------\n \n+The above example can also be specified using file scripts as follows:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"query\" : {\n+ \"match_all\" : {}\n+ },\n+ \"aggs\": {\n+ \"profit\": {\n+ \"scripted_metric\": {\n+ \"init_script\" : {\n+ \"file\": \"my_init_script\"\n+ },\n+ \"map_script\" : {\n+ \"file\": \"my_map_script\"\n+ },\n+ \"combine_script\" : {\n+ \"file\": \"my_combine_script\"\n+ },\n+ \"params\": {\n+ \"field\": \"amount\" <1>\n+ },\n+ \"reduce_script\" : {\n+ \"file\": \"my_reduce_script\"\n+ },\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+<1> script parameters for init, map and combine scripts must be specified in a global `params` object so that it can be share between the scripts\n+\n+For more details on specifying scripts see <<modules-scripting, script documentation>>. \n+\n ==== Scope of scripts\n \n The scripted metric aggregation uses scripts at 4 stages of its execution:\n@@ -225,13 +261,4 @@ params:: Optional. An object whose contents will be passed as variable\n --------------------------------------------------\n reduce_params:: Optional. An object whose contents will be passed as variables to the `reduce_script`. This can be useful to allow the user to control \n the behavior of the reduce phase. If this is not specified the variable will be undefined in the reduce_script execution.\n-lang:: Optional. The script language used for the scripts. If this is not specified the default scripting language is used.\n-init_script_file:: Optional. Can be used in place of the `init_script` parameter to provide the script using in a file.\n-init_script_id:: Optional. Can be used in place of the `init_script` parameter to provide the script using an indexed script.\n-map_script_file:: Optional. Can be used in place of the `map_script` parameter to provide the script using in a file.\n-map_script_id:: Optional. Can be used in place of the `map_script` parameter to provide the script using an indexed script.\n-combine_script_file:: Optional. Can be used in place of the `combine_script` parameter to provide the script using in a file.\n-combine_script_id:: Optional. Can be used in place of the `combine_script` parameter to provide the script using an indexed script.\n-reduce_script_file:: Optional. Can be used in place of the `reduce_script` parameter to provide the script using in a file.\n-reduce_script_id:: Optional. Can be used in place of the `reduce_script` parameter to provide the script using an indexed script.\n ",
"filename": "docs/reference/aggregations/metrics/scripted-metric-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -53,7 +53,29 @@ Computing the grades stats based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"grades_stats\" : {\n+ \"stats\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"field\" : \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -69,9 +91,11 @@ It turned out that the exam was way above the level of the students and a grade\n \"grades_stats\" : {\n \"stats\" : {\n \"field\" : \"grade\",\n- \"script\" : \"_value * correction\",\n- \"params\" : {\n- \"correction\" : 1.2\n+ \"script\" : \n+ \"inline\": \"_value * correction\",\n+ \"params\" : {\n+ \"correction\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/stats-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -55,7 +55,29 @@ Computing the intraday return based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"intraday_return\" : { \n+ \"sum\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"field\" : \"change\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -71,7 +93,8 @@ Computing the sum of squares over all stock tick changes:\n \"daytime_return\" : {\n \"sum\" : {\n \"field\" : \"change\",\n- \"script\" : \"_value * _value\" }\n+ \"script\" : \"_value * _value\"\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/sum-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -48,4 +48,26 @@ Counting the values generated by a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"grades_count\" : { \n+ \"value_count\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"field\" : \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.",
"filename": "docs/reference/aggregations/metrics/valuecount-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -187,7 +187,7 @@ the options. Curl example with update actions:\n { \"update\" : {\"_id\" : \"1\", \"_type\" : \"type1\", \"_index\" : \"index1\", \"_retry_on_conflict\" : 3} }\n { \"doc\" : {\"field\" : \"value\"} }\n { \"update\" : { \"_id\" : \"0\", \"_type\" : \"type1\", \"_index\" : \"index1\", \"_retry_on_conflict\" : 3} }\n-{ \"script\" : \"ctx._source.counter += param1\", \"lang\" : \"js\", \"params\" : {\"param1\" : 1}, \"upsert\" : {\"counter\" : 1}}\n+{ \"script\" : { \"inline\": \"ctx._source.counter += param1\", \"lang\" : \"js\", \"params\" : {\"param1\" : 1}}, \"upsert\" : {\"counter\" : 1}}\n { \"update\" : {\"_id\" : \"2\", \"_type\" : \"type1\", \"_index\" : \"index1\", \"_retry_on_conflict\" : 3} }\n { \"doc\" : {\"field\" : \"value\"}, \"doc_as_upsert\" : true }\n --------------------------------------------------",
"filename": "docs/reference/docs/bulk.asciidoc",
"status": "modified"
},
{
"diff": "@@ -28,9 +28,11 @@ Now, we can execute a script that would increment the counter:\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.counter += count\",\n- \"params\" : {\n- \"count\" : 4\n+ \"script\" : {\n+ \"inline\": \"ctx._source.counter += count\",\n+ \"params\" : {\n+ \"count\" : 4\n+ }\n }\n }'\n --------------------------------------------------\n@@ -41,9 +43,11 @@ will still add it, since its a list):\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.tags += tag\",\n- \"params\" : {\n- \"tag\" : \"blue\"\n+ \"script\" : {\n+ \"inline\": \"ctx._source.tags += tag\",\n+ \"params\" : {\n+ \"tag\" : \"blue\"\n+ }\n }\n }'\n --------------------------------------------------\n@@ -71,9 +75,11 @@ And, we can delete the doc if the tags contain blue, or ignore (noop):\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.tags.contains(tag) ? ctx.op = \\\"delete\\\" : ctx.op = \\\"none\\\"\",\n- \"params\" : {\n- \"tag\" : \"blue\"\n+ \"script\" : {\n+ \"inline\": \"ctx._source.tags.contains(tag) ? ctx.op = \\\"delete\\\" : ctx.op = \\\"none\\\"\",\n+ \"params\" : {\n+ \"tag\" : \"blue\"\n+ }\n }\n }'\n --------------------------------------------------\n@@ -136,9 +142,11 @@ index the fresh doc:\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.counter += count\",\n- \"params\" : {\n- \"count\" : 4\n+ \"script\" : {\n+ \"inline\": \"ctx._source.counter += count\",\n+ \"params\" : {\n+ \"count\" : 4\n+ }\n },\n \"upsert\" : {\n \"counter\" : 1\n@@ -153,13 +161,15 @@ new `scripted_upsert` parameter with the value `true`.\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/sessions/session/dh3sgudg8gsrgl/_update' -d '{\n- \"script_id\" : \"my_web_session_summariser\",\n \"scripted_upsert\":true,\n- \"params\" : {\n- \"pageViewEvent\" : {\n- \t\"url\":\"foo.com/bar\",\n- \t\"response\":404,\n- \t\"time\":\"2014-01-01 12:32\"\n+ \"script\" : {\n+ \"id\": \"my_web_session_summariser\",\n+ \"params\" : {\n+ \"pageViewEvent\" : {\n+ \"url\":\"foo.com/bar\",\n+ \"response\":404,\n+ \"time\":\"2014-01-01 12:32\"\n+ }\n }\n },\n \"upsert\" : {",
"filename": "docs/reference/docs/update.asciidoc",
"status": "modified"
},
{
"diff": "@@ -10,11 +10,13 @@ field. Example:\n {\n \"example\" : {\n \"transform\" : {\n- \"script\" : \"if (ctx._source['title']?.startsWith('t')) ctx._source['suggest'] = ctx._source['content']\",\n- \"params\" : {\n- \"variable\" : \"not used but an example anyway\"\n- },\n- \"lang\": \"groovy\"\n+ \"script\" : {\n+ \"inline\": \"if (ctx._source['title']?.startsWith('t')) ctx._source['suggest'] = ctx._source['content']\",\n+ \"params\" : {\n+ \"variable\" : \"not used but an example anyway\"\n+ },\n+ \"lang\": \"groovy\"\n+ }\n },\n \"properties\": {\n \"title\": { \"type\": \"string\" },",
"filename": "docs/reference/mapping/transform.asciidoc",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,7 @@ GET /_search\n {\n \"script_fields\": {\n \"my_field\": {\n- \"script\": \"1 + my_var\",\n+ \"inline\": \"1 + my_var\",\n \"params\": {\n \"my_var\": 2\n }\n@@ -38,7 +38,7 @@ GET /_search\n }\n -----------------------------------\n \n-Save the contents of the script as a file called `config/scripts/my_script.groovy`\n+Save the contents of the `inline` field as a file called `config/scripts/my_script.groovy`\n on every data node in the cluster:\n \n [source,js]\n@@ -54,7 +54,7 @@ GET /_search\n {\n \"script_fields\": {\n \"my_field\": {\n- \"script_file\": \"my_script\",\n+ \"file\": \"my_script\",\n \"params\": {\n \"my_var\": 2\n }\n@@ -67,9 +67,9 @@ GET /_search\n \n \n Additional `lang` plugins are provided to allow to execute scripts in\n-different languages. All places where a `script` parameter can be used, a `lang` parameter\n-(on the same level) can be provided to define the language of the\n-script. The following are the supported scripting languages:\n+different languages. All places where a script can be used, a `lang` parameter\n+can be provided to define the language of the script. The following are the \n+supported scripting languages:\n \n [cols=\"<,<,<\",options=\"header\",]\n |=======================================================================\n@@ -120,7 +120,7 @@ curl -XPOST localhost:9200/_search -d '{\n {\n \"script_score\": {\n \"lang\": \"groovy\",\n- \"script_file\": \"calculate-score\",\n+ \"file\": \"calculate-score\",\n \"params\": {\n \"my_modifier\": 8\n }\n@@ -162,8 +162,8 @@ curl -XPOST localhost:9200/_scripts/groovy/indexedCalculateScore -d '{\n This will create a document with id: `indexedCalculateScore` and type: `groovy` in the\n `.scripts` index. The type of the document is the language used by the script.\n \n-This script can be accessed at query time by appending `_id` to\n-the script parameter and passing the script id. So `script` becomes `script_id`.:\n+This script can be accessed at query time by using the `id` script parameter and passing \n+the script id:\n \n [source,js]\n --------------------------------------------------\n@@ -178,7 +178,7 @@ curl -XPOST localhost:9200/_search -d '{\n \"functions\": [\n {\n \"script_score\": {\n- \"script_id\": \"indexedCalculateScore\",\n+ \"id\": \"indexedCalculateScore\",\n \"lang\" : \"groovy\",\n \"params\": {\n \"my_modifier\": 8",
"filename": "docs/reference/modules/scripting.asciidoc",
"status": "modified"
},
{
"diff": "@@ -120,12 +120,14 @@ script, and provide parameters to it:\n [source,js]\n --------------------------------------------------\n \"script_score\": {\n- \"lang\": \"lang\",\n- \"params\": {\n- \"param1\": value1,\n- \"param2\": value2\n- },\n- \"script\": \"_score * doc['my_numeric_field'].value / pow(param1, param2)\"\n+ \"script\": {\n+ \"lang\": \"lang\",\n+ \"params\": {\n+ \"param1\": value1,\n+ \"param2\": value2\n+ },\n+ \"inline\": \"_score * doc['my_numeric_field'].value / pow(param1, param2)\"\n+ }\n }\n --------------------------------------------------\n ",
"filename": "docs/reference/query-dsl/function-score-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -34,9 +34,11 @@ to use the ability to pass parameters to the script itself, for example:\n }, \n \"filter\" : {\n \"script\" : {\n- \"script\" : \"doc['num1'].value > param1\"\n- \"params\" : {\n- \"param1\" : 5\n+ \"script\" : {\n+ \"inline\" : \"doc['num1'].value > param1\"\n+ \"params\" : {\n+ \"param1\" : 5\n+ }\n }\n }\n }",
"filename": "docs/reference/query-dsl/script-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -12,7 +12,7 @@ GET /_search\n {\n \"query\": {\n \"template\": {\n- \"query\": { \"match\": { \"text\": \"{{query_string}}\" }},\n+ \"inline\": { \"match\": { \"text\": \"{{query_string}}\" }},\n \"params\" : {\n \"query_string\" : \"all about search\"\n }\n@@ -45,7 +45,7 @@ GET /_search\n {\n \"query\": {\n \"template\": {\n- \"query\": \"{ \\\"match\\\": { \\\"text\\\": \\\"{{query_string}}\\\" }}\", <1>\n+ \"inline\": \"{ \\\"match\\\": { \\\"text\\\": \\\"{{query_string}}\\\" }}\", <1>\n \"params\" : {\n \"query_string\" : \"all about search\"\n }",
"filename": "docs/reference/query-dsl/template-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -15,9 +15,11 @@ evaluation>> (based on different fields) for each hit, for example:\n \"script\" : \"doc['my_field_name'].value * 2\"\n },\n \"test2\" : {\n- \"script\" : \"doc['my_field_name'].value * factor\",\n- \"params\" : {\n- \"factor\" : 2.0\n+ \"script\" : {\n+ \"inline\": \"doc['my_field_name'].value * factor\",\n+ \"params\" : {\n+ \"factor\" : 2.0\n+ }\n }\n }\n }",
"filename": "docs/reference/search/request/script-fields.asciidoc",
"status": "modified"
},
{
"diff": "@@ -318,10 +318,12 @@ Allow to sort based on custom scripts, here is an example:\n },\n \"sort\" : {\n \"_script\" : {\n- \"script\" : \"doc['field_name'].value * factor\",\n \"type\" : \"number\",\n- \"params\" : {\n- \"factor\" : 1.1\n+ \"script\" : {\n+ \"inline\": \"doc['field_name'].value * factor\",\n+ \"params\" : {\n+ \"factor\" : 1.1\n+ }\n },\n \"order\" : \"asc\"\n }",
"filename": "docs/reference/search/request/sort.asciidoc",
"status": "modified"
},
{
"diff": "@@ -8,7 +8,7 @@ before they are executed and fill existing templates with template parameters.\n ------------------------------------------\n GET /_search/template\n {\n- \"template\" : {\n+ \"inline\" : {\n \"query\": { \"match\" : { \"{{my_field}}\" : \"{{my_value}}\" } },\n \"size\" : \"{{my_size}}\"\n },\n@@ -40,7 +40,7 @@ disable scripts per language, source and operation as described in\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n+ \"inline\": {\n \"query\": {\n \"match\": {\n \"title\": \"{{query_string}}\"\n@@ -60,7 +60,7 @@ GET /_search/template\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n+ \"inline\": {\n \"query\": {\n \"terms\": {\n \"status\": [\n@@ -97,7 +97,7 @@ A default value is written as `{{var}}{{^var}}default{{/var}}` for instance:\n [source,js]\n ------------------------------------------\n {\n- \"template\": {\n+ \"inline\": {\n \"query\": {\n \"range\": {\n \"line_no\": {\n@@ -212,7 +212,7 @@ via the REST API, should be written as a string:\n \n [source,json]\n --------------------\n-\"template\": \"{\\\"query\\\":{\\\"filtered\\\":{\\\"query\\\":{\\\"match\\\":{\\\"line\\\":\\\"{{text}}\\\"}},\\\"filter\\\":{{{#line_no}}\\\"range\\\":{\\\"line_no\\\":{{{#start}}\\\"gte\\\":\\\"{{start}}\\\"{{#end}},{{/end}}{{/start}}{{#end}}\\\"lte\\\":\\\"{{end}}\\\"{{/end}}}}{{/line_no}}}}}}\"\n+\"inline\": \"{\\\"query\\\":{\\\"filtered\\\":{\\\"query\\\":{\\\"match\\\":{\\\"line\\\":\\\"{{text}}\\\"}},\\\"filter\\\":{{{#line_no}}\\\"range\\\":{\\\"line_no\\\":{{{#start}}\\\"gte\\\":\\\"{{start}}\\\"{{#end}},{{/end}}{{/start}}{{#end}}\\\"lte\\\":\\\"{{end}}\\\"{{/end}}}}{{/line_no}}}}}}\"\n --------------------\n \n ==================================\n@@ -229,9 +229,7 @@ In order to execute the stored template, reference it by it's name under the `te\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n- \"file\": \"storedTemplate\" <1>\n- },\n+ \"file\": \"storedTemplate\", <1>\n \"params\": {\n \"query_string\": \"search for these words\"\n }\n@@ -293,9 +291,7 @@ To use an indexed template at search time use:\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n- \"id\": \"templateName\" <1>\n- },\n+ \"id\": \"templateName\", <1>\n \"params\": {\n \"query_string\": \"search for these words\"\n }",
"filename": "docs/reference/search/search-template.asciidoc",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,12 @@\n package org.elasticsearch.action.bulk;\n \n import com.google.common.collect.Lists;\n-import org.elasticsearch.action.*;\n+\n+import org.elasticsearch.action.ActionRequest;\n+import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.action.CompositeIndicesRequest;\n+import org.elasticsearch.action.IndicesRequest;\n+import org.elasticsearch.action.WriteConsistencyLevel;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.update.UpdateRequest;\n@@ -140,7 +145,7 @@ BulkRequest internalAdd(UpdateRequest request, @Nullable Object payload) {\n sizeInBytes += request.upsertRequest().source().length();\n }\n if (request.script() != null) {\n- sizeInBytes += request.script().length() * 2;\n+ sizeInBytes += request.script().getScript().length() * 2;\n }\n return this;\n }",
"filename": "src/main/java/org/elasticsearch/action/bulk/BulkRequest.java",
"status": "modified"
},
{
"diff": "@@ -35,11 +35,13 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.ScriptService.ScriptType;\n+import org.elasticsearch.script.Template;\n+import org.elasticsearch.script.mustache.MustacheScriptEngineService;\n import org.elasticsearch.search.Scroll;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n \n import java.io.IOException;\n-import java.util.Collections;\n import java.util.Map;\n \n import static org.elasticsearch.search.Scroll.readScroll;\n@@ -69,9 +71,7 @@ public class SearchRequest extends ActionRequest<SearchRequest> implements Indic\n private String preference;\n \n private BytesReference templateSource;\n- private String templateName;\n- private ScriptService.ScriptType templateType;\n- private Map<String, Object> templateParams = Collections.emptyMap();\n+ private Template template;\n \n private BytesReference source;\n \n@@ -100,9 +100,7 @@ public SearchRequest(SearchRequest searchRequest, ActionRequest originalRequest)\n this.routing = searchRequest.routing;\n this.preference = searchRequest.preference;\n this.templateSource = searchRequest.templateSource;\n- this.templateName = searchRequest.templateName;\n- this.templateType = searchRequest.templateType;\n- this.templateParams = searchRequest.templateParams;\n+ this.template = searchRequest.template;\n this.source = searchRequest.source;\n this.extraSource = searchRequest.extraSource;\n this.queryCache = searchRequest.queryCache;\n@@ -389,43 +387,93 @@ public SearchRequest templateSource(String template) {\n return this;\n }\n \n+ /**\n+ * The stored template\n+ */\n+ public void template(Template template) {\n+ this.template = template;\n+ }\n+\n+ /**\n+ * The stored template\n+ */\n+ public Template template() {\n+ return template;\n+ }\n+\n /**\n * The name of the stored template\n+ * \n+ * @deprecated use {@link #template(Template))} instead.\n */\n+ @Deprecated\n public void templateName(String templateName) {\n- this.templateName = templateName;\n+ updateOrCreateScript(templateName, null, null, null);\n }\n \n+ /**\n+ * The type of the stored template\n+ * \n+ * @deprecated use {@link #template(Template))} instead.\n+ */\n+ @Deprecated\n public void templateType(ScriptService.ScriptType templateType) {\n- this.templateType = templateType;\n+ updateOrCreateScript(null, templateType, null, null);\n }\n \n /**\n * Template parameters used for rendering\n+ * \n+ * @deprecated use {@link #template(Template))} instead.\n */\n+ @Deprecated\n public void templateParams(Map<String, Object> params) {\n- this.templateParams = params;\n+ updateOrCreateScript(null, null, null, params);\n }\n \n /**\n * The name of the stored template\n+ * \n+ * @deprecated use {@link #template()} instead.\n */\n+ @Deprecated\n public String templateName() {\n- return templateName;\n+ return template == null ? null : template.getScript();\n }\n \n /**\n * The name of the stored template\n+ * \n+ * @deprecated use {@link #template()} instead.\n */\n+ @Deprecated\n public ScriptService.ScriptType templateType() {\n- return templateType;\n+ return template == null ? null : template.getType();\n }\n \n /**\n * Template parameters used for rendering\n+ * \n+ * @deprecated use {@link #template()} instead.\n */\n+ @Deprecated\n public Map<String, Object> templateParams() {\n- return templateParams;\n+ return template == null ? 
null : template.getParams();\n+ }\n+\n+ private void updateOrCreateScript(String templateContent, ScriptType type, String lang, Map<String, Object> params) {\n+ Template template = template();\n+ if (template == null) {\n+ template = new Template(templateContent == null ? \"\" : templateContent, type == null ? ScriptType.INLINE : type, lang, null,\n+ params);\n+ } else {\n+ String newTemplateContent = templateContent == null ? template.getScript() : templateContent;\n+ ScriptType newTemplateType = type == null ? template.getType() : type;\n+ String newTemplateLang = lang == null ? template.getLang() : lang;\n+ Map<String, Object> newTemplateParams = params == null ? template.getParams() : params;\n+ template = new Template(newTemplateContent, newTemplateType, MustacheScriptEngineService.NAME, null, newTemplateParams);\n+ }\n+ template(template);\n }\n \n /**\n@@ -517,10 +565,8 @@ public void readFrom(StreamInput in) throws IOException {\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n \n templateSource = in.readBytesReference();\n- templateName = in.readOptionalString();\n- templateType = ScriptService.ScriptType.readFrom(in);\n if (in.readBoolean()) {\n- templateParams = (Map<String, Object>) in.readGenericValue();\n+ template = Template.readTemplate(in);\n }\n queryCache = in.readOptionalBoolean();\n }\n@@ -550,12 +596,10 @@ public void writeTo(StreamOutput out) throws IOException {\n indicesOptions.writeIndicesOptions(out);\n \n out.writeBytesReference(templateSource);\n- out.writeOptionalString(templateName);\n- ScriptService.ScriptType.writeTo(templateType, out);\n- boolean existTemplateParams = templateParams != null;\n- out.writeBoolean(existTemplateParams);\n- if (existTemplateParams) {\n- out.writeGenericValue(templateParams);\n+ boolean hasTemplate = template != null;\n+ out.writeBoolean(hasTemplate);\n+ if (hasTemplate) {\n+ template.writeTo(out);\n }\n \n out.writeOptionalBoolean(queryCache);",
"filename": "src/main/java/org/elasticsearch/action/search/SearchRequest.java",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,9 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.index.query.QueryBuilder;\n+import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.Template;\n import org.elasticsearch.search.Scroll;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n@@ -423,33 +425,60 @@ public SearchRequestBuilder addFieldDataField(String name) {\n * @param name The name that will represent this value in the return hit\n * @param script The script to use\n */\n- public SearchRequestBuilder addScriptField(String name, String script) {\n+ public SearchRequestBuilder addScriptField(String name, Script script) {\n sourceBuilder().scriptField(name, script);\n return this;\n }\n \n /**\n- * Adds a script based field to load and return. The field does not have to be stored,\n- * but its recommended to use non analyzed or numeric fields.\n+ * Adds a script based field to load and return. The field does not have to\n+ * be stored, but its recommended to use non analyzed or numeric fields.\n *\n- * @param name The name that will represent this value in the return hit\n- * @param script The script to use\n- * @param params Parameters that the script can use.\n+ * @param name\n+ * The name that will represent this value in the return hit\n+ * @param script\n+ * The script to use\n+ * @deprecated Use {@link #addScriptField(String, Script)} instead.\n */\n+ @Deprecated\n+ public SearchRequestBuilder addScriptField(String name, String script) {\n+ sourceBuilder().scriptField(name, script);\n+ return this;\n+ }\n+\n+ /**\n+ * Adds a script based field to load and return. The field does not have to\n+ * be stored, but its recommended to use non analyzed or numeric fields.\n+ *\n+ * @param name\n+ * The name that will represent this value in the return hit\n+ * @param script\n+ * The script to use\n+ * @param params\n+ * Parameters that the script can use.\n+ * @deprecated Use {@link #addScriptField(String, Script)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder addScriptField(String name, String script, Map<String, Object> params) {\n sourceBuilder().scriptField(name, script, params);\n return this;\n }\n \n /**\n- * Adds a script based field to load and return. The field does not have to be stored,\n- * but its recommended to use non analyzed or numeric fields.\n+ * Adds a script based field to load and return. 
The field does not have to\n+ * be stored, but its recommended to use non analyzed or numeric fields.\n *\n- * @param name The name that will represent this value in the return hit\n- * @param lang The language of the script\n- * @param script The script to use\n- * @param params Parameters that the script can use (can be <tt>null</tt>).\n- */\n+ * @param name\n+ * The name that will represent this value in the return hit\n+ * @param lang\n+ * The language of the script\n+ * @param script\n+ * The script to use\n+ * @param params\n+ * Parameters that the script can use (can be <tt>null</tt>).\n+ * @deprecated Use {@link #addScriptField(String, Script)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder addScriptField(String name, String lang, String script, Map<String, Object> params) {\n sourceBuilder().scriptField(name, lang, script, params);\n return this;\n@@ -939,16 +968,33 @@ public SearchRequestBuilder setExtraSource(Map source) {\n * template stuff\n */\n \n+ public SearchRequestBuilder setTemplate(Template template) {\n+ request.template(template);\n+ return this;\n+ }\n+\n+ /**\n+ * @deprecated Use {@link #setTemplate(Template)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder setTemplateName(String templateName) {\n request.templateName(templateName);\n return this;\n }\n \n+ /**\n+ * @deprecated Use {@link #setTemplate(Template)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder setTemplateType(ScriptService.ScriptType templateType) {\n request.templateType(templateType);\n return this;\n }\n \n+ /**\n+ * @deprecated Use {@link #setTemplate(Template)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder setTemplateParams(Map<String, Object> templateParams) {\n request.templateParams(templateParams);\n return this;",
"filename": "src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java",
"status": "modified"
}
]
} |
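
The SearchRequest/SearchRequestBuilder diff in the row above replaces the three separate template fields (`templateName`, `templateType`, `templateParams`) with a single `Template` object. The following Java sketch illustrates that migration using only the signatures visible in the diff; the builder argument, the `my_template` name, and the parameter map are placeholder assumptions, and the `null` fourth constructor argument simply mirrors the `updateOrCreateScript` helper in the diff.

```java
// Sketch based on the SearchRequest/SearchRequestBuilder diff above; not an official example.
// "builder" is assumed to come from something like client.prepareSearch(...).
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.script.ScriptService.ScriptType;
import org.elasticsearch.script.Template;
import org.elasticsearch.script.mustache.MustacheScriptEngineService;

public class TemplateMigrationSketch {

    // Deprecated style: three separate setters, each of which now rebuilds the
    // single internal Template via updateOrCreateScript(...).
    public static void deprecatedStyle(SearchRequestBuilder builder, Map<String, Object> params) {
        builder.setTemplateName("my_template")
               .setTemplateType(ScriptType.INDEXED)
               .setTemplateParams(params);
    }

    // Unified style introduced by the change: one Template object carries the
    // template name/source, its type, the language, and the parameters. The null
    // argument mirrors the unused content-type slot seen in the diff.
    public static void unifiedStyle(SearchRequestBuilder builder, Map<String, Object> params) {
        builder.setTemplate(new Template("my_template", ScriptType.INDEXED,
                MustacheScriptEngineService.NAME, null, params));
    }

    // Placeholder parameters, matching the "query_string" example used in the docs.
    public static Map<String, Object> exampleParams() {
        Map<String, Object> params = new HashMap<>();
        params.put("query_string", "all about search");
        return params;
    }
}
```

The design point the diff makes is that the deprecated setters no longer store separate fields: each call funnels into the same internal `Template`, so the wire format and the builder API describe the template in one place.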
{
"body": "Looks like the mapping transform feature supports dynamic scripts only. When submitting the following request:\n\n```\ncurl -XPUT localhost:9200/test -d '{\n \"mappings\" : {\n \"example\" : {\n \"transform\" : {\n \"script_id\" : \"1\",\n \"lang\": \"groovy\"\n }\n }\n }\n}\n'\n```\n\nthe mapping stored in the cluster state becomes the following, hence it loses the information about where the script should be loaded from:\n\n```\n{\n \"test\": {\n \"mappings\": {\n \"example\": {\n \"transform\": {\n \"script\": \"1\",\n \"lang\": \"groovy\"\n },\n \"properties\": {\n \"test\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n}\n```\n\nMight relate to #9995.\n",
"comments": [
{
"body": "@colings86 i thought that the change in https://github.com/elastic/elasticsearch/pull/11164 would fix this, but apparently not? \n",
"created_at": "2015-05-29T17:46:48Z"
},
{
"body": "@clintongormley it should do. Have you tried it?\n",
"created_at": "2015-05-29T17:48:45Z"
},
{
"body": "yes, eg i save a file called `test.groovy`, then\n\n```\nPUT t\n{\n \"mappings\": {\n \"t\": {\n \"transform\": {\n \"script\": {\n \"file\": \"test\"\n }\n }\n }\n }\n}\n```\n\nthen i get:\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"script_parse_exception\",\n \"reason\": \"Value must be of type String: [script]\"\n }\n ],\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"mapping [t]\",\n \"caused_by\": {\n \"type\": \"script_parse_exception\",\n \"reason\": \"Value must be of type String: [script]\"\n }\n },\n \"status\": 400\n}\n```\n",
"created_at": "2015-05-29T17:52:43Z"
},
{
"body": "Hmm, ok I'll take a look\n",
"created_at": "2015-05-29T17:54:21Z"
},
{
"body": "@clintongormley currently the `transform` object is treated as the script (this is actually (almost) backwards compatible as previously the `transform` object could have `lang`, `script` and `params` fields) , which is why you are getting the error. The following request does work:\n\n``` javascript\nPUT t\n{\n \"mappings\": {\n \"t\": {\n \"transform\": {\n \"file\": \"test\"\n }\n }\n }\n}\n```\n\nI can change it so that we require the `script` object under the `transform` object but since we only have scripted transforms it seemed cleaner this way. WDYT?\n",
"created_at": "2015-06-01T11:44:04Z"
},
{
"body": "@colings86 doh - i should have realised. thanks for testing. \n\nClosed by https://github.com/elastic/elasticsearch/pull/11164\n",
"created_at": "2015-06-02T17:53:30Z"
}
],
"number": 10113,
"title": "Mapping transform doesn't support file and indexed scripts"
} | {
"body": "This change unifies the way scripts and templates are specified for all instances in the codebase. It builds on the Script class added previously and adds request building and parsing support as well as the ability to transfer script objects between nodes. It also adds a Template class which aims to provide the same functionality for template APIs.\n\nNote: This PR maintains backwards compatibility with versions 1.5 and before. There will be a separate PR to remove the backwards compatibility in 2.0 once this is merged (this will also include updating the \"Breaking changes in 2.0\" doc.\n\nCloses #11091\nCloses #10810\nCloses #10113\n",
"number": 11164,
"review_comments": [
{
"body": "I'd prefer this to be `@Nullable` as well... relates to xcontent serialization\n",
"created_at": "2015-05-14T12:02:59Z"
},
{
"body": "I'd say yes... if you want to be able to parse script as a string, you want to be able to serialize it as as string. I believe serialization should be symmetric - you write what you read. For this reason, I believe the script type should be nullable. if you read a script like a string, the read state should be preserved for the writing. \n",
"created_at": "2015-05-14T12:05:15Z"
},
{
"body": "> Run TransformOnIndexMapperIntegrationTest.getTransformed() with seed -Dtests.seed=CCF6041A004DDD9D to see why\n\nmaybe you can explain why here? without knowing much.. it smells like a bug in transform\n",
"created_at": "2015-05-14T12:06:08Z"
},
{
"body": "It's because the serialisation isn't symmetric. If we write an inline transform script with no language and params as an object (instead of a string) it gets serialized as a string in this method and errors. Making type `@Nullable` will solve this as we will know the difference between an inline script entered as an object and one entered as a simple string\n",
"created_at": "2015-05-14T12:09:02Z"
},
{
"body": "> Making type @Nullable will solve this as we will know the difference between an inline script entered as an object and one entered as a simple string\n\nthat's exactly my point... I believe the `Script` construct should reflect exactly what the user inputs.. hence the `@Nullable` type approach \n",
"created_at": "2015-05-14T12:51:11Z"
},
{
"body": "Yep, I'll make the change. Thanks for pointing it out\n",
"created_at": "2015-05-14T13:06:05Z"
},
{
"body": "@uboness I have made the type `@Nullable` internally to the class so that the `Script(String script)` constructor can set the type as `null` but the `Script(String script, ScriptType type, String lang, Map<String, Object> params)` constructor still throws an exception if no type is passed in.\n",
"created_at": "2015-05-14T14:00:34Z"
},
{
"body": "this change is not bw compatible for the java api. are we sure we wanna make it against 1.x?\n",
"created_at": "2015-05-15T07:45:12Z"
},
{
"body": "same as above, this breaks bw comp for the java api\n",
"created_at": "2015-05-15T07:46:15Z"
},
{
"body": "s/Unkknown/Unknown\n",
"created_at": "2015-05-15T07:47:28Z"
},
{
"body": "remove this?\n",
"created_at": "2015-05-15T07:48:14Z"
},
{
"body": "we can use Writeable here instead of Streamable so fields can become final and default constructor can go away\n",
"created_at": "2015-05-15T07:49:10Z"
},
{
"body": "can't subclasses just override writeTo and call super before doing anything else?\n",
"created_at": "2015-05-15T07:49:57Z"
},
{
"body": "if we do make it nullable @colings86 we have to make sure some value gets provided as part of the `canExecuteScript` call, where the type is required to decide whether the script can be executed or not.\n",
"created_at": "2015-05-15T07:55:41Z"
},
{
"body": "Is SearchRequest part of the Java API? I kept SearchRequestBuilder bwc as I thought that was the bit that we considered the Java API.\n",
"created_at": "2015-05-15T08:26:43Z"
},
{
"body": "As above I kept UpdateRequestBuilder bwc as I thought that was the bit exposed for the Java API\n",
"created_at": "2015-05-15T08:27:13Z"
},
{
"body": "Haven't come across that yet (actually can't see a Writeable class/interface in my branch?), is it new?\n",
"created_at": "2015-05-15T08:29:06Z"
},
{
"body": "Yep can do but I was following what we tend to do elsewhere. Happy to change it\n",
"created_at": "2015-05-15T08:29:38Z"
},
{
"body": "no both Request and RequestBuilders are java api. one can choose which one to use. you can always do client.search(SearchRequest) without going through the builder.\n",
"created_at": "2015-05-15T08:30:41Z"
},
{
"body": "same as above ;)\n",
"created_at": "2015-05-15T08:31:15Z"
},
{
"body": "yes it is :) but wait that is in master only so if this needs to be backported to 1.x my comment doesn't apply. \n",
"created_at": "2015-05-15T08:32:17Z"
},
{
"body": "never saw this before :) if you do use this approach then the writeTo should be made final ?\n",
"created_at": "2015-05-15T08:33:12Z"
},
{
"body": "ok, in which case I'll add the methods back in here and make it bwc\n",
"created_at": "2015-05-15T09:05:02Z"
},
{
"body": "After this is merged I will need to go back and remove the deprecated stuff from master so I'll change this to use Writable at that point\n",
"created_at": "2015-05-15T09:06:11Z"
},
{
"body": "sounds great thanks\n",
"created_at": "2015-05-15T09:49:08Z"
},
{
"body": "Not sure what to call this as there is already a method called script() which returns String and that needs to be maintained for bwc\n",
"created_at": "2015-05-15T09:50:13Z"
},
{
"body": "This line looks to be redundant\n",
"created_at": "2015-05-22T15:04:47Z"
},
{
"body": "indentation\n",
"created_at": "2015-05-22T15:08:49Z"
},
{
"body": "This doc is left without any examples of non-inline scripts. I presume \"id\" and \"file\" options are possible but the doc doesn't link to examples of these?\nI presume any `params` are global so passed to all init/map/combine/reduce - can their be type-specific params nested under each *_script variant?\n",
"created_at": "2015-05-22T15:30:35Z"
},
{
"body": "Typo? _id instead of id?\n",
"created_at": "2015-05-22T16:46:20Z"
}
],
"title": "Unify script and template requests across codebase"
} | {
"commits": [
{
"message": "Scripting: Unify script and template requests across codebase\n\nThis change unifies the way scripts and templates are specified for all instances in the codebase. It builds on the Script class added previously and adds request building and parsing support as well as the ability to transfer script objects between nodes. It also adds a Template class which aims to provide the same functionality for template APIs\n\nCloses #11091"
}
],
"files": [
{
"diff": "@@ -30,10 +30,10 @@ MetricsAggregationBuilder aggregation =\n AggregationBuilders\n .scriptedMetric(\"agg\")\n .initScript(\"_agg['heights'] = []\")\n- .mapScript(\"if (doc['gender'].value == \\\"male\\\") \" +\n+ .mapScript(new Script(\"if (doc['gender'].value == \\\"male\\\") \" +\n \"{ _agg.heights.add(doc['height'].value) } \" +\n \"else \" +\n- \"{ _agg.heights.add(-1 * doc['height'].value) }\");\n+ \"{ _agg.heights.add(-1 * doc['height'].value) }\"));\n --------------------------------------------------\n \n You can also specify a `combine` script which will be executed on each shard:\n@@ -43,12 +43,12 @@ You can also specify a `combine` script which will be executed on each shard:\n MetricsAggregationBuilder aggregation =\n AggregationBuilders\n .scriptedMetric(\"agg\")\n- .initScript(\"_agg['heights'] = []\")\n- .mapScript(\"if (doc['gender'].value == \\\"male\\\") \" +\n+ .initScript(new Script(\"_agg['heights'] = []\"))\n+ .mapScript(new Script(\"if (doc['gender'].value == \\\"male\\\") \" +\n \"{ _agg.heights.add(doc['height'].value) } \" +\n \"else \" +\n- \"{ _agg.heights.add(-1 * doc['height'].value) }\")\n- .combineScript(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\");\n+ \"{ _agg.heights.add(-1 * doc['height'].value) }\"))\n+ .combineScript(new Script(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\"));\n --------------------------------------------------\n \n You can also specify a `reduce` script which will be executed on the node which gets the request:\n@@ -58,13 +58,13 @@ You can also specify a `reduce` script which will be executed on the node which\n MetricsAggregationBuilder aggregation =\n AggregationBuilders\n .scriptedMetric(\"agg\")\n- .initScript(\"_agg['heights'] = []\")\n- .mapScript(\"if (doc['gender'].value == \\\"male\\\") \" +\n+ .initScript(new Script(\"_agg['heights'] = []\"))\n+ .mapScript(new Script(\"if (doc['gender'].value == \\\"male\\\") \" +\n \"{ _agg.heights.add(doc['height'].value) } \" +\n \"else \" +\n- \"{ _agg.heights.add(-1 * doc['height'].value) }\")\n- .combineScript(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\")\n- .reduceScript(\"heights_sum = 0; for (a in _aggs) { heights_sum += a }; return heights_sum\");\n+ \"{ _agg.heights.add(-1 * doc['height'].value) }\"))\n+ .combineScript(new Script(\"heights_sum = 0; for (t in _agg.heights) { heights_sum += t }; return heights_sum\"))\n+ .reduceScript(new Script(\"heights_sum = 0; for (a in _aggs) { heights_sum += a }; return heights_sum\"));\n --------------------------------------------------\n \n ",
"filename": "docs/java-api/aggregations/metrics/scripted-metric-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,7 @@ Or you can use `prepareUpdate()` method:\n [source,java]\n --------------------------------------------------\n client.prepareUpdate(\"ttl\", \"doc\", \"1\")\n- .setScript(\"ctx._source.gender = \\\"male\\\"\" <1> , ScriptService.ScriptType.INLINE)\n+ .setScript(new Script(\"ctx._source.gender = \\\"male\\\"\" <1> , ScriptService.ScriptType.INLINE, null, null))\n .get();\n \n client.prepareUpdate(\"ttl\", \"doc\", \"1\")\n@@ -46,7 +46,7 @@ The update API allows to update a document based on a script provided:\n [source,java]\n --------------------------------------------------\n UpdateRequest updateRequest = new UpdateRequest(\"ttl\", \"doc\", \"1\")\n- .script(\"ctx._source.gender = \\\"male\\\"\");\n+ .script(new Script(\"ctx._source.gender = \\\"male\\\"\"));\n client.update(updateRequest).get();\n --------------------------------------------------\n ",
"filename": "docs/java-api/update.asciidoc",
"status": "modified"
},
{
"diff": "@@ -73,8 +73,6 @@ Some aggregations work on values extracted from the aggregated documents. Typica\n a specific document field which is set using the `field` key for the aggregations. It is also possible to define a\n <<modules-scripting,`script`>> which will generate the values (per document).\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n-\n When both `field` and `script` settings are configured for the aggregation, the script will be treated as a\n `value script`. While normal scripts are evaluated on a document level (i.e. the script has access to all the data\n associated with the document), value scripts are evaluated on the *value* level. In this mode, the values are extracted",
"filename": "docs/reference/aggregations.asciidoc",
"status": "modified"
},
{
"diff": "@@ -128,8 +128,6 @@ It is also possible to customize the key for each range:\n \n ==== Script\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n-\n [source,js]\n --------------------------------------------------\n {\n@@ -148,6 +146,33 @@ TIP: The `script` parameter expects an inline script. Use `script_id` for indexe\n }\n --------------------------------------------------\n \n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"price_ranges\" : {\n+ \"range\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"price\"\n+ }\n+ },\n+ \"ranges\" : [\n+ { \"to\" : 50 },\n+ { \"from\" : 50, \"to\" : 100 },\n+ { \"from\" : 100 }\n+ ]\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n+\n ==== Value Script\n \n Lets say the product prices are in USD but we would like to get the price ranges in EURO. We can use value script to convert the prices prior the aggregation (assuming conversion rate of 0.8)",
"filename": "docs/reference/aggregations/bucket/range-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -358,13 +358,6 @@ Customized scores can be implemented via a script:\n --------------------------------------------------\n \n Scripts can be inline (as in above example), indexed or stored on disk. For details on the options, see <<modules-scripting, script documentation>>. \n-Parameters need to be set as follows:\n-\n-[horizontal]\n-`script`:: Inline script, name of script file or name of indexed script. Mandatory.\n-`script_type`:: One of \"inline\" (default), \"indexed\" or \"file\".\n-`lang`:: Script language (default \"groovy\")\n-`params`:: Script parameters (default empty).\n \n Available parameters in the script are\n ",
"filename": "docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -441,7 +441,27 @@ Generating the terms using a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"genders\" : {\n+ \"terms\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"gender\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n \n ==== Value Script",
"filename": "docs/reference/aggregations/bucket/terms-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -47,7 +47,29 @@ Computing the average grade based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"avg_grade\" : { \n+ \"avg\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -63,9 +85,11 @@ It turned out that the exam was way above the level of the students and a grade\n \"avg_corrected_grade\" : {\n \"avg\" : {\n \"field\" : \"grade\",\n- \"script\" : \"_value * correction\",\n- \"params\" : {\n- \"correction\" : 1.2\n+ \"script\" : {\n+ \"inline\": \"_value * correction\",\n+ \"params\" : {\n+ \"correction\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/avg-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -153,7 +153,28 @@ however since hashes need to be computed on the fly.\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"author_count\" : {\n+ \"cardinality\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"first_name_field\": \"author.first_name\",\n+ \"last_name_field\": \"author.last_name\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Missing value\n ",
"filename": "docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -91,7 +91,29 @@ Computing the grades stats based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"grades_stats\" : { \n+ \"extended_stats\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -107,9 +129,11 @@ It turned out that the exam was way above the level of the students and a grade\n \"grades_stats\" : {\n \"extended_stats\" : {\n \"field\" : \"grade\",\n- \"script\" : \"_value * correction\",\n- \"params\" : {\n- \"correction\" : 1.2\n+ \"script\" : {\n+ \"inline\": \"_value * correction\",\n+ \"params\" : {\n+ \"correction\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,27 @@ Computing the max price value across all document, this time using a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"max_price\" : { \n+ \"max\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"price\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Value Script\n \n@@ -57,9 +77,11 @@ Let's say that the prices of the documents in our index are in USD, but we would\n \"max_price_in_euros\" : {\n \"max\" : {\n \"field\" : \"price\",\n- \"script\" : \"_value * conversion_rate\",\n- \"params\" : {\n- \"conversion_rate\" : 1.2\n+ \"script\" : {\n+ \"inline\": \"_value * conversion_rate\",\n+ \"params\" : {\n+ \"conversion_rate\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/max-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,27 @@ Computing the min price value across all document, this time using a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"min_price\" : { \n+ \"min\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\": {\n+ \"field\": \"price\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Value Script\n \n@@ -57,9 +77,11 @@ Let's say that the prices of the documents in our index are in USD, but we would\n \"min_price_in_euros\" : {\n \"min\" : {\n \"field\" : \"price\",\n- \"script\" : \"_value * conversion_rate\",\n- \"params\" : {\n- \"conversion_rate\" : 1.2\n+ \"script\" : \n+ \"inline\": \"_value * conversion_rate\",\n+ \"params\" : {\n+ \"conversion_rate\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/min-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -100,9 +100,11 @@ a script to convert them on-the-fly:\n \"aggs\" : {\n \"load_time_outlier\" : {\n \"percentiles\" : {\n- \"script\" : \"doc['load_time'].value / timeUnit\", <1>\n- \"params\" : {\n- \"timeUnit\" : 1000 <2>\n+ \"script\" : {\n+ \"inline\": \"doc['load_time'].value / timeUnit\", <1>\n+ \"params\" : {\n+ \"timeUnit\" : 1000 <2>\n+ }\n }\n }\n }\n@@ -113,7 +115,27 @@ a script to convert them on-the-fly:\n script to generate values which percentiles are calculated on\n <2> Scripting supports parameterized input just like any other script\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"load_time_outlier\" : {\n+ \"percentiles\" : {\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"timeUnit\" : 1000\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n [[search-aggregations-metrics-percentile-aggregation-approximation]]\n ==== Percentiles are (usually) approximate",
"filename": "docs/reference/aggregations/metrics/percentile-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -72,9 +72,11 @@ a script to convert them on-the-fly:\n \"load_time_outlier\" : {\n \"percentile_ranks\" : {\n \"values\" : [3, 5],\n- \"script\" : \"doc['load_time'].value / timeUnit\", <1>\n- \"params\" : {\n- \"timeUnit\" : 1000 <2>\n+ \"script\" : {\n+ \"inline\": \"doc['load_time'].value / timeUnit\", <1>\n+ \"params\" : {\n+ \"timeUnit\" : 1000 <2>\n+ }\n }\n }\n }\n@@ -85,7 +87,28 @@ a script to convert them on-the-fly:\n script to generate values which percentile ranks are calculated on\n <2> Scripting supports parameterized input just like any other script\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"aggs\" : {\n+ \"load_time_outlier\" : {\n+ \"percentile_ranks\" : {\n+ \"values\" : [3, 5],\n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"timeUnit\" : 1000\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ==== Missing value\n \n@@ -108,3 +131,4 @@ had a value.\n --------------------------------------------------\n \n <1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`.\n+",
"filename": "docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -45,6 +45,42 @@ The response for the above aggregation:\n }\n --------------------------------------------------\n \n+The above example can also be specified using file scripts as follows:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"query\" : {\n+ \"match_all\" : {}\n+ },\n+ \"aggs\": {\n+ \"profit\": {\n+ \"scripted_metric\": {\n+ \"init_script\" : {\n+ \"file\": \"my_init_script\"\n+ },\n+ \"map_script\" : {\n+ \"file\": \"my_map_script\"\n+ },\n+ \"combine_script\" : {\n+ \"file\": \"my_combine_script\"\n+ },\n+ \"params\": {\n+ \"field\": \"amount\" <1>\n+ },\n+ \"reduce_script\" : {\n+ \"file\": \"my_reduce_script\"\n+ },\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+<1> script parameters for init, map and combine scripts must be specified in a global `params` object so that it can be share between the scripts\n+\n+For more details on specifying scripts see <<modules-scripting, script documentation>>. \n+\n ==== Scope of scripts\n \n The scripted metric aggregation uses scripts at 4 stages of its execution:\n@@ -225,13 +261,4 @@ params:: Optional. An object whose contents will be passed as variable\n --------------------------------------------------\n reduce_params:: Optional. An object whose contents will be passed as variables to the `reduce_script`. This can be useful to allow the user to control \n the behavior of the reduce phase. If this is not specified the variable will be undefined in the reduce_script execution.\n-lang:: Optional. The script language used for the scripts. If this is not specified the default scripting language is used.\n-init_script_file:: Optional. Can be used in place of the `init_script` parameter to provide the script using in a file.\n-init_script_id:: Optional. Can be used in place of the `init_script` parameter to provide the script using an indexed script.\n-map_script_file:: Optional. Can be used in place of the `map_script` parameter to provide the script using in a file.\n-map_script_id:: Optional. Can be used in place of the `map_script` parameter to provide the script using an indexed script.\n-combine_script_file:: Optional. Can be used in place of the `combine_script` parameter to provide the script using in a file.\n-combine_script_id:: Optional. Can be used in place of the `combine_script` parameter to provide the script using an indexed script.\n-reduce_script_file:: Optional. Can be used in place of the `reduce_script` parameter to provide the script using in a file.\n-reduce_script_id:: Optional. Can be used in place of the `reduce_script` parameter to provide the script using an indexed script.\n ",
"filename": "docs/reference/aggregations/metrics/scripted-metric-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -53,7 +53,29 @@ Computing the grades stats based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"grades_stats\" : {\n+ \"stats\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"field\" : \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -69,9 +91,11 @@ It turned out that the exam was way above the level of the students and a grade\n \"grades_stats\" : {\n \"stats\" : {\n \"field\" : \"grade\",\n- \"script\" : \"_value * correction\",\n- \"params\" : {\n- \"correction\" : 1.2\n+ \"script\" : \n+ \"inline\": \"_value * correction\",\n+ \"params\" : {\n+ \"correction\" : 1.2\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/stats-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -55,7 +55,29 @@ Computing the intraday return based on a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"intraday_return\" : { \n+ \"sum\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"field\" : \"change\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.\n \n ===== Value Script\n \n@@ -71,7 +93,8 @@ Computing the sum of squares over all stock tick changes:\n \"daytime_return\" : {\n \"sum\" : {\n \"field\" : \"change\",\n- \"script\" : \"_value * _value\" }\n+ \"script\" : \"_value * _value\"\n+ }\n }\n }\n }",
"filename": "docs/reference/aggregations/metrics/sum-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -48,4 +48,26 @@ Counting the values generated by a script:\n }\n --------------------------------------------------\n \n-TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.\n+This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a file script use the following syntax:\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ ...,\n+\n+ \"aggs\" : {\n+ \"grades_count\" : { \n+ \"value_count\" : { \n+ \"script\" : {\n+ \"file\": \"my_script\",\n+ \"params\" : {\n+ \"field\" : \"grade\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+TIP: for indexed scripts replace the `file` parameter with an `id` parameter.",
"filename": "docs/reference/aggregations/metrics/valuecount-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -187,7 +187,7 @@ the options. Curl example with update actions:\n { \"update\" : {\"_id\" : \"1\", \"_type\" : \"type1\", \"_index\" : \"index1\", \"_retry_on_conflict\" : 3} }\n { \"doc\" : {\"field\" : \"value\"} }\n { \"update\" : { \"_id\" : \"0\", \"_type\" : \"type1\", \"_index\" : \"index1\", \"_retry_on_conflict\" : 3} }\n-{ \"script\" : \"ctx._source.counter += param1\", \"lang\" : \"js\", \"params\" : {\"param1\" : 1}, \"upsert\" : {\"counter\" : 1}}\n+{ \"script\" : { \"inline\": \"ctx._source.counter += param1\", \"lang\" : \"js\", \"params\" : {\"param1\" : 1}}, \"upsert\" : {\"counter\" : 1}}\n { \"update\" : {\"_id\" : \"2\", \"_type\" : \"type1\", \"_index\" : \"index1\", \"_retry_on_conflict\" : 3} }\n { \"doc\" : {\"field\" : \"value\"}, \"doc_as_upsert\" : true }\n --------------------------------------------------",
"filename": "docs/reference/docs/bulk.asciidoc",
"status": "modified"
},
{
"diff": "@@ -28,9 +28,11 @@ Now, we can execute a script that would increment the counter:\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.counter += count\",\n- \"params\" : {\n- \"count\" : 4\n+ \"script\" : {\n+ \"inline\": \"ctx._source.counter += count\",\n+ \"params\" : {\n+ \"count\" : 4\n+ }\n }\n }'\n --------------------------------------------------\n@@ -41,9 +43,11 @@ will still add it, since its a list):\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.tags += tag\",\n- \"params\" : {\n- \"tag\" : \"blue\"\n+ \"script\" : {\n+ \"inline\": \"ctx._source.tags += tag\",\n+ \"params\" : {\n+ \"tag\" : \"blue\"\n+ }\n }\n }'\n --------------------------------------------------\n@@ -71,9 +75,11 @@ And, we can delete the doc if the tags contain blue, or ignore (noop):\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.tags.contains(tag) ? ctx.op = \\\"delete\\\" : ctx.op = \\\"none\\\"\",\n- \"params\" : {\n- \"tag\" : \"blue\"\n+ \"script\" : {\n+ \"inline\": \"ctx._source.tags.contains(tag) ? ctx.op = \\\"delete\\\" : ctx.op = \\\"none\\\"\",\n+ \"params\" : {\n+ \"tag\" : \"blue\"\n+ }\n }\n }'\n --------------------------------------------------\n@@ -136,9 +142,11 @@ index the fresh doc:\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{\n- \"script\" : \"ctx._source.counter += count\",\n- \"params\" : {\n- \"count\" : 4\n+ \"script\" : {\n+ \"inline\": \"ctx._source.counter += count\",\n+ \"params\" : {\n+ \"count\" : 4\n+ }\n },\n \"upsert\" : {\n \"counter\" : 1\n@@ -153,13 +161,15 @@ new `scripted_upsert` parameter with the value `true`.\n [source,js]\n --------------------------------------------------\n curl -XPOST 'localhost:9200/sessions/session/dh3sgudg8gsrgl/_update' -d '{\n- \"script_id\" : \"my_web_session_summariser\",\n \"scripted_upsert\":true,\n- \"params\" : {\n- \"pageViewEvent\" : {\n- \t\"url\":\"foo.com/bar\",\n- \t\"response\":404,\n- \t\"time\":\"2014-01-01 12:32\"\n+ \"script\" : {\n+ \"id\": \"my_web_session_summariser\",\n+ \"params\" : {\n+ \"pageViewEvent\" : {\n+ \"url\":\"foo.com/bar\",\n+ \"response\":404,\n+ \"time\":\"2014-01-01 12:32\"\n+ }\n }\n },\n \"upsert\" : {",
"filename": "docs/reference/docs/update.asciidoc",
"status": "modified"
},
{
"diff": "@@ -10,11 +10,13 @@ field. Example:\n {\n \"example\" : {\n \"transform\" : {\n- \"script\" : \"if (ctx._source['title']?.startsWith('t')) ctx._source['suggest'] = ctx._source['content']\",\n- \"params\" : {\n- \"variable\" : \"not used but an example anyway\"\n- },\n- \"lang\": \"groovy\"\n+ \"script\" : {\n+ \"inline\": \"if (ctx._source['title']?.startsWith('t')) ctx._source['suggest'] = ctx._source['content']\",\n+ \"params\" : {\n+ \"variable\" : \"not used but an example anyway\"\n+ },\n+ \"lang\": \"groovy\"\n+ }\n },\n \"properties\": {\n \"title\": { \"type\": \"string\" },",
"filename": "docs/reference/mapping/transform.asciidoc",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,7 @@ GET /_search\n {\n \"script_fields\": {\n \"my_field\": {\n- \"script\": \"1 + my_var\",\n+ \"inline\": \"1 + my_var\",\n \"params\": {\n \"my_var\": 2\n }\n@@ -38,7 +38,7 @@ GET /_search\n }\n -----------------------------------\n \n-Save the contents of the script as a file called `config/scripts/my_script.groovy`\n+Save the contents of the `inline` field as a file called `config/scripts/my_script.groovy`\n on every data node in the cluster:\n \n [source,js]\n@@ -54,7 +54,7 @@ GET /_search\n {\n \"script_fields\": {\n \"my_field\": {\n- \"script_file\": \"my_script\",\n+ \"file\": \"my_script\",\n \"params\": {\n \"my_var\": 2\n }\n@@ -67,9 +67,9 @@ GET /_search\n \n \n Additional `lang` plugins are provided to allow to execute scripts in\n-different languages. All places where a `script` parameter can be used, a `lang` parameter\n-(on the same level) can be provided to define the language of the\n-script. The following are the supported scripting languages:\n+different languages. All places where a script can be used, a `lang` parameter\n+can be provided to define the language of the script. The following are the \n+supported scripting languages:\n \n [cols=\"<,<,<\",options=\"header\",]\n |=======================================================================\n@@ -120,7 +120,7 @@ curl -XPOST localhost:9200/_search -d '{\n {\n \"script_score\": {\n \"lang\": \"groovy\",\n- \"script_file\": \"calculate-score\",\n+ \"file\": \"calculate-score\",\n \"params\": {\n \"my_modifier\": 8\n }\n@@ -162,8 +162,8 @@ curl -XPOST localhost:9200/_scripts/groovy/indexedCalculateScore -d '{\n This will create a document with id: `indexedCalculateScore` and type: `groovy` in the\n `.scripts` index. The type of the document is the language used by the script.\n \n-This script can be accessed at query time by appending `_id` to\n-the script parameter and passing the script id. So `script` becomes `script_id`.:\n+This script can be accessed at query time by using the `id` script parameter and passing \n+the script id:\n \n [source,js]\n --------------------------------------------------\n@@ -178,7 +178,7 @@ curl -XPOST localhost:9200/_search -d '{\n \"functions\": [\n {\n \"script_score\": {\n- \"script_id\": \"indexedCalculateScore\",\n+ \"id\": \"indexedCalculateScore\",\n \"lang\" : \"groovy\",\n \"params\": {\n \"my_modifier\": 8",
"filename": "docs/reference/modules/scripting.asciidoc",
"status": "modified"
},
{
"diff": "@@ -120,12 +120,14 @@ script, and provide parameters to it:\n [source,js]\n --------------------------------------------------\n \"script_score\": {\n- \"lang\": \"lang\",\n- \"params\": {\n- \"param1\": value1,\n- \"param2\": value2\n- },\n- \"script\": \"_score * doc['my_numeric_field'].value / pow(param1, param2)\"\n+ \"script\": {\n+ \"lang\": \"lang\",\n+ \"params\": {\n+ \"param1\": value1,\n+ \"param2\": value2\n+ },\n+ \"inline\": \"_score * doc['my_numeric_field'].value / pow(param1, param2)\"\n+ }\n }\n --------------------------------------------------\n ",
"filename": "docs/reference/query-dsl/function-score-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -34,9 +34,11 @@ to use the ability to pass parameters to the script itself, for example:\n }, \n \"filter\" : {\n \"script\" : {\n- \"script\" : \"doc['num1'].value > param1\"\n- \"params\" : {\n- \"param1\" : 5\n+ \"script\" : {\n+ \"inline\" : \"doc['num1'].value > param1\"\n+ \"params\" : {\n+ \"param1\" : 5\n+ }\n }\n }\n }",
"filename": "docs/reference/query-dsl/script-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -12,7 +12,7 @@ GET /_search\n {\n \"query\": {\n \"template\": {\n- \"query\": { \"match\": { \"text\": \"{{query_string}}\" }},\n+ \"inline\": { \"match\": { \"text\": \"{{query_string}}\" }},\n \"params\" : {\n \"query_string\" : \"all about search\"\n }\n@@ -45,7 +45,7 @@ GET /_search\n {\n \"query\": {\n \"template\": {\n- \"query\": \"{ \\\"match\\\": { \\\"text\\\": \\\"{{query_string}}\\\" }}\", <1>\n+ \"inline\": \"{ \\\"match\\\": { \\\"text\\\": \\\"{{query_string}}\\\" }}\", <1>\n \"params\" : {\n \"query_string\" : \"all about search\"\n }",
"filename": "docs/reference/query-dsl/template-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -15,9 +15,11 @@ evaluation>> (based on different fields) for each hit, for example:\n \"script\" : \"doc['my_field_name'].value * 2\"\n },\n \"test2\" : {\n- \"script\" : \"doc['my_field_name'].value * factor\",\n- \"params\" : {\n- \"factor\" : 2.0\n+ \"script\" : {\n+ \"inline\": \"doc['my_field_name'].value * factor\",\n+ \"params\" : {\n+ \"factor\" : 2.0\n+ }\n }\n }\n }",
"filename": "docs/reference/search/request/script-fields.asciidoc",
"status": "modified"
},
{
"diff": "@@ -318,10 +318,12 @@ Allow to sort based on custom scripts, here is an example:\n },\n \"sort\" : {\n \"_script\" : {\n- \"script\" : \"doc['field_name'].value * factor\",\n \"type\" : \"number\",\n- \"params\" : {\n- \"factor\" : 1.1\n+ \"script\" : {\n+ \"inline\": \"doc['field_name'].value * factor\",\n+ \"params\" : {\n+ \"factor\" : 1.1\n+ }\n },\n \"order\" : \"asc\"\n }",
"filename": "docs/reference/search/request/sort.asciidoc",
"status": "modified"
},
{
"diff": "@@ -8,7 +8,7 @@ before they are executed and fill existing templates with template parameters.\n ------------------------------------------\n GET /_search/template\n {\n- \"template\" : {\n+ \"inline\" : {\n \"query\": { \"match\" : { \"{{my_field}}\" : \"{{my_value}}\" } },\n \"size\" : \"{{my_size}}\"\n },\n@@ -40,7 +40,7 @@ disable scripts per language, source and operation as described in\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n+ \"inline\": {\n \"query\": {\n \"match\": {\n \"title\": \"{{query_string}}\"\n@@ -60,7 +60,7 @@ GET /_search/template\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n+ \"inline\": {\n \"query\": {\n \"terms\": {\n \"status\": [\n@@ -97,7 +97,7 @@ A default value is written as `{{var}}{{^var}}default{{/var}}` for instance:\n [source,js]\n ------------------------------------------\n {\n- \"template\": {\n+ \"inline\": {\n \"query\": {\n \"range\": {\n \"line_no\": {\n@@ -212,7 +212,7 @@ via the REST API, should be written as a string:\n \n [source,json]\n --------------------\n-\"template\": \"{\\\"query\\\":{\\\"filtered\\\":{\\\"query\\\":{\\\"match\\\":{\\\"line\\\":\\\"{{text}}\\\"}},\\\"filter\\\":{{{#line_no}}\\\"range\\\":{\\\"line_no\\\":{{{#start}}\\\"gte\\\":\\\"{{start}}\\\"{{#end}},{{/end}}{{/start}}{{#end}}\\\"lte\\\":\\\"{{end}}\\\"{{/end}}}}{{/line_no}}}}}}\"\n+\"inline\": \"{\\\"query\\\":{\\\"filtered\\\":{\\\"query\\\":{\\\"match\\\":{\\\"line\\\":\\\"{{text}}\\\"}},\\\"filter\\\":{{{#line_no}}\\\"range\\\":{\\\"line_no\\\":{{{#start}}\\\"gte\\\":\\\"{{start}}\\\"{{#end}},{{/end}}{{/start}}{{#end}}\\\"lte\\\":\\\"{{end}}\\\"{{/end}}}}{{/line_no}}}}}}\"\n --------------------\n \n ==================================\n@@ -229,9 +229,7 @@ In order to execute the stored template, reference it by it's name under the `te\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n- \"file\": \"storedTemplate\" <1>\n- },\n+ \"file\": \"storedTemplate\", <1>\n \"params\": {\n \"query_string\": \"search for these words\"\n }\n@@ -293,9 +291,7 @@ To use an indexed template at search time use:\n ------------------------------------------\n GET /_search/template\n {\n- \"template\": {\n- \"id\": \"templateName\" <1>\n- },\n+ \"id\": \"templateName\", <1>\n \"params\": {\n \"query_string\": \"search for these words\"\n }",
"filename": "docs/reference/search/search-template.asciidoc",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,12 @@\n package org.elasticsearch.action.bulk;\n \n import com.google.common.collect.Lists;\n-import org.elasticsearch.action.*;\n+\n+import org.elasticsearch.action.ActionRequest;\n+import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.action.CompositeIndicesRequest;\n+import org.elasticsearch.action.IndicesRequest;\n+import org.elasticsearch.action.WriteConsistencyLevel;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.update.UpdateRequest;\n@@ -140,7 +145,7 @@ BulkRequest internalAdd(UpdateRequest request, @Nullable Object payload) {\n sizeInBytes += request.upsertRequest().source().length();\n }\n if (request.script() != null) {\n- sizeInBytes += request.script().length() * 2;\n+ sizeInBytes += request.script().getScript().length() * 2;\n }\n return this;\n }",
"filename": "src/main/java/org/elasticsearch/action/bulk/BulkRequest.java",
"status": "modified"
},
{
"diff": "@@ -35,11 +35,13 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.ScriptService.ScriptType;\n+import org.elasticsearch.script.Template;\n+import org.elasticsearch.script.mustache.MustacheScriptEngineService;\n import org.elasticsearch.search.Scroll;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n \n import java.io.IOException;\n-import java.util.Collections;\n import java.util.Map;\n \n import static org.elasticsearch.search.Scroll.readScroll;\n@@ -69,9 +71,7 @@ public class SearchRequest extends ActionRequest<SearchRequest> implements Indic\n private String preference;\n \n private BytesReference templateSource;\n- private String templateName;\n- private ScriptService.ScriptType templateType;\n- private Map<String, Object> templateParams = Collections.emptyMap();\n+ private Template template;\n \n private BytesReference source;\n \n@@ -100,9 +100,7 @@ public SearchRequest(SearchRequest searchRequest, ActionRequest originalRequest)\n this.routing = searchRequest.routing;\n this.preference = searchRequest.preference;\n this.templateSource = searchRequest.templateSource;\n- this.templateName = searchRequest.templateName;\n- this.templateType = searchRequest.templateType;\n- this.templateParams = searchRequest.templateParams;\n+ this.template = searchRequest.template;\n this.source = searchRequest.source;\n this.extraSource = searchRequest.extraSource;\n this.queryCache = searchRequest.queryCache;\n@@ -389,43 +387,93 @@ public SearchRequest templateSource(String template) {\n return this;\n }\n \n+ /**\n+ * The stored template\n+ */\n+ public void template(Template template) {\n+ this.template = template;\n+ }\n+\n+ /**\n+ * The stored template\n+ */\n+ public Template template() {\n+ return template;\n+ }\n+\n /**\n * The name of the stored template\n+ * \n+ * @deprecated use {@link #template(Template))} instead.\n */\n+ @Deprecated\n public void templateName(String templateName) {\n- this.templateName = templateName;\n+ updateOrCreateScript(templateName, null, null, null);\n }\n \n+ /**\n+ * The type of the stored template\n+ * \n+ * @deprecated use {@link #template(Template))} instead.\n+ */\n+ @Deprecated\n public void templateType(ScriptService.ScriptType templateType) {\n- this.templateType = templateType;\n+ updateOrCreateScript(null, templateType, null, null);\n }\n \n /**\n * Template parameters used for rendering\n+ * \n+ * @deprecated use {@link #template(Template))} instead.\n */\n+ @Deprecated\n public void templateParams(Map<String, Object> params) {\n- this.templateParams = params;\n+ updateOrCreateScript(null, null, null, params);\n }\n \n /**\n * The name of the stored template\n+ * \n+ * @deprecated use {@link #template()} instead.\n */\n+ @Deprecated\n public String templateName() {\n- return templateName;\n+ return template == null ? null : template.getScript();\n }\n \n /**\n * The name of the stored template\n+ * \n+ * @deprecated use {@link #template()} instead.\n */\n+ @Deprecated\n public ScriptService.ScriptType templateType() {\n- return templateType;\n+ return template == null ? null : template.getType();\n }\n \n /**\n * Template parameters used for rendering\n+ * \n+ * @deprecated use {@link #template()} instead.\n */\n+ @Deprecated\n public Map<String, Object> templateParams() {\n- return templateParams;\n+ return template == null ? 
null : template.getParams();\n+ }\n+\n+ private void updateOrCreateScript(String templateContent, ScriptType type, String lang, Map<String, Object> params) {\n+ Template template = template();\n+ if (template == null) {\n+ template = new Template(templateContent == null ? \"\" : templateContent, type == null ? ScriptType.INLINE : type, lang, null,\n+ params);\n+ } else {\n+ String newTemplateContent = templateContent == null ? template.getScript() : templateContent;\n+ ScriptType newTemplateType = type == null ? template.getType() : type;\n+ String newTemplateLang = lang == null ? template.getLang() : lang;\n+ Map<String, Object> newTemplateParams = params == null ? template.getParams() : params;\n+ template = new Template(newTemplateContent, newTemplateType, MustacheScriptEngineService.NAME, null, newTemplateParams);\n+ }\n+ template(template);\n }\n \n /**\n@@ -517,10 +565,8 @@ public void readFrom(StreamInput in) throws IOException {\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n \n templateSource = in.readBytesReference();\n- templateName = in.readOptionalString();\n- templateType = ScriptService.ScriptType.readFrom(in);\n if (in.readBoolean()) {\n- templateParams = (Map<String, Object>) in.readGenericValue();\n+ template = Template.readTemplate(in);\n }\n queryCache = in.readOptionalBoolean();\n }\n@@ -550,12 +596,10 @@ public void writeTo(StreamOutput out) throws IOException {\n indicesOptions.writeIndicesOptions(out);\n \n out.writeBytesReference(templateSource);\n- out.writeOptionalString(templateName);\n- ScriptService.ScriptType.writeTo(templateType, out);\n- boolean existTemplateParams = templateParams != null;\n- out.writeBoolean(existTemplateParams);\n- if (existTemplateParams) {\n- out.writeGenericValue(templateParams);\n+ boolean hasTemplate = template != null;\n+ out.writeBoolean(hasTemplate);\n+ if (hasTemplate) {\n+ template.writeTo(out);\n }\n \n out.writeOptionalBoolean(queryCache);",
"filename": "src/main/java/org/elasticsearch/action/search/SearchRequest.java",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,9 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.index.query.QueryBuilder;\n+import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.Template;\n import org.elasticsearch.search.Scroll;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n@@ -423,33 +425,60 @@ public SearchRequestBuilder addFieldDataField(String name) {\n * @param name The name that will represent this value in the return hit\n * @param script The script to use\n */\n- public SearchRequestBuilder addScriptField(String name, String script) {\n+ public SearchRequestBuilder addScriptField(String name, Script script) {\n sourceBuilder().scriptField(name, script);\n return this;\n }\n \n /**\n- * Adds a script based field to load and return. The field does not have to be stored,\n- * but its recommended to use non analyzed or numeric fields.\n+ * Adds a script based field to load and return. The field does not have to\n+ * be stored, but its recommended to use non analyzed or numeric fields.\n *\n- * @param name The name that will represent this value in the return hit\n- * @param script The script to use\n- * @param params Parameters that the script can use.\n+ * @param name\n+ * The name that will represent this value in the return hit\n+ * @param script\n+ * The script to use\n+ * @deprecated Use {@link #addScriptField(String, Script)} instead.\n */\n+ @Deprecated\n+ public SearchRequestBuilder addScriptField(String name, String script) {\n+ sourceBuilder().scriptField(name, script);\n+ return this;\n+ }\n+\n+ /**\n+ * Adds a script based field to load and return. The field does not have to\n+ * be stored, but its recommended to use non analyzed or numeric fields.\n+ *\n+ * @param name\n+ * The name that will represent this value in the return hit\n+ * @param script\n+ * The script to use\n+ * @param params\n+ * Parameters that the script can use.\n+ * @deprecated Use {@link #addScriptField(String, Script)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder addScriptField(String name, String script, Map<String, Object> params) {\n sourceBuilder().scriptField(name, script, params);\n return this;\n }\n \n /**\n- * Adds a script based field to load and return. The field does not have to be stored,\n- * but its recommended to use non analyzed or numeric fields.\n+ * Adds a script based field to load and return. 
The field does not have to\n+ * be stored, but its recommended to use non analyzed or numeric fields.\n *\n- * @param name The name that will represent this value in the return hit\n- * @param lang The language of the script\n- * @param script The script to use\n- * @param params Parameters that the script can use (can be <tt>null</tt>).\n- */\n+ * @param name\n+ * The name that will represent this value in the return hit\n+ * @param lang\n+ * The language of the script\n+ * @param script\n+ * The script to use\n+ * @param params\n+ * Parameters that the script can use (can be <tt>null</tt>).\n+ * @deprecated Use {@link #addScriptField(String, Script)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder addScriptField(String name, String lang, String script, Map<String, Object> params) {\n sourceBuilder().scriptField(name, lang, script, params);\n return this;\n@@ -939,16 +968,33 @@ public SearchRequestBuilder setExtraSource(Map source) {\n * template stuff\n */\n \n+ public SearchRequestBuilder setTemplate(Template template) {\n+ request.template(template);\n+ return this;\n+ }\n+\n+ /**\n+ * @deprecated Use {@link #setTemplate(Template)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder setTemplateName(String templateName) {\n request.templateName(templateName);\n return this;\n }\n \n+ /**\n+ * @deprecated Use {@link #setTemplate(Template)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder setTemplateType(ScriptService.ScriptType templateType) {\n request.templateType(templateType);\n return this;\n }\n \n+ /**\n+ * @deprecated Use {@link #setTemplate(Template)} instead.\n+ */\n+ @Deprecated\n public SearchRequestBuilder setTemplateParams(Map<String, Object> templateParams) {\n request.templateParams(templateParams);\n return this;",
"filename": "src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java",
"status": "modified"
}
]
} |
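The diff above folds the deprecated `templateName`/`templateType`/`templateParams` trio into a single `Template` object and steers `addScriptField` towards a `Script` argument. As an illustration only (not part of the PR), a minimal caller-side migration sketch, assuming the 1.x-era `Template` constructor shown in the diff (`content, type, lang, contentType, params`) and `SearchRequest#template(Template)`; the index name, template id and parameter below are placeholders.

```java
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.script.ScriptService.ScriptType;
import org.elasticsearch.script.Template;

public class TemplateMigrationSketch {

    public static SearchRequest templatedSearch() {
        Map<String, Object> params = new HashMap<>();
        params.put("field_value", "foo"); // placeholder parameter for the template

        // Old style: request.templateName(...), request.templateType(...), request.templateParams(...)
        // -- all three are deprecated by the change above.
        // New style: a single Template value object, mirroring the constructor used in the diff.
        Template template = new Template("my_template", ScriptType.INDEXED, "mustache", null, params);

        SearchRequest request = new SearchRequest("my_index");
        request.template(template);
        return request;
    }
}
```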
{
"body": "I'm getting a `java.lang.IllegalStateException: from state (0) already had transitions added` exception if i try to index certain documents with a mapping that has fields with `\"type\": \"completion\"`.\n\nThis happens at least in Version 1.5.1 and 1.5.2 and seems to work correctly up to 1.3.9.\n\nFirst i create a new index:\n`curl -XPUT localhost:9200/ses_firma/`\nThen i add the mapping with:\n`curl -s -XPUT localhost:9200/ses_firma/_mapping/firma --data-binary @MappingFirma.txt`\n\n```\n{\n \"firma\": {\n \"_source\": {\n \"enabled\": false\n },\n \"_id\": {\n \"path\": \"oid\",\n \"store\": true\n },\n \"properties\": {\n \"adressen\": {\n \"properties\": {\n \"ort\": {\n \"type\": \"string\"\n },\n \"postleitzahl\": {\n \"type\": \"string\"\n },\n \"strasse\": {\n \"type\": \"string\"\n },\n \"bundesland\": {\n \"include_in_all\": false,\n \"properties\": {\n \"name\": {\n \"include_in_all\": false,\n \"type\": \"string\"\n }\n }\n },\n \"postfach\": {\n \"type\": \"string\"\n },\n \"land\": {\n \"include_in_all\": false,\n \"properties\": {\n \"iso\": {\n \"include_in_all\": false,\n \"type\": \"string\"\n },\n \"archiv\": {\n \"type\": \"boolean\"\n },\n \"name\": {\n \"include_in_all\": false,\n \"type\": \"string\"\n }\n }\n },\n \"oid\": {\n \"type\": \"string\"\n },\n \"postfachplz\": {\n \"type\": \"string\"\n }\n }\n },\n \"bemerkung\": {\n \"type\": \"string\",\n \"fields\": {\n \"suggest\": {\n \"max_input_length\": 50,\n \"payloads\": false,\n \"analyzer\": \"simple\",\n \"context\": {\n \"type_context\": {\n \"path\": \"_type\",\n \"default\": [\n \"*\"\n ,\n \"firma\"\n ],\n \"type\": \"category\"\n }\n },\n \"preserve_position_increments\": true,\n \"type\": \"completion\",\n \"preserve_separators\": true\n }\n }\n },\n \"standardadresse\": {\n \"properties\": {\n \"ort\": {\n \"type\": \"string\"\n },\n \"postleitzahl\": {\n \"type\": \"string\"\n },\n \"strasse\": {\n \"type\": \"string\"\n },\n \"bundesland\": {\n \"include_in_all\": false,\n \"properties\": {\n \"name\": {\n \"include_in_all\": false,\n \"type\": \"string\"\n }\n }\n },\n \"postfach\": {\n \"type\": \"string\"\n },\n \"land\": {\n \"include_in_all\": false,\n \"properties\": {\n \"iso\": {\n \"include_in_all\": false,\n \"type\": \"string\"\n },\n \"archiv\": {\n \"type\": \"boolean\"\n },\n \"name\": {\n \"include_in_all\": false,\n \"type\": \"string\"\n }\n }\n },\n \"oid\": {\n \"type\": \"string\"\n },\n \"postfachplz\": {\n \"type\": \"string\"\n }\n }\n },\n \"abkuerzung\": {\n \"type\": \"string\",\n \"fields\": {\n \"suggest\": {\n \"max_input_length\": 50,\n \"payloads\": false,\n \"analyzer\": \"simple\",\n \"context\": {\n \"type_context\": {\n \"path\": \"_type\",\n \"default\": [\n \"*\"\n ,\n \"firma\"\n ],\n \"type\": \"category\"\n }\n },\n \"preserve_position_increments\": true,\n \"type\": \"completion\",\n \"preserve_separators\": true\n }\n }\n },\n \"telefonnummern\": {\n \"properties\": {\n \"nummer\": {\n \"type\": \"string\"\n }\n }\n },\n \"oid\": {\n \"type\": \"string\"\n },\n \"email\": {\n \"type\": \"string\",\n \"fields\": {\n \"suggest\": {\n \"max_input_length\": 50,\n \"payloads\": false,\n \"analyzer\": \"keyword\",\n \"context\": {\n \"type_context\": {\n \"path\": \"_type\",\n \"default\": [\n \"*\"\n ,\n \"firma\"\n ],\n \"type\": \"category\"\n }\n },\n \"preserve_position_increments\": true,\n \"type\": \"completion\",\n \"preserve_separators\": true\n }\n }\n },\n \"firmenname\": {\n \"type\": \"string\",\n \"fields\": {\n \"suggest\": {\n 
\"max_input_length\": 50,\n \"payloads\": false,\n \"analyzer\": \"simple\",\n \"context\": {\n \"type_context\": {\n \"path\": \"_type\",\n \"default\": [\n \"*\"\n ,\n \"firma\"\n ],\n \"type\": \"category\"\n }\n },\n \"preserve_position_increments\": true,\n \"type\": \"completion\",\n \"preserve_separators\": true\n }\n }\n }\n }\n }\n}\n```\n\nThen indexing the following documents:\n`curl -s -XPOST localhost:9200/_bulk --data-binary @BulkError.txt`\n\n```\n{\"index\":{\"_index\":\"ses_firma\",\"_type\":\"firma\",\"_version_type\":\"external_gte\",\"_version\":3}}\n{\"firmenname\":\"Alfred Reiter Bau GmbH\",\"abkuerzung\":\"ROTTMEIER\",\"email\":\"\",\"adressen\":[{\"strasse\":\"Salvatorbergstr. 21\",\"postleitzahl\":\"84048\",\"ort\":\"Mainburg\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"452d44ff-b34d-4f40-b5be-5169b2621960\"}],\"standardadresse\":{\"strasse\":\"Salvatorbergstr. 21\",\"postleitzahl\":\"84048\",\"ort\":\"Mainburg\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"452d44ff-b34d-4f40-b5be-5169b2621960\"},\"telefonnummern\":[{\"nummer\":\"08751/5171\"},{\"nummer\":\"08751/9400\"},{\"nummer\":\"0170/7369223\"},{\"nummer\":\"08751/9400\"},{\"nummer\":\"0170/2847356 Alfred\"}],\"bemerkung\":\"info@reiter-bau.de\",\"oid\":\"40d50149-aafa-4727-9c9b-0b95ae41d4bb\"}\n{\"index\":{\"_index\":\"ses_firma\",\"_type\":\"firma\",\"_version_type\":\"external_gte\",\"_version\":3}}\n{\"firmenname\":\"Volkswagen Bankdirect\",\"abkuerzung\":\"VOLKSWAGEN\",\"email\":\"\",\"adressen\":[{\"strasse\":\"Gifthorner Str. 57\",\"postleitzahl\":\"38112\",\"ort\":\"Braunschweig\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Niedersachsen\"},\"oid\":\"60672aff-dbb4-4291-9b3f-fd2ebc177cb6\"}],\"standardadresse\":{\"strasse\":\"Gifthorner Str. 57\",\"postleitzahl\":\"38112\",\"ort\":\"Braunschweig\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Niedersachsen\"},\"oid\":\"60672aff-dbb4-4291-9b3f-fd2ebc177cb6\"},\"telefonnummern\":[{\"nummer\":\"0531/2121732\"},{\"nummer\":\"0531/2122836\"},{\"nummer\":\"0531/2122880\"}],\"bemerkung\":\"VOLKSWAGEN\",\"oid\":\"2c1dc58b-e5a2-4513-aea6-1327714b07b8\"}\n{\"index\":{\"_index\":\"ses_firma\",\"_type\":\"firma\",\"_version_type\":\"external_gte\",\"_version\":3}}\n{\"firmenname\":\"Die Bayerische\",\"abkuerzung\":\"BBV\",\"email\":\"\",\"adressen\":[{\"strasse\":\"Thomas-Dehler-Str. 25\",\"postleitzahl\":\"81737\",\"ort\":\"München\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"1482071b-c049-4802-8a40-e37aba535317\"}],\"standardadresse\":{\"strasse\":\"Thomas-Dehler-Str. 25\",\"postleitzahl\":\"81737\",\"ort\":\"München\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"1482071b-c049-4802-8a40-e37aba535317\"},\"telefonnummern\":[{\"nummer\":\"Kfz 089/6787-2222\"},{\"nummer\":\"089/6787-0\"},{\"nummer\":\"089/6787-9150\"}],\"bemerkung\":\"BBV\",\"oid\":\"2786c191-b882-4c2d-a61f-1f962f74cdcd\"}\n```\n\nand i get the aforementioned error on all three documents. There are documents that work correctly but i can cut out all content (e. g. 
\"firmenname\":\"\") from all fields of these documents from above apart from the oid field and i nevertheless get the exception.\n\nHere is the call stack for one of the errors from the log file:\n\n```\n[2015-05-05 11:05:32,818][DEBUG][action.bulk ] [Amina Synge] [ses_firma][0] failed to execute bulk item (index) index {[ses_firma][firma][40d50149-aafa-4727-9c9b-0b95ae41d4bb], source[{\"firmenname\":\"Alfred\",\"abkuerzung\":\"\",\"email\":\"\",\"adressen\":[{\"strasse\":\"Salvatorbergstr. 21\",\"postleitzahl\":\"84048\",\"ort\":\"Mainburg\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"452d44ff-b34d-4f40-b5be-5169b2621960\"}],\"standardadresse\":{\"strasse\":\"Salvatorbergstr. 21\",\"postleitzahl\":\"84048\",\"ort\":\"Mainburg\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"452d44ff-b34d-4f40-b5be-5169b2621960\"},\"telefonnummern\":[{\"nummer\":\"08751/5171\"},{\"nummer\":\"08751/9400\"},{\"nummer\":\"0170/7369223\"},{\"nummer\":\"08751/9400\"},{\"nummer\":\"0170/2847356 Alfred\"}],\"bemerkung\":\"\",\"oid\":\"40d50149-aafa-4727-9c9b-0b95ae41d4bb\"}\n]}\norg.elasticsearch.index.engine.IndexFailedEngineException: [ses_firma][0] Index failed for [firma#40d50149-aafa-4727-9c9b-0b95ae41d4bb]\n at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:368)\n at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:498)\n at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:427)\n at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:149)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:515)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:422)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n at java.lang.Thread.run(Unknown Source)\nCaused by: java.lang.IllegalStateException: from state (0) already had transitions added\n at org.apache.lucene.util.automaton.Automaton.addTransition(Automaton.java:158)\n at org.apache.lucene.search.suggest.analyzing.XAnalyzingSuggester.replaceSep(XAnalyzingSuggester.java:302)\n at org.apache.lucene.search.suggest.analyzing.XAnalyzingSuggester.toFiniteStrings(XAnalyzingSuggester.java:932)\n at org.elasticsearch.search.suggest.completion.AnalyzingCompletionLookupProvider.toFiniteStrings(AnalyzingCompletionLookupProvider.java:371)\n at org.elasticsearch.search.suggest.completion.CompletionTokenStream.incrementToken(CompletionTokenStream.java:63)\n at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:618)\n at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359)\n at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)\n at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)\n at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:465)\n at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1526)\n at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1252)\n at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:431)\n at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:362)\n ... 8 more\n```\n\nI tried this on three different machines also with a completely new installed Elasticsearch instance.\n\nBest Regards,\nMarkus Dütting\n",
"comments": [
{
"body": "Hi @Duetting,\n\nThanks for reporting this. It seems the reason for this error is because the field `email` has an empty string. To avoid this error, you could omit the empty string completion fields from indexing.\nIdeally, this should be handled by elasticsearch, I have opened a PR for this (https://github.com/elastic/elasticsearch/pull/11158).\n\nIndexing the following works around this issue:\n\n```\n{\"index\":{\"_index\":\"ses_firma\",\"_type\":\"firma\",\"_version_type\":\"external_gte\",\"_version\":3}}\n{\"firmenname\":\"Alfred Reiter Bau GmbH\",\"abkuerzung\":\"ROTTMEIER\",\"adressen\":[{\"strasse\":\"Salvatorbergstr. 21\",\"postleitzahl\":\"84048\",\"ort\":\"Mainburg\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"452d44ff-b34d-4f40-b5be-5169b2621960\"}],\"standardadresse\":{\"strasse\":\"Salvatorbergstr. 21\",\"postleitzahl\":\"84048\",\"ort\":\"Mainburg\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"452d44ff-b34d-4f40-b5be-5169b2621960\"},\"telefonnummern\":[{\"nummer\":\"08751/5171\"},{\"nummer\":\"08751/9400\"},{\"nummer\":\"0170/7369223\"},{\"nummer\":\"08751/9400\"},{\"nummer\":\"0170/2847356 Alfred\"}],\"bemerkung\":\"info@reiter-bau.de\",\"oid\":\"40d50149-aafa-4727-9c9b-0b95ae41d4bb\"}\n{\"index\":{\"_index\":\"ses_firma\",\"_type\":\"firma\",\"_version_type\":\"external_gte\",\"_version\":3}}\n{\"firmenname\":\"Volkswagen Bankdirect\",\"abkuerzung\":\"VOLKSWAGEN\",\"adressen\":[{\"strasse\":\"Gifthorner Str. 57\",\"postleitzahl\":\"38112\",\"ort\":\"Braunschweig\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Niedersachsen\"},\"oid\":\"60672aff-dbb4-4291-9b3f-fd2ebc177cb6\"}],\"standardadresse\":{\"strasse\":\"Gifthorner Str. 57\",\"postleitzahl\":\"38112\",\"ort\":\"Braunschweig\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Niedersachsen\"},\"oid\":\"60672aff-dbb4-4291-9b3f-fd2ebc177cb6\"},\"telefonnummern\":[{\"nummer\":\"0531/2121732\"},{\"nummer\":\"0531/2122836\"},{\"nummer\":\"0531/2122880\"}],\"bemerkung\":\"VOLKSWAGEN\",\"oid\":\"2c1dc58b-e5a2-4513-aea6-1327714b07b8\"}\n{\"index\":{\"_index\":\"ses_firma\",\"_type\":\"firma\",\"_version_type\":\"external_gte\",\"_version\":3}}\n{\"firmenname\":\"Die Bayerische\",\"abkuerzung\":\"BBV\",\"adressen\":[{\"strasse\":\"Thomas-Dehler-Str. 25\",\"postleitzahl\":\"81737\",\"ort\":\"München\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"1482071b-c049-4802-8a40-e37aba535317\"}],\"standardadresse\":{\"strasse\":\"Thomas-Dehler-Str. 25\",\"postleitzahl\":\"81737\",\"ort\":\"München\",\"land\":{\"name\":\"Deutschland\",\"iso\":\"DEU\",\"archiv\":false},\"bundesland\":{\"name\":\"Bayern\"},\"oid\":\"1482071b-c049-4802-8a40-e37aba535317\"},\"telefonnummern\":[{\"nummer\":\"Kfz 089/6787-2222\"},{\"nummer\":\"089/6787-0\"},{\"nummer\":\"089/6787-9150\"}],\"bemerkung\":\"BBV\",\"oid\":\"2786c191-b882-4c2d-a61f-1f962f74cdcd\"}\n\n```\n\nI confirmed that the empty string values do work up to v1.3.9. I will have to dig deeper to figure out what has changed in the meantime.\n",
"created_at": "2015-05-14T02:20:02Z"
}
],
"number": 10987,
"title": "IllegalStateException on indexing with completion suggester"
} | {
"body": "This PR ensures completion entries with an input of an empty string never gets indexed into the underlying FST. \n\nThere is no way to retrieve an entry associated with an empty string through the _suggest API, so these entries should never be part of the underlying index, as they can never be retrieved but might bloat the FST with their context, weight, payload, etc.\n\ncloses #10987\n",
"number": 11158,
"review_comments": [],
"title": "Ensure empty string completion inputs are not indexed"
} | {
"commits": [
{
"message": "Ensure empty completion entries are never indexed\n\ncloses #10987"
}
],
"files": [
{
"diff": "@@ -369,6 +369,9 @@ public Mapper parse(ParseContext context) throws IOException {\n payload = payload == null ? EMPTY : payload;\n if (surfaceForm == null) { // no surface form use the input\n for (String input : inputs) {\n+ if (input.length() == 0) {\n+ continue;\n+ }\n BytesRef suggestPayload = analyzingSuggestLookupProvider.buildPayload(new BytesRef(\n input), weight, payload);\n context.doc().add(getCompletionField(ctx, input, suggestPayload));\n@@ -377,6 +380,9 @@ public Mapper parse(ParseContext context) throws IOException {\n BytesRef suggestPayload = analyzingSuggestLookupProvider.buildPayload(new BytesRef(\n surfaceForm), weight, payload);\n for (String input : inputs) {\n+ if (input.length() == 0) {\n+ continue;\n+ }\n context.doc().add(getCompletionField(ctx, input, suggestPayload));\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/CompletionFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -435,6 +435,34 @@ public void testSimpleField() throws Exception {\n \n }\n \n+ @Test // see issue #10987\n+ public void testEmptySuggestion() throws Exception {\n+ String mapping = jsonBuilder()\n+ .startObject()\n+ .startObject(TYPE)\n+ .startObject(\"properties\")\n+ .startObject(FIELD)\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\")\n+ .startObject(\"type_context\")\n+ .field(\"path\", \"_type\")\n+ .field(\"type\", \"category\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .string();\n+\n+ assertAcked(client().admin().indices().prepareCreate(INDEX).addMapping(TYPE, mapping).get());\n+ ensureGreen();\n+\n+ client().prepareIndex(INDEX, TYPE, \"1\").setSource(FIELD, \"\")\n+ .setRefresh(true).get();\n+\n+ }\n+\n @Test\n public void testMultiValueField() throws Exception {\n assertAcked(prepareCreate(INDEX).addMapping(TYPE, createMapping(TYPE, ContextBuilder.reference(\"st\", \"category\"))));",
"filename": "src/test/java/org/elasticsearch/search/suggest/ContextSuggestSearchTests.java",
"status": "modified"
}
]
} |
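For illustration only, a self-contained sketch of the guard the diff adds in `CompletionFieldMapper#parse`: empty inputs are skipped rather than turned into completion fields, since an empty entry can never be returned by a `_suggest` prefix lookup and would only bloat the FST. The helper below is hypothetical and simply isolates the condition used in the diff.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EmptyCompletionInputSketch {

    /** Keeps only the inputs that would actually be indexed after the fix. */
    static List<String> indexableInputs(List<String> inputs) {
        List<String> kept = new ArrayList<>();
        for (String input : inputs) {
            if (input.length() == 0) {
                continue; // same check the PR adds before building the completion field
            }
            kept.add(input);
        }
        return kept;
    }

    public static void main(String[] args) {
        // The reproduction in the issue indexed documents with "email": "" into a
        // completion sub-field; after the fix the empty value is silently dropped.
        System.out.println(indexableInputs(Arrays.asList("Alfred Reiter Bau GmbH", "", "BBV")));
        // prints: [Alfred Reiter Bau GmbH, BBV]
    }
}
```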
{
"body": "FuzzyQueryBuilder is a MultiTermQueryBuilder, so it should support a `rewrite` parameter. Indeed, `rewrite` works when I specify a JSON `fuzzy` query. But I can't set it if I want to use a `FuzzyQueryBuilder`.\n\nMy workaround is to use `MatchQueryBuilder`.\n",
"comments": [
{
"body": "Thanks @adamhooper good catch, fixed in master, 1.x and 1.5 branches.\n",
"created_at": "2015-05-13T14:21:34Z"
}
],
"number": 11130,
"title": "FuzzyQueryBuilder is missing rewrite"
} | {
"body": "We parse the rewrite field in `FuzzyQueryParser` but we don't allow to set it via `FuzzyQueryBuilder` for our java api users. Added missing field and setter.\n\nSimilar problems will be resolved with the query parser refactoring that we have been working on. The parser will only return an intermediate `Streamable` object, the same object that the java api users will use directly. No more building json through java api then, which can geet out of sync with what we actually parse.\n\nCloses #11130\n",
"number": 11139,
"review_comments": [],
"title": "Add missing rewrite parameter to FuzzyQueryBuilder"
} | {
"commits": [
{
"message": "Java api: add missing rewrite parameter to FuzzyQueryBuilder\n\nWe parse the rewrite field in FuzzyQueryParser but we don't allow to set it via FuzzyQueryBuilder for our java api users. Added missing field and setter.\n\nCloses #11130\nCloses #11139"
}
],
"files": [
{
"diff": "@@ -46,6 +46,8 @@ public class FuzzyQueryBuilder extends BaseQueryBuilder implements MultiTermQuer\n //LUCENE 4 UPGRADE we need a testcase for this + documentation\n private Boolean transpositions;\n \n+ private String rewrite;\n+\n private String queryName;\n \n /**\n@@ -89,6 +91,11 @@ public FuzzyQueryBuilder transpositions(boolean transpositions) {\n return this;\n }\n \n+ public FuzzyQueryBuilder rewrite(String rewrite) {\n+ this.rewrite = rewrite;\n+ return this;\n+ }\n+\n /**\n * Sets the query name for the filter that can be used when searching for matched_filters per hit.\n */\n@@ -120,6 +127,9 @@ public void doXContent(XContentBuilder builder, Params params) throws IOExceptio\n if (maxExpansions != null) {\n builder.field(\"max_expansions\", maxExpansions);\n }\n+ if (rewrite != null) {\n+ builder.field(\"rewrite\", rewrite);\n+ }\n if (queryName != null) {\n builder.field(\"_name\", queryName);\n }",
"filename": "src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java",
"status": "modified"
}
]
} |
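As a usage note (not part of the PR), the added setter lets Java API users configure the rewrite method directly instead of falling back to `MatchQueryBuilder`. A sketch, assuming the 1.x `QueryBuilders` factory; the field name, value and rewrite mode are placeholders.

```java
import org.elasticsearch.index.query.FuzzyQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class FuzzyRewriteSketch {

    public static FuzzyQueryBuilder fuzzyWithRewrite() {
        // Before the change, "rewrite" was only reachable through raw JSON, because
        // FuzzyQueryParser parsed it but FuzzyQueryBuilder had no setter for it.
        return QueryBuilders.fuzzyQuery("user", "kimchy")
                .rewrite("constant_score_boolean");
    }
}
```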
{
"body": "When deleting a shard the node that deletes the shard first checks if all shard copies are\nstarted on other nodes. A message is sent to each node and each node checks locally for\nSTARTED or RELOCATED.\nHowever, it might happen that the shard is still in state POST_RECOVERY, like this:\n\nshard is relocating from node1 to node2\n1. relocated shard on node2 goes in POST_RECOVERY and node2 sends shard started to master\n2. master updates routing table and sends new cluster state to node1 and node2\n3. node1 processes the cluster state and asks node2 if it has the active shard\n before node2 processes the new cluster state (which would cause it to set the shard to started)\n4. node2 sends back it does not have the shard started and so node1 does not delete it\n\nThis can be avoided by waiting until cluster state that sets the shard to started is actually processed.\n\ncloses #10018\n",
"comments": [
{
"body": "cool stuff I left some comments\n",
"created_at": "2015-03-19T22:57:44Z"
},
{
"body": "thanks for the review! addressed all comments. want to take another look?\n",
"created_at": "2015-03-20T00:30:05Z"
},
{
"body": "left some 3 comments other than that LGTM\n",
"created_at": "2015-03-20T18:05:56Z"
},
{
"body": "Pushed another commit. We have to catch EsRejectedExecutionException when we try to send back whether the shard is active or not. For example, InternalNode.stop will cause ObserverClusterStateListener.onClose of the listener to be called at some point and the reject exception that this might throw is not caught anywhere it seems. Alternatively we might also consider not trying to send back a response on close?\n",
"created_at": "2015-03-21T17:44:18Z"
},
{
"body": "I think we should catch there exception in the caller of the close method - this can hit the next user of this as well?\n",
"created_at": "2015-03-23T09:43:29Z"
},
{
"body": "I like the approach. Left some comments.\n",
"created_at": "2015-03-23T10:18:49Z"
},
{
"body": "@bleskes @s1monw thanks a lot for the review! I implemented all changes except where I added a comment because I was unsure what to do. \n@s1monw about the exception handling: It seems in general unchecked exceptions are not handled when listeners are called when the cluster service closes. I can add a catch for them (see b4e88ed9a155a9cd3832403b574efbdf9db612eb) but because they are not handled anywhere I suspect there is method behind it. I'd be happy for any insight into how exceptions should be handled properly here.\n",
"created_at": "2015-03-30T10:36:37Z"
},
{
"body": "@brwe I think we should detach the exception handling problem from this issue. Yet, we should still address it. IMO we really need to make sure that all listeners are notified even if one of them threw an exception. Can you open a folllowup?\n",
"created_at": "2015-03-30T11:28:45Z"
},
{
"body": "other than that ^^ LGTM :)\n",
"created_at": "2015-03-30T11:29:40Z"
},
{
"body": "LGTM. Left some very minor comments. no need for another review cycle.\n",
"created_at": "2015-03-30T21:10:31Z"
},
{
"body": "pushed to master and 1.x (17dffe222b923c17614905515773614d6963e13e)\n",
"created_at": "2015-03-31T14:01:18Z"
}
],
"number": 10172,
"title": "Shard not deleted after relocation if relocated shard is still in post recovery"
} | {
"body": "The timeout value that was added to ShardActiveRequest in #10172 is\nalso used as a timeout for the cluster state observer. When\nstreaming from older versions this timeout value was null\nand this caused an NPE when the cluster state observer was called.\n\nI noticed this while looking at http://build-us-00.elastic.co/job/es_bwc_1x/10115/ although I do not think this caused the failure.\n",
"number": 11110,
"review_comments": [],
"title": "Fix NPE when checking for active shards before deletion"
} | {
"commits": [
{
"message": "store: fix NPE when checking for active shards before deletion\n\nThe timeout value that was added to ShardActiveRequest in #10172 is\nalso used as a timeout for the cluster state observer. When\nstreaming from older versions this timeout value was null\nand this caused an NPE when the cluster state observer was called."
}
],
"files": [
{
"diff": "@@ -69,6 +69,7 @@ public class IndicesStore extends AbstractComponent implements ClusterStateListe\n public static final String ACTION_SHARD_EXISTS = \"internal:index/shard/exists\";\n \n private static final EnumSet<IndexShardState> ACTIVE_STATES = EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED);\n+ public static final TimeValue DEFAULT_SHARD_DELETE_TIMEOUT = new TimeValue(30, TimeUnit.SECONDS);\n \n class ApplySettings implements NodeSettingsService.Listener {\n @Override\n@@ -125,7 +126,7 @@ public IndicesStore(Settings settings, NodeEnvironment nodeEnv, NodeSettingsServ\n this.rateLimitingThrottle = componentSettings.getAsBytesSize(\"throttle.max_bytes_per_sec\", new ByteSizeValue(20, ByteSizeUnit.MB));\n rateLimiting.setMaxRate(rateLimitingThrottle);\n \n- this.deleteShardTimeout = settings.getAsTime(INDICES_STORE_DELETE_SHARD_TIMEOUT, new TimeValue(30, TimeUnit.SECONDS));\n+ this.deleteShardTimeout = settings.getAsTime(INDICES_STORE_DELETE_SHARD_TIMEOUT, DEFAULT_SHARD_DELETE_TIMEOUT);\n \n logger.debug(\"using indices.store.throttle.type [{}], with index.store.throttle.max_bytes_per_sec [{}]\", rateLimitingType, rateLimitingThrottle);\n \n@@ -344,6 +345,7 @@ public String executor() {\n \n @Override\n public void messageReceived(final ShardActiveRequest request, final TransportChannel channel) throws Exception {\n+ assert request.timeout != null;\n IndexShard indexShard = getShard(request);\n // make sure shard is really there before register cluster state observer\n if (indexShard == null) {\n@@ -430,11 +432,11 @@ private IndexShard getShard(ShardActiveRequest request) {\n }\n }\n \n- private static class ShardActiveRequest extends TransportRequest {\n- protected TimeValue timeout = null;\n- private ClusterName clusterName;\n- private String indexUUID;\n- private ShardId shardId;\n+ protected static class ShardActiveRequest extends TransportRequest {\n+ protected TimeValue timeout = DEFAULT_SHARD_DELETE_TIMEOUT;\n+ protected ClusterName clusterName;\n+ protected String indexUUID;\n+ protected ShardId shardId;\n \n ShardActiveRequest() {\n }\n@@ -444,6 +446,7 @@ private static class ShardActiveRequest extends TransportRequest {\n this.indexUUID = indexUUID;\n this.clusterName = clusterName;\n this.timeout = timeout;\n+ assert timeout != null;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/indices/store/IndicesStore.java",
"status": "modified"
},
{
"diff": "@@ -29,16 +29,23 @@\n import org.elasticsearch.cluster.routing.ImmutableShardRouting;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.common.io.stream.BytesStreamInput;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.transport.LocalTransportAddress;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Before;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.Arrays;\n import java.util.HashSet;\n import java.util.Set;\n+import java.util.concurrent.TimeUnit;\n \n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n \n /**\n@@ -191,4 +198,25 @@ public void testShardCanBeDeleted_relocatingNode() throws Exception {\n assertTrue(indicesStore.shardCanBeDeleted(clusterState.build(), routingTable.build()));\n }\n \n+ public void testShardActiveRequestStreaming() throws IOException {\n+ BytesStreamOutput out = new BytesStreamOutput();\n+ Version version = randomVersion();\n+ out.setVersion(version);\n+ IndicesStore.ShardActiveRequest shardActiveRequest = new IndicesStore.ShardActiveRequest(new ClusterName(\"cluster\"), \"indexUUID\", new ShardId(\"index\", 0), new TimeValue(100));\n+ shardActiveRequest.writeTo(out);\n+ out.close();\n+ StreamInput in = new BytesStreamInput(out.bytes());\n+ in.setVersion(version);\n+ IndicesStore.ShardActiveRequest readShardActiveRequest = new IndicesStore.ShardActiveRequest();\n+ readShardActiveRequest.readFrom(in);\n+ in.close();\n+ if (version.onOrAfter(Version.V_1_5_0)) {\n+ assertThat(shardActiveRequest.timeout, equalTo(readShardActiveRequest.timeout));\n+ } else {\n+ assertThat(readShardActiveRequest.timeout, equalTo(IndicesStore.DEFAULT_SHARD_DELETE_TIMEOUT));\n+ }\n+ assertThat(shardActiveRequest.clusterName, equalTo(readShardActiveRequest.clusterName));\n+ assertThat(shardActiveRequest.indexUUID, equalTo(readShardActiveRequest.indexUUID));\n+ assertThat(shardActiveRequest.shardId, equalTo(readShardActiveRequest.shardId));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/indices/store/IndicesStoreTests.java",
"status": "modified"
}
]
} |
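The essence of the fix above is to initialise the field with the same default the node-level setting uses, so requests deserialised from pre-1.5 nodes (which never put the timeout on the wire) no longer carry a `null` into the cluster state observer. A simplified, dependency-free sketch of that pattern; the `readFrom` signature here is a hypothetical stand-in for the real `StreamInput`-based method.

```java
import java.util.concurrent.TimeUnit;

public class WireDefaultSketch {

    static final long DEFAULT_SHARD_DELETE_TIMEOUT_MILLIS = TimeUnit.SECONDS.toMillis(30);

    // Was: `timeout = null`, assigned only when the sender was new enough to stream it.
    long timeoutMillis = DEFAULT_SHARD_DELETE_TIMEOUT_MILLIS;

    /** Hypothetical stand-in for readFrom(StreamInput): older senders omit the value. */
    void readFrom(Long wireTimeoutMillis) {
        if (wireTimeoutMillis != null) {
            timeoutMillis = wireTimeoutMillis;
        }
        // else: keep the default instead of ending up with a null timeout
    }
}
```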
{
"body": "Here is a reproduction:\n\n```\nDELETE test \n\nPUT test \n{\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"loc\": {\n \"type\": \"geo_point\"\n }\n }\n }\n }\n}\n\nPUT test/test/1\n{\n \"loc\": \"1,0\"\n}\n\nGET test/_search\n{\n \"aggs\": {\n \"loc_bounds\": {\n \"geo_bounds\": {\n \"field\": \"loc\"\n }\n }\n }\n}\n```\n\nwhich returns\n\n```\n{\n \"took\": 2,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"test\",\n \"_type\": \"test\",\n \"_id\": \"1\",\n \"_score\": 1,\n \"_source\": {\n \"loc\": \"1,0\"\n }\n }\n ]\n },\n \"aggregations\": {\n \"loc_bounds\": {\n \"bounds\": {\n \"top_left\": {\n \"lat\": 1,\n \"lon\": \"Infinity\"\n },\n \"bottom_right\": {\n \"lat\": 1,\n \"lon\": \"-Infinity\"\n }\n }\n }\n }\n}\n```\n",
"comments": [],
"number": 11085,
"title": "Aggs: geo_bounds don't like when the lat or lon is equal to 0"
} | {
"body": "When the longitude is zero for a document, the left and right bounds do not get updated in the geo bounds aggregation which can cause the bounds to be returned with Infinite values for longitude\n\nCloses #11085\n",
"number": 11090,
"review_comments": [],
"title": "Fix geo bounds aggregation when longitude is 0"
} | {
"commits": [
{
"message": "Aggregations: Fix geo bounds aggregation when longitude is 0\n\nWhen the longitude is zero for a document, the left and right bounds do not get updated in the geo bounds aggregation which can cause the bounds to be returned with Infinite values for longitude\n\nCloses #11085"
}
],
"files": [
{
"diff": "@@ -115,11 +115,11 @@ public void collect(int doc, long bucket) throws IOException {\n bottom = value.lat();\n }\n double posLeft = posLefts.get(bucket);\n- if (value.lon() > 0 && value.lon() < posLeft) {\n+ if (value.lon() >= 0 && value.lon() < posLeft) {\n posLeft = value.lon();\n }\n double posRight = posRights.get(bucket);\n- if (value.lon() > 0 && value.lon() > posRight) {\n+ if (value.lon() >= 0 && value.lon() > posRight) {\n posRight = value.lon();\n }\n double negLeft = negLefts.get(bucket);",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -156,6 +156,10 @@ public void setupSuiteScopeCluster() throws Exception {\n .endObject()));\n }\n \n+ builders.add(client().prepareIndex(\"idx_zero\", \"type\").setSource(\n+ jsonBuilder().startObject().array(SINGLE_VALUED_FIELD_NAME, 0.0, 1.0).endObject()));\n+ assertAcked(prepareCreate(\"idx_zero\").addMapping(\"type\", SINGLE_VALUED_FIELD_NAME, \"type=geo_point\"));\n+\n indexRandom(true, builders);\n ensureSearchable();\n \n@@ -415,4 +419,22 @@ public void singleValuedFieldAsSubAggToHighCardTermsAgg() {\n }\n }\n \n+ @Test\n+ public void singleValuedFieldWithZeroLon() throws Exception {\n+ SearchResponse response = client().prepareSearch(\"idx_zero\")\n+ .addAggregation(geoBounds(\"geoBounds\").field(SINGLE_VALUED_FIELD_NAME).wrapLongitude(false)).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ GeoBounds geoBounds = response.getAggregations().get(\"geoBounds\");\n+ assertThat(geoBounds, notNullValue());\n+ assertThat(geoBounds.getName(), equalTo(\"geoBounds\"));\n+ GeoPoint topLeft = geoBounds.topLeft();\n+ GeoPoint bottomRight = geoBounds.bottomRight();\n+ assertThat(topLeft.lat(), equalTo(1.0));\n+ assertThat(topLeft.lon(), equalTo(0.0));\n+ assertThat(bottomRight.lat(), equalTo(1.0));\n+ assertThat(bottomRight.lon(), equalTo(0.0));\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/aggregations/metrics/GeoBoundsTests.java",
"status": "modified"
}
]
} |
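To make the off-by-boundary condition concrete (illustration only, stripped down from `GeoBoundsAggregator#collect`): with the strict `> 0` comparison a point on the prime meridian never updates the positive-longitude bounds, so they keep their `±Infinity` initial values — exactly the response shown in the issue; `>= 0` includes it.

```java
public class GeoBoundsBoundarySketch {

    public static void main(String[] args) {
        double[] lons = {0.0}; // the "1,0" document from the reproduction has lon == 0
        double posLeft = Double.POSITIVE_INFINITY;
        double posRight = Double.NEGATIVE_INFINITY;
        for (double lon : lons) {
            if (lon >= 0 && lon < posLeft) {   // was: lon > 0 && lon < posLeft
                posLeft = lon;
            }
            if (lon >= 0 && lon > posRight) {  // was: lon > 0 && lon > posRight
                posRight = lon;
            }
        }
        // With the old strict comparison this would print Infinity / -Infinity.
        System.out.println("posLeft=" + posLeft + ", posRight=" + posRight); // 0.0, 0.0
    }
}
```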
{
"body": "Our `ThreadPool` constructor creates a couple of threads (scheduler and timer) which might not get shut down if the initialization of a node fails. A guice error might occur for example, which causes the `InternalNode` constructor to throw an exception. In this case the two threads are left behind, which is not a big problem when running es standalone as the error will be intercepted and the jvm will be stopped as a whole. It can become more of a problem though when running es in embedded mode, as we'll end up with lingering threads.\n\nThe steps to reproduce are a bit exotic: create a tribe node, and make sure one of its inner client nodes initialization fails. Whether the threads are left behind or not depends on when the failure happens, whether it happens before or after the `ThreadPool` has been initialized.\n\n```\n @Test\n public void testThreadPoolLeakingThreadsWithTribeNode() {\n Settings settings = ImmutableSettings.builder()\n .put(\"node.name\", \"thread_pool_leaking_threads_tribe_node\")\n .put(\"tribe.t1.cluster.name\", \"non_existing_cluster\")\n //trigger initialization failure of one of the tribes (doesn't require starting the node)\n .put(\"tribe.t1.plugin.mandatory\", \"non_existing\").build();\n\n try {\n NodeBuilder.nodeBuilder().settings(settings).build();\n } catch(Throwable t) {\n //all good\n assertThat(t.getMessage(), containsString(\"mandatory plugins [non_existing]\"));\n }\n }\n```\n\nI think creating threads on the constructor is quite bad already, even worse since we haven't started the node yet. In general I would love to have more lightweight constructors, especially since we use guice. I think we should delay the threads creation to the actual start phase of the node, but this requires quite some changes as we currently schedule executions (`threadPool.schedule`) from different objects constructors, which need the thread pool scheduler to be already initialized.\n",
"comments": [
{
"body": "+1 on not starting threads in the constructor. So we would need to have a start() method and start threads in this start method, is it the idea you had in mind @javanna ?\n\nI'm making it an adoptme.\n",
"created_at": "2015-01-09T09:18:47Z"
},
{
"body": "> So we would need to have a start() method and start threads in this start method\n\nYes this is what I had in mind and I gave it a quick try but found various other constructors that call `threadPool.schedule()` hence they rely on the scheduler thread to be already initialized. All those `schedule` calls would need to be delayed as well, not sure if this would be it for this change though :)\n",
"created_at": "2015-01-09T09:39:45Z"
},
{
"body": "Then maybe the ThreadPool creation should be moved outside of the hands of guice? For instance we could create it ourselves, then tell guice to use the provided instance for its injection, and finally close it if the guice injection failed?\n",
"created_at": "2015-01-09T09:41:43Z"
},
{
"body": "sounds good to me!\n",
"created_at": "2015-01-09T09:45:55Z"
}
],
"number": 9107,
"title": "ThreadPool: make sure no leaking threads are left behind in case of initialization failure"
} | {
"body": "Our ThreadPool constructor creates a couple of threads (scheduler and timer) which might not get shut down if the initialization of a node fails. A guice error might occur for example, which causes the InternalNode constructor to throw an exception. In this case the two threads are left behind, which is not a big problem when running es standalone as the error will be intercepted and the jvm will be stopped as a whole. It can become more of a problem though when running es in embedded mode, as we'll end up with lingering threads or testing an handling of initialization failures.\n\nCloses #9107\n",
"number": 11061,
"review_comments": [
{
"body": "Should we add some safety that this method is called only once?\n",
"created_at": "2015-05-08T19:18:09Z"
},
{
"body": "Maybe use ThreadPool.terminate instead?\n",
"created_at": "2015-05-08T19:18:29Z"
},
{
"body": "ThreadPool.terminate?\n",
"created_at": "2015-05-08T19:19:29Z"
},
{
"body": "I don't think we need the volatile here?\n",
"created_at": "2015-05-08T20:10:10Z"
}
],
"title": "ThreadPool: make sure no leaking threads are left behind in case of initialization failure"
} | {
"commits": [
{
"message": "ThreadPool: make sure no leaking threads are left behind in case of initialization failure\n\nOur ThreadPool constructor creates a couple of threads (scheduler and timer) which might not get shut down if the initialization of a node fails. A guice error might occur for example, which causes the InternalNode constructor to throw an exception. In this case the two threads are left behind, which is not a big problem when running es standalone as the error will be intercepted and the jvm will be stopped as a whole. It can become more of a problem though when running es in embedded mode, as we'll end up with lingering threads or testing an handling of initialization failures.\n\nCloses #9107"
}
],
"files": [
{
"diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.indices.breaker.CircuitBreakerModule;\n import org.elasticsearch.monitor.MonitorService;\n import org.elasticsearch.node.internal.InternalSettingsPreparer;\n+import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.plugins.PluginsModule;\n import org.elasticsearch.plugins.PluginsService;\n import org.elasticsearch.search.TransportSearchModule;\n@@ -125,24 +126,34 @@ public TransportClient build() {\n \n CompressorFactory.configure(this.settings);\n \n- ModulesBuilder modules = new ModulesBuilder();\n- modules.add(new Version.Module(version));\n- modules.add(new PluginsModule(this.settings, pluginsService));\n- modules.add(new EnvironmentModule(environment));\n- modules.add(new SettingsModule(this.settings));\n- modules.add(new NetworkModule());\n- modules.add(new ClusterNameModule(this.settings));\n- modules.add(new ThreadPoolModule(this.settings));\n- modules.add(new TransportSearchModule());\n- modules.add(new TransportModule(this.settings));\n- modules.add(new ActionModule(true));\n- modules.add(new ClientTransportModule());\n- modules.add(new CircuitBreakerModule(this.settings));\n-\n- Injector injector = modules.createInjector();\n- injector.getInstance(TransportService.class).start();\n-\n- return new TransportClient(injector);\n+ final ThreadPool threadPool = new ThreadPool(settings);\n+\n+ boolean success = false;\n+ try {\n+ ModulesBuilder modules = new ModulesBuilder();\n+ modules.add(new Version.Module(version));\n+ modules.add(new PluginsModule(this.settings, pluginsService));\n+ modules.add(new EnvironmentModule(environment));\n+ modules.add(new SettingsModule(this.settings));\n+ modules.add(new NetworkModule());\n+ modules.add(new ClusterNameModule(this.settings));\n+ modules.add(new ThreadPoolModule(threadPool));\n+ modules.add(new TransportSearchModule());\n+ modules.add(new TransportModule(this.settings));\n+ modules.add(new ActionModule(true));\n+ modules.add(new ClientTransportModule());\n+ modules.add(new CircuitBreakerModule(this.settings));\n+\n+ Injector injector = modules.createInjector();\n+ injector.getInstance(TransportService.class).start();\n+ TransportClient transportClient = new TransportClient(injector);\n+ success = true;\n+ return transportClient;\n+ } finally {\n+ if (!success) {\n+ ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS);\n+ }\n+ }\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/client/transport/TransportClient.java",
"status": "modified"
},
{
"diff": "@@ -75,6 +75,7 @@\n import org.elasticsearch.monitor.jvm.JvmInfo;\n import org.elasticsearch.node.internal.InternalSettingsPreparer;\n import org.elasticsearch.node.internal.NodeModule;\n+import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.percolator.PercolatorModule;\n import org.elasticsearch.percolator.PercolatorService;\n import org.elasticsearch.plugins.PluginsModule;\n@@ -159,6 +160,8 @@ public Node(Settings preparedSettings, boolean loadConfigSettings) {\n throw new IllegalStateException(\"Failed to created node environment\", ex);\n }\n \n+ final ThreadPool threadPool = new ThreadPool(settings);\n+\n boolean success = false;\n try {\n ModulesBuilder modules = new ModulesBuilder();\n@@ -174,7 +177,7 @@ public Node(Settings preparedSettings, boolean loadConfigSettings) {\n modules.add(new EnvironmentModule(environment));\n modules.add(new NodeEnvironmentModule(nodeEnvironment));\n modules.add(new ClusterNameModule(settings));\n- modules.add(new ThreadPoolModule(settings));\n+ modules.add(new ThreadPoolModule(threadPool));\n modules.add(new DiscoveryModule(settings));\n modules.add(new ClusterModule(settings));\n modules.add(new RestModule(settings));\n@@ -198,10 +201,12 @@ public Node(Settings preparedSettings, boolean loadConfigSettings) {\n injector = modules.createInjector();\n \n client = injector.getInstance(Client.class);\n+ threadPool.setNodeSettingsService(injector.getInstance(NodeSettingsService.class));\n success = true;\n } finally {\n if (!success) {\n nodeEnvironment.close();\n+ ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS);\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/node/Node.java",
"status": "modified"
},
{
"diff": "@@ -91,13 +91,14 @@ public static class Names {\n \n private final EstimatedTimeThread estimatedTimeThread;\n \n+ private boolean settingsListenerIsSet = false;\n+\n \n public ThreadPool(String name) {\n- this(ImmutableSettings.builder().put(\"name\", name).build(), null);\n+ this(ImmutableSettings.builder().put(\"name\", name).build());\n }\n \n- @Inject\n- public ThreadPool(Settings settings, @Nullable NodeSettingsService nodeSettingsService) {\n+ public ThreadPool(Settings settings) {\n super(settings);\n \n assert settings.get(\"name\") != null : \"ThreadPool's settings should contain a name\";\n@@ -148,15 +149,20 @@ public ThreadPool(Settings settings, @Nullable NodeSettingsService nodeSettingsS\n this.scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);\n this.scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false);\n this.scheduler.setRemoveOnCancelPolicy(true);\n- if (nodeSettingsService != null) {\n- nodeSettingsService.addListener(new ApplySettings());\n- }\n \n TimeValue estimatedTimeInterval = settings.getAsTime(\"threadpool.estimated_time_interval\", TimeValue.timeValueMillis(200));\n this.estimatedTimeThread = new EstimatedTimeThread(EsExecutors.threadName(settings, \"[timer]\"), estimatedTimeInterval.millis());\n this.estimatedTimeThread.start();\n }\n \n+ public void setNodeSettingsService(NodeSettingsService nodeSettingsService) {\n+ if(settingsListenerIsSet) {\n+ throw new IllegalStateException(\"the node settings listener was set more then once\");\n+ }\n+ nodeSettingsService.addListener(new ApplySettings());\n+ settingsListenerIsSet = true;\n+ }\n+\n public long estimatedTimeInMillis() {\n return estimatedTimeThread.estimatedTimeInMillis();\n }",
"filename": "src/main/java/org/elasticsearch/threadpool/ThreadPool.java",
"status": "modified"
},
{
"diff": "@@ -27,14 +27,14 @@\n */\n public class ThreadPoolModule extends AbstractModule {\n \n- private final Settings settings;\n+ private final ThreadPool threadPool;\n \n- public ThreadPoolModule(Settings settings) {\n- this.settings = settings;\n+ public ThreadPoolModule(ThreadPool threadPool) {\n+ this.threadPool = threadPool;\n }\n \n @Override\n protected void configure() {\n- bind(ThreadPool.class).asEagerSingleton();\n+ bind(ThreadPool.class).toInstance(threadPool);\n }\n }",
"filename": "src/main/java/org/elasticsearch/threadpool/ThreadPoolModule.java",
"status": "modified"
},
{
"diff": "@@ -77,7 +77,7 @@ public void setup() throws IOException {\n injector = new ModulesBuilder().add(\n new EnvironmentModule(new Environment(settings)),\n new SettingsModule(settings),\n- new ThreadPoolModule(settings),\n+ new ThreadPoolModule(new ThreadPool(settings)),\n new IndicesQueriesModule(),\n new ScriptModule(settings),\n new IndexSettingsModule(index, settings),",
"filename": "src/test/java/org/elasticsearch/index/query/TemplateQueryParserTest.java",
"status": "modified"
},
{
"diff": "@@ -70,7 +70,7 @@ public void testCustomInjection() throws InterruptedException {\n Injector injector = new ModulesBuilder().add(\n new EnvironmentModule(new Environment(settings)),\n new SettingsModule(settings),\n- new ThreadPoolModule(settings),\n+ new ThreadPoolModule(new ThreadPool(settings)),\n new IndicesQueriesModule(),\n new ScriptModule(settings),\n new IndexSettingsModule(index, settings),",
"filename": "src/test/java/org/elasticsearch/index/query/plugin/IndexQueryParserPlugin2Tests.java",
"status": "modified"
},
{
"diff": "@@ -75,7 +75,7 @@ public void processXContentQueryParsers(XContentQueryParsersBindings bindings) {\n Injector injector = new ModulesBuilder().add(\n new EnvironmentModule(new Environment(settings)),\n new SettingsModule(settings),\n- new ThreadPoolModule(settings),\n+ new ThreadPoolModule(new ThreadPool(settings)),\n new IndicesQueriesModule(),\n new ScriptModule(settings),\n new IndexSettingsModule(index, settings),",
"filename": "src/test/java/org/elasticsearch/index/query/plugin/IndexQueryParserPluginTests.java",
"status": "modified"
},
{
"diff": "@@ -55,7 +55,7 @@ public void testNativeScript() throws InterruptedException {\n .build();\n Injector injector = new ModulesBuilder().add(\n new EnvironmentModule(new Environment(settings)),\n- new ThreadPoolModule(settings),\n+ new ThreadPoolModule(new ThreadPool(settings)),\n new SettingsModule(settings),\n new ScriptModule(settings)).createInjector();\n ",
"filename": "src/test/java/org/elasticsearch/script/NativeScriptTests.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.node.NodeBuilder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n@@ -191,6 +192,24 @@ public void run() {\n }\n }\n \n+ @Test\n+ public void testThreadPoolLeakingThreadsWithTribeNode() {\n+ Settings settings = ImmutableSettings.builder()\n+ .put(\"node.name\", \"thread_pool_leaking_threads_tribe_node\")\n+ .put(\"path.home\", createTempDir())\n+ .put(\"tribe.t1.cluster.name\", \"non_existing_cluster\")\n+ //trigger initialization failure of one of the tribes (doesn't require starting the node)\n+ .put(\"tribe.t1.plugin.mandatory\", \"non_existing\").build();\n+\n+ try {\n+ NodeBuilder.nodeBuilder().settings(settings).build();\n+ fail(\"The node startup is supposed to fail\");\n+ } catch(Throwable t) {\n+ //all good\n+ assertThat(t.getMessage(), containsString(\"mandatory plugins [non_existing]\"));\n+ }\n+ }\n+\n private Map<String, Object> getPoolSettingsThroughJson(ThreadPoolInfo info, String poolName) throws IOException {\n XContentBuilder builder = XContentFactory.jsonBuilder();\n builder.startObject();",
"filename": "src/test/java/org/elasticsearch/threadpool/SimpleThreadPoolTests.java",
"status": "modified"
},
{
"diff": "@@ -95,7 +95,7 @@ public void testThatToXContentWritesOutUnboundedCorrectly() throws Exception {\n @Test\n public void testThatNegativeSettingAllowsToStart() throws InterruptedException {\n Settings settings = settingsBuilder().put(\"name\", \"index\").put(\"threadpool.index.queue_size\", \"-1\").build();\n- ThreadPool threadPool = new ThreadPool(settings, null);\n+ ThreadPool threadPool = new ThreadPool(settings);\n assertThat(threadPool.info(\"index\").getQueueSize(), is(nullValue()));\n terminate(threadPool);\n }",
"filename": "src/test/java/org/elasticsearch/threadpool/ThreadPoolSerializationTests.java",
"status": "modified"
},
{
"diff": "@@ -54,7 +54,7 @@ public void testCachedExecutorType() throws InterruptedException {\n ThreadPool threadPool = new ThreadPool(\n ImmutableSettings.settingsBuilder()\n .put(\"threadpool.search.type\", \"cached\")\n- .put(\"name\",\"testCachedExecutorType\").build(), null);\n+ .put(\"name\",\"testCachedExecutorType\").build());\n \n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"cached\"));\n assertThat(info(threadPool, Names.SEARCH).getKeepAlive().minutes(), equalTo(5L));\n@@ -109,7 +109,7 @@ public void testCachedExecutorType() throws InterruptedException {\n public void testFixedExecutorType() throws InterruptedException {\n ThreadPool threadPool = new ThreadPool(settingsBuilder()\n .put(\"threadpool.search.type\", \"fixed\")\n- .put(\"name\",\"testCachedExecutorType\").build(), null);\n+ .put(\"name\",\"testCachedExecutorType\").build());\n \n assertThat(threadPool.executor(Names.SEARCH), instanceOf(EsThreadPoolExecutor.class));\n \n@@ -170,7 +170,7 @@ public void testScalingExecutorType() throws InterruptedException {\n ThreadPool threadPool = new ThreadPool(settingsBuilder()\n .put(\"threadpool.search.type\", \"scaling\")\n .put(\"threadpool.search.size\", 10)\n- .put(\"name\",\"testCachedExecutorType\").build(), null);\n+ .put(\"name\",\"testCachedExecutorType\").build());\n \n assertThat(info(threadPool, Names.SEARCH).getMin(), equalTo(1));\n assertThat(info(threadPool, Names.SEARCH).getMax(), equalTo(10));\n@@ -204,7 +204,7 @@ public void testScalingExecutorType() throws InterruptedException {\n public void testShutdownDownNowDoesntBlock() throws Exception {\n ThreadPool threadPool = new ThreadPool(ImmutableSettings.settingsBuilder()\n .put(\"threadpool.search.type\", \"cached\")\n- .put(\"name\",\"testCachedExecutorType\").build(), null);\n+ .put(\"name\",\"testCachedExecutorType\").build());\n \n final CountDownLatch latch = new CountDownLatch(1);\n Executor oldExecutor = threadPool.executor(Names.SEARCH);\n@@ -236,7 +236,7 @@ public void testCustomThreadPool() throws Exception {\n .put(\"threadpool.my_pool2.type\", \"fixed\")\n .put(\"threadpool.my_pool2.size\", \"1\")\n .put(\"threadpool.my_pool2.queue_size\", \"1\")\n- .put(\"name\", \"testCustomThreadPool\").build(), null);\n+ .put(\"name\", \"testCustomThreadPool\").build());\n \n ThreadPoolInfo groups = threadPool.info();\n boolean foundPool1 = false;",
"filename": "src/test/java/org/elasticsearch/threadpool/UpdateThreadPoolSettingsTests.java",
"status": "modified"
},
{
"diff": "@@ -58,8 +58,8 @@ public class NettySizeHeaderFrameDecoderTests extends ElasticsearchTestCase {\n \n @Before\n public void startThreadPool() {\n- threadPool = new ThreadPool(settings, new NodeSettingsService(settings));\n-\n+ threadPool = new ThreadPool(settings);\n+ threadPool.setNodeSettingsService(new NodeSettingsService(settings));\n NetworkService networkService = new NetworkService(settings);\n BigArrays bigArrays = new MockBigArrays(new MockPageCacheRecycler(settings, threadPool), new NoneCircuitBreakerService());\n nettyTransport = new NettyTransport(settings, threadPool, networkService, bigArrays, Version.CURRENT);",
"filename": "src/test/java/org/elasticsearch/transport/NettySizeHeaderFrameDecoderTests.java",
"status": "modified"
}
]
} |
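Condensed to its core (a sketch, not the full Node/TransportClient code), the pattern the PR applies is: build the `ThreadPool` before handing anything to guice, pass the instance to `ThreadPoolModule`, and terminate it in a `finally` block if the rest of the initialisation throws, so the scheduler and timer threads cannot leak. The module wiring is elided below.

```java
import java.util.concurrent.TimeUnit;

import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.threadpool.ThreadPool;

public class EagerThreadPoolSketch {

    public static ThreadPool initSafely() {
        Settings settings = ImmutableSettings.builder().put("name", "sketch").build();
        ThreadPool threadPool = new ThreadPool(settings); // starts scheduler + timer threads
        boolean success = false;
        try {
            // ... create modules and the injector here; any exception falls through to finally ...
            success = true;
            return threadPool;
        } finally {
            if (!success) {
                // mirrors the cleanup added in Node and TransportClient
                ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS);
            }
        }
    }
}
```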
{
"body": "Each transport request holds context and headers that represent the general context of the executed actions. There are actions that spawn other requests and we make sure that the context/headers of the original request is passed along to the sub-requests.\n\nIn the case of a search request, we have some cases where this context/headers passing doesn't happen and we need to fix that (we need to make sure the context & headers are not lost at any point in the execution). The reason for this today is that some spawned request simply don't have access to the original request. For example, `TermsLookup` used by the external terms filter executes a search during the parsing of the query. It has no access to the original search request that was executed and therefore cannot pass along the context & headers.\n\nTo solve that we can use the `SearchContext` and let it hold the search request context & headers (maybe we can have the `SearchContext` extends `ContextHolder`). During the execution of a search, any phase that is part of the execution has access to the search context using the thread local based `SearchContext.current()`. We need to make sure, to set the request context & headers on the `SearchContext` when it is created. From there on, we can extract these request context & headers everywhere we need to and pass it to the sub-request (a la `TermsLookup`).\n\nOther sub request we need to look at:\n- `MoreLikeThisQuery`\n- `GeoShapeFilter` and `GeoShapeQuery` with a indexed shape\n- Indexed Scripts\n- Phrase Suggester with a collate query or filter\n- Search Templates with indexed templates\n",
"comments": [
{
"body": "@spinscale please could you take a look when you have a moment\n",
"created_at": "2015-05-05T11:48:26Z"
},
{
"body": "checked the source, my current idea of implementing it is\n- Make SearchContext implementing an interface that adheres to `ContextHolder`, propagate to `ShardSearchRequest`, maybe just changing `DefaultSearchContext` is enough, needs to be checked\n- `ShardSearchRequest` also needs to implement that new interface, one of the two subclasses, `ShardSearchTransportRequest` already does, one needs to be extended `ShardSearchLocalRequest`\n- Both classes have access to the original `SearchRequest` and thus can copy the context\n- Make sure all of the above requests can make use of this infra\n- Search for possible more requests that need this\n- Copy headers need to be copied as well\n\nTricky: Testing this\n",
"created_at": "2015-05-06T14:49:56Z"
},
{
"body": "> Later: Do headers need to be copied as well\n\nwhile we're at it, I'd just copy the headers as well.. why not\n",
"created_at": "2015-05-06T18:02:23Z"
},
{
"body": "Implementation note: Percolate-by-id must be supported as well\n",
"created_at": "2015-05-08T07:56:06Z"
}
],
"number": 10979,
"title": "Properly propagate the search request context & headers to sub-requests"
} | {
"body": "Whenever a query parser (or any other component) issues another\nrequest as part of a request, the headers and the context has to\nbe supplied as well.\n\nIn order to do this, the `SearchContext` has to have those headers\navailable, which in turn means, the shard level request needs to\ncopy those from the original `SearchRequest`\n\nThis commit introduces two new interface to supply the needed methods\nto work with context and headers.\n\nCloses #10979\n",
"number": 11060,
"review_comments": [
{
"body": "can we rename this to `HasContext`?\n",
"created_at": "2015-05-12T12:13:20Z"
},
{
"body": "can we rename this to `HasHeaders`\n",
"created_at": "2015-05-12T12:13:35Z"
},
{
"body": "why not also add `copyHeadersFrom` method (like we do with the context)\n",
"created_at": "2015-05-12T12:14:33Z"
},
{
"body": "wondering.. can we move the copy headers/ctx tot he `SearchContex#suggest` method?\n",
"created_at": "2015-05-12T12:23:19Z"
},
{
"body": "`ContextHolder` already implements `ContextSupport`\n",
"created_at": "2015-05-12T12:25:38Z"
},
{
"body": "same as above\n",
"created_at": "2015-05-12T12:25:47Z"
},
{
"body": "cool!\n",
"created_at": "2015-05-12T12:30:47Z"
},
{
"body": "wondering.. what if this filter would catch all requests (regardless of the type)... and in each test we assert that all requests contain the headers & ctx? At the end of the day, we do want all requests (regardless of type) to contain the headers. Maybe it'll help expose requests that we're missing now.\n",
"created_at": "2015-05-12T12:32:03Z"
},
{
"body": "no need to anymore, we can remove the whole suggest stuff due to https://github.com/elastic/elasticsearch/commit/7efc43db25495b92833c4bc804f604b82597c157\n",
"created_at": "2015-05-15T12:43:54Z"
},
{
"body": "need to try this out for headers, by setting some header in the client() and transportClient() settings (not sure if we propagate those). This will not work at all with the context, because that only lives on the local node and is not copied.. unless we fill the context for every request manually, but then it doesnt make a lot of sense to check if it is configured.\n",
"created_at": "2015-05-15T13:44:44Z"
},
{
"body": "tested and this doesnt work. For example an `IndexRequest` into a non-existing index triggers a `PutMappingRequest`, which is not supposed to have those information, we need to be more selective about this...\n\nSame for index and refresh...\n\nHowever I added more calls to be tested, that those contain the headers in the test, like manual index and refresh requests\n",
"created_at": "2015-05-15T16:18:44Z"
},
{
"body": "I think this should be `actionRequest.copyHeadersFrom(restRequest)` ?\n",
"created_at": "2015-05-15T17:45:55Z"
},
{
"body": "I had to restore the old behaviour, as only specified/registered HTTP headers are included in the real request... added a test as well\n",
"created_at": "2015-05-18T07:22:48Z"
},
{
"body": "shouldn't this just implement `HasContextAndHeaders`?\n",
"created_at": "2015-05-18T08:15:04Z"
},
{
"body": "maybe we can add an additional method here `copyContextAndHeadersFrom`? and use it everywhere so we don't forget one or the other?\n",
"created_at": "2015-05-18T08:15:58Z"
},
{
"body": "`implements HasContextAndHeaders`?\n",
"created_at": "2015-05-18T08:16:39Z"
},
{
"body": "I don't think so. The percolator isn't executing requests during query parsing.\nMaybe add `assert false : \"percolator should get here...\"? So if in the future this does happen to be the case the test will fail and then we know about it\n",
"created_at": "2015-05-18T08:21:33Z"
},
{
"body": "did this when putting something in context or the headers or when calling the `copy*()` methods... doesnt really matter for reading methods IMO\n",
"created_at": "2015-05-18T09:04:23Z"
},
{
"body": "fixed\n",
"created_at": "2015-05-18T09:04:30Z"
},
{
"body": "fixed\n",
"created_at": "2015-05-18T09:04:36Z"
},
{
"body": "fixed... will have to wait for java8 and default methods to have this everywhere...\n",
"created_at": "2015-05-18T09:05:04Z"
},
{
"body": "yea.. that'd be great to have\n",
"created_at": "2015-05-18T09:07:46Z"
},
{
"body": "nice method name :D\n",
"created_at": "2015-05-18T11:14:02Z"
}
],
"title": "Propagate headers & contexts to sub-requests"
} | {
"commits": [
{
"message": "Internal: Propagate headers & contexts to sub-requests\n\nWhenever a query parser (or any other component) issues another\nrequest as part of a request, the headers and the context has to\nbe supplied as well.\n\nIn order to do this, the `SearchContext` has to have those headers\navailable, which in turn means, the shard level request needs to\ncopy those from the original `SearchRequest`\n\nThis commit introduces two new interface to supply the needed methods\nto work with context and headers.\n\nCloses #10979"
}
],
"files": [
{
"diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.action.percolate;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.get.GetRequest;",
"filename": "src/main/java/org/elasticsearch/action/percolate/TransportPercolateAction.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.action.support.single.shard;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionResponse;\n import org.elasticsearch.action.NoShardAvailableActionException;",
"filename": "src/main/java/org/elasticsearch/action/support/single/shard/TransportShardSingleOperationAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,82 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common;\n+\n+import com.carrotsearch.hppc.ObjectObjectAssociativeContainer;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n+\n+public interface HasContext {\n+\n+ /**\n+ * Attaches the given value to the context.\n+ *\n+ * @return The previous value that was associated with the given key in the context, or\n+ * {@code null} if there was none.\n+ */\n+ <V> V putInContext(Object key, Object value);\n+\n+ /**\n+ * Attaches the given values to the context\n+ */\n+ void putAllInContext(ObjectObjectAssociativeContainer<Object, Object> map);\n+\n+ /**\n+ * @return The context value that is associated with the given key\n+ *\n+ * @see #putInContext(Object, Object)\n+ */\n+ <V> V getFromContext(Object key);\n+\n+ /**\n+ * @param defaultValue The default value that should be returned for the given key, if no\n+ * value is currently associated with it.\n+ *\n+ * @return The value that is associated with the given key in the context\n+ *\n+ * @see #putInContext(Object, Object)\n+ */\n+ <V> V getFromContext(Object key, V defaultValue);\n+\n+ /**\n+ * Checks if the context contains an entry with the given key\n+ */\n+ boolean hasInContext(Object key);\n+\n+ /**\n+ * @return The number of values attached in the context.\n+ */\n+ int contextSize();\n+\n+ /**\n+ * Checks if the context is empty.\n+ */\n+ boolean isContextEmpty();\n+\n+ /**\n+ * @return A safe immutable copy of the current context.\n+ */\n+ ImmutableOpenMap<Object, Object> getContext();\n+\n+ /**\n+ * Copies the context from the given context holder to this context holder. Any shared keys between\n+ * the two context will be overridden by the given context holder.\n+ */\n+ void copyContextFrom(HasContext other);\n+}",
"filename": "src/main/java/org/elasticsearch/common/HasContext.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,33 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common;\n+\n+/**\n+ * marker interface\n+ */\n+public interface HasContextAndHeaders extends HasContext, HasHeaders {\n+\n+ /**\n+ * copies over the context and the headers\n+ * @param other another object supporting headers and context\n+ */\n+ void copyContextAndHeadersFrom(HasContextAndHeaders other);\n+\n+}",
"filename": "src/main/java/org/elasticsearch/common/HasContextAndHeaders.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,38 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common;\n+\n+import java.util.Set;\n+\n+/**\n+ *\n+ */\n+public interface HasHeaders {\n+\n+ <V> V putHeader(String key, V value);\n+\n+ <V> V getHeader(String key);\n+\n+ boolean hasHeader(String key);\n+\n+ Set<String> getHeaders();\n+\n+ void copyHeadersFrom(HasHeaders from);\n+}",
"filename": "src/main/java/org/elasticsearch/common/HasHeaders.java",
"status": "added"
},
{
"diff": "@@ -19,15 +19,12 @@\n \n package org.elasticsearch.index.query;\n \n-import org.apache.lucene.search.BooleanClause;\n-import org.apache.lucene.search.BooleanQuery;\n-import org.apache.lucene.search.ConstantScoreQuery;\n-import org.apache.lucene.search.Filter;\n-import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.spatial.prefix.PrefixTreeStrategy;\n import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;\n import org.apache.lucene.spatial.query.SpatialArgs;\n import org.apache.lucene.spatial.query.SpatialOperation;\n+import org.elasticsearch.action.get.GetRequest;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.geo.ShapeRelation;\n@@ -38,6 +35,7 @@\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.geo.GeoShapeFieldMapper;\n import org.elasticsearch.index.search.shape.ShapeFetchService;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n \n@@ -116,7 +114,9 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n } else if (type == null) {\n throw new QueryParsingException(parseContext, \"Type for indexed shape not provided\");\n }\n- shape = fetchService.fetch(id, type, index, shapePath);\n+ GetRequest getRequest = new GetRequest(index, type, id);\n+ getRequest.copyContextAndHeadersFrom(SearchContext.current());\n+ shape = fetchService.fetch(getRequest, shapePath);\n } else {\n throw new QueryParsingException(parseContext, \"[geo_shape] query does not support [\" + currentFieldName + \"]\");\n }\n@@ -180,7 +180,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n public void setFetchService(@Nullable ShapeFetchService fetchService) {\n this.fetchService = fetchService;\n }\n- \n+\n public static SpatialArgs getArgs(ShapeBuilder shape, ShapeRelation relation) {\n switch(relation) {\n case DISJOINT:\n@@ -191,7 +191,7 @@ public static SpatialArgs getArgs(ShapeBuilder shape, ShapeRelation relation) {\n return new SpatialArgs(SpatialOperation.IsWithin, shape.build());\n default:\n throw new IllegalArgumentException(\"\");\n- \n+\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/query/GeoShapeQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@\n \n import com.google.common.collect.Lists;\n import com.google.common.collect.Sets;\n-\n import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.queries.TermsQuery;\n import org.apache.lucene.search.BooleanClause;\n@@ -40,6 +39,7 @@\n import org.elasticsearch.index.analysis.Analysis;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.index.search.morelikethis.MoreLikeThisFetchService;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -245,6 +245,7 @@ else if (Fields.IGNORE_LIKE.match(currentFieldName, parseContext.parseFlags()))\n if (!likeItems.isEmpty()) {\n // set default index, type and fields if not specified\n MultiTermVectorsRequest items = likeItems;\n+\n for (TermVectorsRequest item : ignoreItems) {\n items.add(item);\n }\n@@ -272,7 +273,7 @@ else if (Fields.IGNORE_LIKE.match(currentFieldName, parseContext.parseFlags()))\n }\n }\n // fetching the items with multi-termvectors API\n- BooleanQuery boolQuery = new BooleanQuery();\n+ items.copyContextAndHeadersFrom(SearchContext.current());\n MultiTermVectorsResponse responses = fetchService.fetchResponse(items);\n \n // getting the Fields for liked items\n@@ -286,6 +287,7 @@ else if (Fields.IGNORE_LIKE.match(currentFieldName, parseContext.parseFlags()))\n }\n }\n \n+ BooleanQuery boolQuery = new BooleanQuery();\n boolQuery.add(mltQuery, BooleanClause.Occur.SHOULD);\n \n // exclude the items from the search",
"filename": "src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.index.query;\n \n import com.google.common.collect.Lists;\n-\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.TermsQuery;\n import org.apache.lucene.search.BooleanClause.Occur;\n@@ -40,6 +39,7 @@\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.indices.cache.filter.terms.TermsLookup;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.List;\n@@ -171,7 +171,9 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n if (lookupId != null) {\n final TermsLookup lookup = new TermsLookup(lookupIndex, lookupType, lookupId, lookupRouting, lookupPath, parseContext);\n- final GetResponse getResponse = client.get(new GetRequest(lookup.getIndex(), lookup.getType(), lookup.getId()).preference(\"_local\").routing(lookup.getRouting())).actionGet();\n+ GetRequest getRequest = new GetRequest(lookup.getIndex(), lookup.getType(), lookup.getId()).preference(\"_local\").routing(lookup.getRouting());\n+ getRequest.copyContextAndHeadersFrom(SearchContext.current());\n+ final GetResponse getResponse = client.get(getRequest).actionGet();\n if (getResponse.isExists()) {\n List<Object> values = XContentMapValues.extractRawValues(lookup.getPath(), getResponse.getSourceAsMap());\n terms.addAll(values);",
"filename": "src/main/java/org/elasticsearch/index/query/TermsQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -48,17 +48,17 @@ public ShapeFetchService(Client client, Settings settings) {\n /**\n * Fetches the Shape with the given ID in the given type and index.\n *\n- * @param id ID of the Shape to fetch\n- * @param type Index type where the Shape is indexed\n- * @param index Index where the Shape is indexed\n+ * @param getRequest GetRequest containing index, type and id\n * @param path Name or path of the field in the Shape Document where the Shape itself is located\n * @return Shape with the given ID\n * @throws IOException Can be thrown while parsing the Shape Document and extracting the Shape\n */\n- public ShapeBuilder fetch(String id, String type, String index, String path) throws IOException {\n- GetResponse response = client.get(new GetRequest(index, type, id).preference(\"_local\").operationThreaded(false)).actionGet();\n+ public ShapeBuilder fetch(GetRequest getRequest,String path) throws IOException {\n+ getRequest.preference(\"_local\");\n+ getRequest.operationThreaded(false);\n+ GetResponse response = client.get(getRequest).actionGet();\n if (!response.isExists()) {\n- throw new IllegalArgumentException(\"Shape with ID [\" + id + \"] in type [\" + type + \"] not found\");\n+ throw new IllegalArgumentException(\"Shape with ID [\" + getRequest.id() + \"] in type [\" + getRequest.type() + \"] not found\");\n }\n \n String[] pathElements = Strings.splitStringToArray(path, '.');\n@@ -81,7 +81,7 @@ public ShapeBuilder fetch(String id, String type, String index, String path) thr\n }\n }\n }\n- throw new IllegalStateException(\"Shape with name [\" + id + \"] found but missing \" + path + \" field\");\n+ throw new IllegalStateException(\"Shape with name [\" + getRequest.id() + \"] found but missing \" + path + \" field\");\n } finally {\n if (parser != null) {\n parser.close();",
"filename": "src/main/java/org/elasticsearch/index/search/shape/ShapeFetchService.java",
"status": "modified"
},
{
"diff": "@@ -18,8 +18,8 @@\n */\n package org.elasticsearch.percolator;\n \n+import com.carrotsearch.hppc.ObjectObjectAssociativeContainer;\n import com.google.common.collect.ImmutableList;\n-\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.index.LeafReaderContext;\n@@ -32,6 +32,10 @@\n import org.elasticsearch.action.percolate.PercolateShardRequest;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.text.StringText;\n import org.elasticsearch.common.util.BigArrays;\n@@ -61,21 +65,15 @@\n import org.elasticsearch.search.fetch.script.ScriptFieldsContext;\n import org.elasticsearch.search.fetch.source.FetchSourceContext;\n import org.elasticsearch.search.highlight.SearchContextHighlight;\n-import org.elasticsearch.search.internal.ContextIndexSearcher;\n-import org.elasticsearch.search.internal.InternalSearchHit;\n-import org.elasticsearch.search.internal.InternalSearchHitField;\n-import org.elasticsearch.search.internal.SearchContext;\n-import org.elasticsearch.search.internal.ShardSearchRequest;\n+import org.elasticsearch.search.internal.*;\n import org.elasticsearch.search.lookup.LeafSearchLookup;\n import org.elasticsearch.search.lookup.SearchLookup;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.rescore.RescoreSearchContext;\n import org.elasticsearch.search.scan.ScanContext;\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.concurrent.ConcurrentMap;\n \n /**\n@@ -693,4 +691,80 @@ public InnerHitsContext innerHits() {\n throw new UnsupportedOperationException();\n }\n \n+ @Override\n+ public <V> V putInContext(Object key, Object value) {\n+ assert false : \"percolatocontext does not support contexts & headers\";\n+ return null;\n+ }\n+\n+ @Override\n+ public void putAllInContext(ObjectObjectAssociativeContainer<Object, Object> map) {\n+ assert false : \"percolatocontext does not support contexts & headers\";\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key) {\n+ return null;\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key, V defaultValue) {\n+ return defaultValue;\n+ }\n+\n+ @Override\n+ public boolean hasInContext(Object key) {\n+ return false;\n+ }\n+\n+ @Override\n+ public int contextSize() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public boolean isContextEmpty() {\n+ return true;\n+ }\n+\n+ @Override\n+ public ImmutableOpenMap<Object, Object> getContext() {\n+ return ImmutableOpenMap.of();\n+ }\n+\n+ @Override\n+ public void copyContextFrom(HasContext other) {\n+ assert false : \"percolatocontext does not support contexts & headers\";\n+ }\n+\n+ @Override\n+ public <V> V putHeader(String key, V value) {\n+ assert false : \"percolatocontext does not support contexts & headers\";\n+ return value;\n+ }\n+\n+ @Override\n+ public <V> V getHeader(String key) {\n+ return null;\n+ }\n+\n+ @Override\n+ public boolean hasHeader(String key) {\n+ return false;\n+ }\n+\n+ @Override\n+ public Set<String> getHeaders() {\n+ return Collections.EMPTY_SET;\n+ }\n+\n+ 
@Override\n+ public void copyHeadersFrom(HasHeaders from) {\n+ assert false : \"percolatocontext does not support contexts & headers\";\n+ }\n+\n+ @Override\n+ public void copyContextAndHeadersFrom(HasContextAndHeaders other) {\n+ assert false : \"percolatocontext does not support contexts & headers\";\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/percolator/PercolateContext.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.rest;\n \n import org.elasticsearch.common.Booleans;\n-import org.elasticsearch.common.ContextHolder;\n+import org.elasticsearch.common.ContextAndHeaderHolder;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -38,7 +38,7 @@\n /**\n *\n */\n-public abstract class RestRequest extends ContextHolder implements ToXContent.Params {\n+public abstract class RestRequest extends ContextAndHeaderHolder implements ToXContent.Params {\n \n public enum Method {\n GET, POST, PUT, DELETE, OPTIONS, HEAD",
"filename": "src/main/java/org/elasticsearch/rest/RestRequest.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,6 @@\n import com.google.common.cache.RemovalListener;\n import com.google.common.cache.RemovalNotification;\n import com.google.common.collect.ImmutableMap;\n-\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.delete.DeleteRequest;\n@@ -57,6 +56,7 @@\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.query.TemplateQueryParser;\n import org.elasticsearch.script.groovy.GroovyScriptEngineService;\n+import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.lookup.SearchLookup;\n import org.elasticsearch.watcher.FileChangesListener;\n import org.elasticsearch.watcher.FileWatcher;\n@@ -288,6 +288,7 @@ String getScriptFromIndex(String scriptLang, String id) {\n }\n scriptLang = validateScriptLanguage(scriptLang);\n GetRequest getRequest = new GetRequest(SCRIPT_INDEX, scriptLang, id);\n+ getRequest.copyContextAndHeadersFrom(SearchContext.current());\n GetResponse responseFields = client.get(getRequest).actionGet();\n if (responseFields.isExists()) {\n return getScriptFromResponse(responseFields);",
"filename": "src/main/java/org/elasticsearch/script/ScriptService.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.internal;\n \n+import com.carrotsearch.hppc.ObjectObjectAssociativeContainer;\n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.Lists;\n \n@@ -33,7 +34,11 @@\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.lucene.search.function.BoostScoreFunction;\n@@ -72,108 +77,64 @@\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Set;\n \n /**\n *\n */\n public class DefaultSearchContext extends SearchContext {\n \n private final long id;\n-\n private final ShardSearchRequest request;\n-\n private final SearchShardTarget shardTarget;\n private final Counter timeEstimateCounter;\n-\n private SearchType searchType;\n-\n private final Engine.Searcher engineSearcher;\n-\n private final ScriptService scriptService;\n-\n private final PageCacheRecycler pageCacheRecycler;\n-\n private final BigArrays bigArrays;\n-\n private final IndexShard indexShard;\n-\n private final IndexService indexService;\n-\n private final ContextIndexSearcher searcher;\n-\n private final DfsSearchResult dfsResult;\n-\n private final QuerySearchResult queryResult;\n-\n private final FetchSearchResult fetchResult;\n-\n // lazy initialized only if needed\n private ScanContext scanContext;\n-\n private float queryBoost = 1.0f;\n-\n // timeout in millis\n private long timeoutInMillis = -1;\n-\n // terminate after count\n private int terminateAfter = DEFAULT_TERMINATE_AFTER;\n-\n-\n private List<String> groupStats;\n-\n private Scroll scroll;\n-\n private boolean explain;\n-\n private boolean version = false; // by default, we don't return versions\n-\n private List<String> fieldNames;\n private FieldDataFieldsContext fieldDataFields;\n private ScriptFieldsContext scriptFields;\n private FetchSourceContext fetchSourceContext;\n-\n private int from = -1;\n-\n private int size = -1;\n-\n private Sort sort;\n-\n private Float minimumScore;\n-\n private boolean trackScores = false; // when sorting, track scores as well...\n-\n private ParsedQuery originalQuery;\n-\n private Query query;\n-\n private ParsedQuery postFilter;\n-\n private Query aliasFilter;\n-\n private int[] docIdsToLoad;\n-\n private int docsIdsToLoadFrom;\n-\n private int docsIdsToLoadSize;\n-\n private SearchContextAggregations aggregations;\n-\n private SearchContextHighlight highlight;\n-\n private SuggestionSearchContext suggest;\n-\n private List<RescoreSearchContext> rescore;\n-\n private SearchLookup searchLookup;\n-\n private boolean queryRewritten;\n-\n private volatile long keepAlive;\n-\n private ScoreDoc lastEmittedDoc;\n-\n private volatile long lastAccessTime = -1;\n-\n private InnerHitsContext innerHitsContext;\n \n public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget,\n@@ -790,4 +751,79 @@ public void innerHits(InnerHitsContext innerHitsContext) {\n public InnerHitsContext innerHits() {\n return innerHitsContext;\n }\n+\n+ @Override\n+ public <V> V putInContext(Object 
key, Object value) {\n+ return request.putInContext(key, value);\n+ }\n+\n+ @Override\n+ public void putAllInContext(ObjectObjectAssociativeContainer<Object, Object> map) {\n+ request.putAllInContext(map);\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key) {\n+ return request.getFromContext(key);\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key, V defaultValue) {\n+ return request.getFromContext(key, defaultValue);\n+ }\n+\n+ @Override\n+ public boolean hasInContext(Object key) {\n+ return request.hasInContext(key);\n+ }\n+\n+ @Override\n+ public int contextSize() {\n+ return request.contextSize();\n+ }\n+\n+ @Override\n+ public boolean isContextEmpty() {\n+ return request.isContextEmpty();\n+ }\n+\n+ @Override\n+ public ImmutableOpenMap<Object, Object> getContext() {\n+ return request.getContext();\n+ }\n+\n+ @Override\n+ public void copyContextFrom(HasContext other) {\n+ request.copyContextFrom(other);\n+ }\n+\n+ @Override\n+ public <V> V putHeader(String key, V value) {\n+ return request.putHeader(key, value);\n+ }\n+\n+ @Override\n+ public <V> V getHeader(String key) {\n+ return request.getHeader(key);\n+ }\n+\n+ @Override\n+ public boolean hasHeader(String key) {\n+ return request.hasHeader(key);\n+ }\n+\n+ @Override\n+ public Set<String> getHeaders() {\n+ return request.getHeaders();\n+ }\n+\n+ @Override\n+ public void copyHeadersFrom(HasHeaders from) {\n+ request.copyHeadersFrom(from);\n+ }\n+\n+ @Override\n+ public void copyContextAndHeadersFrom(HasContextAndHeaders other) {\n+ request.copyContextAndHeadersFrom(other);\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -19,23 +19,26 @@\n \n package org.elasticsearch.search.internal;\n \n+import com.carrotsearch.hppc.ObjectObjectAssociativeContainer;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.ScoreDoc;\n import org.apache.lucene.search.Sort;\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n-import org.elasticsearch.index.cache.filter.FilterCache;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.FieldMappers;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.query.IndexQueryParserService;\n import org.elasticsearch.index.query.ParsedQuery;\n-import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.script.ScriptService;\n@@ -56,6 +59,7 @@\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n import java.util.List;\n+import java.util.Set;\n \n /**\n */\n@@ -557,4 +561,78 @@ public Counter timeEstimateCounter() {\n return in.timeEstimateCounter();\n }\n \n+ @Override\n+ public <V> V putInContext(Object key, Object value) {\n+ return in.putInContext(key, value);\n+ }\n+\n+ @Override\n+ public void putAllInContext(ObjectObjectAssociativeContainer<Object, Object> map) {\n+ in.putAllInContext(map);\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key) {\n+ return in.getFromContext(key);\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key, V defaultValue) {\n+ return in.getFromContext(key, defaultValue);\n+ }\n+\n+ @Override\n+ public boolean hasInContext(Object key) {\n+ return in.hasInContext(key);\n+ }\n+\n+ @Override\n+ public int contextSize() {\n+ return in.contextSize();\n+ }\n+\n+ @Override\n+ public boolean isContextEmpty() {\n+ return in.isContextEmpty();\n+ }\n+\n+ @Override\n+ public ImmutableOpenMap<Object, Object> getContext() {\n+ return in.getContext();\n+ }\n+\n+ @Override\n+ public void copyContextFrom(HasContext other) {\n+ in.copyContextFrom(other);\n+ }\n+\n+ @Override\n+ public <V> V putHeader(String key, V value) {\n+ return in.putHeader(key, value);\n+ }\n+\n+ @Override\n+ public <V> V getHeader(String key) {\n+ return in.getHeader(key);\n+ }\n+\n+ @Override\n+ public boolean hasHeader(String key) {\n+ return in.hasHeader(key);\n+ }\n+\n+ @Override\n+ public Set<String> getHeaders() {\n+ return in.getHeaders();\n+ }\n+\n+ @Override\n+ public void copyHeadersFrom(HasHeaders from) {\n+ in.copyHeadersFrom(from);\n+ }\n+\n+ @Override\n+ public void copyContextAndHeadersFrom(HasContextAndHeaders other) {\n+ in.copyContextAndHeadersFrom(other);\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,9 @@\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n@@ -67,7 +70,7 @@\n \n /**\n */\n-public abstract class SearchContext implements Releasable {\n+public abstract class SearchContext implements Releasable, HasContextAndHeaders {\n \n private static ThreadLocal<SearchContext> current = new ThreadLocal<>();\n public final static int DEFAULT_TERMINATE_AFTER = 0;",
"filename": "src/main/java/org/elasticsearch/search/internal/SearchContext.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.common.ContextAndHeaderHolder;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -57,30 +58,22 @@\n * </pre>\n */\n \n-public class ShardSearchLocalRequest implements ShardSearchRequest {\n+public class ShardSearchLocalRequest extends ContextAndHeaderHolder implements ShardSearchRequest {\n \n private String index;\n-\n private int shardId;\n-\n private int numberOfShards;\n-\n private SearchType searchType;\n-\n private Scroll scroll;\n-\n private String[] types = Strings.EMPTY_ARRAY;\n-\n private String[] filteringAliases;\n-\n private BytesReference source;\n private BytesReference extraSource;\n private BytesReference templateSource;\n private String templateName;\n private ScriptService.ScriptType templateType;\n private Map<String, Object> templateParams;\n private Boolean queryCache;\n-\n private long nowInMillis;\n \n ShardSearchLocalRequest() {\n@@ -90,7 +83,6 @@ public class ShardSearchLocalRequest implements ShardSearchRequest {\n String[] filteringAliases, long nowInMillis) {\n this(shardRouting.shardId(), numberOfShards, searchRequest.searchType(),\n searchRequest.source(), searchRequest.types(), searchRequest.queryCache());\n-\n this.extraSource = searchRequest.extraSource();\n this.templateSource = searchRequest.templateSource();\n this.templateName = searchRequest.templateName();\n@@ -99,6 +91,7 @@ public class ShardSearchLocalRequest implements ShardSearchRequest {\n this.scroll = searchRequest.scroll();\n this.filteringAliases = filteringAliases;\n this.nowInMillis = nowInMillis;\n+ copyContextAndHeadersFrom(searchRequest);\n }\n \n public ShardSearchLocalRequest(String[] types, long nowInMillis) {",
"filename": "src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,9 @@\n package org.elasticsearch.search.internal;\n \n import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.Scroll;\n@@ -32,7 +35,7 @@\n * It provides all the methods that the {@link org.elasticsearch.search.internal.SearchContext} needs.\n * Provides a cache key based on its content that can be used to cache shard level response.\n */\n-public interface ShardSearchRequest {\n+public interface ShardSearchRequest extends HasContextAndHeaders {\n \n String index();\n ",
"filename": "src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java",
"status": "modified"
},
{
"diff": "@@ -51,6 +51,7 @@ public void parse(XContentParser parser, SearchContext context) throws Exception\n \n public SuggestionSearchContext parseInternal(XContentParser parser, MapperService mapperService, IndexQueryParserService queryParserService, String index, int shardId) throws IOException {\n SuggestionSearchContext suggestionSearchContext = new SuggestionSearchContext();\n+\n BytesRef globalText = null;\n String fieldName = null;\n Map<String, SuggestionContext> suggestionContexts = newHashMap();",
"filename": "src/main/java/org/elasticsearch/search/suggest/SuggestParseElement.java",
"status": "modified"
},
{
"diff": "@@ -19,25 +19,20 @@\n \n package org.elasticsearch.transport;\n \n-import org.elasticsearch.common.ContextHolder;\n+import org.elasticsearch.common.ContextAndHeaderHolder;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n import org.elasticsearch.common.transport.TransportAddress;\n \n import java.io.IOException;\n-import java.util.Collections;\n import java.util.HashMap;\n-import java.util.Map;\n-import java.util.Set;\n \n /**\n- * The transport message is also a {@link ContextHolder context holder} that holds <b>transient</b> context, that is,\n+ * The transport message is also a {@link ContextAndHeaderHolder context holder} that holds <b>transient</b> context, that is,\n * the context is not serialized with message.\n */\n-public abstract class TransportMessage<TM extends TransportMessage<TM>> extends ContextHolder implements Streamable {\n-\n- private Map<String, Object> headers;\n+public abstract class TransportMessage<TM extends TransportMessage<TM>> extends ContextAndHeaderHolder<TM> implements Streamable {\n \n private TransportAddress remoteAddress;\n \n@@ -48,8 +43,8 @@ protected TransportMessage(TM message) {\n // create a new copy of the headers/context, since we are creating a new request\n // which might have its headers/context changed in the context of that specific request\n \n- if (((TransportMessage<?>) message).headers != null) {\n- this.headers = new HashMap<>(((TransportMessage<?>) message).headers);\n+ if (message.headers != null) {\n+ this.headers = new HashMap<>(message.headers);\n }\n copyContextFrom(message);\n }\n@@ -62,28 +57,6 @@ public TransportAddress remoteAddress() {\n return remoteAddress;\n }\n \n- @SuppressWarnings(\"unchecked\")\n- public final TM putHeader(String key, Object value) {\n- if (headers == null) {\n- headers = new HashMap<>();\n- }\n- headers.put(key, value);\n- return (TM) this;\n- }\n-\n- @SuppressWarnings(\"unchecked\")\n- public final <V> V getHeader(String key) {\n- return headers != null ? (V) headers.get(key) : null;\n- }\n-\n- public final boolean hasHeader(String key) {\n- return headers != null && headers.containsKey(key);\n- }\n-\n- public Set<String> getHeaders() {\n- return headers != null ? headers.keySet() : Collections.<String>emptySet();\n- }\n-\n @Override\n public void readFrom(StreamInput in) throws IOException {\n headers = in.readBoolean() ? in.readMap() : null;",
"filename": "src/main/java/org/elasticsearch/transport/TransportMessage.java",
"status": "modified"
},
{
"diff": "@@ -18,13 +18,18 @@\n */\n package org.elasticsearch.test;\n \n+import com.carrotsearch.hppc.ObjectObjectAssociativeContainer;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.ScoreDoc;\n import org.apache.lucene.search.Sort;\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.analysis.AnalysisService;\n@@ -59,7 +64,9 @@\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.util.Collections;\n import java.util.List;\n+import java.util.Set;\n \n public class TestSearchContext extends SearchContext {\n \n@@ -597,4 +604,72 @@ public InnerHitsContext innerHits() {\n throw new UnsupportedOperationException();\n }\n \n+ @Override\n+ public <V> V putInContext(Object key, Object value) {\n+ return null;\n+ }\n+\n+ @Override\n+ public void putAllInContext(ObjectObjectAssociativeContainer<Object, Object> map) {\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key) {\n+ return null;\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key, V defaultValue) {\n+ return defaultValue;\n+ }\n+\n+ @Override\n+ public boolean hasInContext(Object key) {\n+ return false;\n+ }\n+\n+ @Override\n+ public int contextSize() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public boolean isContextEmpty() {\n+ return true;\n+ }\n+\n+ @Override\n+ public ImmutableOpenMap<Object, Object> getContext() {\n+ return ImmutableOpenMap.of();\n+ }\n+\n+ @Override\n+ public void copyContextFrom(HasContext other) {\n+ }\n+\n+ @Override\n+ public <V> V putHeader(String key, V value) {\n+ return value;\n+ }\n+\n+ @Override\n+ public <V> V getHeader(String key) {\n+ return null;\n+ }\n+\n+ @Override\n+ public boolean hasHeader(String key) {\n+ return false;\n+ }\n+\n+ @Override\n+ public Set<String> getHeaders() {\n+ return Collections.EMPTY_SET;\n+ }\n+\n+ @Override\n+ public void copyHeadersFrom(HasHeaders from) {}\n+\n+ @Override\n+ public void copyContextAndHeadersFrom(HasContextAndHeaders other) {}\n }",
"filename": "src/test/java/org/elasticsearch/test/TestSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,438 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.transport;\n+\n+import com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.ImmutableList;\n+import org.apache.http.impl.client.CloseableHttpClient;\n+import org.apache.http.impl.client.HttpClients;\n+import org.elasticsearch.action.*;\n+import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;\n+import org.elasticsearch.action.get.GetRequest;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.indexedscripts.put.PutIndexedScriptRequest;\n+import org.elasticsearch.action.indexedscripts.put.PutIndexedScriptResponse;\n+import org.elasticsearch.action.percolate.PercolateResponse;\n+import org.elasticsearch.action.search.SearchRequest;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.support.ActionFilter;\n+import org.elasticsearch.action.termvectors.MultiTermVectorsRequest;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.client.FilterClient;\n+import org.elasticsearch.common.inject.AbstractModule;\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.inject.Module;\n+import org.elasticsearch.common.inject.PreProcessModule;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.http.HttpServerTransport;\n+import org.elasticsearch.index.query.*;\n+import org.elasticsearch.plugins.AbstractPlugin;\n+import org.elasticsearch.rest.RestController;\n+import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.groovy.GroovyScriptEngineService;\n+import org.elasticsearch.script.mustache.MustacheScriptEngineService;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.rest.client.http.HttpRequestBuilder;\n+import org.elasticsearch.test.rest.client.http.HttpResponse;\n+import org.junit.After;\n+import org.junit.Before;\n+import org.junit.Test;\n+\n+import java.util.*;\n+\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.node.Node.HTTP_ENABLED;\n+import static org.elasticsearch.rest.RestStatus.OK;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope.SUITE;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n+\n+@ClusterScope(scope = 
SUITE)\n+public class ContextAndHeaderTransportTests extends ElasticsearchIntegrationTest {\n+\n+ private static final List<ActionRequest> requests = Collections.synchronizedList(new ArrayList<ActionRequest>());\n+ private String randomHeaderKey = randomAsciiOfLength(10);\n+ private String randomHeaderValue = randomAsciiOfLength(20);\n+ private String queryIndex = \"query-\" + randomAsciiOfLength(10).toLowerCase(Locale.ROOT);\n+ private String lookupIndex = \"lookup-\" + randomAsciiOfLength(10).toLowerCase(Locale.ROOT);\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return settingsBuilder()\n+ .put(super.nodeSettings(nodeOrdinal))\n+ .put(\"plugin.types\", ActionLoggingPlugin.class.getName())\n+ .put(\"script.indexed\", \"on\")\n+ .put(HTTP_ENABLED, true)\n+ .build();\n+ }\n+\n+ @Before\n+ public void createIndices() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"location\").field(\"type\", \"geo_shape\").endObject()\n+ .startObject(\"name\").field(\"type\", \"string\").endObject()\n+ .startObject(\"title\").field(\"type\", \"string\").field(\"analyzer\", \"text\").endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ Settings settings = settingsBuilder()\n+ .put(indexSettings())\n+ .put(SETTING_NUMBER_OF_SHARDS, 1) // A single shard will help to keep the tests repeatable.\n+ .put(\"index.analysis.analyzer.text.tokenizer\", \"standard\")\n+ .putArray(\"index.analysis.analyzer.text.filter\", \"lowercase\", \"my_shingle\")\n+ .put(\"index.analysis.filter.my_shingle.type\", \"shingle\")\n+ .put(\"index.analysis.filter.my_shingle.output_unigrams\", true)\n+ .put(\"index.analysis.filter.my_shingle.min_shingle_size\", 2)\n+ .put(\"index.analysis.filter.my_shingle.max_shingle_size\", 3)\n+ .build();\n+ assertAcked(transportClient().admin().indices().prepareCreate(lookupIndex)\n+ .setSettings(settings).addMapping(\"type\", mapping));\n+ assertAcked(transportClient().admin().indices().prepareCreate(queryIndex)\n+ .setSettings(settings).addMapping(\"type\", mapping));\n+ ensureGreen(queryIndex, lookupIndex);\n+\n+ requests.clear();\n+ }\n+\n+ @After\n+ public void checkAllRequestsContainHeaders() {\n+ assertRequestsContainHeader(IndexRequest.class);\n+ assertRequestsContainHeader(RefreshRequest.class);\n+\n+ /*\n+ for (ActionRequest request : requests) {\n+ String msg = String.format(Locale.ROOT, \"Expected request [%s] to have randomized header key set\", request.getClass().getSimpleName());\n+ assertThat(msg, request.hasHeader(randomHeaderKey), is(true));\n+ assertThat(request.getHeader(randomHeaderKey).toString(), is(randomHeaderValue));\n+ }\n+ */\n+ }\n+\n+ // TODO check context as well\n+\n+ @Test\n+ public void testThatTermsLookupGetRequestContainsContextAndHeaders() throws Exception {\n+ transportClient().prepareIndex(lookupIndex, \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().array(\"followers\", \"foo\", \"bar\", \"baz\").endObject()).get();\n+ transportClient().prepareIndex(queryIndex, \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"username\", \"foo\").endObject()).get();\n+ transportClient().admin().indices().prepareRefresh(queryIndex, lookupIndex).get();\n+\n+ TermsLookupQueryBuilder termsLookupFilterBuilder = QueryBuilders.termsLookupQuery(\"username\").lookupIndex(lookupIndex).lookupType(\"type\").lookupId(\"1\").lookupPath(\"followers\");\n+ BoolQueryBuilder queryBuilder = 
QueryBuilders.boolQuery().must(QueryBuilders.matchAllQuery()).must(termsLookupFilterBuilder);\n+\n+ SearchResponse searchResponse = transportClient()\n+ .prepareSearch(queryIndex)\n+ .setQuery(queryBuilder)\n+ .get();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 1);\n+\n+ assertGetRequestsContainHeaders();\n+ }\n+\n+ @Test\n+ public void testThatGeoShapeQueryGetRequestContainsContextAndHeaders() throws Exception {\n+ indexRandom(false, false,\n+ transportClient().prepareIndex(lookupIndex, \"type\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"name\", \"Munich Suburban Area\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"polygon\")\n+ .startArray(\"coordinates\").startArray()\n+ .startArray().value(11.34).value(48.25).endArray()\n+ .startArray().value(11.68).value(48.25).endArray()\n+ .startArray().value(11.65).value(48.06).endArray()\n+ .startArray().value(11.37).value(48.13).endArray()\n+ .startArray().value(11.34).value(48.25).endArray() // close the polygon\n+ .endArray().endArray()\n+ .endObject()\n+ .endObject()),\n+ // second document\n+ transportClient().prepareIndex(queryIndex, \"type\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"name\", \"Munich Center\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"point\")\n+ .startArray(\"coordinates\").value(11.57).value(48.13).endArray()\n+ .endObject()\n+ .endObject())\n+ );\n+ transportClient().admin().indices().prepareRefresh(lookupIndex, queryIndex).get();\n+\n+ GeoShapeQueryBuilder queryBuilder = QueryBuilders.geoShapeQuery(\"location\", \"1\", \"type\")\n+ .indexedShapeIndex(lookupIndex)\n+ .indexedShapePath(\"location\");\n+\n+ SearchResponse searchResponse = transportClient()\n+ .prepareSearch(queryIndex)\n+ .setQuery(queryBuilder)\n+ .get();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 1);\n+ assertThat(requests, hasSize(greaterThan(0)));\n+\n+ assertGetRequestsContainHeaders();\n+ }\n+\n+ @Test\n+ public void testThatMoreLikeThisQueryMultiTermVectorRequestContainsContextAndHeaders() throws Exception {\n+ indexRandom(false, false,\n+ transportClient().prepareIndex(lookupIndex, \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"name\", \"Star Wars - The new republic\").endObject()),\n+ transportClient().prepareIndex(queryIndex, \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"name\", \"Jar Jar Binks - A horrible mistake\").endObject()),\n+ transportClient().prepareIndex(queryIndex, \"type\", \"2\")\n+ .setSource(jsonBuilder().startObject().field(\"name\", \"Star Wars - Return of the jedi\").endObject()));\n+ transportClient().admin().indices().prepareRefresh(lookupIndex, queryIndex).get();\n+\n+ MoreLikeThisQueryBuilder moreLikeThisQueryBuilder = QueryBuilders.moreLikeThisQuery(\"name\")\n+ .addItem(new MoreLikeThisQueryBuilder.Item(lookupIndex, \"type\", \"1\"))\n+ .minTermFreq(1)\n+ .minDocFreq(1);\n+\n+ SearchResponse searchResponse = transportClient()\n+ .prepareSearch(queryIndex)\n+ .setQuery(moreLikeThisQueryBuilder)\n+ .get();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 1);\n+\n+ assertRequestsContainHeader(MultiTermVectorsRequest.class);\n+ }\n+\n+ @Test\n+ public void testThatPercolatingExistingDocumentGetRequestContainsContextAndHeaders() throws Exception {\n+ indexRandom(false,\n+ transportClient().prepareIndex(lookupIndex, \".percolator\", \"1\")\n+ .setSource(jsonBuilder().startObject().startObject(\"query\").startObject(\"match\").field(\"name\", \"star 
wars\").endObject().endObject().endObject()),\n+ transportClient().prepareIndex(lookupIndex, \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"name\", \"Star Wars - The new republic\").endObject())\n+ );\n+ transportClient().admin().indices().prepareRefresh(lookupIndex).get();\n+\n+ GetRequest getRequest = transportClient().prepareGet(lookupIndex, \"type\", \"1\").request();\n+ PercolateResponse response = transportClient().preparePercolate().setDocumentType(\"type\").setGetRequest(getRequest).get();\n+ assertThat(response.getCount(), is(1l));\n+\n+ assertGetRequestsContainHeaders();\n+ }\n+\n+ @Test\n+ public void testThatIndexedScriptGetRequestContainsContextAndHeaders() throws Exception {\n+ PutIndexedScriptResponse scriptResponse = transportClient().preparePutIndexedScript(GroovyScriptEngineService.NAME, \"my_script\",\n+ jsonBuilder().startObject().field(\"script\", \"_score * 10\").endObject().string()\n+ ).get();\n+ assertThat(scriptResponse.isCreated(), is(true));\n+\n+ indexRandom(false, false, transportClient().prepareIndex(queryIndex, \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"name\", \"Star Wars - The new republic\").endObject()));\n+ transportClient().admin().indices().prepareRefresh(queryIndex).get();\n+\n+ // custom content, not sure how to specify \"script_id\" otherwise in the API\n+ XContentBuilder builder = jsonBuilder().startObject().startObject(\"function_score\").field(\"boost_mode\", \"replace\").startArray(\"functions\")\n+ .startObject().startObject(\"script_score\").field(\"script_id\", \"my_script\").field(\"lang\", \"groovy\").endObject().endObject().endArray().endObject().endObject();\n+\n+ SearchResponse searchResponse = transportClient()\n+ .prepareSearch(queryIndex)\n+ .setQuery(builder)\n+ .get();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 1);\n+ assertThat(searchResponse.getHits().getMaxScore(), is(10.0f));\n+\n+ assertGetRequestsContainHeaders(\".scripts\");\n+ assertRequestsContainHeader(PutIndexedScriptRequest.class);\n+ }\n+\n+ @Test\n+ public void testThatSearchTemplatesWithIndexedTemplatesGetRequestContainsContextAndHeaders() throws Exception {\n+ PutIndexedScriptResponse scriptResponse = transportClient().preparePutIndexedScript(MustacheScriptEngineService.NAME, \"the_template\",\n+ jsonBuilder().startObject().startObject(\"template\").startObject(\"query\").startObject(\"match\")\n+ .field(\"name\", \"{{query_string}}\").endObject().endObject().endObject().endObject().string()\n+ ).get();\n+ assertThat(scriptResponse.isCreated(), is(true));\n+\n+ indexRandom(false, false, transportClient().prepareIndex(queryIndex, \"type\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"name\", \"Star Wars - The new republic\").endObject()));\n+ transportClient().admin().indices().prepareRefresh(queryIndex).get();\n+\n+ Map<String, Object> params = new HashMap<>();\n+ params.put(\"query_string\", \"star wars\");\n+\n+ SearchResponse searchResponse = transportClient().prepareSearch(queryIndex)\n+ .setTemplateName(\"the_template\")\n+ .setTemplateParams(params)\n+ .setTemplateType(ScriptService.ScriptType.INDEXED)\n+ .get();\n+\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 1);\n+\n+ assertGetRequestsContainHeaders(\".scripts\");\n+ assertRequestsContainHeader(PutIndexedScriptRequest.class);\n+ }\n+\n+ @Test\n+ public void testThatRelevantHttpHeadersBecomeRequestHeaders() throws Exception {\n+ String releventHeaderName = \"relevant_\" + randomHeaderKey;\n+ for 
(RestController restController : internalCluster().getDataNodeInstances(RestController.class)) {\n+ restController.registerRelevantHeaders(releventHeaderName);\n+ }\n+\n+ CloseableHttpClient httpClient = HttpClients.createDefault();\n+ HttpResponse response = new HttpRequestBuilder(httpClient)\n+ .httpTransport(internalCluster().getDataNodeInstance(HttpServerTransport.class))\n+ .addHeader(randomHeaderKey, randomHeaderValue)\n+ .addHeader(releventHeaderName, randomHeaderValue)\n+ .path(\"/\" + queryIndex + \"/_search\")\n+ .execute();\n+\n+ assertThat(response, hasStatus(OK));\n+ List<SearchRequest> searchRequests = getRequests(SearchRequest.class);\n+ assertThat(searchRequests, hasSize(greaterThan(0)));\n+ for (SearchRequest searchRequest : searchRequests) {\n+ assertThat(searchRequest.hasHeader(releventHeaderName), is(true));\n+ // was not specified, thus is not included\n+ assertThat(searchRequest.hasHeader(randomHeaderKey), is(false));\n+ }\n+ }\n+\n+ private <T> List<T> getRequests(Class<T> clazz) {\n+ List<T> results = new ArrayList<>();\n+ for (ActionRequest request : requests) {\n+ if (request.getClass().equals(clazz)) {\n+ results.add((T) request);\n+ }\n+ }\n+\n+ return results;\n+ }\n+\n+ private void assertRequestsContainHeader(Class<? extends ActionRequest> clazz) {\n+ List<? extends ActionRequest> classRequests = getRequests(clazz);\n+ for (ActionRequest request : classRequests) {\n+ assertRequestContainsHeader(request);\n+ }\n+ }\n+\n+ private void assertGetRequestsContainHeaders() {\n+ assertGetRequestsContainHeaders(this.lookupIndex);\n+ }\n+\n+ private void assertGetRequestsContainHeaders(String index) {\n+ List<GetRequest> getRequests = getRequests(GetRequest.class);\n+ assertThat(getRequests, hasSize(greaterThan(0)));\n+\n+ for (GetRequest request : getRequests) {\n+ if (!request.index().equals(index)) {\n+ continue;\n+ }\n+ assertRequestContainsHeader(request);\n+ }\n+ }\n+\n+ private void assertRequestContainsHeader(ActionRequest request) {\n+ String msg = String.format(Locale.ROOT, \"Expected header %s to be in request %s\", randomHeaderKey, request.getClass().getName());\n+ if (request instanceof IndexRequest) {\n+ IndexRequest indexRequest = (IndexRequest) request;\n+ msg = String.format(Locale.ROOT, \"Expected header %s to be in index request %s/%s/%s\", randomHeaderKey,\n+ indexRequest.index(), indexRequest.type(), indexRequest.id());\n+ }\n+ assertThat(msg, request.hasHeader(randomHeaderKey), is(true));\n+ assertThat(request.getHeader(randomHeaderKey).toString(), is(randomHeaderValue));\n+ }\n+\n+ /**\n+ * a transport client that adds our random header\n+ */\n+ private Client transportClient() {\n+ Client transportClient = internalCluster().transportClient();\n+ FilterClient filterClient = new FilterClient(transportClient) {\n+ @Override\n+ protected <Request extends ActionRequest, Response extends ActionResponse, RequestBuilder extends ActionRequestBuilder<Request, Response, RequestBuilder>> void doExecute(Action<Request, Response, RequestBuilder> action, Request request, ActionListener<Response> listener) {\n+ request.putHeader(randomHeaderKey, randomHeaderValue);\n+ super.doExecute(action, request, listener);\n+ }\n+ };\n+\n+ return filterClient;\n+ }\n+\n+ public static class ActionLoggingPlugin extends AbstractPlugin {\n+\n+ @Override\n+ public String name() {\n+ return \"test-action-logging\";\n+ }\n+\n+ @Override\n+ public String description() {\n+ return \"Test action logging\";\n+ }\n+\n+ @Override\n+ public Collection<Class<? 
extends Module>> modules() {\n+ return ImmutableList.of(ActionLoggingModule.class);\n+ }\n+ }\n+\n+ public static class ActionLoggingModule extends AbstractModule implements PreProcessModule {\n+\n+\n+ @Override\n+ protected void configure() {\n+ bind(LoggingFilter.class).asEagerSingleton();\n+ }\n+\n+ @Override\n+ public void processModule(Module module) {\n+ if (module instanceof ActionModule) {\n+ ((ActionModule)module).registerFilter(LoggingFilter.class);\n+ }\n+ }\n+ }\n+\n+ public static class LoggingFilter extends ActionFilter.Simple {\n+\n+ @Inject\n+ public LoggingFilter(Settings settings) {\n+ super(settings);\n+ }\n+\n+ @Override\n+ public int order() {\n+ return 999;\n+ }\n+\n+ @Override\n+ protected boolean apply(String action, ActionRequest request, ActionListener listener) {\n+ requests.add(request);\n+ return true;\n+ }\n+\n+ @Override\n+ protected boolean apply(String action, ActionResponse response, ActionListener listener) {\n+ return true;\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/transport/ContextAndHeaderTransportTests.java",
"status": "added"
}
]
} |
{
"body": "When I switch numeric_resolution to use seconds on a date field and input a date as \"yyyy-MM-ddThh:mm:ssZ\" it appears to calculate the incorrect number of milliseconds stored internally.\n\nAs an example, using the date \"2015-04-28T04:02:07Z\" I was seeing the following number of milliseconds stored internally:\n\nUsing milliseconds: 1.4301...E12\nUsing seconds: 1.4301...E15\n\nI would expect these to be the same since they are both ultimately stored as milliseconds.\n",
"comments": [
{
"body": "Oh, this is a bad bug indeed. Numeric resolution should only be applied to dates provided as numbers, not as formatted dates.\n",
"created_at": "2015-05-06T08:53:22Z"
}
],
"number": 10995,
"title": "Mapping: Date Bug When Using numeric_resolution"
} | {
"body": "Close #10995\n",
"number": 11002,
"review_comments": [],
"title": "`numeric_resolution` should only apply to dates provided as numbers."
} | {
"commits": [
{
"message": "Mappings: `numeric_resolution` should only apply to dates provided as numbers.\n\nClose #10995"
}
],
"files": [
{
"diff": "@@ -471,17 +471,18 @@ protected void innerParseCreateField(ParseContext context, List<Field> fields) t\n context.allEntries().addText(names.fullName(), dateAsString, boost);\n }\n value = parseStringValue(dateAsString);\n+ } else if (value != null) {\n+ value = timeUnit.toMillis(value);\n }\n \n if (value != null) {\n- final long timestamp = timeUnit.toMillis(value);\n if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) {\n- CustomLongNumericField field = new CustomLongNumericField(this, timestamp, fieldType);\n+ CustomLongNumericField field = new CustomLongNumericField(this, value, fieldType);\n field.setBoost(boost);\n fields.add(field);\n }\n if (hasDocValues()) {\n- addDocValue(context, fields, timestamp);\n+ addDocValue(context, fields, value);\n }\n }\n }\n@@ -549,7 +550,7 @@ private long parseStringValue(String value) {\n return dateTimeFormatter.parser().parseMillis(value);\n } catch (RuntimeException e) {\n try {\n- return Long.parseLong(value);\n+ return timeUnit.toMillis(Long.parseLong(value));\n } catch (NumberFormatException e1) {\n throw new MapperParsingException(\"failed to parse date field [\" + value + \"], tried both date format [\" + dateTimeFormatter.format() + \"], and timestamp number with locale [\" + dateTimeFormatter.locale() + \"]\", e);\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -434,5 +434,13 @@ public void testNumericResolution() throws Exception {\n .endObject()\n .bytes());\n assertThat(getDateAsMillis(doc.rootDoc(), \"date_field\"), equalTo(43000L));\n+\n+ // but formatted dates still parse as milliseconds\n+ doc = defaultMapper.parse(\"type\", \"2\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"1970-01-01T00:00:44.000Z\")\n+ .endObject()\n+ .bytes());\n+ assertThat(getDateAsMillis(doc.rootDoc(), \"date_field\"), equalTo(44000L));\n }\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/date/SimpleDateMappingTests.java",
"status": "modified"
}
]
} |
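The fix in the row above hinges on applying the `numeric_resolution` unit only to numeric input. A minimal standalone sketch of that behaviour, using hypothetical helper names and `java.time` instead of the mapper's actual Joda-based date parsing:

```java
import java.time.Instant;
import java.util.concurrent.TimeUnit;

public class NumericResolutionSketch {

    // Hypothetical helper mirroring the intended behaviour: the configured
    // resolution unit is applied only when the value arrives as a number.
    static long toMillis(Object value, TimeUnit resolution) {
        if (value instanceof Number) {
            // numeric timestamps are interpreted in the configured unit
            return resolution.toMillis(((Number) value).longValue());
        }
        // formatted dates already encode their unit; parse straight to millis
        return Instant.parse((String) value).toEpochMilli();
    }

    public static void main(String[] args) {
        // numeric seconds -> 44000 ms
        System.out.println(toMillis(44L, TimeUnit.SECONDS));
        // formatted date -> also 44000 ms, unaffected by the seconds resolution
        System.out.println(toMillis("1970-01-01T00:00:44.000Z", TimeUnit.SECONDS));
    }
}
```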
{
"body": "After upgrading from 1.4 -> 1.5.2 shards fail to load due to ScriptParameterParseException. \n\nStacktrace:\n\n```\norg.elasticsearch.script.ScriptParameterParser$ScriptParameterParseException: Value must be of type String: [lang]\n at org.elasticsearch.script.ScriptParameterParser.parseConfig(ScriptParameterParser.java:111)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parseTransform(DocumentMapperParser.java:307)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:257)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:192)\n at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:434)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:307)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:430)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:376)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:181)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:467)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nThe cause as I can best understand is the `lang` property is getting set to null for some transform scripts. We do not explicitly set the lang so this was unexpected. Our process is to use a template...\n\n```\n\"mappings\": {\n \"<type>\": { \n \"transform\": [\n {\"script\": \"<source>\"},\n...\n```\n\nHowever, the mapping produced after an index creation (in 1.4) is...\n\n```\n{\n \"<index-name>\": {\n \"mappings\": {\n \"<type>\": {\n \"transform\": [\n {\"script\": \"<source>\",\"lang\": null}\n...\n```\n\nThe above mapping works in 1.4 but not in 1.5 it seems.\n",
"comments": [
{
"body": "@colings86 please could you take a look. probably related to #7977 \n",
"created_at": "2015-05-04T11:55:10Z"
},
{
"body": "Note to others; the only workaround I have found that works is to recreate the index and set lang property on transform explicitly. Perhaps there is a way to update an existing transform but it is not mentioned in the [Transform Documentation](http://www.elastic.co/guide/en/elasticsearch/reference/1.x/mapping-transform.html) at this time. \n",
"created_at": "2015-05-04T12:50:29Z"
},
{
"body": "@aewhite thanks for raising this. When the transform is written in the mapping the script language is written even if it is set to null (null meaning use the default). I have opened PR #10976 which lets the parser reading the stored mapping read null for the script language instead of throwing an error\n",
"created_at": "2015-05-05T09:30:35Z"
},
{
"body": "@aewhite thanks again for raising this. The fix has now been merged in and should be available from 1.5.3\n",
"created_at": "2015-05-07T08:52:12Z"
}
],
"number": 10926,
"title": "ScriptParameterParseException when upgrading from 1.4 -> 1.5.2 "
} | {
"body": "Closes #10926\n",
"number": 10976,
"review_comments": [],
"title": "Allow script language to be null when parsing"
} | {
"commits": [
{
"message": "Scripting: allow script language to be null when parsing\n\nCloses #10926"
}
],
"files": [
{
"diff": "@@ -585,7 +585,9 @@ public Map<String, Object> transformSourceAsMap(Map<String, Object> sourceAsMap)\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject();\n builder.field(\"script\", script);\n- builder.field(\"lang\", language);\n+ if (language != null) {\n+ builder.field(\"lang\", language);\n+ }\n if (parameters != null) {\n builder.field(\"params\", parameters);\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -26,8 +26,13 @@\n import org.elasticsearch.script.ScriptService.ScriptType;\n \n import java.io.IOException;\n-import java.util.*;\n+import java.util.Collections;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.Iterator;\n+import java.util.Map;\n import java.util.Map.Entry;\n+import java.util.Set;\n \n public class ScriptParameterParser {\n \n@@ -102,12 +107,12 @@ public void parseConfig(Map<String, Object> config, boolean removeMatchedEntries\n String parameterName = entry.getKey();\n Object parameterValue = entry.getValue();\n if (ScriptService.SCRIPT_LANG.match(parameterName)) {\n- if (parameterValue instanceof String) {\n+ if (parameterValue instanceof String || parameterValue == null) {\n lang = (String) parameterValue;\n if (removeMatchedEntries) {\n itr.remove();\n }\n- } else {\n+ } else {\n throw new ScriptParameterParseException(\"Value must be of type String: [\" + parameterName + \"]\");\n }\n } else {",
"filename": "src/main/java/org/elasticsearch/script/ScriptParameterParser.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,89 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.bwcompat;\n+\n+import com.google.common.collect.ImmutableMap;\n+\n+import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.test.ElasticsearchBackwardsCompatIntegrationTest;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertExists;\n+import static org.hamcrest.Matchers.both;\n+import static org.hamcrest.Matchers.hasEntry;\n+import static org.hamcrest.Matchers.hasKey;\n+import static org.hamcrest.Matchers.not;\n+\n+public class ScriptTransformBackwardsCompatibilityTests extends ElasticsearchBackwardsCompatIntegrationTest {\n+\n+ @Test\n+ public void testTransformWithNoLangSpecified() throws Exception {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject();\n+ builder.field(\"transform\");\n+ if (getRandom().nextBoolean()) {\n+ // Single transform\n+ builder.startObject();\n+ buildTransformScript(builder);\n+ builder.endObject();\n+ } else {\n+ // Multiple transforms\n+ int total = between(1, 10);\n+ int actual = between(0, total - 1);\n+ builder.startArray();\n+ for (int s = 0; s < total; s++) {\n+ builder.startObject();\n+ if (s == actual) {\n+ buildTransformScript(builder);\n+ } else {\n+ builder.field(\"script\", \"true\");\n+ }\n+ builder.endObject();\n+ }\n+ builder.endArray();\n+ }\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").addMapping(\"test\", builder));\n+\n+ indexRandom(getRandom().nextBoolean(), client().prepareIndex(\"test\", \"test\", \"notitle\").setSource(\"content\", \"findme\"), client()\n+ .prepareIndex(\"test\", \"test\", \"badtitle\").setSource(\"content\", \"findme\", \"title\", \"cat\"),\n+ client().prepareIndex(\"test\", \"test\", \"righttitle\").setSource(\"content\", \"findme\", \"title\", \"table\"));\n+ GetResponse response = client().prepareGet(\"test\", \"test\", \"righttitle\").get();\n+ assertExists(response);\n+ assertThat(response.getSource(), both(hasEntry(\"content\", (Object) \"findme\")).and(not(hasKey(\"destination\"))));\n+\n+ response = client().prepareGet(\"test\", \"test\", \"righttitle\").setTransformSource(true).get();\n+ assertExists(response);\n+ assertThat(response.getSource(), both(hasEntry(\"destination\", (Object) \"findme\")).and(not(hasKey(\"content\"))));\n+ }\n+\n+ private void buildTransformScript(XContentBuilder builder) throws IOException {\n+ String script = \"if (ctx._source['title']?.startsWith('t')) { 
ctx._source['destination'] = ctx._source[sourceField] }; ctx._source.remove(sourceField);\";\n+ if (getRandom().nextBoolean()) {\n+ script = script.replace(\"sourceField\", \"'content'\");\n+ } else {\n+ builder.field(\"params\", ImmutableMap.of(\"sourceField\", \"content\"));\n+ }\n+ builder.field(\"script\", script);\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/bwcompat/ScriptTransformBackwardsCompatibilityTests.java",
"status": "added"
},
{
"diff": "@@ -130,7 +130,7 @@ private void setup(boolean forceRefresh) throws IOException, InterruptedExceptio\n // Single transform\n builder.startObject();\n buildTransformScript(builder);\n- builder.field(\"lang\", GroovyScriptEngineService.NAME);\n+ builder.field(\"lang\", randomFrom(null, GroovyScriptEngineService.NAME));\n builder.endObject();\n } else {\n // Multiple transforms\n@@ -144,7 +144,7 @@ private void setup(boolean forceRefresh) throws IOException, InterruptedExceptio\n } else {\n builder.field(\"script\", \"true\");\n }\n- builder.field(\"lang\", GroovyScriptEngineService.NAME);\n+ builder.field(\"lang\", randomFrom(null, GroovyScriptEngineService.NAME));\n builder.endObject();\n }\n builder.endArray();",
"filename": "src/test/java/org/elasticsearch/index/mapper/TransformOnIndexMapperIntegrationTest.java",
"status": "modified"
}
]
} |
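The core of the fix above is a null-tolerant check when the stored `lang` parameter is read back from the mapping. A stripped-down sketch of that pattern (the helper name and map layout are illustrative, not the real parser):

```java
import java.util.HashMap;
import java.util.Map;

public class LangParameterSketch {

    // Illustrative stand-in for the parameter parsing step: accept a String
    // or an explicit null for "lang"; anything else is still rejected.
    static String parseLang(Map<String, Object> config) {
        Object value = config.get("lang");
        if (value instanceof String || value == null) {
            return (String) value; // null means "use the default script language"
        }
        throw new IllegalArgumentException("Value must be of type String: [lang]");
    }

    public static void main(String[] args) {
        Map<String, Object> storedTransform = new HashMap<>();
        storedTransform.put("script", "ctx._source.remove('title')");
        storedTransform.put("lang", null); // what 1.4.x wrote into the mapping
        System.out.println(parseLang(storedTransform)); // prints "null" instead of throwing
    }
}
```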
{
"body": "Hi,\n\nI am following the example at: \nhttp://www.elastic.co/guide/en/elasticsearch/guide/current/geo-bounds-agg.html\nto create an aggregation of some sample data according to their geohash cells, \nand return the geo-bounds of the result set in the bucket.\n\n```\n \"aggs\": {\n \"bucket\" : {\n \"geohash_grid\" : {\n \"field\": \"centroid\",\n \"precision\": 8\n },\n \"aggs\": {\n \"cell\": {\n \"geo_bounds\": {\n \"field\": \"centroid\"\n }\n }\n }\n }\n }\n```\n\nI have noticed that when my precision becomes higher the returned geo-bounds contain\ninfinity values for longitude and latitude. The bbox in every bounds object is of the following form: \n\n```\n \"bounds\": {\n \"top_left\": {\n \"lat\": 49.611142831107415,\n \"lon\": \"-Infinity\"\n },\n \"bottom_right\": {\n \"lat\": \"-Infinity\",\n \"lon\": \"-Infinity\"\n }\n }\n```\n\n(only top-left.lat changes)\n\nReducing the number of buckets by geo-filtering seems to help but is very limiting.\nFor example, I applied a geo-bounding-box filter with the bounds of a \nsingle geo-hash cell of precision 6, then geo-aggregated using precision 7,\nwith 32 buckets that returned fine. However geo-aggregating with precision 8\n(returning 1024 buckets) have produced the incorrect geo-bounds.\n\nThe mapping that I've used:\n\n```\n \"properties\": {\n \"centroid\": {\n \"type\": \"geo_point\",\n \"lat_lon\": true,\n \"geohash\": true,\n \"geohash_prefix\": true,\n \"geohash_precision\": 11\n },\n \"geom\": {\n \"type\": \"geo_shape\",\n \"tree_levels\": 10\n },\n \"name\": {\n \"type\": \"string\"\n },\n \"osmId\": {\n \"type\": \"long\"\n },\n \"type\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n```\n\nThanks\n",
"comments": [
{
"body": "Hi @plroit \n\nWould be very useful if you could provide some example docs, in fact a full simple recreation would be awesome.\n\nthanks\n\n/cc @colings86 \n",
"created_at": "2015-04-26T17:41:21Z"
},
{
"body": "Sorry for the delay,\n\nThe query that I have used:\n\n```\nPOST /osm/building/_search\n{\n \"size\": 0, \n \"aggs\": {\n \"bucket\" : {\n \"geohash_grid\" : {\n \"field\": \"centroid\",\n \"precision\": 8\n },\n \"aggs\": {\n \"cell\": {\n \"geo_bounds\": {\n \"field\": \"centroid\"}}}}}}\n```\n\nThe dataset comes from OpenStreetMap vector data of Luxemburg,\n[You can find it here](http://download.geofabrik.de/europe/luxembourg.html)\nI am trying to put into geohash buckets all the buildings, which are polygon shapes.\nThe field 'centroid' is the point center of the polygon.\n\nReviewing the indexed data reveals nothing special or erroneous. \n",
"created_at": "2015-04-29T05:13:36Z"
},
{
"body": "I can provide you a generated csv file of the 77K buildings, and the C# code that I have used for indexing them. Would that help?\n",
"created_at": "2015-04-29T05:17:40Z"
},
{
"body": "@plroit the csv file would be very useful if you could provide that. Don't worry about the C# code for now.\n",
"created_at": "2015-04-29T08:38:50Z"
},
{
"body": "Try this file, each line is the object serialized to json\nhttps://www.dropbox.com/s/49i73lfe2ebdvml/building.csv?dl=0\n",
"created_at": "2015-05-01T05:19:36Z"
},
{
"body": "@plroit thanks for the file and for raising this bug. I have reproduced the problem with the docs you provided so I'll look into what's causing this.\n",
"created_at": "2015-05-01T09:32:26Z"
},
{
"body": "@plroit Thanks again for raising this, I have just merged the fix via #10917 and it should be available from version 1.5.3 onwards\n",
"created_at": "2015-05-05T09:01:55Z"
}
],
"number": 10804,
"title": "Infinity values at bounding-box when calculating the geo-bounds aggregate with many buckets"
} | {
"body": "If the collect method was called with a bucketOrd of > 0 the arrays holding the state for the aggregation would be grown but the initial values for the bucketOrds > 0 were all set to Double.NEGATIVE_INFINITY meaning that for the bottom, posLeft and negLeft values no collected document would change the value since NEGATIVE_INFINITY is always less than every other value.\n\nCloses #10804\n",
"number": 10917,
"review_comments": [],
"title": "Fixes Infinite values return from geo_bounds with non-zero bucket-ordinals"
} | {
"commits": [
{
"message": "Aggregations: Fixes Infinite values return from geo_bounds with non-zero bucket-ordinals\n\nIf the collect method was called with a bucketOrd of > 0 the arrays holding the state for the aggregation would be grown but the initial values for the bucketOrds > 0 were all set to Double.NEGATIVE_INFINITY meaning that for the bottom, posLeft and negLeft values no collected document would change the value since NEGATIVE_INFINITY is always less than every other value.\n\nCloses #10804"
}
],
"files": [
{
"diff": "@@ -90,13 +90,13 @@ public void collect(int doc, long bucket) throws IOException {\n tops = bigArrays.grow(tops, bucket + 1);\n tops.fill(from, tops.size(), Double.NEGATIVE_INFINITY);\n bottoms = bigArrays.resize(bottoms, tops.size());\n- bottoms.fill(from, bottoms.size(), Double.NEGATIVE_INFINITY);\n+ bottoms.fill(from, bottoms.size(), Double.POSITIVE_INFINITY);\n posLefts = bigArrays.resize(posLefts, tops.size());\n- posLefts.fill(from, posLefts.size(), Double.NEGATIVE_INFINITY);\n+ posLefts.fill(from, posLefts.size(), Double.POSITIVE_INFINITY);\n posRights = bigArrays.resize(posRights, tops.size());\n posRights.fill(from, posRights.size(), Double.NEGATIVE_INFINITY);\n negLefts = bigArrays.resize(negLefts, tops.size());\n- negLefts.fill(from, negLefts.size(), Double.NEGATIVE_INFINITY);\n+ negLefts.fill(from, negLefts.size(), Double.POSITIVE_INFINITY);\n negRights = bigArrays.resize(negRights, tops.size());\n negRights.fill(from, negRights.size(), Double.NEGATIVE_INFINITY);\n }",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -27,7 +27,6 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.SearchHitField;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n@@ -38,7 +37,6 @@\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n-import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n import java.util.ArrayList;\n@@ -51,7 +49,10 @@\n import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.hamcrest.Matchers.allOf;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.lessThanOrEqualTo;\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.sameInstance;\n \n@@ -407,6 +408,10 @@ public void singleValuedFieldAsSubAggToHighCardTermsAgg() {\n GeoBounds geoBounds = bucket.getAggregations().get(\"geoBounds\");\n assertThat(geoBounds, notNullValue());\n assertThat(geoBounds.getName(), equalTo(\"geoBounds\"));\n+ assertThat(geoBounds.topLeft().getLat(), allOf(greaterThanOrEqualTo(-90.0), lessThanOrEqualTo(90.0)));\n+ assertThat(geoBounds.topLeft().getLon(), allOf(greaterThanOrEqualTo(-180.0), lessThanOrEqualTo(180.0)));\n+ assertThat(geoBounds.bottomRight().getLat(), allOf(greaterThanOrEqualTo(-90.0), lessThanOrEqualTo(90.0)));\n+ assertThat(geoBounds.bottomRight().getLon(), allOf(greaterThanOrEqualTo(-180.0), lessThanOrEqualTo(180.0)));\n }\n }\n ",
"filename": "src/test/java/org/elasticsearch/search/aggregations/metrics/GeoBoundsTests.java",
"status": "modified"
}
]
} |
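The root cause above is a sentinel-value mix-up: an accumulator that tracks a minimum must start at positive infinity, because nothing is ever smaller than `Double.NEGATIVE_INFINITY`. A tiny illustration of why the original fill value could never be updated:

```java
public class MinSentinelSketch {
    public static void main(String[] args) {
        double collectedLat = 49.611142831107415;

        // Wrong sentinel for a running minimum: no collected document can ever "win".
        double badBottom = Double.NEGATIVE_INFINITY;
        badBottom = Math.min(badBottom, collectedLat);
        System.out.println(badBottom); // -Infinity, exactly what the bug report shows

        // Correct sentinel: the first collected value replaces it.
        double bottom = Double.POSITIVE_INFINITY;
        bottom = Math.min(bottom, collectedLat);
        System.out.println(bottom);    // 49.611142831107415
    }
}
```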
{
"body": "While playing around with master I received the following error response:\n\n``` json\n{\n \"error\": {\n \"root_cause\": [],\n \"type\": \"access_control_exception\",\n \"reason\": \"access denied (\\\"java.io.FilePermission\\\" \\\"default-mapping.json\\\" \\\"read\\\")\"\n },\n \"status\": 500\n}\n```\n\nThe `root_cause` should probably contain something, even if it is just a copy of the outer level error.\n",
"comments": [
{
"body": "yeah that´s a bug if there is no `ElasticsearchException` wrapping this thing I guess it omits it... I will fix today or tomorrow thanks for opening this\n",
"created_at": "2015-04-28T09:13:45Z"
}
],
"number": 10836,
"title": "Response error.root_cause shouldn't empty"
} | {
"body": "if we don't have an ElasticsearchException as the wrapper of the\nactual cause we don't render a root cause today. This commit adds\nsupport for 3rd party exceptions as root causes.\n\nCloses #10836\n",
"number": 10850,
"review_comments": [],
"title": "Render non-elasticsearch exception as root cause"
} | {
"commits": [
{
"message": "[REST] Render non-elasticsearch exception as root cause\n\nif we don't have an ElasticsearchException as the wrapper of the\nactual cause we don't render a root cause today. This commit adds\nsupport for 3rd party exceptions as root causes.\n\nCloses #10836"
}
],
"files": [
{
"diff": "@@ -194,7 +194,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (this instanceof ElasticsearchWrapperException) {\n toXContent(builder, params, this);\n } else {\n- builder.field(\"type\", getExceptionName(this));\n+ builder.field(\"type\", getExceptionName());\n builder.field(\"reason\", getMessage());\n innerToXContent(builder, params);\n }\n@@ -261,7 +261,16 @@ public static ElasticsearchException[] guessRootCauses(Throwable t) {\n if (ex instanceof ElasticsearchException) {\n return ((ElasticsearchException) ex).guessRootCauses();\n }\n- return new ElasticsearchException[0];\n+ return new ElasticsearchException[] {new ElasticsearchException(t.getMessage(), t) {\n+ @Override\n+ protected String getExceptionName() {\n+ return getExceptionName(getCause());\n+ }\n+ }};\n+ }\n+\n+ protected String getExceptionName() {\n+ return getExceptionName(this);\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/ElasticsearchException.java",
"status": "modified"
},
{
"diff": "@@ -104,6 +104,15 @@ public void testGuessRootCause() {\n \n }\n \n+ {\n+ final ElasticsearchException[] foobars = ElasticsearchException.guessRootCauses(new IllegalArgumentException(\"foobar\"));\n+ assertEquals(foobars.length, 1);\n+ assertTrue(foobars[0] instanceof ElasticsearchException);\n+ assertEquals(foobars[0].getMessage(), \"foobar\");\n+ assertEquals(foobars[0].getCause().getClass(), IllegalArgumentException.class);\n+ assertEquals(foobars[0].getExceptionName(), \"illegal_argument_exception\");\n+ }\n+\n }\n \n public void testDeduplicate() throws IOException {",
"filename": "src/test/java/org/elasticsearch/ElasticsearchExceptionTests.java",
"status": "modified"
},
{
"diff": "@@ -112,11 +112,18 @@ public void testErrorTrace() throws Exception {\n public void testGuessRootCause() throws IOException {\n RestRequest request = new FakeRestRequest();\n RestChannel channel = new DetailedExceptionRestChannel(request);\n-\n- Throwable t = new ElasticsearchException(\"an error occurred reading data\", new FileNotFoundException(\"/foo/bar\"));\n- BytesRestResponse response = new BytesRestResponse(channel, t);\n- String text = response.content().toUtf8();\n- assertThat(text, containsString(\"{\\\"root_cause\\\":[{\\\"type\\\":\\\"exception\\\",\\\"reason\\\":\\\"an error occurred reading data\\\"}]\"));\n+ {\n+ Throwable t = new ElasticsearchException(\"an error occurred reading data\", new FileNotFoundException(\"/foo/bar\"));\n+ BytesRestResponse response = new BytesRestResponse(channel, t);\n+ String text = response.content().toUtf8();\n+ assertThat(text, containsString(\"{\\\"root_cause\\\":[{\\\"type\\\":\\\"exception\\\",\\\"reason\\\":\\\"an error occurred reading data\\\"}]\"));\n+ }\n+ {\n+ Throwable t = new FileNotFoundException(\"/foo/bar\");\n+ BytesRestResponse response = new BytesRestResponse(channel, t);\n+ String text = response.content().toUtf8();\n+ assertThat(text, containsString(\"{\\\"root_cause\\\":[{\\\"type\\\":\\\"file_not_found_exception\\\",\\\"reason\\\":\\\"/foo/bar\\\"}]\"));\n+ }\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/rest/BytesRestResponseTests.java",
"status": "modified"
}
]
} |
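The tests in the diff above assert snake_case type names such as `file_not_found_exception` and `illegal_argument_exception`. The real `getExceptionName` implementation is not shown in the diff, but a rough, hypothetical reconstruction of that naming convention looks like this:

```java
import java.io.FileNotFoundException;

public class ExceptionNameSketch {

    // Hypothetical reimplementation of the snake_case naming the tests assert on;
    // it lower-cases the simple class name and inserts underscores at word breaks.
    static String exceptionName(Throwable t) {
        String simpleName = t.getClass().getSimpleName();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < simpleName.length(); i++) {
            char c = simpleName.charAt(i);
            if (Character.isUpperCase(c) && i > 0) {
                sb.append('_');
            }
            sb.append(Character.toLowerCase(c));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(exceptionName(new FileNotFoundException("/foo/bar")));  // file_not_found_exception
        System.out.println(exceptionName(new IllegalArgumentException("foobar"))); // illegal_argument_exception
    }
}
```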
{
"body": "Currently we're using the default `escape` method from Mustache, which is intended for escaping HTML, not JSON.\n\nThis results in things like `\"` -> `"`\n\nInstead, we should be using these escapes:\n\n```\n\\b Backspace (ascii code 08)\n\\f Form feed (ascii code 0C)\n\\n New line\n\\r Carriage return\n\\t Tab\n\\v Vertical tab\n\\\" Double quote\n\\\\ Backslash \n```\n",
"comments": [
{
"body": "Test case:\n\n```\nDELETE /t\n\nPUT /t\n{\n \"mappings\": {\n \"foo\": {\n \"properties\": {\n \"bar\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n}\n\nPUT /t/foo/1\n{\n \"foo\": \"bar&\"\n}\n\nGET /_search/template\n{\n \"template\": {\n \"query\": {\n \"term\": {\n \"foo\": \"{{foo}}\"\n }\n }\n },\n \"params\": {\n \"foo\": \"bar&\"\n }\n}\n```\n",
"created_at": "2014-03-20T12:17:36Z"
},
{
"body": "cool I will take a look at it\n",
"created_at": "2014-03-20T12:44:55Z"
}
],
"number": 5473,
"title": "Mustache templates should escape JSON, not HTML"
} | {
"body": "This pull request replaces the current self-made implementation of JSON encoding special chars with re-using the Jackson JsonStringEncoder. Turns out the previous implementation also missed a few special chars so had to adjust the tests accordingly (looked at RFC 4627 for reference).\n\nNote: There's another JSON String encoder on our classpath (org.apache.commons.lang3.StringEscapeUtils) that essentially does the same thing but adds quoting to more characters than the Jackson Encoder above.\n\nRelates to #5473\n",
"number": 10820,
"review_comments": [],
"title": "Fix JSON encoding for Mustache templates."
} | {
"commits": [
{
"message": "Fix JSON encoding for Mustache templates.\n\nThis pull request replaces the current self-made implementation of JSON encoding special chars with re-using the Jackson JsonStringEncoder. Turns out the previous implementation also missed a few special chars so had to adjust the tests accordingly (looked at RFC 4627 for reference).\n\nNote: There's another JSON String encoder on our classpath (org.apache.commons.lang3.StringEscapeUtils) that essentially does the same thing but adds quoting to more characters than the Jackson Encoder above.\n\nRelates to #5473"
}
],
"files": [
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.script.mustache;\n \n+import com.fasterxml.jackson.core.io.JsonStringEncoder;\n import com.github.mustachejava.DefaultMustacheFactory;\n import com.github.mustachejava.MustacheException;\n \n@@ -28,40 +29,14 @@\n * A MustacheFactory that does simple JSON escaping.\n */\n public final class JsonEscapingMustacheFactory extends DefaultMustacheFactory {\n-\n+ \n @Override\n public void encode(String value, Writer writer) {\n try {\n- escape(value, writer);\n+ JsonStringEncoder utils = new JsonStringEncoder();\n+ writer.write(utils.quoteAsString(value));;\n } catch (IOException e) {\n throw new MustacheException(\"Failed to encode value: \" + value);\n }\n }\n-\n- public static Writer escape(String value, Writer writer) throws IOException {\n- for (int i = 0; i < value.length(); i++) {\n- final char character = value.charAt(i);\n- if (isEscapeChar(character)) {\n- writer.write('\\\\');\n- }\n- writer.write(character);\n- }\n- return writer;\n- }\n-\n- public static boolean isEscapeChar(char c) {\n- switch(c) {\n- case '\\b':\n- case '\\f':\n- case '\\n':\n- case '\\r':\n- case '\"':\n- case '\\\\':\n- case '\\u000B': // vertical tab\n- case '\\t':\n- return true;\n- }\n- return false;\n- }\n-\n }",
"filename": "src/main/java/org/elasticsearch/script/mustache/JsonEscapingMustacheFactory.java",
"status": "modified"
},
{
"diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.script.mustache;\n \n-import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.test.ElasticsearchTestCase;\n@@ -38,10 +37,12 @@\n */\n public class MustacheScriptEngineTest extends ElasticsearchTestCase {\n private MustacheScriptEngineService qe;\n+ private JsonEscapingMustacheFactory escaper;\n \n @Before\n public void setup() {\n qe = new MustacheScriptEngineService(ImmutableSettings.Builder.EMPTY_SETTINGS);\n+ escaper = new JsonEscapingMustacheFactory();\n }\n \n @Test\n@@ -73,43 +74,98 @@ public void testSimpleParameterReplace() {\n public void testEscapeJson() throws IOException {\n {\n StringWriter writer = new StringWriter();\n- JsonEscapingMustacheFactory.escape(\"hello \\n world\", writer);\n- assertThat(writer.toString(), equalTo(\"hello \\\\\\n world\"));\n+ escaper.encode(\"hello \\n world\", writer);\n+ assertThat(writer.toString(), equalTo(\"hello \\\\n world\"));\n }\n {\n StringWriter writer = new StringWriter();\n- JsonEscapingMustacheFactory.escape(\"\\n\", writer);\n- assertThat(writer.toString(), equalTo(\"\\\\\\n\"));\n+ escaper.encode(\"\\n\", writer);\n+ assertThat(writer.toString(), equalTo(\"\\\\n\"));\n }\n \n- Character[] specialChars = new Character[]{'\\f', '\\n', '\\r', '\"', '\\\\', (char) 11, '\\t', '\\b' };\n+ Character[] specialChars = new Character[]{\n+ '\\\"', \n+ '\\\\', \n+ '\\u0000', \n+ '\\u0001',\n+ '\\u0002',\n+ '\\u0003',\n+ '\\u0004',\n+ '\\u0005',\n+ '\\u0006',\n+ '\\u0007',\n+ '\\u0008',\n+ '\\u0009',\n+ '\\u000B',\n+ '\\u000C',\n+ '\\u000E',\n+ '\\u000F',\n+ '\\u001F'};\n+ String[] escapedChars = new String[]{\n+ \"\\\\\\\"\", \n+ \"\\\\\\\\\", \n+ \"\\\\u0000\", \n+ \"\\\\u0001\",\n+ \"\\\\u0002\",\n+ \"\\\\u0003\",\n+ \"\\\\u0004\",\n+ \"\\\\u0005\",\n+ \"\\\\u0006\",\n+ \"\\\\u0007\",\n+ \"\\\\u0008\",\n+ \"\\\\u0009\",\n+ \"\\\\u000B\",\n+ \"\\\\u000C\",\n+ \"\\\\u000E\",\n+ \"\\\\u000F\",\n+ \"\\\\u001F\"};\n int iters = scaledRandomIntBetween(100, 1000);\n for (int i = 0; i < iters; i++) {\n int rounds = scaledRandomIntBetween(1, 20);\n- StringWriter escaped = new StringWriter();\n+ StringWriter expect = new StringWriter();\n StringWriter writer = new StringWriter();\n for (int j = 0; j < rounds; j++) {\n String s = getChars();\n writer.write(s);\n- escaped.write(s);\n- char c = RandomPicks.randomFrom(getRandom(), specialChars);\n- writer.append(c);\n- escaped.append('\\\\');\n- escaped.append(c);\n+ expect.write(s);\n+\n+ int charIndex = randomInt(7);\n+ writer.append(specialChars[charIndex]);\n+ expect.append(escapedChars[charIndex]);\n }\n StringWriter target = new StringWriter();\n- assertThat(escaped.toString(), equalTo(JsonEscapingMustacheFactory.escape(writer.toString(), target).toString()));\n+ escaper.encode(writer.toString(), target);\n+ assertThat(expect.toString(), equalTo(target.toString()));\n }\n }\n \n private String getChars() {\n String string = randomRealisticUnicodeOfCodepointLengthBetween(0, 10);\n for (int i = 0; i < string.length(); i++) {\n- if (JsonEscapingMustacheFactory.isEscapeChar(string.charAt(i))) {\n+ if (isEscapeChar(string.charAt(i))) {\n return string.substring(0, i);\n }\n }\n return string;\n }\n-\n+ \n+ /**\n+ * From https://www.ietf.org/rfc/rfc4627.txt:\n+ * \n+ * All Unicode characters may be placed within the\n+ * quotation marks except for the characters that must be escaped:\n+ * 
quotation mark, reverse solidus, and the control characters (U+0000\n+ * through U+001F). \n+ * */\n+ private static boolean isEscapeChar(char c) {\n+ switch (c) {\n+ case '\"':\n+ case '\\\\':\n+ return true;\n+ }\n+ \n+ if (c < '\\u002F')\n+ return true;\n+ return false;\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/script/mustache/MustacheScriptEngineTest.java",
"status": "modified"
}
]
} |
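For reference, a minimal standalone usage of the Jackson encoder the PR above switches to (assuming `jackson-core` is on the classpath, as it is for Elasticsearch):

```java
import com.fasterxml.jackson.core.io.JsonStringEncoder;

public class JsonEscapeSketch {
    public static void main(String[] args) {
        JsonStringEncoder encoder = new JsonStringEncoder();
        // Quotes, backslashes and control characters come back escaped per RFC 4627,
        // instead of the HTML entities Mustache's default escaping would produce.
        char[] escaped = encoder.quoteAsString("hello \n world \"quoted\"");
        System.out.println(new String(escaped)); // hello \n world \"quoted\"
    }
}
```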
{
"body": "If a translog is closed while there is still a recovery running we can end up in an infinite loop keeping the shard and the store etc. open. I bet there are more bad thing that can happen here...\n\nit manifested in exceptions like this:\n\n```\n 2> REPRODUCE WITH: mvn test -Pdev -Dtests.seed=89A1F19C6ECBCF0C -Dtests.class=org.elasticsearch.test.rest.Rest2Tests -Dtests.slow=true -Dtests.method=\"test {yaml=indices.get_field_mapping/10_basic/Get field mapping with include_defaults}\" -Des.logger.level=DEBUG -Des.node.mode=local -Dtests.security.manager=true -Dtests.nightly=false -Dtests.heap.size=1024m -Dtests.jvm.argline=\"-server -XX:+UseG1GC -XX:+UseCompressedOops -XX:+AggressiveOpts\" -Dtests.locale=tr -Dtests.timezone=Europe/Isle_of_Man -Dtests.rest.blacklist=cat.recovery/10_basic/*\nFAILURE 30.4s J4 | Rest2Tests.test {yaml=indices.get_field_mapping/10_basic/Get field mapping with include_defaults} <<<\n > Throwable #1: java.lang.AssertionError: Delete Index failed - not acked\n```\n\nand exceptions like:\n\n```\n 1> [2015-04-25 16:10:50,790][DEBUG][indices ] [node_s1] [test_index] failed to delete index store - at least one shards is still locked\n 1> org.apache.lucene.store.LockObtainFailedException: Can't lock shard [test_index][2], timed out after 0ms\n 1> at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:520)\n 1> at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:448)\n 1> at org.elasticsearch.env.NodeEnvironment.lockAllForIndex(NodeEnvironment.java:392)\n 1> at org.elasticsearch.env.NodeEnvironment.deleteIndexDirectorySafe(NodeEnvironment.java:342)\n 1> at org.elasticsearch.indices.IndicesService.deleteIndexStore(IndicesService.java:494)\n 1> at org.elasticsearch.indices.IndicesService.removeIndex(IndicesService.java:401)\n 1> at org.elasticsearch.indices.IndicesService.deleteIndex(IndicesService.java:443)\n 1> at org.elasticsearch.indices.cluster.IndicesClusterStateService.deleteIndex(IndicesClusterStateService.java:845)\n```\n\nall subsequent tests also fail with the delete not acked and print the same thread always sitting on a yield call in `FSTranslog`\n\n```\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> 8) Thread[id=178, name=elasticsearch[node_s0][generic][T#1], state=RUNNABLE, group=TGRP-Rest2Tests]\n 1> at java.lang.Thread.yield(Native Method)\n 1> at org.elasticsearch.index.translog.fs.FsTranslog.snapshot(FsTranslog.java:362)\n 1> at org.elasticsearch.index.translog.fs.FsTranslog.snapshot(FsTranslog.java:61)\n 1> at org.elasticsearch.index.engine.InternalEngine.recover(InternalEngine.java:845)\n 1> at org.elasticsearch.index.shard.IndexShard.recover(IndexShard.java:730)\n 1> at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:125)\n 1> at org.elasticsearch.indices.recovery.RecoverySource.access$200(RecoverySource.java:49)\n 1> at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:135)\n```\n\nif the translog is actually closed `current.snapshot()` always returns `null` and we will spin forever....\n\n``` Java\n @Override\n public FsChannelSnapshot snapshot() throws TranslogException {\n while (true) {\n FsChannelSnapshot snapshot = current.snapshot();\n if (snapshot != null) {\n return snapshot;\n }\n Thread.yield();\n }\n }\n```\n\nphew I am happy I finally tracked it down, it's super rare but annoying :)\n",
"comments": [
{
"body": "@bleskes I wonder if this can also cause funky other things like large translogs etc?\n",
"created_at": "2015-04-25T16:27:47Z"
},
{
"body": "here is the failure that shows the bug http://build-us-00.elastic.co/job/es_g1gc_master_metal/6078/consoleFull\n",
"created_at": "2015-04-25T16:28:29Z"
},
{
"body": "good catch! I don't think this can explain big translogs as the translog is only closed after the engine is closed, so there are no writes possible while this is ongoing. For what it's worth - this is also fixed in #10624 as the snapshot is retrieved from a view held by the recovery code. That view has it's own reference to the relevant translog files. Which means they can not be closed.\n",
"created_at": "2015-04-25T21:49:55Z"
},
{
"body": "@bleskes I kept the fix minimal since we are refactoring this anyways\n",
"created_at": "2015-04-26T12:09:43Z"
}
],
"number": 10807,
"title": "FSTranslog#snapshot() can enter infinite loop"
} | {
"body": "If the translog is closed while a snapshot opertion is in progress\nwe must fail the snapshot operation otherwise we end up in an endless\nloop.\n\nCloses #10807\n",
"number": 10809,
"review_comments": [],
"title": "Fail #snapshot if translog is closed"
} | {
"commits": [
{
"message": "[TRANSLOG] Fail #snapshot if translog is closed\n\nIf the translog is closed while a snapshot opertion is in progress\nwe must fail the snapshot operation otherwise we end up in an endless\nloop.\n\nCloses #10807"
}
],
"files": [
{
"diff": "@@ -50,6 +50,7 @@\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n+import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.locks.ReadWriteLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n import java.util.regex.Matcher;\n@@ -93,7 +94,7 @@ public void onRefreshSettings(Settings settings) {\n \n private final ApplySettings applySettings = new ApplySettings();\n \n-\n+ private final AtomicBoolean closed = new AtomicBoolean(false);\n \n @Inject\n public FsTranslog(ShardId shardId, @IndexSettings Settings indexSettings, IndexSettingsService indexSettingsService,\n@@ -140,14 +141,16 @@ public void updateBuffer(ByteSizeValue bufferSize) {\n \n @Override\n public void close() throws IOException {\n- if (indexSettingsService != null) {\n- indexSettingsService.removeListener(applySettings);\n- }\n- rwl.writeLock().lock();\n- try {\n- IOUtils.close(this.trans, this.current);\n- } finally {\n- rwl.writeLock().unlock();\n+ if (closed.compareAndSet(false, true)) {\n+ if (indexSettingsService != null) {\n+ indexSettingsService.removeListener(applySettings);\n+ }\n+ rwl.writeLock().lock();\n+ try {\n+ IOUtils.close(this.trans, this.current);\n+ } finally {\n+ rwl.writeLock().unlock();\n+ }\n }\n }\n \n@@ -355,6 +358,9 @@ public Location add(Operation operation) throws TranslogException {\n @Override\n public FsChannelSnapshot snapshot() throws TranslogException {\n while (true) {\n+ if (closed.get()) {\n+ throw new TranslogException(shardId, \"translog is already closed\");\n+ }\n FsChannelSnapshot snapshot = current.snapshot();\n if (snapshot != null) {\n return snapshot;",
"filename": "src/main/java/org/elasticsearch/index/translog/fs/FsTranslog.java",
"status": "modified"
},
{
"diff": "@@ -332,6 +332,17 @@ public void testSnapshotWithNewTranslog() throws IOException {\n snapshot.close();\n }\n \n+ public void testSnapshotOnClosedTranslog() throws IOException {\n+ assertTrue(Files.exists(translogDir.resolve(\"translog-1\")));\n+ translog.add(new Translog.Create(\"test\", \"1\", new byte[]{1}));\n+ translog.close();\n+ try {\n+ Translog.Snapshot snapshot = translog.snapshot();\n+ } catch (TranslogException ex) {\n+ assertEquals(ex.getMessage(), \"translog is already closed\");\n+ }\n+ }\n+\n @Test\n public void deleteOnRollover() throws IOException {\n translog.add(new Translog.Create(\"test\", \"1\", new byte[]{1}));",
"filename": "src/test/java/org/elasticsearch/index/translog/AbstractSimpleTranslogTests.java",
"status": "modified"
}
]
} |
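The fix above combines two plain Java idioms: an idempotent close guarded by an `AtomicBoolean`, and a retry loop that checks that flag before spinning again. A simplified, hypothetical sketch of the pattern (not the real `FsTranslog`):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CloseAwareSnapshotSketch {

    private final AtomicBoolean closed = new AtomicBoolean(false);

    // Idempotent close: only the first caller performs the cleanup.
    public void close() {
        if (closed.compareAndSet(false, true)) {
            // release file channels, remove settings listeners, etc.
        }
    }

    // Retry loop that fails fast once close() has run, instead of
    // spinning forever on Thread.yield() as described in the issue.
    public Object snapshot() {
        while (true) {
            if (closed.get()) {
                throw new IllegalStateException("translog is already closed");
            }
            Object snapshot = trySnapshot();
            if (snapshot != null) {
                return snapshot;
            }
            Thread.yield();
        }
    }

    // Stand-in for the per-file snapshot attempt, which may return null
    // while a translog roll-over is in progress.
    private Object trySnapshot() {
        return new Object();
    }

    public static void main(String[] args) {
        CloseAwareSnapshotSketch translog = new CloseAwareSnapshotSketch();
        translog.close();
        try {
            translog.snapshot();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // translog is already closed
        }
    }
}
```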
{
"body": "Sampler Aggregator is a single-bucket aggregator but if you try to use it as part of the order in a terms aggregation it fails. Below is a sense script to reproduce:\n\n``` javascript\nPOST test/doc/1\n{\"color\":\"YELLOW\",\"date\":1500000009,\"weight\":105}\n\nPOST test/doc/2\n{\"color\":\"YELLOW\",\"date\":1500000008,\"weight\":104}\n\nPOST test/doc/3\n{\"color\":\"YELLOW\",\"date\":1500000007,\"weight\":103}\n\nPOST test/doc/11\n{\"color\":\"RED\",\"date\":1500000009,\"weight\":205}\n\nGET test/doc/_search\n{\n \"size\": 0,\n \"query\": {\n \"match_all\": {}\n },\n \"sort\": [\n {\n \"date\": {\n \"order\": \"desc\"\n }\n }\n ],\n \"aggregations\": {\n \"distinctColors\": {\n \"terms\": {\n \"field\": \"color\",\n \"size\": 1,\n \"order\": {\n \"sample>max_weight.value\": \"asc\"\n }\n },\n \"aggregations\": {\n \"sample\": {\n \"sampler\": {\n \"shard_size\": 1\n },\n \"aggs\": {\n \"max_weight\": {\n \"max\": {\n \"field\": \"weight\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nThe search request throws an ArrayStoreException:\n\n```\norg.elasticsearch.transport.RemoteTransportException: [Stallior][inet[/192.168.0.7:9300]][indices:data/read/search[phase/query]]\nCaused by: java.lang.ArrayStoreException\n at java.lang.System.arraycopy(Native Method)\n at org.elasticsearch.search.aggregations.support.AggregationPath.subPath(AggregationPath.java:191)\n at org.elasticsearch.search.aggregations.support.AggregationPath.validate(AggregationPath.java:307)\n at org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.validate(InternalOrder.java:145)\n at org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.validate(InternalOrder.java:138)\n at org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.<init>(TermsAggregator.java:141)\n at org.elasticsearch.search.aggregations.bucket.terms.AbstractStringTermsAggregator.<init>(AbstractStringTermsAggregator.java:39)\n at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator.<init>(GlobalOrdinalsStringTermsAggregator.java:75)\n at org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory$ExecutionMode$2.create(TermsAggregatorFactory.java:70)\n at org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory.doCreateInternal(TermsAggregatorFactory.java:223)\n at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.createInternal(ValuesSourceAggregatorFactory.java:57)\n at org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:95)\n at org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:69)\n at org.elasticsearch.search.aggregations.AggregationPhase.preProcess(AggregationPhase.java:77)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:96)\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:296)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:307)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:422)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:1)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:340)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n    at java.lang.Thread.run(Thread.java:745)\n\n```\n\nbut if you debug at `org.elasticsearch.search.aggregations.support.AggregationPath.subPath(AggregationPath.java:191)` you can see that the aggregator being tested is of type `AggregatorFactory$1` and wraps the `SamplerAggregator`. This is created by the `asMultiBucketAggregator(this, context, parent);` call in `SamplerAggregator$Factory.createInternal(...)`.\n\nThe reason SamplerAggregator has to be wrapped is that its collectors do not take into account the `parentBucketOrdinal`.\n\nWe should update the SamplerAggregator (including the Diversity parts) to collect documents for each `parentBucketOrdinal` so that it doesn't need to be wrapped anymore and can be used in ordering like the other single-bucket aggregators.\n",
"comments": [
{
"body": "@jpountz , can you confirm my assumption: the parent bucket IDs aggs are asked to collect on are compact and ascending (0,1,2,3...) or do I have to allow for very sparse values (7,10342,...)?\nThis dictates if I use a map or an array in my sampler collection and also if I in turn should rebase IDs of the buckets that survive the \"best docs\" selection process.\n",
"created_at": "2015-04-23T12:15:14Z"
},
{
"body": "@markharwood Indeed they are fine to use as array indices. However I'm confused why you are mentioning \"surviving\" bucket as the sampler aggregator should not filter buckets? My assumption was that it would just compute a different sample on each bucket?\n",
"created_at": "2015-04-23T12:33:56Z"
},
{
"body": "> My assumption was that it would just compute a different sample on each bucket?\n\nMy bad. You are correct.\nOn a separate point - when replaying the deferred collection(s) I need to replay collects in docId order along with the choice of bucket ID. There may be more than one bucket per doc id. A convenient way of doing this which avoids extra object allocations is to take the ScoreDocs produced from each of the samples and sneak the bucketID into the \"shardIndex\" int value they hold and then sort them for replay. A bit hacky (casting long bucket ids to ints) but should be OK?\n",
"created_at": "2015-04-23T12:45:38Z"
},
{
"body": "This hack sounds ok to me, if you use more than Integer.MAX_VALUE buckets to collect such an aggregator, you will have other issues anyway.\n",
"created_at": "2015-04-23T12:51:45Z"
}
],
"number": 10719,
"title": "Sampler Aggregator cannot be used in terms agg order"
} | {
"body": "The Sampler agg was not capable of collecting samples for more than one parent bucket.\nAdded a Junit test case and changed BestDocsDeferringCollector to internally maintain collections per parent bucket.\n\nCloses #10719\n",
"number": 10785,
"review_comments": [
{
"body": "Could we add a comment here reminding that the shardIndex is being used to store the parent bucket ordinal?\n",
"created_at": "2015-04-29T09:00:13Z"
},
{
"body": "Should we throw an exception here instead? It seems like something has gone wrong if we ask for the doc count for a parentBucket that doesn't exist?\n",
"created_at": "2015-04-29T09:03:51Z"
},
{
"body": "This comment is useful in the context of this review to know why the code moved but given that this was new as of a couple of weeks ago I think it can be removed before pushing?\n",
"created_at": "2015-04-29T09:07:42Z"
},
{
"body": "Will do\n",
"created_at": "2015-04-29T09:37:19Z"
},
{
"body": "A couple of the tests in SamplerTests trip this condition. Unmapped index, Sampler agg at the root and the aggs frameworks asks root agg for doc count on parent \"0\".\nI'll add a check for non-zero bucket ID and error accordingly\n",
"created_at": "2015-04-29T10:08:30Z"
},
{
"body": "can it be final?\n",
"created_at": "2015-05-19T20:40:26Z"
},
{
"body": "Actually I think it should come back: sometimes we can build aggregations on buckets that were not collected, see eg. https://github.com/elastic/elasticsearch/issues/11150\n",
"created_at": "2015-05-19T20:50:32Z"
}
],
"title": "Sampler agg could not be used with Terms agg’s order."
} | {
"commits": [
{
"message": "Aggregation fix: Sampler agg could not be used with Terms agg’s order.\nThe Sampler agg was not capable of collecting samples for more than one parent bucket.\nAdded a Junit test case and changed BestDocsDeferringCollector to internally maintain collections per parent bucket.\n\nCloses #10719"
},
{
"message": "Addressing minor comments from review"
},
{
"message": "Removed exception on getDocCount call when no prior collections following comment from @jpountz. Added use of final, rebased on master."
}
],
"files": [
{
"diff": "@@ -43,6 +43,8 @@\n import org.elasticsearch.search.aggregations.bucket.range.date.DateRangeBuilder;\n import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceBuilder;\n import org.elasticsearch.search.aggregations.bucket.range.ipv4.IPv4RangeBuilder;\n+import org.elasticsearch.search.aggregations.bucket.sampler.Sampler;\n+import org.elasticsearch.search.aggregations.bucket.sampler.SamplerAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;\n import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsBuilder;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n@@ -145,6 +147,13 @@ public static FiltersAggregationBuilder filters(String name) {\n return new FiltersAggregationBuilder(name);\n }\n \n+ /**\n+ * Create a new {@link Sampler} aggregation with the given name.\n+ */\n+ public static SamplerAggregationBuilder sampler(String name) {\n+ return new SamplerAggregationBuilder(name);\n+ }\n+\n /**\n * Create a new {@link Global} aggregation with the given name.\n */",
"filename": "src/main/java/org/elasticsearch/search/aggregations/AggregationBuilders.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,10 @@\n import org.apache.lucene.search.TopDocsCollector;\n import org.apache.lucene.search.TopScoreDocCollector;\n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.ObjectArray;\n import org.elasticsearch.search.aggregations.BucketCollector;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n \n@@ -46,25 +50,29 @@\n * \n */\n \n-public class BestDocsDeferringCollector extends DeferringBucketCollector {\n+public class BestDocsDeferringCollector extends DeferringBucketCollector implements Releasable {\n final List<PerSegmentCollects> entries = new ArrayList<>();\n BucketCollector deferred;\n- TopDocsCollector<? extends ScoreDoc> tdc;\n- boolean finished = false;\n+ ObjectArray<PerParentBucketSamples> perBucketSamples;\n private int shardSize;\n private PerSegmentCollects perSegCollector;\n- private int matchedDocs;\n+ private final BigArrays bigArrays;\n \n /**\n * Sole constructor.\n * \n * @param shardSize\n+ * The number of top-scoring docs to collect for each bucket\n+ * @param bigArrays\n */\n- public BestDocsDeferringCollector(int shardSize) {\n+ public BestDocsDeferringCollector(int shardSize, BigArrays bigArrays) {\n this.shardSize = shardSize;\n+ this.bigArrays = bigArrays;\n+ perBucketSamples = bigArrays.newObjectArray(1);\n }\n \n \n+\n @Override\n public boolean needsScores() {\n return true;\n@@ -73,16 +81,10 @@ public boolean needsScores() {\n /** Set the deferred collectors. */\n public void setDeferredCollector(Iterable<BucketCollector> deferredCollectors) {\n this.deferred = BucketCollector.wrap(deferredCollectors);\n- try {\n- tdc = createTopDocsCollector(shardSize);\n- } catch (IOException e) {\n- throw new ElasticsearchException(\"IO error creating collector\", e);\n- }\n }\n \n @Override\n public LeafBucketCollector getLeafCollector(LeafReaderContext ctx) throws IOException {\n- // finishLeaf();\n perSegCollector = new PerSegmentCollects(ctx);\n entries.add(perSegCollector);\n \n@@ -95,7 +97,7 @@ public void setScorer(Scorer scorer) throws IOException {\n \n @Override\n public void collect(int doc, long bucket) throws IOException {\n- perSegCollector.collect(doc);\n+ perSegCollector.collect(doc, bucket);\n }\n };\n }\n@@ -112,50 +114,102 @@ public void preCollection() throws IOException {\n \n @Override\n public void postCollection() throws IOException {\n- finished = true;\n+ runDeferredAggs();\n }\n \n- /**\n- * Replay the wrapped collector, but only on a selection of buckets.\n- */\n+\n @Override\n public void prepareSelectedBuckets(long... 
selectedBuckets) throws IOException {\n- if (!finished) {\n- throw new IllegalStateException(\"Cannot replay yet, collection is not finished: postCollect() has not been called\");\n- }\n- if (selectedBuckets.length > 1) {\n- throw new IllegalStateException(\"Collection only supported on a single bucket\");\n- }\n+ // no-op - deferred aggs processed in postCollection call\n+ }\n \n+ private void runDeferredAggs() throws IOException {\n deferred.preCollection();\n \n- TopDocs topDocs = tdc.topDocs();\n- ScoreDoc[] sd = topDocs.scoreDocs;\n- matchedDocs = sd.length;\n- // Sort the top matches by docID for the benefit of deferred collector\n- Arrays.sort(sd, new Comparator<ScoreDoc>() {\n- @Override\n- public int compare(ScoreDoc o1, ScoreDoc o2) {\n- return o1.doc - o2.doc;\n+ List<ScoreDoc> allDocs = new ArrayList<>(shardSize);\n+ for (int i = 0; i < perBucketSamples.size(); i++) {\n+ PerParentBucketSamples perBucketSample = perBucketSamples.get(i);\n+ if (perBucketSample == null) {\n+ continue;\n }\n- });\n+ perBucketSample.getMatches(allDocs);\n+ }\n+ \n+ // Sort the top matches by docID for the benefit of deferred collector\n+ ScoreDoc[] docsArr = allDocs.toArray(new ScoreDoc[allDocs.size()]);\n+ Arrays.sort(docsArr, new Comparator<ScoreDoc>() {\n+ @Override\n+ public int compare(ScoreDoc o1, ScoreDoc o2) {\n+ if(o1.doc == o2.doc){\n+ return o1.shardIndex - o2.shardIndex; \n+ }\n+ return o1.doc - o2.doc;\n+ }\n+ });\n try {\n for (PerSegmentCollects perSegDocs : entries) {\n- perSegDocs.replayRelatedMatches(sd);\n+ perSegDocs.replayRelatedMatches(docsArr);\n }\n- // deferred.postCollection();\n } catch (IOException e) {\n throw new ElasticsearchException(\"IOException collecting best scoring results\", e);\n }\n deferred.postCollection();\n }\n \n+ class PerParentBucketSamples {\n+ private LeafCollector currentLeafCollector;\n+ private TopDocsCollector<? 
extends ScoreDoc> tdc;\n+ private long parentBucket;\n+ private int matchedDocs;\n+\n+ public PerParentBucketSamples(long parentBucket, Scorer scorer, LeafReaderContext readerContext) {\n+ try {\n+ this.parentBucket = parentBucket;\n+ tdc = createTopDocsCollector(shardSize);\n+ currentLeafCollector = tdc.getLeafCollector(readerContext);\n+ setScorer(scorer);\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(\"IO error creating collector\", e);\n+ }\n+ }\n+\n+ public void getMatches(List<ScoreDoc> allDocs) {\n+ TopDocs topDocs = tdc.topDocs();\n+ ScoreDoc[] sd = topDocs.scoreDocs;\n+ matchedDocs = sd.length;\n+ for (ScoreDoc scoreDoc : sd) {\n+ // A bit of a hack to (ab)use shardIndex property here to\n+ // hold a bucket ID but avoids allocating extra data structures\n+ // and users should have bigger concerns if bucket IDs\n+ // exceed int capacity..\n+ scoreDoc.shardIndex = (int) parentBucket;\n+ }\n+ allDocs.addAll(Arrays.asList(sd));\n+ }\n+\n+ public void collect(int doc) throws IOException {\n+ currentLeafCollector.collect(doc);\n+ }\n+\n+ public void setScorer(Scorer scorer) throws IOException {\n+ currentLeafCollector.setScorer(scorer);\n+ }\n+\n+ public void changeSegment(LeafReaderContext readerContext) throws IOException {\n+ currentLeafCollector = tdc.getLeafCollector(readerContext);\n+ }\n+\n+ public int getDocCount() {\n+ return matchedDocs;\n+ }\n+ }\n+\n class PerSegmentCollects extends Scorer {\n private LeafReaderContext readerContext;\n int maxDocId = Integer.MIN_VALUE;\n private float currentScore;\n private int currentDocId = -1;\n- private LeafCollector currentLeafCollector;\n+ private Scorer currentScorer;\n \n PerSegmentCollects(LeafReaderContext readerContext) throws IOException {\n // The publisher behaviour for Reader/Scorer listeners triggers a\n@@ -164,12 +218,24 @@ class PerSegmentCollects extends Scorer {\n // However, passing null seems to have no adverse effects here...\n super(null);\n this.readerContext = readerContext;\n- currentLeafCollector = tdc.getLeafCollector(readerContext);\n-\n+ for (int i = 0; i < perBucketSamples.size(); i++) {\n+ PerParentBucketSamples perBucketSample = perBucketSamples.get(i);\n+ if (perBucketSample == null) {\n+ continue;\n+ }\n+ perBucketSample.changeSegment(readerContext);\n+ }\n }\n \n public void setScorer(Scorer scorer) throws IOException {\n- currentLeafCollector.setScorer(scorer);\n+ this.currentScorer = scorer;\n+ for (int i = 0; i < perBucketSamples.size(); i++) {\n+ PerParentBucketSamples perBucketSample = perBucketSamples.get(i);\n+ if (perBucketSample == null) {\n+ continue;\n+ }\n+ perBucketSample.setScorer(scorer);\n+ }\n }\n \n public void replayRelatedMatches(ScoreDoc[] sd) throws IOException {\n@@ -188,7 +254,9 @@ public void replayRelatedMatches(ScoreDoc[] sd) throws IOException {\n if ((rebased >= 0) && (rebased <= maxDocId)) {\n currentScore = scoreDoc.score;\n currentDocId = rebased;\n- leafCollector.collect(rebased, 0);\n+ // We stored the bucket ID in Lucene's shardIndex property\n+ // for convenience. 
\n+ leafCollector.collect(rebased, scoreDoc.shardIndex);\n }\n }\n \n@@ -224,15 +292,32 @@ public long cost() {\n throw new ElasticsearchException(\"This caching scorer implementation only implements score() and docID()\");\n }\n \n- public void collect(int docId) throws IOException {\n- currentLeafCollector.collect(docId);\n+ public void collect(int docId, long parentBucket) throws IOException {\n+ perBucketSamples = bigArrays.grow(perBucketSamples, parentBucket + 1);\n+ PerParentBucketSamples sampler = perBucketSamples.get((int) parentBucket);\n+ if (sampler == null) {\n+ sampler = new PerParentBucketSamples(parentBucket, currentScorer, readerContext);\n+ perBucketSamples.set((int) parentBucket, sampler);\n+ }\n+ sampler.collect(docId);\n maxDocId = Math.max(maxDocId, docId);\n }\n }\n \n \n- public int getDocCount() {\n- return matchedDocs;\n+ public int getDocCount(long parentBucket) {\n+ PerParentBucketSamples sampler = perBucketSamples.get((int) parentBucket);\n+ if (sampler == null) {\n+ // There are conditions where no docs are collected and the aggs\n+ // framework still asks for doc count.\n+ return 0;\n+ }\n+ return sampler.getDocCount();\n+ }\n+\n+ @Override\n+ public void close() throws ElasticsearchException {\n+ Releasables.close(perBucketSamples);\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/BestDocsDeferringCollector.java",
"status": "modified"
},
{
"diff": "@@ -71,7 +71,7 @@ public DeferringBucketCollector getDeferringCollector() {\n class DiverseDocsDeferringCollector extends BestDocsDeferringCollector {\n \n public DiverseDocsDeferringCollector() {\n- super(shardSize);\n+ super(shardSize, context.bigArrays());\n }\n \n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedBytesHashSamplerAggregator.java",
"status": "modified"
},
{
"diff": "@@ -77,7 +77,7 @@ public DeferringBucketCollector getDeferringCollector() {\n class DiverseDocsDeferringCollector extends BestDocsDeferringCollector {\n \n public DiverseDocsDeferringCollector() {\n- super(shardSize);\n+ super(shardSize, context.bigArrays());\n }\n \n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedMapSamplerAggregator.java",
"status": "modified"
},
{
"diff": "@@ -64,7 +64,7 @@ public DeferringBucketCollector getDeferringCollector() {\n */\n class DiverseDocsDeferringCollector extends BestDocsDeferringCollector {\n public DiverseDocsDeferringCollector() {\n- super(shardSize);\n+ super(shardSize, context.bigArrays());\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedNumericSamplerAggregator.java",
"status": "modified"
},
{
"diff": "@@ -66,7 +66,7 @@ public DeferringBucketCollector getDeferringCollector() {\n class DiverseDocsDeferringCollector extends BestDocsDeferringCollector {\n \n public DiverseDocsDeferringCollector() {\n- super(shardSize);\n+ super(shardSize, context.bigArrays());\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedOrdinalsSamplerAggregator.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.elasticsearch.common.ParseField;\n+import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n@@ -154,7 +155,7 @@ public boolean needsScores() {\n \n @Override\n public DeferringBucketCollector getDeferringCollector() {\n- bdd = new BestDocsDeferringCollector(shardSize);\n+ bdd = new BestDocsDeferringCollector(shardSize, context.bigArrays());\n return bdd;\n \n }\n@@ -168,7 +169,8 @@ protected boolean shouldDefer(Aggregator aggregator) {\n @Override\n public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {\n runDeferredCollections(owningBucketOrdinal);\n- return new InternalSampler(name, bdd == null ? 0 : bdd.getDocCount(), bucketAggregations(owningBucketOrdinal), pipelineAggregators(),\n+ return new InternalSampler(name, bdd == null ? 0 : bdd.getDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal),\n+ pipelineAggregators(),\n metaData());\n }\n \n@@ -189,10 +191,6 @@ public Factory(String name, int shardSize) {\n @Override\n public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket,\n List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) throws IOException {\n-\n- if (collectsFromSingleBucket == false) {\n- return asMultiBucketAggregator(this, context, parent);\n- }\n return new SamplerAggregator(name, shardSize, factories, context, parent, pipelineAggregators, metaData);\n }\n \n@@ -216,11 +214,6 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationCont\n boolean collectsFromSingleBucket, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData)\n throws IOException {\n \n- if (collectsFromSingleBucket == false) {\n- return asMultiBucketAggregator(this, context, parent);\n- }\n-\n-\n if (valuesSource instanceof ValuesSource.Numeric) {\n return new DiversifiedNumericSamplerAggregator(name, shardSize, factories, context, parent, pipelineAggregators, metaData,\n (Numeric) valuesSource, maxDocsPerValue);\n@@ -272,5 +265,11 @@ protected LeafBucketCollector getLeafCollector(LeafReaderContext ctx, LeafBucket\n return bdd.getLeafCollector(ctx);\n }\n \n+ @Override\n+ protected void doClose() {\n+ Releasables.close(bdd);\n+ super.doClose();\n+ }\n+\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java",
"status": "modified"
},
{
"diff": "@@ -28,17 +28,22 @@\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms.Bucket;\n import org.elasticsearch.search.aggregations.bucket.terms.TermsBuilder;\n+import org.elasticsearch.search.aggregations.metrics.max.Max;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n import java.util.Collection;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.max;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.sampler;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n import static org.hamcrest.Matchers.lessThanOrEqualTo;\n \n /**\n@@ -58,19 +63,19 @@ public String randomExecutionHint() {\n public void setupSuiteScopeCluster() throws Exception {\n assertAcked(prepareCreate(\"test\").setSettings(SETTING_NUMBER_OF_SHARDS, NUM_SHARDS, SETTING_NUMBER_OF_REPLICAS, 0).addMapping(\n \"book\", \"author\", \"type=string,index=not_analyzed\", \"name\", \"type=string,index=analyzed\", \"genre\",\n- \"type=string,index=not_analyzed\"));\n+ \"type=string,index=not_analyzed\", \"price\", \"type=float\"));\n createIndex(\"idx_unmapped\");\n // idx_unmapped_author is same as main index but missing author field\n assertAcked(prepareCreate(\"idx_unmapped_author\").setSettings(SETTING_NUMBER_OF_SHARDS, NUM_SHARDS, SETTING_NUMBER_OF_REPLICAS, 0)\n- .addMapping(\"book\", \"name\", \"type=string,index=analyzed\", \"genre\", \"type=string,index=not_analyzed\"));\n+ .addMapping(\"book\", \"name\", \"type=string,index=analyzed\", \"genre\", \"type=string,index=not_analyzed\", \"price\", \"type=float\"));\n \n ensureGreen();\n String data[] = { \n // \"id,cat,name,price,inStock,author_t,series_t,sequence_i,genre_s\",\n \"0553573403,book,A Game of Thrones,7.99,true,George R.R. Martin,A Song of Ice and Fire,1,fantasy\",\n \"0553579908,book,A Clash of Kings,7.99,true,George R.R. Martin,A Song of Ice and Fire,2,fantasy\",\n \"055357342X,book,A Storm of Swords,7.99,true,George R.R. 
Martin,A Song of Ice and Fire,3,fantasy\",\n- \"0553293354,book,Foundation,7.99,true,Isaac Asimov,Foundation Novels,1,scifi\",\n+ \"0553293354,book,Foundation,17.99,true,Isaac Asimov,Foundation Novels,1,scifi\",\n \"0812521390,book,The Black Company,6.99,false,Glen Cook,The Chronicles of The Black Company,1,fantasy\",\n \"0812550706,book,Ender's Game,6.99,true,Orson Scott Card,Ender,1,scifi\",\n \"0441385532,book,Jhereg,7.95,false,Steven Brust,Vlad Taltos,1,fantasy\",\n@@ -82,11 +87,43 @@ public void setupSuiteScopeCluster() throws Exception {\n \n for (int i = 0; i < data.length; i++) {\n String[] parts = data[i].split(\",\");\n- client().prepareIndex(\"test\", \"book\", \"\" + i).setSource(\"author\", parts[5], \"name\", parts[2], \"genre\", parts[8]).get();\n- client().prepareIndex(\"idx_unmapped_author\", \"book\", \"\" + i).setSource(\"name\", parts[2], \"genre\", parts[8]).get();\n+ client().prepareIndex(\"test\", \"book\", \"\" + i).setSource(\"author\", parts[5], \"name\", parts[2], \"genre\", parts[8], \"price\",Float.parseFloat(parts[3])).get();\n+ client().prepareIndex(\"idx_unmapped_author\", \"book\", \"\" + i).setSource(\"name\", parts[2], \"genre\", parts[8],\"price\",Float.parseFloat(parts[3])).get();\n }\n client().admin().indices().refresh(new RefreshRequest(\"test\")).get();\n }\n+ \n+ @Test\n+ public void issue10719() throws Exception {\n+ // Tests that we can refer to nested elements under a sample in a path\n+ // statement\n+ boolean asc = randomBoolean(); \n+ SearchResponse response = client().prepareSearch(\"test\").setTypes(\"book\").setSearchType(SearchType.QUERY_AND_FETCH)\n+ .addAggregation(terms(\"genres\")\n+ .field(\"genre\")\n+ .order(Terms.Order.aggregation(\"sample>max_price.value\", asc))\n+ .subAggregation(sampler(\"sample\").shardSize(100)\n+ .subAggregation(max(\"max_price\").field(\"price\")))\n+ ).execute().actionGet();\n+ assertSearchResponse(response);\n+ Terms genres = response.getAggregations().get(\"genres\");\n+ Collection<Bucket> genreBuckets = genres.getBuckets();\n+ // For this test to be useful we need >1 genre bucket to compare\n+ assertThat(genreBuckets.size(), greaterThan(1));\n+ double lastMaxPrice = asc ? Double.MIN_VALUE : Double.MAX_VALUE;\n+ for (Terms.Bucket genreBucket : genres.getBuckets()) {\n+ Sampler sample = genreBucket.getAggregations().get(\"sample\");\n+ Max maxPriceInGenre = sample.getAggregations().get(\"max_price\");\n+ double price = maxPriceInGenre.getValue();\n+ if (asc) {\n+ assertThat(price, greaterThanOrEqualTo(lastMaxPrice));\n+ } else {\n+ assertThat(price, lessThanOrEqualTo(lastMaxPrice));\n+ }\n+ lastMaxPrice = price;\n+ }\n+\n+ }\n \n @Test\n public void noDiversity() throws Exception {",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/SamplerTests.java",
"status": "modified"
}
]
} |
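The collector rework in the row above keeps one top-docs sample per parent bucket and, as its own inline comment notes, (ab)uses `ScoreDoc.shardIndex` to carry the parent bucket ordinal before all sampled docs are merged and replayed in docID order. Below is a minimal standalone sketch of just that ordering trick, assuming only `lucene-core` on the classpath; the class name and sample values are invented for illustration and this is not the PR code.

```java
import org.apache.lucene.search.ScoreDoc;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: tag each sampled doc with its parent bucket ordinal via
// ScoreDoc.shardIndex, then sort by docID (bucket as tie-breaker) before replay.
public class BucketTaggedReplayOrder {
    public static void main(String[] args) {
        List<ScoreDoc> all = new ArrayList<>();
        // hypothetical samples: bucket 0 kept docs 7 and 3; bucket 1 kept docs 3 and 12
        all.add(tag(new ScoreDoc(7, 2.0f), 0));
        all.add(tag(new ScoreDoc(3, 1.5f), 0));
        all.add(tag(new ScoreDoc(3, 0.9f), 1));
        all.add(tag(new ScoreDoc(12, 4.2f), 1));

        // Ascending docID so segments are visited in order; bucket ordinal breaks ties.
        all.sort(Comparator.<ScoreDoc>comparingInt(sd -> sd.doc)
                .thenComparingInt(sd -> sd.shardIndex));

        for (ScoreDoc sd : all) {
            System.out.println("doc=" + sd.doc + " bucket=" + sd.shardIndex + " score=" + sd.score);
        }
    }

    private static ScoreDoc tag(ScoreDoc sd, int parentBucket) {
        sd.shardIndex = parentBucket; // reuse shardIndex to hold the bucket ordinal
        return sd;
    }
}
```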
{
"body": "This is very closely related to scenario described in #10038:\nthere is an index template with aliases using filter on some fields and a new events come in, failing to created the index when this doesn't yet exists (typical use case is logstash)\n\nThe difference here is that field used in index template to define the alias _is_ declared under default mapping, but still , the problem happens if _bulk endpoint is in use. normal single index request works fine.\n\nRepro using ES 1.5.0 - LS 1.4.2 (same behaviour on ES 1.4.1)\n\n```\n\n# user in _default_ mapping , term filter in alias\n\nPUT _template/template_1\n{\n \"template\": \"test1*\",\n \"settings\": {\n \"number_of_shards\": 1\n },\n \"mappings\": {\n \"_default_\": {\n \"properties\": {\n \"user\": {\n \"type\": \"string\"\n }\n }\n }\n },\n \"aliases\": {\n \"filtered-alias1\": {\n \"filter\": {\n \"term\": {\n \"user\": \"john\"\n }\n }\n }\n }\n}\n```\n\nUsing logstash 1.4.2 or 1.5.0beta1\n\n```\nabonuccelli@w530 /opt/elk/TEST/logstash-1.4.2 $ cat config/template-filter-alias-stdout.cnf \ninput{ \nstdin{}\n}\nfilter{\n}\noutput{\nstdout{\n codec=>rubydebug\n}\nelasticsearch {\n host => \"localhost\"\n protocol => http\n manage_template => false\n index => \"test1-%{+YYYY.MM.dd}\"\n }\n}\n```\n\nstart logstash parse a doc via logstash\n\n```\nabonuccelli@w530 /opt/elk/TEST/logstash-1.4.2 $ ./bin/logstash -f config/template-filter-alias-stdout.cnf --debug \nReading config file {:file=>\"logstash/agent.rb\", :level=>:debug, :line=>\"301\"}\nCompiled pipeline code:\n@inputs = []\n@filters = []\n@outputs = []\n@input_stdin_1 = plugin(\"input\", \"stdin\")\n\n@inputs << @input_stdin_1\n\n@output_stdout_2 = plugin(\"output\", \"stdout\", LogStash::Util.hash_merge_many({ \"codec\" => (\"rubydebug\".force_encoding(\"UTF-8\")) }))\n\n@outputs << @output_stdout_2\n@output_elasticsearch_3 = plugin(\"output\", \"elasticsearch\", LogStash::Util.hash_merge_many({ \"host\" => (\"localhost\".force_encoding(\"UTF-8\")) }, { \"protocol\" => (\"http\".force_encoding(\"UTF-8\")) }, { \"manage_template\" => (\"false\".force_encoding(\"UTF-8\")) }, { \"index\" => (\"test1-%{+YYYY.MM.dd}\".force_encoding(\"UTF-8\")) }))\n\n@outputs << @output_elasticsearch_3\n @filter_func = lambda do |event, &block|\n extra_events = []\n @logger.debug? && @logger.debug(\"filter received\", :event => event.to_hash)\n\n extra_events.each(&block)\n end\n @output_func = lambda do |event, &block|\n @logger.debug? 
&& @logger.debug(\"output received\", :event => event.to_hash)\n @output_stdout_2.handle(event)\n @output_elasticsearch_3.handle(event)\n\n end {:level=>:debug, :file=>\"logstash/pipeline.rb\", :line=>\"26\"}\nconfig LogStash::Codecs::Line/@charset = \"UTF-8\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Inputs::Stdin/@debug = false {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Inputs::Stdin/@codec = <LogStash::Codecs::Line charset=>\"UTF-8\"> {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Inputs::Stdin/@add_field = {} {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::Stdout/@codec = <LogStash::Codecs::RubyDebug > {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::Stdout/@type = \"\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::Stdout/@tags = [] {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::Stdout/@exclude_tags = [] {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::Stdout/@workers = 1 {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Codecs::Plain/@charset = \"UTF-8\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@host = \"localhost\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@protocol = \"http\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@manage_template = false {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@index = \"test1-%{+YYYY.MM.dd}\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@type = \"\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@tags = [] {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@exclude_tags = [] {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@codec = <LogStash::Codecs::Plain charset=>\"UTF-8\"> {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@workers = 1 {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@template_name = \"logstash\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@template_overwrite = false {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@document_id = nil {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@embedded = false {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@embedded_http_port = \"9200-9300\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@max_inflight_requests = 50 {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@flush_size = 
5000 {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@idle_flush_time = 1 {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nconfig LogStash::Outputs::ElasticSearch/@action = \"index\" {:level=>:debug, :file=>\"logstash/config/mixin.rb\", :line=>\"105\"}\nPipeline started {:level=>:info, :file=>\"logstash/pipeline.rb\", :line=>\"78\"}\nNew Elasticsearch output {:cluster=>nil, :host=>\"localhost\", :port=>\"9200\", :embedded=>false, :protocol=>\"http\", :level=>:info, :file=>\"logstash/outputs/elasticsearch.rb\", :line=>\"252\"}\nasdasdasd\noutput received {:event=>{\"message\"=>\"asdasdasd\", \"@version\"=>\"1\", \"@timestamp\"=>\"2015-04-15T09:51:18.096Z\", \"host\"=>\"w530\"}, :level=>:debug, :file=>\"(eval)\", :line=>\"21\"}\n{\n \"message\" => \"asdasdasd\",\n \"@version\" => \"1\",\n \"@timestamp\" => \"2015-04-15T09:51:18.096Z\",\n \"host\" => \"w530\"\n}\nFlushing output {:outgoing_count=>1, :time_since_last_flush=>2.154, :outgoing_events=>{nil=>[[\"index\", {:_id=>nil, :_index=>\"test1-2015.04.15\", :_type=>\"logs\"}, {\"message\"=>\"asdasdasd\", \"@version\"=>\"1\", \"@timestamp\"=>\"2015-04-15T09:51:18.096Z\", \"host\"=>\"w530\"}]]}, :batch_timeout=>1, :force=>nil, :final=>nil, :level=>:debug, :file=>\"stud/buffer.rb\", :line=>\"207\"}\n```\n\ntcpdump capture of it\n\n```\n11:52:54.717052 IP (tos 0x0, ttl 64, id 57438, offset 0, flags [DF], proto TCP (6), length 138)\n 127.0.0.1.32768 > 127.0.0.1.9200: Flags [P.], cksum 0xfe7e (incorrect -> 0x05af), seq 245:331, ack 537, win 350, options [nop,nop,TS val 167425274 ecr 167401133], length 86\n..........#.Z....g_-...^.~.....\n ... .V.POST /_bulk HTTP/1.1\nhost: localhost\nconnection: keep-alive\ncontent-length: 159\n\n\n11:52:54.717234 IP (tos 0x0, ttl 64, id 57439, offset 0, flags [DF], proto TCP (6), length 211)\n 127.0.0.1.32768 > 127.0.0.1.9200: Flags [P.], cksum 0xfec7 (incorrect -> 0xb5b8), seq 331:490, ack 537, win 350, options [nop,nop,TS val 167425274 ecr 167401133], length 159\nE...._@.@.[...........#.Z..=.g_-...^.......\n ... 
.V.{\"index\":{\"_id\":null,\"_index\":\"test1-2015.04.15\",\"_type\":\"logs\"}}\n{\"message\":\"asdasdasd\",\"@version\":\"1\",\"@timestamp\":\"2015-04-15T09:52:54.697Z\",\"host\":\"w530\"}\n```\n\nelasticsearch 1.5.0 log\n\n```\n\n[2015-04-15 11:52:54,728][DEBUG][action.admin.indices.create] [nodeM1] [test1-2015.04.15] failed to create\norg.elasticsearch.ElasticsearchIllegalArgumentException: failed to parse filter for alias [filtered-alias1]\n at org.elasticsearch.cluster.metadata.AliasValidator.validateAliasFilter(AliasValidator.java:142)\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:413)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:365)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.index.query.QueryParsingException: [test1-2015.04.15] Strict field resolution and no field mapping can be found for the field with name [user]\n at org.elasticsearch.index.query.QueryParseContext.failIfFieldMappingNotFound(QueryParseContext.java:422)\n at org.elasticsearch.index.query.QueryParseContext.smartFieldMappers(QueryParseContext.java:397)\n at org.elasticsearch.index.query.TermFilterParser.parse(TermFilterParser.java:111)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:368)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:349)\n at org.elasticsearch.cluster.metadata.AliasValidator.validateAliasFilter(AliasValidator.java:151)\n at org.elasticsearch.cluster.metadata.AliasValidator.validateAliasFilter(AliasValidator.java:140)\n ... 
7 more\n```\n\ndoing the same via curl - same problem\n\n```\nAntonios-MacBook-Air-2:~ abonuccelli$ curl -XPUT w530:9200/_bulk -d '{\"index\":{\"_id\":null,\"_index\":\"test1-2015.04.15\",\"_type\":\"logs\"}}\n{\"message\":\"asdasdasd\",\"@version\":\"1\",\"@timestamp\":\"2015-04-15T09:52:54.697Z\",\"host\":\"w530\"}\n'\n{\"took\":22,\"errors\":true,\"items\":[{\"index\":{\"_index\":\"test1-2015.04.15\",\"_type\":\"logs\",\"_id\":null,\"status\":400,\"error\":\"RemoteTransportException[[nodeM1][inet[/192.168.0.101:9304]][indices:admin/create]]; nested: ElasticsearchIllegalArgumentException[failed to parse filter for alias [filtered-alias1]]; nested: QueryParsingException[[test1-2015.04.15] Strict field resolution and no field mapping can be found for the field with name [user]]; \"}}]\n```\n\nif using normal single index request\n\n```\nAntonios-MacBook-Air-2:~ abonuccelli$ curl -XPUT w530:9200/test1-2015.04.15/type/1 -d '\n{\"message\":\"asdasdasd\",\"@version\":\"1\",\"@timestamp\":\"2015-04-15T09:52:54.697Z\",\"host\":\"w530\"}'\n```\n\nthere is no problem\n\n```\n[2015-04-15 12:07:34,356][DEBUG][cluster.service ] [nodeM1] processing [create-index [test1-2015.04.15], cause [auto(index api)]]: execute\n[2015-04-15 12:07:34,357][DEBUG][indices ] [nodeM1] creating Index [test1-2015.04.15], shards [1]/[2]\n[2015-04-15 12:07:34,373][DEBUG][index.mapper ] [nodeM1] [test1-2015.04.15] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/opt/elk/PROD/nodeM1/elasticsearch-1.5.0/lib/elasticsearch-1.5.0.jar!/org/elasticsearch/index/mapper/default-mapping.json], default percolator mapping: location[null], loaded_from[null]\n[2015-04-15 12:07:34,373][DEBUG][index.cache.query.parser.resident] [nodeM1] [test1-2015.04.15] using [resident] query cache with max_size [100], expire [null]\n[2015-04-15 12:07:34,374][DEBUG][index.store.fs ] [nodeM1] [test1-2015.04.15] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]\n[2015-04-15 12:07:34,377][INFO ][cluster.metadata ] [nodeM1] [test1-2015.04.15] creating index, cause [auto(index api)], templates [template_1], shards [1]/[2], mappings [_default_, type]\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing ... 
(reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing index service (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing index cache (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][index.cache.filter.weighted] [nodeM1] [test1-2015.04.15] full cache clear, reason [close]\n[2015-04-15 12:07:34,382][DEBUG][index.cache.fixedbitset ] [nodeM1] [test1-2015.04.15] clearing all bitsets because [close]\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] clearing index field data (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing analysis service (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing index engine (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing index gateway (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing mapper service (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing index query parser service (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closing index service (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,382][DEBUG][indices ] [nodeM1] [test1-2015.04.15] closed... (reason [cleaning up after validating index on master])\n[2015-04-15 12:07:34,383][DEBUG][cluster.service ] [nodeM1] cluster state updated, version [183], source [create-index [test1-2015.04.15], cause [auto(index api)]]\n[2015-04-15 12:07:34,387][DEBUG][indices.cluster ] [nodeD2] [test1-2015.04.15] creating index\n[2015-04-15 12:07:34,387][DEBUG][indices ] [nodeD2] creating Index [test1-2015.04.15], shards [1]/[2]\n[2015-04-15 12:07:34,398][DEBUG][index.mapper ] [nodeD2] [test1-2015.04.15] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/opt/elk/PROD/nodeD2/elasticsearch-1.5.0/lib/elasticsearch-1.5.0.jar!/org/elasticsearch/index/mapper/default-mapping.json], default percolator mapping: location[null], loaded_from[null]\n[2015-04-15 12:07:34,398][DEBUG][index.cache.query.parser.resident] [nodeD2] [test1-2015.04.15] using [resident] query cache with max_size [100], expire [null]\n[2015-04-15 12:07:34,399][DEBUG][index.store.fs ] [nodeD2] [test1-2015.04.15] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]\n[2015-04-15 12:07:34,399][DEBUG][indices.cluster ] [nodeD2] [test1-2015.04.15] adding mapping [_default_], source [{\"_default_\":{\"properties\":{\"user\":{\"type\":\"string\"}}}}]\n[2015-04-15 12:07:34,399][DEBUG][indices.cluster ] [nodeD2] [test1-2015.04.15] adding mapping [type], source [{\"type\":{\"properties\":{\"user\":{\"type\":\"string\"}}}}]\n[2015-04-15 12:07:34,400][DEBUG][indices.cluster ] [nodeD2] [test1-2015.04.15] adding alias [filtered-alias1], filter [{\"term\":{\"user\":\"john\"}}]\n[2015-04-15 12:07:34,400][DEBUG][indices.cluster ] [nodeD2] [test1-2015.04.15][0] creating shard\n[2015-04-15 12:07:34,400][DEBUG][index ] [nodeD2] [test1-2015.04.15] creating shard_id 
[test1-2015.04.15][0]\n[2015-04-15 12:07:34,402][DEBUG][index.store.fs ] [nodeD2] [test1-2015.04.15] using [/opt/elk/PROD/FS/data/nodeD2/tony_prod/nodes/0/indices/test1-2015.04.15/0/index] as shard's index location\n[2015-04-15 12:07:34,402][DEBUG][index.store ] [nodeD2] [test1-2015.04.15][0] store stats are refreshed with refresh_interval [10s]\n[2015-04-15 12:07:34,402][DEBUG][index.merge.scheduler ] [nodeD2] [test1-2015.04.15][0] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]\n[2015-04-15 12:07:34,402][DEBUG][index.store.fs ] [nodeD2] [test1-2015.04.15] using [/opt/elk/PROD/FS/data/nodeD2/tony_prod/nodes/0/indices/test1-2015.04.15/0/translog] as shard's translog location\n[2015-04-15 12:07:34,403][DEBUG][index.deletionpolicy ] [nodeD2] [test1-2015.04.15][0] Using [keep_only_last] deletion policy\n[2015-04-15 12:07:34,403][DEBUG][index.merge.policy ] [nodeD2] [test1-2015.04.15][0] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]\n[2015-04-15 12:07:34,403][DEBUG][index.shard ] [nodeD2] [test1-2015.04.15][0] state: [CREATED]\n[2015-04-15 12:07:34,403][DEBUG][index.translog ] [nodeD2] [test1-2015.04.15][0] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]\n[2015-04-15 12:07:34,403][DEBUG][index.shard ] [nodeD2] [test1-2015.04.15][0] state: [CREATED]->[RECOVERING], reason [from gateway]\n[2015-04-15 12:07:34,403][DEBUG][index.gateway ] [nodeD2] [test1-2015.04.15][0] starting recovery from local ...\n[2015-04-15 12:07:34,403][DEBUG][index.engine ] [nodeD2] [test1-2015.04.15][0] no 3.x segments needed upgrading\n[2015-04-15 12:07:34,424][DEBUG][cluster.service ] [nodeM1] processing [create-index [test1-2015.04.15], cause [auto(index api)]]: done applying updated cluster_state (version: 183)\n[2015-04-15 12:07:34,432][DEBUG][index.shard ] [nodeD2] [test1-2015.04.15][0] scheduling refresher every 1s\n[2015-04-15 12:07:34,432][DEBUG][index.shard ] [nodeD2] [test1-2015.04.15][0] scheduling optimizer / merger every 1s\n[2015-04-15 12:07:34,432][DEBUG][index.shard ] [nodeD2] [test1-2015.04.15][0] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from gateway, no translog]\n[2015-04-15 12:07:34,432][DEBUG][index.gateway ] [nodeD2] [test1-2015.04.15][0] recovery completed from [local], took [29ms]\n[2015-04-15 12:07:34,432][DEBUG][cluster.action.shard ] [nodeD2] sending shard started for [test1-2015.04.15][0], node[ci5IBs99RxmpMNOBvRWAHQ], [P], s[INITIALIZING], indexUUID [Lyn-pvs-QRS_InyMh9bi6w], reason [after recovery from gateway]\n[2015-04-15 12:07:34,432][DEBUG][cluster.action.shard ] [nodeM1] received shard started for [test1-2015.04.15][0], node[ci5IBs99RxmpMNOBvRWAHQ], [P], s[INITIALIZING], indexUUID [Lyn-pvs-QRS_InyMh9bi6w], reason [after recovery from gateway]\n[2015-04-15 12:07:34,432][DEBUG][cluster.service ] [nodeM1] processing [shard-started ([test1-2015.04.15][0], node[ci5IBs99RxmpMNOBvRWAHQ], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute\n[2015-04-15 12:07:34,432][DEBUG][cluster.action.shard ] [nodeM1] [test1-2015.04.15][0] will apply shard started [test1-2015.04.15][0], node[ci5IBs99RxmpMNOBvRWAHQ], [P], s[INITIALIZING], indexUUID [Lyn-pvs-QRS_InyMh9bi6w], reason [after recovery from gateway]\n[2015-04-15 12:07:34,433][DEBUG][indices.store ] [nodeD2] [test1-2015.04.15][0] loaded store meta 
data (took [0s])\n[2015-04-15 12:07:34,440][DEBUG][cluster.service ] [nodeM1] cluster state updated, version [184], source [shard-started ([test1-2015.04.15][0], node[ci5IBs99RxmpMNOBvRWAHQ], [P], s[INITIALIZING]), reason [after recovery from gateway]]\n\n```\n",
"comments": [
{
"body": "Hi @martijnvg \n\nPlease can you look at this. \n",
"created_at": "2015-04-21T11:33:11Z"
}
],
"number": 10609,
"title": "\"Failed to parse filter for alias/no field mapping can be found\" with field declared in _default_ mapping when using _bulk"
} | {
"body": "Fields defined in the `_default_` mapping of an index template should be picked up when an index alias filter is parsed if a new index is introduced when a document is indexed into an index that doesn't exist yet via the bulk api.\n\nPR for #10609\n",
"number": 10762,
"review_comments": [],
"title": "`_default_` mapping should be picked up from index template during auto create index from bulk API "
} | {
"commits": [
{
"message": "bulk: Fields defined in the `_default_` mapping of an index template should be picked up when an index alias filter is parsed if a new index is introduced when a document is indexed into an index that doesn't exist yet via the bulk api.\n\nCloses #10609"
}
],
"files": [
{
"diff": "@@ -21,7 +21,6 @@\n \n import com.google.common.collect.Lists;\n import com.google.common.collect.Maps;\n-import com.google.common.collect.Sets;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.ExceptionsHelper;\n@@ -100,22 +99,33 @@ protected void doExecute(final BulkRequest bulkRequest, final ActionListener<Bul\n final AtomicArray<BulkItemResponse> responses = new AtomicArray<>(bulkRequest.requests.size());\n \n if (autoCreateIndex.needToCheck()) {\n- final Set<String> indices = Sets.newHashSet();\n+ // Keep track of all unique indices and all unique types per index for the create index requests:\n+ final Map<String, Set<String>> indicesAndTypes = new HashMap<>();\n for (ActionRequest request : bulkRequest.requests) {\n if (request instanceof DocumentRequest) {\n DocumentRequest req = (DocumentRequest) request;\n- if (!indices.contains(req.index())) {\n- indices.add(req.index());\n+ Set<String> types = indicesAndTypes.get(req.index());\n+ if (types == null) {\n+ indicesAndTypes.put(req.index(), types = new HashSet<>());\n }\n+ types.add(req.type());\n } else {\n throw new ElasticsearchException(\"Parsed unknown request in bulk actions: \" + request.getClass().getSimpleName());\n }\n }\n- final AtomicInteger counter = new AtomicInteger(indices.size());\n+ final AtomicInteger counter = new AtomicInteger(indicesAndTypes.size());\n ClusterState state = clusterService.state();\n- for (final String index : indices) {\n+ for (Map.Entry<String, Set<String>> entry : indicesAndTypes.entrySet()) {\n+ final String index = entry.getKey();\n if (autoCreateIndex.shouldAutoCreate(index, state)) {\n- createIndexAction.execute(new CreateIndexRequest(bulkRequest).index(index).cause(\"auto(bulk api)\").masterNodeTimeout(bulkRequest.timeout()), new ActionListener<CreateIndexResponse>() {\n+ CreateIndexRequest createIndexRequest = new CreateIndexRequest(bulkRequest);\n+ createIndexRequest.index(index);\n+ for (String type : entry.getValue()) {\n+ createIndexRequest.mapping(type);\n+ }\n+ createIndexRequest.cause(\"auto(bulk api)\");\n+ createIndexRequest.masterNodeTimeout(bulkRequest.timeout());\n+ createIndexAction.execute(createIndexRequest, new ActionListener<CreateIndexResponse>() {\n @Override\n public void onResponse(CreateIndexResponse result) {\n if (counter.decrementAndGet() == 0) {",
"filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse;\n import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateRequestBuilder;\n+import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.common.Priority;\n@@ -604,38 +606,66 @@ public void testMultipleAliasesPrecedence() throws Exception {\n public void testStrictAliasParsingInIndicesCreatedViaTemplates() throws Exception {\n // Indexing into a should succeed, because the field mapping for field 'field' is defined in the test mapping.\n client().admin().indices().preparePutTemplate(\"template1\")\n- .setTemplate(\"a\")\n+ .setTemplate(\"a*\")\n .setOrder(0)\n .addMapping(\"test\", \"field\", \"type=string\")\n .addAlias(new Alias(\"alias1\").filter(termFilter(\"field\", \"value\"))).get();\n // Indexing into b should succeed, because the field mapping for field 'field' is defined in the _default_ mapping and the test type exists.\n client().admin().indices().preparePutTemplate(\"template2\")\n- .setTemplate(\"b\")\n+ .setTemplate(\"b*\")\n .setOrder(0)\n .addMapping(\"_default_\", \"field\", \"type=string\")\n .addMapping(\"test\")\n .addAlias(new Alias(\"alias2\").filter(termFilter(\"field\", \"value\"))).get();\n // Indexing into c should succeed, because the field mapping for field 'field' is defined in the _default_ mapping.\n client().admin().indices().preparePutTemplate(\"template3\")\n- .setTemplate(\"c\")\n+ .setTemplate(\"c*\")\n .setOrder(0)\n .addMapping(\"_default_\", \"field\", \"type=string\")\n .addAlias(new Alias(\"alias3\").filter(termFilter(\"field\", \"value\"))).get();\n // Indexing into d index should fail, since there is field with name 'field' in the mapping\n client().admin().indices().preparePutTemplate(\"template4\")\n- .setTemplate(\"d\")\n+ .setTemplate(\"d*\")\n .setOrder(0)\n .addAlias(new Alias(\"alias4\").filter(termFilter(\"field\", \"value\"))).get();\n \n- client().prepareIndex(\"a\", \"test\", \"test\").setSource(\"{}\").get();\n- client().prepareIndex(\"b\", \"test\", \"test\").setSource(\"{}\").get();\n- client().prepareIndex(\"c\", \"test\", \"test\").setSource(\"{}\").get();\n+ client().prepareIndex(\"a1\", \"test\", \"test\").setSource(\"{}\").get();\n+ BulkResponse response = client().prepareBulk().add(new IndexRequest(\"a2\", \"test\", \"test\").source(\"{}\")).get();\n+ assertThat(response.hasFailures(), is(false));\n+ assertThat(response.getItems()[0].isFailed(), equalTo(false));\n+ assertThat(response.getItems()[0].getIndex(), equalTo(\"a2\"));\n+ assertThat(response.getItems()[0].getType(), equalTo(\"test\"));\n+ assertThat(response.getItems()[0].getId(), equalTo(\"test\"));\n+ assertThat(response.getItems()[0].getVersion(), equalTo(1l));\n+\n+ client().prepareIndex(\"b1\", \"test\", \"test\").setSource(\"{}\").get();\n+ response = client().prepareBulk().add(new IndexRequest(\"b2\", \"test\", \"test\").source(\"{}\")).get();\n+ assertThat(response.hasFailures(), is(false));\n+ assertThat(response.getItems()[0].isFailed(), equalTo(false));\n+ assertThat(response.getItems()[0].getIndex(), equalTo(\"b2\"));\n+ assertThat(response.getItems()[0].getType(), equalTo(\"test\"));\n+ assertThat(response.getItems()[0].getId(), equalTo(\"test\"));\n+ 
assertThat(response.getItems()[0].getVersion(), equalTo(1l));\n+\n+ client().prepareIndex(\"c1\", \"test\", \"test\").setSource(\"{}\").get();\n+ response = client().prepareBulk().add(new IndexRequest(\"c2\", \"test\", \"test\").source(\"{}\")).get();\n+ assertThat(response.hasFailures(), is(false));\n+ assertThat(response.getItems()[0].isFailed(), equalTo(false));\n+ assertThat(response.getItems()[0].getIndex(), equalTo(\"c2\"));\n+ assertThat(response.getItems()[0].getType(), equalTo(\"test\"));\n+ assertThat(response.getItems()[0].getId(), equalTo(\"test\"));\n+ assertThat(response.getItems()[0].getVersion(), equalTo(1l));\n+\n try {\n- client().prepareIndex(\"d\", \"test\", \"test\").setSource(\"{}\").get();\n+ client().prepareIndex(\"d1\", \"test\", \"test\").setSource(\"{}\").get();\n fail();\n } catch (Exception e) {\n assertThat(ExceptionsHelper.unwrapCause(e), instanceOf(ElasticsearchIllegalArgumentException.class));\n assertThat(e.getMessage(), containsString(\"failed to parse filter for alias [alias4]\"));\n }\n+ response = client().prepareBulk().add(new IndexRequest(\"d2\", \"test\", \"test\").source(\"{}\")).get();\n+ assertThat(response.hasFailures(), is(true));\n+ assertThat(response.getItems()[0].isFailed(), equalTo(true));\n+ assertThat(response.getItems()[0].getFailureMessage(), containsString(\"failed to parse filter for alias [alias4]\"));\n }\n }",
"filename": "src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateTests.java",
"status": "modified"
}
]
} |
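The fix in the row above groups the items of a bulk request by target index and records the document types seen for each, so that the auto-create index request can register those types up front and fields from the `_default_` mapping resolve when the alias filter is parsed. A toy sketch of just that grouping step, using invented sample data and plain JDK collections rather than the Elasticsearch request classes:

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative only: collect the (index, type) pairs of a bulk request so each
// index that must be auto-created knows all mapping types it should start with.
public class BulkIndexTypeGrouping {
    public static void main(String[] args) {
        String[][] bulkItems = {              // hypothetical bulk request items: {index, type}
                {"test1-2015.04.15", "logs"},
                {"test1-2015.04.15", "logs"},
                {"other-index", "event"},
        };

        Map<String, Set<String>> indicesAndTypes = new LinkedHashMap<>();
        for (String[] item : bulkItems) {
            indicesAndTypes.computeIfAbsent(item[0], k -> new HashSet<>()).add(item[1]);
        }

        // One create-index request per index, seeded with every type seen for it.
        indicesAndTypes.forEach((index, types) ->
                System.out.println("auto-create " + index + " with mapping types " + types));
    }
}
```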
{
"body": "FielddataTermsFilter considers that two filters are equal if they apply to the same field name and the **hash code** of the wrapped terms are the same. It should compare terms directly.\n",
"comments": [
{
"body": "Fixed through https://github.com/elastic/elasticsearch/pull/10727 and then backported.\n",
"created_at": "2015-04-22T15:58:55Z"
}
],
"number": 10728,
"title": "Search: FielddataTermsFilter equality is based on hash codes"
} | {
"body": "This snapshot contains in particular LUCENE-6446 (refactored explanation API)\nand LUCENE-6448 (better equals/hashcode for filters).\n\nCloses #10728\n",
"number": 10727,
"review_comments": [
{
"body": "should we just say min of or minimum of?\n",
"created_at": "2015-04-22T15:08:37Z"
},
{
"body": "same comment about Math.min as before.\n",
"created_at": "2015-04-22T15:11:09Z"
},
{
"body": "wonderful :)\n",
"created_at": "2015-04-22T15:14:20Z"
}
],
"title": "Upgrade to lucene-5.2-snapshot-1675363."
} | {
"commits": [
{
"message": "Upgrade to lucene-5.2-snapshot-1675363.\n\nThis snapshot contains in particular LUCENE-6446 (refactored explanation API)\nand LUCENE-6448 (better equals/hashcode for filters)."
}
],
"files": [
{
"diff": "@@ -32,7 +32,7 @@\n \n <properties>\n <lucene.version>5.2.0</lucene.version>\n- <lucene.snapshot.revision>1675100</lucene.snapshot.revision>\n+ <lucene.snapshot.revision>1675363</lucene.snapshot.revision>\n <lucene.maven.version>5.2.0-snapshot-${lucene.snapshot.revision}</lucene.maven.version>\n <testframework.version>2.1.14</testframework.version>\n <tests.jvms>auto</tests.jvms>",
"filename": "pom.xml",
"status": "modified"
},
{
"diff": "@@ -39,7 +39,6 @@\n import org.apache.lucene.index.SegmentCommitInfo;\n import org.apache.lucene.index.SegmentInfos;\n import org.apache.lucene.search.Collector;\n-import org.apache.lucene.search.ComplexExplanation;\n import org.apache.lucene.search.Explanation;\n import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.search.Filter;\n@@ -530,48 +529,29 @@ public static void writeSortType(StreamOutput out, SortField.Type sortType) thro\n }\n \n public static Explanation readExplanation(StreamInput in) throws IOException {\n- Explanation explanation;\n- if (in.readBoolean()) {\n- Boolean match = in.readOptionalBoolean();\n- explanation = new ComplexExplanation();\n- ((ComplexExplanation) explanation).setMatch(match);\n-\n+ boolean match = in.readBoolean();\n+ String description = in.readString();\n+ final Explanation[] subExplanations = new Explanation[in.readVInt()];\n+ for (int i = 0; i < subExplanations.length; ++i) {\n+ subExplanations[i] = readExplanation(in);\n+ }\n+ if (match) {\n+ return Explanation.match(in.readFloat(), description, subExplanations);\n } else {\n- explanation = new Explanation();\n- }\n- explanation.setValue(in.readFloat());\n- explanation.setDescription(in.readString());\n- if (in.readBoolean()) {\n- int size = in.readVInt();\n- for (int i = 0; i < size; i++) {\n- explanation.addDetail(readExplanation(in));\n- }\n+ return Explanation.noMatch(description, subExplanations);\n }\n- return explanation;\n }\n \n public static void writeExplanation(StreamOutput out, Explanation explanation) throws IOException {\n-\n- if (explanation instanceof ComplexExplanation) {\n- out.writeBoolean(true);\n- out.writeOptionalBoolean(((ComplexExplanation) explanation).getMatch());\n- } else {\n- out.writeBoolean(false);\n- }\n- out.writeFloat(explanation.getValue());\n- if (explanation.getDescription() == null) {\n- throw new ElasticsearchIllegalArgumentException(\"Explanation descriptions should NOT be null\\n[\" + explanation.toString() + \"]\");\n- }\n+ out.writeBoolean(explanation.isMatch());\n out.writeString(explanation.getDescription());\n Explanation[] subExplanations = explanation.getDetails();\n- if (subExplanations == null) {\n- out.writeBoolean(false);\n- } else {\n- out.writeBoolean(true);\n- out.writeVInt(subExplanations.length);\n- for (Explanation subExp : subExplanations) {\n- writeExplanation(out, subExp);\n- }\n+ out.writeVInt(subExplanations.length);\n+ for (Explanation subExp : subExplanations) {\n+ writeExplanation(out, subExp);\n+ }\n+ if (explanation.isMatch()) {\n+ out.writeFloat(explanation.getValue());\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/lucene/Lucene.java",
"status": "modified"
},
{
"diff": "@@ -19,17 +19,20 @@\n \n package org.elasticsearch.common.lucene.search;\n \n+import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.MatchAllDocsQuery;\n-import org.apache.lucene.search.MatchNoDocsQuery;\n+import org.apache.lucene.search.PrefixQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.QueryWrapperFilter;\n+import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n \n@@ -58,6 +61,14 @@ public static Filter newMatchNoDocsFilter() {\n return wrap(newMatchNoDocsQuery());\n }\n \n+ public static Filter newNestedFilter() {\n+ return wrap(new PrefixQuery(new Term(TypeFieldMapper.NAME, new BytesRef(\"__\"))));\n+ }\n+\n+ public static Filter newNonNestedFilter() {\n+ return wrap(not(newNestedFilter()));\n+ }\n+\n /** Return a query that matches all documents but those that match the given query. */\n public static Query not(Query q) {\n BooleanQuery bq = new BooleanQuery();",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/Queries.java",
"status": "modified"
},
{
"diff": "@@ -37,8 +37,6 @@ public abstract class ResolvableFilter extends Filter {\n */\n public abstract Filter resolve();\n \n-\n-\n @Override\n public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {\n Filter resolvedFilter = resolve();",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/ResolvableFilter.java",
"status": "modified"
},
{
"diff": "@@ -52,9 +52,7 @@ public double score(int docId, float subQueryScore) {\n \n @Override\n public Explanation explainScore(int docId, Explanation subQueryScore) {\n- Explanation exp = new Explanation(boost, \"static boost factor\");\n- exp.addDetail(new Explanation(boost, \"boostFactor\"));\n- return exp;\n+ return Explanation.match(boost, \"static boost factor\", Explanation.match(boost, \"boostFactor\"));\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/BoostScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.common.lucene.search.function;\n \n-import org.apache.lucene.search.ComplexExplanation;\n import org.apache.lucene.search.Explanation;\n \n public enum CombineFunction {\n@@ -35,16 +34,15 @@ public String getName() {\n }\n \n @Override\n- public ComplexExplanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n+ public Explanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n float score = queryBoost * Math.min(funcExpl.getValue(), maxBoost) * queryExpl.getValue();\n- ComplexExplanation res = new ComplexExplanation(true, score, \"function score, product of:\");\n- res.addDetail(queryExpl);\n- ComplexExplanation minExpl = new ComplexExplanation(true, Math.min(funcExpl.getValue(), maxBoost), \"Math.min of\");\n- minExpl.addDetail(funcExpl);\n- minExpl.addDetail(new Explanation(maxBoost, \"maxBoost\"));\n- res.addDetail(minExpl);\n- res.addDetail(new Explanation(queryBoost, \"queryBoost\"));\n- return res;\n+ Explanation boostExpl = Explanation.match(maxBoost, \"maxBoost\");\n+ Explanation minExpl = Explanation.match(\n+ Math.min(funcExpl.getValue(), maxBoost),\n+ \"min of:\",\n+ funcExpl, boostExpl);\n+ return Explanation.match(score, \"function score, product of:\",\n+ queryExpl, minExpl, Explanation.match(queryBoost, \"queryBoost\"));\n }\n },\n REPLACE {\n@@ -59,15 +57,15 @@ public String getName() {\n }\n \n @Override\n- public ComplexExplanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n+ public Explanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n float score = queryBoost * Math.min(funcExpl.getValue(), maxBoost);\n- ComplexExplanation res = new ComplexExplanation(true, score, \"function score, product of:\");\n- ComplexExplanation minExpl = new ComplexExplanation(true, Math.min(funcExpl.getValue(), maxBoost), \"Math.min of\");\n- minExpl.addDetail(funcExpl);\n- minExpl.addDetail(new Explanation(maxBoost, \"maxBoost\"));\n- res.addDetail(minExpl);\n- res.addDetail(new Explanation(queryBoost, \"queryBoost\"));\n- return res;\n+ Explanation boostExpl = Explanation.match(maxBoost, \"maxBoost\");\n+ Explanation minExpl = Explanation.match(\n+ Math.min(funcExpl.getValue(), maxBoost),\n+ \"min of:\",\n+ funcExpl, boostExpl);\n+ return Explanation.match(score, \"function score, product of:\",\n+ minExpl, Explanation.match(queryBoost, \"queryBoost\"));\n }\n \n },\n@@ -83,19 +81,14 @@ public String getName() {\n }\n \n @Override\n- public ComplexExplanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n+ public Explanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n float score = queryBoost * (Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue());\n- ComplexExplanation res = new ComplexExplanation(true, score, \"function score, product of:\");\n- ComplexExplanation minExpl = new ComplexExplanation(true, Math.min(funcExpl.getValue(), maxBoost), \"Math.min of\");\n- minExpl.addDetail(funcExpl);\n- minExpl.addDetail(new Explanation(maxBoost, \"maxBoost\"));\n- ComplexExplanation sumExpl = new ComplexExplanation(true, Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue(),\n- \"sum of\");\n- sumExpl.addDetail(queryExpl);\n- sumExpl.addDetail(minExpl);\n- res.addDetail(sumExpl);\n- res.addDetail(new Explanation(queryBoost, \"queryBoost\"));\n- return 
res;\n+ Explanation minExpl = Explanation.match(Math.min(funcExpl.getValue(), maxBoost), \"min of:\",\n+ funcExpl, Explanation.match(maxBoost, \"maxBoost\"));\n+ Explanation sumExpl = Explanation.match(Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue(), \"sum of\",\n+ queryExpl, minExpl);\n+ return Explanation.match(score, \"function score, product of:\",\n+ sumExpl, Explanation.match(queryBoost, \"queryBoost\"));\n }\n \n },\n@@ -111,19 +104,15 @@ public String getName() {\n }\n \n @Override\n- public ComplexExplanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n+ public Explanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n float score = toFloat(queryBoost * (queryExpl.getValue() + Math.min(funcExpl.getValue(), maxBoost)) / 2.0);\n- ComplexExplanation res = new ComplexExplanation(true, score, \"function score, product of:\");\n- ComplexExplanation minExpl = new ComplexExplanation(true, Math.min(funcExpl.getValue(), maxBoost), \"Math.min of\");\n- minExpl.addDetail(funcExpl);\n- minExpl.addDetail(new Explanation(maxBoost, \"maxBoost\"));\n- ComplexExplanation avgExpl = new ComplexExplanation(true,\n- toFloat((Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue()) / 2.0), \"avg of\");\n- avgExpl.addDetail(queryExpl);\n- avgExpl.addDetail(minExpl);\n- res.addDetail(avgExpl);\n- res.addDetail(new Explanation(queryBoost, \"queryBoost\"));\n- return res;\n+ Explanation minExpl = Explanation.match(Math.min(funcExpl.getValue(), maxBoost), \"min of:\",\n+ funcExpl, Explanation.match(maxBoost, \"maxBoost\"));\n+ Explanation avgExpl = Explanation.match(\n+ toFloat((Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue()) / 2.0), \"avg of\",\n+ queryExpl, minExpl);\n+ return Explanation.match(score, \"function score, product of:\",\n+ avgExpl, Explanation.match(queryBoost, \"queryBoost\"));\n }\n \n },\n@@ -139,19 +128,16 @@ public String getName() {\n }\n \n @Override\n- public ComplexExplanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n+ public Explanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n float score = toFloat(queryBoost * Math.min(queryExpl.getValue(), Math.min(funcExpl.getValue(), maxBoost)));\n- ComplexExplanation res = new ComplexExplanation(true, score, \"function score, product of:\");\n- ComplexExplanation innerMinExpl = new ComplexExplanation(true, Math.min(funcExpl.getValue(), maxBoost), \"Math.min of\");\n- innerMinExpl.addDetail(funcExpl);\n- innerMinExpl.addDetail(new Explanation(maxBoost, \"maxBoost\"));\n- ComplexExplanation outerMinExpl = new ComplexExplanation(true, Math.min(Math.min(funcExpl.getValue(), maxBoost),\n- queryExpl.getValue()), \"min of\");\n- outerMinExpl.addDetail(queryExpl);\n- outerMinExpl.addDetail(innerMinExpl);\n- res.addDetail(outerMinExpl);\n- res.addDetail(new Explanation(queryBoost, \"queryBoost\"));\n- return res;\n+ Explanation innerMinExpl = Explanation.match(\n+ Math.min(funcExpl.getValue(), maxBoost), \"min of:\",\n+ funcExpl, Explanation.match(maxBoost, \"maxBoost\"));\n+ Explanation outerMinExpl = Explanation.match(\n+ Math.min(Math.min(funcExpl.getValue(), maxBoost), queryExpl.getValue()), \"min of\",\n+ queryExpl, innerMinExpl);\n+ return Explanation.match(score, \"function score, product of:\",\n+ outerMinExpl, Explanation.match(queryBoost, \"queryBoost\"));\n }\n \n },\n@@ -167,19 +153,16 @@ public String getName() {\n 
}\n \n @Override\n- public ComplexExplanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n+ public Explanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n float score = toFloat(queryBoost * Math.max(queryExpl.getValue(), Math.min(funcExpl.getValue(), maxBoost)));\n- ComplexExplanation res = new ComplexExplanation(true, score, \"function score, product of:\");\n- ComplexExplanation innerMinExpl = new ComplexExplanation(true, Math.min(funcExpl.getValue(), maxBoost), \"Math.min of\");\n- innerMinExpl.addDetail(funcExpl);\n- innerMinExpl.addDetail(new Explanation(maxBoost, \"maxBoost\"));\n- ComplexExplanation outerMaxExpl = new ComplexExplanation(true, Math.max(Math.min(funcExpl.getValue(), maxBoost),\n- queryExpl.getValue()), \"max of\");\n- outerMaxExpl.addDetail(queryExpl);\n- outerMaxExpl.addDetail(innerMinExpl);\n- res.addDetail(outerMaxExpl);\n- res.addDetail(new Explanation(queryBoost, \"queryBoost\"));\n- return res;\n+ Explanation innerMinExpl = Explanation.match(\n+ Math.min(funcExpl.getValue(), maxBoost), \"min of:\",\n+ funcExpl, Explanation.match(maxBoost, \"maxBoost\"));\n+ Explanation outerMaxExpl = Explanation.match(\n+ Math.max(Math.min(funcExpl.getValue(), maxBoost), queryExpl.getValue()), \"max of:\",\n+ queryExpl, innerMinExpl);\n+ return Explanation.match(score, \"function score, product of:\",\n+ outerMaxExpl, Explanation.match(queryBoost, \"queryBoost\"));\n }\n \n };\n@@ -198,5 +181,5 @@ private static double deviation(double input) { // only with assert!\n return Double.compare(floatVersion, input) == 0 || input == 0.0d ? 0 : 1.d - (floatVersion) / input;\n }\n \n- public abstract ComplexExplanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost);\n+ public abstract Explanation explain(float queryBoost, Explanation queryExpl, Explanation funcExpl, float maxBoost);\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/CombineFunction.java",
"status": "modified"
},
{
"diff": "@@ -70,13 +70,11 @@ public double score(int docId, float subQueryScore) {\n \n @Override\n public Explanation explainScore(int docId, Explanation subQueryScore) {\n- Explanation exp = new Explanation();\n String modifierStr = modifier != null ? modifier.toString() : \"\";\n double score = score(docId, subQueryScore.getValue());\n- exp.setValue(CombineFunction.toFloat(score));\n- exp.setDescription(\"field value function: \" +\n- modifierStr + \"(\" + \"doc['\" + field + \"'].value * factor=\" + boostFactor + \")\");\n- return exp;\n+ return Explanation.match(\n+ CombineFunction.toFloat(score),\n+ \"field value function: \" + modifierStr + \"(\" + \"doc['\" + field + \"'].value * factor=\" + boostFactor + \")\");\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java",
"status": "modified"
},
{
"diff": "@@ -175,7 +175,7 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio\n return subQueryExpl;\n }\n // First: Gather explanations for all filters\n- List<ComplexExplanation> filterExplanations = new ArrayList<>();\n+ List<Explanation> filterExplanations = new ArrayList<>();\n float weightSum = 0;\n for (FilterFunction filterFunction : filterFunctions) {\n \n@@ -191,18 +191,16 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio\n Explanation functionExplanation = filterFunction.function.getLeafScoreFunction(context).explainScore(doc, subQueryExpl);\n double factor = functionExplanation.getValue();\n float sc = CombineFunction.toFloat(factor);\n- ComplexExplanation filterExplanation = new ComplexExplanation(true, sc, \"function score, product of:\");\n- filterExplanation.addDetail(new Explanation(1.0f, \"match filter: \" + filterFunction.filter.toString()));\n- filterExplanation.addDetail(functionExplanation);\n+ Explanation filterExplanation = Explanation.match(sc, \"function score, product of:\",\n+ Explanation.match(1.0f, \"match filter: \" + filterFunction.filter.toString()), functionExplanation);\n filterExplanations.add(filterExplanation);\n }\n }\n if (filterExplanations.size() == 0) {\n float sc = getBoost() * subQueryExpl.getValue();\n- Explanation res = new ComplexExplanation(true, sc, \"function score, no filter match, product of:\");\n- res.addDetail(subQueryExpl);\n- res.addDetail(new Explanation(getBoost(), \"queryBoost\"));\n- return res;\n+ return Explanation.match(sc, \"function score, no filter match, product of:\",\n+ subQueryExpl,\n+ Explanation.match(getBoost(), \"queryBoost\"));\n }\n \n // Second: Compute the factor that would have been computed by the\n@@ -242,12 +240,11 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio\n }\n }\n }\n- ComplexExplanation factorExplanaition = new ComplexExplanation(true, CombineFunction.toFloat(factor),\n- \"function score, score mode [\" + scoreMode.toString().toLowerCase(Locale.ROOT) + \"]\");\n- for (int i = 0; i < filterExplanations.size(); i++) {\n- factorExplanaition.addDetail(filterExplanations.get(i));\n- }\n- return combineFunction.explain(getBoost(), subQueryExpl, factorExplanaition, maxBoost);\n+ Explanation factorExplanation = Explanation.match(\n+ CombineFunction.toFloat(factor),\n+ \"function score, score mode [\" + scoreMode.toString().toLowerCase(Locale.ROOT) + \"]\",\n+ filterExplanations);\n+ return combineFunction.explain(getBoost(), subQueryExpl, factorExplanation, maxBoost);\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -74,9 +74,9 @@ public double score(int docId, float subQueryScore) {\n \n @Override\n public Explanation explainScore(int docId, Explanation subQueryScore) {\n- Explanation exp = new Explanation();\n- exp.setDescription(\"random score function (seed: \" + originalSeed + \")\");\n- return exp;\n+ return Explanation.match(\n+ CombineFunction.toFloat(score(docId, subQueryScore.getValue())),\n+ \"random score function (seed: \" + originalSeed + \")\");\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -117,10 +117,12 @@ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOE\n if (params != null) {\n explanation += \"\\\" and parameters: \\n\" + params.toString();\n }\n- exp = new Explanation(CombineFunction.toFloat(score), explanation);\n- Explanation scoreExp = new Explanation(subQueryScore.getValue(), \"_score: \");\n- scoreExp.addDetail(subQueryScore);\n- exp.addDetail(scoreExp);\n+ Explanation scoreExp = Explanation.match(\n+ subQueryScore.getValue(), \"_score: \",\n+ subQueryScore);\n+ return Explanation.match(\n+ CombineFunction.toFloat(score), explanation,\n+ scoreExp);\n }\n return exp;\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.common.lucene.search.function;\n \n import org.apache.lucene.index.LeafReaderContext;\n-import org.apache.lucene.search.ComplexExplanation;\n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n \n@@ -65,18 +64,16 @@ public double score(int docId, float subQueryScore) {\n \n @Override\n public Explanation explainScore(int docId, Explanation subQueryScore) throws IOException {\n- Explanation functionScoreExplanation;\n Explanation functionExplanation = leafFunction.explainScore(docId, subQueryScore);\n- functionScoreExplanation = new ComplexExplanation(true, functionExplanation.getValue() * (float) getWeight(), \"product of:\");\n- functionScoreExplanation.addDetail(functionExplanation);\n- functionScoreExplanation.addDetail(explainWeight());\n- return functionScoreExplanation;\n+ return Explanation.match(\n+ functionExplanation.getValue() * (float) getWeight(), \"product of:\",\n+ functionExplanation, explainWeight());\n }\n };\n }\n \n public Explanation explainWeight() {\n- return new Explanation(getWeight(), \"weight\");\n+ return Explanation.match(getWeight(), \"weight\");\n }\n \n public float getWeight() {\n@@ -99,7 +96,7 @@ public double score(int docId, float subQueryScore) {\n \n @Override\n public Explanation explainScore(int docId, Explanation subQueryScore) {\n- return new Explanation(1.0f, \"constant score 1.0 - no function provided\");\n+ return Explanation.match(1.0f, \"constant score 1.0 - no function provided\");\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import com.google.common.cache.CacheBuilder;\n import com.google.common.cache.RemovalListener;\n import com.google.common.cache.RemovalNotification;\n+\n import org.apache.lucene.index.LeafReader;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.DocIdSet;\n@@ -36,19 +37,19 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lucene.search.NoCacheFilter;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardUtils;\n-import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndicesWarmer;\n import org.elasticsearch.indices.IndicesWarmer.TerminationHandle;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -266,7 +267,7 @@ public IndicesWarmer.TerminationHandle warmNewReaders(final IndexShard indexShar\n }\n \n if (hasNested) {\n- warmUp.add(NonNestedDocsFilter.INSTANCE);\n+ warmUp.add(Queries.newNonNestedFilter());\n }\n \n final Executor executor = threadPool.executor(executor());",
"filename": "src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java",
"status": "modified"
},
{
"diff": "@@ -212,13 +212,13 @@ public String toString(String field) {\n \n @Override\n public boolean equals(Object o) {\n- if (!(o instanceof FilterCacheFilterWrapper)) return false;\n+ if (super.equals(o) == false) return false;\n return this.filter.equals(((FilterCacheFilterWrapper) o).filter);\n }\n \n @Override\n public int hashCode() {\n- return filter.hashCode() ^ 0x1117BF25;\n+ return 31 * super.hashCode() + filter.hashCode();\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/cache/filter/weighted/WeightedFilterCache.java",
"status": "modified"
},
{
"diff": "@@ -36,7 +36,6 @@\n import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.Filter;\n-import org.apache.lucene.search.QueryWrapperFilter;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchGenerationException;\n@@ -61,7 +60,6 @@\n import org.elasticsearch.index.mapper.Mapper.BuilderContext;\n import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.similarity.SimilarityLookupService;\n import org.elasticsearch.indices.InvalidTypeNameException;\n@@ -72,7 +70,6 @@\n import java.io.IOException;\n import java.net.MalformedURLException;\n import java.net.URL;\n-import java.nio.file.Paths;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n@@ -455,10 +452,10 @@ public Filter searchFilter(String... types) {\n if (hasNested && filterPercolateType) {\n BooleanQuery bq = new BooleanQuery();\n bq.add(percolatorType, Occur.MUST_NOT);\n- bq.add(NonNestedDocsFilter.INSTANCE, Occur.MUST);\n+ bq.add(Queries.newNonNestedFilter(), Occur.MUST);\n return Queries.wrap(bq);\n } else if (hasNested) {\n- return NonNestedDocsFilter.INSTANCE;\n+ return Queries.newNonNestedFilter();\n } else if (filterPercolateType) {\n return Queries.wrap(Queries.not(percolatorType));\n } else {\n@@ -523,7 +520,7 @@ public Filter searchFilter(String... types) {\n bool.add(percolatorType, BooleanClause.Occur.MUST_NOT);\n }\n if (hasNested) {\n- bool.add(NonNestedDocsFilter.INSTANCE, BooleanClause.Occur.MUST);\n+ bool.add(Queries.newNonNestedFilter(), BooleanClause.Occur.MUST);\n }\n \n return Queries.wrap(bool);",
"filename": "src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n@@ -35,7 +36,6 @@\n import org.elasticsearch.index.search.child.ChildrenQuery;\n import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n import org.elasticsearch.index.search.child.ScoreType;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.internal.SubSearchContext;\n \n@@ -166,7 +166,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n \n BitDocIdSetFilter nonNestedDocsFilter = null;\n if (parentDocMapper.hasNestedObjects()) {\n- nonNestedDocsFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE);\n+ nonNestedDocsFilter = parseContext.bitsetFilter(Queries.newNonNestedFilter());\n }\n \n Filter parentFilter = parseContext.cacheFilter(parentDocMapper.typeFilter(), null, parseContext.autoFilterCachePolicy());",
"filename": "src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n@@ -36,7 +37,6 @@\n import org.elasticsearch.index.search.child.ChildrenQuery;\n import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n import org.elasticsearch.index.search.child.ScoreType;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.internal.SubSearchContext;\n \n@@ -165,7 +165,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n BitDocIdSetFilter nonNestedDocsFilter = null;\n if (parentDocMapper.hasNestedObjects()) {\n- nonNestedDocsFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE);\n+ nonNestedDocsFilter = parseContext.bitsetFilter(Queries.newNonNestedFilter());\n }\n \n // wrap the query with type query",
"filename": "src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n@@ -31,7 +32,6 @@\n import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n import org.elasticsearch.index.search.child.ScoreType;\n import org.elasticsearch.index.search.child.TopChildrenQuery;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n \n import java.io.IOException;\n \n@@ -128,7 +128,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n BitDocIdSetFilter nonNestedDocsFilter = null;\n if (childDocMapper.hasNestedObjects()) {\n- nonNestedDocsFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE);\n+ nonNestedDocsFilter = parseContext.bitsetFilter(Queries.newNonNestedFilter());\n }\n \n innerQuery.setBoost(boost);",
"filename": "src/main/java/org/elasticsearch/index/query/TopChildrenQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.index.query.functionscore;\n \n import org.apache.lucene.index.LeafReaderContext;\n-import org.apache.lucene.search.ComplexExplanation;\n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n@@ -463,12 +462,10 @@ public double score(int docId, float subQueryScore) {\n \n @Override\n public Explanation explainScore(int docId, Explanation subQueryScore) throws IOException {\n- ComplexExplanation ce = new ComplexExplanation();\n- ce.setValue(CombineFunction.toFloat(score(docId, subQueryScore.getValue())));\n- ce.setMatch(true);\n- ce.setDescription(\"Function for field \" + getFieldName() + \":\");\n- ce.addDetail(func.explainFunction(getDistanceString(ctx, docId), distance.get(docId), scale));\n- return ce;\n+ return Explanation.match(\n+ CombineFunction.toFloat(score(docId, subQueryScore.getValue())),\n+ \"Function for field \" + getFieldName() + \":\",\n+ func.explainFunction(getDistanceString(ctx, docId), distance.get(docId), scale));\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.query.functionscore.exp;\n \n-import org.apache.lucene.search.ComplexExplanation;\n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.index.query.functionscore.DecayFunction;\n import org.elasticsearch.index.query.functionscore.DecayFunctionParser;\n@@ -49,10 +48,9 @@ public double evaluate(double value, double scale) {\n \n @Override\n public Explanation explainFunction(String valueExpl, double value, double scale) {\n- ComplexExplanation ce = new ComplexExplanation();\n- ce.setValue((float) evaluate(value, scale));\n- ce.setDescription(\"exp(- \" + valueExpl + \" * \" + -1 * scale + \")\");\n- return ce;\n+ return Explanation.match(\n+ (float) evaluate(value, scale),\n+ \"exp(- \" + valueExpl + \" * \" + -1 * scale + \")\");\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/exp/ExponentialDecayFunctionParser.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.query.functionscore.gauss;\n \n-import org.apache.lucene.search.ComplexExplanation;\n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.index.query.functionscore.DecayFunction;\n import org.elasticsearch.index.query.functionscore.DecayFunctionParser;\n@@ -45,10 +44,9 @@ public double evaluate(double value, double scale) {\n \n @Override\n public Explanation explainFunction(String valueExpl, double value, double scale) {\n- ComplexExplanation ce = new ComplexExplanation();\n- ce.setValue((float) evaluate(value, scale));\n- ce.setDescription(\"exp(-0.5*pow(\" + valueExpl + \",2.0)/\" + -1 * scale + \")\");\n- return ce;\n+ return Explanation.match(\n+ (float) evaluate(value, scale),\n+ \"exp(-0.5*pow(\" + valueExpl + \",2.0)/\" + -1 * scale + \")\");\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/gauss/GaussDecayFunctionParser.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.query.functionscore.lin;\n \n-import org.apache.lucene.search.ComplexExplanation;\n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.index.query.functionscore.DecayFunction;\n import org.elasticsearch.index.query.functionscore.DecayFunctionParser;\n@@ -49,10 +48,9 @@ public double evaluate(double value, double scale) {\n \n @Override\n public Explanation explainFunction(String valueExpl, double value, double scale) {\n- ComplexExplanation ce = new ComplexExplanation();\n- ce.setValue((float) evaluate(value, scale));\n- ce.setDescription(\"max(0.0, ((\" + scale + \" - \" + valueExpl + \")/\" + scale + \")\");\n- return ce;\n+ return Explanation.match(\n+ (float) evaluate(value, scale),\n+ \"max(0.0, ((\" + scale + \" - \" + valueExpl + \")/\" + scale + \")\");\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/lin/LinearDecayFunctionParser.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -31,7 +32,6 @@\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryParsingException;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n@@ -195,7 +195,7 @@ public ObjectMapper getParentObjectMapper() {\n private void setPathLevel() {\n ObjectMapper objectMapper = parseContext.nestedScope().getObjectMapper();\n if (objectMapper == null) {\n- parentFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE);\n+ parentFilter = parseContext.bitsetFilter(Queries.newNonNestedFilter());\n } else {\n parentFilter = parseContext.bitsetFilter(objectMapper.nestedTypeFilter());\n }",
"filename": "src/main/java/org/elasticsearch/index/query/support/NestedInnerQueryParseSupport.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.search;\n \n import java.io.IOException;\n+import java.util.Objects;\n \n import com.carrotsearch.hppc.DoubleOpenHashSet;\n import com.carrotsearch.hppc.LongOpenHashSet;\n@@ -86,16 +87,19 @@ public static FieldDataTermsFilter newDoubles(IndexNumericFieldData fieldData, D\n @Override\n public boolean equals(Object obj) {\n if (this == obj) return true;\n- if (obj == null || !(obj instanceof FieldDataTermsFilter)) return false;\n+ if (super.equals(obj) == false) return false;\n \n FieldDataTermsFilter that = (FieldDataTermsFilter) obj;\n if (!fieldData.getFieldNames().indexName().equals(that.fieldData.getFieldNames().indexName())) return false;\n- if (this.hashCode() != obj.hashCode()) return false;\n return true;\n }\n \n @Override\n- public abstract int hashCode();\n+ public int hashCode() {\n+ int h = super.hashCode();\n+ h = 31 * h + fieldData.getFieldNames().indexName().hashCode();\n+ return h;\n+ }\n \n /**\n * Filters on non-numeric fields.\n@@ -109,11 +113,17 @@ protected BytesFieldDataFilter(IndexFieldData fieldData, ObjectOpenHashSet<Bytes\n this.terms = terms;\n }\n \n+ @Override\n+ public boolean equals(Object obj) {\n+ if (super.equals(obj) == false) {\n+ return false;\n+ }\n+ return Objects.equals(terms, ((BytesFieldDataFilter) obj).terms);\n+ }\n+\n @Override\n public int hashCode() {\n- int hashcode = fieldData.getFieldNames().indexName().hashCode();\n- hashcode += terms != null ? terms.hashCode() : 0;\n- return hashcode;\n+ return 31 * super.hashCode() + Objects.hashCode(terms);\n }\n \n @Override\n@@ -166,11 +176,17 @@ protected LongsFieldDataFilter(IndexNumericFieldData fieldData, LongOpenHashSet\n this.terms = terms;\n }\n \n+ @Override\n+ public boolean equals(Object obj) {\n+ if (super.equals(obj) == false) {\n+ return false;\n+ }\n+ return Objects.equals(terms, ((BytesFieldDataFilter) obj).terms);\n+ }\n+\n @Override\n public int hashCode() {\n- int hashcode = fieldData.getFieldNames().indexName().hashCode();\n- hashcode += terms != null ? terms.hashCode() : 0;\n- return hashcode;\n+ return 31 * super.hashCode() + Objects.hashCode(terms);\n }\n \n @Override\n@@ -225,11 +241,17 @@ protected DoublesFieldDataFilter(IndexNumericFieldData fieldData, DoubleOpenHash\n this.terms = terms;\n }\n \n+ @Override\n+ public boolean equals(Object obj) {\n+ if (super.equals(obj) == false) {\n+ return false;\n+ }\n+ return Objects.equals(terms, ((BytesFieldDataFilter) obj).terms);\n+ }\n+\n @Override\n public int hashCode() {\n- int hashcode = fieldData.getFieldNames().indexName().hashCode();\n- hashcode += terms != null ? terms.hashCode() : 0;\n- return hashcode;\n+ return 31 * super.hashCode() + Objects.hashCode(terms);\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/search/FieldDataTermsFilter.java",
"status": "modified"
},
{
"diff": "@@ -85,7 +85,7 @@ public final String toString(String field) {\n @Override\n public final boolean equals(Object o) {\n if (this == o) return true;\n- if (!(o instanceof NumericRangeFieldDataFilter)) return false;\n+ if (super.equals(o) == false) return false;\n NumericRangeFieldDataFilter other = (NumericRangeFieldDataFilter) o;\n \n if (!this.indexFieldData.getFieldNames().indexName().equals(other.indexFieldData.getFieldNames().indexName())\n@@ -101,7 +101,8 @@ public final boolean equals(Object o) {\n \n @Override\n public final int hashCode() {\n- int h = indexFieldData.getFieldNames().indexName().hashCode();\n+ int h = super.hashCode();\n+ h = 31 * h + indexFieldData.getFieldNames().indexName().hashCode();\n h ^= (lowerVal != null) ? lowerVal.hashCode() : 550356204;\n h = (h << 1) | (h >>> 31); // rotate to distinguish lower from upper\n h ^= (upperVal != null) ? upperVal.hashCode() : -1674416163;",
"filename": "src/main/java/org/elasticsearch/index/search/NumericRangeFieldDataFilter.java",
"status": "modified"
},
{
"diff": "@@ -40,7 +40,6 @@\n import org.apache.lucene.util.LongBitSet;\n import org.elasticsearch.common.lucene.docset.DocIdSets;\n import org.elasticsearch.common.lucene.search.NoopCollector;\n-import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.index.fielddata.AtomicParentChildFieldData;\n import org.elasticsearch.index.fielddata.IndexParentChildFieldData;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -202,7 +201,7 @@ public void extractTerms(Set<Term> terms) {\n \n @Override\n public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n- return new Explanation(getBoost(), \"not implemented yet...\");\n+ return Explanation.match(getBoost(), \"not implemented yet...\");\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -36,14 +36,12 @@\n import org.apache.lucene.search.XFilteredDocIdSetIterator;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.util.Bits;\n-import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.ToStringUtils;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.docset.DocIdSets;\n import org.elasticsearch.common.lucene.search.NoopCollector;\n-import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.FloatArray;\n import org.elasticsearch.common.util.IntArray;\n@@ -264,7 +262,7 @@ public void extractTerms(Set<Term> terms) {\n \n @Override\n public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n- return new Explanation(getBoost(), \"not implemented yet...\");\n+ return Explanation.match(getBoost(), \"not implemented yet...\");\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/search/child/ChildrenQuery.java",
"status": "modified"
},
{
"diff": "@@ -22,12 +22,20 @@\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.SortedDocValues;\n import org.apache.lucene.index.Term;\n-import org.apache.lucene.search.*;\n+import org.apache.lucene.search.BooleanQuery;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.search.Filter;\n+import org.apache.lucene.search.FilteredDocIdSetIterator;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.search.Weight;\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.LongBitSet;\n import org.elasticsearch.common.lucene.docset.DocIdSets;\n import org.elasticsearch.common.lucene.search.NoopCollector;\n-import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.index.fielddata.AtomicParentChildFieldData;\n import org.elasticsearch.index.fielddata.IndexParentChildFieldData;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n@@ -166,7 +174,7 @@ public void extractTerms(Set<Term> terms) {\n \n @Override\n public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n- return new Explanation(getBoost(), \"not implemented yet...\");\n+ return Explanation.match(getBoost(), \"not implemented yet...\");\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/search/child/ParentConstantScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -195,4 +195,24 @@ public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws I\n public String toString(String field) {\n return \"parentsFilter(type=\" + parentTypeBr.utf8ToString() + \")\";\n }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (super.equals(obj) == false) {\n+ return false;\n+ }\n+ ParentIdsFilter other = (ParentIdsFilter) obj;\n+ return parentTypeBr.equals(other.parentTypeBr)\n+ && parentIds.equals(other.parentIds)\n+ && nonNestedDocsFilter.equals(nonNestedDocsFilter);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ int h = super.hashCode();\n+ h = 31 * h + parentTypeBr.hashCode();\n+ h = 31 * h + parentIds.hashCode();\n+ h = 31 * h + nonNestedDocsFilter.hashCode();\n+ return h;\n+ }\n }\n\\ No newline at end of file",
"filename": "src/main/java/org/elasticsearch/index/search/child/ParentIdsFilter.java",
"status": "modified"
},
{
"diff": "@@ -18,17 +18,27 @@\n */\n package org.elasticsearch.index.search.child;\n \n-import org.apache.lucene.index.*;\n-import org.apache.lucene.search.*;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.SortedDocValues;\n+import org.apache.lucene.index.SortedSetDocValues;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.BooleanQuery;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.search.Filter;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.search.Weight;\n import org.apache.lucene.util.Bits;\n-import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.ToStringUtils;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.docset.DocIdSets;\n import org.elasticsearch.common.lucene.search.NoopCollector;\n-import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.FloatArray;\n import org.elasticsearch.common.util.LongHash;\n@@ -232,7 +242,7 @@ public void extractTerms(Set<Term> terms) {\n \n @Override\n public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n- return new Explanation(getBoost(), \"not implemented yet...\");\n+ return Explanation.match(getBoost(), \"not implemented yet...\");\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/search/child/ParentQuery.java",
"status": "modified"
},
{
"diff": "@@ -368,7 +368,7 @@ public float score() throws IOException {\n \n @Override\n public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n- return new Explanation(getBoost(), \"not implemented yet...\");\n+ return Explanation.match(getBoost(), \"not implemented yet...\");\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/search/child/TopChildrenQuery.java",
"status": "modified"
}
]
} |
{
"body": "```\nshell> curl -XPUT -d '{\"number_of_shards\":-100}' http://localhost:9200/asdf?pretty\n{\n \"acknowledged\" : true\n}\nshell> curl http://localhost:9200/_cat/indices?v\nhealth status index pri rep docs.count docs.deleted store.size pri.store.size\nred open asdf -100 1\nshell> time curl -XDELETE http://localhost:9200/asdf?pretty\n{\n \"acknowledged\" : false\n}\n\nreal 0m30.013s\nuser 0m0.002s\nsys 0m0.007s\nshell> curl http://localhost:9200/_cat/indices?v\nhealth status index pri rep docs.count docs.deleted store.size pri.store.size\nshell> curl http://localhost:9200/_cat/nodes?h=version\n1.5.1\n```\n",
"comments": [
{
"body": "@sorear Thanks reporting!\nI reproduced it.\n",
"created_at": "2015-04-21T10:55:21Z"
},
{
"body": "related #7495\n",
"created_at": "2015-04-21T10:56:36Z"
}
],
"number": 10693,
"title": "Can create index with negative shard count, deletion appears to fail (but actually succeeds)"
} | {
"body": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nnormalize settings before validating\n\nCloses #10693\n\nNow, this PR check CreateIndexRequest only. I think we should check other API related number_of_shards/number_of_replicas.\n",
"number": 10701,
"review_comments": [
{
"body": "I think I would move all settings validations including custom path settings validation above into a `validateIndexSettings` method and then we can call this method from the RestoreService. We will have an unnecessary validation for the number of shards there, but it's a small price to pay for not missing any other settings validations that will be added in the future.\n",
"created_at": "2015-04-22T14:11:19Z"
},
{
"body": "I think it would sound better if you changed it to \"--> try restoring while changing the number of replicas to a negative number - should fail\"\n",
"created_at": "2015-04-22T14:14:08Z"
},
{
"body": "Would it make sense to include this error into the validationErrors list as well?\n",
"created_at": "2015-04-23T14:30:49Z"
},
{
"body": "Yes.\n",
"created_at": "2015-04-23T14:59:01Z"
},
{
"body": "Unused\n",
"created_at": "2015-04-23T15:16:17Z"
},
{
"body": "Unused\n",
"created_at": "2015-04-23T15:16:23Z"
}
],
"title": "Validate number_of_shards/_replicas without index setting prefix"
} | {
"commits": [
{
"message": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nnormalize settings before validating\nAdd test to update invalid number_of_replicas\n\nCloses #10693"
},
{
"message": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nMove the validation logic to MetaDataCreateIndexService\nAdd ShardClusterSnapshotRestoreTests\nAdd the validation to RestoreService\n\nCloses #10693"
},
{
"message": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nChange and merge validation logic\nFix some comments\n\nCloses #10693"
},
{
"message": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nUse snapshotIndexMetaData.settings instread of request.indexSettings\n\nCloses #10693"
},
{
"message": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nChange validateIndexSettings throw IndexCreationException\n\nCloses #10693"
},
{
"message": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nInclude customPath validation at once\n\nCloses #10693"
},
{
"message": "Settings: validate number_of_shards/number_of_replicas without index setting prefix\n\nRemove unused import\n\nCloses #10693"
}
],
"files": [
{
"diff": "@@ -106,14 +106,6 @@ public ActionRequestValidationException validate() {\n if (index == null) {\n validationException = addValidationError(\"index is missing\", validationException);\n }\n- Integer number_of_primaries = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);\n- Integer number_of_replicas = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null);\n- if (number_of_primaries != null && number_of_primaries <= 0) {\n- validationException = addValidationError(\"index must have 1 or more primary shards\", validationException);\n- }\n- if (number_of_replicas != null && number_of_replicas < 0) {\n- validationException = addValidationError(\"index must have 0 or more replica shards\", validationException);\n- }\n return validationException;\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java",
"status": "modified"
},
{
"diff": "@@ -338,8 +338,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n if (request.index().equals(ScriptService.SCRIPT_INDEX)) {\n indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, 0));\n indexSettingsBuilder.put(SETTING_AUTO_EXPAND_REPLICAS, \"0-all\");\n- }\n- else {\n+ } else {\n if (indexSettingsBuilder.get(SETTING_NUMBER_OF_REPLICAS) == null) {\n if (request.index().equals(riverIndexName)) {\n indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, 1));\n@@ -426,7 +425,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n }\n for (Alias alias : request.aliases()) {\n AliasMetaData aliasMetaData = AliasMetaData.builder(alias.name()).filter(alias.filter())\n- .indexRouting(alias.indexRouting()).searchRouting(alias.searchRouting()).build();\n+ .indexRouting(alias.indexRouting()).searchRouting(alias.searchRouting()).build();\n indexMetaDataBuilder.putAlias(aliasMetaData);\n }\n \n@@ -445,11 +444,11 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n }\n \n indexService.indicesLifecycle().beforeIndexAddedToCluster(new Index(request.index()),\n- indexMetaData.settings());\n+ indexMetaData.settings());\n \n MetaData newMetaData = MetaData.builder(currentState.metaData())\n- .put(indexMetaData, false)\n- .build();\n+ .put(indexMetaData, false)\n+ .build();\n \n logger.info(\"[{}] creating index, cause [{}], templates {}, shards [{}]/[{}], mappings {}\", request.index(), request.cause(), templateNames, indexMetaData.numberOfShards(), indexMetaData.numberOfReplicas(), mappings.keySet());\n \n@@ -467,7 +466,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n \n if (request.state() == State.OPEN) {\n RoutingTable.Builder routingTableBuilder = RoutingTable.builder(updatedState.routingTable())\n- .addAsNew(updatedState.metaData().index(request.index()));\n+ .addAsNew(updatedState.metaData().index(request.index()));\n RoutingAllocation.Result routingResult = allocationService.reroute(ClusterState.builder(updatedState).routingTable(routingTableBuilder).build());\n updatedState = ClusterState.builder(updatedState).routingResult(routingResult).build();\n }\n@@ -554,11 +553,37 @@ public int compare(IndexTemplateMetaData o1, IndexTemplateMetaData o2) {\n \n private void validate(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws ElasticsearchException {\n validateIndexName(request.index(), state);\n- String customPath = request.settings().get(IndexMetaData.SETTING_DATA_PATH, null);\n+ validateIndexSettings(request.index(), request.settings());\n+ }\n+\n+ public void validateIndexSettings(String indexName, Settings settings) throws IndexCreationException {\n+ String customPath = settings.get(IndexMetaData.SETTING_DATA_PATH, null);\n+ List<String> validationErrors = Lists.newArrayList();\n if (customPath != null && nodeEnv.isCustomPathsEnabled() == false) {\n- throw new IndexCreationException(new Index(request.index()),\n- new ElasticsearchIllegalArgumentException(\"custom data_paths for indices is disabled\"));\n+ validationErrors.add(\"custom data_paths for indices is disabled\");\n+ }\n+ Integer number_of_primaries = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);\n+ Integer number_of_replicas = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null);\n+ if (number_of_primaries != null && number_of_primaries <= 0) {\n+ validationErrors.add(\"index 
must have 1 or more primary shards\");\n+ }\n+ if (number_of_replicas != null && number_of_replicas < 0) {\n+ validationErrors.add(\"index must have 0 or more replica shards\");\n+ }\n+ if (validationErrors.isEmpty() == false) {\n+ throw new IndexCreationException(new Index(indexName),\n+ new ElasticsearchIllegalArgumentException(getMessage(validationErrors)));\n+ }\n+ }\n+\n+ private String getMessage(List<String> validationErrors) {\n+ StringBuilder sb = new StringBuilder();\n+ sb.append(\"Validation Failed: \");\n+ int index = 0;\n+ for (String error : validationErrors) {\n+ sb.append(++index).append(\": \").append(error).append(\";\");\n }\n+ return sb.toString();\n }\n \n private static class DefaultIndexTemplateFilter implements IndexTemplateFilter {",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -190,6 +190,7 @@ public ClusterState execute(ClusterState currentState) {\n // Index doesn't exist - create it and start recovery\n // Make sure that the index we are about to create has a validate name\n createIndexService.validateIndexName(renamedIndex, currentState);\n+ createIndexService.validateIndexSettings(renamedIndex, snapshotIndexMetaData.settings());\n IndexMetaData.Builder indexMdBuilder = IndexMetaData.builder(snapshotIndexMetaData).state(IndexMetaData.State.OPEN).index(renamedIndex);\n indexMdBuilder.settings(ImmutableSettings.settingsBuilder().put(snapshotIndexMetaData.settings()).put(IndexMetaData.SETTING_UUID, Strings.randomBase64UUID()));\n if (!request.includeAliases() && !snapshotIndexMetaData.aliases().isEmpty()) {",
"filename": "src/main/java/org/elasticsearch/snapshots/RestoreService.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n-import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -106,38 +106,76 @@ public void testDoubleAddMapping() throws Exception {\n public void testInvalidShardCountSettings() throws Exception {\n try {\n prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n- .build())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n+ .build())\n .get();\n fail(\"should have thrown an exception about the primary shard count\");\n- } catch (ActionRequestValidationException e) {\n+ } catch (ElasticsearchIllegalArgumentException e) {\n assertThat(\"message contains error about shard count: \" + e.getMessage(),\n e.getMessage().contains(\"index must have 1 or more primary shards\"), equalTo(true));\n }\n \n try {\n prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n- .build())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n+ .build())\n .get();\n fail(\"should have thrown an exception about the replica shard count\");\n- } catch (ActionRequestValidationException e) {\n+ } catch (ElasticsearchIllegalArgumentException e) {\n assertThat(\"message contains error about shard count: \" + e.getMessage(),\n e.getMessage().contains(\"index must have 0 or more replica shards\"), equalTo(true));\n }\n \n try {\n prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n- .build())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n+ .build())\n .get();\n fail(\"should have thrown an exception about the shard count\");\n- } catch (ActionRequestValidationException e) {\n+ } catch (ElasticsearchIllegalArgumentException e) {\n assertThat(\"message contains error about shard count: \" + e.getMessage(),\n e.getMessage().contains(\"index must have 1 or more primary shards\"), equalTo(true));\n assertThat(\"message contains error about shard count: \" + e.getMessage(),\n- e.getMessage().contains(\"index must have 0 or more replica shards\"), equalTo(true));\n+ e.getMessage().contains(\"index must have 0 or more replica shards\"), equalTo(true));\n }\n }\n+\n+ @Test\n+ public void testInvalidShardCountSettingsWithoutPrefix() throws Exception {\n+ try {\n+ prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS.substring(IndexMetaData.INDEX_SETTING_PREFIX.length()), randomIntBetween(-10, 0))\n+ .build())\n+ .get();\n+ fail(\"should have thrown an exception about the shard count\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 1 or more primary shards\"), equalTo(true));\n+ }\n+ try {\n+ prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ 
.put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS.substring(IndexMetaData.INDEX_SETTING_PREFIX.length()), randomIntBetween(-10, -1))\n+ .build())\n+ .get();\n+ fail(\"should have thrown an exception about the shard count\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 0 or more replica shards\"), equalTo(true));\n+ }\n+ try {\n+ prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS.substring(IndexMetaData.INDEX_SETTING_PREFIX.length()), randomIntBetween(-10, 0))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS.substring(IndexMetaData.INDEX_SETTING_PREFIX.length()), randomIntBetween(-10, -1))\n+ .build())\n+ .get();\n+ fail(\"should have thrown an exception about the shard count\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 1 or more primary shards\"), equalTo(true));\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 0 or more replica shards\"), equalTo(true));\n+ }\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexTests.java",
"status": "modified"
},
{
"diff": "@@ -19,10 +19,13 @@\n \n package org.elasticsearch.indices.settings;\n \n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.count.CountResponse;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n@@ -263,4 +266,20 @@ public void testAutoExpandNumberReplicas2() {\n assertThat(clusterHealth.getIndices().get(\"test\").getNumberOfReplicas(), equalTo(3));\n assertThat(clusterHealth.getIndices().get(\"test\").getActiveShards(), equalTo(numShards.numPrimaries * 4));\n }\n+\n+ @Test\n+ public void testUpdateWithInvalidNumberOfReplicas() {\n+ createIndex(\"test\");\n+ try {\n+ client().admin().indices().prepareUpdateSettings(\"test\")\n+ .setSettings(ImmutableSettings.settingsBuilder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n+ )\n+ .execute().actionGet();\n+ fail(\"should have thrown an exception about the replica shard count\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"the value of the setting index.number_of_replicas must be a non negative integer\"), equalTo(true));\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/indices/settings/UpdateNumberOfReplicasTests.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ListenableActionFuture;\n import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;\n@@ -56,7 +57,6 @@\n import org.elasticsearch.indices.InvalidIndexNameException;\n import org.elasticsearch.repositories.RepositoriesService;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n-import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n import java.nio.channels.SeekableByteChannel;\n@@ -1637,6 +1637,17 @@ public void changeSettingsOnRestoreTest() throws Exception {\n .setIndexSettings(newIncorrectIndexSettings)\n .setWaitForCompletion(true), SnapshotRestoreException.class);\n \n+ logger.info(\"--> try restoring while changing the number of replicas to a negative number - should fail\");\n+ Settings newIncorrectReplicasIndexSettings = ImmutableSettings.builder()\n+ .put(newIndexSettings)\n+ .put(SETTING_NUMBER_OF_REPLICAS.substring(IndexMetaData.INDEX_SETTING_PREFIX.length()), randomIntBetween(-10, -1))\n+ .build();\n+ assertThrows(client.admin().cluster()\n+ .prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n+ .setIgnoreIndexSettings(\"index.analysis.*\")\n+ .setIndexSettings(newIncorrectReplicasIndexSettings)\n+ .setWaitForCompletion(true), ElasticsearchIllegalArgumentException.class);\n+\n logger.info(\"--> restore index with correct settings from the snapshot\");\n RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster()\n .prepareRestoreSnapshot(\"test-repo\", \"test-snap\")",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
{
"body": "I saw this by adding assertingcodec to the mix in our tests:\n\nFAILURE 0.55s | TopHitsTests.testNestedFetchFeatures <<<\n\n> Throwable #1: java.lang.AssertionError: Hit count is 1 but 2 was expected. Total shards: 9 Successful shards: 8 & 1 shard failures:\n> shard [[qcEkX24CTsSgeHjwon4SYA][articles][5]], reason [ElasticsearchException[target must be > docID(), got 1 <= 3]; nested: AssertionError[target must be > docID(), got 1 <= 3]; ]\n> at __randomizedtesting.SeedInfo.seed([EEBE1D571C1FD8E9:9F4B07A5A1F77FA4]:0)\n> at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount(ElasticsearchAssertions.java:145)\n> at org.elasticsearch.search.aggregations.bucket.TopHitsTests.testNestedFetchFeatures(TopHitsTests.java:804)\n> at java.lang.Thread.run(Thread.java:745)\n",
"comments": [
{
"body": "I also hit this with SimpleNestedTests.simpleNestedMatchQueries() with AssertingCodec\n",
"created_at": "2015-04-19T17:44:38Z"
},
{
"body": "The failure in the TopHits is caused by the nested aggregator. The TopHitsAggregator doesn't invoke advance by itself. (the combination is being tested here)\n",
"created_at": "2015-04-19T21:55:31Z"
},
{
"body": "@martijnvg can you fix this?\n",
"created_at": "2015-04-20T09:49:10Z"
},
{
"body": "@s1monw sure, I'll fix this.\n",
"created_at": "2015-04-20T09:59:23Z"
},
{
"body": "@martijnvg SimpleNestedTests still has an `AwaitsFix` with this bug url, should it be removed?\n",
"created_at": "2015-06-25T14:02:38Z"
},
{
"body": "@jpountz yes, it should! I guess I forgot that. the test needs to be changed a bit too... the matched queries should be asserted on the inner hits instead of the root hit.\n",
"created_at": "2015-06-25T18:59:32Z"
}
],
"number": 10661,
"title": "TopHits advance()'s backwards"
} | {
"body": "Because the fetch phase now has nested doc support, the logic that deals with detecting if a named nested query/filter matches with a hit can be removed.\n\nPR for #10661\n",
"number": 10694,
"review_comments": [],
"title": "Matched queries: Remove redundant and broken code"
} | {
"commits": [
{
"message": "matched queries: Remove redundant and broken code\n\nBecause the fetch phase now has nested doc, the logic that deals with detecting if a named nested query/filter matches with a hit can be removed.\n\nCloses #10661"
}
],
"files": [
{
"diff": "@@ -20,17 +20,13 @@\n \n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Lists;\n-import org.apache.lucene.index.Term;\n-import org.apache.lucene.queries.TermFilter;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.lucene.docset.DocIdSets;\n-import org.elasticsearch.index.mapper.Uid;\n-import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n import org.elasticsearch.search.internal.InternalSearchHit;\n@@ -71,16 +67,10 @@ public void hitExecute(SearchContext context, HitContext hitContext) throws Elas\n List<String> matchedQueries = Lists.newArrayListWithCapacity(2);\n \n try {\n- DocIdSet docAndNestedDocsIdSet = null;\n- if (context.mapperService().documentMapper(hitContext.hit().type()).hasNestedObjects()) {\n- // Both main and nested Lucene docs have a _uid field\n- Filter docAndNestedDocsFilter = new TermFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(hitContext.hit().type(), hitContext.hit().id())));\n- docAndNestedDocsIdSet = docAndNestedDocsFilter.getDocIdSet(hitContext.readerContext(), null);\n- }\n- addMatchedQueries(hitContext, context.parsedQuery().namedFilters(), matchedQueries, docAndNestedDocsIdSet);\n+ addMatchedQueries(hitContext, context.parsedQuery().namedFilters(), matchedQueries);\n \n if (context.parsedPostFilter() != null) {\n- addMatchedQueries(hitContext, context.parsedPostFilter().namedFilters(), matchedQueries, docAndNestedDocsIdSet);\n+ addMatchedQueries(hitContext, context.parsedPostFilter().namedFilters(), matchedQueries);\n }\n } catch (IOException e) {\n throw ExceptionsHelper.convertToElastic(e);\n@@ -91,41 +81,24 @@ public void hitExecute(SearchContext context, HitContext hitContext) throws Elas\n hitContext.hit().matchedQueries(matchedQueries.toArray(new String[matchedQueries.size()]));\n }\n \n- private void addMatchedQueries(HitContext hitContext, ImmutableMap<String, Filter> namedFiltersAndQueries, List<String> matchedQueries, DocIdSet docAndNestedDocsIdSet) throws IOException {\n+ private void addMatchedQueries(HitContext hitContext, ImmutableMap<String, Filter> namedFiltersAndQueries, List<String> matchedQueries) throws IOException {\n for (Map.Entry<String, Filter> entry : namedFiltersAndQueries.entrySet()) {\n String name = entry.getKey();\n Filter filter = entry.getValue();\n \n DocIdSet filterDocIdSet = filter.getDocIdSet(hitContext.readerContext(), null); // null is fine, since we filter by hitContext.docId()\n if (!DocIdSets.isEmpty(filterDocIdSet)) {\n- if (!DocIdSets.isEmpty(docAndNestedDocsIdSet)) {\n- DocIdSetIterator filterIterator = filterDocIdSet.iterator();\n- DocIdSetIterator docAndNestedDocsIterator = docAndNestedDocsIdSet.iterator();\n- if (filterIterator != null && docAndNestedDocsIterator != null) {\n- int matchedDocId = -1;\n- for (int docId = docAndNestedDocsIterator.nextDoc(); docId < DocIdSetIterator.NO_MORE_DOCS; docId = docAndNestedDocsIterator.nextDoc()) {\n- if (docId != matchedDocId) {\n- matchedDocId = filterIterator.advance(docId);\n- }\n- if (matchedDocId == docId) {\n- matchedQueries.add(name);\n- break;\n- }\n- }\n+ Bits bits = filterDocIdSet.bits();\n+ if (bits != null) {\n+ if 
(bits.get(hitContext.docId())) {\n+ matchedQueries.add(name);\n }\n } else {\n- Bits bits = filterDocIdSet.bits();\n- if (bits != null) {\n- if (bits.get(hitContext.docId())) {\n+ DocIdSetIterator iterator = filterDocIdSet.iterator();\n+ if (iterator != null) {\n+ if (iterator.advance(hitContext.docId()) == hitContext.docId()) {\n matchedQueries.add(name);\n }\n- } else {\n- DocIdSetIterator iterator = filterDocIdSet.iterator();\n- if (iterator != null) {\n- if (iterator.advance(hitContext.docId()) == hitContext.docId()) {\n- matchedQueries.add(name);\n- }\n- }\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/search/fetch/matchedqueries/MatchedQueriesFetchSubPhase.java",
"status": "modified"
},
{
"diff": "@@ -63,13 +63,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.emptyArray;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.not;\n-import static org.hamcrest.Matchers.notNullValue;\n-import static org.hamcrest.Matchers.nullValue;\n-import static org.hamcrest.Matchers.sameInstance;\n+import static org.hamcrest.Matchers.*;\n \n /**\n *\n@@ -776,7 +770,7 @@ public void testTopHitsInSecondLayerNested() throws Exception {\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n }\n \n- @Test @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/10661\")\n+ @Test\n public void testNestedFetchFeatures() {\n String hlType = randomFrom(\"plain\", \"fvh\", \"postings\");\n HighlightBuilder.Field hlField = new HighlightBuilder.Field(\"comments.message\")\n@@ -826,7 +820,7 @@ public void testNestedFetchFeatures() {\n assertThat(version, equalTo(1l));\n \n // Can't use named queries for the same reason explain doesn't work:\n- assertThat(searchHit.matchedQueries(), emptyArray());\n+ assertThat(searchHit.matchedQueries(), arrayContaining(\"test\"));\n \n SearchHitField field = searchHit.field(\"comments.user\");\n assertThat(field.getValue().toString(), equalTo(\"a\"));",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/TopHitsTests.java",
"status": "modified"
}
]
} |
{
"body": "The docs at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-geo-shape-filter.html state that the `geo_shape` filter will \"...use the same PrefixTree configuration as defined for the field mapping.\" Unfortunately at query time the `precision` and `tree_levels` parameters are overridden by the `distance_error_pct` This parameter is used as a percentage of the shortest diagonal distance (in degrees) of the bounding box for the defined filter shape and has the unfortunate side effect of degrading accuracy for larger filters (worst at the equator). This leads to the results containing a number of false positives. \n\nAn example of this can be seen at the following gist https://gist.github.com/nknize/abbcb87f091b891f85e1\n\nA work around is to set the `distance_error_pct` to something reasonable based on the size of your filter and location of your data. For the worst case, (i.e., you expect a LOT of global filtering over global data spanning the map - and majority is at the equator) then set `distance_error_pct` to 0 and `tree_levels` to 32. Doing this will force the filter to use a tree level of 32 which will give ~1m accuracy for results at the equator. Note that this will also increase memory usage and the overall size of the index.\n",
"comments": [
{
"body": "> Unfortunately at query time the precision and tree_levels parameters are overridden by the distance_error_pct\n\nTo me _query time_ suggests when you're POST'ing to `:9200/index/_search` but the docs give no indication that `distance_error_pct` is a valid property for a [GeoShape query](https://www.elastic.co/guide/en/elasticsearch/guide/1.x/querying-geo-shapes.html). `distance_error_pct` is discussed in the section on indexing.\n",
"created_at": "2016-04-11T17:23:30Z"
}
],
"number": 9691,
"title": "[GEO] geoshape filter ignores tree_levels and precision parameter mappings"
} | {
"body": "If a user explicitly defined the tree_level or precision parameter in a geo_shape mapping their precision was always overridden by the distance_error_pct parameter (even though our docs say this parameter is a 'hint'). This lead to unexpected accuracy problems (e.g., false positives) in the results of a geo_shape filter. (example provided in issue #9691)\n\nThis patch fixes this unexpected behavior by setting the distance_error_pct parameter to zero when the tree_level or precision parameters are provided by the user, but the distance_error_pct parameter is not. This enables a user to explicitly specify a precision and an error consciously knowing how the error factor will affect query results.\n\nUnder the covers the quadtree will now guarantee the precision defined by the user, eliminating this explanation that false positives are \"like text based stemming\". The docs will be updated to alert the user to exercise caution with these parameters. Specifying a precision of \"1m\" for an index using large complex shapes can use a significant amount of memory.\n\ncloses #9691\n",
"number": 10679,
"review_comments": [
{
"body": "Do you mean it will _default_ to 0?\n",
"created_at": "2015-04-20T22:05:30Z"
},
{
"body": "Did you mean to make this \"if (simulate) return\" instead of removing the simulate check altogether?\n",
"created_at": "2015-04-20T22:09:55Z"
},
{
"body": "Or maybe you meant to add the simulate check here? Otherwise this members would be modified when simulating (simulation is like the \"validation\" pass in mappings merging)\n",
"created_at": "2015-04-20T22:12:52Z"
},
{
"body": "exactly. Updated wording.\n",
"created_at": "2015-04-21T12:11:47Z"
},
{
"body": "++ moved the simulate conditional\n",
"created_at": "2015-04-21T12:11:50Z"
}
],
"title": "[GEO] Update tree_level and precision parameter priorities"
} | {
"commits": [
{
"message": "[GEO] Prioritize tree_level and precision parameters over default distance_error_pct\n\nIf a user explicitly defined the tree_level or precision parameter in a geo_shape mapping their specification was always overridden by the default_error_pct parameter (even though our docs say this parameter is a 'hint'). This lead to unexpected accuracy problems in the results of a geo_shape filter. (example provided in issue #9691)\n\nThis simple patch fixes the unexpected behavior by setting the default distance_error_pct parameter to zero when the tree_level or precision parameters are provided by the user. Under the covers the quadtree will now use the tree level defined by the user. The docs will be updated to alert the user to exercise caution with these parameters. Specifying a precision of \"1m\" for an index using large complex shapes can quickly lead to OOM issues.\n\ncloses #9691"
}
],
"files": [
{
"diff": "@@ -46,7 +46,13 @@ via the mapping API even if you use the precision parameter.\n \n |`distance_error_pct` |Used as a hint to the PrefixTree about how\n precise it should be. Defaults to 0.025 (2.5%) with 0.5 as the maximum\n-supported value.\n+supported value. PERFORMANCE NOTE: This value will be default to 0 if a `precision` or\n+`tree_level` definition is explicitly defined. This guarantees spatial precision\n+at the level defined in the mapping. This can lead to significant memory usage\n+for high resolution shapes with low error (e.g., large shapes at 1m with < 0.001 error).\n+To improve indexing performance (at the cost of query accuracy) explicitly define\n+`tree_level` or `precision` along with a reasonable `distance_error_pct`, noting\n+that large shapes will have greater false positives.\n \n |`orientation` |Optionally define how to interpret vertex order for\n polygons / multipolygons. This parameter defines one of two coordinate",
"filename": "docs/reference/mapping/types/geo-shape-type.asciidoc",
"status": "modified"
},
{
"diff": "@@ -114,6 +114,7 @@ public static class Builder extends AbstractFieldMapper.Builder<Builder, GeoShap\n private int treeLevels = 0;\n private double precisionInMeters = -1;\n private double distanceErrorPct = Defaults.DISTANCE_ERROR_PCT;\n+ private boolean distErrPctDefined;\n private Orientation orientation = Defaults.ORIENTATION;\n \n private SpatialPrefixTree prefixTree;\n@@ -173,23 +174,27 @@ public GeoShapeFieldMapper build(BuilderContext context) {\n return new GeoShapeFieldMapper(names, prefixTree, strategyName, distanceErrorPct, orientation, fieldType,\n context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo);\n }\n- }\n \n- private static final int getLevels(int treeLevels, double precisionInMeters, int defaultLevels, boolean geoHash) {\n- if (treeLevels > 0 || precisionInMeters >= 0) {\n- return Math.max(treeLevels, precisionInMeters >= 0 ? (geoHash ? GeoUtils.geoHashLevelsForPrecision(precisionInMeters)\n- : GeoUtils.quadTreeLevelsForPrecision(precisionInMeters)) : 0);\n+ private final int getLevels(int treeLevels, double precisionInMeters, int defaultLevels, boolean geoHash) {\n+ if (treeLevels > 0 || precisionInMeters >= 0) {\n+ // if the user specified a precision but not a distance error percent then zero out the distance err pct\n+ // this is done to guarantee precision specified by the user without doing something unexpected under the covers\n+ if (!distErrPctDefined) distanceErrorPct = 0;\n+ return Math.max(treeLevels, precisionInMeters >= 0 ? (geoHash ? GeoUtils.geoHashLevelsForPrecision(precisionInMeters)\n+ : GeoUtils.quadTreeLevelsForPrecision(precisionInMeters)) : 0);\n+ }\n+ return defaultLevels;\n }\n- return defaultLevels;\n }\n \n-\n public static class TypeParser implements Mapper.TypeParser {\n \n @Override\n public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n Builder builder = geoShapeField(name);\n-\n+ // if index was created before 1.6, this conditional should be true (this forces any index created on/or after 1.6 to use 0 for\n+ // the default distanceErrorPct parameter).\n+ builder.distErrPctDefined = parserContext.indexVersionCreated().before(Version.V_1_6_0);\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n String fieldName = Strings.toUnderscoreCase(entry.getKey());\n@@ -205,6 +210,7 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n iterator.remove();\n } else if (Names.DISTANCE_ERROR_PCT.equals(fieldName)) {\n builder.distanceErrorPct(Double.parseDouble(fieldNode.toString()));\n+ builder.distErrPctDefined = true;\n iterator.remove();\n } else if (Names.ORIENTATION.equals(fieldName)) {\n builder.orientation(ShapeBuilder.orientationFromString(fieldNode.toString()));\n@@ -282,40 +288,38 @@ public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappi\n return;\n }\n final GeoShapeFieldMapper fieldMergeWith = (GeoShapeFieldMapper) mergeWith;\n- if (!mergeContext.mergeFlags().simulate()) {\n- final PrefixTreeStrategy mergeWithStrategy = fieldMergeWith.defaultStrategy;\n+ final PrefixTreeStrategy mergeWithStrategy = fieldMergeWith.defaultStrategy;\n \n- // prevent user from changing strategies\n- if (!(this.defaultStrategy.getClass().equals(mergeWithStrategy.getClass()))) {\n- mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different strategy\");\n- }\n+ // prevent user from 
changing strategies\n+ if (!(this.defaultStrategy.getClass().equals(mergeWithStrategy.getClass()))) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different strategy\");\n+ }\n \n- final SpatialPrefixTree grid = this.defaultStrategy.getGrid();\n- final SpatialPrefixTree mergeGrid = mergeWithStrategy.getGrid();\n+ final SpatialPrefixTree grid = this.defaultStrategy.getGrid();\n+ final SpatialPrefixTree mergeGrid = mergeWithStrategy.getGrid();\n \n- // prevent user from changing trees (changes encoding)\n- if (!grid.getClass().equals(mergeGrid.getClass())) {\n- mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different tree\");\n- }\n+ // prevent user from changing trees (changes encoding)\n+ if (!grid.getClass().equals(mergeGrid.getClass())) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different tree\");\n+ }\n \n- // TODO we should allow this, but at the moment levels is used to build bookkeeping variables\n- // in lucene's SpatialPrefixTree implementations, need a patch to correct that first\n- if (grid.getMaxLevels() != mergeGrid.getMaxLevels()) {\n- mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different tree_levels or precision\");\n- }\n+ // TODO we should allow this, but at the moment levels is used to build bookkeeping variables\n+ // in lucene's SpatialPrefixTree implementations, need a patch to correct that first\n+ if (grid.getMaxLevels() != mergeGrid.getMaxLevels()) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different tree_levels or precision\");\n+ }\n \n- // bail if there were merge conflicts\n- if (mergeContext.hasConflicts()) {\n- return;\n- }\n+ // bail if there were merge conflicts\n+ if (mergeContext.hasConflicts() || mergeContext.mergeFlags().simulate()) {\n+ return;\n+ }\n \n- // change distance error percent\n- this.defaultStrategy.setDistErrPct(mergeWithStrategy.getDistErrPct());\n+ // change distance error percent\n+ this.defaultStrategy.setDistErrPct(mergeWithStrategy.getDistErrPct());\n \n- // change orientation - this is allowed because existing dateline spanning shapes\n- // have already been unwound and segmented\n- this.shapeOrientation = fieldMergeWith.shapeOrientation;\n- }\n+ // change orientation - this is allowed because existing dateline spanning shapes\n+ // have already been unwound and segmented\n+ this.shapeOrientation = fieldMergeWith.shapeOrientation;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -173,9 +173,35 @@ public void testLevelPrecisionConfiguration() throws IOException {\n \n assertThat(strategy.getDistErrPct(), equalTo(0.5));\n assertThat(strategy.getGrid(), instanceOf(QuadPrefixTree.class));\n- /* 70m is more precise so it wins */\n+ // 70m is more precise so it wins\n assertThat(strategy.getGrid().getMaxLevels(), equalTo(GeoUtils.quadTreeLevelsForPrecision(70d))); \n }\n+\n+ {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\").startObject(\"location\")\n+ .field(\"type\", \"geo_shape\")\n+ .field(\"tree\", \"quadtree\")\n+ .field(\"tree_levels\", \"26\")\n+ .field(\"precision\", \"70m\")\n+ .endObject().endObject()\n+ .endObject().endObject().string();\n+\n+\n+ DocumentMapper defaultMapper = parser.parse(mapping);\n+ FieldMapper fieldMapper = defaultMapper.mappers().name(\"location\").mapper();\n+ assertThat(fieldMapper, instanceOf(GeoShapeFieldMapper.class));\n+\n+ GeoShapeFieldMapper geoShapeFieldMapper = (GeoShapeFieldMapper) fieldMapper;\n+ PrefixTreeStrategy strategy = geoShapeFieldMapper.defaultStrategy();\n+\n+ // distance_error_pct was not specified so we expect the mapper to take the highest precision between \"precision\" and\n+ // \"tree_levels\" setting distErrPct to 0 to guarantee desired precision\n+ assertThat(strategy.getDistErrPct(), equalTo(0.0));\n+ assertThat(strategy.getGrid(), instanceOf(QuadPrefixTree.class));\n+ // 70m is less precise so it loses\n+ assertThat(strategy.getGrid().getMaxLevels(), equalTo(26));\n+ }\n \n {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n@@ -197,7 +223,7 @@ public void testLevelPrecisionConfiguration() throws IOException {\n \n assertThat(strategy.getDistErrPct(), equalTo(0.5));\n assertThat(strategy.getGrid(), instanceOf(GeohashPrefixTree.class));\n- /* 70m is more precise so it wins */\n+ // 70m is more precise so it wins\n assertThat(strategy.getGrid().getMaxLevels(), equalTo(GeoUtils.geoHashLevelsForPrecision(70d))); \n }\n ",
"filename": "src/test/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapperTests.java",
"status": "modified"
}
]
} |
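
A minimal sketch of the mapping behavior described in the geo_shape row above (PR #10679): the index, type, and field names (`shapes`, `doc`, `location`) are hypothetical, while the `tree`, `tree_levels`, and `precision` values mirror the PR's test case. With this mapping and no explicit `distance_error_pct`, the change treats `distance_error_pct` as 0, so the quadtree honors the more precise of the two settings (26 levels here).

```
# hypothetical index/type/field names; parameter values follow the PR's test mapping
curl -XPUT 'http://localhost:9200/shapes' -d '
{
  "mappings": {
    "doc": {
      "properties": {
        "location": {
          "type": "geo_shape",
          "tree": "quadtree",
          "tree_levels": 26,
          "precision": "70m"
        }
      }
    }
  }
}'
```

If lower accuracy is acceptable, adding an explicit `distance_error_pct` alongside `precision` restores the old speed/accuracy trade-off, at the cost of more false positives on large shapes.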
{
"body": "Step over object fields when creating the nested identify and just use the object fieldname as a prefix for the nested field.\n\nCloses #10629\n",
"comments": [
{
"body": "fixed via #10663\n",
"created_at": "2015-04-30T15:02:49Z"
}
],
"number": 10662,
"title": "inner_hits: Ignore object fields."
} | {
"body": "Only parent filters should use bitset filter cache, to avoid memory being wasted.\n\nCloses #10662 \nCloses #10629\n",
"number": 10663,
"review_comments": [
{
"body": "Do we really need to pass livedocs?\n",
"created_at": "2015-04-19T21:59:02Z"
},
{
"body": "it just felt safe to do this... but I don't we need to, because nested docs can't be modified on their own. Their life cycle depends on the parent doc.\n",
"created_at": "2015-04-19T22:12:29Z"
},
{
"body": "I think its a little confusing, since we explicitly didn't pass bits before. IMO we should just have a code comment\n",
"created_at": "2015-04-20T01:17:52Z"
},
{
"body": "I added a comment regarding this.\n",
"created_at": "2015-04-21T13:57:25Z"
},
{
"body": "Filter.getDocIdSet can return null, can you check that nestedTypeSet is not null?\n",
"created_at": "2015-04-24T10:02:06Z"
},
{
"body": "Also the iterator.\n",
"created_at": "2015-04-24T10:02:58Z"
}
],
"title": "Don't use bitset cache for children filters."
} | {
"commits": [
{
"message": "inner_hits: Don't use bitset cache for children filters.\n\nOnly parent filters should use bitset filter cache, to avoid memory being wasted.\nAlso in case of object fields inline the field name into the nested object,\ninstead of creating an additional (dummy) nested identity.\n\nCloses #10662\nCloses #10629"
}
],
"files": [
{
"diff": "@@ -24,8 +24,9 @@\n import com.google.common.collect.Maps;\n import org.apache.lucene.document.Field;\n import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.Filter;\n-import org.apache.lucene.util.BitDocIdSet;\n import org.elasticsearch.ElasticsearchGenerationException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n@@ -41,36 +42,19 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n-import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.mapper.Mapping.SourceTransform;\n-import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n-import org.elasticsearch.index.mapper.internal.FieldNamesFieldMapper;\n-import org.elasticsearch.index.mapper.internal.IdFieldMapper;\n-import org.elasticsearch.index.mapper.internal.IndexFieldMapper;\n-import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n-import org.elasticsearch.index.mapper.internal.RoutingFieldMapper;\n-import org.elasticsearch.index.mapper.internal.SizeFieldMapper;\n-import org.elasticsearch.index.mapper.internal.SourceFieldMapper;\n-import org.elasticsearch.index.mapper.internal.TTLFieldMapper;\n-import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n-import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n-import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n-import org.elasticsearch.index.mapper.internal.VersionFieldMapper;\n+import org.elasticsearch.index.mapper.internal.*;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.index.mapper.object.RootObjectMapper;\n import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.ScriptService.ScriptType;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Collection;\n-import java.util.HashMap;\n-import java.util.LinkedHashMap;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.concurrent.CopyOnWriteArrayList;\n \n /**\n@@ -352,15 +336,29 @@ public ParsedDocument parse(SourceToParse source, @Nullable ParseListener listen\n /**\n * Returns the best nested {@link ObjectMapper} instances that is in the scope of the specified nested docId.\n */\n- public ObjectMapper findNestedObjectMapper(int nestedDocId, BitsetFilterCache cache, LeafReaderContext context) throws IOException {\n+ public ObjectMapper findNestedObjectMapper(int nestedDocId, SearchContext sc, LeafReaderContext context) throws IOException {\n ObjectMapper nestedObjectMapper = null;\n for (ObjectMapper objectMapper : objectMappers().values()) {\n if (!objectMapper.nested().isNested()) {\n continue;\n }\n \n- BitDocIdSet nestedTypeBitSet = cache.getBitDocIdSetFilter(objectMapper.nestedTypeFilter()).getDocIdSet(context);\n- if (nestedTypeBitSet != null && nestedTypeBitSet.bits().get(nestedDocId)) {\n+ Filter filter = sc.filterCache().cache(objectMapper.nestedTypeFilter(), null, sc.queryParserService().autoFilterCachePolicy());\n+ if (filter == null) {\n+ continue;\n+ }\n+ // We can pass down 'null' as acceptedDocs, because 
nestedDocId is a doc to be fetched and\n+ // therefor is guaranteed to be a live doc.\n+ DocIdSet nestedTypeSet = filter.getDocIdSet(context, null);\n+ if (nestedTypeSet == null) {\n+ continue;\n+ }\n+ DocIdSetIterator iterator = nestedTypeSet.iterator();\n+ if (iterator == null) {\n+ continue;\n+ }\n+\n+ if (iterator.advance(nestedDocId) == nestedDocId) {\n if (nestedObjectMapper == null) {\n nestedObjectMapper = objectMapper;\n } else {",
"filename": "src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -21,9 +21,9 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n-\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.ReaderUtil;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.util.BitDocIdSet;\n@@ -67,12 +67,7 @@\n import org.elasticsearch.search.lookup.SourceLookup;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static org.elasticsearch.common.xcontent.XContentFactory.contentBuilder;\n@@ -288,7 +283,7 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste\n SourceLookup sourceLookup = context.lookup().source();\n sourceLookup.setSegmentAndDocument(subReaderContext, nestedSubDocId);\n \n- ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context.bitsetFilterCache(), subReaderContext);\n+ ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context, subReaderContext);\n assert nestedObjectMapper != null;\n InternalSearchHit.InternalNestedIdentity nestedIdentity = getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, documentMapper, nestedObjectMapper);\n \n@@ -375,38 +370,56 @@ private Map<String, SearchHitField> getSearchFields(SearchContext context, int n\n private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, LeafReaderContext subReaderContext, DocumentMapper documentMapper, ObjectMapper nestedObjectMapper) throws IOException {\n int currentParent = nestedSubDocId;\n ObjectMapper nestedParentObjectMapper;\n+ StringBuilder field = new StringBuilder();\n+ ObjectMapper current = nestedObjectMapper;\n InternalSearchHit.InternalNestedIdentity nestedIdentity = null;\n do {\n- String field;\n Filter parentFilter;\n- nestedParentObjectMapper = documentMapper.findParentObjectMapper(nestedObjectMapper);\n+ nestedParentObjectMapper = documentMapper.findParentObjectMapper(current);\n+ if (field.length() != 0) {\n+ field.insert(0, '.');\n+ }\n+ field.insert(0, current.name());\n if (nestedParentObjectMapper != null) {\n- field = nestedObjectMapper.name();\n- if (!nestedParentObjectMapper.nested().isNested()) {\n- nestedObjectMapper = nestedParentObjectMapper;\n- // all right, the parent is a normal object field, so this is the best identiy we can give for that:\n- nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field, 0, nestedIdentity);\n+ if (nestedParentObjectMapper.nested().isNested() == false) {\n+ current = nestedParentObjectMapper;\n continue;\n }\n parentFilter = nestedParentObjectMapper.nestedTypeFilter();\n } else {\n- field = nestedObjectMapper.fullPath();\n parentFilter = Queries.newNonNestedFilter();\n }\n \n+ Filter childFilter = context.filterCache().cache(nestedObjectMapper.nestedTypeFilter(), null, context.queryParserService().autoFilterCachePolicy());\n+ if (childFilter == null) {\n+ current = nestedParentObjectMapper;\n+ continue;\n+ }\n+ // We can pass down 'null' as acceptedDocs, because we're fetching matched docId that matched in the query phase.\n+ DocIdSet childDocSet = childFilter.getDocIdSet(subReaderContext, null);\n+ if 
(childDocSet == null) {\n+ current = nestedParentObjectMapper;\n+ continue;\n+ }\n+ DocIdSetIterator childIter = childDocSet.iterator();\n+ if (childIter == null) {\n+ current = nestedParentObjectMapper;\n+ continue;\n+ }\n+\n BitDocIdSet parentBitSet = context.bitsetFilterCache().getBitDocIdSetFilter(parentFilter).getDocIdSet(subReaderContext);\n BitSet parentBits = parentBitSet.bits();\n+\n int offset = 0;\n- BitDocIdSet nestedDocsBitSet = context.bitsetFilterCache().getBitDocIdSetFilter(nestedObjectMapper.nestedTypeFilter()).getDocIdSet(subReaderContext);\n- BitSet nestedBits = nestedDocsBitSet.bits();\n int nextParent = parentBits.nextSetBit(currentParent);\n- for (int docId = nestedBits.nextSetBit(currentParent + 1); docId < nextParent && docId != DocIdSetIterator.NO_MORE_DOCS; docId = nestedBits.nextSetBit(docId + 1)) {\n+ for (int docId = childIter.advance(currentParent + 1); docId < nextParent && docId != DocIdSetIterator.NO_MORE_DOCS; docId = childIter.nextDoc()) {\n offset++;\n }\n currentParent = nextParent;\n- nestedObjectMapper = nestedParentObjectMapper;\n- nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field, offset, nestedIdentity);\n- } while (nestedParentObjectMapper != null);\n+ current = nestedObjectMapper = nestedParentObjectMapper;\n+ nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field.toString(), offset, nestedIdentity);\n+ field = new StringBuilder();\n+ } while (current != null);\n return nestedIdentity;\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -867,7 +867,12 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n List<IndexRequestBuilder> requests = new ArrayList<>();\n requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n .field(\"title\", \"quick brown fox\")\n- .startObject(\"comments\").startObject(\"messages\").field(\"message\", \"fox eat quick\").endObject().endObject()\n+ .startObject(\"comments\")\n+ .startArray(\"messages\")\n+ .startObject().field(\"message\", \"fox eat quick\").endObject()\n+ .startObject().field(\"message\", \"bear eat quick\").endObject()\n+ .endArray()\n+ .endObject()\n .endObject()));\n indexRandom(true, requests);\n \n@@ -879,11 +884,40 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"bear\")).innerHit(new QueryInnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+\n+ // index the message in an object form instead of an array\n+ requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").startObject(\"messages\").field(\"message\", \"fox eat quick\").endObject().endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\")).innerHit(new QueryInnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ 
assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"messages\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getChild(), nullValue());\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
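
Restating the scenario covered by the inner-hits test changes above as an HTTP request (a sketch, not the literal Java test): the test indexes an `articles` document whose `comments.messages` array holds two nested messages, then searches for the second one. The `articles`/`article` names come from the test; the comment at the end shows only the nested identity the assertions check.

```
curl -XGET 'http://localhost:9200/articles/article/_search?pretty' -d '
{
  "query": {
    "nested": {
      "path": "comments.messages",
      "query": { "match": { "comments.messages.message": "bear" } },
      "inner_hits": {}
    }
  }
}'

# expected nested identity of the inner hit after the fix:
#   "_nested" : { "field" : "comments.messages", "offset" : 1 }
```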
{
"body": "Hi All\n\nPlease refer to issue <a href=\"https://github.com/elastic/elasticsearch/issues/10334\">10334</a> that was marked as fixed in the latest release (1.5.1). I have tested this today and it still doesnt work correctly. The ArrayOutOfBoundsException is gone, however I now get incorrect data back. Only the first entry is returned regardless of which entry has matched the search request.\n\nTo Replicate: Setup the same template and data as in issue <a href=\"https://github.com/elastic/elasticsearch/issues/10334\">10334</a>:\n\n<pre><code>curl -XPOST 'http://localhost:9200/twitter'\n\ncurl -XPOST 'http://localhost:9200/twitter/_mapping/tweet' -d '\n{\n \"tweet\": {\n \"properties\": {\n \"comments\": {\n \"properties\": {\n \"messages\": {\n \"type\": \"nested\",\n \"properties\": {\n \"message\": {\n \"type\" : \"string\", \n \"index\": \"not_analyzed\"\n } \n }\n } \n }\n }\n }\n }\n}'\n\ncurl -XPOST 'http://localhost:9200/twitter/tweet' -d '\n{\n \"comments\": {\n \"messages\": [\n {\"message\": \"Nice website\"},\n {\"message\": \"Worst ever\"}\n ]\n }\n}'\n\n</code></pre>\n\n\nNow search for the message \"Worst ever\"\n\n<pre><code>\ncurl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty' -d '\n{\n \"query\": {\n \"nested\": {\n \"path\": \"comments.messages\",\n \"query\": {\n \"match\": {\"comments.messages.message\": \"Worst ever\"}\n },\n \"inner_hits\" : {}\n }\n }\n}'\n</code></pre>\n\n\nThe following gets returned:\n\n<pre><code>\n{\n \"took\" : 4,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.4054651,\n \"hits\" : [ {\n \"_index\" : \"twitter\",\n \"_type\" : \"tweet\",\n \"_id\" : \"AUzCMIjnYDWhRGHuJRhC\",\n \"_score\" : 1.4054651,\n \"_source\":\n{\n \"comments\": {\n \"messages\": [\n {\"message\": \"Nice website\"},\n {\"message\": \"Worst ever\"}\n ]\n }\n},\n \"inner_hits\" : {\n \"comments.messages\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.4054651,\n \"hits\" : [ {\n \"_index\" : \"twitter\",\n \"_type\" : \"tweet\",\n \"_id\" : \"AUzCMIjnYDWhRGHuJRhC\",\n \"_nested\" : {\n \"field\" : \"comments\",\n \"offset\" : 0,\n \"_nested\" : {\n \"field\" : \"messages\",\n \"offset\" : 0\n }\n },\n \"_score\" : 1.4054651,\n \"_source\":{\"message\":\"Nice website\"}\n } ]\n }\n }\n }\n } ]\n }\n}\n</code></pre>\n\n\nThis is clearly the incorrect inner hit. As discussed previously with martijnvg a workaround is if I change the object field in my mapping to be nested as well, like this:\n\n<pre><code>\ncurl -XPOST 'http://localhost:9200/twitter/_mapping/tweet' -d '\n{\n \"tweet\": {\n \"properties\": {\n \"comments\": {\n \"type\": \"nested\",\n \"properties\": {\n \"messages\": {\n \"type\": \"nested\",\n \"properties\": {\n \"message\": {\n \"type\" : \"string\", \n \"index\": \"not_analyzed\"\n } \n }\n } \n }\n }\n }\n }\n}'\n</code></pre>\n\n\nhowever doing so results in a larger memory footprint which is undesirable. Also note the use case in #10334 in my second post on the issue that explains why I only need a normal object field containing nested fields.\n",
"comments": [
{
"body": "@mariusdw agreed, that looks incorrect\n",
"created_at": "2015-04-16T12:56:19Z"
},
{
"body": "Fixed via #10663. If a non nested object field the direct parent of a nested object field, its field is inlined with the nested identify of the child nested object. Since a normal object isn't nested using a dummy nested identity didn't make sense. (this was the fix for #10334)\n",
"created_at": "2015-04-30T15:05:33Z"
}
],
"number": 10629,
"title": "Elasticsearch inner_hits query does not return second nested field in object field"
} | {
"body": "Only parent filters should use bitset filter cache, to avoid memory being wasted.\n\nCloses #10662 \nCloses #10629\n",
"number": 10663,
"review_comments": [
{
"body": "Do we really need to pass livedocs?\n",
"created_at": "2015-04-19T21:59:02Z"
},
{
"body": "it just felt safe to do this... but I don't we need to, because nested docs can't be modified on their own. Their life cycle depends on the parent doc.\n",
"created_at": "2015-04-19T22:12:29Z"
},
{
"body": "I think its a little confusing, since we explicitly didn't pass bits before. IMO we should just have a code comment\n",
"created_at": "2015-04-20T01:17:52Z"
},
{
"body": "I added a comment regarding this.\n",
"created_at": "2015-04-21T13:57:25Z"
},
{
"body": "Filter.getDocIdSet can return null, can you check that nestedTypeSet is not null?\n",
"created_at": "2015-04-24T10:02:06Z"
},
{
"body": "Also the iterator.\n",
"created_at": "2015-04-24T10:02:58Z"
}
],
"title": "Don't use bitset cache for children filters."
} | {
"commits": [
{
"message": "inner_hits: Don't use bitset cache for children filters.\n\nOnly parent filters should use bitset filter cache, to avoid memory being wasted.\nAlso in case of object fields inline the field name into the nested object,\ninstead of creating an additional (dummy) nested identity.\n\nCloses #10662\nCloses #10629"
}
],
"files": [
{
"diff": "@@ -24,8 +24,9 @@\n import com.google.common.collect.Maps;\n import org.apache.lucene.document.Field;\n import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.Filter;\n-import org.apache.lucene.util.BitDocIdSet;\n import org.elasticsearch.ElasticsearchGenerationException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n@@ -41,36 +42,19 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n-import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.mapper.Mapping.SourceTransform;\n-import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n-import org.elasticsearch.index.mapper.internal.FieldNamesFieldMapper;\n-import org.elasticsearch.index.mapper.internal.IdFieldMapper;\n-import org.elasticsearch.index.mapper.internal.IndexFieldMapper;\n-import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n-import org.elasticsearch.index.mapper.internal.RoutingFieldMapper;\n-import org.elasticsearch.index.mapper.internal.SizeFieldMapper;\n-import org.elasticsearch.index.mapper.internal.SourceFieldMapper;\n-import org.elasticsearch.index.mapper.internal.TTLFieldMapper;\n-import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n-import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n-import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n-import org.elasticsearch.index.mapper.internal.VersionFieldMapper;\n+import org.elasticsearch.index.mapper.internal.*;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.index.mapper.object.RootObjectMapper;\n import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.ScriptService.ScriptType;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Collection;\n-import java.util.HashMap;\n-import java.util.LinkedHashMap;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.concurrent.CopyOnWriteArrayList;\n \n /**\n@@ -352,15 +336,29 @@ public ParsedDocument parse(SourceToParse source, @Nullable ParseListener listen\n /**\n * Returns the best nested {@link ObjectMapper} instances that is in the scope of the specified nested docId.\n */\n- public ObjectMapper findNestedObjectMapper(int nestedDocId, BitsetFilterCache cache, LeafReaderContext context) throws IOException {\n+ public ObjectMapper findNestedObjectMapper(int nestedDocId, SearchContext sc, LeafReaderContext context) throws IOException {\n ObjectMapper nestedObjectMapper = null;\n for (ObjectMapper objectMapper : objectMappers().values()) {\n if (!objectMapper.nested().isNested()) {\n continue;\n }\n \n- BitDocIdSet nestedTypeBitSet = cache.getBitDocIdSetFilter(objectMapper.nestedTypeFilter()).getDocIdSet(context);\n- if (nestedTypeBitSet != null && nestedTypeBitSet.bits().get(nestedDocId)) {\n+ Filter filter = sc.filterCache().cache(objectMapper.nestedTypeFilter(), null, sc.queryParserService().autoFilterCachePolicy());\n+ if (filter == null) {\n+ continue;\n+ }\n+ // We can pass down 'null' as acceptedDocs, because 
nestedDocId is a doc to be fetched and\n+ // therefor is guaranteed to be a live doc.\n+ DocIdSet nestedTypeSet = filter.getDocIdSet(context, null);\n+ if (nestedTypeSet == null) {\n+ continue;\n+ }\n+ DocIdSetIterator iterator = nestedTypeSet.iterator();\n+ if (iterator == null) {\n+ continue;\n+ }\n+\n+ if (iterator.advance(nestedDocId) == nestedDocId) {\n if (nestedObjectMapper == null) {\n nestedObjectMapper = objectMapper;\n } else {",
"filename": "src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -21,9 +21,9 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n-\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.ReaderUtil;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.util.BitDocIdSet;\n@@ -67,12 +67,7 @@\n import org.elasticsearch.search.lookup.SourceLookup;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static org.elasticsearch.common.xcontent.XContentFactory.contentBuilder;\n@@ -288,7 +283,7 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste\n SourceLookup sourceLookup = context.lookup().source();\n sourceLookup.setSegmentAndDocument(subReaderContext, nestedSubDocId);\n \n- ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context.bitsetFilterCache(), subReaderContext);\n+ ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context, subReaderContext);\n assert nestedObjectMapper != null;\n InternalSearchHit.InternalNestedIdentity nestedIdentity = getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, documentMapper, nestedObjectMapper);\n \n@@ -375,38 +370,56 @@ private Map<String, SearchHitField> getSearchFields(SearchContext context, int n\n private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, LeafReaderContext subReaderContext, DocumentMapper documentMapper, ObjectMapper nestedObjectMapper) throws IOException {\n int currentParent = nestedSubDocId;\n ObjectMapper nestedParentObjectMapper;\n+ StringBuilder field = new StringBuilder();\n+ ObjectMapper current = nestedObjectMapper;\n InternalSearchHit.InternalNestedIdentity nestedIdentity = null;\n do {\n- String field;\n Filter parentFilter;\n- nestedParentObjectMapper = documentMapper.findParentObjectMapper(nestedObjectMapper);\n+ nestedParentObjectMapper = documentMapper.findParentObjectMapper(current);\n+ if (field.length() != 0) {\n+ field.insert(0, '.');\n+ }\n+ field.insert(0, current.name());\n if (nestedParentObjectMapper != null) {\n- field = nestedObjectMapper.name();\n- if (!nestedParentObjectMapper.nested().isNested()) {\n- nestedObjectMapper = nestedParentObjectMapper;\n- // all right, the parent is a normal object field, so this is the best identiy we can give for that:\n- nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field, 0, nestedIdentity);\n+ if (nestedParentObjectMapper.nested().isNested() == false) {\n+ current = nestedParentObjectMapper;\n continue;\n }\n parentFilter = nestedParentObjectMapper.nestedTypeFilter();\n } else {\n- field = nestedObjectMapper.fullPath();\n parentFilter = Queries.newNonNestedFilter();\n }\n \n+ Filter childFilter = context.filterCache().cache(nestedObjectMapper.nestedTypeFilter(), null, context.queryParserService().autoFilterCachePolicy());\n+ if (childFilter == null) {\n+ current = nestedParentObjectMapper;\n+ continue;\n+ }\n+ // We can pass down 'null' as acceptedDocs, because we're fetching matched docId that matched in the query phase.\n+ DocIdSet childDocSet = childFilter.getDocIdSet(subReaderContext, null);\n+ if 
(childDocSet == null) {\n+ current = nestedParentObjectMapper;\n+ continue;\n+ }\n+ DocIdSetIterator childIter = childDocSet.iterator();\n+ if (childIter == null) {\n+ current = nestedParentObjectMapper;\n+ continue;\n+ }\n+\n BitDocIdSet parentBitSet = context.bitsetFilterCache().getBitDocIdSetFilter(parentFilter).getDocIdSet(subReaderContext);\n BitSet parentBits = parentBitSet.bits();\n+\n int offset = 0;\n- BitDocIdSet nestedDocsBitSet = context.bitsetFilterCache().getBitDocIdSetFilter(nestedObjectMapper.nestedTypeFilter()).getDocIdSet(subReaderContext);\n- BitSet nestedBits = nestedDocsBitSet.bits();\n int nextParent = parentBits.nextSetBit(currentParent);\n- for (int docId = nestedBits.nextSetBit(currentParent + 1); docId < nextParent && docId != DocIdSetIterator.NO_MORE_DOCS; docId = nestedBits.nextSetBit(docId + 1)) {\n+ for (int docId = childIter.advance(currentParent + 1); docId < nextParent && docId != DocIdSetIterator.NO_MORE_DOCS; docId = childIter.nextDoc()) {\n offset++;\n }\n currentParent = nextParent;\n- nestedObjectMapper = nestedParentObjectMapper;\n- nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field, offset, nestedIdentity);\n- } while (nestedParentObjectMapper != null);\n+ current = nestedObjectMapper = nestedParentObjectMapper;\n+ nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field.toString(), offset, nestedIdentity);\n+ field = new StringBuilder();\n+ } while (current != null);\n return nestedIdentity;\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -867,7 +867,12 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n List<IndexRequestBuilder> requests = new ArrayList<>();\n requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n .field(\"title\", \"quick brown fox\")\n- .startObject(\"comments\").startObject(\"messages\").field(\"message\", \"fox eat quick\").endObject().endObject()\n+ .startObject(\"comments\")\n+ .startArray(\"messages\")\n+ .startObject().field(\"message\", \"fox eat quick\").endObject()\n+ .startObject().field(\"message\", \"bear eat quick\").endObject()\n+ .endArray()\n+ .endObject()\n .endObject()));\n indexRandom(true, requests);\n \n@@ -879,11 +884,40 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"bear\")).innerHit(new QueryInnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+\n+ // index the message in an object form instead of an array\n+ requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").startObject(\"messages\").field(\"message\", \"fox eat quick\").endObject().endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\")).innerHit(new QueryInnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ 
assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"messages\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getChild(), nullValue());\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
{
"body": "Hi All\n\nPlease refer to issue <a href=\"https://github.com/elastic/elasticsearch/issues/10334\">10334</a> that was marked as fixed in the latest release (1.5.1). I have tested this today and it still doesnt work correctly. The ArrayOutOfBoundsException is gone, however I now get incorrect data back. Only the first entry is returned regardless of which entry has matched the search request.\n\nTo Replicate: Setup the same template and data as in issue <a href=\"https://github.com/elastic/elasticsearch/issues/10334\">10334</a>:\n\n<pre><code>curl -XPOST 'http://localhost:9200/twitter'\n\ncurl -XPOST 'http://localhost:9200/twitter/_mapping/tweet' -d '\n{\n \"tweet\": {\n \"properties\": {\n \"comments\": {\n \"properties\": {\n \"messages\": {\n \"type\": \"nested\",\n \"properties\": {\n \"message\": {\n \"type\" : \"string\", \n \"index\": \"not_analyzed\"\n } \n }\n } \n }\n }\n }\n }\n}'\n\ncurl -XPOST 'http://localhost:9200/twitter/tweet' -d '\n{\n \"comments\": {\n \"messages\": [\n {\"message\": \"Nice website\"},\n {\"message\": \"Worst ever\"}\n ]\n }\n}'\n\n</code></pre>\n\n\nNow search for the message \"Worst ever\"\n\n<pre><code>\ncurl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty' -d '\n{\n \"query\": {\n \"nested\": {\n \"path\": \"comments.messages\",\n \"query\": {\n \"match\": {\"comments.messages.message\": \"Worst ever\"}\n },\n \"inner_hits\" : {}\n }\n }\n}'\n</code></pre>\n\n\nThe following gets returned:\n\n<pre><code>\n{\n \"took\" : 4,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.4054651,\n \"hits\" : [ {\n \"_index\" : \"twitter\",\n \"_type\" : \"tweet\",\n \"_id\" : \"AUzCMIjnYDWhRGHuJRhC\",\n \"_score\" : 1.4054651,\n \"_source\":\n{\n \"comments\": {\n \"messages\": [\n {\"message\": \"Nice website\"},\n {\"message\": \"Worst ever\"}\n ]\n }\n},\n \"inner_hits\" : {\n \"comments.messages\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.4054651,\n \"hits\" : [ {\n \"_index\" : \"twitter\",\n \"_type\" : \"tweet\",\n \"_id\" : \"AUzCMIjnYDWhRGHuJRhC\",\n \"_nested\" : {\n \"field\" : \"comments\",\n \"offset\" : 0,\n \"_nested\" : {\n \"field\" : \"messages\",\n \"offset\" : 0\n }\n },\n \"_score\" : 1.4054651,\n \"_source\":{\"message\":\"Nice website\"}\n } ]\n }\n }\n }\n } ]\n }\n}\n</code></pre>\n\n\nThis is clearly the incorrect inner hit. As discussed previously with martijnvg a workaround is if I change the object field in my mapping to be nested as well, like this:\n\n<pre><code>\ncurl -XPOST 'http://localhost:9200/twitter/_mapping/tweet' -d '\n{\n \"tweet\": {\n \"properties\": {\n \"comments\": {\n \"type\": \"nested\",\n \"properties\": {\n \"messages\": {\n \"type\": \"nested\",\n \"properties\": {\n \"message\": {\n \"type\" : \"string\", \n \"index\": \"not_analyzed\"\n } \n }\n } \n }\n }\n }\n }\n}'\n</code></pre>\n\n\nhowever doing so results in a larger memory footprint which is undesirable. Also note the use case in #10334 in my second post on the issue that explains why I only need a normal object field containing nested fields.\n",
"comments": [
{
"body": "@mariusdw agreed, that looks incorrect\n",
"created_at": "2015-04-16T12:56:19Z"
},
{
"body": "Fixed via #10663. If a non nested object field the direct parent of a nested object field, its field is inlined with the nested identify of the child nested object. Since a normal object isn't nested using a dummy nested identity didn't make sense. (this was the fix for #10334)\n",
"created_at": "2015-04-30T15:05:33Z"
}
],
"number": 10629,
"title": "Elasticsearch inner_hits query does not return second nested field in object field"
} | {
"body": "Step over object fields when creating the nested identify and just use the object fieldname as a prefix for the nested field.\n\nCloses #10629\n",
"number": 10662,
"review_comments": [],
"title": "inner_hits: Ignore object fields."
} | {
"commits": [
{
"message": "inner_hits: Ignore object fields.\n\nStep over object fields when creating the nested identify and just use the object fieldname as a prefix for the nested field.\n\nCloses #10629"
}
],
"files": [
{
"diff": "@@ -367,23 +367,23 @@ private Map<String, SearchHitField> getSearchFields(SearchContext context, int n\n \n private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, AtomicReaderContext subReaderContext, DocumentMapper documentMapper, ObjectMapper nestedObjectMapper) throws IOException {\n int currentParent = nestedSubDocId;\n+ StringBuilder objectFieldPrefix = new StringBuilder();\n+ ObjectMapper current = nestedObjectMapper;\n ObjectMapper nestedParentObjectMapper;\n InternalSearchHit.InternalNestedIdentity nestedIdentity = null;\n do {\n- String field;\n- Filter parentFilter;\n- nestedParentObjectMapper = documentMapper.findParentObjectMapper(nestedObjectMapper);\n+ String field = nestedObjectMapper.name();\n+ nestedParentObjectMapper = documentMapper.findParentObjectMapper(current);\n+\n+ final Filter parentFilter;\n if (nestedParentObjectMapper != null) {\n- field = nestedObjectMapper.name();\n if (!nestedParentObjectMapper.nested().isNested()) {\n- nestedObjectMapper = nestedParentObjectMapper;\n- // all right, the parent is a normal object field, so this is the best identiy we can give for that:\n- nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field, 0, nestedIdentity);\n+ objectFieldPrefix.append(nestedParentObjectMapper.name()).append('.');\n+ current = nestedParentObjectMapper;\n continue;\n }\n parentFilter = nestedParentObjectMapper.nestedTypeFilter();\n } else {\n- field = nestedObjectMapper.fullPath();\n parentFilter = NonNestedDocsFilter.INSTANCE;\n }\n \n@@ -395,7 +395,11 @@ private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(Searc\n offset++;\n }\n currentParent = nextParent;\n- nestedObjectMapper = nestedParentObjectMapper;\n+ nestedObjectMapper = current = nestedParentObjectMapper;\n+ if (objectFieldPrefix.length() > 0) {\n+ field = objectFieldPrefix.append(field).toString();\n+ objectFieldPrefix = new StringBuilder();\n+ }\n nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field, offset, nestedIdentity);\n } while (nestedParentObjectMapper != null);\n return nestedIdentity;",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -851,21 +851,26 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n assertAcked(prepareCreate(\"articles\")\n .addMapping(\"article\", jsonBuilder().startObject()\n .startObject(\"properties\")\n- .startObject(\"comments\")\n- .field(\"type\", \"object\")\n- .startObject(\"properties\")\n- .startObject(\"messages\").field(\"type\", \"nested\").endObject()\n- .endObject()\n- .endObject()\n- .endObject()\n+ .startObject(\"comments\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"messages\").field(\"type\", \"nested\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n .endObject()\n )\n );\n \n List<IndexRequestBuilder> requests = new ArrayList<>();\n requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n .field(\"title\", \"quick brown fox\")\n- .startObject(\"comments\").startObject(\"messages\").field(\"message\", \"fox eat quick\").endObject().endObject()\n+ .startObject(\"comments\")\n+ .startArray(\"messages\")\n+ .startObject().field(\"message\", \"fox eat quick\").endObject()\n+ .startObject().field(\"message\", \"bear eat quick\").endObject()\n+ .endArray()\n+ .endObject()\n .endObject()));\n indexRandom(true, requests);\n \n@@ -877,11 +882,21 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"messages\"));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n- assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getChild(), nullValue());\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+\n+ response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"bear\")).innerHit(new QueryInnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n+ 
assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild(), nullValue());\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
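
Applying the same fix to the reproduction from issue #10629 above, the inner hit for "Worst ever" should point at the second nested message rather than the first. A sketch of the relevant fragment of the corrected response, based on the issue's reproduction and the assertions in the PRs above (scores, ids, and other fields omitted):

```
"inner_hits" : {
  "comments.messages" : {
    "hits" : {
      "total" : 1,
      "hits" : [ {
        "_nested" : {
          "field" : "comments.messages",
          "offset" : 1
        },
        "_source" : { "message" : "Worst ever" }
      } ]
    }
  }
}
```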
{
"body": "Fixing issue #9691 revealed a deeper problem with the QuadPrefixTree's memory usage. At 1m precision the example shape in https://gist.github.com/nknize/abbcb87f091b891f85e1 consumes more than 1GB of memory. This is initially alleviated by using 2 bit encoded quads (instead of 1byte) but only delays the problem. Moreover, as new complex shapes are added duplicate quadcells are created - thus introducing unnecessary redundant memory consumption (an inverted index approach makes mosts sense - its Lucene!).\n\nFor now, if a QuadTree is used for complex shapes great care must be taken and precision must be sacrificed (something that's automatically done with the distance_error_pct without the user knowing - which is a TERRIBLE approach). An alternative improvement could be to apply a Hilbert R-Tree - which will be explored as a separate issue. Or to restrict the accuracy to a lower level of precision (something that's undergoing experimentation).\n",
"comments": [],
"number": 9860,
"title": "[GEO] OOM Error when using QuadPrefixTree with 1m precision"
} | {
"body": "This is currently submitted as a patch in LUCENE-6422 (placed in our lucene package until its committed to lucene 5.x). It removes unnecessary transient memory usage for QuadPrefixTree and, for 1.6.0+ shape indexes adds a new compact bit encoded representation for each quadcell. This is the heart of numerous false positive matches, OOM exceptions, and all around poor shape indexing performance. The compact bit representation will also allows for encoding 3D shapes in future enhancements.\n\ncloses #2361\ncloses #9860\ncloses #10583 \n",
"number": 10652,
"review_comments": [],
"title": "Fix OOM for high precision exotic shapes"
} | {
"commits": [
{
"message": "[GEO] Fix OOM for high precision exotic shapes\n\nThis is currently submitted as a patch in LUCENE-6422. It removes unnecessary transient memory usage for QuadPrefixTree and, for 1.6.0+ shape indexes adds a new compact bit encoded representation for each quadcell. This is the heart of numerous false positive matches, OOM exceptions, and all around poor shape indexing performance. The compact bit representation will also allows for encoding 3D shapes in future enhancements."
}
],
"files": [
{
"diff": "@@ -0,0 +1,197 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.spatial.prefix;\n+\n+import com.spatial4j.core.shape.Point;\n+import com.spatial4j.core.shape.Shape;\n+import org.apache.lucene.search.Filter;\n+import org.apache.lucene.spatial.prefix.tree.Cell;\n+import org.apache.lucene.spatial.prefix.tree.CellIterator;\n+import org.apache.lucene.spatial.prefix.tree.LegacyCell;\n+import org.apache.lucene.spatial.prefix.tree.PackedQuadPrefixTree;\n+import org.apache.lucene.spatial.prefix.tree.SpatialPrefixTree;\n+import org.apache.lucene.spatial.query.SpatialArgs;\n+import org.apache.lucene.spatial.query.SpatialOperation;\n+import org.apache.lucene.spatial.query.UnsupportedSpatialOperation;\n+\n+import java.util.ArrayList;\n+import java.util.Iterator;\n+import java.util.List;\n+\n+/**\n+ * A {@link PrefixTreeStrategy} which uses {@link AbstractVisitingPrefixTreeFilter}.\n+ * This strategy has support for searching non-point shapes (note: not tested).\n+ * Even a query shape with distErrPct=0 (fully precise to the grid) should have\n+ * good performance for typical data, unless there is a lot of indexed data\n+ * coincident with the shape's edge.\n+ *\n+ * @lucene.experimental\n+ *\n+ * NOTE: Will be removed upon commit of LUCENE-6422\n+ */\n+public class RecursivePrefixTreeStrategy extends PrefixTreeStrategy {\n+ /* Future potential optimizations:\n+\n+ Each shape.relate(otherShape) result could be cached since much of the same relations will be invoked when\n+ multiple segments are involved. Do this for \"complex\" shapes, not cheap ones, and don't cache when disjoint to\n+ bbox because it's a cheap calc. This is one advantage TermQueryPrefixTreeStrategy has over RPT.\n+\n+ */\n+\n+ protected int prefixGridScanLevel;\n+\n+ //Formerly known as simplifyIndexedCells. Eventually will be removed. Only compatible with RPT\n+ // and a LegacyPrefixTree.\n+ protected boolean pruneLeafyBranches = true;\n+\n+ protected boolean multiOverlappingIndexedShapes = true;\n+\n+ public RecursivePrefixTreeStrategy(SpatialPrefixTree grid, String fieldName) {\n+ super(grid, fieldName);\n+ prefixGridScanLevel = grid.getMaxLevels() - 4;//TODO this default constant is dependent on the prefix grid size\n+ }\n+\n+ public int getPrefixGridScanLevel() {\n+ return prefixGridScanLevel;\n+ }\n+\n+ /**\n+ * Sets the grid level [1-maxLevels] at which indexed terms are scanned brute-force\n+ * instead of by grid decomposition. By default this is maxLevels - 4. 
The\n+ * final level, maxLevels, is always scanned.\n+ *\n+ * @param prefixGridScanLevel 1 to maxLevels\n+ */\n+ public void setPrefixGridScanLevel(int prefixGridScanLevel) {\n+ //TODO if negative then subtract from maxlevels\n+ this.prefixGridScanLevel = prefixGridScanLevel;\n+ }\n+\n+ public boolean isMultiOverlappingIndexedShapes() {\n+ return multiOverlappingIndexedShapes;\n+ }\n+\n+ /** See {@link ContainsPrefixTreeFilter#multiOverlappingIndexedShapes}. */\n+ public void setMultiOverlappingIndexedShapes(boolean multiOverlappingIndexedShapes) {\n+ this.multiOverlappingIndexedShapes = multiOverlappingIndexedShapes;\n+ }\n+\n+ public boolean isPruneLeafyBranches() {\n+ return pruneLeafyBranches;\n+ }\n+\n+ /** An optional hint affecting non-point shapes: it will\n+ * simplify/aggregate sets of complete leaves in a cell to its parent, resulting in ~20-25%\n+ * fewer indexed cells. However, it will likely be removed in the future. (default=true)\n+ */\n+ public void setPruneLeafyBranches(boolean pruneLeafyBranches) {\n+ this.pruneLeafyBranches = pruneLeafyBranches;\n+ }\n+\n+ @Override\n+ public String toString() {\n+ StringBuilder str = new StringBuilder(getClass().getSimpleName()).append('(');\n+ str.append(\"SPG:(\").append(grid.toString()).append(')');\n+ if (pointsOnly)\n+ str.append(\",pointsOnly\");\n+ if (pruneLeafyBranches)\n+ str.append(\",pruneLeafyBranches\");\n+ if (prefixGridScanLevel != grid.getMaxLevels() - 4)\n+ str.append(\",prefixGridScanLevel:\").append(\"\"+prefixGridScanLevel);\n+ if (!multiOverlappingIndexedShapes)\n+ str.append(\",!multiOverlappingIndexedShapes\");\n+ return str.append(')').toString();\n+ }\n+\n+ @Override\n+ protected Iterator<Cell> createCellIteratorToIndex(Shape shape, int detailLevel, Iterator<Cell> reuse) {\n+ if (shape instanceof Point || !pruneLeafyBranches || grid instanceof PackedQuadPrefixTree)\n+ return super.createCellIteratorToIndex(shape, detailLevel, reuse);\n+\n+ List<Cell> cells = new ArrayList<>(4096);\n+ recursiveTraverseAndPrune(grid.getWorldCell(), shape, detailLevel, cells);\n+ return cells.iterator();\n+ }\n+\n+ /** Returns true if cell was added as a leaf. If it wasn't it recursively descends. */\n+ private boolean recursiveTraverseAndPrune(Cell cell, Shape shape, int detailLevel, List<Cell> result) {\n+ // Important: this logic assumes Cells don't share anything with other cells when\n+ // calling cell.getNextLevelCells(). 
This is only true for LegacyCell.\n+ if (!(cell instanceof LegacyCell))\n+ throw new IllegalStateException(\"pruneLeafyBranches must be disabled for use with grid \"+grid);\n+\n+ if (cell.getLevel() == detailLevel) {\n+ cell.setLeaf();//FYI might already be a leaf\n+ }\n+ if (cell.isLeaf()) {\n+ result.add(cell);\n+ return true;\n+ }\n+ if (cell.getLevel() != 0)\n+ result.add(cell);\n+\n+ int leaves = 0;\n+ CellIterator subCells = cell.getNextLevelCells(shape);\n+ while (subCells.hasNext()) {\n+ Cell subCell = subCells.next();\n+ if (recursiveTraverseAndPrune(subCell, shape, detailLevel, result))\n+ leaves++;\n+ }\n+ //can we prune?\n+ if (leaves == ((LegacyCell)cell).getSubCellsSize() && cell.getLevel() != 0) {\n+ //Optimization: substitute the parent as a leaf instead of adding all\n+ // children as leaves\n+\n+ //remove the leaves\n+ do {\n+ result.remove(result.size() - 1);//remove last\n+ } while (--leaves > 0);\n+ //add cell as the leaf\n+ cell.setLeaf();\n+ return true;\n+ }\n+ return false;\n+ }\n+\n+ @Override\n+ public Filter makeFilter(SpatialArgs args) {\n+ final SpatialOperation op = args.getOperation();\n+\n+ Shape shape = args.getShape();\n+ int detailLevel = grid.getLevelForDistance(args.resolveDistErr(ctx, distErrPct));\n+\n+ if (op == SpatialOperation.Intersects) {\n+ return new IntersectsPrefixTreeFilter(\n+ shape, getFieldName(), grid, detailLevel, prefixGridScanLevel);\n+ } else if (op == SpatialOperation.IsWithin) {\n+ return new WithinPrefixTreeFilter(\n+ shape, getFieldName(), grid, detailLevel, prefixGridScanLevel,\n+ -1);//-1 flag is slower but ensures correct results\n+ } else if (op == SpatialOperation.Contains) {\n+ return new ContainsPrefixTreeFilter(shape, getFieldName(), grid, detailLevel,\n+ multiOverlappingIndexedShapes);\n+ }\n+ throw new UnsupportedSpatialOperation(op);\n+ }\n+}\n+\n+\n+\n+",
"filename": "src/main/java/org/apache/lucene/spatial/prefix/RecursivePrefixTreeStrategy.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,81 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.spatial.prefix.tree;\n+\n+import java.util.Iterator;\n+import java.util.NoSuchElementException;\n+\n+/**\n+ * An Iterator of SpatialPrefixTree Cells. The order is always sorted without duplicates.\n+ *\n+ * @lucene.experimental\n+ *\n+ * NOTE: Will be removed upon commit of LUCENE-6422\n+ */\n+public abstract class CellIterator implements Iterator<Cell> {\n+\n+ //note: nextCell or thisCell can be non-null but neither at the same time. That's\n+ // because they might return the same instance when re-used!\n+\n+ protected Cell nextCell;//to be returned by next(), and null'ed after\n+ protected Cell thisCell;//see next() & thisCell(). Should be cleared in hasNext().\n+\n+ /** Returns the cell last returned from {@link #next()}. It's cleared by hasNext(). */\n+ public Cell thisCell() {\n+ assert thisCell != null : \"Only call thisCell() after next(), not hasNext()\";\n+ return thisCell;\n+ }\n+\n+ // Arguably this belongs here and not on Cell\n+ //public SpatialRelation getShapeRel()\n+\n+ /**\n+ * Gets the next cell that is >= {@code fromCell}, compared using non-leaf bytes. If it returns null then\n+ * the iterator is exhausted.\n+ */\n+ public Cell nextFrom(Cell fromCell) {\n+ while (true) {\n+ if (!hasNext())\n+ return null;\n+ Cell c = next();//will update thisCell\n+ if (c.compareToNoLeaf(fromCell) >= 0) {\n+ return c;\n+ }\n+ }\n+ }\n+\n+ /** This prevents sub-cells (those underneath the current cell) from being iterated to,\n+ * if applicable, otherwise a NO-OP. */\n+ @Override\n+ public void remove() {\n+ assert thisCell != null;\n+ }\n+\n+ @Override\n+ public Cell next() {\n+ if (nextCell == null) {\n+ if (!hasNext())\n+ throw new NoSuchElementException();\n+ }\n+ thisCell = nextCell;\n+ nextCell = null;\n+ return thisCell;\n+ }\n+}",
"filename": "src/main/java/org/apache/lucene/spatial/prefix/tree/CellIterator.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,248 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.spatial.prefix.tree;\n+\n+import com.spatial4j.core.shape.Point;\n+import com.spatial4j.core.shape.Shape;\n+import com.spatial4j.core.shape.SpatialRelation;\n+import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.StringHelper;\n+\n+import java.util.Collection;\n+\n+/** The base for the original two SPT's: Geohash and Quad. Don't subclass this for new SPTs.\n+ * @lucene.internal\n+ *\n+ * NOTE: Will be removed upon commit of LUCENE-6422\n+ */\n+//public for RPT pruneLeafyBranches code\n+public abstract class LegacyCell implements Cell {\n+\n+ // Important: A LegacyCell doesn't share state for getNextLevelCells(), and\n+ // LegacySpatialPrefixTree assumes this in its simplify tree logic.\n+\n+ private static final byte LEAF_BYTE = '+';//NOTE: must sort before letters & numbers\n+\n+ //Arguably we could simply use a BytesRef, using an extra Object.\n+ protected byte[] bytes;//generally bigger to potentially hold a leaf\n+ protected int b_off;\n+ protected int b_len;//doesn't reflect leaf; same as getLevel()\n+\n+ protected boolean isLeaf;\n+\n+ /**\n+ * When set via getSubCells(filter), it is the relationship between this cell\n+ * and the given shape filter. Doesn't participate in shape equality.\n+ */\n+ protected SpatialRelation shapeRel;\n+\n+ protected Shape shape;//cached\n+\n+ /** Warning: Refers to the same bytes (no copy). If {@link #setLeaf()} is subsequently called then it\n+ * may modify bytes. 
*/\n+ protected LegacyCell(byte[] bytes, int off, int len) {\n+ this.bytes = bytes;\n+ this.b_off = off;\n+ this.b_len = len;\n+ readLeafAdjust();\n+ }\n+\n+ protected void readCell(BytesRef bytes) {\n+ shapeRel = null;\n+ shape = null;\n+ this.bytes = bytes.bytes;\n+ this.b_off = bytes.offset;\n+ this.b_len = (short) bytes.length;\n+ readLeafAdjust();\n+ }\n+\n+ protected void readLeafAdjust() {\n+ isLeaf = (b_len > 0 && bytes[b_off + b_len - 1] == LEAF_BYTE);\n+ if (isLeaf)\n+ b_len--;\n+ if (getLevel() == getMaxLevels())\n+ isLeaf = true;\n+ }\n+\n+ protected abstract SpatialPrefixTree getGrid();\n+\n+ protected abstract int getMaxLevels();\n+\n+ @Override\n+ public SpatialRelation getShapeRel() {\n+ return shapeRel;\n+ }\n+\n+ @Override\n+ public void setShapeRel(SpatialRelation rel) {\n+ this.shapeRel = rel;\n+ }\n+\n+ @Override\n+ public boolean isLeaf() {\n+ return isLeaf;\n+ }\n+\n+ @Override\n+ public void setLeaf() {\n+ isLeaf = true;\n+ }\n+\n+ @Override\n+ public BytesRef getTokenBytesWithLeaf(BytesRef result) {\n+ result = getTokenBytesNoLeaf(result);\n+ if (!isLeaf || getLevel() == getMaxLevels())\n+ return result;\n+ if (result.bytes.length < result.offset + result.length + 1) {\n+ assert false : \"Not supposed to happen; performance bug\";\n+ byte[] copy = new byte[result.length + 1];\n+ System.arraycopy(result.bytes, result.offset, copy, 0, result.length - 1);\n+ result.bytes = copy;\n+ result.offset = 0;\n+ }\n+ result.bytes[result.offset + result.length++] = LEAF_BYTE;\n+ return result;\n+ }\n+\n+ @Override\n+ public BytesRef getTokenBytesNoLeaf(BytesRef result) {\n+ if (result == null)\n+ return new BytesRef(bytes, b_off, b_len);\n+ result.bytes = bytes;\n+ result.offset = b_off;\n+ result.length = b_len;\n+ return result;\n+ }\n+\n+ @Override\n+ public int getLevel() {\n+ return b_len;\n+ }\n+\n+ @Override\n+ public CellIterator getNextLevelCells(Shape shapeFilter) {\n+ assert getLevel() < getGrid().getMaxLevels();\n+ if (shapeFilter instanceof Point) {\n+ LegacyCell cell = getSubCell((Point) shapeFilter);\n+ cell.shapeRel = SpatialRelation.CONTAINS;\n+ return new SingletonCellIterator(cell);\n+ } else {\n+ return new FilterCellIterator(getSubCells().iterator(), shapeFilter);\n+ }\n+ }\n+\n+ /**\n+ * Performant implementations are expected to implement this efficiently by\n+ * considering the current cell's boundary.\n+ * <p>\n+ * Precondition: Never called when getLevel() == maxLevel.\n+ * Precondition: this.getShape().relate(p) != DISJOINT.\n+ */\n+ protected abstract LegacyCell getSubCell(Point p);\n+\n+ /**\n+ * Gets the cells at the next grid cell level that covers this cell.\n+ * Precondition: Never called when getLevel() == maxLevel.\n+ *\n+ * @return A set of cells (no dups), sorted, modifiable, not empty, not null.\n+ */\n+ protected abstract Collection<Cell> getSubCells();\n+\n+ /**\n+ * {@link #getSubCells()}.size() -- usually a constant. Should be >=2\n+ */\n+ public abstract int getSubCellsSize();\n+\n+ @Override\n+ public boolean isPrefixOf(Cell c) {\n+ //Note: this only works when each level uses a whole number of bytes.\n+ LegacyCell cell = (LegacyCell)c;\n+ boolean result = sliceEquals(cell.bytes, cell.b_off, cell.b_len, bytes, b_off, b_len);\n+ assert result == StringHelper.startsWith(c.getTokenBytesNoLeaf(null), getTokenBytesNoLeaf(null));\n+ return result;\n+ }\n+\n+ /** Copied from {@link org.apache.lucene.util.StringHelper#startsWith(org.apache.lucene.util.BytesRef, org.apache.lucene.util.BytesRef)}\n+ * which calls this. 
This is to avoid creating a BytesRef. */\n+ private static boolean sliceEquals(byte[] sliceToTest_bytes, int sliceToTest_offset, int sliceToTest_length,\n+ byte[] other_bytes, int other_offset, int other_length) {\n+ if (sliceToTest_length < other_length) {\n+ return false;\n+ }\n+ int i = sliceToTest_offset;\n+ int j = other_offset;\n+ final int k = other_offset + other_length;\n+\n+ while (j < k) {\n+ if (sliceToTest_bytes[i++] != other_bytes[j++]) {\n+ return false;\n+ }\n+ }\n+\n+ return true;\n+ }\n+\n+ @Override\n+ public int compareToNoLeaf(Cell fromCell) {\n+ LegacyCell b = (LegacyCell) fromCell;\n+ return compare(bytes, b_off, b_len, b.bytes, b.b_off, b.b_len);\n+ }\n+\n+ /** Copied from {@link org.apache.lucene.util.BytesRef#compareTo(org.apache.lucene.util.BytesRef)}.\n+ * This is to avoid creating a BytesRef. */\n+ protected static int compare(byte[] aBytes, int aUpto, int a_length, byte[] bBytes, int bUpto, int b_length) {\n+ final int aStop = aUpto + Math.min(a_length, b_length);\n+ while(aUpto < aStop) {\n+ int aByte = aBytes[aUpto++] & 0xff;\n+ int bByte = bBytes[bUpto++] & 0xff;\n+\n+ int diff = aByte - bByte;\n+ if (diff != 0) {\n+ return diff;\n+ }\n+ }\n+\n+ // One is a prefix of the other, or, they are equal:\n+ return a_length - b_length;\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ //this method isn't \"normally\" called; just in asserts/tests\n+ if (obj instanceof Cell) {\n+ Cell cell = (Cell) obj;\n+ return getTokenBytesWithLeaf(null).equals(cell.getTokenBytesWithLeaf(null));\n+ } else {\n+ return false;\n+ }\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return getTokenBytesWithLeaf(null).hashCode();\n+ }\n+\n+ @Override\n+ public String toString() {\n+ //this method isn't \"normally\" called; just in asserts/tests\n+ return getTokenBytesWithLeaf(null).utf8ToString();\n+ }\n+\n+}",
"filename": "src/main/java/org/apache/lucene/spatial/prefix/tree/LegacyCell.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,435 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.spatial.prefix.tree;\n+\n+import com.spatial4j.core.context.SpatialContext;\n+import com.spatial4j.core.shape.Point;\n+import com.spatial4j.core.shape.Rectangle;\n+import com.spatial4j.core.shape.Shape;\n+import com.spatial4j.core.shape.SpatialRelation;\n+import com.spatial4j.core.shape.impl.RectangleImpl;\n+import org.apache.lucene.util.BytesRef;\n+\n+import java.util.ArrayList;\n+import java.util.Collection;\n+import java.util.List;\n+import java.util.NoSuchElementException;\n+\n+/**\n+ * Subclassing QuadPrefixTree this {@link SpatialPrefixTree} uses the compact QuadCell encoding described in\n+ * {@link PackedQuadCell}\n+ *\n+ * @lucene.experimental\n+ *\n+ * NOTE: Will be removed upon commit of LUCENE-6422\n+ */\n+public class PackedQuadPrefixTree extends QuadPrefixTree {\n+ public static final byte[] QUAD = new byte[] {0x00, 0x01, 0x02, 0x03};\n+ public static final int MAX_LEVELS_POSSIBLE = 29;\n+\n+ private boolean leafyPrune = true;\n+\n+ public static class Factory extends QuadPrefixTree.Factory {\n+ @Override\n+ protected SpatialPrefixTree newSPT() {\n+ if (maxLevels > MAX_LEVELS_POSSIBLE) {\n+ throw new IllegalArgumentException(\"maxLevels \" + maxLevels + \" exceeds maximum value \" + MAX_LEVELS_POSSIBLE);\n+ }\n+ return new PackedQuadPrefixTree(ctx, maxLevels);\n+ }\n+ }\n+\n+ public PackedQuadPrefixTree(SpatialContext ctx, int maxLevels) {\n+ super(ctx, maxLevels);\n+ }\n+\n+ @Override\n+ public Cell getWorldCell() {\n+ return new PackedQuadCell(0x0L);\n+ }\n+ @Override\n+ public Cell getCell(Point p, int level) {\n+ List<Cell> cells = new ArrayList<>(1);\n+ build(xmid, ymid, 0, cells, 0x0L, ctx.makePoint(p.getX(),p.getY()), level);\n+ return cells.get(0);//note cells could be longer if p on edge\n+ }\n+\n+ protected void build(double x, double y, int level, List<Cell> matches, long term, Shape shape, int maxLevel) {\n+ double w = levelW[level] / 2;\n+ double h = levelH[level] / 2;\n+\n+ // Z-Order\n+ // http://en.wikipedia.org/wiki/Z-order_%28curve%29\n+ checkBattenberg(QUAD[0], x - w, y + h, level, matches, term, shape, maxLevel);\n+ checkBattenberg(QUAD[1], x + w, y + h, level, matches, term, shape, maxLevel);\n+ checkBattenberg(QUAD[2], x - w, y - h, level, matches, term, shape, maxLevel);\n+ checkBattenberg(QUAD[3], x + w, y - h, level, matches, term, shape, maxLevel);\n+ }\n+\n+ protected void checkBattenberg(byte quad, double cx, double cy, int level, List<Cell> matches,\n+ long term, Shape shape, int maxLevel) {\n+ // short-circuit if we find a match for the point (no need to continue recursion)\n+ if (shape instanceof Point && !matches.isEmpty())\n+ return;\n+ double w = levelW[level] / 2;\n+ double h 
= levelH[level] / 2;\n+\n+ SpatialRelation v = shape.relate(ctx.makeRectangle(cx - w, cx + w, cy - h, cy + h));\n+\n+ if (SpatialRelation.DISJOINT == v) {\n+ return;\n+ }\n+\n+ // set bits for next level\n+ term |= (((long)(quad))<<(64-(++level<<1)));\n+ // increment level\n+ term = ((term>>>1)+1)<<1;\n+\n+ if (SpatialRelation.CONTAINS == v || (level >= maxLevel)) {\n+ matches.add(new PackedQuadCell(term, v.transpose()));\n+ } else {// SpatialRelation.WITHIN, SpatialRelation.INTERSECTS\n+ build(cx, cy, level, matches, term, shape, maxLevel);\n+ }\n+ }\n+\n+ @Override\n+ public Cell readCell(BytesRef term, Cell scratch) {\n+ PackedQuadCell cell = (PackedQuadCell) scratch;\n+ if (cell == null)\n+ cell = (PackedQuadCell) getWorldCell();\n+ cell.readCell(term);\n+ return cell;\n+ }\n+\n+ @Override\n+ public CellIterator getTreeCellIterator(Shape shape, int detailLevel) {\n+ return new PrefixTreeIterator(shape);\n+ }\n+\n+ public void setPruneLeafyBranches( boolean pruneLeafyBranches ) {\n+ this.leafyPrune = pruneLeafyBranches;\n+ }\n+\n+ /**\n+ * PackedQuadCell Binary Representation is as follows\n+ * CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCDDDDDL\n+ *\n+ * Where C = Cell bits (2 per quad)\n+ * D = Depth bits (5 with max of 29 levels)\n+ * L = isLeaf bit\n+ */\n+ public class PackedQuadCell extends QuadCell {\n+ private long term;\n+\n+ PackedQuadCell(long term) {\n+ super(null, 0, 0);\n+ this.term = term;\n+ this.b_off = 0;\n+ this.bytes = longToByteArray(this.term);\n+ this.b_len = 8;\n+ readLeafAdjust();\n+ }\n+\n+ PackedQuadCell(long term, SpatialRelation shapeRel) {\n+ this(term);\n+ this.shapeRel = shapeRel;\n+ }\n+\n+ @Override\n+ protected void readCell(BytesRef bytes) {\n+ shapeRel = null;\n+ shape = null;\n+ this.bytes = bytes.bytes;\n+ this.b_off = bytes.offset;\n+ this.b_len = (short) bytes.length;\n+ this.term = longFromByteArray(this.bytes, bytes.offset);\n+ readLeafAdjust();\n+ }\n+\n+ private final int getShiftForLevel(final int level) {\n+ return 64 - (level<<1);\n+ }\n+\n+ public boolean isEnd(final int level, final int shift) {\n+ return (term != 0x0L && ((((0x1L<<(level<<1))-1)-(term>>>shift)) == 0x0L));\n+ }\n+\n+ /**\n+ * Get the next cell in the tree without using recursion. 
descend parameter requests traversal to the child nodes,\n+ * setting this to false will step to the next sibling.\n+ * Note: This complies with lexicographical ordering, once you've moved to the next sibling there is no backtracking.\n+ */\n+ public PackedQuadCell nextCell(boolean descend) {\n+ final int level = getLevel();\n+ final int shift = getShiftForLevel(level);\n+ // base case: can't go further\n+ if ( (!descend && isEnd(level, shift)) || isEnd(maxLevels, getShiftForLevel(maxLevels))) {\n+ return null;\n+ }\n+ long newTerm;\n+ final boolean isLeaf = (term&0x1L)==0x1L;\n+ // if descend requested && we're not at the maxLevel\n+ if ((descend && !isLeaf && (level != maxLevels)) || level == 0) {\n+ // simple case: increment level bits (next level)\n+ newTerm = ((term>>>1)+0x1L)<<1;\n+ } else { // we're not descending or we can't descend\n+ newTerm = term + (0x1L<<shift);\n+ // we're at the last sibling...force descend\n+ if (((term>>>shift)&0x3L) == 0x3L) {\n+ // adjust level for number popping up\n+ newTerm = ((newTerm>>>1) - (Long.numberOfTrailingZeros(newTerm>>>shift)>>>1))<<1;\n+ }\n+ }\n+ return new PackedQuadCell(newTerm);\n+ }\n+\n+ @Override\n+ protected void readLeafAdjust() {\n+ isLeaf = ((0x1L)&term) == 0x1L;\n+ if (getLevel() == getMaxLevels()) {\n+ isLeaf = true;\n+ }\n+ }\n+\n+ @Override\n+ public BytesRef getTokenBytesWithLeaf(BytesRef result) {\n+ if (isLeaf) {\n+ term |= 0x1L;\n+ }\n+ return getTokenBytesNoLeaf(result);\n+ }\n+\n+ @Override\n+ public BytesRef getTokenBytesNoLeaf(BytesRef result) {\n+ if (result == null)\n+ return new BytesRef(bytes, b_off, b_len);\n+ result.bytes = longToByteArray(this.term);\n+ result.offset = 0;\n+ result.length = result.bytes.length;\n+ return result;\n+ }\n+\n+ @Override\n+ public int compareToNoLeaf(Cell fromCell) {\n+ PackedQuadCell b = (PackedQuadCell) fromCell;\n+ final long thisTerm = (((0x1L)&term) == 0x1L) ? term-1 : term;\n+ final long fromTerm = (((0x1L)&b.term) == 0x1L) ? b.term-1 : b.term;\n+ final int result = compare(longToByteArray(thisTerm), 0, 8, longToByteArray(fromTerm), 0, 8);\n+ return result;\n+ }\n+\n+ @Override\n+ public int getLevel() {\n+ int l = (int)((term >>> 1)&0x1FL);\n+ return l;\n+ }\n+\n+ @Override\n+ protected Collection<Cell> getSubCells() {\n+ List<Cell> cells = new ArrayList<>(4);\n+ PackedQuadCell pqc = (PackedQuadCell)(new PackedQuadCell(((term&0x1)==0x1) ? this.term-1 : this.term))\n+ .nextCell(true);\n+ cells.add(pqc);\n+ cells.add((pqc = (PackedQuadCell) (pqc.nextCell(false))));\n+ cells.add((pqc = (PackedQuadCell) (pqc.nextCell(false))));\n+ cells.add(pqc.nextCell(false));\n+ return cells;\n+ }\n+\n+ @Override\n+ protected QuadCell getSubCell(Point p) {\n+ return (PackedQuadCell) PackedQuadPrefixTree.this.getCell(p, getLevel() + 1);//not performant!\n+ }\n+\n+ @Override\n+ public boolean isPrefixOf(Cell c) {\n+ PackedQuadCell cell = (PackedQuadCell)c;\n+ return (this.term==0x0L) ? 
true : isInternalPrefix(cell);\n+ }\n+\n+ protected boolean isInternalPrefix(PackedQuadCell c) {\n+ final int shift = 64 - (getLevel()<<1);\n+ return ((term>>>shift)-(c.term>>>shift)) == 0x0L;\n+ }\n+\n+ protected long concat(byte postfix) {\n+ // extra leaf bit\n+ return this.term | (((long)(postfix))<<((getMaxLevels()-getLevel()<<1)+6));\n+ }\n+\n+ /**\n+ * Constructs a bounding box shape out of the encoded cell\n+ */\n+ @Override\n+ protected Rectangle makeShape() {\n+ double xmin = PackedQuadPrefixTree.this.xmin;\n+ double ymin = PackedQuadPrefixTree.this.ymin;\n+ int level = getLevel();\n+\n+ byte b;\n+ for (short l=0, i=1; l<level; ++l, ++i) {\n+ b = (byte) ((term>>>(64-(i<<1))) & 0x3L);\n+\n+ switch (b) {\n+ case 0x00:\n+ ymin += levelH[l];\n+ break;\n+ case 0x01:\n+ xmin += levelW[l];\n+ ymin += levelH[l];\n+ break;\n+ case 0x02:\n+ break;//nothing really\n+ case 0x03:\n+ xmin += levelW[l];\n+ break;\n+ default:\n+ throw new RuntimeException(\"unexpected quadrant\");\n+ }\n+ }\n+\n+ double width, height;\n+ if (level > 0) {\n+ width = levelW[level - 1];\n+ height = levelH[level - 1];\n+ } else {\n+ width = gridW;\n+ height = gridH;\n+ }\n+ return new RectangleImpl(xmin, xmin + width, ymin, ymin + height, ctx);\n+ }\n+\n+ private long fromBytes(byte b1, byte b2, byte b3, byte b4, byte b5, byte b6, byte b7, byte b8) {\n+ return ((long)b1 & 255L) << 56 | ((long)b2 & 255L) << 48 | ((long)b3 & 255L) << 40\n+ | ((long)b4 & 255L) << 32 | ((long)b5 & 255L) << 24 | ((long)b6 & 255L) << 16\n+ | ((long)b7 & 255L) << 8 | (long)b8 & 255L;\n+ }\n+\n+ private byte[] longToByteArray(long value) {\n+ byte[] result = new byte[8];\n+ for(int i = 7; i >= 0; --i) {\n+ result[i] = (byte)((int)(value & 255L));\n+ value >>= 8;\n+ }\n+ return result;\n+ }\n+\n+ private long longFromByteArray(byte[] bytes, int ofs) {\n+ assert bytes.length >= 8;\n+ return fromBytes(bytes[0+ofs], bytes[1+ofs], bytes[2+ofs], bytes[3+ofs],\n+ bytes[4+ofs], bytes[5+ofs], bytes[6+ofs], bytes[7+ofs]);\n+ }\n+\n+ /**\n+ * Used for debugging, this will print the bits of the cell\n+ */\n+ @Override\n+ public String toString() {\n+ String s = \"\";\n+ for(int i = 0; i < Long.numberOfLeadingZeros(term); i++) {\n+ s+='0';\n+ }\n+ if (term != 0)\n+ s += Long.toBinaryString(term);\n+ return s;\n+ }\n+ } // PackedQuadCell\n+\n+ protected class PrefixTreeIterator extends CellIterator {\n+ private Shape shape;\n+ private PackedQuadCell thisCell;\n+ private PackedQuadCell nextCell;\n+\n+ private short leaves;\n+ private short level;\n+ private final short maxLevels;\n+ private CellIterator pruneIter;\n+\n+ PrefixTreeIterator(Shape shape) {\n+ this.shape = shape;\n+ this.thisCell = ((PackedQuadCell)(getWorldCell())).nextCell(true);\n+ this.maxLevels = (short)thisCell.getMaxLevels();\n+ this.nextCell = null;\n+ }\n+\n+ @Override\n+ public boolean hasNext() {\n+ if (nextCell != null) {\n+ return true;\n+ }\n+ SpatialRelation rel;\n+ // loop until we're at the end of the quad tree or we hit a relation\n+ while (thisCell != null) {\n+ rel = thisCell.getShape().relate(shape);\n+ if (rel == SpatialRelation.DISJOINT) {\n+ thisCell = thisCell.nextCell(false);\n+ } else { // within || intersects || contains\n+ thisCell.setShapeRel(rel);\n+ nextCell = thisCell;\n+ if (rel == SpatialRelation.WITHIN) {\n+ thisCell.setLeaf();\n+ thisCell = thisCell.nextCell(false);\n+ } else { // intersects || contains\n+ level = (short) (thisCell.getLevel());\n+ if (level == maxLevels || pruned(rel)) {\n+ thisCell.setLeaf();\n+ if (shape instanceof Point) {\n+ 
thisCell.setShapeRel(SpatialRelation.WITHIN);\n+ thisCell = null;\n+ } else {\n+ thisCell = thisCell.nextCell(false);\n+ }\n+ break;\n+ }\n+ thisCell = thisCell.nextCell(true);\n+ }\n+ break;\n+ }\n+ }\n+ return nextCell != null;\n+ }\n+\n+ private boolean pruned(SpatialRelation rel) {\n+ if (rel == SpatialRelation.INTERSECTS && leafyPrune && level == maxLevels-1) {\n+ for (leaves=0, pruneIter=thisCell.getNextLevelCells(shape); pruneIter.hasNext(); pruneIter.next(), ++leaves);\n+ return leaves == 4;\n+ }\n+ return false;\n+ }\n+\n+ @Override\n+ public Cell next() {\n+ if (nextCell == null) {\n+ if (!hasNext()) {\n+ throw new NoSuchElementException();\n+ }\n+ }\n+ // overriding since this implementation sets thisCell in hasNext\n+ Cell temp = nextCell;\n+ nextCell = null;\n+ return temp;\n+ }\n+\n+ @Override\n+ public void remove() {\n+ //no-op\n+ }\n+ }\n+}",
"filename": "src/main/java/org/apache/lucene/spatial/prefix/tree/PackedQuadPrefixTree.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,313 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.spatial.prefix.tree;\n+\n+import com.spatial4j.core.context.SpatialContext;\n+import com.spatial4j.core.shape.Point;\n+import com.spatial4j.core.shape.Rectangle;\n+import com.spatial4j.core.shape.Shape;\n+import com.spatial4j.core.shape.SpatialRelation;\n+import org.apache.lucene.util.BytesRef;\n+\n+import java.io.PrintStream;\n+import java.text.NumberFormat;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.List;\n+import java.util.Locale;\n+\n+/**\n+ * A {@link SpatialPrefixTree} which uses a\n+ * <a href=\"http://en.wikipedia.org/wiki/Quadtree\">quad tree</a> in which an\n+ * indexed term will be generated for each cell, 'A', 'B', 'C', 'D'.\n+ *\n+ * @lucene.experimental\n+ *\n+ * NOTE: Will be removed upon commit of LUCENE-6422\n+ */\n+public class QuadPrefixTree extends LegacyPrefixTree {\n+\n+ /**\n+ * Factory for creating {@link QuadPrefixTree} instances with useful defaults\n+ */\n+ public static class Factory extends SpatialPrefixTreeFactory {\n+\n+ @Override\n+ protected int getLevelForDistance(double degrees) {\n+ QuadPrefixTree grid = new QuadPrefixTree(ctx, MAX_LEVELS_POSSIBLE);\n+ return grid.getLevelForDistance(degrees);\n+ }\n+\n+ @Override\n+ protected SpatialPrefixTree newSPT() {\n+ return new QuadPrefixTree(ctx,\n+ maxLevels != null ? 
maxLevels : MAX_LEVELS_POSSIBLE);\n+ }\n+ }\n+\n+ public static final int MAX_LEVELS_POSSIBLE = 50;//not really sure how big this should be\n+\n+ public static final int DEFAULT_MAX_LEVELS = 12;\n+ protected final double xmin;\n+ protected final double xmax;\n+ protected final double ymin;\n+ protected final double ymax;\n+ protected final double xmid;\n+ protected final double ymid;\n+\n+ protected final double gridW;\n+ public final double gridH;\n+\n+ final double[] levelW;\n+ final double[] levelH;\n+ final int[] levelS; // side\n+ final int[] levelN; // number\n+\n+ public QuadPrefixTree(\n+ SpatialContext ctx, Rectangle bounds, int maxLevels) {\n+ super(ctx, maxLevels);\n+ this.xmin = bounds.getMinX();\n+ this.xmax = bounds.getMaxX();\n+ this.ymin = bounds.getMinY();\n+ this.ymax = bounds.getMaxY();\n+\n+ levelW = new double[maxLevels];\n+ levelH = new double[maxLevels];\n+ levelS = new int[maxLevels];\n+ levelN = new int[maxLevels];\n+\n+ gridW = xmax - xmin;\n+ gridH = ymax - ymin;\n+ this.xmid = xmin + gridW/2.0;\n+ this.ymid = ymin + gridH/2.0;\n+ levelW[0] = gridW/2.0;\n+ levelH[0] = gridH/2.0;\n+ levelS[0] = 2;\n+ levelN[0] = 4;\n+\n+ for (int i = 1; i < levelW.length; i++) {\n+ levelW[i] = levelW[i - 1] / 2.0;\n+ levelH[i] = levelH[i - 1] / 2.0;\n+ levelS[i] = levelS[i - 1] * 2;\n+ levelN[i] = levelN[i - 1] * 4;\n+ }\n+ }\n+\n+ public QuadPrefixTree(SpatialContext ctx) {\n+ this(ctx, DEFAULT_MAX_LEVELS);\n+ }\n+\n+ public QuadPrefixTree(\n+ SpatialContext ctx, int maxLevels) {\n+ this(ctx, ctx.getWorldBounds(), maxLevels);\n+ }\n+\n+ @Override\n+ public Cell getWorldCell() {\n+ return new QuadCell(BytesRef.EMPTY_BYTES, 0, 0);\n+ }\n+\n+ public void printInfo(PrintStream out) {\n+ NumberFormat nf = NumberFormat.getNumberInstance(Locale.ROOT);\n+ nf.setMaximumFractionDigits(5);\n+ nf.setMinimumFractionDigits(5);\n+ nf.setMinimumIntegerDigits(3);\n+\n+ for (int i = 0; i < maxLevels; i++) {\n+ out.println(i + \"]\\t\" + nf.format(levelW[i]) + \"\\t\" + nf.format(levelH[i]) + \"\\t\" +\n+ levelS[i] + \"\\t\" + (levelS[i] * levelS[i]));\n+ }\n+ }\n+\n+ @Override\n+ public int getLevelForDistance(double dist) {\n+ if (dist == 0)//short circuit\n+ return maxLevels;\n+ for (int i = 0; i < maxLevels-1; i++) {\n+ //note: level[i] is actually a lookup for level i+1\n+ if(dist > levelW[i] && dist > levelH[i]) {\n+ return i+1;\n+ }\n+ }\n+ return maxLevels;\n+ }\n+\n+ @Override\n+ public Cell getCell(Point p, int level) {\n+ List<Cell> cells = new ArrayList<>(1);\n+ build(xmid, ymid, 0, cells, new BytesRef(maxLevels+1), ctx.makePoint(p.getX(),p.getY()), level);\n+ return cells.get(0);//note cells could be longer if p on edge\n+ }\n+\n+ private void build(\n+ double x,\n+ double y,\n+ int level,\n+ List<Cell> matches,\n+ BytesRef str,\n+ Shape shape,\n+ int maxLevel) {\n+ assert str.length == level;\n+ double w = levelW[level] / 2;\n+ double h = levelH[level] / 2;\n+\n+ // Z-Order\n+ // http://en.wikipedia.org/wiki/Z-order_%28curve%29\n+ checkBattenberg('A', x - w, y + h, level, matches, str, shape, maxLevel);\n+ checkBattenberg('B', x + w, y + h, level, matches, str, shape, maxLevel);\n+ checkBattenberg('C', x - w, y - h, level, matches, str, shape, maxLevel);\n+ checkBattenberg('D', x + w, y - h, level, matches, str, shape, maxLevel);\n+\n+ // possibly consider hilbert curve\n+ // http://en.wikipedia.org/wiki/Hilbert_curve\n+ // http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-indexing-with-Quadtrees-and-Hilbert-Curves\n+ // if we actually use the range property in the query, 
this could be useful\n+ }\n+\n+ protected void checkBattenberg(\n+ char c,\n+ double cx,\n+ double cy,\n+ int level,\n+ List<Cell> matches,\n+ BytesRef str,\n+ Shape shape,\n+ int maxLevel) {\n+ assert str.length == level;\n+ assert str.offset == 0;\n+ double w = levelW[level] / 2;\n+ double h = levelH[level] / 2;\n+\n+ int strlen = str.length;\n+ Rectangle rectangle = ctx.makeRectangle(cx - w, cx + w, cy - h, cy + h);\n+ SpatialRelation v = shape.relate(rectangle);\n+ if (SpatialRelation.CONTAINS == v) {\n+ str.bytes[str.length++] = (byte)c;//append\n+ //str.append(SpatialPrefixGrid.COVER);\n+ matches.add(new QuadCell(BytesRef.deepCopyOf(str), v.transpose()));\n+ } else if (SpatialRelation.DISJOINT == v) {\n+ // nothing\n+ } else { // SpatialRelation.WITHIN, SpatialRelation.INTERSECTS\n+ str.bytes[str.length++] = (byte)c;//append\n+\n+ int nextLevel = level+1;\n+ if (nextLevel >= maxLevel) {\n+ //str.append(SpatialPrefixGrid.INTERSECTS);\n+ matches.add(new QuadCell(BytesRef.deepCopyOf(str), v.transpose()));\n+ } else {\n+ build(cx, cy, nextLevel, matches, str, shape, maxLevel);\n+ }\n+ }\n+ str.length = strlen;\n+ }\n+\n+ protected class QuadCell extends LegacyCell {\n+\n+ QuadCell(byte[] bytes, int off, int len) {\n+ super(bytes, off, len);\n+ }\n+\n+ QuadCell(BytesRef str, SpatialRelation shapeRel) {\n+ this(str.bytes, str.offset, str.length);\n+ this.shapeRel = shapeRel;\n+ }\n+\n+ @Override\n+ protected QuadPrefixTree getGrid() { return QuadPrefixTree.this; }\n+\n+ @Override\n+ protected int getMaxLevels() { return maxLevels; }\n+\n+ @Override\n+ protected Collection<Cell> getSubCells() {\n+ BytesRef source = getTokenBytesNoLeaf(null);\n+\n+ List<Cell> cells = new ArrayList<>(4);\n+ cells.add(new QuadCell(concat(source, (byte)'A'), null));\n+ cells.add(new QuadCell(concat(source, (byte)'B'), null));\n+ cells.add(new QuadCell(concat(source, (byte)'C'), null));\n+ cells.add(new QuadCell(concat(source, (byte)'D'), null));\n+ return cells;\n+ }\n+\n+ protected BytesRef concat(BytesRef source, byte b) {\n+ //+2 for new char + potential leaf\n+ final byte[] buffer = Arrays.copyOfRange(source.bytes, source.offset, source.offset + source.length + 2);\n+ BytesRef target = new BytesRef(buffer);\n+ target.length = source.length;\n+ target.bytes[target.length++] = b;\n+ return target;\n+ }\n+\n+ @Override\n+ public int getSubCellsSize() {\n+ return 4;\n+ }\n+\n+ @Override\n+ protected QuadCell getSubCell(Point p) {\n+ return (QuadCell) QuadPrefixTree.this.getCell(p, getLevel() + 1);//not performant!\n+ }\n+\n+ @Override\n+ public Shape getShape() {\n+ if (shape == null)\n+ shape = makeShape();\n+ return shape;\n+ }\n+\n+ protected Rectangle makeShape() {\n+ BytesRef token = getTokenBytesNoLeaf(null);\n+ double xmin = QuadPrefixTree.this.xmin;\n+ double ymin = QuadPrefixTree.this.ymin;\n+\n+ for (int i = 0; i < token.length; i++) {\n+ byte c = token.bytes[token.offset + i];\n+ switch (c) {\n+ case 'A':\n+ ymin += levelH[i];\n+ break;\n+ case 'B':\n+ xmin += levelW[i];\n+ ymin += levelH[i];\n+ break;\n+ case 'C':\n+ break;//nothing really\n+ case 'D':\n+ xmin += levelW[i];\n+ break;\n+ default:\n+ throw new RuntimeException(\"unexpected char: \" + c);\n+ }\n+ }\n+ int len = token.length;\n+ double width, height;\n+ if (len > 0) {\n+ width = levelW[len-1];\n+ height = levelH[len-1];\n+ } else {\n+ width = gridW;\n+ height = gridH;\n+ }\n+ return ctx.makeRectangle(xmin, xmin + width, ymin, ymin + height);\n+ }\n+ }//QuadCell\n+}",
"filename": "src/main/java/org/apache/lucene/spatial/prefix/tree/QuadPrefixTree.java",
"status": "added"
},
{
"diff": "@@ -26,9 +26,11 @@\n import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;\n import org.apache.lucene.spatial.prefix.TermQueryPrefixTreeStrategy;\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n+import org.apache.lucene.spatial.prefix.tree.PackedQuadPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.SpatialPrefixTree;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.geo.SpatialStrategy;\n@@ -157,7 +159,13 @@ public GeoShapeFieldMapper build(BuilderContext context) {\n if (Names.TREE_GEOHASH.equals(tree)) {\n prefixTree = new GeohashPrefixTree(ShapeBuilder.SPATIAL_CONTEXT, getLevels(treeLevels, precisionInMeters, Defaults.GEOHASH_LEVELS, true));\n } else if (Names.TREE_QUADTREE.equals(tree)) {\n- prefixTree = new QuadPrefixTree(ShapeBuilder.SPATIAL_CONTEXT, getLevels(treeLevels, precisionInMeters, Defaults.QUADTREE_LEVELS, false));\n+ if (context.indexCreatedVersion().before(Version.V_1_6_0)) {\n+ prefixTree = new QuadPrefixTree(ShapeBuilder.SPATIAL_CONTEXT, getLevels(treeLevels, precisionInMeters, Defaults\n+ .QUADTREE_LEVELS, false));\n+ } else {\n+ prefixTree = new PackedQuadPrefixTree(ShapeBuilder.SPATIAL_CONTEXT, getLevels(treeLevels, precisionInMeters, Defaults\n+ .QUADTREE_LEVELS, false));\n+ }\n } else {\n throw new ElasticsearchIllegalArgumentException(\"Unknown prefix tree type [\" + tree + \"]\");\n }\n@@ -220,6 +228,7 @@ public GeoShapeFieldMapper(FieldMapper.Names names, SpatialPrefixTree tree, Stri\n super(names, 1, fieldType, false, null, null, null, null, null, indexSettings, multiFields, copyTo);\n this.recursiveStrategy = new RecursivePrefixTreeStrategy(tree, names.indexName());\n this.recursiveStrategy.setDistErrPct(distanceErrorPct);\n+ this.recursiveStrategy.setPruneLeafyBranches(false);\n this.termStrategy = new TermQueryPrefixTreeStrategy(tree, names.indexName());\n this.termStrategy.setDistErrPct(distanceErrorPct);\n this.defaultStrategy = resolveStrategy(defaultStrategyName);",
"filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java",
"status": "modified"
}
]
} |
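The PackedQuadCell javadoc in the diff above documents the compact cell encoding: the top 58 bits hold up to 29 quadrants at 2 bits per level, the next 5 bits hold the depth, and the lowest bit marks a leaf. The following standalone Java sketch only mirrors that documented layout to make the encoding concrete; it is not code from the PR, and the names (`PackedQuadSketch`, `appendQuad`, `quadAt`) are invented for illustration.

```java
// Minimal sketch of the bit layout described in PackedQuadCell:
// CCC...C (58 cell bits, 2 per quad) | DDDDD (5 depth bits) | L (leaf bit).
// Illustrative only; not part of the Lucene/Elasticsearch API.
public final class PackedQuadSketch {

    public static final int MAX_LEVELS = 29;

    /** Appends one quadrant (0..3) and increments the depth bits (assumes the leaf bit is not yet set). */
    public static long appendQuad(long term, int quad) {
        int level = level(term);
        if (quad < 0 || quad > 3 || level >= MAX_LEVELS) {
            throw new IllegalArgumentException("quad=" + quad + " level=" + level);
        }
        // place the 2 quadrant bits for the next level at the top of the long
        term |= ((long) quad) << (64 - ((level + 1) << 1));
        // increment the 5 depth bits, which sit just above the leaf bit
        return ((term >>> 1) + 1) << 1;
    }

    /** Depth of the cell: bits 1..5 of the term. */
    public static int level(long term) {
        return (int) ((term >>> 1) & 0x1FL);
    }

    /** Leaf flag: lowest bit of the term. */
    public static boolean isLeaf(long term) {
        return (term & 0x1L) == 0x1L;
    }

    /** Quadrant (0..3) stored at the given 1-based level. */
    public static int quadAt(long term, int level) {
        return (int) ((term >>> (64 - (level << 1))) & 0x3L);
    }

    public static void main(String[] args) {
        long term = 0x0L;                 // world cell
        term = appendQuad(term, 2);       // descend into quadrant 2
        term = appendQuad(term, 1);       // then quadrant 1
        term |= 0x1L;                     // mark as leaf
        System.out.println(level(term));                              // 2
        System.out.println(quadAt(term, 1) + "," + quadAt(term, 2));  // 2,1
        System.out.println(isLeaf(term));                             // true
    }
}
```

Running `main` prints the decoded depth, the two stored quadrants, and the leaf flag for a cell two levels deep, matching how `getLevel()`, `isLeaf` and the quadrant extraction work in the diff above.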
{
"body": "The effect is very similar to https://github.com/elasticsearch/elasticsearch/pull/5623\nWhen a document is indexed that does have a dynamic field then the indexing fails as expected. \nHowever, the type is created locally in the mapper service of the node but never updated on master, see https://github.com/brwe/elasticsearch/commit/340f5c5de207a802085f23aeb984dbd98349301a#diff-defbaaff93b959a2f9a93e7167f6f345R165\n\nThis can cause several problems:\n1. `_default_` mappings are applied locally and can potentially later not be updated anymore, see https://github.com/brwe/elasticsearch/commit/340f5c5de207a802085f23aeb984dbd98349301a#diff-ed65252ffbbf8656bf257a8cd6251420R68 (thanks @pkoenig10 for the test, https://github.com/elasticsearch/elasticsearch/issues/8423#issuecomment-64395503)\n2. Mappings that were created via `_default_` mappings when indexing a document can be lost, see https://github.com/brwe/elasticsearch/commit/340f5c5de207a802085f23aeb984dbd98349301a#diff-defbaaff93b959a2f9a93e7167f6f345R187\n",
"comments": [
{
"body": "Will this also be fixed on the 1.4 branch?\n",
"created_at": "2014-11-27T09:28:49Z"
},
{
"body": "Yes. \n\nTo fix this, there is two options:\n1. make sure the type is not created if indexing fails\n2. update the mapping on master even if indexing of doc failed\n\nOption 1 is rather tricky to implement and I do not see why the type should not be created in the mapping so I'll make a pr for option 2 shortly.\n",
"created_at": "2014-11-27T17:30:39Z"
},
{
"body": "I just checked 0992e7f, but https://gist.github.com/miccon/a4869fe04f9010015861 still fails.\n\nI suppose that when the mapping is created after the failed indexing request, the _all mapping is set to true (by default) and then the _all cannot by set to false anymore.\n\nIMHO not creating the type at all would be the cleaner solution, because even when the default mapping is set to strict creating the type can prevent you from updating the mapping later on (as the _all mapping is created automatically).\n",
"created_at": "2014-11-28T08:15:37Z"
},
{
"body": "I pushed the fix for the lost mappings but as @rjernst pointed out, not updating the mapping can only be done once https://github.com/elasticsearch/elasticsearch/issues/9365 is done so I'll leave this issue open.\n",
"created_at": "2015-02-24T16:57:49Z"
},
{
"body": "fixed by https://github.com/elastic/elasticsearch/pull/10634\n",
"created_at": "2015-05-22T12:27:16Z"
}
],
"number": 8650,
"title": "Mapping potentially lost with `\"dynamic\" : \"strict\"`, `_default_` mapping and failed document index"
} | {
"body": "This commit changes dynamic mappings updates so that they are synchronous on the\nentire cluster and their validity is checked by the master node. There are some\nimportant consequences of this commit:\n- a failing index request on a non-existing type does not implicitely create\n the type anymore\n- dynamic mappings updates cannot create inconsistent mappings on different\n shards\n- indexing requests that introduce new fields might induce latency spikes\n because of the overhead to update the mappings on the master node\n\nCloses #8688\nCloses #8650\n",
"number": 10634,
"review_comments": [
{
"body": "This seems like a bad case to get into? At least should it be warn or error level? This means we reapplied some updates from the translog, but the master rejected those updates...but we dont seem to do anything with the validation, so does that mean the mappings have already been updated in place? Also, could we limit the catch to just whatever exception would mean failed validation? Otherwise a bug as simple as an NPE in validate would get caught and just logged?\n",
"created_at": "2015-04-17T04:36:59Z"
},
{
"body": "This seems like a duplicate of the method above?\n",
"created_at": "2015-04-17T05:57:43Z"
},
{
"body": "Is illegal state the right exception to use? I would normally use this for inconsistent state (meaning we have broken code)?\n",
"created_at": "2015-04-17T06:05:21Z"
},
{
"body": "Use `== false` like we do in many other places?\n",
"created_at": "2015-04-17T06:41:40Z"
},
{
"body": "could you not merge this with the `if` above?\n",
"created_at": "2015-04-17T06:57:16Z"
},
{
"body": "removed newline?\n",
"created_at": "2015-04-17T07:09:06Z"
},
{
"body": "It does the same thing indeed, but on a Mapping object instead of a Mapper.\n",
"created_at": "2015-04-20T09:29:38Z"
},
{
"body": "I don't think we can? It might happen that the first condition is false and the second is true if you only got dynamic updates through `copy_to` directives (which needs to be handled differently since it can insert dynamic mappings at arbitrary places in the mappings).\n",
"created_at": "2015-04-20T09:32:10Z"
},
{
"body": "Yes, so that tests can check `\"foo\".equals(string)` instead of `\"foo\\n\".equals(string)`\n",
"created_at": "2015-04-20T09:32:55Z"
},
{
"body": "can this be an AtomicReference?\n",
"created_at": "2015-04-20T12:16:52Z"
},
{
"body": "waitForMappingUpdatePostRecovery defaults to 30s, give that the timeout now results in a failed shard (good!) I think we should be more lenient. Especially given the fact that local gateway recovery runs on full cluster restart where the master might be overloaded by things to do. How about something very conservative like 15m (which is what we use for the same update mapping - see RecoverySourceHandler#updateMappingOnMaster)\n",
"created_at": "2015-04-20T12:25:43Z"
}
],
"title": "Validate dynamic mappings updates on the master node."
} | {
"commits": [
{
"message": "Internal: Ensure that explanation descriptions are not null on serialization.\n\nAs requested on #10399"
},
{
"message": "Mappings: Validate dynamic mappings updates on the master node.\n\nThis commit changes dynamic mappings updates so that they are synchronous on the\nentire cluster and their validity is checked by the master node. There are some\nimportant consequences of this commit:\n - a failing index request on a non-existing type does not implicitely create\n the type anymore\n - dynamic mappings updates cannot create inconsistent mappings on different\n shards\n - indexing requests that introduce new fields might induce latency spikes\n because of the overhead to update the mappings on the master node\n\nClose #8688"
}
],
"files": [
{
"diff": "@@ -12,7 +12,7 @@\n indices.get_mapping:\n index: test_index\n \n- - match: { test_index.mappings.type_1.properties: {}}\n+ - match: { test_index.mappings.type_1: {}}\n \n ---\n \"Create index with settings\":\n@@ -106,7 +106,7 @@\n indices.get_mapping:\n index: test_index\n \n- - match: { test_index.mappings.type_1.properties: {}}\n+ - match: { test_index.mappings.type_1: {}}\n \n - do:\n indices.get_settings:",
"filename": "rest-api-spec/test/indices.create/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -21,10 +21,10 @@ setup:\n - do:\n indices.get_mapping: {}\n \n- - match: { test_1.mappings.type_1.properties: {}}\n- - match: { test_1.mappings.type_2.properties: {}}\n- - match: { test_2.mappings.type_2.properties: {}}\n- - match: { test_2.mappings.type_3.properties: {}}\n+ - match: { test_1.mappings.type_1: {}}\n+ - match: { test_1.mappings.type_2: {}}\n+ - match: { test_2.mappings.type_2: {}}\n+ - match: { test_2.mappings.type_3: {}}\n \n ---\n \"Get /{index}/_mapping\":\n@@ -33,8 +33,8 @@ setup:\n indices.get_mapping:\n index: test_1\n \n- - match: { test_1.mappings.type_1.properties: {}}\n- - match: { test_1.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_1: {}}\n+ - match: { test_1.mappings.type_2: {}}\n - is_false: test_2\n \n \n@@ -46,8 +46,8 @@ setup:\n index: test_1\n type: _all\n \n- - match: { test_1.mappings.type_1.properties: {}}\n- - match: { test_1.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_1: {}}\n+ - match: { test_1.mappings.type_2: {}}\n - is_false: test_2\n \n ---\n@@ -58,8 +58,8 @@ setup:\n index: test_1\n type: '*'\n \n- - match: { test_1.mappings.type_1.properties: {}}\n- - match: { test_1.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_1: {}}\n+ - match: { test_1.mappings.type_2: {}}\n - is_false: test_2\n \n ---\n@@ -70,7 +70,7 @@ setup:\n index: test_1\n type: type_1\n \n- - match: { test_1.mappings.type_1.properties: {}}\n+ - match: { test_1.mappings.type_1: {}}\n - is_false: test_1.mappings.type_2\n - is_false: test_2\n \n@@ -82,8 +82,8 @@ setup:\n index: test_1\n type: type_1,type_2\n \n- - match: { test_1.mappings.type_1.properties: {}}\n- - match: { test_1.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_1: {}}\n+ - match: { test_1.mappings.type_2: {}}\n - is_false: test_2\n \n ---\n@@ -94,7 +94,7 @@ setup:\n index: test_1\n type: '*2'\n \n- - match: { test_1.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_2: {}}\n - is_false: test_1.mappings.type_1\n - is_false: test_2\n \n@@ -105,8 +105,8 @@ setup:\n indices.get_mapping:\n type: type_2\n \n- - match: { test_1.mappings.type_2.properties: {}}\n- - match: { test_2.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_2: {}}\n+ - match: { test_2.mappings.type_2: {}}\n - is_false: test_1.mappings.type_1\n - is_false: test_2.mappings.type_3\n \n@@ -118,8 +118,8 @@ setup:\n index: _all\n type: type_2\n \n- - match: { test_1.mappings.type_2.properties: {}}\n- - match: { test_2.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_2: {}}\n+ - match: { test_2.mappings.type_2: {}}\n - is_false: test_1.mappings.type_1\n - is_false: test_2.mappings.type_3\n \n@@ -131,8 +131,8 @@ setup:\n index: '*'\n type: type_2\n \n- - match: { test_1.mappings.type_2.properties: {}}\n- - match: { test_2.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_2: {}}\n+ - match: { test_2.mappings.type_2: {}}\n - is_false: test_1.mappings.type_1\n - is_false: test_2.mappings.type_3\n \n@@ -144,8 +144,8 @@ setup:\n index: test_1,test_2\n type: type_2\n \n- - match: { test_1.mappings.type_2.properties: {}}\n- - match: { test_2.mappings.type_2.properties: {}}\n+ - match: { test_1.mappings.type_2: {}}\n+ - match: { test_2.mappings.type_2: {}}\n - is_false: test_2.mappings.type_3\n \n ---\n@@ -156,6 +156,6 @@ setup:\n index: '*2'\n type: type_2\n \n- - match: { test_2.mappings.type_2.properties: {}}\n+ - match: { test_2.mappings.type_2: {}}\n - is_false: test_1\n - is_false: 
test_2.mappings.type_3",
"filename": "rest-api-spec/test/indices.get_mapping/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -56,8 +56,8 @@ setup:\n indices.get_mapping:\n index: test-x*\n \n- - match: { test-xxx.mappings.type_1.properties: {}}\n- - match: { test-xxy.mappings.type_2.properties: {}}\n+ - match: { test-xxx.mappings.type_1: {}}\n+ - match: { test-xxy.mappings.type_2: {}}\n \n ---\n \"Get test-* with wildcard_expansion=all\":\n@@ -67,9 +67,9 @@ setup:\n index: test-x*\n expand_wildcards: all\n \n- - match: { test-xxx.mappings.type_1.properties: {}}\n- - match: { test-xxy.mappings.type_2.properties: {}}\n- - match: { test-xyy.mappings.type_3.properties: {}}\n+ - match: { test-xxx.mappings.type_1: {}}\n+ - match: { test-xxy.mappings.type_2: {}}\n+ - match: { test-xyy.mappings.type_3: {}}\n \n ---\n \"Get test-* with wildcard_expansion=open\":\n@@ -79,8 +79,8 @@ setup:\n index: test-x*\n expand_wildcards: open\n \n- - match: { test-xxx.mappings.type_1.properties: {}}\n- - match: { test-xxy.mappings.type_2.properties: {}}\n+ - match: { test-xxx.mappings.type_1: {}}\n+ - match: { test-xxy.mappings.type_2: {}}\n \n ---\n \"Get test-* with wildcard_expansion=closed\":\n@@ -90,7 +90,7 @@ setup:\n index: test-x*\n expand_wildcards: closed\n \n- - match: { test-xyy.mappings.type_3.properties: {}}\n+ - match: { test-xyy.mappings.type_3: {}}\n \n ---\n \"Get test-* with wildcard_expansion=none\":\n@@ -110,8 +110,8 @@ setup:\n index: test-x*\n expand_wildcards: open,closed\n \n- - match: { test-xxx.mappings.type_1.properties: {}}\n- - match: { test-xxy.mappings.type_2.properties: {}}\n- - match: { test-xyy.mappings.type_3.properties: {}}\n+ - match: { test-xxx.mappings.type_1: {}}\n+ - match: { test-xxy.mappings.type_2: {}}\n+ - match: { test-xyy.mappings.type_3: {}}\n \n ",
"filename": "rest-api-spec/test/indices.get_mapping/50_wildcard_expansion.yaml",
"status": "modified"
},
{
"diff": "@@ -19,14 +19,12 @@\n \n package org.elasticsearch.action.bulk;\n \n-import com.google.common.collect.Sets;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionWriteResponse;\n import org.elasticsearch.action.RoutingMissingException;\n-import org.elasticsearch.action.WriteFailureException;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.delete.DeleteResponse;\n import org.elasticsearch.action.index.IndexRequest;\n@@ -44,26 +42,27 @@\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.compress.CompressedString;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.engine.DocumentAlreadyExistsException;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.engine.VersionConflictEngineException;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.SourceToParse;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.river.RiverIndexName;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportRequestOptions;\n import org.elasticsearch.transport.TransportService;\n \n import java.util.Map;\n-import java.util.Set;\n \n /**\n * Performs the index operation.\n@@ -134,7 +133,6 @@ protected Tuple<BulkShardResponse, BulkShardRequest> shardOperationOnPrimary(Clu\n final BulkShardRequest request = shardRequest.request;\n IndexService indexService = indicesService.indexServiceSafe(request.index());\n IndexShard indexShard = indexService.shardSafe(shardRequest.shardId.id());\n- final Set<String> mappingTypesToUpdate = Sets.newHashSet();\n \n long[] preVersions = new long[request.items().length];\n VersionType[] preVersionTypes = new VersionType[request.items().length];\n@@ -145,33 +143,17 @@ protected Tuple<BulkShardResponse, BulkShardRequest> shardOperationOnPrimary(Clu\n preVersions[requestIndex] = indexRequest.version();\n preVersionTypes[requestIndex] = indexRequest.versionType();\n try {\n- try {\n- WriteResult result = shardIndexOperation(request, indexRequest, clusterState, indexShard, true);\n- // add the response\n- IndexResponse indexResponse = result.response();\n- setResponse(item, new BulkItemResponse(item.id(), indexRequest.opType().lowercase(), indexResponse));\n- if (result.mappingTypeToUpdate != null) {\n- mappingTypesToUpdate.add(result.mappingTypeToUpdate);\n- }\n- } catch (WriteFailureException e) {\n- if (e.getMappingTypeToUpdate() != null) {\n- mappingTypesToUpdate.add(e.getMappingTypeToUpdate());\n- }\n- throw e.getCause();\n- }\n+ WriteResult result = shardIndexOperation(request, indexRequest, clusterState, indexShard, indexService, true);\n+ // add the response\n+ IndexResponse indexResponse 
= result.response();\n+ setResponse(item, new BulkItemResponse(item.id(), indexRequest.opType().lowercase(), indexResponse));\n } catch (Throwable e) {\n // rethrow the failure if we are going to retry on primary and let parent failure to handle it\n if (retryPrimaryException(e)) {\n // restore updated versions...\n for (int j = 0; j < requestIndex; j++) {\n applyVersion(request.items()[j], preVersions[j], preVersionTypes[j]);\n }\n- for (String mappingTypeToUpdate : mappingTypesToUpdate) {\n- DocumentMapper docMapper = indexService.mapperService().documentMapper(mappingTypeToUpdate);\n- if (docMapper != null) {\n- mappingUpdatedAction.updateMappingOnMaster(indexService.index().name(), docMapper, indexService.indexUUID());\n- }\n- }\n throw (ElasticsearchException) e;\n }\n if (e instanceof ElasticsearchException && ((ElasticsearchException) e).status() == RestStatus.CONFLICT) {\n@@ -230,7 +212,7 @@ protected Tuple<BulkShardResponse, BulkShardRequest> shardOperationOnPrimary(Clu\n for (int updateAttemptsCount = 0; updateAttemptsCount <= updateRequest.retryOnConflict(); updateAttemptsCount++) {\n UpdateResult updateResult;\n try {\n- updateResult = shardUpdateOperation(clusterState, request, updateRequest, indexShard);\n+ updateResult = shardUpdateOperation(clusterState, request, updateRequest, indexShard, indexService);\n } catch (Throwable t) {\n updateResult = new UpdateResult(null, null, false, t, null);\n }\n@@ -250,9 +232,6 @@ protected Tuple<BulkShardResponse, BulkShardRequest> shardOperationOnPrimary(Clu\n }\n item = request.items()[requestIndex] = new BulkItemRequest(request.items()[requestIndex].id(), indexRequest);\n setResponse(item, new BulkItemResponse(item.id(), OP_TYPE_UPDATE, updateResponse));\n- if (result.mappingTypeToUpdate != null) {\n- mappingTypesToUpdate.add(result.mappingTypeToUpdate);\n- }\n break;\n case DELETE:\n DeleteResponse response = updateResult.writeResult.response();\n@@ -331,13 +310,6 @@ protected Tuple<BulkShardResponse, BulkShardRequest> shardOperationOnPrimary(Clu\n assert preVersionTypes[requestIndex] != null;\n }\n \n- for (String mappingTypToUpdate : mappingTypesToUpdate) {\n- DocumentMapper docMapper = indexService.mapperService().documentMapper(mappingTypToUpdate);\n- if (docMapper != null) {\n- mappingUpdatedAction.updateMappingOnMaster(indexService.index().name(), docMapper, indexService.indexUUID());\n- }\n- }\n-\n if (request.refresh()) {\n try {\n indexShard.refresh(\"refresh_flag_bulk\");\n@@ -363,12 +335,10 @@ private void setResponse(BulkItemRequest request, BulkItemResponse response) {\n static class WriteResult {\n \n final ActionWriteResponse response;\n- final String mappingTypeToUpdate;\n final Engine.IndexingOperation op;\n \n- WriteResult(ActionWriteResponse response, String mappingTypeToUpdate, Engine.IndexingOperation op) {\n+ WriteResult(ActionWriteResponse response, Engine.IndexingOperation op) {\n this.response = response;\n- this.mappingTypeToUpdate = mappingTypeToUpdate;\n this.op = op;\n }\n \n@@ -382,8 +352,25 @@ <T extends ActionWriteResponse> T response() {\n \n }\n \n+ private void applyMappingUpdate(IndexService indexService, String type, Mapping update) throws Throwable {\n+ // HACK: Rivers seem to have something specific that triggers potential\n+ // deadlocks when doing concurrent indexing. 
So for now they keep the\n+ // old behaviour of updating mappings locally first and then\n+ // asynchronously notifying the master\n+ // this can go away when rivers are removed\n+ final String indexName = indexService.index().name();\n+ final String indexUUID = indexService.indexUUID();\n+ if (indexName.equals(RiverIndexName.Conf.indexName(settings))) {\n+ indexService.mapperService().merge(type, new CompressedString(update.toBytes()), true);\n+ mappingUpdatedAction.updateMappingOnMaster(indexName, indexUUID, type, update, null);\n+ } else {\n+ mappingUpdatedAction.updateMappingOnMasterSynchronously(indexName, indexUUID, type, update);\n+ indexService.mapperService().merge(type, new CompressedString(update.toBytes()), true);\n+ }\n+ }\n+\n private WriteResult shardIndexOperation(BulkShardRequest request, IndexRequest indexRequest, ClusterState clusterState,\n- IndexShard indexShard, boolean processed) {\n+ IndexShard indexShard, IndexService indexService, boolean processed) throws Throwable {\n \n // validate, if routing is required, that we got routing\n MappingMetaData mappingMd = clusterState.metaData().index(request.index()).mappingOrDefault(indexRequest.type());\n@@ -400,45 +387,38 @@ private WriteResult shardIndexOperation(BulkShardRequest request, IndexRequest i\n SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.PRIMARY, indexRequest.source()).type(indexRequest.type()).id(indexRequest.id())\n .routing(indexRequest.routing()).parent(indexRequest.parent()).timestamp(indexRequest.timestamp()).ttl(indexRequest.ttl());\n \n- // update mapping on master if needed, we won't update changes to the same type, since once its changed, it won't have mappers added\n- String mappingTypeToUpdate = null;\n-\n long version;\n boolean created;\n Engine.IndexingOperation op;\n- try {\n- if (indexRequest.opType() == IndexRequest.OpType.INDEX) {\n- Engine.Index index = indexShard.prepareIndex(sourceToParse, indexRequest.version(), indexRequest.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates() || indexRequest.canHaveDuplicates());\n- if (index.parsedDoc().mappingsModified()) {\n- mappingTypeToUpdate = indexRequest.type();\n- }\n- indexShard.index(index);\n- version = index.version();\n- op = index;\n- created = index.created();\n- } else {\n- Engine.Create create = indexShard.prepareCreate(sourceToParse, indexRequest.version(), indexRequest.versionType(), Engine.Operation.Origin.PRIMARY,\n- request.canHaveDuplicates() || indexRequest.canHaveDuplicates(), indexRequest.autoGeneratedId());\n- if (create.parsedDoc().mappingsModified()) {\n- mappingTypeToUpdate = indexRequest.type();\n- }\n- indexShard.create(create);\n- version = create.version();\n- op = create;\n- created = true;\n+ if (indexRequest.opType() == IndexRequest.OpType.INDEX) {\n+ Engine.Index index = indexShard.prepareIndex(sourceToParse, indexRequest.version(), indexRequest.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates() || indexRequest.canHaveDuplicates());\n+ if (index.parsedDoc().dynamicMappingsUpdate() != null) {\n+ applyMappingUpdate(indexService, indexRequest.type(), index.parsedDoc().dynamicMappingsUpdate());\n+ }\n+ indexShard.index(index);\n+ version = index.version();\n+ op = index;\n+ created = index.created();\n+ } else {\n+ Engine.Create create = indexShard.prepareCreate(sourceToParse, indexRequest.version(), indexRequest.versionType(), Engine.Operation.Origin.PRIMARY,\n+ request.canHaveDuplicates() || indexRequest.canHaveDuplicates(), 
indexRequest.autoGeneratedId());\n+ if (create.parsedDoc().dynamicMappingsUpdate() != null) {\n+ applyMappingUpdate(indexService, indexRequest.type(), create.parsedDoc().dynamicMappingsUpdate());\n }\n- // update the version on request so it will happen on the replicas\n- indexRequest.versionType(indexRequest.versionType().versionTypeForReplicationAndRecovery());\n- indexRequest.version(version);\n- } catch (Throwable t) {\n- throw new WriteFailureException(t, mappingTypeToUpdate);\n+ indexShard.create(create);\n+ version = create.version();\n+ op = create;\n+ created = true;\n }\n+ // update the version on request so it will happen on the replicas\n+ indexRequest.versionType(indexRequest.versionType().versionTypeForReplicationAndRecovery());\n+ indexRequest.version(version);\n \n assert indexRequest.versionType().validateVersionForWrites(indexRequest.version());\n \n \n IndexResponse indexResponse = new IndexResponse(request.index(), indexRequest.type(), indexRequest.id(), version, created);\n- return new WriteResult(indexResponse, mappingTypeToUpdate, op);\n+ return new WriteResult(indexResponse, op);\n }\n \n private WriteResult shardDeleteOperation(BulkShardRequest request, DeleteRequest deleteRequest, IndexShard indexShard) {\n@@ -451,7 +431,7 @@ private WriteResult shardDeleteOperation(BulkShardRequest request, DeleteRequest\n assert deleteRequest.versionType().validateVersionForWrites(deleteRequest.version());\n \n DeleteResponse deleteResponse = new DeleteResponse(request.index(), deleteRequest.type(), deleteRequest.id(), delete.version(), delete.found());\n- return new WriteResult(deleteResponse, null, null);\n+ return new WriteResult(deleteResponse, null);\n }\n \n static class UpdateResult {\n@@ -507,14 +487,14 @@ <T extends ActionRequest> T request() {\n \n }\n \n- private UpdateResult shardUpdateOperation(ClusterState clusterState, BulkShardRequest bulkShardRequest, UpdateRequest updateRequest, IndexShard indexShard) {\n+ private UpdateResult shardUpdateOperation(ClusterState clusterState, BulkShardRequest bulkShardRequest, UpdateRequest updateRequest, IndexShard indexShard, IndexService indexService) {\n UpdateHelper.Result translate = updateHelper.prepare(updateRequest, indexShard);\n switch (translate.operation()) {\n case UPSERT:\n case INDEX:\n IndexRequest indexRequest = translate.action();\n try {\n- WriteResult result = shardIndexOperation(bulkShardRequest, indexRequest, clusterState, indexShard, false);\n+ WriteResult result = shardIndexOperation(bulkShardRequest, indexRequest, clusterState, indexShard, indexService, false);\n return new UpdateResult(translate, indexRequest, result);\n } catch (Throwable t) {\n t = ExceptionsHelper.unwrapCause(t);",
"filename": "src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java",
"status": "modified"
},
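The diff above for TransportShardBulkAction.java drops the deferred `mappingTypesToUpdate` bookkeeping and instead calls `applyMappingUpdate` before each write, so a dynamic mapping change is acknowledged by the master before the document that introduced it is indexed (the river index keeps the old local-first ordering as a special case). A minimal sketch of that ordering, assuming hypothetical `MasterClient` and `LocalMapperService` interfaces rather than the real Elasticsearch classes:

```java
// "Master first, then local" ordering for dynamic mapping updates.
// MasterClient and LocalMapperService are placeholders, not Elasticsearch types.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

interface MasterClient {
    /** Blocks until the master has validated and published the mapping update. */
    void updateMappingSynchronously(String index, String type, String mappingSource, long timeoutMillis)
            throws TimeoutException, InterruptedException;
}

interface LocalMapperService {
    /** Merges the update into the node-local, in-memory mappings. */
    void merge(String type, String mappingSource);
}

final class DynamicMappingApplier {
    private final MasterClient master;
    private final LocalMapperService local;

    DynamicMappingApplier(MasterClient master, LocalMapperService local) {
        this.master = master;
        this.local = local;
    }

    void apply(String index, String type, String update) throws Exception {
        // 1. Push the update to the master first, so a rejected mapping never
        //    produces documents that only this primary knows how to parse.
        master.updateMappingSynchronously(index, type, update, TimeUnit.SECONDS.toMillis(30));
        // 2. Only then merge locally and let the index operation proceed.
        local.merge(type, update);
    }
}
```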
{
"diff": "@@ -22,7 +22,6 @@\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.RoutingMissingException;\n-import org.elasticsearch.action.WriteFailureException;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction;\n@@ -38,15 +37,17 @@\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.compress.CompressedString;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.SourceToParse;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndexAlreadyExistsException;\n import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.river.RiverIndexName;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n@@ -166,6 +167,23 @@ protected ShardIterator shards(ClusterState clusterState, InternalRequest reques\n .indexShards(clusterService.state(), request.concreteIndex(), request.request().type(), request.request().id(), request.request().routing());\n }\n \n+ private void applyMappingUpdate(IndexService indexService, String type, Mapping update) throws Throwable {\n+ // HACK: Rivers seem to have something specific that triggers potential\n+ // deadlocks when doing concurrent indexing. 
So for now they keep the\n+ // old behaviour of updating mappings locally first and then\n+ // asynchronously notifying the master\n+ // this can go away when rivers are removed\n+ final String indexName = indexService.index().name();\n+ final String indexUUID = indexService.indexUUID();\n+ if (indexName.equals(RiverIndexName.Conf.indexName(settings))) {\n+ indexService.mapperService().merge(type, new CompressedString(update.toBytes()), true);\n+ mappingUpdatedAction.updateMappingOnMaster(indexName, indexUUID, type, update, null);\n+ } else {\n+ mappingUpdatedAction.updateMappingOnMasterSynchronously(indexName, indexUUID, type, update);\n+ indexService.mapperService().merge(type, new CompressedString(update.toBytes()), true);\n+ }\n+ }\n+\n @Override\n protected Tuple<IndexResponse, IndexRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable {\n final IndexRequest request = shardRequest.request;\n@@ -186,48 +204,38 @@ protected Tuple<IndexResponse, IndexRequest> shardOperationOnPrimary(ClusterStat\n long version;\n boolean created;\n \n- try {\n- if (request.opType() == IndexRequest.OpType.INDEX) {\n- Engine.Index index = indexShard.prepareIndex(sourceToParse, request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates());\n- if (index.parsedDoc().mappingsModified()) {\n- mappingUpdatedAction.updateMappingOnMaster(shardRequest.shardId.getIndex(), index.docMapper(), indexService.indexUUID());\n- }\n- indexShard.index(index);\n- version = index.version();\n- created = index.created();\n- } else {\n- Engine.Create create = indexShard.prepareCreate(sourceToParse,\n- request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates(), request.autoGeneratedId());\n- if (create.parsedDoc().mappingsModified()) {\n- mappingUpdatedAction.updateMappingOnMaster(shardRequest.shardId.getIndex(), create.docMapper(), indexService.indexUUID());\n- }\n- indexShard.create(create);\n- version = create.version();\n- created = true;\n+ if (request.opType() == IndexRequest.OpType.INDEX) {\n+ Engine.Index index = indexShard.prepareIndex(sourceToParse, request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates());\n+ if (index.parsedDoc().dynamicMappingsUpdate() != null) {\n+ applyMappingUpdate(indexService, request.type(), index.parsedDoc().dynamicMappingsUpdate());\n }\n- if (request.refresh()) {\n- try {\n- indexShard.refresh(\"refresh_flag_index\");\n- } catch (Throwable e) {\n- // ignore\n- }\n+ indexShard.index(index);\n+ version = index.version();\n+ created = index.created();\n+ } else {\n+ Engine.Create create = indexShard.prepareCreate(sourceToParse,\n+ request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates(), request.autoGeneratedId());\n+ if (create.parsedDoc().dynamicMappingsUpdate() != null) {\n+ applyMappingUpdate(indexService, request.type(), create.parsedDoc().dynamicMappingsUpdate());\n }\n-\n- // update the version on the request, so it will be used for the replicas\n- request.version(version);\n- request.versionType(request.versionType().versionTypeForReplicationAndRecovery());\n-\n- assert request.versionType().validateVersionForWrites(request.version());\n- return new Tuple<>(new IndexResponse(shardRequest.shardId.getIndex(), request.type(), request.id(), version, created), shardRequest.request);\n- } catch (WriteFailureException e) {\n- if (e.getMappingTypeToUpdate() != 
null){\n- DocumentMapper docMapper = indexService.mapperService().documentMapper(e.getMappingTypeToUpdate());\n- if (docMapper != null) {\n- mappingUpdatedAction.updateMappingOnMaster(indexService.index().name(), docMapper, indexService.indexUUID());\n- }\n+ indexShard.create(create);\n+ version = create.version();\n+ created = true;\n+ }\n+ if (request.refresh()) {\n+ try {\n+ indexShard.refresh(\"refresh_flag_index\");\n+ } catch (Throwable e) {\n+ // ignore\n }\n- throw e.getCause();\n }\n+\n+ // update the version on the request, so it will be used for the replicas\n+ request.version(version);\n+ request.versionType(request.versionType().versionTypeForReplicationAndRecovery());\n+\n+ assert request.versionType().validateVersionForWrites(request.version());\n+ return new Tuple<>(new IndexResponse(shardRequest.shardId.getIndex(), request.type(), request.id(), version, created), shardRequest.request);\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/action/index/TransportIndexAction.java",
"status": "modified"
},
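The same inversion shows up in the TransportIndexAction.java diff: because the master acknowledges the mapping update before `indexShard.index(...)` runs, the old `WriteFailureException` carrier, which existed only to keep the pending mapping type alive across a failure, can be removed. A before/after sketch under those assumptions, with every type reduced to a placeholder:

```java
// Why the exception carrier is no longer needed: only the ordering matters here.
final class MappingUpdateOrdering {

    // Old shape: index first, remember the type whose mapping changed, and make
    // sure that bookkeeping survives a failure so it can still reach the master.
    static void oldOrdering(Shard shard, Master master, Doc doc) {
        String pendingType = null;
        try {
            ParsedDoc parsed = shard.parse(doc);
            if (parsed.introducedNewFields()) {
                pendingType = doc.type(); // must not be lost if write(...) throws
            }
            shard.write(parsed);
        } finally {
            if (pendingType != null) {
                master.notifyMappingChangedAsync(pendingType);
            }
        }
    }

    // New shape: get the master's ack first, then write; a plain exception is
    // enough because there is nothing left to flush on failure.
    static void newOrdering(Shard shard, Master master, Doc doc) throws Exception {
        ParsedDoc parsed = shard.parse(doc);
        if (parsed.introducedNewFields()) {
            master.ackMappingChange(doc.type()); // blocks until published
        }
        shard.write(parsed);
    }

    interface Shard { ParsedDoc parse(Doc d); void write(ParsedDoc p); }
    interface Master { void notifyMappingChangedAsync(String type); void ackMappingChange(String type) throws Exception; }
    interface ParsedDoc { boolean introducedNewFields(); }
    interface Doc { String type(); }
}
```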
{
"diff": "@@ -19,9 +19,10 @@\n \n package org.elasticsearch.cluster.action.index;\n \n-import com.google.common.collect.Lists;\n-import com.google.common.collect.Maps;\n+import com.google.common.collect.ImmutableMap;\n+\n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.ActionResponse;\n@@ -37,7 +38,6 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataMappingService;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.compress.CompressedString;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -46,45 +46,44 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n import java.io.IOException;\n-import java.util.Collections;\n-import java.util.Iterator;\n-import java.util.List;\n-import java.util.Map;\n import java.util.concurrent.BlockingQueue;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n-import java.util.concurrent.atomic.AtomicLong;\n+import java.util.concurrent.TimeoutException;\n \n /**\n * Called by shards in the cluster when their mapping was dynamically updated and it needs to be updated\n * in the cluster state meta data (and broadcast to all members).\n */\n public class MappingUpdatedAction extends TransportMasterNodeOperationAction<MappingUpdatedAction.MappingUpdatedRequest, MappingUpdatedAction.MappingUpdatedResponse> {\n \n- public static final String INDICES_MAPPING_ADDITIONAL_MAPPING_CHANGE_TIME = \"indices.mapping.additional_mapping_change_time\";\n+ public static final String INDICES_MAPPING_DYNAMIC_TIMEOUT = \"indices.mapping.dynamic_timeout\";\n public static final String ACTION_NAME = \"internal:cluster/mapping_updated\";\n \n- private final AtomicLong mappingUpdateOrderGen = new AtomicLong();\n private final MetaDataMappingService metaDataMappingService;\n \n private volatile MasterMappingUpdater masterMappingUpdater;\n \n- private volatile TimeValue additionalMappingChangeTime;\n+ private volatile TimeValue dynamicMappingUpdateTimeout;\n \n class ApplySettings implements NodeSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n- final TimeValue current = MappingUpdatedAction.this.additionalMappingChangeTime;\n- final TimeValue newValue = settings.getAsTime(INDICES_MAPPING_ADDITIONAL_MAPPING_CHANGE_TIME, current);\n+ TimeValue current = MappingUpdatedAction.this.dynamicMappingUpdateTimeout;\n+ TimeValue newValue = settings.getAsTime(INDICES_MAPPING_DYNAMIC_TIMEOUT, current);\n if (!current.equals(newValue)) {\n- logger.info(\"updating \" + 
INDICES_MAPPING_ADDITIONAL_MAPPING_CHANGE_TIME + \" from [{}] to [{}]\", current, newValue);\n- MappingUpdatedAction.this.additionalMappingChangeTime = newValue;\n+ logger.info(\"updating \" + INDICES_MAPPING_DYNAMIC_TIMEOUT + \" from [{}] to [{}]\", current, newValue);\n+ MappingUpdatedAction.this.dynamicMappingUpdateTimeout = newValue;\n }\n }\n }\n@@ -94,8 +93,7 @@ public MappingUpdatedAction(Settings settings, TransportService transportService\n MetaDataMappingService metaDataMappingService, NodeSettingsService nodeSettingsService, ActionFilters actionFilters) {\n super(settings, ACTION_NAME, transportService, clusterService, threadPool, actionFilters);\n this.metaDataMappingService = metaDataMappingService;\n- // this setting should probably always be 0, just add the option to wait for more changes within a time window\n- this.additionalMappingChangeTime = settings.getAsTime(INDICES_MAPPING_ADDITIONAL_MAPPING_CHANGE_TIME, TimeValue.timeValueMillis(0));\n+ this.dynamicMappingUpdateTimeout = settings.getAsTime(INDICES_MAPPING_DYNAMIC_TIMEOUT, TimeValue.timeValueSeconds(30));\n nodeSettingsService.addListener(new ApplySettings());\n }\n \n@@ -109,13 +107,58 @@ public void stop() {\n this.masterMappingUpdater = null;\n }\n \n- public void updateMappingOnMaster(String index, DocumentMapper documentMapper, String indexUUID) {\n- updateMappingOnMaster(index, documentMapper, indexUUID, null);\n+ public void updateMappingOnMaster(String index, String indexUUID, String type, Mapping mappingUpdate, MappingUpdateListener listener) {\n+ if (type.equals(MapperService.DEFAULT_MAPPING)) {\n+ throw new ElasticsearchIllegalArgumentException(\"_default_ mapping should not be updated\");\n+ }\n+ try {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject();\n+ mappingUpdate.toXContent(builder, new ToXContent.MapParams(ImmutableMap.<String, String>of()));\n+ final CompressedString mappingSource = new CompressedString(builder.endObject().bytes());\n+ masterMappingUpdater.add(new MappingChange(index, indexUUID, type, mappingSource, listener));\n+ } catch (IOException bogus) {\n+ throw new AssertionError(\"Cannot happen\", bogus);\n+ }\n }\n \n- public void updateMappingOnMaster(String index, DocumentMapper documentMapper, String indexUUID, MappingUpdateListener listener) {\n- assert !documentMapper.type().equals(MapperService.DEFAULT_MAPPING) : \"_default_ mapping should not be updated\";\n- masterMappingUpdater.add(new MappingChange(documentMapper, index, indexUUID, listener));\n+ /**\n+ * Same as {@link #updateMappingOnMasterSynchronously(String, String, String, Mapping, TimeValue)}\n+ * using the default timeout.\n+ */\n+ public void updateMappingOnMasterSynchronously(String index, String indexUUID, String type, Mapping mappingUpdate) throws Throwable {\n+ updateMappingOnMasterSynchronously(index, indexUUID, type, mappingUpdate, dynamicMappingUpdateTimeout);\n+ }\n+\n+ /**\n+ * Update mappings synchronously on the master node, waiting for at most\n+ * {@code timeout}. 
When this method returns successfully mappings have\n+ * been applied to the master node and propagated to data nodes.\n+ */\n+ public void updateMappingOnMasterSynchronously(String index, String indexUUID, String type, Mapping mappingUpdate, TimeValue timeout) throws Throwable {\n+ final CountDownLatch latch = new CountDownLatch(1);\n+ final Throwable[] cause = new Throwable[1];\n+ final MappingUpdateListener listener = new MappingUpdateListener() {\n+\n+ @Override\n+ public void onMappingUpdate() {\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ cause[0] = t;\n+ latch.countDown();\n+ }\n+\n+ };\n+\n+ updateMappingOnMaster(index, indexUUID, type, mappingUpdate, listener);\n+ if (!latch.await(timeout.getMillis(), TimeUnit.MILLISECONDS)) {\n+ throw new TimeoutException(\"Time out while waiting for the master node to validate a mapping update for type [\" + type + \"]\");\n+ }\n+ if (cause[0] != null) {\n+ throw cause[0];\n+ }\n }\n \n @Override\n@@ -142,7 +185,7 @@ protected MappingUpdatedResponse newResponse() {\n \n @Override\n protected void masterOperation(final MappingUpdatedRequest request, final ClusterState state, final ActionListener<MappingUpdatedResponse> listener) throws ElasticsearchException {\n- metaDataMappingService.updateMapping(request.index(), request.indexUUID(), request.type(), request.mappingSource(), request.order, request.nodeId, new ActionListener<ClusterStateUpdateResponse>() {\n+ metaDataMappingService.updateMapping(request.index(), request.indexUUID(), request.type(), request.mappingSource(), request.nodeId, new ActionListener<ClusterStateUpdateResponse>() {\n @Override\n public void onResponse(ClusterStateUpdateResponse response) {\n listener.onResponse(new MappingUpdatedResponse());\n@@ -174,18 +217,16 @@ public static class MappingUpdatedRequest extends MasterNodeOperationRequest<Map\n private String indexUUID = IndexMetaData.INDEX_UUID_NA_VALUE;\n private String type;\n private CompressedString mappingSource;\n- private long order = -1; // -1 means not set...\n private String nodeId = null; // null means not set\n \n MappingUpdatedRequest() {\n }\n \n- public MappingUpdatedRequest(String index, String indexUUID, String type, CompressedString mappingSource, long order, String nodeId) {\n+ public MappingUpdatedRequest(String index, String indexUUID, String type, CompressedString mappingSource, String nodeId) {\n this.index = index;\n this.indexUUID = indexUUID;\n this.type = type;\n this.mappingSource = mappingSource;\n- this.order = order;\n this.nodeId = nodeId;\n }\n \n@@ -215,13 +256,6 @@ public CompressedString mappingSource() {\n return mappingSource;\n }\n \n- /**\n- * Returns -1 if not set...\n- */\n- public long order() {\n- return this.order;\n- }\n-\n /**\n * Returns null for not set.\n */\n@@ -241,7 +275,6 @@ public void readFrom(StreamInput in) throws IOException {\n type = in.readString();\n mappingSource = CompressedString.readCompressedString(in);\n indexUUID = in.readString();\n- order = in.readLong();\n nodeId = in.readOptionalString();\n }\n \n@@ -252,7 +285,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(type);\n mappingSource.writeTo(out);\n out.writeString(indexUUID);\n- out.writeLong(order);\n out.writeOptionalString(nodeId);\n }\n \n@@ -263,15 +295,17 @@ public String toString() {\n }\n \n private static class MappingChange {\n- public final DocumentMapper documentMapper;\n public final String index;\n public final String indexUUID;\n+ public final String type;\n+ 
public final CompressedString mappingSource;\n public final MappingUpdateListener listener;\n \n- MappingChange(DocumentMapper documentMapper, String index, String indexUUID, MappingUpdateListener listener) {\n- this.documentMapper = documentMapper;\n+ MappingChange(String index, String indexUUID, String type, CompressedString mappingSource, MappingUpdateListener listener) {\n this.index = index;\n this.indexUUID = indexUUID;\n+ this.type = type;\n+ this.mappingSource = mappingSource;\n this.listener = listener;\n }\n }\n@@ -313,142 +347,59 @@ public void close() {\n this.interrupt();\n }\n \n- class UpdateKey {\n- public final String indexUUID;\n- public final String type;\n-\n- UpdateKey(String indexUUID, String type) {\n- this.indexUUID = indexUUID;\n- this.type = type;\n- }\n-\n- @Override\n- public boolean equals(Object o) {\n- if (this == o) {\n- return true;\n- }\n- if (o == null || getClass() != o.getClass()) {\n- return false;\n- }\n-\n- UpdateKey updateKey = (UpdateKey) o;\n-\n- if (!indexUUID.equals(updateKey.indexUUID)) {\n- return false;\n- }\n- if (!type.equals(updateKey.type)) {\n- return false;\n- }\n-\n- return true;\n- }\n-\n- @Override\n- public int hashCode() {\n- int result = indexUUID.hashCode();\n- result = 31 * result + type.hashCode();\n- return result;\n- }\n- }\n-\n- class UpdateValue {\n- public final MappingChange mainChange;\n- public final List<MappingUpdateListener> listeners = Lists.newArrayList();\n-\n- UpdateValue(MappingChange mainChange) {\n- this.mainChange = mainChange;\n- }\n-\n- public void notifyListeners(@Nullable Throwable t) {\n- for (MappingUpdateListener listener : listeners) {\n- try {\n- if (t == null) {\n- listener.onMappingUpdate();\n- } else {\n- listener.onFailure(t);\n- }\n- } catch (Throwable lisFailure) {\n- logger.warn(\"unexpected failure on mapping update listener callback [{}]\", lisFailure, listener);\n- }\n- }\n- }\n- }\n-\n @Override\n public void run() {\n- Map<UpdateKey, UpdateValue> pendingUpdates = Maps.newHashMap();\n while (running) {\n+ MappingUpdateListener listener = null;\n try {\n- MappingChange polledChange = queue.poll(10, TimeUnit.MINUTES);\n- if (polledChange == null) {\n+ final MappingChange change = queue.poll(10, TimeUnit.MINUTES);\n+ if (change == null) {\n continue;\n }\n- List<MappingChange> changes = Lists.newArrayList(polledChange);\n- if (additionalMappingChangeTime.millis() > 0) {\n- Thread.sleep(additionalMappingChangeTime.millis());\n- }\n- queue.drainTo(changes);\n- Collections.reverse(changes); // process then in newest one to oldest\n- // go over and add to pending updates map\n- for (MappingChange change : changes) {\n- UpdateKey key = new UpdateKey(change.indexUUID, change.documentMapper.type());\n- UpdateValue updateValue = pendingUpdates.get(key);\n- if (updateValue == null) {\n- updateValue = new UpdateValue(change);\n- pendingUpdates.put(key, updateValue);\n- }\n+ listener = change.listener;\n+\n+ final MappingUpdatedAction.MappingUpdatedRequest mappingRequest;\n+ try {\n+ DiscoveryNode node = clusterService.localNode();\n+ mappingRequest = new MappingUpdatedAction.MappingUpdatedRequest(\n+ change.index, change.indexUUID, change.type, change.mappingSource, node != null ? 
node.id() : null\n+ );\n+ } catch (Throwable t) {\n+ logger.warn(\"Failed to update master on updated mapping for index [\" + change.index + \"], type [\" + change.type + \"]\", t);\n if (change.listener != null) {\n- updateValue.listeners.add(change.listener);\n+ change.listener.onFailure(t);\n }\n+ continue;\n }\n-\n- for (Iterator<UpdateValue> iterator = pendingUpdates.values().iterator(); iterator.hasNext(); ) {\n- final UpdateValue updateValue = iterator.next();\n- iterator.remove();\n- MappingChange change = updateValue.mainChange;\n-\n- final MappingUpdatedAction.MappingUpdatedRequest mappingRequest;\n- try {\n- // we generate the order id before we get the mapping to send and refresh the source, so\n- // if 2 happen concurrently, we know that the later order will include the previous one\n- long orderId = mappingUpdateOrderGen.incrementAndGet();\n- change.documentMapper.refreshSource();\n- DiscoveryNode node = clusterService.localNode();\n- mappingRequest = new MappingUpdatedAction.MappingUpdatedRequest(\n- change.index, change.indexUUID, change.documentMapper.type(), change.documentMapper.mappingSource(), orderId, node != null ? node.id() : null\n- );\n- } catch (Throwable t) {\n- logger.warn(\"Failed to update master on updated mapping for index [\" + change.index + \"], type [\" + change.documentMapper.type() + \"]\", t);\n- updateValue.notifyListeners(t);\n- continue;\n- }\n- logger.trace(\"sending mapping updated to master: {}\", mappingRequest);\n- execute(mappingRequest, new ActionListener<MappingUpdatedAction.MappingUpdatedResponse>() {\n- @Override\n- public void onResponse(MappingUpdatedAction.MappingUpdatedResponse mappingUpdatedResponse) {\n- logger.debug(\"successfully updated master with mapping update: {}\", mappingRequest);\n- updateValue.notifyListeners(null);\n+ logger.trace(\"sending mapping updated to master: {}\", mappingRequest);\n+ execute(mappingRequest, new ActionListener<MappingUpdatedAction.MappingUpdatedResponse>() {\n+ @Override\n+ public void onResponse(MappingUpdatedAction.MappingUpdatedResponse mappingUpdatedResponse) {\n+ logger.debug(\"successfully updated master with mapping update: {}\", mappingRequest);\n+ if (change.listener != null) {\n+ change.listener.onMappingUpdate();\n }\n+ }\n \n- @Override\n- public void onFailure(Throwable e) {\n- logger.warn(\"failed to update master on updated mapping for {}\", e, mappingRequest);\n- updateValue.notifyListeners(e);\n+ @Override\n+ public void onFailure(Throwable e) {\n+ logger.warn(\"failed to update master on updated mapping for {}\", e, mappingRequest);\n+ if (change.listener != null) {\n+ change.listener.onFailure(e);\n }\n- });\n-\n- }\n+ }\n+ });\n } catch (Throwable t) {\n+ if (listener != null) {\n+ // even if the failure is expected, eg. if we got interrupted,\n+ // we need to notify the listener as there might be a latch\n+ // waiting for it to be called\n+ listener.onFailure(t);\n+ }\n if (t instanceof InterruptedException && !running) {\n // all is well, we are shutting down\n } else {\n- logger.warn(\"failed to process mapping updates\", t);\n- }\n- // cleanup all pending update callbacks that were not processed due to a global failure...\n- for (Iterator<Map.Entry<UpdateKey, UpdateValue>> iterator = pendingUpdates.entrySet().iterator(); iterator.hasNext(); ) {\n- Map.Entry<UpdateKey, UpdateValue> entry = iterator.next();\n- iterator.remove();\n- entry.getValue().notifyListeners(t);\n+ logger.warn(\"failed to process mapping update\", t);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/cluster/action/index/MappingUpdatedAction.java",
"status": "modified"
},
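The MappingUpdatedAction.java diff adds `updateMappingOnMasterSynchronously`, which wraps the existing listener-based call in a `CountDownLatch` plus timeout and rethrows any failure reported by the listener. A self-contained sketch of that blocking wrapper, assuming a placeholder `AsyncMappingService` in place of the real transport action:

```java
// Turning an async listener API into a bounded blocking call, mirroring the
// shape of updateMappingOnMasterSynchronously in the diff above.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicReference;

interface AsyncMappingService {
    interface Listener { void onSuccess(); void onFailure(Throwable t); }
    void updateMapping(String index, String type, String mappingSource, Listener listener);
}

final class SyncMappingUpdater {
    private final AsyncMappingService service;

    SyncMappingUpdater(AsyncMappingService service) {
        this.service = service;
    }

    void updateSynchronously(String index, String type, String mappingSource, long timeoutMillis) throws Throwable {
        final CountDownLatch latch = new CountDownLatch(1);
        final AtomicReference<Throwable> failure = new AtomicReference<>();
        service.updateMapping(index, type, mappingSource, new AsyncMappingService.Listener() {
            @Override public void onSuccess() { latch.countDown(); }
            @Override public void onFailure(Throwable t) { failure.set(t); latch.countDown(); }
        });
        if (!latch.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new TimeoutException("timed out waiting for mapping update of type [" + type + "]");
        }
        if (failure.get() != null) {
            throw failure.get();
        }
    }
}
```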
{
"diff": "@@ -43,9 +43,7 @@\n import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.InvalidTypeNameException;\n-import org.elasticsearch.indices.TypeMissingException;\n import org.elasticsearch.percolator.PercolatorService;\n-import org.elasticsearch.threadpool.ThreadPool;\n \n import java.util.*;\n \n@@ -57,7 +55,6 @@\n */\n public class MetaDataMappingService extends AbstractComponent {\n \n- private final ThreadPool threadPool;\n private final ClusterService clusterService;\n private final IndicesService indicesService;\n \n@@ -68,9 +65,8 @@ public class MetaDataMappingService extends AbstractComponent {\n private long refreshOrUpdateProcessedInsertOrder;\n \n @Inject\n- public MetaDataMappingService(Settings settings, ThreadPool threadPool, ClusterService clusterService, IndicesService indicesService) {\n+ public MetaDataMappingService(Settings settings, ClusterService clusterService, IndicesService indicesService) {\n super(settings);\n- this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n }\n@@ -97,15 +93,13 @@ static class RefreshTask extends MappingTask {\n static class UpdateTask extends MappingTask {\n final String type;\n final CompressedString mappingSource;\n- final long order; // -1 for unknown\n final String nodeId; // null fr unknown\n final ActionListener<ClusterStateUpdateResponse> listener;\n \n- UpdateTask(String index, String indexUUID, String type, CompressedString mappingSource, long order, String nodeId, ActionListener<ClusterStateUpdateResponse> listener) {\n+ UpdateTask(String index, String indexUUID, String type, CompressedString mappingSource, String nodeId, ActionListener<ClusterStateUpdateResponse> listener) {\n super(index, indexUUID);\n this.type = type;\n this.mappingSource = mappingSource;\n- this.order = order;\n this.nodeId = nodeId;\n this.listener = listener;\n }\n@@ -176,35 +170,7 @@ Tuple<ClusterState, List<MappingTask>> executeRefreshOrUpdate(final ClusterState\n logger.debug(\"[{}] ignoring task [{}] - index meta data doesn't match task uuid\", index, task);\n continue;\n }\n- boolean add = true;\n- // if its an update task, make sure we only process the latest ordered one per node\n- if (task instanceof UpdateTask) {\n- UpdateTask uTask = (UpdateTask) task;\n- // we can only do something to compare if we have the order && node\n- if (uTask.order != -1 && uTask.nodeId != null) {\n- for (int i = 0; i < tasks.size(); i++) {\n- MappingTask existing = tasks.get(i);\n- if (existing instanceof UpdateTask) {\n- UpdateTask eTask = (UpdateTask) existing;\n- if (eTask.type.equals(uTask.type)) {\n- // if we have the order, and the node id, then we can compare, and replace if applicable\n- if (eTask.order != -1 && eTask.nodeId != null) {\n- if (eTask.nodeId.equals(uTask.nodeId) && uTask.order > eTask.order) {\n- // a newer update task, we can replace so we execute it one!\n- tasks.set(i, uTask);\n- add = false;\n- break;\n- }\n- }\n- }\n- }\n- }\n- }\n- }\n-\n- if (add) {\n- tasks.add(task);\n- }\n+ tasks.add(task);\n }\n \n // construct the actual index if needed, and make sure the relevant mappings are there\n@@ -365,13 +331,13 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n });\n }\n \n- public void updateMapping(final String index, final String indexUUID, final String type, final CompressedString mappingSource, final long order, final String nodeId, final 
ActionListener<ClusterStateUpdateResponse> listener) {\n+ public void updateMapping(final String index, final String indexUUID, final String type, final CompressedString mappingSource, final String nodeId, final ActionListener<ClusterStateUpdateResponse> listener) {\n final long insertOrder;\n synchronized (refreshOrUpdateMutex) {\n insertOrder = ++refreshOrUpdateInsertOrder;\n- refreshOrUpdateQueue.add(new UpdateTask(index, indexUUID, type, mappingSource, order, nodeId, listener));\n+ refreshOrUpdateQueue.add(new UpdateTask(index, indexUUID, type, mappingSource, nodeId, listener));\n }\n- clusterService.submitStateUpdateTask(\"update-mapping [\" + index + \"][\" + type + \"] / node [\" + nodeId + \"], order [\" + order + \"]\", Priority.HIGH, new ProcessedClusterStateUpdateTask() {\n+ clusterService.submitStateUpdateTask(\"update-mapping [\" + index + \"][\" + type + \"] / node [\" + nodeId + \"]\", Priority.HIGH, new ProcessedClusterStateUpdateTask() {\n private volatile List<MappingTask> allTasks;\n \n @Override\n@@ -398,7 +364,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n try {\n uTask.listener.onResponse(response);\n } catch (Throwable t) {\n- logger.debug(\"failed ot ping back on response of mapping processing for task [{}]\", t, uTask.listener);\n+ logger.debug(\"failed to ping back on response of mapping processing for task [{}]\", t, uTask.listener);\n }\n }\n }\n@@ -457,7 +423,7 @@ public ClusterState execute(final ClusterState currentState) throws Exception {\n newMapper = indexService.mapperService().parse(request.type(), new CompressedString(request.source()), existingMapper == null);\n if (existingMapper != null) {\n // first, simulate\n- DocumentMapper.MergeResult mergeResult = existingMapper.merge(newMapper, mergeFlags().simulate(true));\n+ DocumentMapper.MergeResult mergeResult = existingMapper.merge(newMapper.mapping(), mergeFlags().simulate(true));\n // if we have conflicts, and we are not supposed to ignore them, throw an exception\n if (!request.ignoreConflicts() && mergeResult.hasConflicts()) {\n throw new MergeMappingException(mergeResult.conflicts());",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java",
"status": "modified"
},
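With callers now blocking on acknowledgement, the MetaDataMappingService.java diff can also drop the per-node `order` comparison that used to replace superseded update tasks; queued tasks are simply applied in insertion order. A trivial sketch of the resulting queue shape, using generic placeholder types rather than the real `MappingTask` hierarchy:

```java
// Generic sketch: submit appends, the cluster-state task drains everything and
// applies it in insertion order. The old order/node-id de-duplication is gone.
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

final class MappingTaskQueue<T> {
    private final Queue<T> pending = new ConcurrentLinkedQueue<>();

    void submit(T task) {
        pending.add(task);
    }

    /** Drains every queued task in the order it was submitted. */
    List<T> drain() {
        List<T> batch = new ArrayList<>();
        for (T task; (task = pending.poll()) != null; ) {
            batch.add(task);
        }
        return batch;
    }
}
```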
{
"diff": "@@ -68,7 +68,7 @@ public ClusterDynamicSettingsModule() {\n clusterDynamicSettings.addDynamicSetting(IndicesStore.INDICES_STORE_THROTTLE_TYPE);\n clusterDynamicSettings.addDynamicSetting(IndicesStore.INDICES_STORE_THROTTLE_MAX_BYTES_PER_SEC, Validator.BYTES_SIZE);\n clusterDynamicSettings.addDynamicSetting(IndicesTTLService.INDICES_TTL_INTERVAL, Validator.TIME);\n- clusterDynamicSettings.addDynamicSetting(MappingUpdatedAction.INDICES_MAPPING_ADDITIONAL_MAPPING_CHANGE_TIME, Validator.TIME);\n+ clusterDynamicSettings.addDynamicSetting(MappingUpdatedAction.INDICES_MAPPING_DYNAMIC_TIMEOUT, Validator.TIME);\n clusterDynamicSettings.addDynamicSetting(MetaData.SETTING_READ_ONLY);\n clusterDynamicSettings.addDynamicSetting(RecoverySettings.INDICES_RECOVERY_FILE_CHUNK_SIZE, Validator.BYTES_SIZE);\n clusterDynamicSettings.addDynamicSetting(RecoverySettings.INDICES_RECOVERY_TRANSLOG_OPS, Validator.INTEGER);",
"filename": "src/main/java/org/elasticsearch/cluster/settings/ClusterDynamicSettingsModule.java",
"status": "modified"
},
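The setting registered above, `indices.mapping.dynamic_timeout`, is consumed through the `ApplySettings` listener shown earlier in the MappingUpdatedAction diff: a volatile field is swapped whenever the refreshed value differs. A sketch of that pattern with a plain `Map` standing in for the Elasticsearch `Settings` class and parsing simplified to milliseconds:

```java
// Dynamic-setting plumbing behind the renamed timeout, simplified.
import java.util.Map;

final class DynamicMappingTimeout {
    static final String KEY = "indices.mapping.dynamic_timeout";

    private volatile long timeoutMillis = 30_000L; // default in the diff: 30s

    long timeoutMillis() {
        return timeoutMillis;
    }

    /** Invoked on every dynamic cluster-settings refresh. */
    void onRefreshSettings(Map<String, String> settings) {
        String raw = settings.get(KEY);
        if (raw == null) {
            return;
        }
        long newValue = Long.parseLong(raw); // the real code parses a TimeValue such as "45s"
        if (newValue != timeoutMillis) {
            timeoutMillis = newValue;
        }
    }
}
```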
{
"diff": "@@ -559,6 +559,9 @@ public static void writeExplanation(StreamOutput out, Explanation explanation) t\n out.writeBoolean(false);\n }\n out.writeFloat(explanation.getValue());\n+ if (explanation.getDescription() == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"Explanation descriptions should NOT be null\\n[\" + explanation.toString() + \"]\");\n+ }\n out.writeString(explanation.getDescription());\n Explanation[] subExplanations = explanation.getDetails();\n if (subExplanations == null) {",
"filename": "src/main/java/org/elasticsearch/common/lucene/Lucene.java",
"status": "modified"
},
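The Lucene.java hunk fails fast when an `Explanation` has a null description instead of letting `writeString(null)` blow up deep inside serialization, which keeps the offending explanation visible in the error message. A hedged sketch of the same guard against a generic output stream:

```java
// Validate before writing so the failure names the broken explanation rather
// than surfacing as a bare NullPointerException from the stream layer.
import java.io.DataOutputStream;
import java.io.IOException;

final class ExplanationWriter {
    static void writeDescription(DataOutputStream out, String description, String explanationToString) throws IOException {
        if (description == null) {
            throw new IllegalArgumentException(
                    "Explanation descriptions should NOT be null\n[" + explanationToString + "]");
        }
        out.writeUTF(description);
    }
}
```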
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.EngineException;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.AbstractIndexShardComponent;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -44,10 +45,11 @@\n import java.io.Closeable;\n import java.io.IOException;\n import java.util.Arrays;\n-import java.util.Set;\n+import java.util.Map;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicReference;\n \n /**\n *\n@@ -61,7 +63,7 @@ public class IndexShardGateway extends AbstractIndexShardComponent implements Cl\n private final TimeValue waitForMappingUpdatePostRecovery;\n private final TimeValue syncInterval;\n \n- private volatile ScheduledFuture flushScheduler;\n+ private volatile ScheduledFuture<?> flushScheduler;\n private final CancellableThreads cancellableThreads = new CancellableThreads();\n \n \n@@ -74,7 +76,7 @@ public IndexShardGateway(ShardId shardId, @IndexSettings Settings indexSettings,\n this.indexService = indexService;\n this.indexShard = indexShard;\n \n- this.waitForMappingUpdatePostRecovery = indexSettings.getAsTime(\"index.gateway.wait_for_mapping_update_post_recovery\", TimeValue.timeValueSeconds(30));\n+ this.waitForMappingUpdatePostRecovery = indexSettings.getAsTime(\"index.gateway.wait_for_mapping_update_post_recovery\", TimeValue.timeValueMinutes(15));\n syncInterval = indexSettings.getAsTime(\"index.gateway.sync\", TimeValue.timeValueSeconds(5));\n if (syncInterval.millis() > 0) {\n this.indexShard.translog().syncOnEachOperation(false);\n@@ -93,7 +95,7 @@ public IndexShardGateway(ShardId shardId, @IndexSettings Settings indexSettings,\n public void recover(boolean indexShouldExists, RecoveryState recoveryState) throws IndexShardGatewayRecoveryException {\n indexShard.prepareForIndexRecovery();\n long version = -1;\n- final Set<String> typesToUpdate;\n+ final Map<String, Mapping> typesToUpdate;\n SegmentInfos si = null;\n indexShard.store().incRef();\n try {\n@@ -149,41 +151,49 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n typesToUpdate = indexShard.performTranslogRecovery();\n \n indexShard.finalizeRecovery();\n+ for (Map.Entry<String, Mapping> entry : typesToUpdate.entrySet()) {\n+ validateMappingUpdate(entry.getKey(), entry.getValue());\n+ }\n indexShard.postRecovery(\"post recovery from gateway\");\n } catch (EngineException e) {\n throw new IndexShardGatewayRecoveryException(shardId, \"failed to recovery from gateway\", e);\n } finally {\n indexShard.store().decRef();\n }\n- for (final String type : typesToUpdate) {\n- final CountDownLatch latch = new CountDownLatch(1);\n- mappingUpdatedAction.updateMappingOnMaster(indexService.index().name(), indexService.mapperService().documentMapper(type), indexService.indexUUID(), new MappingUpdatedAction.MappingUpdateListener() {\n- @Override\n- public void onMappingUpdate() {\n- latch.countDown();\n- }\n+ }\n \n- @Override\n- public void onFailure(Throwable t) {\n- latch.countDown();\n- logger.debug(\"failed to send mapping update post recovery to master for [{}]\", t, type);\n- }\n- });\n- cancellableThreads.execute(new CancellableThreads.Interruptable() {\n- @Override\n- public void run() throws InterruptedException {\n- try 
{\n- if (latch.await(waitForMappingUpdatePostRecovery.millis(), TimeUnit.MILLISECONDS) == false) {\n- logger.debug(\"waited for mapping update on master for [{}], yet timed out\", type);\n+ private void validateMappingUpdate(final String type, Mapping update) {\n+ final CountDownLatch latch = new CountDownLatch(1);\n+ final AtomicReference<Throwable> error = new AtomicReference<>();\n+ mappingUpdatedAction.updateMappingOnMaster(indexService.index().name(), indexService.indexUUID(), type, update, new MappingUpdatedAction.MappingUpdateListener() {\n+ @Override\n+ public void onMappingUpdate() {\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ latch.countDown();\n+ error.set(t);\n+ }\n+ });\n+ cancellableThreads.execute(new CancellableThreads.Interruptable() {\n+ @Override\n+ public void run() throws InterruptedException {\n+ try {\n+ if (latch.await(waitForMappingUpdatePostRecovery.millis(), TimeUnit.MILLISECONDS) == false) {\n+ logger.debug(\"waited for mapping update on master for [{}], yet timed out\", type);\n+ } else {\n+ if (error.get() != null) {\n+ throw new IndexShardGatewayRecoveryException(shardId, \"Failed to propagate mappings on master post recovery\", error.get());\n }\n- } catch (InterruptedException e) {\n- logger.debug(\"interrupted while waiting for mapping update\");\n- throw e;\n }\n+ } catch (InterruptedException e) {\n+ logger.debug(\"interrupted while waiting for mapping update\");\n+ throw e;\n }\n- });\n-\n- }\n+ }\n+ });\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/gateway/IndexShardGateway.java",
"status": "modified"
},
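The IndexShardGateway.java diff folds the post-recovery mapping propagation into `validateMappingUpdate`: it waits up to `index.gateway.wait_for_mapping_update_post_recovery` (now defaulting to 15 minutes) and, unlike before, aborts recovery when the master rejected the update while still only logging on timeout. A sketch of that decision, with `MasterAck` as a placeholder callback interface rather than an Elasticsearch type:

```java
// Bounded wait on the master's ack: timeout -> log and continue, rejection -> fail.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

final class PostRecoveryMappingCheck {

    interface MasterAck {
        /** The callback receives null on success and the failure otherwise. */
        void sendMappingUpdate(String type, Consumer<Throwable> callback);
    }

    static void validate(String type, long timeoutMillis, MasterAck master) throws Exception {
        final CountDownLatch latch = new CountDownLatch(1);
        final AtomicReference<Throwable> error = new AtomicReference<>();
        master.sendMappingUpdate(type, t -> {
            error.set(t);
            latch.countDown();
        });
        if (!latch.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
            // Mirrors the diff: a timeout is logged and recovery goes on.
            System.err.println("waited for mapping update on master for [" + type + "], yet timed out");
            return;
        }
        if (error.get() != null) {
            throw new IllegalStateException("Failed to propagate mappings on master post recovery", error.get());
        }
    }
}
```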
{
"diff": "@@ -50,6 +50,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n+import org.elasticsearch.index.mapper.Mapping.SourceTransform;\n import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n import org.elasticsearch.index.mapper.internal.FieldNamesFieldMapper;\n import org.elasticsearch.index.mapper.internal.IdFieldMapper;\n@@ -72,7 +73,6 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n-import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashMap;\n@@ -82,8 +82,6 @@\n import java.util.Set;\n import java.util.concurrent.CopyOnWriteArrayList;\n \n-import static com.google.common.collect.Lists.newArrayList;\n-\n /**\n *\n */\n@@ -165,7 +163,7 @@ public static class Builder {\n \n private Map<Class<? extends RootMapper>, RootMapper> rootMappers = new LinkedHashMap<>();\n \n- private List<SourceTransform> sourceTransforms;\n+ private List<SourceTransform> sourceTransforms = new ArrayList<>(1);\n \n private final String index;\n \n@@ -213,9 +211,6 @@ public Builder put(RootMapper.Builder mapper) {\n }\n \n public Builder transform(ScriptService scriptService, String script, ScriptType scriptType, String language, Map<String, Object> parameters) {\n- if (sourceTransforms == null) {\n- sourceTransforms = new ArrayList<>();\n- }\n sourceTransforms.add(new ScriptTransform(scriptService, script, scriptType, language, parameters));\n return this;\n }\n@@ -243,15 +238,9 @@ protected ParseContext.InternalParseContext initialValue() {\n \n private final DocumentMapperParser docMapperParser;\n \n- private volatile ImmutableMap<String, Object> meta;\n-\n private volatile CompressedString mappingSource;\n \n- private final RootObjectMapper rootObjectMapper;\n-\n- private final ImmutableMap<Class<? 
extends RootMapper>, RootMapper> rootMappers;\n- private final RootMapper[] rootMappersOrdered;\n- private final RootMapper[] rootMappersNotIncludedInObject;\n+ private final Mapping mapping;\n \n private volatile DocumentFieldMappers fieldMappers;\n \n@@ -267,8 +256,6 @@ protected ParseContext.InternalParseContext initialValue() {\n \n private final Object mappersMutex = new Object();\n \n- private final List<SourceTransform> sourceTransforms;\n-\n public DocumentMapper(String index, @Nullable Settings indexSettings, DocumentMapperParser docMapperParser,\n RootObjectMapper rootObjectMapper,\n ImmutableMap<String, Object> meta,\n@@ -278,19 +265,11 @@ public DocumentMapper(String index, @Nullable Settings indexSettings, DocumentMa\n this.type = rootObjectMapper.name();\n this.typeText = new StringAndBytesText(this.type);\n this.docMapperParser = docMapperParser;\n- this.meta = meta;\n- this.rootObjectMapper = rootObjectMapper;\n- this.sourceTransforms = sourceTransforms;\n-\n- this.rootMappers = ImmutableMap.copyOf(rootMappers);\n- this.rootMappersOrdered = rootMappers.values().toArray(new RootMapper[rootMappers.values().size()]);\n- List<RootMapper> rootMappersNotIncludedInObjectLst = newArrayList();\n- for (RootMapper rootMapper : rootMappersOrdered) {\n- if (!rootMapper.includeInObject()) {\n- rootMappersNotIncludedInObjectLst.add(rootMapper);\n- }\n- }\n- this.rootMappersNotIncludedInObject = rootMappersNotIncludedInObjectLst.toArray(new RootMapper[rootMappersNotIncludedInObjectLst.size()]);\n+ this.mapping = new Mapping(\n+ rootObjectMapper,\n+ rootMappers.values().toArray(new RootMapper[rootMappers.values().size()]),\n+ sourceTransforms.toArray(new SourceTransform[sourceTransforms.size()]),\n+ meta);\n \n this.typeFilter = typeMapper().termFilter(type, null);\n \n@@ -300,13 +279,9 @@ public DocumentMapper(String index, @Nullable Settings indexSettings, DocumentMa\n }\n \n FieldMapperListener.Aggregator fieldMappersAgg = new FieldMapperListener.Aggregator();\n- for (RootMapper rootMapper : rootMappersOrdered) {\n- if (rootMapper.includeInObject()) {\n- rootObjectMapper.putMapper(rootMapper);\n- } else {\n- if (rootMapper instanceof FieldMapper) {\n- fieldMappersAgg.mappers.add((FieldMapper) rootMapper);\n- }\n+ for (RootMapper rootMapper : this.mapping.rootMappers) {\n+ if (rootMapper instanceof FieldMapper) {\n+ fieldMappersAgg.mappers.add((FieldMapper) rootMapper);\n }\n }\n \n@@ -332,6 +307,10 @@ public void objectMapper(ObjectMapper objectMapper) {\n refreshSource();\n }\n \n+ public Mapping mapping() {\n+ return mapping;\n+ }\n+\n public String type() {\n return this.type;\n }\n@@ -341,15 +320,15 @@ public Text typeText() {\n }\n \n public ImmutableMap<String, Object> meta() {\n- return this.meta;\n+ return mapping.meta;\n }\n \n public CompressedString mappingSource() {\n return this.mappingSource;\n }\n \n public RootObjectMapper root() {\n- return this.rootObjectMapper;\n+ return mapping.root;\n }\n \n public UidFieldMapper uidMapper() {\n@@ -358,7 +337,7 @@ public UidFieldMapper uidMapper() {\n \n @SuppressWarnings({\"unchecked\"})\n public <T extends RootMapper> T rootMapper(Class<T> type) {\n- return (T) rootMappers.get(type);\n+ return mapping.rootMapper(type);\n }\n \n public IndexFieldMapper indexMapper() {\n@@ -445,13 +424,12 @@ public ParsedDocument parse(SourceToParse source, @Nullable ParseListener listen\n }\n source.type(this.type);\n \n- boolean mappingsModified = false;\n XContentParser parser = source.parser();\n try {\n if (parser == null) {\n parser = 
XContentHelper.createParser(source.source());\n }\n- if (sourceTransforms != null) {\n+ if (mapping.sourceTransforms.length > 0) {\n parser = transform(parser);\n }\n context.reset(parser, new ParseContext.Document(), source, listener);\n@@ -471,43 +449,22 @@ public ParsedDocument parse(SourceToParse source, @Nullable ParseListener listen\n throw new MapperParsingException(\"Malformed content, after first object, either the type field or the actual properties should exist\");\n }\n \n- for (RootMapper rootMapper : rootMappersOrdered) {\n+ for (RootMapper rootMapper : mapping.rootMappers) {\n rootMapper.preParse(context);\n }\n \n if (!emptyDoc) {\n- Mapper update = rootObjectMapper.parse(context);\n- for (RootObjectMapper mapper : context.updates()) {\n- if (update == null) {\n- update = mapper;\n- } else {\n- MapperUtils.merge(update, mapper);\n- }\n- }\n+ Mapper update = mapping.root.parse(context);\n if (update != null) {\n- // TODO: validate the mapping update on the master node\n- // lock to avoid concurrency issues with mapping updates coming from the API\n- synchronized(this) {\n- // simulate on the first time to check if the mapping update is applicable\n- MergeContext mergeContext = newMergeContext(new MergeFlags().simulate(true));\n- rootObjectMapper.merge(update, mergeContext);\n- if (mergeContext.hasConflicts()) {\n- throw new MapperParsingException(\"Could not apply generated dynamic mappings: \" + Arrays.toString(mergeContext.buildConflicts()));\n- } else {\n- // then apply it for real\n- mappingsModified = true;\n- mergeContext = newMergeContext(new MergeFlags().simulate(false));\n- rootObjectMapper.merge(update, mergeContext);\n- }\n- }\n+ context.addDynamicMappingsUpdate((RootObjectMapper) update);\n }\n }\n \n for (int i = 0; i < countDownTokens; i++) {\n parser.nextToken();\n }\n \n- for (RootMapper rootMapper : rootMappersOrdered) {\n+ for (RootMapper rootMapper : mapping.rootMappers) {\n rootMapper.postParse(context);\n }\n } catch (Throwable e) {\n@@ -548,8 +505,14 @@ public ParsedDocument parse(SourceToParse source, @Nullable ParseListener listen\n }\n }\n \n+ Mapper rootDynamicUpdate = context.dynamicMappingsUpdate();\n+ Mapping update = null;\n+ if (rootDynamicUpdate != null) {\n+ update = mapping.mappingUpdate(rootDynamicUpdate);\n+ }\n+\n ParsedDocument doc = new ParsedDocument(context.uid(), context.version(), context.id(), context.type(), source.routing(), source.timestamp(), source.ttl(), context.docs(),\n- context.source(), mappingsModified).parent(source.parent());\n+ context.source(), update).parent(source.parent());\n // reset the context to free up memory\n context.reset(null, null, null, null);\n return doc;\n@@ -600,10 +563,10 @@ public ObjectMapper findParentObjectMapper(ObjectMapper objectMapper) {\n * @return transformed version of transformMe. 
This may actually be the same object as sourceAsMap\n */\n public Map<String, Object> transformSourceAsMap(Map<String, Object> sourceAsMap) {\n- if (sourceTransforms == null) {\n+ if (mapping.sourceTransforms.length == 0) {\n return sourceAsMap;\n }\n- for (SourceTransform transform : sourceTransforms) {\n+ for (SourceTransform transform : mapping.sourceTransforms) {\n sourceAsMap = transform.transformSourceAsMap(sourceAsMap);\n }\n return sourceAsMap;\n@@ -629,12 +592,12 @@ public void addFieldMapperListener(FieldMapperListener fieldMapperListener) {\n }\n \n public void traverse(FieldMapperListener listener) {\n- for (RootMapper rootMapper : rootMappersOrdered) {\n+ for (RootMapper rootMapper : mapping.rootMappers) {\n if (!rootMapper.includeInObject() && rootMapper instanceof FieldMapper) {\n listener.fieldMapper((FieldMapper) rootMapper);\n }\n }\n- rootObjectMapper.traverse(listener);\n+ mapping.root.traverse(listener);\n }\n \n public void addObjectMappers(Collection<ObjectMapper> objectMappers) {\n@@ -662,7 +625,7 @@ public void addObjectMapperListener(ObjectMapperListener objectMapperListener) {\n }\n \n public void traverse(ObjectMapperListener listener) {\n- rootObjectMapper.traverse(listener);\n+ mapping.root.traverse(listener);\n }\n \n private MergeContext newMergeContext(MergeFlags mergeFlags) {\n@@ -672,11 +635,13 @@ private MergeContext newMergeContext(MergeFlags mergeFlags) {\n \n @Override\n public void addFieldMappers(List<FieldMapper<?>> fieldMappers) {\n+ assert mergeFlags().simulate() == false;\n DocumentMapper.this.addFieldMappers(fieldMappers);\n }\n \n @Override\n public void addObjectMappers(Collection<ObjectMapper> objectMappers) {\n+ assert mergeFlags().simulate() == false;\n DocumentMapper.this.addObjectMappers(objectMappers);\n }\n \n@@ -698,29 +663,13 @@ public String[] buildConflicts() {\n };\n }\n \n- public synchronized MergeResult merge(DocumentMapper mergeWith, MergeFlags mergeFlags) {\n+ public synchronized MergeResult merge(Mapping mapping, MergeFlags mergeFlags) {\n final MergeContext mergeContext = newMergeContext(mergeFlags);\n- assert rootMappers.size() == mergeWith.rootMappers.size();\n-\n- rootObjectMapper.merge(mergeWith.rootObjectMapper, mergeContext);\n- for (Map.Entry<Class<? 
extends RootMapper>, RootMapper> entry : rootMappers.entrySet()) {\n- // root mappers included in root object will get merge in the rootObjectMapper\n- if (entry.getValue().includeInObject()) {\n- continue;\n- }\n- RootMapper mergeWithRootMapper = mergeWith.rootMappers.get(entry.getKey());\n- if (mergeWithRootMapper != null) {\n- entry.getValue().merge(mergeWithRootMapper, mergeContext);\n- }\n- }\n-\n- if (!mergeFlags.simulate()) {\n- // let the merge with attributes to override the attributes\n- meta = mergeWith.meta();\n- // update the source of the merged one\n+ final MergeResult mergeResult = this.mapping.merge(mapping, mergeContext);\n+ if (mergeFlags.simulate() == false) {\n refreshSource();\n }\n- return new MergeResult(mergeContext.buildConflicts());\n+ return mergeResult;\n }\n \n public CompressedString refreshSource() throws ElasticsearchGenerationException {\n@@ -739,51 +688,15 @@ public CompressedString refreshSource() throws ElasticsearchGenerationException\n \n public void close() {\n cache.close();\n- rootObjectMapper.close();\n- for (RootMapper rootMapper : rootMappersOrdered) {\n+ mapping.root.close();\n+ for (RootMapper rootMapper : mapping.rootMappers) {\n rootMapper.close();\n }\n }\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- rootObjectMapper.toXContent(builder, params, new ToXContent() {\n- @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- if (sourceTransforms != null) {\n- if (sourceTransforms.size() == 1) {\n- builder.field(\"transform\");\n- sourceTransforms.get(0).toXContent(builder, params);\n- } else {\n- builder.startArray(\"transform\");\n- for (SourceTransform transform: sourceTransforms) {\n- transform.toXContent(builder, params);\n- }\n- builder.endArray();\n- }\n- }\n-\n- if (meta != null && !meta.isEmpty()) {\n- builder.field(\"_meta\", meta());\n- }\n- return builder;\n- }\n- // no need to pass here id and boost, since they are added to the root object mapper\n- // in the constructor\n- }, rootMappersNotIncludedInObject);\n- return builder;\n- }\n-\n- /**\n- * Transformations to be applied to the source before indexing and/or after loading.\n- */\n- private interface SourceTransform extends ToXContent {\n- /**\n- * Transform the source when it is expressed as a map. This is public so it can be transformed the source is loaded.\n- * @param sourceAsMap source to transform. This may be mutated by the script.\n- * @return transformed version of transformMe. This may actually be the same object as sourceAsMap\n- */\n- Map<String, Object> transformSourceAsMap(Map<String, Object> sourceAsMap);\n+ return mapping.toXContent(builder, params);\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
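The `DocumentMapper` diff above moves the source-transform chain onto the new `Mapping` object but keeps the same apply-in-order loop. As a minimal sketch of that loop (illustrative stand-ins only, not the Elasticsearch classes), the pattern is:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of DocumentMapper#transformSourceAsMap: apply a chain of source
// transforms in order; with no transforms, the original map is returned untouched.
public class SourceTransformChainSketch {

    /** Hypothetical stand-in for Mapping.SourceTransform. */
    interface SourceTransform {
        Map<String, Object> transformSourceAsMap(Map<String, Object> sourceAsMap);
    }

    static Map<String, Object> transform(List<SourceTransform> transforms, Map<String, Object> source) {
        if (transforms.isEmpty()) {
            return source; // nothing to do, hand back the same object
        }
        for (SourceTransform t : transforms) {
            source = t.transformSourceAsMap(source); // each transform may mutate or replace the map
        }
        return source;
    }

    public static void main(String[] args) {
        List<SourceTransform> transforms = new ArrayList<>();
        transforms.add(map -> { map.put("indexed_at", 12345L); return map; });
        Map<String, Object> doc = new HashMap<>();
        doc.put("title", "hello");
        System.out.println(transform(transforms, doc)); // prints the map with indexed_at added
    }
}
```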
{
"diff": "@@ -339,7 +339,7 @@ private DocumentMapper merge(DocumentMapper mapper) {\n DocumentMapper oldMapper = mappers.get(mapper.type());\n \n if (oldMapper != null) {\n- DocumentMapper.MergeResult result = oldMapper.merge(mapper, mergeFlags().simulate(false));\n+ DocumentMapper.MergeResult result = oldMapper.merge(mapper.mapping(), mergeFlags().simulate(false));\n if (result.hasConflicts()) {\n // TODO: What should we do???\n if (logger.isDebugEnabled()) {\n@@ -417,26 +417,19 @@ public DocumentMapper documentMapper(String type) {\n }\n \n /**\n- * Returns the document mapper created, including if the document mapper ended up\n- * being actually created or not in the second tuple value.\n+ * Returns the document mapper created, including a mapping update if the\n+ * type has been dynamically created.\n */\n- public Tuple<DocumentMapper, Boolean> documentMapperWithAutoCreate(String type) {\n+ public Tuple<DocumentMapper, Mapping> documentMapperWithAutoCreate(String type) {\n DocumentMapper mapper = mappers.get(type);\n if (mapper != null) {\n- return Tuple.tuple(mapper, Boolean.FALSE);\n+ return Tuple.tuple(mapper, null);\n }\n if (!dynamic) {\n throw new TypeMissingException(index, type, \"trying to auto create mapping, but dynamic mapping is disabled\");\n }\n- // go ahead and dynamically create it\n- synchronized (typeMutex) {\n- mapper = mappers.get(type);\n- if (mapper != null) {\n- return Tuple.tuple(mapper, Boolean.FALSE);\n- }\n- merge(type, null, true);\n- return Tuple.tuple(mappers.get(type), Boolean.TRUE);\n- }\n+ mapper = parse(type, null, true);\n+ return Tuple.tuple(mapper, mapper.mapping());\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
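The `MapperService` change replaces the `Tuple<DocumentMapper, Boolean>` return of `documentMapperWithAutoCreate` with `Tuple<DocumentMapper, Mapping>`, so callers receive the actual mapping update instead of just a flag. A simplified sketch of that lookup pattern, under the assumption that `DocMapper`, `TypeMapping`, and `Pair` are illustrative stand-ins:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: return the existing mapper, or create one and also hand back its mapping
// so the caller can publish the type creation to the master.
public class AutoCreateLookupSketch {

    static class TypeMapping {}
    static class DocMapper {
        final TypeMapping mapping = new TypeMapping();
        TypeMapping mapping() { return mapping; }
    }
    static class Pair<A, B> {
        final A v1; final B v2;
        Pair(A v1, B v2) { this.v1 = v1; this.v2 = v2; }
    }

    private final Map<String, DocMapper> mappers = new ConcurrentHashMap<>();
    private final boolean dynamic = true;

    Pair<DocMapper, TypeMapping> documentMapperWithAutoCreate(String type) {
        DocMapper mapper = mappers.get(type);
        if (mapper != null) {
            return new Pair<>(mapper, null);          // already known: no mapping update needed
        }
        if (!dynamic) {
            throw new IllegalStateException("dynamic mapping is disabled for type [" + type + "]");
        }
        mapper = new DocMapper();                     // stands in for parse(type, null, true)
        return new Pair<>(mapper, mapper.mapping());  // caller is responsible for publishing this update
    }
}
```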
{
"diff": "@@ -19,10 +19,8 @@\n \n package org.elasticsearch.index.mapper;\n \n-import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.mapper.object.RootObjectMapper;\n \n import java.io.IOException;\n import java.util.Collection;\n@@ -44,12 +42,8 @@ public static <M extends Mapper> M parseAndMergeUpdate(M mapper, ParseContext co\n return mapper;\n }\n \n- /**\n- * Merge {@code mergeWith} into {@code mergeTo}. Note: this method only\n- * merges mappings, not lookup structures. Conflicts are returned as exceptions.\n- */\n- public static void merge(Mapper mergeInto, Mapper mergeWith) {\n- MergeContext ctx = new MergeContext(new DocumentMapper.MergeFlags().simulate(false)) {\n+ private static MergeContext newStrictMergeContext() {\n+ return new MergeContext(new DocumentMapper.MergeFlags().simulate(false)) {\n \n @Override\n public boolean hasConflicts() {\n@@ -73,10 +67,25 @@ public void addFieldMappers(List<FieldMapper<?>> fieldMappers) {\n \n @Override\n public void addConflict(String mergeFailure) {\n- throw new ElasticsearchIllegalStateException(\"Merging dynamic updates triggered a conflict: \" + mergeFailure);\n+ throw new MapperParsingException(\"Merging dynamic updates triggered a conflict: \" + mergeFailure);\n }\n };\n- mergeInto.merge(mergeWith, ctx);\n+ }\n+\n+ /**\n+ * Merge {@code mergeWith} into {@code mergeTo}. Note: this method only\n+ * merges mappings, not lookup structures. Conflicts are returned as exceptions.\n+ */\n+ public static void merge(Mapper mergeInto, Mapper mergeWith) {\n+ mergeInto.merge(mergeWith, newStrictMergeContext());\n+ }\n+\n+ /**\n+ * Merge {@code mergeWith} into {@code mergeTo}. Note: this method only\n+ * merges mappings, not lookup structures. Conflicts are returned as exceptions.\n+ */\n+ public static void merge(Mapping mergeInto, Mapping mergeWith) {\n+ mergeInto.merge(mergeWith, newStrictMergeContext());\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/MapperUtils.java",
"status": "modified"
},
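The `MapperUtils` diff factors the anonymous merge context into `newStrictMergeContext()`, whose `addConflict` throws instead of collecting. A rough sketch of that fail-fast versus collect-conflicts distinction, with hypothetical names:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: dynamic mapping updates are merged with a "strict" context that turns any
// reported conflict into an exception immediately, unlike the normal collecting context.
public class StrictMergeSketch {

    interface MergeContext {
        void addConflict(String failure);
    }

    static MergeContext lenientContext(List<String> conflicts) {
        return conflicts::add;                        // collect conflicts for later inspection
    }

    static MergeContext strictContext() {
        return failure -> {                           // fail fast, as MapperUtils does for dynamic updates
            throw new IllegalArgumentException("Merging dynamic updates triggered a conflict: " + failure);
        };
    }

    public static void main(String[] args) {
        List<String> conflicts = new ArrayList<>();
        lenientContext(conflicts).addConflict("field [foo] changed type");
        System.out.println(conflicts);                // [field [foo] changed type]
        try {
            strictContext().addConflict("field [foo] changed type");
        } catch (IllegalArgumentException e) {
            System.out.println("strict merge rejected the update: " + e.getMessage());
        }
    }
}
```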
{
"diff": "@@ -0,0 +1,171 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import com.google.common.collect.ImmutableMap;\n+\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.mapper.DocumentMapper.MergeResult;\n+import org.elasticsearch.index.mapper.object.RootObjectMapper;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.Map;\n+\n+/**\n+ * Wrapper around everything that defines a mapping, without references to\n+ * utility classes like MapperService, ...\n+ */\n+public final class Mapping implements ToXContent {\n+\n+ /**\n+ * Transformations to be applied to the source before indexing and/or after loading.\n+ */\n+ public interface SourceTransform extends ToXContent {\n+ /**\n+ * Transform the source when it is expressed as a map. This is public so it can be transformed the source is loaded.\n+ * @param sourceAsMap source to transform. This may be mutated by the script.\n+ * @return transformed version of transformMe. This may actually be the same object as sourceAsMap\n+ */\n+ Map<String, Object> transformSourceAsMap(Map<String, Object> sourceAsMap);\n+ }\n+\n+ final RootObjectMapper root;\n+ final RootMapper[] rootMappers;\n+ final RootMapper[] rootMappersNotIncludedInObject;\n+ final ImmutableMap<Class<? extends RootMapper>, RootMapper> rootMappersMap;\n+ final SourceTransform[] sourceTransforms;\n+ volatile ImmutableMap<String, Object> meta;\n+\n+ public Mapping(RootObjectMapper rootObjectMapper, RootMapper[] rootMappers, SourceTransform[] sourceTransforms, ImmutableMap<String, Object> meta) {\n+ this.root = rootObjectMapper;\n+ this.rootMappers = rootMappers;\n+ List<RootMapper> rootMappersNotIncludedInObject = new ArrayList<>();\n+ ImmutableMap.Builder<Class<? extends RootMapper>, RootMapper> builder = ImmutableMap.builder();\n+ for (RootMapper rootMapper : rootMappers) {\n+ if (rootMapper.includeInObject()) {\n+ root.putMapper(rootMapper);\n+ } else {\n+ rootMappersNotIncludedInObject.add(rootMapper);\n+ }\n+ builder.put(rootMapper.getClass(), rootMapper);\n+ }\n+ this.rootMappersNotIncludedInObject = rootMappersNotIncludedInObject.toArray(new RootMapper[rootMappersNotIncludedInObject.size()]);\n+ this.rootMappersMap = builder.build();\n+ this.sourceTransforms = sourceTransforms;\n+ this.meta = meta;\n+ }\n+\n+ /** Return the root object mapper. 
*/\n+ public RootObjectMapper root() {\n+ return root;\n+ }\n+\n+ /**\n+ * Generate a mapping update for the given root object mapper.\n+ */\n+ public Mapping mappingUpdate(Mapper rootObjectMapper) {\n+ return new Mapping((RootObjectMapper) rootObjectMapper, rootMappers, sourceTransforms, meta);\n+ }\n+\n+ /** Get the root mapper with the given class. */\n+ @SuppressWarnings(\"unchecked\")\n+ public <T extends RootMapper> T rootMapper(Class<T> clazz) {\n+ return (T) rootMappersMap.get(clazz);\n+ }\n+\n+ /** @see DocumentMapper#merge(DocumentMapper, org.elasticsearch.index.mapper.DocumentMapper.MergeFlags) */\n+ public MergeResult merge(Mapping mergeWith, MergeContext mergeContext) {\n+ assert rootMappers.length == mergeWith.rootMappers.length;\n+\n+ root.merge(mergeWith.root, mergeContext);\n+ for (RootMapper rootMapper : rootMappers) {\n+ // root mappers included in root object will get merge in the rootObjectMapper\n+ if (rootMapper.includeInObject()) {\n+ continue;\n+ }\n+ RootMapper mergeWithRootMapper = mergeWith.rootMapper(rootMapper.getClass());\n+ if (mergeWithRootMapper != null) {\n+ rootMapper.merge(mergeWithRootMapper, mergeContext);\n+ }\n+ }\n+\n+ if (mergeContext.mergeFlags().simulate() == false) {\n+ // let the merge with attributes to override the attributes\n+ meta = mergeWith.meta;\n+ }\n+ return new MergeResult(mergeContext.buildConflicts());\n+ }\n+ \n+ @Override\n+ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ root.toXContent(builder, params, new ToXContent() {\n+ @Override\n+ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ if (sourceTransforms.length > 0) {\n+ if (sourceTransforms.length == 1) {\n+ builder.field(\"transform\");\n+ sourceTransforms[0].toXContent(builder, params);\n+ } else {\n+ builder.startArray(\"transform\");\n+ for (SourceTransform transform: sourceTransforms) {\n+ transform.toXContent(builder, params);\n+ }\n+ builder.endArray();\n+ }\n+ }\n+\n+ if (meta != null && !meta.isEmpty()) {\n+ builder.field(\"_meta\", meta);\n+ }\n+ return builder;\n+ }\n+ // no need to pass here id and boost, since they are added to the root object mapper\n+ // in the constructor\n+ }, rootMappersNotIncludedInObject);\n+ return builder;\n+ }\n+\n+ /** Serialize to a {@link BytesReference}. */\n+ public BytesReference toBytes() {\n+ try {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject();\n+ toXContent(builder, new ToXContent.MapParams(ImmutableMap.<String, String>of()));\n+ return builder.endObject().bytes();\n+ } catch (IOException bogus) {\n+ throw new AssertionError(bogus);\n+ }\n+ }\n+\n+ @Override\n+ public String toString() {\n+ try {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject();\n+ toXContent(builder, new ToXContent.MapParams(ImmutableMap.<String, String>of()));\n+ return builder.endObject().string();\n+ } catch (IOException bogus) {\n+ throw new AssertionError(bogus);\n+ }\n+ }\n+}",
"filename": "src/main/java/org/elasticsearch/index/mapper/Mapping.java",
"status": "added"
},
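The new `Mapping` class merges the root object first and then each non-object root mapper against its counterpart looked up by class. Here is a compact sketch of that lookup-by-class merge loop; `RootMapperStub` is a hypothetical stand-in for `RootMapper`, not the real type:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of Mapping#merge: walk our root mappers and merge each one with the
// counterpart of the same class from the incoming mapping, if present.
public class MappingMergeSketch {

    interface RootMapperStub {
        void merge(RootMapperStub other);
    }

    // Maps root mapper class -> instance, like Mapping.rootMappersMap.
    final Map<Class<? extends RootMapperStub>, RootMapperStub> rootMappers = new LinkedHashMap<>();

    void merge(MappingMergeSketch mergeWith) {
        for (Map.Entry<Class<? extends RootMapperStub>, RootMapperStub> entry : rootMappers.entrySet()) {
            RootMapperStub other = mergeWith.rootMappers.get(entry.getKey()); // counterpart by class
            if (other != null) {
                entry.getValue().merge(other); // each root mapper knows how to merge its own settings
            }
        }
    }
}
```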
{
"diff": "@@ -359,13 +359,13 @@ public StringBuilder stringBuilder() {\n }\n \n @Override\n- public void addRootObjectUpdate(RootObjectMapper update) {\n- in.addRootObjectUpdate(update);\n+ public void addDynamicMappingsUpdate(Mapper update) {\n+ in.addDynamicMappingsUpdate(update);\n }\n \n @Override\n- public List<RootObjectMapper> updates() {\n- return in.updates();\n+ public Mapper dynamicMappingsUpdate() {\n+ return in.dynamicMappingsUpdate();\n }\n }\n \n@@ -401,13 +401,11 @@ public static class InternalParseContext extends ParseContext {\n \n private Map<String, String> ignoredValues = new HashMap<>();\n \n- private boolean mappingsModified = false;\n-\n private AllEntries allEntries = new AllEntries();\n \n private float docBoost = 1.0f;\n \n- private final List<RootObjectMapper> rootMapperDynamicUpdates = new ArrayList<>();\n+ private Mapper dynamicMappingsUpdate = null;\n \n public InternalParseContext(String index, @Nullable Settings indexSettings, DocumentMapperParser docMapperParser, DocumentMapper docMapper, ContentPath path) {\n this.index = index;\n@@ -432,12 +430,11 @@ public void reset(XContentParser parser, Document document, SourceToParse source\n this.sourceToParse = source;\n this.source = source == null ? null : sourceToParse.source();\n this.path.reset();\n- this.mappingsModified = false;\n this.listener = listener == null ? DocumentMapper.ParseListener.EMPTY : listener;\n this.allEntries = new AllEntries();\n this.ignoredValues.clear();\n this.docBoost = 1.0f;\n- this.rootMapperDynamicUpdates.clear();\n+ this.dynamicMappingsUpdate = null;\n }\n \n @Override\n@@ -604,13 +601,18 @@ public StringBuilder stringBuilder() {\n }\n \n @Override\n- public void addRootObjectUpdate(RootObjectMapper mapper) {\n- rootMapperDynamicUpdates.add(mapper);\n+ public void addDynamicMappingsUpdate(Mapper mapper) {\n+ assert mapper instanceof RootObjectMapper : mapper;\n+ if (dynamicMappingsUpdate == null) {\n+ dynamicMappingsUpdate = mapper;\n+ } else {\n+ MapperUtils.merge(dynamicMappingsUpdate, mapper);\n+ }\n }\n \n @Override\n- public List<RootObjectMapper> updates() {\n- return rootMapperDynamicUpdates;\n+ public Mapper dynamicMappingsUpdate() {\n+ return dynamicMappingsUpdate;\n }\n }\n \n@@ -820,13 +822,11 @@ public final <T> T parseExternalValue(Class<T> clazz) {\n \n /**\n * Add a dynamic update to the root object mapper.\n- * TODO: can we nuke it, it is only needed for copy_to\n */\n- public abstract void addRootObjectUpdate(RootObjectMapper update);\n+ public abstract void addDynamicMappingsUpdate(Mapper update);\n \n /**\n * Get dynamic updates to the root object mapper.\n- * TODO: can we nuke it, it is only needed for copy_to\n */\n- public abstract List<RootObjectMapper> updates();\n+ public abstract Mapper dynamicMappingsUpdate();\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/ParseContext.java",
"status": "modified"
},
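The `ParseContext` change replaces the list of `RootObjectMapper` updates with a single accumulated `dynamicMappingsUpdate` that later updates are merged into. A minimal sketch of that accumulate-by-merge pattern, where `Update#mergeInto` is a hypothetical placeholder for the real merge:

```java
// Sketch: keep one dynamic-mapping update per parsed document and fold every
// additional update into it, instead of collecting a list of partial updates.
public class DynamicUpdateAccumulatorSketch {

    interface Update {
        void mergeInto(Update target); // hypothetical: fold this update's fields into target
    }

    private Update dynamicMappingsUpdate = null;

    void addDynamicMappingsUpdate(Update update) {
        if (dynamicMappingsUpdate == null) {
            dynamicMappingsUpdate = update;           // first update becomes the accumulator
        } else {
            update.mergeInto(dynamicMappingsUpdate);  // later updates are merged into it
        }
    }

    Update dynamicMappingsUpdate() {
        return dynamicMappingsUpdate;                 // null means the mappings were not modified
    }
}
```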
{
"diff": "@@ -19,10 +19,8 @@\n \n package org.elasticsearch.index.mapper;\n \n-import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.document.Field;\n import org.elasticsearch.common.bytes.BytesReference;\n-import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n \n import java.util.List;\n@@ -48,11 +46,11 @@ public class ParsedDocument {\n \n private BytesReference source;\n \n- private boolean mappingsModified;\n+ private Mapping dynamicMappingsUpdate;\n \n private String parent;\n \n- public ParsedDocument(Field uid, Field version, String id, String type, String routing, long timestamp, long ttl, List<Document> documents, BytesReference source, boolean mappingsModified) {\n+ public ParsedDocument(Field uid, Field version, String id, String type, String routing, long timestamp, long ttl, List<Document> documents, BytesReference source, Mapping dynamicMappingsUpdate) {\n this.uid = uid;\n this.version = version;\n this.id = id;\n@@ -62,7 +60,7 @@ public ParsedDocument(Field uid, Field version, String id, String type, String r\n this.ttl = ttl;\n this.documents = documents;\n this.source = source;\n- this.mappingsModified = mappingsModified;\n+ this.dynamicMappingsUpdate = dynamicMappingsUpdate;\n }\n \n public Field uid() {\n@@ -119,28 +117,19 @@ public String parent() {\n }\n \n /**\n- * Has the parsed document caused mappings to be modified?\n+ * Return dynamic updates to mappings or {@code null} if there were no\n+ * updates to the mappings.\n */\n- public boolean mappingsModified() {\n- return mappingsModified;\n+ public Mapping dynamicMappingsUpdate() {\n+ return dynamicMappingsUpdate;\n }\n \n- /**\n- * latches the mapping to be marked as modified.\n- */\n- public void setMappingsModified() {\n- this.mappingsModified = true;\n- }\n-\n- /**\n- * Uses the value of get document or create to automatically set if mapping is\n- * modified or not.\n- */\n- public ParsedDocument setMappingsModified(Tuple<DocumentMapper, Boolean> docMapper) {\n- if (docMapper.v2()) {\n- setMappingsModified();\n+ public void addDynamicMappingsUpdate(Mapping update) {\n+ if (dynamicMappingsUpdate == null) {\n+ dynamicMappingsUpdate = update;\n+ } else {\n+ MapperUtils.merge(dynamicMappingsUpdate, update);\n }\n- return this;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java",
"status": "modified"
},
{
"diff": "@@ -1112,7 +1112,7 @@ public void parse(String field, ParseContext context) throws IOException {\n update = parent.mappingUpdate(update);\n objectPath = parentPath;\n }\n- context.addRootObjectUpdate((RootObjectMapper) update);\n+ context.addDynamicMappingsUpdate((RootObjectMapper) update);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -366,7 +366,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n ParentFieldMapper other = (ParentFieldMapper) mergeWith;\n if (!Objects.equal(type, other.type)) {\n- mergeContext.addConflict(\"The _parent field's type option can't be changed\");\n+ mergeContext.addConflict(\"The _parent field's type option can't be changed: [\" + type + \"]->[\" + other.type + \"]\");\n }\n \n if (!mergeContext.mergeFlags().simulate()) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -502,6 +502,10 @@ public final Dynamic dynamic() {\n return this.dynamic == null ? Dynamic.TRUE : this.dynamic;\n }\n \n+ public void setDynamic(Dynamic dynamic) {\n+ this.dynamic = dynamic;\n+ }\n+\n protected boolean allowValue() {\n return true;\n }\n@@ -1045,13 +1049,16 @@ public int compare(Mapper o1, Mapper o2) {\n }\n }\n \n- if (!mappers.isEmpty()) {\n- builder.startObject(\"properties\");\n- for (Mapper mapper : sortedMappers) {\n- if (!(mapper instanceof InternalMapper)) {\n- mapper.toXContent(builder, params);\n+ int count = 0;\n+ for (Mapper mapper : sortedMappers) {\n+ if (!(mapper instanceof InternalMapper)) {\n+ if (count++ == 0) {\n+ builder.startObject(\"properties\");\n }\n+ mapper.toXContent(builder, params);\n }\n+ }\n+ if (count > 0) {\n builder.endObject();\n }\n builder.endObject();",
"filename": "src/main/java/org/elasticsearch/index/mapper/object/ObjectMapper.java",
"status": "modified"
},
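The `ObjectMapper#toXContent` change opens the `properties` object lazily, only once the first non-internal mapper is actually written, so an empty `"properties": {}` block is never emitted. A schematic sketch of that counting pattern, using a plain `StringBuilder` as a stand-in for `XContentBuilder`:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: open "properties" on the first real (non-internal) field and close it
// only if it was opened; internal mappers are skipped here.
public class LazyPropertiesSketch {

    static class MapperStub {
        final String name; final boolean internal;
        MapperStub(String name, boolean internal) { this.name = name; this.internal = internal; }
    }

    static String render(List<MapperStub> sortedMappers) {
        StringBuilder builder = new StringBuilder("{");
        int count = 0;
        for (MapperStub mapper : sortedMappers) {
            if (mapper.internal) {
                continue; // internal mappers are serialized elsewhere
            }
            builder.append(count++ == 0 ? "\"properties\":{" : ",");
            builder.append("\"").append(mapper.name).append("\":{}");
        }
        if (count > 0) {
            builder.append("}"); // close "properties" only if it was opened
        }
        return builder.append("}").toString();
    }

    public static void main(String[] args) {
        System.out.println(render(Arrays.asList(new MapperStub("_uid", true))));
        // {}
        System.out.println(render(Arrays.asList(new MapperStub("title", false), new MapperStub("author", false))));
        // {"properties":{"title":{},"author":{}}}
    }
}
```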
{
"diff": "@@ -33,7 +33,6 @@\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.Version;\n-import org.elasticsearch.action.WriteFailureException;\n import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n import org.elasticsearch.action.admin.indices.optimize.OptimizeRequest;\n import org.elasticsearch.cluster.ClusterService;\n@@ -450,18 +449,13 @@ public Engine.Create prepareCreate(SourceToParse source, long version, VersionTy\n return prepareCreate(docMapper(source.type()), source, version, versionType, origin, state != IndexShardState.STARTED || canHaveDuplicates, autoGeneratedId);\n }\n \n- static Engine.Create prepareCreate(Tuple<DocumentMapper, Boolean> docMapper, SourceToParse source, long version, VersionType versionType, Engine.Operation.Origin origin, boolean canHaveDuplicates, boolean autoGeneratedId) throws ElasticsearchException {\n+ static Engine.Create prepareCreate(Tuple<DocumentMapper, Mapping> docMapper, SourceToParse source, long version, VersionType versionType, Engine.Operation.Origin origin, boolean canHaveDuplicates, boolean autoGeneratedId) throws ElasticsearchException {\n long startTime = System.nanoTime();\n- try {\n- ParsedDocument doc = docMapper.v1().parse(source).setMappingsModified(docMapper);\n- return new Engine.Create(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, canHaveDuplicates, autoGeneratedId);\n- } catch (Throwable t) {\n- if (docMapper.v2()) {\n- throw new WriteFailureException(t, docMapper.v1().type());\n- } else {\n- throw t;\n- }\n+ ParsedDocument doc = docMapper.v1().parse(source);\n+ if (docMapper.v2() != null) {\n+ doc.addDynamicMappingsUpdate(docMapper.v2());\n }\n+ return new Engine.Create(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, canHaveDuplicates, autoGeneratedId);\n }\n \n public ParsedDocument create(Engine.Create create) throws ElasticsearchException {\n@@ -486,18 +480,13 @@ public Engine.Index prepareIndex(SourceToParse source, long version, VersionType\n return prepareIndex(docMapper(source.type()), source, version, versionType, origin, state != IndexShardState.STARTED || canHaveDuplicates);\n }\n \n- static Engine.Index prepareIndex(Tuple<DocumentMapper, Boolean> docMapper, SourceToParse source, long version, VersionType versionType, Engine.Operation.Origin origin, boolean canHaveDuplicates) throws ElasticsearchException {\n+ static Engine.Index prepareIndex(Tuple<DocumentMapper, Mapping> docMapper, SourceToParse source, long version, VersionType versionType, Engine.Operation.Origin origin, boolean canHaveDuplicates) throws ElasticsearchException {\n long startTime = System.nanoTime();\n- try {\n- ParsedDocument doc = docMapper.v1().parse(source).setMappingsModified(docMapper);\n- return new Engine.Index(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, canHaveDuplicates);\n- } catch (Throwable t) {\n- if (docMapper.v2()) {\n- throw new WriteFailureException(t, docMapper.v1().type());\n- } else {\n- throw t;\n- }\n+ ParsedDocument doc = docMapper.v1().parse(source);\n+ if (docMapper.v2() != null) {\n+ doc.addDynamicMappingsUpdate(docMapper.v2());\n }\n+ return new Engine.Index(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, 
canHaveDuplicates);\n }\n \n public ParsedDocument index(Engine.Index index) throws ElasticsearchException {\n@@ -800,14 +789,14 @@ public int performBatchRecovery(Iterable<Translog.Operation> operations) {\n /**\n * After the store has been recovered, we need to start the engine in order to apply operations\n */\n- public Set<String> performTranslogRecovery() throws ElasticsearchException {\n- final Set<String> recoveredTypes = internalPerformTranslogRecovery(false);\n+ public Map<String, Mapping> performTranslogRecovery() throws ElasticsearchException {\n+ final Map<String, Mapping> recoveredTypes = internalPerformTranslogRecovery(false);\n assert recoveryState.getStage() == RecoveryState.Stage.TRANSLOG : \"TRANSLOG stage expected but was: \" + recoveryState.getStage();\n return recoveredTypes;\n \n }\n \n- private Set<String> internalPerformTranslogRecovery(boolean skipTranslogRecovery) throws ElasticsearchException {\n+ private Map<String, Mapping> internalPerformTranslogRecovery(boolean skipTranslogRecovery) throws ElasticsearchException {\n if (state != IndexShardState.RECOVERING) {\n throw new IndexShardNotRecoveringException(shardId, state);\n }\n@@ -832,7 +821,7 @@ private Set<String> internalPerformTranslogRecovery(boolean skipTranslogRecovery\n */\n public void skipTranslogRecovery() throws ElasticsearchException {\n assert engineUnsafe() == null : \"engine was already created\";\n- Set<String> recoveredTypes = internalPerformTranslogRecovery(true);\n+ Map<String, Mapping> recoveredTypes = internalPerformTranslogRecovery(true);\n assert recoveredTypes.isEmpty();\n assert recoveryState.getTranslog().recoveredOperations() == 0;\n }\n@@ -1277,7 +1266,7 @@ private String getIndexUUID() {\n return indexSettings.get(IndexMetaData.SETTING_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n }\n \n- private Tuple<DocumentMapper, Boolean> docMapper(String type) {\n+ private Tuple<DocumentMapper, Mapping> docMapper(String type) {\n return mapperService.documentMapperWithAutoCreate(type);\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
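In the `IndexShard` diff, `prepareCreate`/`prepareIndex` no longer wrap parsing in a try/catch keyed on a boolean; they parse the source, attach the auto-create mapping update to the parsed document when one exists, and build the engine operation. A simplified sketch of that flow, with all types reduced to illustrative stand-ins rather than the real classes:

```java
// Sketch of the new prepare flow: parse, attach the optional mapping update to the
// document, and hand both to the operation that will be replicated.
public class PrepareIndexSketch {

    static class MappingUpdate {}
    static class ParsedDoc {
        private MappingUpdate update;
        void addDynamicMappingsUpdate(MappingUpdate u) { this.update = u; }
        MappingUpdate dynamicMappingsUpdate() { return update; }
    }
    static class DocMapper {
        ParsedDoc parse(String source) { return new ParsedDoc(); } // stands in for real parsing
    }
    static class IndexOperation {
        final ParsedDoc doc; final long startTimeNanos;
        IndexOperation(ParsedDoc doc, long startTimeNanos) { this.doc = doc; this.startTimeNanos = startTimeNanos; }
    }

    static IndexOperation prepareIndex(DocMapper mapper, MappingUpdate autoCreateUpdate, String source) {
        long startTime = System.nanoTime();
        ParsedDoc doc = mapper.parse(source);
        if (autoCreateUpdate != null) {
            doc.addDynamicMappingsUpdate(autoCreateUpdate); // carry the type-creation update with the doc
        }
        return new IndexOperation(doc, startTime);
    }
}
```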
{
"diff": "@@ -26,14 +26,17 @@\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.engine.IgnoreOnRecoveryEngineException;\n import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperAnalyzer;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.MapperUtils;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.query.IndexQueryParserService;\n import org.elasticsearch.index.translog.Translog;\n \n-import java.util.HashSet;\n-import java.util.Set;\n+import java.util.HashMap;\n+import java.util.Map;\n \n import static org.elasticsearch.index.mapper.SourceToParse.source;\n \n@@ -47,7 +50,7 @@ public class TranslogRecoveryPerformer {\n private final IndexAliasesService indexAliasesService;\n private final IndexCache indexCache;\n private final MapperAnalyzer mapperAnalyzer;\n- private final Set<String> recoveredTypes = new HashSet<>();\n+ private final Map<String, Mapping> recoveredTypes = new HashMap<>();\n \n protected TranslogRecoveryPerformer(MapperService mapperService, MapperAnalyzer mapperAnalyzer, IndexQueryParserService queryParserService, IndexAliasesService indexAliasesService, IndexCache indexCache) {\n this.mapperService = mapperService;\n@@ -57,7 +60,7 @@ protected TranslogRecoveryPerformer(MapperService mapperService, MapperAnalyzer\n this.mapperAnalyzer = mapperAnalyzer;\n }\n \n- protected Tuple<DocumentMapper, Boolean> docMapper(String type) {\n+ protected Tuple<DocumentMapper, Mapping> docMapper(String type) {\n return mapperService.documentMapperWithAutoCreate(type); // protected for testing\n }\n \n@@ -74,6 +77,15 @@ int performBatchRecovery(Engine engine, Iterable<Translog.Operation> operations)\n return numOps;\n }\n \n+ private void addMappingUpdate(String type, Mapping update) {\n+ Mapping currentUpdate = recoveredTypes.get(type);\n+ if (currentUpdate == null) {\n+ recoveredTypes.put(type, update);\n+ } else {\n+ MapperUtils.merge(currentUpdate, update);\n+ }\n+ }\n+\n /**\n * Performs a single recovery operation, and returns the indexing operation (or null if its not an indexing operation)\n * that can then be used for mapping updates (for example) if needed.\n@@ -89,8 +101,8 @@ public void performRecoveryOperation(Engine engine, Translog.Operation operation\n create.version(), create.versionType().versionTypeForReplicationAndRecovery(), Engine.Operation.Origin.RECOVERY, true, false);\n mapperAnalyzer.setType(create.type()); // this is a PITA - once mappings are per index not per type this can go away an we can just simply move this to the engine eventually :)\n engine.create(engineCreate);\n- if (engineCreate.parsedDoc().mappingsModified()) {\n- recoveredTypes.add(engineCreate.type());\n+ if (engineCreate.parsedDoc().dynamicMappingsUpdate() != null) {\n+ addMappingUpdate(engineCreate.type(), engineCreate.parsedDoc().dynamicMappingsUpdate());\n }\n break;\n case SAVE:\n@@ -100,8 +112,8 @@ public void performRecoveryOperation(Engine engine, Translog.Operation operation\n index.version(), index.versionType().versionTypeForReplicationAndRecovery(), Engine.Operation.Origin.RECOVERY, true);\n mapperAnalyzer.setType(index.type());\n engine.index(engineIndex);\n- if (engineIndex.parsedDoc().mappingsModified()) {\n- recoveredTypes.add(engineIndex.type());\n+ if (engineIndex.parsedDoc().dynamicMappingsUpdate() != null) {\n+ 
addMappingUpdate(engineIndex.type(), engineIndex.parsedDoc().dynamicMappingsUpdate());\n }\n break;\n case DELETE:\n@@ -150,7 +162,7 @@ protected void operationProcessed() {\n /**\n * Returns the recovered types modifying the mapping during the recovery\n */\n- public Set<String> getRecoveredTypes() {\n+ public Map<String, Mapping> getRecoveredTypes() {\n return recoveredTypes;\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java",
"status": "modified"
},
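The `TranslogRecoveryPerformer` diff turns `recoveredTypes` into a `Map<String, Mapping>` and folds every later update for a type into the one already recorded. A small sketch of that per-type bookkeeping; the `mergeFrom` operation here is a hypothetical placeholder for the real mapping merge:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: during translog replay, collect one mapping update per type, merging
// additional updates for the same type into the recorded one.
public class RecoveredTypesSketch {

    interface Update {
        void mergeFrom(Update other); // hypothetical: fold other's fields into this update
    }

    private final Map<String, Update> recoveredTypes = new HashMap<>();

    void addMappingUpdate(String type, Update update) {
        Update current = recoveredTypes.get(type);
        if (current == null) {
            recoveredTypes.put(type, update);  // first update seen for this type
        } else {
            current.mergeFrom(update);         // later updates are folded into the recorded one
        }
    }

    Map<String, Update> getRecoveredTypes() {
        return recoveredTypes;                 // consumed once replay has finished
    }
}
```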
{
"diff": "@@ -252,7 +252,7 @@ private Fields generateTermVectors(Collection<GetField> getFields, boolean withO\n return MultiFields.getFields(index.createSearcher().getIndexReader());\n }\n \n- private Fields generateTermVectorsFromDoc(TermVectorsRequest request, boolean doAllFields) throws IOException {\n+ private Fields generateTermVectorsFromDoc(TermVectorsRequest request, boolean doAllFields) throws Throwable {\n // parse the document, at the moment we do update the mapping, just like percolate\n ParsedDocument parsedDocument = parseDocument(indexShard.shardId().getIndex(), request.type(), request.doc());\n \n@@ -283,15 +283,18 @@ private Fields generateTermVectorsFromDoc(TermVectorsRequest request, boolean do\n return generateTermVectors(getFields, request.offsets(), request.perFieldAnalyzer());\n }\n \n- private ParsedDocument parseDocument(String index, String type, BytesReference doc) {\n+ private ParsedDocument parseDocument(String index, String type, BytesReference doc) throws Throwable {\n MapperService mapperService = indexShard.mapperService();\n IndexService indexService = indexShard.indexService();\n \n // TODO: make parsing not dynamically create fields not in the original mapping\n- Tuple<DocumentMapper, Boolean> docMapper = mapperService.documentMapperWithAutoCreate(type);\n- ParsedDocument parsedDocument = docMapper.v1().parse(source(doc).type(type).flyweight(true)).setMappingsModified(docMapper);\n- if (parsedDocument.mappingsModified()) {\n- mappingUpdatedAction.updateMappingOnMaster(index, docMapper.v1(), indexService.indexUUID());\n+ Tuple<DocumentMapper, Mapping> docMapper = mapperService.documentMapperWithAutoCreate(type);\n+ ParsedDocument parsedDocument = docMapper.v1().parse(source(doc).type(type).flyweight(true));\n+ if (docMapper.v2() != null) {\n+ parsedDocument.addDynamicMappingsUpdate(docMapper.v2());\n+ }\n+ if (parsedDocument.dynamicMappingsUpdate() != null) {\n+ mappingUpdatedAction.updateMappingOnMasterSynchronously(index, indexService.indexUUID(), type, parsedDocument.dynamicMappingsUpdate());\n }\n return parsedDocument;\n }",
"filename": "src/main/java/org/elasticsearch/index/termvectors/ShardTermVectorsService.java",
"status": "modified"
},
{
"diff": "@@ -567,7 +567,7 @@ public void onFailure(Throwable t) {\n }\n };\n for (DocumentMapper documentMapper : documentMappersToUpdate) {\n- mappingUpdatedAction.updateMappingOnMaster(indexService.index().getName(), documentMapper, indexService.indexUUID(), listener);\n+ mappingUpdatedAction.updateMappingOnMaster(indexService.index().getName(), indexService.indexUUID(), documentMapper.type(), documentMapper.mapping(), listener);\n }\n cancellableThreads.execute(new Interruptable() {\n @Override",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java",
"status": "modified"
},
{
"diff": "@@ -69,6 +69,8 @@\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.MapperUtils;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n@@ -280,10 +282,13 @@ private ParsedDocument parseRequest(IndexService documentIndexService, Percolate\n }\n \n MapperService mapperService = documentIndexService.mapperService();\n- Tuple<DocumentMapper, Boolean> docMapper = mapperService.documentMapperWithAutoCreate(request.documentType());\n- doc = docMapper.v1().parse(source(parser).type(request.documentType()).flyweight(true)).setMappingsModified(docMapper);\n- if (doc.mappingsModified()) {\n- mappingUpdatedAction.updateMappingOnMaster(request.shardId().getIndex(), docMapper.v1(), documentIndexService.indexUUID());\n+ Tuple<DocumentMapper, Mapping> docMapper = mapperService.documentMapperWithAutoCreate(request.documentType());\n+ doc = docMapper.v1().parse(source(parser).type(request.documentType()).flyweight(true));\n+ if (docMapper.v2() != null) {\n+ doc.addDynamicMappingsUpdate(docMapper.v2());\n+ }\n+ if (doc.dynamicMappingsUpdate() != null) {\n+ mappingUpdatedAction.updateMappingOnMasterSynchronously(request.shardId().getIndex(), documentIndexService.indexUUID(), request.documentType(), doc.dynamicMappingsUpdate());\n }\n // the document parsing exists the \"doc\" object, so we need to set the new current field.\n currentFieldName = parser.currentName();\n@@ -387,7 +392,7 @@ private ParsedDocument parseFetchedDoc(PercolateContext context, BytesReference\n try {\n parser = XContentFactory.xContent(fetchedDoc).createParser(fetchedDoc);\n MapperService mapperService = documentIndexService.mapperService();\n- Tuple<DocumentMapper, Boolean> docMapper = mapperService.documentMapperWithAutoCreate(type);\n+ Tuple<DocumentMapper, Mapping> docMapper = mapperService.documentMapperWithAutoCreate(type);\n doc = docMapper.v1().parse(source(parser).type(type).flyweight(true));\n \n if (context.highlight() != null) {",
"filename": "src/main/java/org/elasticsearch/percolator/PercolatorService.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.index.engine;\n \n+import com.google.common.collect.ImmutableMap;\n+\n import org.apache.log4j.AppenderSkeleton;\n import org.apache.log4j.Level;\n import org.apache.log4j.LogManager;\n@@ -63,11 +65,16 @@\n import org.elasticsearch.index.engine.Engine.Searcher;\n import org.elasticsearch.index.indexing.ShardIndexingService;\n import org.elasticsearch.index.indexing.slowlog.ShardSlowLogIndexingService;\n+import org.elasticsearch.index.mapper.ContentPath;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n+import org.elasticsearch.index.mapper.Mapper.BuilderContext;\n import org.elasticsearch.index.mapper.MapperAnalyzer;\n+import org.elasticsearch.index.mapper.MapperBuilders;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n import org.elasticsearch.index.mapper.ParsedDocument;\n+import org.elasticsearch.index.mapper.RootMapper;\n import org.elasticsearch.index.mapper.internal.SourceFieldMapper;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.index.mapper.object.RootObjectMapper;\n@@ -198,12 +205,12 @@ private Document testDocument() {\n }\n \n \n- private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, long timestamp, long ttl, Document document, BytesReference source, boolean mappingsModified) {\n+ private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, long timestamp, long ttl, Document document, BytesReference source, Mapping mappingUpdate) {\n Field uidField = new Field(\"_uid\", uid, UidFieldMapper.Defaults.FIELD_TYPE);\n Field versionField = new NumericDocValuesField(\"_version\", 0);\n document.add(uidField);\n document.add(versionField);\n- return new ParsedDocument(uidField, versionField, id, type, routing, timestamp, ttl, Arrays.asList(document), source, mappingsModified);\n+ return new ParsedDocument(uidField, versionField, id, type, routing, timestamp, ttl, Arrays.asList(document), source, mappingUpdate);\n }\n \n protected Store createStore() throws IOException {\n@@ -286,10 +293,10 @@ public void testSegments() throws Exception {\n final boolean defaultCompound = defaultSettings.getAsBoolean(EngineConfig.INDEX_COMPOUND_ON_FLUSH, true);\n \n // create a doc and refresh\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n- ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, false);\n+ ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, null);\n engine.create(new Engine.Create(null, newUid(\"2\"), doc2));\n engine.refresh(\"test\");\n \n@@ -322,7 +329,7 @@ public void testSegments() throws Exception {\n \n ((InternalEngine) engine).config().setCompoundOnFlush(false);\n \n- ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, false);\n+ ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, null);\n engine.create(new Engine.Create(null, newUid(\"3\"), doc3));\n 
engine.refresh(\"test\");\n \n@@ -369,7 +376,7 @@ public void testSegments() throws Exception {\n assertThat(segments.get(1).isCompound(), equalTo(false));\n \n ((InternalEngine) engine).config().setCompoundOnFlush(true);\n- ParsedDocument doc4 = testParsedDocument(\"4\", \"4\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, false);\n+ ParsedDocument doc4 = testParsedDocument(\"4\", \"4\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, null);\n engine.create(new Engine.Create(null, newUid(\"4\"), doc4));\n engine.refresh(\"test\");\n \n@@ -400,18 +407,18 @@ public void testVerboseSegments() throws Exception {\n List<Segment> segments = engine.segments(true);\n assertThat(segments.isEmpty(), equalTo(true));\n \n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n engine.refresh(\"test\");\n \n segments = engine.segments(true);\n assertThat(segments.size(), equalTo(1));\n assertThat(segments.get(0).ramTree, notNullValue());\n \n- ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, false);\n+ ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, null);\n engine.create(new Engine.Create(null, newUid(\"2\"), doc2));\n engine.refresh(\"test\");\n- ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, false);\n+ ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, null);\n engine.create(new Engine.Create(null, newUid(\"3\"), doc3));\n engine.refresh(\"test\");\n \n@@ -432,7 +439,7 @@ public void testSegmentsWithMergeFlag() throws Exception {\n Translog translog = createTranslog();\n Engine engine = createEngine(indexSettingsService, store, translog, mergeSchedulerProvider)) {\n \n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n engine.flush();\n@@ -490,7 +497,7 @@ public void testSimpleOperations() throws Exception {\n // create a document\n Document document = testDocumentWithTextField();\n document.add(new Field(SourceFieldMapper.NAME, B_1.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n // its not there...\n@@ -529,7 +536,7 @@ public void testSimpleOperations() throws Exception {\n document = testDocument();\n document.add(new TextField(\"value\", \"test1\", Field.Store.YES));\n document.add(new Field(SourceFieldMapper.NAME, B_2.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_2, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_2, null);\n engine.index(new Engine.Index(null, newUid(\"1\"), doc));\n \n // its not updated yet...\n@@ -582,7 +589,7 
@@ public void testSimpleOperations() throws Exception {\n // add it back\n document = testDocumentWithTextField();\n document.add(new Field(SourceFieldMapper.NAME, B_1.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n // its not there...\n@@ -616,7 +623,7 @@ public void testSimpleOperations() throws Exception {\n // now do an update\n document = testDocument();\n document.add(new TextField(\"value\", \"test1\", Field.Store.YES));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n engine.index(new Engine.Index(null, newUid(\"1\"), doc));\n \n // its not updated yet...\n@@ -645,7 +652,7 @@ public void testSearchResultRelease() throws Exception {\n searchResult.close();\n \n // create a document\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n // its not there...\n@@ -678,7 +685,7 @@ public void testSearchResultRelease() throws Exception {\n \n @Test\n public void testFailEngineOnCorruption() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n engine.flush();\n final int failInPhase = randomIntBetween(1, 3);\n@@ -715,7 +722,7 @@ public void phase3(Translog.Snapshot snapshot) throws EngineException {\n MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term(\"value\", \"test\")), 1));\n searchResult.close();\n \n- ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, false);\n+ ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, null);\n engine.create(new Engine.Create(null, newUid(\"2\"), doc2));\n engine.refresh(\"foo\");\n \n@@ -732,7 +739,7 @@ public void phase3(Translog.Snapshot snapshot) throws EngineException {\n \n @Test\n public void testSimpleRecover() throws Exception {\n- final ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ final ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n engine.flush();\n \n@@ -789,10 +796,10 @@ public void phase3(Translog.Snapshot snapshot) throws EngineException {\n \n @Test\n public void testRecoverWithOperationsBetweenPhase1AndPhase2() throws Exception {\n- ParsedDocument doc1 = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc1 = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc1));\n engine.flush();\n- 
ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, false);\n+ ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, null);\n engine.create(new Engine.Create(null, newUid(\"2\"), doc2));\n \n engine.recover(new Engine.RecoveryHandler() {\n@@ -824,10 +831,10 @@ public void phase3(Translog.Snapshot snapshot) throws EngineException {\n \n @Test\n public void testRecoverWithOperationsBetweenPhase1AndPhase2AndPhase3() throws Exception {\n- ParsedDocument doc1 = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc1 = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc1));\n engine.flush();\n- ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, false);\n+ ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, null);\n engine.create(new Engine.Create(null, newUid(\"2\"), doc2));\n \n engine.recover(new Engine.RecoveryHandler() {\n@@ -844,7 +851,7 @@ public void phase2(Translog.Snapshot snapshot) throws EngineException {\n assertThat(create.source().toBytesArray(), equalTo(B_2));\n \n // add for phase3\n- ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, false);\n+ ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, null);\n engine.create(new Engine.Create(null, newUid(\"3\"), doc3));\n } catch (IOException ex) {\n throw new ElasticsearchException(\"failed\", ex);\n@@ -870,7 +877,7 @@ public void phase3(Translog.Snapshot snapshot) throws EngineException {\n \n @Test\n public void testVersioningNewCreate() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Create create = new Engine.Create(null, newUid(\"1\"), doc);\n engine.create(create);\n assertThat(create.version(), equalTo(1l));\n@@ -882,7 +889,7 @@ public void testVersioningNewCreate() {\n \n @Test\n public void testExternalVersioningNewCreate() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Create create = new Engine.Create(null, newUid(\"1\"), doc, 12, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, 0);\n engine.create(create);\n assertThat(create.version(), equalTo(12l));\n@@ -894,7 +901,7 @@ public void testExternalVersioningNewCreate() {\n \n @Test\n public void testVersioningNewIndex() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertThat(index.version(), equalTo(1l));\n@@ -906,7 +913,7 @@ public void testVersioningNewIndex() {\n \n @Test\n public void testExternalVersioningNewIndex() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), 
B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc, 12, VersionType.EXTERNAL, PRIMARY, 0);\n engine.index(index);\n assertThat(index.version(), equalTo(12l));\n@@ -918,7 +925,7 @@ public void testExternalVersioningNewIndex() {\n \n @Test\n public void testVersioningIndexConflict() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertThat(index.version(), equalTo(1l));\n@@ -947,7 +954,7 @@ public void testVersioningIndexConflict() {\n \n @Test\n public void testExternalVersioningIndexConflict() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc, 12, VersionType.EXTERNAL, PRIMARY, 0);\n engine.index(index);\n assertThat(index.version(), equalTo(12l));\n@@ -967,7 +974,7 @@ public void testExternalVersioningIndexConflict() {\n \n @Test\n public void testVersioningIndexConflictWithFlush() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertThat(index.version(), equalTo(1l));\n@@ -998,7 +1005,7 @@ public void testVersioningIndexConflictWithFlush() {\n \n @Test\n public void testExternalVersioningIndexConflictWithFlush() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc, 12, VersionType.EXTERNAL, PRIMARY, 0);\n engine.index(index);\n assertThat(index.version(), equalTo(12l));\n@@ -1021,7 +1028,7 @@ public void testExternalVersioningIndexConflictWithFlush() {\n public void testForceMerge() {\n int numDocs = randomIntBetween(10, 100);\n for (int i = 0; i < numDocs; i++) {\n- ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(Integer.toString(i)), doc);\n engine.index(index);\n engine.refresh(\"test\");\n@@ -1032,7 +1039,7 @@ public void testForceMerge() {\n engine.forceMerge(true, 1, false, false, false);\n assertEquals(engine.segments(true).size(), 1);\n \n- ParsedDocument doc = testParsedDocument(Integer.toString(0), Integer.toString(0), \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(Integer.toString(0), Integer.toString(0), \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(Integer.toString(0)), doc);\n engine.delete(new Engine.Delete(index.type(), index.id(), index.uid()));\n engine.forceMerge(true, 10, 
true, false, false); //expunge deletes\n@@ -1043,7 +1050,7 @@ public void testForceMerge() {\n assertEquals(numDocs - 1, test.reader().maxDoc());\n }\n \n- doc = testParsedDocument(Integer.toString(1), Integer.toString(1), \"test\", null, -1, -1, testDocument(), B_1, false);\n+ doc = testParsedDocument(Integer.toString(1), Integer.toString(1), \"test\", null, -1, -1, testDocument(), B_1, null);\n index = new Engine.Index(null, newUid(Integer.toString(1)), doc);\n engine.delete(new Engine.Delete(index.type(), index.id(), index.uid()));\n engine.forceMerge(true, 10, false, false, false); //expunge deletes\n@@ -1077,7 +1084,7 @@ public void run() {\n int numDocs = randomIntBetween(1, 20);\n for (int j = 0; j < numDocs; j++) {\n i++;\n- ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(Integer.toString(i)), doc);\n engine.index(index);\n }\n@@ -1111,7 +1118,7 @@ public void run() {\n \n @Test\n public void testVersioningDeleteConflict() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertThat(index.version(), equalTo(1l));\n@@ -1162,7 +1169,7 @@ public void testVersioningDeleteConflict() {\n \n @Test\n public void testVersioningDeleteConflictWithFlush() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertThat(index.version(), equalTo(1l));\n@@ -1219,7 +1226,7 @@ public void testVersioningDeleteConflictWithFlush() {\n \n @Test\n public void testVersioningCreateExistsException() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Create create = new Engine.Create(null, newUid(\"1\"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, 0);\n engine.create(create);\n assertThat(create.version(), equalTo(1l));\n@@ -1235,7 +1242,7 @@ public void testVersioningCreateExistsException() {\n \n @Test\n public void testVersioningCreateExistsExceptionWithFlush() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Create create = new Engine.Create(null, newUid(\"1\"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, 0);\n engine.create(create);\n assertThat(create.version(), equalTo(1l));\n@@ -1253,7 +1260,7 @@ public void testVersioningCreateExistsExceptionWithFlush() {\n \n @Test\n public void testVersioningReplicaConflict1() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n 
Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertThat(index.version(), equalTo(1l));\n@@ -1289,7 +1296,7 @@ public void testVersioningReplicaConflict1() {\n \n @Test\n public void testVersioningReplicaConflict2() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertThat(index.version(), equalTo(1l));\n@@ -1339,7 +1346,7 @@ public void testVersioningReplicaConflict2() {\n \n @Test\n public void testBasicCreatedFlag() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertTrue(index.created());\n@@ -1357,7 +1364,7 @@ public void testBasicCreatedFlag() {\n \n @Test\n public void testCreatedFlagAfterFlush() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n assertTrue(index.created());\n@@ -1414,7 +1421,7 @@ public void testIndexWriterInfoStream() {\n \n try {\n // First, with DEBUG, which should NOT log IndexWriter output:\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n engine.flush();\n assertFalse(mockAppender.sawIndexWriterMessage);\n@@ -1450,7 +1457,7 @@ public void testIndexWriterIFDInfoStream() {\n \n try {\n // First, with DEBUG, which should NOT log IndexWriter output:\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n engine.create(new Engine.Create(null, newUid(\"1\"), doc));\n engine.flush();\n assertFalse(mockAppender.sawIndexWriterMessage);\n@@ -1482,7 +1489,7 @@ public void testEnableGcDeletes() throws Exception {\n Document document = testDocument();\n document.add(new TextField(\"value\", \"test1\", Field.Store.YES));\n \n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_2, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_2, null);\n engine.index(new Engine.Index(null, newUid(\"1\"), doc, 1, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), false));\n \n // Delete document we just added:\n@@ -1611,7 +1618,7 @@ public void testSettings() {\n @Test\n public void testRetryWithAutogeneratedIdWorksAndNoDuplicateDocs() throws IOException {\n \n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n boolean canHaveDuplicates = false;\n 
boolean autoGeneratedId = true;\n \n@@ -1650,7 +1657,7 @@ public void testRetryWithAutogeneratedIdWorksAndNoDuplicateDocs() throws IOExcep\n @Test\n public void testRetryWithAutogeneratedIdsAndWrongOrderWorksAndNoDuplicateDocs() throws IOException {\n \n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, null);\n boolean canHaveDuplicates = true;\n boolean autoGeneratedId = true;\n \n@@ -1703,7 +1710,7 @@ public void testDeletesAloneCanTriggerRefresh() throws Exception {\n final Engine engine = new InternalEngine(config(indexSettingsService, store, translog, createMergeScheduler(indexSettingsService)), false)) {\n for (int i = 0; i < 100; i++) {\n String id = Integer.toString(i);\n- ParsedDocument doc = testParsedDocument(id, id, \"test\", null, -1, -1, testDocument(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(id, id, \"test\", null, -1, -1, testDocument(), B_1, null);\n engine.index(new Engine.Index(null, newUid(id), doc, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()));\n }\n \n@@ -1738,7 +1745,7 @@ public void testTranslogReplayWithFailure() throws IOException {\n boolean autoGeneratedId = true;\n final int numDocs = randomIntBetween(1, 10);\n for (int i = 0; i < numDocs; i++) {\n- ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), false);\n+ ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), null);\n Engine.Create firstIndexRequest = new Engine.Create(null, newUid(Integer.toString(i)), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n engine.create(firstIndexRequest);\n assertThat(firstIndexRequest.version(), equalTo(1l));\n@@ -1795,7 +1802,7 @@ public void testSkipTranslogReplay() throws IOException {\n boolean autoGeneratedId = true;\n final int numDocs = randomIntBetween(1, 10);\n for (int i = 0; i < numDocs; i++) {\n- ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), false);\n+ ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), null);\n Engine.Create firstIndexRequest = new Engine.Create(null, newUid(Integer.toString(i)), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n engine.create(firstIndexRequest);\n assertThat(firstIndexRequest.version(), equalTo(1l));\n@@ -1824,12 +1831,18 @@ public void testSkipTranslogReplay() throws IOException {\n \n }\n \n+ private Mapping dynamicUpdate() {\n+ BuilderContext context = new BuilderContext(ImmutableSettings.EMPTY, new ContentPath());\n+ final RootObjectMapper root = MapperBuilders.rootObject(\"some_type\").build(context);\n+ return new Mapping(root, new RootMapper[0], new Mapping.SourceTransform[0], ImmutableMap.<String, Object>of());\n+ }\n+\n public void testTranslogReplay() throws IOException {\n boolean canHaveDuplicates = true;\n boolean autoGeneratedId = true;\n final int numDocs = randomIntBetween(1, 10);\n for (int i = 0; i < numDocs; i++) {\n- ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), 
\"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), false);\n+ ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), null);\n Engine.Create firstIndexRequest = new Engine.Create(null, newUid(Integer.toString(i)), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n engine.create(firstIndexRequest);\n assertThat(firstIndexRequest.version(), equalTo(1l));\n@@ -1847,7 +1860,7 @@ public void testTranslogReplay() throws IOException {\n }\n \n TranslogHandler parser = (TranslogHandler) engine.config().getTranslogRecoveryPerformer();\n- parser.mappingModified = randomBoolean();\n+ parser.mappingUpdate = dynamicUpdate();\n \n long currentTranslogId = translog.currentId();\n engine.close();\n@@ -1861,9 +1874,9 @@ public void testTranslogReplay() throws IOException {\n }\n parser = (TranslogHandler) engine.config().getTranslogRecoveryPerformer();\n assertEquals(numDocs, parser.recoveredOps.get());\n- if (parser.mappingModified) {\n+ if (parser.mappingUpdate != null) {\n assertEquals(1, parser.getRecoveredTypes().size());\n- assertTrue(parser.getRecoveredTypes().contains(\"test\"));\n+ assertTrue(parser.getRecoveredTypes().containsKey(\"test\"));\n } else {\n assertEquals(0, parser.getRecoveredTypes().size());\n }\n@@ -1880,15 +1893,15 @@ public void testTranslogReplay() throws IOException {\n final boolean flush = randomBoolean();\n int randomId = randomIntBetween(numDocs + 1, numDocs + 10);\n String uuidValue = \"test#\" + Integer.toString(randomId);\n- ParsedDocument doc = testParsedDocument(uuidValue, Integer.toString(randomId), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), false);\n+ ParsedDocument doc = testParsedDocument(uuidValue, Integer.toString(randomId), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), null);\n Engine.Create firstIndexRequest = new Engine.Create(null, newUid(uuidValue), doc, 1, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n engine.create(firstIndexRequest);\n assertThat(firstIndexRequest.version(), equalTo(1l));\n if (flush) {\n engine.flush();\n }\n \n- doc = testParsedDocument(uuidValue, Integer.toString(randomId), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), false);\n+ doc = testParsedDocument(uuidValue, Integer.toString(randomId), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), null);\n Engine.Index idxRequest = new Engine.Index(null, newUid(uuidValue), doc, 2, VersionType.EXTERNAL, PRIMARY, System.nanoTime());\n engine.index(idxRequest);\n engine.refresh(\"test\");\n@@ -1922,7 +1935,7 @@ public void testTranslogReplay() throws IOException {\n public static class TranslogHandler extends TranslogRecoveryPerformer {\n \n private final DocumentMapper docMapper;\n- public boolean mappingModified = false;\n+ public Mapping mappingUpdate = null;\n \n public final AtomicInteger recoveredOps = new AtomicInteger(0);\n \n@@ -1939,8 +1952,8 @@ public TranslogHandler(String index) {\n }\n \n @Override\n- protected Tuple<DocumentMapper, Boolean> docMapper(String type) {\n- return new Tuple<>(docMapper, mappingModified);\n+ protected Tuple<DocumentMapper, Mapping> docMapper(String type) {\n+ return new Tuple<>(docMapper, mappingUpdate);\n }\n \n @Override",
"filename": "src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
},
{
"diff": "@@ -45,6 +45,7 @@\n import org.elasticsearch.index.deletionpolicy.SnapshotDeletionPolicy;\n import org.elasticsearch.index.indexing.ShardIndexingService;\n import org.elasticsearch.index.indexing.slowlog.ShardSlowLogIndexingService;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.internal.SourceFieldMapper;\n@@ -175,12 +176,12 @@ private ParseContext.Document testDocument() {\n }\n \n \n- private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, long timestamp, long ttl, ParseContext.Document document, BytesReference source, boolean mappingsModified) {\n+ private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, long timestamp, long ttl, ParseContext.Document document, BytesReference source, Mapping mappingsUpdate) {\n Field uidField = new Field(\"_uid\", uid, UidFieldMapper.Defaults.FIELD_TYPE);\n Field versionField = new NumericDocValuesField(\"_version\", 0);\n document.add(uidField);\n document.add(versionField);\n- return new ParsedDocument(uidField, versionField, id, type, routing, timestamp, ttl, Arrays.asList(document), source, mappingsModified);\n+ return new ParsedDocument(uidField, versionField, id, type, routing, timestamp, ttl, Arrays.asList(document), source, mappingsUpdate);\n }\n \n protected Store createStore(Path p) throws IOException {\n@@ -276,10 +277,10 @@ public void testSegments() throws Exception {\n final boolean defaultCompound = defaultSettings.getAsBoolean(EngineConfig.INDEX_COMPOUND_ON_FLUSH, true);\n \n // create a doc and refresh\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n- ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, false);\n+ ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"2\"), doc2));\n primaryEngine.refresh(\"test\");\n \n@@ -338,7 +339,7 @@ public void testSegments() throws Exception {\n \n primaryEngine.config().setCompoundOnFlush(false);\n \n- ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, false);\n+ ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"3\"), doc3));\n primaryEngine.refresh(\"test\");\n \n@@ -409,7 +410,7 @@ public void testSegments() throws Exception {\n replicaEngine.refresh(\"test\");\n \n primaryEngine.config().setCompoundOnFlush(true);\n- ParsedDocument doc4 = testParsedDocument(\"4\", \"4\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, false);\n+ ParsedDocument doc4 = testParsedDocument(\"4\", \"4\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"4\"), doc4));\n primaryEngine.refresh(\"test\");\n \n@@ -441,18 +442,18 @@ public void testVerboseSegments() throws Exception {\n List<Segment> segments = primaryEngine.segments(true);\n assertThat(segments.isEmpty(), 
equalTo(true));\n \n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n primaryEngine.refresh(\"test\");\n \n segments = primaryEngine.segments(true);\n assertThat(segments.size(), equalTo(1));\n assertThat(segments.get(0).ramTree, notNullValue());\n \n- ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, false);\n+ ParsedDocument doc2 = testParsedDocument(\"2\", \"2\", \"test\", null, -1, -1, testDocumentWithTextField(), B_2, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"2\"), doc2));\n primaryEngine.refresh(\"test\");\n- ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, false);\n+ ParsedDocument doc3 = testParsedDocument(\"3\", \"3\", \"test\", null, -1, -1, testDocumentWithTextField(), B_3, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"3\"), doc3));\n primaryEngine.refresh(\"test\");\n \n@@ -479,7 +480,7 @@ public void testShadowEngineIgnoresWriteOperations() throws Exception {\n // create a document\n ParseContext.Document document = testDocumentWithTextField();\n document.add(new Field(SourceFieldMapper.NAME, B_1.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n try {\n replicaEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n fail(\"should have thrown an exception\");\n@@ -498,7 +499,7 @@ public void testShadowEngineIgnoresWriteOperations() throws Exception {\n // index a document\n document = testDocument();\n document.add(new TextField(\"value\", \"test1\", Field.Store.YES));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n try {\n replicaEngine.index(new Engine.Index(null, newUid(\"1\"), doc));\n fail(\"should have thrown an exception\");\n@@ -517,7 +518,7 @@ public void testShadowEngineIgnoresWriteOperations() throws Exception {\n // Now, add a document to the primary so we can test shadow engine deletes\n document = testDocumentWithTextField();\n document.add(new Field(SourceFieldMapper.NAME, B_1.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n primaryEngine.flush();\n replicaEngine.refresh(\"test\");\n@@ -573,7 +574,7 @@ public void testSimpleOperations() throws Exception {\n // create a document\n ParseContext.Document document = testDocumentWithTextField();\n document.add(new Field(SourceFieldMapper.NAME, B_1.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n // its not there...\n@@ -629,7 
+630,7 @@ public void testSimpleOperations() throws Exception {\n document = testDocument();\n document.add(new TextField(\"value\", \"test1\", Field.Store.YES));\n document.add(new Field(SourceFieldMapper.NAME, B_2.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_2, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_2, null);\n primaryEngine.index(new Engine.Index(null, newUid(\"1\"), doc));\n \n // its not updated yet...\n@@ -700,7 +701,7 @@ public void testSimpleOperations() throws Exception {\n // add it back\n document = testDocumentWithTextField();\n document.add(new Field(SourceFieldMapper.NAME, B_1.toBytes(), SourceFieldMapper.Defaults.FIELD_TYPE));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n // its not there...\n@@ -747,7 +748,7 @@ public void testSimpleOperations() throws Exception {\n // now do an update\n document = testDocument();\n document.add(new TextField(\"value\", \"test1\", Field.Store.YES));\n- doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, false);\n+ doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, document, B_1, null);\n primaryEngine.index(new Engine.Index(null, newUid(\"1\"), doc));\n \n // its not updated yet...\n@@ -784,7 +785,7 @@ public void testSearchResultRelease() throws Exception {\n searchResult.close();\n \n // create a document\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n \n // its not there...\n@@ -830,7 +831,7 @@ public void testSearchResultRelease() throws Exception {\n \n @Test\n public void testFailEngineOnCorruption() {\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n primaryEngine.flush();\n MockDirectoryWrapper leaf = DirectoryUtils.getLeaf(replicaEngine.config().getStore().directory(), MockDirectoryWrapper.class);\n@@ -869,7 +870,7 @@ public void testExtractShardId() {\n @Test\n public void testFailStart() throws IOException {\n // Need a commit point for this\n- ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, false);\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n primaryEngine.create(new Engine.Create(null, newUid(\"1\"), doc));\n primaryEngine.flush();\n ",
"filename": "src/test/java/org/elasticsearch/index/engine/ShadowEngineTests.java",
"status": "modified"
},
{
"diff": "@@ -20,8 +20,8 @@\n package org.elasticsearch.index.mapper.camelcase;\n \n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.junit.Test;\n@@ -39,18 +39,22 @@ public void testCamelCaseFieldNameStaysAsIs() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .endObject().endObject().string();\n \n- DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n- DocumentMapper documentMapper = parser.parse(mapping);\n+ IndexService index = createIndex(\"test\");\n+ client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(mapping).get();\n+ DocumentMapper documentMapper = index.mapperService().documentMapper(\"type\");\n \n ParsedDocument doc = documentMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder().startObject()\n .field(\"thisIsCamelCase\", \"value1\")\n .endObject().bytes());\n \n+ assertNotNull(doc.dynamicMappingsUpdate());\n+ client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(doc.dynamicMappingsUpdate().toString()).get();\n+\n assertThat(documentMapper.mappers().indexName(\"thisIsCamelCase\").isEmpty(), equalTo(false));\n assertThat(documentMapper.mappers().indexName(\"this_is_camel_case\"), nullValue());\n \n documentMapper.refreshSource();\n- documentMapper = parser.parse(documentMapper.mappingSource().string());\n+ documentMapper = index.mapperService().documentMapperParser().parse(documentMapper.mappingSource().string());\n \n assertThat(documentMapper.mappers().indexName(\"thisIsCamelCase\").isEmpty(), equalTo(false));\n assertThat(documentMapper.mappers().indexName(\"this_is_camel_case\"), nullValue());",
"filename": "src/test/java/org/elasticsearch/index/mapper/camelcase/CamelCaseFieldNameTests.java",
"status": "modified"
},
{
"diff": "@@ -20,17 +20,23 @@\n package org.elasticsearch.index.mapper.copyto;\n \n import com.google.common.collect.ImmutableList;\n+\n import org.apache.lucene.index.IndexableField;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n-import org.elasticsearch.index.mapper.*;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.DocumentMapperParser;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n+import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.core.LongFieldMapper;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.junit.Test;\n \n@@ -40,7 +46,10 @@\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.mapper.DocumentMapper.MergeFlags.mergeFlags;\n-import static org.hamcrest.Matchers.*;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.startsWith;\n \n /**\n *\n@@ -72,7 +81,9 @@ public void testCopyToFieldsParsing() throws Exception {\n .endObject()\n .endObject().endObject().endObject().string();\n \n- DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+ IndexService index = createIndex(\"test\");\n+ client().admin().indices().preparePutMapping(\"test\").setType(\"type1\").setSource(mapping).get();\n+ DocumentMapper docMapper = index.mapperService().documentMapper(\"type1\");\n FieldMapper fieldMapper = docMapper.mappers().name(\"copy_test\").mapper();\n assertThat(fieldMapper, instanceOf(StringFieldMapper.class));\n \n@@ -96,7 +107,8 @@ public void testCopyToFieldsParsing() throws Exception {\n .field(\"int_to_str_test\", 42)\n .endObject().bytes();\n \n- ParseContext.Document doc = docMapper.parse(\"type1\", \"1\", json).rootDoc();\n+ ParsedDocument parsedDoc = docMapper.parse(\"type1\", \"1\", json);\n+ ParseContext.Document doc = parsedDoc.rootDoc();\n assertThat(doc.getFields(\"copy_test\").length, equalTo(2));\n assertThat(doc.getFields(\"copy_test\")[0].stringValue(), equalTo(\"foo\"));\n assertThat(doc.getFields(\"copy_test\")[1].stringValue(), equalTo(\"bar\"));\n@@ -115,6 +127,9 @@ public void testCopyToFieldsParsing() throws Exception {\n assertThat(doc.getFields(\"new_field\").length, equalTo(2)); // new field has doc values\n assertThat(doc.getFields(\"new_field\")[0].numericValue().intValue(), equalTo(42));\n \n+ assertNotNull(parsedDoc.dynamicMappingsUpdate());\n+ client().admin().indices().preparePutMapping(\"test\").setType(\"type1\").setSource(parsedDoc.dynamicMappingsUpdate().toString()).get();\n+\n fieldMapper = docMapper.mappers().name(\"new_field\").mapper();\n assertThat(fieldMapper, instanceOf(LongFieldMapper.class));\n }\n@@ -215,11 +230,11 @@ public void testCopyToFieldMerge() throws Exception {\n \n DocumentMapper docMapperAfter = 
parser.parse(mappingAfter);\n \n- DocumentMapper.MergeResult mergeResult = docMapperBefore.merge(docMapperAfter, mergeFlags().simulate(true));\n+ DocumentMapper.MergeResult mergeResult = docMapperBefore.merge(docMapperAfter.mapping(), mergeFlags().simulate(true));\n \n assertThat(Arrays.toString(mergeResult.conflicts()), mergeResult.hasConflicts(), equalTo(false));\n \n- docMapperBefore.merge(docMapperAfter, mergeFlags().simulate(false));\n+ docMapperBefore.merge(docMapperAfter.mapping(), mergeFlags().simulate(false));\n \n fields = docMapperBefore.mappers().name(\"copy_test\").mapper().copyTo().copyToFields();\n ",
"filename": "src/test/java/org/elasticsearch/index/mapper/copyto/CopyToMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -64,12 +64,12 @@ public void testMerge() throws IOException {\n .endObject().endObject().string();\n DocumentMapper stage2 = parser.parse(stage2Mapping);\n \n- DocumentMapper.MergeResult mergeResult = stage1.merge(stage2, mergeFlags().simulate(true));\n+ DocumentMapper.MergeResult mergeResult = stage1.merge(stage2.mapping(), mergeFlags().simulate(true));\n assertThat(mergeResult.hasConflicts(), equalTo(false));\n // Just simulated so merge hasn't happened yet\n assertThat(((TokenCountFieldMapper) stage1.mappers().smartNameFieldMapper(\"tc\")).analyzer(), equalTo(\"keyword\"));\n \n- mergeResult = stage1.merge(stage2, mergeFlags().simulate(false));\n+ mergeResult = stage1.merge(stage2.mapping(), mergeFlags().simulate(false));\n assertThat(mergeResult.hasConflicts(), equalTo(false));\n // Just simulated so merge hasn't happened yet\n assertThat(((TokenCountFieldMapper) stage1.mappers().smartNameFieldMapper(\"tc\")).analyzer(), equalTo(\"standard\"));",
"filename": "src/test/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapperTests.java",
"status": "modified"
}
]
} |
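The test changes above all track one API shift: instead of a `mappingsModified` boolean, parsing a document now returns the dynamic mappings update itself, and the caller must apply it explicitly (the tests do so with a put-mapping call) before the new fields become visible. The example below is a stand-alone sketch of that "return the update, don't just flag it" pattern; the `ParseResult`, `MappingUpdate`, and `Indexer` types are invented for illustration and are not the Elasticsearch classes.

```java
import java.util.Optional;

// Illustrative sketch only: a parse result that carries the dynamic mapping
// update it produced, instead of a boolean "mappings were modified" flag.
final class MappingUpdate {
    final String json;
    MappingUpdate(String json) { this.json = json; }
}

final class ParseResult {
    private final Optional<MappingUpdate> dynamicUpdate;
    ParseResult(MappingUpdate dynamicUpdate) { this.dynamicUpdate = Optional.ofNullable(dynamicUpdate); }

    // Callers can no longer silently drop the update: they must publish it themselves.
    Optional<MappingUpdate> dynamicMappingsUpdate() { return dynamicUpdate; }
}

final class Indexer {
    void index(ParseResult result) {
        // Apply the update (e.g. through a put-mapping call) before relying on the new fields.
        result.dynamicMappingsUpdate().ifPresent(update -> publishMapping(update.json));
    }

    private void publishMapping(String mappingJson) {
        System.out.println("publishing dynamic mapping update: " + mappingJson);
    }
}
```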
{
"body": "We're running into an recurring issue of late where any change in cluster state (typically a node restart during an upgrade for example) results in the following errors streaming to the log of the currently elected master, as well as very high I/O on all eligible masters:\n\n```\n[2015-04-13 21:02:37,917][WARN ][cluster.metadata ] [elasticsearch-master2.localdomain] [pelias] \nre-syncing mappings with cluster state for types [[osmway, locality, openaddresses, geonames\nosmaddress, admin0, neighborhood, admin2, osmnode, admin1, local_admin]]\n```\n\nWe are presently running 1.5.1, but we had the same issue on 1.5.0. I'm cannot say with certainty whether we experienced the issue under 1.4.x. The production cluster setup is as follows, but we have also seen this problem in local development (single node):\n- 3 master eligible nodes\n- 2 routing (no data, no master) nodes\n- 8 data nodes\n- 40 shards, no replicas, ~600gb of data\n\nThe problematic mapping seems to be the `location` part of the suggester field:\n\n```\nPUT pelias\n{\n \"mappings\": {\n \"admin0\": {\n \"properties\": {\n \"suggest\": {\n \"context\": {\n \"dataset\": {\n \"default\": [],\n \"path\": \"_type\",\n \"type\": \"category\"\n },\n \"location\": {\n \"default\": [],\n \"neighbors\": true,\n \"path\": \"center_point\",\n \"precision\": [2,3,1,5,4],\n \"type\": \"geo\"\n }\n },\n \"max_input_length\": 50,\n \"payloads\": false,\n \"preserve_position_increments\": false,\n \"preserve_separators\": false,\n \"type\": \"completion\"\n }\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "@areek please could you take a look at this. It appears to be the reordering of the `precision` field, as mentioned in #8937\n",
"created_at": "2015-04-14T13:16:01Z"
}
],
"number": 10581,
"title": "Endless re-syncing mappings"
} | {
"body": "closes #10581\ncloses #8937\n",
"number": 10602,
"review_comments": [
{
"body": "Why does this need a single shard?\n",
"created_at": "2015-04-15T07:16:55Z"
},
{
"body": "Should we pass in a random order?\n",
"created_at": "2015-04-15T07:17:32Z"
},
{
"body": "The bug could be reproduced using a single shard, changed it to test with multiple shards\n",
"created_at": "2015-04-15T15:41:35Z"
},
{
"body": "Thanks for the suggestion, fixed\n",
"created_at": "2015-04-15T15:41:36Z"
}
],
"title": "Make GeoContext mapping idempotent"
} | {
"commits": [
{
"message": "[FIX] Make GeoContext mapping idempotent\n\ncloses #10581\ncloses #8937"
}
],
"files": [
{
"diff": "@@ -601,7 +601,9 @@ public GeolocationContextMapping build() {\n if(precisions.isEmpty()) {\n precisions.add(GeoHashUtils.PRECISION);\n }\n- return new GeolocationContextMapping(name, precisions.toArray(), neighbors, defaultLocations, fieldName);\n+ int[] precisionArray = precisions.toArray();\n+ Arrays.sort(precisionArray);\n+ return new GeolocationContextMapping(name, precisionArray, neighbors, defaultLocations, fieldName);\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java",
"status": "modified"
},
{
"diff": "@@ -24,8 +24,10 @@\n import org.elasticsearch.action.suggest.SuggestRequest;\n import org.elasticsearch.action.suggest.SuggestRequestBuilder;\n import org.elasticsearch.action.suggest.SuggestResponse;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.geo.GeoHashUtils;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.mapper.MapperParsingException;\n@@ -153,7 +155,43 @@ public void testMultiLevelGeo() throws Exception {\n assertEquals(\"Hotel Amsterdam in Berlin\", suggestResponse.getSuggest().getSuggestion(suggestionName).iterator().next()\n .getOptions().iterator().next().getText().string()); \n }\n- } \n+ }\n+\n+ @Test\n+ public void testMappingIdempotency() throws Exception {\n+ List<Integer> precisions = new ArrayList<>();\n+ for (int i = 0; i < randomIntBetween(4, 12); i++) {\n+ precisions.add(i+1);\n+ }\n+ Collections.shuffle(precisions, getRandom());\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(TYPE)\n+ .startObject(\"properties\").startObject(\"completion\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .array(\"precision\", precisions.toArray(new Integer[precisions.size()]))\n+ .endObject()\n+ .endObject().endObject()\n+ .endObject().endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(TYPE, mapping.string()));\n+ ensureYellow();\n+\n+ Collections.shuffle(precisions, getRandom());\n+ mapping = jsonBuilder().startObject().startObject(TYPE)\n+ .startObject(\"properties\").startObject(\"completion\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .array(\"precision\", precisions.toArray(new Integer[precisions.size()]))\n+ .endObject()\n+ .endObject().endObject()\n+ .endObject().endObject();\n+ assertAcked(client().admin().indices().preparePutMapping(INDEX).setType(TYPE).setSource(mapping.string()).get());\n+ }\n+\n \n @Test\n public void testGeoField() throws Exception {",
"filename": "src/test/java/org/elasticsearch/search/suggest/ContextSuggestSearchTests.java",
"status": "modified"
}
]
} |
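The one-line fix in the record above sorts the `precision` array inside `GeolocationContextMapping.Builder.build()`, so a context declared as `[2,3,1,5,4]` and one declared as `[1,2,3,4,5]` build identical mappings and the merge check no longer reports a spurious `context_mapping` conflict. The sketch below is a minimal stand-alone illustration of that normalization idea, assuming an invented `ContextPrecision` class; it is not the Elasticsearch code.

```java
import java.util.Arrays;

// Minimal illustration (not the Elasticsearch classes): a context whose
// equality must not depend on the order in which precisions were declared.
final class ContextPrecision {
    private final int[] precisions;

    ContextPrecision(int... precisions) {
        // Normalize on construction, mirroring the Arrays.sort() added by the fix,
        // so that logically identical mappings compare as equal.
        this.precisions = precisions.clone();
        Arrays.sort(this.precisions);
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof ContextPrecision
                && Arrays.equals(precisions, ((ContextPrecision) other).precisions);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(precisions);
    }

    public static void main(String[] args) {
        // The same precisions in a different order no longer look like a conflicting mapping.
        System.out.println(new ContextPrecision(2, 3, 1, 5, 4)
                .equals(new ContextPrecision(1, 2, 3, 4, 5))); // prints: true
    }
}
```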
{
"body": "Completion mapping is not idempotent in the same way other field types are, probably due to the internal re-ordering of the `precision` array.\n\nIn the example below you can see that updating the mapping twice causes an error, if you start afresh and use the `precision` values `[ 1,2,3 ]` the issue goes away.\n\n**Note** when you `PUT` the mapping as below and then `GET` it back you'll notice the ordering of the precision array is now `[2,3,1,4]`.\n\n``` bash\n#!/bin/bash\n\n################################################\n# Completion Mapping Idempotence\n################################################\n\nES='localhost:9200';\n\n# drop/create index\ncurl -s -XDELETE \"$ES/testcase?pretty=true\" >/dev/null;\ncurl -s -XPUT \"$ES/testcase?pretty=true\" >/dev/null;\n\n# update mapping\nupdate_mapping() {\n curl -s -XPUT \"$ES/testcase/_mapping/mytype?pretty=true\" -d '\n {\n \"properties\": {\n \"suggest\": {\n \"type\": \"completion\",\n \"context\": {\n \"location\": {\n \"type\": \"geo\",\n \"precision\": [ 1, 2, 3, 4 ]\n }\n }\n }\n }\n }';\n}\n\nupdate_mapping;\n# \"acknowledged\" : true\n\nupdate_mapping;\n# \"error\" : \"MergeMappingException[Merge failed with failures {[mapper [suggest] has different 'context_mapping' values]}]\"\n\ncurl -s \"$ES/testcase/mytype/_mapping?pretty=true\" | grep precision\n# \"precision\" : [ 2, 3, 1, 4 ]\n```\n\ncc/ @markharwood as it may be related to https://github.com/elasticsearch/elasticsearch/commit/c0aef4adc4ffaf791ed9a42864cc578b7dc50ffc and @areek as it relates to context suggesters\n",
"comments": [
{
"body": "``` bash\ncurl -s localhost:9200/\n{\n \"status\" : 200,\n \"name\" : \"Alcmena\",\n \"version\" : {\n \"number\" : \"1.3.4\",\n \"build_hash\" : \"a70f3ccb52200f8f2c87e9c370c6597448eb3e45\",\n \"build_timestamp\" : \"2014-09-30T09:07:17Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"4.9\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n```\n",
"created_at": "2014-12-12T21:53:12Z"
}
],
"number": 8937,
"title": "Completion type mapping is not idempotent"
} | {
"body": "closes #10581\ncloses #8937\n",
"number": 10602,
"review_comments": [
{
"body": "Why does this need a single shard?\n",
"created_at": "2015-04-15T07:16:55Z"
},
{
"body": "Should we pass in a random order?\n",
"created_at": "2015-04-15T07:17:32Z"
},
{
"body": "The bug could be reproduced using a single shard, changed it to test with multiple shards\n",
"created_at": "2015-04-15T15:41:35Z"
},
{
"body": "Thanks for the suggestion, fixed\n",
"created_at": "2015-04-15T15:41:36Z"
}
],
"title": "Make GeoContext mapping idempotent"
} | {
"commits": [
{
"message": "[FIX] Make GeoContext mapping idempotent\n\ncloses #10581\ncloses #8937"
}
],
"files": [
{
"diff": "@@ -601,7 +601,9 @@ public GeolocationContextMapping build() {\n if(precisions.isEmpty()) {\n precisions.add(GeoHashUtils.PRECISION);\n }\n- return new GeolocationContextMapping(name, precisions.toArray(), neighbors, defaultLocations, fieldName);\n+ int[] precisionArray = precisions.toArray();\n+ Arrays.sort(precisionArray);\n+ return new GeolocationContextMapping(name, precisionArray, neighbors, defaultLocations, fieldName);\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java",
"status": "modified"
},
{
"diff": "@@ -24,8 +24,10 @@\n import org.elasticsearch.action.suggest.SuggestRequest;\n import org.elasticsearch.action.suggest.SuggestRequestBuilder;\n import org.elasticsearch.action.suggest.SuggestResponse;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.geo.GeoHashUtils;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.mapper.MapperParsingException;\n@@ -153,7 +155,43 @@ public void testMultiLevelGeo() throws Exception {\n assertEquals(\"Hotel Amsterdam in Berlin\", suggestResponse.getSuggest().getSuggestion(suggestionName).iterator().next()\n .getOptions().iterator().next().getText().string()); \n }\n- } \n+ }\n+\n+ @Test\n+ public void testMappingIdempotency() throws Exception {\n+ List<Integer> precisions = new ArrayList<>();\n+ for (int i = 0; i < randomIntBetween(4, 12); i++) {\n+ precisions.add(i+1);\n+ }\n+ Collections.shuffle(precisions, getRandom());\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(TYPE)\n+ .startObject(\"properties\").startObject(\"completion\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .array(\"precision\", precisions.toArray(new Integer[precisions.size()]))\n+ .endObject()\n+ .endObject().endObject()\n+ .endObject().endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(TYPE, mapping.string()));\n+ ensureYellow();\n+\n+ Collections.shuffle(precisions, getRandom());\n+ mapping = jsonBuilder().startObject().startObject(TYPE)\n+ .startObject(\"properties\").startObject(\"completion\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .array(\"precision\", precisions.toArray(new Integer[precisions.size()]))\n+ .endObject()\n+ .endObject().endObject()\n+ .endObject().endObject();\n+ assertAcked(client().admin().indices().preparePutMapping(INDEX).setType(TYPE).setSource(mapping.string()).get());\n+ }\n+\n \n @Test\n public void testGeoField() throws Exception {",
"filename": "src/test/java/org/elasticsearch/search/suggest/ContextSuggestSearchTests.java",
"status": "modified"
}
]
} |
{
"body": "Starting with 1.5.0, creating a bulk request then calling **bulkRequest.execute().addListener(...)** results in\n\n> java.lang.AssertionError: Expected current thread [...] to not be a transport thread. Reason: \n\nfull stack trace : https://gist.github.com/nmunro-cvt/ea14d625629692ef2773\n\nStarted happening in 1.5.0, same code works fine pre-1.5.0.\n\nNoticed some new assertions were introduced in 1.5.0\n\n> https://github.com/elastic/elasticsearch/blob/master/src/main/java/org/elasticsearch/common/util/concurrent/BaseFuture.java#L117\n\nHowever, I don't see what I need to do differently to pass the assertions.\n\nSimplified example to reproduce error: https://gist.github.com/nmunro-cvt/0be42236e9abf4359a44\n",
"comments": [
{
"body": "@jpountz any ideas what should be changed here?\n",
"created_at": "2015-04-05T17:44:08Z"
},
{
"body": "It looks to me like an actual bug that we only find about now thanks to #9164. The transport thread waits for the action to be executed so that it can run the listeners, but my understanding is that we should never wait in transport threads?\n\n@bleskes @s1monw Since you helped reviewing on #9164 could you have a look at this issue and confirm/infirm this is a real bug?\n",
"created_at": "2015-04-09T09:37:51Z"
},
{
"body": "@bleskes assigning to you\n",
"created_at": "2015-04-12T14:54:55Z"
},
{
"body": "this should be fixed now with https://github.com/elastic/elasticsearch/pull/10573 . @nmunro-cvt thx for reporting!\n",
"created_at": "2015-06-04T20:36:58Z"
}
],
"number": 10402,
"title": "BulkRequest with listener waiting for response fails assertion 'Expected current thread [...] to not be a transport thread'"
} | {
"body": "To protect ourselves against running blocking operations on a network thread we have added an assertion that triggers that verifies that the thread calling a BaseFuture.get() is not a networking one. While this assert is good, it wrongly triggers when the get() is called in order to pass it's result to a listener of AbstractListenableActionFuture who is marked to run on the same thread as the callee. At that point, we know that the operation has been completed and the get() call will not block.\n\nTo solve this, we change the assertion to ignore a get with a timeout of 0 and use that AbstractListenableActionFuture\n\nRelates to #10402\n",
"number": 10573,
"review_comments": [
{
"body": "Returning `null` feels wrong to me as then there is no difference between tryGet if the computation is not done and if it actually returns `null`. Maybe we need different semantics such as throwing an exception instead of returning `null` is sync.isDone() returns false? \n",
"created_at": "2015-04-13T21:15:12Z"
},
{
"body": "no need for the annotation if the test method name starts with \"test\" :)\n",
"created_at": "2015-04-13T21:15:47Z"
},
{
"body": "yeah, I considered a couple of approaches:\n- The one above (overloading the `null` value to indicate not done). \n- Having a special case for doing get with timeout set to 0 where we don't assert on a networking thread. \n- Having a method like you suggest called something like `getExpectingDone` which is expected to be call only after the future is guaranteed to be done. If feel `tryGet` sounds like something very light where the expectation is that is not done (i.e., a syntactic sugar for if (isDone) get() ).\n\nI don't like any of the above much, but I had a slight preference to the first. If you like the second better, I can do that. Of course, new ideas are welcome :)\n",
"created_at": "2015-04-16T14:06:09Z"
},
{
"body": "I had not though about option 2 but I really like the fact that it does not require to add a new API.\n\nBut if this argument does not resonate to you, feel free to push the change as-in, I don't have too strong feelings about it.\n",
"created_at": "2015-04-20T22:54:39Z"
}
],
"title": "Allow ActionListener to be called on the network thread"
} | {
"commits": [
{
"message": "Internal: allows ActionListener to be called on the network thread\n\nTo protect ourselves against running blocking operations on a network thread we have added an assertion that triggers that verifies that the thread calling a BaseFuture.get() is not a networking one. While this assert is good, it wrongly triggers when the get() is called in order to pass it's result to a listener of AbstractListenableActionFuture who is marked to run on the same thread as the callee. At that point, we know that the operation has been completed and the get() call will not block.\n\nTo solve this, we add a tryGet() option which is guaranteed not to block.\n\nRelates to #10402"
},
{
"message": "feedback"
}
],
"files": [
{
"diff": "@@ -99,7 +99,9 @@ protected void done() {\n \n private void executeListener(final ActionListener<T> listener) {\n try {\n- listener.onResponse(actionGet());\n+ // we use a timeout of 0 to by pass assertion forbidding to call actionGet() (blocking) on a network thread.\n+ // here we know we will never block\n+ listener.onResponse(actionGet(0));\n } catch (Throwable e) {\n listener.onFailure(e);\n }",
"filename": "src/main/java/org/elasticsearch/action/support/AbstractListenableActionFuture.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.common.util.concurrent;\n \n import com.google.common.annotations.Beta;\n-\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.transport.Transports;\n \n@@ -92,7 +91,7 @@ public abstract class BaseFuture<V> implements Future<V> {\n @Override\n public V get(long timeout, TimeUnit unit) throws InterruptedException,\n TimeoutException, ExecutionException {\n- Transports.assertNotTransportThread(\"Blocking operation\");\n+ assert timeout <= 0 || Transports.assertNotTransportThread(\"Blocking operation\");\n return sync.get(unit.toNanos(timeout));\n }\n \n@@ -114,7 +113,7 @@ public V get(long timeout, TimeUnit unit) throws InterruptedException,\n */\n @Override\n public V get() throws InterruptedException, ExecutionException {\n- Transports.assertNotTransportThread(\"Blocking operation\");\n+ assert Transports.assertNotTransportThread(\"Blocking operation\");\n return sync.get();\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/util/concurrent/BaseFuture.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,9 @@\n public enum Transports {\n ;\n \n+ /** threads whose name is prefixed by this string will be considered network threads, even though they aren't */\n+ public final static String TEST_MOCK_TRANSPORT_THREAD_PREFIX = \"__mock_network_thread\";\n+\n /**\n * Utility method to detect whether a thread is a network thread. Typically\n * used in assertions to make sure that we do not call blocking code from\n@@ -39,21 +42,24 @@ public static final boolean isTransportThread(Thread t) {\n NettyTransport.HTTP_SERVER_BOSS_THREAD_NAME_PREFIX,\n NettyTransport.HTTP_SERVER_WORKER_THREAD_NAME_PREFIX,\n NettyTransport.TRANSPORT_CLIENT_WORKER_THREAD_NAME_PREFIX,\n- NettyTransport.TRANSPORT_CLIENT_BOSS_THREAD_NAME_PREFIX)) {\n+ NettyTransport.TRANSPORT_CLIENT_BOSS_THREAD_NAME_PREFIX,\n+ TEST_MOCK_TRANSPORT_THREAD_PREFIX)) {\n if (threadName.contains(s)) {\n return true;\n }\n }\n return false;\n }\n \n- public static void assertTransportThread() {\n+ public static boolean assertTransportThread() {\n final Thread t = Thread.currentThread();\n assert isTransportThread(t) : \"Expected transport thread but got [\" + t + \"]\";\n+ return true;\n }\n \n- public static void assertNotTransportThread(String reason) {\n+ public static boolean assertNotTransportThread(String reason) {\n final Thread t = Thread.currentThread();\n- assert isTransportThread(t) ==false : \"Expected current thread [\" + t + \"] to not be a transport thread. Reason: \";\n+ assert isTransportThread(t) == false : \"Expected current thread [\" + t + \"] to not be a transport thread. Reason: [\" + reason + \"]\";\n+ return true;\n }\n }",
"filename": "src/main/java/org/elasticsearch/transport/Transports.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,76 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.action.support;\n+\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.Transports;\n+\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+public class ListenableActionFutureTests extends ElasticsearchTestCase {\n+\n+ public void testListenerIsCallableFromNetworkThreads() throws Throwable {\n+ ThreadPool threadPool = new ThreadPool(\"testListenerIsCallableFromNetworkThreads\");\n+ try {\n+ final PlainListenableActionFuture<Object> future = new PlainListenableActionFuture<>(threadPool);\n+ final CountDownLatch listenerCalled = new CountDownLatch(1);\n+ final AtomicReference<Throwable> error = new AtomicReference<>();\n+ final Object response = new Object();\n+ future.addListener(new ActionListener<Object>() {\n+ @Override\n+ public void onResponse(Object o) {\n+ listenerCalled.countDown();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ error.set(e);\n+ listenerCalled.countDown();\n+ }\n+ });\n+ Thread networkThread = new Thread(new AbstractRunnable() {\n+ @Override\n+ public void onFailure(Throwable t) {\n+ error.set(t);\n+ listenerCalled.countDown();\n+ }\n+\n+ @Override\n+ protected void doRun() throws Exception {\n+ future.onResponse(response);\n+ }\n+ }, Transports.TEST_MOCK_TRANSPORT_THREAD_PREFIX + \"_testListenerIsCallableFromNetworkThread\");\n+ networkThread.start();\n+ networkThread.join();\n+ listenerCalled.await();\n+ if (error.get() != null) {\n+ throw error.get();\n+ }\n+ } finally {\n+ ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS);\n+ }\n+ }\n+\n+\n+}",
"filename": "src/test/java/org/elasticsearch/action/support/ListenableActionFutureTests.java",
"status": "added"
}
]
} |
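The patch in the record above relies on two small patterns: the assertion helpers in `Transports` now return `boolean` so they can sit inside an `assert` statement (and are skipped entirely when assertions are disabled), and `AbstractListenableActionFuture` calls `actionGet(0)` only because the future is already completed at that point, so the call cannot block. The snippet below is a hedged, generic illustration of the first pattern; the class name, thread-name prefix, and method names are invented for the example and are not the Elasticsearch API.

```java
// Generic sketch of the "assert helper returns boolean" pattern used in the patch.
// Nothing here is the real Elasticsearch API; names are illustrative only.
public final class ThreadAssertions {
    private static final String NETWORK_THREAD_PREFIX = "network-"; // assumed prefix for the example

    static boolean isNetworkThread(Thread t) {
        return t.getName().startsWith(NETWORK_THREAD_PREFIX);
    }

    // Returning boolean lets callers write `assert assertNotNetworkThread("reason");`
    // so the check (and its message building) disappears entirely when -ea is not set.
    static boolean assertNotNetworkThread(String reason) {
        Thread t = Thread.currentThread();
        assert !isNetworkThread(t)
                : "Expected current thread [" + t + "] to not be a network thread. Reason: [" + reason + "]";
        return true;
    }

    public static void blockingOperation() {
        assert assertNotNetworkThread("blocking operation");
        // ... potentially blocking work ...
    }
}
```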
{
"body": "In ES 1.5, many aggregations now return a formatted `<foo>_as_string` version of the return values by default, even without specifying a `format` in the request, or regardless of the field type.\n\nFor instance:\n\n``` json\n\"aggregations\": {\n \"my_stats\": {\n \"count\": 54,\n \"min\": 10148,\n \"max\": 19602,\n \"avg\": 15151.611111111111,\n \"sum\": 818187,\n \"min_as_string\": \"10148.0\",\n \"max_as_string\": \"19602.0\",\n \"avg_as_string\": \"15151.611111111111\",\n \"sum_as_string\": \"818187.0\"\n }\n }\n```\n\nAggregations where I've observed this:\n\n`as_string` always present:\n- min\n- max\n- sum\n- avg\n- stats\n- extended_stats\n- value_count\n- percentiles\n- percentiles_ranks\n- cardinality\n\n`as_string` only present if the field is a date, regardless if `format` is specified or not:\n- terms\n- histogram\n- date_histogram\n\nDoes it ever make sense to return a formatted version of a numeric field? For dates, should we only return a formatted version if `format` is specified in the request?\n",
"comments": [
{
"body": "@colings86 was this from one of your changes?\n",
"created_at": "2015-04-05T12:07:15Z"
},
{
"body": "@clintongormley yes, I think it might be due to https://github.com/elastic/elasticsearch/pull/9032. I'll assign it to myself and hopefully get a fix soon\n\n@gmarz formatting numeric results can be useful to do things such as padding and rounding when required. The `format` parameter should work on terms, and histogram when the field is numeric. I will look into that as well as fixing this issue so the `as_string` field is only present in the result when `format` is specified (or the field is a date).\n",
"created_at": "2015-04-07T08:05:06Z"
}
],
"number": 10284,
"title": "Aggregations: formatted string values are always returned in 1.5"
} | {
"body": "Closes #10284\n",
"number": 10571,
"review_comments": [],
"title": "Fix `_as_string` output to only show when format specified"
} | {
"commits": [
{
"message": "Aggregations: Fix _as_string output to only show when format specified\n\nCloses #10284"
}
],
"files": [
{
"diff": "@@ -107,7 +107,7 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n @Override\n public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(CommonFields.VALUE, count != 0 ? getValue() : null);\n- if (count != 0 && valueFormatter != null) {\n+ if (count != 0 && valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(getValue()));\n }\n return builder;",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/avg/InternalAvg.java",
"status": "modified"
},
{
"diff": "@@ -128,7 +128,7 @@ public void merge(InternalCardinality other) {\n public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n final long cardinality = getValue();\n builder.field(CommonFields.VALUE, cardinality);\n- if (valueFormatter != null) {\n+ if (valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(cardinality));\n }\n return builder;",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/InternalCardinality.java",
"status": "modified"
},
{
"diff": "@@ -102,7 +102,7 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n boolean hasValue = !Double.isInfinite(max);\n builder.field(CommonFields.VALUE, hasValue ? max : null);\n- if (hasValue && valueFormatter != null) {\n+ if (hasValue && valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(max));\n }\n return builder;",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/max/InternalMax.java",
"status": "modified"
},
{
"diff": "@@ -103,7 +103,7 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n boolean hasValue = !Double.isInfinite(min);\n builder.field(CommonFields.VALUE, hasValue ? min : null);\n- if (hasValue && valueFormatter != null) {\n+ if (hasValue && valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(min));\n }\n return builder;",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/min/InternalMin.java",
"status": "modified"
},
{
"diff": "@@ -113,7 +113,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n String key = String.valueOf(keys[i]);\n double value = value(keys[i]);\n builder.field(key, value);\n- if (valueFormatter != null) {\n+ if (valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(key + \"_as_string\", valueFormatter.format(value));\n }\n }\n@@ -125,7 +125,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n builder.startObject();\n builder.field(CommonFields.KEY, keys[i]);\n builder.field(CommonFields.VALUE, value);\n- if (valueFormatter != null) {\n+ if (valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(value));\n }\n builder.endObject();",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractInternalPercentiles.java",
"status": "modified"
},
{
"diff": "@@ -209,7 +209,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n builder.field(Fields.MAX, count != 0 ? max : null);\n builder.field(Fields.AVG, count != 0 ? getAvg() : null);\n builder.field(Fields.SUM, count != 0 ? sum : null);\n- if (count != 0 && valueFormatter != null) {\n+ if (count != 0 && valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(Fields.MIN_AS_STRING, valueFormatter.format(min));\n builder.field(Fields.MAX_AS_STRING, valueFormatter.format(max));\n builder.field(Fields.AVG_AS_STRING, valueFormatter.format(getAvg()));",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java",
"status": "modified"
},
{
"diff": "@@ -197,7 +197,7 @@ protected XContentBuilder otherStatsToXCotent(XContentBuilder builder, Params pa\n .field(Fields.LOWER, count != 0 ? getStdDeviationBound(Bounds.LOWER) : null)\n .endObject();\n \n- if (count != 0 && valueFormatter != null) {\n+ if (count != 0 && valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(Fields.SUM_OF_SQRS_AS_STRING, valueFormatter.format(sumOfSqrs));\n builder.field(Fields.VARIANCE_AS_STRING, valueFormatter.format(getVariance()));\n builder.field(Fields.STD_DEVIATION_AS_STRING, getStdDeviationAsString());",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/InternalExtendedStats.java",
"status": "modified"
},
{
"diff": "@@ -101,7 +101,7 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n @Override\n public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(CommonFields.VALUE, sum);\n- if (valueFormatter != null) {\n+ if (valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(sum));\n }\n return builder;",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/sum/InternalSum.java",
"status": "modified"
},
{
"diff": "@@ -98,7 +98,7 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n @Override\n public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(CommonFields.VALUE, value);\n- if (valueFormatter != null) {\n+ if (valueFormatter != null && !(valueFormatter instanceof ValueFormatter.Raw)) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(value));\n }\n return builder;",
"filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/InternalValueCount.java",
"status": "modified"
}
]
} |
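The fix in the record above applies the same guard to every metric aggregation: the `*_as_string` field is serialized only when the value formatter is something other than the raw pass-through formatter, i.e. only when a `format` was requested or the field is a date. The sketch below illustrates that guard outside Elasticsearch with an invented `Formatter` interface and a `RAW` sentinel; the real patch checks `instanceof ValueFormatter.Raw`, which the sketch approximates with reference equality.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: a tiny response builder that mirrors the patch's guard,
// emitting "value_as_string" solely when a non-raw formatter was configured.
public final class MetricRenderer {
    interface Formatter { String format(double value); }

    // Sentinel standing in for the "no explicit format requested" case.
    static final Formatter RAW = value -> Double.toString(value);

    static Map<String, Object> render(double value, Formatter formatter) {
        Map<String, Object> out = new LinkedHashMap<>();
        out.put("value", value);
        if (formatter != null && formatter != RAW) {      // the guard added by the fix
            out.put("value_as_string", formatter.format(value));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(render(15151.61, RAW));                            // {value=15151.61}
        System.out.println(render(15151.61, v -> String.format("%.0f", v)));  // adds value_as_string=15152
    }
}
```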
{
"body": "There is inconsistent behavior when saving boolean json values as boolean, or as text. The occurring parse error is cryptic and confusing. \n\nRepro steps (assumming the `tweets` index doesn't exist):\n1. Insert a first record:\n\n```\n~ $ curl -XPUT 'http://127.0.0.1:9200/tweets/tweet/1' -d '{ \"key\" : 123123123 }'\n{\"_index\":\"tweets\",\"_type\":\"tweet\",\"_id\":\"1\",\"_version\":2,\"created\":false}\n```\n1. Insert a boolean as a string\n\n```\n~ $ curl -XPUT 'http://127.0.0.1:9200/tweets/tweet/2' -d '{ \"key\" : \"true\" }'\n{\"error\":\"MapperParsingException[failed to parse [key]]; nested: NumberFormatException[For input string: \\\"true\\\"]; \",\"status\":400}\n```\n\nThe error is expected, as auto-mapping has decided that `key` is a `number`, not a `boolean`\n1. Insert a boolean as boolean\n\n```\n~ $ curl -XPUT 'http://127.0.0.1:9200/tweets/tweet/3' -d '{ \"key\" : true }'\n{\"error\":\"MapperParsingException[failed to parse [key]]; nested: JsonParseException[Current token (VALUE_TRUE) not numeric, can not use numeric value accessors\\n at [Source: [B@464291b8; line: 1, column: 15]]; \",\"status\":400}\n```\n\nThe expected result was supposed to be same as in 2). However, there is a cryptic message instead (which I personally don't understand). Is this a bug, or expected behavior for reasons still unknown to me? (disclaimer: I am new to ElasticSearch)\n",
"comments": [
{
"body": "I think the excepiton is just fine since we are throwing them on a different level. We expect numers or strings that we can parse. If you specify a boolean it fails on the json parser rather than when we try to interpret the number it's fails way earlier.\n",
"created_at": "2015-07-09T19:20:56Z"
}
],
"number": 10056,
"title": "[Bug] Unexpected behavior when indexing boolean values"
} | {
"body": "Makes the exception consistent with other \"incorrect data type\" exceptions, instead of a cryptic internal exception due to trying to derive a long from VALUE_BOOLEAN.\n\nNew exception will look like this:\n\n```\nPUT /tweets/\n{\n \"mappings\": {\n \"tweet\": {\n \"properties\": {\n \"key\": {\n \"type\": \"integer\"\n }\n }\n }\n }\n}\n\nPUT /tweets/tweet/1\n{ \"key\" : true }\n```\n\n```\n{\n \"error\": \"MapperParsingException[failed to parse [key]]; nested: NumberFormatException[For input boolean value: true]; \",\n \"status\": 400\n}\n```\n\nInstead of the existing message:\n\n```\n{\n \"error\": \"MapperParsingException[failed to parse [key]]; nested: JsonParseException[Current token (VALUE_TRUE) not numeric, can not use numeric value accessors\\n at [Source: [B@464291b8; line: 1, column: 15]]; \",\n \"status\": 400\n}\n```\n\nFixes #10056\n",
"number": 10536,
"review_comments": [],
"title": "Improve exception message when indexing a boolean value into a numeric field"
} | {
"commits": [
{
"message": "Improve exception message when indexing a boolean value into a numeric field\n\nFixes #10056"
}
],
"files": [
{
"diff": "@@ -104,6 +104,8 @@ public short shortValue(boolean coerce) throws IOException {\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Short.class);\n return Short.parseShort(text());\n+ } else if (token == Token.VALUE_BOOLEAN) {\n+ throw new NumberFormatException(\"For input boolean value: \" + booleanValue());\n }\n short result = doShortValue();\n ensureNumberConversion(coerce, result, Short.class);\n@@ -124,6 +126,8 @@ public int intValue(boolean coerce) throws IOException {\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Integer.class);\n return Integer.parseInt(text());\n+ } else if (token == Token.VALUE_BOOLEAN) {\n+ throw new NumberFormatException(\"For input boolean value: \" + booleanValue());\n }\n int result = doIntValue();\n ensureNumberConversion(coerce, result, Integer.class);\n@@ -143,6 +147,8 @@ public long longValue(boolean coerce) throws IOException {\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Long.class);\n return Long.parseLong(text());\n+ } else if (token == Token.VALUE_BOOLEAN) {\n+ throw new NumberFormatException(\"For input boolean value: \" + booleanValue());\n }\n long result = doLongValue();\n ensureNumberConversion(coerce, result, Long.class);\n@@ -162,6 +168,8 @@ public float floatValue(boolean coerce) throws IOException {\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Float.class);\n return Float.parseFloat(text());\n+ } else if (token == Token.VALUE_BOOLEAN) {\n+ throw new NumberFormatException(\"For input boolean value: \" + booleanValue());\n }\n return doFloatValue();\n }\n@@ -180,6 +188,8 @@ public double doubleValue(boolean coerce) throws IOException {\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Double.class);\n return Double.parseDouble(text());\n+ } else if (token == Token.VALUE_BOOLEAN) {\n+ throw new NumberFormatException(\"For input boolean value: \" + booleanValue());\n }\n return doDoubleValue();\n }",
"filename": "src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.index.FieldInfo.DocValuesType;\n import org.apache.lucene.index.IndexableField;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n@@ -503,4 +504,35 @@ private static void assertPrecisionStepEquals(int expected, IndexableField field\n assertThat(ts, instanceOf(NumericTokenStream.class)); \n assertEquals(expected, ((NumericTokenStream)ts).getPrecisionStep());\n }\n+\n+ /** Test that numeric fields do not accept boolean values (Issue #10056) **/\n+ @Test\n+ public void testBoolThrowsException() throws Exception {\n+ String[] types = {\"integer\", \"long\", \"short\", \"float\", \"double\"};\n+\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\");\n+ for (String type : types) {\n+ builder.field(type + \"_field\").startObject().field(\"type\", type).endObject();\n+ }\n+ builder.endObject().endObject().endObject();\n+ DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(builder.string());\n+\n+\n+ for (String type : types) {\n+ try {\n+ defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(type + \"_field\", true)\n+ .endObject()\n+ .bytes());\n+ fail(\"Boolean value should not have been accepted by a field of type \" + type);\n+ } catch (MapperParsingException exception) {\n+ assertThat(\"failed to parse [\" + type + \"_field]\", equalTo(exception.getMessage()));\n+ Throwable cause = exception.getRootCause();\n+ assertThat(cause, instanceOf(NumberFormatException.class));\n+ assertThat(\"For input boolean value: true\", equalTo(cause.getMessage()));\n+ }\n+ }\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/numeric/SimpleNumericTests.java",
"status": "modified"
}
]
} |
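The fix above is essentially an early, explicit rejection of a boolean token before any numeric accessor runs, so the user sees a `NumberFormatException` mirroring the string-coercion path instead of a parser-internal `JsonParseException`. The sketch below is a simplified, hypothetical stand-in for that guard; the `Token` enum and `longValue` helper are invented for illustration and are not Elasticsearch's XContent parser.

```java
// Minimal sketch of the guard pattern from the diff: detect a boolean token up front and fail with a
// readable message, instead of letting a numeric accessor blow up with a parser-internal error.
public class NumericCoercionSketch {

    enum Token { VALUE_NUMBER, VALUE_STRING, VALUE_BOOLEAN }

    static long longValue(Token token, String text) {
        if (token == Token.VALUE_STRING) {
            return Long.parseLong(text);            // coercion from "123" is allowed
        } else if (token == Token.VALUE_BOOLEAN) {
            // Explicit, user-facing failure analogous to: "For input boolean value: true"
            throw new NumberFormatException("For input boolean value: " + text);
        }
        return Long.parseLong(text);                // VALUE_NUMBER path in this toy example
    }

    public static void main(String[] args) {
        System.out.println(longValue(Token.VALUE_NUMBER, "123123123"));
        System.out.println(longValue(Token.VALUE_STRING, "42"));
        try {
            longValue(Token.VALUE_BOOLEAN, "true");
        } catch (NumberFormatException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```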
{
"body": "GeoShapeFieldMapper does not overload `merge()`, so the following request produces \n `acknowledged: \"true\"` from the `AbstractFieldMapper`\n\n``` json\n# update mapping after inserting a shape\nPUT geo-shapes/doc/_mapping\n{\n \"doc\": {\n \"properties\": {\n \"geometry\": {\n \"type\": \"geo_shape\",\n \"tree\": \"quadtree\",\n \"tree_levels\": 26,\n \"distance_error_pct\": 0\n }\n }\n }\n}\n```\n\nThis is misleading since changing field parameters has zero impact on the `GeoShapeFieldMapper` itself leading to a user thinking they're indexing at a higher precision when no change has occurred. This is bad news for query expectations.\n\nThis issue will be corrected by overloading merge to add conflicts if certain parameters are changed. \n",
"comments": [],
"number": 10513,
"title": "[GEO] GeoShapeFieldMapper incorrectly acknowledges mapping changes"
} | {
"body": "Prevents the user from changing strategies, tree, tree_level or precision. distance_error_pct changes are allowed as they do not compromise the integrity of the index. A separate issue #10514 is open for allowing users to change tree_level or precision.\n\ncloses #10513 \n",
"number": 10533,
"review_comments": [
{
"body": "Shouldn't this be an error? It means they changed the type of the field?\n",
"created_at": "2015-04-10T15:30:07Z"
},
{
"body": "why have the condition at all? Just always overwrite? Same for disterrpct above\n",
"created_at": "2015-04-10T15:32:01Z"
},
{
"body": "can we assert what the mapping actually contains as well?\n",
"created_at": "2015-04-10T15:34:23Z"
},
{
"body": "can we also assert nothing changed in the mapper since there were merge conflicts?\n",
"created_at": "2015-04-10T15:34:49Z"
},
{
"body": "+1 fixed.\n",
"created_at": "2015-04-10T18:23:34Z"
},
{
"body": "+1 updated tests\n",
"created_at": "2015-04-10T18:23:46Z"
}
],
"title": "Add merge conflicts to GeoShapeFieldMapper"
} | {
"commits": [
{
"message": "[GEO] Add merge conflicts to GeoShapeFieldMapper\n\nPrevents the user from changing strategies, tree, tree_level or precision. distance_error_pct changes are allowed as they do not compromise the integrity of the index. A separate issue is open for allowing users to change tree_level or precision."
}
],
"files": [
{
"diff": "@@ -41,6 +41,8 @@\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.MergeContext;\n+import org.elasticsearch.index.mapper.MergeMappingException;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.core.AbstractFieldMapper;\n \n@@ -262,6 +264,50 @@ public void parse(ParseContext context) throws IOException {\n }\n }\n \n+ @Override\n+ public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n+ super.merge(mergeWith, mergeContext);\n+ if (!this.getClass().equals(mergeWith.getClass())) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different field type\");\n+ return;\n+ }\n+ final GeoShapeFieldMapper fieldMergeWith = (GeoShapeFieldMapper) mergeWith;\n+ if (!mergeContext.mergeFlags().simulate()) {\n+ final PrefixTreeStrategy mergeWithStrategy = fieldMergeWith.defaultStrategy;\n+\n+ // prevent user from changing strategies\n+ if (!(this.defaultStrategy.getClass().equals(mergeWithStrategy.getClass()))) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different strategy\");\n+ }\n+\n+ final SpatialPrefixTree grid = this.defaultStrategy.getGrid();\n+ final SpatialPrefixTree mergeGrid = mergeWithStrategy.getGrid();\n+\n+ // prevent user from changing trees (changes encoding)\n+ if (!grid.getClass().equals(mergeGrid.getClass())) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different tree\");\n+ }\n+\n+ // TODO we should allow this, but at the moment levels is used to build bookkeeping variables\n+ // in lucene's SpatialPrefixTree implementations, need a patch to correct that first\n+ if (grid.getMaxLevels() != mergeGrid.getMaxLevels()) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different tree_levels or precision\");\n+ }\n+\n+ // bail if there were merge conflicts\n+ if (mergeContext.hasConflicts()) {\n+ return;\n+ }\n+\n+ // change distance error percent\n+ this.defaultStrategy.setDistErrPct(mergeWithStrategy.getDistErrPct());\n+\n+ // change orientation - this is allowed because existing dateline spanning shapes\n+ // have already been unwound and segmented\n+ this.shapeOrientation = fieldMergeWith.shapeOrientation;\n+ }\n+ }\n+\n @Override\n protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.mapper.geo;\n \n import org.apache.lucene.spatial.prefix.PrefixTreeStrategy;\n+import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.elasticsearch.common.geo.GeoUtils;\n@@ -31,9 +32,13 @@\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n \n+import static org.elasticsearch.index.mapper.DocumentMapper.MergeFlags.mergeFlags;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.isIn;\n \n public class GeoShapeFieldMapperTests extends ElasticsearchSingleNodeTest {\n \n@@ -291,4 +296,63 @@ public void testLevelDefaults() throws IOException {\n assertThat(strategy.getGrid().getMaxLevels(), equalTo(GeoUtils.geoHashLevelsForPrecision(50d))); \n }\n }\n+\n+ @Test\n+ public void testGeoShapeMapperMerge() throws Exception {\n+ String stage1Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"shape\").field(\"type\", \"geo_shape\").field(\"tree\", \"geohash\").field(\"strategy\", \"recursive\")\n+ .field(\"precision\", \"1m\").field(\"tree_levels\", 8).field(\"distance_error_pct\", 0.01).field(\"orientation\", \"ccw\")\n+ .endObject().endObject().endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+ DocumentMapper stage1 = parser.parse(stage1Mapping);\n+ String stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"shape\").field(\"type\", \"geo_shape\").field(\"tree\", \"quadtree\")\n+ .field(\"strategy\", \"term\").field(\"precision\", \"1km\").field(\"tree_levels\", 26).field(\"distance_error_pct\", 26)\n+ .field(\"orientation\", \"cw\").endObject().endObject().endObject().endObject().string();\n+ DocumentMapper stage2 = parser.parse(stage2Mapping);\n+\n+ DocumentMapper.MergeResult mergeResult = stage1.merge(stage2, mergeFlags().simulate(false));\n+ // check correct conflicts\n+ assertThat(mergeResult.hasConflicts(), equalTo(true));\n+ assertThat(mergeResult.conflicts().length, equalTo(3));\n+ ArrayList conflicts = new ArrayList<>(Arrays.asList(mergeResult.conflicts()));\n+ assertThat(\"mapper [shape] has different strategy\", isIn(conflicts));\n+ assertThat(\"mapper [shape] has different tree\", isIn(conflicts));\n+ assertThat(\"mapper [shape] has different tree_levels or precision\", isIn(conflicts));\n+\n+ // verify nothing changed\n+ FieldMapper fieldMapper = stage1.mappers().name(\"shape\").mapper();\n+ assertThat(fieldMapper, instanceOf(GeoShapeFieldMapper.class));\n+\n+ GeoShapeFieldMapper geoShapeFieldMapper = (GeoShapeFieldMapper) fieldMapper;\n+ PrefixTreeStrategy strategy = geoShapeFieldMapper.defaultStrategy();\n+\n+ assertThat(strategy, instanceOf(RecursivePrefixTreeStrategy.class));\n+ assertThat(strategy.getGrid(), instanceOf(GeohashPrefixTree.class));\n+ assertThat(strategy.getDistErrPct(), equalTo(0.01));\n+ assertThat(strategy.getGrid().getMaxLevels(), equalTo(GeoUtils.geoHashLevelsForPrecision(1d)));\n+ assertThat(geoShapeFieldMapper.orientation(), equalTo(ShapeBuilder.Orientation.CCW));\n+\n+ // correct mapping\n+ stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ 
.startObject(\"properties\").startObject(\"shape\").field(\"type\", \"geo_shape\").field(\"precision\", \"1m\")\n+ .field(\"distance_error_pct\", 0.001).field(\"orientation\", \"cw\").endObject().endObject().endObject().endObject().string();\n+ stage2 = parser.parse(stage2Mapping);\n+ mergeResult = stage1.merge(stage2, mergeFlags().simulate(false));\n+\n+ // verify mapping changes, and ensure no failures\n+ assertThat(mergeResult.hasConflicts(), equalTo(false));\n+\n+ fieldMapper = stage1.mappers().name(\"shape\").mapper();\n+ assertThat(fieldMapper, instanceOf(GeoShapeFieldMapper.class));\n+\n+ geoShapeFieldMapper = (GeoShapeFieldMapper) fieldMapper;\n+ strategy = geoShapeFieldMapper.defaultStrategy();\n+\n+ assertThat(strategy, instanceOf(RecursivePrefixTreeStrategy.class));\n+ assertThat(strategy.getGrid(), instanceOf(GeohashPrefixTree.class));\n+ assertThat(strategy.getDistErrPct(), equalTo(0.001));\n+ assertThat(strategy.getGrid().getMaxLevels(), equalTo(GeoUtils.geoHashLevelsForPrecision(1d)));\n+ assertThat(geoShapeFieldMapper.orientation(), equalTo(ShapeBuilder.Orientation.CW));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapperTests.java",
"status": "modified"
}
]
} |
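The merge logic added in the diff boils down to: collect a conflict for each structural setting that changed (strategy, tree, tree_levels/precision), bail out before mutating anything if there are conflicts, and otherwise apply only the safe settings (distance_error_pct, orientation). Below is a minimal sketch of that shape; `GeoShapeField` and its fields are hypothetical stand-ins, not the real mapper.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the merge shown in the diff: structural settings produce conflicts, while
// distance_error_pct and orientation may be rewritten in place.
public class GeoShapeMergeSketch {

    static class GeoShapeField {
        String strategy;          // e.g. "recursive" vs "term"
        String tree;              // e.g. "geohash" vs "quadtree"
        int treeLevels;
        double distanceErrorPct;  // safe to change
        String orientation;       // safe to change

        GeoShapeField(String strategy, String tree, int treeLevels, double distanceErrorPct, String orientation) {
            this.strategy = strategy; this.tree = tree; this.treeLevels = treeLevels;
            this.distanceErrorPct = distanceErrorPct; this.orientation = orientation;
        }
    }

    static List<String> merge(GeoShapeField current, GeoShapeField mergeWith) {
        List<String> conflicts = new ArrayList<>();
        if (!current.strategy.equals(mergeWith.strategy)) {
            conflicts.add("mapper [shape] has different strategy");
        }
        if (!current.tree.equals(mergeWith.tree)) {
            conflicts.add("mapper [shape] has different tree");
        }
        if (current.treeLevels != mergeWith.treeLevels) {
            conflicts.add("mapper [shape] has different tree_levels or precision");
        }
        if (!conflicts.isEmpty()) {
            return conflicts;                                       // bail out: nothing is mutated on conflict
        }
        current.distanceErrorPct = mergeWith.distanceErrorPct;      // only "safe" settings are applied
        current.orientation = mergeWith.orientation;
        return conflicts;
    }

    public static void main(String[] args) {
        GeoShapeField existing = new GeoShapeField("recursive", "geohash", 8, 0.01, "ccw");
        List<String> conflicts = merge(existing, new GeoShapeField("term", "quadtree", 26, 0.025, "cw"));
        System.out.println(conflicts);                              // three conflicts, existing field unchanged
        conflicts = merge(existing, new GeoShapeField("recursive", "geohash", 8, 0.001, "cw"));
        System.out.println(conflicts + " distErrPct=" + existing.distanceErrorPct);
    }
}
```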
{
"body": "We created a search templates with rest api will adds to .scripts index , when we delete the templete it says success and when we explore the index the templete won't appear .\n When we recreate the search template with same name it will create , when we execute the template search it will take the old deleted template code . we updated to latest version 1.5 from 1.4.\nYou can easily replicate this issue ween u created a template with some wrong syntax , and delete that template and recreate with same name with valid syntax , then execute it wont work . I tried to flush cache on that index reorganize too \n",
"comments": [
{
"body": "@kumar007007 could you provide a simple curl recreation which demonstrates the issue please?\n",
"created_at": "2015-04-05T17:30:25Z"
},
{
"body": "// new index \n\nPOST testindex/test\n{\n \"searchtext\": \"dev1\"\n}\n\n// creating template with wrong type to create error \n\nPOST _search/template/git01\n{\n \"query\": {\n \"match_all\": {\n \"searchtext\": {\n \"query\": \"{{P_Keyword1}}\",\n \"type\": \"oooophrase_prefix\"\n }\n }\n }\n}\n\n// calling template it should error , it will give error \n\nGET testindex/_search/template\n{\n \"template\": {\"id\": \"git01\"}\n,\n\"params\": {\n\"P_Keyword\": \"dev\"\n}\n}\n\n// fixing the template error by updating right type \nPOST _search/template/git01\n{\n \"query\": {\n \"match_all\": {\n \"searchtext\": {\n \"query\": \"{{P_Keyword1}}\",\n \"type\": \"phrase_prefix\"\n }\n }\n }\n}\n\n// fixing the template error by deleting and recreating template \nDELETE _search/template/git01\n\nPOST _search/template/git01\n{\n \"query\": {\n \"match_all\": {\n \"searchtext\": {\n \"query\": \"{{P_Keyword1}}\",\n \"type\": \"phrase_prefix\"\n }\n }\n }\n}\n// using template still will give error \n\nGET testindex/_search/template\n{\n \"template\": {\"id\": \"git01\"}\n,\n\"params\": {\n\"P_Keyword\": \"dev\"\n}\n}\n",
"created_at": "2015-04-06T18:28:00Z"
},
{
"body": "Hi @kumar007007 \n\nYour example didn't actually work, but it gave me enough to demonstrate the problem: existing templates never get reparsed, even when they are overwritten:\n\n```\n # create test doc\nPOST testindex/test\n{\n \"searchtext\": \"dev1\"\n}\n\n # Create bad template\nPUT _search/template/git01\n{\n \"query\": {\n \"match\": {\n \"searchtext\": {\n \"query\": \"{{P_Keyword1}}\",\n \"type\": \"ooophrase_prefix\"\n }\n }\n }\n}\n\n # Throws exception\nGET testindex/_search/template\n{\n \"template\": {\n \"id\": \"git01\"\n },\n \"params\": {\n \"P_Keyword1\": \"dev\"\n }\n}\n\n # Overwrite with correct template\nPUT _search/template/git01\n{\n \"query\": {\n \"match\": {\n \"searchtext\": {\n \"query\": \"{{P_Keyword1}}\",\n \"type\": \"phrase_prefix\"\n }\n }\n }\n}\n\n # Search fails with old error\nGET testindex/_search/template\n{\n \"template\": {\n \"id\": \"git01\"\n },\n \"params\": {\n \"P_Keyword1\": \"dev\"\n }\n}\n\n # Create template with new ID\nPUT _search/template/git02\n{\n \"query\": {\n \"match\": {\n \"searchtext\": {\n \"query\": \"{{P_Keyword1}}\",\n \"type\": \"phrase_prefix\"\n }\n }\n }\n}\n\n # Now it works correctly\nGET testindex/_search/template\n{\n \"template\": {\n \"id\": \"git02\"\n },\n \"params\": {\n \"P_Keyword1\": \"dev\"\n }\n}\n```\n",
"created_at": "2015-04-06T18:56:37Z"
},
{
"body": "@MaineC coule you look at this one too please?\n",
"created_at": "2015-04-06T18:57:50Z"
},
{
"body": "i have the same issue.. forcing us to resort to only managing templates from file and not index. \n@MaineC show us some magic\n",
"created_at": "2015-04-07T20:56:52Z"
}
],
"number": 10397,
"title": "search templates created with restapi are not deleteing completely "
} | {
"body": "Closes #10397 \n\nWhen putting new templates to an index they are added to the cache\nof compiled templates as a side effect of the validate method. When\nupdating templates they are also validated but the scripts that are\nalready in the cache never get updated.\n\n@javanna Assigning to you for review as you seem to have written most of the code this PR touches, feel free to re-assign.\n",
"number": 10526,
"review_comments": [
{
"body": "you are entirely disabling the cache if you don't assign the result here to compiled.\n",
"created_at": "2015-04-10T08:51:15Z"
},
{
"body": "That was supposed to read\n\ncompiled = cache.getIf...\n\nStupid typo on my side that lead to fixing the test as a side effect...\n",
"created_at": "2015-04-10T09:23:43Z"
},
{
"body": "the problem is that when you fix it the test fails ;)\n",
"created_at": "2015-04-10T09:27:14Z"
},
{
"body": "I would love to have some more coverage here, maybe update a few more times (`randomIntBetween(1,10)` ?) and verify that updates always make it through, not just once?\n\nAlso, can we write a test for plain indexed scripts too (rather than search templates)?\n",
"created_at": "2015-04-16T09:40:39Z"
},
{
"body": "was wondering, should we use put template here instead of put indexed script?\n",
"created_at": "2015-04-16T09:41:13Z"
},
{
"body": "along the same lines, should we use get template here instead?\n",
"created_at": "2015-04-16T09:41:27Z"
},
{
"body": "I must be missing something: I don't find a client().preparePutTemplate(...) nor client().putTemplate(...)\n\nCan you elaborate please?\n",
"created_at": "2015-04-20T09:03:49Z"
},
{
"body": "I meant `client().admin().indices().preparePutTemplate()` or `client().admin().indices().putTemplate()`.\n",
"created_at": "2015-04-20T10:15:53Z"
},
{
"body": "Maybe I'm confused but I think we are talking about two different templates:\n\nThe template this test is checking is the following: http://www.elastic.co/guide/en/elasticsearch/reference/1.4/search-template.html\n\nThe template that can be stored with the methods you mentioned seems to refer to the following:\nhttp://www.elastic.co/guide/en/elasticsearch/reference/1.x/indices-templates.html\n",
"created_at": "2015-04-21T07:16:13Z"
},
{
"body": "You are right, I messed up :) I was pretty sure we had dedicated apis for search templates, but I confused the REST layer with the Java api. Leave it as-is, sorry for the confusion!\n",
"created_at": "2015-04-21T09:51:30Z"
},
{
"body": "I don't think it is a good idea to call randomInBetween from here? We should save the result to a variable outside of the loop.\n",
"created_at": "2015-04-21T09:54:03Z"
},
{
"body": "same as above\n",
"created_at": "2015-04-21T09:55:00Z"
},
{
"body": "m(\n\nGood catch.\n",
"created_at": "2015-04-21T17:53:54Z"
}
],
"title": "Fix updating templates."
} | {
"commits": [
{
"message": "Fix updating templates.\n\nCloses #10397\n\nWhen putting new templates to an index they are added to the cache\nof compiled templates as a side effect of the validate method. When\nupdating templates they are also validated but the scripts that are\nalready in the cache never get updated.\n\nAs per comments on PR #10526 adding more tests around updating scripts\nand templates."
}
],
"files": [
{
"diff": "@@ -25,6 +25,7 @@\n import com.google.common.cache.RemovalListener;\n import com.google.common.cache.RemovalNotification;\n import com.google.common.collect.ImmutableMap;\n+\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n@@ -34,6 +35,7 @@\n import org.elasticsearch.action.get.GetRequest;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.action.indexedscripts.delete.DeleteIndexedScriptRequest;\n import org.elasticsearch.action.indexedscripts.get.GetIndexedScriptRequest;\n@@ -72,6 +74,7 @@\n import java.nio.file.Path;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.Map.Entry;\n import java.util.Set;\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.TimeUnit;\n@@ -236,30 +239,32 @@ public CompiledScript compile(String lang, String script, ScriptType scriptType\n /**\n * Compiles a script straight-away, or returns the previously compiled and cached script, without checking if it can be executed based on settings.\n */\n- public CompiledScript compileInternal(String lang, String script, ScriptType scriptType) {\n- assert script != null;\n+ public CompiledScript compileInternal(String lang, final String scriptOrId, final ScriptType scriptType) {\n+ assert scriptOrId != null;\n assert scriptType != null;\n if (lang == null) {\n lang = defaultLang;\n }\n if (logger.isTraceEnabled()) {\n- logger.trace(\"Compiling lang: [{}] type: [{}] script: {}\", lang, scriptType, script);\n+ logger.trace(\"Compiling lang: [{}] type: [{}] script: {}\", lang, scriptType, scriptOrId);\n }\n \n ScriptEngineService scriptEngineService = getScriptEngineServiceForLang(lang);\n- CacheKey cacheKey = newCacheKey(scriptEngineService, script);\n+ CacheKey cacheKey = newCacheKey(scriptEngineService, scriptOrId);\n \n if (scriptType == ScriptType.FILE) {\n CompiledScript compiled = staticCache.get(cacheKey); //On disk scripts will be loaded into the staticCache by the listener\n if (compiled == null) {\n- throw new ElasticsearchIllegalArgumentException(\"Unable to find on disk script \" + script);\n+ throw new ElasticsearchIllegalArgumentException(\"Unable to find on disk script \" + scriptOrId);\n }\n return compiled;\n }\n \n+ String script = scriptOrId;\n if (scriptType == ScriptType.INDEXED) {\n- final IndexedScript indexedScript = new IndexedScript(lang, script);\n+ final IndexedScript indexedScript = new IndexedScript(lang, scriptOrId);\n script = getScriptFromIndex(indexedScript.lang, indexedScript.id);\n+ cacheKey = newCacheKey(scriptEngineService, script);\n }\n \n CompiledScript compiled = cache.getIfPresent(cacheKey);",
"filename": "src/main/java/org/elasticsearch/script/ScriptService.java",
"status": "modified"
},
{
"diff": "@@ -20,9 +20,11 @@\n \n import com.google.common.collect.Maps;\n \n+import org.elasticsearch.action.index.IndexRequest.OpType;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.indexedscripts.delete.DeleteIndexedScriptResponse;\n import org.elasticsearch.action.indexedscripts.get.GetIndexedScriptResponse;\n+import org.elasticsearch.action.indexedscripts.put.PutIndexedScriptRequestBuilder;\n import org.elasticsearch.action.indexedscripts.put.PutIndexedScriptResponse;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchRequest;\n@@ -372,6 +374,48 @@ public void testIndexedTemplate() throws Exception {\n assertHitCount(sr, 4);\n }\n \n+ // Relates to #10397\n+ @Test\n+ public void testIndexedTemplateOverwrite() throws Exception {\n+ createIndex(\"testindex\");\n+ ensureGreen(\"testindex\");\n+\n+ index(\"testindex\", \"test\", \"1\", jsonBuilder().startObject().field(\"searchtext\", \"dev1\").endObject());\n+ refresh();\n+\n+ int iterations = randomIntBetween(2, 11);\n+ for (int i = 1; i < iterations; i++) {\n+ PutIndexedScriptResponse scriptResponse = client().preparePutIndexedScript(MustacheScriptEngineService.NAME, \"git01\", \n+ \"{\\\"query\\\": {\\\"match\\\": {\\\"searchtext\\\": {\\\"query\\\": \\\"{{P_Keyword1}}\\\",\\\"type\\\": \\\"ooophrase_prefix\\\"}}}}\").get();\n+ assertEquals(i * 2 - 1, scriptResponse.getVersion());\n+ \n+ GetIndexedScriptResponse getResponse = client().prepareGetIndexedScript(MustacheScriptEngineService.NAME, \"git01\").get();\n+ assertTrue(getResponse.isExists());\n+ \n+ Map<String, Object> templateParams = Maps.newHashMap();\n+ templateParams.put(\"P_Keyword1\", \"dev\");\n+ \n+ try {\n+ client().prepareSearch(\"testindex\").setTypes(\"test\").\n+ setTemplateName(\"git01\").setTemplateType(ScriptService.ScriptType.INDEXED).setTemplateParams(templateParams).get();\n+ fail(\"Broken test template is parsing w/o error.\");\n+ } catch (SearchPhaseExecutionException e) {\n+ // the above is expected to fail\n+ }\n+ \n+ PutIndexedScriptRequestBuilder builder = client()\n+ .preparePutIndexedScript(MustacheScriptEngineService.NAME, \"git01\",\n+ \"{\\\"query\\\": {\\\"match\\\": {\\\"searchtext\\\": {\\\"query\\\": \\\"{{P_Keyword1}}\\\",\\\"type\\\": \\\"phrase_prefix\\\"}}}}\")\n+ .setOpType(OpType.INDEX);\n+ scriptResponse = builder.get();\n+ assertEquals(i * 2, scriptResponse.getVersion());\n+ SearchResponse searchResponse = client().prepareSearch(\"testindex\").setTypes(\"test\").\n+ setTemplateName(\"git01\").setTemplateType(ScriptService.ScriptType.INDEXED).setTemplateParams(templateParams).get();\n+ assertHitCount(searchResponse, 1);\n+ }\n+ }\n+\n+ \n @Test\n public void testIndexedTemplateWithArray() throws Exception {\n createIndex(ScriptService.SCRIPT_INDEX);",
"filename": "src/test/java/org/elasticsearch/index/query/TemplateQueryTest.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n \n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.indexedscripts.put.PutIndexedScriptResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n@@ -87,6 +88,30 @@ public void testFieldIndexedScript() throws ExecutionException, InterruptedExce\n assertThat((Integer)sh.field(\"test2\").getValue(), equalTo(6));\n }\n \n+ // Relates to #10397\n+ @Test\n+ public void testUpdateScripts() {\n+ createIndex(\"test_index\");\n+ ensureGreen(\"test_index\");\n+ client().prepareIndex(\"test_index\", \"test_type\", \"1\").setSource(\"{\\\"foo\\\":\\\"bar\\\"}\").get();\n+ flush(\"test_index\");\n+\n+ int iterations = randomIntBetween(2, 11);\n+ for (int i = 1; i < iterations; i++) {\n+ PutIndexedScriptResponse response = \n+ client().preparePutIndexedScript(GroovyScriptEngineService.NAME, \"script1\", \"{\\\"script\\\":\\\"\" + i + \"\\\"}\").get();\n+ assertEquals(i, response.getVersion());\n+ \n+ String query = \"{\"\n+ + \" \\\"query\\\" : { \\\"match_all\\\": {}}, \"\n+ + \" \\\"script_fields\\\" : { \\\"test_field\\\" : { \\\"script_id\\\" : \\\"script1\\\", \\\"lang\\\":\\\"groovy\\\" } } }\"; \n+ SearchResponse searchResponse = client().prepareSearch().setSource(query).setIndices(\"test_index\").setTypes(\"test_type\").get();\n+ assertHitCount(searchResponse, 1);\n+ SearchHit sh = searchResponse.getHits().getAt(0);\n+ assertThat((Integer)sh.field(\"test_field\").getValue(), equalTo(i));\n+ }\n+ }\n+\n @Test\n public void testDisabledUpdateIndexedScriptsOnly() {\n if (randomBoolean()) {",
"filename": "src/test/java/org/elasticsearch/script/IndexedScriptTests.java",
"status": "modified"
}
]
} |
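The core of the fix in `ScriptService` is the cache key: for indexed scripts the id is resolved to its current source and the compiled-script cache is keyed on that source, so overwriting the stored template naturally misses the stale entry and triggers a fresh compile. The sketch below illustrates the difference with plain maps; `scriptsIndex`, `compiledCache` and the `compile` helper are made-up stand-ins for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the caching bug: if compiled templates are cached under the script *id*, overwriting
// the stored source never evicts the stale compiled entry. Keying the cache by the fetched *source* makes
// an update compile fresh automatically.
public class ScriptCacheSketch {

    static final Map<String, String> scriptsIndex = new HashMap<>();   // id -> source ("the .scripts index")
    static final Map<String, String> compiledCache = new HashMap<>();  // cache key -> "compiled" artifact

    static String compile(String source) {
        return "compiled(" + source + ")";
    }

    // Broken variant: cache key is the id, so a re-PUT of the template is never picked up.
    static String executeCachedById(String id) {
        return compiledCache.computeIfAbsent(id, k -> compile(scriptsIndex.get(id)));
    }

    // Fixed variant: resolve the id to its current source first, and use the source as the cache key.
    static String executeCachedBySource(String id) {
        String source = scriptsIndex.get(id);
        return compiledCache.computeIfAbsent(source, ScriptCacheSketch::compile);
    }

    public static void main(String[] args) {
        scriptsIndex.put("git01", "type: ooophrase_prefix");   // first, broken template
        System.out.println(executeCachedById("git01"));
        scriptsIndex.put("git01", "type: phrase_prefix");      // overwrite with the corrected template
        System.out.println(executeCachedById("git01"));        // still the stale compiled artifact

        compiledCache.clear();
        System.out.println(executeCachedBySource("git01"));    // compiles the current source
        scriptsIndex.put("git01", "type: match_phrase_prefix");
        System.out.println(executeCachedBySource("git01"));    // new source -> new cache entry
    }
}
```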
{
"body": "- We're running ES 1.4.2\n- Can sometimes be solved by closing/re-opening affected indices, but the issue usually returns prettys soon\n- This \"clogs\" up pending tasks and some tasks get stuck / cluster actions will now always timeout (e.g. loading/removing warmers)\n- This consumes all network capacity on the ES nodes\n\nhttps://gist.github.com/EikeDehling/a015a5137ac5d99dc850\n\n[buzzcapture@hermes ~]$ curl http://artemis3:9200/_cat/pending_tasks\n18149502 1s HIGH refresh-mapping [postings-5360000000][[posting]] \n18149503 745ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149509 736ms HIGH refresh-mapping [postings-5360000000][[posting]] \n18149504 737ms HIGH refresh-mapping [postings-4180000000][[posting]] \n18149510 735ms HIGH refresh-mapping [postings-5190000000][[posting]] \n18149512 735ms HIGH refresh-mapping [postings-5430000000][[posting]] \n18149506 736ms HIGH refresh-mapping [postings-5430000000][[posting]] \n18149505 736ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149511 735ms HIGH refresh-mapping [postings-5430000000][[posting]] \n18149515 732ms HIGH refresh-mapping [postings-5430000000][[posting]] \n18149519 290ms HIGH refresh-mapping [postings-5430000000][[posting]] \n18149521 289ms HIGH refresh-mapping [postings-5190000000][[posting]] \n18149513 734ms HIGH refresh-mapping [postings-5100000000][[posting]] \n18149525 287ms HIGH refresh-mapping [postings-5100000000][[posting]] \n18149507 736ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149508 736ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149514 733ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149516 299ms HIGH refresh-mapping [postings-4180000000][[posting]] \n18149517 298ms HIGH refresh-mapping [postings-5360000000][[posting]] \n18149518 291ms HIGH refresh-mapping [postings-5430000000][[posting]] \n12674966 1.7d NORMAL master ping (from: [Y8LnaPqjTv-4Vn4CWXWWlQ]) \n18149520 290ms HIGH refresh-mapping [postings-5430000000][[posting]] \n12676681 1.7d NORMAL master ping (from: [Y8LnaPqjTv-4Vn4CWXWWlQ]) \n18149522 289ms HIGH refresh-mapping [postings-5190000000][[posting]] \n18149523 288ms HIGH refresh-mapping [postings-5100000000][[posting]] \n18149524 288ms HIGH refresh-mapping [postings-4180000000][[posting]] \n12678378 1.7d NORMAL master ping (from: [Y8LnaPqjTv-4Vn4CWXWWlQ]) \n18149526 286ms HIGH refresh-mapping [postings-5430000000][[posting]] \n18149527 286ms HIGH refresh-mapping [postings-5190000000][[posting]] \n18149528 286ms HIGH refresh-mapping [postings-4180000000][[posting]] \n18149529 286ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149530 284ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149531 284ms HIGH refresh-mapping [postings-5360000000][[posting]] \n18149532 284ms HIGH refresh-mapping [postings-5100000000][[posting]] \n18149533 284ms HIGH refresh-mapping [postings-5500000000][[posting]] \n18149534 281ms HIGH refresh-mapping [postings-5360000000][[posting]]\n",
"comments": [
{
"body": "@EikeDehling can you enable debug logging for `indices.cluster` and grep the logs for the output of this line? I want to see what the difference is between the two sources . A gist will be great.\n\n```\nlogger.debug(\"[{}] parsed mapping [{}], and got different sources\\noriginal:\\n{}\\nparsed:\\n{}\", index, mappingType, mappingSource, mapperService.documentMapper(mappingType).mappingSource());\n```\n",
"created_at": "2015-03-30T19:53:17Z"
},
{
"body": "@bleskes Thanks for the quick reponse!\n\nGist here: https://gist.github.com/EikeDehling/129aa3f8213ad8552f49\n\nThe difference in mapping appears to be in nested elements, apparently they are not ordered alphbetically? The difference in serialisation is under posting.properties.body.fields._text_.fielddata , there entries there are ordered differently in the original/parsed version.\n",
"created_at": "2015-03-31T08:43:42Z"
},
{
"body": "This gist is a bit easier to read:\n\nhttps://gist.github.com/EikeDehling/fc1289cc443b7acdc3f4\n\nThe issue is under the key `posting.properties.body.fields._text_.fielddata` : Ordering is different for the original/parsed mapping.\n",
"created_at": "2015-03-31T17:52:13Z"
},
{
"body": "@EikeDehling thx for that. It's accurate. The problem lies in the way the field data settings are rendered:\n\nhttps://github.com/elastic/elasticsearch/blob/master/src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java#L756\n\n```\nbuilder.field(\"fielddata\", (Map) fieldDataType.getSettings().getAsMap());\n```\n\nThe order of the keys in that map is arbitrary (practically). It may be different between master and nodes causing this endless loop.\n\nTo work around this, you can set `indices.cluster.send_refresh_mapping` to false (requires node restart). This will disable the sending of mapping refresh instructions. You must remember to remove this settings before you upgrade to the next ES version, which will have a fix for this.\n",
"created_at": "2015-03-31T19:27:29Z"
},
{
"body": "@EikeDehling do you run on Java8 by any chance? (wondering to better understand how frequently this can happen)\n",
"created_at": "2015-03-31T19:36:04Z"
},
{
"body": "We're running Java7, 1.7.0_45\n\nThanks for the tip about settings.\n\nI also found that line of code indeed, i'll try and make a patch/test.\n",
"created_at": "2015-04-01T08:25:01Z"
},
{
"body": "Cool. If you wait an hour or two, I’ll probably make a PR with a fix.\n\n> On 01 Apr 2015, at 10:25, EikeDehling notifications@github.com wrote:\n> \n> We're running Java7, 1.7.0_45\n> \n> Thanks for the tip about settings.\n> \n> I also found that line of code indeed, i'll try and make a patch/test.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2015-04-01T08:32:54Z"
},
{
"body": "This is my initial patch+unit test, happy to compare to what you're producing. Sorry, i'm not that handy with github/PR's yet.\n\nhttps://gist.github.com/EikeDehling/2e34a78a54de646b71ca\n\nAny chance there will also be a 1.4 release with a fix?\n",
"created_at": "2015-04-01T08:45:12Z"
},
{
"body": "@bleskes You said that the `indices.cluster.send_refresh_mapping` requires node restart, what would the effect be if you only have a few (say half of) the nodes which have this setting set to false?\n",
"created_at": "2015-04-01T09:24:24Z"
},
{
"body": "Then the other half might still send refresh mapping to the master. You need it on all data nodes. But you can do a rolling restart, one by one.\n\n> On 01 Apr 2015, at 11:24, wkoot notifications@github.com wrote:\n> \n> @bleskes You said that the indices.cluster.send_refresh_mapping requires node restart, what would the effect be if you only have a few (say half of) the nodes which have this setting set to false?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2015-04-01T09:28:23Z"
},
{
"body": "I am trying this fix in our staging environment, i'll let you know if that fixes the issue.\n\nhttps://github.com/EikeDehling/elasticsearch/commit/cc79d71bbc4d55cb12a50df2acc67ca6ba4ac5dc\n",
"created_at": "2015-04-01T09:47:52Z"
},
{
"body": "@EikeDehling looks good can you make a PR ? see https://www.elastic.co/contributing-to-elasticsearch . Also it would be great if you simplify the test and add random keys (but we can iterate on the PR). \n",
"created_at": "2015-04-01T11:43:51Z"
},
{
"body": "I made a PR, and afterwards signed the contributor license, i hope that's ok.\n\nI randomized the test and simplified a bit, happy to hear suggestions for improvements.\n",
"created_at": "2015-04-01T14:52:53Z"
}
],
"number": 10318,
"title": "Endless mapping refresh"
} | {
"body": "Fixes #10318\n\nHappy to hear feedback, e.g. suggestions to make the test simpler.\n",
"number": 10370,
"review_comments": [
{
"body": "we miss an import here, I believe\n",
"created_at": "2015-04-02T13:48:06Z"
},
{
"body": "also the TreeMap<String,Object>() can be simplified to:\n\n```\n TreeMap<String, Object> orderedFielddataSettings = new TreeMap<>();\n```\n",
"created_at": "2015-04-02T13:48:53Z"
},
{
"body": "I think this test can be made stronger by actually checking the bytes are consistent, which is what we want to use. can we check the following?\n\n```\n final DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n DocumentMapper docMapper = parser.parse(builder.string());\n DocumentMapper docMapper2 = parser.parse(docMapper.mappingSource().string());\n assertThat(docMapper.mappingSource(), equalTo(docMapper2.mappingSource()));\n```\n\nIt will cover all of this and more. We can then skip the rest (and also rename the test).\n",
"created_at": "2015-04-02T14:01:44Z"
},
{
"body": "can we add bogus non relevant settings? Would be good to know they don't mess up with things.\n",
"created_at": "2015-04-02T14:02:31Z"
}
],
"title": "Unneccesary mapping refreshes caused by unordered fielddata settings"
} | {
"commits": [
{
"message": " Bugfix+unittest for unneccesary mapping refreshes caused by unordered fielddata settings"
},
{
"message": "Handle review comments from Boaz Leskes"
}
],
"files": [
{
"diff": "@@ -82,6 +82,7 @@\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.TreeMap;\n \n /**\n *\n@@ -750,10 +751,13 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n builder.field(\"similarity\", SimilarityLookupService.DEFAULT_SIMILARITY);\n }\n \n+ TreeMap<String, Object> orderedFielddataSettings = new TreeMap<>();\n if (customFieldDataSettings != null) {\n- builder.field(\"fielddata\", (Map) customFieldDataSettings.getAsMap());\n+ orderedFielddataSettings.putAll(customFieldDataSettings.getAsMap());\n+ builder.field(\"fielddata\", orderedFielddataSettings);\n } else if (includeDefaults) {\n- builder.field(\"fielddata\", (Map) fieldDataType.getSettings().getAsMap());\n+ orderedFielddataSettings.putAll(fieldDataType.getSettings().getAsMap());\n+ builder.field(\"fielddata\", orderedFielddataSettings);\n }\n multiFields.toXContent(builder, params);\n ",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -38,7 +38,9 @@\n import org.junit.Test;\n \n import java.util.Arrays;\n+import java.util.Collections;\n import java.util.Map;\n+import java.util.TreeMap;\n \n import static org.elasticsearch.common.io.Streams.copyToBytesFromClasspath;\n import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n@@ -446,4 +448,37 @@ public void testMultiFieldsInConsistentOrder() throws Exception {\n assertThat(field, equalTo(multiFieldNames[i++]));\n }\n }\n+ \n+ @Test\n+ // The fielddata settings need to be the same after deserializing/re-serialsing, else unneccesary mapping sync's can be triggered\n+ public void testMultiFieldsFieldDataSettingsInConsistentOrder() throws Exception {\n+ final String MY_MULTI_FIELD = \"multi_field\";\n+ \n+ // Possible fielddata settings\n+ Map<String, Object> possibleSettings = new TreeMap<String, Object>();\n+ possibleSettings.put(\"filter.frequency.min\", 1);\n+ possibleSettings.put(\"filter.frequency.max\", 2);\n+ possibleSettings.put(\"filter.regex.pattern\", \".*\");\n+ possibleSettings.put(\"format\", \"fst\");\n+ possibleSettings.put(\"loading\", \"eager\");\n+ possibleSettings.put(\"foo\", \"bar\");\n+ possibleSettings.put(\"zetting\", \"zValue\");\n+ possibleSettings.put(\"aSetting\", \"aValue\");\n+ \n+ // Generate a mapping with the a random subset of possible fielddata settings\n+ XContentBuilder builder = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"my_field\").field(\"type\", \"string\").startObject(\"fields\").startObject(MY_MULTI_FIELD)\n+ .field(\"type\", \"string\").startObject(\"fielddata\");\n+ String[] keys = possibleSettings.keySet().toArray(new String[]{});\n+ Collections.shuffle(Arrays.asList(keys));\n+ for(int i = randomIntBetween(0, possibleSettings.size()-1); i >= 0; --i)\n+ builder.field(keys[i], possibleSettings.get(keys[i]));\n+ builder.endObject().endObject().endObject().endObject().endObject().endObject().endObject();\n+ \n+ // Check the mapping remains identical when deserialed/re-serialsed \n+ final DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+ DocumentMapper docMapper = parser.parse(builder.string());\n+ DocumentMapper docMapper2 = parser.parse(docMapper.mappingSource().string());\n+ assertThat(docMapper.mappingSource(), equalTo(docMapper2.mappingSource()));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java",
"status": "modified"
}
]
} |
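The fix above works because `TreeMap` serializes the fielddata settings in sorted key order, so parsing and re-serializing a mapping yields the same bytes on every node, whereas a plain map gives no ordering guarantee and two nodes can render the "same" mapping differently, which is what kept triggering refresh-mapping tasks. Below is a small illustrative sketch; the `render` helper is invented for the example and is not Elasticsearch's XContent builder.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// HashMap makes no ordering promise (the order depends on hashing and capacity), so two nodes can render
// the same settings with keys in different orders and conclude the mapping sources differ. TreeMap sorts
// the keys, so rendering is stable.
public class OrderedSettingsSketch {

    static String render(Map<String, Object> settings) {
        StringBuilder sb = new StringBuilder("{");
        settings.forEach((k, v) -> sb.append('"').append(k).append("\":\"").append(v).append("\","));
        if (sb.charAt(sb.length() - 1) == ',') {
            sb.setLength(sb.length() - 1);          // drop the trailing comma
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, Object> unordered = new HashMap<>();
        unordered.put("loading", "eager");
        unordered.put("format", "fst");
        unordered.put("filter.frequency.min", 1);
        unordered.put("filter.frequency.max", 2);

        // Order here is an implementation detail and may differ between JVMs / map capacities.
        System.out.println("HashMap:  " + render(unordered));

        // Sorted key order: identical output every time the same settings are rendered.
        System.out.println("TreeMap:  " + render(new TreeMap<>(unordered)));
    }
}
```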
{
"body": "If position_offset_gap is specified in a field's mapping, search_quote_analyzer is always added, even if no analyzer was specified:\n\nhttps://gist.github.com/jtibshirani/f4acab68e7e505fb1778\n\nFor consistency, search_quote_analyzer should only be in the mapping if it has been specified and differs from search_analyzer.\n",
"comments": [],
"number": 10357,
"title": "search_quote_analyzer always added to mapping when position_offset_gap is specified"
} | {
"body": "The check was ineffective and was causing search_quote_analyzer to be added to the mapping unnecessarily.\n\nCloses #10357\n",
"number": 10359,
"review_comments": [
{
"body": "good test, would be better to move it to a brand new class though, as this existing test class starts a single node cluster but you don't need to send any requests to it. This is a pure unit test, you can call it `StringFieldMapperXContentTests` and make it extend `ElasticsearchTestCase`.\n",
"created_at": "2015-04-02T08:00:03Z"
},
{
"body": "maybe move these two fields to the mapping above and have a single mapping?\n",
"created_at": "2015-04-02T08:02:14Z"
},
{
"body": "sorry, scratch that, you need parser, which needs the indexService, which needs an index on the cluster. It's all good, leave the test where it is. ;)\n",
"created_at": "2015-04-02T08:06:16Z"
},
{
"body": "this is a very minor comment, I am going to merge as-is, good job!\n",
"created_at": "2015-04-02T08:07:10Z"
}
],
"title": "Fixed an equality check in StringFieldMapper."
} | {
"commits": [
{
"message": "Fixed an equality check in StringFieldMapper.\n\nThe check was ineffective and was causing search_quote_analyzer to be added to the mapping unnecessarily.\n\nCloses #10357"
},
{
"message": "Added a test for search quote analyzer serialization."
}
],
"files": [
{
"diff": "@@ -382,7 +382,7 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults || positionOffsetGap != Defaults.POSITION_OFFSET_GAP) {\n builder.field(\"position_offset_gap\", positionOffsetGap);\n }\n- if (searchQuotedAnalyzer != null && searchAnalyzer != searchQuotedAnalyzer) {\n+ if (searchQuotedAnalyzer != null && !searchQuotedAnalyzer.name().equals(searchAnalyzer.name())) {\n builder.field(\"search_quote_analyzer\", searchQuotedAnalyzer.name());\n } else if (includeDefaults) {\n if (searchQuotedAnalyzer == null) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -20,20 +20,22 @@\n package org.elasticsearch.index.mapper.string;\n \n import com.google.common.collect.ImmutableMap;\n+import com.google.common.collect.Lists;\n+\n import org.apache.lucene.index.DocValuesType;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.index.IndexableFieldType;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.TermFilter;\n import org.apache.lucene.queries.TermsFilter;\n-import org.elasticsearch.Version;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.mapper.ContentPath;\n@@ -52,6 +54,7 @@\n \n import java.util.Arrays;\n import java.util.Collections;\n+import java.util.Map;\n \n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -216,6 +219,79 @@ public void testDefaultsForNotAnalyzed() throws Exception {\n assertThat(fieldType.omitNorms(), equalTo(false));\n assertParseIdemPotent(fieldType, defaultMapper);\n }\n+ \n+ @Test\n+ public void testSearchQuoteAnalyzerSerialization() throws Exception {\n+ // Cases where search_quote_analyzer should not be added to the mapping.\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field1\")\n+ .field(\"type\", \"string\")\n+ .field(\"position_offset_gap\", 1000)\n+ .endObject()\n+ .startObject(\"field2\")\n+ .field(\"type\", \"string\")\n+ .field(\"position_offset_gap\", 1000)\n+ .field(\"analyzer\", \"standard\")\n+ .endObject()\n+ .startObject(\"field3\")\n+ .field(\"type\", \"string\")\n+ .field(\"position_offset_gap\", 1000)\n+ .field(\"analyzer\", \"standard\")\n+ .field(\"search_analyzer\", \"simple\")\n+ .endObject()\n+ .startObject(\"field4\")\n+ .field(\"type\", \"string\")\n+ .field(\"position_offset_gap\", 1000)\n+ .field(\"analyzer\", \"standard\")\n+ .field(\"search_analyzer\", \"simple\")\n+ .field(\"search_quote_analyzer\", \"simple\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper mapper = parser.parse(mapping);\n+ for (String fieldName : Lists.newArrayList(\"field1\", \"field2\", \"field3\", \"field4\")) {\n+ Map<String, Object> serializedMap = getSerializedMap(fieldName, mapper);\n+ assertFalse(serializedMap.containsKey(\"search_quote_analyzer\"));\n+ }\n+ \n+ // Cases where search_quote_analyzer should be present.\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field1\")\n+ .field(\"type\", \"string\")\n+ .field(\"position_offset_gap\", 1000)\n+ .field(\"search_quote_analyzer\", \"simple\")\n+ .endObject()\n+ .startObject(\"field2\")\n+ .field(\"type\", \"string\")\n+ .field(\"position_offset_gap\", 1000)\n+ .field(\"analyzer\", \"standard\")\n+ .field(\"search_analyzer\", \"standard\")\n+ .field(\"search_quote_analyzer\", \"simple\")\n+ .endObject()\n+ .endObject()\n+ 
.endObject().endObject().string();\n+ \n+ mapper = parser.parse(mapping);\n+ for (String fieldName : Lists.newArrayList(\"field1\", \"field2\")) {\n+ Map<String, Object> serializedMap = getSerializedMap(fieldName, mapper);\n+ assertEquals(serializedMap.get(\"search_quote_analyzer\"), \"simple\");\n+ }\n+ }\n+ \n+ private Map<String, Object> getSerializedMap(String fieldName, DocumentMapper mapper) throws Exception {\n+ FieldMapper<?> fieldMapper = mapper.mappers().smartNameFieldMapper(fieldName);\n+ XContentBuilder builder = JsonXContent.contentBuilder().startObject();\n+ fieldMapper.toXContent(builder, ToXContent.EMPTY_PARAMS).endObject();\n+ builder.close();\n+ \n+ Map<String, Object> fieldMap = JsonXContent.jsonXContent.createParser(builder.bytes()).mapAndClose();\n+ @SuppressWarnings(\"unchecked\")\n+ Map<String, Object> result = (Map<String, Object>) fieldMap.get(fieldName);\n+ return result;\n+ }\n \n @Test\n public void testTermVectors() throws Exception {",
"filename": "src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java",
"status": "modified"
}
]
} |
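The one-line fix above replaces a reference comparison (`!=`) with a name comparison: the search analyzer and the search-quote analyzer can be logically identical yet be distinct objects, so identity always looks "different" and `search_quote_analyzer` gets serialized needlessly. A tiny sketch of the distinction follows; the `NamedAnalyzer` class here is a made-up stand-in, not Elasticsearch's.

```java
import java.util.Objects;

// Two analyzer objects can describe the same analyzer ("simple") yet be different instances,
// so a reference comparison treats them as different. Comparing by name matches the intent.
public class AnalyzerEqualitySketch {

    static class NamedAnalyzer {
        final String name;
        NamedAnalyzer(String name) { this.name = name; }
        String name() { return name; }
    }

    static boolean shouldSerializeSearchQuoteAnalyzer(NamedAnalyzer searchAnalyzer, NamedAnalyzer searchQuoteAnalyzer) {
        // Old check: searchQuoteAnalyzer != null && searchAnalyzer != searchQuoteAnalyzer  (reference identity)
        // Fixed check: compare the names instead.
        return searchQuoteAnalyzer != null && !Objects.equals(searchQuoteAnalyzer.name(), searchAnalyzer.name());
    }

    public static void main(String[] args) {
        NamedAnalyzer search = new NamedAnalyzer("simple");
        NamedAnalyzer quote = new NamedAnalyzer("simple");   // same analyzer, different instance

        System.out.println("reference check says different: " + (search != quote));                                   // true (bug)
        System.out.println("serialize search_quote_analyzer: " + shouldSerializeSearchQuoteAnalyzer(search, quote));   // false (fixed)
        System.out.println("with a real override: " + shouldSerializeSearchQuoteAnalyzer(search, new NamedAnalyzer("standard"))); // true
    }
}
```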
{
"body": "If a repository cannot be registered on one of non-master nodes because of configuration issue (for example if the repository plugin is not installed on the node) it might prevent other properly configured repositories from registering on the node.\n",
"comments": [],
"number": 10351,
"title": "Failure to register of one misconfigured repository might prevent future repository registration"
} | {
"body": "Separate repository registration to make sure that failure in registering one repository doesn't cause failures to register other repositories.\n\nCloses #10351\n",
"number": 10354,
"review_comments": [],
"title": "Separate repository registration"
} | {
"commits": [
{
"message": "Snapshot/Restore: separate repository registration\n\nSeparate repository registration to make sure that failure in registering one repository doesn't cause failures to register other repositories.\n\nCloses #10351"
}
],
"files": [
{
"diff": "@@ -286,10 +286,19 @@ public void clusterChanged(ClusterChangedEvent event) {\n // Previous version is different from the version in settings\n logger.debug(\"updating repository [{}]\", repositoryMetaData.name());\n closeRepository(repositoryMetaData.name(), holder);\n- holder = createRepositoryHolder(repositoryMetaData);\n+ holder = null;\n+ try {\n+ holder = createRepositoryHolder(repositoryMetaData);\n+ } catch (RepositoryException ex) {\n+ logger.warn(\"failed to change repository [{}]\", ex, repositoryMetaData.name());\n+ }\n }\n } else {\n- holder = createRepositoryHolder(repositoryMetaData);\n+ try {\n+ holder = createRepositoryHolder(repositoryMetaData);\n+ } catch (RepositoryException ex) {\n+ logger.warn(\"failed to create repository [{}]\", ex, repositoryMetaData.name());\n+ }\n }\n if (holder != null) {\n logger.debug(\"registering repository [{}]\", repositoryMetaData.name());",
"filename": "src/main/java/org/elasticsearch/repositories/RepositoriesService.java",
"status": "modified"
},
{
"diff": "@@ -55,6 +55,7 @@\n import org.elasticsearch.indices.ttl.IndicesTTLService;\n import org.elasticsearch.repositories.RepositoryMissingException;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n+import org.elasticsearch.snapshots.mockstore.MockRepositoryPlugin;\n import org.elasticsearch.test.InternalTestCluster;\n import org.junit.Ignore;\n import org.junit.Test;\n@@ -575,6 +576,27 @@ public boolean clearData(String nodeName) {\n assertThat(reusedShards.size(), greaterThanOrEqualTo(numberOfShards / 2));\n }\n \n+\n+ @Test\n+ public void registrationFailureTest() {\n+ logger.info(\"--> start first node\");\n+ internalCluster().startNode(settingsBuilder().put(\"plugin.types\", MockRepositoryPlugin.class.getName()));\n+ logger.info(\"--> start second node\");\n+ // Make sure the first node is elected as master\n+ internalCluster().startNode(settingsBuilder().put(\"node.master\", false));\n+ // Register mock repositories\n+ for (int i = 0; i < 5; i++) {\n+ client().admin().cluster().preparePutRepository(\"test-repo\" + i)\n+ .setType(\"mock\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))).setVerify(false).get();\n+ }\n+ logger.info(\"--> make sure that properly setup repository can be registered on all nodes\");\n+ client().admin().cluster().preparePutRepository(\"test-repo-0\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))).get();\n+\n+ }\n+\n @Test\n @Ignore\n public void chaosSnapshotTest() throws Exception {",
"filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,40 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.snapshots.mockstore;\n+\n+import org.elasticsearch.plugins.AbstractPlugin;\n+import org.elasticsearch.repositories.RepositoriesModule;\n+\n+public class MockRepositoryPlugin extends AbstractPlugin {\n+\n+ @Override\n+ public String name() {\n+ return \"mock-repository\";\n+ }\n+\n+ @Override\n+ public String description() {\n+ return \"Mock Repository\";\n+ }\n+\n+ public void onModule(RepositoriesModule repositoriesModule) {\n+ repositoriesModule.registerRepository(\"mock\", MockRepositoryModule.class);\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryPlugin.java",
"status": "added"
}
]
} |
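The change above isolates each repository registration in its own try/catch, so a `RepositoryException` from one misconfigured repository (say, a missing plugin) is logged and skipped rather than aborting the loop for everyone else. Below is a minimal log-and-continue sketch of that pattern; the types and names are invented stand-ins for the `RepositoriesService` internals.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A single misconfigured repository is logged and skipped instead of taking the
// well-configured repositories down with it.
public class RepositoryRegistrationSketch {

    static class RepositoryException extends RuntimeException {
        RepositoryException(String msg) { super(msg); }
    }

    static Object createRepositoryHolder(String name, String type) {
        if (!"fs".equals(type)) {                                  // pretend only "fs" is installed on this node
            throw new RepositoryException("[" + name + "] repository type [" + type + "] does not exist");
        }
        return new Object();
    }

    public static void main(String[] args) {
        Map<String, String> repositories = new LinkedHashMap<>();
        repositories.put("backups", "fs");
        repositories.put("cloud-snapshots", "s3");                 // plugin not installed -> should not block others
        repositories.put("archive", "fs");

        Map<String, Object> registered = new LinkedHashMap<>();
        for (Map.Entry<String, String> repo : repositories.entrySet()) {
            try {
                registered.put(repo.getKey(), createRepositoryHolder(repo.getKey(), repo.getValue()));
            } catch (RepositoryException e) {
                // log-and-continue, mirroring the warn() added in the diff
                System.out.println("failed to create repository [" + repo.getKey() + "]: " + e.getMessage());
            }
        }
        System.out.println("registered: " + registered.keySet());  // [backups, archive]
    }
}
```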
{
"body": "Hi! Please have a look at the following. Elastic is throwing an ArrayOutOfBoundsException when requesting inner_hits in my search query. The query works fine if inner_hits is not included. I feel this is a bug in elasticsearch. To reproduce:\n\n<pre><code>curl -XPOST 'http://localhost:9200/twitter'\n\ncurl -XPOST 'http://localhost:9200/twitter/_mapping/tweet' -d '\n{\n \"tweet\": {\n \"properties\": {\n \"comments\": {\n \"properties\": {\n \"messages\": {\n \"type\": \"nested\",\n \"properties\": {\n \"message\": {\n \"type\" : \"string\", \n \"index\": \"not_analyzed\"\n } \n }\n } \n }\n }\n }\n }\n}'\n\ncurl -XPOST 'http://localhost:9200/twitter/tweet' -d '\n{\n \"comments\": {\n \"messages\": [\n {\"message\": \"Nice website\"},\n {\"message\": \"Worst ever\"}\n ]\n }\n}'\n\ncurl -XGET 'http://localhost:9200/twitter/tweet/_search' -d '\n{\n \"query\": {\n \"nested\": {\n \"path\": \"comments.messages\",\n \"query\": {\n \"match\": {\"comments.messages.message\": \"Nice website\"}\n },\n \"inner_hits\" : {}\n }\n }\n}'\n\nResponse:\n\n{\"took\":54,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":4,\"failed\":1,\"failures\":[{\"index\":\"twitter\",\"shard\":4,\"status\":500,\"reason\":\"ArrayIndexOutOfBoundsException[-1]\"}]},\"hits\":{\"total\":1,\"max_score\":1.4054651,\"hits\":[]}}\n\n</code></pre>\n\n\nShould the document not have been returned with the \"Nice website\" comment in the inner_hits array?\n",
"comments": [
{
"body": "@mariusdw This is a bug. Elasticsearch mistakes the `comments` to be a nested object field and in your example it is just an object field.\n\nThis error will be fixed, but your mapping design (an object field that has a nested object field) raises a question. Elasticsearch indexes the `messages` json objects in a special way so that it can be used by nested query, nested sorting, inner hits etc. But the parent field is an object field and no special indexing happens there, so the nested features only work on the `messages` nested level. Is there a special reason why you chose this? It only makes sense if you have a single comment per field, otherwise I sugegst that you change the `comments` field to be of type `nested` too.\n",
"created_at": "2015-03-31T07:28:36Z"
},
{
"body": "Hi Martijn. Thanks for your reply. The example was just to reproduce the issue that I am experiencing with my real data structure. I kind of modified the example given in the elastic documentation to achieve this and maybe in the process of trying to simplify, used an example that isn't \"practical\" :)\n\nMaybe I should rather explain my real data structure. In our system, we store configuration for various hardware devices. There are two levels of settings that can be configured: \n1. Individual settings - settings that apply to an individual device only.\n2. Group settings - settings that apply to all devices unless overridden by an individual setting.\n\nEach configuration server reports these settings (together with some other data) to a central server that stores it as JSON documents in elasticsearch. We then do interesting things like check which percentage of devices that has a certain setting is online etc.\n\nAs there will only ever be one \"group settings\" for all devices of a certain type, I have decided to store this as a simple singular object inside my document. The individual settings for each device is then stored inside this single object as nested documents. \n\nLooking at your answer I think a simple workaround for me for now would be to change the \"group settings\" to also be a nested document. \n",
"created_at": "2015-03-31T08:54:15Z"
},
{
"body": "@mariusdw That decision to use object field makes perfect sense. When PR #10353 gets in inner hits will work again with your mapping. \n\nChanging the group settings to nested field will make it work for now, but does increase memory usage (due the fact that you have a nested `nested` field). I suggest that you move back to object field when 1.5.1 gets out.\n",
"created_at": "2015-03-31T20:55:01Z"
}
],
"number": 10334,
"title": "Elasticsearch inner_hits query results in ArrayOutOfBoundsException"
} | {
"body": "PR for #10334\n",
"number": 10353,
"review_comments": [],
"title": "Make sure inner hits also works for nested fields defined in object field"
} | {
"commits": [
{
"message": "inner hits: Make sure inner hits also work for nested fields defined in object field\n\nCloses #10334"
}
],
"files": [
{
"diff": "@@ -376,8 +376,14 @@ private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(Searc\n String field;\n Filter parentFilter;\n nestedParentObjectMapper = documentMapper.findParentObjectMapper(nestedObjectMapper);\n- if (nestedParentObjectMapper != null && nestedObjectMapper.nested().isNested()) {\n+ if (nestedParentObjectMapper != null) {\n field = nestedObjectMapper.name();\n+ if (!nestedParentObjectMapper.nested().isNested()) {\n+ nestedObjectMapper = nestedParentObjectMapper;\n+ // all right, the parent is a normal object field, so this is the best identiy we can give for that:\n+ nestedIdentity = new InternalSearchHit.InternalNestedIdentity(field, 0, nestedIdentity);\n+ continue;\n+ }\n parentFilter = nestedParentObjectMapper.nestedTypeFilter();\n } else {\n field = nestedObjectMapper.fullPath();",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -846,4 +846,42 @@ public void testNestedInnerHitsHiglightWithExcludeSource() throws Exception {\n assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).highlightFields().get(\"comments.message\").getFragments()[0]), equalTo(\"<em>fox</em> eat quick\"));\n }\n \n+ @Test\n+ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"messages\").field(\"type\", \"nested\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").startObject(\"messages\").field(\"message\", \"fox eat quick\").endObject().endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\")).innerHit(new QueryInnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"messages\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments.messages\").getAt(0).getNestedIdentity().getChild().getChild(), nullValue());\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
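The FetchPhase.java change above handles a nested field that sits under a plain object field by recording the object level with offset 0 and continuing up the path instead of requiring a nested parent. The following standalone sketch (hypothetical `NestedIdentity` class and `buildIdentity` helper, not the actual Elasticsearch classes) illustrates that identity chain for the `comments.messages` mapping from the issue.

```java
// Standalone sketch, not the Elasticsearch implementation: builds a nested-identity
// chain for a hit at path "comments.messages", mirroring the idea of the FetchPhase
// fix above, where a plain object parent ("comments") is recorded with offset 0.
import java.util.Arrays;
import java.util.List;

public class NestedIdentitySketch {

    // Hypothetical stand-in for InternalSearchHit.InternalNestedIdentity.
    static final class NestedIdentity {
        final String field;
        final int offset;
        final NestedIdentity child;

        NestedIdentity(String field, int offset, NestedIdentity child) {
            this.field = field;
            this.offset = offset;
            this.child = child;
        }

        @Override
        public String toString() {
            return field + "[" + offset + "]" + (child == null ? "" : " > " + child);
        }
    }

    /**
     * Walks the path from the innermost nested level up to the root. Nested levels keep
     * their real offset inside the block of nested docs; plain object levels (like
     * "comments" in issue #10334) get offset 0, the best identity available for them.
     */
    static NestedIdentity buildIdentity(List<String> path, List<Boolean> isNested, int nestedOffset) {
        NestedIdentity identity = null;
        for (int i = path.size() - 1; i >= 0; i--) {
            int offset = isNested.get(i) ? nestedOffset : 0;
            identity = new NestedIdentity(path.get(i), offset, identity);
        }
        return identity;
    }

    public static void main(String[] args) {
        // "comments" is a plain object field, "messages" is the nested field; the matching
        // nested doc is the first one in its block.
        NestedIdentity id = buildIdentity(Arrays.asList("comments", "messages"),
                Arrays.asList(false, true), 0);
        System.out.println(id); // comments[0] > messages[0]
    }
}
```

The test added in InnerHitsTests above asserts exactly this shape: field `comments` with offset 0, child `messages` with offset 0, and no further child.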
{
"body": "Today there is a chance that the state version for shard, index or cluster\nstate goes backwards or is reset on a full restart etc. depending on\nseveral factors not related to the state. To prevent any collisions\nwith already existing state files and to maintain write-once properties\nthis change introductes an incremental state ID instead of using the plain\nstate version. This also fixes a bug when the previous legacy state had a\ngreater version than the current state which causes an exception on node\nstartup or if left-over files are present.\n",
"comments": [
{
"body": "@bleskes pushed a new commit\n",
"created_at": "2015-03-30T16:49:21Z"
},
{
"body": "I left one comment regarding the use of versions in determining the counter. Another thing I was wondering about is how we deal with the scenario 1.5.0 can live us in - a higher id legacy file, with a lower id and non legacy file.\n",
"created_at": "2015-03-30T19:43:13Z"
},
{
"body": "> I left one comment regarding the use of versions in determining the counter. Another thing I was wondering about is how we deal with the scenario 1.5.0 can live us in - a higher id legacy file, with a lower id and non legacy file.\n\nwe don't deal with that at all. IMO this requires user interaction - no way to resolve it automatically and I don't think we should unless it's evident that there is a real problem here that happens regularly.\n",
"created_at": "2015-03-30T19:46:32Z"
},
{
"body": "> I don't think we should unless it's evident that there is a real problem here that happens regularly.\n\nFair enough. Let's wait and see.\n",
"created_at": "2015-03-30T19:48:36Z"
},
{
"body": "@bleskes can you take another look?\n",
"created_at": "2015-03-31T10:50:42Z"
},
{
"body": "LGTM\n",
"created_at": "2015-03-31T11:15:13Z"
}
],
"number": 10316,
"title": "Refactor state format to use incremental state IDs"
} | {
"body": "Today there is a chance that the state version for shard, index or cluster\nstate goes backwards or is reset on a full restart etc. depending on\nseveral factors not related to the state. To prevent any collisions\nwith already existing state files and to maintain write-once properties\nthis change introductes an incremental state ID instead of using the plain\nstate version. This also fixes a bug when the previous legacy state had a\ngreater version than the current state which causes an exception on node\nstartup or if left-over files are present.\n\nNote: this is the backport of #10316 to 1.x but since it was tricky I really want @bleskes to take another look\n",
"number": 10343,
"review_comments": [
{
"body": "this doesn't seem to be used, was something forgotten to?\n",
"created_at": "2015-03-31T20:56:19Z"
}
],
"title": "[STATE] Refactor state format to use incremental state IDs"
} | {
"commits": [
{
"message": "[STATE] Refactor state format to use incremental state IDs\n\nToday there is a chance that the state version for shard, index or cluster\nstate goes backwards or is reset on a full restart etc. depending on\nseveral factors not related to the state. To prevent any collisions\nwith already existing state files and to maintain write-once properties\nthis change introductes an incremental state ID instead of using the plain\nstate version. This also fixes a bug when the previous legacy state had a\ngreater version than the current state which causes an exception on node\nstartup or if left-over files are present.\n\nCloses #10316"
}
],
"files": [
{
"diff": "@@ -61,8 +61,6 @@ public class LocalGatewayMetaState extends AbstractComponent implements ClusterS\n \n static final String GLOBAL_STATE_FILE_PREFIX = \"global-\";\n private static final String INDEX_STATE_FILE_PREFIX = \"state-\";\n- static final Pattern GLOBAL_STATE_FILE_PATTERN = Pattern.compile(GLOBAL_STATE_FILE_PREFIX + \"(\\\\d+)(\" + MetaDataStateFormat.STATE_FILE_EXTENSION + \")?\");\n- static final Pattern INDEX_STATE_FILE_PATTERN = Pattern.compile(INDEX_STATE_FILE_PREFIX + \"(\\\\d+)(\" + MetaDataStateFormat.STATE_FILE_EXTENSION + \")?\");\n private static final String GLOBAL_STATE_LOG_TYPE = \"[_global]\";\n static enum AutoImportDangledState {\n NO() {\n@@ -119,6 +117,8 @@ public static AutoImportDangledState fromString(String value) {\n private final Object danglingMutex = new Object();\n private final IndicesService indicesService;\n private final ClusterService clusterService;\n+ private final MetaDataStateFormat<IndexMetaData> indexStateFormat;\n+ private final MetaDataStateFormat<MetaData> globalStateFormat;\n \n @Inject\n public LocalGatewayMetaState(Settings settings, ThreadPool threadPool, NodeEnvironment nodeEnv,\n@@ -152,6 +152,8 @@ public LocalGatewayMetaState(Settings settings, ThreadPool threadPool, NodeEnvir\n \n logger.debug(\"using gateway.local.auto_import_dangled [{}], gateway.local.delete_timeout [{}], with gateway.local.dangling_timeout [{}]\",\n this.autoImportDangled, this.deleteTimeout, this.danglingTimeout);\n+ indexStateFormat = indexStateFormat(format, formatParams);\n+ globalStateFormat = globalStateFormat(format, gatewayModeFormatParams);\n if (DiscoveryNode.masterNode(settings) || DiscoveryNode.dataNode(settings)) {\n nodeEnv.ensureAtomicMoveSupported();\n }\n@@ -319,8 +321,8 @@ public void onFailure(Throwable e) {\n /**\n * Returns a StateFormat that can read and write {@link MetaData}\n */\n- static MetaDataStateFormat<MetaData> globalStateFormat(XContentType format, final ToXContent.Params formatParams, final boolean deleteOldFiles) {\n- return new MetaDataStateFormat<MetaData>(format, deleteOldFiles) {\n+ static MetaDataStateFormat<MetaData> globalStateFormat(XContentType format, final ToXContent.Params formatParams) {\n+ return new MetaDataStateFormat<MetaData>(format, GLOBAL_STATE_FILE_PREFIX) {\n \n @Override\n public void toXContent(XContentBuilder builder, MetaData state) throws IOException {\n@@ -337,8 +339,8 @@ public MetaData fromXContent(XContentParser parser) throws IOException {\n /**\n * Returns a StateFormat that can read and write {@link IndexMetaData}\n */\n- static MetaDataStateFormat<IndexMetaData> indexStateFormat(XContentType format, final ToXContent.Params formatParams, boolean deleteOldFiles) {\n- return new MetaDataStateFormat<IndexMetaData>(format, deleteOldFiles) {\n+ static MetaDataStateFormat<IndexMetaData> indexStateFormat(XContentType format, final ToXContent.Params formatParams) {\n+ return new MetaDataStateFormat<IndexMetaData>(format, INDEX_STATE_FILE_PREFIX) {\n \n @Override\n public void toXContent(XContentBuilder builder, IndexMetaData state) throws IOException {\n@@ -353,10 +355,8 @@ public IndexMetaData fromXContent(XContentParser parser) throws IOException {\n \n private void writeIndex(String reason, IndexMetaData indexMetaData, @Nullable IndexMetaData previousIndexMetaData) throws Exception {\n logger.trace(\"[{}] writing state, reason [{}]\", indexMetaData.index(), reason);\n- final boolean deleteOldFiles = previousIndexMetaData != null && previousIndexMetaData.version() != 
indexMetaData.version();\n- final MetaDataStateFormat<IndexMetaData> writer = indexStateFormat(format, formatParams, deleteOldFiles);\n try {\n- writer.write(indexMetaData, INDEX_STATE_FILE_PREFIX, indexMetaData.version(),\n+ indexStateFormat.write(indexMetaData, indexMetaData.version(),\n nodeEnv.indexLocations(new Index(indexMetaData.index())));\n } catch (Throwable ex) {\n logger.warn(\"[{}]: failed to write index state\", ex, indexMetaData.index());\n@@ -366,9 +366,8 @@ private void writeIndex(String reason, IndexMetaData indexMetaData, @Nullable In\n \n private void writeGlobalState(String reason, MetaData metaData) throws Exception {\n logger.trace(\"{} writing state, reason [{}]\", GLOBAL_STATE_LOG_TYPE, reason);\n- final MetaDataStateFormat<MetaData> writer = globalStateFormat(format, gatewayModeFormatParams, true);\n try {\n- writer.write(metaData, GLOBAL_STATE_FILE_PREFIX, metaData.version(), nodeEnv.nodeDataLocations());\n+ globalStateFormat.write(metaData, metaData.version(), nodeEnv.nodeDataLocations());\n } catch (Throwable ex) {\n logger.warn(\"{}: failed to write global state\", ex, GLOBAL_STATE_LOG_TYPE);\n throw new IOException(\"failed to write global state\", ex);\n@@ -398,12 +397,11 @@ private MetaData loadState() throws Exception {\n \n @Nullable\n private IndexMetaData loadIndexState(String index) {\n- return MetaDataStateFormat.loadLatestState(logger, indexStateFormat(format, formatParams, true),\n- INDEX_STATE_FILE_PATTERN, \"[\" + index + \"]\", nodeEnv.indexLocations(new Index(index)));\n+ return indexStateFormat.loadLatestState(logger, nodeEnv.indexLocations(new Index(index)));\n }\n \n private MetaData loadGlobalState() {\n- return MetaDataStateFormat.loadLatestState(logger, globalStateFormat(format, gatewayModeFormatParams, true), GLOBAL_STATE_FILE_PATTERN, GLOBAL_STATE_LOG_TYPE, nodeEnv.nodeDataLocations());\n+ return globalStateFormat.loadLatestState(logger, nodeEnv.nodeDataLocations());\n }\n \n ",
"filename": "src/main/java/org/elasticsearch/gateway/local/state/meta/LocalGatewayMetaState.java",
"status": "modified"
},
{
"diff": "@@ -22,11 +22,7 @@\n import com.google.common.collect.Collections2;\n import org.apache.lucene.codecs.CodecUtil;\n import org.apache.lucene.index.CorruptIndexException;\n-import org.apache.lucene.store.Directory;\n-import org.apache.lucene.store.IOContext;\n-import org.apache.lucene.store.IndexInput;\n-import org.apache.lucene.store.OutputStreamIndexOutput;\n-import org.apache.lucene.store.SimpleFSDirectory;\n+import org.apache.lucene.store.*;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.XIOUtils;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n@@ -36,15 +32,13 @@\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.lucene.store.InputStreamIndexInput;\n import org.elasticsearch.common.xcontent.*;\n+import org.elasticsearch.gateway.local.state.meta.CorruptStateException;\n \n import java.io.File;\n import java.io.FileInputStream;\n import java.io.IOException;\n import java.io.OutputStream;\n-import java.nio.file.Files;\n-import java.nio.file.Path;\n-import java.nio.file.Paths;\n-import java.nio.file.StandardCopyOption;\n+import java.nio.file.*;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.regex.Matcher;\n@@ -62,17 +56,19 @@ public abstract class MetaDataStateFormat<T> {\n private static final int STATE_FILE_VERSION = 0;\n private static final int BUFFER_SIZE = 4096;\n private final XContentType format;\n- private final boolean deleteOldFiles;\n+ private final String prefix;\n+ private final Pattern stateFilePattern;\n+\n \n /**\n * Creates a new {@link MetaDataStateFormat} instance\n * @param format the format of the x-content\n- * @param deleteOldFiles if <code>true</code> write operations will\n- * clean up old files written with this format.\n */\n- protected MetaDataStateFormat(XContentType format, boolean deleteOldFiles) {\n+ protected MetaDataStateFormat(XContentType format, String prefix) {\n this.format = format;\n- this.deleteOldFiles = deleteOldFiles;\n+ this.prefix = prefix;\n+ this.stateFilePattern = Pattern.compile(Pattern.quote(prefix) + \"(\\\\d+)(\" + MetaDataStateFormat.STATE_FILE_EXTENSION + \")?\");\n+\n }\n \n /**\n@@ -89,15 +85,16 @@ public XContentType format() {\n * it's target filename of the pattern <tt>{prefix}{version}.st</tt>.\n *\n * @param state the state object to write\n- * @param prefix the state names prefix used to compose the file name.\n * @param version the version of the state\n * @param locations the locations where the state should be written to.\n * @throws IOException if an IOException occurs\n */\n- public final void write(final T state, final String prefix, final long version, final File... locations) throws IOException {\n+ public final void write(final T state, final long version, final File... 
locations) throws IOException {\n Preconditions.checkArgument(locations != null, \"Locations must not be null\");\n Preconditions.checkArgument(locations.length > 0, \"One or more locations required\");\n- String fileName = prefix + version + STATE_FILE_EXTENSION;\n+ final long maxStateId = findMaxStateId(prefix, locations)+1;\n+ assert maxStateId >= 0 : \"maxStateId must be positive but was: [\" + maxStateId + \"]\";\n+ final String fileName = prefix + maxStateId + STATE_FILE_EXTENSION;\n Path stateLocation = Paths.get(locations[0].getPath(), STATE_DIR_NAME);\n Files.createDirectories(stateLocation);\n final Path tmpStatePath = stateLocation.resolve(fileName + \".tmp\");\n@@ -141,9 +138,7 @@ public void close() throws IOException {\n } finally {\n Files.deleteIfExists(tmpStatePath);\n }\n- if (deleteOldFiles) {\n- cleanupOldFiles(prefix, fileName, locations);\n- }\n+ cleanupOldFiles(prefix, fileName, locations);\n }\n \n protected XContentBuilder newXContentBuilder(XContentType type, OutputStream stream ) throws IOException {\n@@ -166,17 +161,14 @@ protected XContentBuilder newXContentBuilder(XContentType type, OutputStream str\n * Reads the state from a given file and compares the expected version against the actual version of\n * the state.\n */\n- public final T read(File file, long expectedVersion) throws IOException {\n+ public final T read(File file) throws IOException {\n try (Directory dir = newDirectory(file.getParentFile())) {\n try (final IndexInput indexInput = dir.openInput(file.getName(), IOContext.DEFAULT)) {\n- // We checksum the entire file before we even go and parse it. If it's corrupted we barf right here.\n+ // We checksum the entire file before we even go and parse it. If it's corrupted we barf right here.\n CodecUtil.checksumEntireFile(indexInput);\n CodecUtil.checkHeader(indexInput, STATE_FILE_CODEC, STATE_FILE_VERSION, STATE_FILE_VERSION);\n final XContentType xContentType = XContentType.values()[indexInput.readInt()];\n- final long version = indexInput.readLong();\n- if (version != expectedVersion) {\n- throw new CorruptStateException(\"State version mismatch expected: \" + expectedVersion + \" but was: \" + version);\n- }\n+ indexInput.readLong(); // version currently unused\n long filePointer = indexInput.getFilePointer();\n long contentSize = indexInput.length() - CodecUtil.footerLength() - filePointer;\n try (IndexInput slice = indexInput.slice(\"state_xcontent\", filePointer, contentSize)) {\n@@ -213,25 +205,39 @@ private void cleanupOldFiles(String prefix, String fileName, File[] locations) t\n }\n }\n \n+ long findMaxStateId(final String prefix, File... locations) throws IOException {\n+ long maxId = -1;\n+ for (File dataLocation : locations) {\n+ final File[] files = new File(dataLocation, STATE_DIR_NAME).listFiles();\n+ if (files != null) {\n+ for (File file : files) {\n+ if (!file.getName().startsWith(prefix)) {\n+ continue;\n+ }\n+ final Matcher matcher = stateFilePattern.matcher(file.getName());\n+ if (matcher.matches()) {\n+ final long id = Long.parseLong(matcher.group(1));\n+ maxId = Math.max(maxId, id);\n+ }\n+ }\n+ }\n+ }\n+ return maxId;\n+ }\n+\n /**\n * Tries to load the latest state from the given data-locations. 
It tries to load the latest state determined by\n * the states version from one or more data directories and if none of the latest states can be loaded an exception\n * is thrown to prevent accidentally loading a previous state and silently omitting the latest state.\n *\n * @param logger an elasticsearch logger instance\n- * @param format the actual metastate format to use\n- * @param pattern the file name pattern to identify files belonging to this pattern and to read the version from.\n- * The first capture group should return the version of the file. If the second capture group is has a\n- * null value the files is considered a legacy file and will be treated as if the file contains a plain\n- * x-content payload.\n- * @param stateType the state type we are loading. used for logging contenxt only.\n * @param dataLocations the data-locations to try.\n * @return the latest state or <code>null</code> if no state was found.\n */\n- public static <T> T loadLatestState(ESLogger logger, MetaDataStateFormat<T> format, Pattern pattern, String stateType, File... dataLocations) {\n- List<FileAndVersion> files = new ArrayList<>();\n- long maxVersion = -1;\n- boolean maxVersionIsLegacy = true;\n+ public T loadLatestState(ESLogger logger, File... dataLocations) {\n+ List<FileAndStateId> files = new ArrayList<>();\n+ long maxStateId = -1;\n+ boolean maxStateIdIsLegacy = true;\n if (dataLocations != null) { // select all eligable files first\n for (File dataLocation : dataLocations) {\n File stateDir = new File(dataLocation, MetaDataStateFormat.STATE_DIR_NAME);\n@@ -241,13 +247,13 @@ public static <T> T loadLatestState(ESLogger logger, MetaDataStateFormat<T> form\n continue;\n }\n for (File stateFile : stateFiles) {\n- final Matcher matcher = pattern.matcher(stateFile.getName());\n+ final Matcher matcher = stateFilePattern.matcher(stateFile.getName());\n if (matcher.matches()) {\n final long version = Long.parseLong(matcher.group(1));\n- maxVersion = Math.max(maxVersion, version);\n+ maxStateId = Math.max(maxStateId, version);\n final boolean legacy = MetaDataStateFormat.STATE_FILE_EXTENSION.equals(matcher.group(2)) == false;\n- maxVersionIsLegacy &= legacy; // on purpose, see NOTE below\n- files.add(new FileAndVersion(stateFile, version, legacy));\n+ maxStateIdIsLegacy &= legacy; // on purpose, see NOTE below\n+ files.add(new FileAndStateId(stateFile, version, legacy));\n }\n }\n }\n@@ -259,32 +265,32 @@ public static <T> T loadLatestState(ESLogger logger, MetaDataStateFormat<T> form\n // new format (ie. 
legacy == false) then we know that the latest version state ought to use this new format.\n // In case the state file with the latest version does not use the new format while older state files do,\n // the list below will be empty and loading the state will fail\n- for (FileAndVersion fileAndVersion : Collections2.filter(files, new VersionAndLegacyPredicate(maxVersion, maxVersionIsLegacy))) {\n+ for (FileAndStateId fileAndVersion : Collections2.filter(files, new StateIdAndLegacyPredicate(maxStateId, maxStateIdIsLegacy))) {\n try {\n final File stateFile = fileAndVersion.file;\n- final long version = fileAndVersion.version;\n+ final long id = fileAndVersion.id;\n final XContentParser parser;\n if (fileAndVersion.legacy) { // read the legacy format -- plain XContent\n try (FileInputStream stream = new FileInputStream(stateFile)) {\n final byte[] data = Streams.copyToByteArray(stream);\n if (data.length == 0) {\n- logger.debug(\"{}: no data for [{}], ignoring...\", stateType, stateFile.getAbsolutePath());\n+ logger.debug(\"{}: no data for [{}], ignoring...\", prefix, stateFile.getAbsolutePath());\n continue;\n }\n parser = XContentHelper.createParser(data, 0, data.length);\n- state = format.fromXContent(parser);\n+ state = fromXContent(parser);\n if (state == null) {\n- logger.debug(\"{}: no data for [{}], ignoring...\", stateType, stateFile.getAbsolutePath());\n+ logger.debug(\"{}: no data for [{}], ignoring...\", prefix, stateFile.getAbsolutePath());\n }\n }\n } else {\n- state = format.read(stateFile, version);\n- logger.trace(\"state version [{}] read from [{}]\", version, stateFile.getName());\n+ state = read(stateFile);\n+ logger.trace(\"state id [{}] read from [{}]\", id, stateFile.getName());\n }\n return state;\n } catch (Throwable e) {\n exceptions.add(e);\n- logger.debug(\"{}: failed to read [{}], ignoring...\", e, fileAndVersion.file.getAbsolutePath(), stateType);\n+ logger.debug(\"{}: failed to read [{}], ignoring...\", e, fileAndVersion.file.getAbsolutePath(), prefix);\n }\n }\n // if we reach this something went wrong\n@@ -297,41 +303,42 @@ public static <T> T loadLatestState(ESLogger logger, MetaDataStateFormat<T> form\n }\n \n /**\n- * Filters out all {@link FileAndVersion} instances with a different version than\n+ * Filters out all {@link org.elasticsearch.gateway.local.state.meta.MetaDataStateFormat.FileAndStateId} instances with a different id than\n * the given one.\n */\n- private static final class VersionAndLegacyPredicate implements Predicate<FileAndVersion> {\n- private final long version;\n+ private static final class StateIdAndLegacyPredicate implements Predicate<FileAndStateId> {\n+ private final long id;\n private final boolean legacy;\n \n- VersionAndLegacyPredicate(long version, boolean legacy) {\n- this.version = version;\n+ StateIdAndLegacyPredicate(long id, boolean legacy) {\n+ this.id = id;\n this.legacy = legacy;\n }\n \n @Override\n- public boolean apply(FileAndVersion input) {\n- return input.version == version && input.legacy == legacy;\n+ public boolean apply(FileAndStateId input) {\n+ return input.id == id && input.legacy == legacy;\n }\n }\n \n /**\n- * Internal struct-like class that holds the parsed state version, the file\n+ * Internal struct-like class that holds the parsed state id, the file\n * and a flag if the file is a legacy state ie. 
pre 1.5\n */\n- private static class FileAndVersion {\n+ private static class FileAndStateId {\n final File file;\n- final long version;\n+ final long id;\n final boolean legacy;\n \n- private FileAndVersion(File file, long version, boolean legacy) {\n+ private FileAndStateId(File file, long id, boolean legacy) {\n this.file = file;\n- this.version = version;\n+ this.id = id;\n this.legacy = legacy;\n }\n \n+ @Override\n public String toString() {\n- return \"[version:\" + version + \", legacy:\" + legacy + \", file:\" + file.getAbsolutePath() + \"]\";\n+ return \"[id:\" + id + \", legacy:\" + legacy + \", file:\" + file.getAbsolutePath() + \"]\";\n }\n }\n \n@@ -346,5 +353,4 @@ public static void deleteMetaState(Path... dataLocations) throws IOException {\n }\n XIOUtils.rm(stateDirectories);\n }\n-\n-}\n+}\n\\ No newline at end of file",
"filename": "src/main/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormat.java",
"status": "modified"
},
{
"diff": "@@ -47,14 +47,13 @@\n public class LocalGatewayShardsState extends AbstractComponent implements ClusterStateListener {\n \n private static final String SHARD_STATE_FILE_PREFIX = \"state-\";\n- private static final Pattern SHARD_STATE_FILE_PATTERN = Pattern.compile(SHARD_STATE_FILE_PREFIX + \"(\\\\d+)(\" + MetaDataStateFormat.STATE_FILE_EXTENSION + \")?\");\n private static final String PRIMARY_KEY = \"primary\";\n private static final String VERSION_KEY = \"version\";\n \n private final NodeEnvironment nodeEnv;\n \n private volatile Map<ShardId, ShardStateInfo> currentState = Maps.newHashMap();\n-\n+ public static final MetaDataStateFormat<ShardStateInfo> FORMAT = newShardStateInfoFormat();\n @Inject\n public LocalGatewayShardsState(Settings settings, NodeEnvironment nodeEnv, TransportNodesListGatewayStartedShards listGatewayStartedShards) throws Exception {\n super(settings);\n@@ -186,17 +185,16 @@ private Map<ShardId, ShardStateInfo> loadShardsStateInfo() throws Exception {\n }\n \n private ShardStateInfo loadShardStateInfo(ShardId shardId) {\n- return MetaDataStateFormat.loadLatestState(logger, newShardStateInfoFormat(false), SHARD_STATE_FILE_PATTERN, shardId.toString(), nodeEnv.shardLocations(shardId));\n+ return FORMAT.loadLatestState(logger, nodeEnv.shardLocations(shardId));\n }\n \n private void writeShardState(String reason, ShardId shardId, ShardStateInfo shardStateInfo, @Nullable ShardStateInfo previousStateInfo) throws Exception {\n logger.trace(\"{} writing shard state, reason [{}]\", shardId, reason);\n- final boolean deleteOldFiles = previousStateInfo != null && previousStateInfo.version != shardStateInfo.version;\n- newShardStateInfoFormat(deleteOldFiles).write(shardStateInfo, SHARD_STATE_FILE_PREFIX, shardStateInfo.version, nodeEnv.shardLocations(shardId));\n+ FORMAT.write(shardStateInfo, shardStateInfo.version, nodeEnv.shardLocations(shardId));\n }\n \n- private MetaDataStateFormat<ShardStateInfo> newShardStateInfoFormat(boolean deleteOldFiles) {\n- return new MetaDataStateFormat<ShardStateInfo>(XContentType.JSON, deleteOldFiles) {\n+ private static MetaDataStateFormat<ShardStateInfo> newShardStateInfoFormat() {\n+ return new MetaDataStateFormat<ShardStateInfo>(XContentType.JSON, SHARD_STATE_FILE_PREFIX) {\n \n @Override\n protected XContentBuilder newXContentBuilder(XContentType type, OutputStream stream) throws IOException {",
"filename": "src/main/java/org/elasticsearch/gateway/local/state/shards/LocalGatewayShardsState.java",
"status": "modified"
},
{
"diff": "@@ -17,8 +17,9 @@\n * under the License.\n */\n package org.elasticsearch.gateway.local.state.meta;\n-\n import com.carrotsearch.randomizedtesting.LifecycleScope;\n+import com.google.common.collect.Iterators;\n+\n import org.apache.lucene.codecs.CodecUtil;\n import org.apache.lucene.store.BaseDirectoryWrapper;\n import org.apache.lucene.store.ChecksumIndexInput;\n@@ -30,36 +31,38 @@\n import org.apache.lucene.util.TestRuleMarkFailure;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.gateway.local.state.shards.LocalGatewayShardsState;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Assert;\n import org.junit.Test;\n \n-import java.io.Closeable;\n-import java.io.File;\n-import java.io.FileOutputStream;\n-import java.io.IOException;\n-import java.io.RandomAccessFile;\n+import java.io.*;\n import java.net.URISyntaxException;\n-import java.net.URL;\n+import java.nio.ByteBuffer;\n+import java.nio.channels.FileChannel;\n+import java.nio.file.DirectoryStream;\n import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.nio.file.StandardOpenOption;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashSet;\n import java.util.List;\n import java.util.Set;\n \n-import static org.hamcrest.Matchers.anyOf;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n@@ -68,13 +71,16 @@\n import static org.hamcrest.Matchers.startsWith;\n \n public class MetaDataStateFormatTest extends ElasticsearchTestCase {\n-\n+ \n+ private Path newTempDirPath(){\n+ return newTempDir(LifecycleScope.TEST).toPath();\n+ }\n \n /**\n * Ensure we can read a pre-generated cluster state.\n */\n public void testReadClusterState() throws URISyntaxException, IOException {\n- final MetaDataStateFormat<MetaData> format = new MetaDataStateFormat<MetaData>(randomFrom(XContentType.values()), false) {\n+ final MetaDataStateFormat<MetaData> format = new MetaDataStateFormat<MetaData>(randomFrom(XContentType.values()), \"global-\") {\n \n @Override\n public void toXContent(XContentBuilder builder, MetaData state) throws IOException {\n@@ -86,272 +92,276 @@ public MetaData fromXContent(XContentParser parser) throws IOException {\n return MetaData.Builder.fromXContent(parser);\n }\n };\n- final URL resource = this.getClass().getResource(\"global-3.st\");\n+ Path tmp = newTempDirPath();\n+ final InputStream resource = this.getClass().getResourceAsStream(\"global-3.st\");\n assertThat(resource, notNullValue());\n- MetaData read = format.read(new File(resource.toURI()), 3);\n+ Path dst = tmp.resolve(\"global-3.st\");\n+ Files.copy(resource, dst);\n+ MetaData read = format.read(dst.toFile());\n assertThat(read, 
notNullValue());\n assertThat(read.uuid(), equalTo(\"3O1tDF1IRB6fSJ-GrTMUtg\"));\n // indices are empty since they are serialized separately\n }\n \n public void testReadWriteState() throws IOException {\n- File[] dirs = new File[randomIntBetween(1, 5)];\n+ Path[] dirs = new Path[randomIntBetween(1, 5)];\n for (int i = 0; i < dirs.length; i++) {\n- dirs[i] = newTempDir(LifecycleScope.TEST);\n+ dirs[i] = newTempDirPath();\n }\n- final boolean deleteOldFiles = randomBoolean();\n- Format format = new Format(randomFrom(XContentType.values()), deleteOldFiles);\n+ final long id = addDummyFiles(\"foo-\", dirs);\n+ Format format = new Format(randomFrom(XContentType.values()), \"foo-\");\n DummyState state = new DummyState(randomRealisticUnicodeOfCodepointLengthBetween(1, 1000), randomInt(), randomLong(), randomDouble(), randomBoolean());\n int version = between(0, Integer.MAX_VALUE/2);\n- format.write(state, \"foo-\", version, dirs);\n- for (File file : dirs) {\n- File[] list = file.listFiles();\n+ format.write(state, version, toFiles(dirs));\n+ for (Path file : dirs) {\n+ Path[] list = content(\"*\", file);\n assertEquals(list.length, 1);\n- assertThat(list[0].getName(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n- File stateDir = list[0];\n- assertThat(stateDir.isDirectory(), is(true));\n- list = stateDir.listFiles();\n+ assertThat(list[0].getFileName().toString(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n+ Path stateDir = list[0];\n+ assertThat(Files.isDirectory(stateDir), is(true));\n+ list = content(\"foo-*\", stateDir);\n assertEquals(list.length, 1);\n- assertThat(list[0].getName(), equalTo(\"foo-\" + version + \".st\"));\n- DummyState read = format.read(list[0], version);\n+ assertThat(list[0].getFileName().toString(), equalTo(\"foo-\" + id + \".st\"));\n+ DummyState read = format.read(list[0].toFile());\n assertThat(read, equalTo(state));\n }\n final int version2 = between(version, Integer.MAX_VALUE);\n DummyState state2 = new DummyState(randomRealisticUnicodeOfCodepointLengthBetween(1, 1000), randomInt(), randomLong(), randomDouble(), randomBoolean());\n- format.write(state2, \"foo-\", version2, dirs);\n+ format.write(state2, version2, toFiles(dirs));\n \n- for (File file : dirs) {\n- File[] list = file.listFiles();\n+ for (Path file : dirs) {\n+ Path[] list = content(\"*\", file);\n assertEquals(list.length, 1);\n- assertThat(list[0].getName(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n- File stateDir = list[0];\n- assertThat(stateDir.isDirectory(), is(true));\n- list = stateDir.listFiles();\n- assertEquals(list.length, deleteOldFiles ? 
1 : 2);\n- if (deleteOldFiles) {\n- assertThat(list[0].getName(), equalTo(\"foo-\" + version2 + \".st\"));\n- DummyState read = format.read(list[0], version2);\n- assertThat(read, equalTo(state2));\n- } else {\n- assertThat(list[0].getName(), anyOf(equalTo(\"foo-\" + version + \".st\"), equalTo(\"foo-\" + version2 + \".st\")));\n- assertThat(list[1].getName(), anyOf(equalTo(\"foo-\" + version + \".st\"), equalTo(\"foo-\" + version2 + \".st\")));\n- DummyState read = format.read(new File(stateDir, \"foo-\" + version2 + \".st\"), version2);\n- assertThat(read, equalTo(state2));\n- read = format.read(new File(stateDir, \"foo-\" + version + \".st\"), version);\n- assertThat(read, equalTo(state));\n- }\n+ assertThat(list[0].getFileName().toString(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n+ Path stateDir = list[0];\n+ assertThat(Files.isDirectory(stateDir), is(true));\n+ list = content(\"foo-*\", stateDir);\n+ assertEquals(list.length,1);\n+ assertThat(list[0].getFileName().toString(), equalTo(\"foo-\"+ (id+1) + \".st\"));\n+ DummyState read = format.read(list[0].toFile());\n+ assertThat(read, equalTo(state2));\n \n }\n }\n \n @Test\n public void testVersionMismatch() throws IOException {\n- File[] dirs = new File[randomIntBetween(1, 5)];\n+ Path[] dirs = new Path[randomIntBetween(1, 5)];\n for (int i = 0; i < dirs.length; i++) {\n- dirs[i] = newTempDir(LifecycleScope.TEST);\n+ dirs[i] = newTempDirPath();\n }\n- final boolean deleteOldFiles = randomBoolean();\n- Format format = new Format(randomFrom(XContentType.values()), deleteOldFiles);\n+ final long id = addDummyFiles(\"foo-\", dirs);\n+\n+ Format format = new Format(randomFrom(XContentType.values()), \"foo-\");\n DummyState state = new DummyState(randomRealisticUnicodeOfCodepointLengthBetween(1, 1000), randomInt(), randomLong(), randomDouble(), randomBoolean());\n int version = between(0, Integer.MAX_VALUE/2);\n- format.write(state, \"foo-\", version, dirs);\n- for (File file : dirs) {\n- File[] list = file.listFiles();\n+ format.write(state, version, toFiles(dirs));\n+ for (Path file : dirs) {\n+ Path[] list = content(\"*\", file);\n assertEquals(list.length, 1);\n- assertThat(list[0].getName(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n- File stateDir = list[0];\n- assertThat(stateDir.isDirectory(), is(true));\n- list = stateDir.listFiles();\n+ assertThat(list[0].getFileName().toString(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n+ Path stateDir = list[0];\n+ assertThat(Files.isDirectory(stateDir), is(true));\n+ list = content(\"foo-*\", stateDir);\n assertEquals(list.length, 1);\n- assertThat(list[0].getName(), equalTo(\"foo-\" + version + \".st\"));\n- try {\n- format.read(list[0], between(version+1, Integer.MAX_VALUE));\n- fail(\"corruption expected\");\n- } catch (CorruptStateException ex) {\n- // success\n- }\n- DummyState read = format.read(list[0], version);\n+ assertThat(list[0].getFileName().toString(), equalTo(\"foo-\" + id + \".st\"));\n+ DummyState read = format.read(list[0].toFile());\n assertThat(read, equalTo(state));\n }\n }\n \n public void testCorruption() throws IOException {\n- File[] dirs = new File[randomIntBetween(1, 5)];\n+ Path[] dirs = new Path[randomIntBetween(1, 5)];\n for (int i = 0; i < dirs.length; i++) {\n- dirs[i] = newTempDir(LifecycleScope.TEST);\n+ dirs[i] = newTempDirPath();\n }\n- final boolean deleteOldFiles = randomBoolean();\n- Format format = new Format(randomFrom(XContentType.values()), deleteOldFiles);\n+ final long id = addDummyFiles(\"foo-\", dirs);\n+ Format format = new 
Format(randomFrom(XContentType.values()), \"foo-\");\n DummyState state = new DummyState(randomRealisticUnicodeOfCodepointLengthBetween(1, 1000), randomInt(), randomLong(), randomDouble(), randomBoolean());\n int version = between(0, Integer.MAX_VALUE/2);\n- format.write(state, \"foo-\", version, dirs);\n- for (File file : dirs) {\n- File[] list = file.listFiles();\n+ format.write(state, version, toFiles(dirs));\n+ for (Path file : dirs) {\n+ Path[] list = content(\"*\", file);\n assertEquals(list.length, 1);\n- assertThat(list[0].getName(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n- File stateDir = list[0];\n- assertThat(stateDir.isDirectory(), is(true));\n- list = stateDir.listFiles();\n+ assertThat(list[0].getFileName().toString(), equalTo(MetaDataStateFormat.STATE_DIR_NAME));\n+ Path stateDir = list[0];\n+ assertThat(Files.isDirectory(stateDir), is(true));\n+ list = content(\"foo-*\", stateDir);\n assertEquals(list.length, 1);\n- assertThat(list[0].getName(), equalTo(\"foo-\" + version + \".st\"));\n- DummyState read = format.read(list[0], version);\n+ assertThat(list[0].getFileName().toString(), equalTo(\"foo-\" + id + \".st\"));\n+ DummyState read = format.read(list[0].toFile());\n assertThat(read, equalTo(state));\n // now corrupt it\n corruptFile(list[0], logger);\n try {\n- format.read(list[0], version);\n+ format.read(list[0].toFile());\n fail(\"corrupted file\");\n } catch (CorruptStateException ex) {\n // expected\n }\n }\n }\n \n- public static void corruptFile(File file, ESLogger logger) throws IOException {\n- File fileToCorrupt = file;\n- try (final SimpleFSDirectory dir = new SimpleFSDirectory(fileToCorrupt.getParentFile())) {\n+ public static void corruptFile(Path file, ESLogger logger) throws IOException {\n+ Path fileToCorrupt = file;\n+ try (final SimpleFSDirectory dir = new SimpleFSDirectory(fileToCorrupt.getParent().toFile())) {\n long checksumBeforeCorruption;\n- try (IndexInput input = dir.openInput(fileToCorrupt.getName(), IOContext.DEFAULT)) {\n+ try (IndexInput input = dir.openInput(fileToCorrupt.getFileName().toString(), IOContext.DEFAULT)) {\n checksumBeforeCorruption = CodecUtil.retrieveChecksum(input);\n }\n- try (RandomAccessFile raf = new RandomAccessFile(fileToCorrupt, \"rw\")) {\n- raf.seek(randomIntBetween(0, (int)Math.min(Integer.MAX_VALUE, raf.length()-1)));\n- long filePointer = raf.getFilePointer();\n- byte b = raf.readByte();\n- raf.seek(filePointer);\n- raf.writeByte(~b);\n- raf.getFD().sync();\n- logger.debug(\"Corrupting file {} -- flipping at position {} from {} to {} \", fileToCorrupt.getName(), filePointer, Integer.toHexString(b), Integer.toHexString(~b));\n+ try (FileChannel raf = FileChannel.open(fileToCorrupt, StandardOpenOption.READ, StandardOpenOption.WRITE)) {\n+ raf.position(randomIntBetween(0, (int)Math.min(Integer.MAX_VALUE, raf.size()-1)));\n+ long filePointer = raf.position();\n+ ByteBuffer bb = ByteBuffer.wrap(new byte[1]);\n+ raf.read(bb);\n+\n+ bb.flip();\n+ byte oldValue = bb.get(0);\n+ byte newValue = (byte) ~oldValue;\n+ bb.put(0, newValue);\n+ raf.write(bb, filePointer);\n+ logger.debug(\"Corrupting file {} -- flipping at position {} from {} to {} \", fileToCorrupt.getFileName().toString(), filePointer, Integer.toHexString(oldValue), Integer.toHexString(newValue));\n }\n- long checksumAfterCorruption;\n- long actualChecksumAfterCorruption;\n- try (ChecksumIndexInput input = dir.openChecksumInput(fileToCorrupt.getName(), IOContext.DEFAULT)) {\n- assertThat(input.getFilePointer(), is(0l));\n- input.seek(input.length() - 
8); // one long is the checksum... 8 bytes\n- checksumAfterCorruption = input.getChecksum();\n- actualChecksumAfterCorruption = input.readLong();\n- }\n- StringBuilder msg = new StringBuilder();\n- msg.append(\"Checksum before: [\").append(checksumBeforeCorruption).append(\"]\");\n- msg.append(\" after: [\").append(checksumAfterCorruption).append(\"]\");\n- msg.append(\" checksum value after corruption: \").append(actualChecksumAfterCorruption).append(\"]\");\n- msg.append(\" file: \").append(fileToCorrupt.getName()).append(\" length: \").append(dir.fileLength(fileToCorrupt.getName()));\n- logger.debug(msg.toString());\n- assumeTrue(\"Checksum collision - \" + msg.toString(),\n- checksumAfterCorruption != checksumBeforeCorruption // collision\n- || actualChecksumAfterCorruption != checksumBeforeCorruption); // checksum corrupted\n+ long checksumAfterCorruption;\n+ long actualChecksumAfterCorruption;\n+ try (ChecksumIndexInput input = dir.openChecksumInput(fileToCorrupt.getFileName().toString(), IOContext.DEFAULT)) {\n+ assertThat(input.getFilePointer(), is(0l));\n+ input.seek(input.length() - 8); // one long is the checksum... 8 bytes\n+ checksumAfterCorruption = input.getChecksum();\n+ actualChecksumAfterCorruption = input.readLong();\n+ }\n+ StringBuilder msg = new StringBuilder();\n+ msg.append(\"Checksum before: [\").append(checksumBeforeCorruption).append(\"]\");\n+ msg.append(\" after: [\").append(checksumAfterCorruption).append(\"]\");\n+ msg.append(\" checksum value after corruption: \").append(actualChecksumAfterCorruption).append(\"]\");\n+ msg.append(\" file: \").append(fileToCorrupt.getFileName().toString()).append(\" length: \").append(dir.fileLength(fileToCorrupt.getFileName().toString()));\n+ logger.debug(msg.toString());\n+ assumeTrue(\"Checksum collision - \" + msg.toString(),\n+ checksumAfterCorruption != checksumBeforeCorruption // collision\n+ || actualChecksumAfterCorruption != checksumBeforeCorruption); // checksum corrupted\n }\n }\n \n // If the latest version doesn't use the legacy format while previous versions do, then fail hard\n public void testLatestVersionDoesNotUseLegacy() throws IOException {\n final ToXContent.Params params = ToXContent.EMPTY_PARAMS;\n- MetaDataStateFormat<MetaData> format = LocalGatewayMetaState.globalStateFormat(randomFrom(XContentType.values()), params, randomBoolean());\n- final File[] dirs = new File[2];\n- dirs[0] = newTempDir(LifecycleScope.TEST);\n- dirs[1] = newTempDir(LifecycleScope.TEST);\n- for (File dir : dirs) {\n- Files.createDirectories(new File(dir, MetaDataStateFormat.STATE_DIR_NAME).toPath());\n- }\n- final File dir1 = randomFrom(dirs);\n+ MetaDataStateFormat<MetaData> format = LocalGatewayMetaState.globalStateFormat(randomFrom(XContentType.values()), params);\n+ final Path[] dirs = new Path[2];\n+ dirs[0] = newTempDirPath();\n+ dirs[1] = newTempDirPath();\n+ for (Path dir : dirs) {\n+ Files.createDirectories(dir.resolve(MetaDataStateFormat.STATE_DIR_NAME));\n+ }\n+ final Path dir1 = randomFrom(dirs);\n final int v1 = randomInt(10);\n // write a first state file in the new format\n- format.write(randomMeta(), LocalGatewayMetaState.GLOBAL_STATE_FILE_PREFIX, v1, dir1);\n+ format.write(randomMeta(), v1, toFiles(dir1));\n \n // write older state files in the old format but with a newer version\n final int numLegacyFiles = randomIntBetween(1, 5);\n for (int i = 0; i < numLegacyFiles; ++i) {\n- final File dir2 = randomFrom(dirs);\n+ final Path dir2 = randomFrom(dirs);\n final int v2 = v1 + 1 + randomInt(10);\n- try 
(XContentBuilder xcontentBuilder = XContentFactory.contentBuilder(format.format(), new FileOutputStream(new File(new File(dir2, MetaDataStateFormat.STATE_DIR_NAME), LocalGatewayMetaState.GLOBAL_STATE_FILE_PREFIX + v2)))) {\n+ try (XContentBuilder xcontentBuilder = XContentFactory.contentBuilder(format.format(), Files.newOutputStream(dir2.resolve(MetaDataStateFormat.STATE_DIR_NAME).resolve(LocalGatewayMetaState.GLOBAL_STATE_FILE_PREFIX + v2)))) {\n xcontentBuilder.startObject();\n MetaData.Builder.toXContent(randomMeta(), xcontentBuilder, params);\n xcontentBuilder.endObject();\n }\n }\n \n try {\n- MetaDataStateFormat.loadLatestState(logger, format, LocalGatewayMetaState.GLOBAL_STATE_FILE_PATTERN, \"foobar\", dirs);\n+ format.loadLatestState(logger, toFiles(dirs));\n fail(\"latest version can not be read\");\n } catch (ElasticsearchIllegalStateException ex) {\n assertThat(ex.getMessage(), startsWith(\"Could not find a state file to recover from among \"));\n }\n+ // write the next state file in the new format and ensure it get's a higher ID\n+ final MetaData meta = randomMeta();\n+ format.write(meta, v1, toFiles(dirs));\n+ final MetaData metaData = format.loadLatestState(logger, toFiles(dirs));\n+ assertEquals(meta.uuid(), metaData.uuid());\n+ final Path path = randomFrom(dirs);\n+ final Path[] files = files(path.resolve(\"_state\"));\n+ assertEquals(1, files.length);\n+ assertEquals(\"global-\" + format.findMaxStateId(\"global-\", toFiles(dirs)) + \".st\", files[0].getFileName().toString());\n+\n }\n \n // If both the legacy and the new format are available for the latest version, prefer the new format\n public void testPrefersNewerFormat() throws IOException {\n final ToXContent.Params params = ToXContent.EMPTY_PARAMS;\n- MetaDataStateFormat<MetaData> format = LocalGatewayMetaState.globalStateFormat(randomFrom(XContentType.values()), params, randomBoolean());\n- final File[] dirs = new File[2];\n- dirs[0] = newTempDir(LifecycleScope.TEST);\n- dirs[1] = newTempDir(LifecycleScope.TEST);\n- for (File dir : dirs) {\n- Files.createDirectories(new File(dir, MetaDataStateFormat.STATE_DIR_NAME).toPath());\n- }\n- final File dir1 = randomFrom(dirs);\n+ MetaDataStateFormat<MetaData> format = LocalGatewayMetaState.globalStateFormat(randomFrom(XContentType.values()), params);\n+ final Path[] dirs = new Path[2];\n+ dirs[0] = newTempDirPath();\n+ dirs[1] = newTempDirPath();\n+ for (Path dir : dirs) {\n+ Files.createDirectories(dir.resolve(MetaDataStateFormat.STATE_DIR_NAME));\n+ }\n final long v = randomInt(10);\n \n MetaData meta = randomMeta();\n String uuid = meta.uuid();\n \n // write a first state file in the old format\n- final File dir2 = randomFrom(dirs);\n+ final Path dir2 = randomFrom(dirs);\n MetaData meta2 = randomMeta();\n assertFalse(meta2.uuid().equals(uuid));\n- try (XContentBuilder xcontentBuilder = XContentFactory.contentBuilder(format.format(), new FileOutputStream(new File(new File(dir2, MetaDataStateFormat.STATE_DIR_NAME), LocalGatewayMetaState.GLOBAL_STATE_FILE_PREFIX + v)))) {\n+ try (XContentBuilder xcontentBuilder = XContentFactory.contentBuilder(format.format(), Files.newOutputStream(dir2.resolve(MetaDataStateFormat.STATE_DIR_NAME).resolve(LocalGatewayMetaState.GLOBAL_STATE_FILE_PREFIX + v)))) {\n xcontentBuilder.startObject();\n MetaData.Builder.toXContent(randomMeta(), xcontentBuilder, params);\n xcontentBuilder.endObject();\n }\n \n // write a second state file in the new format but with the same version\n- format.write(meta, LocalGatewayMetaState.GLOBAL_STATE_FILE_PREFIX, 
v, dir1);\n+ format.write(meta, v, toFiles(dirs));\n \n- MetaData state = MetaDataStateFormat.loadLatestState(logger, format, LocalGatewayMetaState.GLOBAL_STATE_FILE_PATTERN, \"foobar\", dirs);\n- assertThat(state.uuid(), equalTo(uuid));\n+ MetaData state = format.loadLatestState(logger, toFiles(dirs));\n+ final Path path = randomFrom(dirs);\n+ assertTrue(Files.exists(path.resolve(MetaDataStateFormat.STATE_DIR_NAME).resolve(\"global-\" + (v+1) + \".st\")));\n+ assertEquals(state.uuid(), uuid);\n }\n \n @Test\n public void testLoadState() throws IOException {\n final ToXContent.Params params = ToXContent.EMPTY_PARAMS;\n- final File[] dirs = new File[randomIntBetween(1, 5)];\n+ final Path[] dirs = new Path[randomIntBetween(1, 5)];\n int numStates = randomIntBetween(1, 5);\n int numLegacy = randomIntBetween(0, numStates);\n List<MetaData> meta = new ArrayList<>();\n for (int i = 0; i < numStates; i++) {\n meta.add(randomMeta());\n }\n- Set<File> corruptedFiles = new HashSet<>();\n- MetaDataStateFormat<MetaData> format = LocalGatewayMetaState.globalStateFormat(randomFrom(XContentType.values()), params, randomBoolean());\n+ Set<Path> corruptedFiles = new HashSet<>();\n+ MetaDataStateFormat<MetaData> format = LocalGatewayMetaState.globalStateFormat(randomFrom(XContentType.values()), params);\n for (int i = 0; i < dirs.length; i++) {\n- dirs[i] = newTempDir(LifecycleScope.TEST);\n- Files.createDirectories(new File(dirs[i], MetaDataStateFormat.STATE_DIR_NAME).toPath());\n+ dirs[i] = newTempDirPath();\n+ Files.createDirectories(dirs[i].resolve(MetaDataStateFormat.STATE_DIR_NAME));\n for (int j = 0; j < numLegacy; j++) {\n XContentType type = format.format();\n if (randomBoolean() && (j < numStates - 1 || dirs.length > 0 && i != 0)) {\n- File file = new File(new File(dirs[i], MetaDataStateFormat.STATE_DIR_NAME), \"global-\"+j);\n- Files.createFile(file.toPath()); // randomly create 0-byte files -- there is extra logic to skip them\n+ Path file = dirs[i].resolve(MetaDataStateFormat.STATE_DIR_NAME).resolve(\"global-\"+j);\n+ Files.createFile(file); // randomly create 0-byte files -- there is extra logic to skip them\n } else {\n- try (XContentBuilder xcontentBuilder = XContentFactory.contentBuilder(type, new FileOutputStream(new File(new File(dirs[i], MetaDataStateFormat.STATE_DIR_NAME), \"global-\" + j)))) {\n+ try (XContentBuilder xcontentBuilder = XContentFactory.contentBuilder(type, Files.newOutputStream(dirs[i].resolve(MetaDataStateFormat.STATE_DIR_NAME).resolve(\"global-\" + j)))) {\n xcontentBuilder.startObject();\n MetaData.Builder.toXContent(meta.get(j), xcontentBuilder, params);\n xcontentBuilder.endObject();\n }\n }\n }\n for (int j = numLegacy; j < numStates; j++) {\n- format.write(meta.get(j), LocalGatewayMetaState.GLOBAL_STATE_FILE_PREFIX, j, dirs[i]);\n+ format.write(meta.get(j), j, dirs[i].toFile());\n if (randomBoolean() && (j < numStates - 1 || dirs.length > 0 && i != 0)) { // corrupt a file that we do not necessarily need here....\n- File file = new File(new File(dirs[i], MetaDataStateFormat.STATE_DIR_NAME), \"global-\" + j + \".st\");\n+ Path file = dirs[i].resolve(MetaDataStateFormat.STATE_DIR_NAME).resolve(\"global-\" + j + \".st\");\n corruptedFiles.add(file);\n MetaDataStateFormatTest.corruptFile(file, logger);\n }\n }\n \n }\n- List<File> dirList = Arrays.asList(dirs);\n+ List<Path> dirList = Arrays.asList(dirs);\n Collections.shuffle(dirList, getRandom());\n- MetaData loadedMetaData = MetaDataStateFormat.loadLatestState(logger, format, 
LocalGatewayMetaState.GLOBAL_STATE_FILE_PATTERN, \"foobar\", dirList.toArray(new File[0]));\n+ MetaData loadedMetaData = format.loadLatestState(logger, toFiles(dirList.toArray(new Path[0])));\n MetaData latestMetaData = meta.get(numStates-1);\n assertThat(loadedMetaData.uuid(), not(equalTo(\"_na_\")));\n assertThat(loadedMetaData.uuid(), equalTo(latestMetaData.uuid()));\n@@ -368,14 +378,14 @@ public void testLoadState() throws IOException {\n // now corrupt all the latest ones and make sure we fail to load the state\n if (numStates > numLegacy) {\n for (int i = 0; i < dirs.length; i++) {\n- File file = new File(new File(dirs[i], MetaDataStateFormat.STATE_DIR_NAME), \"global-\" + (numStates-1) + \".st\");\n+ Path file = dirs[i].resolve(MetaDataStateFormat.STATE_DIR_NAME).resolve(\"global-\" + (numStates-1) + \".st\");\n if (corruptedFiles.contains(file)) {\n continue;\n }\n MetaDataStateFormatTest.corruptFile(file, logger);\n }\n try {\n- MetaDataStateFormat.loadLatestState(logger, format, LocalGatewayMetaState.GLOBAL_STATE_FILE_PATTERN, \"foobar\", dirList.toArray(new File[0]));\n+ format.loadLatestState(logger, toFiles(dirList.toArray(new Path[0])));\n fail(\"latest version can not be read\");\n } catch (ElasticsearchException ex) {\n assertThat(ex.getCause(), instanceOf(CorruptStateException.class));\n@@ -402,8 +412,8 @@ private IndexMetaData.Builder indexBuilder(String index) throws IOException {\n \n private class Format extends MetaDataStateFormat<DummyState> {\n \n- Format(XContentType format, boolean deleteOldFiles) {\n- super(format, deleteOldFiles);\n+ Format(XContentType format, String prefix) {\n+ super(format, prefix);\n }\n \n @Override\n@@ -498,7 +508,7 @@ public DummyState parse(XContentParser parser) throws IOException {\n while(parser.nextToken() != XContentParser.Token.END_OBJECT) {\n XContentParser.Token token = parser.currentToken();\n if (token == XContentParser.Token.FIELD_NAME) {\n- fieldName = parser.currentName();\n+ fieldName = parser.currentName();\n } else if (token == XContentParser.Token.VALUE_STRING) {\n assertTrue(\"string\".equals(fieldName));\n string = parser.text();\n@@ -554,4 +564,45 @@ public void close() throws IOException {\n }\n }\n }\n+\n+ public Path[] content(String glob, Path dir) throws IOException {\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, glob)) {\n+ return Iterators.toArray(stream.iterator(), Path.class);\n+ }\n+ }\n+\n+ public long addDummyFiles(String prefix, Path... paths) throws IOException {\n+ int realId = -1;\n+ for (Path path : paths) {\n+ if (randomBoolean()) {\n+ Path stateDir = path.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ Files.createDirectories(stateDir);\n+ String actualPrefix = prefix;\n+ int id = randomIntBetween(0, 10);\n+ if (randomBoolean()) {\n+ actualPrefix = \"dummy-\";\n+ } else {\n+ realId = Math.max(realId, id);\n+ }\n+ try (OutputStream stream = Files.newOutputStream(stateDir.resolve(actualPrefix + id + MetaDataStateFormat.STATE_FILE_EXTENSION))) {\n+ stream.write(0);\n+ }\n+ }\n+ }\n+ return realId + 1;\n+ }\n+\n+ private static File[] toFiles(Path... files) {\n+ File[] paths = new File[files.length];\n+ for (int i = 0; i < files.length; i++) {\n+ paths[i] = files[i].toFile();\n+ }\n+ return paths;\n+ }\n+\n+ public static Path[] files(Path directory) throws IOException {\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(directory)) {\n+ return Iterators.toArray(stream.iterator(), Path.class);\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormatTest.java",
"status": "modified"
}
]
} |
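The test diff above lists on-disk state files with `Files.newDirectoryStream` and a glob before loading the latest generation. A minimal standalone sketch of that listing pattern, using only JDK NIO (the class and method names below are illustrative, not Elasticsearch code):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: collect state files matching a glob such as "global-*.st",
// mirroring the content(...) and files(...) helpers in the test diff above.
final class StateFileLister {
    static List<Path> list(Path stateDir, String glob) throws IOException {
        List<Path> matches = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(stateDir, glob)) {
            for (Path file : stream) {
                matches.add(file);
            }
        }
        return matches;
    }
}
```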
{
"body": "Hi,\n\nI've had some trouble importing some geo_shapes in 1.4+, I get the exception:\n`ElasticsearchParseException: Invaild shape: Hole is not within polygon`\n\nI got an example. Now, I'm not an expert in GIS, but:\n- The area looks like this: http://i.imgur.com/oDzSeDQ.png with one vertex of the hole in the same point as one vertex of the polygon.\n- The data is from postgis' ST_AsGeoJSON and according to ST_isValid it's a valid area.\n- **Every version of ES from 0.90 to 1.3.x takes the area just fine**. The problems happens after 1.4.0.\n\nHere's the gist with all the data to reproduce it:\nhttps://gist.github.com/bictorman/064a777499719ad66194\n",
"comments": [
{
"body": "@nknize I think this problem relates to the code to deal with polygons around the international date line. We rely on the fact that a hole is actually a hole (i.e. is within the polygon) to be able to place it properly. By definition a hole which intersects the polygons edges is not actually a hole but a modification to the external shape of the main polygon. Maybe we need to add some logic to rewrite the polygon to cut out the hole from the main polygon shape if it intersects with the edge?\n",
"created_at": "2015-02-06T10:16:14Z"
},
{
"body": "We've been hitting the same issues with polygons that are not close to the dateline and don't have interior rings that cross an exterior ring.\n\n```\n#!/bin/sh\n\ncurl -XDELETE http://localhost:9200/test_index\n\ncurl -XPUT http://localhost:9200/test_index/\n\ncurl -XPUT http://localhost:9200/test_index/test_type/_mapping -d '\n {\n \"properties\": {\n \"geom\": {\n \"type\": \"geo_shape\",\n \"tree\": \"quadtree\",\n \"tree_levels\": \"26\"\n }\n }\n }'\n\ncurl -XGET localhost:9200/test_index/test_type/_search?pretty -d \\\n '{\"query\": {\"filtered\": {\"filter\": {\"bool\": {\"must\": [{\"geo_shape\": {\"geom\": {\"shape\": {\"type\": \"polygon\", \"coordinates\": [[[-150.0, -30.0], [-150.0, 30.0], [150.0, 30.0], [150.0, -30.0], [-150.0, -30.0]], [[0.01, 0.01], [0.02, 0.02], [0.01, 0.02], [0.01, 0.01]]]}}}}, \"from\": 0, \"size\": 50}'\n```\n\nHere's the polygon geometry from that query: http://bl.ocks.org/anonymous/raw/0fb706605fe9e53c52f8/\n\nIf the exterior ring is changed so that it spans from -90 (west) to 90 (east) longitude, the query succeeds. When the exterior ring spans more than 180 degrees, the query fails - even if the exterior ring is not near the dateline. \n",
"created_at": "2015-03-20T19:40:01Z"
},
{
"body": "Are you by chance assuming that rings spanning more than 180 degrees need to be \"inverted\"? In my opinion, you're best off pretending the world is flat and accepting that 0, 0 is inside a polygon spanning -91 (west) to 91 (east) - regardless of the winding order of exterior rings.\n",
"created_at": "2015-03-20T19:54:45Z"
},
{
"body": "I should debug to confirm, but it feels like you may be testing whether an interior ring is inside an exterior ring with different logic than the intersect query. Perhaps the query above fails because you decide that the interior ring (a small ring near 0, 0) is not inside the exterior ring. Maybe that is because you assume that because the exterior ring spans more than 180 degrees longitude you need to invert it.\n\nBut a similar query without the interior ring returns results that are inside the exterior ring in the same sense that the interior ring is inside (i.e. without assuming that the > 180 spanning exterior ring implies a ring that crosses the dateline).\n",
"created_at": "2015-03-20T20:02:15Z"
},
{
"body": "Hey @bictorman and Tim, thanks for the update. I'll bump this up on the priority list to review before the next release. After looking at the provided examples it appears to be working as expected. We comply with OGC SFA spec (https://portal.opengeospatial.org/files/?artifact_id=829 see section 2.1.10 - specifically figure 2.5). \"Figure 2.5 shows some examples of geometric objects that violate the above assertions and are not representable as single instances of Polygon.\" That is, holes should not intersect edges (these need to be converted to multi-polygon or multiple polygon objects).\n\nJust to make sure there isn't some other latent bug, which version are you working with? Prior to 1.4.3 the ring orientation logic was incorrectly computed for polys > 180 (leading to behavior Tim described). That was corrected in 1.4.3+ to follow the OGC SFF right-hand rule along with an added `orientation` option to explicitly define orientation behavior. Per the right-hand rule, the polygon provided crosses the dateline, so the hole is outside the poly. A simple fix for this case should be to add \n\n```\n\"orientation\": \"left\"\n```\n\nAnd the ring vertices will be clockwise ordered. \n",
"created_at": "2015-03-21T01:55:22Z"
},
{
"body": "Thanks for the additional detail @nknize. I'll confirm the version we are using and try the orientation property. One concern is that GeoJSON doesn't specify the winding order and OGR (and more) don't enforce a winding order when serializing. Would it be possible to provide an option to disable the winding order check for polygons > 180? I like the behavior we're getting for polygons that don't span 180 and would find it convenient to force the same for all polygons (dateline be damned).\n",
"created_at": "2015-03-23T17:04:26Z"
},
{
"body": "We're running 1.4.4. I'll try to come up with some simple test cases to ensure we can get consistent behavior (for polygons with and without holes, regardless of span).\n",
"created_at": "2015-03-23T17:12:17Z"
},
{
"body": "@tschaub We can certainly add an ignore option to the orientation and provide the \"old\" behavior if that's desired (the default would remain \"right\"). We ultimately chose OGC compliance (despite the GeoJSON indifference) as a solution to the ongoing issues surrounding dateline crossing polys. If you don't have any dateline crossing polys (or already have application logic to handle it) then an `ignore` option certainly makes sense. Would this option help you out?\n\nIn the meantime, 1.4.4 supports the `orientation` option which can be applied to the mapping and/or on a per document basis. This way you can have a shape field default to the left-hand rule but have specific documents processed using the right hand rule. In the above case you'd pass this particular document with the `orientation: left` parameter and all is well.\n",
"created_at": "2015-03-23T21:03:39Z"
},
{
"body": "@bictorman There are 2 discussions here (latest surrounding winding order). I want to make sure we answered your original question/issue. \n\nWe comply with OGC SFA spec (https://portal.opengeospatial.org/files/?artifact_id=829 see section 2.1.10 - specifically figure 2.5). \"Figure 2.5 shows some examples of geometric objects that violate the above assertions and are not representable as single instances of Polygon.\" That is, holes should not intersect edges. So you'll need to convert those geometric objects to multi-polygons or multiple polygon objects.\n\nLet us know if there are any other questions. If not I'll close this as a non-issue. The second discussion (winding order) has been moved to #10227 \n",
"created_at": "2015-03-24T14:08:50Z"
},
{
"body": "@nknize Understood. Seems like I was misled by the behaviour of PostGIS and previous ES versions. We'll try to fix the data, thanks for the answer.\n",
"created_at": "2015-03-24T15:45:32Z"
},
{
"body": "Closing as non-issue\n",
"created_at": "2015-03-24T19:13:58Z"
},
{
"body": "@nknize A lot of other software out there (PostGIS, GEOS, JTS, etc.) interprets the OGC spec to allow an interior ring to touch the outside of a boundary, provided that it only touches at one point. (If only OGC had included a drawing of this simple case in the spec!) Does ES have a different interpretation of the spec than PostGIS etc, and if so, how is one to represent a polygon like the one @bictorman posted in ES? I'm familiar with the ESRI/ArcGIS way of representing this -- as an exterior ring that loops back on itself -- but I've always understood that to be different from the OGC way.\n\nSome more examples of OGC-valid polygons, including the case @bictorman posted, are found in the PostGIS docs:\nhttp://postgis.net/docs/using_postgis_dbmanagement.html#OGC_Validity\n",
"created_at": "2015-03-25T18:54:32Z"
},
{
"body": "@dbaston While I cannot comment on OGC compliance for other software packages your question did expose an issue with the ShapeBuilder incorrectly counting the closed coordinate of the interior ring as a violation of 2.1.10 assertion 3 (a conflict between the geojson parser and the OGC builder counting that point as 3 intersections). I'm going to reopen this issue to correct the closed coordinate issue. \n\nHopefully @bictorman was able to adjust his polygon accordingly. If not a working coordinate order can be found at the following GIST: https://gist.github.com/nknize/7a5d2bf9a9e654e1cbb5\n\nAs to the question of ES interpretation of the spec, its implemented to follow the spec as closely as possible per collaboration with the OGC and OSGeo community. Variations are often discovered by way of the mailing lists and issue discussions such as these (which keep the implementation in check). Thanks again for revisiting this. The feedback is greatly appreciated.\n",
"created_at": "2015-03-25T20:50:28Z"
},
{
"body": "+1\n",
"created_at": "2015-11-06T11:31:14Z"
}
],
"number": 9511,
"title": "Hole is not within polygon"
} | {
"body": "OGC SFA 2.1.10 assertion 3 (https://portal.opengeospatial.org/files/?artifact_id=829 ) allows interior boundaries to touch exterior boundaries provided they intersect at a single point. Issue #9511 provides an example where a valid shape is incorrectly interpreted as invalid (a false violation of assertion 3). When the intersecting point appears as the first and last coordinate of the interior boundary in a polygon, the ShapeBuilder incorrectly counted this as multiple intersecting vertices. \n\nThe fix required a little more than just a logic check. Passing the duplicate vertices resulted in a connected component in the edge graph causing a false invalid self crossing polygon. This required additional logic to the edge assignment in order to correctly segment the connected components. Finally, an additional hole validation has been added along with proper unit tests for testing valid and invalid conditions (including dateline crossing polys).\n\ncloses #9511\n",
"number": 10332,
"review_comments": [
{
"body": "Why the intermediate hash set? The hash set will have random iteration order right? So then the retainAll below becomes O(N^2)? Why not just copy the array lists, sort, and then call retainAll?\n",
"created_at": "2015-03-31T06:34:21Z"
},
{
"body": "Actually, it looks like retainAll always uses contains (I was thinking it could be smarter with a sorted list, but of course it has no way to know it is sorted!). So I would copy, sort, then just iterate both in step?\n",
"created_at": "2015-03-31T06:37:21Z"
},
{
"body": "I know this was already here, but could you rename the variable shift to something that doesn't collide with the function name? Maybe datelineShift?\n",
"created_at": "2015-03-31T06:54:51Z"
},
{
"body": "Or datelineOffset?\n",
"created_at": "2015-03-31T06:55:02Z"
},
{
"body": "I noticed HashSet inherits retainAll from AbstractCollection. So I eliminated the intermediate ArrayList and used it directly to avoid both unnecessary memory use and having to sort.\n",
"created_at": "2015-03-31T12:44:21Z"
},
{
"body": "Would this be clearer as \"Invalid polygon, interior cannot share more than one point with the exterior\"?\n",
"created_at": "2015-04-09T13:52:07Z"
},
{
"body": "Should we make this message a bit more user friendly? I'm not sure may users will know what a tangential point is\n",
"created_at": "2015-04-10T15:16:08Z"
}
],
"title": "Fix hole intersection at tangential coordinate"
} | {
"commits": [
{
"message": "[GEO] Fix hole intersection at tangential coordinate\n\nOGC SFA 2.1.10 assertion 3 allows interior boundaries to touch exterior boundaries provided they intersect at a single point. Issue #9511 provides an example where a valid shape is incorrectly interpreted as invalid (a false violation of assertion 3). When the intersecting point appears as the first and last coordinate of the interior boundary in a polygon, the ShapeBuilder incorrectly counted this as multiple intersecting vertices. The fix required a little more than just a logic check. Passing the duplicate vertices resulted in a connected component in the edge graph causing an invalid self crossing polygon. This required additional logic to the edge assignment in order to correctly segment the connected components. Finally, an additional hole validation has been added along with proper unit tests for testing valid and invalid conditions (including dateline crossing polys).\n\ncloses #9511"
},
{
"message": "removing unnecessary intermediate ArrayLists from validateHole"
},
{
"message": "variable name change for readability"
},
{
"message": "Fix infinite loop for self-loops in edge graph\n\nThe PR review process caught an infinite loop for polygons containing duplicate sequential coordinates (excluding start and end points). This created a self loop in the edge graph that caused an infinite loop during traversal. One proposed solution was to check for duplicate points, ignore, and proceed. This violates the OGC SFA definition of a Simple Curve (the base of a LinearRing). Per 6.1.6.1 in v.1.2.1, \"A Curve is simple if it does not pass through the same point twice with the possible exception of the two end points...A curve that is simple and closed is a Ring\".\n\nKeeping true to \"fail early\", this commit catches self-loops during creation of the edge and throws an IllegalShapeException. It complys with the OGC spec and avoids any further processing if an invalid ring is provided. One potential downside is the additional rigor it adds to loosely defined GeoJSON."
},
{
"message": "updating exception verbiage"
},
{
"message": "change WeakHashMap and update exception handling"
}
],
"files": [
{
"diff": "@@ -19,14 +19,18 @@\n \n package org.elasticsearch.common.geo.builders;\n \n+import com.google.common.collect.Sets;\n+import com.spatial4j.core.exception.InvalidShapeException;\n import com.spatial4j.core.shape.Shape;\n import com.vividsolutions.jts.geom.*;\n-import org.elasticsearch.ElasticsearchParseException;\n+import org.apache.commons.lang3.tuple.Pair;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.Iterator;\n \n /**\n@@ -111,6 +115,18 @@ public ShapeBuilder close() {\n return shell.close();\n }\n \n+ /**\n+ * Validates only 1 vertex is tangential (shared) between the interior and exterior of a polygon\n+ */\n+ protected void validateHole(BaseLineStringBuilder shell, BaseLineStringBuilder hole) {\n+ HashSet exterior = Sets.newHashSet(shell.points);\n+ HashSet interior = Sets.newHashSet(hole.points);\n+ exterior.retainAll(interior);\n+ if (exterior.size() >= 2) {\n+ throw new InvalidShapeException(\"Invalid polygon, interior cannot share more than one point with the exterior\");\n+ }\n+ }\n+\n /**\n * The coordinates setup by the builder will be assembled to a polygon. The result will consist of\n * a set of polygons. Each of these components holds a list of linestrings defining the polygon: the\n@@ -125,6 +141,7 @@ public Coordinate[][][] coordinates() {\n int numEdges = shell.points.size()-1; // Last point is repeated \n for (int i = 0; i < holes.size(); i++) {\n numEdges += holes.get(i).points.size()-1;\n+ validateHole(shell, this.holes.get(i));\n }\n \n Edge[] edges = new Edge[numEdges];\n@@ -253,28 +270,62 @@ private static int component(final Edge edge, final int id, final ArrayList<Edge\n }\n }\n \n- double shift = any.coordinate.x > DATELINE ? DATELINE : (any.coordinate.x < -DATELINE ? -DATELINE : 0);\n+ double shiftOffset = any.coordinate.x > DATELINE ? DATELINE : (any.coordinate.x < -DATELINE ? -DATELINE : 0);\n if (debugEnabled()) {\n- LOGGER.debug(\"shift: {[]}\", shift);\n+ LOGGER.debug(\"shift: {[]}\", shiftOffset);\n }\n \n // run along the border of the component, collect the\n // edges, shift them according to the dateline and\n // update the component id\n- int length = 0;\n+ int length = 0, connectedComponents = 0;\n+ // if there are two connected components, splitIndex keeps track of where to split the edge array\n+ // start at 1 since the source coordinate is shared\n+ int splitIndex = 1;\n Edge current = edge;\n+ Edge prev = edge;\n+ // bookkeep the source and sink of each visited coordinate\n+ HashMap<Coordinate, Pair<Edge, Edge>> visitedEdge = new HashMap<>();\n do {\n-\n- current.coordinate = shift(current.coordinate, shift); \n+ current.coordinate = shift(current.coordinate, shiftOffset);\n current.component = id;\n- if(edges != null) {\n+\n+ if (edges != null) {\n+ // found a closed loop - we have two connected components so we need to slice into two distinct components\n+ if (visitedEdge.containsKey(current.coordinate)) {\n+ if (connectedComponents > 0 && current.next != edge) {\n+ throw new InvalidShapeException(\"Shape contains more than one shared point\");\n+ }\n+\n+ // a negative id flags the edge as visited for the edges(...) 
method.\n+ // since we're splitting connected components, we want the edges method to visit\n+ // the newly separated component\n+ final int visitID = -id;\n+ Edge firstAppearance = visitedEdge.get(current.coordinate).getRight();\n+ // correct the graph pointers by correcting the 'next' pointer for both the\n+ // first appearance and this appearance of the edge\n+ Edge temp = firstAppearance.next;\n+ firstAppearance.next = current.next;\n+ current.next = temp;\n+ current.component = visitID;\n+ // backtrack until we get back to this coordinate, setting the visit id to\n+ // a non-visited value (anything positive)\n+ do {\n+ prev.component = visitID;\n+ prev = visitedEdge.get(prev.coordinate).getLeft();\n+ ++splitIndex;\n+ } while (!current.coordinate.equals(prev.coordinate));\n+ ++connectedComponents;\n+ } else {\n+ visitedEdge.put(current.coordinate, Pair.of(prev, current));\n+ }\n edges.add(current);\n+ prev = current;\n }\n-\n length++;\n- } while((current = current.next) != edge);\n+ } while(connectedComponents == 0 && (current = current.next) != edge);\n \n- return length;\n+ return (splitIndex != 1) ? length-splitIndex: length;\n }\n \n /**\n@@ -364,11 +415,12 @@ private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Ed\n // if no intersection is found then the hole is not within the polygon, so\n // don't waste time calling a binary search\n final int pos;\n- if (intersections == 0 ||\n- (pos = Arrays.binarySearch(edges, 0, intersections, current, INTERSECTION_ORDER)) >= 0) {\n- throw new ElasticsearchParseException(\"Invalid shape: Hole is not within polygon\");\n+ boolean sharedVertex = false;\n+ if (intersections == 0 || ((pos = Arrays.binarySearch(edges, 0, intersections, current, INTERSECTION_ORDER)) >= 0)\n+ && !(sharedVertex = (edges[pos].intersect.compareTo(current.coordinate) == 0)) ) {\n+ throw new InvalidShapeException(\"Invalid shape: Hole is not within polygon\");\n }\n- final int index = -(pos+2);\n+ final int index = -((sharedVertex) ? 0 : pos+2);\n final int component = -edges[index].component - numHoles - 1;\n \n if(debugEnabled()) {\n@@ -465,7 +517,7 @@ private static int createEdges(int component, Orientation orientation, BaseLineS\n Edge[] edges, int offset) {\n // inner rings (holes) have an opposite direction than the outer rings\n // XOR will invert the orientation for outer ring cases (Truth Table:, T/T = F, T/F = T, F/T = T, F/F = F)\n- boolean direction = (component != 0 ^ orientation == Orientation.RIGHT);\n+ boolean direction = (component == 0 ^ orientation == Orientation.RIGHT);\n // set the points array accordingly (shell or hole)\n Coordinate[] points = (hole != null) ? hole.coordinates(false) : shell.coordinates(false);\n Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, 0, edges, offset, points.length-1);",
"filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.context.jts.JtsSpatialContext;\n+import com.spatial4j.core.exception.InvalidShapeException;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n import com.vividsolutions.jts.geom.Coordinate;\n@@ -446,7 +447,8 @@ protected static final class Edge {\n \n protected Edge(Coordinate coordinate, Edge next, Coordinate intersection) {\n this.coordinate = coordinate;\n- this.next = next;\n+ // use setter to catch duplicate point cases\n+ this.setNext(next);\n this.intersect = intersection;\n if (next != null) {\n this.component = next.component;\n@@ -457,6 +459,17 @@ protected Edge(Coordinate coordinate, Edge next) {\n this(coordinate, next, Edge.MAX_COORDINATE);\n }\n \n+ protected void setNext(Edge next) {\n+ // don't bother setting next if its null\n+ if (next != null) {\n+ // self-loop throws an invalid shape\n+ if (this.coordinate.equals(next.coordinate)) {\n+ throw new InvalidShapeException(\"Provided shape has duplicate consecutive coordinates at: \" + this.coordinate);\n+ }\n+ this.next = next;\n+ }\n+ }\n+\n private static final int top(Coordinate[] points, int offset, int length) {\n int top = 0; // we start at 1 here since top points to 0\n for (int i = 1; i < length; i++) {\n@@ -522,17 +535,19 @@ private static Edge[] concat(int component, boolean direction, Coordinate[] poin\n if (direction) {\n edges[edgeOffset + i] = new Edge(points[pointOffset + i], edges[edgeOffset + i - 1]);\n edges[edgeOffset + i].component = component;\n- } else {\n+ } else if(!edges[edgeOffset + i - 1].coordinate.equals(points[pointOffset + i])) {\n edges[edgeOffset + i - 1].next = edges[edgeOffset + i] = new Edge(points[pointOffset + i], null);\n edges[edgeOffset + i - 1].component = component;\n+ } else {\n+ throw new InvalidShapeException(\"Provided shape has duplicate consecutive coordinates at: \" + points[pointOffset + i]);\n }\n }\n \n if (direction) {\n- edges[edgeOffset].next = edges[edgeOffset + length - 1];\n+ edges[edgeOffset].setNext(edges[edgeOffset + length - 1]);\n edges[edgeOffset].component = component;\n } else {\n- edges[edgeOffset + length - 1].next = edges[edgeOffset];\n+ edges[edgeOffset + length - 1].setNext(edges[edgeOffset]);\n edges[edgeOffset + length - 1].component = component;\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java",
"status": "modified"
},
{
"diff": "@@ -267,7 +267,7 @@ public void testParse_invalidMultiPolygon() throws IOException {\n \n XContentParser parser = JsonXContent.jsonXContent.createParser(multiPolygonGeoJson);\n parser.nextToken();\n- ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+ ElasticsearchGeoAssertions.assertValidException(parser, InvalidShapeException.class);\n }\n \n public void testParse_OGCPolygonWithoutHoles() throws IOException {",
"filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.LineString;\n import com.vividsolutions.jts.geom.Polygon;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.builders.PolygonBuilder;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n import org.elasticsearch.test.ElasticsearchTestCase;\n@@ -39,14 +40,12 @@\n */\n public class ShapeBuilderTests extends ElasticsearchTestCase {\n \n- @Test\n public void testNewPoint() {\n Point point = ShapeBuilder.newPoint(-100, 45).build();\n assertEquals(-100D, point.getX(), 0.0d);\n assertEquals(45D, point.getY(), 0.0d);\n }\n \n- @Test\n public void testNewRectangle() {\n Rectangle rectangle = ShapeBuilder.newEnvelope().topLeft(-45, 30).bottomRight(45, -30).build();\n assertEquals(-45D, rectangle.getMinX(), 0.0d);\n@@ -55,7 +54,6 @@ public void testNewRectangle() {\n assertEquals(30D, rectangle.getMaxY(), 0.0d);\n }\n \n- @Test\n public void testNewPolygon() {\n Polygon polygon = ShapeBuilder.newPolygon()\n .point(-45, 30)\n@@ -71,7 +69,6 @@ public void testNewPolygon() {\n assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n- @Test\n public void testNewPolygon_coordinate() {\n Polygon polygon = ShapeBuilder.newPolygon()\n .point(new Coordinate(-45, 30))\n@@ -87,7 +84,6 @@ public void testNewPolygon_coordinate() {\n assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n- @Test\n public void testNewPolygon_coordinates() {\n Polygon polygon = ShapeBuilder.newPolygon()\n .points(new Coordinate(-45, 30), new Coordinate(45, 30), new Coordinate(45, -30), new Coordinate(-45, -30), new Coordinate(-45, 30)).toPolygon();\n@@ -98,8 +94,7 @@ public void testNewPolygon_coordinates() {\n assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n- \n- @Test\n+\n public void testLineStringBuilder() {\n // Building a simple LineString\n ShapeBuilder.newLineString()\n@@ -141,7 +136,6 @@ public void testLineStringBuilder() {\n .build();\n }\n \n- @Test\n public void testMultiLineString() {\n ShapeBuilder.newMultiLinestring()\n .linestring()\n@@ -175,7 +169,7 @@ public void testMultiLineString() {\n .end()\n .build();\n }\n- \n+\n @Test(expected = InvalidShapeException.class)\n public void testPolygonSelfIntersection() {\n ShapeBuilder.newPolygon()\n@@ -186,7 +180,6 @@ public void testPolygonSelfIntersection() {\n .close().build();\n }\n \n- @Test\n public void testGeoCircle() {\n double earthCircumference = 40075016.69;\n Circle circle = ShapeBuilder.newCircleBuilder().center(0, 0).radius(\"100m\").build();\n@@ -211,8 +204,7 @@ public void testGeoCircle() {\n assertEquals((360 * randomRadius) / earthCircumference, circle.getRadius(), 0.00000001);\n assertEquals(new PointImpl(randomLon, randomLat, ShapeBuilder.SPATIAL_CONTEXT), circle.getCenter());\n }\n- \n- @Test\n+\n public void testPolygonWrapping() {\n Shape shape = ShapeBuilder.newPolygon()\n .point(-150.0, 65.0)\n@@ -224,19 +216,16 @@ public void testPolygonWrapping() {\n assertMultiPolygon(shape);\n }\n \n- @Test\n public void testLineStringWrapping() {\n Shape shape = ShapeBuilder.newLineString()\n .point(-150.0, 65.0)\n .point(-250.0, 65.0)\n .point(-250.0, -65.0)\n .point(-150.0, -65.0)\n .build();\n- \n assertMultiLineString(shape);\n }\n \n- @Test\n public void testDatelineOGC() {\n // tests that the following shape (defined in counterclockwise OGC order)\n // 
https://gist.github.com/anonymous/7f1bb6d7e9cd72f5977c crosses the dateline\n@@ -275,11 +264,9 @@ public void testDatelineOGC() {\n .point(-179,1);\n \n Shape shape = builder.close().build();\n-\n assertMultiPolygon(shape);\n }\n \n- @Test\n public void testDateline() {\n // tests that the following shape (defined in clockwise non-OGC order)\n // https://gist.github.com/anonymous/7f1bb6d7e9cd72f5977c crosses the dateline\n@@ -318,11 +305,9 @@ public void testDateline() {\n .point(-179,1);\n \n Shape shape = builder.close().build();\n-\n assertMultiPolygon(shape);\n }\n- \n- @Test\n+\n public void testComplexShapeWithHole() {\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n .point(-85.0018514,37.1311314)\n@@ -393,11 +378,9 @@ public void testComplexShapeWithHole() {\n .point(-85.0000002,37.1317672);\n \n Shape shape = builder.close().build();\n-\n- assertPolygon(shape);\n+ assertPolygon(shape);\n }\n \n- @Test\n public void testShapeWithHoleAtEdgeEndPoints() {\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n .point(-4, 2)\n@@ -416,11 +399,9 @@ public void testShapeWithHoleAtEdgeEndPoints() {\n .point(4, 1);\n \n Shape shape = builder.close().build();\n-\n- assertPolygon(shape);\n+ assertPolygon(shape);\n }\n \n- @Test\n public void testShapeWithPointOnDateline() {\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n .point(180, 0)\n@@ -429,11 +410,9 @@ public void testShapeWithPointOnDateline() {\n .point(180, 0);\n \n Shape shape = builder.close().build();\n-\n- assertPolygon(shape);\n+ assertPolygon(shape);\n }\n \n- @Test\n public void testShapeWithEdgeAlongDateline() {\n // test case 1: test the positive side of the dateline\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n@@ -456,7 +435,6 @@ public void testShapeWithEdgeAlongDateline() {\n assertPolygon(shape);\n }\n \n- @Test\n public void testShapeWithBoundaryHoles() {\n // test case 1: test the positive side of the dateline\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n@@ -481,7 +459,7 @@ public void testShapeWithBoundaryHoles() {\n .point(179, 10)\n .point(179, -10)\n .point(-176, -15)\n- .point(-172,0);\n+ .point(-172, 0);\n builder.hole()\n .point(-176, 10)\n .point(-176, -10)\n@@ -492,6 +470,89 @@ public void testShapeWithBoundaryHoles() {\n assertMultiPolygon(shape);\n }\n \n+ public void testShapeWithTangentialHole() {\n+ // test a shape with one tangential (shared) vertex (should pass)\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(179, 10)\n+ .point(168, 15)\n+ .point(164, 0)\n+ .point(166, -15)\n+ .point(179, -10)\n+ .point(179, 10);\n+ builder.hole()\n+ .point(-177, 10)\n+ .point(-178, -10)\n+ .point(-180, -5)\n+ .point(-180, 5)\n+ .point(-177, 10);\n+ Shape shape = builder.close().build();\n+ assertMultiPolygon(shape);\n+ }\n+\n+ @Test(expected = InvalidShapeException.class)\n+ public void testShapeWithInvalidTangentialHole() {\n+ // test a shape with one invalid tangential (shared) vertex (should throw exception)\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(179, 10)\n+ .point(168, 15)\n+ .point(164, 0)\n+ .point(166, -15)\n+ .point(179, -10)\n+ .point(179, 10);\n+ builder.hole()\n+ .point(164, 0)\n+ .point(175, 10)\n+ .point(175, 5)\n+ .point(179, -10)\n+ .point(164, 0);\n+ Shape shape = builder.close().build();\n+ assertMultiPolygon(shape);\n+ }\n+\n+ public void testBoundaryShapeWithTangentialHole() {\n+ // test a shape with one tangential (shared) vertex for each hole (should pass)\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(-177, 
10)\n+ .point(176, 15)\n+ .point(172, 0)\n+ .point(176, -15)\n+ .point(-177, -10)\n+ .point(-177, 10);\n+ builder.hole()\n+ .point(-177, 10)\n+ .point(-178, -10)\n+ .point(-180, -5)\n+ .point(-180, 5)\n+ .point(-177, 10);\n+ builder.hole()\n+ .point(172, 0)\n+ .point(176, 10)\n+ .point(176, -5)\n+ .point(172, 0);\n+ Shape shape = builder.close().build();\n+ assertMultiPolygon(shape);\n+ }\n+\n+ @Test(expected = InvalidShapeException.class)\n+ public void testBoundaryShapeWithInvalidTangentialHole() {\n+ // test shape with two tangential (shared) vertices (should throw exception)\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(-177, 10)\n+ .point(176, 15)\n+ .point(172, 0)\n+ .point(176, -15)\n+ .point(-177, -10)\n+ .point(-177, 10);\n+ builder.hole()\n+ .point(-177, 10)\n+ .point(172, 0)\n+ .point(180, -5)\n+ .point(176, -10)\n+ .point(-177, 10);\n+ Shape shape = builder.close().build();\n+ assertMultiPolygon(shape);\n+ }\n+\n /**\n * Test an enveloping polygon around the max mercator bounds\n */\n@@ -510,7 +571,7 @@ public void testBoundaryShape() {\n \n @Test\n public void testShapeWithAlternateOrientation() {\n- // ccw: should produce a single polygon spanning hemispheres\n+ // cw: should produce a multi polygon spanning hemispheres\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n .point(180, 0)\n .point(176, 4)\n@@ -531,4 +592,16 @@ public void testShapeWithAlternateOrientation() {\n \n assertMultiPolygon(shape);\n }\n+\n+ @Test(expected = InvalidShapeException.class)\n+ public void testInvalidShapeWithConsecutiveDuplicatePoints() {\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(180, 0)\n+ .point(176, 4)\n+ .point(176, 4)\n+ .point(-176, 4)\n+ .point(180, 0);\n+ Shape shape = builder.close().build();\n+ assertPolygon(shape);\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java",
"status": "modified"
}
]
} |
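The `validateHole` check introduced in the diff above enforces OGC SFA 2.1.10 assertion 3: a hole may touch the exterior ring at one tangential vertex but not share two or more. A minimal standalone sketch of that rule, assuming a plain x/y point type instead of the JTS `Coordinate` used by the real `ShapeBuilder`:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for the JTS Coordinate type; equals/hashCode come from the record.
record Point(double x, double y) {}

final class HoleValidation {
    /**
     * Mirrors the validateHole logic in the diff above: one shared (tangential)
     * vertex between hole and exterior is allowed, two or more are rejected.
     */
    static void validateHole(List<Point> shell, List<Point> hole) {
        Set<Point> shared = new HashSet<>(shell);
        shared.retainAll(new HashSet<>(hole));
        if (shared.size() >= 2) {
            throw new IllegalArgumentException(
                    "Invalid polygon, interior cannot share more than one point with the exterior");
        }
    }
}
```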
{
"body": "The new `min_score` functionality in the `function_score` query doesn't seem to to work as advertised. For instance:\n\n```\nDELETE t\n\nPUT t\n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"foo\": {\n \"type\": \"nested\",\n \"properties\": {\n \"key\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"value\": {\n \"type\": \"long\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT /t/t/1\n{\n \"name\": \"test\",\n \"foo\": [\n {\n \"key\": \"bar\",\n \"value\": 10\n },\n {\n \"key\": \"bar\",\n \"value\": 20\n },\n {\n \"key\": \"bar\",\n \"value\": 30\n },\n {\n \"key\": \"bar\",\n \"value\": 40\n }\n ]\n}\n```\n\nThis query returns a score of 4 for the above document, which means that it should be filtered out by the `min_score` of `10`:\n\n```\nGET /_search\n{\n \"query\": {\n \"function_score\": {\n \"boost_mode\": \"replace\",\n \"min_score\": 10,\n \"query\": {\n \"nested\": {\n \"path\": \"foo\",\n \"score_mode\": \"sum\",\n \"query\": {\n \"constant_score\": {\n \"filter\": {\n \"match_all\": {}\n }\n }\n }\n }\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "This is a bug indeed and it seems to me for two reasons: \n1) when the score is 4 the document should not be returned \n2) the score should not be 4 in the first place. `\"boost_mode\": \"replace\"` should cause the score to be 1 (default if no functions and no query given).\nI'll work on a fix. For now the workaround would be to define a \n\n`\"weight\": 1,` \n\nif you really want to score all matches with 1 (which does not make much sense imo but should still work) or\n\n```\n \"script_score\": {\n \"script\": \"_score\"\n}, \n```\n\nin case one actually wants the score of the query to be used.\n",
"created_at": "2015-03-25T09:10:42Z"
},
{
"body": "To be clear, `min_score` as a child of `function_score` is new in 1.5.0.\n\n`min_score` at the _root_ document level works locally running 1.4.4.\n",
"created_at": "2015-03-27T02:42:53Z"
},
{
"body": "Currently the way it works is this: If function_score encounters a query without a function, then the query is just executed as is without wrapping in a function score query (https://github.com/elastic/elasticsearch/blob/master/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java#L161) so min_score has no effect in this case. This was implemented so on purpose but it seems to me that now with the min_score there is a usecase where you would want to use function_score without a function. I'll change that.\n",
"created_at": "2015-03-30T15:02:49Z"
}
],
"number": 10253,
"title": "`min_score` not working in `function_score` query"
} | {
"body": "For optimization purposes a function score query with an empty function\nwill just result in the original sub query. However, sometimes one might\nwant to use function_score query to actually filter out docs within for example\nbool clauses by using the min_score functionality.\nTherefore the sub query should only be used without wrapping inside\na function_score query if min_score was also not set.\n\ncloses #10253\n",
"number": 10326,
"review_comments": [
{
"body": "Note that you can use Objects.hashCode(function) directly which will make sure to return 0 if the value is null.\n",
"created_at": "2015-03-31T07:52:52Z"
}
],
"title": "Function score: Apply `min_score` to sub query score if no function provided"
} | {
"commits": [
{
"message": "tests for function score with empty functions"
},
{
"message": "first shot - maybe not so clean"
},
{
"message": "Revert \"first shot - maybe not so clean\"\n\nThis reverts commit 20b9ee95e98818099ffea139a9e888a3aa2118e6."
},
{
"message": "[function_score] apply min_score to sub query score if no function provided\n\nFor optimization pruposes a function score query with an empty function\nwill just result in the original sub query. However, sometimes one might\nwant to use function_score query to actually filter out docs within for example\nbool clauses by using the min_score functionallity.\nTherefore the sub query should only be used without wrapping inside\na function_score query if min_score was also not set.\n\ncloses #10253"
},
{
"message": "use objects.hashode"
}
],
"files": [
{
"diff": "@@ -27,6 +27,7 @@\n import org.apache.lucene.util.ToStringUtils;\n \n import java.io.IOException;\n+import java.util.Objects;\n import java.util.Set;\n \n /**\n@@ -43,7 +44,7 @@ public class FunctionScoreQuery extends Query {\n public FunctionScoreQuery(Query subQuery, ScoreFunction function, Float minScore) {\n this.subQuery = subQuery;\n this.function = function;\n- this.combineFunction = function.getDefaultScoreCombiner();\n+ this.combineFunction = function == null? combineFunction.MULT : function.getDefaultScoreCombiner();\n this.minScore = minScore;\n }\n \n@@ -124,7 +125,9 @@ public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOExcept\n if (subQueryScorer == null) {\n return null;\n }\n- function.setNextReader(context);\n+ if (function != null) {\n+ function.setNextReader(context);\n+ }\n return new FunctionFactorScorer(this, subQueryScorer, function, maxBoost, combineFunction, minScore);\n }\n \n@@ -134,9 +137,13 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio\n if (!subQueryExpl.isMatch()) {\n return subQueryExpl;\n }\n- function.setNextReader(context);\n- Explanation functionExplanation = function.explainScore(doc, subQueryExpl);\n- return combineFunction.explain(getBoost(), subQueryExpl, functionExplanation, maxBoost);\n+ if (function != null) {\n+ function.setNextReader(context);\n+ Explanation functionExplanation = function.explainScore(doc, subQueryExpl);\n+ return combineFunction.explain(getBoost(), subQueryExpl, functionExplanation, maxBoost);\n+ } else {\n+ return subQueryExpl;\n+ }\n }\n }\n \n@@ -153,8 +160,12 @@ private FunctionFactorScorer(CustomBoostFactorWeight w, Scorer scorer, ScoreFunc\n @Override\n public float innerScore() throws IOException {\n float score = scorer.score();\n- return scoreCombiner.combine(subQueryBoost, score,\n- function.score(scorer.docID(), score), maxBoost);\n+ if (function == null) {\n+ return subQueryBoost * score;\n+ } else {\n+ return scoreCombiner.combine(subQueryBoost, score,\n+ function.score(scorer.docID(), score), maxBoost);\n+ }\n }\n }\n \n@@ -171,12 +182,12 @@ public boolean equals(Object o) {\n if (o == null || getClass() != o.getClass())\n return false;\n FunctionScoreQuery other = (FunctionScoreQuery) o;\n- return this.getBoost() == other.getBoost() && this.subQuery.equals(other.subQuery) && this.function.equals(other.function)\n+ return this.getBoost() == other.getBoost() && this.subQuery.equals(other.subQuery) && (this.function != null ? this.function.equals(other.function) : other.function == null)\n && this.maxBoost == other.maxBoost;\n }\n \n @Override\n public int hashCode() {\n- return subQuery.hashCode() + 31 * function.hashCode() ^ Float.floatToIntBits(getBoost());\n+ return subQuery.hashCode() + 31 * Objects.hashCode(function) ^ Float.floatToIntBits(getBoost());\n }\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -90,7 +90,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n FiltersFunctionScoreQuery.ScoreMode scoreMode = FiltersFunctionScoreQuery.ScoreMode.Multiply;\n ArrayList<FiltersFunctionScoreQuery.FilterFunction> filterFunctions = new ArrayList<>();\n- float maxBoost = Float.MAX_VALUE;\n+ Float maxBoost = null;\n Float minScore = null;\n \n String currentFieldName = null;\n@@ -157,13 +157,17 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n query = new FilteredQuery(query, filter);\n }\n // if all filter elements returned null, just use the query\n- if (filterFunctions.isEmpty()) {\n+ if (filterFunctions.isEmpty() && combineFunction == null) {\n return query;\n }\n+ if (maxBoost == null) {\n+ maxBoost = Float.MAX_VALUE;\n+ }\n // handle cases where only one score function and no filter was\n // provided. In this case we create a FunctionScoreQuery.\n- if (filterFunctions.size() == 1 && (filterFunctions.get(0).filter == null || filterFunctions.get(0).filter instanceof MatchAllDocsFilter)) {\n- FunctionScoreQuery theQuery = new FunctionScoreQuery(query, filterFunctions.get(0).function, minScore);\n+ if (filterFunctions.size() == 0 || filterFunctions.size() == 1 && (filterFunctions.get(0).filter == null || filterFunctions.get(0).filter instanceof MatchAllDocsFilter)) {\n+ ScoreFunction function = filterFunctions.size() == 0 ? null : filterFunctions.get(0).function;\n+ FunctionScoreQuery theQuery = new FunctionScoreQuery(query, function, minScore);\n if (combineFunction != null) {\n theQuery.setCombineFunction(combineFunction);\n }",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -557,5 +557,40 @@ public void testFilterAndQueryGiven() throws IOException, ExecutionException, In\n assertThat(Float.parseFloat(hit.getId()), equalTo(hit.getScore()));\n }\n }\n+\n+ @Test\n+ public void testWithEmptyFunctions() throws IOException, ExecutionException, InterruptedException {\n+ assertAcked(prepareCreate(\"test\"));\n+ ensureYellow();\n+ index(\"test\", \"testtype\", \"1\", jsonBuilder().startObject().field(\"text\", \"test text\").endObject());\n+ refresh();\n+\n+ // make sure that min_score works if functions is empty, see https://github.com/elastic/elasticsearch/issues/10253\n+ float termQueryScore = 0.19178301f;\n+ testMinScoreApplied(\"sum\", termQueryScore);\n+ testMinScoreApplied(\"avg\", termQueryScore);\n+ testMinScoreApplied(\"max\", termQueryScore);\n+ testMinScoreApplied(\"min\", termQueryScore);\n+ testMinScoreApplied(\"multiply\", termQueryScore);\n+ testMinScoreApplied(\"replace\", termQueryScore);\n+ }\n+\n+ protected void testMinScoreApplied(String boostMode, float expectedScore) throws InterruptedException, ExecutionException {\n+ SearchResponse response = client().search(\n+ searchRequest().source(\n+ searchSource().explain(true).query(\n+ functionScoreQuery(termQuery(\"text\", \"text\")).boostMode(boostMode).setMinScore(0.1f)))).get();\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().totalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getScore(), equalTo(expectedScore));\n+\n+ response = client().search(\n+ searchRequest().source(\n+ searchSource().explain(true).query(\n+ functionScoreQuery(termQuery(\"text\", \"text\")).boostMode(boostMode).setMinScore(2f)))).get();\n+\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().totalHits(), equalTo(0l));\n+ }\n }\n ",
"filename": "src/test/java/org/elasticsearch/search/functionscore/FunctionScoreTests.java",
"status": "modified"
}
]
} |
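The `FunctionScoreQuery` changes above amount to tolerating a missing score function so that `min_score` still applies to the sub query score, plus the null-safe `Objects.hashCode(function)` suggested in the review. A simplified sketch of that combination logic, detached from Lucene (the `ScoreFunction` interface and class names here are illustrative, not the actual Elasticsearch types):

```java
import java.util.Objects;
import java.util.OptionalDouble;

// Simplified sketch of scoring with an optional function and a min_score cutoff.
final class MinScoreCombiner {
    interface ScoreFunction {
        float score(int docId, float subQueryScore);
    }

    private final ScoreFunction function; // may be null, as in the patched query
    private final float subQueryBoost;
    private final Float minScore;         // null means "no cutoff"

    MinScoreCombiner(ScoreFunction function, float subQueryBoost, Float minScore) {
        this.function = function;
        this.subQueryBoost = subQueryBoost;
        this.minScore = minScore;
    }

    /** Combined score, or empty when the document falls below min_score. */
    OptionalDouble score(int docId, float subQueryScore) {
        float combined = function == null
                ? subQueryBoost * subQueryScore                        // no function: keep the sub query score
                : subQueryBoost * function.score(docId, subQueryScore);
        if (minScore != null && combined < minScore) {
            return OptionalDouble.empty();                             // filtered out by min_score
        }
        return OptionalDouble.of(combined);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        MinScoreCombiner other = (MinScoreCombiner) o;
        return Float.compare(subQueryBoost, other.subQueryBoost) == 0
                && Objects.equals(function, other.function)
                && Objects.equals(minScore, other.minScore);
    }

    @Override
    public int hashCode() {
        // Objects.hashCode returns 0 for a null function, as noted in the review above.
        return 31 * Objects.hashCode(function) ^ Float.floatToIntBits(subQueryBoost);
    }
}
```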
{
"body": "I'm updating a .NET client to support inner_hits for nested, has_child and has_parent queries and filters.\n\nThe inner_hits work for all queries and filters except nested filters. Is this a bug or a documentation error?\n\n```\n{\n \"filter\": {\n \"nested\": {\n \"path\": \"skillchildren\",\n \"filter\": {\n \"match_all\": { }\n },\n \"inner_hits\": { }\n }\n }\n}\n```\n\nGreetings Damien\n",
"comments": [
{
"body": "@damienbod inner_hits in a nested filter should work. I quickly tried to verified that `inner_hits` works in a `nested` filter. If it doesn't work in your case can you share your reproduction of the issue?\n",
"created_at": "2015-03-29T20:29:16Z"
},
{
"body": "Hi Thanks for your reply. Here's my data:\n\nMapping:\n\n```\nPUT http://localhost:9200/nestedcollectiontests/nestedcollectiontest/_mappings HTTP/1.1\nContent-Type: application/json\nHost: localhost:9200\nContent-Length: 616\nExpect: 100-continue\n\n{\"nestedcollectiontest\":{\"properties\":{\"id\":{ \"type\" : \"long\" },\"nameskillparent\":{ \"type\" : \"string\" },\"descriptionskillparent\":{ \"type\" : \"string\" },\"createdskillparent\":{ \"type\" : \"date\", \"format\": \"dateOptionalTime\"},\"updatedskillparent\":{ \"type\" : \"date\", \"format\": \"dateOptionalTime\"},\"skillchildren\":{\"type\":\"nested\",\"include_in_parent\":true,\"properties\":{\"id\":{ \"type\" : \"long\" },\"nameskillchild\":{ \"type\" : \"string\" },\"descriptionskillchild\":{ \"type\" : \"string\" },\"createdskillchild\":{ \"type\" : \"date\", \"format\": \"dateOptionalTime\"},\"updatedskillchild\":{ \"type\" : \"date\", \"format\": \"dateOptionalTime\"}}}}}}\n\n```\n\nTest Data:\n\n```\nPOST http://localhost:9200/_bulk HTTP/1.1\nContent-Type: application/json\nHost: localhost:9200\nContent-Length: 906\nExpect: 100-continue\n\n{\"index\":{\"_index\":\"nestedcollectiontests\",\"_type\":\"nestedcollectiontest\",\"_id\":8}}\n{\"id\":8,\"nameskillparent\":\"cool\",\"descriptionskillparent\":\"A test entity description\",\"createdskillparent\":\"2015-03-29T20:59:51.0179334+00:00\",\"updatedskillparent\":\"2015-03-29T20:59:51.0179334+00:00\",\"skillchildren\":[{\"id\":0,\"nameskillchild\":\"cool\",\"descriptionskillchild\":\"A test SkillChild description\",\"createdskillchild\":\"2015-03-29T20:59:51.0159335+00:00\",\"updatedskillchild\":\"2015-03-29T20:59:51.0159335+00:00\"},{\"id\":1,\"nameskillchild\":\"cool\",\"descriptionskillchild\":\"A test SkillChild description\",\"createdskillchild\":\"2015-03-29T20:59:51.0159335+00:00\",\"updatedskillchild\":\"2015-03-29T20:59:51.0159335+00:00\"},{\"id\":2,\"nameskillchild\":\"cool\",\"descriptionskillchild\":\"A test SkillChild description\",\"createdskillchild\":\"2015-03-29T20:59:51.0159335+00:00\",\"updatedskillchild\":\"2015-03-29T20:59:51.0159335+00:00\"}]}\n```\n\nSearch Request:\n\n```\nPOST http://localhost:9200/nestedcollectiontests/nestedcollectiontest/_search HTTP/1.1\nContent-Type: application/json\nHost: localhost:9200\nContent-Length: 88\nExpect: 100-continue\n\n{\"filter\":{\"nested\":{\"path\":\"skillchildren\",\"filter\":{\"match_all\":{}},\"inner_hits\":{}}}}\n```\n\nResponse:\n\n```\nHTTP/1.1 400 Bad Request\nContent-Type: application/json; charset=UTF-8\nContent-Length: 1997\n\n{\"error\":\"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[g6ANIIX0SieBnVvi3jhjJA][nestedcollectiontests][0]: SearchParseException[[nestedcollectiontests][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\\\"filter\\\":{\\\"nested\\\":{\\\"path\\\":\\\"skillchildren\\\",\\\"filter\\\":{\\\"match_all\\\":{}},\\\"inner_hits\\\":{}}}}]]]; nested: QueryParsingException[[nestedcollectiontests] [nested] requires either 'query' or 'filter' field]; }{[g6ANIIX0SieBnVvi3jhjJA][nestedcollectiontests][1]: SearchParseException[[nestedcollectiontests][1]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\\\"filter\\\":{\\\"nested\\\":{\\\"path\\\":\\\"skillchildren\\\",\\\"filter\\\":{\\\"match_all\\\":{}},\\\"inner_hits\\\":{}}}}]]]; nested: QueryParsingException[[nestedcollectiontests] [nested] requires either 'query' or 'filter' field]; }{[g6ANIIX0SieBnVvi3jhjJA][nestedcollectiontests][2]: 
SearchParseException[[nestedcollectiontests][2]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\\\"filter\\\":{\\\"nested\\\":{\\\"path\\\":\\\"skillchildren\\\",\\\"filter\\\":{\\\"match_all\\\":{}},\\\"inner_hits\\\":{}}}}]]]; nested: QueryParsingException[[nestedcollectiontests] [nested] requires either 'query' or 'filter' field]; }{[g6ANIIX0SieBnVvi3jhjJA][nestedcollectiontests][3]: SearchParseException[[nestedcollectiontests][3]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\\\"filter\\\":{\\\"nested\\\":{\\\"path\\\":\\\"skillchildren\\\",\\\"filter\\\":{\\\"match_all\\\":{}},\\\"inner_hits\\\":{}}}}]]]; nested: QueryParsingException[[nestedcollectiontests] [nested] requires either 'query' or 'filter' field]; }{[g6ANIIX0SieBnVvi3jhjJA][nestedcollectiontests][4]: SearchParseException[[nestedcollectiontests][4]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\\\"filter\\\":{\\\"nested\\\":{\\\"path\\\":\\\"skillchildren\\\",\\\"filter\\\":{\\\"match_all\\\":{}},\\\"inner_hits\\\":{}}}}]]]; nested: QueryParsingException[[nestedcollectiontests] [nested] requires either 'query' or 'filter' field]; }]\",\"status\":400}\n```\n\nElasticsearch version:\n\n```\n{\n\n \"status\": 200,\n \"name\": \"Shen Kuei\",\n \"cluster_name\": \"elasticsearch\",\n \"version\": \n\n {\n \"number\": \"1.5.0\",\n \"build_hash\": \"544816042d40151d3ce4ba4f95399d7860dc2e92\",\n \"build_timestamp\": \"2015-03-23T14:30:58Z\",\n \"build_snapshot\": false,\n \"lucene_version\": \"4.10.4\"\n },\n \"tagline\": \"You Know, for Search\"\n\n}\n```\n\nThanks for your help.\n\nGreetings Damien\n",
"created_at": "2015-03-29T21:05:03Z"
},
{
"body": "Hey @damienbod this is a bug. Thanks for sharing your data!\n\nIt only manifest if a inner filter is used in the nested filter. If a inner query is used instead then the issue doesn't occur, so the following works:\n\n```\n{\n \"filter\": {\n \"nested\": {\n \"path\": \"skillchildren\",\n \"query\": {\n \"match_all\": { }\n },\n \"inner_hits\": { }\n }\n }\n}\n```\n",
"created_at": "2015-03-29T21:58:21Z"
},
{
"body": "No problem\nThanks for looking \ngreetings Damien\n",
"created_at": "2015-03-30T04:14:50Z"
}
],
"number": 10308,
"title": "inner_hits does not work for nested filters"
} | {
"body": "PR for #10308\n",
"number": 10309,
"review_comments": [],
"title": "Fix bug where parse error is thrown if a inner filter is used in a nested filter/query."
} | {
"commits": [
{
"message": "inner hits: Fix bug where parse error is thrown if a inner filter is used in a nested filter/query.\n\nCloses #10308"
}
],
"files": [
{
"diff": "@@ -148,7 +148,7 @@ public ToParentBlockJoinQuery build() throws IOException {\n }\n \n if (innerHits != null) {\n- InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.v2(), getInnerQuery(), null, getParentObjectMapper(), nestedObjectMapper);\n+ InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.v2(), innerQuery, null, getParentObjectMapper(), nestedObjectMapper);\n String name = innerHits.v1() != null ? innerHits.v1() : path;\n parseContext.addInnerHits(name, nestedInnerHits);\n }",
"filename": "src/main/java/org/elasticsearch/index/query/NestedQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -39,6 +39,7 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.FilterBuilders.hasChildFilter;\n import static org.elasticsearch.index.query.FilterBuilders.nestedFilter;\n+import static org.elasticsearch.index.query.FilterBuilders.queryFilter;\n import static org.elasticsearch.index.query.QueryBuilders.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.hamcrest.Matchers.*;\n@@ -110,7 +111,9 @@ public void testSimpleNested() throws Exception {\n .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\")))\n .addInnerHit(\"comment\", new InnerHitsBuilder.InnerHit().setPath(\"comments\").setQuery(matchQuery(\"comments.message\", \"elephant\"))).request(),\n client().prepareSearch(\"articles\")\n- .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\")).innerHit(new QueryInnerHitBuilder().setName(\"comment\"))).request()\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"elephant\")).innerHit(new QueryInnerHitBuilder().setName(\"comment\"))).request(),\n+ client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", queryFilter(matchQuery(\"comments.message\", \"elephant\"))).innerHit(new QueryInnerHitBuilder().setName(\"comment\").addSort(\"_doc\", SortOrder.DESC))).request()\n };\n for (SearchRequest searchRequest : searchRequests) {\n SearchResponse response = client().search(searchRequest).actionGet();",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
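The regression test in the diff above exercises the fixed parse path by passing a filter (via `queryFilter`) to `nestedQuery` together with inner hits. A hedged sketch of issuing the same request from client code, reusing only the builder calls that appear in that test; the import locations are assumptions based on the 1.x Java API:

```java
import static org.elasticsearch.index.query.FilterBuilders.queryFilter;
import static org.elasticsearch.index.query.QueryBuilders.matchQuery;
import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.support.QueryInnerHitBuilder;

// Sketch only: a nested query built from a filter, plus inner_hits, which
// triggered a parse error before this fix (see the issue above).
final class NestedInnerHitsExample {
    static SearchResponse search(Client client) {
        return client.prepareSearch("articles")
                .setQuery(nestedQuery("comments", queryFilter(matchQuery("comments.message", "elephant")))
                        .innerHit(new QueryInnerHitBuilder().setName("comment")))
                .get();
    }
}
```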
{
"body": "When creating an index we allow to omit `index.` ie. `number_of_replicas` will also be accepted. If we do this during restore the setting is silently ignored. \n",
"comments": [],
"number": 10133,
"title": "Restore doesn't prefix restore index settings with `index.` "
} | {
"body": "Closes #10133\n",
"number": 10269,
"review_comments": [
{
"body": "Why do we need to remove \"index\" here instead of just normalizing everything?\n",
"created_at": "2015-03-30T17:54:38Z"
},
{
"body": "It's no longer needed. It was added to handle \"index\" parameter in the REST API but the rest API is removing this and other parameters for quite a while now. I will remove this line. Thanks!\n",
"created_at": "2015-03-30T19:21:35Z"
}
],
"title": "Automatically add \"index.\" prefix to the settings are changed on restore if the prefix is missing"
} | {
"commits": [
{
"message": "Automatically add \"index.\" prefix to the settings are changed on restore if the prefix is missing\n\nCloses #10133"
}
],
"files": [
{
"diff": "@@ -154,7 +154,7 @@ public static State fromString(String state) {\n throw new ElasticsearchIllegalStateException(\"No state match for [\" + state + \"]\");\n }\n }\n-\n+ public static final String INDEX_SETTING_PREFIX = \"index.\";\n public static final String SETTING_NUMBER_OF_SHARDS = \"index.number_of_shards\";\n public static final String SETTING_NUMBER_OF_REPLICAS = \"index.number_of_replicas\";\n public static final String SETTING_SHADOW_REPLICAS = \"index.shadow_replicas\";",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java",
"status": "modified"
},
{
"diff": "@@ -329,13 +329,7 @@ public static IndexTemplateMetaData fromXContent(XContentParser parser, String t\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"settings\".equals(currentFieldName)) {\n ImmutableSettings.Builder templateSettingsBuilder = ImmutableSettings.settingsBuilder();\n- for (Map.Entry<String, String> entry : SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered()).entrySet()) {\n- if (!entry.getKey().startsWith(\"index.\")) {\n- templateSettingsBuilder.put(\"index.\" + entry.getKey(), entry.getValue());\n- } else {\n- templateSettingsBuilder.put(entry.getKey(), entry.getValue());\n- }\n- }\n+ templateSettingsBuilder.put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n builder.settings(templateSettingsBuilder.build());\n } else if (\"mappings\".equals(currentFieldName)) {\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java",
"status": "modified"
},
{
"diff": "@@ -195,13 +195,7 @@ public void validateIndexName(String index, ClusterState state) throws Elasticse\n private void createIndex(final CreateIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener, final Semaphore mdLock) {\n \n ImmutableSettings.Builder updatedSettingsBuilder = ImmutableSettings.settingsBuilder();\n- for (Map.Entry<String, String> entry : request.settings().getAsMap().entrySet()) {\n- if (!entry.getKey().startsWith(\"index.\")) {\n- updatedSettingsBuilder.put(\"index.\" + entry.getKey(), entry.getValue());\n- } else {\n- updatedSettingsBuilder.put(entry.getKey(), entry.getValue());\n- }\n- }\n+ updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n request.settings(updatedSettingsBuilder.build());\n \n clusterService.submitStateUpdateTask(\"create-index [\" + request.index() + \"], cause [\" + request.cause() + \"]\", Priority.URGENT, new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(request, listener) {",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -107,13 +107,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n \n public void putTemplate(final PutRequest request, final PutListener listener) {\n ImmutableSettings.Builder updatedSettingsBuilder = ImmutableSettings.settingsBuilder();\n- for (Map.Entry<String, String> entry : request.settings.getAsMap().entrySet()) {\n- if (!entry.getKey().startsWith(\"index.\")) {\n- updatedSettingsBuilder.put(\"index.\" + entry.getKey(), entry.getValue());\n- } else {\n- updatedSettingsBuilder.put(entry.getKey(), entry.getValue());\n- }\n- }\n+ updatedSettingsBuilder.put(request.settings).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n request.settings(updatedSettingsBuilder.build());\n \n if (request.name == null) {",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java",
"status": "modified"
},
{
"diff": "@@ -168,16 +168,7 @@ public void onFailure(Throwable t) {\n \n public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n ImmutableSettings.Builder updatedSettingsBuilder = ImmutableSettings.settingsBuilder();\n- for (Map.Entry<String, String> entry : request.settings().getAsMap().entrySet()) {\n- if (entry.getKey().equals(\"index\")) {\n- continue;\n- }\n- if (!entry.getKey().startsWith(\"index.\")) {\n- updatedSettingsBuilder.put(\"index.\" + entry.getKey(), entry.getValue());\n- } else {\n- updatedSettingsBuilder.put(entry.getKey(), entry.getValue());\n- }\n- }\n+ updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n // never allow to change the number of shards\n for (String key : updatedSettingsBuilder.internalMap().keySet()) {\n if (key.equals(IndexMetaData.SETTING_NUMBER_OF_SHARDS)) {",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
{
"diff": "@@ -1100,6 +1100,25 @@ public boolean shouldIgnoreMissing(String placeholderName) {\n return this;\n }\n \n+ /**\n+ * Checks that all settings in the builder start with the specified prefix.\n+ *\n+ * If a setting doesn't start with the prefix, the builder appends the prefix to such setting.\n+ */\n+ public Builder normalizePrefix(String prefix) {\n+ Map<String, String> replacements = Maps.newHashMap();\n+ Iterator<Map.Entry<String, String>> iterator = map.entrySet().iterator();\n+ while(iterator.hasNext()) {\n+ Map.Entry<String, String> entry = iterator.next();\n+ if (entry.getKey().startsWith(prefix) == false) {\n+ replacements.put(prefix + entry.getKey(), entry.getValue());\n+ iterator.remove();\n+ }\n+ }\n+ map.putAll(replacements);\n+ return this;\n+ }\n+\n /**\n * Builds a {@link Settings} (underlying uses {@link ImmutableSettings}) based on everything\n * set on this builder.",
"filename": "src/main/java/org/elasticsearch/common/settings/ImmutableSettings.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.settings;\n \n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n@@ -44,7 +45,7 @@ public IndexSettingsService(Index index, Settings settings) {\n \n public synchronized void refreshSettings(Settings settings) {\n // this.settings include also the node settings\n- if (this.settings.getByPrefix(\"index.\").getAsMap().equals(settings.getByPrefix(\"index.\").getAsMap())) {\n+ if (this.settings.getByPrefix(IndexMetaData.INDEX_SETTING_PREFIX).getAsMap().equals(settings.getByPrefix(IndexMetaData.INDEX_SETTING_PREFIX).getAsMap())) {\n // nothing to update, same settings\n return;\n }",
"filename": "src/main/java/org/elasticsearch/index/settings/IndexSettingsService.java",
"status": "modified"
},
{
"diff": "@@ -314,6 +314,7 @@ private IndexMetaData updateIndexSettings(IndexMetaData indexMetaData, Settings\n if (changeSettings.names().isEmpty() && ignoreSettings.length == 0) {\n return indexMetaData;\n }\n+ Settings normalizedChangeSettings = ImmutableSettings.settingsBuilder().put(changeSettings).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build();\n IndexMetaData.Builder builder = IndexMetaData.builder(indexMetaData);\n Map<String, String> settingsMap = newHashMap(indexMetaData.settings().getAsMap());\n List<String> simpleMatchPatterns = newArrayList();\n@@ -340,7 +341,7 @@ private IndexMetaData updateIndexSettings(IndexMetaData indexMetaData, Settings\n }\n }\n }\n- for(Map.Entry<String, String> entry : changeSettings.getAsMap().entrySet()) {\n+ for(Map.Entry<String, String> entry : normalizedChangeSettings.getAsMap().entrySet()) {\n if (UNMODIFIABLE_SETTINGS.contains(entry.getKey())) {\n throw new SnapshotRestoreException(snapshotId, \"cannot modify setting [\" + entry.getKey() + \"] on restore\");\n } else {",
"filename": "src/main/java/org/elasticsearch/snapshots/RestoreService.java",
"status": "modified"
},
{
"diff": "@@ -324,4 +324,44 @@ public void testThatArraysAreOverriddenCorrectly() throws IOException {\n assertThat(settings.get(\"value.data\"), is(\"1\"));\n assertThat(settings.get(\"value\"), is(nullValue()));\n }\n+\n+ @Test\n+ public void testPrefixNormalization() {\n+\n+ Settings settings = settingsBuilder().normalizePrefix(\"foo.\").build();\n+\n+ assertThat(settings.names().size(), equalTo(0));\n+\n+ settings = settingsBuilder()\n+ .put(\"bar\", \"baz\")\n+ .normalizePrefix(\"foo.\")\n+ .build();\n+\n+ assertThat(settings.getAsMap().size(), equalTo(1));\n+ assertThat(settings.get(\"bar\"), nullValue());\n+ assertThat(settings.get(\"foo.bar\"), equalTo(\"baz\"));\n+\n+\n+ settings = settingsBuilder()\n+ .put(\"bar\", \"baz\")\n+ .put(\"foo.test\", \"test\")\n+ .normalizePrefix(\"foo.\")\n+ .build();\n+\n+ assertThat(settings.getAsMap().size(), equalTo(2));\n+ assertThat(settings.get(\"bar\"), nullValue());\n+ assertThat(settings.get(\"foo.bar\"), equalTo(\"baz\"));\n+ assertThat(settings.get(\"foo.test\"), equalTo(\"test\"));\n+\n+ settings = settingsBuilder()\n+ .put(\"foo.test\", \"test\")\n+ .normalizePrefix(\"foo.\")\n+ .build();\n+\n+\n+ assertThat(settings.getAsMap().size(), equalTo(1));\n+ assertThat(settings.get(\"foo.test\"), equalTo(\"test\"));\n+ }\n+\n+\n }",
"filename": "src/test/java/org/elasticsearch/common/settings/ImmutableSettingsTests.java",
"status": "modified"
},
{
"diff": "@@ -1557,7 +1557,7 @@ public void changeSettingsOnRestoreTest() throws Exception {\n cluster().wipeIndices(\"test-idx\");\n \n Settings newIndexSettings = ImmutableSettings.builder()\n- .put(INDEX_REFRESH_INTERVAL, \"5s\")\n+ .put(\"refresh_interval\", \"5s\")\n .put(\"index.analysis.analyzer.my_analyzer.type\", \"standard\")\n .build();\n ",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
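The pr_details diffs in the row above replace several hand-rolled key-rewriting loops with the new `Settings.Builder#normalizePrefix` helper. As a quick illustration of what that normalization does, here is a minimal standalone Java sketch of the same key-rewriting step (the class and method names are made up for the example; this is not the Elasticsearch builder itself):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Minimal sketch of the prefix normalization shown in the diffs above:
// keys that already start with the prefix are kept, all others get it prepended.
public class NormalizePrefixSketch {

    static Map<String, String> normalizePrefix(Map<String, String> settings, String prefix) {
        Map<String, String> replacements = new HashMap<>();
        Iterator<Map.Entry<String, String>> it = settings.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, String> entry = it.next();
            if (entry.getKey().startsWith(prefix) == false) {
                replacements.put(prefix + entry.getKey(), entry.getValue());
                it.remove();
            }
        }
        settings.putAll(replacements);
        return settings;
    }

    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<>();
        settings.put("refresh_interval", "5s");         // gets the "index." prefix
        settings.put("index.number_of_replicas", "1");  // already prefixed, left alone
        System.out.println(normalizePrefix(settings, "index."));
    }
}
```

Running it prints both keys under the `index.` prefix (map iteration order may vary), which matches the behaviour exercised by `testPrefixNormalization` in the diff above.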
{
"body": "Working on #10067, improving our back-compat test indices so that the translog has a delete-by-query on upgrade, I hit this pre-existing back-compat bug where on upgrade of an index <= 1.0.0 Beta2 that has a DBQ in its translog, this exception shows up:\n\n```\n 1> [2015-03-25 08:57:35,714][INFO ][index.gateway ] [node_t3] [test][0] ignoring recovery of a corrupt translog entry\n 1> org.elasticsearch.index.query.QueryParsingException: [test] request does not support [range]\n 1> at org.elasticsearch.index.query.IndexQueryParserService.parseQuery(IndexQueryParserService.java:362)\n 1> at org.elasticsearch.index.shard.IndexShard.prepareDeleteByQuery(IndexShard.java:537)\n 1> at org.elasticsearch.index.shard.IndexShard.performRecoveryOperation(IndexShard.java:864)\n 1> at org.elasticsearch.index.gateway.IndexShardGateway.recover(IndexShardGateway.java:235)\n 1> at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:114)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n```\n\nThis is happening because of #4074 when we required that the top-level \"query\" is present to delete-by-query requests, but prior to that we required that it is not present. So the translog has a DBQ without \"query\" and when we try to parse it we hit this exception.\n\nI have changes to create-bwc-index.py that shows the bug ... but I'm not sure how to cleanly fix it. Somehow on parsing a translog entry from an old enough version of ES we need to insert \"query\" at the top...\n",
"comments": [],
"number": 10262,
"title": "Core: delete-by-query fails to replay from translog < 1.0.0 Beta2"
} | {
"body": "This PR just improves our back compat tests in preparation for #10067. It's a standalone test improvement and I think we should push it first since when I deleted DBQ entirely in #10067, no tests failed when I broke back compat.\n\nI fixed create-bwc-index.py to add a delete-by-query into the translog so that on upgrade the translog must be replayed, and I also fixed OldIndexBackwardsCompatibilityTests to confirm the expected documents are in fact deleted on upgrade.\n\nI confirmed that if I intentionally break the translog replay of DBQ in master, OldIndexBackwardsCompatibilityTests in fact fails (good).\n\nHowever, I hit a pre-existing back-compat bug caused long ago by #4074. I opened #10262 for this but I'm not sure how to fix it... for now I worked around it here by avoiding DBQ in translog for version <= 1.0.0 Beta2.\n\nI also fixed get-bwc-version.py to special case 1.2.0 and pull that from maven instead of download.elasticsearch.org.\n",
"number": 10266,
"review_comments": [],
"title": "Tests: add delete-by-query into translog in OldIndexBackwardsCompatibilityTests"
} | {
"commits": [
{
"message": "Tests: add delete-by-query into translog in OldIndexBackwardsCompatibilityTests"
}
],
"files": [
{
"diff": "@@ -69,6 +69,34 @@ def index_documents(es, index_name, type, num_docs):\n logging.info('Flushing index')\n es.indices.flush(index=index_name)\n \n+def delete_by_query(es, version, index_name, doc_type):\n+\n+ logging.info('Deleting long_sort:[10..20] docs')\n+\n+ query = {'query':\n+ {'range':\n+ {'long_sort':\n+ {'gte': 10,\n+ 'lte': 20}}}}\n+\n+ if version.startswith('0.90.') or version in ('1.0.0.Beta1', '1.0.0.Beta2'):\n+ # TODO #10262: we can't write DBQ into the translog for these old versions until we fix this back-compat bug:\n+\n+ # #4074: these versions don't expect to see the top-level 'query' to count/delete_by_query:\n+ query = query['query']\n+ return\n+\n+ deleted_count = es.count(index=index_name, doc_type=doc_type, body=query)['count']\n+ \n+ result = es.delete_by_query(index=index_name,\n+ doc_type=doc_type,\n+ body=query)\n+\n+ # make sure no shards failed:\n+ assert result['_indices'][index_name]['_shards']['failed'] == 0, 'delete by query failed: %s' % result\n+\n+ logging.info('Deleted %d docs' % deleted_count)\n+\n def run_basic_asserts(es, index_name, type, num_docs):\n count = es.count(index=index_name)['count']\n assert count == num_docs, 'Expected %r but got %r documents' % (num_docs, count)\n@@ -150,7 +178,7 @@ def generate_index(client, version):\n }\n }\n # completion type was added in 0.90.3\n- if not version in ['0.90.0.Beta1', '0.90.0.RC1', '0.90.0.RC2', '0.90.0', '0.90.1', '0.90.2']:\n+ if version not in ['0.90.0.Beta1', '0.90.0.RC1', '0.90.0.RC2', '0.90.0', '0.90.1', '0.90.2']:\n mappings['analyzer_type1']['properties']['completion_with_index_analyzer'] = {\n 'type': 'completion',\n 'index_analyzer': 'standard'\n@@ -312,6 +340,12 @@ def main():\n generate_index(client, cfg.version)\n if cfg.snapshot_supported:\n snapshot_index(client, cfg)\n+\n+ # 10067: get a delete-by-query into the translog on upgrade. We must do\n+ # this after the snapshot, because it calls flush. Otherwise the index\n+ # will already have the deletions applied on upgrade.\n+ delete_by_query(client, cfg.version, 'test', 'doc')\n+ \n finally:\n if 'node' in vars():\n logging.info('Shutting down node with pid %d', node.pid)",
"filename": "dev-tools/create-bwc-index.py",
"status": "modified"
},
{
"diff": "@@ -61,7 +61,11 @@ def main():\n else:\n filename = '%s.tar.gz' % version_dir\n \n- url = 'https://download.elasticsearch.org/elasticsearch/elasticsearch/%s' % filename\n+ if c.version == '1.2.0':\n+ # 1.2.0 was pulled from download.elasticsearch.org because of routing bug:\n+ url = 'http://central.maven.org/maven2/org/elasticsearch/elasticsearch/1.2.0/%s' % filename\n+ else:\n+ url = 'https://download.elasticsearch.org/elasticsearch/elasticsearch/%s' % filename\n print('Downloading %s' % url)\n urllib.request.urlretrieve(url, filename)\n ",
"filename": "dev-tools/get-bwc-version.py",
"status": "modified"
},
{
"diff": "@@ -114,16 +114,17 @@ void assertOldIndexWorks(String index) throws Exception {\n assertBasicSearchWorks();\n assertRealtimeGetWorks();\n assertNewReplicasWork();\n- assertUpgradeWorks(isLatestLuceneVersion(index));\n+ Version version = extractVersion(index);\n+ assertUpgradeWorks(isLatestLuceneVersion(version));\n+ assertDeleteByQueryWorked(version);\n unloadIndex();\n }\n \n Version extractVersion(String index) {\n return Version.fromString(index.substring(index.indexOf('-') + 1, index.lastIndexOf('.')));\n }\n \n- boolean isLatestLuceneVersion(String index) {\n- Version version = extractVersion(index);\n+ boolean isLatestLuceneVersion(Version version) {\n return version.luceneVersion.major == Version.CURRENT.luceneVersion.major &&\n version.luceneVersion.minor == Version.CURRENT.luceneVersion.minor;\n }\n@@ -179,6 +180,16 @@ void assertNewReplicasWork() throws Exception {\n .execute().actionGet());\n waitNoPendingTasksOnAll(); // make sure the replicas are removed before going on\n }\n+\n+ // #10067: create-bwc-index.py deleted any doc with long_sort:[10-20]\n+ void assertDeleteByQueryWorked(Version version) throws Exception {\n+ if (version.onOrBefore(Version.V_1_0_0_Beta2)) {\n+ // TODO: remove this once #10262 is fixed\n+ return;\n+ }\n+ SearchRequestBuilder searchReq = client().prepareSearch(\"test\").setQuery(QueryBuilders.queryStringQuery(\"long_sort:[10 TO 20]\"));\n+ assertEquals(0, searchReq.get().getHits().getTotalHits());\n+ }\n \n void assertUpgradeWorks(boolean alreadyLatest) throws Exception {\n HttpRequestBuilder httpClient = httpClient();",
"filename": "src/test/java/org/elasticsearch/bwcompat/OldIndexBackwardsCompatibilityTests.java",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.Beta1.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.RC1.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.RC2.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.1.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.10.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.11.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.12.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.13.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.2.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.3.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.4.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.5.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.6.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.7.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.8.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-0.90.9.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.Beta1.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.Beta2.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.RC1.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.RC2.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.1.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.2.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.0.3.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.1.0.zip",
"status": "modified"
},
{
"diff": "",
"filename": "src/test/resources/org/elasticsearch/bwcompat/index-1.1.1.zip",
"status": "modified"
}
]
} |
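The back-compat bug in the row above boils down to the shape of the delete-by-query body: versions up to 1.0.0.Beta2 expected the bare query (no top-level "query" key), while #4074 made the top-level "query" wrapper mandatory, so an old-format entry in the translog no longer parses after an upgrade. The short sketch below only spells out the two body shapes (field names taken from the create-bwc-index.py diff above); it is illustrative, not Elasticsearch code:

```java
// Illustrative only: the two delete-by-query body shapes discussed above.
public class DbqBodyShapes {
    public static void main(String[] args) {
        // Post-#4074 format: the range query is wrapped in a top-level "query" object.
        String wrapped = "{\"query\": {\"range\": {\"long_sort\": {\"gte\": 10, \"lte\": 20}}}}";
        // Pre-1.0.0.Beta2 format: the same range query without the wrapper,
        // which is what an old translog may still contain.
        String bare = "{\"range\": {\"long_sort\": {\"gte\": 10, \"lte\": 20}}}";
        System.out.println("new format: " + wrapped);
        System.out.println("old format: " + bare);
    }
}
```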
{
"body": "I am using Elasticsearch 1.4.4.\nI am defining a parent-child index with three levels (grandparents) following the instructions in the document: https://www.elastic.co/guide/en/elasticsearch/guide/current/grandparents.html\nMy structure is Continent -> Country -> Region\n## MAPPING\n\nI create an index with the following mapping:\n\n```\ncurl -XPOST 'localhost:9200/geo' -d'\n{\n \"mappings\": {\n \"continent\": {},\n \"country\": {\n \"_parent\": {\n \"type\": \"continent\" \n }\n },\n \"region\": {\n \"_parent\": {\n \"type\": \"country\" \n }\n}\n}\n}' \n```\n## INDEXING\n\nI index three entities:\n\n```\ncurl -XPOST 'localhost:9200/geo/continent/europe' -d'\n{\n \"name\":\"Europe\"\n}'\n\ncurl -XPOST 'localhost:9200/geo/country/italy?parent=europe' -d'\n{\n \"name\":\"Italy\"\n}'\n\ncurl -XPOST 'localhost:9200/geo/region/lombardy?parent=italy&routing=europe' -d'\n{\n \"name\":\"Lombardia\"\n}'\n```\n## QUERY THAT WORKS\n\nIf I query and aggregate according to the document everything works fine:\n\n```\ncurl -XGET 'localhost:9200/geo/continent/_search?pretty=true' -d '\n{\n\"query\": {\n \"has_child\": {\n \"type\": \"country\",\n \"query\": {\n \"has_child\": {\n \"type\": \"region\",\n \"query\": {\n \"match\": {\n \"name\": \"Lombardia\"\n }\n }\n }\n }\n }\n},\n\"aggs\": {\n \"country\": {\n \"terms\": { \n \"field\": \"name\"\n },\n \"aggs\": {\n \"countries\": {\n \"children\": {\n \"type\": \"country\"\n },\n \"aggs\": {\n \"country_names\" : {\n \"terms\" : {\n \"field\" : \"country.name\"\n }\n } \n }\n }\n }\n }\n}\n}'\n```\n## QUERY THAT DOES NOT WORK\n\nHowever, if I try with multi-level aggregations like in:\n\n```\ncurl -XGET 'localhost:9200/geo/continent/_search?pretty=true' -d '\n{\n\"query\": {\n \"has_child\": {\n \"type\": \"country\",\n \"query\": {\n \"has_child\": {\n \"type\": \"region\",\n \"query\": {\n \"match\": {\n \"name\": \"Lombardia\"\n }\n }\n }\n }\n }\n},\n\"aggs\": {\n \"continent_names\": {\n \"terms\": { \n \"field\": \"name\"\n },\n \"aggs\": {\n \"countries\": {\n \"children\": {\n \"type\": \"country\"\n }, \n \"aggs\": {\n \"regions\": {\n \"children\": {\n \"type\": \"region\"\n }, \n \"aggs\": {\n \"region_names\" : {\n \"terms\" : {\n \"field\" : \"region.name\"\n }\n }\n }\n }\n }\n }\n }\n }\n}\n}'\n```\n\nI get back the following\n{\n\n \"error\" : \"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[b5CbW5byQdSSW-rIwta0rA][geo][0]: QueryPhaseExecutionException[[geo][0]: query[filtered(child_filter[country/continent](filtered%28child_filter[region/country]%28filtered%28name:lombardia%29->cache%28_type:region%29%29%29->cache%28_type:country%29))->cache(_type:continent)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: NullPointerException; }{[b5CbW5byQdSSW-rIwta0rA][geo][1]: QueryPhaseExecutionException[[geo][1]: query[filtered(child_filter[country/continent](filtered%28child_filter[region/country]%28filtered%28name:lombardia%29->cache%28_type:region%29%29%29->cache%28_type:country%29))->cache(_type:continent)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: NullPointerException; }{[b5CbW5byQdSSW-rIwta0rA][geo][2]: QueryPhaseExecutionException[[geo][2]: query[filtered(child_filter[country/continent](filtered%28child_filter[region/country]%28filtered%28name:lombardia%29->cache%28_type:region%29%29%29->cache%28_type:country%29))->cache(_type:continent)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: 
NullPointerException; }{[b5CbW5byQdSSW-rIwta0rA][geo][3]: QueryPhaseExecutionException[[geo][3]: query[filtered(child_filter[country/continent](filtered%28child_filter[region/country]%28filtered%28name:lombardia%29->cache%28_type:region%29%29%29->cache%28_type:country%29))->cache(_type:continent)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: NullPointerException; }{[b5CbW5byQdSSW-rIwta0rA][geo][4]: QueryPhaseExecutionException[[geo][4]: query[filtered(child_filter[country/continent](filtered%28child_filter[region/country]%28filtered%28name:lombardia%29->cache%28_type:region%29%29%29->cache%28_type:country%29))->cache(_type:continent)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: NullPointerException; }]\",\n\n \"status\" : 500\n\n}\n\nThe 'grandparents' document says: \"Querying and aggregating across generations works, as long as you step through each generation.\"\n\nHow do I get the last query and aggregation to work across all three levels of the index?\n\nWhat am I missing?\n\nThank you,\nPaolo\n",
"comments": [
{
"body": "this seems like a bug, @martijnvg can you have a look please?\n",
"created_at": "2015-03-25T10:14:34Z"
},
{
"body": "@paolociccarese @javanna Sorry, I missed this! I opened PR #10263 for this bug.\n",
"created_at": "2015-03-25T14:03:19Z"
}
],
"number": 10158,
"title": "Defining Grandparents aggregations, getting a NullPointer"
} | {
"body": "PR for #10158\n\n1) Multiple nested children aggs result in a NPE. This only occurs in 1.x and not in master, since post collection works there in the correct order.\n2) Fixed a counting bug where the same readers where post collected twice. Possible fixes #9958\n\nNote this PR is against 1.x b/c bug 1 is fixed in master already. This bug fix was part of a refactoring that happened as part of #9544\n",
"number": 10263,
"review_comments": [
{
"body": "I think it should be a LinkedHashSet since we rely on doc ID order for tie-breaking.\n",
"created_at": "2015-03-25T15:23:32Z"
},
{
"body": "good point, I'll change that\n",
"created_at": "2015-03-25T15:45:35Z"
}
],
"title": "Fix 2 bugs in `children` agg"
} | {
"commits": [
{
"message": "inner_hits: Fix nested stored field support.\n\nThis also fixes a NPE when the nested part has been filtered out of the _source, because of _source filtering.\n\nCloses #9766"
},
{
"message": "aggs: Fix 2 bug in `children` agg\n\n1) multiple nested children aggs result in a NPE\n2) fixed a counting bug where the same readers where post collected twice\n\nCloses #10158"
}
],
"files": [
{
"diff": "@@ -82,6 +82,14 @@ public T setVersion(boolean version) {\n return (T) this;\n }\n \n+ /**\n+ * Add a stored field to be loaded and returned with the inner hit.\n+ */\n+ public T field(String name) {\n+ sourceBuilder().field(name);\n+ return (T) this;\n+ }\n+\n /**\n * Sets no fields to be loaded, resulting in only id and type to be returned per field.\n */",
"filename": "src/main/java/org/elasticsearch/index/query/support/BaseInnerHitBuilder.java",
"status": "modified"
},
{
"diff": "@@ -103,6 +103,17 @@ public static void parseCommonInnerHitOptions(XContentParser parser, XContentPar\n case \"fielddata_fields\":\n fieldDataFieldsParseElement.parse(parser, subSearchContext);\n break;\n+ case \"fields\":\n+ boolean added = false;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n+ String name = parser.text();\n+ added = true;\n+ subSearchContext.fieldNames().add(name);\n+ }\n+ if (!added) {\n+ subSearchContext.emptyFieldNames();\n+ }\n+ break;\n default:\n throw new ElasticsearchIllegalArgumentException(\"Unknown key for a \" + token + \" for nested query: [\" + fieldName + \"].\");\n }\n@@ -124,6 +135,9 @@ public static void parseCommonInnerHitOptions(XContentParser parser, XContentPar\n case \"explain\":\n subSearchContext.explain(parser.booleanValue());\n break;\n+ case \"fields\":\n+ subSearchContext.fieldNames().add(parser.text());\n+ break;\n default:\n throw new ElasticsearchIllegalArgumentException(\"Unknown key for a \" + token + \" for nested query: [\" + fieldName + \"].\");\n }",
"filename": "src/main/java/org/elasticsearch/index/query/support/InnerHitsQueryParserHelper.java",
"status": "modified"
},
{
"diff": "@@ -331,8 +331,8 @@ public BucketAggregationMode bucketAggregationMode() {\n * Called after collection of all document is done.\n */\n public final void postCollection() throws IOException {\n- collectableSubAggregators.postCollection();\n doPostCollection();\n+ collectableSubAggregators.postCollection();\n }\n \n /** Called upon release of the aggregator. */",
"filename": "src/main/java/org/elasticsearch/search/aggregations/Aggregator.java",
"status": "modified"
},
{
"diff": "@@ -43,9 +43,9 @@\n import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n import java.util.Arrays;\n-import java.util.List;\n+import java.util.LinkedHashSet;\n+import java.util.Set;\n \n // The RecordingPerReaderBucketCollector assumes per segment recording which isn't the case for this\n // aggregation, for this reason that collector can't be used\n@@ -66,7 +66,8 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator implement\n private final LongObjectPagedHashMap<long[]> parentOrdToOtherBuckets;\n private boolean multipleBucketsPerParentOrd = false;\n \n- private List<AtomicReaderContext> replay = new ArrayList<>();\n+ // This needs to be a Set to avoid duplicate reader context entries (#setNextReader(...) can get invoked multiple times with the same reader context)\n+ private Set<AtomicReaderContext> replay = new LinkedHashSet<>();\n private SortedDocValues globalOrdinals;\n private Bits parentDocs;\n \n@@ -143,7 +144,7 @@ public void setNextReader(AtomicReaderContext reader) {\n \n @Override\n protected void doPostCollection() throws IOException {\n- List<AtomicReaderContext> replay = this.replay;\n+ Set<AtomicReaderContext> replay = this.replay;\n this.replay = null;\n \n for (AtomicReaderContext atomicReaderContext : replay) {\n@@ -180,10 +181,6 @@ protected void doPostCollection() throws IOException {\n }\n }\n }\n- // Need to invoke post collection on all aggs that the children agg is wrapping,\n- // otherwise any post work that is required, because we started to collect buckets\n- // in the method will not be performed.\n- collectableSubAggregators.postCollection();\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java",
"status": "modified"
},
{
"diff": "@@ -294,7 +294,10 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste\n SearchHit.NestedIdentity nested = nestedIdentity;\n do {\n Object extractedValue = XContentMapValues.extractValue(nested.getField().string(), sourceAsMap);\n- if (extractedValue instanceof List) {\n+ if (extractedValue == null) {\n+ // The nested objects may not exist in the _source, because it was filtered because of _source filtering\n+ break;\n+ } else if (extractedValue instanceof List) {\n // nested field has an array value in the _source\n nestedParsedSource = (List<Map<String, Object>>) extractedValue;\n } else if (extractedValue instanceof Map) {",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,8 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.update.UpdateResponse;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.aggregations.bucket.children.Children;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n@@ -329,6 +331,53 @@ public void testPostCollection() throws Exception {\n assertThat(termsAgg.getBucketByKey(\"44\").getDocCount(), equalTo(1l));\n }\n \n+ @Test\n+ public void testHierarchicalChildrenAggs() {\n+ String indexName = \"geo\";\n+ String grandParentType = \"continent\";\n+ String parentType = \"country\";\n+ String childType = \"city\";\n+ assertAcked(\n+ prepareCreate(indexName)\n+ .setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ )\n+ .addMapping(grandParentType)\n+ .addMapping(parentType, \"_parent\", \"type=\" + grandParentType)\n+ .addMapping(childType, \"_parent\", \"type=\" + parentType)\n+ );\n+\n+ client().prepareIndex(indexName, grandParentType, \"1\").setSource(\"name\", \"europe\").get();\n+ client().prepareIndex(indexName, parentType, \"2\").setParent(\"1\").setSource(\"name\", \"belgium\").get();\n+ client().prepareIndex(indexName, childType, \"3\").setParent(\"2\").setRouting(\"1\").setSource(\"name\", \"brussels\").get();\n+ refresh();\n+\n+ SearchResponse response = client().prepareSearch(indexName)\n+ .setQuery(matchQuery(\"name\", \"europe\"))\n+ .addAggregation(\n+ children(parentType).childType(parentType).subAggregation(\n+ children(childType).childType(childType).subAggregation(\n+ terms(\"name\").field(\"name\")\n+ )\n+ )\n+ )\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n+ Children children = response.getAggregations().get(parentType);\n+ assertThat(children.getName(), equalTo(parentType));\n+ assertThat(children.getDocCount(), equalTo(1l));\n+ children = children.getAggregations().get(childType);\n+ assertThat(children.getName(), equalTo(childType));\n+ assertThat(children.getDocCount(), equalTo(1l));\n+ Terms terms = children.getAggregations().get(\"name\");\n+ assertThat(terms.getBuckets().size(), equalTo(1));\n+ assertThat(terms.getBuckets().get(0).getKey().toString(), equalTo(\"brussels\"));\n+ assertThat(terms.getBuckets().get(0).getDocCount(), equalTo(1l));\n+ }\n+\n private static final class Control {\n \n final String category;",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/ChildrenTests.java",
"status": "modified"
},
{
"diff": "@@ -687,4 +687,156 @@ public void testNestedDefinedAsObject() throws Exception {\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n }\n \n+ @Test\n+ public void testNestedInnerHitsWithStoredFieldsAndNoSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().field(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsWithHighlightOnStoredField() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().addHighlightedField(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), 
equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).highlightFields().get(\"comments.message\").getFragments()[0]), equalTo(\"<em>fox</em> eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsWithExcludeSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"excludes\", new String[]{\"comments\"}).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().field(\"comments.message\").setFetchSource(true)))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsHiglightWithExcludeSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"excludes\", new String[]{\"comments\"}).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests 
= new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().addHighlightedField(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).highlightFields().get(\"comments.message\").getFragments()[0]), equalTo(\"<em>fox</em> eat quick\"));\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
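The counting fix in the pr_details above hinges on switching the aggregator's `replay` collection from an `ArrayList` to a `LinkedHashSet`, because `setNextReader(...)` can be invoked more than once with the same reader context and the list would then be replayed with duplicates. A minimal, self-contained Java sketch of that effect (deliberately not Elasticsearch code; plain strings stand in for reader contexts):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ReplayDedupSketch {
    public static void main(String[] args) {
        String segment = "reader-context-A";

        List<String> replayList = new ArrayList<>();
        replayList.add(segment);
        replayList.add(segment); // duplicate kept -> segment replayed (and counted) twice

        Set<String> replaySet = new LinkedHashSet<>();
        replaySet.add(segment);
        replaySet.add(segment); // duplicate dropped -> segment replayed once, insertion order kept

        System.out.println("list entries: " + replayList.size()); // 2
        System.out.println("set entries:  " + replaySet.size());  // 1
    }
}
```

A LinkedHashSet (rather than a plain HashSet) is used in the diff because, as the review comment notes, doc ID order is relied on for tie-breaking, so insertion order must be preserved.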
{
"body": "If you have two sibling aggregations that make use of the 'children' aggregation, then the second aggregation seems to count each document twice.\n\nHere is a cut-down example\n\nI have a mapping that has parent documents of type 'datastreams' with child documents of type 'readings'\n\nI ran this, which has two identical sibling aggregations:\n\n```\ncurl -XGET 'https://blahblahblah/index/datastreams/_search?search_type=count&pretty' -d '{\n\"aggs\" : {\n \"one\" : {\n \"children\": {\n \"type\": \"readings\"\n },\n \"aggs\" :{\n \"grand-total\" : { \"sum\" : { \"field\" : \"readings.dv\" } }\n }\n },\n \"two\" : {\n \"children\": {\n \"type\": \"readings\"\n },\n \"aggs\" :{\n \"grand-total\" : { \"sum\" : { \"field\" : \"readings.dv\" } }\n }\n } \n } \n}' \n```\n\nand got the response\n\n```\n{\n \"took\" : 23620,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1389,\n \"max_score\" : 0.0,\n \"hits\" : [ ]\n },\n \"aggregations\" : {\n \"two\" : {\n \"doc_count\" : 729353222,\n \"grand-total\" : {\n \"value\" : 2.389726905175203E10\n }\n },\n \"one\" : {\n \"doc_count\" : 364676611,\n \"grand-total\" : {\n \"value\" : 1.1948634525852589E10\n }\n }\n }\n}\n```\n\nAlthough aggregation 'one' and 'two' are identical Elasticsearch seems to have double counted the documents for aggregation 'two'\n",
"comments": [
{
"body": "Hi @perryn on what version of ES are experiencing this interference issue? Also are you able to reproduce on a smaller scale?\n",
"created_at": "2015-03-03T08:57:03Z"
},
{
"body": "Hi @martijnvg \n\nThe above was seen on 1.4.4 but I was also able to reproduce the issue on a smaller scale on 1.4.2.\n\nI ran the same query as above and got the following result\n\n```\n{\n \"took\" : 4,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 600,\n \"max_score\" : 0.0,\n \"hits\" : [ ]\n },\n \"aggregations\" : {\n \"one\" : {\n \"doc_count\" : 12,\n \"grand-total\" : {\n \"value\" : 259.251\n }\n },\n \"two\" : {\n \"doc_count\" : 24,\n \"grand-total\" : {\n \"value\" : 518.502\n }\n }\n }\n}\n```\n\ncheers\nPerryn\n",
"created_at": "2015-03-03T23:23:22Z"
},
{
"body": "@perryn I think that the issue you're reporting is fixed by #10263. Would be great if you can verify this!\n",
"created_at": "2015-03-25T14:02:26Z"
},
{
"body": "Can reproduce in 1.5. That's actually how I came here, by trying to find a solution.\n\nIf I specify children aggregation first and then would do different sub aggregations under it - they seem to work as expected. So the example below returns two identical histograms as requested for this example.\n{\n \"aggregations\": {\n \"rating_histogram_total_1\": {\n \"children\": {\n \"type\": \"myIndex\"\n },\n \"aggregations\": {\n \"rating_histogram_total_1\": {\n \"histogram\": {\n \"field\": \"myIndex.reviewsRatingAverage\",\n \"interval\": 1,\n \"min_doc_count\": 0,\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 5\n }\n }\n },\n \"rating_histogram_total_2\": {\n \"histogram\": {\n \"field\": \"myIndex.reviewsRatingAverage\",\n \"interval\": 1,\n \"min_doc_count\": 0,\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 5\n }\n }\n }\n }\n }\n }\n}\n\nThe following request, however, doesn't work. If I put children aggregation under different filters and then try to use the same histogram, the second histogram returns values n^2, the third one - n^3 and so on. I didn't debug what would happen if filters above \"children would have different criteria s\".\n\nExample: query:\n\n{\n \"aggregations\": {\n \"rating_histogram_total_1\": {\n \"children\": {\n \"type\": \"myIndex\"\n },\n \"aggregations\": {\n \"rating_histogram_total_1\": {\n \"histogram\": {\n \"field\": \"myIndex.reviewsRatingAverage\",\n \"interval\": 1,\n \"min_doc_count\": 0,\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 5\n }\n }\n }\n }\n },\n \"rating_histogram_total_2\": {\n \"children\": {\n \"type\": \"myIndex\"\n },\n \"aggregations\": {\n \"rating_histogram_total_2\": {\n \"histogram\": {\n \"field\": \"myIndex.reviewsRatingAverage\",\n \"interval\": 1,\n \"min_doc_count\": 0,\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 5\n }\n }\n }\n }\n },\n \"rating_histogram_total_3\": {\n \"children\": {\n \"type\": \"myIndex\"\n },\n \"aggregations\": {\n \"rating_histogram_total_3\": {\n \"histogram\": {\n \"field\": \"myIndex.reviewsRatingAverage\",\n \"interval\": 1,\n \"min_doc_count\": 0,\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 5\n }\n }\n }\n }\n },\n \"rating_histogram_total_4\": {\n \"children\": {\n \"type\": \"myIndex\"\n },\n \"aggregations\": {\n \"rating_histogram_total_4\": {\n \"histogram\": {\n \"field\": \"myIndex.reviewsRatingAverage\",\n \"interval\": 1,\n \"min_doc_count\": 0,\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 5\n }\n }\n }\n }\n }\n }\n}\n\nAnd the results are:\n\n{\n \"aggregations\": {\n \"rating_histogram_total_4\": {\n \"doc_count\": 32,\n \"rating_histogram_total_4\": {\n \"buckets\": [\n {\n \"key\": 0,\n \"doc_count\": 24\n },\n {\n \"key\": 1,\n \"doc_count\": 0\n },\n {\n \"key\": 2,\n \"doc_count\": 0\n },\n {\n \"key\": 3,\n \"doc_count\": 8\n },\n {\n \"key\": 4,\n \"doc_count\": 0\n },\n {\n \"key\": 5,\n \"doc_count\": 0\n }\n ]\n }\n },\n \"rating_histogram_total_1\": {\n \"doc_count\": 4,\n \"rating_histogram_total_1\": {\n \"buckets\": [\n {\n \"key\": 0,\n \"doc_count\": 3\n },\n {\n \"key\": 1,\n \"doc_count\": 0\n },\n {\n \"key\": 2,\n \"doc_count\": 0\n },\n {\n \"key\": 3,\n \"doc_count\": 1\n },\n {\n \"key\": 4,\n \"doc_count\": 0\n },\n {\n \"key\": 5,\n \"doc_count\": 0\n }\n ]\n }\n },\n \"rating_histogram_total_3\": {\n \"doc_count\": 16,\n \"rating_histogram_total_3\": {\n \"buckets\": [\n {\n \"key\": 0,\n \"doc_count\": 12\n },\n {\n \"key\": 1,\n \"doc_count\": 0\n },\n {\n \"key\": 2,\n \"doc_count\": 0\n },\n {\n \"key\": 3,\n 
\"doc_count\": 4\n },\n {\n \"key\": 4,\n \"doc_count\": 0\n },\n {\n \"key\": 5,\n \"doc_count\": 0\n }\n ]\n }\n },\n \"rating_histogram_total_2\": {\n \"doc_count\": 8,\n \"rating_histogram_total_2\": {\n \"buckets\": [\n {\n \"key\": 0,\n \"doc_count\": 6\n },\n {\n \"key\": 1,\n \"doc_count\": 0\n },\n {\n \"key\": 2,\n \"doc_count\": 0\n },\n {\n \"key\": 3,\n \"doc_count\": 2\n },\n {\n \"key\": 4,\n \"doc_count\": 0\n },\n {\n \"key\": 5,\n \"doc_count\": 0\n }\n ]\n }\n }\n }\n}\n\nI hope it will help.\nThank you.\n",
"created_at": "2015-03-29T23:57:38Z"
},
{
"body": "@AlexKovalevich Thanks for sharing. It think this was fixed via #10263 (since it looks similar to the issue you describe). This will be included in 1.5.1. I can't be sure if this really fixed your issue, because there is no reproduction. If you like you can build from the 1.5 branch and see if the issue still occurs. This would help a lot.\n",
"created_at": "2015-04-08T14:12:12Z"
},
{
"body": "@AlexKovalevich Now that 1.5.1 has been released can you check if the issue still occurs in your environment?\n",
"created_at": "2015-04-09T18:59:46Z"
},
{
"body": "@martijnvg I was running into the same issue and I can verify that 1.5.1 fixed it for me.\n",
"created_at": "2015-04-10T21:34:26Z"
},
{
"body": "awesome. @fozzylyon thanks for letting us know. closing\n",
"created_at": "2015-04-13T10:38:28Z"
},
{
"body": "Fixed for me too in 1.5.1 !\n(Sorry for delay, couldn't test earlier)\n",
"created_at": "2015-04-23T18:23:37Z"
}
],
"number": 9958,
"title": "multiple 'children' aggregations interfere with each other"
} | {
"body": "PR for #10158\n\n1) Multiple nested children aggs result in a NPE. This only occurs in 1.x and not in master, since post collection works there in the correct order.\n2) Fixed a counting bug where the same readers where post collected twice. Possible fixes #9958\n\nNote this PR is against 1.x b/c bug 1 is fixed in master already. This bug fix was part of a refactoring that happened as part of #9544\n",
"number": 10263,
"review_comments": [
{
"body": "I think it should be a LinkedHashSet since we rely on doc ID order for tie-breaking.\n",
"created_at": "2015-03-25T15:23:32Z"
},
{
"body": "good point, I'll change that\n",
"created_at": "2015-03-25T15:45:35Z"
}
],
"title": "Fix 2 bugs in `children` agg"
} | {
"commits": [
{
"message": "inner_hits: Fix nested stored field support.\n\nThis also fixes a NPE when the nested part has been filtered out of the _source, because of _source filtering.\n\nCloses #9766"
},
{
"message": "aggs: Fix 2 bug in `children` agg\n\n1) multiple nested children aggs result in a NPE\n2) fixed a counting bug where the same readers where post collected twice\n\nCloses #10158"
}
],
"files": [
{
"diff": "@@ -82,6 +82,14 @@ public T setVersion(boolean version) {\n return (T) this;\n }\n \n+ /**\n+ * Add a stored field to be loaded and returned with the inner hit.\n+ */\n+ public T field(String name) {\n+ sourceBuilder().field(name);\n+ return (T) this;\n+ }\n+\n /**\n * Sets no fields to be loaded, resulting in only id and type to be returned per field.\n */",
"filename": "src/main/java/org/elasticsearch/index/query/support/BaseInnerHitBuilder.java",
"status": "modified"
},
{
"diff": "@@ -103,6 +103,17 @@ public static void parseCommonInnerHitOptions(XContentParser parser, XContentPar\n case \"fielddata_fields\":\n fieldDataFieldsParseElement.parse(parser, subSearchContext);\n break;\n+ case \"fields\":\n+ boolean added = false;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n+ String name = parser.text();\n+ added = true;\n+ subSearchContext.fieldNames().add(name);\n+ }\n+ if (!added) {\n+ subSearchContext.emptyFieldNames();\n+ }\n+ break;\n default:\n throw new ElasticsearchIllegalArgumentException(\"Unknown key for a \" + token + \" for nested query: [\" + fieldName + \"].\");\n }\n@@ -124,6 +135,9 @@ public static void parseCommonInnerHitOptions(XContentParser parser, XContentPar\n case \"explain\":\n subSearchContext.explain(parser.booleanValue());\n break;\n+ case \"fields\":\n+ subSearchContext.fieldNames().add(parser.text());\n+ break;\n default:\n throw new ElasticsearchIllegalArgumentException(\"Unknown key for a \" + token + \" for nested query: [\" + fieldName + \"].\");\n }",
"filename": "src/main/java/org/elasticsearch/index/query/support/InnerHitsQueryParserHelper.java",
"status": "modified"
},
{
"diff": "@@ -331,8 +331,8 @@ public BucketAggregationMode bucketAggregationMode() {\n * Called after collection of all document is done.\n */\n public final void postCollection() throws IOException {\n- collectableSubAggregators.postCollection();\n doPostCollection();\n+ collectableSubAggregators.postCollection();\n }\n \n /** Called upon release of the aggregator. */",
"filename": "src/main/java/org/elasticsearch/search/aggregations/Aggregator.java",
"status": "modified"
},
{
"diff": "@@ -43,9 +43,9 @@\n import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n import java.util.Arrays;\n-import java.util.List;\n+import java.util.LinkedHashSet;\n+import java.util.Set;\n \n // The RecordingPerReaderBucketCollector assumes per segment recording which isn't the case for this\n // aggregation, for this reason that collector can't be used\n@@ -66,7 +66,8 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator implement\n private final LongObjectPagedHashMap<long[]> parentOrdToOtherBuckets;\n private boolean multipleBucketsPerParentOrd = false;\n \n- private List<AtomicReaderContext> replay = new ArrayList<>();\n+ // This needs to be a Set to avoid duplicate reader context entries (#setNextReader(...) can get invoked multiple times with the same reader context)\n+ private Set<AtomicReaderContext> replay = new LinkedHashSet<>();\n private SortedDocValues globalOrdinals;\n private Bits parentDocs;\n \n@@ -143,7 +144,7 @@ public void setNextReader(AtomicReaderContext reader) {\n \n @Override\n protected void doPostCollection() throws IOException {\n- List<AtomicReaderContext> replay = this.replay;\n+ Set<AtomicReaderContext> replay = this.replay;\n this.replay = null;\n \n for (AtomicReaderContext atomicReaderContext : replay) {\n@@ -180,10 +181,6 @@ protected void doPostCollection() throws IOException {\n }\n }\n }\n- // Need to invoke post collection on all aggs that the children agg is wrapping,\n- // otherwise any post work that is required, because we started to collect buckets\n- // in the method will not be performed.\n- collectableSubAggregators.postCollection();\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java",
"status": "modified"
},
{
"diff": "@@ -294,7 +294,10 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste\n SearchHit.NestedIdentity nested = nestedIdentity;\n do {\n Object extractedValue = XContentMapValues.extractValue(nested.getField().string(), sourceAsMap);\n- if (extractedValue instanceof List) {\n+ if (extractedValue == null) {\n+ // The nested objects may not exist in the _source, because it was filtered because of _source filtering\n+ break;\n+ } else if (extractedValue instanceof List) {\n // nested field has an array value in the _source\n nestedParsedSource = (List<Map<String, Object>>) extractedValue;\n } else if (extractedValue instanceof Map) {",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,8 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.update.UpdateResponse;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.aggregations.bucket.children.Children;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n@@ -329,6 +331,53 @@ public void testPostCollection() throws Exception {\n assertThat(termsAgg.getBucketByKey(\"44\").getDocCount(), equalTo(1l));\n }\n \n+ @Test\n+ public void testHierarchicalChildrenAggs() {\n+ String indexName = \"geo\";\n+ String grandParentType = \"continent\";\n+ String parentType = \"country\";\n+ String childType = \"city\";\n+ assertAcked(\n+ prepareCreate(indexName)\n+ .setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ )\n+ .addMapping(grandParentType)\n+ .addMapping(parentType, \"_parent\", \"type=\" + grandParentType)\n+ .addMapping(childType, \"_parent\", \"type=\" + parentType)\n+ );\n+\n+ client().prepareIndex(indexName, grandParentType, \"1\").setSource(\"name\", \"europe\").get();\n+ client().prepareIndex(indexName, parentType, \"2\").setParent(\"1\").setSource(\"name\", \"belgium\").get();\n+ client().prepareIndex(indexName, childType, \"3\").setParent(\"2\").setRouting(\"1\").setSource(\"name\", \"brussels\").get();\n+ refresh();\n+\n+ SearchResponse response = client().prepareSearch(indexName)\n+ .setQuery(matchQuery(\"name\", \"europe\"))\n+ .addAggregation(\n+ children(parentType).childType(parentType).subAggregation(\n+ children(childType).childType(childType).subAggregation(\n+ terms(\"name\").field(\"name\")\n+ )\n+ )\n+ )\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n+ Children children = response.getAggregations().get(parentType);\n+ assertThat(children.getName(), equalTo(parentType));\n+ assertThat(children.getDocCount(), equalTo(1l));\n+ children = children.getAggregations().get(childType);\n+ assertThat(children.getName(), equalTo(childType));\n+ assertThat(children.getDocCount(), equalTo(1l));\n+ Terms terms = children.getAggregations().get(\"name\");\n+ assertThat(terms.getBuckets().size(), equalTo(1));\n+ assertThat(terms.getBuckets().get(0).getKey().toString(), equalTo(\"brussels\"));\n+ assertThat(terms.getBuckets().get(0).getDocCount(), equalTo(1l));\n+ }\n+\n private static final class Control {\n \n final String category;",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/ChildrenTests.java",
"status": "modified"
},
{
"diff": "@@ -687,4 +687,156 @@ public void testNestedDefinedAsObject() throws Exception {\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n }\n \n+ @Test\n+ public void testNestedInnerHitsWithStoredFieldsAndNoSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().field(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsWithHighlightOnStoredField() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().addHighlightedField(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), 
equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).highlightFields().get(\"comments.message\").getFragments()[0]), equalTo(\"<em>fox</em> eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsWithExcludeSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"excludes\", new String[]{\"comments\"}).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().field(\"comments.message\").setFetchSource(true)))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsHiglightWithExcludeSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"excludes\", new String[]{\"comments\"}).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests 
= new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().addHighlightedField(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).highlightFields().get(\"comments.message\").getFragments()[0]), equalTo(\"<em>fox</em> eat quick\"));\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
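The test changes in the diff above exercise retrieving a stored field from a nested inner hit when `_source` is disabled or excluded. As a side note, that end-to-end pattern can be condensed into a small sketch built only from the calls visible in the diff; the class name, the `Client` parameter, and the import paths are assumptions for illustration, not part of the patch.

```java
// Sketch only: condensed from InnerHitsTests.java above, not part of the patch.
// Assumes an org.elasticsearch.client.Client wired to a running 1.x cluster;
// the import path for QueryInnerHitBuilder is inferred from the diff's file layout.
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.elasticsearch.index.query.QueryBuilders.matchQuery;
import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.support.QueryInnerHitBuilder;

public class StoredNestedFieldSketch {

    public static SearchResponse storedFieldFromInnerHit(Client client) throws Exception {
        // Mapping: _source disabled, nested "comments" object with a stored "message" field.
        client.admin().indices().prepareCreate("articles")
                .addMapping("article", jsonBuilder().startObject()
                        .startObject("_source").field("enabled", false).endObject()
                        .startObject("properties")
                            .startObject("comments")
                                .field("type", "nested")
                                .startObject("properties")
                                    .startObject("message").field("type", "string").field("store", "yes").endObject()
                                .endObject()
                            .endObject()
                        .endObject()
                        .endObject())
                .get();

        // Index one article with a single nested comment, refreshing so it is searchable.
        client.prepareIndex("articles", "article", "1")
                .setSource(jsonBuilder().startObject()
                        .field("title", "quick brown fox")
                        .startObject("comments").field("message", "fox eat quick").endObject()
                        .endObject())
                .setRefresh(true)
                .get();

        // With _source gone, ask inner_hits to load the stored field instead.
        return client.prepareSearch("articles")
                .setQuery(nestedQuery("comments", matchQuery("comments.message", "fox"))
                        .innerHit(new QueryInnerHitBuilder().field("comments.message")))
                .get();
    }
}
```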
{
"body": "I've been trying out the new 'inner_hits' stuff (see #8153, I built a 1.5 snapshot from commit 23ef4e8 on the 1.x branch).\n\nHowever, I don't seem to be able to request stored fields from my nested documents. It would be great to be able to specify a 'fields' element as part of the nested hit.\n\nNote that Martijn's comment in https://github.com/elasticsearch/elasticsearch/pull/8153#issuecomment-61798326 suggested it is possible, but I couldn't work out how... \n\nAs a side note, that comment was suggested in relation to disabling the source. In my case I was trying to disable the source for just the nested children, by adding the following to my mapping:\n\n``` json\n\"_source\" : {\n \"excludes\" : [\"nested_field\", \"nested_field.*\"]\n}\n```\n\nwhich resulted in a null pointer exception...\n\n```\njava.lang.NullPointerException\n at org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:295)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:178)\n at org.elasticsearch.search.fetch.innerhits.InnerHitsFetchSubPhase.hitExecute(InnerHitsFetchSubPhase.java:96)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:190)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:501)\n at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:452)\n at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:449)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\n```\n",
"comments": [
{
"body": "The NPE sounds like a bug, because inner hits is expecting the source to be there. You should be able to use the `fields` parameter to retrieve stored fields though (at least according to the docs: http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.x/search-request-inner-hits.html#nested-inner-hits )\n",
"created_at": "2015-02-27T21:38:56Z"
},
{
"body": "@clintongormley yeah, I noticed the docs say you can use fields, but it doesn't tell you where to put the `fields` element within the `inner_hits` element - I've tried a whole bunch of different places but can't make any of them do anything other than give me a parse error. Looking at `InnerHitsParseElement` and `InnerHitsQueryParserHelper` etc I can't see it looking for fields so I think probably the docs are wrong in this case.\n",
"created_at": "2015-03-05T10:20:16Z"
},
{
"body": "@tstibbs I referred to disabling the entire source and I think in your case you only leave out the nested part of your source. I see how this can result into the NPE you're describing.\n",
"created_at": "2015-03-24T07:50:59Z"
},
{
"body": "@martijnvg yep, I noticed that, just thought I'd report the NPE as it's presumably a bug (though a pretty minor one).\n\nMy original point about not being able to retrieve stored fields (e.g. if you disable the entire source) is still valid though I think.\n",
"created_at": "2015-03-24T07:57:38Z"
},
{
"body": "@tstibbs I opened #10235 in order to fix the stored field support in inner hits (it was supported, but forgot to add the parsing support for stored fields) and address the NPE you have found.\n",
"created_at": "2015-03-24T10:08:56Z"
}
],
"number": 9766,
"title": "Can't retrieve stored fields as part of inner hits"
} | {
"body": "This also fixes a NPE when the nested part has been filtered out of the _source, because of _source filtering.\n\nCloses #9766\n",
"number": 10235,
"review_comments": [],
"title": "Fix nested stored field support."
} | {
"commits": [
{
"message": "inner_hits: Fix nested stored field support.\n\nThis also fixes a NPE when the nested part has been filtered out of the _source, because of _source filtering.\n\nCloses #9766"
}
],
"files": [
{
"diff": "@@ -82,6 +82,14 @@ public T setVersion(boolean version) {\n return (T) this;\n }\n \n+ /**\n+ * Add a stored field to be loaded and returned with the inner hit.\n+ */\n+ public T field(String name) {\n+ sourceBuilder().field(name);\n+ return (T) this;\n+ }\n+\n /**\n * Sets no fields to be loaded, resulting in only id and type to be returned per field.\n */",
"filename": "src/main/java/org/elasticsearch/index/query/support/BaseInnerHitBuilder.java",
"status": "modified"
},
{
"diff": "@@ -103,6 +103,17 @@ public static void parseCommonInnerHitOptions(XContentParser parser, XContentPar\n case \"fielddata_fields\":\n fieldDataFieldsParseElement.parse(parser, subSearchContext);\n break;\n+ case \"fields\":\n+ boolean added = false;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n+ String name = parser.text();\n+ added = true;\n+ subSearchContext.fieldNames().add(name);\n+ }\n+ if (!added) {\n+ subSearchContext.emptyFieldNames();\n+ }\n+ break;\n default:\n throw new ElasticsearchIllegalArgumentException(\"Unknown key for a \" + token + \" for nested query: [\" + fieldName + \"].\");\n }\n@@ -124,6 +135,9 @@ public static void parseCommonInnerHitOptions(XContentParser parser, XContentPar\n case \"explain\":\n subSearchContext.explain(parser.booleanValue());\n break;\n+ case \"fields\":\n+ subSearchContext.fieldNames().add(parser.text());\n+ break;\n default:\n throw new ElasticsearchIllegalArgumentException(\"Unknown key for a \" + token + \" for nested query: [\" + fieldName + \"].\");\n }",
"filename": "src/main/java/org/elasticsearch/index/query/support/InnerHitsQueryParserHelper.java",
"status": "modified"
},
{
"diff": "@@ -297,7 +297,10 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste\n SearchHit.NestedIdentity nested = nestedIdentity;\n do {\n Object extractedValue = XContentMapValues.extractValue(nested.getField().string(), sourceAsMap);\n- if (extractedValue instanceof List) {\n+ if (extractedValue == null) {\n+ // The nested objects may not exist in the _source, because it was filtered because of _source filtering\n+ break;\n+ } else if (extractedValue instanceof List) {\n // nested field has an array value in the _source\n nestedParsedSource = (List<Map<String, Object>>) extractedValue;\n } else if (extractedValue instanceof Map) {",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -687,4 +687,156 @@ public void testNestedDefinedAsObject() throws Exception {\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n }\n \n+ @Test\n+ public void testNestedInnerHitsWithStoredFieldsAndNoSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().field(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsWithHighlightOnStoredField() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().addHighlightedField(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), 
equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).highlightFields().get(\"comments.message\").getFragments()[0]), equalTo(\"<em>fox</em> eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsWithExcludeSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"excludes\", new String[]{\"comments\"}).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().field(\"comments.message\").setFetchSource(true)))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ }\n+\n+ @Test\n+ public void testNestedInnerHitsHiglightWithExcludeSource() throws Exception {\n+ assertAcked(prepareCreate(\"articles\")\n+ .addMapping(\"article\", jsonBuilder().startObject()\n+ .startObject(\"_source\").field(\"excludes\", new String[]{\"comments\"}).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"message\").field(\"type\", \"string\").field(\"store\", \"yes\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ )\n+ );\n+\n+ List<IndexRequestBuilder> requests 
= new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder().addHighlightedField(\"comments.message\")))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).highlightFields().get(\"comments.message\").getFragments()[0]), equalTo(\"<em>fox</em> eat quick\"));\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
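Besides the new `fields` parsing, the `FetchPhase` change in the diff above guards against the nested part being absent from `_source` when `_source` filtering excluded it. A minimal, standalone sketch of that guard follows; the class name and map contents are made up, and only the `XContentMapValues.extractValue` call and the branch structure are taken from the patch.

```java
// Standalone sketch of the null guard added to FetchPhase in the diff above.
// The map contents are invented; only extractValue(...) and the branches mirror the patch.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.elasticsearch.common.xcontent.support.XContentMapValues;

public class NestedSourceGuardSketch {

    public static void main(String[] args) {
        Map<String, Object> sourceAsMap = new HashMap<>();
        sourceAsMap.put("title", "quick brown fox");
        // "comments" is missing here because _source filtering excluded it.

        Object extractedValue = XContentMapValues.extractValue("comments", sourceAsMap);
        if (extractedValue == null) {
            // The nested objects may not exist in the _source because of _source filtering.
            System.out.println("nested part filtered out of _source; skip the nested source lookup");
        } else if (extractedValue instanceof List) {
            System.out.println("nested field has an array value in the _source");
        } else if (extractedValue instanceof Map) {
            System.out.println("nested field has an object value in the _source");
        }
    }
}
```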
{
"body": "When deleting a shard the node that deletes the shard first checks if all shard copies are\nstarted on other nodes. A message is sent to each node and each node checks locally for\nSTARTED or RELOCATED.\nHowever, it might happen that the shard is still in state POST_RECOVERY, like this:\n\nshard is relocating from node1 to node2\n1. relocated shard on node2 goes in POST_RECOVERY and node2 sends shard started to master\n2. master updates routing table and sends new cluster state to node1 and node2\n3. node1 processes the cluster state and asks node2 if it has the active shard\n before node2 processes the new cluster state (which would cause it to set the shard to started)\n4. node2 sends back it does not have the shard started and so node1 does not delete it\n\nThis can be avoided by waiting until cluster state that sets the shard to started is actually processed.\n\ncloses #10018\n",
"comments": [
{
"body": "cool stuff I left some comments\n",
"created_at": "2015-03-19T22:57:44Z"
},
{
"body": "thanks for the review! addressed all comments. want to take another look?\n",
"created_at": "2015-03-20T00:30:05Z"
},
{
"body": "left some 3 comments other than that LGTM\n",
"created_at": "2015-03-20T18:05:56Z"
},
{
"body": "Pushed another commit. We have to catch EsRejectedExecutionException when we try to send back whether the shard is active or not. For example, InternalNode.stop will cause ObserverClusterStateListener.onClose of the listener to be called at some point and the reject exception that this might throw is not caught anywhere it seems. Alternatively we might also consider not trying to send back a response on close?\n",
"created_at": "2015-03-21T17:44:18Z"
},
{
"body": "I think we should catch there exception in the caller of the close method - this can hit the next user of this as well?\n",
"created_at": "2015-03-23T09:43:29Z"
},
{
"body": "I like the approach. Left some comments.\n",
"created_at": "2015-03-23T10:18:49Z"
},
{
"body": "@bleskes @s1monw thanks a lot for the review! I implemented all changes except where I added a comment because I was unsure what to do. \n@s1monw about the exception handling: It seems in general unchecked exceptions are not handled when listeners are called when the cluster service closes. I can add a catch for them (see b4e88ed9a155a9cd3832403b574efbdf9db612eb) but because they are not handled anywhere I suspect there is method behind it. I'd be happy for any insight into how exceptions should be handled properly here.\n",
"created_at": "2015-03-30T10:36:37Z"
},
{
"body": "@brwe I think we should detach the exception handling problem from this issue. Yet, we should still address it. IMO we really need to make sure that all listeners are notified even if one of them threw an exception. Can you open a folllowup?\n",
"created_at": "2015-03-30T11:28:45Z"
},
{
"body": "other than that ^^ LGTM :)\n",
"created_at": "2015-03-30T11:29:40Z"
},
{
"body": "LGTM. Left some very minor comments. no need for another review cycle.\n",
"created_at": "2015-03-30T21:10:31Z"
},
{
"body": "pushed to master and 1.x (17dffe222b923c17614905515773614d6963e13e)\n",
"created_at": "2015-03-31T14:01:18Z"
}
],
"number": 10172,
"title": "Shard not deleted after relocation if relocated shard is still in post recovery"
} | {
"body": "When nodes shutdown then any ClusterStatObserver is notified (onClose() called).\nIf the cluster state observer executes code which throws RejectedExecutionException then this\nis not caught automatically.\nThis is problematic for example when one wants to send something via a TransportChannel\non close but this operation is rejected because the node is shutting down.\nInstead, rejections should be caught and ignored.\n\nI have this problem in #10172 (https://github.com/elastic/elasticsearch/pull/10172/files#diff-60f857f2de28d320835e75281ae52146R371). It makes the tests fail occasionally.\n",
"number": 10196,
"review_comments": [],
"title": "Handle RejectedExecutionException in ClusterStateObserver on timeout"
} | {
"commits": [
{
"message": "Handle RejectedExecutionException in ClusterStateObserver on timeout\n\nWhen nodes shutdown then any ClusterStatObserver is notified (onClose() called).\nIf the cluster state observer executes code which throws RejectionException then this\nis not caught automatically.\nThis is problematic for example when one wants to send something via a TransportChannel\non close but this operation is rejected because the node is shutting down.\nInstead, rejections should be caught and ignored."
}
],
"files": [
{
"diff": "@@ -522,7 +522,7 @@ public void run() {\n }\n }\n \n- class NotifyTimeout implements Runnable {\n+ class NotifyTimeout extends AbstractRunnable {\n final TimeoutClusterStateListener listener;\n final TimeValue timeout;\n ScheduledFuture future;\n@@ -537,7 +537,17 @@ public void cancel() {\n }\n \n @Override\n- public void run() {\n+ public void onFailure(Throwable t) {\n+ logger.warn(\"notifying cluster state listener on timeout failed \", t);\n+ }\n+\n+ @Override\n+ public void onRejection(Throwable t) {\n+ // do nothing\n+ }\n+\n+ @Override\n+ protected void doRun() throws Exception {\n if (future.isCancelled()) {\n return;\n }",
"filename": "src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java",
"status": "modified"
}
]
} |
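The `NotifyTimeout` change in the diff above swaps a plain `Runnable` for `AbstractRunnable` so that a rejection during node shutdown is swallowed instead of escaping. The sketch below is a self-contained, simplified stand-in for that pattern; it does not reuse the real `AbstractRunnable` dispatch logic, and the class names other than the concept are invented for illustration.

```java
// Self-contained sketch of the pattern from the diff above: ignore rejections that
// occur while the node is shutting down, log everything else. The real code uses
// org.elasticsearch.common.util.concurrent.AbstractRunnable; this simplified stand-in
// only mirrors the intent, not that class's exact behavior.
import java.util.concurrent.RejectedExecutionException;

abstract class RejectionAwareRunnable implements Runnable {
    @Override
    public final void run() {
        try {
            doRun();
        } catch (RejectedExecutionException t) {
            onRejection(t);   // executor rejected work, e.g. node shutdown: tolerate it
        } catch (Exception t) {
            onFailure(t);     // anything else is a real failure
        }
    }

    protected abstract void doRun() throws Exception;

    protected abstract void onFailure(Throwable t);

    protected abstract void onRejection(Throwable t);
}

public class NotifyTimeoutSketch extends RejectionAwareRunnable {

    @Override
    protected void doRun() throws Exception {
        // Stands in for notifying a TimeoutClusterStateListener; the listener may try to
        // answer over a TransportChannel whose executor is already shut down, which
        // surfaces as a rejection.
        throw new RejectedExecutionException("transport executor is shutting down");
    }

    @Override
    protected void onFailure(Throwable t) {
        System.err.println("notifying cluster state listener on timeout failed: " + t);
    }

    @Override
    protected void onRejection(Throwable t) {
        // do nothing: dropping the notification during shutdown is acceptable
    }

    public static void main(String[] args) {
        new NotifyTimeoutSketch().run();   // completes quietly despite the rejection
    }
}
```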
{
"body": "SimpleSortTests.testIssue8226 for example fails about once a week. Example failure:\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_1x_metal/3129/\n\nI can reproduce it locally (although very rarely) with some additional logging (action.search.type: TRACE). \n\nHere is a brief analysis of what happened. Would be great if someone could take a look and let me know if this makes sense.\n\nFailure:\n\n```\n1> REPRODUCE WITH : mvn clean test -Dtests.seed=774A2866F1B6042D -Dtests.class=org.elasticsearch.search.sort.SimpleSortTests -Dtests.method=\"testIssue8226 {#76 seed=[774A2866F1B6042D:ACB4FF9F8C8CA341]}\" -Des.logger.level=DEBUG -Des.node.mode=network -Dtests.security.manager=true -Dtests.nightly=false -Dtests.client.ratio=0.0 -Dtests.heap.size=512m -Dtests.jvm.argline=\"-server -XX:+UseConcMarkSweepGC -XX:-UseCompressedOops -XX:+AggressiveOpts -Djava.net.preferIPv4Stack=true\" -Dtests.locale=fi_FI -Dtests.timezone=Etc/GMT+9 -Dtests.processors=4\n 1> Throwable:\n 1> java.lang.AssertionError: One or more shards were not successful but didn't trigger a failure\n 1> Expected: <47>\n 1> but: was <46>\n```\n\nHere is an example failure in detail, the relevant parts of the logs are below:\n## State\n\nnode_0 is master.\n[test_5][0] is relocating from node_1 to node_0.\nCluster state 3673 has the shard as relocating, in cluster state 3674 it is started.\nnode_0 is the coordinating node for the search request.\n\nIn brief, the request fails for shard [test_5][0] because node_0 operates on an older cluster state 3673 when processing the search request, while node_1 is already on 3674.\n## Course of events:\n1. node_0 sends shard started, but the shard is still in state POST_RECOVERY and will remain so until it receives the new cluster state and applies it locally\n2. node_0(master) receives the shard started request and publishes the new cluster state 3674 to node_0 and node_1\n3. node_1 receives the cluster state 3674 and applies it locally\n4. node_0 sends search request for [test_5][0] to node_1 because according to cluster state 3673 the shard is there and relocating\n -> request fails with IndexShardMissingException because node_1 already applied cluster state 3674 and deleted the shard.\n5. node_0 then sends request for [test_5][0] to node_0 because the shard is there as well (according to cluster state 3673 it is and initializing)\n -> request fails with IllegalIndexShardStateException because node_0 has not yet processed cluster state 3674 and therefore the shard is in POST_RECOVERY instead of STARTED\n No shard failure is logged because IndexShardMissingException and IllegalIndexShardStateException are explicitly excluded from shard failures.\n6. node_0 finally also gets to process the new cluster state and moves the shard [test_5][0] to STARTED but it is too late\n\nThis is a very rare condition and maybe too bad on client side because the information that one shard did not deliver results is there although it is not explicitly listed as shard failure. 
We can probably make the test pass easily be just waiting for relocations before executing the search request but that seems wrong because any search request can fail this way.\n## Sample log\n\n```\n[....]\n\n 1> [2015-01-26 09:27:14,435][DEBUG][indices.recovery ] [node_0] [test_5][0] recovery completed from [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}], took [84ms]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] sending shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] received shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: execute\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.action.shard ] [node_0] [test_5][0] will apply shard started [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n\n\n[....]\n\n\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] cluster state updated, version [3674], source [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] publishing cluster state version 3674\n 1> [2015-01-26 09:27:14,442][DEBUG][discovery.zen.publish ] [node_1] received cluster state version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] processing [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]: execute\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] cluster state updated, version [3674], source [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][indices.cluster ] [node_1] [test_5][0] removing shard (not allocated)\n 1> [2015-01-26 09:27:14,443][DEBUG][index ] [node_1] [test_5] [0] closing... 
(reason: [removing shard (not allocated)])\n 1> [2015-01-26 09:27:14,443][INFO ][test.store ] [node_1] [test_5][0] Shard state before potentially flushing is STARTED\n 1> [2015-01-26 09:27:14,453][DEBUG][search.sort ] cluster state:\n 1> version: 3673\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3006):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (1):\n 1> 13638/URGENT/shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], 
relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]/13ms\n 1>\n\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_3][3]\n 1> [2015-01-26 09:27:14,460][TRACE][action.search.type ] [node_0] [test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [false]\n 1> org.elasticsearch.transport.RemoteTransportException: [node_1][inet[/192.168.2.102:9401]][indices:data/read/search[phase/dfs]]\n 1> Caused by: org.elasticsearch.index.IndexShardMissingException: [test_5][0] missing\n 1> at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:203)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:539)\n 1> at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:757)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:748)\n 1> at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:275)\n 1> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_6][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_5][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_7][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_3][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_8][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_1][0]\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,463][TRACE][action.search.type ] [node_0] [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [true]\n 1> org.elasticsearch.index.shard.IllegalIndexShardStateException: [test_5][0] CurrentState[POST_RECOVERY] operations only allowed when started/relocated\n 1> at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:839)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:651)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:647)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:543)\n 1> at 
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:197)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:194)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_2][4]\n\n\n\n[...]\n\n\n\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,493][DEBUG][index.shard ] [node_0] [test_5][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: execute\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: no change in cluster_state\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: done applying updated cluster_state (version: 3674)\n 1> [2015-01-26 09:27:14,456][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_2][3]\n\n\n[...]\n\n 1> [2015-01-26 09:27:14,527][DEBUG][search.sort ] cluster state:\n 1> version: 3674\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3007):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], 
node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], 
[R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (0):\n 1>\n\n[...]\n```\n",
"comments": [
{
"body": "A similar test failure:\n\n`org.elasticsearch.deleteByQuery.DeleteByQueryTests.testDeleteAllOneIndex`\n\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_master_metal/2579/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_centos/2640/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_regression/1263/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\n\nIt fails on the:\n\n``` java\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n```\n\nWhich I believe relates to the relocation issue Britta mentioned.\n",
"created_at": "2015-01-27T00:21:32Z"
},
{
"body": "I think this is unrelated. I actually fixed the DeleteByQueryTests yesterday (c3f1982f21150336f87b7b4def74e019e8bdac18) and this commit does not seem to be in the build you linked to.\n\nA brief explanation: DeleteByQuery is a write operation. The shard header returned and checked in DeleteByQueryTests is different from the one return for search requests. The reason why DeleteByQuery failed is because I added the check \n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.totalNumShards));\n\nbefore which was wrong because there was no ensureGreen() so some of the replicas might not have ben initialized yet. I fixed this in c3f1982f2 by instead checking\n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n",
"created_at": "2015-01-27T08:57:22Z"
},
{
"body": "I wonder if we should just allow reads in the POST_RECOVERY phase. At that point the shards is effectively ready to do everything it needs to do. @brwe this will solve the issue, right?\n",
"created_at": "2015-01-27T10:36:17Z"
},
{
"body": "@brwe okay, does that mean I can unmute the `DeleteByQueryTests.testDeleteAllOneIndex`?\n",
"created_at": "2015-01-27T16:42:41Z"
},
{
"body": "yes\n",
"created_at": "2015-01-27T16:45:32Z"
},
{
"body": "Unmuted the `DeleteByQueryTests.testDeleteAllOneIndex` test\n",
"created_at": "2015-01-27T17:42:04Z"
},
{
"body": "@bleskes I think that would fix it. However, before I push I want to try and write a test that reproduces reliably. Will not do before next week.\n",
"created_at": "2015-01-28T15:25:51Z"
},
{
"body": "@brwe please ping before starting on this. I want to make sure that we capture the original issue which caused us to introduce POST_RECOVERY. I don't recall exactly recall what the problem was (it was refresh related) and I think it was solved by a more recent change to how refresh work (#6545) but it requires careful thought\n",
"created_at": "2015-02-24T12:48:00Z"
},
{
"body": "@bleskes ping :)\nI finally came back to this and wrote a test that reproduces the failure reliably (#10194) but I did not quite get what you meant by \"capture the original issue\". Can you elaborate?\n",
"created_at": "2015-03-20T21:11:20Z"
},
{
"body": "@kimchy do you recall why we can't read in that state?\n",
"created_at": "2015-04-13T14:39:28Z"
}
],
"number": 9421,
"title": "After relocation shards might temporarily not be searchable if still in POST_RECOVERY"
} | {
"body": "...cating\n\nJust opening this to discuss. I wrote a test that reproduces #9421 reliably and also have a fix but we need to figure out if this is the right way to go.\n",
"number": 10194,
"review_comments": [],
"title": "[search] make sure search does not fail when shard is in post recovery after relo..."
} | {
"commits": [
{
"message": "[search] make sure search does not fail when in post recovery after relocating"
}
],
"files": [
{
"diff": "@@ -908,7 +908,8 @@ private void readAllowed(boolean writeOperation) throws IllegalIndexShardStateEx\n throw new IllegalIndexShardStateException(shardId, state, \"operations only allowed when started/relocated\");\n }\n } else {\n- if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED) {\n+ // NOCOMMIT\n+ if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED && state != IndexShardState.POST_RECOVERY) {\n throw new IllegalIndexShardStateException(shardId, state, \"operations only allowed when started/relocated\");\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,89 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.basic;\n+\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.disruption.SlowClusterStateProcessing;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+\n+/**\n+ *\n+ */\n+@ClusterScope(scope = Scope.TEST, numDataNodes = 0)\n+public class DisruptedSearchTests extends ElasticsearchIntegrationTest {\n+ private static final Settings SETTINGS = settingsBuilder().put(\"gateway.type\", \"local\").build();\n+\n+ @Test\n+ public void searchWithRelocationAndSlowClusterStateProcessing() throws Exception {\n+ final String masterNode = internalCluster().startNode(ImmutableSettings.builder().put(SETTINGS).put(\"node.data\", false));\n+ final String red_node = internalCluster().startNode(ImmutableSettings.builder().put(SETTINGS).put(\"node.color\", \"red\"));\n+ final String blue_node = internalCluster().startNode(ImmutableSettings.builder().put(SETTINGS).put(\"node.color\", \"blue\"));\n+ logger.info(\"--> creating index [test] with one shard and on replica\");\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ ImmutableSettings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .put(\"index.routing.allocation.include.color\", \"red\"))\n+ );\n+ ensureGreen(\"test\");\n+\n+ List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n+ for (int i = 0; i < 100; i++) {\n+ indexRequestBuilderList.add(client().prepareIndex().setIndex(\"test\").setType(\"doc\").setSource(\"{\\\"int_field\\\":1}\"));\n+ }\n+ indexRandom(true, indexRequestBuilderList);\n+ SearchThread searchThread = new SearchThread();\n+\n+ searchThread.start();\n+ SlowClusterStateProcessing disruption = null;\n+ disruption = new SlowClusterStateProcessing(blue_node, getRandom(), 0, 0, 1000, 2000);\n+ internalCluster().setDisruptionScheme(disruption);\n+ disruption.startDisrupting();\n+\n+ logger.info(\"--> move shard from node_1 to node_3, and wait for 
relocation to finish\");\n+ internalCluster().client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(\"index.routing.allocation.include.color\", \"blue\")).get();\n+ ensureGreen(\"test\");\n+ searchThread.stopSearching = true;\n+ disruption.stopDisrupting();\n+ }\n+\n+ public static class SearchThread extends Thread {\n+ public volatile boolean stopSearching = false;\n+ @Override\n+ public void run() {\n+ while (stopSearching == false) {\n+ assertSearchResponse(client().prepareSearch(\"test\").get());\n+ }\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/search/basic/DisruptedSearchTests.java",
"status": "added"
}
]
} |
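The fix discussed in #10194 above amounts to relaxing the shard's read gate so that `POST_RECOVERY` is accepted alongside `STARTED` and `RELOCATED`. A minimal, self-contained sketch of that gate follows; the state names mirror the ones visible in the diff, but the enum, field, and exception used here are simplified stand-ins for illustration, not the real `IndexShard`/`IllegalIndexShardStateException` API.

```java
// Standalone illustration of the read gate discussed above; not the actual IndexShard class.
enum IndexShardState { CREATED, RECOVERING, POST_RECOVERY, STARTED, RELOCATED, CLOSED }

class ReadGateSketch {
    private volatile IndexShardState state = IndexShardState.POST_RECOVERY;

    // Before the change, reads were rejected while the shard was still in POST_RECOVERY,
    // even though it already holds all the data; after the change POST_RECOVERY is accepted.
    void readAllowed() {
        IndexShardState current = state; // read the volatile field once
        boolean allowed = current == IndexShardState.STARTED
                || current == IndexShardState.RELOCATED
                || current == IndexShardState.POST_RECOVERY; // the relaxation under discussion
        if (allowed == false) {
            throw new IllegalStateException(
                    "operations only allowed when started/relocated, current state: " + current);
        }
    }
}
```

The disruption test added in the diff exercises exactly the window this closes: a search that hits the relocation target while it is still in `POST_RECOVERY` and cluster state application is artificially slowed down.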
{
"body": "When we start recovery of a shard we should wipe the state file of the copy if it's present otherwise gateway allocating can get confused interpreting a shard that is not fully recovered ie. due to a recovery failure as a valid copy since we only write the state when the shard is started.\n",
"comments": [
{
"body": "@brwe can you take care of this?\n",
"created_at": "2015-03-10T23:47:47Z"
},
{
"body": "I wonder if the correct time to wipe any _state file is before the temp file rename. Until then, the recovery doesn’t mess with any non-temp files. If the recover is cancelled, we leave the target shard intact.\n\n> On 10 Mar 2015, at 16:48, Simon Willnauer notifications@github.com wrote:\n> \n> @brwe can you take care of this?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2015-03-10T23:52:33Z"
},
{
"body": "@bleskes agreed.. we should remove it before we rename the first file.\n",
"created_at": "2015-03-15T21:26:43Z"
},
{
"body": "Just for reference, here is the relevant test failure: http://build-us-00.elasticsearch.org/job/es_core_1x_small/1800/\n",
"created_at": "2015-03-17T20:58:57Z"
}
],
"number": 10053,
"title": "Recovery should wipe the shard state file before starting recovery"
} | {
"body": "Today we leave the shard state behind even if a recovery is half finished\nthis causes in rare conditions shards to be recovered and promoted as\nprimaries that have never been fully recovered.\n\nCloses #10053\n",
"number": 10179,
"review_comments": [
{
"body": "why do we need this?\n",
"created_at": "2015-03-20T09:35:24Z"
}
],
"title": "Wipe shard state before switching recovered files live"
} | {
"commits": [
{
"message": "[RECOVERY] Wipe shard state before switching recovered files live\n\nToday we leave the shard state behind even if a recovery is half finished\nthis causes in rare conditions shards to be recovered and promoted as\nprimaries that have never been fully recovered.\n\nCloses #10053"
}
],
"files": [
{
"diff": "@@ -57,6 +57,7 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.aliases.IndexAliasesService;\n@@ -1005,6 +1006,17 @@ public final boolean isFlushOnClose() {\n return flushOnClose;\n }\n \n+ /**\n+ * Deletes the shards metadata state. This method can only be executed if the shard is not active.\n+ * @throws IOException if the delete fails\n+ */\n+ public void deleteShardState() throws IOException {\n+ if (this.routingEntry() != null && this.routingEntry().active()) {\n+ throw new ElasticsearchIllegalStateException(\"Can't delete shard state on a active shard\");\n+ }\n+ MetaDataStateFormat.deleteMetaState(nodeEnv.shardPaths(shardId));\n+ }\n+\n private class ApplyRefreshSettings implements IndexSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n@@ -1202,11 +1214,19 @@ class ShardEngineFailListener implements Engine.FailedEngineListener {\n // called by the current engine\n @Override\n public void onFailedEngine(ShardId shardId, String reason, @Nullable Throwable failure) {\n- for (Engine.FailedEngineListener listener : delegates) {\n+ try {\n+ for (Engine.FailedEngineListener listener : delegates) {\n+ try {\n+ listener.onFailedEngine(shardId, reason, failure);\n+ } catch (Exception e) {\n+ logger.warn(\"exception while notifying engine failure\", e);\n+ }\n+ }\n+ } finally {\n try {\n- listener.onFailedEngine(shardId, reason, failure);\n- } catch (Exception e) {\n- logger.warn(\"exception while notifying engine failure\", e);\n+ deleteShardState();\n+ } catch (IOException e) {\n+ logger.warn(\"failed to delete shard state\", e);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -386,6 +386,7 @@ public void messageReceived(RecoveryCleanFilesRequest request, TransportChannel\n // first, we go and move files that were created with the recovery id suffix to\n // the actual names, its ok if we have a corrupted index here, since we have replicas\n // to recover from in case of a full cluster shutdown just when this code executes...\n+ recoveryStatus.indexShard().deleteShardState(); // we have to delete it first since even if we fail to rename the shard might be invalid\n recoveryStatus.renameAllTempFiles();\n final Store store = recoveryStatus.store();\n // now write checksums",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.shard;\n \n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.routing.MutableShardRouting;\n@@ -30,6 +31,7 @@\n import org.elasticsearch.indices.cluster.IndicesClusterStateService;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n \n+import java.io.IOException;\n import java.util.HashSet;\n import java.util.Set;\n \n@@ -123,7 +125,6 @@ public void testPersistenceStateMetadataPersistence() throws Exception {\n assertEquals(\"inactive shard state shouldn't be persisted\", shardStateMetaData, new ShardStateMetaData(routing.version(), routing.primary(), shard.indexSettings.get(IndexMetaData.SETTING_UUID)));\n \n \n-\n shard.updateRoutingEntry(new MutableShardRouting(shard.shardRouting, shard.shardRouting.version()+1), false);\n shardStateMetaData = ShardStateMetaData.load(logger, shard.shardId, env.shardPaths(shard.shardId));\n assertFalse(\"shard state persisted despite of persist=false\", shardStateMetaData.equals(getShardStateMetadata(shard)));\n@@ -135,6 +136,34 @@ public void testPersistenceStateMetadataPersistence() throws Exception {\n shardStateMetaData = ShardStateMetaData.load(logger, shard.shardId, env.shardPaths(shard.shardId));\n assertEquals(shardStateMetaData, getShardStateMetadata(shard));\n assertEquals(shardStateMetaData, new ShardStateMetaData(routing.version(), routing.primary(), shard.indexSettings.get(IndexMetaData.SETTING_UUID)));\n+ }\n+\n+ public void testDeleteShardState() throws IOException {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ NodeEnvironment env = getInstanceFromNode(NodeEnvironment.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ IndexShard shard = test.shard(0);\n+ try {\n+ shard.deleteShardState();\n+ fail(\"shard is active metadata delete must fail\");\n+ } catch (ElasticsearchIllegalStateException ex) {\n+ // fine - only delete if non-active\n+ }\n+\n+ ShardRouting routing = shard.routingEntry();\n+ ShardStateMetaData shardStateMetaData = ShardStateMetaData.load(logger, shard.shardId, env.shardPaths(shard.shardId));\n+ assertEquals(shardStateMetaData, getShardStateMetadata(shard));\n+\n+ routing = new MutableShardRouting(shard.shardId.index().getName(), shard.shardId.id(), routing.currentNodeId(), routing.primary(), ShardRoutingState.INITIALIZING, shard.shardRouting.version()+1);\n+ shard.updateRoutingEntry(routing, true);\n+ shard.deleteShardState();\n+\n+ assertNull(\"no shard state expected after delete on initializing\", ShardStateMetaData.load(logger, shard.shardId, env.shardPaths(shard.shardId)));\n+\n+\n+\n \n }\n ",
"filename": "src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
}
]
} |
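The ordering in #10179 above is the essential part: the persisted shard state is wiped before any recovered temp file is renamed into place, so an interrupted recovery can no longer leave a state file that lets gateway allocation treat a half-recovered copy as valid. Below is a rough sketch of that ordering using plain `java.nio.file` calls; the state-file name and the temp-file prefix are hypothetical placeholders, not the real `MetaDataStateFormat`/`RecoveryTarget` layout.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

// Sketch of "wipe shard state before switching recovered files live".
class RecoveryFinalizeSketch {
    static void switchRecoveredFilesLive(Path shardDir, List<String> tempFiles) throws IOException {
        // 1. Delete the persisted shard state first. If we crash after this point the
        //    shard is seen as unrecovered instead of being promoted as a valid copy.
        Files.deleteIfExists(shardDir.resolve("_state").resolve("state-0.st")); // hypothetical file name

        // 2. Only now move the temp files over their real names.
        for (String tempName : tempFiles) {
            String realName = tempName.replaceFirst("^recovery\\.", ""); // hypothetical temp prefix
            Files.move(shardDir.resolve(tempName), shardDir.resolve(realName),
                    StandardCopyOption.ATOMIC_MOVE);
        }
    }
}
```

Doing the delete after the renames would reopen the window the issue describes: a crash between the renames and the delete leaves both the recovered files and the stale state on disk.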
{
"body": "During a merge the `validate_lat` and `validate_lon` parameters for the `geo_point` field mapper are being overwritten by the incoming field mappings. This can cause problems during search where invalid geo_points (which were formerly acceptable) are no longer acceptable causing exceptions when parsing the (now invalid) geo points.\n",
"comments": [],
"number": 10164,
"title": "[GEO] GeoPointFieldMapper.validate_* overwritten on merge"
} | {
"body": "Fail merge if validate_lat or validate_lon values are not equal. This will prevent inconsistencies between geo_points in a merged index, and parse exceptions for bounding_box and distance filters.\n\nAlso merged separate GeoPoint test classes into a single GeoPointFieldMapperTest to be consistent with GeoShapeFieldMapperTests.\n\ncloses #10164\n",
"number": 10165,
"review_comments": [
{
"body": "why change normalize to false here? aren't you just trying to check validate?\n",
"created_at": "2015-03-20T19:58:39Z"
},
{
"body": "I think we should check the conflicts?\n",
"created_at": "2015-03-20T19:58:56Z"
},
{
"body": "correct. normalize will be changing in a separate PR. I must have accidentally included it in this one. :) Made changes.\n",
"created_at": "2015-03-23T20:05:30Z"
},
{
"body": "Added a check. Let me know if there's a better way.\n",
"created_at": "2015-03-23T20:05:31Z"
},
{
"body": "I don't know of a better way right now, although you could, if you want, do a contains check instead of exact equality? So maybe just look for `\"different validate_lat\"`?\n",
"created_at": "2015-03-23T20:14:55Z"
},
{
"body": "updated to simplify\n",
"created_at": "2015-03-23T20:32:15Z"
}
],
"title": "Fix validate_* merge policy for GeoPointFieldMapper"
} | {
"commits": [
{
"message": "[GEO] Fix validate_* merge policy for GeoPointFieldMapper\n\nFail merge if validate_lat or validate_lon values are not equal. This will prevent inconsistencies between geo_points in a merged index, and parse exceptions for bounding_box and distance filters.\n\nAlso merged separate GeoPoint test classes into a single GeoPointFieldMapperTest to be consistent with GeoShapeFieldMapperTests.\n\ncloses #10164"
}
],
"files": [
{
"diff": "@@ -665,11 +665,11 @@ public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappi\n if (!Objects.equal(this.precisionStep, fieldMergeWith.precisionStep)) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different precision_step\");\n }\n-\n-\n- if (!mergeContext.mergeFlags().simulate()) {\n- this.validateLat = fieldMergeWith.validateLat;\n- this.validateLon = fieldMergeWith.validateLon;\n+ if (this.validateLat != fieldMergeWith.validateLat) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different validate_lat\");\n+ }\n+ if (this.validateLon != fieldMergeWith.validateLon) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different validate_lon\");\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java",
"status": "modified"
}
]
} |
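The geo_point fix above replaces the silent overwrite of `validate_lat`/`validate_lon` during a mapping merge with an explicit merge conflict, mirroring the two `addConflict` calls in the diff. Here is a small stand-alone sketch of that pattern, assuming a simplified mapper class and a plain list of conflict strings in place of the real `MergeContext`.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the mapper merge check; not the real GeoPointFieldMapper API.
class GeoPointMergeSketch {
    final String fullName;
    final boolean validateLat;
    final boolean validateLon;

    GeoPointMergeSketch(String fullName, boolean validateLat, boolean validateLon) {
        this.fullName = fullName;
        this.validateLat = validateLat;
        this.validateLon = validateLon;
    }

    // Instead of copying the incoming values (the old behaviour), record a conflict
    // whenever the validation flags differ, so the merge fails loudly.
    List<String> merge(GeoPointMergeSketch mergeWith) {
        List<String> conflicts = new ArrayList<>();
        if (this.validateLat != mergeWith.validateLat) {
            conflicts.add("mapper [" + fullName + "] has different validate_lat");
        }
        if (this.validateLon != mergeWith.validateLon) {
            conflicts.add("mapper [" + fullName + "] has different validate_lon");
        }
        return conflicts;
    }
}
```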
{
"body": "curator 1.0.0 still uses «localhost:9200/<index_name>-_/_status», which is deprecated in ES 1.2.0\nAnyway, ES has a bug in /<index_name>-_/_status — it returns the list of ALL indices if wildcard doesn't match to any index and this may lead to data loss if curator takes a list of all existent indices and attempts to remove them.\nIn our installation we have lost 40T of data when last index named «intape-logstash-YYYY.MM.DD» has gone by the next time cron started job «curator -p intape-logstash- -C space -g1000».\n",
"comments": [
{
"body": "Looks like we have a similar behavior for _recovery endpoint:\n\n```\nDELETE /_all\n\nPOST /movies/dvd/1\n{\n\"title\": \"Elasticsearch: the movie\"\n}\n\nPOST /library/book/1\n{\n\"title\": \"Elasticsearch: the definitve guide\"\n}\n\nGET /v*/_mapping\n\nreturns empty object\n\nGET /v*/_recovery\n\nreturns full list\n\n```\n",
"created_at": "2015-01-09T09:41:50Z"
},
{
"body": "I just made it an adoptme.\n",
"created_at": "2015-01-09T09:43:13Z"
},
{
"body": "I did some research here. The problem seems to lie in a collision between APIs of lower components. The `MetaData#concreteIndices()` method takes a list of wildcards/aliases/index names and resolves it to a concrete list of indices to operate on where an empty return means nothing is to be done. The _status API (which is deprecated btw) and the _recovery API both pass this list to `RoutingTable#allAssignedShardsGrouped` which interprets an empty list to mean all of the indices. Gut feeling says the latter should change and return no shards (because it was _probably_ not updated when the `IndexOptions` mechanics were introduced) but it's a deep component - we need research the implications.\n",
"created_at": "2015-01-09T10:53:50Z"
},
{
"body": "Starting to look into this one. Since I'm not familiar with the _recovery api so far I started there and just saw that from the example about also\n\nGET /vfoobar/_recovery?ignore_unavailable=true\nreturns the full list of indices.\n",
"created_at": "2015-01-23T13:27:31Z"
},
{
"body": "Removing the expansion of empty list of indices in RoutingTable#allAssignedShardsGrouped solves the problem for the `_recovery` endpoint, but I'm not sure what the implications are for other operations using that method. From what I see so far these are\n- TransportRefreshAction\n- TransportIndicesStatsAction\n\nNot sure if the semantics of \"empty indices\" can mean \"all indices\" in these cases.\n\nAn alternative on changing the RoutingTable implementaion would be to set always set the RecoveryRequest \"allowNoIndices\" option to false. In that case when after expanding wildcards no index is matched we throw an IndexMissingException in MetaData#convertFromWildcards.\n\nThere are also at least four other places in RoutingTable where empty index lists are expanded like:\n\n```\nif (indices == null || indices.length == 0) {\n indices = indicesRouting.keySet().toArray(new String[indicesRouting.keySet().size()]);\n}\n```\n\nNot sure if all of these should be looked at. I experimentally tried removing all of them but got some test failures after that, so it seems to depend on context when empty index list as input should be treated as \"all\" or \"none\".\n",
"created_at": "2015-02-24T16:16:35Z"
},
{
"body": "I had a closer look at how the index wildcard pattern get resolved to concrete indices in `MetaData#concreteIndices()`. \nIf it would be 100% sure that any input that could means \"all indices\" (that is: null or empty String[] or \"_all\") resolves to concrete list of names, then any empty list that is returned from that method would actually mean that there actually really is no indices or aliases for that input. Then in Routingtable we could get rid of handling the cases where indices=null / empty are treated like \"all\". \nI think that is almost the case already. If wildcard-expansion is on (for open or closed) then the \"_all\" case is sure to be resolved in `MetaData#concreteIndices()`. I think there are some edge cases where `MetaData#concreteIndices()` with empty input index names can lead to NPEs (when both expand_open, expand_closed) are false. Not sure if those cases should be allowed at all?\n",
"created_at": "2015-02-27T15:58:18Z"
},
{
"body": "I pushed a version where I removed the empty list expansion from the RoutingTable to https://github.com/cbuescher/elasticsearch/commit/5f52f9bc57094bc8703f522738ec562ae0313f57\nFixed all tests to pass there, not sure if I should open a PR for discussion?\n",
"created_at": "2015-02-27T16:02:24Z"
}
],
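To make the failure mode in the comments above concrete: a wildcard that matches nothing produces an empty concrete index list, and the routing layer's `indices == null || indices.length == 0` expansion then turns that empty list back into every known index. The sketch below models the old and new behaviour side by side; the map, method, and flag names are illustrative only, not the real `RoutingTable` API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Tiny model of the empty-index-list expansion discussed in the comment thread above.
class EmptyIndicesExpansionSketch {
    static List<String> shardsFor(Map<String, List<String>> indicesRouting,
                                  String[] indices, boolean expandEmptyToAll) {
        if (expandEmptyToAll && (indices == null || indices.length == 0)) {
            // old behaviour: an empty input silently becomes "all known indices"
            indices = indicesRouting.keySet().toArray(new String[0]);
        } else if (indices == null) {
            indices = new String[0]; // new behaviour: empty stays empty
        }
        List<String> shards = new ArrayList<>();
        for (String index : indices) {
            shards.addAll(indicesRouting.getOrDefault(index, Collections.emptyList()));
        }
        return shards;
    }
}
```

With the expansion enabled, a pattern like `v*` that matches no index effectively becomes `_all`, which is how the `/<index_name>-*/_status` call in the report ended up listing (and curator deleting) unrelated indices; with the expansion removed, the same input yields no shards.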
"number": 9081,
"title": "[REST API] «/<index_name>-*/_status» + «curator» ≈ massive data loss"
} | {
"body": "RoutingTables activePrimaryShardsGrouped(), allActiveShardsGrouped() and\nallAssignedShardsGrouped() methods treated empty index array input\nparameters as meaning \"all\" indices and expanded to the routing maps\nkeyset. However, the expansion of index names is now already done in\nMetaData#concreteIndices(). Returning an empty index name list here\nwhen a wildcard pattern didn't match any index name could lead to\nproblems like #9081 because the RoutingTable still expanded this\nlist of names to \"_all\". In case of e.g. the recovery endpoint this\ncould lead to problems.\n\nThis fix removes the index name expansion in RoutingTable and introduces\nsome more checks for preventing NPEs in MetaData#concreteIndices().\n\nCloses #9081\n",
"number": 10148,
"review_comments": [
{
"body": "if you rebase this change should go away right?\n",
"created_at": "2015-03-19T09:39:38Z"
},
{
"body": "can you explain what you are trying to achieve with this code block? Does it actually change the result of calling `concreteIndices`? Seems like it doesn't, il wildcards don't need to be expanded _all is treated as a concrete index, but it's not there, then we look at allowNoIndices anyway to decide whether we want to throw exception or just return empty. What am I missing?\n",
"created_at": "2015-03-19T09:45:28Z"
},
{
"body": "regardless of the below comment, use brackets here please and move the `else` the to the previous line.\n",
"created_at": "2015-03-19T09:46:52Z"
},
{
"body": "what happened here? we just want to leave this commented out? maybe we want to even remove this line then?\n",
"created_at": "2015-03-19T09:59:24Z"
},
{
"body": "wondering if these additional checks really belong here. IndicesRequestTests is a different beast, that tests indices resolution. I would probably move to a new test class that is more about testing responses and number of shards.\n",
"created_at": "2015-03-19T10:02:07Z"
},
{
"body": "same as above\n",
"created_at": "2015-03-19T10:02:15Z"
},
{
"body": "same as above, also I wonder if we could unit test `RoutingTable` instead of needing an integration tests to verify how many shards are touched...\n",
"created_at": "2015-03-19T10:02:45Z"
},
{
"body": "I think it is not clear here exactly when IndexMissingException is expected to be thrown or not. I would rather move the if on top and have different asserts path based on that. FOr the expected exception one you can then do:\n\n```\ntry {\n //do something\n fail(\"shouldn't get here\");\n} catch (IndexMissingException e) {\n //assert on exception\n}\n\n```\n",
"created_at": "2015-03-19T10:06:23Z"
},
{
"body": "same as above\n",
"created_at": "2015-03-19T10:06:37Z"
},
{
"body": "same as above\n",
"created_at": "2015-03-19T10:06:44Z"
},
{
"body": "why move from `GroupShardsIterator` to `List<ShardRouting>` ?\n",
"created_at": "2015-03-19T10:09:01Z"
},
{
"body": "Yes, I deleted that on master already. Should also maybe delete the gtelte on 1.x branch?\n",
"created_at": "2015-03-19T16:14:21Z"
},
{
"body": "this seems to be my code formatter at work here. Should I remove the comment? I think its useful to explain the `continue` here.\n",
"created_at": "2015-03-19T16:15:50Z"
},
{
"body": "I'll go and try to use this as yaml tests then\n",
"created_at": "2015-03-19T16:17:03Z"
},
{
"body": "When both indicesOptions.expandWildcardsOpen() and indicesOptions.expandWildcardsClosed() are both false for some reason, concrete indices falls through to here. If the 'aliasOrIndices' is an '_all' pattern we want to return here. Otherwise the existing implementation runs into a Null Pointer a few lines below\n\n```\nif (aliasesOrIndices.length == 1)\n```\n\nAt least that's what happens in the randomized test for this method when I introduced this. It's an edge case and the nesting of if-statements here is not nice, but there are so many different cases depending on the indicesOptions settings.\n",
"created_at": "2015-03-19T16:23:59Z"
},
{
"body": "The diff is missleading here I think. I split `allShards(String... indices)` into `allShards()` and `List<ShardRouting> allShards(String index)`. My idea was to get rid of the ambigous (String... indices) signature to have one method where it is clear that the caller really wants _all indices and one where he can get one single index, and then have the first call the second in a loop (basically splitting the existing code from `allShards(String... indices)`.\n\nI removed `GroupShardsIterator allShardsGrouped(String... indices)` completely because I didn't find any caller in the core ES codebase. The diff just mixed the changes I think, so it is a bit hard to read.\n",
"created_at": "2015-03-19T16:30:00Z"
},
{
"body": "yes I think so, should be done on 1.x too\n",
"created_at": "2015-03-19T17:28:56Z"
},
{
"body": "done\n",
"created_at": "2015-03-19T18:01:47Z"
},
{
"body": "I see! I got the split between the two methods, I like. I had missed the removal of `allShardsGrouped`, good!\n",
"created_at": "2015-03-20T08:25:03Z"
},
{
"body": "I would leave the first comment line but remove the commented out throw exception\n",
"created_at": "2015-03-20T08:25:48Z"
},
{
"body": "concreteIndicesAllpatternRandom=>concreteIndicesAllPatternRandom\n",
"created_at": "2015-03-20T08:28:48Z"
},
{
"body": "ah I see, that's because you pass in `null` as second argument. I guess in practice most of the apis fall into the expand wildcards case, that is why we have never seen this problem in reality. \n\nI think you should just do:\n\n```\nif (aliasesOrIndices == null || aliasesOrIndices.length == 0)\n```\n\nthen instead of\n\n```\nif (isAllIndices(aliasesOrIndices)) \n```\n\nand adapt the error message. This doesn't have to do with the `_all` expression, which is treated as a special wildcard.\n\nNote that we were previously missing to check `allowNoIndices` when `aliasesOrIndices` is empty... good test and good change! would be nice to figure out what the impact of this is, I think it might deserve the breaking label.... actually, maybe I would even pull out this change and get it in separately. What do you think?\n",
"created_at": "2015-03-20T08:55:35Z"
},
{
"body": "ping @cbuescher I think you haven't seen this comment ;)\n",
"created_at": "2015-03-23T17:53:15Z"
},
{
"body": "brackets please\n",
"created_at": "2015-03-23T17:56:03Z"
},
{
"body": "here too\n",
"created_at": "2015-03-23T17:56:08Z"
},
{
"body": "wondering if we really need to repeat this 100 times. Seems a little too high, wouldn't 10 be enough? Or even just rely on the fact that tests run continously. An alternative would be to use the `@Repeat` annotation instead of the loop. What do you think?\n",
"created_at": "2015-03-23T17:58:26Z"
},
{
"body": "does this really need to extend `ElasticsearchAllocationTestCase`? I think `ElasticsearchTestCase` should be enough.\n",
"created_at": "2015-03-23T18:00:20Z"
},
{
"body": "I think I was trying to use this to be able to change the ShardRoutingState in the test setup to be able to test other than just \"unassigned\" shards. I haven't yet figured out how to do that though.\n",
"created_at": "2015-03-26T11:22:18Z"
},
{
"body": "Sure, will decrease that\n",
"created_at": "2015-03-26T11:22:32Z"
},
{
"body": "if you get merge conflicts on these MetaData calls that should have been static, don't worry, I think I recently merged a PR from a contributor that fixed just that.\n",
"created_at": "2015-03-30T09:54:06Z"
}
],
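Following the `allowNoIndices` discussion in the review thread above, this is a rough sketch of the resolution-level behaviour being debated: when no index expressions are supplied at all, `allowNoIndices` decides between an empty result and an exception, independently of the `_all` wildcard. It is a standalone illustration under those assumptions, not the real `MetaData#concreteIndices()` signature or exception types.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Illustrative resolver: empty/null input and non-matching expressions are governed by allowNoIndices.
class ConcreteIndicesSketch {
    static String[] resolve(String[] aliasesOrIndices, boolean allowNoIndices, Set<String> knownIndices) {
        if (aliasesOrIndices == null || aliasesOrIndices.length == 0) {
            if (allowNoIndices) {
                return new String[0]; // empty in, empty out
            }
            throw new IllegalArgumentException("no index expressions supplied and allowNoIndices is false");
        }
        List<String> resolved = new ArrayList<>();
        for (String expression : aliasesOrIndices) {
            if ("_all".equals(expression)) {
                resolved.addAll(knownIndices); // "_all" is treated as a wildcard, not as the empty case
            } else if (knownIndices.contains(expression)) {
                resolved.add(expression);
            }
        }
        if (resolved.isEmpty() && allowNoIndices == false) {
            throw new IllegalArgumentException("no index matched " + Arrays.toString(aliasesOrIndices));
        }
        return resolved.toArray(new String[0]);
    }
}
```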
"title": "Remove expansion of empty index arguments in RoutingTable"
} | {
"commits": [
{
"message": "Remove expansion of empty index arguments in RoutingTable\n\nRoutingTables activePrimaryShardsGrouped(), allActiveShardsGrouped() and\nallAssignedShardsGrouped() methods treated empty index array input\nparameters as meaning \"all\" indices and expanded to the routing maps\nkeyset. However, the expansion of index names is now already done in\nMetaData#concreteIndices(). Returning an empty index name list here\nwhen a wildcard pattern didn't match any index name could lead to\nproblems like #9081 because the RoutingTable still expanded this\nlist of names to \"_all\". In case of e.g. the recovery endpoint this\ncould lead to problems.\n\nThis fix removes the index name expansion in RoutingTable and introduces\nsome more checks for preventing NPEs in MetaData#concreteIndices().\n\nCloses #9081"
},
{
"message": "Changed nesting in MetaDataTests testing for exceptions"
},
{
"message": "Moved tests from IndicesRequestTests to yaml tests"
},
{
"message": "Minor deletes and renames"
},
{
"message": "Added RoutingTableTest"
},
{
"message": "Include changes for comments on MetaData and tests"
},
{
"message": "Adding to RoutingTableTest"
},
{
"message": "Prepending test* to all methods in MetaDataTests"
},
{
"message": "Adding detailed check for exception in MetaDataTests, more meaningful exception message"
},
{
"message": "Reverting changes in MetaData and tests"
}
],
"files": [
{
"diff": "@@ -1,9 +1,6 @@\n ---\n \"Indices recovery test\":\n \n- - skip:\n- features: gtelte\n-\n - do:\n indices.create:\n index: test_1\n@@ -39,4 +36,45 @@\n - gte: { test_1.shards.0.start.check_index_time_in_millis: 0 }\n - gte: { test_1.shards.0.start.total_time_in_millis: 0 }\n \n+---\n+\"Indices recovery test index name not matching\":\n+\n+ - do:\n+ indices.create:\n+ index: test_1\n+ body:\n+ settings:\n+ index:\n+ number_of_replicas: 0\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+ - do:\n+\n+ catch: missing\n+ indices.recovery:\n+ index: [foobar]\n+\n+---\n+\"Indices recovery test, wildcard not matching any index\":\n+\n+ - do:\n+ indices.create:\n+ index: test_1\n+ body:\n+ settings:\n+ index:\n+ number_of_replicas: 0\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+ - do:\n+ indices.recovery:\n+ index: [v*]\n+\n+ - match: { $body: {} }\n ",
"filename": "rest-api-spec/test/indices.recovery/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,58 @@\n+---\n+setup:\n+ - do:\n+ indices.create:\n+ index: test_1\n+ body:\n+ settings:\n+ index:\n+ number_of_replicas: 0\n+ number_of_shards: 5\n+\n+ - do:\n+ indices.create:\n+ index: test_2\n+ body:\n+ settings:\n+ index:\n+ number_of_replicas: 0\n+ number_of_shards: 5\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+---\n+\"Indices refresh test _all\":\n+\n+ - do:\n+ indices.refresh:\n+ index: [_all]\n+\n+ - match: { _shards.total: 10 }\n+ - match: { _shards.successful: 10 }\n+ - match: { _shards.failed: 0 }\n+\n+---\n+\"Indices refresh test empty array\":\n+\n+\n+ - do:\n+ indices.refresh:\n+ index: []\n+\n+ - match: { _shards.total: 10 }\n+ - match: { _shards.successful: 10 }\n+ - match: { _shards.failed: 0 }\n+\n+---\n+\"Indices refresh test no-match wildcard\":\n+\n+ - do:\n+ indices.refresh:\n+ index: [bla*]\n+\n+ - match: { _shards.total: 0 }\n+ - match: { _shards.successful: 0 }\n+ - match: { _shards.failed: 0 }\n+",
"filename": "rest-api-spec/test/indices.refresh/10_basic.yaml",
"status": "added"
},
{
"diff": "@@ -60,6 +60,16 @@ setup:\n - is_true: indices.test1\n - is_true: indices.test2\n \n+---\n+\"Index - star, no match\":\n+ - do:\n+ indices.stats: { index: 'bla*' }\n+\n+ - match: { _shards.total: 0 }\n+ - is_true: _all\n+ - is_false: indices.test1\n+ - is_false: indices.test2\n+\n ---\n \"Index - one index\":\n - do:",
"filename": "rest-api-spec/test/indices.stats/10_index.yaml",
"status": "modified"
},
{
"diff": "@@ -117,61 +117,39 @@ public List<ShardRouting> shardsWithState(ShardRoutingState state) {\n }\n \n /**\n- * All the shards (replicas) for the provided indices.\n+ * All the shards (replicas) for all indices in this routing table.\n *\n- * @param indices The indices to return all the shards (replicas), can be <tt>null</tt> or empty array to indicate all indices\n- * @return All the shards matching the specific index\n- * @throws IndexMissingException If an index passed does not exists\n+ * @return All the shards\n */\n- public List<ShardRouting> allShards(String... indices) throws IndexMissingException {\n+ public List<ShardRouting> allShards() throws IndexMissingException {\n List<ShardRouting> shards = Lists.newArrayList();\n- if (indices == null || indices.length == 0) {\n- indices = indicesRouting.keySet().toArray(new String[indicesRouting.keySet().size()]);\n- }\n+ String[] indices = indicesRouting.keySet().toArray(new String[indicesRouting.keySet().size()]);\n for (String index : indices) {\n- IndexRoutingTable indexRoutingTable = index(index);\n- if (indexRoutingTable == null) {\n- throw new IndexMissingException(new Index(index));\n- }\n- for (IndexShardRoutingTable indexShardRoutingTable : indexRoutingTable) {\n- for (ShardRouting shardRouting : indexShardRoutingTable) {\n- shards.add(shardRouting);\n- }\n- }\n+ List<ShardRouting> allShardsIndex = allShards(index);\n+ shards.addAll(allShardsIndex);\n }\n return shards;\n }\n \n /**\n- * All the shards (primary + replicas) for the provided indices grouped (each group is a single element, consisting\n- * of the shard). This is handy for components that expect to get group iterators, but still want in some\n- * cases to iterate over all the shards (and not just one shard in replication group).\n+ * All the shards (replicas) for the provided index.\n *\n- * @param indices The indices to return all the shards (replicas), can be <tt>null</tt> or empty array to indicate all indices\n- * @return All the shards grouped into a single shard element group each\n- * @throws IndexMissingException If an index passed does not exists\n- * @see IndexRoutingTable#groupByAllIt()\n+ * @param index The index to return all the shards (replicas).\n+ * @return All the shards matching the specific index\n+ * @throws IndexMissingException If the index passed does not exists\n */\n- public GroupShardsIterator allShardsGrouped(String... 
indices) throws IndexMissingException {\n- // use list here since we need to maintain identity across shards\n- ArrayList<ShardIterator> set = new ArrayList<>();\n- if (indices == null || indices.length == 0) {\n- indices = indicesRouting.keySet().toArray(new String[indicesRouting.keySet().size()]);\n+ public List<ShardRouting> allShards(String index) throws IndexMissingException {\n+ List<ShardRouting> shards = Lists.newArrayList();\n+ IndexRoutingTable indexRoutingTable = index(index);\n+ if (indexRoutingTable == null) {\n+ throw new IndexMissingException(new Index(index));\n }\n- for (String index : indices) {\n- IndexRoutingTable indexRoutingTable = index(index);\n- if (indexRoutingTable == null) {\n- continue;\n- // we simply ignore indices that don't exists (make sense for operations that use it currently)\n-// throw new IndexMissingException(new Index(index));\n- }\n- for (IndexShardRoutingTable indexShardRoutingTable : indexRoutingTable) {\n- for (ShardRouting shardRouting : indexShardRoutingTable) {\n- set.add(shardRouting.shardsIt());\n- }\n+ for (IndexShardRoutingTable indexShardRoutingTable : indexRoutingTable) {\n+ for (ShardRouting shardRouting : indexShardRoutingTable) {\n+ shards.add(shardRouting);\n }\n }\n- return new GroupShardsIterator(set);\n+ return shards;\n }\n \n public GroupShardsIterator allActiveShardsGrouped(String[] indices, boolean includeEmpty) throws IndexMissingException {\n@@ -188,15 +166,11 @@ public GroupShardsIterator allActiveShardsGrouped(String[] indices, boolean incl\n public GroupShardsIterator allActiveShardsGrouped(String[] indices, boolean includeEmpty, boolean includeRelocationTargets) throws IndexMissingException {\n // use list here since we need to maintain identity across shards\n ArrayList<ShardIterator> set = new ArrayList<>();\n- if (indices == null || indices.length == 0) {\n- indices = indicesRouting.keySet().toArray(new String[indicesRouting.keySet().size()]);\n- }\n for (String index : indices) {\n IndexRoutingTable indexRoutingTable = index(index);\n if (indexRoutingTable == null) {\n continue;\n // we simply ignore indices that don't exists (make sense for operations that use it currently)\n-// throw new IndexMissingException(new Index(index));\n }\n for (IndexShardRoutingTable indexShardRoutingTable : indexRoutingTable) {\n for (ShardRouting shardRouting : indexShardRoutingTable) {\n@@ -228,15 +202,11 @@ public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean in\n public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean includeEmpty, boolean includeRelocationTargets) throws IndexMissingException {\n // use list here since we need to maintain identity across shards\n ArrayList<ShardIterator> set = new ArrayList<>();\n- if (indices == null || indices.length == 0) {\n- indices = indicesRouting.keySet().toArray(new String[indicesRouting.keySet().size()]);\n- }\n for (String index : indices) {\n IndexRoutingTable indexRoutingTable = index(index);\n if (indexRoutingTable == null) {\n continue;\n // we simply ignore indices that don't exists (make sense for operations that use it currently)\n-// throw new IndexMissingException(new Index(index));\n }\n for (IndexShardRoutingTable indexShardRoutingTable : indexRoutingTable) {\n for (ShardRouting shardRouting : indexShardRoutingTable) {\n@@ -259,17 +229,14 @@ public GroupShardsIterator allAssignedShardsGrouped(String[] indices, boolean in\n * of the primary shard). 
This is handy for components that expect to get group iterators, but still want in some\n * cases to iterate over all primary shards (and not just one shard in replication group).\n *\n- * @param indices The indices to return all the shards (replicas), can be <tt>null</tt> or empty array to indicate all indices\n+ * @param indices The indices to return all the shards (replicas)\n * @return All the primary shards grouped into a single shard element group each\n * @throws IndexMissingException If an index passed does not exists\n * @see IndexRoutingTable#groupByAllIt()\n */\n public GroupShardsIterator activePrimaryShardsGrouped(String[] indices, boolean includeEmpty) throws IndexMissingException {\n // use list here since we need to maintain identity across shards\n ArrayList<ShardIterator> set = new ArrayList<>();\n- if (indices == null || indices.length == 0) {\n- indices = indicesRouting.keySet().toArray(new String[indicesRouting.keySet().size()]);\n- }\n for (String index : indices) {\n IndexRoutingTable indexRoutingTable = index(index);\n if (indexRoutingTable == null) {",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java",
"status": "modified"
},
{
"diff": "@@ -54,6 +54,7 @@\n import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsRequest;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsAction;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest;\n+import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryAction;\n import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryRequest;\n import org.elasticsearch.action.bulk.BulkAction;\n@@ -94,6 +95,7 @@\n import org.elasticsearch.action.update.UpdateResponse;\n import org.elasticsearch.cluster.settings.ClusterDynamicSettings;\n import org.elasticsearch.cluster.settings.DynamicSettings;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n@@ -880,7 +882,7 @@ private static void clearInterceptedActions() {\n ((InterceptingTransportService) transportService).clearInterceptedActions();\n }\n }\n- \n+\n private static void interceptTransportActions(String... actions) {\n Iterable<TransportService> transportServices = internalCluster().getInstances(TransportService.class);\n for (TransportService transportService : transportServices) {\n@@ -907,8 +909,7 @@ public static class InterceptingTransportService extends TransportService {\n private final Map<String, List<TransportRequest>> requests = new HashMap<>();\n \n @Inject\n- public InterceptingTransportService(Settings settings, Transport transport, ThreadPool threadPool,\n- NodeSettingsService nodeSettingsService, @ClusterDynamicSettings DynamicSettings dynamicSettings) {\n+ public InterceptingTransportService(Settings settings, Transport transport, ThreadPool threadPool) {\n super(settings, transport, threadPool);\n }\n ",
"filename": "src/test/java/org/elasticsearch/action/IndicesRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,248 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.node.DiscoveryNodes.Builder;\n+import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.IndexMissingException;\n+import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n+import org.junit.Before;\n+import org.junit.Test;\n+\n+import static org.hamcrest.Matchers.nullValue;\n+\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.hamcrest.Matchers.is;\n+\n+public class RoutingTableTest extends ElasticsearchAllocationTestCase {\n+\n+ private static final String TEST_INDEX_1 = \"test1\";\n+ private static final String TEST_INDEX_2 = \"test2\";\n+ private RoutingTable emptyRoutingTable;\n+ private RoutingTable testRoutingTable;\n+ private int numberOfShards;\n+ private int numberOfReplicas;\n+ private int shardsPerIndex;\n+ private int totalNumberOfShards;\n+ private final static Settings DEFAULT_SETTINGS = ImmutableSettings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n+ private final AllocationService ALLOCATION_SERVICE = createAllocationService(settingsBuilder()\n+ .put(\"cluster.routing.allocation.concurrent_recoveries\", 10)\n+ .put(\"cluster.routing.allocation.node_initial_primaries_recoveries\", 10)\n+ .build());\n+ private ClusterState clusterState;\n+\n+ @Before\n+ public void setUp() throws Exception {\n+ super.setUp();\n+ this.numberOfShards = randomIntBetween(1, 5);\n+ this.numberOfReplicas = randomIntBetween(1, 5);\n+ this.shardsPerIndex = this.numberOfShards * (this.numberOfReplicas + 1);\n+ this.totalNumberOfShards = this.shardsPerIndex * 2;\n+ logger.info(\"Setup test with \" + this.numberOfShards + \" shards and \" + this.numberOfReplicas + \" replicas.\");\n+ this.emptyRoutingTable = new RoutingTable.Builder().build();\n+ MetaData metaData = MetaData.builder()\n+ .put(createIndexMetaData(TEST_INDEX_1))\n+ .put(createIndexMetaData(TEST_INDEX_2))\n+ .build();\n+\n+ this.testRoutingTable = new RoutingTable.Builder()\n+ .add(new 
IndexRoutingTable.Builder(TEST_INDEX_1).initializeAsNew(metaData.index(TEST_INDEX_1)).build())\n+ .add(new IndexRoutingTable.Builder(TEST_INDEX_2).initializeAsNew(metaData.index(TEST_INDEX_2)).build())\n+ .build();\n+ this.clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(testRoutingTable).build();\n+ }\n+\n+ /**\n+ * puts primary shard routings into initializing state\n+ */\n+ private void initPrimaries() {\n+ logger.info(\"adding \" + (this.numberOfReplicas + 1) + \" nodes and performing rerouting\");\n+ Builder discoBuilder = DiscoveryNodes.builder();\n+ for (int i=0; i<this.numberOfReplicas+1;i++) {\n+ discoBuilder = discoBuilder.put(newNode(\"node\"+i));\n+ }\n+ this.clusterState = ClusterState.builder(clusterState).nodes(discoBuilder).build();\n+ RoutingAllocation.Result rerouteResult = ALLOCATION_SERVICE.reroute(clusterState);\n+ this.testRoutingTable = rerouteResult.routingTable();\n+ assertThat(rerouteResult.changed(), is(true));\n+ this.clusterState = ClusterState.builder(clusterState).routingTable(rerouteResult.routingTable()).build();\n+ }\n+\n+ private void startInitializingShards(String index) {\n+ this.clusterState = ClusterState.builder(clusterState).routingTable(this.testRoutingTable).build();\n+ logger.info(\"start primary shards for index \" + index);\n+ RoutingAllocation.Result rerouteResult = ALLOCATION_SERVICE.applyStartedShards(this.clusterState, this.clusterState.routingNodes().shardsWithState(index, INITIALIZING));\n+ this.clusterState = ClusterState.builder(clusterState).routingTable(rerouteResult.routingTable()).build();\n+ this.testRoutingTable = rerouteResult.routingTable();\n+ }\n+\n+ private IndexMetaData.Builder createIndexMetaData(String indexName) {\n+ return new IndexMetaData.Builder(indexName)\n+ .settings(DEFAULT_SETTINGS)\n+ .numberOfReplicas(this.numberOfReplicas)\n+ .numberOfShards(this.numberOfShards);\n+ }\n+\n+ @Test\n+ public void testAllShards() {\n+ assertThat(this.emptyRoutingTable.allShards().size(), is(0));\n+ assertThat(this.testRoutingTable.allShards().size(), is(this.totalNumberOfShards));\n+\n+ assertThat(this.testRoutingTable.allShards(TEST_INDEX_1).size(), is(this.shardsPerIndex));\n+ try {\n+ assertThat(this.testRoutingTable.allShards(\"not_existing\").size(), is(0));\n+ fail(\"Exception expected when calling allShards() with non existing index name\");\n+ } catch (IndexMissingException e) {\n+ // expected\n+ }\n+ }\n+\n+ @Test\n+ public void testHasIndex() {\n+ assertThat(this.testRoutingTable.hasIndex(TEST_INDEX_1), is(true));\n+ assertThat(this.testRoutingTable.hasIndex(\"foobar\"), is(false));\n+ }\n+\n+ @Test\n+ public void testIndex() {\n+ assertThat(this.testRoutingTable.index(TEST_INDEX_1).getIndex(), is(TEST_INDEX_1));\n+ assertThat(this.testRoutingTable.index(\"foobar\"), is(nullValue()));\n+ }\n+\n+ @Test\n+ public void testIndicesRouting() {\n+ assertThat(this.testRoutingTable.indicesRouting().size(), is(2));\n+ assertThat(this.testRoutingTable.getIndicesRouting().size(), is(2));\n+ assertSame(this.testRoutingTable.getIndicesRouting(), this.testRoutingTable.indicesRouting());\n+ }\n+\n+ @Test\n+ public void testShardsWithState() {\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.UNASSIGNED).size(), is(this.totalNumberOfShards));\n+\n+ initPrimaries();\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.UNASSIGNED).size(), is(this.totalNumberOfShards - 2 * this.numberOfShards));\n+ 
assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.INITIALIZING).size(), is(2 * this.numberOfShards));\n+\n+ startInitializingShards(TEST_INDEX_1);\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.STARTED).size(), is(this.numberOfShards));\n+ int initializingExpected = this.numberOfShards + this.numberOfShards * this.numberOfReplicas;\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.INITIALIZING).size(), is(initializingExpected));\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.UNASSIGNED).size(), is(this.totalNumberOfShards - initializingExpected - this.numberOfShards));\n+\n+ startInitializingShards(TEST_INDEX_2);\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.STARTED).size(), is(2 * this.numberOfShards));\n+ initializingExpected = 2 * this.numberOfShards * this.numberOfReplicas;\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.INITIALIZING).size(), is(initializingExpected));\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.UNASSIGNED).size(), is(this.totalNumberOfShards - initializingExpected - 2 * this.numberOfShards));\n+\n+ // now start all replicas too\n+ startInitializingShards(TEST_INDEX_1);\n+ startInitializingShards(TEST_INDEX_2);\n+ assertThat(this.testRoutingTable.shardsWithState(ShardRoutingState.STARTED).size(), is(this.totalNumberOfShards));\n+ }\n+\n+ @Test\n+ public void testActivePrimaryShardsGrouped() {\n+ assertThat(this.emptyRoutingTable.activePrimaryShardsGrouped(new String[0], true).size(), is(0));\n+ assertThat(this.emptyRoutingTable.activePrimaryShardsGrouped(new String[0], false).size(), is(0));\n+\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(0));\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.numberOfShards));\n+\n+ initPrimaries();\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(0));\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.numberOfShards));\n+\n+ startInitializingShards(TEST_INDEX_1);\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(this.numberOfShards));\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, false).size(), is(this.numberOfShards));\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.numberOfShards));\n+\n+ startInitializingShards(TEST_INDEX_2);\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_2}, false).size(), is(this.numberOfShards));\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, false).size(), is(2 * this.numberOfShards));\n+ assertThat(this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, true).size(), is(2 * this.numberOfShards));\n+\n+ try {\n+ this.testRoutingTable.activePrimaryShardsGrouped(new String[] {TEST_INDEX_1, \"not_exists\"}, true);\n+ fail(\"Calling with non-existing index name should raise IndexMissingException\");\n+ } catch (IndexMissingException e) {\n+ // expected\n+ }\n+ }\n+\n+ @Test\n+ public void testAllActiveShardsGrouped() {\n+ assertThat(this.emptyRoutingTable.allActiveShardsGrouped(new String[0], 
true).size(), is(0));\n+ assertThat(this.emptyRoutingTable.allActiveShardsGrouped(new String[0], false).size(), is(0));\n+\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(0));\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.shardsPerIndex));\n+\n+ initPrimaries();\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(0));\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.shardsPerIndex));\n+\n+ startInitializingShards(TEST_INDEX_1);\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(this.numberOfShards));\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, false).size(), is(this.numberOfShards));\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.shardsPerIndex));\n+\n+ startInitializingShards(TEST_INDEX_2);\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_2}, false).size(), is(this.numberOfShards));\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, false).size(), is(2 * this.numberOfShards));\n+ assertThat(this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, true).size(), is(this.totalNumberOfShards));\n+\n+ try {\n+ this.testRoutingTable.allActiveShardsGrouped(new String[] {TEST_INDEX_1, \"not_exists\"}, true);\n+ } catch (IndexMissingException e) {\n+ fail(\"Calling with non-existing index should be ignored at the moment\");\n+ }\n+ }\n+\n+ @Test\n+ public void testAllAssignedShardsGrouped() {\n+ assertThat(this.testRoutingTable.allAssignedShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(0));\n+ assertThat(this.testRoutingTable.allAssignedShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.shardsPerIndex));\n+\n+ initPrimaries();\n+ assertThat(this.testRoutingTable.allAssignedShardsGrouped(new String[] {TEST_INDEX_1}, false).size(), is(this.numberOfShards));\n+ assertThat(this.testRoutingTable.allAssignedShardsGrouped(new String[] {TEST_INDEX_1}, true).size(), is(this.shardsPerIndex));\n+\n+ assertThat(this.testRoutingTable.allAssignedShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, false).size(), is(2 * this.numberOfShards));\n+ assertThat(this.testRoutingTable.allAssignedShardsGrouped(new String[] {TEST_INDEX_1, TEST_INDEX_2}, true).size(), is(this.totalNumberOfShards));\n+\n+ try {\n+ this.testRoutingTable.allAssignedShardsGrouped(new String[] {TEST_INDEX_1, \"not_exists\"}, false);\n+ } catch (IndexMissingException e) {\n+ fail(\"Calling with non-existing index should be ignored at the moment\");\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/cluster/routing/RoutingTableTest.java",
"status": "added"
}
]
} |
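The new RoutingTableTest in the record above derives all of its expected counts from two randomized inputs, so the assertions are easier to follow with concrete numbers plugged in. The following stand-alone sketch uses assumed fixed values in place of the test's `randomIntBetween(1, 5)` inputs and plain Java (no Elasticsearch classes); it only walks through the same shard-count arithmetic the test asserts.

```java
// Worked example of the shard-count arithmetic in RoutingTableTest,
// with fixed values substituted for the test's randomized inputs.
public class ShardCountArithmetic {
    public static void main(String[] args) {
        int numberOfShards = 3;     // primaries per index (assumed value)
        int numberOfReplicas = 2;   // replicas per primary (assumed value)

        int shardsPerIndex = numberOfShards * (numberOfReplicas + 1);   // 3 * 3 = 9
        int totalNumberOfShards = shardsPerIndex * 2;                   // two test indices -> 18

        // After initPrimaries(): only the primaries of both indices are INITIALIZING.
        int initializing = 2 * numberOfShards;                          // 6
        int unassigned = totalNumberOfShards - initializing;            // 12

        // After startInitializingShards(TEST_INDEX_1): index 1 primaries are STARTED,
        // its replicas become INITIALIZING, and index 2 primaries are still INITIALIZING.
        int started = numberOfShards;                                                // 3
        int initializingNow = numberOfShards + numberOfShards * numberOfReplicas;    // 3 + 6 = 9
        int unassignedNow = totalNumberOfShards - initializingNow - numberOfShards;  // 18 - 9 - 3 = 6

        System.out.printf("perIndex=%d total=%d started=%d initializing=%d unassigned=%d (was %d/%d)%n",
                shardsPerIndex, totalNumberOfShards, started, initializingNow, unassignedNow,
                initializing, unassigned);
    }
}
```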
{
"body": "At the moment the IndexingMemoryController can try to update the index buffer memory of shards at any give moment. This update involves a flush, which may cause a FlushNotAllowedEngineException to be thrown in a concurrently finalizing recovery.\n\nCloses #6642\n",
"comments": [
{
"body": "left a bunch of comments.... the functionality makes lots of sense IMO\n",
"created_at": "2014-07-02T07:22:58Z"
},
{
"body": "2 minor nitpicks -- LGTM\n",
"created_at": "2014-07-02T10:00:48Z"
}
],
"number": 6667,
"title": "IndexingMemoryController should only update buffer settings of fully recovered shards"
} | {
"body": "To support real time gets, the engine keeps an in-memory map of recently index docs and their location in the translog. This is needed until documents become visible in Lucene. With 1.3.0, we have improved this map and made tightly integrated with refresh cycles in Lucene in order to keep the memory signature to a bare minimum. On top of that, if the version map grows above a 25% of the index buffer size, we proactively refresh in order to be able to trim the version map back to 0 (see #6363) . In the same release, we have fixed an issue where an update to the indexing buffer could result in an unwanted exception during recovery (#6667) . We solved this by waiting with updating the size of the index buffer until the shard was fully recovered. Sadly this two together can have a negative impact on the speed of translog recovery.\n\nDuring the second phase of recovery we replay all operations that happened on the shard during the first phase of copying files. In parallel we start indexing new documents into the new created shard. At some point (phase 3 in the recovery), the translog replay starts to send operation which have already been indexed into the shard. The version map is crucial in being able to quickly detect this and skip the replayed operations, without hitting lucene. Sadly #6667 (only updating the index memory buffer once shard is started) means that a shard is using the default 64MB for it's index buffer, and thus only 16MB (25%) for the version map. This much less then the default index buffer size 10% of machine memory (shared between shards).\n\nSince we don't flush anymore when updating the memory buffer, we can remove #6667 and update recovering shards as well. Also, we make the version map max size configurable, with the same default of 25% of the current index buffer.\n",
"number": 10046,
"review_comments": [
{
"body": "Can we include value's value in this error message?\n",
"created_at": "2015-03-10T12:13:07Z"
},
{
"body": "Same here?\n",
"created_at": "2015-03-10T12:13:16Z"
},
{
"body": "Same here?\n",
"created_at": "2015-03-10T12:14:30Z"
},
{
"body": "Do/should we need to document this new setting? It's very expert ...\n",
"created_at": "2015-03-10T12:17:48Z"
},
{
"body": "Because we don't trigger a refresh here, if the versionMapSize decreased to a value lower than amount of RAM versionMap is now using, we won't free up that RAM until the next indexing op comes to this engine? Not sure it matters (will users expect to use this to \"free up RAM used by indexer\"?).\n",
"created_at": "2015-03-10T12:25:47Z"
},
{
"body": "I think we shouldn't. The only reason I added it is to be able to change it in the cases we suspect this is potentially the source of an issue, mostly to validate it's not.\n",
"created_at": "2015-03-13T14:22:40Z"
},
{
"body": "will change\n",
"created_at": "2015-03-13T14:22:51Z"
},
{
"body": "will change\n",
"created_at": "2015-03-13T14:23:06Z"
},
{
"body": "here we will get it from the percentage error\n",
"created_at": "2015-03-13T14:23:16Z"
},
{
"body": "I don't think it will matter? I tried to keep this as simple as possible. This shouldn't be used as a way to reduced size quickly. On top of that - a refresh will trim it down anyway, no?\n",
"created_at": "2015-03-13T14:25:47Z"
},
{
"body": "True, the next refresh will clear it ... good enough.\n",
"created_at": "2015-03-13T15:27:32Z"
},
{
"body": "can we have a unittest for only this validator, that would be awesome\n",
"created_at": "2015-03-13T16:51:21Z"
},
{
"body": "this is really odd to return null here? unrelated I guesss.\n",
"created_at": "2015-03-13T16:52:03Z"
},
{
"body": "my bad. will add.\n",
"created_at": "2015-03-13T17:16:44Z"
},
{
"body": "yeah. that's the existing pattern.\n",
"created_at": "2015-03-13T17:17:03Z"
}
],
"title": "Engine: update index buffer size during recovery and allow configuring version map size"
} | {
"commits": [
{
"message": "Engine: update index buffer size during recovery and allow configuring version map size.\n\nTo support real time gets, the engine keeps an in-memory map of recently index docs and their location in the translog. This is needed until documents become visible in Lucene. With 1.3.0, we have improved this map and made tightly integrated with refresh cycles in Lucene in order to keep the memory signature to a bare minimum. On top of that, if the version map grows above a 25% of the index buffer size, we proactively refresh in order to be able to trim the version map back to 0 (see #6363) . In the same release, we have fixed an issue where an update to the indexing buffer could result in an unwanted exception during recovery (#6667) . We solved this by waiting with updating the size of the index buffer until the shard was fully recovered. Sadly this two together can have a negative impact on the speed of translog recovery.\n\nDuring the second phase of recovery we replay all operations that happened on the shard during the first phase of copying files. In parallel we start indexing new documents into the new created shard. At some point (phase 3 in the recovery), the translog replay starts to send operation which have already been indexed into the shard. The version map is crucial in being able to quickly detect this and skip the replayed operations, without hitting lucene. Sadly #6667 (only updating the index memory buffer once shard is started) means that a shard is using the default 64MB for it's index buffer, and thus only 16MB (25%) for the version map. This much less then the default index buffer size 10% of machine memory (shared between shards).\n\nSince we don't flush anymore when updating the memory buffer, we can remove #6667 and update recovering shards as well. Also, we make the version map max size configurable, with the same default of 25% of the current index buffer."
},
{
"message": "feedback"
},
{
"message": "more unit testing"
}
],
"files": [
{
"diff": "@@ -205,6 +205,44 @@ public String validate(String setting, String value) {\n }\n };\n \n+ public static final Validator PERCENTAGE = new Validator() {\n+ @Override\n+ public String validate(String setting, String value) {\n+ try {\n+ if (value == null) {\n+ return \"the value of \" + setting + \" can not be null\";\n+ }\n+ if (!value.endsWith(\"%\")) {\n+ return \"the value [\" + value + \"] for \" + setting + \" must end with %\";\n+ }\n+ final double asDouble = Double.parseDouble(value.substring(0, value.length() - 1));\n+ if (asDouble < 0.0 || asDouble > 100.0) {\n+ return \"the value [\" + value + \"] for \" + setting + \" must be a percentage between 0% and 100%\";\n+ }\n+ } catch (NumberFormatException ex) {\n+ return ex.getMessage();\n+ }\n+ return null;\n+ }\n+ };\n+\n+\n+ public static final Validator BYTES_SIZE_OR_PERCENTAGE = new Validator() {\n+ @Override\n+ public String validate(String setting, String value) {\n+ String byteSize = BYTES_SIZE.validate(setting, value);\n+ if (byteSize != null) {\n+ String percentage = PERCENTAGE.validate(setting, value);\n+ if (percentage == null) {\n+ return null;\n+ }\n+ return percentage + \" or be a valid bytes size value, like [16mb]\";\n+ }\n+ return null;\n+ }\n+ };\n+\n+\n public static final Validator MEMORY_SIZE = new Validator() {\n @Override\n public String validate(String setting, String value) {",
"filename": "src/main/java/org/elasticsearch/cluster/settings/Validator.java",
"status": "modified"
},
{
"diff": "@@ -23,7 +23,6 @@\n import org.apache.lucene.index.IndexWriterConfig;\n import org.apache.lucene.search.similarities.Similarity;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n@@ -53,6 +52,8 @@ public final class EngineConfig {\n private volatile boolean failOnMergeFailure = true;\n private volatile boolean failEngineOnCorruption = true;\n private volatile ByteSizeValue indexingBufferSize;\n+ private volatile ByteSizeValue versionMapSize;\n+ private volatile String versionMapSizeSetting;\n private final int indexConcurrency;\n private volatile boolean compoundOnFlush = true;\n private long gcDeletesInMillis = DEFAULT_GC_DELETES.millis();\n@@ -131,12 +132,20 @@ public final class EngineConfig {\n */\n public static final String INDEX_CHECKSUM_ON_MERGE = \"index.checksum_on_merge\";\n \n+ /**\n+ * The maximum size the version map should grow to before issuing a refresh. Can be an absolute value or a percentage of\n+ * the current index memory buffer (defaults to 25%)\n+ */\n+ public static final String INDEX_VERSION_MAP_SIZE = \"index.version_map_size\";\n+\n \n public static final TimeValue DEFAULT_REFRESH_INTERVAL = new TimeValue(1, TimeUnit.SECONDS);\n public static final TimeValue DEFAULT_GC_DELETES = TimeValue.timeValueSeconds(60);\n public static final ByteSizeValue DEFAUTL_INDEX_BUFFER_SIZE = new ByteSizeValue(64, ByteSizeUnit.MB);\n public static final ByteSizeValue INACTIVE_SHARD_INDEXING_BUFFER = ByteSizeValue.parseBytesSizeValue(\"500kb\");\n \n+ public static final String DEFAULT_VERSION_MAP_SIZE = \"25%\";\n+\n private static final String DEFAULT_CODEC_NAME = \"default\";\n \n \n@@ -167,13 +176,49 @@ public EngineConfig(ShardId shardId, boolean optimizeAutoGenerateId, ThreadPool\n failEngineOnCorruption = indexSettings.getAsBoolean(INDEX_FAIL_ON_CORRUPTION_SETTING, true);\n failOnMergeFailure = indexSettings.getAsBoolean(INDEX_FAIL_ON_MERGE_FAILURE_SETTING, true);\n gcDeletesInMillis = indexSettings.getAsTime(INDEX_GC_DELETES_SETTING, EngineConfig.DEFAULT_GC_DELETES).millis();\n+ versionMapSizeSetting = indexSettings.get(INDEX_VERSION_MAP_SIZE, DEFAULT_VERSION_MAP_SIZE);\n+ updateVersionMapSize();\n+ }\n+\n+ /** updates {@link #versionMapSize} based on current setting and {@link #indexingBufferSize} */\n+ private void updateVersionMapSize() {\n+ if (versionMapSizeSetting.endsWith(\"%\")) {\n+ double percent = Double.parseDouble(versionMapSizeSetting.substring(0, versionMapSizeSetting.length() - 1));\n+ versionMapSize = new ByteSizeValue((long) (((double) indexingBufferSize.bytes() * (percent / 100))));\n+ } else {\n+ versionMapSize = ByteSizeValue.parseBytesSizeValue(versionMapSizeSetting);\n+ }\n+ }\n+\n+ /**\n+ * Settings the version map size that should trigger a refresh. See {@link #INDEX_VERSION_MAP_SIZE} for details.\n+ */\n+ public void setVersionMapSizeSetting(String versionMapSizeSetting) {\n+ this.versionMapSizeSetting = versionMapSizeSetting;\n+ updateVersionMapSize();\n+ }\n+\n+ /**\n+ * current setting for the version map size that should trigger a refresh. 
See {@link #INDEX_VERSION_MAP_SIZE} for details.\n+ */\n+ public String getVersionMapSizeSetting() {\n+ return versionMapSizeSetting;\n+ }\n+\n+\n+ /**\n+ * returns the size of the version map that should trigger a refresh\n+ */\n+ public ByteSizeValue getVersionMapSize() {\n+ return versionMapSize;\n }\n \n /**\n * Sets the indexing buffer\n */\n public void setIndexingBufferSize(ByteSizeValue indexingBufferSize) {\n this.indexingBufferSize = indexingBufferSize;\n+ updateVersionMapSize();\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/index/engine/EngineConfig.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,8 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.SearcherFactory;\n import org.apache.lucene.search.SearcherManager;\n-import org.apache.lucene.store.*;\n+import org.apache.lucene.store.AlreadyClosedException;\n+import org.apache.lucene.store.LockObtainFailedException;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchException;\n@@ -370,11 +371,10 @@ public void index(Index index) throws EngineException {\n }\n \n /**\n- * Forces a refresh if the versionMap is using too much RAM (currently > 25% of IndexWriter's RAM buffer).\n+ * Forces a refresh if the versionMap is using too much RAM\n */\n private void checkVersionMapRefresh() {\n- // TODO: we force refresh when versionMap is using > 25% of IW's RAM buffer; should we make this separately configurable?\n- if (versionMap.ramBytesUsedForRefresh() > 0.25 * engineConfig.getIndexingBufferSize().bytes() && versionMapRefreshPending.getAndSet(true) == false) {\n+ if (versionMap.ramBytesUsedForRefresh() > config().getVersionMapSize().bytes() && versionMapRefreshPending.getAndSet(true) == false) {\n try {\n if (isClosed.get()) {\n // no point...",
"filename": "src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -39,9 +39,9 @@\n import org.elasticsearch.index.store.support.AbstractIndexStore;\n import org.elasticsearch.index.translog.TranslogService;\n import org.elasticsearch.index.translog.fs.FsTranslog;\n+import org.elasticsearch.indices.IndicesWarmer;\n import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n import org.elasticsearch.indices.ttl.IndicesTTLService;\n-import org.elasticsearch.indices.IndicesWarmer;\n \n /**\n */\n@@ -87,6 +87,7 @@ public IndexDynamicSettingsModule() {\n indexDynamicSettings.addDynamicSetting(EngineConfig.INDEX_FAIL_ON_CORRUPTION_SETTING, Validator.BOOLEAN);\n indexDynamicSettings.addDynamicSetting(EngineConfig.INDEX_CHECKSUM_ON_MERGE, Validator.BOOLEAN);\n indexDynamicSettings.addDynamicSetting(IndexShard.INDEX_FLUSH_ON_CLOSE, Validator.BOOLEAN);\n+ indexDynamicSettings.addDynamicSetting(EngineConfig.INDEX_VERSION_MAP_SIZE, Validator.BYTES_SIZE_OR_PERCENTAGE);\n indexDynamicSettings.addDynamicSetting(ShardSlowLogIndexingService.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN, Validator.TIME);\n indexDynamicSettings.addDynamicSetting(ShardSlowLogIndexingService.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO, Validator.TIME);\n indexDynamicSettings.addDynamicSetting(ShardSlowLogIndexingService.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG, Validator.TIME);",
"filename": "src/main/java/org/elasticsearch/index/settings/IndexDynamicSettingsModule.java",
"status": "modified"
},
{
"diff": "@@ -962,7 +962,8 @@ public void addFailedEngineListener(Engine.FailedEngineListener failedEngineList\n public void updateBufferSize(ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n ByteSizeValue preValue = config.getIndexingBufferSize();\n config.setIndexingBufferSize(shardIndexingBufferSize);\n- if (preValue.bytes() != shardIndexingBufferSize.bytes()) {\n+ // update engine if it is already started.\n+ if (preValue.bytes() != shardIndexingBufferSize.bytes() && engineUnsafe() != null) {\n // its inactive, make sure we do a refresh / full IW flush in this case, since the memory\n // changes only after a \"data\" change has happened to the writer\n // the index writer lazily allocates memory and a refresh will clean it all up.\n@@ -1050,6 +1051,10 @@ public void onRefreshSettings(Settings settings) {\n config.setChecksumOnMerge(checksumOnMerge);\n change = true;\n }\n+ final String versionMapSize = settings.get(EngineConfig.INDEX_VERSION_MAP_SIZE, config.getVersionMapSizeSetting());\n+ if (config.getVersionMapSizeSetting().equals(versionMapSize) == false) {\n+ config.setVersionMapSizeSetting(versionMapSize);\n+ }\n }\n if (change) {\n refresh(\"apply settings\");\n@@ -1198,13 +1203,17 @@ private void checkIndex(boolean throwException) throws IndexShardException {\n }\n \n public Engine engine() {\n- Engine engine = this.currentEngineReference.get();\n+ Engine engine = engineUnsafe();\n if (engine == null) {\n throw new EngineClosedException(shardId);\n }\n return engine;\n }\n \n+ protected Engine engineUnsafe() {\n+ return this.currentEngineReference.get();\n+ }\n+\n class ShardEngineFailListener implements Engine.FailedEngineListener {\n private final CopyOnWriteArrayList<Engine.FailedEngineListener> delegates = new CopyOnWriteArrayList<>();\n ",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -28,14 +28,13 @@\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n-import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.EngineClosedException;\n import org.elasticsearch.index.engine.EngineConfig;\n import org.elasticsearch.index.engine.FlushNotAllowedEngineException;\n-import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.translog.Translog;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.monitor.jvm.JvmInfo;\n@@ -65,7 +64,8 @@ public class IndexingMemoryController extends AbstractLifecycleComponent<Indexin\n \n private volatile ScheduledFuture scheduler;\n \n- private static final EnumSet<IndexShardState> CAN_UPDATE_INDEX_BUFFER_STATES = EnumSet.of(IndexShardState.POST_RECOVERY, IndexShardState.STARTED, IndexShardState.RELOCATED);\n+ private static final EnumSet<IndexShardState> CAN_UPDATE_INDEX_BUFFER_STATES = EnumSet.of(\n+ IndexShardState.RECOVERING, IndexShardState.POST_RECOVERY, IndexShardState.STARTED, IndexShardState.RELOCATED);\n \n @Inject\n public IndexingMemoryController(Settings settings, ThreadPool threadPool, IndicesService indicesService) {",
"filename": "src/main/java/org/elasticsearch/indices/memory/IndexingMemoryController.java",
"status": "modified"
},
{
"diff": "@@ -161,6 +161,7 @@ public void run() {\n \n long totalRecoveryTime = 0;\n long startTime = System.currentTimeMillis();\n+ long[] recoveryTimes = new long[3];\n for (int iteration = 0; iteration < 3; iteration++) {\n logger.info(\"--> removing replicas\");\n client1.admin().indices().prepareUpdateSettings(INDEX_NAME).setSettings(IndexMetaData.SETTING_NUMBER_OF_REPLICAS + \": 0\").get();\n@@ -170,7 +171,9 @@ public void run() {\n client1.admin().cluster().prepareHealth(INDEX_NAME).setWaitForGreenStatus().setTimeout(\"15m\").get();\n long recoveryTime = System.currentTimeMillis() - recoveryStart;\n totalRecoveryTime += recoveryTime;\n+ recoveryTimes[iteration] = recoveryTime;\n logger.info(\"--> recovery done in [{}]\", new TimeValue(recoveryTime));\n+\n // sleep some to let things clean up\n Thread.sleep(10000);\n }\n@@ -185,7 +188,9 @@ public void run() {\n \n backgroundLogger.join();\n \n- logger.info(\"average doc/s [{}], average relocation time [{}]\", (endDocIndexed - startDocIndexed) * 1000.0 / totalTime, new TimeValue(totalRecoveryTime / 3));\n+ logger.info(\"average doc/s [{}], average relocation time [{}], taking [{}], [{}], [{}]\", (endDocIndexed - startDocIndexed) * 1000.0 / totalTime, new TimeValue(totalRecoveryTime / 3),\n+ TimeValue.timeValueMillis(recoveryTimes[0]), TimeValue.timeValueMillis(recoveryTimes[1]), TimeValue.timeValueMillis(recoveryTimes[2])\n+ );\n \n client1.close();\n node1.close();",
"filename": "src/test/java/org/elasticsearch/benchmark/recovery/ReplicaRecoveryBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -22,9 +22,7 @@\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.notNullValue;\n-import static org.hamcrest.Matchers.nullValue;\n+import static org.hamcrest.Matchers.*;\n \n /**\n *\n@@ -83,6 +81,24 @@ public void testValidators() throws Exception {\n assertThat(Validator.POSITIVE_INTEGER.validate(\"\", \"0\"), notNullValue());\n assertThat(Validator.POSITIVE_INTEGER.validate(\"\", \"-1\"), notNullValue());\n assertThat(Validator.POSITIVE_INTEGER.validate(\"\", \"10.2\"), notNullValue());\n+\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"asdasd\"), notNullValue());\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"-1\"), notNullValue());\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"20\"), notNullValue()); // we expect 20%\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"-1%\"), notNullValue());\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"101%\"), notNullValue());\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"100%\"), nullValue());\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"99%\"), nullValue());\n+ assertThat(Validator.PERCENTAGE.validate(\"\", \"0%\"), nullValue());\n+\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"asdasd\"), notNullValue());\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"20\"), nullValue());\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"20mb\"), nullValue());\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"-1%\"), notNullValue());\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"101%\"), notNullValue());\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"100%\"), nullValue());\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"99%\"), nullValue());\n+ assertThat(Validator.BYTES_SIZE_OR_PERCENTAGE.validate(\"\", \"0%\"), nullValue());\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/cluster/settings/SettingsValidatorTests.java",
"status": "modified"
},
{
"diff": "@@ -19,13 +19,13 @@\n package org.elasticsearch.index.engine;\n \n import org.apache.lucene.index.LiveIndexWriterConfig;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexService;\n-import org.elasticsearch.index.engine.EngineConfig;\n-import org.elasticsearch.index.engine.InternalEngine;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n \n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n \n public class InternalEngineSettingsTest extends ElasticsearchSingleNodeTest {\n@@ -47,21 +47,30 @@ public void testSettingsUpdate() {\n client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(ImmutableSettings.builder().put(EngineConfig.INDEX_CHECKSUM_ON_MERGE, false).build()).get();\n assertThat(engine.getCurrentIndexWriterConfig().getCheckIntegrityAtMerge(), is(false));\n \n+ // VERSION MAP SIZE\n+ long indexBufferSize = engine.config().getIndexingBufferSize().bytes();\n+ long versionMapSize = engine.config().getVersionMapSize().bytes();\n+ assertThat(versionMapSize, equalTo((long) (indexBufferSize * 0.25)));\n+\n final int iters = between(1, 20);\n for (int i = 0; i < iters; i++) {\n boolean compoundOnFlush = randomBoolean();\n boolean failOnCorruption = randomBoolean();\n boolean failOnMerge = randomBoolean();\n boolean checksumOnMerge = randomBoolean();\n long gcDeletes = Math.max(0, randomLong());\n+ boolean versionMapAsPercent = randomBoolean();\n+ double versionMapPercent = randomIntBetween(0, 100);\n+ long versionMapSizeInMB = randomIntBetween(10, 20);\n+ String versionMapString = versionMapAsPercent ? versionMapPercent + \"%\" : versionMapSizeInMB + \"mb\";\n \n Settings build = ImmutableSettings.builder()\n .put(EngineConfig.INDEX_FAIL_ON_CORRUPTION_SETTING, failOnCorruption)\n .put(EngineConfig.INDEX_COMPOUND_ON_FLUSH, compoundOnFlush)\n .put(EngineConfig.INDEX_GC_DELETES_SETTING, gcDeletes)\n .put(EngineConfig.INDEX_FAIL_ON_MERGE_FAILURE_SETTING, failOnMerge)\n .put(EngineConfig.INDEX_CHECKSUM_ON_MERGE, checksumOnMerge)\n-\n+ .put(EngineConfig.INDEX_VERSION_MAP_SIZE, versionMapString)\n .build();\n \n client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(build).get();\n@@ -76,7 +85,13 @@ public void testSettingsUpdate() {\n assertEquals(engine.config().isFailOnMergeFailure(), failOnMerge); // only on the holder\n assertEquals(currentIndexWriterConfig.getCheckIntegrityAtMerge(), checksumOnMerge);\n \n-\n+ indexBufferSize = engine.config().getIndexingBufferSize().bytes();\n+ versionMapSize = engine.config().getVersionMapSize().bytes();\n+ if (versionMapAsPercent) {\n+ assertThat(versionMapSize, equalTo((long) (indexBufferSize * (versionMapPercent / 100))));\n+ } else {\n+ assertThat(versionMapSize, equalTo(1024 * 1024 * versionMapSizeInMB));\n+ }\n }\n \n Settings settings = ImmutableSettings.builder()\n@@ -101,5 +116,35 @@ public void testSettingsUpdate() {\n client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(settings).get();\n assertEquals(engine.getGcDeletesInMillis(), 1000);\n assertTrue(engine.config().isEnableGcDeletes());\n+\n+ settings = ImmutableSettings.builder()\n+ .put(EngineConfig.INDEX_VERSION_MAP_SIZE, \"sdfasfd\")\n+ .build();\n+ try {\n+ client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(settings).get();\n+ fail(\"settings update didn't fail, but should have\");\n+ } catch 
(ElasticsearchIllegalArgumentException e) {\n+ // good\n+ }\n+\n+ settings = ImmutableSettings.builder()\n+ .put(EngineConfig.INDEX_VERSION_MAP_SIZE, \"-12%\")\n+ .build();\n+ try {\n+ client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(settings).get();\n+ fail(\"settings update didn't fail, but should have\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ // good\n+ }\n+\n+ settings = ImmutableSettings.builder()\n+ .put(EngineConfig.INDEX_VERSION_MAP_SIZE, \"130%\")\n+ .build();\n+ try {\n+ client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(settings).get();\n+ fail(\"settings update didn't fail, but should have\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ // good\n+ }\n }\n }",
"filename": "src/test/java/org/elasticsearch/index/engine/InternalEngineSettingsTest.java",
"status": "modified"
}
]
} |
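The core of the change in the record above is that `index.version_map_size` may be either a percentage of the current indexing buffer or an absolute byte size. Below is a minimal, hypothetical stand-alone sketch of that resolution logic; it is not the Elasticsearch implementation (the real code delegates absolute values to `ByteSizeValue.parseBytesSizeValue` and accepts all byte-size suffixes), just an illustration of how a setting like "25%" or "16mb" becomes a refresh-trigger threshold.

```java
// Simplified resolver loosely mirroring EngineConfig#updateVersionMapSize (illustrative only).
public final class VersionMapSizeResolver {

    /** Returns the version-map refresh trigger in bytes for a setting like "25%" or "16mb". */
    static long resolve(String setting, long indexingBufferBytes) {
        if (setting.endsWith("%")) {
            double percent = Double.parseDouble(setting.substring(0, setting.length() - 1));
            return (long) (indexingBufferBytes * (percent / 100.0));
        }
        if (setting.endsWith("mb")) {
            // simplification: only handles the "mb" suffix used in the test above
            return Long.parseLong(setting.substring(0, setting.length() - 2)) * 1024 * 1024;
        }
        return Long.parseLong(setting); // plain byte count
    }

    public static void main(String[] args) {
        long indexBuffer = 64L * 1024 * 1024;              // default 64mb indexing buffer
        System.out.println(resolve("25%", indexBuffer));   // 16777216 -> the old hard-coded 25%
        System.out.println(resolve("16mb", indexBuffer));  // 16777216 -> same threshold, absolute
    }
}
```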
{
"body": "When issuing a date_histogram with `America/Sao_Paulo` as timezone, the query fails with the following stack trace:\n\nStack Trace:\n\n```\norg.elasticsearch.common.joda.time.IllegalInstantException: Illegal instant due to time zone offset transition (daylight savings time 'gap'): 2014-10-19T00:00:00.000 (America/Sao_Paulo)\n at org.elasticsearch.common.joda.time.DateTimeZone.convertLocalToUTC(DateTimeZone.java:1025)\n at org.elasticsearch.common.rounding.TimeZoneRounding$TimeTimeZoneRoundingFloor.nextRoundingValue(TimeZoneRounding.java:175)\n at org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.reduce(InternalHistogram.java:323)\n at org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:140)\n at org.elasticsearch.search.controller.SearchPhaseController.merge(SearchPhaseController.java:407)\n at org.elasticsearch.action.search.type.TransportSearchCountAction$AsyncAction.moveToSecondPhase(TransportSearchCountAction.java:77)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.innerMoveToSecondPhase(TransportSearchTypeAction.java:397)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:198)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onResult(TransportSearchTypeAction.java:174)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onResult(TransportSearchTypeAction.java:171)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:568)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\n```\n\nPlease note that this is a very critical bug, we're already running on master due to another ElasticSearch bug(#9491).\n",
"comments": [
{
"body": "@cbuescher can you take a look?\n\n@greenboxal You should absolutely never be running on master (or 1.x). These are based on snapshots of Lucene, and those snapshots use as yet unreleased versions of Lucene's index format. There are zero backcompat guarantees when you do this. \n\nOnly use officially released versions of Elasticsearch (which are based on officially released version of Lucene).\n",
"created_at": "2015-03-06T23:44:41Z"
},
{
"body": "I can't use a stable release as #9491 fix didn't land in any released version(at least when we started using master).\nWithout that fix when the DST went off all date histograms(interval=1d) broke showing one day with 24 hours and another day on the same date with 1 hour of duration.\n",
"created_at": "2015-03-06T23:57:28Z"
},
{
"body": "We started to use Joda times utility method for local to UTC time conversions in the Rounding classes. It seems that doing this with the \"strict\" option set to true makes Joda time to throw exceptions when using this conversion on local time stamps that would fall in the DST gaps when DST is switched on. I could reproduce this on 1.x, but it also seems to affect the 1.4 and master branch where we now use DateTimeTone#convertLocalToUTC().\nI think it is save to switch the \"strict\" option when using this method to \"false\" to prevent it from throwing the exception. I will open a PR with tests and the changes in the Rounding classes.\n",
"created_at": "2015-03-08T19:05:02Z"
},
{
"body": "@cbuescher Do you think that the commit referenced above would handle this exception as well?\n\n```\norg.elasticsearch.index.mapper.MapperParsingException: failed to parse [day_timeinterval]\n at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:416)\n at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:709)\n at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:500)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:542)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:491)\n at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:392)\n at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:444)\n at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:150)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:512)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.index.mapper.MapperParsingException: failed to parse date field [2015-03-08 02:15:00 EST], tried both date format [yyyy-MM-dd HH:mm:ss z], and timestamp number with locale []\n at org.elasticsearch.index.mapper.core.DateFieldMapper.parseStringValue(DateFieldMapper.java:621)\n at org.elasticsearch.index.mapper.core.DateFieldMapper.innerParseCreateField(DateFieldMapper.java:549)\n at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:235)\n at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:406)\n ... 12 more\nCaused by: org.elasticsearch.common.joda.time.IllegalInstantException: Cannot parse \"2015-03-08 02:15:00 EST\": Illegal instant due to time zone offset transition (America/New_York)\n at org.elasticsearch.common.joda.time.format.DateTimeParserBucket.computeMillis(DateTimeParserBucket.java:390)\n at org.elasticsearch.common.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:749)\n at org.elasticsearch.index.mapper.core.DateFieldMapper.parseStringValue(DateFieldMapper.java:615)\n\n```\n\nI know its pretty similar, but I want to make sure it would be solved by upgrading ES before I go about doing that.\n",
"created_at": "2015-04-22T18:12:30Z"
},
{
"body": "@deusofnull Thanks, from a first glance I would not say that this is related. IllegaInstantException is used in various places in joda time. This issue concerns a case where it is raised when converting time zones in DateTimeZone that fall into the DST gap. Your issue seems to be related to parsing a date that falls into the DST gap, thats why they look similar, but the rest of the stack trace is different. It would be great if you could open a separate issue and report what you are doing and which version of ES you are running there.\n",
"created_at": "2015-04-22T20:03:41Z"
},
{
"body": "@cbuescher Ill bring up a new issue then! Thanks! \n\nWhat is strange is that I was parsing 90 days back of timestamps and I got this error for all of them. All 15k of the timestamps. If I'm correct, DST started about a month ago... So why did those dates still elicit the error?\n",
"created_at": "2015-04-22T20:06:24Z"
}
],
"number": 10025,
"title": "Illegal instant due to time zone offset transition"
} | {
"body": "This solves a problem in the time zone rounding classes where time dates that\nfall into a DST gap will cause joda time library to throw an exception.\nChanging the conversion methods 'strict' option to false prevents this.\n\nCloses #10025\n",
"number": 10031,
"review_comments": [],
"title": "Be lenient when converting local to utc time in time zone roundings"
} | {
"commits": [
{
"message": "Aggregations: Be lenient when converting local to utc time in time zone roundings\n\nThis solves a problem in the time zone rounding classes where time dates that\nfall into a DST gap will cause joda time library to throw an exception.\nChanging the conversion methods 'strict' option to false prevents this.\n\nCloses #10025"
},
{
"message": "Removed unused import"
}
],
"files": [
{
"diff": "@@ -125,7 +125,7 @@ public long roundKey(long utcMillis) {\n long timeLocal = utcMillis;\n timeLocal = timeZone.convertUTCToLocal(utcMillis);\n long rounded = field.roundFloor(timeLocal);\n- return timeZone.convertLocalToUTC(rounded, true, utcMillis);\n+ return timeZone.convertLocalToUTC(rounded, false, utcMillis);\n }\n \n @Override\n@@ -139,7 +139,7 @@ public long nextRoundingValue(long time) {\n long timeLocal = time;\n timeLocal = timeZone.convertUTCToLocal(time);\n long nextInLocalTime = durationField.add(timeLocal, 1);\n- return timeZone.convertLocalToUTC(nextInLocalTime, true);\n+ return timeZone.convertLocalToUTC(nextInLocalTime, false);\n }\n \n @Override\n@@ -184,7 +184,7 @@ public long roundKey(long utcMillis) {\n long timeLocal = utcMillis;\n timeLocal = timeZone.convertUTCToLocal(utcMillis);\n long rounded = Rounding.Interval.roundValue(Rounding.Interval.roundKey(timeLocal, interval), interval);\n- return timeZone.convertLocalToUTC(rounded, true);\n+ return timeZone.convertLocalToUTC(rounded, false);\n }\n \n @Override\n@@ -198,7 +198,7 @@ public long nextRoundingValue(long time) {\n long timeLocal = time;\n timeLocal = timeZone.convertUTCToLocal(time);\n long next = timeLocal + interval;\n- return timeZone.convertLocalToUTC(next, true);\n+ return timeZone.convertLocalToUTC(next, false);\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java",
"status": "modified"
},
{
"diff": "@@ -280,6 +280,23 @@ public void testAmbiguousHoursAfterDSTSwitch() {\n equalTo(tzRounding.round(time(\"2014-08-11T17:00:00\", JERUSALEM_TIMEZONE))));\n }\n \n+ /**\n+ * test for #10025, strict local to UTC conversion can cause joda exceptions\n+ * on DST start\n+ */\n+ @Test\n+ public void testLenientConversionDST() {\n+ DateTimeZone tz = DateTimeZone.forID(\"America/Sao_Paulo\");\n+ long start = time(\"2014-10-18T20:50:00.000\", tz);\n+ long end = time(\"2014-10-19T01:00:00.000\", tz);\n+ Rounding tzRounding = new TimeZoneRounding.TimeUnitRounding(DateTimeUnit.MINUTES_OF_HOUR, tz);\n+ Rounding dayTzRounding = new TimeZoneRounding.TimeIntervalRounding(60000, tz);\n+ for (long time = start; time < end; time = time + 60000) {\n+ assertThat(tzRounding.nextRoundingValue(time), greaterThan(time));\n+ assertThat(dayTzRounding.nextRoundingValue(time), greaterThan(time));\n+ }\n+ }\n+\n private DateTimeUnit randomTimeUnit() {\n byte id = (byte) randomIntBetween(1, 8);\n return DateTimeUnit.resolve(id);",
"filename": "src/test/java/org/elasticsearch/common/rounding/TimeZoneRoundingTests.java",
"status": "modified"
}
]
} |
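The one-character change from `true` to `false` in the diff above flips Joda-Time's local-to-UTC conversion from strict to lenient. The sketch below reproduces the failure mode outside Elasticsearch: local midnight on 2014-10-19 in America/Sao_Paulo falls inside the DST gap, so the strict conversion throws `IllegalInstantException` while the lenient one resolves to a valid instant. It assumes a standard Joda-Time 2.x dependency on the classpath and is not taken from the PR's test code.

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.IllegalInstantException;

public class LenientDstConversion {
    public static void main(String[] args) {
        DateTimeZone tz = DateTimeZone.forID("America/Sao_Paulo");

        // Local wall-clock 2014-10-19T00:00 does not exist in Sao Paulo:
        // clocks jump straight from 00:00 to 01:00 when DST starts that night.
        long localMidnight = new DateTime(2014, 10, 19, 0, 0, 0, 0, DateTimeZone.UTC).getMillis();

        // Lenient conversion (strict = false) resolves to a valid instant instead of failing.
        long utc = tz.convertLocalToUTC(localMidnight, false);
        System.out.println("lenient -> " + new DateTime(utc, tz));

        try {
            tz.convertLocalToUTC(localMidnight, true); // strict = true, as before the fix
        } catch (IllegalInstantException e) {
            System.out.println("strict rejects the gap: " + e.getMessage());
        }
    }
}
```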
{
"body": "Currently we only check if the first byte of the body is a `BYTE_OBJECT_INDEFINITE` to determine whether the content is CBOR or not. However, what we should actually do is to check whether the \"major type\" is an object.\n\nSee:\n- https://github.com/FasterXML/jackson-dataformat-cbor/blob/master/src/main/java/com/fasterxml/jackson/dataformat/cbor/CBORParser.java#L614\n- https://github.com/FasterXML/jackson-dataformat-cbor/blob/master/src/main/java/com/fasterxml/jackson/dataformat/cbor/CBORParser.java#L682\n\nAlso, CBOR can be prefixed with a self-identifying tag, `0xd9d9f7`, which we should check for as well. Currently Jackson doesn't recognise this tag, but it looks like that will change in the future: https://github.com/FasterXML/jackson-dataformat-cbor/issues/6\n",
"comments": [
{
"body": "Jackson 2.4.3 now contains the above fixes. We should upgrade and add the changes mentioned above.\n",
"created_at": "2014-11-14T11:53:39Z"
},
{
"body": "if we get a fix for this I think it should go into `1.3.6`\n",
"created_at": "2014-11-21T09:39:05Z"
},
{
"body": "do we need to do anything else than upgrading jackson? @pickypg do you have a ETA for this?\n",
"created_at": "2014-11-23T12:55:00Z"
},
{
"body": "I should have this up for review on Monday.\n\nWe need to change `XContentFactory.xContentType(...)` to support the new header. By default, the new `CBORGenerator.Feature.WRITE_TYPE_HEADER` feature is `false`, so just upgrading will do nothing (nothing breaks, but nothing improves).\n",
"created_at": "2014-11-24T05:48:16Z"
},
{
"body": "Merged\n",
"created_at": "2014-11-25T19:04:53Z"
},
{
"body": "This was reverted because the JSON tokenizer was acting up in some of the randomized tests. I am looking at the root cause (my change or just incoming changes from 2.4.3).\n",
"created_at": "2014-11-25T21:57:11Z"
},
{
"body": "@clintongormley it'd be great to have this feature. What are the chances this will get into an upcoming release of Elasticsearch?",
"created_at": "2017-05-30T14:15:35Z"
},
{
"body": "@johnrfrank this was merged into 2.0.0. We've since deprecated content detection in favor of providing a content-type header.",
"created_at": "2017-05-30T14:42:14Z"
}
],
"number": 7640,
"title": "CBOR: Improve recognition of CBOR data format"
} | {
"body": "CBOR has a special header that is optional, if exists, allows for exact detection. Also, since we know which formats we support in ES, we can support the object major type case.\ncloses #7640\n",
"number": 10026,
"review_comments": [
{
"body": "can't you simply cast to byte directly? the signed extension is preserved though\n",
"created_at": "2015-03-18T07:07:06Z"
},
{
"body": "could we have more like those?\n",
"created_at": "2015-03-18T07:08:25Z"
},
{
"body": "maybe generate some examples with other langs and try to detect them?\n",
"created_at": "2015-03-18T07:08:48Z"
},
{
"body": "Here's Python:\n\n```\n>>> ', '.join('(byte) 0x%x' % x for x in cbor.dumps({'foo': 5}))\n'(byte) 0xa1, (byte) 0x63, (byte) 0x66, (byte) 0x6f, (byte) 0x6f, (byte) 0x5'\n```\n",
"created_at": "2015-03-20T20:25:28Z"
},
{
"body": "sure\n",
"created_at": "2015-03-20T20:26:01Z"
},
{
"body": "thanks @mikemccand !\n",
"created_at": "2015-03-20T20:35:46Z"
}
],
"title": "Better detection of CBOR"
} | {
"commits": [
{
"message": "Better detection of CBOR\nCBOR has a special header that is optional, if exists, allows for exact detection. Also, since we know which formats we support in ES, we can support the object major type case.\ncloses #7640"
},
{
"message": "first review"
},
{
"message": "first review"
}
],
"files": [
{
"diff": "@@ -234,28 +234,28 @@\n <dependency>\n <groupId>com.fasterxml.jackson.core</groupId>\n <artifactId>jackson-core</artifactId>\n- <version>2.4.2</version>\n+ <version>2.4.4</version>\n <scope>compile</scope>\n </dependency>\n \n <dependency>\n <groupId>com.fasterxml.jackson.dataformat</groupId>\n <artifactId>jackson-dataformat-smile</artifactId>\n- <version>2.4.2</version>\n+ <version>2.4.4</version>\n <scope>compile</scope>\n </dependency>\n \n <dependency>\n <groupId>com.fasterxml.jackson.dataformat</groupId>\n <artifactId>jackson-dataformat-yaml</artifactId>\n- <version>2.4.2</version>\n+ <version>2.4.4</version>\n <scope>compile</scope>\n </dependency>\n \n <dependency>\n <groupId>com.fasterxml.jackson.dataformat</groupId>\n <artifactId>jackson-dataformat-cbor</artifactId>\n- <version>2.4.2</version>\n+ <version>2.4.4</version>\n <scope>compile</scope>\n </dependency>\n ",
"filename": "pom.xml",
"status": "modified"
},
{
"diff": "@@ -208,11 +208,11 @@ public static XContentType xContentType(byte[] data) {\n * Guesses the content type based on the provided input stream.\n */\n public static XContentType xContentType(InputStream si) throws IOException {\n- int first = si.read();\n+ byte first = (byte) si.read();\n if (first == -1) {\n return null;\n }\n- int second = si.read();\n+ byte second = (byte) si.read();\n if (second == -1) {\n return null;\n }\n@@ -231,9 +231,26 @@ public static XContentType xContentType(InputStream si) throws IOException {\n return XContentType.YAML;\n }\n }\n- if (first == (CBORConstants.BYTE_OBJECT_INDEFINITE & 0xff)){\n+ // CBOR logic similar to CBORFactory#hasCBORFormat\n+ if (first == CBORConstants.BYTE_OBJECT_INDEFINITE){\n+ return XContentType.CBOR;\n+ }\n+ if (CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_TAG, first)) {\n+ // Actually, specific \"self-describe tag\" is a very good indicator\n+ int third = si.read();\n+ if (third == -1) {\n+ return null;\n+ }\n+ if (first == (byte) 0xD9 && second == (byte) 0xD9 && third == (byte) 0xF7) {\n+ return XContentType.CBOR;\n+ }\n+ }\n+ // for small objects, some encoders just encode as major type object, we can safely\n+ // say its CBOR since it doesn't contradict SMILE or JSON, and its a last resort\n+ if (CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_OBJECT, first)) {\n return XContentType.CBOR;\n }\n+\n for (int i = 2; i < GUESS_HEADER_LENGTH; i++) {\n int val = si.read();\n if (val == -1) {\n@@ -279,9 +296,23 @@ public static XContentType xContentType(BytesReference bytes) {\n if (length > 2 && first == '-' && bytes.get(1) == '-' && bytes.get(2) == '-') {\n return XContentType.YAML;\n }\n- if (first == CBORConstants.BYTE_OBJECT_INDEFINITE){\n+ // CBOR logic similar to CBORFactory#hasCBORFormat\n+ if (first == CBORConstants.BYTE_OBJECT_INDEFINITE && length > 1){\n return XContentType.CBOR;\n }\n+ if (CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_TAG, first) && length > 2) {\n+ // Actually, specific \"self-describe tag\" is a very good indicator\n+ if (first == (byte) 0xD9 && bytes.get(1) == (byte) 0xD9 && bytes.get(2) == (byte) 0xF7) {\n+ return XContentType.CBOR;\n+ }\n+ }\n+ // for small objects, some encoders just encode as major type object, we can safely\n+ // say its CBOR since it doesn't contradict SMILE or JSON, and its a last resort\n+ if (CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_OBJECT, first)) {\n+ return XContentType.CBOR;\n+ }\n+\n+ // a last chance for JSON\n for (int i = 0; i < length; i++) {\n if (bytes.get(i) == '{') {\n return XContentType.JSON;",
"filename": "src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java",
"status": "modified"
},
{
"diff": "@@ -19,11 +19,14 @@\n \n package org.elasticsearch.common.xcontent;\n \n+import com.fasterxml.jackson.dataformat.cbor.CBORConstants;\n+import com.fasterxml.jackson.dataformat.smile.SmileConstants;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.io.stream.BytesStreamInput;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n+import java.io.ByteArrayInputStream;\n import java.io.IOException;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -69,4 +72,35 @@ private void testGuessType(XContentType type) throws IOException {\n assertThat(XContentFactory.xContentType(builder.string()), equalTo(type));\n }\n }\n+\n+ public void testCBORBasedOnMajorObjectDetection() {\n+ // for this {\"f \"=> 5} perl encoder for example generates:\n+ byte[] bytes = new byte[] {(byte) 0xA1, (byte) 0x43, (byte) 0x66, (byte) 6f, (byte) 6f, (byte) 0x5};\n+ assertThat(XContentFactory.xContentType(bytes), equalTo(XContentType.CBOR));\n+ //assertThat(((Number) XContentHelper.convertToMap(bytes, true).v2().get(\"foo\")).intValue(), equalTo(5));\n+\n+ // this if for {\"foo\" : 5} in python CBOR\n+ bytes = new byte[] {(byte) 0xA1, (byte) 0x63, (byte) 0x66, (byte) 0x6f, (byte) 0x6f, (byte) 0x5};\n+ assertThat(XContentFactory.xContentType(bytes), equalTo(XContentType.CBOR));\n+ assertThat(((Number) XContentHelper.convertToMap(bytes, true).v2().get(\"foo\")).intValue(), equalTo(5));\n+\n+ // also make sure major type check doesn't collide with SMILE and JSON, just in case\n+ assertThat(CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_OBJECT, SmileConstants.HEADER_BYTE_1), equalTo(false));\n+ assertThat(CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_OBJECT, (byte) '{'), equalTo(false));\n+ assertThat(CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_OBJECT, (byte) ' '), equalTo(false));\n+ assertThat(CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_OBJECT, (byte) '-'), equalTo(false));\n+ }\n+\n+ public void testCBORBasedOnMagicHeaderDetection() {\n+ byte[] bytes = new byte[] {(byte) 0xd9, (byte) 0xd9, (byte) 0xf7};\n+ assertThat(XContentFactory.xContentType(bytes), equalTo(XContentType.CBOR));\n+ }\n+\n+ public void testEmptyStream() throws Exception {\n+ ByteArrayInputStream is = new ByteArrayInputStream(new byte[0]);\n+ assertNull(XContentFactory.xContentType(is));\n+\n+ is = new ByteArrayInputStream(new byte[] {(byte) 1});\n+ assertNull(XContentFactory.xContentType(is));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/common/xcontent/XContentFactoryTests.java",
"status": "modified"
}
]
} |
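The detection rules in the record above rely on two properties of the CBOR encoding itself: the optional self-describe tag 55799 is serialized as the three bytes `0xD9 0xD9 0xF7`, and every map ("object") starts with an initial byte whose top three bits are major type 5, i.e. a byte in the range `0xA0`-`0xBF`. The following is a tiny, hypothetical sniffer written with plain bit arithmetic rather than Jackson's `CBORConstants`, just to make those two checks concrete; it is not the Elasticsearch code.

```java
// Minimal CBOR sniffer illustrating the checks added to XContentFactory (illustrative only).
public final class CborSniffer {

    static boolean looksLikeCbor(byte[] b) {
        // Self-describe tag 55799 is encoded as 0xD9 0xD9 0xF7 and is a definitive marker.
        if (b.length >= 3 && (b[0] & 0xFF) == 0xD9 && (b[1] & 0xFF) == 0xD9 && (b[2] & 0xFF) == 0xF7) {
            return true;
        }
        // Major type 5 (map) occupies the top three bits of the initial byte (0xA0..0xBF),
        // which also covers the indefinite-length map header 0xBF checked previously.
        return b.length > 0 && (b[0] & 0xE0) == 0xA0;
    }

    public static void main(String[] args) {
        // {"foo": 5} as a definite-length map: A1 (map of 1), 63 "foo", 05
        byte[] cborFoo = {(byte) 0xA1, 0x63, 0x66, 0x6F, 0x6F, 0x05};
        System.out.println(looksLikeCbor(cborFoo));                                           // true
        System.out.println(looksLikeCbor(new byte[]{(byte) 0xD9, (byte) 0xD9, (byte) 0xF7})); // true
        System.out.println(looksLikeCbor("{\"foo\":5}".getBytes()));                          // false, '{' is 0x7B
        System.out.println(looksLikeCbor(new byte[]{':', ')', '\n'}));                        // false, SMILE header starts with ':' = 0x3A
    }
}
```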
{
"body": "In #9843, HunspellServiceTests leak scheduler and timer thread pools. In this test class, there are 2 tests which check excepetion cases. When those 2 tests are @Ignored, there are no more thread leaks.\n",
"comments": [],
"number": 9849,
"title": "Scheduling threads can be leaked on exception"
} | {
"body": "An unchecked exception might be thrown when instantiating the HunspellService, leading to thread leaks in tests.\n\nCloses #9849\n",
"number": 10020,
"review_comments": [
{
"body": "typo: dictionnary -> dictionary\n",
"created_at": "2015-03-27T08:38:42Z"
},
{
"body": "typo: dictionnaries -> dictionaries\n",
"created_at": "2015-03-27T08:39:01Z"
}
],
"title": "Fix thread leak in Hunspell service tests"
} | {
"commits": [
{
"message": "Fix thread leak in Hunspell service\n\nAn unchecked exception might be thrown when instantiating the HunspellService, leading to thread leaks in tests.\n\nCloses #9849"
}
],
"files": [
{
"diff": "@@ -21,6 +21,7 @@\n import com.google.common.cache.CacheBuilder;\n import com.google.common.cache.CacheLoader;\n import com.google.common.cache.LoadingCache;\n+import com.google.common.util.concurrent.UncheckedExecutionException;\n import org.apache.lucene.analysis.hunspell.Dictionary;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.component.AbstractComponent;\n@@ -30,8 +31,8 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n \n-import java.io.*;\n-import java.net.MalformedURLException;\n+import java.io.IOException;\n+import java.io.InputStream;\n import java.nio.file.DirectoryStream;\n import java.nio.file.Files;\n import java.nio.file.Path;\n@@ -108,7 +109,7 @@ public Dictionary load(String locale) throws Exception {\n *\n * @param locale The name of the locale\n */\n- public Dictionary getDictionary(String locale) {\n+ public Dictionary getDictionary(String locale) {\n return dictionaries.getUnchecked(locale);\n }\n \n@@ -117,7 +118,7 @@ private Path resolveHunspellDirectory(Settings settings, Environment env) {\n if (location != null) {\n return Paths.get(location);\n }\n- return env.configFile().resolve( \"hunspell\");\n+ return env.configFile().resolve(\"hunspell\");\n }\n \n /**\n@@ -130,7 +131,13 @@ private void scanAndLoadDictionaries() throws IOException {\n if (Files.isDirectory(file)) {\n try (DirectoryStream<Path> inner = Files.newDirectoryStream(hunspellDir.resolve(file), \"*.dic\")) {\n if (inner.iterator().hasNext()) { // just making sure it's indeed a dictionary dir\n- dictionaries.getUnchecked(file.getFileName().toString());\n+ try {\n+ dictionaries.getUnchecked(file.getFileName().toString());\n+ } catch (UncheckedExecutionException e) {\n+ // The cache loader throws unchecked exception (see #loadDictionary()),\n+ // here we simply report the exception and continue loading the dictionaries\n+ logger.error(\"exception while loading dictionary {}\", file.getFileName(), e);\n+ }\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/indices/analysis/HunspellService.java",
"status": "modified"
},
{
"diff": "@@ -91,7 +91,6 @@ public void testCustomizeLocaleDirectory() throws Exception {\n }\n \n @Test\n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/9849\")\n public void testDicWithNoAff() throws Exception {\n Settings settings = ImmutableSettings.settingsBuilder()\n .put(\"path.conf\", getResourcePath(\"/indices/analyze/no_aff_conf_dir\"))\n@@ -111,7 +110,6 @@ public void testDicWithNoAff() throws Exception {\n }\n \n @Test\n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/9849\")\n public void testDicWithTwoAffs() throws Exception {\n Settings settings = ImmutableSettings.settingsBuilder()\n .put(\"path.conf\", getResourcePath(\"/indices/analyze/two_aff_conf_dir\"))",
"filename": "src/test/java/org/elasticsearch/indices/analyze/HunspellServiceTests.java",
"status": "modified"
}
]
} |
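The fix in the record above hinges on one Guava detail: `LoadingCache.getUnchecked` wraps any exception thrown by the `CacheLoader` in an `UncheckedExecutionException`, so catching that type lets the dictionary scan report a broken dictionary and keep going instead of aborting. The snippet below is a minimal, self-contained sketch of that pattern, assuming Guava on the classpath; the class name and the simulated failure are hypothetical and not part of `HunspellService`.

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.UncheckedExecutionException;

// Hypothetical stand-in for the dictionary scan: one failing entry must not
// abort loading of the remaining entries.
public class DictionaryScanSketch {

    private static final LoadingCache<String, String> DICTIONARIES = CacheBuilder.newBuilder()
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String locale) throws Exception {
                    if ("broken".equals(locale)) {
                        // simulates a malformed dictionary directory
                        throw new IllegalStateException("cannot parse dictionary for " + locale);
                    }
                    return "dictionary for " + locale;
                }
            });

    public static void main(String[] args) {
        for (String locale : new String[]{"en_US", "broken", "de_DE"}) {
            try {
                System.out.println(DICTIONARIES.getUnchecked(locale));
            } catch (UncheckedExecutionException e) {
                // getUnchecked wraps loader failures; log and keep scanning
                System.err.println("failed to load " + locale + ": " + e.getCause());
            }
        }
    }
}
```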
{
"body": "The issue was originally reported in https://github.com/elasticsearch/elasticsearch/issues/7980#issuecomment-76151889 If a current master node that contains all primary shards is restarted in the middle of snapshot operation, it might leave the snapshot hanging in `ABORTED` state. \n",
"comments": [
{
"body": ":+1: ran into this a few times.\n",
"created_at": "2015-03-10T23:28:59Z"
},
{
"body": "Ran into a similar issue and tried the snapshot cleanup utility. It didn't work as all shards were ignored:\n\n> Ignoring shard [[dev1_10_event.2015-03-15][4]] with state [ABORTED] on node [kyU3N9lpTIuTbdeUGp5ThQ] - node exists : [true]\n\nWhat's the reason for ignoring shards when the node exists?\n",
"created_at": "2015-06-08T08:41:08Z"
},
{
"body": "@srgclr if a node exists and a shard is in ABORTED state it can mean one of the two things - we hit #11314 or the shard is stuck in the I/O operation and we need to wait until the I/O operation is over or we need to restart the node. It's impossible for the cleanup utility to determine which state we are in. Because of this, it takes a safer route - assume that we are stuck in I/O operation and skip such shards.\n",
"created_at": "2015-06-10T13:41:26Z"
},
{
"body": "This should be solved by #11450. Closing.\n",
"created_at": "2015-08-20T22:20:08Z"
}
],
"number": 9924,
"title": "Snapshot/Restore: snapshot during rolling restart of a 2 node cluster might get stuck"
} | {
"body": "Related to #9924\n",
"number": 9981,
"review_comments": [],
"title": "Delete operation should ignore finalizing shards on nodes that no longer exist"
} | {
"commits": [
{
"message": "Snapshot/Restore: delete operation should ignore finalizing shards on nodes that no longer exist\n\nRelated to #9924"
}
],
"files": [
{
"diff": "@@ -24,7 +24,6 @@\n import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n-import org.elasticsearch.Version;\n import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.*;\n@@ -747,8 +746,7 @@ private boolean removedNodesCleanupNeeded(ClusterChangedEvent event) {\n return true;\n }\n for (DiscoveryNode node : event.nodesDelta().removedNodes()) {\n- for (ImmutableMap.Entry<ShardId, ShardSnapshotStatus> shardEntry : snapshot.shards().entrySet()) {\n- ShardSnapshotStatus shardStatus = shardEntry.getValue();\n+ for (ShardSnapshotStatus shardStatus : snapshot.shards().values()) {\n if (!shardStatus.state().completed() && node.getId().equals(shardStatus.nodeId())) {\n // At least one shard was running on the removed node - we need to fail it\n return true;\n@@ -1121,9 +1119,25 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n shards = snapshot.shards();\n endSnapshot(snapshot);\n } else {\n- // snapshot is being finalized - wait for it\n- logger.trace(\"trying to delete completed snapshot - save to delete\");\n- return currentState;\n+ boolean hasUncompletedShards = false;\n+ // Cleanup in case a node gone missing and snapshot wasn't updated for some reason\n+ for (ShardSnapshotStatus shardStatus : snapshot.shards().values()) {\n+ // Check if we still have shard running on existing nodes\n+ if (shardStatus.state().completed() == false && shardStatus.nodeId() != null && currentState.nodes().get(shardStatus.nodeId()) != null) {\n+ hasUncompletedShards = true;\n+ break;\n+ }\n+ }\n+ if (hasUncompletedShards) {\n+ // snapshot is being finalized - wait for shards to complete finalization process\n+ logger.debug(\"trying to delete completed snapshot - should wait for shards to finalize on all nodes\");\n+ return currentState;\n+ } else {\n+ // no shards to wait for - finish the snapshot\n+ logger.debug(\"trying to delete completed snapshot with no finalizing shards - can delete immediately\");\n+ shards = snapshot.shards();\n+ endSnapshot(snapshot);\n+ }\n }\n SnapshotMetaData.Entry newSnapshot = new SnapshotMetaData.Entry(snapshot, State.ABORTED, shards);\n snapshots = new SnapshotMetaData(newSnapshot);",
"filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import com.google.common.base.Predicate;\n import com.google.common.collect.ImmutableList;\n \n+import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n import org.elasticsearch.ExceptionsHelper;\n@@ -39,31 +40,32 @@\n import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n-import org.elasticsearch.cluster.metadata.MappingMetaData;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n+import org.elasticsearch.cluster.ProcessedClusterStateUpdateTask;\n+import org.elasticsearch.cluster.metadata.*;\n+import org.elasticsearch.cluster.metadata.SnapshotMetaData.*;\n+import org.elasticsearch.cluster.metadata.SnapshotMetaData.State;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.store.support.AbstractIndexStore;\n import org.elasticsearch.indices.InvalidIndexNameException;\n import org.elasticsearch.repositories.RepositoriesService;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n import org.junit.Test;\n \n-import java.io.FileOutputStream;\n-import java.nio.channels.FileChannel;\n import java.nio.channels.SeekableByteChannel;\n import java.nio.file.Files;\n-import java.nio.file.OpenOption;\n import java.nio.file.Path;\n import java.nio.file.StandardOpenOption;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n@@ -882,7 +884,7 @@ public void snapshotClosedIndexTest() throws Exception {\n logger.info(\"--> closing index test-idx-closed\");\n assertAcked(client.admin().indices().prepareClose(\"test-idx-closed\"));\n ClusterStateResponse stateResponse = client.admin().cluster().prepareState().get();\n- assertThat(stateResponse.getState().metaData().index(\"test-idx-closed\").state(), equalTo(State.CLOSE));\n+ assertThat(stateResponse.getState().metaData().index(\"test-idx-closed\").state(), equalTo(IndexMetaData.State.CLOSE));\n assertThat(stateResponse.getState().routingTable().index(\"test-idx-closed\"), nullValue());\n \n logger.info(\"--> snapshot\");\n@@ -1665,6 +1667,67 @@ public void deleteIndexDuringSnapshotTest() throws Exception {\n }\n }\n \n+\n+ @Test\n+ public void deleteOrphanSnapshotTest() throws Exception {\n+ Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(MockRepositoryModule.class.getCanonicalName()).setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDirPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))\n+ ));\n+\n+ createIndex(\"test-idx\");\n+ ensureGreen();\n+\n+ ClusterService clusterService = internalCluster().getInstance(ClusterService.class, 
internalCluster().getMasterName());\n+\n+ final CountDownLatch countDownLatch = new CountDownLatch(1);\n+\n+ logger.info(\"--> snapshot\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).setIndices(\"test-idx\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ logger.info(\"--> emulate an orphan snapshot\");\n+\n+ clusterService.submitStateUpdateTask(\"orphan snapshot test\", new ProcessedClusterStateUpdateTask() {\n+\n+ @Override\n+ public ClusterState execute(ClusterState currentState) {\n+ // Simulate orphan snapshot\n+ ImmutableMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableMap.builder();\n+ shards.put(new ShardId(\"test-idx\", 0), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n+ shards.put(new ShardId(\"test-idx\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n+ shards.put(new ShardId(\"test-idx\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n+ ImmutableList.Builder<Entry> entries = ImmutableList.builder();\n+ entries.add(new Entry(new SnapshotId(\"test-repo\", \"test-snap\"), true, State.ABORTED, ImmutableList.of(\"test-idx\"), System.currentTimeMillis(), shards.build()));\n+ MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n+ mdBuilder.putCustom(SnapshotMetaData.TYPE, new SnapshotMetaData(entries.build()));\n+ return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Throwable t) {\n+ fail();\n+ }\n+\n+ @Override\n+ public void clusterStateProcessed(String source, ClusterState oldState, final ClusterState newState) {\n+ countDownLatch.countDown();\n+ }\n+ });\n+\n+ countDownLatch.await();\n+ logger.info(\"--> try deleting the orphan snapshot\");\n+\n+ assertAcked(client.admin().cluster().prepareDeleteSnapshot(\"test-repo\", \"test-snap\").get(\"10s\"));\n+\n+ }\n+\n private boolean waitForIndex(final String index, TimeValue timeout) throws InterruptedException {\n return awaitBusy(new Predicate<Object>() {\n @Override",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
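The core of the change in the record above is the decision of whether a snapshot delete has to wait: it only waits if some shard of the snapshot is still incomplete *and* that shard's node is still part of the cluster. A minimal stand-alone sketch of that check, using hypothetical types rather than the real `SnapshotMetaData` classes, could look like this:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for a shard snapshot status: just a node id and a completed flag.
public class OrphanSnapshotCheck {

    static final class ShardStatus {
        final String nodeId;
        final boolean completed;

        ShardStatus(String nodeId, boolean completed) {
            this.nodeId = nodeId;
            this.completed = completed;
        }
    }

    // A delete must wait only if an incomplete shard is assigned to a node that still exists.
    static boolean mustWaitForShards(Map<String, ShardStatus> shards, Set<String> liveNodeIds) {
        for (ShardStatus status : shards.values()) {
            if (status.completed == false && status.nodeId != null && liveNodeIds.contains(status.nodeId)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, ShardStatus> shards = Map.of(
                "test-idx[0]", new ShardStatus("unknown-node", false),
                "test-idx[1]", new ShardStatus("unknown-node", false));
        // Every incomplete shard sits on a node that left the cluster, so the delete can proceed.
        System.out.println(mustWaitForShards(shards, Set.of("node-1", "node-2"))); // prints false
    }
}
```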
{
"body": "We currently filter test classes by suffix `Test` or `Tests` (https://github.com/elasticsearch/elasticsearch/blob/master/pom.xml#L495), but there is no automation in place to guarantee that test classes actually end with this suffix. As a result, some tests did not run on CI without anyone noticing (see for example https://github.com/elasticsearch/elasticsearch/commit/600cb886da0b8a4adae9a3a0eb688697c10c1491). Would be great if we had some automation that checks that classes that derive from Test classes like `ElasticsearchIntegrationTest` have the right suffix.\n",
"comments": [
{
"body": "+1 I think we should rather take `ElasticsearchTestCase` as the base class though.\n",
"created_at": "2015-03-02T13:48:01Z"
},
{
"body": "Or if a test class has '@Test' then the class name must end with *Test\n",
"created_at": "2015-03-02T13:57:38Z"
},
{
"body": "@martijnvg I'm afraid we don't use the `@Test` annotation consistently though... we better look also for method names that start with `test` if we take that approach.\n",
"created_at": "2015-03-02T14:02:50Z"
},
{
"body": "Yes, we take this approach we should also look for method names starting with `test`. I think we either should the test method name consistently or the test annotation, but that is a different discussion.\n",
"created_at": "2015-03-02T14:13:43Z"
},
{
"body": "thanks so much @brwe for finding this... `das glück des tüchtigen...` \n",
"created_at": "2015-03-02T16:35:10Z"
}
],
"number": 9945,
"title": "[TESTS] Make sure test end with ..Tests"
} | {
"body": "This commit adds a simple testcase that ensures all our tests end with the right naming.\n\nCloses #9945\n",
"number": 9947,
"review_comments": [
{
"body": "not directly related, but shall we also check that all of these do extend `ElasticsearchTestCase`?\n",
"created_at": "2015-03-02T14:52:39Z"
},
{
"body": "we are going over test classes only right, thanks to this line?\n",
"created_at": "2015-03-02T14:56:24Z"
},
{
"body": "the only case that we don't cover here is the classes that by mistake don't extend `ElasticsearchTestCase` (plain junit tests) whose name doesn't end with `Test` or `Tests` either..... pretty sure this never happened before. not sure we want to do anything about this. Maybe we could just separately check that there are no plain junit tests in the codebase.\n",
"created_at": "2015-03-02T15:00:28Z"
},
{
"body": "no I think we go over everything\n",
"created_at": "2015-03-02T15:00:37Z"
},
{
"body": "yeah we can\n",
"created_at": "2015-03-02T15:00:50Z"
}
],
"title": "[TESTS] Make sure test end with ..Tests"
} | {
"commits": [
{
"message": "[TESTS] Make sure test end with ..Tests\n\nThis commit adds a simple testcase that ensures all our tests end with the right naming.\n\nCloses #9945"
}
],
"files": [
{
"diff": "@@ -0,0 +1,172 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch;\n+\n+import com.google.common.base.Joiner;\n+import com.google.common.collect.Sets;\n+import junit.framework.TestCase;\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.elasticsearch.test.ElasticsearchLuceneTestCase;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.elasticsearch.test.ElasticsearchTokenStreamTestCase;\n+import org.junit.Ignore;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.lang.reflect.Method;\n+import java.lang.reflect.Modifier;\n+import java.net.URISyntaxException;\n+import java.nio.file.*;\n+import java.nio.file.attribute.BasicFileAttributes;\n+import java.util.HashSet;\n+import java.util.Set;\n+\n+/**\n+ * Simple class that ensures that all subclasses concrete of ElasticsearchTestCase end with either Test | Tests\n+ */\n+public class NamingConventionTests extends ElasticsearchTestCase {\n+\n+ // see https://github.com/elasticsearch/elasticsearch/issues/9945\n+ public void testNamingConventions()\n+ throws ClassNotFoundException, IOException, URISyntaxException {\n+ final Set<Class> notImplementing = new HashSet<>();\n+ final Set<Class> pureUnitTest = new HashSet<>();\n+ final Set<Class> missingSuffix = new HashSet<>();\n+ String[] packages = {\"org.elasticsearch\", \"org.apache.lucene\"};\n+ for (final String packageName : packages) {\n+ final String path = \"/\" + packageName.replace('.', '/');\n+ final Path startPath = Paths.get(NamingConventionTests.class.getResource(path).toURI());\n+ final Set<Path> ignore = Sets.newHashSet(Paths.get(\"/org/elasticsearch/stresstest\"), Paths.get(\"/org/elasticsearch/benchmark/stress\"));\n+ Files.walkFileTree(startPath, new FileVisitor<Path>() {\n+ private Path pkgPrefix = Paths.get(path).getParent();\n+ @Override\n+ public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {\n+ Path next = pkgPrefix.resolve(dir.getFileName());\n+ if (ignore.contains(next)) {\n+ return FileVisitResult.SKIP_SUBTREE;\n+ }\n+ pkgPrefix = next;\n+ return FileVisitResult.CONTINUE;\n+ }\n+\n+ @Override\n+ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {\n+ try {\n+ String filename = file.getFileName().toString();\n+ if (filename.endsWith(\".class\")) {\n+ Class<?> clazz = loadClass(filename);\n+ if (Modifier.isAbstract(clazz.getModifiers()) == false && Modifier.isInterface(clazz.getModifiers()) == false) {\n+ if ((clazz.getName().endsWith(\"Tests\") || clazz.getName().endsWith(\"Test\"))) { // don't worry about the ones that match the pattern\n+ if (isTestCase(clazz) == false) {\n+ notImplementing.add(clazz);\n+ }\n+ } else if (isTestCase(clazz)) {\n+ 
missingSuffix.add(clazz);\n+ } else if (junit.framework.Test.class.isAssignableFrom(clazz) || hasTestAnnotation(clazz)) {\n+ pureUnitTest.add(clazz);\n+ }\n+ }\n+\n+ }\n+ } catch (ClassNotFoundException e) {\n+ throw new RuntimeException(e);\n+ }\n+ return FileVisitResult.CONTINUE;\n+ }\n+\n+ private boolean hasTestAnnotation(Class<?> clazz) {\n+ for (Method method : clazz.getDeclaredMethods()) {\n+ if (method.getAnnotation(Test.class) != null) {\n+ return true;\n+ }\n+ }\n+ return false;\n+\n+ }\n+\n+ private boolean isTestCase(Class<?> clazz) {\n+ return ElasticsearchTestCase.class.isAssignableFrom(clazz) || ElasticsearchLuceneTestCase.class.isAssignableFrom(clazz) || ElasticsearchTokenStreamTestCase.class.isAssignableFrom(clazz) || LuceneTestCase.class.isAssignableFrom(clazz);\n+ }\n+\n+ private Class<?> loadClass(String filename) throws ClassNotFoundException {\n+ StringBuilder pkg = new StringBuilder();\n+ for (Path p : pkgPrefix) {\n+ pkg.append(p.getFileName().toString()).append(\".\");\n+ }\n+ pkg.append(filename.substring(0, filename.length() - 6));\n+\n+ return Class.forName(pkg.toString());\n+ }\n+\n+ @Override\n+ public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {\n+ throw exc;\n+ }\n+\n+ @Override\n+ public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {\n+ pkgPrefix = pkgPrefix.getParent();\n+ return FileVisitResult.CONTINUE;\n+ }\n+ });\n+\n+ }\n+ assertTrue(missingSuffix.remove(WrongName.class));\n+ assertTrue(missingSuffix.remove(WrongNameTheSecond.class));\n+ assertTrue(notImplementing.remove(NotImplementingTests.class));\n+ assertTrue(notImplementing.remove(NotImplementingTest.class));\n+ assertTrue(pureUnitTest.remove(PlainUnit.class));\n+ assertTrue(pureUnitTest.remove(PlainUnitTheSecond.class));\n+\n+ String classesToSubclass = Joiner.on(',').join(\n+ ElasticsearchTestCase.class.getSimpleName(),\n+ ElasticsearchLuceneTestCase.class.getSimpleName(),\n+ ElasticsearchTokenStreamTestCase.class.getSimpleName(),\n+ LuceneTestCase.class.getSimpleName());\n+ assertTrue(\"Not all subclasses of \" + ElasticsearchTestCase.class.getSimpleName() +\n+ \" match the naming convention. Concrete classes must end with [Test|Tests]: \" + missingSuffix.toString(),\n+ missingSuffix.isEmpty());\n+ assertTrue(\"Pure Unit-Test found must subclass one of [\" + classesToSubclass +\"] \" + pureUnitTest.toString(),\n+ pureUnitTest.isEmpty());\n+ assertTrue(\"Classes ending with Test|Tests] must subclass [\" + classesToSubclass +\"] \" + notImplementing.toString(),\n+ notImplementing.isEmpty());\n+ }\n+\n+ /*\n+ * Some test the test classes\n+ */\n+\n+ @Ignore\n+ public static final class NotImplementingTests {}\n+ @Ignore\n+ public static final class NotImplementingTest {}\n+\n+ public static final class WrongName extends ElasticsearchTestCase {}\n+\n+ public static final class WrongNameTheSecond extends ElasticsearchLuceneTestCase {}\n+\n+ public static final class PlainUnit extends TestCase {}\n+\n+ public static final class PlainUnitTheSecond {\n+ @Test\n+ public void foo() {\n+ }\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/NamingConventionTests.java",
"status": "added"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Ignore;\n \n import java.io.IOException;\n import java.io.PrintWriter;\n@@ -33,7 +34,8 @@\n /**\n *\n */\n-public class CliToolTestCase extends ElasticsearchTestCase {\n+@Ignore\n+public abstract class CliToolTestCase extends ElasticsearchTestCase {\n \n protected static String[] args(String command) {\n if (!Strings.hasLength(command)) {",
"filename": "src/test/java/org/elasticsearch/common/cli/CliToolTestCase.java",
"status": "modified"
}
]
} |
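The convention test in the record above walks the compiled class files and checks names against the `Test`/`Tests` suffix. As a rough illustration of the mechanics only (the real test also loads each class and verifies it extends one of the known base test classes), a stripped-down walk might look like the following; the `target/test-classes` path is an assumption about the build layout, not taken from the project:

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: flag any top-level class file whose name does not end with "Test" or "Tests".
public class NamingConventionSketch {

    public static List<String> findBadlyNamedClasses(Path classesDir) throws IOException {
        List<String> offenders = new ArrayList<>();
        Files.walkFileTree(classesDir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                String name = file.getFileName().toString();
                if (name.endsWith(".class") && !name.contains("$")) { // skip inner classes
                    String simple = name.substring(0, name.length() - ".class".length());
                    if (!simple.endsWith("Test") && !simple.endsWith("Tests")) {
                        offenders.add(simple);
                    }
                }
                return FileVisitResult.CONTINUE;
            }
        });
        return offenders;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(findBadlyNamedClasses(Paths.get("target/test-classes")));
    }
}
```

Unlike the real test, this sketch would also flag non-test helper classes; the actual check only considers classes that subclass one of the base test cases.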
{
"body": "This test reproduces the issue. Be aware, that this only happens, when the whole source is put into the `source` HTTP parameter and not sent as body, You can control this behaviour in `RestClient.callApiBuilder()` for testing purposes\n\n``` yaml\n\n---\n\"Expressions scripting test\":\n\n - do:\n index:\n index: test123\n type: test\n id: 1\n body: { age: 23 }\n\n - do:\n indices.refresh: {}\n\n - do: { search: { body: { script_fields : { my_field : { lang: expression, script: 'doc[\"age\"].value + 19' } } } } }\n - match: { hits.hits.0.fields.my_field: [ 42.0 ] }\n```\n\nThe problem is, that everything gets encoded correctly, with the exception of the `+` sign, which gets replaced with a space upon decoding - and this makes the script compilation fail.\n",
"comments": [],
"number": 9769,
"title": "Testing: RestClient does not escape `source` HTTP parameter correctly"
} | {
"body": "We've been relying on URI for url encoding, but it turns out it has some problems. For instance '+' stays as is while it should be encoded to `%2B`. If we go and manually encode query params we have to be careful though not to run into double encoding ('+'=>'%2B'=>'%252B'). The applied solution relies on URI encoding for the url path, but manual url encoding for the query parameters. We prevent URI from double encoding query params by using its single argument constructor that leaves everything as is.\n\nWe can also revert back the expression script REST test that revealed this to its original content (which contains an addition).\n\nCloses #9769\n",
"number": 9946,
"review_comments": [],
"title": "[TEST] Work around URI encode limitations in RestClient"
} | {
"commits": [
{
"message": "[TEST] Work around URI encode limitations in RestClient\n\nWe've been relying on URI for url encoding, but it turns out it has some problems. For instance '+' stays as is while it should be encoded to `%2B`. If we go and manually encode query params we have to be careful though not to run into double encoding ('+'=>'%2B'=>'%252B'). The applied solution relies on URI encoding for the url path, but manual url encoding for the query parameters. We prevent URI from double encoding query params by using its single argument constructor that leaves everything as is.\n\nWe can also revert back the expression script REST test that revealed this to its original content (which contains an addition).\n\nCloses #9769\nCloses #9946"
}
],
"files": [
{
"diff": "@@ -22,6 +22,6 @@ setup:\n ---\n \"Expressions scripting test\":\n \n- - do: { search: { body: { script_fields : { my_field : { lang: expression, script: 'doc[\"age\"].value' } } } } }\n- - match: { hits.hits.0.fields.my_field.0: 23.0 }\n+ - do: { search: { body: { script_fields : { my_field : { lang: expression, script: 'doc[\"age\"].value + 19' } } } } }\n+ - match: { hits.hits.0.fields.my_field.0: 42.0 }\n ",
"filename": "rest-api-spec/test/script/30_expressions.yaml",
"status": "modified"
},
{
"diff": "@@ -31,8 +31,10 @@\n import org.elasticsearch.http.HttpServerTransport;\n \n import java.io.IOException;\n+import java.io.UnsupportedEncodingException;\n import java.net.URI;\n import java.net.URISyntaxException;\n+import java.net.URLEncoder;\n import java.nio.charset.Charset;\n import java.util.Map;\n \n@@ -89,8 +91,13 @@ public HttpRequestBuilder path(String path) {\n }\n \n public HttpRequestBuilder addParam(String name, String value) {\n- this.params.put(name, value);\n- return this;\n+ try {\n+ //manually url encode params, since URI does it only partially (e.g. '+' stays as is)\n+ this.params.put(name, URLEncoder.encode(value, \"utf-8\"));\n+ return this;\n+ } catch (UnsupportedEncodingException e) {\n+ throw new RuntimeException(e);\n+ }\n }\n \n public HttpRequestBuilder addHeaders(Headers headers) {\n@@ -173,16 +180,18 @@ private HttpUriRequest buildRequest() {\n }\n \n private URI buildUri() {\n- String query;\n- if (params.size() == 0) {\n- query = null;\n- } else {\n- query = Joiner.on('&').withKeyValueSeparator(\"=\").join(params);\n- }\n try {\n- return new URI(protocol, null, host, port, path, query, null);\n- } catch (URISyntaxException e) {\n- throw new IllegalArgumentException(e);\n+ //url encode rules for path and query params are different. We use URI to encode the path, but we manually encode each query param through URLEncoder.\n+ URI uri = new URI(protocol, null, host, port, path, null, null);\n+ //String concatenation FTW. If we use the nicer multi argument URI constructor query parameters will get only partially encoded\n+ //(e.g. '+' will stay as is) hence when trying to properly encode params manually they will end up double encoded (+ becomes %252B instead of %2B).\n+ StringBuilder uriBuilder = new StringBuilder(protocol).append(\"://\").append(host).append(\":\").append(port).append(uri.getRawPath());\n+ if (params.size() > 0) {\n+ uriBuilder.append(\"?\").append(Joiner.on('&').withKeyValueSeparator(\"=\").join(params));\n+ }\n+ return URI.create(uriBuilder.toString());\n+ } catch(URISyntaxException e) {\n+ throw new IllegalArgumentException(\"unable to build uri\", e);\n }\n }\n ",
"filename": "src/test/java/org/elasticsearch/test/rest/client/http/HttpRequestBuilder.java",
"status": "modified"
}
]
} |
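To make the encoding mismatch in the record above concrete: `java.net.URI`'s multi-argument constructors leave `+` untouched in the query component, while `URLEncoder` encodes it as `%2B`, and applying `URLEncoder` twice yields `%252B`. The small demo below shows all three behaviours; the host, port and path are made-up values, not the test framework's code.

```java
import java.net.URI;
import java.net.URLEncoder;

// Demonstrates why the fix encodes query params exactly once with URLEncoder
// and only relies on URI for the path.
public class PlusEncodingDemo {
    public static void main(String[] args) throws Exception {
        String script = "doc[\"age\"].value + 19";

        // URI quotes illegal characters (space, quotes) but leaves '+' as is in the query part.
        URI viaUri = new URI("http", null, "localhost", 9200, "/_search", "source=" + script, null);
        System.out.println(viaUri.toASCIIString());

        // URLEncoder turns '+' into %2B...
        String once = URLEncoder.encode(script, "UTF-8");
        System.out.println(once);

        // ...and encoding an already-encoded value double-encodes it: %2B becomes %252B.
        System.out.println(URLEncoder.encode(once, "UTF-8"));
    }
}
```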
{
"body": "Using the Java API, If one sets the content of a search request through `SearchRequestBuilder#setSource` methods and then calls `toString` to see the result, not only the content of the request is not returned as it wasn't set through `sourceBuilder()`, the content of the request gets also reset due to the `internalBuilder()` call in `toString`.\n\nHere is a small failing test that demontrates it:\n\n```\nSearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client()).setSource(\"{\\n\" +\n \" \\\"query\\\" : {\\n\" +\n \" \\\"match\\\" : {\\n\" +\n \" \\\"field\\\" : {\\n\" +\n \" \\\"query\\\" : \\\"value\\\"\" +\n \" }\\n\" +\n \" }\\n\" +\n \" }\\n\" +\n \" }\");\nString preToString = searchRequestBuilder.request().source().toUtf8();\nsearchRequestBuilder.toString();\nString postToString = searchRequestBuilder.request().source().toUtf8();\nassertThat(preToString, equalTo(postToString));\n```\n",
"comments": [
{
"body": "From a user perspective it's pretty clear what the `toString` method should do, just print the request in json format. The problem is how, as properties can be set in so many ways that can override each other...which is why I guess the current implementation is half broken. I would consider even removing the current `toString` as it has this bad side effect. Curious on what people think about this.\n",
"created_at": "2014-03-27T12:37:42Z"
},
{
"body": "good catch!\n",
"created_at": "2014-03-27T12:48:42Z"
},
{
"body": "@javanna do you think you can work on this this week?\n",
"created_at": "2014-04-02T14:54:04Z"
},
{
"body": "I think I'll get to this soon, I'd appreciate comments on how to fix it though ;)\n",
"created_at": "2014-04-02T14:55:49Z"
},
{
"body": "Hey @GaelTadh I remember reviewing a PR from you for this issue, did you get it in after all?\n",
"created_at": "2014-10-10T08:12:30Z"
},
{
"body": "Nevermind I found it and linked this issue, I see it's not in yet, assigned this issue to you @GaelTadh \n",
"created_at": "2014-10-10T08:16:10Z"
},
{
"body": "Yeah I'll get it in ASAP, it got a little neglected.\n",
"created_at": "2014-10-10T09:04:05Z"
},
{
"body": "thanks @GaelTadh !\n",
"created_at": "2014-10-10T09:06:47Z"
}
],
"number": 5576,
"title": "SearchRequestBuilder#toString causes the content of the request to change"
} | {
"body": "Fixed SearchRequestBuilder#toString to not wipe the request source when called.\n\nImproved SearchRequestBuilder#toString to support the different ways a query can be set to it. Also printed out a merged version of source and extraSrouce in case there in case any extraSource is set, to reflect what will happen when executing the request builder.\n\nImplemented toString in CountRequestBuilder\n\nCloses #5576\nCloses #5555\n",
"number": 9944,
"review_comments": [
{
"body": "the source might be SMILE or CBOR, and then we can't convert it to utf8?\n",
"created_at": "2015-03-08T17:51:24Z"
},
{
"body": "oh boy I totally missed this, fixing\n",
"created_at": "2015-03-13T00:28:19Z"
},
{
"body": "This is printing an empty thing.\n\n``` java\n SearchRequestBuilder builder = client().prepareSearch(\"index\").setTypes(\"type\");\n System.out.println(\"builder = \" + builder);\n```\n\nGives \n\n```\nbuilder = { }\n```\n\nIs that expected @javanna? Or should we wait for #9962 for a fix?\n",
"created_at": "2015-05-29T20:28:35Z"
},
{
"body": "yes it is expected\n",
"created_at": "2015-05-30T06:47:37Z"
}
],
"title": "toString for SearchRequestBuilder and CountRequestBuilder"
} | {
"commits": [
{
"message": "Java api: SearchRequestBuilder#toString to print out the actual query without wiping the request source\n\nBroaden SearchRequestBuilder#toString to support the different ways a query can be set to it. Also printed out a merged version of source and extraSrouce in case there in case any extraSource is set, to reflect what will happen when executing the request builder.\n\nCloses #5576"
},
{
"message": "Java api: implement toString in CountRequestBuilder\n\nSimilarly to what SearchRequestBuilder does, we print out a string representation of they query that is going to be executed when executing the request builder.\n\nCloses #5555"
}
],
"files": [
{
"diff": "@@ -19,12 +19,14 @@\n \n package org.elasticsearch.action.count;\n \n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.support.QuerySourceBuilder;\n import org.elasticsearch.action.support.broadcast.BroadcastOperationRequestBuilder;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.index.query.QueryBuilder;\n \n /**\n@@ -152,4 +154,19 @@ private QuerySourceBuilder sourceBuilder() {\n }\n return sourceBuilder;\n }\n+\n+ @Override\n+ public String toString() {\n+ if (sourceBuilder != null) {\n+ return sourceBuilder.toString();\n+ }\n+ if (request.source() != null) {\n+ try {\n+ return XContentHelper.convertToJson(request.source().toBytesArray(), false, true);\n+ } catch(Exception e) {\n+ return \"{ \\\"error\\\" : \\\"\" + ExceptionsHelper.detailedMessage(e) + \"\\\"}\";\n+ }\n+ }\n+ return new QuerySourceBuilder().toString();\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/action/count/CountRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.search;\n \n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionRequestBuilder;\n import org.elasticsearch.action.support.IndicesOptions;\n@@ -28,6 +29,7 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.index.query.FilterBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.script.ScriptService;\n@@ -366,9 +368,6 @@ public SearchRequestBuilder setNoFields() {\n \n /**\n * Indicates whether the response should contain the stored _source for every hit\n- *\n- * @param fetch\n- * @return\n */\n public SearchRequestBuilder setFetchSource(boolean fetch) {\n sourceBuilder().fetchSource(fetch);\n@@ -1008,7 +1007,17 @@ public SearchSourceBuilder internalBuilder() {\n \n @Override\n public String toString() {\n- return internalBuilder().toString();\n+ if (sourceBuilder != null) {\n+ return sourceBuilder.toString();\n+ }\n+ if (request.source() != null) {\n+ try {\n+ return XContentHelper.convertToJson(request.source().toBytesArray(), false, true);\n+ } catch(Exception e) {\n+ return \"{ \\\"error\\\" : \\\"\" + ExceptionsHelper.detailedMessage(e) + \"\\\"}\";\n+ }\n+ }\n+ return new SearchSourceBuilder().toString();\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.support;\n \n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -74,4 +75,15 @@ public BytesReference buildAsBytes(XContentType contentType) throws SearchSource\n throw new SearchSourceBuilderException(\"Failed to build search source\", e);\n }\n }\n+\n+ @Override\n+ public String toString() {\n+ try {\n+ XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON).prettyPrint();\n+ toXContent(builder, ToXContent.EMPTY_PARAMS);\n+ return builder.string();\n+ } catch (Exception e) {\n+ return \"{ \\\"error\\\" : \\\"\" + ExceptionsHelper.detailedMessage(e) + \"\\\"}\";\n+ }\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/action/support/QuerySourceBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import com.google.common.collect.Lists;\n import org.elasticsearch.ElasticsearchGenerationException;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n@@ -630,7 +631,7 @@ public String toString() {\n toXContent(builder, ToXContent.EMPTY_PARAMS);\n return builder.string();\n } catch (Exception e) {\n- return \"{ \\\"error\\\" : \\\"\" + e.getMessage() + \"\\\"}\";\n+ return \"{ \\\"error\\\" : \\\"\" + ExceptionsHelper.detailedMessage(e) + \"\\\"}\";\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,127 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.count;\n+\n+import org.elasticsearch.action.support.QuerySourceBuilder;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.client.transport.TransportClient;\n+import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.CoreMatchers.equalTo;\n+\n+public class CountRequestBuilderTests extends ElasticsearchTestCase {\n+\n+ private static Client client;\n+\n+ @BeforeClass\n+ public static void initClient() {\n+ //this client will not be hit by any request, but it needs to be a non null proper client\n+ //that is why we create it but we don't add any transport address to it\n+ client = new TransportClient();\n+ }\n+\n+ @AfterClass\n+ public static void closeClient() {\n+ client.close();\n+ client = null;\n+ }\n+\n+ @Test\n+ public void testEmptySourceToString() {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client);\n+ assertThat(countRequestBuilder.toString(), equalTo(new QuerySourceBuilder().toString()));\n+ }\n+\n+ @Test\n+ public void testQueryBuilderQueryToString() {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client);\n+ countRequestBuilder.setQuery(QueryBuilders.matchAllQuery());\n+ assertThat(countRequestBuilder.toString(), equalTo(new QuerySourceBuilder().setQuery(QueryBuilders.matchAllQuery()).toString()));\n+ }\n+\n+ @Test\n+ public void testStringQueryToString() {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client);\n+ String query = \"{ \\\"match_all\\\" : {} }\";\n+ countRequestBuilder.setQuery(new BytesArray(query));\n+ assertThat(countRequestBuilder.toString(), equalTo(\"{\\n \\\"query\\\":{ \\\"match_all\\\" : {} }\\n}\"));\n+ }\n+\n+ @Test\n+ public void testXContentBuilderQueryToString() throws IOException {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client);\n+ XContentBuilder xContentBuilder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n+ xContentBuilder.startObject();\n+ xContentBuilder.startObject(\"match_all\");\n+ xContentBuilder.endObject();\n+ xContentBuilder.endObject();\n+ countRequestBuilder.setQuery(xContentBuilder);\n+ assertThat(countRequestBuilder.toString(), equalTo(new 
QuerySourceBuilder().setQuery(xContentBuilder.bytes()).toString()));\n+ }\n+\n+ @Test\n+ public void testStringSourceToString() {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client);\n+ String query = \"{ \\\"query\\\": { \\\"match_all\\\" : {} } }\";\n+ countRequestBuilder.setSource(new BytesArray(query));\n+ assertThat(countRequestBuilder.toString(), equalTo(\"{ \\\"query\\\": { \\\"match_all\\\" : {} } }\"));\n+ }\n+\n+ @Test\n+ public void testXContentBuilderSourceToString() throws IOException {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client);\n+ XContentBuilder xContentBuilder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n+ xContentBuilder.startObject();\n+ xContentBuilder.startObject(\"match_all\");\n+ xContentBuilder.endObject();\n+ xContentBuilder.endObject();\n+ countRequestBuilder.setSource(xContentBuilder.bytes());\n+ assertThat(countRequestBuilder.toString(), equalTo(XContentHelper.convertToJson(xContentBuilder.bytes(), false, true)));\n+ }\n+\n+ @Test\n+ public void testThatToStringDoesntWipeSource() {\n+ String source = \"{\\n\" +\n+ \" \\\"query\\\" : {\\n\" +\n+ \" \\\"match\\\" : {\\n\" +\n+ \" \\\"field\\\" : {\\n\" +\n+ \" \\\"query\\\" : \\\"value\\\"\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\";\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client).setSource(new BytesArray(source));\n+ String preToString = countRequestBuilder.request().source().toUtf8();\n+ assertThat(countRequestBuilder.toString(), equalTo(source));\n+ String postToString = countRequestBuilder.request().source().toUtf8();\n+ assertThat(preToString, equalTo(postToString));\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/action/count/CountRequestBuilderTests.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,128 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.client.transport.TransportClient;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.search.builder.SearchSourceBuilder;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.CoreMatchers.equalTo;\n+\n+public class SearchRequestBuilderTests extends ElasticsearchTestCase {\n+\n+ private static Client client;\n+\n+ @BeforeClass\n+ public static void initClient() {\n+ //this client will not be hit by any request, but it needs to be a non null proper client\n+ //that is why we create it but we don't add any transport address to it\n+ client = new TransportClient();\n+ }\n+\n+ @AfterClass\n+ public static void closeClient() {\n+ client.close();\n+ client = null;\n+ }\n+\n+ @Test\n+ public void testEmptySourceToString() {\n+ SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client);\n+ assertThat(searchRequestBuilder.toString(), equalTo(new SearchSourceBuilder().toString()));\n+ }\n+\n+ @Test\n+ public void testQueryBuilderQueryToString() {\n+ SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client);\n+ searchRequestBuilder.setQuery(QueryBuilders.matchAllQuery());\n+ assertThat(searchRequestBuilder.toString(), equalTo(new SearchSourceBuilder().query(QueryBuilders.matchAllQuery()).toString()));\n+ }\n+\n+ @Test\n+ public void testXContentBuilderQueryToString() throws IOException {\n+ SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client);\n+ XContentBuilder xContentBuilder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n+ xContentBuilder.startObject();\n+ xContentBuilder.startObject(\"match_all\");\n+ xContentBuilder.endObject();\n+ xContentBuilder.endObject();\n+ searchRequestBuilder.setQuery(xContentBuilder);\n+ assertThat(searchRequestBuilder.toString(), equalTo(new SearchSourceBuilder().query(xContentBuilder).toString()));\n+ }\n+\n+ @Test\n+ public void testStringQueryToString() {\n+ SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client);\n+ String query = \"{ \\\"match_all\\\" : {} }\";\n+ searchRequestBuilder.setQuery(query);\n+ assertThat(searchRequestBuilder.toString(), equalTo(\"{\\n \\\"query\\\":{ \\\"match_all\\\" : 
{} }\\n}\"));\n+ }\n+\n+ @Test\n+ public void testStringSourceToString() {\n+ SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client);\n+ String source = \"{ \\\"query\\\" : { \\\"match_all\\\" : {} } }\";\n+ searchRequestBuilder.setSource(source);\n+ assertThat(searchRequestBuilder.toString(), equalTo(source));\n+ }\n+\n+ @Test\n+ public void testXContentBuilderSourceToString() throws IOException {\n+ SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client);\n+ XContentBuilder xContentBuilder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n+ xContentBuilder.startObject();\n+ xContentBuilder.startObject(\"query\");\n+ xContentBuilder.startObject(\"match_all\");\n+ xContentBuilder.endObject();\n+ xContentBuilder.endObject();\n+ xContentBuilder.endObject();\n+ searchRequestBuilder.setSource(xContentBuilder);\n+ assertThat(searchRequestBuilder.toString(), equalTo(XContentHelper.convertToJson(xContentBuilder.bytes(), false, true)));\n+ }\n+\n+ @Test\n+ public void testThatToStringDoesntWipeRequestSource() {\n+ String source = \"{\\n\" +\n+ \" \\\"query\\\" : {\\n\" +\n+ \" \\\"match\\\" : {\\n\" +\n+ \" \\\"field\\\" : {\\n\" +\n+ \" \\\"query\\\" : \\\"value\\\"\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\";\n+ SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client).setSource(source);\n+ String preToString = searchRequestBuilder.request().source().toUtf8();\n+ assertThat(searchRequestBuilder.toString(), equalTo(source));\n+ String postToString = searchRequestBuilder.request().source().toUtf8();\n+ assertThat(preToString, equalTo(postToString));\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/action/search/SearchRequestBuilderTests.java",
"status": "added"
}
]
} |
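The essential property of the fix in the record above is that `toString` only renders state that is already present and never lazily creates an internal builder (which is what used to wipe the raw source). A tiny, hypothetical model of that pattern - not the actual Elasticsearch classes - is sketched here:

```java
// RequestSketch and its fields are illustrative stand-ins for a request builder
// that can be populated either through builder-style setters or with a raw source.
public class RequestSketch {
    private Object sourceBuilder;   // set via builder-style setters
    private String rawSource;       // set via setSource(String/bytes)

    public RequestSketch setRawSource(String source) {
        this.rawSource = source;
        return this;
    }

    @Override
    public String toString() {
        if (sourceBuilder != null) {
            return sourceBuilder.toString();     // a builder is already present - safe to print
        }
        if (rawSource != null) {
            return rawSource;                    // print the raw source untouched
        }
        return "{ }";                            // nothing set yet; do NOT create a builder here
    }

    public static void main(String[] args) {
        RequestSketch request = new RequestSketch().setRawSource("{ \"query\": { \"match_all\": {} } }");
        String before = request.rawSource;
        request.toString();                      // must not wipe or replace the source
        if (!before.equals(request.rawSource)) {
            throw new AssertionError("toString() must not modify the request source");
        }
        System.out.println(request);
    }
}
```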
{
"body": "Unfortunately the lock order is important in the current flush code. We have to acquire the readlock fist otherwise\nif we are flushing at the end of the recovery while holding the write lock we can deadlock if:\n- Thread 1: flushes via API and gets the flush lock but blocks on the readlock since Thread 2 has the writeLock\n- Thread 2: flushes at the end of the recovery holding the writeLock and blocks on the flushLock owned by Thread 2\n\nThis commit acquires the read lock first which would be done further down anyway for the time of the flush.\nAs a sideeffect we can now safely flush on calling close() while holding the writeLock.\n",
"comments": [
{
"body": "NOTE: released code is not affected since we added the flush at the end of recovery only in 1.5\n",
"created_at": "2015-02-11T11:25:25Z"
},
{
"body": "Left some small comments o.w. looks good.\n",
"created_at": "2015-02-11T11:49:42Z"
},
{
"body": "@bleskes I simplified the exception logic a bit and removed the flush counter. \n",
"created_at": "2015-02-11T11:51:52Z"
},
{
"body": "LGTM. Left one minor comment.\n",
"created_at": "2015-02-11T11:55:23Z"
},
{
"body": "@bleskes added some more traces there :)\n",
"created_at": "2015-02-11T12:00:08Z"
}
],
"number": 9648,
"title": "Fix deadlock problems when API flush and finish recovery happens concurrently"
} | {
"body": "Issue #9648 fixes a potential deadlock between two concurrent flushes - one at the end of recovery and one through the API or background flush. This back ports the logic to 1.4 . It is slightly more contrived as we still use the write lock in the flush code. \n\nIf we feel we have some concerns about this approach we can also move the recovery flush to happen on a generic thread.\n",
"number": 9942,
"review_comments": [
{
"body": "took me a while to get what you are doing here but I like it. Can you put a big comment here that we are doing this in a certain order to prevent deadlocks?\n",
"created_at": "2015-03-02T09:56:16Z"
}
],
"title": "Engine: back port #9648 - Fix deadlock problems when API flush and finish recovery happens concurrently"
} | {
"commits": [
{
"message": "backport deadlock"
},
{
"message": "added comment"
}
],
"files": [
{
"diff": "@@ -150,7 +150,7 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin\n // will not really happen, and then the commitUserData and the new translog will not be reflected\n private volatile boolean flushNeeded = false;\n private final AtomicInteger flushing = new AtomicInteger();\n- private final Lock flushLock = new ReentrantLock();\n+ private final InternalLock flushLock = new InternalLock(new ReentrantLock());\n \n private final RecoveryCounter onGoingRecoveries = new RecoveryCounter();\n \n@@ -853,118 +853,108 @@ public void flush(Flush flush) throws EngineException {\n throw new FlushNotAllowedEngineException(shardId, \"already flushing...\");\n }\n \n- flushLock.lock();\n- try {\n+ final InternalLock lockNeeded;\n+ switch (flush.type()) {\n+ case NEW_WRITER:\n+ lockNeeded = writeLock;\n+ break;\n+ case COMMIT:\n+ case COMMIT_TRANSLOG:\n+ lockNeeded = readLock;\n+ break;\n+ default:\n+ throw new ElasticsearchIllegalStateException(\"flush type [\" + flush.type() + \"] not supported\");\n+ }\n+\n+ /*\n+ we have to acquire the flush lock second to prevent dead locks and keep the locking order identical.\n+ callers may already have acquired the read-write lock so we have to be consistent and alwayss lock it first.\n+ */\n+ try (InternalLock _ = lockNeeded.acquire(); InternalLock flock = flushLock.acquire()) {\n+ if (onGoingRecoveries.get() > 0) {\n+ throw new FlushNotAllowedEngineException(shardId, \"Recovery is in progress, flush is not allowed\");\n+ }\n+ ensureOpen();\n if (flush.type() == Flush.Type.NEW_WRITER) {\n- try (InternalLock _ = writeLock.acquire()) {\n- if (onGoingRecoveries.get() > 0) {\n- throw new FlushNotAllowedEngineException(shardId, \"Recovery is in progress, flush is not allowed\");\n+ // disable refreshing, not dirty\n+ dirty = false;\n+ try {\n+ { // commit and close the current writer - we write the current tanslog ID just in case\n+ final long translogId = translog.currentId();\n+ indexWriter.setCommitData(Collections.singletonMap(Translog.TRANSLOG_ID_KEY, Long.toString(translogId)));\n+ indexWriter.commit();\n+ indexWriter.rollback();\n }\n- // disable refreshing, not dirty\n- dirty = false;\n- try {\n- { // commit and close the current writer - we write the current tanslog ID just in case\n- final long translogId = translog.currentId();\n- indexWriter.setCommitData(Collections.singletonMap(Translog.TRANSLOG_ID_KEY, Long.toString(translogId)));\n- indexWriter.commit();\n- indexWriter.rollback();\n- }\n- indexWriter = createWriter();\n- mergeScheduler.removeListener(this.throttle);\n-\n- this.throttle = new IndexThrottle(mergeScheduler, this.logger, indexingService);\n- mergeScheduler.addListener(throttle);\n- // commit on a just opened writer will commit even if there are no changes done to it\n- // we rely on that for the commit data translog id key\n- if (flushNeeded || flush.force()) {\n- flushNeeded = false;\n- long translogId = translogIdGenerator.incrementAndGet();\n- indexWriter.setCommitData(Collections.singletonMap(Translog.TRANSLOG_ID_KEY, Long.toString(translogId)));\n- indexWriter.commit();\n- translog.newTranslog(translogId);\n- }\n-\n- SearcherManager current = this.searcherManager;\n- this.searcherManager = buildSearchManager(indexWriter);\n- versionMap.setManager(searcherManager);\n+ indexWriter = createWriter();\n+ mergeScheduler.removeListener(this.throttle);\n \n- try {\n- IOUtils.close(current);\n- } catch (Throwable t) {\n- logger.warn(\"Failed to close current SearcherManager\", t);\n- }\n+ 
this.throttle = new IndexThrottle(mergeScheduler, this.logger, indexingService);\n+ mergeScheduler.addListener(throttle);\n+ // commit on a just opened writer will commit even if there are no changes done to it\n+ // we rely on that for the commit data translog id key\n+ if (flushNeeded || flush.force()) {\n+ flushNeeded = false;\n+ long translogId = translogIdGenerator.incrementAndGet();\n+ indexWriter.setCommitData(Collections.singletonMap(Translog.TRANSLOG_ID_KEY, Long.toString(translogId)));\n+ indexWriter.commit();\n+ translog.newTranslog(translogId);\n+ }\n \n- maybePruneDeletedTombstones();\n+ SearcherManager current = this.searcherManager;\n+ this.searcherManager = buildSearchManager(indexWriter);\n+ versionMap.setManager(searcherManager);\n \n+ try {\n+ IOUtils.close(current);\n } catch (Throwable t) {\n- throw new FlushFailedEngineException(shardId, t);\n+ logger.warn(\"Failed to close current SearcherManager\", t);\n }\n+ } catch (Throwable t) {\n+ throw new FlushFailedEngineException(shardId, t);\n }\n } else if (flush.type() == Flush.Type.COMMIT_TRANSLOG) {\n- try (InternalLock _ = readLock.acquire()) {\n- final IndexWriter indexWriter = currentIndexWriter();\n- if (onGoingRecoveries.get() > 0) {\n- throw new FlushNotAllowedEngineException(shardId, \"Recovery is in progress, flush is not allowed\");\n- }\n-\n- if (flushNeeded || flush.force()) {\n- flushNeeded = false;\n- try {\n- long translogId = translogIdGenerator.incrementAndGet();\n- translog.newTransientTranslog(translogId);\n- indexWriter.setCommitData(Collections.singletonMap(Translog.TRANSLOG_ID_KEY, Long.toString(translogId)));\n- indexWriter.commit();\n- // we need to refresh in order to clear older version values\n- refresh(new Refresh(\"version_table_flush\").force(true));\n- // we need to move transient to current only after we refresh\n- // so items added to current will still be around for realtime get\n- // when tans overrides it\n- translog.makeTransientCurrent();\n-\n- } catch (Throwable e) {\n- translog.revertTransient();\n- throw new FlushFailedEngineException(shardId, e);\n- }\n- }\n- }\n-\n- // We don't have to do this here; we do it defensively to make sure that even if wall clock time is misbehaving\n- // (e.g., moves backwards) we will at least still sometimes prune deleted tombstones:\n- if (enableGcDeletes) {\n- pruneDeletedTombstones();\n- }\n-\n- } else if (flush.type() == Flush.Type.COMMIT) {\n- // note, its ok to just commit without cleaning the translog, its perfectly fine to replay a\n- // translog on an index that was opened on a committed point in time that is \"in the future\"\n- // of that translog\n- try (InternalLock _ = readLock.acquire()) {\n- final IndexWriter indexWriter = currentIndexWriter();\n- // we allow to *just* commit if there is an ongoing recovery happening...\n- // its ok to use this, only a flush will cause a new translogId, and we are locked here from\n- // other flushes use flushLock\n+ final IndexWriter indexWriter = currentIndexWriter();\n+ if (flushNeeded || flush.force()) {\n+ flushNeeded = false;\n try {\n- long translogId = translog.currentId();\n+ long translogId = translogIdGenerator.incrementAndGet();\n+ translog.newTransientTranslog(translogId);\n indexWriter.setCommitData(Collections.singletonMap(Translog.TRANSLOG_ID_KEY, Long.toString(translogId)));\n indexWriter.commit();\n+ // we need to refresh in order to clear older version values\n+ refresh(new Refresh(\"version_table_flush\").force(true));\n+ // we need to move transient to current only after we 
refresh\n+ // so items added to current will still be around for realtime get\n+ // when tans overrides it\n+ translog.makeTransientCurrent();\n+\n } catch (Throwable e) {\n+ translog.revertTransient();\n throw new FlushFailedEngineException(shardId, e);\n }\n }\n-\n- // We don't have to do this here; we do it defensively to make sure that even if wall clock time is misbehaving\n- // (e.g., moves backwards) we will at least still sometimes prune deleted tombstones:\n- if (enableGcDeletes) {\n- pruneDeletedTombstones();\n+ } else if (flush.type() == Flush.Type.COMMIT) {\n+ // note, its ok to just commit without cleaning the translog, its perfectly fine to replay a\n+ // translog on an index that was opened on a committed point in time that is \"in the future\"\n+ // of that translog\n+ final IndexWriter indexWriter = currentIndexWriter();\n+ // we allow to *just* commit if there is an ongoing recovery happening...\n+ // its ok to use this, only a flush will cause a new translogId, and we are locked here from\n+ // other flushes use flushLock\n+ try {\n+ long translogId = translog.currentId();\n+ indexWriter.setCommitData(Collections.singletonMap(Translog.TRANSLOG_ID_KEY, Long.toString(translogId)));\n+ indexWriter.commit();\n+ } catch (Throwable e) {\n+ throw new FlushFailedEngineException(shardId, e);\n }\n \n } else {\n throw new ElasticsearchIllegalStateException(\"flush type [\" + flush.type() + \"] not supported\");\n }\n \n // reread the last committed segment infos\n- try (InternalLock _ = readLock.acquire()) {\n- ensureOpen();\n+ try {\n readLastCommittedSegmentsInfo();\n } catch (Throwable e) {\n if (closedOrFailed == false) {\n@@ -978,9 +968,14 @@ public void flush(Flush flush) throws EngineException {\n maybeFailEngine(ex, \"flush\");\n throw ex;\n } finally {\n- flushLock.unlock();\n flushing.decrementAndGet();\n }\n+\n+ // We don't have to do this here; we do it defensively to make sure that even if wall clock time is misbehaving\n+ // (e.g., moves backwards) we will at least still sometimes prune deleted tombstones:\n+ if (enableGcDeletes) {\n+ pruneDeletedTombstones();\n+ }\n }\n \n private void ensureOpen() {",
"filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java",
"status": "modified"
}
]
} |
{
"body": "Upgrading from version < 1.4 causes too much corruption. Users should not use rolling upgrade, we should just not allow this to happen.\n",
"comments": [
{
"body": "forgot to close this... this was fixed by #9925 \n",
"created_at": "2015-03-17T22:38:10Z"
}
],
"number": 9922,
"title": "disallow recovery from ancient versions"
} | {
"body": "This commit forces a full recovery if the source node is < 1.4.0 and\nprevents any recoveries from pre 1.3.2 nodes to\nwork around #7210\n\nCloses #9922\n\nnote: this is just a start, I need to fix some BWC test first before this can be pulled in but I wanted to get the discussion going\n",
"number": 9925,
"review_comments": [
{
"body": "Can't recovery -> Can't recover\n",
"created_at": "2015-02-27T21:33:36Z"
},
{
"body": "Remove the \", recovery as if there are none\"? Because we are failing the recovery instead right?\n",
"created_at": "2015-02-27T21:36:19Z"
},
{
"body": "can we check that we expect this exception? i.e., when version is before 1.3.2 and compression is on?\n",
"created_at": "2015-03-02T11:11:44Z"
},
{
"body": "left overs?\n",
"created_at": "2015-03-02T14:54:15Z"
},
{
"body": "ouch.\n",
"created_at": "2015-03-02T14:55:52Z"
},
{
"body": "hmm yeah :D\n",
"created_at": "2015-03-02T14:56:43Z"
},
{
"body": "oh well :) could be worse\n",
"created_at": "2015-03-02T14:56:55Z"
}
],
"title": "Don't recover from buggy version"
} | {
"commits": [
{
"message": "[RECOVERY] Don't recover from buggy version\n\nThis commit forces a full recovery if the source node is < 1.4.0 and\nprevents any recoveries from pre 1.3.2 nodes if compression is enabled to\nwork around #7210\n\nCloses #9922"
}
],
"files": [
{
"diff": "@@ -19,13 +19,15 @@\n \n package org.elasticsearch.cluster.routing.allocation.decider;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.routing.MutableShardRouting;\n import org.elasticsearch.cluster.routing.RoutingNode;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.recovery.RecoverySettings;\n \n /**\n * An allocation decider that prevents relocation or allocation from nodes\n@@ -37,10 +39,12 @@\n public class NodeVersionAllocationDecider extends AllocationDecider {\n \n public static final String NAME = \"node_version\";\n+ private final RecoverySettings recoverySettings;\n \n @Inject\n- public NodeVersionAllocationDecider(Settings settings) {\n+ public NodeVersionAllocationDecider(Settings settings, RecoverySettings recoverySettings) {\n super(settings);\n+ this.recoverySettings = recoverySettings;\n }\n \n @Override\n@@ -65,6 +69,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n \n private Decision isVersionCompatible(final RoutingNodes routingNodes, final String sourceNodeId, final RoutingNode target, RoutingAllocation allocation) {\n final RoutingNode source = routingNodes.node(sourceNodeId);\n+ if (source.node().version().before(Version.V_1_3_2) && recoverySettings.compress()) { // never recover from pre 1.3.2 with compression enabled\n+ return allocation.decision(Decision.NO, NAME, \"source node version [%s] is prone to corruption bugs with %s = true see issue #7210 for details\",\n+ source.node().version(), RecoverySettings.INDICES_RECOVERY_COMPRESS);\n+ }\n if (target.node().version().onOrAfter(source.node().version())) {\n /* we can allocate if we can recover from a node that is younger or on the same version\n * if the primary is already running on a newer version that won't work due to possible",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/decider/NodeVersionAllocationDecider.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.Version;\n+import org.elasticsearch.bootstrap.Elasticsearch;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.Nullable;\n@@ -149,19 +150,37 @@ protected void retryRecovery(final RecoveryStatus recoveryStatus, final String r\n threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new RecoveryRunner(recoveryStatus.recoveryId()));\n }\n \n+ // pkd private for testing\n+ Map<String, StoreFileMetaData> existingFiles(DiscoveryNode sourceNode, Store store) throws IOException {\n+ final Version sourceNodeVersion = sourceNode.version();\n+ if (sourceNodeVersion.onOrAfter(Version.V_1_4_0)) {\n+ return store.getMetadataOrEmpty().asMap();\n+ } else {\n+ logger.debug(\"Force full recovery source node version {}\", sourceNodeVersion);\n+ // force full recovery if we recover from nodes < 1.4.0\n+ return Collections.EMPTY_MAP;\n+ }\n+ }\n+\n private void doRecovery(final RecoveryStatus recoveryStatus) {\n assert recoveryStatus.sourceNode() != null : \"can't do a recovery without a source node\";\n \n logger.trace(\"collecting local files for {}\", recoveryStatus);\n final Map<String, StoreFileMetaData> existingFiles;\n try {\n- existingFiles = recoveryStatus.store().getMetadataOrEmpty().asMap();\n+ existingFiles = existingFiles(recoveryStatus.sourceNode(), recoveryStatus.store());\n } catch (Exception e) {\n- logger.debug(\"error while listing local files, recovery as if there are none\", e);\n+ logger.debug(\"error while listing local files\", e);\n onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(),\n new RecoveryFailedException(recoveryStatus.state(), \"failed to list local files\", e), true);\n return;\n }\n+ final Version sourceNodeVersion = recoveryStatus.sourceNode().version();\n+ if (sourceNodeVersion.before(Version.V_1_3_2) && recoverySettings.compress()) { // don't recover from pre 1.3.2 if compression is on?\n+ throw new ElasticsearchIllegalStateException(\"Can't recovery from node \"\n+ + recoveryStatus.sourceNode() + \" with [\" + RecoverySettings.INDICES_RECOVERY_COMPRESS\n+ + \" : true] due to compression bugs - see issue #7210 for details\" );\n+ }\n final StartRecoveryRequest request = new StartRecoveryRequest(recoveryStatus.shardId(), recoveryStatus.sourceNode(), clusterService.localNode(),\n false, existingFiles, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n ",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
{
"diff": "@@ -81,7 +81,6 @@\n \n /**\n */\n-@TestLogging(\"index.translog.fs:TRACE\")\n public class BasicBackwardsCompatibilityTest extends ElasticsearchBackwardsCompatIntegrationTest {\n \n /**",
"filename": "src/test/java/org/elasticsearch/bwcompat/BasicBackwardsCompatibilityTest.java",
"status": "modified"
},
{
"diff": "@@ -25,13 +25,11 @@\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n-import org.elasticsearch.cluster.routing.MutableShardRouting;\n-import org.elasticsearch.cluster.routing.RoutingNodes;\n-import org.elasticsearch.cluster.routing.RoutingTable;\n-import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.*;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.indices.recovery.RecoverySettings;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n import org.junit.Test;\n \n@@ -83,23 +81,19 @@ public void testDoNotAllocateFromPrimary() {\n \n logger.info(\"start two nodes and fully start the shards\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n- RoutingTable prevRoutingTable = routingTable;\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n assertThat(routingTable.index(\"test\").shard(i).replicaShardsWithState(UNASSIGNED).size(), equalTo(2));\n-\n }\n \n logger.info(\"start all the primary shards, replicas will start initializing\");\n RoutingNodes routingNodes = clusterState.routingNodes();\n- prevRoutingTable = routingTable;\n routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(INITIALIZING)).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -109,10 +103,8 @@ public void testDoNotAllocateFromPrimary() {\n }\n \n routingNodes = clusterState.routingNodes();\n- prevRoutingTable = routingTable;\n routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(INITIALIZING)).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -124,10 +116,8 @@ public void testDoNotAllocateFromPrimary() {\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n .put(newNode(\"node3\", getPreviousVersion())))\n .build();\n- prevRoutingTable = routingTable;\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -140,10 +130,8 @@ public void testDoNotAllocateFromPrimary() {\n clusterState = 
ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n .put(newNode(\"node4\")))\n .build();\n- prevRoutingTable = routingTable;\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -153,10 +141,8 @@ public void testDoNotAllocateFromPrimary() {\n }\n \n routingNodes = clusterState.routingNodes();\n- prevRoutingTable = routingTable;\n routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(INITIALIZING)).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -335,7 +321,79 @@ private final void assertRecoveryNodeVersions(RoutingNodes routingNodes) {\n assertTrue(routingNodes.node(toId).node().version().onOrAfter(routingNodes.node(fromId).node().version()));\n }\n }\n+ }\n+\n+ public void testFailRecoverFromPre132WithCompression() {\n+ final boolean compress = randomBoolean();\n+ AllocationService service = createAllocationService(settingsBuilder()\n+ .put(\"cluster.routing.allocation.concurrent_recoveries\", 10)\n+ .put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, \"INDICES_ALL_ACTIVE\")\n+ .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", -1)\n+ .put(RecoverySettings.INDICES_RECOVERY_COMPRESS, compress)\n+ .build());\n+\n+ logger.info(\"Building initial routing table\");\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ assertThat(routingTable.index(\"test\").shards().size(), equalTo(1));\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(2));\n+ for (ShardRouting shard : routingTable.index(\"test\").shard(i).shards()) {\n+ assertEquals(shard.state(), UNASSIGNED);\n+ assertNull(shard.currentNodeId());\n+ }\n+ }\n+ Version version = randomVersion();\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"old0\", version))).build();\n+ clusterState = stabilize(clusterState, service);\n+ routingTable = clusterState.routingTable();\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertEquals(routingTable.index(\"test\").shard(i).shards().size(), 2);\n+ for (ShardRouting shard : routingTable.index(\"test\").shard(i).shards()) {\n+ if (shard.primary()) {\n+ assertEquals(shard.state(), STARTED);\n+ assertEquals(shard.currentNodeId(), \"old0\");\n+ } else {\n+ assertEquals(shard.state(), UNASSIGNED);\n+ assertNull(shard.currentNodeId());\n+ }\n+ }\n+ }\n \n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"old0\", version))\n+ 
.put(newNode(\"new0\"))).build();\n \n+ clusterState = stabilize(clusterState, service);\n+ routingTable = clusterState.routingTable();\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertEquals(routingTable.index(\"test\").shard(i).shards().size(), 2);\n+ for (ShardRouting shard : routingTable.index(\"test\").shard(i).shards()) {\n+ if (shard.primary()) {\n+ assertEquals(shard.state(), STARTED);\n+ assertEquals(shard.currentNodeId(), \"old0\");\n+ } else {\n+ if (version.before(Version.V_1_3_2) && compress) { // can't recover from pre 1.3.2 with compression enabled\n+ assertEquals(shard.state(), UNASSIGNED);\n+ assertNull(shard.currentNodeId());\n+ } else {\n+ assertEquals(shard.state(), STARTED);\n+ assertEquals(shard.currentNodeId(), \"new0\");\n+ }\n+ }\n+ }\n+\n+\n+ }\n }\n }",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/NodeVersionAllocationDeciderTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,71 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.indices.recovery;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.transport.LocalTransportAddress;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.store.StoreFileMetaData;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n+\n+import java.io.IOException;\n+import java.util.Map;\n+\n+/**\n+ *\n+ */\n+public class RecoveryTargetTests extends ElasticsearchSingleNodeTest {\n+\n+ public void testFullRecoveryFromPre14() throws IOException {\n+ createIndex(\"test\");\n+ int numDocs = scaledRandomIntBetween(10, 100);\n+ for (int j = 0; j < numDocs; ++j) {\n+ String id = Integer.toString(j);\n+ client().prepareIndex(\"test\", \"type1\", id).setSource(\"text\", \"sometext\").get();\n+ }\n+ client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).setForce(true).get();\n+ RecoveryTarget recoveryTarget = getInstanceFromNode(RecoveryTarget.class);\n+ IndexService idxService = getInstanceFromNode(IndicesService.class).indexService(\"test\");\n+ Store store = idxService.shard(0).store();\n+ store.incRef();\n+ try {\n+ DiscoveryNode discoveryNode = new DiscoveryNode(\"123\", new LocalTransportAddress(\"123\"), Version.CURRENT);\n+ Map<String, StoreFileMetaData> metaDataMap = recoveryTarget.existingFiles(discoveryNode, store);\n+ assertTrue(metaDataMap.size() > 0);\n+ int iters = randomIntBetween(10, 20);\n+ for (int i = 0; i < iters; i++) {\n+ Version version = randomVersion();\n+ DiscoveryNode discoNode = new DiscoveryNode(\"123\", new LocalTransportAddress(\"123\"), version);\n+ Map<String, StoreFileMetaData> map = recoveryTarget.existingFiles(discoNode, store);\n+ if (version.before(Version.V_1_4_0)) {\n+ assertTrue(map.isEmpty());\n+ } else {\n+ assertEquals(map.size(), metaDataMap.size());\n+ }\n+\n+ }\n+ } finally {\n+ store.decRef();\n+ }\n+\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/indices/recovery/RecoveryTargetTests.java",
"status": "added"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n import org.elasticsearch.common.transport.TransportAddress;\n+import org.elasticsearch.indices.recovery.RecoverySettings;\n import org.elasticsearch.node.settings.NodeSettingsService;\n \n import java.lang.reflect.Constructor;\n@@ -64,16 +65,21 @@ public static AllocationService createAllocationService(Settings settings, Rando\n \n public static AllocationDeciders randomAllocationDeciders(Settings settings, NodeSettingsService nodeSettingsService, Random random) {\n final ImmutableSet<Class<? extends AllocationDecider>> defaultAllocationDeciders = AllocationDecidersModule.DEFAULT_ALLOCATION_DECIDERS;\n+ final RecoverySettings recoverySettings = new RecoverySettings(settings, nodeSettingsService);\n final List<AllocationDecider> list = new ArrayList<>();\n for (Class<? extends AllocationDecider> deciderClass : defaultAllocationDeciders) {\n try {\n try {\n Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class, NodeSettingsService.class);\n list.add(constructor.newInstance(settings, nodeSettingsService));\n } catch (NoSuchMethodException e) {\n- Constructor<? extends AllocationDecider> constructor = null;\n- constructor = deciderClass.getConstructor(Settings.class);\n- list.add(constructor.newInstance(settings));\n+ try {\n+ Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class);\n+ list.add(constructor.newInstance(settings));\n+ } catch (NoSuchMethodException e1) {\n+ Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class, RecoverySettings.class);\n+ list.add(constructor.newInstance(settings, recoverySettings));\n+ }\n }\n } catch (Exception ex) {\n throw new RuntimeException(ex);",
"filename": "src/test/java/org/elasticsearch/test/ElasticsearchAllocationTestCase.java",
"status": "modified"
}
]
} |
{
"body": "See CorruptedCompressorTests for details on how this bug can be hit.\n",
"comments": [
{
"body": "Please disable unsafe encode/decode complete.\n- This may crash machines that don't allow unaligned reads: https://github.com/ning/compress/issues/18\n- if (SUNOS) does not imply its safe to do such unaligned reads.\n- This may corrupt data on bigendian systems: https://github.com/ning/compress/issues/37\n- We do not test such situations. \n",
"created_at": "2014-08-08T19:38:02Z"
},
{
"body": "Ok, I think I addressed all the comments. The only unchanged thing is the license file, because I don't know which license to put in there (the original file had no license header).\n",
"created_at": "2014-08-08T20:12:44Z"
},
{
"body": "The PR to the compress-lzf project was merged, and a 1.0.2 release was made. I removed the X encoder and made the upgrade to 1.0.2.\n",
"created_at": "2014-08-09T19:03:45Z"
},
{
"body": "looks good, thanks Ryan.\n",
"created_at": "2014-08-11T12:57:29Z"
},
{
"body": "+1 as well\n",
"created_at": "2014-08-11T12:58:29Z"
},
{
"body": "Thanks. Pushed.\n",
"created_at": "2014-08-11T14:29:18Z"
},
{
"body": "Upgrading from 1.1.1 to 1.6.0 and noticing this output from our cluster\n\n``````\ninsertOrder timeInQueue priority source\n 37659 27ms HIGH shard-failed ([callers][2], node[Ko3b9KsESN68lTkPtVrHKw], relocating [4mcZCKvBRoKQJS_StGNPng], [P], s[INITIALIZING]), reason [shard failure [failed recovery][RecoveryFailedException[[callers][2]: Recovery failed from [aws_el1][4mcZCKvBRoKQJS_StGNPng][ip-10-55-11-210][inet[/10.55.11.210:9300]]{rack=useast1, master=true, zone=zonea} into [aws_el1a][Ko3b9KsESN68lTkPtVrHKw][ip-10-55-11-211][inet[/10.55.11.211:9300]]{rack=useast1, zone=zonea, master=true} (unexpected error)]; nested: ElasticsearchIllegalStateException[Can't recovery from node [aws_el1][4mcZCKvBRoKQJS_StGNPng][ip-10-55-11-210][inet[/10.55.11.210:9300]]{rack=useast1, master=true, zone=zonea} with [indices.recovery.compress : true] due to compression bugs - see issue #7210 for details]; ]]```\n\nwhat do we do?\n``````\n",
"created_at": "2015-06-18T20:00:30Z"
},
{
"body": "@taf2 Turn off compression before upgrading.\n",
"created_at": "2015-06-18T20:02:47Z"
},
{
"body": "@rjernst thanks! which kind of compression do we disable...\n\nis it this option in\n\n/etc/elasticsearch/elasticsearch.yml\n#transport.tcp.compress: true\n\n?\n\nor another option?\n",
"created_at": "2015-06-18T20:04:57Z"
},
{
"body": "okay sorry it looks like we need to disable indices.recovery.compress - but is this something that needs to be disabled on all nodes in the cluster or just the new 1.6.0 node we're starting up now?\n",
"created_at": "2015-06-18T20:06:01Z"
},
{
"body": "All nodes in the cluster, before starting the upgrade. The problem is old nodes with this setting enabled would use the old buggy code, which can then cause data copied between and old and new node to become corrupted.\n",
"created_at": "2015-06-18T20:07:08Z"
},
{
"body": "excellent thank you - we have run the following on the existing cluster:\n\n```\ncurl -XPUT localhost:9200/_cluster/settings -d '{\"transient\" : {\"indices.recovery.compress\" : false }}'\n```\n",
"created_at": "2015-06-18T20:10:20Z"
},
{
"body": "Thank you that did the trick!\n",
"created_at": "2015-06-18T20:12:39Z"
}
],
"number": 7210,
"title": "Fix a very rare case of corruption in compression used for internal cluster communication."
} | {
"body": "This commit forces a full recovery if the source node is < 1.4.0 and\nprevents any recoveries from pre 1.3.2 nodes to\nwork around #7210\n\nCloses #9922\n\nnote: this is just a start, I need to fix some BWC test first before this can be pulled in but I wanted to get the discussion going\n",
"number": 9925,
"review_comments": [
{
"body": "Can't recovery -> Can't recover\n",
"created_at": "2015-02-27T21:33:36Z"
},
{
"body": "Remove the \", recovery as if there are none\"? Because we are failing the recovery instead right?\n",
"created_at": "2015-02-27T21:36:19Z"
},
{
"body": "can we check that we expect this exception? i.e., when version is before 1.3.2 and compression is on?\n",
"created_at": "2015-03-02T11:11:44Z"
},
{
"body": "left overs?\n",
"created_at": "2015-03-02T14:54:15Z"
},
{
"body": "ouch.\n",
"created_at": "2015-03-02T14:55:52Z"
},
{
"body": "hmm yeah :D\n",
"created_at": "2015-03-02T14:56:43Z"
},
{
"body": "oh well :) could be worse\n",
"created_at": "2015-03-02T14:56:55Z"
}
],
"title": "Don't recover from buggy version"
} | {
"commits": [
{
"message": "[RECOVERY] Don't recover from buggy version\n\nThis commit forces a full recovery if the source node is < 1.4.0 and\nprevents any recoveries from pre 1.3.2 nodes if compression is enabled to\nwork around #7210\n\nCloses #9922"
}
],
"files": [
{
"diff": "@@ -19,13 +19,15 @@\n \n package org.elasticsearch.cluster.routing.allocation.decider;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.routing.MutableShardRouting;\n import org.elasticsearch.cluster.routing.RoutingNode;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.recovery.RecoverySettings;\n \n /**\n * An allocation decider that prevents relocation or allocation from nodes\n@@ -37,10 +39,12 @@\n public class NodeVersionAllocationDecider extends AllocationDecider {\n \n public static final String NAME = \"node_version\";\n+ private final RecoverySettings recoverySettings;\n \n @Inject\n- public NodeVersionAllocationDecider(Settings settings) {\n+ public NodeVersionAllocationDecider(Settings settings, RecoverySettings recoverySettings) {\n super(settings);\n+ this.recoverySettings = recoverySettings;\n }\n \n @Override\n@@ -65,6 +69,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n \n private Decision isVersionCompatible(final RoutingNodes routingNodes, final String sourceNodeId, final RoutingNode target, RoutingAllocation allocation) {\n final RoutingNode source = routingNodes.node(sourceNodeId);\n+ if (source.node().version().before(Version.V_1_3_2) && recoverySettings.compress()) { // never recover from pre 1.3.2 with compression enabled\n+ return allocation.decision(Decision.NO, NAME, \"source node version [%s] is prone to corruption bugs with %s = true see issue #7210 for details\",\n+ source.node().version(), RecoverySettings.INDICES_RECOVERY_COMPRESS);\n+ }\n if (target.node().version().onOrAfter(source.node().version())) {\n /* we can allocate if we can recover from a node that is younger or on the same version\n * if the primary is already running on a newer version that won't work due to possible",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/decider/NodeVersionAllocationDecider.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.Version;\n+import org.elasticsearch.bootstrap.Elasticsearch;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.Nullable;\n@@ -149,19 +150,37 @@ protected void retryRecovery(final RecoveryStatus recoveryStatus, final String r\n threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new RecoveryRunner(recoveryStatus.recoveryId()));\n }\n \n+ // pkd private for testing\n+ Map<String, StoreFileMetaData> existingFiles(DiscoveryNode sourceNode, Store store) throws IOException {\n+ final Version sourceNodeVersion = sourceNode.version();\n+ if (sourceNodeVersion.onOrAfter(Version.V_1_4_0)) {\n+ return store.getMetadataOrEmpty().asMap();\n+ } else {\n+ logger.debug(\"Force full recovery source node version {}\", sourceNodeVersion);\n+ // force full recovery if we recover from nodes < 1.4.0\n+ return Collections.EMPTY_MAP;\n+ }\n+ }\n+\n private void doRecovery(final RecoveryStatus recoveryStatus) {\n assert recoveryStatus.sourceNode() != null : \"can't do a recovery without a source node\";\n \n logger.trace(\"collecting local files for {}\", recoveryStatus);\n final Map<String, StoreFileMetaData> existingFiles;\n try {\n- existingFiles = recoveryStatus.store().getMetadataOrEmpty().asMap();\n+ existingFiles = existingFiles(recoveryStatus.sourceNode(), recoveryStatus.store());\n } catch (Exception e) {\n- logger.debug(\"error while listing local files, recovery as if there are none\", e);\n+ logger.debug(\"error while listing local files\", e);\n onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(),\n new RecoveryFailedException(recoveryStatus.state(), \"failed to list local files\", e), true);\n return;\n }\n+ final Version sourceNodeVersion = recoveryStatus.sourceNode().version();\n+ if (sourceNodeVersion.before(Version.V_1_3_2) && recoverySettings.compress()) { // don't recover from pre 1.3.2 if compression is on?\n+ throw new ElasticsearchIllegalStateException(\"Can't recovery from node \"\n+ + recoveryStatus.sourceNode() + \" with [\" + RecoverySettings.INDICES_RECOVERY_COMPRESS\n+ + \" : true] due to compression bugs - see issue #7210 for details\" );\n+ }\n final StartRecoveryRequest request = new StartRecoveryRequest(recoveryStatus.shardId(), recoveryStatus.sourceNode(), clusterService.localNode(),\n false, existingFiles, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n ",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
{
"diff": "@@ -81,7 +81,6 @@\n \n /**\n */\n-@TestLogging(\"index.translog.fs:TRACE\")\n public class BasicBackwardsCompatibilityTest extends ElasticsearchBackwardsCompatIntegrationTest {\n \n /**",
"filename": "src/test/java/org/elasticsearch/bwcompat/BasicBackwardsCompatibilityTest.java",
"status": "modified"
},
{
"diff": "@@ -25,13 +25,11 @@\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n-import org.elasticsearch.cluster.routing.MutableShardRouting;\n-import org.elasticsearch.cluster.routing.RoutingNodes;\n-import org.elasticsearch.cluster.routing.RoutingTable;\n-import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.*;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.indices.recovery.RecoverySettings;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n import org.junit.Test;\n \n@@ -83,23 +81,19 @@ public void testDoNotAllocateFromPrimary() {\n \n logger.info(\"start two nodes and fully start the shards\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n- RoutingTable prevRoutingTable = routingTable;\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n assertThat(routingTable.index(\"test\").shard(i).replicaShardsWithState(UNASSIGNED).size(), equalTo(2));\n-\n }\n \n logger.info(\"start all the primary shards, replicas will start initializing\");\n RoutingNodes routingNodes = clusterState.routingNodes();\n- prevRoutingTable = routingTable;\n routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(INITIALIZING)).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -109,10 +103,8 @@ public void testDoNotAllocateFromPrimary() {\n }\n \n routingNodes = clusterState.routingNodes();\n- prevRoutingTable = routingTable;\n routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(INITIALIZING)).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -124,10 +116,8 @@ public void testDoNotAllocateFromPrimary() {\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n .put(newNode(\"node3\", getPreviousVersion())))\n .build();\n- prevRoutingTable = routingTable;\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -140,10 +130,8 @@ public void testDoNotAllocateFromPrimary() {\n clusterState = 
ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n .put(newNode(\"node4\")))\n .build();\n- prevRoutingTable = routingTable;\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -153,10 +141,8 @@ public void testDoNotAllocateFromPrimary() {\n }\n \n routingNodes = clusterState.routingNodes();\n- prevRoutingTable = routingTable;\n routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(INITIALIZING)).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n- routingNodes = clusterState.routingNodes();\n \n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(3));\n@@ -335,7 +321,79 @@ private final void assertRecoveryNodeVersions(RoutingNodes routingNodes) {\n assertTrue(routingNodes.node(toId).node().version().onOrAfter(routingNodes.node(fromId).node().version()));\n }\n }\n+ }\n+\n+ public void testFailRecoverFromPre132WithCompression() {\n+ final boolean compress = randomBoolean();\n+ AllocationService service = createAllocationService(settingsBuilder()\n+ .put(\"cluster.routing.allocation.concurrent_recoveries\", 10)\n+ .put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, \"INDICES_ALL_ACTIVE\")\n+ .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", -1)\n+ .put(RecoverySettings.INDICES_RECOVERY_COMPRESS, compress)\n+ .build());\n+\n+ logger.info(\"Building initial routing table\");\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ assertThat(routingTable.index(\"test\").shards().size(), equalTo(1));\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(2));\n+ for (ShardRouting shard : routingTable.index(\"test\").shard(i).shards()) {\n+ assertEquals(shard.state(), UNASSIGNED);\n+ assertNull(shard.currentNodeId());\n+ }\n+ }\n+ Version version = randomVersion();\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"old0\", version))).build();\n+ clusterState = stabilize(clusterState, service);\n+ routingTable = clusterState.routingTable();\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertEquals(routingTable.index(\"test\").shard(i).shards().size(), 2);\n+ for (ShardRouting shard : routingTable.index(\"test\").shard(i).shards()) {\n+ if (shard.primary()) {\n+ assertEquals(shard.state(), STARTED);\n+ assertEquals(shard.currentNodeId(), \"old0\");\n+ } else {\n+ assertEquals(shard.state(), UNASSIGNED);\n+ assertNull(shard.currentNodeId());\n+ }\n+ }\n+ }\n \n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"old0\", version))\n+ 
.put(newNode(\"new0\"))).build();\n \n+ clusterState = stabilize(clusterState, service);\n+ routingTable = clusterState.routingTable();\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertEquals(routingTable.index(\"test\").shard(i).shards().size(), 2);\n+ for (ShardRouting shard : routingTable.index(\"test\").shard(i).shards()) {\n+ if (shard.primary()) {\n+ assertEquals(shard.state(), STARTED);\n+ assertEquals(shard.currentNodeId(), \"old0\");\n+ } else {\n+ if (version.before(Version.V_1_3_2) && compress) { // can't recover from pre 1.3.2 with compression enabled\n+ assertEquals(shard.state(), UNASSIGNED);\n+ assertNull(shard.currentNodeId());\n+ } else {\n+ assertEquals(shard.state(), STARTED);\n+ assertEquals(shard.currentNodeId(), \"new0\");\n+ }\n+ }\n+ }\n+\n+\n+ }\n }\n }",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/NodeVersionAllocationDeciderTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,71 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.indices.recovery;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.transport.LocalTransportAddress;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.store.StoreFileMetaData;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n+\n+import java.io.IOException;\n+import java.util.Map;\n+\n+/**\n+ *\n+ */\n+public class RecoveryTargetTests extends ElasticsearchSingleNodeTest {\n+\n+ public void testFullRecoveryFromPre14() throws IOException {\n+ createIndex(\"test\");\n+ int numDocs = scaledRandomIntBetween(10, 100);\n+ for (int j = 0; j < numDocs; ++j) {\n+ String id = Integer.toString(j);\n+ client().prepareIndex(\"test\", \"type1\", id).setSource(\"text\", \"sometext\").get();\n+ }\n+ client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).setForce(true).get();\n+ RecoveryTarget recoveryTarget = getInstanceFromNode(RecoveryTarget.class);\n+ IndexService idxService = getInstanceFromNode(IndicesService.class).indexService(\"test\");\n+ Store store = idxService.shard(0).store();\n+ store.incRef();\n+ try {\n+ DiscoveryNode discoveryNode = new DiscoveryNode(\"123\", new LocalTransportAddress(\"123\"), Version.CURRENT);\n+ Map<String, StoreFileMetaData> metaDataMap = recoveryTarget.existingFiles(discoveryNode, store);\n+ assertTrue(metaDataMap.size() > 0);\n+ int iters = randomIntBetween(10, 20);\n+ for (int i = 0; i < iters; i++) {\n+ Version version = randomVersion();\n+ DiscoveryNode discoNode = new DiscoveryNode(\"123\", new LocalTransportAddress(\"123\"), version);\n+ Map<String, StoreFileMetaData> map = recoveryTarget.existingFiles(discoNode, store);\n+ if (version.before(Version.V_1_4_0)) {\n+ assertTrue(map.isEmpty());\n+ } else {\n+ assertEquals(map.size(), metaDataMap.size());\n+ }\n+\n+ }\n+ } finally {\n+ store.decRef();\n+ }\n+\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/indices/recovery/RecoveryTargetTests.java",
"status": "added"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n import org.elasticsearch.common.transport.TransportAddress;\n+import org.elasticsearch.indices.recovery.RecoverySettings;\n import org.elasticsearch.node.settings.NodeSettingsService;\n \n import java.lang.reflect.Constructor;\n@@ -64,16 +65,21 @@ public static AllocationService createAllocationService(Settings settings, Rando\n \n public static AllocationDeciders randomAllocationDeciders(Settings settings, NodeSettingsService nodeSettingsService, Random random) {\n final ImmutableSet<Class<? extends AllocationDecider>> defaultAllocationDeciders = AllocationDecidersModule.DEFAULT_ALLOCATION_DECIDERS;\n+ final RecoverySettings recoverySettings = new RecoverySettings(settings, nodeSettingsService);\n final List<AllocationDecider> list = new ArrayList<>();\n for (Class<? extends AllocationDecider> deciderClass : defaultAllocationDeciders) {\n try {\n try {\n Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class, NodeSettingsService.class);\n list.add(constructor.newInstance(settings, nodeSettingsService));\n } catch (NoSuchMethodException e) {\n- Constructor<? extends AllocationDecider> constructor = null;\n- constructor = deciderClass.getConstructor(Settings.class);\n- list.add(constructor.newInstance(settings));\n+ try {\n+ Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class);\n+ list.add(constructor.newInstance(settings));\n+ } catch (NoSuchMethodException e1) {\n+ Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class, RecoverySettings.class);\n+ list.add(constructor.newInstance(settings, recoverySettings));\n+ }\n }\n } catch (Exception ex) {\n throw new RuntimeException(ex);",
"filename": "src/test/java/org/elasticsearch/test/ElasticsearchAllocationTestCase.java",
"status": "modified"
}
]
} |
{
"body": "Seems that there is an NPE in the `waitForDocs` method of ElasticsearchIntegrationTest. I discovered when trying to write a test case using this class which calls the method: `waitForDocs(final long numDocs)` which propagates down a null value to this portion of the code.\n\nProblem seems to surface on this line:\nhttps://github.com/elasticsearch/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java#L999\n\nAs noted in the documentation indexer is Nullable, but if null here, we see a null pointer. I think in the null case we could default to 0 and the code would work as documented, but wanted to get some feedback before making the change.\n",
"comments": [
{
"body": "thanks for reporting it!! ;) I pushed a fix\n",
"created_at": "2015-02-26T20:20:33Z"
}
],
"number": 9907,
"title": "NPE on ElasticsearchIntegrationTest"
} | {
"body": "Closes #9907\n",
"number": 9909,
"review_comments": [],
"title": "[TEST] Fix NPE in ElasticsearchIntegrationTest if no indexer is provided"
} | {
"commits": [
{
"message": "[TEST] Fix NPE in ElasticsearchIntegrationTest if no indexer is provided\n\nCloses #9907"
}
],
"files": [
{
"diff": "@@ -129,6 +129,7 @@\n import java.util.*;\n import java.util.concurrent.*;\n import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicLong;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n@@ -991,35 +992,37 @@ public long waitForDocs(final long numDocs, final @Nullable BackgroundIndexer in\n */\n public long waitForDocs(final long numDocs, int maxWaitTime, TimeUnit maxWaitTimeUnit, final @Nullable BackgroundIndexer indexer)\n throws InterruptedException {\n- final long[] lastKnownCount = {-1};\n+ final AtomicLong lastKnownCount = new AtomicLong(-1);\n long lastStartCount = -1;\n Predicate<Object> testDocs = new Predicate<Object>() {\n @Override\n public boolean apply(Object o) {\n- lastKnownCount[0] = indexer.totalIndexedDocs();\n- if (lastKnownCount[0] >= numDocs) {\n+ if (indexer != null) {\n+ lastKnownCount.set(indexer.totalIndexedDocs());\n+ }\n+ if (lastKnownCount.get() >= numDocs) {\n long count = client().prepareCount().setQuery(matchAllQuery()).execute().actionGet().getCount();\n- if (count == lastKnownCount[0]) {\n+ if (count == lastKnownCount.get()) {\n // no progress - try to refresh for the next time\n client().admin().indices().prepareRefresh().get();\n }\n- lastKnownCount[0] = count;\n- logger.debug(\"[{}] docs visible for search. waiting for [{}]\", lastKnownCount[0], numDocs);\n+ lastKnownCount.set(count);\n+ logger.debug(\"[{}] docs visible for search. waiting for [{}]\", lastKnownCount.get(), numDocs);\n } else {\n- logger.debug(\"[{}] docs indexed. waiting for [{}]\", lastKnownCount[0], numDocs);\n+ logger.debug(\"[{}] docs indexed. waiting for [{}]\", lastKnownCount.get(), numDocs);\n }\n- return lastKnownCount[0] >= numDocs;\n+ return lastKnownCount.get() >= numDocs;\n }\n };\n \n while (!awaitBusy(testDocs, maxWaitTime, maxWaitTimeUnit)) {\n- if (lastStartCount == lastKnownCount[0]) {\n+ if (lastStartCount == lastKnownCount.get()) {\n // we didn't make any progress\n fail(\"failed to reach \" + numDocs + \"docs\");\n }\n- lastStartCount = lastKnownCount[0];\n+ lastStartCount = lastKnownCount.get();\n }\n- return lastKnownCount[0];\n+ return lastKnownCount.get();\n }\n \n ",
"filename": "src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java",
"status": "modified"
}
]
} |
{
"body": "Hi guys. \n\nI've stumbled upon an issue. When specifying the `time_zone` parameters in a range filter where `gte` and `lt` are set using datemath, the time zone is not taken into account when rounding!\n\nThe following is part of a filters aggregation, and I expect to be given the week 4 weeks ago, starting on a\nsunday at 23:00 UTC time, and ending on sunday 23:00 UTC time with a full week in between. \n\n```\n{\n \"filters\": {\n \"4w\": {\n \"range\": {\n \"timestamp\": {\n \"lt\": \"now-4w+1w/w\",\n \"gte\": \"now-4w/w\",\n \"time_zone\": \"+01:00\"\n }\n }\n }\n }\n}\n```\n\nBut no, after inspecting the docs that are returned, there are no documents returned between the first sunday 23:00 and monday 00:00 UTC. There are however documents returned between sunday 23:00 and monday 00:00 UTC at the end of the interval.\n\nIs this a bug? Should not timezones be included when doing rounding in date math?\n",
"comments": [
{
"body": "Will have a look since this might or might not be related to the rounding issues recently solved in #9790.\n",
"created_at": "2015-02-25T09:03:24Z"
},
{
"body": "@mewwts Which version are you running?\n",
"created_at": "2015-02-25T09:04:02Z"
},
{
"body": "From a first glance: the rounding in DateMathParser (which seems to be used in range filter) is not related to the TimeZoneRounding which was subject of #9790, so I was wrong in assuming the two issues are related. Seems like rounding in DateMathParser does not take into account time zone correctly.\n",
"created_at": "2015-02-25T10:29:22Z"
},
{
"body": "Hi @cbuescher, thanks for your reply and quick fix!?\nI'm running 1.4.2. I was going to take a look at this myself, but I can see you beat me to it!\n",
"created_at": "2015-02-25T13:21:40Z"
},
{
"body": "Hands down, greatest response ever. Thanks @cbuescher. Now the wait for 1.4.5, I guess?\n",
"created_at": "2015-02-25T18:08:40Z"
},
{
"body": "So far only on out main branch, but will merge with the current 1.4 branch, so after that's happened it will go into 1.4.5. \n",
"created_at": "2015-02-25T18:11:25Z"
},
{
"body": ":+1: Thanks, @cbuescher.\n",
"created_at": "2015-02-25T18:33:30Z"
},
{
"body": "On branch 1.4 with dff19cb and on 1.x with 8391b51\n",
"created_at": "2015-03-02T11:35:53Z"
},
{
"body": "Great @cbuescher!\n",
"created_at": "2015-03-03T09:03:25Z"
}
],
"number": 9814,
"title": "Range filters, date math and time_zone"
} | {
"body": "Currently rounding in DateMathParser is always done in UTC, even \nwhen another time zone is specified. This is fixed by passing the time zone \ndown to the rounding logic when it is specified.\n\nCloses #9814\n",
"number": 9885,
"review_comments": [],
"title": "DateMath: Use time zone when rounding. "
} | {
"commits": [
{
"message": "DateMath: Fix using time zone when rounding.\n\nCurrently rounding in DateMathParser is always done in UTC, even\nwhen another time zone is specified. This fix corrects this by\npassing the specified time zone down to the rounding logic.\n\nCloses #9814"
},
{
"message": "added one more test case, corrected minor typo"
}
],
"files": [
{
"diff": "@@ -76,11 +76,14 @@ public long parse(String text, Callable<Long> now, boolean roundUp, DateTimeZone\n }\n }\n \n- return parseMath(mathString, time, roundUp);\n+ return parseMath(mathString, time, roundUp, timeZone);\n }\n \n- private long parseMath(String mathString, long time, boolean roundUp) throws ElasticsearchParseException {\n- MutableDateTime dateTime = new MutableDateTime(time, DateTimeZone.UTC);\n+ private long parseMath(String mathString, long time, boolean roundUp, DateTimeZone timeZone) throws ElasticsearchParseException {\n+ if (timeZone == null) {\n+ timeZone = DateTimeZone.UTC;\n+ }\n+ MutableDateTime dateTime = new MutableDateTime(time, timeZone);\n for (int i = 0; i < mathString.length(); ) {\n char c = mathString.charAt(i++);\n final boolean round;",
"filename": "src/main/java/org/elasticsearch/common/joda/DateMathParser.java",
"status": "modified"
},
{
"diff": "@@ -146,16 +146,26 @@ public void testRounding() {\n assertDateMathEquals(\"2014-11-18||/y\", \"2014-12-31T23:59:59.999\", 0, true, null);\n assertDateMathEquals(\"2014||/y\", \"2014-01-01\", 0, false, null);\n assertDateMathEquals(\"2014-01-01T00:00:00.001||/y\", \"2014-12-31T23:59:59.999\", 0, true, null);\n+ // rounding should also take into account time zone\n+ assertDateMathEquals(\"2014-11-18||/y\", \"2013-12-31T23:00:00.000Z\", 0, false, DateTimeZone.forID(\"CET\"));\n+ assertDateMathEquals(\"2014-11-18||/y\", \"2014-12-31T22:59:59.999Z\", 0, true, DateTimeZone.forID(\"CET\"));\n \n assertDateMathEquals(\"2014-11-18||/M\", \"2014-11-01\", 0, false, null);\n assertDateMathEquals(\"2014-11-18||/M\", \"2014-11-30T23:59:59.999\", 0, true, null);\n assertDateMathEquals(\"2014-11||/M\", \"2014-11-01\", 0, false, null);\n assertDateMathEquals(\"2014-11||/M\", \"2014-11-30T23:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18||/M\", \"2014-10-31T23:00:00.000Z\", 0, false, DateTimeZone.forID(\"CET\"));\n+ assertDateMathEquals(\"2014-11-18||/M\", \"2014-11-30T22:59:59.999Z\", 0, true, DateTimeZone.forID(\"CET\"));\n \n assertDateMathEquals(\"2014-11-18T14||/w\", \"2014-11-17\", 0, false, null);\n assertDateMathEquals(\"2014-11-18T14||/w\", \"2014-11-23T23:59:59.999\", 0, true, null);\n assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-17\", 0, false, null);\n assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-23T23:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-16T23:00:00.000Z\", 0, false, DateTimeZone.forID(\"+01:00\"));\n+ assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-17T01:00:00.000Z\", 0, false, DateTimeZone.forID(\"-01:00\"));\n+ assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-16T23:00:00.000Z\", 0, false, DateTimeZone.forID(\"CET\"));\n+ assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-23T22:59:59.999Z\", 0, true, DateTimeZone.forID(\"CET\"));\n+ assertDateMathEquals(\"2014-07-22||/w\", \"2014-07-20T22:00:00.000Z\", 0, false, DateTimeZone.forID(\"CET\")); // with DST\n \n assertDateMathEquals(\"2014-11-18T14||/d\", \"2014-11-18\", 0, false, null);\n assertDateMathEquals(\"2014-11-18T14||/d\", \"2014-11-18T23:59:59.999\", 0, true, null);\n@@ -181,7 +191,7 @@ public void testRounding() {\n assertDateMathEquals(\"2014-11-18T14:27:32||/s\", \"2014-11-18T14:27:32\", 0, false, null);\n assertDateMathEquals(\"2014-11-18T14:27:32||/s\", \"2014-11-18T14:27:32.999\", 0, true, null);\n }\n- \n+\n public void testTimestamps() {\n assertDateMathEquals(\"1418248078000\", \"2014-12-10T21:47:58.000\");\n ",
"filename": "src/test/java/org/elasticsearch/common/joda/DateMathParserTests.java",
"status": "modified"
}
]
} |
{
"body": "Using a timezone in combination with 'pre_zone_adjust_large_interval' set to true leads to the creation of buckets which are not correctly aligned. \n\nExample:\nTimezone set to CET, interval set to month (march and april are off here):\n\n```\n\"aggregations\" : {\n \"histo\" : {\n \"buckets\" : [ {\n \"key_as_string\" : \"2013-12-31T23:00:00.000Z\",\n \"key\" : 1388530800000,\n \"doc_count\" : 1\n }, {\n \"key_as_string\" : \"2014-01-31T23:00:00.000Z\",\n \"key\" : 1391209200000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-02-28T23:00:00.000Z\",\n \"key\" : 1393628400000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-03-28T23:00:00.000Z\",\n \"key\" : 1396047600000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-04-28T23:00:00.000Z\",\n \"key\" : 1398726000000,\n \"doc_count\" : 0\n } ]\n }\n }\n```\n\nReproduction:\nhttps://gist.github.com/miccon/4eecfeafc3a66a9d8b24\n",
"comments": [
{
"body": "This also leads to 'double' buckets during daylight savings time change:\n\nExample with interval set to day:\n\n```\n\"aggregations\" : {\n \"histo\" : {\n \"buckets\" : [ {\n \"key_as_string\" : \"2014-10-25T22:00:00.000Z\", // Still daylight saving (midnight 26th)\n \"key\" : 1414274400000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-10-26T22:00:00.000Z\", // Should be 23h in UTC (wrong bucket)\n \"key\" : 1414360800000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-10-26T23:00:00.000Z\", // Winter time (midnight 27th)\n \"key\" : 1414364400000,\n \"doc_count\" : 1\n }, {\n \"key_as_string\" : \"2014-10-27T23:00:00.000Z\",\n \"key\" : 1414450800000,\n \"doc_count\" : 0\n } ]\n }\n }\n```\n\nhttps://gist.github.com/miccon/de967d67cc8fc80fb3d8\n",
"created_at": "2014-10-23T15:00:37Z"
},
{
"body": "In the code the following assertion is triggered (using 1.4, seems to be also present in master)\n\n```\nCaused by: java.lang.AssertionError\n at org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.reduce(InternalHistogram.java:368)\n at org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:140)\n at org.elasticsearch.search.controller.SearchPhaseController.merge(SearchPhaseController.java:374)\n at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction$1.doRun(TransportSearchQueryAndFetchAction.java:83)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)\n ... 3 more\n```\n",
"created_at": "2014-10-23T15:04:26Z"
},
{
"body": "+1\n\nAlthough the title is misleading - the issue has nothing to do with extended bounds.\n\nRunning the commands below:\n\n```\ncurl -sXDELETE 'localhost:9200/test'\ncurl -sXPOST 'localhost:9200/test/test/?pretty=true&refresh=true' -d '{\"date\": \"2014-01-01T0:00:00Z\"}'\ncurl -sXPOST 'localhost:9200/test/test/?pretty=true&refresh=true' -d '{\"date\": \"2014-04-01T0:00:00Z\"}'\ncurl -sXPOST 'localhost:9200/test/test/?pretty=true&refresh=true' -d '{\"date\": \"2014-04-30T0:00:00Z\"}'\ncurl -sXGET 'localhost:9200/_search?pretty=true' -d ' \n{\n \"size\": 0,\n \"aggs\": {\n \"histo\": {\n \"date_histogram\": {\n \"field\": \"date\",\n \"interval\": \"month\",\n \"pre_zone\": \"+01:00\",\n \"pre_zone_adjust_large_interval\": true,\n \"min_doc_count\": 0\n }\n }\n }\n}'\n```\n\nResults in:\n\n```\n{\n \"took\" : 6,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 127,\n \"successful\" : 127,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 30155,\n \"max_score\" : 0.0,\n \"hits\" : [ ]\n },\n \"aggregations\" : {\n \"histo\" : {\n \"buckets\" : [ {\n \"key_as_string\" : \"2013-12-31T23:00:00.000Z\",\n \"key\" : 1388530800000,\n \"doc_count\" : 1\n }, {\n \"key_as_string\" : \"2014-01-31T23:00:00.000Z\",\n \"key\" : 1391209200000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-02-28T23:00:00.000Z\",\n \"key\" : 1393628400000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-03-28T23:00:00.000Z\",\n \"key\" : 1396047600000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2014-03-31T23:00:00.000Z\",\n \"key\" : 1396306800000,\n \"doc_count\" : 2\n } ]\n }\n }\n}\n```\n\nThe bucket keys I'd expect are: `2013-12-31T23:00:00.000Z`, `2014-01-31T23:00:00.000Z`, `2014-02-28T23:00:00.000Z`, `2014-03-31T23:00:00.000Z`. Note that this is only an issue with positive `pre_zone` values and `pre_zone_adjust_large_interval` equal to `true`.\n",
"created_at": "2015-01-13T22:45:16Z"
},
{
"body": "This looks very similar to #9491 and #7673 to me. Probably also fixed the same way. Will see if this behaviour is already fixed with the latest clean up of the `date_histogram` on master.\n",
"created_at": "2015-02-20T15:30:01Z"
},
{
"body": "I tried the script of @wojcikstefan on current master (7c20a8), works there. However, bug is reproducable on 1.x. Will dig into this further.\n",
"created_at": "2015-02-20T16:12:21Z"
},
{
"body": "@jpountz Had a look at these cases and why they still don't work on 1.x an 1.4 even after fix from https://github.com/elasticsearch/elasticsearch/pull/9790. The reason is that using `pre_zone_adjust_large_interval = true` for month and bigger intervals forces the TimeZoneRounding implementation to be TimeTimeZoneRoundingFloor. When calculating next buckets keys when inserting empty buckets we should do the adding of the time duration in local time (where also the rounding takes place).\nAdding the back and forth conversion for preTz in TimeTimeZoneRoundingFloor.nextRoundingValue solved the issue for me. Will issue a PR if you want to have a look.\n",
"created_at": "2015-02-23T16:56:17Z"
},
{
"body": "@cbuescher +1 on a PR, your description of the fix looks good to me. Thanks for taking care of this!\n",
"created_at": "2015-02-23T17:40:39Z"
},
{
"body": "Fix on 1.4 branch: e869e90\nFix on 1.x branch: f829670\nOnly tests on master: 4ef430d\n",
"created_at": "2015-02-23T18:16:42Z"
}
],
"number": 8209,
"title": "Extended bounds create misaligned empty buckets in date histogram aggregation."
} | {
"body": "Fixes an issue with using `date_histogram` aggregation for month intervals\nin combination with `pre_zone_adjust_large_interval` reported in #8209.\n\nCloses #8209\n",
"number": 9828,
"review_comments": [],
"title": "Aggs: Fix rounding issue using `date_histogram` with `pre_zone_adjust_large_interval`"
} | {
"commits": [
{
"message": "Aggs: Fix rounding issue using `date_histogram` with `pre_zone_adjust_large_interval`\n\nThis fixes an issue with using `date_histogram` aggregation for month intervals\nin combination with `pre_zone_adjust_large_interval` reported in #8209.\n\nCloses #8209"
}
],
"files": [
{
"diff": "@@ -169,8 +169,10 @@ public long valueForKey(long time) {\n @Override\n public long nextRoundingValue(long time) {\n long currentWithoutPostZone = postTz.convertLocalToUTC(time, true);\n- long nextWithoutPostZone = durationField.add(currentWithoutPostZone, 1);\n- return postTz.convertUTCToLocal(nextWithoutPostZone);\n+ // we also need to correct for preTz because rounding takes place in local time zone\n+ long local = preTz.convertUTCToLocal(currentWithoutPostZone);\n+ long nextLocal = durationField.add(local, 1);\n+ return postTz.convertUTCToLocal(preTz.convertLocalToUTC((nextLocal), true));\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java",
"status": "modified"
},
{
"diff": "@@ -155,15 +155,28 @@ public void testPreZoneAdjustLargeInterval() {\n \n @Test\n public void testAmbiguousHoursAfterDSTSwitch() {\n- Rounding tzRounding;\n-\n- tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"Asia/Jerusalem\")).build();\n+ Rounding tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"Asia/Jerusalem\")).build();\n // Both timestamps \"2014-10-25T22:30:00Z\" and \"2014-10-25T23:30:00Z\" are \"2014-10-26T01:30:00\" in local time because\n // of DST switch between them. This test checks that they are both returned to their correct UTC time after rounding.\n assertThat(tzRounding.round(time(\"2014-10-25T22:30:00\", DateTimeZone.UTC)), equalTo(time(\"2014-10-25T22:00:00\", DateTimeZone.UTC)));\n assertThat(tzRounding.round(time(\"2014-10-25T23:30:00\", DateTimeZone.UTC)), equalTo(time(\"2014-10-25T23:00:00\", DateTimeZone.UTC)));\n }\n \n+ @Test\n+ public void testNextRoundingValueCornerCase8209() {\n+ Rounding tzRounding = TimeZoneRounding.builder(DateTimeUnit.MONTH_OF_YEAR).preZone(DateTimeZone.forID(\"+01:00\")).\n+ preZoneAdjustLargeInterval(true).build();\n+ long roundedValue = tzRounding.round(time(\"2014-01-01T00:00:00Z\", DateTimeZone.UTC));\n+ assertThat(roundedValue, equalTo(time(\"2013-12-31T23:00:00.000Z\", DateTimeZone.UTC)));\n+ roundedValue = tzRounding.nextRoundingValue(roundedValue);\n+ assertThat(roundedValue, equalTo(time(\"2014-01-31T23:00:00.000Z\", DateTimeZone.UTC)));\n+ roundedValue = tzRounding.nextRoundingValue(roundedValue);\n+ assertThat(roundedValue, equalTo(time(\"2014-02-28T23:00:00.000Z\", DateTimeZone.UTC)));\n+ roundedValue = tzRounding.nextRoundingValue(roundedValue);\n+ assertThat(roundedValue, equalTo(time(\"2014-03-31T23:00:00.000Z\", DateTimeZone.UTC)));\n+ roundedValue = tzRounding.nextRoundingValue(roundedValue);\n+ assertThat(roundedValue, equalTo(time(\"2014-04-30T23:00:00.000Z\", DateTimeZone.UTC)));\n+ }\n \n private long utc(String time) {\n return time(time, DateTimeZone.UTC);",
"filename": "src/test/java/org/elasticsearch/common/rounding/TimeZoneRoundingTests.java",
"status": "modified"
},
{
"diff": "@@ -1303,6 +1303,31 @@ public void testIssue7673() throws InterruptedException, ExecutionException {\n assertThat(histo.getBuckets().get(2).getDocCount(), equalTo(1L));\n }\n \n+ public void testIssue8209() throws InterruptedException, ExecutionException {\n+ assertAcked(client().admin().indices().prepareCreate(\"test8209\").addMapping(\"type\", \"d\", \"type=date\").get());\n+ indexRandom(true,\n+ client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-01-01T0:00:00Z\"),\n+ client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-04-01T0:00:00Z\"),\n+ client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-04-30T0:00:00Z\"));\n+ ensureSearchable(\"test8209\");\n+ SearchResponse response = client().prepareSearch(\"test8209\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"d\").interval(DateHistogram.Interval.MONTH).preZone(\"+01:00\")\n+ .minDocCount(0)\n+ .preZoneAdjustLargeInterval(true))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo.getBuckets().size(), equalTo(4));\n+ assertThat(histo.getBuckets().get(0).getKey(), equalTo(\"2013-12-31T23:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(0).getDocCount(), equalTo(1L));\n+ assertThat(histo.getBuckets().get(1).getKey(), equalTo(\"2014-01-31T23:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(1).getDocCount(), equalTo(0L));\n+ assertThat(histo.getBuckets().get(2).getKey(), equalTo(\"2014-02-28T23:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(2).getDocCount(), equalTo(0L));\n+ assertThat(histo.getBuckets().get(3).getKey(), equalTo(\"2014-03-31T23:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(3).getDocCount(), equalTo(2L));\n+ }\n+\n /**\n * see issue #9634, negative interval in date_histogram should raise exception\n */",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
}
]
} |
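The rounding change in the `TimeZoneRounding.java` diff above can be illustrated outside Elasticsearch. The sketch below is a minimal stand-alone recreation, assuming only Joda-Time on the classpath; the class name `NextRoundingValueSketch` and the free-standing `nextRoundingValue` helper are illustrative stand-ins for `TimeTimeZoneRoundingFloor.nextRoundingValue`, not the actual Elasticsearch class. It adds the month interval in the pre-zone's local time and converts back, which keeps bucket keys on month boundaries (`2014-01-31T23:00:00Z`, `2014-02-28T23:00:00Z`, `2014-03-31T23:00:00Z`, ... for `pre_zone=+01:00`).

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.DurationField;
import org.joda.time.chrono.ISOChronology;

public class NextRoundingValueSketch {

    // Stand-in for the patched TimeTimeZoneRoundingFloor.nextRoundingValue:
    // the interval is added in preTz local time, where the rounding also happened.
    static long nextRoundingValue(long time, DateTimeZone preTz, DateTimeZone postTz, DurationField durationField) {
        long currentWithoutPostZone = postTz.convertLocalToUTC(time, true);
        long local = preTz.convertUTCToLocal(currentWithoutPostZone);
        long nextLocal = durationField.add(local, 1);
        return postTz.convertUTCToLocal(preTz.convertLocalToUTC(nextLocal, true));
    }

    public static void main(String[] args) {
        DateTimeZone preTz = DateTimeZone.forID("+01:00");
        DateTimeZone postTz = DateTimeZone.UTC;           // no post_zone adjustment in this sketch
        DurationField months = ISOChronology.getInstanceUTC().months();

        // 1388530800000 = 2013-12-31T23:00:00Z, the month bucket key for +01:00
        long key = 1388530800000L;
        for (int i = 0; i < 5; i++) {
            System.out.println(new DateTime(key, DateTimeZone.UTC));
            key = nextRoundingValue(key, preTz, postTz, months);
        }
        // prints month-aligned keys: 2013-12-31T23:00Z, 2014-01-31T23:00Z,
        // 2014-02-28T23:00Z, 2014-03-31T23:00Z, 2014-04-30T23:00Z
    }
}
```

Without the preTz round-trip (the removed line in the diff), the month is added directly to the UTC instant, so a key such as 2014-02-28T23:00:00Z advances to 2014-03-28T23:00:00Z instead of 2014-03-31T23:00:00Z, reproducing the misaligned empty buckets reported in the issue.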
{
"body": "For ongoing recoveries we maintain a recovery state. That state is used by both the recovery mechanism as the recovery API, which reports on ongoing recoveries. This means RecoveryState can be accessed concurrently from multiple threads. \n\nI run into at least one problem concerning the list of files that should be replicated:\n\nhttps://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java#L563\n",
"comments": [
{
"body": "On a node restart, it's common that a user constantly checks the state of shard recoveries until the cluster is green. Like to check if the UNASSIGNED shard counts are dropping. Or, to list those shards for debugging when we find that those counts are not dropping as fast as expected. Would these activities be affected by this bug?\n",
"created_at": "2014-11-04T19:36:35Z"
},
{
"body": "Also, what ES version(s) does this bug apply to?\n",
"created_at": "2014-11-04T19:40:41Z"
}
],
"number": 6644,
"title": "RecoveryState should be concurrently accessible"
} | {
"body": "To support the `_recovery` API, the recovery process keeps track of current progress in a class called RecoveryState. This class currently have some issues, mostly around concurrency (see #6644 ). This PR cleans it up as well as other issues around it:\n- Make the Index subsection API cleaner:\n - remove redundant information - all calculation is done based on the underlying file map\n - clearer definition of what is what: total files, vs reused files (local files that match the source) vs recovered files (copied over). % based progress is reported based on recovered files only.\n - cleaned up json response to match other API (sadly this breaks the structure). We now properly report human values for dates and other units.\n - Add more robust unit testing\n- Detail flag was passed along as state (it's now a ToXContent param)\n- State lookup during reporting is now always done via the IndexShard , no more fall backs to many other classes.\n- Cleanup APIs around time and move the little computations to the state class as opposed to doing them out of the API\n\nI also improved error messages out of the REST testing infra for things I run into.\n\nGiven the BWC nature of the change I'm on the fence whether this should go into 1.4.X - it does fix a concurrency issue and makes things consistent where they weren't.\n\nCloses #6644 \n",
"number": 9811,
"review_comments": [
{
"body": "weird... this if has an empty body?\n",
"created_at": "2015-02-24T10:14:47Z"
},
{
"body": "instead of calling `recoveryState.getIndex()` all time can we assign it to a local var above?\n",
"created_at": "2015-02-24T10:15:47Z"
},
{
"body": "instead of calling `recoveryState.getIndex()` all time can we assign it to a local var above?\n",
"created_at": "2015-02-24T10:15:54Z"
},
{
"body": "does it make sense to add the exception to the logging?\n",
"created_at": "2015-02-24T10:16:59Z"
},
{
"body": "this diff seems outdated, we do not use listAll() here in that place? did you intend to open the PR against 1.x?\n",
"created_at": "2015-02-24T10:18:03Z"
},
{
"body": "can we assign `recoveryState.getStart()` to a local var?\n",
"created_at": "2015-02-24T10:18:28Z"
},
{
"body": "s/this.//\n",
"created_at": "2015-02-24T10:18:55Z"
},
{
"body": "I wonder if we can put all these in a struct like class where everything is final and make the struct volatile? the all seems to be set at the same time?\n",
"created_at": "2015-02-24T10:20:54Z"
},
{
"body": "yeah. I tend to work on 1.x when it's non-trivial in terms of BWC. I'll rebase once we're done.\n",
"created_at": "2015-02-24T15:00:28Z"
},
{
"body": "will add.\n",
"created_at": "2015-02-24T15:05:33Z"
}
],
"title": "RecoveryState clean up"
} | {
"commits": [
{
"message": "wip"
},
{
"message": "wip"
},
{
"message": "state refactored"
},
{
"message": "more compilation errors and a test"
},
{
"message": "Fixed tests and extend error messages in rest API. Also made % reporting 100% there is nothing to recover."
},
{
"message": "doc changes and minor tweaks"
},
{
"message": "fixed json byte fields to conform to other API."
},
{
"message": "better handling of BWC in RecoveryFilesInfoRequest"
},
{
"message": "feedback + more simplifications"
}
],
"files": [
{
"diff": "@@ -8,29 +8,30 @@ For example, the following command would show recovery information for the indic\n \n [source,js]\n --------------------------------------------------\n-curl -XGET http://localhost:9200/index1,index2/_recovery?pretty=true\n+curl -XGET http://localhost:9200/index1,index2/_recovery\n --------------------------------------------------\n \n To see cluster-wide recovery status simply leave out the index names.\n \n [source,js]\n --------------------------------------------------\n-curl -XGET http://localhost:9200/_recovery?pretty=true\n+curl -XGET http://localhost:9200/_recovery?pretty&human\n --------------------------------------------------\n \n Response:\n-\n+coming[1.5.0, this syntax was change to fix inconsistencies with other API]\n [source,js]\n --------------------------------------------------\n {\n \"index1\" : {\n \"shards\" : [ {\n \"id\" : 0,\n- \"type\" : \"snapshot\",\n- \"stage\" : \"index\",\n+ \"type\" : \"SNAPSHOT\",\n+ \"stage\" : \"INDEX\",\n \"primary\" : true,\n \"start_time\" : \"2014-02-24T12:15:59.716\",\n- \"stop_time\" : 0,\n+ \"start_time_in_millis\": 1393244159716,\n+ \"total_time\" : \"2.9m\"\n \"total_time_in_millis\" : 175576,\n \"source\" : {\n \"repository\" : \"my_repository\",\n@@ -44,26 +45,33 @@ Response:\n \"name\" : \"my_es_node\"\n },\n \"index\" : {\n+ \"size\" : {\n+ \"total\" : \"75.4mb\"\n+ \"total_in_bytes\" : 79063092,\n+ \"reused\" : \"0b\",\n+ \"reused_in_bytes\" : 0,\n+ \"recovered\" : \"65.7mb\",\n+ \"recovered_in_bytes\" : 68891939,\n+ \"percent\" : \"87.1%\"\n+ },\n \"files\" : {\n \"total\" : 73,\n \"reused\" : 0,\n \"recovered\" : 69,\n \"percent\" : \"94.5%\"\n },\n- \"bytes\" : {\n- \"total\" : 79063092,\n- \"reused\" : 0,\n- \"recovered\" : 68891939,\n- \"percent\" : \"87.1%\"\n- },\n+ \"total_time\" : \"0s\",\n \"total_time_in_millis\" : 0\n },\n \"translog\" : {\n \"recovered\" : 0,\n+ \"total_time\" : \"0s\",\n \"total_time_in_millis\" : 0\n },\n \"start\" : {\n- \"check_index_time\" : 0,\n+ \"check_index_time\" : \"0s\",\n+ \"check_index_time_in_millis\" : 0,\n+ \"total_time\" : \"0s\",\n \"total_time_in_millis\" : 0\n }\n } ]\n@@ -80,9 +88,10 @@ In some cases a higher level of detail may be preferable. 
Setting \"detailed=true\n \n [source,js]\n --------------------------------------------------\n-curl -XGET http://localhost:9200/_recovery?pretty=true&detailed=true\n+curl -XGET http://localhost:9200/_recovery?pretty&human&detailed=true\n --------------------------------------------------\n \n+coming[1.5.0, this syntax was change to fix inconsistencies with other API]\n Response:\n \n [source,js]\n@@ -91,11 +100,14 @@ Response:\n \"index1\" : {\n \"shards\" : [ {\n \"id\" : 0,\n- \"type\" : \"gateway\",\n- \"stage\" : \"done\",\n+ \"type\" : \"GATEWAY\",\n+ \"stage\" : \"DONE\",\n \"primary\" : true,\n \"start_time\" : \"2014-02-24T12:38:06.349\",\n+ \"start_time_in_millis\" : \"1393245486349\",\n \"stop_time\" : \"2014-02-24T12:38:08.464\",\n+ \"stop_time_in_millis\" : \"1393245488464\",\n+ \"total_time\" : \"2.1s\",\n \"total_time_in_millis\" : 2115,\n \"source\" : {\n \"id\" : \"RGMdRc-yQWWKIBM4DGvwqQ\",\n@@ -110,10 +122,19 @@ Response:\n \"name\" : \"my_es_node\"\n },\n \"index\" : {\n+ \"size\" : {\n+ \"total\" : \"24.7mb\",\n+ \"total_in_bytes\" : 26001617,\n+ \"reused\" : \"24.7mb\",\n+ \"reused_in_bytes\" : 26001617,\n+ \"recovered\" : \"0b\",\n+ \"recovered_in_bytes\" : 0,\n+ \"percent\" : \"100.0%\"\n+ },\n \"files\" : {\n \"total\" : 26,\n \"reused\" : 26,\n- \"recovered\" : 26,\n+ \"recovered\" : 0,\n \"percent\" : \"100.0%\",\n \"details\" : [ {\n \"name\" : \"segments.gen\",\n@@ -131,20 +152,17 @@ Response:\n ...\n ]\n },\n- \"bytes\" : {\n- \"total\" : 26001617,\n- \"reused\" : 26001617,\n- \"recovered\" : 26001617,\n- \"percent\" : \"100.0%\"\n- },\n+ \"total_time\" : \"2ms\",\n \"total_time_in_millis\" : 2\n },\n \"translog\" : {\n \"recovered\" : 71,\n+ \"total_time\" : \"2.0s\",\n \"total_time_in_millis\" : 2025\n },\n \"start\" : {\n \"check_index_time\" : 0,\n+ \"total_time\" : \"88ms\",\n \"total_time_in_millis\" : 88\n }\n } ]",
"filename": "docs/reference/indices/recovery.asciidoc",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,8 @@\n \\d+\\.\\d+% \\s+ # files_percent\n \\d+ \\s+ # bytes\n \\d+\\.\\d+% \\s+ # bytes_percent\n+ \\d+ \\s+ # total_files\n+ \\d+ \\s+ # total_bytes\n \\n\n )+\n $/",
"filename": "rest-api-spec/test/cat.recovery/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -28,10 +28,10 @@\n - gte: { test_1.shards.0.index.files.reused: 0 }\n - gte: { test_1.shards.0.index.files.recovered: 0 }\n - match: { test_1.shards.0.index.files.percent: /^\\d+\\.\\d\\%$/ }\n- - gte: { test_1.shards.0.index.bytes.total: 0 }\n- - gte: { test_1.shards.0.index.bytes.reused: 0 }\n- - gte: { test_1.shards.0.index.bytes.recovered: 0 }\n- - match: { test_1.shards.0.index.bytes.percent: /^\\d+\\.\\d\\%$/ }\n+ - gte: { test_1.shards.0.index.size.total_in_bytes: 0 }\n+ - gte: { test_1.shards.0.index.size.reused_in_bytes: 0 }\n+ - gte: { test_1.shards.0.index.size.recovered_in_bytes: 0 }\n+ - match: { test_1.shards.0.index.size.percent: /^\\d+\\.\\d\\%$/ }\n - gte: { test_1.shards.0.translog.recovered: 0 }\n - gte: { test_1.shards.0.translog.total_time_in_millis: 0 }\n - gte: { test_1.shards.0.start.check_index_time_in_millis: 0 }",
"filename": "rest-api-spec/test/indices.recovery/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -28,9 +28,9 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n-import java.util.HashMap;\n \n /**\n * Information regarding the recovery state of indices and their associated shards.\n@@ -64,17 +64,6 @@ public boolean hasRecoveries() {\n return shardResponses.size() > 0;\n }\n \n- public void addShardRecovery(String index, ShardRecoveryResponse shardRecoveryResponse) {\n-\n- List<ShardRecoveryResponse> shardRecoveries = shardResponses.get(index);\n- if (shardRecoveries == null) {\n- shardRecoveries = new ArrayList<>();\n- shardResponses.put(index, shardRecoveries);\n- }\n-\n- shardRecoveries.add(shardRecoveryResponse);\n- }\n-\n public boolean detailed() {\n return detailed;\n }\n@@ -99,7 +88,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.startArray(\"shards\");\n for (ShardRecoveryResponse recoveryResponse : responses) {\n builder.startObject();\n- recoveryResponse.detailed(this.detailed);\n recoveryResponse.toXContent(builder, params);\n builder.endObject();\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/RecoveryResponse.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.admin.indices.recovery;\n \n import org.elasticsearch.action.support.broadcast.BroadcastShardOperationResponse;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -35,7 +36,6 @@\n public class ShardRecoveryResponse extends BroadcastShardOperationResponse implements ToXContent {\n \n RecoveryState recoveryState;\n- private boolean detailed = false;\n \n public ShardRecoveryResponse() { }\n \n@@ -58,23 +58,15 @@ public void recoveryState(RecoveryState recoveryState) {\n }\n \n /**\n- * Gets the recovery state information for the shard.\n+ * Gets the recovery state information for the shard. Null if shard wasn't recovered / recovery didn't start yet.\n *\n * @return Recovery state\n */\n+ @Nullable\n public RecoveryState recoveryState() {\n return recoveryState;\n }\n \n- public boolean detailed() {\n- return detailed;\n- }\n-\n- public void detailed(boolean detailed) {\n- this.detailed = detailed;\n- this.recoveryState.setDetailed(detailed);\n- }\n-\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n recoveryState.toXContent(builder, params);",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/ShardRecoveryResponse.java",
"status": "modified"
},
{
"diff": "@@ -34,13 +34,11 @@\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.gateway.IndexShardGatewayService;\n import org.elasticsearch.index.IndexService;\n-import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoveryState;\n-import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n@@ -58,15 +56,13 @@ public class TransportRecoveryAction extends\n TransportBroadcastOperationAction<RecoveryRequest, RecoveryResponse, TransportRecoveryAction.ShardRecoveryRequest, ShardRecoveryResponse> {\n \n private final IndicesService indicesService;\n- private final RecoveryTarget recoveryTarget;\n \n @Inject\n public TransportRecoveryAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n- TransportService transportService, IndicesService indicesService, RecoveryTarget recoveryTarget, ActionFilters actionFilters) {\n+ TransportService transportService, IndicesService indicesService, ActionFilters actionFilters) {\n \n super(settings, RecoveryAction.NAME, threadPool, clusterService, transportService, actionFilters);\n this.indicesService = indicesService;\n- this.recoveryTarget = recoveryTarget;\n }\n \n @Override\n@@ -100,6 +96,12 @@ protected RecoveryResponse newResponse(RecoveryRequest request, AtomicReferenceA\n } else {\n ShardRecoveryResponse recoveryResponse = (ShardRecoveryResponse) shardResponse;\n successfulShards++;\n+\n+ if (recoveryResponse.recoveryState() == null) {\n+ // recovery not yet started\n+ continue;\n+ }\n+\n String indexName = recoveryResponse.getIndex();\n List<ShardRecoveryResponse> responses = shardResponses.get(indexName);\n \n@@ -146,17 +148,6 @@ protected ShardRecoveryResponse shardOperation(ShardRecoveryRequest request) thr\n ShardRecoveryResponse shardRecoveryResponse = new ShardRecoveryResponse(request.shardId());\n \n RecoveryState state = indexShard.recoveryState();\n-\n- if (state == null) {\n- state = recoveryTarget.recoveryState(indexShard);\n- }\n-\n- if (state == null) {\n- IndexShardGatewayService gatewayService =\n- indexService.shardInjectorSafe(request.shardId().id()).getInstance(IndexShardGatewayService.class);\n- state = gatewayService.recoveryState();\n- }\n-\n shardRecoveryResponse.recoveryState(state);\n return shardRecoveryResponse;\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/TransportRecoveryAction.java",
"status": "modified"
},
{
"diff": "@@ -40,14 +40,12 @@\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n-import org.elasticsearch.index.gateway.IndexShardGatewayService;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoveryState;\n-import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n@@ -66,13 +64,10 @@ public class TransportIndicesStatusAction extends TransportBroadcastOperationAct\n \n private final IndicesService indicesService;\n \n- private final RecoveryTarget peerRecoveryTarget;\n-\n @Inject\n public TransportIndicesStatusAction(Settings settings, ThreadPool threadPool, ClusterService clusterService, TransportService transportService,\n- IndicesService indicesService, RecoveryTarget peerRecoveryTarget, ActionFilters actionFilters) {\n+ IndicesService indicesService, ActionFilters actionFilters) {\n super(settings, IndicesStatusAction.NAME, threadPool, clusterService, transportService, actionFilters);\n- this.peerRecoveryTarget = peerRecoveryTarget;\n this.indicesService = indicesService;\n }\n \n@@ -179,60 +174,56 @@ protected ShardStatus shardOperation(IndexShardStatusRequest request) throws Ela\n \n if (request.recovery) {\n // check on going recovery (from peer or gateway)\n- RecoveryState peerRecoveryState = indexShard.recoveryState();\n- if (peerRecoveryState == null) {\n- peerRecoveryState = peerRecoveryTarget.recoveryState(indexShard);\n- }\n- if (peerRecoveryState != null) {\n- PeerRecoveryStatus.Stage stage;\n- switch (peerRecoveryState.getStage()) {\n- case INIT:\n- stage = PeerRecoveryStatus.Stage.INIT;\n- break;\n- case INDEX:\n- stage = PeerRecoveryStatus.Stage.INDEX;\n- break;\n- case TRANSLOG:\n- stage = PeerRecoveryStatus.Stage.TRANSLOG;\n- break;\n- case FINALIZE:\n- stage = PeerRecoveryStatus.Stage.FINALIZE;\n- break;\n- case DONE:\n- stage = PeerRecoveryStatus.Stage.DONE;\n- break;\n- default:\n- stage = PeerRecoveryStatus.Stage.INIT;\n- }\n- shardStatus.peerRecoveryStatus = new PeerRecoveryStatus(stage, peerRecoveryState.getTimer().startTime(),\n- peerRecoveryState.getTimer().time(),\n- peerRecoveryState.getIndex().totalByteCount(),\n- peerRecoveryState.getIndex().reusedByteCount(),\n- peerRecoveryState.getIndex().recoveredByteCount(), peerRecoveryState.getTranslog().currentTranslogOperations());\n- }\n-\n- IndexShardGatewayService gatewayService = indexService.shardInjectorSafe(request.shardId().id()).getInstance(IndexShardGatewayService.class);\n- RecoveryState gatewayRecoveryState = gatewayService.recoveryState();\n- if (gatewayRecoveryState != null) {\n- GatewayRecoveryStatus.Stage stage;\n- switch (gatewayRecoveryState.getStage()) {\n- case INIT:\n- stage = GatewayRecoveryStatus.Stage.INIT;\n- break;\n- case INDEX:\n- stage = GatewayRecoveryStatus.Stage.INDEX;\n- break;\n- case TRANSLOG:\n- stage = GatewayRecoveryStatus.Stage.TRANSLOG;\n- break;\n- case DONE:\n- stage = GatewayRecoveryStatus.Stage.DONE;\n- break;\n- default:\n- stage = GatewayRecoveryStatus.Stage.INIT;\n+ RecoveryState recoveryState = indexShard.recoveryState();\n+ if (recoveryState != null) {\n+ final RecoveryState.Index index = recoveryState.getIndex();\n+ 
if (recoveryState.getType() == RecoveryState.Type.REPLICA || recoveryState.getType() == RecoveryState.Type.REPLICA) {\n+ PeerRecoveryStatus.Stage stage;\n+ switch (recoveryState.getStage()) {\n+ case INIT:\n+ stage = PeerRecoveryStatus.Stage.INIT;\n+ break;\n+ case INDEX:\n+ stage = PeerRecoveryStatus.Stage.INDEX;\n+ break;\n+ case TRANSLOG:\n+ stage = PeerRecoveryStatus.Stage.TRANSLOG;\n+ break;\n+ case FINALIZE:\n+ stage = PeerRecoveryStatus.Stage.FINALIZE;\n+ break;\n+ case DONE:\n+ stage = PeerRecoveryStatus.Stage.DONE;\n+ break;\n+ default:\n+ stage = PeerRecoveryStatus.Stage.INIT;\n+ }\n+ shardStatus.peerRecoveryStatus = new PeerRecoveryStatus(stage, recoveryState.getTimer().startTime(),\n+ recoveryState.getTimer().time(),\n+ index.totalBytes(),\n+ index.reusedBytes(),\n+ index.recoveredBytes(), recoveryState.getTranslog().currentTranslogOperations());\n+ } else if (recoveryState.getType() == RecoveryState.Type.GATEWAY) {\n+ GatewayRecoveryStatus.Stage stage;\n+ switch (recoveryState.getStage()) {\n+ case INIT:\n+ stage = GatewayRecoveryStatus.Stage.INIT;\n+ break;\n+ case INDEX:\n+ stage = GatewayRecoveryStatus.Stage.INDEX;\n+ break;\n+ case TRANSLOG:\n+ stage = GatewayRecoveryStatus.Stage.TRANSLOG;\n+ break;\n+ case DONE:\n+ stage = GatewayRecoveryStatus.Stage.DONE;\n+ break;\n+ default:\n+ stage = GatewayRecoveryStatus.Stage.INIT;\n+ }\n+ shardStatus.gatewayRecoveryStatus = new GatewayRecoveryStatus(stage, recoveryState.getTimer().startTime(), recoveryState.getTimer().time(),\n+ index.totalBytes(), index.reusedBytes(), index.recoveredBytes(), recoveryState.getTranslog().currentTranslogOperations());\n }\n- shardStatus.gatewayRecoveryStatus = new GatewayRecoveryStatus(stage, gatewayRecoveryState.getTimer().startTime(), gatewayRecoveryState.getTimer().time(),\n- gatewayRecoveryState.getIndex().totalByteCount(), gatewayRecoveryState.getIndex().reusedByteCount(), gatewayRecoveryState.getIndex().recoveredByteCount(), gatewayRecoveryState.getTranslog().currentTranslogOperations());\n }\n }\n return shardStatus;",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/status/TransportIndicesStatusAction.java",
"status": "modified"
},
{
"diff": "@@ -30,11 +30,6 @@ public interface IndexShardGateway extends IndexShardComponent, CloseableIndexCo\n \n String type();\n \n- /**\n- * The last / on going recovery status.\n- */\n- RecoveryState recoveryState();\n-\n /**\n * Recovers the state of the shard from the gateway.\n */",
"filename": "src/main/java/org/elasticsearch/index/gateway/IndexShardGateway.java",
"status": "modified"
},
{
"diff": "@@ -27,7 +27,6 @@\n import org.elasticsearch.index.CloseableIndexComponent;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.*;\n-import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.snapshots.IndexShardSnapshotAndRestoreService;\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -49,8 +48,6 @@ public class IndexShardGatewayService extends AbstractIndexShardComponent implem\n \n private final IndexShardSnapshotAndRestoreService snapshotService;\n \n- private RecoveryState recoveryState;\n-\n @Inject\n public IndexShardGatewayService(ShardId shardId, @IndexSettings Settings indexSettings, ThreadPool threadPool,\n IndexShard indexShard, IndexShardGateway shardGateway, IndexShardSnapshotAndRestoreService snapshotService, ClusterService clusterService) {\n@@ -59,8 +56,6 @@ public IndexShardGatewayService(ShardId shardId, @IndexSettings Settings indexSe\n this.indexShard = indexShard;\n this.shardGateway = shardGateway;\n this.snapshotService = snapshotService;\n- this.recoveryState = new RecoveryState(shardId);\n- this.recoveryState.setType(RecoveryState.Type.GATEWAY);\n this.clusterService = clusterService;\n }\n \n@@ -78,13 +73,6 @@ public static interface RecoveryListener {\n void onRecoveryFailed(IndexShardGatewayRecoveryException e);\n }\n \n- public RecoveryState recoveryState() {\n- if (recoveryState.getTimer().startTime() > 0 && recoveryState.getStage() != RecoveryState.Stage.DONE) {\n- recoveryState.getTimer().time(System.currentTimeMillis() - recoveryState.getTimer().startTime());\n- }\n- return recoveryState;\n- }\n-\n /**\n * Recovers the state of the shard from the gateway.\n */\n@@ -100,34 +88,30 @@ public void recover(final boolean indexShouldExists, final RecoveryListener list\n }\n try {\n if (indexShard.routingEntry().restoreSource() != null) {\n- indexShard.recovering(\"from snapshot\");\n+ indexShard.recovering(\"from snapshot\", RecoveryState.Type.SNAPSHOT, indexShard.routingEntry().restoreSource());\n } else {\n- indexShard.recovering(\"from gateway\");\n+ indexShard.recovering(\"from gateway\", RecoveryState.Type.GATEWAY, clusterService.localNode());\n }\n } catch (IllegalIndexShardStateException e) {\n // that's fine, since we might be called concurrently, just ignore this, we are already recovering\n listener.onIgnoreRecovery(\"already in recovering process, \" + e.getMessage());\n return;\n }\n \n+ final RecoveryState recoveryState = indexShard.recoveryState();\n+\n threadPool.generic().execute(new Runnable() {\n @Override\n public void run() {\n recoveryState.getTimer().startTime(System.currentTimeMillis());\n- recoveryState.setTargetNode(clusterService.localNode());\n recoveryState.setStage(RecoveryState.Stage.INIT);\n- recoveryState.setPrimary(indexShard.routingEntry().primary());\n \n try {\n if (indexShard.routingEntry().restoreSource() != null) {\n logger.debug(\"restoring from {} ...\", indexShard.routingEntry().restoreSource());\n- recoveryState.setType(RecoveryState.Type.SNAPSHOT);\n- recoveryState.setRestoreSource(indexShard.routingEntry().restoreSource());\n snapshotService.restore(recoveryState);\n } else {\n logger.debug(\"starting recovery from {} ...\", shardGateway);\n- recoveryState.setType(RecoveryState.Type.GATEWAY);\n- recoveryState.setSourceNode(clusterService.localNode());\n shardGateway.recover(indexShouldExists, recoveryState);\n }\n \n@@ -143,17 +127,23 @@ public void run() {\n // refresh the 
shard\n indexShard.refresh(\"post_gateway\");\n \n- recoveryState.getTimer().time(System.currentTimeMillis() - recoveryState.getTimer().startTime());\n recoveryState.setStage(RecoveryState.Stage.DONE);\n \n if (logger.isTraceEnabled()) {\n StringBuilder sb = new StringBuilder();\n sb.append(\"recovery completed from \").append(shardGateway).append(\", took [\").append(timeValueMillis(recoveryState.getTimer().time())).append(\"]\\n\");\n- sb.append(\" index : files [\").append(recoveryState.getIndex().totalFileCount()).append(\"] with total_size [\").append(new ByteSizeValue(recoveryState.getIndex().totalByteCount())).append(\"], took[\").append(TimeValue.timeValueMillis(recoveryState.getIndex().time())).append(\"]\\n\");\n- sb.append(\" : recovered_files [\").append(recoveryState.getIndex().numberOfRecoveredFiles()).append(\"] with total_size [\").append(new ByteSizeValue(recoveryState.getIndex().recoveredTotalSize())).append(\"]\\n\");\n- sb.append(\" : reusing_files [\").append(recoveryState.getIndex().reusedFileCount()).append(\"] with total_size [\").append(new ByteSizeValue(recoveryState.getIndex().reusedByteCount())).append(\"]\\n\");\n- sb.append(\" start : took [\").append(TimeValue.timeValueMillis(recoveryState.getStart().time())).append(\"], check_index [\").append(timeValueMillis(recoveryState.getStart().checkIndexTime())).append(\"]\\n\");\n- sb.append(\" translog : number_of_operations [\").append(recoveryState.getTranslog().currentTranslogOperations()).append(\"], took [\").append(TimeValue.timeValueMillis(recoveryState.getTranslog().time())).append(\"]\");\n+ RecoveryState.Index index = recoveryState.getIndex();\n+ sb.append(\" index : files [\").append(index.totalFileCount()).append(\"] with total_size [\")\n+ .append(new ByteSizeValue(index.totalBytes())).append(\"], took[\")\n+ .append(TimeValue.timeValueMillis(index.time())).append(\"]\\n\");\n+ sb.append(\" : recovered_files [\").append(index.recoveredFileCount()).append(\"] with total_size [\")\n+ .append(new ByteSizeValue(index.recoveredBytes())).append(\"]\\n\");\n+ sb.append(\" : reusing_files [\").append(index.reusedFileCount()).append(\"] with total_size [\")\n+ .append(new ByteSizeValue(index.reusedBytes())).append(\"]\\n\");\n+ sb.append(\" start : took [\").append(TimeValue.timeValueMillis(recoveryState.getStart().time())).append(\"], check_index [\")\n+ .append(timeValueMillis(recoveryState.getStart().checkIndexTime())).append(\"]\\n\");\n+ sb.append(\" translog : number_of_operations [\").append(recoveryState.getTranslog().currentTranslogOperations())\n+ .append(\"], took [\").append(TimeValue.timeValueMillis(recoveryState.getTranslog().time())).append(\"]\");\n logger.trace(sb.toString());\n } else if (logger.isDebugEnabled()) {\n logger.debug(\"recovery completed from [{}], took [{}]\", shardGateway, timeValueMillis(recoveryState.getTimer().time()));",
"filename": "src/main/java/org/elasticsearch/index/gateway/IndexShardGatewayService.java",
"status": "modified"
},
{
"diff": "@@ -34,15 +34,15 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.CancellableThreads;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.gateway.IndexShardGateway;\n import org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.AbstractIndexShardComponent;\n+import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.translog.*;\n import org.elasticsearch.index.translog.fs.FsTranslog;\n import org.elasticsearch.indices.recovery.RecoveryState;\n@@ -73,8 +73,6 @@ public class LocalIndexShardGateway extends AbstractIndexShardComponent implemen\n \n private final TimeValue waitForMappingUpdatePostRecovery;\n \n- private final RecoveryState recoveryState = new RecoveryState();\n-\n private volatile ScheduledFuture flushScheduler;\n private final TimeValue syncInterval;\n private final CancellableThreads cancellableThreads = new CancellableThreads();\n@@ -106,11 +104,6 @@ public String toString() {\n return \"local\";\n }\n \n- @Override\n- public RecoveryState recoveryState() {\n- return recoveryState;\n- }\n-\n @Override\n public void recover(boolean indexShouldExists, RecoveryState recoveryState) throws IndexShardGatewayRecoveryException {\n recoveryState.getIndex().startTime(System.currentTimeMillis());\n@@ -162,37 +155,29 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n throw new IndexShardGatewayRecoveryException(shardId(), \"failed to fetch index version after copying it over\", e);\n }\n recoveryState.getIndex().updateVersion(version);\n- recoveryState.getIndex().time(System.currentTimeMillis() - recoveryState.getIndex().startTime());\n+ recoveryState.getIndex().stopTime(System.currentTimeMillis());\n \n // since we recover from local, just fill the files and size\n try {\n- int numberOfFiles = 0;\n- long totalSizeInBytes = 0;\n+ RecoveryState.Index index = recoveryState.getIndex();\n for (String name : indexShard.store().directory().listAll()) {\n- numberOfFiles++;\n- long length = indexShard.store().directory().fileLength(name);\n- totalSizeInBytes += length;\n- recoveryState.getIndex().addFileDetail(name, length, length);\n+ final long length = indexShard.store().directory().fileLength(name);\n+ // we reuse all local files. 
no files a recovered\n+ index.addFileDetail(name, length, true);\n }\n- RecoveryState.Index index = recoveryState.getIndex();\n- index.totalFileCount(numberOfFiles);\n- index.totalByteCount(totalSizeInBytes);\n- index.reusedFileCount(numberOfFiles);\n- index.reusedByteCount(totalSizeInBytes);\n- index.recoveredFileCount(numberOfFiles);\n- index.recoveredByteCount(totalSizeInBytes);\n- } catch (Exception e) {\n- // ignore\n+ } catch (IOException e) {\n+ logger.debug(\"failed to list file details\", e);\n }\n \n- recoveryState.getStart().startTime(System.currentTimeMillis());\n+ final RecoveryState.Start stateStart = recoveryState.getStart();\n+ stateStart.startTime(System.currentTimeMillis());\n recoveryState.setStage(RecoveryState.Stage.START);\n if (translogId == -1) {\n // no translog files, bail\n indexShard.postRecovery(\"post recovery from gateway, no translog for id [\" + translogId + \"]\");\n // no index, just start the shard and bail\n- recoveryState.getStart().time(System.currentTimeMillis() - recoveryState.getStart().startTime());\n- recoveryState.getStart().checkIndexTime(indexShard.checkIndexTook());\n+ stateStart.stopTime(System.currentTimeMillis());\n+ stateStart.checkIndexTime(indexShard.checkIndexTook());\n return;\n }\n \n@@ -231,15 +216,15 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n // no translog files, bail\n indexShard.postRecovery(\"post recovery from gateway, no translog\");\n // no index, just start the shard and bail\n- recoveryState.getStart().time(System.currentTimeMillis() - recoveryState.getStart().startTime());\n- recoveryState.getStart().checkIndexTime(indexShard.checkIndexTook());\n+ stateStart.stopTime(System.currentTimeMillis());\n+ stateStart.checkIndexTime(0);\n return;\n }\n \n // recover from the translog file\n indexShard.performRecoveryPrepareForTranslog();\n- recoveryState.getStart().time(System.currentTimeMillis() - recoveryState.getStart().startTime());\n- recoveryState.getStart().checkIndexTime(indexShard.checkIndexTook());\n+ stateStart.stopTime(System.currentTimeMillis());\n+ stateStart.checkIndexTime(indexShard.checkIndexTook());\n \n recoveryState.getTranslog().startTime(System.currentTimeMillis());\n recoveryState.setStage(RecoveryState.Stage.TRANSLOG);",
"filename": "src/main/java/org/elasticsearch/index/gateway/local/LocalIndexShardGateway.java",
"status": "modified"
},
{
"diff": "@@ -24,11 +24,11 @@\n import org.elasticsearch.gateway.none.NoneGateway;\n import org.elasticsearch.index.gateway.IndexShardGateway;\n import org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException;\n-import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.AbstractIndexShardComponent;\n-import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n \n import java.io.IOException;\n \n@@ -39,8 +39,6 @@ public class NoneIndexShardGateway extends AbstractIndexShardComponent implement\n \n private final IndexShard indexShard;\n \n- private final RecoveryState recoveryState = new RecoveryState();\n-\n @Inject\n public NoneIndexShardGateway(ShardId shardId, @IndexSettings Settings indexSettings, IndexShard indexShard) {\n super(shardId, indexSettings);\n@@ -52,11 +50,6 @@ public String toString() {\n return \"_none_\";\n }\n \n- @Override\n- public RecoveryState recoveryState() {\n- return recoveryState;\n- }\n-\n @Override\n public void recover(boolean indexShouldExists, RecoveryState recoveryState) throws IndexShardGatewayRecoveryException {\n recoveryState.getIndex().startTime(System.currentTimeMillis());\n@@ -72,9 +65,10 @@ public void recover(boolean indexShouldExists, RecoveryState recoveryState) thro\n indexShard.store().decRef();\n }\n indexShard.postRecovery(\"post recovery from gateway\");\n- recoveryState.getIndex().time(System.currentTimeMillis() - recoveryState.getIndex().startTime());\n- recoveryState.getTranslog().startTime(System.currentTimeMillis());\n- recoveryState.getTranslog().time(System.currentTimeMillis() - recoveryState.getIndex().startTime());\n+ long time = System.currentTimeMillis();\n+ recoveryState.getIndex().stopTime(time);\n+ recoveryState.getTranslog().startTime(time);\n+ recoveryState.getTranslog().time(0);\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/gateway/none/NoneIndexShardGateway.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,9 @@\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n import org.elasticsearch.action.admin.indices.optimize.OptimizeRequest;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.RestoreSource;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.common.Booleans;\n@@ -40,6 +43,7 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.component.Lifecycle;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.lucene.Lucene;\n@@ -145,6 +149,7 @@ public class IndexShard extends AbstractIndexShardComponent {\n private final IndexService indexService;\n private final ShardSuggestService shardSuggestService;\n private final ShardFixedBitSetFilterCache shardFixedBitSetFilterCache;\n+ private final DiscoveryNode localNode;\n \n private final Object mutex = new Object();\n private final String checkIndexOnStartup;\n@@ -172,10 +177,11 @@ public class IndexShard extends AbstractIndexShardComponent {\n \n @Inject\n public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexSettingsService indexSettingsService, IndicesLifecycle indicesLifecycle, Store store, MergeSchedulerProvider mergeScheduler, Translog translog,\n- ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndexAliasesService indexAliasesService, ShardIndexingService indexingService, ShardGetService getService, ShardSearchService searchService, ShardIndexWarmerService shardWarmerService,\n- ShardFilterCache shardFilterCache, ShardFieldData shardFieldData, PercolatorQueriesRegistry percolatorQueriesRegistry, ShardPercolateService shardPercolateService, CodecService codecService,\n- ShardTermVectorService termVectorService, IndexFieldDataService indexFieldDataService, IndexService indexService, ShardSuggestService shardSuggestService, ShardQueryCache shardQueryCache, ShardFixedBitSetFilterCache shardFixedBitSetFilterCache,\n- @Nullable IndicesWarmer warmer, SnapshotDeletionPolicy deletionPolicy, AnalysisService analysisService, SimilarityService similarityService, MergePolicyProvider mergePolicyProvider, EngineFactory factory) {\n+ ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndexAliasesService indexAliasesService, ShardIndexingService indexingService, ShardGetService getService, ShardSearchService searchService, ShardIndexWarmerService shardWarmerService,\n+ ShardFilterCache shardFilterCache, ShardFieldData shardFieldData, PercolatorQueriesRegistry percolatorQueriesRegistry, ShardPercolateService shardPercolateService, CodecService codecService,\n+ ShardTermVectorService termVectorService, IndexFieldDataService indexFieldDataService, IndexService indexService, ShardSuggestService shardSuggestService, ShardQueryCache shardQueryCache, ShardFixedBitSetFilterCache shardFixedBitSetFilterCache,\n+ @Nullable IndicesWarmer warmer, SnapshotDeletionPolicy deletionPolicy, AnalysisService analysisService, SimilarityService similarityService, MergePolicyProvider mergePolicyProvider, 
EngineFactory factory,\n+ ClusterService clusterService) {\n super(shardId, indexSettings);\n Preconditions.checkNotNull(store, \"Store must be provided to the index shard\");\n Preconditions.checkNotNull(deletionPolicy, \"Snapshot deletion policy must be provided to the index shard\");\n@@ -206,6 +212,8 @@ public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexS\n this.codecService = codecService;\n this.shardSuggestService = shardSuggestService;\n this.shardFixedBitSetFilterCache = shardFixedBitSetFilterCache;\n+ assert clusterService.lifecycleState() == Lifecycle.State.STARTED; // otherwise localNode is still none;\n+ this.localNode = clusterService.localNode();\n state = IndexShardState.CREATED;\n this.refreshInterval = indexSettings.getAsTime(INDEX_REFRESH_INTERVAL, EngineConfig.DEFAULT_REFRESH_INTERVAL);\n indexSettingsService.addListener(applyRefreshSettings);\n@@ -215,7 +223,7 @@ public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexS\n /* create engine config */\n this.config = new EngineConfig(shardId,\n indexSettings.getAsBoolean(EngineConfig.INDEX_OPTIMIZE_AUTOGENERATED_ID_SETTING, false),\n- threadPool,indexingService,indexSettingsService, warmer, store, deletionPolicy, translog, mergePolicyProvider, mergeScheduler,\n+ threadPool, indexingService, indexSettingsService, warmer, store, deletionPolicy, translog, mergePolicyProvider, mergeScheduler,\n analysisService.defaultIndexAnalyzer(), similarityService.similarity(), codecService, failedEngineListener);\n \n \n@@ -224,7 +232,9 @@ public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexS\n this.checkIndexOnStartup = indexSettings.get(\"index.shard.check_on_startup\", \"false\");\n }\n \n- public MergeSchedulerProvider mergeScheduler() { return this.mergeScheduler; }\n+ public MergeSchedulerProvider mergeScheduler() {\n+ return this.mergeScheduler;\n+ }\n \n public Store store() {\n return this.store;\n@@ -342,10 +352,23 @@ public IndexShard routingEntry(ShardRouting newRouting) {\n return this;\n }\n \n+\n /**\n- * Marks the shard as recovering, fails with exception is recovering is not allowed to be set.\n+ * Marks the shard as recovering based on a remote or local node, fails with exception is recovering is not allowed to be set.\n */\n- public IndexShardState recovering(String reason) throws IndexShardStartedException,\n+ public IndexShardState recovering(String reason, RecoveryState.Type type, DiscoveryNode sourceNode) throws IndexShardStartedException,\n+ IndexShardRelocatedException, IndexShardRecoveringException, IndexShardClosedException {\n+ return recovering(reason, new RecoveryState(shardId, shardRouting.primary(), type, sourceNode, localNode));\n+ }\n+\n+ /**\n+ * Marks the shard as recovering based on a restore, fails with exception is recovering is not allowed to be set.\n+ */\n+ public IndexShardState recovering(String reason, RecoveryState.Type type, RestoreSource restoreSource) throws IndexShardStartedException {\n+ return recovering(reason, new RecoveryState(shardId, shardRouting.primary(), type, restoreSource, localNode));\n+ }\n+\n+ private IndexShardState recovering(String reason, RecoveryState recoveryState) throws IndexShardStartedException,\n IndexShardRelocatedException, IndexShardRecoveringException, IndexShardClosedException {\n synchronized (mutex) {\n if (state == IndexShardState.CLOSED) {\n@@ -363,6 +386,7 @@ public IndexShardState recovering(String reason) throws IndexShardStartedExcepti\n if (state == 
IndexShardState.POST_RECOVERY) {\n throw new IndexShardRecoveringException(shardId);\n }\n+ this.recoveryState = recoveryState;\n return changeState(IndexShardState.RECOVERING, reason);\n }\n }\n@@ -722,17 +746,13 @@ public void performRecoveryRestart() throws IOException {\n }\n \n /**\n- * The peer recovery state if this shard recovered from a peer shard, null o.w.\n+ * Returns the current {@link RecoveryState} if this shard is recovering or has been recovering.\n+ * Returns null if the recovery has not yet started or shard was not recovered (created via an API).\n */\n public RecoveryState recoveryState() {\n return this.recoveryState;\n }\n \n- public void performRecoveryFinalization(boolean withFlush, RecoveryState recoveryState) throws ElasticsearchException {\n- performRecoveryFinalization(withFlush);\n- this.recoveryState = recoveryState;\n- }\n-\n public void performRecoveryFinalization(boolean withFlush) throws ElasticsearchException {\n if (withFlush) {\n engine().flush();\n@@ -1095,7 +1115,7 @@ public Engine engine() {\n return engine;\n }\n \n- class ShardEngineFailListener implements Engine.FailedEngineListener {\n+ class ShardEngineFailListener implements Engine.FailedEngineListener {\n private final CopyOnWriteArrayList<Engine.FailedEngineListener> delegates = new CopyOnWriteArrayList<>();\n \n // called by the current engine",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -45,8 +45,8 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.xcontent.*;\n-import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;\n import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.snapshots.*;\n import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo;\n@@ -160,7 +160,7 @@ public void restore(SnapshotId snapshotId, ShardId shardId, ShardId snapshotShar\n try {\n recoveryState.getIndex().startTime(System.currentTimeMillis());\n snapshotContext.restore();\n- recoveryState.getIndex().time(System.currentTimeMillis() - recoveryState.getIndex().startTime());\n+ recoveryState.getIndex().stopTime(System.currentTimeMillis());\n } catch (Throwable e) {\n throw new IndexShardRestoreFailedException(shardId, \"failed to restore snapshot [\" + snapshotId.getSnapshot() + \"]\", e);\n }\n@@ -711,10 +711,6 @@ public void restore() throws IOException {\n BlobStoreIndexShardSnapshot snapshot = loadSnapshot();\n \n recoveryState.setStage(RecoveryState.Stage.INDEX);\n- int numberOfFiles = 0;\n- long totalSize = 0;\n- int numberOfReusedFiles = 0;\n- long reusedTotalSize = 0;\n final Store.MetadataSnapshot recoveryTargetMetadata;\n try {\n recoveryTargetMetadata = store.getMetadataOrEmpty();\n@@ -744,22 +740,16 @@ public void restore() throws IOException {\n final Store.RecoveryDiff diff = sourceMetaData.recoveryDiff(recoveryTargetMetadata);\n for (StoreFileMetaData md : diff.identical) {\n FileInfo fileInfo = fileInfos.get(md.name());\n- numberOfFiles++;\n- totalSize += md.length();\n- numberOfReusedFiles++;\n- reusedTotalSize += md.length();\n- recoveryState.getIndex().addReusedFileDetail(fileInfo.name(), fileInfo.length());\n+ recoveryState.getIndex().addFileDetail(fileInfo.name(), fileInfo.length(), true);\n if (logger.isTraceEnabled()) {\n logger.trace(\"[{}] [{}] not_recovering [{}] from [{}], exists in local store and is same\", shardId, snapshotId, fileInfo.physicalName(), fileInfo.name());\n }\n }\n \n for (StoreFileMetaData md : Iterables.concat(diff.different, diff.missing)) {\n FileInfo fileInfo = fileInfos.get(md.name());\n- numberOfFiles++;\n- totalSize += fileInfo.length();\n filesToRecover.add(fileInfo);\n- recoveryState.getIndex().addFileDetail(fileInfo.name(), fileInfo.length());\n+ recoveryState.getIndex().addFileDetail(fileInfo.name(), fileInfo.length(), false);\n if (logger.isTraceEnabled()) {\n if (md == null) {\n logger.trace(\"[{}] [{}] recovering [{}] from [{}], does not exists in local store\", shardId, snapshotId, fileInfo.physicalName(), fileInfo.name());\n@@ -769,16 +759,13 @@ public void restore() throws IOException {\n }\n }\n final RecoveryState.Index index = recoveryState.getIndex();\n- index.totalFileCount(numberOfFiles);\n- index.totalByteCount(totalSize);\n- index.reusedFileCount(numberOfReusedFiles);\n- index.reusedByteCount(reusedTotalSize);\n if (filesToRecover.isEmpty()) {\n logger.trace(\"no files to recover, all exists within the local store\");\n }\n \n if (logger.isTraceEnabled()) {\n- logger.trace(\"[{}] [{}] recovering_files [{}] with total_size [{}], reusing_files [{}] with reused_size [{}]\", shardId, snapshotId, numberOfFiles, new ByteSizeValue(totalSize), numberOfReusedFiles, new ByteSizeValue(reusedTotalSize));\n+ logger.trace(\"[{}] [{}] recovering_files [{}] with 
total_size [{}], reusing_files [{}] with reused_size [{}]\", shardId, snapshotId,\n+ index.totalRecoverFiles(), new ByteSizeValue(index.totalRecoverBytes()), index.reusedFileCount(), new ByteSizeValue(index.reusedFileCount()));\n }\n try {\n for (final FileInfo fileToRecover : filesToRecover) {\n@@ -828,16 +815,13 @@ public void restore() throws IOException {\n */\n private void restoreFile(final FileInfo fileInfo) throws IOException {\n boolean success = false;\n- RecoveryState.File file = recoveryState.getIndex().file(fileInfo.name());\n try (InputStream stream = new PartSliceStream(blobContainer, fileInfo)) {\n try (final IndexOutput indexOutput = store.createVerifyingOutput(fileInfo.physicalName(), fileInfo.metadata(), IOContext.DEFAULT)) {\n final byte[] buffer = new byte[BUFFER_SIZE];\n int length;\n while((length=stream.read(buffer))>0){\n indexOutput.writeBytes(buffer,0,length);\n- if (file != null) {\n- file.updateRecovered(length);\n- }\n+ recoveryState.getIndex().addRecoveredBytesToFile(fileInfo.name(), length);\n if (restoreRateLimiter != null) {\n rateLimiterListener.onRestorePause(restoreRateLimiter.pause(length));\n }\n@@ -852,7 +836,6 @@ private void restoreFile(final FileInfo fileInfo) throws IOException {\n \n }\n store.directory().sync(Collections.singleton(fileInfo.physicalName()));\n- recoveryState.getIndex().addRecoveredFileCount(1);\n success = true;\n } catch (CorruptIndexException ex) {\n try {",
"filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java",
"status": "modified"
},
{
"diff": "@@ -47,6 +47,7 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexShardAlreadyExistsException;\n import org.elasticsearch.index.IndexShardMissingException;\n import org.elasticsearch.index.aliases.IndexAlias;\n@@ -56,18 +57,20 @@\n import org.elasticsearch.index.gateway.IndexShardGatewayService;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.settings.IndexSettingsService;\n+import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoveryFailedException;\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n \n-import java.util.*;\n+import java.util.HashMap;\n+import java.util.Iterator;\n+import java.util.List;\n+import java.util.Map;\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.atomic.AtomicLong;\n ",
"filename": "src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java",
"status": "modified"
},
{
"diff": "@@ -58,9 +58,9 @@ public RecoveriesCollection(ESLogger logger, ThreadPool threadPool) {\n *\n * @return the id of the new recovery.\n */\n- public long startRecovery(IndexShard indexShard, DiscoveryNode sourceNode, RecoveryState state,\n+ public long startRecovery(IndexShard indexShard, DiscoveryNode sourceNode,\n RecoveryTarget.RecoveryListener listener, TimeValue activityTimeout) {\n- RecoveryStatus status = new RecoveryStatus(indexShard, sourceNode, state, listener);\n+ RecoveryStatus status = new RecoveryStatus(indexShard, sourceNode, listener);\n RecoveryStatus existingStatus = onGoingRecoveries.putIfAbsent(status.recoveryId(), status);\n assert existingStatus == null : \"found two RecoveryStatus instances with the same id\";\n logger.trace(\"{} started recovery from {}, id [{}]\", indexShard.shardId(), sourceNode, status.recoveryId());\n@@ -150,6 +150,10 @@ public StatusRef findRecoveryByShard(IndexShard indexShard) {\n return null;\n }\n \n+ /** the number of ongoing recoveries */\n+ public int size() {\n+ return onGoingRecoveries.size();\n+ }\n \n /** cancel all ongoing recoveries for the given shard. typically because the shards is closed */\n public void cancelRecoveriesForShard(ShardId shardId, String reason) {",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveriesCollection.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.indices.recovery;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.index.shard.ShardId;\n@@ -40,13 +41,20 @@ class RecoveryFilesInfoRequest extends TransportRequest {\n List<Long> phase1FileSizes;\n List<String> phase1ExistingFileNames;\n List<Long> phase1ExistingFileSizes;\n+\n+ @Deprecated\n long phase1TotalSize;\n+\n+ @Deprecated\n long phase1ExistingTotalSize;\n \n RecoveryFilesInfoRequest() {\n }\n \n- RecoveryFilesInfoRequest(long recoveryId, ShardId shardId, List<String> phase1FileNames, List<Long> phase1FileSizes, List<String> phase1ExistingFileNames, List<Long> phase1ExistingFileSizes, long phase1TotalSize, long phase1ExistingTotalSize) {\n+ RecoveryFilesInfoRequest(long recoveryId, ShardId shardId, List<String> phase1FileNames, List<Long> phase1FileSizes,\n+ List<String> phase1ExistingFileNames, List<Long> phase1ExistingFileSizes,\n+ // needed for BWC only\n+ @Deprecated long phase1TotalSize, @Deprecated long phase1ExistingTotalSize) {\n this.recoveryId = recoveryId;\n this.shardId = shardId;\n this.phase1FileNames = phase1FileNames;\n@@ -94,8 +102,12 @@ public void readFrom(StreamInput in) throws IOException {\n phase1ExistingFileSizes.add(in.readVLong());\n }\n \n- phase1TotalSize = in.readVLong();\n- phase1ExistingTotalSize = in.readVLong();\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ //phase1TotalSize\n+ in.readVLong();\n+ //phase1ExistingTotalSize\n+ in.readVLong();\n+ }\n }\n \n @Override\n@@ -124,7 +136,9 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeVLong(phase1ExistingFileSize);\n }\n \n- out.writeVLong(phase1TotalSize);\n- out.writeVLong(phase1ExistingTotalSize);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ out.writeVLong(phase1TotalSize);\n+ out.writeVLong(phase1ExistingTotalSize);\n+ }\n }\n }",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryFilesInfoRequest.java",
"status": "modified"
},
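The RecoveryFilesInfoRequest change keeps phase1TotalSize/phase1ExistingTotalSize on the wire only when talking to nodes older than 1.5.0; newer nodes derive the totals from the per-file lists instead. A simplified sketch of that backward-compatibility gate follows, with a plain int wire version and fixed-width longs standing in for Elasticsearch's StreamInput/StreamOutput version handling and variable-length encoding.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

final class FilesInfoWireFormatSketch {

    // stand-in for Version.V_1_5_0; the numeric value here is illustrative
    static final int V_1_5_0 = 1_050_099;

    /** Written only for pre-1.5.0 receivers; newer receivers recompute the totals. */
    static void writeLegacyTotals(DataOutput out, int receiverWireVersion,
                                  long phase1TotalSize, long phase1ExistingTotalSize) throws IOException {
        if (receiverWireVersion < V_1_5_0) {
            out.writeLong(phase1TotalSize);
            out.writeLong(phase1ExistingTotalSize);
        }
    }

    /** Read and discarded when the sender is pre-1.5.0; absent otherwise. */
    static void readLegacyTotals(DataInput in, int senderWireVersion) throws IOException {
        if (senderWireVersion < V_1_5_0) {
            in.readLong(); // phase1TotalSize, ignored
            in.readLong(); // phase1ExistingTotalSize, ignored
        }
    }
}
```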
{
"diff": "@@ -19,25 +19,27 @@\n \n package org.elasticsearch.indices.recovery;\n \n-import org.elasticsearch.common.xcontent.XContentBuilderString;\n-\n+import com.google.common.collect.ImmutableList;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.RestoreSource;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.io.stream.Streamable;\n+import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.io.stream.Streamable;\n+import org.elasticsearch.common.xcontent.XContentBuilderString;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.cluster.routing.RestoreSource;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n import java.util.List;\n-import java.util.concurrent.atomic.AtomicInteger;\n-import java.util.concurrent.atomic.AtomicLong;\n import java.util.Locale;\n+import java.util.Map;\n+import java.util.concurrent.atomic.AtomicInteger;\n \n /**\n * Keeps track of state related to shard recovery.\n@@ -114,24 +116,37 @@ public static Type fromId(byte id) throws ElasticsearchIllegalArgumentException\n \n private volatile Stage stage = Stage.INIT;\n \n- private Index index = new Index();\n- private Translog translog = new Translog();\n- private Start start = new Start();\n- private Timer timer = new Timer();\n+ private final Index index = new Index();\n+ private final Translog translog = new Translog();\n+ private final Start start = new Start();\n+ private final Timer timer = new Timer();\n \n private Type type;\n private ShardId shardId;\n private RestoreSource restoreSource;\n private DiscoveryNode sourceNode;\n private DiscoveryNode targetNode;\n \n- private boolean detailed = false;\n- private boolean primary = false;\n+ private volatile boolean primary = false;\n+\n+ private RecoveryState() {\n+ }\n+\n+ public RecoveryState(ShardId shardId, boolean primary, Type type, DiscoveryNode sourceNode, DiscoveryNode targetNode) {\n+ this(shardId, primary, type, sourceNode, null, targetNode);\n+ }\n \n- public RecoveryState() { }\n+ public RecoveryState(ShardId shardId, boolean primary, Type type, RestoreSource restoreSource, DiscoveryNode targetNode) {\n+ this(shardId, primary, type, null, restoreSource, targetNode);\n+ }\n \n- public RecoveryState(ShardId shardId) {\n+ private RecoveryState(ShardId shardId, boolean primary, Type type, @Nullable DiscoveryNode sourceNode, @Nullable RestoreSource restoreSource, DiscoveryNode targetNode) {\n this.shardId = shardId;\n+ this.primary = primary;\n+ this.type = type;\n+ this.sourceNode = sourceNode;\n+ this.restoreSource = restoreSource;\n+ this.targetNode = targetNode;\n }\n \n public ShardId getShardId() {\n@@ -170,43 +185,18 @@ public Type getType() {\n return type;\n }\n \n- public void setType(Type type) {\n- this.type = type;\n- }\n-\n- public void setSourceNode(DiscoveryNode sourceNode) {\n- this.sourceNode = sourceNode;\n- }\n-\n public DiscoveryNode getSourceNode() {\n return sourceNode;\n }\n \n- public void setTargetNode(DiscoveryNode targetNode) 
{\n- this.targetNode = targetNode;\n- }\n-\n public DiscoveryNode getTargetNode() {\n return targetNode;\n }\n \n- public void setRestoreSource(RestoreSource restoreSource) {\n- this.restoreSource = restoreSource;\n- }\n-\n public RestoreSource getRestoreSource() {\n return restoreSource;\n }\n \n- public void setDetailed(boolean detailed) {\n- this.detailed = detailed;\n- this.index.detailed(detailed);\n- }\n-\n- public void setPrimary(boolean primary) {\n- this.primary = primary;\n- }\n-\n public boolean getPrimary() {\n return primary;\n }\n@@ -221,7 +211,7 @@ public static RecoveryState readRecoveryState(StreamInput in) throws IOException\n public void readFrom(StreamInput in) throws IOException {\n timer.startTime(in.readVLong());\n timer.stopTime(in.readVLong());\n- timer.time(in.readVLong());\n+ timer.time = in.readVLong();\n type = Type.fromId(in.readByte());\n stage = Stage.fromId(in.readByte());\n shardId = ShardId.readShardId(in);\n@@ -230,10 +220,13 @@ public void readFrom(StreamInput in) throws IOException {\n if (in.readBoolean()) {\n sourceNode = DiscoveryNode.readNode(in);\n }\n- index = Index.readIndex(in);\n- translog = Translog.readTranslog(in);\n- start = Start.readStart(in);\n- detailed = in.readBoolean();\n+ index.readFrom(in);\n+ translog.readFrom(in);\n+ start.readFrom(in);\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ // used to the detailed flag\n+ in.readBoolean();\n+ }\n primary = in.readBoolean();\n }\n \n@@ -254,7 +247,10 @@ public void writeTo(StreamOutput out) throws IOException {\n index.writeTo(out);\n translog.writeTo(out);\n start.writeTo(out);\n- out.writeBoolean(detailed);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ // detailed flag\n+ out.writeBoolean(true);\n+ }\n out.writeBoolean(primary);\n }\n \n@@ -265,9 +261,11 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(Fields.TYPE, type.toString());\n builder.field(Fields.STAGE, stage.toString());\n builder.field(Fields.PRIMARY, primary);\n- builder.timeValueField(Fields.START_TIME_IN_MILLIS, Fields.START_TIME, timer.startTime);\n- builder.timeValueField(Fields.STOP_TIME_IN_MILLIS, Fields.STOP_TIME, timer.stopTime);\n- builder.timeValueField(Fields.TOTAL_TIME_IN_MILLIS, Fields.TOTAL_TIME, timer.time);\n+ builder.dateValueField(Fields.START_TIME_IN_MILLIS, Fields.START_TIME, timer.startTime);\n+ if (timer.stopTime > 0) {\n+ builder.dateValueField(Fields.STOP_TIME_IN_MILLIS, Fields.STOP_TIME, timer.stopTime);\n+ }\n+ builder.timeValueField(Fields.TOTAL_TIME_IN_MILLIS, Fields.TOTAL_TIME, timer.time());\n \n if (restoreSource != null) {\n builder.field(Fields.SOURCE);\n@@ -291,7 +289,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.endObject();\n \n builder.startObject(Fields.INDEX);\n- index.detailed(this.detailed);\n index.toXContent(builder, params);\n builder.endObject();\n \n@@ -327,21 +324,25 @@ static final class Fields {\n static final XContentBuilderString TRANSLOG = new XContentBuilderString(\"translog\");\n static final XContentBuilderString START = new XContentBuilderString(\"start\");\n static final XContentBuilderString RECOVERED = new XContentBuilderString(\"recovered\");\n+ static final XContentBuilderString RECOVERED_IN_BYTES = new XContentBuilderString(\"recovered_in_bytes\");\n static final XContentBuilderString CHECK_INDEX_TIME = new XContentBuilderString(\"check_index_time\");\n static final XContentBuilderString CHECK_INDEX_TIME_IN_MILLIS = new 
XContentBuilderString(\"check_index_time_in_millis\");\n static final XContentBuilderString LENGTH = new XContentBuilderString(\"length\");\n+ static final XContentBuilderString LENGTH_IN_BYTES = new XContentBuilderString(\"length_in_bytes\");\n static final XContentBuilderString FILES = new XContentBuilderString(\"files\");\n static final XContentBuilderString TOTAL = new XContentBuilderString(\"total\");\n+ static final XContentBuilderString TOTAL_IN_BYTES = new XContentBuilderString(\"total_in_bytes\");\n static final XContentBuilderString REUSED = new XContentBuilderString(\"reused\");\n+ static final XContentBuilderString REUSED_IN_BYTES = new XContentBuilderString(\"reused_in_bytes\");\n static final XContentBuilderString PERCENT = new XContentBuilderString(\"percent\");\n static final XContentBuilderString DETAILS = new XContentBuilderString(\"details\");\n- static final XContentBuilderString BYTES = new XContentBuilderString(\"bytes\");\n+ static final XContentBuilderString SIZE = new XContentBuilderString(\"size\");\n }\n \n public static class Timer {\n- private long startTime = 0;\n- private long time = 0;\n- private long stopTime = 0;\n+ private volatile long startTime = 0;\n+ private volatile long time = 0;\n+ private volatile long stopTime = 0;\n \n public long startTime() {\n return startTime;\n@@ -352,26 +353,29 @@ public void startTime(long startTime) {\n }\n \n public long time() {\n- return time;\n- }\n-\n- public void time(long time) {\n- this.time = time;\n+ if (startTime == 0) {\n+ return 0;\n+ }\n+ if (time > 0) {\n+ return time;\n+ }\n+ return Math.max(0, System.currentTimeMillis() - startTime);\n }\n \n public long stopTime() {\n return stopTime;\n }\n \n public void stopTime(long stopTime) {\n+ this.time = Math.max(0, stopTime - startTime);\n this.stopTime = stopTime;\n }\n }\n \n public static class Start implements ToXContent, Streamable {\n- private long startTime;\n- private long time;\n- private long checkIndexTime;\n+ private volatile long startTime;\n+ private volatile long time;\n+ private volatile long checkIndexTime;\n \n public long startTime() {\n return this.startTime;\n@@ -385,8 +389,8 @@ public long time() {\n return this.time;\n }\n \n- public void time(long time) {\n- this.time = time;\n+ public void stopTime(long time) {\n+ this.time = Math.max(0, time - startTime);\n }\n \n public long checkIndexTime() {\n@@ -397,12 +401,6 @@ public void checkIndexTime(long checkIndexTime) {\n this.checkIndexTime = checkIndexTime;\n }\n \n- public static Start readStart(StreamInput in) throws IOException {\n- Start start = new Start();\n- start.readFrom(in);\n- return start;\n- }\n-\n @Override\n public void readFrom(StreamInput in) throws IOException {\n startTime = in.readVLong();\n@@ -426,9 +424,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n \n public static class Translog implements ToXContent, Streamable {\n- private long startTime = 0;\n- private long time;\n- private volatile int currentTranslogOperations = 0;\n+ private volatile long startTime = 0;\n+ private volatile long time;\n+ private final AtomicInteger currentTranslogOperations = new AtomicInteger();\n \n public long startTime() {\n return this.startTime;\n@@ -447,59 +445,83 @@ public void time(long time) {\n }\n \n public void addTranslogOperations(int count) {\n- this.currentTranslogOperations += count;\n+ this.currentTranslogOperations.addAndGet(count);\n }\n \n public void incrementTranslogOperations() {\n- this.currentTranslogOperations++;\n+ 
this.currentTranslogOperations.incrementAndGet();\n }\n \n public int currentTranslogOperations() {\n- return this.currentTranslogOperations;\n- }\n-\n- public static Translog readTranslog(StreamInput in) throws IOException {\n- Translog translog = new Translog();\n- translog.readFrom(in);\n- return translog;\n+ return this.currentTranslogOperations.get();\n }\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n startTime = in.readVLong();\n time = in.readVLong();\n- currentTranslogOperations = in.readVInt();\n+ currentTranslogOperations.set(in.readVInt());\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n out.writeVLong(startTime);\n out.writeVLong(time);\n- out.writeVInt(currentTranslogOperations);\n+ out.writeVInt(currentTranslogOperations.get());\n }\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.field(Fields.RECOVERED, currentTranslogOperations);\n+ builder.field(Fields.RECOVERED, currentTranslogOperations.get());\n builder.timeValueField(Fields.TOTAL_TIME_IN_MILLIS, Fields.TOTAL_TIME, time);\n return builder;\n }\n }\n \n public static class File implements ToXContent, Streamable {\n- String name;\n- long length;\n- long recovered;\n+ private String name;\n+ private long length;\n+ private long recovered;\n+ private boolean reused;\n \n- public File() { }\n+ public File() {\n+ }\n \n- public File(String name, long length) {\n+ public File(String name, long length, boolean reused) {\n+ assert name != null;\n this.name = name;\n this.length = length;\n+ this.reused = reused;\n }\n \n- public void updateRecovered(long length) {\n- recovered += length;\n+ void addRecoveredBytes(long bytes) {\n+ assert reused == false : \"file is marked as reused, can't update recovered bytes\";\n+ assert bytes >= 0 : \"can't recovered negative bytes. got [\" + bytes + \"]\";\n+ recovered += bytes;\n+ }\n+\n+ /** file name * */\n+ public String name() {\n+ return name;\n+ }\n+\n+ /** file length * */\n+ public long length() {\n+ return length;\n+ }\n+\n+ /** number of bytes recovered for this file (so far). 
0 if the file is reused * */\n+ public long recovered() {\n+ return recovered;\n+ }\n+\n+ /** returns true if the file is reused from a local copy */\n+ public boolean reused() {\n+ return reused;\n+ }\n+\n+ boolean fullyRecovered() {\n+ return reused == false && length == recovered;\n }\n \n public static File readFile(StreamInput in) throws IOException {\n@@ -513,91 +535,71 @@ public void readFrom(StreamInput in) throws IOException {\n name = in.readString();\n length = in.readVLong();\n recovered = in.readVLong();\n+ if (in.getVersion().onOrAfter(Version.V_1_5_0)) {\n+ reused = in.readBoolean();\n+ } else {\n+ reused = recovered > 0;\n+ }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n out.writeString(name);\n out.writeVLong(length);\n out.writeVLong(recovered);\n+ if (out.getVersion().onOrAfter(Version.V_1_5_0)) {\n+ out.writeBoolean(reused);\n+ }\n }\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject();\n builder.field(Fields.NAME, name);\n- builder.field(Fields.LENGTH, length);\n- builder.field(Fields.RECOVERED, recovered);\n+ builder.byteSizeField(Fields.LENGTH_IN_BYTES, Fields.LENGTH, length);\n+ builder.field(Fields.REUSED, reused);\n+ builder.byteSizeField(Fields.RECOVERED_IN_BYTES, Fields.RECOVERED, length);\n builder.endObject();\n return builder;\n }\n- }\n \n- public static class Index implements ToXContent, Streamable {\n-\n- private long startTime = 0;\n- private long time = 0;\n-\n- private List<File> fileDetails = new ArrayList<>();\n- private List<File> reusedFileDetails = new ArrayList<>();\n-\n- private long version = -1;\n-\n- private boolean detailed = false;\n-\n- private int totalFileCount = 0;\n- private int reusedFileCount = 0;\n- private AtomicInteger recoveredFileCount = new AtomicInteger();\n-\n- private long totalByteCount = 0;\n- private long reusedByteCount = 0;\n- private AtomicLong recoveredByteCount = new AtomicLong();\n-\n- public List<File> fileDetails() {\n- return fileDetails;\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (obj instanceof File) {\n+ File other = (File) obj;\n+ return name.equals(other.name) && length == other.length() && reused == other.reused() && recovered == other.recovered();\n+ }\n+ return false;\n }\n \n- public List<File> reusedFileDetails() {\n- return reusedFileDetails;\n+ @Override\n+ public String toString() {\n+ return \"file (name [\" + name + \"], reused [\" + reused + \"], length [\" + length + \"], recovered [\" + recovered + \"])\";\n }\n+ }\n \n- public void addFileDetail(String name, long length) {\n- fileDetails.add(new File(name, length));\n- }\n+ public static class Index implements ToXContent, Streamable {\n \n- public void addFileDetail(String name, long length, long recovered) {\n- File file = new File(name, length);\n- file.recovered = recovered;\n- fileDetails.add(file);\n- }\n+ private volatile long startTime = 0;\n+ private volatile long time = 0;\n \n- public void addFileDetails(List<String> names, List<Long> lengths) {\n- for (int i = 0; i < names.size(); i++) {\n- fileDetails.add(new File(names.get(i), lengths.get(i)));\n- }\n- }\n+ private Map<String, File> fileDetails = ConcurrentCollections.newConcurrentMap();\n+\n+ private volatile long version = -1;\n \n- public void addReusedFileDetail(String name, long length) {\n- reusedFileDetails.add(new File(name, length));\n+ public List<File> fileDetails() {\n+ return ImmutableList.copyOf(fileDetails.values());\n }\n \n- public 
void addReusedFileDetails(List<String> names, List<Long> lengths) {\n- for (int i = 0; i < names.size(); i++) {\n- reusedFileDetails.add(new File(names.get(i), lengths.get(i)));\n- }\n+ public void addFileDetail(String name, long length, boolean reused) {\n+ File file = new File(name, length, reused);\n+ File existing = fileDetails.put(name, file);\n+ assert existing == null : \"file [\" + name + \"] is already reported\";\n }\n \n- public File file(String name) {\n- for (File file : fileDetails) {\n- if (file.name.equals(name))\n- return file;\n- }\n- for (File file : reusedFileDetails) {\n- if (file.name.equals(name)) {\n- return file;\n- }\n- }\n- return null;\n+ public void addRecoveredBytesToFile(String name, long bytes) {\n+ File file = fileDetails.get(name);\n+ file.addRecoveredBytes(bytes);\n }\n \n public long startTime() {\n@@ -612,195 +614,247 @@ public long time() {\n return this.time;\n }\n \n- public void time(long time) {\n- this.time = time;\n+ public void stopTime(long stopTime) {\n+ assert stopTime >= 0;\n+ this.time = Math.max(0, stopTime - startTime);\n }\n \n public long version() {\n return this.version;\n }\n \n+ /** total number of files that are part of this recovery, both re-used and recovered */\n public int totalFileCount() {\n- return totalFileCount;\n- }\n-\n- public void totalFileCount(int totalFileCount) {\n- this.totalFileCount = totalFileCount;\n+ return fileDetails.size();\n }\n \n- public int recoveredFileCount() {\n- return recoveredFileCount.get();\n- }\n-\n- public void recoveredFileCount(int recoveredFileCount) {\n- this.recoveredFileCount.set(recoveredFileCount);\n+ /** total number of files to be recovered (potentially not yet done) */\n+ public int totalRecoverFiles() {\n+ int total = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.reused() == false) {\n+ total++;\n+ }\n+ }\n+ return total;\n }\n \n- public void addRecoveredFileCount(int updatedCount) {\n- this.recoveredFileCount.addAndGet(updatedCount);\n- }\n \n- public float percentFilesRecovered() {\n- if (totalFileCount == 0) { // indicates we are still in init phase\n+ /** number of file that were recovered (excluding on ongoing files) */\n+ public int recoveredFileCount() {\n+ int count = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.fullyRecovered()) {\n+ count++;\n+ }\n+ }\n+ return count;\n+ }\n+\n+ /** percent of recovered (i.e., not reused) files out of the total files to be recovered */\n+ public float recoveredFilesPercent() {\n+ int total = 0;\n+ int recovered = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.reused() == false) {\n+ total++;\n+ if (file.fullyRecovered()) {\n+ recovered++;\n+ }\n+ }\n+ }\n+ if (total == 0 && fileDetails.size() == 0) { // indicates we are still in init phase\n return 0.0f;\n }\n- final int filesRecovered = recoveredFileCount.get();\n- if ((totalFileCount - filesRecovered) == 0) {\n+ if (total == recovered) {\n return 100.0f;\n } else {\n- float result = 100.0f * (filesRecovered / (float)totalFileCount);\n+ float result = 100.0f * (recovered / (float) total);\n return result;\n }\n }\n \n- public int numberOfRecoveredFiles() {\n- return totalFileCount - reusedFileCount;\n- }\n-\n- public long totalByteCount() {\n- return this.totalByteCount;\n- }\n-\n- public void totalByteCount(long totalByteCount) {\n- this.totalByteCount = totalByteCount;\n- }\n-\n- public long recoveredByteCount() {\n- return recoveredByteCount.longValue();\n+ /** total number of bytes in th shard */\n+ public long totalBytes() {\n+ long total = 
0;\n+ for (File file : fileDetails.values()) {\n+ total += file.length();\n+ }\n+ return total;\n }\n \n- public void recoveredByteCount(long recoveredByteCount) {\n- this.recoveredByteCount.set(recoveredByteCount);\n+ /** total number of bytes recovered so far, including both existing and reused */\n+ public long recoveredBytes() {\n+ long recovered = 0;\n+ for (File file : fileDetails.values()) {\n+ recovered += file.recovered();\n+ }\n+ return recovered;\n }\n \n- public void addRecoveredByteCount(long updatedSize) {\n- recoveredByteCount.addAndGet(updatedSize);\n+ /** total bytes of files to be recovered (potentially not yet done) */\n+ public long totalRecoverBytes() {\n+ long total = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.reused() == false) {\n+ total += file.length();\n+ }\n+ }\n+ return total;\n }\n \n- public long numberOfRecoveredBytes() {\n- return recoveredByteCount.get() - reusedByteCount;\n+ public long totalReuseBytes() {\n+ long total = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.reused()) {\n+ total += file.length();\n+ }\n+ }\n+ return total;\n }\n \n- public float percentBytesRecovered() {\n- if (totalByteCount == 0) { // indicates we are still in init phase\n+ /** percent of bytes recovered out of total files bytes *to be* recovered */\n+ public float recoveredBytesPercent() {\n+ long total = 0;\n+ long recovered = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.reused() == false) {\n+ total += file.length();\n+ recovered += file.recovered();\n+ }\n+ }\n+ if (total == 0 && fileDetails.size() == 0) {\n+ // indicates we are still in init phase\n return 0.0f;\n }\n- final long recByteCount = recoveredByteCount.get();\n- if ((totalByteCount - recByteCount) == 0) {\n+ if (total == recovered) {\n return 100.0f;\n } else {\n- float result = 100.0f * (recByteCount / (float) totalByteCount);\n- return result;\n+ return 100.0f * recovered / total;\n }\n }\n \n public int reusedFileCount() {\n- return reusedFileCount;\n- }\n-\n- public void reusedFileCount(int reusedFileCount) {\n- this.reusedFileCount = reusedFileCount;\n- }\n-\n- public long reusedByteCount() {\n- return this.reusedByteCount;\n- }\n-\n- public void reusedByteCount(long reusedByteCount) {\n- this.reusedByteCount = reusedByteCount;\n+ int reused = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.reused()) {\n+ reused++;\n+ }\n+ }\n+ return reused;\n }\n \n- public long recoveredTotalSize() {\n- return totalByteCount - reusedByteCount;\n+ public long reusedBytes() {\n+ long reused = 0;\n+ for (File file : fileDetails.values()) {\n+ if (file.reused()) {\n+ reused += file.length();\n+ }\n+ }\n+ return reused;\n }\n \n public void updateVersion(long version) {\n this.version = version;\n }\n \n- public void detailed(boolean detailed) {\n- this.detailed = detailed;\n- }\n-\n- public static Index readIndex(StreamInput in) throws IOException {\n- Index index = new Index();\n- index.readFrom(in);\n- return index;\n- }\n-\n @Override\n public void readFrom(StreamInput in) throws IOException {\n startTime = in.readVLong();\n time = in.readVLong();\n- totalFileCount = in.readVInt();\n- totalByteCount = in.readVLong();\n- reusedFileCount = in.readVInt();\n- reusedByteCount = in.readVLong();\n- recoveredFileCount = new AtomicInteger(in.readVInt());\n- recoveredByteCount = new AtomicLong(in.readVLong());\n- int size = in.readVInt();\n- fileDetails = new ArrayList<>(size);\n- for (int i = 0; i < size; i++) {\n- fileDetails.add(File.readFile(in));\n- }\n- size = 
in.readVInt();\n- reusedFileDetails = new ArrayList<>(size);\n- for (int i = 0; i < size; i++) {\n- reusedFileDetails.add(File.readFile(in));\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ // This may result in skewed reports as we didn't report all files in advance, relying on this totals\n+ in.readVInt(); // totalFileCount\n+ in.readVLong(); // totalBytes\n+ in.readVInt(); // reusedFileCount\n+ in.readVLong(); // reusedByteCount\n+ in.readVInt(); // recoveredFileCount\n+ in.readVLong(); // recoveredByteCount\n+ int size = in.readVInt();\n+ for (int i = 0; i < size; i++) {\n+ File file = File.readFile(in);\n+ fileDetails.put(file.name, file);\n+ }\n+ size = in.readVInt();\n+ for (int i = 0; i < size; i++) {\n+ File file = File.readFile(in);\n+ fileDetails.put(file.name, file);\n+ }\n+ } else {\n+ int size = in.readVInt();\n+ for (int i = 0; i < size; i++) {\n+ File file = File.readFile(in);\n+ fileDetails.put(file.name, file);\n+ }\n }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n out.writeVLong(startTime);\n out.writeVLong(time);\n- out.writeVInt(totalFileCount);\n- out.writeVLong(totalByteCount);\n- out.writeVInt(reusedFileCount);\n- out.writeVLong(reusedByteCount);\n- out.writeVInt(recoveredFileCount.get());\n- out.writeVLong(recoveredByteCount.get());\n- out.writeVInt(fileDetails.size());\n- for (File file : fileDetails) {\n- file.writeTo(out);\n- }\n- out.writeVInt(reusedFileDetails.size());\n- for (File file : reusedFileDetails) {\n- file.writeTo(out);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ out.writeVInt(totalFileCount());\n+ out.writeVLong(totalBytes());\n+ out.writeVInt(reusedFileCount());\n+ out.writeVLong(reusedBytes());\n+ out.writeVInt(recoveredFileCount());\n+ out.writeVLong(recoveredBytes());\n+ final File[] files = fileDetails.values().toArray(new File[0]);\n+ int nonReusedCount = 0;\n+ int reusedCount = 0;\n+ for (File file : files) {\n+ if (file.reused()) {\n+ reusedCount++;\n+ } else {\n+ nonReusedCount++;\n+ }\n+ }\n+ out.writeVInt(nonReusedCount);\n+ for (File file : files) {\n+ if (file.reused() == false) {\n+ file.writeTo(out);\n+ }\n+ }\n+ out.writeVInt(reusedCount);\n+ for (File file : files) {\n+ if (file.reused()) {\n+ file.writeTo(out);\n+ }\n+ }\n+ } else {\n+ final File[] files = fileDetails.values().toArray(new File[0]);\n+ out.writeVInt(files.length);\n+ for (File file : files) {\n+ file.writeTo(out);\n+ }\n }\n }\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n-\n- int filesRecovered = recoveredFileCount.get();\n- long bytesRecovered = recoveredByteCount.get();\n+ // stream size first, as it matters more and the files section can be long\n+ builder.startObject(Fields.SIZE);\n+ builder.byteSizeField(Fields.TOTAL_IN_BYTES, Fields.TOTAL, totalBytes());\n+ builder.byteSizeField(Fields.REUSED_IN_BYTES, Fields.REUSED, totalBytes());\n+ builder.byteSizeField(Fields.RECOVERED_IN_BYTES, Fields.RECOVERED, recoveredBytes());\n+ builder.field(Fields.PERCENT, String.format(Locale.ROOT, \"%1.1f%%\", recoveredBytesPercent()));\n+ builder.endObject();\n \n builder.startObject(Fields.FILES);\n- builder.field(Fields.TOTAL, totalFileCount);\n- builder.field(Fields.REUSED, reusedFileCount);\n- builder.field(Fields.RECOVERED, filesRecovered);\n- builder.field(Fields.PERCENT, String.format(Locale.ROOT, \"%1.1f%%\", percentFilesRecovered()));\n- if (detailed) {\n+ builder.field(Fields.TOTAL, totalFileCount());\n+ builder.field(Fields.REUSED, 
reusedFileCount());\n+ builder.field(Fields.RECOVERED, recoveredFileCount());\n+ builder.field(Fields.PERCENT, String.format(Locale.ROOT, \"%1.1f%%\", recoveredFilesPercent()));\n+ if (params.paramAsBoolean(\"details\", false)) {\n builder.startArray(Fields.DETAILS);\n- for (File file : fileDetails) {\n- file.toXContent(builder, params);\n- }\n- for (File file : reusedFileDetails) {\n+ for (File file : fileDetails.values()) {\n file.toXContent(builder, params);\n }\n builder.endArray();\n }\n builder.endObject();\n-\n- builder.startObject(Fields.BYTES);\n- builder.field(Fields.TOTAL, totalByteCount);\n- builder.field(Fields.REUSED, reusedByteCount);\n- builder.field(Fields.RECOVERED, bytesRecovered);\n- builder.field(Fields.PERCENT, String.format(Locale.ROOT, \"%1.1f%%\", percentBytesRecovered()));\n- builder.endObject();\n builder.timeValueField(Fields.TOTAL_TIME_IN_MILLIS, Fields.TOTAL_TIME, time);\n-\n return builder;\n }\n ",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java",
"status": "modified"
},
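The core of the RecoveryState.java rewrite is that a single concurrent map of per-file details (each carrying a reused flag) replaces the old independent counters, with every total and percentage derived on demand. The sketch below mirrors that bookkeeping with the same method names, but it is a standalone illustration, not the Elasticsearch class.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class IndexProgressSketch {

    static final class FileDetail {
        final String name;
        final long length;
        final boolean reused;
        volatile long recovered; // single writer per file, readers may poll concurrently

        FileDetail(String name, long length, boolean reused) {
            this.name = name;
            this.length = length;
            this.reused = reused;
        }
    }

    private final Map<String, FileDetail> fileDetails = new ConcurrentHashMap<>();

    void addFileDetail(String name, long length, boolean reused) {
        FileDetail existing = fileDetails.put(name, new FileDetail(name, length, reused));
        assert existing == null : "file [" + name + "] is already reported";
    }

    void addRecoveredBytesToFile(String name, long bytes) {
        fileDetails.get(name).recovered += bytes;
    }

    /** total bytes in the shard, reused and recovered alike */
    long totalBytes() {
        return fileDetails.values().stream().mapToLong(f -> f.length).sum();
    }

    /** bytes that actually have to be copied, i.e. not reused from the local store */
    long totalRecoverBytes() {
        return fileDetails.values().stream().filter(f -> f.reused == false).mapToLong(f -> f.length).sum();
    }

    long recoveredBytes() {
        return fileDetails.values().stream().mapToLong(f -> f.recovered).sum();
    }

    float recoveredBytesPercent() {
        long total = 0;
        long recovered = 0;
        for (FileDetail f : fileDetails.values()) {
            if (f.reused == false) {
                total += f.length;
                recovered += f.recovered;
            }
        }
        if (fileDetails.isEmpty()) {
            return 0.0f;   // init phase: no files reported yet
        }
        if (total == recovered) {
            return 100.0f; // also covers the "everything was reused" case
        }
        return 100.0f * recovered / total;
    }
}
```

Because nothing is cached, a status request served mid-recovery can no longer report totals that disagree with the per-file details, and the old detailed flag on the object becomes a plain `details` request parameter (params.paramAsBoolean("details", false)) at serialization time.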
{
"diff": "@@ -57,7 +57,6 @@ public class RecoveryStatus extends AbstractRefCounted {\n private final ShardId shardId;\n private final long recoveryId;\n private final IndexShard indexShard;\n- private final RecoveryState state;\n private final DiscoveryNode sourceNode;\n private final String tempFilePrefix;\n private final Store store;\n@@ -73,7 +72,7 @@ public class RecoveryStatus extends AbstractRefCounted {\n // last time this status was accessed\n private volatile long lastAccessTime = System.nanoTime();\n \n- public RecoveryStatus(IndexShard indexShard, DiscoveryNode sourceNode, RecoveryState state, RecoveryTarget.RecoveryListener listener) {\n+ public RecoveryStatus(IndexShard indexShard, DiscoveryNode sourceNode, RecoveryTarget.RecoveryListener listener) {\n \n super(\"recovery_status\");\n this.recoveryId = idGenerator.incrementAndGet();\n@@ -82,9 +81,9 @@ public RecoveryStatus(IndexShard indexShard, DiscoveryNode sourceNode, RecoveryS\n this.indexShard = indexShard;\n this.sourceNode = sourceNode;\n this.shardId = indexShard.shardId();\n- this.state = state;\n- this.state.getTimer().startTime(System.currentTimeMillis());\n- this.tempFilePrefix = RECOVERY_PREFIX + this.state.getTimer().startTime() + \".\";\n+ final RecoveryState.Timer timer = this.indexShard.recoveryState().getTimer();\n+ timer.startTime(System.currentTimeMillis());\n+ this.tempFilePrefix = RECOVERY_PREFIX + timer.startTime() + \".\";\n this.store = indexShard.store();\n // make sure the store is not released until we are done.\n store.incRef();\n@@ -110,7 +109,7 @@ public DiscoveryNode sourceNode() {\n }\n \n public RecoveryState state() {\n- return state;\n+ return indexShard.recoveryState();\n }\n \n public CancellableThreads CancellableThreads() {\n@@ -133,11 +132,11 @@ public Store store() {\n }\n \n public void stage(RecoveryState.Stage stage) {\n- state.setStage(stage);\n+ state().setStage(stage);\n }\n \n public RecoveryState.Stage stage() {\n- return state.getStage();\n+ return state().getStage();\n }\n \n public Store.LegacyChecksums legacyChecksums() {\n@@ -178,7 +177,7 @@ public void cancel(String reason) {\n public void fail(RecoveryFailedException e, boolean sendShardFailure) {\n if (finished.compareAndSet(false, true)) {\n try {\n- listener.onRecoveryFailure(state, e, sendShardFailure);\n+ listener.onRecoveryFailure(state(), e, sendShardFailure);\n } finally {\n try {\n cancellableThreads.cancel(\"failed recovery [\" + e.getMessage() + \"]\");\n@@ -196,7 +195,7 @@ public void markAsDone() {\n assert tempFileNames.isEmpty() : \"not all temporary files are renamed\";\n // release the initial reference. recovery files will be cleaned as soon as ref count goes to zero, potentially now\n decRef();\n- listener.onRecoveryDone(state);\n+ listener.onRecoveryDone(state());\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java",
"status": "modified"
},
{
"diff": "@@ -117,9 +117,6 @@ public RecoveryState recoveryState(IndexShard indexShard) {\n return null;\n }\n final RecoveryStatus recoveryStatus = statusRef.status();\n- if (recoveryStatus.state().getTimer().startTime() > 0 && recoveryStatus.stage() != RecoveryState.Stage.DONE) {\n- recoveryStatus.state().getTimer().time(System.currentTimeMillis() - recoveryStatus.state().getTimer().startTime());\n- }\n return recoveryStatus.state();\n } catch (Exception e) {\n // shouldn't really happen, but have to be here due to auto close\n@@ -129,19 +126,14 @@ public RecoveryState recoveryState(IndexShard indexShard) {\n \n public void startRecovery(final IndexShard indexShard, final RecoveryState.Type recoveryType, final DiscoveryNode sourceNode, final RecoveryListener listener) {\n try {\n- indexShard.recovering(\"from \" + sourceNode);\n+ indexShard.recovering(\"from \" + sourceNode, recoveryType, sourceNode);\n } catch (IllegalIndexShardStateException e) {\n // that's fine, since we might be called concurrently, just ignore this, we are already recovering\n logger.debug(\"{} ignore recovery. already in recovering process, {}\", indexShard.shardId(), e.getMessage());\n return;\n }\n // create a new recovery status, and process...\n- RecoveryState recoveryState = new RecoveryState(indexShard.shardId());\n- recoveryState.setType(recoveryType);\n- recoveryState.setSourceNode(sourceNode);\n- recoveryState.setTargetNode(clusterService.localNode());\n- recoveryState.setPrimary(indexShard.routingEntry().primary());\n- final long recoveryId = onGoingRecoveries.startRecovery(indexShard, sourceNode, recoveryState, listener, recoverySettings.activityTimeout());\n+ final long recoveryId = onGoingRecoveries.startRecovery(indexShard, sourceNode, listener, recoverySettings.activityTimeout());\n threadPool.generic().execute(new RecoveryRunner(recoveryId));\n \n }\n@@ -309,8 +301,7 @@ public String executor() {\n public void messageReceived(RecoveryFinalizeRecoveryRequest request, TransportChannel channel) throws Exception {\n try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n final RecoveryStatus recoveryStatus = statusRef.status();\n- recoveryStatus.indexShard().performRecoveryFinalization(false, recoveryStatus.state());\n- recoveryStatus.state().getTimer().time(System.currentTimeMillis() - recoveryStatus.state().getTimer().startTime());\n+ recoveryStatus.indexShard().performRecoveryFinalization(false);\n recoveryStatus.stage(RecoveryState.Stage.DONE);\n }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n@@ -361,12 +352,12 @@ public void messageReceived(RecoveryFilesInfoRequest request, TransportChannel c\n try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n final RecoveryStatus recoveryStatus = statusRef.status();\n final RecoveryState.Index index = recoveryStatus.state().getIndex();\n- index.addFileDetails(request.phase1FileNames, request.phase1FileSizes);\n- index.addReusedFileDetails(request.phase1ExistingFileNames, request.phase1ExistingFileSizes);\n- index.totalByteCount(request.phase1TotalSize);\n- index.totalFileCount(request.phase1FileNames.size() + request.phase1ExistingFileNames.size());\n- index.reusedByteCount(request.phase1ExistingTotalSize);\n- index.reusedFileCount(request.phase1ExistingFileNames.size());\n+ for (int i = 0; i < request.phase1ExistingFileNames.size(); i++) {\n+ index.addFileDetail(request.phase1ExistingFileNames.get(i), 
request.phase1ExistingFileSizes.get(i), true);\n+ }\n+ for (int i = 0; i < request.phase1FileNames.size(); i++) {\n+ index.addFileDetail(request.phase1FileNames.get(i), request.phase1FileSizes.get(i), false);\n+ }\n // recoveryBytesCount / recoveryFileCount will be set as we go...\n recoveryStatus.stage(RecoveryState.Stage.INDEX);\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n@@ -453,11 +444,7 @@ public void messageReceived(final RecoveryFileChunkRequest request, TransportCha\n content = content.toBytesArray();\n }\n indexOutput.writeBytes(content.array(), content.arrayOffset(), content.length());\n- recoveryStatus.state().getIndex().addRecoveredByteCount(content.length());\n- RecoveryState.File file = recoveryStatus.state().getIndex().file(request.name());\n- if (file != null) {\n- file.updateRecovered(request.length());\n- }\n+ recoveryStatus.state().getIndex().addRecoveredBytesToFile(request.name(), content.length());\n if (indexOutput.getFilePointer() >= request.length() || request.lastChunk()) {\n try {\n Store.verify(indexOutput);\n@@ -471,7 +458,6 @@ public void messageReceived(final RecoveryFileChunkRequest request, TransportCha\n assert Arrays.asList(store.directory().listAll()).contains(temporaryFileName);\n store.directory().sync(Collections.singleton(temporaryFileName));\n IndexOutput remove = recoveryStatus.removeOpenIndexOutputs(request.name());\n- recoveryStatus.state().getIndex().addRecoveredFileCount(1);\n assert remove == null || remove == indexOutput; // remove maybe null if we got finished\n }\n }",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
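Both timer-related removals in the RecoveryTarget diff were manual updates of the elapsed time; the patched RecoveryState.Timer computes it on demand instead and freezes it when stopTime() is called. A condensed sketch mirroring that behaviour (not a drop-in replacement for the Elasticsearch class):

```java
final class RecoveryTimerSketch {
    private volatile long startTime = 0;
    private volatile long stopTime = 0;
    private volatile long time = 0;

    void startTime(long millis) {
        this.startTime = millis;
    }

    void stopTime(long millis) {
        this.time = Math.max(0, millis - startTime); // freeze the duration
        this.stopTime = millis;
    }

    long stopTime() {
        return stopTime;
    }

    long time() {
        if (startTime == 0) {
            return 0;                                            // never started
        }
        if (time > 0) {
            return time;                                         // frozen at stopTime()
        }
        return Math.max(0, System.currentTimeMillis() - startTime); // still running
    }
}
```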
{
"diff": "@@ -30,7 +30,10 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.indices.recovery.RecoveryState;\n-import org.elasticsearch.rest.*;\n+import org.elasticsearch.rest.RestChannel;\n+import org.elasticsearch.rest.RestController;\n+import org.elasticsearch.rest.RestRequest;\n+import org.elasticsearch.rest.RestResponse;\n import org.elasticsearch.rest.action.support.RestResponseListener;\n import org.elasticsearch.rest.action.support.RestTable;\n \n@@ -89,10 +92,12 @@ Table getTableWithHeader(RestRequest request) {\n .addCell(\"target_host\", \"alias:thost;desc:target host\")\n .addCell(\"repository\", \"alias:rep;desc:repository\")\n .addCell(\"snapshot\", \"alias:snap;desc:snapshot\")\n- .addCell(\"files\", \"alias:f;desc:number of files\")\n+ .addCell(\"files\", \"alias:f;desc:number of files to recover\")\n .addCell(\"files_percent\", \"alias:fp;desc:percent of files recovered\")\n- .addCell(\"bytes\", \"alias:b;desc:size in bytes\")\n+ .addCell(\"bytes\", \"alias:b;desc:size to recover in bytes\")\n .addCell(\"bytes_percent\", \"alias:bp;desc:percent of bytes recovered\")\n+ .addCell(\"total_files\", \"alias:tf;desc:total number of files\")\n+ .addCell(\"total_bytes\", \"alias:tb;desc:total number of bytes\")\n .endHeaders();\n return t;\n }\n@@ -145,10 +150,12 @@ public int compare(ShardRecoveryResponse o1, ShardRecoveryResponse o2) {\n t.addCell(state.getTargetNode().getHostName());\n t.addCell(state.getRestoreSource() == null ? \"n/a\" : state.getRestoreSource().snapshotId().getRepository());\n t.addCell(state.getRestoreSource() == null ? \"n/a\" : state.getRestoreSource().snapshotId().getSnapshot());\n+ t.addCell(state.getIndex().totalRecoverFiles());\n+ t.addCell(String.format(Locale.ROOT, \"%1.1f%%\", state.getIndex().recoveredFilesPercent()));\n+ t.addCell(state.getIndex().totalRecoverBytes());\n+ t.addCell(String.format(Locale.ROOT, \"%1.1f%%\", state.getIndex().recoveredBytesPercent()));\n t.addCell(state.getIndex().totalFileCount());\n- t.addCell(String.format(Locale.ROOT, \"%1.1f%%\", state.getIndex().percentFilesRecovered()));\n- t.addCell(state.getIndex().totalByteCount());\n- t.addCell(String.format(Locale.ROOT, \"%1.1f%%\", state.getIndex().percentBytesRecovered()));\n+ t.addCell(state.getIndex().totalBytes());\n t.endRow();\n }\n }",
"filename": "src/main/java/org/elasticsearch/rest/action/cat/RestRecoveryAction.java",
"status": "modified"
},
{
"diff": "@@ -133,7 +133,7 @@ public void run() {\n long bytes;\n if (indexRecoveries.size() > 0) {\n translogOps = indexRecoveries.get(0).recoveryState().getTranslog().currentTranslogOperations();\n- bytes = recoveryResponse.shardResponses().get(INDEX_NAME).get(0).recoveryState().getIndex().recoveredByteCount();\n+ bytes = recoveryResponse.shardResponses().get(INDEX_NAME).get(0).recoveryState().getIndex().recoveredBytes();\n } else {\n bytes = lastBytes = 0;\n translogOps = lastTranslogOps = 0;",
"filename": "src/test/java/org/elasticsearch/benchmark/recovery/ReplicaRecoveryBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.test.ElasticsearchBackwardsCompatIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n-import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -99,28 +98,28 @@ public void testReusePeerRecovery() throws Exception {\n if (!recoveryState.getPrimary()) {\n RecoveryState.Index index = recoveryState.getIndex();\n if (compatibilityVersion().onOrAfter(Version.V_1_2_0)) {\n- assertThat(index.toString(), index.recoveredByteCount(), equalTo(0l));\n- assertThat(index.toString(), index.reusedByteCount(), greaterThan(0l));\n- assertThat(index.toString(), index.reusedByteCount(), equalTo(index.totalByteCount()));\n+ assertThat(index.toString(), index.recoveredBytes(), equalTo(0l));\n+ assertThat(index.toString(), index.reusedBytes(), greaterThan(0l));\n+ assertThat(index.toString(), index.reusedBytes(), equalTo(index.totalBytes()));\n assertThat(index.toString(), index.recoveredFileCount(), equalTo(0));\n assertThat(index.toString(), index.reusedFileCount(), equalTo(index.totalFileCount()));\n assertThat(index.toString(), index.reusedFileCount(), greaterThan(0));\n- assertThat(index.toString(), index.percentBytesRecovered(), equalTo(0.f));\n- assertThat(index.toString(), index.percentFilesRecovered(), equalTo(0.f));\n- assertThat(index.toString(), index.reusedByteCount(), greaterThan(index.numberOfRecoveredBytes()));\n+ assertThat(index.toString(), index.recoveredBytesPercent(), equalTo(100.f));\n+ assertThat(index.toString(), index.recoveredFilesPercent(), equalTo(100.f));\n+ assertThat(index.toString(), index.reusedBytes(), greaterThan(index.recoveredBytes()));\n } else {\n /* We added checksums on 1.3 but they were available on 1.2 already since this uses Lucene 4.8.\n * yet in this test we upgrade the entire cluster and therefor the 1.3 nodes try to read the checksum\n * from the files even if they haven't been written with ES 1.3. Due to that we don't have to recover\n * the segments files if we are on 1.2 or above...*/\n- assertThat(index.toString(), index.recoveredByteCount(), greaterThan(0l));\n+ assertThat(index.toString(), index.recoveredBytes(), greaterThan(0l));\n assertThat(index.toString(), index.recoveredFileCount(), greaterThan(0));\n- assertThat(index.toString(), index.reusedByteCount(), greaterThan(0l));\n- assertThat(index.toString(), index.percentBytesRecovered(), greaterThan(0.0f));\n- assertThat(index.toString(), index.percentBytesRecovered(), lessThan(100.0f));\n- assertThat(index.toString(), index.percentFilesRecovered(), greaterThan(0.0f));\n- assertThat(index.toString(), index.percentFilesRecovered(), lessThan(100.0f));\n- assertThat(index.toString(), index.reusedByteCount(), greaterThan(index.numberOfRecoveredBytes()));\n+ assertThat(index.toString(), index.reusedBytes(), greaterThan(0l));\n+ assertThat(index.toString(), index.recoveredBytesPercent(), greaterThan(0.0f));\n+ assertThat(index.toString(), index.recoveredBytesPercent(), lessThan(100.0f));\n+ assertThat(index.toString(), index.recoveredFilesPercent(), greaterThan(0.0f));\n+ assertThat(index.toString(), index.recoveredFilesPercent(), lessThan(100.0f));\n+ assertThat(index.toString(), index.reusedBytes(), greaterThan(index.recoveredBytes()));\n }\n // TODO upgrade via optimize?\n }",
"filename": "src/test/java/org/elasticsearch/gateway/local/RecoveryBackwardsCompatibilityTests.java",
"status": "modified"
},
{
"diff": "@@ -403,18 +403,18 @@ public void testReusePeerRecovery() throws Exception {\n if (!recoveryState.getPrimary()) {\n logger.info(\"--> replica shard {} recovered from {} to {}, recovered {}, reuse {}\",\n response.getShardId(), recoveryState.getSourceNode().name(), recoveryState.getTargetNode().name(),\n- recoveryState.getIndex().recoveredTotalSize(), recoveryState.getIndex().reusedByteCount());\n- assertThat(\"no bytes should be recovered\", recoveryState.getIndex().recoveredByteCount(), equalTo(0l));\n- assertThat(\"data should have been reused\", recoveryState.getIndex().reusedByteCount(), greaterThan(0l));\n- assertThat(\"all bytes should be reused\", recoveryState.getIndex().reusedByteCount(), equalTo(recoveryState.getIndex().totalByteCount()));\n+ recoveryState.getIndex().recoveredBytes(), recoveryState.getIndex().reusedBytes());\n+ assertThat(\"no bytes should be recovered\", recoveryState.getIndex().recoveredBytes(), equalTo(0l));\n+ assertThat(\"data should have been reused\", recoveryState.getIndex().reusedBytes(), greaterThan(0l));\n+ assertThat(\"all bytes should be reused\", recoveryState.getIndex().reusedBytes(), equalTo(recoveryState.getIndex().totalBytes()));\n assertThat(\"no files should be recovered\", recoveryState.getIndex().recoveredFileCount(), equalTo(0));\n assertThat(\"all files should be reused\", recoveryState.getIndex().reusedFileCount(), equalTo(recoveryState.getIndex().totalFileCount()));\n assertThat(\"> 0 files should be reused\", recoveryState.getIndex().reusedFileCount(), greaterThan(0));\n- assertThat(\"all bytes should be reused bytes\",\n- recoveryState.getIndex().reusedByteCount(), greaterThan(recoveryState.getIndex().numberOfRecoveredBytes()));\n } else {\n- assertThat(recoveryState.getIndex().recoveredByteCount(), equalTo(recoveryState.getIndex().reusedByteCount()));\n- assertThat(recoveryState.getIndex().recoveredFileCount(), equalTo(recoveryState.getIndex().reusedFileCount()));\n+ assertThat(recoveryState.getIndex().recoveredBytes(), equalTo(0l));\n+ assertThat(recoveryState.getIndex().reusedBytes(), equalTo(recoveryState.getIndex().totalBytes()));\n+ assertThat(recoveryState.getIndex().recoveredFileCount(), equalTo(0));\n+ assertThat(recoveryState.getIndex().reusedFileCount(), equalTo(recoveryState.getIndex().totalFileCount()));\n }\n }\n }",
"filename": "src/test/java/org/elasticsearch/gateway/local/SimpleRecoveryLocalGatewayTests.java",
"status": "modified"
},
{
"diff": "@@ -423,10 +423,10 @@ private IndicesStatsResponse createAndPopulateIndex(String name, int nodeCount,\n \n private void validateIndexRecoveryState(RecoveryState.Index indexState) {\n assertThat(indexState.time(), greaterThanOrEqualTo(0L));\n- assertThat(indexState.percentFilesRecovered(), greaterThanOrEqualTo(0.0f));\n- assertThat(indexState.percentFilesRecovered(), lessThanOrEqualTo(100.0f));\n- assertThat(indexState.percentBytesRecovered(), greaterThanOrEqualTo(0.0f));\n- assertThat(indexState.percentBytesRecovered(), lessThanOrEqualTo(100.0f));\n+ assertThat(indexState.recoveredFilesPercent(), greaterThanOrEqualTo(0.0f));\n+ assertThat(indexState.recoveredFilesPercent(), lessThanOrEqualTo(100.0f));\n+ assertThat(indexState.recoveredBytesPercent(), greaterThanOrEqualTo(0.0f));\n+ assertThat(indexState.recoveredBytesPercent(), lessThanOrEqualTo(100.0f));\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/indices/recovery/IndexRecoveryTests.java",
"status": "modified"
},
{
"diff": "@@ -18,43 +18,168 @@\n */\n package org.elasticsearch.indices.recovery;\n \n+import org.elasticsearch.common.io.stream.BytesStreamInput;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.Streamable;\n+import org.elasticsearch.indices.recovery.RecoveryState.File;\n import org.elasticsearch.test.ElasticsearchTestCase;\n \n-import static org.hamcrest.Matchers.closeTo;\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static org.hamcrest.Matchers.*;\n \n public class RecoveryStateTest extends ElasticsearchTestCase {\n \n- public void testPercentage() {\n- RecoveryState state = new RecoveryState();\n- RecoveryState.Index index = state.getIndex();\n- index.totalByteCount(100);\n- index.reusedByteCount(20);\n- index.recoveredByteCount(80);\n- assertThat((double)index.percentBytesRecovered(), closeTo(80.0d, 0.1d));\n-\n- index.totalFileCount(100);\n- index.reusedFileCount(80);\n- index.recoveredFileCount(20);\n- assertThat((double)index.percentFilesRecovered(), closeTo(20.0d, 0.1d));\n-\n- index.totalByteCount(0);\n- index.reusedByteCount(0);\n- index.recoveredByteCount(0);\n- assertThat((double)index.percentBytesRecovered(), closeTo(0d, 0.1d));\n-\n- index.totalFileCount(0);\n- index.reusedFileCount(0);\n- index.recoveredFileCount(0);\n- assertThat((double)index.percentFilesRecovered(), closeTo(00.0d, 0.1d));\n-\n- index.totalByteCount(10);\n- index.reusedByteCount(0);\n- index.recoveredByteCount(10);\n- assertThat((double)index.percentBytesRecovered(), closeTo(100d, 0.1d));\n-\n- index.totalFileCount(20);\n- index.reusedFileCount(0);\n- index.recoveredFileCount(20);\n- assertThat((double)index.percentFilesRecovered(), closeTo(100.0d, 0.1d));\n+ abstract class Streamer<T extends Streamable> extends Thread {\n+\n+ private T lastRead;\n+ final private AtomicBoolean shouldStop;\n+ final private T source;\n+ final AtomicReference<Throwable> error = new AtomicReference<>();\n+\n+ Streamer(AtomicBoolean shouldStop, T source) {\n+ this.shouldStop = shouldStop;\n+ this.source = source;\n+ }\n+\n+ void serializeDeserialize() throws IOException {\n+ BytesStreamOutput out = new BytesStreamOutput();\n+ source.writeTo(out);\n+ out.close();\n+ StreamInput in = new BytesStreamInput(out.bytes());\n+ lastRead = deserialize(in);\n+ }\n+\n+ abstract T deserialize(StreamInput in) throws IOException;\n+\n+ @Override\n+ public void run() {\n+ try {\n+ while (shouldStop.get() == false) {\n+ serializeDeserialize();\n+ }\n+ serializeDeserialize();\n+ } catch (Throwable t) {\n+ error.set(t);\n+ }\n+ }\n }\n+\n+ public void testIndex() throws Exception {\n+ File[] files = new File[randomIntBetween(1, 20)];\n+ ArrayList<File> filesToRecover = new ArrayList<>();\n+ long totalFileBytes = 0;\n+ long totalReusedBytes = 0;\n+ int totalReused = 0;\n+ for (int i = 0; i < files.length; i++) {\n+ final int fileLength = randomIntBetween(1, 1000);\n+ final boolean reused = randomBoolean();\n+ totalFileBytes += fileLength;\n+ files[i] = new RecoveryState.File(\"f_\" + i, fileLength, reused);\n+ if (reused) {\n+ totalReused++;\n+ totalReusedBytes += fileLength;\n+ } else {\n+ filesToRecover.add(files[i]);\n+ }\n+ }\n+\n+ Collections.shuffle(Arrays.asList(files));\n+\n+ final RecoveryState.Index index = new RecoveryState.Index();\n+ 
final long startTime = System.currentTimeMillis();\n+ // before we start we must report 0\n+ assertThat(index.recoveredFilesPercent(), equalTo((float) 0.0));\n+ assertThat(index.recoveredBytesPercent(), equalTo((float) 0.0));\n+\n+ index.startTime(startTime);\n+ for (File file : files) {\n+ index.addFileDetail(file.name(), file.length(), file.reused());\n+ }\n+\n+ logger.info(\"testing initial information\");\n+ assertThat(index.totalBytes(), equalTo(totalFileBytes));\n+ assertThat(index.reusedBytes(), equalTo(totalReusedBytes));\n+ assertThat(index.totalRecoverBytes(), equalTo(totalFileBytes - totalReusedBytes));\n+ assertThat(index.totalFileCount(), equalTo(files.length));\n+ assertThat(index.reusedFileCount(), equalTo(totalReused));\n+ assertThat(index.totalRecoverFiles(), equalTo(filesToRecover.size()));\n+ assertThat(index.recoveredFileCount(), equalTo(0));\n+ assertThat(index.recoveredBytes(), equalTo(0l));\n+ assertThat(index.recoveredFilesPercent(), equalTo(filesToRecover.size() == 0 ? 100.0f : 0.0f));\n+ assertThat(index.recoveredBytesPercent(), equalTo(filesToRecover.size() == 0 ? 100.0f : 0.0f));\n+ assertThat(index.startTime(), equalTo(startTime));\n+\n+\n+ long bytesToRecover = totalFileBytes - totalReusedBytes;\n+ boolean completeRecovery = bytesToRecover == 0 || randomBoolean();\n+ if (completeRecovery == false) {\n+ bytesToRecover = randomIntBetween(1, (int) bytesToRecover);\n+ logger.info(\"performing partial recovery ([{}] bytes of [{}])\", bytesToRecover, totalFileBytes - totalReusedBytes);\n+ }\n+ AtomicBoolean streamShouldStop = new AtomicBoolean();\n+\n+ Streamer<RecoveryState.Index> backgroundReader = new Streamer<RecoveryState.Index>(streamShouldStop, index) {\n+ @Override\n+ RecoveryState.Index deserialize(StreamInput in) throws IOException {\n+ RecoveryState.Index index = new RecoveryState.Index();\n+ index.readFrom(in);\n+ return index;\n+ }\n+ };\n+\n+ backgroundReader.start();\n+\n+ long recoveredBytes = 0;\n+ while (bytesToRecover > 0) {\n+ File file = randomFrom(filesToRecover);\n+ long toRecover = Math.min(bytesToRecover, randomIntBetween(1, (int) (file.length() - file.recovered())));\n+ index.addRecoveredBytesToFile(file.name(), toRecover);\n+ file.addRecoveredBytes(toRecover);\n+ bytesToRecover -= toRecover;\n+ recoveredBytes += toRecover;\n+ if (file.reused() || file.fullyRecovered()) {\n+ filesToRecover.remove(file);\n+ }\n+ }\n+\n+ if (completeRecovery) {\n+ assertThat(filesToRecover.size(), equalTo(0));\n+ long time = System.currentTimeMillis();\n+ index.stopTime(time);\n+ assertThat(index.time(), equalTo(Math.max(0, time - startTime)));\n+ }\n+\n+ logger.info(\"testing serialized information\");\n+ streamShouldStop.set(true);\n+ backgroundReader.join();\n+ assertThat(backgroundReader.lastRead.fileDetails().toArray(), arrayContainingInAnyOrder(index.fileDetails().toArray()));\n+ assertThat(backgroundReader.lastRead.startTime(), equalTo(index.startTime()));\n+ assertThat(backgroundReader.lastRead.time(), equalTo(index.time()));\n+\n+ logger.info(\"testing post recovery\");\n+ assertThat(index.totalBytes(), equalTo(totalFileBytes));\n+ assertThat(index.reusedBytes(), equalTo(totalReusedBytes));\n+ assertThat(index.totalRecoverBytes(), equalTo(totalFileBytes - totalReusedBytes));\n+ assertThat(index.totalFileCount(), equalTo(files.length));\n+ assertThat(index.reusedFileCount(), equalTo(totalReused));\n+ assertThat(index.totalRecoverFiles(), equalTo(files.length - totalReused));\n+ assertThat(index.recoveredFileCount(), equalTo(index.totalRecoverFiles() - 
filesToRecover.size()));\n+ assertThat(index.recoveredBytes(), equalTo(recoveredBytes));\n+ if (index.totalRecoverFiles() == 0) {\n+ assertThat((double) index.recoveredFilesPercent(), equalTo(100.0));\n+ assertThat((double) index.recoveredBytesPercent(), equalTo(100.0));\n+ } else {\n+ assertThat((double) index.recoveredFilesPercent(), closeTo(100.0 * index.recoveredFileCount() / index.totalRecoverFiles(), 0.1));\n+ assertThat((double) index.recoveredBytesPercent(), closeTo(100.0 * index.recoveredBytes() / index.totalRecoverBytes(), 0.1));\n+ }\n+ assertThat(index.startTime(), equalTo(startTime));\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/indices/recovery/RecoveryStateTest.java",
"status": "modified"
},
{
"diff": "@@ -38,11 +38,10 @@ public class RecoveryStatusTests extends ElasticsearchSingleNodeTest {\n \n public void testRenameTempFiles() throws IOException {\n IndexService service = createIndex(\"foo\");\n- RecoveryState state = new RecoveryState();\n \n IndexShard indexShard = service.shard(0);\n DiscoveryNode node = new DiscoveryNode(\"foo\", new LocalTransportAddress(\"bar\"), Version.CURRENT);\n- RecoveryStatus status = new RecoveryStatus(indexShard, node, state, new RecoveryTarget.RecoveryListener() {\n+ RecoveryStatus status = new RecoveryStatus(indexShard, node, new RecoveryTarget.RecoveryListener() {\n @Override\n public void onRecoveryDone(RecoveryState state) {\n }",
"filename": "src/test/java/org/elasticsearch/indices/recovery/RecoveryStatusTests.java",
"status": "modified"
},
{
"diff": "@@ -116,9 +116,8 @@ long startRecovery(RecoveriesCollection collection) {\n long startRecovery(RecoveriesCollection collection, RecoveryTarget.RecoveryListener listener, TimeValue timeValue) {\n IndicesService indexServices = getInstanceFromNode(IndicesService.class);\n IndexShard indexShard = indexServices.indexServiceSafe(\"test\").shard(0);\n- return collection.startRecovery(\n- indexShard, new DiscoveryNode(\"id\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n- new RecoveryState(indexShard.shardId()), listener, timeValue);\n+ final DiscoveryNode sourceNode = new DiscoveryNode(\"id\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n+ return collection.startRecovery(indexShard, sourceNode, listener, timeValue);\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/recovery/RecoveriesCollectionTests.java",
"status": "modified"
},
{
"diff": "@@ -561,7 +561,7 @@ public boolean clearData(String nodeName) {\n \n IntSet reusedShards = IntOpenHashSet.newInstance();\n for (ShardRecoveryResponse response : client().admin().indices().prepareRecoveries(\"test-idx\").get().shardResponses().get(\"test-idx\")) {\n- if (response.recoveryState().getIndex().reusedByteCount() > 0) {\n+ if (response.recoveryState().getIndex().reusedBytes() > 0) {\n reusedShards.add(response.getShardId());\n }\n }",
"filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java",
"status": "modified"
},
{
"diff": "@@ -24,12 +24,12 @@\n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.junit.Assert.assertThat;\n+import static org.junit.Assert.fail;\n \n /**\n * Represents a gt assert section:\n- *\n- * - gt: { fields._ttl: 0}\n- *\n+ * <p/>\n+ * - gt: { fields._ttl: 0}\n */\n public class GreaterThanAssertion extends Assertion {\n \n@@ -42,10 +42,14 @@ public GreaterThanAssertion(String field, Object expectedValue) {\n @Override\n @SuppressWarnings(\"unchecked\")\n protected void doAssert(Object actualValue, Object expectedValue) {\n- logger.trace(\"assert that [{}] is greater than [{}]\", actualValue, expectedValue);\n- assertThat(actualValue, instanceOf(Comparable.class));\n- assertThat(expectedValue, instanceOf(Comparable.class));\n- assertThat(errorMessage(), (Comparable)actualValue, greaterThan((Comparable) expectedValue));\n+ logger.trace(\"assert that [{}] is greater than [{}] (field: [{}])\", actualValue, expectedValue, getField());\n+ assertThat(\"value of [\" + getField() + \"] is not comparable (got [\" + actualValue.getClass() + \"])\", actualValue, instanceOf(Comparable.class));\n+ assertThat(\"expected value of [\" + getField() + \"] is not comparable (got [\" + expectedValue.getClass() + \"])\", expectedValue, instanceOf(Comparable.class));\n+ try {\n+ assertThat(errorMessage(), (Comparable) actualValue, greaterThan((Comparable) expectedValue));\n+ } catch (ClassCastException e) {\n+ fail(\"cast error while checking (\" + errorMessage() + \"): \" + e);\n+ }\n }\n \n private String errorMessage() {",
"filename": "src/test/java/org/elasticsearch/test/rest/section/GreaterThanAssertion.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.junit.Assert.assertThat;\n+import static org.junit.Assert.fail;\n \n /**\n * Represents a gte assert section:\n@@ -41,10 +42,14 @@ public GreaterThanEqualToAssertion(String field, Object expectedValue) {\n \n @Override\n protected void doAssert(Object actualValue, Object expectedValue) {\n- logger.trace(\"assert that [{}] is greater than or equal to [{}]\", actualValue, expectedValue);\n- assertThat(actualValue, instanceOf(Comparable.class));\n- assertThat(expectedValue, instanceOf(Comparable.class));\n- assertThat(errorMessage(), (Comparable)actualValue, greaterThanOrEqualTo((Comparable) expectedValue));\n+ logger.trace(\"assert that [{}] is greater than or equal to [{}] (field: [{}])\", actualValue, expectedValue, getField());\n+ assertThat(\"value of [\" + getField() + \"] is not comparable (got [\" + actualValue.getClass() + \"])\", actualValue, instanceOf(Comparable.class));\n+ assertThat(\"expected value of [\" + getField() + \"] is not comparable (got [\" + expectedValue.getClass() + \"])\", expectedValue, instanceOf(Comparable.class));\n+ try {\n+ assertThat(errorMessage(), (Comparable) actualValue, greaterThanOrEqualTo((Comparable) expectedValue));\n+ } catch (ClassCastException e) {\n+ fail(\"cast error while checking (\" + errorMessage() + \"): \" + e);\n+ }\n }\n \n private String errorMessage() {",
"filename": "src/test/java/org/elasticsearch/test/rest/section/GreaterThanEqualToAssertion.java",
"status": "modified"
}
]
} |
{
"body": "Hey there.\n\nSince we upgraded to version 1.3.7, we noticed that we sometimes get weird double buckets when running our date_histogram aggregations.\n\nAfter trying some things we figured out it only happens when we set `pre_zone_adjust_large_interval` to true, `pre_zone` to our local time zone (which uses DST), and `interval` is 'day' or bigger.\n\nIn the aggregration's result we see that we get two buckets corresponding to the same interval, but with an hour difference between their keys.\n\nHere is a simple example that should reproduce this issue:\n\n``` javascript\nPOST /test/t\n{\n \"d\": \"2014-10-08T13:00:00Z\"\n}\n\nPOST /test/t\n{\n \"d\": \"2014-11-08T13:00:00Z\"\n}\n\nGET /test/_search?size=0\n{\n \"aggs\": {\n \"test\": {\n \"date_histogram\": {\n \"field\": \"d\",\n \"interval\": \"year\",\n \"pre_zone\": \"Asia/Jerusalem\",\n \"pre_zone_adjust_large_interval\": true\n }\n }\n }\n}\n```\n\nAnd the result:\n\n``` javascript\n\"aggregations\": {\n \"test\": {\n \"buckets\": [\n {\n \"key_as_string\": \"2013-12-31T21:00:00.000Z\",\n \"key\": 1388523600000,\n \"doc_count\": 1\n },\n {\n \"key_as_string\": \"2013-12-31T22:00:00.000Z\",\n \"key\": 1388527200000,\n \"doc_count\": 1\n }\n ]\n }\n }\n```\n\nAs you can see, although the two timestamps unarguably belong to the same year, they're put into different buckets. As we figured out, it happens because the timezone offset is different for the two timestamps as one of them occurs when DST is on and the other is not.\n\nWe use `Asia/Jerusalem` timezone but the same problem happens in every timezone with DST (eg CET), as long as `pre_zone_adjust_large_interval` is used (When it's not used both documents will go to the same bucket `\"2014-01-01T00:00:00.000Z\"`).\n\nThis problem never happened before we upgraded to 1.3.7. Moreover, while digging into the code we figured out that this bug is a direct result of the proposed fix for issue #8339. While that fix indeed solved the timezone problem in 'hour' intervals around DST switch, it caused the new problem with bigger intervals.\n\nThe problem is also in version 1.4.2.\n",
"comments": [
{
"body": "I had the time to look into this issue and figure out the exact reasons for this to happen and how it might be fixed.\n\nThe issue is with the `TimeTimeZoneRoundingFloor` strategy in `TimeZoneRounding` (used when `pre_zone` is not UTC, `interval<=hour` or `pre_zone_adjust_large_interval=true`).\nRounding the key in this strategy is done as follows: shifting the key from UTC to local time in `pre_zone`, rounding the key, then shifting it back to UTC.\n\nThe problem is to determine the offset to subtract from the rounded key in order to take it back to the correct UTC time. The desired offset might be different from the original added offset due to DST switch (either the original shift or the rounding resulted in a timestamp that occurs in different DST configuration).\n\nPrior to fix #8655, the offset used to shift back was the offset at the rounded key. This worked well in most cases for large intervals, but not for hour intervals around the DST switch.\nAfter the fix, the offset used to shift back is always same offset used to switch from UTC to local time. This always gives a correct result for hour intervals (assuming DST switch always occurs in round hours).\n\nBut as we saw there is a problem with large intervals now - if the timezone offset at the original key is different than what it is at rounded key (which points to midnight), then we'll end up with a rounded key one hour away from midnight. For example `2014-08-08T00:00:00Z` has +3 offset in `Asia/Jerusalem` and after shifting and rounding we get `2014-01-01T00:00:00Z`. Subtracting 3 hours from that results in `2013-12-31T21:00:00Z` which is not the expected result (which is `2013-12-31T22:00:00Z` because the offset is +2 in winter). Moreover, if two timestamps occur in same month/year but with different DST, they are mapped into different keys (one is correct and one is not), and that's why we get the double buckets.\n\nI figured out that in order to solve this, we can use [getOffsetFromLocal](http://joda-time.sourceforge.net/apidocs/org/joda/time/DateTimeZone.html#getOffsetFromLocal%28long%29) method of the timezone class. This method is the opposite of `getOffset` and it always gives the correct offset we need to subtract in order to get back from shifted time to UTC.\n\nI changed the code to use this method and all the tests pass, including tests I added for that issue (that fail on the latest code).\n\nBut then, I found another rare case that fails when using `getOffsetFromLocal` (but succeeds with latest code). This is the case immediately after switching DST on->off, the two hours before and after the switch are \"ambiguous\" in local time. If `interval>=day`, this is irrelevant because rounding takes us to midnight anyway. Nevertheless, if `interval=hour`, there is no way for `getOffsetFromLocal` to distinguish between them so this means they both go into the same bucket. That's probably OK for most applications, because hour histogram around DST switch is confusing to display whatsoever. But since I am pedantic, I want to solve this case as well... ^_^\n\nAfter looking at the work done on `TimeZoneRounding` in #9637 (planned for 2.0), I saw the use of the methods `convertUTCToLocal` and `convertLocalToUTC` which basically do the same as adding/subtracting the offset returned by `getOffset`/`getOffsetFromLocal`. Except that `convertLocalToUTC` can get the original UTC timestamp as an optional parameter and it uses it to solve this exact issue (the original UTC time helps to solve the ambiguity). 
I tried using these methods and it seems to work well in all cases (all tests pass).\n\nI will post a pull request with my fix and my additional tests as soon as I figure out if anything should also be changed in the strategies that deal with intervals such as \"2d\" (I think this problem is irrelevant there).\n\nHere are the tests I added in order to cover `pre_zone_adjust_large_interval` functionality and the double buckets bug:\n\n``` java\n @Test\n public void testAdjustPreTimeZone() {\n Rounding tzRounding;\n\n // Day interval\n tzRounding = TimeZoneRounding.builder(DateTimeUnit.DAY_OF_MONTH).preZone(DateTimeZone.forID(\"Asia/Jerusalem\")).preZoneAdjustLargeInterval(true).build();\n assertThat(tzRounding.round(time(\"2014-11-11T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))), equalTo(time(\"2014-11-11T00:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))));\n // DST on\n assertThat(tzRounding.round(time(\"2014-08-11T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))), equalTo(time(\"2014-08-11T00:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))));\n // Day of switching DST on -> off\n assertThat(tzRounding.round(time(\"2014-10-26T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))), equalTo(time(\"2014-10-26T00:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))));\n // Day of switching DST off -> on\n assertThat(tzRounding.round(time(\"2015-03-27T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))), equalTo(time(\"2015-03-27T00:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))));\n\n // Month interval\n tzRounding = TimeZoneRounding.builder(DateTimeUnit.MONTH_OF_YEAR).preZone(DateTimeZone.forID(\"Asia/Jerusalem\")).preZoneAdjustLargeInterval(true).build();\n assertThat(tzRounding.round(time(\"2014-11-11T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))), equalTo(time(\"2014-11-01T00:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))));\n // DST on\n assertThat(tzRounding.round(time(\"2014-10-10T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))), equalTo(time(\"2014-10-01T00:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))));\n\n // Year interval\n tzRounding = TimeZoneRounding.builder(DateTimeUnit.YEAR_OF_CENTURY).preZone(DateTimeZone.forID(\"Asia/Jerusalem\")).preZoneAdjustLargeInterval(true).build();\n assertThat(tzRounding.round(time(\"2014-11-11T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))), equalTo(time(\"2014-01-01T00:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))));\n\n // Two timestamps in same year (\"Double buckets\" bug in 1.3.7)\n tzRounding = TimeZoneRounding.builder(DateTimeUnit.YEAR_OF_CENTURY).preZone(DateTimeZone.forID(\"Asia/Jerusalem\")).preZoneAdjustLargeInterval(true).build();\n assertThat(tzRounding.round(time(\"2014-11-11T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\"))),\n equalTo(tzRounding.round(time(\"2014-08-11T17:00:00\", DateTimeZone.forID(\"Asia/Jerusalem\")))));\n }\n```\n\nHere is a test that covers the ambiguous hours bug:\n\n``` java\n @Test\n public void testAmbiguousHoursAfterDSTSwitch() {\n Rounding tzRounding;\n\n tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"Asia/Jerusalem\")).preZoneAdjustLargeInterval(true).build();\n assertThat(tzRounding.round(time(\"2014-10-25T22:30:00\", DateTimeZone.UTC)), equalTo(time(\"2014-10-25T22:00:00\", DateTimeZone.UTC)));\n assertThat(tzRounding.round(time(\"2014-10-25T23:30:00\", DateTimeZone.UTC)), equalTo(time(\"2014-10-25T23:00:00\", DateTimeZone.UTC)));\n }\n```\n",
"created_at": "2015-02-10T22:51:49Z"
},
{
"body": "I can confirm I've seen this issue before - back in the facet days with 0.90. I recall opening an issue on this but can't find it now. Good catch!\n",
"created_at": "2015-02-10T22:58:27Z"
},
{
"body": "Thanks for the great test case. Since I'm currently trying to clean up the time zone management for 2.0 I included your test case from the first comment. I can reproduce the issue on 1.4 and current 2.0 branch and I was able to make it pass using the following changes to TimeZoneRounding.TimeTimeZoneRoundingFloor:\n\n```\npublic long roundKey(long utcMillis) {\n long local = preTz.convertUTCToLocal(utcMillis);\n return preTz.convertLocalToUTC(field.roundFloor(local), true, utcMillis);\n}\n```\n\nThis basically follows your suggestions and the work in #9637. I will also add your other Rounding tests there and see if they pass with my intended implementation. However, the `pre_zone_adjust_large_interval` will hopefully be gone for 2.0.\n",
"created_at": "2015-02-11T15:34:27Z"
},
{
"body": "Yes, the code for roundKey that you included looks exactly like my final solution, that's great.\nCan this fix also be included in 1.x? (As I said I can post a PR for this myself, but if you plan to include it in 2.0 I guess it's better you manage it).\n\nAbout removing `pre_zone_adjust_large_interval`, from what I understood, you plan to always return times in UTC on buckets. Which is equivalent to setting `pre_zone_adjust_large_interval=true` now. Am I right? If so, it means you can use the tests I provided for `pre_zone_adjust_large_interval=true` to test 2.0 (instead of the existing test that assume `pre_zone_adjust_large_interval=false` and will fail on your new code).\n",
"created_at": "2015-02-11T17:05:50Z"
},
{
"body": "> Which is equivalent to setting pre_zone_adjust_large_interval=true\n\nYes, I think that is what it comes down to. I was able to use your test cases above with minor modifications to test on 2.0 branch. I think the plan is to backport the parts that are bug fixes to 1.x branch, but you can also propose a PR for that. This will still have to work with pre/postZone etc... because those will only be cleaned up later.\n",
"created_at": "2015-02-11T17:14:01Z"
},
{
"body": "@orenash I'm going to try to backport some of the changes I did in #9637 on 2.0 to the 1.x branch which will likely adress this issue here. So if you'd like to post a PR for your tests I could include these in the branches as run the backported changes against them as well.\n",
"created_at": "2015-02-17T10:33:17Z"
},
{
"body": "Alright, thanks a lot! I will do so in the next days.\n",
"created_at": "2015-02-17T20:39:25Z"
},
{
"body": "Hi @orenash, I included an integration tests based on your first comment in https://github.com/elasticsearch/elasticsearch/pull/9790, still would like to include your aditional rounding tests if you want to put them into a separate PR.\n",
"created_at": "2015-02-20T16:09:48Z"
},
{
"body": "Don't you prefer to have the tests in the same PR with your fix? If you do not, I will open a PR later today.\n",
"created_at": "2015-02-20T16:51:00Z"
},
{
"body": "No, this is great, will merge that after the fix is on the branch. Many thanks.\n",
"created_at": "2015-02-20T19:02:18Z"
},
{
"body": "Fixed on 1.x with 78f0202 and on 1.4 with b7dbf1e\n",
"created_at": "2015-02-23T11:10:19Z"
}
],
"number": 9491,
"title": "date_histogram issue when using \"pre_zone_adjust_large_interval\" and a timezone with DST"
} | {
"body": "This fix enhances the internal time zone conversion in the\nTimeZoneRounding classes that were the cause of issues with\nstrange date bucket keys in #9491 and #7673.\n\nCloses #9491\nCloses #7673\n",
"number": 9790,
"review_comments": [],
"title": "Aggs: Fix rounding issues when using `date_histogram` and time zones"
} | {
"commits": [
{
"message": "Aggs: Fix rounding issues when using `date_histogram` and time zones\n\nThis fix enhances the internal time zone conversion in the\nTimeZoneRounding classes that were the cause of issues with\nstrange date bucket keys in #9491 and #7673.\n\nCloses #9491\nCloses #7673"
}
],
"files": [
{
"diff": "@@ -156,21 +156,21 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long offset = preTz.getOffset(utcMillis);\n- long time = utcMillis + offset;\n- return field.roundFloor(time) - offset;\n+ long local = preTz.convertUTCToLocal(utcMillis);\n+ return preTz.convertLocalToUTC(field.roundFloor(local), true, utcMillis);\n }\n \n @Override\n public long valueForKey(long time) {\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n- return time;\n+ return postTz.convertUTCToLocal(time);\n }\n \n @Override\n- public long nextRoundingValue(long value) {\n- return durationField.add(value, 1);\n+ public long nextRoundingValue(long time) {\n+ long currentWithoutPostZone = postTz.convertLocalToUTC(time, true);\n+ long nextWithoutPostZone = durationField.add(currentWithoutPostZone, 1);\n+ return postTz.convertUTCToLocal(nextWithoutPostZone);\n }\n \n @Override\n@@ -268,21 +268,22 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long time = utcMillis + preTz.getOffset(utcMillis);\n- return field.roundFloor(time);\n+ long local = preTz.convertUTCToLocal(utcMillis);\n+ return field.roundFloor(local);\n }\n \n @Override\n public long valueForKey(long time) {\n // after rounding, since its day level (and above), its actually UTC!\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n- return time;\n+ return postTz.convertUTCToLocal(time);\n }\n \n @Override\n- public long nextRoundingValue(long value) {\n- return durationField.add(value, 1);\n+ public long nextRoundingValue(long currentWithPostZone) {\n+ long currentWithoutPostZone = postTz.convertLocalToUTC(currentWithPostZone, true);\n+ long nextWithoutPostZone = durationField.add(currentWithoutPostZone, 1);\n+ return postTz.convertUTCToLocal(nextWithoutPostZone);\n }\n \n @Override\n@@ -375,17 +376,17 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long time = utcMillis + preTz.getOffset(utcMillis);\n+ long time = preTz.convertUTCToLocal(utcMillis);\n return Rounding.Interval.roundKey(time, interval);\n }\n \n @Override\n public long valueForKey(long key) {\n long time = Rounding.Interval.roundValue(key, interval);\n // now, time is still in local, move it to UTC\n- time = time - preTz.getOffset(time);\n+ time = preTz.convertLocalToUTC(time, true);\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n+ time = postTz.convertUTCToLocal(time);\n return time;\n }\n \n@@ -435,7 +436,7 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long time = utcMillis + preTz.getOffset(utcMillis);\n+ long time = preTz.convertUTCToLocal(utcMillis);\n return Rounding.Interval.roundKey(time, interval);\n }\n \n@@ -444,7 +445,7 @@ public long valueForKey(long key) {\n long time = Rounding.Interval.roundValue(key, interval);\n // after rounding, since its day level (and above), its actually UTC!\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n+ time = postTz.convertUTCToLocal(time);\n return time;\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java",
"status": "modified"
},
{
"diff": "@@ -44,11 +44,13 @@\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.List;\n+import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.*;\n import static org.hamcrest.core.IsNull.notNullValue;\n@@ -1204,7 +1206,7 @@ public void singleValueField_WithExtendedBounds() throws Exception {\n \n @Test\n public void singleValue_WithMultipleDateFormatsFromMapping() throws Exception {\n- \n+\n String mappingJson = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\").startObject(\"date\").field(\"type\", \"date\").field(\"format\", \"dateOptionalTime||dd-MM-yyyy\").endObject().endObject().endObject().endObject().string();\n prepareCreate(\"idx2\").addMapping(\"type\", mappingJson).execute().actionGet();\n IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n@@ -1263,6 +1265,44 @@ public void testIssue6965() {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n+ public void testDSTBoundaryIssue9491() throws InterruptedException, ExecutionException {\n+ assertAcked(client().admin().indices().prepareCreate(\"test9491\").addMapping(\"type\", \"d\", \"type=date\").get());\n+ indexRandom(true,\n+ client().prepareIndex(\"test9491\", \"type\").setSource(\"d\", \"2014-10-08T13:00:00Z\"),\n+ client().prepareIndex(\"test9491\", \"type\").setSource(\"d\", \"2014-11-08T13:00:00Z\"));\n+ ensureSearchable(\"test9491\");\n+ SearchResponse response = client().prepareSearch(\"test9491\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"d\").interval(DateHistogram.Interval.YEAR).preZone(\"Asia/Jerusalem\")\n+ .preZoneAdjustLargeInterval(true))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo.getBuckets().size(), equalTo(1));\n+ assertThat(histo.getBuckets().get(0).getKey(), equalTo(\"2013-12-31T22:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(0).getDocCount(), equalTo(2L));\n+ }\n+\n+ public void testIssue7673() throws InterruptedException, ExecutionException {\n+ assertAcked(client().admin().indices().prepareCreate(\"test7673\").addMapping(\"type\", \"d\", \"type=date\").get());\n+ indexRandom(true,\n+ client().prepareIndex(\"test7673\", \"type\").setSource(\"d\", \"2013-07-01T00:00:00Z\"),\n+ client().prepareIndex(\"test7673\", \"type\").setSource(\"d\", \"2013-09-01T00:00:00Z\"));\n+ ensureSearchable(\"test7673\");\n+ SearchResponse response = client().prepareSearch(\"test7673\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"d\").interval(DateHistogram.Interval.MONTH).postZone(\"-02:00\")\n+ .minDocCount(0))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo.getBuckets().size(), equalTo(3));\n+ assertThat(histo.getBuckets().get(0).getKey(), equalTo(\"2013-06-30T22:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(0).getDocCount(), equalTo(1L));\n+ assertThat(histo.getBuckets().get(1).getKey(), equalTo(\"2013-07-31T22:00:00.000Z\"));\n+ 
assertThat(histo.getBuckets().get(1).getDocCount(), equalTo(0L));\n+ assertThat(histo.getBuckets().get(2).getKey(), equalTo(\"2013-08-31T22:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(2).getDocCount(), equalTo(1L));\n+ }\n+\n /**\n * see issue #9634, negative interval in date_histogram should raise exception\n */",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
}
]
} |
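The patched `roundKey` in the row above converts the UTC timestamp to local time, rounds it, and converts it back while passing the original instant so the ambiguous hour after a DST switch resolves correctly. Below is a minimal standalone sketch of that approach using Joda-Time directly; the class name, the helper method, and the hard-coded epoch values are illustrative only and not part of the PR.

```java
import org.joda.time.DateTimeZone;
import org.joda.time.chrono.ISOChronology;

public class TimeZoneRoundingSketch {

    // Round a UTC timestamp down to the start of its local calendar year and
    // express that boundary back in UTC, mirroring the patched
    // TimeTimeZoneRoundingFloor.roundKey(): convert to local, roundFloor,
    // convert back with the original instant to disambiguate DST overlaps.
    static long roundToLocalYearStart(long utcMillis, DateTimeZone preTz) {
        long local = preTz.convertUTCToLocal(utcMillis);
        long roundedLocal = ISOChronology.getInstanceUTC().year().roundFloor(local);
        return preTz.convertLocalToUTC(roundedLocal, true, utcMillis);
    }

    public static void main(String[] args) {
        DateTimeZone tz = DateTimeZone.forID("Asia/Jerusalem");
        long summer = 1412773200000L; // 2014-10-08T13:00:00Z, DST on (+03:00)
        long winter = 1415451600000L; // 2014-11-08T13:00:00Z, DST off (+02:00)
        // Both documents from issue #9491 should land in the same yearly bucket,
        // keyed at 2013-12-31T22:00:00.000Z (local midnight of 2014-01-01).
        System.out.println(roundToLocalYearStart(summer, tz));
        System.out.println(roundToLocalYearStart(winter, tz));
    }
}
```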
{
"body": "I sent following request to my elasticsearch 1.3.2 server `http://localhost:9200/payment_prod/2002/_search?search_type=count` with negative `post_zone` and `pre_zone`:\n\n``` json\n{\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"match_all\" : {}\n }\n }\n },\n \"aggs\" : {\n \"by_time\" : {\n \"date_histogram\" : {\n \"field\" : \"date\",\n \"interval\" : \"month\",\n \"post_zone\" : -2,\n \"pre_zone\" : -2,\n \"min_doc_count\" : 0,\n \"format\" : \"yyyy-MM-dd--HH:mm:ss.SSSZ\"\n }\n }\n }\n}\n```\n\nIt seems to me that I shouldn't get the bucket 2013-07-30--22:00:00.000+0000 in the response.\n\n``` json\n{\n \"took\" : 105,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 4018428,\n \"max_score\" : 0.0,\n \"hits\" : []\n },\n \"aggregations\" : {\n \"by_time\" : {\n \"buckets\" : [{\n \"key_as_string\" : \"2013-06-30--22:00:00.000+0000\",\n \"key\" : 1372629600000,\n \"doc_count\" : 235258\n }, {\n \"key_as_string\" : \"2013-07-30--22:00:00.000+0000\",\n \"key\" : 1375221600000,\n \"doc_count\" : 0\n }, {\n \"key_as_string\" : \"2013-07-31--22:00:00.000+0000\",\n \"key\" : 1375308000000,\n \"doc_count\" : 341928\n }, {\n \"key_as_string\" : \"2013-08-31--22:00:00.000+0000\",\n \"key\" : 1377986400000,\n \"doc_count\" : 330148\n }\n ]\n }\n }\n}\n```\n\nWith a small update on the request, post and pre zone with positive values, `http://localhost:9200/payment_prod/2002/_search?search_type=count`:\n\n``` json\n{\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"match_all\" : {}\n }\n }\n },\n \"aggs\" : {\n \"by_time\" : {\n \"date_histogram\" : {\n \"field\" : \"date\",\n \"interval\" : \"month\",\n \"post_zone\" : 2,\n \"pre_zone\" : 2,\n \"min_doc_count\" : 0,\n \"format\" : \"yyyy-MM-dd--HH:mm:ss.SSSZ\"\n }\n }\n }\n}\n```\n\nIn such case, response seems valid for me without strange bucket:\n\n``` json\n{\n \"took\" : 87,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 4018428,\n \"max_score\" : 0.0,\n \"hits\" : []\n },\n \"aggregations\" : {\n \"by_time\" : {\n \"buckets\" : [{\n \"key_as_string\" : \"2013-07-01--02:00:00.000+0000\",\n \"key\" : 1372644000000,\n \"doc_count\" : 233384\n }, {\n \"key_as_string\" : \"2013-08-01--02:00:00.000+0000\",\n \"key\" : 1375322400000,\n \"doc_count\" : 341918\n }\n ]\n }\n }\n```\n\nIn fact problem occurs with post_zone < 0 and min_doc_count = 0. Without one of those predicates, result seems more reliable.\n\nAm I wrong or is there a problem with elasticsearch post_zone management?\n",
"comments": [
{
"body": "thanks for the report @glelouarn - we'll take a look \n",
"created_at": "2014-09-10T14:39:17Z"
},
{
"body": "Hi work on it and initialized the pull request https://github.com/elasticsearch/elasticsearch/pull/9029\n",
"created_at": "2014-12-22T10:55:41Z"
},
{
"body": "I encountered similar problems while working on #9062. The problem is in the way Rounding.nextRoundingValue() is used when adding empty buckets in InternalHistogram.addEmptyBuckets(). The assumtion there is that when adding time durations (like 1M) to the key of an existing (non-empty) bucket to fill the histogram with empty buckets one always ends up with the key of the next non-empty bucket.\n\ne.g. in the example about one would expect\n\nnextRoundingValue(\"2013-06-30--22:00:00.000+0000\") => \"2013-07-31--22:00:00.000+0000\"\n(well, that is 2hour before the 1st of next month)\n\nInternally we use DurationField.add() from Joda-Time, but that works in a slightly different way (at least for months), e.g. if you start with 2014-01-31T22:00:00.000Z and then add 1month durations consecutively one gets:\n\n2014-01-31T22:00:00.000Z + month -> 2014-02-28T22:00:00.000Z\n2014-02-28T22:00:00.000Z + month -> 2014-03-28T22:00:00.000Z\n2014-03-28T22:00:00.000Z + month -> 2014-04-28T22:00:00.000Z\netc...\n\nbut the following rounded non-empty buckets will have keys 2014-03-31T22:00:00.000Z, 2014-04-30T22:00:00.000Z etc...\n\nThe time-zone offset is just one way to run into this. I think that any offset (could be positive as well) that makes bucket keys lie in the range of day_of_month in 28-31 will likely result in the same glitch, at least for DateTimeUnit.MONTH_OF_YEAR and above. \n",
"created_at": "2015-02-05T12:11:35Z"
},
{
"body": "This is fixed on 1.4 and 1.x with https://github.com/elasticsearch/elasticsearch/pull/9790 I think.\n",
"created_at": "2015-03-02T10:24:33Z"
},
{
"body": "The fix for this will be in the next release (either 1.4.5 or 1.5).\n",
"created_at": "2015-03-17T18:56:35Z"
}
],
"number": 7673,
"title": "Strange bucket for date_histogram with negative post_zone and zero min_doc_count"
} | {
"body": "This fix enhances the internal time zone conversion in the\nTimeZoneRounding classes that were the cause of issues with\nstrange date bucket keys in #9491 and #7673.\n\nCloses #9491\nCloses #7673\n",
"number": 9790,
"review_comments": [],
"title": "Aggs: Fix rounding issues when using `date_histogram` and time zones"
} | {
"commits": [
{
"message": "Aggs: Fix rounding issues when using `date_histogram` and time zones\n\nThis fix enhances the internal time zone conversion in the\nTimeZoneRounding classes that were the cause of issues with\nstrange date bucket keys in #9491 and #7673.\n\nCloses #9491\nCloses #7673"
}
],
"files": [
{
"diff": "@@ -156,21 +156,21 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long offset = preTz.getOffset(utcMillis);\n- long time = utcMillis + offset;\n- return field.roundFloor(time) - offset;\n+ long local = preTz.convertUTCToLocal(utcMillis);\n+ return preTz.convertLocalToUTC(field.roundFloor(local), true, utcMillis);\n }\n \n @Override\n public long valueForKey(long time) {\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n- return time;\n+ return postTz.convertUTCToLocal(time);\n }\n \n @Override\n- public long nextRoundingValue(long value) {\n- return durationField.add(value, 1);\n+ public long nextRoundingValue(long time) {\n+ long currentWithoutPostZone = postTz.convertLocalToUTC(time, true);\n+ long nextWithoutPostZone = durationField.add(currentWithoutPostZone, 1);\n+ return postTz.convertUTCToLocal(nextWithoutPostZone);\n }\n \n @Override\n@@ -268,21 +268,22 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long time = utcMillis + preTz.getOffset(utcMillis);\n- return field.roundFloor(time);\n+ long local = preTz.convertUTCToLocal(utcMillis);\n+ return field.roundFloor(local);\n }\n \n @Override\n public long valueForKey(long time) {\n // after rounding, since its day level (and above), its actually UTC!\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n- return time;\n+ return postTz.convertUTCToLocal(time);\n }\n \n @Override\n- public long nextRoundingValue(long value) {\n- return durationField.add(value, 1);\n+ public long nextRoundingValue(long currentWithPostZone) {\n+ long currentWithoutPostZone = postTz.convertLocalToUTC(currentWithPostZone, true);\n+ long nextWithoutPostZone = durationField.add(currentWithoutPostZone, 1);\n+ return postTz.convertUTCToLocal(nextWithoutPostZone);\n }\n \n @Override\n@@ -375,17 +376,17 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long time = utcMillis + preTz.getOffset(utcMillis);\n+ long time = preTz.convertUTCToLocal(utcMillis);\n return Rounding.Interval.roundKey(time, interval);\n }\n \n @Override\n public long valueForKey(long key) {\n long time = Rounding.Interval.roundValue(key, interval);\n // now, time is still in local, move it to UTC\n- time = time - preTz.getOffset(time);\n+ time = preTz.convertLocalToUTC(time, true);\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n+ time = postTz.convertUTCToLocal(time);\n return time;\n }\n \n@@ -435,7 +436,7 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long time = utcMillis + preTz.getOffset(utcMillis);\n+ long time = preTz.convertUTCToLocal(utcMillis);\n return Rounding.Interval.roundKey(time, interval);\n }\n \n@@ -444,7 +445,7 @@ public long valueForKey(long key) {\n long time = Rounding.Interval.roundValue(key, interval);\n // after rounding, since its day level (and above), its actually UTC!\n // now apply post Tz\n- time = time + postTz.getOffset(time);\n+ time = postTz.convertUTCToLocal(time);\n return time;\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java",
"status": "modified"
},
{
"diff": "@@ -44,11 +44,13 @@\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.List;\n+import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.*;\n import static org.hamcrest.core.IsNull.notNullValue;\n@@ -1204,7 +1206,7 @@ public void singleValueField_WithExtendedBounds() throws Exception {\n \n @Test\n public void singleValue_WithMultipleDateFormatsFromMapping() throws Exception {\n- \n+\n String mappingJson = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\").startObject(\"date\").field(\"type\", \"date\").field(\"format\", \"dateOptionalTime||dd-MM-yyyy\").endObject().endObject().endObject().endObject().string();\n prepareCreate(\"idx2\").addMapping(\"type\", mappingJson).execute().actionGet();\n IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n@@ -1263,6 +1265,44 @@ public void testIssue6965() {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n+ public void testDSTBoundaryIssue9491() throws InterruptedException, ExecutionException {\n+ assertAcked(client().admin().indices().prepareCreate(\"test9491\").addMapping(\"type\", \"d\", \"type=date\").get());\n+ indexRandom(true,\n+ client().prepareIndex(\"test9491\", \"type\").setSource(\"d\", \"2014-10-08T13:00:00Z\"),\n+ client().prepareIndex(\"test9491\", \"type\").setSource(\"d\", \"2014-11-08T13:00:00Z\"));\n+ ensureSearchable(\"test9491\");\n+ SearchResponse response = client().prepareSearch(\"test9491\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"d\").interval(DateHistogram.Interval.YEAR).preZone(\"Asia/Jerusalem\")\n+ .preZoneAdjustLargeInterval(true))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo.getBuckets().size(), equalTo(1));\n+ assertThat(histo.getBuckets().get(0).getKey(), equalTo(\"2013-12-31T22:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(0).getDocCount(), equalTo(2L));\n+ }\n+\n+ public void testIssue7673() throws InterruptedException, ExecutionException {\n+ assertAcked(client().admin().indices().prepareCreate(\"test7673\").addMapping(\"type\", \"d\", \"type=date\").get());\n+ indexRandom(true,\n+ client().prepareIndex(\"test7673\", \"type\").setSource(\"d\", \"2013-07-01T00:00:00Z\"),\n+ client().prepareIndex(\"test7673\", \"type\").setSource(\"d\", \"2013-09-01T00:00:00Z\"));\n+ ensureSearchable(\"test7673\");\n+ SearchResponse response = client().prepareSearch(\"test7673\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"d\").interval(DateHistogram.Interval.MONTH).postZone(\"-02:00\")\n+ .minDocCount(0))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo.getBuckets().size(), equalTo(3));\n+ assertThat(histo.getBuckets().get(0).getKey(), equalTo(\"2013-06-30T22:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(0).getDocCount(), equalTo(1L));\n+ assertThat(histo.getBuckets().get(1).getKey(), equalTo(\"2013-07-31T22:00:00.000Z\"));\n+ 
assertThat(histo.getBuckets().get(1).getDocCount(), equalTo(0L));\n+ assertThat(histo.getBuckets().get(2).getKey(), equalTo(\"2013-08-31T22:00:00.000Z\"));\n+ assertThat(histo.getBuckets().get(2).getDocCount(), equalTo(1L));\n+ }\n+\n /**\n * see issue #9634, negative interval in date_histogram should raise exception\n */",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
}
]
} |
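The `nextRoundingValue` change in the diff above addresses the empty-bucket drift described in the comments on #7673: adding a month duration directly to a post-zone-shifted key can land short of the next real bucket key. The sketch below contrasts the old and patched behaviour with plain Joda-Time; the class and method names and the use of a fixed -02:00 offset are illustrative assumptions, not code from the PR.

```java
import org.joda.time.DateTimeZone;
import org.joda.time.DurationField;
import org.joda.time.chrono.ISOChronology;

public class NextRoundingValueSketch {

    static final DurationField MONTHS = ISOChronology.getInstanceUTC().months();

    // Old behaviour: add one month directly to the post-zone-shifted bucket key.
    static long nextOld(long keyWithPostZone) {
        return MONTHS.add(keyWithPostZone, 1);
    }

    // Patched behaviour: strip the post zone, add the month on the UTC boundary,
    // then re-apply the post zone (as in the updated nextRoundingValue()).
    static long nextNew(long keyWithPostZone, DateTimeZone postTz) {
        long withoutPostZone = postTz.convertLocalToUTC(keyWithPostZone, true);
        long next = MONTHS.add(withoutPostZone, 1);
        return postTz.convertUTCToLocal(next);
    }

    public static void main(String[] args) {
        DateTimeZone postTz = DateTimeZone.forOffsetHours(-2);
        long julyKey = 1372629600000L;                // 2013-06-30T22:00:00Z from issue #7673
        System.out.println(nextOld(julyKey));         // 1375221600000, the spurious 2013-07-30T22:00:00Z bucket
        System.out.println(nextNew(julyKey, postTz)); // 1375308000000, the expected 2013-07-31T22:00:00Z
    }
}
```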
{
"body": "Just been trying out a 1.5 snapshot (built from commit 75b6d8e from the 1.x branch) to request inner hits.\n\nIf your mapping has a 'nested' type and you specify your child document as an object rather than an array, you get a ClassCastException. I.e. your doc looks like this:\n\n``` json\n{\n \"nested_field\": {\n \"make\": \"ford\"\n }\n}\n```\n\nWhen searching you will get an exception like this:\n\n> java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to java.util.List\n> at org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:294)\n> at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:178)\n> at org.elasticsearch.search.fetch.innerhits.InnerHitsFetchSubPhase.hitExecute(InnerHitsFetchSubPhase.java:96)\n> at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:190)\n> at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:501)\n\nNote that it works ok if you index your document like this (note the extra square brackets to indicate an array)\n\n``` json\n{\n \"nested_field\": [\n {\n \"make\": \"ford\"\n }\n ]\n}\n```\n\nI think that either the nested hits stuff should support this single child using the non-array format, or elasticsearch should throw an error when you try to index a document using that syntax.\n\n(note this issue came out of a [comment](https://github.com/elasticsearch/elasticsearch/pull/8153#issuecomment-74576628) on pull request #8153)\n",
"comments": [],
"number": 9723,
"title": "Single nested child can cause ClassCastException when requesting inner_hits"
} | {
"body": "PR for #9723\n",
"number": 9743,
"review_comments": [],
"title": "Don't fail if an object is specified as a nested value instead of an array."
} | {
"commits": [
{
"message": "inner hits: Don't fail if an object is specified as a nested value instead of an array.\n\nCloses #9723"
}
],
"files": [
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.fetch;\n \n+import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.ReaderUtil;\n@@ -27,6 +28,7 @@\n import org.apache.lucene.util.BitDocIdSet;\n import org.apache.lucene.util.BitSet;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.Tuple;\n@@ -293,7 +295,16 @@ private InternalSearchHit createNestedSearchHit(SearchContext context, int neste\n List<Map<String, Object>> nestedParsedSource;\n SearchHit.NestedIdentity nested = nestedIdentity;\n do {\n- nestedParsedSource = (List<Map<String, Object>>) XContentMapValues.extractValue(nested.getField().string(), sourceAsMap);\n+ Object extractedValue = XContentMapValues.extractValue(nested.getField().string(), sourceAsMap);\n+ if (extractedValue instanceof List) {\n+ // nested field has an array value in the _source\n+ nestedParsedSource = (List<Map<String, Object>>) extractedValue;\n+ } else if (extractedValue instanceof Map) {\n+ // nested field has an object value in the _source. This just means the nested field has just one inner object, which is valid, but uncommon.\n+ nestedParsedSource = ImmutableList.of((Map < String, Object >) extractedValue);\n+ } else {\n+ throw new ElasticsearchIllegalStateException(\"extracted source isn't an object or an array\");\n+ }\n sourceAsMap = nestedParsedSource.get(nested.getOffset());\n nested = nested.getChild();\n } while (nested != null);",
"filename": "src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -645,4 +645,29 @@ public void testNestedMultipleLayers() throws Exception {\n assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n }\n \n+ @Test\n+ // https://github.com/elasticsearch/elasticsearch/issues/9723\n+ public void testNestedDefinedAsObject() throws Exception {\n+ assertAcked(prepareCreate(\"articles\").addMapping(\"article\", \"comments\", \"type=nested\", \"title\", \"type=string\"));\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"title\", \"quick brown fox\")\n+ .startObject(\"comments\").field(\"message\", \"fox eat quick\").endObject()\n+ .endObject()));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"articles\")\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\")).innerHit(new QueryInnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
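The fix above accepts both representations of a nested field in `_source`: an array of objects (a `List`) and a single inner object (a `Map`). Below is a dependency-free sketch of that normalization; the class and method names are illustrative, and plain JDK types stand in for the Guava `ImmutableList` and the Elasticsearch exception used in the actual change.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class NestedSourceSketch {

    // Normalize the value extracted from _source for a nested field path:
    // an array is returned as-is, a single inner object is wrapped in a
    // one-element list, anything else is rejected, mirroring the branch added
    // to FetchPhase.createNestedSearchHit() in the PR above.
    @SuppressWarnings("unchecked")
    static List<Map<String, Object>> asNestedParsedSource(Object extractedValue) {
        if (extractedValue instanceof List) {
            return (List<Map<String, Object>>) extractedValue;
        } else if (extractedValue instanceof Map) {
            return Collections.singletonList((Map<String, Object>) extractedValue);
        }
        throw new IllegalStateException("extracted source isn't an object or an array");
    }
}
```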
{
"body": "We have 6 servers and 14 shards in cluster, the index size 26GB, we have 1 replica so total size is 52GB, and ES v1.4.0, java version \"1.7.0_65\"\nWe use servers with RAM of 14GB (m3.xlarge), and heap is set to 7GB\n\nAfter update from 0.90 we started facing next issue:\nrandom cluster servers around once a day/two hits the heap size limit (java.lang.OutOfMemoryError: Java heap space) in log, and cluster falls - becomes red or yellow\n\nWe tried to add more servers to cluster - even 8, but than it's a matter of time when we'll hit the problem, so looks like there is no matter how many servers are in cluster - it still hits the limit after some time.\nBefore we started facing the problem we were running smoothly with 3 servers\nAlso we tried to set indices.fielddata.cache.size: 40% but it didnt helped\nAlso, there are possible workarounds to flush heap:\n1) reboot some server - than heap becomes under 70% and for some time cluster is ok\nor\n2) decrease number of replicas to 0, and than back to 1\n\nUpgrade to 1.4.1 hasn't solved an issue. \n\nFinally <b>found the query causing the cluster crashes</b>. After I commented code doing this - the claster is ok for few days. Before it was crashing once a day in average.\n\nthe query looks like:\n\n```\n {\n \"sort\": [\n {\n \"user_last_contacted.ct\": {\n \"nested_filter\": {\n \"term\": {\n \"user_last_contacted.owner_id\": \"542b2b7fb0bc2244056fd90f\"\n }\n },\n \"order\": \"desc\",\n \"missing\": \"_last\"\n }\n }\n ],\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"term\": {\n \"company_id\": \"52c0e0b7e0534664db9dfb9a\"\n }\n },\n \"query\": {\n \"match_all\": {}\n }\n }\n },\n \"explain\": false,\n \"from\": 0,\n \"size\": 100\n }\n```\n\nthe mapping looks like:\n\n```\n \"contact\": {\n \"_all\": {\n \"type\": \"string\",\n \"enabled\": true,\n \"analyzer\": \"default_full\",\n \"index\": \"analyzed\"\n },\n \"_routing\": {\n \"path\": \"company_id\",\n \"required\": true\n },\n \"_source\": {\n \"enabled\": false\n },\n \"include_in_all\": true,\n \"dynamic\": false,\n \"properties\": {\n \"user_last_contacted\": {\n \"include_in_all\": false,\n \"dynamic\": false,\n \"type\": \"nested\",\n \"properties\": {\n \"ct\": {\n \"include_in_all\": false,\n \"index\": \"not_analyzed\",\n \"type\": \"date\"\n },\n \"owner_id\": {\n \"type\": \"string\"\n }\n }\n }...\n```\n\nuser_last_contacted is an array field with nested objects. The size of the array can be 100+ items.\n",
"comments": [
{
"body": "@martijnvg could you take a look at this one please?\n",
"created_at": "2014-12-09T12:46:17Z"
},
{
"body": "@serj-p This looks related to #8394, wrong cache behaviour (not the filter cache, but for in the fixed bitset cache for nested object fields) is causing higher heap usage that is causing OOM. Can you try to upgrade to version 1.4.1 this should resolve the OOM.\n",
"created_at": "2014-12-16T17:21:18Z"
},
{
"body": "As I mentioned, upgrade to 1.4.1 hasn't solved the issue. \n",
"created_at": "2014-12-16T17:32:41Z"
},
{
"body": "Sorry, I read your description too quickly... I see why an OOM can occur with nested sorting, the fix for the fixed bitset cache that was added in 1.4.1 (#8440) missed to do the change for nested sorting.\n\nAre you using using the `nested` query/filter or `nested` aggregator in another search request by any chance? If so can you confirm that this is working without eventually going OOM?\n",
"created_at": "2014-12-16T20:51:13Z"
},
{
"body": "I'm not using `nested` aggregator, but I'm using `nested` query/filter and can confirm that cluster is running smoothly for two weeks after I disabled `nested` sorting.\n",
"created_at": "2014-12-16T21:10:06Z"
},
{
"body": "@serj-p Thanks for confirming this. I'll fix this issue with nested sorting.\n",
"created_at": "2014-12-16T21:47:23Z"
}
],
"number": 8810,
"title": "java.lang.OutOfMemoryError: Java heap space after upgrade from 0.90"
} | {
"body": "Random access based bitsets are not required for the child level nested level filters.\n\nCloses #8810\n",
"number": 9740,
"review_comments": [
{
"body": "Should we rather initialize lastSeenRootDoc=0 and lastEmittedValue=missingValue to avoid the `if (rootDoc == 0)` check below?\n",
"created_at": "2015-02-19T16:07:36Z"
},
{
"body": "yes, that makes sense! I'll change that.\n",
"created_at": "2015-02-19T20:27:45Z"
}
],
"title": "Don't use the fixed bitset filter cache for child nested level filters, but the regular filter cache instead"
} | {
"commits": [
{
"message": "Don't use the fixed bitset filter cache for child nested level filters, but the regular filter cache instead.\n\nRandom access based bitsets are not required for the child level nested level filters.\n\nCloses #8810"
}
],
"files": [
{
"diff": "@@ -129,9 +129,11 @@ public abstract class XFieldComparatorSource extends FieldComparatorSource {\n * parent + 1, or 0 if there is no previous parent, and R (excluded).\n */\n public static class Nested {\n- private final BitDocIdSetFilter rootFilter, innerFilter;\n \n- public Nested(BitDocIdSetFilter rootFilter, BitDocIdSetFilter innerFilter) {\n+ private final BitDocIdSetFilter rootFilter;\n+ private final Filter innerFilter;\n+\n+ public Nested(BitDocIdSetFilter rootFilter, Filter innerFilter) {\n this.rootFilter = rootFilter;\n this.innerFilter = innerFilter;\n }\n@@ -144,10 +146,10 @@ public BitDocIdSet rootDocs(LeafReaderContext ctx) throws IOException {\n }\n \n /**\n- * Get a {@link BitDocIdSet} that matches the inner documents.\n+ * Get a {@link DocIdSet} that matches the inner documents.\n */\n- public BitDocIdSet innerDocs(LeafReaderContext ctx) throws IOException {\n- return innerFilter.getDocIdSet(ctx);\n+ public DocIdSet innerDocs(LeafReaderContext ctx) throws IOException {\n+ return innerFilter.getDocIdSet(ctx, null);\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.index.BinaryDocValues;\n import org.apache.lucene.index.RandomAccessOrds;\n import org.apache.lucene.index.SortedDocValues;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.SortField;\n@@ -81,7 +82,7 @@ protected SortedDocValues getSortedDocValues(LeafReaderContext context, String f\n selectedValues = sortMode.select(values);\n } else {\n final BitSet rootDocs = nested.rootDocs(context).bits();\n- final BitSet innerDocs = nested.innerDocs(context).bits();\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, rootDocs, innerDocs);\n }\n if (sortMissingFirst(missingValue) || sortMissingLast(missingValue)) {\n@@ -132,7 +133,7 @@ protected BinaryDocValues getBinaryDocValues(LeafReaderContext context, String f\n selectedValues = sortMode.select(values, nonNullMissingBytes);\n } else {\n final BitSet rootDocs = nested.rootDocs(context).bits();\n- final BitSet innerDocs = nested.innerDocs(context).bits();\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, nonNullMissingBytes, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return selectedValues;",
"filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.SortField;\n@@ -78,7 +79,7 @@ protected NumericDocValues getNumericDocValues(LeafReaderContext context, String\n selectedValues = sortMode.select(values, dMissingValue);\n } else {\n final BitSet rootDocs = nested.rootDocs(context).bits();\n- final BitSet innerDocs = nested.innerDocs(context).bits();\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return selectedValues.getRawDoubleValues();",
"filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.util.BitSet;\n@@ -70,7 +71,7 @@ protected NumericDocValues getNumericDocValues(LeafReaderContext context, String\n selectedValues = sortMode.select(values, dMissingValue);\n } else {\n final BitSet rootDocs = nested.rootDocs(context).bits();\n- final BitSet innerDocs = nested.innerDocs(context).bits();\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return selectedValues.getRawFloatValues();",
"filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n import org.apache.lucene.index.SortedNumericDocValues;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.util.BitSet;\n@@ -69,7 +70,7 @@ protected NumericDocValues getNumericDocValues(LeafReaderContext context, String\n selectedValues = sortMode.select(values, dMissingValue);\n } else {\n final BitSet rootDocs = nested.rootDocs(context).bits();\n- final BitSet innerDocs = nested.innerDocs(context).bits();\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return selectedValues;",
"filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,8 @@\n package org.elasticsearch.search;\n \n import org.apache.lucene.index.*;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.BitSet;\n import org.apache.lucene.util.BytesRef;\n@@ -31,6 +33,7 @@\n import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n \n+import java.io.IOException;\n import java.util.Locale;\n \n /**\n@@ -438,39 +441,61 @@ public long get(int docID) {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- // TODO: technically innerDocs need not be BitSet: only needs advance() ?\n- public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final BitSet rootDocs, final BitSet innerDocs, int maxDoc) {\n- if (rootDocs == null || innerDocs == null) {\n+ public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final BitSet rootDocs, final DocIdSet innerDocSet, int maxDoc) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n return select(DocValues.emptySortedNumeric(maxDoc), missingValue);\n }\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n+ return select(DocValues.emptySortedNumeric(maxDoc), missingValue);\n+ }\n+\n return new NumericDocValues() {\n \n+ int lastSeenRootDoc = 0;\n+ long lastEmittedValue = missingValue;\n+\n @Override\n public long get(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n- if (rootDoc == 0) {\n- return missingValue;\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n+ // If via compareBottom this method has previously invoked for the same rootDoc then we need to use the\n+ // last seen value, because innerDocs can't re-iterate over nested child docs it has already emitted,\n+ // because DocIdSetIterator can only advance forwards.\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedValue;\n }\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n-\n- long accumulated = startLong();\n- int numValues = 0;\n+ long accumulated = startLong();\n+ int numValues = 0;\n \n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- values.setDocument(doc);\n- final int count = values.count();\n- for (int i = 0; i < count; ++i) {\n- final long value = values.valueAt(i);\n- accumulated = apply(accumulated, value);\n+ for (int doc = firstNestedDoc; doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ values.setDocument(doc);\n+ final int count = values.count();\n+ for (int i = 0; i < count; ++i) {\n+ final long value = values.valueAt(i);\n+ accumulated = apply(accumulated, value);\n+ }\n+ numValues += count;\n }\n- numValues += count;\n+ lastSeenRootDoc = rootDoc;\n+ if (numValues == 0) {\n+ lastEmittedValue = missingValue;\n+ } else {\n+ lastEmittedValue = reduce(accumulated, numValues);\n+ }\n+ return lastEmittedValue;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n }\n-\n- return numValues == 0\n- ? 
missingValue\n- : reduce(accumulated, numValues);\n }\n };\n }\n@@ -531,39 +556,60 @@ public double get(int docID) {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- // TODO: technically innerDocs need not be BitSet: only needs advance() ?\n- public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final BitSet rootDocs, final BitSet innerDocs, int maxDoc) {\n- if (rootDocs == null || innerDocs == null) {\n+ public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final BitSet rootDocs, final DocIdSet innerDocSet, int maxDoc) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n return select(FieldData.emptySortedNumericDoubles(maxDoc), missingValue);\n }\n+\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n+ return select(FieldData.emptySortedNumericDoubles(maxDoc), missingValue);\n+ }\n+\n return new NumericDoubleValues() {\n \n+ int lastSeenRootDoc = 0;\n+ double lastEmittedValue = missingValue;\n+\n @Override\n public double get(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n- if (rootDoc == 0) {\n- return missingValue;\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedValue;\n }\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n+ double accumulated = startDouble();\n+ int numValues = 0;\n \n- double accumulated = startDouble();\n- int numValues = 0;\n+ for (int doc = firstNestedDoc; doc > prevRootDoc && doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ values.setDocument(doc);\n+ final int count = values.count();\n+ for (int i = 0; i < count; ++i) {\n+ final double value = values.valueAt(i);\n+ accumulated = apply(accumulated, value);\n+ }\n+ numValues += count;\n+ }\n \n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- values.setDocument(doc);\n- final int count = values.count();\n- for (int i = 0; i < count; ++i) {\n- final double value = values.valueAt(i);\n- accumulated = apply(accumulated, value);\n+ lastSeenRootDoc = rootDoc;\n+ if (numValues == 0) {\n+ lastEmittedValue = missingValue;\n+ } else {\n+ lastEmittedValue = reduce(accumulated, numValues);\n }\n- numValues += count;\n+ return lastEmittedValue;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n }\n-\n- return numValues == 0\n- ? 
missingValue\n- : reduce(accumulated, numValues);\n }\n };\n }\n@@ -615,11 +661,16 @@ public BytesRef get(int docID) {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- // TODO: technically innerDocs need not be BitSet: only needs advance() ?\n- public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final BitSet rootDocs, final BitSet innerDocs, int maxDoc) {\n- if (rootDocs == null || innerDocs == null) {\n+ public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final BitSet rootDocs, final DocIdSet innerDocSet, int maxDoc) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n return select(FieldData.emptySortedBinary(maxDoc), missingValue);\n }\n+\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n+ return select(FieldData.emptySortedBinary(maxDoc), missingValue);\n+ }\n+\n final BinaryDocValues selectedValues = select(values, new BytesRef());\n final Bits docsWithValue;\n if (FieldData.unwrapSingleton(values) != null) {\n@@ -631,35 +682,54 @@ public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef\n \n final BytesRefBuilder spare = new BytesRefBuilder();\n \n+ int lastSeenRootDoc = 0;\n+ BytesRef lastEmittedValue = missingValue;\n+\n @Override\n public BytesRef get(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n- if (rootDoc == 0) {\n- return missingValue;\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedValue;\n }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n-\n- BytesRefBuilder accumulated = null;\n-\n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- values.setDocument(doc);\n- final BytesRef innerValue = selectedValues.get(doc);\n- if (innerValue.length > 0 || docsWithValue == null || docsWithValue.get(doc)) {\n- if (accumulated == null) {\n- spare.copyBytes(innerValue);\n- accumulated = spare;\n- } else {\n- final BytesRef applied = apply(accumulated.get(), innerValue);\n- if (applied == innerValue) {\n- accumulated.copyBytes(innerValue);\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n+\n+ BytesRefBuilder accumulated = null;\n+\n+ for (int doc = firstNestedDoc; doc > prevRootDoc && doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ values.setDocument(doc);\n+ final BytesRef innerValue = selectedValues.get(doc);\n+ if (innerValue.length > 0 || docsWithValue == null || docsWithValue.get(doc)) {\n+ if (accumulated == null) {\n+ spare.copyBytes(innerValue);\n+ accumulated = spare;\n+ } else {\n+ final BytesRef applied = apply(accumulated.get(), innerValue);\n+ if (applied == innerValue) {\n+ accumulated.copyBytes(innerValue);\n+ }\n }\n }\n }\n- }\n \n- return accumulated == null ? 
missingValue : accumulated.get();\n+ lastSeenRootDoc = rootDoc;\n+ if (accumulated == null) {\n+ lastEmittedValue = missingValue;\n+ } else {\n+ lastEmittedValue = accumulated.get();\n+ }\n+ return lastEmittedValue;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n }\n };\n }\n@@ -709,14 +779,22 @@ public int getValueCount() {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- // TODO: technically innerDocs need not be BitSet: only needs advance() ?\n- public SortedDocValues select(final RandomAccessOrds values, final BitSet rootDocs, final BitSet innerDocs) {\n- if (rootDocs == null || innerDocs == null) {\n- return select((RandomAccessOrds) DocValues.emptySortedSet());\n+ public SortedDocValues select(final RandomAccessOrds values, final BitSet rootDocs, final DocIdSet innerDocSet) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n+ return select(DocValues.emptySortedSet());\n+ }\n+\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n+ return select(DocValues.emptySortedSet());\n }\n+\n final SortedDocValues selectedValues = select(values);\n return new SortedDocValues() {\n \n+ int lastSeenRootDoc = 0;\n+ int lastEmittedOrd = -1;\n+\n @Override\n public BytesRef lookupOrd(int ord) {\n return selectedValues.lookupOrd(ord);\n@@ -730,26 +808,37 @@ public int getValueCount() {\n @Override\n public int getOrd(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n- if (rootDoc == 0) {\n- return -1;\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedOrd;\n }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n- int ord = -1;\n-\n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- final int innerOrd = selectedValues.getOrd(doc);\n- if (innerOrd != -1) {\n- if (ord == -1) {\n- ord = innerOrd;\n- } else {\n- ord = applyOrd(ord, innerOrd);\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n+ int ord = -1;\n+\n+ for (int doc = firstNestedDoc; doc > prevRootDoc && doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ final int innerOrd = selectedValues.getOrd(doc);\n+ if (innerOrd != -1) {\n+ if (ord == -1) {\n+ ord = innerOrd;\n+ } else {\n+ ord = applyOrd(ord, innerOrd);\n+ }\n }\n }\n- }\n \n- return ord;\n+ lastSeenRootDoc = rootDoc;\n+ return lastEmittedOrd = ord;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/search/MultiValueMode.java",
"status": "modified"
},
{
"diff": "@@ -21,8 +21,7 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n-import org.apache.lucene.search.FieldComparator;\n-import org.apache.lucene.search.SortField;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.util.BitSet;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n@@ -157,12 +156,13 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n \n final Nested nested;\n if (nestedHelper != null && nestedHelper.getPath() != null) {\n+ \n BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE);\n- BitDocIdSetFilter innerDocumentsFilter;\n+ Filter innerDocumentsFilter;\n if (nestedHelper.filterFound()) {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getInnerFilter());\n+ innerDocumentsFilter = context.filterCache().cache(nestedHelper.getInnerFilter(), null, context.queryParserService().autoFilterCachePolicy());\n } else {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getNestedObjectMapper().nestedTypeFilter());\n+ innerDocumentsFilter = context.filterCache().cache(nestedHelper.getNestedObjectMapper().nestedTypeFilter(), null, context.queryParserService().autoFilterCachePolicy());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {\n@@ -188,7 +188,7 @@ protected NumericDocValues getNumericDocValues(LeafReaderContext context, String\n selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE);\n } else {\n final BitSet rootDocs = nested.rootDocs(context).bits();\n- final BitSet innerDocs = nested.innerDocs(context).bits();\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return selectedValues.getRawDoubleValues();",
"filename": "src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.index.BinaryDocValues;\n import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n@@ -131,11 +132,11 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n final Nested nested;\n if (nestedHelper != null && nestedHelper.getPath() != null) {\n BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE);\n- BitDocIdSetFilter innerDocumentsFilter;\n+ Filter innerDocumentsFilter;\n if (nestedHelper.filterFound()) {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getInnerFilter());\n+ innerDocumentsFilter = context.filterCache().cache(nestedHelper.getInnerFilter(), null, context.queryParserService().autoFilterCachePolicy());\n } else {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getNestedObjectMapper().nestedTypeFilter());\n+ innerDocumentsFilter = context.filterCache().cache(nestedHelper.getNestedObjectMapper().nestedTypeFilter(), null, context.queryParserService().autoFilterCachePolicy());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {",
"filename": "src/main/java/org/elasticsearch/search/sort/ScriptSortParser.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Lists;\n+import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n@@ -252,11 +253,11 @@ private void addSortField(SearchContext context, List<SortField> sortFields, Str\n final Nested nested;\n if (nestedHelper != null && nestedHelper.getPath() != null) {\n BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE);\n- BitDocIdSetFilter innerDocumentsFilter;\n+ Filter innerDocumentsFilter;\n if (nestedHelper.filterFound()) {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getInnerFilter());\n+ innerDocumentsFilter = context.filterCache().cache(nestedHelper.getInnerFilter(), null, context.queryParserService().autoFilterCachePolicy());\n } else {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getNestedObjectMapper().nestedTypeFilter());\n+ innerDocumentsFilter = context.filterCache().cache(nestedHelper.getNestedObjectMapper().nestedTypeFilter(), null, context.queryParserService().autoFilterCachePolicy());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {",
"filename": "src/main/java/org/elasticsearch/search/sort/SortParseElement.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.carrotsearch.randomizedtesting.generators.RandomStrings;\n import org.apache.lucene.index.*;\n+import org.apache.lucene.util.BitDocIdSet;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.FixedBitSet;\n import org.elasticsearch.index.fielddata.FieldData;\n@@ -29,6 +30,7 @@\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n import org.elasticsearch.test.ElasticsearchTestCase;\n \n+import java.io.IOException;\n import java.util.Arrays;\n \n public class MultiValueModeTests extends ElasticsearchTestCase {\n@@ -55,7 +57,7 @@ private static FixedBitSet randomInnerDocs(FixedBitSet rootDocs) {\n return innerDocs;\n }\n \n- public void testSingleValuedLongs() {\n+ public void testSingleValuedLongs() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final long[] array = new long[numDocs];\n final FixedBitSet docsWithValue = randomBoolean() ? null : new FixedBitSet(numDocs);\n@@ -82,7 +84,7 @@ public long get(int docID) {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedLongs() {\n+ public void testMultiValuedLongs() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final long[][] array = new long[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -142,10 +144,10 @@ private void verify(SortedNumericDocValues values, int maxDoc) {\n }\n }\n \n- private void verify(SortedNumericDocValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(SortedNumericDocValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException {\n for (long missingValue : new long[] { 0, randomLong() }) {\n for (MultiValueMode mode : MultiValueMode.values()) {\n- final NumericDocValues selected = mode.select(values, missingValue, rootDocs, innerDocs, maxDoc);\n+ final NumericDocValues selected = mode.select(values, missingValue, rootDocs, new BitDocIdSet(innerDocs), maxDoc);\n int prevRoot = -1;\n for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? rootDocs.nextSetBit(root + 1) : -1) {\n final long actual = selected.get(root);\n@@ -172,7 +174,7 @@ private void verify(SortedNumericDocValues values, int maxDoc, FixedBitSet rootD\n }\n }\n \n- public void testSingleValuedDoubles() {\n+ public void testSingleValuedDoubles() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final double[] array = new double[numDocs];\n final FixedBitSet docsWithValue = randomBoolean() ? 
null : new FixedBitSet(numDocs);\n@@ -199,7 +201,7 @@ public double get(int docID) {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedDoubles() {\n+ public void testMultiValuedDoubles() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final double[][] array = new double[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -259,10 +261,10 @@ private void verify(SortedNumericDoubleValues values, int maxDoc) {\n }\n }\n \n- private void verify(SortedNumericDoubleValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(SortedNumericDoubleValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException {\n for (long missingValue : new long[] { 0, randomLong() }) {\n for (MultiValueMode mode : MultiValueMode.values()) {\n- final NumericDoubleValues selected = mode.select(values, missingValue, rootDocs, innerDocs, maxDoc);\n+ final NumericDoubleValues selected = mode.select(values, missingValue, rootDocs, new BitDocIdSet(innerDocs), maxDoc);\n int prevRoot = -1;\n for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? rootDocs.nextSetBit(root + 1) : -1) {\n final double actual = selected.get(root);\n@@ -289,7 +291,7 @@ private void verify(SortedNumericDoubleValues values, int maxDoc, FixedBitSet ro\n }\n }\n \n- public void testSingleValuedStrings() {\n+ public void testSingleValuedStrings() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final BytesRef[] array = new BytesRef[numDocs];\n final FixedBitSet docsWithValue = randomBoolean() ? null : new FixedBitSet(numDocs);\n@@ -319,7 +321,7 @@ public BytesRef get(int docID) {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedStrings() {\n+ public void testMultiValuedStrings() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final BytesRef[][] array = new BytesRef[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -384,10 +386,10 @@ private void verify(SortedBinaryDocValues values, int maxDoc) {\n }\n }\n \n- private void verify(SortedBinaryDocValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(SortedBinaryDocValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException {\n for (BytesRef missingValue : new BytesRef[] { new BytesRef(), new BytesRef(RandomStrings.randomAsciiOfLength(getRandom(), 8)) }) {\n for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX}) {\n- final BinaryDocValues selected = mode.select(values, missingValue, rootDocs, innerDocs, maxDoc);\n+ final BinaryDocValues selected = mode.select(values, missingValue, rootDocs, new BitDocIdSet(innerDocs), maxDoc);\n int prevRoot = -1;\n for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? 
rootDocs.nextSetBit(root + 1) : -1) {\n final BytesRef actual = selected.get(root);\n@@ -416,7 +418,7 @@ private void verify(SortedBinaryDocValues values, int maxDoc, FixedBitSet rootDo\n }\n \n \n- public void testSingleValuedOrds() {\n+ public void testSingleValuedOrds() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final int[] array = new int[numDocs];\n for (int i = 0; i < array.length; ++i) {\n@@ -449,7 +451,7 @@ public int getValueCount() {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedOrds() {\n+ public void testMultiValuedOrds() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final long[][] array = new long[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -518,9 +520,9 @@ private void verify(RandomAccessOrds values, int maxDoc) {\n }\n }\n \n- private void verify(RandomAccessOrds values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(RandomAccessOrds values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException {\n for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX}) {\n- final SortedDocValues selected = mode.select(values, rootDocs, innerDocs);\n+ final SortedDocValues selected = mode.select(values, rootDocs, new BitDocIdSet(innerDocs));\n int prevRoot = -1;\n for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? rootDocs.nextSetBit(root + 1) : -1) {\n final int actual = selected.getOrd(root);",
"filename": "src/test/java/org/elasticsearch/search/MultiValueModeTests.java",
"status": "modified"
}
]
} |
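The diff above replaces BitSet-based inner-doc iteration with a forward-only DocIdSetIterator and caches the value emitted for the last root document. A minimal sketch of that pattern, assuming `rootDocs` is a Lucene BitSet marking root documents, `innerDocs` is a DocIdSetIterator over nested child documents, and `valueOf`/`combine` are simplified placeholders for reading and reducing doc values (not part of the actual patch):

```java
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BitSet;

/**
 * Sketch of the forward-only "select one value per root doc" pattern.
 * valueOf(doc) and combine(a, b) stand in for reading a doc-values field
 * and reducing child values (e.g. min/max); they are placeholders.
 */
abstract class RootDocValueSelector {

    private final BitSet rootDocs;            // marks root (parent) documents
    private final DocIdSetIterator innerDocs; // nested child documents, forward-only
    private final long missingValue;

    private int lastSeenRootDoc = 0;
    private long lastEmittedValue;

    RootDocValueSelector(BitSet rootDocs, DocIdSetIterator innerDocs, long missingValue) {
        this.rootDocs = rootDocs;
        this.innerDocs = innerDocs;
        this.missingValue = missingValue;
        this.lastEmittedValue = missingValue;
    }

    /** Placeholder for reading the child document's value. */
    protected abstract long valueOf(int doc) throws IOException;

    /** Placeholder for combining two child values. */
    protected abstract long combine(long a, long b);

    long get(int rootDoc) throws IOException {
        assert rootDocs.get(rootDoc) : "can only evaluate root documents";
        assert rootDoc >= lastSeenRootDoc : "iterator can only move forward";
        // The comparator may ask for the same root doc twice; the children
        // cannot be re-iterated, so return the cached value instead.
        if (rootDoc == lastSeenRootDoc) {
            return lastEmittedValue;
        }
        final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);
        // Children of rootDoc lie strictly between prevRootDoc and rootDoc.
        int doc = innerDocs.docID() > prevRootDoc ? innerDocs.docID() : innerDocs.advance(prevRootDoc + 1);
        long result = missingValue;
        boolean seen = false;
        for (; doc < rootDoc; doc = innerDocs.nextDoc()) {
            long v = valueOf(doc);
            result = seen ? combine(result, v) : v;
            seen = true;
        }
        lastSeenRootDoc = rootDoc;
        lastEmittedValue = result;
        return result;
    }
}
```

The cache matters because, as the diff comment notes, a field comparator may request the same root document more than once, and a DocIdSetIterator cannot be rewound.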
{
"body": "I encountered two problems when setting up a tribe node\n1. I can't set the cluster name of the tribe node\n2. (minor) I can't set transport port\n\nVersions used:\n- ES: 1.4.2\n- java: Java(TM) SE Runtime Environment (build 1.7.0_51-b13), \n Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)\n- OS: MacOSX 10.9.5\n\nYou can find a recreation here: https://gist.github.com/jpparis-orange/44e464b8bb09246a4239\n(Change ES_BASE to adapt to your environment).\nIn the recreation, there are 8 comment lines right before launching the tribe node. For each pair of lines, you find the -Des... option to add in the tribe node launch command and the result in the logs. When adding one of the four options, the tribe node will not discover the clusters.\n",
"comments": [
{
"body": "Hi @jpparis-orange \n\nSorry but I just can't follow what you're trying to do here. I think you're trying to do too much all at once, which means that it is not clear where things are going wrong.\n",
"created_at": "2015-02-09T11:09:57Z"
},
{
"body": "Hello @clintongormley \n\nThe gist recreation creates 2 clusters and a tribe node: everything is ok. Then, I'll change the recreation to exhibit the problems.\n\nIn the tribe node start (say after line 38), I add the following line \n-Des.cluster.name=\"T1\" \\\nThe tribe node will not connect to the clusters.\nSame with -Des.cluster.name=\"elasticsearch\" \\\n\nThe second problem is about setting transport port. Starting with the recreation, add the following line after line 38\n-Des.transport.tcp.port=\"9900\" \\\nThe tribe node will not start.\nSame with -Des.transport.tcp.port=\"9300\" \\\n",
"created_at": "2015-02-09T12:39:38Z"
},
{
"body": "OK - to make this easier to follow, start two clusters as follows:\n\n```\n./bin/elasticsearch --cluster.name C1 --node.name N1 -d\n./bin/elasticsearch --cluster.name C2 --node.name N2 -d\n```\n\nThen start a tribe node as follows:\n\n```\n./bin/elasticsearch --tribe.t1.cluster.name C1 --tribe.t2.cluster.name C2 --node-name T1\n```\n\nThis works, however doing: `GET localhost:9202/_cluster/state?pretty` will return a cluster name of `elasticsearch`, and this cannot be changed, eg:\n\n```\n./bin/elasticsearch --tribe.t1.cluster.name C1 --tribe.t2.cluster.name C2 --node-name T1 --cluster.name foo # tribe node doesn't join the other cluster\n```\n\nAlso, trying to set a different transport port for the tribe node doesn't work, eg:\n\n```\n./bin/elasticsearch --tribe.t1.cluster.name C1 --tribe.t2.cluster.name C2 --node-name T1 --transport.tcp.port 9900\n```\n\nThe above fails with:\n\n```\n{1.4.2}: Startup Failed ...\n- BindTransportException[Failed to bind to [9900]]\n ChannelException[Failed to bind to: 0.0.0.0/0.0.0.0:9900]\n BindException[Address already in use]\n```\n",
"created_at": "2015-02-09T15:17:29Z"
},
{
"body": "That's simpler, and it's what I had in mind: sorry for the complicated recreation. Do you want that I simplify the gist too?\n\nTwo complements: \n1. --cluster.name elasticsearch give the same result as --cluster.name foo\n2. --transport.tcp.port 9300 give the same result as --transport.tcp.port 9900\n",
"created_at": "2015-02-09T20:50:59Z"
}
],
"number": 9576,
"title": "Tribe node setup"
} | {
"body": "The tribe node, at startup, sets up the tribe clients that will join their corresponding tribes. All of the tribe.\\* settings are properly forwarded to the corresponding tribe client. System properties and global configuration settings (all but tribe.*) must not be forwarded to the tribe clients though or they will end up overriding per tribe settings with same name.\n\nFor instance if you set the `transport.tcp.port` to some defined value for the tribe node, via system property or configuration file, that same value must not be forwarded to the tribe clients, otherwise they will try and use the same port, which will be already occupied by the tribe node itself, resulting in startup failed. Same for cluster.name, which will cause the tribe clients not to join their tribes.\n\nThis change is quite hard to test in our test infra, given that we make sure that the internal test cluster ignores any system property for proper test isolation. I tested it manually and verfied that the fix works and doesn't seem to have any downside. If anybody has any idea on how we can test this, I would be happy to add some tests to make sure we have proper coverage for it.\n\nCloses #9576\n",
"number": 9721,
"review_comments": [],
"title": "System properties and configuration settings must not be forwarded to tribe clients"
} | {
"commits": [
{
"message": "Tribe node: system properties and configuration settings must not be forwarded to tribe clients\n\nThe tribe node, at startup, sets up the tribe clients that will join their corresponding tribes. All of the tribe.* settings are properly forwarded to the corresponding tribe client. System properties and global configuration settings must not be forwarded to the tribe client though or they will end up overriding per tribe settings with same name causing issues.\n\n For instance if you set the transport.tcp.port to some defined value for the tribe node, via system property or configuration file, that same value must not be forwarded to the tribe clients, otherwise they will try and use the same port, which will be already occupied by the tribe node itself, resulting in startup failed. Same for cluster.name, which will cause the tribe clients not to join their tribes.\n\nCloses #9576\nCloses #9721"
}
],
"files": [
{
"diff": "@@ -128,10 +128,11 @@ public TribeService(Settings settings, ClusterService clusterService, DiscoveryS\n ImmutableSettings.Builder sb = ImmutableSettings.builder().put(entry.getValue());\n sb.put(\"node.name\", settings.get(\"name\") + \"/\" + entry.getKey());\n sb.put(TRIBE_NAME, entry.getKey());\n+ sb.put(\"config.ignore_system_properties\", true);\n if (sb.get(\"http.enabled\") == null) {\n sb.put(\"http.enabled\", false);\n }\n- nodes.add(NodeBuilder.nodeBuilder().settings(sb).client(true).build());\n+ nodes.add(NodeBuilder.nodeBuilder().settings(sb).client(true).loadConfigSettings(false).build());\n }\n \n String[] blockIndicesWrite = Strings.EMPTY_ARRAY;",
"filename": "src/main/java/org/elasticsearch/tribe/TribeService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,115 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.tribe;\n+\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.node.Node;\n+import org.elasticsearch.node.NodeBuilder;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.elasticsearch.test.InternalTestCluster;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+\n+import java.nio.file.Path;\n+import java.nio.file.Paths;\n+\n+import static org.hamcrest.CoreMatchers.either;\n+import static org.hamcrest.CoreMatchers.equalTo;\n+\n+/**\n+ * This test doesn't extend {@link org.elasticsearch.test.ElasticsearchIntegrationTest} as the internal cluster ignores system properties\n+ * all the time, while we need to make the tribe node accept them in this case, so that we can verify that they are not read again as part\n+ * of the tribe client nodes initialization. 
Note that the started nodes will obey to the 'node.mode' settings as the internal cluster does.\n+ */\n+public class TribeUnitTests extends ElasticsearchTestCase {\n+\n+ private static Node tribe1;\n+ private static Node tribe2;\n+\n+ private static final String NODE_MODE = InternalTestCluster.nodeMode();\n+\n+ @BeforeClass\n+ public static void createTribes() {\n+ tribe1 = NodeBuilder.nodeBuilder().settings(ImmutableSettings.builder().put(\"config.ignore_system_properties\", true).put(\"http.enabled\", false)\n+ .put(\"node.mode\", NODE_MODE).put(\"cluster.name\", \"tribe1\").put(\"node.name\", \"tribe1_node\")).node();\n+ tribe2 = NodeBuilder.nodeBuilder().settings(ImmutableSettings.builder().put(\"config.ignore_system_properties\", true).put(\"http.enabled\", false)\n+ .put(\"node.mode\", NODE_MODE).put(\"cluster.name\", \"tribe2\").put(\"node.name\", \"tribe2_node\")).node();\n+\n+ }\n+\n+ @AfterClass\n+ public static void closeTribes() {\n+ tribe1.close();\n+ tribe1 = null;\n+ tribe2.close();\n+ tribe2 = null;\n+ }\n+\n+ @Test\n+ public void testThatTribeClientsIgnoreGlobalSysProps() throws Exception {\n+ System.setProperty(\"es.cluster.name\", \"tribe_node_cluster\");\n+ System.setProperty(\"es.tribe.t1.cluster.name\", \"tribe1\");\n+ System.setProperty(\"es.tribe.t2.cluster.name\", \"tribe2\");\n+\n+ try {\n+ assertTribeNodeSuccesfullyCreated(ImmutableSettings.EMPTY);\n+ } finally {\n+ System.clearProperty(\"es.cluster.name\");\n+ System.clearProperty(\"es.tribe.t1.cluster.name\");\n+ System.clearProperty(\"es.tribe.t2.cluster.name\");\n+ }\n+ }\n+\n+ @Test\n+ public void testThatTribeClientsIgnoreGlobalConfig() throws Exception {\n+ Path pathConf = Paths.get(TribeUnitTests.class.getResource(\"elasticsearch.yml\").toURI()).getParent();\n+ Settings settings = ImmutableSettings.builder().put(\"config.ignore_system_properties\", true).put(\"path.conf\", pathConf).build();\n+ assertTribeNodeSuccesfullyCreated(settings);\n+ }\n+\n+ private static void assertTribeNodeSuccesfullyCreated(Settings extraSettings) throws Exception {\n+ //tribe node doesn't need the node.mode setting, as it's forced local internally anyways. The tribe clients do need it to make sure\n+ //they can find their corresponding tribes using the proper transport\n+ Settings settings = ImmutableSettings.builder().put(\"http.enabled\", false).put(\"node.name\", \"tribe_node\")\n+ .put(\"tribe.t1.node.mode\", NODE_MODE).put(\"tribe.t2.node.mode\", NODE_MODE).put(extraSettings).build();\n+\n+ try (Node node = NodeBuilder.nodeBuilder().settings(settings).node()) {\n+ try (Client client = node.client()) {\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ ClusterState state = client.admin().cluster().prepareState().clear().setNodes(true).get().getState();\n+ assertThat(state.getClusterName().value(), equalTo(\"tribe_node_cluster\"));\n+ assertThat(state.getNodes().getSize(), equalTo(5));\n+ for (DiscoveryNode discoveryNode : state.getNodes()) {\n+ assertThat(discoveryNode.getName(), either(equalTo(\"tribe1_node\")).or(equalTo(\"tribe2_node\")).or(equalTo(\"tribe_node\"))\n+ .or(equalTo(\"tribe_node/t1\")).or(equalTo(\"tribe_node/t2\")));\n+ }\n+ }\n+ });\n+ }\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/tribe/TribeUnitTests.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,3 @@\n+cluster.name: tribe_node_cluster\n+tribe.t1.cluster.name: tribe1\n+tribe.t2.cluster.name: tribe2\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/tribe/elasticsearch.yml",
"status": "added"
}
]
} |
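As a rough illustration of the settings isolation this change describes, here is a sketch, under assumed names, of how per-tribe keys are stripped of their `tribe.<name>.` prefix while global keys such as `cluster.name` and `transport.tcp.port` are deliberately not copied into the tribe client settings. The `forTribe` helper is hypothetical and not Elasticsearch API; only the `config.ignore_system_properties` key comes from the actual diff.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch: build settings for one tribe client from the tribe node's settings.
 * Only keys under "tribe.<name>." are forwarded; everything else (a global
 * cluster.name or transport.tcp.port, for example) is intentionally left out
 * so it cannot override the per-tribe values.
 */
final class TribeClientSettings {

    static Map<String, String> forTribe(String tribeName, Map<String, String> nodeSettings) {
        String prefix = "tribe." + tribeName + ".";
        Map<String, String> clientSettings = new HashMap<>();
        for (Map.Entry<String, String> e : nodeSettings.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // "tribe.t1.cluster.name" becomes "cluster.name" for the t1 client
                clientSettings.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        // The client must also ignore es.* system properties and the global
        // config file, otherwise those leak back in when the client starts.
        clientSettings.put("config.ignore_system_properties", "true");
        return clientSettings;
    }

    public static void main(String[] args) {
        Map<String, String> nodeSettings = new HashMap<>();
        nodeSettings.put("cluster.name", "tribe_node_cluster"); // global, not forwarded
        nodeSettings.put("transport.tcp.port", "9900");         // global, not forwarded
        nodeSettings.put("tribe.t1.cluster.name", "C1");
        nodeSettings.put("tribe.t2.cluster.name", "C2");
        // prints the t1 client's own cluster.name plus the forced ignore flag
        System.out.println(forTribe("t1", nodeSettings));
    }
}
```

Running `main` shows that the tribe node's global port and cluster name never reach the t1 client, which is exactly the failure mode reported in the issue when they did.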
{
"body": "Using an inner nested filter within a `nested_filter` for `sort`ing does not work (tested in 1.4.2) and it silently fails to handle the inner nested filter.\n\nThe corresponding test code is commented out at https://github.com/elasticsearch/elasticsearch/tree/v1.4.2/src/test/java/org/elasticsearch/nested/SimpleNestedTests.java#L1171\n\nExample follows:\n\n```\nPUT /test\n{\"mappings\":{\"type\":{\"properties\":{\"officelocation\":{\"type\":\"string\"},\"users\":{\"type\":\"nested\",\"properties\":{\"first\":{\"type\":\"string\"},\"last\":{\"type\":\"string\"},\"workstation\":{\"type\":\"nested\",\"properties\":{\"stationid\":{\"type\":\"string\"},\"phoneid\":{\"type\":\"string\"}}}}}}}}}\n\nPUT /test/type/1\n{\"officelocation\":\"glendale\",\"users\":[{\"first\":\"fname1\",\"last\":\"lname1\",\"workstation\":[{\"stationid\":\"s1\",\"phoneid\":\"p1\"},{\"stationid\":\"s2\",\"phoneid\":\"p2\"}]},{\"first\":\"fname2\",\"last\":\"lname2\",\"workstation\":[{\"stationid\":\"s3\",\"phoneid\":\"p3\"},{\"stationid\":\"s4\",\"phoneid\":\"p4\"}]},{\"first\":\"fname3\",\"last\":\"lname3\",\"workstation\":[{\"stationid\":\"s5\",\"phoneid\":\"p5\"},{\"stationid\":\"s6\",\"phoneid\":\"p6\"}]}]}\n\nPUT /test/type/2\n{\"officelocation\":\"glendale\",\"users\":[{\"first\":\"fname4\",\"last\":\"lname4\",\"workstation\":[{\"stationid\":\"s1\",\"phoneid\":\"p1\"},{\"stationid\":\"s2\",\"phoneid\":\"p2\"}]},{\"first\":\"fname5\",\"last\":\"lname5\",\"workstation\":[{\"stationid\":\"s3\",\"phoneid\":\"p3\"},{\"stationid\":\"s4\",\"phoneid\":\"p4\"}]},{\"first\":\"fname1\",\"last\":\"lname1\",\"workstation\":[{\"stationid\":\"s5 ss\",\"phoneid\":\"p5\"},{\"stationid\":\"s6\",\"phoneid\":\"p6\"}]}]}\n\nGET /test/_search\n{\n \"fields\": \"_id\",\n \"sort\": [\n {\n \"users.first\": {\n \"order\": \"asc\"\n }\n },\n {\n \"users.first\": {\n \"order\": \"asc\",\n \"nested_path\": \"users\",\n \"nested_filter\": {\n \"nested\": {\n \"path\": \"users.workstation\",\n \"filter\": {\n \"term\": {\n \"users.workstation.stationid\": \"s5\"\n }\n }\n }\n }\n }\n }\n ]\n}\n```\n\nThis returns\n\n```\n{\n \"took\": 9,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 2,\n \"max_score\": null,\n \"hits\": [\n {\n \"_index\": \"test\",\n \"_type\": \"myType\",\n \"_id\": \"1\",\n \"_score\": null,\n \"sort\": [\n \"fname1\",\n null // <- should be \"fname3\"\n ]\n },\n {\n \"_index\": \"test\",\n \"_type\": \"myType\",\n \"_id\": \"2\",\n \"_score\": null,\n \"sort\": [\n \"fname1\",\n null // <- should be \"fname1\"\n ]\n }\n ]\n }\n}\n```\n\nHere is the nested filter as a query:\n\n```\nGET /test/_search\n{\"query\":{\"filtered\":{\"filter\":{\"nested\":{\"path\":\"users\",\"filter\":{\"nested\":{\"path\":\"users.workstation\",\"filter\":{\"term\":{\"users.workstation.stationid\":\"s5\"}}}}}}}}}\n```\n",
"comments": [
{
"body": "A possible workaround for this problem would be to index inner \"workstation\" object **both** as nested fields and as flattened object field.This can be achieved by setting **\"include_in_parent\"** to **true**.\nHere is the updated schema (note the `\"include_in_parent\":\"true\"` under **workstation** :\n`PUT /test`\n\n``` json\n{\"mappings\":{\"type\":{\"properties\":{\"officelocation\":{\"type\":\"string\"},\"users\":{\"type\":\"nested\",\"properties\":{\"first\":{\"type\":\"string\"},\"last\":{\"type\":\"string\"},\"workstation\":{\"type\":\"nested\", \"include_in_parent\":\"true\" ,\"properties\":{\"stationid\":{\"type\":\"string\"},\"phoneid\":{\"type\":\"string\"}}}}}}}}}\n```\n\nHere is the updated query:\n`POST test/_search`\n\n``` json\n{\n \"fields\":\"_id\",\n \"sort\":[\n {\n \"users.first\":{\n \"order\":\"asc\"\n }\n },\n {\n \"users.first\":{\n \"order\":\"asc\",\n \"nested_path\":\"users\",\n \"nested_filter\":{\n \"term\":{\n \"users.workstation.stationid\":\"s5\"\n }\n }\n }\n }\n ]\n}\n```\n\nThis returns:\n\n``` json\n{\n \"took\" : 2,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : null,\n \"hits\" : [ {\n \"_index\" : \"test123\",\n \"_type\" : \"type\",\n \"_id\" : \"2\",\n \"_score\" : null,\n \"sort\" : [ \"fname1\", \"fname1\" ]\n }, {\n \"_index\" : \"test123\",\n \"_type\" : \"type\",\n \"_id\" : \"1\",\n \"_score\" : null,\n \"sort\" : [ \"fname1\", \"fname3\" ]\n } ]\n }\n}\n```\n",
"created_at": "2015-01-16T15:02:41Z"
},
{
"body": "The reason why the sorting goes wrong here is that the nested query is dependant about the nested context it is placed in. If a nested query has no other nested query above it, it assumes that it is should link back to the main/root document. If a nested query is placed under another nested query then it assumes it should link back to nested level that belong to the path the parent nested query has been set to. Nested sorting doesn't set this nested context and therefor the nested query doesn't link back to the `users` level, but to the root level instead.\n\nWe need to make sure that nested sorting sets the nested level properly, so that other nested queries know about this. The tricky bit is here that due to how the sorting elements gets parsed (in a streaming manner), the nested filter may be parsed before the `path` field has been parsed, so in order to do this properly the filter should be parsed after the `path` field has been parsed.\n",
"created_at": "2015-01-19T13:02:09Z"
},
{
"body": "I encountered the same problem, too. \n\nThe include_in_parent workaround can solve this problem only when you don't need the relations defined in workstation nested object.\n\nFor example, if you need to filter by workstation {\"stationid\":\"s5\",\"phoneid\":\"p6\"}, user {\"first\":\"fname3\",\"last\":\"lname3\",\"workstation\":[{\"stationid\":\"s5\",\"phoneid\":\"p5\"},{\"stationid\":\"s6\",\"phoneid\":\"p6\"}]} in the first document would match, even there's no {\"stationid\":\"s5\",\"phoneid\":\"p6\"} in his workstations.\n",
"created_at": "2015-02-10T09:55:51Z"
}
],
"number": 9305,
"title": "Sorting with nested_filter does not work with inner nested document"
} | {
"body": "The nested scope is set by any nested feature, so that sub nested queries and filters know about their context and these sub nested queries and filters can construct the right parent filter.\n\nRemoved the LateBindingParentFilter workaround in the nested query parser in favour of the nested scope maintained in the parse context.\n\nDue to this change nested queries and filters can now also be included in nested sorting and inner hits, because those features also now use the nested scope.\n\nThis change doesn't fix the usage of nested filters in nested and reverse_nested aggregations. The `nested` filter shouldn't be used inside these aggregations and instead the `nested` and `reverse_nested` aggs should be used to query on the right level. In a different change `nested` inside a `nested` and `reverse_nested` aggregation should result in a parse error.\n\nAlso fixes #9305\n",
"number": 9692,
"review_comments": [
{
"body": "Better now! Maybe we can remove docIdSet != EMPTY from the if now?\n",
"created_at": "2015-02-13T16:19:56Z"
},
{
"body": "should we just use a stack of object mappers instead of this object?\n",
"created_at": "2015-02-13T16:23:21Z"
},
{
"body": "+1 make sense, but should we encapsulate the Stack in this class?\n",
"created_at": "2015-02-13T16:33:16Z"
},
{
"body": "Note: I was wrongly assuming that DocIdSet.EMPTY returns a null iterator, which is not the case.\n",
"created_at": "2015-02-16T11:06:04Z"
}
],
"title": "Added nested scope to parse context that keeps track the current nested level during search request parsing"
} | {
"commits": [
{
"message": "Added nested scope to query parse context that keeps track the current nested level during search request parsing.\n\nThe nested scope is set by any nested feature, so that sub nested queries and filters know about their context and these sub nested queries and filters can construct the right parent filter.\nRemoved the LateBindingParentFilter workaround in the nested query parser in favour of the nested scope maintained in the query parse context.\nDue to this change nested queries and filters can now also be included in nested sorting and inner hits, because those features also now use the nested scope.\n\nThis change doesn't fix the usage of nested filters in nested and reverse_nested aggregations. The `nested` filter shouldn't be used inside these aggregations and instead the `nested` and `reverse_nested` aggs should be used to query on the right level. In a different change `nested` inside a `nested` and `reverse_nested` aggregation should result in a parse error.\n\nCloses #9305"
}
],
"files": [
{
"diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.index.LeafReader;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.util.BitDocIdSet;\n@@ -142,7 +143,11 @@ public Value call() throws Exception {\n } else {\n BitDocIdSet.Builder builder = new BitDocIdSet.Builder(context.reader().maxDoc());\n if (docIdSet != null && docIdSet != DocIdSet.EMPTY) {\n- builder.or(docIdSet.iterator());\n+ DocIdSetIterator iterator = docIdSet.iterator();\n+ // some filters (QueryWrapperFilter) return not null or DocIdSet.EMPTY if there no matching docs\n+ if (iterator != null) {\n+ builder.or(iterator);\n+ }\n }\n BitDocIdSet bits = builder.build();\n // code expects this to be non-null",
"filename": "src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java",
"status": "modified"
},
{
"diff": "@@ -19,26 +19,15 @@\n \n package org.elasticsearch.index.query;\n \n-import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Filter;\n-import org.apache.lucene.search.FilteredQuery;\n-import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lucene.HashedBytesRef;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.index.query.support.InnerHitsQueryParserHelper;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n-import org.elasticsearch.search.internal.SubSearchContext;\n \n import java.io.IOException;\n \n@@ -61,123 +50,58 @@ public String[] names() {\n @Override\n public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n+ final NestedQueryParser.ToBlockJoinQueryBuilder builder = new NestedQueryParser.ToBlockJoinQueryBuilder(parseContext);\n \n- Query query = null;\n- boolean queryFound = false;\n- Filter filter = null;\n- boolean filterFound = false;\n float boost = 1.0f;\n- String path = null;\n boolean cache = false;\n HashedBytesRef cacheKey = null;\n String filterName = null;\n- Tuple<String, SubSearchContext> innerHits = null;\n \n- // we need a late binding filter so we can inject a parent nested filter inner nested queries\n- NestedQueryParser.LateBindingParentFilter currentParentFilterContext = NestedQueryParser.parentFilterContext.get();\n-\n- NestedQueryParser.LateBindingParentFilter usAsParentFilter = new NestedQueryParser.LateBindingParentFilter();\n- NestedQueryParser.parentFilterContext.set(usAsParentFilter);\n-\n- try {\n- String currentFieldName = null;\n- XContentParser.Token token;\n- while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n- if (token == XContentParser.Token.FIELD_NAME) {\n- currentFieldName = parser.currentName();\n- } else if (token == XContentParser.Token.START_OBJECT) {\n- if (\"query\".equals(currentFieldName)) {\n- queryFound = true;\n- query = parseContext.parseInnerQuery();\n- } else if (\"filter\".equals(currentFieldName)) {\n- filterFound = true;\n- filter = parseContext.parseInnerFilter();\n- } else if (\"inner_hits\".equals(currentFieldName)) {\n- innerHits = innerHitsQueryParserHelper.parse(parseContext);\n- } else {\n- throw new QueryParsingException(parseContext.index(), \"[nested] filter does not support [\" + currentFieldName + \"]\");\n- }\n- } else if (token.isValue()) {\n- if (\"path\".equals(currentFieldName)) {\n- path = parser.text();\n- } else if (\"boost\".equals(currentFieldName)) {\n- boost = parser.floatValue();\n- } else if (\"_name\".equals(currentFieldName)) {\n- filterName = parser.text();\n- } else if (\"_cache\".equals(currentFieldName)) {\n- cache = parser.booleanValue();\n- } else if (\"_cache_key\".equals(currentFieldName) || \"_cacheKey\".equals(currentFieldName)) {\n- cacheKey = new 
HashedBytesRef(parser.text());\n- } else {\n- throw new QueryParsingException(parseContext.index(), \"[nested] filter does not support [\" + currentFieldName + \"]\");\n- }\n+ String currentFieldName = null;\n+ XContentParser.Token token;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ currentFieldName = parser.currentName();\n+ } else if (token == XContentParser.Token.START_OBJECT) {\n+ if (\"query\".equals(currentFieldName)) {\n+ builder.query();\n+ } else if (\"filter\".equals(currentFieldName)) {\n+ builder.filter();\n+ } else if (\"inner_hits\".equals(currentFieldName)) {\n+ builder.setInnerHits(innerHitsQueryParserHelper.parse(parseContext));\n+ } else {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] filter does not support [\" + currentFieldName + \"]\");\n+ }\n+ } else if (token.isValue()) {\n+ if (\"path\".equals(currentFieldName)) {\n+ builder.setPath(parser.text());\n+ } else if (\"boost\".equals(currentFieldName)) {\n+ boost = parser.floatValue();\n+ } else if (\"_name\".equals(currentFieldName)) {\n+ filterName = parser.text();\n+ } else if (\"_cache\".equals(currentFieldName)) {\n+ cache = parser.booleanValue();\n+ } else if (\"_cache_key\".equals(currentFieldName) || \"_cacheKey\".equals(currentFieldName)) {\n+ cacheKey = new HashedBytesRef(parser.text());\n+ } else {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] filter does not support [\" + currentFieldName + \"]\");\n }\n }\n- if (!queryFound && !filterFound) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] requires either 'query' or 'filter' field\");\n- }\n- if (path == null) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] requires 'path' field\");\n- }\n-\n- if (query == null && filter == null) {\n- return null;\n- }\n-\n- if (filter != null) {\n- query = new ConstantScoreQuery(filter);\n- }\n-\n- query.setBoost(boost);\n-\n- MapperService.SmartNameObjectMapper mapper = parseContext.smartObjectMapper(path);\n- if (mapper == null) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] failed to find nested object under path [\" + path + \"]\");\n- }\n- ObjectMapper objectMapper = mapper.mapper();\n- if (objectMapper == null) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] failed to find nested object under path [\" + path + \"]\");\n- }\n- if (!objectMapper.nested().isNested()) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] nested object under path [\" + path + \"] is not of nested type\");\n- }\n-\n- BitDocIdSetFilter childFilter = parseContext.bitsetFilter(objectMapper.nestedTypeFilter());\n- usAsParentFilter.filter = childFilter;\n- // wrap the child query to only work on the nested path type\n- query = new FilteredQuery(query, childFilter);\n- if (innerHits != null) {\n- DocumentMapper childDocumentMapper = mapper.docMapper();\n- ObjectMapper parentObjectMapper = childDocumentMapper.findParentObjectMapper(objectMapper);\n- InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.v2(), query, null, parentObjectMapper, objectMapper);\n- String name = innerHits.v1() != null ? 
innerHits.v1() : path;\n- parseContext.addInnerHits(name, nestedInnerHits);\n- }\n-\n- BitDocIdSetFilter parentFilter = currentParentFilterContext;\n- if (parentFilter == null) {\n- parentFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE);\n- // don't do special parent filtering, since we might have same nested mapping on two different types\n- //if (mapper.hasDocMapper()) {\n- // // filter based on the type...\n- // parentFilter = mapper.docMapper().typeFilter();\n- //}\n- } else {\n- parentFilter = parseContext.bitsetFilter(parentFilter);\n- }\n-\n- Filter nestedFilter = Queries.wrap(new ToParentBlockJoinQuery(query, parentFilter, ScoreMode.None), parseContext);\n-\n+ }\n+ builder.setScoreMode(ScoreMode.None);\n+ ToParentBlockJoinQuery joinQuery = builder.build();\n+ if (joinQuery != null) {\n+ joinQuery.getChildQuery().setBoost(boost);\n+ Filter nestedFilter = Queries.wrap(joinQuery, parseContext);\n if (cache) {\n nestedFilter = parseContext.cacheFilter(nestedFilter, cacheKey, parseContext.autoFilterCachePolicy());\n }\n if (filterName != null) {\n parseContext.addNamedFilter(filterName, nestedFilter);\n }\n return nestedFilter;\n- } finally {\n- // restore the thread local one...\n- NestedQueryParser.parentFilterContext.set(currentParentFilterContext);\n+ } else {\n+ return null;\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/query/NestedFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -19,25 +19,21 @@\n \n package org.elasticsearch.index.query;\n \n-import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.FilteredQuery;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n-import org.apache.lucene.util.BitDocIdSet;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.index.query.support.InnerHitsQueryParserHelper;\n-import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsContext.NestedInnerHits;\n+import org.elasticsearch.index.query.support.NestedInnerQueryParseSupport;\n+import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n import org.elasticsearch.search.internal.SubSearchContext;\n \n import java.io.IOException;\n@@ -61,155 +57,110 @@ public String[] names() {\n @Override\n public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n+ final ToBlockJoinQueryBuilder builder = new ToBlockJoinQueryBuilder(parseContext);\n \n- Query query = null;\n- boolean queryFound = false;\n- Filter filter = null;\n- boolean filterFound = false;\n float boost = 1.0f;\n- String path = null;\n ScoreMode scoreMode = ScoreMode.Avg;\n String queryName = null;\n- Tuple<String, SubSearchContext> innerHits = null;\n-\n- // we need a late binding filter so we can inject a parent nested filter inner nested queries\n- LateBindingParentFilter currentParentFilterContext = parentFilterContext.get();\n-\n- LateBindingParentFilter usAsParentFilter = new LateBindingParentFilter();\n- parentFilterContext.set(usAsParentFilter);\n-\n- try {\n- String currentFieldName = null;\n- XContentParser.Token token;\n- while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n- if (token == XContentParser.Token.FIELD_NAME) {\n- currentFieldName = parser.currentName();\n- } else if (token == XContentParser.Token.START_OBJECT) {\n- if (\"query\".equals(currentFieldName)) {\n- queryFound = true;\n- query = parseContext.parseInnerQuery();\n- } else if (\"filter\".equals(currentFieldName)) {\n- filterFound = true;\n- filter = parseContext.parseInnerFilter();\n- } else if (\"inner_hits\".equals(currentFieldName)) {\n- innerHits = innerHitsQueryParserHelper.parse(parseContext);\n- } else {\n- throw new QueryParsingException(parseContext.index(), \"[nested] query does not support [\" + currentFieldName + \"]\");\n- }\n- } else if (token.isValue()) {\n- if (\"path\".equals(currentFieldName)) {\n- path = parser.text();\n- } else if (\"boost\".equals(currentFieldName)) {\n- boost = parser.floatValue();\n- } else if (\"score_mode\".equals(currentFieldName) || \"scoreMode\".equals(currentFieldName)) {\n- String sScoreMode = parser.text();\n- if (\"avg\".equals(sScoreMode)) {\n- scoreMode = ScoreMode.Avg;\n- } else if (\"max\".equals(sScoreMode)) {\n- scoreMode = 
ScoreMode.Max;\n- } else if (\"total\".equals(sScoreMode) || \"sum\".equals(sScoreMode)) {\n- scoreMode = ScoreMode.Total;\n- } else if (\"none\".equals(sScoreMode)) {\n- scoreMode = ScoreMode.None;\n- } else {\n- throw new QueryParsingException(parseContext.index(), \"illegal score_mode for nested query [\" + sScoreMode + \"]\");\n- }\n- } else if (\"_name\".equals(currentFieldName)) {\n- queryName = parser.text();\n+\n+ String currentFieldName = null;\n+ XContentParser.Token token;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ currentFieldName = parser.currentName();\n+ } else if (token == XContentParser.Token.START_OBJECT) {\n+ if (\"query\".equals(currentFieldName)) {\n+ builder.query();\n+ } else if (\"filter\".equals(currentFieldName)) {\n+ builder.filter();\n+ } else if (\"inner_hits\".equals(currentFieldName)) {\n+ builder.setInnerHits(innerHitsQueryParserHelper.parse(parseContext));\n+ } else {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] query does not support [\" + currentFieldName + \"]\");\n+ }\n+ } else if (token.isValue()) {\n+ if (\"path\".equals(currentFieldName)) {\n+ builder.setPath(parser.text());\n+ } else if (\"boost\".equals(currentFieldName)) {\n+ boost = parser.floatValue();\n+ } else if (\"score_mode\".equals(currentFieldName) || \"scoreMode\".equals(currentFieldName)) {\n+ String sScoreMode = parser.text();\n+ if (\"avg\".equals(sScoreMode)) {\n+ scoreMode = ScoreMode.Avg;\n+ } else if (\"max\".equals(sScoreMode)) {\n+ scoreMode = ScoreMode.Max;\n+ } else if (\"total\".equals(sScoreMode) || \"sum\".equals(sScoreMode)) {\n+ scoreMode = ScoreMode.Total;\n+ } else if (\"none\".equals(sScoreMode)) {\n+ scoreMode = ScoreMode.None;\n } else {\n- throw new QueryParsingException(parseContext.index(), \"[nested] query does not support [\" + currentFieldName + \"]\");\n+ throw new QueryParsingException(parseContext.index(), \"illegal score_mode for nested query [\" + sScoreMode + \"]\");\n }\n+ } else if (\"_name\".equals(currentFieldName)) {\n+ queryName = parser.text();\n+ } else {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] query does not support [\" + currentFieldName + \"]\");\n }\n }\n- if (!queryFound && !filterFound) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] requires either 'query' or 'filter' field\");\n- }\n- if (path == null) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] requires 'path' field\");\n- }\n-\n- if (query == null && filter == null) {\n- return null;\n- }\n-\n- if (filter != null) {\n- query = new ConstantScoreQuery(filter);\n- }\n-\n- MapperService.SmartNameObjectMapper mapper = parseContext.smartObjectMapper(path);\n- if (mapper == null) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] failed to find nested object under path [\" + path + \"]\");\n- }\n- ObjectMapper objectMapper = mapper.mapper();\n- if (objectMapper == null) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] failed to find nested object under path [\" + path + \"]\");\n- }\n- if (!objectMapper.nested().isNested()) {\n- throw new QueryParsingException(parseContext.index(), \"[nested] nested object under path [\" + path + \"] is not of nested type\");\n- }\n-\n- BitDocIdSetFilter childFilter = parseContext.bitsetFilter(objectMapper.nestedTypeFilter());\n- usAsParentFilter.filter = childFilter;\n- // wrap the child query to only work on the nested path type\n- 
query = new FilteredQuery(query, childFilter);\n- if (innerHits != null) {\n- DocumentMapper childDocumentMapper = mapper.docMapper();\n- ObjectMapper parentObjectMapper = childDocumentMapper.findParentObjectMapper(objectMapper);\n- NestedInnerHits nestedInnerHits = new NestedInnerHits(innerHits.v2(), query, null, parentObjectMapper, objectMapper);\n- String name = innerHits.v1() != null ? innerHits.v1() : path;\n- parseContext.addInnerHits(name, nestedInnerHits);\n- }\n+ }\n \n- BitDocIdSetFilter parentFilter = currentParentFilterContext;\n- if (parentFilter == null) {\n- parentFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE);\n- // don't do special parent filtering, since we might have same nested mapping on two different types\n- //if (mapper.hasDocMapper()) {\n- // // filter based on the type...\n- // parentFilter = mapper.docMapper().typeFilter();\n- //}\n- } else {\n- parentFilter = parseContext.bitsetFilter(parentFilter);\n- }\n- ToParentBlockJoinQuery joinQuery = new ToParentBlockJoinQuery(query, parentFilter, scoreMode);\n+ builder.setScoreMode(scoreMode);\n+ ToParentBlockJoinQuery joinQuery = builder.build();\n+ if (joinQuery != null) {\n joinQuery.setBoost(boost);\n if (queryName != null) {\n parseContext.addNamedQuery(queryName, joinQuery);\n }\n- return joinQuery;\n- } finally {\n- // restore the thread local one...\n- parentFilterContext.set(currentParentFilterContext);\n }\n+ return joinQuery;\n }\n \n- // TODO: Change this mechanism in favour of how parent nested object type is resolved in nested and reverse_nested agg\n- // with this also proper validation can be performed on what is a valid nested child nested object type to be used\n- public static ThreadLocal<LateBindingParentFilter> parentFilterContext = new ThreadLocal<>();\n-\n- public static class LateBindingParentFilter extends BitDocIdSetFilter {\n+ public static class ToBlockJoinQueryBuilder extends NestedInnerQueryParseSupport {\n \n- public BitDocIdSetFilter filter;\n+ private ScoreMode scoreMode;\n+ private Tuple<String, SubSearchContext> innerHits;\n \n- @Override\n- public int hashCode() {\n- return filter.hashCode();\n+ public ToBlockJoinQueryBuilder(QueryParseContext parseContext) throws IOException {\n+ super(parseContext);\n }\n \n- @Override\n- public boolean equals(Object obj) {\n- if (!(obj instanceof LateBindingParentFilter)) return false;\n- return filter.equals(((LateBindingParentFilter) obj).filter);\n+ public void setScoreMode(ScoreMode scoreMode) {\n+ this.scoreMode = scoreMode;\n }\n \n- @Override\n- public String toString() {\n- return filter.toString();\n+ public void setInnerHits(Tuple<String, SubSearchContext> innerHits) {\n+ this.innerHits = innerHits;\n }\n \n- @Override\n- public BitDocIdSet getDocIdSet(LeafReaderContext ctx) throws IOException {\n- return filter.getDocIdSet(ctx);\n+ @Nullable\n+ public ToParentBlockJoinQuery build() throws IOException {\n+ Query innerQuery;\n+ if (queryFound) {\n+ innerQuery = getInnerQuery();\n+ } else if (filterFound) {\n+ Filter innerFilter = getInnerFilter();\n+ if (innerFilter != null) {\n+ innerQuery = new ConstantScoreQuery(getInnerFilter());\n+ } else {\n+ innerQuery = null;\n+ }\n+ } else {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] requires either 'query' or 'filter' field\");\n+ }\n+\n+ if (innerHits != null) {\n+ ObjectMapper parentObjectMapper = childDocumentMapper.findParentObjectMapper(nestedObjectMapper);\n+ InnerHitsContext.NestedInnerHits nestedInnerHits = new 
InnerHitsContext.NestedInnerHits(innerHits.v2(), getInnerQuery(), null, parentObjectMapper, nestedObjectMapper);\n+ String name = innerHits.v1() != null ? innerHits.v1() : path;\n+ parseContext.addInnerHits(name, nestedInnerHits);\n+ }\n+\n+ if (innerQuery != null) {\n+ return new ToParentBlockJoinQuery(new FilteredQuery(innerQuery, childFilter), parentFilter, scoreMode);\n+ } else {\n+ return null;\n+ }\n }\n+\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/query/NestedQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -54,6 +54,7 @@\n import org.elasticsearch.index.mapper.MapperBuilders;\n import org.elasticsearch.index.mapper.ContentPath;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n+import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.script.ScriptService;\n@@ -111,6 +112,8 @@ public static void removeTypes() {\n \n private boolean mapUnmappedFieldAsString;\n \n+ private NestedScope nestedScope;\n+\n public QueryParseContext(Index index, IndexQueryParserService indexQueryParser) {\n this(index, indexQueryParser, false);\n }\n@@ -138,6 +141,7 @@ public void reset(XContentParser jp) {\n this.namedFilters.clear();\n this.requireCustomQueryWrappingFilter = false;\n this.propagateNoCache = false;\n+ this.nestedScope = new NestedScope();\n }\n \n public Index index() {\n@@ -467,4 +471,8 @@ public long nowInMillis() {\n public boolean requireCustomQueryWrappingFilter() {\n return requireCustomQueryWrappingFilter;\n }\n+\n+ public NestedScope nestedScope() {\n+ return nestedScope;\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/index/query/QueryParseContext.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,205 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.query.support;\n+\n+import org.apache.lucene.search.Filter;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.join.BitDocIdSetFilter;\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.object.ObjectMapper;\n+import org.elasticsearch.index.query.QueryParseContext;\n+import org.elasticsearch.index.query.QueryParsingException;\n+import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n+import org.elasticsearch.search.internal.SearchContext;\n+\n+import java.io.IOException;\n+\n+/**\n+ * A helper that helps with parsing inner queries of the nested query.\n+ * 1) Takes into account that type nested path can appear before or after the inner query\n+ * 2) Updates the {@link NestedScope} when parsing the inner query.\n+ */\n+public class NestedInnerQueryParseSupport {\n+\n+ protected final QueryParseContext parseContext;\n+\n+ private BytesReference source;\n+ private Query innerQuery;\n+ private Filter innerFilter;\n+ protected String path;\n+\n+ private boolean filterParsed = false;\n+ private boolean queryParsed = false;\n+ protected boolean queryFound = false;\n+ protected boolean filterFound = false;\n+\n+ protected BitDocIdSetFilter parentFilter;\n+ protected BitDocIdSetFilter childFilter;\n+\n+ protected DocumentMapper childDocumentMapper;\n+ protected ObjectMapper nestedObjectMapper;\n+\n+ public NestedInnerQueryParseSupport(XContentParser parser, SearchContext searchContext) {\n+ parseContext = searchContext.queryParserService().getParseContext();\n+ parseContext.reset(parser);\n+ }\n+\n+ public NestedInnerQueryParseSupport(QueryParseContext parseContext) {\n+ this.parseContext = parseContext;\n+ }\n+\n+ public void query() throws IOException {\n+ if (path != null) {\n+ setPathLevel();\n+ try {\n+ innerQuery = parseContext.parseInnerQuery();\n+ } finally {\n+ resetPathLevel();\n+ }\n+ queryParsed = true;\n+ } else {\n+ source = XContentFactory.smileBuilder().copyCurrentStructure(parseContext.parser()).bytes();\n+ }\n+ queryFound = true;\n+ }\n+\n+ public void filter() throws IOException {\n+ if (path != null) {\n+ setPathLevel();\n+ try {\n+ innerFilter = parseContext.parseInnerFilter();\n+ } finally {\n+ resetPathLevel();\n+ }\n+ filterParsed = true;\n+ } else {\n+ source = 
XContentFactory.smileBuilder().copyCurrentStructure(parseContext.parser()).bytes();\n+ }\n+ filterFound = true;\n+ }\n+\n+ public Query getInnerQuery() throws IOException {\n+ if (queryParsed) {\n+ return innerQuery;\n+ } else {\n+ if (path == null) {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] requires 'path' field\");\n+ }\n+ if (!queryFound) {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] requires either 'query' or 'filter' field\");\n+ }\n+\n+ XContentParser old = parseContext.parser();\n+ try {\n+ XContentParser innerParser = XContentHelper.createParser(source);\n+ parseContext.parser(innerParser);\n+ setPathLevel();\n+ try {\n+ innerQuery = parseContext.parseInnerQuery();\n+ } finally {\n+ resetPathLevel();\n+ }\n+ queryParsed = true;\n+ return innerQuery;\n+ } finally {\n+ parseContext.parser(old);\n+ }\n+ }\n+ }\n+\n+ public Filter getInnerFilter() throws IOException {\n+ if (filterParsed) {\n+ return innerFilter;\n+ } else {\n+ if (path == null) {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] requires 'path' field\");\n+ }\n+ if (!filterFound) {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] requires either 'query' or 'filter' field\");\n+ }\n+\n+ setPathLevel();\n+ XContentParser old = parseContext.parser();\n+ try {\n+ XContentParser innerParser = XContentHelper.createParser(source);\n+ parseContext.parser(innerParser);\n+ innerFilter = parseContext.parseInnerFilter();\n+ filterParsed = true;\n+ return innerFilter;\n+ } finally {\n+ resetPathLevel();\n+ parseContext.parser(old);\n+ }\n+ }\n+ }\n+\n+ public void setPath(String path) {\n+ this.path = path;\n+ MapperService.SmartNameObjectMapper smart = parseContext.smartObjectMapper(path);\n+ if (smart == null) {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] failed to find nested object under path [\" + path + \"]\");\n+ }\n+ childDocumentMapper = smart.docMapper();\n+ nestedObjectMapper = smart.mapper();\n+ if (nestedObjectMapper == null) {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] failed to find nested object under path [\" + path + \"]\");\n+ }\n+ if (!nestedObjectMapper.nested().isNested()) {\n+ throw new QueryParsingException(parseContext.index(), \"[nested] nested object under path [\" + path + \"] is not of nested type\");\n+ }\n+ }\n+\n+ public String getPath() {\n+ return path;\n+ }\n+\n+ public ObjectMapper getNestedObjectMapper() {\n+ return nestedObjectMapper;\n+ }\n+\n+ public boolean queryFound() {\n+ return queryFound;\n+ }\n+\n+ public boolean filterFound() {\n+ return filterFound;\n+ }\n+\n+ private void setPathLevel() {\n+ ObjectMapper objectMapper = parseContext.nestedScope().getObjectMapper();\n+ if (objectMapper == null) {\n+ parentFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE);\n+ } else {\n+ parentFilter = parseContext.bitsetFilter(objectMapper.nestedTypeFilter());\n+ }\n+ childFilter = parseContext.bitsetFilter(nestedObjectMapper.nestedTypeFilter());\n+ parseContext.nestedScope().nextLevel(nestedObjectMapper);\n+ }\n+\n+ private void resetPathLevel() {\n+ parseContext.nestedScope().previousLevel();\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/index/query/support/NestedInnerQueryParseSupport.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,55 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.query.support;\n+\n+import org.elasticsearch.index.mapper.object.ObjectMapper;\n+\n+import java.util.Deque;\n+import java.util.LinkedList;\n+\n+/**\n+ * During query parsing this keeps track of the current nested level.\n+ */\n+public final class NestedScope {\n+\n+ private final Deque<ObjectMapper> levelStack = new LinkedList<>();\n+\n+ /**\n+ * @return For the current nested level returns the object mapper that belongs to that\n+ */\n+ public ObjectMapper getObjectMapper() {\n+ return levelStack.peek();\n+ }\n+\n+ /**\n+ * Sets the new current nested level and moves old current nested level down\n+ */\n+ public void nextLevel(ObjectMapper level) {\n+ levelStack.push(level);\n+ }\n+\n+ /**\n+ * Sets the previous nested level as current nested level and removes the current nested level.\n+ */\n+ public void previousLevel() {\n+ ObjectMapper level = levelStack.pop();\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/index/query/support/NestedScope.java",
"status": "added"
},
{
"diff": "@@ -19,9 +19,9 @@\n package org.elasticsearch.percolator;\n \n import com.google.common.collect.ImmutableList;\n-import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.IndexableField;\n+import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.Counter;\n@@ -31,6 +31,7 @@\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.text.StringText;\n import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.cache.filter.FilterCache;\n@@ -43,7 +44,6 @@\n import org.elasticsearch.index.query.IndexQueryParserService;\n import org.elasticsearch.index.query.ParsedFilter;\n import org.elasticsearch.index.query.ParsedQuery;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.script.ScriptService;\n@@ -687,4 +687,5 @@ public void innerHits(InnerHitsContext innerHitsContext) {\n public InnerHitsContext innerHits() {\n throw new UnsupportedOperationException();\n }\n+\n }",
"filename": "src/main/java/org/elasticsearch/percolator/PercolateContext.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.query.NestedQueryParser;\n+import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsParseElement;\n import org.elasticsearch.search.fetch.script.ScriptFieldsParseElement;\n@@ -60,14 +60,16 @@ public InnerHitsParseElement(SortParseElement sortParseElement, FetchSourceParse\n }\n \n @Override\n- public void parse(XContentParser parser, SearchContext context) throws Exception {\n- Map<String, InnerHitsContext.BaseInnerHits> innerHitsMap = parseInnerHits(parser, context);\n+ public void parse(XContentParser parser, SearchContext searchContext) throws Exception {\n+ QueryParseContext parseContext = searchContext.queryParserService().getParseContext();\n+ parseContext.reset(parser);\n+ Map<String, InnerHitsContext.BaseInnerHits> innerHitsMap = parseInnerHits(parser, parseContext, searchContext);\n if (innerHitsMap != null) {\n- context.innerHits(new InnerHitsContext(innerHitsMap));\n+ searchContext.innerHits(new InnerHitsContext(innerHitsMap));\n }\n }\n \n- private Map<String, InnerHitsContext.BaseInnerHits> parseInnerHits(XContentParser parser, SearchContext context) throws Exception {\n+ private Map<String, InnerHitsContext.BaseInnerHits> parseInnerHits(XContentParser parser, QueryParseContext parseContext, SearchContext searchContext) throws Exception {\n XContentParser.Token token;\n Map<String, InnerHitsContext.BaseInnerHits> innerHitsMap = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n@@ -79,7 +81,7 @@ private Map<String, InnerHitsContext.BaseInnerHits> parseInnerHits(XContentParse\n if (token != XContentParser.Token.START_OBJECT) {\n throw new ElasticsearchIllegalArgumentException(\"Inner hit definition for [\" + innerHitName + \" starts with a [\" + token + \"], expected a [\" + XContentParser.Token.START_OBJECT + \"].\");\n }\n- InnerHitsContext.BaseInnerHits innerHits = parseInnerHit(parser, context, innerHitName);\n+ InnerHitsContext.BaseInnerHits innerHits = parseInnerHit(parser, parseContext, searchContext, innerHitName);\n if (innerHitsMap == null) {\n innerHitsMap = new HashMap<>();\n }\n@@ -88,7 +90,7 @@ private Map<String, InnerHitsContext.BaseInnerHits> parseInnerHits(XContentParse\n return innerHitsMap;\n }\n \n- private InnerHitsContext.BaseInnerHits parseInnerHit(XContentParser parser, SearchContext context, String innerHitName) throws Exception {\n+ private InnerHitsContext.BaseInnerHits parseInnerHit(XContentParser parser, QueryParseContext parseContext, SearchContext searchContext, String innerHitName) throws Exception {\n XContentParser.Token token = parser.nextToken();\n if (token != XContentParser.Token.FIELD_NAME) {\n throw new ElasticsearchIllegalArgumentException(\"Unexpected token \" + token + \" inside inner hit definition. 
Either specify [path] or [type] object\");\n@@ -98,13 +100,15 @@ private InnerHitsContext.BaseInnerHits parseInnerHit(XContentParser parser, Sear\n if (token != XContentParser.Token.START_OBJECT) {\n throw new ElasticsearchIllegalArgumentException(\"Inner hit definition for [\" + innerHitName + \" starts with a [\" + token + \"], expected a [\" + XContentParser.Token.START_OBJECT + \"].\");\n }\n- final boolean nested;\n+\n+ String nestedPath = null;\n+ String type = null;\n switch (fieldName) {\n case \"path\":\n- nested = true;\n+ nestedPath = parser.currentName();\n break;\n case \"type\":\n- nested = false;\n+ type = parser.currentName();\n break;\n default:\n throw new ElasticsearchIllegalArgumentException(\"Either path or type object must be defined\");\n@@ -119,32 +123,69 @@ private InnerHitsContext.BaseInnerHits parseInnerHit(XContentParser parser, Sear\n throw new ElasticsearchIllegalArgumentException(\"Inner hit definition for [\" + innerHitName + \" starts with a [\" + token + \"], expected a [\" + XContentParser.Token.START_OBJECT + \"].\");\n }\n \n- NestedQueryParser.LateBindingParentFilter parentFilter = null;\n- NestedQueryParser.LateBindingParentFilter currentFilter = null;\n+ final InnerHitsContext.BaseInnerHits innerHits;\n+ if (nestedPath != null) {\n+ innerHits = parseNested(parser, parseContext, searchContext, fieldName);\n+ } else if (type != null) {\n+ innerHits = parseParentChild(parser, parseContext, searchContext, fieldName);\n+ } else {\n+ throw new ElasticsearchIllegalArgumentException(\"Either [path] or [type] must be defined\");\n+ }\n+\n+ // Completely consume all json objects:\n+ token = parser.nextToken();\n+ if (token != XContentParser.Token.END_OBJECT) {\n+ throw new ElasticsearchIllegalArgumentException(\"Expected [\" + XContentParser.Token.END_OBJECT + \"] token, but got a [\" + token + \"] token.\");\n+ }\n+ token = parser.nextToken();\n+ if (token != XContentParser.Token.END_OBJECT) {\n+ throw new ElasticsearchIllegalArgumentException(\"Expected [\" + XContentParser.Token.END_OBJECT + \"] token, but got a [\" + token + \"] token.\");\n+ }\n+\n+ return innerHits;\n+ }\n \n+ private InnerHitsContext.ParentChildInnerHits parseParentChild(XContentParser parser, QueryParseContext parseContext, SearchContext searchContext, String type) throws Exception {\n+ ParseResult parseResult = parseSubSearchContext(searchContext, parseContext, parser);\n+ DocumentMapper documentMapper = searchContext.mapperService().documentMapper(type);\n+ if (documentMapper == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"type [\" + type + \"] doesn't exist\");\n+ }\n+ return new InnerHitsContext.ParentChildInnerHits(parseResult.context(), parseResult.query(), parseResult.childInnerHits(), documentMapper);\n+ }\n \n- String nestedPath = null;\n- String type = null;\n- if (nested) {\n- nestedPath = fieldName;\n- currentFilter = new NestedQueryParser.LateBindingParentFilter();\n- parentFilter = NestedQueryParser.parentFilterContext.get();\n- NestedQueryParser.parentFilterContext.set(currentFilter);\n- } else {\n- type = fieldName;\n+ private InnerHitsContext.NestedInnerHits parseNested(XContentParser parser, QueryParseContext parseContext, SearchContext searchContext, String nestedPath) throws Exception {\n+ MapperService.SmartNameObjectMapper smartNameObjectMapper = searchContext.smartNameObjectMapper(nestedPath);\n+ if (smartNameObjectMapper == null || !smartNameObjectMapper.hasMapper()) {\n+ throw new ElasticsearchIllegalArgumentException(\"path [\" + nestedPath 
+\"] doesn't exist\");\n+ }\n+ ObjectMapper childObjectMapper = smartNameObjectMapper.mapper();\n+ if (!childObjectMapper.nested().isNested()) {\n+ throw new ElasticsearchIllegalArgumentException(\"path [\" + nestedPath +\"] isn't nested\");\n }\n+ DocumentMapper childDocumentMapper = smartNameObjectMapper.docMapper();\n+ parseContext.nestedScope().nextLevel(childObjectMapper);\n+ ParseResult parseResult = parseSubSearchContext(searchContext, parseContext, parser);\n+ parseContext.nestedScope().previousLevel();\n \n+ ObjectMapper parentObjectMapper = childDocumentMapper.findParentObjectMapper(childObjectMapper);\n+ return new InnerHitsContext.NestedInnerHits(parseResult.context(), parseResult.query(), parseResult.childInnerHits(), parentObjectMapper, childObjectMapper);\n+ }\n+\n+ private ParseResult parseSubSearchContext(SearchContext searchContext, QueryParseContext parseContext, XContentParser parser) throws Exception {\n Query query = null;\n Map<String, InnerHitsContext.BaseInnerHits> childInnerHits = null;\n- SubSearchContext subSearchContext = new SubSearchContext(context);\n+ SubSearchContext subSearchContext = new SubSearchContext(searchContext);\n+ String fieldName = null;\n+ XContentParser.Token token;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n fieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"query\".equals(fieldName)) {\n- query = context.queryParserService().parse(parser).query();\n+ query = searchContext.queryParserService().parseInnerQuery(parseContext);\n } else if (\"inner_hits\".equals(fieldName)) {\n- childInnerHits = parseInnerHits(parser, context);\n+ childInnerHits = parseInnerHits(parser, parseContext, searchContext);\n } else {\n parseCommonInnerHitOptions(parser, token, fieldName, subSearchContext, sortParseElement, sourceParseElement, highlighterParseElement, scriptFieldsParseElement, fieldDataFieldsParseElement);\n }\n@@ -153,47 +194,34 @@ private InnerHitsContext.BaseInnerHits parseInnerHit(XContentParser parser, Sear\n }\n }\n \n- // Completely consume all json objects:\n- token = parser.nextToken();\n- if (token != XContentParser.Token.END_OBJECT) {\n- throw new ElasticsearchIllegalArgumentException(\"Expected [\" + XContentParser.Token.END_OBJECT + \"] token, but got a [\" + token + \"] token.\");\n+ if (query == null) {\n+ query = new MatchAllDocsQuery();\n }\n- token = parser.nextToken();\n- if (token != XContentParser.Token.END_OBJECT) {\n- throw new ElasticsearchIllegalArgumentException(\"Expected [\" + XContentParser.Token.END_OBJECT + \"] token, but got a [\" + token + \"] token.\");\n+ return new ParseResult(subSearchContext, query, childInnerHits);\n+ }\n+\n+ private static final class ParseResult {\n+\n+ private final SubSearchContext context;\n+ private final Query query;\n+ private final Map<String, InnerHitsContext.BaseInnerHits> childInnerHits;\n+\n+ private ParseResult(SubSearchContext context, Query query, Map<String, InnerHitsContext.BaseInnerHits> childInnerHits) {\n+ this.context = context;\n+ this.query = query;\n+ this.childInnerHits = childInnerHits;\n }\n \n- if (query == null) {\n- query = new MatchAllDocsQuery();\n+ public SubSearchContext context() {\n+ return context;\n }\n \n- if (nestedPath != null && type != null) {\n- throw new ElasticsearchIllegalArgumentException(\"Either [path] or [type] can be defined not both\");\n- } else if (nestedPath != null) {\n- MapperService.SmartNameObjectMapper 
smartNameObjectMapper = context.smartNameObjectMapper(nestedPath);\n- if (smartNameObjectMapper == null || !smartNameObjectMapper.hasMapper()) {\n- throw new ElasticsearchIllegalArgumentException(\"path [\" + nestedPath +\"] doesn't exist\");\n- }\n- ObjectMapper childObjectMapper = smartNameObjectMapper.mapper();\n- if (!childObjectMapper.nested().isNested()) {\n- throw new ElasticsearchIllegalArgumentException(\"path [\" + nestedPath +\"] isn't nested\");\n- }\n- DocumentMapper childDocumentMapper = smartNameObjectMapper.docMapper();\n- if (currentFilter != null && childDocumentMapper != null) {\n- currentFilter.filter = context.bitsetFilterCache().getBitDocIdSetFilter(childObjectMapper.nestedTypeFilter());\n- NestedQueryParser.parentFilterContext.set(parentFilter);\n- }\n+ public Query query() {\n+ return query;\n+ }\n \n- ObjectMapper parentObjectMapper = childDocumentMapper.findParentObjectMapper(childObjectMapper);\n- return new InnerHitsContext.NestedInnerHits(subSearchContext, query, childInnerHits, parentObjectMapper, childObjectMapper);\n- } else if (type != null) {\n- DocumentMapper documentMapper = context.mapperService().documentMapper(type);\n- if (documentMapper == null) {\n- throw new ElasticsearchIllegalArgumentException(\"type [\" + type + \"] doesn't exist\");\n- }\n- return new InnerHitsContext.ParentChildInnerHits(subSearchContext, query, childInnerHits, documentMapper);\n- } else {\n- throw new ElasticsearchIllegalArgumentException(\"Either [path] or [type] must be defined\");\n+ public Map<String, InnerHitsContext.BaseInnerHits> childInnerHits() {\n+ return childInnerHits;\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsParseElement.java",
"status": "modified"
},
{
"diff": "@@ -50,6 +50,7 @@\n import org.elasticsearch.index.query.ParsedFilter;\n import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.script.ScriptService;",
"filename": "src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -562,4 +562,5 @@ public MapperService.SmartNameObjectMapper smartNameObjectMapper(String name) {\n public Counter timeEstimateCounter() {\n return in.timeEstimateCounter();\n }\n+\n }",
"filename": "src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.index.query.ParsedFilter;\n import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.query.QueryParseContext;\n+import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.script.ScriptService;",
"filename": "src/main/java/org/elasticsearch/search/internal/SearchContext.java",
"status": "modified"
},
{
"diff": "@@ -71,6 +71,8 @@ public class SubSearchContext extends FilteredSearchContext {\n private boolean trackScores;\n private boolean version;\n \n+ private InnerHitsContext innerHitsContext;\n+\n public SubSearchContext(SearchContext context) {\n super(context);\n this.fetchSearchResult = new FetchSearchResult();\n@@ -350,8 +352,6 @@ public Counter timeEstimateCounter() {\n throw new UnsupportedOperationException(\"Not supported\");\n }\n \n- private InnerHitsContext innerHitsContext;\n-\n @Override\n public void innerHits(InnerHitsContext innerHitsContext) {\n this.innerHitsContext = innerHitsContext;",
"filename": "src/main/java/org/elasticsearch/search/internal/SubSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -19,10 +19,9 @@\n \n package org.elasticsearch.search.sort;\n \n-import org.apache.lucene.index.NumericDocValues;\n import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.NumericDocValues;\n import org.apache.lucene.search.FieldComparator;\n-import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.util.BitSet;\n@@ -37,9 +36,8 @@\n import org.elasticsearch.index.fielddata.*;\n import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.mapper.ObjectMappers;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.query.ParsedFilter;\n+import org.elasticsearch.index.query.support.NestedInnerQueryParseSupport;\n import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.search.MultiValueMode;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -66,8 +64,7 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n GeoDistance geoDistance = GeoDistance.DEFAULT;\n boolean reverse = false;\n MultiValueMode sortMode = null;\n- String nestedPath = null;\n- Filter nestedFilter = null;\n+ NestedInnerQueryParseSupport nestedHelper = null;\n \n boolean normalizeLon = true;\n boolean normalizeLat = true;\n@@ -84,8 +81,10 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n } else if (token == XContentParser.Token.START_OBJECT) {\n // the json in the format of -> field : { lat : 30, lon : 12 }\n if (\"nested_filter\".equals(currentName) || \"nestedFilter\".equals(currentName)) {\n- ParsedFilter parsedFilter = context.queryParserService().parseInnerFilter(parser);\n- nestedFilter = parsedFilter == null ? 
null : parsedFilter.filter();\n+ if (nestedHelper == null) {\n+ nestedHelper = new NestedInnerQueryParseSupport(parser, context);\n+ }\n+ nestedHelper.filter();\n } else {\n fieldName = currentName;\n GeoPoint point = new GeoPoint();\n@@ -107,7 +106,10 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n } else if (\"sort_mode\".equals(currentName) || \"sortMode\".equals(currentName) || \"mode\".equals(currentName)) {\n sortMode = MultiValueMode.fromString(parser.text());\n } else if (\"nested_path\".equals(currentName) || \"nestedPath\".equals(currentName)) {\n- nestedPath = parser.text();\n+ if (nestedHelper == null) {\n+ nestedHelper = new NestedInnerQueryParseSupport(parser, context);\n+ }\n+ nestedHelper.setPath(parser.text());\n } else {\n GeoPoint point = new GeoPoint();\n point.resetFromString(parser.text());\n@@ -141,27 +143,26 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n for (int i = 0; i< geoPoints.size(); i++) {\n distances[i] = geoDistance.fixedSourceDistance(geoPoints.get(i).lat(), geoPoints.get(i).lon(), unit);\n }\n- ObjectMapper objectMapper;\n- if (nestedPath != null) {\n- ObjectMappers objectMappers = context.mapperService().objectMapper(nestedPath);\n- if (objectMappers == null) {\n- throw new ElasticsearchIllegalArgumentException(\"failed to find nested object mapping for explicit nested path [\" + nestedPath + \"]\");\n- }\n- objectMapper = objectMappers.mapper();\n- if (!objectMapper.nested().isNested()) {\n- throw new ElasticsearchIllegalArgumentException(\"mapping for explicit nested path is not mapped as nested: [\" + nestedPath + \"]\");\n+\n+ // TODO: remove this in master, we should be explicit when we want to sort on nested fields and don't do anything automatically\n+ if (nestedHelper == null || nestedHelper.getNestedObjectMapper() == null) {\n+ ObjectMapper objectMapper = context.mapperService().resolveClosestNestedObjectMapper(fieldName);\n+ if (objectMapper != null && objectMapper.nested().isNested()) {\n+ if (nestedHelper == null) {\n+ nestedHelper = new NestedInnerQueryParseSupport(context.queryParserService().getParseContext());\n+ }\n+ nestedHelper.setPath(objectMapper.fullPath());\n }\n- } else {\n- objectMapper = context.mapperService().resolveClosestNestedObjectMapper(fieldName);\n }\n+\n final Nested nested;\n- if (objectMapper != null && objectMapper.nested().isNested()) {\n+ if (nestedHelper != null && nestedHelper.getPath() != null) {\n BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE);\n BitDocIdSetFilter innerDocumentsFilter;\n- if (nestedFilter != null) {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedFilter);\n+ if (nestedHelper.filterFound()) {\n+ innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getInnerFilter());\n } else {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter());\n+ innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getNestedObjectMapper().nestedTypeFilter());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {",
"filename": "src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java",
"status": "modified"
},
{
"diff": "@@ -19,23 +19,20 @@\n \n package org.elasticsearch.search.sort;\n \n-import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.BinaryDocValues;\n-import org.apache.lucene.search.Filter;\n+import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.BytesRefBuilder;\n-import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.*;\n import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;\n import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;\n import org.elasticsearch.index.fielddata.fieldcomparator.DoubleValuesComparatorSource;\n-import org.elasticsearch.index.mapper.ObjectMappers;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.query.ParsedFilter;\n+import org.elasticsearch.index.query.support.NestedInnerQueryParseSupport;\n import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.SearchScript;\n@@ -67,8 +64,7 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n Map<String, Object> params = null;\n boolean reverse = false;\n MultiValueMode sortMode = null;\n- String nestedPath = null;\n- Filter nestedFilter = null;\n+ NestedInnerQueryParseSupport nestedHelper = null;\n \n XContentParser.Token token;\n String currentName = parser.currentName();\n@@ -80,8 +76,10 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n if (\"params\".equals(currentName)) {\n params = parser.map();\n } else if (\"nested_filter\".equals(currentName) || \"nestedFilter\".equals(currentName)) {\n- ParsedFilter parsedFilter = context.queryParserService().parseInnerFilter(parser);\n- nestedFilter = parsedFilter == null ? 
null : parsedFilter.filter();\n+ if (nestedHelper == null) {\n+ nestedHelper = new NestedInnerQueryParseSupport(parser, context);\n+ }\n+ nestedHelper.filter();\n }\n } else if (token.isValue()) {\n if (\"reverse\".equals(currentName)) {\n@@ -104,7 +102,10 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n } else if (\"mode\".equals(currentName)) {\n sortMode = MultiValueMode.fromString(parser.text());\n } else if (\"nested_path\".equals(currentName) || \"nestedPath\".equals(currentName)) {\n- nestedPath = parser.text();\n+ if (nestedHelper == null) {\n+ nestedHelper = new NestedInnerQueryParseSupport(parser, context);\n+ }\n+ nestedHelper.setPath(parser.text());\n }\n }\n }\n@@ -128,22 +129,13 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n // If nested_path is specified, then wrap the `fieldComparatorSource` in a `NestedFieldComparatorSource`\n ObjectMapper objectMapper;\n final Nested nested;\n- if (nestedPath != null) {\n- ObjectMappers objectMappers = context.mapperService().objectMapper(nestedPath);\n- if (objectMappers == null) {\n- throw new ElasticsearchIllegalArgumentException(\"failed to find nested object mapping for explicit nested path [\" + nestedPath + \"]\");\n- }\n- objectMapper = objectMappers.mapper();\n- if (!objectMapper.nested().isNested()) {\n- throw new ElasticsearchIllegalArgumentException(\"mapping for explicit nested path is not mapped as nested: [\" + nestedPath + \"]\");\n- }\n-\n+ if (nestedHelper != null && nestedHelper.getPath() != null) {\n BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE);\n BitDocIdSetFilter innerDocumentsFilter;\n- if (nestedFilter != null) {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedFilter);\n+ if (nestedHelper.filterFound()) {\n+ innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getInnerFilter());\n } else {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter());\n+ innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getNestedObjectMapper().nestedTypeFilter());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {",
"filename": "src/main/java/org/elasticsearch/search/sort/ScriptSortParser.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@\n \n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Lists;\n-import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n@@ -32,18 +31,18 @@\n import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.mapper.ObjectMappers;\n import org.elasticsearch.index.mapper.core.LongFieldMapper;\n import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n-import org.elasticsearch.index.query.ParsedFilter;\n+import org.elasticsearch.index.query.support.NestedInnerQueryParseSupport;\n import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.search.MultiValueMode;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.SearchParseException;\n-import org.elasticsearch.search.internal.SubSearchContext;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SubSearchContext;\n \n+import java.io.IOException;\n import java.util.List;\n \n /**\n@@ -86,13 +85,13 @@ public void parse(XContentParser parser, SearchContext context) throws Exception\n if (token == XContentParser.Token.START_OBJECT) {\n addCompoundSortField(parser, context, sortFields);\n } else if (token == XContentParser.Token.VALUE_STRING) {\n- addSortField(context, sortFields, parser.text(), false, null, null, null, null, null);\n+ addSortField(context, sortFields, parser.text(), false, null, null, null, null);\n } else {\n throw new ElasticsearchIllegalArgumentException(\"malformed sort format, within the sort array, an object, or an actual string are allowed\");\n }\n }\n } else if (token == XContentParser.Token.VALUE_STRING) {\n- addSortField(context, sortFields, parser.text(), false, null, null, null, null, null);\n+ addSortField(context, sortFields, parser.text(), false, null, null, null, null);\n } else if (token == XContentParser.Token.START_OBJECT) {\n addCompoundSortField(parser, context, sortFields);\n } else {\n@@ -127,8 +126,7 @@ private void addCompoundSortField(XContentParser parser, SearchContext context,\n String innerJsonName = null;\n String unmappedType = null;\n MultiValueMode sortMode = null;\n- Filter nestedFilter = null;\n- String nestedPath = null;\n+ NestedInnerQueryParseSupport nestedFilterParseHelper = null;\n token = parser.nextToken();\n if (token == XContentParser.Token.VALUE_STRING) {\n String direction = parser.text();\n@@ -139,7 +137,7 @@ private void addCompoundSortField(XContentParser parser, SearchContext context,\n } else {\n throw new ElasticsearchIllegalArgumentException(\"sort direction [\" + fieldName + \"] not supported\");\n }\n- addSortField(context, sortFields, fieldName, reverse, unmappedType, missing, sortMode, nestedPath, nestedFilter);\n+ addSortField(context, sortFields, fieldName, reverse, unmappedType, missing, sortMode, nestedFilterParseHelper);\n } else {\n if (parsers.containsKey(fieldName)) {\n sortFields.add(parsers.get(fieldName).parse(parser, context));\n@@ -169,27 +167,32 @@ private void addCompoundSortField(XContentParser parser, SearchContext context,\n } else if (\"mode\".equals(innerJsonName)) {\n sortMode = MultiValueMode.fromString(parser.text());\n } 
else if (\"nested_path\".equals(innerJsonName) || \"nestedPath\".equals(innerJsonName)) {\n- nestedPath = parser.text();\n+ if (nestedFilterParseHelper == null) {\n+ nestedFilterParseHelper = new NestedInnerQueryParseSupport(parser, context);\n+ }\n+ nestedFilterParseHelper.setPath(parser.text());\n } else {\n throw new ElasticsearchIllegalArgumentException(\"sort option [\" + innerJsonName + \"] not supported\");\n }\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"nested_filter\".equals(innerJsonName) || \"nestedFilter\".equals(innerJsonName)) {\n- ParsedFilter parsedFilter = context.queryParserService().parseInnerFilter(parser);\n- nestedFilter = parsedFilter == null ? null : parsedFilter.filter();\n+ if (nestedFilterParseHelper == null) {\n+ nestedFilterParseHelper = new NestedInnerQueryParseSupport(parser, context);\n+ }\n+ nestedFilterParseHelper.filter();\n } else {\n throw new ElasticsearchIllegalArgumentException(\"sort option [\" + innerJsonName + \"] not supported\");\n }\n }\n }\n- addSortField(context, sortFields, fieldName, reverse, unmappedType, missing, sortMode, nestedPath, nestedFilter);\n+ addSortField(context, sortFields, fieldName, reverse, unmappedType, missing, sortMode, nestedFilterParseHelper);\n }\n }\n }\n }\n }\n \n- private void addSortField(SearchContext context, List<SortField> sortFields, String fieldName, boolean reverse, String unmappedType, @Nullable final String missing, MultiValueMode sortMode, String nestedPath, Filter nestedFilter) {\n+ private void addSortField(SearchContext context, List<SortField> sortFields, String fieldName, boolean reverse, String unmappedType, @Nullable final String missing, MultiValueMode sortMode, NestedInnerQueryParseSupport nestedHelper) throws IOException {\n if (SCORE_FIELD_NAME.equals(fieldName)) {\n if (reverse) {\n sortFields.add(SORT_SCORE_REVERSE);\n@@ -233,29 +236,27 @@ private void addSortField(SearchContext context, List<SortField> sortFields, Str\n sortMode = resolveDefaultSortMode(reverse);\n }\n \n-\n- ObjectMapper objectMapper = null;\n- if (nestedPath != null) {\n- ObjectMappers objectMappers = context.mapperService().objectMapper(nestedPath);\n- if (objectMappers == null) {\n- throw new ElasticsearchIllegalArgumentException(\"failed to find nested object mapping for explicit nested path [\" + nestedPath + \"]\");\n- }\n- objectMapper = objectMappers.mapper();\n- if (!objectMapper.nested().isNested()) {\n- throw new ElasticsearchIllegalArgumentException(\"mapping for explicit nested path is not mapped as nested: [\" + nestedPath + \"]\");\n- }\n- } else if (!(context instanceof SubSearchContext)) {\n+ // TODO: remove this in master, we should be explicit when we want to sort on nested fields and don't do anything automatically\n+ if (!(context instanceof SubSearchContext)) {\n // Only automatically resolve nested path when sort isn't defined for top_hits\n- objectMapper = context.mapperService().resolveClosestNestedObjectMapper(fieldName);\n+ if (nestedHelper == null || nestedHelper.getNestedObjectMapper() == null) {\n+ ObjectMapper objectMapper = context.mapperService().resolveClosestNestedObjectMapper(fieldName);\n+ if (objectMapper != null && objectMapper.nested().isNested()) {\n+ if (nestedHelper == null) {\n+ nestedHelper = new NestedInnerQueryParseSupport(context.queryParserService().getParseContext());\n+ }\n+ nestedHelper.setPath(objectMapper.fullPath());\n+ }\n+ }\n }\n final Nested nested;\n- if (objectMapper != null && objectMapper.nested().isNested()) {\n+ if (nestedHelper != 
null && nestedHelper.getPath() != null) {\n BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE);\n BitDocIdSetFilter innerDocumentsFilter;\n- if (nestedFilter != null) {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedFilter);\n+ if (nestedHelper.filterFound()) {\n+ innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getInnerFilter());\n } else {\n- innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter());\n+ innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedHelper.getNestedObjectMapper().nestedTypeFilter());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {",
"filename": "src/main/java/org/elasticsearch/search/sort/SortParseElement.java",
"status": "modified"
},
{
"diff": "@@ -356,7 +356,7 @@ private void noChildrenNestedDeleteByQuery(long total, int docToDelete) throws E\n client().prepareDeleteByQuery(\"test\").setQuery(QueryBuilders.idsQuery(\"type1\").ids(Integer.toString(docToDelete))).execute().actionGet();\n flush();\n refresh();\n- assertDocumentCount(\"test\", total-1);\n+ assertDocumentCount(\"test\", total - 1);\n \n for (int i = 0; i < total; i++) {\n assertThat(client().prepareGet(\"test\", \"type1\", Integer.toString(i)).execute().actionGet().isExists(), equalTo(i != docToDelete));\n@@ -1167,6 +1167,142 @@ public void testSortNestedWithNestedFilter() throws Exception {\n assertThat(searchResponse.getHits().getHits()[2].sortValues()[0].toString(), equalTo(\"3\"));\n }\n \n+ @Test\n+ // https://github.com/elasticsearch/elasticsearch/issues/9305\n+ public void testNestedSortingWithNestedFilterAsFilter() throws Exception {\n+ assertAcked(prepareCreate(\"test\").addMapping(\"type\", jsonBuilder().startObject().startObject(\"properties\")\n+ .startObject(\"officelocation\").field(\"type\", \"string\").endObject()\n+ .startObject(\"users\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"first\").field(\"type\", \"string\").endObject()\n+ .startObject(\"last\").field(\"type\", \"string\").endObject()\n+ .startObject(\"workstations\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"stationid\").field(\"type\", \"string\").endObject()\n+ .startObject(\"phoneid\").field(\"type\", \"string\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject()));\n+\n+ client().prepareIndex(\"test\", \"type\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"officelocation\", \"gendale\")\n+ .startArray(\"users\")\n+ .startObject()\n+ .field(\"first\", \"fname1\")\n+ .field(\"last\", \"lname1\")\n+ .startArray(\"workstations\")\n+ .startObject()\n+ .field(\"stationid\", \"s1\")\n+ .field(\"phoneid\", \"p1\")\n+ .endObject()\n+ .startObject()\n+ .field(\"stationid\", \"s2\")\n+ .field(\"phoneid\", \"p2\")\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"first\", \"fname2\")\n+ .field(\"last\", \"lname2\")\n+ .startArray(\"workstations\")\n+ .startObject()\n+ .field(\"stationid\", \"s3\")\n+ .field(\"phoneid\", \"p3\")\n+ .endObject()\n+ .startObject()\n+ .field(\"stationid\", \"s4\")\n+ .field(\"phoneid\", \"p4\")\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"first\", \"fname3\")\n+ .field(\"last\", \"lname3\")\n+ .startArray(\"workstations\")\n+ .startObject()\n+ .field(\"stationid\", \"s5\")\n+ .field(\"phoneid\", \"p5\")\n+ .endObject()\n+ .startObject()\n+ .field(\"stationid\", \"s6\")\n+ .field(\"phoneid\", \"p6\")\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .endArray()\n+ .endObject()).get();\n+\n+ client().prepareIndex(\"test\", \"type\", \"2\").setSource(jsonBuilder().startObject()\n+ .field(\"officelocation\", \"gendale\")\n+ .startArray(\"users\")\n+ .startObject()\n+ .field(\"first\", \"fname4\")\n+ .field(\"last\", \"lname4\")\n+ .startArray(\"workstations\")\n+ .startObject()\n+ .field(\"stationid\", \"s1\")\n+ .field(\"phoneid\", \"p1\")\n+ .endObject()\n+ .startObject()\n+ .field(\"stationid\", \"s2\")\n+ .field(\"phoneid\", \"p2\")\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"first\", \"fname5\")\n+ .field(\"last\", \"lname5\")\n+ .startArray(\"workstations\")\n+ .startObject()\n+ .field(\"stationid\", \"s3\")\n+ 
.field(\"phoneid\", \"p3\")\n+ .endObject()\n+ .startObject()\n+ .field(\"stationid\", \"s4\")\n+ .field(\"phoneid\", \"p4\")\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"first\", \"fname1\")\n+ .field(\"last\", \"lname1\")\n+ .startArray(\"workstations\")\n+ .startObject()\n+ .field(\"stationid\", \"s5\")\n+ .field(\"phoneid\", \"p5\")\n+ .endObject()\n+ .startObject()\n+ .field(\"stationid\", \"s6\")\n+ .field(\"phoneid\", \"p6\")\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .endArray()\n+ .endObject()).get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test\")\n+ .addSort(SortBuilders.fieldSort(\"users.first\")\n+ .order(SortOrder.ASC))\n+ .addSort(SortBuilders.fieldSort(\"users.first\")\n+ .order(SortOrder.ASC)\n+ .setNestedPath(\"users\")\n+ .setNestedFilter(nestedFilter(\"users.workstations\", termFilter(\"users.workstations.stationid\", \"s5\"))))\n+ .get();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 2);\n+ assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"2\"));\n+ assertThat(searchResponse.getHits().getAt(0).sortValues()[0].toString(), equalTo(\"fname1\"));\n+ assertThat(searchResponse.getHits().getAt(0).sortValues()[1].toString(), equalTo(\"fname1\"));\n+ assertThat(searchResponse.getHits().getAt(1).id(), equalTo(\"1\"));\n+ assertThat(searchResponse.getHits().getAt(1).sortValues()[0].toString(), equalTo(\"fname1\"));\n+ assertThat(searchResponse.getHits().getAt(1).sortValues()[1].toString(), equalTo(\"fname3\"));\n+ }\n+\n @Test\n public void testCheckFixedBitSetCache() throws Exception {\n boolean loadFixedBitSeLazily = randomBoolean();",
"filename": "src/test/java/org/elasticsearch/nested/SimpleNestedTests.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.index.query.ParsedFilter;\n import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.script.ScriptService;\n@@ -606,4 +607,5 @@ public void innerHits(InnerHitsContext innerHitsContext) {\n public InnerHitsContext innerHits() {\n throw new UnsupportedOperationException();\n }\n+\n }",
"filename": "src/test/java/org/elasticsearch/test/TestSearchContext.java",
"status": "modified"
}
]
} |
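The nested-query refactor recorded in the row above replaces the ThreadLocal `parentFilterContext` hand-off with an explicit `NestedScope` stack that `NestedInnerQueryParseSupport` pushes before parsing an inner nested query and pops afterwards. The sketch below only illustrates that push/parse/pop pattern under stated assumptions — `NestedScopeSketch` and the use of plain path strings in place of `ObjectMapper` instances are stand-ins, not the Elasticsearch classes from the diff.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stand-in for the NestedScope idea: a stack of nested "levels"
// (path strings here instead of ObjectMapper instances) that is pushed before
// an inner query is parsed and popped afterwards, so the inner parser can see
// its enclosing nested level without a ThreadLocal.
class NestedScopeSketch {
    private final Deque<String> levels = new ArrayDeque<>();

    String current() {               // returns null at the root (non-nested) level
        return levels.peek();
    }

    void nextLevel(String path) {    // enter a nested level before parsing the inner query
        levels.push(path);
    }

    void previousLevel() {           // leave the level once parsing is done
        levels.pop();
    }

    public static void main(String[] args) {
        NestedScopeSketch scope = new NestedScopeSketch();
        scope.nextLevel("users");                      // outer nested query on path "users"
        try {
            System.out.println("parent while parsing inner query: " + scope.current());
            scope.nextLevel("users.workstations");     // a nested query inside the first one
            try {
                System.out.println("parent one level deeper: " + scope.current());
            } finally {
                scope.previousLevel();
            }
        } finally {
            scope.previousLevel();
        }
        System.out.println("back at root level: " + scope.current());
    }
}
```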
{
"body": "Using a negative numeric `interval` setting in `date_histogram` can lead to OOM erros on 1.4.2:\n\n```\nDELETE /_all\n\nPUT /index/doc/1\n{\n \"date\": \"2014/01/01\"\n}\n\nPUT /index/doc/2\n{\n \"date\": \"2014/03/01\"\n}\n\nGET /index/doc/_search?search_type=count\n{\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"match_all\" : {}\n }\n }\n },\n \"aggs\" : {\n \"by_time\" : {\n \"date_histogram\" : {\n \"field\" : \"date\",\n \"interval\" : \"-60000\",\n \"min_doc_count\" : 0,\n \"format\" : \"yyyy-MM-dd--HH:mm:ss.SSSZ\"\n }\n }\n }\n}\n\n-->\n{\n \"error\": \"ReduceSearchPhaseException[Failed to execute phase [query], [reduce] ]; nested: OutOfMemoryError[Java heap space]; \",\n \"status\": 503\n}\n```\n",
"comments": [
{
"body": "Without the \"min_doc_count\" there seems to be no problem. Still, negative intervals don't make much sense in date_histogram, so I'll make it raise IllegalArgument exception. \n",
"created_at": "2015-02-13T12:45:04Z"
},
{
"body": "For `histogram` there is already a check for negative intervals in place in HistogramParser, will just change it so also 0 is not allowed. \n",
"created_at": "2015-02-13T13:23:16Z"
},
{
"body": "> Without the \"min_doc_count\" there seems to be no problem. Still, negative intervals don't make much sense in date_histogram, so I'll make it raise IllegalArgument exception. \n\nFYI we prefer using ElasticsearchIllegalArgumentException instead of IllegalArgument whenever possible. The reason is that these exceptions are converted to 400 - Bad request on the rest layer.\n",
"created_at": "2015-02-13T13:24:56Z"
},
{
"body": "Yes, that's what I actually meant. Is there a good way to test for exceptions in the ElasticsearchIntegrationTest? All the exceptions on shard level seem to get wrapped into a SearchPhaseExecutionException, the only way to check the undelying cause seems to me to check the message string. At least that's what I see a lot in other tests, e.g. SimpleChildQuerySearchTests:\n\n```\n...\ncatch (SearchPhaseExecutionException e) {\n assertThat(e.getMessage(), containsString(\"[has_child] 'max_children' is less than 'min_children'\"));\n }\n...\n```\n",
"created_at": "2015-02-13T13:29:59Z"
},
{
"body": "Yeah, unfortunately I don't think we can do better (or I'm not aware of it).\n",
"created_at": "2015-02-13T13:33:04Z"
}
],
"number": 9634,
"title": "Aggregations: Prevent negative intervals in date_histogram"
} | {
"body": "Negative settings for `interval` in `date_histogram` could lead to OOM errors in conjunction with `min_doc_count`=0. This fix raises exceptions in the histogram builder and the TimeZoneRounding classes so that the query fails before this can happen.\n\nCloses #9634\n",
"number": 9690,
"review_comments": [],
"title": "Prevent negative intervals in date_histogram"
} | {
"commits": [
{
"message": "Add negative interval check in TimeZoneRoundings"
}
],
"files": [
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.rounding;\n \n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -63,6 +64,8 @@ public Builder(DateTimeUnit unit) {\n \n public Builder(TimeValue interval) {\n this.unit = null;\n+ if (interval.millis() < 1)\n+ throw new ElasticsearchIllegalArgumentException(\"Zero or negative time interval not supported\");\n this.interval = interval.millis();\n }\n \n@@ -309,6 +312,8 @@ static class UTCIntervalTimeZoneRounding extends TimeZoneRounding {\n }\n \n UTCIntervalTimeZoneRounding(long interval) {\n+ if (interval < 1)\n+ throw new ElasticsearchIllegalArgumentException(\"Zero or negative time interval not supported\");\n this.interval = interval;\n }\n \n@@ -356,6 +361,8 @@ static class TimeIntervalTimeZoneRounding extends TimeZoneRounding {\n }\n \n TimeIntervalTimeZoneRounding(long interval, DateTimeZone preTz, DateTimeZone postTz) {\n+ if (interval < 1)\n+ throw new ElasticsearchIllegalArgumentException(\"Zero or negative time interval not supported\");\n this.interval = interval;\n this.preTz = preTz;\n this.postTz = postTz;\n@@ -414,6 +421,8 @@ static class DayIntervalTimeZoneRounding extends TimeZoneRounding {\n }\n \n DayIntervalTimeZoneRounding(long interval, DateTimeZone preTz, DateTimeZone postTz) {\n+ if (interval < 1)\n+ throw new ElasticsearchIllegalArgumentException(\"Zero or negative time interval not supported\");\n this.interval = interval;\n this.preTz = preTz;\n this.postTz = postTz;",
"filename": "src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java",
"status": "modified"
},
{
"diff": "@@ -118,7 +118,7 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n }\n }\n \n- if (interval < 0) {\n+ if (interval < 1) {\n throw new SearchParseException(context, \"Missing required field [interval] for histogram aggregation [\" + aggregationName + \"]\");\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramParser.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations.bucket;\n \n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.settings.ImmutableSettings;\n@@ -43,13 +44,13 @@\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.List;\n+import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.*;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n /**\n@@ -1261,4 +1262,18 @@ public void testIssue6965() {\n assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key.getMillis()));\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n+\n+ /**\n+ * see issue #9634, negative interval in date_histogram should raise exception\n+ */\n+ public void testExeptionOnNegativerInterval() {\n+ try {\n+ client().prepareSearch(\"idx\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"date\").interval(-TimeUnit.DAYS.toMillis(1)).minDocCount(0)).execute()\n+ .actionGet();\n+ fail();\n+ } catch (SearchPhaseExecutionException e) {\n+ assertThat(e.getMessage(), containsString(\"IllegalArgumentException\"));\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n \n import com.carrotsearch.hppc.LongOpenHashSet;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n@@ -941,4 +942,17 @@ public void singleValuedField_WithExtendedBounds() throws Exception {\n }\n }\n \n+ /**\n+ * see issue #9634, negative interval in histogram should raise exception\n+ */\n+ public void testExeptionOnNegativerInterval() {\n+ try {\n+ client().prepareSearch(\"empty_bucket_idx\")\n+ .addAggregation(histogram(\"histo\").field(SINGLE_VALUED_FIELD_NAME).interval(-1).minDocCount(0)).execute().actionGet();\n+ fail();\n+ } catch (SearchPhaseExecutionException e) {\n+ assertThat(e.getMessage(), containsString(\"Missing required field [interval]\"));\n+ }\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/HistogramTests.java",
"status": "modified"
}
]
} |
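The fix above boils down to a guard clause: reject an interval smaller than 1 ms before the rounding machinery runs, so `min_doc_count: 0` can no longer force an effectively unbounded number of empty buckets. A minimal, self-contained sketch of that validation pattern (placeholder class name and a plain `IllegalArgumentException`, not the actual `TimeZoneRounding`/`HistogramParser` code shown in the diff):

```java
// Hypothetical sketch of the guard-clause validation described above: fail fast
// on a zero or negative interval instead of letting bucket generation run away.
public final class IntervalValidation {

    /** Returns the interval if it is strictly positive, otherwise throws. */
    static long requirePositiveInterval(long intervalMillis) {
        if (intervalMillis < 1) {
            throw new IllegalArgumentException(
                "Zero or negative time interval not supported: " + intervalMillis);
        }
        return intervalMillis;
    }

    public static void main(String[] args) {
        System.out.println(requirePositiveInterval(86_400_000L));   // one day: accepted
        try {
            requirePositiveInterval(-86_400_000L);                  // negative: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

In the PR itself the checks live in the `TimeZoneRounding` constructors (throwing `ElasticsearchIllegalArgumentException`, which the REST layer maps to a 400) and in `HistogramParser`.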
{
"body": "This parameter is serialized as a vLong while it could sometimes be negative.\n",
"comments": [
{
"body": "LGTM\n",
"created_at": "2014-10-14T10:21:33Z"
},
{
"body": "we just hit this in our BWC test here http://build-us-00.elasticsearch.org/job/es_bwc_1x/7637/CHECK_BRANCH=tags%2Fv1.1.2,jdk=JDK7,label=bwc/testReport/junit/org.elasticsearch.snapshots/SnapshotBackwardsCompatibilityTest/testSnapshotAndRestore/\n\nmaybe we should make sure the value to vlong is positive?\n",
"created_at": "2015-02-11T12:25:00Z"
},
{
"body": "@jpountz ping\n",
"created_at": "2015-02-11T12:25:13Z"
},
{
"body": "@s1monw we use -1 to signal stuff in the queue which are not updateTasks (as far as I can tell those are timeout notifications). See https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java#L306\n\nI think this is what is broken. We should have a common queue item base class that gives as the time inserted + a source and only have these in queue (UpdateTask should inherit from it).\n",
"created_at": "2015-02-11T12:29:46Z"
},
{
"body": "I just wanna fix the assertion here. We arlready have new code that transfers it correctly. WE can't fix this differently sorry.\n",
"created_at": "2015-02-11T12:30:48Z"
},
{
"body": "^^ by that I mean for nodes < 1.4.0\n",
"created_at": "2015-02-11T12:31:16Z"
},
{
"body": "My suggestion implies we never write negative values. if we never have to write a negative value, I think the problem is solved for <1.4 as well. \n\n> maybe we should make sure the value to vlong is positive?\n\nmaybe you meant writing a 0 where we see -1 (which is a valid value now). This is also a workaround but is not the \"right\" thing. Just wanted to share the idea of a proper fix.\n",
"created_at": "2015-02-11T12:35:11Z"
},
{
"body": "again the issue is fixed we use `writeLong` if you wanna communicate this differently that's all fine with me but it won't fix anything.\n",
"created_at": "2015-02-11T12:41:11Z"
},
{
"body": "@s1monw are you suggesting we should fix it by removing the assertion?\n",
"created_at": "2015-02-11T13:02:24Z"
},
{
"body": "> @s1monw are you suggesting we should fix it by removing the assertion?\n\nno I am suggesting to fix the BWC code to send `out.writeVLong(Math.max(0, timeInQueue));` since the serialization will be broken if you pass a neg value to `writeVLong` no?\n",
"created_at": "2015-02-11T14:54:43Z"
},
{
"body": "Makes sense to me. +1\n",
"created_at": "2015-02-11T15:51:24Z"
}
],
"number": 8077,
"title": "Fix serialization of PendingClusterTask.timeInQueue."
} | {
"body": "At the moment we sometime submit generic runnables, which make life slightly harder when generated pending task list which have to account for them. This commit adds an abstract TimedPrioritizedRunnable class which should always be used. This class also automatically measures time in queue, which is needed for the pending task reporting.\n\n Relates to #8077\n\n Closes #9354\n",
"number": 9671,
"review_comments": [
{
"body": "this is no longer needed. I'll fix this.\n",
"created_at": "2015-02-12T10:07:35Z"
}
],
"title": "Introduce TimedPrioritizedRunnable base class to all commands that go into InternalClusterService.updateTasksExecutor"
} | {
"commits": [
{
"message": "Internal: Introduce TimedPrioritizedRunnable base class to all commands that go into InternalClusterService.updateTasksExecutor\n\n At the moment we sometime submit generic runnables, which make life slightly harder when generated pending task list which have to account for them. This commit adds an abstract TimedPrioritizedRunnable class which should always be used. This class also automatically measures time in queue, which is needed for the pending task reporting.\n\n Relates to #8077\n\n Closes #9354"
}
],
"files": [
{
"diff": "@@ -235,7 +235,7 @@ public void add(final TimeValue timeout, final TimeoutClusterStateListener liste\n }\n // call the post added notification on the same event thread\n try {\n- updateTasksExecutor.execute(new PrioritizedRunnable(Priority.HIGH) {\n+ updateTasksExecutor.execute(new TimedPrioritizedRunnable(Priority.HIGH, \"_add_listener_\") {\n @Override\n public void run() {\n NotifyTimeout notifyTimeout = new NotifyTimeout(listener, timeout);\n@@ -272,7 +272,7 @@ public void run() {\n threadPool.generic().execute(new Runnable() {\n @Override\n public void run() {\n- timeoutUpdateTask.onFailure(task.source, new ProcessClusterEventTimeoutException(timeoutUpdateTask.timeout(), task.source));\n+ timeoutUpdateTask.onFailure(task.source(), new ProcessClusterEventTimeoutException(timeoutUpdateTask.timeout(), task.source()));\n }\n });\n }\n@@ -291,35 +291,54 @@ public void run() {\n \n @Override\n public List<PendingClusterTask> pendingTasks() {\n- long now = System.currentTimeMillis();\n PrioritizedEsThreadPoolExecutor.Pending[] pendings = updateTasksExecutor.getPending();\n List<PendingClusterTask> pendingClusterTasks = new ArrayList<>(pendings.length);\n for (PrioritizedEsThreadPoolExecutor.Pending pending : pendings) {\n final String source;\n final long timeInQueue;\n- if (pending.task instanceof UpdateTask) {\n- UpdateTask updateTask = (UpdateTask) pending.task;\n- source = updateTask.source;\n- timeInQueue = now - updateTask.addedAt;\n+ if (pending.task instanceof TimedPrioritizedRunnable) {\n+ TimedPrioritizedRunnable runnable = (TimedPrioritizedRunnable) pending.task;\n+ source = runnable.source();\n+ timeInQueue = runnable.timeSinceCreatedInMillis();\n } else {\n+ assert false : \"expected TimedPrioritizedRunnable got \" + pending.task.getClass();\n source = \"unknown\";\n- timeInQueue = -1;\n+ timeInQueue = 0;\n }\n \n pendingClusterTasks.add(new PendingClusterTask(pending.insertionOrder, pending.priority, new StringText(source), timeInQueue, pending.executing));\n }\n return pendingClusterTasks;\n }\n \n- class UpdateTask extends PrioritizedRunnable {\n+ static abstract class TimedPrioritizedRunnable extends PrioritizedRunnable {\n+ private final long creationTime;\n+ protected final String source;\n+\n+ protected TimedPrioritizedRunnable(Priority priority, String source) {\n+ super(priority);\n+ this.source = source;\n+ this.creationTime = System.currentTimeMillis();\n+ }\n+\n+ public long timeSinceCreatedInMillis() {\n+ // max with 0 to make sure we always return a non negative number\n+ // even if time shifts.\n+ return Math.max(0, System.currentTimeMillis() - creationTime);\n+ }\n+\n+ public String source() {\n+ return source;\n+ }\n+ }\n+\n+ class UpdateTask extends TimedPrioritizedRunnable {\n \n- public final String source;\n public final ClusterStateUpdateTask updateTask;\n- public final long addedAt = System.currentTimeMillis();\n+\n \n UpdateTask(String source, Priority priority, ClusterStateUpdateTask updateTask) {\n- super(priority);\n- this.source = source;\n+ super(priority, source);\n this.updateTask = updateTask;\n }\n ",
"filename": "src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,8 @@ public PendingClusterTask() {\n }\n \n public PendingClusterTask(long insertOrder, Priority priority, Text source, long timeInQueue, boolean executing) {\n+ assert timeInQueue >= 0 : \"got a negative timeInQueue [\" + timeInQueue + \"]\";\n+ assert insertOrder >= 0 : \"got a negative insertOrder [\" + insertOrder + \"]\";\n this.insertOrder = insertOrder;\n this.priority = priority;\n this.source = source;\n@@ -99,7 +101,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeLong(timeInQueue);\n } else {\n out.writeVLong(Math.max(0, timeInQueue));\n- }\n+ }\n if (out.getVersion().onOrAfter(Version.V_1_3_0)) {\n out.writeBoolean(executing);\n }",
"filename": "src/main/java/org/elasticsearch/cluster/service/PendingClusterTask.java",
"status": "modified"
}
]
} |
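The heart of the refactor above is that every item placed on the cluster-state queue carries its own creation timestamp and source, so pending-task reporting never needs a `-1` sentinel for unknown entries. A hedged sketch of that shape, with made-up names rather than the real `InternalClusterService` types:

```java
// Illustrative base class: each queued task records when it was created and
// exposes a clamped, never-negative time-in-queue plus a human-readable source.
public abstract class TimedTask implements Runnable {
    private final long creationTimeMillis = System.currentTimeMillis();
    private final String source;

    protected TimedTask(String source) {
        this.source = source;
    }

    /** Clamped to 0 so clock shifts can never produce a negative duration. */
    public final long timeInQueueMillis() {
        return Math.max(0, System.currentTimeMillis() - creationTimeMillis);
    }

    public final String source() {
        return source;
    }
}
```

With this in place, the pending-task listing can assert that everything in the queue is such a task and report `timeInQueueMillis()` directly, which is exactly what removes the negative value that later trips `writeVLong`.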
{
"body": "This parameter is serialized as a vLong while it could sometimes be negative.\n",
"comments": [
{
"body": "LGTM\n",
"created_at": "2014-10-14T10:21:33Z"
},
{
"body": "we just hit this in our BWC test here http://build-us-00.elasticsearch.org/job/es_bwc_1x/7637/CHECK_BRANCH=tags%2Fv1.1.2,jdk=JDK7,label=bwc/testReport/junit/org.elasticsearch.snapshots/SnapshotBackwardsCompatibilityTest/testSnapshotAndRestore/\n\nmaybe we should make sure the value to vlong is positive?\n",
"created_at": "2015-02-11T12:25:00Z"
},
{
"body": "@jpountz ping\n",
"created_at": "2015-02-11T12:25:13Z"
},
{
"body": "@s1monw we use -1 to signal stuff in the queue which are not updateTasks (as far as I can tell those are timeout notifications). See https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java#L306\n\nI think this is what is broken. We should have a common queue item base class that gives as the time inserted + a source and only have these in queue (UpdateTask should inherit from it).\n",
"created_at": "2015-02-11T12:29:46Z"
},
{
"body": "I just wanna fix the assertion here. We arlready have new code that transfers it correctly. WE can't fix this differently sorry.\n",
"created_at": "2015-02-11T12:30:48Z"
},
{
"body": "^^ by that I mean for nodes < 1.4.0\n",
"created_at": "2015-02-11T12:31:16Z"
},
{
"body": "My suggestion implies we never write negative values. if we never have to write a negative value, I think the problem is solved for <1.4 as well. \n\n> maybe we should make sure the value to vlong is positive?\n\nmaybe you meant writing a 0 where we see -1 (which is a valid value now). This is also a workaround but is not the \"right\" thing. Just wanted to share the idea of a proper fix.\n",
"created_at": "2015-02-11T12:35:11Z"
},
{
"body": "again the issue is fixed we use `writeLong` if you wanna communicate this differently that's all fine with me but it won't fix anything.\n",
"created_at": "2015-02-11T12:41:11Z"
},
{
"body": "@s1monw are you suggesting we should fix it by removing the assertion?\n",
"created_at": "2015-02-11T13:02:24Z"
},
{
"body": "> @s1monw are you suggesting we should fix it by removing the assertion?\n\nno I am suggesting to fix the BWC code to send `out.writeVLong(Math.max(0, timeInQueue));` since the serialization will be broken if you pass a neg value to `writeVLong` no?\n",
"created_at": "2015-02-11T14:54:43Z"
},
{
"body": "Makes sense to me. +1\n",
"created_at": "2015-02-11T15:51:24Z"
}
],
"number": 8077,
"title": "Fix serialization of PendingClusterTask.timeInQueue."
} | {
"body": "`#writeVLong` can only serialize positive values, yet this BWC code\nin `PendingClusterTask` passes occational `-1` causing assertions to trip.\nIt also yields completely wrong values ie. if `-1` is deserialized it yields\n`9223372036854775807`. This commit ensure that `timeInQueue` is positive ie.\nat least `0`\n\nRelates to #8077\n\nNote this PR is against `1.x`\n",
"number": 9662,
"review_comments": [
{
"body": "Indentation issue?\n",
"created_at": "2015-02-11T21:34:04Z"
}
],
"title": "Ensure that we don't pass negative `timeInQueue` to writeVLong"
} | {
"commits": [
{
"message": "Ensure that we don't pass negative `timeInQueue` to writeVLong\n\n`#writeVLong` can only serialize positive values, yet this BWC code\nin `PendingClusterTask` passes occational `-1` causing assertions to trip.\nIt also yields completely wrong values ie. if `-1 is deserialized it yields\n`9223372036854775807`. This commit ensure that `timeInQueue` is positive ie.\nat least `0`\n\nRelates to #8077"
}
],
"files": [
{
"diff": "@@ -98,8 +98,8 @@ public void writeTo(StreamOutput out) throws IOException {\n // timeInQueue is set to -1 when unknown and can be negative if time goes backwards\n out.writeLong(timeInQueue);\n } else {\n- out.writeVLong(timeInQueue);\n- }\n+ out.writeVLong(Math.max(0, timeInQueue));\n+ }\n if (out.getVersion().onOrAfter(Version.V_1_3_0)) {\n out.writeBoolean(executing);\n }",
"filename": "src/main/java/org/elasticsearch/cluster/service/PendingClusterTask.java",
"status": "modified"
}
]
} |
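Why `writeVLong` chokes on `-1` in the first place: a 7-bits-per-byte varint encoding effectively treats its input as unsigned, so a negative long (all high bits set) never terminates early and balloons to ten bytes, and a reader that assumes non-negative input decodes nonsense. The toy encoder below is a generic sketch, not the actual `StreamOutput` implementation; it only illustrates why clamping with `Math.max(0, timeInQueue)` at the write site is the safe backwards-compatibility fix.

```java
// Generic 7-bit varint writer (NOT Elasticsearch's StreamOutput): shows how a
// negative long blows up the encoding, and how clamping avoids it entirely.
import java.io.ByteArrayOutputStream;

public final class VLongDemo {

    static byte[] writeVLong(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;   // unsigned shift: for -1 all 64 set bits must be emitted
        }
        out.write((int) value);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        System.out.println(writeVLong(1234L).length);                       // 2 bytes
        System.out.println(writeVLong(-1L).length);                         // 10 bytes
        long timeInQueue = -1L;                                             // old sentinel value
        System.out.println(writeVLong(Math.max(0, timeInQueue)).length);    // clamped: 1 byte
    }
}
```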
{
"body": "Even though we have ThreadPool.OPTIMIZE pool with size=1, if the incoming optimize request does not wait_for_completion, then Lucene's IndexWriter runs the optimize in the background and the request returns immediately, freeing the thread pool to run another optimize.\n\nSo if the application submits 10 optimize requests (without wait_for_completion), all 10 will run concurrently, which is bad.\n\nIf the optimize is for upgrade, or flush is requested, InternalEngine.optimize does submit a waitForMerges call back to the OPTIMIZE pool, but that's at the end, and so all 10 incoming requests will still run at once I think?\n",
"comments": [
{
"body": "> If the optimize is for upgrade, or flush is requested, InternalEngine.optimize does submit a waitForMerges call back to the OPTIMIZE pool, but that's at the end, and so all 10 incoming requests will still run at once I think?\n\nI think that is correct, or rather whatever other optimize requests have already been queued will be run before the blocking wait for merges for the first request that ran. So I think we need to somehow push the waiting thread to the front of the queue, and always have that regardless of the optimize settings (except for wait_for_completion, which of course just runs in the foreground and holds the single optimize thread).\n",
"created_at": "2015-02-10T20:58:51Z"
},
{
"body": "Ok, here is my proposal after speaking with Shay:\n- For 1.4.3: Change the default for `wait_for_completion` to `true`\n- For 1.5.0: Remove `wait_for_completion` (and `wait_for_merges` in the optimize api)\n- For 2.0: Once we have the task api, try to add back some of this async functionality as a long running task that can be managed.\n",
"created_at": "2015-02-10T21:39:43Z"
},
{
"body": "+1 for this plan.\n\nSeparately, it would be nice if we could simply call IW.forceMerge(), which waits itself. This would fix #8923 ... must we really hold the readLock when calling forceMerge? Anyway, that can be done separately...\n",
"created_at": "2015-02-10T21:46:04Z"
}
],
"number": 9638,
"title": "Core: only one optimize operation should run at once"
} | {
"body": "This has been very trappy. Rather than continue to allow buggy behavior\nof having upgrade/optimize requests sidestep the single shard per node\nlimits optimize is supposed to be subject to, this removes\nthe ability to run the upgrade/optimize async.\n\ncloses #9638\n",
"number": 9640,
"review_comments": [
{
"body": "Wonderful :)\n",
"created_at": "2015-02-11T19:09:55Z"
}
],
"title": "Remove ability to run optimize and upgrade async"
} | {
"commits": [
{
"message": "Core: Remove ability to run optimize and upgrade async\n\nThis has been very trappy. Rather than continue to allow buggy behavior\nof having upgrade/optimize requests sidestep the single shard per node\nlimits optimize is supposed to be subject to, this removes\nthe ability to run the upgrade/optimize async.\n\ncloses #9638"
}
],
"files": [
{
"diff": "@@ -7,6 +7,10 @@ operations (and relates to the number of segments a Lucene index holds\n within each shard). The optimize operation allows to reduce the number\n of segments by merging them.\n \n+This call will block until the optimize is complete. If the http connection\n+is lost, the request will continue in the background, and\n+any new requests will block until the previous optimize is complete.\n+\n [source,js]\n --------------------------------------------------\n $ curl -XPOST 'http://localhost:9200/twitter/_optimize'\n@@ -33,10 +37,6 @@ deletes. Defaults to `false`. Note that this won't override the\n `flush`:: Should a flush be performed after the optimize. Defaults to\n `true`.\n \n-`wait_for_merge`:: Should the request wait for the merge to end. Defaults\n-to `true`. Note, a merge can potentially be a very heavy operation, so\n-it might make sense to run it set to `false`.\n-\n [float]\n [[optimize-multi-index]]\n === Multi Index",
"filename": "docs/reference/indices/optimize.asciidoc",
"status": "modified"
},
{
"diff": "@@ -17,15 +17,9 @@ NOTE: Upgrading is an I/O intensive operation, and is limited to processing a\n single shard per node at a time. It also is not allowed to run at the same\n time as optimize.\n \n-[float]\n-[[upgrade-parameters]]\n-==== Request Parameters\n-\n-The `upgrade` API accepts the following request parameters:\n-\n-[horizontal]\n-`wait_for_completion`:: Should the request wait for the upgrade to complete. Defaults\n-to `false`.\n+This call will block until the upgrade is complete. If the http connection\n+is lost, the request will continue in the background, and\n+any new requests will block until the previous upgrade is complete.\n \n [float]\n === Check upgrade status",
"filename": "docs/reference/indices/upgrade.asciidoc",
"status": "modified"
},
{
"diff": "@@ -30,9 +30,6 @@\n * A request to optimize one or more indices. In order to optimize on all the indices, pass an empty array or\n * <tt>null</tt> for the indices.\n * <p/>\n- * <p>{@link #waitForMerge(boolean)} allows to control if the call will block until the optimize completes and\n- * defaults to <tt>true</tt>.\n- * <p/>\n * <p>{@link #maxNumSegments(int)} allows to control the number of segments to optimize down to. By default, will\n * cause the optimize process to optimize down to half the configured number of segments.\n *\n@@ -43,14 +40,12 @@\n public class OptimizeRequest extends BroadcastOperationRequest<OptimizeRequest> {\n \n public static final class Defaults {\n- public static final boolean WAIT_FOR_MERGE = true;\n public static final int MAX_NUM_SEGMENTS = -1;\n public static final boolean ONLY_EXPUNGE_DELETES = false;\n public static final boolean FLUSH = true;\n public static final boolean UPGRADE = false;\n }\n-\n- private boolean waitForMerge = Defaults.WAIT_FOR_MERGE;\n+ \n private int maxNumSegments = Defaults.MAX_NUM_SEGMENTS;\n private boolean onlyExpungeDeletes = Defaults.ONLY_EXPUNGE_DELETES;\n private boolean flush = Defaults.FLUSH;\n@@ -69,21 +64,6 @@ public OptimizeRequest() {\n \n }\n \n- /**\n- * Should the call block until the optimize completes. Defaults to <tt>true</tt>.\n- */\n- public boolean waitForMerge() {\n- return waitForMerge;\n- }\n-\n- /**\n- * Should the call block until the optimize completes. Defaults to <tt>true</tt>.\n- */\n- public OptimizeRequest waitForMerge(boolean waitForMerge) {\n- this.waitForMerge = waitForMerge;\n- return this;\n- }\n-\n /**\n * Will optimize the index down to <= maxNumSegments. By default, will cause the optimize\n * process to optimize down to half the configured number of segments.\n@@ -151,7 +131,6 @@ public OptimizeRequest upgrade(boolean upgrade) {\n \n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n- waitForMerge = in.readBoolean();\n maxNumSegments = in.readInt();\n onlyExpungeDeletes = in.readBoolean();\n flush = in.readBoolean();\n@@ -160,7 +139,6 @@ public void readFrom(StreamInput in) throws IOException {\n \n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n- out.writeBoolean(waitForMerge);\n out.writeInt(maxNumSegments);\n out.writeBoolean(onlyExpungeDeletes);\n out.writeBoolean(flush);\n@@ -170,8 +148,7 @@ public void writeTo(StreamOutput out) throws IOException {\n @Override\n public String toString() {\n return \"OptimizeRequest{\" +\n- \"waitForMerge=\" + waitForMerge +\n- \", maxNumSegments=\" + maxNumSegments +\n+ \"maxNumSegments=\" + maxNumSegments +\n \", onlyExpungeDeletes=\" + onlyExpungeDeletes +\n \", flush=\" + flush +\n \", upgrade=\" + upgrade +",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/optimize/OptimizeRequest.java",
"status": "modified"
},
{
"diff": "@@ -27,9 +27,6 @@\n * A request to optimize one or more indices. In order to optimize on all the indices, pass an empty array or\n * <tt>null</tt> for the indices.\n * <p/>\n- * <p>{@link #setWaitForMerge(boolean)} allows to control if the call will block until the optimize completes and\n- * defaults to <tt>true</tt>.\n- * <p/>\n * <p>{@link #setMaxNumSegments(int)} allows to control the number of segments to optimize down to. By default, will\n * cause the optimize process to optimize down to half the configured number of segments.\n */\n@@ -39,14 +36,6 @@ public OptimizeRequestBuilder(IndicesAdminClient indicesClient) {\n super(indicesClient, new OptimizeRequest());\n }\n \n- /**\n- * Should the call block until the optimize completes. Defaults to <tt>true</tt>.\n- */\n- public OptimizeRequestBuilder setWaitForMerge(boolean waitForMerge) {\n- request.waitForMerge(waitForMerge);\n- return this;\n- }\n-\n /**\n * Will optimize the index down to <= maxNumSegments. By default, will cause the optimize\n * process to optimize down to half the configured number of segments.",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/optimize/OptimizeRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -232,12 +232,12 @@ public Condition newCondition() {\n /**\n * Optimizes to 1 segment\n */\n- abstract void forceMerge(boolean flush, boolean waitForMerge);\n+ abstract void forceMerge(boolean flush);\n \n /**\n * Triggers a forced merge on this engine\n */\n- public abstract void forceMerge(boolean flush, boolean waitForMerge, int maxNumSegments, boolean onlyExpungeDeletes, boolean upgrade) throws EngineException;\n+ public abstract void forceMerge(boolean flush, int maxNumSegments, boolean onlyExpungeDeletes, boolean upgrade) throws EngineException;\n \n /**\n * Snapshots the index and returns a handle to it. Will always try and \"commit\" the",
"filename": "src/main/java/org/elasticsearch/index/engine/Engine.java",
"status": "modified"
},
{
"diff": "@@ -817,12 +817,12 @@ private void waitForMerges(boolean flushAfter, boolean upgrade) {\n }\n \n @Override\n- public void forceMerge(boolean flush, boolean waitForMerge) {\n- forceMerge(flush, waitForMerge, 1, false, false);\n+ public void forceMerge(boolean flush) {\n+ forceMerge(flush, 1, false, false);\n }\n \n @Override\n- public void forceMerge(final boolean flush, boolean waitForMerge, int maxNumSegments, boolean onlyExpungeDeletes, final boolean upgrade) throws EngineException {\n+ public void forceMerge(final boolean flush, int maxNumSegments, boolean onlyExpungeDeletes, final boolean upgrade) throws EngineException {\n if (optimizeMutex.compareAndSet(false, true)) {\n try (ReleasableLock _ = readLock.acquire()) {\n ensureOpen();\n@@ -855,23 +855,7 @@ public void forceMerge(final boolean flush, boolean waitForMerge, int maxNumSegm\n }\n }\n \n- // wait for the merges outside of the read lock\n- if (waitForMerge) {\n- waitForMerges(flush, upgrade);\n- } else if (flush || upgrade) {\n- // we only need to monitor merges for async calls if we are going to flush\n- engineConfig.getThreadPool().executor(ThreadPool.Names.OPTIMIZE).execute(new AbstractRunnable() {\n- @Override\n- public void onFailure(Throwable t) {\n- logger.error(\"Exception while waiting for merges asynchronously after optimize\", t);\n- }\n-\n- @Override\n- protected void doRun() throws Exception {\n- waitForMerges(flush, upgrade);\n- }\n- });\n- }\n+ waitForMerges(flush, upgrade);\n }\n \n ",
"filename": "src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -625,8 +625,7 @@ public void optimize(OptimizeRequest optimize) throws ElasticsearchException {\n if (logger.isTraceEnabled()) {\n logger.trace(\"optimize with {}\", optimize);\n }\n- engine().forceMerge(optimize.flush(), optimize.waitForMerge(), optimize\n- .maxNumSegments(), optimize.onlyExpungeDeletes(), optimize.upgrade());\n+ engine().forceMerge(optimize.flush(), optimize.maxNumSegments(), optimize.onlyExpungeDeletes(), optimize.upgrade());\n }\n \n public SnapshotIndexCommit snapshotIndex() throws EngineException {",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -55,7 +55,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n OptimizeRequest optimizeRequest = new OptimizeRequest(Strings.splitStringByCommaToArray(request.param(\"index\")));\n optimizeRequest.listenerThreaded(false);\n optimizeRequest.indicesOptions(IndicesOptions.fromRequest(request, optimizeRequest.indicesOptions()));\n- optimizeRequest.waitForMerge(request.paramAsBoolean(\"wait_for_merge\", optimizeRequest.waitForMerge()));\n optimizeRequest.maxNumSegments(request.paramAsInt(\"max_num_segments\", optimizeRequest.maxNumSegments()));\n optimizeRequest.onlyExpungeDeletes(request.paramAsBoolean(\"only_expunge_deletes\", optimizeRequest.onlyExpungeDeletes()));\n optimizeRequest.flush(request.paramAsBoolean(\"flush\", optimizeRequest.flush()));",
"filename": "src/main/java/org/elasticsearch/rest/action/admin/indices/optimize/RestOptimizeAction.java",
"status": "modified"
},
{
"diff": "@@ -90,7 +90,6 @@ public RestResponse buildResponse(IndicesSegmentResponse response, XContentBuild\n \n void handlePost(RestRequest request, RestChannel channel, Client client) {\n OptimizeRequest optimizeReq = new OptimizeRequest(Strings.splitStringByCommaToArray(request.param(\"index\")));\n- optimizeReq.waitForMerge(request.paramAsBoolean(\"wait_for_completion\", false));\n optimizeReq.flush(true);\n optimizeReq.upgrade(true);\n optimizeReq.maxNumSegments(Integer.MAX_VALUE); // we just want to upgrade the segments, not actually optimize to a single segment",
"filename": "src/main/java/org/elasticsearch/rest/action/admin/indices/upgrade/RestUpgradeAction.java",
"status": "modified"
},
{
"diff": "@@ -367,7 +367,7 @@ public void testReusePeerRecovery() throws Exception {\n }\n logger.info(\"Running Cluster Health\");\n ensureGreen();\n- client().admin().indices().prepareOptimize(\"test\").setWaitForMerge(true).setMaxNumSegments(100).get(); // just wait for merges\n+ client().admin().indices().prepareOptimize(\"test\").setMaxNumSegments(100).get(); // just wait for merges\n client().admin().indices().prepareFlush().setWaitIfOngoing(true).setForce(true).get();\n \n logger.info(\"--> disabling allocation while the cluster is shut down\");",
"filename": "src/test/java/org/elasticsearch/gateway/RecoveryFromGatewayTests.java",
"status": "modified"
},
{
"diff": "@@ -411,30 +411,9 @@ public void testVerboseSegments() throws Exception {\n public void testSegmentsWithMergeFlag() throws Exception {\n final Store store = createStore();\n ConcurrentMergeSchedulerProvider mergeSchedulerProvider = new ConcurrentMergeSchedulerProvider(shardId, EMPTY_SETTINGS, threadPool, new IndexSettingsService(shardId.index(), EMPTY_SETTINGS));\n- final AtomicReference<CountDownLatch> waitTillMerge = new AtomicReference<>();\n- final AtomicReference<CountDownLatch> waitForMerge = new AtomicReference<>();\n- mergeSchedulerProvider.addListener(new MergeSchedulerProvider.Listener() {\n- @Override\n- public void beforeMerge(OnGoingMerge merge) {\n- try {\n- if (waitTillMerge.get() != null) {\n- waitTillMerge.get().countDown();\n- }\n- if (waitForMerge.get() != null) {\n- waitForMerge.get().await();\n- }\n- } catch (InterruptedException e) {\n- throw ExceptionsHelper.convertToRuntime(e);\n- }\n- }\n-\n- @Override\n- public void afterMerge(OnGoingMerge merge) {\n- }\n- });\n-\n IndexSettingsService indexSettingsService = new IndexSettingsService(shardId.index(), ImmutableSettings.builder().put(defaultSettings).put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build());\n final Engine engine = createEngine(indexSettingsService, store, createTranslog(), mergeSchedulerProvider);\n+ \n ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n engine.index(index);\n@@ -456,24 +435,13 @@ public void afterMerge(OnGoingMerge merge) {\n for (Segment segment : segments) {\n assertThat(segment.getMergeId(), nullValue());\n }\n-\n- waitTillMerge.set(new CountDownLatch(1));\n- waitForMerge.set(new CountDownLatch(1));\n- engine.forceMerge(false, false);\n- waitTillMerge.get().await();\n-\n- for (Segment segment : engine.segments(false)) {\n- assertThat(segment.getMergeId(), notNullValue());\n- }\n-\n- waitForMerge.get().countDown();\n-\n+ \n index = new Engine.Index(null, newUid(\"4\"), doc);\n engine.index(index);\n engine.flush();\n final long gen1 = store.readLastCommittedSegmentsInfo().getGeneration();\n // now, optimize and wait for merges, see that we have no merge flag\n- engine.forceMerge(true, true);\n+ engine.forceMerge(true);\n \n for (Segment segment : engine.segments(false)) {\n assertThat(segment.getMergeId(), nullValue());\n@@ -483,25 +451,14 @@ public void afterMerge(OnGoingMerge merge) {\n \n final boolean flush = randomBoolean();\n final long gen2 = store.readLastCommittedSegmentsInfo().getGeneration();\n- engine.forceMerge(flush, false);\n- waitTillMerge.get().await();\n+ engine.forceMerge(flush);\n for (Segment segment : engine.segments(false)) {\n assertThat(segment.getMergeId(), nullValue());\n }\n- waitForMerge.get().countDown();\n \n if (flush) {\n- awaitBusy(new Predicate<Object>() {\n- @Override\n- public boolean apply(Object o) {\n- try {\n- // we should have had just 1 merge, so last generation should be exact\n- return store.readLastCommittedSegmentsInfo().getLastGeneration() == gen2;\n- } catch (IOException e) {\n- throw ExceptionsHelper.convertToRuntime(e);\n- }\n- }\n- });\n+ // we should have had just 1 merge, so last generation should be exact\n+ assertEquals(gen2 + 1, store.readLastCommittedSegmentsInfo().getLastGeneration());\n }\n \n engine.close();",
"filename": "src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
},
{
"diff": "@@ -215,7 +215,7 @@ public void testUpdateThrottleSettings() {\n \n // Optimize does a waitForMerges, which we must do to make sure all in-flight (throttled) merges finish:\n logger.info(\"test: optimize\");\n- client().admin().indices().prepareOptimize(\"test\").setWaitForMerge(true).get();\n+ client().admin().indices().prepareOptimize(\"test\").get();\n logger.info(\"test: optimize done\");\n \n // Record current throttling so far\n@@ -253,7 +253,7 @@ public void testUpdateThrottleSettings() {\n // when ElasticsearchIntegrationTest.after tries to remove indices created by the test:\n \n // Wait for merges to finish\n- client().admin().indices().prepareOptimize(\"test\").setWaitForMerge(true).get();\n+ client().admin().indices().prepareOptimize(\"test\").get();\n flush();\n \n logger.info(\"test: test done\");\n@@ -369,7 +369,7 @@ public void testUpdateMergeMaxThreadCount() {\n .put(ConcurrentMergeSchedulerProvider.MAX_THREAD_COUNT, \"1\")\n )\n .get();\n-\n+ \n // Make sure we log the change:\n assertTrue(mockAppender.sawUpdateMaxThreadCount);\n ",
"filename": "src/test/java/org/elasticsearch/indices/settings/UpdateSettingsTests.java",
"status": "modified"
},
{
"diff": "@@ -380,7 +380,7 @@ public void throttleStats() throws Exception {\n // Optimize & flush and wait; else we sometimes get a \"Delete Index failed - not acked\"\n // when ElasticsearchIntegrationTest.after tries to remove indices created by the test:\n logger.info(\"test: now optimize\");\n- client().admin().indices().prepareOptimize(\"test\").setWaitForMerge(true).get();\n+ client().admin().indices().prepareOptimize(\"test\").get();\n flush();\n logger.info(\"test: test done\");\n }\n@@ -517,7 +517,7 @@ public void testMergeStats() {\n client().prepareIndex(\"test1\", \"type2\", Integer.toString(i)).setSource(\"field\", \"value\").execute().actionGet();\n client().admin().indices().prepareFlush().execute().actionGet();\n }\n- client().admin().indices().prepareOptimize().setWaitForMerge(true).setMaxNumSegments(1).execute().actionGet();\n+ client().admin().indices().prepareOptimize().setMaxNumSegments(1).execute().actionGet();\n stats = client().admin().indices().prepareStats()\n .setMerge(true)\n .execute().actionGet();\n@@ -544,7 +544,7 @@ public void testSegmentsStats() {\n assertThat(stats.getTotal().getSegments().getVersionMapMemoryInBytes(), greaterThan(0l));\n \n client().admin().indices().prepareFlush().get();\n- client().admin().indices().prepareOptimize().setWaitForMerge(true).setMaxNumSegments(1).execute().actionGet();\n+ client().admin().indices().prepareOptimize().setMaxNumSegments(1).execute().actionGet();\n stats = client().admin().indices().prepareStats().setSegments(true).get();\n \n assertThat(stats.getTotal().getSegments(), notNullValue());",
"filename": "src/test/java/org/elasticsearch/indices/stats/IndexStatsTests.java",
"status": "modified"
},
{
"diff": "@@ -157,7 +157,7 @@ public boolean apply(Object o) {\n logger.info(\"--> Single index upgrade complete\");\n \n logger.info(\"--> Running upgrade on the rest of the indexes\");\n- runUpgrade(httpClient, null, \"wait_for_completion\", \"true\");\n+ runUpgrade(httpClient, null);\n logSegmentsState();\n logger.info(\"--> Full upgrade complete\");\n assertUpgraded(httpClient, null);",
"filename": "src/test/java/org/elasticsearch/rest/action/admin/indices/upgrade/UpgradeTest.java",
"status": "modified"
},
{
"diff": "@@ -1929,7 +1929,7 @@ public void testParentChildCaching() throws Exception {\n client().prepareIndex(\"test\", \"child\", \"c1\").setParent(\"p1\").setSource(\"c_field\", \"blue\").get();\n client().prepareIndex(\"test\", \"child\", \"c2\").setParent(\"p1\").setSource(\"c_field\", \"red\").get();\n client().prepareIndex(\"test\", \"child\", \"c3\").setParent(\"p2\").setSource(\"c_field\", \"red\").get();\n- client().admin().indices().prepareOptimize(\"test\").setFlush(true).setWaitForMerge(true).get();\n+ client().admin().indices().prepareOptimize(\"test\").setFlush(true).get();\n client().prepareIndex(\"test\", \"parent\", \"p3\").setSource(\"p_field\", \"p_value3\").get();\n client().prepareIndex(\"test\", \"parent\", \"p4\").setSource(\"p_field\", \"p_value4\").get();\n client().prepareIndex(\"test\", \"child\", \"c4\").setParent(\"p3\").setSource(\"c_field\", \"green\").get();",
"filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java",
"status": "modified"
},
{
"diff": "@@ -1414,7 +1414,7 @@ public void testSnapshotMoreThanOnce() throws ExecutionException, InterruptedExc\n }\n indexRandom(true, builders);\n flushAndRefresh();\n- assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setFlush(true).setWaitForMerge(true).setMaxNumSegments(1).get());\n+ assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setFlush(true).setMaxNumSegments(1).get());\n \n CreateSnapshotResponse createSnapshotResponseFirst = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test\").setWaitForCompletion(true).setIndices(\"test\").get();\n assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), greaterThan(0));",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
},
{
"diff": "@@ -188,7 +188,7 @@ public void testSnapshotMoreThanOnce() throws ExecutionException, InterruptedExc\n }\n indexRandom(true, builders);\n flushAndRefresh();\n- assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setFlush(true).setWaitForMerge(true).setMaxNumSegments(1).get());\n+ assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setFlush(true).setMaxNumSegments(1).get());\n \n CreateSnapshotResponse createSnapshotResponseFirst = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test\").setWaitForCompletion(true).setIndices(\"test\").get();\n assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), greaterThan(0));",
"filename": "src/test/java/org/elasticsearch/snapshots/SnapshotBackwardsCompatibilityTest.java",
"status": "modified"
}
]
} |
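The behavioural point of the change above is easiest to see with a toy model: if the work submitted to a size-1 pool blocks until the merge finishes, the pool itself enforces one optimize at a time; if the call returns immediately and only the wait is deferred, the single thread is freed and every queued request effectively runs at once. A hedged sketch of the blocking variant using plain `java.util.concurrent`, not the real engine code:

```java
// Size-1 executor plus a force-merge stand-in that blocks until "merges" finish:
// the three submitted requests run strictly one after another.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public final class SerializedForceMergeDemo {

    static void forceMergeBlocking(int id) throws InterruptedException {
        System.out.println("merge " + id + " start");
        TimeUnit.MILLISECONDS.sleep(200);   // stand-in for waitForMerges()
        System.out.println("merge " + id + " done");
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService optimizePool = Executors.newSingleThreadExecutor();
        for (int i = 1; i <= 3; i++) {
            final int id = i;
            optimizePool.submit(() -> {
                try {
                    forceMergeBlocking(id);   // blocking: the pool serializes requests
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        optimizePool.shutdown();
        optimizePool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```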
{
"body": "Even though we have ThreadPool.OPTIMIZE pool with size=1, if the incoming optimize request does not wait_for_completion, then Lucene's IndexWriter runs the optimize in the background and the request returns immediately, freeing the thread pool to run another optimize.\n\nSo if the application submits 10 optimize requests (without wait_for_completion), all 10 will run concurrently, which is bad.\n\nIf the optimize is for upgrade, or flush is requested, InternalEngine.optimize does submit a waitForMerges call back to the OPTIMIZE pool, but that's at the end, and so all 10 incoming requests will still run at once I think?\n",
"comments": [
{
"body": "> If the optimize is for upgrade, or flush is requested, InternalEngine.optimize does submit a waitForMerges call back to the OPTIMIZE pool, but that's at the end, and so all 10 incoming requests will still run at once I think?\n\nI think that is correct, or rather whatever other optimize requests have already been queued will be run before the blocking wait for merges for the first request that ran. So I think we need to somehow push the waiting thread to the front of the queue, and always have that regardless of the optimize settings (except for wait_for_completion, which of course just runs in the foreground and holds the single optimize thread).\n",
"created_at": "2015-02-10T20:58:51Z"
},
{
"body": "Ok, here is my proposal after speaking with Shay:\n- For 1.4.3: Change the default for `wait_for_completion` to `true`\n- For 1.5.0: Remove `wait_for_completion` (and `wait_for_merges` in the optimize api)\n- For 2.0: Once we have the task api, try to add back some of this async functionality as a long running task that can be managed.\n",
"created_at": "2015-02-10T21:39:43Z"
},
{
"body": "+1 for this plan.\n\nSeparately, it would be nice if we could simply call IW.forceMerge(), which waits itself. This would fix #8923 ... must we really hold the readLock when calling forceMerge? Anyway, that can be done separately...\n",
"created_at": "2015-02-10T21:46:04Z"
}
],
"number": 9638,
"title": "Core: only one optimize operation should run at once"
} | {
"body": "This has ended up being very trappy. Most people don't realize the\nparameter is there, and using a wildcard on index names for upgrade\nwill end up essentially bypassing the optimize concurrency controls\nthrough its threadpool.\n\nSee #9638\n",
"number": 9639,
"review_comments": [],
"title": "Upgrade: Change wait_for_completion to default to true"
} | {
"commits": [
{
"message": "Upgrade: Change wait_for_completion to default to true\n\nThis has ended up being very trappy. Most people don't realize the\nparameter is there, and using a wildcard on index names for upgrade\nwill end up essentially bypassing the optimize concurrency controls\nthrough its threadpool.\n\nSee #9638"
}
],
"files": [
{
"diff": "@@ -25,7 +25,7 @@ The `upgrade` API accepts the following request parameters:\n \n [horizontal]\n `wait_for_completion`:: Should the request wait for the upgrade to complete. Defaults\n-to `false`.\n+to `true`.\n \n [float]\n === Check upgrade status",
"filename": "docs/reference/indices/upgrade.asciidoc",
"status": "modified"
},
{
"diff": "@@ -90,7 +90,7 @@ public RestResponse buildResponse(IndicesSegmentResponse response, XContentBuild\n \n void handlePost(RestRequest request, RestChannel channel, Client client) {\n OptimizeRequest optimizeReq = new OptimizeRequest(Strings.splitStringByCommaToArray(request.param(\"index\")));\n- optimizeReq.waitForMerge(request.paramAsBoolean(\"wait_for_completion\", false));\n+ optimizeReq.waitForMerge(request.paramAsBoolean(\"wait_for_completion\", true));\n optimizeReq.flush(true);\n optimizeReq.upgrade(true);\n optimizeReq.maxNumSegments(Integer.MAX_VALUE); // we just want to upgrade the segments, not actually optimize to a single segment",
"filename": "src/main/java/org/elasticsearch/rest/action/admin/indices/upgrade/RestUpgradeAction.java",
"status": "modified"
}
]
} |
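This change is the 1.4.x default flip from the proposal in the comments above: the REST handler still accepts `wait_for_completion`, but an absent parameter now means block. A tiny sketch of that pattern with a hand-rolled helper; the real handler calls `request.paramAsBoolean("wait_for_completion", true)` as shown in the diff, everything else here is illustrative.

```java
// Default-to-blocking parameter parsing with a stand-in helper. The only change
// in this PR is that the fallback value used to be false.
import java.util.Map;

public final class UpgradeParamDemo {

    static boolean paramAsBoolean(Map<String, String> params, String key, boolean defaultValue) {
        String raw = params.get(key);
        return raw == null ? defaultValue : Boolean.parseBoolean(raw);
    }

    public static void main(String[] args) {
        // No parameter supplied: the upgrade now waits for completion by default.
        System.out.println(paramAsBoolean(Map.of(), "wait_for_completion", true));   // true
        // Callers can still opt back into fire-and-forget explicitly.
        System.out.println(paramAsBoolean(
            Map.of("wait_for_completion", "false"), "wait_for_completion", true));   // false
    }
}
```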
{
"body": "From an irc conversation, someone did this:\n\n``` bash\ncurl -XPUT \"localhost:9200/my_index/_settings\" -d '{\n \"index\": {\n \"number_of_replicas\": 0\n }\n}'\ncurl -XPOST \"localhost:9200/my_index/_close\"\ncurl -XPUT \"localhost:9200/my_index/_settings\" -d '{\n \"index\": {\n \"number_of_replicas\": 2\n }\n}'\ncurl -XPOST \"localhost:9200/my_index/_open\"\n```\n\nand the index wouldn't open properly. Setting number_of_replicas back to 0 and then opening the index, and then setting number_of_replicas to 2 fixed the issue. It'd be nice if setting the number_of_replicas to something that prevents the index from opening wasn't possible.\n",
"comments": [
{
"body": "Hmm agreed... although I'm not sure how easy it would be to do. While the index is closed, we don't keep track of where or how many shards there are I believe.\n",
"created_at": "2015-02-05T13:48:13Z"
},
{
"body": "Yeah- I dunno. Maybe its better to just stop all replica count changes on\nclosed indexes or only allow it with some dangerous=OK flag or something.\n\nOr more documentation but I doubt that's enough.\nOn Feb 5, 2015 8:48 AM, \"Clinton Gormley\" notifications@github.com wrote:\n\n> Hmm agreed... although I'm not sure how easy it would be to do. While the\n> index is closed, we don't keep track of where or how many shards there are\n> I believe.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9566#issuecomment-73048215\n> .\n",
"created_at": "2015-02-05T14:58:32Z"
},
{
"body": "> Or more documentation but I doubt that's enough.\n\nNot sure how hard it is to fix but agreed that we should document this behaviour until it's fixed.\n",
"created_at": "2015-02-06T09:13:15Z"
},
{
"body": "I tend to say that we should forbid changing these settings while the index is closed as it is not clear what the effect of it would be. I also can't think of a use case where it will be helpful. I briefly looked at the code and it looks easy to add a black list of settings for closed indices.\n",
"created_at": "2015-02-06T09:41:51Z"
},
{
"body": "@bleskes +1 on not allowing to change these settings! Can you open a PR for this? I think the fact that you can do this is actually a bug?\n",
"created_at": "2015-02-13T10:20:52Z"
}
],
"number": 9566,
"title": "Setting number_of_replicas on a closed index can put it in an unopenable state"
} | {
"body": "Issue #9566 raises the point that setting the number of shards on a closed index can lead to this index not beeing able to open again. This change in documentation is ment to warn the user about this issue.\n",
"number": 9591,
"review_comments": [],
"title": "Add warning to settings documentation because setting number_of_replicas on a closed can lead to index beeing not openable again"
} | {
"commits": [
{
"message": "Update update-settings.asciidoc"
}
],
"files": [
{
"diff": "@@ -29,6 +29,12 @@ curl -XPUT 'localhost:9200/my_index/_settings' -d '\n }'\n --------------------------------------------------\n \n+[WARNING]\n+========================\n+When changing the number of replicas the index needs to be open. Changing\n+the number of replicas on a closed index might prevent the index to be opened correctly again.\n+========================\n+\n Below is the list of settings that can be changed using the update\n settings API:\n ",
"filename": "docs/reference/indices/update-settings.asciidoc",
"status": "modified"
}
]
} |
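The discussion above converges on forbidding certain setting updates while an index is closed rather than merely documenting the trap. A minimal sketch of that blacklist-for-closed-indices idea (illustrative names only; this is not how the actual settings-update service is structured):

```java
// Hypothetical guard: refuse to change a listed setting while the target index
// is closed, so the index can never be put into an unopenable state.
import java.util.Set;

public final class ClosedIndexSettingsGuard {

    private static final Set<String> FORBIDDEN_WHEN_CLOSED =
        Set.of("index.number_of_replicas");

    static void validate(String settingKey, boolean indexIsOpen) {
        if (!indexIsOpen && FORBIDDEN_WHEN_CLOSED.contains(settingKey)) {
            throw new IllegalArgumentException(
                "Can't update [" + settingKey + "] on a closed index");
        }
    }

    public static void main(String[] args) {
        validate("index.number_of_replicas", true);    // open index: allowed
        try {
            validate("index.number_of_replicas", false);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Rejecting the update up front is strictly safer than letting the index reach a state it cannot recover from without first reverting the setting.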
{
"body": "When performing a snapshot into an FS repository that doesn't have enough space, it's possible to run out of space while taking the snapshot, predictably the snapshot process fails, however, in the case that the \"finalizing\" file cannot be written, Elasticsearch is unable to list or delete the snapshot.\n\nTrying to get the status of _all (or the snapshot that failed) snapshots returns\n\n```\ncurl -XGET '0:9200/_snapshot/repobkps/_all?pretty'\n{\n \"error\" : \"RemoteTransportException[[elasticsearch][inet[/127.0.0.1:9300]][cluster:admin/snapshot/get]]; nested: ElasticsearchParseException[Failed to derive xcontent from (offset=0, length=0): []]; \",\n \"status\" : 400\n} \n```\n\nListing all of the snapshots shows no snaps are present:\n\n```\ncurl -XGET '0:9200/_snapshot/repobkps/_status?pretty'\n{\n \"snapshots\" : [ ]\n} \n```\n\nThe snapshot cannot be deleted either:\n\n```\ncurl -XDELETE '0:9200/_snapshot/repobkps/snp1'\n{\"error\":\"RemoteTransportException[[elasticsearch][inet[/127.0.0.1:9300]][cluster:admin/snapshot/delete]]; nested: ElasticsearchParseException[Failed to derive xcontent from (offset=0, length=0): []]; \",\"status\":400} \n```\n",
"comments": [
{
"body": "Changes in #8782 make this failure with running out of disk space no longer possible. But assuming that the snapshot file can still get corrupted or truncated, we need to make it possible to delete such snapshot.\n",
"created_at": "2015-02-04T03:37:16Z"
}
],
"number": 9534,
"title": "ElasticsearchParseException[Failed to derive xcontent from (offset=0, length=0): []] once snapshotting into a full filesystem"
} | {
"body": "Improve resiliency of snapshot deletion operation by allowing deletion of snapshot with corrupted snapshot files.\n\nCloses #9534\n",
"number": 9569,
"review_comments": [],
"title": "Allow deletion of snapshots with corrupted snapshot files"
} | {
"commits": [
{
"message": "Snapshot/Restore: Allow deletion of snapshots with corrupted snapshot files\n\nImprove resiliency of snapshot deletion operation by allowing deletion of snapshot with corrupted snapshot files.\n\nCloses #9534"
}
],
"files": [
{
"diff": "@@ -259,10 +259,17 @@ public void initializeSnapshot(SnapshotId snapshotId, ImmutableList<String> indi\n */\n @Override\n public void deleteSnapshot(SnapshotId snapshotId) {\n- Snapshot snapshot = readSnapshot(snapshotId);\n+ ImmutableList<String> indices = ImmutableList.of();\n+ try {\n+ indices = readSnapshot(snapshotId).indices();\n+ } catch (SnapshotMissingException ex) {\n+ throw ex;\n+ } catch (SnapshotException | ElasticsearchParseException ex) {\n+ logger.warn(\"cannot read snapshot file [{}]\", ex, snapshotId);\n+ }\n MetaData metaData = null;\n try {\n- metaData = readSnapshotMetaData(snapshotId, snapshot.indices(), true);\n+ metaData = readSnapshotMetaData(snapshotId, indices, true);\n } catch (IOException | SnapshotException ex) {\n logger.warn(\"cannot read metadata for snapshot [{}]\", ex, snapshotId);\n }\n@@ -284,7 +291,7 @@ public void deleteSnapshot(SnapshotId snapshotId) {\n }\n writeSnapshotList(snapshotIds);\n // Now delete all indices\n- for (String index : snapshot.indices()) {\n+ for (String index : indices) {\n BlobPath indexPath = basePath().add(\"indices\").add(index);\n BlobContainer indexMetaDataBlobContainer = blobStore().blobContainer(indexPath);\n try {",
"filename": "src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java",
"status": "modified"
},
{
"diff": "@@ -54,8 +54,13 @@\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n import org.junit.Test;\n \n+import java.io.FileOutputStream;\n+import java.nio.channels.FileChannel;\n+import java.nio.channels.SeekableByteChannel;\n import java.nio.file.Files;\n+import java.nio.file.OpenOption;\n import java.nio.file.Path;\n+import java.nio.file.StandardOpenOption;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.List;\n@@ -821,6 +826,48 @@ public void deleteSnapshotWithMissingMetadataTest() throws Exception {\n assertThrows(client.admin().cluster().prepareGetSnapshots(\"test-repo\").addSnapshots(\"test-snap-1\"), SnapshotMissingException.class);\n }\n \n+ @Test\n+ public void deleteSnapshotWithCorruptedSnapshotFileTest() throws Exception {\n+ Client client = client();\n+\n+ Path repo = newTempDirPath();\n+ logger.info(\"--> creating repository at \" + repo.toAbsolutePath());\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", repo)\n+ .put(\"compress\", false)\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ createIndex(\"test-idx-1\", \"test-idx-2\");\n+ ensureYellow();\n+ logger.info(\"--> indexing some data\");\n+ indexRandom(true,\n+ client().prepareIndex(\"test-idx-1\", \"doc\").setSource(\"foo\", \"bar\"),\n+ client().prepareIndex(\"test-idx-2\", \"doc\").setSource(\"foo\", \"bar\"));\n+\n+ logger.info(\"--> creating snapshot\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ logger.info(\"--> truncate snapshot file to make it unreadable\");\n+ Path snapshotPath = repo.resolve(\"snapshot-test-snap-1\");\n+ try(SeekableByteChannel outChan = Files.newByteChannel(snapshotPath, StandardOpenOption.WRITE)) {\n+ outChan.truncate(randomInt(10));\n+ }\n+ logger.info(\"--> delete snapshot\");\n+ client.admin().cluster().prepareDeleteSnapshot(\"test-repo\", \"test-snap-1\").get();\n+\n+ logger.info(\"--> make sure snapshot doesn't exist\");\n+ assertThrows(client.admin().cluster().prepareGetSnapshots(\"test-repo\").addSnapshots(\"test-snap-1\"), SnapshotMissingException.class);\n+\n+ logger.info(\"--> make sure that we can create the snapshot again\");\n+ createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+ }\n+\n+\n @Test\n public void snapshotClosedIndexTest() throws Exception {\n Client client = client();",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
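The diff above changes `deleteSnapshot` from "read the snapshot file, then delete" to "try to read it, and if it is corrupted, fall back to an empty index list but keep deleting". The sketch below restates that control flow with placeholder types; it is not the real `BlobStoreRepository` API.

```java
// Placeholder types sketching the resilient-delete control flow: a missing
// snapshot is still a hard error, but an unreadable/corrupted snapshot file only
// degrades to a warning and an empty index list so cleanup can proceed.
import java.util.Collections;
import java.util.List;

public final class ResilientSnapshotDelete {

    interface SnapshotReader {
        List<String> readIndices(String snapshotId);
    }

    static final class SnapshotMissingException extends RuntimeException {
        SnapshotMissingException(String id) { super(id); }
    }

    static List<String> indicesForDeletion(String snapshotId, SnapshotReader reader) {
        try {
            return reader.readIndices(snapshotId);
        } catch (SnapshotMissingException e) {
            throw e;                                   // nothing to delete at all
        } catch (RuntimeException e) {                 // truncated / unparsable file
            System.err.println("cannot read snapshot file [" + snapshotId + "]: " + e.getMessage());
            return Collections.emptyList();            // still delete the top-level blobs
        }
    }

    public static void main(String[] args) {
        SnapshotReader corrupted = id -> { throw new RuntimeException("0-length blob"); };
        System.out.println(indicesForDeletion("test-snap-1", corrupted));   // prints []
    }
}
```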
{
"body": "See the gist at [1] for a visual example of the polygons involved and a script to reproduce the bug. The bug occurs when you index a document containing two simple polygons and then query for shape which intersect a polygon representing a ring around (but not touching) the two indexed shapes. The query returns the document when it shouldn't since neither of the shapes interact with the ring.\n\nChanging the `relation` option to either `within` or `disjoint` produce no results from the query.\n\n[1] https://gist.github.com/colings86/0a0184ba79c5367685ff\n\nThis bug was raised from the following mailing list post: https://groups.google.com/forum/#!topic/elasticsearch/TlqaVu91R7A\n",
"comments": [
{
"body": "@nknize are you able to take a look at this?\n",
"created_at": "2015-01-20T09:29:45Z"
},
{
"body": "Thanks for reporting this, there appears to be a legit bug in Lucene-Spatial's IntersectsPrefixTreeFilter. Will take a look and get a patch out ASAP. \n",
"created_at": "2015-01-20T21:27:14Z"
},
{
"body": "I did one more test - checking if right/left hand rule impact this.\nAfter indexing following geometries both of them showed in results for that query\n\n```\n{\"coordinates\":[[[-87.6544,41.9677],[-87.6544,41.9717],[-87.6489,41.9717],[-87.6489,41.9677],[-87.6544,41.9677]]],\"type\":\"Polygon\"}\n{\"coordinates\":[[[-87.6544,41.9677],[-87.6489,41.9677],[-87.6489,41.9717],[-87.6544,41.9717],[-87.6544,41.9677]]],\"type\":\"Polygon\"}\n\n```\n",
"created_at": "2015-01-20T22:44:59Z"
}
],
"number": 9360,
"title": "geo_shape query matches shapes inside hole of complex polygon"
} | {
"body": "\"The OpenGIS Abstract Specification: An Object Model for Interoperable Geoprocessing\" published by the OGC defines \"The boundary of a geometric object is a set of geometric objects of the next lower dimension.\" The bounding box of a GeometryCollection is therefore the set of bounding rectangles derived from the geometric objects of the next lower dimension. This commit updates the computeBoundingBox and relate methods for the ShapeCollection base class to correctly determine the prefixTree detail level used in Lucene's FilterCellIterator.\n\ncloses #9360\n",
"number": 9550,
"review_comments": [
{
"body": "I think this can be a local inside the loop?\n",
"created_at": "2015-02-03T20:57:52Z"
},
{
"body": "I would use the word \"greater\" here instead of \">\"\n",
"created_at": "2015-02-03T20:59:08Z"
},
{
"body": "I think this should be XSpaceCollection if these changes will be going back into spatial4j?\n",
"created_at": "2015-02-03T21:00:13Z"
},
{
"body": "As discussed, I think this logic is missing a more detailed check here. If the other shape is within the bbox, it does not mean it is within the actual shape.\n",
"created_at": "2015-02-03T22:22:48Z"
}
],
"title": "Correct bounding box logic for GeometryCollection type"
} | {
"commits": [
{
"message": "[GEO] Correct bounding box logic for GeometryCollection type\n\n\"The OpenGIS Abstract Specification: An Object Model for Interoperable Geoprocessing\" published by the OGC defines \"The boundary of a geometric object is a set of geometric objects of the next lower dimension.\" The bounding box of a GeometryCollection is therefore the set of bounding rectangles derived from the geometric objects of the next lower dimension. This commit updates the computeBoundingBox and relate methods for the ShapeCollection base class to correctly determine the prefixTree detail level used in Lucene's FilterCellIterator.\n\ncloses #9360"
}
],
"files": [
{
"diff": "@@ -0,0 +1,81 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.geo;\n+\n+import com.spatial4j.core.context.SpatialContext;\n+import com.spatial4j.core.shape.Rectangle;\n+import com.spatial4j.core.shape.Shape;\n+import com.spatial4j.core.shape.ShapeCollection;\n+\n+import java.util.Collection;\n+import java.util.List;\n+\n+/**\n+ * Overrides bounding box logic in ShapeCollection base class to comply with\n+ * OGC OpenGIS Abstract Specification: An Object Model for Interoperable Geoprocessing.\n+ *\n+ * This class also overrides the 'relate' method to leverage the updated bbox logic.\n+ * NOTE: This algorithm is O(N) and can possibly be improved O(log n) using an internal R*-Tree\n+ * data structure for a collection of bounding boxes\n+ */\n+public class XShapeCollection<S extends Shape> extends ShapeCollection<S> {\n+\n+ public XShapeCollection(List<S> shapes, SpatialContext ctx) {\n+ super(shapes, ctx);\n+ }\n+\n+ @Override\n+ protected Rectangle computeBoundingBox(Collection<? extends Shape> shapes, SpatialContext ctx) {\n+ Rectangle retBox = shapes.iterator().next().getBoundingBox();\n+ for (Shape geom : shapes) {\n+ retBox = expandBBox(retBox, geom.getBoundingBox());\n+ }\n+ return retBox;\n+ }\n+\n+ /**\n+ * Spatial4J shapes have no knowledge of directed edges. For this reason, a bounding box\n+ * that wraps the dateline can have a min longitude that is mathematically > than the\n+ * Rectangles' minX value. This is an issue for geometric collections (e.g., MultiPolygon\n+ * and ShapeCollection) Until geometry logic can be cleaned up in Spatial4J, ES provides\n+ * the following expansion algorithm for GeometryCollections\n+ */\n+ private Rectangle expandBBox(Rectangle bbox, Rectangle expand) {\n+ if (bbox.equals(expand) || bbox.equals(SpatialContext.GEO.getWorldBounds())) {\n+ return bbox;\n+ }\n+\n+ double minX = bbox.getMinX();\n+ double eMinX = expand.getMinX();\n+ double maxX = bbox.getMaxX();\n+ double eMaxX = expand.getMaxX();\n+ double minY = bbox.getMinY();\n+ double eMinY = expand.getMinY();\n+ double maxY = bbox.getMaxY();\n+ double eMaxY = expand.getMaxY();\n+\n+ bbox.reset(Math.min(Math.min(minX, maxX), Math.min(eMinX, eMaxX)),\n+ Math.max(Math.max(minX, maxX), Math.max(eMinX, eMaxX)),\n+ Math.min(Math.min(minY, maxY), Math.min(eMinY, eMaxY)),\n+ Math.max(Math.max(minY, maxY), Math.max(eMinY, eMaxY)));\n+\n+ return bbox;\n+ }\n+}",
"filename": "src/main/java/org/elasticsearch/common/geo/XShapeCollection.java",
"status": "added"
},
{
"diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Shape;\n-import com.spatial4j.core.shape.ShapeCollection;\n+import org.elasticsearch.common.geo.XShapeCollection;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -115,7 +115,7 @@ public Shape build() {\n if (shapes.size() == 1)\n return shapes.get(0);\n else\n- return new ShapeCollection<>(shapes, SPATIAL_CONTEXT);\n+ return new XShapeCollection<>(shapes, SPATIAL_CONTEXT);\n //note: ShapeCollection is probably faster than a Multi* geom.\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/geo/builders/GeometryCollectionBuilder.java",
"status": "modified"
},
{
"diff": "@@ -21,8 +21,8 @@\n \n import com.spatial4j.core.shape.Point;\n import com.spatial4j.core.shape.Shape;\n-import com.spatial4j.core.shape.ShapeCollection;\n import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.XShapeCollection;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -51,7 +51,7 @@ public Shape build() {\n for (Coordinate coord : points) {\n shapes.add(SPATIAL_CONTEXT.makePoint(coord.x, coord.y));\n }\n- return new ShapeCollection<>(shapes, SPATIAL_CONTEXT);\n+ return new XShapeCollection<>(shapes, SPATIAL_CONTEXT);\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPointBuilder.java",
"status": "modified"
},
{
"diff": "@@ -23,7 +23,7 @@\n import java.util.ArrayList;\n import java.util.List;\n \n-import com.spatial4j.core.shape.ShapeCollection;\n+import org.elasticsearch.common.geo.XShapeCollection;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n@@ -96,7 +96,7 @@ public Shape build() {\n if (shapes.size() == 1)\n return shapes.get(0);\n else\n- return new ShapeCollection<>(shapes, SPATIAL_CONTEXT);\n+ return new XShapeCollection<>(shapes, SPATIAL_CONTEXT);\n //note: ShapeCollection is probably faster than a Multi* geom.\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPolygonBuilder.java",
"status": "modified"
}
]
} |
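The expandBBox method in the XShapeCollection diff above folds each shape's bounding rectangle into a running bounding box. As a rough illustration of that union step, here is a minimal, self-contained Java sketch; the class name, method name, and array layout are invented for this example and deliberately ignore the dateline-wrapping and world-bounds cases the real code handles:

```java
// Invented illustration of the per-shape bounding-box union in
// XShapeCollection.computeBoundingBox; not the Spatial4j API, and dateline
// wrapping is intentionally ignored.
public class BBoxUnionSketch {

    // A box is encoded as {minX, maxX, minY, maxY} in degrees.
    static double[] union(double[] a, double[] b) {
        return new double[] {
            Math.min(a[0], b[0]), // westernmost edge of either box
            Math.max(a[1], b[1]), // easternmost edge
            Math.min(a[2], b[2]), // southernmost edge
            Math.max(a[3], b[3])  // northernmost edge
        };
    }

    public static void main(String[] args) {
        // box1 matches the Chicago-area polygon from the issue comments;
        // box2 is made up for illustration.
        double[] box1 = {-87.6544, -87.6489, 41.9677, 41.9717};
        double[] box2 = {-87.6600, -87.6550, 41.9700, 41.9750};
        double[] combined = union(box1, box2);
        System.out.printf("minX=%.4f maxX=%.4f minY=%.4f maxY=%.4f%n",
                combined[0], combined[1], combined[2], combined[3]);
    }
}
```

The real implementation additionally short-circuits when the running box already equals the world bounds and has to reason about rectangles whose stored minX can mathematically exceed maxX across the dateline, which is why the sketch above is only an approximation.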
{
"body": "This issue is easier to explain with an example: let's imagine that doc ID 0 and 1 are nested documents of doc ID 2. If the parent collector calls the nested aggregator with doc ID 2 and buckets 0 and 1 then the nested aggregator will call its sub aggregators with the following docID, bucketOrd tuples:\n- docID= 0, bucket=0\n- docID=1, bucket=0\n- docID=0, bucket=1\n- docID=1, bucket=1\n\nI think we should either buffer all collected buckets per doc ID or change the nested aggregator to have a different sub aggregator instance per bucket?\n",
"comments": [],
"number": 9547,
"title": "Nested aggregation collects docs out-of-order"
} | {
"body": "Close #9547\n",
"number": 9548,
"review_comments": [],
"title": "Make the nested aggregation call sub aggregators with doc IDs in order"
} | {
"commits": [
{
"message": "Aggs: Make the nested aggregation call sub aggregators with doc IDs in order.\n\nClose #9547"
}
],
"files": [
{
"diff": "@@ -18,8 +18,6 @@\n */\n package org.elasticsearch.search.aggregations.bucket.nested;\n \n-import com.carrotsearch.hppc.IntArrayList;\n-import com.carrotsearch.hppc.IntObjectOpenHashMap;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n@@ -53,10 +51,6 @@ public class NestedAggregator extends SingleBucketAggregator implements ReaderCo\n private BitSet parentDocs;\n private LeafReaderContext reader;\n \n- private BitSet rootDocs;\n- private int currentRootDoc = -1;\n- private final IntObjectOpenHashMap<IntArrayList> childDocIdBuffers = new IntObjectOpenHashMap<>();\n-\n public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator, Map<String, Object> metaData, FilterCachingPolicy filterCachingPolicy) throws IOException {\n super(name, factories, aggregationContext, parentAggregator, metaData);\n this.parentAggregator = parentAggregator;\n@@ -76,19 +70,14 @@ public void setNextReader(LeafReaderContext reader) {\n } else {\n childDocs = childDocIdSet.iterator();\n }\n- BitDocIdSetFilter rootDocsFilter = context.searchContext().bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE);\n- BitDocIdSet rootDocIdSet = rootDocsFilter.getDocIdSet(reader);\n- rootDocs = rootDocIdSet.bits();\n- // We need to reset the current root doc, otherwise we may emit incorrect child docs if the next segment happen to start with the same root doc id value\n- currentRootDoc = -1;\n- childDocIdBuffers.clear();\n } catch (IOException ioe) {\n throw new AggregationExecutionException(\"Failed to aggregate [\" + name + \"]\", ioe);\n }\n }\n \n @Override\n public void collect(int parentDoc, long bucketOrd) throws IOException {\n+ assert bucketOrd == 0;\n // here we translate the parent doc to a list of its nested docs, and then call super.collect for evey one of them so they'll be collected\n \n // if parentDoc is 0 then this means that this parent doesn't have child docs (b/c these appear always before the parent doc), so we can skip:\n@@ -119,21 +108,19 @@ public void collect(int parentDoc, long bucketOrd) throws IOException {\n }\n }\n \n+ final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n+ int childDocId = childDocs.docID();\n+ if (childDocId <= prevParentDoc) {\n+ childDocId = childDocs.advance(prevParentDoc + 1);\n+ }\n+\n int numChildren = 0;\n- IntArrayList iterator = getChildren(parentDoc);\n- final int[] buffer = iterator.buffer;\n- final int size = iterator.size();\n- for (int i = 0; i < size; i++) {\n- numChildren++;\n- collectBucketNoCounts(buffer[i], bucketOrd);\n+ for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) {\n+ collectBucketNoCounts(childDocId, bucketOrd);\n+ numChildren += 1;\n }\n incrementBucketDocCount(bucketOrd, numChildren);\n }\n-\n- @Override\n- protected void doClose() {\n- childDocIdBuffers.clear();\n- } \n \n @Override\n public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {\n@@ -169,6 +156,9 @@ public Factory(String name, String path, FilterCachingPolicy filterCachingPolicy\n \n @Override\n public Aggregator createInternal(AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, Map<String, Object> metaData) throws IOException {\n+ if (collectsFromSingleBucket == false) {\n+ return asMultiBucketAggregator(this, context, parent);\n+ }\n MapperService.SmartNameObjectMapper mapper 
= context.searchContext().smartNameObjectMapper(path);\n if (mapper == null) {\n return new Unmapped(name, context, parent, metaData);\n@@ -196,43 +186,4 @@ public InternalAggregation buildEmptyAggregation() {\n }\n }\n \n- // The aggs framework can collect buckets for the same parent doc id more than once and because the children docs\n- // can only be consumed once we need to buffer the child docs. We only need to buffer child docs in the scope\n- // of the current root doc.\n-\n- // Examples:\n- // 1) nested agg wrapped is by terms agg and multiple buckets per document are emitted\n- // 2) Multiple nested fields are defined. A nested agg joins back to another nested agg via the reverse_nested agg.\n- // For each child in the first nested agg the second nested agg gets invoked with the same buckets / docids\n- private IntArrayList getChildren(final int parentDocId) throws IOException {\n- int rootDocId = rootDocs.nextSetBit(parentDocId);\n- if (currentRootDoc == rootDocId) {\n- final IntArrayList childDocIdBuffer = childDocIdBuffers.get(parentDocId);\n- if (childDocIdBuffer != null) {\n- return childDocIdBuffer;\n- } else {\n- // here we translate the parent doc to a list of its nested docs,\n- // and then collect buckets for every one of them so they'll be collected\n- final IntArrayList newChildDocIdBuffer = new IntArrayList();\n- childDocIdBuffers.put(parentDocId, newChildDocIdBuffer);\n- int prevParentDoc = parentDocs.prevSetBit(parentDocId - 1);\n- int childDocId;\n- if (childDocs.docID() > prevParentDoc) {\n- childDocId = childDocs.docID();\n- } else {\n- childDocId = childDocs.advance(prevParentDoc + 1);\n- }\n- for (; childDocId < parentDocId; childDocId = childDocs.nextDoc()) {\n- newChildDocIdBuffer.add(childDocId);\n- }\n- return newChildDocIdBuffer;\n- }\n- } else {\n- this.currentRootDoc = rootDocId;\n- childDocIdBuffers.clear();\n- return getChildren(parentDocId);\n- }\n- }\n-\n-\n }",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -483,7 +483,6 @@ public void nonExistingNestedField() throws Exception {\n }\n \n @Test\n- @AwaitsFix(bugUrl=\"http://github.com/elasticsearch/elasticsearch/issues/9547\")\n public void testSameParentDocHavingMultipleBuckets() throws Exception {\n XContentBuilder mapping = jsonBuilder().startObject().startObject(\"product\").field(\"dynamic\", \"strict\").startObject(\"properties\")\n .startObject(\"id\").field(\"type\", \"long\").endObject()",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/ReverseNestedTests.java",
"status": "modified"
}
]
} |
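The NestedAggregator change above replaces the buffered per-root-doc child lists with a direct, in-order walk of the child docs between the previous parent and the current parent. Below is a rough, self-contained Java sketch of that iteration, assuming the Lucene block-join layout in which nested children are stored immediately before their parent; the class and method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Invented sketch of the in-order child iteration in NestedAggregator.collect:
// the children of a parent are exactly the docs between the previous parent
// document and the current parent document.
public class NestedChildOrderSketch {

    static List<Integer> childrenOf(int parentDoc, boolean[] isParent) {
        // Find the previous parent document, or -1 if there is none.
        int prevParent = parentDoc - 1;
        while (prevParent >= 0 && !isParent[prevParent]) {
            prevParent--;
        }
        List<Integer> children = new ArrayList<>();
        for (int doc = prevParent + 1; doc < parentDoc; doc++) {
            children.add(doc); // visited in increasing docID order
        }
        return children;
    }

    public static void main(String[] args) {
        // The example from issue #9547: docs 0 and 1 are nested under parent doc 2.
        boolean[] isParent = {false, false, true};
        System.out.println(childrenOf(2, isParent)); // prints [0, 1]
    }
}
```

Keeping docIDs in order per bucket is only half of the fix; the factory change in the diff also falls back to asMultiBucketAggregator when the nested aggregator can be asked to collect for more than one bucket, so each bucket gets its own aggregator instance instead of replaying the same child docs out of order.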