issue (dict) | pr (dict) | pr_details (dict) |
---|---|---|
{
"body": "Follow-on from #11185:\n\nAs of #9398 we now allocate an entire shard to one path on node's path.data, instead of file by file.\n\nEven though we do the initial allocation to the path.data with the most free space, if the shards then grow in lopsided ways (an \"adversary\"), we can get to a state where one path.data is nearly full while others are very empty, on a single node. I suspect such adversaries would not be uncommon in practice...\n\nYet, DiskThresholdDecider only looks at total usable space on the node (not per-path on path.data) so it won't notice when only one path is close to full... we need to fix that? Also, the shard allocation process needs to see/address each path.data separately somehow?\n\nSometimes a shard would just move from one path.data to another path.data on the same node (maybe we should bias for that, all other criteria being nearly equal, since we save on network traffic).\n\nI think it's important to fix this but this is beyond my knowledge of ES now ...\n",
"comments": [
{
"body": "> DiskThresholdDecider only looks at total usable space on the node (not per-path on path.data) so it won't notice when only one path is close to full... we need to fix that?\n\n+1, I think instead of doing the average for all the paths we can just use the `max` used data path on the node?\n",
"created_at": "2015-05-20T21:27:49Z"
},
{
"body": "> I think instead of doing the average for all the paths we can just use the max used data path on the node?\n\nI think so, at least for the logic that notices when a node is \"getting full\" and triggers a rebalance.\n",
"created_at": "2015-05-20T21:41:57Z"
},
{
"body": "I think there are only two possible solutions here:\n1. Write a path-aware allocator\n2. Remove multi-path support \n",
"created_at": "2015-06-19T09:42:46Z"
},
{
"body": "If we remove multi-path support, users can run multiple instances per server, and we'd need to provide a tool to migrate data before starting the cluster (or use snapshot/restore)\n",
"created_at": "2015-06-19T09:46:37Z"
}
],
"number": 11271,
"title": "Shards should rebalance across multiple path.data on one node"
} | {
"body": "Today we only guess how big the shard will be that we are allocating on a node.\nYet, we have this information on the master but it's not available on the data nodes\nwhen we pick a data path for the shard. We use some rather simple heuristic based on\nexisting shard sizes on this node which might be complete bogus. This change adds\nthe expected shard size to the ShardRouting for RELOCATING and INITIALIZING shards\nto be used on the actual node to find the best data path for the shard.\n\nCloses #11271\n",
"number": 12947,
"review_comments": [
{
"body": "Instead of passing defaultValue here, can we just return ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE? And maybe move that constant here?\n",
"created_at": "2015-08-17T20:12:48Z"
},
{
"body": "I assume 2.0.0.beta1 will not be wire-compatible with 2.0.0 GA? So we don't need any version checks when we serialize/deserialize here...\n",
"created_at": "2015-08-17T20:15:52Z"
},
{
"body": "Instead of -1 should we use the constant?\n",
"created_at": "2015-08-17T20:16:28Z"
},
{
"body": "that is correct\n",
"created_at": "2015-08-17T20:20:07Z"
},
{
"body": "others use thsi as well and they need `0`\n",
"created_at": "2015-08-17T20:20:33Z"
},
{
"body": "yes we should\n",
"created_at": "2015-08-17T20:20:43Z"
},
{
"body": "Can't they check the constant and do their own \"use 0 instead\"?\n",
"created_at": "2015-08-17T20:30:20Z"
},
{
"body": "Actually, thinking about it more ... I think I like the required defaultValue arg: it avoids sneaky bugs when the caller fails to check for the -1.\n",
"created_at": "2015-08-17T20:45:51Z"
}
],
"title": "Add `expectedShardSize` to ShardRouting and use it in path.data allocation"
} | {
"commits": [
{
"message": "Add `expectedShardSize` to ShardRouting and use it in path.data allocation\n\nToday we only guess how big the shard will be that we are allocating on a node.\nYet, we have this information on the master but it's not available on the data nodes\nwhen we pick a data path for the shard. We use some rather simple heuristic based on\nexisting shard sizes on this node which might be complete bogus. This change adds\nthe expected shard size to the ShardRouting for RELOCATING and INITIALIZING shards\nto be used on the actual node to find the best data path for the shard.\n\nCloses #11271"
},
{
"message": "Use constant to determin if expected size is available"
}
],
"files": [
{
"diff": "@@ -30,7 +30,7 @@\n * <code>InternalClusterInfoService.shardIdentifierFromRouting(String)</code>\n * for the key used in the shardSizes map\n */\n-public final class ClusterInfo {\n+public class ClusterInfo {\n \n private final Map<String, DiskUsage> usages;\n final Map<String, Long> shardSizes;\n@@ -54,6 +54,11 @@ public Long getShardSize(ShardRouting shardRouting) {\n return shardSizes.get(shardIdentifierFromRouting(shardRouting));\n }\n \n+ public long getShardSize(ShardRouting shardRouting, long defaultValue) {\n+ Long shardSize = getShardSize(shardRouting);\n+ return shardSize == null ? defaultValue : shardSize;\n+ }\n+\n /**\n * Method that incorporates the ShardId for the shard into a string that\n * includes a 'p' or 'r' depending on whether the shard is a primary.",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterInfo.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.LatchedActionListener;\n import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;\n@@ -36,6 +37,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.monitor.fs.FsInfo;\n import org.elasticsearch.node.settings.NodeSettingsService;\n@@ -45,6 +47,7 @@\n import java.util.*;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicReference;\n \n /**\n * InternalClusterInfoService provides the ClusterInfoService interface,",
"filename": "core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java",
"status": "modified"
},
{
"diff": "@@ -345,10 +345,10 @@ public String prettyPrint() {\n /**\n * Moves a shard from unassigned to initialize state\n */\n- public void initialize(ShardRouting shard, String nodeId) {\n+ public void initialize(ShardRouting shard, String nodeId, long expectedSize) {\n ensureMutable();\n assert shard.unassigned() : shard;\n- shard.initialize(nodeId);\n+ shard.initialize(nodeId, expectedSize);\n node(nodeId).add(shard);\n inactiveShardCount++;\n if (shard.primary()) {\n@@ -362,10 +362,10 @@ public void initialize(ShardRouting shard, String nodeId) {\n * shard as well as assigning it. And returning the target initializing\n * shard.\n */\n- public ShardRouting relocate(ShardRouting shard, String nodeId) {\n+ public ShardRouting relocate(ShardRouting shard, String nodeId, long expectedShardSize) {\n ensureMutable();\n relocatingShards++;\n- shard.relocate(nodeId);\n+ shard.relocate(nodeId, expectedShardSize);\n ShardRouting target = shard.buildTargetRelocatingShard();\n node(target.currentNodeId()).add(target);\n assignedShardsAdd(target);\n@@ -608,16 +608,9 @@ public ShardRouting next() {\n /**\n * Initializes the current unassigned shard and moves it from the unassigned list.\n */\n- public void initialize(String nodeId) {\n- initialize(nodeId, current.version());\n- }\n-\n- /**\n- * Initializes the current unassigned shard and moves it from the unassigned list.\n- */\n- public void initialize(String nodeId, long version) {\n+ public void initialize(String nodeId, long version, long expectedShardSize) {\n innerRemove();\n- nodes.initialize(new ShardRouting(current, version), nodeId);\n+ nodes.initialize(new ShardRouting(current, version), nodeId, expectedShardSize);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java",
"status": "modified"
},
{
"diff": "@@ -37,6 +37,11 @@\n */\n public final class ShardRouting implements Streamable, ToXContent {\n \n+ /**\n+ * Used if shard size is not available\n+ */\n+ public static final long UNAVAILABLE_EXPECTED_SHARD_SIZE = -1;\n+\n private String index;\n private int shardId;\n private String currentNodeId;\n@@ -50,6 +55,7 @@ public final class ShardRouting implements Streamable, ToXContent {\n private final transient List<ShardRouting> asList;\n private transient ShardId shardIdentifier;\n private boolean frozen = false;\n+ private long expectedShardSize = UNAVAILABLE_EXPECTED_SHARD_SIZE;\n \n private ShardRouting() {\n this.asList = Collections.singletonList(this);\n@@ -60,7 +66,7 @@ public ShardRouting(ShardRouting copy) {\n }\n \n public ShardRouting(ShardRouting copy, long version) {\n- this(copy.index(), copy.id(), copy.currentNodeId(), copy.relocatingNodeId(), copy.restoreSource(), copy.primary(), copy.state(), version, copy.unassignedInfo(), copy.allocationId(), true);\n+ this(copy.index(), copy.id(), copy.currentNodeId(), copy.relocatingNodeId(), copy.restoreSource(), copy.primary(), copy.state(), version, copy.unassignedInfo(), copy.allocationId(), true, copy.getExpectedShardSize());\n }\n \n /**\n@@ -69,7 +75,7 @@ public ShardRouting(ShardRouting copy, long version) {\n */\n ShardRouting(String index, int shardId, String currentNodeId,\n String relocatingNodeId, RestoreSource restoreSource, boolean primary, ShardRoutingState state, long version,\n- UnassignedInfo unassignedInfo, AllocationId allocationId, boolean internal) {\n+ UnassignedInfo unassignedInfo, AllocationId allocationId, boolean internal, long expectedShardSize) {\n this.index = index;\n this.shardId = shardId;\n this.currentNodeId = currentNodeId;\n@@ -81,20 +87,24 @@ public ShardRouting(ShardRouting copy, long version) {\n this.restoreSource = restoreSource;\n this.unassignedInfo = unassignedInfo;\n this.allocationId = allocationId;\n+ this.expectedShardSize = expectedShardSize;\n+ assert expectedShardSize == UNAVAILABLE_EXPECTED_SHARD_SIZE || state == ShardRoutingState.INITIALIZING || state == ShardRoutingState.RELOCATING : expectedShardSize + \" state: \" + state;\n+ assert expectedShardSize >= 0 || state != ShardRoutingState.INITIALIZING || state != ShardRoutingState.RELOCATING : expectedShardSize + \" state: \" + state;\n assert !(state == ShardRoutingState.UNASSIGNED && unassignedInfo == null) : \"unassigned shard must be created with meta\";\n if (!internal) {\n assert state == ShardRoutingState.UNASSIGNED;\n assert currentNodeId == null;\n assert relocatingNodeId == null;\n assert allocationId == null;\n }\n+\n }\n \n /**\n * Creates a new unassigned shard.\n */\n public static ShardRouting newUnassigned(String index, int shardId, RestoreSource restoreSource, boolean primary, UnassignedInfo unassignedInfo) {\n- return new ShardRouting(index, shardId, null, null, restoreSource, primary, ShardRoutingState.UNASSIGNED, 0, unassignedInfo, null, true);\n+ return new ShardRouting(index, shardId, null, null, restoreSource, primary, ShardRoutingState.UNASSIGNED, 0, unassignedInfo, null, true, UNAVAILABLE_EXPECTED_SHARD_SIZE);\n }\n \n /**\n@@ -205,7 +215,7 @@ public String relocatingNodeId() {\n public ShardRouting buildTargetRelocatingShard() {\n assert relocating();\n return new ShardRouting(index, shardId, relocatingNodeId, currentNodeId, restoreSource, primary, ShardRoutingState.INITIALIZING, version, unassignedInfo,\n- AllocationId.newTargetRelocation(allocationId), true);\n+ 
AllocationId.newTargetRelocation(allocationId), true, expectedShardSize);\n }\n \n /**\n@@ -317,6 +327,11 @@ public void readFromThin(StreamInput in) throws IOException {\n if (in.readBoolean()) {\n allocationId = new AllocationId(in);\n }\n+ if (relocating() || initializing()) {\n+ expectedShardSize = in.readLong();\n+ } else {\n+ expectedShardSize = UNAVAILABLE_EXPECTED_SHARD_SIZE;\n+ }\n freeze();\n }\n \n@@ -368,6 +383,10 @@ public void writeToThin(StreamOutput out) throws IOException {\n } else {\n out.writeBoolean(false);\n }\n+ if (relocating() || initializing()) {\n+ out.writeLong(expectedShardSize);\n+ }\n+\n }\n \n @Override\n@@ -397,33 +416,36 @@ void moveToUnassigned(UnassignedInfo unassignedInfo) {\n relocatingNodeId = null;\n this.unassignedInfo = unassignedInfo;\n allocationId = null;\n+ expectedShardSize = UNAVAILABLE_EXPECTED_SHARD_SIZE;\n }\n \n /**\n * Initializes an unassigned shard on a node.\n */\n- void initialize(String nodeId) {\n+ void initialize(String nodeId, long expectedShardSize) {\n ensureNotFrozen();\n version++;\n assert state == ShardRoutingState.UNASSIGNED : this;\n assert relocatingNodeId == null : this;\n state = ShardRoutingState.INITIALIZING;\n currentNodeId = nodeId;\n allocationId = AllocationId.newInitializing();\n+ this.expectedShardSize = expectedShardSize;\n }\n \n /**\n * Relocate the shard to another node.\n *\n * @param relocatingNodeId id of the node to relocate the shard\n */\n- void relocate(String relocatingNodeId) {\n+ void relocate(String relocatingNodeId, long expectedShardSize) {\n ensureNotFrozen();\n version++;\n assert state == ShardRoutingState.STARTED : \"current shard has to be started in order to be relocated \" + this;\n state = ShardRoutingState.RELOCATING;\n this.relocatingNodeId = relocatingNodeId;\n this.allocationId = AllocationId.newRelocation(allocationId);\n+ this.expectedShardSize = expectedShardSize;\n }\n \n /**\n@@ -436,7 +458,7 @@ void cancelRelocation() {\n assert state == ShardRoutingState.RELOCATING : this;\n assert assignedToNode() : this;\n assert relocatingNodeId != null : this;\n-\n+ expectedShardSize = UNAVAILABLE_EXPECTED_SHARD_SIZE;\n state = ShardRoutingState.STARTED;\n relocatingNodeId = null;\n allocationId = AllocationId.cancelRelocation(allocationId);\n@@ -470,6 +492,7 @@ void moveToStarted() {\n // relocation target\n allocationId = AllocationId.finishRelocation(allocationId);\n }\n+ expectedShardSize = UNAVAILABLE_EXPECTED_SHARD_SIZE;\n state = ShardRoutingState.STARTED;\n }\n \n@@ -669,6 +692,9 @@ public String shortSummary() {\n if (this.unassignedInfo != null) {\n sb.append(\", \").append(unassignedInfo.toString());\n }\n+ if (expectedShardSize != UNAVAILABLE_EXPECTED_SHARD_SIZE) {\n+ sb.append(\", expected_shard_size[\").append(expectedShardSize).append(\"]\");\n+ }\n return sb.toString();\n }\n \n@@ -682,7 +708,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n .field(\"shard\", shardId().id())\n .field(\"index\", shardId().index().name())\n .field(\"version\", version);\n-\n+ if (expectedShardSize != UNAVAILABLE_EXPECTED_SHARD_SIZE){\n+ builder.field(\"expected_shard_size_in_bytes\", expectedShardSize);\n+ }\n if (restoreSource() != null) {\n builder.field(\"restore_source\");\n restoreSource().toXContent(builder, params);\n@@ -709,4 +737,12 @@ void freeze() {\n boolean isFrozen() {\n return frozen;\n }\n+\n+ /**\n+ * Returns the expected shard size for {@link ShardRoutingState#RELOCATING} and {@link ShardRoutingState#INITIALIZING}\n+ * shards. 
If it's size is not available {@value #UNAVAILABLE_EXPECTED_SHARD_SIZE} will be returned.\n+ */\n+ public long getExpectedShardSize() {\n+ return expectedShardSize;\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java",
"status": "modified"
},
{
"diff": "@@ -507,7 +507,7 @@ public boolean move(ShardRouting shard, RoutingNode node ) {\n Decision decision = allocation.deciders().canAllocate(shard, target, allocation);\n if (decision.type() == Type.YES) { // TODO maybe we can respect throttling here too?\n sourceNode.removeShard(shard);\n- ShardRouting targetRelocatingShard = routingNodes.relocate(shard, target.nodeId());\n+ ShardRouting targetRelocatingShard = routingNodes.relocate(shard, target.nodeId(), allocation.clusterInfo().getShardSize(shard, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n currentNode.addShard(targetRelocatingShard, decision);\n if (logger.isTraceEnabled()) {\n logger.trace(\"Moved shard [{}] to node [{}]\", shard, currentNode.getNodeId());\n@@ -687,7 +687,7 @@ public int compare(ShardRouting o1,\n if (logger.isTraceEnabled()) {\n logger.trace(\"Assigned shard [{}] to [{}]\", shard, minNode.getNodeId());\n }\n- routingNodes.initialize(shard, routingNodes.node(minNode.getNodeId()).nodeId());\n+ routingNodes.initialize(shard, routingNodes.node(minNode.getNodeId()).nodeId(), allocation.clusterInfo().getShardSize(shard, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n changed = true;\n continue; // don't add to ignoreUnassigned\n } else {\n@@ -779,10 +779,10 @@ private boolean tryRelocateShard(Operation operation, ModelNode minNode, ModelNo\n /* now allocate on the cluster - if we are started we need to relocate the shard */\n if (candidate.started()) {\n RoutingNode lowRoutingNode = routingNodes.node(minNode.getNodeId());\n- routingNodes.relocate(candidate, lowRoutingNode.nodeId());\n+ routingNodes.relocate(candidate, lowRoutingNode.nodeId(), allocation.clusterInfo().getShardSize(candidate, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n \n } else {\n- routingNodes.initialize(candidate, routingNodes.node(minNode.getNodeId()).nodeId());\n+ routingNodes.initialize(candidate, routingNodes.node(minNode.getNodeId()).nodeId(), allocation.clusterInfo().getShardSize(candidate, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n }\n return true;\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java",
"status": "modified"
},
{
"diff": "@@ -231,7 +231,7 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain)\n unassigned.updateUnassignedInfo(new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED,\n \"force allocation from previous reason \" + unassigned.unassignedInfo().getReason() + \", \" + unassigned.unassignedInfo().getMessage(), unassigned.unassignedInfo().getFailure()));\n }\n- it.initialize(routingNode.nodeId());\n+ it.initialize(routingNode.nodeId(), unassigned.version(), allocation.clusterInfo().getShardSize(unassigned, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n break;\n }\n return new RerouteExplanation(this, decision);",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateAllocationCommand.java",
"status": "modified"
},
{
"diff": "@@ -178,7 +178,7 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain)\n if (decision.type() == Decision.Type.THROTTLE) {\n // its being throttled, maybe have a flag to take it into account and fail? for now, just do it since the \"user\" wants it...\n }\n- allocation.routingNodes().relocate(shardRouting, toRoutingNode.nodeId());\n+ allocation.routingNodes().relocate(shardRouting, toRoutingNode.nodeId(), allocation.clusterInfo().getShardSize(shardRouting, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n }\n \n if (!found) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/MoveAllocationCommand.java",
"status": "modified"
},
{
"diff": "@@ -28,7 +28,6 @@\n import com.google.common.collect.Maps;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Booleans;\n-import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.Streams;\n import org.elasticsearch.common.io.stream.StreamInput;",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -94,12 +94,12 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n DiscoveryNode node = nodesToAllocate.yesNodes.get(0);\n logger.debug(\"[{}][{}]: allocating [{}] to [{}] on primary allocation\", shard.index(), shard.id(), shard, node);\n changed = true;\n- unassignedIterator.initialize(node.id(), nodesAndVersions.highestVersion);\n+ unassignedIterator.initialize(node.id(), nodesAndVersions.highestVersion, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE);\n } else if (nodesToAllocate.throttleNodes.isEmpty() == true && nodesToAllocate.noNodes.isEmpty() == false) {\n DiscoveryNode node = nodesToAllocate.noNodes.get(0);\n logger.debug(\"[{}][{}]: forcing allocating [{}] to [{}] on primary allocation\", shard.index(), shard.id(), shard, node);\n changed = true;\n- unassignedIterator.initialize(node.id(), nodesAndVersions.highestVersion);\n+ unassignedIterator.initialize(node.id(), nodesAndVersions.highestVersion, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE);\n } else {\n // we are throttling this, but we have enough to allocate to this node, ignore it for now\n logger.debug(\"[{}][{}]: throttling allocation [{}] to [{}] on primary allocation\", shard.index(), shard.id(), shard, nodesToAllocate.throttleNodes);",
"filename": "core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java",
"status": "modified"
},
{
"diff": "@@ -169,7 +169,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation, long allocateUna\n logger.debug(\"[{}][{}]: allocating [{}] to [{}] in order to reuse its unallocated persistent store\", shard.index(), shard.id(), shard, nodeWithHighestMatch.node());\n // we found a match\n changed = true;\n- unassignedIterator.initialize(nodeWithHighestMatch.nodeId());\n+ unassignedIterator.initialize(nodeWithHighestMatch.nodeId(), shard.version(), allocation.clusterInfo().getShardSize(shard, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));\n }\n } else if (matchingNodes.hasAnyData() == false) {\n // if we didn't manage to find *any* data (regardless of matching sizes), check if the allocation",
"filename": "core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n@@ -270,7 +271,8 @@ private long getAvgShardSizeInBytes() throws IOException {\n }\n }\n \n- public synchronized IndexShard createShard(int sShardId, boolean primary) {\n+ public synchronized IndexShard createShard(int sShardId, ShardRouting routing) {\n+ final boolean primary = routing.primary();\n /*\n * TODO: we execute this in parallel but it's a synced method. Yet, we might\n * be able to serialize the execution via the cluster state in the future. for now we just\n@@ -299,7 +301,7 @@ public synchronized IndexShard createShard(int sShardId, boolean primary) {\n }\n }\n if (path == null) {\n- path = ShardPath.selectNewPathForShard(nodeEnv, shardId, indexSettings, getAvgShardSizeInBytes(), this);\n+ path = ShardPath.selectNewPathForShard(nodeEnv, shardId, indexSettings, routing.getExpectedShardSize() == ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE ? getAvgShardSizeInBytes() : routing.getExpectedShardSize(), this);\n logger.debug(\"{} creating using a new path [{}]\", shardId, path);\n } else {\n logger.debug(\"{} creating using an existing path [{}]\", shardId, path);",
"filename": "core/src/main/java/org/elasticsearch/index/IndexService.java",
"status": "modified"
},
{
"diff": "@@ -638,7 +638,7 @@ private void applyInitializingShard(final ClusterState state, final IndexMetaDat\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}][{}] creating shard\", shardRouting.index(), shardId);\n }\n- IndexShard indexShard = indexService.createShard(shardId, shardRouting.primary());\n+ IndexShard indexShard = indexService.createShard(shardId, shardRouting);\n indexShard.updateRoutingEntry(shardRouting, state.blocks().disableStatePersistence() == false);\n indexShard.addFailedEngineListener(failedEngineHandler);\n } catch (IndexShardAlreadyExistsException e) {",
"filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java",
"status": "modified"
},
{
"diff": "@@ -18,12 +18,16 @@\n */\n package org.elasticsearch.cluster.allocation;\n \n+import org.elasticsearch.cluster.ClusterInfoService;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.InternalClusterInfoService;\n import org.elasticsearch.cluster.routing.RoutingNode;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.junit.Test;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;",
"filename": "core/src/test/java/org/elasticsearch/cluster/allocation/SimpleAllocationIT.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,7 @@ public void testShardToStarted() {\n assertThat(shard.allocationId(), nullValue());\n \n logger.info(\"-- initialize the shard\");\n- shard.initialize(\"node1\");\n+ shard.initialize(\"node1\", -1);\n AllocationId allocationId = shard.allocationId();\n assertThat(allocationId, notNullValue());\n assertThat(allocationId.getId(), notNullValue());\n@@ -53,12 +53,12 @@ public void testShardToStarted() {\n public void testSuccessfulRelocation() {\n logger.info(\"-- build started shard\");\n ShardRouting shard = ShardRouting.newUnassigned(\"test\", 0, null, true, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null));\n- shard.initialize(\"node1\");\n+ shard.initialize(\"node1\", -1);\n shard.moveToStarted();\n \n AllocationId allocationId = shard.allocationId();\n logger.info(\"-- relocate the shard\");\n- shard.relocate(\"node2\");\n+ shard.relocate(\"node2\", -1);\n assertThat(shard.allocationId(), not(equalTo(allocationId)));\n assertThat(shard.allocationId().getId(), equalTo(allocationId.getId()));\n assertThat(shard.allocationId().getRelocationId(), notNullValue());\n@@ -77,12 +77,12 @@ public void testSuccessfulRelocation() {\n public void testCancelRelocation() {\n logger.info(\"-- build started shard\");\n ShardRouting shard = ShardRouting.newUnassigned(\"test\", 0, null, true, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null));\n- shard.initialize(\"node1\");\n+ shard.initialize(\"node1\", -1);\n shard.moveToStarted();\n \n AllocationId allocationId = shard.allocationId();\n logger.info(\"-- relocate the shard\");\n- shard.relocate(\"node2\");\n+ shard.relocate(\"node2\", -1);\n assertThat(shard.allocationId(), not(equalTo(allocationId)));\n assertThat(shard.allocationId().getId(), equalTo(allocationId.getId()));\n assertThat(shard.allocationId().getRelocationId(), notNullValue());\n@@ -98,7 +98,7 @@ public void testCancelRelocation() {\n public void testMoveToUnassigned() {\n logger.info(\"-- build started shard\");\n ShardRouting shard = ShardRouting.newUnassigned(\"test\", 0, null, true, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null));\n- shard.initialize(\"node1\");\n+ shard.initialize(\"node1\", -1);\n shard.moveToStarted();\n \n logger.info(\"-- move to unassigned\");\n@@ -110,7 +110,7 @@ public void testMoveToUnassigned() {\n public void testReinitializing() {\n logger.info(\"-- build started shard\");\n ShardRouting shard = ShardRouting.newUnassigned(\"test\", 0, null, true, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null));\n- shard.initialize(\"node1\");\n+ shard.initialize(\"node1\", -1);\n shard.moveToStarted();\n AllocationId allocationId = shard.allocationId();\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/AllocationIdTests.java",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,7 @@ public static void randomChange(ShardRouting shardRouting, String[] nodes) {\n break;\n case 1:\n if (shardRouting.unassigned()) {\n- shardRouting.initialize(randomFrom(nodes));\n+ shardRouting.initialize(randomFrom(nodes), -1);\n }\n break;\n case 2:",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/RandomShardRoutingMutator.java",
"status": "modified"
},
{
"diff": "@@ -25,14 +25,22 @@\n public class ShardRoutingHelper {\n \n public static void relocate(ShardRouting routing, String nodeId) {\n- routing.relocate(nodeId);\n+ relocate(routing, nodeId, -1);\n+ }\n+\n+ public static void relocate(ShardRouting routing, String nodeId, long expectedByteSize) {\n+ routing.relocate(nodeId, expectedByteSize);\n }\n \n public static void moveToStarted(ShardRouting routing) {\n routing.moveToStarted();\n }\n \n public static void initialize(ShardRouting routing, String nodeId) {\n- routing.initialize(nodeId);\n+ initialize(routing, nodeId, -1);\n+ }\n+\n+ public static void initialize(ShardRouting routing, String nodeId, long expectedSize) {\n+ routing.initialize(nodeId, expectedSize);\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/ShardRoutingHelper.java",
"status": "modified"
},
{
"diff": "@@ -103,12 +103,12 @@ public void testIsSourceTargetRelocation() {\n ShardRouting startedShard1 = new ShardRouting(initializingShard1);\n startedShard1.moveToStarted();\n ShardRouting sourceShard0a = new ShardRouting(startedShard0);\n- sourceShard0a.relocate(\"node2\");\n+ sourceShard0a.relocate(\"node2\", -1);\n ShardRouting targetShard0a = sourceShard0a.buildTargetRelocatingShard();\n ShardRouting sourceShard0b = new ShardRouting(startedShard0);\n- sourceShard0b.relocate(\"node2\");\n+ sourceShard0b.relocate(\"node2\", -1);\n ShardRouting sourceShard1 = new ShardRouting(startedShard1);\n- sourceShard1.relocate(\"node2\");\n+ sourceShard1.relocate(\"node2\", -1);\n \n // test true scenarios\n assertTrue(targetShard0a.isRelocationTargetOf(sourceShard0a));\n@@ -254,7 +254,7 @@ public void testFrozenOnRoutingTable() {\n }\n \n try {\n- routing.initialize(\"boom\");\n+ routing.initialize(\"boom\", -1);\n fail(\"must be frozen\");\n } catch (IllegalStateException ex) {\n // expected\n@@ -273,7 +273,7 @@ public void testFrozenOnRoutingTable() {\n }\n \n try {\n- routing.relocate(\"foobar\");\n+ routing.relocate(\"foobar\", -1);\n fail(\"must be frozen\");\n } catch (IllegalStateException ex) {\n // expected\n@@ -287,4 +287,39 @@ public void testFrozenOnRoutingTable() {\n assertEquals(version, routing.version());\n }\n }\n+\n+ public void testExpectedSize() throws IOException {\n+ final int iters = randomIntBetween(10, 100);\n+ for (int i = 0; i < iters; i++) {\n+ ShardRouting routing = randomShardRouting(\"test\", 0);\n+ long byteSize = randomIntBetween(0, Integer.MAX_VALUE);\n+ if (routing.unassigned()) {\n+ ShardRoutingHelper.initialize(routing, \"foo\", byteSize);\n+ } else if (routing.started()) {\n+ ShardRoutingHelper.relocate(routing, \"foo\", byteSize);\n+ } else {\n+ byteSize = -1;\n+ }\n+ if (randomBoolean()) {\n+ BytesStreamOutput out = new BytesStreamOutput();\n+ routing.writeTo(out);\n+ routing = ShardRouting.readShardRoutingEntry(StreamInput.wrap(out.bytes()));\n+ }\n+ if (routing.initializing() || routing.relocating()) {\n+ assertEquals(routing.toString(), byteSize, routing.getExpectedShardSize());\n+ if (byteSize >= 0) {\n+ assertTrue(routing.toString(), routing.toString().contains(\"expected_shard_size[\" + byteSize + \"]\"));\n+ }\n+ if (routing.initializing()) {\n+ routing = new ShardRouting(routing);\n+ routing.moveToStarted();\n+ assertEquals(-1, routing.getExpectedShardSize());\n+ assertFalse(routing.toString(), routing.toString().contains(\"expected_shard_size[\" + byteSize + \"]\"));\n+ }\n+ } else {\n+ assertFalse(routing.toString(), routing.toString().contains(\"expected_shard_size [\" + byteSize + \"]\"));\n+ assertEquals(byteSize, routing.getExpectedShardSize());\n+ }\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/ShardRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -28,25 +28,25 @@\n public class TestShardRouting {\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, boolean primary, ShardRoutingState state, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, null, null, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true);\n+ return new ShardRouting(index, shardId, currentNodeId, null, null, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true, -1);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, String relocatingNodeId, boolean primary, ShardRoutingState state, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true);\n+ return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true, -1);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, String relocatingNodeId, boolean primary, ShardRoutingState state, AllocationId allocationId, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, buildUnassignedInfo(state), allocationId, true);\n+ return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, buildUnassignedInfo(state), allocationId, true, -1);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, String relocatingNodeId, RestoreSource restoreSource, boolean primary, ShardRoutingState state, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true);\n+ return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true, -1);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId,\n String relocatingNodeId, RestoreSource restoreSource, boolean primary, ShardRoutingState state, long version,\n UnassignedInfo unassignedInfo) {\n- return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state, version, unassignedInfo, buildAllocationId(state), true);\n+ return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state, version, unassignedInfo, buildAllocationId(state), true, -1);\n }\n \n private static AllocationId buildAllocationId(ShardRoutingState state) {",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/TestShardRouting.java",
"status": "modified"
},
{
"diff": "@@ -192,7 +192,7 @@ public void testStateTransitionMetaHandling() {\n ShardRouting shard = TestShardRouting.newShardRouting(\"test\", 1, null, null, null, true, ShardRoutingState.UNASSIGNED, 1, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null));\n ShardRouting mutable = new ShardRouting(shard);\n assertThat(mutable.unassignedInfo(), notNullValue());\n- mutable.initialize(\"test_node\");\n+ mutable.initialize(\"test_node\", -1);\n assertThat(mutable.state(), equalTo(ShardRoutingState.INITIALIZING));\n assertThat(mutable.unassignedInfo(), notNullValue());\n mutable.moveToStarted();",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java",
"status": "modified"
},
{
"diff": "@@ -369,37 +369,37 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n switch (sr.id()) {\n case 0:\n if (sr.primary()) {\n- allocation.routingNodes().initialize(sr, \"node1\");\n+ allocation.routingNodes().initialize(sr, \"node1\", -1);\n } else {\n- allocation.routingNodes().initialize(sr, \"node0\");\n+ allocation.routingNodes().initialize(sr, \"node0\", -1);\n }\n break;\n case 1:\n if (sr.primary()) {\n- allocation.routingNodes().initialize(sr, \"node1\");\n+ allocation.routingNodes().initialize(sr, \"node1\", -1);\n } else {\n- allocation.routingNodes().initialize(sr, \"node2\");\n+ allocation.routingNodes().initialize(sr, \"node2\", -1);\n }\n break;\n case 2:\n if (sr.primary()) {\n- allocation.routingNodes().initialize(sr, \"node3\");\n+ allocation.routingNodes().initialize(sr, \"node3\", -1);\n } else {\n- allocation.routingNodes().initialize(sr, \"node2\");\n+ allocation.routingNodes().initialize(sr, \"node2\", -1);\n }\n break;\n case 3:\n if (sr.primary()) {\n- allocation.routingNodes().initialize(sr, \"node3\");\n+ allocation.routingNodes().initialize(sr, \"node3\", -1);\n } else {\n- allocation.routingNodes().initialize(sr, \"node1\");\n+ allocation.routingNodes().initialize(sr, \"node1\", -1);\n }\n break;\n case 4:\n if (sr.primary()) {\n- allocation.routingNodes().initialize(sr, \"node2\");\n+ allocation.routingNodes().initialize(sr, \"node2\", -1);\n } else {\n- allocation.routingNodes().initialize(sr, \"node0\");\n+ allocation.routingNodes().initialize(sr, \"node0\", -1);\n }\n break;\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/BalanceConfigurationTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,179 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing.allocation;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterInfo;\n+import org.elasticsearch.cluster.ClusterInfoService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.RoutingNodes;\n+import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;\n+import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n+import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.test.ESAllocationTestCase;\n+import org.junit.Test;\n+\n+import java.util.Collections;\n+\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n+import static org.elasticsearch.cluster.routing.allocation.RoutingNodesUtils.numberOfShardsOfType;\n+import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ */\n+public class ExpectedShardSizeAllocationTests extends ESAllocationTestCase {\n+\n+ private final ESLogger logger = Loggers.getLogger(ExpectedShardSizeAllocationTests.class);\n+\n+ @Test\n+ public void testInitializingHasExpectedSize() {\n+ final long byteSize = randomIntBetween(0, Integer.MAX_VALUE);\n+ AllocationService strategy = createAllocationService(Settings.EMPTY, new ClusterInfoService() {\n+ @Override\n+ public ClusterInfo getClusterInfo() {\n+ return new ClusterInfo(Collections.EMPTY_MAP, Collections.EMPTY_MAP) {\n+ @Override\n+ public Long getShardSize(ShardRouting shardRouting) {\n+ if (shardRouting.index().equals(\"test\") && shardRouting.shardId().getId() == 0) {\n+ return byteSize;\n+ }\n+ return null;\n+ }\n+ };\n+ }\n+\n+ @Override\n+ public void addListener(Listener listener) {\n+ }\n+ });\n+\n+ logger.info(\"Building initial routing table\");\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)))\n+ .build();\n+\n+ RoutingTable routingTable = 
RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+ logger.info(\"Adding one node and performing rerouting\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\"))).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ assertEquals(1, clusterState.getRoutingNodes().node(\"node1\").numberOfShardsWithState(ShardRoutingState.INITIALIZING));\n+ assertEquals(byteSize, clusterState.getRoutingNodes().getRoutingTable().shardsWithState(ShardRoutingState.INITIALIZING).get(0).getExpectedShardSize());\n+ logger.info(\"Start the primary shard\");\n+ RoutingNodes routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ assertEquals(1, clusterState.getRoutingNodes().node(\"node1\").numberOfShardsWithState(ShardRoutingState.STARTED));\n+ assertEquals(1, clusterState.getRoutingNodes().unassigned().size());\n+\n+ logger.info(\"Add another one node and reroute\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).put(newNode(\"node2\"))).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ assertEquals(1, clusterState.getRoutingNodes().node(\"node2\").numberOfShardsWithState(ShardRoutingState.INITIALIZING));\n+ assertEquals(byteSize, clusterState.getRoutingNodes().getRoutingTable().shardsWithState(ShardRoutingState.INITIALIZING).get(0).getExpectedShardSize());\n+ }\n+\n+ @Test\n+ public void testExpectedSizeOnMove() {\n+ final long byteSize = randomIntBetween(0, Integer.MAX_VALUE);\n+ final AllocationService allocation = createAllocationService(Settings.EMPTY, new ClusterInfoService() {\n+ @Override\n+ public ClusterInfo getClusterInfo() {\n+ return new ClusterInfo(Collections.EMPTY_MAP, Collections.EMPTY_MAP) {\n+ @Override\n+ public Long getShardSize(ShardRouting shardRouting) {\n+ if (shardRouting.index().equals(\"test\") && shardRouting.shardId().getId() == 0) {\n+ return byteSize;\n+ }\n+ return null;\n+ }\n+ };\n+ }\n+\n+ @Override\n+ public void addListener(Listener listener) {\n+ }\n+ });\n+ logger.info(\"creating an index with 1 shard, no replica\");\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(0))\n+ .build();\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ logger.info(\"adding two nodes and performing rerouting\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n+ RoutingAllocation.Result rerouteResult = allocation.reroute(clusterState);\n+ clusterState = ClusterState.builder(clusterState).routingTable(rerouteResult.routingTable()).build();\n+\n+ logger.info(\"start 
primary shard\");\n+ rerouteResult = allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING));\n+ clusterState = ClusterState.builder(clusterState).routingTable(rerouteResult.routingTable()).build();\n+\n+ logger.info(\"move the shard\");\n+ String existingNodeId = clusterState.routingTable().index(\"test\").shard(0).primaryShard().currentNodeId();\n+ String toNodeId;\n+ if (\"node1\".equals(existingNodeId)) {\n+ toNodeId = \"node2\";\n+ } else {\n+ toNodeId = \"node1\";\n+ }\n+ rerouteResult = allocation.reroute(clusterState, new AllocationCommands(new MoveAllocationCommand(new ShardId(\"test\", 0), existingNodeId, toNodeId)));\n+ assertThat(rerouteResult.changed(), equalTo(true));\n+ clusterState = ClusterState.builder(clusterState).routingTable(rerouteResult.routingTable()).build();\n+ assertEquals(clusterState.getRoutingNodes().node(existingNodeId).get(0).state(), ShardRoutingState.RELOCATING);\n+ assertEquals(clusterState.getRoutingNodes().node(toNodeId).get(0).state(),ShardRoutingState.INITIALIZING);\n+\n+ assertEquals(clusterState.getRoutingNodes().node(existingNodeId).get(0).getExpectedShardSize(), byteSize);\n+ assertEquals(clusterState.getRoutingNodes().node(toNodeId).get(0).getExpectedShardSize(), byteSize);\n+\n+ logger.info(\"finish moving the shard\");\n+ rerouteResult = allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING));\n+ clusterState = ClusterState.builder(clusterState).routingTable(rerouteResult.routingTable()).build();\n+\n+ assertThat(clusterState.getRoutingNodes().node(existingNodeId).isEmpty(), equalTo(true));\n+ assertThat(clusterState.getRoutingNodes().node(toNodeId).get(0).state(), equalTo(ShardRoutingState.STARTED));\n+ assertEquals(clusterState.getRoutingNodes().node(toNodeId).get(0).getExpectedShardSize(), -1);\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/ExpectedShardSizeAllocationTests.java",
"status": "added"
},
{
"diff": "@@ -20,19 +20,25 @@\n package org.elasticsearch.cluster.routing.allocation;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterInfo;\n+import org.elasticsearch.cluster.ClusterInfoService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingNode;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ESAllocationTestCase;\n import org.junit.Test;\n \n+import java.util.Collections;\n+\n import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.hamcrest.Matchers.equalTo;\n@@ -47,12 +53,33 @@ public class RebalanceAfterActiveTests extends ESAllocationTestCase {\n \n @Test\n public void testRebalanceOnlyAfterAllShardsAreActive() {\n- AllocationService strategy = createAllocationService(settingsBuilder()\n- .put(\"cluster.routing.allocation.concurrent_recoveries\", 10)\n- .put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, \"always\")\n- .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", -1)\n- .build());\n+ final long[] sizes = new long[5];\n+ for (int i =0; i < sizes.length; i++) {\n+ sizes[i] = randomIntBetween(0, Integer.MAX_VALUE);\n+ }\n \n+ AllocationService strategy = createAllocationService(settingsBuilder()\n+ .put(\"cluster.routing.allocation.concurrent_recoveries\", 10)\n+ .put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, \"always\")\n+ .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", -1)\n+ .build(),\n+ new ClusterInfoService() {\n+ @Override\n+ public ClusterInfo getClusterInfo() {\n+ return new ClusterInfo(Collections.EMPTY_MAP, Collections.EMPTY_MAP) {\n+ @Override\n+ public Long getShardSize(ShardRouting shardRouting) {\n+ if (shardRouting.index().equals(\"test\")) {\n+ return sizes[shardRouting.getId()];\n+ }\n+ return null; }\n+ };\n+ }\n+\n+ @Override\n+ public void addListener(Listener listener) {\n+ }\n+ });\n logger.info(\"Building initial routing table\");\n \n MetaData metaData = MetaData.builder()\n@@ -97,6 +124,7 @@ public void testRebalanceOnlyAfterAllShardsAreActive() {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(2));\n assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n assertThat(routingTable.index(\"test\").shard(i).replicaShards().get(0).state(), equalTo(INITIALIZING));\n+ assertEquals(routingTable.index(\"test\").shard(i).replicaShards().get(0).getExpectedShardSize(), sizes[i]);\n }\n \n logger.info(\"now, start 8 more nodes, and check that no rebalancing/relocation have happened\");\n@@ -112,6 +140,8 @@ public void testRebalanceOnlyAfterAllShardsAreActive() {\n assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(2));\n assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n 
assertThat(routingTable.index(\"test\").shard(i).replicaShards().get(0).state(), equalTo(INITIALIZING));\n+ assertEquals(routingTable.index(\"test\").shard(i).replicaShards().get(0).getExpectedShardSize(), sizes[i]);\n+\n }\n \n logger.info(\"start the replica shards, rebalancing should start\");\n@@ -124,6 +154,16 @@ public void testRebalanceOnlyAfterAllShardsAreActive() {\n // we only allow one relocation at a time\n assertThat(routingTable.shardsWithState(STARTED).size(), equalTo(5));\n assertThat(routingTable.shardsWithState(RELOCATING).size(), equalTo(5));\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ int num = 0;\n+ for (ShardRouting routing : routingTable.index(\"test\").shard(i).shards()) {\n+ if (routing.state() == RELOCATING || routing.state() == INITIALIZING) {\n+ assertEquals(routing.getExpectedShardSize(), sizes[i]);\n+ num++;\n+ }\n+ }\n+ assertTrue(num > 0);\n+ }\n \n logger.info(\"complete relocation, other half of relocation should happen\");\n routingNodes = clusterState.getRoutingNodes();\n@@ -135,6 +175,14 @@ public void testRebalanceOnlyAfterAllShardsAreActive() {\n // we now only relocate 3, since 2 remain where they are!\n assertThat(routingTable.shardsWithState(STARTED).size(), equalTo(7));\n assertThat(routingTable.shardsWithState(RELOCATING).size(), equalTo(3));\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ for (ShardRouting routing : routingTable.index(\"test\").shard(i).shards()) {\n+ if (routing.state() == RELOCATING || routing.state() == INITIALIZING) {\n+ assertEquals(routing.getExpectedShardSize(), sizes[i]);\n+ }\n+ }\n+ }\n+\n \n logger.info(\"complete relocation, thats it!\");\n routingNodes = clusterState.getRoutingNodes();",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/RebalanceAfterActiveTests.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterInfo;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -302,7 +303,7 @@ private RoutingAllocation onePrimaryOnNode1And1Replica(AllocationDeciders decide\n .metaData(metaData)\n .routingTable(routingTable)\n .nodes(DiscoveryNodes.builder().put(node1).put(node2).put(node3)).build();\n- return new RoutingAllocation(deciders, new RoutingNodes(state, false), state.nodes(), null);\n+ return new RoutingAllocation(deciders, new RoutingNodes(state, false), state.nodes(), ClusterInfo.EMPTY);\n }\n \n private RoutingAllocation onePrimaryOnNode1And1ReplicaRecovering(AllocationDeciders deciders) {\n@@ -321,7 +322,7 @@ private RoutingAllocation onePrimaryOnNode1And1ReplicaRecovering(AllocationDecid\n .metaData(metaData)\n .routingTable(routingTable)\n .nodes(DiscoveryNodes.builder().put(node1).put(node2).put(node3)).build();\n- return new RoutingAllocation(deciders, new RoutingNodes(state, false), state.nodes(), null);\n+ return new RoutingAllocation(deciders, new RoutingNodes(state, false), state.nodes(), ClusterInfo.EMPTY);\n }\n \n class TestAllocator extends ReplicaShardAllocator {",
"filename": "core/src/test/java/org/elasticsearch/gateway/ReplicaShardAllocatorTests.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.action.admin.indices.stats.IndexStats;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n@@ -454,6 +455,22 @@ public void testIndexDirIsDeletedWhenShardRemoved() throws Exception {\n assertPathHasBeenCleared(idxPath);\n }\n \n+ public void testExpectedShardSizeIsPresent() throws InterruptedException {\n+ assertAcked(client().admin().indices().prepareCreate(\"test\")\n+ .setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0));\n+ for (int i = 0; i < 50; i++) {\n+ client().prepareIndex(\"test\", \"test\").setSource(\"{}\").get();\n+ }\n+ ensureGreen(\"test\");\n+ InternalClusterInfoService clusterInfoService = (InternalClusterInfoService) getInstanceFromNode(ClusterInfoService.class);\n+ InternalClusterInfoService.ClusterInfoUpdateJob job = clusterInfoService.new ClusterInfoUpdateJob(false);\n+ job.run();\n+ ClusterState state = getInstanceFromNode(ClusterService.class).state();\n+ Long test = clusterInfoService.getClusterInfo().getShardSize(state.getRoutingTable().index(\"test\").getShards().get(0).primaryShard());\n+ assertNotNull(test);\n+ assertTrue(test > 0);\n+ }\n+\n public void testIndexCanChangeCustomDataPath() throws Exception {\n Environment env = getInstanceFromNode(Environment.class);\n Path idxPath = env.sharedDataFile().resolve(randomAsciiOfLength(10));",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.test;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterInfoService;\n import org.elasticsearch.cluster.ClusterModule;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.EmptyClusterInfoService;\n@@ -63,7 +64,7 @@ public static AllocationService createAllocationService(Settings settings) {\n }\n \n public static AllocationService createAllocationService(Settings settings, Random random) {\n- return createAllocationService(settings, new NodeSettingsService(Settings.Builder.EMPTY_SETTINGS), random);\n+ return createAllocationService(settings, new NodeSettingsService(Settings.Builder.EMPTY_SETTINGS), random);\n }\n \n public static AllocationService createAllocationService(Settings settings, NodeSettingsService nodeSettingsService, Random random) {\n@@ -72,6 +73,13 @@ public static AllocationService createAllocationService(Settings settings, NodeS\n new ShardsAllocators(settings, NoopGatewayAllocator.INSTANCE), EmptyClusterInfoService.INSTANCE);\n }\n \n+ public static AllocationService createAllocationService(Settings settings, ClusterInfoService clusterInfoService) {\n+ return new AllocationService(settings,\n+ randomAllocationDeciders(settings, new NodeSettingsService(Settings.Builder.EMPTY_SETTINGS), getRandom()),\n+ new ShardsAllocators(settings, NoopGatewayAllocator.INSTANCE), clusterInfoService);\n+ }\n+\n+\n \n public static AllocationDeciders randomAllocationDeciders(Settings settings, NodeSettingsService nodeSettingsService, Random random) {\n final List<Class<? extends AllocationDecider>> defaultAllocationDeciders = ClusterModule.DEFAULT_ALLOCATION_DECIDERS;",
"filename": "core/src/test/java/org/elasticsearch/test/ESAllocationTestCase.java",
"status": "modified"
}
]
} |
{
"body": "Try this on two linux machines:\n\nbin/elasticsearch -Des.network.host=_eth0_\n\nYou will see that it binds to a link-local ipv6 address: \n\n[2015-08-15 22:59:25,266][INFO ][org.elasticsearch.http ] [Timberius] bound_address {inet[/fe80:0:0:0:f66d:4ff:fe90:ce0c%2:9200]}, publish_address {inet[/fe80:0:0:0:f66d:4ff:fe90:ce0c%eth0:9200]}\n\nNodes do get multicast packets from each other, but the address isn't going to work, because its link-local:\n\n[2015-08-15 23:00:19,016][WARN ][org.elasticsearch.discovery.zen.ping.multicast] [Shamrock] failed to connect to requesting node [Bloodstorm][RVsIyNniTq6LYEQmTPRCEA][mac2][inet[/fe80:0:0:0:3e15:c2ff:fee5:d26c%4:9300]]\nConnectTransportException[[Bloodstorm][inet[/fe80:0:0:0:3e15:c2ff:fee5:d26c%4:9300]] connect_timeout[30s]]; nested: SocketException[Network is unreachable];\n\nThis makes it tricky to get things working since most machines are dual-stack and we are picking an address that won't go anywhere. Of course you can do -Des.network.host=_eth0:ipv4_ to workaround it. \n",
"comments": [
{
"body": "Confirmed this locally, when I do `bin/elasticsearch -Des.network.host=_wlp3s0_` it binds to the link-local IPv6 address (`bound_address {inet[/fe80:0:0:0:8638:35ff:fe5e:93ce%2:9300]}`) even though this is not a loopback interface. Interestingly enough, it sends multicast **from** the IPv4 address for the `wlp3s0` interface, see:\n\n```\n» sudo tcpdump -n -s0 -A -i wlp3s0 udp port 54328 \ntcpdump: verbose output suppressed, use -v or -vv for full protocol decode\nlistening on wlp3s0, link-type EN10MB (Ethernet), capture size 262144 bytes\n11:21:58.375184 IP 192.168.0.4.54328 > 224.2.2.4.54328: UDP, length 116\nE....}@...E-.........8.8.|... ....z.....elasticsearch Protector.3oLX8kvNSb2wMCxlijXI_Q.Xanadu.domain.192.168.0.4.............85..^........$V...z\n```\n\nSince this affects 2.0, what do you think about defaulting `ES_USE_IPV4` to true for the 2.0.0 beta/ga (which fixes this issue for me) and addressing IPv6 fully going forward? I'm concerned about some things:\n- People don't think about IPv6 much with firewalls yet, so if they bind to `eth0` and only protect IPv4, they could be in a bad situation (we should carefully test and document using IPv6 with ES)\n- Connectivity problems due to a lack of testing coverage since every OS under the sun seems to do IPv6/multicast slightly different (as well as DNS issues with IPv6 also...)\n- IPv6 with privacy extensions enabled causing development problems (haven't tested to see how it would affect this yet)\n- Current lack of our test coverage for IPv6 in general\n\nI think we could address the last one through the Vagrant testing that @nik9000 has been working on. We should make sure each machine has both the v4 and v6 stacks and we can bind/discover things correctly.\n\nThoughts?\n",
"created_at": "2015-08-16T17:33:25Z"
},
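The behaviour reported above comes straight from what the JDK enumerates for an interface: on a dual-stack machine the list for `eth0` or `wlp3s0` usually contains an `fe80::/10` link-local IPv6 address next to the routable IPv4 one, and blindly picking the first entry is how a node ends up bound to an unreachable address. A small probe, independent of the Elasticsearch code base (the interface name is only an example), makes that visible:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

// Prints every address of a named interface with its scope flags, so you can
// see which entries are link-local, loopback or site-local.
public class InterfaceAddressProbe {
    public static void main(String[] args) throws Exception {
        String name = args.length > 0 ? args[0] : "eth0"; // assumed interface name
        NetworkInterface intf = NetworkInterface.getByName(name);
        if (intf == null) {
            System.err.println("no such interface: " + name);
            return;
        }
        for (InetAddress address : Collections.list(intf.getInetAddresses())) {
            System.out.printf("%s linkLocal=%b loopback=%b siteLocal=%b%n",
                    address, address.isLinkLocalAddress(),
                    address.isLoopbackAddress(), address.isSiteLocalAddress());
        }
    }
}
```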
{
"body": "We could absolutely add more tests around this with vagrant. Spin up a\ncouple of machines with a private network and have more bats tests. I don't\nknow if we can do osx vms though.\nOn Aug 16, 2015 10:33 AM, \"Lee Hinman\" notifications@github.com wrote:\n\n> Confirmed this locally, when I do bin/elasticsearch\n> -Des.network.host=_wlp3s0_ it binds to the link-local IPv6 address (bound_address\n> {inet[/fe80:0:0:0:8638:35ff:fe5e:93ce%2:9300]}) even though this is not a\n> loopback interface. Interestingly enough, it sends multicast _from_ the\n> IPv4 address for the wlp3s0 interface, see:\n> \n> » sudo tcpdump -n -s0 -A -i wlp3s0 udp port 54328\n> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode\n> listening on wlp3s0, link-type EN10MB (Ethernet), capture size 262144 bytes\n> 11:21:58.375184 IP 192.168.0.4.54328 > 224.2.2.4.54328: UDP, length 116\n> E....}@...E-.........8.8.|... ....z.....elasticsearch Protector.3oLX8kvNSb2wMCxlijXI_Q.Xanadu.domain.192.168.0.4.............85..^........$V...z\n> \n> Since this affects 2.0, what do you think about defaulting ES_USE_IPV4 to\n> true for the 2.0.0 beta/ga (which fixes this issue for me) and addressing\n> IPv6 fully going forward? I'm concerned about some things:\n> - People don't think about IPv6 much with firewalls yet, so if they\n> bind to eth0 and only protect IPv4, they could be in a bad situation\n> (we should carefully test and document using IPv6 with ES)\n> - Connectivity problems due to a lack of testing coverage since every\n> OS under the sun seems to do IPv6/multicast slightly different (as well as\n> DNS issues with IPv6 also...)\n> - IPv6 with privacy extensions enabled causing development problems\n> (haven't tested to see how it would affect this yet)\n> - Current lack of our test coverage for IPv6\n> \n> I think we could address the last one through the Vagrant testing that\n> @nik9000 https://github.com/nik9000 has been working on. We should make\n> sure each machine has both the v4 and v6 stacks and we can bind/discover\n> things correctly.\n> \n> Thoughts?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/issues/12915#issuecomment-131588135\n> .\n",
"created_at": "2015-08-16T17:38:49Z"
},
{
"body": "We should also test IPv6-**only** machines to ensure we have no hard requirement on IPv4-specific behavior.\n",
"created_at": "2015-08-16T17:40:45Z"
},
{
"body": "We can technically do OS X VMs but we can't legally do OS X VMs until we provision some Mac hardware for our CI infrastructure and run OS X as the host OS. Tests utilizing these VMs can run legally only on this hardware.\n\nFrom the [OS X 10.10 SLA](http://images.apple.com/legal/sla/docs/OSX1010.pdf):\n\n> B. Mac App Store License. If you obtained a license for the Apple Software from the Mac App Store, then subject to the terms and conditions of this License and as permitted by the Mac App Store Usage Rules set forth in the App Store Terms and Conditions (http://www.apple.com/legal/internet-services/itunes/ww/) (“Usage Rules”), you are granted a limited, non-transferable, non-exclusive license:\n> \n> [...]\n> \n> (iii) to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using OS X Server; or (d) personal, non-commercial use.\n",
"created_at": "2015-08-16T17:48:51Z"
},
{
"body": "It looks like there are a couple of OSX boxes in vagrant atlas too. So that\nshould be possible.\n\nNik\n\nOn Sun, Aug 16, 2015 at 10:48 AM, Jason Tedor notifications@github.com\nwrote:\n\n> We can technically do OS X VMs but we can't legally do OS X VMs until we\n> provision some Mac hardware for our CI infrastructure and run OS X as the\n> host OS. Tests utilizing these VMs could only run on this hardware.\n> \n> From the OS X 10.10 EULA\n> http://images.apple.com/legal/sla/docs/OSX1010.pdf:\n> \n> B. Mac App Store License. If you obtained a license for the Apple Software\n> from the Mac App Store, then subject to the terms and conditions of this\n> License and as permitted by the Mac App Store Usage Rules set forth in the\n> App Store Terms and Conditions (\n> http://www.apple.com/legal/internet-services/ itunes/ww/) (“Usage\n> Rules”), you are granted a limited, non-transferable, non-exclusive license:\n> \n> [...]\n> \n> (iii) to install, use and run up to two (2) additional copies or instances\n> of the Apple Software within virtual operating system environments on each\n> Mac Computer you own or control that is already running the Apple Software,\n> for purposes of: (a) software development; (b) testing during software\n> development; (c) using OS X Server; or (d) personal, non-commercial use.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/issues/12915#issuecomment-131591616\n> .\n",
"created_at": "2015-08-16T17:54:00Z"
},
{
"body": "> Since this affects 2.0, what do you think about defaulting ES_USE_IPV4 to true for the 2.0.0 beta/ga (which fixes this issue for me) and addressing IPv6 fully going forward?\n\nOne problem with that i simple stuff like `localhost` isn't going to work. Stuff like curl is simply not going to care. When I played with this issue, i ended out with all kinds of interesting situations: advertising 127.0.0.1 on ethernet multicast, advertising ipv6 over ipv4, etc. \n\nI think it was buggy all along but hidden by the fact that things were binding to all interfaces or whatever before?\n",
"created_at": "2015-08-16T19:48:22Z"
}
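The `localhost` concern is easy to reproduce with the JDK alone: on a dual-stack machine the name resolves to both loopback addresses, so a client may well try `::1` first and fail if the server is bound only to `127.0.0.1`. A tiny sketch (the class name is illustrative):

```java
import java.net.InetAddress;
import java.util.Arrays;

// Shows what "localhost" resolves to on the local machine; on dual-stack hosts
// this usually includes both 127.0.0.1 and ::1.
public class LocalhostAddresses {
    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(InetAddress.getAllByName("localhost")));
    }
}
```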
],
"number": 12915,
"title": "-Des.network.host=_eth0_ binds to ipv6 link-local address"
} | {
"body": "Elasticsearch doesn't work well today with modern machines that might support both ipv4 and ipv6. The worst problem here that `curl http://localhost...` does not work e.g. on mac (#12906), because we aren't bound to any v6 address.\n\nEqually bad, we only bind to loopback by default for 2.0, so if you want to connect to the network \"for real\" you might provide an interface name, but that always tends to do the wrong thing too (#12915), e.g. pick link local or some other useless address.\n\nThis is a compromise fix that tries to keep things simple:\n- we bind to multiple addresses when a specified host/interface name has multiple addresses. If you dont like this, then specify a single address.\n- we still only _publish_ by default to one (this would require more work).\n- no changes for ipv6 multicast or anything like that yet.\n\nThe default for which address to publish when bound to multiple addresses is pretty simple, we prefer ipv4 by default (java.net.preferIPv4Stack) and I think we should until things like multicast are fixed to work correct over ipv6. Otherwise we prefer \"real addresses\" > site local > link local and so on.\n\nSome things are still a bit messy, because real cleanups need to not be right before a beta release. \n\nCloses #12906\nCloses #12915\n",
"number": 12942,
"review_comments": [
{
"body": "Do you prefer the double-if? If not I guess you could do:\n\n``` java\nif (bindHost == null) {\n bindHost = settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING, defaultValue2));\n}\nreturn resolveInetAddress(bindHost);\n```\n\nInstead\n",
"created_at": "2015-08-17T17:52:36Z"
},
{
"body": "Doesn't this also need to check if the address is actually an ipv6 address also?\n",
"created_at": "2015-08-17T17:54:31Z"
},
{
"body": "Can you add a comment for why reusing addresses is bad on Windows? (I don't have any idea why)\n",
"created_at": "2015-08-17T17:55:27Z"
},
{
"body": "This logging is helpful, thanks for adding it!\n",
"created_at": "2015-08-17T17:57:26Z"
},
{
"body": "No. The length in bytes is enough.\n",
"created_at": "2015-08-17T18:05:56Z"
},
{
"body": "Its not related to this change, I just documented what it does.\n",
"created_at": "2015-08-17T18:06:15Z"
},
{
"body": "Ahh of course, I missed the length call, thanks\n",
"created_at": "2015-08-17T18:12:38Z"
},
{
"body": "who is this `shay banon` guy after all?\n",
"created_at": "2015-08-17T18:14:46Z"
},
{
"body": "I have changes coming in a commit. Basically it was too hard for me to see the various fallbacks for the default case (null). \n",
"created_at": "2015-08-17T18:15:03Z"
},
{
"body": "I am not sure where we use this else but would make it sense to move `_local_` to a constant? Primarily I am curious what this means and if we can add some javadocs to the constant\n",
"created_at": "2015-08-17T18:16:45Z"
},
{
"body": "I like this sorting here - neat!\n",
"created_at": "2015-08-17T18:18:58Z"
},
{
"body": "When I first looked at this I wondered if this is prone to concurrent modification exception? I wondered if we should make the `serverChannels` final and use a `CopyOnWriteArrayList` instead. Reading without sync would be safe. But then looking at the entire file I think we should just make the list final and clear it before we close each channel? the volatile confuses me and I think it's not needd\n",
"created_at": "2015-08-17T18:28:42Z"
},
{
"body": "I was concurrently cleaning up this code, because it was still too much for me. Please see the latest commit.\n",
"created_at": "2015-08-17T18:29:35Z"
},
{
"body": "this must be a ConcurrentMap you use putIfAbsent\n",
"created_at": "2015-08-17T18:42:37Z"
},
{
"body": "All the concurrency in these methods is super confusing. Its doing really slow stuff like binding to ports, which should not happen all the time. I am happy to fix the concurrency, it will be to make everything synchronized: this is all nuts.\n",
"created_at": "2015-08-17T18:49:05Z"
},
{
"body": "I agree - we can do this in a follow up!\n",
"created_at": "2015-08-17T19:10:44Z"
}
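One way to read the "make the list final and synchronize" line of discussion above is sketched below. The class and method names are assumptions and this is not the code that was merged; it only shows the pattern: a final channel list guarded by its own monitor and cleared on close, instead of a volatile reference that gets replaced.

```java
import java.util.ArrayList;
import java.util.List;

import org.jboss.netty.channel.Channel;

// Sketch only: mutation and iteration both happen under the list's monitor,
// and closeAll() clears the list rather than nulling a volatile field.
class BoundChannels {
    private final List<Channel> serverChannels = new ArrayList<>();

    void add(Channel channel) {
        synchronized (serverChannels) {
            serverChannels.add(channel);
        }
    }

    void closeAll() {
        synchronized (serverChannels) {
            for (Channel channel : serverChannels) {
                channel.close().awaitUninterruptibly();
            }
            serverChannels.clear();
        }
    }
}
```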
],
"title": "Fix network binding for ipv4/ipv6"
} | {
"commits": [
{
"message": "Fix transport / interface code\n\nNext up: multicast and then http"
},
{
"message": "fix http too"
},
{
"message": "remove nocommit"
},
{
"message": "fix docs, fix another bug in multicast (publish host = bad here!)"
},
{
"message": "Merge branch 'master' into network_cleanup"
},
{
"message": "localhost all the way down"
},
{
"message": "Add some unit tests for utility methods"
},
{
"message": "Cleanup/fix logic around custom resolvers"
},
{
"message": "fix java 7 compilation oops"
}
],
"files": [
{
"diff": "@@ -42,24 +42,24 @@ h3. Installation\n \n * \"Download\":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.\n * Run @bin/elasticsearch@ on unix, or @bin\\elasticsearch.bat@ on windows.\n-* Run @curl -X GET http://127.0.0.1:9200/@.\n+* Run @curl -X GET http://localhost:9200/@.\n * Start more servers ...\n \n h3. Indexing\n \n Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n Now, let's see if the information was added by GETting it:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'\n </pre>\n \n h3. Searching\n@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic?\n Let's find all the tweets that @kimchy@ posted:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n </pre>\n \n We can also use the JSON query language Elasticsearch provides instead of a query string:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '\n {\n \"query\" : {\n \"match\" : { \"user\": \"kimchy\" }\n@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n Just for kicks, let's get all the documents stored (we should see the user as well):\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n We can also do range search (the @postDate@ was automatically identified as date)\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"range\" : {\n@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In\n Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). 
Here is the indexing curl's in this case:\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@\n Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):\n \n <pre>\n-curl -XPUT http://127.0.0.1:9200/another_user/ -d '\n+curl -XPUT http://localhost:9200/another_user/ -d '\n {\n \"index\" : {\n \"numberOfShards\" : 1,\n@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea\n index (twitter user), for example:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n Or on all the indices:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}",
"filename": "README.textile",
"status": "modified"
},
{
"diff": "@@ -42,24 +42,24 @@ h3. Installation\n \n * \"Download\":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.\n * Run @bin/elasticsearch@ on unix, or @bin\\elasticsearch.bat@ on windows.\n-* Run @curl -X GET http://127.0.0.1:9200/@.\n+* Run @curl -X GET http://localhost:9200/@.\n * Start more servers ...\n \n h3. Indexing\n \n Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n Now, let's see if the information was added by GETting it:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'\n </pre>\n \n h3. Searching\n@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic?\n Let's find all the tweets that @kimchy@ posted:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n </pre>\n \n We can also use the JSON query language Elasticsearch provides instead of a query string:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '\n {\n \"query\" : {\n \"match\" : { \"user\": \"kimchy\" }\n@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n Just for kicks, let's get all the documents stored (we should see the user as well):\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n We can also do range search (the @postDate@ was automatically identified as date)\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"range\" : {\n@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In\n Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). 
Here is the indexing curl's in this case:\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@\n Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):\n \n <pre>\n-curl -XPUT http://127.0.0.1:9200/another_user/ -d '\n+curl -XPUT http://localhost:9200/another_user/ -d '\n {\n \"index\" : {\n \"numberOfShards\" : 1,\n@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea\n index (twitter user), for example:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n Or on all the indices:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}",
"filename": "core/README.textile",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n+\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Strings;\n@@ -33,6 +34,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n+import java.net.InetAddress;\n import java.util.Map;\n \n import static org.elasticsearch.common.transport.TransportAddressSerializers.addressToStream;\n@@ -136,7 +138,7 @@ public DiscoveryNode(String nodeId, TransportAddress address, Version version) {\n * @param version the version of the node.\n */\n public DiscoveryNode(String nodeName, String nodeId, TransportAddress address, Map<String, String> attributes, Version version) {\n- this(nodeName, nodeId, NetworkUtils.getLocalHostName(\"\"), NetworkUtils.getLocalHostAddress(\"\"), address, attributes, version);\n+ this(nodeName, nodeId, NetworkUtils.getLocalHost().getHostName(), NetworkUtils.getLocalHost().getHostAddress(), address, attributes, version);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java",
"status": "modified"
},
{
"diff": "@@ -28,11 +28,8 @@\n \n import java.io.IOException;\n import java.net.InetAddress;\n-import java.net.NetworkInterface;\n import java.net.UnknownHostException;\n-import java.util.Collection;\n import java.util.List;\n-import java.util.Locale;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.TimeUnit;\n \n@@ -41,7 +38,8 @@\n */\n public class NetworkService extends AbstractComponent {\n \n- public static final String LOCAL = \"#local#\";\n+ /** By default, we bind to loopback interfaces */\n+ public static final String DEFAULT_NETWORK_HOST = \"_local_\";\n \n private static final String GLOBAL_NETWORK_HOST_SETTING = \"network.host\";\n private static final String GLOBAL_NETWORK_BINDHOST_SETTING = \"network.bind_host\";\n@@ -71,12 +69,12 @@ public static interface CustomNameResolver {\n /**\n * Resolves the default value if possible. If not, return <tt>null</tt>.\n */\n- InetAddress resolveDefault();\n+ InetAddress[] resolveDefault();\n \n /**\n * Resolves a custom value handling, return <tt>null</tt> if can't handle it.\n */\n- InetAddress resolveIfPossible(String value);\n+ InetAddress[] resolveIfPossible(String value);\n }\n \n private final List<CustomNameResolver> customNameResolvers = new CopyOnWriteArrayList<>();\n@@ -94,100 +92,86 @@ public void addCustomNameResolver(CustomNameResolver customNameResolver) {\n customNameResolvers.add(customNameResolver);\n }\n \n-\n- public InetAddress resolveBindHostAddress(String bindHost) throws IOException {\n- return resolveBindHostAddress(bindHost, InetAddress.getLoopbackAddress().getHostAddress());\n- }\n-\n- public InetAddress resolveBindHostAddress(String bindHost, String defaultValue2) throws IOException {\n- return resolveInetAddress(bindHost, settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);\n- }\n-\n- public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {\n- InetAddress address = resolvePublishHostAddress(publishHost,\n- InetAddress.getLoopbackAddress().getHostAddress());\n- // verify that its not a local address\n- if (address == null || address.isAnyLocalAddress()) {\n- address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);\n- if (address == null) {\n- address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());\n- if (address == null) {\n- address = NetworkUtils.getLocalAddress();\n- if (address == null) {\n- return NetworkUtils.getLocalhost(NetworkUtils.StackType.IPv4);\n- }\n+ public InetAddress[] resolveBindHostAddress(String bindHost) throws IOException {\n+ // first check settings\n+ if (bindHost == null) {\n+ bindHost = settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));\n+ }\n+ // next check any registered custom resolvers\n+ if (bindHost == null) {\n+ for (CustomNameResolver customNameResolver : customNameResolvers) {\n+ InetAddress addresses[] = customNameResolver.resolveDefault();\n+ if (addresses != null) {\n+ return addresses;\n }\n }\n }\n- return address;\n- }\n-\n- public InetAddress resolvePublishHostAddress(String publishHost, String defaultValue2) throws IOException {\n- return resolveInetAddress(publishHost, settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);\n+ // finally, fill with our default\n+ if (bindHost == null) {\n+ bindHost = DEFAULT_NETWORK_HOST;\n+ }\n+ return resolveInetAddress(bindHost);\n }\n \n- public InetAddress 
resolveInetAddress(String host, String defaultValue1, String defaultValue2) throws UnknownHostException, IOException {\n- if (host == null) {\n- host = defaultValue1;\n- }\n- if (host == null) {\n- host = defaultValue2;\n+ // TODO: needs to be InetAddress[]\n+ public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {\n+ // first check settings\n+ if (publishHost == null) {\n+ publishHost = settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));\n }\n- if (host == null) {\n+ // next check any registered custom resolvers\n+ if (publishHost == null) {\n for (CustomNameResolver customNameResolver : customNameResolvers) {\n- InetAddress inetAddress = customNameResolver.resolveDefault();\n- if (inetAddress != null) {\n- return inetAddress;\n+ InetAddress addresses[] = customNameResolver.resolveDefault();\n+ if (addresses != null) {\n+ return addresses[0];\n }\n }\n- return null;\n }\n- String origHost = host;\n+ // finally, fill with our default\n+ if (publishHost == null) {\n+ publishHost = DEFAULT_NETWORK_HOST;\n+ }\n+ // TODO: allow publishing multiple addresses\n+ return resolveInetAddress(publishHost)[0];\n+ }\n+\n+ private InetAddress[] resolveInetAddress(String host) throws UnknownHostException, IOException {\n if ((host.startsWith(\"#\") && host.endsWith(\"#\")) || (host.startsWith(\"_\") && host.endsWith(\"_\"))) {\n host = host.substring(1, host.length() - 1);\n-\n+ // allow custom resolvers to have special names\n for (CustomNameResolver customNameResolver : customNameResolvers) {\n- InetAddress inetAddress = customNameResolver.resolveIfPossible(host);\n- if (inetAddress != null) {\n- return inetAddress;\n+ InetAddress addresses[] = customNameResolver.resolveIfPossible(host);\n+ if (addresses != null) {\n+ return addresses;\n }\n }\n-\n- if (host.equals(\"local\")) {\n- return NetworkUtils.getLocalAddress();\n- } else if (host.startsWith(\"non_loopback\")) {\n- if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv4\")) {\n- return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);\n- } else if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv6\")) {\n- return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv6);\n- } else {\n- return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());\n- }\n- } else {\n- NetworkUtils.StackType stackType = NetworkUtils.getIpStackType();\n- if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv4\")) {\n- stackType = NetworkUtils.StackType.IPv4;\n- host = host.substring(0, host.length() - 5);\n- } else if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv6\")) {\n- stackType = NetworkUtils.StackType.IPv6;\n- host = host.substring(0, host.length() - 5);\n- }\n- Collection<NetworkInterface> allInterfs = NetworkUtils.getAllAvailableInterfaces();\n- for (NetworkInterface ni : allInterfs) {\n- if (!ni.isUp()) {\n- continue;\n+ switch (host) {\n+ case \"local\":\n+ return NetworkUtils.getLoopbackAddresses();\n+ case \"local:ipv4\":\n+ return NetworkUtils.filterIPV4(NetworkUtils.getLoopbackAddresses());\n+ case \"local:ipv6\":\n+ return NetworkUtils.filterIPV6(NetworkUtils.getLoopbackAddresses());\n+ case \"non_loopback\":\n+ return NetworkUtils.getFirstNonLoopbackAddresses();\n+ case \"non_loopback:ipv4\":\n+ return NetworkUtils.filterIPV4(NetworkUtils.getFirstNonLoopbackAddresses());\n+ case \"non_loopback:ipv6\":\n+ return NetworkUtils.filterIPV6(NetworkUtils.getFirstNonLoopbackAddresses());\n+ default:\n+ /* an interface specification */\n+ if 
(host.endsWith(\":ipv4\")) {\n+ host = host.substring(0, host.length() - 5);\n+ return NetworkUtils.filterIPV4(NetworkUtils.getAddressesForInterface(host));\n+ } else if (host.endsWith(\":ipv6\")) {\n+ host = host.substring(0, host.length() - 5);\n+ return NetworkUtils.filterIPV6(NetworkUtils.getAddressesForInterface(host));\n+ } else {\n+ return NetworkUtils.getAddressesForInterface(host);\n }\n- if (host.equals(ni.getName()) || host.equals(ni.getDisplayName())) {\n- if (ni.isLoopback()) {\n- return NetworkUtils.getFirstAddress(ni, stackType);\n- } else {\n- return NetworkUtils.getFirstNonLoopbackAddress(ni, stackType);\n- }\n- }\n- }\n }\n- throw new IOException(\"Failed to find network interface for [\" + origHost + \"]\");\n }\n- return InetAddress.getByName(host);\n+ return NetworkUtils.getAllByName(host);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/network/NetworkService.java",
"status": "modified"
},
{
"diff": "@@ -19,303 +19,205 @@\n \n package org.elasticsearch.common.network;\n \n-import com.google.common.collect.Lists;\n import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.CollectionUtil;\n import org.apache.lucene.util.Constants;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n \n-import java.net.*;\n-import java.util.*;\n+import java.net.Inet4Address;\n+import java.net.Inet6Address;\n+import java.net.InetAddress;\n+import java.net.NetworkInterface;\n+import java.net.SocketException;\n+import java.net.UnknownHostException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.Comparator;\n+import java.util.List;\n \n /**\n- *\n+ * Utilities for network interfaces / addresses\n */\n public abstract class NetworkUtils {\n \n- private final static ESLogger logger = Loggers.getLogger(NetworkUtils.class);\n-\n- public static enum StackType {\n- IPv4, IPv6, Unknown\n- }\n-\n- public static final String IPv4_SETTING = \"java.net.preferIPv4Stack\";\n- public static final String IPv6_SETTING = \"java.net.preferIPv6Addresses\";\n-\n- public static final String NON_LOOPBACK_ADDRESS = \"non_loopback_address\";\n-\n- private final static InetAddress localAddress;\n-\n- static {\n- InetAddress localAddressX;\n- try {\n- localAddressX = InetAddress.getLocalHost();\n- } catch (Throwable e) {\n- logger.warn(\"failed to resolve local host, fallback to loopback\", e);\n- localAddressX = InetAddress.getLoopbackAddress();\n+ /** no instantation */\n+ private NetworkUtils() {}\n+ \n+ /**\n+ * By default we bind to any addresses on an interface/name, unless restricted by :ipv4 etc.\n+ * This property is unrelated to that, this is about what we *publish*. Today the code pretty much\n+ * expects one address so this is used for the sort order.\n+ * @deprecated transition mechanism only\n+ */\n+ @Deprecated\n+ static final boolean PREFER_V4 = Boolean.parseBoolean(System.getProperty(\"java.net.preferIPv4Stack\", \"true\")); \n+ \n+ /** Sorts an address by preference. This way code like publishing can just pick the first one */\n+ static int sortKey(InetAddress address, boolean prefer_v4) {\n+ int key = address.getAddress().length;\n+ if (prefer_v4 == false) {\n+ key = -key;\n }\n- localAddress = localAddressX;\n- }\n-\n- public static boolean defaultReuseAddress() {\n- return Constants.WINDOWS ? 
false : true;\n- }\n-\n- public static boolean isIPv4() {\n- return System.getProperty(\"java.net.preferIPv4Stack\") != null && System.getProperty(\"java.net.preferIPv4Stack\").equals(\"true\");\n- }\n-\n- public static InetAddress getIPv4Localhost() throws UnknownHostException {\n- return getLocalhost(StackType.IPv4);\n- }\n-\n- public static InetAddress getIPv6Localhost() throws UnknownHostException {\n- return getLocalhost(StackType.IPv6);\n- }\n-\n- public static InetAddress getLocalAddress() {\n- return localAddress;\n- }\n-\n- public static String getLocalHostName(String defaultHostName) {\n- if (localAddress == null) {\n- return defaultHostName;\n+ \n+ if (address.isAnyLocalAddress()) {\n+ key += 5;\n }\n- String hostName = localAddress.getHostName();\n- if (hostName == null) {\n- return defaultHostName;\n+ if (address.isMulticastAddress()) {\n+ key += 4;\n }\n- return hostName;\n- }\n-\n- public static String getLocalHostAddress(String defaultHostAddress) {\n- if (localAddress == null) {\n- return defaultHostAddress;\n+ if (address.isLoopbackAddress()) {\n+ key += 3;\n }\n- String hostAddress = localAddress.getHostAddress();\n- if (hostAddress == null) {\n- return defaultHostAddress;\n+ if (address.isLinkLocalAddress()) {\n+ key += 2;\n+ }\n+ if (address.isSiteLocalAddress()) {\n+ key += 1;\n }\n- return hostAddress;\n- }\n \n- public static InetAddress getLocalhost(StackType ip_version) throws UnknownHostException {\n- if (ip_version == StackType.IPv4)\n- return InetAddress.getByName(\"127.0.0.1\");\n- else\n- return InetAddress.getByName(\"::1\");\n+ return key;\n }\n \n- /**\n- * Returns the first non-loopback address on any interface on the current host.\n- *\n- * @param ip_version Constraint on IP version of address to be returned, 4 or 6\n+ /** \n+ * Sorts addresses by order of preference. 
This is used to pick the first one for publishing\n+ * @deprecated remove this when multihoming is really correct\n */\n- public static InetAddress getFirstNonLoopbackAddress(StackType ip_version) throws SocketException {\n- InetAddress address;\n- for (NetworkInterface intf : getInterfaces()) {\n- try {\n- if (!intf.isUp() || intf.isLoopback())\n- continue;\n- } catch (Exception e) {\n- // might happen when calling on a network interface that does not exists\n- continue;\n- }\n- address = getFirstNonLoopbackAddress(intf, ip_version);\n- if (address != null) {\n- return address;\n+ @Deprecated\n+ private static void sortAddresses(List<InetAddress> list) {\n+ Collections.sort(list, new Comparator<InetAddress>() {\n+ @Override\n+ public int compare(InetAddress left, InetAddress right) {\n+ int cmp = Integer.compare(sortKey(left, PREFER_V4), sortKey(right, PREFER_V4));\n+ if (cmp == 0) {\n+ cmp = new BytesRef(left.getAddress()).compareTo(new BytesRef(right.getAddress()));\n+ }\n+ return cmp;\n }\n- }\n-\n- return null;\n- }\n-\n- private static List<NetworkInterface> getInterfaces() throws SocketException {\n- Enumeration intfs = NetworkInterface.getNetworkInterfaces();\n-\n- List<NetworkInterface> intfsList = Lists.newArrayList();\n- while (intfs.hasMoreElements()) {\n- intfsList.add((NetworkInterface) intfs.nextElement());\n- }\n-\n- sortInterfaces(intfsList);\n- return intfsList;\n+ });\n }\n+ \n+ private final static ESLogger logger = Loggers.getLogger(NetworkUtils.class);\n \n- private static void sortInterfaces(List<NetworkInterface> intfsList) {\n- // order by index, assuming first ones are more interesting\n- CollectionUtil.timSort(intfsList, new Comparator<NetworkInterface>() {\n+ /** Return all interfaces (and subinterfaces) on the system */\n+ static List<NetworkInterface> getInterfaces() throws SocketException {\n+ List<NetworkInterface> all = new ArrayList<>();\n+ addAllInterfaces(all, Collections.list(NetworkInterface.getNetworkInterfaces()));\n+ Collections.sort(all, new Comparator<NetworkInterface>() {\n @Override\n- public int compare(NetworkInterface o1, NetworkInterface o2) {\n- return Integer.compare (o1.getIndex(), o2.getIndex());\n+ public int compare(NetworkInterface left, NetworkInterface right) {\n+ return Integer.compare(left.getIndex(), right.getIndex());\n }\n });\n- }\n-\n-\n- /**\n- * Returns the first non-loopback address on the given interface on the current host.\n- *\n- * @param intf the interface to be checked\n- * @param ipVersion Constraint on IP version of address to be returned, 4 or 6\n- */\n- public static InetAddress getFirstNonLoopbackAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {\n- if (intf == null)\n- throw new IllegalArgumentException(\"Network interface pointer is null\");\n-\n- for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {\n- InetAddress address = (InetAddress) addresses.nextElement();\n- if (!address.isLoopbackAddress()) {\n- if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||\n- (address instanceof Inet6Address && ipVersion == StackType.IPv6))\n- return address;\n+ return all;\n+ }\n+ \n+ /** Helper for getInterfaces, recursively adds subinterfaces to {@code target} */\n+ private static void addAllInterfaces(List<NetworkInterface> target, List<NetworkInterface> level) {\n+ if (!level.isEmpty()) {\n+ target.addAll(level);\n+ for (NetworkInterface intf : level) {\n+ addAllInterfaces(target, Collections.list(intf.getSubInterfaces()));\n }\n }\n- return 
null;\n }\n-\n- /**\n- * Returns the first address with the proper ipVersion on the given interface on the current host.\n- *\n- * @param intf the interface to be checked\n- * @param ipVersion Constraint on IP version of address to be returned, 4 or 6\n- */\n- public static InetAddress getFirstAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {\n- if (intf == null)\n- throw new IllegalArgumentException(\"Network interface pointer is null\");\n-\n- for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {\n- InetAddress address = (InetAddress) addresses.nextElement();\n- if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||\n- (address instanceof Inet6Address && ipVersion == StackType.IPv6))\n- return address;\n+ \n+ /** Returns system default for SO_REUSEADDR */\n+ public static boolean defaultReuseAddress() {\n+ return Constants.WINDOWS ? false : true;\n+ }\n+ \n+ /** Returns localhost, or if its misconfigured, falls back to loopback. Use with caution!!!! */\n+ // TODO: can we remove this?\n+ public static InetAddress getLocalHost() {\n+ try {\n+ return InetAddress.getLocalHost();\n+ } catch (UnknownHostException e) {\n+ logger.warn(\"failed to resolve local host, fallback to loopback\", e);\n+ return InetAddress.getLoopbackAddress();\n }\n- return null;\n }\n-\n- /**\n- * A function to check if an interface supports an IP version (i.e has addresses\n- * defined for that IP version).\n- *\n- * @param intf\n- * @return\n- */\n- public static boolean interfaceHasIPAddresses(NetworkInterface intf, StackType ipVersion) throws SocketException, UnknownHostException {\n- boolean supportsVersion = false;\n- if (intf != null) {\n- // get all the InetAddresses defined on the interface\n- Enumeration addresses = intf.getInetAddresses();\n- while (addresses != null && addresses.hasMoreElements()) {\n- // get the next InetAddress for the current interface\n- InetAddress address = (InetAddress) addresses.nextElement();\n-\n- // check if we find an address of correct version\n- if ((address instanceof Inet4Address && (ipVersion == StackType.IPv4)) ||\n- (address instanceof Inet6Address && (ipVersion == StackType.IPv6))) {\n- supportsVersion = true;\n- break;\n- }\n+ \n+ /** Returns addresses for all loopback interfaces that are up. 
*/\n+ public static InetAddress[] getLoopbackAddresses() throws SocketException {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (NetworkInterface intf : getInterfaces()) {\n+ if (intf.isLoopback() && intf.isUp()) {\n+ list.addAll(Collections.list(intf.getInetAddresses()));\n }\n- } else {\n- throw new UnknownHostException(\"network interface not found\");\n }\n- return supportsVersion;\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No up-and-running loopback interfaces found, got \" + getInterfaces());\n+ }\n+ sortAddresses(list);\n+ return list.toArray(new InetAddress[list.size()]);\n }\n-\n- /**\n- * Tries to determine the type of IP stack from the available interfaces and their addresses and from the\n- * system properties (java.net.preferIPv4Stack and java.net.preferIPv6Addresses)\n- *\n- * @return StackType.IPv4 for an IPv4 only stack, StackYTypeIPv6 for an IPv6 only stack, and StackType.Unknown\n- * if the type cannot be detected\n- */\n- public static StackType getIpStackType() {\n- boolean isIPv4StackAvailable = isStackAvailable(true);\n- boolean isIPv6StackAvailable = isStackAvailable(false);\n-\n- // if only IPv4 stack available\n- if (isIPv4StackAvailable && !isIPv6StackAvailable) {\n- return StackType.IPv4;\n+ \n+ /** Returns addresses for the first non-loopback interface that is up. */\n+ public static InetAddress[] getFirstNonLoopbackAddresses() throws SocketException {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (NetworkInterface intf : getInterfaces()) {\n+ if (intf.isLoopback() == false && intf.isUp()) {\n+ list.addAll(Collections.list(intf.getInetAddresses()));\n+ break;\n+ }\n }\n- // if only IPv6 stack available\n- else if (isIPv6StackAvailable && !isIPv4StackAvailable) {\n- return StackType.IPv6;\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No up-and-running non-loopback interfaces found, got \" + getInterfaces());\n }\n- // if dual stack\n- else if (isIPv4StackAvailable && isIPv6StackAvailable) {\n- // get the System property which records user preference for a stack on a dual stack machine\n- if (Boolean.getBoolean(IPv4_SETTING)) // has preference over java.net.preferIPv6Addresses\n- return StackType.IPv4;\n- if (Boolean.getBoolean(IPv6_SETTING))\n- return StackType.IPv6;\n- return StackType.IPv6;\n+ sortAddresses(list);\n+ return list.toArray(new InetAddress[list.size()]);\n+ }\n+ \n+ /** Returns addresses for the given interface (it must be marked up) */\n+ public static InetAddress[] getAddressesForInterface(String name) throws SocketException {\n+ NetworkInterface intf = NetworkInterface.getByName(name);\n+ if (intf == null) {\n+ throw new IllegalArgumentException(\"No interface named '\" + name + \"' found, got \" + getInterfaces());\n }\n- return StackType.Unknown;\n- }\n-\n-\n- public static boolean isStackAvailable(boolean ipv4) {\n- Collection<InetAddress> allAddrs = getAllAvailableAddresses();\n- for (InetAddress addr : allAddrs)\n- if (ipv4 && addr instanceof Inet4Address || (!ipv4 && addr instanceof Inet6Address))\n- return true;\n- return false;\n- }\n-\n-\n- /**\n- * Returns all the available interfaces, including first level sub interfaces.\n- */\n- public static List<NetworkInterface> getAllAvailableInterfaces() throws SocketException {\n- List<NetworkInterface> allInterfaces = new ArrayList<>();\n- for (Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces(); interfaces.hasMoreElements(); ) {\n- NetworkInterface intf = interfaces.nextElement();\n- 
allInterfaces.add(intf);\n-\n- Enumeration<NetworkInterface> subInterfaces = intf.getSubInterfaces();\n- if (subInterfaces != null && subInterfaces.hasMoreElements()) {\n- while (subInterfaces.hasMoreElements()) {\n- allInterfaces.add(subInterfaces.nextElement());\n- }\n- }\n+ if (!intf.isUp()) {\n+ throw new IllegalArgumentException(\"Interface '\" + name + \"' is not up and running\");\n }\n- sortInterfaces(allInterfaces);\n- return allInterfaces;\n- }\n-\n- public static Collection<InetAddress> getAllAvailableAddresses() {\n- // we want consistent order here.\n- final Set<InetAddress> retval = new TreeSet<>(new Comparator<InetAddress>() {\n- BytesRef left = new BytesRef();\n- BytesRef right = new BytesRef();\n- @Override\n- public int compare(InetAddress o1, InetAddress o2) {\n- return set(left, o1).compareTo(set(right, o1));\n- }\n-\n- private BytesRef set(BytesRef ref, InetAddress addr) {\n- ref.bytes = addr.getAddress();\n- ref.offset = 0;\n- ref.length = ref.bytes.length;\n- return ref;\n+ List<InetAddress> list = Collections.list(intf.getInetAddresses());\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"Interface '\" + name + \"' has no internet addresses\");\n+ }\n+ sortAddresses(list);\n+ return list.toArray(new InetAddress[list.size()]);\n+ }\n+ \n+ /** Returns addresses for the given host, sorted by order of preference */\n+ public static InetAddress[] getAllByName(String host) throws UnknownHostException {\n+ InetAddress addresses[] = InetAddress.getAllByName(host);\n+ sortAddresses(Arrays.asList(addresses));\n+ return addresses;\n+ }\n+ \n+ /** Returns only the IPV4 addresses in {@code addresses} */\n+ public static InetAddress[] filterIPV4(InetAddress addresses[]) {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (InetAddress address : addresses) {\n+ if (address instanceof Inet4Address) {\n+ list.add(address);\n }\n- });\n- try {\n- for (NetworkInterface intf : getInterfaces()) {\n- Enumeration<InetAddress> addrs = intf.getInetAddresses();\n- while (addrs.hasMoreElements())\n- retval.add(addrs.nextElement());\n+ }\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No ipv4 addresses found in \" + Arrays.toString(addresses));\n+ }\n+ return list.toArray(new InetAddress[list.size()]);\n+ }\n+ \n+ /** Returns only the IPV6 addresses in {@code addresses} */\n+ public static InetAddress[] filterIPV6(InetAddress addresses[]) {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (InetAddress address : addresses) {\n+ if (address instanceof Inet6Address) {\n+ list.add(address);\n }\n- } catch (SocketException e) {\n- logger.warn(\"Failed to derive all available interfaces\", e);\n }\n-\n- return retval;\n- }\n-\n-\n- private NetworkUtils() {\n-\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No ipv6 addresses found in \" + Arrays.toString(addresses));\n+ }\n+ return list.toArray(new InetAddress[list.size()]);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java",
"status": "modified"
},
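A usage sketch for the helpers introduced in the `NetworkUtils` diff above, roughly what a setting such as `_eth0:ipv4_` resolves to now. The wrapper class and the `eth0` interface name are assumptions for illustration:

```java
import java.net.InetAddress;

import org.elasticsearch.common.network.NetworkUtils;

// Sketch only: resolve every address on a named interface, then keep the ipv4
// ones, mirroring the "<interface>:ipv4" handling in NetworkService.
public class ResolveInterfaceExample {
    public static void main(String[] args) throws Exception {
        InetAddress[] all = NetworkUtils.getAddressesForInterface("eth0"); // assumed interface name
        for (InetAddress address : NetworkUtils.filterIPV4(all)) {
            System.out.println("would bind to " + address);
        }
    }
}
```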
{
"diff": "@@ -131,7 +131,9 @@ protected void doStart() {\n boolean deferToInterface = settings.getAsBoolean(\"discovery.zen.ping.multicast.defer_group_to_set_interface\", Constants.MAC_OS_X);\n multicastChannel = MulticastChannel.getChannel(nodeName(), shared,\n new MulticastChannel.Config(port, group, bufferSize, ttl,\n- networkService.resolvePublishHostAddress(address),\n+ // don't use publish address, the use case for that is e.g. a firewall or proxy and\n+ // may not even be bound to an interface on this machine! use the first bound address.\n+ networkService.resolveBindHostAddress(address)[0],\n deferToInterface),\n new Receiver());\n } catch (Throwable t) {",
"filename": "core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java",
"status": "modified"
},
{
"diff": "@@ -51,6 +51,10 @@\n import java.io.IOException;\n import java.net.InetAddress;\n import java.net.InetSocketAddress;\n+import java.net.SocketAddress;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.Map;\n import java.util.concurrent.Executors;\n import java.util.concurrent.atomic.AtomicReference;\n \n@@ -128,7 +132,7 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpSer\n \n protected volatile BoundTransportAddress boundAddress;\n \n- protected volatile Channel serverChannel;\n+ protected volatile List<Channel> serverChannels = new ArrayList<>();\n \n protected OpenChannelsHandler serverOpenChannels;\n \n@@ -243,21 +247,43 @@ protected void doStart() {\n serverBootstrap.setOption(\"child.reuseAddress\", reuseAddress);\n \n // Bind and start to accept incoming connections.\n- InetAddress hostAddressX;\n+ InetAddress hostAddresses[];\n try {\n- hostAddressX = networkService.resolveBindHostAddress(bindHost);\n+ hostAddresses = networkService.resolveBindHostAddress(bindHost);\n } catch (IOException e) {\n throw new BindHttpException(\"Failed to resolve host [\" + bindHost + \"]\", e);\n }\n- final InetAddress hostAddress = hostAddressX;\n+ \n+ for (InetAddress address : hostAddresses) {\n+ bindAddress(address);\n+ }\n \n+ InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(0).getLocalAddress();\n+ InetSocketAddress publishAddress;\n+ if (0 == publishPort) {\n+ publishPort = boundAddress.getPort();\n+ }\n+ try {\n+ publishAddress = new InetSocketAddress(networkService.resolvePublishHostAddress(publishHost), publishPort);\n+ } catch (Exception e) {\n+ throw new BindTransportException(\"Failed to resolve publish address\", e);\n+ }\n+ this.boundAddress = new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress));\n+ }\n+ \n+ private void bindAddress(final InetAddress hostAddress) {\n PortsRange portsRange = new PortsRange(port);\n final AtomicReference<Exception> lastException = new AtomicReference<>();\n+ final AtomicReference<SocketAddress> boundSocket = new AtomicReference<>();\n boolean success = portsRange.iterate(new PortsRange.PortCallback() {\n @Override\n public boolean onPortNumber(int portNumber) {\n try {\n- serverChannel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));\n+ synchronized (serverChannels) {\n+ Channel channel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));\n+ serverChannels.add(channel);\n+ boundSocket.set(channel.getLocalAddress());\n+ }\n } catch (Exception e) {\n lastException.set(e);\n return false;\n@@ -268,25 +294,18 @@ public boolean onPortNumber(int portNumber) {\n if (!success) {\n throw new BindHttpException(\"Failed to bind to [\" + port + \"]\", lastException.get());\n }\n-\n- InetSocketAddress boundAddress = (InetSocketAddress) serverChannel.getLocalAddress();\n- InetSocketAddress publishAddress;\n- if (0 == publishPort) {\n- publishPort = boundAddress.getPort();\n- }\n- try {\n- publishAddress = new InetSocketAddress(networkService.resolvePublishHostAddress(publishHost), publishPort);\n- } catch (Exception e) {\n- throw new BindTransportException(\"Failed to resolve publish address\", e);\n- }\n- this.boundAddress = new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress));\n+ logger.info(\"Bound http to address [{}]\", boundSocket.get());\n }\n \n @Override\n protected void doStop() {\n- if (serverChannel 
!= null) {\n- serverChannel.close().awaitUninterruptibly();\n- serverChannel = null;\n+ synchronized (serverChannels) {\n+ if (serverChannels != null) {\n+ for (Channel channel : serverChannels) {\n+ channel.close().awaitUninterruptibly();\n+ }\n+ serverChannels = null;\n+ }\n }\n \n if (serverOpenChannels != null) {",
"filename": "core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java",
"status": "modified"
},
{
"diff": "@@ -146,8 +146,8 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implem\n // node id to actual channel\n protected final ConcurrentMap<DiscoveryNode, NodeChannels> connectedNodes = newConcurrentMap();\n protected final Map<String, ServerBootstrap> serverBootstraps = newConcurrentMap();\n- protected final Map<String, Channel> serverChannels = newConcurrentMap();\n- protected final Map<String, BoundTransportAddress> profileBoundAddresses = newConcurrentMap();\n+ protected final Map<String, List<Channel>> serverChannels = newConcurrentMap();\n+ protected final ConcurrentMap<String, BoundTransportAddress> profileBoundAddresses = newConcurrentMap();\n protected volatile TransportServiceAdapter transportServiceAdapter;\n protected volatile BoundTransportAddress boundAddress;\n protected final KeyedLock<String> connectionLock = new KeyedLock<>();\n@@ -286,7 +286,7 @@ protected void doStart() {\n bindServerBootstrap(name, mergedSettings);\n }\n \n- InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).getLocalAddress();\n+ InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).get(0).getLocalAddress();\n int publishPort = settings.getAsInt(\"transport.netty.publish_port\", settings.getAsInt(\"transport.publish_port\", boundAddress.getPort()));\n String publishHost = settings.get(\"transport.netty.publish_host\", settings.get(\"transport.publish_host\", settings.get(\"transport.host\")));\n InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort);\n@@ -397,23 +397,38 @@ private Settings createFallbackSettings() {\n \n private void bindServerBootstrap(final String name, final Settings settings) {\n // Bind and start to accept incoming connections.\n- InetAddress hostAddressX;\n+ InetAddress hostAddresses[];\n String bindHost = settings.get(\"bind_host\");\n try {\n- hostAddressX = networkService.resolveBindHostAddress(bindHost);\n+ hostAddresses = networkService.resolveBindHostAddress(bindHost);\n } catch (IOException e) {\n throw new BindTransportException(\"Failed to resolve host [\" + bindHost + \"]\", e);\n }\n- final InetAddress hostAddress = hostAddressX;\n+ for (InetAddress hostAddress : hostAddresses) {\n+ bindServerBootstrap(name, hostAddress, settings);\n+ }\n+ }\n+ \n+ private void bindServerBootstrap(final String name, final InetAddress hostAddress, Settings settings) {\n \n String port = settings.get(\"port\");\n PortsRange portsRange = new PortsRange(port);\n final AtomicReference<Exception> lastException = new AtomicReference<>();\n+ final AtomicReference<SocketAddress> boundSocket = new AtomicReference<>();\n boolean success = portsRange.iterate(new PortsRange.PortCallback() {\n @Override\n public boolean onPortNumber(int portNumber) {\n try {\n- serverChannels.put(name, serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber)));\n+ Channel channel = serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber));\n+ synchronized (serverChannels) {\n+ List<Channel> list = serverChannels.get(name);\n+ if (list == null) {\n+ list = new ArrayList<>();\n+ serverChannels.put(name, list);\n+ }\n+ list.add(channel);\n+ boundSocket.set(channel.getLocalAddress());\n+ }\n } catch (Exception e) {\n lastException.set(e);\n return false;\n@@ -426,14 +441,15 @@ public boolean onPortNumber(int portNumber) {\n }\n \n if (!DEFAULT_PROFILE.equals(name)) {\n- InetSocketAddress boundAddress = (InetSocketAddress) 
serverChannels.get(name).getLocalAddress();\n+ InetSocketAddress boundAddress = (InetSocketAddress) boundSocket.get();\n int publishPort = settings.getAsInt(\"publish_port\", boundAddress.getPort());\n String publishHost = settings.get(\"publish_host\", boundAddress.getHostString());\n InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort);\n- profileBoundAddresses.put(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress)));\n+ // TODO: support real multihoming with publishing. Today we use putIfAbsent so only the prioritized address is published\n+ profileBoundAddresses.putIfAbsent(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress)));\n }\n \n- logger.debug(\"Bound profile [{}] to address [{}]\", name, serverChannels.get(name).getLocalAddress());\n+ logger.info(\"Bound profile [{}] to address [{}]\", name, boundSocket.get());\n }\n \n private void createServerBootstrap(String name, Settings settings) {\n@@ -500,15 +516,17 @@ public void run() {\n nodeChannels.close();\n }\n \n- Iterator<Map.Entry<String, Channel>> serverChannelIterator = serverChannels.entrySet().iterator();\n+ Iterator<Map.Entry<String, List<Channel>>> serverChannelIterator = serverChannels.entrySet().iterator();\n while (serverChannelIterator.hasNext()) {\n- Map.Entry<String, Channel> serverChannelEntry = serverChannelIterator.next();\n+ Map.Entry<String, List<Channel>> serverChannelEntry = serverChannelIterator.next();\n String name = serverChannelEntry.getKey();\n- Channel serverChannel = serverChannelEntry.getValue();\n- try {\n- serverChannel.close().awaitUninterruptibly();\n- } catch (Throwable t) {\n- logger.debug(\"Error closing serverChannel for profile [{}]\", t, name);\n+ List<Channel> serverChannels = serverChannelEntry.getValue();\n+ for (Channel serverChannel : serverChannels) {\n+ try {\n+ serverChannel.close().awaitUninterruptibly();\n+ } catch (Throwable t) {\n+ logger.debug(\"Error closing serverChannel for profile [{}]\", t, name);\n+ }\n }\n serverChannelIterator.remove();\n }",
"filename": "core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,77 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.network;\n+\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.net.InetAddress;\n+\n+/**\n+ * Tests for network utils. Please avoid using any methods that cause DNS lookups!\n+ */\n+public class NetworkUtilsTests extends ESTestCase {\n+ \n+ /**\n+ * test sort key order respects PREFER_IPV4\n+ */\n+ public void testSortKey() throws Exception {\n+ InetAddress localhostv4 = InetAddress.getByName(\"127.0.0.1\");\n+ InetAddress localhostv6 = InetAddress.getByName(\"::1\");\n+ assertTrue(NetworkUtils.sortKey(localhostv4, true) < NetworkUtils.sortKey(localhostv6, true));\n+ assertTrue(NetworkUtils.sortKey(localhostv6, false) < NetworkUtils.sortKey(localhostv4, false));\n+ }\n+ \n+ /**\n+ * test ordinary addresses sort before private addresses\n+ */\n+ public void testSortKeySiteLocal() throws Exception {\n+ InetAddress siteLocal = InetAddress.getByName(\"172.16.0.1\");\n+ assert siteLocal.isSiteLocalAddress();\n+ InetAddress ordinary = InetAddress.getByName(\"192.192.192.192\");\n+ assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(siteLocal, true));\n+ assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(siteLocal, false));\n+ \n+ InetAddress siteLocal6 = InetAddress.getByName(\"fec0::1\");\n+ assert siteLocal6.isSiteLocalAddress();\n+ InetAddress ordinary6 = InetAddress.getByName(\"fddd::1\");\n+ assertTrue(NetworkUtils.sortKey(ordinary6, true) < NetworkUtils.sortKey(siteLocal6, true));\n+ assertTrue(NetworkUtils.sortKey(ordinary6, false) < NetworkUtils.sortKey(siteLocal6, false));\n+ }\n+ \n+ /**\n+ * test private addresses sort before link local addresses\n+ */\n+ public void testSortKeyLinkLocal() throws Exception {\n+ InetAddress linkLocal = InetAddress.getByName(\"fe80::1\");\n+ assert linkLocal.isLinkLocalAddress();\n+ InetAddress ordinary = InetAddress.getByName(\"fddd::1\");\n+ assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(linkLocal, true));\n+ assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(linkLocal, false));\n+ }\n+ \n+ /**\n+ * Test filtering out ipv4/ipv6 addresses\n+ */\n+ public void testFilter() throws Exception {\n+ InetAddress addresses[] = { InetAddress.getByName(\"::1\"), InetAddress.getByName(\"127.0.0.1\") };\n+ assertArrayEquals(new InetAddress[] { InetAddress.getByName(\"127.0.0.1\") }, NetworkUtils.filterIPV4(addresses));\n+ assertArrayEquals(new InetAddress[] { InetAddress.getByName(\"::1\") }, NetworkUtils.filterIPV6(addresses));\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java",
"status": "added"
},
{
"diff": "@@ -504,7 +504,7 @@ private static Settings getRandomNodeSettings(long seed) {\n public static String clusterName(String prefix, long clusterSeed) {\n StringBuilder builder = new StringBuilder(prefix);\n final int childVM = RandomizedTest.systemPropertyAsInt(SysGlobals.CHILDVM_SYSPROP_JVM_ID, 0);\n- builder.append('-').append(NetworkUtils.getLocalHostName(\"__default_host__\"));\n+ builder.append('-').append(NetworkUtils.getLocalHost().getHostName());\n builder.append(\"-CHILD_VM=[\").append(childVM).append(']');\n builder.append(\"-CLUSTER_SEED=[\").append(clusterSeed).append(']');\n // if multiple maven task run on a single host we better have an identifier that doesn't rely on input params",
"filename": "core/src/test/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -135,29 +135,6 @@ public void testThatDefaultProfilePortOverridesGeneralConfiguration() throws Exc\n }\n }\n \n- @Test\n- public void testThatBindingOnDifferentHostsWorks() throws Exception {\n- int[] ports = getRandomPorts(2);\n- InetAddress firstNonLoopbackAddress = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);\n- assumeTrue(\"No IP-v4 non-loopback address available - are you on a plane?\", firstNonLoopbackAddress != null);\n- Settings settings = settingsBuilder()\n- .put(\"network.host\", \"127.0.0.1\")\n- .put(\"transport.tcp.port\", ports[0])\n- .put(\"transport.profiles.default.bind_host\", \"127.0.0.1\")\n- .put(\"transport.profiles.client1.bind_host\", firstNonLoopbackAddress.getHostAddress())\n- .put(\"transport.profiles.client1.port\", ports[1])\n- .build();\n-\n- ThreadPool threadPool = new ThreadPool(\"tst\");\n- try (NettyTransport ignored = startNettyTransport(settings, threadPool)) {\n- assertPortIsBound(\"127.0.0.1\", ports[0]);\n- assertPortIsBound(firstNonLoopbackAddress.getHostAddress(), ports[1]);\n- assertConnectionRefused(ports[1]);\n- } finally {\n- terminate(threadPool);\n- }\n- }\n-\n @Test\n public void testThatProfileWithoutValidNameIsIgnored() throws Exception {\n int[] ports = getRandomPorts(3);",
"filename": "core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java",
"status": "modified"
},
{
"diff": "@@ -124,7 +124,7 @@\n <waitfor maxwait=\"30\" maxwaitunit=\"second\"\n checkevery=\"500\" checkeveryunit=\"millisecond\"\n timeoutproperty=\"@{timeoutproperty}\">\n- <http url=\"http://127.0.0.1:@{port}\"/>\n+ <http url=\"http://localhost:@{port}\"/>\n </waitfor>\n </sequential>\n </macrodef>\n@@ -138,7 +138,7 @@\n <waitfor maxwait=\"30\" maxwaitunit=\"second\"\n checkevery=\"500\" checkeveryunit=\"millisecond\"\n timeoutproperty=\"@{timeoutproperty}\">\n- <http url=\"http://127.0.0.1:@{port}/_cluster/health?wait_for_nodes=2\"/>\n+ <http url=\"http://localhost:@{port}/_cluster/health?wait_for_nodes=2\"/>\n </waitfor>\n </sequential>\n </macrodef>",
"filename": "dev-tools/src/main/resources/ant/integration-tests.xml",
"status": "modified"
},
{
"diff": "@@ -153,7 +153,7 @@\n <parallelism>1</parallelism>\n <systemProperties>\n <!-- use external cluster -->\n- <tests.cluster>127.0.0.1:${integ.transport.port}</tests.cluster>\n+ <tests.cluster>localhost:${integ.transport.port}</tests.cluster>\n </systemProperties>\n </configuration>\n </execution>",
"filename": "distribution/pom.xml",
"status": "modified"
},
{
"diff": "@@ -38,7 +38,7 @@ respond to. It provides the following settings with the\n |`ttl` |The ttl of the multicast message. Defaults to `3`.\n \n |`address` |The address to bind to, defaults to `null` which means it\n-will bind to all available network interfaces.\n+will bind `network.bind_host`\n \n |`enabled` |Whether multicast ping discovery is enabled. Defaults to `true`.\n |=======================================================================",
"filename": "docs/reference/modules/discovery/zen.asciidoc",
"status": "modified"
},
{
"diff": "@@ -9,13 +9,15 @@ network settings allows to set common settings that will be shared among\n all network based modules (unless explicitly overridden in each module).\n \n The `network.bind_host` setting allows to control the host different network\n-components will bind on. By default, the bind host will be `anyLoopbackAddress`\n-(typically `127.0.0.1` or `::1`).\n+components will bind on. By default, the bind host will be `_local_`\n+(loopback addresses such as `127.0.0.1`, `::1`).\n \n The `network.publish_host` setting allows to control the host the node will\n publish itself within the cluster so other nodes will be able to connect to it.\n-Of course, this can't be the `anyLocalAddress`, and by default, it will be the\n-first loopback address (if possible), or the local address.\n+Currently an elasticsearch node may be bound to multiple addresses, but only\n+publishes one. If not specified, this defaults to the \"best\" address from \n+`network.bind_host`. By default, IPv4 addresses are preferred to IPv6, and \n+ordinary addresses are preferred to site-local or link-local addresses.\n \n The `network.host` setting is a simple setting to automatically set both\n `network.bind_host` and `network.publish_host` to the same host value.\n@@ -27,21 +29,25 @@ in the following table:\n [cols=\"<,<\",options=\"header\",]\n |=======================================================================\n |Logical Host Setting Value |Description\n-|`_local_` |Will be resolved to the local ip address.\n+|`_local_` |Will be resolved to loopback addresses\n \n-|`_non_loopback_` |The first non loopback address.\n+|`_local:ipv4_` |Will be resolved to loopback IPv4 addresses\n \n-|`_non_loopback:ipv4_` |The first non loopback IPv4 address.\n+|`_local:ipv6_` |Will be resolved to loopback IPv6 addresses\n \n-|`_non_loopback:ipv6_` |The first non loopback IPv6 address.\n+|`_non_loopback_` |Addresses of the first non loopback interface\n \n-|`_[networkInterface]_` |Resolves to the ip address of the provided\n+|`_non_loopback:ipv4_` |IPv4 addresses of the first non loopback interface\n+\n+|`_non_loopback:ipv6_` |IPv6 addresses of the first non loopback interface\n+\n+|`_[networkInterface]_` |Resolves to the addresses of the provided\n network interface. For example `_en0_`.\n \n-|`_[networkInterface]:ipv4_` |Resolves to the ipv4 address of the\n+|`_[networkInterface]:ipv4_` |Resolves to the ipv4 addresses of the\n provided network interface. For example `_en0:ipv4_`.\n \n-|`_[networkInterface]:ipv6_` |Resolves to the ipv6 address of the\n+|`_[networkInterface]:ipv6_` |Resolves to the ipv6 addresses of the\n provided network interface. For example `_en0:ipv6_`.\n |=======================================================================\n ",
"filename": "docs/reference/modules/network.asciidoc",
"status": "modified"
},
{
"diff": "@@ -93,7 +93,7 @@ public Ec2NameResolver(Settings settings) {\n * @throws IOException if ec2 meta-data cannot be obtained.\n * @see CustomNameResolver#resolveIfPossible(String)\n */\n- public InetAddress resolve(Ec2HostnameType type, boolean warnOnFailure) {\n+ public InetAddress[] resolve(Ec2HostnameType type, boolean warnOnFailure) {\n URLConnection urlConnection = null;\n InputStream in = null;\n try {\n@@ -109,7 +109,8 @@ public InetAddress resolve(Ec2HostnameType type, boolean warnOnFailure) {\n logger.error(\"no ec2 metadata returned from {}\", url);\n return null;\n }\n- return InetAddress.getByName(metadataResult);\n+ // only one address: because we explicitly ask for only one via the Ec2HostnameType\n+ return new InetAddress[] { InetAddress.getByName(metadataResult) };\n } catch (IOException e) {\n if (warnOnFailure) {\n logger.warn(\"failed to get metadata for [\" + type.configName + \"]: \" + ExceptionsHelper.detailedMessage(e));\n@@ -123,13 +124,13 @@ public InetAddress resolve(Ec2HostnameType type, boolean warnOnFailure) {\n }\n \n @Override\n- public InetAddress resolveDefault() {\n+ public InetAddress[] resolveDefault() {\n return null; // using this, one has to explicitly specify _ec2_ in network setting\n // return resolve(Ec2HostnameType.DEFAULT, false);\n }\n \n @Override\n- public InetAddress resolveIfPossible(String value) {\n+ public InetAddress[] resolveIfPossible(String value) {\n for (Ec2HostnameType type : Ec2HostnameType.values()) {\n if (type.configName.equals(value)) {\n return resolve(type, true);",
"filename": "plugins/cloud-aws/src/main/java/org/elasticsearch/cloud/aws/network/Ec2NameResolver.java",
"status": "modified"
},
{
"diff": "@@ -414,7 +414,7 @@\n <parallelism>1</parallelism>\n <systemProperties>\n <!-- use external cluster -->\n- <tests.cluster>127.0.0.1:${integ.transport.port}</tests.cluster>\n+ <tests.cluster>localhost:${integ.transport.port}</tests.cluster>\n </systemProperties>\n </configuration>\n </execution>",
"filename": "plugins/pom.xml",
"status": "modified"
}
]
} |
{
"body": "Today we use `localhost` in all of our curl examples, but Elasticsearch is binding to either `127.0.0.1` or `::1`, and `localhost` may not work.\n\nWe should try to bind to both.\n",
"comments": [
{
"body": "Related to https://github.com/elastic/elasticsearch/issues/12914 and https://github.com/elastic/elasticsearch/issues/12915\n",
"created_at": "2015-08-16T18:00:27Z"
},
{
"body": "I'm working on this: I've made progress but need a few more hours to have something to show.\n",
"created_at": "2015-08-17T13:37:18Z"
}
],
"number": 12906,
"title": "Ensure `localhost` works on IPv4 and IPv6"
} | {
"body": "Elasticsearch doesn't work well today with modern machines that might support both ipv4 and ipv6. The worst problem here that `curl http://localhost...` does not work e.g. on mac (#12906), because we aren't bound to any v6 address.\n\nEqually bad, we only bind to loopback by default for 2.0, so if you want to connect to the network \"for real\" you might provide an interface name, but that always tends to do the wrong thing too (#12915), e.g. pick link local or some other useless address.\n\nThis is a compromise fix that tries to keep things simple:\n- we bind to multiple addresses when a specified host/interface name has multiple addresses. If you dont like this, then specify a single address.\n- we still only _publish_ by default to one (this would require more work).\n- no changes for ipv6 multicast or anything like that yet.\n\nThe default for which address to publish when bound to multiple addresses is pretty simple, we prefer ipv4 by default (java.net.preferIPv4Stack) and I think we should until things like multicast are fixed to work correct over ipv6. Otherwise we prefer \"real addresses\" > site local > link local and so on.\n\nSome things are still a bit messy, because real cleanups need to not be right before a beta release. \n\nCloses #12906\nCloses #12915\n",
"number": 12942,
"review_comments": [
{
"body": "Do you prefer the double-if? If not I guess you could do:\n\n``` java\nif (bindHost == null) {\n bindHost = settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING, defaultValue2));\n}\nreturn resolveInetAddress(bindHost);\n```\n\nInstead\n",
"created_at": "2015-08-17T17:52:36Z"
},
{
"body": "Doesn't this also need to check if the address is actually an ipv6 address also?\n",
"created_at": "2015-08-17T17:54:31Z"
},
{
"body": "Can you add a comment for why reusing addresses is bad on Windows? (I don't have any idea why)\n",
"created_at": "2015-08-17T17:55:27Z"
},
{
"body": "This logging is helpful, thanks for adding it!\n",
"created_at": "2015-08-17T17:57:26Z"
},
{
"body": "No. The length in bytes is enough.\n",
"created_at": "2015-08-17T18:05:56Z"
},
{
"body": "Its not related to this change, I just documented what it does.\n",
"created_at": "2015-08-17T18:06:15Z"
},
{
"body": "Ahh of course, I missed the length call, thanks\n",
"created_at": "2015-08-17T18:12:38Z"
},
{
"body": "who is this `shay banon` guy after all?\n",
"created_at": "2015-08-17T18:14:46Z"
},
{
"body": "I have changes coming in a commit. Basically it was too hard for me to see the various fallbacks for the default case (null). \n",
"created_at": "2015-08-17T18:15:03Z"
},
{
"body": "I am not sure where we use this else but would make it sense to move `_local_` to a constant? Primarily I am curious what this means and if we can add some javadocs to the constant\n",
"created_at": "2015-08-17T18:16:45Z"
},
{
"body": "I like this sorting here - neat!\n",
"created_at": "2015-08-17T18:18:58Z"
},
{
"body": "When I first looked at this I wondered if this is prone to concurrent modification exception? I wondered if we should make the `serverChannels` final and use a `CopyOnWriteArrayList` instead. Reading without sync would be safe. But then looking at the entire file I think we should just make the list final and clear it before we close each channel? the volatile confuses me and I think it's not needd\n",
"created_at": "2015-08-17T18:28:42Z"
},
{
"body": "I was concurrently cleaning up this code, because it was still too much for me. Please see the latest commit.\n",
"created_at": "2015-08-17T18:29:35Z"
},
{
"body": "this must be a ConcurrentMap you use putIfAbsent\n",
"created_at": "2015-08-17T18:42:37Z"
},
{
"body": "All the concurrency in these methods is super confusing. Its doing really slow stuff like binding to ports, which should not happen all the time. I am happy to fix the concurrency, it will be to make everything synchronized: this is all nuts.\n",
"created_at": "2015-08-17T18:49:05Z"
},
{
"body": "I agree - we can do this in a follow up!\n",
"created_at": "2015-08-17T19:10:44Z"
}
],
"title": "Fix network binding for ipv4/ipv6"
} | {
"commits": [
{
"message": "Fix transport / interface code\n\nNext up: multicast and then http"
},
{
"message": "fix http too"
},
{
"message": "remove nocommit"
},
{
"message": "fix docs, fix another bug in multicast (publish host = bad here!)"
},
{
"message": "Merge branch 'master' into network_cleanup"
},
{
"message": "localhost all the way down"
},
{
"message": "Add some unit tests for utility methods"
},
{
"message": "Cleanup/fix logic around custom resolvers"
},
{
"message": "fix java 7 compilation oops"
}
],
"files": [
{
"diff": "@@ -42,24 +42,24 @@ h3. Installation\n \n * \"Download\":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.\n * Run @bin/elasticsearch@ on unix, or @bin\\elasticsearch.bat@ on windows.\n-* Run @curl -X GET http://127.0.0.1:9200/@.\n+* Run @curl -X GET http://localhost:9200/@.\n * Start more servers ...\n \n h3. Indexing\n \n Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n Now, let's see if the information was added by GETting it:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'\n </pre>\n \n h3. Searching\n@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic?\n Let's find all the tweets that @kimchy@ posted:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n </pre>\n \n We can also use the JSON query language Elasticsearch provides instead of a query string:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '\n {\n \"query\" : {\n \"match\" : { \"user\": \"kimchy\" }\n@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n Just for kicks, let's get all the documents stored (we should see the user as well):\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n We can also do range search (the @postDate@ was automatically identified as date)\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"range\" : {\n@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In\n Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). 
Here is the indexing curl's in this case:\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@\n Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):\n \n <pre>\n-curl -XPUT http://127.0.0.1:9200/another_user/ -d '\n+curl -XPUT http://localhost:9200/another_user/ -d '\n {\n \"index\" : {\n \"numberOfShards\" : 1,\n@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea\n index (twitter user), for example:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n Or on all the indices:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}",
"filename": "README.textile",
"status": "modified"
},
{
"diff": "@@ -42,24 +42,24 @@ h3. Installation\n \n * \"Download\":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.\n * Run @bin/elasticsearch@ on unix, or @bin\\elasticsearch.bat@ on windows.\n-* Run @curl -X GET http://127.0.0.1:9200/@.\n+* Run @curl -X GET http://localhost:9200/@.\n * Start more servers ...\n \n h3. Indexing\n \n Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '\n Now, let's see if the information was added by GETting it:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'\n </pre>\n \n h3. Searching\n@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic?\n Let's find all the tweets that @kimchy@ posted:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n </pre>\n \n We can also use the JSON query language Elasticsearch provides instead of a query string:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '\n {\n \"query\" : {\n \"match\" : { \"user\": \"kimchy\" }\n@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '\n Just for kicks, let's get all the documents stored (we should see the user as well):\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n We can also do range search (the @postDate@ was automatically identified as date)\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n {\n \"query\" : {\n \"range\" : {\n@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In\n Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). 
Here is the indexing curl's in this case:\n \n <pre>\n-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ \"name\" : \"Shay Banon\" }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T13:12:00\",\n \"message\": \"Trying out Elasticsearch, so far so good?\"\n }'\n \n-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '\n+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '\n {\n \"user\": \"kimchy\",\n \"postDate\": \"2009-11-15T14:12:12\",\n@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@\n Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):\n \n <pre>\n-curl -XPUT http://127.0.0.1:9200/another_user/ -d '\n+curl -XPUT http://localhost:9200/another_user/ -d '\n {\n \"index\" : {\n \"numberOfShards\" : 1,\n@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea\n index (twitter user), for example:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}\n@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '\n Or on all the indices:\n \n <pre>\n-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '\n+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '\n {\n \"query\" : {\n \"matchAll\" : {}",
"filename": "core/README.textile",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n+\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Strings;\n@@ -33,6 +34,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n+import java.net.InetAddress;\n import java.util.Map;\n \n import static org.elasticsearch.common.transport.TransportAddressSerializers.addressToStream;\n@@ -136,7 +138,7 @@ public DiscoveryNode(String nodeId, TransportAddress address, Version version) {\n * @param version the version of the node.\n */\n public DiscoveryNode(String nodeName, String nodeId, TransportAddress address, Map<String, String> attributes, Version version) {\n- this(nodeName, nodeId, NetworkUtils.getLocalHostName(\"\"), NetworkUtils.getLocalHostAddress(\"\"), address, attributes, version);\n+ this(nodeName, nodeId, NetworkUtils.getLocalHost().getHostName(), NetworkUtils.getLocalHost().getHostAddress(), address, attributes, version);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java",
"status": "modified"
},
{
"diff": "@@ -28,11 +28,8 @@\n \n import java.io.IOException;\n import java.net.InetAddress;\n-import java.net.NetworkInterface;\n import java.net.UnknownHostException;\n-import java.util.Collection;\n import java.util.List;\n-import java.util.Locale;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.TimeUnit;\n \n@@ -41,7 +38,8 @@\n */\n public class NetworkService extends AbstractComponent {\n \n- public static final String LOCAL = \"#local#\";\n+ /** By default, we bind to loopback interfaces */\n+ public static final String DEFAULT_NETWORK_HOST = \"_local_\";\n \n private static final String GLOBAL_NETWORK_HOST_SETTING = \"network.host\";\n private static final String GLOBAL_NETWORK_BINDHOST_SETTING = \"network.bind_host\";\n@@ -71,12 +69,12 @@ public static interface CustomNameResolver {\n /**\n * Resolves the default value if possible. If not, return <tt>null</tt>.\n */\n- InetAddress resolveDefault();\n+ InetAddress[] resolveDefault();\n \n /**\n * Resolves a custom value handling, return <tt>null</tt> if can't handle it.\n */\n- InetAddress resolveIfPossible(String value);\n+ InetAddress[] resolveIfPossible(String value);\n }\n \n private final List<CustomNameResolver> customNameResolvers = new CopyOnWriteArrayList<>();\n@@ -94,100 +92,86 @@ public void addCustomNameResolver(CustomNameResolver customNameResolver) {\n customNameResolvers.add(customNameResolver);\n }\n \n-\n- public InetAddress resolveBindHostAddress(String bindHost) throws IOException {\n- return resolveBindHostAddress(bindHost, InetAddress.getLoopbackAddress().getHostAddress());\n- }\n-\n- public InetAddress resolveBindHostAddress(String bindHost, String defaultValue2) throws IOException {\n- return resolveInetAddress(bindHost, settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);\n- }\n-\n- public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {\n- InetAddress address = resolvePublishHostAddress(publishHost,\n- InetAddress.getLoopbackAddress().getHostAddress());\n- // verify that its not a local address\n- if (address == null || address.isAnyLocalAddress()) {\n- address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);\n- if (address == null) {\n- address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());\n- if (address == null) {\n- address = NetworkUtils.getLocalAddress();\n- if (address == null) {\n- return NetworkUtils.getLocalhost(NetworkUtils.StackType.IPv4);\n- }\n+ public InetAddress[] resolveBindHostAddress(String bindHost) throws IOException {\n+ // first check settings\n+ if (bindHost == null) {\n+ bindHost = settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));\n+ }\n+ // next check any registered custom resolvers\n+ if (bindHost == null) {\n+ for (CustomNameResolver customNameResolver : customNameResolvers) {\n+ InetAddress addresses[] = customNameResolver.resolveDefault();\n+ if (addresses != null) {\n+ return addresses;\n }\n }\n }\n- return address;\n- }\n-\n- public InetAddress resolvePublishHostAddress(String publishHost, String defaultValue2) throws IOException {\n- return resolveInetAddress(publishHost, settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);\n+ // finally, fill with our default\n+ if (bindHost == null) {\n+ bindHost = DEFAULT_NETWORK_HOST;\n+ }\n+ return resolveInetAddress(bindHost);\n }\n \n- public InetAddress 
resolveInetAddress(String host, String defaultValue1, String defaultValue2) throws UnknownHostException, IOException {\n- if (host == null) {\n- host = defaultValue1;\n- }\n- if (host == null) {\n- host = defaultValue2;\n+ // TODO: needs to be InetAddress[]\n+ public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {\n+ // first check settings\n+ if (publishHost == null) {\n+ publishHost = settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));\n }\n- if (host == null) {\n+ // next check any registered custom resolvers\n+ if (publishHost == null) {\n for (CustomNameResolver customNameResolver : customNameResolvers) {\n- InetAddress inetAddress = customNameResolver.resolveDefault();\n- if (inetAddress != null) {\n- return inetAddress;\n+ InetAddress addresses[] = customNameResolver.resolveDefault();\n+ if (addresses != null) {\n+ return addresses[0];\n }\n }\n- return null;\n }\n- String origHost = host;\n+ // finally, fill with our default\n+ if (publishHost == null) {\n+ publishHost = DEFAULT_NETWORK_HOST;\n+ }\n+ // TODO: allow publishing multiple addresses\n+ return resolveInetAddress(publishHost)[0];\n+ }\n+\n+ private InetAddress[] resolveInetAddress(String host) throws UnknownHostException, IOException {\n if ((host.startsWith(\"#\") && host.endsWith(\"#\")) || (host.startsWith(\"_\") && host.endsWith(\"_\"))) {\n host = host.substring(1, host.length() - 1);\n-\n+ // allow custom resolvers to have special names\n for (CustomNameResolver customNameResolver : customNameResolvers) {\n- InetAddress inetAddress = customNameResolver.resolveIfPossible(host);\n- if (inetAddress != null) {\n- return inetAddress;\n+ InetAddress addresses[] = customNameResolver.resolveIfPossible(host);\n+ if (addresses != null) {\n+ return addresses;\n }\n }\n-\n- if (host.equals(\"local\")) {\n- return NetworkUtils.getLocalAddress();\n- } else if (host.startsWith(\"non_loopback\")) {\n- if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv4\")) {\n- return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);\n- } else if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv6\")) {\n- return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv6);\n- } else {\n- return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());\n- }\n- } else {\n- NetworkUtils.StackType stackType = NetworkUtils.getIpStackType();\n- if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv4\")) {\n- stackType = NetworkUtils.StackType.IPv4;\n- host = host.substring(0, host.length() - 5);\n- } else if (host.toLowerCase(Locale.ROOT).endsWith(\":ipv6\")) {\n- stackType = NetworkUtils.StackType.IPv6;\n- host = host.substring(0, host.length() - 5);\n- }\n- Collection<NetworkInterface> allInterfs = NetworkUtils.getAllAvailableInterfaces();\n- for (NetworkInterface ni : allInterfs) {\n- if (!ni.isUp()) {\n- continue;\n+ switch (host) {\n+ case \"local\":\n+ return NetworkUtils.getLoopbackAddresses();\n+ case \"local:ipv4\":\n+ return NetworkUtils.filterIPV4(NetworkUtils.getLoopbackAddresses());\n+ case \"local:ipv6\":\n+ return NetworkUtils.filterIPV6(NetworkUtils.getLoopbackAddresses());\n+ case \"non_loopback\":\n+ return NetworkUtils.getFirstNonLoopbackAddresses();\n+ case \"non_loopback:ipv4\":\n+ return NetworkUtils.filterIPV4(NetworkUtils.getFirstNonLoopbackAddresses());\n+ case \"non_loopback:ipv6\":\n+ return NetworkUtils.filterIPV6(NetworkUtils.getFirstNonLoopbackAddresses());\n+ default:\n+ /* an interface specification */\n+ if 
(host.endsWith(\":ipv4\")) {\n+ host = host.substring(0, host.length() - 5);\n+ return NetworkUtils.filterIPV4(NetworkUtils.getAddressesForInterface(host));\n+ } else if (host.endsWith(\":ipv6\")) {\n+ host = host.substring(0, host.length() - 5);\n+ return NetworkUtils.filterIPV6(NetworkUtils.getAddressesForInterface(host));\n+ } else {\n+ return NetworkUtils.getAddressesForInterface(host);\n }\n- if (host.equals(ni.getName()) || host.equals(ni.getDisplayName())) {\n- if (ni.isLoopback()) {\n- return NetworkUtils.getFirstAddress(ni, stackType);\n- } else {\n- return NetworkUtils.getFirstNonLoopbackAddress(ni, stackType);\n- }\n- }\n- }\n }\n- throw new IOException(\"Failed to find network interface for [\" + origHost + \"]\");\n }\n- return InetAddress.getByName(host);\n+ return NetworkUtils.getAllByName(host);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/network/NetworkService.java",
"status": "modified"
},
{
"diff": "@@ -19,303 +19,205 @@\n \n package org.elasticsearch.common.network;\n \n-import com.google.common.collect.Lists;\n import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.CollectionUtil;\n import org.apache.lucene.util.Constants;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n \n-import java.net.*;\n-import java.util.*;\n+import java.net.Inet4Address;\n+import java.net.Inet6Address;\n+import java.net.InetAddress;\n+import java.net.NetworkInterface;\n+import java.net.SocketException;\n+import java.net.UnknownHostException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.Comparator;\n+import java.util.List;\n \n /**\n- *\n+ * Utilities for network interfaces / addresses\n */\n public abstract class NetworkUtils {\n \n- private final static ESLogger logger = Loggers.getLogger(NetworkUtils.class);\n-\n- public static enum StackType {\n- IPv4, IPv6, Unknown\n- }\n-\n- public static final String IPv4_SETTING = \"java.net.preferIPv4Stack\";\n- public static final String IPv6_SETTING = \"java.net.preferIPv6Addresses\";\n-\n- public static final String NON_LOOPBACK_ADDRESS = \"non_loopback_address\";\n-\n- private final static InetAddress localAddress;\n-\n- static {\n- InetAddress localAddressX;\n- try {\n- localAddressX = InetAddress.getLocalHost();\n- } catch (Throwable e) {\n- logger.warn(\"failed to resolve local host, fallback to loopback\", e);\n- localAddressX = InetAddress.getLoopbackAddress();\n+ /** no instantation */\n+ private NetworkUtils() {}\n+ \n+ /**\n+ * By default we bind to any addresses on an interface/name, unless restricted by :ipv4 etc.\n+ * This property is unrelated to that, this is about what we *publish*. Today the code pretty much\n+ * expects one address so this is used for the sort order.\n+ * @deprecated transition mechanism only\n+ */\n+ @Deprecated\n+ static final boolean PREFER_V4 = Boolean.parseBoolean(System.getProperty(\"java.net.preferIPv4Stack\", \"true\")); \n+ \n+ /** Sorts an address by preference. This way code like publishing can just pick the first one */\n+ static int sortKey(InetAddress address, boolean prefer_v4) {\n+ int key = address.getAddress().length;\n+ if (prefer_v4 == false) {\n+ key = -key;\n }\n- localAddress = localAddressX;\n- }\n-\n- public static boolean defaultReuseAddress() {\n- return Constants.WINDOWS ? 
false : true;\n- }\n-\n- public static boolean isIPv4() {\n- return System.getProperty(\"java.net.preferIPv4Stack\") != null && System.getProperty(\"java.net.preferIPv4Stack\").equals(\"true\");\n- }\n-\n- public static InetAddress getIPv4Localhost() throws UnknownHostException {\n- return getLocalhost(StackType.IPv4);\n- }\n-\n- public static InetAddress getIPv6Localhost() throws UnknownHostException {\n- return getLocalhost(StackType.IPv6);\n- }\n-\n- public static InetAddress getLocalAddress() {\n- return localAddress;\n- }\n-\n- public static String getLocalHostName(String defaultHostName) {\n- if (localAddress == null) {\n- return defaultHostName;\n+ \n+ if (address.isAnyLocalAddress()) {\n+ key += 5;\n }\n- String hostName = localAddress.getHostName();\n- if (hostName == null) {\n- return defaultHostName;\n+ if (address.isMulticastAddress()) {\n+ key += 4;\n }\n- return hostName;\n- }\n-\n- public static String getLocalHostAddress(String defaultHostAddress) {\n- if (localAddress == null) {\n- return defaultHostAddress;\n+ if (address.isLoopbackAddress()) {\n+ key += 3;\n }\n- String hostAddress = localAddress.getHostAddress();\n- if (hostAddress == null) {\n- return defaultHostAddress;\n+ if (address.isLinkLocalAddress()) {\n+ key += 2;\n+ }\n+ if (address.isSiteLocalAddress()) {\n+ key += 1;\n }\n- return hostAddress;\n- }\n \n- public static InetAddress getLocalhost(StackType ip_version) throws UnknownHostException {\n- if (ip_version == StackType.IPv4)\n- return InetAddress.getByName(\"127.0.0.1\");\n- else\n- return InetAddress.getByName(\"::1\");\n+ return key;\n }\n \n- /**\n- * Returns the first non-loopback address on any interface on the current host.\n- *\n- * @param ip_version Constraint on IP version of address to be returned, 4 or 6\n+ /** \n+ * Sorts addresses by order of preference. 
This is used to pick the first one for publishing\n+ * @deprecated remove this when multihoming is really correct\n */\n- public static InetAddress getFirstNonLoopbackAddress(StackType ip_version) throws SocketException {\n- InetAddress address;\n- for (NetworkInterface intf : getInterfaces()) {\n- try {\n- if (!intf.isUp() || intf.isLoopback())\n- continue;\n- } catch (Exception e) {\n- // might happen when calling on a network interface that does not exists\n- continue;\n- }\n- address = getFirstNonLoopbackAddress(intf, ip_version);\n- if (address != null) {\n- return address;\n+ @Deprecated\n+ private static void sortAddresses(List<InetAddress> list) {\n+ Collections.sort(list, new Comparator<InetAddress>() {\n+ @Override\n+ public int compare(InetAddress left, InetAddress right) {\n+ int cmp = Integer.compare(sortKey(left, PREFER_V4), sortKey(right, PREFER_V4));\n+ if (cmp == 0) {\n+ cmp = new BytesRef(left.getAddress()).compareTo(new BytesRef(right.getAddress()));\n+ }\n+ return cmp;\n }\n- }\n-\n- return null;\n- }\n-\n- private static List<NetworkInterface> getInterfaces() throws SocketException {\n- Enumeration intfs = NetworkInterface.getNetworkInterfaces();\n-\n- List<NetworkInterface> intfsList = Lists.newArrayList();\n- while (intfs.hasMoreElements()) {\n- intfsList.add((NetworkInterface) intfs.nextElement());\n- }\n-\n- sortInterfaces(intfsList);\n- return intfsList;\n+ });\n }\n+ \n+ private final static ESLogger logger = Loggers.getLogger(NetworkUtils.class);\n \n- private static void sortInterfaces(List<NetworkInterface> intfsList) {\n- // order by index, assuming first ones are more interesting\n- CollectionUtil.timSort(intfsList, new Comparator<NetworkInterface>() {\n+ /** Return all interfaces (and subinterfaces) on the system */\n+ static List<NetworkInterface> getInterfaces() throws SocketException {\n+ List<NetworkInterface> all = new ArrayList<>();\n+ addAllInterfaces(all, Collections.list(NetworkInterface.getNetworkInterfaces()));\n+ Collections.sort(all, new Comparator<NetworkInterface>() {\n @Override\n- public int compare(NetworkInterface o1, NetworkInterface o2) {\n- return Integer.compare (o1.getIndex(), o2.getIndex());\n+ public int compare(NetworkInterface left, NetworkInterface right) {\n+ return Integer.compare(left.getIndex(), right.getIndex());\n }\n });\n- }\n-\n-\n- /**\n- * Returns the first non-loopback address on the given interface on the current host.\n- *\n- * @param intf the interface to be checked\n- * @param ipVersion Constraint on IP version of address to be returned, 4 or 6\n- */\n- public static InetAddress getFirstNonLoopbackAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {\n- if (intf == null)\n- throw new IllegalArgumentException(\"Network interface pointer is null\");\n-\n- for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {\n- InetAddress address = (InetAddress) addresses.nextElement();\n- if (!address.isLoopbackAddress()) {\n- if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||\n- (address instanceof Inet6Address && ipVersion == StackType.IPv6))\n- return address;\n+ return all;\n+ }\n+ \n+ /** Helper for getInterfaces, recursively adds subinterfaces to {@code target} */\n+ private static void addAllInterfaces(List<NetworkInterface> target, List<NetworkInterface> level) {\n+ if (!level.isEmpty()) {\n+ target.addAll(level);\n+ for (NetworkInterface intf : level) {\n+ addAllInterfaces(target, Collections.list(intf.getSubInterfaces()));\n }\n }\n- return 
null;\n }\n-\n- /**\n- * Returns the first address with the proper ipVersion on the given interface on the current host.\n- *\n- * @param intf the interface to be checked\n- * @param ipVersion Constraint on IP version of address to be returned, 4 or 6\n- */\n- public static InetAddress getFirstAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {\n- if (intf == null)\n- throw new IllegalArgumentException(\"Network interface pointer is null\");\n-\n- for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {\n- InetAddress address = (InetAddress) addresses.nextElement();\n- if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||\n- (address instanceof Inet6Address && ipVersion == StackType.IPv6))\n- return address;\n+ \n+ /** Returns system default for SO_REUSEADDR */\n+ public static boolean defaultReuseAddress() {\n+ return Constants.WINDOWS ? false : true;\n+ }\n+ \n+ /** Returns localhost, or if its misconfigured, falls back to loopback. Use with caution!!!! */\n+ // TODO: can we remove this?\n+ public static InetAddress getLocalHost() {\n+ try {\n+ return InetAddress.getLocalHost();\n+ } catch (UnknownHostException e) {\n+ logger.warn(\"failed to resolve local host, fallback to loopback\", e);\n+ return InetAddress.getLoopbackAddress();\n }\n- return null;\n }\n-\n- /**\n- * A function to check if an interface supports an IP version (i.e has addresses\n- * defined for that IP version).\n- *\n- * @param intf\n- * @return\n- */\n- public static boolean interfaceHasIPAddresses(NetworkInterface intf, StackType ipVersion) throws SocketException, UnknownHostException {\n- boolean supportsVersion = false;\n- if (intf != null) {\n- // get all the InetAddresses defined on the interface\n- Enumeration addresses = intf.getInetAddresses();\n- while (addresses != null && addresses.hasMoreElements()) {\n- // get the next InetAddress for the current interface\n- InetAddress address = (InetAddress) addresses.nextElement();\n-\n- // check if we find an address of correct version\n- if ((address instanceof Inet4Address && (ipVersion == StackType.IPv4)) ||\n- (address instanceof Inet6Address && (ipVersion == StackType.IPv6))) {\n- supportsVersion = true;\n- break;\n- }\n+ \n+ /** Returns addresses for all loopback interfaces that are up. 
*/\n+ public static InetAddress[] getLoopbackAddresses() throws SocketException {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (NetworkInterface intf : getInterfaces()) {\n+ if (intf.isLoopback() && intf.isUp()) {\n+ list.addAll(Collections.list(intf.getInetAddresses()));\n }\n- } else {\n- throw new UnknownHostException(\"network interface not found\");\n }\n- return supportsVersion;\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No up-and-running loopback interfaces found, got \" + getInterfaces());\n+ }\n+ sortAddresses(list);\n+ return list.toArray(new InetAddress[list.size()]);\n }\n-\n- /**\n- * Tries to determine the type of IP stack from the available interfaces and their addresses and from the\n- * system properties (java.net.preferIPv4Stack and java.net.preferIPv6Addresses)\n- *\n- * @return StackType.IPv4 for an IPv4 only stack, StackYTypeIPv6 for an IPv6 only stack, and StackType.Unknown\n- * if the type cannot be detected\n- */\n- public static StackType getIpStackType() {\n- boolean isIPv4StackAvailable = isStackAvailable(true);\n- boolean isIPv6StackAvailable = isStackAvailable(false);\n-\n- // if only IPv4 stack available\n- if (isIPv4StackAvailable && !isIPv6StackAvailable) {\n- return StackType.IPv4;\n+ \n+ /** Returns addresses for the first non-loopback interface that is up. */\n+ public static InetAddress[] getFirstNonLoopbackAddresses() throws SocketException {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (NetworkInterface intf : getInterfaces()) {\n+ if (intf.isLoopback() == false && intf.isUp()) {\n+ list.addAll(Collections.list(intf.getInetAddresses()));\n+ break;\n+ }\n }\n- // if only IPv6 stack available\n- else if (isIPv6StackAvailable && !isIPv4StackAvailable) {\n- return StackType.IPv6;\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No up-and-running non-loopback interfaces found, got \" + getInterfaces());\n }\n- // if dual stack\n- else if (isIPv4StackAvailable && isIPv6StackAvailable) {\n- // get the System property which records user preference for a stack on a dual stack machine\n- if (Boolean.getBoolean(IPv4_SETTING)) // has preference over java.net.preferIPv6Addresses\n- return StackType.IPv4;\n- if (Boolean.getBoolean(IPv6_SETTING))\n- return StackType.IPv6;\n- return StackType.IPv6;\n+ sortAddresses(list);\n+ return list.toArray(new InetAddress[list.size()]);\n+ }\n+ \n+ /** Returns addresses for the given interface (it must be marked up) */\n+ public static InetAddress[] getAddressesForInterface(String name) throws SocketException {\n+ NetworkInterface intf = NetworkInterface.getByName(name);\n+ if (intf == null) {\n+ throw new IllegalArgumentException(\"No interface named '\" + name + \"' found, got \" + getInterfaces());\n }\n- return StackType.Unknown;\n- }\n-\n-\n- public static boolean isStackAvailable(boolean ipv4) {\n- Collection<InetAddress> allAddrs = getAllAvailableAddresses();\n- for (InetAddress addr : allAddrs)\n- if (ipv4 && addr instanceof Inet4Address || (!ipv4 && addr instanceof Inet6Address))\n- return true;\n- return false;\n- }\n-\n-\n- /**\n- * Returns all the available interfaces, including first level sub interfaces.\n- */\n- public static List<NetworkInterface> getAllAvailableInterfaces() throws SocketException {\n- List<NetworkInterface> allInterfaces = new ArrayList<>();\n- for (Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces(); interfaces.hasMoreElements(); ) {\n- NetworkInterface intf = interfaces.nextElement();\n- 
allInterfaces.add(intf);\n-\n- Enumeration<NetworkInterface> subInterfaces = intf.getSubInterfaces();\n- if (subInterfaces != null && subInterfaces.hasMoreElements()) {\n- while (subInterfaces.hasMoreElements()) {\n- allInterfaces.add(subInterfaces.nextElement());\n- }\n- }\n+ if (!intf.isUp()) {\n+ throw new IllegalArgumentException(\"Interface '\" + name + \"' is not up and running\");\n }\n- sortInterfaces(allInterfaces);\n- return allInterfaces;\n- }\n-\n- public static Collection<InetAddress> getAllAvailableAddresses() {\n- // we want consistent order here.\n- final Set<InetAddress> retval = new TreeSet<>(new Comparator<InetAddress>() {\n- BytesRef left = new BytesRef();\n- BytesRef right = new BytesRef();\n- @Override\n- public int compare(InetAddress o1, InetAddress o2) {\n- return set(left, o1).compareTo(set(right, o1));\n- }\n-\n- private BytesRef set(BytesRef ref, InetAddress addr) {\n- ref.bytes = addr.getAddress();\n- ref.offset = 0;\n- ref.length = ref.bytes.length;\n- return ref;\n+ List<InetAddress> list = Collections.list(intf.getInetAddresses());\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"Interface '\" + name + \"' has no internet addresses\");\n+ }\n+ sortAddresses(list);\n+ return list.toArray(new InetAddress[list.size()]);\n+ }\n+ \n+ /** Returns addresses for the given host, sorted by order of preference */\n+ public static InetAddress[] getAllByName(String host) throws UnknownHostException {\n+ InetAddress addresses[] = InetAddress.getAllByName(host);\n+ sortAddresses(Arrays.asList(addresses));\n+ return addresses;\n+ }\n+ \n+ /** Returns only the IPV4 addresses in {@code addresses} */\n+ public static InetAddress[] filterIPV4(InetAddress addresses[]) {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (InetAddress address : addresses) {\n+ if (address instanceof Inet4Address) {\n+ list.add(address);\n }\n- });\n- try {\n- for (NetworkInterface intf : getInterfaces()) {\n- Enumeration<InetAddress> addrs = intf.getInetAddresses();\n- while (addrs.hasMoreElements())\n- retval.add(addrs.nextElement());\n+ }\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No ipv4 addresses found in \" + Arrays.toString(addresses));\n+ }\n+ return list.toArray(new InetAddress[list.size()]);\n+ }\n+ \n+ /** Returns only the IPV6 addresses in {@code addresses} */\n+ public static InetAddress[] filterIPV6(InetAddress addresses[]) {\n+ List<InetAddress> list = new ArrayList<>();\n+ for (InetAddress address : addresses) {\n+ if (address instanceof Inet6Address) {\n+ list.add(address);\n }\n- } catch (SocketException e) {\n- logger.warn(\"Failed to derive all available interfaces\", e);\n }\n-\n- return retval;\n- }\n-\n-\n- private NetworkUtils() {\n-\n+ if (list.isEmpty()) {\n+ throw new IllegalArgumentException(\"No ipv6 addresses found in \" + Arrays.toString(addresses));\n+ }\n+ return list.toArray(new InetAddress[list.size()]);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java",
"status": "modified"
},
{
"diff": "@@ -131,7 +131,9 @@ protected void doStart() {\n boolean deferToInterface = settings.getAsBoolean(\"discovery.zen.ping.multicast.defer_group_to_set_interface\", Constants.MAC_OS_X);\n multicastChannel = MulticastChannel.getChannel(nodeName(), shared,\n new MulticastChannel.Config(port, group, bufferSize, ttl,\n- networkService.resolvePublishHostAddress(address),\n+ // don't use publish address, the use case for that is e.g. a firewall or proxy and\n+ // may not even be bound to an interface on this machine! use the first bound address.\n+ networkService.resolveBindHostAddress(address)[0],\n deferToInterface),\n new Receiver());\n } catch (Throwable t) {",
"filename": "core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java",
"status": "modified"
},
{
"diff": "@@ -51,6 +51,10 @@\n import java.io.IOException;\n import java.net.InetAddress;\n import java.net.InetSocketAddress;\n+import java.net.SocketAddress;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.Map;\n import java.util.concurrent.Executors;\n import java.util.concurrent.atomic.AtomicReference;\n \n@@ -128,7 +132,7 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpSer\n \n protected volatile BoundTransportAddress boundAddress;\n \n- protected volatile Channel serverChannel;\n+ protected volatile List<Channel> serverChannels = new ArrayList<>();\n \n protected OpenChannelsHandler serverOpenChannels;\n \n@@ -243,21 +247,43 @@ protected void doStart() {\n serverBootstrap.setOption(\"child.reuseAddress\", reuseAddress);\n \n // Bind and start to accept incoming connections.\n- InetAddress hostAddressX;\n+ InetAddress hostAddresses[];\n try {\n- hostAddressX = networkService.resolveBindHostAddress(bindHost);\n+ hostAddresses = networkService.resolveBindHostAddress(bindHost);\n } catch (IOException e) {\n throw new BindHttpException(\"Failed to resolve host [\" + bindHost + \"]\", e);\n }\n- final InetAddress hostAddress = hostAddressX;\n+ \n+ for (InetAddress address : hostAddresses) {\n+ bindAddress(address);\n+ }\n \n+ InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(0).getLocalAddress();\n+ InetSocketAddress publishAddress;\n+ if (0 == publishPort) {\n+ publishPort = boundAddress.getPort();\n+ }\n+ try {\n+ publishAddress = new InetSocketAddress(networkService.resolvePublishHostAddress(publishHost), publishPort);\n+ } catch (Exception e) {\n+ throw new BindTransportException(\"Failed to resolve publish address\", e);\n+ }\n+ this.boundAddress = new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress));\n+ }\n+ \n+ private void bindAddress(final InetAddress hostAddress) {\n PortsRange portsRange = new PortsRange(port);\n final AtomicReference<Exception> lastException = new AtomicReference<>();\n+ final AtomicReference<SocketAddress> boundSocket = new AtomicReference<>();\n boolean success = portsRange.iterate(new PortsRange.PortCallback() {\n @Override\n public boolean onPortNumber(int portNumber) {\n try {\n- serverChannel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));\n+ synchronized (serverChannels) {\n+ Channel channel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));\n+ serverChannels.add(channel);\n+ boundSocket.set(channel.getLocalAddress());\n+ }\n } catch (Exception e) {\n lastException.set(e);\n return false;\n@@ -268,25 +294,18 @@ public boolean onPortNumber(int portNumber) {\n if (!success) {\n throw new BindHttpException(\"Failed to bind to [\" + port + \"]\", lastException.get());\n }\n-\n- InetSocketAddress boundAddress = (InetSocketAddress) serverChannel.getLocalAddress();\n- InetSocketAddress publishAddress;\n- if (0 == publishPort) {\n- publishPort = boundAddress.getPort();\n- }\n- try {\n- publishAddress = new InetSocketAddress(networkService.resolvePublishHostAddress(publishHost), publishPort);\n- } catch (Exception e) {\n- throw new BindTransportException(\"Failed to resolve publish address\", e);\n- }\n- this.boundAddress = new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress));\n+ logger.info(\"Bound http to address [{}]\", boundSocket.get());\n }\n \n @Override\n protected void doStop() {\n- if (serverChannel 
!= null) {\n- serverChannel.close().awaitUninterruptibly();\n- serverChannel = null;\n+ synchronized (serverChannels) {\n+ if (serverChannels != null) {\n+ for (Channel channel : serverChannels) {\n+ channel.close().awaitUninterruptibly();\n+ }\n+ serverChannels = null;\n+ }\n }\n \n if (serverOpenChannels != null) {",
"filename": "core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java",
"status": "modified"
},
{
"diff": "@@ -146,8 +146,8 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implem\n // node id to actual channel\n protected final ConcurrentMap<DiscoveryNode, NodeChannels> connectedNodes = newConcurrentMap();\n protected final Map<String, ServerBootstrap> serverBootstraps = newConcurrentMap();\n- protected final Map<String, Channel> serverChannels = newConcurrentMap();\n- protected final Map<String, BoundTransportAddress> profileBoundAddresses = newConcurrentMap();\n+ protected final Map<String, List<Channel>> serverChannels = newConcurrentMap();\n+ protected final ConcurrentMap<String, BoundTransportAddress> profileBoundAddresses = newConcurrentMap();\n protected volatile TransportServiceAdapter transportServiceAdapter;\n protected volatile BoundTransportAddress boundAddress;\n protected final KeyedLock<String> connectionLock = new KeyedLock<>();\n@@ -286,7 +286,7 @@ protected void doStart() {\n bindServerBootstrap(name, mergedSettings);\n }\n \n- InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).getLocalAddress();\n+ InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).get(0).getLocalAddress();\n int publishPort = settings.getAsInt(\"transport.netty.publish_port\", settings.getAsInt(\"transport.publish_port\", boundAddress.getPort()));\n String publishHost = settings.get(\"transport.netty.publish_host\", settings.get(\"transport.publish_host\", settings.get(\"transport.host\")));\n InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort);\n@@ -397,23 +397,38 @@ private Settings createFallbackSettings() {\n \n private void bindServerBootstrap(final String name, final Settings settings) {\n // Bind and start to accept incoming connections.\n- InetAddress hostAddressX;\n+ InetAddress hostAddresses[];\n String bindHost = settings.get(\"bind_host\");\n try {\n- hostAddressX = networkService.resolveBindHostAddress(bindHost);\n+ hostAddresses = networkService.resolveBindHostAddress(bindHost);\n } catch (IOException e) {\n throw new BindTransportException(\"Failed to resolve host [\" + bindHost + \"]\", e);\n }\n- final InetAddress hostAddress = hostAddressX;\n+ for (InetAddress hostAddress : hostAddresses) {\n+ bindServerBootstrap(name, hostAddress, settings);\n+ }\n+ }\n+ \n+ private void bindServerBootstrap(final String name, final InetAddress hostAddress, Settings settings) {\n \n String port = settings.get(\"port\");\n PortsRange portsRange = new PortsRange(port);\n final AtomicReference<Exception> lastException = new AtomicReference<>();\n+ final AtomicReference<SocketAddress> boundSocket = new AtomicReference<>();\n boolean success = portsRange.iterate(new PortsRange.PortCallback() {\n @Override\n public boolean onPortNumber(int portNumber) {\n try {\n- serverChannels.put(name, serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber)));\n+ Channel channel = serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber));\n+ synchronized (serverChannels) {\n+ List<Channel> list = serverChannels.get(name);\n+ if (list == null) {\n+ list = new ArrayList<>();\n+ serverChannels.put(name, list);\n+ }\n+ list.add(channel);\n+ boundSocket.set(channel.getLocalAddress());\n+ }\n } catch (Exception e) {\n lastException.set(e);\n return false;\n@@ -426,14 +441,15 @@ public boolean onPortNumber(int portNumber) {\n }\n \n if (!DEFAULT_PROFILE.equals(name)) {\n- InetSocketAddress boundAddress = (InetSocketAddress) 
serverChannels.get(name).getLocalAddress();\n+ InetSocketAddress boundAddress = (InetSocketAddress) boundSocket.get();\n int publishPort = settings.getAsInt(\"publish_port\", boundAddress.getPort());\n String publishHost = settings.get(\"publish_host\", boundAddress.getHostString());\n InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort);\n- profileBoundAddresses.put(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress)));\n+ // TODO: support real multihoming with publishing. Today we use putIfAbsent so only the prioritized address is published\n+ profileBoundAddresses.putIfAbsent(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress)));\n }\n \n- logger.debug(\"Bound profile [{}] to address [{}]\", name, serverChannels.get(name).getLocalAddress());\n+ logger.info(\"Bound profile [{}] to address [{}]\", name, boundSocket.get());\n }\n \n private void createServerBootstrap(String name, Settings settings) {\n@@ -500,15 +516,17 @@ public void run() {\n nodeChannels.close();\n }\n \n- Iterator<Map.Entry<String, Channel>> serverChannelIterator = serverChannels.entrySet().iterator();\n+ Iterator<Map.Entry<String, List<Channel>>> serverChannelIterator = serverChannels.entrySet().iterator();\n while (serverChannelIterator.hasNext()) {\n- Map.Entry<String, Channel> serverChannelEntry = serverChannelIterator.next();\n+ Map.Entry<String, List<Channel>> serverChannelEntry = serverChannelIterator.next();\n String name = serverChannelEntry.getKey();\n- Channel serverChannel = serverChannelEntry.getValue();\n- try {\n- serverChannel.close().awaitUninterruptibly();\n- } catch (Throwable t) {\n- logger.debug(\"Error closing serverChannel for profile [{}]\", t, name);\n+ List<Channel> serverChannels = serverChannelEntry.getValue();\n+ for (Channel serverChannel : serverChannels) {\n+ try {\n+ serverChannel.close().awaitUninterruptibly();\n+ } catch (Throwable t) {\n+ logger.debug(\"Error closing serverChannel for profile [{}]\", t, name);\n+ }\n }\n serverChannelIterator.remove();\n }",
"filename": "core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,77 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.network;\n+\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.net.InetAddress;\n+\n+/**\n+ * Tests for network utils. Please avoid using any methods that cause DNS lookups!\n+ */\n+public class NetworkUtilsTests extends ESTestCase {\n+ \n+ /**\n+ * test sort key order respects PREFER_IPV4\n+ */\n+ public void testSortKey() throws Exception {\n+ InetAddress localhostv4 = InetAddress.getByName(\"127.0.0.1\");\n+ InetAddress localhostv6 = InetAddress.getByName(\"::1\");\n+ assertTrue(NetworkUtils.sortKey(localhostv4, true) < NetworkUtils.sortKey(localhostv6, true));\n+ assertTrue(NetworkUtils.sortKey(localhostv6, false) < NetworkUtils.sortKey(localhostv4, false));\n+ }\n+ \n+ /**\n+ * test ordinary addresses sort before private addresses\n+ */\n+ public void testSortKeySiteLocal() throws Exception {\n+ InetAddress siteLocal = InetAddress.getByName(\"172.16.0.1\");\n+ assert siteLocal.isSiteLocalAddress();\n+ InetAddress ordinary = InetAddress.getByName(\"192.192.192.192\");\n+ assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(siteLocal, true));\n+ assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(siteLocal, false));\n+ \n+ InetAddress siteLocal6 = InetAddress.getByName(\"fec0::1\");\n+ assert siteLocal6.isSiteLocalAddress();\n+ InetAddress ordinary6 = InetAddress.getByName(\"fddd::1\");\n+ assertTrue(NetworkUtils.sortKey(ordinary6, true) < NetworkUtils.sortKey(siteLocal6, true));\n+ assertTrue(NetworkUtils.sortKey(ordinary6, false) < NetworkUtils.sortKey(siteLocal6, false));\n+ }\n+ \n+ /**\n+ * test private addresses sort before link local addresses\n+ */\n+ public void testSortKeyLinkLocal() throws Exception {\n+ InetAddress linkLocal = InetAddress.getByName(\"fe80::1\");\n+ assert linkLocal.isLinkLocalAddress();\n+ InetAddress ordinary = InetAddress.getByName(\"fddd::1\");\n+ assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(linkLocal, true));\n+ assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(linkLocal, false));\n+ }\n+ \n+ /**\n+ * Test filtering out ipv4/ipv6 addresses\n+ */\n+ public void testFilter() throws Exception {\n+ InetAddress addresses[] = { InetAddress.getByName(\"::1\"), InetAddress.getByName(\"127.0.0.1\") };\n+ assertArrayEquals(new InetAddress[] { InetAddress.getByName(\"127.0.0.1\") }, NetworkUtils.filterIPV4(addresses));\n+ assertArrayEquals(new InetAddress[] { InetAddress.getByName(\"::1\") }, NetworkUtils.filterIPV6(addresses));\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java",
"status": "added"
},
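For readers skimming the diff above, here is a minimal usage sketch of the new `NetworkUtils` helpers (`getAllByName`, `filterIPV4`, `filterIPV6`) as exercised by the added test; the class and method signatures are taken from the NetworkUtils.java diff earlier in this entry, while the wrapper class name and printed values are illustrative only:

```java
import java.net.InetAddress;
import java.util.Arrays;

import org.elasticsearch.common.network.NetworkUtils;

public class NetworkUtilsUsageSketch {
    public static void main(String[] args) throws Exception {
        // Mix of an IPv6 and an IPv4 loopback address, mirroring testFilter().
        InetAddress[] addresses = {
            InetAddress.getByName("::1"),
            InetAddress.getByName("127.0.0.1")
        };

        // Keep only one address family; per the diff, both helpers throw
        // IllegalArgumentException when no matching address is present.
        InetAddress[] v4 = NetworkUtils.filterIPV4(addresses);
        InetAddress[] v6 = NetworkUtils.filterIPV6(addresses);
        System.out.println(Arrays.toString(v4)); // [/127.0.0.1]
        System.out.println(Arrays.toString(v6)); // [/0:0:0:0:0:0:0:1]

        // getAllByName returns all addresses for a host, sorted by preference.
        System.out.println(Arrays.toString(NetworkUtils.getAllByName("localhost")));
    }
}
```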
{
"diff": "@@ -504,7 +504,7 @@ private static Settings getRandomNodeSettings(long seed) {\n public static String clusterName(String prefix, long clusterSeed) {\n StringBuilder builder = new StringBuilder(prefix);\n final int childVM = RandomizedTest.systemPropertyAsInt(SysGlobals.CHILDVM_SYSPROP_JVM_ID, 0);\n- builder.append('-').append(NetworkUtils.getLocalHostName(\"__default_host__\"));\n+ builder.append('-').append(NetworkUtils.getLocalHost().getHostName());\n builder.append(\"-CHILD_VM=[\").append(childVM).append(']');\n builder.append(\"-CLUSTER_SEED=[\").append(clusterSeed).append(']');\n // if multiple maven task run on a single host we better have an identifier that doesn't rely on input params",
"filename": "core/src/test/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -135,29 +135,6 @@ public void testThatDefaultProfilePortOverridesGeneralConfiguration() throws Exc\n }\n }\n \n- @Test\n- public void testThatBindingOnDifferentHostsWorks() throws Exception {\n- int[] ports = getRandomPorts(2);\n- InetAddress firstNonLoopbackAddress = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);\n- assumeTrue(\"No IP-v4 non-loopback address available - are you on a plane?\", firstNonLoopbackAddress != null);\n- Settings settings = settingsBuilder()\n- .put(\"network.host\", \"127.0.0.1\")\n- .put(\"transport.tcp.port\", ports[0])\n- .put(\"transport.profiles.default.bind_host\", \"127.0.0.1\")\n- .put(\"transport.profiles.client1.bind_host\", firstNonLoopbackAddress.getHostAddress())\n- .put(\"transport.profiles.client1.port\", ports[1])\n- .build();\n-\n- ThreadPool threadPool = new ThreadPool(\"tst\");\n- try (NettyTransport ignored = startNettyTransport(settings, threadPool)) {\n- assertPortIsBound(\"127.0.0.1\", ports[0]);\n- assertPortIsBound(firstNonLoopbackAddress.getHostAddress(), ports[1]);\n- assertConnectionRefused(ports[1]);\n- } finally {\n- terminate(threadPool);\n- }\n- }\n-\n @Test\n public void testThatProfileWithoutValidNameIsIgnored() throws Exception {\n int[] ports = getRandomPorts(3);",
"filename": "core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java",
"status": "modified"
},
{
"diff": "@@ -124,7 +124,7 @@\n <waitfor maxwait=\"30\" maxwaitunit=\"second\"\n checkevery=\"500\" checkeveryunit=\"millisecond\"\n timeoutproperty=\"@{timeoutproperty}\">\n- <http url=\"http://127.0.0.1:@{port}\"/>\n+ <http url=\"http://localhost:@{port}\"/>\n </waitfor>\n </sequential>\n </macrodef>\n@@ -138,7 +138,7 @@\n <waitfor maxwait=\"30\" maxwaitunit=\"second\"\n checkevery=\"500\" checkeveryunit=\"millisecond\"\n timeoutproperty=\"@{timeoutproperty}\">\n- <http url=\"http://127.0.0.1:@{port}/_cluster/health?wait_for_nodes=2\"/>\n+ <http url=\"http://localhost:@{port}/_cluster/health?wait_for_nodes=2\"/>\n </waitfor>\n </sequential>\n </macrodef>",
"filename": "dev-tools/src/main/resources/ant/integration-tests.xml",
"status": "modified"
},
{
"diff": "@@ -153,7 +153,7 @@\n <parallelism>1</parallelism>\n <systemProperties>\n <!-- use external cluster -->\n- <tests.cluster>127.0.0.1:${integ.transport.port}</tests.cluster>\n+ <tests.cluster>localhost:${integ.transport.port}</tests.cluster>\n </systemProperties>\n </configuration>\n </execution>",
"filename": "distribution/pom.xml",
"status": "modified"
},
{
"diff": "@@ -38,7 +38,7 @@ respond to. It provides the following settings with the\n |`ttl` |The ttl of the multicast message. Defaults to `3`.\n \n |`address` |The address to bind to, defaults to `null` which means it\n-will bind to all available network interfaces.\n+will bind `network.bind_host`\n \n |`enabled` |Whether multicast ping discovery is enabled. Defaults to `true`.\n |=======================================================================",
"filename": "docs/reference/modules/discovery/zen.asciidoc",
"status": "modified"
},
{
"diff": "@@ -9,13 +9,15 @@ network settings allows to set common settings that will be shared among\n all network based modules (unless explicitly overridden in each module).\n \n The `network.bind_host` setting allows to control the host different network\n-components will bind on. By default, the bind host will be `anyLoopbackAddress`\n-(typically `127.0.0.1` or `::1`).\n+components will bind on. By default, the bind host will be `_local_`\n+(loopback addresses such as `127.0.0.1`, `::1`).\n \n The `network.publish_host` setting allows to control the host the node will\n publish itself within the cluster so other nodes will be able to connect to it.\n-Of course, this can't be the `anyLocalAddress`, and by default, it will be the\n-first loopback address (if possible), or the local address.\n+Currently an elasticsearch node may be bound to multiple addresses, but only\n+publishes one. If not specified, this defaults to the \"best\" address from \n+`network.bind_host`. By default, IPv4 addresses are preferred to IPv6, and \n+ordinary addresses are preferred to site-local or link-local addresses.\n \n The `network.host` setting is a simple setting to automatically set both\n `network.bind_host` and `network.publish_host` to the same host value.\n@@ -27,21 +29,25 @@ in the following table:\n [cols=\"<,<\",options=\"header\",]\n |=======================================================================\n |Logical Host Setting Value |Description\n-|`_local_` |Will be resolved to the local ip address.\n+|`_local_` |Will be resolved to loopback addresses\n \n-|`_non_loopback_` |The first non loopback address.\n+|`_local:ipv4_` |Will be resolved to loopback IPv4 addresses\n \n-|`_non_loopback:ipv4_` |The first non loopback IPv4 address.\n+|`_local:ipv6_` |Will be resolved to loopback IPv6 addresses\n \n-|`_non_loopback:ipv6_` |The first non loopback IPv6 address.\n+|`_non_loopback_` |Addresses of the first non loopback interface\n \n-|`_[networkInterface]_` |Resolves to the ip address of the provided\n+|`_non_loopback:ipv4_` |IPv4 addresses of the first non loopback interface\n+\n+|`_non_loopback:ipv6_` |IPv6 addresses of the first non loopback interface\n+\n+|`_[networkInterface]_` |Resolves to the addresses of the provided\n network interface. For example `_en0_`.\n \n-|`_[networkInterface]:ipv4_` |Resolves to the ipv4 address of the\n+|`_[networkInterface]:ipv4_` |Resolves to the ipv4 addresses of the\n provided network interface. For example `_en0:ipv4_`.\n \n-|`_[networkInterface]:ipv6_` |Resolves to the ipv6 address of the\n+|`_[networkInterface]:ipv6_` |Resolves to the ipv6 addresses of the\n provided network interface. For example `_en0:ipv6_`.\n |=======================================================================\n ",
"filename": "docs/reference/modules/network.asciidoc",
"status": "modified"
},
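To make the documentation change above concrete, here is a small sketch of node settings using the logical host values it describes. This is purely illustrative: the interface name `en0` and the chosen addresses are hypothetical, and only the `Settings.builder()` API already used in this entry's test diffs is assumed:

```java
import org.elasticsearch.common.settings.Settings;

public class NetworkHostSettingsSketch {
    public static Settings exampleNodeSettings() {
        return Settings.builder()
                // Bind to every IPv4 address of interface en0 (may resolve to several addresses).
                .put("network.bind_host", "_en0:ipv4_")
                // Publishing still resolves to a single address; an explicit value is
                // typically only needed behind a firewall or proxy (192.0.2.10 is a
                // documentation-range placeholder).
                .put("network.publish_host", "192.0.2.10")
                .build();
    }
}
```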
{
"diff": "@@ -93,7 +93,7 @@ public Ec2NameResolver(Settings settings) {\n * @throws IOException if ec2 meta-data cannot be obtained.\n * @see CustomNameResolver#resolveIfPossible(String)\n */\n- public InetAddress resolve(Ec2HostnameType type, boolean warnOnFailure) {\n+ public InetAddress[] resolve(Ec2HostnameType type, boolean warnOnFailure) {\n URLConnection urlConnection = null;\n InputStream in = null;\n try {\n@@ -109,7 +109,8 @@ public InetAddress resolve(Ec2HostnameType type, boolean warnOnFailure) {\n logger.error(\"no ec2 metadata returned from {}\", url);\n return null;\n }\n- return InetAddress.getByName(metadataResult);\n+ // only one address: because we explicitly ask for only one via the Ec2HostnameType\n+ return new InetAddress[] { InetAddress.getByName(metadataResult) };\n } catch (IOException e) {\n if (warnOnFailure) {\n logger.warn(\"failed to get metadata for [\" + type.configName + \"]: \" + ExceptionsHelper.detailedMessage(e));\n@@ -123,13 +124,13 @@ public InetAddress resolve(Ec2HostnameType type, boolean warnOnFailure) {\n }\n \n @Override\n- public InetAddress resolveDefault() {\n+ public InetAddress[] resolveDefault() {\n return null; // using this, one has to explicitly specify _ec2_ in network setting\n // return resolve(Ec2HostnameType.DEFAULT, false);\n }\n \n @Override\n- public InetAddress resolveIfPossible(String value) {\n+ public InetAddress[] resolveIfPossible(String value) {\n for (Ec2HostnameType type : Ec2HostnameType.values()) {\n if (type.configName.equals(value)) {\n return resolve(type, true);",
"filename": "plugins/cloud-aws/src/main/java/org/elasticsearch/cloud/aws/network/Ec2NameResolver.java",
"status": "modified"
},
{
"diff": "@@ -414,7 +414,7 @@\n <parallelism>1</parallelism>\n <systemProperties>\n <!-- use external cluster -->\n- <tests.cluster>127.0.0.1:${integ.transport.port}</tests.cluster>\n+ <tests.cluster>localhost:${integ.transport.port}</tests.cluster>\n </systemProperties>\n </configuration>\n </execution>",
"filename": "plugins/pom.xml",
"status": "modified"
}
]
} |
{
"body": "We have 17 threadpools today, and while all of them are configured individually to create a number of threads that is reasonable for the number of cores, summing up all created threads gives a high number. For instance a machine with 4 cores starts between 30 (only considering fixed threadpools) and 59 threads (considering fixed + scaling threadpools), and a machine with 32 cores starts between 215 and 373 threads. And this doesn't account for threads that are created by Lucene itself.\n\nThese numbers look too high to me, and while we could decrease the size of individual thread pools, this would have the downside that eg. only issuing get requests could not make use of a whole machine's resources. So maybe we could look at better sharing threadpools across tasks? For instance I was thinking we could merge the following threadpools:\n- get, search, suggest, percolate, warmer (read operations)\n- index, bulk, refresh, flush (write operations)\n- fetch_shard_started, fetch_shard_stored\n",
"comments": [
{
"body": "No consensus could be reached. Closing.\n",
"created_at": "2016-01-26T18:00:13Z"
}
],
"number": 12666,
"title": "Better share threadpools"
} | {
"body": "Because we have thread pools for almost everything, even if each of them has a\nreasonable size, the total number of threads that elasticsearch creates is\nhigh-ish. For instance, with 8 processors, elasticsearch creates between 58\n(only fixed thread pools) and 111 threads (including fixed and scaling pools).\nWith this change, the numbers go down to 33/59.\n\nIdeally the SEARCH and GET thread pools should be the same, but I couldn't do\nit now given that some SEARCH requests block on GET requests in order to\nretrieve indexed scripts or geo shapes. So they are still separate pools for\nnow.\n\nHowever, the INDEX, BULK, REFRESH and FLUSH thread pools have been merged into\na single WRITE thread pool, the SEARCH, PERCOLATE and SUGGEST have been merged\ninto a single READ thread pool and FETCH_SHARD_STARTED and FETCH_SHARD_STORE\nhave been merged into FETCH_SHARD. Also the WARMER pool has been removed: it\nwas useful to parallelize fielddata loading but now that we have doc values by\ndefault, we can make things simpler by just loading them in the current thread.\n\nClose #12666\n",
"number": 12939,
"review_comments": [
{
"body": "I guess one sacrifice here is that the warmers will run in series instead of parallel now. That is probably OK unless someone has thousands of the the things and they take a long time to run - like on a newly merged segment. But I'm pretty sure the docs advise against having tons and tons of warmers anyway.\n",
"created_at": "2015-08-17T16:00:40Z"
},
{
"body": "The name of this test is now inaccurate. Is that ok?\n",
"created_at": "2015-08-17T16:11:15Z"
},
{
"body": "Wait - no. It wasn't about the _update_ thread pool - it was just about updating the thread pool settings. Ok. Ignore that last comment.\n",
"created_at": "2015-08-17T16:11:45Z"
},
{
"body": "Right, and current changes will hopefully make warming faster, like doc-values by default (ES 2.0) or disk-based norms (Lucene 5.3).\n",
"created_at": "2015-08-17T16:18:46Z"
}
],
"title": "Share thread pools that have similar purposes."
} | {
"commits": [
{
"message": "Share thread pools that have similar purposes.\n\nBecause we have thread pools for almost everything, even if each of them has a\nreasonable size, the total number of threads that elasticsearch creates is\nhigh-ish. For instance, with 8 processors, elasticsearch creates between 58\n(only fixed thread pools) and 111 threads (including fixed and scaling pools).\nWith this change, the numbers go down to 33/59.\n\nIdeally the SEARCH and GET thread pools should be the same, but I couldn't do\nit now given that some SEARCH requests block on GET requests in order to\nretrieve indexed scripts or geo shapes. So they are still separate pools for\nnow.\n\nHowever, the INDEX, BULK, REFRESH and FLUSH thread pools have been merged into\na single WRITE thread pool, the SEARCH, PERCOLATE and SUGGEST have been merged\ninto a single READ thread pool and FETCH_SHARD_STARTED and FETCH_SHARD_STORE\nhave been merged into FETCH_SHARD. Also the WARMER pool has been removed: it\nwas useful to parallelize fielddata loading but now that we have doc values by\ndefault, we can make things simpler by just loading them in the current thread.\n\nClose #12666"
},
{
"message": "Update documentation."
},
{
"message": "Add INDEX thread pool back."
}
],
"files": [
{
"diff": "@@ -49,18 +49,15 @@\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardUtils;\n import org.elasticsearch.indices.IndicesWarmer;\n-import org.elasticsearch.indices.IndicesWarmer.TerminationHandle;\n-import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.indices.IndicesWarmer.WarmerContext;\n \n import java.io.Closeable;\n import java.io.IOException;\n import java.util.HashSet;\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.Callable;\n-import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.ExecutionException;\n-import java.util.concurrent.Executor;\n \n /**\n * This is a cache for {@link BitDocIdSet} based filters and is unbounded by size or time.\n@@ -241,9 +238,9 @@ public int hashCode() {\n final class BitDocIdSetFilterWarmer extends IndicesWarmer.Listener {\n \n @Override\n- public IndicesWarmer.TerminationHandle warmNewReaders(final IndexShard indexShard, IndexMetaData indexMetaData, IndicesWarmer.WarmerContext context, ThreadPool threadPool) {\n+ public void warmNewReaders(final IndexShard indexShard, IndexMetaData indexMetaData, IndicesWarmer.WarmerContext context) {\n if (!loadRandomAccessFiltersEagerly) {\n- return TerminationHandle.NO_WAIT;\n+ return;\n }\n \n boolean hasNested = false;\n@@ -267,43 +264,25 @@ public IndicesWarmer.TerminationHandle warmNewReaders(final IndexShard indexShar\n warmUp.add(Queries.newNonNestedFilter());\n }\n \n- final Executor executor = threadPool.executor(executor());\n- final CountDownLatch latch = new CountDownLatch(context.searcher().reader().leaves().size() * warmUp.size());\n for (final LeafReaderContext ctx : context.searcher().reader().leaves()) {\n for (final Filter filterToWarm : warmUp) {\n- executor.execute(new Runnable() {\n-\n- @Override\n- public void run() {\n- try {\n- final long start = System.nanoTime();\n- getAndLoadIfNotPresent(filterToWarm, ctx);\n- if (indexShard.warmerService().logger().isTraceEnabled()) {\n- indexShard.warmerService().logger().trace(\"warmed bitset for [{}], took [{}]\", filterToWarm, TimeValue.timeValueNanos(System.nanoTime() - start));\n- }\n- } catch (Throwable t) {\n- indexShard.warmerService().logger().warn(\"failed to load bitset for [{}]\", t, filterToWarm);\n- } finally {\n- latch.countDown();\n- }\n+ try {\n+ final long start = System.nanoTime();\n+ getAndLoadIfNotPresent(filterToWarm, ctx);\n+ if (indexShard.warmerService().logger().isTraceEnabled()) {\n+ indexShard.warmerService().logger().trace(\"warmed bitset for [{}], took [{}]\", filterToWarm, TimeValue.timeValueNanos(System.nanoTime() - start));\n }\n-\n- });\n+ } catch (Throwable t) {\n+ indexShard.warmerService().logger().warn(\"failed to load bitset for [{}]\", t, filterToWarm);\n+ }\n }\n }\n- return new TerminationHandle() {\n- @Override\n- public void awaitTermination() throws InterruptedException {\n- latch.await();\n- }\n- };\n }\n \n @Override\n- public TerminationHandle warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, IndicesWarmer.WarmerContext context, ThreadPool threadPool) {\n- return TerminationHandle.NO_WAIT;\n+ public void warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context) {\n+ // no-op\n }\n-\n }\n \n Cache<Object, Cache<Filter, Value>> getLoadedFilters() {",
"filename": "core/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.indices;\n \n-import com.google.common.collect.Lists;\n import org.apache.lucene.index.IndexReader;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -31,9 +30,7 @@\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.threadpool.ThreadPool;\n \n-import java.util.List;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.TimeUnit;\n \n@@ -43,18 +40,15 @@ public final class IndicesWarmer extends AbstractComponent {\n \n public static final String INDEX_WARMER_ENABLED = \"index.warmer.enabled\";\n \n- private final ThreadPool threadPool;\n-\n private final ClusterService clusterService;\n \n private final IndicesService indicesService;\n \n private final CopyOnWriteArrayList<Listener> listeners = new CopyOnWriteArrayList<>();\n \n @Inject\n- public IndicesWarmer(Settings settings, ThreadPool threadPool, ClusterService clusterService, IndicesService indicesService) {\n+ public IndicesWarmer(Settings settings, ClusterService clusterService, IndicesService indicesService) {\n super(settings);\n- this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n }\n@@ -100,27 +94,11 @@ private void warmInternal(final WarmerContext context, boolean topReader) {\n }\n indexShard.warmerService().onPreWarm();\n long time = System.nanoTime();\n- final List<TerminationHandle> terminationHandles = Lists.newArrayList();\n- // get a handle on pending tasks\n for (final Listener listener : listeners) {\n if (topReader) {\n- terminationHandles.add(listener.warmTopReader(indexShard, indexMetaData, context, threadPool));\n+ listener.warmTopReader(indexShard, indexMetaData, context);\n } else {\n- terminationHandles.add(listener.warmNewReaders(indexShard, indexMetaData, context, threadPool));\n- }\n- }\n- // wait for termination\n- for (TerminationHandle terminationHandle : terminationHandles) {\n- try {\n- terminationHandle.awaitTermination();\n- } catch (InterruptedException e) {\n- Thread.currentThread().interrupt();\n- if (topReader) {\n- logger.warn(\"top warming has been interrupted\", e);\n- } else {\n- logger.warn(\"warming has been interrupted\", e);\n- }\n- break;\n+ listener.warmNewReaders(indexShard, indexMetaData, context);\n }\n }\n long took = System.nanoTime() - time;\n@@ -134,27 +112,13 @@ private void warmInternal(final WarmerContext context, boolean topReader) {\n }\n }\n \n- /** A handle on the execution of warm-up action. */\n- public interface TerminationHandle {\n-\n- public static TerminationHandle NO_WAIT = new TerminationHandle() {\n- @Override\n- public void awaitTermination() {}\n- };\n-\n- /** Wait until execution of the warm-up action completes. */\n- void awaitTermination() throws InterruptedException;\n- }\n public static abstract class Listener {\n \n- public String executor() {\n- return ThreadPool.Names.WARMER;\n- }\n-\n- /** Queue tasks to warm-up the given segments and return handles that allow to wait for termination of the execution of those tasks. */\n- public abstract TerminationHandle warmNewReaders(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context, ThreadPool threadPool);\n+ /** Warm new leaf readers in the current thread. 
*/\n+ public abstract void warmNewReaders(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context);\n \n- public abstract TerminationHandle warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context, ThreadPool threadPool);\n+ /** Warm the top reader in the current thread. */\n+ public abstract void warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context);\n }\n \n public static final class WarmerContext {",
"filename": "core/src/main/java/org/elasticsearch/indices/IndicesWarmer.java",
"status": "modified"
},
{
"diff": "@@ -50,34 +50,26 @@\n public class RestThreadPoolAction extends AbstractCatAction {\n \n private final static String[] SUPPORTED_NAMES = new String[]{\n- ThreadPool.Names.BULK,\n- ThreadPool.Names.FLUSH,\n+ ThreadPool.Names.FETCH_SHARD,\n ThreadPool.Names.GENERIC,\n ThreadPool.Names.GET,\n ThreadPool.Names.INDEX,\n ThreadPool.Names.MANAGEMENT,\n ThreadPool.Names.OPTIMIZE,\n- ThreadPool.Names.PERCOLATE,\n- ThreadPool.Names.REFRESH,\n- ThreadPool.Names.SEARCH,\n+ ThreadPool.Names.READ,\n ThreadPool.Names.SNAPSHOT,\n- ThreadPool.Names.SUGGEST,\n- ThreadPool.Names.WARMER\n+ ThreadPool.Names.WRITE\n };\n \n private final static String[] SUPPORTED_ALIASES = new String[]{\n- \"b\",\n- \"f\",\n+ \"fs\",\n \"ge\",\n \"g\",\n \"i\",\n \"ma\",\n \"o\",\n- \"p\",\n \"r\",\n- \"s\",\n \"sn\",\n- \"su\",\n \"w\"\n };\n \n@@ -86,9 +78,8 @@ public class RestThreadPoolAction extends AbstractCatAction {\n }\n \n private final static String[] DEFAULT_THREAD_POOLS = new String[]{\n- ThreadPool.Names.BULK,\n- ThreadPool.Names.INDEX,\n- ThreadPool.Names.SEARCH,\n+ ThreadPool.Names.WRITE,\n+ ThreadPool.Names.READ\n };\n \n private final static Map<String, String> ALIAS_TO_THREAD_POOL;",
"filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestThreadPoolAction.java",
"status": "modified"
},
{
"diff": "@@ -72,7 +72,6 @@\n import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.IndicesWarmer;\n-import org.elasticsearch.indices.IndicesWarmer.TerminationHandle;\n import org.elasticsearch.indices.IndicesWarmer.WarmerContext;\n import org.elasticsearch.indices.cache.request.IndicesRequestCache;\n import org.elasticsearch.node.settings.NodeSettingsService;\n@@ -920,7 +919,7 @@ public int getActiveContexts() {\n static class NormsWarmer extends IndicesWarmer.Listener {\n \n @Override\n- public TerminationHandle warmNewReaders(final IndexShard indexShard, IndexMetaData indexMetaData, final WarmerContext context, ThreadPool threadPool) {\n+ public void warmNewReaders(final IndexShard indexShard, IndexMetaData indexMetaData, final WarmerContext context) {\n final Loading defaultLoading = Loading.parse(indexMetaData.settings().get(NORMS_LOADING_KEY), Loading.LAZY);\n final MapperService mapperService = indexShard.mapperService();\n final ObjectSet<String> warmUp = new ObjectHashSet<>();\n@@ -937,51 +936,35 @@ public TerminationHandle warmNewReaders(final IndexShard indexShard, IndexMetaDa\n }\n }\n \n- final CountDownLatch latch = new CountDownLatch(1);\n- // Norms loading may be I/O intensive but is not CPU intensive, so we execute it in a single task\n- threadPool.executor(executor()).execute(new Runnable() {\n- @Override\n- public void run() {\n- try {\n- for (ObjectCursor<String> stringObjectCursor : warmUp) {\n- final String indexName = stringObjectCursor.value;\n- final long start = System.nanoTime();\n- for (final LeafReaderContext ctx : context.searcher().reader().leaves()) {\n- final NumericDocValues values = ctx.reader().getNormValues(indexName);\n- if (values != null) {\n- values.get(0);\n- }\n- }\n- if (indexShard.warmerService().logger().isTraceEnabled()) {\n- indexShard.warmerService().logger().trace(\"warmed norms for [{}], took [{}]\", indexName, TimeValue.timeValueNanos(System.nanoTime() - start));\n- }\n+ try {\n+ for (ObjectCursor<String> stringObjectCursor : warmUp) {\n+ final String indexName = stringObjectCursor.value;\n+ final long start = System.nanoTime();\n+ for (final LeafReaderContext ctx : context.searcher().reader().leaves()) {\n+ final NumericDocValues values = ctx.reader().getNormValues(indexName);\n+ if (values != null) {\n+ values.get(0);\n }\n- } catch (Throwable t) {\n- indexShard.warmerService().logger().warn(\"failed to warm-up norms\", t);\n- } finally {\n- latch.countDown();\n+ }\n+ if (indexShard.warmerService().logger().isTraceEnabled()) {\n+ indexShard.warmerService().logger().trace(\"warmed norms for [{}], took [{}]\", indexName, TimeValue.timeValueNanos(System.nanoTime() - start));\n }\n }\n- });\n-\n- return new TerminationHandle() {\n- @Override\n- public void awaitTermination() throws InterruptedException {\n- latch.await();\n- }\n- };\n+ } catch (Throwable t) {\n+ indexShard.warmerService().logger().warn(\"failed to warm-up norms\", t);\n+ }\n }\n \n @Override\n- public TerminationHandle warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context, ThreadPool threadPool) {\n- return TerminationHandle.NO_WAIT;\n+ public void warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context) {\n+ // no-op\n }\n }\n \n static class FieldDataWarmer extends IndicesWarmer.Listener {\n \n @Override\n- public TerminationHandle warmNewReaders(final IndexShard indexShard, IndexMetaData indexMetaData, final WarmerContext 
context, ThreadPool threadPool) {\n+ public void warmNewReaders(final IndexShard indexShard, IndexMetaData indexMetaData, final WarmerContext context) {\n final MapperService mapperService = indexShard.mapperService();\n final Map<String, MappedFieldType> warmUp = new HashMap<>();\n for (DocumentMapper docMapper : mapperService.docMappers(false)) {\n@@ -1002,40 +985,23 @@ public TerminationHandle warmNewReaders(final IndexShard indexShard, IndexMetaDa\n }\n }\n final IndexFieldDataService indexFieldDataService = indexShard.indexFieldDataService();\n- final Executor executor = threadPool.executor(executor());\n- final CountDownLatch latch = new CountDownLatch(context.searcher().reader().leaves().size() * warmUp.size());\n for (final LeafReaderContext ctx : context.searcher().reader().leaves()) {\n for (final MappedFieldType fieldType : warmUp.values()) {\n- executor.execute(new Runnable() {\n-\n- @Override\n- public void run() {\n- try {\n- final long start = System.nanoTime();\n- indexFieldDataService.getForField(fieldType).load(ctx);\n- if (indexShard.warmerService().logger().isTraceEnabled()) {\n- indexShard.warmerService().logger().trace(\"warmed fielddata for [{}], took [{}]\", fieldType.names().fullName(), TimeValue.timeValueNanos(System.nanoTime() - start));\n- }\n- } catch (Throwable t) {\n- indexShard.warmerService().logger().warn(\"failed to warm-up fielddata for [{}]\", t, fieldType.names().fullName());\n- } finally {\n- latch.countDown();\n- }\n+ try {\n+ final long start = System.nanoTime();\n+ indexFieldDataService.getForField(fieldType).load(ctx);\n+ if (indexShard.warmerService().logger().isTraceEnabled()) {\n+ indexShard.warmerService().logger().trace(\"warmed fielddata for [{}], took [{}]\", fieldType.names().fullName(), TimeValue.timeValueNanos(System.nanoTime() - start));\n }\n-\n- });\n+ } catch (Throwable t) {\n+ indexShard.warmerService().logger().warn(\"failed to warm-up fielddata for [{}]\", t, fieldType.names().fullName());\n+ }\n }\n }\n- return new TerminationHandle() {\n- @Override\n- public void awaitTermination() throws InterruptedException {\n- latch.await();\n- }\n- };\n }\n \n @Override\n- public TerminationHandle warmTopReader(final IndexShard indexShard, IndexMetaData indexMetaData, final WarmerContext context, ThreadPool threadPool) {\n+ public void warmTopReader(final IndexShard indexShard, IndexMetaData indexMetaData, final WarmerContext context) {\n final MapperService mapperService = indexShard.mapperService();\n final Map<String, MappedFieldType> warmUpGlobalOrdinals = new HashMap<>();\n for (DocumentMapper docMapper : mapperService.docMappers(false)) {\n@@ -1055,104 +1021,70 @@ public TerminationHandle warmTopReader(final IndexShard indexShard, IndexMetaDat\n }\n }\n final IndexFieldDataService indexFieldDataService = indexShard.indexFieldDataService();\n- final Executor executor = threadPool.executor(executor());\n- final CountDownLatch latch = new CountDownLatch(warmUpGlobalOrdinals.size());\n for (final MappedFieldType fieldType : warmUpGlobalOrdinals.values()) {\n- executor.execute(new Runnable() {\n- @Override\n- public void run() {\n- try {\n- final long start = System.nanoTime();\n- IndexFieldData.Global ifd = indexFieldDataService.getForField(fieldType);\n- ifd.loadGlobal(context.reader());\n- if (indexShard.warmerService().logger().isTraceEnabled()) {\n- indexShard.warmerService().logger().trace(\"warmed global ordinals for [{}], took [{}]\", fieldType.names().fullName(), TimeValue.timeValueNanos(System.nanoTime() - start));\n- }\n- } 
catch (Throwable t) {\n- indexShard.warmerService().logger().warn(\"failed to warm-up global ordinals for [{}]\", t, fieldType.names().fullName());\n- } finally {\n- latch.countDown();\n- }\n+ try {\n+ final long start = System.nanoTime();\n+ IndexFieldData.Global<?> ifd = indexFieldDataService.getForField(fieldType);\n+ ifd.loadGlobal(context.reader());\n+ if (indexShard.warmerService().logger().isTraceEnabled()) {\n+ indexShard.warmerService().logger().trace(\"warmed global ordinals for [{}], took [{}]\", fieldType.names().fullName(), TimeValue.timeValueNanos(System.nanoTime() - start));\n }\n- });\n- }\n- return new TerminationHandle() {\n- @Override\n- public void awaitTermination() throws InterruptedException {\n- latch.await();\n+ } catch (Throwable t) {\n+ indexShard.warmerService().logger().warn(\"failed to warm-up global ordinals for [{}]\", t, fieldType.names().fullName());\n }\n- };\n+ }\n }\n }\n \n class SearchWarmer extends IndicesWarmer.Listener {\n \n @Override\n- public TerminationHandle warmNewReaders(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context, ThreadPool threadPool) {\n- return internalWarm(indexShard, indexMetaData, context, threadPool, false);\n+ public void warmNewReaders(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context) {\n+ internalWarm(indexShard, indexMetaData, context, threadPool, false);\n }\n \n @Override\n- public TerminationHandle warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context, ThreadPool threadPool) {\n- return internalWarm(indexShard, indexMetaData, context, threadPool, true);\n+ public void warmTopReader(IndexShard indexShard, IndexMetaData indexMetaData, WarmerContext context) {\n+ internalWarm(indexShard, indexMetaData, context, threadPool, true);\n }\n \n- public TerminationHandle internalWarm(final IndexShard indexShard, final IndexMetaData indexMetaData, final IndicesWarmer.WarmerContext warmerContext, ThreadPool threadPool, final boolean top) {\n+ public void internalWarm(final IndexShard indexShard, final IndexMetaData indexMetaData, final IndicesWarmer.WarmerContext warmerContext, ThreadPool threadPool, final boolean top) {\n IndexWarmersMetaData custom = indexMetaData.custom(IndexWarmersMetaData.TYPE);\n if (custom == null) {\n- return TerminationHandle.NO_WAIT;\n+ return ;\n }\n- final Executor executor = threadPool.executor(executor());\n- final CountDownLatch latch = new CountDownLatch(custom.entries().size());\n for (final IndexWarmersMetaData.Entry entry : custom.entries()) {\n- executor.execute(new Runnable() {\n-\n- @Override\n- public void run() {\n- SearchContext context = null;\n- try {\n- long now = System.nanoTime();\n- ShardSearchRequest request = new ShardSearchLocalRequest(indexShard.shardId(), indexMetaData.numberOfShards(),\n- SearchType.QUERY_THEN_FETCH, entry.source(), entry.types(), entry.requestCache());\n- context = createContext(request, warmerContext.searcher());\n- // if we use sort, we need to do query to sort on it and load relevant field data\n- // if not, we might as well set size=0 (and cache if needed)\n- if (context.sort() == null) {\n- context.size(0);\n- }\n- boolean canCache = indicesQueryCache.canCache(request, context);\n- // early terminate when we can cache, since we can only do proper caching on top level searcher\n- // also, if we can't cache, and its top, we don't need to execute it, since we already did when its not top\n- if (canCache != top) {\n- return;\n- }\n- loadOrExecuteQueryPhase(request, context, 
queryPhase);\n- long took = System.nanoTime() - now;\n- if (indexShard.warmerService().logger().isTraceEnabled()) {\n- indexShard.warmerService().logger().trace(\"warmed [{}], took [{}]\", entry.name(), TimeValue.timeValueNanos(took));\n- }\n- } catch (Throwable t) {\n- indexShard.warmerService().logger().warn(\"warmer [{}] failed\", t, entry.name());\n- } finally {\n- try {\n- if (context != null) {\n- freeContext(context.id());\n- cleanContext(context);\n- }\n- } finally {\n- latch.countDown();\n- }\n- }\n+ SearchContext context = null;\n+ try {\n+ long now = System.nanoTime();\n+ ShardSearchRequest request = new ShardSearchLocalRequest(indexShard.shardId(), indexMetaData.numberOfShards(),\n+ SearchType.QUERY_THEN_FETCH, entry.source(), entry.types(), entry.requestCache());\n+ context = createContext(request, warmerContext.searcher());\n+ // if we use sort, we need to do query to sort on it and load relevant field data\n+ // if not, we might as well set size=0 (and cache if needed)\n+ if (context.sort() == null) {\n+ context.size(0);\n+ }\n+ boolean canCache = indicesQueryCache.canCache(request, context);\n+ // early terminate when we can cache, since we can only do proper caching on top level searcher\n+ // also, if we can't cache, and its top, we don't need to execute it, since we already did when its not top\n+ if (canCache != top) {\n+ return;\n+ }\n+ loadOrExecuteQueryPhase(request, context, queryPhase);\n+ long took = System.nanoTime() - now;\n+ if (indexShard.warmerService().logger().isTraceEnabled()) {\n+ indexShard.warmerService().logger().trace(\"warmed [{}], took [{}]\", entry.name(), TimeValue.timeValueNanos(took));\n+ }\n+ } catch (Throwable t) {\n+ indexShard.warmerService().logger().warn(\"warmer [{}] failed\", t, entry.name());\n+ } finally {\n+ if (context != null) {\n+ freeContext(context.id());\n+ cleanContext(context);\n }\n-\n- });\n- }\n- return new TerminationHandle() {\n- @Override\n- public void awaitTermination() throws InterruptedException {\n- latch.await();\n }\n- };\n+ }\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -21,8 +21,10 @@\n \n import com.google.common.base.Objects;\n import com.google.common.collect.ImmutableMap;\n+import com.google.common.collect.ImmutableMap.Builder;\n import com.google.common.collect.Maps;\n import com.google.common.util.concurrent.MoreExecutors;\n+\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.component.AbstractComponent;\n@@ -60,23 +62,49 @@\n public class ThreadPool extends AbstractComponent {\n \n public static class Names {\n+ // Actual threadpools\n public static final String SAME = \"same\";\n public static final String GENERIC = \"generic\";\n public static final String LISTENER = \"listener\";\n- public static final String GET = \"get\";\n- public static final String INDEX = \"index\";\n- public static final String BULK = \"bulk\";\n- public static final String SEARCH = \"search\";\n- public static final String SUGGEST = \"suggest\";\n- public static final String PERCOLATE = \"percolate\";\n public static final String MANAGEMENT = \"management\";\n- public static final String FLUSH = \"flush\";\n- public static final String REFRESH = \"refresh\";\n- public static final String WARMER = \"warmer\";\n+ public static final String READ = \"read\";\n+ public static final String WRITE = \"write\";\n public static final String SNAPSHOT = \"snapshot\";\n public static final String OPTIMIZE = \"optimize\";\n- public static final String FETCH_SHARD_STARTED = \"fetch_shard_started\";\n- public static final String FETCH_SHARD_STORE = \"fetch_shard_store\";\n+ public static final String FETCH_SHARD = \"fetch_shard\";\n+ // this would ideally be an alias to \"read\" but some search\n+ // requests block on get requests to retrieve eg. indexed scripts\n+ // or geo shapes, so we need to keep get in its own thread pool\n+ // to avoid dead locks\n+ public static final String GET = \"get\";\n+ // not an alias to WRITE in order to prevent heavy bulk requests from\n+ // delaying lightweight create/index/delete operations\n+ public static final String INDEX = \"index\";\n+\n+ // Aliases\n+ public static final String SEARCH = READ;\n+ public static final String SUGGEST = READ;\n+ public static final String PERCOLATE = READ;\n+ public static final String BULK = WRITE;\n+ public static final String FLUSH = WRITE;\n+ public static final String REFRESH = WRITE;\n+ public static final String FETCH_SHARD_STARTED = FETCH_SHARD;\n+ public static final String FETCH_SHARD_STORE = FETCH_SHARD;\n+\n+ private static Map<String, String> DEPRECATED_THREADPOOLS;\n+ static {\n+ Builder<String, String> builder = ImmutableMap.builder();\n+ builder.put(\"search\", SEARCH);\n+ builder.put(\"suggest\", SUGGEST);\n+ builder.put(\"percolate\", PERCOLATE);\n+ builder.put(\"warmer\", REFRESH);\n+ builder.put(\"bulk\", BULK);\n+ builder.put(\"flush\", FLUSH);\n+ builder.put(\"refresh\", REFRESH);\n+ builder.put(\"fetch_shard_started\", FETCH_SHARD_STARTED);\n+ builder.put(\"fetch_shard_store\", FETCH_SHARD_STORE);\n+ DEPRECATED_THREADPOOLS = builder.build();\n+ }\n }\n \n public static final String THREADPOOL_GROUP = \"threadpool.\";\n@@ -110,23 +138,17 @@ public ThreadPool(Settings settings) {\n int halfProcMaxAt10 = Math.min(((availableProcessors + 1) / 2), 10);\n defaultExecutorTypeSettings = ImmutableMap.<String, Settings>builder()\n .put(Names.GENERIC, settingsBuilder().put(\"type\", \"cached\").put(\"keep_alive\", \"30s\").build())\n+ .put(Names.WRITE, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", 
availableProcessors).put(\"queue_size\", 50).build())\n .put(Names.INDEX, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", availableProcessors).put(\"queue_size\", 200).build())\n- .put(Names.BULK, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", availableProcessors).put(\"queue_size\", 50).build())\n+ .put(Names.READ, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", ((availableProcessors * 3) / 2) + 1).put(\"queue_size\", 1000).build())\n .put(Names.GET, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", availableProcessors).put(\"queue_size\", 1000).build())\n- .put(Names.SEARCH, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", ((availableProcessors * 3) / 2) + 1).put(\"queue_size\", 1000).build())\n- .put(Names.SUGGEST, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", availableProcessors).put(\"queue_size\", 1000).build())\n- .put(Names.PERCOLATE, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", availableProcessors).put(\"queue_size\", 1000).build())\n .put(Names.MANAGEMENT, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", 5).build())\n // no queue as this means clients will need to handle rejections on listener queue even if the operation succeeded\n // the assumption here is that the listeners should be very lightweight on the listeners side\n .put(Names.LISTENER, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", halfProcMaxAt10).build())\n- .put(Names.FLUSH, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt5).build())\n- .put(Names.REFRESH, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt10).build())\n- .put(Names.WARMER, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt5).build())\n .put(Names.SNAPSHOT, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt5).build())\n- .put(Names.OPTIMIZE, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", 1).build())\n- .put(Names.FETCH_SHARD_STARTED, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", availableProcessors * 2).build())\n- .put(Names.FETCH_SHARD_STORE, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", availableProcessors * 2).build())\n+ .put(Names.OPTIMIZE, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", 1).build())\n+ .put(Names.FETCH_SHARD, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", availableProcessors * 2).build())\n .build();\n \n Map<String, ExecutorHolder> executors = Maps.newHashMap();\n@@ -136,7 +158,13 @@ public ThreadPool(Settings settings) {\n \n // Building custom thread pools\n for (Map.Entry<String, Settings> entry : groupSettings.entrySet()) {\n- if (executors.containsKey(entry.getKey())) {\n+ final String threadPoolName = entry.getKey();\n+ if (executors.containsKey(threadPoolName)) {\n+ continue;\n+ }\n+ if (Names.DEPRECATED_THREADPOOLS.containsKey(threadPoolName)) {\n+ final String replacement = Names.DEPRECATED_THREADPOOLS.get(threadPoolName);\n+ deprecationLogger.deprecated(\"threadpool [{}] has been merged together with other threadpools into the [{}] threadpool\", threadPoolName, replacement);\n continue;\n }\n executors.put(entry.getKey(), build(entry.getKey(), entry.getValue(), Settings.EMPTY));\n@@ -446,11 +474,17 @@ public void updateSettings(Settings 
settings) {\n \n // Building custom thread pools\n for (Map.Entry<String, Settings> entry : groupSettings.entrySet()) {\n- if (defaultExecutorTypeSettings.containsKey(entry.getKey())) {\n+ final String threadPoolName = entry.getKey();\n+ if (defaultExecutorTypeSettings.containsKey(threadPoolName)) {\n+ continue;\n+ }\n+ if (Names.DEPRECATED_THREADPOOLS.containsKey(threadPoolName)) {\n+ final String replacement = Names.DEPRECATED_THREADPOOLS.get(threadPoolName);\n+ deprecationLogger.deprecated(\"threadpool [{}] has been merged together with other threadpools into the [{}] threadpool\", threadPoolName, replacement);\n continue;\n }\n \n- ExecutorHolder oldExecutorHolder = executors.get(entry.getKey());\n+ ExecutorHolder oldExecutorHolder = executors.get(threadPoolName);\n ExecutorHolder newExecutorHolder = rebuild(entry.getKey(), oldExecutorHolder, entry.getValue(), Settings.EMPTY);\n // Can't introduce new thread pools at runtime, because The oldExecutorHolder variable will be null in the\n // case the settings contains a thread pool not defined in the initial settings in the constructor. The if",
"filename": "core/src/main/java/org/elasticsearch/threadpool/ThreadPool.java",
"status": "modified"
},
{
"diff": "@@ -47,12 +47,10 @@ public class RejectionActionIT extends ESIntegTestCase {\n protected Settings nodeSettings(int nodeOrdinal) {\n return Settings.builder()\n .put(super.nodeSettings(nodeOrdinal))\n- .put(\"threadpool.search.size\", 1)\n- .put(\"threadpool.search.queue_size\", 1)\n- .put(\"threadpool.index.size\", 1)\n- .put(\"threadpool.index.queue_size\", 1)\n- .put(\"threadpool.get.size\", 1)\n- .put(\"threadpool.get.queue_size\", 1)\n+ .put(\"threadpool.read.size\", 1)\n+ .put(\"threadpool.read.queue_size\", 1)\n+ .put(\"threadpool.write.size\", 1)\n+ .put(\"threadpool.write.queue_size\", 1)\n .build();\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/action/RejectionActionIT.java",
"status": "modified"
},
{
"diff": "@@ -39,9 +39,9 @@ public class SearchWithRejectionsIT extends ESIntegTestCase {\n @Override\n public Settings nodeSettings(int nodeOrdinal) {\n return settingsBuilder().put(super.nodeSettings(nodeOrdinal))\n- .put(\"threadpool.search.type\", \"fixed\")\n- .put(\"threadpool.search.size\", 1)\n- .put(\"threadpool.search.queue_size\", 1)\n+ .put(\"threadpool.read.type\", \"fixed\")\n+ .put(\"threadpool.read.size\", 1)\n+ .put(\"threadpool.read.queue_size\", 1)\n .build();\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/search/SearchWithRejectionsIT.java",
"status": "modified"
},
{
"diff": "@@ -407,10 +407,8 @@ private static Settings getRandomNodeSettings(long seed) {\n }\n if (random.nextBoolean()) {\n // change threadpool types to make sure we don't have components that rely on the type of thread pools\n- for (String name : Arrays.asList(ThreadPool.Names.BULK, ThreadPool.Names.FLUSH, ThreadPool.Names.GET,\n- ThreadPool.Names.INDEX, ThreadPool.Names.MANAGEMENT, ThreadPool.Names.OPTIMIZE,\n- ThreadPool.Names.PERCOLATE, ThreadPool.Names.REFRESH, ThreadPool.Names.SEARCH, ThreadPool.Names.SNAPSHOT,\n- ThreadPool.Names.SUGGEST, ThreadPool.Names.WARMER)) {\n+ for (String name : Arrays.asList(ThreadPool.Names.WRITE, ThreadPool.Names.READ, ThreadPool.Names.MANAGEMENT,\n+ ThreadPool.Names.OPTIMIZE, ThreadPool.Names.SNAPSHOT, ThreadPool.Names.LISTENER, ThreadPool.Names.FETCH_SHARD)) {\n if (random.nextBoolean()) {\n final String type = RandomPicks.randomFrom(random, Arrays.asList(\"fixed\", \"cached\", \"scaling\"));\n builder.put(ThreadPool.THREADPOOL_GROUP + name + \".type\", type);",
"filename": "core/src/test/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -63,7 +63,7 @@ public class SimpleThreadPoolIT extends ESIntegTestCase {\n \n @Override\n protected Settings nodeSettings(int nodeOrdinal) {\n- return Settings.settingsBuilder().put(super.nodeSettings(nodeOrdinal)).put(\"threadpool.search.type\", \"cached\").build();\n+ return Settings.settingsBuilder().put(super.nodeSettings(nodeOrdinal)).put(\"threadpool.read.type\", \"cached\").build();\n }\n \n @Test\n@@ -131,7 +131,7 @@ public void testUpdatingThreadPoolSettings() throws Exception {\n ThreadPool threadPool = internalCluster().getDataNodeInstance(ThreadPool.class);\n // Check that settings are changed\n assertThat(((ThreadPoolExecutor) threadPool.executor(Names.SEARCH)).getKeepAliveTime(TimeUnit.MINUTES), equalTo(5L));\n- client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder().put(\"threadpool.search.keep_alive\", \"10m\").build()).execute().actionGet();\n+ client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder().put(\"threadpool.read.keep_alive\", \"10m\").build()).execute().actionGet();\n assertThat(((ThreadPoolExecutor) threadPool.executor(Names.SEARCH)).getKeepAliveTime(TimeUnit.MINUTES), equalTo(10L));\n \n // Make sure that threads continue executing when executor is replaced\n@@ -149,7 +149,7 @@ public void run() {\n }\n }\n });\n- client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder().put(\"threadpool.search.type\", \"fixed\").build()).execute().actionGet();\n+ client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder().put(\"threadpool.read.type\", \"fixed\").build()).execute().actionGet();\n assertThat(threadPool.executor(Names.SEARCH), not(sameInstance(oldExecutor)));\n assertThat(((ThreadPoolExecutor) oldExecutor).isShutdown(), equalTo(true));\n assertThat(((ThreadPoolExecutor) oldExecutor).isTerminating(), equalTo(true));\n@@ -169,7 +169,7 @@ public void run() {\n }\n }\n });\n- client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder().put(\"threadpool.search.type\", \"fixed\").build()).execute().actionGet();\n+ client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder().put(\"threadpool.read.type\", \"fixed\").build()).execute().actionGet();\n barrier.await();\n Thread.sleep(200);\n ",
"filename": "core/src/test/java/org/elasticsearch/threadpool/SimpleThreadPoolIT.java",
"status": "modified"
},
{
"diff": "@@ -96,7 +96,7 @@ public void testThatToXContentWritesOutUnboundedCorrectly() throws Exception {\n public void testThatNegativeSettingAllowsToStart() throws InterruptedException {\n Settings settings = settingsBuilder().put(\"name\", \"index\").put(\"threadpool.index.queue_size\", \"-1\").build();\n ThreadPool threadPool = new ThreadPool(settings);\n- assertThat(threadPool.info(\"index\").getQueueSize(), is(nullValue()));\n+ assertThat(threadPool.info(ThreadPool.Names.INDEX).getQueueSize(), is(nullValue()));\n terminate(threadPool);\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/threadpool/ThreadPoolSerializationTests.java",
"status": "modified"
},
{
"diff": "@@ -52,22 +52,22 @@ private ThreadPool.Info info(ThreadPool threadPool, String name) {\n public void testCachedExecutorType() throws InterruptedException {\n ThreadPool threadPool = new ThreadPool(\n Settings.settingsBuilder()\n- .put(\"threadpool.search.type\", \"cached\")\n+ .put(\"threadpool.read.type\", \"cached\")\n .put(\"name\",\"testCachedExecutorType\").build());\n \n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"cached\"));\n assertThat(info(threadPool, Names.SEARCH).getKeepAlive().minutes(), equalTo(5L));\n assertThat(threadPool.executor(Names.SEARCH), instanceOf(EsThreadPoolExecutor.class));\n \n // Replace with different type\n- threadPool.updateSettings(settingsBuilder().put(\"threadpool.search.type\", \"same\").build());\n+ threadPool.updateSettings(settingsBuilder().put(\"threadpool.read.type\", \"same\").build());\n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"same\"));\n assertThat(threadPool.executor(Names.SEARCH), instanceOf(MoreExecutors.directExecutor().getClass()));\n \n // Replace with different type again\n threadPool.updateSettings(settingsBuilder()\n- .put(\"threadpool.search.type\", \"scaling\")\n- .put(\"threadpool.search.keep_alive\", \"10m\")\n+ .put(\"threadpool.read.type\", \"scaling\")\n+ .put(\"threadpool.read.keep_alive\", \"10m\")\n .build());\n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"scaling\"));\n assertThat(threadPool.executor(Names.SEARCH), instanceOf(EsThreadPoolExecutor.class));\n@@ -77,15 +77,15 @@ public void testCachedExecutorType() throws InterruptedException {\n assertThat(((EsThreadPoolExecutor) threadPool.executor(Names.SEARCH)).getKeepAliveTime(TimeUnit.MINUTES), equalTo(10L));\n \n // Put old type back\n- threadPool.updateSettings(settingsBuilder().put(\"threadpool.search.type\", \"cached\").build());\n+ threadPool.updateSettings(settingsBuilder().put(\"threadpool.read.type\", \"cached\").build());\n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"cached\"));\n // Make sure keep alive value reused\n assertThat(info(threadPool, Names.SEARCH).getKeepAlive().minutes(), equalTo(10L));\n assertThat(threadPool.executor(Names.SEARCH), instanceOf(EsThreadPoolExecutor.class));\n \n // Change keep alive\n Executor oldExecutor = threadPool.executor(Names.SEARCH);\n- threadPool.updateSettings(settingsBuilder().put(\"threadpool.search.keep_alive\", \"1m\").build());\n+ threadPool.updateSettings(settingsBuilder().put(\"threadpool.read.keep_alive\", \"1m\").build());\n // Make sure keep alive value changed\n assertThat(info(threadPool, Names.SEARCH).getKeepAlive().minutes(), equalTo(1L));\n assertThat(((EsThreadPoolExecutor) threadPool.executor(Names.SEARCH)).getKeepAliveTime(TimeUnit.MINUTES), equalTo(1L));\n@@ -94,7 +94,7 @@ public void testCachedExecutorType() throws InterruptedException {\n assertThat(threadPool.executor(Names.SEARCH), sameInstance(oldExecutor));\n \n // Set the same keep alive\n- threadPool.updateSettings(settingsBuilder().put(\"threadpool.search.keep_alive\", \"1m\").build());\n+ threadPool.updateSettings(settingsBuilder().put(\"threadpool.read.keep_alive\", \"1m\").build());\n // Make sure keep alive value didn't change\n assertThat(info(threadPool, Names.SEARCH).getKeepAlive().minutes(), equalTo(1L));\n assertThat(((EsThreadPoolExecutor) threadPool.executor(Names.SEARCH)).getKeepAliveTime(TimeUnit.MINUTES), equalTo(1L));\n@@ -107,17 +107,17 @@ public void testCachedExecutorType() throws InterruptedException {\n @Test\n public void 
testFixedExecutorType() throws InterruptedException {\n ThreadPool threadPool = new ThreadPool(settingsBuilder()\n- .put(\"threadpool.search.type\", \"fixed\")\n+ .put(\"threadpool.read.type\", \"fixed\")\n .put(\"name\",\"testCachedExecutorType\").build());\n \n assertThat(threadPool.executor(Names.SEARCH), instanceOf(EsThreadPoolExecutor.class));\n \n // Replace with different type\n threadPool.updateSettings(settingsBuilder()\n- .put(\"threadpool.search.type\", \"scaling\")\n- .put(\"threadpool.search.keep_alive\", \"10m\")\n- .put(\"threadpool.search.min\", \"2\")\n- .put(\"threadpool.search.size\", \"15\")\n+ .put(\"threadpool.read.type\", \"scaling\")\n+ .put(\"threadpool.read.keep_alive\", \"10m\")\n+ .put(\"threadpool.read.min\", \"2\")\n+ .put(\"threadpool.read.size\", \"15\")\n .build());\n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"scaling\"));\n assertThat(threadPool.executor(Names.SEARCH), instanceOf(EsThreadPoolExecutor.class));\n@@ -131,7 +131,7 @@ public void testFixedExecutorType() throws InterruptedException {\n \n // Put old type back\n threadPool.updateSettings(settingsBuilder()\n- .put(\"threadpool.search.type\", \"fixed\")\n+ .put(\"threadpool.read.type\", \"fixed\")\n .build());\n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"fixed\"));\n // Make sure keep alive value is not used\n@@ -145,7 +145,7 @@ public void testFixedExecutorType() throws InterruptedException {\n \n // Change size\n Executor oldExecutor = threadPool.executor(Names.SEARCH);\n- threadPool.updateSettings(settingsBuilder().put(\"threadpool.search.size\", \"10\").build());\n+ threadPool.updateSettings(settingsBuilder().put(\"threadpool.read.size\", \"10\").build());\n // Make sure size values changed\n assertThat(info(threadPool, Names.SEARCH).getMax(), equalTo(10));\n assertThat(info(threadPool, Names.SEARCH).getMin(), equalTo(10));\n@@ -157,7 +157,7 @@ public void testFixedExecutorType() throws InterruptedException {\n \n // Change queue capacity\n threadPool.updateSettings(settingsBuilder()\n- .put(\"threadpool.search.queue\", \"500\")\n+ .put(\"threadpool.read.queue\", \"500\")\n .build());\n \n terminate(threadPool);\n@@ -167,8 +167,8 @@ public void testFixedExecutorType() throws InterruptedException {\n @Test\n public void testScalingExecutorType() throws InterruptedException {\n ThreadPool threadPool = new ThreadPool(settingsBuilder()\n- .put(\"threadpool.search.type\", \"scaling\")\n- .put(\"threadpool.search.size\", 10)\n+ .put(\"threadpool.read.type\", \"scaling\")\n+ .put(\"threadpool.read.size\", 10)\n .put(\"name\",\"testCachedExecutorType\").build());\n \n assertThat(info(threadPool, Names.SEARCH).getMin(), equalTo(1));\n@@ -180,10 +180,10 @@ public void testScalingExecutorType() throws InterruptedException {\n // Change settings that doesn't require pool replacement\n Executor oldExecutor = threadPool.executor(Names.SEARCH);\n threadPool.updateSettings(settingsBuilder()\n- .put(\"threadpool.search.type\", \"scaling\")\n- .put(\"threadpool.search.keep_alive\", \"10m\")\n- .put(\"threadpool.search.min\", \"2\")\n- .put(\"threadpool.search.size\", \"15\")\n+ .put(\"threadpool.read.type\", \"scaling\")\n+ .put(\"threadpool.read.keep_alive\", \"10m\")\n+ .put(\"threadpool.read.min\", \"2\")\n+ .put(\"threadpool.read.size\", \"15\")\n .build());\n assertThat(info(threadPool, Names.SEARCH).getType(), equalTo(\"scaling\"));\n assertThat(threadPool.executor(Names.SEARCH), instanceOf(EsThreadPoolExecutor.class));\n@@ -202,7 +202,7 @@ public void 
testScalingExecutorType() throws InterruptedException {\n @Test(timeout = 10000)\n public void testShutdownDownNowDoesntBlock() throws Exception {\n ThreadPool threadPool = new ThreadPool(Settings.settingsBuilder()\n- .put(\"threadpool.search.type\", \"cached\")\n+ .put(\"threadpool.read.type\", \"cached\")\n .put(\"name\",\"testCachedExecutorType\").build());\n \n final CountDownLatch latch = new CountDownLatch(1);\n@@ -218,7 +218,7 @@ public void run() {\n }\n }\n });\n- threadPool.updateSettings(settingsBuilder().put(\"threadpool.search.type\", \"fixed\").build());\n+ threadPool.updateSettings(settingsBuilder().put(\"threadpool.read.type\", \"fixed\").build());\n assertThat(threadPool.executor(Names.SEARCH), not(sameInstance(oldExecutor)));\n assertThat(((ThreadPoolExecutor) oldExecutor).isShutdown(), equalTo(true));\n assertThat(((ThreadPoolExecutor) oldExecutor).isTerminating(), equalTo(true));",
"filename": "core/src/test/java/org/elasticsearch/threadpool/UpdateThreadPoolSettingsTests.java",
"status": "modified"
},
{
"diff": "@@ -9,51 +9,28 @@ of discarded.\n \n There are several thread pools, but the important ones include:\n \n-`index`::\n- For index/delete operations. Defaults to `fixed`\n+`write`::\n+ For index/delete/bulk/refresh/flush operations. Defaults to `fixed`\n with a size of `# of available processors`,\n queue_size of `200`.\n \n-`search`::\n- For count/search operations. Defaults to `fixed`\n+`read`::\n+ For count/search/suggest/percolate operations. Defaults to `fixed`\n with a size of `int((# of available_processors * 3) / 2) + 1`,\n queue_size of `1000`.\n \n-`suggest`::\n- For suggest operations. Defaults to `fixed`\n- with a size of `# of available processors`,\n- queue_size of `1000`.\n-\n `get`::\n- For get operations. Defaults to `fixed`\n- with a size of `# of available processors`,\n- queue_size of `1000`.\n-\n-`bulk`::\n- For bulk operations. Defaults to `fixed`\n- with a size of `# of available processors`,\n- queue_size of `50`.\n-\n-`percolate`::\n- For percolate operations. Defaults to `fixed`\n+ For get operations only. Defaults to `fixed`\n with a size of `# of available processors`,\n queue_size of `1000`.\n \n `snapshot`::\n For snapshot/restore operations. Defaults to `scaling` with a\n keep-alive of `5m` and a size of `min(5, (# of available processors)/2)`.\n \n-`warmer`::\n- For segment warm-up operations. Defaults to `scaling` with a\n- keep-alive of `5m` and a size of `min(5, (# of available processors)/2)`.\n-\n-`refresh`::\n- For refresh operations. Defaults to `scaling` with a\n- keep-alive of `5m` and a size of `min(10, (# of available processors)/2)`.\n-\n `listener`::\n Mainly for java client executing of action when listener threaded is set to true.\n- Default size of `(# of available processors)/2`, max at 10.\n+ Default size of `min(10, (# of available processors)/2)`.\n \n Changing a specific thread pool can be done by setting its type and\n specific type parameters, for example, changing the `index` thread pool\n@@ -62,7 +39,7 @@ to have more threads:\n [source,js]\n --------------------------------------------------\n threadpool:\n- index:\n+ write:\n type: fixed\n size: 30\n --------------------------------------------------\n@@ -110,7 +87,7 @@ full, it will abort the request.\n [source,js]\n --------------------------------------------------\n threadpool:\n- index:\n+ write:\n type: fixed\n size: 30\n queue_size: 1000\n@@ -129,7 +106,7 @@ around in the thread pool without it doing any work.\n [source,js]\n --------------------------------------------------\n threadpool:\n- warmer:\n+ snapshot:\n type: scaling\n size: 8\n keep_alive: 2m",
"filename": "docs/reference/modules/threadpool.asciidoc",
"status": "modified"
},
{
"diff": "@@ -7,17 +7,17 @@\n \n - match:\n $body: |\n- / #host ip bulk.active bulk.queue bulk.rejected index.active index.queue index.rejected search.active search.queue search.rejected\n- ^ (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\n)+ $/\n+ / #host ip read.active read.queue read.rejected write.active write.queue write.rejected\n+ ^ (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\n)+ $/\n \n - do:\n cat.thread_pool:\n v: true\n \n - match:\n $body: |\n- /^ host \\s+ ip \\s+ bulk.active \\s+ bulk.queue \\s+ bulk.rejected \\s+ index.active \\s+ index.queue \\s+ index.rejected \\s+ search.active \\s+ search.queue \\s+ search.rejected \\s+ \\n\n- (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\n)+ $/\n+ /^ host \\s+ ip \\s+ read.active \\s+ read.queue \\s+ read.rejected \\s+ write.active \\s+ write.queue \\s+ write.rejected \\s+ \\n\n+ (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\n)+ $/\n \n - do:\n cat.thread_pool:\n@@ -31,141 +31,22 @@\n \n - do:\n cat.thread_pool:\n- h: id,ba,fa,gea,ga,ia,maa,ma,oa,pa\n+ h: id,fsa,gea,ga,maa,oa,ra,sna,wa\n v: true\n full_id: true\n \n - match:\n $body: |\n- /^ id \\s+ ba \\s+ fa \\s+ gea \\s+ ga \\s+ ia \\s+ maa \\s+ oa \\s+ pa \\s+ \\n\n- (\\S+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\n)+ $/\n+ /^ id \\s+ fsa \\s+ gea \\s+ ga \\s+ maa \\s+ oa \\s+ ra \\s+ sna \\s+ wa \\s+ \\n\n+ (\\S+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\n)+ $/\n \n - do:\n cat.thread_pool:\n- h: id,bulk.type,bulk.active,bulk.size,bulk.queue,bulk.queueSize,bulk.rejected,bulk.largest,bulk.completed,bulk.min,bulk.max,bulk.keepAlive\n+ h: id,write.type,write.active,write.size,write.queue,write.queueSize,write.rejected,write.largest,write.completed,write.min,write.max,write.keepAlive\n v: true\n \n - match:\n $body: |\n- /^ id \\s+ bulk.type \\s+ bulk.active \\s+ bulk.size \\s+ bulk.queue \\s+ bulk.queueSize \\s+ bulk.rejected \\s+ bulk.largest \\s+ bulk.completed \\s+ bulk.min \\s+ bulk.max \\s+ bulk.keepAlive \\s+ \\n\n+ /^ id \\s+ write.type \\s+ write.active \\s+ write.size \\s+ write.queue \\s+ write.queueSize \\s+ write.rejected \\s+ write.largest \\s+ write.completed \\s+ write.min \\s+ write.max \\s+ write.keepAlive \\s+ \\n\n (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n \n- - do:\n- cat.thread_pool:\n- h: id,flush.type,flush.active,flush.size,flush.queue,flush.queueSize,flush.rejected,flush.largest,flush.completed,flush.min,flush.max,flush.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ flush.type \\s+ flush.active \\s+ flush.size \\s+ flush.queue \\s+ flush.queueSize \\s+ flush.rejected \\s+ flush.largest \\s+ flush.completed \\s+ flush.min \\s+ flush.max \\s+ flush.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? 
\\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,generic.type,generic.active,generic.size,generic.queue,generic.queueSize,generic.rejected,generic.largest,generic.completed,generic.min,generic.max,generic.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ generic.type \\s+ generic.active \\s+ generic.size \\s+ generic.queue \\s+ generic.queueSize \\s+ generic.rejected \\s+ generic.largest \\s+ generic.completed \\s+ generic.min \\s+ generic.max \\s+ generic.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,get.type,get.active,get.size,get.queue,get.queueSize,get.rejected,get.largest,get.completed,get.min,get.max,get.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ get.type \\s+ get.active \\s+ get.size \\s+ get.queue \\s+ get.queueSize \\s+ get.rejected \\s+ get.largest \\s+ get.completed \\s+ get.min \\s+ get.max \\s+ get.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,index.type,index.active,index.size,index.queue,index.queueSize,index.rejected,index.largest,index.completed,index.min,index.max,index.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ index.type \\s+ index.active \\s+ index.size \\s+ index.queue \\s+ index.queueSize \\s+ index.rejected \\s+ index.largest \\s+ index.completed \\s+ index.min \\s+ index.max \\s+ index.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,management.type,management.active,management.size,management.queue,management.queueSize,management.rejected,management.largest,management.completed,management.min,management.max,management.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ management.type \\s+ management.active \\s+ management.size \\s+ management.queue \\s+ management.queueSize \\s+ management.rejected \\s+ management.largest \\s+ management.completed \\s+ management.min \\s+ management.max \\s+ management.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,optimize.type,optimize.active,optimize.size,optimize.queue,optimize.queueSize,optimize.rejected,optimize.largest,optimize.completed,optimize.min,optimize.max,optimize.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ optimize.type \\s+ optimize.active \\s+ optimize.size \\s+ optimize.queue \\s+ optimize.queueSize \\s+ optimize.rejected \\s+ optimize.largest \\s+ optimize.completed \\s+ optimize.min \\s+ optimize.max \\s+ optimize.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? 
\\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,percolate.type,percolate.active,percolate.size,percolate.queue,percolate.queueSize,percolate.rejected,percolate.largest,percolate.completed,percolate.min,percolate.max,percolate.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ percolate.type \\s+ percolate.active \\s+ percolate.size \\s+ percolate.queue \\s+ percolate.queueSize \\s+ percolate.rejected \\s+ percolate.largest \\s+ percolate.completed \\s+ percolate.min \\s+ percolate.max \\s+ percolate.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,refresh.type,refresh.active,refresh.size,refresh.queue,refresh.queueSize,refresh.rejected,refresh.largest,refresh.completed,refresh.min,refresh.max,refresh.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ refresh.type \\s+ refresh.active \\s+ refresh.size \\s+ refresh.queue \\s+ refresh.queueSize \\s+ refresh.rejected \\s+ refresh.largest \\s+ refresh.completed \\s+ refresh.min \\s+ refresh.max \\s+ refresh.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,search.type,search.active,search.size,search.queue,search.queueSize,search.rejected,search.largest,search.completed,search.min,search.max,search.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ search.type \\s+ search.active \\s+ search.size \\s+ search.queue \\s+ search.queueSize \\s+ search.rejected \\s+ search.largest \\s+ search.completed \\s+ search.min \\s+ search.max \\s+ search.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,snapshot.type,snapshot.active,snapshot.size,snapshot.queue,snapshot.queueSize,snapshot.rejected,snapshot.largest,snapshot.completed,snapshot.min,snapshot.max,snapshot.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ snapshot.type \\s+ snapshot.active \\s+ snapshot.size \\s+ snapshot.queue \\s+ snapshot.queueSize \\s+ snapshot.rejected \\s+ snapshot.largest \\s+ snapshot.completed \\s+ snapshot.min \\s+ snapshot.max \\s+ snapshot.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,suggest.type,suggest.active,suggest.size,suggest.queue,suggest.queueSize,suggest.rejected,suggest.largest,suggest.completed,suggest.min,suggest.max,suggest.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ suggest.type \\s+ suggest.active \\s+ suggest.size \\s+ suggest.queue \\s+ suggest.queueSize \\s+ suggest.rejected \\s+ suggest.largest \\s+ suggest.completed \\s+ suggest.min \\s+ suggest.max \\s+ suggest.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? 
\\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/\n-\n- - do:\n- cat.thread_pool:\n- h: id,warmer.type,warmer.active,warmer.size,warmer.queue,warmer.queueSize,warmer.rejected,warmer.largest,warmer.completed,warmer.min,warmer.max,warmer.keepAlive\n- v: true\n-\n- - match:\n- $body: |\n- /^ id \\s+ warmer.type \\s+ warmer.active \\s+ warmer.size \\s+ warmer.queue \\s+ warmer.queueSize \\s+ warmer.rejected \\s+ warmer.largest \\s+ warmer.completed \\s+ warmer.min \\s+ warmer.max \\s+ warmer.keepAlive \\s+ \\n\n- (\\S+ \\s+ (cached|fixed|scaling)? \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d+ \\s+ \\d+ \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ \\S* \\s+ \\n)+ $/",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cat.thread_pool/10_basic.yaml",
"status": "modified"
}
]
} |
{
"body": "```\nPUT my_index\n{\n \"mappings\": {\n \"my_type\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"murmur3\"\n }\n }\n }\n }\n}\n\nGET my_index/_mapping/*/field/foo?include_defaults\n```\n\nreturn: `{\"index\": \"not_analyzed\"}`\n",
"comments": [],
"number": 12874,
"title": "Murmur3 fields should not be indexed"
} | {
"body": "This move the `murmur3` field to the `mapper-murmur3` plugin and fixes its\ndefaults so that values will not be indexed by default, as the only purpose\nof this field is to speed up `cardinality` aggregations on high-cardinality\nstring fields, which only requires doc values.\n\nI also removed the `rehash` option from the `cardinality` aggregation as it\ndoesn't bring much value (rehashing is cheap) and allowed to remove the\ncoupling between the `cardinality` aggregation and the `murmur3` field.\n\nClose #12874\n",
"number": 12931,
"review_comments": [
{
"body": "I'm currently changing namings within this PR: https://github.com/elastic/elasticsearch/pull/12879\nI think you should use here `mapper-murmur3`\n",
"created_at": "2015-08-17T11:20:09Z"
},
{
"body": "Same as previous, use `Plugin: Mapper: Murmur3`\n",
"created_at": "2015-08-17T11:20:11Z"
},
{
"body": "If you don't mind, I would rather change it after your PR is merged or let you do it if I merge first so that this commit is consistent.\n",
"created_at": "2015-08-17T11:22:06Z"
},
{
"body": "++\n",
"created_at": "2015-08-17T11:39:43Z"
},
{
"body": "muti -> multi\n",
"created_at": "2015-08-17T18:52:25Z"
}
],
"title": "Move the `murmur3` field to a plugin and fix defaults."
} | {
"commits": [
{
"message": "Move the `murmur3` field to a plugin and fix defaults.\n\nThis move the `murmur3` field to the `mapper-murmur3` plugin and fixes its\ndefaults so that values will not be indexed by default, as the only purpose\nof this field is to speed up `cardinality` aggregations on high-cardinality\nstring fields, which only requires doc values.\n\nI also removed the `rehash` option from the `cardinality` aggregation as it\ndoesn't bring much value (rehashing is cheap) and allowed to remove the\ncoupling between the `cardinality` aggregation and the `murmur3` field.\n\nClose #12874"
}
],
"files": [
{
"diff": "@@ -101,8 +101,7 @@ public DocumentMapperParser(@IndexSettings Settings indexSettings, MapperService\n .put(ObjectMapper.NESTED_CONTENT_TYPE, new ObjectMapper.TypeParser())\n .put(TypeParsers.MULTI_FIELD_CONTENT_TYPE, TypeParsers.multiFieldConverterTypeParser)\n .put(CompletionFieldMapper.CONTENT_TYPE, new CompletionFieldMapper.TypeParser())\n- .put(GeoPointFieldMapper.CONTENT_TYPE, new GeoPointFieldMapper.TypeParser())\n- .put(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser());\n+ .put(GeoPointFieldMapper.CONTENT_TYPE, new GeoPointFieldMapper.TypeParser());\n \n if (ShapesAvailability.JTS_AVAILABLE) {\n typeParsersBuilder.put(GeoShapeFieldMapper.CONTENT_TYPE, new GeoShapeFieldMapper.TypeParser());",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java",
"status": "modified"
},
{
"diff": "@@ -84,10 +84,6 @@ public static LongFieldMapper.Builder longField(String name) {\n return new LongFieldMapper.Builder(name);\n }\n \n- public static Murmur3FieldMapper.Builder murmur3Field(String name) {\n- return new Murmur3FieldMapper.Builder(name);\n- }\n-\n public static FloatFieldMapper.Builder floatField(String name) {\n return new FloatFieldMapper.Builder(name);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java",
"status": "modified"
},
{
"diff": "@@ -86,6 +86,7 @@ public enum OutputMode {\n \"elasticsearch-delete-by-query\",\n \"elasticsearch-lang-javascript\",\n \"elasticsearch-lang-python\",\n+ \"elasticsearch-mapper-murmur3\",\n \"elasticsearch-mapper-size\"\n ).build();\n ",
"filename": "core/src/main/java/org/elasticsearch/plugins/PluginManager.java",
"status": "modified"
},
{
"diff": "@@ -56,7 +56,6 @@\n public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue {\n \n private final int precision;\n- private final boolean rehash;\n private final ValuesSource valuesSource;\n \n // Expensive to initialize, so we only initialize it when we have an actual value source\n@@ -66,11 +65,10 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue\n private Collector collector;\n private ValueFormatter formatter;\n \n- public CardinalityAggregator(String name, ValuesSource valuesSource, boolean rehash, int precision, ValueFormatter formatter,\n+ public CardinalityAggregator(String name, ValuesSource valuesSource, int precision, ValueFormatter formatter,\n AggregationContext context, Aggregator parent, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) throws IOException {\n super(name, context, parent, pipelineAggregators, metaData);\n this.valuesSource = valuesSource;\n- this.rehash = rehash;\n this.precision = precision;\n this.counts = valuesSource == null ? null : new HyperLogLogPlusPlus(precision, context.bigArrays(), 1);\n this.formatter = formatter;\n@@ -85,13 +83,6 @@ private Collector pickCollector(LeafReaderContext ctx) throws IOException {\n if (valuesSource == null) {\n return new EmptyCollector();\n }\n- // if rehash is false then the value source is either already hashed, or the user explicitly\n- // requested not to hash the values (perhaps they already hashed the values themselves before indexing the doc)\n- // so we can just work with the original value source as is\n- if (!rehash) {\n- MurmurHash3Values hashValues = MurmurHash3Values.cast(((ValuesSource.Numeric) valuesSource).longValues(ctx));\n- return new DirectCollector(counts, hashValues);\n- }\n \n if (valuesSource instanceof ValuesSource.Numeric) {\n ValuesSource.Numeric source = (ValuesSource.Numeric) valuesSource;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.aggregations.metrics.cardinality;\n \n-import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -35,12 +34,10 @@\n final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory<ValuesSource> {\n \n private final long precisionThreshold;\n- private final boolean rehash;\n \n- CardinalityAggregatorFactory(String name, ValuesSourceConfig config, long precisionThreshold, boolean rehash) {\n+ CardinalityAggregatorFactory(String name, ValuesSourceConfig config, long precisionThreshold) {\n super(name, InternalCardinality.TYPE.name(), config);\n this.precisionThreshold = precisionThreshold;\n- this.rehash = rehash;\n }\n \n private int precision(Aggregator parent) {\n@@ -50,16 +47,13 @@ private int precision(Aggregator parent) {\n @Override\n protected Aggregator createUnmapped(AggregationContext context, Aggregator parent, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData)\n throws IOException {\n- return new CardinalityAggregator(name, null, true, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData);\n+ return new CardinalityAggregator(name, null, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData);\n }\n \n @Override\n protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext context, Aggregator parent,\n boolean collectsFromSingleBucket, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) throws IOException {\n- if (!(valuesSource instanceof ValuesSource.Numeric) && !rehash) {\n- throw new AggregationExecutionException(\"Turning off rehashing for cardinality aggregation [\" + name + \"] on non-numeric values in not allowed\");\n- }\n- return new CardinalityAggregator(name, valuesSource, rehash, precision(parent), config.formatter(), context, parent, pipelineAggregators,\n+ return new CardinalityAggregator(name, valuesSource, precision(parent), config.formatter(), context, parent, pipelineAggregators,\n metaData);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java",
"status": "modified"
},
{
"diff": "@@ -21,11 +21,9 @@\n \n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.mapper.core.Murmur3FieldMapper;\n import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n-import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n import org.elasticsearch.search.aggregations.support.ValuesSourceParser;\n import org.elasticsearch.search.internal.SearchContext;\n \n@@ -35,6 +33,7 @@\n public class CardinalityParser implements Aggregator.Parser {\n \n private static final ParseField PRECISION_THRESHOLD = new ParseField(\"precision_threshold\");\n+ private static final ParseField REHASH = new ParseField(\"rehash\").withAllDeprecated(\"no replacement - values will always be rehashed\");\n \n @Override\n public String type() {\n@@ -44,10 +43,9 @@ public String type() {\n @Override\n public AggregatorFactory parse(String name, XContentParser parser, SearchContext context) throws IOException {\n \n- ValuesSourceParser vsParser = ValuesSourceParser.any(name, InternalCardinality.TYPE, context).formattable(false).build();\n+ ValuesSourceParser<?> vsParser = ValuesSourceParser.any(name, InternalCardinality.TYPE, context).formattable(false).build();\n \n long precisionThreshold = -1;\n- Boolean rehash = null;\n \n XContentParser.Token token;\n String currentFieldName = null;\n@@ -57,8 +55,8 @@ public AggregatorFactory parse(String name, XContentParser parser, SearchContext\n } else if (vsParser.token(currentFieldName, token, parser)) {\n continue;\n } else if (token.isValue()) {\n- if (\"rehash\".equals(currentFieldName)) {\n- rehash = parser.booleanValue();\n+ if (context.parseFieldMatcher().match(currentFieldName, REHASH)) {\n+ // ignore\n } else if (context.parseFieldMatcher().match(currentFieldName, PRECISION_THRESHOLD)) {\n precisionThreshold = parser.longValue();\n } else {\n@@ -70,15 +68,7 @@ public AggregatorFactory parse(String name, XContentParser parser, SearchContext\n }\n }\n \n- ValuesSourceConfig<?> config = vsParser.config();\n-\n- if (rehash == null && config.fieldContext() != null && config.fieldContext().fieldType() instanceof Murmur3FieldMapper.Murmur3FieldType) {\n- rehash = false;\n- } else if (rehash == null) {\n- rehash = true;\n- }\n-\n- return new CardinalityAggregatorFactory(name, config, precisionThreshold, rehash);\n+ return new CardinalityAggregatorFactory(name, vsParser.config(), precisionThreshold);\n \n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,7 @@ OFFICIAL PLUGINS\n - elasticsearch-delete-by-query\n - elasticsearch-lang-javascript\n - elasticsearch-lang-python\n+ - elasticsearch-mapper-murmur3\n - elasticsearch-mapper-size\n \n ",
"filename": "core/src/main/resources/org/elasticsearch/plugins/plugin-install.help",
"status": "modified"
},
{
"diff": "@@ -1116,7 +1116,7 @@ void indexSingleDocumentWithStringFieldsGeneratedFromText(boolean stored, boolea\n @Test\n public void testGeneratedNumberFieldsUnstored() throws IOException {\n indexSingleDocumentWithNumericFieldsGeneratedFromText(false, randomBoolean());\n- String[] fieldsList = {\"token_count\", \"text.token_count\", \"murmur\", \"text.murmur\"};\n+ String[] fieldsList = {\"token_count\", \"text.token_count\"};\n // before refresh - document is only in translog\n assertGetFieldsAlwaysNull(indexOrAlias(), \"doc\", \"1\", fieldsList);\n refresh();\n@@ -1130,7 +1130,7 @@ public void testGeneratedNumberFieldsUnstored() throws IOException {\n @Test\n public void testGeneratedNumberFieldsStored() throws IOException {\n indexSingleDocumentWithNumericFieldsGeneratedFromText(true, randomBoolean());\n- String[] fieldsList = {\"token_count\", \"text.token_count\", \"murmur\", \"text.murmur\"};\n+ String[] fieldsList = {\"token_count\", \"text.token_count\"};\n // before refresh - document is only in translog\n assertGetFieldsNull(indexOrAlias(), \"doc\", \"1\", fieldsList);\n assertGetFieldsException(indexOrAlias(), \"doc\", \"1\", fieldsList);\n@@ -1159,21 +1159,13 @@ void indexSingleDocumentWithNumericFieldsGeneratedFromText(boolean stored, boole\n \" \\\"analyzer\\\": \\\"standard\\\",\\n\" +\n \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n \" },\\n\" +\n- \" \\\"murmur\\\": {\\n\" +\n- \" \\\"type\\\": \\\"murmur3\\\",\\n\" +\n- \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n- \" },\\n\" +\n \" \\\"text\\\": {\\n\" +\n \" \\\"type\\\": \\\"string\\\",\\n\" +\n \" \\\"fields\\\": {\\n\" +\n \" \\\"token_count\\\": {\\n\" +\n \" \\\"type\\\": \\\"token_count\\\",\\n\" +\n \" \\\"analyzer\\\": \\\"standard\\\",\\n\" +\n \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n- \" },\\n\" +\n- \" \\\"murmur\\\": {\\n\" +\n- \" \\\"type\\\": \\\"murmur3\\\",\\n\" +\n- \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n \" }\\n\" +\n \" }\\n\" +\n \" }\" +\n@@ -1185,7 +1177,6 @@ void indexSingleDocumentWithNumericFieldsGeneratedFromText(boolean stored, boole\n assertAcked(prepareCreate(\"test\").addAlias(new Alias(\"alias\")).setSource(createIndexSource));\n ensureGreen();\n String doc = \"{\\n\" +\n- \" \\\"murmur\\\": \\\"Some value that can be hashed\\\",\\n\" +\n \" \\\"token_count\\\": \\\"A text with five words.\\\",\\n\" +\n \" \\\"text\\\": \\\"A text with five words.\\\"\\n\" +\n \"}\\n\";",
"filename": "core/src/test/java/org/elasticsearch/get/GetActionIT.java",
"status": "modified"
},
{
"diff": "@@ -550,6 +550,7 @@ public void testOfficialPluginName_ThrowsException() throws IOException {\n PluginManager.checkForOfficialPlugins(\"elasticsearch-delete-by-query\");\n PluginManager.checkForOfficialPlugins(\"elasticsearch-lang-javascript\");\n PluginManager.checkForOfficialPlugins(\"elasticsearch-lang-python\");\n+ PluginManager.checkForOfficialPlugins(\"elasticsearch-mapper-murmur3\");\n \n try {\n PluginManager.checkForOfficialPlugins(\"elasticsearch-mapper-attachment\");",
"filename": "core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java",
"status": "modified"
},
{
"diff": "@@ -61,54 +61,23 @@ public void setupSuiteScopeCluster() throws Exception {\n jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n .startObject(\"str_value\")\n .field(\"type\", \"string\")\n- .startObject(\"fields\")\n- .startObject(\"hash\")\n- .field(\"type\", \"murmur3\")\n- .endObject()\n- .endObject()\n .endObject()\n .startObject(\"str_values\")\n .field(\"type\", \"string\")\n- .startObject(\"fields\")\n- .startObject(\"hash\")\n- .field(\"type\", \"murmur3\")\n- .endObject()\n- .endObject()\n .endObject()\n .startObject(\"l_value\")\n .field(\"type\", \"long\")\n- .startObject(\"fields\")\n- .startObject(\"hash\")\n- .field(\"type\", \"murmur3\")\n- .endObject()\n- .endObject()\n .endObject()\n .startObject(\"l_values\")\n .field(\"type\", \"long\")\n- .startObject(\"fields\")\n- .startObject(\"hash\")\n- .field(\"type\", \"murmur3\")\n- .endObject()\n- .endObject()\n .endObject()\n- .startObject(\"d_value\")\n- .field(\"type\", \"double\")\n- .startObject(\"fields\")\n- .startObject(\"hash\")\n- .field(\"type\", \"murmur3\")\n- .endObject()\n- .endObject()\n- .endObject()\n- .startObject(\"d_values\")\n- .field(\"type\", \"double\")\n- .startObject(\"fields\")\n- .startObject(\"hash\")\n- .field(\"type\", \"murmur3\")\n- .endObject()\n- .endObject()\n- .endObject()\n- .endObject()\n- .endObject().endObject()).execute().actionGet();\n+ .startObject(\"d_value\")\n+ .field(\"type\", \"double\")\n+ .endObject()\n+ .startObject(\"d_values\")\n+ .field(\"type\", \"double\")\n+ .endObject()\n+ .endObject().endObject().endObject()).execute().actionGet();\n \n numDocs = randomIntBetween(2, 100);\n precisionThreshold = randomIntBetween(0, 1 << randomInt(20));\n@@ -145,12 +114,12 @@ private void assertCount(Cardinality count, long value) {\n assertThat(count.getValue(), greaterThan(0L));\n }\n }\n- private String singleNumericField(boolean hash) {\n- return (randomBoolean() ? \"l_value\" : \"d_value\") + (hash ? \".hash\" : \"\");\n+ private String singleNumericField() {\n+ return randomBoolean() ? \"l_value\" : \"d_value\";\n }\n \n private String multiNumericField(boolean hash) {\n- return (randomBoolean() ? \"l_values\" : \"d_values\") + (hash ? \".hash\" : \"\");\n+ return randomBoolean() ? 
\"l_values\" : \"d_values\";\n }\n \n @Test\n@@ -195,24 +164,10 @@ public void singleValuedString() throws Exception {\n assertCount(count, numDocs);\n }\n \n- @Test\n- public void singleValuedStringHashed() throws Exception {\n- SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n- .addAggregation(cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(\"str_value.hash\"))\n- .execute().actionGet();\n-\n- assertSearchResponse(response);\n-\n- Cardinality count = response.getAggregations().get(\"cardinality\");\n- assertThat(count, notNullValue());\n- assertThat(count.getName(), equalTo(\"cardinality\"));\n- assertCount(count, numDocs);\n- }\n-\n @Test\n public void singleValuedNumeric() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n- .addAggregation(cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField(false)))\n+ .addAggregation(cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField()))\n .execute().actionGet();\n \n assertSearchResponse(response);\n@@ -229,7 +184,7 @@ public void singleValuedNumeric_getProperty() throws Exception {\n SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n .addAggregation(\n global(\"global\").subAggregation(\n- cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField(false))))\n+ cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField())))\n .execute().actionGet();\n \n assertSearchResponse(searchResponse);\n@@ -254,7 +209,7 @@ public void singleValuedNumeric_getProperty() throws Exception {\n @Test\n public void singleValuedNumericHashed() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n- .addAggregation(cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField(true)))\n+ .addAggregation(cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField()))\n .execute().actionGet();\n \n assertSearchResponse(response);\n@@ -279,20 +234,6 @@ public void multiValuedString() throws Exception {\n assertCount(count, numDocs * 2);\n }\n \n- @Test\n- public void multiValuedStringHashed() throws Exception {\n- SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n- .addAggregation(cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(\"str_values.hash\"))\n- .execute().actionGet();\n-\n- assertSearchResponse(response);\n-\n- Cardinality count = response.getAggregations().get(\"cardinality\");\n- assertThat(count, notNullValue());\n- assertThat(count.getName(), equalTo(\"cardinality\"));\n- assertCount(count, numDocs * 2);\n- }\n-\n @Test\n public void multiValuedNumeric() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -356,7 +297,7 @@ public void singleValuedNumericScript() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n .addAggregation(\n cardinality(\"cardinality\").precisionThreshold(precisionThreshold).script(\n- new Script(\"doc['\" + singleNumericField(false) + \"'].value\")))\n+ new Script(\"doc['\" + singleNumericField() + \"'].value\")))\n .execute().actionGet();\n \n assertSearchResponse(response);\n@@ -417,7 +358,7 @@ public void multiValuedStringValueScript() throws Exception {\n public void 
singleValuedNumericValueScript() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n .addAggregation(\n- cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField(false))\n+ cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(singleNumericField())\n .script(new Script(\"_value\")))\n .execute().actionGet();\n \n@@ -464,23 +405,4 @@ public void asSubAgg() throws Exception {\n }\n }\n \n- @Test\n- public void asSubAggHashed() throws Exception {\n- SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n- .addAggregation(terms(\"terms\").field(\"str_value\")\n- .collectMode(randomFrom(SubAggCollectionMode.values()))\n- .subAggregation(cardinality(\"cardinality\").precisionThreshold(precisionThreshold).field(\"str_values.hash\")))\n- .execute().actionGet();\n-\n- assertSearchResponse(response);\n-\n- Terms terms = response.getAggregations().get(\"terms\");\n- for (Terms.Bucket bucket : terms.getBuckets()) {\n- Cardinality count = bucket.getAggregations().get(\"cardinality\");\n- assertThat(count, notNullValue());\n- assertThat(count.getName(), equalTo(\"cardinality\"));\n- assertCount(count, 2);\n- }\n- }\n-\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,101 @@\n+[[mapper-murmur3]]\n+=== Mapper Murmur3 Plugin\n+\n+The mapper-murmur3 plugin provides the ability to compute hash of field values\n+at index-time and store them in the index. This can sometimes be helpful when\n+running cardinality aggregations on high-cardinality and large string fields.\n+\n+[[mapper-murmur3-install]]\n+[float]\n+==== Installation\n+\n+This plugin can be installed using the plugin manager:\n+\n+[source,sh]\n+----------------------------------------------------------------\n+sudo bin/plugin install mapper-murmur3\n+----------------------------------------------------------------\n+\n+The plugin must be installed on every node in the cluster, and each node must\n+be restarted after installation.\n+\n+[[mapper-murmur3-remove]]\n+[float]\n+==== Removal\n+\n+The plugin can be removed with the following command:\n+\n+[source,sh]\n+----------------------------------------------------------------\n+sudo bin/plugin remove mapper-murmur3\n+----------------------------------------------------------------\n+\n+The node must be stopped before removing the plugin.\n+\n+[[mapper-murmur3-usage]]\n+==== Using the `murmur3` field\n+\n+The `murmur3` is typically used within a multi-field, so that both the original\n+value and its hash are stored in the index:\n+\n+[source,js]\n+--------------------------\n+PUT my_index\n+{\n+ \"mappings\": {\n+ \"my_type\": {\n+ \"properties\": {\n+ \"my_field\": {\n+ \"type\": \"string\",\n+ \"fields\": {\n+ \"hash\": {\n+ \"type\": \"murmur3\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+--------------------------\n+// AUTOSENSE\n+\n+Such a mapping would allow to refer to `my_field.hash` in order to get hashes\n+of the values of the `my_field` field. This is only useful in order to run\n+`cardinality` aggregations:\n+\n+[source,js]\n+--------------------------\n+# Example documents\n+PUT my_index/my_type/1\n+{\n+ \"my_field\": \"This is a document\"\n+}\n+\n+PUT my_index/my_type/2\n+{\n+ \"my_field\": \"This is another document\"\n+}\n+\n+GET my_index/_search\n+{\n+ \"aggs\": {\n+ \"my_field_cardinality\": {\n+ \"cardinality\": {\n+ \"field\": \"my_field.hash\" <1>\n+ }\n+ }\n+ }\n+}\n+--------------------------\n+// AUTOSENSE\n+\n+<1> Counting unique values on the `my_field.hash` field\n+\n+Running a `cardinality` aggregation on the `my_field` field directly would\n+yield the same result, however using `my_field.hash` instead might result in\n+a speed-up if the field has a high-cardinality. On the other hand, it is\n+discouraged to use the `murmur3` field on numeric fields and string fields\n+that are not almost unique as the use of a `murmur3` field is unlikely to\n+bring significant speed-ups, while increasing the amount of disk space required\n+to store the index.",
"filename": "docs/plugins/mapper-murmur3.asciidoc",
"status": "added"
},
{
"diff": "@@ -14,5 +14,10 @@ The mapper-size plugin provides the `_size` meta field which, when enabled,\n indexes the size in bytes of the original\n {ref}/mapping-source-field.html[`_source`] field.\n \n-include::mapper-size.asciidoc[]\n+<<mapper-murmur3>>::\n+\n+The mapper-murmur3 plugin allows hashes to be computed at index-time and stored\n+in the index for later use with the `cardinality` aggregation.\n \n+include::mapper-size.asciidoc[]\n+include::mapper-murmur3.asciidoc[]",
"filename": "docs/plugins/mapper.asciidoc",
"status": "modified"
},
{
"diff": "@@ -23,9 +23,9 @@ match a query:\n \n ==== Precision control\n \n-This aggregation also supports the `precision_threshold` and `rehash` options:\n+This aggregation also supports the `precision_threshold` option:\n \n-experimental[The `precision_threshold` and `rehash` options are specific to the current internal implementation of the `cardinality` agg, which may change in the future]\n+experimental[The `precision_threshold` option is specific to the current internal implementation of the `cardinality` agg, which may change in the future]\n \n [source,js]\n --------------------------------------------------\n@@ -34,8 +34,7 @@ experimental[The `precision_threshold` and `rehash` options are specific to the\n \"author_count\" : {\n \"cardinality\" : {\n \"field\" : \"author_hash\",\n- \"precision_threshold\": 100, <1>\n- \"rehash\": false <2>\n+ \"precision_threshold\": 100 <1>\n }\n }\n }\n@@ -49,11 +48,6 @@ supported value is 40000, thresholds above this number will have the same\n effect as a threshold of 40000.\n Default value depends on the number of parent aggregations that multiple\n create buckets (such as terms or histograms).\n-<2> If you computed a hash on client-side, stored it into your documents and want\n-Elasticsearch to use them to compute counts using this hash function without\n-rehashing values, it is possible to specify `rehash: false`. Default value is\n-`true`. Please note that the hash must be indexed as a long when `rehash` is\n-false.\n \n ==== Counts are approximate\n \n@@ -86,47 +80,11 @@ counting millions of items.\n \n ==== Pre-computed hashes\n \n-If you don't want Elasticsearch to re-compute hashes on every run of this\n-aggregation, it is possible to use pre-computed hashes, either by computing a\n-hash on client-side, indexing it and specifying `rehash: false`, or by using\n-the special `murmur3` field mapper, typically in the context of a `multi-field`\n-in the mapping:\n-\n-[source,js]\n---------------------------------------------------\n-{\n- \"author\": {\n- \"type\": \"string\",\n- \"fields\": {\n- \"hash\": {\n- \"type\": \"murmur3\"\n- }\n- }\n- }\n-}\n---------------------------------------------------\n-\n-With such a mapping, Elasticsearch is going to compute hashes of the `author`\n-field at indexing time and store them in the `author.hash` field. This\n-way, unique counts can be computed using the cardinality aggregation by only\n-loading the hashes into memory, not the values of the `author` field, and\n-without computing hashes on the fly:\n-\n-[source,js]\n---------------------------------------------------\n-{\n- \"aggs\" : {\n- \"author_count\" : {\n- \"cardinality\" : {\n- \"field\" : \"author.hash\"\n- }\n- }\n- }\n-}\n---------------------------------------------------\n-\n-NOTE: `rehash` is automatically set to `false` when computing unique counts on\n-a `murmur3` field.\n+On string fields that have a high cardinality, it might be faster to store the\n+hash of your field values in your index and then run the cardinality aggregation\n+on this field. This can either be done by providing hash values from client-side\n+or by letting elasticsearch compute hash values for you by using the \n+{plugins}/mapper-size.html[`mapper-murmur3`] plugin.\n \n NOTE: Pre-computing hashes is usually only useful on very large and/or\n high-cardinality fields as it saves CPU and memory. However, on numeric",
"filename": "docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@ document:\n <<search-suggesters-completion,Completion datatype>>::\n `completion` to provide auto-complete suggestions\n <<token-count>>:: `token_count` to count the number of tokens in a string\n+{plugins}/mapper-size.html[`mapper-murmur3`]:: `murmur3` to compute hashes of values at index-time and store them in the index\n \n Attachment datatype::\n ",
"filename": "docs/reference/mapping/types.asciidoc",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,16 @@ can install the plugin with:\n The `_shutdown` API has been removed without a replacement. Nodes should be\n managed via the operating system and the provided start/stop scripts.\n \n+==== `murmur3` is now a plugin\n+\n+The `murmur3` field, which indexes hashes of the field values, has been moved\n+out of core and is available as a plugin. It can be installed as:\n+\n+[source,sh]\n+------------------\n+./bin/plugin install mapper-murmur3\n+------------------\n+\n ==== `_size` is now a plugin\n \n The `_size` meta-data field, which indexes the size in bytes of the original",
"filename": "docs/reference/migration/migrate_2_0/removals.asciidoc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1 @@\n+This plugin has no third party dependencies",
"filename": "plugins/mapper-murmur3/licenses/no_deps.txt",
"status": "added"
},
{
"diff": "@@ -0,0 +1,43 @@\n+<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+<!-- Licensed to Elasticsearch under one or more contributor\n+license agreements. See the NOTICE file distributed with this work for additional\n+information regarding copyright ownership. ElasticSearch licenses this file to you\n+under the Apache License, Version 2.0 (the \"License\"); you may not use this\n+file except in compliance with the License. You may obtain a copy of the\n+License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by\n+applicable law or agreed to in writing, software distributed under the License\n+is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+KIND, either express or implied. See the License for the specific language\n+governing permissions and limitations under the License. -->\n+\n+<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n+ xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n+ xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n+ <modelVersion>4.0.0</modelVersion>\n+\n+ <parent>\n+ <groupId>org.elasticsearch.plugin</groupId>\n+ <artifactId>elasticsearch-plugin</artifactId>\n+ <version>2.1.0-SNAPSHOT</version>\n+ </parent>\n+\n+ <artifactId>elasticsearch-mapper-murmur3</artifactId>\n+ <name>Elasticsearch Mapper Murmur3 plugin</name>\n+ <description>The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index.</description>\n+\n+ <properties>\n+ <elasticsearch.plugin.classname>org.elasticsearch.plugin.mapper.MapperMurmur3Plugin</elasticsearch.plugin.classname>\n+ <tests.rest.suite>mapper_murmur3</tests.rest.suite>\n+ <tests.rest.load_packaged>false</tests.rest.load_packaged>\n+ </properties>\n+\n+ <build>\n+ <plugins>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-assembly-plugin</artifactId>\n+ </plugin>\n+ </plugins>\n+ </build>\n+\n+</project>",
"filename": "plugins/mapper-murmur3/pom.xml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,65 @@\n+# Integration tests for Mapper Murmur3 components\n+#\n+\n+---\n+\"Mapper Murmur3\":\n+\n+ - do:\n+ indices.create:\n+ index: test\n+ body:\n+ mappings:\n+ type1: { \"properties\": { \"foo\": { \"type\": \"string\", \"fields\": { \"hash\": { \"type\": \"murmur3\" } } } } }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: type1\n+ id: 0\n+ body: { \"foo\": null }\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ search:\n+ body: { \"aggs\": { \"foo_count\": { \"cardinality\": { \"field\": \"foo.hash\" } } } }\n+\n+ - match: { aggregations.foo_count.value: 0 }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: type1\n+ id: 1\n+ body: { \"foo\": \"bar\" }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: type1\n+ id: 2\n+ body: { \"foo\": \"baz\" }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: type1\n+ id: 3\n+ body: { \"foo\": \"quux\" }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: type1\n+ id: 4\n+ body: { \"foo\": \"bar\" }\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ search:\n+ body: { \"aggs\": { \"foo_count\": { \"cardinality\": { \"field\": \"foo.hash\" } } } }\n+\n+ - match: { aggregations.foo_count.value: 3 }",
"filename": "plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,36 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.murmur3;\n+\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.AbstractIndexComponent;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.mapper.MapperService;\n+\n+public class RegisterMurmur3FieldMapper extends AbstractIndexComponent {\n+\n+ @Inject\n+ public RegisterMurmur3FieldMapper(Index index, Settings indexSettings, MapperService mapperService) {\n+ super(index, indexSettings);\n+ mapperService.documentMapperParser().putTypeParser(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser());\n+ }\n+\n+}",
"filename": "plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,31 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.plugin.mapper;\n+\n+import org.elasticsearch.common.inject.AbstractModule;\n+import org.elasticsearch.index.mapper.murmur3.RegisterMurmur3FieldMapper;\n+\n+public class MapperMurmur3IndexModule extends AbstractModule {\n+\n+ @Override\n+ protected void configure() {\n+ bind(RegisterMurmur3FieldMapper.class).asEagerSingleton();\n+ }\n+}",
"filename": "plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,45 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.plugin.mapper;\n+\n+import org.elasticsearch.common.inject.Module;\n+import org.elasticsearch.plugins.AbstractPlugin;\n+\n+import java.util.Collection;\n+import java.util.Collections;\n+\n+public class MapperMurmur3Plugin extends AbstractPlugin {\n+\n+ @Override\n+ public String name() {\n+ return \"mapper-murmur3\";\n+ }\n+\n+ @Override\n+ public String description() {\n+ return \"A mapper that allows to precompute murmur3 hashes of values at index-time and store them in the index\";\n+ }\n+\n+ @Override\n+ public Collection<Class<? extends Module>> indexModules() {\n+ return Collections.<Class<? extends Module>>singleton(MapperMurmur3IndexModule.class);\n+ }\n+\n+}",
"filename": "plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,42 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.murmur3;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Name;\n+import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;\n+\n+import org.elasticsearch.test.rest.ESRestTestCase;\n+import org.elasticsearch.test.rest.RestTestCandidate;\n+import org.elasticsearch.test.rest.parser.RestTestParseException;\n+\n+import java.io.IOException;\n+\n+public class MapperMurmur3RestIT extends ESRestTestCase {\n+\n+ public MapperMurmur3RestIT(@Name(\"yaml\") RestTestCandidate testCandidate) {\n+ super(testCandidate);\n+ }\n+\n+ @ParametersFactory\n+ public static Iterable<Object[]> parameters() throws IOException, RestTestParseException {\n+ return createParameters(0, 1);\n+ }\n+}\n+",
"filename": "plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java",
"status": "added"
},
{
"diff": "@@ -436,6 +436,7 @@\n <module>delete-by-query</module>\n <module>lang-python</module>\n <module>lang-javascript</module>\n+ <module>mapper-murmur3</module>\n <module>mapper-size</module>\n <module>jvm-example</module>\n <module>site-example</module>",
"filename": "plugins/pom.xml",
"status": "modified"
},
{
"diff": "@@ -333,6 +333,14 @@\n <overWrite>true</overWrite>\n </artifactItem>\n \n+ <artifactItem>\n+ <groupId>org.elasticsearch.plugin</groupId>\n+ <artifactId>elasticsearch-mapper-murmur3</artifactId>\n+ <version>${elasticsearch.version}</version>\n+ <type>zip</type>\n+ <overWrite>true</overWrite>\n+ </artifactItem>\n+\n <artifactItem>\n <groupId>org.elasticsearch.plugin</groupId>\n <artifactId>elasticsearch-mapper-size</artifactId>",
"filename": "qa/smoke-test-plugins/pom.xml",
"status": "modified"
}
]
} |
{
"body": "As of https://github.com/elastic/elasticsearch/pull/12054, there is no longer a `network` section in the nodes info and stats requests. The following should no longer work:\n\n```\nGET _nodes/network\nGET _nodes/stats/network\n```\n",
"comments": [
{
"body": "Here's a quick fix. \n",
"created_at": "2015-08-17T03:25:27Z"
}
],
"number": 12889,
"title": "Remove the `network` option from nodes info/stats"
} | {
"body": "Closes #12889\n",
"number": 12922,
"review_comments": [],
"title": "Refactor, remove _node/network and _node/stats/network. "
} | {
"commits": [
{
"message": "There is no longer a network section in the nodes info and stats\nrequests. Remove _node/network and _node/stats/network\n\ncloses #12889"
}
],
"files": [
{
"diff": "@@ -35,7 +35,6 @@ public class NodesInfoRequest extends BaseNodesRequest<NodesInfoRequest> {\n private boolean process = true;\n private boolean jvm = true;\n private boolean threadPool = true;\n- private boolean network = true;\n private boolean transport = true;\n private boolean http = true;\n private boolean plugins = true;\n@@ -60,7 +59,6 @@ public NodesInfoRequest clear() {\n process = false;\n jvm = false;\n threadPool = false;\n- network = false;\n transport = false;\n http = false;\n plugins = false;\n@@ -76,7 +74,6 @@ public NodesInfoRequest all() {\n process = true;\n jvm = true;\n threadPool = true;\n- network = true;\n transport = true;\n http = true;\n plugins = true;\n@@ -158,21 +155,6 @@ public NodesInfoRequest threadPool(boolean threadPool) {\n return this;\n }\n \n- /**\n- * Should the node Network be returned.\n- */\n- public boolean network() {\n- return this.network;\n- }\n-\n- /**\n- * Should the node Network be returned.\n- */\n- public NodesInfoRequest network(boolean network) {\n- this.network = network;\n- return this;\n- }\n-\n /**\n * Should the node Transport be returned.\n */\n@@ -228,7 +210,6 @@ public void readFrom(StreamInput in) throws IOException {\n process = in.readBoolean();\n jvm = in.readBoolean();\n threadPool = in.readBoolean();\n- network = in.readBoolean();\n transport = in.readBoolean();\n http = in.readBoolean();\n plugins = in.readBoolean();\n@@ -242,7 +223,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBoolean(process);\n out.writeBoolean(jvm);\n out.writeBoolean(threadPool);\n- out.writeBoolean(network);\n out.writeBoolean(transport);\n out.writeBoolean(http);\n out.writeBoolean(plugins);",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequest.java",
"status": "modified"
},
{
"diff": "@@ -87,14 +87,6 @@ public NodesInfoRequestBuilder setThreadPool(boolean threadPool) {\n return this;\n }\n \n- /**\n- * Should the node Network info be returned.\n- */\n- public NodesInfoRequestBuilder setNetwork(boolean network) {\n- request.network(network);\n- return this;\n- }\n-\n /**\n * Should the node Transport info be returned.\n */",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -80,7 +80,7 @@ protected NodeInfo newNodeResponse() {\n protected NodeInfo nodeOperation(NodeInfoRequest nodeRequest) {\n NodesInfoRequest request = nodeRequest.request;\n return nodeService.info(request.settings(), request.os(), request.process(), request.jvm(), request.threadPool(),\n- request.network(), request.transport(), request.http(), request.plugins());\n+ request.transport(), request.http(), request.plugins());\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java",
"status": "modified"
},
{
"diff": "@@ -36,7 +36,6 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {\n private boolean process;\n private boolean jvm;\n private boolean threadPool;\n- private boolean network;\n private boolean fs;\n private boolean transport;\n private boolean http;\n@@ -63,7 +62,6 @@ public NodesStatsRequest all() {\n this.process = true;\n this.jvm = true;\n this.threadPool = true;\n- this.network = true;\n this.fs = true;\n this.transport = true;\n this.http = true;\n@@ -81,7 +79,6 @@ public NodesStatsRequest clear() {\n this.process = false;\n this.jvm = false;\n this.threadPool = false;\n- this.network = false;\n this.fs = false;\n this.transport = false;\n this.http = false;\n@@ -171,21 +168,6 @@ public NodesStatsRequest threadPool(boolean threadPool) {\n return this;\n }\n \n- /**\n- * Should the node Network be returned.\n- */\n- public boolean network() {\n- return this.network;\n- }\n-\n- /**\n- * Should the node Network be returned.\n- */\n- public NodesStatsRequest network(boolean network) {\n- this.network = network;\n- return this;\n- }\n-\n /**\n * Should the node file system stats be returned.\n */\n@@ -260,7 +242,6 @@ public void readFrom(StreamInput in) throws IOException {\n process = in.readBoolean();\n jvm = in.readBoolean();\n threadPool = in.readBoolean();\n- network = in.readBoolean();\n fs = in.readBoolean();\n transport = in.readBoolean();\n http = in.readBoolean();\n@@ -276,7 +257,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBoolean(process);\n out.writeBoolean(jvm);\n out.writeBoolean(threadPool);\n- out.writeBoolean(network);\n out.writeBoolean(fs);\n out.writeBoolean(transport);\n out.writeBoolean(http);",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java",
"status": "modified"
},
{
"diff": "@@ -107,14 +107,6 @@ public NodesStatsRequestBuilder setThreadPool(boolean threadPool) {\n return this;\n }\n \n- /**\n- * Should the node Network stats be returned.\n- */\n- public NodesStatsRequestBuilder setNetwork(boolean network) {\n- request.network(network);\n- return this;\n- }\n-\n /**\n * Should the node file system stats be returned.\n */",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -79,7 +79,7 @@ protected NodeStats newNodeResponse() {\n @Override\n protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) {\n NodesStatsRequest request = nodeStatsRequest.request;\n- return nodeService.stats(request.indices(), request.os(), request.process(), request.jvm(), request.threadPool(), request.network(),\n+ return nodeService.stats(request.indices(), request.os(), request.process(), request.jvm(), request.threadPool(),\n request.fs(), request.transport(), request.http(), request.breaker(), request.script());\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java",
"status": "modified"
},
{
"diff": "@@ -99,8 +99,8 @@ protected ClusterStatsNodeResponse newNodeResponse() {\n \n @Override\n protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeRequest) {\n- NodeInfo nodeInfo = nodeService.info(false, true, false, true, false, false, true, false, true);\n- NodeStats nodeStats = nodeService.stats(CommonStatsFlags.NONE, false, true, true, false, false, true, false, false, false, false);\n+ NodeInfo nodeInfo = nodeService.info(false, true, false, true, false, true, false, true);\n+ NodeStats nodeStats = nodeService.stats(CommonStatsFlags.NONE, false, true, true, false, true, false, false, false, false);\n List<ShardStats> shardsStats = new ArrayList<>();\n for (IndexService indexService : indicesService) {\n for (IndexShard indexShard : indexService) {",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java",
"status": "modified"
},
{
"diff": "@@ -119,7 +119,7 @@ public NodeInfo info() {\n }\n \n public NodeInfo info(boolean settings, boolean os, boolean process, boolean jvm, boolean threadPool,\n- boolean network, boolean transport, boolean http, boolean plugin) {\n+ boolean transport, boolean http, boolean plugin) {\n return new NodeInfo(version, Build.CURRENT, discovery.localNode(), serviceAttributes,\n settings ? this.settings : null,\n os ? monitorService.osService().info() : null,\n@@ -149,7 +149,7 @@ public NodeStats stats() throws IOException {\n );\n }\n \n- public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, boolean jvm, boolean threadPool, boolean network,\n+ public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, boolean jvm, boolean threadPool,\n boolean fs, boolean transport, boolean http, boolean circuitBreaker,\n boolean script) {\n // for indices stats we want to include previous allocated shards stats as well (it will",
"filename": "core/src/main/java/org/elasticsearch/node/service/NodeService.java",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,7 @@\n public class RestNodesInfoAction extends BaseRestHandler {\n \n private final SettingsFilter settingsFilter;\n- private final static Set<String> ALLOWED_METRICS = Sets.newHashSet(\"http\", \"jvm\", \"network\", \"os\", \"plugins\", \"process\", \"settings\", \"thread_pool\", \"transport\");\n+ private final static Set<String> ALLOWED_METRICS = Sets.newHashSet(\"http\", \"jvm\", \"os\", \"plugins\", \"process\", \"settings\", \"thread_pool\", \"transport\");\n \n @Inject\n public RestNodesInfoAction(Settings settings, RestController controller, Client client, SettingsFilter settingsFilter) {\n@@ -91,7 +91,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n nodesInfoRequest.process(metrics.contains(\"process\"));\n nodesInfoRequest.jvm(metrics.contains(\"jvm\"));\n nodesInfoRequest.threadPool(metrics.contains(\"thread_pool\"));\n- nodesInfoRequest.network(metrics.contains(\"network\"));\n nodesInfoRequest.transport(metrics.contains(\"transport\"));\n nodesInfoRequest.http(metrics.contains(\"http\"));\n nodesInfoRequest.plugins(metrics.contains(\"plugins\"));",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/info/RestNodesInfoAction.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,6 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n nodesStatsRequest.os(metrics.contains(\"os\"));\n nodesStatsRequest.jvm(metrics.contains(\"jvm\"));\n nodesStatsRequest.threadPool(metrics.contains(\"thread_pool\"));\n- nodesStatsRequest.network(metrics.contains(\"network\"));\n nodesStatsRequest.fs(metrics.contains(\"fs\"));\n nodesStatsRequest.transport(metrics.contains(\"transport\"));\n nodesStatsRequest.http(metrics.contains(\"http\"));",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@ public static void main(String[] args) {\n Node node = NodeBuilder.nodeBuilder().settings(Settings.settingsBuilder()\n .put(\"monitor.os.refresh_interval\", 0)\n .put(\"monitor.process.refresh_interval\", 0)\n- .put(\"monitor.network.refresh_interval\", 0)\n ).node();\n \n JvmService jvmService = node.injector().getInstance(JvmService.class);",
"filename": "core/src/test/java/org/elasticsearch/stresstest/leaks/GenericStatsLeak.java",
"status": "modified"
},
{
"diff": "@@ -94,7 +94,6 @@\n import org.elasticsearch.node.internal.InternalSettingsPreparer;\n import org.elasticsearch.node.service.NodeService;\n import org.elasticsearch.script.ScriptService;\n-import org.elasticsearch.search.SearchModule;\n import org.elasticsearch.search.SearchService;\n import org.elasticsearch.test.disruption.ServiceDisruptionScheme;\n import org.elasticsearch.search.MockSearchService;\n@@ -1865,7 +1864,7 @@ public void run() {\n }\n \n NodeService nodeService = getInstanceFromNode(NodeService.class, nodeAndClient.node);\n- NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false, false);\n+ NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false);\n assertThat(\"Fielddata size must be 0 on node: \" + stats.getNode(), stats.getIndices().getFieldData().getMemorySizeInBytes(), equalTo(0l));\n assertThat(\"Query cache size must be 0 on node: \" + stats.getNode(), stats.getIndices().getQueryCache().getMemorySizeInBytes(), equalTo(0l));\n assertThat(\"FixedBitSet cache size must be 0 on node: \" + stats.getNode(), stats.getIndices().getSegments().getBitsetMemoryInBytes(), equalTo(0l));",
"filename": "core/src/test/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
}
]
} |
{
"body": "Performing a stats aggregation over a string field throws 500 error with an exception as opposed to a simplified error stating the the field specified is not valid for a stats aggregation.\n\nI would expect a 400 for example since the client to fix their request before retrying it. Additionally it would help if the error message was more indicative of the problem as opposed to a SearchPhaseExecutionException. I would hope we should be able to detect this prior to performing the actual search.\n\n```\nGET /stack/_search?search_type=count\n{\n \"aggs\": {\n \"dateAggTest\": {\n \"stats\": {\n \"field\": \"title\"\n }\n }\n }\n}\n```\n\n```\n{\n \"error\": \"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[hjvyPMaiTgewDbpd9j67_A][stack][0]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[hjvyPMaiTgewDbpd9j67_A][stack][1]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[hjvyPMaiTgewDbpd9j67_A][stack][2]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[hjvyPMaiTgewDbpd9j67_A][stack][3]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}{[hjvyPMaiTgewDbpd9j67_A][stack][4]: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]}]\",\n \"status\": 500\n}\n```\n",
"comments": [
{
"body": "Agreed, should be a 400 - probably need a wider review of exceptions too\n",
"created_at": "2015-08-13T10:12:31Z"
},
{
"body": "We should have a better error message here.\n\nAnd should we just treat every ClassCastException is a 400 rather than 500? If user send correct request and get 400, OK, you find a bug we are going to fix it.\n",
"created_at": "2015-08-14T07:58:13Z"
},
{
"body": "> And should we just treat every ClassCastException is a 400 rather than 500? If user send correct request and get 400, OK, you find a bug we are going to fix it.\n\nI don't think we can generalize that class-cast exceptions are user-input errors rather than internal errors. We should just better validate that fields have the expected type.\n\nIt might be harder with scripts as scripts can do pretty much anything, but I assume that if we can make it work well on fields, that would be a good start already.\n",
"created_at": "2015-08-14T13:26:41Z"
},
{
"body": "+1 @jpountz, I would assume scripts to be out of scope. Not enough gain for the amount of work it would take to handle all cases, etc.\n",
"created_at": "2015-08-14T16:30:04Z"
}
],
"number": 12842,
"title": "stats aggregation returns 500 error when performed over an invalid field"
} | {
"body": "Let stats aggregation returns 400 error when performed over an invalid field\n\ncloses #12842\n",
"number": 12913,
"review_comments": [],
"title": "Validate class before cast."
} | {
"commits": [
{
"message": "Throw IllegalArgumentException instead of ClassCastException,\nLet stats aggregation returns 400 error when performed over an invalid field\n\ncloses #12842"
}
],
"files": [
{
"diff": "@@ -156,6 +156,12 @@ private ValuesSource.Numeric numericScript(ValuesSourceConfig<?> config) throws\n }\n \n private ValuesSource.Numeric numericField(ValuesSourceConfig<?> config) throws IOException {\n+\n+ if (!(config.fieldContext.indexFieldData() instanceof IndexNumericFieldData)) {\n+ throw new IllegalArgumentException(\"Expected numeric type on field [\" + config.fieldContext.field() +\n+ \"], but got [\" + config.fieldContext.fieldType().typeName() + \"]\");\n+ }\n+\n ValuesSource.Numeric dataSource = new ValuesSource.Numeric.FieldData((IndexNumericFieldData) config.fieldContext.indexFieldData());\n if (config.script != null) {\n dataSource = new ValuesSource.Numeric.WithScript(dataSource, config.script);\n@@ -184,6 +190,12 @@ private ValuesSource.Bytes bytesScript(ValuesSourceConfig<?> config) throws IOEx\n }\n \n private ValuesSource.GeoPoint geoPointField(ValuesSourceConfig<?> config) throws IOException {\n+\n+ if (!(config.fieldContext.indexFieldData() instanceof IndexGeoPointFieldData)) {\n+ throw new IllegalArgumentException(\"Expected geo_point type on field [\" + config.fieldContext.field() +\n+ \"], but got [\" + config.fieldContext.fieldType().typeName() + \"]\");\n+ }\n+\n return new ValuesSource.GeoPoint.Fielddata((IndexGeoPointFieldData) config.fieldContext.indexFieldData());\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java",
"status": "modified"
}
]
} |
{
"body": "When installing a plugin, the plugin manager outputs PluginInfo by default, which I think it should only do in verbose mode. Extra points for formatting the info nicely:\n\n```\n.........................................................................................................DONE\nPluginInfo{name='cloud-aws', description='The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.', site=false, jvm=true, classname=org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin, isolated=true, version='2.0.0-beta1'}\n```\n",
"comments": [
{
"body": "Got a fix for this. I'll submit the pull request later today.\n",
"created_at": "2015-08-18T07:02:20Z"
}
],
"number": 12907,
"title": "Output PluginInfo only when in verbose mode"
} | {
"body": "Hi\n\nMoved output to verbose level and additionally changed the plugin info output format\n\nBefore:\n\n``` shell\nPluginInfo{name='cloud-aws', description='The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.', site=false, jvm=true, classname=org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin, isolated=true, version='2.1.0-SNAPSHOT'}\n```\n\nAfter:\n\n``` shell\n- Plugin information:\nName: cloud-aws\nDescription: The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.\nSite: false\nVersion: 2.1.0-SNAPSHOT\nJVM: true\n* Classname: org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin\n* Isolated: true\n```\n\nFixes #12907\n\nWhat do you think @clintongormley?\n",
"number": 12908,
"review_comments": [],
"title": "Output plugin info only in verbose mode"
} | {
"commits": [
{
"message": "Merge pull request #1 from nik9000/plugin-info-verbose\n\nPluginManager plugin info printing"
}
],
"files": []
} |
{
"body": "If you create a template with `number_of_shards` set to 0, then it accepts it and the eventual index fails. Attempted in ES 1.7.1. (If you try to do this directly with the index, then it will appropriately block the attempt.)\n\n``` http\n# Create the template\nPUT /_template/test_shards\n{\n \"template\": \"test_shards*\",\n \"settings\": {\n \"number_of_shards\" : 0\n }\n}\n\n# Create the index\nPUT /test_shards\n```\n\nOnce created, the cluster is in a red state because no primaries are allocated, which is kind of odd on its own because no primaries are missing.\n\n``` http\nDELETE /test_shards\n```\n\nWhen trying to delete the index, an exception is logged:\n\n```\n[2015-08-13 18:49:11,042][WARN ][cluster.action.index ] [WallE] [test_shards]failed to ack index store deleted for index\njava.lang.IllegalArgumentException: settings must contain a non-null > 0 number of shards\n at org.elasticsearch.env.NodeEnvironment.lockAllForIndex(NodeEnvironment.java:445)\n at org.elasticsearch.indices.IndicesService.processPendingDeletes(IndicesService.java:733)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction.lockIndexAndAck(NodeIndexDeletedAction.java:125)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction.access$500(NodeIndexDeletedAction.java:49)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction$1.doRun(NodeIndexDeletedAction.java:94)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nHowever, after an usually long delay, it eventually responds with success. Looking on disk, the index's directory still exists, but it is appropriately empty.\n### Workaround\n\nYou can fix the issue by updating the template to fix the issue, creating the index, and then deleting it. (You can also just specify the `number_of_shards` directly at index time to override them template, but fixing the template is the appropriate fix if you run into this issue!)\n\n``` http\n# Recreate the index\nPUT /test_shards\n{\n \"settings\" : {\n \"number_of_shards\" : 1\n }\n}\n\n# Clean it up\nDELETE /test_shards\n```\n\nThis will appropriately cleanup the directory.\n",
"comments": [],
"number": 12865,
"title": "Template allows creation of index with 0 primary shards"
} | {
"body": "Previously settings specified in index templates were not validated upon\ntemplate creation. Creating an index from an index template with invalid\nsettings could lead to cluster stability issues because creation of such\nindexes would bypass index settings validation.\n\nThis commit adds validation of settings specified in index templates at\ntemplate creation time. This works by routing the index template\nsettings through the index settings validation mechanism.\n\nCloses #12865\n",
"number": 12892,
"review_comments": [
{
"body": "I don't think we really need scope for this test since it's a unit test?\n",
"created_at": "2015-08-14T20:41:26Z"
},
{
"body": "Oh, yeah, that's just left over because my initial implementation of the test was as an integration test and I didn't remove it when I rewrote it as a unit test.\n",
"created_at": "2015-08-14T21:19:46Z"
}
],
"title": "Validate settings specified in index templates at template creation time"
} | {
"commits": [
{
"message": "Validate settings specified in index templates at template creation time\n\nPreviously settings specified in index templates were not validated upon\ntemplate creation. Creating an index from an index template with invalid\nsettings could lead to cluster stability issues because creation of such\nindexes would bypass index settings validation.\n\nThis commit adds validation of settings specified in index templates at\ntemplate creation time. This works by routing the index template\nsettings through the index settings validation mechanism.\n\nCloses #12865"
}
],
"files": [
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.indices.IndexCreationException;\n import org.elasticsearch.indices.IndexTemplateAlreadyExistsException;\n import org.elasticsearch.indices.IndexTemplateMissingException;\n import org.elasticsearch.indices.InvalidIndexTemplateException;\n@@ -50,12 +51,14 @@ public class MetaDataIndexTemplateService extends AbstractComponent {\n \n private final ClusterService clusterService;\n private final AliasValidator aliasValidator;\n+ private final MetaDataCreateIndexService metaDataCreateIndexService;\n \n @Inject\n- public MetaDataIndexTemplateService(Settings settings, ClusterService clusterService, AliasValidator aliasValidator) {\n+ public MetaDataIndexTemplateService(Settings settings, ClusterService clusterService, MetaDataCreateIndexService metaDataCreateIndexService, AliasValidator aliasValidator) {\n super(settings);\n this.clusterService = clusterService;\n this.aliasValidator = aliasValidator;\n+ this.metaDataCreateIndexService = metaDataCreateIndexService;\n }\n \n public void removeTemplates(final RemoveRequest request, final RemoveListener listener) {\n@@ -207,6 +210,12 @@ private void validate(PutRequest request) {\n throw new InvalidIndexTemplateException(request.name, \"template must not container the following characters \" + Strings.INVALID_FILENAME_CHARS);\n }\n \n+ try {\n+ metaDataCreateIndexService.validateIndexSettings(request.name, request.settings);\n+ } catch (IndexCreationException exception) {\n+ throw new InvalidIndexTemplateException(request.name, exception.getDetailedMessage());\n+ }\n+\n for (Alias alias : request.aliases) {\n //we validate the alias only partially, as we don't know yet to which index it'll get applied to\n aliasValidator.validateAliasStandalone(alias);",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,81 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.template.put;\n+\n+import com.google.common.collect.Lists;\n+import com.google.common.collect.Maps;\n+import com.google.common.collect.Sets;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.IndexTemplateFilter;\n+import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n+import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;\n+import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.PutRequest;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.InvalidIndexTemplateException;\n+import org.elasticsearch.test.ESTestCase;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.List;\n+import java.util.Map;\n+\n+public class MetaDataIndexTemplateServiceTests extends ESTestCase {\n+ @Test\n+ public void testIndexTemplateInvalidNumberOfShards() throws IOException {\n+ MetaDataCreateIndexService createIndexService = new MetaDataCreateIndexService(\n+ Settings.EMPTY,\n+ null,\n+ null,\n+ null,\n+ null,\n+ null,\n+ Version.CURRENT,\n+ null,\n+ Sets.<IndexTemplateFilter>newHashSet(),\n+ null,\n+ null\n+ );\n+ MetaDataIndexTemplateService service = new MetaDataIndexTemplateService(Settings.EMPTY, null, createIndexService, null);\n+\n+ PutRequest request = new PutRequest(\"test\", \"test_shards\");\n+ request.template(\"test_shards*\");\n+\n+ Map<String, Object> map = Maps.newHashMap();\n+ map.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, \"0\");\n+ request.settings(Settings.settingsBuilder().put(map).build());\n+\n+ final List<Throwable> throwables = Lists.newArrayList();\n+ service.putTemplate(request, new MetaDataIndexTemplateService.PutListener() {\n+ @Override\n+ public void onResponse(MetaDataIndexTemplateService.PutResponse response) {\n+\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ throwables.add(t);\n+ }\n+ });\n+ assertEquals(throwables.size(), 1);\n+ assertTrue(throwables.get(0) instanceof InvalidIndexTemplateException);\n+ assertTrue(throwables.get(0).getMessage().contains(\"index must have 1 or more primary shards\"));\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java",
"status": "added"
}
]
} |
{
"body": "Hey all, \n\nI'm performing a date histogram aggregation over the past day (`'now/d'` -> `'now/d'`), and would like to get results into hourly buckets. I am using the `extended_bounds` of the aggregation because I would still like to get empty buckets back (as long as they are within my time range).\n\nEverything is working _almost_ as expected... except it seems like `extended_bounds` is not respecting the time zone. \n\nMy query returns the UTC \"midnight\" bucket which becomes 7pm (on the previous day) when I adjust for my timezone (CST). I would expect that `extended_bounds` with `time_zone` would return that timezone's midnight.\n\nSo if I were performing a query from `'now/d'` to `'now/d'` in CST (which is GMT-5 currently), I would expect the first bucket to be `'2015-07-15T05:00:00.000Z'` and _not_ `'2015-07-15T00:00:00.000Z'`.\n\nMy question is: Is `extended_bounds` not respecting `time_zone`?\n\nI'm having trouble succinctly describing this in English; hopefully my test case below helps to explain:\n#### Test Case:\n\nIndex a new document with timestamp of 00:00 GMT:\n\n```\ncurl -XPOST \"http://localhost:9200/analytics/event\" -d'\n{\n \"name\": \"Prince George\",\n \"event-date\": {\n \"timestamp\": \"2015-07-15T00:00:00.000Z\"\n }\n}'\n```\n\nIndex a new document with timestamp of 08:00 GMT:\n\n```\ncurl -XPOST \"http://localhost:9200/analytics/event\" -d'\n{\n \"name\": \"James Madison\",\n \"event-date\": {\n \"timestamp\": \"2015-07-15T08:00:00.000Z\"\n }\n}'\n```\n\nSearch!\n\n```\ncurl -XPOST \"http://localhost:9200/analytics/event/_search\" -d'\n{\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"match_all\": {}\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"range\": {\n \"timestamp\": {\n \"from\": \"now/d\",\n \"to\": \"now/d\",\n \"time_zone\": \"-5:00\",\n \"include_lower\": true,\n \"include_upper\": true\n }\n }\n }\n ]\n }\n }\n }\n },\n \"aggs\": {\n \"dateagg\": {\n \"date_histogram\": {\n \"field\": \"event-date.timestamp\",\n \"interval\": \"1h\",\n \"time_zone\": \"-5:00\",\n \"min_doc_count\": 0,\n \"extended_bounds\": {\n \"min\": \"now/d\",\n \"max\": \"now/d\"\n }\n }\n }\n }\n}'\n```\n\n**Result**:\n\n```\n{\n \"took\": 92,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"analytics\",\n \"_type\": \"event\",\n \"_id\": \"AU6S1QrhLJ3RT4VnGs-M\",\n \"_score\": 1,\n \"_source\": {\n \"name\": \"James Madison\",\n \"event-date\": {\n \"timestamp\": \"2015-07-15T08:00:00.000Z\"\n }\n }\n }\n ]\n },\n \"aggregations\": {\n \"dateagg\": {\n \"buckets\": [\n {\n \"key_as_string\": \"2015-07-15T00:00:00.000Z\",\n \"key\": 1436918400000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2015-07-15T01:00:00.000Z\",\n \"key\": 1436922000000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2015-07-15T02:00:00.000Z\",\n \"key\": 1436925600000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2015-07-15T03:00:00.000Z\",\n \"key\": 1436929200000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2015-07-15T04:00:00.000Z\",\n \"key\": 1436932800000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2015-07-15T05:00:00.000Z\",\n \"key\": 1436936400000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2015-07-15T06:00:00.000Z\",\n \"key\": 1436940000000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2015-07-15T07:00:00.000Z\",\n \"key\": 1436943600000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": 
\"2015-07-15T08:00:00.000Z\",\n \"key\": 1436947200000,\n \"doc_count\": 1\n }\n ]\n }\n }\n}\n```\n\nAs you can see, we are getting the expected hits (\"James Madison\"), and the document is in the correct bucket. I would prefer the first bucket to start at the beginning of the day in the timezone I specified in the query (\"-05:00\"), and not at the beginning of the day in UTC.\n\nThanks for your review, and thanks your hard work!\n",
"comments": [
{
"body": "@cbuescher could you take a look at this one? I think the docs need some updating - I'm confused as to best way to do something like this.\n\nthanks\n",
"created_at": "2015-07-17T13:20:51Z"
},
{
"body": "I just verified this on 1.7. and the result still looks like @feltnerm describes it above. Might be a bug, but looking at the code, at first glance it looks like timezone should already be applied to `extended_bounds`. Will have to look into this more deeply.\n",
"created_at": "2015-07-21T12:23:45Z"
},
{
"body": "@feltnerm which version of ES were you using for the output you described above?\n",
"created_at": "2015-07-21T12:24:38Z"
},
{
"body": "@cbuescher the above is from 1.5.2, but I have tried with 1.6.x as well. Can try on 1.6.x or 1.7.x if needed.\n",
"created_at": "2015-07-21T15:09:01Z"
},
{
"body": "Some preliminary findings: I was able to dig a bit deeper into this, and could reproduce the behaviour on master, haven't found a way to fix this though. \nThe root problem seems to be that the extended bounds datemath expression is first evaluated without considering the timezone (so the \"/d\" day rounding that comes with the datemath expression results in UTC day changes). Later on, the time zone rounding of the aggregation is applied, but since we have specified one hour intervals here, the min/max of the extended bounds doesn't change. Before jumping to a quick fix here we need to rethink what the timezone parameter in this type of aggregation should apply to, if it should also affect date math expressions like \"now/d\" or if this has undesired side effects. \n",
"created_at": "2015-07-28T13:43:16Z"
}
],
"number": 12278,
"title": "Date Histogram Aggregations w/ `extended_bounds` and `time_zone`"
} | {
"body": "This PR adds a timezone field to ValueParser.DateMath that is\nset to UTC by default but can be set using the existing constructors.\nThis makes it possible for extended bounds setting in DateHistogram\nto also use date math expressions that e.g. round by day and apply\nthis rounding in the time zone specified in the date histogram\naggregation request.\n\nCloses #12278\n",
"number": 12886,
"review_comments": [],
"title": "Make ValueParser.DateMath aware of timezone setting"
} | {
"commits": [
{
"message": "Aggregations: Make ValueParser.DateMath aware of timezone setting\n\nThis PR adds a timezone field to ValueParser.DateMath that is\nset to UTC by default but can be set using the existing constructors.\nThis makes it possible for extended bounds setting in DateHistogram\nto also use date math expressions that e.g. round by day and apply\nthis rounding in the time zone specified in the date histogram\naggregation request.\n\nCloses #12278"
},
{
"message": "Adding comments to test"
}
],
"files": [
{
"diff": "@@ -69,14 +69,14 @@ public static class DateTime extends Patternable<DateTime> {\n public static final DateTime DEFAULT = new DateTime(DateFieldMapper.Defaults.DATE_TIME_FORMATTER.format(), ValueFormatter.DateTime.DEFAULT, ValueParser.DateMath.DEFAULT);\n \n public static DateTime format(String format, DateTimeZone timezone) {\n- return new DateTime(format, new ValueFormatter.DateTime(format, timezone), new ValueParser.DateMath(format));\n+ return new DateTime(format, new ValueFormatter.DateTime(format, timezone), new ValueParser.DateMath(format, timezone));\n }\n \n public static DateTime mapper(DateFieldMapper.DateFieldType fieldType, DateTimeZone timezone) {\n- return new DateTime(fieldType.dateTimeFormatter().format(), ValueFormatter.DateTime.mapper(fieldType, timezone), ValueParser.DateMath.mapper(fieldType));\n+ return new DateTime(fieldType.dateTimeFormatter().format(), ValueFormatter.DateTime.mapper(fieldType, timezone), ValueParser.DateMath.mapper(fieldType, timezone));\n }\n \n- public DateTime(String pattern, ValueFormatter formatter, ValueParser parser) {\n+ private DateTime(String pattern, ValueFormatter formatter, ValueParser parser) {\n super(pattern, formatter, parser);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/format/ValueFormat.java",
"status": "modified"
},
{
"diff": "@@ -18,13 +18,15 @@\n */\n package org.elasticsearch.search.aggregations.support.format;\n \n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.joda.DateMathParser;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n import org.elasticsearch.index.mapper.ip.IpFieldMapper;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.joda.time.DateTimeZone;\n \n import java.text.DecimalFormat;\n import java.text.DecimalFormatSymbols;\n@@ -80,16 +82,21 @@ public double parseDouble(String value, SearchContext searchContext) {\n */\n static class DateMath implements ValueParser {\n \n- public static final DateMath DEFAULT = new ValueParser.DateMath(new DateMathParser(DateFieldMapper.Defaults.DATE_TIME_FORMATTER));\n+ public static final DateMath DEFAULT = new ValueParser.DateMath(new DateMathParser(DateFieldMapper.Defaults.DATE_TIME_FORMATTER), DateTimeZone.UTC);\n \n private DateMathParser parser;\n \n- public DateMath(String format) {\n- this(new DateMathParser(Joda.forPattern(format)));\n+ private DateTimeZone timezone = DateTimeZone.UTC;\n+\n+ public DateMath(String format, DateTimeZone timezone) {\n+ this(new DateMathParser(Joda.forPattern(format)), timezone);\n }\n \n- public DateMath(DateMathParser parser) {\n+ public DateMath(DateMathParser parser, @Nullable DateTimeZone timeZone) {\n this.parser = parser;\n+ if (timeZone != null) {\n+ this.timezone = timeZone;\n+ }\n }\n \n @Override\n@@ -100,16 +107,16 @@ public Long call() throws Exception {\n return searchContext.nowInMillis();\n }\n };\n- return parser.parse(value, now);\n+ return parser.parse(value, now, false, timezone);\n }\n \n @Override\n public double parseDouble(String value, SearchContext searchContext) {\n return parseLong(value, searchContext);\n }\n \n- public static DateMath mapper(DateFieldMapper.DateFieldType fieldType) {\n- return new DateMath(new DateMathParser(fieldType.dateTimeFormatter()));\n+ public static DateMath mapper(DateFieldMapper.DateFieldType fieldType, @Nullable DateTimeZone timezone) {\n+ return new DateMath(new DateMathParser(fieldType.dateTimeFormatter()), timezone);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/format/ValueParser.java",
"status": "modified"
},
{
"diff": "@@ -21,9 +21,11 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.joda.DateMathParser;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n@@ -42,6 +44,7 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.List;\n+import java.util.concurrent.Callable;\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n@@ -558,7 +561,7 @@ public void singleValuedField_WithValueScript() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n- \n+\n \n /*\n [ Jan 2, Feb 3]\n@@ -904,7 +907,7 @@ public void script_MultiValued() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n- \n+\n \n /*\n [ Jan 2, Feb 3]\n@@ -1195,6 +1198,70 @@ public void singleValueField_WithExtendedBounds() throws Exception {\n }\n }\n \n+ /**\n+ * Test date histogram aggregation with hour interval, timezone shift and\n+ * extended bounds (see https://github.com/elastic/elasticsearch/issues/12278)\n+ */\n+ @Test\n+ public void singleValueField_WithExtendedBoundsTimezone() throws Exception {\n+\n+ String index = \"test12278\";\n+ prepareCreate(index)\n+ .setSettings(Settings.builder().put(indexSettings()).put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0))\n+ .execute().actionGet();\n+\n+ DateMathParser parser = new DateMathParser(Joda.getStrictStandardDateFormatter());\n+\n+ final Callable<Long> callable = new Callable<Long>() {\n+ @Override\n+ public Long call() throws Exception {\n+ return System.currentTimeMillis();\n+ }\n+ };\n+\n+ // we pick a random timezone offset of +12/-12 hours and insert two documents\n+ // one at 00:00 in that time zone and one at 12:00\n+ List<IndexRequestBuilder> builders = new ArrayList<>();\n+ int timeZoneHourOffset = randomIntBetween(-12, 12);\n+ DateTimeZone timezone = DateTimeZone.forOffsetHours(timeZoneHourOffset);\n+ DateTime timeZoneStartToday = new DateTime(parser.parse(\"now/d\", callable, false, timezone), DateTimeZone.UTC);\n+ DateTime timeZoneNoonToday = new DateTime(parser.parse(\"now/d+12h\", callable, false, timezone), DateTimeZone.UTC);\n+ builders.add(indexDoc(index, timeZoneStartToday, 1));\n+ builders.add(indexDoc(index, timeZoneNoonToday, 2));\n+ indexRandom(true, builders);\n+ ensureSearchable(index);\n+\n+ SearchResponse response = null;\n+ // retrieve those docs with the same time zone and extended bounds\n+ response = client()\n+ .prepareSearch(index)\n+ .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"now/d\").to(\"now/d\").includeLower(true).includeUpper(true).timeZone(timezone.getID()))\n+ .addAggregation(\n+ dateHistogram(\"histo\").field(\"date\").interval(DateHistogramInterval.hours(1)).timeZone(timezone.getID()).minDocCount(0)\n+ .extendedBounds(\"now/d\", \"now/d+23h\")\n+ ).execute().actionGet();\n+ assertSearchResponse(response);\n+\n+ assertThat(\"Expected 24 buckets for one day aggregation with hourly interval\", response.getHits().totalHits(), equalTo(2l));\n+\n+ Histogram histo = 
response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? extends Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(24));\n+\n+ for (int i = 0; i < buckets.size(); i++) {\n+ Histogram.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(\"Bucket \" + i +\" had wrong key\", (DateTime) bucket.getKey(), equalTo(new DateTime(timeZoneStartToday.getMillis() + (i * 60 * 60 * 1000), DateTimeZone.UTC)));\n+ if (i == 0 || i == 12) {\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ } else {\n+ assertThat(bucket.getDocCount(), equalTo(0l));\n+ }\n+ }\n+ }\n+\n @Test\n public void singleValue_WithMultipleDateFormatsFromMapping() throws Exception {\n \n@@ -1233,7 +1300,7 @@ public void testIssue6965() {\n .execute().actionGet();\n \n assertSearchResponse(response);\n- \n+\n DateTimeZone tz = DateTimeZone.forID(\"+01:00\");\n \n Histogram histo = response.getAggregations().get(\"histo\");",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramIT.java",
"status": "modified"
}
]
} |
{
"body": "The plugin script fails to initialize when the the environment file (for DEB installations, /etc/default/elasticsearch) is sourced and contains memory parameters in ES_JAVA_OPTS such as a larger new size.\n\nExample output:\n\n```\n$ bin/plugin -r pluginname\nError occurred during initialization of VM\nToo small initial heap for new size specified\n```\n\nThis is caused by the hardcoded Xmx and Xms parameters in the plugin script. Removing the hardcoded values allows the script to execute successfully.\n\nWe may be able to just use the default JVM values here instead of hardcoding these values.\n",
"comments": [
{
"body": "steps to reproduce:\n- You need java 1.7, does not happen with java 1.8\n- set `ES_JAVA_OPTS=\"-XX:NewSize=256m\"` in `/etc/default/elasticsearch`\n\nThen call the plugin manager like\n\n```\n# /usr/share/elasticsearch/bin/plugin\nError occurred during initialization of VM\nToo small initial heap for new size specified\n```\n\nI am not sure if it is a good idea to remove those values, because this means, that you can potentially run the plugin manager with the same heap configured than your elasticsearch process, given you used `-Xms` and `-Xmx` instead of `ES_HEAP_SIZE`, which is never used in the plugin manager\n\nAs java 7 is EOL and it works with Java8, I am currently leaning towards closing this, as soon as I figured out, what actually happens in Java8 - as in what values are really getting used and which get ignored.\n\n**Update**: No need to have a package installed, you can simply use JAVA_OPTS on osx as well and call `bin/plugin`\n",
"created_at": "2015-08-10T12:49:34Z"
},
{
"body": "So, using a small java class, that basically prints out `ManagementFactory.getMemoryMXBean().getHeapMemoryUsage()` shows, that java8 is doing the right thing and respects `Xmx` and `Xms`\n\njava8:\n\n```\n# java -XX:NewSize=256m -Xmx64m -Xms16m Foo\ninit = 16777216(16384K) used = 1311456(1280K) committed = 15204352(14848K) max = 45613056(44544K)\n# java -Xmx64m -Xms16m Foo\ninit = 16777216(16384K) used = 872696(852K) committed = 16252928(15872K) max = 59768832(58368K)\n```\n\njava7 (obviously)\n\n```\n# java -XX:NewSize=256m -Xmx64m -Xms16m Foo\nError occurred during initialization of VM\nToo small initial heap for new size specified\n# java -Xmx64m -Xms16m Foo\ninit = 16777216(16384K) used = 480376(469K) committed = 16777216(16384K) max = 59768832(58368K)\n```\n\n@jaymode objections against leaving as it is to prevent accidentally crazy high heaps for the plugin manager if people configure `-Xmx` in the `JAVA_OPTS` or `ES_JAVA_OPTS` for Elasticsearch itself?\n",
"created_at": "2015-08-10T13:14:40Z"
},
{
"body": "I'm ok with leaving memory values specified. \n\nI think the question is can we do anything to make an error less likely if memory values are configured in that option? I can't think of a good option since sourcing the config files has value. Maybe we don't use `ES_JAVA_OPTS` and introduce a `CLI_JAVA_OPTS` or something? \n",
"created_at": "2015-08-10T13:39:56Z"
},
{
"body": "this also raises an interesting point, if we want to merge the plugin manager into the `BootstrapCliParser` as this means, both have to run with the same memory settings, which might be bad...\n",
"created_at": "2015-08-10T13:52:47Z"
},
{
"body": "so we would need `ES_JAVA_OPTS`, `PLUGIN_JAVA_OPTS` and `COMMON_JAVA_OPTS` or are two options sufficient? Just mapping it out... given the single hit we had with this, I would postpone it for now\n",
"created_at": "2015-08-10T13:59:46Z"
},
{
"body": "why doe the plugin manager need java opts at all?\n",
"created_at": "2015-08-10T18:22:41Z"
},
{
"body": "We've used the ES_JAVA_OPTS as a way to specify a custom conf directory/file location. That was before we started reading those files. \n\nI think removing both JAVA_OPTS and ES_JAVA_OPTS is ok since we:\n1. read the environment configuration files for RPM/DEB now\n2. have the ability to specify the options with `--` syntax\n",
"created_at": "2015-08-10T18:41:25Z"
},
{
"body": "Regardless of whether we remove those env settings, I don't see why plugin manager needs them. This is a tiny program that just installs/removes/lists plugins. It should not require setting eg heap size or crazy other java options.\n",
"created_at": "2015-08-10T19:00:58Z"
},
{
"body": "> Regardless of whether we remove those env settings, I don't see why plugin manager needs them.\n\n+1\n",
"created_at": "2015-08-10T19:22:59Z"
},
{
"body": "so, removing `JAVA_OPTS` and `ES_JAVA_OPTS` settings still pass all integration tests... and works under CentOS, going to create a PR after testing the debian package\n",
"created_at": "2015-08-11T12:05:14Z"
}
],
"number": 12479,
"title": "Plugin script fails when memory parameters are defined in environment file"
} | {
"body": "When calling the plugin manager on java 7 with additional JAVA_OPTS\nthat change heap configuration compared to what is set at the plugin\nmanager shell script. This resulted in errors.\n\nThis commit removes the JAVA_OPTS and ES_JAVA_OPTS from the plugin\nmanager call to prevent those settings.\n\nCloses #12479\n",
"number": 12801,
"review_comments": [],
"title": "Remove unused java opts/es java opts from plugin manager call"
} | {
"commits": [
{
"message": "Plugins: Remove java opts/es java opts from plugin manager\n\n... and run as client VM.\n\nReasoning: When calling the plugin manager on java 7 with additional JAVA_OPTS\nthat change heap configuration compared to what is set at the plugin\nmanager shell script. This resulted in errors.\n\nThis commit removes the JAVA_OPTS and ES_JAVA_OPTS from the plugin\nmanager call to prevent those settings.\n\nCloses #12479"
}
],
"files": [
{
"diff": "@@ -108,4 +108,4 @@ fi\n HOSTNAME=`hostname | cut -d. -f1`\n export HOSTNAME\n \n-eval \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginManagerCliParser $args\n+eval \"$JAVA\" -client -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginManagerCliParser $args",
"filename": "distribution/src/main/resources/bin/plugin",
"status": "modified"
},
{
"diff": "@@ -11,7 +11,7 @@ TITLE Elasticsearch Plugin Manager ${project.version}\n \n SET HOSTNAME=%COMPUTERNAME%\n \n-\"%JAVA_HOME%\\bin\\java\" %JAVA_OPTS% %ES_JAVA_OPTS% -Xmx64m -Xms16m -Des.path.home=\"%ES_HOME%\" -cp \"%ES_HOME%/lib/*;\" \"org.elasticsearch.plugins.PluginManagerCliParser\" %*\n+\"%JAVA_HOME%\\bin\\java\" -client -Des.path.home=\"%ES_HOME%\" -cp \"%ES_HOME%/lib/*;\" \"org.elasticsearch.plugins.PluginManagerCliParser\" %*\n goto finally\n \n ",
"filename": "distribution/src/main/resources/bin/plugin.bat",
"status": "modified"
}
]
} |
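The heap-inspection class invoked as `Foo` in the java 7 vs java 8 comparison above is not included in the issue. A minimal sketch of such a class, assuming only what the quoted commands show (the class name `Foo` and the printed `init/used/committed/max` line), could look like this:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Hypothetical helper matching the `Foo` invocations quoted above: it prints the
// heap settings the JVM actually applied. MemoryUsage.toString() produces the
// "init = ... used = ... committed = ... max = ..." format seen in the comments.
public class Foo {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println(heap);
    }
}
```

Running it with `-XX:NewSize=256m -Xmx64m -Xms16m` reproduces the difference discussed above: java 7 refuses to start because the requested new generation does not fit into the 16m initial heap, while java 8 adjusts the sizes and respects `-Xms`/`-Xmx`.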
{
"body": "There are a few things going on here. I would expect this create-index request to throw an exception because the `search_analyzer` is different between fields:\n\n```\nPUT t\n{\n \"mappings\": {\n \"x\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\", \n \"search_analyzer\": \"whitespace\"\n }\n }\n },\n \"y\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"simple\"\n }\n }\n }\n }\n}\n```\n\nIt silently accepts the difference, and sets both `search_analyzer`'s to `simple`:\n\n```\nGET t/_mapping/*/field/foo\n\n{\n \"t\": {\n \"mappings\": {\n \"y\": {\n \"foo\": {\n \"full_name\": \"foo\",\n \"mapping\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"simple\"\n }\n }\n }\n },\n \"x\": {\n \"foo\": {\n \"full_name\": \"foo\",\n \"mapping\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"simple\"\n }\n }\n }\n }\n }\n }\n}\n```\n\nHowever, a plain GET-mapping shows the original settings:\n\n```\nGET t/_mapping/\n\n{\n \"t\": {\n \"mappings\": {\n \"x\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"whitespace\"\n }\n }\n },\n \"y\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"simple\"\n }\n }\n }\n }\n }\n}\n```\n\nChanging the `search_analyzer` on the `x` type throws a conflict exception (correctly) unless I specify `update_all_types`:\n\n```\nPUT t/_mapping/x?update_all_types=1\n{\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\", \n \"search_analyzer\": \"pattern\"\n }\n }\n}\n```\n\nThis request correctly changes the `search_analyzer` for both fields:\n\n```\nGET t/_mapping/*/field/foo\n\n{\n \"t\": {\n \"mappings\": {\n \"y\": {\n \"foo\": {\n \"full_name\": \"foo\",\n \"mapping\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"pattern\"\n }\n }\n }\n },\n \"x\": {\n \"foo\": {\n \"full_name\": \"foo\",\n \"mapping\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"pattern\"\n }\n }\n }\n }\n }\n }\n}\n```\n\nBut it only changes the JSON mapping for the `x` type:\n\n```\nGET t/_mapping/\n\n{\n \"t\": {\n \"mappings\": {\n \"x\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"pattern\"\n }\n }\n },\n \"y\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"simple\"\n }\n }\n }\n }\n }\n}\n```\n",
"comments": [],
"number": 12753,
"title": "JSON mappings differ from real mappings"
} | {
"body": "This was a straight up bug found in #12753. If only one type existed,\nthe compatibility check for a new type was not strict, so changes to\nan updateable setting like search_analyzer got through (but only\npartially). This change fixes the check and adds tests (which were\npreviously a TODO). \n\nThis also fixes a bug in dynamic field creation which woudln't copy\nfielddata settings when duplicating a pre-existing field with the\nsame name.\n\ncloses #12753\n",
"number": 12779,
"review_comments": [],
"title": "Fix field type compatiblity check to work when only one previous type exists"
} | {
"commits": [
{
"message": "Mappings: Fix field type compatiblity check to work when only one previous type exists.\n\nThis was a straight up bug found in #12753. If only one type existed,\nthe compatibility check for a new type was not strict, so changes to\nan updateable setting like search_analyzer got through (but only\npartially). This change fixes the check and adds tests (which were\npreviously a TODO).\n\nThis also fixes a bug in dynamic field creation which woudln't copy\nfielddata settings when duplicating a pre-existing field with the\nsame name.\n\ncloses #12753"
}
],
"files": [
{
"diff": "@@ -629,6 +629,7 @@ private static ObjectMapper parseDynamicValue(final ParseContext context, Object\n // best-effort to not introduce a conflict\n if (builder instanceof StringFieldMapper.Builder) {\n StringFieldMapper.Builder stringBuilder = (StringFieldMapper.Builder) builder;\n+ stringBuilder.fieldDataSettings(existingFieldType.fieldDataType().getSettings());\n stringBuilder.store(existingFieldType.stored());\n stringBuilder.indexOptions(existingFieldType.indexOptions());\n stringBuilder.tokenized(existingFieldType.tokenized());\n@@ -638,6 +639,7 @@ private static ObjectMapper parseDynamicValue(final ParseContext context, Object\n stringBuilder.searchAnalyzer(existingFieldType.searchAnalyzer());\n } else if (builder instanceof NumberFieldMapper.Builder) {\n NumberFieldMapper.Builder<?,?> numberBuilder = (NumberFieldMapper.Builder<?, ?>) builder;\n+ numberBuilder.fieldDataSettings(existingFieldType.fieldDataType().getSettings());\n numberBuilder.store(existingFieldType.stored());\n numberBuilder.indexOptions(existingFieldType.indexOptions());\n numberBuilder.tokenized(existingFieldType.tokenized());",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -110,21 +110,21 @@ public void checkCompatibility(Collection<FieldMapper> newFieldMappers, boolean\n List<String> conflicts = new ArrayList<>();\n ref.get().checkTypeName(fieldMapper.fieldType(), conflicts);\n if (conflicts.isEmpty()) { // only check compat if they are the same type\n- boolean strict = ref.getNumAssociatedMappers() > 1 && updateAllTypes == false;\n+ boolean strict = updateAllTypes == false;\n ref.get().checkCompatibility(fieldMapper.fieldType(), conflicts, strict);\n }\n if (conflicts.isEmpty() == false) {\n- throw new IllegalArgumentException(\"Mapper for [\" + fieldMapper.fieldType().names().fullName() + \"] conflicts with existing mapping in other types\" + conflicts.toString());\n+ throw new IllegalArgumentException(\"Mapper for [\" + fieldMapper.fieldType().names().fullName() + \"] conflicts with existing mapping in other types:\\n\" + conflicts.toString());\n }\n }\n \n // field type for the index name must be compatible too\n- MappedFieldTypeReference indexNameRef = fullNameToFieldType.get(fieldMapper.fieldType().names().indexName());\n+ MappedFieldTypeReference indexNameRef = indexNameToFieldType.get(fieldMapper.fieldType().names().indexName());\n if (indexNameRef != null) {\n List<String> conflicts = new ArrayList<>();\n- ref.get().checkTypeName(fieldMapper.fieldType(), conflicts);\n+ indexNameRef.get().checkTypeName(fieldMapper.fieldType(), conflicts);\n if (conflicts.isEmpty()) { // only check compat if they are the same type\n- boolean strict = indexNameRef.getNumAssociatedMappers() > 1 && updateAllTypes == false;\n+ boolean strict = updateAllTypes == false;\n indexNameRef.get().checkCompatibility(fieldMapper.fieldType(), conflicts, strict);\n }\n if (conflicts.isEmpty() == false) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/FieldTypeLookup.java",
"status": "modified"
},
{
"diff": "@@ -274,7 +274,7 @@ public void checkCompatibility(MappedFieldType other, List<String> conflicts, bo\n conflicts.add(\"mapper [\" + names().fullName() + \"] has different analyzer\");\n }\n \n- if (!names().equals(other.names())) {\n+ if (!names().indexName().equals(other.names().indexName())) {\n conflicts.add(\"mapper [\" + names().fullName() + \"] has different index_name\");\n }\n if (Objects.equals(similarity(), other.similarity()) == false) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.mapper.core.StringFieldMapper;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n@@ -131,7 +132,67 @@ public void testAddExistingBridgeName() {\n }\n }\n \n- // TODO: add tests for validation\n+ public void testCheckCompatibilityNewField() {\n+ FakeFieldMapper f1 = new FakeFieldMapper(\"foo\", \"bar\");\n+ FieldTypeLookup lookup = new FieldTypeLookup();\n+ lookup.checkCompatibility(newList(f1), false);\n+ }\n+\n+ public void testCheckCompatibilityMismatchedTypes() {\n+ FieldMapper f1 = new FakeFieldMapper(\"foo\", \"bar\");\n+ FieldTypeLookup lookup = new FieldTypeLookup();\n+ lookup = lookup.copyAndAddAll(newList(f1));\n+\n+ MappedFieldType ft2 = FakeFieldMapper.makeOtherFieldType(\"foo\", \"foo\");\n+ FieldMapper f2 = new FakeFieldMapper(\"foo\", ft2);\n+ try {\n+ lookup.checkCompatibility(newList(f2), false);\n+ fail(\"expected type mismatch\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"cannot be changed from type [faketype] to [otherfaketype]\"));\n+ }\n+ // fails even if updateAllTypes == true\n+ try {\n+ lookup.checkCompatibility(newList(f2), true);\n+ fail(\"expected type mismatch\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"cannot be changed from type [faketype] to [otherfaketype]\"));\n+ }\n+ }\n+\n+ public void testCheckCompatibilityConflict() {\n+ FieldMapper f1 = new FakeFieldMapper(\"foo\", \"bar\");\n+ FieldTypeLookup lookup = new FieldTypeLookup();\n+ lookup = lookup.copyAndAddAll(newList(f1));\n+\n+ MappedFieldType ft2 = FakeFieldMapper.makeFieldType(\"foo\", \"bar\");\n+ ft2.setBoost(2.0f);\n+ FieldMapper f2 = new FakeFieldMapper(\"foo\", ft2);\n+ try {\n+ lookup.checkCompatibility(newList(f2), false);\n+ fail(\"expected conflict\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"to update [boost] across all types\"));\n+ }\n+ lookup.checkCompatibility(newList(f2), true); // boost is updateable, so ok if forcing\n+ // now with a non changeable setting\n+ MappedFieldType ft3 = FakeFieldMapper.makeFieldType(\"foo\", \"bar\");\n+ ft3.setStored(true);\n+ FieldMapper f3 = new FakeFieldMapper(\"foo\", ft3);\n+ try {\n+ lookup.checkCompatibility(newList(f3), false);\n+ fail(\"expected conflict\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"has different store values\"));\n+ }\n+ // even with updateAllTypes == true, incompatible\n+ try {\n+ lookup.checkCompatibility(newList(f3), true);\n+ fail(\"expected conflict\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"has different store values\"));\n+ }\n+ }\n \n public void testSimpleMatchIndexNames() {\n FakeFieldMapper f1 = new FakeFieldMapper(\"foo\", \"baz\");\n@@ -179,11 +240,19 @@ static class FakeFieldMapper extends FieldMapper {\n public FakeFieldMapper(String fullName, String indexName) {\n super(fullName, makeFieldType(fullName, indexName), makeFieldType(fullName, indexName), dummySettings, null, null);\n }\n+ public FakeFieldMapper(String fullName, MappedFieldType fieldType) {\n+ super(fullName, fieldType, fieldType, dummySettings, null, null);\n+ }\n static MappedFieldType makeFieldType(String fullName, String indexName) {\n FakeFieldType fieldType = new FakeFieldType();\n fieldType.setNames(new 
MappedFieldType.Names(indexName, indexName, fullName));\n return fieldType;\n }\n+ static MappedFieldType makeOtherFieldType(String fullName, String indexName) {\n+ OtherFakeFieldType fieldType = new OtherFakeFieldType();\n+ fieldType.setNames(new MappedFieldType.Names(indexName, indexName, fullName));\n+ return fieldType;\n+ }\n static class FakeFieldType extends MappedFieldType {\n public FakeFieldType() {}\n protected FakeFieldType(FakeFieldType ref) {\n@@ -198,6 +267,20 @@ public String typeName() {\n return \"faketype\";\n }\n }\n+ static class OtherFakeFieldType extends MappedFieldType {\n+ public OtherFakeFieldType() {}\n+ protected OtherFakeFieldType(OtherFakeFieldType ref) {\n+ super(ref);\n+ }\n+ @Override\n+ public MappedFieldType clone() {\n+ return new OtherFakeFieldType(this);\n+ }\n+ @Override\n+ public String typeName() {\n+ return \"otherfaketype\";\n+ }\n+ }\n @Override\n protected String contentType() { return null; }\n @Override",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java",
"status": "modified"
},
{
"diff": "@@ -57,7 +57,8 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.LAZY)));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.LAZY))\n+ .setUpdateAllTypes(true));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -72,7 +73,8 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", \"_parent\", \"type=parent\"));\n+ .addMapping(\"child\", \"_parent\", \"type=parent\")\n+ .setUpdateAllTypes(true));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -87,7 +89,8 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER)));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER))\n+ .setUpdateAllTypes(true));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -102,7 +105,8 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS)));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS))\n+ .setUpdateAllTypes(true));\n ensureGreen();\n \n // Need to do 2 separate refreshes, otherwise we have 1 segment and then we can't measure if global ordinals",
"filename": "core/src/test/java/org/elasticsearch/search/child/ParentFieldLoadingIT.java",
"status": "modified"
}
]
} |
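A hedged sketch of how the scenario from this issue maps onto the Java API exercised by the PR's test changes; `setUpdateAllTypes` is the flag the updated `ParentFieldLoadingIT` calls use, while the index, type and analyzer names simply mirror the REST examples in the issue report (the helper class and method names are made up for illustration):

```java
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.client.Client;

// Sketch only: with the stricter compatibility check, creating an index whose types
// disagree on an updateable setting such as search_analyzer is expected to be rejected
// unless update_all_types is set, in which case the later mapping wins for both types.
public class UpdateAllTypesExample {

    static CreateIndexResponse createIndexWithConflictingTypes(Client client, boolean updateAllTypes) {
        return client.admin().indices().prepareCreate("t")
                .addMapping("x", "{\"properties\":{\"foo\":{\"type\":\"string\","
                        + "\"analyzer\":\"standard\",\"search_analyzer\":\"whitespace\"}}}")
                .addMapping("y", "{\"properties\":{\"foo\":{\"type\":\"string\","
                        + "\"analyzer\":\"standard\",\"search_analyzer\":\"simple\"}}}")
                // equivalent of ?update_all_types=1 in the REST examples above
                .setUpdateAllTypes(updateAllTypes)
                .get();
    }
}
```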
{
"body": "For example, distribution/rpm does not have this set, so it inherits the value from the parent, and you see this wacky stuff in the rpm metadata:\n\n [rpm-info] Name : elasticsearch\n [rpm-info] Version : 2.0.0\n...\n [rpm-info] Summary : Elasticsearch RPM Distribution\n [rpm-info] Description :\n [rpm-info] Elasticsearch Parent POM\n",
"comments": [],
"number": 12550,
"title": "Make sure all distribution modules have description in pom.xml"
} | {
"body": "Also adds an explicit description the RPM package so it doesn't inherit the description from the POM.\n\nCloses #12550\n",
"number": 12771,
"review_comments": [],
"title": "Makes sure all POMs contain a description"
} | {
"commits": [
{
"message": "Packaging: Makes sure all POMs contain a description\n\nAdds an explicit description the RPM package so it doesn't inherit the description from the POM.\n\nCloses #12550\n\nAlso, modified descriptions for deb and rpm packages to be the same and to reference the documentation rather than listing features that are out of date."
}
],
"files": [
{
"diff": "@@ -4,6 +4,7 @@\n <artifactId>elasticsearch-dev-tools</artifactId>\n <version>2.0.0-beta1-SNAPSHOT</version>\n <name>Elasticsearch Build Resources</name>\n+ <description>Tools to assist in building and developing in the Elasticsearch project</description>\n <parent>\n <groupId>org.sonatype.oss</groupId>\n <artifactId>oss-parent</artifactId>",
"filename": "dev-tools/pom.xml",
"status": "modified"
},
{
"diff": "@@ -17,6 +17,7 @@\n But if you do this, then maven lifecycle does not execute any test (nor compile any test)\n -->\n <!--packaging>deb</packaging-->\n+ <description>The Debian distribution of Elasticsearch</description>\n \n <properties>\n <deb.sign>false</deb.sign>",
"filename": "distribution/deb/pom.xml",
"status": "modified"
},
{
"diff": "@@ -6,33 +6,4 @@ Depends: libc6, adduser\n Section: web\n Priority: optional\n Homepage: https://www.elastic.co/\n-Description: Open Source, Distributed, RESTful Search Engine\n- Elasticsearch is a distributed RESTful search engine built for the cloud.\n- .\n- Features include:\n- .\n- + Distributed and Highly Available Search Engine.\n- - Each index is fully sharded with a configurable number of shards.\n- - Each shard can have one or more replicas.\n- - Read / Search operations performed on either one of the replica shard.\n- + Multi Tenant with Multi Types.\n- - Support for more than one index.\n- - Support for more than one type per index.\n- - Index level configuration (number of shards, index storage, ...).\n- + Various set of APIs\n- - HTTP RESTful API\n- - Native Java API.\n- - All APIs perform automatic node operation rerouting.\n- + Document oriented\n- - No need for upfront schema definition.\n- - Schema can be defined per type for customization of the indexing process.\n- + Reliable, Asynchronous Write Behind for long term persistency.\n- + (Near) Real Time Search.\n- + Built on top of Lucene\n- - Each shard is a fully functional Lucene index\n- - All the power of Lucene easily exposed through simple\n- configuration/plugins.\n- + Per operation consistency\n- - Single document level operations are atomic, consistent, isolated and\n- durable.\n- + Open Source under the Apache License, version 2 (\"ALv2\").\n+Description: Elasticsearch is a distributed RESTful search engine built for the cloud. Reference documentation can be found at https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html and the 'Elasticsearch: The Definitive Guide' book can be found at https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html",
"filename": "distribution/deb/src/main/packaging/scripts/control",
"status": "modified"
},
{
"diff": "@@ -13,6 +13,7 @@\n <artifactId>elasticsearch</artifactId>\n <name>Elasticsearch RPM Distribution</name>\n <packaging>rpm</packaging>\n+ <description>The RPM distribution of Elasticsearch</description>\n \n <dependencies>\n <dependency>\n@@ -122,6 +123,7 @@\n <defaultUsername>root</defaultUsername>\n <defaultGroupname>root</defaultGroupname>\n <icon>${project.basedir}/src/main/resources/logo/elastic.gif</icon>\n+ <description>Elasticsearch is a distributed RESTful search engine built for the cloud. Reference documentation can be found at https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html and the 'Elasticsearch: The Definitive Guide' book can be found at https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html</description>\n <mappings>\n <!-- Add bin directory -->\n <mapping>",
"filename": "distribution/rpm/pom.xml",
"status": "modified"
},
{
"diff": "@@ -17,6 +17,7 @@\n But if you do this, then maven lifecycle does not execute any test (nor compile any test)\n -->\n <!--packaging>pom</packaging-->\n+ <description>The TAR distribution of Elasticsearch</description>\n \n <dependencies>\n <dependency>",
"filename": "distribution/tar/pom.xml",
"status": "modified"
},
{
"diff": "@@ -17,6 +17,7 @@\n But if you do this, then maven lifecycle does not execute any test (nor compile any test)\n -->\n <!--packaging>pom</packaging-->\n+ <description>The ZIP distribution of Elasticsearch</description>\n \n <dependencies>\n <dependency>",
"filename": "distribution/zip/pom.xml",
"status": "modified"
},
{
"diff": "@@ -11,6 +11,7 @@\n <packaging>pom</packaging>\n <name>Elasticsearch Plugin POM</name>\n <inceptionYear>2009</inceptionYear>\n+ <description>A parent project for Elasticsearch plugins</description>\n \n <parent>\n <groupId>org.elasticsearch</groupId>",
"filename": "plugins/pom.xml",
"status": "modified"
},
{
"diff": "@@ -4,6 +4,7 @@\n <artifactId>elasticsearch-rest-api-spec</artifactId>\n <version>2.0.0-beta1-SNAPSHOT</version>\n <name>Elasticsearch Rest API Spec</name>\n+ <description>REST API Specification and tests for use with the Elasticsearch REST Test framework</description>\n <parent>\n <groupId>org.sonatype.oss</groupId>\n <artifactId>oss-parent</artifactId>",
"filename": "rest-api-spec/pom.xml",
"status": "modified"
}
]
} |
{
"body": "I started 2 AWS instances using latest 2.0.0-SNAPSHOT I built locally.\n\nWhen starting with defaults, everything is fine.\n\nWhen changing `elasticsearch.yml` to:\n\n``` yml\ndiscovery.type: ec2\ndiscovery.ec2.tag.Name: dadoonet-test-2.0.0-SNAP\n```\n\nI get this error while launching:\n\n```\n[2015-08-04 16:37:18,363][ERROR][org.elasticsearch.bootstrap] Exception\nNoClassSettingsException[Failed to load class setting [discovery.type] with value [ec2]]; nested: ClassNotFoundException[org.elasticsearch.discovery.ec2.Ec2DiscoveryModule];\n at org.elasticsearch.common.settings.Settings.loadClass(Settings.java:604)\n at org.elasticsearch.common.settings.Settings.getAsClass(Settings.java:592)\n at org.elasticsearch.discovery.DiscoveryModule.spawnModules(DiscoveryModule.java:53)\n at org.elasticsearch.common.inject.ModulesBuilder.add(ModulesBuilder.java:44)\n at org.elasticsearch.node.Node.<init>(Node.java:177)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:157)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:177)\n at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:272)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)\nCaused by: java.lang.ClassNotFoundException: org.elasticsearch.discovery.ec2.Ec2DiscoveryModule\n at java.net.URLClassLoader$1.run(URLClassLoader.java:366)\n at java.net.URLClassLoader$1.run(URLClassLoader.java:355)\n at java.security.AccessController.doPrivileged(Native Method)\n at java.net.URLClassLoader.findClass(URLClassLoader.java:354)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:425)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:358)\n at org.elasticsearch.common.settings.Settings.loadClass(Settings.java:602)\n ... 
8 more\n```\n\nThe cloud plugin contains needed libs I think:\n\n```\n[ec2-user@ip-10-0-0-113 cloud-aws]$ ll\ntotal 5776\n-rw-rw-r-- 1 ec2-user ec2-user 513834 4 août 16:32 aws-java-sdk-core-1.10.0.jar\n-rw-rw-r-- 1 ec2-user ec2-user 2153751 4 août 16:32 aws-java-sdk-ec2-1.10.0.jar\n-rw-rw-r-- 1 ec2-user ec2-user 258130 4 août 16:32 aws-java-sdk-kms-1.10.0.jar\n-rw-rw-r-- 1 ec2-user ec2-user 563917 4 août 16:32 aws-java-sdk-s3-1.10.0.jar\n-rw-rw-r-- 1 ec2-user ec2-user 232771 4 août 16:32 commons-codec-1.6.jar\n-rw-rw-r-- 1 ec2-user ec2-user 62050 4 août 16:32 commons-logging-1.1.3.jar\n-rw-rw-r-- 1 ec2-user ec2-user 44814 4 août 16:32 elasticsearch-cloud-aws-2.0.0-SNAPSHOT.jar\n-rw-rw-r-- 1 ec2-user ec2-user 592008 4 août 16:32 httpclient-4.3.6.jar\n-rw-rw-r-- 1 ec2-user ec2-user 282793 4 août 16:32 httpcore-4.3.3.jar\n-rw-rw-r-- 1 ec2-user ec2-user 39815 4 août 16:32 jackson-annotations-2.5.0.jar\n-rw-rw-r-- 1 ec2-user ec2-user 1143162 4 août 16:32 jackson-databind-2.5.3.jar\n-rw-rw-r-- 1 ec2-user ec2-user 2253 4 août 16:32 plugin-descriptor.properties\n```\n\nAnd the cloud jar file contains the class we are looking for: `org/elasticsearch/discovery/ec2/Ec2DiscoveryModule.class`\n\n```\n[ec2-user@ip-10-0-0-113 cloud-aws]$ unzip -l elasticsearch-cloud-aws-2.0.0-SNAPSHOT.jar \nArchive: elasticsearch-cloud-aws-2.0.0-SNAPSHOT.jar\n Length Date Time Name\n--------- ---------- ----- ----\n 0 08-04-2015 17:40 META-INF/\n 525 08-04-2015 17:40 META-INF/MANIFEST.MF\n 0 08-04-2015 17:40 org/\n 0 08-04-2015 17:40 org/elasticsearch/\n 0 08-04-2015 17:40 org/elasticsearch/cloud/\n 0 08-04-2015 17:40 org/elasticsearch/cloud/aws/\n 0 08-04-2015 17:40 org/elasticsearch/cloud/aws/blobstore/\n 0 08-04-2015 17:40 org/elasticsearch/cloud/aws/network/\n 0 08-04-2015 17:40 org/elasticsearch/cloud/aws/node/\n 0 08-04-2015 17:40 org/elasticsearch/discovery/\n 0 08-04-2015 17:40 org/elasticsearch/discovery/ec2/\n 0 08-04-2015 17:40 org/elasticsearch/plugin/\n 0 08-04-2015 17:40 org/elasticsearch/plugin/cloud/\n 0 08-04-2015 17:40 org/elasticsearch/plugin/cloud/aws/\n 0 08-04-2015 17:40 org/elasticsearch/repositories/\n 0 08-04-2015 17:40 org/elasticsearch/repositories/s3/\n 8232 08-04-2015 17:40 org/elasticsearch/cloud/aws/AwsEc2Service.class\n 1641 08-04-2015 17:40 org/elasticsearch/cloud/aws/AwsModule.class\n 689 08-04-2015 17:40 org/elasticsearch/cloud/aws/AwsS3Service.class\n 1232 08-04-2015 17:40 org/elasticsearch/cloud/aws/AwsSigner.class\n 11319 08-04-2015 17:40 org/elasticsearch/cloud/aws/blobstore/DefaultS3OutputStream.class\n 7517 08-04-2015 17:40 org/elasticsearch/cloud/aws/blobstore/S3BlobContainer.class\n 6371 08-04-2015 17:40 org/elasticsearch/cloud/aws/blobstore/S3BlobStore.class\n 3246 08-04-2015 17:40 org/elasticsearch/cloud/aws/blobstore/S3OutputStream.class\n 8992 08-04-2015 17:40 org/elasticsearch/cloud/aws/InternalAwsS3Service.class\n 2076 08-04-2015 17:40 org/elasticsearch/cloud/aws/network/Ec2NameResolver$Ec2HostnameType.class\n 4004 08-04-2015 17:40 org/elasticsearch/cloud/aws/network/Ec2NameResolver.class\n 3668 08-04-2015 17:40 org/elasticsearch/cloud/aws/node/Ec2CustomNodeAttributes.class\n 1139 08-04-2015 17:40 org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider$1.class\n 1472 08-04-2015 17:40 org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider$HostType.class\n 9807 08-04-2015 17:40 org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider.class\n 2317 08-04-2015 17:40 org/elasticsearch/discovery/ec2/Ec2Discovery.class\n 1635 08-04-2015 17:40 
org/elasticsearch/discovery/ec2/Ec2DiscoveryModule.class\n 2631 08-04-2015 17:40 org/elasticsearch/plugin/cloud/aws/CloudAwsPlugin.class\n 6781 08-04-2015 17:40 org/elasticsearch/repositories/s3/S3Repository.class\n 1099 08-04-2015 17:40 org/elasticsearch/repositories/s3/S3RepositoryModule.class\n 0 08-04-2015 17:40 META-INF/maven/\n 0 08-04-2015 17:40 META-INF/maven/org.elasticsearch.plugin/\n 0 08-04-2015 17:40 META-INF/maven/org.elasticsearch.plugin/elasticsearch-cloud-aws/\n 2172 08-04-2015 16:39 META-INF/maven/org.elasticsearch.plugin/elasticsearch-cloud-aws/pom.xml\n 142 08-04-2015 17:40 META-INF/maven/org.elasticsearch.plugin/elasticsearch-cloud-aws/pom.properties\n--------- -------\n 88707 41 files\n```\n",
"comments": [
{
"body": "We need tests that test this stuff... that needs to block any bugfix here.\n\nI know if someone can fix it, they will feel pressure to push fix without tests. But we cannot develop software this way anymore!\n",
"created_at": "2015-08-04T17:08:41Z"
},
{
"body": "So I tried to run `Ec2DiscoveryITest` from my IDE using options:\n\n```\n-Dtests.thirdparty=true -Dtests.config=/Users/dpilato/Documents/Elasticsearch/work/aws/elasticsearch.yml -Des.logger.level=DEBUG\n```\n\nAnd it went well. My config is:\n\n```\ncloud:\n aws:\n access_key: \"KEY\"\n secret_key: \"SECRET\"\n\ndiscovery.type: ec2\n```\n\nSame when running from command line with `mvn test`, I don't get this error. \n\nSo I get only this when deploying the plugin in elasticsearch and trying to load AWS module.\n",
"created_at": "2015-08-04T17:24:19Z"
},
{
"body": "ok, I think one thing we can do is cutover the thirdparty tests to run in the integration test phase so they run realistically? they are definitely integration tests. then they can be run in jenkins and we have more coverage for the discovery plugins: currently their integration tests only check that they were installed.\n",
"created_at": "2015-08-04T18:00:42Z"
},
{
"body": "So the problem is coming from the changes in ClassLoading.\n\nThe Discovery Module is started within a Node using the Node settings (and the Node classloader aka elasticsearch core classloader).\nWhen the plugin is loaded, it's now loaded within a child classloader.\n\nDiscovery then tries to call something like `Classes.loadClass(settings.getClassLoader(), ...);`.\n\nsettings.getClassLoader() is actually elasticsearch core Classloader, not the plugin one.\n\nSo we need somehow to tell the discovery module that he should use the plugin Classloader if any.\n\nI'm not sure yet how to do that.\n\nI pushed some changes for now in my branch https://github.com/dadoonet/elasticsearch/tree/plugins/discovery but it does not fix the issue. It only falls back to the default discovery instead of failling elasticsearch to start.\n",
"created_at": "2015-08-04T18:51:39Z"
},
{
"body": "To reproduce the issue:\n\nBuild aws or azure plugin:\n\n```\ncd plugins/cloud-aws\nmvn clean install -DskipTests\n```\n\nBuild core:\n\n```\ncd core\nmvn clean install -DskipTests\n```\n\nBuild a distribution: \n\n```\ncd distribution/tar\nmvn clean install -DskipTests\ncd target/releases\ntar xzf elasticsearch-2.0.0-SNAPSHOT.tar.gz\ncd elasticsearch-2.0.0-SNAPSHOT\nbin/plugin install cloud-aws --url file:../../../../../plugins/cloud-aws/target/releases/elasticsearch-cloud-aws-2.0.0-SNAPSHOT.zip\n```\n\nChange `config/elasticsearch.yml` to:\n\n``` yml\ndiscovery.type: ec2\n```\n\nThen start elasticsearch.\n\nIf you want to iterate other the code, just in core run `mvn package` and then copy the elasticsearch jar file from target to `distribution/tar/target/releases/elasticsearch-2.0.0-SNAPSHOT/lib`.\n",
"created_at": "2015-08-04T19:08:49Z"
}
],
"number": 12643,
"title": "Unable to start AWS discovery with 2.0.0-SNAPSHOT"
} | {
"body": "This method on settings loaded a class, based on a setting value, using the default classloader. It had all kinds of leniency in how the classname was found, and simply cannot work with plugins having isolated classloaders.\n\nThis change removes that method. Some of the uses of it were for custom extension points, like custom repository or discovery types. A lot were just there to plugin mock implementations for tests. For the settings that were legitimate, all now support plugins adding the given setting via onModule. For those that were specific to tests for mocks, they now use Classes.loadClass (a helper around Class.forName). This is a temporary measure until (in a future PR) tests can change the implementation via package private statics.\n\nI also removed a number of unnecessary intermediate modules, added a \"jvm-example\" plugin that can be filled in in the future as a smoke test for breaking plugins, and gave some documentation to \"spawn\" modules interface.\n\ncloses #12643\ncloses #12656\n",
"number": 12744,
"review_comments": [
{
"body": "sorry... -1 revert the name\n",
"created_at": "2015-08-09T19:05:08Z"
},
{
"body": "this won't play along with module pre processing... so creating modules directly will actually be broken\n",
"created_at": "2015-08-09T19:06:30Z"
},
{
"body": "agree... don't use `SpawnModules` for extensibility... extensibility is not the goal of this interface. The goal of this interface is modularization (enabling modules to be composed of other modules) and to play along with the module pre processing phase.\n",
"created_at": "2015-08-09T19:08:13Z"
},
{
"body": "-1 on deprecation at this point, as the logic behind it is broken\n",
"created_at": "2015-08-09T19:08:42Z"
},
{
"body": "The logic is not broken. This interface is broken, for the reasons described in the javadoc. All this interface does is make for (1) more classes (2) more obfuscation and (3) possibility for broken pluggability APIs. If this interface stays around, nothing stops new extension points from being added which are broken again by trying to plugin a module \"implementation\", which as explained, simply cannot work with plugin isolation.\n\nI don't know what you mean by \"module pre processing\", but extension points should be clear, and plugins should bind ancillary classes they need in their own modules. For example, if a plugin wants to bind a custom transport protocol, it should add the transport protocol to TransportModule with onModule, and any ancillary classes it needs bound can be added in its own module. There is no need to \"group\" these bindings together in one module.\n",
"created_at": "2015-08-09T21:43:47Z"
},
{
"body": "> (1) more classes\n\nsure, I can live with a proposed alternative that will remove the class, but quite frankly, I don't care if we have an extra class here\n\n> (2) more obfuscation\n\nnote sure what's obfuscated here... it's as simple as \"modules that implement this depend on other modules\"\n\n> (3) possibility for broken pluggability APIs\n\nGive me any extensible module that doesn't implement this interface and I'll show you how I can implement it in a broken way. As I mentioned, this interface is not for extensibility, but for modularity. Abusing things can be done everywhere... and we need to make sure that we carefully define the contract for APIs. And documentation **is** part of the contract - I'll be happy to see an explanation in the javadoc that states 1) what this interface is for and all about in general, 2) that you should not abuse it for extensibility and if you do want extensibility point to a top package level javadoc that explains what is the right way of opening an extension point.\n\n> I don't know what you mean by \"module pre processing\"\n\nhttps://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/inject/PreProcessModule.java\n\n> but extension points should be clear, and plugins should bind ancillary classes they need in their own modules.\n\ntotally agree, and that's also the case with the `SpanwModules` itself. The only thing that happens here in addition is the ability to group modules per functionality/layer if you will. To your example, if I'm replacing the `TransportModule`, I don't want to declare that I'm replacing it in one place and in another place define the sub-modules of the new transport module. I want to simply declare \"this is the new transport module\" and it comes with whatever sub-modules that it defines - one place - clean. On the same note, if you choose to remove/replace a module, I don't want to think carefully what other modules I need to replace along with it - that what you'll end up with if you flatten all the modules - there're no clear module boundaries and dependencies. \n\nThe problem that you're referring to in the javadoc (and I generally agree that there is a problem here) should be fixed, but in the appropriate place and the spawn modules should not be \"blamed\" for it. The problem here is that there needs to be a well defined function for replacing modules, on the container level.. somewhere you can simply call:\n\n```\nreplace(TransportModule.class).with(MyTransportModule.class)\n```\n\nand the implementation should take care of unregistering the sub-modules of the prev. one and registering the sub-modules of the new one. \n\nRemoving this interface will prevent defining well define & separated modules and promote module spaghetti - where every module is a top level module... without proper modularisation.\n\nI'd remove this change from this PR and I'm all for considering another PR that really tackles the problem here, providing a good alternative that still promotes modularity. Deprecating this interface does not provide a better/safer solution, but instead forces to flatten all modules across the board, which is really bad IMO.\n",
"created_at": "2015-08-10T00:45:30Z"
},
{
"body": "> I'd remove this change from this PR and I'm all for considering another PR that really tackles the problem here, providing a good alternative that still promotes modularity.\n\nuh, when you find a crappy/troublesome api or functionality, there need not always be a replacement. Sometimes things are just a bad idea and should be removed completely.\n",
"created_at": "2015-08-10T01:38:26Z"
},
{
"body": "> uh, when you find a crappy/troublesome api or functionality, there need not always be a replacement. Sometimes things are just a bad idea and should be removed completely.\n\nI wholeheartedly agree with you. And to add to that, if removing the crappy api introduces another crappy thing (api/design/etc..), then you need to rethink the change and sometimes avoid it until you come up with a better alternative (and that's where I believe we're standing on this issue)\n",
"created_at": "2015-08-10T07:18:10Z"
},
{
"body": "> if removing the crappy api introduces another crappy thing (api/design/etc..)\n\nBut that's not the case here. Flattening modules is a good thing. It will make dependencies of services/components much less \"spaghetti\". Obviously I can't do flattening/cleanup in this PR. But the deprecation is here to warn against adding more of this craziness.\n\n> The problem here is that there needs to be a well defined function for replacing modules, on the container level..\n\nI disagree. There should be no need to replace a module, there is no need to make every part of Elasticsearch pluggable. We should have very specific pieces that are overrideable. Using the DiscoveryModule as an example, we have \"local\" and \"zen\" discovery types that are \"builtin\" to ES. These are always available, and zen has a number of associated classes that are bound in discovery module. However, if a plugin wants to add a new discovery type, the Discovery class is the only thing they are plugging in, and that is registerable with the DiscoveryModule. If they require extra bindings, they can do so in their own module, which is added as part of their plugin. All of our extension points should work this way. \n",
"created_at": "2015-08-10T07:32:37Z"
},
{
"body": "I'll be very happy if we find ourselves in a place where we simply don't need to replace a module. I didn't check the full code base to validate that indeed that works. I just know that I had to extend a module once for testability (i.e. exposing the extensibility for tests only, without exposing the extensibility for plugins). \n\nIn any case, that doesn't change the reasoning behind properly modularizing the code base such that, 1) each module is the only entity responsible for its sub modules, 2) modules can be extensible at any level\n",
"created_at": "2015-08-10T07:45:24Z"
},
{
"body": "> I wholeheartedly agree with you. And to add to that, if removing the crappy api introduces another crappy thing (api/design/etc..), then you need to rethink the change and sometimes avoid it until you come up with a better alternative (and that's where I believe we're standing on this issue)\n\nBut its not removed here, only deprecated. You just have sour grapes because we are telling you your code fucking sucks. Thats fine, I'm -1 to push this fix _without_ the deprecation so we will figure this shit out. Great because its a blocker issue.\n\nAnd guess what: I won't back down. Ever.\n",
"created_at": "2015-08-10T08:30:46Z"
},
{
"body": "> But its not removed here, only deprecated.\n\ndeprecation is not the right mechanism as IMO we're deprecating the wrong thing here. It'd be like me deprecating the `Engine` just because there's an issue with index recovery. We need a separate issue for this... and sure, label it for 2.0... but it's not a blocker for beta1.\n\n> You just have sour grapes because we are telling you your code fucking sucks.\n\n:)\n\nIt's not my code... it's our code. I don't take this discussion personal at all and I have no problem with anyone saying our code sucks (you know it oh too well). I believe when one points to code that sucks it's awesome and it's even more awesome to also provide/suggest a better alternative. And yes.. I do find the discussion around this issue awesome by itself, regardless of what ends up being pushed... simply because I know it eventually leads to a better code base.\n\nThis PR is not a defining moment in anyone's life. Like you, I have my opinions when it comes to code. Sometimes it's aligned with others, sometimes it's not, whatever it is, we always find a way to agree and move forward, simply because we all come with good intentions to improve things. You're the same... I don't expect you (or anyone working on elasticsearch) to be different - and that, my dear friend, is what makes you awesome. We discuss and we resolve thing by agreeing to agree, not by agreeing to disagree - that's a monumental pillar in our culture.\n\nBottom line, this PR introduces changes I'm -1 on. These changes I'm \"-1'ing\" are not blocking the actual bug fix. I suggest to remove the controversial part of the PR (which... again... does not block the bug fix) and open a separate issue to discuss the suggested \"cleanup\".\n\nLets see what others have to say about it... and act on additional feedback.\n",
"created_at": "2015-08-10T09:54:28Z"
},
{
"body": "yeah - no need to rename this\n",
"created_at": "2015-08-10T09:55:51Z"
},
{
"body": "can you elaborate on this I don't understand that problem necessarily?\n",
"created_at": "2015-08-10T09:56:33Z"
},
{
"body": "I think we should reduce the scope here, remove the deprecation and open another issue to remove `SpawnModules` altogether. Elasticsearch core has gone through major refactorings to reduce the massive over-abstraction and over-engineering that we know as legacy code. Every project will get there and following the path we went down the last couple of weeks we should really make thinks clear and simple. Simple here means we need to add clean extension points without any modules depending on modules etc. you can extend a module and that's it. We should flatten out these extension points and don't allow for this recursive abstractions. It's hard enough to support one layer, multiple layers just make things harder and harder. \n",
"created_at": "2015-08-10T11:46:48Z"
},
{
"body": "+1 on moving this to another issue and having the discussion there\n",
"created_at": "2015-08-10T11:53:06Z"
},
{
"body": "we can leave is a TODO but I think we need a way to plug this in since there are quite some of them out there. I wonder if we can just have a `Map<String, Class<? extends ShardsAllocator>` here and a register method?\n",
"created_at": "2015-08-10T12:18:09Z"
},
{
"body": "just use `Collections.unmodifiableList()`\n",
"created_at": "2015-08-10T12:25:25Z"
},
{
"body": "can we have constants here?\n",
"created_at": "2015-08-10T12:26:25Z"
},
{
"body": "s/blah blah blah/Leniency is the root of all evil/\n",
"created_at": "2015-08-10T12:32:25Z"
},
{
"body": "I created #12781 to do this as a follow up.\n",
"created_at": "2015-08-10T19:54:22Z"
},
{
"body": "Nice.\n",
"created_at": "2015-08-10T19:56:50Z"
},
{
"body": "+1 to break out deprecation to a separate issue\n\nI see plenty of good fixes here, which everyone seems to agree on, and then also plenty of controversy around the deprecation, so I think it makes sense to split it off.\n",
"created_at": "2015-08-10T21:10:06Z"
}
],
"title": "Remove Settings.getAsClass"
} | {
"commits": [
{
"message": "This method on settings loaded a class, based on a setting value, using\nthe default classloader. It had all kinds of leniency in how the\nclassname was found, and simply cannot work with plugins having isolated\nclassloaders.\n\nThis change removes that method. Some of the uses of it were for custom\nextension points, like custom repository or discovery types. A lot were\njust there to plugin mock implementations for tests. For the settings\nthat were legitimate, all now support plugins adding the given setting\nvia onModule. For those that were specific to tests for mocks, they now\nuse Classes.loadClass (a helper around Class.forName). This is a\ntemporary measure until (in a future PR) tests can change the\nimplementation via package private statics.\n\nI also removed a number of unnecessary intermediate modules, added a\n\"jvm-example\" plugin that can be filled in in the future as a smoke test\nfor breaking plugins, and gave some documentation to \"spawn\" modules\ninterface.\n\ncloses #12643\ncloses #12656"
}
],
"files": [
{
"diff": "@@ -19,17 +19,13 @@\n \n package org.elasticsearch.cache.recycler;\n \n-import com.google.common.collect.ImmutableList;\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n-import org.elasticsearch.common.inject.Module;\n-import org.elasticsearch.common.inject.SpawnModules;\n import org.elasticsearch.common.settings.Settings;\n \n-import static org.elasticsearch.common.inject.Modules.createModule;\n-\n /**\n */\n-public class PageCacheRecyclerModule extends AbstractModule implements SpawnModules {\n+public class PageCacheRecyclerModule extends AbstractModule {\n \n public static final String CACHE_IMPL = \"cache.recycler.page_cache_impl\";\n \n@@ -41,10 +37,12 @@ public PageCacheRecyclerModule(Settings settings) {\n \n @Override\n protected void configure() {\n- }\n-\n- @Override\n- public Iterable<? extends Module> spawnModules() {\n- return ImmutableList.of(createModule(settings.getAsClass(CACHE_IMPL, DefaultPageCacheRecyclerModule.class), settings));\n+ String impl = settings.get(CACHE_IMPL);\n+ if (impl == null) {\n+ bind(PageCacheRecycler.class).asEagerSingleton();\n+ } else {\n+ Class<? extends PageCacheRecycler> implClass = Classes.loadClass(getClass().getClassLoader(), impl);\n+ bind(PageCacheRecycler.class).to(implClass).asEagerSingleton();\n+ }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cache/recycler/PageCacheRecyclerModule.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.cluster.routing.allocation.AllocationModule;\n import org.elasticsearch.cluster.service.InternalClusterService;\n import org.elasticsearch.cluster.settings.ClusterDynamicSettingsModule;\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.Module;\n import org.elasticsearch.common.inject.SpawnModules;\n@@ -88,7 +89,12 @@ protected void configure() {\n bind(NodeMappingRefreshAction.class).asEagerSingleton();\n bind(MappingUpdatedAction.class).asEagerSingleton();\n \n- bind(ClusterInfoService.class).to(settings.getAsClass(CLUSTER_SERVICE_IMPL, InternalClusterInfoService.class)).asEagerSingleton();\n+ String impl = settings.get(CLUSTER_SERVICE_IMPL);\n+ Class<? extends ClusterInfoService> implClass = InternalClusterInfoService.class;\n+ if (impl != null) {\n+ implClass = Classes.loadClass(getClass().getClassLoader(), impl);\n+ }\n+ bind(ClusterInfoService.class).to(implClass).asEagerSingleton();\n \n Multibinder<IndexTemplateFilter> mbinder = Multibinder.newSetBinder(binder(), IndexTemplateFilter.class);\n for (Class<? extends IndexTemplateFilter> indexTemplateFilter : indexTemplateFilters) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterModule.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNodeFilters;\n import org.elasticsearch.cluster.routing.HashFunction;\n import org.elasticsearch.cluster.routing.Murmur3HashFunction;\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n@@ -245,10 +246,11 @@ private IndexMetaData(String index, long version, State state, Settings settings\n } else {\n this.minimumCompatibleLuceneVersion = null;\n }\n- final Class<? extends HashFunction> hashFunctionClass = settings.getAsClass(SETTING_LEGACY_ROUTING_HASH_FUNCTION, null);\n- if (hashFunctionClass == null) {\n+ final String hashFunction = settings.get(SETTING_LEGACY_ROUTING_HASH_FUNCTION);\n+ if (hashFunction == null) {\n routingHashFunction = MURMUR3_HASH_FUNCTION;\n } else {\n+ final Class<? extends HashFunction> hashFunctionClass = Classes.loadClass(getClass().getClassLoader(), hashFunction);\n try {\n routingHashFunction = hashFunctionClass.newInstance();\n } catch (InstantiationException | IllegalAccessException e) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.cluster.routing.HashFunction;\n import org.elasticsearch.cluster.routing.SimpleHashFunction;\n import org.elasticsearch.cluster.routing.UnassignedInfo;\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -66,14 +67,18 @@ public MetaDataIndexUpgradeService(Settings settings, ScriptService scriptServic\n // the hash function package has changed we replace the two hash functions if their fully qualified name is used.\n if (hasCustomPre20HashFunction) {\n switch (pre20HashFunctionName) {\n+ case \"Simple\":\n+ case \"simple\":\n case \"org.elasticsearch.cluster.routing.operation.hash.simple.SimpleHashFunction\":\n pre20HashFunction = SimpleHashFunction.class;\n break;\n+ case \"Djb\":\n+ case \"djb\":\n case \"org.elasticsearch.cluster.routing.operation.hash.djb.DjbHashFunction\":\n pre20HashFunction = DjbHashFunction.class;\n break;\n default:\n- pre20HashFunction = settings.getAsClass(DEPRECATED_SETTING_ROUTING_HASH_FUNCTION, DjbHashFunction.class, \"org.elasticsearch.cluster.routing.\", \"HashFunction\");\n+ pre20HashFunction = Classes.loadClass(getClass().getClassLoader(), pre20HashFunctionName);\n }\n } else {\n pre20HashFunction = DjbHashFunction.class;",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster.routing.allocation.allocator;\n \n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n@@ -64,8 +65,7 @@ private Class<? extends ShardsAllocator> loadShardsAllocator(Settings settings)\n logger.warn(\"{} allocator has been removed in 2.0 using {} instead\", EVEN_SHARD_COUNT_ALLOCATOR_KEY, BALANCED_ALLOCATOR_KEY);\n shardsAllocator = BalancedShardsAllocator.class;\n } else {\n- shardsAllocator = settings.getAsClass(TYPE_KEY, BalancedShardsAllocator.class,\n- \"org.elasticsearch.cluster.routing.allocation.allocator.\", \"Allocator\");\n+ throw new IllegalArgumentException(\"Unknown ShardsAllocator type [\" + type + \"]\");\n }\n return shardsAllocator;\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/ShardsAllocatorModule.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.common;\n \n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.bootstrap.Elasticsearch;\n import org.elasticsearch.common.inject.Module;\n import org.elasticsearch.common.settings.NoClassSettingsException;\n \n@@ -81,14 +83,6 @@ public static String getPackageName(Class<?> clazz) {\n return (lastDotIndex != -1 ? className.substring(0, lastDotIndex) : \"\");\n }\n \n- public static String getPackageNameNoDomain(Class<?> clazz) {\n- String fullPackage = getPackageName(clazz);\n- if (fullPackage.startsWith(\"org.\") || fullPackage.startsWith(\"com.\") || fullPackage.startsWith(\"net.\")) {\n- return fullPackage.substring(4);\n- }\n- return fullPackage;\n- }\n-\n public static boolean isInnerClass(Class<?> clazz) {\n return !Modifier.isStatic(clazz.getModifiers())\n && clazz.getEnclosingClass() != null;\n@@ -99,47 +93,13 @@ public static boolean isConcrete(Class<?> clazz) {\n return !clazz.isInterface() && !Modifier.isAbstract(modifiers);\n }\n \n- public static <T> Class<? extends T> loadClass(ClassLoader classLoader, String className, String prefixPackage, String suffixClassName) {\n- return loadClass(classLoader, className, prefixPackage, suffixClassName, null);\n- }\n-\n- @SuppressWarnings({\"unchecked\"})\n- public static <T> Class<? extends T> loadClass(ClassLoader classLoader, String className, String prefixPackage, String suffixClassName, String errorPrefix) {\n- Throwable t = null;\n- String[] classNames = classNames(className, prefixPackage, suffixClassName);\n- for (String fullClassName : classNames) {\n- try {\n- return (Class<? extends T>) classLoader.loadClass(fullClassName);\n- } catch (ClassNotFoundException ex) {\n- t = ex;\n- } catch (NoClassDefFoundError er) {\n- t = er;\n- }\n- }\n- if (errorPrefix == null) {\n- errorPrefix = \"failed to load class\";\n- }\n- throw new NoClassSettingsException(errorPrefix + \" with value [\" + className + \"]; tried \" + Arrays.toString(classNames), t);\n- }\n-\n- private static String[] classNames(String className, String prefixPackage, String suffixClassName) {\n- String prefixValue = prefixPackage;\n- int packageSeparator = className.lastIndexOf('.');\n- String classNameValue = className;\n- // If class name contains package use it as package prefix instead of specified default one\n- if (packageSeparator > 0) {\n- prefixValue = className.substring(0, packageSeparator + 1);\n- classNameValue = className.substring(packageSeparator + 1);\n+ public static <T> Class<? extends T> loadClass(ClassLoader classLoader, String className) {\n+ try {\n+ return (Class<? extends T>) classLoader.loadClass(className);\n+ } catch (ClassNotFoundException|NoClassDefFoundError e) {\n+ throw new ElasticsearchException(\"failed to load class [\" + className + \"]\", e);\n }\n- return new String[]{\n- className,\n- prefixValue + Strings.capitalize(toCamelCase(classNameValue)) + suffixClassName,\n- prefixValue + toCamelCase(classNameValue) + \".\" + Strings.capitalize(toCamelCase(classNameValue)) + suffixClassName,\n- prefixValue + toCamelCase(classNameValue).toLowerCase(Locale.ROOT) + \".\" + Strings.capitalize(toCamelCase(classNameValue)) + suffixClassName,\n- };\n }\n \n- private Classes() {\n-\n- }\n+ private Classes() {}\n }",
"filename": "core/src/main/java/org/elasticsearch/common/Classes.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,15 @@\n package org.elasticsearch.common.inject;\n \n /**\n+ * This interface can be added to a Module to spawn sub modules. DO NOT USE.\n *\n+ * This is fundamentally broken.\n+ * <ul>\n+ * <li>If you have a plugin with multiple modules, return all the modules at once.</li>\n+ * <li>If you are trying to make the implementation of a module \"pluggable\", don't do it.\n+ * This is not extendable because custom implementations (using onModule) cannot be\n+ * registered before spawnModules() is called.</li>\n+ * </ul>\n */\n public interface SpawnModules {\n ",
"filename": "core/src/main/java/org/elasticsearch/common/inject/SpawnModules.java",
"status": "modified"
},
{
"diff": "@@ -533,78 +533,6 @@ public SizeValue getAsSize(String[] settings, SizeValue defaultValue) throws Set\n return parseSizeValue(get(settings), defaultValue);\n }\n \n- /**\n- * Returns the setting value (as a class) associated with the setting key. If it does not exists,\n- * returns the default class provided.\n- *\n- * @param setting The setting key\n- * @param defaultClazz The class to return if no value is associated with the setting\n- * @param <T> The type of the class\n- * @return The class setting value, or the default class provided is no value exists\n- * @throws org.elasticsearch.common.settings.NoClassSettingsException Failure to load a class\n- */\n- @SuppressWarnings({\"unchecked\"})\n- public <T> Class<? extends T> getAsClass(String setting, Class<? extends T> defaultClazz) throws NoClassSettingsException {\n- String sValue = get(setting);\n- if (sValue == null) {\n- return defaultClazz;\n- }\n- try {\n- return (Class<? extends T>) getClassLoader().loadClass(sValue);\n- } catch (ClassNotFoundException e) {\n- throw new NoClassSettingsException(\"Failed to load class setting [\" + setting + \"] with value [\" + sValue + \"]\", e);\n- }\n- }\n-\n- /**\n- * Returns the setting value (as a class) associated with the setting key. If the value itself fails to\n- * represent a loadable class, the value will be appended to the <tt>prefixPackage</tt> and suffixed with the\n- * <tt>suffixClassName</tt> and it will try to be loaded with it.\n- *\n- * @param setting The setting key\n- * @param defaultClazz The class to return if no value is associated with the setting\n- * @param prefixPackage The prefix package to prefix the value with if failing to load the class as is\n- * @param suffixClassName The suffix class name to prefix the value with if failing to load the class as is\n- * @param <T> The type of the class\n- * @return The class represented by the setting value, or the default class provided if no value exists\n- * @throws org.elasticsearch.common.settings.NoClassSettingsException Failure to load the class\n- */\n- @SuppressWarnings({\"unchecked\"})\n- public <T> Class<? extends T> getAsClass(String setting, Class<? extends T> defaultClazz, String prefixPackage, String suffixClassName) throws NoClassSettingsException {\n- String sValue = get(setting);\n- if (sValue == null) {\n- return defaultClazz;\n- }\n- String fullClassName = sValue;\n- try {\n- return (Class<? extends T>) getClassLoader().loadClass(fullClassName);\n- } catch (ClassNotFoundException e) {\n- String prefixValue = prefixPackage;\n- int packageSeparator = sValue.lastIndexOf('.');\n- if (packageSeparator > 0) {\n- prefixValue = sValue.substring(0, packageSeparator + 1);\n- sValue = sValue.substring(packageSeparator + 1);\n- }\n- fullClassName = prefixValue + Strings.capitalize(toCamelCase(sValue)) + suffixClassName;\n- try {\n- return (Class<? extends T>) getClassLoader().loadClass(fullClassName);\n- } catch (ClassNotFoundException e1) {\n- return loadClass(prefixValue, sValue, suffixClassName, setting);\n- } catch (NoClassDefFoundError e1) {\n- return loadClass(prefixValue, sValue, suffixClassName, setting);\n- }\n- }\n- }\n-\n- private <T> Class<? extends T> loadClass(String prefixValue, String sValue, String suffixClassName, String setting) {\n- String fullClassName = prefixValue + toCamelCase(sValue).toLowerCase(Locale.ROOT) + \".\" + Strings.capitalize(toCamelCase(sValue)) + suffixClassName;\n- try {\n- return (Class<? 
extends T>) getClassLoader().loadClass(fullClassName);\n- } catch (ClassNotFoundException e2) {\n- throw new NoClassSettingsException(\"Failed to load class setting [\" + setting + \"] with value [\" + get(setting) + \"]\", e2);\n- }\n- }\n-\n /**\n * The values associated with a setting prefix as an array. The settings array is in the format of:\n * <tt>settingPrefix.[index]</tt>.\n@@ -858,6 +786,43 @@ public String remove(String key) {\n return map.remove(key);\n }\n \n+ /**\n+ * Removes the specified value from the given key.\n+ * Returns true if the value was found and removed, false otherwise.\n+ */\n+ public boolean removeArrayElement(String key, String value) {\n+ // TODO: this is too crazy, we should just have a multimap...\n+ String oldValue = get(key);\n+ if (oldValue != null) {\n+ // single valued case\n+ boolean match = oldValue.equals(value);\n+ if (match) {\n+ remove(key);\n+ }\n+ return match;\n+ }\n+\n+ // multi valued\n+ int i = 0;\n+ while (true) {\n+ String toCheck = map.get(key + '.' + i++);\n+ if (toCheck == null) {\n+ return false;\n+ } else if (toCheck.equals(value)) {\n+ break;\n+ }\n+ }\n+ // found the value, shift values after it back one index\n+ int j = i + 1;\n+ while (true) {\n+ String toMove = map.get(key + '.' + j++);\n+ if (toMove == null) {\n+ return true;\n+ }\n+ put(key + '.' + i++, toMove);\n+ }\n+ }\n+\n /**\n * Returns a setting value based on the setting key.\n */\n@@ -1028,6 +993,26 @@ public Builder putArray(String setting, String... values) {\n return this;\n }\n \n+ /**\n+ * Sets the setting as an array of values, but keeps existing elements for the key.\n+ */\n+ public Builder extendArray(String setting, String... values) {\n+ // check for a singular (non array) value\n+ String oldSingle = remove(setting);\n+ // find the highest array index\n+ int counter = 0;\n+ while (map.containsKey(setting + '.' + counter)) {\n+ ++counter;\n+ }\n+ if (oldSingle != null) {\n+ put(setting + '.' + counter++, oldSingle);\n+ }\n+ for (String value : values) {\n+ put(setting + '.' + counter++, value);\n+ }\n+ return this;\n+ }\n+\n /**\n * Sets the setting group.\n */",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -19,17 +19,15 @@\n \n package org.elasticsearch.common.util;\n \n-import com.google.common.collect.ImmutableList;\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n-import org.elasticsearch.common.inject.Module;\n-import org.elasticsearch.common.inject.SpawnModules;\n import org.elasticsearch.common.settings.Settings;\n \n import static org.elasticsearch.common.inject.Modules.createModule;\n \n /**\n */\n-public class BigArraysModule extends AbstractModule implements SpawnModules {\n+public class BigArraysModule extends AbstractModule {\n \n public static final String IMPL = \"common.util.big_arrays_impl\";\n \n@@ -41,10 +39,12 @@ public BigArraysModule(Settings settings) {\n \n @Override\n protected void configure() {\n- }\n-\n- @Override\n- public Iterable<? extends Module> spawnModules() {\n- return ImmutableList.of(createModule(settings.getAsClass(IMPL, DefaultBigArraysModule.class), settings));\n+ String impl = settings.get(IMPL);\n+ if (impl == null) {\n+ bind(BigArrays.class).asEagerSingleton();\n+ } else {\n+ Class<? extends BigArrays> implClass = Classes.loadClass(getClass().getClassLoader(), impl);\n+ bind(BigArrays.class).to(implClass).asEagerSingleton();\n+ }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/util/BigArraysModule.java",
"status": "modified"
},
{
"diff": "@@ -19,42 +19,71 @@\n \n package org.elasticsearch.discovery;\n \n-import com.google.common.collect.ImmutableList;\n+import com.google.common.collect.Lists;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.inject.AbstractModule;\n-import org.elasticsearch.common.inject.Module;\n-import org.elasticsearch.common.inject.Modules;\n-import org.elasticsearch.common.inject.SpawnModules;\n+import org.elasticsearch.common.inject.multibindings.Multibinder;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.discovery.local.LocalDiscoveryModule;\n-import org.elasticsearch.discovery.zen.ZenDiscoveryModule;\n+import org.elasticsearch.discovery.local.LocalDiscovery;\n+import org.elasticsearch.discovery.zen.ZenDiscovery;\n+import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n+import org.elasticsearch.discovery.zen.ping.ZenPingService;\n+import org.elasticsearch.discovery.zen.ping.unicast.UnicastHostsProvider;\n+\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n \n /**\n- *\n+ * A module for loading classes for node discovery.\n */\n-public class DiscoveryModule extends AbstractModule implements SpawnModules {\n-\n- private final Settings settings;\n+public class DiscoveryModule extends AbstractModule {\n \n public static final String DISCOVERY_TYPE_KEY = \"discovery.type\";\n \n+ private final Settings settings;\n+ private final List<Class<? extends UnicastHostsProvider>> unicastHostProviders = Lists.newArrayList();\n+ private final Map<String, Class<? extends Discovery>> discoveryTypes = new HashMap<>();\n+\n public DiscoveryModule(Settings settings) {\n this.settings = settings;\n+ addDiscoveryType(\"local\", LocalDiscovery.class);\n+ addDiscoveryType(\"zen\", ZenDiscovery.class);\n }\n \n- @Override\n- public Iterable<? extends Module> spawnModules() {\n- Class<? extends Module> defaultDiscoveryModule;\n- if (DiscoveryNode.localNode(settings)) {\n- defaultDiscoveryModule = LocalDiscoveryModule.class;\n- } else {\n- defaultDiscoveryModule = ZenDiscoveryModule.class;\n- }\n- return ImmutableList.of(Modules.createModule(settings.getAsClass(DISCOVERY_TYPE_KEY, defaultDiscoveryModule, \"org.elasticsearch.discovery.\", \"DiscoveryModule\"), settings));\n+ /**\n+ * Adds a custom unicast hosts provider to build a dynamic list of unicast hosts list when doing unicast discovery.\n+ */\n+ public void addUnicastHostProvider(Class<? extends UnicastHostsProvider> unicastHostProvider) {\n+ unicastHostProviders.add(unicastHostProvider);\n+ }\n+\n+ /**\n+ * Adds a custom Discovery type.\n+ */\n+ public void addDiscoveryType(String type, Class<? extends Discovery> clazz) {\n+ discoveryTypes.put(type, clazz);\n }\n \n @Override\n protected void configure() {\n+ String defaultType = DiscoveryNode.localNode(settings) ? \"local\" : \"zen\";\n+ String discoveryType = settings.get(DISCOVERY_TYPE_KEY, defaultType);\n+ Class<? extends Discovery> discoveryClass = discoveryTypes.get(discoveryType);\n+ if (discoveryClass == null) {\n+ throw new IllegalArgumentException(\"Unknown Discovery type [\" + discoveryType + \"]\");\n+ }\n+\n+ if (discoveryType.equals(\"local\") == false) {\n+ bind(ElectMasterService.class).asEagerSingleton();\n+ bind(ZenPingService.class).asEagerSingleton();\n+ Multibinder<UnicastHostsProvider> unicastHostsProviderMultibinder = Multibinder.newSetBinder(binder(), UnicastHostsProvider.class);\n+ for (Class<? 
extends UnicastHostsProvider> unicastHostProvider : unicastHostProviders) {\n+ unicastHostsProviderMultibinder.addBinding().to(unicastHostProvider);\n+ }\n+ }\n+ bind(Discovery.class).to(discoveryClass).asEagerSingleton();\n bind(DiscoveryService.class).asEagerSingleton();\n }\n }\n\\ No newline at end of file",
"filename": "core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java",
"status": "modified"
},
{
"diff": "@@ -34,33 +34,25 @@ public class HttpServerModule extends AbstractModule {\n private final Settings settings;\n private final ESLogger logger;\n \n- private Class<? extends HttpServerTransport> configuredHttpServerTransport;\n- private String configuredHttpServerTransportSource;\n+ private Class<? extends HttpServerTransport> httpServerTransportClass;\n \n public HttpServerModule(Settings settings) {\n this.settings = settings;\n this.logger = Loggers.getLogger(getClass(), settings);\n+ this.httpServerTransportClass = NettyHttpServerTransport.class;\n }\n \n @SuppressWarnings({\"unchecked\"})\n @Override\n protected void configure() {\n- if (configuredHttpServerTransport != null) {\n- logger.info(\"Using [{}] as http transport, overridden by [{}]\", configuredHttpServerTransport.getName(), configuredHttpServerTransportSource);\n- bind(HttpServerTransport.class).to(configuredHttpServerTransport).asEagerSingleton();\n- } else {\n- Class<? extends HttpServerTransport> defaultHttpServerTransport = NettyHttpServerTransport.class;\n- Class<? extends HttpServerTransport> httpServerTransport = settings.getAsClass(\"http.type\", defaultHttpServerTransport, \"org.elasticsearch.http.\", \"HttpServerTransport\");\n- bind(HttpServerTransport.class).to(httpServerTransport).asEagerSingleton();\n- }\n-\n+ bind(HttpServerTransport.class).to(httpServerTransportClass).asEagerSingleton();\n bind(HttpServer.class).asEagerSingleton();\n }\n \n public void setHttpServerTransport(Class<? extends HttpServerTransport> httpServerTransport, String source) {\n Preconditions.checkNotNull(httpServerTransport, \"Configured http server transport may not be null\");\n Preconditions.checkNotNull(source, \"Plugin, that changes transport may not be null\");\n- this.configuredHttpServerTransport = httpServerTransport;\n- this.configuredHttpServerTransportSource = source;\n+ logger.info(\"Using [{}] as http transport, overridden by [{}]\", httpServerTransportClass.getName(), source);\n+ this.httpServerTransportClass = httpServerTransport;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/http/HttpServerModule.java",
"status": "modified"
},
{
"diff": "@@ -324,7 +324,7 @@ public void close() throws IOException {\n injector.getInstance(IndicesQueryCache.class).onClose(shardId);\n }\n }), path));\n- modules.add(new DeletionPolicyModule(indexSettings));\n+ modules.add(new DeletionPolicyModule());\n try {\n shardInjector = modules.createChildInjector(injector);\n } catch (CreationException e) {",
"filename": "core/src/main/java/org/elasticsearch/index/IndexService.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n \n import java.util.LinkedList;\n import java.util.Map;\n+import java.util.Objects;\n \n /**\n *\n@@ -114,12 +115,8 @@ public void processAnalyzer(String name, Class<? extends AnalyzerProvider> analy\n private final Map<String, Class<? extends TokenizerFactory>> tokenizers = Maps.newHashMap();\n private final Map<String, Class<? extends AnalyzerProvider>> analyzers = Maps.newHashMap();\n \n-\n- public AnalysisModule(Settings settings) {\n- this(settings, null);\n- }\n-\n public AnalysisModule(Settings settings, IndicesAnalysisService indicesAnalysisService) {\n+ Objects.requireNonNull(indicesAnalysisService);\n this.settings = settings;\n this.indicesAnalysisService = indicesAnalysisService;\n processors.add(new DefaultProcessor());\n@@ -173,24 +170,13 @@ protected void configure() {\n String charFilterName = entry.getKey();\n Settings charFilterSettings = entry.getValue();\n \n- Class<? extends CharFilterFactory> type = null;\n- try {\n- type = charFilterSettings.getAsClass(\"type\", null, \"org.elasticsearch.index.analysis.\", \"CharFilterFactory\");\n- } catch (NoClassSettingsException e) {\n- // nothing found, see if its in bindings as a binding name\n- if (charFilterSettings.get(\"type\") != null) {\n- type = charFiltersBindings.charFilters.get(Strings.toUnderscoreCase(charFilterSettings.get(\"type\")));\n- if (type == null) {\n- type = charFiltersBindings.charFilters.get(Strings.toCamelCase(charFilterSettings.get(\"type\")));\n- }\n- }\n- if (type == null) {\n- throw new IllegalArgumentException(\"failed to find char filter type [\" + charFilterSettings.get(\"type\") + \"] for [\" + charFilterName + \"]\", e);\n- }\n+ String typeName = charFilterSettings.get(\"type\");\n+ if (typeName == null) {\n+ throw new IllegalArgumentException(\"CharFilter [\" + charFilterName + \"] must have a type associated with it\");\n }\n+ Class<? extends CharFilterFactory> type = charFiltersBindings.charFilters.get(typeName);\n if (type == null) {\n- // nothing found, see if its in bindings as a binding name\n- throw new IllegalArgumentException(\"Char Filter [\" + charFilterName + \"] must have a type associated with it\");\n+ throw new IllegalArgumentException(\"Unknown CharFilter type [\" + typeName + \"] for [\" + charFilterName + \"]\");\n }\n charFilterBinder.addBinding(charFilterName).toProvider(FactoryProvider.newFactory(CharFilterFactoryFactory.class, type)).in(Scopes.SINGLETON);\n }\n@@ -206,11 +192,8 @@ protected void configure() {\n if (clazz.getAnnotation(AnalysisSettingsRequired.class) != null) {\n continue;\n }\n- // register it as default under the name\n- if (indicesAnalysisService != null && indicesAnalysisService.hasCharFilter(charFilterName)) {\n- // don't register it here, we will use explicitly register it in the AnalysisService\n- //charFilterBinder.addBinding(charFilterName).toInstance(indicesAnalysisService.charFilterFactoryFactory(charFilterName));\n- } else {\n+ // register if it's not builtin\n+ if (indicesAnalysisService.hasCharFilter(charFilterName) == false) {\n charFilterBinder.addBinding(charFilterName).toProvider(FactoryProvider.newFactory(CharFilterFactoryFactory.class, clazz)).in(Scopes.SINGLETON);\n }\n }\n@@ -233,23 +216,13 @@ protected void configure() {\n String tokenFilterName = entry.getKey();\n Settings tokenFilterSettings = entry.getValue();\n \n- Class<? 
extends TokenFilterFactory> type = null;\n- try {\n- type = tokenFilterSettings.getAsClass(\"type\", null, \"org.elasticsearch.index.analysis.\", \"TokenFilterFactory\");\n- } catch (NoClassSettingsException e) {\n- // nothing found, see if its in bindings as a binding name\n- if (tokenFilterSettings.get(\"type\") != null) {\n- type = tokenFiltersBindings.tokenFilters.get(Strings.toUnderscoreCase(tokenFilterSettings.get(\"type\")));\n- if (type == null) {\n- type = tokenFiltersBindings.tokenFilters.get(Strings.toCamelCase(tokenFilterSettings.get(\"type\")));\n- }\n- }\n- if (type == null) {\n- throw new IllegalArgumentException(\"failed to find token filter type [\" + tokenFilterSettings.get(\"type\") + \"] for [\" + tokenFilterName + \"]\", e);\n- }\n+ String typeName = tokenFilterSettings.get(\"type\");\n+ if (typeName == null) {\n+ throw new IllegalArgumentException(\"TokenFilter [\" + tokenFilterName + \"] must have a type associated with it\");\n }\n+ Class<? extends TokenFilterFactory> type = tokenFiltersBindings.tokenFilters.get(typeName);\n if (type == null) {\n- throw new IllegalArgumentException(\"token filter [\" + tokenFilterName + \"] must have a type associated with it\");\n+ throw new IllegalArgumentException(\"Unknown TokenFilter type [\" + typeName + \"] for [\" + tokenFilterName + \"]\");\n }\n tokenFilterBinder.addBinding(tokenFilterName).toProvider(FactoryProvider.newFactory(TokenFilterFactoryFactory.class, type)).in(Scopes.SINGLETON);\n }\n@@ -265,11 +238,8 @@ protected void configure() {\n if (clazz.getAnnotation(AnalysisSettingsRequired.class) != null) {\n continue;\n }\n- // register it as default under the name\n- if (indicesAnalysisService != null && indicesAnalysisService.hasTokenFilter(tokenFilterName)) {\n- // don't register it here, we will use explicitly register it in the AnalysisService\n- // tokenFilterBinder.addBinding(tokenFilterName).toInstance(indicesAnalysisService.tokenFilterFactoryFactory(tokenFilterName));\n- } else {\n+ // register if it's not builtin\n+ if (indicesAnalysisService.hasTokenFilter(tokenFilterName) == false) {\n tokenFilterBinder.addBinding(tokenFilterName).toProvider(FactoryProvider.newFactory(TokenFilterFactoryFactory.class, clazz)).in(Scopes.SINGLETON);\n }\n }\n@@ -291,24 +261,13 @@ protected void configure() {\n String tokenizerName = entry.getKey();\n Settings tokenizerSettings = entry.getValue();\n \n-\n- Class<? extends TokenizerFactory> type = null;\n- try {\n- type = tokenizerSettings.getAsClass(\"type\", null, \"org.elasticsearch.index.analysis.\", \"TokenizerFactory\");\n- } catch (NoClassSettingsException e) {\n- // nothing found, see if its in bindings as a binding name\n- if (tokenizerSettings.get(\"type\") != null) {\n- type = tokenizersBindings.tokenizers.get(Strings.toUnderscoreCase(tokenizerSettings.get(\"type\")));\n- if (type == null) {\n- type = tokenizersBindings.tokenizers.get(Strings.toCamelCase(tokenizerSettings.get(\"type\")));\n- }\n- }\n- if (type == null) {\n- throw new IllegalArgumentException(\"failed to find tokenizer type [\" + tokenizerSettings.get(\"type\") + \"] for [\" + tokenizerName + \"]\", e);\n- }\n+ String typeName = tokenizerSettings.get(\"type\");\n+ if (typeName == null) {\n+ throw new IllegalArgumentException(\"Tokenizer [\" + tokenizerName + \"] must have a type associated with it\");\n }\n+ Class<? 
extends TokenizerFactory> type = tokenizersBindings.tokenizers.get(typeName);\n if (type == null) {\n- throw new IllegalArgumentException(\"token filter [\" + tokenizerName + \"] must have a type associated with it\");\n+ throw new IllegalArgumentException(\"Unknown Tokenizer type [\" + typeName + \"] for [\" + tokenizerName + \"]\");\n }\n tokenizerBinder.addBinding(tokenizerName).toProvider(FactoryProvider.newFactory(TokenizerFactoryFactory.class, type)).in(Scopes.SINGLETON);\n }\n@@ -324,11 +283,8 @@ protected void configure() {\n if (clazz.getAnnotation(AnalysisSettingsRequired.class) != null) {\n continue;\n }\n- // register it as default under the name\n- if (indicesAnalysisService != null && indicesAnalysisService.hasTokenizer(tokenizerName)) {\n- // don't register it here, we will use explicitly register it in the AnalysisService\n- // tokenizerBinder.addBinding(tokenizerName).toProvider(FactoryProvider.newFactory(TokenizerFactoryFactory.class, clazz)).in(Scopes.SINGLETON);\n- } else {\n+ // register if it's not builtin\n+ if (indicesAnalysisService.hasTokenizer(tokenizerName) == false) {\n tokenizerBinder.addBinding(tokenizerName).toProvider(FactoryProvider.newFactory(TokenizerFactoryFactory.class, clazz)).in(Scopes.SINGLETON);\n }\n }\n@@ -350,42 +306,27 @@ protected void configure() {\n String analyzerName = entry.getKey();\n Settings analyzerSettings = entry.getValue();\n \n- Class<? extends AnalyzerProvider> type = null;\n- try {\n- type = analyzerSettings.getAsClass(\"type\", null, \"org.elasticsearch.index.analysis.\", \"AnalyzerProvider\");\n- } catch (NoClassSettingsException e) {\n- // nothing found, see if its in bindings as a binding name\n- if (analyzerSettings.get(\"type\") != null) {\n- type = analyzersBindings.analyzers.get(Strings.toUnderscoreCase(analyzerSettings.get(\"type\")));\n- if (type == null) {\n- type = analyzersBindings.analyzers.get(Strings.toCamelCase(analyzerSettings.get(\"type\")));\n- }\n- }\n- if (type == null) {\n- // no specific type, check if it has a tokenizer associated with it\n- String tokenizerName = analyzerSettings.get(\"tokenizer\");\n- if (tokenizerName != null) {\n- // we have a tokenizer, use the CustomAnalyzer\n- type = CustomAnalyzerProvider.class;\n- } else {\n- throw new IllegalArgumentException(\"failed to find analyzer type [\" + analyzerSettings.get(\"type\") + \"] or tokenizer for [\" + analyzerName + \"]\", e);\n- }\n- }\n- }\n- if (type == null) {\n- // no specific type, check if it has a tokenizer associated with it\n- String tokenizerName = analyzerSettings.get(\"tokenizer\");\n- if (tokenizerName != null) {\n- // we have a tokenizer, use the CustomAnalyzer\n+ String typeName = analyzerSettings.get(\"type\");\n+ Class<? 
extends AnalyzerProvider> type;\n+ if (typeName == null) {\n+ if (analyzerSettings.get(\"tokenizer\") != null) {\n+ // custom analyzer, need to add it\n type = CustomAnalyzerProvider.class;\n } else {\n- throw new IllegalArgumentException(\"failed to find analyzer type [\" + analyzerSettings.get(\"type\") + \"] or tokenizer for [\" + analyzerName + \"]\");\n+ throw new IllegalArgumentException(\"Analyzer [\" + analyzerName + \"] must have a type associated with it\");\n+ }\n+ } else if (typeName.equals(\"custom\")) {\n+ type = CustomAnalyzerProvider.class;\n+ } else {\n+ type = analyzersBindings.analyzers.get(typeName);\n+ if (type == null) {\n+ throw new IllegalArgumentException(\"Unknown Analyzer type [\" + typeName + \"] for [\" + analyzerName + \"]\");\n }\n }\n+\n analyzerBinder.addBinding(analyzerName).toProvider(FactoryProvider.newFactory(AnalyzerProviderFactory.class, type)).in(Scopes.SINGLETON);\n }\n \n-\n // go over the analyzers in the bindings and register the ones that are not configured\n for (Map.Entry<String, Class<? extends AnalyzerProvider>> entry : analyzersBindings.analyzers.entrySet()) {\n String analyzerName = entry.getKey();\n@@ -398,11 +339,8 @@ protected void configure() {\n if (clazz.getAnnotation(AnalysisSettingsRequired.class) != null) {\n continue;\n }\n- // register it as default under the name\n- if (indicesAnalysisService != null && indicesAnalysisService.hasAnalyzer(analyzerName)) {\n- // don't register it here, we will use explicitly register it in the AnalysisService\n- // analyzerBinder.addBinding(analyzerName).toProvider(FactoryProvider.newFactory(AnalyzerProviderFactory.class, clazz)).in(Scopes.SINGLETON);\n- } else {\n+ // register if it's not builtin\n+ if (indicesAnalysisService.hasAnalyzer(analyzerName) == false) {\n analyzerBinder.addBinding(analyzerName).toProvider(FactoryProvider.newFactory(AnalyzerProviderFactory.class, clazz)).in(Scopes.SINGLETON);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/analysis/AnalysisModule.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.index.cache.query;\n \n+import org.elasticsearch.cluster.metadata.AliasOrIndex;\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.Scopes;\n import org.elasticsearch.common.settings.Settings;\n@@ -43,8 +45,12 @@ public QueryCacheModule(Settings settings) {\n \n @Override\n protected void configure() {\n- bind(QueryCache.class)\n- .to(settings.getAsClass(QueryCacheSettings.QUERY_CACHE_TYPE, IndexQueryCache.class, \"org.elasticsearch.index.cache.query.\", \"QueryCache\"))\n- .in(Scopes.SINGLETON);\n+ Class<? extends IndexQueryCache> queryCacheClass = IndexQueryCache.class;\n+ String customQueryCache = settings.get(QueryCacheSettings.QUERY_CACHE_TYPE);\n+ if (customQueryCache != null) {\n+ // TODO: make this only useable from tests\n+ queryCacheClass = Classes.loadClass(getClass().getClassLoader(), customQueryCache);\n+ }\n+ bind(QueryCache.class).to(queryCacheClass).in(Scopes.SINGLETON);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheModule.java",
"status": "modified"
},
{
"diff": "@@ -22,30 +22,14 @@\n import org.apache.lucene.index.IndexDeletionPolicy;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.name.Names;\n-import org.elasticsearch.common.settings.Settings;\n \n-import static org.elasticsearch.index.deletionpolicy.DeletionPolicyModule.DeletionPolicySettings.TYPE;\n-\n-/**\n- *\n- */\n public class DeletionPolicyModule extends AbstractModule {\n \n- public static class DeletionPolicySettings {\n- public static final String TYPE = \"index.deletionpolicy.type\";\n- }\n-\n- private final Settings settings;\n-\n- public DeletionPolicyModule(Settings settings) {\n- this.settings = settings;\n- }\n-\n @Override\n protected void configure() {\n bind(IndexDeletionPolicy.class)\n .annotatedWith(Names.named(\"actual\"))\n- .to(settings.getAsClass(TYPE, KeepOnlyLastDeletionPolicy.class))\n+ .to(KeepOnlyLastDeletionPolicy.class)\n .asEagerSingleton();\n \n bind(SnapshotDeletionPolicy.class)",
"filename": "core/src/main/java/org/elasticsearch/index/deletionpolicy/DeletionPolicyModule.java",
"status": "modified"
},
{
"diff": "@@ -20,9 +20,11 @@\n package org.elasticsearch.index.shard;\n \n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.multibindings.Multibinder;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.cache.query.index.IndexQueryCache;\n import org.elasticsearch.index.engine.IndexSearcherWrapper;\n import org.elasticsearch.index.engine.IndexSearcherWrappingService;\n import org.elasticsearch.index.engine.EngineFactory;\n@@ -39,10 +41,6 @@\n public class IndexShardModule extends AbstractModule {\n \n public static final String ENGINE_FACTORY = \"index.engine.factory\";\n- private static final Class<? extends EngineFactory> DEFAULT_ENGINE_FACTORY_CLASS = InternalEngineFactory.class;\n-\n- private static final String ENGINE_PREFIX = \"org.elasticsearch.index.engine.\";\n- private static final String ENGINE_SUFFIX = \"EngineFactory\";\n \n private final ShardId shardId;\n private final Settings settings;\n@@ -72,7 +70,13 @@ protected void configure() {\n bind(TranslogService.class).asEagerSingleton();\n }\n \n- bind(EngineFactory.class).to(settings.getAsClass(ENGINE_FACTORY, DEFAULT_ENGINE_FACTORY_CLASS, ENGINE_PREFIX, ENGINE_SUFFIX));\n+ Class<? extends InternalEngineFactory> engineFactoryClass = InternalEngineFactory.class;\n+ String customEngineFactory = settings.get(ENGINE_FACTORY);\n+ if (customEngineFactory != null) {\n+ // TODO: make this only useable from tests\n+ engineFactoryClass = Classes.loadClass(getClass().getClassLoader(), customEngineFactory);\n+ }\n+ bind(EngineFactory.class).to(engineFactoryClass);\n bind(StoreRecoveryService.class).asEagerSingleton();\n bind(ShardPercolateService.class).asEagerSingleton();\n bind(ShardTermVectorsService.class).asEagerSingleton();",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShardModule.java",
"status": "modified"
},
{
"diff": "@@ -46,6 +46,12 @@ public class SimilarityModule extends AbstractModule {\n \n public SimilarityModule(Settings settings) {\n this.settings = settings;\n+ addSimilarity(\"default\", DefaultSimilarityProvider.class);\n+ addSimilarity(\"BM25\", BM25SimilarityProvider.class);\n+ addSimilarity(\"DFR\", DFRSimilarityProvider.class);\n+ addSimilarity(\"IB\", IBSimilarityProvider.class);\n+ addSimilarity(\"LMDirichlet\", LMDirichletSimilarityProvider.class);\n+ addSimilarity(\"LMJelinekMercer\", LMJelinekMercerSimilarityProvider.class);\n }\n \n /**\n@@ -60,30 +66,25 @@ public void addSimilarity(String name, Class<? extends SimilarityProvider> simil\n \n @Override\n protected void configure() {\n- Map<String, Class<? extends SimilarityProvider>> providers = Maps.newHashMap(similarities);\n+ MapBinder<String, SimilarityProvider.Factory> similarityBinder =\n+ MapBinder.newMapBinder(binder(), String.class, SimilarityProvider.Factory.class);\n \n Map<String, Settings> similaritySettings = settings.getGroups(SIMILARITY_SETTINGS_PREFIX);\n for (Map.Entry<String, Settings> entry : similaritySettings.entrySet()) {\n String name = entry.getKey();\n Settings settings = entry.getValue();\n \n- Class<? extends SimilarityProvider> type =\n- settings.getAsClass(\"type\", null, \"org.elasticsearch.index.similarity.\", \"SimilarityProvider\");\n- if (type == null) {\n- throw new IllegalArgumentException(\"SimilarityProvider [\" + name + \"] must have an associated type\");\n+ String typeName = settings.get(\"type\");\n+ if (typeName == null) {\n+ throw new IllegalArgumentException(\"Similarity [\" + name + \"] must have an associated type\");\n+ } else if (similarities.containsKey(typeName) == false) {\n+ throw new IllegalArgumentException(\"Unknown Similarity type [\" + typeName + \"] for [\" + name + \"]\");\n }\n- providers.put(name, type);\n- }\n-\n- MapBinder<String, SimilarityProvider.Factory> similarityBinder =\n- MapBinder.newMapBinder(binder(), String.class, SimilarityProvider.Factory.class);\n-\n- for (Map.Entry<String, Class<? extends SimilarityProvider>> entry : providers.entrySet()) {\n- similarityBinder.addBinding(entry.getKey()).toProvider(FactoryProvider.newFactory(SimilarityProvider.Factory.class, entry.getValue())).in(Scopes.SINGLETON);\n+ similarityBinder.addBinding(entry.getKey()).toProvider(FactoryProvider.newFactory(SimilarityProvider.Factory.class, similarities.get(typeName))).in(Scopes.SINGLETON);\n }\n \n for (PreBuiltSimilarityProvider.Factory factory : Similarities.listFactories()) {\n- if (!providers.containsKey(factory.name())) {\n+ if (!similarities.containsKey(factory.name())) {\n similarityBinder.addBinding(factory.name()).toInstance(factory);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/similarity/SimilarityModule.java",
"status": "modified"
},
{
"diff": "@@ -19,20 +19,22 @@\n \n package org.elasticsearch.index.store;\n \n-import com.google.common.collect.ImmutableList;\n-import org.elasticsearch.common.inject.*;\n+import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.settings.Settings;\n \n+import java.util.HashMap;\n+import java.util.Map;\n import java.util.Locale;\n \n /**\n *\n */\n-public class IndexStoreModule extends AbstractModule implements SpawnModules {\n+public class IndexStoreModule extends AbstractModule {\n \n public static final String STORE_TYPE = \"index.store.type\";\n \n private final Settings settings;\n+ private final Map<String, Class<? extends IndexStore>> storeTypes = new HashMap<>();\n \n public enum Type {\n NIOFS,\n@@ -56,25 +58,30 @@ public IndexStoreModule(Settings settings) {\n this.settings = settings;\n }\n \n- @Override\n- public Iterable<? extends Module> spawnModules() {\n- final String storeType = settings.get(STORE_TYPE, Type.DEFAULT.getSettingsKey());\n+ public void addIndexStore(String type, Class<? extends IndexStore> clazz) {\n+ storeTypes.put(type, clazz);\n+ }\n+\n+ private static boolean isBuiltinType(String storeType) {\n for (Type type : Type.values()) {\n if (type.match(storeType)) {\n- return ImmutableList.of(new DefaultStoreModule());\n+ return true;\n }\n }\n- final Class<? extends Module> indexStoreModule = settings.getAsClass(STORE_TYPE, null, \"org.elasticsearch.index.store.\", \"IndexStoreModule\");\n- return ImmutableList.of(Modules.createModule(indexStoreModule, settings));\n+ return false;\n }\n \n @Override\n- protected void configure() {}\n-\n- private static class DefaultStoreModule extends AbstractModule {\n- @Override\n- protected void configure() {\n+ protected void configure() {\n+ final String storeType = settings.get(STORE_TYPE);\n+ if (storeType == null || isBuiltinType(storeType)) {\n bind(IndexStore.class).asEagerSingleton();\n+ } else {\n+ Class<? extends IndexStore> clazz = storeTypes.get(storeType);\n+ if (clazz == null) {\n+ throw new IllegalArgumentException(\"Unknown store type [\" + storeType + \"]\");\n+ }\n+ bind(IndexStore.class).to(clazz).asEagerSingleton();\n }\n }\n }\n\\ No newline at end of file",
"filename": "core/src/main/java/org/elasticsearch/index/store/IndexStoreModule.java",
"status": "modified"
},
{
"diff": "@@ -24,7 +24,7 @@\n \n public class CircuitBreakerModule extends AbstractModule {\n \n- public static final String IMPL = \"indices.breaker.type\";\n+ public static final String TYPE_KEY = \"indices.breaker.type\";\n \n private final Settings settings;\n \n@@ -34,6 +34,15 @@ public CircuitBreakerModule(Settings settings) {\n \n @Override\n protected void configure() {\n- bind(CircuitBreakerService.class).to(settings.getAsClass(IMPL, HierarchyCircuitBreakerService.class)).asEagerSingleton();\n+ String type = settings.get(TYPE_KEY);\n+ Class<? extends CircuitBreakerService> impl;\n+ if (type == null || type.equals(\"hierarchy\")) {\n+ impl = HierarchyCircuitBreakerService.class;\n+ } else if (type.equals(\"none\")) {\n+ impl = NoneCircuitBreakerService.class;\n+ } else {\n+ throw new IllegalArgumentException(\"Unknown circuit breaker type [\" + type + \"]\");\n+ }\n+ bind(CircuitBreakerService.class).to(impl).asEagerSingleton();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/breaker/CircuitBreakerModule.java",
"status": "modified"
},
{
"diff": "@@ -20,13 +20,15 @@\n package org.elasticsearch.repositories;\n \n import com.google.common.collect.ImmutableList;\n-import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.Module;\n import org.elasticsearch.common.inject.Modules;\n import org.elasticsearch.common.inject.SpawnModules;\n import org.elasticsearch.common.settings.Settings;\n \n+import java.util.Arrays;\n+import java.util.Collections;\n+\n import static org.elasticsearch.common.Strings.toCamelCase;\n \n /**\n@@ -67,7 +69,11 @@ public RepositoryModule(RepositoryName repositoryName, Settings settings, Settin\n */\n @Override\n public Iterable<? extends Module> spawnModules() {\n- return ImmutableList.of(Modules.createModule(loadTypeModule(repositoryName.type(), \"org.elasticsearch.repositories.\", \"RepositoryModule\"), globalSettings));\n+ Class<? extends Module> repoModuleClass = typesRegistry.type(repositoryName.type());\n+ if (repoModuleClass == null) {\n+ throw new IllegalArgumentException(\"Could not find repository type [\" + repositoryName.getType() + \"] for repository [\" + repositoryName.getName() + \"]\");\n+ }\n+ return Collections.unmodifiableList(Arrays.asList(Modules.createModule(repoModuleClass, globalSettings)));\n }\n \n /**\n@@ -77,12 +83,4 @@ public Iterable<? extends Module> spawnModules() {\n protected void configure() {\n bind(RepositorySettings.class).toInstance(new RepositorySettings(globalSettings, settings));\n }\n-\n- private Class<? extends Module> loadTypeModule(String type, String prefixPackage, String suffixClassName) {\n- Class<? extends Module> registered = typesRegistry.type(type);\n- if (registered != null) {\n- return registered;\n- }\n- return Classes.loadClass(globalSettings.getClassLoader(), type, prefixPackage, suffixClassName);\n- }\n }",
"filename": "core/src/main/java/org/elasticsearch/repositories/RepositoryModule.java",
"status": "modified"
},
{
"diff": "@@ -75,17 +75,6 @@ protected void configure() {\n scriptsBinder.addBinding(entry.getKey()).to(entry.getValue()).asEagerSingleton();\n }\n \n- // now, check for config based ones\n- Map<String, Settings> nativeSettings = settings.getGroups(\"script.native\");\n- for (Map.Entry<String, Settings> entry : nativeSettings.entrySet()) {\n- String name = entry.getKey();\n- Class<? extends NativeScriptFactory> type = entry.getValue().getAsClass(\"type\", NativeScriptFactory.class);\n- if (type == NativeScriptFactory.class) {\n- throw new IllegalArgumentException(\"type is missing for native script [\" + name + \"]\");\n- }\n- scriptsBinder.addBinding(name).to(type).asEagerSingleton();\n- }\n-\n Multibinder<ScriptEngineService> multibinder = Multibinder.newSetBinder(binder(), ScriptEngineService.class);\n multibinder.addBinding().to(NativeScriptEngineService.class);\n ",
"filename": "core/src/main/java/org/elasticsearch/script/ScriptModule.java",
"status": "modified"
},
{
"diff": "@@ -33,15 +33,7 @@\n import org.elasticsearch.search.dfs.DfsPhase;\n import org.elasticsearch.search.fetch.FetchPhase;\n import org.elasticsearch.search.fetch.FetchSubPhaseModule;\n-import org.elasticsearch.search.fetch.explain.ExplainFetchSubPhase;\n-import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsFetchSubPhase;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsFetchSubPhase;\n-import org.elasticsearch.search.fetch.matchedqueries.MatchedQueriesFetchSubPhase;\n-import org.elasticsearch.search.fetch.script.ScriptFieldsFetchSubPhase;\n-import org.elasticsearch.search.fetch.source.FetchSourceSubPhase;\n-import org.elasticsearch.search.fetch.version.VersionFetchSubPhase;\n import org.elasticsearch.search.highlight.HighlightModule;\n-import org.elasticsearch.search.highlight.HighlightPhase;\n import org.elasticsearch.search.query.QueryPhase;\n import org.elasticsearch.search.suggest.SuggestModule;\n ",
"filename": "core/src/main/java/org/elasticsearch/search/SearchModule.java",
"status": "modified"
},
{
"diff": "@@ -19,16 +19,11 @@\n \n package org.elasticsearch.search;\n \n-import com.google.common.collect.ImmutableList;\n-\n+import org.elasticsearch.common.Classes;\n import org.elasticsearch.common.inject.AbstractModule;\n-import org.elasticsearch.common.inject.Module;\n-import org.elasticsearch.common.inject.SpawnModules;\n import org.elasticsearch.common.settings.Settings;\n \n-import static org.elasticsearch.common.inject.Modules.createModule;\n-\n-public class SearchServiceModule extends AbstractModule implements SpawnModules {\n+public class SearchServiceModule extends AbstractModule {\n \n public static final String IMPL = \"search.service_impl\";\n \n@@ -40,10 +35,12 @@ public SearchServiceModule(Settings settings) {\n \n @Override\n protected void configure() {\n- }\n-\n- @Override\n- public Iterable<? extends Module> spawnModules() {\n- return ImmutableList.of(createModule(settings.getAsClass(IMPL, DefaultSearchServiceModule.class), settings));\n+ String impl = settings.get(IMPL);\n+ if (impl == null) {\n+ bind(SearchService.class).asEagerSingleton();\n+ } else {\n+ Class<? extends SearchService> implClass = Classes.loadClass(getClass().getClassLoader(), impl);\n+ bind(SearchService.class).to(implClass).asEagerSingleton();\n+ }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/SearchServiceModule.java",
"status": "modified"
},
{
"diff": "@@ -74,7 +74,7 @@\n /**\n * The main module for the get (binding all get components together)\n */\n-public class AggregationModule extends AbstractModule implements SpawnModules{\n+public class AggregationModule extends AbstractModule implements SpawnModules {\n \n private List<Class<? extends Aggregator.Parser>> aggParsers = Lists.newArrayList();\n private List<Class<? extends PipelineAggregator.Parser>> pipelineAggParsers = Lists.newArrayList();",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregationModule.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.transport;\n \n import com.google.common.base.Preconditions;\n+import com.google.common.collect.Maps;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n@@ -29,6 +30,8 @@\n import org.elasticsearch.transport.local.LocalTransport;\n import org.elasticsearch.transport.netty.NettyTransport;\n \n+import java.util.Map;\n+\n /**\n *\n */\n@@ -37,9 +40,14 @@ public class TransportModule extends AbstractModule {\n public static final String TRANSPORT_TYPE_KEY = \"transport.type\";\n public static final String TRANSPORT_SERVICE_TYPE_KEY = \"transport.service.type\";\n \n+ public static final String LOCAL_TRANSPORT = \"local\";\n+ public static final String NETTY_TRANSPORT = \"netty\";\n+\n private final ESLogger logger;\n private final Settings settings;\n \n+ private final Map<String, Class<? extends TransportService>> transportServices = Maps.newHashMap();\n+ private final Map<String, Class<? extends Transport>> transports = Maps.newHashMap();\n private Class<? extends TransportService> configuredTransportService;\n private Class<? extends Transport> configuredTransport;\n private String configuredTransportServiceSource;\n@@ -48,6 +56,22 @@ public class TransportModule extends AbstractModule {\n public TransportModule(Settings settings) {\n this.settings = settings;\n this.logger = Loggers.getLogger(getClass(), settings);\n+ addTransport(LOCAL_TRANSPORT, LocalTransport.class);\n+ addTransport(NETTY_TRANSPORT, NettyTransport.class);\n+ }\n+\n+ public void addTransportService(String name, Class<? extends TransportService> clazz) {\n+ Class<? extends TransportService> oldClazz = transportServices.put(name, clazz);\n+ if (oldClazz != null) {\n+ throw new IllegalArgumentException(\"Cannot register TransportService [\" + name + \"] to \" + clazz.getName() + \", already registered to \" + oldClazz.getName());\n+ }\n+ }\n+\n+ public void addTransport(String name, Class<? extends Transport> clazz) {\n+ Class<? extends Transport> oldClazz = transports.put(name, clazz);\n+ if (oldClazz != null) {\n+ throw new IllegalArgumentException(\"Cannot register Transport [\" + name + \"] to \" + clazz.getName() + \", already registered to \" + oldClazz.getName());\n+ }\n }\n \n @Override\n@@ -56,12 +80,14 @@ protected void configure() {\n logger.info(\"Using [{}] as transport service, overridden by [{}]\", configuredTransportService.getName(), configuredTransportServiceSource);\n bind(TransportService.class).to(configuredTransportService).asEagerSingleton();\n } else {\n- Class<? extends TransportService> defaultTransportService = TransportService.class;\n- Class<? 
extends TransportService> transportService = settings.getAsClass(TRANSPORT_SERVICE_TYPE_KEY, defaultTransportService, \"org.elasticsearch.transport.\", \"TransportService\");\n- if (!TransportService.class.equals(transportService)) {\n- bind(TransportService.class).to(transportService).asEagerSingleton();\n- } else {\n+ String typeName = settings.get(TRANSPORT_SERVICE_TYPE_KEY);\n+ if (typeName == null) {\n bind(TransportService.class).asEagerSingleton();\n+ } else {\n+ if (transportServices.containsKey(typeName) == false) {\n+ throw new IllegalArgumentException(\"Unknown TransportService [\" + typeName + \"]\");\n+ }\n+ bind(TransportService.class).to(transportServices.get(typeName)).asEagerSingleton();\n }\n }\n \n@@ -71,9 +97,13 @@ protected void configure() {\n logger.info(\"Using [{}] as transport, overridden by [{}]\", configuredTransport.getName(), configuredTransportSource);\n bind(Transport.class).to(configuredTransport).asEagerSingleton();\n } else {\n- Class<? extends Transport> defaultTransport = DiscoveryNode.localNode(settings) ? LocalTransport.class : NettyTransport.class;\n- Class<? extends Transport> transport = settings.getAsClass(TRANSPORT_TYPE_KEY, defaultTransport, \"org.elasticsearch.transport.\", \"Transport\");\n- bind(Transport.class).to(transport).asEagerSingleton();\n+ String defaultType = DiscoveryNode.localNode(settings) ? LOCAL_TRANSPORT : NETTY_TRANSPORT;\n+ String typeName = settings.get(TRANSPORT_TYPE_KEY, defaultType);\n+ Class<? extends Transport> clazz = transports.get(typeName);\n+ if (clazz == null) {\n+ throw new IllegalArgumentException(\"Unknown Transport [\" + typeName + \"]\");\n+ }\n+ bind(Transport.class).to(clazz).asEagerSingleton();\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/transport/TransportModule.java",
"status": "modified"
},
{
"diff": "@@ -89,6 +89,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.plugins.AbstractPlugin;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n import org.elasticsearch.test.ESIntegTestCase;\n@@ -143,7 +144,7 @@ protected int minimumNumberOfReplicas() {\n protected Settings nodeSettings(int nodeOrdinal) {\n return Settings.settingsBuilder()\n .put(super.nodeSettings(nodeOrdinal))\n- .put(TransportModule.TRANSPORT_SERVICE_TYPE_KEY, InterceptingTransportService.class.getName())\n+ .extendArray(\"plugin.types\", InterceptingTransportService.Plugin.class.getName())\n .build();\n }\n \n@@ -843,6 +844,24 @@ private static List<TransportRequest> consumeTransportRequests(String action) {\n \n public static class InterceptingTransportService extends TransportService {\n \n+ public static class Plugin extends AbstractPlugin {\n+ @Override\n+ public String name() {\n+ return \"intercepting-transport-service\";\n+ }\n+ @Override\n+ public String description() {\n+ return \"an intercepting transport service for testing\";\n+ }\n+ public void onModule(TransportModule transportModule) {\n+ transportModule.addTransportService(\"intercepting\", InterceptingTransportService.class);\n+ }\n+ @Override\n+ public Settings additionalSettings() {\n+ return Settings.builder().put(TransportModule.TRANSPORT_SERVICE_TYPE_KEY, \"intercepting\").build();\n+ }\n+ }\n+\n private final Set<String> actions = new HashSet<>();\n \n private final Map<String, List<TransportRequest>> requests = new HashMap<>();",
"filename": "core/src/test/java/org/elasticsearch/action/IndicesRequestIT.java",
"status": "modified"
}
]
} |
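The module diffs in the record above all make the same move: implementation classes are no longer loaded from a settings value via `getAsClass`, but looked up in an explicit name-to-class registry that the module pre-populates and plugins extend (`addDiscoveryType`, `addTransport`, `addTransportService`, `addIndexStore`, and so on), with unknown names rejected at configure time. Below is a minimal, self-contained sketch of that registry pattern; the class and method names (`ExampleModule`, `addType`, `resolve`) are hypothetical stand-ins for illustration, not Elasticsearch API.

```
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (hypothetical names) of the name -> class registry pattern
// the module diffs above switch to, replacing settings.getAsClass lookups.
public class ExampleModule {

    // stand-in for the service type a module binds (Discovery, Transport, ...)
    public interface Discovery {}
    public static class LocalDiscovery implements Discovery {}
    public static class ZenDiscovery implements Discovery {}

    private final Map<String, Class<? extends Discovery>> types = new HashMap<>();

    public ExampleModule() {
        // built-in implementations are registered up front by the module itself
        addType("local", LocalDiscovery.class);
        addType("zen", ZenDiscovery.class);
    }

    // plugins call this (e.g. from an onModule hook) to register their own type
    public void addType(String name, Class<? extends Discovery> clazz) {
        Class<? extends Discovery> old = types.put(name, clazz);
        if (old != null) {
            throw new IllegalArgumentException(
                    "type [" + name + "] is already registered to " + old.getName());
        }
    }

    // at configure time the configured name must resolve to a registered class
    public Class<? extends Discovery> resolve(String configuredType) {
        Class<? extends Discovery> clazz = types.get(configuredType);
        if (clazz == null) {
            throw new IllegalArgumentException("Unknown type [" + configuredType + "]");
        }
        return clazz;
    }

    public static void main(String[] args) {
        ExampleModule module = new ExampleModule();
        System.out.println(module.resolve("zen").getSimpleName()); // prints ZenDiscovery
    }
}
```

A plugin-style hook would call something like `addType("intercepting", MyImpl.class)` before resolution runs, mirroring how the `InterceptingTransportService.Plugin` in the test diff above registers itself with `TransportModule` via `onModule`.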
{
"body": "With RESTART_ON_UPGRADE disabled (default for at least 1.5.2), the init script of ES 1.7.x will be unable to restart ES after a Debian package upgrade from 1.5.x or earlier.\n\nThis is because the pid file path changed in 1.6.0 (IIRC), so when a user invokes e.g. `/etc/init.d/elasticsearch restart` after the package has been upgraded no pid file is found and ES is deemed to not be running. The subsequent start will then fail since it can't bind to any ports as they're still busy by the running ES process.\n\nThis broke my automated upgrade from 1.5.2 to 1.7.1, requiring a manual shutdown of ES on affected machines and/or a package rollback. Could we try a little harder to find the pid file by e.g. introducing a fallback pid file in the 1.7.x series? Perhaps migrating the old pid file with something as simple as\n\n```\nLEGACY_PID_FILE=\"/var/run/elasticsearch.pid\"\nif [ -f \"$LEGACY_PID_FILE\" -a \"$LEGACY_PID_FILE\" != \"$PID_FILE\" ] ; then \n mv \"$LEGACY_PID_FILE\" \"$PID_FILE\" || exit 1\nfi\n```\n\nright after the PID_DIR creation in /etc/init.d/elasticsearch would do?\n",
"comments": [
{
"body": "@magnusbaeck I was thinking of something similar - it sounds a good idea to me.\n",
"created_at": "2015-08-05T08:36:14Z"
},
{
"body": "Great. I'll look into a patch.\n",
"created_at": "2015-08-05T08:50:15Z"
},
{
"body": "This is a duplicate of #11747. Keeping this one open for now since it's more descriptive and already has the \"bug\" label.\n",
"created_at": "2015-08-08T19:06:58Z"
},
{
"body": "We no longer restart after upgrade. Closing\n",
"created_at": "2016-01-26T17:43:58Z"
}
],
"number": 12649,
"title": "Changed pid file path causes init script to fail to restart ES after upgrade"
} | {
"body": "This fixes the init script breakage that was introduced in 1.6. Submitting this low-risk bugfix to master but please consider cherry-picking it into the 1.7 branch.\n\nCloses #12649\n",
"number": 12741,
"review_comments": [],
"title": "Move existing PID file to new location"
} | {
"commits": [
{
"message": "In Debian init script, move legacy PID file to new location.\n\nWhen the PID file moved from /var/run to /var/run/elasticsearch\nin ES 1.6, upgrades of the Debian package stopped working if\nRESTART_ON_UPGRADE was disabled.\n\nThis is because the >=1.6 init script assumed that the PID file\nwas found in the current (new) location, so\n\"/etc/init.d/elasticsearch stop\" would bail out right away thinking\nthat ES wasn't running, and the \"start\" and \"restart\" actions would\nattempt to start ES again (unsuccessfully since ES was still running\nand hogging port(s)).\n\nWe fix this by moving the old PID file to the new location at\nthe beginning of the init script so that all script actions relying\non the PID file continue to work.\n\nFixes issue #12649."
}
],
"files": [
{
"diff": "@@ -106,6 +106,7 @@ fi\n \n # Define other required variables\n PID_FILE=\"$PID_DIR/$NAME.pid\"\n+LEGACY_PID_FILE=\"/var/run/$NAME.pid\"\n DAEMON=$ES_HOME/bin/elasticsearch\n DAEMON_OPTS=\"-d -p $PID_FILE --default.config=$CONF_FILE --default.path.home=$ES_HOME --default.path.logs=$LOG_DIR --default.path.data=$DATA_DIR --default.path.conf=$CONF_DIR\"\n \n@@ -131,6 +132,21 @@ checkJava() {\n \tfi\n }\n \n+# Ensure that the PID_DIR exists (it is cleaned at OS startup time)\n+if [ -n \"$PID_DIR\" ] && [ ! -e \"$PID_DIR\" ]; then\n+\tmkdir -p \"$PID_DIR\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_DIR\"\n+fi\n+if [ -n \"$PID_FILE\" ] && [ ! -e \"$PID_FILE\" ]; then\n+\ttouch \"$PID_FILE\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_FILE\"\n+fi\n+\n+# Move any PID file at the (pre 1.6) legacy location to the current\n+# location so that this init script can locate old daemons that are\n+# still running after an upgrade.\n+if [ -f \"$LEGACY_PID_FILE\" ] && [ \"$LEGACY_PID_FILE\" != \"$PID_FILE\" ]; then\n+\tmv \"$LEGACY_PID_FILE\" \"$PID_FILE\" || exit 1\n+fi\n+\n case \"$1\" in\n start)\n \tcheckJava\n@@ -152,14 +168,6 @@ case \"$1\" in\n \t# Prepare environment\n \tmkdir -p \"$LOG_DIR\" \"$DATA_DIR\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$LOG_DIR\" \"$DATA_DIR\"\n \n-\t# Ensure that the PID_DIR exists (it is cleaned at OS startup time)\n-\tif [ -n \"$PID_DIR\" ] && [ ! -e \"$PID_DIR\" ]; then\n-\t\tmkdir -p \"$PID_DIR\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_DIR\"\n-\tfi\n-\tif [ -n \"$PID_FILE\" ] && [ ! -e \"$PID_FILE\" ]; then\n-\t\ttouch \"$PID_FILE\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_FILE\"\n-\tfi\n-\n \tif [ -n \"$MAX_OPEN_FILES\" ]; then\n \t\tulimit -n $MAX_OPEN_FILES\n \tfi",
"filename": "distribution/deb/src/main/packaging/init.d/elasticsearch",
"status": "modified"
}
]
} |
{
"body": "Custom data paths in shadow replicas will not work with the security manager today.\n",
"comments": [],
"number": 12714,
"title": "Fix custom data paths to work with security manager"
} | {
"body": "This allows `path.shared_data` to be added to the security manager while\nstill allowing a custom `data_path` for indices using shadow replicas.\n\nFor example, configuring `path.shared_data: /tmp/foo`, then created an\nindex with:\n\n```\nPOST /myindex\n{\n \"index\": {\n \"number_of_shards\": 1,\n \"number_of_replicas\": 1,\n \"data_path\": \"/tmp/foo/bar/baz\",\n \"shadow_replicas\": true\n }\n}\n```\n\nThe index will then reside in `/tmp/foo/bar/baz`.\n\n`path.shared_data` defaults to `${path.home}/data` if not specified.\n\nResolves #12714\nRelates to #11065\n",
"number": 12729,
"review_comments": [
{
"body": "why the .getParent?\n",
"created_at": "2015-08-07T16:03:55Z"
},
{
"body": "I wanted it to be a level higher than the temp directory, to capture the directory that other temp dirs are created in.\n",
"created_at": "2015-08-07T16:11:23Z"
},
{
"body": "Why not create one temp dir, and then a subdir off of that? There is nothing that guarantees tempdirs are all created side by side.\n",
"created_at": "2015-08-07T16:13:50Z"
},
{
"body": "Because the directory set for `path.shared_data` and the custom directories set during tests when indices are created are uncoupled, if I set it to a specific directory in advance, every test (including where we randomly add a custom data path) would be required to know what the shared data path was already set to.\n",
"created_at": "2015-08-07T16:16:09Z"
},
{
"body": "I would like it better if custom data paths were explicitely created in a specific drectory. However it's an existing issue that your PR does not introduce, so could you just add a comment explaining why you set the custom data path this way and add a TODO to explicitely set custom index paths to be sub directories?\n",
"created_at": "2015-08-10T12:22:13Z"
},
{
"body": "since this test is contained maybe it could configure a shared data path and then make sure that all index paths are sub dirs?\n",
"created_at": "2015-08-10T12:23:33Z"
},
{
"body": "Sure I will change that.\n",
"created_at": "2015-08-10T15:13:37Z"
},
{
"body": "Will do\n",
"created_at": "2015-08-10T15:13:42Z"
},
{
"body": "Should it resolve to the same as path.data instead?\n",
"created_at": "2015-08-10T15:52:26Z"
},
{
"body": "This is the same as `path.data`? Or do you mean it should append the cluster name onto it?\n",
"created_at": "2015-08-10T15:54:59Z"
},
{
"body": "I mean if path.data is set eg. in the config/elasticsearch.yml to be another location\n",
"created_at": "2015-08-10T15:56:25Z"
},
{
"body": "Or maybe sharedDataFile should just be null if not set?\n",
"created_at": "2015-08-10T15:57:21Z"
},
{
"body": "Ahh, part of the reason is that `path.data` can be an array, and so I didn't want to just randomly pick one of the paths\n",
"created_at": "2015-08-10T15:57:41Z"
},
{
"body": "I am wondering if we need both node.enable_custom_paths and path.shared_data\n",
"created_at": "2015-08-10T16:02:17Z"
},
{
"body": "We could probably remove `enable_custom_paths` now, but maybe as a separate change?\n",
"created_at": "2015-08-10T16:03:53Z"
},
{
"body": "I don't mind doing it as a separate change\n",
"created_at": "2015-08-10T16:05:05Z"
},
{
"body": "I don't want to set it as null because if we remove `custom_paths_enabled` we'll now get a NullPointerException if someone tries to use it. I think if we remove `custom_paths_enabled` we can set this to a better default if you're concerned about it using the data path. Maybe just `${es.path.home}/data/custom` or something?\n",
"created_at": "2015-08-10T16:05:24Z"
},
{
"body": "Then maybe it should be null when not set, otherwise we request an unnecessary permission to the security manager, while we should be requesting as few permissions as possible?\n",
"created_at": "2015-08-10T16:06:05Z"
},
{
"body": "Great, I opened #12776 for this.\n",
"created_at": "2015-08-10T16:06:55Z"
},
{
"body": "I can make it null and add better messaging in the validation I think.\n",
"created_at": "2015-08-10T16:08:13Z"
},
{
"body": "just checking that this test does not create files in this dir, otherwise we should rather use the java.io.tmpdir?\n",
"created_at": "2015-08-11T13:31:18Z"
},
{
"body": "This doesn't create any files here, it just checks and resolves paths (unit test only)\n",
"created_at": "2015-08-11T13:38:13Z"
},
{
"body": ":+1:\n",
"created_at": "2015-08-11T13:40:38Z"
}
],
"title": "Add `path.shared_data`"
} | {
"commits": [
{
"message": "Add `path.shared_data`\n\nThis allows `path.shared_data` to be added to the security manager while\nstill allowing a custom `data_path` for indices using shadow replicas.\n\nFor example, configuring `path.shared_data: /tmp/foo`, then created an\nindex with:\n\n```\nPOST /myindex\n{\n \"index\": {\n \"number_of_shards\": 1,\n \"number_of_replicas\": 1,\n \"data_path\": \"/tmp/foo/bar/baz\",\n \"shadow_replicas\": true\n }\n}\n```\n\nThe index will then reside in `/tmp/foo/bar/baz`.\n\n`path.shared_data` defaults to `null` if not specified.\n\nResolves #12714\nRelates to #11065"
}
],
"files": [
{
"diff": "@@ -126,6 +126,9 @@ static Permissions createPermissions(Environment environment) throws IOException\n // read-write dirs\n addPath(policy, environment.tmpFile(), \"read,readlink,write,delete\");\n addPath(policy, environment.logsFile(), \"read,readlink,write,delete\");\n+ if (environment.sharedDataFile() != null) {\n+ addPath(policy, environment.sharedDataFile(), \"read,readlink,write,delete\");\n+ }\n for (Path path : environment.dataFiles()) {\n addPath(policy, path, \"read,readlink,write,delete\");\n }",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/Security.java",
"status": "modified"
},
{
"diff": "@@ -50,12 +50,14 @@\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.FileSystemUtils;\n+import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.io.Streams;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.mapper.DocumentMapper;\n@@ -99,12 +101,14 @@ public class MetaDataCreateIndexService extends AbstractComponent {\n private final AliasValidator aliasValidator;\n private final IndexTemplateFilter indexTemplateFilter;\n private final NodeEnvironment nodeEnv;\n+ private final Environment env;\n \n @Inject\n public MetaDataCreateIndexService(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n IndicesService indicesService, AllocationService allocationService, MetaDataService metaDataService,\n Version version, AliasValidator aliasValidator,\n- Set<IndexTemplateFilter> indexTemplateFilters, NodeEnvironment nodeEnv) {\n+ Set<IndexTemplateFilter> indexTemplateFilters, Environment env,\n+ NodeEnvironment nodeEnv) {\n super(settings);\n this.threadPool = threadPool;\n this.clusterService = clusterService;\n@@ -114,6 +118,7 @@ public MetaDataCreateIndexService(Settings settings, ThreadPool threadPool, Clus\n this.version = version;\n this.aliasValidator = aliasValidator;\n this.nodeEnv = nodeEnv;\n+ this.env = env;\n \n if (indexTemplateFilters.isEmpty()) {\n this.indexTemplateFilter = DEFAULT_INDEX_TEMPLATE_FILTER;\n@@ -511,8 +516,13 @@ private void validate(CreateIndexClusterStateUpdateRequest request, ClusterState\n public void validateIndexSettings(String indexName, Settings settings) throws IndexCreationException {\n String customPath = settings.get(IndexMetaData.SETTING_DATA_PATH, null);\n List<String> validationErrors = Lists.newArrayList();\n- if (customPath != null && nodeEnv.isCustomPathsEnabled() == false) {\n- validationErrors.add(\"custom data_paths for indices is disabled\");\n+ if (customPath != null && env.sharedDataFile() == null) {\n+ validationErrors.add(\"path.shared_data must be set in order to use custom data paths\");\n+ } else if (customPath != null) {\n+ Path resolvedPath = PathUtils.get(new Path[]{env.sharedDataFile()}, customPath);\n+ if (resolvedPath == null) {\n+ validationErrors.add(\"custom path [\" + customPath + \"] is not a sub-path of path.shared_data [\" + env.sharedDataFile() + \"]\");\n+ }\n }\n Integer number_of_primaries = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);\n Integer number_of_replicas = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null);",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -57,6 +57,8 @@ public class Environment {\n \n private final Path pluginsFile;\n \n+ private final Path sharedDataFile;\n+\n /** location of bin/, used by plugin manager */\n private final Path binFile;\n \n@@ -126,6 +128,11 @@ public Environment(Settings settings) {\n dataFiles = new Path[]{homeFile.resolve(\"data\")};\n dataWithClusterFiles = new Path[]{homeFile.resolve(\"data\").resolve(ClusterName.clusterNameFromSettings(settings).value())};\n }\n+ if (settings.get(\"path.shared_data\") != null) {\n+ sharedDataFile = PathUtils.get(cleanPath(settings.get(\"path.shared_data\")));\n+ } else {\n+ sharedDataFile = null;\n+ }\n String[] repoPaths = settings.getAsArray(\"path.repo\");\n if (repoPaths.length > 0) {\n repoFiles = new Path[repoPaths.length];\n@@ -165,6 +172,13 @@ public Path[] dataFiles() {\n return dataFiles;\n }\n \n+ /**\n+ * The shared data location\n+ */\n+ public Path sharedDataFile() {\n+ return sharedDataFile;\n+ }\n+\n /**\n * The data location with the cluster name as a sub directory.\n */",
"filename": "core/src/main/java/org/elasticsearch/env/Environment.java",
"status": "modified"
},
{
"diff": "@@ -105,6 +105,7 @@ public String toString() {\n }\n \n private final NodePath[] nodePaths;\n+ private final Path sharedDataPath;\n private final Lock[] locks;\n \n private final boolean addNodeId;\n@@ -137,13 +138,15 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n \n if (!DiscoveryNode.nodeRequiresLocalStorage(settings)) {\n nodePaths = null;\n+ sharedDataPath = null;\n locks = null;\n localNodeId = -1;\n return;\n }\n \n final NodePath[] nodePaths = new NodePath[environment.dataWithClusterFiles().length];\n final Lock[] locks = new Lock[nodePaths.length];\n+ sharedDataPath = environment.sharedDataFile();\n \n int localNodeId = -1;\n IOException lastException = null;\n@@ -792,17 +795,16 @@ public static boolean hasCustomDataPath(@IndexSettings Settings indexSettings) {\n *\n * @param indexSettings settings for the index\n */\n- @SuppressForbidden(reason = \"Lee is working on it: https://github.com/elastic/elasticsearch/pull/11065\")\n private Path resolveCustomLocation(@IndexSettings Settings indexSettings) {\n assert indexSettings != Settings.EMPTY;\n String customDataDir = indexSettings.get(IndexMetaData.SETTING_DATA_PATH);\n if (customDataDir != null) {\n // This assert is because this should be caught by MetaDataCreateIndexService\n assert customPathsEnabled;\n if (addNodeId) {\n- return PathUtils.get(customDataDir).resolve(Integer.toString(this.localNodeId));\n+ return sharedDataPath.resolve(customDataDir).resolve(Integer.toString(this.localNodeId));\n } else {\n- return PathUtils.get(customDataDir);\n+ return sharedDataPath.resolve(customDataDir);\n }\n } else {\n throw new IllegalArgumentException(\"no custom \" + IndexMetaData.SETTING_DATA_PATH + \" setting available\");",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -77,6 +77,7 @@ public void testEnvironmentPaths() throws Exception {\n settingsBuilder.put(\"path.scripts\", esHome.resolve(\"scripts\").toString());\n settingsBuilder.put(\"path.plugins\", esHome.resolve(\"plugins\").toString());\n settingsBuilder.putArray(\"path.data\", esHome.resolve(\"data1\").toString(), esHome.resolve(\"data2\").toString());\n+ settingsBuilder.put(\"path.shared_data\", esHome.resolve(\"custom\").toString());\n settingsBuilder.put(\"path.logs\", esHome.resolve(\"logs\").toString());\n settingsBuilder.put(\"pidfile\", esHome.resolve(\"test.pid\").toString());\n Settings settings = settingsBuilder.build();\n@@ -122,6 +123,7 @@ public void testEnvironmentPaths() throws Exception {\n for (Path dataPath : environment.dataWithClusterFiles()) {\n assertExactPermissions(new FilePermission(dataPath.toString(), \"read,readlink,write,delete\"), permissions);\n }\n+ assertExactPermissions(new FilePermission(environment.sharedDataFile().toString(), \"read,readlink,write,delete\"), permissions);\n // logs: r/w\n assertExactPermissions(new FilePermission(environment.logsFile().toString(), \"read,readlink,write,delete\"), permissions);\n // temp dir: r/w",
"filename": "core/src/test/java/org/elasticsearch/bootstrap/SecurityTests.java",
"status": "modified"
},
{
"diff": "@@ -300,7 +300,7 @@ public void run() {\n @Test\n public void testCustomDataPaths() throws Exception {\n String[] dataPaths = tmpPaths();\n- NodeEnvironment env = newNodeEnvironment(dataPaths, Settings.EMPTY);\n+ NodeEnvironment env = newNodeEnvironment(dataPaths, \"/tmp\", Settings.EMPTY);\n \n Settings s1 = Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).build();\n Settings s2 = Settings.builder().put(IndexMetaData.SETTING_DATA_PATH, \"/tmp/foo\").build();\n@@ -323,7 +323,7 @@ public void testCustomDataPaths() throws Exception {\n env.indexPaths(i), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex\")));\n \n env.close();\n- NodeEnvironment env2 = newNodeEnvironment(dataPaths,\n+ NodeEnvironment env2 = newNodeEnvironment(dataPaths, \"/tmp\",\n Settings.builder().put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH, false).build());\n \n assertThat(env2.availableShardPaths(sid), equalTo(env2.availableShardPaths(sid)));\n@@ -381,4 +381,14 @@ public NodeEnvironment newNodeEnvironment(String[] dataPaths, Settings settings)\n .putArray(\"path.data\", dataPaths).build();\n return new NodeEnvironment(build, new Environment(build));\n }\n+\n+ public NodeEnvironment newNodeEnvironment(String[] dataPaths, String sharedDataPath, Settings settings) throws IOException {\n+ Settings build = Settings.builder()\n+ .put(settings)\n+ .put(\"path.home\", createTempDir().toAbsolutePath().toString())\n+ .put(\"path.shared_data\", sharedDataPath)\n+ .put(NodeEnvironment.SETTING_CUSTOM_DATA_PATH_ENABLED, true)\n+ .putArray(\"path.data\", dataPaths).build();\n+ return new NodeEnvironment(build, new Environment(build));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java",
"status": "modified"
},
{
"diff": "@@ -68,23 +68,44 @@\n @ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0)\n public class IndexWithShadowReplicasIT extends ESIntegTestCase {\n \n- private Settings nodeSettings() {\n+ private Settings nodeSettings(Path dataPath) {\n+ return nodeSettings(dataPath.toString());\n+ }\n+\n+ private Settings nodeSettings(String dataPath) {\n return Settings.builder()\n .put(\"node.add_id_to_custom_path\", false)\n .put(\"node.enable_custom_paths\", true)\n+ .put(\"path.shared_data\", dataPath)\n .put(\"index.store.fs.fs_lock\", randomFrom(\"native\", \"simple\"))\n .build();\n }\n \n+ public void testCannotCreateWithBadPath() throws Exception {\n+ Settings nodeSettings = nodeSettings(\"/badpath\");\n+ internalCluster().startNodesAsync(1, nodeSettings).get();\n+ Settings idxSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_DATA_PATH, \"/etc/foo\")\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).build();\n+ try {\n+ assertAcked(prepareCreate(\"foo\").setSettings(idxSettings));\n+ fail(\"should have failed\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage(),\n+ e.getMessage().contains(\"custom path [/etc/foo] is not a sub-path of path.shared_data\"));\n+ }\n+ }\n+\n /**\n * Tests the case where we create an index without shadow replicas, snapshot it and then restore into\n * an index with shadow replicas enabled.\n */\n public void testRestoreToShadow() throws ExecutionException, InterruptedException {\n- Settings nodeSettings = nodeSettings();\n+ final Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n internalCluster().startNodesAsync(3, nodeSettings).get();\n- final Path dataPath = createTempDir();\n Settings idxSettings = Settings.builder()\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).build();\n@@ -137,11 +158,11 @@ public void testRestoreToShadow() throws ExecutionException, InterruptedExceptio\n \n @Test\n public void testIndexWithFewDocuments() throws Exception {\n- Settings nodeSettings = nodeSettings();\n+ final Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n internalCluster().startNodesAsync(3, nodeSettings).get();\n final String IDX = \"test\";\n- final Path dataPath = createTempDir();\n \n Settings idxSettings = Settings.builder()\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n@@ -200,10 +221,10 @@ public void testIndexWithFewDocuments() throws Exception {\n \n @Test\n public void testReplicaToPrimaryPromotion() throws Exception {\n- Settings nodeSettings = nodeSettings();\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n String node1 = internalCluster().startNode(nodeSettings);\n- Path dataPath = createTempDir();\n String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -259,10 +280,10 @@ public void testReplicaToPrimaryPromotion() throws Exception {\n \n @Test\n public void testPrimaryRelocation() throws Exception {\n- Settings nodeSettings = nodeSettings();\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n String node1 = internalCluster().startNode(nodeSettings);\n- Path dataPath = createTempDir();\n String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -320,10 +341,10 @@ public void testPrimaryRelocation() throws Exception {\n \n @Test\n public void testPrimaryRelocationWithConcurrentIndexing() 
throws Throwable {\n- Settings nodeSettings = nodeSettings();\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n String node1 = internalCluster().startNode(nodeSettings);\n- Path dataPath = createTempDir();\n final String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -393,14 +414,15 @@ public void run() {\n \n @Test\n public void testPrimaryRelocationWhereRecoveryFails() throws Exception {\n+ Path dataPath = createTempDir();\n Settings nodeSettings = Settings.builder()\n .put(\"node.add_id_to_custom_path\", false)\n .put(\"node.enable_custom_paths\", true)\n .put(\"plugin.types\", MockTransportService.Plugin.class.getName())\n+ .put(\"path.shared_data\", dataPath)\n .build();\n \n String node1 = internalCluster().startNode(nodeSettings);\n- Path dataPath = createTempDir();\n final String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -490,11 +512,11 @@ public void run() {\n \n @Test\n public void testIndexWithShadowReplicasCleansUp() throws Exception {\n- Settings nodeSettings = nodeSettings();\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n int nodeCount = randomIntBetween(2, 5);\n internalCluster().startNodesAsync(nodeCount, nodeSettings).get();\n- Path dataPath = createTempDir();\n String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -532,10 +554,10 @@ public void testIndexWithShadowReplicasCleansUp() throws Exception {\n */\n @Test\n public void testShadowReplicaNaturalRelocation() throws Exception {\n- Settings nodeSettings = nodeSettings();\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n internalCluster().startNodesAsync(2, nodeSettings).get();\n- Path dataPath = createTempDir();\n String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -586,10 +608,10 @@ public void run() {\n \n @Test\n public void testShadowReplicasUsingFieldData() throws Exception {\n- Settings nodeSettings = nodeSettings();\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n \n internalCluster().startNodesAsync(3, nodeSettings).get();\n- Path dataPath = createTempDir();\n String IDX = \"test\";\n \n Settings idxSettings = Settings.builder()\n@@ -655,15 +677,15 @@ public void run() {\n \n @Test\n public void testIndexOnSharedFSRecoversToAnyNode() throws Exception {\n- Settings nodeSettings = nodeSettings();\n+ Path dataPath = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath);\n Settings fooSettings = Settings.builder().put(nodeSettings).put(\"node.affinity\", \"foo\").build();\n Settings barSettings = Settings.builder().put(nodeSettings).put(\"node.affinity\", \"bar\").build();\n \n final Future<List<String>> fooNodes = internalCluster().startNodesAsync(2, fooSettings);\n final Future<List<String>> barNodes = internalCluster().startNodesAsync(2, barSettings);\n fooNodes.get();\n barNodes.get();\n- Path dataPath = createTempDir();\n String IDX = \"test\";\n \n Settings includeFoo = Settings.builder()",
"filename": "core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java",
"status": "modified"
},
{
"diff": "@@ -42,10 +42,24 @@\n /**\n * Tests for custom data path locations and templates\n */\n+@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0)\n public class IndicesCustomDataPathIT extends ESIntegTestCase {\n \n private String path;\n \n+ private Settings nodeSettings(Path dataPath) {\n+ return nodeSettings(dataPath.toString());\n+ }\n+\n+ private Settings nodeSettings(String dataPath) {\n+ return Settings.builder()\n+ .put(\"node.add_id_to_custom_path\", false)\n+ .put(\"node.enable_custom_paths\", true)\n+ .put(\"path.shared_data\", dataPath)\n+ .put(\"index.store.fs.fs_lock\", randomFrom(\"native\", \"simple\"))\n+ .build();\n+ }\n+\n @Before\n public void setup() {\n path = createTempDir().toAbsolutePath().toString();\n@@ -61,6 +75,7 @@ public void teardown() throws Exception {\n public void testDataPathCanBeChanged() throws Exception {\n final String INDEX = \"idx\";\n Path root = createTempDir();\n+ internalCluster().startNodesAsync(1, nodeSettings(root));\n Path startDir = root.resolve(\"start\");\n Path endDir = root.resolve(\"end\");\n logger.info(\"--> start dir: [{}]\", startDir.toAbsolutePath().toString());\n@@ -128,9 +143,12 @@ public void testDataPathCanBeChanged() throws Exception {\n @Test\n public void testIndexCreatedWithCustomPathAndTemplate() throws Exception {\n final String INDEX = \"myindex2\";\n+ internalCluster().startNodesAsync(1, nodeSettings(path));\n \n logger.info(\"--> creating an index with data_path [{}]\", path);\n- Settings.Builder sb = Settings.builder().put(IndexMetaData.SETTING_DATA_PATH, path);\n+ Settings.Builder sb = Settings.builder()\n+ .put(IndexMetaData.SETTING_DATA_PATH, path)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0);\n \n client().admin().indices().prepareCreate(INDEX).setSettings(sb).get();\n ensureGreen(INDEX);",
"filename": "core/src/test/java/org/elasticsearch/indices/IndicesCustomDataPathIT.java",
"status": "modified"
},
{
"diff": "@@ -30,8 +30,10 @@\n \n import org.apache.commons.lang3.StringUtils;\n import org.apache.http.impl.client.HttpClients;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.MergeSchedulerConfig;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.LuceneTestCase;\n@@ -691,15 +693,12 @@ public Settings indexSettings() {\n if (numberOfReplicas >= 0) {\n builder.put(SETTING_NUMBER_OF_REPLICAS, numberOfReplicas).build();\n }\n- // norelease: disabled because custom data paths don't play well against\n- // an external test cluster: the security manager is not happy that random\n- // files are touched. See http://build-us-00.elastic.co/job/es_core_master_strong/4357/console\n // 30% of the time\n- // if (randomInt(9) < 3) {\n- // final Path dataPath = createTempDir();\n- // logger.info(\"using custom data_path for index: [{}]\", dataPath);\n- // builder.put(IndexMetaData.SETTING_DATA_PATH, dataPath);\n- // }\n+ if (randomInt(9) < 3) {\n+ final String dataPath = randomAsciiOfLength(10);\n+ logger.info(\"using custom data_path for index: [{}]\", dataPath);\n+ builder.put(IndexMetaData.SETTING_DATA_PATH, dataPath);\n+ }\n return builder.build();\n }\n \n@@ -1616,6 +1615,7 @@ protected Settings nodeSettings(int nodeOrdinal) {\n // from failing on nodes without enough disk space\n .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK, \"1b\")\n .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK, \"1b\")\n+ .put(NodeEnvironment.SETTING_CUSTOM_DATA_PATH_ENABLED, true)\n .put(\"script.indexed\", \"on\")\n .put(\"script.inline\", \"on\")\n // wait short time for other active shards before actually deleting, default 30s not needed in tests",
"filename": "core/src/test/java/org/elasticsearch/test/ESIntegTestCase.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.node.Node;\n@@ -118,16 +119,20 @@ protected boolean resetNodeAfterTest() {\n \n private static Node newNode() {\n Node build = NodeBuilder.nodeBuilder().local(true).data(true).settings(Settings.builder()\n- .put(ClusterName.SETTING, InternalTestCluster.clusterName(\"single-node-cluster\", randomLong()))\n- .put(\"path.home\", createTempDir())\n- .put(\"node.name\", nodeName())\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n- .put(\"script.inline\", \"on\")\n- .put(\"script.indexed\", \"on\")\n- .put(EsExecutors.PROCESSORS, 1) // limit the number of threads created\n- .put(\"http.enabled\", false)\n- .put(InternalSettingsPreparer.IGNORE_SYSTEM_PROPERTIES_SETTING, true) // make sure we get what we set :)\n+ .put(ClusterName.SETTING, InternalTestCluster.clusterName(\"single-node-cluster\", randomLong()))\n+ .put(\"path.home\", createTempDir())\n+ // TODO: use a consistent data path for custom paths\n+ // This needs to tie into the ESIntegTestCase#indexSettings() method\n+ .put(\"path.shared_data\", createTempDir().getParent())\n+ .put(\"node.name\", nodeName())\n+ .put(NodeEnvironment.SETTING_CUSTOM_DATA_PATH_ENABLED, true)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .put(\"script.inline\", \"on\")\n+ .put(\"script.indexed\", \"on\")\n+ .put(EsExecutors.PROCESSORS, 1) // limit the number of threads created\n+ .put(\"http.enabled\", false)\n+ .put(InternalSettingsPreparer.IGNORE_SYSTEM_PROPERTIES_SETTING, true) // make sure we get what we set :)\n ).build();\n build.start();\n assertThat(DiscoveryNode.localNode(build.settings()), is(true));",
"filename": "core/src/test/java/org/elasticsearch/test/ESSingleNodeTestCase.java",
"status": "modified"
},
{
"diff": "@@ -301,6 +301,7 @@ public InternalTestCluster(long clusterSeed, Path baseDir,\n builder.put(\"path.data\", dataPath.toString());\n }\n }\n+ builder.put(\"path.shared_data\", baseDir.resolve(\"custom\"));\n builder.put(\"path.home\", baseDir);\n builder.put(\"path.repo\", baseDir.resolve(\"repos\"));\n builder.put(\"transport.tcp.port\", BASE_PORT + \"-\" + (BASE_PORT + 100));",
"filename": "core/src/test/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -14,26 +14,20 @@ settings, you need to enable using it in elasticsearch.yml:\n [source,yaml]\n --------------------------------------------------\n node.enable_custom_paths: true\n+node.add_id_to_custom_path: false\n --------------------------------------------------\n \n-You will also need to disable the default security manager that Elasticsearch\n-runs with. You can do this by either passing\n-`-Des.security.manager.enabled=false` with the parameters while starting\n-Elasticsearch, or you can disable it in elasticsearch.yml:\n+You will also need to indicate to the security manager where the custom indices\n+will be, so that the correct permissions can be applied. You can do this by\n+setting the `path.shared_data` setting in elasticsearch.yml:\n \n [source,yaml]\n --------------------------------------------------\n-security.manager.enabled: false\n+path.shared_data: /opt/data\n --------------------------------------------------\n \n-[WARNING]\n-========================\n-Disabling the security manager means that the Elasticsearch process is not\n-limited to the directories and files that it can read and write. However,\n-because the `index.data_path` setting is set when creating the index, the\n-security manager would prevent writing or reading from the index's location, so\n-it must be disabled.\n-========================\n+This means that Elasticsearch can read and write to files in any subdirectory of\n+the `path.shared_data` setting.\n \n You can then create an index with a custom data path, where each node will use\n this path for the data:\n@@ -54,15 +48,15 @@ curl -XPUT 'localhost:9200/my_index' -d '\n \"index\" : {\n \"number_of_shards\" : 1,\n \"number_of_replicas\" : 4,\n- \"data_path\": \"/var/data/my_index\",\n+ \"data_path\": \"/opt/data/my_index\",\n \"shadow_replicas\": true\n } \n }'\n --------------------------------------------------\n \n [WARNING]\n ========================\n-In the above example, the \"/var/data/my_index\" path is a shared filesystem that\n+In the above example, the \"/opt/data/my_index\" path is a shared filesystem that\n must be available on every node in the Elasticsearch cluster. You must also\n ensure that the Elasticsearch process has the correct permissions to read from\n and write to the directory used in the `index.data_path` setting.",
"filename": "docs/reference/indices/shadow-replicas.asciidoc",
"status": "modified"
}
]
} |
{
"body": "In my work on #12646 I've noticed that the test for removing the rpm fails about 30% of the time. I don't know why yet. I'll work on that next but I don't think that should block #12646.\n",
"comments": [
{
"body": "Here is the error:\n\n```\n [exec] centos-7: ok 26 [RPM] rpm command is available\n [exec] centos-7: ok 27 [RPM] package is available\n [exec] centos-7: ok 28 [RPM] package is not installed\n [exec] centos-7: ok 29 [RPM] install package\n [exec] centos-7: ok 30 [RPM] package is installed\n [exec] centos-7: ok 31 [RPM] verify package installation\n [exec] centos-7: ok 32 [RPM] test elasticsearch\n [exec] centos-7: ok 33 [RPM] remove package\n [exec] centos-7: ok 34 [RPM] package has been removed\n [exec] centos-7: not ok 35 [RPM] verify package removal\n [exec] centos-7: # (in test file /vagrant/tests/src/test/resources/packaging/scripts/40_rpm_package.bats, line 122)\n [exec] centos-7: # `[ \"$status\" -eq 1 ] || [ \"$status\" -eq 0 ]' failed\n```\n",
"created_at": "2015-08-06T01:06:26Z"
}
],
"number": 12682,
"title": "Bats test for removing the rpm fails every once in a while"
} | {
"body": "Bats testing uncovered a useless systemctl check, that resulted in an\nerror, because the systemctl file was uninstalled, but we hoped to\ncheck for an explicetely configured SystemExitCode.\n\nIn addition we did not reload the systemctl configuration when uninstalling\nelasticsearch, which now is fixed as well.\n\nCloses #12682\n",
"number": 12724,
"review_comments": [],
"title": "Bats testing: Remove useless systemctl check"
} | {
"commits": [
{
"message": "Bats testing: Remove useless systemctl check\n\nBats testing uncovered a useless systemctl check, that resulted in an\nerror, because the systemctl file was uninstalled, but we hoped to\ncheck for an explicetely configured SystemExitCode.\n\nIn addition we did not reload the systemctl configuration when uninstalling\nelasticsearch, which now is fixed as well.\n\nCloses #12682"
}
],
"files": [
{
"diff": "@@ -68,7 +68,7 @@ fi\n \n if [ \"$REMOVE_SERVICE\" = \"true\" ]; then\n if command -v systemctl >/dev/null; then\n- systemctl --no-reload disable elasticsearch.service > /dev/null 2>&1 || true\n+ systemctl disable elasticsearch.service > /dev/null 2>&1 || true\n fi\n \n if command -v chkconfig >/dev/null; then",
"filename": "distribution/src/main/packaging/scripts/postrm",
"status": "modified"
},
{
"diff": "@@ -116,11 +116,6 @@ setup() {\n # The removal must disable the service\n # see prerm file\n if is_systemd; then\n- # Redhat based systemd distros usually returns exit code 1\n- # OpenSUSE13 returns 0\n- run systemctl status elasticsearch.service\n- [ \"$status\" -eq 1 ] || [ \"$status\" -eq 0 ]\n-\n run systemctl is-enabled elasticsearch.service\n [ \"$status\" -eq 1 ]\n fi",
"filename": "distribution/src/test/resources/packaging/scripts/40_rpm_package.bats",
"status": "modified"
}
]
} |
{
"body": "I'm not sure if I should do this. \n\nCloses #12677 \n",
"comments": [
{
"body": "I actually think maybe we should switch to `\\\"$@\\\"` instead of `\\\"$*\\\"`, see http://www.bashguru.com/2009/11/how-to-pass-arguments-to-shell-script.html for the difference when quoted (particularly for files with spaces in the name).\n",
"created_at": "2015-08-07T03:27:29Z"
},
{
"body": "Hmm, probably I am not quite understand. But I don't think there is any difference between `\\\"$*\\\"` and `\\\"$@\\\"`, we escape the double quotes there which will be passed into java program, and `$*` and `$@` became same. \n\nTry this dirty bash script \n\n```\n#!/bin/bash\n\nfunction print_args_at {\n printf \"%s\\n\" \"$@\"\n}\n\nfunction print_args_star {\n printf \"%s\\n\" \"$*\"\n}\n\nfunction print_args_star1 {\n printf \"%s\\n\" \\\"$*\\\"\n}\n\nfunction print_args_star2 {\n printf \"%s\\n\" \\\"$@\\\"\n}\n\nprint_args_at \"one\" \"two three\" \"four\"\nprint_args_star \"one\" \"two three\" \"four\"\nprint_args_star1 \"one\" \"two three\" \"four\"\nprint_args_star2 \"one\" \"two three\" \"four\"\n```\n",
"created_at": "2015-08-07T03:52:40Z"
},
{
"body": "I am working to make integration tests always use spaces in directory names (some fixes are needed) to blast out these bugs. But I agree with @dakrone here.\n",
"created_at": "2015-08-07T04:55:06Z"
},
{
"body": "OK, thanks guys, updated. \n",
"created_at": "2015-08-07T05:19:23Z"
},
{
"body": "@spinscale @xuzha is this PR related to https://github.com/elastic/elasticsearch/pull/12709 ?\n",
"created_at": "2015-08-07T11:16:18Z"
},
{
"body": "Does this work for you? `./bin/elasticsearch -Des.pidfile=\"/path/with space/es.pid\"` - results in this error to me:\n\n```\nException in thread \"main\" java.nio.file.AccessDeniedException: /path\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)\n at java.nio.file.Files.createDirectory(Files.java:674)\n at java.nio.file.Files.createAndCheckIsDirectory(Files.jav\n...\n```\n\nAlso, calling something like `/bin/elasticsearch --http.cors.enabled true --http.cors.allow-origin 'foo'` results in this as only a single argument is passed?\n\n```\nERROR: Parameter [http.cors.enabled true http.cors.allow-origin foo] needs value\n```\n",
"created_at": "2015-08-07T12:35:34Z"
},
{
"body": "@spinscale You are right, adding \\\" \\\" make them only a single argument is passed. But AccessDeniedException is the expected exception, right?\n\nAnyway, Robert has a much better fix 12710. Ill just close this dumb one.\n",
"created_at": "2015-08-07T16:26:35Z"
}
],
"number": 12709,
"title": "Escape CLI parameters when starting elasticsearch"
} | {
"body": "By using pathnames with spaces in tests we can kickout all the bugs.\nI applied the fix for #12709 but we needed more fixes actually.\n\nTODO: windows\n",
"number": 12710,
"review_comments": [],
"title": "use spaces liberally in integration tests and fix space handling"
} | {
"commits": [
{
"message": "use spaces liberally in integration tests and fix space handling"
},
{
"message": "cleanup"
},
{
"message": "put this back, we dont test it. wont be my fault"
},
{
"message": "improve execution output"
}
],
"files": [
{
"diff": "@@ -13,7 +13,6 @@\n <!-- runs an OS script -->\n <macrodef name=\"run-script\">\n <attribute name=\"script\"/>\n- <attribute name=\"args\"/>\n <attribute name=\"spawn\" default=\"false\"/>\n <element name=\"nested\" optional=\"true\"/>\n <sequential>\n@@ -23,23 +22,24 @@\n </condition>\n \n <!-- create a temp CWD, to enforce that commands don't rely on CWD -->\n- <mkdir dir=\"${integ.temp}\"/>\n+ <local name=\"temp.cwd\"/>\n+ <tempfile property=\"temp.cwd\" destDir=\"${integ.temp}\"/>\n+ <mkdir dir=\"${temp.cwd}\"/>\n \n <!-- print commands we run -->\n <local name=\"script.base\"/>\n <basename file=\"@{script}\" property=\"script.base\"/>\n- <echo>execute: ${script.base} @{args}</echo>\n+ <!-- crappy way to output, but we need it. make it nice later -->\n+ <echoxml><exec script=\"${script.base}\"><nested/></exec></echoxml>\n \n- <exec executable=\"cmd\" osfamily=\"winnt\" dir=\"${integ.temp}\" failonerror=\"${failonerror}\" spawn=\"@{spawn}\" taskname=\"${script.base}\">\n+ <exec executable=\"cmd\" osfamily=\"winnt\" dir=\"${temp.cwd}\" failonerror=\"${failonerror}\" spawn=\"@{spawn}\" taskname=\"${script.base}\">\n <arg value=\"/c\"/>\n <arg value=\"@{script}.bat\"/>\n- <arg line=\"@{args}\"/>\n <nested/>\n </exec>\n \n- <exec executable=\"sh\" osfamily=\"unix\" dir=\"${integ.temp}\" failonerror=\"${failonerror}\" spawn=\"@{spawn}\" taskname=\"${script.base}\">\n+ <exec executable=\"sh\" osfamily=\"unix\" dir=\"${temp.cwd}\" failonerror=\"${failonerror}\" spawn=\"@{spawn}\" taskname=\"${script.base}\">\n <arg value=\"@{script}\"/>\n- <arg line=\"@{args}\"/>\n <nested/>\n </exec>\n </sequential>\n@@ -86,7 +86,14 @@\n \n <!-- install plugin -->\n <echo>Installing plugin @{name}...</echo>\n- <run-script script=\"@{home}/bin/plugin\" args=\"install @{name} -u ${url}\"/>\n+ <run-script script=\"@{home}/bin/plugin\">\n+ <nested>\n+ <arg value=\"install\"/>\n+ <arg value=\"@{name}\"/>\n+ <arg value=\"-u\"/>\n+ <arg value=\"${url}\"/>\n+ </nested>\n+ </run-script>\n \n <!-- check that plugin was installed into correct place -->\n <local name=\"longname\"/>\n@@ -133,37 +140,27 @@\n <attribute name=\"es.http.port\" default=\"${integ.http.port}\"/>\n <attribute name=\"es.transport.tcp.port\" default=\"${integ.transport.port}\"/>\n <attribute name=\"es.pidfile\" default=\"${integ.pidfile}\"/>\n- <attribute name=\"additional.args\" default=\"\"/>\n <attribute name=\"jvm.args\" default=\"${tests.jvm.argline}\"/>\n <sequential>\n \n- <!-- build args to pass to es -->\n- <local name=\"integ.args\"/>\n- <property name=\"integ.args\" value=\"\n--Des.cluster.name=@{es.cluster.name}\n--Des.http.port=@{es.http.port}\n--Des.transport.tcp.port=@{es.transport.tcp.port}\n--Des.pidfile=@{es.pidfile}\n--Des.path.repo=@{home}/repo\n--Des.discovery.zen.ping.multicast.enabled=false\n--Des.script.inline=on\n--Des.script.indexed=on\n--Des.repositories.url.allowed_urls=http://snapshot.test*\n-@{additional.args}\"\n- />\n-\n <!-- run bin/elasticsearch with args -->\n <echo>Starting up external cluster...</echo>\n- <echo>JAVA=${java.home}</echo>\n- <echo>ARGS=@{jvm.args}</echo>\n \n <run-script script=\"@{home}/bin/elasticsearch\" \n- spawn=\"@{spawn}\"\n- args=\"${integ.args}\">\n+ spawn=\"@{spawn}\">\n <nested>\n <env key=\"JAVA_HOME\" value=\"${java.home}\"/>\n <!-- we pass these as gc options, even if they arent, to avoid conflicting gc options -->\n <env key=\"ES_GC_OPTS\" value=\"@{jvm.args}\"/>\n+ <arg value=\"-Des.cluster.name=@{es.cluster.name}\"/>\n+ <arg value=\"-Des.http.port=@{es.http.port}\"/>\n+ 
<arg value=\"-Des.transport.tcp.port=@{es.transport.tcp.port}\"/>\n+ <arg value=\"-Des.pidfile=@{es.pidfile}\"/>\n+ <arg value=\"-Des.path.repo=@{home}/repo\"/>\n+ <arg value=\"-Des.discovery.zen.ping.multicast.enabled=false\"/>\n+ <arg value=\"-Des.script.inline=on\"/>\n+ <arg value=\"-Des.script.indexed=on\"/>\n+ <arg value=\"-Des.repositories.url.allowed_urls=http://snapshot.test*\"/>\n </nested>\n </run-script>\n \n@@ -306,7 +303,7 @@\n <arg value=\"-q\"/>\n <arg value=\"-i\"/>\n <arg value=\"-p\"/>\n- <arg value=\"${rpm.file}\"/> \n+ <arg value=\"${rpm.file}\"/>\n </exec>\n <!-- extract contents from .rpm package -->\n <exec executable=\"rpm\" failonerror=\"true\" taskname=\"rpm\">\n@@ -319,7 +316,7 @@\n <arg value=\"--noscripts\"/> \n <arg value=\"--notriggers\"/> \n <arg value=\"-i\"/>\n- <arg value=\"${rpm.file}\"/> \n+ <arg value=\"${rpm.file}\"/>\n </exec>\n </sequential>\n </target>",
"filename": "dev-tools/src/main/resources/ant/integration-tests.xml",
"status": "modified"
},
{
"diff": "@@ -126,11 +126,11 @@ export HOSTNAME=`hostname -s`\n # manual parsing to find out, if process should be detached\n daemonized=`echo $* | grep -E -- '(^-d |-d$| -d |--daemonize$|--daemonize )'`\n if [ -z \"$daemonized\" ] ; then\n- eval exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start $*\n+ exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n+ org.elasticsearch.bootstrap.Elasticsearch start \"$@\"\n else\n- eval exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start $* <&- &\n+ exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n+ org.elasticsearch.bootstrap.Elasticsearch start \"$@\" <&- &\n fi\n \n exit $?",
"filename": "distribution/src/main/resources/bin/elasticsearch",
"status": "modified"
},
{
"diff": "@@ -62,7 +62,7 @@ if [ -n \"$ES_GC_LOG_FILE\" ]; then\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintClassHistogram\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintTenuringDistribution\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime\"\n- JAVA_OPTS=\"$JAVA_OPTS \\\"-Xloggc:$ES_GC_LOG_FILE\\\"\"\n+ JAVA_OPTS=\"$JAVA_OPTS -Xloggc:\\\"$ES_GC_LOG_FILE\\\"\"\n \n # Ensure that the directory for the log file exists: the JVM will not create it.\n mkdir -p \"`dirname \\\"$ES_GC_LOG_FILE\\\"`\"",
"filename": "distribution/src/main/resources/bin/elasticsearch.in.sh",
"status": "modified"
},
{
"diff": "@@ -106,8 +106,8 @@\n <tests.ifNoTests>fail</tests.ifNoTests>\n <skip.unit.tests>${skipTests}</skip.unit.tests>\n <skip.integ.tests>${skipTests}</skip.integ.tests>\n- <integ.scratch>${project.build.directory}/integ-tests</integ.scratch>\n- <integ.deps>${project.build.directory}/integ-deps</integ.deps>\n+ <integ.scratch>${project.build.directory}/integ tests</integ.scratch>\n+ <integ.deps>${project.build.directory}/integ deps</integ.deps>\n <integ.temp>${integ.scratch}/temp</integ.temp>\n <integ.http.port>9400</integ.http.port>\n <integ.transport.port>9500</integ.transport.port>",
"filename": "pom.xml",
"status": "modified"
}
]
} |
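The ant and shell changes in #12710 above share one design point: argument boundaries have to be preserved end to end, so each argument is passed as its own element (nested `<arg value="..."/>`, quoted `"$@"`) rather than being concatenated into a single string that later gets re-split on whitespace. The Java sketch below is only an analogy and not part of the PR; the pidfile path is a made-up example containing a space.

```java
import java.util.Arrays;
import java.util.List;

public class ArgumentBoundarySketch {
    public static void main(String[] args) {
        // Hypothetical setting value containing a space, like the pidfile path from #12677.
        String pidfile = "/path/with space/es.pid";

        // Safe: every argument is its own element, so the embedded space survives intact.
        // This mirrors ant's nested <arg value="..."/> elements and the quoted "$@" in sh.
        List<String> discrete = Arrays.asList("elasticsearch", "start", "-Des.pidfile=" + pidfile);
        System.out.println("discrete elements: " + discrete.size()); // 3

        // Lossy: concatenating into one string and re-splitting on whitespace breaks the
        // path into two tokens. This mirrors ant's single args="..." attribute and an
        // unquoted $* in sh.
        String joined = String.join(" ", discrete);
        List<String> resplit = Arrays.asList(joined.split("\\s+"));
        System.out.println("re-split elements: " + resplit.size()); // 4 -- "space/es.pid" became its own token
    }
}
```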
{
"body": "If you attempt to start elasticsearch and have an argument that has a space in it, the argument is not parsed correctly and the value after the space is treated as a separate argument.\n\n```\n$ bin/elasticsearch -Des.pidfile=\"/path/with space/es.pid\"\nERROR: Parameter [space/es.pid]does not start with --\n```\n\nThe fix is probably similar to the ones in #12508\n",
"comments": [
{
"body": "I'm not quite sure we should change this, user could escape the ' \" ' by doing this \n\n```\nbin/elasticsearch -Des.pidfile=\\\"/path/with space/es.pid\\\"\n\n```\n",
"created_at": "2015-08-06T23:33:07Z"
},
{
"body": "Thanks for opening a PR @xuzha. I think having to escape the quotes on the command line makes it less user friendly. When I think of running commands with spaces, I would expect to need to escape spaces or put quotes around the item with spaces; I didn't expect that I'd need to escape the quotes.\n",
"created_at": "2015-08-07T10:16:32Z"
},
{
"body": "Closed by #12710\n",
"created_at": "2015-08-11T16:16:53Z"
}
],
"number": 12677,
"title": "elasticsearch script does not work with arguments that have spaces"
} | {
"body": "I'm not sure if I should do this. \n\nCloses #12677 \n",
"number": 12709,
"review_comments": [],
"title": "Escape CLI parameters when starting elasticsearch"
} | {
"commits": [
{
"message": "Escape quote, use $@ instead of $*\n\nCloses #12677"
}
],
"files": [
{
"diff": "@@ -127,10 +127,10 @@ export HOSTNAME=`hostname -s`\n daemonized=`echo $* | grep -E -- '(^-d |-d$| -d |--daemonize$|--daemonize )'`\n if [ -z \"$daemonized\" ] ; then\n eval exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start $*\n+ org.elasticsearch.bootstrap.Elasticsearch start \\\"$@\\\"\n else\n eval exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start $* <&- &\n+ org.elasticsearch.bootstrap.Elasticsearch start \\\"$@\\\" <&- &\n fi\n \n exit $?",
"filename": "distribution/src/main/resources/bin/elasticsearch",
"status": "modified"
}
]
} |
{
"body": "``` bash\n[vagrant@localhost releases]$ sudo rpm -if elasticsearch-2.0.0-beta1-SNAPSHOT.rpm \nwarning: elasticsearch-2.0.0-beta1-SNAPSHOT.rpm: Header V4 RSA/SHA1 Signature, key ID cd27c2b4: NOKEY\n package elasticsearch-2.0.0-beta1_SNAPSHOT20150806181247.noarch is intended for a different operating system\n[vagrant@localhost releases]$ man rpm\n[vagrant@localhost releases]$ sudo rpm -i --ignoreos elasticsearch-2.0.0-beta1-SNAPSHOT.rpm \nwarning: elasticsearch-2.0.0-beta1-SNAPSHOT.rpm: Header V4 RSA/SHA1 Signature, key ID cd27c2b4: NOKEY\nCreating elasticsearch group... OK\nCreating elasticsearch user... OK\n### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd\n sudo systemctl daemon-reload\n sudo systemctl enable elasticsearch.service\n### You can start elasticsearch service by executing\n sudo systemctl start elasticsearch.service\n[vagrant@localhost releases]$ sudo service elasticsearch start\nStarting elasticsearch (via systemctl): [ OK ]\n[vagrant@localhost releases]$ \n```\n",
"comments": [
{
"body": "This is master.\n",
"created_at": "2015-08-06T18:35:53Z"
}
],
"number": 12701,
"title": "RPMs build on osx won't install on centos"
} | {
"body": "Closes #12701\n",
"number": 12706,
"review_comments": [],
"title": "Let OSX build rpms for linux"
} | {
"commits": [
{
"message": "Let OSX build rpms for linux\n\nCloses #12701"
}
],
"files": [
{
"diff": "@@ -105,6 +105,13 @@\n <group>Application/Internet</group>\n <packager>Elasticsearch</packager>\n <prefix>/usr</prefix>\n+ <!-- To get rpm-maven-plugin to pickup targetOS you need\n+ to specify needarch too.... If you don't specify\n+ targetOS then it'll just use whatever the current\n+ machine's OS is which means you can't build rpms for\n+ linux on OSX even if you have rpmbuild.... -->\n+ <needarch>noarch</needarch>\n+ <targetOS>linux</targetOS>\n <changelogFile>src/changelog</changelogFile>\n <defineStatements>\n <defineStatement>_unpackaged_files_terminate_build 0</defineStatement>",
"filename": "distribution/rpm/pom.xml",
"status": "modified"
}
]
} |
{
"body": "Starting Elasticsearch as follows:\n\n```\n./bin/elasticsearch --http.cors.enabled true --http.cors.allow-origin '*' \n```\n\nbreaks with (eg) \n\n```\nERROR: Parameter [NOTICE.txt]does not start with --\n```\n\nThe wildcard is being expanded by the CLI parsing, although that should be done by the shell (and in this case shouldn't be done because the wildcard is in quotes)\n",
"comments": [
{
"body": "Closed by #12710\n",
"created_at": "2015-08-13T09:43:37Z"
}
],
"number": 12689,
"title": "Don't expand wildcards in command line options"
} | {
"body": "In order to not accidentally expand wilcard arguments\nlike --http.cors.allow-origin '*' on startup, globbing\nneeds to be disabled before Elasticsearch is started.\n\nCloses #12689\n",
"number": 12692,
"review_comments": [],
"title": "Startup: Disable globbing in shell script"
} | {
"commits": [
{
"message": "Startup: Disable globbing in shell script\n\nIn order to not accidentally expand wilcard arguments\nlike --http.cors.allow-origin '*' on startup, globbing\nneeds to be disabled before Elasticsearch is started.\n\nCloses #12689"
}
],
"files": [
{
"diff": "@@ -125,6 +125,7 @@ export HOSTNAME=`hostname -s`\n \n # manual parsing to find out, if process should be detached\n daemonized=`echo $* | grep -E -- '(^-d |-d$| -d |--daemonize$|--daemonize )'`\n+set -f\n if [ -z \"$daemonized\" ] ; then\n eval exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" \\\n org.elasticsearch.bootstrap.Elasticsearch start $*",
"filename": "distribution/src/main/resources/bin/elasticsearch",
"status": "modified"
}
]
} |
{
"body": "A common query containing only stopwords causes a NullPointerException if the query has a _name property. This doesn't happen without the _name property, and doesn't happen with other types of queries.\n\n```\n# create index \ncurl -XPOST localhost:9200/test -d '{\n \"settings\" : {\n \"number_of_shards\" : 1\n },\n \"mappings\" : {\n \"type1\" : {\n \"properties\" : {\n \"name\" : { \"type\" : \"string\", \"analyzer\" : \"stop\" }\n }\n }\n }\n}'\n```\n\n```\n# common query with a stop word correctly returns no results\ncurl -XGET localhost:9200/test/type1/_search -d '{\n \"query\": {\n \"common\": {\n \"name\": {\n \"query\": \"the\"\n }\n }\n }\n}'\n\n{\"took\":35,\"timed_out\":false,\"_shards\":{\"total\":1,\"successful\":1,\"failed\":0},\n\"hits\":{\"total\":0,\"max_score\":null,\"hits\":[]}}\n```\n\n```\n# common query with a _name causes a null pointer exception\ncurl -XGET localhost:9200/test/type1/_search -d '{\n \"query\": {\n \"common\": {\n \"name\": {\n \"query\": \"the\",\n \"_name\": \"queryname\"\n }\n }\n }\n}'\n\n{\"error\":\"SearchPhaseExecutionException[Failed to execute phase [query_fetch], all shards failed; shardFailures {[vFIFOFczQTSxJJO-kobPBQ][test][0]: SearchParseException[[test][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \\\"query\\\": {\n \\\"common\\\": {\n \\\"name\\\": {\n \\\"query\\\": \\\"the\\\",\n \\\"_name\\\": \\\"queryname\\\"\n }\n }\n }\n}]]]; nested: NullPointerException[Query may not be null]; }]\",\"status\":400}muzio:~ brett$\n\n```\n\n```\n# a regular match query doesn't have this problem\ncurl -XGET 10.4.4.118:9200/test/type1/_search -d '{\n \"query\": {\n \"match\": {\n \"name\": {\n \"query\": \"the\",\n \"_name\": \"queryname\"\n }\n }\n }\n}'\n```\n",
"comments": [
{
"body": "@brettrp Thank you for reporting this issue! The underlying issue is that because of the analyzer you have configured (the stop analyzer), the query is parsed to the null query. This issue helped us uncover a deeper bug in our handling of named queries that are null. We will have a fix in place soon.\n",
"created_at": "2015-08-06T13:02:46Z"
}
],
"number": 12683,
"title": "common terms query containing only stopwords with a _name causes a null pointer exception"
} | {
"body": "Adding a named query that is null can lead to a `NullPointerException`\nwhen copying the named queries. This is due to an implementation detail\nin [`QueryParseContent.copyNamedQueries`](https://github.com/elastic/elasticsearch/blob/d0abffc9acb9afddc83ae99ae17848b813fd918f/core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java#L188). In particular, this method uses\n[`com.google.common.collect.ImmutableMap.copyOf`](http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/ImmutableMap.html#copyOf%28java.util.Map%29). A documented requirement\nof [`ImmutableMap`](http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/ImmutableMap.html) is that none of the entries have a null key nor null\nvalue. Therefore, we should not add such queries to the [`namedQueries`](https://github.com/elastic/elasticsearch/blob/d0abffc9acb9afddc83ae99ae17848b813fd918f/core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java#L86)\nmap. This will not change any behavior since [`Map.get`](http://docs.oracle.com/javase/7/docs/api/java/util/Map.html#get%28java.lang.Object%29) returns null if no\nentry with the given key exists anyway.\n\nCloses #12683\n",
"number": 12691,
"review_comments": [
{
"body": "I think you can shorten the mapping here: `addMapping(type, \"name\", \"type=string, analyzer=stop\"` that hurts my eyes less :)\n",
"created_at": "2015-08-06T13:02:04Z"
},
{
"body": "shall we assert on something at the end?\n",
"created_at": "2015-08-06T13:02:37Z"
},
{
"body": "Incorporated, thanks!\n",
"created_at": "2015-08-06T13:32:59Z"
},
{
"body": "Agree, thanks!\n",
"created_at": "2015-08-06T13:33:15Z"
}
],
"title": "Do not track named queries that are null"
} | {
"commits": [
{
"message": "Do not track named queries that are null\n\nAdding a named query that is null can lead to a NullPointerException\nwhen copying the named queries. This is due to an implementation detail\nin QueryParseContent.copyNamedQueries. In particular, this method uses\ncom.google.common.collect.ImmutableMap.copyOf. A documented requirement\nof ImmutableMap is that none of the entries have a null key nor null\nvalue. Therefore, we should not add such queries to the namedQueries\nmap. This will not change any behavior since Map.get returns null if no\nentry with the given key exists anyway.\n\nCloses #12683"
}
],
"files": [
{
"diff": "@@ -182,7 +182,9 @@ public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType mapper) {\n }\n \n public void addNamedQuery(String name, Query query) {\n- namedQueries.put(name, query);\n+ if (query != null) {\n+ namedQueries.put(name, query);\n+ }\n }\n \n public ImmutableMap<String, Query> copyNamedQueries() {",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,52 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.query;\n+\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+public class CommonTermsQueryParserTest extends ESSingleNodeTestCase {\n+ @Test\n+ public void testWhenParsedQueryIsNullNoNullPointerExceptionIsThrown() throws IOException {\n+ final String index = \"test-index\";\n+ final String type = \"test-type\";\n+ client()\n+ .admin()\n+ .indices()\n+ .prepareCreate(index)\n+ .addMapping(type, \"name\", \"type=string,analyzer=stop\")\n+ .execute()\n+ .actionGet();\n+ ensureGreen();\n+\n+ CommonTermsQueryBuilder commonTermsQueryBuilder =\n+ new CommonTermsQueryBuilder(\"name\", \"the\").queryName(\"query-name\");\n+\n+ // the named query parses to null; we are testing this does not cause a NullPointerException\n+ SearchResponse response =\n+ client().prepareSearch(index).setTypes(type).setQuery(commonTermsQueryBuilder).execute().actionGet();\n+\n+ assertNotNull(response);\n+ assertEquals(response.getHits().hits().length, 0);\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/index/query/CommonTermsQueryParserTest.java",
"status": "added"
}
]
} |
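The one-line fix in #12691 above works because the failure is a documented property of Guava's immutable collections: `ImmutableMap.copyOf` rejects null keys and values, so a named query that parses to null must never be put into the tracked map in the first place. The sketch below illustrates that behaviour in isolation; it assumes Guava on the classpath, uses made-up names, and is not Elasticsearch code.

```java
import com.google.common.collect.ImmutableMap;

import java.util.HashMap;
import java.util.Map;

public class NullNamedQuerySketch {
    public static void main(String[] args) {
        // A stand-in for "the query parsed to null" (e.g. all terms were stopwords).
        Object parsedQuery = null;

        // What the old code effectively did: put the null value into the map...
        Map<String, Object> namedQueries = new HashMap<>();
        namedQueries.put("query-name", parsedQuery);
        try {
            ImmutableMap.copyOf(namedQueries); // ...so the later copy blows up.
        } catch (NullPointerException e) {
            System.out.println("copyOf rejected the null value: " + e.getMessage());
        }

        // What the fix does: only track non-null queries, so copying is always safe.
        Map<String, Object> tracked = new HashMap<>();
        if (parsedQuery != null) {
            tracked.put("query-name", parsedQuery);
        }
        System.out.println(ImmutableMap.copyOf(tracked)); // {}
        System.out.println(tracked.get("query-name"));    // null -- same lookup result either way
    }
}
```

Skipping the null entry does not change lookup behaviour, since `Map.get` returns null for a missing key anyway, which is exactly the argument made in the PR description.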
{
"body": "Previously when RoutingService checks for delayed shards, it can see\nshards that are delayed, but are past their delay time so the logged\noutput looks like:\n\n```\ndelaying allocation for [0] unassigned shards, next check in [0s]\n```\n\nThis change allows shards that have passed their delay to be counted\ncorrectly for the logging. Additionally, it places a 5 second minimum\ndelay between scheduled reroutes to try to minimize the number of\nreroutes run.\n\nThis also adds a test that creates a large number of unassigned delayed\nshards and ensures that they are rerouted even if a single reroute does\nnot allocated all shards (due to a low concurrent_recoveries setting).\n\nResolves #12456 \n\n(This PR is against 1.7 and will be forward-ported)\n",
"comments": [
{
"body": "LGTM. Left a non blocking question.\n",
"created_at": "2015-07-28T19:01:38Z"
},
{
"body": "LGTM\n",
"created_at": "2015-07-28T22:04:57Z"
},
{
"body": "Closing this, handled a different way in #12532 \n",
"created_at": "2015-07-29T15:16:42Z"
}
],
"number": 12515,
"title": "Fix messaging about delayed allocation"
} | {
"body": "In order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456 \n\nThis is the forward-port of #12532, but actually ended up not being that difficult so it's not much different.\n",
"number": 12678,
"review_comments": [],
"title": "Avoid extra reroutes of delayed shards in RoutingService"
} | {
"commits": [
{
"message": "Avoid extra reroutes of delayed shards in RoutingService\n\nIn order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456"
}
],
"files": [
{
"diff": "@@ -272,13 +272,13 @@ private ClusterHealthResponse clusterHealth(ClusterHealthRequest request, Cluste\n } catch (IndexNotFoundException e) {\n // one of the specified indices is not there - treat it as RED.\n ClusterHealthResponse response = new ClusterHealthResponse(clusterName.value(), Strings.EMPTY_ARRAY, clusterState,\n- numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState),\n+ numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState),\n pendingTaskTimeInQueue);\n response.status = ClusterHealthStatus.RED;\n return response;\n }\n \n return new ClusterHealthResponse(clusterName.value(), concreteIndices, clusterState, numberOfPendingTasks,\n- numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState), pendingTaskTimeInQueue);\n+ numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState), pendingTaskTimeInQueue);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java",
"status": "modified"
},
{
"diff": "@@ -57,6 +57,7 @@ public class RoutingService extends AbstractLifecycleComponent<RoutingService> i\n private AtomicBoolean rerouting = new AtomicBoolean();\n private volatile long registeredNextDelaySetting = Long.MAX_VALUE;\n private volatile ScheduledFuture registeredNextDelayFuture;\n+ private volatile long unassignedShardsAllocatedTimestamp = 0;\n \n @Inject\n public RoutingService(Settings settings, ThreadPool threadPool, ClusterService clusterService, AllocationService allocationService) {\n@@ -87,6 +88,19 @@ public AllocationService getAllocationService() {\n return this.allocationService;\n }\n \n+ /**\n+ * Update the last time the allocator tried to assign unassigned shards\n+ *\n+ * This is used so that both the GatewayAllocator and RoutingService use a\n+ * consistent timestamp for comparing which shards have been delayed to\n+ * avoid a race condition where GatewayAllocator thinks the shard should\n+ * be delayed and the RoutingService thinks it has already passed the delay\n+ * and that the GatewayAllocator has/will handle it.\n+ */\n+ public void setUnassignedShardsAllocatedTimestamp(long timeInMillis) {\n+ this.unassignedShardsAllocatedTimestamp = timeInMillis;\n+ }\n+\n /**\n * Initiates a reroute.\n */\n@@ -108,20 +122,29 @@ public void clusterChanged(ClusterChangedEvent event) {\n if (nextDelaySetting > 0 && nextDelaySetting < registeredNextDelaySetting) {\n FutureUtils.cancel(registeredNextDelayFuture);\n registeredNextDelaySetting = nextDelaySetting;\n- TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(settings, event.state()));\n- logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\", UnassignedInfo.getNumberOfDelayedUnassigned(settings, event.state()), nextDelay);\n- registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n- @Override\n- protected void doRun() throws Exception {\n- registeredNextDelaySetting = Long.MAX_VALUE;\n- reroute(\"assign delayed unassigned shards\");\n- }\n-\n- @Override\n- public void onFailure(Throwable t) {\n- logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n- }\n- });\n+ // We use System.currentTimeMillis here because we want the\n+ // next delay from the \"now\" perspective, rather than the\n+ // delay from the last time the GatewayAllocator tried to\n+ // assign/delay the shard\n+ TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(), settings, event.state()));\n+ int unassignedDelayedShards = UnassignedInfo.getNumberOfDelayedUnassigned(unassignedShardsAllocatedTimestamp, settings, event.state());\n+ if (unassignedDelayedShards > 0) {\n+ logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\",\n+ unassignedDelayedShards, nextDelay);\n+ registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n+ @Override\n+ protected void doRun() throws Exception {\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ reroute(\"assign delayed unassigned shards\");\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ }\n+ });\n+ }\n } else {\n logger.trace(\"no need to schedule reroute due to delayed unassigned, next_delay_setting [{}], registered [{}]\", nextDelaySetting, registeredNextDelaySetting);\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java",
"status": "modified"
},
{
"diff": "@@ -199,12 +199,12 @@ public long getAllocationDelayTimeoutSetting(Settings settings, Settings indexSe\n /**\n * The time in millisecond until this unassigned shard can be reassigned.\n */\n- public long getDelayAllocationExpirationIn(Settings settings, Settings indexSettings) {\n+ public long getDelayAllocationExpirationIn(long unassignedShardsAllocatedTimestamp, Settings settings, Settings indexSettings) {\n long delayTimeout = getAllocationDelayTimeoutSetting(settings, indexSettings);\n if (delayTimeout == 0) {\n return 0;\n }\n- long delta = System.currentTimeMillis() - timestamp;\n+ long delta = unassignedShardsAllocatedTimestamp - timestamp;\n // account for time drift, treat it as no timeout\n if (delta < 0) {\n return 0;\n@@ -216,12 +216,12 @@ public long getDelayAllocationExpirationIn(Settings settings, Settings indexSett\n /**\n * Returns the number of shards that are unassigned and currently being delayed.\n */\n- public static int getNumberOfDelayedUnassigned(Settings settings, ClusterState state) {\n+ public static int getNumberOfDelayedUnassigned(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n int count = 0;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (delay > 0) {\n count++;\n }\n@@ -251,12 +251,12 @@ public static long findSmallestDelayedAllocationSetting(Settings settings, Clust\n /**\n * Finds the next (closest) delay expiration of an unassigned shard. Returns 0 if there are none.\n */\n- public static long findNextDelayedAllocationIn(Settings settings, ClusterState state) {\n+ public static long findNextDelayedAllocationIn(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n long nextDelay = Long.MAX_VALUE;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (nextShardDelay > 0 && nextShardDelay < nextDelay) {\n nextDelay = nextShardDelay;\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java",
"status": "modified"
},
{
"diff": "@@ -113,6 +113,10 @@ public void applyFailedShards(FailedRerouteAllocation allocation) {\n }\n \n public boolean allocateUnassigned(final RoutingAllocation allocation) {\n+ // Take a snapshot of the current time and tell the RoutingService\n+ // about it, so it will use a consistent timestamp for delays\n+ long lastAllocateUnassignedRun = System.currentTimeMillis();\n+ this.routingService.setUnassignedShardsAllocatedTimestamp(lastAllocateUnassignedRun);\n boolean changed = false;\n \n RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned();\n@@ -127,7 +131,7 @@ protected Settings getIndexSettings(String index) {\n \n changed |= primaryShardAllocator.allocateUnassigned(allocation);\n changed |= replicaShardAllocator.processExistingRecoveries(allocation);\n- changed |= replicaShardAllocator.allocateUnassigned(allocation);\n+ changed |= replicaShardAllocator.allocateUnassigned(allocation, lastAllocateUnassignedRun);\n return changed;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/gateway/GatewayAllocator.java",
"status": "modified"
},
{
"diff": "@@ -111,6 +111,10 @@ public boolean processExistingRecoveries(RoutingAllocation allocation) {\n }\n \n public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ return allocateUnassigned(allocation, System.currentTimeMillis());\n+ }\n+\n+ public boolean allocateUnassigned(RoutingAllocation allocation, long allocateUnassignedTimestapm) {\n boolean changed = false;\n final RoutingNodes routingNodes = allocation.routingNodes();\n final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n@@ -174,7 +178,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n // will anyhow wait to find an existing copy of the shard to be allocated\n // note: the other side of the equation is scheduling a reroute in a timely manner, which happens in the RoutingService\n IndexMetaData indexMetaData = allocation.metaData().index(shard.getIndex());\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(allocateUnassignedTimestapm, settings, indexMetaData.getSettings());\n if (delay > 0) {\n logger.debug(\"[{}][{}]: delaying allocation of [{}] for [{}]\", shard.index(), shard.id(), shard, TimeValue.timeValueMillis(delay));\n /**",
"filename": "core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java",
"status": "modified"
},
{
"diff": "@@ -34,15 +34,18 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.cluster.routing.allocation.decider.DisableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider.Allocation;\n+import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n+import org.elasticsearch.test.InternalTestCluster;\n import org.junit.Test;\n \n import java.nio.file.Path;\n@@ -160,6 +163,40 @@ public void rerouteWithAllocateLocalGateway_enableAllocationSettings() throws Ex\n rerouteWithAllocateLocalGateway(commonSettings);\n }\n \n+ @Test\n+ public void testDelayWithALargeAmountOfShards() throws Exception {\n+ Settings commonSettings = settingsBuilder()\n+ .put(\"gateway.type\", \"local\")\n+ .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_CONCURRENT_RECOVERIES, 1)\n+ .build();\n+ logger.info(\"--> starting 4 nodes\");\n+ String node_1 = internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+\n+ assertThat(cluster().size(), equalTo(4));\n+ ClusterHealthResponse healthResponse = client().admin().cluster().prepareHealth().setWaitForNodes(\"4\").execute().actionGet();\n+ assertThat(healthResponse.isTimedOut(), equalTo(false));\n+\n+ logger.info(\"--> create indices\");\n+ for (int i = 0; i < 25; i++) {\n+ client().admin().indices().prepareCreate(\"test\" + i)\n+ .setSettings(settingsBuilder()\n+ .put(\"index.number_of_shards\", 5).put(\"index.number_of_replicas\", 1)\n+ .put(\"index.unassigned.node_left.delayed_timeout\", randomIntBetween(250, 1000) + \"ms\"))\n+ .execute().actionGet();\n+ }\n+\n+ ensureGreen(TimeValue.timeValueMinutes(1));\n+\n+ logger.info(\"--> stopping node1\");\n+ internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_1));\n+\n+ // This might run slowly on older hardware\n+ ensureGreen(TimeValue.timeValueMinutes(2));\n+ }\n+\n private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exception {\n logger.info(\"--> starting 2 nodes\");\n String node_1 = internalCluster().startNode(commonSettings);",
"filename": "core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.ESAllocationTestCase;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n@@ -112,6 +113,10 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // We need to update the routing service's last attempted run to\n+ // signal that the GatewayAllocator tried to allocated it but\n+ // it was delayed\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis());\n ClusterState newState = clusterState;\n \n routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n@@ -125,6 +130,44 @@ public void run() {\n assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(Long.MAX_VALUE));\n }\n \n+ @Test\n+ public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Exception {\n+ AllocationService allocation = createAllocationService();\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"100ms\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\"))).build();\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // starting primaries\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ // starting replicas\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ // remove node2 and reroute\n+ ClusterState prevState = clusterState;\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // Set it in the future so the delay will be negative\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis() + TimeValue.timeValueMinutes(1).millis());\n+\n+ ClusterState newState = clusterState;\n+\n+ routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ assertThat(routingService.hasReroutedAndClear(), equalTo(false));\n+\n+ // verify the registration has been updated\n+ assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(100L));\n+ }\n+ });\n+ 
}\n+\n private class TestRoutingService extends RoutingService {\n \n private AtomicBoolean rerouted = new AtomicBoolean();",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -273,7 +273,8 @@ public void testUnassignedDelayedOnlyOnNodeLeft() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- long delay = unassignedInfo.getDelayAllocationExpirationIn(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n+ long delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n assertThat(delay, greaterThan(0l));\n assertThat(delay, lessThan(TimeValue.timeValueHours(10).millis()));\n }\n@@ -290,7 +291,8 @@ public void testUnassignedDelayOnlyNodeLeftNonNodeLeftReason() throws Exception\n UnassignedInfo unassignedInfo = new UnassignedInfo(RandomPicks.randomFrom(getRandom(), reasons), null);\n long delay = unassignedInfo.getAllocationDelayTimeoutSetting(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n assertThat(delay, equalTo(0l));\n- delay = unassignedInfo.getDelayAllocationExpirationIn(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n+ delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n assertThat(delay, equalTo(0l));\n }\n \n@@ -306,7 +308,8 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -315,7 +318,8 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n+ assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n }\n \n @Test\n@@ -330,7 +334,8 @@ public void testFindNextDelayedAllocation() {\n 
.routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -343,7 +348,8 @@ public void testFindNextDelayedAllocation() {\n long nextDelaySetting = UnassignedInfo.findSmallestDelayedAllocationSetting(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelaySetting, equalTo(TimeValue.timeValueHours(10).millis()));\n \n- long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n+ long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelay, greaterThan(TimeValue.timeValueHours(9).millis()));\n assertThat(nextDelay, lessThanOrEqualTo(TimeValue.timeValueHours(10).millis()));\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java",
"status": "modified"
}
]
} |
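
The diffs above boil down to one idea: the allocator snapshots the clock once per allocation round and hands that snapshot to the routing service, so both components compute a shard's remaining delay from the same instant instead of each calling `System.currentTimeMillis()` at different moments. The following is a minimal, self-contained sketch of that pattern; the names (`DemoAllocator`, `DemoRoutingService`, `delayExpirationIn`) are illustrative only and not the actual Elasticsearch types, though the delay computation mirrors the shape of `UnassignedInfo#getDelayAllocationExpirationIn` as patched in the diff.

```java
import java.util.concurrent.TimeUnit;

class DemoRoutingService {
    // Last time the allocator attempted to assign unassigned shards.
    private volatile long lastAllocationTimestamp = 0;

    void setLastAllocationTimestamp(long millis) {
        this.lastAllocationTimestamp = millis;
    }

    // Schedule a follow-up reroute only if, from the allocator's point in time,
    // the shard is still waiting out a positive delay.
    void maybeScheduleReroute(long shardUnassignedAtMillis, long delayMillis) {
        long remaining = delayExpirationIn(lastAllocationTimestamp, shardUnassignedAtMillis, delayMillis);
        if (remaining > 0) {
            System.out.println("routing service: would schedule reroute in " + remaining + " ms");
        } else {
            System.out.println("routing service: delay already handled, nothing to schedule");
        }
    }

    // Same shape as the patched getDelayAllocationExpirationIn: "now" is passed in
    // explicitly, and a negative delta (clock drift, future timestamp) means no timeout.
    static long delayExpirationIn(long nowMillis, long unassignedAtMillis, long delayMillis) {
        if (delayMillis == 0) {
            return 0;
        }
        long delta = nowMillis - unassignedAtMillis;
        if (delta < 0) {
            return 0;
        }
        return delayMillis - delta;
    }
}

class DemoAllocator {
    private final DemoRoutingService routingService;

    DemoAllocator(DemoRoutingService routingService) {
        this.routingService = routingService;
    }

    void allocateUnassigned(long shardUnassignedAtMillis, long delayMillis) {
        // Snapshot "now" once and publish it, so the routing service later uses the
        // same view of time that this allocation round used.
        long now = System.currentTimeMillis();
        routingService.setLastAllocationTimestamp(now);
        long remaining = DemoRoutingService.delayExpirationIn(now, shardUnassignedAtMillis, delayMillis);
        if (remaining > 0) {
            System.out.println("allocator: delaying shard for another " + remaining + " ms");
        } else {
            System.out.println("allocator: shard is eligible for assignment now");
        }
    }
}

public class DelayedAllocationDemo {
    public static void main(String[] args) {
        DemoRoutingService routingService = new DemoRoutingService();
        DemoAllocator allocator = new DemoAllocator(routingService);

        // A shard that went unassigned 30 seconds ago with a 1 minute delay.
        long unassignedAt = System.currentTimeMillis() - TimeUnit.SECONDS.toMillis(30);
        long delay = TimeUnit.MINUTES.toMillis(1);

        allocator.allocateUnassigned(unassignedAt, delay);        // ~30s of delay left
        routingService.maybeScheduleReroute(unassignedAt, delay); // computed from the same snapshot
    }
}
```

Sharing the timestamp this way closes the window where the allocator decides a shard is still delayed while the routing service, reading a slightly later clock, concludes the delay has already expired and therefore schedules nothing.
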
{
"body": "Upgraded from 1.6.1, a 1.7.0 node joins, we use allocation excludes to force data away from the nodes we intend to stop. When the 1.7.0 node has all the data, the old node is stopped.\n\n1.7.0 becomes the master, then logs \"delaying allocation for [66] unassigned shards, next check in [59.6s]\". 22 _minutes_ later, it then starts logging \"delaying allocation for [0] unassigned shards, next check in [0s]\" forever.\n\nWe had another cluster doing the same, which stopped logging it after two more nodes joined the cluster.\n\nNTP enabled, clocks seem fine, drift not out of the ordinary.\n",
"comments": [
{
"body": "@kimchy any ideas?\n",
"created_at": "2015-07-27T11:30:22Z"
},
{
"body": "@alexbrasetvik does this problem persist when the entire cluster is running 1.7.0, or does it only occur when the cluster is in a mixed-version state?\n",
"created_at": "2015-07-27T15:50:17Z"
},
{
"body": "It happens also for clusters created with 1.7.0.\n",
"created_at": "2015-07-27T16:33:47Z"
},
{
"body": "@alexbrasetvik I was able to reproduce this, still trying to figure out what causes it\n",
"created_at": "2015-07-27T16:35:52Z"
},
{
"body": "I see this multiple times too. Besides the logging itself, actually it never allocates my unassigned shards. I have a 20 machine cluster, all on 1.7 already. I shutdown a node via kopf. I see ~100 unassigned shards. The node then joins the cluster again, it's not initializing the unassigned shards. \n\nI manually reroute one unassigned shard, then i start seeing the cluster initializing the other unassigned shards. \n\nIn pending tasks, I'm seeing\n\n```\n 176228 29.1s URGENT shard-started ([xxxx-2015.27][0], node[nBe-T6GzTPOKoFHz-sIz8A], [R], s[INITIALIZING], unassigned_info[[reason=NODE_LEFT], at[2015-07-29T19:18:21.992Z], details[node_left[Vsj8k-eIQX2PFgAQQWDIJA]]]), reason [master [...][DwayquBqT8u8Xvns_HiIag][CO3SCH020050240][inet[/10.65.207.36:9300]]{...} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n```\n",
"created_at": "2015-07-29T19:43:09Z"
}
],
"number": 12456,
"title": "Delayed allocation \"stuck\" on 0 shards"
} | {
"body": "In order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456 \n\nThis is the forward-port of #12532, but actually ended up not being that difficult so it's not much different.\n",
"number": 12678,
"review_comments": [],
"title": "Avoid extra reroutes of delayed shards in RoutingService"
} | {
"commits": [
{
"message": "Avoid extra reroutes of delayed shards in RoutingService\n\nIn order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456"
}
],
"files": [
{
"diff": "@@ -272,13 +272,13 @@ private ClusterHealthResponse clusterHealth(ClusterHealthRequest request, Cluste\n } catch (IndexNotFoundException e) {\n // one of the specified indices is not there - treat it as RED.\n ClusterHealthResponse response = new ClusterHealthResponse(clusterName.value(), Strings.EMPTY_ARRAY, clusterState,\n- numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState),\n+ numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState),\n pendingTaskTimeInQueue);\n response.status = ClusterHealthStatus.RED;\n return response;\n }\n \n return new ClusterHealthResponse(clusterName.value(), concreteIndices, clusterState, numberOfPendingTasks,\n- numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState), pendingTaskTimeInQueue);\n+ numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState), pendingTaskTimeInQueue);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java",
"status": "modified"
},
{
"diff": "@@ -57,6 +57,7 @@ public class RoutingService extends AbstractLifecycleComponent<RoutingService> i\n private AtomicBoolean rerouting = new AtomicBoolean();\n private volatile long registeredNextDelaySetting = Long.MAX_VALUE;\n private volatile ScheduledFuture registeredNextDelayFuture;\n+ private volatile long unassignedShardsAllocatedTimestamp = 0;\n \n @Inject\n public RoutingService(Settings settings, ThreadPool threadPool, ClusterService clusterService, AllocationService allocationService) {\n@@ -87,6 +88,19 @@ public AllocationService getAllocationService() {\n return this.allocationService;\n }\n \n+ /**\n+ * Update the last time the allocator tried to assign unassigned shards\n+ *\n+ * This is used so that both the GatewayAllocator and RoutingService use a\n+ * consistent timestamp for comparing which shards have been delayed to\n+ * avoid a race condition where GatewayAllocator thinks the shard should\n+ * be delayed and the RoutingService thinks it has already passed the delay\n+ * and that the GatewayAllocator has/will handle it.\n+ */\n+ public void setUnassignedShardsAllocatedTimestamp(long timeInMillis) {\n+ this.unassignedShardsAllocatedTimestamp = timeInMillis;\n+ }\n+\n /**\n * Initiates a reroute.\n */\n@@ -108,20 +122,29 @@ public void clusterChanged(ClusterChangedEvent event) {\n if (nextDelaySetting > 0 && nextDelaySetting < registeredNextDelaySetting) {\n FutureUtils.cancel(registeredNextDelayFuture);\n registeredNextDelaySetting = nextDelaySetting;\n- TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(settings, event.state()));\n- logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\", UnassignedInfo.getNumberOfDelayedUnassigned(settings, event.state()), nextDelay);\n- registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n- @Override\n- protected void doRun() throws Exception {\n- registeredNextDelaySetting = Long.MAX_VALUE;\n- reroute(\"assign delayed unassigned shards\");\n- }\n-\n- @Override\n- public void onFailure(Throwable t) {\n- logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n- }\n- });\n+ // We use System.currentTimeMillis here because we want the\n+ // next delay from the \"now\" perspective, rather than the\n+ // delay from the last time the GatewayAllocator tried to\n+ // assign/delay the shard\n+ TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(), settings, event.state()));\n+ int unassignedDelayedShards = UnassignedInfo.getNumberOfDelayedUnassigned(unassignedShardsAllocatedTimestamp, settings, event.state());\n+ if (unassignedDelayedShards > 0) {\n+ logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\",\n+ unassignedDelayedShards, nextDelay);\n+ registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n+ @Override\n+ protected void doRun() throws Exception {\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ reroute(\"assign delayed unassigned shards\");\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ }\n+ });\n+ }\n } else {\n logger.trace(\"no need to schedule reroute due to delayed unassigned, next_delay_setting [{}], registered [{}]\", nextDelaySetting, registeredNextDelaySetting);\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java",
"status": "modified"
},
{
"diff": "@@ -199,12 +199,12 @@ public long getAllocationDelayTimeoutSetting(Settings settings, Settings indexSe\n /**\n * The time in millisecond until this unassigned shard can be reassigned.\n */\n- public long getDelayAllocationExpirationIn(Settings settings, Settings indexSettings) {\n+ public long getDelayAllocationExpirationIn(long unassignedShardsAllocatedTimestamp, Settings settings, Settings indexSettings) {\n long delayTimeout = getAllocationDelayTimeoutSetting(settings, indexSettings);\n if (delayTimeout == 0) {\n return 0;\n }\n- long delta = System.currentTimeMillis() - timestamp;\n+ long delta = unassignedShardsAllocatedTimestamp - timestamp;\n // account for time drift, treat it as no timeout\n if (delta < 0) {\n return 0;\n@@ -216,12 +216,12 @@ public long getDelayAllocationExpirationIn(Settings settings, Settings indexSett\n /**\n * Returns the number of shards that are unassigned and currently being delayed.\n */\n- public static int getNumberOfDelayedUnassigned(Settings settings, ClusterState state) {\n+ public static int getNumberOfDelayedUnassigned(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n int count = 0;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (delay > 0) {\n count++;\n }\n@@ -251,12 +251,12 @@ public static long findSmallestDelayedAllocationSetting(Settings settings, Clust\n /**\n * Finds the next (closest) delay expiration of an unassigned shard. Returns 0 if there are none.\n */\n- public static long findNextDelayedAllocationIn(Settings settings, ClusterState state) {\n+ public static long findNextDelayedAllocationIn(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n long nextDelay = Long.MAX_VALUE;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (nextShardDelay > 0 && nextShardDelay < nextDelay) {\n nextDelay = nextShardDelay;\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java",
"status": "modified"
},
{
"diff": "@@ -113,6 +113,10 @@ public void applyFailedShards(FailedRerouteAllocation allocation) {\n }\n \n public boolean allocateUnassigned(final RoutingAllocation allocation) {\n+ // Take a snapshot of the current time and tell the RoutingService\n+ // about it, so it will use a consistent timestamp for delays\n+ long lastAllocateUnassignedRun = System.currentTimeMillis();\n+ this.routingService.setUnassignedShardsAllocatedTimestamp(lastAllocateUnassignedRun);\n boolean changed = false;\n \n RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned();\n@@ -127,7 +131,7 @@ protected Settings getIndexSettings(String index) {\n \n changed |= primaryShardAllocator.allocateUnassigned(allocation);\n changed |= replicaShardAllocator.processExistingRecoveries(allocation);\n- changed |= replicaShardAllocator.allocateUnassigned(allocation);\n+ changed |= replicaShardAllocator.allocateUnassigned(allocation, lastAllocateUnassignedRun);\n return changed;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/gateway/GatewayAllocator.java",
"status": "modified"
},
{
"diff": "@@ -111,6 +111,10 @@ public boolean processExistingRecoveries(RoutingAllocation allocation) {\n }\n \n public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ return allocateUnassigned(allocation, System.currentTimeMillis());\n+ }\n+\n+ public boolean allocateUnassigned(RoutingAllocation allocation, long allocateUnassignedTimestapm) {\n boolean changed = false;\n final RoutingNodes routingNodes = allocation.routingNodes();\n final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n@@ -174,7 +178,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n // will anyhow wait to find an existing copy of the shard to be allocated\n // note: the other side of the equation is scheduling a reroute in a timely manner, which happens in the RoutingService\n IndexMetaData indexMetaData = allocation.metaData().index(shard.getIndex());\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(allocateUnassignedTimestapm, settings, indexMetaData.getSettings());\n if (delay > 0) {\n logger.debug(\"[{}][{}]: delaying allocation of [{}] for [{}]\", shard.index(), shard.id(), shard, TimeValue.timeValueMillis(delay));\n /**",
"filename": "core/src/main/java/org/elasticsearch/gateway/ReplicaShardAllocator.java",
"status": "modified"
},
{
"diff": "@@ -34,15 +34,18 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.cluster.routing.allocation.decider.DisableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider.Allocation;\n+import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n+import org.elasticsearch.test.InternalTestCluster;\n import org.junit.Test;\n \n import java.nio.file.Path;\n@@ -160,6 +163,40 @@ public void rerouteWithAllocateLocalGateway_enableAllocationSettings() throws Ex\n rerouteWithAllocateLocalGateway(commonSettings);\n }\n \n+ @Test\n+ public void testDelayWithALargeAmountOfShards() throws Exception {\n+ Settings commonSettings = settingsBuilder()\n+ .put(\"gateway.type\", \"local\")\n+ .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_CONCURRENT_RECOVERIES, 1)\n+ .build();\n+ logger.info(\"--> starting 4 nodes\");\n+ String node_1 = internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+\n+ assertThat(cluster().size(), equalTo(4));\n+ ClusterHealthResponse healthResponse = client().admin().cluster().prepareHealth().setWaitForNodes(\"4\").execute().actionGet();\n+ assertThat(healthResponse.isTimedOut(), equalTo(false));\n+\n+ logger.info(\"--> create indices\");\n+ for (int i = 0; i < 25; i++) {\n+ client().admin().indices().prepareCreate(\"test\" + i)\n+ .setSettings(settingsBuilder()\n+ .put(\"index.number_of_shards\", 5).put(\"index.number_of_replicas\", 1)\n+ .put(\"index.unassigned.node_left.delayed_timeout\", randomIntBetween(250, 1000) + \"ms\"))\n+ .execute().actionGet();\n+ }\n+\n+ ensureGreen(TimeValue.timeValueMinutes(1));\n+\n+ logger.info(\"--> stopping node1\");\n+ internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_1));\n+\n+ // This might run slowly on older hardware\n+ ensureGreen(TimeValue.timeValueMinutes(2));\n+ }\n+\n private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exception {\n logger.info(\"--> starting 2 nodes\");\n String node_1 = internalCluster().startNode(commonSettings);",
"filename": "core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.ESAllocationTestCase;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n@@ -112,6 +113,10 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // We need to update the routing service's last attempted run to\n+ // signal that the GatewayAllocator tried to allocated it but\n+ // it was delayed\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis());\n ClusterState newState = clusterState;\n \n routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n@@ -125,6 +130,44 @@ public void run() {\n assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(Long.MAX_VALUE));\n }\n \n+ @Test\n+ public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Exception {\n+ AllocationService allocation = createAllocationService();\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"100ms\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\"))).build();\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // starting primaries\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ // starting replicas\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ // remove node2 and reroute\n+ ClusterState prevState = clusterState;\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // Set it in the future so the delay will be negative\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis() + TimeValue.timeValueMinutes(1).millis());\n+\n+ ClusterState newState = clusterState;\n+\n+ routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ assertThat(routingService.hasReroutedAndClear(), equalTo(false));\n+\n+ // verify the registration has been updated\n+ assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(100L));\n+ }\n+ });\n+ 
}\n+\n private class TestRoutingService extends RoutingService {\n \n private AtomicBoolean rerouted = new AtomicBoolean();",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -273,7 +273,8 @@ public void testUnassignedDelayedOnlyOnNodeLeft() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- long delay = unassignedInfo.getDelayAllocationExpirationIn(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n+ long delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n assertThat(delay, greaterThan(0l));\n assertThat(delay, lessThan(TimeValue.timeValueHours(10).millis()));\n }\n@@ -290,7 +291,8 @@ public void testUnassignedDelayOnlyNodeLeftNonNodeLeftReason() throws Exception\n UnassignedInfo unassignedInfo = new UnassignedInfo(RandomPicks.randomFrom(getRandom(), reasons), null);\n long delay = unassignedInfo.getAllocationDelayTimeoutSetting(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n assertThat(delay, equalTo(0l));\n- delay = unassignedInfo.getDelayAllocationExpirationIn(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n+ delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), Settings.EMPTY);\n assertThat(delay, equalTo(0l));\n }\n \n@@ -306,7 +308,8 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -315,7 +318,8 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n+ assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n }\n \n @Test\n@@ -330,7 +334,8 @@ public void testFindNextDelayedAllocation() {\n 
.routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -343,7 +348,8 @@ public void testFindNextDelayedAllocation() {\n long nextDelaySetting = UnassignedInfo.findSmallestDelayedAllocationSetting(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelaySetting, equalTo(TimeValue.timeValueHours(10).millis()));\n \n- long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n+ long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(),\n+ Settings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelay, greaterThan(TimeValue.timeValueHours(9).millis()));\n assertThat(nextDelay, lessThanOrEqualTo(TimeValue.timeValueHours(10).millis()));\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java",
"status": "modified"
}
]
} |
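
This row pairs the same fix with the symptom users actually saw: an endless stream of `delaying allocation for [0] unassigned shards, next check in [0s]`. The guard added to `RoutingService` in the diff only logs and reschedules when at least one shard still has a positive remaining delay. Below is a small sketch of that control flow under assumed, simplified names (`DelayLoggingGuardDemo`, `checkDelayedShards`); it is not the real class, only the shape of the guard.

```java
import java.util.Arrays;
import java.util.List;

public class DelayLoggingGuardDemo {

    // remainingDelaysMillis holds the precomputed "time left" for each unassigned
    // shard at check time; zero or a negative value means the delay already expired.
    static void checkDelayedShards(List<Long> remainingDelaysMillis) {
        long delayedShards = remainingDelaysMillis.stream().filter(d -> d > 0).count();
        long nextCheckMillis = remainingDelaysMillis.stream()
                .filter(d -> d > 0)
                .min(Long::compare)
                .orElse(0L);
        if (delayedShards > 0) {
            // Only now is the log line meaningful, and only now would a reroute be scheduled.
            System.out.println("delaying allocation for [" + delayedShards
                    + "] unassigned shards, next check in [" + nextCheckMillis + "ms]");
        }
        // else: nothing is actually delayed anymore, so stay quiet and don't reschedule.
    }

    public static void main(String[] args) {
        checkDelayedShards(Arrays.asList(45_000L, 2_000L)); // logs, next check in [2000ms]
        checkDelayedShards(Arrays.asList(0L, -500L));       // silent: delays already expired
    }
}
```

With precomputed remaining delays the second call stays silent, which is the behavior change the issue above was asking for.
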
{
"body": "When one of the parent directories has a space in there the plugin binary can't find the correct path.\n\n```\n$ pwd\n/home/richard/temp dir/elasticsearch-1.7.0\n$ ./bin/plugin\nError: Could not find or load main class dir.elasticsearch-1.7.0.config\n```\n",
"comments": [
{
"body": "Confirmed on OSX and Ubuntu.\n",
"created_at": "2015-07-28T12:50:57Z"
},
{
"body": "Assigned to myself as I sent a PR for it. Anyone else can take it if they want it and think they can make a better pull request.\n",
"created_at": "2015-07-28T15:21:28Z"
},
{
"body": "Ok - we have a fix merged for 1.7 and now I'll need to forward port that to master.\n",
"created_at": "2015-08-03T13:42:23Z"
}
],
"number": 12504,
"title": "Space in directory structure fails plugin binary"
} | {
"body": "Fixes ES_HOME with spaces and installing plugins from a local directory\nwith spaces.\n\nCloses #12504\n",
"number": 12610,
"review_comments": [],
"title": "Fix plugin script to allow spaces in ES_HOME"
} | {
"commits": [
{
"message": "Plugin script: Fix spaces\n\nFixes ES_HOME with spaces and installing plugins from a local directory\nwith spaces.\n\nCloses #12504"
}
],
"files": [
{
"diff": "@@ -69,15 +69,15 @@ fi\n while [ $# -gt 0 ]; do\n case $1 in\n -D*=*)\n- properties=\"$properties $1\"\n+ properties=\"$properties \\\"$1\\\"\"\n ;;\n -D*)\n var=$1\n shift\n- properties=\"$properties $var=$1\"\n+ properties=\"$properties \\\"$var\\\"=\\\"$1\\\"\"\n ;;\n *)\n- args=\"$args $1\"\n+ args=\"$args \\\"$1\\\"\"\n esac\n shift\n done\n@@ -88,7 +88,7 @@ if [ -e \"$CONF_DIR\" ]; then\n *-Des.default.path.conf=*|*-Des.path.conf=*)\n ;;\n *)\n- properties=\"$properties -Des.default.path.conf=$CONF_DIR\"\n+ properties=\"$properties -Des.default.path.conf=\\\"$CONF_DIR\\\"\"\n ;;\n esac\n fi\n@@ -98,11 +98,11 @@ if [ -e \"$CONF_FILE\" ]; then\n *-Des.default.config=*|*-Des.config=*)\n ;;\n *)\n- properties=\"$properties -Des.default.config=$CONF_FILE\"\n+ properties=\"$properties -Des.default.config=\\\"$CONF_FILE\\\"\"\n ;;\n esac\n fi\n \n export HOSTNAME=`hostname -s`\n \n-exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\"$ES_HOME\" $properties -cp \"$ES_HOME/lib/*\" org.elasticsearch.plugins.PluginManagerCliParser $args\n+eval \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginManagerCliParser $args",
"filename": "distribution/src/main/resources/bin/plugin",
"status": "modified"
},
{
"diff": "@@ -257,3 +257,101 @@ setup() {\n run rm -rf \"$TEMP_CONFIG_DIR\"\n [ \"$status\" -eq 0 ]\n }\n+\n+@test \"[TAR] install shield plugin to elasticsearch directory with a space\" {\n+ export ES_DIR=\"/tmp/elastic search\"\n+\n+ # Install the archive\n+ install_archive\n+\n+ # Checks that the archive is correctly installed\n+ verify_archive_installation\n+\n+ # Move the Elasticsearch installation to a directory with a space in it\n+ rm -rf \"$ES_DIR\"\n+ mv /tmp/elasticsearch \"$ES_DIR\"\n+\n+ # Checks that plugin archive is available\n+ [ -e \"$SHIELD_ZIP\" ]\n+\n+ # Install Shield\n+ run \"$ES_DIR/bin/plugin\" install elasticsearch/shield/latest -u \"file://$SHIELD_ZIP\"\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that Shield is correctly installed\n+ assert_file_exist \"$ES_DIR/bin/shield\"\n+ assert_file_exist \"$ES_DIR/bin/shield/esusers\"\n+ assert_file_exist \"$ES_DIR/bin/shield/syskeygen\"\n+ assert_file_exist \"$ES_DIR/config/shield\"\n+ assert_file_exist \"$ES_DIR/config/shield/role_mapping.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/roles.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/users\"\n+ assert_file_exist \"$ES_DIR/config/shield/users_roles\"\n+ assert_file_exist \"$ES_DIR/plugins/shield\"\n+\n+ # Remove the plugin\n+ run \"$ES_DIR/bin/plugin\" remove elasticsearch/shield/latest\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that the plugin is correctly removed\n+ assert_file_not_exist \"$ES_DIR/bin/shield\"\n+ assert_file_exist \"$ES_DIR/config/shield\"\n+ assert_file_exist \"$ES_DIR/config/shield/role_mapping.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/roles.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/users\"\n+ assert_file_exist \"$ES_DIR/config/shield/users_roles\"\n+ assert_file_not_exist \"$ES_DIR/plugins/shield\"\n+\n+ #Cleanup our temporary Elasticsearch installation\n+ rm -rf \"$ES_DIR\"\n+}\n+\n+@test \"[TAR] install shield plugin from a directory with a space\" {\n+\n+ export SHIELD_ZIP_WITH_SPACE=\"/tmp/plugins with space/shield.zip\"\n+\n+ # Install the archive\n+ install_archive\n+\n+ # Checks that the archive is correctly installed\n+ verify_archive_installation\n+\n+ # Checks that plugin archive is available\n+ [ -e \"$SHIELD_ZIP\" ]\n+\n+ # Copy the shield plugin to a directory with a space in it\n+ rm -f \"$SHIELD_ZIP_WITH_SPACE\"\n+ mkdir -p \"$(dirname \"$SHIELD_ZIP_WITH_SPACE\")\"\n+ cp $SHIELD_ZIP \"$SHIELD_ZIP_WITH_SPACE\"\n+\n+ # Install Shield\n+ run /tmp/elasticsearch/bin/plugin install elasticsearch/shield/latest -u \"file://$SHIELD_ZIP_WITH_SPACE\"\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that Shield is correctly installed\n+ assert_file_exist \"/tmp/elasticsearch/bin/shield\"\n+ assert_file_exist \"/tmp/elasticsearch/bin/shield/esusers\"\n+ assert_file_exist \"/tmp/elasticsearch/bin/shield/syskeygen\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/role_mapping.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/roles.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users_roles\"\n+ assert_file_exist \"/tmp/elasticsearch/plugins/shield\"\n+\n+ # Remove the plugin\n+ run /tmp/elasticsearch/bin/plugin remove elasticsearch/shield/latest\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that the plugin is correctly removed\n+ assert_file_not_exist \"/tmp/elasticsearch/bin/shield\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield\"\n+ assert_file_exist 
\"/tmp/elasticsearch/config/shield/role_mapping.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/roles.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users_roles\"\n+ assert_file_not_exist \"/tmp/elasticsearch/plugins/shield\"\n+\n+ #Cleanup our plugin directory with a space\n+ rm -rf \"$SHIELD_ZIP_WITH_SPACE\"\n+}",
"filename": "distribution/src/test/resources/packaging/scripts/25_tar_plugins.bats",
"status": "modified"
}
]
} |
{
"body": "SimpleSortTests.testIssue8226 for example fails about once a week. Example failure:\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_1x_metal/3129/\n\nI can reproduce it locally (although very rarely) with some additional logging (action.search.type: TRACE). \n\nHere is a brief analysis of what happened. Would be great if someone could take a look and let me know if this makes sense.\n\nFailure:\n\n```\n1> REPRODUCE WITH : mvn clean test -Dtests.seed=774A2866F1B6042D -Dtests.class=org.elasticsearch.search.sort.SimpleSortTests -Dtests.method=\"testIssue8226 {#76 seed=[774A2866F1B6042D:ACB4FF9F8C8CA341]}\" -Des.logger.level=DEBUG -Des.node.mode=network -Dtests.security.manager=true -Dtests.nightly=false -Dtests.client.ratio=0.0 -Dtests.heap.size=512m -Dtests.jvm.argline=\"-server -XX:+UseConcMarkSweepGC -XX:-UseCompressedOops -XX:+AggressiveOpts -Djava.net.preferIPv4Stack=true\" -Dtests.locale=fi_FI -Dtests.timezone=Etc/GMT+9 -Dtests.processors=4\n 1> Throwable:\n 1> java.lang.AssertionError: One or more shards were not successful but didn't trigger a failure\n 1> Expected: <47>\n 1> but: was <46>\n```\n\nHere is an example failure in detail, the relevant parts of the logs are below:\n## State\n\nnode_0 is master.\n[test_5][0] is relocating from node_1 to node_0.\nCluster state 3673 has the shard as relocating, in cluster state 3674 it is started.\nnode_0 is the coordinating node for the search request.\n\nIn brief, the request fails for shard [test_5][0] because node_0 operates on an older cluster state 3673 when processing the search request, while node_1 is already on 3674.\n## Course of events:\n1. node_0 sends shard started, but the shard is still in state POST_RECOVERY and will remain so until it receives the new cluster state and applies it locally\n2. node_0(master) receives the shard started request and publishes the new cluster state 3674 to node_0 and node_1\n3. node_1 receives the cluster state 3674 and applies it locally\n4. node_0 sends search request for [test_5][0] to node_1 because according to cluster state 3673 the shard is there and relocating\n -> request fails with IndexShardMissingException because node_1 already applied cluster state 3674 and deleted the shard.\n5. node_0 then sends request for [test_5][0] to node_0 because the shard is there as well (according to cluster state 3673 it is and initializing)\n -> request fails with IllegalIndexShardStateException because node_0 has not yet processed cluster state 3674 and therefore the shard is in POST_RECOVERY instead of STARTED\n No shard failure is logged because IndexShardMissingException and IllegalIndexShardStateException are explicitly excluded from shard failures.\n6. node_0 finally also gets to process the new cluster state and moves the shard [test_5][0] to STARTED but it is too late\n\nThis is a very rare condition and maybe too bad on client side because the information that one shard did not deliver results is there although it is not explicitly listed as shard failure. 
We can probably make the test pass easily be just waiting for relocations before executing the search request but that seems wrong because any search request can fail this way.\n## Sample log\n\n```\n[....]\n\n 1> [2015-01-26 09:27:14,435][DEBUG][indices.recovery ] [node_0] [test_5][0] recovery completed from [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}], took [84ms]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] sending shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] received shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: execute\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.action.shard ] [node_0] [test_5][0] will apply shard started [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n\n\n[....]\n\n\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] cluster state updated, version [3674], source [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] publishing cluster state version 3674\n 1> [2015-01-26 09:27:14,442][DEBUG][discovery.zen.publish ] [node_1] received cluster state version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] processing [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]: execute\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] cluster state updated, version [3674], source [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][indices.cluster ] [node_1] [test_5][0] removing shard (not allocated)\n 1> [2015-01-26 09:27:14,443][DEBUG][index ] [node_1] [test_5] [0] closing... 
(reason: [removing shard (not allocated)])\n 1> [2015-01-26 09:27:14,443][INFO ][test.store ] [node_1] [test_5][0] Shard state before potentially flushing is STARTED\n 1> [2015-01-26 09:27:14,453][DEBUG][search.sort ] cluster state:\n 1> version: 3673\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3006):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (1):\n 1> 13638/URGENT/shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], 
relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]/13ms\n 1>\n\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_3][3]\n 1> [2015-01-26 09:27:14,460][TRACE][action.search.type ] [node_0] [test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [false]\n 1> org.elasticsearch.transport.RemoteTransportException: [node_1][inet[/192.168.2.102:9401]][indices:data/read/search[phase/dfs]]\n 1> Caused by: org.elasticsearch.index.IndexShardMissingException: [test_5][0] missing\n 1> at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:203)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:539)\n 1> at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:757)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:748)\n 1> at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:275)\n 1> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_6][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_5][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_7][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_3][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_8][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_1][0]\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,463][TRACE][action.search.type ] [node_0] [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [true]\n 1> org.elasticsearch.index.shard.IllegalIndexShardStateException: [test_5][0] CurrentState[POST_RECOVERY] operations only allowed when started/relocated\n 1> at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:839)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:651)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:647)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:543)\n 1> at 
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:197)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:194)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_2][4]\n\n\n\n[...]\n\n\n\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,493][DEBUG][index.shard ] [node_0] [test_5][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: execute\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: no change in cluster_state\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: done applying updated cluster_state (version: 3674)\n 1> [2015-01-26 09:27:14,456][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_2][3]\n\n\n[...]\n\n 1> [2015-01-26 09:27:14,527][DEBUG][search.sort ] cluster state:\n 1> version: 3674\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3007):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], 
node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], 
[R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (0):\n 1>\n\n[...]\n```\n",
"comments": [
{
"body": "A similar test failure:\n\n`org.elasticsearch.deleteByQuery.DeleteByQueryTests.testDeleteAllOneIndex`\n\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_master_metal/2579/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_centos/2640/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_regression/1263/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\n\nIt fails on the:\n\n``` java\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n```\n\nWhich I believe relates to the relocation issue Britta mentioned.\n",
"created_at": "2015-01-27T00:21:32Z"
},
{
"body": "I think this is unrelated. I actually fixed the DeleteByQueryTests yesterday (c3f1982f21150336f87b7b4def74e019e8bdac18) and this commit does not seem to be in the build you linked to.\n\nA brief explanation: DeleteByQuery is a write operation. The shard header returned and checked in DeleteByQueryTests is different from the one return for search requests. The reason why DeleteByQuery failed is because I added the check \n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.totalNumShards));\n\nbefore which was wrong because there was no ensureGreen() so some of the replicas might not have ben initialized yet. I fixed this in c3f1982f2 by instead checking\n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n",
"created_at": "2015-01-27T08:57:22Z"
},
{
"body": "I wonder if we should just allow reads in the POST_RECOVERY phase. At that point the shards is effectively ready to do everything it needs to do. @brwe this will solve the issue, right?\n",
"created_at": "2015-01-27T10:36:17Z"
},
{
"body": "@brwe okay, does that mean I can unmute the `DeleteByQueryTests.testDeleteAllOneIndex`?\n",
"created_at": "2015-01-27T16:42:41Z"
},
{
"body": "yes\n",
"created_at": "2015-01-27T16:45:32Z"
},
{
"body": "Unmuted the `DeleteByQueryTests.testDeleteAllOneIndex` test\n",
"created_at": "2015-01-27T17:42:04Z"
},
{
"body": "@bleskes I think that would fix it. However, before I push I want to try and write a test that reproduces reliably. Will not do before next week.\n",
"created_at": "2015-01-28T15:25:51Z"
},
{
"body": "@brwe please ping before starting on this. I want to make sure that we capture the original issue which caused us to introduce POST_RECOVERY. I don't recall exactly recall what the problem was (it was refresh related) and I think it was solved by a more recent change to how refresh work (#6545) but it requires careful thought\n",
"created_at": "2015-02-24T12:48:00Z"
},
{
"body": "@bleskes ping :)\nI finally came back to this and wrote a test that reproduces the failure reliably (#10194) but I did not quite get what you meant by \"capture the original issue\". Can you elaborate?\n",
"created_at": "2015-03-20T21:11:20Z"
},
{
"body": "@kimchy do you recall why we can't read in that state?\n",
"created_at": "2015-04-13T14:39:28Z"
}
],
"number": 9421,
"title": "After relocation shards might temporarily not be searchable if still in POST_RECOVERY"
} | {
"body": "Currently, we do not allow reads on shards which are in POST_RECOVERY which unfortunately can cause search failures on shards which just recovered if there no replicas (#9421). \nThe reason why we did not allow reads on shards that are in POST_RECOVERY is that after relocating a shard might miss a refresh if the node that executed the refresh is behind with cluster state processing. If that happens, a user might execute index/refresh/search but still not find the document that was indexed.\nMore details [here](https://github.com/elastic/elasticsearch/compare/elastic:master...brwe:replicated_refresh?expand=1#diff-90cb64bedfe055b7d7f1cc28103784d5R971).\n\n@bleskes and I discussed this briefly and he mentioned we could make refresh a replicated operation that goes the same route that index operations go and thereby make sure that the refresh reaches every shard. In this case we could also allow reads on POST_RECOVERY.\n\nI make this PR as a proof of concept so that we can discuss if this is actually a good idea. \nThis PR contains:\n- a reliable test for #9421\n- a fix for #9421 \n- a test for the visibility issue that we have when we allow reads in POST_RECOVERY\n- the change to make refresh a replicated action just like index, delete, etc.\n\nLet me know what you think. I would make the same changes for flush also.\n",
"number": 12600,
"review_comments": [
{
"body": "can this be private final?\n",
"created_at": "2015-08-04T09:00:56Z"
},
{
"body": "is this trace needed?\n",
"created_at": "2015-08-04T09:01:23Z"
},
{
"body": "we have a `CountDown` class that helps to do this and prevents you from counting down more than actually specified...\n",
"created_at": "2015-08-04T09:02:44Z"
},
{
"body": "can this be private?\n",
"created_at": "2015-08-04T09:13:41Z"
}
],
"title": "Fix for search failures if shard is in POST_RECOVERY"
} | {
"commits": [
{
"message": "Test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)\n\nsee #9421"
},
{
"message": "test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY"
},
{
"message": "Allow reads on shards that are in POST_RECOVERY\n\nsee #9421"
},
{
"message": "Make refresh a replicated action\n\nWhen a client indexes a documents and then calls refresh on the index\nthen the document must be visible after that with search requests.\nThis might not be the case if refresh is a BroadcastOperationAction,\nsee DiscoveryWithServiceDisruptionsTests.testReadOnPostRecoveryShards\n\nrelated to #9421"
},
{
"message": "remove @Slow after rebase"
},
{
"message": "review comments"
},
{
"message": "more abstractions and make flush replicated action as well"
},
{
"message": "test for synced flush"
}
],
"files": [
{
"diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.action.admin.indices.flush;\n \n import org.elasticsearch.action.ActionRequest;\n-import org.elasticsearch.action.support.broadcast.BroadcastRequest;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastRequest;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n \n@@ -37,7 +37,7 @@\n * @see org.elasticsearch.client.IndicesAdminClient#flush(FlushRequest)\n * @see FlushResponse\n */\n-public class FlushRequest extends BroadcastRequest<FlushRequest> {\n+public class FlushRequest extends ReplicatedBroadcastRequest<FlushRequest> {\n \n private boolean force = false;\n private boolean waitIfOngoing = false;",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/FlushRequest.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.action.admin.indices.flush;\n \n import org.elasticsearch.action.ShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastResponse;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n \n@@ -32,7 +32,7 @@\n *\n *\n */\n-public class FlushResponse extends BroadcastResponse {\n+public class FlushResponse extends ReplicatedBroadcastResponse {\n \n FlushResponse() {\n ",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/FlushResponse.java",
"status": "modified"
},
{
"diff": "@@ -19,27 +19,28 @@\n \n package org.elasticsearch.action.admin.indices.flush;\n \n-import org.elasticsearch.action.support.broadcast.BroadcastShardRequest;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastShardRequest;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.index.shard.ShardId;\n \n import java.io.IOException;\n \n-/**\n- *\n- */\n-class ShardFlushRequest extends BroadcastShardRequest {\n+public class ShardFlushRequest extends ReplicatedBroadcastShardRequest<ShardFlushRequest> {\n+\n private FlushRequest request = new FlushRequest();\n \n- ShardFlushRequest() {\n+ public ShardFlushRequest(ShardId shardId, FlushRequest request) {\n+ super(shardId);\n+ this.request = request;\n }\n \n- ShardFlushRequest(ShardId shardId, FlushRequest request) {\n- super(shardId, request);\n- this.request = request;\n+ public ShardFlushRequest() {\n }\n \n+ FlushRequest getRequest() {\n+ return request;\n+ }\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n@@ -53,7 +54,5 @@ public void writeTo(StreamOutput out) throws IOException {\n request.writeTo(out);\n }\n \n- FlushRequest getRequest() {\n- return request;\n- }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/ShardFlushRequest.java",
"status": "modified"
},
{
"diff": "@@ -21,98 +21,43 @@\n \n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.support.ActionFilters;\n-import org.elasticsearch.action.support.DefaultShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.TransportBroadcastAction;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastShardResponse;\n+import org.elasticsearch.action.support.replicatedbroadcast.TransportReplicatedBroadcastAction;\n import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.block.ClusterBlockException;\n-import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n-import org.elasticsearch.cluster.routing.GroupShardsIterator;\n-import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n import java.util.List;\n-import java.util.concurrent.atomic.AtomicReferenceArray;\n-\n-import static com.google.common.collect.Lists.newArrayList;\n \n /**\n * Flush Action.\n */\n-public class TransportFlushAction extends TransportBroadcastAction<FlushRequest, FlushResponse, ShardFlushRequest, ShardFlushResponse> {\n-\n- private final IndicesService indicesService;\n+public class TransportFlushAction extends TransportReplicatedBroadcastAction<FlushRequest, FlushResponse, ShardFlushRequest, ReplicatedBroadcastShardResponse> {\n \n @Inject\n public TransportFlushAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n- TransportService transportService, IndicesService indicesService,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, FlushAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n- FlushRequest.class, ShardFlushRequest.class, ThreadPool.Names.FLUSH);\n- this.indicesService = indicesService;\n- }\n-\n- @Override\n- protected FlushResponse newResponse(FlushRequest request, AtomicReferenceArray shardsResponses, ClusterState clusterState) {\n- int successfulShards = 0;\n- int failedShards = 0;\n- List<ShardOperationFailedException> shardFailures = null;\n- for (int i = 0; i < shardsResponses.length(); i++) {\n- Object shardResponse = shardsResponses.get(i);\n- if (shardResponse == null) {\n- // a non active shard, ignore\n- } else if (shardResponse instanceof BroadcastShardOperationFailedException) {\n- failedShards++;\n- if (shardFailures == null) {\n- shardFailures = newArrayList();\n- }\n- shardFailures.add(new DefaultShardOperationFailedException((BroadcastShardOperationFailedException) shardResponse));\n- } else {\n- successfulShards++;\n- }\n- }\n- return new FlushResponse(shardsResponses.length(), successfulShards, failedShards, shardFailures);\n- }\n-\n- @Override\n- protected ShardFlushRequest newShardRequest(int numShards, ShardRouting shard, FlushRequest request) {\n- return new ShardFlushRequest(shard.shardId(), request);\n- }\n-\n- @Override\n- protected ShardFlushResponse 
newShardResponse() {\n- return new ShardFlushResponse();\n- }\n-\n- @Override\n- protected ShardFlushResponse shardOperation(ShardFlushRequest request) {\n- IndexShard indexShard = indicesService.indexServiceSafe(request.shardId().getIndex()).shardSafe(request.shardId().id());\n- indexShard.flush(request.getRequest());\n- return new ShardFlushResponse(request.shardId());\n+ TransportService transportService, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver,\n+ TransportShardFlushAction replicatedFlushAction) {\n+ super(FlushAction.NAME, FlushRequest.class, settings, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, replicatedFlushAction);\n }\n \n- /**\n- * The refresh request works against *all* shards.\n- */\n @Override\n- protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest request, String[] concreteIndices) {\n- return clusterState.routingTable().allActiveShardsGrouped(concreteIndices, true, true);\n+ protected ReplicatedBroadcastShardResponse newShardResponse(int totalNumCopies, ShardId shardId) {\n+ return new ReplicatedBroadcastShardResponse(shardId, totalNumCopies);\n }\n \n @Override\n- protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ protected ShardFlushRequest newShardRequest(ShardId shardId, FlushRequest request) {\n+ return new ShardFlushRequest(shardId, request);\n }\n \n @Override\n- protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n+ protected FlushResponse newResponse(int successfulShards, int failedShards, int totalNumCopies, List<ShardOperationFailedException> shardFailures) {\n+ return new FlushResponse(totalNumCopies, successfulShards, failedShards, shardFailures);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,75 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.flush;\n+\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastShardResponse;\n+import org.elasticsearch.action.support.replicatedbroadcast.TransportReplicatedBroadcastShardAction;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.action.index.MappingUpdatedAction;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+/**\n+ *\n+ */\n+public class TransportShardFlushAction extends TransportReplicatedBroadcastShardAction<ShardFlushRequest, ReplicatedBroadcastShardResponse> {\n+\n+ public static final String NAME = \"indices:data/write/flush\";\n+\n+ @Inject\n+ public TransportShardFlushAction(Settings settings, TransportService transportService, ClusterService clusterService,\n+ IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction,\n+ MappingUpdatedAction mappingUpdatedAction, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, mappingUpdatedAction,\n+ actionFilters, indexNameExpressionResolver, ShardFlushRequest.class, ShardFlushRequest.class, ThreadPool.Names.FLUSH);\n+ }\n+\n+ @Override\n+ protected ReplicatedBroadcastShardResponse newResponseInstance() {\n+ return new ReplicatedBroadcastShardResponse();\n+ }\n+\n+ @Override\n+ protected Tuple<ReplicatedBroadcastShardResponse, ShardFlushRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable {\n+ IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId.getIndex()).shardSafe(shardRequest.shardId.id());\n+ indexShard.flush(shardRequest.request.getRequest());\n+ logger.trace(\"{} flush request executed on primary\", indexShard.shardId());\n+ int totalNumShards = clusterState.getMetaData().index(indexShard.shardId().index().getName()).getNumberOfReplicas() + 1;\n+ return new Tuple<>(new ReplicatedBroadcastShardResponse(shardRequest.shardId, 
totalNumShards), shardRequest.request);\n+ }\n+\n+ @Override\n+ protected void shardOperationOnReplica(ShardId shardId, ShardFlushRequest request) {\n+ IndexShard indexShard = indicesService.indexServiceSafe(request.getShardId().getIndex()).shardSafe(request.getShardId().id());\n+ indexShard.flush(request.getRequest());\n+ logger.trace(\"{} flush request executed on replica\", indexShard.shardId());\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java",
"status": "added"
},
{
"diff": "@@ -22,8 +22,6 @@\n import org.elasticsearch.action.Action;\n import org.elasticsearch.client.ElasticsearchClient;\n \n-/**\n- */\n public class RefreshAction extends Action<RefreshRequest, RefreshResponse, RefreshRequestBuilder> {\n \n public static final RefreshAction INSTANCE = new RefreshAction();",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/RefreshAction.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.action.admin.indices.refresh;\n \n import org.elasticsearch.action.ActionRequest;\n-import org.elasticsearch.action.support.broadcast.BroadcastRequest;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastRequest;\n \n /**\n * A refresh request making all operations performed since the last refresh available for search. The (near) real-time\n@@ -31,8 +31,7 @@\n * @see org.elasticsearch.client.IndicesAdminClient#refresh(RefreshRequest)\n * @see RefreshResponse\n */\n-public class RefreshRequest extends BroadcastRequest<RefreshRequest> {\n-\n+public class RefreshRequest extends ReplicatedBroadcastRequest<RefreshRequest> {\n \n RefreshRequest() {\n }\n@@ -48,5 +47,4 @@ public RefreshRequest(ActionRequest originalRequest) {\n public RefreshRequest(String... indices) {\n super(indices);\n }\n-\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/RefreshRequest.java",
"status": "modified"
},
{
"diff": "@@ -20,35 +20,19 @@\n package org.elasticsearch.action.admin.indices.refresh;\n \n import org.elasticsearch.action.ShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastResponse;\n \n-import java.io.IOException;\n import java.util.List;\n \n /**\n * The response of a refresh action.\n- *\n- *\n */\n-public class RefreshResponse extends BroadcastResponse {\n+public class RefreshResponse extends ReplicatedBroadcastResponse {\n \n RefreshResponse() {\n-\n }\n \n RefreshResponse(int totalShards, int successfulShards, int failedShards, List<ShardOperationFailedException> shardFailures) {\n super(totalShards, successfulShards, failedShards, shardFailures);\n }\n-\n- @Override\n- public void readFrom(StreamInput in) throws IOException {\n- super.readFrom(in);\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- super.writeTo(out);\n- }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/RefreshResponse.java",
"status": "modified"
},
{
"diff": "@@ -21,99 +21,45 @@\n \n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.support.ActionFilters;\n-import org.elasticsearch.action.support.DefaultShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.TransportBroadcastAction;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastResponse;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastShardRequest;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastShardResponse;\n+import org.elasticsearch.action.support.replicatedbroadcast.TransportReplicatedBroadcastAction;\n import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.block.ClusterBlockException;\n-import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n-import org.elasticsearch.cluster.routing.GroupShardsIterator;\n-import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n import java.util.List;\n-import java.util.concurrent.atomic.AtomicReferenceArray;\n-\n-import static com.google.common.collect.Lists.newArrayList;\n \n /**\n * Refresh action.\n */\n-public class TransportRefreshAction extends TransportBroadcastAction<RefreshRequest, RefreshResponse, ShardRefreshRequest, ShardRefreshResponse> {\n-\n- private final IndicesService indicesService;\n+public class TransportRefreshAction extends TransportReplicatedBroadcastAction<RefreshRequest, RefreshResponse, ReplicatedBroadcastShardRequest, ReplicatedBroadcastShardResponse> {\n \n @Inject\n public TransportRefreshAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n- TransportService transportService, IndicesService indicesService,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, RefreshAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n- RefreshRequest.class, ShardRefreshRequest.class, ThreadPool.Names.REFRESH);\n- this.indicesService = indicesService;\n- }\n-\n- @Override\n- protected RefreshResponse newResponse(RefreshRequest request, AtomicReferenceArray shardsResponses, ClusterState clusterState) {\n- int successfulShards = 0;\n- int failedShards = 0;\n- List<ShardOperationFailedException> shardFailures = null;\n- for (int i = 0; i < shardsResponses.length(); i++) {\n- Object shardResponse = shardsResponses.get(i);\n- if (shardResponse == null) {\n- // non active shard, ignore\n- } else if (shardResponse instanceof BroadcastShardOperationFailedException) {\n- failedShards++;\n- if (shardFailures == null) {\n- shardFailures = newArrayList();\n- }\n- shardFailures.add(new DefaultShardOperationFailedException((BroadcastShardOperationFailedException) shardResponse));\n- } else {\n- successfulShards++;\n- }\n- }\n- return new RefreshResponse(shardsResponses.length(), successfulShards, failedShards, shardFailures);\n- }\n-\n- 
@Override\n- protected ShardRefreshRequest newShardRequest(int numShards, ShardRouting shard, RefreshRequest request) {\n- return new ShardRefreshRequest(shard.shardId(), request);\n- }\n-\n- @Override\n- protected ShardRefreshResponse newShardResponse() {\n- return new ShardRefreshResponse();\n- }\n-\n- @Override\n- protected ShardRefreshResponse shardOperation(ShardRefreshRequest request) {\n- IndexShard indexShard = indicesService.indexServiceSafe(request.shardId().getIndex()).shardSafe(request.shardId().id());\n- indexShard.refresh(\"api\");\n- logger.trace(\"{} refresh request executed\", indexShard.shardId());\n- return new ShardRefreshResponse(request.shardId());\n+ TransportService transportService, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver,\n+ TransportShardRefreshAction shardRefreshAction) {\n+ super(RefreshAction.NAME, RefreshRequest.class, settings, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, shardRefreshAction);\n }\n \n- /**\n- * The refresh request works against *all* shards.\n- */\n @Override\n- protected GroupShardsIterator shards(ClusterState clusterState, RefreshRequest request, String[] concreteIndices) {\n- return clusterState.routingTable().allAssignedShardsGrouped(concreteIndices, true, true);\n+ protected ReplicatedBroadcastShardResponse newShardResponse(int totalNumCopies, ShardId shardId) {\n+ return new ReplicatedBroadcastShardResponse(shardId, totalNumCopies);\n }\n \n @Override\n- protected ClusterBlockException checkGlobalBlock(ClusterState state, RefreshRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ protected ReplicatedBroadcastShardRequest newShardRequest(ShardId shardId, RefreshRequest request) {\n+ return new ReplicatedBroadcastShardRequest(shardId);\n }\n \n @Override\n- protected ClusterBlockException checkRequestBlock(ClusterState state, RefreshRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n+ protected ReplicatedBroadcastResponse newResponse(int successfulShards, int failedShards, int totalNumCopies, List<ShardOperationFailedException> shardFailures) {\n+ return new RefreshResponse(totalNumCopies, successfulShards, failedShards, shardFailures);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportRefreshAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,76 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.refresh;\n+\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastShardRequest;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastShardResponse;\n+import org.elasticsearch.action.support.replicatedbroadcast.TransportReplicatedBroadcastShardAction;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.action.index.MappingUpdatedAction;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+/**\n+ *\n+ */\n+public class TransportShardRefreshAction extends TransportReplicatedBroadcastShardAction<ReplicatedBroadcastShardRequest, ReplicatedBroadcastShardResponse> {\n+\n+ public static final String NAME = \"indices:data/write/refresh\";\n+\n+ @Inject\n+ public TransportShardRefreshAction(Settings settings, TransportService transportService, ClusterService clusterService,\n+ IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction,\n+ MappingUpdatedAction mappingUpdatedAction, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, mappingUpdatedAction,\n+ actionFilters, indexNameExpressionResolver, ReplicatedBroadcastShardRequest.class, ReplicatedBroadcastShardRequest.class, ThreadPool.Names.REFRESH);\n+ }\n+\n+ @Override\n+ protected ReplicatedBroadcastShardResponse newResponseInstance() {\n+ return new ReplicatedBroadcastShardResponse();\n+ }\n+\n+ @Override\n+ protected Tuple<ReplicatedBroadcastShardResponse, ReplicatedBroadcastShardRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable {\n+ IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId.getIndex()).shardSafe(shardRequest.shardId.id());\n+ indexShard.refresh(\"api\");\n+ logger.trace(\"{} refresh request executed on primary\", indexShard.shardId());\n+ int totalNumShards = 
clusterState.getMetaData().index(indexShard.shardId().index().getName()).getNumberOfReplicas() + 1;\n+ return new Tuple<>(new ReplicatedBroadcastShardResponse(shardRequest.shardId, totalNumShards), shardRequest.request);\n+ }\n+\n+ @Override\n+ protected void shardOperationOnReplica(ShardId shardId, ReplicatedBroadcastShardRequest request) {\n+ IndexShard indexShard = indicesService.indexServiceSafe(request.getShardId().getIndex()).shardSafe(request.getShardId().id());\n+ indexShard.refresh(\"api\");\n+ logger.trace(\"{} refresh request executed on replica\", indexShard.shardId());\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,47 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support.replicatedbroadcast;\n+\n+import org.elasticsearch.action.ActionRequest;\n+import org.elasticsearch.action.support.broadcast.BroadcastRequest;\n+\n+/**\n+ * A request that is broadcasted to all primaries of an index and then replicated just like write requests.\n+ * This is used for refresh and flush.\n+ */\n+public class ReplicatedBroadcastRequest<Request extends ReplicatedBroadcastRequest> extends BroadcastRequest<Request> {\n+\n+\n+ ReplicatedBroadcastRequest() {\n+ }\n+\n+ /**\n+ * Copy constructor that creates a new refresh request that is a copy of the one provided as an argument.\n+ * The new request will inherit though headers and context from the original request that caused it.\n+ */\n+ public ReplicatedBroadcastRequest(ActionRequest originalRequest) {\n+ super(originalRequest);\n+ }\n+\n+ public ReplicatedBroadcastRequest(String... indices) {\n+ super(indices);\n+ }\n+\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/support/replicatedbroadcast/ReplicatedBroadcastRequest.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,54 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support.replicatedbroadcast;\n+\n+import org.elasticsearch.action.ShardOperationFailedException;\n+import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+\n+import java.io.IOException;\n+import java.util.List;\n+\n+/**\n+ * The response to a replicated broadcast request.\n+ *\n+ *\n+ */\n+public class ReplicatedBroadcastResponse extends BroadcastResponse {\n+\n+ public ReplicatedBroadcastResponse() {\n+\n+ }\n+\n+ public ReplicatedBroadcastResponse(int totalShards, int successfulShards, int failedShards, List<ShardOperationFailedException> shardFailures) {\n+ super(totalShards, successfulShards, failedShards, shardFailures);\n+ }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/support/replicatedbroadcast/ReplicatedBroadcastResponse.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,79 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support.replicatedbroadcast;\n+\n+import org.elasticsearch.action.ActionRequest;\n+import org.elasticsearch.action.support.replication.ReplicationRequest;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.index.shard.ShardId;\n+\n+import java.io.IOException;\n+\n+public class ReplicatedBroadcastShardRequest<Request extends ReplicatedBroadcastShardRequest> extends ReplicationRequest<Request> {\n+\n+ public ShardId getShardId() {\n+ return shardId;\n+ }\n+\n+ public void setShardId(ShardId shardId) {\n+ this.shardId = shardId;\n+ index(shardId.index().name());\n+ }\n+\n+ private ShardId shardId;\n+\n+ /**\n+ * Copy constructor that creates a new refresh request that is a copy of the one provided as an argument.\n+ * The new request will inherit though headers and context from the original request that caused it.\n+ */\n+ public ReplicatedBroadcastShardRequest(ActionRequest originalRequest) {\n+ super(originalRequest);\n+ }\n+\n+ public ReplicatedBroadcastShardRequest() {\n+ }\n+\n+ public ReplicatedBroadcastShardRequest(ShardId shardId) {\n+ this.shardId = shardId;\n+ index(shardId.index().name());\n+ }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ this.shardId = ShardId.readShardId(in);\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ shardId.writeTo(out);\n+ }\n+\n+ @Override\n+ public boolean skipExecutionOnShadowReplicas() {\n+ return false;\n+ }\n+\n+ public ReplicatedBroadcastShardResponse newResponse(ShardId shardId, int totalNumShards) {\n+ return new ReplicatedBroadcastShardResponse(shardId, totalNumShards);\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/support/replicatedbroadcast/ReplicatedBroadcastShardRequest.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,82 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support.replicatedbroadcast;\n+\n+import org.elasticsearch.action.ActionWriteResponse;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.index.shard.ShardId;\n+\n+import java.io.IOException;\n+\n+/**\n+ * A response of an index operation,\n+ *\n+ * @see ReplicatedBroadcastShardRequest\n+ */\n+public class ReplicatedBroadcastShardResponse extends ActionWriteResponse {\n+\n+ public ShardId getShardId() {\n+ return shardId;\n+ }\n+\n+ public void setShardId(ShardId shardId) {\n+ this.shardId = shardId;\n+ }\n+\n+ private ShardId shardId;\n+\n+ private int totalNumCopies;\n+\n+\n+ public ReplicatedBroadcastShardResponse() {\n+ }\n+\n+ public ReplicatedBroadcastShardResponse(ShardId shardId, int totalNumCopies) {\n+ this.shardId = shardId;\n+ this.totalNumCopies = totalNumCopies;\n+ }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ shardId = ShardId.readShardId(in);\n+ totalNumCopies = in.readVInt();\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ shardId.writeTo(out);\n+ out.writeVInt(totalNumCopies);\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return \"ReplicatedRefreshResponse{\" +\n+ \"shardId=\" + shardId +\n+ \", totalNumCopies=\" + totalNumCopies +\n+ '}';\n+ }\n+\n+ public int getTotalNumCopies() {\n+ return totalNumCopies;\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/support/replicatedbroadcast/ReplicatedBroadcastShardResponse.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,140 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support.replicatedbroadcast;\n+\n+import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.ActionRequest;\n+import org.elasticsearch.action.ActionWriteResponse;\n+import org.elasticsearch.action.ShardOperationFailedException;\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.DefaultShardOperationFailedException;\n+import org.elasticsearch.action.support.HandledTransportAction;\n+import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.routing.GroupShardsIterator;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardsIterator;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.concurrent.CountDown;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+import java.util.Arrays;\n+import java.util.List;\n+import java.util.concurrent.CopyOnWriteArrayList;\n+\n+import static com.google.common.collect.Lists.newArrayList;\n+\n+/**\n+ * Refresh action.\n+ */\n+public abstract class TransportReplicatedBroadcastAction<Request extends ReplicatedBroadcastRequest, Response extends ReplicatedBroadcastResponse, ShardRequest extends ReplicatedBroadcastShardRequest, ShardResponse extends ReplicatedBroadcastShardResponse> extends HandledTransportAction<Request, Response> {\n+\n+ private final TransportReplicatedBroadcastShardAction replicatedBroadcastShardAction;\n+ private final ClusterService clusterService;\n+\n+ public TransportReplicatedBroadcastAction(String name, Class<Request> request, Settings settings, ThreadPool threadPool, ClusterService clusterService,\n+ TransportService transportService,\n+ ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, TransportReplicatedBroadcastShardAction replicatedBroadcastShardAction) {\n+ super(settings, name, threadPool, transportService, actionFilters, indexNameExpressionResolver,request);\n+ this.replicatedBroadcastShardAction = replicatedBroadcastShardAction;\n+ this.clusterService = clusterService;\n+ }\n+\n+ @Override\n+ protected void doExecute(final Request request, final ActionListener<Response> listener) {\n+ GroupShardsIterator groupShardsIterator = clusterService.operationRouting().searchShards(clusterService.state(), 
indexNameExpressionResolver.concreteIndices(clusterService.state(), request), null, null);\n+ final CopyOnWriteArrayList<ShardResponse> shardsResponses = new CopyOnWriteArrayList();\n+ final CountDown responsesCountDown = new CountDown(groupShardsIterator.size());\n+ if (responsesCountDown.isCountedDown() == false) {\n+ for (final ShardsIterator shardsIterator : groupShardsIterator) {\n+ final ShardRouting shardRouting = shardsIterator.nextOrNull();\n+ if (shardRouting == null) {\n+ if (responsesCountDown.countDown()) {\n+ finishAndNotifyListener(listener, shardsResponses);\n+ }\n+ } else {\n+ final ShardId shardId = shardRouting.shardId();\n+ replicatedBroadcastShardAction.execute(newShardRequest(shardId, request), new ActionListener<ShardResponse>() {\n+ @Override\n+ public void onResponse(ShardResponse shardResponse) {\n+ shardsResponses.add(shardResponse);\n+ if (responsesCountDown.countDown()) {\n+ logger.trace(\"replicated broadcast: got response from {}\", shardResponse.getShardId());\n+ finishAndNotifyListener(listener, shardsResponses);\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ logger.trace(\"replicated broadcast: got failure from {}\", shardId);\n+ int totalNumCopies = clusterService.state().getMetaData().index(shardId.index().getName()).getNumberOfReplicas() + 1;\n+ ShardResponse shardResponse = newShardResponse(totalNumCopies, shardId);\n+ ActionWriteResponse.ShardInfo.Failure failure = new ActionWriteResponse.ShardInfo.Failure(shardId.index().name(), shardId.id(), null, e, ExceptionsHelper.status(e), true);\n+ ActionWriteResponse.ShardInfo.Failure[] failures = new ActionWriteResponse.ShardInfo.Failure[totalNumCopies];\n+ Arrays.fill(failures, failure);\n+ shardResponse.setShardInfo(new ActionWriteResponse.ShardInfo(totalNumCopies, 0, failures));\n+ shardsResponses.add(shardResponse);\n+ if (responsesCountDown.countDown()) {\n+ finishAndNotifyListener(listener, shardsResponses);\n+ }\n+ }\n+ });\n+ }\n+ }\n+ } else {\n+ finishAndNotifyListener(listener, shardsResponses);\n+ }\n+ }\n+\n+ protected abstract ShardResponse newShardResponse(int totalNumCopies, ShardId shardId);\n+\n+ protected abstract ShardRequest newShardRequest(ShardId shardId, Request request);\n+\n+ private void finishAndNotifyListener(ActionListener listener, CopyOnWriteArrayList<ShardResponse> shardsResponses) {\n+ logger.trace(\"replicated broadcast: got all shard responses\");\n+ int successfulShards = 0;\n+ int failedShards = 0;\n+ int totalNumCopies = 0;\n+ List<ShardOperationFailedException> shardFailures = null;\n+ for (int i = 0; i < shardsResponses.size(); i++) {\n+ ReplicatedBroadcastShardResponse shardResponse = shardsResponses.get(i);\n+ if (shardResponse == null) {\n+ // non active shard, ignore\n+ } else {\n+ failedShards += shardResponse.getShardInfo().getFailed();\n+ successfulShards += shardResponse.getShardInfo().getSuccessful();\n+ totalNumCopies += shardResponse.getTotalNumCopies();\n+ if (shardFailures == null) {\n+ shardFailures = newArrayList();\n+ }\n+ for (ActionWriteResponse.ShardInfo.Failure failure : shardResponse.getShardInfo().getFailures()) {\n+ shardFailures.add(new DefaultShardOperationFailedException(new BroadcastShardOperationFailedException(new ShardId(failure.index(), failure.shardId()), failure.getCause())));\n+ }\n+ }\n+ }\n+ listener.onResponse(newResponse(successfulShards, failedShards, totalNumCopies, shardFailures));\n+ }\n+\n+ protected abstract ReplicatedBroadcastResponse newResponse(int successfulShards, int failedShards, int 
totalNumCopies, List<ShardOperationFailedException> shardFailures);\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/support/replicatedbroadcast/TransportReplicatedBroadcastAction.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,78 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support.replicatedbroadcast;\n+\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.replication.TransportReplicationAction;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.action.index.MappingUpdatedAction;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.routing.ShardIterator;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+/**\n+ *\n+ */\n+public abstract class TransportReplicatedBroadcastShardAction<Request extends ReplicatedBroadcastShardRequest, Response extends ReplicatedBroadcastShardResponse> extends TransportReplicationAction<Request, Request, Response> {\n+\n+ protected TransportReplicatedBroadcastShardAction(Settings settings, String actionName, TransportService transportService, ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction, MappingUpdatedAction mappingUpdatedAction, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Class<Request> request, Class<Request> replicaRequest, String executor) {\n+ super(settings, actionName, transportService, clusterService, indicesService, threadPool, shardStateAction, mappingUpdatedAction, actionFilters, indexNameExpressionResolver, request, replicaRequest, executor);\n+ }\n+\n+ @Override\n+ protected boolean resolveIndex() {\n+ return true;\n+ }\n+\n+ @Override\n+ protected boolean checkWriteConsistency() {\n+ return false;\n+ }\n+\n+ @Override\n+ protected ShardIterator shards(ClusterState clusterState, InternalRequest request) {\n+ return clusterService.operationRouting().shards(clusterService.state(), request.concreteIndex(), request.request().getShardId().id()).shardsIt();\n+ }\n+\n+ @Override\n+ protected abstract Tuple<Response, Request> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable;\n+\n+ @Override\n+ protected abstract void shardOperationOnReplica(ShardId shardId, Request request);\n+\n+ @Override\n+ protected ClusterBlockException checkGlobalBlock(ClusterState state) 
{\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, new String[]{request.concreteIndex()});\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/action/support/replicatedbroadcast/TransportReplicatedBroadcastShardAction.java",
"status": "added"
},
{
"diff": "@@ -192,4 +192,8 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(index);\n out.writeBoolean(canHaveDuplicates);\n }\n+\n+ public boolean skipExecutionOnShadowReplicas() {\n+ return true;\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java",
"status": "modified"
},
{
"diff": "@@ -172,6 +172,7 @@ protected boolean ignoreReplicaException(Throwable e) {\n if (isConflictException(e)) {\n return true;\n }\n+ // TODO should we check here for refresh and flush failures which should not make replica fail? if so, which are they?\n return false;\n }\n \n@@ -748,7 +749,7 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n if (shard.relocating()) {\n numberOfPendingShardInstances++;\n }\n- } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings())) {\n+ } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings()) && replicaRequest.skipExecutionOnShadowReplicas()) {\n // If the replicas use shadow replicas, there is no reason to\n // perform the action on the replica, so skip it and\n // immediately return\n@@ -782,7 +783,7 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n // we have to replicate to the other copy\n numberOfPendingShardInstances += 1;\n }\n- } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings())) {\n+ } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings()) && replicaRequest.skipExecutionOnShadowReplicas()) {\n // If the replicas use shadow replicas, there is no reason to\n // perform the action on the replica, so skip it and\n // immediately return\n@@ -861,7 +862,7 @@ protected void doRun() {\n if (shard.relocating()) {\n performOnReplica(shard, shard.relocatingNodeId());\n }\n- } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings()) == false) {\n+ } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings()) == false || replicaRequest.skipExecutionOnShadowReplicas() == false) {\n performOnReplica(shard, shard.currentNodeId());\n if (shard.relocating()) {\n performOnReplica(shard, shard.relocatingNodeId());",
"filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java",
"status": "modified"
},
{
"diff": "@@ -226,7 +226,7 @@ protected IndexShardRoutingTable shards(ClusterState clusterState, String index,\n return shards(clusterState, index, shardId);\n }\n \n- protected IndexShardRoutingTable shards(ClusterState clusterState, String index, int shardId) {\n+ public IndexShardRoutingTable shards(ClusterState clusterState, String index, int shardId) {\n IndexShardRoutingTable indexShard = indexRoutingTable(clusterState, index).shard(shardId);\n if (indexShard == null) {\n throw new ShardNotFoundException(new ShardId(index, shardId));",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java",
"status": "modified"
},
{
"diff": "@@ -919,7 +919,7 @@ public boolean ignoreRecoveryAttempt() {\n \n public void readAllowed() throws IllegalIndexShardStateException {\n IndexShardState state = this.state; // one time volatile read\n- if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED) {\n+ if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED && state != IndexShardState.POST_RECOVERY) {\n throw new IllegalIndexShardStateException(shardId, state, \"operations only allowed when started/relocated\");\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -28,8 +28,8 @@\n import org.elasticsearch.action.admin.indices.close.CloseIndexRequest;\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexAction;\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;\n-import org.elasticsearch.action.admin.indices.flush.FlushAction;\n import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n+import org.elasticsearch.action.admin.indices.flush.TransportShardFlushAction;\n import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsAction;\n import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsRequest;\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsAction;\n@@ -42,8 +42,8 @@\n import org.elasticsearch.action.admin.indices.optimize.OptimizeRequest;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryAction;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryRequest;\n-import org.elasticsearch.action.admin.indices.refresh.RefreshAction;\n import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;\n+import org.elasticsearch.action.admin.indices.refresh.TransportShardRefreshAction;\n import org.elasticsearch.action.admin.indices.segments.IndicesSegmentsAction;\n import org.elasticsearch.action.admin.indices.segments.IndicesSegmentsRequest;\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsAction;\n@@ -85,6 +85,7 @@\n import org.elasticsearch.action.update.UpdateAction;\n import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.action.update.UpdateResponse;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -96,31 +97,16 @@\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n import org.elasticsearch.threadpool.ThreadPool;\n-import org.elasticsearch.transport.Transport;\n-import org.elasticsearch.transport.TransportChannel;\n-import org.elasticsearch.transport.TransportModule;\n-import org.elasticsearch.transport.TransportRequest;\n-import org.elasticsearch.transport.TransportRequestHandler;\n-import org.elasticsearch.transport.TransportService;\n+import org.elasticsearch.transport.*;\n import org.junit.After;\n import org.junit.Before;\n import org.junit.Test;\n \n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.hamcrest.Matchers.emptyIterable;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n-import static org.hamcrest.Matchers.hasItem;\n-import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.*;\n \n @ClusterScope(scope = Scope.SUITE, numClientNodes = 1, minNumDataNodes = 2)\n public class IndicesRequestIT extends ESIntegTestCase {\n@@ -383,14 +369,15 @@ public void testExists() {\n \n @Test\n public void testFlush() {\n- String flushShardAction = FlushAction.NAME + \"[s]\";\n- interceptTransportActions(flushShardAction);\n+ String[] indexShardActions = new 
String[]{TransportShardFlushAction.NAME + \"[r]\", TransportShardFlushAction.NAME};\n+ interceptTransportActions(indexShardActions);\n \n FlushRequest flushRequest = new FlushRequest(randomIndicesOrAliases());\n internalCluster().clientNodeClient().admin().indices().flush(flushRequest).actionGet();\n \n clearInterceptedActions();\n- assertSameIndices(flushRequest, flushShardAction);\n+ String[] indices = new IndexNameExpressionResolver(Settings.EMPTY).concreteIndices(client().admin().cluster().prepareState().get().getState(), flushRequest);\n+ assertIndicesSubset(Arrays.asList(indices), indexShardActions);\n }\n \n @Test\n@@ -407,14 +394,15 @@ public void testOptimize() {\n \n @Test\n public void testRefresh() {\n- String refreshShardAction = RefreshAction.NAME + \"[s]\";\n- interceptTransportActions(refreshShardAction);\n+ String[] indexShardActions = new String[]{TransportShardRefreshAction.NAME + \"[r]\", TransportShardRefreshAction.NAME};\n+ interceptTransportActions(indexShardActions);\n \n RefreshRequest refreshRequest = new RefreshRequest(randomIndicesOrAliases());\n internalCluster().clientNodeClient().admin().indices().refresh(refreshRequest).actionGet();\n \n clearInterceptedActions();\n- assertSameIndices(refreshRequest, refreshShardAction);\n+ String[] indices = new IndexNameExpressionResolver(Settings.EMPTY).concreteIndices(client().admin().cluster().prepareState().get().getState(), refreshRequest);\n+ assertIndicesSubset(Arrays.asList(indices), indexShardActions);\n }\n \n @Test",
"filename": "core/src/test/java/org/elasticsearch/action/IndicesRequestIT.java",
"status": "modified"
},
{
"diff": "@@ -61,7 +61,8 @@ public void testFlushWithBlocks() {\n for (String blockSetting : Arrays.asList(SETTING_READ_ONLY, SETTING_BLOCKS_METADATA)) {\n try {\n enableIndexBlock(\"test\", blockSetting);\n- assertBlocked(client().admin().indices().prepareFlush(\"test\"));\n+ FlushResponse flushResponse = client().admin().indices().prepareFlush(\"test\").get();\n+ assertBlocked(flushResponse);\n } finally {\n disableIndexBlock(\"test\", blockSetting);\n }\n@@ -74,7 +75,7 @@ public void testFlushWithBlocks() {\n assertThat(response.getSuccessfulShards(), equalTo(numShards.totalNumShards));\n \n setClusterReadOnly(true);\n- assertBlocked(client().admin().indices().prepareFlush());\n+ assertBlocked(client().admin().indices().prepareFlush().get());\n } finally {\n setClusterReadOnly(false);\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/flush/FlushBlocksIT.java",
"status": "modified"
},
{
"diff": "@@ -74,7 +74,7 @@ public void testOptimizeWithBlocks() {\n assertThat(response.getSuccessfulShards(), equalTo(numShards.totalNumShards));\n \n setClusterReadOnly(true);\n- assertBlocked(client().admin().indices().prepareFlush());\n+ assertBlocked(client().admin().indices().prepareOptimize());\n } finally {\n setClusterReadOnly(false);\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/optimize/OptimizeBlocksIT.java",
"status": "modified"
},
{
"diff": "@@ -57,7 +57,7 @@ public void testRefreshWithBlocks() {\n for (String blockSetting : Arrays.asList(SETTING_READ_ONLY, SETTING_BLOCKS_METADATA)) {\n try {\n enableIndexBlock(\"test\", blockSetting);\n- assertBlocked(client().admin().indices().prepareRefresh(\"test\"));\n+ assertBlocked(client().admin().indices().prepareRefresh(\"test\").get());\n } finally {\n disableIndexBlock(\"test\", blockSetting);\n }\n@@ -70,7 +70,7 @@ public void testRefreshWithBlocks() {\n assertThat(response.getSuccessfulShards(), equalTo(numShards.totalNumShards));\n \n setClusterReadOnly(true);\n- assertBlocked(client().admin().indices().prepareRefresh());\n+ assertBlocked(client().admin().indices().prepareRefresh().get());\n } finally {\n setClusterReadOnly(false);\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/refresh/RefreshBlocksIT.java",
"status": "modified"
},
{
"diff": "@@ -20,26 +20,40 @@\n package org.elasticsearch.discovery;\n \n import com.google.common.base.Predicate;\n+import com.sun.javafx.property.adapter.PropertyDescriptor;\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n+import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n+import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;\n+import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.*;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.DjbHashFunction;\n+import org.elasticsearch.cluster.routing.RoutingNode;\n+import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n import org.elasticsearch.discovery.zen.fd.FaultDetection;\n@@ -48,6 +62,16 @@\n import org.elasticsearch.discovery.zen.ping.ZenPingService;\n import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing;\n import org.elasticsearch.discovery.zen.publish.PublishClusterStateAction;\n+import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.flush.IndicesSyncedFlushResult;\n+import org.elasticsearch.indices.flush.SyncedFlushService;\n+import org.elasticsearch.indices.recovery.RecoverySource;\n+import org.elasticsearch.indices.store.IndicesStoreIntegrationIT;\n+import org.elasticsearch.rest.BytesRestResponse;\n+import org.elasticsearch.rest.RestResponse;\n+import org.elasticsearch.rest.action.support.RestBuilderListener;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.test.discovery.ClusterDiscoveryConfiguration;\n@@ -841,7 +865,9 @@ public void isolatedUnicastNodes() throws Exception {\n }\n \n \n- /** Test cluster join with issues in cluster state publishing * */\n+ /**\n+ * Test cluster join with issues in cluster state publishing *\n+ */\n @Test\n public void testClusterJoinDespiteOfPublishingIssues() 
throws Exception {\n List<String> nodes = startCluster(2, 1);\n@@ -948,6 +974,302 @@ public void testNodeNotReachableFromMaster() throws Exception {\n ensureStableCluster(3);\n }\n \n+ /*\n+ * Tests a visibility issue if a shard is in POST_RECOVERY\n+ *\n+ * When a user indexes a document, then refreshes and then a executes a search and all are successful and no timeouts etc then\n+ * the document must be visible for the search.\n+ *\n+ * When a primary is relocating from node_1 to node_2, there can be a short time where both old and new primary\n+ * are started and accept indexing and read requests. However, the new primary might not be visible to nodes\n+ * that lag behind one cluster state. If such a node then sends a refresh to the index, this refresh request\n+ * must reach the new primary on node_2 too. Otherwise a different node that searches on the new primary might not\n+ * find the indexed document although a refresh was executed before.\n+ *\n+ * In detail:\n+ * Cluster state 0:\n+ * node_1: [index][0] STARTED (ShardRoutingState)\n+ * node_2: no shard\n+ *\n+ * 0. primary ([index][0]) relocates from node_1 to node_2\n+ * Cluster state 1:\n+ * node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1)\n+ * node_2: [index][0] INITIALIZING (ShardRoutingState), (IndexShardState on node_2 is RECOVERING)\n+ *\n+ * 1. node_2 is done recovering, moves its shard to IndexShardState.POST_RECOVERY and sends a message to master that the shard is ShardRoutingState.STARTED\n+ * Cluster state is still the same but the IndexShardState on node_2 has changed and it now accepts writes and reads:\n+ * node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1)\n+ * node_2: [index][0] INITIALIZING (ShardRoutingState), (IndexShardState on node_2 is POST_RECOVERY)\n+ *\n+ * 2. any node receives an index request which is then executed on node_1 and node_2\n+ *\n+ * 3. node_3 sends a refresh but it is a little behind with cluster state processing and still on cluster state 0.\n+ * If refresh was a broadcast operation it send it to node_1 only because it does not know node_2 has a shard too\n+ *\n+ * 4. node_3 catches up with the cluster state and acks it to master which now can process the shard started message\n+ * from node_2 before and updates cluster state to:\n+ * Cluster state 2:\n+ * node_1: [index][0] no shard\n+ * node_2: [index][0] STARTED (ShardRoutingState), (IndexShardState on node_2 is still POST_RECOVERY)\n+ *\n+ * master sends this to all nodes.\n+ *\n+ * 5. node_4 and node_3 process cluster state 2, but node_1 and node_2 have not yet\n+ *\n+ * If now node_4 searches for document that was indexed before, it will search at node_2 because it is on\n+ * cluster state 2. 
It should be able to retrieve it with a search because the refresh from before was\n+ * successful.\n+ */\n+ @Test\n+ public void testReadOnPostRecoveryShards() throws Exception {\n+ List<BlockClusterStateProcessing> clusterStateBlocks = new ArrayList<>();\n+ try {\n+ configureCluster(5, 1);\n+ // we could probably write a test without a dedicated master node but it is easier if we use one\n+ Future<String> masterNodeFuture = internalCluster().startMasterOnlyNodeAsync();\n+ // node_1 will have the shard in the beginning\n+ Future<String> node1Future = internalCluster().startDataOnlyNodeAsync();\n+ final String masterNode = masterNodeFuture.get();\n+ final String node_1 = node1Future.get();\n+ logger.info(\"--> creating index [test] with one shard and zero replica\");\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .put(IndexShard.INDEX_REFRESH_INTERVAL, -1))\n+ .addMapping(\"doc\", jsonBuilder().startObject().startObject(\"doc\")\n+ .startObject(\"properties\").startObject(\"text\").field(\"type\", \"string\").endObject().endObject()\n+ .endObject().endObject())\n+ );\n+ ensureGreen(\"test\");\n+ logger.info(\"--> starting three more data nodes\");\n+ List<String> nodeNamesFuture = internalCluster().startDataOnlyNodesAsync(3).get();\n+ final String node_2 = nodeNamesFuture.get(0);\n+ final String node_3 = nodeNamesFuture.get(1);\n+ final String node_4 = nodeNamesFuture.get(2);\n+ logger.info(\"--> running cluster_health\");\n+ ClusterHealthResponse clusterHealth = client().admin().cluster().prepareHealth()\n+ .setWaitForNodes(\"5\")\n+ .setWaitForRelocatingShards(0)\n+ .get();\n+ assertThat(clusterHealth.isTimedOut(), equalTo(false));\n+\n+ logger.info(\"--> move shard from node_1 to node_2, and wait for relocation to finish\");\n+\n+ // block cluster state updates on node_3 so that it only sees the shard on node_1\n+ BlockClusterStateProcessing disruptionNode3 = new BlockClusterStateProcessing(node_3, getRandom());\n+ clusterStateBlocks.add(disruptionNode3);\n+ internalCluster().setDisruptionScheme(disruptionNode3);\n+ disruptionNode3.startDisrupting();\n+ // register a Tracer that notifies begin and end of a relocation\n+ MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n+ CountDownLatch beginRelocationLatchNode2 = new CountDownLatch(1);\n+ CountDownLatch endRelocationLatchNode2 = new CountDownLatch(1);\n+ transportServiceNode2.addTracer(new StartRecoveryToShardStaredTracer(logger, beginRelocationLatchNode2, endRelocationLatchNode2));\n+\n+ // block cluster state updates on node_1 and node_2 so that we end up with two primaries\n+ BlockClusterStateProcessing disruptionNode2 = new BlockClusterStateProcessing(node_2, getRandom());\n+ clusterStateBlocks.add(disruptionNode2);\n+ disruptionNode2.applyToCluster(internalCluster());\n+ BlockClusterStateProcessing disruptionNode1 = new BlockClusterStateProcessing(node_1, getRandom());\n+ clusterStateBlocks.add(disruptionNode1);\n+ disruptionNode1.applyToCluster(internalCluster());\n+\n+ logger.info(\"--> move shard from node_1 to node_2\");\n+ // don't block on the relocation. 
cluster state updates are blocked on node_3 and the relocation would timeout\n+ Future<ClusterRerouteResponse> rerouteFuture = internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(new ShardId(\"test\", 0), node_1, node_2)).setTimeout(new TimeValue(1000, TimeUnit.MILLISECONDS)).execute();\n+\n+ logger.info(\"--> wait for relocation to start\");\n+ // wait for relocation to start\n+ beginRelocationLatchNode2.await();\n+ // start to block cluster state updates on node_1 and node_2 so that we end up with two primaries\n+ // one STARTED on node_1 and one in POST_RECOVERY on node_2\n+ disruptionNode1.startDisrupting();\n+ disruptionNode2.startDisrupting();\n+ endRelocationLatchNode2.await();\n+ final Client node3Client = internalCluster().client(node_3);\n+ final Client node2Client = internalCluster().client(node_2);\n+ final Client node1Client = internalCluster().client(node_1);\n+ final Client node4Client = internalCluster().client(node_4);\n+ logger.info(\"--> index doc\");\n+ logLocalClusterStates(node1Client, node2Client, node3Client, node4Client);\n+ assertTrue(node3Client.prepareIndex(\"test\", \"doc\").setSource(\"{\\\"text\\\":\\\"a\\\"}\").get().isCreated());\n+ //sometimes refresh and sometimes flush\n+ int refreshOrFlushType = randomIntBetween(1, 3);\n+ switch (refreshOrFlushType) {\n+ case 1: {\n+ logger.info(\"--> refresh from node_3\");\n+ RefreshResponse refreshResponse = node3Client.admin().indices().prepareRefresh().get();\n+ assertThat(refreshResponse.getFailedShards(), equalTo(0));\n+ // the total shards is num replicas + 1 so that can be lower here because one shard\n+ // is relocating and counts twice as successful\n+ assertThat(refreshResponse.getTotalShards(), equalTo(1));\n+ assertThat(refreshResponse.getSuccessfulShards(), equalTo(2));\n+ break;\n+ }\n+ case 2: {\n+ logger.info(\"--> flush from node_3\");\n+ FlushResponse flushResponse = node3Client.admin().indices().prepareFlush().get();\n+ assertThat(flushResponse.getFailedShards(), equalTo(0));\n+ // the total shards is num replicas + 1 so that can be lower here because one shard\n+ // is relocating and counts twice as successful\n+ assertThat(flushResponse.getTotalShards(), equalTo(1));\n+ assertThat(flushResponse.getSuccessfulShards(), equalTo(2));\n+ break;\n+ }\n+ case 3: {\n+ logger.info(\"--> synced flush from node_3\");\n+ final AtomicReference<Object> syncedFlushResult = new AtomicReference<>();\n+ final CountDownLatch latch = new CountDownLatch(1);\n+ internalCluster().getInstance(SyncedFlushService.class, node_3).attemptSyncedFlush(new String[]{\"test\"}, IndicesOptions.lenientExpandOpen(), new ActionListener<IndicesSyncedFlushResult>() {\n+ @Override\n+ public void onResponse(IndicesSyncedFlushResult indicesSyncedFlushResult) {\n+ syncedFlushResult.set(indicesSyncedFlushResult);\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ syncedFlushResult.set(e);\n+ latch.countDown();\n+\n+ }\n+ });\n+ latch.await();\n+ assertFalse(syncedFlushResult.get() instanceof Throwable);\n+ IndicesSyncedFlushResult flushResponse = (IndicesSyncedFlushResult)syncedFlushResult.get();\n+ assertThat(flushResponse.failedShards(), equalTo(0));\n+ // the total shards is num replicas + 1 so that can be lower here because one shard\n+ // is relocating and counts twice as successful\n+ assertThat(flushResponse.totalShards(), equalTo(1));\n+ assertThat(flushResponse.successfulShards(), equalTo(1));\n+ }\n+ default:\n+ fail(\"this is test bug, number should be 
between 1 and 3\");\n+ }\n+ // now stop disrupting so that node_3 can ack last cluster state to master and master can continue\n+ // to publish the next cluster state\n+ logger.info(\"--> stop disrupting node_3\");\n+ disruptionNode3.stopDisrupting();\n+ rerouteFuture.get();\n+ logger.info(\"--> wait for node_4 to get new cluster state\");\n+ // wait until node_4 actually has the new cluster state in which node_1 has no shard\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ ClusterState clusterState = node4Client.admin().cluster().prepareState().setLocal(true).get().getState();\n+ // get the node id from the name. TODO: Is there a better way to do this?\n+ String nodeId = null;\n+ for (RoutingNode node : clusterState.getRoutingNodes()) {\n+ if (node.node().name().equals(node_1)) {\n+ nodeId = node.nodeId();\n+ }\n+ }\n+ assertNotNull(nodeId);\n+ // check that node_1 does not have the shard in local cluster state\n+ assertFalse(clusterState.getRoutingNodes().routingNodeIter(nodeId).hasNext());\n+ }\n+ });\n+\n+ logger.info(\"--> run count from node_4\");\n+ logLocalClusterStates(node1Client, node2Client, node3Client, node4Client);\n+ CountResponse countResponse = node4Client.prepareCount(\"test\").setPreference(\"local\").get();\n+ assertThat(countResponse.getCount(), equalTo(1l));\n+ logger.info(\"--> stop disrupting node_1 and node_2\");\n+ disruptionNode2.stopDisrupting();\n+ disruptionNode1.stopDisrupting();\n+ // wait for relocation to finish\n+ logger.info(\"--> wait for relocation to finish\");\n+ clusterHealth = client().admin().cluster().prepareHealth()\n+ .setWaitForRelocatingShards(0)\n+ .get();\n+ assertThat(clusterHealth.isTimedOut(), equalTo(false));\n+ } catch (AssertionError e) {\n+ for (BlockClusterStateProcessing blockClusterStateProcessing : clusterStateBlocks) {\n+ blockClusterStateProcessing.stopDisrupting();\n+ }\n+ throw e;\n+ }\n+ }\n+\n+ /**\n+ * This Tracer can be used to signal start of a recovery and shard started event after translog was copied\n+ */\n+ public static class StartRecoveryToShardStaredTracer extends MockTransportService.Tracer {\n+ private final ESLogger logger;\n+ private final CountDownLatch beginRelocationLatch;\n+ private final CountDownLatch sentShardStartedLatch;\n+\n+ public StartRecoveryToShardStaredTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch sentShardStartedLatch) {\n+ this.logger = logger;\n+ this.beginRelocationLatch = beginRelocationLatch;\n+ this.sentShardStartedLatch = sentShardStartedLatch;\n+ }\n+\n+ @Override\n+ public void requestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) {\n+ if (action.equals(RecoverySource.Actions.START_RECOVERY)) {\n+ logger.info(\"sent: {}, relocation starts\", action);\n+ beginRelocationLatch.countDown();\n+ }\n+ if (action.equals(ShardStateAction.SHARD_STARTED_ACTION_NAME)) {\n+ logger.info(\"sent: {}, shard started\", action);\n+ sentShardStartedLatch.countDown();\n+ }\n+ }\n+ }\n+\n+ private void logLocalClusterStates(Client... 
clients) {\n+ int counter = 1;\n+ for (Client client : clients) {\n+ ClusterState clusterState = client.admin().cluster().prepareState().setLocal(true).get().getState();\n+ logger.info(\"--> cluster state on node_{} {}\", counter, clusterState.prettyPrint());\n+ counter++;\n+ }\n+ }\n+\n+ @Test\n+ public void searchWithRelocationAndSlowClusterStateProcessing() throws Exception {\n+ configureCluster(3, 1);\n+ Future<String> masterNodeFuture = internalCluster().startMasterOnlyNodeAsync();\n+ Future<String> node_1Future = internalCluster().startDataOnlyNodeAsync();\n+\n+ final String node_1 = node_1Future.get();\n+\n+ final String masterNode = masterNodeFuture.get();\n+ logger.info(\"--> creating index [test] with one shard and on replica\");\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n+ );\n+ ensureGreen(\"test\");\n+\n+ Future<String> node_2Future = internalCluster().startDataOnlyNodeAsync();\n+ final String node_2 = node_2Future.get();\n+ List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n+ for (int i = 0; i < 100; i++) {\n+ indexRequestBuilderList.add(client().prepareIndex().setIndex(\"test\").setType(\"doc\").setSource(\"{\\\"int_field\\\":1}\"));\n+ }\n+ indexRandom(true, indexRequestBuilderList);\n+ SingleNodeDisruption disruption = new BlockClusterStateProcessing(node_2, getRandom());\n+\n+ internalCluster().setDisruptionScheme(disruption);\n+ MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n+ CountDownLatch beginRelocationLatch = new CountDownLatch(1);\n+ CountDownLatch endRelocationLatch = new CountDownLatch(1);\n+ transportServiceNode2.addTracer(new IndicesStoreIntegrationIT.ReclocationStartEndTracer(logger, beginRelocationLatch, endRelocationLatch));\n+ internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(new ShardId(\"test\", 0), node_1, node_2)).get();\n+ // wait for relocation to start\n+ beginRelocationLatch.await();\n+ disruption.startDisrupting();\n+ // wait for relocation to finish\n+ endRelocationLatch.await();\n+ // now search for the documents and see if we get a reply\n+ // wait a little so that cluster state observer is registered\n+ assertThat(client().prepareCount().get().getCount(), equalTo(100l));\n+ }\n+\n @Test\n public void testIndexImportedFromDataOnlyNodesIfMasterLostDataFolder() throws Exception {\n // test for https://github.com/elastic/elasticsearch/issues/8823\n@@ -961,6 +1283,7 @@ public void testIndexImportedFromDataOnlyNodesIfMasterLostDataFolder() throws Ex\n ensureGreen();\n \n internalCluster().restartNode(masterNode, new InternalTestCluster.RestartCallback() {\n+ @Override\n public boolean clearData(String nodeName) {\n return true;\n }\n@@ -975,7 +1298,7 @@ public boolean clearData(String nodeName) {\n @Test\n public void testIndicesDeleted() throws Exception {\n configureCluster(3, 2);\n- Future<List<String>> masterNodes= internalCluster().startMasterOnlyNodesAsync(2);\n+ Future<List<String>> masterNodes = internalCluster().startMasterOnlyNodesAsync(2);\n Future<String> dataNode = internalCluster().startDataOnlyNodeAsync();\n dataNode.get();\n masterNodes.get();",
"filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java",
"status": "modified"
},
{
"diff": "@@ -426,12 +426,12 @@ public boolean apply(Object o) {\n * state processing when a recover starts and only unblocking it shortly after the node receives\n * the ShardActiveRequest.\n */\n- static class ReclocationStartEndTracer extends MockTransportService.Tracer {\n+ public static class ReclocationStartEndTracer extends MockTransportService.Tracer {\n private final ESLogger logger;\n private final CountDownLatch beginRelocationLatch;\n private final CountDownLatch receivedShardExistsRequestLatch;\n \n- ReclocationStartEndTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch receivedShardExistsRequestLatch) {\n+ public ReclocationStartEndTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch receivedShardExistsRequestLatch) {\n this.logger = logger;\n this.beginRelocationLatch = beginRelocationLatch;\n this.receivedShardExistsRequestLatch = receivedShardExistsRequestLatch;",
"filename": "core/src/test/java/org/elasticsearch/indices/store/IndicesStoreIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;\n+import org.elasticsearch.action.support.replicatedbroadcast.ReplicatedBroadcastResponse;\n import org.elasticsearch.plugins.PluginInfo;\n import org.elasticsearch.action.admin.cluster.node.info.PluginsInfo;\n import org.elasticsearch.action.admin.indices.alias.exists.AliasesExistResponse;\n@@ -67,6 +68,7 @@\n import org.elasticsearch.search.suggest.Suggest;\n import org.elasticsearch.test.VersionUtils;\n import org.elasticsearch.test.rest.client.http.HttpResponse;\n+import org.hamcrest.CoreMatchers;\n import org.hamcrest.Matcher;\n import org.hamcrest.Matchers;\n import org.junit.Assert;\n@@ -126,6 +128,22 @@ public static void assertBlocked(ActionRequestBuilder builder) {\n assertBlocked(builder, null);\n }\n \n+ /**\n+ * Checks that all shard requests of a replicated brodcast request failed due to a cluster block\n+ *\n+ * @param replicatedBroadcastResponse the response that should only contain failed shard responses\n+ *\n+ * */\n+ public static void assertBlocked(ReplicatedBroadcastResponse replicatedBroadcastResponse) {\n+ assertThat(\"all shard requests should have failed\", replicatedBroadcastResponse.getFailedShards(), Matchers.equalTo(replicatedBroadcastResponse.getTotalShards()));\n+ for (ShardOperationFailedException exception : replicatedBroadcastResponse.getShardFailures()) {\n+ ClusterBlockException clusterBlockException = (ClusterBlockException) ExceptionsHelper.unwrap(exception.getCause(), ClusterBlockException.class);\n+ assertNotNull(\"expected the cause of failure to be a ClusterBlockException but got \" + exception.getCause().getMessage(), clusterBlockException);\n+ assertThat(clusterBlockException.blocks().size(), greaterThan(0));\n+ assertThat(clusterBlockException.status(), CoreMatchers.equalTo(RestStatus.FORBIDDEN));\n+ }\n+ }\n+\n /**\n * Executes the request and fails if the request has not been blocked by a specific {@link ClusterBlock}.\n *",
"filename": "core/src/test/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java",
"status": "modified"
}
]
} |
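The relocation tests in the diffs above coordinate a network disruption with the progress of a shard relocation by counting down `CountDownLatch` instances from a transport tracer whenever a particular action is sent. Below is a minimal, generic sketch of that latch-signaling pattern in plain Java, assuming a hypothetical `Tracer` callback and made-up action names; it is not the Elasticsearch `MockTransportService` API.

```java
import java.util.concurrent.CountDownLatch;

public class LatchTracerSketch {

    // Hypothetical stand-in for a transport tracer callback (not an Elasticsearch interface).
    interface Tracer {
        void requestSent(String action);
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch relocationStarted = new CountDownLatch(1);
        CountDownLatch shardStarted = new CountDownLatch(1);

        // The tracer counts down the matching latch when it observes an action of interest.
        Tracer tracer = action -> {
            if ("start_recovery".equals(action)) {        // made-up action name
                relocationStarted.countDown();
            } else if ("shard_started".equals(action)) {  // made-up action name
                shardStarted.countDown();
            }
        };

        // Simulated "transport" thread emitting actions, as a real node would during relocation.
        Thread transport = new Thread(() -> {
            tracer.requestSent("start_recovery");
            tracer.requestSent("shard_started");
        });
        transport.start();

        // The test thread blocks until each interesting point in the relocation is reached,
        // e.g. to start disrupting cluster state processing or to assert on local cluster state.
        relocationStarted.await();
        shardStarted.await();
        transport.join();
        System.out.println("both events observed");
    }
}
```

The point of the pattern is that the test blocks on `await()` instead of polling, so each step (start the disruption, run the assertions) happens exactly once the corresponding transport action has actually been sent.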
{
"body": "Currently we silently ignore the pipeline agg in question. Until #12336 is implemented we should report an error.\n\nFor a reproduction see:\n\n```\nGET logstash-2015.01/_search?search_type=count\n{\n \"aggs\": {\n \"time\": {\n \"date_histogram\": {\n \"field\": \"@timestamp\",\n \"interval\": \"day\"\n },\n \"aggs\": {\n \"test\": {\n \"filters\": {\n \"filters\": {\n \"get\": {\n \"term\": {\n \"verb\": \"get\"\n }\n },\n \"post\": {\n \"term\": {\n \"verb\": \"post\"\n }\n }\n }\n }\n },\n \"get_derive\": {\n \"derivative\": {\n \"buckets_path\": \"test>get>_count\"\n }\n }\n }\n }\n }\n}\n```\n\nWhich outputs (not the missing `get_derive` agg):\n\n```\n {\n \"key_as_string\": \"1420243200000\",\n \"key\": 1420243200000,\n \"doc_count\": 10664,\n \"test\": {\n \"buckets\": {\n \"get\": {\n \"doc_count\": 10592\n },\n \"post\": {\n \"doc_count\": 15\n }\n }\n }\n },\n```\n",
"comments": [],
"number": 12360,
"title": "Issue an error when a pipeline aggs references a muliti-bucket aggregation "
} | {
"body": "Previously only the first aggregation in a buckets_path was check to make sure the aggregation existed. Now the whole path is checked to ensure an aggregation exists at each element in the buckets_path\n\nCloses #12360\n",
"number": 12595,
"review_comments": [
{
"body": "Can we catch a more specific exception, and not print it?\n",
"created_at": "2015-08-03T08:32:19Z"
},
{
"body": "I think this should rather be an IllegalArgumentException? (ie. the path argument is illegal?)\n",
"created_at": "2015-08-03T09:44:23Z"
}
],
"title": "Full path validation for pipeline aggregations"
} | {
"commits": [
{
"message": "Aggregations: Full path validation for pipeline aggregations\n\nPreviously only the first aggregation in a buckets_path was check to make sure the aggregation existed. Now the whole path is checked to ensure an aggregation exists at each element in the buckets_path\n\nCloses #12360"
}
],
"files": [
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorFactory;\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n import org.elasticsearch.search.aggregations.support.AggregationPath;\n+import org.elasticsearch.search.aggregations.support.AggregationPath.PathElement;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -162,40 +163,79 @@ private List<PipelineAggregatorFactory> resolvePipelineAggregatorOrder(List<Pipe\n for (PipelineAggregatorFactory factory : pipelineAggregatorFactories) {\n pipelineAggregatorFactoriesMap.put(factory.getName(), factory);\n }\n- Set<String> aggFactoryNames = new HashSet<>();\n+ Map<String, AggregatorFactory> aggFactoriesMap = new HashMap<>();\n for (AggregatorFactory aggFactory : aggFactories) {\n- aggFactoryNames.add(aggFactory.name);\n+ aggFactoriesMap.put(aggFactory.name, aggFactory);\n }\n List<PipelineAggregatorFactory> orderedPipelineAggregatorrs = new LinkedList<>();\n List<PipelineAggregatorFactory> unmarkedFactories = new ArrayList<PipelineAggregatorFactory>(pipelineAggregatorFactories);\n Set<PipelineAggregatorFactory> temporarilyMarked = new HashSet<PipelineAggregatorFactory>();\n while (!unmarkedFactories.isEmpty()) {\n PipelineAggregatorFactory factory = unmarkedFactories.get(0);\n- resolvePipelineAggregatorOrder(aggFactoryNames, pipelineAggregatorFactoriesMap, orderedPipelineAggregatorrs, unmarkedFactories, temporarilyMarked, factory);\n+ resolvePipelineAggregatorOrder(aggFactoriesMap, pipelineAggregatorFactoriesMap, orderedPipelineAggregatorrs,\n+ unmarkedFactories, temporarilyMarked, factory);\n }\n return orderedPipelineAggregatorrs;\n }\n \n- private void resolvePipelineAggregatorOrder(Set<String> aggFactoryNames, Map<String, PipelineAggregatorFactory> pipelineAggregatorFactoriesMap,\n+ private void resolvePipelineAggregatorOrder(Map<String, AggregatorFactory> aggFactoriesMap,\n+ Map<String, PipelineAggregatorFactory> pipelineAggregatorFactoriesMap,\n List<PipelineAggregatorFactory> orderedPipelineAggregators, List<PipelineAggregatorFactory> unmarkedFactories, Set<PipelineAggregatorFactory> temporarilyMarked,\n PipelineAggregatorFactory factory) {\n if (temporarilyMarked.contains(factory)) {\n- throw new IllegalStateException(\"Cyclical dependancy found with pipeline aggregator [\" + factory.getName() + \"]\");\n+ throw new IllegalArgumentException(\"Cyclical dependancy found with pipeline aggregator [\" + factory.getName() + \"]\");\n } else if (unmarkedFactories.contains(factory)) {\n temporarilyMarked.add(factory);\n String[] bucketsPaths = factory.getBucketsPaths();\n for (String bucketsPath : bucketsPaths) {\n- List<String> bucketsPathElements = AggregationPath.parse(bucketsPath).getPathElementsAsStringList();\n- String firstAggName = bucketsPathElements.get(0);\n- if (bucketsPath.equals(\"_count\") || bucketsPath.equals(\"_key\") || aggFactoryNames.contains(firstAggName)) {\n+ List<AggregationPath.PathElement> bucketsPathElements = AggregationPath.parse(bucketsPath).getPathElements();\n+ String firstAggName = bucketsPathElements.get(0).name;\n+ if (bucketsPath.equals(\"_count\") || bucketsPath.equals(\"_key\")) {\n+ continue;\n+ } else if (aggFactoriesMap.containsKey(firstAggName)) {\n+ AggregatorFactory aggFactory = aggFactoriesMap.get(firstAggName);\n+ for (int i = 1; i < bucketsPathElements.size(); i++) {\n+ PathElement pathElement = bucketsPathElements.get(i);\n+ String aggName = pathElement.name;\n+ if ((i == bucketsPathElements.size() - 1) && 
(aggName.equalsIgnoreCase(\"_key\") || aggName.equals(\"_count\"))) {\n+ break;\n+ } else {\n+ // Check the non-pipeline sub-aggregator\n+ // factories\n+ AggregatorFactory[] subFactories = aggFactory.factories.factories;\n+ boolean foundSubFactory = false;\n+ for (AggregatorFactory subFactory : subFactories) {\n+ if (aggName.equals(subFactory.name)) {\n+ aggFactory = subFactory;\n+ foundSubFactory = true;\n+ break;\n+ }\n+ }\n+ // Check the pipeline sub-aggregator factories\n+ if (!foundSubFactory && (i == bucketsPathElements.size() - 1)) {\n+ List<PipelineAggregatorFactory> subPipelineFactories = aggFactory.factories.pipelineAggregatorFactories;\n+ for (PipelineAggregatorFactory subFactory : subPipelineFactories) {\n+ if (aggName.equals(subFactory.name())) {\n+ foundSubFactory = true;\n+ break;\n+ }\n+ }\n+ }\n+ if (!foundSubFactory) {\n+ throw new IllegalArgumentException(\"No aggregation [\" + aggName + \"] found for path [\" + bucketsPath\n+ + \"]\");\n+ }\n+ }\n+ }\n continue;\n } else {\n PipelineAggregatorFactory matchingFactory = pipelineAggregatorFactoriesMap.get(firstAggName);\n if (matchingFactory != null) {\n- resolvePipelineAggregatorOrder(aggFactoryNames, pipelineAggregatorFactoriesMap, orderedPipelineAggregators, unmarkedFactories,\n+ resolvePipelineAggregatorOrder(aggFactoriesMap, pipelineAggregatorFactoriesMap, orderedPipelineAggregators,\n+ unmarkedFactories,\n temporarilyMarked, matchingFactory);\n } else {\n- throw new IllegalStateException(\"No aggregation found for path [\" + bucketsPath + \"]\");\n+ throw new IllegalArgumentException(\"No aggregation found for path [\" + bucketsPath + \"]\");\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java",
"status": "modified"
},
{
"diff": "@@ -37,7 +37,7 @@ public abstract class PipelineAggregatorFactory {\n \n /**\n * Constructs a new pipeline aggregator factory.\n- * \n+ *\n * @param name\n * The aggregation name\n * @param type\n@@ -49,10 +49,14 @@ public PipelineAggregatorFactory(String name, String type, String[] bucketsPaths\n this.bucketsPaths = bucketsPaths;\n }\n \n+ public String name() {\n+ return name;\n+ }\n+\n /**\n * Validates the state of this factory (makes sure the factory is properly\n * configured)\n- * \n+ *\n * @param pipelineAggregatorFactories\n * @param factories\n * @param parent\n@@ -66,7 +70,7 @@ public final void validate(AggregatorFactory parent, AggregatorFactory[] factori\n \n /**\n * Creates the pipeline aggregator\n- * \n+ *\n * @param context\n * The aggregation context\n * @param parent\n@@ -77,7 +81,7 @@ public final void validate(AggregatorFactory parent, AggregatorFactory[] factori\n * with <tt>0</tt> as a bucket ordinal. Some factories can take\n * advantage of this in order to return more optimized\n * implementations.\n- * \n+ *\n * @return The created aggregator\n */\n public final PipelineAggregator create() throws IOException {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/PipelineAggregatorFactory.java",
"status": "modified"
},
{
"diff": "@@ -19,15 +19,18 @@\n \n package org.elasticsearch.search.aggregations.pipeline;\n \n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.Bucket;\n import org.elasticsearch.search.aggregations.metrics.stats.Stats;\n import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n-import org.elasticsearch.search.aggregations.pipeline.SimpleValue;\n import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy;\n import org.elasticsearch.search.aggregations.pipeline.derivative.Derivative;\n import org.elasticsearch.search.aggregations.support.AggregationPath;\n@@ -39,12 +42,13 @@\n import java.util.ArrayList;\n import java.util.List;\n \n-import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.derivative;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filters;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.stats;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.sum;\n+import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.derivative;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.closeTo;\n@@ -228,15 +232,15 @@ public void singleValuedField_normalised() {\n Derivative docCountDeriv = bucket.getAggregations().get(\"deriv\");\n if (i > 0) {\n assertThat(docCountDeriv, notNullValue());\n- assertThat(docCountDeriv.value(), closeTo((double) (firstDerivValueCounts[i - 1]), 0.00001));\n+ assertThat(docCountDeriv.value(), closeTo((firstDerivValueCounts[i - 1]), 0.00001));\n assertThat(docCountDeriv.normalizedValue(), closeTo((double) (firstDerivValueCounts[i - 1]) / 5, 0.00001));\n } else {\n assertThat(docCountDeriv, nullValue());\n }\n Derivative docCount2ndDeriv = bucket.getAggregations().get(\"2nd_deriv\");\n if (i > 1) {\n assertThat(docCount2ndDeriv, notNullValue());\n- assertThat(docCount2ndDeriv.value(), closeTo((double) (secondDerivValueCounts[i - 2]), 0.00001));\n+ assertThat(docCount2ndDeriv.value(), closeTo((secondDerivValueCounts[i - 2]), 0.00001));\n assertThat(docCount2ndDeriv.normalizedValue(), closeTo((double) (secondDerivValueCounts[i - 2]) * 2, 0.00001));\n } else {\n assertThat(docCount2ndDeriv, nullValue());\n@@ -596,6 +600,42 @@ public void singleValueAggDerivativeWithGaps_random() throws Exception {\n }\n }\n \n+ @Test\n+ public void singleValueAggDerivative_invalidPath() throws Exception {\n+ try {\n+ client().prepareSearch(\"idx\")\n+ .addAggregation(\n+ histogram(\"histo\")\n+ .field(SINGLE_VALUED_FIELD_NAME)\n+ 
.interval(interval)\n+ .subAggregation(\n+ filters(\"filters\").filter(QueryBuilders.termQuery(\"tag\", \"foo\")).subAggregation(\n+ sum(\"sum\").field(SINGLE_VALUED_FIELD_NAME)))\n+ .subAggregation(derivative(\"deriv\").setBucketsPaths(\"filters>get>sum\"))).execute().actionGet();\n+ fail(\"Expected an Exception but didn't get one\");\n+ } catch (Exception e) {\n+ Throwable cause = ExceptionsHelper.unwrapCause(e);\n+ if (cause == null) {\n+ throw e;\n+ } else if (cause instanceof SearchPhaseExecutionException) {\n+ ElasticsearchException[] rootCauses = ((SearchPhaseExecutionException) cause).guessRootCauses();\n+ // If there is more than one root cause then something\n+ // unexpected happened and we should re-throw the original\n+ // exception\n+ if (rootCauses.length > 1) {\n+ throw e;\n+ }\n+ ElasticsearchException rootCauseWrapper = rootCauses[0];\n+ Throwable rootCause = rootCauseWrapper.getCause();\n+ if (rootCause == null || !(rootCause instanceof IllegalArgumentException)) {\n+ throw e;\n+ }\n+ } else {\n+ throw e;\n+ }\n+ }\n+ }\n+\n private void checkBucketKeyAndDocCount(final String msg, final Histogram.Bucket bucket, final long expectedKey,\n final long expectedDocCount) {\n assertThat(msg, bucket, notNullValue());",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/DerivativeTests.java",
"status": "modified"
}
]
} |
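For the buckets_path validation in the record above, the essential change is to walk every element of the path against the tree of sub-aggregations rather than checking only the first element. Below is a minimal standalone sketch of that walk, assuming a simplified `AggNode` tree in place of Elasticsearch's `AggregatorFactory` structures; names and error messages are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

public class BucketsPathValidationSketch {

    // Hypothetical aggregation node: a name plus named sub-aggregations.
    static final class AggNode {
        final String name;
        final Map<String, AggNode> subs = new HashMap<>();
        AggNode(String name) { this.name = name; }
        AggNode sub(AggNode child) { subs.put(child.name, child); return this; }
    }

    // Walks every element of a path like "test>get>_count" against the sibling
    // aggregations of the pipeline aggregation, failing if any level is missing.
    // The special "_count"/"_key" tokens are only allowed as the last element.
    static void validate(Map<String, AggNode> siblings, String bucketsPath) {
        if (bucketsPath.equals("_count") || bucketsPath.equals("_key")) {
            return; // refers to the enclosing bucket itself
        }
        String[] elements = bucketsPath.split(">");
        AggNode current = siblings.get(elements[0]);
        if (current == null) {
            throw new IllegalArgumentException("No aggregation found for path [" + bucketsPath + "]");
        }
        for (int i = 1; i < elements.length; i++) {
            String element = elements[i];
            boolean last = (i == elements.length - 1);
            if (last && (element.equals("_count") || element.equals("_key"))) {
                return; // bucket metric of the aggregation resolved so far
            }
            current = current.subs.get(element);
            if (current == null) {
                throw new IllegalArgumentException(
                        "No aggregation [" + element + "] found for path [" + bucketsPath + "]");
            }
        }
    }

    public static void main(String[] args) {
        AggNode test = new AggNode("test").sub(new AggNode("get"));
        Map<String, AggNode> siblings = Map.of("test", test);

        validate(siblings, "test>get>_count");          // accepted
        try {
            validate(siblings, "test>missing>_count");  // rejected at the second element
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Checking the full path up front turns a silently missing pipeline aggregation, as in the issue's reproduction, into an immediate IllegalArgumentException at validation time.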
{
"body": "ValueFormatter.DateTime is mutable so that we can set the timezone to use in order to display date/times (DateHistogramParser calls this method). But we also have the ValueFormatter.DateTime.DEFAULT constant, so if this constant is used and there are several concurrent requests, then they might invalidate each other's timezone parameter.\n",
"comments": [],
"number": 12531,
"title": "Inconsistent usage of ValueFormatter.DateTime"
} | {
"body": "This PR prevents setting timezone on ValueFormatter.DateTime. Instead\nthe timezone information needed when printing buckets key-as-string\ninformation is provided at constrution time of the ValueFormatter, making\nsure we don't overwrite any constants. This, however, made it necessary to\nbe able to access the timezone information when resolving the format\nin ValueSourseParser, so the `time_zone` parameter is now parsed alongside\nthe `format` parameter in ValueSourceParser rather than in DateHistogramParser.\n\nCloses #12531\n",
"number": 12581,
"review_comments": [
{
"body": "why do we need this null check, it looks trappy?\n",
"created_at": "2015-07-31T16:33:34Z"
},
{
"body": "maybe we should check that the timeZone argument is not null and fix callers to not call this method with a null timezone?\n",
"created_at": "2015-07-31T16:35:25Z"
},
{
"body": "Will add throwing an exception here, this was just for added security since the callers should already always use UTC as default.\n",
"created_at": "2015-08-03T10:07:33Z"
},
{
"body": "Maybe Input should be immutable if we are going to make it public? or at least immutable to outside classes?\n",
"created_at": "2015-08-10T10:26:33Z"
},
{
"body": "Lookig at this again, I don't like exposing the Input just to get the timezone information in the DateHistogramParser, I think it better belongs there where it was in the first place. Will add a commit where I change to passing timezone as optional argument to `ValueSourceParse.config()`, IMHO much cleaner.\n",
"created_at": "2015-08-10T11:02:52Z"
}
],
"title": "Fix setting timezone on default DateTime formatter"
} | {
"commits": [
{
"message": "Aggregations: Fix setting timezone on default DateTime formatter\n\nThis PR prevents setting timezone on ValueFormatter.DateTime. Instead\nthe timezone information needed when printing buckets key-as-string\ninformation is provided at constrution time of the ValueFormatter, making\nsure we don't overwrite any constants. This, however, made it necessary to\nbe able to access the timezone information when resolving the format\nin ValueSourseParser, so the `time_zone` parameter is now parsed alongside\nthe `format` parameter in ValueSourceParser rather than in DateHistogramParser.\n\nCloses #12531"
}
],
"files": [
{
"diff": "@@ -64,6 +64,9 @@ public Builder(TimeValue interval) {\n }\n \n public Builder timeZone(DateTimeZone timeZone) {\n+ if (timeZone == null) {\n+ throw new IllegalArgumentException(\"Setting null as timezone is not supported\");\n+ }\n this.timeZone = timeZone;\n return this;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java",
"status": "modified"
},
{
"diff": "@@ -33,10 +33,7 @@\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n import org.elasticsearch.search.aggregations.support.ValuesSourceParser;\n-import org.elasticsearch.search.aggregations.support.format.ValueFormatter.DateTime;\n import org.elasticsearch.search.internal.SearchContext;\n-import org.joda.time.DateTimeZone;\n-\n import java.io.IOException;\n \n /**\n@@ -45,7 +42,6 @@\n public class DateHistogramParser implements Aggregator.Parser {\n \n static final ParseField EXTENDED_BOUNDS = new ParseField(\"extended_bounds\");\n- static final ParseField TIME_ZONE = new ParseField(\"time_zone\");\n static final ParseField OFFSET = new ParseField(\"offset\");\n static final ParseField INTERVAL = new ParseField(\"interval\");\n \n@@ -83,14 +79,14 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n ValuesSourceParser vsParser = ValuesSourceParser.numeric(aggregationName, InternalDateHistogram.TYPE, context)\n .targetValueType(ValueType.DATE)\n .formattable(true)\n+ .timezoneAware(true)\n .build();\n \n boolean keyed = false;\n long minDocCount = 0;\n ExtendedBounds extendedBounds = null;\n InternalOrder order = (InternalOrder) Histogram.Order.KEY_ASC;\n String interval = null;\n- DateTimeZone timeZone = DateTimeZone.UTC;\n long offset = 0;\n \n XContentParser.Token token;\n@@ -101,9 +97,7 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n } else if (vsParser.token(currentFieldName, token, parser)) {\n continue;\n } else if (token == XContentParser.Token.VALUE_STRING) {\n- if (context.parseFieldMatcher().match(currentFieldName, TIME_ZONE)) {\n- timeZone = DateTimeZone.forID(parser.text());\n- } else if (context.parseFieldMatcher().match(currentFieldName, OFFSET)) {\n+ if (context.parseFieldMatcher().match(currentFieldName, OFFSET)) {\n offset = parseOffset(parser.text());\n } else if (context.parseFieldMatcher().match(currentFieldName, INTERVAL)) {\n interval = parser.text();\n@@ -121,8 +115,6 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n } else if (token == XContentParser.Token.VALUE_NUMBER) {\n if (\"min_doc_count\".equals(currentFieldName) || \"minDocCount\".equals(currentFieldName)) {\n minDocCount = parser.longValue();\n- } else if (\"time_zone\".equals(currentFieldName) || \"timeZone\".equals(currentFieldName)) {\n- timeZone = DateTimeZone.forOffsetHours(parser.intValue());\n } else {\n throw new SearchParseException(context, \"Unknown key for a \" + token + \" in [\" + aggregationName + \"]: [\"\n + currentFieldName + \"].\", parser.getTokenLocation());\n@@ -193,13 +185,10 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n }\n \n Rounding rounding = tzRoundingBuilder\n- .timeZone(timeZone)\n+ .timeZone(vsParser.input().timezone())\n .offset(offset).build();\n \n ValuesSourceConfig config = vsParser.config();\n- if (config.formatter()!=null) {\n- ((DateTime) config.formatter()).setTimeZone(timeZone);\n- }\n return new HistogramAggregator.Factory(aggregationName, config, rounding, order, keyed, minDocCount, extendedBounds,\n new InternalDateHistogram.Factory());\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.aggregations.support;\n \n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;\n@@ -39,6 +40,7 @@\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.support.format.ValueFormat;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.joda.time.DateTimeZone;\n \n import java.io.IOException;\n import java.util.Map;\n@@ -50,6 +52,8 @@\n */\n public class ValuesSourceParser<VS extends ValuesSource> {\n \n+ static final ParseField TIME_ZONE = new ParseField(\"time_zone\");\n+\n public static Builder any(String aggName, InternalAggregation.Type aggType, SearchContext context) {\n return new Builder<>(aggName, aggType, context, ValuesSource.class);\n }\n@@ -66,14 +70,19 @@ public static Builder<ValuesSource.GeoPoint> geoPoint(String aggName, InternalAg\n return new Builder<>(aggName, aggType, context, ValuesSource.GeoPoint.class).targetValueType(ValueType.GEOPOINT).scriptable(false);\n }\n \n- private static class Input {\n- String field = null;\n- Script script = null;\n+ public static class Input {\n+ private String field = null;\n+ private Script script = null;\n @Deprecated\n- Map<String, Object> params = null; // TODO Remove in 3.0\n- ValueType valueType = null;\n- String format = null;\n- Object missing = null;\n+ private Map<String, Object> params = null; // TODO Remove in 3.0\n+ private ValueType valueType = null;\n+ private String format = null;\n+ private Object missing = null;\n+ private DateTimeZone timezone = DateTimeZone.UTC;\n+\n+ public DateTimeZone timezone() {\n+ return this.timezone;\n+ }\n }\n \n private final String aggName;\n@@ -83,6 +92,7 @@ private static class Input {\n \n private boolean scriptable = true;\n private boolean formattable = false;\n+ private boolean timezoneAware = false;\n private ValueType targetValueType = null;\n private ScriptParameterParser scriptParameterParser = new ScriptParameterParser();\n \n@@ -105,6 +115,8 @@ public boolean token(String currentFieldName, XContentParser.Token token, XConte\n input.field = parser.text();\n } else if (formattable && \"format\".equals(currentFieldName)) {\n input.format = parser.text();\n+ } else if (timezoneAware && context.parseFieldMatcher().match(currentFieldName, TIME_ZONE)) {\n+ input.timezone = DateTimeZone.forID(parser.text());\n } else if (scriptable) {\n if (\"value_type\".equals(currentFieldName) || \"valueType\".equals(currentFieldName)) {\n input.valueType = ValueType.resolveForScript(parser.text());\n@@ -123,6 +135,14 @@ public boolean token(String currentFieldName, XContentParser.Token token, XConte\n }\n return true;\n }\n+ if (token == XContentParser.Token.VALUE_NUMBER) {\n+ if (timezoneAware && context.parseFieldMatcher().match(currentFieldName, TIME_ZONE)) {\n+ input.timezone = DateTimeZone.forOffsetHours(parser.intValue());\n+ } else {\n+ return false;\n+ }\n+ return true;\n+ }\n if (scriptable && token == XContentParser.Token.START_OBJECT) {\n if (context.parseFieldMatcher().match(currentFieldName, ScriptField.SCRIPT)) {\n input.script = Script.parse(parser, context.parseFieldMatcher());\n@@ -203,7 +223,7 @@ public ValuesSourceConfig<VS> config() {\n config.fieldContext = new FieldContext(input.field, indexFieldData, fieldType);\n 
config.missing = input.missing;\n config.script = createScript();\n- config.format = resolveFormat(input.format, fieldType);\n+ config.format = resolveFormat(input.format, input.timezone, fieldType);\n return config;\n }\n \n@@ -222,9 +242,9 @@ private static ValueFormat resolveFormat(@Nullable String format, @Nullable Valu\n return valueFormat;\n }\n \n- private static ValueFormat resolveFormat(@Nullable String format, MappedFieldType fieldType) {\n+ private static ValueFormat resolveFormat(@Nullable String format, @Nullable DateTimeZone timezone, MappedFieldType fieldType) {\n if (fieldType instanceof DateFieldMapper.DateFieldType) {\n- return format != null ? ValueFormat.DateTime.format(format) : ValueFormat.DateTime.mapper((DateFieldMapper.DateFieldType) fieldType);\n+ return format != null ? ValueFormat.DateTime.format(format, timezone) : ValueFormat.DateTime.mapper((DateFieldMapper.DateFieldType) fieldType, timezone);\n }\n if (fieldType instanceof IpFieldMapper.IpFieldType) {\n return ValueFormat.IPv4;\n@@ -238,6 +258,10 @@ private static ValueFormat resolveFormat(@Nullable String format, MappedFieldTyp\n return ValueFormat.RAW;\n }\n \n+ public Input input() {\n+ return this.input;\n+ }\n+\n public static class Builder<VS extends ValuesSource> {\n \n private final ValuesSourceParser<VS> parser;\n@@ -256,6 +280,11 @@ public Builder<VS> formattable(boolean formattable) {\n return this;\n }\n \n+ public Builder<VS> timezoneAware(boolean timezoneAware) {\n+ parser.timezoneAware = timezoneAware;\n+ return this;\n+ }\n+\n public Builder<VS> targetValueType(ValueType valueType) {\n parser.targetValueType = valueType;\n return this;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceParser.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.aggregations.support.format;\n \n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.joda.time.DateTimeZone;\n \n /**\n *\n@@ -67,12 +68,12 @@ public static class DateTime extends Patternable<DateTime> {\n \n public static final DateTime DEFAULT = new DateTime(DateFieldMapper.Defaults.DATE_TIME_FORMATTER.format(), ValueFormatter.DateTime.DEFAULT, ValueParser.DateMath.DEFAULT);\n \n- public static DateTime format(String format) {\n- return new DateTime(format, new ValueFormatter.DateTime(format), new ValueParser.DateMath(format));\n+ public static DateTime format(String format, DateTimeZone timezone) {\n+ return new DateTime(format, new ValueFormatter.DateTime(format, timezone), new ValueParser.DateMath(format));\n }\n \n- public static DateTime mapper(DateFieldMapper.DateFieldType fieldType) {\n- return new DateTime(fieldType.dateTimeFormatter().format(), ValueFormatter.DateTime.mapper(fieldType), ValueParser.DateMath.mapper(fieldType));\n+ public static DateTime mapper(DateFieldMapper.DateFieldType fieldType, DateTimeZone timezone) {\n+ return new DateTime(fieldType.dateTimeFormatter().format(), ValueFormatter.DateTime.mapper(fieldType, timezone), ValueParser.DateMath.mapper(fieldType));\n }\n \n public DateTime(String pattern, ValueFormatter formatter, ValueParser parser) {\n@@ -81,7 +82,7 @@ public DateTime(String pattern, ValueFormatter formatter, ValueParser parser) {\n \n @Override\n public DateTime create(String pattern) {\n- return format(pattern);\n+ return format(pattern, DateTimeZone.UTC);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/format/ValueFormat.java",
"status": "modified"
},
{
"diff": "@@ -33,7 +33,6 @@\n import java.text.DecimalFormatSymbols;\n import java.text.NumberFormat;\n import java.util.Locale;\n-import java.util.TimeZone;\n \n /**\n * A strategy for formatting time represented as millis long value to string\n@@ -61,7 +60,6 @@ public interface ValueFormatter extends Streamable {\n String format(long value);\n \n /**\n- * The \n * @param value double The double value to format.\n * @return The formatted value as string\n */\n@@ -104,8 +102,8 @@ public static class DateTime implements ValueFormatter {\n public static final ValueFormatter DEFAULT = new ValueFormatter.DateTime(DateFieldMapper.Defaults.DATE_TIME_FORMATTER);\n private DateTimeZone timeZone = DateTimeZone.UTC;\n \n- public static DateTime mapper(DateFieldMapper.DateFieldType fieldType) {\n- return new DateTime(fieldType.dateTimeFormatter());\n+ public static DateTime mapper(DateFieldMapper.DateFieldType fieldType, DateTimeZone timezone) {\n+ return new DateTime(fieldType.dateTimeFormatter(), timezone);\n }\n \n static final byte ID = 2;\n@@ -122,15 +120,21 @@ public DateTime(FormatDateTimeFormatter formatter) {\n this.formatter = formatter;\n }\n \n+ public DateTime(String format, DateTimeZone timezone) {\n+ this.formatter = Joda.forPattern(format);\n+ this.timeZone = timezone != null ? timezone : DateTimeZone.UTC;\n+ }\n+\n+ public DateTime(FormatDateTimeFormatter formatter, DateTimeZone timezone) {\n+ this.formatter = formatter;\n+ this.timeZone = timezone != null ? timezone : DateTimeZone.UTC;\n+ }\n+\n @Override\n public String format(long time) {\n return formatter.printer().withZone(timeZone).print(time);\n }\n \n- public void setTimeZone(DateTimeZone timeZone) {\n- this.timeZone = timeZone;\n- }\n-\n @Override\n public String format(double value) {\n return format((long) value);\n@@ -264,7 +268,7 @@ public void writeTo(StreamOutput out) throws IOException {\n \n }\n }\n- \n+\n static class BooleanFormatter implements ValueFormatter {\n \n static final byte ID = 10;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/format/ValueFormatter.java",
"status": "modified"
}
]
} |
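The hazard described in the record above is a formatter constant whose timezone is mutated per request. The sketch below uses `java.time` to show both the race-prone shape and the construction-time fix the PR adopts; `MutableDateFormatter` and `ImmutableDateFormatter` are illustrative stand-ins, not the actual `ValueFormatter.DateTime` class.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class SharedFormatterSketch {

    // Race-prone shape: the shared DEFAULT instance can have its zone flipped by one
    // request while another request is concurrently printing bucket keys with it.
    static final class MutableDateFormatter {
        static final MutableDateFormatter DEFAULT = new MutableDateFormatter();
        private final DateTimeFormatter pattern = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss");
        private ZoneId zone = ZoneId.of("UTC");
        void setZone(ZoneId zone) { this.zone = zone; }   // shared state mutated per request
        String format(long millis) { return pattern.withZone(zone).format(Instant.ofEpochMilli(millis)); }
    }

    // Fixed shape: the zone is supplied at construction time, so constants such as
    // DEFAULT can never be affected by concurrent requests.
    static final class ImmutableDateFormatter {
        static final ImmutableDateFormatter DEFAULT = new ImmutableDateFormatter(ZoneId.of("UTC"));
        private final DateTimeFormatter formatter;
        ImmutableDateFormatter(ZoneId zone) {
            this.formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss").withZone(zone);
        }
        String format(long millis) { return formatter.format(Instant.ofEpochMilli(millis)); }
    }

    public static void main(String[] args) {
        long millis = 1420243200000L;
        // A request that wants +02:00 mutates the shared constant ...
        MutableDateFormatter.DEFAULT.setZone(ZoneId.of("+02:00"));
        // ... and a concurrent request that expected UTC now prints a shifted key.
        System.out.println("mutable DEFAULT:   " + MutableDateFormatter.DEFAULT.format(millis));
        // Per-request formatters built with their own zone leave the default untouched.
        System.out.println("immutable +02:00:  " + new ImmutableDateFormatter(ZoneId.of("+02:00")).format(millis));
        System.out.println("immutable DEFAULT: " + ImmutableDateFormatter.DEFAULT.format(millis));
    }
}
```

With the immutable shape, the `time_zone` parameter can be parsed wherever the format is resolved and simply passed into the constructor, which is essentially what the PR does by moving the parsing into ValuesSourceParser.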
{
"body": "Have 4 nodes, 3 are sharing the node attribute B, and 1 is using the node attribute A.\n\nPer node specification (https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html#cluster-nodes), the following works: \n\nThe following returns 3 nodes:\n\n```\ncurl -XGET \"http://localhost:9200/_nodes/pod:B?pretty\"\n```\n\nThe following returns 1 node:\n\n```\ncurl -XGET \"http://localhost:9200/_nodes/pod:A?pretty\"\n```\n\nThe following returns 4 nodes:\n\n```\ncurl -XGET \"http://localhost:9200/_nodes/pod:B,pod:A?pretty\"\n```\n\nHowever, the `_only_nodes` specification for search preference does not work (https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-preference.html#search-request-preference):\n\n```\ncurl -XGET \"http://localhost:9200/only_node_test/_search?preference=_only_nodes:pod:B,pod:A\"\n\n{\n \"error\": \"IllegalArgumentException[No data node with critera [pod:B,pod:A] found]\",\n \"status\": 500\n}\n```\n\nWhat is the right syntax for the argument to use multiple node attributes as part of _only_nodes search preference?\n",
"comments": [
{
"body": "Seems like this affects all multiple forms of the specification, for example, the following multiple-node-id spec returns 3 nodes:\n\n```\ncurl -XGET \"http://localhost:9200/_nodes/Kbqa_XmMTIOlMD2iMDivRg,y4cT5bIRSliW0_UUs8OXkA,ebDNoVsMQSWdFa-UA66uTw?pretty\"\n```\n\nBut when used with _only_nodes search preference, it also doesn't accept it:\n\n```\ncurl -XGET \"http://localhost:9200/only_node_test/_search?preference=_only_nodes:Kbqa_XmMTIOlMD2iMDivRg,y4cT5bIRSliW0_UUs8OXkA,ebDNoVsMQSWdFa-UA66uTw\"\n\n{\n \"error\": \"IllegalArgumentException[No data node with critera [Kbqa_XmMTIOlMD2iMDivRg,y4cT5bIRSliW0_UUs8OXkA,ebDNoVsMQSWdFa-UA66uTw] found]\",\n \"status\": 500\n}\n```\n",
"created_at": "2015-08-06T17:54:20Z"
},
{
"body": "Can we also fix this in the 2.x branch?\n",
"created_at": "2016-05-10T17:36:32Z"
}
],
"number": 12700,
"title": "Multiple node spec does not work with _only_nodes search preference"
} | {
"body": "_only_node never used to shuffle shards - was under the assumption that its done at much higher level in code . Added shuffle() to distribute traffic ; Also modified exception to show which shard is not available for specified node.\n\nCloses https://github.com/elastic/elasticsearch/issues/12546\nCloses #12700\n",
"number": 12575,
"review_comments": [
{
"body": "Do you think that `nodeAttributes` would be a better name now?\n",
"created_at": "2015-09-01T09:14:10Z"
},
{
"body": "Should the `fail` message make it clear that we are now expecting an `ElasticsearchIllegalArgumentException`?\n",
"created_at": "2015-09-01T09:16:36Z"
},
{
"body": "Would it be okay if we removed the whitespace only changes from this pull request?\n",
"created_at": "2015-09-01T09:17:33Z"
},
{
"body": "Same comment here regarding whitespace only changes.\n",
"created_at": "2015-09-01T09:17:56Z"
},
{
"body": "Same comment here regarding whitespace only changes.\n",
"created_at": "2015-09-01T09:18:07Z"
},
{
"body": "Also here regarding whitespace only changes.\n",
"created_at": "2015-09-01T09:18:17Z"
},
{
"body": "This is another whitespace only change.\n",
"created_at": "2015-09-01T09:18:35Z"
},
{
"body": "Also here regarding whitespace.\n",
"created_at": "2015-09-01T09:18:50Z"
},
{
"body": "Is the name of this test missing detail after `OnlyNodes`?\n",
"created_at": "2015-09-01T10:11:00Z"
},
{
"body": "Same question here regarding the name of the test.\n",
"created_at": "2015-09-01T10:11:49Z"
},
{
"body": "And same question here regarding the name of the test.\n",
"created_at": "2015-09-01T10:12:11Z"
}
],
"title": "Shuffle shards for _only_nodes + support multiple specifications like cluster API "
} | {
"commits": [
{
"message": "Shuffle shards for _only_nodes and add multi spec support\n\n- Makes sure nodes are shuffled when _only_nodes is used\n- support multiple node specs to be consistent with cluster api\n- Adds tests and validtion for possible IndexOutOfRange exception\n- Added Shard details in exception message"
}
],
"files": [
{
"diff": "@@ -23,8 +23,10 @@\n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Sets;\n import com.google.common.collect.UnmodifiableIterator;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -373,22 +375,24 @@ public ShardIterator onlyNodeActiveInitializingShardsIt(String nodeId) {\n * @param nodeAttribute\n * @param discoveryNodes\n */\n- public ShardIterator onlyNodeSelectorActiveInitializingShardsIt(String nodeAttribute, DiscoveryNodes discoveryNodes) {\n+ public ShardIterator onlyNodeSelectorActiveInitializingShardsIt(String[] nodeAttribute, DiscoveryNodes discoveryNodes) {\n ArrayList<ShardRouting> ordered = new ArrayList<>(activeShards.size() + allInitializingShards.size());\n Set<String> selectedNodes = Sets.newHashSet(discoveryNodes.resolveNodesIds(nodeAttribute));\n \n- for (ShardRouting shardRouting : activeShards) {\n+ int seed = shuffler.nextSeed();\n+ for (ShardRouting shardRouting : shuffler.shuffle(activeShards,seed)) {\n if (selectedNodes.contains(shardRouting.currentNodeId())) {\n ordered.add(shardRouting);\n }\n }\n- for (ShardRouting shardRouting : allInitializingShards) {\n+ for (ShardRouting shardRouting : shuffler.shuffle(allInitializingShards,seed)) {\n if (selectedNodes.contains(shardRouting.currentNodeId())) {\n ordered.add(shardRouting);\n }\n }\n+\n if (ordered.isEmpty()) {\n- throw new IllegalArgumentException(\"No data node with critera [\" + nodeAttribute + \"] found\");\n+ throw new ElasticsearchIllegalArgumentException(\"No data nodes with critera(s) [\" + Strings.arrayToCommaDelimitedString(nodeAttribute) + \"] found for shard:\" + shardId());\n }\n return new PlainShardIterator(shardId, ordered);\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java",
"status": "modified"
},
{
"diff": "@@ -202,6 +202,11 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index\n }\n }\n preferenceType = Preference.parse(preference);\n+ if ( preferenceType == Preference.PREFER_NODE || preferenceType == Preference.ONLY_NODES || preferenceType == Preference.ONLY_NODE){\n+ if ( preference.length() <= preferenceType.type().length()){\n+ throw new ElasticsearchIllegalArgumentException(\"invalid preference specification [\" + preference + \"]\");\n+ }\n+ }\n switch (preferenceType) {\n case PREFER_NODE:\n return indexShard.preferNodeActiveInitializingShardsIt(preference.substring(Preference.PREFER_NODE.type().length() + 1));\n@@ -219,7 +224,7 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index\n return indexShard.onlyNodeActiveInitializingShardsIt(nodeId);\n case ONLY_NODES:\n String nodeAttribute = preference.substring(Preference.ONLY_NODES.type().length() + 1);\n- return indexShard.onlyNodeSelectorActiveInitializingShardsIt(nodeAttribute, nodes);\n+ return indexShard.onlyNodeSelectorActiveInitializingShardsIt(nodeAttribute.split(\",\"), nodes);\n \n default:\n throw new ElasticsearchIllegalArgumentException(\"unknown preference [\" + preferenceType + \"]\");",
"filename": "src/main/java/org/elasticsearch/cluster/routing/operation/plain/PlainOperationRouting.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -291,30 +292,33 @@ public void testNodeSelectorRouting(){\n routingTable = strategy.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING)).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n \n- ShardsIterator shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(\"disk:ebs\",clusterState.nodes());\n+ ShardsIterator shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(new String[] {\"disk:ebs\"} ,clusterState.nodes());\n assertThat(shardsIterator.size(), equalTo(1));\n assertThat(shardsIterator.nextOrNull().currentNodeId(),equalTo(\"node1\"));\n \n- shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(\"dis*:eph*\",clusterState.nodes());\n+ shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(new String[] {\"dis*:eph*\"},clusterState.nodes());\n assertThat(shardsIterator.size(), equalTo(1));\n assertThat(shardsIterator.nextOrNull().currentNodeId(),equalTo(\"node2\"));\n \n- shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(\"fred\",clusterState.nodes());\n+ shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(new String[] {\"dis*:ebs*\",\"di*:eph*\"},clusterState.nodes());\n+ assertThat(shardsIterator.size(), equalTo(2));\n+\n+ shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(new String[] {\"fred\"} ,clusterState.nodes());\n assertThat(shardsIterator.size(), equalTo(1));\n assertThat(shardsIterator.nextOrNull().currentNodeId(),equalTo(\"node1\"));\n \n- shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(\"bar*\",clusterState.nodes());\n+ shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(new String[] {\"bar*\"},clusterState.nodes());\n assertThat(shardsIterator.size(), equalTo(1));\n assertThat(shardsIterator.nextOrNull().currentNodeId(),equalTo(\"node2\"));\n \n try {\n- shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(\"welma\", clusterState.nodes());\n+ shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(new String[] {\"welma\"}, clusterState.nodes());\n fail(\"shouldve raised illegalArgumentException\");\n- } catch (IllegalArgumentException illegal) {\n+ } catch (ElasticsearchIllegalArgumentException illegal) {\n //expected exception\n }\n \n- shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(\"fred\",clusterState.nodes());\n+ shardsIterator = clusterState.routingTable().index(\"test\").shard(0).onlyNodeSelectorActiveInitializingShardsIt(new String[] {\"fred\"},clusterState.nodes());\n assertThat(shardsIterator.size(), 
equalTo(1));\n assertThat(shardsIterator.nextOrNull().currentNodeId(),equalTo(\"node1\"));\n }",
"filename": "src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java",
"status": "modified"
},
{
"diff": "@@ -21,15 +21,21 @@\n \n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n+import org.elasticsearch.action.admin.cluster.node.info.NodesInfoRequest;\n+import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;\n+import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;\n+import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.routing.operation.plain.Preference;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n@@ -42,24 +48,24 @@\n public class SearchPreferenceTests extends ElasticsearchIntegrationTest {\n \n @Test\n- public void testThatAllPreferencesAreParsedToValid(){\n+ public void testThatAllPreferencesAreParsedToValid() {\n //list of all enums and their strings as reference\n- assertThat(Preference.parse(\"_shards\"),equalTo(Preference.SHARDS));\n- assertThat(Preference.parse(\"_prefer_node\"),equalTo(Preference.PREFER_NODE));\n- assertThat(Preference.parse(\"_local\"),equalTo(Preference.LOCAL));\n- assertThat(Preference.parse(\"_primary\"),equalTo(Preference.PRIMARY));\n- assertThat(Preference.parse(\"_primary_first\"),equalTo(Preference.PRIMARY_FIRST));\n- assertThat(Preference.parse(\"_only_local\"),equalTo(Preference.ONLY_LOCAL));\n- assertThat(Preference.parse(\"_only_node\"),equalTo(Preference.ONLY_NODE));\n+ assertThat(Preference.parse(\"_shards\"), equalTo(Preference.SHARDS));\n+ assertThat(Preference.parse(\"_prefer_node\"), equalTo(Preference.PREFER_NODE));\n+ assertThat(Preference.parse(\"_local\"), equalTo(Preference.LOCAL));\n+ assertThat(Preference.parse(\"_primary\"), equalTo(Preference.PRIMARY));\n+ assertThat(Preference.parse(\"_primary_first\"), equalTo(Preference.PRIMARY_FIRST));\n+ assertThat(Preference.parse(\"_only_local\"), equalTo(Preference.ONLY_LOCAL));\n+ assertThat(Preference.parse(\"_only_node\"), equalTo(Preference.ONLY_NODE));\n assertThat(Preference.parse(\"_only_nodes\"), equalTo(Preference.ONLY_NODES));\n }\n \n @Test // see #2896\n public void testStopOneNodePreferenceWithRedState() throws InterruptedException, IOException {\n- assertAcked(prepareCreate(\"test\").setSettings(settingsBuilder().put(\"index.number_of_shards\", cluster().numDataNodes()+2).put(\"index.number_of_replicas\", 0)));\n+ assertAcked(prepareCreate(\"test\").setSettings(settingsBuilder().put(\"index.number_of_shards\", cluster().numDataNodes() + 2).put(\"index.number_of_replicas\", 0)));\n ensureGreen();\n for (int i = 0; i < 10; i++) {\n- client().prepareIndex(\"test\", \"type1\", \"\"+i).setSource(\"field1\", \"value1\").execute().actionGet();\n+ client().prepareIndex(\"test\", \"type1\", \"\" + i).setSource(\"field1\", \"value1\").execute().actionGet();\n }\n refresh();\n internalCluster().stopRandomDataNode();\n@@ -91,7 +97,7 @@ public void noPreferenceRandom() throws Exception {\n settingsBuilder().put(indexSettings()).put(SETTING_NUMBER_OF_REPLICAS, between(1, 
maximumNumberOfReplicas()))\n ));\n ensureGreen();\n- \n+\n client().prepareIndex(\"test\", \"type1\").setSource(\"field1\", \"value1\").execute().actionGet();\n refresh();\n \n@@ -104,6 +110,77 @@ public void noPreferenceRandom() throws Exception {\n assertThat(firstNodeId, not(equalTo(secondNodeId)));\n }\n \n+ @Test\n+ public void nodesOnlyPreferenceRandom() throws Exception {\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ //this test needs at least a replica to make sure two consecutive searches go to two different copies of the same data\n+ settingsBuilder().put(indexSettings()).put(SETTING_NUMBER_OF_REPLICAS, between(1, maximumNumberOfReplicas()))\n+ ));\n+ ensureGreen();\n+\n+ client().prepareIndex(\"test\", \"type1\").setSource(\"field1\", \"value1\").execute().actionGet();\n+ refresh();\n+ final Client client = internalCluster().smartClient();\n+ SearchResponse searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(\"_only_nodes:*,nodes*\").execute().actionGet(); // multiple wildchar to cover multi-param usecase\n+ String firstNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(\"_only_nodes:*,nodes*\").execute().actionGet();\n+ String secondNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ assertThat(firstNodeId, not(equalTo(secondNodeId)));\n+\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(\"_only_nodes:*\").execute().actionGet(); \n+ firstNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(\"_only_nodes:*\").execute().actionGet();\n+ secondNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ assertThat(firstNodeId, not(equalTo(secondNodeId)));\n+\n+ ArrayList<String> allNodeIds = new ArrayList<>();\n+ ArrayList<String> allNodeNames = new ArrayList<>();\n+ ArrayList<String> allNodeHosts = new ArrayList<>();\n+ NodesStatsResponse nodeStats = client().admin().cluster().prepareNodesStats().execute().actionGet();\n+ for (NodeStats node : nodeStats.getNodes()) {\n+ allNodeIds.add(node.getNode().getId());\n+ allNodeNames.add(node.getNode().getName());\n+ allNodeHosts.add(node.getHostname());\n+ }\n+\n+ String node_expr = \"_only_nodes:\" + Strings.arrayToCommaDelimitedString(allNodeIds.toArray());\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ firstNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ secondNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ assertThat(firstNodeId, not(equalTo(secondNodeId)));\n+\n+ node_expr = \"_only_nodes:\" + Strings.arrayToCommaDelimitedString(allNodeNames.toArray());\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ firstNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ secondNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ assertThat(firstNodeId, not(equalTo(secondNodeId)));\n+\n+ node_expr = \"_only_nodes:\" + Strings.arrayToCommaDelimitedString(allNodeHosts.toArray());\n+ searchResponse 
= client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ firstNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ secondNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ assertThat(firstNodeId, not(equalTo(secondNodeId)));\n+\n+ node_expr = \"_only_nodes:\" + Strings.arrayToCommaDelimitedString(allNodeHosts.toArray());\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ firstNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ secondNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ assertThat(firstNodeId, not(equalTo(secondNodeId)));\n+\n+ //Mix of valid and invalid nodes\n+ node_expr = \"_only_nodes:*,invalidnode\";\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ firstNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ searchResponse = client.prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(node_expr).execute().actionGet();\n+ secondNodeId = searchResponse.getHits().getAt(0).shard().nodeId();\n+ assertThat(firstNodeId, not(equalTo(secondNodeId)));\n+\n+ }\n+\n @Test\n public void simplePreferenceTests() throws Exception {\n createIndex(\"test\");\n@@ -118,7 +195,7 @@ public void simplePreferenceTests() throws Exception {\n shardsRange.append(\",\").append(i);\n }\n \n- String[] preferences = new String[]{\"1234\", \"_primary\", \"_local\", shardsRange.toString(), \"_primary_first\",\"_only_nodes:*\"};\n+ String[] preferences = new String[]{\"1234\", \"_primary\", \"_local\", shardsRange.toString(), \"_primary_first\", \"_only_nodes:*\"};\n for (String pref : preferences) {\n SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(matchAllQuery()).setPreference(pref).execute().actionGet();\n assertHitCount(searchResponse, 1);\n@@ -127,11 +204,39 @@ public void simplePreferenceTests() throws Exception {\n }\n }\n \n- @Test (expected = ElasticsearchIllegalArgumentException.class)\n+ @Test(expected = ElasticsearchIllegalArgumentException.class)\n public void testThatSpecifyingNonExistingNodesReturnsUsefulError() throws Exception {\n createIndex(\"test\");\n ensureGreen();\n-\n client().prepareSearch().setQuery(matchAllQuery()).setPreference(\"_only_node:DOES-NOT-EXIST\").execute().actionGet();\n }\n+\n+ @Test(expected = ElasticsearchIllegalArgumentException.class)\n+ public void testThatSpecifyingNoNodesReturnsUsefulError() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ client().prepareSearch().setQuery(matchAllQuery()).setPreference(\"_only_nodes:\").execute().actionGet();\n+ }\n+\n+ @Test(expected = ElasticsearchIllegalArgumentException.class)\n+ public void testThatSpecifyingInvalidNodeSpecForOnlyNodes() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ client().prepareSearch().setQuery(matchAllQuery()).setPreference(\"_only_nodes\").execute().actionGet();\n+ }\n+\n+ @Test(expected = ElasticsearchIllegalArgumentException.class)\n+ public void testThatSpecifyingInvalidNodeSpecForOnlyNode() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ 
client().prepareSearch().setQuery(matchAllQuery()).setPreference(\"_only_node\").execute().actionGet();\n+ }\n+\n+ @Test(expected = ElasticsearchIllegalArgumentException.class)\n+ public void testThatSpecifyingInvalidNodeSpecForPreferNodes() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ client().prepareSearch().setQuery(matchAllQuery()).setPreference(\"_prefer_node\").execute().actionGet();\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/preference/SearchPreferenceTests.java",
"status": "modified"
}
]
} |
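The `_only_nodes` record above comes down to two behaviors: accept a comma-separated list of node specifications (ids, names, wildcards, or attribute:value pairs, as the cluster API does) and shuffle the matching shard copies so repeated searches spread across them. Below is a minimal standalone sketch of that resolution, assuming a simplified `ShardCopy` model, exact attribute keys, and a naive wildcard matcher; it is not the `IndexShardRoutingTable` implementation.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class OnlyNodesPreferenceSketch {

    // Hypothetical shard copy: the node that currently holds it plus that node's attributes.
    record ShardCopy(String nodeId, Map<String, String> nodeAttributes) {}

    // Naive glob matcher ("dis*" style), enough for the sketch.
    static boolean wildcardMatch(String pattern, String value) {
        return value.matches(pattern.replace("*", ".*"));
    }

    // Resolves a "_only_nodes:spec1,spec2,..." preference against a shard's copies,
    // shuffling first so that repeated searches are spread over the matching nodes.
    static List<ShardCopy> onlyNodes(String preference, List<ShardCopy> copies, Random random) {
        String prefix = "_only_nodes:";
        if (!preference.startsWith(prefix) || preference.length() <= prefix.length()) {
            throw new IllegalArgumentException("invalid preference specification [" + preference + "]");
        }
        String[] specs = preference.substring(prefix.length()).split(",");
        List<ShardCopy> shuffled = new ArrayList<>(copies);
        Collections.shuffle(shuffled, random);
        List<ShardCopy> selected = new ArrayList<>();
        for (ShardCopy copy : shuffled) {
            for (String spec : specs) {
                boolean matches;
                int colon = spec.indexOf(':');
                if (colon >= 0) { // attribute:value specification
                    String attrValue = copy.nodeAttributes().get(spec.substring(0, colon));
                    matches = attrValue != null && wildcardMatch(spec.substring(colon + 1), attrValue);
                } else {          // node id (or name) specification
                    matches = wildcardMatch(spec, copy.nodeId());
                }
                if (matches) {
                    selected.add(copy);
                    break;
                }
            }
        }
        if (selected.isEmpty()) {
            throw new IllegalArgumentException("no data nodes matched [" + preference + "]");
        }
        return selected;
    }

    public static void main(String[] args) {
        List<ShardCopy> copies = List.of(
                new ShardCopy("node1", Map.of("pod", "A")),
                new ShardCopy("node2", Map.of("pod", "B")),
                new ShardCopy("node3", Map.of("pod", "B")));
        // Multiple specifications, as in the issue's "_only_nodes:pod:B,pod:A" example.
        System.out.println(onlyNodes("_only_nodes:pod:B,pod:A", copies, new Random()));
    }
}
```

Shuffling before filtering is what keeps consecutive searches from always hitting the same copy, and rejecting an empty specification list mirrors the extra validation the PR adds for `_only_nodes:` with nothing after the colon.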
{
"body": "Previously when RoutingService checks for delayed shards, it can see\nshards that are delayed, but are past their delay time so the logged\noutput looks like:\n\n```\ndelaying allocation for [0] unassigned shards, next check in [0s]\n```\n\nThis change allows shards that have passed their delay to be counted\ncorrectly for the logging. Additionally, it places a 5 second minimum\ndelay between scheduled reroutes to try to minimize the number of\nreroutes run.\n\nThis also adds a test that creates a large number of unassigned delayed\nshards and ensures that they are rerouted even if a single reroute does\nnot allocated all shards (due to a low concurrent_recoveries setting).\n\nResolves #12456 \n\n(This PR is against 1.7 and will be forward-ported)\n",
"comments": [
{
"body": "LGTM. Left a non blocking question.\n",
"created_at": "2015-07-28T19:01:38Z"
},
{
"body": "LGTM\n",
"created_at": "2015-07-28T22:04:57Z"
},
{
"body": "Closing this, handled a different way in #12532 \n",
"created_at": "2015-07-29T15:16:42Z"
}
],
"number": 12515,
"title": "Fix messaging about delayed allocation"
} | {
"body": "In order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456\n\nThis is a PR against 1.7 and will be forward-ported\n",
"number": 12532,
"review_comments": [
{
"body": "Would it be useful to have 0 have the special \"I've not even tried yet\" meaning? I think setting it to now is kind-of funky because you aren't actually doing an allocation when you build the object.\n",
"created_at": "2015-07-29T17:00:42Z"
},
{
"body": "Yikes! Is that really 2 minutes of 1 cpu running 100%?\n",
"created_at": "2015-07-29T17:02:33Z"
},
{
"body": "I could set this to 0, but either way, gateway allocator should always run at least once before these does. So this is just because I wanted a non-null value.\n",
"created_at": "2015-07-29T18:49:07Z"
},
{
"body": "Some of our CI nodes are reeeeeaaaalllly slow, and since this creates 25 indices with 10 total shards each, I just wanted to make sure it doesn't time out. I do not expect it to take 2 minutes regardless (it's much faster for me). Another reason this test in annotated with `@Slow`\n",
"created_at": "2015-07-29T18:50:14Z"
},
{
"body": "can we call this timestamp? I think the \"lastXXX\" part depends on the context of calling it. If we do, would be good to rename the other ones in this change.\n",
"created_at": "2015-08-04T14:17:54Z"
},
{
"body": "can we configure the delayed allocation to not be the default (`1m`) but something high enough to trigger what we are trying to fix, like `200ms`? This will speed up the test.\n",
"created_at": "2015-08-04T14:21:42Z"
},
{
"body": "Sure, I already randomize it between 4 and 15 seconds (since it should find the lowest delay setting), but I can lower it even more.\n",
"created_at": "2015-08-04T14:22:52Z"
},
{
"body": "we probably want to rest here as well: `registeredNextDelaySetting = Long.MAX_VALUE;`\n",
"created_at": "2015-08-04T14:38:53Z"
},
{
"body": "Yes good idea, otherwise it assumes one was running.\n",
"created_at": "2015-08-04T18:31:38Z"
},
{
"body": "Sure, I'll rename to `unassignedShardsAllocatedTimestamp`\n",
"created_at": "2015-08-04T18:32:09Z"
}
],
"title": "Avoid extra reroutes of delayed shards in RoutingService"
} | {
"commits": [
{
"message": "Avoid extra reroutes of delayed shards in RoutingService\n\nIn order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456"
}
],
"files": [
{
"diff": "@@ -272,11 +272,15 @@ private ClusterHealthResponse clusterHealth(ClusterHealthRequest request, Cluste\n concreteIndices = clusterState.metaData().concreteIndices(request.indicesOptions(), request.indices());\n } catch (IndexMissingException e) {\n // one of the specified indices is not there - treat it as RED.\n- ClusterHealthResponse response = new ClusterHealthResponse(clusterName.value(), Strings.EMPTY_ARRAY, clusterState, numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState));\n+ ClusterHealthResponse response = new ClusterHealthResponse(clusterName.value(),\n+ Strings.EMPTY_ARRAY, clusterState, numberOfPendingTasks, numberOfInFlightFetch,\n+ UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState));\n response.status = ClusterHealthStatus.RED;\n return response;\n }\n \n- return new ClusterHealthResponse(clusterName.value(), concreteIndices, clusterState, numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState));\n+ return new ClusterHealthResponse(clusterName.value(),\n+ concreteIndices, clusterState, numberOfPendingTasks, numberOfInFlightFetch,\n+ UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState));\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java",
"status": "modified"
},
{
"diff": "@@ -57,6 +57,7 @@ public class RoutingService extends AbstractLifecycleComponent<RoutingService> i\n private AtomicBoolean rerouting = new AtomicBoolean();\n private volatile long registeredNextDelaySetting = Long.MAX_VALUE;\n private volatile ScheduledFuture registeredNextDelayFuture;\n+ private volatile long unassignedShardsAllocatedTimestamp = 0;\n \n @Inject\n public RoutingService(Settings settings, ThreadPool threadPool, ClusterService clusterService, AllocationService allocationService) {\n@@ -87,6 +88,19 @@ public AllocationService getAllocationService() {\n return this.allocationService;\n }\n \n+ /**\n+ * Update the last time the allocator tried to assign unassigned shards\n+ *\n+ * This is used so that both the GatewayAllocator and RoutingService use a\n+ * consistent timestamp for comparing which shards have been delayed to\n+ * avoid a race condition where GatewayAllocator thinks the shard should\n+ * be delayed and the RoutingService thinks it has already passed the delay\n+ * and that the GatewayAllocator has/will handle it.\n+ */\n+ public void setUnassignedShardsAllocatedTimestamp(long timeInMillis) {\n+ this.unassignedShardsAllocatedTimestamp = timeInMillis;\n+ }\n+\n /**\n * Initiates a reroute.\n */\n@@ -108,20 +122,29 @@ public void clusterChanged(ClusterChangedEvent event) {\n if (nextDelaySetting > 0 && nextDelaySetting < registeredNextDelaySetting) {\n FutureUtils.cancel(registeredNextDelayFuture);\n registeredNextDelaySetting = nextDelaySetting;\n- TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(settings, event.state()));\n- logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\", UnassignedInfo.getNumberOfDelayedUnassigned(settings, event.state()), nextDelay);\n- registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n- @Override\n- protected void doRun() throws Exception {\n- registeredNextDelaySetting = Long.MAX_VALUE;\n- reroute(\"assign delayed unassigned shards\");\n- }\n-\n- @Override\n- public void onFailure(Throwable t) {\n- logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n- }\n- });\n+ // We use System.currentTimeMillis here because we want the\n+ // next delay from the \"now\" perspective, rather than the\n+ // delay from the last time the GatewayAllocator tried to\n+ // assign/delay the shard\n+ TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(), settings, event.state()));\n+ int unassignedDelayedShards = UnassignedInfo.getNumberOfDelayedUnassigned(unassignedShardsAllocatedTimestamp, settings, event.state());\n+ if (unassignedDelayedShards > 0) {\n+ logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\",\n+ unassignedDelayedShards, nextDelay);\n+ registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n+ @Override\n+ protected void doRun() throws Exception {\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ reroute(\"assign delayed unassigned shards\");\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ }\n+ });\n+ }\n } else {\n logger.trace(\"no need to schedule reroute due to delayed unassigned, next_delay_setting [{}], registered [{}]\", nextDelaySetting, registeredNextDelaySetting);\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingService.java",
"status": "modified"
},
{
"diff": "@@ -167,12 +167,12 @@ public long getAllocationDelayTimeoutSetting(Settings settings, Settings indexSe\n /**\n * The time in millisecond until this unassigned shard can be reassigned.\n */\n- public long getDelayAllocationExpirationIn(Settings settings, Settings indexSettings) {\n+ public long getDelayAllocationExpirationIn(long unassignedShardsAllocatedTimestamp, Settings settings, Settings indexSettings) {\n long delayTimeout = getAllocationDelayTimeoutSetting(settings, indexSettings);\n if (delayTimeout == 0) {\n return 0;\n }\n- long delta = System.currentTimeMillis() - timestamp;\n+ long delta = unassignedShardsAllocatedTimestamp - timestamp;\n // account for time drift, treat it as no timeout\n if (delta < 0) {\n return 0;\n@@ -184,12 +184,12 @@ public long getDelayAllocationExpirationIn(Settings settings, Settings indexSett\n /**\n * Returns the number of shards that are unassigned and currently being delayed.\n */\n- public static int getNumberOfDelayedUnassigned(Settings settings, ClusterState state) {\n+ public static int getNumberOfDelayedUnassigned(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n int count = 0;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (delay > 0) {\n count++;\n }\n@@ -219,12 +219,12 @@ public static long findSmallestDelayedAllocationSetting(Settings settings, Clust\n /**\n * Finds the next (closest) delay expiration of an unassigned shard. Returns 0 if there are none.\n */\n- public static long findNextDelayedAllocationIn(Settings settings, ClusterState state) {\n+ public static long findNextDelayedAllocationIn(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n long nextDelay = Long.MAX_VALUE;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (nextShardDelay > 0 && nextShardDelay < nextDelay) {\n nextDelay = nextShardDelay;\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java",
"status": "modified"
},
{
"diff": "@@ -145,6 +145,11 @@ private boolean recoverOnAnyNode(@IndexSettings Settings idxSettings) {\n \n @Override\n public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ // Take a snapshot of the current time and tell the RoutingService\n+ // about it, so it will use a consistent timestamp for delays\n+ long lastAllocateUnassignedRun = System.currentTimeMillis();\n+ this.routingService.setUnassignedShardsAllocatedTimestamp(lastAllocateUnassignedRun);\n+\n boolean changed = false;\n DiscoveryNodes nodes = allocation.nodes();\n RoutingNodes routingNodes = allocation.routingNodes();\n@@ -526,7 +531,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // note: we only care about replica in delayed allocation, since if we have an unassigned primary it\n // will anyhow wait to find an existing copy of the shard to be allocated\n // note: the other side of the equation is scheduling a reroute in a timely manner, which happens in the RoutingService\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(lastAllocateUnassignedRun, settings, indexMetaData.getSettings());\n if (delay > 0) {\n logger.debug(\"[{}][{}]: delaying allocation of [{}] for [{}]\", shard.index(), shard.id(), shard, TimeValue.timeValueMillis(delay));\n /**",
"filename": "src/main/java/org/elasticsearch/gateway/local/LocalGatewayAllocator.java",
"status": "modified"
},
{
"diff": "@@ -19,11 +19,13 @@\n \n package org.elasticsearch.cluster.allocation;\n \n+import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.RoutingService;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.RerouteExplanation;\n@@ -33,15 +35,19 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.cluster.routing.allocation.decider.DisableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.InternalTestCluster;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n import java.io.File;\n@@ -162,6 +168,45 @@ public void rerouteWithAllocateLocalGateway_enableAllocationSettings() throws Ex\n rerouteWithAllocateLocalGateway(commonSettings);\n }\n \n+ /**\n+ * Test that we don't miss any reroutes when concurrent_recoveries\n+ * is set very low and there are a large number of unassigned shards.\n+ */\n+ @Test\n+ @LuceneTestCase.Slow\n+ public void testDelayWithALargeAmountOfShards() throws Exception {\n+ Settings commonSettings = settingsBuilder()\n+ .put(\"gateway.type\", \"local\")\n+ .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_CONCURRENT_RECOVERIES, 1)\n+ .build();\n+ logger.info(\"--> starting 4 nodes\");\n+ String node_1 = internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+\n+ assertThat(cluster().size(), equalTo(4));\n+ ClusterHealthResponse healthResponse = client().admin().cluster().prepareHealth().setWaitForNodes(\"4\").execute().actionGet();\n+ assertThat(healthResponse.isTimedOut(), equalTo(false));\n+\n+ logger.info(\"--> create indices\");\n+ for (int i = 0; i < 25; i++) {\n+ client().admin().indices().prepareCreate(\"test\" + i)\n+ .setSettings(settingsBuilder()\n+ .put(\"index.number_of_shards\", 5).put(\"index.number_of_replicas\", 1)\n+ .put(\"index.unassigned.node_left.delayed_timeout\", randomIntBetween(250, 1000) + \"ms\"))\n+ .execute().actionGet();\n+ }\n+\n+ ensureGreen(TimeValue.timeValueMinutes(1));\n+\n+ logger.info(\"--> stopping node1\");\n+ internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_1));\n+\n+ // This might run slowly on older hardware\n+ ensureGreen(TimeValue.timeValueMinutes(2));\n+ }\n+\n private void 
rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exception {\n logger.info(\"--> starting 2 nodes\");\n String node_1 = internalCluster().startNode(commonSettings);",
"filename": "src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteTests.java",
"status": "modified"
},
{
"diff": "@@ -27,7 +27,9 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n import org.junit.Before;\n@@ -91,6 +93,7 @@ public void testNoDelayedUnassigned() throws Exception {\n }\n \n @Test\n+ @TestLogging(\"_root:DEBUG\")\n public void testDelayedUnassignedScheduleReroute() throws Exception {\n AllocationService allocation = createAllocationService();\n MetaData metaData = MetaData.builder()\n@@ -111,6 +114,10 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // We need to update the routing service's last attempted run to\n+ // signal that the GatewayAllocator tried to allocated it but\n+ // it was delayed\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis());\n ClusterState newState = clusterState;\n \n routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n@@ -124,6 +131,44 @@ public void run() {\n assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(Long.MAX_VALUE));\n }\n \n+ @Test\n+ public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Exception {\n+ AllocationService allocation = createAllocationService();\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"100ms\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\"))).build();\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // starting primaries\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ // starting replicas\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ // remove node2 and reroute\n+ ClusterState prevState = clusterState;\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // Set it in the future so the delay will be negative\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis() + TimeValue.timeValueMinutes(1).millis());\n+\n+ 
ClusterState newState = clusterState;\n+\n+ routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ assertThat(routingService.hasReroutedAndClear(), equalTo(false));\n+\n+ // verify the registration has been updated\n+ assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(100L));\n+ }\n+ });\n+ }\n+\n private class TestRoutingService extends RoutingService {\n \n private AtomicBoolean rerouted = new AtomicBoolean();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -269,7 +269,7 @@ public void testUnassignedDelayedOnlyOnNodeLeft() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- long delay = unassignedInfo.getDelayAllocationExpirationIn(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n+ long delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n assertThat(delay, greaterThan(0l));\n assertThat(delay, lessThan(TimeValue.timeValueHours(10).millis()));\n }\n@@ -286,7 +286,7 @@ public void testUnassignedDelayOnlyNodeLeftNonNodeLeftReason() throws Exception\n UnassignedInfo unassignedInfo = new UnassignedInfo(RandomPicks.randomFrom(getRandom(), reasons), null);\n long delay = unassignedInfo.getAllocationDelayTimeoutSetting(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n assertThat(delay, equalTo(0l));\n- delay = unassignedInfo.getDelayAllocationExpirationIn(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n+ delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n assertThat(delay, equalTo(0l));\n }\n \n@@ -302,7 +302,7 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -311,7 +311,7 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n+ assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n }\n \n @Test\n@@ -326,7 +326,7 @@ public void 
testFindNextDelayedAllocation() {\n .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -339,7 +339,7 @@ public void testFindNextDelayedAllocation() {\n long nextDelaySetting = UnassignedInfo.findSmallestDelayedAllocationSetting(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelaySetting, equalTo(TimeValue.timeValueHours(10).millis()));\n \n- long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n+ long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelay, greaterThan(TimeValue.timeValueHours(9).millis()));\n assertThat(nextDelay, lessThanOrEqualTo(TimeValue.timeValueHours(10).millis()));\n }",
"filename": "src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java",
"status": "modified"
}
]
} |
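The PR in the row above hinges on one small pattern: GatewayAllocator captures `System.currentTimeMillis()` once per allocation run and hands that same value to RoutingService, so both components evaluate a shard's remaining delay against the same "now". The sketch below is a minimal, self-contained illustration of that idea; the class and method names are invented for the example and are not the actual Elasticsearch types.

```java
// Minimal sketch (assumed names, not the real Elasticsearch classes) of the
// consistent-timestamp pattern: take one timestamp per allocation round and feed
// the same value to every delay-expiration check, so the allocator and the
// reroute scheduler cannot disagree about whether a shard's delay has passed.
public class ConsistentDelaySketch {

    /** Mirrors the idea of UnassignedInfo.getDelayAllocationExpirationIn(...). */
    static long delayExpirationIn(long nowMillis, long unassignedAtMillis, long delayTimeoutMillis) {
        if (delayTimeoutMillis == 0) {
            return 0; // no delayed allocation configured for this index
        }
        long elapsed = nowMillis - unassignedAtMillis;
        if (elapsed < 0) {
            return 0; // clock drift: treat as expired instead of waiting far into the future
        }
        return delayTimeoutMillis - elapsed; // <= 0 once the delay has passed
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();   // captured once per allocation round
        long unassignedAt = now - 45_000;        // shard became unassigned 45s ago
        long delayTimeout = 60_000;              // e.g. index.unassigned.node_left.delayed_timeout: 1m

        long remaining = delayExpirationIn(now, unassignedAt, delayTimeout);
        if (remaining > 0) {
            System.out.println("still delayed, schedule a reroute in " + remaining + "ms");
        } else {
            System.out.println("delay expired (or negative), allocate now and skip the extra reroute");
        }
    }
}
```

With one shared timestamp, a delay that has expired (or gone negative) for the allocator has also expired for the reroute scheduler, which is what removes the redundant "[0] unassigned shards ... [0s]" reroutes described in the issue.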
{
"body": "Upgraded from 1.6.1, a 1.7.0 node joins, we use allocation excludes to force data away from the nodes we intend to stop. When the 1.7.0 node has all the data, the old node is stopped.\n\n1.7.0 becomes the master, then logs \"delaying allocation for [66] unassigned shards, next check in [59.6s]\". 22 _minutes_ later, it then starts logging \"delaying allocation for [0] unassigned shards, next check in [0s]\" forever.\n\nWe had another cluster doing the same, which stopped logging it after two more nodes joined the cluster.\n\nNTP enabled, clocks seem fine, drift not out of the ordinary.\n",
"comments": [
{
"body": "@kimchy any ideas?\n",
"created_at": "2015-07-27T11:30:22Z"
},
{
"body": "@alexbrasetvik does this problem persist when the entire cluster is running 1.7.0, or does it only occur when the cluster is in a mixed-version state?\n",
"created_at": "2015-07-27T15:50:17Z"
},
{
"body": "It happens also for clusters created with 1.7.0.\n",
"created_at": "2015-07-27T16:33:47Z"
},
{
"body": "@alexbrasetvik I was able to reproduce this, still trying to figure out what causes it\n",
"created_at": "2015-07-27T16:35:52Z"
},
{
"body": "I see this multiple times too. Besides the logging itself, actually it never allocates my unassigned shards. I have a 20 machine cluster, all on 1.7 already. I shutdown a node via kopf. I see ~100 unassigned shards. The node then joins the cluster again, it's not initializing the unassigned shards. \n\nI manually reroute one unassigned shard, then i start seeing the cluster initializing the other unassigned shards. \n\nIn pending tasks, I'm seeing\n\n```\n 176228 29.1s URGENT shard-started ([xxxx-2015.27][0], node[nBe-T6GzTPOKoFHz-sIz8A], [R], s[INITIALIZING], unassigned_info[[reason=NODE_LEFT], at[2015-07-29T19:18:21.992Z], details[node_left[Vsj8k-eIQX2PFgAQQWDIJA]]]), reason [master [...][DwayquBqT8u8Xvns_HiIag][CO3SCH020050240][inet[/10.65.207.36:9300]]{...} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n```\n",
"created_at": "2015-07-29T19:43:09Z"
}
],
"number": 12456,
"title": "Delayed allocation \"stuck\" on 0 shards"
} | {
"body": "In order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456\n\nThis is a PR against 1.7 and will be forward-ported\n",
"number": 12532,
"review_comments": [
{
"body": "Would it be useful to have 0 have the special \"I've not even tried yet\" meaning? I think setting it to now is kind-of funky because you aren't actually doing an allocation when you build the object.\n",
"created_at": "2015-07-29T17:00:42Z"
},
{
"body": "Yikes! Is that really 2 minutes of 1 cpu running 100%?\n",
"created_at": "2015-07-29T17:02:33Z"
},
{
"body": "I could set this to 0, but either way, gateway allocator should always run at least once before these does. So this is just because I wanted a non-null value.\n",
"created_at": "2015-07-29T18:49:07Z"
},
{
"body": "Some of our CI nodes are reeeeeaaaalllly slow, and since this creates 25 indices with 10 total shards each, I just wanted to make sure it doesn't time out. I do not expect it to take 2 minutes regardless (it's much faster for me). Another reason this test in annotated with `@Slow`\n",
"created_at": "2015-07-29T18:50:14Z"
},
{
"body": "can we call this timestamp? I think the \"lastXXX\" part depends on the context of calling it. If we do, would be good to rename the other ones in this change.\n",
"created_at": "2015-08-04T14:17:54Z"
},
{
"body": "can we configure the delayed allocation to not be the default (`1m`) but something high enough to trigger what we are trying to fix, like `200ms`? This will speed up the test.\n",
"created_at": "2015-08-04T14:21:42Z"
},
{
"body": "Sure, I already randomize it between 4 and 15 seconds (since it should find the lowest delay setting), but I can lower it even more.\n",
"created_at": "2015-08-04T14:22:52Z"
},
{
"body": "we probably want to rest here as well: `registeredNextDelaySetting = Long.MAX_VALUE;`\n",
"created_at": "2015-08-04T14:38:53Z"
},
{
"body": "Yes good idea, otherwise it assumes one was running.\n",
"created_at": "2015-08-04T18:31:38Z"
},
{
"body": "Sure, I'll rename to `unassignedShardsAllocatedTimestamp`\n",
"created_at": "2015-08-04T18:32:09Z"
}
],
"title": "Avoid extra reroutes of delayed shards in RoutingService"
} | {
"commits": [
{
"message": "Avoid extra reroutes of delayed shards in RoutingService\n\nIn order to avoid extra reroutes, `RoutingService` should avoid\nscheduling a reroute of any shards where the delay is negative. To make\nsure that we don't encounter a race condition between the\nGatewayAllocator thinking a shard is delayed and RoutingService thinking\nit is not, the GatewayAllocator will update the RoutingService with the\nlast time it checked in order to use a consistent \"view\" of the delay.\n\nResolves #12456\nRelates to #12515 and #12456"
}
],
"files": [
{
"diff": "@@ -272,11 +272,15 @@ private ClusterHealthResponse clusterHealth(ClusterHealthRequest request, Cluste\n concreteIndices = clusterState.metaData().concreteIndices(request.indicesOptions(), request.indices());\n } catch (IndexMissingException e) {\n // one of the specified indices is not there - treat it as RED.\n- ClusterHealthResponse response = new ClusterHealthResponse(clusterName.value(), Strings.EMPTY_ARRAY, clusterState, numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState));\n+ ClusterHealthResponse response = new ClusterHealthResponse(clusterName.value(),\n+ Strings.EMPTY_ARRAY, clusterState, numberOfPendingTasks, numberOfInFlightFetch,\n+ UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState));\n response.status = ClusterHealthStatus.RED;\n return response;\n }\n \n- return new ClusterHealthResponse(clusterName.value(), concreteIndices, clusterState, numberOfPendingTasks, numberOfInFlightFetch, UnassignedInfo.getNumberOfDelayedUnassigned(settings, clusterState));\n+ return new ClusterHealthResponse(clusterName.value(),\n+ concreteIndices, clusterState, numberOfPendingTasks, numberOfInFlightFetch,\n+ UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), settings, clusterState));\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java",
"status": "modified"
},
{
"diff": "@@ -57,6 +57,7 @@ public class RoutingService extends AbstractLifecycleComponent<RoutingService> i\n private AtomicBoolean rerouting = new AtomicBoolean();\n private volatile long registeredNextDelaySetting = Long.MAX_VALUE;\n private volatile ScheduledFuture registeredNextDelayFuture;\n+ private volatile long unassignedShardsAllocatedTimestamp = 0;\n \n @Inject\n public RoutingService(Settings settings, ThreadPool threadPool, ClusterService clusterService, AllocationService allocationService) {\n@@ -87,6 +88,19 @@ public AllocationService getAllocationService() {\n return this.allocationService;\n }\n \n+ /**\n+ * Update the last time the allocator tried to assign unassigned shards\n+ *\n+ * This is used so that both the GatewayAllocator and RoutingService use a\n+ * consistent timestamp for comparing which shards have been delayed to\n+ * avoid a race condition where GatewayAllocator thinks the shard should\n+ * be delayed and the RoutingService thinks it has already passed the delay\n+ * and that the GatewayAllocator has/will handle it.\n+ */\n+ public void setUnassignedShardsAllocatedTimestamp(long timeInMillis) {\n+ this.unassignedShardsAllocatedTimestamp = timeInMillis;\n+ }\n+\n /**\n * Initiates a reroute.\n */\n@@ -108,20 +122,29 @@ public void clusterChanged(ClusterChangedEvent event) {\n if (nextDelaySetting > 0 && nextDelaySetting < registeredNextDelaySetting) {\n FutureUtils.cancel(registeredNextDelayFuture);\n registeredNextDelaySetting = nextDelaySetting;\n- TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(settings, event.state()));\n- logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\", UnassignedInfo.getNumberOfDelayedUnassigned(settings, event.state()), nextDelay);\n- registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n- @Override\n- protected void doRun() throws Exception {\n- registeredNextDelaySetting = Long.MAX_VALUE;\n- reroute(\"assign delayed unassigned shards\");\n- }\n-\n- @Override\n- public void onFailure(Throwable t) {\n- logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n- }\n- });\n+ // We use System.currentTimeMillis here because we want the\n+ // next delay from the \"now\" perspective, rather than the\n+ // delay from the last time the GatewayAllocator tried to\n+ // assign/delay the shard\n+ TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(), settings, event.state()));\n+ int unassignedDelayedShards = UnassignedInfo.getNumberOfDelayedUnassigned(unassignedShardsAllocatedTimestamp, settings, event.state());\n+ if (unassignedDelayedShards > 0) {\n+ logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\",\n+ unassignedDelayedShards, nextDelay);\n+ registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n+ @Override\n+ protected void doRun() throws Exception {\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ reroute(\"assign delayed unassigned shards\");\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.warn(\"failed to schedule/execute reroute post unassigned shard\", t);\n+ registeredNextDelaySetting = Long.MAX_VALUE;\n+ }\n+ });\n+ }\n } else {\n logger.trace(\"no need to schedule reroute due to delayed unassigned, next_delay_setting [{}], registered [{}]\", nextDelaySetting, registeredNextDelaySetting);\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingService.java",
"status": "modified"
},
{
"diff": "@@ -167,12 +167,12 @@ public long getAllocationDelayTimeoutSetting(Settings settings, Settings indexSe\n /**\n * The time in millisecond until this unassigned shard can be reassigned.\n */\n- public long getDelayAllocationExpirationIn(Settings settings, Settings indexSettings) {\n+ public long getDelayAllocationExpirationIn(long unassignedShardsAllocatedTimestamp, Settings settings, Settings indexSettings) {\n long delayTimeout = getAllocationDelayTimeoutSetting(settings, indexSettings);\n if (delayTimeout == 0) {\n return 0;\n }\n- long delta = System.currentTimeMillis() - timestamp;\n+ long delta = unassignedShardsAllocatedTimestamp - timestamp;\n // account for time drift, treat it as no timeout\n if (delta < 0) {\n return 0;\n@@ -184,12 +184,12 @@ public long getDelayAllocationExpirationIn(Settings settings, Settings indexSett\n /**\n * Returns the number of shards that are unassigned and currently being delayed.\n */\n- public static int getNumberOfDelayedUnassigned(Settings settings, ClusterState state) {\n+ public static int getNumberOfDelayedUnassigned(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n int count = 0;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (delay > 0) {\n count++;\n }\n@@ -219,12 +219,12 @@ public static long findSmallestDelayedAllocationSetting(Settings settings, Clust\n /**\n * Finds the next (closest) delay expiration of an unassigned shard. Returns 0 if there are none.\n */\n- public static long findNextDelayedAllocationIn(Settings settings, ClusterState state) {\n+ public static long findNextDelayedAllocationIn(long unassignedShardsAllocatedTimestamp, Settings settings, ClusterState state) {\n long nextDelay = Long.MAX_VALUE;\n for (ShardRouting shard : state.routingTable().shardsWithState(ShardRoutingState.UNASSIGNED)) {\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n- long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long nextShardDelay = shard.unassignedInfo().getDelayAllocationExpirationIn(unassignedShardsAllocatedTimestamp, settings, indexMetaData.getSettings());\n if (nextShardDelay > 0 && nextShardDelay < nextDelay) {\n nextDelay = nextShardDelay;\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java",
"status": "modified"
},
{
"diff": "@@ -145,6 +145,11 @@ private boolean recoverOnAnyNode(@IndexSettings Settings idxSettings) {\n \n @Override\n public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ // Take a snapshot of the current time and tell the RoutingService\n+ // about it, so it will use a consistent timestamp for delays\n+ long lastAllocateUnassignedRun = System.currentTimeMillis();\n+ this.routingService.setUnassignedShardsAllocatedTimestamp(lastAllocateUnassignedRun);\n+\n boolean changed = false;\n DiscoveryNodes nodes = allocation.nodes();\n RoutingNodes routingNodes = allocation.routingNodes();\n@@ -526,7 +531,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // note: we only care about replica in delayed allocation, since if we have an unassigned primary it\n // will anyhow wait to find an existing copy of the shard to be allocated\n // note: the other side of the equation is scheduling a reroute in a timely manner, which happens in the RoutingService\n- long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n+ long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(lastAllocateUnassignedRun, settings, indexMetaData.getSettings());\n if (delay > 0) {\n logger.debug(\"[{}][{}]: delaying allocation of [{}] for [{}]\", shard.index(), shard.id(), shard, TimeValue.timeValueMillis(delay));\n /**",
"filename": "src/main/java/org/elasticsearch/gateway/local/LocalGatewayAllocator.java",
"status": "modified"
},
{
"diff": "@@ -19,11 +19,13 @@\n \n package org.elasticsearch.cluster.allocation;\n \n+import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.RoutingService;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.RerouteExplanation;\n@@ -33,15 +35,19 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.cluster.routing.allocation.decider.DisableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.InternalTestCluster;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n import java.io.File;\n@@ -162,6 +168,45 @@ public void rerouteWithAllocateLocalGateway_enableAllocationSettings() throws Ex\n rerouteWithAllocateLocalGateway(commonSettings);\n }\n \n+ /**\n+ * Test that we don't miss any reroutes when concurrent_recoveries\n+ * is set very low and there are a large number of unassigned shards.\n+ */\n+ @Test\n+ @LuceneTestCase.Slow\n+ public void testDelayWithALargeAmountOfShards() throws Exception {\n+ Settings commonSettings = settingsBuilder()\n+ .put(\"gateway.type\", \"local\")\n+ .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_CONCURRENT_RECOVERIES, 1)\n+ .build();\n+ logger.info(\"--> starting 4 nodes\");\n+ String node_1 = internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+\n+ assertThat(cluster().size(), equalTo(4));\n+ ClusterHealthResponse healthResponse = client().admin().cluster().prepareHealth().setWaitForNodes(\"4\").execute().actionGet();\n+ assertThat(healthResponse.isTimedOut(), equalTo(false));\n+\n+ logger.info(\"--> create indices\");\n+ for (int i = 0; i < 25; i++) {\n+ client().admin().indices().prepareCreate(\"test\" + i)\n+ .setSettings(settingsBuilder()\n+ .put(\"index.number_of_shards\", 5).put(\"index.number_of_replicas\", 1)\n+ .put(\"index.unassigned.node_left.delayed_timeout\", randomIntBetween(250, 1000) + \"ms\"))\n+ .execute().actionGet();\n+ }\n+\n+ ensureGreen(TimeValue.timeValueMinutes(1));\n+\n+ logger.info(\"--> stopping node1\");\n+ internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_1));\n+\n+ // This might run slowly on older hardware\n+ ensureGreen(TimeValue.timeValueMinutes(2));\n+ }\n+\n private void 
rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exception {\n logger.info(\"--> starting 2 nodes\");\n String node_1 = internalCluster().startNode(commonSettings);",
"filename": "src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteTests.java",
"status": "modified"
},
{
"diff": "@@ -27,7 +27,9 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n import org.junit.Before;\n@@ -91,6 +93,7 @@ public void testNoDelayedUnassigned() throws Exception {\n }\n \n @Test\n+ @TestLogging(\"_root:DEBUG\")\n public void testDelayedUnassignedScheduleReroute() throws Exception {\n AllocationService allocation = createAllocationService();\n MetaData metaData = MetaData.builder()\n@@ -111,6 +114,10 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // We need to update the routing service's last attempted run to\n+ // signal that the GatewayAllocator tried to allocated it but\n+ // it was delayed\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis());\n ClusterState newState = clusterState;\n \n routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n@@ -124,6 +131,44 @@ public void run() {\n assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(Long.MAX_VALUE));\n }\n \n+ @Test\n+ public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Exception {\n+ AllocationService allocation = createAllocationService();\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"100ms\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test\"))).build();\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).localNodeId(\"node1\").masterNodeId(\"node1\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // starting primaries\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ // starting replicas\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n+ assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ // remove node2 and reroute\n+ ClusterState prevState = clusterState;\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // Set it in the future so the delay will be negative\n+ routingService.setUnassignedShardsAllocatedTimestamp(System.currentTimeMillis() + TimeValue.timeValueMinutes(1).millis());\n+\n+ 
ClusterState newState = clusterState;\n+\n+ routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ assertThat(routingService.hasReroutedAndClear(), equalTo(false));\n+\n+ // verify the registration has been updated\n+ assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(100L));\n+ }\n+ });\n+ }\n+\n private class TestRoutingService extends RoutingService {\n \n private AtomicBoolean rerouted = new AtomicBoolean();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -269,7 +269,7 @@ public void testUnassignedDelayedOnlyOnNodeLeft() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- long delay = unassignedInfo.getDelayAllocationExpirationIn(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n+ long delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n assertThat(delay, greaterThan(0l));\n assertThat(delay, lessThan(TimeValue.timeValueHours(10).millis()));\n }\n@@ -286,7 +286,7 @@ public void testUnassignedDelayOnlyNodeLeftNonNodeLeftReason() throws Exception\n UnassignedInfo unassignedInfo = new UnassignedInfo(RandomPicks.randomFrom(getRandom(), reasons), null);\n long delay = unassignedInfo.getAllocationDelayTimeoutSetting(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n assertThat(delay, equalTo(0l));\n- delay = unassignedInfo.getDelayAllocationExpirationIn(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n+ delay = unassignedInfo.getDelayAllocationExpirationIn(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), ImmutableSettings.EMPTY);\n assertThat(delay, equalTo(0l));\n }\n \n@@ -302,7 +302,7 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -311,7 +311,7 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n+ assertThat(clusterState.prettyPrint(), UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(2));\n }\n \n @Test\n@@ -326,7 +326,7 @@ public void 
testFindNextDelayedAllocation() {\n .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"test1\")).addAsNew(metaData.index(\"test2\"))).build();\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n- assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n+ assertThat(UnassignedInfo.getNumberOfDelayedUnassigned(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState), equalTo(0));\n // starting primaries\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n@@ -339,7 +339,7 @@ public void testFindNextDelayedAllocation() {\n long nextDelaySetting = UnassignedInfo.findSmallestDelayedAllocationSetting(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelaySetting, equalTo(TimeValue.timeValueHours(10).millis()));\n \n- long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n+ long nextDelay = UnassignedInfo.findNextDelayedAllocationIn(System.currentTimeMillis(), ImmutableSettings.builder().put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10h\").build(), clusterState);\n assertThat(nextDelay, greaterThan(TimeValue.timeValueHours(9).millis()));\n assertThat(nextDelay, lessThanOrEqualTo(TimeValue.timeValueHours(10).millis()));\n }",
"filename": "src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java",
"status": "modified"
}
]
} |
{
"body": "To reproduce, start a node with 1g heap and then:\n\n```\nDELETE test\n\nPUT index/doc/id\n{\n \"text\": \"text\"\n}\n\nPOST index/_search\n{\n \"aggs\": {\n \"text\": {\n \"terms\": {\n \"field\": \"text\",\n \"size\": 10\n },\n \"aggs\": {\n \"top_hits\": {\n \"top_hits\": {\n \"size\": 200000000\n }\n }\n }\n }\n }\n}\n```\n\nTested 1.7.0.\nWhile it is not a good idea to retrieve 200000000 documents I would still not expect an OOM with only one document.\n",
"comments": [],
"number": 12510,
"title": "top hits: huge number in \"size\" makes node go OOM even with very few docs"
} | {
"body": "PR for #12510\n",
"number": 12518,
"review_comments": [],
"title": "Protected against `size` and `offset` larger than total number of document in a shard"
} | {
"commits": [
{
"message": "top_hits: If topN (based on `offset` + `size`) is higher than the maxDoc of an shard then normalize topN to maxDoc.\n\nCloses #12510"
}
],
"files": [
{
"diff": "@@ -117,6 +117,9 @@ public void collect(int docId, long bucket) throws IOException {\n if (collectors == null) {\n Sort sort = subSearchContext.sort();\n int topN = subSearchContext.from() + subSearchContext.size();\n+ // In the QueryPhase we don't need this protection, because it is build into the IndexSearcher,\n+ // but here we create collectors ourselves and we need prevent OOM because of crazy an offset and size.\n+ topN = Math.min(topN, subSearchContext.searcher().getIndexReader().maxDoc());\n TopDocsCollector<?> topLevelCollector = sort != null ? TopFieldCollector.create(sort, topN, true, subSearchContext.trackScores(), subSearchContext.trackScores()) : TopScoreDocCollector.create(topN);\n collectors = new TopDocsAndLeafCollector(topLevelCollector);\n collectors.leafCollector = collectors.topLevelCollector.getLeafCollector(ctx);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations.bucket;\n \n import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.util.ArrayUtil;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -928,4 +929,20 @@ public void testTopHitsInNested() throws Exception {\n }\n }\n }\n+\n+ @Test\n+ public void testDontExplode() throws Exception {\n+ SearchResponse response = client()\n+ .prepareSearch(\"idx\")\n+ .setTypes(\"type\")\n+ .addAggregation(terms(\"terms\")\n+ .executionHint(randomExecutionHint())\n+ .field(TERMS_AGGS_FIELD)\n+ .subAggregation(\n+ topHits(\"hits\").setSize(ArrayUtil.MAX_ARRAY_LENGTH - 1).addSort(SortBuilders.fieldSort(SORT_FIELD).order(SortOrder.DESC))\n+ )\n+ )\n+ .get();\n+ assertNoFailures(response);\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/TopHitsTests.java",
"status": "modified"
}
]
} |
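The one-line fix in the diff caps the collector's `topN` at the shard's `maxDoc`, so an absurd `size` can no longer make Lucene pre-allocate an enormous priority queue; the QueryPhase path does not need this because, as the diff comment notes, `IndexSearcher` applies the same cap internally. Below is a small sketch of the idea, written against the Lucene 5.x-style single-argument `TopScoreDocCollector.create(int)` that appears in the diff (the API shape is taken from the diff, not from any other Lucene version).

```java
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TopDocsCollector;
import org.apache.lucene.search.TopScoreDocCollector;

public final class TopNClamp {

    private TopNClamp() {}

    // topN is what the request asked for (from + size); the collector only ever needs
    // as many slots as there are documents in the shard, so clamp before allocating.
    public static TopDocsCollector<?> clampedCollector(IndexSearcher searcher, int from, int size) {
        int requested = from + size;                          // may be e.g. 200_000_000
        int maxDoc = searcher.getIndexReader().maxDoc();      // documents actually in this shard
        int topN = Math.max(1, Math.min(requested, maxDoc));  // never allocate more slots than docs
        return TopScoreDocCollector.create(topN);             // Lucene 5.x single-argument variant
    }
}
```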
{
"body": "Upgraded from 1.6.1, a 1.7.0 node joins, we use allocation excludes to force data away from the nodes we intend to stop. When the 1.7.0 node has all the data, the old node is stopped.\n\n1.7.0 becomes the master, then logs \"delaying allocation for [66] unassigned shards, next check in [59.6s]\". 22 _minutes_ later, it then starts logging \"delaying allocation for [0] unassigned shards, next check in [0s]\" forever.\n\nWe had another cluster doing the same, which stopped logging it after two more nodes joined the cluster.\n\nNTP enabled, clocks seem fine, drift not out of the ordinary.\n",
"comments": [
{
"body": "@kimchy any ideas?\n",
"created_at": "2015-07-27T11:30:22Z"
},
{
"body": "@alexbrasetvik does this problem persist when the entire cluster is running 1.7.0, or does it only occur when the cluster is in a mixed-version state?\n",
"created_at": "2015-07-27T15:50:17Z"
},
{
"body": "It happens also for clusters created with 1.7.0.\n",
"created_at": "2015-07-27T16:33:47Z"
},
{
"body": "@alexbrasetvik I was able to reproduce this, still trying to figure out what causes it\n",
"created_at": "2015-07-27T16:35:52Z"
},
{
"body": "I see this multiple times too. Besides the logging itself, actually it never allocates my unassigned shards. I have a 20 machine cluster, all on 1.7 already. I shutdown a node via kopf. I see ~100 unassigned shards. The node then joins the cluster again, it's not initializing the unassigned shards. \n\nI manually reroute one unassigned shard, then i start seeing the cluster initializing the other unassigned shards. \n\nIn pending tasks, I'm seeing\n\n```\n 176228 29.1s URGENT shard-started ([xxxx-2015.27][0], node[nBe-T6GzTPOKoFHz-sIz8A], [R], s[INITIALIZING], unassigned_info[[reason=NODE_LEFT], at[2015-07-29T19:18:21.992Z], details[node_left[Vsj8k-eIQX2PFgAQQWDIJA]]]), reason [master [...][DwayquBqT8u8Xvns_HiIag][CO3SCH020050240][inet[/10.65.207.36:9300]]{...} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n```\n",
"created_at": "2015-07-29T19:43:09Z"
}
],
"number": 12456,
"title": "Delayed allocation \"stuck\" on 0 shards"
} | {
"body": "Previously when RoutingService checks for delayed shards, it can see\nshards that are delayed, but are past their delay time so the logged\noutput looks like:\n\n```\ndelaying allocation for [0] unassigned shards, next check in [0s]\n```\n\nThis change allows shards that have passed their delay to be counted\ncorrectly for the logging. Additionally, it places a 5 second minimum\ndelay between scheduled reroutes to try to minimize the number of\nreroutes run.\n\nThis also adds a test that creates a large number of unassigned delayed\nshards and ensures that they are rerouted even if a single reroute does\nnot allocated all shards (due to a low concurrent_recoveries setting).\n\nResolves #12456 \n\n(This PR is against 1.7 and will be forward-ported)\n",
"number": 12515,
"review_comments": [
{
"body": "perhaps the minimum delay should be a setting? \n",
"created_at": "2015-07-28T19:01:13Z"
},
{
"body": "Sure, I will make it a non-dynamic configurable setting.\n",
"created_at": "2015-07-28T19:17:16Z"
},
{
"body": "So negative delays count delays? I figured they should count as not-delayed.\n",
"created_at": "2015-07-28T19:28:39Z"
},
{
"body": "One too many `*`s here I think.\n",
"created_at": "2015-07-28T19:30:42Z"
},
{
"body": "Count _as_ delays I mean.\n",
"created_at": "2015-07-28T19:33:11Z"
},
{
"body": "Ah! I get it! 0 means they are not delayed at all but a negative number means they've already expired. They _aren't_ delayed any more. OK - I retract my question. Maybe a comment?\n",
"created_at": "2015-07-28T19:35:54Z"
},
{
"body": "Sure I'll add a comment\n",
"created_at": "2015-07-28T19:42:41Z"
}
],
"title": "Fix messaging about delayed allocation"
} | {
"commits": [
{
"message": "Fix messaging about delayed allocation\n\nPreviously when RoutingService checks for delayed shards, it can see\nshards that are delayed, but are past their delay time so the logged\noutput looks like:\n\n```\ndelaying allocation for [0] unassigned shards, next check in [0s]\n```\n\nThis change allows shards that have passed their delay to be counted\ncorrectly for the logging. Additionally, it places a 5 second minimum\ndelay between scheduled reroutes to try to minimize the number of\nreroutes run.\n\nThis also adds a test that creates a large number of unassigned delayed\nshards and ensures that they are rerouted even if a single reroute does\nnot allocated all shards (due to a low concurrent_recoveries setting).\n\nResolves #12456"
},
{
"message": "Make minimum reroute delay configurable"
},
{
"message": "Add clarifying comment"
}
],
"files": [
{
"diff": "@@ -48,11 +48,15 @@\n */\n public class RoutingService extends AbstractLifecycleComponent<RoutingService> implements ClusterStateListener {\n \n+ public static final String CLUSTER_ROUTING_SERVICE_MINIMUM_DELAY_SETTING = \"cluster.routing_service.minimum_reroute_delay\";\n+\n private static final String CLUSTER_UPDATE_TASK_SOURCE = \"cluster_reroute\";\n+ private static final TimeValue DEFAULT_ROUTING_SERVICE_MINIMUM_DELAY = TimeValue.timeValueSeconds(5);\n \n final ThreadPool threadPool;\n private final ClusterService clusterService;\n private final AllocationService allocationService;\n+ private final long minimumRerouteDelayMillis;\n \n private AtomicBoolean rerouting = new AtomicBoolean();\n private volatile long registeredNextDelaySetting = Long.MAX_VALUE;\n@@ -64,6 +68,8 @@ public RoutingService(Settings settings, ThreadPool threadPool, ClusterService c\n this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.allocationService = allocationService;\n+ this.minimumRerouteDelayMillis = settings.getAsTime(CLUSTER_ROUTING_SERVICE_MINIMUM_DELAY_SETTING,\n+ DEFAULT_ROUTING_SERVICE_MINIMUM_DELAY).millis();\n if (clusterService != null) {\n clusterService.addFirst(this);\n }\n@@ -108,7 +114,10 @@ public void clusterChanged(ClusterChangedEvent event) {\n if (nextDelaySetting > 0 && nextDelaySetting < registeredNextDelaySetting) {\n FutureUtils.cancel(registeredNextDelayFuture);\n registeredNextDelaySetting = nextDelaySetting;\n- TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(settings, event.state()));\n+ long nextDelayMillis = UnassignedInfo.findNextDelayedAllocationIn(settings, event.state());\n+ // Schedule the delay at least the minimum time in the future\n+ nextDelayMillis = Math.max(this.minimumRerouteDelayMillis, nextDelayMillis);\n+ TimeValue nextDelay = TimeValue.timeValueMillis(nextDelayMillis);\n logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\", UnassignedInfo.getNumberOfDelayedUnassigned(settings, event.state()), nextDelay);\n registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n @Override",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingService.java",
"status": "modified"
},
{
"diff": "@@ -190,7 +190,9 @@ public static int getNumberOfDelayedUnassigned(Settings settings, ClusterState s\n if (shard.primary() == false) {\n IndexMetaData indexMetaData = state.metaData().index(shard.getIndex());\n long delay = shard.unassignedInfo().getDelayAllocationExpirationIn(settings, indexMetaData.getSettings());\n- if (delay > 0) {\n+ // A negative delay means the shard has already expired (and so\n+ // should be considered) and a delay of 0 means there is no delay.\n+ if (delay != 0) {\n count++;\n }\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java",
"status": "modified"
},
{
"diff": "@@ -19,11 +19,13 @@\n \n package org.elasticsearch.cluster.allocation;\n \n+import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.RoutingService;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.RerouteExplanation;\n@@ -33,15 +35,19 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.cluster.routing.allocation.decider.DisableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.InternalTestCluster;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n import java.io.File;\n@@ -162,6 +168,47 @@ public void rerouteWithAllocateLocalGateway_enableAllocationSettings() throws Ex\n rerouteWithAllocateLocalGateway(commonSettings);\n }\n \n+ /**\n+ * Test that we don't miss any reroutes when concurrent_recoveries\n+ * is set very low and there are a large number of unassigned shards.\n+ */\n+ @Test\n+ @LuceneTestCase.Slow\n+ public void testDelayWithALargeAmountOfShards() throws Exception {\n+ Settings commonSettings = settingsBuilder()\n+ .put(\"gateway.type\", \"local\")\n+ .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_CONCURRENT_RECOVERIES, 1)\n+ .put(RoutingService.CLUSTER_ROUTING_SERVICE_MINIMUM_DELAY_SETTING,\n+ TimeValue.timeValueSeconds(randomIntBetween(2, 6)))\n+ .build();\n+ logger.info(\"--> starting 4 nodes\");\n+ String node_1 = internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+ internalCluster().startNode(commonSettings);\n+\n+ assertThat(cluster().size(), equalTo(4));\n+ ClusterHealthResponse healthResponse = client().admin().cluster().prepareHealth().setWaitForNodes(\"4\").execute().actionGet();\n+ assertThat(healthResponse.isTimedOut(), equalTo(false));\n+\n+ logger.info(\"--> create indices\");\n+ for (int i = 0; i < 25; i++) {\n+ client().admin().indices().prepareCreate(\"test\" + i)\n+ .setSettings(settingsBuilder()\n+ .put(\"index.number_of_shards\", 5).put(\"index.number_of_replicas\", 1)\n+ .put(\"index.unassigned.node_left.delayed_timeout\", randomIntBetween(4, 15) + \"s\"))\n+ .execute().actionGet();\n+ }\n+\n+ ensureGreen(TimeValue.timeValueMinutes(1));\n+\n+ logger.info(\"--> stopping node1\");\n+ internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_1));\n+\n+ // This 
might run slowly on older hardware\n+ ensureGreen(TimeValue.timeValueMinutes(2));\n+ }\n+\n private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exception {\n logger.info(\"--> starting 2 nodes\");\n String node_1 = internalCluster().startNode(commonSettings);",
"filename": "src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteTests.java",
"status": "modified"
}
]
} |
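Two pieces of this change are easy to isolate: a shard counts as delayed whenever its remaining delay is non-zero (negative simply means the delay has already expired), and the next scheduled reroute is pushed at least a configurable minimum into the future. The following self-contained sketch mirrors both, using made-up method names rather than the real RoutingService/UnassignedInfo signatures.

```java
public final class DelayedRerouteSketch {

    private DelayedRerouteSketch() {}

    /** Counts shards whose delay is non-zero: 0 = not delayed, negative = delay already expired. */
    public static int countDelayed(long[] remainingDelaysMillis) {
        int count = 0;
        for (long remaining : remainingDelaysMillis) {
            if (remaining != 0) { // the old check was "> 0", which silently dropped expired delays
                count++;
            }
        }
        return count;
    }

    /** Never schedule the follow-up reroute sooner than the configured minimum. */
    public static long nextCheckInMillis(long nextDelayMillis, long minimumRerouteDelayMillis) {
        return Math.max(minimumRerouteDelayMillis, nextDelayMillis);
    }

    public static void main(String[] args) {
        long[] delays = {0, -1_500, 30_000};               // not delayed, expired, still delayed
        System.out.println(countDelayed(delays));          // 2
        System.out.println(nextCheckInMillis(0, 5_000));   // 5000: at least the 5s default minimum
    }
}
```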
{
"body": "```\nDELETE my_index\n\nPOST my_index/my_type/1\n{}\n\nGET my_index/_search\n{\n \"query\": {\n \"match\": {\n \"_all\": \"foo\"\n }\n }\n}\n```\n\nReturns an NPE with the following stack trace:\n\n```\nFailed to execute [org.elasticsearch.action.search.SearchRequest@402fa749]\nRemoteTransportException[[Havok][inet[/127.0.0.1:9300]][indices:data/read/search[phase/query]]]; nested: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: ElasticsearchException; nested: NullPointerException;\nCaused by: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: ElasticsearchException; nested: NullPointerException;\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:157)\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:334)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:346)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:340)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\nCaused by: ElasticsearchException; nested: NullPointerException;\n at org.elasticsearch.ExceptionsHelper.convertToElastic(ExceptionsHelper.java:57)\n at org.elasticsearch.search.internal.ContextIndexSearcher.createNormalizedWeight(ContextIndexSearcher.java:133)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:485)\n at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:694)\n at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:410)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:439)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:151)\n ... 9 more\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.lucene.all.AllTermQuery.rewrite(AllTermQuery.java:130)\n at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:786)\n at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:833)\n at org.elasticsearch.search.internal.ContextIndexSearcher.createNormalizedWeight(ContextIndexSearcher.java:130)\n ... 15 more\n```\n",
"comments": [],
"number": 12439,
"title": "NPE when querying `_all` field with no fields"
} | {
"body": "In 2.x searching the _all field spat out NullPointerException. It never\nhas in 1.x but we never had a test for it. This adds that test.\n\nCloses #12439\n",
"number": 12511,
"review_comments": [],
"title": "[TEST] Add tests for searching missing _all field"
} | {
"commits": [
{
"message": "[TEST] Add tests for searching missing _all field\n\nIn 2.x searching the _all field spat out NullPointerException. It never\nhas in 1.x but we never had a test for it. This adds that test.\n\nCloses #12439"
}
],
"files": [
{
"diff": "@@ -1830,6 +1830,24 @@ public void testMultiMatchLenientIssue3797() {\n assertHitCount(searchResponse, 1l);\n }\n \n+ @Test\n+ public void testAllFieldEmptyMapping() throws Exception {\n+ client().prepareIndex(\"myindex\", \"mytype\").setId(\"1\").setSource(\"{}\").setRefresh(true).get();\n+ SearchResponse response = client().prepareSearch(\"myindex\").setQuery(matchQuery(\"_all\", \"foo\")).get();\n+ assertNoFailures(response);\n+ }\n+\n+ @Test\n+ public void testAllDisabledButQueried() throws Exception {\n+ createIndex(\"myindex\");\n+ assertAcked(client().admin().indices().preparePutMapping(\"myindex\").setType(\"mytype\").setSource(\n+ jsonBuilder().startObject().startObject(\"mytype\").startObject(\"_all\").field(\"enabled\", false)));\n+ client().prepareIndex(\"myindex\", \"mytype\").setId(\"1\").setSource(\"bar\", \"foo\").setRefresh(true).get();\n+ SearchResponse response = client().prepareSearch(\"myindex\").setQuery(matchQuery(\"_all\", \"foo\")).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 0);\n+ }\n+\n @Test\n public void testIndicesQuery() throws Exception {\n createIndex(\"index1\", \"index2\", \"index3\");",
"filename": "src/test/java/org/elasticsearch/search/query/SearchQueryTests.java",
"status": "modified"
}
]
} |
{
"body": "When one of the parent directories has a space in there the plugin binary can't find the correct path.\n\n```\n$ pwd\n/home/richard/temp dir/elasticsearch-1.7.0\n$ ./bin/plugin\nError: Could not find or load main class dir.elasticsearch-1.7.0.config\n```\n",
"comments": [
{
"body": "Confirmed on OSX and Ubuntu.\n",
"created_at": "2015-07-28T12:50:57Z"
},
{
"body": "Assigned to myself as I sent a PR for it. Anyone else can take it if they want it and think they can make a better pull request.\n",
"created_at": "2015-07-28T15:21:28Z"
},
{
"body": "Ok - we have a fix merged for 1.7 and now I'll need to forward port that to master.\n",
"created_at": "2015-08-03T13:42:23Z"
}
],
"number": 12504,
"title": "Space in directory structure fails plugin binary"
} | {
"body": "When ES_HOME has spaces we get funky \"cannot find main class directory>\"\nerrors. This makes them stop by escaping directories and using eval instead\nof exec.\n\nCloses #12504\n",
"number": 12508,
"review_comments": [
{
"body": "`$ES_DIR/config` is intentionally not removed? might be worth a comment in the test\n",
"created_at": "2015-07-30T17:28:14Z"
},
{
"body": "I'd replace `elasticsearch/shield/latest` with `shield`, since that's a more typical command\n",
"created_at": "2015-07-30T17:42:40Z"
},
{
"body": "Honestly? I have no idea - this is the same checks that are done when the installation directory doesn't have a space. I'd love there to be some method call for this but the other tests seemed to prefer copy and paste over a refactor and I was afraid to do it - especially as I'll have to forward port this to 1.7 and my bash-foo is weak.\n",
"created_at": "2015-07-30T17:43:04Z"
},
{
"body": "\"I just copy and pasted like the last guy\" <--- its a poor defense but in this case I'm sticking to it.\n",
"created_at": "2015-07-30T17:43:33Z"
},
{
"body": "But all the other tests do it that way! Maybe that is another issue we can file?\n",
"created_at": "2015-07-30T17:45:16Z"
},
{
"body": "Yeah I'm fine with that\n",
"created_at": "2015-07-30T17:47:55Z"
},
{
"body": "> $ES_DIR/config is intentionally not removed? might be worth a comment in the test\n\nConfiguration is not removed when removing a plugin. I think elasticsearch configuration is removed when purging the package.\n",
"created_at": "2015-08-03T07:18:36Z"
},
{
"body": "This is something that changed recently but the BATS files were not updated. I think it changed only in master.\n",
"created_at": "2015-08-03T07:19:40Z"
}
],
"title": "Plugin script: Fix ES_HOME with spaces"
} | {
"commits": [
{
"message": "Plugin script: Fix ES_HOME with spaces\n\nWhen ES_HOME has spaces we get funky \"cannot find main class directory>\"\nerrors. This makes them stop by escaping directories and using eval instead\nof exec.\n\nCloses #12504"
},
{
"message": "Plugin: Fix arguments with spaces\n\nAlso add some bats tests"
}
],
"files": [
{
"diff": "@@ -69,15 +69,15 @@ fi\n while [ $# -gt 0 ]; do\n case $1 in\n -D*=*)\n- properties=\"$properties $1\"\n+ properties=\"$properties \\\"$1\\\"\"\n ;;\n -D*)\n var=$1\n shift\n- properties=\"$properties $var=$1\"\n+ properties=\"$properties \\\"$var\\\"=\\\"$1\\\"\"\n ;;\n *)\n- args=\"$args $1\"\n+ args=\"$args \\\"$1\\\"\"\n esac\n shift\n done\n@@ -88,7 +88,7 @@ if [ -e \"$CONF_DIR\" ]; then\n *-Des.default.path.conf=*|*-Des.path.conf=*)\n ;;\n *)\n- properties=\"$properties -Des.default.path.conf=$CONF_DIR\"\n+ properties=\"$properties -Des.default.path.conf=\\\"$CONF_DIR\\\"\"\n ;;\n esac\n fi\n@@ -98,11 +98,11 @@ if [ -e \"$CONF_FILE\" ]; then\n *-Des.default.config=*|*-Des.config=*)\n ;;\n *)\n- properties=\"$properties -Des.default.config=$CONF_FILE\"\n+ properties=\"$properties -Des.default.config=\\\"$CONF_FILE\\\"\"\n ;;\n esac\n fi\n \n export HOSTNAME=`hostname -s`\n \n-exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\"$ES_HOME\" $properties -cp \"$ES_HOME/lib/*\" org.elasticsearch.plugins.PluginManager $args\n+eval \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\\\"\"$ES_HOME\"\\\" $properties -cp \\\"\"$ES_HOME/lib/*\"\\\" org.elasticsearch.plugins.PluginManager $args",
"filename": "bin/plugin",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n load packaging_test_utils\n \n setup() {\n+\n # Cleans everything for every test execution\n clean_before_test\n \n@@ -342,3 +343,100 @@ setup() {\n run rm -rf \"$TEMP_CONFIG_DIR\"\n [ \"$status\" -eq 0 ]\n }\n+\n+@test \"[TAR] install shield plugin to elasticsearch directory with a space\" {\n+ export ES_DIR=\"/tmp/elastic search\"\n+\n+ # Install the archive\n+ install_archive\n+\n+ # Checks that the archive is correctly installed\n+ verify_archive_installation\n+\n+ # Move the Elasticsearch installation to a directory with a space in it\n+ rm -rf \"$ES_DIR\"\n+ mv /tmp/elasticsearch \"$ES_DIR\"\n+\n+ # Checks that plugin archive is available\n+ [ -e \"$SHIELD_ZIP\" ]\n+\n+ # Install Shield\n+ run \"$ES_DIR/bin/plugin\" -i elasticsearch/shield/latest -u \"file://$SHIELD_ZIP\"\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that Shield is correctly installed\n+ assert_file_exist \"$ES_DIR/bin/shield\"\n+ assert_file_exist \"$ES_DIR/bin/shield/esusers\"\n+ assert_file_exist \"$ES_DIR/bin/shield/syskeygen\"\n+ assert_file_exist \"$ES_DIR/config/shield\"\n+ assert_file_exist \"$ES_DIR/config/shield/role_mapping.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/roles.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/users\"\n+ assert_file_exist \"$ES_DIR/config/shield/users_roles\"\n+ assert_file_exist \"$ES_DIR/plugins/shield\"\n+\n+ # Remove the plugin\n+ run \"$ES_DIR/bin/plugin\" -r elasticsearch/shield/latest\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that the plugin is correctly removed\n+ assert_file_not_exist \"$ES_DIR/bin/shield\"\n+ assert_file_exist \"$ES_DIR/config/shield\"\n+ assert_file_exist \"$ES_DIR/config/shield/role_mapping.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/roles.yml\"\n+ assert_file_exist \"$ES_DIR/config/shield/users\"\n+ assert_file_exist \"$ES_DIR/config/shield/users_roles\"\n+ assert_file_not_exist \"$ES_DIR/plugins/shield\"\n+\n+ #Cleanup our temporary Elasticsearch installation\n+ rm -rf \"$ES_DIR\"\n+}\n+\n+@test \"[TAR] install shield plugin from a directory with a space\" {\n+ export SHIELD_ZIP_WITH_SPACE=\"/tmp/plugins with space/shield.zip\"\n+\n+ # Install the archive\n+ install_archive\n+\n+ # Checks that the archive is correctly installed\n+ verify_archive_installation\n+\n+ # Checks that plugin archive is available\n+ [ -e \"$SHIELD_ZIP\" ]\n+\n+ # Copy the shield plugin to a directory with a space in it\n+ rm -f \"$SHIELD_ZIP_WITH_SPACE\"\n+ mkdir -p \"$(dirname \"$SHIELD_ZIP_WITH_SPACE\")\"\n+ cp $SHIELD_ZIP \"$SHIELD_ZIP_WITH_SPACE\"\n+\n+ # Install Shield\n+ run /tmp/elasticsearch/bin/plugin -i elasticsearch/shield/latest -u \"file://$SHIELD_ZIP_WITH_SPACE\"\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that Shield is correctly installed\n+ assert_file_exist \"/tmp/elasticsearch/bin/shield\"\n+ assert_file_exist \"/tmp/elasticsearch/bin/shield/esusers\"\n+ assert_file_exist \"/tmp/elasticsearch/bin/shield/syskeygen\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/role_mapping.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/roles.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users_roles\"\n+ assert_file_exist \"/tmp/elasticsearch/plugins/shield\"\n+\n+ # Remove the plugin\n+ run /tmp/elasticsearch/bin/plugin -r elasticsearch/shield/latest\n+ [ \"$status\" -eq 0 ]\n+\n+ # Checks that the plugin is correctly removed\n+ assert_file_not_exist 
\"/tmp/elasticsearch/bin/shield\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/role_mapping.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/roles.yml\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users\"\n+ assert_file_exist \"/tmp/elasticsearch/config/shield/users_roles\"\n+ assert_file_not_exist \"/tmp/elasticsearch/plugins/shield\"\n+\n+ #Cleanup our plugin directory with a space\n+ rm -rf \"$SHIELD_ZIP_WITH_SPACE\"\n+}",
"filename": "src/test/resources/packaging/scripts/25_tar_plugins.bats",
"status": "modified"
}
]
} |
{
"body": "When one of the parent directories has a space in there the plugin binary can't find the correct path.\n\n```\n$ pwd\n/home/richard/temp dir/elasticsearch-1.7.0\n$ ./bin/plugin\nError: Could not find or load main class dir.elasticsearch-1.7.0.config\n```\n",
"comments": [
{
"body": "Confirmed on OSX and Ubuntu.\n",
"created_at": "2015-07-28T12:50:57Z"
},
{
"body": "Assigned to myself as I sent a PR for it. Anyone else can take it if they want it and think they can make a better pull request.\n",
"created_at": "2015-07-28T15:21:28Z"
},
{
"body": "Ok - we have a fix merged for 1.7 and now I'll need to forward port that to master.\n",
"created_at": "2015-08-03T13:42:23Z"
}
],
"number": 12504,
"title": "Space in directory structure fails plugin binary"
} | {
"body": "When ES_HOME has spaces we get funky \"cannot find main class <part of a\ndirectory>\" errors. This makes them stop by escaping directories and using\neval instead of exec.\n\nCloses #12504\n",
"number": 12507,
"review_comments": [
{
"body": "This should not be changed, the value `${packaging.plugin.default.config.dir}` is replaced at build time this line allows us to define the default value when `CONF_DIR` is not explicitly defined in the current env.\n",
"created_at": "2015-07-28T13:23:27Z"
},
{
"body": "Same as precedent comment\n",
"created_at": "2015-07-28T13:23:41Z"
},
{
"body": "Same as precedent comment\n",
"created_at": "2015-07-28T13:23:47Z"
},
{
"body": "Why did you change the class name?\n",
"created_at": "2015-07-28T13:24:15Z"
},
{
"body": "Bleh - looks like I blindly applied this to the wrong branch. Ignore. Its garbage.\n",
"created_at": "2015-07-28T13:27:21Z"
}
],
"title": "Plugin script: Fix ES_HOME with spaces"
} | {
"commits": [
{
"message": "Plugin script: Fix ES_HOME with spaces\n\nWhen ES_HOME has spaces we get funky \"cannot find main class <part of a\ndirectory>\" errors. This makes them stop by escaping directories and using\neval instead of exec.\n\nCloses #12504"
}
],
"files": [
{
"diff": "@@ -23,20 +23,20 @@ ES_HOME=`cd \"$ES_HOME\"; pwd`\n \n # Sets the default values for elasticsearch variables used in this script\n if [ -z \"$CONF_DIR\" ]; then\n- CONF_DIR=\"${packaging.plugin.default.config.dir}\"\n+ CONF_DIR=\"$ES_HOME/config\"\n \n if [ -z \"$CONF_FILE\" ]; then\n CONF_FILE=\"$CONF_DIR/elasticsearch.yml\"\n fi\n fi\n \n if [ -z \"$CONF_FILE\" ]; then\n- CONF_FILE=\"${packaging.plugin.default.config.file}\"\n+ CONF_FILE=\"$ES_HOME/config/elasticsearch.yml\"\n fi\n \n # The default env file is defined at building/packaging time.\n-# For a ${packaging.type} package, the value is \"${packaging.env.file}\".\n-ES_ENV_FILE=\"${packaging.env.file}\"\n+# For a tar.gz package, the value is \"\".\n+ES_ENV_FILE=\"\"\n \n # If an include is specified with the ES_INCLUDE environment variable, use it\n if [ -n \"$ES_INCLUDE\" ]; then\n@@ -88,7 +88,7 @@ if [ -e \"$CONF_DIR\" ]; then\n *-Des.default.path.conf=*|*-Des.path.conf=*)\n ;;\n *)\n- properties=\"$properties -Des.default.path.conf=$CONF_DIR\"\n+ properties=\"$properties -Des.default.path.conf=\\\"$CONF_DIR\\\"\"\n ;;\n esac\n fi\n@@ -98,11 +98,11 @@ if [ -e \"$CONF_FILE\" ]; then\n *-Des.default.config=*|*-Des.config=*)\n ;;\n *)\n- properties=\"$properties -Des.default.config=$CONF_FILE\"\n+ properties=\"$properties -Des.default.config=\\\"$CONF_FILE\\\"\"\n ;;\n esac\n fi\n \n export HOSTNAME=`hostname -s`\n \n-exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\"$ES_HOME\" $properties -cp \"$ES_HOME/lib/*\" org.elasticsearch.plugins.PluginManagerCliParser $args\n+eval \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\\\"\"$ES_HOME\"\\\" $properties -cp \\\"\"$ES_HOME/lib/*\"\\\" org.elasticsearch.plugins.PluginManager $args",
"filename": "distribution/src/main/resources/bin/plugin",
"status": "modified"
}
]
} |
{
"body": "```\nDELETE my_index\n\nPOST my_index/my_type/1\n{}\n\nGET my_index/_search\n{\n \"query\": {\n \"match\": {\n \"_all\": \"foo\"\n }\n }\n}\n```\n\nReturns an NPE with the following stack trace:\n\n```\nFailed to execute [org.elasticsearch.action.search.SearchRequest@402fa749]\nRemoteTransportException[[Havok][inet[/127.0.0.1:9300]][indices:data/read/search[phase/query]]]; nested: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: ElasticsearchException; nested: NullPointerException;\nCaused by: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: ElasticsearchException; nested: NullPointerException;\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:157)\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:334)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:346)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:340)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\nCaused by: ElasticsearchException; nested: NullPointerException;\n at org.elasticsearch.ExceptionsHelper.convertToElastic(ExceptionsHelper.java:57)\n at org.elasticsearch.search.internal.ContextIndexSearcher.createNormalizedWeight(ContextIndexSearcher.java:133)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:485)\n at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:694)\n at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:410)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:439)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:151)\n ... 9 more\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.lucene.all.AllTermQuery.rewrite(AllTermQuery.java:130)\n at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:786)\n at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:833)\n at org.elasticsearch.search.internal.ContextIndexSearcher.createNormalizedWeight(ContextIndexSearcher.java:130)\n ... 15 more\n```\n",
"comments": [],
"number": 12439,
"title": "NPE when querying `_all` field with no fields"
} | {
"body": "This can happen in two ways:\n1. The _all field is disabled.\n2. There are documents in the index, the _all field is enabled, but there are\nno fields in any of the documents.\n\nIn both of these cases we now rewrite the query to a MatchNoDocsQuery which\nshould be safe because there isn't anything to match.\n\nCloses #12439\n",
"number": 12495,
"review_comments": [
{
"body": "Can you propagate the boost to this query? If this query is enclosed in a BooleanQuery, it could have an impact on the normalization factor.\n",
"created_at": "2015-07-28T07:29:21Z"
},
{
"body": "Sure, yeah. Makes sense. Do you think its worth adding a test for that?\n",
"created_at": "2015-07-28T13:17:31Z"
},
{
"body": "no\n",
"created_at": "2015-07-28T13:21:59Z"
}
],
"title": "_all: Stop NPE querying _all when it doesn't exist"
} | {
"commits": [
{
"message": "_all: Stop NPE querying _all when it doesn't exist\n\nThis can happen in two ways:\n1. The _all field is disabled.\n2. There are documents in the index, the _all field is enabled, but there are\nno fields in any of the documents.\n\nIn both of these cases we now rewrite the query to a MatchNoDocsQuery which\nshould be safe because there isn't anything to match.\n\nCloses #12439"
},
{
"message": "_all: Add missing boost\n\nWhen we rewrite to a MatchNoTermsQuery we were throwing out the boost which\ncould could lead to funky changes when the query against _all was in a\nbool query."
}
],
"files": [
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.index.Term;\n import org.apache.lucene.index.Terms;\n import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.payloads.AveragePayloadFunction;\n@@ -124,14 +125,23 @@ public boolean equals(Object obj) {\n \n @Override\n public Query rewrite(IndexReader reader) throws IOException {\n+ boolean fieldExists = false;\n boolean hasPayloads = false;\n for (LeafReaderContext context : reader.leaves()) {\n final Terms terms = context.reader().terms(term.field());\n- if (terms.hasPayloads()) {\n- hasPayloads = true;\n- break;\n+ if (terms != null) {\n+ fieldExists = true;\n+ if (terms.hasPayloads()) {\n+ hasPayloads = true;\n+ break;\n+ }\n }\n }\n+ if (fieldExists == false) {\n+ Query rewritten = new MatchNoDocsQuery();\n+ rewritten.setBoost(getBoost());\n+ return rewritten;\n+ }\n if (hasPayloads == false) {\n TermQuery rewritten = new TermQuery(term);\n rewritten.setBoost(getBoost());",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java",
"status": "modified"
},
{
"diff": "@@ -1867,6 +1867,24 @@ public void testMultiMatchLenientIssue3797() {\n assertHitCount(searchResponse, 1l);\n }\n \n+ @Test\n+ public void testAllFieldEmptyMapping() throws Exception {\n+ client().prepareIndex(\"myindex\", \"mytype\").setId(\"1\").setSource(\"{}\").setRefresh(true).get();\n+ SearchResponse response = client().prepareSearch(\"myindex\").setQuery(matchQuery(\"_all\", \"foo\")).get();\n+ assertNoFailures(response);\n+ }\n+\n+ @Test\n+ public void testAllDisabledButQueried() throws Exception {\n+ createIndex(\"myindex\");\n+ assertAcked(client().admin().indices().preparePutMapping(\"myindex\").setType(\"mytype\").setSource(\n+ jsonBuilder().startObject().startObject(\"mytype\").startObject(\"_all\").field(\"enabled\", false)));\n+ client().prepareIndex(\"myindex\", \"mytype\").setId(\"1\").setSource(\"bar\", \"foo\").setRefresh(true).get();\n+ SearchResponse response = client().prepareSearch(\"myindex\").setQuery(matchQuery(\"_all\", \"foo\")).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 0);\n+ }\n+\n @Test\n public void testIndicesQuery() throws Exception {\n createIndex(\"index1\", \"index2\", \"index3\");",
"filename": "core/src/test/java/org/elasticsearch/search/query/SearchQueryTests.java",
"status": "modified"
}
]
} |
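The core of the fix is the rewrite step: if no segment contains the `_all` field, the query is rewritten to a `MatchNoDocsQuery`, and the original boost is carried over so scoring inside an enclosing bool query is unaffected. Here is a trimmed-down, standalone sketch of that shape, written against the Lucene 5.x-era API the diff itself uses (`Query.setBoost` does not exist in recent Lucene versions); the class and method names are illustrative, not the real `AllTermQuery` code.

```java
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

public final class MissingFieldRewrite {

    private MissingFieldRewrite() {}

    /**
     * Returns a boost-preserving MatchNoDocsQuery when no segment contains the field,
     * or null to signal that the caller should continue with its normal rewrite.
     */
    public static Query rewriteIfFieldMissing(IndexReader reader, Term term, float boost) throws IOException {
        for (LeafReaderContext context : reader.leaves()) {
            Terms terms = context.reader().terms(term.field());
            if (terms != null) {
                return null; // at least one segment has the field; keep the original query
            }
        }
        Query rewritten = new MatchNoDocsQuery();
        rewritten.setBoost(boost); // Lucene 5.x-era boost handling, mirroring the diff
        return rewritten;
    }
}
```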
{
"body": "When a node discovers shard content on disk which isn't used, we reach out to all other nodes that supposed to have the shard active. Only once all of those have confirmed the shard active, the shard has no unassigned copies and no cluster state change have happened in the mean while, do we go and delete the shard folder.\n\nCurrently, after removing a shard, the IndicesStores checks the indices services if that has no more shard active for this index and if so, it tries to delete the entire index folder (unless on master node, where we keep the index metadata around). This is wrong as both the check and the protections in IndicesServices.deleteIndexStore make sure that there isn't any shard in use from that index. However, it may be the we erroneously delete other unused shard copies on disk, without the proper safety guards described above.\n\nNormally, this is not a problem as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., shard is red) there are race conditions which can cause all shard copies to be deleted.\n\nInstead, we should change the decision to clean up an index folder to be based on checking the index directory for being empty and containing no shards.\n\nNote: this PR is against the 1.6 branch.\n",
"comments": [
{
"body": "LGTM\n",
"created_at": "2015-07-27T16:08:59Z"
},
{
"body": "to me it seems like that some of the safety that has been added here is still prone to concurrency issues. We for instance check `this.indices` in `canDeleteIndexContents` which can be concurrently modified by a createIndex operation so I think there are still races in this code that could cause problems?\n",
"created_at": "2015-08-05T15:27:53Z"
},
{
"body": "@s1monw to me it looked like all these calls are performed on updateTask thread so they shouldn't run into concurrency issues. What did I miss?\n",
"created_at": "2015-08-07T21:57:12Z"
},
{
"body": "for instance \n\n``` Java\npublic boolean canDeleteIndexContents(Index index, Settings indexSettings) {\n final Tuple<IndexService, Injector> indexServiceInjectorTuple = this.indices.get(index.name());\n if (IndexMetaData.isOnSharedFilesystem(indexSettings) == false) {\n if (indexServiceInjectorTuple == null && nodeEnv.hasNodeFile()) {\n return true;\n }\n } else {\n logger.trace(\"{} skipping index directory deletion due to shadow replicas\", index);\n }\n return false;\n }\n```\n\nchecks if the index is present but it could be created concurrently such that it can potentially check before it's created but delete after it's creation was successful? \n",
"created_at": "2015-08-10T14:00:11Z"
},
{
"body": "@s1monw concurrently on which thread?\n",
"created_at": "2015-08-10T14:04:29Z"
},
{
"body": "well we can call thsi API from everywhere I wonder if we should synchronize all these operations? there are pending deletes threads etc that are kicked off so I really thing we should protect that.\n",
"created_at": "2015-08-10T14:09:42Z"
},
{
"body": "+1 to not relaying on the methods to be called from the cluster state update thread. If I understand the concern correctly, It relates to an index being deleted and created concurrently. In that case I _think_ the shard locking in NodeEnvironment will protect us from deleting data. However, I agree it would be nice to have clear concurrency semantics on this level of the code. Right now we sometimes synchronize and sometimes not which make it hard to reason about- that is potentially a much bigger change and I was trying to make the smallest intervention. @s1monw if I misunderstood what you said and there is a concrete hole in the logic - can you open an issue so we won't forget?\n",
"created_at": "2015-08-13T11:50:59Z"
}
],
"number": 12487,
"title": "IndicesStore shouldn't try to delete index after deleting a shard"
} | {
"body": "Port of #12487 into 2.0\n\nWhen a node discovers shard content on disk which isn't used, we reach out to all other nodes that supposed to have the shard active. Only once all of those have confirmed the shard active, the shard has no unassigned copies _and_ no cluster state change have happened in the mean while, do we go and delete the shard folder.\n\nCurrently, after removing a shard, the IndicesStores checks the indices services if that has no more shard active for this index and if so, it tries to delete the entire index folder (unless on master node, where we keep the index metadata around). This is wrong as both the check and the protections in IndicesServices.deleteIndexStore make sure that there isn't any shard _in use_ from that index. However, it may be the we erroneously delete other unused shard copies on disk, without the proper safety guards described above.\n\nNormally, this is not a problem as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., shard is red) there are race conditions which can cause all shard copies to be deleted.\n\nInstead, we should change the decision to clean up an index folder to based on checking the index directory for being empty and containing no shards.\n",
"number": 12494,
"review_comments": [],
"title": "IndicesStore shouldn't try to delete index after deleting a shard"
} | {
"commits": [
{
"message": "Internal: IndicesStore shouldn't try to delete index after deleting a shard\n\nWhen a node discovers shard content on disk which isn't used, we reach out to all other nodes that supposed to have the shard active. Only once all of those have confirmed the shard active, the shard has no unassigned copies *and* no cluster state change have happened in the mean while, do we go and delete the shard folder.\n\nCurrently, after removing a shard, the IndicesStores checks the indices services if that has no more shard active for this index and if so, it tries to delete the entire index folder (unless on master node, where we keep the index metadata around). This is wrong as both the check and the protections in IndicesServices.deleteIndexStore make sure that there isn't any shard *in use* from that index. However, it may be the we erroneously delete other unused shard copies on disk, without the proper safety guards described above.\n\nNormally, this is not a problem as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., shard is red) there are race conditions which can cause all shard copies to be deleted.\n\nInstead, we should change the decision to clean up an index folder to based on checking the index directory for being empty and containing no shards."
}
],
"files": [
{
"diff": "@@ -22,13 +22,15 @@\n import com.google.common.collect.ImmutableSet;\n import com.google.common.collect.Sets;\n \n+import com.google.common.primitives.Ints;\n import org.apache.lucene.index.IndexWriter;\n import org.apache.lucene.index.SegmentInfos;\n import org.apache.lucene.store.*;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n@@ -661,6 +663,56 @@ public Set<String> findAllIndices() throws IOException {\n return indices;\n }\n \n+ /**\n+ * Tries to find all allocated shards for the given index\n+ * on the current node. NOTE: This methods is prone to race-conditions on the filesystem layer since it might not\n+ * see directories created concurrently or while it's traversing.\n+ * @param index the index to filter shards\n+ * @return a set of shard IDs\n+ * @throws IOException if an IOException occurs\n+ */\n+ public Set<ShardId> findAllShardIds(final Index index) throws IOException {\n+ assert index != null;\n+ if (nodePaths == null || locks == null) {\n+ throw new IllegalStateException(\"node is not configured to store local location\");\n+ }\n+ assert assertEnvIsLocked();\n+ final Set<ShardId> shardIds = Sets.newHashSet();\n+ String indexName = index.name();\n+ for (final NodePath nodePath : nodePaths) {\n+ Path location = nodePath.indicesPath;\n+ if (Files.isDirectory(location)) {\n+ try (DirectoryStream<Path> indexStream = Files.newDirectoryStream(location)) {\n+ for (Path indexPath : indexStream) {\n+ if (indexName.equals(indexPath.getFileName().toString())) {\n+ shardIds.addAll(findAllShardsForIndex(indexPath));\n+ }\n+ }\n+ }\n+ }\n+ }\n+ return shardIds;\n+ }\n+\n+ private static Set<ShardId> findAllShardsForIndex(Path indexPath) throws IOException {\n+ Set<ShardId> shardIds = new HashSet<>();\n+ if (Files.isDirectory(indexPath)) {\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {\n+ String currentIndex = indexPath.getFileName().toString();\n+ for (Path shardPath : stream) {\n+ if (Files.isDirectory(shardPath)) {\n+ Integer shardId = Ints.tryParse(shardPath.getFileName().toString());\n+ if (shardId != null) {\n+ ShardId id = new ShardId(currentIndex, shardId);\n+ shardIds.add(id);\n+ }\n+ }\n+ }\n+ }\n+ }\n+ return shardIds;\n+ }\n+\n @Override\n public void close() {\n if (closed.compareAndSet(false, true) && locks != null) {",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -524,18 +524,38 @@ public void deleteShardStore(String reason, ShardLock lock, Settings indexSettin\n * This method deletes the shard contents on disk for the given shard ID. This method will fail if the shard deleting\n * is prevented by {@link #canDeleteShardContent(org.elasticsearch.index.shard.ShardId, org.elasticsearch.cluster.metadata.IndexMetaData)}\n * of if the shards lock can not be acquired.\n+ *\n+ * On data nodes, if the deleted shard is the last shard folder in its index, the method will attempt to remove the index folder as well.\n+ *\n * @param reason the reason for the shard deletion\n * @param shardId the shards ID to delete\n- * @param metaData the shards index metadata. This is required to access the indexes settings etc.\n+ * @param clusterState . This is required to access the indexes settings etc.\n * @throws IOException if an IOException occurs\n */\n- public void deleteShardStore(String reason, ShardId shardId, IndexMetaData metaData) throws IOException {\n+ public void deleteShardStore(String reason, ShardId shardId, ClusterState clusterState) throws IOException {\n+ final IndexMetaData metaData = clusterState.getMetaData().indices().get(shardId.getIndex());\n+\n final Settings indexSettings = buildIndexSettings(metaData);\n if (canDeleteShardContent(shardId, indexSettings) == false) {\n throw new IllegalStateException(\"Can't delete shard \" + shardId);\n }\n nodeEnv.deleteShardDirectorySafe(shardId, indexSettings);\n- logger.trace(\"{} deleting shard reason [{}]\", shardId, reason);\n+ logger.debug(\"{} deleted shard reason [{}]\", shardId, reason);\n+\n+ if (clusterState.nodes().localNode().isMasterNode() == false && // master nodes keep the index meta data, even if having no shards..\n+ canDeleteIndexContents(shardId.index(), indexSettings)) {\n+ if (nodeEnv.findAllShardIds(shardId.index()).isEmpty()) {\n+ try {\n+ // note that deleteIndexStore have more safety checks and may throw an exception if index was concurrently created.\n+ deleteIndexStore(\"no longer used\", metaData, clusterState);\n+ } catch (Exception e) {\n+ // wrap the exception to indicate we already deleted the shard\n+ throw new ElasticsearchException(\"failed to delete unused index after deleting its last shard (\" + shardId + \")\", e);\n+ }\n+ } else {\n+ logger.trace(\"[{}] still has shard stores, leaving as is\", shardId.index());\n+ }\n+ }\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@\n \n import org.apache.lucene.store.StoreRateLimiting;\n import org.elasticsearch.cluster.*;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n@@ -288,28 +287,18 @@ private void allNodesResponded() {\n return;\n }\n \n- clusterService.submitStateUpdateTask(\"indices_store\", new ClusterStateNonMasterUpdateTask() {\n+ clusterService.submitStateUpdateTask(\"indices_store ([\" + shardId + \"] active fully on other nodes)\", new ClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) throws Exception {\n if (clusterState.getVersion() != currentState.getVersion()) {\n logger.trace(\"not deleting shard {}, the update task state version[{}] is not equal to cluster state before shard active api call [{}]\", shardId, currentState.getVersion(), clusterState.getVersion());\n return currentState;\n }\n- IndexMetaData indexMeta = clusterState.getMetaData().indices().get(shardId.getIndex());\n try {\n- indicesService.deleteShardStore(\"no longer used\", shardId, indexMeta);\n+ indicesService.deleteShardStore(\"no longer used\", shardId, currentState);\n } catch (Throwable ex) {\n logger.debug(\"{} failed to delete unallocated shard, ignoring\", ex, shardId);\n }\n- // if the index doesn't exists anymore, delete its store as well, but only if its a non master node, since master\n- // nodes keep the index metadata around \n- if (indicesService.hasIndex(shardId.getIndex()) == false && currentState.nodes().localNode().masterNode() == false) {\n- try {\n- indicesService.deleteIndexStore(\"no longer used\", indexMeta, currentState);\n- } catch (Throwable ex) {\n- logger.debug(\"{} failed to delete unallocated index, ignoring\", ex, shardId.getIndex());\n- }\n- }\n return currentState;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/indices/store/IndicesStore.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.base.Predicate;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n+import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n@@ -30,19 +31,22 @@\n import org.elasticsearch.cluster.routing.*;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoverySource;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.test.disruption.BlockClusterStateProcessing;\n import org.elasticsearch.test.disruption.SingleNodeDisruption;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.transport.TransportModule;\n import org.elasticsearch.transport.TransportRequestOptions;\n@@ -55,6 +59,7 @@\n import java.util.Arrays;\n import java.util.List;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.Future;\n import java.util.concurrent.TimeUnit;\n \n import static java.lang.Thread.sleep;\n@@ -217,6 +222,87 @@ public void shardsCleanup() throws Exception {\n assertThat(waitForShardDeletion(node_4, \"test\", 0), equalTo(false));\n }\n \n+\n+ @Test\n+ @TestLogging(\"cluster.service:TRACE\")\n+ public void testShardActiveElsewhereDoesNotDeleteAnother() throws Exception {\n+ Future<String> masterFuture = internalCluster().startNodeAsync(\n+ Settings.builder().put(\"node.master\", true, \"node.data\", false).build());\n+ Future<List<String>> nodesFutures = internalCluster().startNodesAsync(4,\n+ Settings.builder().put(\"node.master\", false, \"node.data\", true).build());\n+\n+ final String masterNode = masterFuture.get();\n+ final String node1 = nodesFutures.get().get(0);\n+ final String node2 = nodesFutures.get().get(1);\n+ final String node3 = nodesFutures.get().get(2);\n+ // we will use this later on, handy to start now to make sure it has a different data folder that node 1,2 &3\n+ final String node4 = nodesFutures.get().get(3);\n+\n+ assertAcked(prepareCreate(\"test\").setSettings(Settings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n+ .put(FilterAllocationDecider.INDEX_ROUTING_EXCLUDE_GROUP + \"_name\", node4)\n+ ));\n+ assertFalse(client().admin().cluster().prepareHealth().setWaitForRelocatingShards(0).setWaitForGreenStatus().setWaitForNodes(\"5\").get().isTimedOut());\n+\n+ // disable allocation to control the situation more easily\n+ assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(Settings.builder()\n+ 
.put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE, \"none\")));\n+\n+ logger.debug(\"--> shutting down two random nodes\");\n+ internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node1, node2, node3));\n+ internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node1, node2, node3));\n+\n+ logger.debug(\"--> verifying index is red\");\n+ ClusterHealthResponse health = client().admin().cluster().prepareHealth().setWaitForNodes(\"3\").get();\n+ if (health.getStatus() != ClusterHealthStatus.RED) {\n+ logClusterState();\n+ fail(\"cluster didn't become red, despite of shutting 2 of 3 nodes\");\n+ }\n+\n+ logger.debug(\"--> allowing index to be assigned to node [{}]\", node4);\n+ assertAcked(client().admin().indices().prepareUpdateSettings(\"test\").setSettings(\n+ Settings.builder()\n+ .put(FilterAllocationDecider.INDEX_ROUTING_EXCLUDE_GROUP + \"_name\", \"NONE\")));\n+\n+ assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(Settings.builder()\n+ .put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE, \"all\")));\n+\n+ logger.debug(\"--> waiting for shards to recover on [{}]\", node4);\n+ // we have to do this in two steps as we now do async shard fetching before assigning, so the change to the\n+ // allocation filtering may not have immediate effect\n+ // TODO: we should add an easier to do this. It's too much of a song and dance..\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ assertTrue(internalCluster().getInstance(IndicesService.class, node4).hasIndex(\"test\"));\n+ }\n+ });\n+\n+ // wait for 4 active shards - we should have lost one shard\n+ assertFalse(client().admin().cluster().prepareHealth().setWaitForActiveShards(4).get().isTimedOut());\n+\n+ // disable allocation again to control concurrency a bit and allow shard active to kick in before allocation\n+ assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(Settings.builder()\n+ .put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE, \"none\")));\n+\n+ logger.debug(\"--> starting the two old nodes back\");\n+\n+ internalCluster().startNodesAsync(2,\n+ Settings.builder().put(\"node.master\", false, \"node.data\", true).build());\n+\n+ assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"5\").get().isTimedOut());\n+\n+\n+ assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(Settings.builder()\n+ .put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE, \"all\")));\n+\n+ logger.debug(\"--> waiting for the lost shard to be recovered\");\n+\n+ ensureGreen(\"test\");\n+\n+ }\n+\n @Test\n @Slow\n public void testShardActiveElseWhere() throws Exception {",
"filename": "core/src/test/java/org/elasticsearch/indices/store/IndicesStoreIntegrationTests.java",
"status": "modified"
}
]
} |
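The safety change boils down to looking at the filesystem rather than at in-memory bookkeeping: the index folder is only a candidate for deletion once a directory scan finds no numeric shard folders left under it. A generic sketch of that scan follows, using plain `java.nio` rather than the real NodeEnvironment code and leaving out the shard locking that the production path also requires.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

public final class ShardFolderScan {

    private ShardFolderScan() {}

    /** Collects the numeric shard folders directly under an index directory. */
    public static Set<Integer> findShardIds(Path indexPath) throws IOException {
        Set<Integer> shardIds = new HashSet<>();
        if (Files.isDirectory(indexPath)) {
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {
                for (Path shardPath : stream) {
                    if (Files.isDirectory(shardPath)) {
                        try {
                            shardIds.add(Integer.parseInt(shardPath.getFileName().toString()));
                        } catch (NumberFormatException ignored) {
                            // non-numeric entries such as _state are not shard folders
                        }
                    }
                }
            }
        }
        return shardIds;
    }

    /** The index folder is only considered for deletion when no shard folders remain. */
    public static boolean indexFolderLooksEmpty(Path indexPath) throws IOException {
        return findShardIds(indexPath).isEmpty();
    }
}
```

As the review discussion and the diff's own javadoc point out, this scan is inherently racy against concurrent directory creation, which is why the real code pairs it with shard locks and the additional checks in IndicesService.deleteIndexStore.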
{
"body": "Upgraded from 1.6.1, a 1.7.0 node joins, we use allocation excludes to force data away from the nodes we intend to stop. When the 1.7.0 node has all the data, the old node is stopped.\n\n1.7.0 becomes the master, then logs \"delaying allocation for [66] unassigned shards, next check in [59.6s]\". 22 _minutes_ later, it then starts logging \"delaying allocation for [0] unassigned shards, next check in [0s]\" forever.\n\nWe had another cluster doing the same, which stopped logging it after two more nodes joined the cluster.\n\nNTP enabled, clocks seem fine, drift not out of the ordinary.\n",
"comments": [
{
"body": "@kimchy any ideas?\n",
"created_at": "2015-07-27T11:30:22Z"
},
{
"body": "@alexbrasetvik does this problem persist when the entire cluster is running 1.7.0, or does it only occur when the cluster is in a mixed-version state?\n",
"created_at": "2015-07-27T15:50:17Z"
},
{
"body": "It happens also for clusters created with 1.7.0.\n",
"created_at": "2015-07-27T16:33:47Z"
},
{
"body": "@alexbrasetvik I was able to reproduce this, still trying to figure out what causes it\n",
"created_at": "2015-07-27T16:35:52Z"
},
{
"body": "I see this multiple times too. Besides the logging itself, actually it never allocates my unassigned shards. I have a 20 machine cluster, all on 1.7 already. I shutdown a node via kopf. I see ~100 unassigned shards. The node then joins the cluster again, it's not initializing the unassigned shards. \n\nI manually reroute one unassigned shard, then i start seeing the cluster initializing the other unassigned shards. \n\nIn pending tasks, I'm seeing\n\n```\n 176228 29.1s URGENT shard-started ([xxxx-2015.27][0], node[nBe-T6GzTPOKoFHz-sIz8A], [R], s[INITIALIZING], unassigned_info[[reason=NODE_LEFT], at[2015-07-29T19:18:21.992Z], details[node_left[Vsj8k-eIQX2PFgAQQWDIJA]]]), reason [master [...][DwayquBqT8u8Xvns_HiIag][CO3SCH020050240][inet[/10.65.207.36:9300]]{...} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n```\n",
"created_at": "2015-07-29T19:43:09Z"
}
],
"number": 12456,
"title": "Delayed allocation \"stuck\" on 0 shards"
} | {
"body": "Resolves #12456\n",
"number": 12489,
"review_comments": [],
"title": "Skip scheduling a reroute when there are no delayed shards"
} | {
"commits": [
{
"message": "Skip scheduling a reroute when there are no delayed shards\n\nResolves #12456"
}
],
"files": [
{
"diff": "@@ -105,11 +105,14 @@ public void clusterChanged(ClusterChangedEvent event) {\n // then the last time we checked and scheduled, we are guaranteed to have a reroute until then, so no need\n // to schedule again\n long nextDelaySetting = UnassignedInfo.findSmallestDelayedAllocationSetting(settings, event.state());\n- if (nextDelaySetting > 0 && nextDelaySetting < registeredNextDelaySetting) {\n+ int delayedShardCount = UnassignedInfo.getNumberOfDelayedUnassigned(settings, event.state());\n+ if (nextDelaySetting > 0 &&\n+ (nextDelaySetting < registeredNextDelaySetting) &&\n+ delayedShardCount > 0) {\n FutureUtils.cancel(registeredNextDelayFuture);\n registeredNextDelaySetting = nextDelaySetting;\n TimeValue nextDelay = TimeValue.timeValueMillis(UnassignedInfo.findNextDelayedAllocationIn(settings, event.state()));\n- logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\", UnassignedInfo.getNumberOfDelayedUnassigned(settings, event.state()), nextDelay);\n+ logger.info(\"delaying allocation for [{}] unassigned shards, next check in [{}]\", delayedShardCount, nextDelay);\n registeredNextDelayFuture = threadPool.schedule(nextDelay, ThreadPool.Names.SAME, new AbstractRunnable() {\n @Override\n protected void doRun() throws Exception {",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java",
"status": "modified"
}
]
} |
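A note on the workaround mentioned in the comments above: until this fix landed, allocation could be kicked manually by rerouting a single unassigned shard. The request below is a minimal sketch of that workaround; the index name, shard number, and node name are placeholders, and `allow_primary` is left `false` so no data is discarded. An empty `POST /_cluster/reroute` (no commands) likewise triggers a fresh allocation pass.

```
POST /_cluster/reroute
{
  "commands": [
    {
      "allocate": {
        "index": "my-index",
        "shard": 0,
        "node": "node-1",
        "allow_primary": false
      }
    }
  ]
}
```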
{
"body": "Currently we silently ignore the pipeline agg in question. Until #12336 is implemented we should report an error.\n\nFor a reproduction see:\n\n```\nGET logstash-2015.01/_search?search_type=count\n{\n \"aggs\": {\n \"time\": {\n \"date_histogram\": {\n \"field\": \"@timestamp\",\n \"interval\": \"day\"\n },\n \"aggs\": {\n \"test\": {\n \"filters\": {\n \"filters\": {\n \"get\": {\n \"term\": {\n \"verb\": \"get\"\n }\n },\n \"post\": {\n \"term\": {\n \"verb\": \"post\"\n }\n }\n }\n }\n },\n \"get_derive\": {\n \"derivative\": {\n \"buckets_path\": \"test>get>_count\"\n }\n }\n }\n }\n }\n}\n```\n\nWhich outputs (not the missing `get_derive` agg):\n\n```\n {\n \"key_as_string\": \"1420243200000\",\n \"key\": 1420243200000,\n \"doc_count\": 10664,\n \"test\": {\n \"buckets\": {\n \"get\": {\n \"doc_count\": 10592\n },\n \"post\": {\n \"doc_count\": 15\n }\n }\n }\n },\n```\n",
"comments": [],
"number": 12360,
"title": "Issue an error when a pipeline aggs references a muliti-bucket aggregation "
} | {
"body": "An exception will now be thrown if a pipeline aggregation specifies an Aggregation path which includes an incompatible aggregation (an aggregation which is not either a numeric metric aggregation or a single bucket aggregation). To support this change a marker interface (AggregationPathCompatibleFactory) has been added.\n\nCloses #12360\n",
"number": 12472,
"review_comments": [],
"title": "Throw error when a pipeline agg references an incompatible agg"
} | {
"commits": [
{
"message": "Aggregations: Throw error when a pipeline agg references an incompatible agg\n\nAn exception will now be thrown if a pipeline aggregation specifies an Aggregation path which includes an incompatible aggregation (an aggregation which is not either a numeric metric aggregation or a single bucket aggregation). To support this change a marker interface (AggregationPathCompatibleFactory) has been added.\n\nCloses #12360"
}
],
"files": [
{
"diff": "@@ -0,0 +1,24 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations;\n+\n+public interface AggregationPathCompatibleFactory {\n+\n+}",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregationPathCompatibleFactory.java",
"status": "added"
},
{
"diff": "@@ -164,7 +164,9 @@ private List<PipelineAggregatorFactory> resolvePipelineAggregatorOrder(List<Pipe\n }\n Set<String> aggFactoryNames = new HashSet<>();\n for (AggregatorFactory aggFactory : aggFactories) {\n- aggFactoryNames.add(aggFactory.name);\n+ if (aggFactory instanceof AggregationPathCompatibleFactory) {\n+ aggFactoryNames.add(aggFactory.name);\n+ }\n }\n List<PipelineAggregatorFactory> orderedPipelineAggregatorrs = new LinkedList<>();\n List<PipelineAggregatorFactory> unmarkedFactories = new ArrayList<PipelineAggregatorFactory>(pipelineAggregatorFactories);\n@@ -195,7 +197,7 @@ private void resolvePipelineAggregatorOrder(Set<String> aggFactoryNames, Map<Str\n resolvePipelineAggregatorOrder(aggFactoryNames, pipelineAggregatorFactoriesMap, orderedPipelineAggregators, unmarkedFactories,\n temporarilyMarked, matchingFactory);\n } else {\n- throw new IllegalStateException(\"No aggregation found for path [\" + bucketsPath + \"]\");\n+ throw new IllegalStateException(\"No compatible aggregation found for path [\" + bucketsPath + \"]\");\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.common.util.LongArray;\n import org.elasticsearch.common.util.LongObjectPagedHashMap;\n import org.elasticsearch.index.search.child.ConstantScorer;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n@@ -182,7 +183,8 @@ protected void doClose() {\n Releasables.close(parentOrdToBuckets, parentOrdToOtherBuckets);\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory<ValuesSource.Bytes.WithOrdinals.ParentChild> {\n+ public static class Factory extends ValuesSourceAggregatorFactory<ValuesSource.Bytes.WithOrdinals.ParentChild> implements\n+ AggregationPathCompatibleFactory {\n \n private final String parentType;\n private final Filter parentFilter;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.Weight;\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.common.lucene.Lucene;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n@@ -80,7 +81,7 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalFilter(name, 0, buildEmptySubAggregations(), pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends AggregatorFactory {\n+ public static class Factory extends AggregatorFactory implements AggregationPathCompatibleFactory {\n \n private final Query filter;\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n@@ -68,7 +69,7 @@ public InternalAggregation buildEmptyAggregation() {\n throw new UnsupportedOperationException(\"global aggregations cannot serve as sub-aggregations, hence should never be called on #buildEmptyAggregations\");\n }\n \n- public static class Factory extends AggregatorFactory {\n+ public static class Factory extends AggregatorFactory implements AggregationPathCompatibleFactory {\n \n public Factory(String name) {\n super(name, InternalGlobal.TYPE.name());",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.util.Bits;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n@@ -81,7 +82,7 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalMissing(name, 0, buildEmptySubAggregations(), pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory<ValuesSource> {\n+ public static class Factory extends ValuesSourceAggregatorFactory<ValuesSource> implements AggregationPathCompatibleFactory {\n \n public Factory(String name, ValuesSourceConfig valueSourceConfig) {\n super(name, InternalMissing.TYPE.name(), valueSourceConfig);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java",
"status": "modified"
},
{
"diff": "@@ -27,9 +27,9 @@\n import org.apache.lucene.util.BitSet;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n-import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n@@ -118,7 +118,7 @@ public void collect(int parentDoc, long bucket) throws IOException {\n }\n };\n }\n- \n+\n @Override\n public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException {\n return new InternalNested(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal), pipelineAggregators(),\n@@ -141,7 +141,7 @@ private static Filter findClosestNestedPath(Aggregator parent) {\n return null;\n }\n \n- public static class Factory extends AggregatorFactory {\n+ public static class Factory extends AggregatorFactory implements AggregationPathCompatibleFactory {\n \n private final String path;\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -28,10 +28,10 @@\n import org.apache.lucene.util.BitSet;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n-import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n@@ -50,7 +50,7 @@\n /**\n *\n */\n-public class ReverseNestedAggregator extends SingleBucketAggregator {\n+public class ReverseNestedAggregator extends SingleBucketAggregator implements AggregationPathCompatibleFactory {\n \n private final BitDocIdSetFilter parentFilter;\n \n@@ -84,8 +84,8 @@ public void collect(int childDoc, long bucket) throws IOException {\n // fast forward to retrieve the parentDoc this childDoc belongs to\n final int parentDoc = parentDocs.nextSetBit(childDoc);\n assert childDoc <= parentDoc && parentDoc != DocIdSetIterator.NO_MORE_DOCS;\n- \n- int keySlot = bucketOrdToLastCollectedParentDoc.indexOf(bucket); \n+\n+ int keySlot = bucketOrdToLastCollectedParentDoc.indexOf(bucket);\n if (bucketOrdToLastCollectedParentDoc.indexExists(keySlot)) {\n int lastCollectedParentDoc = bucketOrdToLastCollectedParentDoc.indexGet(keySlot);\n if (parentDoc > lastCollectedParentDoc) {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,14 @@\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.lease.Releasables;\n-import org.elasticsearch.search.aggregations.*;\n+import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n+import org.elasticsearch.search.aggregations.Aggregator;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n+import org.elasticsearch.search.aggregations.AggregatorFactory;\n+import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+import org.elasticsearch.search.aggregations.NonCollectingAggregator;\n import org.elasticsearch.search.aggregations.bucket.BestDocsDeferringCollector;\n import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector;\n import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;\n@@ -39,7 +46,7 @@\n \n /**\n * Aggregate on only the top-scoring docs on a shard.\n- * \n+ *\n * TODO currently the diversity feature of this agg offers only 'script' and\n * 'field' as a means of generating a de-dup value. In future it would be nice\n * if users could use any of the \"bucket\" aggs syntax (geo, date histogram...)\n@@ -131,8 +138,8 @@ abstract Aggregator create(String name, AggregatorFactories factories, int shard\n public String toString() {\n return parseField.getPreferredName();\n }\n- } \n- \n+ }\n+\n \n protected final int shardSize;\n protected BestDocsDeferringCollector bdd;\n@@ -174,7 +181,7 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalSampler(name, 0, buildEmptySubAggregations(), pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends AggregatorFactory {\n+ public static class Factory extends AggregatorFactory implements AggregationPathCompatibleFactory {\n \n private int shardSize;\n \n@@ -191,7 +198,7 @@ public Aggregator createInternal(AggregationContext context, Aggregator parent,\n \n }\n \n- public static class DiversifiedFactory extends ValuesSourceAggregatorFactory<ValuesSource> {\n+ public static class DiversifiedFactory extends ValuesSourceAggregatorFactory<ValuesSource> implements AggregationPathCompatibleFactory {\n \n private int shardSize;\n private int maxDocsPerValue;\n@@ -213,7 +220,7 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationCont\n return new DiversifiedNumericSamplerAggregator(name, shardSize, factories, context, parent, pipelineAggregators, metaData,\n (Numeric) valuesSource, maxDocsPerValue);\n }\n- \n+\n if (valuesSource instanceof ValuesSource.Bytes) {\n ExecutionMode execution = null;\n if (executionHint != null) {\n@@ -231,7 +238,7 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationCont\n return execution.create(name, factories, shardSize, maxDocsPerValue, valuesSource, context, parent, pipelineAggregators,\n metaData);\n }\n- \n+\n throw new AggregationExecutionException(\"Sampler aggregation cannot be applied to field [\" + config.fieldContext().field() +\n \"]. It can only be applied to numeric or string fields.\");\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.util.DoubleArray;\n import org.elasticsearch.common.util.LongArray;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -113,7 +114,8 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalAvg(name, 0.0, 0l, formatter, pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n public Factory(String name, String type, ValuesSourceConfig<ValuesSource.Numeric> valuesSourceConfig) {\n super(name, type, valuesSourceConfig);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.aggregations.metrics.cardinality;\n \n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -32,7 +33,7 @@\n import java.util.List;\n import java.util.Map;\n \n-final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory<ValuesSource> {\n+final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory<ValuesSource> implements AggregationPathCompatibleFactory {\n \n private final long precisionThreshold;\n private final boolean rehash;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.index.fielddata.NumericDoubleValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n import org.elasticsearch.search.MultiValueMode;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -114,7 +115,8 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalMax(name, Double.NEGATIVE_INFINITY, formatter, pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n public Factory(String name, ValuesSourceConfig<ValuesSource.Numeric> valuesSourceConfig) {\n super(name, InternalMax.TYPE.name(), valuesSourceConfig);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.index.fielddata.NumericDoubleValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n import org.elasticsearch.search.MultiValueMode;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -113,7 +114,8 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalMin(name, Double.POSITIVE_INFINITY, formatter, pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n public Factory(String name, ValuesSourceConfig<ValuesSource.Numeric> valuesSourceConfig) {\n super(name, InternalMin.TYPE.name(), valuesSourceConfig);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations.metrics.percentiles.hdr;\n \n import org.HdrHistogram.DoubleHistogram;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -74,7 +75,8 @@ public double metric(String name, long bucketOrd) {\n }\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n private final double[] values;\n private final int numberOfSignificantValueDigits;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentileRanksAggregator.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations.metrics.percentiles.hdr;\n \n import org.HdrHistogram.DoubleHistogram;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentiles;\n@@ -76,7 +77,8 @@ public InternalAggregation buildEmptyAggregation() {\n formatter, pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n private final double[] percents;\n private final int numberOfSignificantValueDigits;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/HDRPercentilesAggregator.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.metrics.percentiles.tdigest;\n \n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -69,7 +70,8 @@ public double metric(String name, long bucketOrd) {\n }\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n private final double[] values;\n private final double compression;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentileRanksAggregator.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.metrics.percentiles.tdigest;\n \n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -69,7 +70,8 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalTDigestPercentiles(name, keys, new TDigestState(compression), keyed, formatter, pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n private final double[] percents;\n private final double compression;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestPercentilesAggregator.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.util.DoubleArray;\n import org.elasticsearch.common.util.LongArray;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -155,7 +156,8 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalStats(name, 0, 0, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, formatter, pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n public Factory(String name, ValuesSourceConfig<ValuesSource.Numeric> valuesSourceConfig) {\n super(name, InternalStats.TYPE.name(), valuesSourceConfig);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggegator.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.util.DoubleArray;\n import org.elasticsearch.common.util.LongArray;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -188,7 +189,8 @@ public void doClose() {\n Releasables.close(counts, maxes, mins, sumOfSqrs, sums);\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n private final double sigma;\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.DoubleArray;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -105,7 +106,8 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalSum(name, 0.0, formatter, pipelineAggregators(), metaData());\n }\n \n- public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> {\n+ public static class Factory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource.Numeric> implements\n+ AggregationPathCompatibleFactory {\n \n public Factory(String name, ValuesSourceConfig<ValuesSource.Numeric> valuesSourceConfig) {\n super(name, InternalSum.TYPE.name(), valuesSourceConfig);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.LongArray;\n import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -108,7 +109,8 @@ public void doClose() {\n Releasables.close(counts);\n }\n \n- public static class Factory<VS extends ValuesSource> extends ValuesSourceAggregatorFactory.LeafOnly<VS> {\n+ public static class Factory<VS extends ValuesSource> extends ValuesSourceAggregatorFactory.LeafOnly<VS> implements\n+ AggregationPathCompatibleFactory {\n \n public Factory(String name, ValuesSourceConfig<VS> config) {\n super(name, InternalValueCount.TYPE.name(), config);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.pipeline;\n \n+import org.elasticsearch.search.aggregations.AggregationPathCompatibleFactory;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n \n import java.io.IOException;\n@@ -28,7 +29,7 @@\n * A factory that knows how to create an {@link PipelineAggregator} of a\n * specific type.\n */\n-public abstract class PipelineAggregatorFactory {\n+public abstract class PipelineAggregatorFactory implements AggregationPathCompatibleFactory {\n \n protected String name;\n protected String type;\n@@ -37,7 +38,7 @@ public abstract class PipelineAggregatorFactory {\n \n /**\n * Constructs a new pipeline aggregator factory.\n- * \n+ *\n * @param name\n * The aggregation name\n * @param type\n@@ -52,7 +53,7 @@ public PipelineAggregatorFactory(String name, String type, String[] bucketsPaths\n /**\n * Validates the state of this factory (makes sure the factory is properly\n * configured)\n- * \n+ *\n * @param pipelineAggregatorFactories\n * @param factories\n * @param parent\n@@ -66,7 +67,7 @@ public final void validate(AggregatorFactory parent, AggregatorFactory[] factori\n \n /**\n * Creates the pipeline aggregator\n- * \n+ *\n * @param context\n * The aggregation context\n * @param parent\n@@ -77,7 +78,7 @@ public final void validate(AggregatorFactory parent, AggregatorFactory[] factori\n * with <tt>0</tt> as a bucket ordinal. Some factories can take\n * advantage of this in order to return more optimized\n * implementations.\n- * \n+ *\n * @return The created aggregator\n */\n public final PipelineAggregator create() throws IOException {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/PipelineAggregatorFactory.java",
"status": "modified"
},
{
"diff": "@@ -19,15 +19,17 @@\n \n package org.elasticsearch.search.aggregations.pipeline;\n \n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram;\n import org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.Bucket;\n import org.elasticsearch.search.aggregations.metrics.stats.Stats;\n import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n-import org.elasticsearch.search.aggregations.pipeline.SimpleValue;\n import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy;\n import org.elasticsearch.search.aggregations.pipeline.derivative.Derivative;\n import org.elasticsearch.search.aggregations.support.AggregationPath;\n@@ -39,12 +41,13 @@\n import java.util.ArrayList;\n import java.util.List;\n \n-import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.derivative;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filters;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.stats;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.sum;\n+import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.derivative;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.closeTo;\n@@ -157,7 +160,7 @@ public void setupSuiteScopeCluster() throws Exception {\n }\n \n private XContentBuilder newDocBuilder(int singleValueFieldValue) throws IOException {\n- return jsonBuilder().startObject().field(SINGLE_VALUED_FIELD_NAME, singleValueFieldValue).endObject();\n+ return jsonBuilder().startObject().field(SINGLE_VALUED_FIELD_NAME, singleValueFieldValue).field(\"tag\", \"foo\").endObject();\n }\n \n /**\n@@ -228,15 +231,15 @@ public void singleValuedField_normalised() {\n Derivative docCountDeriv = bucket.getAggregations().get(\"deriv\");\n if (i > 0) {\n assertThat(docCountDeriv, notNullValue());\n- assertThat(docCountDeriv.value(), closeTo((double) (firstDerivValueCounts[i - 1]), 0.00001));\n+ assertThat(docCountDeriv.value(), closeTo((firstDerivValueCounts[i - 1]), 0.00001));\n assertThat(docCountDeriv.normalizedValue(), closeTo((double) (firstDerivValueCounts[i - 1]) / 5, 0.00001));\n } else {\n assertThat(docCountDeriv, nullValue());\n }\n Derivative docCount2ndDeriv = bucket.getAggregations().get(\"2nd_deriv\");\n if (i > 1) {\n assertThat(docCount2ndDeriv, notNullValue());\n- assertThat(docCount2ndDeriv.value(), closeTo((double) (secondDerivValueCounts[i - 2]), 0.00001));\n+ assertThat(docCount2ndDeriv.value(), closeTo((secondDerivValueCounts[i - 2]), 0.00001));\n assertThat(docCount2ndDeriv.normalizedValue(), closeTo((double) (secondDerivValueCounts[i - 2]) * 2, 0.00001));\n } else {\n 
assertThat(docCount2ndDeriv, nullValue());\n@@ -596,6 +599,30 @@ public void singleValueAggDerivativeWithGaps_random() throws Exception {\n }\n }\n \n+ @Test\n+ public void singleValueAggDerivative_invalidPath() throws Exception {\n+ try {\n+ client().prepareSearch(\"idx\")\n+ .addAggregation(\n+ histogram(\"histo\")\n+ .field(SINGLE_VALUED_FIELD_NAME)\n+ .interval(interval)\n+ .subAggregation(\n+ filters(\"filters\").filter(QueryBuilders.termQuery(\"tag\", \"foo\")).subAggregation(\n+ sum(\"sum\").field(SINGLE_VALUED_FIELD_NAME)))\n+ .subAggregation(derivative(\"deriv\").setBucketsPaths(\"filters>foo>sum\"))).execute().actionGet();\n+ fail(\"Expected an Exception but didn't get one\");\n+ } catch (Exception e) {\n+ if (e instanceof SearchPhaseExecutionException) {\n+ SearchPhaseExecutionException spee = (SearchPhaseExecutionException) e;\n+ Throwable rootCause = ExceptionsHelper.unwrapCause(spee.guessRootCauses()[0]);\n+ assertThat(rootCause.getMessage(), equalTo(\"No compatible aggregation found for path [filters>foo>sum]\"));\n+ } else {\n+ throw e;\n+ }\n+ }\n+ }\n+\n private void checkBucketKeyAndDocCount(final String msg, final Histogram.Bucket bucket, final long expectedKey,\n final long expectedDocCount) {\n assertThat(msg, bucket, notNullValue());",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/DerivativeTests.java",
"status": "modified"
}
]
} |
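For contrast with the reproduction in the issue above, a `buckets_path` that only crosses a single-bucket aggregation remains the supported case after this change. The sketch below is an assumed variant of that reproduction (the index and field names are taken from the report and serve only as placeholders): it swaps the multi-bucket `filters` aggregation for a single `filter`, so the derivative's path `get_only>_count` should resolve instead of tripping the new "no compatible aggregation" error.

```
GET logstash-2015.01/_search?search_type=count
{
  "aggs": {
    "time": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "day"
      },
      "aggs": {
        "get_only": {
          "filter": {
            "term": { "verb": "get" }
          }
        },
        "get_derive": {
          "derivative": {
            "buckets_path": "get_only>_count"
          }
        }
      }
    }
  }
}
```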
{
"body": "reproduced on 1.7.0\n\nsteps in short\n\n1) start 2 nodes cluster on same host\n2) create huge file to cause disk to be >95% full (high disk allocation watermark hit)\n3) create new index with one doc\n4) index is created with UNASSIGNED shards\n5) remove huge file to go below disk allocation thresholds\n6) shards sit in UNASSIGNED (waited 10m)\n7) a manual reroute is required to have them reassigned\n\nsteps\nhttps://gist.github.com/nellicus/8a7f5a70160346b76237\n\ndebug logs\nhttps://gist.github.com/nellicus/c779113544c5f44f6d18\n\noutput of cat shards / cluster health during repro\nhttps://gist.github.com/nellicus/b3744dfd85b1343fd944\n",
"comments": [
{
"body": "When a node goes from high-watermark -> low-watermark (any state change really), we should issue a reroute.\n",
"created_at": "2015-07-24T09:14:00Z"
}
],
"number": 12422,
"title": "After disk allocation hi-watermark hit, freeing disk space does not cause automatic reallocation of new UNASSIGNED shards"
} | {
"body": "Previously we issued a reroute when a node went over the high watermark\nin order to move shards away from the node. This change tracks nodes\nthat have previously been over the high or low watermarks and issues a\nreroute when the node goes back underneath the watermark.\n\nThis allows shards that may be unassigned to be assigned back to a node\nthat was previously over the low watermark but no longer is.\n\nResolves #12422\n",
"number": 12452,
"review_comments": [
{
"body": "The _node_\n",
"created_at": "2015-07-28T21:49:55Z"
},
{
"body": "Does it matter whether it went from above the high watermark to normal or above the low to normal? Could you get away with just storing a single Map? I figure that'd be simpler.\n",
"created_at": "2015-07-28T21:53:55Z"
},
{
"body": "Can you inject some clock interface rather than use System.nanotime and sleep? Or crank the reroute interval down to 0? Something like guava's Ticker.\n",
"created_at": "2015-07-28T21:56:32Z"
},
{
"body": "Yeah, it should be a single map. I originally wanted to use multiple maps so I could track which one it went over/under, but I decided on the simpler implementation.\n",
"created_at": "2015-07-28T22:18:34Z"
},
{
"body": "I can actually do this without a clock interface at all, will change.\n",
"created_at": "2015-07-29T20:39:55Z"
},
{
"body": "Does it even have to be a map? It feels like a set would do just fine here.\n",
"created_at": "2015-07-30T16:50:51Z"
},
{
"body": "That'd be even better. I'll change it to a Set.\n",
"created_at": "2015-07-30T17:07:57Z"
}
],
"title": "Reroute shards when a node goes under disk watermarks"
} | {
"commits": [
{
"message": "Reroute shards when a node goes under disk watermarks\n\nPreviously we issued a reroute when a node went over the high watermark\nin order to move shards away from the node. This change tracks nodes\nthat have previously been over the high or low watermarks and issues a\nreroute when the node goes back underneath the watermark.\n\nThis allows shards that may be unassigned to be assigned back to a node\nthat was previously over the low watermark but no longer is.\n\nResolves #12422"
}
],
"files": [
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster.routing.allocation.decider;\n \n+import com.google.common.collect.Sets;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterInfo;\n@@ -37,6 +38,7 @@\n import org.elasticsearch.node.settings.NodeSettingsService;\n \n import java.util.Map;\n+import java.util.Set;\n \n /**\n * The {@link DiskThresholdDecider} checks that the node a shard is potentially\n@@ -128,6 +130,8 @@ public void onRefreshSettings(Settings settings) {\n */\n class DiskListener implements ClusterInfoService.Listener {\n private final Client client;\n+ private final Set<String> nodeHasPassedWatermark = Sets.newConcurrentHashSet();\n+\n private long lastRunNS;\n \n DiskListener(Client client) {\n@@ -162,21 +166,55 @@ public void onNewInfo(ClusterInfo info) {\n Map<String, DiskUsage> usages = info.getNodeDiskUsages();\n if (usages != null) {\n boolean reroute = false;\n- for (DiskUsage entry : usages.values()) {\n- warnAboutDiskIfNeeded(entry);\n- if (entry.getFreeBytes() < DiskThresholdDecider.this.freeBytesThresholdHigh.bytes() ||\n- entry.getFreeDiskAsPercentage() < DiskThresholdDecider.this.freeDiskThresholdHigh) {\n+ String explanation = \"\";\n+\n+ // Garbage collect nodes that have been removed from the cluster\n+ // from the map that tracks watermark crossing\n+ Set<String> nodes = usages.keySet();\n+ for (String node : nodeHasPassedWatermark) {\n+ if (nodes.contains(node) == false) {\n+ nodeHasPassedWatermark.remove(node);\n+ }\n+ }\n+\n+ for (Map.Entry<String, DiskUsage> entry : usages.entrySet()) {\n+ String node = entry.getKey();\n+ DiskUsage usage = entry.getValue();\n+ warnAboutDiskIfNeeded(usage);\n+ if (usage.getFreeBytes() < DiskThresholdDecider.this.freeBytesThresholdHigh.bytes() ||\n+ usage.getFreeDiskAsPercentage() < DiskThresholdDecider.this.freeDiskThresholdHigh) {\n if ((System.nanoTime() - lastRunNS) > DiskThresholdDecider.this.rerouteInterval.nanos()) {\n lastRunNS = System.nanoTime();\n reroute = true;\n+ explanation = \"high disk watermark exceeded on one or more nodes\";\n } else {\n logger.debug(\"high disk watermark exceeded on {} but an automatic reroute has occurred in the last [{}], skipping reroute\",\n- entry, DiskThresholdDecider.this.rerouteInterval);\n+ node, DiskThresholdDecider.this.rerouteInterval);\n+ }\n+ nodeHasPassedWatermark.add(node);\n+ } else if (usage.getFreeBytes() < DiskThresholdDecider.this.freeBytesThresholdLow.bytes() ||\n+ usage.getFreeDiskAsPercentage() < DiskThresholdDecider.this.freeDiskThresholdLow) {\n+ nodeHasPassedWatermark.add(node);\n+ } else {\n+ if (nodeHasPassedWatermark.contains(node)) {\n+ // The node has previously been over the high or\n+ // low watermark, but is no longer, so we should\n+ // reroute so any unassigned shards can be allocated\n+ // if they are able to be\n+ if ((System.nanoTime() - lastRunNS) > DiskThresholdDecider.this.rerouteInterval.nanos()) {\n+ lastRunNS = System.nanoTime();\n+ reroute = true;\n+ explanation = \"one or more nodes has gone under the high or low watermark\";\n+ nodeHasPassedWatermark.remove(node);\n+ } else {\n+ logger.debug(\"{} has gone below a disk threshold, but an automatic reroute has occurred in the last [{}], skipping reroute\",\n+ node, DiskThresholdDecider.this.rerouteInterval);\n+ }\n }\n }\n }\n if (reroute) {\n- logger.info(\"high disk watermark exceeded on one or more nodes, rerouting shards\");\n+ logger.info(\"rerouting shards: [{}]\", explanation);\n 
// Execute an empty reroute, but don't block on the response\n client.admin().cluster().prepareReroute().execute();\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java",
"status": "modified"
},
{
"diff": "@@ -48,6 +48,7 @@\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n \n @ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.TEST, numDataNodes = 0)\n public class MockDiskUsagesTests extends ElasticsearchIntegrationTest {\n@@ -59,13 +60,13 @@ protected Settings nodeSettings(int nodeOrdinal) {\n // Use the mock internal cluster info service, which has fake-able disk usages\n .put(ClusterModule.CLUSTER_SERVICE_IMPL, MockInternalClusterInfoService.class.getName())\n // Update more frequently\n- .put(InternalClusterInfoService.INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, \"2s\")\n+ .put(InternalClusterInfoService.INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, \"1s\")\n .build();\n }\n \n @Test\n //@TestLogging(\"org.elasticsearch.cluster:TRACE,org.elasticsearch.cluster.routing.allocation.decider:TRACE\")\n- public void testRerouteOccursOnDiskpassingHighWatermark() throws Exception {\n+ public void testRerouteOccursOnDiskPassingHighWatermark() throws Exception {\n List<String> nodes = internalCluster().startNodesAsync(3).get();\n \n // Wait for all 3 nodes to be up\n@@ -87,7 +88,7 @@ public void run() {\n client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder()\n .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK, randomFrom(\"20b\", \"80%\"))\n .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK, randomFrom(\"10b\", \"90%\"))\n- .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_REROUTE_INTERVAL, \"1s\")).get();\n+ .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_REROUTE_INTERVAL, \"1ms\")).get();\n \n // Create an index with 10 shards so we can check allocation for it\n prepareCreate(\"test\").setSettings(settingsBuilder()\n@@ -106,7 +107,7 @@ public void run() {\n }\n });\n \n- List<String> realNodeNames = newArrayList();\n+ final List<String> realNodeNames = newArrayList();\n ClusterStateResponse resp = client().admin().cluster().prepareState().get();\n Iterator<RoutingNode> iter = resp.getState().getRoutingNodes().iterator();\n while (iter.hasNext()) {\n@@ -121,22 +122,50 @@ public void run() {\n cis.setN2Usage(realNodeNames.get(1), new DiskUsage(nodes.get(1), \"n2\", 100, 50));\n cis.setN3Usage(realNodeNames.get(2), new DiskUsage(nodes.get(2), \"n3\", 100, 0)); // nothing free on node3\n \n- // Cluster info gathering interval is 2 seconds, give reroute 2 seconds to kick in\n- Thread.sleep(4000);\n+ // Retrieve the count of shards on each node\n+ final Map<String, Integer> nodesToShardCount = newHashMap();\n+\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ ClusterStateResponse resp = client().admin().cluster().prepareState().get();\n+ Iterator<RoutingNode> iter = resp.getState().getRoutingNodes().iterator();\n+ while (iter.hasNext()) {\n+ RoutingNode node = iter.next();\n+ logger.info(\"--> node {} has {} shards\",\n+ node.nodeId(), resp.getState().getRoutingNodes().node(node.nodeId()).numberOfOwningShards());\n+ nodesToShardCount.put(node.nodeId(), resp.getState().getRoutingNodes().node(node.nodeId()).numberOfOwningShards());\n+ }\n+ assertThat(\"node1 has 5 shards\", nodesToShardCount.get(realNodeNames.get(0)), equalTo(5));\n+ assertThat(\"node2 has 5 shards\", nodesToShardCount.get(realNodeNames.get(1)), equalTo(5));\n+ assertThat(\"node3 has 0 shards\", 
nodesToShardCount.get(realNodeNames.get(2)), equalTo(0));\n+ }\n+ });\n+\n+ // Update the disk usages so one node is now back under the high watermark\n+ cis.setN1Usage(realNodeNames.get(0), new DiskUsage(nodes.get(0), \"n1\", 100, 50));\n+ cis.setN2Usage(realNodeNames.get(1), new DiskUsage(nodes.get(1), \"n2\", 100, 50));\n+ cis.setN3Usage(realNodeNames.get(2), new DiskUsage(nodes.get(2), \"n3\", 100, 50)); // node3 has free space now\n \n // Retrieve the count of shards on each node\n- resp = client().admin().cluster().prepareState().get();\n- iter = resp.getState().getRoutingNodes().iterator();\n- Map<String, Integer> nodesToShardCount = newHashMap();\n- while (iter.hasNext()) {\n- RoutingNode node = iter.next();\n- logger.info(\"--> node {} has {} shards\",\n- node.nodeId(), resp.getState().getRoutingNodes().node(node.nodeId()).numberOfOwningShards());\n- nodesToShardCount.put(node.nodeId(), resp.getState().getRoutingNodes().node(node.nodeId()).numberOfOwningShards());\n- }\n- assertThat(\"node1 has 5 shards\", nodesToShardCount.get(realNodeNames.get(0)), equalTo(5));\n- assertThat(\"node2 has 5 shards\", nodesToShardCount.get(realNodeNames.get(1)), equalTo(5));\n- assertThat(\"node3 has 0 shards\", nodesToShardCount.get(realNodeNames.get(2)), equalTo(0));\n+ nodesToShardCount.clear();\n+\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ ClusterStateResponse resp = client().admin().cluster().prepareState().get();\n+ Iterator<RoutingNode> iter = resp.getState().getRoutingNodes().iterator();\n+ while (iter.hasNext()) {\n+ RoutingNode node = iter.next();\n+ logger.info(\"--> node {} has {} shards\",\n+ node.nodeId(), resp.getState().getRoutingNodes().node(node.nodeId()).numberOfOwningShards());\n+ nodesToShardCount.put(node.nodeId(), resp.getState().getRoutingNodes().node(node.nodeId()).numberOfOwningShards());\n+ }\n+ assertThat(\"node1 has at least 3 shards\", nodesToShardCount.get(realNodeNames.get(0)), greaterThanOrEqualTo(3));\n+ assertThat(\"node2 has at least 3 shards\", nodesToShardCount.get(realNodeNames.get(1)), greaterThanOrEqualTo(3));\n+ assertThat(\"node3 has at least 3 shards\", nodesToShardCount.get(realNodeNames.get(2)), greaterThanOrEqualTo(3));\n+ }\n+ });\n }\n \n /** Create a fake NodeStats for the given node and usage */",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesTests.java",
"status": "modified"
}
]
} |
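For reference, the thresholds discussed in this issue are the dynamic disk-watermark settings that the test above exercises via the `DiskThresholdDecider` constants. A rough sketch of adjusting them at runtime, followed by the manual empty reroute that step 7 of the reproduction relied on before this fix, is shown below; the percentage values are placeholders, not recommendations.

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}

POST /_cluster/reroute
```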
{
"body": "If you accidentally manage to place a leading space in front of a YML setting in `elasticsearch.yml`, then all follow-on settings will be silently ignored.\n\n```\n cluster.name: my-cluster\n other.setting: xyz\n\n# The rest is ignored\nname.node: my-node\n```\n\nIf you turn around and start the node, then it fails to recognize the `node.name` without any error:\n\n``` bash\n$ bin/elasticsearch\n[2015-07-21 11:43:08,883][INFO ][node ] [Screaming Mimi] version[1.6.0], pid[35094], build[cdd3ac4/2015-06-09T13:36:34Z]\n[2015-07-21 11:43:08,883][INFO ][node ] [Screaming Mimi] initializing ...\n```\n",
"comments": [
{
"body": "This should throw an inconsistent indentation error. Not sure if this is our parsing or Jackson that has th issue\n",
"created_at": "2015-07-22T09:33:23Z"
},
{
"body": "@clintongormley It's our issue. We parse until we reach an end object token. The YAML spec requires that all sibling nodes be at the same indentation level. Once the indentation is reduced in the example that @pickypg gave, a YAML parser that parses according to the spec will return an end object token and we will stop parsing. Therefore, to solve this we should just ensure that when we encounter an end object token while parsing settings, we are in fact at the end of the settings stream. I've submitted PR #12451 to do this.\n",
"created_at": "2015-07-24T17:33:46Z"
}
],
"number": 12382,
"title": "Leading space breaks YML parsing"
} | {
"body": "Settings are currently parsed by looping over the tokens until an END_OBJECT token is reached. However, this does not mean that the end of the settings stream was reached. This can occur, for example, when parsing a YAML settings file with inconsistent indentation. Currently in this case, some settings will be silently ignored. This commit forces a check that we have in fact reached the end of the settings stream.\n\nCloses #12382\n",
"number": 12451,
"review_comments": [],
"title": "Add explicit check that we have reached the end of the settings stream when parsing settings"
} | {
"commits": [
{
"message": "Add explicit check that we have reached the end of the settings stream when parsing settings\n\nSettings are currently parsed by looping over the tokens until an END_OBJECT token is reached. However, this does not mean that the end of\nthe settings stream was reached. This can occur, for example, when parsing a YAML settings file with inconsistent indentation. Currently\nin this case, some settings will be silently ignored. This commit forces a check that we have in fact reached the end of the settings\nstream.\n\nCloses #12382"
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.common.settings.loader;\n \n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.common.xcontent.XContent;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n@@ -65,6 +66,23 @@ public Map<String, String> load(XContentParser jp) throws IOException {\n throw new ElasticsearchParseException(\"malformed, expected settings to start with 'object', instead was [{}]\", token);\n }\n serializeObject(settings, sb, path, jp, null);\n+\n+ // ensure we reached the end of the stream\n+ Exception exception = null;\n+ XContentParser.Token lastToken = null;\n+ try {\n+ while (!jp.isClosed() && (lastToken = jp.nextToken()) == null);\n+ } catch (Exception e) {\n+ exception = e;\n+ }\n+ if (exception != null || lastToken != null) {\n+ throw new ElasticsearchParseException(\n+ \"malformed, expected end of settings but encountered additional content starting at columnNumber: [{}], lineNumber: [{}]\",\n+ jp.getTokenLocation().columnNumber,\n+ jp.getTokenLocation().lineNumber\n+ );\n+ }\n+\n return settings;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java",
"status": "modified"
},
{
"diff": "@@ -250,4 +250,6 @@ enum NumberType {\n * @return last token's location or null if cannot be determined\n */\n XContentLocation getTokenLocation();\n+\n+ boolean isClosed();\n }",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/XContentParser.java",
"status": "modified"
},
{
"diff": "@@ -248,4 +248,9 @@ private Token convertToken(JsonToken token) {\n }\n throw new IllegalStateException(\"No matching token for json_token [\" + token + \"]\");\n }\n+\n+ @Override\n+ public boolean isClosed() {\n+ return parser.isClosed();\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContentParser.java",
"status": "modified"
},
{
"diff": "@@ -319,4 +319,7 @@ static Object readValue(XContentParser parser, MapFactory mapFactory, XContentPa\n }\n return null;\n }\n+\n+ @Override\n+ public abstract boolean isClosed();\n }",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java",
"status": "modified"
},
{
"diff": "@@ -20,11 +20,11 @@\n package org.elasticsearch.common.settings.loader;\n \n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.settings.SettingsException;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n-import static org.hamcrest.MatcherAssert.assertThat;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -49,4 +49,18 @@ public void testSimpleYamlSettings() throws Exception {\n assertThat(settings.getAsArray(\"test1.test3\")[0], equalTo(\"test3-1\"));\n assertThat(settings.getAsArray(\"test1.test3\")[1], equalTo(\"test3-2\"));\n }\n+\n+ @Test(expected = SettingsException.class)\n+ public void testIndentation() {\n+ settingsBuilder()\n+ .loadFromClasspath(\"org/elasticsearch/common/settings/loader/indentation-settings.yml\")\n+ .build();\n+ }\n+\n+ @Test(expected = SettingsException.class)\n+ public void testIndentationWithExplicitDocumentStart() {\n+ settingsBuilder()\n+ .loadFromClasspath(\"org/elasticsearch/common/settings/loader/indentation-with-explicit-document-start-settings.yml\")\n+ .build();\n+ }\n }\n\\ No newline at end of file",
"filename": "core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,10 @@\n+ test1:\n+ value1: value1\n+ test2:\n+ value2: value2\n+ value3: 2\n+ test3:\n+ - test3-1\n+ - test3-2\n+test4:\n+ value4: value4",
"filename": "core/src/test/java/org/elasticsearch/common/settings/loader/indentation-settings.yml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,11 @@\n+ test1:\n+ value1: value1\n+ test2:\n+ value2: value2\n+ value3: 2\n+ test3:\n+ - test3-1\n+ - test3-2\n+---\n+test4:\n+ value4: value4",
"filename": "core/src/test/java/org/elasticsearch/common/settings/loader/indentation-with-explicit-document-start-settings.yml",
"status": "added"
}
]
} |
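The end-of-stream guard added in the XContentSettingsLoader diff above can be reproduced in isolation with Jackson, the parser library behind XContentParser. Below is a minimal sketch (class and method names are illustrative, not from the Elasticsearch codebase) that rejects a second YAML document after the settings object, assuming Jackson's usual behaviour of closing the parser once end-of-input is reached:

```java
// Minimal sketch (not Elasticsearch code): the same "did we consume the whole
// stream?" check as in the diff above, written directly against Jackson.
// Requires jackson-core and jackson-dataformat-yaml on the classpath.
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

public class TrailingContentCheck {

    // Throws if anything other than end-of-input follows the first document.
    static void requireEndOfStream(JsonParser parser) throws Exception {
        JsonToken trailing = null;
        while (!parser.isClosed() && (trailing = parser.nextToken()) == null);
        if (trailing != null) {
            throw new IllegalArgumentException(
                    "unexpected trailing content at " + parser.getTokenLocation());
        }
    }

    public static void main(String[] args) throws Exception {
        // Two YAML documents where only one is expected; the '---' starts a
        // second document, which requireEndOfStream() should reject.
        String yaml = "a: 1\nb: 2\n---\nc: 3\n";
        try (JsonParser parser = new YAMLFactory().createParser(yaml)) {
            while (parser.nextToken() != JsonToken.END_OBJECT) {
                // drain the flat root object of the first document
            }
            try {
                requireEndOfStream(parser);
            } catch (IllegalArgumentException e) {
                System.out.println("rejected: " + e.getMessage());
            }
        }
    }
}
```

With a single well-formed document, the loop simply drains trailing nulls until the parser closes, so nothing is thrown.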
{
"body": "An ip_range aggregation with mask of 0.0.0.0/0 is not being treated correctly, the prefix-length of 0 is being taken as 'unset' and is being set as /32\n\nI am trying to get aggregated information on expanding scopes (some subnet in our network, our entire network, the internet...), so I want to use 0.0.0.0/0 ... a range of 0.0.0.0 to 255.255.255.255 does work (although is nit-pickingly incorrect as the range is exclusive and so the final address tested is 255.255.255.254, which might be surprising if you wanted information about broadcast packets.... but that is perhaps a separate bug report.\n\n``` json\n# Replace the template that logstash provides, so\n# we add an 'ip' multifield type to the 'clientip'\n# field.\n\nPUT /_template/logstash\n{\n \"template\": \"logstash-*\",\n \"settings\": {\n \"index.refresh_interval\": \"5s\"\n },\n \"mappings\": {\n \"_default_\": {\n \"dynamic_templates\": [\n {\n \"message_field\": {\n \"mapping\": {\n \"index\": \"analyzed\",\n \"omit_norms\": true,\n \"type\": \"string\"\n },\n \"match_mapping_type\": \"string\",\n \"match\": \"message\"\n }\n },\n {\n \"ip_field\": {\n \"mapping\": {\n \"index\": \"not_analyzed\",\n \"omit_norms\": true,\n \"type\": \"string\",\n \"fields\": {\n \"num\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\"\n }\n }\n },\n \"match_mapping_type\": \"string\",\n \"match\": \"ip_*\"\n }\n },\n {\n \"string_fields\": {\n \"mapping\": {\n \"index\": \"analyzed\",\n \"omit_norms\": true,\n \"type\": \"string\",\n \"fields\": {\n \"raw\": {\n \"index\": \"not_analyzed\",\n \"ignore_above\": 256,\n \"type\": \"string\"\n }\n }\n },\n \"match_mapping_type\": \"string\",\n \"match\": \"*\"\n }\n }\n ],\n \"properties\": {\n \"clientip\": {\n \"type\": \"string\",\n \"fields\": {\n \"num\": {\n \"index\": \"not_analyzed\",\n \"type\": \"ip\"\n }\n }\n },\n \"geoip\": {\n \"dynamic\": true,\n \"properties\": {\n \"location\": {\n \"type\": \"geo_point\"\n }\n },\n \"type\": \"object\"\n },\n \"@version\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\"\n }\n },\n \"_all\": {\n \"enabled\": true,\n \"omit_norms\": true\n }\n }\n },\n \"aliases\": {}\n}\n\n# First, delete the current data, as we need to adjust\n# the mapping\n\nDELETE /logstash-test\n\n# Now go ahead and index your logstash data...\n\nPUT /logstash-test/logs/1\n{\n \"clientip\": \"12.34.0.1\",\n \"ip_source\": \"12.34.0.1\",\n \"message\": \"The very model of a modern major general\",\n \"somestring\": \"shoestring\"\n}\n\nPUT /logstash-test/logs/2\n{\n \"clientip\": \"23.45.2.3\",\n \"ip_source\": \"23.45.2.3\",\n \"message\": \"Everyone needs a nemesis\",\n \"somestring\": \"stringthing\"\n}\n\n# Check what we got back\n#\n# I was expecting to see:\n# - 'somestring' attribute should have a .raw subfield\n# - 'message' should not\n# - 'clientip' should have a .num subfield\n# - 'ip_source' should have a .num subfield\n#\n# ... but I don't see the subfields (not sure why)\n\nGET /logstash-test/logs/1\nGET /logstash-test/logs/2\n\n# Check your mapping after you start indexing\n\nGET logstash-test/_mapping/logs\n\n# Do a search for it (remember it won't show up in\n# a search until the index has passed its refresh\n# interval)\n#\n# Remember to specify the subfield you need to work on\n\n# I couldn't find an example of Kibana using a 'count'\n# search-type, but it seems to use '\"size\": 0' instead.\n\n### BUG!!! 
an ip_range with mask of 0.0.0.0/0 is not\n# being treated correctly, the prefix-length of 0 is\n# being taken as 'unset' and is being set as /32\n\nGET logstash-test/_search\n{\n \"size\": 0, \n \"aggs\": {\n \"ip_ranges\": {\n \"ip_range\": {\n \"field\": \"clientip.num\",\n \"ranges\": [\n {\n \"mask\": \"12.0.0.0/8\"\n },\n {\n \"mask\": \"0.0.0.0/1\"\n },\n {\n \"from\": \"0.0.0.0\",\n \"to\": \"255.255.255.255\"\n },\n {\n \"mask\": \"0.0.0.0/0\"\n }\n ]\n }\n }\n }\n}\n\n# There is no IP address value that can legitimately be\n# used as a \"null\" value for a CIDR prefix... \n# well, not in the IP address part; For IPv4, you could\n# use /33 or such as an \"null\" value I suppose, if \n# you didn't want the overhead of adding an extra bit\n# of information... so long as that implementation\n# detail wasn't exposed to users.\n```\n",
"comments": [
{
"body": "This was seen in Elasticsearch 1.6.0\n",
"created_at": "2015-07-02T22:50:41Z"
},
{
"body": "@cameronkerrnz Here is a potential fix: https://github.com/ruflin/elasticsearch/commit/3be98927a03fc7bd13b0792baddf1b7492f8855c\n\nSo far all tests are green, but I'm not sure if this could also have some unwanted side effects. Please have a look at it.\n",
"created_at": "2015-07-21T12:47:22Z"
},
{
"body": "thanks @ruflin - you want to open a PR?\n",
"created_at": "2015-07-23T10:01:34Z"
},
{
"body": "Done: https://github.com/elastic/elasticsearch/pull/12430\n",
"created_at": "2015-07-23T19:34:04Z"
},
{
"body": "Closed via #12430\n",
"created_at": "2015-07-27T10:13:23Z"
}
],
"number": 12005,
"title": "ip_range aggregation with mask of 0.0.0.0/0 gets treated as 0.0.0.0/32"
} | {
"body": "This fixes the issue raised in #12005 and adds test to verify it.\n",
"number": 12430,
"review_comments": [
{
"body": "I understand the above change, but not this one: what does it fix?\n",
"created_at": "2015-07-24T12:30:17Z"
},
{
"body": "Setting the value to `-1` leads to `null` as the to IP address. I assumed the expected IP address should be 255.255.255.255. But now that I rethink it there is a potential issue with that. As it is mentioned on line 153 the range is non inclusive. So it should be 256.0.0.0 which also correlates with `InternalIPv4Range.MAX_IP`. But I think this is actually not a valid IP address which is the reason why the system converts it to `null`. Going with the above solution would exclude the IP address 255.255.255.255.\n\nAs it is mentioned in the docs block `-1` means unbound end so you are right and the above change should not be needed. I will think of a test to verify that if the IP is 255.255.255.255 it is inside the range and will update the pull request.\n\nAs `-1` and `InternalIPv4Range.MAX_IP` are both converted to `null` I will check if the if clause is even needed.\n",
"created_at": "2015-07-27T06:26:34Z"
},
{
"body": "@jpountz This creates and index and adds documents to it. Should the first part be moved to the `setupSuiteScopeCluster(*)`? \n",
"created_at": "2015-07-27T07:16:27Z"
},
{
"body": "Thanks.\n",
"created_at": "2015-07-27T07:57:12Z"
},
{
"body": "I missed it, indeed it should be moved to setupSuiteScopeCluster\n",
"created_at": "2015-07-27T07:58:05Z"
},
{
"body": "I will move it up.\n",
"created_at": "2015-07-27T08:33:02Z"
}
],
"title": "Fix cidr mask conversion issue for 0.0.0.0/0 and add tests #12005"
} | {
"commits": [
{
"message": "Fix cidr mask conversion issue for 0.0.0.0/0 and add tests"
},
{
"message": "Revert change to set longTo to \"MAX_IP -1\" and improve test suite to check for range"
},
{
"message": "Move index creation to test setup method"
}
],
"files": [
{
"diff": "@@ -139,6 +139,10 @@ static long[] cidrMaskToMinMax(String cidr) {\n \n int mask = (-1) << (32 - Integer.parseInt(parts[4]));\n \n+ if (Integer.parseInt(parts[4]) == 0) {\n+ mask = 0 << 32;\n+ }\n+\n int from = addr & mask;\n long longFrom = intIpToLongIp(from);\n if (longFrom == 0) {\n@@ -147,6 +151,7 @@ static long[] cidrMaskToMinMax(String cidr) {\n \n int to = from + (~mask);\n long longTo = intIpToLongIp(to) + 1; // we have to +1 here as the range is non-inclusive on the \"to\" side\n+\n if (longTo == InternalIPv4Range.MAX_IP) {\n longTo = -1;\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ipv4/IPv4RangeBuilder.java",
"status": "modified"
},
{
"diff": "@@ -83,6 +83,33 @@ public void setupSuiteScopeCluster() throws Exception {\n }\n indexRandom(true, builders.toArray(new IndexRequestBuilder[builders.size()]));\n }\n+ {\n+ assertAcked(prepareCreate(\"range_idx\")\n+ .addMapping(\"type\", \"ip\", \"type=ip\", \"ips\", \"type=ip\"));\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[4];\n+\n+ builders[0] = client().prepareIndex(\"range_idx\", \"type\").setSource(jsonBuilder()\n+ .startObject()\n+ .field(\"ip\", \"0.0.0.0\")\n+ .endObject());\n+\n+ builders[1] = client().prepareIndex(\"range_idx\", \"type\").setSource(jsonBuilder()\n+ .startObject()\n+ .field(\"ip\", \"0.0.0.255\")\n+ .endObject());\n+\n+ builders[2] = client().prepareIndex(\"range_idx\", \"type\").setSource(jsonBuilder()\n+ .startObject()\n+ .field(\"ip\", \"255.255.255.0\")\n+ .endObject());\n+\n+ builders[3] = client().prepareIndex(\"range_idx\", \"type\").setSource(jsonBuilder()\n+ .startObject()\n+ .field(\"ip\", \"255.255.255.255\")\n+ .endObject());\n+\n+ indexRandom(true, builders);\n+ }\n ensureSearchable();\n }\n \n@@ -869,4 +896,51 @@ public void emptyAggregation() throws Exception {\n assertThat(buckets.get(0).getToAsString(), equalTo(\"10.0.0.10\"));\n assertThat(buckets.get(0).getDocCount(), equalTo(0l));\n }\n+\n+ @Test\n+ public void mask0() {\n+ SearchResponse response = client().prepareSearch(\"idx\")\n+ .addAggregation(ipRange(\"range\")\n+ .field(\"ip\")\n+ .addMaskRange(\"0.0.0.0/0\"))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Range range = response.getAggregations().get(\"range\");\n+ assertThat(range, notNullValue());\n+ assertThat(range.getName(), equalTo(\"range\"));\n+ List<? extends Bucket> buckets = range.getBuckets();\n+ assertThat(range.getBuckets().size(), equalTo(1));\n+\n+ Range.Bucket bucket = buckets.get(0);\n+ assertThat((String) bucket.getKey(), equalTo(\"0.0.0.0/0\"));\n+ assertThat(bucket.getFromAsString(), nullValue());\n+ assertThat(bucket.getToAsString(), nullValue());\n+ assertThat(((Number) bucket.getTo()).doubleValue(), equalTo(Double.POSITIVE_INFINITY));\n+ assertEquals(255l, bucket.getDocCount());\n+ }\n+\n+\n+ @Test\n+ public void mask0SpecialIps() {\n+\n+ SearchResponse response = client().prepareSearch(\"range_idx\")\n+ .addAggregation(ipRange(\"range\")\n+ .field(\"ip\")\n+ .addMaskRange(\"0.0.0.0/0\"))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Range range = response.getAggregations().get(\"range\");\n+\n+ assertThat(range, notNullValue());\n+ assertThat(range.getName(), equalTo(\"range\"));\n+ List<? extends Bucket> buckets = range.getBuckets();\n+ assertThat(range.getBuckets().size(), equalTo(1));\n+\n+ Range.Bucket bucket = buckets.get(0);\n+ assertEquals(4l, bucket.getDocCount());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/IPv4RangeTests.java",
"status": "modified"
}
]
} |
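The guard added to IPv4RangeBuilder above works around a Java shift quirk: shift distances for `int` are taken modulo 32, so `(-1) << 32` is still `-1`, and a `/0` prefix silently behaves like `/32` unless it is special-cased. A small standalone sketch of that pitfall and the zero-prefix guard (helper names are illustrative, not the Elasticsearch code path):

```java
// Standalone sketch of the /0 pitfall fixed above (illustrative, not ES code).
// In Java, int shift distances are reduced modulo 32, so (-1) << 32 == -1,
// which silently turns a /0 mask into a /32 mask unless it is special-cased.
public class CidrMask {

    static long[] cidrToRange(long ip, int prefixLength) {
        int mask = (prefixLength == 0) ? 0 : (-1) << (32 - prefixLength);
        long from = ip & (mask & 0xFFFFFFFFL);
        long to = from + ((~mask) & 0xFFFFFFFFL) + 1; // exclusive upper bound
        return new long[] { from, to };
    }

    public static void main(String[] args) {
        long ip = (12L << 24) | (34L << 16) | 1L;           // 12.34.0.1
        System.out.println((-1) << 32);                      // prints -1, not 0
        long[] all = cidrToRange(ip, 0);                     // whole IPv4 space
        System.out.println(all[0] + " .. " + all[1]);        // 0 .. 4294967296
        long[] slash8 = cidrToRange(ip, 8);                  // 12.0.0.0/8
        System.out.println(slash8[0] + " .. " + slash8[1]);  // 201326592 .. 218103808
    }
}
```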
{
"body": "Starting with 1.6.0, custom ClassLoader is lost in org.elasticsearch.node.internal.InternalSettingsPreparer.replacePromptPlaceholders(Settings, Terminal)\n",
"comments": [],
"number": 12340,
"title": "Settings' ClassLoader is lost "
} | {
"body": "Today, when a user provides settings and specifies a classloader to be used, the classloader gets\ndropped when we copy the settings to check for prompt entries. This change copies the classloader\nwhen replacing the prompt placeholders and adds a test to ensure the InternalSettingsPreparer\nalways retains the classloader.\n\nCloses #12340\n",
"number": 12419,
"review_comments": [
{
"body": "Can we use URLClassLoader.newInstance() instead? It doesn't require a createClassLoader permissions check i think, because the returned subclass is controlled and does proper security checks.\n",
"created_at": "2015-07-23T16:07:45Z"
}
],
"title": "Copy the classloader from the original settings when checking for prompts"
} | {
"commits": [
{
"message": "copy the classloader from the original settings when checking for prompts\n\nToday, when a user provides settings and specifies a classloader to be used, the classloader gets\ndropped when we copy the settings to check for prompt entries. This change copies the classloader\nwhen replacing the prompt placeholders and adds a test to ensure the InternalSettingsPreparer\nalways retains the classloader.\n\nCloses #12340"
}
],
"files": [
{
"diff": "@@ -180,7 +180,7 @@ public static Tuple<Settings, Environment> prepareSettings(Settings pSettings, b\n \n static Settings replacePromptPlaceholders(Settings settings, Terminal terminal) {\n UnmodifiableIterator<Map.Entry<String, String>> iter = settings.getAsMap().entrySet().iterator();\n- Settings.Builder builder = Settings.builder();\n+ Settings.Builder builder = Settings.builder().classLoader(settings.getClassLoaderIfSet());\n \n while (iter.hasNext()) {\n Map.Entry<String, String> entry = iter.next();",
"filename": "core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,8 @@\n import org.junit.Before;\n import org.junit.Test;\n \n+import java.net.URL;\n+import java.net.URLClassLoader;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -220,4 +222,19 @@ public String readText(String message, Object... args) {\n assertThat(settings.get(\"name\"), is(\"prompted name 0\"));\n assertThat(settings.get(\"node.name\"), is(\"prompted name 0\"));\n }\n+\n+ @Test\n+ public void testPreserveSettingsClassloader() {\n+ final ClassLoader classLoader = URLClassLoader.newInstance(new URL[0]);\n+ Settings settings = settingsBuilder()\n+ .put(\"foo\", \"bar\")\n+ .put(\"path.home\", createTempDir())\n+ .classLoader(classLoader)\n+ .build();\n+\n+ Tuple<Settings, Environment> tuple = InternalSettingsPreparer.prepareSettings(settings, randomBoolean());\n+\n+ Settings preparedSettings = tuple.v1();\n+ assertThat(preparedSettings.getClassLoaderIfSet(), is(classLoader));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/node/internal/InternalSettingsPreparerTests.java",
"status": "modified"
}
]
} |
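The classloader fix above is an instance of a general copy-the-builder hazard: when settings are rebuilt to rewrite prompt placeholders, any auxiliary state has to be carried over explicitly or it is silently dropped. Here is a hypothetical sketch of that pattern using a stand-in `Config` type rather than the real Settings API:

```java
// Illustrative sketch (hypothetical Config type, not the real Settings API):
// rebuilding a config from an existing one must copy auxiliary state such as
// the classloader explicitly, or it is silently dropped -- the bug fixed above.
import java.net.URL;
import java.net.URLClassLoader;
import java.util.LinkedHashMap;
import java.util.Map;

public class CopyKeepsClassLoader {

    static final class Config {
        final Map<String, String> entries;
        final ClassLoader classLoader;
        Config(Map<String, String> entries, ClassLoader classLoader) {
            this.entries = entries;
            this.classLoader = classLoader;
        }
    }

    // Rewrites placeholder values while preserving the original classloader.
    static Config replacePlaceholders(Config original) {
        Map<String, String> copy = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : original.entries.entrySet()) {
            String value = "${prompt}".equals(e.getValue()) ? "resolved" : e.getValue();
            copy.put(e.getKey(), value);
        }
        // The essential step: carry the classloader over to the rebuilt config.
        return new Config(copy, original.classLoader);
    }

    public static void main(String[] args) {
        // URLClassLoader.newInstance avoids the createClassLoader permission
        // check mentioned in the review comment above.
        ClassLoader loader = URLClassLoader.newInstance(new URL[0]);
        Map<String, String> entries = new LinkedHashMap<>();
        entries.put("node.name", "${prompt}");
        Config prepared = replacePlaceholders(new Config(entries, loader));
        System.out.println(prepared.classLoader == loader); // true
    }
}
```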
{
"body": "Start Elasticsearch 1.6.0, and run the following commands:\n\n```\nDELETE *\n\nPUT bad\n{\n \"mappings\": {\n \"x\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\"\n }\n }\n },\n \"y\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"date\"\n }\n }\n }\n }\n}\n\nPUT bad/x/1\n{\"foo\": \"bar\"}\n\nPUT bad/y/1\n{\"foo\": \"2015-01-01\"}\n\nPOST _flush\n```\n\nThen shutdown and start ES compiled from master. The `bad` index fails to recover because of the conflicting mappings, and the failures just keep being repeated endlessly:\n\n```\n[2015-06-24 18:22:46,644][WARN ][indices.cluster ] [Crossbones] [bad] failed to add mapping [y], source [{\"y\":{\"properties\":{\"foo\":{\"t\nype\":\"date\",\"format\":\"dateOptionalTime\"}}}}]\njava.lang.IllegalArgumentException: Mapper for [foo] conflicts with existing mapping in other types[mapper [foo] cannot be changed from type [s\ntring] to [date]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:350)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:305)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:255)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:422)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:376)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:181)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:484)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThr\neadPoolExecutor.java:209)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolE\nxecutor.java:179)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\n[2015-06-24 18:22:46,701][WARN ][indices.cluster ] [Crossbones] [[bad][0]] marking and sending shard failed due to [failed to update m\nappings]\njava.lang.IllegalArgumentException: Mapper for [foo] conflicts with existing mapping in other types[mapper [foo] cannot be changed from type [s\ntring] to [date]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:350)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:305)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:255)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:422)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:376)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:181)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:484)\n at 
org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThr\neadPoolExecutor.java:209)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolE\nxecutor.java:179)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\n[2015-06-24 18:22:46,703][WARN ][cluster.action.shard ] [Crossbones] [bad][0] received shard failed for [bad][0], node[_DucCUykRy6JHLfcchxVVg], [P], s[INITIALIZING], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-06-24T16:22:46.149Z]], indexUUID [RET0PNdWQQWhX_fglK4FIw], reason [shard failure [failed to update mappings][IllegalArgumentException[Mapper for [foo] conflicts with existing mapping in other types[mapper [foo] cannot be changed from type [string] to [date]]]]]\n[2015-06-24 18:22:46,705][WARN ][indices.cluster ] [Crossbones] [[bad][1]] marking and sending shard failed due to [failed to update mappings]\njava.lang.IllegalArgumentException: Mapper for [foo] conflicts with existing mapping in other types[mapper [foo] cannot be changed from type [string] to [date]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:350)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:305)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:255)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:422)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:376)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:181)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:484)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:209)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:179)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\n[2015-06-24 18:22:46,706][WARN ][cluster.action.shard ] [Crossbones] [bad][0] received shard failed for [bad][0], node[_DucCUykRy6JHLfcchxVVg], [P], s[INITIALIZING], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-06-24T16:22:46.149Z]], indexUUID [RET0PNdWQQWhX_fglK4FIw], reason [master [Crossbones][_DucCUykRy6JHLfcchxVVg][Slim-2.local][inet[/127.0.0.1:9300]] marked shard as initializing, but shard is marked as failed, resend shard failure]\n```\n",
"comments": [
{
"body": "A simpler way to reproduce this is to try put this mapping to ES master, and it will fail.\nThe ability to have a field with the same name in different types with uncompatible mapping seems to have been deliberately removed in #11812 by @rjernst.\n",
"created_at": "2015-06-28T20:04:49Z"
},
{
"body": "@szroland the change is intentional. This issue is about what happens when you upgrade to 2.0 with conflicting mappings. We should fail once and stop, instead of trying over and over again.\n",
"created_at": "2015-06-29T12:27:37Z"
}
],
"number": 11857,
"title": "Conflicting mappings causes runaway shard failures on upgrade"
} | {
"body": "Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices as well as restoring such indices from a snapshot into already running cluster.\n\nCloses #11857\n",
"number": 12406,
"review_comments": [
{
"body": "If my understanding is correct, this is the version of the \"oldest\" segment in the index? If yes, can you document it: I had to think about it for a bit since the elasticsearch version below also points to a lucene version which is the version which has been used to write the index initially.\n",
"created_at": "2015-08-03T13:20:23Z"
},
{
"body": "maybe we don't need this `changed` logic, calling `versions.put(index, new Tuple<>(version, luceneVersion));` would always be correct?\n",
"created_at": "2015-08-03T13:22:25Z"
},
{
"body": "should it rather be an IllegalStateException?\n",
"created_at": "2015-08-03T13:29:39Z"
},
{
"body": "Should `updateAllTypes` really be `true`? This might hide some problems?\n",
"created_at": "2015-08-03T13:35:11Z"
},
{
"body": "I don't think it should be static\n",
"created_at": "2015-08-03T13:36:32Z"
},
{
"body": "Should it return the created version or the upgraded version (if it exists) instead of Version.CURRENT?\n",
"created_at": "2015-08-03T13:44:56Z"
},
{
"body": "Actually I think applyDefaults should be false too since we already have a mapping?\n",
"created_at": "2015-08-03T13:46:29Z"
},
{
"body": "This analyzer is only compared to null it's not really used as an analyzer. \n",
"created_at": "2015-08-03T14:06:55Z"
},
{
"body": "So, the idea is that luceneVersion track the version of the latest segment. The elasticsearch version tracks the version of metadata, and since we update metadata to match the current version I am using the current version instead of version when index was created. This is the tricky part so I will add some comments here.\n",
"created_at": "2015-08-03T14:09:23Z"
},
{
"body": "The reason why I added this comment was because I saw it was closed in FakeAnalysisService.close(). So instead should we create an anonymous fake analyzer (that throws an exception in createComponents) and never close it?\n",
"created_at": "2015-08-03T14:10:46Z"
},
{
"body": "Makes sense.\n",
"created_at": "2015-08-03T15:13:54Z"
},
{
"body": "the comment says it looks for the oldest version but the code looks for the newest version?\n",
"created_at": "2015-08-03T15:51:21Z"
},
{
"body": "sorry I missed the ==false\n",
"created_at": "2015-08-03T15:55:00Z"
},
{
"body": "I don't think we need to close the fake analyzer since it doesn't hold resources?\n",
"created_at": "2015-08-03T16:04:36Z"
},
{
"body": "I think we need to come up with more descriptive names?\n",
"created_at": "2015-08-03T16:10:58Z"
},
{
"body": "The parent class has a non-empty close method, so while you are right - there is nothing really happens there, I don't think it causes any harm to close it and it will make reasoning about the code easier. \n",
"created_at": "2015-08-03T17:07:32Z"
},
{
"body": "I am going to rename them to `upgrade_version` and `minimum_compatible_lucene_version`.\n",
"created_at": "2015-08-03T17:13:22Z"
},
{
"body": "\"minimum compatible\" is confusing. Compatible with what? Can we maybe call this `oldest_lucene_segment_version`?\n",
"created_at": "2015-08-03T20:09:34Z"
},
{
"body": "The `upgradeMappings()` doesn't seem to modify the provided `indexMetaData` variable, so I think this method should have `void`as return type?\n",
"created_at": "2015-08-04T07:13:20Z"
},
{
"body": "mappings of mappings? Maybe just one mappings?\n",
"created_at": "2015-08-04T14:15:28Z"
},
{
"body": "Can we use the same \"lucene oldest segment\" terminology here?\n",
"created_at": "2015-08-04T14:18:00Z"
},
{
"body": "nit: space after if\n",
"created_at": "2015-08-04T14:21:00Z"
},
{
"body": "Is it possible to just check upgrade version? Can we default that to the created version, so this logic is very simple to think about here?\n",
"created_at": "2015-08-04T14:22:23Z"
},
{
"body": "upgrade the index -> upgrade mappings for index?\n",
"created_at": "2015-08-04T14:23:27Z"
},
{
"body": "Were you still going to rename these?\n",
"created_at": "2015-08-04T14:25:48Z"
},
{
"body": "Can we name this index differently to indicate it is an error case? Perhaps index-conflicting-mappings-1.7.0?\n",
"created_at": "2015-08-04T14:27:13Z"
},
{
"body": "If upgrade version is not present, it is set to created version, so we can just check upgrade version.\n",
"created_at": "2015-08-04T19:45:39Z"
}
],
"title": "Check for incompatible mappings while upgrading old indices"
} | {
"commits": [
{
"message": "Check for incompatible mappings while upgrading old indices\n\nConflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices as well as restoring such indices from a snapshot into already running cluster.\n\nCloses #11857"
}
],
"files": [
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.upgrade.post;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.support.broadcast.BroadcastShardResponse;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -32,22 +33,29 @@\n */\n class ShardUpgradeResponse extends BroadcastShardResponse {\n \n- private org.apache.lucene.util.Version version;\n+ private org.apache.lucene.util.Version oldestLuceneSegment;\n+\n+ private Version upgradeVersion;\n \n private boolean primary;\n \n \n ShardUpgradeResponse() {\n }\n \n- ShardUpgradeResponse(ShardId shardId, boolean primary, org.apache.lucene.util.Version version) {\n+ ShardUpgradeResponse(ShardId shardId, boolean primary, Version upgradeVersion, org.apache.lucene.util.Version oldestLuceneSegment) {\n super(shardId);\n this.primary = primary;\n- this.version = version;\n+ this.upgradeVersion = upgradeVersion;\n+ this.oldestLuceneSegment = oldestLuceneSegment;\n+ }\n+\n+ public org.apache.lucene.util.Version oldestLuceneSegment() {\n+ return this.oldestLuceneSegment;\n }\n \n- public org.apache.lucene.util.Version version() {\n- return this.version;\n+ public Version upgradeVersion() {\n+ return this.upgradeVersion;\n }\n \n public boolean primary() {\n@@ -59,18 +67,21 @@ public boolean primary() {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n primary = in.readBoolean();\n+ upgradeVersion = Version.readVersion(in);\n try {\n- version = org.apache.lucene.util.Version.parse(in.readString());\n+ oldestLuceneSegment = org.apache.lucene.util.Version.parse(in.readString());\n } catch (ParseException ex) {\n- throw new IOException(\"failed to parse lucene version [\" + version + \"]\", ex);\n+ throw new IOException(\"failed to parse lucene version [\" + oldestLuceneSegment + \"]\", ex);\n }\n+\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeBoolean(primary);\n- out.writeString(version.toString());\n+ Version.writeVersion(upgradeVersion, out);\n+ out.writeString(oldestLuceneSegment.toString());\n }\n \n }\n\\ No newline at end of file",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/ShardUpgradeResponse.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.upgrade.post;\n \n-import org.apache.lucene.util.Version;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.PrimaryMissingActionException;\n import org.elasticsearch.action.ShardOperationFailedException;\n@@ -34,6 +34,7 @@\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.*;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -75,7 +76,7 @@ protected UpgradeResponse newResponse(UpgradeRequest request, AtomicReferenceArr\n int failedShards = 0;\n List<ShardOperationFailedException> shardFailures = null;\n Map<String, Integer> successfulPrimaryShards = newHashMap();\n- Map<String, Version> versions = newHashMap();\n+ Map<String, Tuple<Version, org.apache.lucene.util.Version>> versions = newHashMap();\n for (int i = 0; i < shardsResponses.length(); i++) {\n Object shardResponse = shardsResponses.get(i);\n if (shardResponse == null) {\n@@ -94,20 +95,35 @@ protected UpgradeResponse newResponse(UpgradeRequest request, AtomicReferenceArr\n Integer count = successfulPrimaryShards.get(index);\n successfulPrimaryShards.put(index, count == null ? 1 : count + 1);\n }\n- Version version = versions.get(index);\n- if (version == null || shardUpgradeResponse.version().onOrAfter(version) == false) {\n- versions.put(index, shardUpgradeResponse.version());\n+ Tuple<Version, org.apache.lucene.util.Version> versionTuple = versions.get(index);\n+ if (versionTuple == null) {\n+ versions.put(index, new Tuple<>(shardUpgradeResponse.upgradeVersion(), shardUpgradeResponse.oldestLuceneSegment()));\n+ } else {\n+ // We already have versions for this index - let's see if we need to update them based on the current shard\n+ Version version = versionTuple.v1();\n+ org.apache.lucene.util.Version luceneVersion = versionTuple.v2();\n+ // For the metadata we are interested in the _latest_ elasticsearch version that was processing the metadata\n+ // Since we rewrite the mapping during upgrade the metadata is always rewritten by the latest version\n+ if (shardUpgradeResponse.upgradeVersion().after(versionTuple.v1())) {\n+ version = shardUpgradeResponse.upgradeVersion();\n+ }\n+ // For the lucene version we are interested in the _oldest_ lucene version since it determines the\n+ // oldest version that we need to support\n+ if (shardUpgradeResponse.oldestLuceneSegment().onOrAfter(versionTuple.v2()) == false) {\n+ luceneVersion = shardUpgradeResponse.oldestLuceneSegment();\n+ }\n+ versions.put(index, new Tuple<>(version, luceneVersion));\n }\n }\n }\n- Map<String, String> updatedVersions = newHashMap();\n+ Map<String, Tuple<org.elasticsearch.Version, String>> updatedVersions = newHashMap();\n MetaData metaData = clusterState.metaData();\n- for (Map.Entry<String, Version> versionEntry : versions.entrySet()) {\n+ for (Map.Entry<String, Tuple<Version, org.apache.lucene.util.Version>> versionEntry : versions.entrySet()) {\n String index = versionEntry.getKey();\n Integer primaryCount = successfulPrimaryShards.get(index);\n int expectedPrimaryCount = metaData.index(index).getNumberOfShards();\n if (primaryCount == metaData.index(index).getNumberOfShards()) {\n- updatedVersions.put(index, 
versionEntry.getValue().toString());\n+ updatedVersions.put(index, new Tuple<>(versionEntry.getValue().v1(), versionEntry.getValue().v2().toString()));\n } else {\n logger.warn(\"Not updating settings for the index [{}] because upgraded of some primary shards failed - expected[{}], received[{}]\", index,\n expectedPrimaryCount, primaryCount == null ? 0 : primaryCount);\n@@ -130,8 +146,9 @@ protected ShardUpgradeResponse newShardResponse() {\n @Override\n protected ShardUpgradeResponse shardOperation(ShardUpgradeRequest request) {\n IndexShard indexShard = indicesService.indexServiceSafe(request.shardId().getIndex()).shardSafe(request.shardId().id());\n- org.apache.lucene.util.Version version = indexShard.upgrade(request.upgradeRequest());\n- return new ShardUpgradeResponse(request.shardId(), indexShard.routingEntry().primary(), version);\n+ org.apache.lucene.util.Version oldestLuceneSegment = indexShard.upgrade(request.upgradeRequest());\n+ // We are using the current version of elasticsearch as upgrade version since we update mapping to match the current version\n+ return new ShardUpgradeResponse(request.shardId(), indexShard.routingEntry().primary(), Version.CURRENT, oldestLuceneSegment);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/TransportUpgradeAction.java",
"status": "modified"
},
{
"diff": "@@ -19,8 +19,10 @@\n \n package org.elasticsearch.action.admin.indices.upgrade.post;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n \n@@ -37,13 +39,13 @@\n */\n public class UpgradeResponse extends BroadcastResponse {\n \n- private Map<String, String> versions;\n+ private Map<String, Tuple<Version, String>> versions;\n \n UpgradeResponse() {\n \n }\n \n- UpgradeResponse(Map<String, String> versions, int totalShards, int successfulShards, int failedShards, List<ShardOperationFailedException> shardFailures) {\n+ UpgradeResponse(Map<String, Tuple<Version, String>> versions, int totalShards, int successfulShards, int failedShards, List<ShardOperationFailedException> shardFailures) {\n super(totalShards, successfulShards, failedShards, shardFailures);\n this.versions = versions;\n }\n@@ -55,22 +57,28 @@ public void readFrom(StreamInput in) throws IOException {\n versions = newHashMap();\n for (int i=0; i<size; i++) {\n String index = in.readString();\n- String version = in.readString();\n- versions.put(index, version);\n+ Version upgradeVersion = Version.readVersion(in);\n+ String oldestLuceneSegment = in.readString();\n+ versions.put(index, new Tuple<>(upgradeVersion, oldestLuceneSegment));\n }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeVInt(versions.size());\n- for(Map.Entry<String, String> entry : versions.entrySet()) {\n+ for(Map.Entry<String, Tuple<Version, String>> entry : versions.entrySet()) {\n out.writeString(entry.getKey());\n- out.writeString(entry.getValue());\n+ Version.writeVersion(entry.getValue().v1(), out);\n+ out.writeString(entry.getValue().v2());\n }\n }\n \n- public Map<String, String> versions() {\n+ /**\n+ * Returns the highest upgrade version of the node that performed metadata upgrade and the\n+ * the version of the oldest lucene segment for each index that was upgraded.\n+ */\n+ public Map<String, Tuple<Version, String>> versions() {\n return versions;\n }\n }\n\\ No newline at end of file",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/UpgradeResponse.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,9 @@\n \n package org.elasticsearch.action.admin.indices.upgrade.post;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateRequest;\n+import org.elasticsearch.common.collect.Tuple;\n \n import java.util.Map;\n \n@@ -28,7 +30,7 @@\n */\n public class UpgradeSettingsClusterStateUpdateRequest extends ClusterStateUpdateRequest<UpgradeSettingsClusterStateUpdateRequest> {\n \n- private Map<String, String> versions;\n+ private Map<String, Tuple<Version, String>> versions;\n \n public UpgradeSettingsClusterStateUpdateRequest() {\n \n@@ -37,14 +39,14 @@ public UpgradeSettingsClusterStateUpdateRequest() {\n /**\n * Returns the index to version map for indices that should be updated\n */\n- public Map<String, String> versions() {\n+ public Map<String, Tuple<Version, String>> versions() {\n return versions;\n }\n \n /**\n * Sets the index to version map for indices that should be updated\n */\n- public UpgradeSettingsClusterStateUpdateRequest versions(Map<String, String> versions) {\n+ public UpgradeSettingsClusterStateUpdateRequest versions(Map<String, Tuple<Version, String>> versions) {\n this.versions = versions;\n return this;\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/UpgradeSettingsClusterStateUpdateRequest.java",
"status": "modified"
},
{
"diff": "@@ -19,8 +19,10 @@\n \n package org.elasticsearch.action.admin.indices.upgrade.post;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.support.master.AcknowledgedRequest;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n \n@@ -35,16 +37,17 @@\n */\n public class UpgradeSettingsRequest extends AcknowledgedRequest<UpgradeSettingsRequest> {\n \n-\n- private Map<String, String> versions;\n+ private Map<String, Tuple<Version, String>> versions;\n \n UpgradeSettingsRequest() {\n }\n \n /**\n * Constructs a new request to update minimum compatible version settings for one or more indices\n+ *\n+ * @param versions a map from index name to elasticsearch version, oldest lucene segment version tuple\n */\n- public UpgradeSettingsRequest(Map<String, String> versions) {\n+ public UpgradeSettingsRequest(Map<String, Tuple<Version, String>> versions) {\n this.versions = versions;\n }\n \n@@ -59,14 +62,14 @@ public ActionRequestValidationException validate() {\n }\n \n \n- Map<String, String> versions() {\n+ Map<String, Tuple<Version, String>> versions() {\n return versions;\n }\n \n /**\n * Sets the index versions to be updated\n */\n- public UpgradeSettingsRequest versions(Map<String, String> versions) {\n+ public UpgradeSettingsRequest versions(Map<String, Tuple<Version, String>> versions) {\n this.versions = versions;\n return this;\n }\n@@ -79,8 +82,9 @@ public void readFrom(StreamInput in) throws IOException {\n versions = newHashMap();\n for (int i=0; i<size; i++) {\n String index = in.readString();\n- String version = in.readString();\n- versions.put(index, version);\n+ Version upgradeVersion = Version.readVersion(in);\n+ String oldestLuceneSegment = in.readString();\n+ versions.put(index, new Tuple<>(upgradeVersion, oldestLuceneSegment));\n }\n readTimeout(in);\n }\n@@ -89,9 +93,10 @@ public void readFrom(StreamInput in) throws IOException {\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeVInt(versions.size());\n- for(Map.Entry<String, String> entry : versions.entrySet()) {\n+ for(Map.Entry<String, Tuple<Version, String>> entry : versions.entrySet()) {\n out.writeString(entry.getKey());\n- out.writeString(entry.getValue());\n+ Version.writeVersion(entry.getValue().v1(), out);\n+ out.writeString(entry.getValue().v2());\n }\n writeTimeout(out);\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/UpgradeSettingsRequest.java",
"status": "modified"
},
{
"diff": "@@ -19,8 +19,10 @@\n \n package org.elasticsearch.action.admin.indices.upgrade.post;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;\n import org.elasticsearch.client.ElasticsearchClient;\n+import org.elasticsearch.common.collect.Tuple;\n \n import java.util.Map;\n \n@@ -36,7 +38,7 @@ public UpgradeSettingsRequestBuilder(ElasticsearchClient client, UpgradeSettings\n /**\n * Sets the index versions to be updated\n */\n- public UpgradeSettingsRequestBuilder setVersions(Map<String, String> versions) {\n+ public UpgradeSettingsRequestBuilder setVersions(Map<String, Tuple<Version, String>> versions) {\n request.versions(versions);\n return this;\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/UpgradeSettingsRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,8 @@\n */\n package org.elasticsearch.cluster.metadata;\n \n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import org.apache.lucene.analysis.Analyzer;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.routing.DjbHashFunction;\n import org.elasticsearch.cluster.routing.HashFunction;\n@@ -27,6 +29,12 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import com.google.common.collect.ImmutableSet;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.analysis.AnalysisService;\n+import org.elasticsearch.index.analysis.NamedAnalyzer;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.similarity.SimilarityLookupService;\n+import org.elasticsearch.script.ScriptService;\n \n import java.util.Set;\n \n@@ -45,11 +53,12 @@ public class MetaDataIndexUpgradeService extends AbstractComponent {\n \n private final Class<? extends HashFunction> pre20HashFunction;\n private final Boolean pre20UseType;\n+ private final ScriptService scriptService;\n \n @Inject\n- public MetaDataIndexUpgradeService(Settings settings) {\n+ public MetaDataIndexUpgradeService(Settings settings, ScriptService scriptService) {\n super(settings);\n-\n+ this.scriptService = scriptService;\n final String pre20HashFunctionName = settings.get(DEPRECATED_SETTING_ROUTING_HASH_FUNCTION, null);\n final boolean hasCustomPre20HashFunction = pre20HashFunctionName != null;\n // the hash function package has changed we replace the two hash functions if their fully qualified name is used.\n@@ -83,12 +92,24 @@ public MetaDataIndexUpgradeService(Settings settings) {\n */\n public IndexMetaData upgradeIndexMetaData(IndexMetaData indexMetaData) {\n // Throws an exception if there are too-old segments:\n+ if (isUpgraded(indexMetaData)) {\n+ return indexMetaData;\n+ }\n checkSupportedVersion(indexMetaData);\n IndexMetaData newMetaData = upgradeLegacyRoutingSettings(indexMetaData);\n newMetaData = addDefaultUnitsIfNeeded(newMetaData);\n+ checkMappingsCompatibility(newMetaData);\n+ newMetaData = markAsUpgraded(newMetaData);\n return newMetaData;\n }\n \n+ /**\n+ * Checks if the index was already opened by this version of Elasticsearch and doesn't require any additional checks.\n+ */\n+ private boolean isUpgraded(IndexMetaData indexMetaData) {\n+ return indexMetaData.upgradeVersion().onOrAfter(Version.V_2_0_0_beta1);\n+ }\n+\n /**\n * Elasticsearch 2.0 no longer supports indices with pre Lucene v4.0 (Elasticsearch v 0.90.0) segments. All indices\n * that were created before Elasticsearch v0.90.0 should be upgraded using upgrade plugin before they can\n@@ -239,4 +260,66 @@ private IndexMetaData addDefaultUnitsIfNeeded(IndexMetaData indexMetaData) {\n // No changes:\n return indexMetaData;\n }\n+\n+\n+ /**\n+ * Checks the mappings for compatibility with the current version\n+ */\n+ private void checkMappingsCompatibility(IndexMetaData indexMetaData) {\n+ Index index = new Index(indexMetaData.getIndex());\n+ Settings settings = indexMetaData.settings();\n+ try {\n+ SimilarityLookupService similarityLookupService = new SimilarityLookupService(index, settings);\n+ // We cannot instantiate real analysis server at this point because the node might not have\n+ // been started yet. 
However, we don't really need real analyzers at this stage - so we can fake it\n+ try (AnalysisService analysisService = new FakeAnalysisService(index, settings)) {\n+ try (MapperService mapperService = new MapperService(index, settings, analysisService, similarityLookupService, scriptService)) {\n+ for (ObjectCursor<MappingMetaData> cursor : indexMetaData.getMappings().values()) {\n+ MappingMetaData mappingMetaData = cursor.value;\n+ mapperService.merge(mappingMetaData.type(), mappingMetaData.source(), false, false);\n+ }\n+ }\n+ }\n+ } catch (Exception ex) {\n+ // Wrap the inner exception so we have the index name in the exception message\n+ throw new IllegalStateException(\"unable to upgrade the mappings for the index [\" + indexMetaData.getIndex() + \"], reason: [\" + ex.getMessage() + \"]\", ex);\n+ }\n+ }\n+\n+ /**\n+ * Marks index as upgraded so we don't have to test it again\n+ */\n+ private IndexMetaData markAsUpgraded(IndexMetaData indexMetaData) {\n+ Settings settings = Settings.builder().put(indexMetaData.settings()).put(IndexMetaData.SETTING_VERSION_UPGRADED, Version.CURRENT).build();\n+ return IndexMetaData.builder(indexMetaData).settings(settings).build();\n+ }\n+\n+ /**\n+ * A fake analysis server that returns the same keyword analyzer for all requests\n+ */\n+ private static class FakeAnalysisService extends AnalysisService {\n+\n+ private Analyzer fakeAnalyzer = new Analyzer() {\n+ @Override\n+ protected TokenStreamComponents createComponents(String fieldName) {\n+ throw new UnsupportedOperationException(\"shouldn't be here\");\n+ }\n+ };\n+\n+ public FakeAnalysisService(Index index, Settings indexSettings) {\n+ super(index, indexSettings);\n+ }\n+\n+ @Override\n+ public NamedAnalyzer analyzer(String name) {\n+ return new NamedAnalyzer(name, fakeAnalyzer);\n+ }\n+\n+ @Override\n+ public void close() {\n+ fakeAnalyzer.close();\n+ super.close();\n+ }\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.cluster.settings.DynamicSettings;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -334,16 +335,16 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n @Override\n public ClusterState execute(ClusterState currentState) {\n MetaData.Builder metaDataBuilder = MetaData.builder(currentState.metaData());\n- for (Map.Entry<String, String> entry : request.versions().entrySet()) {\n+ for (Map.Entry<String, Tuple<Version, String>> entry : request.versions().entrySet()) {\n String index = entry.getKey();\n IndexMetaData indexMetaData = metaDataBuilder.get(index);\n if (indexMetaData != null) {\n if (Version.CURRENT.equals(indexMetaData.creationVersion()) == false) {\n // No reason to pollute the settings, we didn't really upgrade anything\n metaDataBuilder.put(IndexMetaData.builder(indexMetaData)\n .settings(settingsBuilder().put(indexMetaData.settings())\n- .put(IndexMetaData.SETTING_VERSION_MINIMUM_COMPATIBLE, entry.getValue())\n- .put(IndexMetaData.SETTING_VERSION_UPGRADED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_VERSION_MINIMUM_COMPATIBLE, entry.getValue().v2())\n+ .put(IndexMetaData.SETTING_VERSION_UPGRADED, entry.getValue().v1())\n )\n );\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
{
"diff": "@@ -63,6 +63,7 @@\n import org.elasticsearch.percolator.PercolatorService;\n import org.elasticsearch.script.ScriptService;\n \n+import java.io.Closeable;\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n@@ -78,7 +79,7 @@\n /**\n *\n */\n-public class MapperService extends AbstractIndexComponent {\n+public class MapperService extends AbstractIndexComponent implements Closeable {\n \n public static final String DEFAULT_MAPPING = \"_default_\";\n private static ObjectHashSet<String> META_FIELDS = ObjectHashSet.from(",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -19,11 +19,13 @@\n \n package org.elasticsearch.rest.action.admin.indices.upgrade;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.upgrade.get.UpgradeStatusResponse;\n import org.elasticsearch.action.admin.indices.upgrade.post.UpgradeRequest;\n import org.elasticsearch.action.admin.indices.upgrade.post.UpgradeResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -86,8 +88,11 @@ public RestResponse buildResponse(UpgradeResponse response, XContentBuilder buil\n builder.startObject();\n buildBroadcastShardsHeader(builder, request, response);\n builder.startObject(\"upgraded_indices\");\n- for (Map.Entry<String, String> entry : response.versions().entrySet()) {\n- builder.field(entry.getKey(), entry.getValue(), XContentBuilder.FieldCaseConversion.NONE);\n+ for (Map.Entry<String, Tuple<Version, String>> entry : response.versions().entrySet()) {\n+ builder.startObject(entry.getKey(), XContentBuilder.FieldCaseConversion.NONE);\n+ builder.field(\"upgrade_version\", entry.getValue().v1());\n+ builder.field(\"oldest_lucene_segment_version\", entry.getValue().v2());\n+ builder.endObject();\n }\n builder.endObject();\n builder.endObject();",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/upgrade/RestUpgradeAction.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.util.TestUtil;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.get.GetIndexResponse;\n+import org.elasticsearch.action.admin.indices.upgrade.UpgradeIT;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -42,7 +43,6 @@\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.shard.MergePolicyConfig;\n import org.elasticsearch.indices.recovery.RecoverySettings;\n-import org.elasticsearch.rest.action.admin.indices.upgrade.UpgradeIT;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.aggregations.AggregationBuilders;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;",
"filename": "core/src/test/java/org/elasticsearch/bwcompat/OldIndexBackwardsCompatibilityIT.java",
"status": "modified"
},
{
"diff": "",
"filename": "core/src/test/resources/org/elasticsearch/action/admin/indices/upgrade/index-conflicting-mappings-1.7.0.zip",
"status": "added"
},
{
"diff": "@@ -0,0 +1,93 @@\n+import create_bwc_index\n+import logging\n+import os\n+import random\n+import shutil\n+import subprocess\n+import sys\n+import tempfile\n+\n+def fetch_version(version):\n+ logging.info('fetching ES version %s' % version)\n+ if subprocess.call([sys.executable, os.path.join(os.path.split(sys.argv[0])[0], 'get-bwc-version.py'), version]) != 0:\n+ raise RuntimeError('failed to download ES version %s' % version)\n+\n+def main():\n+ '''\n+ Creates a static back compat index (.zip) with conflicting mappings.\n+ '''\n+ \n+ logging.basicConfig(format='[%(levelname)s] [%(asctime)s] %(message)s', level=logging.INFO,\n+ datefmt='%Y-%m-%d %I:%M:%S %p')\n+ logging.getLogger('elasticsearch').setLevel(logging.ERROR)\n+ logging.getLogger('urllib3').setLevel(logging.WARN)\n+\n+ tmp_dir = tempfile.mkdtemp()\n+ try:\n+ data_dir = os.path.join(tmp_dir, 'data')\n+ repo_dir = os.path.join(tmp_dir, 'repo')\n+ logging.info('Temp data dir: %s' % data_dir)\n+ logging.info('Temp repo dir: %s' % repo_dir)\n+\n+ version = '1.7.0'\n+ classifier = 'conflicting-mappings-%s' % version\n+ index_name = 'index-%s' % classifier\n+\n+ # Download old ES releases if necessary:\n+ release_dir = os.path.join('backwards', 'elasticsearch-%s' % version)\n+ if not os.path.exists(release_dir):\n+ fetch_version(version)\n+\n+ node = create_bwc_index.start_node(version, release_dir, data_dir, repo_dir, cluster_name=index_name)\n+ client = create_bwc_index.create_client()\n+\n+ put_conflicting_mappings(client, index_name)\n+ create_bwc_index.shutdown_node(node)\n+ print('%s server output:\\n%s' % (version, node.stdout.read().decode('utf-8')))\n+ node = None\n+ create_bwc_index.compress_index(classifier, tmp_dir, 'core/src/test/resources/org/elasticsearch/action/admin/indices/upgrade')\n+ finally:\n+ if node is not None:\n+ create_bwc_index.shutdown_node(node)\n+ shutil.rmtree(tmp_dir)\n+\n+def put_conflicting_mappings(client, index_name):\n+ client.indices.delete(index=index_name, ignore=404)\n+ logging.info('Create single shard test index')\n+\n+ mappings = {}\n+ # backwardcompat test for conflicting mappings, see #11857\n+ mappings['x'] = {\n+ 'analyzer': 'standard',\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+ }\n+ mappings['y'] = {\n+ 'analyzer': 'standard',\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"date\"\n+ }\n+ }\n+ }\n+\n+ client.indices.create(index=index_name, body={\n+ 'settings': {\n+ 'number_of_shards': 1,\n+ 'number_of_replicas': 0\n+ },\n+ 'mappings': mappings\n+ })\n+ health = client.cluster.health(wait_for_status='green', wait_for_relocating_shards=0)\n+ assert health['timed_out'] == False, 'cluster health timed out %s' % health\n+ num_docs = random.randint(2000, 3000)\n+ create_bwc_index.index_documents(client, index_name, 'doc', num_docs)\n+ logging.info('Running basic asserts on the data added')\n+ create_bwc_index.run_basic_asserts(client, index_name, 'doc', num_docs)\n+\n+if __name__ == '__main__':\n+ main()\n+ ",
"filename": "dev-tools/create_bwc_index_with_conficting_mappings.py",
"status": "added"
},
{
"diff": "@@ -25,7 +25,9 @@ def main():\n tmp_dir = tempfile.mkdtemp()\n try:\n data_dir = os.path.join(tmp_dir, 'data')\n+ repo_dir = os.path.join(tmp_dir, 'repo')\n logging.info('Temp data dir: %s' % data_dir)\n+ logging.info('Temp repo dir: %s' % repo_dir)\n \n first_version = '0.20.6'\n second_version = '0.90.6'\n@@ -36,7 +38,7 @@ def main():\n if not os.path.exists(release_dir):\n fetch_version(first_version)\n \n- node = create_bwc_index.start_node(first_version, release_dir, data_dir, cluster_name=index_name)\n+ node = create_bwc_index.start_node(first_version, release_dir, data_dir, repo_dir, cluster_name=index_name)\n client = create_bwc_index.create_client()\n \n # Creates the index & indexes docs w/ first_version:\n@@ -63,7 +65,7 @@ def main():\n fetch_version(second_version)\n \n # Now also index docs with second_version:\n- node = create_bwc_index.start_node(second_version, release_dir, data_dir, cluster_name=index_name)\n+ node = create_bwc_index.start_node(second_version, release_dir, data_dir, repo_dir, cluster_name=index_name)\n client = create_bwc_index.create_client()\n \n # If we index too many docs, the random refresh/flush causes the ancient segments to be merged away:\n@@ -102,7 +104,7 @@ def main():\n create_bwc_index.shutdown_node(node)\n print('%s server output:\\n%s' % (second_version, node.stdout.read().decode('utf-8')))\n node = None\n- create_bwc_index.compress_index('%s-and-%s' % (first_version, second_version), tmp_dir, 'src/test/resources/org/elasticsearch/rest/action/admin/indices/upgrade')\n+ create_bwc_index.compress_index('%s-and-%s' % (first_version, second_version), tmp_dir, 'core/src/test/resources/org/elasticsearch/action/admin/indices/upgrade')\n finally:\n if node is not None:\n create_bwc_index.shutdown_node(node)",
"filename": "dev-tools/create_bwc_index_with_some_ancient_segments.py",
"status": "modified"
},
{
"diff": "@@ -18,4 +18,5 @@\n indices.upgrade:\n index: test_index\n \n- - match: {upgraded_indices.test_index: '/(\\d\\.)+\\d/'}\n+ - match: {upgraded_indices.test_index.oldest_lucene_segment_version: '/(\\d\\.)+\\d/'}\n+ - is_true: upgraded_indices.test_index.upgrade_version",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.upgrade/10_basic.yaml",
"status": "modified"
}
]
} |
{
"body": "The equalTo logic of ShardRouting doesn't take version and unassignedInfo into the account when compares shard routings. Since cluster state diff relies on equal to detect the changes that needs to be sent to other cluster, this omission might lead to changes not being properly propagated to other nodes in the cluster.\n",
"comments": [
{
"body": "@bleskes could you take a look when you have a chance? Somehow, adding version to the equal triggers retry failures in RecoveryPercolatorTests. Any idea why?\n",
"created_at": "2015-07-21T22:25:53Z"
},
{
"body": "I had a look. like the change (didn’t do a proper review). The source of trouble was IndicesClusterStateService being confused by the new equal semantics. I fixed this in #12397 . Once it’s in we can try again..\n\n> On 22 Jul 2015, at 00:26, Igor Motov notifications@github.com wrote:\n> \n> @bleskes could you take a look when you have a chance? Somehow, adding version to the equal triggers retry failures in RecoveryPercolatorTests. Any idea why?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2015-07-23T10:23:56Z"
},
{
"body": "@imotov I think this needs to get in before the 2.0 beta, but needs to be rebased now that Boaz fixed the remaining issue in #12397. Can you rebase and see if the nocommit can be removed?\n",
"created_at": "2015-08-07T13:17:41Z"
},
{
"body": "@dakrone rebased, force pushed, and can use a review\n",
"created_at": "2015-08-07T21:43:43Z"
},
{
"body": "Left a couple of comments about the testing methods, but the actual diff code looks good to me.\n",
"created_at": "2015-08-07T21:56:55Z"
},
{
"body": "LGTM\n",
"created_at": "2015-08-10T15:53:44Z"
}
],
"number": 12387,
"title": "Changes in unassigned info and version might not be transferred as part of cluster state diffs"
} | {
"body": "#12242 introduced a unique id for an assignment of shard to a node. We should use these id's to drive the decisions made by IndicesClusterStateService when processing the new cluster state sent by the master. If the local shard has a different allocation id than the new cluster state, the shard will be removed and a new one will be created. This fixes a couple of subtle bugs, most notably a node previously got confused if an incoming cluster state had a newly allocated shard in the initializing state and the local copy was started (which can happen if cluster state updates are bulk processed). In that case, the node have previously re-used the local copy instead of initializing a new one.\n\n Also, as set of utility methods was introduced on ShardRouting to do various types of matching with other shard routings, giving control about what exactly should be matched (same shard id, same allocation id, all but version and shard info etc.). This is useful here, but also prepares the grounds for the change needed in #12387 (making ShardRouting.equals be strict and perform exact equality).\n",
"number": 12397,
"review_comments": [
{
"body": "@dakrone can you please verify the above?\n",
"created_at": "2015-07-22T13:26:14Z"
},
{
"body": "The previous behavior was that the service removed the shard if it was a shadow replica being promoted to primary, but it looks like this behavior was removed. Is it moved somewhere else?\n",
"created_at": "2015-07-22T14:18:23Z"
},
{
"body": "\"current shard has to be started inorder to relocated \" -> \"current shard has to be started in order to be relocated \"\n",
"created_at": "2015-07-22T14:40:33Z"
},
{
"body": "Okay, I tested this locally, it looks like it works differently with the line:\n\n```\n[2015-07-22 08:54:10,744][DEBUG][org.elasticsearch.indices.cluster] [node_t1] [test][0] removing shard (different instance of it allocated on this node, current [[test][0], node[nNCDLISFRu2ncr-QCiYPgQ], [R], v[4], s[STARTED], a[id=93HrpsfMRWybeCn-NxhBjw]], global [[test][0], node[nNCDLISFRu2ncr-QCiYPgQ], [P], v[6], s[INITIALIZING], a[id=g-oZ-7uiRTmDxgLnTdQHjg], unassigned_info[[reason=REINITIALIZED], at[2015-07-22T14:54:10.743Z]]])[2015-07-22 08:54:10,744][DEBUG][org.elasticsearch.indices.cluster] [node_t1] [test][0] removing shard (different instance of it allocated on this node, current [[test][0], node[nNCDLISFRu2ncr-QCiYPgQ], [R], v[4], s[STARTED], a[id=93HrpsfMRWybeCn-NxhBjw]], global [[test][0], node[nNCDLISFRu2ncr-QCiYPgQ], [P], v[6], s[INITIALIZING], a[id=g-oZ-7uiRTmDxgLnTdQHjg], unassigned_info[[reason=REINITIALIZED], at[2015-07-22T14:54:10.743Z]]])\n```\n\nIs my understanding correct?\n",
"created_at": "2015-07-22T15:30:38Z"
},
{
"body": "yeah, my reasoning (which I wanted double checked) is that the master always generates a new allocation id in this case and thus the shard is cleaned by the allocation id check. No need to have special shadow replica handling (but I did add an assert and a comment). \n",
"created_at": "2015-07-22T21:09:27Z"
},
{
"body": "Debugging leftovers\n",
"created_at": "2015-07-23T16:16:51Z"
}
],
"title": "Adapt IndicesClusterStateService to use allocation ids"
} | {
"commits": [
{
"message": "Core: Adapt IndicesClusterStateService to use allocation ids\n\n#12242 introduced a unique id for an assignment of shard to a node. We should use these id's to drive the decisions made by IndicesClusterStateService when processing the new cluster state sent by the master. If the local shard has a different allocation id than the new cluster state, the shard will be removed and a new one will be created. This fixes a couple of subtle bugs, most notably a node previously got confused if an incoming cluster state had a newly allocated shard in the initializing state and the local copy was started (which can happen if cluster state updates are bulk processed). In that case, the node have previously re-used the local copy instead of initializing a new one.\n\n Also, as set of utility methods was introduced on ShardRouting to do various types of matching with other shard routings, giving control about what exactly should be matched (same shard id, same allocation id, all but version and shard info etc.). This is useful here, but also prepares the grounds for the change needed in #12387 (making ShardRouting.equals be strict and perform exact equality)."
},
{
"message": "feedback"
}
],
"files": [
{
"diff": "@@ -83,18 +83,18 @@ public static AllocationId newRelocation(AllocationId allocationId) {\n \n /**\n * Creates a new allocation id representing a cancelled relocation.\n- *\n+ * <p/>\n * Note that this is expected to be called on the allocation id\n * of the *source* shard\n- * */\n+ */\n public static AllocationId cancelRelocation(AllocationId allocationId) {\n assert allocationId.getRelocationId() != null;\n return new AllocationId(allocationId.getId(), null);\n }\n \n /**\n * Creates a new allocation id finalizing a relocation.\n- *\n+ * <p/>\n * Note that this is expected to be called on the allocation id\n * of the *target* shard and thus it only needs to clear the relocating id.\n */\n@@ -120,9 +120,16 @@ public String getRelocationId() {\n \n @Override\n public boolean equals(Object o) {\n- if (this == o) return true;\n+ if (this == o) {\n+ return true;\n+ }\n+ if (o == null) {\n+ return false;\n+ }\n AllocationId that = (AllocationId) o;\n- if (!id.equals(that.id)) return false;\n+ if (!id.equals(that.id)) {\n+ return false;\n+ }\n return !(relocationId != null ? !relocationId.equals(that.relocationId) : that.relocationId != null);\n \n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/AllocationId.java",
"status": "modified"
},
{
"diff": "@@ -88,7 +88,7 @@ public int size() {\n void add(ShardRouting shard) {\n // TODO use Set with ShardIds for faster lookup.\n for (ShardRouting shardRouting : shards) {\n- if (shardRouting.shardId().equals(shard.shardId())) {\n+ if (shardRouting.isSameShard(shard)) {\n throw new IllegalStateException(\"Trying to add a shard [\" + shard.shardId().index().name() + \"][\" + shard.shardId().id() + \"] to a node [\" + nodeId + \"] where it already exists\");\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingNode.java",
"status": "modified"
},
{
"diff": "@@ -420,7 +420,7 @@ void initialize(String nodeId) {\n void relocate(String relocatingNodeId) {\n ensureNotFrozen();\n version++;\n- assert state == ShardRoutingState.STARTED : this;\n+ assert state == ShardRoutingState.STARTED : \"current shard has to be started in order to be relocated \" + this;\n state = ShardRoutingState.RELOCATING;\n this.relocatingNodeId = relocatingNodeId;\n this.allocationId = AllocationId.newRelocation(allocationId);\n@@ -467,7 +467,7 @@ void moveToStarted() {\n restoreSource = null;\n unassignedInfo = null; // we keep the unassigned data until the shard is started\n if (allocationId.getRelocationId() != null) {\n- // target relocation\n+ // relocation target\n allocationId = AllocationId.finishRelocation(allocationId);\n }\n state = ShardRoutingState.STARTED;\n@@ -498,44 +498,120 @@ void moveFromPrimary() {\n primary = false;\n }\n \n- @Override\n- public boolean equals(Object o) {\n- if (this == o) {\n- return true;\n- }\n- // we check on instanceof so we also handle the ImmutableShardRouting case as well\n- if (o == null || !(o instanceof ShardRouting)) {\n- return false;\n- }\n- ShardRouting that = (ShardRouting) o;\n+ /** returns true if this routing has the same shardId as another */\n+ public boolean isSameShard(ShardRouting other) {\n+ return index.equals(other.index) && shardId == other.shardId;\n+ }\n+\n+ /**\n+ * returns true if this routing has the same allocation ID as another.\n+ * <p/>\n+ * Note: if both shard routing has a null as their {@link #allocationId()}, this method returns false as the routing describe\n+ * no allocation at all..\n+ **/\n+ public boolean isSameAllocation(ShardRouting other) {\n+ boolean b = this.allocationId != null && other.allocationId != null && this.allocationId.getId().equals(other.allocationId.getId());\n+ assert b == false || this.currentNodeId.equals(other.currentNodeId) : \"ShardRoutings have the same allocation id but not the same node. This [\" + this + \"], other [\" + other + \"]\";\n+ return b;\n+ }\n+\n+ /** returns true if the routing is the relocation target of the given routing */\n+ public boolean isRelocationTargetOf(ShardRouting other) {\n+ boolean b = this.allocationId != null && other.allocationId != null && this.state == ShardRoutingState.INITIALIZING &&\n+ this.allocationId.getId().equals(other.allocationId.getRelocationId());\n+\n+ assert b == false || other.state == ShardRoutingState.RELOCATING :\n+ \"ShardRouting is a relocation target but the source shard state isn't relocating. This [\" + this + \"], other [\" + other + \"]\";\n+\n+\n+ assert b == false || other.allocationId.getId().equals(this.allocationId.getRelocationId()) :\n+ \"ShardRouting is a relocation target but the source id isn't equal to source's allocationId.getRelocationId. This [\" + this + \"], other [\" + other + \"]\";\n+\n+ assert b == false || other.currentNodeId().equals(this.relocatingNodeId) :\n+ \"ShardRouting is a relocation target but source current node id isn't equal to target relocating node. This [\" + this + \"], other [\" + other + \"]\";\n+\n+ assert b == false || this.currentNodeId().equals(other.relocatingNodeId) :\n+ \"ShardRouting is a relocation target but current node id isn't equal to source relocating node. This [\" + this + \"], other [\" + other + \"]\";\n+\n+ assert b == false || isSameShard(other) :\n+ \"ShardRouting is a relocation target but both routings are not of the same shard. 
This [\" + this + \"], other [\" + other + \"]\";\n+\n+ assert b == false || this.primary == other.primary :\n+ \"ShardRouting is a relocation target but primary flag is different. This [\" + this + \"], target [\" + other + \"]\";\n+\n+ return b;\n+ }\n+\n+ /** returns true if the routing is the relocation source for the given routing */\n+ public boolean isRelocationSourceOf(ShardRouting other) {\n+ boolean b = this.allocationId != null && other.allocationId != null && other.state == ShardRoutingState.INITIALIZING &&\n+ other.allocationId.getId().equals(this.allocationId.getRelocationId());\n+\n+ assert b == false || this.state == ShardRoutingState.RELOCATING :\n+ \"ShardRouting is a relocation source but shard state isn't relocating. This [\" + this + \"], other [\" + other + \"]\";\n+\n+\n+ assert b == false || this.allocationId.getId().equals(other.allocationId.getRelocationId()) :\n+ \"ShardRouting is a relocation source but the allocation id isn't equal to other.allocationId.getRelocationId. This [\" + this + \"], other [\" + other + \"]\";\n+\n+ assert b == false || this.currentNodeId().equals(other.relocatingNodeId) :\n+ \"ShardRouting is a relocation source but current node isn't equal to other's relocating node. This [\" + this + \"], other [\" + other + \"]\";\n \n- if (primary != that.primary) {\n+ assert b == false || other.currentNodeId().equals(this.relocatingNodeId) :\n+ \"ShardRouting is a relocation source but relocating node isn't equal to other's current node. This [\" + this + \"], other [\" + other + \"]\";\n+\n+ assert b == false || isSameShard(other) :\n+ \"ShardRouting is a relocation source but both routings are not of the same shard. This [\" + this + \"], target [\" + other + \"]\";\n+\n+ assert b == false || this.primary == other.primary :\n+ \"ShardRouting is a relocation source but primary flag is different. This [\" + this + \"], target [\" + other + \"]\";\n+\n+ return b;\n+ }\n+\n+ /** returns true if the current routing is identical to the other routing in all but meta fields, i.e., version and unassigned info */\n+ public boolean equalsIgnoringMetaData(ShardRouting other) {\n+ if (primary != other.primary) {\n return false;\n }\n- if (shardId != that.shardId) {\n+ if (shardId != other.shardId) {\n return false;\n }\n- if (currentNodeId != null ? !currentNodeId.equals(that.currentNodeId) : that.currentNodeId != null) {\n+ if (currentNodeId != null ? !currentNodeId.equals(other.currentNodeId) : other.currentNodeId != null) {\n return false;\n }\n- if (index != null ? !index.equals(that.index) : that.index != null) {\n+ if (index != null ? !index.equals(other.index) : other.index != null) {\n return false;\n }\n- if (relocatingNodeId != null ? !relocatingNodeId.equals(that.relocatingNodeId) : that.relocatingNodeId != null) {\n+ if (relocatingNodeId != null ? !relocatingNodeId.equals(other.relocatingNodeId) : other.relocatingNodeId != null) {\n return false;\n }\n- if (allocationId != null ? !allocationId.equals(that.allocationId) : that.allocationId != null) {\n+ if (allocationId != null ? !allocationId.equals(other.allocationId) : other.allocationId != null) {\n return false;\n }\n- if (state != that.state) {\n+ if (state != other.state) {\n return false;\n }\n- if (restoreSource != null ? !restoreSource.equals(that.restoreSource) : that.restoreSource != null) {\n+ if (restoreSource != null ? 
!restoreSource.equals(other.restoreSource) : other.restoreSource != null) {\n return false;\n }\n return true;\n }\n \n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) {\n+ return true;\n+ }\n+ // we check on instanceof so we also handle the ImmutableShardRouting case as well\n+ if (o == null || !(o instanceof ShardRouting)) {\n+ return false;\n+ }\n+ ShardRouting that = (ShardRouting) o;\n+ // TODO: add version + unassigned info check. see #12387\n+ return equalsIgnoringMetaData(that);\n+ }\n+\n private long hashVersion = version - 1;\n private int hashCode = 0;\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java",
"status": "modified"
},
{
"diff": "@@ -338,7 +338,7 @@ private boolean applyStartedShards(RoutingNodes routingNodes, Iterable<? extends\n }\n \n for (ShardRouting shard : currentRoutingNode) {\n- if (shard.allocationId().getId().equals(startedShard.allocationId().getId())) {\n+ if (shard.isSameAllocation(startedShard)) {\n if (shard.active()) {\n logger.trace(\"{} shard is already started, ignoring (routing: {})\", startedShard.shardId(), startedShard);\n } else {\n@@ -363,8 +363,7 @@ private boolean applyStartedShards(RoutingNodes routingNodes, Iterable<? extends\n if (sourceRoutingNode != null) {\n while (sourceRoutingNode.hasNext()) {\n ShardRouting shard = sourceRoutingNode.next();\n- if (shard.allocationId().getId().equals(startedShard.allocationId().getRelocationId())) {\n- assert shard.relocating() : \"source shard for relocation is not marked as relocating. source \" + shard + \", started target \" + startedShard;\n+ if (shard.isRelocationSourceOf(startedShard)) {\n dirty = true;\n sourceRoutingNode.remove();\n break;\n@@ -397,7 +396,7 @@ private boolean applyFailedShard(RoutingAllocation allocation, ShardRouting fail\n boolean matchedShard = false;\n while (matchedNode.hasNext()) {\n ShardRouting routing = matchedNode.next();\n- if (routing.allocationId().getId().equals(failedShard.allocationId().getId())) {\n+ if (routing.isSameAllocation(failedShard)) {\n matchedShard = true;\n logger.debug(\"{} failed shard {} found in routingNodes, failing it ({})\", failedShard.shardId(), failedShard, unassignedInfo.shortSummary());\n break;\n@@ -428,7 +427,7 @@ private boolean applyFailedShard(RoutingAllocation allocation, ShardRouting fail\n RoutingNode relocatingFromNode = routingNodes.node(failedShard.relocatingNodeId());\n if (relocatingFromNode != null) {\n for (ShardRouting shardRouting : relocatingFromNode) {\n- if (shardRouting.allocationId().getId().equals(failedShard.allocationId().getRelocationId())) {\n+ if (shardRouting.isRelocationSourceOf(failedShard)) {\n logger.trace(\"{}, resolved source to [{}]. canceling relocation ... ({})\", failedShard.shardId(), shardRouting, unassignedInfo.shortSummary());\n routingNodes.cancelRelocation(shardRouting);\n break;\n@@ -441,18 +440,15 @@ private boolean applyFailedShard(RoutingAllocation allocation, ShardRouting fail\n // and the shard copy needs to be marked as unassigned\n \n if (failedShard.relocatingNodeId() != null) {\n- // handle relocation source shards. we need to find the target initializing shard that is recovering from, and remove it...\n+ // handle relocation source shards. we need to find the target initializing shard that is recovering, and remove it...\n assert failedShard.initializing() == false; // should have been dealt with and returned\n assert failedShard.relocating();\n \n RoutingNodes.RoutingNodeIterator initializingNode = routingNodes.routingNodeIter(failedShard.relocatingNodeId());\n if (initializingNode != null) {\n while (initializingNode.hasNext()) {\n ShardRouting shardRouting = initializingNode.next();\n- if (shardRouting.allocationId().getId().equals(failedShard.allocationId().getRelocationId())) {\n- assert shardRouting.initializing() : shardRouting;\n- assert failedShard.allocationId().getId().equals(shardRouting.allocationId().getRelocationId())\n- : \"found target shard's allocation relocation id is different than source\";\n+ if (shardRouting.isRelocationTargetOf(failedShard)) {\n logger.trace(\"{} is removed due to the failure of the source shard\", shardRouting);\n initializingNode.remove();\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java",
"status": "modified"
},
{
"diff": "@@ -177,7 +177,7 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain)\n RoutingNode relocatingFromNode = allocation.routingNodes().node(shardRouting.relocatingNodeId());\n if (relocatingFromNode != null) {\n for (ShardRouting fromShardRouting : relocatingFromNode) {\n- if (fromShardRouting.shardId().equals(shardRouting.shardId()) && fromShardRouting.state() == RELOCATING) {\n+ if (fromShardRouting.isSameShard(shardRouting) && fromShardRouting.state() == RELOCATING) {\n allocation.routingNodes().cancelRelocation(fromShardRouting);\n break;\n }\n@@ -201,7 +201,7 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain)\n if (initializingNode != null) {\n while (initializingNode.hasNext()) {\n ShardRouting initializingShardRouting = initializingNode.next();\n- if (initializingShardRouting.shardId().equals(shardRouting.shardId()) && initializingShardRouting.initializing()) {\n+ if (initializingShardRouting.isRelocationTargetOf(shardRouting)) {\n initializingNode.remove();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/CancelAllocationCommand.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@\n \n import com.google.common.base.Charsets;\n import com.google.common.base.Preconditions;\n-\n import org.apache.lucene.codecs.PostingsFormat;\n import org.apache.lucene.index.CheckIndex;\n import org.apache.lucene.search.QueryCachingPolicy;\n@@ -337,14 +336,16 @@ public void updateRoutingEntry(final ShardRouting newRouting, final boolean pers\n if (!newRouting.shardId().equals(shardId())) {\n throw new IllegalArgumentException(\"Trying to set a routing entry with shardId [\" + newRouting.shardId() + \"] on a shard with shardId [\" + shardId() + \"]\");\n }\n+ if ((currentRouting == null || newRouting.isSameAllocation(currentRouting)) == false) {\n+ throw new IllegalArgumentException(\"Trying to set a routing entry with a different allocation. Current \" + currentRouting + \", new \" + newRouting);\n+ }\n try {\n if (currentRouting != null) {\n- assert newRouting.version() > currentRouting.version() : \"expected: \" + newRouting.version() + \" > \" + currentRouting.version();\n if (!newRouting.primary() && currentRouting.primary()) {\n logger.warn(\"suspect illegal state: trying to move shard from primary mode to replica mode\");\n }\n- // if its the same routing, return\n- if (currentRouting.equals(newRouting)) {\n+ // if its the same routing except for some metadata info, return\n+ if (currentRouting.equalsIgnoringMetaData(newRouting)) {\n this.shardRouting = newRouting; // might have a new version\n return;\n }\n@@ -723,12 +724,12 @@ public org.apache.lucene.util.Version upgrade(UpgradeRequest upgrade) {\n \n public org.apache.lucene.util.Version minimumCompatibleVersion() {\n org.apache.lucene.util.Version luceneVersion = null;\n- for(Segment segment : engine().segments(false)) {\n+ for (Segment segment : engine().segments(false)) {\n if (luceneVersion == null || luceneVersion.onOrAfter(segment.getVersion())) {\n luceneVersion = segment.getVersion();\n }\n }\n- return luceneVersion == null ? Version.indexCreated(indexSettings).luceneVersion : luceneVersion;\n+ return luceneVersion == null ? Version.indexCreated(indexSettings).luceneVersion : luceneVersion;\n }\n \n public SnapshotIndexCommit snapshotIndex(boolean flushFirst) throws EngineException {\n@@ -1113,7 +1114,7 @@ public void onRefreshSettings(Settings settings) {\n }\n \n final int maxMergeCount = settings.getAsInt(MergeSchedulerConfig.MAX_MERGE_COUNT, mergeSchedulerConfig.getMaxMergeCount());\n- if (maxMergeCount != mergeSchedulerConfig.getMaxMergeCount()) {\n+ if (maxMergeCount != mergeSchedulerConfig.getMaxMergeCount()) {\n logger.info(\"updating [{}] from [{}] to [{}]\", MergeSchedulerConfig.MAX_MERGE_COUNT, mergeSchedulerConfig.getMaxMergeCount(), maxMergeCount);\n mergeSchedulerConfig.setMaxMergeCount(maxMergeCount);\n change = true;",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -507,7 +507,7 @@ private void applyNewOrUpdatedShards(final ClusterChangedEvent event) {\n // for example: a shard that recovers from one node and now needs to recover to another node,\n // or a replica allocated and then allocating a primary because the primary failed on another node\n boolean shardHasBeenRemoved = false;\n- if (currentRoutingEntry.initializing() && shardRouting.initializing() && !currentRoutingEntry.equals(shardRouting)) {\n+ if (currentRoutingEntry.isSameAllocation(shardRouting) == false) {\n logger.debug(\"[{}][{}] removing shard (different instance of it allocated on this node, current [{}], global [{}])\", shardRouting.index(), shardRouting.id(), currentRoutingEntry, shardRouting);\n // closing the shard will also cancel any ongoing recovery.\n indexService.removeShard(shardRouting.id(), \"removing shard (different instance of it allocated on this node)\");\n@@ -526,22 +526,20 @@ public boolean apply(@Nullable RecoveryStatus status) {\n // closing the shard will also cancel any ongoing recovery.\n indexService.removeShard(shardRouting.id(), \"removing shard (recovery source node changed)\");\n shardHasBeenRemoved = true;\n-\n }\n }\n- if (shardHasBeenRemoved == false && (shardRouting.equals(indexShard.routingEntry()) == false || shardRouting.version() > indexShard.routingEntry().version())) {\n- if (shardRouting.primary() && indexShard.routingEntry().primary() == false && shardRouting.initializing() && indexShard.allowsPrimaryPromotion() == false) {\n- logger.debug(\"{} reinitialize shard on primary promotion\", indexShard.shardId());\n- indexService.removeShard(shardId, \"promoted to primary\");\n- } else {\n- // if we happen to remove the shardRouting by id above we don't need to jump in here!\n- indexShard.updateRoutingEntry(shardRouting, event.state().blocks().disableStatePersistence() == false);\n- }\n+\n+ if (shardHasBeenRemoved == false) {\n+ // shadow replicas do not support primary promotion. The master would reinitialize the shard, giving it a new allocation, meaning we should be there.\n+ // nocommit: double check check this\n+ assert (shardRouting.primary() && currentRoutingEntry.primary() == false) == false || indexShard.allowsPrimaryPromotion() :\n+ \"shard for doesn't support primary promotion but master promoted it with changing allocation. New routing \" + shardRouting + \", current routing \" + currentRoutingEntry;\n+ indexShard.updateRoutingEntry(shardRouting, event.state().blocks().disableStatePersistence() == false);\n }\n }\n \n if (shardRouting.initializing()) {\n- applyInitializingShard(event.state(),indexMetaData, shardRouting);\n+ applyInitializingShard(event.state(), indexMetaData, shardRouting);\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.test.ElasticsearchTestCase;\n@@ -48,6 +49,177 @@ public void testFrozenAfterRead() throws IOException {\n }\n }\n \n+ public void testIsSameAllocation() {\n+ ShardRouting unassignedShard0 = TestShardRouting.newShardRouting(\"test\", 0, null, false, ShardRoutingState.UNASSIGNED, 1);\n+ ShardRouting unassignedShard1 = TestShardRouting.newShardRouting(\"test\", 1, null, false, ShardRoutingState.UNASSIGNED, 1);\n+ ShardRouting initializingShard0 = TestShardRouting.newShardRouting(\"test\", 0, \"1\", randomBoolean(), ShardRoutingState.INITIALIZING, 1);\n+ ShardRouting initializingShard1 = TestShardRouting.newShardRouting(\"test\", 1, \"1\", randomBoolean(), ShardRoutingState.INITIALIZING, 1);\n+ ShardRouting startedShard0 = new ShardRouting(initializingShard0);\n+ startedShard0.moveToStarted();\n+ ShardRouting startedShard1 = new ShardRouting(initializingShard1);\n+ startedShard1.moveToStarted();\n+\n+ // test identity\n+ assertTrue(initializingShard0.isSameAllocation(initializingShard0));\n+\n+ // test same allocation different state\n+ assertTrue(initializingShard0.isSameAllocation(startedShard0));\n+\n+ // test unassigned is false even to itself\n+ assertFalse(unassignedShard0.isSameAllocation(unassignedShard0));\n+\n+ // test different shards/nodes/state\n+ assertFalse(unassignedShard0.isSameAllocation(unassignedShard1));\n+ assertFalse(unassignedShard0.isSameAllocation(initializingShard0));\n+ assertFalse(unassignedShard0.isSameAllocation(initializingShard1));\n+ assertFalse(unassignedShard0.isSameAllocation(startedShard1));\n+ }\n+\n+ public void testIsSameShard() {\n+ ShardRouting index1Shard0a = randomShardRouting(\"index1\", 0);\n+ ShardRouting index1Shard0b = randomShardRouting(\"index1\", 0);\n+ ShardRouting index1Shard1 = randomShardRouting(\"index1\", 1);\n+ ShardRouting index2Shard0 = randomShardRouting(\"index2\", 0);\n+ ShardRouting index2Shard1 = randomShardRouting(\"index2\", 1);\n+\n+ assertTrue(index1Shard0a.isSameShard(index1Shard0a));\n+ assertTrue(index1Shard0a.isSameShard(index1Shard0b));\n+ assertFalse(index1Shard0a.isSameShard(index1Shard1));\n+ assertFalse(index1Shard0a.isSameShard(index2Shard0));\n+ assertFalse(index1Shard0a.isSameShard(index2Shard1));\n+ }\n+\n+ private ShardRouting randomShardRouting(String index, int shard) {\n+ ShardRoutingState state = randomFrom(ShardRoutingState.values());\n+ return TestShardRouting.newShardRouting(index, shard, state == ShardRoutingState.UNASSIGNED ? 
null : \"1\", state != ShardRoutingState.UNASSIGNED && randomBoolean(), state, randomInt(5));\n+ }\n+\n+ public void testIsSourceTargetRelocation() {\n+ ShardRouting unassignedShard0 = TestShardRouting.newShardRouting(\"test\", 0, null, false, ShardRoutingState.UNASSIGNED, 1);\n+ ShardRouting initializingShard0 = TestShardRouting.newShardRouting(\"test\", 0, \"node1\", randomBoolean(), ShardRoutingState.INITIALIZING, 1);\n+ ShardRouting initializingShard1 = TestShardRouting.newShardRouting(\"test\", 1, \"node1\", randomBoolean(), ShardRoutingState.INITIALIZING, 1);\n+ ShardRouting startedShard0 = new ShardRouting(initializingShard0);\n+ startedShard0.moveToStarted();\n+ ShardRouting startedShard1 = new ShardRouting(initializingShard1);\n+ startedShard1.moveToStarted();\n+ ShardRouting sourceShard0a = new ShardRouting(startedShard0);\n+ sourceShard0a.relocate(\"node2\");\n+ ShardRouting targetShard0a = sourceShard0a.buildTargetRelocatingShard();\n+ ShardRouting sourceShard0b = new ShardRouting(startedShard0);\n+ sourceShard0b.relocate(\"node2\");\n+ ShardRouting sourceShard1 = new ShardRouting(startedShard1);\n+ sourceShard1.relocate(\"node2\");\n+\n+ // test true scenarios\n+ assertTrue(targetShard0a.isRelocationTargetOf(sourceShard0a));\n+ assertTrue(sourceShard0a.isRelocationSourceOf(targetShard0a));\n+\n+ // test two shards are not mixed\n+ assertFalse(targetShard0a.isRelocationTargetOf(sourceShard1));\n+ assertFalse(sourceShard1.isRelocationSourceOf(targetShard0a));\n+\n+ // test two allocations are not mixed\n+ assertFalse(targetShard0a.isRelocationTargetOf(sourceShard0b));\n+ assertFalse(sourceShard0b.isRelocationSourceOf(targetShard0a));\n+\n+ // test different shard states\n+ assertFalse(targetShard0a.isRelocationTargetOf(unassignedShard0));\n+ assertFalse(sourceShard0a.isRelocationTargetOf(unassignedShard0));\n+ assertFalse(unassignedShard0.isRelocationSourceOf(targetShard0a));\n+ assertFalse(unassignedShard0.isRelocationSourceOf(sourceShard0a));\n+\n+ assertFalse(targetShard0a.isRelocationTargetOf(initializingShard0));\n+ assertFalse(sourceShard0a.isRelocationTargetOf(initializingShard0));\n+ assertFalse(initializingShard0.isRelocationSourceOf(targetShard0a));\n+ assertFalse(initializingShard0.isRelocationSourceOf(sourceShard0a));\n+\n+ assertFalse(targetShard0a.isRelocationTargetOf(startedShard0));\n+ assertFalse(sourceShard0a.isRelocationTargetOf(startedShard0));\n+ assertFalse(startedShard0.isRelocationSourceOf(targetShard0a));\n+ assertFalse(startedShard0.isRelocationSourceOf(sourceShard0a));\n+ }\n+\n+ public void testEqualsIgnoringVersion() {\n+ ShardRouting routing = randomShardRouting(\"test\", 0);\n+\n+ ShardRouting otherRouting = new ShardRouting(routing);\n+\n+ assertTrue(\"expected equality\\nthis \" + routing + \",\\nother \" + otherRouting, routing.equalsIgnoringMetaData(otherRouting));\n+ otherRouting = new ShardRouting(routing, 1);\n+ assertTrue(\"expected equality\\nthis \" + routing + \",\\nother \" + otherRouting, routing.equalsIgnoringMetaData(otherRouting));\n+\n+\n+ otherRouting = new ShardRouting(routing);\n+ Integer[] changeIds = new Integer[]{0, 1, 2, 3, 4, 5, 6};\n+ for (int changeId : randomSubsetOf(randomIntBetween(1, changeIds.length), changeIds)) {\n+ switch (changeId) {\n+ case 0:\n+ // change index\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index() + \"a\", otherRouting.id(), otherRouting.currentNodeId(), otherRouting.relocatingNodeId(),\n+ otherRouting.restoreSource(), otherRouting.primary(), otherRouting.state(), 
otherRouting.version(), otherRouting.unassignedInfo());\n+ break;\n+ case 1:\n+ // change shard id\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index(), otherRouting.id() + 1, otherRouting.currentNodeId(), otherRouting.relocatingNodeId(),\n+ otherRouting.restoreSource(), otherRouting.primary(), otherRouting.state(), otherRouting.version(), otherRouting.unassignedInfo());\n+ break;\n+ case 2:\n+ // change current node\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index(), otherRouting.id(), otherRouting.currentNodeId() == null ? \"1\" : otherRouting.currentNodeId() + \"_1\", otherRouting.relocatingNodeId(),\n+ otherRouting.restoreSource(), otherRouting.primary(), otherRouting.state(), otherRouting.version(), otherRouting.unassignedInfo());\n+ break;\n+ case 3:\n+ // change relocating node\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index(), otherRouting.id(), otherRouting.currentNodeId(),\n+ otherRouting.relocatingNodeId() == null ? \"1\" : otherRouting.relocatingNodeId() + \"_1\",\n+ otherRouting.restoreSource(), otherRouting.primary(), otherRouting.state(), otherRouting.version(), otherRouting.unassignedInfo());\n+ break;\n+ case 4:\n+ // change restore source\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index(), otherRouting.id(), otherRouting.currentNodeId(), otherRouting.relocatingNodeId(),\n+ otherRouting.restoreSource() == null ? new RestoreSource(new SnapshotId(\"test\", \"s1\"), Version.CURRENT, \"test\") :\n+ new RestoreSource(otherRouting.restoreSource().snapshotId(), Version.CURRENT, otherRouting.index() + \"_1\"),\n+ otherRouting.primary(), otherRouting.state(), otherRouting.version(), otherRouting.unassignedInfo());\n+ break;\n+ case 5:\n+ // change primary flag\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index(), otherRouting.id(), otherRouting.currentNodeId(), otherRouting.relocatingNodeId(),\n+ otherRouting.restoreSource(), otherRouting.primary() == false, otherRouting.state(), otherRouting.version(), otherRouting.unassignedInfo());\n+ break;\n+ case 6:\n+ // change state\n+ ShardRoutingState newState;\n+ do {\n+ newState = randomFrom(ShardRoutingState.values());\n+ } while (newState == otherRouting.state());\n+\n+ UnassignedInfo unassignedInfo = otherRouting.unassignedInfo();\n+ if (unassignedInfo == null && (newState == ShardRoutingState.UNASSIGNED || newState == ShardRoutingState.INITIALIZING)) {\n+ unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"test\");\n+ }\n+\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index(), otherRouting.id(), otherRouting.currentNodeId(), otherRouting.relocatingNodeId(),\n+ otherRouting.restoreSource(), otherRouting.primary(), newState, otherRouting.version(), unassignedInfo);\n+ break;\n+ }\n+\n+ if (randomBoolean()) {\n+ // change version\n+ otherRouting = new ShardRouting(otherRouting, otherRouting.version() + 1);\n+ }\n+\n+ if (randomBoolean()) {\n+ // change unassigned info\n+ otherRouting = TestShardRouting.newShardRouting(otherRouting.index(), otherRouting.id(), otherRouting.currentNodeId(), otherRouting.relocatingNodeId(),\n+ otherRouting.restoreSource(), otherRouting.primary(), otherRouting.state(), otherRouting.version(),\n+ otherRouting.unassignedInfo() == null ? 
new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"test\") :\n+ new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, otherRouting.unassignedInfo().getMessage() + \"_1\"));\n+ }\n+\n+ logger.debug(\"comparing\\nthis {} to\\nother {}\", routing, otherRouting);\n+ assertFalse(\"expected non-equality\\nthis \" + routing + \",\\nother \" + otherRouting, routing.equalsIgnoringMetaData(otherRouting));\n+ }\n+ }\n \n public void testFrozenOnRoutingTable() {\n MetaData metaData = MetaData.builder()",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/ShardRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -19,26 +19,28 @@\n \n package org.elasticsearch.cluster.routing;\n \n+import org.elasticsearch.test.ElasticsearchTestCase;\n+\n /**\n * A helper that allows to create shard routing instances within tests, while not requiring to expose\n * different simplified constructors on the ShardRouting itself.\n */\n public class TestShardRouting {\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, boolean primary, ShardRoutingState state, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, null, null, primary, state, version, null, buildAllocationId(state), true);\n+ return new ShardRouting(index, shardId, currentNodeId, null, null, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, String relocatingNodeId, boolean primary, ShardRoutingState state, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, null, buildAllocationId(state), true);\n+ return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, String relocatingNodeId, boolean primary, ShardRoutingState state, AllocationId allocationId, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, null, allocationId, true);\n+ return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, null, primary, state, version, buildUnassignedInfo(state), allocationId, true);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId, String relocatingNodeId, RestoreSource restoreSource, boolean primary, ShardRoutingState state, long version) {\n- return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state, version, null, buildAllocationId(state), true);\n+ return new ShardRouting(index, shardId, currentNodeId, relocatingNodeId, restoreSource, primary, state, version, buildUnassignedInfo(state), buildAllocationId(state), true);\n }\n \n public static ShardRouting newShardRouting(String index, int shardId, String currentNodeId,\n@@ -61,4 +63,17 @@ private static AllocationId buildAllocationId(ShardRoutingState state) {\n throw new IllegalStateException(\"illegal state\");\n }\n }\n+\n+ private static UnassignedInfo buildUnassignedInfo(ShardRoutingState state) {\n+ switch (state) {\n+ case UNASSIGNED:\n+ case INITIALIZING:\n+ return new UnassignedInfo(ElasticsearchTestCase.randomFrom(UnassignedInfo.Reason.values()), \"auto generated for test\");\n+ case STARTED:\n+ case RELOCATING:\n+ return null;\n+ default:\n+ throw new IllegalStateException(\"illegal state\");\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/TestShardRouting.java",
"status": "modified"
},
{
"diff": "@@ -205,7 +205,7 @@ public void testDeleteShardState() throws IOException {\n ShardStateMetaData shardStateMetaData = load(logger, env.availableShardPaths(shard.shardId));\n assertEquals(shardStateMetaData, getShardStateMetadata(shard));\n \n- routing = TestShardRouting.newShardRouting(shard.shardId.index().getName(), shard.shardId.id(), routing.currentNodeId(), null, null, routing.primary(), ShardRoutingState.INITIALIZING, shard.shardRouting.version() + 1);\n+ routing = TestShardRouting.newShardRouting(shard.shardId.index().getName(), shard.shardId.id(), routing.currentNodeId(), null, routing.primary(), ShardRoutingState.INITIALIZING, shard.shardRouting.allocationId(), shard.shardRouting.version() + 1);\n shard.updateRoutingEntry(routing, true);\n shard.deleteShardState();\n ",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
}
]
} |
{
"body": "Metrics for scrolls (open scroll contexts, time scroll contexts held open, and completed scroll contexts) were added to the search stats in #9109. The time scroll contexts were held open is calculated at millisecond resolution using the time the context was created on the coordinating node. This can lead to two issues:\n1. Millisecond resolution is too coarse to accurately capture the time that very short-lived scroll contexts are held open, especially on systems with low-granualarity clocks.\n2. Discrepancies in the clocks between the coordinating node and the nodes where the scrolls are executed will lead to inaccuracies in measuring the time that the context is held open at the shard level.\n\nInstead, an origin time in nanosecond resolution that is calculated on the executing node should be used to measure the time that scroll contexts are held open.\n",
"comments": [],
"number": 12345,
"title": "search.scroll_time_in_millis might not be accurate"
} | {
"body": "Use time with nanosecond resolution calculated at the executing node to measure the time that contexts are held open\n\nCloses #12345\n",
"number": 12346,
"review_comments": [],
"title": "Use time with nanosecond resolution calculated at the executing node"
} | {
"commits": [
{
"message": "Use time with nanosecond resolution calculated at the executing node to measure the time that contexts are held open\n\nCloses #12345"
}
],
"files": [
{
"diff": "@@ -175,7 +175,7 @@ public void onNewScrollContext(SearchContext context) {\n \n public void onFreeScrollContext(SearchContext context) {\n totalStats.scrollCurrent.dec();\n- totalStats.scrollMetric.inc(TimeUnit.MILLISECONDS.toNanos(System.currentTimeMillis() - context.nowInMillis()));\n+ totalStats.scrollMetric.inc(System.nanoTime() - context.getOriginNanoTime());\n }\n \n public void onRefreshSettings(Settings settings) {",
"filename": "core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java",
"status": "modified"
},
{
"diff": "@@ -98,6 +98,7 @@ public class PercolateContext extends SearchContext {\n private final ConcurrentMap<BytesRef, Query> percolateQueries;\n private final int numberOfShards;\n private final Query aliasFilter;\n+ private final long originNanoTime = System.nanoTime();\n private final long startTime;\n private String[] types;\n \n@@ -337,6 +338,11 @@ public SearchContext queryBoost(float queryBoost) {\n throw new UnsupportedOperationException();\n }\n \n+ @Override\n+ public long getOriginNanoTime() {\n+ return originNanoTime;\n+ }\n+\n @Override\n protected long nowInMillisImpl() {\n return startTime;",
"filename": "core/src/main/java/org/elasticsearch/percolator/PercolateContext.java",
"status": "modified"
},
{
"diff": "@@ -122,6 +122,7 @@ public class DefaultSearchContext extends SearchContext {\n private boolean queryRewritten;\n private volatile long keepAlive;\n private ScoreDoc lastEmittedDoc;\n+ private final long originNanoTime = System.nanoTime();\n private volatile long lastAccessTime = -1;\n private InnerHitsContext innerHitsContext;\n \n@@ -269,6 +270,11 @@ public SearchContext queryBoost(float queryBoost) {\n return this;\n }\n \n+ @Override\n+ public long getOriginNanoTime() {\n+ return originNanoTime;\n+ }\n+\n @Override\n protected long nowInMillisImpl() {\n return request.nowInMillis();",
"filename": "core/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -139,6 +139,11 @@ public SearchContext queryBoost(float queryBoost) {\n return in.queryBoost(queryBoost);\n }\n \n+ @Override\n+ public long getOriginNanoTime() {\n+ return in.getOriginNanoTime();\n+ }\n+\n @Override\n protected long nowInMillisImpl() {\n return in.nowInMillisImpl();",
"filename": "core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -142,6 +142,8 @@ public final void close() {\n \n public abstract SearchContext queryBoost(float queryBoost);\n \n+ public abstract long getOriginNanoTime();\n+\n public final long nowInMillis() {\n nowInMillisUsed = true;\n return nowInMillisImpl();",
"filename": "core/src/main/java/org/elasticsearch/search/internal/SearchContext.java",
"status": "modified"
},
{
"diff": "@@ -82,6 +82,8 @@ public class TestSearchContext extends SearchContext {\n private String[] types;\n private SearchContextAggregations aggregations;\n \n+ private final long originNanoTime = System.nanoTime();\n+\n public TestSearchContext(ThreadPool threadPool,PageCacheRecycler pageCacheRecycler, BigArrays bigArrays, IndexService indexService, QueryCache filterCache, IndexFieldDataService indexFieldDataService) {\n super(ParseFieldMatcher.STRICT);\n this.pageCacheRecycler = pageCacheRecycler;\n@@ -170,6 +172,11 @@ public SearchContext queryBoost(float queryBoost) {\n return null;\n }\n \n+ @Override\n+ public long getOriginNanoTime() {\n+ return originNanoTime;\n+ }\n+\n @Override\n protected long nowInMillisImpl() {\n return 0;",
"filename": "core/src/test/java/org/elasticsearch/test/TestSearchContext.java",
"status": "modified"
}
]
} |
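The record above replaces a millisecond wall-clock timestamp taken on the coordinating node with a nanosecond origin time captured on the executing node. Below is a minimal, self-contained sketch of that measurement pattern; `ScrollContext` and `ScrollTimingDemo` are hypothetical stand-ins for illustration, not the Elasticsearch classes touched by the diff.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a per-request context; not Elasticsearch's SearchContext.
final class ScrollContext {
    // Captured once on the node that executes the scroll, so the later
    // elapsed-time calculation neither crosses machine clocks nor is limited
    // to millisecond granularity.
    private final long originNanoTime = System.nanoTime();

    long heldOpenNanos() {
        return System.nanoTime() - originNanoTime;
    }
}

public class ScrollTimingDemo {
    public static void main(String[] args) throws InterruptedException {
        ScrollContext ctx = new ScrollContext();
        Thread.sleep(5); // simulate the context being held open briefly
        long nanos = ctx.heldOpenNanos();
        System.out.printf("context held open for %d ns (~%d ms)%n",
                nanos, TimeUnit.NANOSECONDS.toMillis(nanos));
    }
}
```

Because `System.nanoTime()` is only meaningful relative to an earlier reading in the same JVM, both readings have to come from the same node, which is exactly the constraint the issue calls out.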
{
"body": "When constructing field value factor functions, using `.modifier(FieldValueFactorFunction.Modifier.NONE)` yields malformed requests.\n\nSample expected output:\n\n```\n{\n \"function_score\" : {\n \"query\" : {\n \"simple_query_string\" : {\n \"query\" : \"foo\"\n }\n },\n \"functions\" : [ {\n \"field_value_factor\" : {\n \"field\" : \"test\",\n \"modifier\" : \"none\"\n }\n } ]\n }\n}\n```\n\nOr alternatively, no modifier property should be added at all.\n\nActual output:\n\n```\n{\n function_score\" : {\n \"query\" : {\n \"simple_query_string\" : {\n \"query\" : \"foo\"\n }\n },\n \"functions\" : [ {\n \"field_value_factor\" : {\n \"field\" : \"test\",\n \"modifier\" : \"\"\n }\n } ]\n }\n}\n```\n\nThis ultimately results in a parsing error: `IllegalArgumentException[No enum constant org.elasticsearch.common.lucene.search.function.FieldValueFactorFunction.Modifier.]`\n",
"comments": [
{
"body": "fixed via #12328\n",
"created_at": "2015-07-22T10:13:25Z"
}
],
"number": 12327,
"title": "Using FieldValueFactorFunction.Modifier.NONE results in malformed queries"
} | {
"body": "Fix for issue #12327 \n",
"number": 12328,
"review_comments": [],
"title": "Fix malformed query generation"
} | {
"commits": [
{
"message": "Fix malformed query generation"
},
{
"message": "Test behavior of explicit Modifier.NONE"
}
],
"files": [
{
"diff": "@@ -161,9 +161,6 @@ public double apply(double n) {\n \n @Override\n public String toString() {\n- if (this == NONE) {\n- return \"\";\n- }\n return super.toString().toLowerCase(Locale.ROOT);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java",
"status": "modified"
},
{
"diff": "@@ -69,6 +69,14 @@ public void testFieldValueFactor() throws IOException {\n .get();\n assertOrderedSearchHits(response, \"2\", \"1\");\n \n+ // try again, but this time explicitly use the do-nothing modifier\n+ response = client().prepareSearch(\"test\")\n+ .setExplain(randomBoolean())\n+ .setQuery(functionScoreQuery(simpleQueryStringQuery(\"foo\"),\n+ fieldValueFactorFunction(\"test\").modifier(FieldValueFactorFunction.Modifier.NONE)))\n+ .get();\n+ assertOrderedSearchHits(response, \"2\", \"1\");\n+\n // document 1 scores higher because 1/5 > 1/17\n response = client().prepareSearch(\"test\")\n .setExplain(randomBoolean())",
"filename": "core/src/test/java/org/elasticsearch/search/functionscore/FunctionScoreFieldValueTests.java",
"status": "modified"
}
]
} |
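The one-line fix in the record above removes the special case that made `Modifier.NONE` render as an empty string. As an illustration of the resulting behaviour, the sketch below uses a hypothetical `Modifier` enum (not the Elasticsearch class) whose `toString()` always emits the lowercase constant name, so the serialized value can be parsed back into a constant.

```java
import java.util.Locale;

// Hypothetical enum for illustration; not FieldValueFactorFunction.Modifier itself.
enum Modifier {
    NONE, LOG, LN, SQUARE, SQRT;

    @Override
    public String toString() {
        // Always emit the lowercase constant name. Special-casing NONE to ""
        // would produce "modifier": "", which cannot be mapped back to a constant.
        return name().toLowerCase(Locale.ROOT);
    }

    static Modifier fromString(String s) {
        return valueOf(s.toUpperCase(Locale.ROOT));
    }
}

public class ModifierDemo {
    public static void main(String[] args) {
        String json = "{\"modifier\": \"" + Modifier.NONE + "\"}";
        System.out.println(json);                                  // {"modifier": "none"}
        System.out.println(Modifier.fromString("none") == Modifier.NONE); // true
    }
}
```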
{
"body": "This only happens when using `ids` with `parameters`. For example in a request such as this one:\n\n``` javascript\nGET index/type/_mtermvectors\n{\n \"ids\": [ ... ],\n \"parameters\": {\n \"fields\": [ ... ],\n \"filter\": {\n \"max_num_terms\": 3\n }\n }\n}\n```\n",
"comments": [],
"number": 12311,
"title": "Ignored `filter` parameter in _mtermvectors REST request"
} | {
"body": "This makes sure the `filter` parameter is correctly parsed in a multi-term\nvector request when using `ids` and `parameters`.\n\nCloses #12311\n",
"number": 12312,
"review_comments": [
{
"body": "where is this set used?\n",
"created_at": "2015-07-24T11:59:42Z"
}
],
"title": "Make sure filter is correctly parsed for multi-term vectors"
} | {
"commits": [
{
"message": "Make sure filter is correctly parsed for multi-term vectors\n\nThis makes sure the `filter` parameter is correctly parsed in a multi-term\nvector request when using `ids` and `parameters`.\n\nCloses #12311\nCloses #12312"
}
],
"files": [
{
"diff": "@@ -167,6 +167,7 @@ public TermVectorsRequest(TermVectorsRequest other) {\n this.version = other.version();\n this.versionType = VersionType.fromValue(other.versionType().getValue());\n this.startTime = other.startTime();\n+ this.filterSettings = other.filterSettings();\n }\n \n public TermVectorsRequest(MultiGetRequest.Item item) {",
"filename": "core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java",
"status": "modified"
},
{
"diff": "@@ -302,8 +302,8 @@ public void testMultiParser() throws Exception {\n request = new MultiTermVectorsRequest();\n request.add(new TermVectorsRequest(), bytes);\n checkParsedParameters(request);\n- \n }\n+\n void checkParsedParameters(MultiTermVectorsRequest request) {\n Set<String> ids = new HashSet<>();\n ids.add(\"1\");\n@@ -324,5 +324,31 @@ void checkParsedParameters(MultiTermVectorsRequest request) {\n assertThat(singleRequest.selectedFields(), equalTo(fields));\n }\n }\n- \n+\n+ @Test // issue #12311\n+ public void testMultiParserFilter() throws Exception {\n+ byte[] data = Streams.copyToBytesFromClasspath(\"/org/elasticsearch/action/termvectors/multiRequest3.json\");\n+ BytesReference bytes = new BytesArray(data);\n+ MultiTermVectorsRequest request = new MultiTermVectorsRequest();\n+ request.add(new TermVectorsRequest(), bytes);\n+ checkParsedFilterParameters(request);\n+ }\n+\n+ void checkParsedFilterParameters(MultiTermVectorsRequest multiRequest) {\n+ int id = 1;\n+ for (TermVectorsRequest request : multiRequest.requests) {\n+ assertThat(request.index(), equalTo(\"testidx\"));\n+ assertThat(request.type(), equalTo(\"test\"));\n+ assertThat(request.id(), equalTo(id+\"\"));\n+ assertNotNull(request.filterSettings());\n+ assertThat(request.filterSettings().maxNumTerms, equalTo(20));\n+ assertThat(request.filterSettings().minTermFreq, equalTo(1));\n+ assertThat(request.filterSettings().maxTermFreq, equalTo(20));\n+ assertThat(request.filterSettings().minDocFreq, equalTo(1));\n+ assertThat(request.filterSettings().maxDocFreq, equalTo(20));\n+ assertThat(request.filterSettings().minWordLength, equalTo(1));\n+ assertThat(request.filterSettings().maxWordLength, equalTo(20));\n+ id++;\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/termvectors/TermVectorsUnitTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,16 @@\n+{\n+ \"ids\": [\"1\",\"2\"],\n+ \"parameters\": {\n+ \"_index\": \"testidx\",\n+ \"_type\": \"test\",\n+ \"filter\": {\n+ \"max_num_terms\": 20,\n+ \"min_term_freq\": 1,\n+ \"max_term_freq\": 20,\n+ \"min_doc_freq\": 1,\n+ \"max_doc_freq\": 20,\n+ \"min_word_length\": 1,\n+ \"max_word_length\": 20\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "core/src/test/java/org/elasticsearch/action/termvectors/multiRequest3.json",
"status": "added"
}
]
} |
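The fix in the record above is a single line in a copy constructor: `filterSettings` was not being carried over when one TermVectorsRequest was built from another, which appears to be how the shared `parameters` block is applied to each entry in `ids`. The sketch below shows the general pitfall with hypothetical classes, not the Elasticsearch ones.

```java
// Hypothetical request classes for illustration only.
class VectorsRequest {
    String index;
    String type;
    String filterSettings; // stands in for the per-request filter options

    VectorsRequest() {}

    // A copy constructor has to copy every field; forgetting one (here it
    // would be filterSettings) silently drops that setting from every copy.
    VectorsRequest(VectorsRequest other) {
        this.index = other.index;
        this.type = other.type;
        this.filterSettings = other.filterSettings; // the easily forgotten line
    }
}

public class CopyConstructorDemo {
    public static void main(String[] args) {
        VectorsRequest template = new VectorsRequest();
        template.index = "testidx";
        template.filterSettings = "max_num_terms=3";

        // One copy per requested id, each inheriting the shared parameters.
        VectorsRequest perId = new VectorsRequest(template);
        System.out.println(perId.filterSettings); // max_num_terms=3
    }
}
```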
{
"body": "ENV:\n- ES = 1.4.4\n- OS = SmartOS\n- JDK = OpenJDK 7\n- RAM = 16GB\n\nWe are running into a system failure because the PermGen cache is filling up. Our systems are deployed with Chef and the chef-client runs about every thirty minutes. The errors are happening after sitting for a few days. Every time Chef runs it will copy the scripts from a repo to the scripts folder, changed or not. ES will always recompile these scripts because the timestamp on the files has changed. Eventually the PermGen will fill up and crash the system (even with no indexing or searching happening).\n\n```\n[2015-07-12 02:31:33,382][INFO ][script ] [S-10.129.22.15] compiling script file [/opt/local/etc/elasticsearch/scripts/dateFormat.groovy]\n[2015-07-12 02:31:33,385][INFO ][script ] [S-10.129.22.15] compiling script file [/opt/local/etc/elasticsearch/scripts/inbound_group_duration.groovy]\n[2015-07-12 02:31:33,388][INFO ][script ] [S-10.129.22.15] compiling script file [/opt/local/etc/elasticsearch/scripts/inbound_group_duration_doesNotEqualFilter.groovy]\n[2015-07-12 02:31:33,391][INFO ][script ] [S-10.129.22.15] compiling script file [/opt/local/etc/elasticsearch/scripts/inbound_group_duration_equalsFilter.groovy]\n[2015-07-12 02:31:33,394][INFO ][script ] [S-10.129.22.15] compiling script file [/opt/local/etc/elasticsearch/scripts/inbound_group_duration_greaterThanEqualFilter.groovy]\n[2015-07-12 02:31:35,718][INFO ][script ] [S-10.129.22.15] compiling script file [/opt/local/etc/elasticsearch/scripts/inbound_group_duration_greaterThanFilter.groovy]\n[2015-07-12 02:32:12,053][WARN ][script ] [S-10.129.22.15] failed to load/compile script [inbound_group_duration_greaterThanFilter]\norg.elasticsearch.script.groovy.GroovyScriptCompilationException: OutOfMemoryError[PermGen space]\n at org.elasticsearch.script.groovy.GroovyScriptEngineService.compile(GroovyScriptEngineService.java:153)\n at org.elasticsearch.script.ScriptService$ScriptChangesListener.onFileInit(ScriptService.java:588)\n at org.elasticsearch.script.ScriptService$ScriptChangesListener.onFileChanged(ScriptService.java:621)\n at org.elasticsearch.watcher.FileWatcher$FileObserver.onFileChanged(FileWatcher.java:271)\n at org.elasticsearch.watcher.FileWatcher$FileObserver.checkAndNotify(FileWatcher.java:122)\n at org.elasticsearch.watcher.FileWatcher$FileObserver.updateChildren(FileWatcher.java:207)\n at org.elasticsearch.watcher.FileWatcher$FileObserver.checkAndNotify(FileWatcher.java:108)\n at org.elasticsearch.watcher.FileWatcher.doCheckAndNotify(FileWatcher.java:62)\n at org.elasticsearch.watcher.AbstractResourceWatcher.checkAndNotify(AbstractResourceWatcher.java:43)\n at org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor.run(ResourceWatcherService.java:180)\n at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:490)\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)\n at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)\n at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)\n at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nThis is the GC that was performed just before the crash:\n\n```\n[2015-07-12 
02:00:18,361][INFO ][monitor.jvm ] [S-10.129.22.15] [gc][old][204306][1] duration [5.2s], collections [1]/[6s], total [5.2s]/[5.2s], memory [5.2gb]->[1.8g\nb]/[7.6gb], all_pools {[young] [2.5gb]->[20.2mb]/[2.5gb]}{[survivor] [2mb]->[0b]/[42.5mb]}{[old] [2.6gb]->[1.8gb]/[5.3gb]}\n```\n\nIf we stop copying the scripts with Chef then the issue doesn't happen. Somewhere there is an issue with the JVM not letting go of memory related to the scripts being recompiled.\n",
"comments": [
{
"body": "Hmm this is odd. This PR should have fixed it https://github.com/elastic/elasticsearch/pull/8062 but you're using 1.4.4, which should include the fix.\n\n@dakrone any ideas?\n",
"created_at": "2015-07-14T11:57:17Z"
},
{
"body": "Here is a more precise version of my ES but let me know if you need any other information.\n\n```\n=> curl 10.129.22.15:9200\n\n{\n \"status\" : 200,\n \"name\" : \"S-10.129.22.15\",\n \"cluster_name\" : \"<cluster-name-here>\",\n \"version\" : {\n \"number\" : \"1.4.4\",\n \"build_hash\" : \"c88f77ffc81301dfa9dfd81ca2232f09588bd512\",\n \"build_timestamp\" : \"2015-02-19T13:05:36Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"4.10.3\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n```\n\nand the JAVA version:\n\n```\n=> java -version\n\nopenjdk version \"1.7.0-internal\"\nOpenJDK Runtime Environment (build 1.7.0-internal-pkgsrc_2015_02_22_06_07-b00)\nOpenJDK 64-Bit Server VM (build 24.71-b01, mixed mode)\n```\n\nThanks.\n",
"created_at": "2015-07-14T17:10:25Z"
},
{
"body": "@hulu1522 thanks for reporting this, I'm working on a fix for this.\n",
"created_at": "2015-07-16T18:29:44Z"
},
{
"body": "@dakrone Thanks! The fix looks good. Any estimate for 1.7.1 release?\n",
"created_at": "2015-07-16T19:50:25Z"
},
{
"body": "@hulu1522 unfortunately I'm not sure, since 1.7.0 was just released today it will probably be a couple of weeks at the least, maybe @clintongormley knows more?\n",
"created_at": "2015-07-16T22:16:10Z"
}
],
"number": 12212,
"title": "PermGen OutOfMemoryError when reloading scripts too often"
} | {
"body": "When adding a script to the Groovy classloader, the script name is used\nas the class identifier in the classloader. This means that in order not\nto break JVM Classloader convention, that script must always be\navailable by that name. As a result, modifying a script with the same\ncontent over and over causes it to be loaded with a different name (due\nto the incrementing integer).\n\nThis is particularly bad when something like chef or puppet replaces the\non-disk script file with the same content over and over every time a\nmachine is converged.\n\nThis change makes the script name the SHA1 hash of the script itself,\nmeaning that replacing a script with the same text will use the same\nscript name.\n\nResolves #12212 \n\nAs a test for this, I did the following:\n- Configure the resource watcher to check for new scripts more frequently (every 100ms)\n\n``` yaml\nwatcher.interval.low: 100ms\nwatcher.interval.medium: 100ms\nwatcher.interval.high: 100ms\nresource.reload.interval.low: 100ms\nresource.reload.interval.medium: 100ms\nresource.reload.interval.high: 100ms\n```\n- Start ES with a 1.7 JVM (since 1.8 removed permgen)\n- Run a script that constantly updated a script file with the same content over and over, causing it to be re-compiled by Elasticsearch:\n\n``` bash\nwhile true; do\n echo \"ctx._source.counter += 1\" > config/scripts/myscript.groovy\n echo \"ctx._source.counter += 2\" > config/scripts/myscript.groovy\ndone\n```\n\nWithout this change, permgen grows linearly, then the ES node hits OOME in permgen:\n\n\n\nAnd with this change, the classes are able to be unloaded, because they share the same class name (SHA1 of the content):\n\n\n",
"number": 12296,
"review_comments": [],
"title": "Consistently name Groovy scripts with the same content"
} | {
"commits": [
{
"message": "Consistently name Groovy scripts with the same content\n\nWhen adding a script to the Groovy classloader, the script name is used\nas the class identifier in the classloader. This means that in order not\nto break JVM Classloader convention, that script must always be\navailable by that name. As a result, modifying a script with the same\ncontent over and over causes it to be loaded with a different name (due\nto the incrementing integer).\n\nThis is particularly bad when something like chef or puppet replaces the\non-disk script file with the same content over and over every time a\nmachine is converged.\n\nThis change makes the script name the SHA1 hash of the script itself,\nmeaning that replacing a script with the same text will use the same\nscript name.\n\nResolves #12212"
}
],
"files": [
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.script.groovy;\n \n+import com.google.common.base.Charsets;\n+import com.google.common.hash.Hashing;\n import groovy.lang.Binding;\n import groovy.lang.GroovyClassLoader;\n import groovy.lang.Script;\n@@ -49,15 +51,13 @@\n import java.math.BigDecimal;\n import java.util.HashMap;\n import java.util.Map;\n-import java.util.concurrent.atomic.AtomicLong;\n \n /**\n * Provides the infrastructure for Groovy as a scripting language for Elasticsearch\n */\n public class GroovyScriptEngineService extends AbstractComponent implements ScriptEngineService {\n \n public static final String NAME = \"groovy\";\n- private final AtomicLong counter = new AtomicLong();\n private final GroovyClassLoader loader;\n \n @Inject\n@@ -111,7 +111,7 @@ public boolean sandboxed() {\n @Override\n public Object compile(String script) {\n try {\n- return loader.parseClass(script, generateScriptName());\n+ return loader.parseClass(script, Hashing.sha1().hashString(script, Charsets.UTF_8).toString());\n } catch (Throwable e) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"exception compiling Groovy script:\", e);\n@@ -190,10 +190,6 @@ public Object unwrap(Object value) {\n return value;\n }\n \n- private String generateScriptName() {\n- return \"Script\" + counter.incrementAndGet() + \".groovy\";\n- }\n-\n public static final class GroovyScript implements ExecutableScript, LeafSearchScript {\n \n private final CompiledScript compiledScript;",
"filename": "core/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java",
"status": "modified"
}
]
} |
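A minimal, hypothetical Java sketch of the naming idea behind the change above: the class name is derived from a SHA-1 of the script text, so recompiling an unchanged file reuses the same name and the old class can be unloaded instead of accumulating in PermGen. The `contentName` helper and the example script string are illustrative only; the actual `GroovyScriptEngineService` uses Guava's `Hashing.sha1().hashString(...)` as shown in the diff.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ScriptNaming {

    // Content-derived name: identical script text always maps to the same class name,
    // unlike the old "Script" + counter.incrementAndGet() + ".groovy" scheme.
    static String contentName(String script) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest(script.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        String script = "ctx._source.counter += 1";
        // Printing the name twice shows that "recompiling" unchanged content reuses the name.
        System.out.println(contentName(script));
        System.out.println(contentName(script));
    }
}
```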
{
"body": "This is a bug that has been fixed in lucene (https://issues.apache.org/jira/browse/LUCENE-6677) recently.\nI don't know if it will be backported to lucene 4.10.x and thus have chance to get into elasticsearch 1.6.x.\nNote that this fix could be backported in MapperQueryParser by overriding the newWildcardQuery method.\n",
"comments": [
{
"body": "@nomoa This fix won't make it into 1.6.1 or 1.7.0 - too short notice. I think it'll have to wait for 2.0\n",
"created_at": "2015-07-15T14:52:16Z"
},
{
"body": "Will be fixed by the upgrade to Lucene 5.3\n",
"created_at": "2015-07-15T14:52:28Z"
},
{
"body": "hah - just seen your PR and that mike's on it - you're in luck :)\n",
"created_at": "2015-07-15T14:55:40Z"
},
{
"body": "Yes, I guess it's my lucky day :)\nThis will be of great help to us to protect a cluster that was killed by a nasty wildcard query.\n",
"created_at": "2015-07-16T09:06:09Z"
},
{
"body": "I merged to 1.6.2, 1.7.1, 2.0. Thank you for fixing in both Lucene (5.3) and ES @nomoa!\n",
"created_at": "2015-07-16T16:23:13Z"
}
],
"number": 12266,
"title": "QueryString ignores maxDeterminizedStates when creating a WildcardQuery"
} | {
"body": "This patch backports https://issues.apache.org/jira/browse/LUCENE-6677 in MapperQueryParser\n\nCloses #12266\n",
"number": 12269,
"review_comments": [],
"title": "QueryString ignores maxDeterminizedStates when creating a WildcardQuery"
} | {
"commits": [
{
"message": "QueryString ignores maxDeterminizedStates when creating a WildcardQuery\n\nThis patch backports https://issues.apache.org/jira/browse/LUCENE-6677"
}
],
"files": [
{
"diff": "@@ -28,6 +28,7 @@\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.automaton.RegExp;\n+import org.apache.lucene.util.Version;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.MatchNoDocsQuery;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -757,6 +758,15 @@ private Query getPossiblyAnalyzedWildcardQuery(String field, String termStr) thr\n return super.getWildcardQuery(field, aggStr.toString());\n }\n \n+ @Override\n+ protected WildcardQuery newWildcardQuery(Term t) {\n+ // Backport: https://issues.apache.org/jira/browse/LUCENE-6677\n+ assert Version.LATEST == Version.LUCENE_4_10_4;\n+ WildcardQuery query = new WildcardQuery(t, maxDeterminizedStates);\n+ query.setRewriteMethod(multiTermRewriteMethod);\n+ return query;\n+ }\n+\n @Override\n protected Query getRegexpQuery(String field, String termStr) throws ParseException {\n if (lowercaseExpandedTerms) {",
"filename": "src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java",
"status": "modified"
}
]
} |
{
"body": "This seems similar (albeit different) to #12179: there is an issue with \"now\" and percolator queries: when .percolator documents are filtered in addition to percolation. \n\n``` json\nPOST /idx/type/_percolate\n{\n \"doc\": {},\n \"filter\": {\n \"range\": {\n \"$context.created\": {\n \"lte\": \"now\"\n }\n }\n }\n}\n```\n\nfor .percolator:\n\n``` json\n{\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"match_all\": []\n }\n }\n },\n \"$context\": {\n \"name\": \"test\",\n \"created\": \"2015-07-10T14:41:54+0000\",\n }\n}\n```\n\nThe response is:\n\n```\nBroadcastShardOperationFailedException[[idx][0] ]; \n nested: PercolateException[failed to percolate]; \n nested: ElasticsearchParseException[failed to parse request]; \n nested: ElasticsearchParseException[Could not read the current timestamp];\n nested: UnsupportedOperationException;\n```\n",
"comments": [
{
"body": "@zergin Thanks for reporting this. There is a PR open for the missing support of resolving `now` in the percolate api: #12215\n",
"created_at": "2015-07-14T07:47:10Z"
},
{
"body": "Splendid, thanks. Any ETA for this going live? I see it's scheduled for 1.6.1?\n",
"created_at": "2015-07-14T10:18:57Z"
},
{
"body": "when the PR gets reviewed it will get merged and then I'll try to back port\nit to 1.7 and 1.6 branches. So it should get into the 1.6.1 release.\n\nOn 14 July 2015 at 12:19, zergin notifications@github.com wrote:\n\n> Splendid, thanks. Any ETA for this going live? I see it's scheduled for\n> 1.6.1?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/issues/12185#issuecomment-121192591\n> .\n\n## \n\nMet vriendelijke groet,\n\nMartijn van Groningen\n",
"created_at": "2015-07-14T10:22:37Z"
},
{
"body": "@zergin The change got backported to 1.6, so it is scheduled for 1.6.1 \n",
"created_at": "2015-07-14T14:22:35Z"
}
],
"number": 12185,
"title": "Date parsing (now) bug when filtering in addition to percolation"
} | {
"body": "PR for #12185\n",
"number": 12215,
"review_comments": [],
"title": "Support filtering percolator queries by date using `now`"
} | {
"commits": [
{
"message": "percolator: Support filtering percolator queries by date using `now`\n\nCloses #12185"
}
],
"files": [
{
"diff": "@@ -37,6 +37,7 @@ public class PercolateShardRequest extends BroadcastShardRequest {\n private BytesReference docSource;\n private boolean onlyCount;\n private int numberOfShards;\n+ private long startTime;\n \n PercolateShardRequest() {\n }\n@@ -48,6 +49,7 @@ public class PercolateShardRequest extends BroadcastShardRequest {\n this.docSource = request.docSource();\n this.onlyCount = request.onlyCount();\n this.numberOfShards = numberOfShards;\n+ this.startTime = request.startTime;\n }\n \n PercolateShardRequest(ShardId shardId, OriginalIndices originalIndices) {\n@@ -60,6 +62,7 @@ public class PercolateShardRequest extends BroadcastShardRequest {\n this.source = request.source();\n this.docSource = request.docSource();\n this.onlyCount = request.onlyCount();\n+ this.startTime = request.startTime;\n }\n \n public String documentType() {\n@@ -98,6 +101,10 @@ public int getNumberOfShards() {\n return numberOfShards;\n }\n \n+ public long getStartTime() {\n+ return startTime;\n+ }\n+\n OriginalIndices originalIndices() {\n return originalIndices;\n }\n@@ -110,6 +117,7 @@ public void readFrom(StreamInput in) throws IOException {\n docSource = in.readBytesReference();\n onlyCount = in.readBoolean();\n numberOfShards = in.readVInt();\n+ startTime = in.readLong(); // no vlong, this can be negative!\n }\n \n @Override\n@@ -120,6 +128,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBytesReference(docSource);\n out.writeBoolean(onlyCount);\n out.writeVInt(numberOfShards);\n+ out.writeLong(startTime);\n }\n \n }",
"filename": "core/src/main/java/org/elasticsearch/action/percolate/PercolateShardRequest.java",
"status": "modified"
},
{
"diff": "@@ -98,6 +98,7 @@ public class PercolateContext extends SearchContext {\n private final ConcurrentMap<BytesRef, Query> percolateQueries;\n private final int numberOfShards;\n private final Query aliasFilter;\n+ private final long startTime;\n private String[] types;\n \n private Engine.Searcher docSearcher;\n@@ -133,6 +134,7 @@ public PercolateContext(PercolateShardRequest request, SearchShardTarget searchS\n this.scriptService = scriptService;\n this.numberOfShards = request.getNumberOfShards();\n this.aliasFilter = aliasFilter;\n+ this.startTime = request.getStartTime();\n }\n \n public IndexSearcher docSearcher() {\n@@ -337,7 +339,7 @@ public SearchContext queryBoost(float queryBoost) {\n \n @Override\n protected long nowInMillisImpl() {\n- throw new UnsupportedOperationException();\n+ return startTime;\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/percolator/PercolateContext.java",
"status": "modified"
},
{
"diff": "@@ -180,6 +180,7 @@ public PercolateShardResponse percolate(PercolateShardRequest request) {\n final PercolateContext context = new PercolateContext(\n request, searchShardTarget, indexShard, percolateIndexService, pageCacheRecycler, bigArrays, scriptService, aliasFilter, parseFieldMatcher\n );\n+ SearchContext.setCurrent(context);\n try {\n ParsedDocument parsedDocument = parseRequest(percolateIndexService, request, context);\n if (context.percolateQueries().isEmpty()) {\n@@ -235,6 +236,7 @@ public PercolateShardResponse percolate(PercolateShardRequest request) {\n percolatorIndex.prepare(context, parsedDocument);\n return action.doPercolate(request, context, isNested);\n } finally {\n+ SearchContext.removeCurrent();\n context.close();\n shardPercolateService.postPercolate(System.nanoTime() - startTime);\n }\n@@ -258,7 +260,6 @@ private ParsedDocument parseRequest(IndexService documentIndexService, Percolate\n // not the in memory percolate doc\n String[] previousTypes = context.types();\n context.types(new String[]{TYPE_NAME});\n- SearchContext.setCurrent(context);\n try {\n parser = XContentFactory.xContent(source).createParser(source);\n String currentFieldName = null;\n@@ -359,7 +360,6 @@ private ParsedDocument parseRequest(IndexService documentIndexService, Percolate\n throw new ElasticsearchParseException(\"failed to parse request\", e);\n } finally {\n context.types(previousTypes);\n- SearchContext.removeCurrent();\n if (parser != null) {\n parser.close();\n }",
"filename": "core/src/main/java/org/elasticsearch/percolator/PercolatorService.java",
"status": "modified"
},
{
"diff": "@@ -2079,5 +2079,21 @@ public void testPercolateDocumentWithParentField() throws Exception {\n assertThat(response.getMatches()[0].getId().string(), equalTo(\"1\"));\n }\n \n+ @Test\n+ public void testFilterByNow() throws Exception {\n+ client().prepareIndex(\"index\", PercolatorService.TYPE_NAME, \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"query\", matchAllQuery()).field(\"created\", \"2015-07-10T14:41:54+0000\").endObject())\n+ .get();\n+ refresh();\n+\n+ PercolateResponse response = client().preparePercolate()\n+ .setIndices(\"index\")\n+ .setDocumentType(\"type\")\n+ .setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc(\"{}\"))\n+ .setPercolateQuery(rangeQuery(\"created\").lte(\"now\"))\n+ .get();\n+ assertMatchCount(response, 1);\n+ }\n+\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorTests.java",
"status": "modified"
}
]
} |
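A simplified, hypothetical Java sketch of the pattern the change above relies on: resolve `now` once on the coordinating node, carry it in the shard-level request, and have the shard-side context return that value from `nowInMillis()` instead of throwing. The `ShardRequest` and `ShardContext` classes here are stand-ins, not the real `PercolateShardRequest` or `PercolateContext`.

```java
import java.util.concurrent.TimeUnit;

public class NowPropagation {

    // Shard-level request carries a timestamp captured once on the coordinating node,
    // mirroring how PercolateShardRequest carries startTime in the diff above.
    static final class ShardRequest {
        final long startTimeMillis;
        ShardRequest(long startTimeMillis) { this.startTimeMillis = startTimeMillis; }
    }

    // Shard-side context resolves "now" from the request instead of failing
    // or calling System.currentTimeMillis() again on every shard.
    static final class ShardContext {
        private final ShardRequest request;
        ShardContext(ShardRequest request) { this.request = request; }
        long nowInMillis() { return request.startTimeMillis; }
    }

    public static void main(String[] args) throws InterruptedException {
        ShardRequest request = new ShardRequest(System.currentTimeMillis()); // resolved once
        TimeUnit.MILLISECONDS.sleep(50); // simulate network / queueing delay before shard execution
        ShardContext context = new ShardContext(request);
        // Every shard that receives this request sees the same "now".
        System.out.println(context.nowInMillis() == request.startTimeMillis); // true
    }
}
```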
{
"body": "Hi there!\n\nI'm testing the percolator API with the current master. I'm getting a (gracefully handled) null-pointer exception. Here's what I'm trying to do:\n\n``` sh\n#!/bin/sh\ncurl -s -XPUT 'localhost:9200/test-es20' -d '{\n \"mappings\": {\n \"tweet\": {\n \"properties\": {\n \"user\": { \"type\": \"string\" },\n \"message\": { \"type\": \"string\" },\n \"created\": { \"type\": \"date\" }\n }\n },\n \"comment\": {\n \"_parent\": { \"type\": \"tweet\" },\n \"properties\": {\n \"user\": { \"type\": \"string\" },\n \"message\": { \"type\": \"string\" }\n }\n }\n }\n}'\n\n# Register query in percolator\ncurl -s -XPUT 'localhost:9200/test-es20/.percolator/1' -d '{\n \"query\": {\n \"match\": {\n \"message\" : \"Elasticsearch\"\n }\n }\n}'\n\n# Match document with percolator\ncurl -s -XGET 'localhost:9200/test-es20/tweet/_percolate?pretty' -d '{\n \"doc\": {\n \"user\": \"olivere\",\n \"message\": \"Welcome to Elasticsearch\"\n }\n}'\n```\n\nWhich yields:\n\n```\n{\n \"took\" : 6,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 0,\n \"failed\" : 5,\n \"failures\" : [ {\n \"shard\" : 0,\n \"index\" : \"test-es20\",\n \"status\" : \"BAD_REQUEST\",\n \"reason\" : {\n \"type\" : \"parse_exception\",\n \"reason\" : \"failed to parse request\",\n \"caused_by\" : {\n \"type\" : \"mapper_parsing_exception\",\n \"reason\" : \"failed to parse [_parent]\",\n \"caused_by\" : {\n \"type\" : \"null_pointer_exception\",\n \"reason\" : null\n }\n }\n }\n } ]\n },\n \"total\" : 0,\n \"matches\" : [ ]\n}\n```\n\nIf you remove the `_parent` from the comment mapping, everything is fine.\n\nIs this supposed to work, or is it the same issue mentioned in #2960?\n",
"comments": [
{
"body": "BTW: Just tested 1.6 and it's working fine (with the `_parent` in the comment type).\n",
"created_at": "2015-07-11T12:34:56Z"
},
{
"body": "So we can close the issue, right?\n",
"created_at": "2015-07-11T12:36:21Z"
},
{
"body": "Ehm, no. The error e.g. occurs with 2.0-BETA1. Works fine with 1.6.\n",
"created_at": "2015-07-11T12:38:29Z"
},
{
"body": "Ha thanks! I missed you were testing master branch. o_O\n\n> I'm testing the percolator API with the current master.\n\nSorry :)\n",
"created_at": "2015-07-11T13:07:19Z"
},
{
"body": "@dadoonet No problem. And thanks for your time. :-)\n",
"created_at": "2015-07-11T13:20:19Z"
},
{
"body": "@olivere thanks for testing out master! I've opened #12214 so it should be fixed soon.\n",
"created_at": "2015-07-13T20:35:02Z"
},
{
"body": "@martijnvg My pleasure. And thanks for fixing :-)\n",
"created_at": "2015-07-13T21:11:59Z"
}
],
"number": 12192,
"title": "Percolator NPE"
} | {
"body": "PR for #12192\n",
"number": 12214,
"review_comments": [],
"title": "Fix NPE when percolating a document that has a _parent field configured in its mapping"
} | {
"commits": [
{
"message": "percolator: Don't throw NPE when percolating a document that has a _parent field configured in its mapping\n\nCloses #12192"
}
],
"files": [
{
"diff": "@@ -249,7 +249,9 @@ public void preParse(ParseContext context) throws IOException {\n \n @Override\n public void postParse(ParseContext context) throws IOException {\n- parse(context);\n+ if (context.sourceToParse().flyweight() == false) {\n+ parse(context);\n+ }\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -2054,7 +2054,7 @@ public void testFailNicelyWithInnerHits() throws Exception {\n \n @Test\n public void testParentChild() throws Exception {\n- // We don't fail p/c queries, but those queries are unsuable because only one document can be provided in\n+ // We don't fail p/c queries, but those queries are unusable because only a single document can be provided in\n // the percolate api\n \n assertAcked(prepareCreate(\"index\").addMapping(\"child\", \"_parent\", \"type=parent\").addMapping(\"parent\"));\n@@ -2063,5 +2063,21 @@ public void testParentChild() throws Exception {\n .execute().actionGet();\n }\n \n+ @Test\n+ public void testPercolateDocumentWithParentField() throws Exception {\n+ assertAcked(prepareCreate(\"index\").addMapping(\"child\", \"_parent\", \"type=parent\").addMapping(\"parent\"));\n+ client().prepareIndex(\"index\", PercolatorService.TYPE_NAME, \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"query\", matchAllQuery()).endObject())\n+ .execute().actionGet();\n+\n+ // Just percolating a document that has a _parent field in its mapping should just work:\n+ PercolateResponse response = client().preparePercolate()\n+ .setDocumentType(\"parent\")\n+ .setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc(\"field\", \"value\"))\n+ .get();\n+ assertMatchCount(response, 1);\n+ assertThat(response.getMatches()[0].getId().string(), equalTo(\"1\"));\n+ }\n+\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorTests.java",
"status": "modified"
}
]
} |
{
"body": "Say my scroll keep alive time is set to 1 minute.\n1. I'm executing a search with that scroll settings\n2. Immediately after receiving my search result I'm closing and reopening the index\n3. For exactly one minute I get the following error message and my cluster is red\n4. after 1 minute my cluster is green.\n\n```\n[2015-06-22 14:48:13,846][INFO ][cluster.metadata ] [Grim Hunter] closing indices [[de_v4]]\n[2015-06-22 14:48:22,263][INFO ][cluster.metadata ] [Grim Hunter] opening indices [[de_v4]]\n[2015-06-22 14:48:27,308][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][2]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][2] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][2], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 
9 more\n[2015-06-22 14:48:27,309][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][2] received shard failed for [de_v4][2], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][2] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][2], timed out after 5000ms]; ]]\n[2015-06-22 14:48:32,309][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][0]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][0] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][0], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 9 more\n\n[... 
trimmed ...]\n\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][0] received shard failed for [de_v4][0], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][0] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][0], timed out after 5000ms]; ]]\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][3] received shard failed for [de_v4][3], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [master [Grim Hunter][iTOuyuGSRLab6ZZxwnhb3g][u-excus][inet[/192.168.1.181:9300]] marked shard as initializing, but shard is marked as failed, resend shard failure]\n```\n\n_Update:_\nToday I adjusted the logging settings and briefly before the cluster is getting green:\n\n```\n[2015-06-23 12:21:00,100][DEBUG][search ] [Prime Mover] freeing search context [1], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n[2015-06-23 12:21:00,101][DEBUG][search ] [Prime Mover] freeing search context [2], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n```\n\nbtw: [clearing](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#_clear_scroll_api) (manually) all scroll ids during closing / opening process solves the problem (as temporary workaround); cluster state will be green immediately after reopening the index.\n\n```\ncurl -XDELETE localhost:9200/_search/scroll/_all\n```\n\nTested with Elasticsearch v1.6.0\n",
"comments": [
{
"body": "Similar issue see #8940.\n",
"created_at": "2015-07-08T06:37:57Z"
},
{
"body": "We should close all search contexts when closing an index in order to abort the scroll.\n",
"created_at": "2015-07-10T09:22:05Z"
},
{
"body": "The `SearchService` class maintains the open search contexts (for all searches and shards on a node). When we close an index, we just close the shard, but we check `SearchService` and clear search contexts of shards being closed. If do this, then this should fix the problem.\n",
"created_at": "2015-07-10T09:33:20Z"
}
],
"number": 12116,
"title": "Search Scroll witch keep alive prevent shards from closing (during Index closing)"
} | {
"body": "A change in #12116 introduces closing / cleaning of search ctx even if\nthe index service was closed due to a relocation of it's last shard. This\nis not desired since in that case it's fine to serve the pending requests from\nthe relocated shard. This commit adds an extra check to ensure that the index is\neither removed (delete) or closed via API.\n\nthere where some failures related to this on CI here:\n\nhttp://build-us-00.elastic.co/job/es_core_17_centos/50/\n",
"number": 12199,
"review_comments": [],
"title": "Only clear open search ctx if the index is delete or closed via API"
} | {
"commits": [
{
"message": "Only clear open search ctx if the index is delete or closed via API\n\nA change in #12116 introduces closing / cleaning of search ctx even if\nthe index service was closed due to a relocation of it's last shard. This\nis not desired since in that case it's fine to serve the pending requests from\nthe relocated shard. This commit adds an extra check to ensure that the index is\neither removed (delete) or closed via API."
}
],
"files": [
{
"diff": "@@ -157,11 +157,23 @@ public SearchService(Settings settings, ClusterService clusterService, IndicesSe\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n indicesService.indicesLifecycle().addListener(new IndicesLifecycle.Listener() {\n-\n @Override\n public void afterIndexClosed(Index index, @IndexSettings Settings indexSettings) {\n // once an index is closed we can just clean up all the pending search context information\n // to release memory and let references to the filesystem go etc.\n+ IndexMetaData idxMeta = SearchService.this.clusterService.state().metaData().index(index.getName());\n+ if (idxMeta != null && idxMeta.state() == IndexMetaData.State.CLOSE) {\n+ // we need to check if it's really closed\n+ // since sometimes due to a relocation we already closed the shard and that causes the index to be closed\n+ // if we then close all the contexts we can get some search failures along the way which are not expected.\n+ // it's fine to keep the contexts open if the index is still \"alive\"\n+ // unfortunately we don't have a clear way to signal today why an index is closed.\n+ afterIndexDeleted(index, indexSettings);\n+ }\n+ }\n+\n+ @Override\n+ public void afterIndexDeleted(Index index, @IndexSettings Settings indexSettings) {\n freeAllContextForIndex(index);\n }\n });",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
}
]
} |
{
"body": "I am consistently getting the following stack trace when attempting to create a node from a simple application:\n\n```\n--- Start Example ---\nJul 11, 2015 1:37:24 PM org.elasticsearch.node.Node <init>\nINFO: [The Blank] version[2.0.0.beta1-SNAPSHOT], pid[18006], build[0b27ded/2015-07-11T20:22:54Z]\nJul 11, 2015 1:37:24 PM org.elasticsearch.node.Node <init>\nINFO: [The Blank] initializing ...\nJul 11, 2015 1:37:24 PM org.elasticsearch.plugins.PluginsService <init>\nINFO: [The Blank] loaded [], sites []\nJul 11, 2015 1:37:24 PM org.elasticsearch.env.NodeEnvironment maybeLogPathDetails\nINFO: [The Blank] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [79.6gb], net total_space [464.7gb], spins? [unknown], types [hfs]\nTests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.337 sec <<< FAILURE!\nx(org.primer.stupid.TestFoo) Time elapsed: 0.301 sec <<< ERROR!\njava.lang.NoClassDefFoundError: org/elasticsearch/common/util/concurrent/jsr166e/LongAdder\n at org.elasticsearch.common.metrics.CounterMetric.<init>(CounterMetric.java:28)\n at org.elasticsearch.common.util.concurrent.EsAbortPolicy.<init>(EsAbortPolicy.java:31)\n at org.elasticsearch.common.util.concurrent.EsExecutors.newCached(EsExecutors.java:70)\n at org.elasticsearch.threadpool.ThreadPool.rebuild(ThreadPool.java:339)\n at org.elasticsearch.threadpool.ThreadPool.build(ThreadPool.java:296)\n at org.elasticsearch.threadpool.ThreadPool.<init>(ThreadPool.java:134)\n at org.elasticsearch.node.Node.<init>(Node.java:159)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:157)\n at org.elasticsearch.node.NodeBuilder.node(NodeBuilder.java:164)\n at org.primer.stupid.TestFoo.x(TestFoo.java:50)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:483)\n at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)\n at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)\n at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)\n at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)\n at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)\n at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)\n at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)\n at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)\n at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)\n at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)\n at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)\n at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)\n at org.junit.runners.ParentRunner.run(ParentRunner.java:363)\n at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)\n at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)\n at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at 
java.lang.reflect.Method.invoke(Method.java:483)\n at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)\n at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)\n at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)\n at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)\n at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)\nCaused by: java.lang.ClassNotFoundException: org.elasticsearch.common.util.concurrent.jsr166e.LongAdder\n at java.net.URLClassLoader$1.run(URLClassLoader.java:372)\n at java.net.URLClassLoader$1.run(URLClassLoader.java:361)\n at java.security.AccessController.doPrivileged(Native Method)\n at java.net.URLClassLoader.findClass(URLClassLoader.java:360)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:424)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357)\n ... 39 more\n```\n\nNote that this _only_ happens when using the shaded jar. \n\nA simple test program to reproduce is:\n\n``` java\npublic class TestFoo {\n\n @Test\n public void x() throws Exception {\n\n System.out.println(\"--- Start Example ---\");\n\n File home = Files.createTempDir();\n home.deleteOnExit();\n Settings settings = Settings.builder()\n .put(\"path.home\", home.getAbsolutePath()).build();\n Node node = nodeBuilder().settings(settings).local(false).data(true).clusterName(\"test-cluster\").node();\n node.close();\n\n System.out.println(\"--- Finished ---\");\n }\n}\n```\n\nFrom what I can tell, the jsr166e classes are not getting shaded to the correct location. If I look at the shaded jar I can see the actual location of the jsr166e classes:\n\n```\n[ gnocchi 2.0.0.beta1-SNAPSHOT ] [ 01:42:39 ] > jar tvf elasticsearch-2.0.0.beta1-SNAPSHOT-shaded.jar | grep LongAdder\n 3422 Sat Jul 11 13:23:58 PDT 2015 org/elasticsearch/common/cache/LongAdder.class\n```\n\nHowever, the pom.xml for core has this:\n\n``` xml\n<relocation>\n <pattern>com.twitter.jsr166e</pattern>\n <shadedPattern>org.elasticsearch.common.util.concurrent.jsr166e</shadedPattern>\n</relocation>\n```\n",
"comments": [
{
"body": "This was simply a matter of a missing line in the core/pom.xml. Adding `<include>com.twitter:jsr166e</include>` to the shade configuration fixed it.\n",
"created_at": "2015-07-11T22:11:31Z"
}
],
"number": 12193,
"title": "NoClassDefFoundError w/shaded 2.0.0.beta1 jar"
} | {
"body": "The classes in com.twitter.jsr166e were not getting included in the\nshaded jar due to a missing configuration line.\n\nCloses #12193\n",
"number": 12194,
"review_comments": [],
"title": "jsr166e was left out of shaded jar"
} | {
"commits": [
{
"message": "jsr166e was left out of shaded jar\n\nThe classes in com.twitter.jsr166e were not getting included in the\nshaded jar due to a missing configuration line.\n\nCloses #12193"
}
],
"files": [
{
"diff": "@@ -392,6 +392,7 @@\n <include>com.tdunning:t-digest</include>\n <include>org.apache.commons:commons-lang3</include>\n <include>commons-cli:commons-cli</include>\n+ <include>com.twitter:jsr166e</include>\n </includes>\n </artifactSet>\n <transformers>",
"filename": "core/pom.xml",
"status": "modified"
}
]
} |
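A small, hypothetical Java helper that performs the same check as the `jar tvf ... | grep LongAdder` command in the report above, useful for confirming whether `LongAdder` ended up under the relocated package in the shaded jar. The jar path is passed on the command line and is an assumption.

```java
import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ShadedJarCheck {

    // List every entry whose path mentions LongAdder, so you can see whether the class
    // landed under the relocated org/elasticsearch/... package or somewhere unexpected.
    public static void main(String[] args) throws IOException {
        try (JarFile jar = new JarFile(args[0])) { // e.g. elasticsearch-2.0.0.beta1-SNAPSHOT-shaded.jar
            jar.stream()
               .map(JarEntry::getName)
               .filter(name -> name.contains("LongAdder"))
               .forEach(System.out::println);
        }
    }
}
```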
{
"body": "Say my scroll keep alive time is set to 1 minute.\n1. I'm executing a search with that scroll settings\n2. Immediately after receiving my search result I'm closing and reopening the index\n3. For exactly one minute I get the following error message and my cluster is red\n4. after 1 minute my cluster is green.\n\n```\n[2015-06-22 14:48:13,846][INFO ][cluster.metadata ] [Grim Hunter] closing indices [[de_v4]]\n[2015-06-22 14:48:22,263][INFO ][cluster.metadata ] [Grim Hunter] opening indices [[de_v4]]\n[2015-06-22 14:48:27,308][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][2]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][2] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][2], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 
9 more\n[2015-06-22 14:48:27,309][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][2] received shard failed for [de_v4][2], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][2] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][2], timed out after 5000ms]; ]]\n[2015-06-22 14:48:32,309][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][0]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][0] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][0], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 9 more\n\n[... 
trimmed ...]\n\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][0] received shard failed for [de_v4][0], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][0] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][0], timed out after 5000ms]; ]]\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][3] received shard failed for [de_v4][3], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [master [Grim Hunter][iTOuyuGSRLab6ZZxwnhb3g][u-excus][inet[/192.168.1.181:9300]] marked shard as initializing, but shard is marked as failed, resend shard failure]\n```\n\n_Update:_\nToday I adjusted the logging settings and briefly before the cluster is getting green:\n\n```\n[2015-06-23 12:21:00,100][DEBUG][search ] [Prime Mover] freeing search context [1], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n[2015-06-23 12:21:00,101][DEBUG][search ] [Prime Mover] freeing search context [2], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n```\n\nbtw: [clearing](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#_clear_scroll_api) (manually) all scroll ids during closing / opening process solves the problem (as temporary workaround); cluster state will be green immediately after reopening the index.\n\n```\ncurl -XDELETE localhost:9200/_search/scroll/_all\n```\n\nTested with Elasticsearch v1.6.0\n",
"comments": [
{
"body": "Similar issue see #8940.\n",
"created_at": "2015-07-08T06:37:57Z"
},
{
"body": "We should close all search contexts when closing an index in order to abort the scroll.\n",
"created_at": "2015-07-10T09:22:05Z"
},
{
"body": "The `SearchService` class maintains the open search contexts (for all searches and shards on a node). When we close an index, we just close the shard, but we check `SearchService` and clear search contexts of shards being closed. If do this, then this should fix the problem.\n",
"created_at": "2015-07-10T09:33:20Z"
}
],
"number": 12116,
"title": "Search Scroll witch keep alive prevent shards from closing (during Index closing)"
} | {
"body": "Today we only clear search contexts for deleted indies. Yet, we should\ndo the same for closed indices to ensure they can be reopened quickly.\n\nCloses #12116\n",
"number": 12180,
"review_comments": [],
"title": "Free all pending search contexts if index is closed or removed"
} | {
"commits": [
{
"message": "Free all pending search contexts if index is closed or removed\n\nToday we only clear search contexts for deleted indies. Yet, we should\ndo the same for closed indices to ensure they can be reopened quickly.\n\nCloses #12116"
}
],
"files": [
{
"diff": "@@ -159,7 +159,7 @@ public SearchService(Settings settings, ClusterService clusterService, IndicesSe\n indicesService.indicesLifecycle().addListener(new IndicesLifecycle.Listener() {\n \n @Override\n- public void afterIndexDeleted(Index index, @IndexSettings Settings indexSettings) {\n+ public void afterIndexClosed(Index index, @IndexSettings Settings indexSettings) {\n // once an index is closed we can just clean up all the pending search context information\n // to release memory and let references to the filesystem go etc.\n freeAllContextForIndex(index);",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.test.hamcrest.ElasticsearchAssertions;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.Map;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n@@ -576,4 +577,31 @@ public void testParseClearScrollRequestWithUnknownParamThrowsException() throws\n }\n }\n \n+ public void testCloseAndReopenOrDeleteWithActiveScroll() throws IOException {\n+ createIndex(\"test\");\n+ for (int i = 0; i < 100; i++) {\n+ client().prepareIndex(\"test\", \"type1\", Integer.toString(i)).setSource(jsonBuilder().startObject().field(\"field\", i).endObject()).execute().actionGet();\n+ }\n+ refresh();\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(matchAllQuery())\n+ .setSize(35)\n+ .setScroll(TimeValue.timeValueMinutes(2))\n+ .addSort(\"field\", SortOrder.ASC)\n+ .execute().actionGet();\n+ long counter = 0;\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(100l));\n+ assertThat(searchResponse.getHits().hits().length, equalTo(35));\n+ for (SearchHit hit : searchResponse.getHits()) {\n+ assertThat(((Number) hit.sortValues()[0]).longValue(), equalTo(counter++));\n+ }\n+ if (randomBoolean()) {\n+ client().admin().indices().prepareClose(\"test\").get();\n+ client().admin().indices().prepareOpen(\"test\").get();\n+ ensureGreen(\"test\");\n+ } else {\n+ client().admin().indices().prepareDelete(\"test\").get();\n+ }\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/search/scroll/SearchScrollTests.java",
"status": "modified"
}
]
} |
{
"body": "This change fixes the plugin manager to trim `elasticsearch-` and `es-` prefixes from plugin names\nfor our official plugins. This restores the old behavior prior to #11805.\n\nCloses #12143\n",
"comments": [
{
"body": "Looks good to me.\n",
"created_at": "2015-07-09T15:19:55Z"
},
{
"body": "Thank you @jaymode!\n",
"created_at": "2015-07-09T15:21:03Z"
}
],
"number": 12158,
"title": "remove elasticsearch- from name of official plugins"
} | {
"body": "#12158 only partially fixed #12143 by removing the prefix from official plugin names. This change\n\nremoves the prefixes from any plugin names.\n",
"number": 12160,
"review_comments": [],
"title": "strip elasticsearch- and es- from any plugin name"
} | {
"commits": [
{
"message": "strip elasticsearch- and es- from any plugin name\n\n#12158 only partially fixed #12143 by removing the prefix from official plugin names. This change\nremoves the prefixes from any plugin names."
}
],
"files": [
{
"diff": "@@ -758,19 +758,20 @@ static PluginHandle parse(String name) {\n }\n }\n \n+ String endname = repo;\n+ if (repo.startsWith(\"elasticsearch-\")) {\n+ // remove elasticsearch- prefix\n+ endname = repo.substring(\"elasticsearch-\".length());\n+ } else if (repo.startsWith(\"es-\")) {\n+ // remove es- prefix\n+ endname = repo.substring(\"es-\".length());\n+ }\n+\n if (isOfficialPlugin(repo, user, version)) {\n- String endname = repo;\n- if (repo.startsWith(\"elasticsearch-\")) {\n- // remove elasticsearch- prefix\n- endname = repo.substring(\"elasticsearch-\".length());\n- } else if (name.startsWith(\"es-\")) {\n- // remove es- prefix\n- endname = repo.substring(\"es-\".length());\n- }\n return new PluginHandle(endname, Version.CURRENT.number(), null, repo);\n }\n \n- return new PluginHandle(repo, version, user, repo);\n+ return new PluginHandle(endname, version, user, repo);\n }\n \n static boolean isOfficialPlugin(String repo, String user, String version) {",
"filename": "core/src/main/java/org/elasticsearch/plugins/PluginManager.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,7 @@ public void testSimplifiedNaming() throws IOException {\n }\n \n @Test\n- public void testTrimmingElasticsearchFromPluginName() throws IOException {\n+ public void testTrimmingElasticsearchFromOfficialPluginName() throws IOException {\n String randomName = randomAsciiOfLength(10);\n String pluginName = randomFrom(\"elasticsearch-\", \"es-\") + randomName;\n PluginManager.PluginHandle handle = PluginManager.PluginHandle.parse(pluginName);\n@@ -79,4 +79,16 @@ public void testTrimmingElasticsearchFromPluginName() throws IOException {\n pluginName + \"-\" + Version.CURRENT.number() + \".zip\");\n assertThat(handle.urls().get(0), is(expected));\n }\n+\n+ @Test\n+ public void testTrimmingElasticsearchFromGithubPluginName() throws IOException {\n+ String user = randomAsciiOfLength(6);\n+ String randomName = randomAsciiOfLength(10);\n+ String pluginName = randomFrom(\"elasticsearch-\", \"es-\") + randomName;\n+ PluginManager.PluginHandle handle = PluginManager.PluginHandle.parse(user + \"/\" + pluginName);\n+ assertThat(handle.name, is(randomName));\n+ assertThat(handle.urls(), hasSize(1));\n+ URL expected = new URL(\"https\", \"github.com\", \"/\" + user + \"/\" + pluginName + \"/\" + \"archive/master.zip\");\n+ assertThat(handle.urls().get(0), is(expected));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/plugins/PluginManagerUnitTests.java",
"status": "modified"
}
]
} |
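A standalone, hypothetical sketch of the prefix rule that the change above applies to every plugin name, not just official ones. The `stripPrefix` helper and the sample names are illustrative; the real logic lives in `PluginHandle.parse` as shown in the diff.

```java
public class PluginNamePrefixes {

    // Strip the optional "elasticsearch-" or "es-" prefix from a plugin/repo name,
    // mirroring the rule the plugin manager applies before building the PluginHandle.
    static String stripPrefix(String repo) {
        if (repo.startsWith("elasticsearch-")) {
            return repo.substring("elasticsearch-".length());
        } else if (repo.startsWith("es-")) {
            return repo.substring("es-".length());
        }
        return repo;
    }

    public static void main(String[] args) {
        System.out.println(stripPrefix("elasticsearch-analysis-kuromoji")); // analysis-kuromoji
        System.out.println(stripPrefix("es-myplugin"));                     // myplugin
        System.out.println(stripPrefix("some-community-plugin"));           // unchanged
    }
}
```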
{
"body": "https://github.com/elastic/elasticsearch/pull/11805 accidentally broke the naming convention not stripping of the 'elasticsearch' / 'es' parts of the beginning of the plugin names.\n\nhttps://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/plugins/PluginManager.java#L754-L756 runs the pluginmanager before stripping of the names which happens after that.\n",
"comments": [],
"number": 12143,
"title": "Recent changes to Pluginmanager broke naming convention"
} | {
"body": "#12158 only partially fixed #12143 by removing the prefix from official plugin names. This change\n\nremoves the prefixes from any plugin names.\n",
"number": 12160,
"review_comments": [],
"title": "strip elasticsearch- and es- from any plugin name"
} | {
"commits": [
{
"message": "strip elasticsearch- and es- from any plugin name\n\n#12158 only partially fixed #12143 by removing the prefix from official plugin names. This change\nremoves the prefixes from any plugin names."
}
],
"files": [
{
"diff": "@@ -758,19 +758,20 @@ static PluginHandle parse(String name) {\n }\n }\n \n+ String endname = repo;\n+ if (repo.startsWith(\"elasticsearch-\")) {\n+ // remove elasticsearch- prefix\n+ endname = repo.substring(\"elasticsearch-\".length());\n+ } else if (repo.startsWith(\"es-\")) {\n+ // remove es- prefix\n+ endname = repo.substring(\"es-\".length());\n+ }\n+\n if (isOfficialPlugin(repo, user, version)) {\n- String endname = repo;\n- if (repo.startsWith(\"elasticsearch-\")) {\n- // remove elasticsearch- prefix\n- endname = repo.substring(\"elasticsearch-\".length());\n- } else if (name.startsWith(\"es-\")) {\n- // remove es- prefix\n- endname = repo.substring(\"es-\".length());\n- }\n return new PluginHandle(endname, Version.CURRENT.number(), null, repo);\n }\n \n- return new PluginHandle(repo, version, user, repo);\n+ return new PluginHandle(endname, version, user, repo);\n }\n \n static boolean isOfficialPlugin(String repo, String user, String version) {",
"filename": "core/src/main/java/org/elasticsearch/plugins/PluginManager.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,7 @@ public void testSimplifiedNaming() throws IOException {\n }\n \n @Test\n- public void testTrimmingElasticsearchFromPluginName() throws IOException {\n+ public void testTrimmingElasticsearchFromOfficialPluginName() throws IOException {\n String randomName = randomAsciiOfLength(10);\n String pluginName = randomFrom(\"elasticsearch-\", \"es-\") + randomName;\n PluginManager.PluginHandle handle = PluginManager.PluginHandle.parse(pluginName);\n@@ -79,4 +79,16 @@ public void testTrimmingElasticsearchFromPluginName() throws IOException {\n pluginName + \"-\" + Version.CURRENT.number() + \".zip\");\n assertThat(handle.urls().get(0), is(expected));\n }\n+\n+ @Test\n+ public void testTrimmingElasticsearchFromGithubPluginName() throws IOException {\n+ String user = randomAsciiOfLength(6);\n+ String randomName = randomAsciiOfLength(10);\n+ String pluginName = randomFrom(\"elasticsearch-\", \"es-\") + randomName;\n+ PluginManager.PluginHandle handle = PluginManager.PluginHandle.parse(user + \"/\" + pluginName);\n+ assertThat(handle.name, is(randomName));\n+ assertThat(handle.urls(), hasSize(1));\n+ URL expected = new URL(\"https\", \"github.com\", \"/\" + user + \"/\" + pluginName + \"/\" + \"archive/master.zip\");\n+ assertThat(handle.urls().get(0), is(expected));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/plugins/PluginManagerUnitTests.java",
"status": "modified"
}
]
} |
{
"body": "https://github.com/elastic/elasticsearch/pull/11805 accidentally broke the naming convention not stripping of the 'elasticsearch' / 'es' parts of the beginning of the plugin names.\n\nhttps://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/plugins/PluginManager.java#L754-L756 runs the pluginmanager before stripping of the names which happens after that.\n",
"comments": [],
"number": 12143,
"title": "Recent changes to Pluginmanager broke naming convention"
} | {
"body": "This change fixes the plugin manager to trim `elasticsearch-` and `es-` prefixes from plugin names\nfor our official plugins. This restores the old behavior prior to #11805.\n\nCloses #12143\n",
"number": 12158,
"review_comments": [],
"title": "remove elasticsearch- from name of official plugins"
} | {
"commits": [
{
"message": "remove elasticsearch- from name of official plugins\n\nThis change fixes the plugin manager to trim `elasticsearch-` and `es-` prefixes from plugin names\nfor our official plugins. This restores the old behavior prior to #11805.\n\nCloses #12143"
}
],
"files": [
{
"diff": "@@ -759,19 +759,15 @@ static PluginHandle parse(String name) {\n }\n \n if (isOfficialPlugin(repo, user, version)) {\n- return new PluginHandle(repo, Version.CURRENT.number(), null, repo);\n- }\n-\n- if (repo.startsWith(\"elasticsearch-\")) {\n- // remove elasticsearch- prefix\n- String endname = repo.substring(\"elasticsearch-\".length());\n- return new PluginHandle(endname, version, user, repo);\n- }\n-\n- if (name.startsWith(\"es-\")) {\n- // remove es- prefix\n- String endname = repo.substring(\"es-\".length());\n- return new PluginHandle(endname, version, user, repo);\n+ String endname = repo;\n+ if (repo.startsWith(\"elasticsearch-\")) {\n+ // remove elasticsearch- prefix\n+ endname = repo.substring(\"elasticsearch-\".length());\n+ } else if (name.startsWith(\"es-\")) {\n+ // remove es- prefix\n+ endname = repo.substring(\"es-\".length());\n+ }\n+ return new PluginHandle(endname, Version.CURRENT.number(), null, repo);\n }\n \n return new PluginHandle(repo, version, user, repo);",
"filename": "core/src/main/java/org/elasticsearch/plugins/PluginManager.java",
"status": "modified"
},
{
"diff": "@@ -67,4 +67,16 @@ public void testSimplifiedNaming() throws IOException {\n pluginName + \"-\" + Version.CURRENT.number() + \".zip\");\n assertThat(handle.urls().get(0), is(expected));\n }\n+\n+ @Test\n+ public void testTrimmingElasticsearchFromPluginName() throws IOException {\n+ String randomName = randomAsciiOfLength(10);\n+ String pluginName = randomFrom(\"elasticsearch-\", \"es-\") + randomName;\n+ PluginManager.PluginHandle handle = PluginManager.PluginHandle.parse(pluginName);\n+ assertThat(handle.name, is(randomName));\n+ assertThat(handle.urls(), hasSize(1));\n+ URL expected = new URL(\"http\", \"download.elastic.co\", \"/org.elasticsearch.plugins/\" + pluginName + \"/\" +\n+ pluginName + \"-\" + Version.CURRENT.number() + \".zip\");\n+ assertThat(handle.urls().get(0), is(expected));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/plugins/PluginManagerUnitTests.java",
"status": "modified"
}
]
} |
{
"body": "It gives them _only_ execute permission. But instead it should add execute bit, not remove everything else.\n",
"comments": [
{
"body": "This is a 2.0-only problem. 1.x uses a different mechanism from java.io.File to add executable perms.\n",
"created_at": "2015-07-09T14:50:33Z"
}
],
"number": 12142,
"title": "PluginManager screws up file permissions for stuff in bin/"
} | {
"body": "Today it will remove all permissions and only set execute bit:\n\n```\n---x--x--x\n```\n\nInstead we should preserve existing permissions, and just add\nread and execute to whatever is there.\n\nCloses #12142\n",
"number": 12157,
"review_comments": [],
"title": "Fix pluginmanager permissions for bin/ scripts"
} | {
"commits": [
{
"message": "Fix pluginmanager permissions for bin/ scripts\n\nToday it will remove all permissions and only set execute bit:\n\n ---x--x--x\n\nInstead we should preserve existing permissions, and just add\nread and execute to whatever is there.\n\nCloses #12142"
}
],
"files": [
{
"diff": "@@ -277,14 +277,21 @@ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IO\n throw new IOException(\"Could not move [\" + binFile + \"] to [\" + toLocation + \"]\", e);\n }\n if (Files.getFileStore(toLocation).supportsFileAttributeView(PosixFileAttributeView.class)) {\n- final Set<PosixFilePermission> perms = new HashSet<>();\n- perms.add(PosixFilePermission.OWNER_EXECUTE);\n- perms.add(PosixFilePermission.GROUP_EXECUTE);\n- perms.add(PosixFilePermission.OTHERS_EXECUTE);\n+ // add read and execute permissions to existing perms, so execution will work.\n+ // read should generally be set already, but set it anyway: don't rely on umask...\n+ final Set<PosixFilePermission> executePerms = new HashSet<>();\n+ executePerms.add(PosixFilePermission.OWNER_READ);\n+ executePerms.add(PosixFilePermission.GROUP_READ);\n+ executePerms.add(PosixFilePermission.OTHERS_READ);\n+ executePerms.add(PosixFilePermission.OWNER_EXECUTE);\n+ executePerms.add(PosixFilePermission.GROUP_EXECUTE);\n+ executePerms.add(PosixFilePermission.OTHERS_EXECUTE);\n Files.walkFileTree(toLocation, new SimpleFileVisitor<Path>() {\n @Override\n public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {\n if (attrs.isRegularFile()) {\n+ Set<PosixFilePermission> perms = Files.getPosixFilePermissions(file);\n+ perms.addAll(executePerms);\n Files.setPosixFilePermissions(file, perms);\n }\n return FileVisitResult.CONTINUE;",
"filename": "core/src/main/java/org/elasticsearch/plugins/PluginManager.java",
"status": "modified"
},
{
"diff": "@@ -116,6 +116,8 @@ public void testLocalPluginInstallWithBinAndConfig() throws Exception {\n PosixFileAttributes attributes = view.readAttributes();\n assertTrue(\"unexpected permissions: \" + attributes.permissions(),\n attributes.permissions().contains(PosixFilePermission.OWNER_EXECUTE));\n+ assertTrue(\"unexpected permissions: \" + attributes.permissions(),\n+ attributes.permissions().contains(PosixFilePermission.OWNER_READ));\n }\n } finally {\n // we need to clean up the copied dirs",
"filename": "core/src/test/java/org/elasticsearch/plugins/PluginManagerTests.java",
"status": "modified"
}
]
} |
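
The diff in the record above changes the plugin manager to add read and execute bits to each file's existing POSIX permissions instead of replacing them with execute-only. As a minimal standalone sketch of that idea using java.nio (the class name, helper, and `main` entry point are illustrative and not part of the PR):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;
import java.util.Set;

public class AddExecBit {
    // Add read and execute bits to whatever permissions the file already has,
    // instead of replacing them with execute-only (the behavior fixed above).
    static void addReadExecute(Path file) throws IOException {
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(file);
        perms.addAll(EnumSet.of(
                PosixFilePermission.OWNER_READ, PosixFilePermission.GROUP_READ, PosixFilePermission.OTHERS_READ,
                PosixFilePermission.OWNER_EXECUTE, PosixFilePermission.GROUP_EXECUTE, PosixFilePermission.OTHERS_EXECUTE));
        Files.setPosixFilePermissions(file, perms);
    }

    public static void main(String[] args) throws IOException {
        addReadExecute(Paths.get(args[0]));
    }
}
```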
{
"body": "I had to disable license verification on windows so that `mvn verify` would work in #12117\n\nI installed strawberry perl and unzip.exe in my path, but the script will consistently compute the wrong sha1 checksum for lucene-analysis-kuromoji jar (i was testing with plugins/analysis-kuromoji)... i've verified (with windows fciv) that the checksum is correct, so there is something not right there.\n\nNot sure its critical to fix this, its just omitted on windows right now.\n",
"comments": [
{
"body": "for now I think it's fine to just run this on unix though. @clintongormley what do you think?\n",
"created_at": "2015-07-08T07:38:16Z"
},
{
"body": "we turned this on for windows in #12528\n",
"created_at": "2015-08-04T12:23:56Z"
}
],
"number": 12118,
"title": "fix license verification on windows?"
} | {
"body": "Related to #12117\n\nCloses #12118\n",
"number": 12154,
"review_comments": [],
"title": "Changed SHA calculation in license checker to assume binary files"
} | {
"commits": [
{
"message": "Changed SHA calculation in license checker to assume binary files\n\nRelated to #12117"
}
],
"files": [
{
"diff": "@@ -41,7 +41,8 @@ sub check_shas_and_licenses {\n }\n \n unless ( $old_sha eq $new{$jar} ) {\n- say STDERR \"$jar: SHA has changed, expected $old_sha but found $new{$jar}\";\n+ say STDERR\n+ \"$jar: SHA has changed, expected $old_sha but found $new{$jar}\";\n $error++;\n $sha_error++;\n next;\n@@ -228,7 +229,7 @@ sub calculate_shas {\n #===================================\n my %shas;\n while ( my $file = shift() ) {\n- my $digest = eval { Digest::SHA->new(1)->addfile($file) }\n+ my $digest = eval { Digest::SHA->new(1)->addfile( $file, \"b\" ) }\n or die \"Error calculating SHA1 for <$file>: $!\\n\";\n $shas{ basename($file) . \".sha1\" } = $digest->hexdigest;\n }",
"filename": "dev-tools/src/main/resources/license-check/check_license_and_sha.pl",
"status": "modified"
}
]
} |
{
"body": "I am doing _update on some doc using the url like\nhttp://10.0.0.91:9200/alias_SOME_INDEX/SOME_TYPE/SOME_ID/_update\nand my payload is like\n\n{ \n\"doc\":{\"baseName\":\"Microsoft Office\"}, \n\"upsert\":{\"baseName\":\"Microsoft Office\"}\n}\n\non some other thread i am doing PUT for the same document with same id.\n\ni get document already exists]\",\"status\":409 \nsome time it works some time i get this error\n\nso i suspect some thing to do with 2 threads doing similar thing causing this, but an \"_update\" call giving already exists kind of exception looks strange\n",
"comments": [
{
"body": "I haven't tried to replicate this, but it sounds like it is trying to do an upsert, and not handling the conflict exception correctly.\n",
"created_at": "2015-06-05T08:53:06Z"
},
{
"body": "i have made a pull request for fixing this, do i have to do any thing more? is the pull request fine?\n",
"created_at": "2015-06-12T14:43:03Z"
},
{
"body": "thanks for the pr @vedil - i've marked it for review. somebody should get to it shortly\n",
"created_at": "2015-06-12T17:13:54Z"
},
{
"body": "messed up previous pull request, so created another one https://github.com/elastic/elasticsearch/pull/12137\n",
"created_at": "2015-07-09T01:11:13Z"
},
{
"body": "Can you provide a test case that replicates the `DocumentAlreadyExistsException`?\n\nI agree that there is clearly potential for a race condition here and think that it's important that we get to the bottom of it. A test case that reproduces the exception would be helpful.\n",
"created_at": "2015-07-10T02:13:29Z"
},
{
"body": "tried creating a test case after removing my changes, and modifying my test a bit\n @Test\n public void testIndexNUpdateUpsert() {\n //update action goes to the primary, index op gets executed locally, then replicated\n String[] updateShardActions = new String[]{UpdateAction.NAME, IndexAction.NAME + \"[r]\"};\n interceptTransportActions(updateShardActions);\n\n```\n String indexOrAlias = randomIndexOrAlias();\n\n String[] indexShardActions = new String[]{IndexAction.NAME, IndexAction.NAME + \"[r]\"};\n interceptTransportActions(indexShardActions);\n\n IndexRequest indexRequest = new IndexRequest(randomIndexOrAlias(), \"type\", \"id\").source(\"field\", \"value\");\n IndexResponse indexResponse = internalCluster().clientNodeClient().index(indexRequest).actionGet();\n clearInterceptedActions();\n assertSameIndices(indexRequest, indexShardActions);\n assertThat(1L, equalTo(indexResponse.getVersion()));\n\n indexRequest = new IndexRequest(randomIndexOrAlias(), \"type\", \"id\").source(\"field\", \"value\");\n indexResponse = internalCluster().clientNodeClient().index(indexRequest).actionGet();\n clearInterceptedActions();\n //assertSameIndices(indexRequest, indexShardActions);\n assertThat(2L, equalTo(indexResponse.getVersion()));\n\n UpdateRequest updateRequest = new UpdateRequest(indexOrAlias, \"type\", \"id\").upsert(\"field2\", \"value2\").doc(\"field1\", \"value1\");\n UpdateResponse updateResponse = internalCluster().clientNodeClient().update(updateRequest).actionGet();\n assertThat( updateResponse.getVersion(), greaterThan(indexResponse.getVersion()));\n\n clearInterceptedActions();\n System.out.println(\"updateRequest \"+updateRequest +\" updateShardActions = \"+updateShardActions );\n assertSameIndicesOptionalRequests(updateRequest, updateShardActions);\n}\n```\n\nnow this is failing always by saying 1 is < 2, is my assert supposed to succeed?\ni am doing index, index, update and expecting a version > 2\n",
"created_at": "2015-07-14T18:28:06Z"
},
{
"body": "i am running test case using these vm arguments in eclipse\n-ea -Dtests.seed=806B4E52F9B20C5B -Dtests.assertion.disabled=false -Dtests.heap.size=512m -Dtests.locale=no_NO_NY -Dtests.timezone=America/Miquelon -Des.logger.level=DEBUG\n",
"created_at": "2015-07-14T18:29:00Z"
},
{
"body": "```\n @Test\n public void testIndexNUpdateUpsert() {\n //update action goes to the primary, index op gets executed locally, then replicated\n //String[] updateShardActions = new String[]{UpdateAction.NAME, IndexAction.NAME + \"[r]\"};\n //interceptTransportActions(updateShardActions);\n\n final String indexOrAlias = randomIndexOrAlias();\n final int NUMBER_OF_THREADS = 10;\n final int UPDATE_EVERY = 2;\n final CountDownLatch latch = new CountDownLatch(NUMBER_OF_THREADS);\n Thread[] threads = new Thread[NUMBER_OF_THREADS];\n for (int i = 0; i < threads.length; i++) {\n threads[i] = new Thread() {\n @Override\n public void run() {\n try {\n for (long i = 0; i < NUMBER_OF_THREADS; i++) {\n if ((i % UPDATE_EVERY) == 0) {\n UpdateRequest updateRequest = new UpdateRequest(indexOrAlias, \"type\", \"id\").upsert(\"field2\", \"value2\").doc(\"field1\", \"value1\");\n UpdateResponse updateResponse = internalCluster().clientNodeClient().update(updateRequest).actionGet();\n System.out.println(\"update response = \"+updateResponse);\n } else {\n IndexRequest indexRequest = new IndexRequest(indexOrAlias, \"type\", \"id\").source(\"field\", \"value\");\n IndexResponse indexResponse = internalCluster().clientNodeClient().index(indexRequest).actionGet();\n System.out.println(\"index response = \"+indexResponse);\n }\n }\n } finally {\n latch.countDown();\n }\n }\n };\n }\n\n for (Thread thread : threads) {\n thread.start();\n }\n\n try {\n latch.await();\n } catch (InterruptedException e) {\n e.printStackTrace();\n throw new RuntimeException();\n }\n}\n```\n\nusing this test i am able to reproduce the \"document already exists\" also i see some exceptions whose message is like \"version conflict, current [3], provided [1]\" also\n",
"created_at": "2015-07-24T16:56:54Z"
},
{
"body": "I do not think this is an actual bug.\n\n@vedil I believe the unit test you put [here](https://github.com/elastic/elasticsearch/issues/11506#issuecomment-121331165) fails because you use a new `randomIndexOrAlias()` for each request and so the requests might not all got to the same index. I you use the same index each time the test will pass.\n\nI agree that the `DocumentAlreadyExistsException` seems weird for the integration test but this is also expected I think. An update first retrieves the document via `get` and then issues an `index` request with the updated source. If a write sneaked in between `get` and issuing the `index` request we throw a `VersionConflictException` in case the document already existed before the update. However, in case the document did not exist when the `get` was executed we check that the document does still not exist when the `index` request is sent. If it does, we throw a `DocumentAlreadyExistsException`.\nTo circumvent this, you need to set the [retry on conflict parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update.html#_parameters_3) to a higher value.\nLet me know if my explanation makes sense.\n\nWe could potentially throw a `VersionConfictException` instead of a `DocumentAlreadyExistsException` to have consistent exceptions for updates or have a dedicated `UpdateFailedException` that explains what happened or just document this better. \n",
"created_at": "2015-07-28T13:21:01Z"
},
{
"body": "regarding randomIndexOrAlias, i am calling it once and using it in all requests. so within a test it will use same index.\nyour other explanation makes sense, i feel VersionConfictException makes more sense.\n",
"created_at": "2015-07-28T14:13:35Z"
},
{
"body": "After https://github.com/elastic/elasticsearch/pull/13955 is in we will get `VersionConfictException`.\n",
"created_at": "2015-10-06T19:18:17Z"
},
{
"body": "#13955 was merged, closing\n",
"created_at": "2015-10-07T15:16:19Z"
},
{
"body": "赞",
"created_at": "2018-03-21T12:42:33Z"
}
],
"number": 11506,
"title": "_update some time i get DocumentAlreadyExistsException"
} | {
"body": "Closes #11506\n",
"number": 12137,
"review_comments": [],
"title": "Handle upserts failing when document has already been created by another process "
} | {
"commits": [
{
"message": "not throwing DAEE if version type is internal and version is supplied, added test"
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.engine;\n \n import com.google.common.collect.Lists;\n+\n import org.apache.lucene.index.*;\n import org.apache.lucene.index.IndexWriter.IndexReaderWarmer;\n import org.apache.lucene.search.BooleanClause.Occur;\n@@ -44,6 +45,7 @@\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.common.util.concurrent.ReleasableLock;\n+import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;\n import org.elasticsearch.index.indexing.ShardIndexingService;\n import org.elasticsearch.index.mapper.Uid;\n@@ -397,7 +399,11 @@ private void innerCreateNoLock(Create create, long currentVersion, VersionValue\n */\n doUpdate = true;\n updatedVersion = 1;\n- } else {\n+ } else if (create.origin() == Operation.Origin.PRIMARY && create.versionType() == VersionType.INTERNAL && (create.version() == Versions.MATCH_ANY && currentVersion == Versions.NOT_FOUND ) ) {\n+ //assuming that this means it is an update request and we can update safely\n+ doUpdate = true;\n+ updatedVersion = currentVersion++;\n+ } else {\n // On primary, we throw DAEE if the _uid is already in the index with an older version:\n assert create.origin() == Operation.Origin.PRIMARY;\n throw new DocumentAlreadyExistsException(shardId, create.type(), create.id());",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -69,6 +69,7 @@\n import org.elasticsearch.action.get.MultiGetRequest;\n import org.elasticsearch.action.index.IndexAction;\n import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.action.percolate.MultiPercolateAction;\n import org.elasticsearch.action.percolate.MultiPercolateRequest;\n import org.elasticsearch.action.percolate.PercolateAction;\n@@ -248,6 +249,31 @@ public void testUpdateUpsert() {\n assertSameIndices(updateRequest, updateShardActions);\n }\n \n+ @Test\n+ public void testIndexNUpdateUpsert() {\n+ //update action goes to the primary, index op gets executed locally, then replicated\n+ String[] updateShardActions = new String[]{UpdateAction.NAME, IndexAction.NAME + \"[r]\"};\n+ interceptTransportActions(updateShardActions);\n+\n+ String indexOrAlias = randomIndexOrAlias();\n+ \n+ String[] indexShardActions = new String[]{IndexAction.NAME, IndexAction.NAME + \"[r]\"};\n+ interceptTransportActions(indexShardActions);\n+ \n+ IndexRequest indexRequest = new IndexRequest(randomIndexOrAlias(), \"type\", \"id\").source(\"field\", \"value\");\n+ IndexResponse indexResponse = internalCluster().clientNodeClient().index(indexRequest).actionGet();\n+ clearInterceptedActions();\n+ assertSameIndices(indexRequest, indexShardActions);\n+ \n+ UpdateRequest updateRequest = new UpdateRequest(indexOrAlias, \"type\", \"id\").upsert(\"field\", \"value\").doc(\"field1\", \"value1\");\n+ UpdateResponse updateResponse = internalCluster().clientNodeClient().update(updateRequest).actionGet();\n+ assertThat( updateResponse.getVersion(), greaterThan(indexResponse.getVersion()));\n+\n+ clearInterceptedActions();\n+ System.out.println(\"updateRequest \"+updateRequest +\" updateShardActions = \"+updateShardActions );\n+ assertSameIndicesOptionalRequests(updateRequest, updateShardActions);\n+ }\n+ \n @Test\n public void testUpdateDelete() {\n //update action goes to the primary, delete op gets executed locally, then replicated",
"filename": "core/src/test/java/org/elasticsearch/action/IndicesRequestTests.java",
"status": "modified"
}
]
} |
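
The explanation in the record above points at the retry-on-conflict parameter as the way to keep the update's get-then-index race from surfacing as an error. A minimal sketch with the Java client of that era, assuming an already-built `Client` and an existing index name (both placeholders, not taken from the PR); the doc/upsert fields mirror the test in the diff:

```java
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.Client;

public class UpsertWithRetry {
    // "client" and "index" are placeholders supplied by the caller.
    static UpdateResponse upsert(Client client, String index) {
        UpdateRequest updateRequest = new UpdateRequest(index, "type", "id")
                .doc("field1", "value1")
                .upsert("field2", "value2")
                // retry the get-then-index cycle instead of failing when a concurrent
                // write lands between the get and the index step
                .retryOnConflict(3);
        return client.update(updateRequest).actionGet();
    }
}
```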
{
"body": "A failure during the fetch phase of a scan mistakenly invokes the `onFailedQueryPhase` handler, but should invoke the `onFailedFetchPhase` handler. As such, in cases of failure during the fetch phase both the query and fetch statistics will be wrong.\n",
"comments": [],
"number": 12086,
"title": "Failure during the fetch phase of scan invokes the wrong failure handler"
} | {
"body": "… phase handler.\n\nThis commit fixes an issue where during a failure in the fetch phase of a scan the wrong failure handler was invoked.\n\nCloses #12086\n",
"number": 12087,
"review_comments": [],
"title": "Failure during the fetch phase of scan should invoke the failed fetch…"
} | {
"commits": [
{
"message": "Failure during the fetch phase of scan should invoke the failed fetch phase handler.\n\nThis commit fixes an issue where during a failure in the fetch phase of a scan the wrong failure handler was invoked.\n\nCloses #12086"
}
],
"files": [
{
"diff": "@@ -296,7 +296,7 @@ public ScrollQueryFetchSearchResult executeScan(InternalScrollSearchRequest requ\n contextProcessedSuccessfully(context);\n }\n } catch (Throwable e) {\n- shardSearchStats.onFailedQueryPhase(context);\n+ shardSearchStats.onFailedFetchPhase(context);\n throw ExceptionsHelper.convertToRuntime(e);\n }\n shardSearchStats.onFetchPhase(context, System.nanoTime() - queryFinishTime);",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
}
]
} |
{
"body": "```\n$ curl -XDELETE localhost:9200/test; echo\n{\"error\":\"IndexMissingException[[test] missing]\",\"status\":404}\n$ curl -XPUT localhost:9200/test/test/1 -d '{\"title\": \"test document\"}'; echo\n{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":1,\"created\":true}\n$ curl -XPOST localhost:9200/test/_close; echo\n{\"acknowledged\":true}\n$ curl -XPOST localhost:9200/_bulk -d '{\"update\" : {\"_index\": \"test\", \"_type\": \"type1\", \"_id\": \"1\"}}\n> {\"doc\": {\"field1\": \"value1\"}}\n> '; echo\n{\"took\":1,\"errors\":true,\"items\":[{\"index\":{\"_index\":\"test\",\"_type\":\"type1\",\"_id\":\"1\",\"status\":403,\"error\":\"IndexClosedException[[test] closed]\"}}]} \n```\n\nThe key should be `update` instead of `index`. Similar things happen for deletes. The fix against current master is trivial:\n\n```\ndiff --git a/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java b/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java\nindex 6e48349..ffd49df 100644\n--- a/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java\n+++ b/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java\n@@ -168,13 +168,13 @@ public class TransportBulkAction extends HandledTransportAction<BulkRequest, Bul\n } else if (request instanceof DeleteRequest) {\n DeleteRequest deleteRequest = (DeleteRequest) request;\n if (index.equals(deleteRequest.index())) {\n- responses.set(idx, new BulkItemResponse(idx, \"index\", new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e)));\n+ responses.set(idx, new BulkItemResponse(idx, \"delete\", new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e)));\n return true;\n }\n } else if (request instanceof UpdateRequest) {\n UpdateRequest updateRequest = (UpdateRequest) request;\n if (index.equals(updateRequest.index())) {\n- responses.set(idx, new BulkItemResponse(idx, \"index\", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), e)));\n+ responses.set(idx, new BulkItemResponse(idx, \"update\", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), e)));\n return true;\n }\n } else {\n```\n\nHowever, why is the type reported at all? Is it allowed to differ? Could there be multiple responses to a single action?\n",
"comments": [
{
"body": "Hi @robx \n\nThanks for reporting. Would you be up for sending a PR?\n",
"created_at": "2015-02-28T01:05:03Z"
}
],
"number": 9821,
"title": "wrong type for bulk updates and deletes on closed index"
} | {
"body": "When a bulk request fails on a Delete or Update request, the BulkItemResponse\nreports incorrect \"index\" operation in the response. This PR fixes this\nfor the case of closed indices as reported in #9821 but also for\nother failures and adds tests for the two cases covered.\n\nCloses #9821\n",
"number": 12060,
"review_comments": [],
"title": "Fix: Use correct OpType on Failure in BulkItemResponse"
} | {
"commits": [
{
"message": "Fix: Use correct OpType on Failure in BulkItemResponse\n\nWhen a bulk request fails on a Delete or Update request, the BulkItemResponse\nreports incorrect \"index\" operation in the response. This PR fixes this\nfor the case of closed indices as reported in #9821 but also for\nother failures and adds tests for the two cases covered.\n\nCloses #9821"
}
],
"files": [
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.action.bulk;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionWriteResponse;\n import org.elasticsearch.action.delete.DeleteResponse;\n@@ -28,7 +27,6 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.RestStatus;\n \n import java.io.IOException;",
"filename": "core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java",
"status": "modified"
},
{
"diff": "@@ -168,13 +168,13 @@ private boolean setResponseFailureIfIndexMatches(AtomicArray<BulkItemResponse> r\n } else if (request instanceof DeleteRequest) {\n DeleteRequest deleteRequest = (DeleteRequest) request;\n if (index.equals(deleteRequest.index())) {\n- responses.set(idx, new BulkItemResponse(idx, \"index\", new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e)));\n+ responses.set(idx, new BulkItemResponse(idx, \"delete\", new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e)));\n return true;\n }\n } else if (request instanceof UpdateRequest) {\n UpdateRequest updateRequest = (UpdateRequest) request;\n if (index.equals(updateRequest.index())) {\n- responses.set(idx, new BulkItemResponse(idx, \"index\", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), e)));\n+ responses.set(idx, new BulkItemResponse(idx, \"update\", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), e)));\n return true;\n }\n } else {\n@@ -379,7 +379,15 @@ private boolean addFailureIfIndexIsUnavailable(DocumentRequest request, BulkRequ\n if (unavailableException != null) {\n BulkItemResponse.Failure failure = new BulkItemResponse.Failure(request.index(), request.type(), request.id(),\n unavailableException);\n- BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, \"index\", failure);\n+ String operationType = \"unknown\";\n+ if (request instanceof IndexRequest) {\n+ operationType = \"index\";\n+ } else if (request instanceof DeleteRequest) {\n+ operationType = \"delete\";\n+ } else if (request instanceof UpdateRequest) {\n+ operationType = \"update\";\n+ }\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, operationType, failure);\n responses.set(idx, bulkItemResponse);\n // make sure the request gets never processed again\n bulkRequest.requests.set(idx, null);",
"filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java",
"status": "modified"
},
{
"diff": "@@ -725,5 +725,39 @@ public void testThatMissingIndexDoesNotAbortFullBulkRequest() throws Exception{\n assertThat(bulkResponse.hasFailures(), is(true));\n assertThat(bulkResponse.getItems().length, is(5));\n }\n+\n+ @Test // issue 9821\n+ public void testFailedRequestsOnClosedIndex() throws Exception {\n+ createIndex(\"bulkindex1\");\n+ ensureYellow();\n+\n+ client().prepareIndex(\"bulkindex1\", \"index1_type\", \"1\").setSource(\"text\", \"test\").get();\n+ assertAcked(client().admin().indices().prepareClose(\"bulkindex1\"));\n+\n+ BulkRequest bulkRequest = new BulkRequest();\n+ bulkRequest.add(new IndexRequest(\"bulkindex1\", \"index1_type\", \"1\").source(\"text\", \"hallo1\"))\n+ .add(new UpdateRequest(\"bulkindex1\", \"index1_type\", \"1\").doc(\"foo\", \"bar\"))\n+ .add(new DeleteRequest(\"bulkindex1\", \"index1_type\", \"1\")).refresh(true);\n+\n+ BulkResponse bulkResponse = client().bulk(bulkRequest).get();\n+ assertThat(bulkResponse.hasFailures(), is(true));\n+ BulkItemResponse[] responseItems = bulkResponse.getItems();\n+ assertThat(responseItems.length, is(3));\n+ assertThat(responseItems[0].getOpType(), is(\"index\"));\n+ assertThat(responseItems[1].getOpType(), is(\"update\"));\n+ assertThat(responseItems[2].getOpType(), is(\"delete\"));\n+ }\n+\n+ @Test // issue 9821\n+ public void testInvalidIndexNamesCorrectOpType() {\n+ BulkResponse bulkResponse = client().prepareBulk()\n+ .add(client().prepareIndex().setIndex(\"INVALID.NAME\").setType(\"type1\").setId(\"1\").setSource(\"field\", 1))\n+ .add(client().prepareUpdate().setIndex(\"INVALID.NAME\").setType(\"type1\").setId(\"1\").setDoc(\"field\", randomInt()))\n+ .add(client().prepareDelete().setIndex(\"INVALID.NAME\").setType(\"type1\").setId(\"1\")).get();\n+ assertThat(bulkResponse.getItems().length, is(3));\n+ assertThat(bulkResponse.getItems()[0].getOpType(), is(\"index\"));\n+ assertThat(bulkResponse.getItems()[1].getOpType(), is(\"update\"));\n+ assertThat(bulkResponse.getItems()[2].getOpType(), is(\"delete\"));\n+ }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/document/BulkTests.java",
"status": "modified"
}
]
} |
{
"body": "The following request produces a SearchParseException because there is no aggregator of type `foo`:\n\n```\nGET _search\n{\n \"aggs\": {\n \"foo\": {\n \"bar\": {}\n }\n }\n}\n```\n\nOn the current master this is masked by a StackOverflowError (I presume as the stack trace fall off the top of the console buffer) will the follow (part) stacktrace showing the infinite loop:\n\n```\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:277)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:312)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:277)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:312)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:277)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:312)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:277)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:312)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:277)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:312)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:277)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:312)\n at org.elasticsearch.ElasticsearchException.toXContent(ElasticsearchException.java:277)\n```\n",
"comments": [
{
"body": "SearchParseException extends SearchException which in turn extends ElasticsearchException and implements ElasticsearchWrapperException. Any exception which extends ElasticsearchException and implements ElasticsearchWrapperExceptionwill suffer this issue\n",
"created_at": "2015-07-02T10:46:00Z"
},
{
"body": "From what I can see, the following Exceptions are affected:\n- BroadcastShardOperationFailedException\n- IndexCreationException\n- PercolateException\n- RecoverFilesRecoveryException\n- RemoteTransportException\n- SearchException\n- SearchContextException\n- SendRequestTransportException\n",
"created_at": "2015-07-02T10:49:06Z"
}
],
"number": 11994,
"title": "StackOverflowError when a SearchParseException is thrown"
} | {
"body": "the specialization can cause stack overflows if an exception is a\nElasticsearchWrapperException as well as a ElasticsearchException.\nThis commit just relies on the unwrap logic now to find the cause and only\nrenders if we the rendering exception is the cause otherwise forwards\nto the generic exception rendering.\n\nCloses #11994\n",
"number": 12015,
"review_comments": [],
"title": "Don't special-case on ElasticsearchWrapperException in toXContent"
} | {
"commits": [
{
"message": "Don't special-case on ElasticsearchWrapperException in toXContent\n\nthe specialization can cause stack overflows if an exception is a\nElasticsearchWrapperException as well as a ElasticsearchException.\nThis commit just relies on the unwrap logic now to find the cause and only\nrenders if we the rendering exception is the cause otherwise forwards\nto the generic exception rendering.\n\nCloses #11994"
}
],
"files": [
{
"diff": "@@ -246,7 +246,8 @@ static Set<String> getRegisteredKeys() { // for testing\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- if (this instanceof ElasticsearchWrapperException) {\n+ Throwable ex = ExceptionsHelper.unwrapCause(this);\n+ if (ex != this) {\n toXContent(builder, params, this);\n } else {\n builder.field(\"type\", getExceptionName());",
"filename": "core/src/main/java/org/elasticsearch/ElasticsearchException.java",
"status": "modified"
},
{
"diff": "@@ -32,14 +32,17 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentLocation;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexException;\n import org.elasticsearch.index.query.QueryParsingException;\n import org.elasticsearch.index.query.TestQueryParsingException;\n import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.elasticsearch.test.TestSearchContext;\n import org.elasticsearch.test.VersionUtils;\n import org.elasticsearch.test.hamcrest.ElasticsearchAssertions;\n import org.elasticsearch.transport.RemoteTransportException;\n@@ -177,6 +180,16 @@ public void testToString() {\n }\n \n public void testToXContent() throws IOException {\n+ {\n+ ElasticsearchException ex = new SearchParseException(new TestSearchContext(), \"foo\", new XContentLocation(1,0));\n+ XContentBuilder builder = XContentFactory.jsonBuilder();\n+ builder.startObject();\n+ ex.toXContent(builder, ToXContent.EMPTY_PARAMS);\n+ builder.endObject();\n+\n+ String expected = \"{\\\"type\\\":\\\"search_parse_exception\\\",\\\"reason\\\":\\\"foo\\\",\\\"line\\\":1,\\\"col\\\":0}\";\n+ assertEquals(expected, builder.string());\n+ }\n {\n ElasticsearchException ex = new ElasticsearchException(\"foo\", new ElasticsearchException(\"bar\", new IllegalArgumentException(\"index is closed\", new RuntimeException(\"foobar\"))));\n XContentBuilder builder = XContentFactory.jsonBuilder();\n@@ -226,6 +239,7 @@ public void testToXContent() throws IOException {\n ex.toXContent(otherBuilder, ToXContent.EMPTY_PARAMS);\n otherBuilder.endObject();\n assertEquals(otherBuilder.string(), builder.string());\n+ assertEquals(\"{\\\"type\\\":\\\"file_not_found_exception\\\",\\\"reason\\\":\\\"foo not found\\\"}\", builder.string());\n }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/ElasticsearchExceptionTests.java",
"status": "modified"
}
]
} |
{
"body": "it does this like so:\n\n```\nES_CLASSPATH=\"$ES_CLASSPATH:...\"\n```\n\nWhen ES_CLASSPATH is empty we get a classpath like this: \n\n```\n:/wherever.jar:/whatever.jar\n```\n\nThe empty path element gets interpreted as CWD.\n\nI did fgrep -r ES_CLASSPATH and saw lots of logic in various packaging scripts, etc. We need to review how these scripts set the classpath and make sure they do not do this.\n",
"comments": [
{
"body": "Easy to reproduce, cd /, then run /path/to/bin/elasticsearch\n",
"created_at": "2015-07-02T16:59:31Z"
}
],
"number": 12000,
"title": "elasticsearch.in.sh unintentionally adds CWD to classpath"
} | {
"body": "See #12000 for what this does.\n\nWe may need equivalent fix for the .bat file, too\n",
"number": 12001,
"review_comments": [],
"title": "Don't add CWD to classpath when ES_CLASSPATH isn't set."
} | {
"commits": [
{
"message": "Don't add CWD to classpath when ES_CLASSPATH isn't set."
},
{
"message": "blind stab at windows"
}
],
"files": [
{
"diff": "@@ -88,5 +88,11 @@ set JAVA_OPTS=%JAVA_OPTS% -Dfile.encoding=UTF-8\n REM Use our provided JNA always versus the system one\n set JAVA_OPTS=%JAVA_OPTS% -Djna.nosys=true\n \n-set ES_CLASSPATH=%ES_CLASSPATH%;%ES_HOME%/lib/${project.build.finalName}.jar;%ES_HOME%/lib/*;%ES_HOME%/lib/sigar/*\n+set CORE_CLASSPATH=%ES_HOME%/lib/${project.build.finalName}.jar;%ES_HOME%/lib/*;%ES_HOME%/lib/sigar/*\n+if \"%ES_CLASSPATH%\" == \"\" (\n+set ES_CLASSPATH=%CORE_CLASSPATH%\n+) else (\n+set ES_CLASSPATH=%ES_CLASSPATH%;%CORE_CLASSPATH%\n+)\n+\n set ES_PARAMS=-Delasticsearch -Des-foreground=yes -Des.path.home=\"%ES_HOME%\"",
"filename": "core/bin/elasticsearch.in.bat",
"status": "modified"
},
{
"diff": "@@ -1,6 +1,12 @@\n #!/bin/sh\n \n-ES_CLASSPATH=\"$ES_CLASSPATH:$ES_HOME/lib/${project.build.finalName}.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*\"\n+CORE_CLASSPATH=\"$ES_HOME/lib/${project.build.finalName}.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*\"\n+\n+if [ \"x$ES_CLASSPATH\" = \"x\" ]; then\n+ ES_CLASSPATH=\"$CORE_CLASSPATH\"\n+else\n+ ES_CLASSPATH=\"$ES_CLASSPATH:$CORE_CLASSPATH\"\n+fi\n \n if [ \"x$ES_MIN_MEM\" = \"x\" ]; then\n ES_MIN_MEM=${packaging.elasticsearch.heap.min}",
"filename": "core/bin/elasticsearch.in.sh",
"status": "modified"
}
]
} |
{
"body": "I am using the latest version of elasticsearch and I got this error when I use scroll with large number size and scan as a search type\n\n{\"error\":\"ArrayIndexOutOfBoundsException[-131072]\",\"status\":500}\n\nthous it perfectly works woth small sizes\n\nex. \n\n``` bash\n[01:21:39] lnxg33k@ruined-sec ➜ ~: curl -XGET \"http://localhost:9200/dns_logs/pico/_search?search_type=scan&scroll=1m\" -d \"{\n \"query\": { \"match_all\": {}},\n \"size\": 100000\n }\"\n{\"_scroll_id\":\"c2Nhbjs1OzUxOjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NTM6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzs1Mjo4Zko2ODZBVVRueVpsbE90WXF4MmpnOzU0OjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NTU6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzsxO3RvdGFsX2hpdHM6NTIwNzY2ODg7\",\"took\":132,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":52076688,\"max_score\":0.0,\"hits\":[]}}⏎ [01:21:50] lnxg33k@ruined-sec ➜ ~: curl -XGET \"http://localhost:9200/_search/scroll?scroll=1m&scroll_id=c2Nhbjs1OzUxOjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NTM6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzs1Mjo4Zko2ODZBVVRueVpsbE90WXF4MmpnOzU0OjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NTU6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzsxO3RvdGFsX2hpdHM6NTIwNzY2ODg7\" > xxx.json\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 262M 100 262M 0 0 129M 0 0:00:02 0:00:02 --:--:-- 129M\n[01:22:03] lnxg33k@ruined-sec ➜ ~: du -sh xxx.json \n263M xxx.json\n```\n\n``` bash\n[01:22:07] lnxg33k@ruined-sec ➜ ~: curl -XGET \"http://localhost:9200/dns_logs/pico/_search?search_type=scan&scroll=1m\" -d \"{\n \"query\": { \"match_all\": {}},\n \"size\": 1000000\n }\"\n{\"_scroll_id\":\"c2Nhbjs1OzU2OjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NTc6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzs1ODo4Zko2ODZBVVRueVpsbE90WXF4MmpnOzU5OjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NjA6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzsxO3RvdGFsX2hpdHM6NTIwNzY2ODg7\",\"took\":128,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":52076688,\"max_score\":0.0,\"hits\":[]}}⏎ [01:22:38] lnxg33k@ruined-sec ➜ ~: curl -XGET \"http://localhost:9200/_search/scroll?scroll=1m&scroll_id=c2Nhbjs1OzU2OjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NTc6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzs1ODo4Zko2ODZBVVRueVpsbE90WXF4MmpnOzU5OjhmSjY4NkFVVG55WmxsT3RZcXgyamc7NjA6OGZKNjg2QVVUbnlabGxPdFlxeDJqZzsxO3RvdGFsX2hpdHM6NTIwNzY2ODg7\"\n{\"error\":\"ArrayIndexOutOfBoundsException[null]\",\"status\":500}⏎ \n```\n",
"comments": [
{
"body": "Hi @lnxg33k \n\nJust to note: you shouldn't use such big sizes. The whole point of scrolling is that you can keep pulling smaller batches of results until you have enough.\n\nThat said, an NPE is always a bug. I've tried replicating this with two shards and 300,000 documents, but it is working fine for me.\n\nCould you provide the stack trace from the logs so that we can investigate further?\n\nthanks\n",
"created_at": "2014-10-14T12:21:10Z"
},
{
"body": "Hi @lnxg33k \n\nAny chance of getting the stack trace please?\n",
"created_at": "2014-10-23T10:01:13Z"
},
{
"body": "@clintongormley I am sorry but I couldn't reporoduce it anymore and don't have the stack trace. \n",
"created_at": "2014-10-28T08:21:22Z"
},
{
"body": "OK, thanks @lnxg33k \n\nI'll close this issue as we have been unable to replicate, but please feel free to reopen if you see it happen again.\n",
"created_at": "2014-10-28T10:28:54Z"
},
{
"body": "There is a stack trace with `ArrayIndexOutOfBounds` that occurs when scrolling, it started after upgrade to 1.6 : \n\n```\norg.elasticsearch.transport.RemoteTransportException: [Book][inet[/172.31.13.26:9300]][indices:data/read/scroll]\nCaused by: org.elasticsearch.action.search.ReduceSearchPhaseException: Failed to execute phase [fetch], [reduce] \n at org.elasticsearch.action.search.type.TransportSearchScrollScanAction$AsyncAction.finishHim(TransportSearchScrollScanAction.java:190) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.action.search.type.TransportSearchScrollScanAction$AsyncAction.access$800(TransportSearchScrollScanAction.java:71) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.action.search.type.TransportSearchScrollScanAction$AsyncAction$1.onResult(TransportSearchScrollScanAction.java:164) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.action.search.type.TransportSearchScrollScanAction$AsyncAction$1.onResult(TransportSearchScrollScanAction.java:159) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.search.action.SearchServiceTransportAction$22.handleResponse(SearchServiceTransportAction.java:533) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.search.action.SearchServiceTransportAction$22.handleResponse(SearchServiceTransportAction.java:524) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:163) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:132) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) ~[gwiq.jar:0.6-SNAPSHOT]\n at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[gwiq.jar:0.6-SNAPSHOT]\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_75]\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_75]\n at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_75]\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 70\n at org.elasticsearch.action.search.type.TransportSearchScrollScanAction$AsyncAction.innerFinishHim(TransportSearchScrollScanAction.java:209) ~[gwiq.jar:0.6-SNAPSHOT]\n at org.elasticsearch.action.search.type.TransportSearchScrollScanAction$AsyncAction.finishHim(TransportSearchScrollScanAction.java:188) ~[gwiq.jar:0.6-SNAPSHOT]\n ... 29 common frames omitted\n```\n",
"created_at": "2015-06-30T18:55:34Z"
},
{
"body": "I'm doing a lot of scrollings (with sliding time constraint over the same indices) in a for loop. I'll try to clear scroll context after each iteration to see if it helps...\n",
"created_at": "2015-07-01T08:38:39Z"
},
{
"body": "@l15k4 also, are you using `scan`? and do you have any shard exceptions while scrolling?\n",
"created_at": "2015-07-01T08:39:24Z"
},
{
"body": "@clintongormley Yes I'm doing `scan` with `range` over 110 indices, `page-size=70`, tried `keepAlive=10s-30s`. When I didn't get `ArrayIndexOutOfBoundsException` I got shardFailures, like ~ 10-50 of the same identical failures : \n\n```\nfailure.index() == null\nfailure.shardId == -1`\nfailure.reason == NodeDisconnectedException\n```\n",
"created_at": "2015-07-01T08:44:37Z"
},
{
"body": "Hi @l15k4 \n\nThanks for the info. I've asked @martijnvg to have a look at it when he has a moment. Any more info that you can provide to help us track it down would be useful. also, why so many node disconnected exceptions? that seems weird. Do you see any exceptions on those nodes?\n",
"created_at": "2015-07-01T09:21:04Z"
},
{
"body": "Imho it was all caused by leaving too many \"15s\" scroll contexts alive because I wasn't clearing them and I was performing 8760 tiny scans in for loop (sequentially) ... After I deployed the application with `clearing-scroll-context-feature` it works like a charm...\n\nSorry but those logs were temporary, they are gone with the old docker container...\n",
"created_at": "2015-07-01T09:28:38Z"
},
{
"body": "@l15k4 Did the errors occur while there were nodes of mixed versions in the cluster? Or were all nodes on the same version?\n",
"created_at": "2015-07-01T14:20:42Z"
},
{
"body": "@martijnvg At the time of the error being thrown all 4 nodes were `1.6.0` but a week ago we managed to run cluster [1.6.0, 1.6.0, 1.6.0, 1.5.1] for 3 hours before we noticed it was having `yellow` status indefinitely... Could it affect future well being of the cluster? \n",
"created_at": "2015-07-01T14:25:32Z"
},
{
"body": "@l15k4 no, but I don't recommend to do this is for a long period of time. Not sure what the cause of the exception was here, but I think the code where the exception occurs can be written in such a way that an `ArrayIndexOutOfBoundsException` can never occur.\n",
"created_at": "2015-07-01T16:16:57Z"
},
{
"body": "Fixed via #11978\n",
"created_at": "2015-08-24T14:48:33Z"
}
],
"number": 7926,
"title": "ArrayIndexOutOfBoundsException"
} | {
"body": "This way a ArrayIndexOutOfBoundsException like is reported in #7926 is impossible to occur.\n",
"number": 11978,
"review_comments": [
{
"body": "I think we should try to do it in an atomic way?\n\n``` java\nif (queryFetchResults.compareAndSet(shardIndex, null, result) == false) {\n throw new Exception();\n}\n```\n",
"created_at": "2015-07-06T17:03:11Z"
}
],
"title": "Append the shard top docs in such a way to prevent AOOBE"
} | {
"commits": [
{
"message": "scroll: Append the shard top docs in such a way that ArrayIndexOutOfBoundsException is impossible to occur.\nalso added AtomicArray#setOnce() method to be sure that we fail if an shard response has already been set"
}
],
"files": [
{
"diff": "@@ -41,7 +41,9 @@\n import org.elasticsearch.search.internal.InternalSearchResponse;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.List;\n+import java.util.Objects;\n import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.elasticsearch.action.search.type.TransportSearchHelper.internalScrollSearchRequest;\n@@ -159,7 +161,8 @@ void executePhase(final int shardIndex, DiscoveryNode node, final long searchId)\n searchService.sendExecuteScan(node, internalScrollSearchRequest(searchId, request), new SearchServiceListener<QueryFetchSearchResult>() {\n @Override\n public void onResult(QueryFetchSearchResult result) {\n- queryFetchResults.set(shardIndex, result);\n+ Objects.requireNonNull(result, \"QueryFetchSearchResult can't be null\");\n+ queryFetchResults.setOnce(shardIndex, result);\n if (counter.decrementAndGet() == 0) {\n finishHim();\n }\n@@ -197,25 +200,27 @@ private void finishHim() {\n \n private void innerFinishHim() throws IOException {\n int numberOfHits = 0;\n- for (AtomicArray.Entry<QueryFetchSearchResult> entry : queryFetchResults.asList()) {\n+ List<AtomicArray.Entry<QueryFetchSearchResult>> entries = queryFetchResults.asList();\n+ for (AtomicArray.Entry<QueryFetchSearchResult> entry : entries) {\n numberOfHits += entry.value.queryResult().topDocs().scoreDocs.length;\n }\n- ScoreDoc[] docs = new ScoreDoc[numberOfHits];\n- int counter = 0;\n- for (AtomicArray.Entry<QueryFetchSearchResult> entry : queryFetchResults.asList()) {\n+ List<ScoreDoc> docs = new ArrayList<>(numberOfHits);\n+ for (AtomicArray.Entry<QueryFetchSearchResult> entry : entries) {\n ScoreDoc[] scoreDocs = entry.value.queryResult().topDocs().scoreDocs;\n for (ScoreDoc scoreDoc : scoreDocs) {\n scoreDoc.shardIndex = entry.index;\n- docs[counter++] = scoreDoc;\n+ docs.add(scoreDoc);\n }\n }\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(docs, queryFetchResults, queryFetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(docs.toArray(new ScoreDoc[0]), queryFetchResults, queryFetchResults);\n ((InternalSearchHits) internalResponse.hits()).totalHits = Long.parseLong(this.scrollId.getAttributes().get(\"total_hits\"));\n \n \n- for (AtomicArray.Entry<QueryFetchSearchResult> entry : queryFetchResults.asList()) {\n+ for (AtomicArray.Entry<QueryFetchSearchResult> entry : entries) {\n if (entry.value.queryResult().topDocs().scoreDocs.length < entry.value.queryResult().size()) {\n- // we found more than we want for this round, remove this from our scrolling\n+ // we found more than we want for this round, remove this from our scrolling, so we don't go back to\n+ // this shard, since all hits have been processed.\n+ // The SearchContext already gets freed on the node holding the shard, via a similar check.\n queryFetchResults.set(entry.index, null);\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchScrollScanAction.java",
"status": "modified"
},
{
"diff": "@@ -67,6 +67,15 @@ public void set(int i, E value) {\n }\n }\n \n+ public final void setOnce(int i, E value) {\n+ if (array.compareAndSet(i, null, value) == false) {\n+ throw new IllegalStateException(\"index [\" + i + \"] has already been set\");\n+ }\n+ if (nonNullList != null) { // read first, lighter, and most times it will be null...\n+ nonNullList = null;\n+ }\n+ }\n+\n /**\n * Gets the current value at position {@code i}.\n *",
"filename": "src/main/java/org/elasticsearch/common/util/concurrent/AtomicArray.java",
"status": "modified"
}
]
} |
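
The review discussion and diff above settle on a compare-and-set based `AtomicArray#setOnce`, so that each shard's result slot can be written exactly once. A minimal standalone sketch of that set-once idea (the `SetOnceArray` class is hypothetical and far smaller than the real `AtomicArray`):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class SetOnceArray<E> {
    private final AtomicReferenceArray<E> array;

    public SetOnceArray(int size) {
        this.array = new AtomicReferenceArray<>(size);
    }

    // Succeeds only if the slot is still null; a second write for the same
    // index fails loudly instead of silently overwriting a shard response.
    public void setOnce(int i, E value) {
        if (array.compareAndSet(i, null, value) == false) {
            throw new IllegalStateException("index [" + i + "] has already been set");
        }
    }

    public E get(int i) {
        return array.get(i);
    }
}
```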
{
"body": "Start a node with 1.6.0, then a node from master.\n\nCheck the cluster health on master - it reports two nodes.\n",
"comments": [
{
"body": "That's unlikely to work well....\n",
"created_at": "2015-06-29T18:25:37Z"
},
{
"body": "I tried to reproduce this, but I couldn't get the 1.6 node to join the node from master. Maybe it just happened based on a particular commit? Regardless of this, it still feels good to have have validation during pinging that ignores ping responses (and log a warning) of nodes with a version lower than the minimum supported version. (in the case that ping requests/responses happen to serialize successfully between versions that shouldn't be compatible )\n",
"created_at": "2015-06-30T13:41:38Z"
},
{
"body": "I managed to reproduce this with unicast discovery.\n",
"created_at": "2015-07-01T08:35:21Z"
}
],
"number": 11924,
"title": "Elasticsearch 2.0 joins 1.6 cluster"
} | {
"body": "If the version of a node is lower than the minimum supported version or higher than the maximum hypothetical supported version, a node shouldn't be allowed to join and nodes should join that elected master node\n\nCloses #11924\n",
"number": 11972,
"review_comments": [
{
"body": "I wonder how long this would work ;)\n",
"created_at": "2015-07-02T08:56:00Z"
},
{
"body": "can we use atomic reference?\n",
"created_at": "2015-07-02T08:56:22Z"
},
{
"body": "I think this can be better put in the findmaster() logic. If the \"elected\" master is not of a good version we should log a warning and return null (no master found)\n",
"created_at": "2015-07-02T09:01:13Z"
},
{
"body": "I think this is tricky (thought about it some more). We now say that 2.0 is incompatible with 1.x but maybe in the future we will decide differently. For example, that 3.0 is compatible with 2.x. We will only make this decision when we come close to 3.0 so we can not \"bake it\" into 2.0 code now. Since we now add a min compatible check both on the node and master side during join - I don't think we need a max check?\n",
"created_at": "2015-07-02T09:03:03Z"
},
{
"body": "that question did cross my mind too, but historically this is what always has happened, so that was the main drive to add it... but yes since the minimum version is checked on both sides, checking the maximum version isn't necessary.\n",
"created_at": "2015-07-02T09:50:16Z"
},
{
"body": "good point\n",
"created_at": "2015-07-02T09:50:40Z"
},
{
"body": "the reason I used an array here, is because setting/getting happens on the same thread. but I can change it to atomic reference.\n",
"created_at": "2015-07-02T09:51:20Z"
},
{
"body": "why is serailization likely to fail? it might, it might not... \n",
"created_at": "2015-07-02T09:55:44Z"
},
{
"body": "inline the variable in the onFailure?\n",
"created_at": "2015-07-02T09:55:59Z"
},
{
"body": "can we rename version into minMasterVersion, call it with Version. minimumCompatibilityVersion() and document what it means? I think it will be clearer.\n",
"created_at": "2015-07-02T09:59:41Z"
},
{
"body": "maybe my choice of wording here... I'll change it to: `we may not end up here.....`\n",
"created_at": "2015-07-02T10:02:23Z"
},
{
"body": "sure\n",
"created_at": "2015-07-02T10:02:28Z"
},
{
"body": "agreed\n",
"created_at": "2015-07-02T10:02:58Z"
},
{
"body": "this can go back to private now, no?\n",
"created_at": "2015-07-02T14:59:37Z"
},
{
"body": "It turns out it's hard(ish) to remove ElectMasterService from guice.. not doing.\n",
"created_at": "2015-07-02T15:00:58Z"
},
{
"body": "true\n",
"created_at": "2015-07-02T15:05:11Z"
},
{
"body": "oops... no, because it is used in a test (ZenDiscoveryTests#testHandleNodeJoin_incompatibleMinVersion)\n",
"created_at": "2015-07-02T15:06:26Z"
}
],
"title": "Don't join master nodes or accept join requests of old and too new nodes"
} | {
"commits": [
{
"message": "zen: Don't join master nodes or accept join requests of old and too new nodes.\n\nIf the version of a node is lower than the minimum supported version or higher than the maximum supported version, a node shouldn't be allowed to join and nodes should join that elected master node\n\nCloses #11924"
}
],
"files": [
{
"diff": "@@ -23,6 +23,7 @@\n import com.google.common.collect.Lists;\n import com.google.common.collect.Sets;\n import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -886,12 +887,22 @@ static boolean shouldIgnoreOrRejectNewClusterState(ESLogger logger, ClusterState\n }\n }\n \n- private void handleJoinRequest(final DiscoveryNode node, final MembershipAction.JoinCallback callback) {\n+ void handleJoinRequest(final DiscoveryNode node, final MembershipAction.JoinCallback callback) {\n \n if (!transportService.addressSupported(node.address().getClass())) {\n // TODO, what should we do now? Maybe inform that node that its crap?\n logger.warn(\"received a wrong address type from [{}], ignoring...\", node);\n } else {\n+ // The minimum supported version for a node joining a master:\n+ Version minimumNodeJoinVersion = localNode().getVersion().minimumCompatibilityVersion();\n+ // Sanity check: maybe we don't end up here, because serialization may have failed.\n+ if (node.getVersion().before(minimumNodeJoinVersion)) {\n+ callback.onFailure(\n+ new IllegalStateException(\"Can't handle join request from a node with a version [\" + node.getVersion() + \"] that is lower than the minimum compatible version [\" + minimumNodeJoinVersion.minimumCompatibilityVersion() + \"]\")\n+ );\n+ return;\n+ }\n+\n // try and connect to the node, if it fails, we can raise an exception back to the client...\n transportService.connectToNode(node);\n ",
"filename": "core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.ObjectContainer;\n import com.google.common.collect.Lists;\n import org.apache.lucene.util.CollectionUtil;\n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n@@ -36,13 +37,17 @@ public class ElectMasterService extends AbstractComponent {\n \n public static final String DISCOVERY_ZEN_MINIMUM_MASTER_NODES = \"discovery.zen.minimum_master_nodes\";\n \n+ // This is the minimum version a master needs to be on, otherwise it gets ignored\n+ // This is based on the minimum compatible version of the current version this node is on\n+ private final Version minMasterVersion;\n private final NodeComparator nodeComparator = new NodeComparator();\n \n private volatile int minimumMasterNodes;\n \n @Inject\n- public ElectMasterService(Settings settings) {\n+ public ElectMasterService(Settings settings, Version version) {\n super(settings);\n+ this.minMasterVersion = version.minimumCompatibilityVersion();\n this.minimumMasterNodes = settings.getAsInt(DISCOVERY_ZEN_MINIMUM_MASTER_NODES, -1);\n logger.debug(\"using minimum_master_nodes [{}]\", minimumMasterNodes);\n }\n@@ -108,7 +113,14 @@ public DiscoveryNode electMaster(Iterable<DiscoveryNode> nodes) {\n if (sortedNodes == null || sortedNodes.isEmpty()) {\n return null;\n }\n- return sortedNodes.get(0);\n+ DiscoveryNode masterNode = sortedNodes.get(0);\n+ // Sanity check: maybe we don't end up here, because serialization may have failed.\n+ if (masterNode.getVersion().before(minMasterVersion)) {\n+ logger.warn(\"ignoring master [{}], because the version [{}] is lower than the minimum compatible version [{}]\", masterNode, masterNode.getVersion(), minMasterVersion);\n+ return null;\n+ } else {\n+ return masterNode;\n+ }\n }\n \n private List<DiscoveryNode> sortedMasterNodes(Iterable<DiscoveryNode> nodes) {",
"filename": "core/src/main/java/org/elasticsearch/discovery/zen/elect/ElectMasterService.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,7 @@\n public class ElectMasterServiceTest extends ElasticsearchTestCase {\n \n ElectMasterService electMasterService() {\n- return new ElectMasterService(Settings.EMPTY);\n+ return new ElectMasterService(Settings.EMPTY, Version.CURRENT);\n }\n \n List<DiscoveryNode> generateRandomNodes() {",
"filename": "core/src/test/java/org/elasticsearch/discovery/zen/ElectMasterServiceTest.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.discovery.zen;\n \n+import com.google.common.collect.Iterables;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.Version;\n@@ -35,7 +36,9 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.LocalTransportAddress;\n import org.elasticsearch.discovery.Discovery;\n+import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n import org.elasticsearch.discovery.zen.fd.FaultDetection;\n+import org.elasticsearch.discovery.zen.membership.MembershipAction;\n import org.elasticsearch.discovery.zen.publish.PublishClusterStateAction;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n@@ -45,7 +48,10 @@\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.lang.ref.Reference;\n import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n import java.util.List;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.ExecutionException;\n@@ -215,4 +221,40 @@ public void handleException(TransportException exp) {\n assertThat(reference.get(), notNullValue());\n assertThat(ExceptionsHelper.detailedMessage(reference.get()), containsString(\"cluster state from a different master then the current one, rejecting \"));\n }\n+\n+ @Test\n+ public void testHandleNodeJoin_incompatibleMinVersion() {\n+ Settings nodeSettings = Settings.settingsBuilder()\n+ .put(\"discovery.type\", \"zen\") // <-- To override the local setting if set externally\n+ .build();\n+ String nodeName = internalCluster().startNode(nodeSettings, Version.V_2_0_0);\n+ ZenDiscovery zenDiscovery = (ZenDiscovery) internalCluster().getInstance(Discovery.class, nodeName);\n+\n+ DiscoveryNode node = new DiscoveryNode(\"_node_id\", new LocalTransportAddress(\"_id\"), Version.V_1_6_0);\n+ final AtomicReference<IllegalStateException> holder = new AtomicReference<>();\n+ zenDiscovery.handleJoinRequest(node, new MembershipAction.JoinCallback() {\n+ @Override\n+ public void onSuccess() {\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ holder.set((IllegalStateException) t);\n+ }\n+ });\n+\n+ assertThat(holder.get(), notNullValue());\n+ assertThat(holder.get().getMessage(), equalTo(\"Can't handle join request from a node with a version [1.6.0] that is lower than the minimum compatible version [2.0.0-SNAPSHOT]\"));\n+ }\n+\n+ @Test\n+ public void testJoinElectedMaster_incompatibleMinVersion() {\n+ ElectMasterService electMasterService = new ElectMasterService(Settings.EMPTY, Version.V_2_0_0);\n+\n+ DiscoveryNode node = new DiscoveryNode(\"_node_id\", new LocalTransportAddress(\"_id\"), Version.V_2_0_0);\n+ assertThat(electMasterService.electMaster(Collections.singletonList(node)), sameInstance(node));\n+ node = new DiscoveryNode(\"_node_id\", new LocalTransportAddress(\"_id\"), Version.V_1_6_0);\n+ assertThat(\"Can't join master because version 1.6.0 is lower than the minimum compatable version 2.0.0 can support\", electMasterService.electMaster(Collections.singletonList(node)), nullValue());\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryTests.java",
"status": "modified"
},
{
"diff": "@@ -57,7 +57,7 @@ public void testSimplePings() throws InterruptedException {\n ThreadPool threadPool = new ThreadPool(getClass().getName());\n ClusterName clusterName = new ClusterName(\"test\");\n NetworkService networkService = new NetworkService(settings);\n- ElectMasterService electMasterService = new ElectMasterService(settings);\n+ ElectMasterService electMasterService = new ElectMasterService(settings, Version.CURRENT);\n \n NettyTransport transportA = new NettyTransport(settings, threadPool, networkService, BigArrays.NON_RECYCLING_INSTANCE, Version.CURRENT);\n final TransportService transportServiceA = new TransportService(transportA, threadPool).start();",
"filename": "core/src/test/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastZenPingTests.java",
"status": "modified"
}
]
} |
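The test diffs in the record above pin down the behaviour being added: a join request or master candidate whose version is below the node's minimum compatible version is rejected (the join callback receives an `IllegalStateException`, and `electMaster` returns `null`). Below is a minimal, self-contained sketch of that gate using simplified stand-in types, not the real `ElectMasterService`/`ZenDiscovery` classes, and ignoring the real candidate-ordering logic.

``` java
import java.util.Arrays;
import java.util.List;

// Simplified stand-ins for the version gate exercised by the tests above;
// these are not the real ElectMasterService/ZenDiscovery classes.
public class MinimumVersionGateSketch {

    static final class Node {
        final String id;
        final int major, minor;
        Node(String id, int major, int minor) { this.id = id; this.major = major; this.minor = minor; }
        boolean olderThan(int maj, int min) { return major < maj || (major == maj && minor < min); }
        @Override public String toString() { return id + " (" + major + "." + minor + ")"; }
    }

    // Refuse to elect a master if any candidate is older than the minimum compatible version.
    static Node electMaster(List<Node> candidates, int minCompatMajor, int minCompatMinor) {
        for (Node node : candidates) {
            if (node.olderThan(minCompatMajor, minCompatMinor)) {
                return null; // mirrors the rejection asserted in testJoinElectedMaster_incompatibleMinVersion
            }
        }
        return candidates.isEmpty() ? null : candidates.get(0);
    }

    public static void main(String[] args) {
        System.out.println(electMaster(Arrays.asList(new Node("a", 2, 0)), 2, 0)); // a (2.0) is electable
        System.out.println(electMaster(Arrays.asList(new Node("b", 1, 6)), 2, 0)); // null: 1.6 < 2.0
    }
}
```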
{
"body": "During repository verification the master node writes a single test file into repository and then all data nodes try to read this file back. If this operation fails, the error message `store location [......] is not shared between node [the data node name] and the master node]`. The message can be very confusing in situation when there is only one node in the cluster and the error is caused by wrong S3 permissions. \n",
"comments": [],
"number": 11922,
"title": "Failed repository verification message can be confusing"
} | {
"body": "Closes #11922\n",
"number": 11925,
"review_comments": [
{
"body": "\"just created files\" <- what do you mean exactly? if one waits 2h they will be able to read it? if not I would just go with \"the permissions on the store don't allow reading\"\n",
"created_at": "2015-06-29T18:56:49Z"
},
{
"body": "Agree with @bleskes here!\n\n(BTW, do we also log this? Can we maybe more information from the AWS Java SDK if the logger is `debug`?)\n",
"created_at": "2015-06-30T08:02:08Z"
},
{
"body": "What I meant here is that it couldn't read the file that was just created by the master. I will change the language. Thanks!\n\nWe now log this. See #11763, but there is really no more information to log here because there is no way to distinguish between file doesn't exist and file exists but you don't have permissions to read it on this level. \n",
"created_at": "2015-06-30T13:18:32Z"
},
{
"body": "> there is no way to distinguish between file doesn't exist and file exists but you don't have permissions to read it on this level\n\nArgh, that's bad... Thanks for improving it Igor!\n",
"created_at": "2015-06-30T13:22:17Z"
}
],
"title": "Improve repository verification failure message"
} | {
"commits": [
{
"message": "Improve repository verification failure message\n\nCloses #11922"
}
],
"files": [
{
"diff": "@@ -204,7 +204,9 @@ public void verify(String seed) {\n throw new RepositoryVerificationException(repositoryName, \"store location [\" + blobStore + \"] is not accessible on the node [\" + localNode + \"]\", exp);\n }\n } else {\n- throw new RepositoryVerificationException(repositoryName, \"store location [\" + blobStore + \"] is not shared between node [\" + localNode + \"] and the master node\");\n+ throw new RepositoryVerificationException(repositoryName, \"a file written by master to the store [\" + blobStore + \"] cannot be accessed on the node [\" + localNode + \"]. \"\n+ + \"This might indicate that the store [\" + blobStore + \"] is not shared between this node and the master node or \"\n+ + \"that permissions on the store don't allow reading files written by the master node\");\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java",
"status": "modified"
}
]
} |
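The fix in the record above is only a message change, but the decision behind it is easy to miss in the diff: after the master writes a seed file, each data node reads it back, and at that level a missing file and an unreadable file look the same (as noted in the review comments). Here is a stand-alone sketch of that branching, hypothetical helper code rather than Elasticsearch's `BlobStoreIndexShardRepository.verify()`.

``` java
// Hypothetical stand-alone helper illustrating the branching discussed above;
// this is not Elasticsearch's BlobStoreIndexShardRepository, just its message logic.
public class RepoVerifyMessageSketch {

    static String describeFailure(String store, String node, boolean readThrew, boolean seedFound) {
        if (readThrew) {
            // Reading blew up outright: the store itself is not reachable from this node.
            return "store location [" + store + "] is not accessible on the node [" + node + "]";
        }
        if (!seedFound) {
            // No exception, but the master's seed file could not be read. At this level a missing
            // file and a file we lack permission to read are indistinguishable, hence the combined hint.
            return "a file written by master to the store [" + store + "] cannot be accessed on the node ["
                    + node + "]. This might indicate that the store [" + store + "] is not shared between "
                    + "this node and the master node or that permissions on the store don't allow reading "
                    + "files written by the master node";
        }
        return null; // verification succeeded
    }

    public static void main(String[] args) {
        // e.g. a single-node cluster with wrong S3 permissions ends up in the second branch
        System.out.println(describeFailure("s3://bucket/base", "node-1", false, false));
    }
}
```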
{
"body": "If `XContentBuilder` receives a `Path` object to write, then it results in a `StackOverflowError` because it runs into the `Iterable` check, which `Path` implements (`Path implements Iterable<Path>`).\n",
"comments": [
{
"body": "This is correct, since one segment of the path is still a path, which is its own first segment, etc. Weird API, that.\n\nStill, @pickypg, I'm curious where you stumbled upon this, e.g. if you could post the stack trace (until it starts calling itself recursively). Because I don't see much value serializing a Path object like this, it would not be useful as an array of path elements even if there was not this issue of endless recursion. Calling toString() on it and storing it as a string could be more useful.\n\nNot sure XContentBuilder needs explicit support of Path, or rather whatever is calling should decide how they want to serialize paths, especially if the value is expected to be read back as a Path eventually.\n",
"created_at": "2015-06-25T23:31:52Z"
},
{
"body": "Hi @szroland, I agree that it's a slightly odd usage, but I came across it while integration testing against Elasticearch 1.6. As the Groovy client author, I had to update a bunch of my tests to include support for the new `path.repo` setting (amongst other things). This involved changing the path for integration tests for snapshots to use a \"valid\" temporary directory.\n\n``` groovy\n// Create the repository\nPutRepositoryResponse putResponse = clusterAdminClient.putRepository {\n name repoName\n type \"fs\"\n settings {\n location = randomRepoPath()\n }\n}\n```\n\nwhere `randomRepoPath()` returns a `Path`. Without going into too much detail, this indirectly/effectively calls `builder.field(\"location\", randomRepoPath())`, which is [making use of `XContentBuilder field(String name, Object value)` and eventually `writeValue(Object value)`](https://github.com/elastic/elasticsearch/blob/v1.6.0/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java#L771).\n\nThis can be easily worked around by calling `randomRepoPath().toString()` or, presumably even better, `randomRepoPath().toAbsolutePath().toString()`, but there's nothing implying a problem until you call it.\n\n> Not sure XContentBuilder needs explicit support of Path, or rather whatever is calling should decide how they want to serialize paths, especially if the value is expected to be read back as a Path eventually.\n\nWe do have a bunch of `Path`-based settings (particularly on master in ES 2.0+), which I think will expose this \"problem\" more over time, particularly when something _else_ is doing the serialization for you.\n",
"created_at": "2015-06-26T20:00:16Z"
},
{
"body": "I added a pull request with a possible implementation to have something concrete to review / discuss.\n",
"created_at": "2015-06-27T23:38:39Z"
},
{
"body": "Fixed.\n",
"created_at": "2015-07-10T08:57:08Z"
}
],
"number": 11771,
"title": "XContentBuilder.writeValue causes StackOverflowError given Path"
} | {
"body": "Treat path object as a simple value objects instead of Iterable in XContentBuilder, using toString() to create String representation.\n\nThis addresses #11771\n",
"number": 11903,
"review_comments": [
{
"body": "I'm not actually sure if `value.toString()` is equivalent or superior to `value.toAbsolutePath().toString()`. \n",
"created_at": "2015-06-27T23:45:52Z"
},
{
"body": "Checking out the `Path` JavaDoc, it does not appear that it will give the absolute path just by using `toString()`, but having thought about it, I agree that you do not want to make it absolute because that will change the path, which may only be relevant on the deserialization side. I'm imagining the creation of a `Path` on the client side, then passing it over the wire (having it serialized) where `../other/path/stuff` makes sense.\n",
"created_at": "2015-06-28T02:42:25Z"
},
{
"body": "Can you add a test for absolute and relative `Path`s?\n",
"created_at": "2015-06-28T02:45:08Z"
},
{
"body": "Yes I was thinking the same thing, e.g. if it is relative, making it absolute during serialization would change the semantics, which is not something we want to do in serialization code. Still, having these issues is why I wasn't sure if we wanted to have dedicated support for Path, or have the caller decide what aspect of the Path should be serialized, and call XContentBuilder with the string it created. Some could could keep the relative path, others could make it absolute first, maybe also normalize, resolve symlinks, etc.\n\nOverall, toString() felt as a good default, caller can still do the custom serialization if needed.\n",
"created_at": "2015-06-28T12:48:59Z"
},
{
"body": "I made it more explicit in the test that the value returned by toString() is used. Anything more would be testing the behavior of the toString() method of a particular Path implementation, which is not desirable I think. It could also make the test operating system specific.\n\nI still included a relative path example, as it tests a Path with multiple components, which does highlight the need to escape the string value, at least when running the test on windows, where the separator is the backslash.\n",
"created_at": "2015-06-28T14:50:41Z"
},
{
"body": "I disagree on this one. We test on both Windows and multiple Linux distributions, which should cover us in terms of integration problems and this is exactly why I want to see the test.\n\nIn terms of getting an absolute path, you can call `createTempDir().toAbsolutePath()`, which should return a `Path` object to a temporary directory that you can safely use for absolute path testing in an OS agnostic way.\n",
"created_at": "2015-06-28T15:43:43Z"
},
{
"body": "Ok, adding a test like that is easy enough, even though I still feel it is redundant.\nIt also created enough repetition that I felt we should extract the common check code.\n",
"created_at": "2015-06-28T16:51:32Z"
},
{
"body": "Even if it is a corner-case, the `field(String, Iterable)` method would still get called if you do something like:\n\n``` java\nPath path = ...;\nIterable iterable = path;\nbuilder.field(\"path\", iterable); // this will call field(String, Iterable), not field(String, Path)\n```\n\nSo I don't think we should try to add these specialized methods for `Path` and just make make sure that `Path` instances are escaped in all XContentBuilder methods that either take an `Iterable` or perform an `instanceof Iterable` test?\n",
"created_at": "2015-07-08T16:10:51Z"
},
{
"body": "I really hope that no one is doing that, but it's a good point.\n",
"created_at": "2015-07-08T16:14:24Z"
}
],
"title": "Treat path object as a simple value instead of Iterable in XContentBuilder"
} | {
"commits": [
{
"message": "Treat path object as a simple value objects instead of Iterable in XContentBuilder, using toString() to create String representation.\n\nThis addresses #11771"
},
{
"message": "make it more explicit in the tests that a Path instance is serialized using the String representation returned by its toString() method"
},
{
"message": "add test with absolute path, extract common check code"
},
{
"message": "instead of dedicated methods for Path, test for Path when accepting Iterable"
},
{
"message": "Instead of testing the actual output of serialization for Paths, test that serializing a Path is the same as serializing its string version."
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.common.xcontent;\n \n import com.google.common.base.Charsets;\n+\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -41,6 +42,7 @@\n import java.io.OutputStream;\n import java.math.BigDecimal;\n import java.math.RoundingMode;\n+import java.nio.file.Path;\n import java.util.Calendar;\n import java.util.Date;\n import java.util.Locale;\n@@ -650,21 +652,33 @@ public XContentBuilder field(XContentBuilderString name, Map<String, Object> val\n return this;\n }\n \n- public XContentBuilder field(String name, Iterable value) throws IOException {\n- startArray(name);\n- for (Object o : value) {\n- value(o);\n+ public XContentBuilder field(String name, Iterable<?> value) throws IOException {\n+ if (value instanceof Path) {\n+ //treat Paths as single value\n+ field(name);\n+ value(value); \n+ } else {\n+ startArray(name);\n+ for (Object o : value) {\n+ value(o);\n+ }\n+ endArray();\n }\n- endArray();\n return this;\n }\n \n- public XContentBuilder field(XContentBuilderString name, Iterable value) throws IOException {\n- startArray(name);\n- for (Object o : value) {\n- value(o);\n+ public XContentBuilder field(XContentBuilderString name, Iterable<?> value) throws IOException {\n+ if (value instanceof Path) {\n+ //treat Paths as single value\n+ field(name);\n+ value(value); \n+ } else {\n+ startArray(name);\n+ for (Object o : value) {\n+ value(o);\n+ }\n+ endArray();\n }\n- endArray();\n return this;\n }\n \n@@ -1140,15 +1154,20 @@ public XContentBuilder value(Map<String, Object> map) throws IOException {\n return this;\n }\n \n- public XContentBuilder value(Iterable value) throws IOException {\n+ public XContentBuilder value(Iterable<?> value) throws IOException {\n if (value == null) {\n return nullValue();\n }\n- startArray();\n- for (Object o : value) {\n- value(o);\n+ if (value instanceof Path) {\n+ //treat as single value\n+ writeValue(value);\n+ } else {\n+ startArray();\n+ for (Object o : value) {\n+ value(o);\n+ }\n+ endArray();\n }\n- endArray();\n return this;\n }\n \n@@ -1231,7 +1250,7 @@ private void writeValue(Object value) throws IOException {\n generator.writeNull();\n return;\n }\n- Class type = value.getClass();\n+ Class<?> type = value.getClass();\n if (type == String.class) {\n generator.writeString((String) value);\n } else if (type == Integer.class) {\n@@ -1255,9 +1274,12 @@ private void writeValue(Object value) throws IOException {\n generator.writeEndObject();\n } else if (value instanceof Map) {\n writeMap((Map) value);\n+ } else if (value instanceof Path) {\n+ //Path implements Iterable<Path> and causes endless recursion and a StackOverFlow if treated as an Iterable here\n+ generator.writeString(value.toString()); \n } else if (value instanceof Iterable) {\n generator.writeStartArray();\n- for (Object v : (Iterable) value) {\n+ for (Object v : (Iterable<?>) value) {\n writeValue(v);\n }\n generator.writeEndArray();",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java",
"status": "modified"
},
{
"diff": "@@ -20,14 +20,17 @@\n package org.elasticsearch.common.xcontent.builder;\n \n import com.google.common.collect.Lists;\n+\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.io.FastCharArrayWriter;\n+import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.xcontent.*;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.nio.file.Path;\n import java.util.*;\n \n import static org.elasticsearch.common.xcontent.XContentBuilder.FieldCaseConversion.CAMELCASE;\n@@ -260,4 +263,60 @@ public void testCopyCurrentStructure() throws Exception {\n \n assertThat(i, equalTo(terms.size()));\n }\n+\n+ @Test\n+ public void testHandlingOfPath() throws IOException {\n+ Path path = PathUtils.get(\"path\");\n+ checkPathSerialization(path);\n+ }\n+\n+ @Test\n+ public void testHandlingOfPath_relative() throws IOException {\n+ Path path = PathUtils.get(\"..\", \"..\", \"path\");\n+ checkPathSerialization(path);\n+ }\n+\n+ @Test\n+ public void testHandlingOfPath_absolute() throws IOException {\n+ Path path = createTempDir().toAbsolutePath();\n+ checkPathSerialization(path);\n+ } \n+\n+ private void checkPathSerialization(Path path) throws IOException {\n+ XContentBuilder pathBuilder = XContentFactory.contentBuilder(XContentType.JSON); \n+ pathBuilder.startObject().field(\"file\", path).endObject();\n+ \n+ XContentBuilder stringBuilder = XContentFactory.contentBuilder(XContentType.JSON); \n+ stringBuilder.startObject().field(\"file\", path.toString()).endObject();\n+ \n+ assertThat(pathBuilder.string(), equalTo(stringBuilder.string()));\n+ }\n+\n+ @Test\n+ public void testHandlingOfPath_XContentBuilderStringName() throws IOException {\n+ Path path = PathUtils.get(\"path\"); \n+ XContentBuilderString name = new XContentBuilderString(\"file\");\n+\n+ XContentBuilder pathBuilder = XContentFactory.contentBuilder(XContentType.JSON);\n+ pathBuilder.startObject().field(name, path).endObject();\n+\n+ XContentBuilder stringBuilder = XContentFactory.contentBuilder(XContentType.JSON); \n+ stringBuilder.startObject().field(name, path.toString()).endObject();\n+ \n+ assertThat(pathBuilder.string(), equalTo(stringBuilder.string()));\n+ }\n+\n+ @Test\n+ public void testHandlingOfCollectionOfPaths() throws IOException {\n+ Path path = PathUtils.get(\"path\");\n+ \n+ XContentBuilder pathBuilder = XContentFactory.contentBuilder(XContentType.JSON);\n+ pathBuilder.startObject().field(\"file\", Arrays.asList(path)).endObject();\n+\n+ XContentBuilder stringBuilder = XContentFactory.contentBuilder(XContentType.JSON);\n+ stringBuilder.startObject().field(\"file\", Arrays.asList(path.toString())).endObject();\n+ \n+ assertThat(pathBuilder.string(), equalTo(stringBuilder.string()));\n+ }\n+ \n }",
"filename": "core/src/test/java/org/elasticsearch/common/xcontent/builder/XContentBuilderTests.java",
"status": "modified"
}
]
} |
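The discussion above turns on a `java.nio.file.Path` quirk that is easy to reproduce in isolation: `Path implements Iterable<Path>`, and a single-name element iterates to a path equal to itself, so any writer that recurses into every `Iterable` never bottoms out. The following self-contained sketch shows the self-similar iteration; the recursive method only mimics the shape of such a writer and is not the real `XContentBuilder.writeValue()`.

``` java
import java.nio.file.Path;
import java.nio.file.Paths;

// Self-contained sketch of the recursion discussed above; writeLikeValue() mimics the
// shape of an Iterable-recursing writer, it is not the real XContentBuilder.writeValue().
public class PathIterableSketch {

    static void writeLikeValue(Object value) {
        if (value instanceof Iterable) {
            for (Object element : (Iterable<?>) value) {
                writeLikeValue(element); // a single-name Path iterates to a Path equal to itself -> never terminates
            }
        } else {
            System.out.println(value);
        }
    }

    public static void main(String[] args) {
        Path path = Paths.get("some", "dir", "file.txt");
        for (Path segment : path) {
            // each name segment is itself a Path whose first (and only) element is equal to it
            System.out.println(segment + " -> first element: " + segment.iterator().next()
                    + " -> equal: " + segment.equals(segment.iterator().next()));
        }
        // writeLikeValue(path); // uncommenting this would end in a StackOverflowError
        writeLikeValue(path.toString()); // the approach taken in the PR: serialize the String form instead
    }
}
```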
{
"body": "Because the fetch phase now has nested doc support, the logic that deals with detecting if a named nested query/filter matches with a hit can be removed.\n\nPR for #10661\n",
"comments": [
{
"body": "LGTM\n\nHurray for code removal. \\o/\n",
"created_at": "2015-04-21T07:55:08Z"
}
],
"number": 10694,
"title": "Matched queries: Remove redundant and broken code"
} | {
"body": "Before inner_hits existed named queries had support to also verify if inner queries of nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits got released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that and on top of this also adds support for parent/child inner hits.\n\nLeft over of issue #10661\n",
"number": 11880,
"review_comments": [],
"title": "Properly support named queries for both nested and parent child inner hits"
} | {
"commits": [
{
"message": "inner_hits: Properly support named queries for both nested and parent child inner hits.\n\nBefore inner_hits existed named queries has support to also verify if inner queries of nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits get released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that and on top of this also adds support for parent/child inner hits."
}
],
"files": [
{
"diff": "@@ -151,7 +151,8 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n \n if (innerHits != null) {\n- InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), innerQuery, null, parseContext.mapperService(), childDocMapper);\n+ ParsedQuery parsedQuery = new ParsedQuery(innerQuery, parseContext.copyNamedQueries());\n+ InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), parsedQuery, null, parseContext.mapperService(), childDocMapper);\n String name = innerHits.v1() != null ? innerHits.v1() : childType;\n parseContext.addInnerHits(name, parentChildInnerHits);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -154,7 +154,8 @@ static Query createParentQuery(Query innerQuery, String parentType, boolean scor\n }\n \n if (innerHits != null) {\n- InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), innerQuery, null, parseContext.mapperService(), parentDocMapper);\n+ ParsedQuery parsedQuery = new ParsedQuery(innerQuery, parseContext.copyNamedQueries());\n+ InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), parsedQuery, null, parseContext.mapperService(), parentDocMapper);\n String name = innerHits.v1() != null ? innerHits.v1() : parentType;\n parseContext.addInnerHits(name, parentChildInnerHits);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -217,7 +217,7 @@ public ParsedQuery parseInnerFilter(XContentParser parser) throws IOException {\n if (filter == null) {\n return null;\n }\n- return new ParsedQuery(filter, context.copyNamedFilters());\n+ return new ParsedQuery(filter, context.copyNamedQueries());\n } finally {\n context.reset(null);\n }\n@@ -300,7 +300,7 @@ private ParsedQuery innerParse(QueryParseContext parseContext, XContentParser pa\n if (query == null) {\n query = Queries.newMatchNoDocsQuery();\n }\n- return new ParsedQuery(query, parseContext.copyNamedFilters());\n+ return new ParsedQuery(query, parseContext.copyNamedQueries());\n } finally {\n parseContext.reset(null);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java",
"status": "modified"
},
{
"diff": "@@ -151,7 +151,8 @@ public ToParentBlockJoinQuery build() throws IOException {\n }\n \n if (innerHits != null) {\n- InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.v2(), innerQuery, null, getParentObjectMapper(), nestedObjectMapper);\n+ ParsedQuery parsedQuery = new ParsedQuery(innerQuery, parseContext.copyNamedQueries());\n+ InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.v2(), parsedQuery, null, getParentObjectMapper(), nestedObjectMapper);\n String name = innerHits.v1() != null ? innerHits.v1() : path;\n parseContext.addInnerHits(name, nestedInnerHits);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/NestedQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -185,11 +185,11 @@ public void addNamedQuery(String name, Query query) {\n namedQueries.put(name, query);\n }\n \n- public ImmutableMap<String, Query> copyNamedFilters() {\n+ public ImmutableMap<String, Query> copyNamedQueries() {\n return ImmutableMap.copyOf(namedQueries);\n }\n \n- public void combineNamedFilters(QueryParseContext context) {\n+ public void combineNamedQueries(QueryParseContext context) {\n namedQueries.putAll(context.namedQueries);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java",
"status": "modified"
},
{
"diff": "@@ -62,7 +62,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n context.reset(qSourceParser);\n Query result = context.parseInnerQuery();\n parser.nextToken();\n- parseContext.combineNamedFilters(context);\n+ parseContext.combineNamedQueries(context);\n return result;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/WrapperQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -19,34 +19,17 @@\n \n package org.elasticsearch.search.fetch.innerhits;\n \n-import com.google.common.collect.ImmutableMap;\n-\n import org.apache.lucene.index.LeafReader;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause.Occur;\n-import org.apache.lucene.search.BooleanQuery;\n-import org.apache.lucene.search.ConstantScoreScorer;\n-import org.apache.lucene.search.ConstantScoreWeight;\n-import org.apache.lucene.search.DocIdSet;\n-import org.apache.lucene.search.DocIdSetIterator;\n-import org.apache.lucene.search.Filter;\n-import org.apache.lucene.search.IndexSearcher;\n-import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Scorer;\n-import org.apache.lucene.search.TermQuery;\n-import org.apache.lucene.search.TopDocs;\n-import org.apache.lucene.search.TopDocsCollector;\n-import org.apache.lucene.search.TopFieldCollector;\n-import org.apache.lucene.search.TopScoreDocCollector;\n-import org.apache.lucene.search.Weight;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.util.BitSet;\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n-import org.elasticsearch.index.fieldvisitor.SingleFieldsVisitor;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.Uid;\n@@ -83,10 +66,10 @@ public void addInnerHitDefinition(String name, BaseInnerHits innerHit) {\n \n public static abstract class BaseInnerHits extends FilteredSearchContext {\n \n- protected final Query query;\n+ protected final ParsedQuery query;\n private final InnerHitsContext childInnerHits;\n \n- protected BaseInnerHits(SearchContext context, Query query, Map<String, BaseInnerHits> childInnerHits) {\n+ protected BaseInnerHits(SearchContext context, ParsedQuery query, Map<String, BaseInnerHits> childInnerHits) {\n super(context);\n this.query = query;\n if (childInnerHits != null && !childInnerHits.isEmpty()) {\n@@ -98,12 +81,12 @@ protected BaseInnerHits(SearchContext context, Query query, Map<String, BaseInne\n \n @Override\n public Query query() {\n- return query;\n+ return query.query();\n }\n \n @Override\n public ParsedQuery parsedQuery() {\n- return new ParsedQuery(query, ImmutableMap.<String, Query>of());\n+ return query;\n }\n \n public abstract TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContext) throws IOException;\n@@ -120,7 +103,7 @@ public static final class NestedInnerHits extends BaseInnerHits {\n private final ObjectMapper parentObjectMapper;\n private final ObjectMapper childObjectMapper;\n \n- public NestedInnerHits(SearchContext context, Query query, Map<String, BaseInnerHits> childInnerHits, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper) {\n+ public NestedInnerHits(SearchContext context, ParsedQuery query, Map<String, BaseInnerHits> childInnerHits, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper) {\n super(context, query, childInnerHits);\n this.parentObjectMapper = parentObjectMapper;\n this.childObjectMapper = childObjectMapper;\n@@ -136,7 +119,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n BitDocIdSetFilter parentFilter = context.bitsetFilterCache().getBitDocIdSetFilter(rawParentFilter);\n Filter 
childFilter = childObjectMapper.nestedTypeFilter();\n- Query q = Queries.filtered(query, new NestedChildrenQuery(parentFilter, childFilter, hitContext));\n+ Query q = Queries.filtered(query.query(), new NestedChildrenQuery(parentFilter, childFilter, hitContext));\n \n if (size() == 0) {\n return new TopDocs(context.searcher().count(q), Lucene.EMPTY_SCORE_DOCS, 0);\n@@ -280,7 +263,7 @@ public static final class ParentChildInnerHits extends BaseInnerHits {\n private final MapperService mapperService;\n private final DocumentMapper documentMapper;\n \n- public ParentChildInnerHits(SearchContext context, Query query, Map<String, BaseInnerHits> childInnerHits, MapperService mapperService, DocumentMapper documentMapper) {\n+ public ParentChildInnerHits(SearchContext context, ParsedQuery query, Map<String, BaseInnerHits> childInnerHits, MapperService mapperService, DocumentMapper documentMapper) {\n super(context, query, childInnerHits);\n this.mapperService = mapperService;\n this.documentMapper = documentMapper;\n@@ -307,7 +290,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n \n BooleanQuery q = new BooleanQuery();\n- q.add(query, Occur.MUST);\n+ q.add(query.query(), Occur.MUST);\n // Only include docs that have the current hit as parent\n q.add(new TermQuery(new Term(field, term)), Occur.MUST);\n // Only include docs that have this inner hits type",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsContext.java",
"status": "modified"
},
{
"diff": "@@ -19,12 +19,11 @@\n \n package org.elasticsearch.search.fetch.innerhits;\n \n-import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n+import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsParseElement;\n@@ -169,7 +168,7 @@ private InnerHitsContext.NestedInnerHits parseNested(XContentParser parser, Quer\n }\n \n private ParseResult parseSubSearchContext(SearchContext searchContext, QueryParseContext parseContext, XContentParser parser) throws Exception {\n- Query query = null;\n+ ParsedQuery query = null;\n Map<String, InnerHitsContext.BaseInnerHits> childInnerHits = null;\n SubSearchContext subSearchContext = new SubSearchContext(searchContext);\n String fieldName = null;\n@@ -179,7 +178,8 @@ private ParseResult parseSubSearchContext(SearchContext searchContext, QueryPars\n fieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"query\".equals(fieldName)) {\n- query = searchContext.queryParserService().parseInnerQuery(parseContext);\n+ Query q = searchContext.queryParserService().parseInnerQuery(parseContext);\n+ query = new ParsedQuery(q, parseContext.copyNamedQueries());\n } else if (\"inner_hits\".equals(fieldName)) {\n childInnerHits = parseInnerHits(parser, parseContext, searchContext);\n } else {\n@@ -191,18 +191,18 @@ private ParseResult parseSubSearchContext(SearchContext searchContext, QueryPars\n }\n \n if (query == null) {\n- query = new MatchAllDocsQuery();\n+ query = ParsedQuery.parsedMatchAllQuery();\n }\n return new ParseResult(subSearchContext, query, childInnerHits);\n }\n \n private static final class ParseResult {\n \n private final SubSearchContext context;\n- private final Query query;\n+ private final ParsedQuery query;\n private final Map<String, InnerHitsContext.BaseInnerHits> childInnerHits;\n \n- private ParseResult(SubSearchContext context, Query query, Map<String, InnerHitsContext.BaseInnerHits> childInnerHits) {\n+ private ParseResult(SubSearchContext context, ParsedQuery query, Map<String, InnerHitsContext.BaseInnerHits> childInnerHits) {\n this.context = context;\n this.query = query;\n this.childInnerHits = childInnerHits;\n@@ -212,7 +212,7 @@ public SubSearchContext context() {\n return context;\n }\n \n- public Query query() {\n+ public ParsedQuery query() {\n return query;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsParseElement.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.index.query.support.QueryInnerHitBuilder;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -46,25 +47,9 @@\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAllSuccessful;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.hamcrest.Matchers.arrayContaining;\n-import static org.hamcrest.Matchers.arrayContainingInAnyOrder;\n-import static org.hamcrest.Matchers.arrayWithSize;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n-import static org.hamcrest.Matchers.is;\n-import static org.hamcrest.Matchers.notNullValue;\n-import static org.hamcrest.Matchers.startsWith;\n+import static org.elasticsearch.index.query.QueryBuilders.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n \n public class SimpleNestedTests extends ElasticsearchIntegrationTest {\n \n@@ -178,98 +163,6 @@ public void simpleNested() throws Exception {\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n }\n \n- @Test @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/10661\")\n- public void simpleNestedMatchQueries() throws Exception {\n- XContentBuilder builder = jsonBuilder().startObject()\n- .startObject(\"type1\")\n- .startObject(\"properties\")\n- .startObject(\"nested1\")\n- .field(\"type\", \"nested\")\n- .endObject()\n- .startObject(\"field1\")\n- .field(\"type\", \"long\")\n- .endObject()\n- .endObject()\n- .endObject()\n- .endObject();\n- assertAcked(prepareCreate(\"test\").addMapping(\"type1\", builder));\n- ensureGreen();\n-\n- List<IndexRequestBuilder> requests = new ArrayList<>();\n- int numDocs = randomIntBetween(2, 35);\n- requests.add(client().prepareIndex(\"test\", \"type1\", \"0\").setSource(jsonBuilder().startObject()\n- .field(\"field1\", 0)\n- .startArray(\"nested1\")\n- .startObject()\n- .field(\"n_field1\", \"n_value1_1\")\n- .field(\"n_field2\", \"n_value2_1\")\n- .endObject()\n- .startObject()\n- .field(\"n_field1\", \"n_value1_2\")\n- .field(\"n_field2\", \"n_value2_2\")\n- .endObject()\n- .endArray()\n- .endObject()));\n- requests.add(client().prepareIndex(\"test\", \"type1\", \"1\").setSource(jsonBuilder().startObject()\n- .field(\"field1\", 1)\n- .startArray(\"nested1\")\n- .startObject()\n- .field(\"n_field1\", \"n_value1_8\")\n- 
.field(\"n_field2\", \"n_value2_5\")\n- .endObject()\n- .startObject()\n- .field(\"n_field1\", \"n_value1_3\")\n- .field(\"n_field2\", \"n_value2_1\")\n- .endObject()\n- .endArray()\n- .endObject()));\n-\n- for (int i = 2; i < numDocs; i++) {\n- requests.add(client().prepareIndex(\"test\", \"type1\", String.valueOf(i)).setSource(jsonBuilder().startObject()\n- .field(\"field1\", i)\n- .startArray(\"nested1\")\n- .startObject()\n- .field(\"n_field1\", \"n_value1_8\")\n- .field(\"n_field2\", \"n_value2_5\")\n- .endObject()\n- .startObject()\n- .field(\"n_field1\", \"n_value1_2\")\n- .field(\"n_field2\", \"n_value2_2\")\n- .endObject()\n- .endArray()\n- .endObject()));\n- }\n-\n- indexRandom(true, requests);\n- waitForRelocation(ClusterHealthStatus.GREEN);\n-\n- SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(nestedQuery(\"nested1\", boolQuery()\n- .should(termQuery(\"nested1.n_field1\", \"n_value1_1\").queryName(\"test1\"))\n- .should(termQuery(\"nested1.n_field1\", \"n_value1_3\").queryName(\"test2\"))\n- .should(termQuery(\"nested1.n_field2\", \"n_value2_2\").queryName(\"test3\"))\n- ))\n- .setSize(numDocs)\n- .addSort(\"field1\", SortOrder.ASC)\n- .get();\n- assertNoFailures(searchResponse);\n- assertAllSuccessful(searchResponse);\n- assertThat(searchResponse.getHits().totalHits(), equalTo((long) numDocs));\n- assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"0\"));\n- assertThat(searchResponse.getHits().getAt(0).matchedQueries(), arrayWithSize(2));\n- assertThat(searchResponse.getHits().getAt(0).matchedQueries(), arrayContainingInAnyOrder(\"test1\", \"test3\"));\n-\n- assertThat(searchResponse.getHits().getAt(1).id(), equalTo(\"1\"));\n- assertThat(searchResponse.getHits().getAt(1).matchedQueries(), arrayWithSize(1));\n- assertThat(searchResponse.getHits().getAt(1).matchedQueries(), arrayContaining(\"test2\"));\n-\n- for (int i = 2; i < numDocs; i++) {\n- assertThat(searchResponse.getHits().getAt(i).id(), equalTo(String.valueOf(i)));\n- assertThat(searchResponse.getHits().getAt(i).matchedQueries(), arrayWithSize(1));\n- assertThat(searchResponse.getHits().getAt(i).matchedQueries(), arrayContaining(\"test3\"));\n- }\n- }\n-\n @Test\n public void multiNested() throws Exception {\n assertAcked(prepareCreate(\"test\")",
"filename": "core/src/test/java/org/elasticsearch/nested/SimpleNestedTests.java",
"status": "modified"
},
{
"diff": "@@ -881,7 +881,6 @@ public void testNestedFetchFeatures() {\n long version = searchHit.version();\n assertThat(version, equalTo(1l));\n \n- // Can't use named queries for the same reason explain doesn't work:\n assertThat(searchHit.matchedQueries(), arrayContaining(\"test\"));\n \n SearchHitField field = searchHit.field(\"comments.user\");",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/TopHitsTests.java",
"status": "modified"
},
{
"diff": "@@ -20,13 +20,13 @@\n package org.elasticsearch.search.innerhits;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n-import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.query.support.QueryInnerHitBuilder;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.SearchHit;\n@@ -42,21 +42,9 @@\n import java.util.Locale;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.hasParentQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHit;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.nullValue;\n+import static org.elasticsearch.index.query.QueryBuilders.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n \n /**\n */\n@@ -1004,4 +992,139 @@ public void testRoyals() throws Exception {\n assertThat(innerInnerHits.getAt(0).getId(), equalTo(\"king\"));\n }\n \n+ @Test\n+ public void matchesQueries_nestedInnerHits() throws Exception {\n+ XContentBuilder builder = jsonBuilder().startObject()\n+ .startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"nested1\")\n+ .field(\"type\", \"nested\")\n+ .endObject()\n+ .startObject(\"field1\")\n+ .field(\"type\", \"long\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ assertAcked(prepareCreate(\"test\").addMapping(\"type1\", builder));\n+ ensureGreen();\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ int numDocs = randomIntBetween(2, 35);\n+ requests.add(client().prepareIndex(\"test\", \"type1\", \"0\").setSource(jsonBuilder().startObject()\n+ .field(\"field1\", 0)\n+ .startArray(\"nested1\")\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_1\")\n+ .field(\"n_field2\", \"n_value2_1\")\n+ .endObject()\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_2\")\n+ .field(\"n_field2\", \"n_value2_2\")\n+ .endObject()\n+ .endArray()\n+ .endObject()));\n+ requests.add(client().prepareIndex(\"test\", \"type1\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"field1\", 1)\n+ .startArray(\"nested1\")\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_8\")\n+ 
.field(\"n_field2\", \"n_value2_5\")\n+ .endObject()\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_3\")\n+ .field(\"n_field2\", \"n_value2_1\")\n+ .endObject()\n+ .endArray()\n+ .endObject()));\n+\n+ for (int i = 2; i < numDocs; i++) {\n+ requests.add(client().prepareIndex(\"test\", \"type1\", String.valueOf(i)).setSource(jsonBuilder().startObject()\n+ .field(\"field1\", i)\n+ .startArray(\"nested1\")\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_8\")\n+ .field(\"n_field2\", \"n_value2_5\")\n+ .endObject()\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_2\")\n+ .field(\"n_field2\", \"n_value2_2\")\n+ .endObject()\n+ .endArray()\n+ .endObject()));\n+ }\n+\n+ indexRandom(true, requests);\n+ waitForRelocation(ClusterHealthStatus.GREEN);\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(nestedQuery(\"nested1\", boolQuery()\n+ .should(termQuery(\"nested1.n_field1\", \"n_value1_1\").queryName(\"test1\"))\n+ .should(termQuery(\"nested1.n_field1\", \"n_value1_3\").queryName(\"test2\"))\n+ .should(termQuery(\"nested1.n_field2\", \"n_value2_2\").queryName(\"test3\"))\n+ ).innerHit(new QueryInnerHitBuilder()))\n+ .setSize(numDocs)\n+ .addSort(\"field1\", SortOrder.ASC)\n+ .get();\n+ assertNoFailures(searchResponse);\n+ assertAllSuccessful(searchResponse);\n+ assertThat(searchResponse.getHits().totalHits(), equalTo((long) numDocs));\n+ assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"0\"));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getTotalHits(), equalTo(2l));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries()[0], equalTo(\"test1\"));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(1).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(1).getMatchedQueries()[0], equalTo(\"test3\"));\n+\n+\n+ assertThat(searchResponse.getHits().getAt(1).id(), equalTo(\"1\"));\n+ assertThat(searchResponse.getHits().getAt(1).getInnerHits().get(\"nested1\").getTotalHits(), equalTo(1l));\n+ assertThat(searchResponse.getHits().getAt(1).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(1).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries()[0], equalTo(\"test2\"));\n+\n+ for (int i = 2; i < numDocs; i++) {\n+ assertThat(searchResponse.getHits().getAt(i).id(), equalTo(String.valueOf(i)));\n+ assertThat(searchResponse.getHits().getAt(i).getInnerHits().get(\"nested1\").getTotalHits(), equalTo(1l));\n+ assertThat(searchResponse.getHits().getAt(i).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(i).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries()[0], equalTo(\"test3\"));\n+ }\n+ }\n+\n+ @Test\n+ public void matchesQueries_parentChildInnerHits() throws Exception {\n+ assertAcked(prepareCreate(\"index\").addMapping(\"child\", \"_parent\", \"type=parent\"));\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"index\", \"parent\", \"1\").setSource(\"{}\"));\n+ requests.add(client().prepareIndex(\"index\", \"child\", \"1\").setParent(\"1\").setSource(\"field\", \"value1\"));\n+ 
requests.add(client().prepareIndex(\"index\", \"child\", \"2\").setParent(\"1\").setSource(\"field\", \"value2\"));\n+ requests.add(client().prepareIndex(\"index\", \"parent\", \"2\").setSource(\"{}\"));\n+ requests.add(client().prepareIndex(\"index\", \"child\", \"3\").setParent(\"2\").setSource(\"field\", \"value1\"));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"index\")\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value1\").queryName(\"_name1\")).innerHit(new QueryInnerHitBuilder()))\n+ .addSort(\"_uid\", SortOrder.ASC)\n+ .get();\n+ assertHitCount(response, 2);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries()[0], equalTo(\"_name1\"));\n+\n+ assertThat(response.getHits().getAt(1).id(), equalTo(\"2\"));\n+ assertThat(response.getHits().getAt(1).getInnerHits().get(\"child\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(1).getInnerHits().get(\"child\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(response.getHits().getAt(1).getInnerHits().get(\"child\").getAt(0).getMatchedQueries()[0], equalTo(\"_name1\"));\n+\n+ response = client().prepareSearch(\"index\")\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value2\").queryName(\"_name2\")).innerHit(new QueryInnerHitBuilder()))\n+ .addSort(\"_id\", SortOrder.ASC)\n+ .get();\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries()[0], equalTo(\"_name2\"));\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
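The integration tests in the record above double as usage documentation: a `queryName(...)` set on the inner query of a `nested`/`has_child` query is now reported by `matchedQueries()` on the corresponding inner hit rather than on the root hit. Below is a trimmed-down client-side sketch, assuming an existing `Client` and the `index`/`child` mapping from the test; it only restates calls that appear in the PR's tests.

``` java
import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;
import static org.elasticsearch.index.query.QueryBuilders.matchQuery;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.support.QueryInnerHitBuilder;
import org.elasticsearch.search.SearchHit;

// Sketch of the client-side usage exercised by matchesQueries_parentChildInnerHits above;
// the `client` instance and the "index"/"child" mapping are assumed to exist already.
public class NamedInnerHitsSketch {

    static void printMatchedQueries(Client client) {
        SearchResponse response = client.prepareSearch("index")
                // the name given here is reported per inner hit, not on the parent hit
                .setQuery(hasChildQuery("child", matchQuery("field", "value1").queryName("_name1"))
                        .innerHit(new QueryInnerHitBuilder()))
                .get();
        for (SearchHit parentHit : response.getHits()) {
            for (SearchHit childHit : parentHit.getInnerHits().get("child")) {
                // before this change, matchedQueries() on inner hits did not carry "_name1"
                System.out.println(childHit.getId() + " matched: "
                        + String.join(", ", childHit.getMatchedQueries()));
            }
        }
    }
}
```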
{
"body": "I saw this by adding assertingcodec to the mix in our tests:\n\nFAILURE 0.55s | TopHitsTests.testNestedFetchFeatures <<<\n\n> Throwable #1: java.lang.AssertionError: Hit count is 1 but 2 was expected. Total shards: 9 Successful shards: 8 & 1 shard failures:\n> shard [[qcEkX24CTsSgeHjwon4SYA][articles][5]], reason [ElasticsearchException[target must be > docID(), got 1 <= 3]; nested: AssertionError[target must be > docID(), got 1 <= 3]; ]\n> at __randomizedtesting.SeedInfo.seed([EEBE1D571C1FD8E9:9F4B07A5A1F77FA4]:0)\n> at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount(ElasticsearchAssertions.java:145)\n> at org.elasticsearch.search.aggregations.bucket.TopHitsTests.testNestedFetchFeatures(TopHitsTests.java:804)\n> at java.lang.Thread.run(Thread.java:745)\n",
"comments": [
{
"body": "I also hit this with SimpleNestedTests.simpleNestedMatchQueries() with AssertingCodec\n",
"created_at": "2015-04-19T17:44:38Z"
},
{
"body": "The failure in the TopHits is caused by the nested aggregator. The TopHitsAggregator doesn't invoke advance by itself. (the combination is being tested here)\n",
"created_at": "2015-04-19T21:55:31Z"
},
{
"body": "@martijnvg can you fix this?\n",
"created_at": "2015-04-20T09:49:10Z"
},
{
"body": "@s1monw sure, I'll fix this.\n",
"created_at": "2015-04-20T09:59:23Z"
},
{
"body": "@martijnvg SimpleNestedTests still has an `AwaitsFix` with this bug url, should it be removed?\n",
"created_at": "2015-06-25T14:02:38Z"
},
{
"body": "@jpountz yes, it should! I guess I forgot that. the test needs to be changed a bit too... the matched queries should be asserted on the inner hits instead of the root hit.\n",
"created_at": "2015-06-25T18:59:32Z"
}
],
"number": 10661,
"title": "TopHits advance()'s backwards"
} | {
"body": "Before inner_hits existed named queries had support to also verify if inner queries of nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits got released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that and on top of this also adds support for parent/child inner hits.\n\nLeft over of issue #10661\n",
"number": 11880,
"review_comments": [],
"title": "Properly support named queries for both nested and parent child inner hits"
} | {
"commits": [
{
"message": "inner_hits: Properly support named queries for both nested and parent child inner hits.\n\nBefore inner_hits existed named queries has support to also verify if inner queries of nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits get released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that and on top of this also adds support for parent/child inner hits."
}
],
"files": [
{
"diff": "@@ -151,7 +151,8 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n \n if (innerHits != null) {\n- InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), innerQuery, null, parseContext.mapperService(), childDocMapper);\n+ ParsedQuery parsedQuery = new ParsedQuery(innerQuery, parseContext.copyNamedQueries());\n+ InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), parsedQuery, null, parseContext.mapperService(), childDocMapper);\n String name = innerHits.v1() != null ? innerHits.v1() : childType;\n parseContext.addInnerHits(name, parentChildInnerHits);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -154,7 +154,8 @@ static Query createParentQuery(Query innerQuery, String parentType, boolean scor\n }\n \n if (innerHits != null) {\n- InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), innerQuery, null, parseContext.mapperService(), parentDocMapper);\n+ ParsedQuery parsedQuery = new ParsedQuery(innerQuery, parseContext.copyNamedQueries());\n+ InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), parsedQuery, null, parseContext.mapperService(), parentDocMapper);\n String name = innerHits.v1() != null ? innerHits.v1() : parentType;\n parseContext.addInnerHits(name, parentChildInnerHits);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -217,7 +217,7 @@ public ParsedQuery parseInnerFilter(XContentParser parser) throws IOException {\n if (filter == null) {\n return null;\n }\n- return new ParsedQuery(filter, context.copyNamedFilters());\n+ return new ParsedQuery(filter, context.copyNamedQueries());\n } finally {\n context.reset(null);\n }\n@@ -300,7 +300,7 @@ private ParsedQuery innerParse(QueryParseContext parseContext, XContentParser pa\n if (query == null) {\n query = Queries.newMatchNoDocsQuery();\n }\n- return new ParsedQuery(query, parseContext.copyNamedFilters());\n+ return new ParsedQuery(query, parseContext.copyNamedQueries());\n } finally {\n parseContext.reset(null);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java",
"status": "modified"
},
{
"diff": "@@ -151,7 +151,8 @@ public ToParentBlockJoinQuery build() throws IOException {\n }\n \n if (innerHits != null) {\n- InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.v2(), innerQuery, null, getParentObjectMapper(), nestedObjectMapper);\n+ ParsedQuery parsedQuery = new ParsedQuery(innerQuery, parseContext.copyNamedQueries());\n+ InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.v2(), parsedQuery, null, getParentObjectMapper(), nestedObjectMapper);\n String name = innerHits.v1() != null ? innerHits.v1() : path;\n parseContext.addInnerHits(name, nestedInnerHits);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/NestedQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -185,11 +185,11 @@ public void addNamedQuery(String name, Query query) {\n namedQueries.put(name, query);\n }\n \n- public ImmutableMap<String, Query> copyNamedFilters() {\n+ public ImmutableMap<String, Query> copyNamedQueries() {\n return ImmutableMap.copyOf(namedQueries);\n }\n \n- public void combineNamedFilters(QueryParseContext context) {\n+ public void combineNamedQueries(QueryParseContext context) {\n namedQueries.putAll(context.namedQueries);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java",
"status": "modified"
},
{
"diff": "@@ -62,7 +62,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n context.reset(qSourceParser);\n Query result = context.parseInnerQuery();\n parser.nextToken();\n- parseContext.combineNamedFilters(context);\n+ parseContext.combineNamedQueries(context);\n return result;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/WrapperQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -19,34 +19,17 @@\n \n package org.elasticsearch.search.fetch.innerhits;\n \n-import com.google.common.collect.ImmutableMap;\n-\n import org.apache.lucene.index.LeafReader;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause.Occur;\n-import org.apache.lucene.search.BooleanQuery;\n-import org.apache.lucene.search.ConstantScoreScorer;\n-import org.apache.lucene.search.ConstantScoreWeight;\n-import org.apache.lucene.search.DocIdSet;\n-import org.apache.lucene.search.DocIdSetIterator;\n-import org.apache.lucene.search.Filter;\n-import org.apache.lucene.search.IndexSearcher;\n-import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Scorer;\n-import org.apache.lucene.search.TermQuery;\n-import org.apache.lucene.search.TopDocs;\n-import org.apache.lucene.search.TopDocsCollector;\n-import org.apache.lucene.search.TopFieldCollector;\n-import org.apache.lucene.search.TopScoreDocCollector;\n-import org.apache.lucene.search.Weight;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.search.join.BitDocIdSetFilter;\n import org.apache.lucene.util.BitSet;\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n-import org.elasticsearch.index.fieldvisitor.SingleFieldsVisitor;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.Uid;\n@@ -83,10 +66,10 @@ public void addInnerHitDefinition(String name, BaseInnerHits innerHit) {\n \n public static abstract class BaseInnerHits extends FilteredSearchContext {\n \n- protected final Query query;\n+ protected final ParsedQuery query;\n private final InnerHitsContext childInnerHits;\n \n- protected BaseInnerHits(SearchContext context, Query query, Map<String, BaseInnerHits> childInnerHits) {\n+ protected BaseInnerHits(SearchContext context, ParsedQuery query, Map<String, BaseInnerHits> childInnerHits) {\n super(context);\n this.query = query;\n if (childInnerHits != null && !childInnerHits.isEmpty()) {\n@@ -98,12 +81,12 @@ protected BaseInnerHits(SearchContext context, Query query, Map<String, BaseInne\n \n @Override\n public Query query() {\n- return query;\n+ return query.query();\n }\n \n @Override\n public ParsedQuery parsedQuery() {\n- return new ParsedQuery(query, ImmutableMap.<String, Query>of());\n+ return query;\n }\n \n public abstract TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContext) throws IOException;\n@@ -120,7 +103,7 @@ public static final class NestedInnerHits extends BaseInnerHits {\n private final ObjectMapper parentObjectMapper;\n private final ObjectMapper childObjectMapper;\n \n- public NestedInnerHits(SearchContext context, Query query, Map<String, BaseInnerHits> childInnerHits, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper) {\n+ public NestedInnerHits(SearchContext context, ParsedQuery query, Map<String, BaseInnerHits> childInnerHits, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper) {\n super(context, query, childInnerHits);\n this.parentObjectMapper = parentObjectMapper;\n this.childObjectMapper = childObjectMapper;\n@@ -136,7 +119,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n BitDocIdSetFilter parentFilter = context.bitsetFilterCache().getBitDocIdSetFilter(rawParentFilter);\n Filter 
childFilter = childObjectMapper.nestedTypeFilter();\n- Query q = Queries.filtered(query, new NestedChildrenQuery(parentFilter, childFilter, hitContext));\n+ Query q = Queries.filtered(query.query(), new NestedChildrenQuery(parentFilter, childFilter, hitContext));\n \n if (size() == 0) {\n return new TopDocs(context.searcher().count(q), Lucene.EMPTY_SCORE_DOCS, 0);\n@@ -280,7 +263,7 @@ public static final class ParentChildInnerHits extends BaseInnerHits {\n private final MapperService mapperService;\n private final DocumentMapper documentMapper;\n \n- public ParentChildInnerHits(SearchContext context, Query query, Map<String, BaseInnerHits> childInnerHits, MapperService mapperService, DocumentMapper documentMapper) {\n+ public ParentChildInnerHits(SearchContext context, ParsedQuery query, Map<String, BaseInnerHits> childInnerHits, MapperService mapperService, DocumentMapper documentMapper) {\n super(context, query, childInnerHits);\n this.mapperService = mapperService;\n this.documentMapper = documentMapper;\n@@ -307,7 +290,7 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n \n BooleanQuery q = new BooleanQuery();\n- q.add(query, Occur.MUST);\n+ q.add(query.query(), Occur.MUST);\n // Only include docs that have the current hit as parent\n q.add(new TermQuery(new Term(field, term)), Occur.MUST);\n // Only include docs that have this inner hits type",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsContext.java",
"status": "modified"
},
{
"diff": "@@ -19,12 +19,11 @@\n \n package org.elasticsearch.search.fetch.innerhits;\n \n-import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n+import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.fetch.fielddata.FieldDataFieldsParseElement;\n@@ -169,7 +168,7 @@ private InnerHitsContext.NestedInnerHits parseNested(XContentParser parser, Quer\n }\n \n private ParseResult parseSubSearchContext(SearchContext searchContext, QueryParseContext parseContext, XContentParser parser) throws Exception {\n- Query query = null;\n+ ParsedQuery query = null;\n Map<String, InnerHitsContext.BaseInnerHits> childInnerHits = null;\n SubSearchContext subSearchContext = new SubSearchContext(searchContext);\n String fieldName = null;\n@@ -179,7 +178,8 @@ private ParseResult parseSubSearchContext(SearchContext searchContext, QueryPars\n fieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"query\".equals(fieldName)) {\n- query = searchContext.queryParserService().parseInnerQuery(parseContext);\n+ Query q = searchContext.queryParserService().parseInnerQuery(parseContext);\n+ query = new ParsedQuery(q, parseContext.copyNamedQueries());\n } else if (\"inner_hits\".equals(fieldName)) {\n childInnerHits = parseInnerHits(parser, parseContext, searchContext);\n } else {\n@@ -191,18 +191,18 @@ private ParseResult parseSubSearchContext(SearchContext searchContext, QueryPars\n }\n \n if (query == null) {\n- query = new MatchAllDocsQuery();\n+ query = ParsedQuery.parsedMatchAllQuery();\n }\n return new ParseResult(subSearchContext, query, childInnerHits);\n }\n \n private static final class ParseResult {\n \n private final SubSearchContext context;\n- private final Query query;\n+ private final ParsedQuery query;\n private final Map<String, InnerHitsContext.BaseInnerHits> childInnerHits;\n \n- private ParseResult(SubSearchContext context, Query query, Map<String, InnerHitsContext.BaseInnerHits> childInnerHits) {\n+ private ParseResult(SubSearchContext context, ParsedQuery query, Map<String, InnerHitsContext.BaseInnerHits> childInnerHits) {\n this.context = context;\n this.query = query;\n this.childInnerHits = childInnerHits;\n@@ -212,7 +212,7 @@ public SubSearchContext context() {\n return context;\n }\n \n- public Query query() {\n+ public ParsedQuery query() {\n return query;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/innerhits/InnerHitsParseElement.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.index.query.support.QueryInnerHitBuilder;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.sort.SortBuilders;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -46,25 +47,9 @@\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAllSuccessful;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.hamcrest.Matchers.arrayContaining;\n-import static org.hamcrest.Matchers.arrayContainingInAnyOrder;\n-import static org.hamcrest.Matchers.arrayWithSize;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n-import static org.hamcrest.Matchers.is;\n-import static org.hamcrest.Matchers.notNullValue;\n-import static org.hamcrest.Matchers.startsWith;\n+import static org.elasticsearch.index.query.QueryBuilders.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n \n public class SimpleNestedTests extends ElasticsearchIntegrationTest {\n \n@@ -178,98 +163,6 @@ public void simpleNested() throws Exception {\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n }\n \n- @Test @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/10661\")\n- public void simpleNestedMatchQueries() throws Exception {\n- XContentBuilder builder = jsonBuilder().startObject()\n- .startObject(\"type1\")\n- .startObject(\"properties\")\n- .startObject(\"nested1\")\n- .field(\"type\", \"nested\")\n- .endObject()\n- .startObject(\"field1\")\n- .field(\"type\", \"long\")\n- .endObject()\n- .endObject()\n- .endObject()\n- .endObject();\n- assertAcked(prepareCreate(\"test\").addMapping(\"type1\", builder));\n- ensureGreen();\n-\n- List<IndexRequestBuilder> requests = new ArrayList<>();\n- int numDocs = randomIntBetween(2, 35);\n- requests.add(client().prepareIndex(\"test\", \"type1\", \"0\").setSource(jsonBuilder().startObject()\n- .field(\"field1\", 0)\n- .startArray(\"nested1\")\n- .startObject()\n- .field(\"n_field1\", \"n_value1_1\")\n- .field(\"n_field2\", \"n_value2_1\")\n- .endObject()\n- .startObject()\n- .field(\"n_field1\", \"n_value1_2\")\n- .field(\"n_field2\", \"n_value2_2\")\n- .endObject()\n- .endArray()\n- .endObject()));\n- requests.add(client().prepareIndex(\"test\", \"type1\", \"1\").setSource(jsonBuilder().startObject()\n- .field(\"field1\", 1)\n- .startArray(\"nested1\")\n- .startObject()\n- .field(\"n_field1\", \"n_value1_8\")\n- 
.field(\"n_field2\", \"n_value2_5\")\n- .endObject()\n- .startObject()\n- .field(\"n_field1\", \"n_value1_3\")\n- .field(\"n_field2\", \"n_value2_1\")\n- .endObject()\n- .endArray()\n- .endObject()));\n-\n- for (int i = 2; i < numDocs; i++) {\n- requests.add(client().prepareIndex(\"test\", \"type1\", String.valueOf(i)).setSource(jsonBuilder().startObject()\n- .field(\"field1\", i)\n- .startArray(\"nested1\")\n- .startObject()\n- .field(\"n_field1\", \"n_value1_8\")\n- .field(\"n_field2\", \"n_value2_5\")\n- .endObject()\n- .startObject()\n- .field(\"n_field1\", \"n_value1_2\")\n- .field(\"n_field2\", \"n_value2_2\")\n- .endObject()\n- .endArray()\n- .endObject()));\n- }\n-\n- indexRandom(true, requests);\n- waitForRelocation(ClusterHealthStatus.GREEN);\n-\n- SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(nestedQuery(\"nested1\", boolQuery()\n- .should(termQuery(\"nested1.n_field1\", \"n_value1_1\").queryName(\"test1\"))\n- .should(termQuery(\"nested1.n_field1\", \"n_value1_3\").queryName(\"test2\"))\n- .should(termQuery(\"nested1.n_field2\", \"n_value2_2\").queryName(\"test3\"))\n- ))\n- .setSize(numDocs)\n- .addSort(\"field1\", SortOrder.ASC)\n- .get();\n- assertNoFailures(searchResponse);\n- assertAllSuccessful(searchResponse);\n- assertThat(searchResponse.getHits().totalHits(), equalTo((long) numDocs));\n- assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"0\"));\n- assertThat(searchResponse.getHits().getAt(0).matchedQueries(), arrayWithSize(2));\n- assertThat(searchResponse.getHits().getAt(0).matchedQueries(), arrayContainingInAnyOrder(\"test1\", \"test3\"));\n-\n- assertThat(searchResponse.getHits().getAt(1).id(), equalTo(\"1\"));\n- assertThat(searchResponse.getHits().getAt(1).matchedQueries(), arrayWithSize(1));\n- assertThat(searchResponse.getHits().getAt(1).matchedQueries(), arrayContaining(\"test2\"));\n-\n- for (int i = 2; i < numDocs; i++) {\n- assertThat(searchResponse.getHits().getAt(i).id(), equalTo(String.valueOf(i)));\n- assertThat(searchResponse.getHits().getAt(i).matchedQueries(), arrayWithSize(1));\n- assertThat(searchResponse.getHits().getAt(i).matchedQueries(), arrayContaining(\"test3\"));\n- }\n- }\n-\n @Test\n public void multiNested() throws Exception {\n assertAcked(prepareCreate(\"test\")",
"filename": "core/src/test/java/org/elasticsearch/nested/SimpleNestedTests.java",
"status": "modified"
},
{
"diff": "@@ -881,7 +881,6 @@ public void testNestedFetchFeatures() {\n long version = searchHit.version();\n assertThat(version, equalTo(1l));\n \n- // Can't use named queries for the same reason explain doesn't work:\n assertThat(searchHit.matchedQueries(), arrayContaining(\"test\"));\n \n SearchHitField field = searchHit.field(\"comments.user\");",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/TopHitsTests.java",
"status": "modified"
},
{
"diff": "@@ -20,13 +20,13 @@\n package org.elasticsearch.search.innerhits;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n-import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.query.support.QueryInnerHitBuilder;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.SearchHit;\n@@ -42,21 +42,9 @@\n import java.util.Locale;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.hasParentQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHit;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.nullValue;\n+import static org.elasticsearch.index.query.QueryBuilders.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n \n /**\n */\n@@ -1004,4 +992,139 @@ public void testRoyals() throws Exception {\n assertThat(innerInnerHits.getAt(0).getId(), equalTo(\"king\"));\n }\n \n+ @Test\n+ public void matchesQueries_nestedInnerHits() throws Exception {\n+ XContentBuilder builder = jsonBuilder().startObject()\n+ .startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"nested1\")\n+ .field(\"type\", \"nested\")\n+ .endObject()\n+ .startObject(\"field1\")\n+ .field(\"type\", \"long\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ assertAcked(prepareCreate(\"test\").addMapping(\"type1\", builder));\n+ ensureGreen();\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ int numDocs = randomIntBetween(2, 35);\n+ requests.add(client().prepareIndex(\"test\", \"type1\", \"0\").setSource(jsonBuilder().startObject()\n+ .field(\"field1\", 0)\n+ .startArray(\"nested1\")\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_1\")\n+ .field(\"n_field2\", \"n_value2_1\")\n+ .endObject()\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_2\")\n+ .field(\"n_field2\", \"n_value2_2\")\n+ .endObject()\n+ .endArray()\n+ .endObject()));\n+ requests.add(client().prepareIndex(\"test\", \"type1\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"field1\", 1)\n+ .startArray(\"nested1\")\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_8\")\n+ 
.field(\"n_field2\", \"n_value2_5\")\n+ .endObject()\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_3\")\n+ .field(\"n_field2\", \"n_value2_1\")\n+ .endObject()\n+ .endArray()\n+ .endObject()));\n+\n+ for (int i = 2; i < numDocs; i++) {\n+ requests.add(client().prepareIndex(\"test\", \"type1\", String.valueOf(i)).setSource(jsonBuilder().startObject()\n+ .field(\"field1\", i)\n+ .startArray(\"nested1\")\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_8\")\n+ .field(\"n_field2\", \"n_value2_5\")\n+ .endObject()\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_2\")\n+ .field(\"n_field2\", \"n_value2_2\")\n+ .endObject()\n+ .endArray()\n+ .endObject()));\n+ }\n+\n+ indexRandom(true, requests);\n+ waitForRelocation(ClusterHealthStatus.GREEN);\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(nestedQuery(\"nested1\", boolQuery()\n+ .should(termQuery(\"nested1.n_field1\", \"n_value1_1\").queryName(\"test1\"))\n+ .should(termQuery(\"nested1.n_field1\", \"n_value1_3\").queryName(\"test2\"))\n+ .should(termQuery(\"nested1.n_field2\", \"n_value2_2\").queryName(\"test3\"))\n+ ).innerHit(new QueryInnerHitBuilder()))\n+ .setSize(numDocs)\n+ .addSort(\"field1\", SortOrder.ASC)\n+ .get();\n+ assertNoFailures(searchResponse);\n+ assertAllSuccessful(searchResponse);\n+ assertThat(searchResponse.getHits().totalHits(), equalTo((long) numDocs));\n+ assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"0\"));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getTotalHits(), equalTo(2l));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries()[0], equalTo(\"test1\"));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(1).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().get(\"nested1\").getAt(1).getMatchedQueries()[0], equalTo(\"test3\"));\n+\n+\n+ assertThat(searchResponse.getHits().getAt(1).id(), equalTo(\"1\"));\n+ assertThat(searchResponse.getHits().getAt(1).getInnerHits().get(\"nested1\").getTotalHits(), equalTo(1l));\n+ assertThat(searchResponse.getHits().getAt(1).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(1).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries()[0], equalTo(\"test2\"));\n+\n+ for (int i = 2; i < numDocs; i++) {\n+ assertThat(searchResponse.getHits().getAt(i).id(), equalTo(String.valueOf(i)));\n+ assertThat(searchResponse.getHits().getAt(i).getInnerHits().get(\"nested1\").getTotalHits(), equalTo(1l));\n+ assertThat(searchResponse.getHits().getAt(i).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(searchResponse.getHits().getAt(i).getInnerHits().get(\"nested1\").getAt(0).getMatchedQueries()[0], equalTo(\"test3\"));\n+ }\n+ }\n+\n+ @Test\n+ public void matchesQueries_parentChildInnerHits() throws Exception {\n+ assertAcked(prepareCreate(\"index\").addMapping(\"child\", \"_parent\", \"type=parent\"));\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"index\", \"parent\", \"1\").setSource(\"{}\"));\n+ requests.add(client().prepareIndex(\"index\", \"child\", \"1\").setParent(\"1\").setSource(\"field\", \"value1\"));\n+ 
requests.add(client().prepareIndex(\"index\", \"child\", \"2\").setParent(\"1\").setSource(\"field\", \"value2\"));\n+ requests.add(client().prepareIndex(\"index\", \"parent\", \"2\").setSource(\"{}\"));\n+ requests.add(client().prepareIndex(\"index\", \"child\", \"3\").setParent(\"2\").setSource(\"field\", \"value1\"));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"index\")\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value1\").queryName(\"_name1\")).innerHit(new QueryInnerHitBuilder()))\n+ .addSort(\"_uid\", SortOrder.ASC)\n+ .get();\n+ assertHitCount(response, 2);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries()[0], equalTo(\"_name1\"));\n+\n+ assertThat(response.getHits().getAt(1).id(), equalTo(\"2\"));\n+ assertThat(response.getHits().getAt(1).getInnerHits().get(\"child\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(1).getInnerHits().get(\"child\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(response.getHits().getAt(1).getInnerHits().get(\"child\").getAt(0).getMatchedQueries()[0], equalTo(\"_name1\"));\n+\n+ response = client().prepareSearch(\"index\")\n+ .setQuery(hasChildQuery(\"child\", matchQuery(\"field\", \"value2\").queryName(\"_name2\")).innerHit(new QueryInnerHitBuilder()))\n+ .addSort(\"_id\", SortOrder.ASC)\n+ .get();\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getTotalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries().length, equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"child\").getAt(0).getMatchedQueries()[0], equalTo(\"_name2\"));\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/search/innerhits/InnerHitsTests.java",
"status": "modified"
}
]
} |
{
"body": "The cluster state diffs introduced a concept of cluster state uuid that is uniquely identifying a particular iteration (version) of the the cluster state. This uuid is ephemeral and changes with each cluster state update, which causes confusion with other 2 uuids in the system - metadata uuid and index metadata uuid which are persistent. We need to rename cluster state uuid to reflect the difference in use and lifecycle. \n",
"comments": [
{
"body": "++\n",
"created_at": "2015-06-23T15:14:36Z"
},
{
"body": "If somebody has a good name for it to renamed to, please let me know. Rejected proposals so far:\n- `id` - too generic, can be perceived as a persistent identification the same way as uuid today\n- `instanceId` - the term instance is used as a synonym for a node everywhere in documentation\n- `updateId` - rejected by the reviewer \n- `diffId` - rejected by the reviewer\n- `revision` - rejected by the reviewer\n- `edition` - rejected by the reviewer\n- `fingerprint` - rejected by the reviewer\n",
"created_at": "2015-06-28T01:52:02Z"
},
{
"body": "What about\n- `clusterStateId`\n- `clusterStateVersionId`\n- `clusterStateDiffId`\n- `stepId`\n",
"created_at": "2015-06-28T02:26:58Z"
},
{
"body": "Thanks! In my mind, `clusterStateId` implies persistence. Since we are trying to find a name for a field of a cluster state I don't think clusterState prefix is very useful here. I don't really see how `clusterStateDiffId` is better than `diffId`, for example. But to move things forward I would be ok with `versionId`, `clusterStateVersionId`, `clusterStateDiffId` or `stepId`. @bleskes?\n",
"created_at": "2015-06-28T13:22:10Z"
},
{
"body": "My problem with all the diff/step related things is that the field is not related to the diff, but rather to the state. A diff has from and to . The versionId related things are confusing w.r.t the existing version field.\n\nI took a step back and I think this is may be difficult because we are trying to rename the wrong thing. Instead we should rename the uuid on the IndexMeta to indexUUID and the one on the MetaData to clusterUUID . I think things fall together nicely (https://github.com/elastic/elasticsearch/compare/master...bleskes:cs_uuid ) . @pickypg @imotov how would you feel about that?\n",
"created_at": "2015-06-29T08:25:35Z"
},
{
"body": "I think renaming uuid in IndexMetaData and MetaData wouldn't hurt. However, I don't think that step, updated and revision has anything to do with diffs. They simply emphasize transient nature of this id as related to iterations, updates, or revisions (whatever you want to call them) of the cluster state without overloading term \"instance\" that is already used in several places. \n",
"created_at": "2015-06-29T14:01:04Z"
},
{
"body": "`generation` or `generation_id`? \n",
"created_at": "2015-06-29T14:04:35Z"
},
{
"body": "That would work for me as well.\n",
"created_at": "2015-06-29T14:14:58Z"
},
{
"body": "copying a comment from #11914 here, to have everything in one place:\n\n> Sorry I misunderstood your proposal. I thought you want to rename MetaData.uuid into MetaData.metaDataUUID. In my opinion renaming MetaData.uuid into MetaData.ClusterUUID (persistent) while keeping ClusterState.uuid (transient) is extremely confusing and is much worse than what we have today. I don't think we should get this in until we fix #11831.\n\nOK. Naming is hard :) . I'm going give it one last shot before letting this go for a week or so. How about\n\n-> ClusterState.uuid -> stateUUID (transient)\n-> MetaData.uuid -> clusterUUID (persistent)\n-> IndexMetaData.uuid -> indexUUID (persistent).\n\n(brace, brace)\n",
"created_at": "2015-06-29T14:55:34Z"
},
{
"body": "I like the \"smurf\" naming approach. Each field becomes much clearer and much simpler.\n\nI might suggest making the transient ones more obvious though by calling them out as such: `transientStateUUID` (or if there will only be one, then maybe even `transientUUID`). Just to add more problems, \"state\" has a lot of potential associations (it's too generic), but as long as it's directly tied to the cluster state, then I'm okay with it.\n",
"created_at": "2015-06-29T15:46:56Z"
},
{
"body": "I'm good with the proposal in https://github.com/elastic/elasticsearch/issues/11831#issuecomment-116717240\n",
"created_at": "2015-06-30T09:35:27Z"
},
{
"body": "That works for me as well. \n",
"created_at": "2015-06-30T13:08:50Z"
}
],
"number": 11831,
"title": "Rename cluster state uuid"
} | {
"body": "The cluster state diffs introduced a concept of cluster state uuid that is uniquely identifying a particular iteration (version) of the the cluster state. This uuid is ephemeral and changes with each cluster state update, which causes confusion with other 2 uuids in the system - metadata uuid and index metadata uuid which are persistent. This commit renames cluster state uuid to updateId to reflect the difference in use and lifecycle.\n\nCloses #11831\n",
"number": 11862,
"review_comments": [],
"title": "Rename cluster state uuid to updateId"
} | {
"commits": [
{
"message": "Rename cluster state uuid to updateId\n\nThe cluster state diffs introduced a concept of cluster state uuid that is uniquely identifying a particular iteration (version) of the the cluster state. This uuid is ephemeral and changes with each cluster state update, which causes confusion with other 2 uuids in the system - metadata uuid and index metadata uuid which are persistent. This commit renames cluster state uuid to updateId to reflect the difference in use and lifecycle.\n\nCloses #11831"
}
],
"files": [
{
"diff": "@@ -76,7 +76,7 @@ protected void masterOperation(final ClusterStateRequest request, final ClusterS\n logger.trace(\"Serving cluster state request using version {}\", currentState.version());\n ClusterState.Builder builder = ClusterState.builder(currentState.getClusterName());\n builder.version(currentState.version());\n- builder.uuid(currentState.uuid());\n+ builder.updateId(currentState.updateId());\n if (request.nodes()) {\n builder.nodes(currentState.nodes());\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java",
"status": "modified"
},
{
"diff": "@@ -76,9 +76,9 @@\n * to a node if this node was present in the previous version of the cluster state. If a node is not present was\n * not present in the previous version of the cluster state, such node is unlikely to have the previous cluster\n * state version and should be sent a complete version. In order to make sure that the differences are applied to\n- * correct version of the cluster state, each cluster state version update generates {@link #uuid} that uniquely\n- * identifies this version of the state. This uuid is verified by the {@link ClusterStateDiff#apply} method to\n- * makes sure that the correct diffs are applied. If uuids don’t match, the {@link ClusterStateDiff#apply} method\n+ * correct version of the cluster state, each cluster state version update generates {@link #updateId} that uniquely\n+ * identifies this version of the state. This updateId is verified by the {@link ClusterStateDiff#apply} method to\n+ * makes sure that the correct diffs are applied. If updateIds don’t match, the {@link ClusterStateDiff#apply} method\n * throws the {@link IncompatibleClusterStateVersionException}, which should cause the publishing mechanism to send\n * a full version of the cluster state to the node on which this exception was thrown.\n */\n@@ -138,13 +138,13 @@ public static <T extends Custom> T lookupPrototypeSafe(String type) {\n return proto;\n }\n \n- public static final String UNKNOWN_UUID = \"_na_\";\n+ public static final String UNKNOWN_UPDATE_ID = \"_na_\";\n \n public static final long UNKNOWN_VERSION = -1;\n \n private final long version;\n \n- private final String uuid;\n+ private final String updateId;\n \n private final RoutingTable routingTable;\n \n@@ -165,13 +165,13 @@ public static <T extends Custom> T lookupPrototypeSafe(String type) {\n \n private volatile ClusterStateStatus status;\n \n- public ClusterState(long version, String uuid, ClusterState state) {\n- this(state.clusterName, version, uuid, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs(), false);\n+ public ClusterState(long version, String updateId, ClusterState state) {\n+ this(state.clusterName, version, updateId, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs(), false);\n }\n \n- public ClusterState(ClusterName clusterName, long version, String uuid, MetaData metaData, RoutingTable routingTable, DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap<String, Custom> customs, boolean wasReadFromDiff) {\n+ public ClusterState(ClusterName clusterName, long version, String updateId, MetaData metaData, RoutingTable routingTable, DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap<String, Custom> customs, boolean wasReadFromDiff) {\n this.version = version;\n- this.uuid = uuid;\n+ this.updateId = updateId;\n this.clusterName = clusterName;\n this.metaData = metaData;\n this.routingTable = routingTable;\n@@ -200,11 +200,11 @@ public long getVersion() {\n }\n \n /**\n- * This uuid is automatically generated for for each version of cluster state. It is used to make sure that\n+ * This updateId is automatically generated for for each version of cluster state. 
It is used to make sure that\n * we are applying diffs to the right previous state.\n */\n- public String uuid() {\n- return this.uuid;\n+ public String updateId() {\n+ return this.updateId;\n }\n \n public DiscoveryNodes nodes() {\n@@ -283,7 +283,7 @@ public RoutingNodes readOnlyRoutingNodes() {\n public String prettyPrint() {\n StringBuilder sb = new StringBuilder();\n sb.append(\"version: \").append(version).append(\"\\n\");\n- sb.append(\"uuid: \").append(uuid).append(\"\\n\");\n+ sb.append(\"update_id: \").append(updateId).append(\"\\n\");\n sb.append(\"from_diff: \").append(wasReadFromDiff).append(\"\\n\");\n sb.append(\"meta data version: \").append(metaData.version()).append(\"\\n\");\n sb.append(nodes().prettyPrint());\n@@ -362,7 +362,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n if (metrics.contains(Metric.VERSION)) {\n builder.field(\"version\", version);\n- builder.field(\"uuid\", uuid);\n+ builder.field(\"update_id\", updateId);\n }\n \n if (metrics.contains(Metric.MASTER_NODE)) {\n@@ -559,7 +559,7 @@ public static class Builder {\n \n private final ClusterName clusterName;\n private long version = 0;\n- private String uuid = UNKNOWN_UUID;\n+ private String updateId = UNKNOWN_UPDATE_ID;\n private MetaData metaData = MetaData.EMPTY_META_DATA;\n private RoutingTable routingTable = RoutingTable.EMPTY_ROUTING_TABLE;\n private DiscoveryNodes nodes = DiscoveryNodes.EMPTY_NODES;\n@@ -571,7 +571,7 @@ public static class Builder {\n public Builder(ClusterState state) {\n this.clusterName = state.clusterName;\n this.version = state.version();\n- this.uuid = state.uuid();\n+ this.updateId = state.updateId();\n this.nodes = state.nodes();\n this.routingTable = state.routingTable();\n this.metaData = state.metaData();\n@@ -633,12 +633,12 @@ public Builder version(long version) {\n \n public Builder incrementVersion() {\n this.version = version + 1;\n- this.uuid = UNKNOWN_UUID;\n+ this.updateId = UNKNOWN_UPDATE_ID;\n return this;\n }\n \n- public Builder uuid(String uuid) {\n- this.uuid = uuid;\n+ public Builder updateId(String updateId) {\n+ this.updateId = updateId;\n return this;\n }\n \n@@ -667,10 +667,10 @@ public Builder fromDiff(boolean fromDiff) {\n }\n \n public ClusterState build() {\n- if (UNKNOWN_UUID.equals(uuid)) {\n- uuid = Strings.randomBase64UUID();\n+ if (UNKNOWN_UPDATE_ID.equals(updateId)) {\n+ updateId = Strings.randomBase64UUID();\n }\n- return new ClusterState(clusterName, version, uuid, metaData, routingTable, nodes, blocks, customs.build(), fromDiff);\n+ return new ClusterState(clusterName, version, updateId, metaData, routingTable, nodes, blocks, customs.build(), fromDiff);\n }\n \n public static byte[] toBytes(ClusterState state) throws IOException {\n@@ -711,7 +711,7 @@ public ClusterState readFrom(StreamInput in, DiscoveryNode localNode) throws IOE\n ClusterName clusterName = ClusterName.readClusterName(in);\n Builder builder = new Builder(clusterName);\n builder.version = in.readLong();\n- builder.uuid = in.readString();\n+ builder.updateId = in.readString();\n builder.metaData = MetaData.Builder.readFrom(in);\n builder.routingTable = RoutingTable.Builder.readFrom(in);\n builder.nodes = DiscoveryNodes.Builder.readFrom(in, localNode);\n@@ -734,7 +734,7 @@ public ClusterState readFrom(StreamInput in) throws IOException {\n public void writeTo(StreamOutput out) throws IOException {\n clusterName.writeTo(out);\n out.writeLong(version);\n- out.writeString(uuid);\n+ out.writeString(updateId);\n metaData.writeTo(out);\n 
routingTable.writeTo(out);\n nodes.writeTo(out);\n@@ -750,9 +750,9 @@ private static class ClusterStateDiff implements Diff<ClusterState> {\n \n private final long toVersion;\n \n- private final String fromUuid;\n+ private final String fromUpdateId;\n \n- private final String toUuid;\n+ private final String toUpdateId;\n \n private final ClusterName clusterName;\n \n@@ -767,8 +767,8 @@ private static class ClusterStateDiff implements Diff<ClusterState> {\n private final Diff<ImmutableOpenMap<String, Custom>> customs;\n \n public ClusterStateDiff(ClusterState before, ClusterState after) {\n- fromUuid = before.uuid;\n- toUuid = after.uuid;\n+ fromUpdateId = before.updateId;\n+ toUpdateId = after.updateId;\n toVersion = after.version;\n clusterName = after.clusterName;\n routingTable = after.routingTable.diff(before.routingTable);\n@@ -780,8 +780,8 @@ public ClusterStateDiff(ClusterState before, ClusterState after) {\n \n public ClusterStateDiff(StreamInput in, ClusterState proto) throws IOException {\n clusterName = ClusterName.readClusterName(in);\n- fromUuid = in.readString();\n- toUuid = in.readString();\n+ fromUpdateId = in.readString();\n+ toUpdateId = in.readString();\n toVersion = in.readLong();\n routingTable = proto.routingTable.readDiffFrom(in);\n nodes = proto.nodes.readDiffFrom(in);\n@@ -803,8 +803,8 @@ public Diff<Custom> readDiffFrom(StreamInput in, String key) throws IOException\n @Override\n public void writeTo(StreamOutput out) throws IOException {\n clusterName.writeTo(out);\n- out.writeString(fromUuid);\n- out.writeString(toUuid);\n+ out.writeString(fromUpdateId);\n+ out.writeString(toUpdateId);\n out.writeLong(toVersion);\n routingTable.writeTo(out);\n nodes.writeTo(out);\n@@ -816,14 +816,14 @@ public void writeTo(StreamOutput out) throws IOException {\n @Override\n public ClusterState apply(ClusterState state) {\n Builder builder = new Builder(clusterName);\n- if (toUuid.equals(state.uuid)) {\n+ if (toUpdateId.equals(state.updateId)) {\n // no need to read the rest - cluster state didn't change\n return state;\n }\n- if (fromUuid.equals(state.uuid) == false) {\n- throw new IncompatibleClusterStateVersionException(state.version, state.uuid, toVersion, fromUuid);\n+ if (fromUpdateId.equals(state.updateId) == false) {\n+ throw new IncompatibleClusterStateVersionException(state.version, state.updateId, toVersion, fromUpdateId);\n }\n- builder.uuid(toUuid);\n+ builder.updateId(toUpdateId);\n builder.version(toVersion);\n builder.routingTable(routingTable.apply(state.routingTable));\n builder.nodes(nodes.apply(state.nodes));",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterState.java",
"status": "modified"
},
{
"diff": "@@ -22,14 +22,14 @@\n import org.elasticsearch.ElasticsearchException;\n \n /**\n- * Thrown by {@link Diffable#readDiffAndApply(org.elasticsearch.common.io.stream.StreamInput)} method\n+ * Thrown by {@link Diff#apply} method if the diffs cannot be applied to the given cluster state\n */\n public class IncompatibleClusterStateVersionException extends ElasticsearchException {\n public IncompatibleClusterStateVersionException(String msg) {\n super(msg);\n }\n \n- public IncompatibleClusterStateVersionException(long expectedVersion, String expectedUuid, long receivedVersion, String receivedUuid) {\n- super(\"Expected diff for version \" + expectedVersion + \" with uuid \" + expectedUuid + \" got version \" + receivedVersion + \" and uuid \" + receivedUuid);\n+ public IncompatibleClusterStateVersionException(long expectedVersion, String expectedUpdateId, long receivedVersion, String receivedUpdateId) {\n+ super(\"Expected diff for version \" + expectedVersion + \" with updateId \" + expectedUpdateId + \" got version \" + receivedVersion + \" and updateId \" + receivedUpdateId);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/IncompatibleClusterStateVersionException.java",
"status": "modified"
},
{
"diff": "@@ -519,11 +519,11 @@ public void run() {\n }\n \n TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(System.nanoTime() - startTimeNS)));\n- logger.debug(\"processing [{}]: took {} done applying updated cluster_state (version: {}, uuid: {})\", source, executionTime, newClusterState.version(), newClusterState.uuid());\n+ logger.debug(\"processing [{}]: took {} done applying updated cluster_state (version: {}, updateId: {})\", source, executionTime, newClusterState.version(), newClusterState.updateId());\n warnAboutSlowTaskIfNeeded(executionTime, source);\n } catch (Throwable t) {\n TimeValue executionTime = TimeValue.timeValueMillis(Math.max(0, TimeValue.nsecToMSec(System.nanoTime() - startTimeNS)));\n- StringBuilder sb = new StringBuilder(\"failed to apply updated cluster state in \").append(executionTime).append(\":\\nversion [\").append(newClusterState.version()).append(\"], uuid [\").append(newClusterState.uuid()).append(\"], source [\").append(source).append(\"]\\n\");\n+ StringBuilder sb = new StringBuilder(\"failed to apply updated cluster state in \").append(executionTime).append(\":\\nversion [\").append(newClusterState.version()).append(\"], updateId [\").append(newClusterState.updateId()).append(\"], source [\").append(source).append(\"]\\n\");\n sb.append(newClusterState.nodes().prettyPrint());\n sb.append(newClusterState.routingTable().prettyPrint());\n sb.append(newClusterState.readOnlyRoutingNodes().prettyPrint());",
"filename": "core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java",
"status": "modified"
},
{
"diff": "@@ -266,7 +266,7 @@ public void messageReceived(BytesTransportRequest request, final TransportChanne\n } else if (lastSeenClusterState != null) {\n Diff<ClusterState> diff = lastSeenClusterState.readDiffFrom(in);\n lastSeenClusterState = diff.apply(lastSeenClusterState);\n- logger.debug(\"received diff cluster state version {} with uuid {}, diff size {}\", lastSeenClusterState.version(), lastSeenClusterState.uuid(), request.bytes().length());\n+ logger.debug(\"received diff cluster state version {} with updateId {}, diff size {}\", lastSeenClusterState.version(), lastSeenClusterState.updateId(), request.bytes().length());\n } else {\n logger.debug(\"received diff for but don't have any local cluster state - requesting full state\");\n throw new IncompatibleClusterStateVersionException(\"have no local cluster state\");",
"filename": "core/src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java",
"status": "modified"
},
{
"diff": "@@ -750,7 +750,7 @@ public void testClusterStateUpdateLogging() throws Exception {\n MockLogAppender mockAppender = new MockLogAppender();\n mockAppender.addExpectation(new MockLogAppender.SeenEventExpectation(\"test1\", \"cluster.service\", Level.DEBUG, \"*processing [test1]: took * no change in cluster_state\"));\n mockAppender.addExpectation(new MockLogAppender.SeenEventExpectation(\"test2\", \"cluster.service\", Level.TRACE, \"*failed to execute cluster state update in *\"));\n- mockAppender.addExpectation(new MockLogAppender.SeenEventExpectation(\"test3\", \"cluster.service\", Level.DEBUG, \"*processing [test3]: took * done applying updated cluster_state (version: *, uuid: *)\"));\n+ mockAppender.addExpectation(new MockLogAppender.SeenEventExpectation(\"test3\", \"cluster.service\", Level.DEBUG, \"*processing [test3]: took * done applying updated cluster_state (version: *, updateId: *)\"));\n \n Logger rootLogger = Logger.getRootLogger();\n rootLogger.addAppender(mockAppender);",
"filename": "core/src/test/java/org/elasticsearch/cluster/ClusterServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -87,7 +87,7 @@ public MockNode createMockNode(final String name, Settings settings, Version ver\n return createMockNode(name, settings, version, new PublishClusterStateAction.NewClusterStateListener() {\n @Override\n public void onNewClusterState(ClusterState clusterState, NewStateProcessed newStateProcessed) {\n- logger.debug(\"Node [{}] onNewClusterState version [{}], uuid [{}]\", name, clusterState.version(), clusterState.uuid());\n+ logger.debug(\"Node [{}] onNewClusterState version [{}], updateId [{}]\", name, clusterState.version(), clusterState.updateId());\n newStateProcessed.onNewClusterStateProcessed();\n }\n });\n@@ -392,7 +392,7 @@ public void onNewClusterState(ClusterState clusterState, NewStateProcessed newSt\n MockNode nodeB = createMockNode(\"nodeB\", noDiffPublishingSettings, Version.CURRENT, new PublishClusterStateAction.NewClusterStateListener() {\n @Override\n public void onNewClusterState(ClusterState clusterState, NewStateProcessed newStateProcessed) {\n- logger.debug(\"Got cluster state update, version [{}], guid [{}], from diff [{}]\", clusterState.version(), clusterState.uuid(), clusterState.wasReadFromDiff());\n+ logger.debug(\"Got cluster state update, version [{}], updateId [{}], from diff [{}]\", clusterState.version(), clusterState.updateId(), clusterState.wasReadFromDiff());\n assertFalse(clusterState.wasReadFromDiff());\n newStateProcessed.onNewClusterStateProcessed();\n }\n@@ -496,7 +496,7 @@ public void check(ClusterState clusterState, PublishClusterStateAction.NewCluste\n }\n });\n \n- ClusterState unserializableClusterState = new ClusterState(clusterState.version(), clusterState.uuid(), clusterState) {\n+ ClusterState unserializableClusterState = new ClusterState(clusterState.version(), clusterState.updateId(), clusterState) {\n @Override\n public Diff<ClusterState> diff(ClusterState previousState) {\n return new Diff<ClusterState>() {\n@@ -615,7 +615,7 @@ public void add(NewClusterStateExpectation expectation) {\n public static class DelegatingClusterState extends ClusterState {\n \n public DelegatingClusterState(ClusterState clusterState) {\n- super(clusterState.version(), clusterState.uuid(), clusterState);\n+ super(clusterState.version(), clusterState.updateId(), clusterState);\n }\n \n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/ClusterStateDiffPublishingTests.java",
"status": "modified"
},
{
"diff": "@@ -116,7 +116,7 @@ public void testClusterStateDiffSerialization() throws Exception {\n try {\n // Check non-diffable elements\n assertThat(clusterStateFromDiffs.version(), equalTo(clusterState.version()));\n- assertThat(clusterStateFromDiffs.uuid(), equalTo(clusterState.uuid()));\n+ assertThat(clusterStateFromDiffs.updateId(), equalTo(clusterState.updateId()));\n \n // Check nodes\n assertThat(clusterStateFromDiffs.nodes().nodes(), equalTo(clusterState.nodes().nodes()));",
"filename": "core/src/test/java/org/elasticsearch/cluster/ClusterStateDiffTests.java",
"status": "modified"
},
{
"diff": "@@ -1085,7 +1085,7 @@ protected void ensureClusterStateConsistency() throws IOException {\n // Check that the non-master node has the same version of the cluster state as the master and that this node didn't disconnect from the master\n if (masterClusterState.version() == localClusterState.version() && localClusterState.nodes().nodes().containsKey(masterId)) {\n try {\n- assertEquals(\"clusterstate UUID does not match\", masterClusterState.uuid(), localClusterState.uuid());\n+ assertEquals(\"clusterstate updateId does not match\", masterClusterState.updateId(), localClusterState.updateId());\n // We cannot compare serialization bytes since serialization order of maps is not guaranteed\n // but we can compare serialization sizes - they should be the same\n assertEquals(\"clusterstate size does not match\", masterClusterStateSize, localClusterStateSize);",
"filename": "core/src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java",
"status": "modified"
}
]
} |
{
"body": "Start Elasticsearch 1.6 and run:\n\n```\nDELETE *\n\nPUT good/t/1\n{\"foo\": \"bar\"}\n```\n\nThen shutdown and start ES from the master branch. On the first start, the index recovers correctly:\n\n```\n[2015-06-24 18:38:02,888][INFO ][gateway ] [Cobra] recovered [1] indices into cluster_state\n[2015-06-24 18:38:03,315][WARN ][cluster.metadata ] [Cobra] [good] re-syncing mappings with cluster state for types [[t]]\n```\n\nThen shutdown and restart the node. Recovery fails (continuously) with the following exception:\n\n```\n[2015-06-24 18:38:16,260][INFO ][gateway ] [Fight-Man] recovered [1] indices into cluster_state\n[2015-06-24 18:38:16,810][WARN ][indices.cluster ] [Fight-Man] [[good][0]] marking and sending shard failed due to [failed recovery]\n[good][0] failed to recovery from gateway\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:258)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:60)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:133)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\nCaused by: [good][0] failed to create engine\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:135)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1287)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1282)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:829)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:810)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:249)\n ... 5 more\nCaused by: java.nio.file.FileAlreadyExistsException: data/elasticsearch/nodes/0/indices/good/0/translog/translog-1435163866460.ckp\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:176)\n at java.nio.channels.FileChannel.open(FileChannel.java:287)\n at java.nio.channels.FileChannel.open(FileChannel.java:334)\n at org.elasticsearch.index.translog.Checkpoint.write(Checkpoint.java:87)\n at org.elasticsearch.index.translog.Translog.upgradeLegacyTranslog(Translog.java:263)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:177)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:131)\n ... 
11 more\n[2015-06-24 18:38:16,814][WARN ][cluster.action.shard ] [Fight-Man] [good][0] received shard failed for [good][0], node[NQ3bYnFUTfiyF-2RjSacbg], [P], s[INITIALIZING], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-06-24T16:38:16.219Z]], indexUUID [fx8Dwd1GRvivqRVezmK3sg], reason [shard failure [failed recovery][IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: FileAlreadyExistsException[data/elasticsearch/nodes/0/indices/good/0/translog/translog-1435163866460.ckp]; ]]\n[2015-06-24 18:38:16,815][WARN ][indices.cluster ] [Fight-Man] [[good][1]] marking and sending shard failed due to [failed recovery]\n[good][1] failed to recovery from gateway\n```\n",
"comments": [],
"number": 11858,
"title": "Translog file already exists exception thrown on old indices on second restart"
} | {
"body": "Today we mark a translog as upgraded by adding a marker to the engine commit.\nYet, this commit was only added if there was no translog present before ie. only\nif we have a fresh engine which is missing the entire point. Yet, this commit\nadds a backwards index tests that ensures we can open old indices more than once\nie. mark the index as upgraded.\n\nCloses #11858\n",
"number": 11860,
"review_comments": [
{
"body": "I think it's good have 0 as an option here too? I.e., index and search a freshly opened engine ..\n",
"created_at": "2015-06-25T10:17:07Z"
},
{
"body": "can we change this log to indicated why we ended up here? (now it says only translog ID)\n",
"created_at": "2015-06-25T10:17:40Z"
}
],
"title": "Mark translog as upgraded in the engine even if a legacy generation exists"
} | {
"commits": [
{
"message": "Mark translog as upgraded in the engine even if a legacy generation exists\n\nToday we mark a translog as upgraded by adding a marker to the engine commit.\nYet, this commit was only added if there was no translog present before ie. only\nif we have a fresh engine which is missing the entire point. Yet, this commit\nadds a backwards index tests that ensures we can open old indices more than once\nie. mark the index as upgraded.\n\nCloses #11858"
}
],
"files": [
{
"diff": "@@ -178,8 +178,12 @@ private Translog openTranslog(EngineConfig engineConfig, IndexWriter writer, boo\n }\n }\n final Translog translog = new Translog(translogConfig);\n- if (generation == null) {\n- logger.debug(\"no translog ID present in the current generation - creating one\");\n+ if (generation == null || generation.translogUUID == null) {\n+ if (generation == null) {\n+ logger.debug(\"no translog ID present in the current generation - creating one\");\n+ } else if (generation.translogUUID == null) {\n+ logger.debug(\"upgraded translog to pre 2.0 format, associating translog with index - writing translog UUID\");\n+ }\n boolean success = false;\n try {\n commitIndexWriter(writer, translog);",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -210,9 +210,10 @@ public static void upgradeLegacyTranslog(ESLogger logger, TranslogConfig config)\n if (translogGeneration.translogUUID != null) {\n throw new IllegalArgumentException(\"TranslogGeneration has a non-null UUID - index must have already been upgraded\");\n }\n- assert translogGeneration.translogUUID == null : \"Already upgrade\";\n try {\n- assert Checkpoint.read(translogPath.resolve(CHECKPOINT_FILE_NAME)) == null;\n+ if (Checkpoint.read(translogPath.resolve(CHECKPOINT_FILE_NAME)) != null) {\n+ throw new IllegalStateException(CHECKPOINT_FILE_NAME + \" file already present, translog is already upgraded\");\n+ }\n } catch (NoSuchFileException | FileNotFoundException ex) {\n logger.debug(\"upgrading translog - no checkpoint found\");\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java",
"status": "modified"
},
{
"diff": "@@ -38,13 +38,16 @@\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.MockDirectoryWrapper;\n import org.apache.lucene.util.IOUtils;\n+import org.apache.lucene.util.TestUtil;\n import org.elasticsearch.Version;\n+import org.elasticsearch.bwcompat.OldIndexBackwardsCompatibilityTests;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.Base64;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.settings.Settings;\n@@ -83,10 +86,12 @@\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.io.InputStream;\n import java.nio.charset.Charset;\n+import java.nio.file.DirectoryStream;\n+import java.nio.file.Files;\n import java.nio.file.Path;\n-import java.util.Arrays;\n-import java.util.List;\n+import java.util.*;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicInteger;\n \n@@ -1711,6 +1716,102 @@ private Mapping dynamicUpdate() {\n return new Mapping(Version.CURRENT, root, new RootMapper[0], new Mapping.SourceTransform[0], ImmutableMap.<String, Object>of());\n }\n \n+ public void testUpgradeOldIndex() throws IOException {\n+ List<Path> indexes = new ArrayList<>();\n+ Path dir = getDataPath(\"/\" + OldIndexBackwardsCompatibilityTests.class.getPackage().getName().replace('.', '/')); // the files are in the same pkg as the OldIndexBackwardsCompatibilityTests test\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, \"index-*.zip\")) {\n+ for (Path path : stream) {\n+ indexes.add(path);\n+ }\n+ }\n+ Collections.shuffle(indexes, random());\n+ for (Path indexFile : indexes.subList(0, scaledRandomIntBetween(1, indexes.size() / 2))) {\n+ final String indexName = indexFile.getFileName().toString().replace(\".zip\", \"\").toLowerCase(Locale.ROOT);\n+ Version version = Version.fromString(indexName.replace(\"index-\", \"\"));\n+ if (version.onOrAfter(Version.V_2_0_0)) {\n+ continue;\n+ }\n+ Path unzipDir = createTempDir();\n+ Path unzipDataDir = unzipDir.resolve(\"data\");\n+ // decompress the index\n+ try (InputStream stream = Files.newInputStream(indexFile)) {\n+ TestUtil.unzip(stream, unzipDir);\n+ }\n+ // check it is unique\n+ assertTrue(Files.exists(unzipDataDir));\n+ Path[] list = filterExtraFSFiles(FileSystemUtils.files(unzipDataDir));\n+\n+ if (list.length != 1) {\n+ throw new IllegalStateException(\"Backwards index must contain exactly one cluster but was \" + list.length + \" \" + Arrays.toString(list));\n+ }\n+ // the bwc scripts packs the indices under this path\n+ Path src = list[0].resolve(\"nodes/0/indices/\" + indexName);\n+ Path translog = list[0].resolve(\"nodes/0/indices/\" + indexName).resolve(\"0\").resolve(\"translog\");\n+ assertTrue(\"[\" + indexFile + \"] missing index dir: \" + src.toString(), Files.exists(src));\n+ assertTrue(\"[\" + indexFile + \"] missing translog dir: \" + translog.toString(), Files.exists(translog));\n+ Path[] tlogFiles = filterExtraFSFiles(FileSystemUtils.files(translog));\n+ assertEquals(Arrays.toString(tlogFiles), tlogFiles.length, 1);\n+ final long size = Files.size(tlogFiles[0]);\n+\n+ final long generation = Translog.parseIdFromFileName(tlogFiles[0]);\n+ 
assertTrue(generation >= 1);\n+ logger.debug(\"upgrading index {} file: {} size: {}\", indexName, tlogFiles[0].getFileName(), size);\n+ Directory directory = newFSDirectory(src.resolve(\"0\").resolve(\"index\"));\n+ Store store = createStore(directory);\n+ final int iters = randomIntBetween(0, 2);\n+ int numDocs = -1;\n+ for (int i = 0; i < iters; i++) { // make sure we can restart on an upgraded index\n+ try (InternalEngine engine = createEngine(store, translog)) {\n+ try (Searcher searcher = engine.acquireSearcher(\"test\")) {\n+ if (i > 0) {\n+ assertEquals(numDocs, searcher.reader().numDocs());\n+ }\n+ TopDocs search = searcher.searcher().search(new MatchAllDocsQuery(), 1);\n+ numDocs = searcher.reader().numDocs();\n+ assertTrue(search.totalHits > 1);\n+ }\n+ CommitStats commitStats = engine.commitStats();\n+ Map<String, String> userData = commitStats.getUserData();\n+ assertTrue(\"userdata dosn't contain uuid\",userData.containsKey(Translog.TRANSLOG_UUID_KEY));\n+ assertTrue(\"userdata doesn't contain generation key\", userData.containsKey(Translog.TRANSLOG_GENERATION_KEY));\n+ assertFalse(\"userdata contains legacy marker\", userData.containsKey(\"translog_id\"));\n+ }\n+ }\n+\n+ try (InternalEngine engine = createEngine(store, translog)) {\n+ if (numDocs == -1) {\n+ try (Searcher searcher = engine.acquireSearcher(\"test\")) {\n+ numDocs = searcher.reader().numDocs();\n+ }\n+ }\n+ final int numExtraDocs = randomIntBetween(1, 10);\n+ for (int i = 0; i < numExtraDocs; i++) {\n+ ParsedDocument doc = testParsedDocument(\"extra\" + Integer.toString(i), \"extra\" + Integer.toString(i), \"test\", null, -1, -1, testDocument(), new BytesArray(\"{}\"), null);\n+ Engine.Create firstIndexRequest = new Engine.Create(null, newUid(Integer.toString(i)), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), false, false);\n+ engine.create(firstIndexRequest);\n+ assertThat(firstIndexRequest.version(), equalTo(1l));\n+ }\n+ engine.refresh(\"test\");\n+ try (Engine.Searcher searcher = engine.acquireSearcher(\"test\")) {\n+ TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), randomIntBetween(numDocs, numDocs + numExtraDocs));\n+ assertThat(topDocs.totalHits, equalTo(numDocs + numExtraDocs));\n+ }\n+ }\n+ IOUtils.close(store, directory);\n+ }\n+ }\n+\n+ private Path[] filterExtraFSFiles(Path[] files) {\n+ List<Path> paths = new ArrayList<>();\n+ for (Path p : files) {\n+ if (p.getFileName().toString().startsWith(\"extra\")) {\n+ continue;\n+ }\n+ paths.add(p);\n+ }\n+ return paths.toArray(new Path[0]);\n+ }\n+\n public void testTranslogReplay() throws IOException {\n boolean canHaveDuplicates = true;\n boolean autoGeneratedId = true;\n@@ -1833,6 +1934,13 @@ protected Tuple<DocumentMapper, Mapping> docMapper(String type) {\n protected void operationProcessed() {\n recoveredOps.incrementAndGet();\n }\n+\n+ @Override\n+ public void performRecoveryOperation(Engine engine, Translog.Operation operation, boolean allowMappingUpdates) {\n+ if (operation.opType() != Translog.Operation.Type.DELETE_BY_QUERY) { // we don't support del by query in this test\n+ super.performRecoveryOperation(engine, operation, allowMappingUpdates);\n+ }\n+ }\n }\n \n public void testRecoverFromForeignTranslog() throws IOException {",
"filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
}
]
} |
{
"body": "From https://issues.apache.org/jira/browse/LUCENE-6482\n\n> during startup of Elasticsearch nodes: From the last 2 stack traces, you see that there are 2 things happening in parallel: Loading of Codec.class (because an Index was opened), but in parallel, Elasticsearch seems to initialize the PostingsFormats.class in the class CodecModule (Elasticsearch). In my opinion, this should not happen in parallel, but a fix would maybe that CodecModule should also call Codecs.forName() so those classes are initialized sequentially at one single place. The problem with Codec.class and PostingsFormat.class clinit running in parallel in different threads may have similar effects like you have seen in the blog post (Codecs depend on PostingsFormat and some PostingsFormats depend on the Codec class, which then hangs if PostingsFormats and Codecs are initialized from 2 different threads at same time, waiting for each other). But we have no chance to prevent this (unfortunately).\n> \n> I cannot say for sure, but something seems to be fishy while initializing Elasticsearch, because there is too much happening at the same time. In my opinion, Codecs and Postingsformats and Docvalues classes should be initialized sequentially, but I have no idea how to enforce this.\n",
"comments": [
{
"body": "@shikhar We haven't seen this issue reported before. From your last comment:\n\n> It might have been due to using a custom Elasticsearch discovery plugin which is purely asynchronous that those 2 bits ended up happening in parallel, and caused the deadlock.\n\nDid you determine whether this happened when you weren't using the custom plugin?\n",
"created_at": "2015-05-15T18:43:15Z"
},
{
"body": "@clintongormley Not yet, but I'll try to ascertain whether this is still happening by doing a ton of cluster restarts with and without the plugin. I'll reopen if I find this is still an issue. Thanks :)\n",
"created_at": "2015-05-16T07:44:55Z"
},
{
"body": "I have been able to reproduce this with Zen. The only interesting setting is probably `discovery.zen.publish_timeout=0` which makes it similar to [eskka](http://github.com/shikhar/eskka) in that master does not block for acks before processing more updates. But I'm not sure if this is relevant to the problem at all, just thought I'd mention in case it is.\n\nTo describe what happens, the cluster is stuck in the RED state while starting up. On the master node, there are many errors like:\n\n```\n[2015-05-19 12:48:34,460][WARN ][gateway.local ] [search45-es1] [mfg-1431690242][0]: failed to list shard stores on node [vrHV_EVZSuSFdCGEC7lUsg]\norg.elasticsearch.action.FailedNodeException: Failed node [vrHV_EVZSuSFdCGEC7lUsg]\n at org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.onFailure(TransportNodesOperationAction.java:206)\n at org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.access$1000(TransportNodesOperationAction.java:97)\n at org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$4.handleException(TransportNodesOperationAction.java:178)\n at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [search44-es1][inet[/172.31.240.55:8301]][internal:cluster/nodes/indices/shard/store[n]] request_id [4273] timed out after [30001ms]\n ... 4 more\n```\n\nA thread dump from the `vrHV_EVZSuSFdCGEC7lUsg` node that it is complaining about: \nhttps://gist.github.com/shikhar/8ad5c166c6a20458c4d2 -- grepping for `Codec` or `<init>` should reveal the threads where the static initialization deadlock is happening. These threads are shown as RUNNABLE while in `Object.wait()`\n\nMy hunch is that this can happen while the cluster is starting up and an index create event lands (we have some crons that are creating new indexes in the background as a way of reindexing). You can see that one of the threads that is deadlocked landed in the Codec-ey bits via:\n\n```\norg.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewIndices(IndicesClusterStateService.java:311)\n```\n",
"created_at": "2015-05-19T12:59:15Z"
},
{
"body": "I think the static initialization of the deadlock-prone bits that @uschindler mentioned should happen in a deterministic manner at ES startup, rather than e.g. when creating indexes.\n",
"created_at": "2015-05-19T13:05:33Z"
},
{
"body": "@shikhar thanks for coming back to us with the details\n",
"created_at": "2015-05-19T13:10:14Z"
},
{
"body": "Forgot to mention that this was reproduced on ES 1.5.2\n",
"created_at": "2015-05-19T13:13:58Z"
},
{
"body": "The discussion when and why this might happen is here:\nhttps://issues.apache.org/jira/browse/LUCENE-6482\n\nThe proposal is to load the Codec, PostingsFormat and DocValuesFormat in the Elasticsearch Boostrap class. So something like calls to:\n\n``` java\nCodec.reloadCodecs();\nPostingsFormat.reloadPostingsFormats();\n// and others, see Solr's SolrResourceLoader startup\n```\n\nsomewhere at the beginning of Boostrap class. In Solr this is already done in the startup code. This prevents dependency problems if those index classes are started in a concurrent way.\n",
"created_at": "2015-05-19T13:24:06Z"
},
{
"body": "Uwe, why should users of lucene have to do this? Tracking down a class loading deadlock like this is something i do not wish on anyone (thank you very much @shikhar for digging here). So I don't like that you need a \"magical sequence\" at the startup of your app, or you might get hangs.\n\nIs there no better way?\n",
"created_at": "2015-05-19T13:43:16Z"
},
{
"body": "The problem here is that on Elasticsearch startup the big builder is initializing a lot of shit in a unforseeable order, and concurrent in addition. The main problem here is that the Codec/Postingsformat/... stuff has a large spaghetti of load-time dependencies, so if called in the wrong order concurrently, it hangs.\n\nTo solve the issue here, my suggestion is to simply call those static initializes at a defined place in Elasticsearch's bootstrap, so it cannot happen that index A is initialized at the same time while index B is also booting up and both are loading SimpleText codec and ElasticsearchFooBar codec in parallel.\n\nWe can then look into Lucene's code to cleanup the loading of codecs. The big problem is the dependency graph and static initializers. The main problem is the Codec.getDefault() value that is initialized with Codec.forName(). I think the problem would be much easier if we would just initialize the default codec with a simple new LuceneXYCodec() instead of SPI on the <clinit> phase. Currently we have code like \"static Codec defaultCodec = forName(\"Lucene50\");\"\n\nSo two steps:\n- prevent deadlocks in ES caused by Lucene for now by explicitely initializing Codec class in Bootstrap.java\n- change Lucene's Codec/PostingsFormat to initialize the default codecs/formats without forName(), because this can lead to deadlock. Just use a simple \"new\" call to constructor.\n\nSimilar propblems are regularirly happening in ICU4J: You remember the case where ICU4J caused NullPointerExceptions because it was trying to print a message using the default codec before the default codec was actually loaded and initialized... The problems in Lucene are the same, just happening in multithreaded code.\n",
"created_at": "2015-05-19T14:10:55Z"
},
{
"body": "> I think the problem would be much easier if we would just initialize the default codec with a simple new LuceneXYCodec() instead of SPI on the phase. Currently we have code like \"static Codec defaultCodec = forName(\"Lucene50\");\"\n\nIs that really all thats needed to fix it in Lucene? If so, +1.\n\nBut for master branch, we shouldn't need a hack in bootstrap, lets just fix it in lucene and upgrade to a newer lucene snapshot jar.\n\nFor any backport fixes (elasticsearch 1.X) your solution is practical. But i would not be upset to see a 4.10.5 either.\n",
"created_at": "2015-05-19T14:16:36Z"
},
{
"body": "In my opinion this should work fine. We should just not use SPI to initialize the defaults.\n\nThe problem with forName() is: You are loading a lot of classes by that, which initialize themselves and may reference to the class doing the forName(). If we initialize the default codec statically, we dont need to wait for forName() to complete, the default codec is there ASAP. If some index file needs another codec, it can load it with forName() but there is no risk in deadlocking.\n",
"created_at": "2015-05-19T14:20:38Z"
},
{
"body": "I see. I think this is a lot simpler in 5.x, because forName is not \"abused\" to load \"impersonator\" codecs in tests. You remember, in that case we had a Lucene3x that was writeable and we reordered jar files in tests and all that... all gone now.\n\nSo I think its too tricky to fix as a bugfix in lucene 4.10.x, and we should apply your logic to bootstrap for ES 1.x. But lets fix lucene 5.x as proposed, then simply update our snapshot for ES master.\n",
"created_at": "2015-05-19T14:32:57Z"
},
{
"body": "I would wish we could write a test to check this out... But as usual, this test would need to run isolated (new and fresh JVM trying to open several indexes in parallel without loading Codec class before).\n\nThank you for remembering me why we did this Codec.forName() for the static initialization! That makes sense. I think we should kill this in 5.x, its a one-line change (and maybe a change in smoketester that validates that the defaultCodec line was updated after release). PostingsFormat is not affected, all of that is caused by Codec's <clinit>.\n\nI analyzed the stack traces provided. What happens in the example is exactly as described: One of the threads is initializing Codec class (<clinit>), as side effect of opening an index. 3 other threads are opening other indexes at same time. Because the Codec class is not yet initialized in clinit, those threads are waiting before SegmentInfos's call to Codec.forName(), which is blocked because the SegmentInfos's call cannot access the Codec class yet. Because of concurrent class loading the newInstance() call in NamedSPILoader to the codec class actually loaded is blocked (too). This happens because JVM has internal locks on that (not 100% sure why and where this blocks, but seems to be the reason).\n",
"created_at": "2015-05-19T14:46:14Z"
},
{
"body": "Related to that, there was some previous discussion about implementing a static detector here: https://issues.apache.org/jira/browse/LUCENE-5573\n\nI am just hoping some policeman finds himself bored one day and implements the ASM logic so we could try to detect these, even if it wouldn't find this particular one.\n",
"created_at": "2015-05-19T14:52:06Z"
},
{
"body": "I know this issue :-) I wonder that there is no Eclipse plugin that can detect this!\n",
"created_at": "2015-05-19T15:40:04Z"
},
{
"body": "I did a quick survey, nothing exhaustive but I couldn't find anything. I haven't tried to play around with writing a detector. But IMO ideally it would be a policeman-tool like forbidden API, and we just scan for it in builds and fail the build.\n",
"created_at": "2015-05-19T16:10:40Z"
},
{
"body": "hey @rmuir what are the plans for tackling this? It is fixed in Lucene 5.2.1 (https://issues.apache.org/jira/browse/LUCENE-6482) -- if only ES 2.0 can use 5.x, there probably either needs to be that ES Bootstrap workaround or a backport to Lucene 4.x\n",
"created_at": "2015-06-22T17:59:27Z"
},
{
"body": "I would go with the Bootstrap workaround for ES 1.5 and 1.6. This is very easy to implement and there is no need to release several 4.x versions of Lucene. Should I provide a PR?\n",
"created_at": "2015-06-22T18:05:50Z"
},
{
"body": "@uschindler yes please! :)\n",
"created_at": "2015-06-23T14:37:43Z"
},
{
"body": "@uschindler can you open a PR for this against 1.x branch please\n",
"created_at": "2015-06-23T14:37:50Z"
},
{
"body": "OK, will provide one later! I am currently in the U.S. and busy :-)\n",
"created_at": "2015-06-23T17:00:24Z"
},
{
"body": "Ah which branch?\n",
"created_at": "2015-06-23T17:00:33Z"
},
{
"body": "Found it out: 1.x\n",
"created_at": "2015-06-23T17:57:50Z"
},
{
"body": "I created a PR: #11837\n\nThis one does a \"fake\" Codecs.availableCodecs() on InternalNode startup. I did not do it in the Boostrap, so also people embedding ES can make use of it. In any case, the method call is cheap and does not slowdown, it just returns an immutable list.\n\nIs there a problem that InternalNode directly depends on lucene-core.jar?\n",
"created_at": "2015-06-23T19:08:18Z"
},
{
"body": "Thanks for merging!\n",
"created_at": "2015-06-23T19:24:36Z"
}
],
"number": 11170,
"title": "Cluster startup deadlock involving Lucene static initialization bits"
} | {
"body": "Not need for Lucene 5.2.1 anymore, but Lucene 4.9.4 needs it, see LUCENE-6482 for more info)\n\nCloses #11170\n",
"number": 11837,
"review_comments": [],
"title": "Workaround deadlock on Codec initialisation"
} | {
"commits": [
{
"message": "Fix #11170: workaround deadlock on Codec initialization (not needed for 5.2.1 anymore, but Lucene 4.9.4 needs it, see LUCENE-6482 for more info)"
}
],
"files": [
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.node.internal;\n \n+import org.apache.lucene.codecs.Codec;\n+\n import org.elasticsearch.Build;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n@@ -150,6 +152,9 @@ public InternalNode(Settings preparedSettings, boolean loadConfigSettings) throw\n env.homeFile(), env.configFile(), Arrays.toString(env.dataFiles()), env.logsFile(),\n env.workFile(), env.pluginsFile());\n }\n+ \n+ // workaround for LUCENE-6482\n+ Codec.availableCodecs();\n \n this.pluginsService = new PluginsService(tuple.v1(), tuple.v2());\n this.settings = pluginsService.updatedSettings();",
"filename": "src/main/java/org/elasticsearch/node/internal/InternalNode.java",
"status": "modified"
}
]
} |
{
"body": "With ElasticSearch 1.5.2\n\nI have a mapping with a nested object and try to index a percolator query like this:\n\n```\n{\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"match_all\": {}\n },\n \"filter\": {\n \"nested\": {\n \"path\": \"nestedObject\",\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"match_all\": {}\n }\n }\n },\n \"inner_hits\": {\n \"size\": 1000\n }\n }\n }\n }\n }\n}\n```\n\nThat produces this exception:\n\n```\norg.elasticsearch.index.percolator.PercolatorException: [test] failed to parse query [498370ec09aecc8ac355befa1f41]\n at org.elasticsearch.index.percolator.PercolatorQueriesRegistry.parsePercolatorDocument(PercolatorQueriesRegistry.java:196)\n at org.elasticsearch.index.percolator.PercolatorQueriesRegistry$RealTimePercolatorOperationListener.preIndex(PercolatorQueriesRegistry.java:324)\n at org.elasticsearch.index.indexing.ShardIndexingService.preIndex(ShardIndexingService.java:139)\n at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:493)\n at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:196)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:515)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:422)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.query.QueryParseContext.addInnerHits(QueryParseContext.java:268)\n at org.elasticsearch.index.query.NestedQueryParser$ToBlockJoinQueryBuilder.build(NestedQueryParser.java:153)\n at org.elasticsearch.index.query.NestedFilterParser.parse(NestedFilterParser.java:97)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:368)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:349)\n at org.elasticsearch.index.query.BoolFilterParser.parse(BoolFilterParser.java:92)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:368)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:349)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:302)\n at org.elasticsearch.index.query.IndexQueryParserService.parseInnerQuery(IndexQueryParserService.java:321)\n at org.elasticsearch.index.percolator.PercolatorQueriesRegistry.parseQuery(PercolatorQueriesRegistry.java:222)\n at org.elasticsearch.index.percolator.PercolatorQueriesRegistry.parsePercolatorDocument(PercolatorQueriesRegistry.java:193)\n ... 9 more\n```\n\nI can index the query if I remove the useless \"inner_hits\" field. Maybe Elasticsearch should return a better error than `NullPointerException`\n",
"comments": [
{
"body": "@Volune agreed, ES should fail with a better error with something like: `inner hits isn't supported in the percolate api`\n",
"created_at": "2015-06-16T06:48:29Z"
}
],
"number": 11672,
"title": "NullPointerException indexing percolator query with \"inner_hits\""
} | {
"body": "Previously using `inner_hits` on the `nested` query in the percolate api result in a NPE. With this change a more descriptive error is returned.\n\nPR for #11672\n",
"number": 11793,
"review_comments": [],
"title": "Fail nicely if `nested` query with `inner_hits` is used in a percolator query"
} | {
"commits": [
{
"message": "percolator: Fail nicely if `nested` query with `inner_hits` is used in a percolator query.\n\nCloses #11672"
}
],
"files": [
{
"diff": "@@ -204,6 +204,10 @@ public boolean isFilter() {\n \n public void addInnerHits(String name, InnerHitsContext.BaseInnerHits context) {\n SearchContext sc = SearchContext.current();\n+ if (sc == null) {\n+ throw new QueryParsingException(this, \"inner_hits unsupported\");\n+ }\n+\n InnerHitsContext innerHitsContext;\n if (sc.innerHits() == null) {\n innerHitsContext = new InnerHitsContext(new HashMap<String, InnerHitsContext.BaseInnerHits>());",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java",
"status": "modified"
},
{
"diff": "@@ -45,6 +45,7 @@\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.query.QueryParsingException;\n import org.elasticsearch.index.query.functionscore.factor.FactorBuilder;\n+import org.elasticsearch.index.query.support.QueryInnerHitBuilder;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.highlight.HighlightBuilder;\n@@ -71,13 +72,7 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.smileBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.yamlBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.*;\n import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.scriptFunction;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAllSuccessful;\n@@ -2028,7 +2023,34 @@ public void testMapUnmappedFieldAsString() throws IOException{\n .execute().actionGet();\n assertMatchCount(response1, 1l);\n assertThat(response1.getMatches(), arrayWithSize(1));\n+ }\n \n+ @Test\n+ public void testFailNicelyWithInnerHits() throws Exception {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"mapping\")\n+ .startObject(\"properties\")\n+ .startObject(\"nested\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"name\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(\"index\").addMapping(\"mapping\", mapping));\n+ try {\n+ client().prepareIndex(\"index\", PercolatorService.TYPE_NAME, \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"query\", nestedQuery(\"nested\", matchQuery(\"nested.name\", \"value\")).innerHit(new QueryInnerHitBuilder())).endObject())\n+ .execute().actionGet();\n+ fail(\"Expected a parse error, because inner_hits isn't supported in the percolate api\");\n+ } catch (Exception e) {\n+ assertThat(e.getCause(), instanceOf(QueryParsingException.class));\n+ assertThat(e.getCause().getMessage(), containsString(\"inner_hits unsupported\"));\n+ }\n }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorTests.java",
"status": "modified"
},
{
"diff": "@@ -493,6 +493,8 @@ the time the percolate API needs to run can be decreased.\n Because the percolator API is processing one document at a time, it doesn't support queries and filters that run\n against child documents such as `has_child` and `has_parent`.\n \n+The `inner_hits` feature on the `nested` query isn't supported in the percolate api.\n+\n The `wildcard` and `regexp` query natively use a lot of memory and because the percolator keeps the queries into memory\n this can easily take up the available memory in the heap space. If possible try to use a `prefix` query or ngramming to\n achieve the same result (with way less memory being used).",
"filename": "docs/reference/search/percolate.asciidoc",
"status": "modified"
}
]
} |
{
"body": "Other captialized numeric types, Integer, Double etc. have a null check when passed in to the XContentBuilder.field method. The XContentBuilder.field(String, BigDecimal) method however throws a NullPointerException.\n\nFor consistency it would be better to writeNull() if the BigDecimal value is null.\n",
"comments": [],
"number": 11699,
"title": "Null pointer exception when adding BigDecimal field with value null"
} | {
"body": "Add a null-check for XContentBuilder#field for BigDecimals\n\nCloses #11699\n",
"number": 11790,
"review_comments": [],
"title": "Add a null-check for XContentBuilder#field for BigDecimals"
} | {
"commits": [
{
"message": "Fix #11699\n\nAdd a null-check for XContentBuilder#field for BigDecimals"
}
],
"files": [
{
"diff": "@@ -499,28 +499,36 @@ public XContentBuilder field(XContentBuilderString name, BigDecimal value) throw\n \n public XContentBuilder field(String name, BigDecimal value, int scale, RoundingMode rounding, boolean toDouble) throws IOException {\n field(name);\n- if (toDouble) {\n- try {\n- generator.writeNumber(value.setScale(scale, rounding).doubleValue());\n- } catch (ArithmeticException e) {\n+ if (value == null) {\n+ generator.writeNull();\n+ } else {\n+ if (toDouble) {\n+ try {\n+ generator.writeNumber(value.setScale(scale, rounding).doubleValue());\n+ } catch (ArithmeticException e) {\n+ generator.writeString(value.toEngineeringString());\n+ }\n+ } else {\n generator.writeString(value.toEngineeringString());\n }\n- } else {\n- generator.writeString(value.toEngineeringString());\n }\n return this;\n }\n \n public XContentBuilder field(XContentBuilderString name, BigDecimal value, int scale, RoundingMode rounding, boolean toDouble) throws IOException {\n field(name);\n- if (toDouble) {\n- try {\n- generator.writeNumber(value.setScale(scale, rounding).doubleValue());\n- } catch (ArithmeticException e) {\n+ if (value == null) {\n+ generator.writeNull();\n+ } else {\n+ if (toDouble) {\n+ try {\n+ generator.writeNumber(value.setScale(scale, rounding).doubleValue());\n+ } catch (ArithmeticException e) {\n+ generator.writeString(value.toEngineeringString());\n+ }\n+ } else {\n generator.writeString(value.toEngineeringString());\n }\n- } else {\n- generator.writeString(value.toEngineeringString());\n }\n return this;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java",
"status": "modified"
}
]
} |
{
"body": "CommonTermsQueryParser does not check for `disable_coords`, only for `disable_coord`. Yet the builder only outputs `disable_coords`, leading to disabling the coordination factor to be ignored in the java API.\n",
"comments": [],
"number": 11730,
"title": "CommonTermsQuery ignores disabling the coordination factor with Java API"
} | {
"body": "CommonTermsQueryParser does not check for disable_coords, only for\ndisable_coord. Yet the builder only outputs disable_coords, leading to\ndisabling the coordination factor to be ignored in the Java API.\n\nCloses #11730\n",
"number": 11780,
"review_comments": [],
"title": "CommonTermsQuery fix for ignored coordination factor"
} | {
"commits": [
{
"message": "CommonTermsQuery fix for ignored coordination factor\n\nCommonTermsQueryParser does not check for disable_coords, only for\ndisable_coord. Yet the builder only outputs disable_coords, leading to\ndisabling the coordination factor to be ignored in the Java API.\n\nCloses #11730"
}
],
"files": [
{
"diff": "@@ -64,7 +64,7 @@ public static enum Operator {\n \n private String highFreqMinimumShouldMatch = null;\n \n- private Boolean disableCoords = null;\n+ private Boolean disableCoord = null;\n \n private Float cutoffFrequency = null;\n \n@@ -150,6 +150,11 @@ public CommonTermsQueryBuilder lowFreqMinimumShouldMatch(String lowFreqMinimumSh\n this.lowFreqMinimumShouldMatch = lowFreqMinimumShouldMatch;\n return this;\n }\n+ \n+ public CommonTermsQueryBuilder disableCoord(boolean disableCoord) {\n+ this.disableCoord = disableCoord;\n+ return this;\n+ }\n \n /**\n * Sets the query name for the filter that can be used when searching for matched_filters per hit.\n@@ -165,8 +170,8 @@ public void doXContent(XContentBuilder builder, Params params) throws IOExceptio\n builder.startObject(name);\n \n builder.field(\"query\", text);\n- if (disableCoords != null) {\n- builder.field(\"disable_coords\", disableCoords);\n+ if (disableCoord != null) {\n+ builder.field(\"disable_coord\", disableCoord);\n }\n if (highFreqOperator != null) {\n builder.field(\"high_freq_operator\", highFreqOperator.toString());",
"filename": "core/src/main/java/org/elasticsearch/index/query/CommonTermsQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -47,7 +47,7 @@ public class CommonTermsQueryParser implements QueryParser {\n \n static final Occur DEFAULT_LOW_FREQ_OCCUR = Occur.SHOULD;\n \n- static final boolean DEFAULT_DISABLE_COORDS = true;\n+ static final boolean DEFAULT_DISABLE_COORD = true;\n \n \n @Inject\n@@ -72,7 +72,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n String queryAnalyzer = null;\n String lowFreqMinimumShouldMatch = null;\n String highFreqMinimumShouldMatch = null;\n- boolean disableCoords = DEFAULT_DISABLE_COORDS;\n+ boolean disableCoord = DEFAULT_DISABLE_COORD;\n Occur highFreqOccur = DEFAULT_HIGH_FREQ_OCCUR;\n Occur lowFreqOccur = DEFAULT_LOW_FREQ_OCCUR;\n float maxTermFrequency = DEFAULT_MAX_TERM_DOC_FREQ;\n@@ -113,7 +113,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n queryAnalyzer = analyzer;\n } else if (\"disable_coord\".equals(currentFieldName) || \"disableCoord\".equals(currentFieldName)) {\n- disableCoords = parser.booleanValue();\n+ disableCoord = parser.booleanValue();\n } else if (\"boost\".equals(currentFieldName)) {\n boost = parser.floatValue();\n } else if (\"high_freq_operator\".equals(currentFieldName) || \"highFreqOperator\".equals(currentFieldName)) {\n@@ -188,7 +188,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n }\n \n- ExtendedCommonTermsQuery commonsQuery = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, disableCoords, fieldType);\n+ ExtendedCommonTermsQuery commonsQuery = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, disableCoord, fieldType);\n commonsQuery.setBoost(boost);\n Query query = parseQueryString(commonsQuery, value.toString(), field, parseContext, analyzer, lowFreqMinimumShouldMatch, highFreqMinimumShouldMatch);\n if (queryName != null) {",
"filename": "core/src/main/java/org/elasticsearch/index/query/CommonTermsQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -2170,6 +2170,19 @@ public void testCommonTermsQuery3() throws IOException {\n assertThat(ectQuery.getLowFreqMinimumNumberShouldMatchSpec(), equalTo(\"2\"));\n }\n \n+ @Test // see #11730\n+ public void testCommonTermsQuery4() throws IOException {\n+ IndexQueryParserService queryParser = queryParser();\n+ Query parsedQuery = queryParser.parse(commonTermsQuery(\"field\", \"text\").disableCoord(false)).query();\n+ assertThat(parsedQuery, instanceOf(ExtendedCommonTermsQuery.class));\n+ ExtendedCommonTermsQuery ectQuery = (ExtendedCommonTermsQuery) parsedQuery;\n+ assertFalse(ectQuery.isCoordDisabled());\n+ parsedQuery = queryParser.parse(commonTermsQuery(\"field\", \"text\").disableCoord(true)).query();\n+ assertThat(parsedQuery, instanceOf(ExtendedCommonTermsQuery.class));\n+ ectQuery = (ExtendedCommonTermsQuery) parsedQuery;\n+ assertTrue(ectQuery.isCoordDisabled());\n+ }\n+\n @Test(expected = QueryParsingException.class)\n public void assureMalformedThrowsException() throws IOException {\n IndexQueryParserService queryParser;",
"filename": "core/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java",
"status": "modified"
}
]
} |
{
"body": "The settings parser for `moving_avg` models is too strict, and will throw an exception if it encounters a non-double (including coercible numerics). Should be changed to accept any numeric.\n\nE.g.\n\n```\n\"movavg\": {\n \"moving_avg\": {\n \"buckets_path\": \"avg\",\n \"window\": 12,\n \"model\": \"holt_winters\",\n \"settings\": {\n \"alpha\": 0.4992721,\n \"beta\": 0.1366293,\n \"gamma\": 1,\n \"period\": 4\n }\n }\n}\n```\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"search_parse_exception\",\n \"reason\": \"Parameter [gamma] must be a double, type `Integer` provided instead\"\n }\n ],\n...\n}\n```\n",
"comments": [],
"number": 11487,
"title": "Moving_avg model parser is too strict (should coerce numerics)"
} | {
"body": "Fixes model validation/parsing so that any numeric is accepted, not just explicit doubles.\n\nAlso changes the models to throw ParseExceptions instead of SearchParseExceptions, so that\nthe validation can be unit-tested.\n\nFixes #11487\n",
"number": 11778,
"review_comments": [],
"title": "Aggregations: moving_avg model parser should accept any numeric"
} | {
"commits": [
{
"message": "Aggregations: moving_avg model parser should accept any numeric, not just doubles\n\nAlso changes the models to throw ParseExceptions instead of SearchParseExceptions, so that\nthe validation can be unit-tested.\n\nFixes #11487"
}
],
"files": [
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.text.ParseException;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Map;\n@@ -144,7 +145,14 @@ public PipelineAggregatorFactory parse(String pipelineAggregatorName, XContentPa\n throw new SearchParseException(context, \"Unknown model [\" + model + \"] specified. Valid options are:\"\n + movAvgModelParserMapper.getAllNames().toString(), parser.getTokenLocation());\n }\n- MovAvgModel movAvgModel = modelParser.parse(settings, pipelineAggregatorName, context, window);\n+\n+ MovAvgModel movAvgModel;\n+ try {\n+ movAvgModel = modelParser.parse(settings, pipelineAggregatorName, window);\n+ } catch (ParseException exception) {\n+ throw new SearchParseException(context, \"Could not parse settings for model [\" + model + \"].\", null, exception);\n+ }\n+\n \n return new MovAvgPipelineAggregator.Factory(pipelineAggregatorName, bucketsPaths, formatter, gapPolicy, window, predict,\n movAvgModel);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgParser.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.text.ParseException;\n import java.util.Collection;\n import java.util.Map;\n \n@@ -92,9 +93,9 @@ public String getName() {\n }\n \n @Override\n- public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, SearchContext context, int windowSize) {\n+ public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, int windowSize) throws ParseException {\n \n- double alpha = parseDoubleParam(context, settings, \"alpha\", 0.5);\n+ double alpha = parseDoubleParam(settings, \"alpha\", 0.5);\n \n return new EwmaModel(alpha);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/EwmaModel.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.text.ParseException;\n import java.util.*;\n \n /**\n@@ -151,10 +152,10 @@ public String getName() {\n }\n \n @Override\n- public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, SearchContext context, int windowSize) {\n+ public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, int windowSize) throws ParseException {\n \n- double alpha = parseDoubleParam(context, settings, \"alpha\", 0.5);\n- double beta = parseDoubleParam(context, settings, \"beta\", 0.5);\n+ double alpha = parseDoubleParam(settings, \"alpha\", 0.5);\n+ double beta = parseDoubleParam(settings, \"beta\", 0.5);\n return new HoltLinearModel(alpha, beta);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltLinearModel.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.text.ParseException;\n import java.util.*;\n \n /**\n@@ -316,17 +317,17 @@ public String getName() {\n }\n \n @Override\n- public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, SearchContext context, int windowSize) {\n+ public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, int windowSize) throws ParseException {\n \n- double alpha = parseDoubleParam(context, settings, \"alpha\", 0.5);\n- double beta = parseDoubleParam(context, settings, \"beta\", 0.5);\n- double gamma = parseDoubleParam(context, settings, \"gamma\", 0.5);\n- int period = parseIntegerParam(context, settings, \"period\", 1);\n+ double alpha = parseDoubleParam(settings, \"alpha\", 0.5);\n+ double beta = parseDoubleParam(settings, \"beta\", 0.5);\n+ double gamma = parseDoubleParam(settings, \"gamma\", 0.5);\n+ int period = parseIntegerParam(settings, \"period\", 1);\n \n if (windowSize < 2 * period) {\n- throw new SearchParseException(context, \"Field [window] must be at least twice as large as the period when \" +\n+ throw new ParseException(\"Field [window] must be at least twice as large as the period when \" +\n \"using Holt-Winters. Value provided was [\" + windowSize + \"], which is less than (2*period) == \"\n- + (2 * period), null);\n+ + (2 * period), 0);\n }\n \n SeasonalityType seasonalityType = SeasonalityType.ADDITIVE;\n@@ -337,13 +338,13 @@ public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipeline\n if (value instanceof String) {\n seasonalityType = SeasonalityType.parse((String)value);\n } else {\n- throw new SearchParseException(context, \"Parameter [type] must be a String, type `\"\n- + value.getClass().getSimpleName() + \"` provided instead\", null);\n+ throw new ParseException(\"Parameter [type] must be a String, type `\"\n+ + value.getClass().getSimpleName() + \"` provided instead\", 0);\n }\n }\n }\n \n- boolean pad = parseBoolParam(context, settings, \"pad\", seasonalityType.equals(SeasonalityType.MULTIPLICATIVE));\n+ boolean pad = parseBoolParam(settings, \"pad\", seasonalityType.equals(SeasonalityType.MULTIPLICATIVE));\n \n return new HoltWintersModel(alpha, beta, gamma, period, seasonalityType, pad);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/HoltWintersModel.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.text.ParseException;\n import java.util.Collection;\n import java.util.Map;\n \n@@ -79,7 +80,7 @@ public String getName() {\n }\n \n @Override\n- public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, SearchContext context, int windowSize) {\n+ public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, int windowSize) throws ParseException {\n return new LinearModel();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/LinearModel.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.text.ParseException;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Map;\n@@ -125,79 +126,75 @@ public abstract static class AbstractModelParser {\n *\n * @param settings Map of settings, extracted from the request\n * @param pipelineName Name of the parent pipeline agg\n- * @param context The parser context that we are in\n * @param windowSize Size of the window for this moving avg\n * @return A fully built moving average model\n */\n- public abstract MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, SearchContext context, int windowSize);\n+ public abstract MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, int windowSize) throws ParseException;\n \n \n /**\n * Extracts a 0-1 inclusive double from the settings map, otherwise throws an exception\n *\n- * @param context Search query context\n * @param settings Map of settings provided to this model\n * @param name Name of parameter we are attempting to extract\n * @param defaultValue Default value to be used if value does not exist in map\n *\n- * @throws SearchParseException\n+ * @throws ParseException\n *\n * @return Double value extracted from settings map\n */\n- protected double parseDoubleParam(SearchContext context, @Nullable Map<String, Object> settings, String name, double defaultValue) {\n+ protected double parseDoubleParam(@Nullable Map<String, Object> settings, String name, double defaultValue) throws ParseException {\n if (settings == null) {\n return defaultValue;\n }\n \n Object value = settings.get(name);\n if (value == null) {\n return defaultValue;\n- } else if (value instanceof Double) {\n- double v = (Double)value;\n+ } else if (value instanceof Number) {\n+ double v = ((Number) value).doubleValue();\n if (v >= 0 && v <= 1) {\n return v;\n }\n \n- throw new SearchParseException(context, \"Parameter [\" + name + \"] must be between 0-1 inclusive. Provided\"\n- + \"value was [\" + v + \"]\", null);\n+ throw new ParseException(\"Parameter [\" + name + \"] must be between 0-1 inclusive. 
Provided\"\n+ + \"value was [\" + v + \"]\", 0);\n }\n \n- throw new SearchParseException(context, \"Parameter [\" + name + \"] must be a double, type `\"\n- + value.getClass().getSimpleName() + \"` provided instead\", null);\n+ throw new ParseException(\"Parameter [\" + name + \"] must be a double, type `\"\n+ + value.getClass().getSimpleName() + \"` provided instead\", 0);\n }\n \n /**\n * Extracts an integer from the settings map, otherwise throws an exception\n *\n- * @param context Search query context\n * @param settings Map of settings provided to this model\n * @param name Name of parameter we are attempting to extract\n * @param defaultValue Default value to be used if value does not exist in map\n *\n- * @throws SearchParseException\n+ * @throws ParseException\n *\n * @return Integer value extracted from settings map\n */\n- protected int parseIntegerParam(SearchContext context, @Nullable Map<String, Object> settings, String name, int defaultValue) {\n+ protected int parseIntegerParam(@Nullable Map<String, Object> settings, String name, int defaultValue) throws ParseException {\n if (settings == null) {\n return defaultValue;\n }\n \n Object value = settings.get(name);\n if (value == null) {\n return defaultValue;\n- } else if (value instanceof Integer) {\n- return (Integer)value;\n+ } else if (value instanceof Number) {\n+ return ((Number) value).intValue();\n }\n \n- throw new SearchParseException(context, \"Parameter [\" + name + \"] must be an integer, type `\"\n- + value.getClass().getSimpleName() + \"` provided instead\", null);\n+ throw new ParseException(\"Parameter [\" + name + \"] must be an integer, type `\"\n+ + value.getClass().getSimpleName() + \"` provided instead\", 0);\n }\n \n /**\n * Extracts a boolean from the settings map, otherwise throws an exception\n *\n- * @param context Search query context\n * @param settings Map of settings provided to this model\n * @param name Name of parameter we are attempting to extract\n * @param defaultValue Default value to be used if value does not exist in map\n@@ -206,7 +203,7 @@ protected int parseIntegerParam(SearchContext context, @Nullable Map<String, Obj\n *\n * @return Boolean value extracted from settings map\n */\n- protected boolean parseBoolParam(SearchContext context, @Nullable Map<String, Object> settings, String name, boolean defaultValue) {\n+ protected boolean parseBoolParam(@Nullable Map<String, Object> settings, String name, boolean defaultValue) throws ParseException {\n if (settings == null) {\n return defaultValue;\n }\n@@ -218,8 +215,8 @@ protected boolean parseBoolParam(SearchContext context, @Nullable Map<String, Ob\n return (Boolean)value;\n }\n \n- throw new SearchParseException(context, \"Parameter [\" + name + \"] must be a boolean, type `\"\n- + value.getClass().getSimpleName() + \"` provided instead\", null);\n+ throw new ParseException(\"Parameter [\" + name + \"] must be a boolean, type `\"\n+ + value.getClass().getSimpleName() + \"` provided instead\", 0);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModel.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.text.ParseException;\n import java.util.Collection;\n import java.util.Map;\n \n@@ -72,7 +73,7 @@ public String getName() {\n }\n \n @Override\n- public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, SearchContext context, int windowSize) {\n+ public MovAvgModel parse(@Nullable Map<String, Object> settings, String pipelineName, int windowSize) throws ParseException {\n return new SimpleModel();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/SimpleModel.java",
"status": "modified"
},
{
"diff": "@@ -21,14 +21,16 @@\n \n import com.google.common.collect.EvictingQueue;\n \n+import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.pipeline.movavg.models.*;\n import org.elasticsearch.test.ElasticsearchTestCase;\n \n import static org.hamcrest.Matchers.equalTo;\n \n import org.junit.Test;\n \n-import java.util.Arrays;\n+import java.text.ParseException;\n+import java.util.*;\n \n public class MovAvgUnitTests extends ElasticsearchTestCase {\n \n@@ -583,4 +585,50 @@ public void testHoltWintersAdditivePredictionModel() {\n }\n \n }\n+\n+ @Test\n+ public void testNumericValidation() {\n+\n+ List<MovAvgModel.AbstractModelParser> parsers = new ArrayList<>(5);\n+\n+ // Simple and Linear don't have any settings to test\n+ parsers.add(new EwmaModel.SingleExpModelParser());\n+ parsers.add(new HoltWintersModel.HoltWintersModelParser());\n+ parsers.add(new HoltLinearModel.DoubleExpModelParser());\n+\n+\n+ Object[] values = {(byte)1, 1, 1L, (short)1, (double)1};\n+ Map<String, Object> settings = new HashMap<>(2);\n+\n+ for (MovAvgModel.AbstractModelParser parser : parsers) {\n+ for (Object v : values) {\n+ settings.put(\"alpha\", v);\n+ settings.put(\"beta\", v);\n+ settings.put(\"gamma\", v);\n+\n+ try {\n+ parser.parse(settings, \"pipeline\", 10);\n+ } catch (ParseException e) {\n+ fail(parser.getName() + \" parser should not have thrown SearchParseException while parsing [\" +\n+ v.getClass().getSimpleName() +\"]\");\n+ }\n+\n+ }\n+ }\n+\n+ for (MovAvgModel.AbstractModelParser parser : parsers) {\n+ settings.put(\"alpha\", \"abc\");\n+ settings.put(\"beta\", \"abc\");\n+ settings.put(\"gamma\", \"abc\");\n+\n+ try {\n+ parser.parse(settings, \"pipeline\", 10);\n+ } catch (ParseException e) {\n+ //all good\n+ continue;\n+ }\n+\n+ fail(parser.getName() + \" parser should have thrown SearchParseException while parsing [String]\");\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/moving/avg/MovAvgUnitTests.java",
"status": "modified"
}
]
} |
{
"body": "Currently repository verification exceptions are returned to the user, but not logged on the node where the exception occurred. This behavior makes debugging of some repository registration issues difficult. See elastic/elasticsearch-cloud-aws#217 for example.\n",
"comments": [],
"number": 11760,
"title": "Repository verification exceptions should be logged"
} | {
"body": "Some repository verification exception are currently only returned to the users but not logged on the nodes where the exception occurred, which makes troubleshooting difficult.\n\n Closes #11760\n",
"number": 11763,
"review_comments": [],
"title": "Improve logging of repository verification exceptions."
} | {
"commits": [
{
"message": "Improve logging of repository verification exceptions.\n\nSome repository verification exceptions are currently only returned to the users but not logged on the nodes where the exceptions occurred, which makes troubleshooting difficult.\n\nCloses #11760"
}
],
"files": [
{
"diff": "@@ -217,7 +217,7 @@ public void onResponse(VerifyResponse verifyResponse) {\n try {\n repository.endVerification(verificationToken);\n } catch (Throwable t) {\n- logger.warn(\"[{}] failed to finish repository verification\", repositoryName, t);\n+ logger.warn(\"[{}] failed to finish repository verification\", t, repositoryName);\n listener.onFailure(t);\n return;\n }\n@@ -233,7 +233,7 @@ public void onFailure(Throwable e) {\n try {\n repository.endVerification(verificationToken);\n } catch (Throwable t1) {\n- logger.warn(\"[{}] failed to finish repository verification\", repositoryName, t);\n+ logger.warn(\"[{}] failed to finish repository verification\", t1, repositoryName);\n }\n listener.onFailure(t);\n }",
"filename": "core/src/main/java/org/elasticsearch/repositories/RepositoriesService.java",
"status": "modified"
},
{
"diff": "@@ -80,6 +80,7 @@ public void verify(String repository, String verificationToken, final ActionList\n try {\n doVerify(repository, verificationToken);\n } catch (Throwable t) {\n+ logger.warn(\"[{}] failed to verify repository\", t, repository);\n errors.add(new VerificationFailure(node.id(), ExceptionsHelper.detailedMessage(t)));\n }\n if (counter.decrementAndGet() == 0) {\n@@ -146,7 +147,12 @@ public void writeTo(StreamOutput out) throws IOException {\n class VerifyNodeRepositoryRequestHandler implements TransportRequestHandler<VerifyNodeRepositoryRequest> {\n @Override\n public void messageReceived(VerifyNodeRepositoryRequest request, TransportChannel channel) throws Exception {\n- doVerify(request.repository, request.verificationToken);\n+ try {\n+ doVerify(request.repository, request.verificationToken);\n+ } catch (Exception ex) {\n+ logger.warn(\"[{}] failed to verify repository\", ex, request.repository);\n+ throw ex;\n+ }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/repositories/VerifyNodeRepositoryAction.java",
"status": "modified"
}
]
} |
{
"body": "we moved to `minimum_should_match` but we should still accept the `percent_terms_to_match` parameter until 2.0\n",
"comments": [
{
"body": "fixed see #11574 \n",
"created_at": "2015-06-10T12:35:49Z"
}
],
"number": 11572,
"title": "_mlt API ignores deprecated `percent_terms_to_match` parameter"
} | {
"body": "We should also make sure that `minimum_should_match` is overridden by\n`percent_terms_to_match` when the later is set.\n\nRelates to #11574 #11572\n",
"number": 11736,
"review_comments": [],
"title": "Support for deprecated percent_terms_to_match REST parameter"
} | {
"commits": [
{
"message": "Support for deprectated percent_terms_to_match REST parameter\n\nWe should also make sure that minimumShouldMatch is overridden by\npercent_terms_to_match when set.\n\nRelates to #11574 #11572"
}
],
"files": [
{
"diff": "@@ -57,12 +57,12 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n //needs some work if it is to be used in a REST context like this too\n // See the MoreLikeThisQueryParser constants that hold the valid syntax\n mltRequest.fields(request.paramAsStringArray(\"mlt_fields\", null));\n+ mltRequest.minimumShouldMatch(request.param(\"minimum_should_match\", \"0\"));\n if (request.hasParam(\"percent_terms_to_match\") && request.hasParam(\"minimum_should_match\") == false) {\n // percent_terms_to_match is deprecated!!!\n // only set if it's really set AND the new parameter is not present (prefer non-deprecated\n mltRequest.percentTermsToMatch(request.paramAsFloat(\"percent_terms_to_match\", 0));\n }\n- mltRequest.minimumShouldMatch(request.param(\"minimum_should_match\", \"0\"));\n mltRequest.minTermFreq(request.paramAsInt(\"min_term_freq\", -1));\n mltRequest.maxQueryTerms(request.paramAsInt(\"max_query_terms\", -1));\n mltRequest.stopWords(request.paramAsStringArray(\"stop_words\", null));",
"filename": "src/main/java/org/elasticsearch/rest/action/mlt/RestMoreLikeThisAction.java",
"status": "modified"
}
]
} |
{
"body": "The `ElasticsearchRestTestCase` has changed in 2.0 and now throws an error when executing REST tests in a maven plugin project which depends on `elasticsearch-2.0.0-SNAPSHOT-tests.jar`:\n\n``` java\n$ cd plugins/delete-by-query\n$ mvn clean test -Dtests.filter=\"@rest\"\n\nSuite: org.elasticsearch.plugin.deletebyquery.test.rest.DeleteByQueryRestTests\nERROR 0.03s J3 | DeleteByQueryRestTests.initializationError <<<\n > Throwable #1: java.nio.file.NoSuchFileException: delete_by_query\n > at org.elasticsearch.test.rest.support.FileUtils.resolveFile(FileUtils.java:99)\n > at org.elasticsearch.test.rest.support.FileUtils.findYamlSuites(FileUtils.java:86)\n > at org.elasticsearch.test.rest.ElasticsearchRestTestCase.collectTestCandidates(ElasticsearchRestTestCase.java:189)\n > at org.elasticsearch.test.rest.ElasticsearchRestTestCase.createParameters(ElasticsearchRestTestCase.java:174)\n > at org.elasticsearch.plugin.deletebyquery.test.rest.DeleteByQueryRestTests.parameters(DeleteByQueryRestTests.java:45)\n > at java.lang.reflect.Constructor.newInstance(Constructor.java:408)\nCompleted [4/4] on J3 in 0.04s, 1 test, 1 error <<< FAILURES!\n```\n\nWhen executed with `mvn clean test` command, it seems that plugin specific REST tests (declared in POM file using `<tests.rest.suite/>` tag) are resolved using a `ZipFileSystem` instance corresponding to the `elasticsearch-2.0.0-SNAPSHOT-tests.jar`.\n\nWhen executed in an IDE, tests are resolved using the classpath and it works.\n\nFile resolution are a bit cryptic in `ElasticsearchRestTestCase` and `FileUtils` so I'll be happy if someone can point me in the right direction to make this works.\n",
"comments": [
{
"body": "I don't think its a bug, i think these tests are overengineered?\n\nThe zipfilesystem stuff is the best way I could make it work given what some of the other plugins wanted to do here.\n\nIn order to simplify this test the parameterization really has to go. that is what adds a significant majority of complexity to this stuff because things must happen in clinit. \n\nPlease, no more parameterized tests, ever!\n",
"created_at": "2015-06-17T13:04:53Z"
},
{
"body": "I'd leave the parameterized discussion out of this issue, we can certainly discuss that but I'd do it separately. \n\nThe problem here is that plugins need to be able to run their own REST tests, pointing to them via -Dtests.rest.suite and -Dtests.rest.spec. That is not possible anymore, given that if `ElasticsearchRestTestCase` comes from within a jar, REST tests and spec can only be taken from that specific jar. Only the usecase where a plugin needs to run REST tests coming from elasticsearch core is handled right now; a plugin cannot run its own tests, unless I am missing a way to do it. We need to add back this feature to the REST test infra, it doesn't need to be able to run tests from any file system location as it used to be though (way too much!).\n",
"created_at": "2015-06-17T13:46:06Z"
},
{
"body": "The parameterized discussion is definitely involved here. Besides being completely useless, parameterized tests just make things complicated, because nothing can happen except in clinit (so no subclassing). Just have a look around and look at what is subclassing this rest test stuff and why.\n\nThere is too much going on. Down with parameterized tests! I need to create some kind of github plugin like the CLA-checker that makes the whole PR go completely red if it contains any.\n",
"created_at": "2015-06-17T13:59:04Z"
}
],
"number": 11721,
"title": "Allow to execute REST tests in plugins again"
} | {
"body": "this is really just a workaround for plugins to run their own\nREST tests instead of the core ones. It opts out of the rest test\nloading from the core jar file and tries to load from the classpath instead.\nEventually we need to fix this infrastrucutre to move away from parameterized\ntests such that subclasses can override behavior.\n\nCloses #11721\n",
"number": 11727,
"review_comments": [],
"title": "Allow to opt-out of loading packaged REST tests"
} | {
"commits": [
{
"message": "Allow to opt-out of loading packaged REST tests\n\nthis is really just a workaround for plugins to run their own\nREST tests instead of the core ones. It opts out of the rest test\nloading from the core jar file and tries to load from the classpath instead.\nEventually we need to fix this infrastrucutre to move away from parameterized\ntests such that subclasses can override behavior.\n\nCloses #11721"
}
],
"files": [
{
"diff": "@@ -120,6 +120,8 @@ public abstract class ElasticsearchRestTestCase extends ElasticsearchIntegration\n */\n public static final String REST_TESTS_SPEC = \"tests.rest.spec\";\n \n+ public static final String REST_LOAD_PACKAGED_TESTS = \"tests.rest.load_packaged\";\n+\n private static final String DEFAULT_TESTS_PATH = \"/rest-api-spec/test\";\n private static final String DEFAULT_SPEC_PATH = \"/rest-api-spec/api\";\n \n@@ -239,8 +241,8 @@ static FileSystem getFileSystem() throws IOException {\n // REST suite handling is currently complicated, with lots of filtering and so on\n // For now, to work embedded in a jar, return a ZipFileSystem over the jar contents. \n URL codeLocation = FileUtils.class.getProtectionDomain().getCodeSource().getLocation();\n-\n- if (codeLocation.getFile().endsWith(\".jar\")) {\n+ boolean loadPackaged = RandomizedTest.systemPropertyAsBoolean(REST_LOAD_PACKAGED_TESTS, true);\n+ if (codeLocation.getFile().endsWith(\".jar\") && loadPackaged) {\n try {\n // hack around a bug in the zipfilesystem implementation before java 9,\n // its checkWritable was incorrect and it won't work without write permissions. ",
"filename": "core/src/test/java/org/elasticsearch/test/rest/ElasticsearchRestTestCase.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@ governing permissions and limitations under the License. -->\n <properties>\n <tests.ifNoTests>warn</tests.ifNoTests>\n <tests.rest.suite>delete_by_query</tests.rest.suite>\n+ <tests.rest.load_packaged>false</tests.rest.load_packaged>\n </properties>\n \n <build>",
"filename": "plugins/delete-by-query/pom.xml",
"status": "modified"
},
{
"diff": "@@ -36,8 +36,6 @@\n \n @Rest\n @ClusterScope(scope = SUITE, randomDynamicTemplates = false)\n-@Ignore\n-@LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/11721\")\n public class DeleteByQueryRestTests extends ElasticsearchRestTestCase {\n \n public DeleteByQueryRestTests(@Name(\"yaml\") RestTestCandidate testCandidate) {",
"filename": "plugins/delete-by-query/src/test/java/org/elasticsearch/plugin/deletebyquery/test/rest/DeleteByQueryRestTests.java",
"status": "modified"
},
{
"diff": "@@ -96,6 +96,7 @@\n <tests.network></tests.network>\n <tests.cluster></tests.cluster>\n <tests.filter></tests.filter>\n+ <tests.rest.load_packaged></tests.rest.load_packaged>\n <env.ES_TEST_LOCAL></env.ES_TEST_LOCAL>\n <tests.security.manager>true</tests.security.manager>\n <tests.compatibility></tests.compatibility>\n@@ -660,6 +661,7 @@\n <tests.filter>${tests.filter}</tests.filter>\n <tests.version>${elasticsearch.version}</tests.version>\n <tests.locale>${tests.locale}</tests.locale>\n+ <tests.rest.load_packaged>${tests.rest.load_packaged}</tests.rest.load_packaged>\n <tests.timezone>${tests.timezone}</tests.timezone>\n <project.basedir>${project.basedir}</project.basedir>\n <m2.repository>${settings.localRepository}</m2.repository>",
"filename": "pom.xml",
"status": "modified"
}
]
} |
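
For readers skimming the diffs above, a standalone sketch of the opt-out this PR introduces. It uses plain JDK property parsing in place of the `RandomizedTest.systemPropertyAsBoolean` helper shown in the diff, and the jar path in `main` is a made-up example; only the property name `tests.rest.load_packaged` and the jar-suffix check come from the change itself.

```java
// Hedged sketch: packaged REST suites/specs are only read from the core test
// jar when the code actually ships in a jar AND tests.rest.load_packaged has
// not been set to false. The default stays true, so core behaviour is
// unchanged; a plugin build passes -Dtests.rest.load_packaged=false (see the
// pom.xml diff) so its own classpath resources are used instead.
public class LoadPackagedRestTestsSketch {

    static boolean loadFromPackagedJar(String codeLocationFile) {
        boolean loadPackaged = Boolean.parseBoolean(System.getProperty("tests.rest.load_packaged", "true"));
        return codeLocationFile.endsWith(".jar") && loadPackaged;
    }

    public static void main(String[] args) {
        String fakeJar = "/tmp/elasticsearch-tests.jar"; // hypothetical code location

        System.out.println(loadFromPackagedJar(fakeJar)); // true: default -> packaged tests from the jar

        System.setProperty("tests.rest.load_packaged", "false");
        System.out.println(loadFromPackagedJar(fakeJar)); // false: plugin opts out -> classpath tests
    }
}
```
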
{
"body": "New 1.6.0 install with new indices, in Kibana discover searching for \"execute\":\n\n```\n[2015-06-11 13:18:59,718][DEBUG][action.search.type ] [Laura Dean] [29] Failed to execute fetch phase\norg.elasticsearch.search.fetch.FetchPhaseExecutionException: [logstash-2015.06.11][4]: query[filtered(_all:execute)->BooleanFilter(+cache(@timestamp:[1433938739612 TO 1434025139612]\n))],from[0],size[500],sort[<custom:\"@timestamp\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@31c250a5>!]: Fetch Failed [Failed to highlight field [m\nessage.raw]]\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:133)\n at org.elasticsearch.search.highlight.HighlightPhase.hitExecute(HighlightPhase.java:133)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:194)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:504)\n at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:452)\n at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:449)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.RuntimeException: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 173493\n at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:493)\n at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:396)\n at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:371)\n at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:351)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getLeafContext(WeightedSpanTermExtractor.java:360)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:216)\n at org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:474)\n at org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:217)\n at org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n at org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:197)\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:119)\n ... 9 more\nCaused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 173493\n at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)\n at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:467)\n ... 19 more\n```\n\nI thought this was fixed in #9881 - but for some reason ES is still trying to highlight message.raw which is not_analyzed. The comment in the Kibana issue https://github.com/elastic/kibana/issues/2782#issuecomment-105224019 says that long terms shouldn't get into the index and not_analyzed should be skipped - but neither seem to be happening here.\n\nReproduction in this [gist](https://gist.github.com/jimmyjones2/0be6664a471a30b2e624) then add index to Kibana and in discover search for \"execute\". 
My original setup involves Elasticsearch sending its own logs to logstash via log4j socket, then ingesting into another elasticsearch instance - can include details here if needed.\n",
"comments": [
{
"body": "Thanks for reporting @jimmyjones2 \n\n@brwe please can you investigate?\n",
"created_at": "2015-06-11T12:51:03Z"
},
{
"body": "Indeed, the problem was not fixed on 1.x branch. Two things happened: 1) the test I wrote for the huge terms actually did not trigger the exception on 1.x because on 1.x this only happens with certain queries like prefix query but not match query (which I use in the test) and 2) I fixed the issue by catching a MaxBytesLengthExceededException in PlainHighlighter but on 1.x this is actually wrapped in a RuntimeException. So this did not fix it on 1.x but the tests passed anyway. I made a pull request to change this now here #11683. Sorry. \n",
"created_at": "2015-06-15T18:27:29Z"
},
{
"body": "@brwe Thanks! Might this fix get into 1.6.1?\n",
"created_at": "2015-06-15T19:18:55Z"
},
{
"body": "+1\n",
"created_at": "2015-06-25T08:58:33Z"
},
{
"body": "I pushed to 1.6 and 1.x branch, the fix will be in 1.6.1. Thanks again for reporting!\n",
"created_at": "2015-06-25T10:36:55Z"
},
{
"body": "+1 simple searches not happening due to this when using with ELK :-( Kind of annoying and frustrating. Is there any workaround?\n",
"created_at": "2015-06-30T08:56:09Z"
},
{
"body": "Thanks, it works like a charm now :beers: \n",
"created_at": "2015-07-17T07:59:50Z"
},
{
"body": "This still doesn't work for me when using Kibana 4.1.1 and Elasticsearch 1.7.1. Is there something else that needs to be done to existing indexes to get this to work?\n",
"created_at": "2015-08-28T15:36:44Z"
},
{
"body": "@tmartensen you shouldn't have to do anything, as long as we're talking about the same exception. More details?\n",
"created_at": "2015-08-28T15:40:31Z"
},
{
"body": "I'm still getting the same exception after upgrading to 1.7.1. I'm trying to log SOAP messages that are larger than normal, but not outrageously large, by filtering on the `org.springframework.ws.client.messagetracing*` logger name.\n\n```\nCaused by: org.elasticsearch.search.fetch.FetchPhaseExecutionException: [logstash-2015.08.26][4]: query[filtered(logger_name:org.springframework.ws.client.messagetracing* logger_name:com.xxx.xxx.xxx)->BooleanFilter(+QueryWrapperFilter(ConstantScore(*:*)) +cache(@timestamp:[1440306000000 TO 1440777217294]))],from[0],size[500],sort[<custom:\"@timestamp\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@2f9f7c14>!]: Fetch Failed [Failed to highlight field [message.raw]]\n at org.elasticsearch.search.highlight.PlainHighlighter.highlight(PlainHighlighter.java:126)\n at org.elasticsearch.search.highlight.HighlightPhase.hitExecute(HighlightPhase.java:128)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:192)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:501)\n at org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:868)\n at org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:862)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.RuntimeException: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 47034\n```\n\nI've added the workaround to the Kibana code as described in https://github.com/elastic/kibana/issues/2782#issuecomment-81925059 as well, and it seems to have no effect.\n",
"created_at": "2015-08-28T15:58:56Z"
},
{
"body": "Fix worked for me, after remembering to upgrade the client node\n",
"created_at": "2015-08-28T16:27:02Z"
},
{
"body": "@jimmyjones2 what does that entail? I've upgraded the index through the upgrade API for the offending indexes.\n",
"created_at": "2015-08-28T16:31:36Z"
},
{
"body": "It means: make sure you upgrade all Elasticsearch nodes, including the client nodes (if you're using them)\n",
"created_at": "2015-08-28T16:34:40Z"
},
{
"body": "I see. I'm using ES unclustered at the moment, straight out of the box. How do you upgrade the nodes?\n\n**EDIT:** I saw that I had quite a few older nodes running out there and shut them all down. After a restart, everything seems to be fine. Thanks for your help, @jimmyjones2 and @clintongormley!\n",
"created_at": "2015-08-28T16:38:17Z"
}
],
"number": 11599,
"title": "Kibana highlighting issue not fixed?"
} | {
"body": "In Lucene 5x the exception thrown when highlighter encounters a huge term\nis a BytesRefHash.MaxBytesLengthExceededException but in Lucene 4x it is\nwrapped in a RuntimeException. We have to catch that as well.\n\ncloses #11599\n",
"number": 11683,
"review_comments": [
{
"body": "can we use here `ExceptionHelper.unwrap(e, BytesRefHash.MaxBytesLengthExceededException)` and check if it doesn't return null, we can handle it? I think its safer to use this logic in master as well, its safer?\n",
"created_at": "2015-06-25T10:02:28Z"
},
{
"body": "yes. will push to master also\n",
"created_at": "2015-06-25T10:20:11Z"
}
],
"title": "Fix exception for plain highlighter and huge terms for Lucene 4.x"
} | {
"commits": [
{
"message": "highlighter: Fix exception for plain highlighter and huge terme also for lucene 4\n\nIn Lucene 5x the exception thrown when highlighter encounters a huge term\nis a BytesRefHash.MaxBytesLengthExceededException but in Lucene 4x it is\nwrapped in a RuntimeException. We have to catch that as well.\n\ncloses #11599"
}
],
"files": [
{
"diff": "@@ -28,6 +28,7 @@\n import org.apache.lucene.util.BytesRefHash;\n import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.text.StringText;\n import org.elasticsearch.common.text.Text;\n import org.elasticsearch.index.mapper.FieldMapper;\n@@ -124,7 +125,7 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n }\n }\n } catch (Exception e) {\n- if (e instanceof BytesRefHash.MaxBytesLengthExceededException) {\n+ if (ExceptionsHelper.unwrap(e, BytesRefHash.MaxBytesLengthExceededException.class) != null) {\n // this can happen if for example a field is not_analyzed and ignore_above option is set.\n // the field will be ignored when indexing but the huge term is still in the source and\n // the plain highlighter will parse the source and try to analyze it.",
"filename": "src/main/java/org/elasticsearch/search/highlight/PlainHighlighter.java",
"status": "modified"
},
{
"diff": "@@ -132,6 +132,10 @@ public void testPlainHighlighterWithLongUnanalyzedStringTerm() throws IOExceptio\n search = client().prepareSearch().setQuery(constantScoreQuery(matchQuery(\"text\", \"text\"))).addHighlightedField(new Field(\"long_text\").highlighterType(highlighter)).get();\n assertNoFailures(search);\n assertThat(search.getHits().getAt(0).getHighlightFields().size(), equalTo(0));\n+\n+ search = client().prepareSearch().setQuery(prefixQuery(\"text\", \"te\")).addHighlightedField(new Field(\"long_text\").highlighterType(highlighter)).get();\n+ assertNoFailures(search);\n+ assertThat(search.getHits().getAt(0).getHighlightFields().size(), equalTo(0));\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/search/highlight/HighlighterSearchTests.java",
"status": "modified"
}
]
} |
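
As a side note on the review comment above, a self-contained illustration of the cause-chain walk that `ExceptionsHelper.unwrap` performs. This is not Elasticsearch's actual helper, just a minimal re-implementation using JDK exceptions, to show why a plain `instanceof` check missed the Lucene 4.x case where the exception arrives wrapped in a `RuntimeException`.

```java
// Minimal sketch of cause-chain unwrapping: a direct instanceof check fails
// when the target exception is wrapped (as MaxBytesLengthExceededException is
// on Lucene 4.x), while walking getCause() finds it at any depth.
public class UnwrapSketch {

    static <T extends Throwable> T unwrap(Throwable t, Class<T> clazz) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (clazz.isInstance(cur)) {
                return clazz.cast(cur);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Stand-in for the Lucene 4.x behaviour: the interesting exception is
        // wrapped before it reaches the highlighter's catch block.
        Exception wrapped = new RuntimeException(
                new IllegalStateException("bytes can be at most 32766 in length"));

        System.out.println(wrapped instanceof IllegalStateException);             // false -> old check misses it
        System.out.println(unwrap(wrapped, IllegalStateException.class) != null); // true  -> unwrap catches it
    }
}
```
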
{
"body": "During start elasticsearch (version 1.6) service after restart ubuntu machine I have following error:\n\ntouch: cannot touch ‘/var/run/elasticsearch/elasticsearch.pid’: No such file or directory\n\nThis occurs because elasticsearch.pid move to /elasticsearch directory (PID_DIR=\"/var/run/elasticsearch\", PID_FILE=\"$PID_DIR/$NAME.pid\") and ubuntu remove this after restart. Command \"touch\" does not create parent directories.\n\nIn version 1.5 the .pid file was in /var/run/ directiry (PID_FILE=\"/var/run/$NAME.pid\") and so was no error.\n",
"comments": [
{
"body": "@konstantinkostin28 thanks for your reporting and you're right, the PID_DIR should be created before `touch` the file.\n",
"created_at": "2015-06-11T07:35:58Z"
},
{
"body": "I get the same behavior on an Ubuntu 14.04 VM. Was fine on v1.5.2, broken on 1.6.0.\n",
"created_at": "2015-06-11T10:48:45Z"
},
{
"body": "@tlrx , \n1. Should I change the PID_DIR value to `/var/run/' or \n2. Change script to create \"$PID_DIR\" before touch as following?\n\n```\n154 mkdir -p \"$LOG_DIR\" \"$DATA_DIR\" \"$WORK_DIR\" \"$PID_DIR\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$LOG_DIR\" \"$DATA_DIR\" \"$WORK_DIR\" \"$PID_DIR\"\n155 touch \"$PID_FILE\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_FILE\"\n\n```\n\nI did the approach 2, and it works. I am just curious about the proper way. Thanks! \n",
"created_at": "2015-06-11T12:30:04Z"
},
{
"body": "@yett My personal preference goes to `/var/run/elasticsearch/elasticsearch.pid` (approach 2). The thing is to have a consistent, secured way to read/write this file.\n",
"created_at": "2015-06-11T12:36:10Z"
},
{
"body": "Thanks for the tips. Cheers! \n",
"created_at": "2015-06-11T13:51:39Z"
},
{
"body": "I just upgraded to 1.6 today and ran into this issue. Annoyingly, the /var/run/elasticsearch directory I created before upgrading to 1.6 (in an attempt to avoid this issue on other nodes...) went away after I rebooted (I had non-ES reasons to reboot). \n\nThis is on Ubuntu 12.04. I was going from 1.4.5 to 1.6, using the apt repo.\n\nIs there a reason apt doesn't deal with this for us?\n",
"created_at": "2015-06-11T19:30:29Z"
},
{
"body": "On my Ubuntu 14.04 instance, there is, in fact a /usr/lib/tmpfiles.d/elasticsearch.conf:\nd /var/run/elasticsearch 0755 root root - -\nThis doesn't seem to get handled. In fact, I have other tmpfile entries that don't seem to be processed.\n",
"created_at": "2015-06-12T15:42:32Z"
},
{
"body": "Same issue here. I fixed this by putting `PID_DIR=\"/var/run\"` into `/etc/default/elasticsearch`.\nIn my opinion, on Ubuntu/Debian all pid files are directly in this folder, so PID_DIR by default should just be `/var/run`\n",
"created_at": "2015-06-13T12:33:05Z"
},
{
"body": "Please fix this issue in official Elasticsearch repository (ppa) for Ubuntu. \n",
"created_at": "2015-06-13T13:15:58Z"
},
{
"body": "@uschindler Can you elaborate on \"on Ubuntu/Debian all pid files are directly in this folder\"? On my Ubuntu 14.04 LTS install with only Elasticsearch as an additional install, I've got a number of services whose PID files are in their on subdirectories in /var/run (CUPS, for example).\n",
"created_at": "2015-06-14T11:40:05Z"
},
{
"body": "Daemons generally only create a sub-dir if they create more than just a PID as run file. But generally, the pid file should be placed in /var/run and have ending \".pid\".\n\nI have to first lookup the packaging rules in Debian, but there is something about that in it!\n",
"created_at": "2015-06-14T22:05:59Z"
},
{
"body": "Sorry for the annoyance. I pushed a fix in #11674, it basically creates & chown the PID_DIR in init.d scripts and had been tested on Ubuntu 12.04, 14.04, Debian 7.8, CentOS6.6 and few others.\n\nThe Debian Guidelines are pretty open concerning PID files as long as it is stored in `/var/run` (source: https://www.debian.org/doc/packaging-manuals/fhs/fhs-2.3.txt): \n\n> Programs may have a subdirectory of /var/run; this is encouraged for programs that use more than\n> one run-time file. \n\nFor defaults, we'd like to keep the current `/var/run/elasticsearch` directory. It was already the case on RPM based distributions and keeping things consistent across platforms make scripts easier to maintain. It also helps when multiple instances runs on the same machine and make things simplier when the JVM Security Manager is enabled. Note that it can been overridden using the `PID_DIR` environment variable as @uschindler already mentioned. \n",
"created_at": "2015-06-15T14:45:45Z"
},
{
"body": "@tlrx Thanks for your work on this. :)\n",
"created_at": "2015-06-15T15:26:11Z"
},
{
"body": "@tlrx Thanks. Patch looks good.\n\nIn any case the packaging manuals already say what I mentioned in my first post:\n\n> /var/run : Run-time variable data\n> \n> Purpose\n> \n> This directory contains system information data describing the system since it\n> was booted. Files under this directory must be cleared (removed or truncated as\n> appropriate) at the beginning of the boot process. Programs may have a\n> subdirectory of /var/run; this is encouraged for programs that use more than\n> one run-time file. [42] Process identifier (PID) files, which were originally\n> placed in /etc, must be placed in /var/run. The naming convention for PID files\n> is <program-name>.pid. For example, the crond PID file is named /var/run/\n> crond.pid.\n\nYour text was just a bit clipped. Basically it says: If there is only the PID file and nothing more of runtime data it should be placed directly in /var/run. Otherwise a subdirectory with the name of the program should be created (\"this is encouraged for programs that use more than one run-time file\").\n\nBut I am also fine with your pull request. Thanks in any case!\n",
"created_at": "2015-06-15T16:19:16Z"
},
{
"body": "@tlrx Thanks!\n",
"created_at": "2015-06-16T07:50:26Z"
},
{
"body": "When will this get released? I noticed this version [here](https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.deb) does not contain these changes\n",
"created_at": "2015-07-02T20:01:47Z"
},
{
"body": "@bskern the issue has been reported for 1.6.0, and the package you indicate is 1.6.0 too so it does not contain the fix.\n\nThe fix will will be released with 1.6.1, hopefully soon.\n",
"created_at": "2015-07-03T07:20:07Z"
},
{
"body": "Also an issue on manjaro\n",
"created_at": "2015-11-02T09:18:30Z"
},
{
"body": "Linux Version: Ubuntu 16.04\r\nElasticsearch Version: 1.7.3\r\n\r\nI have recently added some new nodes to my existing cluster, and randomly one server would not start elasticsearch on restart. When manually trying to start it gives the following error:\r\n\r\n>sudo systemctl status elasticsearch\r\n\r\n```\r\n* elasticsearch.service - Elasticsearch\r\n Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)\r\n Active: failed (Result: exit-code) since Fri 2017-06-16 12:32:28 UTC; 3s ago\r\n Docs: http://www.elastic.co\r\n Process: 1437 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=$PID_DIR/elasticsearch.pid -Des.default.path.home=$ES_HOME -Des.default.p\r\n Main PID: 1437 (code=exited, status=3)\r\n\r\nJun 16 12:32:28 els02.jobsoid.net elasticsearch[1437]: java.io.FileNotFoundException: /var/run/elasticsearch/elasticsearch.pid (No such file or directory)\r\nJun 16 12:32:28 els02.jobsoid.net elasticsearch[1437]: at java.io.FileOutputStream.open0(Native Method)\r\nJun 16 12:32:28 els02.jobsoid.net elasticsearch[1437]: at java.io.FileOutputStream.open(FileOutputStream.java:270)\r\nJun 16 12:32:28 els02.jobsoid.net elasticsearch[1437]: at java.io.FileOutputStream.<init>(FileOutputStream.java:213)\r\nJun 16 12:32:28 els02.jobsoid.net elasticsearch[1437]: at java.io.FileOutputStream.<init>(FileOutputStream.java:162)\r\nJun 16 12:32:28 els02.jobsoid.net elasticsearch[1437]: at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:194)\r\nJun 16 12:32:28 els02.jobsoid.net elasticsearch[1437]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)\r\nJun 16 12:32:28 els02.jobsoid.net systemd[1]: elasticsearch.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED\r\nJun 16 12:32:28 els02.jobsoid.net systemd[1]: elasticsearch.service: Unit entered failed state.\r\nJun 16 12:32:28 els02.jobsoid.net systemd[1]: elasticsearch.service: Failed with result 'exit-code'.\r\n```\r\n\r\n\r\nIf I create a folder /var/run/elasticsearch and change ownership to elasticsearch:elasticsearch then elasticsearch starts. On restart again the same issue.\r\n\r\nI understand this above issue is related to version 1.6 but still I tried changing the PID_DIR folder to /var/run in the service file at /usr/lib/systemd/system/elasticsearch.service\r\n\r\nwith that it gives an error: java.io.FileNotFoundException: /var/run/elasticsearch.pid (Permission denied) instead of \"no such file or directory\"\r\n\r\nOne other thing i noticed, when I manually create the dir in /var/run and force elasticsearch to start, it seems to work fine but the GET /_nodes does not return any stats of the OS or Filesystem. \r\n\r\nMakes me think its some permission issue?",
"created_at": "2017-06-16T12:42:32Z"
},
{
"body": "@yashvit it's really time to upgrade :)",
"created_at": "2017-06-20T12:06:27Z"
},
{
"body": "Resolved issue for \"Elasticsearch.service: Can’t open PID file /run/elasticsearch/elasticsearch.pid No such file or directory\"\r\n\r\nBy running the below command:-\r\n\r\nsudo chown -R elasticsearch:elasticsearch /etc/elasticsearch ",
"created_at": "2018-07-10T14:43:17Z"
}
],
"number": 11594,
"title": "Path to file elasticsearch.pid not exist after restart "
} | {
"body": "Since the /var/run/elasticsearch directory is cleaned when the operating system starts, the init.d script must ensure that the PID_DIR is correctly created.\n\nCloses #11594\n",
"number": 11674,
"review_comments": [],
"title": "Create PID_DIR in init.d script"
} | {
"commits": [
{
"message": "Create PID_DIR in init.d script\n\nSince the /var/run/elasticsearch directory is cleaned when the operating system starts, the init.d script must ensure that the PID_DIR is correctly created.\n\nCloses #11594"
}
],
"files": [
{
"diff": "@@ -151,7 +151,14 @@ case \"$1\" in\n \n \t# Prepare environment\n \tmkdir -p \"$LOG_DIR\" \"$DATA_DIR\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$LOG_DIR\" \"$DATA_DIR\"\n-\ttouch \"$PID_FILE\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_FILE\"\n+\n+ # Ensure that the PID_DIR exists (it is cleaned at OS startup time)\n+ if [ -n \"$PID_DIR\" ] && [ ! -e \"$PID_DIR\" ]; then\n+ mkdir -p \"$PID_DIR\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_DIR\"\n+ fi\n+ if [ -n \"$PID_FILE\" ] && [ ! -e \"$PID_FILE\" ]; then\n+ touch \"$PID_FILE\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_FILE\"\n+ fi\n \n \tif [ -n \"$MAX_OPEN_FILES\" ]; then\n \t\tulimit -n $MAX_OPEN_FILES",
"filename": "core/src/packaging/deb/init.d/elasticsearch",
"status": "modified"
},
{
"diff": "@@ -99,6 +99,14 @@ start() {\n fi\n export ES_GC_LOG_FILE\n \n+ # Ensure that the PID_DIR exists (it is cleaned at OS startup time)\n+ if [ -n \"$PID_DIR\" ] && [ ! -e \"$PID_DIR\" ]; then\n+ mkdir -p \"$PID_DIR\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$PID_DIR\"\n+ fi\n+ if [ -n \"$pidfile\" ] && [ ! -e \"$pidfile\" ]; then\n+ touch \"$pidfile\" && chown \"$ES_USER\":\"$ES_GROUP\" \"$pidfile\"\n+ fi\n+\n echo -n $\"Starting $prog: \"\n # if not running, start it up here, usually something like \"daemon $exec\"\n daemon --user $ES_USER --pidfile $pidfile $exec -p $pidfile -d -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.path.conf=$CONF_DIR",
"filename": "core/src/packaging/rpm/init.d/elasticsearch",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,123 @@\n+#!/usr/bin/env bats\n+\n+# This file is used to test the elasticsearch init.d scripts.\n+\n+# WARNING: This testing file must be executed as root and can\n+# dramatically change your system. It removes the 'elasticsearch'\n+# user/group and also many directories. Do not execute this file\n+# unless you know exactly what you are doing.\n+\n+# The test case can be executed with the Bash Automated\n+# Testing System tool available at https://github.com/sstephenson/bats\n+# Thanks to Sam Stephenson!\n+\n+# Licensed to Elasticsearch under one or more contributor\n+# license agreements. See the NOTICE file distributed with\n+# this work for additional information regarding copyright\n+# ownership. Elasticsearch licenses this file to you under\n+# the Apache License, Version 2.0 (the \"License\"); you may\n+# not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+\n+# Load test utilities\n+load packaging_test_utils\n+\n+# Cleans everything for the 1st execution\n+setup() {\n+ if [ \"$BATS_TEST_NUMBER\" -eq 1 ]; then\n+ clean_before_test\n+ fi\n+\n+ # Installs a package before test\n+ if is_dpkg; then\n+ dpkg -i elasticsearch*.deb >&2 || true\n+ fi\n+ if is_rpm; then\n+ rpm -i elasticsearch*.rpm >&2 || true\n+ fi\n+}\n+\n+@test \"[INIT.D] start\" {\n+ skip_not_sysvinit\n+\n+ run service elasticsearch start\n+ [ \"$status\" -eq 0 ]\n+\n+ wait_for_elasticsearch_status\n+\n+ assert_file_exist \"/var/run/elasticsearch/elasticsearch.pid\"\n+}\n+\n+@test \"[INIT.D] status (running)\" {\n+ skip_not_sysvinit\n+\n+ run service elasticsearch status\n+ [ \"$status\" -eq 0 ]\n+}\n+\n+##################################\n+# Check that Elasticsearch is working\n+##################################\n+@test \"[INIT.D] test elasticsearch\" {\n+ skip_not_sysvinit\n+\n+ run_elasticsearch_tests\n+}\n+\n+@test \"[INIT.D] restart\" {\n+ skip_not_sysvinit\n+\n+ run service elasticsearch restart\n+ [ \"$status\" -eq 0 ]\n+\n+ wait_for_elasticsearch_status\n+\n+ run service elasticsearch status\n+ [ \"$status\" -eq 0 ]\n+}\n+\n+@test \"[INIT.D] stop (running)\" {\n+ skip_not_sysvinit\n+\n+ run service elasticsearch stop\n+ [ \"$status\" -eq 0 ]\n+\n+}\n+\n+@test \"[INIT.D] status (stopped)\" {\n+ skip_not_sysvinit\n+\n+ run service elasticsearch status\n+ [ \"$status\" -eq 3 ]\n+}\n+\n+# Simulates the behavior of a system restart:\n+# the PID directory is deleted by the operating system\n+# but it should not block ES from starting\n+# see https://github.com/elastic/elasticsearch/issues/11594\n+@test \"[INIT.D] delete PID_DIR and restart\" {\n+ skip_not_sysvinit\n+\n+ run rm -rf /var/run/elasticsearch\n+ [ \"$status\" -eq 0 ]\n+\n+\n+ run service elasticsearch start\n+ [ \"$status\" -eq 0 ]\n+\n+ wait_for_elasticsearch_status\n+\n+ assert_file_exist \"/var/run/elasticsearch/elasticsearch.pid\"\n+\n+ run service elasticsearch stop\n+ [ \"$status\" -eq 0 ]\n+}\n\\ No newline at end of file",
"filename": "core/src/test/resources/packaging/scripts/70_sysv_initd.bats",
"status": "added"
}
]
} |
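
To make the failure mode above concrete: like `touch`, `java.io.FileOutputStream` does not create missing parent directories, which is why Bootstrap later fails with `FileNotFoundException` once `/var/run/elasticsearch` has been wiped at boot. A small, hedged demo under a temp directory (the paths here are placeholders, not the real `/var/run` layout):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Demonstrates why the pid file cannot be created once the OS has removed the
// PID_DIR at boot, and what the patched init scripts do about it: create the
// directory first, then create the file. Runs against a temp dir for safety.
public class PidFileDemo {

    public static void main(String[] args) throws IOException {
        File pidDir = new File(System.getProperty("java.io.tmpdir"), "pid-demo-" + System.nanoTime() + "/elasticsearch");
        File pidFile = new File(pidDir, "elasticsearch.pid");

        try {
            new FileOutputStream(pidFile).close(); // parent dir is missing
        } catch (IOException e) {
            System.out.println("without creating PID_DIR first: " + e); // FileNotFoundException
        }

        // Equivalent of the init.d fix: mkdir -p "$PID_DIR" before touch "$PID_FILE"
        if (pidDir.mkdirs()) {
            new FileOutputStream(pidFile).close();
            System.out.println("after creating PID_DIR: " + pidFile.exists());
        }
    }
}
```
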
{
"body": "With the following configuration\n\n```\nnode.name: ${prompt.text}\n```\n\nElasticsearch 1.6.0 prompts you twice. Once for \"node.name\" and \"name\". Ultimately it uses the second one for the value of the configuration item:\n\n```\ndjschny:elasticsearch-1.6.0 djschny$ ../startElastic.sh \nEnter value for [node.name]: foo\nEnter value for [name]: bar\n[2015-06-09 16:10:03,405][INFO ][node ] [bar] version[1.6.0], pid[4836], build[cdd3ac4/2015-06-09T13:36:34Z]\n[2015-06-09 16:10:03,405][INFO ][node ] [bar] initializing ...\n```\n",
"comments": [
{
"body": "@jaymode please could you take a look\n",
"created_at": "2015-06-12T14:04:01Z"
},
{
"body": "Looks to be still a problem on 2.1. Regression here? @GlenRSmith and I can confirm this occurs with the 2.1 release archive.\n",
"created_at": "2015-12-17T23:47:41Z"
},
{
"body": "It seems to be independent of which config item you want to prompt for:\n\n```\nelasticsearch-2.1.0 bin/elasticsearch\nEnter value for [cluster.name]: foo\nEnter value for [cluster.name]: bar\n[2015-12-18 10:48:50,219][INFO ][node ] [Wendell Vaughn] version[2.1.0], pid[21231], build[72cd1f1/2015-11-18T22:40:03Z]\n[2015-12-18 10:48:50,220][INFO ][node ] [Wendell Vaughn] initializing ...\n[2015-12-18 10:48:50,280][INFO ][plugins ] [Wendell Vaughn] loaded [], sites []\n[2015-12-18 10:48:50,304][INFO ][env ] [Wendell Vaughn] using [1] data paths, mounts [[/ (/dev/mapper/fedora_josh--xps13-root)]], net usable_space [79.1gb], net total_space [233.9gb], spins? [no], types [ext4]\n[2015-12-18 10:48:51,761][INFO ][node ] [Wendell Vaughn] initialized\n[2015-12-18 10:48:51,762][INFO ][node ] [Wendell Vaughn] starting ...\n[2015-12-18 10:48:51,902][INFO ][transport ] [Wendell Vaughn] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}\n[2015-12-18 10:48:51,912][INFO ][discovery ] [Wendell Vaughn] bar/azOFxYXMQ6muCpGOpqvxSw\n```\n",
"created_at": "2015-12-17T23:49:53Z"
},
{
"body": "Just confirmed on 2.1.1.\n",
"created_at": "2015-12-18T00:00:40Z"
},
{
"body": "This is different than the original issue. I think what is happening is we first initialize the settings/environment in bootstrap so that we can eg init logging, but then when bootstrap creates the node, it passes in a fresh Settings, and the Node constructor again initializes settings/environment.\n",
"created_at": "2015-12-18T06:52:08Z"
},
{
"body": "@jaymode could you take a look at this please?\n",
"created_at": "2016-01-18T20:16:45Z"
},
{
"body": "As @rjernst said, this is a different issue that the original. `BootstrapCLIParser` was added which extends CLITool. CLITool prepares the settings and environment which includes prompting since CLITools are usually run outside of the bootstrap process. This causes the first prompt that is ignored. The second prompt that is used, comes from `Bootstrap`, which is passed to the node.\n\nPart of the issue is that the `BootstrapCLIParser` sets properties that will change the value of settings. I think we can solve this a few different ways:\n1. Pass in empty settings/null environment for this CLITool. If we try to create a valid environment here then we have to prepare the settings to ensure we parse the paths from the settings for our directories\n2. Do not use the `CLITool` infrastructure\n3. Prepare the environment in bootstrap, pass to `BootstrapCLIParser`. Re-prepare the settings/environment, passing in the already prepared settings.\n\n@spinscale @rjernst any thoughts?\n",
"created_at": "2016-01-19T15:07:27Z"
},
{
"body": "I would say preparing the environment once would be the best option, it will just take some refactoring to pass it through. Running prepare already creates Environment twice (the first time so it can try and load the config file, which might have other paths like plugin path or data path). Really I think we should simply not allow paths to be configured in elasticsearch.yml, instead it should only be through sysprops. It might be something that could simplify the settings/env prep, to make this a little easier.\n",
"created_at": "2016-01-19T19:46:38Z"
},
{
"body": "I have a fix for this that will come as part of #16579.\n",
"created_at": "2016-02-24T02:40:12Z"
},
{
"body": "Closed by #17024.\n",
"created_at": "2016-03-14T00:07:24Z"
}
],
"number": 11564,
"title": "${prompt.text} and ${prompt.secret} double prompting"
} | {
"body": "We allow setting the node's name a few different ways: the `name` system\nproperty, the setting `name`, and the setting `node.name`. There is an order\nof preference to these settings that gets applied will copy values from the\nsystem property or `node.name` setting to the `name` setting. When setting\nonly `node.name` to one of the prompt placeholders, the user would be\nprompted twice as the value of `node.name` is copied to `name` prior to\nprompting for input. Additionally, the value entered by the user for `node.name`\nwould not be used and only the value entered for `name` would be used.\n\nThis fix changes the behavior to only prompt once when `node.name` is set and\n`name` is not set. This is accomplished by waiting until all values have been\nprompted and replaced, then the logic for determining the node's name is\nexecuted.\n\nCloses #11564\n",
"number": 11668,
"review_comments": [],
"title": "Do not prompt for node name twice"
} | {
"commits": [
{
"message": "do not prompt for node name twice\n\nWe allow setting the node's name a few different ways: the `name` system\nproperty, the setting `name`, and the setting `node.name`. There is an order\nof preference to these settings that gets applied, which can copy values from the\nsystem property or `node.name` setting to the `name` setting. When setting\nonly `node.name` to one of the prompt placeholders, the user would be\nprompted twice as the value of `node.name` is copied to `name` prior to\nprompting for input. Additionally, the value entered by the user for `node.name`\nwould not be used and only the value entered for `name` would be used.\n\nThis fix changes the behavior to only prompt once when `node.name is set` and\n`name` is not set. This is accomplished by waiting until all values have been\nprompted and replaced, then the logic for determining the node's name is\nexecuted.\n\nCloses #11564"
}
],
"files": [
{
"diff": "@@ -50,8 +50,8 @@ public class InternalSettingsPreparer {\n \n /**\n * Prepares the settings by gathering all elasticsearch system properties, optionally loading the configuration settings,\n- * and then replacing all property placeholders. This method will not work with settings that have <code>__prompt__</code>\n- * as their value unless they have been resolved previously.\n+ * and then replacing all property placeholders. This method will not work with settings that have <code>${prompt.text}</code>\n+ * or <code>${prompt.secret}</code> as their value unless they have been resolved previously.\n * @param pSettings The initial settings to use\n * @param loadConfigSettings flag to indicate whether to load settings from the configuration directory/file\n * @return the {@link Settings} and {@link Environment} as a {@link Tuple}\n@@ -63,7 +63,8 @@ public static Tuple<Settings, Environment> prepareSettings(Settings pSettings, b\n /**\n * Prepares the settings by gathering all elasticsearch system properties, optionally loading the configuration settings,\n * and then replacing all property placeholders. If a {@link Terminal} is provided and configuration settings are loaded,\n- * settings with the <code>__prompt__</code> value will result in a prompt for the setting to the user.\n+ * settings with a value of <code>${prompt.text}</code> or <code>${prompt.secret}</code> will result in a prompt for\n+ * the setting to the user.\n * @param pSettings The initial settings to use\n * @param loadConfigSettings flag to indicate whether to load settings from the configuration directory/file\n * @param terminal the Terminal to use for input/output\n@@ -131,16 +132,9 @@ public static Tuple<Settings, Environment> prepareSettings(Settings pSettings, b\n }\n settingsBuilder.replacePropertyPlaceholders();\n \n- // generate the name\n+ // check if name is set in settings, if not look for system property and set it\n if (settingsBuilder.get(\"name\") == null) {\n String name = System.getProperty(\"name\");\n- if (name == null || name.isEmpty()) {\n- name = settingsBuilder.get(\"node.name\");\n- if (name == null || name.isEmpty()) {\n- name = Names.randomNodeName(environment.resolveConfig(\"names.txt\"));\n- }\n- }\n-\n if (name != null) {\n settingsBuilder.put(\"name\", name);\n }\n@@ -155,17 +149,33 @@ public static Tuple<Settings, Environment> prepareSettings(Settings pSettings, b\n if (v != null) {\n Settings.setSettingsRequireUnits(Booleans.parseBoolean(v, true));\n }\n- Settings v1 = replacePromptPlaceholders(settingsBuilder.build(), terminal);\n- environment = new Environment(v1);\n+\n+ Settings settings = replacePromptPlaceholders(settingsBuilder.build(), terminal);\n+ // all settings placeholders have been resolved. 
resolve the value for the name setting by checking for name,\n+ // then looking for node.name, and finally generate one if needed\n+ if (settings.get(\"name\") == null) {\n+ final String name = settings.get(\"node.name\");\n+ if (name == null || name.isEmpty()) {\n+ settings = settingsBuilder().put(settings)\n+ .put(\"name\", Names.randomNodeName(environment.resolveConfig(\"names.txt\")))\n+ .build();\n+ } else {\n+ settings = settingsBuilder().put(settings)\n+ .put(\"name\", name)\n+ .build();\n+ }\n+ }\n+\n+ environment = new Environment(settings);\n \n // put back the env settings\n- settingsBuilder = settingsBuilder().put(v1);\n+ settingsBuilder = settingsBuilder().put(settings);\n // we put back the path.logs so we can use it in the logging configuration file\n settingsBuilder.put(\"path.logs\", cleanPath(environment.logsFile().toAbsolutePath().toString()));\n \n- v1 = settingsBuilder.build();\n+ settings = settingsBuilder.build();\n \n- return new Tuple<>(v1, environment);\n+ return new Tuple<>(settings, environment);\n }\n \n static Settings replacePromptPlaceholders(Settings settings, Terminal terminal) {",
"filename": "core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java",
"status": "modified"
},
{
"diff": "@@ -31,22 +31,23 @@\n \n import java.util.ArrayList;\n import java.util.List;\n+import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.*;\n \n public class InternalSettingsPreparerTests extends ElasticsearchTestCase {\n \n @Before\n public void setupSystemProperties() {\n System.setProperty(\"es.node.zone\", \"foo\");\n+ System.setProperty(\"name\", \"sys-prop-name\");\n }\n \n @After\n public void cleanupSystemProperties() {\n System.clearProperty(\"es.node.zone\");\n+ System.clearProperty(\"name\");\n }\n \n @Test\n@@ -151,4 +152,72 @@ public void testReplaceTextPromptPlaceholderWithNullTerminal() {\n assertThat(e.getMessage(), containsString(\"with value [\" + InternalSettingsPreparer.TEXT_PROMPT_VALUE + \"]\"));\n }\n }\n+\n+ @Test\n+ public void testNameSettingsPreference() {\n+ // Test system property overrides node.name\n+ Settings settings = settingsBuilder()\n+ .put(\"node.name\", \"node-name\")\n+ .put(\"path.home\", createTempDir().toString())\n+ .build();\n+ Tuple<Settings, Environment> tuple = InternalSettingsPreparer.prepareSettings(settings, true);\n+ assertThat(tuple.v1().get(\"name\"), equalTo(\"sys-prop-name\"));\n+\n+ // test name in settings overrides sys prop and node.name\n+ settings = settingsBuilder()\n+ .put(\"name\", \"name-in-settings\")\n+ .put(\"node.name\", \"node-name\")\n+ .put(\"path.home\", createTempDir().toString())\n+ .build();\n+ tuple = InternalSettingsPreparer.prepareSettings(settings, true);\n+ assertThat(tuple.v1().get(\"name\"), equalTo(\"name-in-settings\"));\n+\n+ // test only node.name in settings\n+ System.clearProperty(\"name\");\n+ settings = settingsBuilder()\n+ .put(\"node.name\", \"node-name\")\n+ .put(\"path.home\", createTempDir().toString())\n+ .build();\n+ tuple = InternalSettingsPreparer.prepareSettings(settings, true);\n+ assertThat(tuple.v1().get(\"name\"), equalTo(\"node-name\"));\n+\n+ // test no name at all results in name being set\n+ settings = settingsBuilder()\n+ .put(\"path.home\", createTempDir().toString())\n+ .build();\n+ tuple = InternalSettingsPreparer.prepareSettings(settings, true);\n+ assertThat(tuple.v1().get(\"name\"), not(\"name-in-settings\"));\n+ assertThat(tuple.v1().get(\"name\"), not(\"sys-prop-name\"));\n+ assertThat(tuple.v1().get(\"name\"), not(\"node-name\"));\n+ assertThat(tuple.v1().get(\"name\"), notNullValue());\n+ }\n+\n+ @Test\n+ public void testPromptForNodeNameOnlyPromptsOnce() {\n+ final AtomicInteger counter = new AtomicInteger();\n+ final Terminal terminal = new CliToolTestCase.MockTerminal() {\n+ @Override\n+ public char[] readSecret(String message, Object... args) {\n+ fail(\"readSecret should never be called by this test\");\n+ return null;\n+ }\n+\n+ @Override\n+ public String readText(String message, Object... 
args) {\n+ int count = counter.getAndIncrement();\n+ return \"prompted name \" + count;\n+ }\n+ };\n+\n+ System.clearProperty(\"name\");\n+ Settings settings = Settings.builder()\n+ .put(\"path.home\", createTempDir())\n+ .put(\"node.name\", InternalSettingsPreparer.TEXT_PROMPT_VALUE)\n+ .build();\n+ Tuple<Settings, Environment> tuple = InternalSettingsPreparer.prepareSettings(settings, false, terminal);\n+ settings = tuple.v1();\n+ assertThat(counter.intValue(), is(1));\n+ assertThat(settings.get(\"name\"), is(\"prompted name 0\"));\n+ assertThat(settings.get(\"node.name\"), is(\"prompted name 0\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/node/internal/InternalSettingsPreparerTests.java",
"status": "modified"
}
]
} |
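
A condensed, standalone sketch of the name precedence that the PR above settles on, applied only after all `${prompt.text}`/`${prompt.secret}` placeholders have been resolved. The method and map-based settings here are illustrative only, not the real `InternalSettingsPreparer` API; the precedence itself (explicit `name`, then the `name` system property, then `node.name`, then a generated fallback) is what the added tests assert.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: resolve the node name once, after placeholder resolution,
// so a prompted node.name is asked for a single time and its value is reused
// for the "name" setting instead of being discarded.
public class NodeNameResolutionSketch {

    static String resolveNodeName(Map<String, String> settings) {
        String name = settings.get("name");
        if (name == null || name.isEmpty()) {
            name = System.getProperty("name");
        }
        if (name == null || name.isEmpty()) {
            name = settings.get("node.name");
        }
        if (name == null || name.isEmpty()) {
            name = "generated-node-name"; // ES would pick a random entry from names.txt here
        }
        return name;
    }

    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<>();
        settings.put("node.name", "prompted-once"); // value the user typed at the single prompt
        System.out.println(resolveNodeName(settings)); // prompted-once
    }
}
```
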
{
"body": "This change adds a simplistic heuristic to try to balance new shard allocations across multiple data paths on one node.\n\nIt very roughly predicts (guesses!) how much disk space a shard will eventually use, as the max of the current avg. size of shards across the cluster, and 5% of current free space across all path.data on the current node, and then reserves space by counting how many shards are now assigned to each path.data.\n\nPicking the best path.data for a new shard is using the same \"most free space\" logic, except it now deducts the reserved space.\n\nI tested this on an EC2 instance with 2 SSDs with nearly the same amount of free space and confirmed we now put 2 shards on one SSD and 3 shards on the other, vs all 5 shards on a single path with master today, but I'm not sure how to make a standalone unit test ... maybe I can use a MockFS to fake up N path.datas with different free space?\n\nThis is just a heuristic, and it easily has adversarial cases that will fill up one path.data while other path.data on the same node still have plenty of space, and unfortunately ES can't recover from that today. E.g., DiskThresholdDecider won't even detect any problem (since it sums up total free space across all path.data) ... I think we should separately think about fixing that, but at least this change improves the current situation.\n\nCloses #11122\n",
"comments": [
{
"body": "This looks ok to me, although I do hope we keep tweaking this heuristic. I've tried to think of a better one but can't for now..but I wish we could do something based on weighted shard count instead of the very hard to think about mixed/shifting logic of average/5%.\n",
"created_at": "2015-05-15T18:02:07Z"
},
{
"body": "mike this looks good to me - can we somehow have a test for this?\n",
"created_at": "2015-05-15T20:08:24Z"
},
{
"body": "> can we somehow have a test for this?\n\nI agree, but it's tricky: I think I need a new MockFS that fakes how much free space there is on each path. I can try ...\n",
"created_at": "2015-05-17T09:46:06Z"
},
{
"body": "> This looks ok to me, although I do hope we keep tweaking this heuristic. I've tried to think of a better one but can't for now..but I wish we could do something based on weighted shard count instead of the very hard to think about mixed/shifting logic of average/5%.\n\nThe challenge with only using current shard count is then we don't take into account how much space the already allocated shards are already using? E.g maybe one path has only a few shards, and is nearly full, but another path has quite a few shards and has lots of free space.\n\nNo matter the heuristic here, there will be easy adversarial cases against it, so in the end this will be at best a \"starting guess\": we can't predict the future.\n\nTo fix this correctly we really need shard allocation to separately see / allocate across each path.data on a node, so we can move a shard off a path.data that is filling up even if other path.data on that same node have plenty of space.\n",
"created_at": "2015-05-17T09:53:15Z"
},
{
"body": "> I agree, but it's tricky: I think I need a new MockFS that fakes how much free space there is on each path. I can try ...\n\nthis requires QUITE a few steps, and please note that ES's file management (especially tests) is simply not ready to juggle multiple nio filesystems at once (various copy/move routines are unsafe there). \n\nSeparately, an out-of-disk space mockfs would be great. But please don't add a complicated test, and please don't use multiple nio.2 filesystems when ES isn't ready for that yet.\n\nTest what is reasonable to test and then if we want to do better, some cleanups are needed. I have been working on these things but it is quite difficult without straight up banning methods, because more tests are added all the time.\n",
"created_at": "2015-05-17T13:48:36Z"
},
{
"body": "> this requires QUITE a few steps, and please note that ES's file management (especially tests) is simply not ready to juggle multiple nio filesystems at once (various copy/move routines are unsafe there).\n\nOK this sounds hard :) I'll try to make a more direct test that just tests the heuristic logic w/o needing MockFS ...\n",
"created_at": "2015-05-18T21:54:50Z"
},
{
"body": "I pushed a new commit addressing feedback (thanks!).\n\nHowever, I gave up making a direct unit test for the ShardPath.selectNewPath ... I tried to simplify the arguments passed to it (e.g. replacing Iterator<IndexShard> with Iterator<Path> and extracting the shard's paths \"up above\") so that it was more easily tested, but it just became too forced ...\n\nI did retest on EC2 and confirmed the 5 shards are split to 2 and 3 shards on each SSD.\n\nI'll open a follow-on issue to allow for shards to relocated to different path.data within a single node.\n",
"created_at": "2015-05-20T20:50:10Z"
},
{
"body": "One problem with this change is it's making a \"blind guess\" about how big a shard will be, yet if it's a relocation of an existing shard, the source node clearly \"knows\" how large it is, so it's crazy not to use this.\n\n@dakrone is working on getting this information published via ClusterInfo or index meta data or something (thanks!), then I'll fix this change to use that, so we can make a more informed \"guess\".\n\nEven so, this is just the current size of the shard, and we still separately need to fix #11271 so shards on one path.data that's filling up will still be relocated even if other path.data on the same node have plenty of space.\n",
"created_at": "2015-05-22T09:19:46Z"
},
{
"body": "I left some comments here again, I think we should move forward here without the infrastructure to make it perfect. It's a step in the right direction?\n",
"created_at": "2015-06-12T12:28:30Z"
},
{
"body": "OK I folded in feedback here.\n\nI avoid this logic when a custom data path is set, and force the statePath to be NodePaths[0] in that case.\n\nAnd I removed ClusterInfo/Service and instead take avg. of all shards already on the node.\n",
"created_at": "2015-06-12T17:57:34Z"
},
{
"body": "left a tiny comment otherwise LGTM\n",
"created_at": "2015-06-16T18:18:52Z"
},
{
"body": "LGTM\n",
"created_at": "2015-06-17T09:36:34Z"
}
],
"number": 11185,
"title": "Balance new shard allocations more evenly on multiple path.data"
} | {
"body": "This commit makes the `ClusterInfo` object available to all nodes via a\ncustom ClusterState object. The cluster info contains the following\ninformation:\n1. Disk usage information about each node in the cluster.\n2. Shard sizes for all primary shards in the cluster.\n3. Relative classifications for the size of each index.\n\nEach index is classified from SMALLEST to LARGEST based on its size,\nwith SMALL, MEDIUM, and LARGE falling equidistantly between smallest and\nlargest.\n\nThis information can be used in the future for determining the best data\npath for a particular shard, enhancing the `DiskThresholdDecider`, or\nmaking routing decidings for a shard based on its relative size in the\ncluster. It can be accessed by using something like:\n\n``` java\nClusterInfo info = clusterService.state().custom(ClusterInfo.TYPE);\nMap<String, DiskUsage> nodeDiskUsage = info.getNodeDiskUsages();\nMap<ShardId, Long> shardSizes = info.getShardSizes();\nIndexClassification classification = info.getIndexClassification();\n```\n\nThe `ClusterInfo` object is also available to inspect from the cluster\nstate HTTP endpoint, which returns a response like:\n\n``` json\n{\n ... other cluster state ...\n \"cluster_info\" : {\n \"node_disk_usage\" : {\n \"nFlT3nFzQRyvixcYv6Yfuw\" : {\n \"free_bytes\" : 37556461568,\n \"used_bytes\" : 23225233408\n }\n },\n \"shard_size\" : {\n \"[test][4]\" : 156,\n \"[test][2]\" : 156,\n \"[test][3]\" : 3007,\n \"[foo][0]\" : 127,\n \"[wiki][0]\" : 12728802,\n \"[test2][4]\" : 156,\n \"[test2][3]\" : 2904,\n \"[wiki2][1]\" : 23338909,\n \"[test2][2]\" : 156,\n \"[test2][1]\" : 156,\n \"[test2][0]\" : 156,\n \"[test][0]\" : 156,\n \"[test][1]\" : 156\n },\n \"classifications\" : {\n \"test2\" : \"SMALLEST\",\n \"test\" : \"SMALLEST\",\n \"wiki2\" : \"LARGEST\",\n \"foo\" : \"SMALLEST\",\n \"wiki\" : \"MEDIUM\"\n }\n }\n}\n```\n\nRelates to work on #11185\n",
"number": 11643,
"review_comments": [
{
"body": "That's java 1.8 construct\n",
"created_at": "2015-06-16T17:02:15Z"
},
{
"body": "I am not entirely sure how this information is going to be used on non-master nodes, so this might not be a problem. But since this task has `NORMAL` priority it will tend to get stuck behind `URGENT` and `IMMEDIATE` priority routing-related update tasks during any massive shard-related event (cluster startup, rolling restart, etc.) or might not reach any other nodes at all if the current master will die during such an event. On a relevant note, since these tasks will be accumulating at the end of the pending tasks queue during such events, it might make sense to add some sort of batching so we would only update the cluster with the latest information instead of processing information that was stuck in the queue for a while.\n",
"created_at": "2015-06-16T17:37:29Z"
},
{
"body": "This comment is now misleading and the import for DiskThresholdDecider is now unused. By the way, what will happen if we disable cluster info but will not disable disk threshold decider? Should we add a test for it, to make sure that it still produces sane results?\n",
"created_at": "2015-06-16T17:49:13Z"
}
],
"title": "Categorize index by size and make ClusterInfo available to all nodes"
} | {
"commits": [
{
"message": "Categorize index by size and make ClusterInfo available to all nodes\n\nThis commit makes the `ClusterInfo` object available to all nodes via a\ncustom ClusterState object. The cluster info contains the following\ninformation:\n\n1. Disk usage information about each node in the cluster.\n\n2. Shard sizes for all primary shards in the cluster.\n\n3. Relative classifications for the size of each index.\n\nEach index is classified from SMALLEST to LARGEST based on its size,\nwith SMALL, MEDIUM, and LARGE falling equidistantly between smallest and\nlargest.\n\nThis information can be used in the future for determining the best data\npath for a particular shard, enhancing the `DiskThresholdDecider`, or\nmaking routing decidings for a shard based on its relative size in the\ncluster. It can be accessed by using something like:\n\n```java\nClusterInfo info = clusterService.state().custom(ClusterInfo.TYPE);\nMap<String, DiskUsage> nodeDiskUsage = info.getNodeDiskUsages();\nMap<ShardId, Long> shardSizes = info.getShardSizes();\nIndexClassification classification = info.getIndexClassification();\n```\n\nThe `ClusterInfo` object is also available to inspect from the cluster\nstate HTTP endpoint, which returns a response like:\n\n```json\n{\n ... other cluster state ...\n \"cluster_info\" : {\n \"node_disk_usage\" : {\n \"nFlT3nFzQRyvixcYv6Yfuw\" : {\n \"free_bytes\" : 37556461568,\n \"used_bytes\" : 23225233408\n }\n },\n \"shard_size\" : {\n \"[test][4]\" : 156,\n \"[test][2]\" : 156,\n \"[test][3]\" : 3007,\n \"[foo][0]\" : 127,\n \"[wiki][0]\" : 12728802,\n \"[test2][4]\" : 156,\n \"[test2][3]\" : 2904,\n \"[wiki2][1]\" : 23338909,\n \"[test2][2]\" : 156,\n \"[test2][1]\" : 156,\n \"[test2][0]\" : 156,\n \"[test][0]\" : 156,\n \"[test][1]\" : 156\n },\n \"classifications\" : {\n \"test2\" : \"SMALLEST\",\n \"test\" : \"SMALLEST\",\n \"wiki2\" : \"LARGEST\",\n \"foo\" : \"SMALLEST\",\n \"wiki\" : \"MEDIUM\"\n }\n }\n}\n```\n\nRelates to work on #11185"
}
],
"files": [
{
"diff": "@@ -20,30 +20,184 @@\n package org.elasticsearch.cluster;\n \n import com.google.common.collect.ImmutableMap;\n+import com.google.common.collect.Maps;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.shard.ShardId;\n \n+import java.io.IOException;\n import java.util.Map;\n \n /**\n- * ClusterInfo is an object representing a map of nodes to {@link DiskUsage}\n- * and a map of shard ids to shard sizes, see\n- * <code>InternalClusterInfoService.shardIdentifierFromRouting(String)</code>\n- * for the key used in the shardSizes map\n+ * ClusterInfo is an object representing information about nodes and shards in\n+ * the cluster. This includes a map of nodes to disk usage, primary shards to\n+ * shard size, and classifications for the relative sizes of indices in the\n+ * cluster.\n+ *\n+ * It is a custom cluster state object. This means it is sent to all nodes in\n+ * the cluster via the cluster state, however, it is not written to disk since\n+ * it is dynamically generated.\n */\n-public class ClusterInfo {\n+public class ClusterInfo extends AbstractDiffable<ClusterState.Custom> implements ClusterState.Custom {\n \n+ public static final String TYPE = \"cluster_info\";\n+ public static final ClusterInfo PROTO = new ClusterInfo();\n+ \n private final ImmutableMap<String, DiskUsage> usages;\n- private final ImmutableMap<String, Long> shardSizes;\n+ private final ImmutableMap<ShardId, Long> shardSizes;\n+ private final IndexClassification indexClassification;\n+\n+ public ClusterInfo() {\n+ this.usages = ImmutableMap.of();\n+ this.shardSizes = ImmutableMap.of();\n+ this.indexClassification = new IndexClassification(ImmutableMap.<String, IndexSize>of());\n+ }\n \n- public ClusterInfo(ImmutableMap<String, DiskUsage> usages, ImmutableMap<String, Long> shardSizes) {\n- this.usages = usages;\n- this.shardSizes = shardSizes;\n+ public ClusterInfo(Map<String, DiskUsage> usages, Map<ShardId, Long> shardSizes,\n+ IndexClassification indexClassification) {\n+ this.usages = ImmutableMap.copyOf(usages);\n+ this.shardSizes = ImmutableMap.copyOf(shardSizes);\n+ this.indexClassification = indexClassification;\n }\n \n+ public ClusterInfo(Map<String, DiskUsage> usages, Map<ShardId, Long> shardSizes,\n+ Map<String, IndexSize> indexClassifications) {\n+ this(usages, shardSizes, new IndexClassification(indexClassifications));\n+ }\n+\n+ /**\n+ * Return a map of node ids to disk usage\n+ */\n public Map<String, DiskUsage> getNodeDiskUsages() {\n return this.usages;\n }\n \n- public Map<String, Long> getShardSizes() {\n+ /**\n+ * Return a map of shard id to shard size. 
Note this is the size of the\n+ * primary shard.\n+ */\n+ public Map<ShardId, Long> getShardSizes() {\n return this.shardSizes;\n }\n+\n+ /**\n+ * Return a map of index name to index size classification.\n+ */\n+ public Map<String, IndexSize> getIndexClassification() {\n+ return this.indexClassification.getIndexClassifications();\n+ }\n+\n+ public String type() {\n+ return TYPE;\n+ }\n+\n+ @Override\n+ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject(\"node_disk_usage\");\n+ for (Map.Entry<String, DiskUsage> entry : usages.entrySet()) {\n+ builder.startObject(entry.getKey());\n+ builder.field(\"free_bytes\");\n+ builder.value(entry.getValue().getFreeBytes());\n+ builder.field(\"used_bytes\");\n+ builder.value(entry.getValue().getUsedBytes());\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ builder.startObject(\"shard_size\");\n+ for (Map.Entry<ShardId, Long> entry : shardSizes.entrySet()) {\n+ builder.field(entry.getKey().toString());\n+ builder.value(entry.getValue());\n+ }\n+ builder.endObject();\n+ indexClassification.toXContent(builder, params);\n+ return builder;\n+ }\n+\n+ @Override\n+ public ClusterInfo readFrom(StreamInput in) throws IOException {\n+ long mapSize = in.readVLong();\n+ Map<String, DiskUsage> newUsages = Maps.newHashMap();\n+ for (int i = 0; i < mapSize; i++) {\n+ String index = in.readString();\n+ String nodeId = in.readString();\n+ String nodeName = in.readString();\n+ long totalBytes = in.readLong();\n+ long freeBytes = in.readLong();\n+ DiskUsage usage = new DiskUsage(nodeId, nodeName, totalBytes, freeBytes);\n+ newUsages.put(index, usage);\n+ }\n+\n+ mapSize = in.readVLong();\n+ Map<ShardId, Long> newSizes = Maps.newHashMap();\n+ for (int i = 0; i < mapSize; i++) {\n+ String idx = in.readString();\n+ int id = in.readVInt();\n+ long size = in.readLong();\n+ newSizes.put(new ShardId(idx, id), size);\n+ }\n+\n+ IndexClassification newClassifications = new IndexClassification();\n+ newClassifications.readFrom(in);\n+ return new ClusterInfo(newUsages, newSizes, newClassifications);\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ out.writeVLong(usages.size());\n+ for (Map.Entry<String, DiskUsage> entry : usages.entrySet()) {\n+ String index = entry.getKey();\n+ DiskUsage usage = entry.getValue();\n+ out.writeString(index);\n+ out.writeString(usage.getNodeId());\n+ out.writeString(usage.getNodeName());\n+ out.writeLong(usage.getTotalBytes());\n+ out.writeLong(usage.getFreeBytes());\n+ }\n+\n+ out.writeVLong(shardSizes.size());\n+ for (Map.Entry<ShardId, Long> entry : shardSizes.entrySet()) {\n+ out.writeString(entry.getKey().getIndex());\n+ out.writeVInt(entry.getKey().getId());\n+ out.writeLong(entry.getValue());\n+ }\n+\n+ indexClassification.writeTo(out);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return usages.hashCode() ^\n+ 31 * shardSizes.hashCode() ^\n+ 31 * indexClassification.hashCode();\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (obj == null) {\n+ return false;\n+ }\n+ if (obj instanceof ClusterInfo) {\n+ ClusterInfo other = (ClusterInfo) obj;\n+ return usages.equals(other.usages) &&\n+ shardSizes.equals(other.shardSizes) &&\n+ indexClassification.equals(other.indexClassification);\n+ } else {\n+ return false;\n+ }\n+ }\n+\n+ /**\n+ * An Enumeration representing the classification for the size of an index.\n+ * {@code SMALLEST} corresponds to the smallest index and {@code LARGEST}\n+ * corresponds to the largest, 
with the other sizes falling equidistantly\n+ * between them.\n+ */\n+ public enum IndexSize {\n+ SMALLEST,\n+ SMALL,\n+ MEDIUM,\n+ LARGE,\n+ LARGEST\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterInfo.java",
"status": "modified"
},
{
"diff": "@@ -121,6 +121,7 @@ public static void registerPrototype(String type, Custom proto) {\n // register non plugin custom parts\n registerPrototype(SnapshotsInProgress.TYPE, SnapshotsInProgress.PROTO);\n registerPrototype(RestoreInProgress.TYPE, RestoreInProgress.PROTO);\n+ registerPrototype(ClusterInfo.TYPE, ClusterInfo.PROTO);\n }\n \n @Nullable\n@@ -289,6 +290,12 @@ public String prettyPrint() {\n sb.append(nodes().prettyPrint());\n sb.append(routingTable().prettyPrint());\n sb.append(readOnlyRoutingNodes().prettyPrint());\n+ sb.append(\"custom:\").append(\"\\n\");\n+ for (ObjectObjectCursor<String, Custom> cursor : customs) {\n+ sb.append(cursor.key).append(\":\\n\");\n+ sb.append(cursor.value.toString()).append(\"\\n\");\n+ }\n+ sb.append(\"\\n\");\n return sb.toString();\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterState.java",
"status": "modified"
},
{
"diff": "@@ -75,9 +75,33 @@ public long getUsedBytes() {\n return getTotalBytes() - getFreeBytes();\n }\n \n+ @Override\n+ public int hashCode() {\n+ return nodeId.hashCode() ^\n+ 31 * nodeName.hashCode() ^\n+ 31 * Long.hashCode(freeBytes) ^\n+ 31 * Long.hashCode(totalBytes);\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (obj == null) {\n+ return false;\n+ }\n+ if (obj instanceof DiskUsage) {\n+ DiskUsage other = (DiskUsage) obj;\n+ return nodeId.equals(other.nodeId) &&\n+ nodeName.equals(other.nodeName) &&\n+ freeBytes == other.freeBytes &&\n+ totalBytes == other.totalBytes;\n+ } else {\n+ return false;\n+ }\n+ }\n+\n @Override\n public String toString() {\n- return \"[\" + nodeId + \"][\" + nodeName + \"] free: \" + new ByteSizeValue(getFreeBytes()) +\n- \"[\" + Strings.format1Decimals(getFreeDiskAsPercentage(), \"%\") + \"]\";\n+ return \"[\" + nodeId + \"][\" + nodeName + \"] used: \" + new ByteSizeValue(getUsedBytes()) +\n+ \"[\" + Strings.format1Decimals(getUsedDiskAsPercentage(), \"%\") + \"]\";\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/DiskUsage.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,7 @@ private final static class Holder {\n \n private EmptyClusterInfoService() {\n super(Settings.EMPTY);\n- emptyClusterInfo = new ClusterInfo(ImmutableMap.<String, DiskUsage>of(), ImmutableMap.<String, Long>of());\n+ emptyClusterInfo = new ClusterInfo();\n }\n \n public static EmptyClusterInfoService getInstance() {",
"filename": "core/src/main/java/org/elasticsearch/cluster/EmptyClusterInfoService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,199 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster;\n+\n+import com.google.common.collect.ImmutableMap;\n+import com.google.common.collect.Maps;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.io.stream.Streamable;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+\n+import java.io.IOException;\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+/**\n+ * IndexClassification is used to classify indices by size into different\n+ * {@code ClusterInfo.IndexSize} categories.\n+ */\n+public class IndexClassification implements Streamable, ToXContent {\n+\n+ private volatile ImmutableMap<String, ClusterInfo.IndexSize> indexClassifications;\n+\n+ public IndexClassification(Map<String, ClusterInfo.IndexSize> indexClassifications) {\n+ this.indexClassifications = ImmutableMap.copyOf(indexClassifications);\n+ }\n+\n+ // Used for serialization in ClusterInfo\n+ IndexClassification() {\n+ }\n+\n+ public static IndexClassification classifyIndices(final Map<String, Long> indexSizes,\n+ ESLogger logger) {\n+ long maxSize = 0;\n+ long minSize = Long.MAX_VALUE;\n+ for (Map.Entry<String, Long> idx : indexSizes.entrySet()) {\n+ maxSize = Math.max(maxSize, idx.getValue());\n+ minSize = Math.min(minSize, idx.getValue());\n+ }\n+ Map<String, ClusterInfo.IndexSize> newIndexClassifications = new HashMap<>(indexSizes.size());\n+\n+ long TINY = minSize;\n+ long HUGE = maxSize;\n+ long MEDIUM = HUGE - ((HUGE - TINY) / 2);\n+ long SMALL = MEDIUM - ((MEDIUM - TINY) / 2);\n+ long LARGE = HUGE - ((HUGE - MEDIUM) / 2);\n+\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"SMALLEST: [{}], SMALL: [{}], MEDIUM: [{}], LARGE: [{}], LARGEST: [{}]\",\n+ new ByteSizeValue(TINY),\n+ new ByteSizeValue(SMALL),\n+ new ByteSizeValue(MEDIUM),\n+ new ByteSizeValue(LARGE),\n+ new ByteSizeValue(HUGE));\n+ }\n+\n+ for (Map.Entry<String, Long> idx : indexSizes.entrySet()) {\n+ if (TINY == HUGE) {\n+ // This means they're all the same size, or there is only one\n+ // index, so short-circuit to MEDIUM\n+ logger.debug(\"index [{}] is [{}]\", idx.getKey(), ClusterInfo.IndexSize.MEDIUM);\n+ newIndexClassifications.put(idx.getKey(), ClusterInfo.IndexSize.MEDIUM);\n+ continue;\n+ }\n+ long size = idx.getValue();\n+ logger.debug(\"index size for [{}] is [{}]\", idx.getKey(), new ByteSizeValue(size));\n+ ClusterInfo.IndexSize sizeEnum;\n+ // split the search set in half\n+ if (size <= MEDIUM) {\n+ // less than or equal to medium\n+ if (size > SMALL) 
{\n+ // between SMALL and MEDIUM\n+ if ((size - SMALL) < (MEDIUM - size)) {\n+ sizeEnum = ClusterInfo.IndexSize.SMALL;\n+ } else {\n+ sizeEnum = ClusterInfo.IndexSize.MEDIUM;\n+ }\n+ } else {\n+ // between SMALLEST and SMALL\n+ if ((size - TINY) < (SMALL - size)) {\n+ sizeEnum = ClusterInfo.IndexSize.SMALLEST;\n+ } else {\n+ sizeEnum = ClusterInfo.IndexSize.SMALL;\n+ }\n+ }\n+ } else {\n+ // greater than MEDIUM\n+ if (size > LARGE) {\n+ // between LARGE and LARGEST\n+ if ((size - LARGE) < (HUGE - size)) {\n+ sizeEnum = ClusterInfo.IndexSize.LARGE;\n+ } else {\n+ sizeEnum = ClusterInfo.IndexSize.LARGEST;\n+ }\n+ } else {\n+ // between MEDIUM and LARGE\n+ if ((size - MEDIUM) < (LARGE - size)) {\n+ sizeEnum = ClusterInfo.IndexSize.MEDIUM;\n+ } else {\n+ sizeEnum = ClusterInfo.IndexSize.LARGE;\n+ }\n+ }\n+ }\n+ logger.debug(\"index [{}] is [{}]\", idx.getKey(), sizeEnum);\n+ newIndexClassifications.put(idx.getKey(), sizeEnum);\n+ }\n+\n+ return new IndexClassification(newIndexClassifications);\n+ }\n+\n+ public Map<String, ClusterInfo.IndexSize> getIndexClassifications() {\n+ return this.indexClassifications;\n+ }\n+ \n+ @Override\n+ public int hashCode() {\n+ return this.indexClassifications.hashCode();\n+ }\n+\n+ @Override\n+ public boolean equals(Object o) {\n+ if (o == null) {\n+ return false;\n+ }\n+ if (o instanceof IndexClassification) {\n+ IndexClassification other = (IndexClassification) o;\n+ // Just compare the maps\n+ return this.indexClassifications.equals(other.getIndexClassifications());\n+ } else {\n+ return false;\n+ }\n+ }\n+ \n+ @Override\n+ public String toString() {\n+ StringBuilder sb = new StringBuilder();\n+ for (Map.Entry<String, ClusterInfo.IndexSize> entry : indexClassifications.entrySet()) {\n+ sb.append(entry.getKey()).append(\": \");\n+ sb.append(entry.getValue());\n+ sb.append(\"\\n\");\n+ }\n+ return sb.toString();\n+ }\n+\n+ @Override\n+ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject(\"classifications\");\n+ for (Map.Entry<String, ClusterInfo.IndexSize> entry : indexClassifications.entrySet()) {\n+ builder.field(entry.getKey());\n+ builder.value(entry.getValue());\n+ }\n+ builder.endObject();\n+ return builder;\n+ }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ long mapSize = in.readVLong();\n+ Map<String, ClusterInfo.IndexSize> sizes = Maps.newHashMap();\n+ for (int i = 0; i < mapSize; i++) {\n+ String index = in.readString();\n+ String sizeStr = in.readString();\n+ ClusterInfo.IndexSize size = ClusterInfo.IndexSize.valueOf(sizeStr);\n+ sizes.put(index, size);\n+ }\n+ this.indexClassifications = ImmutableMap.copyOf(sizes);\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ out.writeVLong(indexClassifications.size());\n+ for (Map.Entry<String, ClusterInfo.IndexSize> entry : indexClassifications.entrySet()) {\n+ String index = entry.getKey();\n+ ClusterInfo.IndexSize size = entry.getValue();\n+ out.writeString(index);\n+ out.writeString(size.toString());\n+ }\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/cluster/IndexClassification.java",
"status": "added"
},
{
"diff": "@@ -37,8 +37,10 @@\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.monitor.fs.FsStats;\n import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -61,15 +63,20 @@\n */\n public class InternalClusterInfoService extends AbstractComponent implements ClusterInfoService, LocalNodeMasterListener, ClusterStateListener {\n \n+ /** Whether information about the cluster should be gathered */\n+ public static final String INTERNAL_CLUSTER_INFO_ENABLED = \"cluster.info.update.enabled\";\n+ /** How often node disk usage and shard sizes should be fetched */\n public static final String INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL = \"cluster.info.update.interval\";\n+ /** How long to wait for a response for node disk and shard sizes */\n public static final String INTERNAL_CLUSTER_INFO_TIMEOUT = \"cluster.info.update.timeout\";\n \n- private volatile TimeValue updateFrequency;\n-\n private volatile ImmutableMap<String, DiskUsage> usages;\n- private volatile ImmutableMap<String, Long> shardSizes;\n+ private volatile ImmutableMap<ShardId, Long> shardSizes = ImmutableMap.of();\n+ private volatile ImmutableMap<String, Long> indexSizes = ImmutableMap.of();\n+ private volatile IndexClassification indexClassification = new IndexClassification();\n private volatile boolean isMaster = false;\n private volatile boolean enabled;\n+ private volatile TimeValue updateFrequency;\n private volatile TimeValue fetchTimeout;\n private final TransportNodesStatsAction transportNodesStatsAction;\n private final TransportIndicesStatsAction transportIndicesStatsAction;\n@@ -80,8 +87,8 @@ public class InternalClusterInfoService extends AbstractComponent implements Clu\n @Inject\n public InternalClusterInfoService(Settings settings, NodeSettingsService nodeSettingsService,\n TransportNodesStatsAction transportNodesStatsAction,\n- TransportIndicesStatsAction transportIndicesStatsAction, ClusterService clusterService,\n- ThreadPool threadPool) {\n+ TransportIndicesStatsAction transportIndicesStatsAction,\n+ ClusterService clusterService, ThreadPool threadPool) {\n super(settings);\n this.usages = ImmutableMap.of();\n this.shardSizes = ImmutableMap.of();\n@@ -91,7 +98,7 @@ public InternalClusterInfoService(Settings settings, NodeSettingsService nodeSet\n this.threadPool = threadPool;\n this.updateFrequency = settings.getAsTime(INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, TimeValue.timeValueSeconds(30));\n this.fetchTimeout = settings.getAsTime(INTERNAL_CLUSTER_INFO_TIMEOUT, TimeValue.timeValueSeconds(15));\n- this.enabled = settings.getAsBoolean(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED, true);\n+ this.enabled = settings.getAsBoolean(INTERNAL_CLUSTER_INFO_ENABLED, true);\n nodeSettingsService.addListener(new ApplySettings());\n \n // Add InternalClusterInfoService to listen for Master changes\n@@ -105,7 +112,7 @@ class ApplySettings implements NodeSettingsService.Listener {\n public void onRefreshSettings(Settings settings) {\n TimeValue newUpdateFrequency = settings.getAsTime(INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, null);\n // ClusterInfoService is only enabled if the 
DiskThresholdDecider is enabled\n- Boolean newEnabled = settings.getAsBoolean(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED, null);\n+ Boolean newEnabled = settings.getAsBoolean(INTERNAL_CLUSTER_INFO_ENABLED, null);\n \n if (newUpdateFrequency != null) {\n if (newUpdateFrequency.getMillis() < TimeValue.timeValueSeconds(10).getMillis()) {\n@@ -123,9 +130,9 @@ public void onRefreshSettings(Settings settings) {\n InternalClusterInfoService.this.fetchTimeout = newFetchTimeout;\n }\n \n-\n- // We don't log about enabling it here, because the DiskThresholdDecider will already be logging about enable/disable\n if (newEnabled != null) {\n+ logger.info(\"updating cluster info service enabled [{}] from [{}] to [{}]\",\n+ INTERNAL_CLUSTER_INFO_ENABLED, enabled, newEnabled);\n InternalClusterInfoService.this.enabled = newEnabled;\n }\n }\n@@ -209,7 +216,7 @@ public void clusterChanged(ClusterChangedEvent event) {\n \n @Override\n public ClusterInfo getClusterInfo() {\n- return new ClusterInfo(usages, shardSizes);\n+ return new ClusterInfo(usages, shardSizes, indexClassification);\n }\n \n @Override\n@@ -357,16 +364,30 @@ public void onFailure(Throwable e) {\n @Override\n public void onResponse(IndicesStatsResponse indicesStatsResponse) {\n ShardStats[] stats = indicesStatsResponse.getShards();\n- HashMap<String, Long> newShardSizes = new HashMap<>();\n+ HashMap<ShardId, Long> newShardSizes = new HashMap<>();\n+ HashMap<String, Long> newIndexSizes = new HashMap<>();\n for (ShardStats s : stats) {\n long size = s.getStats().getStore().sizeInBytes();\n- String sid = shardIdentifierFromRouting(s.getShardRouting());\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"shard: {} size: {}\", sid, size);\n+ ShardRouting routing = s.getShardRouting();\n+ if (routing.primary()) {\n+ // Only track primary shards in the shard sizes map\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"{} size: [{}/{}]\",\n+ routing.shardId(), size, new ByteSizeValue(size));\n+ }\n+ newShardSizes.put(routing.shardId(), size);\n }\n- newShardSizes.put(sid, size);\n+ // Add this shard to the index size\n+ // TODO use .getOrDefault(s.getIndex(), 0L) once we update to Java 8\n+ Long indexSize = newIndexSizes.get(s.getIndex());\n+ if (indexSize == null) {\n+ indexSize = 0L;\n+ }\n+ indexSize += size;\n+ newIndexSizes.put(s.getIndex(), indexSize);\n }\n shardSizes = ImmutableMap.copyOf(newShardSizes);\n+ indexSizes = ImmutableMap.copyOf(newIndexSizes);\n }\n \n @Override\n@@ -399,21 +420,41 @@ public void onFailure(Throwable e) {\n logger.warn(\"Failed to update shard information for ClusterInfoUpdateJob within 15s timeout\");\n }\n \n+ indexClassification = IndexClassification.classifyIndices(indexSizes, logger);\n+\n+ final ClusterInfo newClusterInfo = getClusterInfo();\n for (Listener l : listeners) {\n try {\n- l.onNewInfo(getClusterInfo());\n+ l.onNewInfo(newClusterInfo);\n } catch (Exception e) {\n logger.info(\"Failed executing ClusterInfoService listener\", e);\n }\n }\n- }\n- }\n \n- /**\n- * Method that incorporates the ShardId for the shard into a string that\n- * includes a 'p' or 'r' depending on whether the shard is a primary.\n- */\n- public static String shardIdentifierFromRouting(ShardRouting shardRouting) {\n- return shardRouting.shardId().toString() + \"[\" + (shardRouting.primary() ? 
\"p\" : \"r\") + \"]\";\n+ clusterService.submitStateUpdateTask(\"update_cluster_info\", new ClusterStateUpdateTask() {\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ ClusterInfo currentInfo = currentState.custom(ClusterInfo.TYPE);\n+ // Disk usage is always changing, so only update the\n+ // cluster state if the classifications or shard sizes\n+ // have changed\n+ if (currentInfo != null && currentInfo.getShardSizes() != null &&\n+ currentInfo.getIndexClassification() != null &&\n+ newClusterInfo.getShardSizes().equals(currentInfo.getShardSizes())\n+ && newClusterInfo.getIndexClassification().equals(currentInfo.getIndexClassification())) {\n+ return currentState;\n+ } else {\n+ ClusterState.Builder builder = ClusterState.builder(currentState);\n+ builder.putCustom(ClusterInfo.TYPE, newClusterInfo);\n+ return builder.build();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Throwable t) {\n+ logger.warn(\"Failed to update cluster state with new cluster info\", t);\n+ }\n+ });\n+ }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java",
"status": "modified"
},
{
"diff": "@@ -34,12 +34,11 @@\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.RatioValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.node.settings.NodeSettingsService;\n \n import java.util.Map;\n \n-import static org.elasticsearch.cluster.InternalClusterInfoService.shardIdentifierFromRouting;\n-\n /**\n * The {@link DiskThresholdDecider} checks that the node a shard is potentially\n * being allocated to has enough disk space.\n@@ -276,7 +275,7 @@ public TimeValue getRerouteInterval() {\n * If subtractShardsMovingAway is set then the size of shards moving away is subtracted from the total size\n * of all shards\n */\n- public long sizeOfRelocatingShards(RoutingNode node, Map<String, Long> shardSizes, boolean subtractShardsMovingAway) {\n+ public long sizeOfRelocatingShards(RoutingNode node, Map<ShardId, Long> shardSizes, boolean subtractShardsMovingAway) {\n long totalSize = 0;\n for (ShardRouting routing : node.shardsWithState(ShardRoutingState.RELOCATING, ShardRoutingState.INITIALIZING)) {\n if (routing.initializing() && routing.relocatingNodeId() != null) {\n@@ -288,8 +287,8 @@ public long sizeOfRelocatingShards(RoutingNode node, Map<String, Long> shardSize\n return totalSize;\n }\n \n- private long getShardSize(ShardRouting routing, Map<String, Long> shardSizes) {\n- Long shardSize = shardSizes.get(shardIdentifierFromRouting(routing));\n+ private long getShardSize(ShardRouting routing, Map<ShardId, Long> shardSizes) {\n+ Long shardSize = shardSizes.get(routing.shardId());\n return shardSize == null ? 0 : shardSize;\n }\n \n@@ -322,7 +321,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n \n // Fail open if there are no disk usages available\n Map<String, DiskUsage> usages = clusterInfo.getNodeDiskUsages();\n- Map<String, Long> shardSizes = clusterInfo.getShardSizes();\n+ Map<ShardId, Long> shardSizes = clusterInfo.getShardSizes();\n if (usages.isEmpty()) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"Unable to determine disk usages for disk-aware allocation, allowing allocation\");\n@@ -432,7 +431,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n }\n \n // Secondly, check that allocating the shard to this node doesn't put it above the high watermark\n- Long shardSize = shardSizes.get(shardIdentifierFromRouting(shardRouting));\n+ Long shardSize = shardSizes.get(shardRouting.shardId());\n shardSize = shardSize == null ? 0 : shardSize;\n double freeSpaceAfterShard = this.freeDiskPercentageAfterShardAssigned(usage, shardSize);\n long freeBytesAfterShard = freeBytes - shardSize;\n@@ -490,7 +489,7 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl\n }\n \n if (includeRelocations) {\n- Map<String, Long> shardSizes = clusterInfo.getShardSizes();\n+ Map<ShardId, Long> shardSizes = clusterInfo.getShardSizes();\n long relocatingShardsSize = sizeOfRelocatingShards(node, shardSizes, true);\n DiskUsage usageIncludingRelocations = new DiskUsage(node.nodeId(), node.node().name(),\n usage.getTotalBytes(), usage.getFreeBytes() - relocatingShardsSize);",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.plugins.AbstractPlugin;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n@@ -163,7 +164,7 @@ public void testClusterInfoServiceCollectsInformation() throws Exception {\n ClusterInfo info = listener.get();\n assertNotNull(\"info should not be null\", info);\n Map<String, DiskUsage> usages = info.getNodeDiskUsages();\n- Map<String, Long> shardSizes = info.getShardSizes();\n+ Map<ShardId, Long> shardSizes = info.getShardSizes();\n assertNotNull(usages);\n assertNotNull(shardSizes);\n assertThat(\"some usages are populated\", usages.values().size(), Matchers.equalTo(2));\n@@ -176,6 +177,17 @@ public void testClusterInfoServiceCollectsInformation() throws Exception {\n logger.info(\"--> shard size: {}\", size);\n assertThat(\"shard size is greater than 0\", size, greaterThan(0L));\n }\n+ // Make sure the stats show up in the cluster state\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ ClusterInfo cInfo = state.custom(ClusterInfo.TYPE);\n+ assertThat(cInfo.getIndexClassification().get(\"test\"), Matchers.equalTo(ClusterInfo.IndexSize.MEDIUM));\n+ assertThat(cInfo.getShardSizes().get(new ShardId(\"test\", 0)), Matchers.greaterThan(0L));\n+ assertThat(cInfo.getNodeDiskUsages().size(), Matchers.greaterThan(0));\n+ }\n+ });\n }\n \n @Test",
"filename": "core/src/test/java/org/elasticsearch/cluster/ClusterInfoServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,9 @@\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n+import java.util.HashMap;\n import java.util.List;\n+import java.util.Map;\n \n import static org.elasticsearch.cluster.metadata.AliasMetaData.newAliasMetaDataBuilder;\n import static org.elasticsearch.test.XContentTestUtils.convertToMap;\n@@ -630,7 +632,7 @@ public ClusterState.Builder remove(ClusterState.Builder builder, String name) {\n \n @Override\n public ClusterState.Custom randomCreate(String name) {\n- switch (randomIntBetween(0, 1)) {\n+ switch (randomIntBetween(0, 2)) {\n case 0:\n return new SnapshotsInProgress(new SnapshotsInProgress.Entry(\n new SnapshotId(randomName(\"repo\"), randomName(\"snap\")),\n@@ -645,6 +647,14 @@ public ClusterState.Custom randomCreate(String name) {\n RestoreInProgress.State.fromValue((byte) randomIntBetween(0, 3)),\n ImmutableList.<String>of(),\n ImmutableMap.<ShardId, RestoreInProgress.ShardRestoreStatus>of()));\n+ case 2:\n+ Map<String, DiskUsage> usages = new HashMap<>();\n+ Map<ShardId, Long> sizes = new HashMap<>();\n+ Map<String, ClusterInfo.IndexSize> classifications = new HashMap<>();\n+ usages.put(\"node1\", new DiskUsage(\"nodeid\", \"node1\", 100, 50));\n+ sizes.put(new ShardId(\"test\", 0), 100L);\n+ classifications.put(\"test\", ClusterInfo.IndexSize.MEDIUM);\n+ return new ClusterInfo(usages, sizes, classifications);\n default:\n throw new IllegalArgumentException(\"Shouldn't be here\");\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/ClusterStateDiffTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,100 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster;\n+\n+import org.elasticsearch.common.io.stream.ByteBufferStreamInput;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+\n+import org.junit.Test;\n+\n+import java.nio.ByteBuffer;\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+public class IndexClassificationTests extends ElasticsearchTestCase {\n+\n+ @Test\n+ public void testIndexSizeClassification() throws Exception {\n+ testSequence(1, 2, 3, 4, 5);\n+ testSequence(1, 5, 10, 15, 20);\n+ testSequence(1, 17, 27, 45, 55);\n+\n+ Map<String, Long> indexSizes = new HashMap<>(2);\n+\n+ indexSizes.put(\"single\", 1L);\n+ Map<String, ClusterInfo.IndexSize> c = IndexClassification.classifyIndices(indexSizes, logger).getIndexClassifications();\n+ assertTrue(c.get(\"single\") == ClusterInfo.IndexSize.MEDIUM);\n+\n+ indexSizes.clear();\n+\n+ indexSizes.put(\"foo\", 3L);\n+ indexSizes.put(\"bar\", 3L);\n+ c = IndexClassification.classifyIndices(indexSizes, logger).getIndexClassifications();\n+ assertTrue(c.get(\"foo\") == ClusterInfo.IndexSize.MEDIUM);\n+ assertTrue(c.get(\"bar\") == ClusterInfo.IndexSize.MEDIUM);\n+ }\n+\n+ @Test\n+ public void testIndexSizeStreaming() throws Exception {\n+ Map<String, Long> indexSizes = new HashMap<>(2);\n+\n+ indexSizes.put(\"foo\", 1L);\n+ indexSizes.put(\"bar\", 100L);\n+ IndexClassification ic = IndexClassification.classifyIndices(indexSizes, logger);\n+ assertTrue(ic.getIndexClassifications().get(\"foo\") == ClusterInfo.IndexSize.SMALLEST);\n+ assertTrue(ic.getIndexClassifications().get(\"bar\") == ClusterInfo.IndexSize.LARGEST);\n+\n+ BytesStreamOutput out = new BytesStreamOutput();\n+ ic.writeTo(out);\n+ out.flush();\n+ ByteBufferStreamInput in = new ByteBufferStreamInput(ByteBuffer.wrap(out.bytes().toBytes()));\n+ IndexClassification ic2 = new IndexClassification();\n+ ic2.readFrom(in);\n+\n+ assertTrue(ic.getIndexClassifications().get(\"foo\") == ClusterInfo.IndexSize.SMALLEST);\n+ assertTrue(ic.getIndexClassifications().get(\"bar\") == ClusterInfo.IndexSize.LARGEST);\n+\n+ assertEquals(ic.hashCode(), ic2.hashCode());\n+ assertEquals(ic, ic2);\n+ }\n+\n+ private void testSequence(long tiny, long small, long medium, long large, long huge) {\n+ Map<String, Long> indexSizes = new HashMap<>(5);\n+\n+ indexSizes.put(\"tiny\", tiny);\n+ indexSizes.put(\"small\", small);\n+ indexSizes.put(\"medium\", medium);\n+ indexSizes.put(\"large\", large);\n+ indexSizes.put(\"huge\", huge);\n+ Map<String, ClusterInfo.IndexSize> c = IndexClassification.classifyIndices(indexSizes, logger).getIndexClassifications();\n+\n+ assertTrue(\"should be SMALLEST:\" + 
c.get(\"tiny\"),\n+ c.get(\"tiny\") == ClusterInfo.IndexSize.SMALLEST);\n+ assertTrue(\"should be SMALL:\" + c.get(\"small\"),\n+ c.get(\"small\") == ClusterInfo.IndexSize.SMALL);\n+ assertTrue(\"should be MEDIUM:\" + c.get(\"medium\"),\n+ c.get(\"medium\") == ClusterInfo.IndexSize.MEDIUM);\n+ assertTrue(\"should be LARGE:\" + c.get(\"large\"),\n+ c.get(\"large\") == ClusterInfo.IndexSize.LARGE);\n+ assertTrue(\"should be LARGEST:\" + c.get(\"huge\"),\n+ c.get(\"huge\") == ClusterInfo.IndexSize.LARGEST);\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/cluster/IndexClassificationTests.java",
"status": "added"
},
{
"diff": "@@ -74,10 +74,11 @@ public void diskThresholdTest() {\n usages.put(\"node3\", new DiskUsage(\"node3\", \"node3\", 100, 60)); // 40% used\n usages.put(\"node4\", new DiskUsage(\"node4\", \"node4\", 100, 80)); // 20% used\n \n- Map<String, Long> shardSizes = new HashMap<>();\n- shardSizes.put(\"[test][0][p]\", 10L); // 10 bytes\n- shardSizes.put(\"[test][0][r]\", 10L);\n- final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ Map<ShardId, Long> shardSizes = new HashMap<>();\n+ shardSizes.put(new ShardId(\"test\", 0), 10L); // 10 bytes\n+ final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages),\n+ ImmutableMap.copyOf(shardSizes),\n+ ImmutableMap.<String, ClusterInfo.IndexSize>of());\n \n AllocationDeciders deciders = new AllocationDeciders(Settings.EMPTY,\n new HashSet<>(Arrays.asList(\n@@ -269,10 +270,11 @@ public void diskThresholdWithAbsoluteSizesTest() {\n usages.put(\"node4\", new DiskUsage(\"node4\", \"n4\", 100, 80)); // 20% used\n usages.put(\"node5\", new DiskUsage(\"node5\", \"n5\", 100, 85)); // 15% used\n \n- Map<String, Long> shardSizes = new HashMap<>();\n- shardSizes.put(\"[test][0][p]\", 10L); // 10 bytes\n- shardSizes.put(\"[test][0][r]\", 10L);\n- final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ Map<ShardId, Long> shardSizes = new HashMap<>();\n+ shardSizes.put(new ShardId(\"test\", 0), 10L); // 10 bytes\n+ final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages),\n+ ImmutableMap.copyOf(shardSizes),\n+ ImmutableMap.<String, ClusterInfo.IndexSize>of());\n \n AllocationDeciders deciders = new AllocationDeciders(Settings.EMPTY,\n new HashSet<>(Arrays.asList(\n@@ -334,7 +336,9 @@ public void addListener(Listener listener) {\n \n // Make node without the primary now habitable to replicas\n usages.put(nodeWithoutPrimary, new DiskUsage(nodeWithoutPrimary, \"\", 100, 35)); // 65% used\n- final ClusterInfo clusterInfo2 = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ final ClusterInfo clusterInfo2 = new ClusterInfo(ImmutableMap.copyOf(usages),\n+ ImmutableMap.copyOf(shardSizes),\n+ ImmutableMap.<String, ClusterInfo.IndexSize>of());\n cis = new ClusterInfoService() {\n @Override\n public ClusterInfo getClusterInfo() {\n@@ -531,9 +535,11 @@ public void diskThresholdWithShardSizes() {\n usages.put(\"node1\", new DiskUsage(\"node1\", \"n1\", 100, 31)); // 69% used\n usages.put(\"node2\", new DiskUsage(\"node2\", \"n2\", 100, 1)); // 99% used\n \n- Map<String, Long> shardSizes = new HashMap<>();\n- shardSizes.put(\"[test][0][p]\", 10L); // 10 bytes\n- final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ Map<ShardId, Long> shardSizes = new HashMap<>();\n+ shardSizes.put(new ShardId(\"test\", 0), 10L); // 10 bytes\n+ final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages),\n+ ImmutableMap.copyOf(shardSizes),\n+ ImmutableMap.<String, ClusterInfo.IndexSize>of());\n \n AllocationDeciders deciders = new AllocationDeciders(Settings.EMPTY,\n new HashSet<>(Arrays.asList(\n@@ -597,10 +603,11 @@ public void unknownDiskUsageTest() {\n usages.put(\"node2\", new DiskUsage(\"node2\", \"node2\", 100, 50)); // 50% used\n usages.put(\"node3\", new DiskUsage(\"node3\", \"node3\", 100, 0)); // 100% used\n \n- Map<String, Long> shardSizes = new HashMap<>();\n- shardSizes.put(\"[test][0][p]\", 10L); // 10 bytes\n- 
shardSizes.put(\"[test][0][r]\", 10L); // 10 bytes\n- final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ Map<ShardId, Long> shardSizes = new HashMap<>();\n+ shardSizes.put(new ShardId(\"test\", 0), 10L); // 10 bytes\n+ final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages),\n+ ImmutableMap.copyOf(shardSizes),\n+ ImmutableMap.<String, ClusterInfo.IndexSize>of());\n \n AllocationDeciders deciders = new AllocationDeciders(Settings.EMPTY,\n new HashSet<>(Arrays.asList(\n@@ -699,12 +706,12 @@ public void testShardRelocationsTakenIntoAccount() {\n usages.put(\"node2\", new DiskUsage(\"node2\", \"n2\", 100, 40)); // 60% used\n usages.put(\"node2\", new DiskUsage(\"node3\", \"n3\", 100, 40)); // 60% used\n \n- Map<String, Long> shardSizes = new HashMap<>();\n- shardSizes.put(\"[test][0][p]\", 14L); // 14 bytes\n- shardSizes.put(\"[test][0][r]\", 14L);\n- shardSizes.put(\"[test2][0][p]\", 1L); // 1 bytes\n- shardSizes.put(\"[test2][0][r]\", 1L);\n- final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ Map<ShardId, Long> shardSizes = new HashMap<>();\n+ shardSizes.put(new ShardId(\"test\", 0), 14L); // 14 bytes\n+ shardSizes.put(new ShardId(\"test2\", 0), 1L); // 1 bytes\n+ final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages),\n+ ImmutableMap.copyOf(shardSizes),\n+ ImmutableMap.<String, ClusterInfo.IndexSize>of());\n \n AllocationDeciders deciders = new AllocationDeciders(Settings.EMPTY,\n new HashSet<>(Arrays.asList(\n@@ -804,10 +811,12 @@ public void testCanRemainWithShardRelocatingAway() {\n usages.put(\"node1\", new DiskUsage(\"node1\", \"n1\", 100, 20)); // 80% used\n usages.put(\"node2\", new DiskUsage(\"node2\", \"n2\", 100, 100)); // 0% used\n \n- Map<String, Long> shardSizes = new HashMap<>();\n- shardSizes.put(\"[test][0][p]\", 40L);\n- shardSizes.put(\"[test][1][p]\", 40L);\n- final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ Map<ShardId, Long> shardSizes = new HashMap<>();\n+ shardSizes.put(new ShardId(\"test\", 0), 40L);\n+ shardSizes.put(new ShardId(\"test\", 1), 40L);\n+ final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages),\n+ ImmutableMap.copyOf(shardSizes),\n+ ImmutableMap.<String, ClusterInfo.IndexSize>of());\n \n DiskThresholdDecider diskThresholdDecider = new DiskThresholdDecider(diskSettings);\n MetaData metaData = MetaData.builder()",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java",
"status": "modified"
},
{
"diff": "@@ -47,9 +47,7 @@ public void testDynamicSettings() {\n ClusterInfoService cis = new ClusterInfoService() {\n @Override\n public ClusterInfo getClusterInfo() {\n- Map<String, DiskUsage> usages = new HashMap<>();\n- Map<String, Long> shardSizes = new HashMap<>();\n- return new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+ return new ClusterInfo();\n }\n \n @Override",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderUnitTests.java",
"status": "modified"
},
{
"diff": "@@ -24,10 +24,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;\n-import org.elasticsearch.cluster.ClusterChangedEvent;\n-import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.common.Priority;\n@@ -128,6 +125,7 @@ public void testNodeFailuresAreProcessedOnce() throws ExecutionException, Interr\n Settings defaultSettings = Settings.builder()\n .put(FaultDetection.SETTING_PING_TIMEOUT, \"1s\")\n .put(FaultDetection.SETTING_PING_RETRIES, \"1\")\n+ .put(InternalClusterInfoService.INTERNAL_CLUSTER_INFO_ENABLED, false)\n .put(\"discovery.type\", \"zen\")\n .build();\n ",
"filename": "core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryTests.java",
"status": "modified"
},
{
"diff": "@@ -92,7 +92,7 @@ public void testUnassignedShardAndEmptyNodesInRoutingTable() throws Exception {\n .nodes(DiscoveryNodes.EMPTY_NODES)\n .build()\n );\n- ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.<String, DiskUsage>of(), ImmutableMap.<String, Long>of());\n+ ClusterInfo clusterInfo = new ClusterInfo();\n \n RoutingAllocation routingAllocation = new RoutingAllocation(allocationDeciders, routingNodes, current.nodes(), clusterInfo);\n allocator.allocateUnassigned(routingAllocation);",
"filename": "core/src/test/java/org/elasticsearch/indices/state/RareClusterStateTests.java",
"status": "modified"
}
]
} |
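For illustration, here is a minimal stand-alone sketch of the equidistant classification described in the commit message above: the smallest and largest index sizes anchor the scale, MEDIUM sits halfway between them, SMALL and LARGE sit halfway between MEDIUM and each end, and every index is assigned the nearest anchor. The class and method names are hypothetical, and the tie-breaking may differ slightly from the PR's `IndexClassification.classifyIndices`.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the equidistant size bucketing; not the PR's actual code.
public class IndexSizeBuckets {

    enum IndexSize { SMALLEST, SMALL, MEDIUM, LARGE, LARGEST }

    static Map<String, IndexSize> classify(Map<String, Long> indexSizes) {
        long min = Long.MAX_VALUE, max = 0;
        for (long size : indexSizes.values()) {
            min = Math.min(min, size);
            max = Math.max(max, size);
        }
        Map<String, IndexSize> result = new HashMap<>();
        if (min >= max) {
            // single index, or all indices the same size: everything is MEDIUM
            for (String index : indexSizes.keySet()) {
                result.put(index, IndexSize.MEDIUM);
            }
            return result;
        }
        long medium = max - (max - min) / 2;
        long small = medium - (medium - min) / 2;
        long large = max - (max - medium) / 2;
        long[] anchors = { min, small, medium, large, max };
        IndexSize[] buckets = IndexSize.values();
        for (Map.Entry<String, Long> entry : indexSizes.entrySet()) {
            // assign the bucket whose anchor is closest to this index's size
            int best = 0;
            for (int i = 1; i < anchors.length; i++) {
                if (Math.abs(entry.getValue() - anchors[i]) < Math.abs(entry.getValue() - anchors[best])) {
                    best = i;
                }
            }
            result.put(entry.getKey(), buckets[best]);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Long> sizes = new HashMap<>();
        sizes.put("foo", 1L);
        sizes.put("test", 27L);
        sizes.put("wiki", 55L);
        // foo -> SMALLEST, test -> MEDIUM, wiki -> LARGEST (map print order may vary)
        System.out.println(classify(sizes));
    }
}
```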
{
"body": "In rare occasion, the translog replay phase of recovery may require mapping changes on the target shard. This can happen where indexing on the primary introduces new mappings while the recovery is in phase1. If the source node processes the new mapping from the master, allowing the indexing to proceed, before the target node does and the recovery moves to the phase 2 (translog replay) before as well, the translog operations arriving on the target node may miss the mapping changes. Since this is extremely rare, we opt for a simple fix and simply restart the recovery. Note that in the case the file copy phase will likely be very short as the files are already in sync.\n\nRestarting recoveries in such a late phase means we may need to copy segment_N files and/or files that were quickly merged away on the target again. This annoys the write-once protection in our testing infra. To work around it I have introduces a counter in the termpoary file name prefix used by the recovery code.\n\n***\\* THERE IS STILL AN ONGOING ISSUE ***: Lucene will try to write the same segment_N file (which was cleaned by the recovery code) twice triggering test failures.\n\nDue ot this issue we have decided to change approach and use a cluster observer to retry operations once the mapping have arrived (or any other change)\n\n Closes #11281\n",
"comments": [
{
"body": "@bleskes I looked at this and I think we should not try to restart the recovery in this `hyper corner case` I think we should just fail the shard, fail the recovery and start fresh. This makes the entire code less complicated and more strict. It's I think we should not design for all these corner cases and rather start fresh?\n",
"created_at": "2015-05-28T09:39:39Z"
},
{
"body": "@s1monw I pushed an update based on our discussion ... no more DelayRecoveryException\n",
"created_at": "2015-06-04T13:30:13Z"
},
{
"body": "LGTM, one question if we elect a new master will this somehow get notified and we retry?\n",
"created_at": "2015-06-04T19:27:41Z"
},
{
"body": "I'm not sure I follow the question exactly, but if the recovery code is wait on the observer it will retry on any change in the cluster state, master related or not.\n",
"created_at": "2015-06-04T20:00:22Z"
},
{
"body": "@bleskes I got confused... nevermind\n",
"created_at": "2015-06-04T20:04:34Z"
}
],
"number": 11363,
"title": "Restart recovery upon mapping changes during translog replay"
} | {
"body": "The current ExceptionsHelper.unwrapCause(exception) requires the incoming exception to support ElasticsearchWrapperException , which TranslogRecoveryPerformer.BatchOperationException doesn't implement. I opted for a more generic solution\n\nExample failure: http://build-us-00.elastic.co/job/es_g1gc_master_metal/8534/testReport/junit/org.elasticsearch.recovery/RelocationTests/testRelocationWhileRefreshing/\n\nSee #11363\n",
"number": 11583,
"review_comments": [],
"title": "Fix MapperException detection during translog ops replay"
} | {
"commits": [
{
"message": "Recovery: fix MapperException detection during translog ops replay\n\nThe current ExceptionsHelper.unwrapCause(exception) requires the incoming exception to support ElasticsearchWrapperException , which TranslogRecoveryPerformer.BatchOperationException doesn't implement. I opted for a more generic solution"
}
],
"files": [
{
"diff": "@@ -306,7 +306,8 @@ public void messageReceived(final RecoveryTranslogOperationsRequest request, fin\n try {\n recoveryStatus.indexShard().performBatchRecovery(request.operations());\n } catch (TranslogRecoveryPerformer.BatchOperationException exception) {\n- if (ExceptionsHelper.unwrapCause(exception) instanceof MapperException == false) {\n+ MapperException mapperException = (MapperException) ExceptionsHelper.unwrap(exception, MapperException.class);\n+ if (mapperException == null) {\n throw exception;\n }\n // in very rare cases a translog replay from primary is processed before a mapping update on this node",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
}
]
} |
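As a self-contained illustration of the approach the fix takes, here is a minimal generic "unwrap by type" helper that walks the cause chain and returns the first throwable of the requested class. This is only a sketch with hypothetical names; the real code uses Elasticsearch's `ExceptionsHelper.unwrap(exception, MapperException.class)`.

```java
// Hypothetical sketch of type-based cause unwrapping, similar in spirit to the
// ExceptionsHelper.unwrap(throwable, MapperException.class) call used in the fix.
public final class CauseUnwrapper {

    public static <T extends Throwable> T unwrap(Throwable t, Class<T> type) {
        int hops = 0;
        while (t != null && hops++ < 10) { // bound the walk to guard against cause cycles
            if (type.isInstance(t)) {
                return type.cast(t);
            }
            t = t.getCause();
        }
        return null; // no cause of the requested type was found
    }

    public static void main(String[] args) {
        Exception inner = new IllegalStateException("mapping not yet applied on this node");
        Exception wrapped = new RuntimeException("batch recovery operation failed", inner);
        // unwrap() finds the nested exception even though the outer type is not a dedicated wrapper.
        IllegalStateException cause = unwrap(wrapped, IllegalStateException.class);
        System.out.println(cause != null ? "wait for cluster state change and retry" : "rethrow");
    }
}
```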
{
"body": "the term vector API which is now used to build MLT queries get the term vector potentially from a different field (source) than it's executed against. This will result in empty queries and should be rewritten against it's index name field\n",
"comments": [
{
"body": "Bumping the version up to 1.7.1 for the release today.\n",
"created_at": "2015-07-16T09:55:08Z"
},
{
"body": "The `path` parameter has been removed. Closing\n",
"created_at": "2016-01-18T20:17:14Z"
}
],
"number": 11573,
"title": "MoreLikeThis Query doesn't work with `just_path` option pre 2.0"
} | {
"body": "Closes #11573\n",
"number": 11577,
"review_comments": [
{
"body": "leave -> leaf\n",
"created_at": "2015-06-10T16:27:46Z"
},
{
"body": "his -> This\n",
"created_at": "2015-06-10T16:28:07Z"
},
{
"body": "This looks good, but shouldn't we build support for `just_name` in the TVs API instead?\n",
"created_at": "2015-06-17T14:42:18Z"
},
{
"body": "TermsEnum.docs can never return null?\n",
"created_at": "2015-07-08T17:16:02Z"
}
],
"title": "Rewrite fields with `just_name` option to their actual index names in MLT"
} | {
"commits": [
{
"message": "Rewrite fields with `just_name` option to their actual index names in MLT\n\nCloses #11573"
}
],
"files": [
{
"diff": "@@ -39,10 +39,7 @@\n import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n import org.apache.lucene.document.Document;\n import org.apache.lucene.index.*;\n-import org.apache.lucene.search.BooleanClause;\n-import org.apache.lucene.search.BooleanQuery;\n-import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.TermQuery;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.search.similarities.DefaultSimilarity;\n import org.apache.lucene.search.similarities.TFIDFSimilarity;\n import org.apache.lucene.util.BytesRef;\n@@ -827,9 +824,11 @@ private void addTermFrequencies(Map<String, Int> termFreqMap, Terms vector) thro\n continue;\n }\n \n- DocsEnum docs = termsEnum.docs(null, null);\n- final int freq = docs.freq();\n-\n+ final DocsEnum docs = termsEnum.docs(null, null);\n+ int freq = 0;\n+ while(docs != null && docs.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {\n+ freq += docs.freq();\n+ }\n // increment frequency\n Int cnt = termFreqMap.get(term);\n if (cnt == null) {",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/XMoreLikeThis.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import com.google.common.collect.Lists;\n import com.google.common.collect.Sets;\n import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.index.*;\n import org.apache.lucene.queries.TermsFilter;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n@@ -43,10 +44,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Iterator;\n-import java.util.List;\n-import java.util.Set;\n+import java.util.*;\n \n import static org.elasticsearch.index.mapper.Uid.createUidAsBytes;\n \n@@ -159,7 +157,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n } else if (\"fields\".equals(currentFieldName)) {\n moreLikeFields = Lists.newLinkedList();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n- moreLikeFields.add(parseContext.indexName(parser.text()));\n+ moreLikeFields.add(parser.text());\n }\n } else if (Fields.DOCUMENT_IDS.match(currentFieldName, parseContext.parseFlags())) {\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n@@ -204,7 +202,12 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n if (moreLikeFields.isEmpty()) {\n return null;\n }\n- mltQuery.setMoreLikeFields(moreLikeFields.toArray(Strings.EMPTY_ARRAY));\n+\n+ List<String> moreLikeThisIndexFields = new ArrayList<>();\n+ for (String field : moreLikeFields) {\n+ moreLikeThisIndexFields.add(parseContext.indexName(field));\n+ }\n+ mltQuery.setMoreLikeFields(moreLikeThisIndexFields.toArray(new String[moreLikeThisIndexFields.size()]));\n \n // support for named query\n if (queryName != null) {\n@@ -237,6 +240,21 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n // fetching the items with multi-termvectors API\n org.apache.lucene.index.Fields[] likeFields = fetchService.fetch(items);\n+ for (int i = 0; i < likeFields.length; i++) {\n+ final Map<String, List<String>> fieldToIndexName = new HashMap<>();\n+ for (String field : likeFields[i]) {\n+ String indexName = parseContext.indexName(field);\n+ if (indexName.equals(field) == false) {\n+ if (fieldToIndexName.containsKey(indexName) == false) {\n+ fieldToIndexName.put(indexName, new ArrayList<String>());\n+ }\n+ fieldToIndexName.get(indexName).add(field);\n+ }\n+ }\n+ if (fieldToIndexName.isEmpty() == false) {\n+ likeFields[i] = new MappedIndexedFields(likeFields[i], fieldToIndexName);\n+ }\n+ }\n items.copyContextAndHeadersFrom(SearchContext.current());\n mltQuery.setLikeText(likeFields);\n \n@@ -296,4 +314,50 @@ private void handleExclude(BooleanQuery boolQuery, MultiTermVectorsRequest likeI\n boolQuery.add(query, BooleanClause.Occur.MUST_NOT);\n }\n }\n+\n+ /**\n+ * This class converts the actual path name to the index name if they happen to be different.\n+ * This is needed if the \"path\" : \"just_name\" feature is used in mappings where paths like `person.name` are indexed\n+ * into just the leave name of the path ie. in this case `name`. 
For this case we need to somehow map those names to\n+ * the actual fields to get the right statistics from the index when we rewrite the MLT query otherwise it will rewrite against\n+ * the full path name which is not present in the index at all in that case.\n+ * his will result in an empty query and no results are returned\n+ */\n+ private static class MappedIndexedFields extends org.apache.lucene.index.Fields {\n+ private final Map<String, List<String>> fieldToIndexName;\n+ private final org.apache.lucene.index.Fields in;\n+\n+ MappedIndexedFields(org.apache.lucene.index.Fields in, Map<String, List<String>> fieldToIndexName) {\n+ this.in = in;\n+ this.fieldToIndexName = Collections.unmodifiableMap(fieldToIndexName);\n+ }\n+\n+ @Override\n+ public Iterator<String> iterator() {\n+ return fieldToIndexName.keySet().iterator();\n+ }\n+\n+ @Override\n+ public Terms terms(String field) throws IOException {\n+ List<String> indexNames = fieldToIndexName.get(field);\n+ if (indexNames == null) {\n+ return in.terms(field);\n+ } if (indexNames.size() == 1) {\n+ return in.terms(indexNames.get(0));\n+ }else {\n+ final Terms[] terms = new Terms[indexNames.size()];\n+ final ReaderSlice[] slice = new ReaderSlice[indexNames.size()];\n+ for (int i = 0; i < terms.length; i++) {\n+ terms[i] = in.terms(indexNames.get(i));\n+ slice[i]= new ReaderSlice(0, 1, i);\n+ }\n+ return new MultiTerms(terms, slice);\n+ }\n+ }\n+\n+ @Override\n+ public int size() {\n+ return fieldToIndexName.size();\n+ }\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -39,6 +39,7 @@\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Comparator;\n import java.util.List;\n@@ -666,4 +667,105 @@ public void testMoreLikeThisMalformedArtificialDocs() throws Exception {\n assertSearchResponse(response);\n assertHitCount(response, 1);\n }\n+\n+\n+ public void testJustPath() throws IOException, ExecutionException, InterruptedException {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"morelikethis\")\n+ .field(\"path\", \"just_name\")\n+ .startObject(\"properties\")\n+ .startObject(\"from\")\n+ .field(\"type\", \"string\")\n+ .field(\"path\", \"just_name\")\n+ .endObject()\n+ .startObject(\"text\")\n+ .field(\"type\", \"string\")\n+ .field(\"path\", \"just_name\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .startObject(\"another_field\")\n+ .field(\"path\", \"just_name\")\n+ .startObject(\"properties\")\n+ .startObject(\"text\")\n+ .field(\"type\", \"string\")\n+ .field(\"path\", \"just_name\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject();\n+\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type1\", mapping));\n+ ensureGreen(\"test\");\n+ indexRandom(true, client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"{ \\\"morelikethis\\\" : { \\\"text\\\" : \\\"hello world\\\" , \\\"from\\\" : \\\"elasticsearch\\\" }, \\\"another_field\\\" : { \\\"text\\\" : \\\"foo bar\\\" }}\"),\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\" { \\\"morelikethis\\\" : { \\\"text\\\" : \\\"goodby moon\\\" , \\\"from\\\" : \\\"elasticsearch\\\" }, \\\"another_field\\\" : { \\\"text\\\" : \\\"foo bar\\\" }}\"));\n+\n+ MoreLikeThisQueryBuilder mltQuery = moreLikeThisQuery()\n+ .docs((Item)new Item(\"test\", \"type1\", \"1\").fields(\"morelikethis.text\"))\n+ .minTermFreq(0)\n+ .minDocFreq(0)\n+ .include(true)\n+ .minimumShouldMatch(\"1%\");\n+ SearchResponse response = client().prepareSearch(\"test\").setTypes(\"type1\")\n+ .setQuery(mltQuery).get();\n+ assertSearchResponse(response);\n+ assertHitCount(response, 1);\n+\n+ mltQuery = moreLikeThisQuery()\n+ .docs((Item)new Item(\"test\", \"type1\", \"1\").fields(\"morelikethis.text\", \"another_field.text\"))\n+ .minTermFreq(0)\n+ .minDocFreq(0)\n+ .include(true)\n+ .minimumShouldMatch(\"1%\");\n+ response = client().prepareSearch(\"test\").setTypes(\"type1\")\n+ .setQuery(mltQuery).get();\n+ assertSearchResponse(response);\n+ assertHitCount(response, 2);\n+\n+ mltQuery = moreLikeThisQuery(\"morelikethis.text\", \"another_field.text\")\n+ .docs((Item) new Item(\"test\", \"type1\", \"1\"))\n+ .minTermFreq(0)\n+ .minDocFreq(0)\n+ .include(true)\n+ .minimumShouldMatch(\"1%\");\n+ response = client().prepareSearch(\"test\").setTypes(\"type1\")\n+ .setQuery(mltQuery).get();\n+ assertSearchResponse(response);\n+ assertHitCount(response, 2);\n+\n+ mltQuery = moreLikeThisQuery(\"morelikethis.text\", \"another_field.text\")\n+ .likeText(\"hello world foo bar\")\n+ .minTermFreq(0)\n+ .minDocFreq(0)\n+ .include(true)\n+ .minimumShouldMatch(\"1%\");\n+ response = client().prepareSearch(\"test\").setTypes(\"type1\")\n+ .setQuery(mltQuery).get();\n+ assertSearchResponse(response);\n+ assertHitCount(response, 2);\n+\n+ mltQuery = moreLikeThisQuery(\"text\")\n+ .likeText(\"hello world foo bar\")\n+ .minTermFreq(0)\n+ .minDocFreq(0)\n+ .include(true)\n+ 
.minimumShouldMatch(\"1%\");\n+ response = client().prepareSearch(\"test\").setTypes(\"type1\")\n+ .setQuery(mltQuery).get();\n+ assertSearchResponse(response);\n+ assertHitCount(response, 2);\n+\n+ mltQuery = moreLikeThisQuery()\n+ .docs((Item)new Item(\"test\", \"type1\", \"1\").fields(\"morelikethis.text\", \"morelikethis.from\"))\n+ .minTermFreq(0)\n+ .minDocFreq(0)\n+ .include(true)\n+ .minimumShouldMatch(\"1%\");\n+ response = client().prepareSearch(\"test\").setTypes(\"type1\")\n+ .setQuery(mltQuery).get();\n+ assertSearchResponse(response);\n+ assertHitCount(response, 2);\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/mlt/MoreLikeThisActionTests.java",
"status": "modified"
}
]
} |
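For illustration, a small stand-alone sketch of the remapping idea behind `MappedIndexedFields` above: group the fields returned by the term vectors API under the name they are actually indexed as, so the rewritten MLT query targets names that exist in the index. The helper class and the leaf-name lambda below are assumptions for the example; the real parser resolves names via `parseContext.indexName(field)`.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of grouping fetched term-vector fields by their index name,
// mirroring the idea behind MappedIndexedFields in the parser change above.
public class FieldNameRemapper {

    static Map<String, List<String>> groupByIndexName(Iterable<String> fields,
                                                      Function<String, String> indexNameFor) {
        Map<String, List<String>> fieldToIndexName = new HashMap<>();
        for (String field : fields) {
            String indexName = indexNameFor.apply(field);
            if (!indexName.equals(field)) {
                // only remap fields whose index name differs from their full path
                fieldToIndexName.computeIfAbsent(indexName, k -> new ArrayList<>()).add(field);
            }
        }
        return fieldToIndexName;
    }

    public static void main(String[] args) {
        // With "path": "just_name", both full paths are indexed into the leaf field "text".
        List<String> fetched = List.of("morelikethis.text", "another_field.text");
        Map<String, List<String>> grouped =
                groupByIndexName(fetched, field -> field.substring(field.lastIndexOf('.') + 1));
        System.out.println(grouped); // {text=[morelikethis.text, another_field.text]}
    }
}
```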
{
"body": "we moved to `minimum_should_match` but we should still accept the `percent_terms_to_match` parameter until 2.0\n",
"comments": [
{
"body": "fixed see #11574 \n",
"created_at": "2015-06-10T12:35:49Z"
}
],
"number": 11572,
"title": "_mlt API ignores deprecated `percent_terms_to_match` parameter"
} | {
"body": "This was removed in 1.5 but should still be supported until the next major release.\n\nCloses #11572\n",
"number": 11574,
"review_comments": [],
"title": "Add back support for deprectated percent_terms_to_match REST parameter"
} | {
"commits": [
{
"message": "Add back support for deprectated percent_terms_to_match REST parameter\n\nThis was removed in 1.5 but should still be supported until the next major release.\n\nCloses #11572"
}
],
"files": [
{
"diff": "@@ -57,6 +57,11 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n //needs some work if it is to be used in a REST context like this too\n // See the MoreLikeThisQueryParser constants that hold the valid syntax\n mltRequest.fields(request.paramAsStringArray(\"mlt_fields\", null));\n+ if (request.hasParam(\"percent_terms_to_match\") && request.hasParam(\"minimum_should_match\") == false) {\n+ // percent_terms_to_match is deprecated!!!\n+ // only set if it's really set AND the new parameter is not present (prefer non-deprecated\n+ mltRequest.percentTermsToMatch(request.paramAsFloat(\"percent_terms_to_match\", 0));\n+ }\n mltRequest.minimumShouldMatch(request.param(\"minimum_should_match\", \"0\"));\n mltRequest.minTermFreq(request.paramAsInt(\"min_term_freq\", -1));\n mltRequest.maxQueryTerms(request.paramAsInt(\"max_query_terms\", -1));",
"filename": "src/main/java/org/elasticsearch/rest/action/mlt/RestMoreLikeThisAction.java",
"status": "modified"
}
]
} |
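
For clarity, the precedence rule in the diff above can be summarized in a standalone sketch: the deprecated `percent_terms_to_match` value is only honored when `minimum_should_match` is absent from the request. The `Map<String, String>` below is a hypothetical stand-in for the real `RestRequest` parameters; only the parameter names come from the diff.

```java
import java.util.HashMap;
import java.util.Map;

public class MltParamPrecedence {

    /**
     * Hypothetical stand-in for RestRequest handling: given raw query-string
     * parameters, return the deprecated percent_terms_to_match value only if
     * the caller did not also send minimum_should_match, mirroring the
     * precedence applied in the diff above.
     */
    static Float resolveDeprecatedPercentTermsToMatch(Map<String, String> params) {
        boolean hasDeprecated = params.containsKey("percent_terms_to_match");
        boolean hasReplacement = params.containsKey("minimum_should_match");
        if (hasDeprecated && !hasReplacement) {
            return Float.parseFloat(params.get("percent_terms_to_match"));
        }
        return null; // prefer the non-deprecated parameter (or nothing was set)
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("percent_terms_to_match", "0.3");
        System.out.println(resolveDeprecatedPercentTermsToMatch(params)); // 0.3

        params.put("minimum_should_match", "30%");
        System.out.println(resolveDeprecatedPercentTermsToMatch(params)); // null, the new parameter wins
    }
}
```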
{
"body": "I am doing _update on some doc using the url like\nhttp://10.0.0.91:9200/alias_SOME_INDEX/SOME_TYPE/SOME_ID/_update\nand my payload is like\n\n{ \n\"doc\":{\"baseName\":\"Microsoft Office\"}, \n\"upsert\":{\"baseName\":\"Microsoft Office\"}\n}\n\non some other thread i am doing PUT for the same document with same id.\n\ni get document already exists]\",\"status\":409 \nsome time it works some time i get this error\n\nso i suspect some thing to do with 2 threads doing similar thing causing this, but an \"_update\" call giving already exists kind of exception looks strange\n",
"comments": [
{
"body": "I haven't tried to replicate this, but it sounds like it is trying to do an upsert, and not handling the conflict exception correctly.\n",
"created_at": "2015-06-05T08:53:06Z"
},
{
"body": "i have made a pull request for fixing this, do i have to do any thing more? is the pull request fine?\n",
"created_at": "2015-06-12T14:43:03Z"
},
{
"body": "thanks for the pr @vedil - i've marked it for review. somebody should get to it shortly\n",
"created_at": "2015-06-12T17:13:54Z"
},
{
"body": "messed up previous pull request, so created another one https://github.com/elastic/elasticsearch/pull/12137\n",
"created_at": "2015-07-09T01:11:13Z"
},
{
"body": "Can you provide a test case that replicates the `DocumentAlreadyExistsException`?\n\nI agree that there is clearly potential for a race condition here and think that it's important that we get to the bottom of it. A test case that reproduces the exception would be helpful.\n",
"created_at": "2015-07-10T02:13:29Z"
},
{
"body": "tried creating a test case after removing my changes, and modifying my test a bit\n @Test\n public void testIndexNUpdateUpsert() {\n //update action goes to the primary, index op gets executed locally, then replicated\n String[] updateShardActions = new String[]{UpdateAction.NAME, IndexAction.NAME + \"[r]\"};\n interceptTransportActions(updateShardActions);\n\n```\n String indexOrAlias = randomIndexOrAlias();\n\n String[] indexShardActions = new String[]{IndexAction.NAME, IndexAction.NAME + \"[r]\"};\n interceptTransportActions(indexShardActions);\n\n IndexRequest indexRequest = new IndexRequest(randomIndexOrAlias(), \"type\", \"id\").source(\"field\", \"value\");\n IndexResponse indexResponse = internalCluster().clientNodeClient().index(indexRequest).actionGet();\n clearInterceptedActions();\n assertSameIndices(indexRequest, indexShardActions);\n assertThat(1L, equalTo(indexResponse.getVersion()));\n\n indexRequest = new IndexRequest(randomIndexOrAlias(), \"type\", \"id\").source(\"field\", \"value\");\n indexResponse = internalCluster().clientNodeClient().index(indexRequest).actionGet();\n clearInterceptedActions();\n //assertSameIndices(indexRequest, indexShardActions);\n assertThat(2L, equalTo(indexResponse.getVersion()));\n\n UpdateRequest updateRequest = new UpdateRequest(indexOrAlias, \"type\", \"id\").upsert(\"field2\", \"value2\").doc(\"field1\", \"value1\");\n UpdateResponse updateResponse = internalCluster().clientNodeClient().update(updateRequest).actionGet();\n assertThat( updateResponse.getVersion(), greaterThan(indexResponse.getVersion()));\n\n clearInterceptedActions();\n System.out.println(\"updateRequest \"+updateRequest +\" updateShardActions = \"+updateShardActions );\n assertSameIndicesOptionalRequests(updateRequest, updateShardActions);\n}\n```\n\nnow this is failing always by saying 1 is < 2, is my assert supposed to succeed?\ni am doing index, index, update and expecting a version > 2\n",
"created_at": "2015-07-14T18:28:06Z"
},
{
"body": "i am running test case using these vm arguments in eclipse\n-ea -Dtests.seed=806B4E52F9B20C5B -Dtests.assertion.disabled=false -Dtests.heap.size=512m -Dtests.locale=no_NO_NY -Dtests.timezone=America/Miquelon -Des.logger.level=DEBUG\n",
"created_at": "2015-07-14T18:29:00Z"
},
{
"body": "```\n @Test\n public void testIndexNUpdateUpsert() {\n //update action goes to the primary, index op gets executed locally, then replicated\n //String[] updateShardActions = new String[]{UpdateAction.NAME, IndexAction.NAME + \"[r]\"};\n //interceptTransportActions(updateShardActions);\n\n final String indexOrAlias = randomIndexOrAlias();\n final int NUMBER_OF_THREADS = 10;\n final int UPDATE_EVERY = 2;\n final CountDownLatch latch = new CountDownLatch(NUMBER_OF_THREADS);\n Thread[] threads = new Thread[NUMBER_OF_THREADS];\n for (int i = 0; i < threads.length; i++) {\n threads[i] = new Thread() {\n @Override\n public void run() {\n try {\n for (long i = 0; i < NUMBER_OF_THREADS; i++) {\n if ((i % UPDATE_EVERY) == 0) {\n UpdateRequest updateRequest = new UpdateRequest(indexOrAlias, \"type\", \"id\").upsert(\"field2\", \"value2\").doc(\"field1\", \"value1\");\n UpdateResponse updateResponse = internalCluster().clientNodeClient().update(updateRequest).actionGet();\n System.out.println(\"update response = \"+updateResponse);\n } else {\n IndexRequest indexRequest = new IndexRequest(indexOrAlias, \"type\", \"id\").source(\"field\", \"value\");\n IndexResponse indexResponse = internalCluster().clientNodeClient().index(indexRequest).actionGet();\n System.out.println(\"index response = \"+indexResponse);\n }\n }\n } finally {\n latch.countDown();\n }\n }\n };\n }\n\n for (Thread thread : threads) {\n thread.start();\n }\n\n try {\n latch.await();\n } catch (InterruptedException e) {\n e.printStackTrace();\n throw new RuntimeException();\n }\n}\n```\n\nusing this test i am able to reproduce the \"document already exists\" also i see some exceptions whose message is like \"version conflict, current [3], provided [1]\" also\n",
"created_at": "2015-07-24T16:56:54Z"
},
{
"body": "I do not think this is an actual bug.\n\n@vedil I believe the unit test you put [here](https://github.com/elastic/elasticsearch/issues/11506#issuecomment-121331165) fails because you use a new `randomIndexOrAlias()` for each request and so the requests might not all got to the same index. I you use the same index each time the test will pass.\n\nI agree that the `DocumentAlreadyExistsException` seems weird for the integration test but this is also expected I think. An update first retrieves the document via `get` and then issues an `index` request with the updated source. If a write sneaked in between `get` and issuing the `index` request we throw a `VersionConflictException` in case the document already existed before the update. However, in case the document did not exist when the `get` was executed we check that the document does still not exist when the `index` request is sent. If it does, we throw a `DocumentAlreadyExistsException`.\nTo circumvent this, you need to set the [retry on conflict parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update.html#_parameters_3) to a higher value.\nLet me know if my explanation makes sense.\n\nWe could potentially throw a `VersionConfictException` instead of a `DocumentAlreadyExistsException` to have consistent exceptions for updates or have a dedicated `UpdateFailedException` that explains what happened or just document this better. \n",
"created_at": "2015-07-28T13:21:01Z"
},
{
"body": "regarding randomIndexOrAlias, i am calling it once and using it in all requests. so within a test it will use same index.\nyour other explanation makes sense, i feel VersionConfictException makes more sense.\n",
"created_at": "2015-07-28T14:13:35Z"
},
{
"body": "After https://github.com/elastic/elasticsearch/pull/13955 is in we will get `VersionConfictException`.\n",
"created_at": "2015-10-06T19:18:17Z"
},
{
"body": "#13955 was merged, closing\n",
"created_at": "2015-10-07T15:16:19Z"
},
{
"body": "赞",
"created_at": "2018-03-21T12:42:33Z"
}
],
"number": 11506,
"title": "_update some time i get DocumentAlreadyExistsException"
} | {
"body": "fix for #11506 and https://github.com/elastic/elasticsearch/issues/9821 \n",
"number": 11542,
"review_comments": [],
"title": "Handle upserts failing when document has already been created by another process"
} | {
"commits": [
{
"message": "fix for 9821 change in String key"
},
{
"message": "not throwing DAEE if version type is internal and version is supplied, added test\n\n\tmodified: core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java\n\tmodified: core/src/test/java/org/elasticsearch/action/IndicesRequestTests.java"
},
{
"message": "Revert \"not throwing DAEE if version type is internal and version is supplied, added test\"\n\nThis reverts commit 047b3ff4f0dc5fadbd772dae8099228e3dc82f57."
}
],
"files": [
{
"diff": "@@ -168,13 +168,13 @@ private boolean setResponseFailureIfIndexMatches(AtomicArray<BulkItemResponse> r\n } else if (request instanceof DeleteRequest) {\n DeleteRequest deleteRequest = (DeleteRequest) request;\n if (index.equals(deleteRequest.index())) {\n- responses.set(idx, new BulkItemResponse(idx, \"index\", new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e)));\n+ responses.set(idx, new BulkItemResponse(idx, \"delete\", new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e)));\n return true;\n }\n } else if (request instanceof UpdateRequest) {\n UpdateRequest updateRequest = (UpdateRequest) request;\n if (index.equals(updateRequest.index())) {\n- responses.set(idx, new BulkItemResponse(idx, \"index\", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), e)));\n+ responses.set(idx, new BulkItemResponse(idx, \"update\", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), e)));\n return true;\n }\n } else {",
"filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java",
"status": "modified"
}
]
} |
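
The thread above explains that an update is internally a `get` followed by an `index`, so a concurrent writer can slip in between the two and surface a conflict (or `DocumentAlreadyExistsException`) to the caller. Below is a minimal sketch of the remedy suggested in that discussion, the retry-on-conflict parameter, using the Java client API of that era; the index, type, id and field names are illustrative, and a connected `Client` is assumed.

```java
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.Client;

public class UpsertWithRetry {

    /**
     * Upsert that tolerates the get-then-index race described above: if another
     * writer creates or changes the document between the internal get and the
     * index step, the update is retried instead of surfacing the conflict.
     */
    static UpdateResponse upsertBaseName(Client client, String index, String name) {
        UpdateRequest request = new UpdateRequest(index, "type1", "1")
                .doc("baseName", name)      // applied when the document already exists
                .upsert("baseName", name)   // indexed as-is when it does not
                .retryOnConflict(5);        // re-run the get + index cycle up to 5 times
        return client.update(request).actionGet();
    }
}
```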
{
"body": "After restarting Elasticsearch 1.5.2 (just a `sudo service elasticsearch restart`) on my single node server with about 600GB data one shard cannot get recovered and the cluster stays in the red status. I haven't noticed any problems before the restart. I'm using the indices without replicas.\n\nI am using a GCE instance with persistent disks and there is enough free space for Elasticsearch there (>170GB). \n\nError log:\n\n```\n[2015-05-20 07:48:29,774][WARN ][indices.cluster ] [test1] [[gardenhose-2015-week20][2]] marking and sending shard failed due to [failed recovery]\norg.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [gardenhose-2015-week20][2] failed recovery\n at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:162)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.index.engine.EngineCreationFailureException: [gardenhose-2015-week20][2] failed to upgrade 3x segments\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:121)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:32)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1262)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1257)\n at org.elasticsearch.index.shard.IndexShard.prepareForTranslogRecovery(IndexShard.java:784)\n at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:226)\n at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112)\n ... 3 more\nCaused by: java.io.EOFException: read past EOF: NIOFSIndexInput(path=\"/mnt/elasticsearch-test1/test/nodes/0/indices/gardenhose-2015-week20/2/index/segments_c9\")\n at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)\n at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)\n at org.apache.lucene.store.DataInput.readInt(DataInput.java:98)\n at org.apache.lucene.store.BufferedIndexInput.readInt(BufferedIndexInput.java:183)\n at org.elasticsearch.common.lucene.Lucene.indexNeeds3xUpgrading(Lucene.java:767)\n at org.elasticsearch.common.lucene.Lucene.upgradeLucene3xSegmentsMetadata(Lucene.java:778)\n at org.elasticsearch.index.engine.InternalEngine.upgrade3xSegments(InternalEngine.java:1084)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:119)\n ... 9 more\n```\n",
"comments": [
{
"body": "This looks similar to #10680\n",
"created_at": "2015-05-20T08:21:06Z"
},
{
"body": "Yes, shutdown of the cluster may have last 30s.\nI have noticed there was a big query running before the restart which cause an OOM-Exception.\n\nI have uploaded the full log file https://gist.githubusercontent.com/Hocdoc/0956a3675bd63e0af0d5/raw/elasticsearch.log . The (first) restart was at [2015-05-20 07:44:35,457].\n",
"created_at": "2015-05-20T08:32:13Z"
},
{
"body": "Actually, it says `failed to upgrade 3x segments`. Makes me think this was an old previously-corrupted index.\n",
"created_at": "2015-05-25T12:59:43Z"
},
{
"body": "Hmm I think I see what's happening here ... Elasticsearch detects false corruption if the segments file is 0 bytes, and gives a confusing message about upgrading 3x segments.\n\nWas `/mnt/elasticsearch-test1/test/nodes/0/indices/gardenhose-2015-week20/2/index/segments_c9` 0 bytes in this case?\n\nI'll work on a fix.\n",
"created_at": "2015-06-08T15:29:47Z"
},
{
"body": "This only affects 1.x; we don't do this check in 2.0 (master).\n",
"created_at": "2015-06-08T15:31:05Z"
},
{
"body": "Fixed with better error reporting in #11539.\n",
"created_at": "2015-06-08T16:37:28Z"
}
],
"number": 11249,
"title": "Corrupted shard"
} | {
"body": "ES checks the latest segments_N file to see if it was written by Lucene 3.x and upgrade if so, but if that file is truncated (e.g. 0 bytes) due prior disk full, we now give a confusing \"failed to upgrade 3x segments\" exception when you likely don't have 3.x segments_N nor segments.\n\nThis change just breaks apart the exception handling so we do a better job separating \"I could not read the latest segments_N file\" from \"I did read it and it was ancient (Lucene 3.x) and then when I tried to upgrade it, bad things happened\".\n\nClosed #11249\n",
"number": 11539,
"review_comments": [
{
"body": "thank you for renaming this.\n",
"created_at": "2015-06-08T16:24:18Z"
}
],
"title": "Improve exception message when shard has a partial commit (segments_N file) due to prior disk full"
} | {
"commits": [
{
"message": "fix exc handling to differentiate 'cannot read last commit' from 'failed to upgrade 3.x segments_N'"
}
],
"files": [
{
"diff": "@@ -789,10 +789,10 @@ public void delete() {\n * Returns <code>true</code> iff the store contains an index that contains segments that were\n * not upgraded to the lucene 4.x format.\n */\n- static boolean indexNeeds3xUpgrading(Directory directory) throws IOException {\n- final String si = SegmentInfos.getLastCommitSegmentsFileName(directory);\n- if (si != null) {\n- try (IndexInput input = directory.openInput(si, IOContext.READONCE)) {\n+ public static boolean indexNeeds3xUpgrading(Directory directory) throws IOException {\n+ final String segmentsFile = SegmentInfos.getLastCommitSegmentsFileName(directory);\n+ if (segmentsFile != null) {\n+ try (IndexInput input = directory.openInput(segmentsFile, IOContext.READONCE)) {\n return input.readInt() != CodecUtil.CODEC_MAGIC; // check if it's a 4.x commit point\n }\n }\n@@ -801,20 +801,18 @@ static boolean indexNeeds3xUpgrading(Directory directory) throws IOException {\n \n /**\n * Upgrades the segments metadata of the index to match a lucene 4.x index format. In particular it ensures that each\n- * segment has a .si file even if it was written with lucene 3.x\n+ * segment has a .si file even if it was written with lucene 3.x. Only call this if {@link #indexNeeds3xUpgrading}\n+ * returned true.\n */\n- public static boolean upgradeLucene3xSegmentsMetadata(Directory directory) throws IOException {\n- if (indexNeeds3xUpgrading(directory)) {\n- try (final IndexWriter iw = new IndexWriter(directory, new IndexWriterConfig(Version.LATEST, Lucene.STANDARD_ANALYZER)\n- .setMergePolicy(NoMergePolicy.INSTANCE)\n- .setOpenMode(IndexWriterConfig.OpenMode.APPEND))) {\n- Map<String, String> commitData = iw.getCommitData(); // this is a trick to make IW to actually do a commit - we have to preserve the last committed data as well\n- // for ES to get the translog ID back\n- iw.setCommitData(commitData);\n- iw.commit();\n- }\n- return true;\n+ public static void upgradeLucene3xSegmentsMetadata(Directory directory) throws IOException {\n+ try (final IndexWriter iw = new IndexWriter(directory, new IndexWriterConfig(Version.LATEST, Lucene.STANDARD_ANALYZER)\n+ .setMergePolicy(NoMergePolicy.INSTANCE)\n+ .setOpenMode(IndexWriterConfig.OpenMode.APPEND))) {\n+ Map<String, String> commitData = iw.getCommitData();\n+ // this is a trick to make IW to actually do a commit - we have to preserve the last committed data as well\n+ // for ES to get the translog ID back\n+ iw.setCommitData(commitData);\n+ iw.commit();\n }\n- return false;\n }\n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/Lucene.java",
"status": "modified"
},
{
"diff": "@@ -119,19 +119,12 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException {\n SearcherManager manager = null;\n boolean success = false;\n try {\n- try {\n- boolean autoUpgrade = true;\n- // If the index was created on 0.20.7 (Lucene 3.x) or earlier,\n- // it needs to be upgraded\n- autoUpgrade = Version.indexCreated(engineConfig.getIndexSettings()).onOrBefore(Version.V_0_20_7);\n- if (autoUpgrade) {\n- logger.debug(\"[{}] checking for 3x segments to upgrade\", shardId);\n- upgrade3xSegments(store);\n- } else {\n- logger.debug(\"[{}] skipping check for 3x segments\", shardId);\n- }\n- } catch (IOException ex) {\n- throw new EngineCreationFailureException(shardId, \"failed to upgrade 3x segments\", ex);\n+ // If the index was created on 0.20.7 (Lucene 3.x) or earlier, its commit point (segments_N file) needs to be upgraded:\n+ if (Version.indexCreated(engineConfig.getIndexSettings()).onOrBefore(Version.V_0_20_7)) {\n+ logger.debug(\"[{}] checking for 3x segments to upgrade\", shardId);\n+ maybeUpgrade3xSegments(store);\n+ } else {\n+ logger.debug(\"[{}] skipping check for 3x segments\", shardId);\n }\n this.onGoingRecoveries = new FlushingRecoveryCounter(this, store, logger);\n this.lastDeleteVersionPruneTimeMSec = engineConfig.getThreadPool().estimatedTimeInMillis();\n@@ -1139,10 +1132,25 @@ public void warm(AtomicReader reader) throws IOException {\n }\n }\n \n- protected void upgrade3xSegments(Store store) throws IOException {\n+ protected void maybeUpgrade3xSegments(Store store) throws EngineException {\n store.incRef();\n try {\n- if (Lucene.upgradeLucene3xSegmentsMetadata(store.directory())) {\n+ boolean doUpgrade;\n+ try {\n+ doUpgrade = Lucene.indexNeeds3xUpgrading(store.directory());\n+ } catch (IOException ex) {\n+ // This can happen if commit was truncated (e.g. due to prior disk full), and this case requires user intervention (remove the broken\n+ // commit file so Lucene falls back to a previous good one, and also clear ES's corrupted_XXX marker file), and the shard\n+ // should be OK:\n+ throw new EngineCreationFailureException(shardId, \"failed to read commit\", ex);\n+ }\n+ \n+ if (doUpgrade) {\n+ try {\n+ Lucene.upgradeLucene3xSegmentsMetadata(store.directory());\n+ } catch (IOException ex) {\n+ throw new EngineCreationFailureException(shardId, \"failed to upgrade 3.x segments_N commit point\", ex);\n+ }\n logger.debug(\"upgraded current 3.x segments file on startup\");\n } else {\n logger.debug(\"segments file is already after 3.x; not upgrading\");",
"filename": "src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -330,14 +330,7 @@ public void testNeedsUpgrading() throws URISyntaxException, IOException {\n assertTrue(Lucene.indexNeeds3xUpgrading(dir));\n }\n \n- for (int i = 0; i < 2; i++) {\n- boolean upgraded = Lucene.upgradeLucene3xSegmentsMetadata(dir);\n- if (i == 0) {\n- assertTrue(upgraded);\n- } else {\n- assertFalse(upgraded);\n- }\n- }\n+ Lucene.upgradeLucene3xSegmentsMetadata(dir);\n \n for (int i = 0; i < 2; i++) {\n assertFalse(Lucene.indexNeeds3xUpgrading(dir));",
"filename": "src/test/java/org/elasticsearch/common/lucene/LuceneTest.java",
"status": "modified"
}
]
} |
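
To make the failure mode above concrete: a Lucene 4.x commit point begins with the codec magic, so the 3.x check reads a single int and compares it, and a truncated (for example zero-byte) segments_N file fails that read with an EOF instead of giving a clean answer. Below is a standalone plain-Java sketch of the same check, with the magic value copied from Lucene's CodecUtil and an illustrative file path; it is not the ES code itself, which goes through a Lucene Directory.

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SegmentsMagicCheck {

    // Value of org.apache.lucene.codecs.CodecUtil.CODEC_MAGIC; 4.x+ commit points start with it.
    static final int CODEC_MAGIC = 0x3fd76c17;

    /** True if the commit point does NOT start with the 4.x codec magic, i.e. it looks pre-4.x. */
    static boolean looksPre4x(Path segmentsFile) throws IOException {
        try (DataInputStream in = new DataInputStream(Files.newInputStream(segmentsFile))) {
            return in.readInt() != CODEC_MAGIC; // big-endian, matching how Lucene of this era wrote ints
        }
    }

    public static void main(String[] args) throws IOException {
        Path segmentsFile = Paths.get(args[0]); // e.g. .../index/segments_c9
        try {
            System.out.println("pre-4.x commit point: " + looksPre4x(segmentsFile));
        } catch (EOFException e) {
            // A truncated or zero-byte segments_N file lands here: the check cannot even read
            // four bytes, which is the case the pull request above now reports separately.
            System.out.println("commit point is truncated; cannot tell 3.x from 4.x");
        }
    }
}
```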
{
"body": "In rare occasion, the translog replay phase of recovery may require mapping changes on the target shard. This can happen where indexing on the primary introduces new mappings while the recovery is in phase1. If the source node processes the new mapping from the master, allowing the indexing to proceed, before the target node does and the recovery moves to the phase 2 (translog replay) before as well, the translog operations arriving on the target node may miss the mapping changes. Since this is extremely rare, we opt for a simple fix and simply restart the recovery. Note that in the case the file copy phase will likely be very short as the files are already in sync.\n\nRestarting recoveries in such a late phase means we may need to copy segment_N files and/or files that were quickly merged away on the target again. This annoys the write-once protection in our testing infra. To work around it I have introduces a counter in the termpoary file name prefix used by the recovery code.\n\n***\\* THERE IS STILL AN ONGOING ISSUE ***: Lucene will try to write the same segment_N file (which was cleaned by the recovery code) twice triggering test failures.\n\nDue ot this issue we have decided to change approach and use a cluster observer to retry operations once the mapping have arrived (or any other change)\n\n Closes #11281\n",
"comments": [
{
"body": "@bleskes I looked at this and I think we should not try to restart the recovery in this `hyper corner case` I think we should just fail the shard, fail the recovery and start fresh. This makes the entire code less complicated and more strict. It's I think we should not design for all these corner cases and rather start fresh?\n",
"created_at": "2015-05-28T09:39:39Z"
},
{
"body": "@s1monw I pushed an update based on our discussion ... no more DelayRecoveryException\n",
"created_at": "2015-06-04T13:30:13Z"
},
{
"body": "LGTM, one question if we elect a new master will this somehow get notified and we retry?\n",
"created_at": "2015-06-04T19:27:41Z"
},
{
"body": "I'm not sure I follow the question exactly, but if the recovery code is wait on the observer it will retry on any change in the cluster state, master related or not.\n",
"created_at": "2015-06-04T20:00:22Z"
},
{
"body": "@bleskes I got confused... nevermind\n",
"created_at": "2015-06-04T20:04:34Z"
}
],
"number": 11363,
"title": "Restart recovery upon mapping changes during translog replay"
} | {
"body": "#11363 introduced a retry logic for the case where we have to wait on a mapping update during the translog replay phase of recovery. The retry throws or recovery stats off as it may count ops twice.\n\nSee http://build-us-00.elastic.co/job/es_g1gc_master_metal/8381/ for an example failure\n",
"number": 11536,
"review_comments": [],
"title": "Fix recovered translog ops stat counting when retrying a batch"
} | {
"commits": [
{
"message": "Recovery: fix recovered translog ops stat counting when retrying a batch\n\n#11363 introduced a retry logic for the case where we have to wait on a mapping update during the translog replay phase of recovery. The retry throws or recovery stats off as it may count ops twice."
}
],
"files": [
{
"diff": "@@ -1327,7 +1327,7 @@ private Tuple<DocumentMapper, Mapping> docMapper(String type) {\n }\n \n private final EngineConfig newEngineConfig(TranslogConfig translogConfig) {\n- final TranslogRecoveryPerformer translogRecoveryPerformer = new TranslogRecoveryPerformer(mapperService, mapperAnalyzer, queryParserService, indexAliasesService, indexCache) {\n+ final TranslogRecoveryPerformer translogRecoveryPerformer = new TranslogRecoveryPerformer(shardId, mapperService, mapperAnalyzer, queryParserService, indexAliasesService, indexCache) {\n @Override\n protected void operationProcessed() {\n assert recoveryState != null;",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -55,8 +55,10 @@ public class TranslogRecoveryPerformer {\n private final IndexCache indexCache;\n private final MapperAnalyzer mapperAnalyzer;\n private final Map<String, Mapping> recoveredTypes = new HashMap<>();\n+ private final ShardId shardId;\n \n- protected TranslogRecoveryPerformer(MapperService mapperService, MapperAnalyzer mapperAnalyzer, IndexQueryParserService queryParserService, IndexAliasesService indexAliasesService, IndexCache indexCache) {\n+ protected TranslogRecoveryPerformer(ShardId shardId, MapperService mapperService, MapperAnalyzer mapperAnalyzer, IndexQueryParserService queryParserService, IndexAliasesService indexAliasesService, IndexCache indexCache) {\n+ this.shardId = shardId;\n this.mapperService = mapperService;\n this.queryParserService = queryParserService;\n this.indexAliasesService = indexAliasesService;\n@@ -76,13 +78,33 @@ protected Tuple<DocumentMapper, Mapping> docMapper(String type) {\n */\n int performBatchRecovery(Engine engine, Iterable<Translog.Operation> operations) {\n int numOps = 0;\n- for (Translog.Operation operation : operations) {\n- performRecoveryOperation(engine, operation, false);\n- numOps++;\n+ try {\n+ for (Translog.Operation operation : operations) {\n+ performRecoveryOperation(engine, operation, false);\n+ numOps++;\n+ }\n+ } catch (Throwable t) {\n+ throw new BatchOperationException(shardId, \"failed to apply batch translog operation [\" + t.getMessage() + \"]\", numOps, t);\n }\n return numOps;\n }\n \n+ public static class BatchOperationException extends IndexShardException {\n+\n+ private final int completedOperations;\n+\n+ public BatchOperationException(ShardId shardId, String msg, int completedOperations, Throwable cause) {\n+ super(shardId, msg, cause);\n+ this.completedOperations = completedOperations;\n+ }\n+\n+\n+ /** the number of succesful operations performed before the exception was thrown */\n+ public int completedOperations() {\n+ return completedOperations;\n+ }\n+ }\n+\n private void maybeAddMappingUpdate(String type, Mapping update, String docId, boolean allowMappingUpdates) {\n if (update == null) {\n return;",
"filename": "core/src/main/java/org/elasticsearch/index/shard/TranslogRecoveryPerformer.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,6 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -506,6 +505,13 @@ public synchronized void incrementRecoveredOperations() {\n assert total == UNKNOWN || total >= recovered : \"total, if known, should be > recovered. total [\" + total + \"], recovered [\" + recovered + \"]\";\n }\n \n+ public synchronized void decrementRecoveredOperations(int ops) {\n+ recovered -= ops;\n+ assert recovered >= 0 : \"recovered operations must be non-negative. Because [\" + recovered + \"] after decrementing [\" + ops + \"]\";\n+ assert total == UNKNOWN || total >= recovered : \"total, if known, should be > recovered. total [\" + total + \"], recovered [\" + recovered + \"]\";\n+ }\n+\n+\n /**\n * returns the total number of translog operations recovered so far\n */",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java",
"status": "modified"
},
{
"diff": "@@ -47,10 +47,7 @@\n import org.elasticsearch.index.engine.RecoveryEngineException;\n import org.elasticsearch.index.mapper.MapperException;\n import org.elasticsearch.index.settings.IndexSettings;\n-import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n-import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.index.shard.IndexShardClosedException;\n-import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.shard.*;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.indices.IndicesLifecycle;\n@@ -308,10 +305,14 @@ public void messageReceived(final RecoveryTranslogOperationsRequest request, fin\n assert recoveryStatus.indexShard().recoveryState() == recoveryStatus.state();\n try {\n recoveryStatus.indexShard().performBatchRecovery(request.operations());\n- } catch (MapperException mapperException) {\n+ } catch (TranslogRecoveryPerformer.BatchOperationException exception) {\n+ if (ExceptionsHelper.unwrapCause(exception) instanceof MapperException == false) {\n+ throw exception;\n+ }\n // in very rare cases a translog replay from primary is processed before a mapping update on this node\n // which causes local mapping changes. we want to wait until these mappings are processed.\n- logger.trace(\"delaying recovery due to missing mapping changes\", mapperException);\n+ logger.trace(\"delaying recovery due to missing mapping changes (rolling back stats for [{}] ops)\", exception, exception.completedOperations());\n+ translog.decrementRecoveredOperations(exception.completedOperations());\n // we do not need to use a timeout here since the entire recovery mechanism has an inactivity protection (it will be\n // canceled)\n observer.waitForNextChange(new ClusterStateObserver.Listener() {",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
{
"diff": "@@ -1820,7 +1820,7 @@ public static class TranslogHandler extends TranslogRecoveryPerformer {\n public final AtomicInteger recoveredOps = new AtomicInteger(0);\n \n public TranslogHandler(String indexName) {\n- super(null, new MapperAnalyzer(null), null, null, null);\n+ super(new ShardId(\"test\", 0), null, new MapperAnalyzer(null), null, null, null);\n Settings settings = Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n RootObjectMapper.Builder rootBuilder = new RootObjectMapper.Builder(\"test\");\n Index index = new Index(indexName);",
"filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
},
{
"diff": "@@ -389,6 +389,10 @@ Translog createObj() {\n for (int j = iterationOps; j > 0; j--) {\n ops++;\n translog.incrementRecoveredOperations();\n+ if (randomBoolean()) {\n+ translog.decrementRecoveredOperations(1);\n+ translog.incrementRecoveredOperations();\n+ }\n }\n assertThat(translog.recoveredOperations(), equalTo(ops));\n assertThat(translog.totalOperations(), equalTo(totalOps));",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/RecoveryStateTest.java",
"status": "modified"
}
]
} |
{
"body": "I was trying to make a full, consistent backup before an upgrade. Snapshots are at a moment of time, which doesn't work if clients are still updating your indexes.\n\nI tried putting the cluster into read_only mode by setting cluster.blocks.read_only: true, but running a snapshot returned this error:\n\n```\n{\"error\":\"ClusterBlockException[blocked by: [FORBIDDEN/6/cluster read-only (api)];]\",\"status\":403}\n```\n\nPlease consider allowing snapshots to provide a consistent backup by running when in read-only mode.\n",
"comments": [
{
"body": "@webmstr Snapshots are still moment in time while updates are happening. You don't need to lock anything. A snapshot will only backup the state of the index at the point that the backup starts, it won't take any later changes into account.\n",
"created_at": "2014-10-16T18:19:02Z"
},
{
"body": "As I mentioned, snapshots - as currently implemented - are an unreasonable method of performing a consistent backup prior to an upgrade. This enhancement would have allowed that option.\n\nWithout the enhancement, snapshots should not be used before an upgrade, because the indexes may have been changed while the snapshot was running. As such, the upgrade documentation should be changed to not propose the use of snapshots as backups, and a \"full\" backup procedure should be documented in its place.\n",
"created_at": "2014-10-16T20:09:17Z"
},
{
"body": "Out of interest, why don't you just stop writing to your cluster? Reopening for discussion.\n",
"created_at": "2014-10-17T05:04:47Z"
},
{
"body": "@imotov what are your thoughts?\n",
"created_at": "2014-10-17T05:05:03Z"
},
{
"body": "I could turn off logstash, but that's just one potential client. Someone could be curl'ing, or using an ES plugin (like head), etc. If you need a consistent backup, you have to disconnect and lock out the clients from the server side.\n",
"created_at": "2014-10-17T06:18:42Z"
},
{
"body": "@clintongormley see https://github.com/elasticsearch/elasticsearch/pull/5876 I think this one is similar. \n",
"created_at": "2014-10-17T13:33:58Z"
},
{
"body": "@imotov thanks, so setting `index.blocks.write` to `true` on all indices would be a reasonable workaround, at least until #5855 is resolved.\n",
"created_at": "2014-10-17T13:39:15Z"
},
{
"body": "@clintongormley Actually, I discovered that the `index.blocks.write` attribute only prevents writes to **existing** indices. If a client tries to create a new index, that request succeeds, which brings us back to the same problem. My workaround was to shutdown the proxy node though which our clients access our ES cluster.\nI am running into the same issue as @webmstr , but for different reason: I cannot create a consistent backup for a restore to a secondary datacenter because each snapshot takes ~1 hour to complete and we cannot afford to block writes from our clients for such a long period of time. \nI am still trying to root cause why snapshots are taking so long; the time required for snapshot completion increases with each snapshot. However, when i restore the same data to a new cluster, snapshotting that data to a new S3 bucket takes less than a minute. \n\nEDIT: I may have a theory on why the snapshots were taking so long... i was taking a snapshot every two hours, and the s3 bucket has a LOT of snapshots now (49). I'm thinking that the calls the ES aws plugin makes to the S3 endpoint slow down over time as the number of snapshots increase. \n\nOr may be it's just the number of snapshots that's causing the slowness...i.e. regardless of whether the backend repository is S3 or fs? I guess I should have an additional cron job that deletes older snaphots. Is there a good rule of thumb on the number of snapshots to retain?\n",
"created_at": "2014-11-12T23:26:35Z"
},
{
"body": "@imotov we discussed this issue but were unclear on what the differences are between the index.blocks.\\* options are and why the snapshot fails with read_only set to false?\n",
"created_at": "2015-02-20T10:37:12Z"
},
{
"body": "@colings86 there is an ongoing effort to resolve this issue in #9203\n",
"created_at": "2015-02-20T15:54:26Z"
},
{
"body": "After discussing this with @tlrx it looks like the best way to address this issue is by moving snapshot and restore cluster state elements from cluster metadata to a custom cluster element where it seems to belong (since information about currently running snapshot and restore hardly qualifies as metadata).\n",
"created_at": "2015-05-19T16:35:32Z"
}
],
"number": 8102,
"title": "snapshot should work when cluster is in read_only mode."
} | {
"body": "Information about in-progress snapshot and restore processes is not really metadata and should be represented as a part of the cluster state similar to discovery nodes, routing table, and cluster blocks. Since in-progress snapshot and restore information is no longer part of metadata, this refactoring also enables us to handle cluster blocks in more consistent manner and allow creation of snapshots of a read-only cluster.\n\nCloses #8102\n",
"number": 11486,
"review_comments": [
{
"body": "I'm a little fuzzy on this, why do we need to get it out of the `customs` map instead of using the `customPrototype` and getting it there?\n",
"created_at": "2015-06-04T21:24:26Z"
},
{
"body": "Or, just calling `.customs().get(\"mycustom\")`\n",
"created_at": "2015-06-04T21:30:59Z"
},
{
"body": "`customPrototype` contains a list of dummy (prototype) objects that are used for deserialization when we don't have a concrete implementation to diff against. It is static and changes only when a component needs to register another custom object. In other words you can think of `customPrototypes` as factories. The `custom` map contains the actual objects that are part of the cluster state. \n\nWe could use `.customs().get(\"mycustom\")` but it would return us objects with type `Custom` that we would have to later cast to concrete custom types. This is a convenience method that does this cast for us, so later we can write something like:\n\n```\nSnapshotsInProgress snapshotsInProgress = allocation.routingNodes().custom(SnapshotsInProgress.TYPE);\n```\n\ninstead of\n\n```\nSnapshotsInProgress snapshotsInProgress = (SnapshotsInProgress) allocation.routingNodes().custom().get(SnapshotsInProgress.TYPE);\n```\n",
"created_at": "2015-06-05T00:55:22Z"
},
{
"body": "I understand why we would allow creating a snapshot when the cluster is read-only, but it seems backwards to allow deleting a snapshot, should we document this somewhere in the snapshot & restore documentation?\n",
"created_at": "2015-06-10T20:17:44Z"
},
{
"body": "Can you add javadoc for what this function is doing?\n",
"created_at": "2015-06-10T20:18:57Z"
},
{
"body": "Same here, needs javadocs\n",
"created_at": "2015-06-10T20:19:30Z"
},
{
"body": "Or just a regular comment is fine.\n",
"created_at": "2015-06-10T20:19:49Z"
}
],
"title": "Move in-progress snapshot and restore information from custom metadata to custom cluster state part"
} | {
"commits": [
{
"message": "Snapshot/Restore: Move in-progress snapshot and restore information from custom metadata to custom cluster state part\n\nInformation about in-progress snapshot and restore processes is not really metadata and should be represented as a part of the cluster state similar to discovery nodes, routing table, and cluster blocks. Since in-progress snapshot and restore information is no longer part of metadata, this refactoring also enables us to handle cluster blocks in more consistent manner and allow creation of snapshots of a read-only cluster.\n\nCloses #8102"
}
],
"files": [
{
"diff": "@@ -59,8 +59,8 @@ protected CreateSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {\n- // We are writing to the cluster metadata and reading from indices - so we need to check both blocks\n- ClusterBlockException clusterBlockException = state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ // We are reading the cluster metadata and indices - so we need to check both blocks\n+ ClusterBlockException clusterBlockException = state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n if (clusterBlockException != null) {\n return clusterBlockException;\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java",
"status": "modified"
},
{
"diff": "@@ -58,7 +58,8 @@ protected DeleteSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ // Cluster is not affected but we look up repositories in metadata\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,7 @@\n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData.State;\n+import org.elasticsearch.cluster.SnapshotsInProgress.State;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatus.java",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,7 @@\n import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n+import org.elasticsearch.cluster.SnapshotsInProgress;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -82,16 +82,16 @@ protected SnapshotsStatusResponse newResponse() {\n protected void masterOperation(final SnapshotsStatusRequest request,\n final ClusterState state,\n final ActionListener<SnapshotsStatusResponse> listener) throws Exception {\n- List<SnapshotMetaData.Entry> currentSnapshots = snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n+ List<SnapshotsInProgress.Entry> currentSnapshots = snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n \n if (currentSnapshots.isEmpty()) {\n listener.onResponse(buildResponse(request, currentSnapshots, null));\n return;\n }\n \n Set<String> nodesIds = newHashSet();\n- for (SnapshotMetaData.Entry entry : currentSnapshots) {\n- for (SnapshotMetaData.ShardSnapshotStatus status : entry.shards().values()) {\n+ for (SnapshotsInProgress.Entry entry : currentSnapshots) {\n+ for (SnapshotsInProgress.ShardSnapshotStatus status : entry.shards().values()) {\n if (status.nodeId() != null) {\n nodesIds.add(status.nodeId());\n }\n@@ -111,7 +111,7 @@ protected void masterOperation(final SnapshotsStatusRequest request,\n @Override\n public void onResponse(TransportNodesSnapshotsStatus.NodesSnapshotStatus nodeSnapshotStatuses) {\n try {\n- List<SnapshotMetaData.Entry> currentSnapshots =\n+ List<SnapshotsInProgress.Entry> currentSnapshots =\n snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n listener.onResponse(buildResponse(request, currentSnapshots, nodeSnapshotStatuses));\n } catch (Throwable e) {\n@@ -131,7 +131,7 @@ public void onFailure(Throwable e) {\n \n }\n \n- private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, List<SnapshotMetaData.Entry> currentSnapshots,\n+ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, List<SnapshotsInProgress.Entry> currentSnapshots,\n TransportNodesSnapshotsStatus.NodesSnapshotStatus nodeSnapshotStatuses) throws IOException {\n // First process snapshot that are currently processed\n ImmutableList.Builder<SnapshotStatus> builder = ImmutableList.builder();\n@@ -144,11 +144,11 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li\n nodeSnapshotStatusMap = newHashMap();\n }\n \n- for (SnapshotMetaData.Entry entry : currentSnapshots) {\n+ for (SnapshotsInProgress.Entry entry : currentSnapshots) {\n currentSnapshotIds.add(entry.snapshotId());\n ImmutableList.Builder<SnapshotIndexShardStatus> shardStatusBuilder = ImmutableList.builder();\n- for (ImmutableMap.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shardEntry : entry.shards().entrySet()) {\n- SnapshotMetaData.ShardSnapshotStatus status = shardEntry.getValue();\n+ for (ImmutableMap.Entry<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shardEntry : entry.shards().entrySet()) {\n+ SnapshotsInProgress.ShardSnapshotStatus status = shardEntry.getValue();\n if (status.nodeId() != null) {\n // We should have information about this shard from the shard:\n TransportNodesSnapshotsStatus.NodeSnapshotStatus nodeStatus = 
nodeSnapshotStatusMap.get(status.nodeId());\n@@ -204,16 +204,16 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li\n for (ImmutableMap.Entry<ShardId, IndexShardSnapshotStatus> shardStatus : shardStatues.entrySet()) {\n shardStatusBuilder.add(new SnapshotIndexShardStatus(shardStatus.getKey(), shardStatus.getValue()));\n }\n- final SnapshotMetaData.State state;\n+ final SnapshotsInProgress.State state;\n switch (snapshot.state()) {\n case FAILED:\n- state = SnapshotMetaData.State.FAILED;\n+ state = SnapshotsInProgress.State.FAILED;\n break;\n case SUCCESS:\n case PARTIAL:\n // Translating both PARTIAL and SUCCESS to SUCCESS for now\n // TODO: add the differentiation on the metadata level in the next major release\n- state = SnapshotMetaData.State.SUCCESS;\n+ state = SnapshotsInProgress.State.SUCCESS;\n break;\n default:\n throw new IllegalArgumentException(\"Unknown snapshot state \" + snapshot.state());",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@ public class ClusterStateRequest extends MasterNodeReadRequest<ClusterStateReque\n private boolean nodes = true;\n private boolean metaData = true;\n private boolean blocks = true;\n+ private boolean customs = true;\n private String[] indices = Strings.EMPTY_ARRAY;\n private IndicesOptions indicesOptions = IndicesOptions.lenientExpandOpen();\n \n@@ -54,6 +55,7 @@ public ClusterStateRequest all() {\n nodes = true;\n metaData = true;\n blocks = true;\n+ customs = true;\n indices = Strings.EMPTY_ARRAY;\n return this;\n }\n@@ -63,6 +65,7 @@ public ClusterStateRequest clear() {\n nodes = false;\n metaData = false;\n blocks = false;\n+ customs = false;\n indices = Strings.EMPTY_ARRAY;\n return this;\n }\n@@ -124,13 +127,23 @@ public final ClusterStateRequest indicesOptions(IndicesOptions indicesOptions) {\n return this;\n }\n \n+ public ClusterStateRequest customs(boolean customs) {\n+ this.customs = customs;\n+ return this;\n+ }\n+\n+ public boolean customs() {\n+ return customs;\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n routingTable = in.readBoolean();\n nodes = in.readBoolean();\n metaData = in.readBoolean();\n blocks = in.readBoolean();\n+ customs = in.readBoolean();\n indices = in.readStringArray();\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n }\n@@ -142,6 +155,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBoolean(nodes);\n out.writeBoolean(metaData);\n out.writeBoolean(blocks);\n+ out.writeBoolean(customs);\n out.writeStringArray(indices);\n indicesOptions.writeIndicesOptions(out);\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequest.java",
"status": "modified"
},
{
"diff": "@@ -123,6 +123,9 @@ protected void masterOperation(final ClusterStateRequest request, final ClusterS\n \n builder.metaData(mdBuilder);\n }\n+ if (request.customs()) {\n+ builder.customs(currentState.customs());\n+ }\n listener.onResponse(new ClusterStateResponse(clusterName, builder.build()));\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java",
"status": "modified"
},
{
"diff": "@@ -117,6 +117,12 @@ public static void registerPrototype(String type, Custom proto) {\n customPrototypes.put(type, proto);\n }\n \n+ static {\n+ // register non plugin custom parts\n+ registerPrototype(SnapshotsInProgress.TYPE, SnapshotsInProgress.PROTO);\n+ registerPrototype(RestoreInProgress.TYPE, RestoreInProgress.PROTO);\n+ }\n+\n @Nullable\n public static <T extends Custom> T lookupPrototype(String type) {\n //noinspection unchecked\n@@ -249,6 +255,10 @@ public ImmutableOpenMap<String, Custom> getCustoms() {\n return this.customs;\n }\n \n+ public <T extends Custom> T custom(String type) {\n+ return (T) customs.get(type);\n+ }\n+\n public ClusterName getClusterName() {\n return this.clusterName;\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterState.java",
"status": "modified"
},
{
"diff": "@@ -97,8 +97,6 @@ public interface Custom extends Diffable<Custom>, ToXContent {\n static {\n // register non plugin custom metadata\n registerPrototype(RepositoriesMetaData.TYPE, RepositoriesMetaData.PROTO);\n- registerPrototype(SnapshotMetaData.TYPE, SnapshotMetaData.PROTO);\n- registerPrototype(RestoreMetaData.TYPE, RestoreMetaData.PROTO);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.index.shard.ShardId;\n \n import java.util.*;\n@@ -56,6 +57,8 @@ public class RoutingNodes implements Iterable<RoutingNode> {\n \n private final Map<ShardId, List<MutableShardRouting>> assignedShards = newHashMap();\n \n+ private final ImmutableOpenMap<String, ClusterState.Custom> customs;\n+\n private int inactivePrimaryCount = 0;\n \n private int inactiveShardCount = 0;\n@@ -70,6 +73,7 @@ public RoutingNodes(ClusterState clusterState) {\n this.metaData = clusterState.metaData();\n this.blocks = clusterState.blocks();\n this.routingTable = clusterState.routingTable();\n+ this.customs = clusterState.customs();\n \n Map<String, List<MutableShardRouting>> nodesToShards = newHashMap();\n // fill in the nodeToShards with the \"live\" nodes\n@@ -157,6 +161,14 @@ public ClusterBlocks getBlocks() {\n return this.blocks;\n }\n \n+ public ImmutableOpenMap<String, ClusterState.Custom> customs() {\n+ return this.customs;\n+ }\n+\n+ public <T extends ClusterState.Custom> T custom(String type) {\n+ return (T) customs.get(type);\n+ }\n+\n public int requiredAverageNumberOfShardsPerNode() {\n int totalNumberOfShards = 0;\n // we need to recompute to take closed shards into account",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,7 @@\n \n package org.elasticsearch.cluster.routing.allocation.decider;\n \n-import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n+import org.elasticsearch.cluster.SnapshotsInProgress;\n import org.elasticsearch.cluster.routing.RoutingNode;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n@@ -99,14 +99,14 @@ private Decision canMove(ShardRouting shardRouting, RoutingAllocation allocation\n if (!enableRelocation && shardRouting.primary()) {\n // Only primary shards are snapshotted\n \n- SnapshotMetaData snapshotMetaData = allocation.metaData().custom(SnapshotMetaData.TYPE);\n- if (snapshotMetaData == null) {\n+ SnapshotsInProgress snapshotsInProgress = allocation.routingNodes().custom(SnapshotsInProgress.TYPE);\n+ if (snapshotsInProgress == null) {\n // Snapshots are not running\n return allocation.decision(Decision.YES, NAME, \"no snapshots are currently running\");\n }\n \n- for (SnapshotMetaData.Entry snapshot : snapshotMetaData.entries()) {\n- SnapshotMetaData.ShardSnapshotStatus shardSnapshotStatus = snapshot.shards().get(shardRouting.shardId());\n+ for (SnapshotsInProgress.Entry snapshot : snapshotsInProgress.entries()) {\n+ SnapshotsInProgress.ShardSnapshotStatus shardSnapshotStatus = snapshot.shards().get(shardRouting.shardId());\n if (shardSnapshotStatus != null && !shardSnapshotStatus.state().completed() && shardSnapshotStatus.nodeId() != null && shardSnapshotStatus.nodeId().equals(shardRouting.currentNodeId())) {\n logger.trace(\"Preventing snapshotted shard [{}] to be moved from node [{}]\", shardRouting.shardId(), shardSnapshotStatus.nodeId());\n return allocation.decision(Decision.NO, NAME, \"snapshot for shard [%s] is currently running on node [%s]\",",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/SnapshotInProgressAllocationDecider.java",
"status": "modified"
},
{
"diff": "@@ -75,6 +75,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n clusterStateRequest.routingTable(metrics.contains(ClusterState.Metric.ROUTING_TABLE) || metrics.contains(ClusterState.Metric.ROUTING_NODES));\n clusterStateRequest.metaData(metrics.contains(ClusterState.Metric.METADATA));\n clusterStateRequest.blocks(metrics.contains(ClusterState.Metric.BLOCKS));\n+ clusterStateRequest.customs(metrics.contains(ClusterState.Metric.CUSTOMS));\n }\n settingsFilter.addFilterSettingParams(request);\n ",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/state/RestClusterStateAction.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,7 @@\n import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.metadata.*;\n-import org.elasticsearch.cluster.metadata.RestoreMetaData.ShardRestoreStatus;\n+import org.elasticsearch.cluster.RestoreInProgress.ShardRestoreStatus;\n import org.elasticsearch.cluster.routing.*;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n@@ -74,7 +74,7 @@\n * <p/>\n * First {@link #restoreSnapshot(RestoreRequest, org.elasticsearch.action.ActionListener))}\n * method reads information about snapshot and metadata from repository. In update cluster state task it checks restore\n- * preconditions, restores global state if needed, creates {@link RestoreMetaData} record with list of shards that needs\n+ * preconditions, restores global state if needed, creates {@link RestoreInProgress} record with list of shards that needs\n * to be restored and adds this shard to the routing table using {@link RoutingTable.Builder#addAsRestore(IndexMetaData, RestoreSource)}\n * method.\n * <p/>\n@@ -86,7 +86,7 @@\n * method to start shard restore process.\n * <p/>\n * At the end of the successful restore process {@code IndexShardSnapshotAndRestoreService} calls {@link #indexShardRestoreCompleted(SnapshotId, ShardId)},\n- * which updates {@link RestoreMetaData} in cluster state or removes it when all shards are completed. In case of\n+ * which updates {@link RestoreInProgress} in cluster state or removes it when all shards are completed. In case of\n * restore failure a normal recovery fail-over process kicks in.\n */\n public class RestoreService extends AbstractComponent implements ClusterStateListener {\n@@ -183,20 +183,21 @@ public void restoreSnapshot(final RestoreRequest request, final ActionListener<R\n public ClusterState execute(ClusterState currentState) {\n // Check if another restore process is already running - cannot run two restore processes at the\n // same time\n- RestoreMetaData restoreMetaData = currentState.metaData().custom(RestoreMetaData.TYPE);\n- if (restoreMetaData != null && !restoreMetaData.entries().isEmpty()) {\n+ RestoreInProgress restoreInProgress = currentState.custom(RestoreInProgress.TYPE);\n+ if (restoreInProgress != null && !restoreInProgress.entries().isEmpty()) {\n throw new ConcurrentSnapshotExecutionException(snapshotId, \"Restore process is already running in this cluster\");\n }\n \n // Updating cluster state\n+ ClusterState.Builder builder = ClusterState.builder(currentState);\n MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n RoutingTable.Builder rtBuilder = RoutingTable.builder(currentState.routingTable());\n- final ImmutableMap<ShardId, RestoreMetaData.ShardRestoreStatus> shards;\n+ final ImmutableMap<ShardId, RestoreInProgress.ShardRestoreStatus> shards;\n Set<String> aliases = newHashSet();\n if (!renamedIndices.isEmpty()) {\n // We have some indices to restore\n- ImmutableMap.Builder<ShardId, RestoreMetaData.ShardRestoreStatus> shardsBuilder = ImmutableMap.builder();\n+ ImmutableMap.Builder<ShardId, RestoreInProgress.ShardRestoreStatus> shardsBuilder = ImmutableMap.builder();\n for (Map.Entry<String, String> indexEntry : renamedIndices.entrySet()) {\n String index = indexEntry.getValue();\n boolean partial = checkPartial(index);\n@@ -260,16 +261,16 @@ public ClusterState 
execute(ClusterState currentState) {\n }\n for (int shard = 0; shard < snapshotIndexMetaData.getNumberOfShards(); shard++) {\n if (!ignoreShards.contains(shard)) {\n- shardsBuilder.put(new ShardId(renamedIndex, shard), new RestoreMetaData.ShardRestoreStatus(clusterService.state().nodes().localNodeId()));\n+ shardsBuilder.put(new ShardId(renamedIndex, shard), new RestoreInProgress.ShardRestoreStatus(clusterService.state().nodes().localNodeId()));\n } else {\n- shardsBuilder.put(new ShardId(renamedIndex, shard), new RestoreMetaData.ShardRestoreStatus(clusterService.state().nodes().localNodeId(), RestoreMetaData.State.FAILURE));\n+ shardsBuilder.put(new ShardId(renamedIndex, shard), new RestoreInProgress.ShardRestoreStatus(clusterService.state().nodes().localNodeId(), RestoreInProgress.State.FAILURE));\n }\n }\n }\n \n shards = shardsBuilder.build();\n- RestoreMetaData.Entry restoreEntry = new RestoreMetaData.Entry(snapshotId, RestoreMetaData.State.INIT, ImmutableList.copyOf(renamedIndices.keySet()), shards);\n- mdBuilder.putCustom(RestoreMetaData.TYPE, new RestoreMetaData(restoreEntry));\n+ RestoreInProgress.Entry restoreEntry = new RestoreInProgress.Entry(snapshotId, RestoreInProgress.State.INIT, ImmutableList.copyOf(renamedIndices.keySet()), shards);\n+ builder.putCustom(RestoreInProgress.TYPE, new RestoreInProgress(restoreEntry));\n } else {\n shards = ImmutableMap.of();\n }\n@@ -285,7 +286,7 @@ public ClusterState execute(ClusterState currentState) {\n shards.size(), shards.size() - failedShards(shards));\n }\n \n- ClusterState updatedState = ClusterState.builder(currentState).metaData(mdBuilder).blocks(blocks).routingTable(rtBuilder).build();\n+ ClusterState updatedState = builder.metaData(mdBuilder).blocks(blocks).routingTable(rtBuilder).build();\n RoutingAllocation.Result routingResult = allocationService.reroute(ClusterState.builder(updatedState).routingTable(rtBuilder).build());\n return ClusterState.builder(updatedState).routingResult(routingResult).build();\n }\n@@ -457,7 +458,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n public void indexShardRestoreCompleted(SnapshotId snapshotId, ShardId shardId) {\n logger.trace(\"[{}] successfully restored shard [{}]\", snapshotId, shardId);\n UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshotId, shardId,\n- new ShardRestoreStatus(clusterService.state().nodes().localNodeId(), RestoreMetaData.State.SUCCESS));\n+ new ShardRestoreStatus(clusterService.state().nodes().localNodeId(), RestoreInProgress.State.SUCCESS));\n transportService.sendRequest(clusterService.state().nodes().masterNode(),\n UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n }\n@@ -509,12 +510,11 @@ public ClusterState execute(ClusterState currentState) {\n return currentState;\n }\n \n- final MetaData metaData = currentState.metaData();\n- final RestoreMetaData restore = metaData.custom(RestoreMetaData.TYPE);\n+ final RestoreInProgress restore = currentState.custom(RestoreInProgress.TYPE);\n if (restore != null) {\n int changedCount = 0;\n- final List<RestoreMetaData.Entry> entries = newArrayList();\n- for (RestoreMetaData.Entry entry : restore.entries()) {\n+ final List<RestoreInProgress.Entry> entries = newArrayList();\n+ for (RestoreInProgress.Entry entry : restore.entries()) {\n Map<ShardId, ShardRestoreStatus> shards = null;\n \n for (int i = 0; i < batchSize; i++) {\n@@ -533,7 +533,7 @@ public ClusterState execute(ClusterState currentState) {\n \n if 
(shards != null) {\n if (!completed(shards)) {\n- entries.add(new RestoreMetaData.Entry(entry.snapshotId(), RestoreMetaData.State.STARTED, entry.indices(), ImmutableMap.copyOf(shards)));\n+ entries.add(new RestoreInProgress.Entry(entry.snapshotId(), RestoreInProgress.State.STARTED, entry.indices(), ImmutableMap.copyOf(shards)));\n } else {\n logger.info(\"restore [{}] is done\", entry.snapshotId());\n if (batchedRestoreInfo == null) {\n@@ -553,9 +553,8 @@ public ClusterState execute(ClusterState currentState) {\n if (changedCount > 0) {\n logger.trace(\"changed cluster state triggered by {} snapshot restore state updates\", changedCount);\n \n- final RestoreMetaData updatedRestore = new RestoreMetaData(entries.toArray(new RestoreMetaData.Entry[entries.size()]));\n- final MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData()).putCustom(RestoreMetaData.TYPE, updatedRestore);\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ final RestoreInProgress updatedRestore = new RestoreInProgress(entries.toArray(new RestoreInProgress.Entry[entries.size()]));\n+ return ClusterState.builder(currentState).putCustom(RestoreInProgress.TYPE, updatedRestore).build();\n }\n }\n return currentState;\n@@ -578,7 +577,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n RoutingTable routingTable = newState.getRoutingTable();\n final List<ShardId> waitForStarted = newArrayList();\n for (Map.Entry<ShardId, ShardRestoreStatus> shard : shards.entrySet()) {\n- if (shard.getValue().state() == RestoreMetaData.State.SUCCESS ) {\n+ if (shard.getValue().state() == RestoreInProgress.State.SUCCESS ) {\n ShardId shardId = shard.getKey();\n ShardRouting shardRouting = findPrimaryShard(routingTable, shardId);\n if (shardRouting != null && !shardRouting.active()) {\n@@ -639,19 +638,19 @@ private void notifyListeners(SnapshotId snapshotId, RestoreInfo restoreInfo) {\n });\n }\n \n- private boolean completed(Map<ShardId, RestoreMetaData.ShardRestoreStatus> shards) {\n- for (RestoreMetaData.ShardRestoreStatus status : shards.values()) {\n+ private boolean completed(Map<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n+ for (RestoreInProgress.ShardRestoreStatus status : shards.values()) {\n if (!status.state().completed()) {\n return false;\n }\n }\n return true;\n }\n \n- private int failedShards(Map<ShardId, RestoreMetaData.ShardRestoreStatus> shards) {\n+ private int failedShards(Map<ShardId, RestoreInProgress.ShardRestoreStatus> shards) {\n int failedShards = 0;\n- for (RestoreMetaData.ShardRestoreStatus status : shards.values()) {\n- if (status.state() == RestoreMetaData.State.FAILURE) {\n+ for (RestoreInProgress.ShardRestoreStatus status : shards.values()) {\n+ if (status.state() == RestoreInProgress.State.FAILURE) {\n failedShards++;\n }\n }\n@@ -696,16 +695,15 @@ private void validateSnapshotRestorable(SnapshotId snapshotId, Snapshot snapshot\n * @param event cluster changed event\n */\n private void processDeletedIndices(ClusterChangedEvent event) {\n- MetaData metaData = event.state().metaData();\n- RestoreMetaData restore = metaData.custom(RestoreMetaData.TYPE);\n+ RestoreInProgress restore = event.state().custom(RestoreInProgress.TYPE);\n if (restore == null) {\n // Not restoring - nothing to do\n return;\n }\n \n if (!event.indicesDeleted().isEmpty()) {\n // Some indices were deleted, let's make sure all indices that we are restoring still exist\n- for (RestoreMetaData.Entry entry : restore.entries()) {\n+ for (RestoreInProgress.Entry 
entry : restore.entries()) {\n List<ShardId> shardsToFail = null;\n for (ImmutableMap.Entry<ShardId, ShardRestoreStatus> shard : entry.shards().entrySet()) {\n if (!shard.getValue().state().completed()) {\n@@ -720,7 +718,7 @@ private void processDeletedIndices(ClusterChangedEvent event) {\n if (shardsToFail != null) {\n for (ShardId shardId : shardsToFail) {\n logger.trace(\"[{}] failing running shard restore [{}]\", entry.snapshotId(), shardId);\n- updateRestoreStateOnMaster(new UpdateIndexShardRestoreStatusRequest(entry.snapshotId(), shardId, new ShardRestoreStatus(null, RestoreMetaData.State.FAILURE, \"index was deleted\")));\n+ updateRestoreStateOnMaster(new UpdateIndexShardRestoreStatusRequest(entry.snapshotId(), shardId, new ShardRestoreStatus(null, RestoreInProgress.State.FAILURE, \"index was deleted\")));\n }\n }\n }\n@@ -733,7 +731,7 @@ private void processDeletedIndices(ClusterChangedEvent event) {\n public void failRestore(SnapshotId snapshotId, ShardId shardId) {\n logger.debug(\"[{}] failed to restore shard [{}]\", snapshotId, shardId);\n UpdateIndexShardRestoreStatusRequest request = new UpdateIndexShardRestoreStatusRequest(snapshotId, shardId,\n- new ShardRestoreStatus(clusterService.state().nodes().localNodeId(), RestoreMetaData.State.FAILURE));\n+ new ShardRestoreStatus(clusterService.state().nodes().localNodeId(), RestoreInProgress.State.FAILURE));\n transportService.sendRequest(clusterService.state().nodes().masterNode(),\n UPDATE_RESTORE_ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);\n }\n@@ -789,10 +787,9 @@ public void clusterChanged(ClusterChangedEvent event) {\n * @return true if repository is currently in use by one of the running snapshots\n */\n public static boolean isRepositoryInUse(ClusterState clusterState, String repository) {\n- MetaData metaData = clusterState.metaData();\n- RestoreMetaData snapshots = metaData.custom(RestoreMetaData.TYPE);\n+ RestoreInProgress snapshots = clusterState.custom(RestoreInProgress.TYPE);\n if (snapshots != null) {\n- for (RestoreMetaData.Entry snapshot : snapshots.entries()) {\n+ for (RestoreInProgress.Entry snapshot : snapshots.entries()) {\n if (repository.equals(snapshot.snapshotId().getRepository())) {\n return true;\n }",
"filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java",
"status": "modified"
},
{
"diff": "@@ -28,8 +28,8 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.metadata.*;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData.ShardSnapshotStatus;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData.State;\n+import org.elasticsearch.cluster.SnapshotsInProgress.ShardSnapshotStatus;\n+import org.elasticsearch.cluster.SnapshotsInProgress.State;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n@@ -77,13 +77,13 @@\n * <ul>\n * <li>On the master node the {@link #createSnapshot(SnapshotRequest, CreateSnapshotListener)} is called and makes sure that no snapshots is currently running\n * and registers the new snapshot in cluster state</li>\n- * <li>When cluster state is updated the {@link #beginSnapshot(ClusterState, SnapshotMetaData.Entry, boolean, CreateSnapshotListener)} method\n+ * <li>When cluster state is updated the {@link #beginSnapshot(ClusterState, SnapshotsInProgress.Entry, boolean, CreateSnapshotListener)} method\n * kicks in and initializes the snapshot in the repository and then populates list of shards that needs to be snapshotted in cluster state</li>\n * <li>Each data node is watching for these shards and when new shards scheduled for snapshotting appear in the cluster state, data nodes\n * start processing them through {@link SnapshotsService#processIndexShardSnapshots(ClusterChangedEvent)} method</li>\n * <li>Once shard snapshot is created data node updates state of the shard in the cluster state using the {@link #updateIndexShardSnapshotStatus(UpdateIndexShardSnapshotStatusRequest)} method</li>\n * <li>When last shard is completed master node in {@link #innerUpdateSnapshotState} method marks the snapshot as completed</li>\n- * <li>After cluster state is updated, the {@link #endSnapshot(SnapshotMetaData.Entry)} finalizes snapshot in the repository,\n+ * <li>After cluster state is updated, the {@link #endSnapshot(SnapshotsInProgress.Entry)} finalizes snapshot in the repository,\n * notifies all {@link #snapshotCompletionListeners} that snapshot is completed, and finally calls {@link #removeSnapshotFromClusterState(SnapshotId, SnapshotInfo, Throwable)} to remove snapshot from cluster state</li>\n * </ul>\n */\n@@ -135,7 +135,7 @@ public SnapshotsService(Settings settings, ClusterService clusterService, Reposi\n * @throws SnapshotMissingException if snapshot is not found\n */\n public Snapshot snapshot(SnapshotId snapshotId) {\n- List<SnapshotMetaData.Entry> entries = currentSnapshots(snapshotId.getRepository(), new String[]{snapshotId.getSnapshot()});\n+ List<SnapshotsInProgress.Entry> entries = currentSnapshots(snapshotId.getRepository(), new String[]{snapshotId.getSnapshot()});\n if (!entries.isEmpty()) {\n return inProgressSnapshot(entries.iterator().next());\n }\n@@ -150,8 +150,8 @@ public Snapshot snapshot(SnapshotId snapshotId) {\n */\n public List<Snapshot> snapshots(String repositoryName) {\n Set<Snapshot> snapshotSet = newHashSet();\n- List<SnapshotMetaData.Entry> entries = currentSnapshots(repositoryName, null);\n- for (SnapshotMetaData.Entry entry : entries) {\n+ List<SnapshotsInProgress.Entry> entries = currentSnapshots(repositoryName, null);\n+ for (SnapshotsInProgress.Entry entry : entries) {\n snapshotSet.add(inProgressSnapshot(entry));\n }\n Repository repository = repositoriesService.repository(repositoryName);\n@@ -172,8 +172,8 
@@ public List<Snapshot> snapshots(String repositoryName) {\n */\n public List<Snapshot> currentSnapshots(String repositoryName) {\n List<Snapshot> snapshotList = newArrayList();\n- List<SnapshotMetaData.Entry> entries = currentSnapshots(repositoryName, null);\n- for (SnapshotMetaData.Entry entry : entries) {\n+ List<SnapshotsInProgress.Entry> entries = currentSnapshots(repositoryName, null);\n+ for (SnapshotsInProgress.Entry entry : entries) {\n snapshotList.add(inProgressSnapshot(entry));\n }\n CollectionUtil.timSort(snapshotList);\n@@ -193,27 +193,25 @@ public void createSnapshot(final SnapshotRequest request, final CreateSnapshotLi\n final SnapshotId snapshotId = new SnapshotId(request.repository(), request.name());\n clusterService.submitStateUpdateTask(request.cause(), new TimeoutClusterStateUpdateTask() {\n \n- private SnapshotMetaData.Entry newSnapshot = null;\n+ private SnapshotsInProgress.Entry newSnapshot = null;\n \n @Override\n public ClusterState execute(ClusterState currentState) {\n validate(request, currentState);\n \n MetaData metaData = currentState.metaData();\n- MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n if (snapshots == null || snapshots.entries().isEmpty()) {\n // Store newSnapshot here to be processed in clusterStateProcessed\n ImmutableList<String> indices = ImmutableList.copyOf(metaData.concreteIndices(request.indicesOptions(), request.indices()));\n logger.trace(\"[{}][{}] creating snapshot for indices [{}]\", request.repository(), request.name(), indices);\n- newSnapshot = new SnapshotMetaData.Entry(snapshotId, request.includeGlobalState(), State.INIT, indices, System.currentTimeMillis(), null);\n- snapshots = new SnapshotMetaData(newSnapshot);\n+ newSnapshot = new SnapshotsInProgress.Entry(snapshotId, request.includeGlobalState(), State.INIT, indices, System.currentTimeMillis(), null);\n+ snapshots = new SnapshotsInProgress(newSnapshot);\n } else {\n // TODO: What should we do if a snapshot is already running?\n throw new ConcurrentSnapshotExecutionException(snapshotId, \"a snapshot is already running\");\n }\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, snapshots);\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, snapshots).build();\n }\n \n @Override\n@@ -288,7 +286,7 @@ private void validate(SnapshotRequest request, ClusterState state) {\n * @param partial allow partial snapshots\n * @param userCreateSnapshotListener listener\n */\n- private void beginSnapshot(ClusterState clusterState, final SnapshotMetaData.Entry snapshot, final boolean partial, final CreateSnapshotListener userCreateSnapshotListener) {\n+ private void beginSnapshot(ClusterState clusterState, final SnapshotsInProgress.Entry snapshot, final boolean partial, final CreateSnapshotListener userCreateSnapshotListener) {\n boolean snapshotCreated = false;\n try {\n Repository repository = repositoriesService.repository(snapshot.snapshotId().getRepository());\n@@ -313,26 +311,25 @@ private void beginSnapshot(ClusterState clusterState, final SnapshotMetaData.Ent\n }\n clusterService.submitStateUpdateTask(\"update_snapshot [\" + snapshot.snapshotId().getSnapshot() + \"]\", new ProcessedClusterStateUpdateTask() {\n boolean accepted = false;\n- SnapshotMetaData.Entry updatedSnapshot;\n+ SnapshotsInProgress.Entry 
updatedSnapshot;\n String failure = null;\n \n @Override\n public ClusterState execute(ClusterState currentState) {\n MetaData metaData = currentState.metaData();\n- MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n- ImmutableList.Builder<SnapshotMetaData.Entry> entries = ImmutableList.builder();\n- for (SnapshotMetaData.Entry entry : snapshots.entries()) {\n+ SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n+ ImmutableList.Builder<SnapshotsInProgress.Entry> entries = ImmutableList.builder();\n+ for (SnapshotsInProgress.Entry entry : snapshots.entries()) {\n if (entry.snapshotId().equals(snapshot.snapshotId())) {\n // Replace the snapshot that was just created\n- ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards = shards(currentState, entry.indices());\n+ ImmutableMap<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shards = shards(currentState, entry.indices());\n if (!partial) {\n Tuple<Set<String>, Set<String>> indicesWithMissingShards = indicesWithMissingShards(shards, currentState.metaData());\n Set<String> missing = indicesWithMissingShards.v1();\n Set<String> closed = indicesWithMissingShards.v2();\n if (missing.isEmpty() == false || closed.isEmpty() == false) {\n StringBuilder failureMessage = new StringBuilder();\n- updatedSnapshot = new SnapshotMetaData.Entry(entry, State.FAILED, shards);\n+ updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.FAILED, shards);\n entries.add(updatedSnapshot);\n if (missing.isEmpty() == false ) {\n failureMessage.append(\"Indices don't have primary shards \");\n@@ -349,7 +346,7 @@ public ClusterState execute(ClusterState currentState) {\n continue;\n }\n }\n- updatedSnapshot = new SnapshotMetaData.Entry(entry, State.STARTED, shards);\n+ updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.STARTED, shards);\n entries.add(updatedSnapshot);\n if (!completed(shards.values())) {\n accepted = true;\n@@ -358,8 +355,7 @@ public ClusterState execute(ClusterState currentState) {\n entries.add(entry);\n }\n }\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, new SnapshotMetaData(entries.build()));\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, new SnapshotsInProgress(entries.build())).build();\n }\n \n @Override\n@@ -407,7 +403,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n }\n }\n \n- private Snapshot inProgressSnapshot(SnapshotMetaData.Entry entry) {\n+ private Snapshot inProgressSnapshot(SnapshotsInProgress.Entry entry) {\n return new Snapshot(entry.snapshotId().getSnapshot(), entry.indices(), entry.startTime());\n }\n \n@@ -421,35 +417,34 @@ private Snapshot inProgressSnapshot(SnapshotMetaData.Entry entry) {\n * @param snapshots optional list of snapshots that will be used as a filter\n * @return list of metadata for currently running snapshots\n */\n- public List<SnapshotMetaData.Entry> currentSnapshots(String repository, String[] snapshots) {\n- MetaData metaData = clusterService.state().metaData();\n- SnapshotMetaData snapshotMetaData = metaData.custom(SnapshotMetaData.TYPE);\n- if (snapshotMetaData == null || snapshotMetaData.entries().isEmpty()) {\n+ public List<SnapshotsInProgress.Entry> currentSnapshots(String repository, String[] snapshots) {\n+ SnapshotsInProgress snapshotsInProgress = clusterService.state().custom(SnapshotsInProgress.TYPE);\n+ 
if (snapshotsInProgress == null || snapshotsInProgress.entries().isEmpty()) {\n return ImmutableList.of();\n }\n if (\"_all\".equals(repository)) {\n- return snapshotMetaData.entries();\n+ return snapshotsInProgress.entries();\n }\n- if (snapshotMetaData.entries().size() == 1) {\n+ if (snapshotsInProgress.entries().size() == 1) {\n // Most likely scenario - one snapshot is currently running\n // Check this snapshot against the query\n- SnapshotMetaData.Entry entry = snapshotMetaData.entries().get(0);\n+ SnapshotsInProgress.Entry entry = snapshotsInProgress.entries().get(0);\n if (!entry.snapshotId().getRepository().equals(repository)) {\n return ImmutableList.of();\n }\n if (snapshots != null && snapshots.length > 0) {\n for (String snapshot : snapshots) {\n if (entry.snapshotId().getSnapshot().equals(snapshot)) {\n- return snapshotMetaData.entries();\n+ return snapshotsInProgress.entries();\n }\n }\n return ImmutableList.of();\n } else {\n- return snapshotMetaData.entries();\n+ return snapshotsInProgress.entries();\n }\n }\n- ImmutableList.Builder<SnapshotMetaData.Entry> builder = ImmutableList.builder();\n- for (SnapshotMetaData.Entry entry : snapshotMetaData.entries()) {\n+ ImmutableList.Builder<SnapshotsInProgress.Entry> builder = ImmutableList.builder();\n+ for (SnapshotsInProgress.Entry entry : snapshotsInProgress.entries()) {\n if (!entry.snapshotId().getRepository().equals(repository)) {\n continue;\n }\n@@ -544,8 +539,8 @@ public void clusterChanged(ClusterChangedEvent event) {\n processStartedShards(event);\n }\n }\n- SnapshotMetaData prev = event.previousState().metaData().custom(SnapshotMetaData.TYPE);\n- SnapshotMetaData curr = event.state().metaData().custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress prev = event.previousState().custom(SnapshotsInProgress.TYPE);\n+ SnapshotsInProgress curr = event.state().custom(SnapshotsInProgress.TYPE);\n \n if (prev == null) {\n if (curr != null) {\n@@ -579,16 +574,14 @@ private void processSnapshotsOnRemovedNodes(ClusterChangedEvent event) {\n @Override\n public ClusterState execute(ClusterState currentState) throws Exception {\n DiscoveryNodes nodes = currentState.nodes();\n- MetaData metaData = currentState.metaData();\n- MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n if (snapshots == null) {\n return currentState;\n }\n boolean changed = false;\n- ArrayList<SnapshotMetaData.Entry> entries = newArrayList();\n- for (final SnapshotMetaData.Entry snapshot : snapshots.entries()) {\n- SnapshotMetaData.Entry updatedSnapshot = snapshot;\n+ ArrayList<SnapshotsInProgress.Entry> entries = newArrayList();\n+ for (final SnapshotsInProgress.Entry snapshot : snapshots.entries()) {\n+ SnapshotsInProgress.Entry updatedSnapshot = snapshot;\n boolean snapshotChanged = false;\n if (snapshot.state() == State.STARTED || snapshot.state() == State.ABORTED) {\n ImmutableMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableMap.builder();\n@@ -609,10 +602,10 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n changed = true;\n ImmutableMap<ShardId, ShardSnapshotStatus> shardsMap = shards.build();\n if (!snapshot.state().completed() && completed(shardsMap.values())) {\n- updatedSnapshot = new SnapshotMetaData.Entry(snapshot, State.SUCCESS, shardsMap);\n+ updatedSnapshot = new SnapshotsInProgress.Entry(snapshot, State.SUCCESS, shardsMap);\n 
endSnapshot(updatedSnapshot);\n } else {\n- updatedSnapshot = new SnapshotMetaData.Entry(snapshot, snapshot.state(), shardsMap);\n+ updatedSnapshot = new SnapshotsInProgress.Entry(snapshot, snapshot.state(), shardsMap);\n }\n }\n entries.add(updatedSnapshot);\n@@ -635,9 +628,8 @@ public void onFailure(Throwable t) {\n }\n }\n if (changed) {\n- snapshots = new SnapshotMetaData(entries.toArray(new SnapshotMetaData.Entry[entries.size()]));\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, snapshots);\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ snapshots = new SnapshotsInProgress(entries.toArray(new SnapshotsInProgress.Entry[entries.size()]));\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, snapshots).build();\n }\n return currentState;\n }\n@@ -655,33 +647,30 @@ private void processStartedShards(ClusterChangedEvent event) {\n clusterService.submitStateUpdateTask(\"update snapshot state after shards started\", new ClusterStateUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) throws Exception {\n- MetaData metaData = currentState.metaData();\n RoutingTable routingTable = currentState.routingTable();\n- MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n if (snapshots != null) {\n boolean changed = false;\n- ArrayList<SnapshotMetaData.Entry> entries = newArrayList();\n- for (final SnapshotMetaData.Entry snapshot : snapshots.entries()) {\n- SnapshotMetaData.Entry updatedSnapshot = snapshot;\n+ ArrayList<SnapshotsInProgress.Entry> entries = newArrayList();\n+ for (final SnapshotsInProgress.Entry snapshot : snapshots.entries()) {\n+ SnapshotsInProgress.Entry updatedSnapshot = snapshot;\n if (snapshot.state() == State.STARTED) {\n ImmutableMap<ShardId, ShardSnapshotStatus> shards = processWaitingShards(snapshot.shards(), routingTable);\n if (shards != null) {\n changed = true;\n if (!snapshot.state().completed() && completed(shards.values())) {\n- updatedSnapshot = new SnapshotMetaData.Entry(snapshot, State.SUCCESS, shards);\n+ updatedSnapshot = new SnapshotsInProgress.Entry(snapshot, State.SUCCESS, shards);\n endSnapshot(updatedSnapshot);\n } else {\n- updatedSnapshot = new SnapshotMetaData.Entry(snapshot, shards);\n+ updatedSnapshot = new SnapshotsInProgress.Entry(snapshot, shards);\n }\n }\n entries.add(updatedSnapshot);\n }\n }\n if (changed) {\n- snapshots = new SnapshotMetaData(entries.toArray(new SnapshotMetaData.Entry[entries.size()]));\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, snapshots);\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ snapshots = new SnapshotsInProgress(entries.toArray(new SnapshotsInProgress.Entry[entries.size()]));\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, snapshots).build();\n }\n }\n return currentState;\n@@ -735,9 +724,9 @@ private ImmutableMap<ShardId, ShardSnapshotStatus> processWaitingShards(Immutabl\n }\n \n private boolean waitingShardsStartedOrUnassigned(ClusterChangedEvent event) {\n- SnapshotMetaData curr = event.state().metaData().custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress curr = event.state().custom(SnapshotsInProgress.TYPE);\n if (curr != null) {\n- for (SnapshotMetaData.Entry entry : curr.entries()) {\n+ for (SnapshotsInProgress.Entry entry : curr.entries()) {\n if (entry.state() == State.STARTED && 
!entry.waitingIndices().isEmpty()) {\n for (String index : entry.waitingIndices().keySet()) {\n if (event.indexRoutingTableChanged(index)) {\n@@ -759,11 +748,11 @@ private boolean waitingShardsStartedOrUnassigned(ClusterChangedEvent event) {\n private boolean removedNodesCleanupNeeded(ClusterChangedEvent event) {\n // Check if we just became the master\n boolean newMaster = !event.previousState().nodes().localNodeMaster();\n- SnapshotMetaData snapshotMetaData = event.state().getMetaData().custom(SnapshotMetaData.TYPE);\n- if (snapshotMetaData == null) {\n+ SnapshotsInProgress snapshotsInProgress = event.state().custom(SnapshotsInProgress.TYPE);\n+ if (snapshotsInProgress == null) {\n return false;\n }\n- for (SnapshotMetaData.Entry snapshot : snapshotMetaData.entries()) {\n+ for (SnapshotsInProgress.Entry snapshot : snapshotsInProgress.entries()) {\n if (newMaster && (snapshot.state() == State.SUCCESS || snapshot.state() == State.INIT)) {\n // We just replaced old master and snapshots in intermediate states needs to be cleaned\n return true;\n@@ -786,11 +775,11 @@ private boolean removedNodesCleanupNeeded(ClusterChangedEvent event) {\n * @param event cluster state changed event\n */\n private void processIndexShardSnapshots(ClusterChangedEvent event) {\n- SnapshotMetaData snapshotMetaData = event.state().metaData().custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress snapshotsInProgress = event.state().custom(SnapshotsInProgress.TYPE);\n Map<SnapshotId, SnapshotShards> survivors = newHashMap();\n // First, remove snapshots that are no longer there\n for (Map.Entry<SnapshotId, SnapshotShards> entry : shardSnapshots.entrySet()) {\n- if (snapshotMetaData != null && snapshotMetaData.snapshot(entry.getKey()) != null) {\n+ if (snapshotsInProgress != null && snapshotsInProgress.snapshot(entry.getKey()) != null) {\n survivors.put(entry.getKey(), entry.getValue());\n }\n }\n@@ -800,12 +789,12 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n Map<SnapshotId, Map<ShardId, IndexShardSnapshotStatus>> newSnapshots = newHashMap();\n // Now go through all snapshots and update existing or create missing\n final String localNodeId = clusterService.localNode().id();\n- if (snapshotMetaData != null) {\n- for (SnapshotMetaData.Entry entry : snapshotMetaData.entries()) {\n+ if (snapshotsInProgress != null) {\n+ for (SnapshotsInProgress.Entry entry : snapshotsInProgress.entries()) {\n if (entry.state() == State.STARTED) {\n Map<ShardId, IndexShardSnapshotStatus> startedShards = newHashMap();\n SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n- for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n+ for (Map.Entry<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n // Add all new shards to start processing on\n if (localNodeId.equals(shard.getValue().nodeId())) {\n if (shard.getValue().state() == State.INIT && (snapshotShards == null || !snapshotShards.shards.containsKey(shard.getKey()))) {\n@@ -833,7 +822,7 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n // Abort all running shards for this snapshot\n SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n if (snapshotShards != null) {\n- for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n+ for (Map.Entry<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n IndexShardSnapshotStatus snapshotStatus = 
snapshotShards.shards.get(shard.getKey());\n if (snapshotStatus != null) {\n switch (snapshotStatus.stage()) {\n@@ -843,7 +832,7 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n case DONE:\n logger.debug(\"[{}] trying to cancel snapshot on the shard [{}] that is already done, updating status on the master\", entry.snapshotId(), shard.getKey());\n updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.snapshotId(), shard.getKey(),\n- new ShardSnapshotStatus(event.state().nodes().localNodeId(), SnapshotMetaData.State.SUCCESS)));\n+ new ShardSnapshotStatus(event.state().nodes().localNodeId(), SnapshotsInProgress.State.SUCCESS)));\n break;\n case FAILURE:\n logger.debug(\"[{}] trying to cancel snapshot on the shard [{}] that has already failed, updating status on the master\", entry.snapshotId(), shard.getKey());\n@@ -883,15 +872,15 @@ private void processIndexShardSnapshots(ClusterChangedEvent event) {\n public void run() {\n try {\n shardSnapshotService.snapshot(entry.getKey(), shardEntry.getValue());\n- updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.getKey(), shardEntry.getKey(), new ShardSnapshotStatus(localNodeId, SnapshotMetaData.State.SUCCESS)));\n+ updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.getKey(), shardEntry.getKey(), new ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.SUCCESS)));\n } catch (Throwable t) {\n logger.warn(\"[{}] [{}] failed to create snapshot\", t, shardEntry.getKey(), entry.getKey());\n- updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.getKey(), shardEntry.getKey(), new ShardSnapshotStatus(localNodeId, SnapshotMetaData.State.FAILED, ExceptionsHelper.detailedMessage(t))));\n+ updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.getKey(), shardEntry.getKey(), new ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.FAILED, ExceptionsHelper.detailedMessage(t))));\n }\n }\n });\n } catch (Throwable t) {\n- updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.getKey(), shardEntry.getKey(), new ShardSnapshotStatus(localNodeId, SnapshotMetaData.State.FAILED, ExceptionsHelper.detailedMessage(t))));\n+ updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(entry.getKey(), shardEntry.getKey(), new ShardSnapshotStatus(localNodeId, SnapshotsInProgress.State.FAILED, ExceptionsHelper.detailedMessage(t))));\n }\n }\n }\n@@ -903,11 +892,11 @@ public void run() {\n * @param event\n */\n private void syncShardStatsOnNewMaster(ClusterChangedEvent event) {\n- SnapshotMetaData snapshotMetaData = event.state().getMetaData().custom(SnapshotMetaData.TYPE);\n- if (snapshotMetaData == null) {\n+ SnapshotsInProgress snapshotsInProgress = event.state().custom(SnapshotsInProgress.TYPE);\n+ if (snapshotsInProgress == null) {\n return;\n }\n- for (SnapshotMetaData.Entry snapshot : snapshotMetaData.entries()) {\n+ for (SnapshotsInProgress.Entry snapshot : snapshotsInProgress.entries()) {\n if (snapshot.state() == State.STARTED || snapshot.state() == State.ABORTED) {\n ImmutableMap<ShardId, IndexShardSnapshotStatus> localShards = currentSnapshotShards(snapshot.snapshotId());\n if (localShards != null) {\n@@ -922,7 +911,7 @@ private void syncShardStatsOnNewMaster(ClusterChangedEvent event) {\n // but we think the shard is done - we need to make new master know that the shard is done\n logger.debug(\"[{}] new master thinks the shard [{}] is not completed but the shard is 
done locally, updating status on the master\", snapshot.snapshotId(), shardId);\n updateIndexShardSnapshotStatus(new UpdateIndexShardSnapshotStatusRequest(snapshot.snapshotId(), shardId,\n- new ShardSnapshotStatus(event.state().nodes().localNodeId(), SnapshotMetaData.State.SUCCESS)));\n+ new ShardSnapshotStatus(event.state().nodes().localNodeId(), SnapshotsInProgress.State.SUCCESS)));\n } else if (localShard.getValue().stage() == IndexShardSnapshotStatus.Stage.FAILURE) {\n // but we think the shard failed - we need to make new master know that the shard failed\n logger.debug(\"[{}] new master thinks the shard [{}] is not completed but the shard failed locally, updating status on master\", snapshot.snapshotId(), shardId);\n@@ -961,7 +950,7 @@ private void updateIndexShardSnapshotStatus(UpdateIndexShardSnapshotStatusReques\n * @param shards list of shard statuses\n * @return true if all shards have completed (either successfully or failed), false otherwise\n */\n- private boolean completed(Collection<SnapshotMetaData.ShardSnapshotStatus> shards) {\n+ private boolean completed(Collection<SnapshotsInProgress.ShardSnapshotStatus> shards) {\n for (ShardSnapshotStatus status : shards) {\n if (!status.state().completed()) {\n return false;\n@@ -976,10 +965,10 @@ private boolean completed(Collection<SnapshotMetaData.ShardSnapshotStatus> shard\n * @param shards list of shard statuses\n * @return list of failed and closed indices\n */\n- private Tuple<Set<String>, Set<String>> indicesWithMissingShards(ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards, MetaData metaData) {\n+ private Tuple<Set<String>, Set<String>> indicesWithMissingShards(ImmutableMap<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shards, MetaData metaData) {\n Set<String> missing = newHashSet();\n Set<String> closed = newHashSet();\n- for (ImmutableMap.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> entry : shards.entrySet()) {\n+ for (ImmutableMap.Entry<ShardId, SnapshotsInProgress.ShardSnapshotStatus> entry : shards.entrySet()) {\n if (entry.getValue().state() == State.MISSING) {\n if (metaData.hasIndex(entry.getKey().getIndex()) && metaData.index(entry.getKey().getIndex()).getState() == IndexMetaData.State.CLOSE) {\n closed.add(entry.getKey().getIndex());\n@@ -1019,12 +1008,11 @@ public ClusterState execute(ClusterState currentState) {\n return currentState;\n }\n \n- final MetaData metaData = currentState.metaData();\n- final SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ final SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n if (snapshots != null) {\n int changedCount = 0;\n- final List<SnapshotMetaData.Entry> entries = newArrayList();\n- for (SnapshotMetaData.Entry entry : snapshots.entries()) {\n+ final List<SnapshotsInProgress.Entry> entries = newArrayList();\n+ for (SnapshotsInProgress.Entry entry : snapshots.entries()) {\n HashMap<ShardId, ShardSnapshotStatus> shards = null;\n \n for (int i = 0; i < batchSize; i++) {\n@@ -1043,11 +1031,11 @@ public ClusterState execute(ClusterState currentState) {\n \n if (shards != null) {\n if (!completed(shards.values())) {\n- entries.add(new SnapshotMetaData.Entry(entry, ImmutableMap.copyOf(shards)));\n+ entries.add(new SnapshotsInProgress.Entry(entry, ImmutableMap.copyOf(shards)));\n } else {\n // Snapshot is finished - mark it as done\n // TODO: Add PARTIAL_SUCCESS status?\n- SnapshotMetaData.Entry updatedEntry = new SnapshotMetaData.Entry(entry, State.SUCCESS, ImmutableMap.copyOf(shards));\n+ 
SnapshotsInProgress.Entry updatedEntry = new SnapshotsInProgress.Entry(entry, State.SUCCESS, ImmutableMap.copyOf(shards));\n entries.add(updatedEntry);\n // Finalize snapshot in the repository\n endSnapshot(updatedEntry);\n@@ -1060,9 +1048,8 @@ public ClusterState execute(ClusterState currentState) {\n if (changedCount > 0) {\n logger.trace(\"changed cluster state triggered by {} snapshot state updates\", changedCount);\n \n- final SnapshotMetaData updatedSnapshots = new SnapshotMetaData(entries.toArray(new SnapshotMetaData.Entry[entries.size()]));\n- final MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData()).putCustom(SnapshotMetaData.TYPE, updatedSnapshots);\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ final SnapshotsInProgress updatedSnapshots = new SnapshotsInProgress(entries.toArray(new SnapshotsInProgress.Entry[entries.size()]));\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, updatedSnapshots).build();\n }\n }\n return currentState;\n@@ -1084,7 +1071,7 @@ public void onFailure(String source, Throwable t) {\n *\n * @param entry snapshot\n */\n- private void endSnapshot(SnapshotMetaData.Entry entry) {\n+ private void endSnapshot(SnapshotsInProgress.Entry entry) {\n endSnapshot(entry, null);\n }\n \n@@ -1097,7 +1084,7 @@ private void endSnapshot(SnapshotMetaData.Entry entry) {\n * @param entry snapshot\n * @param failure failure reason or null if snapshot was successful\n */\n- private void endSnapshot(final SnapshotMetaData.Entry entry, final String failure) {\n+ private void endSnapshot(final SnapshotsInProgress.Entry entry, final String failure) {\n threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(new Runnable() {\n @Override\n public void run() {\n@@ -1136,23 +1123,20 @@ private void removeSnapshotFromClusterState(final SnapshotId snapshotId, final S\n clusterService.submitStateUpdateTask(\"remove snapshot metadata\", new ProcessedClusterStateUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n- MetaData metaData = currentState.metaData();\n- MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n if (snapshots != null) {\n boolean changed = false;\n- ArrayList<SnapshotMetaData.Entry> entries = newArrayList();\n- for (SnapshotMetaData.Entry entry : snapshots.entries()) {\n+ ArrayList<SnapshotsInProgress.Entry> entries = newArrayList();\n+ for (SnapshotsInProgress.Entry entry : snapshots.entries()) {\n if (entry.snapshotId().equals(snapshotId)) {\n changed = true;\n } else {\n entries.add(entry);\n }\n }\n if (changed) {\n- snapshots = new SnapshotMetaData(entries.toArray(new SnapshotMetaData.Entry[entries.size()]));\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, snapshots);\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ snapshots = new SnapshotsInProgress(entries.toArray(new SnapshotsInProgress.Entry[entries.size()]));\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, snapshots).build();\n }\n }\n return currentState;\n@@ -1196,14 +1180,12 @@ public void deleteSnapshot(final SnapshotId snapshotId, final DeleteSnapshotList\n \n @Override\n public ClusterState execute(ClusterState currentState) throws Exception {\n- MetaData metaData = currentState.metaData();\n- MetaData.Builder mdBuilder = 
MetaData.builder(currentState.metaData());\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n if (snapshots == null) {\n // No snapshots running - we can continue\n return currentState;\n }\n- SnapshotMetaData.Entry snapshot = snapshots.snapshot(snapshotId);\n+ SnapshotsInProgress.Entry snapshot = snapshots.snapshot(snapshotId);\n if (snapshot == null) {\n // This snapshot is not running - continue\n if (!snapshots.entries().isEmpty()) {\n@@ -1252,10 +1234,9 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n endSnapshot(snapshot);\n }\n }\n- SnapshotMetaData.Entry newSnapshot = new SnapshotMetaData.Entry(snapshot, State.ABORTED, shards);\n- snapshots = new SnapshotMetaData(newSnapshot);\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, snapshots);\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ SnapshotsInProgress.Entry newSnapshot = new SnapshotsInProgress.Entry(snapshot, State.ABORTED, shards);\n+ snapshots = new SnapshotsInProgress(newSnapshot);\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, snapshots).build();\n }\n }\n \n@@ -1303,10 +1284,9 @@ public void onSnapshotFailure(SnapshotId failedSnapshotId, Throwable t) {\n * @return true if repository is currently in use by one of the running snapshots\n */\n public static boolean isRepositoryInUse(ClusterState clusterState, String repository) {\n- MetaData metaData = clusterState.metaData();\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ SnapshotsInProgress snapshots = clusterState.custom(SnapshotsInProgress.TYPE);\n if (snapshots != null) {\n- for (SnapshotMetaData.Entry snapshot : snapshots.entries()) {\n+ for (SnapshotsInProgress.Entry snapshot : snapshots.entries()) {\n if (repository.equals(snapshot.snapshotId().getRepository())) {\n return true;\n }\n@@ -1343,18 +1323,18 @@ public void run() {\n * @param indices list of indices to be snapshotted\n * @return list of shard to be included into current snapshot\n */\n- private ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards(ClusterState clusterState, ImmutableList<String> indices) {\n- ImmutableMap.Builder<ShardId, SnapshotMetaData.ShardSnapshotStatus> builder = ImmutableMap.builder();\n+ private ImmutableMap<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shards(ClusterState clusterState, ImmutableList<String> indices) {\n+ ImmutableMap.Builder<ShardId, SnapshotsInProgress.ShardSnapshotStatus> builder = ImmutableMap.builder();\n MetaData metaData = clusterState.metaData();\n for (String index : indices) {\n IndexMetaData indexMetaData = metaData.index(index);\n if (indexMetaData == null) {\n // The index was deleted before we managed to start the snapshot - mark it as missing.\n- builder.put(new ShardId(index, 0), new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"missing index\"));\n+ builder.put(new ShardId(index, 0), new SnapshotsInProgress.ShardSnapshotStatus(null, State.MISSING, \"missing index\"));\n } else if (indexMetaData.getState() == IndexMetaData.State.CLOSE) {\n for (int i = 0; i < indexMetaData.numberOfShards(); i++) {\n ShardId shardId = new ShardId(index, i);\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"index is closed\"));\n+ builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(null, State.MISSING, \"index is closed\"));\n }\n } else {\n IndexRoutingTable 
indexRoutingTable = clusterState.getRoutingTable().index(index);\n@@ -1363,17 +1343,17 @@ private ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards(Clust\n if (indexRoutingTable != null) {\n ShardRouting primary = indexRoutingTable.shard(i).primaryShard();\n if (primary == null || !primary.assignedToNode()) {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"primary shard is not allocated\"));\n+ builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(null, State.MISSING, \"primary shard is not allocated\"));\n } else if (primary.relocating() || primary.initializing()) {\n // The WAITING state was introduced in V1.2.0 - don't use it if there are nodes with older version in the cluster\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.WAITING));\n+ builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(primary.currentNodeId(), State.WAITING));\n } else if (!primary.started()) {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.MISSING, \"primary shard hasn't been started yet\"));\n+ builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(primary.currentNodeId(), State.MISSING, \"primary shard hasn't been started yet\"));\n } else {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId()));\n+ builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(primary.currentNodeId()));\n }\n } else {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"missing routing table\"));\n+ builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(null, State.MISSING, \"missing routing table\"));\n }\n }\n }\n@@ -1656,15 +1636,15 @@ private SnapshotShards(ImmutableMap<ShardId, IndexShardSnapshotStatus> shards) {\n private static class UpdateIndexShardSnapshotStatusRequest extends TransportRequest {\n private SnapshotId snapshotId;\n private ShardId shardId;\n- private SnapshotMetaData.ShardSnapshotStatus status;\n+ private SnapshotsInProgress.ShardSnapshotStatus status;\n \n volatile boolean processed; // state field, no need to serialize\n \n private UpdateIndexShardSnapshotStatusRequest() {\n \n }\n \n- private UpdateIndexShardSnapshotStatusRequest(SnapshotId snapshotId, ShardId shardId, SnapshotMetaData.ShardSnapshotStatus status) {\n+ private UpdateIndexShardSnapshotStatusRequest(SnapshotId snapshotId, ShardId shardId, SnapshotsInProgress.ShardSnapshotStatus status) {\n this.snapshotId = snapshotId;\n this.shardId = shardId;\n this.status = status;\n@@ -1675,7 +1655,7 @@ public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n snapshotId = SnapshotId.readSnapshotId(in);\n shardId = ShardId.readShardId(in);\n- status = SnapshotMetaData.ShardSnapshotStatus.readShardSnapshotStatus(in);\n+ status = SnapshotsInProgress.ShardSnapshotStatus.readShardSnapshotStatus(in);\n }\n \n @Override\n@@ -1694,7 +1674,7 @@ public ShardId shardId() {\n return shardId;\n }\n \n- public SnapshotMetaData.ShardSnapshotStatus status() {\n+ public SnapshotsInProgress.ShardSnapshotStatus status() {\n return status;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
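The `RestoreService` and `SnapshotsService` diffs above all apply one pattern: the in-progress snapshot/restore records move from `MetaData` customs to `ClusterState` customs, so cluster-state update tasks now read them with `currentState.custom(TYPE)` and write them back through `ClusterState.builder(...).putCustom(TYPE, ...)` instead of going through a `MetaData.Builder`. Below is a minimal sketch of that read-modify-write shape, using only the class and method names that appear in the diff; the wrapper class and its method are illustrative and not part of the PR.

```java
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.SnapshotsInProgress;

// Illustrative helper (assumption: not part of the PR) showing the new custom-handling
// pattern: in-progress snapshot state lives directly on the ClusterState, not on MetaData.
class SnapshotsInProgressUpdateExample {

    // Reads the in-progress snapshots straight from the cluster state and, if any exist,
    // writes an updated copy back via ClusterState.builder(...).putCustom(...).
    static ClusterState replaceSnapshotsInProgress(ClusterState currentState, SnapshotsInProgress updated) {
        SnapshotsInProgress current = currentState.custom(SnapshotsInProgress.TYPE);
        if (current == null || current.entries().isEmpty()) {
            return currentState; // nothing running, nothing to replace
        }
        return ClusterState.builder(currentState)
                .putCustom(SnapshotsInProgress.TYPE, updated)
                .build();
    }
}
```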
{
"diff": "@@ -88,16 +88,16 @@ protected void setUpRepository() throws Exception {\n \n @Test\n public void testCreateSnapshotWithBlocks() {\n- logger.info(\"--> creating a snapshot is blocked when the cluster is read only\");\n+ logger.info(\"--> creating a snapshot is allowed when the cluster is read only\");\n try {\n setClusterReadOnly(true);\n- assertBlocked(client().admin().cluster().prepareCreateSnapshot(REPOSITORY_NAME, \"snapshot-1\"), MetaData.CLUSTER_READ_ONLY_BLOCK);\n+ assertThat(client().admin().cluster().prepareCreateSnapshot(REPOSITORY_NAME, \"snapshot-1\").setWaitForCompletion(true).get().status(), equalTo(RestStatus.OK));\n } finally {\n setClusterReadOnly(false);\n }\n \n logger.info(\"--> creating a snapshot is allowed when the cluster is not read only\");\n- CreateSnapshotResponse response = client().admin().cluster().prepareCreateSnapshot(REPOSITORY_NAME, \"snapshot-1\")\n+ CreateSnapshotResponse response = client().admin().cluster().prepareCreateSnapshot(REPOSITORY_NAME, \"snapshot-2\")\n .setWaitForCompletion(true)\n .execute().actionGet();\n assertThat(response.status(), equalTo(RestStatus.OK));\n@@ -126,17 +126,13 @@ public void testCreateSnapshotWithIndexBlocks() {\n \n @Test\n public void testDeleteSnapshotWithBlocks() {\n- logger.info(\"--> deleting a snapshot is blocked when the cluster is read only\");\n+ logger.info(\"--> deleting a snapshot is allowed when the cluster is read only\");\n try {\n setClusterReadOnly(true);\n- assertBlocked(client().admin().cluster().prepareDeleteSnapshot(REPOSITORY_NAME, SNAPSHOT_NAME), MetaData.CLUSTER_READ_ONLY_BLOCK);\n+ assertTrue(client().admin().cluster().prepareDeleteSnapshot(REPOSITORY_NAME, SNAPSHOT_NAME).get().isAcknowledged());\n } finally {\n setClusterReadOnly(false);\n }\n-\n- logger.info(\"--> deleting a snapshot is allowed when the cluster is not read only\");\n- DeleteSnapshotResponse response = client().admin().cluster().prepareDeleteSnapshot(REPOSITORY_NAME, SNAPSHOT_NAME).execute().actionGet();\n- assertThat(response.isAcknowledged(), equalTo(true));\n }\n \n @Test",
"filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/snapshots/SnapshotBlocksTests.java",
"status": "modified"
},
{
"diff": "@@ -85,6 +85,8 @@ public void testClusterStateDiffSerialization() throws Exception {\n builder = randomBlocks(clusterState);\n break;\n case 3:\n+ builder = randomClusterStateCustoms(clusterState);\n+ break;\n case 4:\n builder = randomMetaDataChanges(clusterState);\n break;\n@@ -163,6 +165,9 @@ public void testClusterStateDiffSerialization() throws Exception {\n \n }\n \n+ /**\n+ * Randomly updates nodes in the cluster state\n+ */\n private ClusterState.Builder randomNodes(ClusterState clusterState) {\n DiscoveryNodes.Builder nodes = DiscoveryNodes.builder(clusterState.nodes());\n List<String> nodeIds = randomSubsetOf(randomInt(clusterState.nodes().nodes().size() - 1), clusterState.nodes().nodes().keys().toArray(String.class));\n@@ -182,6 +187,9 @@ private ClusterState.Builder randomNodes(ClusterState clusterState) {\n return ClusterState.builder(clusterState).nodes(nodes);\n }\n \n+ /**\n+ * Randomly updates routing table in the cluster state\n+ */\n private ClusterState.Builder randomRoutingTable(ClusterState clusterState) {\n RoutingTable.Builder builder = RoutingTable.builder(clusterState.routingTable());\n int numberOfIndices = clusterState.routingTable().indicesRouting().size();\n@@ -202,6 +210,9 @@ private ClusterState.Builder randomRoutingTable(ClusterState clusterState) {\n return ClusterState.builder(clusterState).routingTable(builder.build());\n }\n \n+ /**\n+ * Randomly updates index routing table in the cluster state\n+ */\n private IndexRoutingTable randomIndexRoutingTable(String index, String[] nodeIds) {\n IndexRoutingTable.Builder builder = IndexRoutingTable.builder(index);\n int shardCount = randomInt(10);\n@@ -218,6 +229,9 @@ private IndexRoutingTable randomIndexRoutingTable(String index, String[] nodeIds\n return builder.build();\n }\n \n+ /**\n+ * Randomly creates or removes cluster blocks\n+ */\n private ClusterState.Builder randomBlocks(ClusterState clusterState) {\n ClusterBlocks.Builder builder = ClusterBlocks.builder().blocks(clusterState.blocks());\n int globalBlocksCount = clusterState.blocks().global().size();\n@@ -234,6 +248,9 @@ private ClusterState.Builder randomBlocks(ClusterState clusterState) {\n return ClusterState.builder(clusterState).blocks(builder);\n }\n \n+ /**\n+ * Returns a random global block\n+ */\n private ClusterBlock randomGlobalBlock() {\n switch (randomInt(2)) {\n case 0:\n@@ -245,6 +262,67 @@ private ClusterBlock randomGlobalBlock() {\n }\n }\n \n+ /**\n+ * Random cluster state part generator interface. 
Used by {@link #randomClusterStateParts(ClusterState, String, RandomClusterPart)}\n+ * method to update cluster state with randomly generated parts\n+ */\n+ private interface RandomClusterPart<T> {\n+ /**\n+ * Returns list of parts from metadata\n+ */\n+ ImmutableOpenMap<String, T> parts(ClusterState clusterState);\n+\n+ /**\n+ * Puts the part back into metadata\n+ */\n+ ClusterState.Builder put(ClusterState.Builder builder, T part);\n+\n+ /**\n+ * Remove the part from metadata\n+ */\n+ ClusterState.Builder remove(ClusterState.Builder builder, String name);\n+\n+ /**\n+ * Returns a random part with the specified name\n+ */\n+ T randomCreate(String name);\n+\n+ /**\n+ * Makes random modifications to the part\n+ */\n+ T randomChange(T part);\n+\n+ }\n+\n+ /**\n+ * Takes an existing cluster state and randomly adds, removes or updates a cluster state part using randomPart generator.\n+ * If a new part is added the prefix value is used as a prefix of randomly generated part name.\n+ */\n+ private <T> ClusterState randomClusterStateParts(ClusterState clusterState, String prefix, RandomClusterPart<T> randomPart) {\n+ ClusterState.Builder builder = ClusterState.builder(clusterState);\n+ ImmutableOpenMap<String, T> parts = randomPart.parts(clusterState);\n+ int partCount = parts.size();\n+ if (partCount > 0) {\n+ List<String> randomParts = randomSubsetOf(randomInt(partCount - 1), randomPart.parts(clusterState).keys().toArray(String.class));\n+ for (String part : randomParts) {\n+ if (randomBoolean()) {\n+ randomPart.remove(builder, part);\n+ } else {\n+ randomPart.put(builder, randomPart.randomChange(parts.get(part)));\n+ }\n+ }\n+ }\n+ int additionalPartCount = randomIntBetween(1, 20);\n+ for (int i = 0; i < additionalPartCount; i++) {\n+ String name = randomName(prefix);\n+ randomPart.put(builder, randomPart.randomCreate(name));\n+ }\n+ return builder.build();\n+ }\n+\n+ /**\n+ * Makes random metadata changes\n+ */\n private ClusterState.Builder randomMetaDataChanges(ClusterState clusterState) {\n MetaData metaData = clusterState.metaData();\n int changesCount = randomIntBetween(1, 10);\n@@ -269,6 +347,9 @@ private ClusterState.Builder randomMetaDataChanges(ClusterState clusterState) {\n return ClusterState.builder(clusterState).metaData(MetaData.builder(metaData).version(metaData.version() + 1).build());\n }\n \n+ /**\n+ * Makes random settings changes\n+ */\n private Settings randomSettings(Settings settings) {\n Settings.Builder builder = Settings.builder();\n if (randomBoolean()) {\n@@ -282,6 +363,9 @@ private Settings randomSettings(Settings settings) {\n \n }\n \n+ /**\n+ * Randomly updates persistent or transient settings of the given metadata\n+ */\n private MetaData randomMetaDataSettings(MetaData metaData) {\n if (randomBoolean()) {\n return MetaData.builder(metaData).persistentSettings(randomSettings(metaData.persistentSettings())).build();\n@@ -290,6 +374,9 @@ private MetaData randomMetaDataSettings(MetaData metaData) {\n }\n }\n \n+ /**\n+ * Random metadata part generator\n+ */\n private interface RandomPart<T> {\n /**\n * Returns list of parts from metadata\n@@ -318,6 +405,10 @@ private interface RandomPart<T> {\n \n }\n \n+ /**\n+ * Takes an existing cluster state and randomly adds, removes or updates a metadata part using randomPart generator.\n+ * If a new part is added the prefix value is used as a prefix of randomly generated part name.\n+ */\n private <T> MetaData randomParts(MetaData metaData, String prefix, RandomPart<T> randomPart) {\n MetaData.Builder builder = 
MetaData.builder(metaData);\n ImmutableOpenMap<String, T> parts = randomPart.parts(metaData);\n@@ -340,6 +431,9 @@ private <T> MetaData randomParts(MetaData metaData, String prefix, RandomPart<T>\n return builder.build();\n }\n \n+ /**\n+ * Randomly add, deletes or updates indices in the metadata\n+ */\n private MetaData randomIndices(MetaData metaData) {\n return randomParts(metaData, \"index\", new RandomPart<IndexMetaData>() {\n \n@@ -404,6 +498,9 @@ public IndexMetaData randomChange(IndexMetaData part) {\n });\n }\n \n+ /**\n+ * Generates a random warmer\n+ */\n private IndexWarmersMetaData randomWarmers() {\n if (randomBoolean()) {\n return new IndexWarmersMetaData(\n@@ -418,6 +515,9 @@ private IndexWarmersMetaData randomWarmers() {\n }\n }\n \n+ /**\n+ * Randomly adds, deletes or updates index templates in the metadata\n+ */\n private MetaData randomTemplates(MetaData metaData) {\n return randomParts(metaData, \"template\", new RandomPart<IndexTemplateMetaData>() {\n @Override\n@@ -460,6 +560,9 @@ public IndexTemplateMetaData randomChange(IndexTemplateMetaData part) {\n });\n }\n \n+ /**\n+ * Generates random alias\n+ */\n private AliasMetaData randomAlias() {\n AliasMetaData.Builder builder = newAliasMetaDataBuilder(randomName(\"alias\"));\n if (randomBoolean()) {\n@@ -471,6 +574,9 @@ private AliasMetaData randomAlias() {\n return builder.build();\n }\n \n+ /**\n+ * Randomly adds, deletes or updates repositories in the metadata\n+ */\n private MetaData randomMetaDataCustoms(final MetaData metaData) {\n return randomParts(metaData, \"custom\", new RandomPart<MetaData.Custom>() {\n \n@@ -481,14 +587,7 @@ public ImmutableOpenMap<String, MetaData.Custom> parts(MetaData metaData) {\n \n @Override\n public MetaData.Builder put(MetaData.Builder builder, MetaData.Custom part) {\n- if (part instanceof SnapshotMetaData) {\n- return builder.putCustom(SnapshotMetaData.TYPE, part);\n- } else if (part instanceof RepositoriesMetaData) {\n- return builder.putCustom(RepositoriesMetaData.TYPE, part);\n- } else if (part instanceof RestoreMetaData) {\n- return builder.putCustom(RestoreMetaData.TYPE, part);\n- }\n- throw new IllegalArgumentException(\"Unknown custom part \" + part);\n+ return builder.putCustom(part.type(), part);\n }\n \n @Override\n@@ -498,35 +597,69 @@ public MetaData.Builder remove(MetaData.Builder builder, String name) {\n \n @Override\n public MetaData.Custom randomCreate(String name) {\n- switch (randomIntBetween(0, 2)) {\n+ return new RepositoriesMetaData();\n+ }\n+\n+ @Override\n+ public MetaData.Custom randomChange(MetaData.Custom part) {\n+ return part;\n+ }\n+ });\n+ }\n+\n+ /**\n+ * Randomly adds, deletes or updates in-progress snapshot and restore records in the cluster state\n+ */\n+ private ClusterState.Builder randomClusterStateCustoms(final ClusterState clusterState) {\n+ return ClusterState.builder(randomClusterStateParts(clusterState, \"custom\", new RandomClusterPart<ClusterState.Custom>() {\n+\n+ @Override\n+ public ImmutableOpenMap<String, ClusterState.Custom> parts(ClusterState clusterState) {\n+ return clusterState.customs();\n+ }\n+\n+ @Override\n+ public ClusterState.Builder put(ClusterState.Builder builder, ClusterState.Custom part) {\n+ return builder.putCustom(part.type(), part);\n+ }\n+\n+ @Override\n+ public ClusterState.Builder remove(ClusterState.Builder builder, String name) {\n+ return builder.removeCustom(name);\n+ }\n+\n+ @Override\n+ public ClusterState.Custom randomCreate(String name) {\n+ switch (randomIntBetween(0, 1)) {\n case 0:\n- return new 
SnapshotMetaData(new SnapshotMetaData.Entry(\n+ return new SnapshotsInProgress(new SnapshotsInProgress.Entry(\n new SnapshotId(randomName(\"repo\"), randomName(\"snap\")),\n randomBoolean(),\n- SnapshotMetaData.State.fromValue((byte) randomIntBetween(0, 6)),\n+ SnapshotsInProgress.State.fromValue((byte) randomIntBetween(0, 6)),\n ImmutableList.<String>of(),\n Math.abs(randomLong()),\n- ImmutableMap.<ShardId, SnapshotMetaData.ShardSnapshotStatus>of()));\n+ ImmutableMap.<ShardId, SnapshotsInProgress.ShardSnapshotStatus>of()));\n case 1:\n- return new RepositoriesMetaData();\n- case 2:\n- return new RestoreMetaData(new RestoreMetaData.Entry(\n+ return new RestoreInProgress(new RestoreInProgress.Entry(\n new SnapshotId(randomName(\"repo\"), randomName(\"snap\")),\n- RestoreMetaData.State.fromValue((byte) randomIntBetween(0, 3)),\n+ RestoreInProgress.State.fromValue((byte) randomIntBetween(0, 3)),\n ImmutableList.<String>of(),\n- ImmutableMap.<ShardId, RestoreMetaData.ShardRestoreStatus>of()));\n+ ImmutableMap.<ShardId, RestoreInProgress.ShardRestoreStatus>of()));\n default:\n throw new IllegalArgumentException(\"Shouldn't be here\");\n }\n }\n \n @Override\n- public MetaData.Custom randomChange(MetaData.Custom part) {\n+ public ClusterState.Custom randomChange(ClusterState.Custom part) {\n return part;\n }\n- });\n+ }));\n }\n \n+ /**\n+ * Generates a random name that starts with the given prefix\n+ */\n private String randomName(String prefix) {\n return prefix + Strings.randomBase64UUID(getRandom());\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/ClusterStateDiffTests.java",
"status": "modified"
},
{
"diff": "@@ -19,13 +19,12 @@\n package org.elasticsearch.snapshots;\n \n import com.google.common.base.Predicate;\n-import com.google.common.collect.ImmutableList;\n \n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksResponse;\n import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n+import org.elasticsearch.cluster.SnapshotsInProgress;\n import org.elasticsearch.cluster.service.PendingClusterTask;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.settings.Settings;\n@@ -106,8 +105,8 @@ public SnapshotInfo waitForCompletion(String repository, String snapshot, TimeVa\n if (snapshotInfos.get(0).state().completed()) {\n // Make sure that snapshot clean up operations are finished\n ClusterStateResponse stateResponse = client().admin().cluster().prepareState().get();\n- SnapshotMetaData snapshotMetaData = stateResponse.getState().getMetaData().custom(SnapshotMetaData.TYPE);\n- if (snapshotMetaData == null || snapshotMetaData.snapshot(snapshotId) == null) {\n+ SnapshotsInProgress snapshotsInProgress = stateResponse.getState().custom(SnapshotsInProgress.TYPE);\n+ if (snapshotsInProgress == null || snapshotsInProgress.snapshot(snapshotId) == null) {\n return snapshotInfos.get(0);\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotTests.java",
"status": "modified"
},
{
"diff": "@@ -68,7 +68,6 @@\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryPlugin;\n import org.elasticsearch.test.InternalTestCluster;\n-import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.rest.FakeRestRequest;\n import org.junit.Ignore;\n import org.junit.Test;",
"filename": "core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java",
"status": "modified"
},
{
"diff": "@@ -33,7 +33,6 @@\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.status.*;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n-import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksResponse;\n import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse;\n@@ -42,11 +41,10 @@\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.metadata.*;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData.Entry;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData.ShardSnapshotStatus;\n-import org.elasticsearch.cluster.metadata.SnapshotMetaData.State;\n+import org.elasticsearch.cluster.SnapshotsInProgress.Entry;\n+import org.elasticsearch.cluster.SnapshotsInProgress.ShardSnapshotStatus;\n+import org.elasticsearch.cluster.SnapshotsInProgress.State;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n-import org.elasticsearch.cluster.service.PendingClusterTask;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.Settings;\n@@ -1390,7 +1388,7 @@ public void snapshotStatusTest() throws Exception {\n SnapshotsStatusResponse response = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").execute().actionGet();\n assertThat(response.getSnapshots().size(), equalTo(1));\n SnapshotStatus snapshotStatus = response.getSnapshots().get(0);\n- assertThat(snapshotStatus.getState(), equalTo(SnapshotMetaData.State.STARTED));\n+ assertThat(snapshotStatus.getState(), equalTo(SnapshotsInProgress.State.STARTED));\n // We blocked the node during data write operation, so at least one shard snapshot should be in STARTED stage\n assertThat(snapshotStatus.getShardsStats().getStartedShards(), greaterThan(0));\n for (SnapshotIndexShardStatus shardStatus : snapshotStatus.getIndices().get(\"test-idx\")) {\n@@ -1403,7 +1401,7 @@ public void snapshotStatusTest() throws Exception {\n response = client.admin().cluster().prepareSnapshotStatus().execute().actionGet();\n assertThat(response.getSnapshots().size(), equalTo(1));\n snapshotStatus = response.getSnapshots().get(0);\n- assertThat(snapshotStatus.getState(), equalTo(SnapshotMetaData.State.STARTED));\n+ assertThat(snapshotStatus.getState(), equalTo(SnapshotsInProgress.State.STARTED));\n // We blocked the node during data write operation, so at least one shard snapshot should be in STARTED stage\n assertThat(snapshotStatus.getShardsStats().getStartedShards(), greaterThan(0));\n for (SnapshotIndexShardStatus shardStatus : snapshotStatus.getIndices().get(\"test-idx\")) {\n@@ -1769,9 +1767,7 @@ public ClusterState execute(ClusterState currentState) {\n shards.put(new ShardId(\"test-idx\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n ImmutableList.Builder<Entry> entries = ImmutableList.builder();\n entries.add(new Entry(new SnapshotId(\"test-repo\", \"test-snap\"), true, State.ABORTED, ImmutableList.of(\"test-idx\"), System.currentTimeMillis(), shards.build()));\n- MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, new SnapshotMetaData(entries.build()));\n- return 
ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ return ClusterState.builder(currentState).putCustom(SnapshotsInProgress.TYPE, new SnapshotsInProgress(entries.build())).build();\n }\n \n @Override",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
},
{
"diff": "@@ -426,4 +426,13 @@ The restore operation uses the standard shard recovery mechanism. Therefore, any\n be canceled by deleting indices that are being restored. Please note that data for all deleted indices will be removed\n from the cluster as a result of this operation.\n \n+[float]\n+=== Effect of cluster blocks on snapshot and restore operations\n+Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering\n+repositories require write global metadata access. The snapshot operation requires that all indices and their metadata as\n+well as the global metadata were readable. The restore operation requires the global metadata to be writable, however\n+the index level blocks are ignored during restore because indices are essentially recreated during restore.\n+Please note that a repository content is not part of the cluster and therefore cluster blocks don't affect internal\n+repository operations such as listing or deleting snapshots from an already registered repository.\n+\n ",
"filename": "docs/reference/modules/snapshots.asciidoc",
"status": "modified"
}
]
} |
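For orientation, the diff above drives its randomized cluster-state test through a small "random named parts" contract (parts / put / remove / randomCreate / randomChange) and then mutates the state by dropping, changing, or adding parts. Below is a minimal, dependency-free Java sketch of that mutation pattern; the interface and method names are illustrative stand-ins, not the Elasticsearch test classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.UUID;

public class RandomPartsSketch {

    /** Simplified analogue of the RandomClusterPart contract used in the test above. */
    interface RandomPart<T> {
        T randomCreate(String name);   // build a brand new part
        T randomChange(T part);        // mutate an existing part
    }

    /**
     * Randomly removes, mutates, or keeps existing named parts, then adds 1..20
     * freshly generated parts under the prefix -- the same shape as
     * randomClusterStateParts(...) in the diff, just over a plain Map.
     */
    static <T> Map<String, T> randomParts(Map<String, T> current, String prefix,
                                          RandomPart<T> gen, Random random) {
        Map<String, T> result = new HashMap<>(current);
        for (String name : new ArrayList<>(result.keySet())) {
            if (random.nextBoolean()) {
                continue; // leave this part untouched
            }
            if (random.nextBoolean()) {
                result.remove(name);
            } else {
                result.put(name, gen.randomChange(result.get(name)));
            }
        }
        int additions = 1 + random.nextInt(20);
        for (int i = 0; i < additions; i++) {
            String name = prefix + UUID.randomUUID();
            result.put(name, gen.randomCreate(name));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> state = new HashMap<>(Map.of("custom-a", 1, "custom-b", 2));
        Map<String, Integer> mutated = randomParts(state, "custom", new RandomPart<Integer>() {
            @Override public Integer randomCreate(String name) { return name.hashCode(); }
            @Override public Integer randomChange(Integer part) { return part + 1; }
        }, new Random());
        System.out.println(mutated.size() + " parts after mutation");
    }
}
```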
{
"body": "I have a simple testcase here:\n\n``` Java\n@Test\n public void testExecute_withAggs() throws Exception {\n\n client().admin().indices().prepareCreate(\"my-index\")\n .addMapping(\"my-type\", \"_timestamp\", \"enabled=true\")\n .get();\n ensureGreen(\"my-index\");\n\n client().prepareIndex(\"my-index\", \"my-type\").setTimestamp(\"2005-01-01T00:00\").setSource(\"{}\").get();\n client().prepareIndex(\"my-index\", \"my-type\").setTimestamp(\"2005-01-01T00:10\").setSource(\"{}\").get();\n client().prepareIndex(\"my-index\", \"my-type\").setTimestamp(\"2005-01-01T00:20\").setSource(\"{}\").get();\n client().prepareIndex(\"my-index\", \"my-type\").setTimestamp(\"2005-01-01T00:30\").setSource(\"{}\").get();\n refresh();\n\n SearchResponse response = client().prepareSearch(\"my-index\")\n .addAggregation(AggregationBuilders.dateHistogram(\"rate\").field(\"_timestamp\").interval(DateHistogramInterval.HOUR).order(Histogram.Order.COUNT_DESC))\n .get();\n response.toString();\n }\n```\n\nwhich fails generating the XContent with this:\n\n```\n\njava.lang.UnsupportedOperationException: Printing not supported\n at __randomizedtesting.SeedInfo.seed([4C10016566504505:7B5D40D0D89482B4]:0)\n at org.joda.time.format.DateTimeFormatter.requirePrinter(DateTimeFormatter.java:695)\n at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:642)\n at org.elasticsearch.search.aggregations.support.format.ValueFormatter$DateTime.format(ValueFormatter.java:127)\n at org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram$Bucket.toXContent(InternalHistogram.java:156)\n at org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.doXContentBody(InternalHistogram.java:535)\n at org.elasticsearch.search.aggregations.InternalAggregation.toXContent(InternalAggregation.java:202)\n at org.elasticsearch.search.aggregations.InternalAggregations.toXContentInternal(InternalAggregations.java:196)\n at org.elasticsearch.search.aggregations.InternalAggregations.toXContent(InternalAggregations.java:187)\n at org.elasticsearch.search.internal.InternalSearchResponse.toXContent(InternalSearchResponse.java:91)\n at org.elasticsearch.action.search.SearchResponse.toXContent(SearchResponse.java:181)\n at org.elasticsearch.action.search.SearchResponse.toString(SearchResponse.java:225)\n at org.elasticsearch.search.aggregations.bucket.HistogramTests.testExecute_withAggs(HistogramTests.java:1041)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n```\n\nto me this looks wrong :) but not my part of the system to figure out what the hack is going on... @colings86 maybe?\n\nhappens only on 2.0\n",
"comments": [
{
"body": "This bug will be fixed with https://github.com/elastic/elasticsearch/pull/11482 which adds a printer to the epoch_second and epoch_millis formats. I pushed the test case suggested in this issue to master.\n",
"created_at": "2015-06-16T08:37:07Z"
}
],
"number": 11692,
"title": "Histogram Agg result fails to generate XContent"
} | {
"body": "This fixes an issue to allow for negative unix timestamps.\nIt also fixes the default date field mapper to support epochs.\n\nFixes #11478\nFixes #11692\n",
"number": 11482,
"review_comments": [
{
"body": "it seems wrong to me to give the dateOptionTime printer to the epoch_\\* formats, since in that case I would not be able to print epoch dates if I needed/wanted to? I would suggest allowing these formats to print as epoch timestamps and then the user can use the order of formats in the `format` field to determine how they want the date to be printed. \n\nMaybe also (in a different PR) we should separate index and display time formats into different options so you could set only e.g. epoch_second to be allowed when indexing documents but display a more human readable format to the user at query time?\n",
"created_at": "2015-06-16T08:47:41Z"
},
{
"body": "This should probably be `epoch_seconds` to match the fact the `epoch_millis` is plural\n",
"created_at": "2015-06-16T08:48:12Z"
},
{
"body": "+1 on a dedicated printer\n\n> Maybe also (in a different PR) we should separate index and display time formats into different options so you could set only e.g. epoch_second to be allowed when indexing documents but display a more human readable format to the user at query time?\n\nOne thing that makes me uncomfortable about this is that it means that a format could not always parse the dates that it prints.\n",
"created_at": "2015-06-16T08:50:28Z"
},
{
"body": "I stayed consistent with the other formats here... I agree that naming is not ideal, but IMO it is better to stay like the others\n",
"created_at": "2015-06-17T15:28:29Z"
},
{
"body": "shouldn't it be /1000 instead?\n",
"created_at": "2015-06-17T15:31:13Z"
},
{
"body": "maybe we can also have a simple test that makes sure that calling print then parse is idempotent?\n",
"created_at": "2015-06-18T06:37:35Z"
},
{
"body": "fixed and test added\n",
"created_at": "2015-06-22T07:24:43Z"
},
{
"body": "added a test as well\n",
"created_at": "2015-06-22T07:24:51Z"
},
{
"body": "no tests for epoch_second?\n",
"created_at": "2015-06-22T08:13:26Z"
}
],
"title": "Allow for negative unix timestamps"
} | {
"commits": [
{
"message": "Dates: Allow for negative unix timestamps\n\nThis fixes an issue to allow for negative unix timestamps.\nAn own printer for epochs instead of just having a parser has been added.\nAdded docs that only 10/13 length unix timestamps are supported\nAdded docs in upgrade documentation\n\nFixes #11478"
}
],
"files": [
{
"diff": "@@ -26,6 +26,8 @@\n import org.joda.time.field.ScaledDurationField;\n import org.joda.time.format.*;\n \n+import java.io.IOException;\n+import java.io.Writer;\n import java.util.Locale;\n import java.util.regex.Pattern;\n \n@@ -135,9 +137,9 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n } else if (\"yearMonthDay\".equals(input) || \"year_month_day\".equals(input)) {\n formatter = ISODateTimeFormat.yearMonthDay();\n } else if (\"epoch_second\".equals(input)) {\n- formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(false)).toFormatter();\n+ formatter = new DateTimeFormatterBuilder().append(new EpochTimePrinter(false), new EpochTimeParser(false)).toFormatter();\n } else if (\"epoch_millis\".equals(input)) {\n- formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(true)).toFormatter();\n+ formatter = new DateTimeFormatterBuilder().append(new EpochTimePrinter(true), new EpochTimeParser(true)).toFormatter();\n } else if (Strings.hasLength(input) && input.contains(\"||\")) {\n String[] formats = Strings.delimitedListToStringArray(input, \"||\");\n DateTimeParser[] parsers = new DateTimeParser[formats.length];\n@@ -200,8 +202,8 @@ public DateTimeField getField(Chronology chronology) {\n \n public static class EpochTimeParser implements DateTimeParser {\n \n- private static final Pattern MILLI_SECOND_PRECISION_PATTERN = Pattern.compile(\"^\\\\d{1,13}$\");\n- private static final Pattern SECOND_PRECISION_PATTERN = Pattern.compile(\"^\\\\d{1,10}$\");\n+ private static final Pattern MILLI_SECOND_PRECISION_PATTERN = Pattern.compile(\"^-?\\\\d{1,13}$\");\n+ private static final Pattern SECOND_PRECISION_PATTERN = Pattern.compile(\"^-?\\\\d{1,10}$\");\n \n private final boolean hasMilliSecondPrecision;\n private final Pattern pattern;\n@@ -218,7 +220,10 @@ public int estimateParsedLength() {\n \n @Override\n public int parseInto(DateTimeParserBucket bucket, String text, int position) {\n- if (text.length() > estimateParsedLength() ||\n+ boolean isPositive = text.startsWith(\"-\") == false;\n+ boolean isTooLong = text.length() > estimateParsedLength();\n+\n+ if ((isPositive && isTooLong) ||\n // timestamps have to have UTC timezone\n bucket.getZone() != DateTimeZone.UTC ||\n pattern.matcher(text).matches() == false) {\n@@ -242,5 +247,66 @@ public int parseInto(DateTimeParserBucket bucket, String text, int position) {\n }\n return text.length();\n }\n- };\n+ }\n+\n+ public static class EpochTimePrinter implements DateTimePrinter {\n+\n+ private boolean hasMilliSecondPrecision;\n+\n+ public EpochTimePrinter(boolean hasMilliSecondPrecision) {\n+ this.hasMilliSecondPrecision = hasMilliSecondPrecision;\n+ }\n+\n+ @Override\n+ public int estimatePrintedLength() {\n+ return hasMilliSecondPrecision ? 
13 : 10;\n+ }\n+\n+ @Override\n+ public void printTo(StringBuffer buf, long instant, Chronology chrono, int displayOffset, DateTimeZone displayZone, Locale locale) {\n+ if (hasMilliSecondPrecision) {\n+ buf.append(instant);\n+ } else {\n+ buf.append(instant / 1000);\n+ }\n+ }\n+\n+ @Override\n+ public void printTo(Writer out, long instant, Chronology chrono, int displayOffset, DateTimeZone displayZone, Locale locale) throws IOException {\n+ if (hasMilliSecondPrecision) {\n+ out.write(String.valueOf(instant));\n+ } else {\n+ out.append(String.valueOf(instant / 1000));\n+ }\n+ }\n+\n+ @Override\n+ public void printTo(StringBuffer buf, ReadablePartial partial, Locale locale) {\n+ if (hasMilliSecondPrecision) {\n+ buf.append(String.valueOf(getDateTimeMillis(partial)));\n+ } else {\n+ buf.append(String.valueOf(getDateTimeMillis(partial) / 1000));\n+ }\n+ }\n+\n+ @Override\n+ public void printTo(Writer out, ReadablePartial partial, Locale locale) throws IOException {\n+ if (hasMilliSecondPrecision) {\n+ out.append(String.valueOf(getDateTimeMillis(partial)));\n+ } else {\n+ out.append(String.valueOf(getDateTimeMillis(partial) / 1000));\n+ }\n+ }\n+\n+ private long getDateTimeMillis(ReadablePartial partial) {\n+ int year = partial.get(DateTimeFieldType.year());\n+ int monthOfYear = partial.get(DateTimeFieldType.monthOfYear());\n+ int dayOfMonth = partial.get(DateTimeFieldType.dayOfMonth());\n+ int hourOfDay = partial.get(DateTimeFieldType.hourOfDay());\n+ int minuteOfHour = partial.get(DateTimeFieldType.minuteOfHour());\n+ int secondOfMinute = partial.get(DateTimeFieldType.secondOfMinute());\n+ int millisOfSecond = partial.get(DateTimeFieldType.millisOfSecond());\n+ return partial.getChronology().getDateTimeMillis(year, monthOfYear, dayOfMonth, hourOfDay, minuteOfHour, secondOfMinute, millisOfSecond);\n+ }\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/joda/Joda.java",
"status": "modified"
},
{
"diff": "@@ -25,11 +25,13 @@\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n+import org.joda.time.LocalDateTime;\n import org.joda.time.MutableDateTime;\n import org.joda.time.format.*;\n import org.junit.Test;\n \n import java.util.Date;\n+import java.util.Locale;\n \n import static org.hamcrest.Matchers.*;\n \n@@ -250,7 +252,7 @@ public void testRoundingWithTimeZone() {\n }\n \n @Test\n- public void testThatEpochsInSecondsCanBeParsed() {\n+ public void testThatEpochsCanBeParsed() {\n boolean parseMilliSeconds = randomBoolean();\n \n // epoch: 1433144433655 => date: Mon Jun 1 09:40:33.655 CEST 2015\n@@ -271,6 +273,37 @@ public void testThatEpochsInSecondsCanBeParsed() {\n }\n }\n \n+ @Test\n+ public void testThatNegativeEpochsCanBeParsed() {\n+ // problem: negative epochs can be arbitrary in size...\n+ boolean parseMilliSeconds = randomBoolean();\n+ FormatDateTimeFormatter formatter = Joda.forPattern(parseMilliSeconds ? \"epoch_millis\" : \"epoch_second\");\n+ DateTime dateTime = formatter.parser().parseDateTime(\"-10000\");\n+\n+ assertThat(dateTime.getYear(), is(1969));\n+ assertThat(dateTime.getMonthOfYear(), is(12));\n+ assertThat(dateTime.getDayOfMonth(), is(31));\n+ if (parseMilliSeconds) {\n+ assertThat(dateTime.getHourOfDay(), is(23)); // utc timezone, +2 offset due to CEST\n+ assertThat(dateTime.getMinuteOfHour(), is(59));\n+ assertThat(dateTime.getSecondOfMinute(), is(50));\n+ } else {\n+ assertThat(dateTime.getHourOfDay(), is(21)); // utc timezone, +2 offset due to CEST\n+ assertThat(dateTime.getMinuteOfHour(), is(13));\n+ assertThat(dateTime.getSecondOfMinute(), is(20));\n+ }\n+\n+ // every negative epoch must be parsed, no matter if exact the size or bigger\n+ if (parseMilliSeconds) {\n+ formatter.parser().parseDateTime(\"-100000000\");\n+ formatter.parser().parseDateTime(\"-999999999999\");\n+ formatter.parser().parseDateTime(\"-1234567890123\");\n+ } else {\n+ formatter.parser().parseDateTime(\"-100000000\");\n+ formatter.parser().parseDateTime(\"-1234567890\");\n+ }\n+ }\n+\n @Test(expected = IllegalArgumentException.class)\n public void testForInvalidDatesInEpochSecond() {\n FormatDateTimeFormatter formatter = Joda.forPattern(\"epoch_second\");\n@@ -283,6 +316,51 @@ public void testForInvalidDatesInEpochMillis() {\n formatter.parser().parseDateTime(randomFrom(\"invalid date\", \"12345678901234\"));\n }\n \n+ public void testThatEpochParserIsPrinter() {\n+ FormatDateTimeFormatter formatter = Joda.forPattern(\"epoch_millis\");\n+ assertThat(formatter.parser().isPrinter(), is(true));\n+ assertThat(formatter.printer().isPrinter(), is(true));\n+\n+ FormatDateTimeFormatter epochSecondFormatter = Joda.forPattern(\"epoch_second\");\n+ assertThat(epochSecondFormatter.parser().isPrinter(), is(true));\n+ assertThat(epochSecondFormatter.printer().isPrinter(), is(true));\n+ }\n+\n+ public void testThatEpochTimePrinterWorks() {\n+ StringBuffer buffer = new StringBuffer();\n+ LocalDateTime now = LocalDateTime.now();\n+\n+ Joda.EpochTimePrinter epochTimePrinter = new Joda.EpochTimePrinter(false);\n+ epochTimePrinter.printTo(buffer, now, Locale.ROOT);\n+ assertThat(buffer.length(), is(10));\n+ // only check the last digit, as seconds go from 0-99 in the unix timestamp and dont stop at 60\n+ assertThat(buffer.toString(), endsWith(String.valueOf(now.getSecondOfMinute() % 10)));\n+\n+ buffer = new StringBuffer();\n+ Joda.EpochTimePrinter epochMilliSecondTimePrinter = new Joda.EpochTimePrinter(true);\n+ 
epochMilliSecondTimePrinter.printTo(buffer, now, Locale.ROOT);\n+ assertThat(buffer.length(), is(13));\n+ assertThat(buffer.toString(), endsWith(String.valueOf(now.getMillisOfSecond())));\n+ }\n+\n+ public void testThatEpochParserIsIdempotent() {\n+ FormatDateTimeFormatter formatter = Joda.forPattern(\"epoch_millis\");\n+ DateTime dateTime = formatter.parser().parseDateTime(\"1234567890123\");\n+ assertThat(dateTime.getMillis(), is(1234567890123l));\n+ dateTime = formatter.printer().parseDateTime(\"1234567890456\");\n+ assertThat(dateTime.getMillis(), is(1234567890456l));\n+ dateTime = formatter.parser().parseDateTime(\"1234567890789\");\n+ assertThat(dateTime.getMillis(), is(1234567890789l));\n+\n+ FormatDateTimeFormatter secondsFormatter = Joda.forPattern(\"epoch_second\");\n+ DateTime secondsDateTime = secondsFormatter.parser().parseDateTime(\"1234567890\");\n+ assertThat(secondsDateTime.getMillis(), is(1234567890000l));\n+ secondsDateTime = secondsFormatter.printer().parseDateTime(\"1234567890\");\n+ assertThat(secondsDateTime.getMillis(), is(1234567890000l));\n+ secondsDateTime = secondsFormatter.parser().parseDateTime(\"1234567890\");\n+ assertThat(secondsDateTime.getMillis(), is(1234567890000l));\n+ }\n+\n private long utcTimeInMillis(String time) {\n return ISODateTimeFormat.dateOptionalTimeParser().withZone(DateTimeZone.UTC).parseMillis(time);\n }",
"filename": "core/src/test/java/org/elasticsearch/deps/joda/SimpleJodaTests.java",
"status": "modified"
},
{
"diff": "@@ -33,15 +33,9 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.MapperParsingException;\n-import org.elasticsearch.index.query.BoolQueryBuilder;\n+import org.elasticsearch.index.query.*;\n import org.elasticsearch.index.query.CommonTermsQueryBuilder.Operator;\n-import org.elasticsearch.index.query.MatchQueryBuilder;\n import org.elasticsearch.index.query.MatchQueryBuilder.Type;\n-import org.elasticsearch.index.query.MultiMatchQueryBuilder;\n-import org.elasticsearch.index.query.QueryBuilders;\n-import org.elasticsearch.index.query.QueryStringQueryBuilder;\n-import org.elasticsearch.index.query.TermQueryBuilder;\n-import org.elasticsearch.index.query.WrapperQueryBuilder;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.SearchHit;\n@@ -62,58 +56,11 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.andQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.commonTermsQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.existsQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.fuzzyQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.idsQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.indicesQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.limitQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.missingQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.notQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.prefixQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.regexpQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.spanMultiTermQueryBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.spanNearQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.spanNotQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.spanOrQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.spanTermQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.termsLookupQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.termsQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.typeQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.wildcardQuery;\n-import static 
org.elasticsearch.index.query.QueryBuilders.wrapperQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.*;\n import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.scriptFunction;\n import static org.elasticsearch.test.VersionUtils.randomVersion;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertFailures;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertFirstHit;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHit;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSecondHit;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThirdHit;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasScore;\n-import static org.hamcrest.Matchers.allOf;\n-import static org.hamcrest.Matchers.closeTo;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n-import static org.hamcrest.Matchers.is;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n \n @Slow\n public class SearchQueryTests extends ElasticsearchIntegrationTest {\n@@ -2195,11 +2142,10 @@ public void testQueryStringWithSlopAndFields() {\n }\n }\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/11478\")\n @Test\n public void testDateProvidedAsNumber() throws ExecutionException, InterruptedException {\n createIndex(\"test\");\n- assertAcked(client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(\"field\", \"type=date\").get());\n+ assertAcked(client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(\"field\", \"type=date,format=epoch_millis\").get());\n indexRandom(true, client().prepareIndex(\"test\", \"type\", \"1\").setSource(\"field\", -1000000000001L),\n client().prepareIndex(\"test\", \"type\", \"2\").setSource(\"field\", -1000000000000L),\n client().prepareIndex(\"test\", \"type\", \"3\").setSource(\"field\", -999999999999L));",
"filename": "core/src/test/java/org/elasticsearch/search/query/SearchQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -200,9 +200,14 @@ year.\n year, and two digit day of month.\n \n |`epoch_second`|A formatter for the number of seconds since the epoch.\n-\n-|`epoch_millis`|A formatter for the number of milliseconds since\n-the epoch.\n+Note, that this timestamp allows a max length of 10 chars, so dates\n+older than 1653 and 2286 are not supported. You should use a different\n+date formatter in that case.\n+\n+|`epoch_millis`|A formatter for the number of milliseconds since the epoch.\n+Note, that this timestamp allows a max length of 13 chars, so dates\n+older than 1653 and 2286 are not supported. You should use a different\n+date formatter in that case.\n |=======================================================================\n \n [float]",
"filename": "docs/reference/mapping/date-format.asciidoc",
"status": "modified"
},
{
"diff": "@@ -301,6 +301,16 @@ Meta fields can no longer be specified within a document. They should be specifi\n via the API. For example, instead of adding a field `_parent` within a document,\n use the `parent` url parameter when indexing that document.\n \n+==== Date format does not support unix timestamps by default\n+\n+In earlier versions of elasticsearch, every timestamp was always tried to be parsed as\n+as unix timestamp first. This means, even when specifying a date format like\n+`dateOptionalTime`, one could supply unix timestamps instead of a ISO8601 formatted\n+date.\n+\n+This is not supported anymore. If you want to store unix timestamps, you need to specify\n+the appropriate formats in the mapping, namely `epoch_second` or `epoch_millis`.\n+\n ==== Source field limitations\n The `_source` field could previously be disabled dynamically. Since this field\n is a critical piece of many features like the Update API, it is no longer",
"filename": "docs/reference/migration/migrate_2_0.asciidoc",
"status": "modified"
}
]
} |
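The root cause in #11692 is that the epoch formats were built from a parser only, so Joda's DateTimeFormatter throws as soon as anything tries to print with them; #11482 fixes that by registering an EpochTimePrinter next to the parser. The standalone snippet below uses only the plain Joda-Time API (no Elasticsearch classes) to reproduce the failure mode and show why pairing a printer with the parser makes print() work; it is an illustration of the mechanism, not the project's code.

```java
import org.joda.time.format.DateTimeFormatter;
import org.joda.time.format.DateTimeFormatterBuilder;
import org.joda.time.format.ISODateTimeFormat;

public class ParserOnlyFormatterDemo {
    public static void main(String[] args) {
        // A formatter assembled from a parser only cannot print: this is the
        // "Printing not supported" UnsupportedOperationException from the
        // date_histogram stack trace above.
        DateTimeFormatter parseOnly = new DateTimeFormatterBuilder()
                .append(ISODateTimeFormat.dateTimeParser().getParser())
                .toFormatter();
        System.out.println("isPrinter: " + parseOnly.isPrinter()); // false
        try {
            parseOnly.print(1433144433655L);
        } catch (UnsupportedOperationException e) {
            System.out.println("print failed: " + e.getMessage());
        }

        // A formatter built from a printer *and* a parser supports both directions,
        // which is the shape the fix gives the epoch_second/epoch_millis formats.
        DateTimeFormatter printAndParse = new DateTimeFormatterBuilder()
                .append(ISODateTimeFormat.dateTime().getPrinter(),
                        ISODateTimeFormat.dateTimeParser().getParser())
                .toFormatter()
                .withZoneUTC();
        System.out.println(printAndParse.print(1433144433655L)); // 2015-06-01T07:40:33.655Z
    }
}
```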
{
"body": "I've run the upgrade API on indices created in 0.20, and it sets the minimum compatible version to 3.6.0, which means that ES 2.0 fails to open them.\n",
"comments": [],
"number": 11474,
"title": "Minimum compatible version being written as 3.6.0"
} | {
"body": "The minimum version comparison was always using the default version\nsicne the comparison was flipped.\n\nCloses #11474\n",
"number": 11475,
"review_comments": [
{
"body": "I'm confused about this assert. especially given the comments above it. If this minimum version is a lucene version, should it not be the oldest lucene version? In this case the ancient segments will be upgraded, but the \"merely old\" ones will still be there, so I can't see it being CURRENT.\n",
"created_at": "2015-06-03T14:00:32Z"
},
{
"body": "What happens if the index is empty and just a commit point?\n",
"created_at": "2015-06-03T14:03:39Z"
},
{
"body": "OK but this indexCreated is an immutable thing in ES right? So this means a user can never upgrade an empty index?\n",
"created_at": "2015-06-03T14:12:42Z"
},
{
"body": "if there are no docs in the index it won't do anything here yes! in this case we should make this method more low level - I will fix that too\n",
"created_at": "2015-06-03T14:15:17Z"
}
],
"title": "Use the smallest version rather than the default version"
} | {
"commits": [
{
"message": "Use the smallest version rather than the default version\n\nThe minimum version comparison was always using the default version\nsicne the comparison was flipped.\n\nCloses #11474"
}
],
"files": [
{
"diff": "@@ -756,13 +756,13 @@ public org.apache.lucene.util.Version upgrade(UpgradeRequest upgrade) {\n }\n \n public org.apache.lucene.util.Version minimumCompatibleVersion() {\n- org.apache.lucene.util.Version luceneVersion = org.apache.lucene.util.Version.LUCENE_3_6;\n+ org.apache.lucene.util.Version luceneVersion = null;\n for(Segment segment : engine().segments()) {\n- if (luceneVersion.onOrAfter(segment.getVersion())) {\n+ if (luceneVersion == null || luceneVersion.onOrAfter(segment.getVersion())) {\n luceneVersion = segment.getVersion();\n }\n }\n- return luceneVersion;\n+ return luceneVersion == null ? Version.indexCreated(indexSettings).luceneVersion : luceneVersion;\n }\n \n public SnapshotIndexCommit snapshotIndex() throws EngineException {",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -37,6 +37,7 @@\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_VERSION_CREATED;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;\n@@ -148,5 +149,19 @@ public void testDeleteByQueryBWC() {\n assertEquals(numDocs, searcher.reader().numDocs());\n }\n }\n-\n+ \n+ public void testMinimumCompatVersion() {\n+ Version versionCreated = randomVersion();\n+ assertAcked(client().admin().indices().prepareCreate(\"test\")\n+ .setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0, SETTING_VERSION_CREATED, versionCreated.id));\n+ client().prepareIndex(\"test\", \"test\").setSource(\"{}\").get();\n+ ensureGreen(\"test\");\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexShard test = indicesService.indexService(\"test\").shard(0);\n+ assertEquals(versionCreated.luceneVersion, test.minimumCompatibleVersion());\n+ client().prepareIndex(\"test\", \"test\").setSource(\"{}\").get();\n+ assertEquals(versionCreated.luceneVersion, test.minimumCompatibleVersion());\n+ test.engine().flush();\n+ assertEquals(Version.CURRENT.luceneVersion, test.minimumCompatibleVersion());\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,11 @@\n \n package org.elasticsearch.rest.action.admin.indices.upgrade;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.bwcompat.StaticIndexBackwardCompatibilityTest;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.indices.IndicesService;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n \n@@ -28,20 +32,23 @@ public class UpgradeReallyOldIndexTest extends StaticIndexBackwardCompatibilityT\n public void testUpgrade_0_20() throws Exception {\n String indexName = \"test\";\n loadIndex(\"index-0.20.zip\", indexName);\n-\n+ assertMinVersion(indexName, org.apache.lucene.util.Version.parse(\"3.6.2\"));\n assertTrue(UpgradeTest.hasAncientSegments(client(), indexName));\n UpgradeTest.assertNotUpgraded(client(), indexName);\n assertNoFailures(client().admin().indices().prepareUpgrade(indexName).setUpgradeOnlyAncientSegments(true).get());\n assertFalse(UpgradeTest.hasAncientSegments(client(), indexName));\n \n // This index has entirely ancient segments so the whole index should now be upgraded:\n UpgradeTest.assertUpgraded(client(), indexName);\n+ assertEquals(Version.CURRENT.luceneVersion.toString(), client().admin().indices().prepareGetSettings(indexName).get().getSetting(indexName, IndexMetaData.SETTING_VERSION_MINIMUM_COMPATIBLE));\n+ assertMinVersion(indexName, Version.CURRENT.luceneVersion);\n+\n }\n \n public void testUpgradeMixed_0_20_6_and_0_90_6() throws Exception {\n String indexName = \"index-0.20.6-and-0.90.6\";\n loadIndex(indexName + \".zip\", indexName);\n-\n+ assertMinVersion(indexName, org.apache.lucene.util.Version.parse(\"3.6.2\"));\n // Has ancient segments?:\n assertTrue(UpgradeTest.hasAncientSegments(client(), indexName));\n \n@@ -59,5 +66,18 @@ public void testUpgradeMixed_0_20_6_and_0_90_6() throws Exception {\n \n // We succeeded in upgrading only the ancient segments but leaving the \"merely old\" ones untouched:\n assertTrue(UpgradeTest.hasOldButNotAncientSegments(client(), indexName));\n+ assertEquals(org.apache.lucene.util.Version.LUCENE_4_5_1.toString(), client().admin().indices().prepareGetSettings(indexName).get().getSetting(indexName, IndexMetaData.SETTING_VERSION_MINIMUM_COMPATIBLE));\n+ assertMinVersion(indexName, org.apache.lucene.util.Version.LUCENE_4_5_1);\n+\n+ }\n+\n+ private void assertMinVersion(String index, org.apache.lucene.util.Version version) {\n+ for (IndicesService services : internalCluster().getInstances(IndicesService.class)) {\n+ IndexService indexService = services.indexService(index);\n+ if (indexService != null) {\n+ assertEquals(version, indexService.shard(0).minimumCompatibleVersion());\n+ }\n+ }\n+\n }\n }",
"filename": "src/test/java/org/elasticsearch/rest/action/admin/indices/upgrade/UpgradeReallyOldIndexTest.java",
"status": "modified"
}
]
} |
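The bug in #11474 is the seeded accumulator: starting the loop from LUCENE_3_6 means the onOrAfter check essentially never fires for newer segments, so 3.6.0 is reported no matter what is on disk. Here is a small sketch of the corrected pattern; the Ver class is a made-up stand-in for the real Lucene Version, used only to keep the example self-contained.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MinimumVersionSketch {

    /** Illustrative stand-in for org.apache.lucene.util.Version, just enough for the comparison. */
    static final class Ver implements Comparable<Ver> {
        final int major, minor;
        Ver(int major, int minor) { this.major = major; this.minor = minor; }
        public int compareTo(Ver o) {
            return major != o.major ? Integer.compare(major, o.major) : Integer.compare(minor, o.minor);
        }
        boolean onOrAfter(Ver o) { return compareTo(o) >= 0; }
        public String toString() { return major + "." + minor; }
    }

    /**
     * Start from null instead of a hard-coded default, keep the smallest segment
     * version seen, and fall back to the index-created version for an empty shard --
     * the same shape as the patched minimumCompatibleVersion() in the diff above.
     */
    static Ver minimumCompatibleVersion(List<Ver> segmentVersions, Ver indexCreatedVersion) {
        Ver min = null;
        for (Ver v : segmentVersions) {
            if (min == null || min.onOrAfter(v)) {
                min = v;
            }
        }
        return min == null ? indexCreatedVersion : min;
    }

    public static void main(String[] args) {
        // mixed old/new segments -> the oldest one wins
        System.out.println(minimumCompatibleVersion(
                Arrays.asList(new Ver(4, 5), new Ver(3, 6)), new Ver(5, 2))); // 3.6
        // empty shard (only a commit point) -> fall back to the index-created version
        System.out.println(minimumCompatibleVersion(
                Collections.<Ver>emptyList(), new Ver(5, 2))); // 5.2
    }
}
```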
{
"body": "Bug found from a forum post: https://discuss.elastic.co/t/problem-with-binary-sub-aggregations/1727\n\nUsing the below code to embed a sub-aggregation inside another aggregation:\n\n``` java\nimport org.elasticsearch.common.xcontent.ToXContent;\nimport org.elasticsearch.common.xcontent.XContentBuilder;\nimport org.elasticsearch.common.xcontent.XContentHelper;\nimport org.elasticsearch.common.xcontent.json.JsonXContent;\nimport org.elasticsearch.search.aggregations.AggregationBuilders;\nimport org.elasticsearch.search.aggregations.bucket.terms.TermsBuilder;\nimport java.io.IOException;\n\npublic class TestSubAggregation {\n\n public static void main(String[] args) throws IOException {\n // Build a simple term aggregation\n\n TermsBuilder termsBuilder = AggregationBuilders.terms(\"test\").field(\"testfield\");\n\n // Build a simple term sub aggregation\n TermsBuilder subTerm = AggregationBuilders.terms(\"subtest\").field(\"subtestfield\");\n\n // Add sub aggregation as an AggregationBuilder\n termsBuilder.subAggregation(subTerm);\n // It produces a correct output\n System.out.println(XContentHelper.toString(termsBuilder));\n\n // Reset term aggregation\n termsBuilder = AggregationBuilders.terms(\"test\").field(\"testfield\");\n\n // Create an XContentBuilder from sub aggregation\n XContentBuilder subTermContentBuilder = JsonXContent.contentBuilder().startObject();\n subTerm.toXContent(subTermContentBuilder, ToXContent.EMPTY_PARAMS);\n subTermContentBuilder.endObject();\n\n // Add sub aggregation as a XContentBuilder (binary_aggregation)\n termsBuilder.subAggregation(subTermContentBuilder);\n\n // Produces an incorrect output (two aggregations levels instead of one)\n System.out.println(XContentHelper.toString(termsBuilder));\n\n }\n}\n```\n\nThe second sysout produces:\n\n``` json\n{\n \"test\" : {\n \"terms\" : {\n \"field\" : \"testfield\"\n },\n \"aggregations\" : {\n \"aggregations\":{\"subtest\":{\"terms\":{\"field\":\"subtestfield\"}}}\n }\n }\n}\n```\n\nWhich is invalid as `aggregations` object is specified twice.\n",
"comments": [
{
"body": "Additionally to the issue with the builder, even when the builder is producing the correct output, the parser does not recognise the `aggregations_binary` field which is used when the XContentBuilder's contentType is different from the contentType of the sub-aggregation.\n",
"created_at": "2015-06-03T11:47:40Z"
}
],
"number": 11457,
"title": "AggregationBuilder renders incorrect JSON when using binary or raw sub-aggregation"
} | {
"body": "Previously AggregationBuilder would wrap binary_aggregations in an aggregations object which would break parsing. This has been fixed so that for normally specified aggregations there are wrapped in an `aggregations` object, for binary aggregation which have the same XContentType as the builder it will use an `aggregations` field name and use the aggregationsBinary as the value (this will render the same as normal aggregations), and for binary aggregation with a different ContentType from the builder we use an `aggregations_binary` field name and add the aggregationsBinary as a binary value.\n\nAdditionally the logic in AggregationParsers needed to be changed as it previously did not parse `aggregations_binary` fields in sub-aggregations. A check has been added for the `aggregations_binary` field name and the binaryValue of this field is used to create a new parser and create the correct AggregatorFactories.\n\nClose #11457\n",
"number": 11473,
"review_comments": [],
"title": "Allow aggregations_binary to build and parse"
} | {
"commits": [
{
"message": "Aggregations: Allow aggregation_binary to build and parse\n\nPreviously AggregationBuilder would wrap binary_aggregations in an aggregations object which would break parsing. This has been fixed so that for normally specified aggregations there are wrapped in an `aggregations` object, for binary aggregation which have the same XContentType as the builder it will use an `aggregations` field name and use the aggregationsBinary as the value (this will render the same as normal aggregations), and for binary aggregation with a different ContentType from the builder we use an `aggregations_binary` field name and add the aggregationsBinary as a binary value.\n\nAdditionally the logic in AggregationParsers needed to be changed as it previously did not parse `aggregations_binary` fields in sub-aggregations. A check has been added for the `aggregations_binary` field name and the binaryValue of this field is used to create a new parser and create the correct AggregatorFactories.\n\nClose #11457"
}
],
"files": [
{
"diff": "@@ -122,12 +122,13 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params)\n internalXContent(builder, params);\n \n if (aggregations != null || aggregationsBinary != null) {\n- builder.startObject(\"aggregations\");\n \n if (aggregations != null) {\n+ builder.startObject(\"aggregations\");\n for (AbstractAggregationBuilder subAgg : aggregations) {\n subAgg.toXContent(builder, params);\n }\n+ builder.endObject();\n }\n \n if (aggregationsBinary != null) {\n@@ -138,7 +139,6 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params)\n }\n }\n \n- builder.endObject();\n }\n \n return builder.endObject();",
"filename": "src/main/java/org/elasticsearch/search/aggregations/AggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n \n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -140,45 +141,66 @@ private AggregatorFactories parseAggregators(XContentParser parser, SearchContex\n final String fieldName = parser.currentName();\n \n token = parser.nextToken();\n- if (token != XContentParser.Token.START_OBJECT) {\n- throw new SearchParseException(context, \"Expected [\" + XContentParser.Token.START_OBJECT + \"] under [\" + fieldName\n- + \"], but got a [\" + token + \"] in [\" + aggregationName + \"]\", parser.getTokenLocation());\n- }\n-\n- switch (fieldName) {\n+ if (\"aggregations_binary\".equals(fieldName)) {\n+ if (subFactories != null) {\n+ throw new SearchParseException(context, \"Found two sub aggregation definitions under [\" + aggregationName + \"]\",\n+ parser.getTokenLocation());\n+ }\n+ XContentParser binaryParser = null;\n+ if (token == XContentParser.Token.VALUE_STRING || token == XContentParser.Token.VALUE_EMBEDDED_OBJECT) {\n+ byte[] source = parser.binaryValue();\n+ binaryParser = XContentFactory.xContent(source).createParser(source);\n+ } else {\n+ throw new SearchParseException(context, \"Expected [\" + XContentParser.Token.VALUE_STRING + \" or \"\n+ + XContentParser.Token.VALUE_EMBEDDED_OBJECT + \"] for [\" + fieldName + \"], but got a [\" + token + \"] in [\"\n+ + aggregationName + \"]\", parser.getTokenLocation());\n+ }\n+ XContentParser.Token binaryToken = binaryParser.nextToken();\n+ if (binaryToken != XContentParser.Token.START_OBJECT) {\n+ throw new SearchParseException(context, \"Expected [\" + XContentParser.Token.START_OBJECT\n+ + \"] as first token when parsing [\" + fieldName + \"], but got a [\" + binaryToken + \"] in [\"\n+ + aggregationName + \"]\", parser.getTokenLocation());\n+ }\n+ subFactories = parseAggregators(binaryParser, context, level + 1);\n+ } else if (token == XContentParser.Token.START_OBJECT) {\n+ switch (fieldName) {\n case \"meta\":\n metaData = parser.map();\n break;\n case \"aggregations\":\n case \"aggs\":\n if (subFactories != null) {\n- throw new SearchParseException(context, \"Found two sub aggregation definitions under [\" + aggregationName + \"]\",\n- parser.getTokenLocation());\n+ throw new SearchParseException(context,\n+ \"Found two sub aggregation definitions under [\" + aggregationName + \"]\", parser.getTokenLocation());\n }\n- subFactories = parseAggregators(parser, context, level+1);\n+ subFactories = parseAggregators(parser, context, level + 1);\n break;\n default:\n if (aggFactory != null) {\n- throw new SearchParseException(context, \"Found two aggregation type definitions in [\" + aggregationName + \"]: [\"\n- + aggFactory.type + \"] and [\" + fieldName + \"]\", parser.getTokenLocation());\n+ throw new SearchParseException(context, \"Found two aggregation type definitions in [\" + aggregationName\n+ + \"]: [\" + aggFactory.type + \"] and [\" + fieldName + \"]\", parser.getTokenLocation());\n }\n- if (pipelineAggregatorFactory != null) {\n- throw new SearchParseException(context, \"Found two aggregation type definitions in [\" + aggregationName + \"]: [\"\n- + pipelineAggregatorFactory + \"] and [\" + fieldName + \"]\", parser.getTokenLocation());\n+ if (pipelineAggregatorFactory != null) {\n+ throw new 
SearchParseException(context, \"Found two aggregation type definitions in [\" + aggregationName\n+ + \"]: [\" + pipelineAggregatorFactory + \"] and [\" + fieldName + \"]\", parser.getTokenLocation());\n }\n \n Aggregator.Parser aggregatorParser = parser(fieldName);\n if (aggregatorParser == null) {\n- PipelineAggregator.Parser pipelineAggregatorParser = pipelineAggregator(fieldName);\n- if (pipelineAggregatorParser == null) {\n+ PipelineAggregator.Parser pipelineAggregatorParser = pipelineAggregator(fieldName);\n+ if (pipelineAggregatorParser == null) {\n throw new SearchParseException(context, \"Could not find aggregator type [\" + fieldName + \"] in [\"\n- + aggregationName + \"]\", parser.getTokenLocation());\n+ + aggregationName + \"]\", parser.getTokenLocation());\n } else {\n- pipelineAggregatorFactory = pipelineAggregatorParser.parse(aggregationName, parser, context);\n+ pipelineAggregatorFactory = pipelineAggregatorParser.parse(aggregationName, parser, context);\n }\n } else {\n aggFactory = aggregatorParser.parse(aggregationName, parser, context);\n }\n+ }\n+ } else {\n+ throw new SearchParseException(context, \"Expected [\" + XContentParser.Token.START_OBJECT + \"] under [\" + fieldName\n+ + \"], but got a [\" + token + \"] in [\" + aggregationName + \"]\", parser.getTokenLocation());\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/AggregatorParsers.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,142 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations;\n+\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.client.Requests;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Bucket;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsBuilder;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.core.IsNull.notNullValue;\n+\n+@ElasticsearchIntegrationTest.SuiteScopeTest\n+public class AggregationsBinaryTests extends ElasticsearchIntegrationTest {\n+\n+ private static final String STRING_FIELD_NAME = \"s_value\";\n+ private static final String INT_FIELD_NAME = \"i_value\";\n+\n+ @Override\n+ public void setupSuiteScopeCluster() throws Exception {\n+ createIndex(\"idx\");\n+ List<IndexRequestBuilder> builders = new ArrayList<>();\n+ for (int i = 0; i < 5; i++) {\n+ builders.add(client().prepareIndex(\"idx\", \"type\").setSource(\n+ jsonBuilder().startObject().field(STRING_FIELD_NAME, \"val\" + i).field(INT_FIELD_NAME, i).endObject()));\n+ }\n+ indexRandom(true, builders);\n+ ensureSearchable();\n+ }\n+\n+ @Test\n+ public void testAggregationsBinary() throws Exception {\n+ TermsBuilder termsBuilder = AggregationBuilders.terms(\"terms\").field(STRING_FIELD_NAME);\n+ TermsBuilder subTerm = AggregationBuilders.terms(\"subterms\").field(INT_FIELD_NAME);\n+\n+ // Create an XContentBuilder from sub aggregation\n+ XContentBuilder subTermContentBuilder = JsonXContent.contentBuilder().startObject();\n+ subTerm.toXContent(subTermContentBuilder, ToXContent.EMPTY_PARAMS);\n+ subTermContentBuilder.endObject();\n+\n+ // Add sub aggregation as a XContentBuilder (binary_aggregation)\n+ termsBuilder.subAggregation(subTermContentBuilder);\n+\n+ SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\").addAggregation(termsBuilder).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Terms terms = response.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ 
assertThat(terms.getName(), equalTo(\"terms\"));\n+ assertThat(terms.getBuckets().size(), equalTo(5));\n+\n+ for (int i = 0; i < 5; i++) {\n+ Terms.Bucket bucket = terms.getBucketByKey(\"val\" + i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsString(), equalTo(\"val\" + i));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ Aggregations subAggs = bucket.getAggregations();\n+ assertThat(subAggs, notNullValue());\n+ assertThat(subAggs.asList().size(), equalTo(1));\n+ Terms subTerms = subAggs.get(\"subterms\");\n+ assertThat(subTerms, notNullValue());\n+ List<Bucket> subTermsBuckets = subTerms.getBuckets();\n+ assertThat(subTermsBuckets, notNullValue());\n+ assertThat(subTermsBuckets.size(), equalTo(1));\n+ assertThat(((Number) subTermsBuckets.get(0).getKey()).intValue(), equalTo(i));\n+ assertThat(subTermsBuckets.get(0).getDocCount(), equalTo(1l));\n+ }\n+ }\n+\n+ @Test\n+ public void testAggregationsBinarySameContentType() throws Exception {\n+ TermsBuilder termsBuilder = AggregationBuilders.terms(\"terms\").field(STRING_FIELD_NAME);\n+ TermsBuilder subTerm = AggregationBuilders.terms(\"subterms\").field(INT_FIELD_NAME);\n+\n+ // Create an XContentBuilder from sub aggregation\n+\n+ XContentBuilder subTermContentBuilder = XContentFactory.contentBuilder(Requests.CONTENT_TYPE);\n+ subTermContentBuilder.startObject();\n+ subTerm.toXContent(subTermContentBuilder, ToXContent.EMPTY_PARAMS);\n+ subTermContentBuilder.endObject();\n+\n+ // Add sub aggregation as a XContentBuilder (binary_aggregation)\n+ termsBuilder.subAggregation(subTermContentBuilder);\n+\n+ SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\").addAggregation(termsBuilder).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Terms terms = response.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ assertThat(terms.getName(), equalTo(\"terms\"));\n+ assertThat(terms.getBuckets().size(), equalTo(5));\n+\n+ for (int i = 0; i < 5; i++) {\n+ Terms.Bucket bucket = terms.getBucketByKey(\"val\" + i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsString(), equalTo(\"val\" + i));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ Aggregations subAggs = bucket.getAggregations();\n+ assertThat(subAggs, notNullValue());\n+ assertThat(subAggs.asList().size(), equalTo(1));\n+ Terms subTerms = subAggs.get(\"subterms\");\n+ assertThat(subTerms, notNullValue());\n+ List<Bucket> subTermsBuckets = subTerms.getBuckets();\n+ assertThat(subTermsBuckets, notNullValue());\n+ assertThat(subTermsBuckets.size(), equalTo(1));\n+ assertThat(((Number) subTermsBuckets.get(0).getKey()).intValue(), equalTo(i));\n+ assertThat(subTermsBuckets.get(0).getDocCount(), equalTo(1l));\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/search/aggregations/AggregationsBinaryTests.java",
"status": "added"
}
]
} |
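Given the rendering rules described in the PR body (builder sub-aggregations and same-content-type binary sub-aggregations both end up under a single `aggregations` key), the second println in the issue's reproduction should now produce a single-level wrapper along these lines. This is the expected shape, not captured output:

```json
{
  "test" : {
    "terms" : {
      "field" : "testfield"
    },
    "aggregations" : {
      "subtest" : {
        "terms" : {
          "field" : "subtestfield"
        }
      }
    }
  }
}
```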
{
"body": "Working on #10067, improving our back-compat test indices so that the translog has a delete-by-query on upgrade, I hit this pre-existing back-compat bug where on upgrade of an index <= 1.0.0 Beta2 that has a DBQ in its translog, this exception shows up:\n\n```\n 1> [2015-03-25 08:57:35,714][INFO ][index.gateway ] [node_t3] [test][0] ignoring recovery of a corrupt translog entry\n 1> org.elasticsearch.index.query.QueryParsingException: [test] request does not support [range]\n 1> at org.elasticsearch.index.query.IndexQueryParserService.parseQuery(IndexQueryParserService.java:362)\n 1> at org.elasticsearch.index.shard.IndexShard.prepareDeleteByQuery(IndexShard.java:537)\n 1> at org.elasticsearch.index.shard.IndexShard.performRecoveryOperation(IndexShard.java:864)\n 1> at org.elasticsearch.index.gateway.IndexShardGateway.recover(IndexShardGateway.java:235)\n 1> at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:114)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n```\n\nThis is happening because of #4074 when we required that the top-level \"query\" is present to delete-by-query requests, but prior to that we required that it is not present. So the translog has a DBQ without \"query\" and when we try to parse it we hit this exception.\n\nI have changes to create-bwc-index.py that shows the bug ... but I'm not sure how to cleanly fix it. Somehow on parsing a translog entry from an old enough version of ES we need to insert \"query\" at the top...\n",
"comments": [],
"number": 10262,
"title": "Core: delete-by-query fails to replay from translog < 1.0.0 Beta2"
} | {
"body": "This is happening because of #4074 when we required that the top-level \"query\" is present to delete-by-query requests, but prior to that we required that it is not present. So the translog has a DBQ without \"query\" and when we try to parse it we hit this exception.\n\nThis commit adds special handling for pre 1.0.0 indices if we hit parse exception, we\ntry to reparse without a top-level query object to be BWC compatible for these indices.\n\nCloses #10262\n",
"number": 11472,
"review_comments": [],
"title": "Fix possible BWC break after upgrading from pre 1.0.0"
} | {
"commits": [
{
"message": "Fix possible BWC break after upgrading from pre 1.0.0\n\nThis is happening because of #4074 when we required that the top-level \"query\" is present to delete-by-query requests, but prior to that we required that it is not present. So the translog has a DBQ without \"query\" and when we try to parse it we hit this exception.\n\nThis commit adds special handling for pre 1.0.0 indices if we hit parse exception, we\ntry to reparse without a top-level query object to be BWC compatible for these indices.\n\nCloses #10262"
}
],
"files": [
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.WriteFailureException;\n import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n import org.elasticsearch.action.admin.indices.optimize.OptimizeRequest;\n@@ -56,6 +57,8 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.AbstractRefCounted;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.aliases.IndexAliasesService;\n@@ -87,6 +90,8 @@\n import org.elasticsearch.index.percolator.PercolatorQueriesRegistry;\n import org.elasticsearch.index.percolator.stats.ShardPercolateService;\n import org.elasticsearch.index.query.IndexQueryParserService;\n+import org.elasticsearch.index.query.ParsedQuery;\n+import org.elasticsearch.index.query.QueryParsingException;\n import org.elasticsearch.index.recovery.RecoveryStats;\n import org.elasticsearch.index.refresh.RefreshStats;\n import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n@@ -540,7 +545,24 @@ public Engine.DeleteByQuery prepareDeleteByQuery(BytesReference source, @Nullabl\n if (types == null) {\n types = Strings.EMPTY_ARRAY;\n }\n- Query query = queryParserService.parseQuery(source).query();\n+ Query query;\n+ try {\n+ query = queryParserService.parseQuery(source).query();\n+ } catch (QueryParsingException ex) {\n+ // for BWC we try to parse directly the query since pre 1.0.0.Beta2 we didn't require a top level query field\n+ if (Version.indexCreated(config.getIndexSettings()).onOrBefore(Version.V_1_0_0_Beta2)) {\n+ try {\n+ XContentParser parser = XContentHelper.createParser(source);\n+ ParsedQuery parse = queryParserService.parse(parser);\n+ query = parse.query();\n+ } catch (Throwable t) {\n+ ex.addSuppressed(t);\n+ throw ex;\n+ }\n+ } else {\n+ throw ex;\n+ }\n+ }\n query = filterQueryIfNeeded(query, types);\n \n Filter aliasFilter = indexAliasesService.aliasFilter(filteringAliases);",
"filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -19,10 +19,15 @@\n package org.elasticsearch.index.shard;\n \n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.stats.IndexStats;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.query.QueryParsingException;\n+import org.elasticsearch.index.translog.Translog;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.junit.Test;\n@@ -116,4 +121,32 @@ public void run() {\n assertNotNull(indexStats.getShards()[0].getCommitStats().getUserData().get(Engine.SYNC_COMMIT_ID));\n }\n \n+ public void testDeleteByQueryBWC() {\n+ Version version = randomVersion();\n+ assertAcked(client().admin().indices().prepareCreate(\"test\")\n+ .setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0, IndexMetaData.SETTING_VERSION_CREATED, version.id));\n+ ensureGreen(\"test\");\n+ client().prepareIndex(\"test\", \"person\").setSource(\"{ \\\"user\\\" : \\\"kimchy\\\" }\").get();\n+\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ IndexShard shard = test.shard(0);\n+ int numDocs = 1;\n+ shard.state = IndexShardState.RECOVERING;\n+ try {\n+ shard.performRecoveryOperation(new Translog.DeleteByQuery(new Engine.DeleteByQuery(null, new BytesArray(\"{\\\"term\\\" : { \\\"user\\\" : \\\"kimchy\\\" }}\"), null, null, null, Engine.Operation.Origin.RECOVERY, 0, \"person\")));\n+ assertTrue(version.onOrBefore(Version.V_1_0_0_Beta2));\n+ numDocs = 0;\n+ } catch (QueryParsingException ex) {\n+ assertTrue(version.after(Version.V_1_0_0_Beta2));\n+ } finally {\n+ shard.state = IndexShardState.STARTED;\n+ }\n+ shard.engine().refresh(\"foo\");\n+\n+ try (Engine.Searcher searcher = shard.engine().acquireSearcher(\"foo\")) {\n+ assertEquals(numDocs, searcher.reader().numDocs());\n+ }\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
}
]
} |
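The row above makes translog replay of old delete-by-query entries lenient for indices created before 1.0.0.Beta2. Below is a minimal, self-contained sketch of that fallback shape, not the real `IndexShard`/`IndexQueryParserService` code: `parseStrict`, `parseLegacy`, and the version flag are hypothetical stand-ins, but the control flow (retry only for old indices, keep the original failure via `addSuppressed`) mirrors the diff.

```java
// Hypothetical stand-ins for the two parse paths; only the control flow is the point here.
public final class LegacyQueryFallback {

    interface Parser {
        Object parse(String source);
    }

    static Object parseWithFallback(String source, boolean indexIsPre100Beta2,
                                    Parser parseStrict, Parser parseLegacy) {
        try {
            return parseStrict.parse(source);        // modern format: {"query": {...}}
        } catch (IllegalArgumentException ex) {
            if (indexIsPre100Beta2 == false) {
                throw ex;                            // new index: no leniency
            }
            try {
                return parseLegacy.parse(source);    // old translogs stored the bare query
            } catch (RuntimeException legacyFailure) {
                ex.addSuppressed(legacyFailure);     // keep both failure causes, rethrow the original
                throw ex;
            }
        }
    }

    public static void main(String[] args) {
        Parser strict = s -> { throw new IllegalArgumentException("request does not support [range]"); };
        Parser legacy = s -> "parsed: " + s;
        // Old index: the strict parse fails, the legacy parse succeeds.
        System.out.println(parseWithFallback("{\"range\": {\"user\": \"kimchy\"}}", true, strict, legacy));
    }
}
```

Gating the leniency on the index-created version keeps the strict behavior for anything created after the format change, which is the same trade-off the PR makes.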
{
"body": "In the example request below we have a date histogram with buckets for each month between 01 Jan 2001 (inclusive) and 01 Jan 2005 (not inclusive). In each bucket (month) we calculate the minimum distance, the derivative of that minimum (the speed) and then do a moving average of the distance and the moving average of the speed. Both moving averages predict the next 12 months of values.\n\nThe first moving average to execute (lets say the distance moving average) works fine and produces buckets for Jan 2005 to Dec 2005 (inclusive) as expected. The problem is that when the second moving average runs (the speed moving average), it appends its predictions onto the end of the buckets and predict for Jan 2006 to Dec 2006 (inclusive). This is because we assume predictions should always append buckets on the end of the histogram, rather than predicting for buckets starting at the next bucket after the last value for whatever metric we are predicting.\n\nExample request:\n\n``` json\n{\n \"query\": {\n \"range\": {\n \"date\": {\n \"gte\": \"2001-01-01\",\n \"lt\": \"2005-01-01\"\n }\n }\n },\n \"size\": 0,\n \"aggs\": {\n \"histo\": {\n \"date_histogram\": {\n \"field\": \"date\",\n \"interval\": \"month\"\n },\n \"aggs\": {\n \"dist\": {\n \"min\": {\n \"field\": \"distance\"\n }\n },\n \"speed\": {\n \"derivative\": {\n \"buckets_path\": \"distance\"\n }\n },\n \"speed-mov-avg\": {\n \"moving_avg\": {\n \"buckets_path\": \"speed\",\n \"predict\": 12,\n \"model\" : \"holt\",\n \"settings\" : {\n \"alpha\" : 0.5,\n \"beta\": 0.5\n }\n }\n },\n \"dist-mov-avg\": {\n \"moving_avg\": {\n \"buckets_path\": \"distance\",\n \"predict\": 12,\n \"model\" : \"holt\",\n \"settings\" : {\n \"alpha\" : 0.5,\n \"beta\": 0.5\n }\n }\n }\n }\n }\n }\n}\n```\n",
"comments": [],
"number": 11454,
"title": "predicting two moving averages in the same aggregation produces incorrect keys for predicted buckets"
} | {
"body": "Flow is now:\n1. Iterate over buckets, generating movavg values if possible\n2. If predictions are requested, a hashmap of keys is maintained for lookup later\n3. Track the last valid key (e.g. last key that had data)\n4. When a prediction is needed, we check in the map to see if the key is present.\n 1. If the key is present, we create a new bucket with the old aggs + new prediction and overwrite the existing bucket\n 2. If the key was not found, we simply generate a brand new bucket with the prediction\n5. Predictions are now appended to the end of the \"valid data\", rather than the end of the requested range. This makes more sense anyway, and works nicely with the gap policies.\n\nThe map is needed to avoid doing constant binary searches over the list of buckets. We may need to add an `execution_mode` that allows a bin-search in the future, in case the user is executing massive moving avgs and would prefer to trade CPU for memory.\n\nThe solution feels a little janky...happy to entertain a different solution. If this looks reasonable, I'll work up some more test cases (movavgs with different predictions, 3+ movavgs, etc)\n\nFixes #11454\n",
"number": 11465,
"review_comments": [
{
"body": "why not start `i` from 1 and have the end condition as `i <= predictions.length`? You seem to always use `i+1` anyway?\n",
"created_at": "2015-06-04T14:14:26Z"
},
{
"body": "No particularly compelling reason, just how I was thinking about it at the time. E.g. the first prediction is one ahead of the last valid bucket, second prediction is two ahead, etc.\n\nCan switch it if you think it is cleaner. Will need to change `predictions[i]` to `predictions[i - 1]` to compensate.\n",
"created_at": "2015-06-04T14:20:30Z"
},
{
"body": "Had missed that `predictions[i]`. Think in that case it's better as is.\n",
"created_at": "2015-06-04T14:23:44Z"
}
],
"title": "Fix bug where moving_avg prediction keys are appended to previous prediction"
} | {
"commits": [
{
"message": "Fix bug where predictions append to the previous prediction\n\nFixes #11454"
}
],
"files": [
{
"diff": "@@ -47,9 +47,7 @@\n import org.joda.time.DateTime;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n \n import static org.elasticsearch.search.aggregations.pipeline.BucketHelpers.resolveBucketValue;\n \n@@ -110,12 +108,12 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n List newBuckets = new ArrayList<>();\n EvictingQueue<Double> values = EvictingQueue.create(this.window);\n \n- long lastKey = 0;\n- Object currentKey;\n+ long lastValidKey = 0;\n+ int lastValidPosition = 0;\n+ int counter = 0;\n \n for (InternalHistogram.Bucket bucket : buckets) {\n Double thisBucketValue = resolveBucketValue(histo, bucket, bucketsPaths()[0], gapPolicy);\n- currentKey = bucket.getKey();\n \n // Default is to reuse existing bucket. Simplifies the rest of the logic,\n // since we only change newBucket if we can add to it\n@@ -130,22 +128,23 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n \n List<InternalAggregation> aggs = new ArrayList<>(Lists.transform(bucket.getAggregations().asList(), AGGREGATION_TRANFORM_FUNCTION));\n aggs.add(new InternalSimpleValue(name(), movavg, formatter, new ArrayList<PipelineAggregator>(), metaData()));\n- newBucket = factory.createBucket(currentKey, bucket.getDocCount(), new InternalAggregations(\n+ newBucket = factory.createBucket(bucket.getKey(), bucket.getDocCount(), new InternalAggregations(\n aggs), bucket.getKeyed(), bucket.getFormatter());\n }\n- }\n-\n- newBuckets.add(newBucket);\n \n- if (predict > 0) {\n- if (currentKey instanceof Number) {\n- lastKey = ((Number) bucket.getKey()).longValue();\n- } else if (currentKey instanceof DateTime) {\n- lastKey = ((DateTime) bucket.getKey()).getMillis();\n- } else {\n- throw new AggregationExecutionException(\"Expected key of type Number or DateTime but got [\" + currentKey + \"]\");\n+ if (predict > 0) {\n+ if (bucket.getKey() instanceof Number) {\n+ lastValidKey = ((Number) bucket.getKey()).longValue();\n+ } else if (bucket.getKey() instanceof DateTime) {\n+ lastValidKey = ((DateTime) bucket.getKey()).getMillis();\n+ } else {\n+ throw new AggregationExecutionException(\"Expected key of type Number or DateTime but got [\" + lastValidKey + \"]\");\n+ }\n+ lastValidPosition = counter;\n }\n }\n+ counter += 1;\n+ newBuckets.add(newBucket);\n \n }\n \n@@ -158,13 +157,35 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n \n double[] predictions = model.predict(values, predict);\n for (int i = 0; i < predictions.length; i++) {\n- List<InternalAggregation> aggs = new ArrayList<>();\n- aggs.add(new InternalSimpleValue(name(), predictions[i], formatter, new ArrayList<PipelineAggregator>(), metaData()));\n- long newKey = histo.getRounding().nextRoundingValue(lastKey);\n- InternalHistogram.Bucket newBucket = factory.createBucket(newKey, 0, new InternalAggregations(\n- aggs), keyed, formatter);\n- newBuckets.add(newBucket);\n- lastKey = newKey;\n+\n+ List<InternalAggregation> aggs;\n+ long newKey = histo.getRounding().nextRoundingValue(lastValidKey);\n+\n+ if (lastValidPosition + i + 1 < newBuckets.size()) {\n+ InternalHistogram.Bucket bucket = (InternalHistogram.Bucket) newBuckets.get(lastValidPosition + i + 1);\n+\n+ // Get the existing aggs in the bucket so we don't clobber data\n+ aggs = new ArrayList<>(Lists.transform(bucket.getAggregations().asList(), AGGREGATION_TRANFORM_FUNCTION));\n+ aggs.add(new InternalSimpleValue(name(), 
predictions[i], formatter, new ArrayList<PipelineAggregator>(), metaData()));\n+\n+ InternalHistogram.Bucket newBucket = factory.createBucket(newKey, 0, new InternalAggregations(\n+ aggs), keyed, formatter);\n+\n+ // Overwrite the existing bucket with the new version\n+ newBuckets.set(lastValidPosition + i + 1, newBucket);\n+\n+ } else {\n+ // Not seen before, create fresh\n+ aggs = new ArrayList<>();\n+ aggs.add(new InternalSimpleValue(name(), predictions[i], formatter, new ArrayList<PipelineAggregator>(), metaData()));\n+\n+ InternalHistogram.Bucket newBucket = factory.createBucket(newKey, 0, new InternalAggregations(\n+ aggs), keyed, formatter);\n+\n+ // Since this is a new bucket, simply append it\n+ newBuckets.add(newBucket);\n+ }\n+ lastValidKey = newKey;\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregator.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.search.aggregations.pipeline.BucketHelpers;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregationHelperTests;\n import org.elasticsearch.search.aggregations.pipeline.SimpleValue;\n+import org.elasticsearch.search.aggregations.pipeline.derivative.Derivative;\n import org.elasticsearch.search.aggregations.pipeline.movavg.models.*;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.Matchers;\n@@ -49,6 +50,7 @@\n import static org.elasticsearch.search.aggregations.AggregationBuilders.max;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.min;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.range;\n+import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.derivative;\n import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.movingAvg;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.closeTo;\n@@ -160,6 +162,11 @@ public void setupSuiteScopeCluster() throws Exception {\n jsonBuilder().startObject().field(INTERVAL_FIELD, i).field(VALUE_FIELD, 10).endObject()));\n }\n \n+ for (int i = 0; i < 12; i++) {\n+ builders.add(client().prepareIndex(\"double_predict\", \"type\").setSource(\n+ jsonBuilder().startObject().field(INTERVAL_FIELD, i).field(VALUE_FIELD, 10).endObject()));\n+ }\n+\n indexRandom(true, builders);\n ensureSearchable();\n }\n@@ -957,8 +964,10 @@ public void testGiantGapWithPredict() {\n assertThat(histo, notNullValue());\n assertThat(histo.getName(), equalTo(\"histo\"));\n List<? extends Bucket> buckets = histo.getBuckets();\n+\n assertThat(\"Size of buckets array is not correct.\", buckets.size(), equalTo(50 + numPredictions));\n \n+\n double lastValue = ((SimpleValue)(buckets.get(0).getAggregations().get(\"movavg_values\"))).value();\n assertThat(Double.compare(lastValue, 0.0d), greaterThanOrEqualTo(0));\n \n@@ -1073,8 +1082,10 @@ public void testLeftGapWithPredict() {\n assertThat(histo, notNullValue());\n assertThat(histo.getName(), equalTo(\"histo\"));\n List<? extends Bucket> buckets = histo.getBuckets();\n+\n assertThat(\"Size of buckets array is not correct.\", buckets.size(), equalTo(50 + numPredictions));\n \n+\n double lastValue = 0;\n \n double currentValue;\n@@ -1099,8 +1110,7 @@ public void testLeftGapWithPredict() {\n \n /**\n * This test filters the \"gap\" data so that the last doc is excluded. This leaves a long stretch of empty\n- * buckets after the first bucket. The moving avg should be one at the beginning, then zero for the rest\n- * regardless of mov avg type or gap policy.\n+ * buckets after the first bucket.\n */\n @Test\n public void testRightGap() {\n@@ -1176,32 +1186,39 @@ public void testRightGapWithPredict() {\n assertThat(histo, notNullValue());\n assertThat(histo.getName(), equalTo(\"histo\"));\n List<? 
extends Bucket> buckets = histo.getBuckets();\n- assertThat(\"Size of buckets array is not correct.\", buckets.size(), equalTo(50 + numPredictions));\n \n+ // If we are skipping, there will only be predictions at the very beginning and won't append any new buckets\n+ if (gapPolicy.equals(BucketHelpers.GapPolicy.SKIP)) {\n+ assertThat(\"Size of buckets array is not correct.\", buckets.size(), equalTo(50));\n+ } else {\n+ assertThat(\"Size of buckets array is not correct.\", buckets.size(), equalTo(50 + numPredictions));\n+ }\n \n+ // Unlike left-gap tests, we cannot check the slope of prediction for right-gap. E.g. linear will\n+ // converge on zero, but holt-linear may trend upwards based on the first value\n+ // Just check for non-nullness\n SimpleValue current = buckets.get(0).getAggregations().get(\"movavg_values\");\n assertThat(current, notNullValue());\n \n- double lastValue = current.value();\n-\n- double currentValue;\n- for (int i = 1; i < 50; i++) {\n- current = buckets.get(i).getAggregations().get(\"movavg_values\");\n- if (current != null) {\n- currentValue = current.value();\n-\n- assertThat(Double.compare(lastValue, currentValue), greaterThanOrEqualTo(0));\n- lastValue = currentValue;\n+ // If we are skipping, there will only be predictions at the very beginning and won't append any new buckets\n+ if (gapPolicy.equals(BucketHelpers.GapPolicy.SKIP)) {\n+ // Now check predictions\n+ for (int i = 1; i < 1 + numPredictions; i++) {\n+ // Unclear at this point which direction the predictions will go, just verify they are\n+ // not null\n+ assertThat(buckets.get(i).getDocCount(), equalTo(0L));\n+ assertThat((buckets.get(i).getAggregations().get(\"movavg_values\")), notNullValue());\n+ }\n+ } else {\n+ // Otherwise we'll have some predictions at the end\n+ for (int i = 50; i < 50 + numPredictions; i++) {\n+ // Unclear at this point which direction the predictions will go, just verify they are\n+ // not null\n+ assertThat(buckets.get(i).getDocCount(), equalTo(0L));\n+ assertThat((buckets.get(i).getAggregations().get(\"movavg_values\")), notNullValue());\n }\n }\n \n- // Now check predictions\n- for (int i = 50; i < 50 + numPredictions; i++) {\n- // Unclear at this point which direction the predictions will go, just verify they are\n- // not null, and that we don't have the_metric anymore\n- assertThat((buckets.get(i).getAggregations().get(\"movavg_values\")), notNullValue());\n- assertThat((buckets.get(i).getAggregations().get(\"the_metric\")), nullValue());\n- }\n }\n \n @Test\n@@ -1232,6 +1249,100 @@ public void testHoltWintersNotEnoughData() {\n \n }\n \n+ @Test\n+ public void testTwoMovAvgsWithPredictions() {\n+\n+ SearchResponse response = client()\n+ .prepareSearch(\"double_predict\")\n+ .setTypes(\"type\")\n+ .addAggregation(\n+ histogram(\"histo\")\n+ .field(INTERVAL_FIELD)\n+ .interval(1)\n+ .subAggregation(avg(\"avg\").field(VALUE_FIELD))\n+ .subAggregation(derivative(\"deriv\")\n+ .setBucketsPaths(\"avg\").gapPolicy(gapPolicy))\n+ .subAggregation(\n+ movingAvg(\"avg_movavg\").window(windowSize).modelBuilder(new SimpleModel.SimpleModelBuilder())\n+ .gapPolicy(gapPolicy).predict(12).setBucketsPaths(\"avg\"))\n+ .subAggregation(\n+ movingAvg(\"deriv_movavg\").window(windowSize).modelBuilder(new SimpleModel.SimpleModelBuilder())\n+ .gapPolicy(gapPolicy).predict(12).setBucketsPaths(\"deriv\"))\n+ ).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ InternalHistogram<Bucket> histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, 
notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? extends Bucket> buckets = histo.getBuckets();\n+ assertThat(\"Size of buckets array is not correct.\", buckets.size(), equalTo(24));\n+\n+ Bucket bucket = buckets.get(0);\n+ assertThat(bucket, notNullValue());\n+ assertThat((long) bucket.getKey(), equalTo((long) 0));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ Avg avgAgg = bucket.getAggregations().get(\"avg\");\n+ assertThat(avgAgg, notNullValue());\n+ assertThat(avgAgg.value(), equalTo(10d));\n+\n+ SimpleValue movAvgAgg = bucket.getAggregations().get(\"avg_movavg\");\n+ assertThat(movAvgAgg, notNullValue());\n+ assertThat(movAvgAgg.value(), equalTo(10d));\n+\n+ Derivative deriv = bucket.getAggregations().get(\"deriv\");\n+ assertThat(deriv, nullValue());\n+\n+ SimpleValue derivMovAvg = bucket.getAggregations().get(\"deriv_movavg\");\n+ assertThat(derivMovAvg, nullValue());\n+\n+ for (int i = 1; i < 12; i++) {\n+ bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat((long) bucket.getKey(), equalTo((long) i));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ avgAgg = bucket.getAggregations().get(\"avg\");\n+ assertThat(avgAgg, notNullValue());\n+ assertThat(avgAgg.value(), equalTo(10d));\n+\n+ deriv = bucket.getAggregations().get(\"deriv\");\n+ assertThat(deriv, notNullValue());\n+ assertThat(deriv.value(), equalTo(0d));\n+\n+ movAvgAgg = bucket.getAggregations().get(\"avg_movavg\");\n+ assertThat(movAvgAgg, notNullValue());\n+ assertThat(movAvgAgg.value(), equalTo(10d));\n+\n+ derivMovAvg = bucket.getAggregations().get(\"deriv_movavg\");\n+ assertThat(derivMovAvg, notNullValue());\n+ assertThat(derivMovAvg.value(), equalTo(0d));\n+ }\n+\n+ // Predictions\n+ for (int i = 12; i < 24; i++) {\n+ bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat((long) bucket.getKey(), equalTo((long) i));\n+ assertThat(bucket.getDocCount(), equalTo(0l));\n+\n+ avgAgg = bucket.getAggregations().get(\"avg\");\n+ assertThat(avgAgg, nullValue());\n+\n+ deriv = bucket.getAggregations().get(\"deriv\");\n+ assertThat(deriv, nullValue());\n+\n+ movAvgAgg = bucket.getAggregations().get(\"avg_movavg\");\n+ assertThat(movAvgAgg, notNullValue());\n+ assertThat(movAvgAgg.value(), equalTo(10d));\n+\n+ derivMovAvg = bucket.getAggregations().get(\"deriv_movavg\");\n+ assertThat(derivMovAvg, notNullValue());\n+ assertThat(derivMovAvg.value(), equalTo(0d));\n+ }\n+ }\n+\n @Test\n public void testBadModelParams() {\n try {",
"filename": "src/test/java/org/elasticsearch/search/aggregations/pipeline/moving/avg/MovAvgTests.java",
"status": "modified"
}
]
} |
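The diff above changes where moving-average predictions land when more than one pipeline aggregation predicts on the same histogram: they start right after the last bucket that had data, overwriting buckets a previous prediction already appended instead of appending a second run. The sketch below is a standalone illustration of just that placement rule, assuming a simplified `Bucket` type and a fixed interval in place of `Rounding.nextRoundingValue()`; unlike the real `MovAvgPipelineAggregator`, it swaps the bucket out instead of merging the existing aggregations with the new prediction value.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified, standalone sketch of the placement logic (not the real aggregator code).
public final class MovAvgPredictionPlacement {

    static final class Bucket {
        final long key;
        final Double prediction;
        Bucket(long key, Double prediction) { this.key = key; this.prediction = prediction; }
        @Override public String toString() { return key + "=>" + prediction; }
    }

    static void placePredictions(List<Bucket> buckets, int lastValidPosition,
                                 long lastValidKey, long interval, double[] predictions) {
        long key = lastValidKey;
        for (int i = 0; i < predictions.length; i++) {
            key += interval;                         // stand-in for rounding.nextRoundingValue()
            Bucket predicted = new Bucket(key, predictions[i]);
            int position = lastValidPosition + i + 1;
            if (position < buckets.size()) {
                buckets.set(position, predicted);    // a bucket already exists there: overwrite it
            } else {
                buckets.add(predicted);              // nothing there yet: append a fresh bucket
            }
        }
    }

    public static void main(String[] args) {
        // Two data buckets, then two buckets an earlier prediction already appended.
        List<Bucket> buckets = new ArrayList<>(Arrays.asList(
                new Bucket(0, null), new Bucket(1, null), new Bucket(2, 5.0), new Bucket(3, 5.0)));
        placePredictions(buckets, 1, 1, 1, new double[] { 10.0, 10.0, 10.0 });
        System.out.println(buckets); // [0=>null, 1=>null, 2=>10.0, 3=>10.0, 4=>10.0]
    }
}
```

Running it shows the point of the fix: the two buckets created by the earlier prediction are reused, and only one genuinely new bucket is appended, so the histogram does not keep growing with each additional moving average.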
{
"body": "This was working for me as of a few weeks ago. but now I hit this:\n\nGot error to send bulk of actions to elasticsearch server at localhost : [400] {\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"Malformed action/metadata line [1], expected a simple value for field [_id] but found [VALUE_NULL]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"Malformed action/metadata line [1], expected a simple value for field [_id] but found [VALUE_NULL]\"},\"status\":400} {:level=>:error}\n\nI'm using the simple config here: https://github.com/peterskim12/elk-index-size-tests\n",
"comments": [
{
"body": "isn't this a logstash issue in the first place? I mean null is not a valid id?\n",
"created_at": "2015-06-02T15:10:29Z"
},
{
"body": "I don't know anything about the particulars, I just wanted to raise the issue. I'm using logstash 1.5 fwiw.\n",
"created_at": "2015-06-02T15:12:02Z"
},
{
"body": "@rmuir When it was working you were running logstash 1.4.2?\n",
"created_at": "2015-06-02T15:14:25Z"
},
{
"body": "When i started doing index size tests with master, I started with a 1.5 release candidate but upgraded to the official release. \n\nSince then i semi-regularly index that test dataset with logstash into es master, because its a little sample index for debugging, etc. (in this case, I am looking at our stats api so i wanted some data there)\n",
"created_at": "2015-06-02T15:15:56Z"
},
{
"body": "@rmuir I'll have a look, lets keep this issue open, if needed I'll migrate it.\n",
"created_at": "2015-06-02T15:18:27Z"
},
{
"body": "@ph this likely caused by https://github.com/elastic/elasticsearch/issues/10977 since we now barf if there is an invalid value\n",
"created_at": "2015-06-02T15:20:34Z"
},
{
"body": "btw. this is also in `1.6` so we better check if we wanna keep it there since it seems to break logstash?\n",
"created_at": "2015-06-02T15:22:14Z"
},
{
"body": "#11331 throws an exception if a \"metadata\" field (like `_id` , `_type` etc) contains a null value. Let me know if this must be fixed on ES side.\n",
"created_at": "2015-06-02T15:45:54Z"
},
{
"body": "we should see if this is a regression on the logstash-output-elasticsearch plugin /cc @talevy .\nI suggest we test with 1.4.2 to see if we can reproduce and this will tell us if this is a new behaviour introduced in the plugin. Note that there's been a handful of point releases of logstash-output-elasticsearch in the last weeks so it can also be a regression within the 1.5.0 release cycle.\n",
"created_at": "2015-06-02T15:46:31Z"
},
{
"body": "@tlrx i think that null values should be ignored here\n",
"created_at": "2015-06-02T15:49:33Z"
},
{
"body": "@rmuir @s1monw I've tested with 1.5.0 and I can reproduce the bug, this also break logstash 1.4.2.\nEven if it wasn't the intended behavior, this feel like a bit breaking for a 1.x release?\n",
"created_at": "2015-06-02T16:00:30Z"
},
{
"body": "Easy fix on our side we can remove the `null` keys before doing the bulk request.\n",
"created_at": "2015-06-02T16:06:04Z"
},
{
"body": "@ph we should just fix this before 1.6, to ignore null values.\n",
"created_at": "2015-06-02T16:07:31Z"
},
{
"body": "@clintongormley :+1: \n",
"created_at": "2015-06-02T16:08:17Z"
},
{
"body": "This will be fixed by #11459, sorry for the regression\n",
"created_at": "2015-06-02T16:16:39Z"
}
],
"number": 11458,
"title": "unable to index with logstash with current master"
} | {
"body": "Closes #11458\n",
"number": 11459,
"review_comments": [],
"title": "Allow null values in the bulk action/metadata line parameters"
} | {
"commits": [
{
"message": "Bulk: allow null values in action/metadata line parameters\n\nCloses #11458"
}
],
"files": [
{
"diff": "@@ -332,7 +332,7 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n } else {\n throw new IllegalArgumentException(\"Action/metadata line [\" + line + \"] contains an unknown parameter [\" + currentFieldName + \"]\");\n }\n- } else {\n+ } else if (token != XContentParser.Token.VALUE_NULL) {\n throw new IllegalArgumentException(\"Malformed action/metadata line [\" + line + \"], expected a simple value for field [\" + currentFieldName + \"] but found [\" + token + \"]\");\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/bulk/BulkRequest.java",
"status": "modified"
},
{
"diff": "@@ -177,4 +177,12 @@ public void testSimpleBulk9() throws Exception {\n e.getMessage().contains(\"Malformed action/metadata line [3], expected START_OBJECT or END_OBJECT but found [START_ARRAY]\"), equalTo(true));\n }\n }\n+\n+ @Test\n+ public void testSimpleBulk10() throws Exception {\n+ String bulkAction = copyToStringFromClasspath(\"/org/elasticsearch/action/bulk/simple-bulk10.json\");\n+ BulkRequest bulkRequest = new BulkRequest();\n+ bulkRequest.add(bulkAction.getBytes(Charsets.UTF_8), 0, bulkAction.length(), null, null);\n+ assertThat(bulkRequest.numberOfActions(), equalTo(9));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/action/bulk/BulkRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,15 @@\n+{ \"index\" : {\"_index\":null, \"_type\":\"type1\", \"_id\":\"0\"} }\n+{ \"field1\" : \"value1\" }\n+{ \"index\" : {\"_index\":\"test\", \"_type\":null, \"_id\":\"0\"} }\n+{ \"field1\" : \"value1\" }\n+{ \"index\" : {\"_index\":\"test\", \"_type\":\"type1\", \"_id\":null} }\n+{ \"field1\" : \"value1\" }\n+{ \"delete\" : {\"_index\":null, \"_type\":\"type1\", \"_id\":\"0\"} }\n+{ \"delete\" : {\"_index\":\"test\", \"_type\":null, \"_id\":\"0\"} }\n+{ \"delete\" : {\"_index\":\"test\", \"_type\":\"type1\", \"_id\":null} }\n+{ \"create\" : {\"_index\":null, \"_type\":\"type1\", \"_id\":\"0\"} }\n+{ \"field1\" : \"value1\" }\n+{ \"create\" : {\"_index\":\"test\", \"_type\":null, \"_id\":\"0\"} }\n+{ \"field1\" : \"value1\" }\n+{ \"create\" : {\"_index\":\"test\", \"_type\":\"type1\", \"_id\":null} }\n+{ \"field1\" : \"value1\" }",
"filename": "src/test/java/org/elasticsearch/action/bulk/simple-bulk10.json",
"status": "added"
}
]
} |
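The one-line change above makes the bulk parser skip a `VALUE_NULL` token for an action/metadata parameter instead of rejecting the whole line as malformed. The snippet below is only an illustration of that rule, not the actual `BulkRequest.add` implementation; it uses Jackson streaming so the example is runnable on its own.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the lenient metadata-line parsing, not Elasticsearch code.
public final class BulkMetadataLine {

    static Map<String, String> parseMetadata(String line) throws Exception {
        Map<String, String> params = new LinkedHashMap<>();
        try (JsonParser parser = new JsonFactory().createParser(line)) {
            parser.nextToken();                        // START_OBJECT of the line
            parser.nextToken();                        // action name, e.g. "index"
            params.put("action", parser.getCurrentName());
            parser.nextToken();                        // START_OBJECT of the metadata
            while (parser.nextToken() != JsonToken.END_OBJECT) {
                String field = parser.getCurrentName();
                JsonToken token = parser.nextToken();
                if (token == JsonToken.VALUE_NULL) {
                    continue;                          // null parameter: ignore it, don't fail
                } else if (token.isScalarValue()) {
                    params.put(field, parser.getText());
                } else {
                    throw new IllegalArgumentException(
                        "expected a simple value for field [" + field + "] but found [" + token + "]");
                }
            }
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parseMetadata("{ \"index\" : {\"_index\":\"test\", \"_type\":null, \"_id\":\"1\"} }"));
        // => {action=index, _index=test, _id=1}   (_type skipped because it was null)
    }
}
```

A client that still sends `"_type": null` or `"_id": null` (as the Logstash output plugin did at the time) is then treated as if it had simply omitted the field.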
{
"body": "run the example below:\n\n```\nDELETE test\n\nPOST test\n\nPUT test/test/_mapping\n{\n \"test\" : \n {\n \"_timestamp\" : { \"enabled\" : true, \"store\" : true, \"format\": \"yyyyMMddHHmmssSSS\"}\n }\n}\n\nPOST test/test/1?timestamp=20140301101010123\n{\n \"id\" : 1\n}\n\nPOST test/_search\n{\n \"fields\": [\n \"_timestamp\"\n ], \n \"query\": {\n \"range\": {\n \"_timestamp\": {\n \"from\": 20140301101010000, \n \"to\": 20140301101010999\n }\n }\n }\n}\n```\n\ni wll expect that last query will return created document (as the timestamp falls into specified range)\nalso after inserting, timestamp on document is not this same as the one specified in the request (it is in fact **20140301101010124** when it should be **20140301101010123**)\n\ncontext:\n- es: 1.0.0\n- jvm: 1.7.0_51\n- hosted in EC2\n",
"comments": [
{
"body": "This problem occurs because Elasticsearch [first tries](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java#L165) to parse timestamp as long (number of milliseconds since epoch). So, your timestamp is getting interpreted as `Thu, 27 Oct 2033 12:48:30 GMT`, while your ranges are getting interpreted correctly as you expect. We could flip this logic in parsing timestamp and try parsing using format first, but then it will break interface for all users who are using milliseconds (with default format just a number is interpreted as year). So, I am not really sure what would be a good solution here, but two possible workarounds here are 1) to use a format that doesn't look like long or 2) switch to milliseconds on the timestamp field. \n",
"created_at": "2014-03-05T17:00:46Z"
},
{
"body": "sadly im stuck with timestamp in this format (as it comes from external source)\nwill be nice to have some configuration switch that will allow to change default parsing behavior for timestamp \n",
"created_at": "2014-03-05T17:06:24Z"
},
{
"body": "Another place where we don't do the right thing with `\"1234\"` vs `1234`:\n\n```\nDELETE /t\n\nPUT /t\n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"date\": {\n \"type\": \"date\",\n \"format\": \"YYYYMMddHHmmss\"\n }\n }\n }\n }\n}\n\nPUT /t/t/1\n{\n \"date\": \"20140101000000\"\n}\n\nPUT /t/t/2\n{\n \"date\": 20140101000000\n}\n\nGET /t/_search\n{\n \"script_fields\": {\n \"date\": {\n \"script\": \"doc['date'].value\"\n }\n }\n}\n\nGET /t/_search\n{\n \"query\": {\n \"match\": {\n \"date\": \"20140101000000\"\n }\n }\n}\n\nGET /t/_search\n{\n \"query\": {\n \"match\": {\n \"date\": 20140101000000\n }\n }\n}\n```\n",
"created_at": "2014-03-15T13:24:41Z"
},
{
"body": "As Igor said before, part of the problem is date fields have this dual behavior, where it can accept a formatted date, but also an epoch timestamp. However, in this case, the value being searched is passed as a json number. The only way we could interpret that with the original supplied format would be to write the number as a string and then try to parse. I don't think we should do this.\n\nWe could however make a couple improvements:\n- Add a flag to disable interpreting as an epoch time\n- Throw an error if a number is passed in when this feature is disabled \n",
"created_at": "2015-04-10T16:49:01Z"
},
{
"body": "Another possibility would be to add `epoch` as a format, then users can control the order in which the epoch format is tried, eg:\n\n```\n \"format\": \"epoch|YYYY-mm-dd\"\n```\n\nvs\n\n```\n \"format\": \"YYYY-mm-dd|epoch\"\n```\n",
"created_at": "2015-04-13T10:07:33Z"
},
{
"body": "@clintongormley I really like that idea! I think we should do that, and change the docs to describe the default format to match the current behavior? Should we have a way to specify whether it is seconds or milliseconds epoch? We currently use the first example you gave, with milliseconds epoch.\n",
"created_at": "2015-04-23T07:17:42Z"
},
{
"body": "@rjernst yes, i think supporting epoch seconds would be an excellent idea, eg Perl uses floating point seconds. So perhaps `epoch_seconds` and `epoch_ms`?\n",
"created_at": "2015-04-26T13:55:14Z"
}
],
"number": 5328,
"title": "filtering on _timestamp field (with custom format) not working"
} | {
"body": "This PR adds the support the new date formats, namely `epoch_second` and `epoch_second_millis`. By adding those, we can remove the internal logic to try and parse everything as a unix timestamp first and only then use our regular date format.\n\nThis PR tries to remain backwards compatible by using `epoch_second_millis||dateOptionalTime` where before only `dateOptionalTime` was used in combination with the parsing.\n\nSome BWC occured, ie, a date like `10000` is now always a year without any other configuration instead of a unix timestamp.\n\nAlso, in the current implementation, the `RangeQueryParser` allows to configure a timezone for queries using `from`/`to` unix timestamps, even though the timezone is ignored in the query, as a timestamp is always UTC.\n\nCloses #5328\nRelates #10971\n",
"number": 11453,
"review_comments": [
{
"body": "should this not be `epoch_millis`?\n",
"created_at": "2015-06-02T14:41:29Z"
},
{
"body": "I agree\n",
"created_at": "2015-06-02T16:28:06Z"
},
{
"body": "I realize this was there before, but can we not lose the original exception?\n",
"created_at": "2015-06-03T06:39:08Z"
},
{
"body": "Can skip adding camelCase versions? \n",
"created_at": "2015-06-03T06:41:14Z"
},
{
"body": "SECOND_PRECISION_PATTERN since this is a constant?\n",
"created_at": "2015-06-03T06:43:58Z"
},
{
"body": "Can we use `== false`?\n",
"created_at": "2015-06-03T06:46:07Z"
},
{
"body": "Shouldn't we still have this? It doesn't really make sense to set seconds or milliseconds since epoch in anything but UTC?\n",
"created_at": "2015-06-03T07:16:17Z"
},
{
"body": "Does this really need to be an integration test? Could it be in `SimpleDateMappingTests`?\n",
"created_at": "2015-06-03T07:17:32Z"
},
{
"body": "the problem here is, that a date like `2015121212` supplied as a number could be a date in the format `yyyyDDMMHH` which cannot be distinguished, so this check would trigger even though the number is valid and going to be parsed with a date that correctly owns a timezone. Not sure, when/where to postpone this check, need to think about it.\n",
"created_at": "2015-06-03T07:49:43Z"
},
{
"body": "we should convert this to use ParseField since that will deal with deprecated names etc. in the future\n",
"created_at": "2015-06-03T08:26:42Z"
},
{
"body": "this test actually uncovered the above issue with the `QueryRangeParser` and numeric dates like `2015121212`, so I would prefer to leave it as it is\n",
"created_at": "2015-06-03T08:29:03Z"
},
{
"body": "Why would we add a _new_ setting that supports camelCase? The idea is to get rid of this weird dual behavior with settings. Why allow users to start using something they will just have to change later?\n",
"created_at": "2015-06-03T08:29:16Z"
},
{
"body": "In 2.0 ParseField no longer supports camelCase IIRC. The advantage with using ParseField is that it standardises the checks that we do to make it consistent across the codebase (or will once all the Parsers have been changed to use it). It also makes deprecating an old option name in favour of a new one, much easier as we don't end up with these messy if statements with multiple ||'s. And also means its much easier to correlate the builder classes with the Parser since they can both reference a ParseField constant and the builder will alway use the preferred name for the option.\n",
"created_at": "2015-06-03T08:35:07Z"
},
{
"body": "changed\n",
"created_at": "2015-06-03T09:41:50Z"
},
{
"body": "true, added\n",
"created_at": "2015-06-03T09:42:00Z"
},
{
"body": "skipped for now\n",
"created_at": "2015-06-03T09:42:10Z"
},
{
"body": "fixed\n",
"created_at": "2015-06-03T09:42:14Z"
},
{
"body": "fixed\n",
"created_at": "2015-06-03T09:42:22Z"
},
{
"body": "I actually found a solution to this in my latest commit and do check this in the parser now\n",
"created_at": "2015-06-03T09:42:41Z"
},
{
"body": "I think this is confusing if we support camelCase in some of the options in this parser and not others (even if they are new). We should either support camelCase for all options or for none to be consistent.\n",
"created_at": "2015-06-03T10:30:29Z"
},
{
"body": "do we plan to resolve this before 2.0? If so, I am fine leaving it as it is...\n",
"created_at": "2015-06-03T12:17:31Z"
},
{
"body": "I don't think it will be resolved (rejecting camelCase) as its a huge change across 100s of files (because most parser use this style and not ParseField) and I don't see us doing that change quickly\n",
"created_at": "2015-06-03T12:34:16Z"
},
{
"body": "I don't think it matters. We should not force making huge changes to the entire codebase in order to not add things which will just be deprecated and/or confusing to the user.\n",
"created_at": "2015-06-03T13:18:41Z"
},
{
"body": "But I think this is confusing. If I have a format specified as `yearMonthDay` that works then I would expect to be able to change it to `epochSecond` and it would work. Supporting some values in camelCase for date formats and not other values is very confusing to a user. I'm all for removing camelCase but we should be consistent with it, especially when its different values for the same setting (in the case different values of `format` for date fields).\n",
"created_at": "2015-06-03T13:23:44Z"
},
{
"body": "I would also be fine with removing all the camelCase options for all formats in this PR to make it consistent.\n",
"created_at": "2015-06-03T13:25:25Z"
},
{
"body": "We already do not support camelCase for all settings, and I don't think there is any consistency even within the same query/field type/whatever. \n",
"created_at": "2015-06-03T13:26:10Z"
},
{
"body": "> I would also be fine with removing all the camelCase options for all formats in this PR to make it consistent.\n\nThis is the kind of statement that stalls progress. Requiring huge changes just to make a small improvement should not be necessary.\n",
"created_at": "2015-06-03T13:27:35Z"
},
{
"body": "I am only talking about the date formats here, not across the whole codebase (i can see the above statement might have been a bit ambiguous on that). All the multi-word date format values above support both a camelCase and an underscored version. That should be consistent, whether that means supporting both for now or only supporting the underscored version I don't have a strong opinion but its hardly a huge change to update the date format values to be consistent and its not a huge overhead to maintain an extra 2 camelCase options given that any change to that policy would require a change to all the other date formats too\n",
"created_at": "2015-06-03T13:32:52Z"
},
{
"body": "I just realized we aren't even talking about setting names, but the valid values for the `format` setting. This argument to use ParseValue does not make sense. We don't support camelCase in eg the `index` option. We should not do it here, it will just add more work for users if we allow them to _start_ using a new value that will just go away in the future (and will require them to change the value to what they would have found in the first place if they had tried using camelCase and seen an error).\n",
"created_at": "2015-06-03T13:58:26Z"
},
{
"body": "I think this might be busted by-design, JSON numerical values only have a float type, so \"1.470417092e+09\" (produced by Go's json encoder) will just result in `IllegalArgumentException[Invalid format: \"1.470417092e+09\"];`. Any recommended way around this? \n",
"created_at": "2016-08-22T16:31:07Z"
}
],
"title": "Added epoch date formats to configure parsing of unix dates"
} | {
"commits": [
{
"message": "Date Parsing: Add parsing for epoch and epoch in milliseconds\n\nThis commit changes the date handling. First and foremost Elasticsearch\ndoes not try to convert every date to a unix timestamp first and then\nuses the configured date. This now allows for dates like `2015121212` to\nbe parsed correctly.\n\nInstead it is now explicit by adding a `epoch_second` and `epoch_millis`\ndate format. This also means, that the default date format now is\n`epoch_millis||dateOptionalTime` to remain backwards compatible.\n\nCloses #5328\nRelates #10971"
}
],
"files": [
{
"diff": "@@ -198,6 +198,11 @@ year.\n \n |`year_month_day`|A formatter for a four digit year, two digit month of\n year, and two digit day of month.\n+\n+|`epoch_second`|A formatter for the number of seconds since the epoch.\n+\n+|`epoch_millis`|A formatter for the number of milliseconds since\n+the epoch.\n |=======================================================================\n \n [float]",
"filename": "docs/reference/mapping/date-format.asciidoc",
"status": "modified"
},
{
"diff": "@@ -79,7 +79,7 @@ format>> used to parse the provided timestamp value. For example:\n }\n --------------------------------------------------\n \n-Note, the default format is `dateOptionalTime`. The timestamp value will\n+Note, the default format is `epoch_millis||dateOptionalTime`. The timestamp value will\n first be parsed as a number and if it fails the format will be tried.\n \n [float]",
"filename": "docs/reference/mapping/fields/timestamp-field.asciidoc",
"status": "modified"
},
{
"diff": "@@ -349,7 +349,7 @@ date type:\n Defaults to the property/field name.\n \n |`format` |The <<mapping-date-format,date\n-format>>. Defaults to `dateOptionalTime`.\n+format>>. Defaults to `epoch_millis||dateOptionalTime`.\n \n |`store` |Set to `true` to store actual field in the index, `false` to not\n store it. Defaults to `false` (note, the JSON document itself is stored,",
"filename": "docs/reference/mapping/types/core-types.asciidoc",
"status": "modified"
},
{
"diff": "@@ -42,8 +42,8 @@ and will use the matching format as its format attribute. The date\n format itself is explained\n <<mapping-date-format,here>>.\n \n-The default formats are: `dateOptionalTime` (ISO) and\n-`yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z`.\n+The default formats are: `dateOptionalTime` (ISO),\n+`yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z` and `epoch_millis`.\n \n *Note:* `dynamic_date_formats` are used *only* for dynamically added\n date fields, not for `date` fields that you specify in your mapping.",
"filename": "docs/reference/mapping/types/root-object-type.asciidoc",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,11 @@ public TimestampParsingException(String timestamp) {\n this.timestamp = timestamp;\n }\n \n+ public TimestampParsingException(String timestamp, Throwable cause) {\n+ super(\"failed to parse timestamp [\" + timestamp + \"]\", cause);\n+ this.timestamp = timestamp;\n+ }\n+\n public String timestamp() {\n return timestamp;\n }",
"filename": "src/main/java/org/elasticsearch/action/TimestampParsingException.java",
"status": "modified"
},
{
"diff": "@@ -161,19 +161,11 @@ public int hashCode() {\n public static class Timestamp {\n \n public static String parseStringTimestamp(String timestampAsString, FormatDateTimeFormatter dateTimeFormatter) throws TimestampParsingException {\n- long ts;\n try {\n- // if we manage to parse it, its a millisecond timestamp, just return the string as is\n- ts = Long.parseLong(timestampAsString);\n- return timestampAsString;\n- } catch (NumberFormatException e) {\n- try {\n- ts = dateTimeFormatter.parser().parseMillis(timestampAsString);\n- } catch (RuntimeException e1) {\n- throw new TimestampParsingException(timestampAsString);\n- }\n+ return Long.toString(dateTimeFormatter.parser().parseMillis(timestampAsString));\n+ } catch (RuntimeException e) {\n+ throw new TimestampParsingException(timestampAsString, e);\n }\n- return Long.toString(ts);\n }\n \n ",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java",
"status": "modified"
},
{
"diff": "@@ -19,14 +19,14 @@\n \n package org.elasticsearch.common.joda;\n \n-import org.apache.commons.lang3.StringUtils;\n import org.elasticsearch.ElasticsearchParseException;\n import org.joda.time.DateTimeZone;\n import org.joda.time.MutableDateTime;\n import org.joda.time.format.DateTimeFormatter;\n \n import java.util.concurrent.Callable;\n-import java.util.concurrent.TimeUnit;\n+\n+import static com.google.common.base.Preconditions.checkNotNull;\n \n /**\n * A parser for date/time formatted text with optional date math.\n@@ -38,13 +38,10 @@\n public class DateMathParser {\n \n private final FormatDateTimeFormatter dateTimeFormatter;\n- private final TimeUnit timeUnit;\n \n- public DateMathParser(FormatDateTimeFormatter dateTimeFormatter, TimeUnit timeUnit) {\n- if (dateTimeFormatter == null) throw new NullPointerException();\n- if (timeUnit == null) throw new NullPointerException();\n+ public DateMathParser(FormatDateTimeFormatter dateTimeFormatter) {\n+ checkNotNull(dateTimeFormatter);\n this.dateTimeFormatter = dateTimeFormatter;\n- this.timeUnit = timeUnit;\n }\n \n public long parse(String text, Callable<Long> now) {\n@@ -195,17 +192,6 @@ private long parseMath(String mathString, long time, boolean roundUp, DateTimeZo\n }\n \n private long parseDateTime(String value, DateTimeZone timeZone) {\n- \n- // first check for timestamp\n- if (value.length() > 4 && StringUtils.isNumeric(value)) {\n- try {\n- long time = Long.parseLong(value);\n- return timeUnit.toMillis(time);\n- } catch (NumberFormatException e) {\n- throw new ElasticsearchParseException(\"failed to parse date field [\" + value + \"] as timestamp\", e);\n- }\n- }\n- \n DateTimeFormatter parser = dateTimeFormatter.parser();\n if (timeZone != null) {\n parser = parser.withZone(timeZone);",
"filename": "src/main/java/org/elasticsearch/common/joda/DateMathParser.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.joda.time.format.*;\n \n import java.util.Locale;\n+import java.util.regex.Pattern;\n \n /**\n *\n@@ -133,6 +134,10 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n formatter = ISODateTimeFormat.yearMonth();\n } else if (\"yearMonthDay\".equals(input) || \"year_month_day\".equals(input)) {\n formatter = ISODateTimeFormat.yearMonthDay();\n+ } else if (\"epoch_second\".equals(input)) {\n+ formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(false)).toFormatter();\n+ } else if (\"epoch_millis\".equals(input)) {\n+ formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(true)).toFormatter();\n } else if (Strings.hasLength(input) && input.contains(\"||\")) {\n String[] formats = Strings.delimitedListToStringArray(input, \"||\");\n DateTimeParser[] parsers = new DateTimeParser[formats.length];\n@@ -192,4 +197,50 @@ public DateTimeField getField(Chronology chronology) {\n return new OffsetDateTimeField(new DividedDateTimeField(new OffsetDateTimeField(chronology.monthOfYear(), -1), QuarterOfYear, 3), 1);\n }\n };\n+\n+ public static class EpochTimeParser implements DateTimeParser {\n+\n+ private static final Pattern MILLI_SECOND_PRECISION_PATTERN = Pattern.compile(\"^\\\\d{1,13}$\");\n+ private static final Pattern SECOND_PRECISION_PATTERN = Pattern.compile(\"^\\\\d{1,10}$\");\n+\n+ private final boolean hasMilliSecondPrecision;\n+ private final Pattern pattern;\n+\n+ public EpochTimeParser(boolean hasMilliSecondPrecision) {\n+ this.hasMilliSecondPrecision = hasMilliSecondPrecision;\n+ this.pattern = hasMilliSecondPrecision ? MILLI_SECOND_PRECISION_PATTERN : SECOND_PRECISION_PATTERN;\n+ }\n+\n+ @Override\n+ public int estimateParsedLength() {\n+ return hasMilliSecondPrecision ? 13 : 10;\n+ }\n+\n+ @Override\n+ public int parseInto(DateTimeParserBucket bucket, String text, int position) {\n+ if (text.length() > estimateParsedLength() ||\n+ // timestamps have to have UTC timezone\n+ bucket.getZone() != DateTimeZone.UTC ||\n+ pattern.matcher(text).matches() == false) {\n+ return -1;\n+ }\n+\n+ int factor = hasMilliSecondPrecision ? 1 : 1000;\n+ try {\n+ long millis = Long.valueOf(text) * factor;\n+ DateTime dt = new DateTime(millis, DateTimeZone.UTC);\n+ bucket.saveField(DateTimeFieldType.year(), dt.getYear());\n+ bucket.saveField(DateTimeFieldType.monthOfYear(), dt.getMonthOfYear());\n+ bucket.saveField(DateTimeFieldType.dayOfMonth(), dt.getDayOfMonth());\n+ bucket.saveField(DateTimeFieldType.hourOfDay(), dt.getHourOfDay());\n+ bucket.saveField(DateTimeFieldType.minuteOfHour(), dt.getMinuteOfHour());\n+ bucket.saveField(DateTimeFieldType.secondOfMinute(), dt.getSecondOfMinute());\n+ bucket.saveField(DateTimeFieldType.millisOfSecond(), dt.getMillisOfSecond());\n+ bucket.setZone(DateTimeZone.UTC);\n+ } catch (Exception e) {\n+ return -1;\n+ }\n+ return text.length();\n+ }\n+ };\n }",
"filename": "src/main/java/org/elasticsearch/common/joda/Joda.java",
"status": "modified"
},
{
"diff": "@@ -46,12 +46,7 @@\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.analysis.NumericDateAnalyzer;\n import org.elasticsearch.index.fielddata.FieldDataType;\n-import org.elasticsearch.index.mapper.MappedFieldType;\n-import org.elasticsearch.index.mapper.Mapper;\n-import org.elasticsearch.index.mapper.MapperParsingException;\n-import org.elasticsearch.index.mapper.MergeMappingException;\n-import org.elasticsearch.index.mapper.MergeResult;\n-import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.core.LongFieldMapper.CustomLongNumericField;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -223,7 +218,7 @@ public String toString(String s) {\n \n protected FormatDateTimeFormatter dateTimeFormatter = Defaults.DATE_TIME_FORMATTER;\n protected TimeUnit timeUnit = Defaults.TIME_UNIT;\n- protected DateMathParser dateMathParser = new DateMathParser(dateTimeFormatter, timeUnit);\n+ protected DateMathParser dateMathParser = new DateMathParser(dateTimeFormatter);\n \n public DateFieldType() {}\n \n@@ -245,7 +240,7 @@ public FormatDateTimeFormatter dateTimeFormatter() {\n public void setDateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) {\n checkIfFrozen();\n this.dateTimeFormatter = dateTimeFormatter;\n- this.dateMathParser = new DateMathParser(dateTimeFormatter, timeUnit);\n+ this.dateMathParser = new DateMathParser(dateTimeFormatter);\n }\n \n public TimeUnit timeUnit() {\n@@ -255,7 +250,7 @@ public TimeUnit timeUnit() {\n public void setTimeUnit(TimeUnit timeUnit) {\n checkIfFrozen();\n this.timeUnit = timeUnit;\n- this.dateMathParser = new DateMathParser(dateTimeFormatter, timeUnit);\n+ this.dateMathParser = new DateMathParser(dateTimeFormatter);\n }\n \n protected DateMathParser dateMathParser() {\n@@ -365,9 +360,6 @@ private Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includ\n }\n \n public long parseToMilliseconds(Object value, boolean inclusive, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n- if (value instanceof Number) {\n- return ((Number) value).longValue();\n- }\n DateMathParser dateParser = dateMathParser();\n if (forcedDateParser != null) {\n dateParser = forcedDateParser;\n@@ -434,25 +426,20 @@ protected boolean customBoost() {\n @Override\n protected void innerParseCreateField(ParseContext context, List<Field> fields) throws IOException {\n String dateAsString = null;\n- Long value = null;\n float boost = this.fieldType.boost();\n if (context.externalValueSet()) {\n Object externalValue = context.externalValue();\n- if (externalValue instanceof Number) {\n- value = ((Number) externalValue).longValue();\n- } else {\n- dateAsString = (String) externalValue;\n- if (dateAsString == null) {\n- dateAsString = nullValue;\n- }\n+ dateAsString = (String) externalValue;\n+ if (dateAsString == null) {\n+ dateAsString = nullValue;\n }\n } else {\n XContentParser parser = context.parser();\n XContentParser.Token token = parser.currentToken();\n if (token == XContentParser.Token.VALUE_NULL) {\n dateAsString = nullValue;\n } else if (token == XContentParser.Token.VALUE_NUMBER) {\n- value = parser.longValue(coerce.value());\n+ dateAsString = parser.text();\n } else if (token == XContentParser.Token.START_OBJECT) {\n String currentFieldName = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n@@ -462,8 +449,6 @@ 
protected void innerParseCreateField(ParseContext context, List<Field> fields) t\n if (\"value\".equals(currentFieldName) || \"_value\".equals(currentFieldName)) {\n if (token == XContentParser.Token.VALUE_NULL) {\n dateAsString = nullValue;\n- } else if (token == XContentParser.Token.VALUE_NUMBER) {\n- value = parser.longValue(coerce.value());\n } else {\n dateAsString = parser.text();\n }\n@@ -479,14 +464,12 @@ protected void innerParseCreateField(ParseContext context, List<Field> fields) t\n }\n }\n \n+ Long value = null;\n if (dateAsString != null) {\n- assert value == null;\n if (context.includeInAll(includeInAll, this)) {\n context.allEntries().addText(fieldType.names().fullName(), dateAsString, boost);\n }\n value = fieldType().parseStringValue(dateAsString);\n- } else if (value != null) {\n- value = ((DateFieldType)fieldType).timeUnit().toMillis(value);\n }\n \n if (value != null) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -58,7 +58,7 @@ public class TimestampFieldMapper extends DateFieldMapper implements RootMapper\n \n public static final String NAME = \"_timestamp\";\n public static final String CONTENT_TYPE = \"_timestamp\";\n- public static final String DEFAULT_DATE_TIME_FORMAT = \"dateOptionalTime\";\n+ public static final String DEFAULT_DATE_TIME_FORMAT = \"epoch_millis||dateOptionalTime\";\n \n public static class Defaults extends DateFieldMapper.Defaults {\n public static final String NAME = \"_timestamp\";",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -102,7 +102,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n } else if (\"time_zone\".equals(currentFieldName) || \"timeZone\".equals(currentFieldName)) {\n timeZone = DateTimeZone.forID(parser.text());\n } else if (\"format\".equals(currentFieldName)) {\n- forcedDateParser = new DateMathParser(Joda.forPattern(parser.text()), DateFieldMapper.Defaults.TIME_UNIT);\n+ forcedDateParser = new DateMathParser(Joda.forPattern(parser.text()));\n } else {\n throw new QueryParsingException(parseContext, \"[range] query does not support [\" + currentFieldName + \"]\");\n }\n@@ -123,11 +123,6 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n FieldMapper mapper = parseContext.fieldMapper(fieldName);\n if (mapper != null) {\n if (mapper instanceof DateFieldMapper) {\n- if ((from instanceof Number || to instanceof Number) && timeZone != null) {\n- throw new QueryParsingException(parseContext,\n- \"[range] time_zone when using ms since epoch format as it's UTC based can not be applied to [\" + fieldName\n- + \"]\");\n- }\n query = ((DateFieldMapper) mapper).fieldType().rangeQuery(from, to, includeLower, includeUpper, timeZone, forcedDateParser, parseContext);\n } else {\n if (timeZone != null) {",
"filename": "src/main/java/org/elasticsearch/index/query/RangeQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -68,7 +68,7 @@ public static class DateTime extends Patternable<DateTime> {\n public static final DateTime DEFAULT = new DateTime(DateFieldMapper.Defaults.DATE_TIME_FORMATTER.format(), ValueFormatter.DateTime.DEFAULT, ValueParser.DateMath.DEFAULT);\n \n public static DateTime format(String format) {\n- return new DateTime(format, new ValueFormatter.DateTime(format), new ValueParser.DateMath(format, DateFieldMapper.Defaults.TIME_UNIT));\n+ return new DateTime(format, new ValueFormatter.DateTime(format), new ValueParser.DateMath(format));\n }\n \n public static DateTime mapper(DateFieldMapper mapper) {",
"filename": "src/main/java/org/elasticsearch/search/aggregations/support/format/ValueFormat.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@\n import java.text.ParseException;\n import java.util.Locale;\n import java.util.concurrent.Callable;\n-import java.util.concurrent.TimeUnit;\n \n /**\n *\n@@ -81,12 +80,12 @@ public double parseDouble(String value, SearchContext searchContext) {\n */\n static class DateMath implements ValueParser {\n \n- public static final DateMath DEFAULT = new ValueParser.DateMath(new DateMathParser(DateFieldMapper.Defaults.DATE_TIME_FORMATTER, DateFieldMapper.Defaults.TIME_UNIT));\n+ public static final DateMath DEFAULT = new ValueParser.DateMath(new DateMathParser(DateFieldMapper.Defaults.DATE_TIME_FORMATTER));\n \n private DateMathParser parser;\n \n- public DateMath(String format, TimeUnit timeUnit) {\n- this(new DateMathParser(Joda.forPattern(format), timeUnit));\n+ public DateMath(String format) {\n+ this(new DateMathParser(Joda.forPattern(format)));\n }\n \n public DateMath(DateMathParser parser) {\n@@ -110,7 +109,7 @@ public double parseDouble(String value, SearchContext searchContext) {\n }\n \n public static DateMath mapper(DateFieldMapper mapper) {\n- return new DateMath(new DateMathParser(mapper.fieldType().dateTimeFormatter(), DateFieldMapper.Defaults.TIME_UNIT));\n+ return new DateMath(new DateMathParser(mapper.fieldType().dateTimeFormatter()));\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/support/format/ValueParser.java",
"status": "modified"
},
{
"diff": "@@ -23,16 +23,18 @@\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.joda.time.DateTimeZone;\n+import org.junit.Test;\n \n+import java.util.TimeZone;\n import java.util.concurrent.Callable;\n-import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n import static org.hamcrest.Matchers.equalTo;\n \n public class DateMathParserTests extends ElasticsearchTestCase {\n- FormatDateTimeFormatter formatter = Joda.forPattern(\"dateOptionalTime\");\n- DateMathParser parser = new DateMathParser(formatter, TimeUnit.MILLISECONDS);\n+\n+ FormatDateTimeFormatter formatter = Joda.forPattern(\"dateOptionalTime||epoch_millis\");\n+ DateMathParser parser = new DateMathParser(formatter);\n \n private static Callable<Long> callable(final long value) {\n return new Callable<Long>() {\n@@ -195,25 +197,22 @@ public void testRounding() {\n public void testTimestamps() {\n assertDateMathEquals(\"1418248078000\", \"2014-12-10T21:47:58.000\");\n \n- // timezone does not affect timestamps\n- assertDateMathEquals(\"1418248078000\", \"2014-12-10T21:47:58.000\", 0, false, DateTimeZone.forID(\"-08:00\"));\n-\n // datemath still works on timestamps\n assertDateMathEquals(\"1418248078000||/m\", \"2014-12-10T21:47:00.000\");\n \n // also check other time units\n- DateMathParser parser = new DateMathParser(Joda.forPattern(\"dateOptionalTime\"), TimeUnit.SECONDS);\n+ DateMathParser parser = new DateMathParser(Joda.forPattern(\"epoch_second||dateOptionalTime\"));\n long datetime = parser.parse(\"1418248078\", callable(0));\n assertDateEquals(datetime, \"1418248078\", \"2014-12-10T21:47:58.000\");\n \n // a timestamp before 10000 is a year\n assertDateMathEquals(\"9999\", \"9999-01-01T00:00:00.000\");\n- // 10000 is the first timestamp\n- assertDateMathEquals(\"10000\", \"1970-01-01T00:00:10.000\");\n+ // 10000 is also a year, breaking bwc, used to be a timestamp\n+ assertDateMathEquals(\"10000\", \"10000-01-01T00:00:00.000\");\n // but 10000 with T is still a date format\n assertDateMathEquals(\"10000T\", \"10000-01-01T00:00:00.000\");\n }\n- \n+\n void assertParseException(String msg, String date, String exc) {\n try {\n parser.parse(date, callable(0));\n@@ -232,7 +231,7 @@ public void testIllegalMathFormat() {\n }\n \n public void testIllegalDateFormat() {\n- assertParseException(\"Expected bad timestamp exception\", Long.toString(Long.MAX_VALUE) + \"0\", \"timestamp\");\n+ assertParseException(\"Expected bad timestamp exception\", Long.toString(Long.MAX_VALUE) + \"0\", \"failed to parse date field\");\n assertParseException(\"Expected bad date format exception\", \"123bogus\", \"with format\");\n }\n \n@@ -250,4 +249,10 @@ public Long call() throws Exception {\n parser.parse(\"now/d\", now, false, null);\n assertTrue(called.get());\n }\n+\n+ @Test(expected = ElasticsearchParseException.class)\n+ public void testThatUnixTimestampMayNotHaveTimeZone() {\n+ DateMathParser parser = new DateMathParser(Joda.forPattern(\"epoch_millis\"));\n+ parser.parse(\"1234567890123\", callable(42), false, DateTimeZone.forTimeZone(TimeZone.getTimeZone(\"CET\")));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/common/joda/DateMathParserTests.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.util.Constants;\n import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n@@ -39,6 +40,7 @@\n import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.hamcrest.Matchers.is;\n \n public class SimpleCountTests extends ElasticsearchIntegrationTest {\n \n@@ -177,4 +179,46 @@ public void localDependentDateTests() throws Exception {\n assertHitCount(countResponse, 20l);\n }\n }\n+\n+ @Test\n+ public void testThatNonEpochDatesCanBeSearch() throws Exception {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type1\",\n+ jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\").startObject(\"date_field\").field(\"type\", \"date\").field(\"format\", \"yyyyMMddHH\").endObject().endObject()\n+ .endObject().endObject()));\n+ ensureGreen(\"test\");\n+\n+ XContentBuilder document = jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"2015060210\")\n+ .endObject();\n+ assertThat(client().prepareIndex(\"test\", \"type1\").setSource(document).get().isCreated(), is(true));\n+\n+ document = jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"2014060210\")\n+ .endObject();\n+ assertThat(client().prepareIndex(\"test\", \"type1\").setSource(document).get().isCreated(), is(true));\n+\n+ // this is a timestamp in 2015 and should not be returned in counting when filtering by year\n+ document = jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"1433236702\")\n+ .endObject();\n+ assertThat(client().prepareIndex(\"test\", \"type1\").setSource(document).get().isCreated(), is(true));\n+\n+ refresh();\n+\n+ assertHitCount(client().prepareCount(\"test\").get(), 3);\n+\n+ CountResponse countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"date_field\").from(\"2015010100\").to(\"2015123123\")).get();\n+ assertHitCount(countResponse, 1);\n+\n+ countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"date_field\").from(2015010100).to(2015123123)).get();\n+ assertHitCount(countResponse, 1);\n+\n+ countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"date_field\").from(2015010100).to(2015123123).timeZone(\"UTC\")).get();\n+ assertHitCount(countResponse, 1);\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/count/simple/SimpleCountTests.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n import org.joda.time.MutableDateTime;\n import org.joda.time.format.*;\n@@ -248,6 +249,40 @@ public void testRoundingWithTimeZone() {\n assertThat(time.getMillis(), equalTo(utcTime.getMillis() - TimeValue.timeValueHours(22).millis()));\n }\n \n+ @Test\n+ public void testThatEpochsInSecondsCanBeParsed() {\n+ boolean parseMilliSeconds = randomBoolean();\n+\n+ // epoch: 1433144433655 => date: Mon Jun 1 09:40:33.655 CEST 2015\n+ FormatDateTimeFormatter formatter = Joda.forPattern(parseMilliSeconds ? \"epoch_millis\" : \"epoch_second\");\n+ DateTime dateTime = formatter.parser().parseDateTime(parseMilliSeconds ? \"1433144433655\" : \"1433144433\");\n+\n+ assertThat(dateTime.getYear(), is(2015));\n+ assertThat(dateTime.getDayOfMonth(), is(1));\n+ assertThat(dateTime.getMonthOfYear(), is(6));\n+ assertThat(dateTime.getHourOfDay(), is(7)); // utc timezone, +2 offset due to CEST\n+ assertThat(dateTime.getMinuteOfHour(), is(40));\n+ assertThat(dateTime.getSecondOfMinute(), is(33));\n+\n+ if (parseMilliSeconds) {\n+ assertThat(dateTime.getMillisOfSecond(), is(655));\n+ } else {\n+ assertThat(dateTime.getMillisOfSecond(), is(0));\n+ }\n+ }\n+\n+ @Test(expected = IllegalArgumentException.class)\n+ public void testForInvalidDatesInEpochSecond() {\n+ FormatDateTimeFormatter formatter = Joda.forPattern(\"epoch_second\");\n+ formatter.parser().parseDateTime(randomFrom(\"invalid date\", \"12345678901\", \"12345678901234\"));\n+ }\n+\n+ @Test(expected = IllegalArgumentException.class)\n+ public void testForInvalidDatesInEpochMillis() {\n+ FormatDateTimeFormatter formatter = Joda.forPattern(\"epoch_millis\");\n+ formatter.parser().parseDateTime(randomFrom(\"invalid date\", \"12345678901234\"));\n+ }\n+\n private long utcTimeInMillis(String time) {\n return ISODateTimeFormat.dateOptionalTimeParser().withZone(DateTimeZone.UTC).parseMillis(time);\n }",
"filename": "src/test/java/org/elasticsearch/deps/joda/SimpleJodaTests.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.search.NumericRangeQuery;\n import org.apache.lucene.util.Constants;\n+import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.LocaleUtils;\n@@ -33,13 +34,8 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.IndexService;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.mapper.MapperParsingException;\n-import org.elasticsearch.index.mapper.MergeResult;\n-import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n-import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n import org.elasticsearch.index.mapper.core.LongFieldMapper;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n@@ -51,21 +47,12 @@\n import org.junit.Before;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Arrays;\n-import java.util.List;\n-import java.util.Locale;\n-import java.util.Map;\n+import java.util.*;\n \n import static com.carrotsearch.randomizedtesting.RandomizedTest.systemPropertyAsBoolean;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.index.mapper.string.SimpleStringMappingTests.docValuesType;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.hasKey;\n-import static org.hamcrest.Matchers.instanceOf;\n-import static org.hamcrest.Matchers.is;\n-import static org.hamcrest.Matchers.notNullValue;\n-import static org.hamcrest.Matchers.nullValue;\n+import static org.hamcrest.Matchers.*;\n \n public class SimpleDateMappingTests extends ElasticsearchSingleNodeTest {\n \n@@ -439,4 +426,31 @@ public void testNumericResolution() throws Exception {\n .bytes());\n assertThat(getDateAsMillis(doc.rootDoc(), \"date_field\"), equalTo(44000L));\n }\n+\n+ public void testThatEpochCanBeIgnoredWithCustomFormat() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"date_field\").field(\"type\", \"date\").field(\"format\", \"yyyyMMddHH\").endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper defaultMapper = mapper(\"type\", mapping);\n+\n+ XContentBuilder document = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"2015060210\")\n+ .endObject();\n+ ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", document.bytes());\n+ assertThat(getDateAsMillis(doc.rootDoc(), \"date_field\"), equalTo(1433239200000L));\n+ IndexResponse indexResponse = client().prepareIndex(\"test\", \"test\").setSource(document).get();\n+ assertThat(indexResponse.isCreated(), is(true));\n+\n+ // integers should always be parsed as well... 
cannot be sure it is a unix timestamp only\n+ doc = defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", 2015060210)\n+ .endObject()\n+ .bytes());\n+ assertThat(getDateAsMillis(doc.rootDoc(), \"date_field\"), equalTo(1433239200000L));\n+ indexResponse = client().prepareIndex(\"test\", \"test\").setSource(document).get();\n+ assertThat(indexResponse.isCreated(), is(true));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/date/SimpleDateMappingTests.java",
"status": "modified"
},
{
"diff": "@@ -775,4 +775,18 @@ public void testIncludeInObjectBackcompat() throws Exception {\n assertEquals(MappingMetaData.Timestamp.parseStringTimestamp(\"1970\", Joda.forPattern(\"YYYY\")), request.timestamp());\n assertNull(docMapper.parse(\"type\", \"1\", doc.bytes()).rootDoc().get(\"_timestamp\"));\n }\n+\n+ public void testThatEpochCanBeIgnoredWithCustomFormat() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true).field(\"format\", \"yyyyMMddHH\").field(\"path\", \"custom_timestamp\").endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ XContentBuilder doc = XContentFactory.jsonBuilder().startObject().field(\"custom_timestamp\", 2015060210).endObject();\n+ IndexRequest request = new IndexRequest(\"test\", \"type\", \"1\").source(doc);\n+ MappingMetaData mappingMetaData = new MappingMetaData(docMapper);\n+ request.process(MetaData.builder().build(), mappingMetaData, true, \"test\");\n+\n+ assertThat(request.timestamp(), is(\"1433239200000\"));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java",
"status": "modified"
}
]
} |