issue dict | pr dict | pr_details dict |
---|---|---|
{
"body": "Today we rely on FS operations to find the indices on disk or to find the shards for an index. This is super error prone and requires String parsing. We should on each level `index` -> `shard` know exactly what to expect and don't use directory listings which are expensive and subject to change. There should be a metadata file on each level that is atomically written that we open and see what we have to expect no matter of what's on the FS.\n",
"comments": [
{
"body": "+1 to moving away from parsing things from disk\n",
"created_at": "2015-09-02T13:54:20Z"
}
],
"number": 13265,
"title": "Detecting exising indices and shards is broken"
}
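The issue above proposes trusting an atomically written per-level descriptor instead of parsing directory listings. Below is a minimal standalone sketch of that idea, assuming a hypothetical `state.properties` descriptor inside each index folder; the `IndexDescriptorScan` class and the properties file are illustrative stand-ins, not Elasticsearch's actual SMILE-encoded state files read via `MetaDataStateFormat`.

```java
// Sketch only: discover indices by reading a descriptor inside each folder rather than
// trusting the folder name. "state.properties" and IndexDescriptorScan are stand-ins.
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

class IndexDescriptorScan {

    /** Maps index UUID -> index name by reading a descriptor in every index folder. */
    static Map<String, String> scan(Path indicesDir) throws IOException {
        Map<String, String> indices = new HashMap<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesDir)) {
            for (Path indexFolder : stream) {
                Path descriptor = indexFolder.resolve("_state").resolve("state.properties");
                if (Files.isDirectory(indexFolder) == false || Files.exists(descriptor) == false) {
                    continue; // no descriptor -> not an index; never guess from the folder name
                }
                Properties props = new Properties();
                try (InputStream in = Files.newInputStream(descriptor)) {
                    props.load(in);
                }
                indices.put(props.getProperty("index.uuid"), props.getProperty("index.name"));
            }
        }
        return indices;
    }
}
```

The PR below goes one step further: the folder name itself becomes the index UUID, so the listing no longer needs any parsing at all.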
|
{
"body": "Following up https://github.com/elastic/elasticsearch/pull/16217 , This PR uses `${data.paths}/nodes/{node.id}/indices/{index.uuid}` \ninstead of `${data.paths}/nodes/{node.id}/indices/{index.name}` pattern to store index \nfolder on disk.\nThis way we avoid collision between indices that are named the same (deleted and recreated).\n\nCloses #13265\nCloses #13264\nCloses #14932\nCloses #15853\nCloses #14512\n",
"number": 16442,
"review_comments": [
{
"body": "can we always use the Index class to refer to an index? we should get in this habit so we always have both name and uuid available.\n",
"created_at": "2016-02-04T07:43:22Z"
},
{
"body": "this should really be `hasIndex(Index index)` no?\n",
"created_at": "2016-02-04T16:37:31Z"
},
{
"body": "I think we should add a `Index#getPathIdentifier()` to have a single place to calculate it?\n",
"created_at": "2016-02-04T16:38:21Z"
},
{
"body": "++\n",
"created_at": "2016-02-04T16:38:29Z"
},
{
"body": "I think we need to fix this too that we don't read the index name from the directory name. We have to read some descriptor which I think is available everywhere now if not it's not an index. Then when we have that we can just use Index.java as a key\n",
"created_at": "2016-02-04T16:40:16Z"
},
{
"body": "fixed\n",
"created_at": "2016-02-05T03:23:35Z"
},
{
"body": "this has been removed\n",
"created_at": "2016-02-05T03:23:39Z"
},
{
"body": "great idea, thanks for the suggestion!\n",
"created_at": "2016-02-05T03:23:42Z"
},
{
"body": "Now we load all the indices state files upfront and then add them to dangling indices if they are not present in the cluster state or are not identified as dangling already. Is this what you were getting at by \"read some descriptor\"? This behaviour is different from before where we only used to load the state of indices that were relevant. We also don't try to rename the index according to the folder name, like before.\n",
"created_at": "2016-02-05T03:32:13Z"
},
{
"body": "looking at the MetaDataStateFormat code, we should be able to read the format from the file - something stinks here :) . Also (different change) I think we should remove that setting and always write smile (and read what ever we find).\n",
"created_at": "2016-03-08T12:52:09Z"
},
{
"body": "why do we need all these file copies? can't we just do a top level folder rename? consolidating to one folder is a 2.0 feature?\n",
"created_at": "2016-03-08T13:04:44Z"
},
{
"body": "can we check the uuid and make sure it's the same and if not log a warning?\n",
"created_at": "2016-03-08T13:08:25Z"
},
{
"body": "this feels too lenient to me. At the very least we should add a parameter indicating whether it's OK to be lenient (dangling indices detection ) and be strict (throw an exception on node start up). \n",
"created_at": "2016-03-08T13:20:03Z"
},
{
"body": "I don't think this is possible any more? we added a filter on the path names?\n",
"created_at": "2016-03-08T13:24:03Z"
},
{
"body": "> looking at the MetaDataStateFormat code, we should be able to read the format from the file\n\nDo you mean that MetaDataStateFormat#loadLatestState reads the format from the file? I hardcoded the format to be `SMILE`\n\n> Also (different change) I think we should remove that setting and always write smile (and read what ever we find).\n\nI will open an issue for this.\n",
"created_at": "2016-03-09T14:30:59Z"
},
{
"body": "Simplified the upgrade by renaming the top level folder. Thanks for the suggestion.\n",
"created_at": "2016-03-09T14:31:43Z"
},
{
"body": "added\n",
"created_at": "2016-03-09T14:31:54Z"
},
{
"body": "Now we throw an ISE on startup when an invalid index folder name is found\n",
"created_at": "2016-03-09T14:33:48Z"
},
{
"body": "I think we can make this trace?\n",
"created_at": "2016-03-09T16:06:27Z"
},
{
"body": "can we add something that will explain why we're doing this of the unordained user? something ala upgrading indexing folder to new naming conventions\n",
"created_at": "2016-03-09T16:08:06Z"
},
{
"body": "can we add a comment into why need this check? (we already renamed it before)\n",
"created_at": "2016-03-09T16:11:33Z"
},
{
"body": "call it upgradeIndicesIfNeeded?\n",
"created_at": "2016-03-09T16:11:37Z"
},
{
"body": "this might also be a `FileNotFoundException`? ie `} catch (NoSuchFileException| | FileNotFoundException ignored) {`\n",
"created_at": "2016-03-13T13:25:33Z"
},
{
"body": "wow this is scary as shit I guess this means we are restarting multiple nodes at the same time in a full cluster restart. I think we can't do this neither support it on a shared FS. If we run into this situation we should fail and not swallow exceptions IMO\n",
"created_at": "2016-03-13T13:28:39Z"
},
{
"body": "lets document that in the migration guides\n",
"created_at": "2016-03-13T13:29:06Z"
},
{
"body": "@mikemccand this is unused and get removed in 4d38856f7017275e326df52a44b90662b2f3da6a - was this a mistake or did you just not remove this method?\n",
"created_at": "2016-03-13T13:36:38Z"
},
{
"body": "this should never happen right? we just got it from a dir list?\n",
"created_at": "2016-03-14T10:08:12Z"
},
{
"body": "can we log this in debug? we're going to log it on every node start.. \n",
"created_at": "2016-03-14T10:08:50Z"
},
{
"body": "wondering if this should be a warn... it means we have an unknow folder in our universe? \n",
"created_at": "2016-03-14T10:09:29Z"
},
{
"body": "can we name this readOnlyMetaDataMetaDataStateFormat\n",
"created_at": "2016-03-14T10:11:00Z"
}
],
"title": "Rename index folder to index_uuid"
}
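The review above settles on upgrading old layouts with a single top-level atomic rename rather than per-file copies. Below is a minimal sketch of that step, assuming the index UUID has already been read from the serialized index state (as the `IndexFolderUpgrader` added in the diffs below does); `IndexFolderRenameSketch` and `upgradeIfNeeded` are illustrative names, not the PR's API.

```java
// Sketch only: rename ${data.path}/nodes/{node.id}/indices/{index.name} to .../{index.uuid}
// with one atomic move, skipping folders that already use the new convention.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class IndexFolderRenameSketch {

    static void upgradeIfNeeded(Path indexFolder, String indexUuid) throws IOException {
        if (indexFolder.getFileName().toString().equals(indexUuid)) {
            return; // already upgraded
        }
        Path target = indexFolder.resolveSibling(indexUuid);
        // A top-level rename is enough; no per-file copies are required. The real upgrader
        // also fsyncs the target directory afterwards and treats a missing source as a
        // concurrent upgrade by another node (an error case on shared FS).
        Files.move(indexFolder, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
```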
|
{
"commits": [
{
"message": "Add upgrader to upgrade old indices to new naming convention"
},
{
"message": "remove redundant getters in MetaData"
},
{
"message": "use index uuid as folder name to decouple index folder name from index name"
},
{
"message": "adapt tests to use index uuid as folder name"
}
],
"files": [
{
"diff": "@@ -79,7 +79,7 @@ public ClusterStateHealth(MetaData clusterMetaData, RoutingTable routingTables)\n * @param clusterState The current cluster state. Must not be null.\n */\n public ClusterStateHealth(ClusterState clusterState) {\n- this(clusterState, clusterState.metaData().concreteAllIndices());\n+ this(clusterState, clusterState.metaData().getConcreteAllIndices());\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/cluster/health/ClusterStateHealth.java",
"status": "modified"
},
{
"diff": "@@ -432,7 +432,7 @@ private Map<String, Set<String>> resolveSearchRoutingAllIndices(MetaData metaDat\n if (routing != null) {\n Set<String> r = Strings.splitStringByCommaToSet(routing);\n Map<String, Set<String>> routings = new HashMap<>();\n- String[] concreteIndices = metaData.concreteAllIndices();\n+ String[] concreteIndices = metaData.getConcreteAllIndices();\n for (String index : concreteIndices) {\n routings.put(index, r);\n }\n@@ -472,7 +472,7 @@ static boolean isExplicitAllPattern(List<String> aliasesOrIndices) {\n */\n boolean isPatternMatchingAllIndices(MetaData metaData, String[] indicesOrAliases, String[] concreteIndices) {\n // if we end up matching on all indices, check, if its a wildcard parameter, or a \"-something\" structure\n- if (concreteIndices.length == metaData.concreteAllIndices().length && indicesOrAliases.length > 0) {\n+ if (concreteIndices.length == metaData.getConcreteAllIndices().length && indicesOrAliases.length > 0) {\n \n //we might have something like /-test1,+test1 that would identify all indices\n //or something like /-test1 with test1 index missing and IndicesOptions.lenient()\n@@ -728,11 +728,11 @@ private boolean isEmptyOrTrivialWildcard(List<String> expressions) {\n \n private List<String> resolveEmptyOrTrivialWildcard(IndicesOptions options, MetaData metaData, boolean assertEmpty) {\n if (options.expandWildcardsOpen() && options.expandWildcardsClosed()) {\n- return Arrays.asList(metaData.concreteAllIndices());\n+ return Arrays.asList(metaData.getConcreteAllIndices());\n } else if (options.expandWildcardsOpen()) {\n- return Arrays.asList(metaData.concreteAllOpenIndices());\n+ return Arrays.asList(metaData.getConcreteAllOpenIndices());\n } else if (options.expandWildcardsClosed()) {\n- return Arrays.asList(metaData.concreteAllClosedIndices());\n+ return Arrays.asList(metaData.getConcreteAllClosedIndices());\n } else {\n assert assertEmpty : \"Shouldn't end up here\";\n return Collections.emptyList();",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java",
"status": "modified"
},
{
"diff": "@@ -370,26 +370,14 @@ public ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> findM\n /**\n * Returns all the concrete indices.\n */\n- public String[] concreteAllIndices() {\n- return allIndices;\n- }\n-\n public String[] getConcreteAllIndices() {\n- return concreteAllIndices();\n- }\n-\n- public String[] concreteAllOpenIndices() {\n- return allOpenIndices;\n+ return allIndices;\n }\n \n public String[] getConcreteAllOpenIndices() {\n return allOpenIndices;\n }\n \n- public String[] concreteAllClosedIndices() {\n- return allClosedIndices;\n- }\n-\n public String[] getConcreteAllClosedIndices() {\n return allClosedIndices;\n }\n@@ -795,9 +783,9 @@ public static MetaData addDefaultUnitsIfNeeded(ESLogger logger, MetaData metaDat\n metaData.getIndices(),\n metaData.getTemplates(),\n metaData.getCustoms(),\n- metaData.concreteAllIndices(),\n- metaData.concreteAllOpenIndices(),\n- metaData.concreteAllClosedIndices(),\n+ metaData.getConcreteAllIndices(),\n+ metaData.getConcreteAllOpenIndices(),\n+ metaData.getConcreteAllClosedIndices(),\n metaData.getAliasAndIndexLookup());\n } else {\n // No changes:",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,154 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n+import org.elasticsearch.gateway.MetaStateService;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexSettings;\n+\n+import java.io.FileNotFoundException;\n+import java.io.IOException;\n+import java.nio.file.Files;\n+import java.nio.file.NoSuchFileException;\n+import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n+\n+/**\n+ * Renames index folders from {index.name} to {index.uuid}\n+ */\n+public class IndexFolderUpgrader {\n+ private final NodeEnvironment nodeEnv;\n+ private final Settings settings;\n+ private final ESLogger logger = Loggers.getLogger(IndexFolderUpgrader.class);\n+ private final MetaDataStateFormat<IndexMetaData> indexStateFormat = readOnlyIndexMetaDataStateFormat();\n+\n+ /**\n+ * Creates a new upgrader instance\n+ * @param settings node settings\n+ * @param nodeEnv the node env to operate on\n+ */\n+ IndexFolderUpgrader(Settings settings, NodeEnvironment nodeEnv) {\n+ this.settings = settings;\n+ this.nodeEnv = nodeEnv;\n+ }\n+\n+ /**\n+ * Moves the index folder found in <code>source</code> to <code>target</code>\n+ */\n+ void upgrade(final Index index, final Path source, final Path target) throws IOException {\n+ boolean success = false;\n+ try {\n+ Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);\n+ success = true;\n+ } catch (NoSuchFileException | FileNotFoundException exception) {\n+ // thrown when the source is non-existent because the folder was renamed\n+ // by another node (shared FS) after we checked if the target exists\n+ logger.error(\"multiple nodes trying to upgrade [{}] in parallel, retry upgrading with single node\",\n+ exception, target);\n+ throw exception;\n+ } finally {\n+ if (success) {\n+ logger.info(\"{} moved from [{}] to [{}]\", index, source, target);\n+ logger.trace(\"{} syncing directory [{}]\", index, target);\n+ IOUtils.fsync(target, true);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Renames <code>indexFolderName</code> index folders found in node paths and custom path\n+ * iff {@link #needsUpgrade(Index, String)} is true.\n+ * Index folder in custom paths are renamed first followed by index 
folders in each node path.\n+ */\n+ void upgrade(final String indexFolderName) throws IOException {\n+ for (NodeEnvironment.NodePath nodePath : nodeEnv.nodePaths()) {\n+ final Path indexFolderPath = nodePath.indicesPath.resolve(indexFolderName);\n+ final IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger, indexFolderPath);\n+ if (indexMetaData != null) {\n+ final Index index = indexMetaData.getIndex();\n+ if (needsUpgrade(index, indexFolderName)) {\n+ logger.info(\"{} upgrading [{}] to new naming convention\", index, indexFolderPath);\n+ final IndexSettings indexSettings = new IndexSettings(indexMetaData, settings);\n+ if (indexSettings.hasCustomDataPath()) {\n+ // we rename index folder in custom path before renaming them in any node path\n+ // to have the index state under a not-yet-upgraded index folder, which we use to\n+ // continue renaming after a incomplete upgrade.\n+ final Path customLocationSource = nodeEnv.resolveBaseCustomLocation(indexSettings)\n+ .resolve(indexFolderName);\n+ final Path customLocationTarget = customLocationSource.resolveSibling(index.getUUID());\n+ // we rename the folder in custom path only the first time we encounter a state\n+ // in a node path, which needs upgrading, it is a no-op for subsequent node paths\n+ if (Files.exists(customLocationSource) // might not exist if no data was written for this index\n+ && Files.exists(customLocationTarget) == false) {\n+ upgrade(index, customLocationSource, customLocationTarget);\n+ } else {\n+ logger.info(\"[{}] no upgrade needed - already upgraded\", customLocationTarget);\n+ }\n+ }\n+ upgrade(index, indexFolderPath, indexFolderPath.resolveSibling(index.getUUID()));\n+ } else {\n+ logger.debug(\"[{}] no upgrade needed - already upgraded\", indexFolderPath);\n+ }\n+ } else {\n+ logger.warn(\"[{}] no index state found - ignoring\", indexFolderPath);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Upgrades all indices found under <code>nodeEnv</code>. Already upgraded indices are ignored.\n+ */\n+ public static void upgradeIndicesIfNeeded(final Settings settings, final NodeEnvironment nodeEnv) throws IOException {\n+ final IndexFolderUpgrader upgrader = new IndexFolderUpgrader(settings, nodeEnv);\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ upgrader.upgrade(indexFolderName);\n+ }\n+ }\n+\n+ static boolean needsUpgrade(Index index, String indexFolderName) {\n+ return indexFolderName.equals(index.getUUID()) == false;\n+ }\n+\n+ static MetaDataStateFormat<IndexMetaData> readOnlyIndexMetaDataStateFormat() {\n+ // NOTE: XContentType param is not used as we use the format read from the serialized index state\n+ return new MetaDataStateFormat<IndexMetaData>(XContentType.SMILE, MetaStateService.INDEX_STATE_FILE_PREFIX) {\n+\n+ @Override\n+ public void toXContent(XContentBuilder builder, IndexMetaData state) throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public IndexMetaData fromXContent(XContentParser parser) throws IOException {\n+ return IndexMetaData.Builder.fromXContent(parser);\n+ }\n+ };\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/util/IndexFolderUpgrader.java",
"status": "added"
},
{
"diff": "@@ -70,7 +70,6 @@\n import java.util.concurrent.Semaphore;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n-import java.util.stream.Collectors;\n \n import static java.util.Collections.unmodifiableSet;\n \n@@ -89,7 +88,7 @@ public static class NodePath {\n * not running on Linux, or we hit an exception trying), True means the device possibly spins and False means it does not. */\n public final Boolean spins;\n \n- public NodePath(Path path, Environment environment) throws IOException {\n+ public NodePath(Path path) throws IOException {\n this.path = path;\n this.indicesPath = path.resolve(INDICES_FOLDER);\n this.fileStore = Environment.getFileStore(path);\n@@ -102,16 +101,18 @@ public NodePath(Path path, Environment environment) throws IOException {\n \n /**\n * Resolves the given shards directory against this NodePath\n+ * ${data.paths}/nodes/{node.id}/indices/{index.uuid}/{shard.id}\n */\n public Path resolve(ShardId shardId) {\n return resolve(shardId.getIndex()).resolve(Integer.toString(shardId.id()));\n }\n \n /**\n- * Resolves the given indexes directory against this NodePath\n+ * Resolves index directory against this NodePath\n+ * ${data.paths}/nodes/{node.id}/indices/{index.uuid}\n */\n public Path resolve(Index index) {\n- return indicesPath.resolve(index.getName());\n+ return indicesPath.resolve(index.getUUID());\n }\n \n @Override\n@@ -131,7 +132,7 @@ public String toString() {\n \n private final int localNodeId;\n private final AtomicBoolean closed = new AtomicBoolean(false);\n- private final Map<ShardLockKey, InternalShardLock> shardLocks = new HashMap<>();\n+ private final Map<ShardId, InternalShardLock> shardLocks = new HashMap<>();\n \n /**\n * Maximum number of data nodes that should run in an environment.\n@@ -186,7 +187,7 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n logger.trace(\"obtaining node lock on {} ...\", dir.toAbsolutePath());\n try {\n locks[dirIndex] = luceneDir.obtainLock(NODE_LOCK_FILENAME);\n- nodePaths[dirIndex] = new NodePath(dir, environment);\n+ nodePaths[dirIndex] = new NodePath(dir);\n localNodeId = possibleLockId;\n } catch (LockObtainFailedException ex) {\n logger.trace(\"failed to obtain node lock on {}\", dir.toAbsolutePath());\n@@ -445,11 +446,11 @@ public void deleteIndexDirectorySafe(Index index, long lockTimeoutMS, IndexSetti\n * @param indexSettings settings for the index being deleted\n */\n public void deleteIndexDirectoryUnderLock(Index index, IndexSettings indexSettings) throws IOException {\n- final Path[] indexPaths = indexPaths(index.getName());\n+ final Path[] indexPaths = indexPaths(index);\n logger.trace(\"deleting index {} directory, paths({}): [{}]\", index, indexPaths.length, indexPaths);\n IOUtils.rm(indexPaths);\n if (indexSettings.hasCustomDataPath()) {\n- Path customLocation = resolveCustomLocation(indexSettings, index.getName());\n+ Path customLocation = resolveIndexCustomLocation(indexSettings);\n logger.trace(\"deleting custom index {} directory [{}]\", index, customLocation);\n IOUtils.rm(customLocation);\n }\n@@ -517,17 +518,16 @@ public ShardLock shardLock(ShardId id) throws IOException {\n */\n public ShardLock shardLock(final ShardId shardId, long lockTimeoutMS) throws IOException {\n logger.trace(\"acquiring node shardlock on [{}], timeout [{}]\", shardId, lockTimeoutMS);\n- final ShardLockKey shardLockKey = new ShardLockKey(shardId);\n final InternalShardLock shardLock;\n final boolean acquired;\n synchronized (shardLocks) 
{\n- if (shardLocks.containsKey(shardLockKey)) {\n- shardLock = shardLocks.get(shardLockKey);\n+ if (shardLocks.containsKey(shardId)) {\n+ shardLock = shardLocks.get(shardId);\n shardLock.incWaitCount();\n acquired = false;\n } else {\n- shardLock = new InternalShardLock(shardLockKey);\n- shardLocks.put(shardLockKey, shardLock);\n+ shardLock = new InternalShardLock(shardId);\n+ shardLocks.put(shardId, shardLock);\n acquired = true;\n }\n }\n@@ -547,7 +547,7 @@ public ShardLock shardLock(final ShardId shardId, long lockTimeoutMS) throws IOE\n @Override\n protected void closeInternal() {\n shardLock.release();\n- logger.trace(\"released shard lock for [{}]\", shardLockKey);\n+ logger.trace(\"released shard lock for [{}]\", shardId);\n }\n };\n }\n@@ -559,51 +559,7 @@ protected void closeInternal() {\n */\n public Set<ShardId> lockedShards() {\n synchronized (shardLocks) {\n- Set<ShardId> lockedShards = shardLocks.keySet().stream()\n- .map(shardLockKey -> new ShardId(new Index(shardLockKey.indexName, \"_na_\"), shardLockKey.shardId)).collect(Collectors.toSet());\n- return unmodifiableSet(lockedShards);\n- }\n- }\n-\n- // a key for the shard lock. we can't use shardIds, because the contain\n- // the index uuid, but we want the lock semantics to the same as we map indices to disk folders, i.e., without the uuid (for now).\n- private final class ShardLockKey {\n- final String indexName;\n- final int shardId;\n-\n- public ShardLockKey(final ShardId shardId) {\n- this.indexName = shardId.getIndexName();\n- this.shardId = shardId.id();\n- }\n-\n- @Override\n- public String toString() {\n- return \"[\" + indexName + \"][\" + shardId + \"]\";\n- }\n-\n- @Override\n- public boolean equals(Object o) {\n- if (this == o) {\n- return true;\n- }\n- if (o == null || getClass() != o.getClass()) {\n- return false;\n- }\n-\n- ShardLockKey that = (ShardLockKey) o;\n-\n- if (shardId != that.shardId) {\n- return false;\n- }\n- return indexName.equals(that.indexName);\n-\n- }\n-\n- @Override\n- public int hashCode() {\n- int result = indexName.hashCode();\n- result = 31 * result + shardId;\n- return result;\n+ return unmodifiableSet(new HashSet<>(shardLocks.keySet()));\n }\n }\n \n@@ -616,10 +572,10 @@ private final class InternalShardLock {\n */\n private final Semaphore mutex = new Semaphore(1);\n private int waitCount = 1; // guarded by shardLocks\n- private final ShardLockKey lockKey;\n+ private final ShardId shardId;\n \n- InternalShardLock(ShardLockKey id) {\n- lockKey = id;\n+ InternalShardLock(ShardId shardId) {\n+ this.shardId = shardId;\n mutex.acquireUninterruptibly();\n }\n \n@@ -639,10 +595,10 @@ private void decWaitCount() {\n synchronized (shardLocks) {\n assert waitCount > 0 : \"waitCount is \" + waitCount + \" but should be > 0\";\n --waitCount;\n- logger.trace(\"shard lock wait count for [{}] is now [{}]\", lockKey, waitCount);\n+ logger.trace(\"shard lock wait count for {} is now [{}]\", shardId, waitCount);\n if (waitCount == 0) {\n- logger.trace(\"last shard lock wait decremented, removing lock for [{}]\", lockKey);\n- InternalShardLock remove = shardLocks.remove(lockKey);\n+ logger.trace(\"last shard lock wait decremented, removing lock for {}\", shardId);\n+ InternalShardLock remove = shardLocks.remove(shardId);\n assert remove != null : \"Removed lock was null\";\n }\n }\n@@ -651,11 +607,11 @@ private void decWaitCount() {\n void acquire(long timeoutInMillis) throws LockObtainFailedException{\n try {\n if (mutex.tryAcquire(timeoutInMillis, TimeUnit.MILLISECONDS) == false) {\n- throw new 
LockObtainFailedException(\"Can't lock shard \" + lockKey + \", timed out after \" + timeoutInMillis + \"ms\");\n+ throw new LockObtainFailedException(\"Can't lock shard \" + shardId + \", timed out after \" + timeoutInMillis + \"ms\");\n }\n } catch (InterruptedException e) {\n Thread.currentThread().interrupt();\n- throw new LockObtainFailedException(\"Can't lock shard \" + lockKey + \", interrupted\", e);\n+ throw new LockObtainFailedException(\"Can't lock shard \" + shardId + \", interrupted\", e);\n }\n }\n }\n@@ -698,11 +654,11 @@ public NodePath[] nodePaths() {\n /**\n * Returns all index paths.\n */\n- public Path[] indexPaths(String indexName) {\n+ public Path[] indexPaths(Index index) {\n assert assertEnvIsLocked();\n Path[] indexPaths = new Path[nodePaths.length];\n for (int i = 0; i < nodePaths.length; i++) {\n- indexPaths[i] = nodePaths[i].indicesPath.resolve(indexName);\n+ indexPaths[i] = nodePaths[i].resolve(index);\n }\n return indexPaths;\n }\n@@ -725,25 +681,47 @@ public Path[] availableShardPaths(ShardId shardId) {\n return shardLocations;\n }\n \n- public Set<String> findAllIndices() throws IOException {\n+ /**\n+ * Returns all folder names in ${data.paths}/nodes/{node.id}/indices folder\n+ */\n+ public Set<String> availableIndexFolders() throws IOException {\n if (nodePaths == null || locks == null) {\n throw new IllegalStateException(\"node is not configured to store local location\");\n }\n assert assertEnvIsLocked();\n- Set<String> indices = new HashSet<>();\n+ Set<String> indexFolders = new HashSet<>();\n for (NodePath nodePath : nodePaths) {\n Path indicesLocation = nodePath.indicesPath;\n if (Files.isDirectory(indicesLocation)) {\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesLocation)) {\n for (Path index : stream) {\n if (Files.isDirectory(index)) {\n- indices.add(index.getFileName().toString());\n+ indexFolders.add(index.getFileName().toString());\n }\n }\n }\n }\n }\n- return indices;\n+ return indexFolders;\n+\n+ }\n+\n+ /**\n+ * Resolves all existing paths to <code>indexFolderName</code> in ${data.paths}/nodes/{node.id}/indices\n+ */\n+ public Path[] resolveIndexFolder(String indexFolderName) throws IOException {\n+ if (nodePaths == null || locks == null) {\n+ throw new IllegalStateException(\"node is not configured to store local location\");\n+ }\n+ assert assertEnvIsLocked();\n+ List<Path> paths = new ArrayList<>(nodePaths.length);\n+ for (NodePath nodePath : nodePaths) {\n+ Path indexFolder = nodePath.indicesPath.resolve(indexFolderName);\n+ if (Files.exists(indexFolder)) {\n+ paths.add(indexFolder);\n+ }\n+ }\n+ return paths.toArray(new Path[paths.size()]);\n }\n \n /**\n@@ -761,13 +739,13 @@ public Set<ShardId> findAllShardIds(final Index index) throws IOException {\n }\n assert assertEnvIsLocked();\n final Set<ShardId> shardIds = new HashSet<>();\n- String indexName = index.getName();\n+ final String indexUniquePathId = index.getUUID();\n for (final NodePath nodePath : nodePaths) {\n Path location = nodePath.indicesPath;\n if (Files.isDirectory(location)) {\n try (DirectoryStream<Path> indexStream = Files.newDirectoryStream(location)) {\n for (Path indexPath : indexStream) {\n- if (indexName.equals(indexPath.getFileName().toString())) {\n+ if (indexUniquePathId.equals(indexPath.getFileName().toString())) {\n shardIds.addAll(findAllShardsForIndex(indexPath, index));\n }\n }\n@@ -778,7 +756,7 @@ public Set<ShardId> findAllShardIds(final Index index) throws IOException {\n }\n \n private static Set<ShardId> 
findAllShardsForIndex(Path indexPath, Index index) throws IOException {\n- assert indexPath.getFileName().toString().equals(index.getName());\n+ assert indexPath.getFileName().toString().equals(index.getUUID());\n Set<ShardId> shardIds = new HashSet<>();\n if (Files.isDirectory(indexPath)) {\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {\n@@ -861,7 +839,7 @@ Settings getSettings() { // for testing\n *\n * @param indexSettings settings for the index\n */\n- private Path resolveCustomLocation(IndexSettings indexSettings) {\n+ public Path resolveBaseCustomLocation(IndexSettings indexSettings) {\n String customDataDir = indexSettings.customDataPath();\n if (customDataDir != null) {\n // This assert is because this should be caught by MetaDataCreateIndexService\n@@ -882,10 +860,9 @@ private Path resolveCustomLocation(IndexSettings indexSettings) {\n * the root path for the index.\n *\n * @param indexSettings settings for the index\n- * @param indexName index to resolve the path for\n */\n- private Path resolveCustomLocation(IndexSettings indexSettings, final String indexName) {\n- return resolveCustomLocation(indexSettings).resolve(indexName);\n+ private Path resolveIndexCustomLocation(IndexSettings indexSettings) {\n+ return resolveBaseCustomLocation(indexSettings).resolve(indexSettings.getUUID());\n }\n \n /**\n@@ -897,7 +874,7 @@ private Path resolveCustomLocation(IndexSettings indexSettings, final String ind\n * @param shardId shard to resolve the path to\n */\n public Path resolveCustomLocation(IndexSettings indexSettings, final ShardId shardId) {\n- return resolveCustomLocation(indexSettings, shardId.getIndexName()).resolve(Integer.toString(shardId.id()));\n+ return resolveIndexCustomLocation(indexSettings).resolve(Integer.toString(shardId.id()));\n }\n \n /**\n@@ -921,22 +898,24 @@ private void assertCanWrite() throws IOException {\n for (Path path : nodeDataPaths()) { // check node-paths are writable\n tryWriteTempFile(path);\n }\n- for (String index : this.findAllIndices()) {\n- for (Path path : this.indexPaths(index)) { // check index paths are writable\n- Path statePath = path.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n- tryWriteTempFile(statePath);\n- tryWriteTempFile(path);\n- }\n- for (ShardId shardID : this.findAllShardIds(new Index(index, IndexMetaData.INDEX_UUID_NA_VALUE))) {\n- Path[] paths = this.availableShardPaths(shardID);\n- for (Path path : paths) { // check shard paths are writable\n- Path indexDir = path.resolve(ShardPath.INDEX_FOLDER_NAME);\n- Path statePath = path.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n- Path translogDir = path.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n- tryWriteTempFile(indexDir);\n- tryWriteTempFile(translogDir);\n- tryWriteTempFile(statePath);\n- tryWriteTempFile(path);\n+ for (String indexFolderName : this.availableIndexFolders()) {\n+ for (Path indexPath : this.resolveIndexFolder(indexFolderName)) { // check index paths are writable\n+ Path indexStatePath = indexPath.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ tryWriteTempFile(indexStatePath);\n+ tryWriteTempFile(indexPath);\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {\n+ for (Path shardPath : stream) {\n+ String fileName = shardPath.getFileName().toString();\n+ if (Files.isDirectory(shardPath) && fileName.chars().allMatch(Character::isDigit)) {\n+ Path indexDir = shardPath.resolve(ShardPath.INDEX_FOLDER_NAME);\n+ Path statePath = shardPath.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ Path translogDir = 
shardPath.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n+ tryWriteTempFile(indexDir);\n+ tryWriteTempFile(translogDir);\n+ tryWriteTempFile(statePath);\n+ tryWriteTempFile(shardPath);\n+ }\n+ }\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -19,19 +19,25 @@\n \n package org.elasticsearch.gateway;\n \n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n \n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.stream.Collectors;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.unmodifiableMap;\n@@ -47,7 +53,7 @@ public class DanglingIndicesState extends AbstractComponent {\n private final MetaStateService metaStateService;\n private final LocalAllocateDangledIndices allocateDangledIndices;\n \n- private final Map<String, IndexMetaData> danglingIndices = ConcurrentCollections.newConcurrentMap();\n+ private final Map<Index, IndexMetaData> danglingIndices = ConcurrentCollections.newConcurrentMap();\n \n @Inject\n public DanglingIndicesState(Settings settings, NodeEnvironment nodeEnv, MetaStateService metaStateService,\n@@ -74,7 +80,7 @@ public void processDanglingIndices(MetaData metaData) {\n /**\n * The current set of dangling indices.\n */\n- Map<String, IndexMetaData> getDanglingIndices() {\n+ Map<Index, IndexMetaData> getDanglingIndices() {\n // This might be a good use case for CopyOnWriteHashMap\n return unmodifiableMap(new HashMap<>(danglingIndices));\n }\n@@ -83,10 +89,16 @@ Map<String, IndexMetaData> getDanglingIndices() {\n * Cleans dangling indices if they are already allocated on the provided meta data.\n */\n void cleanupAllocatedDangledIndices(MetaData metaData) {\n- for (String danglingIndex : danglingIndices.keySet()) {\n- if (metaData.hasIndex(danglingIndex)) {\n- logger.debug(\"[{}] no longer dangling (created), removing from dangling list\", danglingIndex);\n- danglingIndices.remove(danglingIndex);\n+ for (Index index : danglingIndices.keySet()) {\n+ final IndexMetaData indexMetaData = metaData.index(index);\n+ if (indexMetaData != null && indexMetaData.getIndex().getName().equals(index.getName())) {\n+ if (indexMetaData.getIndex().getUUID().equals(index.getUUID()) == false) {\n+ logger.warn(\"[{}] can not be imported as a dangling index, as there is already another index \" +\n+ \"with the same name but a different uuid. 
local index will be ignored (but not deleted)\", index);\n+ } else {\n+ logger.debug(\"[{}] no longer dangling (created), removing from dangling list\", index);\n+ }\n+ danglingIndices.remove(index);\n }\n }\n }\n@@ -104,36 +116,30 @@ void findNewAndAddDanglingIndices(MetaData metaData) {\n * that have state on disk, but are not part of the provided meta data, or not detected\n * as dangled already.\n */\n- Map<String, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n- final Set<String> indices;\n- try {\n- indices = nodeEnv.findAllIndices();\n- } catch (Throwable e) {\n- logger.warn(\"failed to list dangling indices\", e);\n- return emptyMap();\n+ Map<Index, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n+ final Set<String> excludeIndexPathIds = new HashSet<>(metaData.indices().size() + danglingIndices.size());\n+ for (ObjectCursor<IndexMetaData> cursor : metaData.indices().values()) {\n+ excludeIndexPathIds.add(cursor.value.getIndex().getUUID());\n }\n-\n- Map<String, IndexMetaData> newIndices = new HashMap<>();\n- for (String indexName : indices) {\n- if (metaData.hasIndex(indexName) == false && danglingIndices.containsKey(indexName) == false) {\n- try {\n- IndexMetaData indexMetaData = metaStateService.loadIndexState(indexName);\n- if (indexMetaData != null) {\n- logger.info(\"[{}] dangling index, exists on local file system, but not in cluster metadata, auto import to cluster state\", indexName);\n- if (!indexMetaData.getIndex().getName().equals(indexName)) {\n- logger.info(\"dangled index directory name is [{}], state name is [{}], renaming to directory name\", indexName, indexMetaData.getIndex());\n- indexMetaData = IndexMetaData.builder(indexMetaData).index(indexName).build();\n- }\n- newIndices.put(indexName, indexMetaData);\n- } else {\n- logger.debug(\"[{}] dangling index directory detected, but no state found\", indexName);\n- }\n- } catch (Throwable t) {\n- logger.warn(\"[{}] failed to load index state for detected dangled index\", t, indexName);\n+ excludeIndexPathIds.addAll(danglingIndices.keySet().stream().map(Index::getUUID).collect(Collectors.toList()));\n+ try {\n+ final List<IndexMetaData> indexMetaDataList = metaStateService.loadIndicesStates(excludeIndexPathIds::contains);\n+ Map<Index, IndexMetaData> newIndices = new HashMap<>(indexMetaDataList.size());\n+ for (IndexMetaData indexMetaData : indexMetaDataList) {\n+ if (metaData.hasIndex(indexMetaData.getIndex().getName())) {\n+ logger.warn(\"[{}] can not be imported as a dangling index, as index with same name already exists in cluster metadata\",\n+ indexMetaData.getIndex());\n+ } else {\n+ logger.info(\"[{}] dangling index, exists on local file system, but not in cluster metadata, auto import to cluster state\",\n+ indexMetaData.getIndex());\n+ newIndices.put(indexMetaData.getIndex(), indexMetaData);\n }\n }\n+ return newIndices;\n+ } catch (IOException e) {\n+ logger.warn(\"failed to list dangling indices\", e);\n+ return emptyMap();\n }\n- return newIndices;\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.IndexFolderUpgrader;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.Index;\n \n@@ -86,6 +87,7 @@ public GatewayMetaState(Settings settings, NodeEnvironment nodeEnv, MetaStateSer\n try {\n ensureNoPre019State();\n pre20Upgrade();\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(settings, nodeEnv);\n long startNS = System.nanoTime();\n metaStateService.loadFullState();\n logger.debug(\"took {} to load state\", TimeValue.timeValueMillis(TimeValue.nsecToMSec(System.nanoTime() - startNS)));\n@@ -130,7 +132,7 @@ public void clusterChanged(ClusterChangedEvent event) {\n for (IndexMetaData indexMetaData : newMetaData) {\n IndexMetaData indexMetaDataOnDisk = null;\n if (indexMetaData.getState().equals(IndexMetaData.State.CLOSE)) {\n- indexMetaDataOnDisk = metaStateService.loadIndexState(indexMetaData.getIndex().getName());\n+ indexMetaDataOnDisk = metaStateService.loadIndexState(indexMetaData.getIndex());\n }\n if (indexMetaDataOnDisk != null) {\n newPreviouslyWrittenIndices.add(indexMetaDataOnDisk.getIndex());\n@@ -158,15 +160,14 @@ public void clusterChanged(ClusterChangedEvent event) {\n // check and write changes in indices\n for (IndexMetaWriteInfo indexMetaWrite : writeInfo) {\n try {\n- metaStateService.writeIndex(indexMetaWrite.reason, indexMetaWrite.newMetaData, indexMetaWrite.previousMetaData);\n+ metaStateService.writeIndex(indexMetaWrite.reason, indexMetaWrite.newMetaData);\n } catch (Throwable e) {\n success = false;\n }\n }\n }\n \n danglingIndicesState.processDanglingIndices(newMetaData);\n-\n if (success) {\n previousMetaData = newMetaData;\n previouslyWrittenIndices = unmodifiableSet(relevantIndices);\n@@ -233,7 +234,8 @@ private void pre20Upgrade() throws Exception {\n // We successfully checked all indices for backward compatibility and found no non-upgradable indices, which\n // means the upgrade can continue. Now it's safe to overwrite index metadata with the new version.\n for (IndexMetaData indexMetaData : updateIndexMetaData) {\n- metaStateService.writeIndex(\"upgrade\", indexMetaData, null);\n+ // since we still haven't upgraded the index folders, we write index state in the old folder\n+ metaStateService.writeIndex(\"upgrade\", indexMetaData, nodeEnv.resolveIndexFolder(indexMetaData.getIndex().getName()));\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java",
"status": "modified"
},
{
"diff": "@@ -33,9 +33,12 @@\n import org.elasticsearch.index.Index;\n \n import java.io.IOException;\n+import java.nio.file.Path;\n+import java.util.ArrayList;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n-import java.util.Set;\n+import java.util.function.Predicate;\n \n /**\n * Handles writing and loading both {@link MetaData} and {@link IndexMetaData}\n@@ -45,7 +48,7 @@ public class MetaStateService extends AbstractComponent {\n static final String FORMAT_SETTING = \"gateway.format\";\n \n static final String GLOBAL_STATE_FILE_PREFIX = \"global-\";\n- private static final String INDEX_STATE_FILE_PREFIX = \"state-\";\n+ public static final String INDEX_STATE_FILE_PREFIX = \"state-\";\n \n private final NodeEnvironment nodeEnv;\n \n@@ -91,14 +94,12 @@ MetaData loadFullState() throws Exception {\n } else {\n metaDataBuilder = MetaData.builder();\n }\n-\n- final Set<String> indices = nodeEnv.findAllIndices();\n- for (String index : indices) {\n- IndexMetaData indexMetaData = loadIndexState(index);\n- if (indexMetaData == null) {\n- logger.debug(\"[{}] failed to find metadata for existing index location\", index);\n- } else {\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger, nodeEnv.resolveIndexFolder(indexFolderName));\n+ if (indexMetaData != null) {\n metaDataBuilder.put(indexMetaData, false);\n+ } else {\n+ logger.debug(\"[{}] failed to find metadata for existing index location\", indexFolderName);\n }\n }\n return metaDataBuilder.build();\n@@ -108,10 +109,35 @@ MetaData loadFullState() throws Exception {\n * Loads the index state for the provided index name, returning null if doesn't exists.\n */\n @Nullable\n- IndexMetaData loadIndexState(String index) throws IOException {\n+ IndexMetaData loadIndexState(Index index) throws IOException {\n return indexStateFormat.loadLatestState(logger, nodeEnv.indexPaths(index));\n }\n \n+ /**\n+ * Loads all indices states available on disk\n+ */\n+ List<IndexMetaData> loadIndicesStates(Predicate<String> excludeIndexPathIdsPredicate) throws IOException {\n+ List<IndexMetaData> indexMetaDataList = new ArrayList<>();\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ if (excludeIndexPathIdsPredicate.test(indexFolderName)) {\n+ continue;\n+ }\n+ IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger,\n+ nodeEnv.resolveIndexFolder(indexFolderName));\n+ if (indexMetaData != null) {\n+ final String indexPathId = indexMetaData.getIndex().getUUID();\n+ if (indexFolderName.equals(indexPathId)) {\n+ indexMetaDataList.add(indexMetaData);\n+ } else {\n+ throw new IllegalStateException(\"[\" + indexFolderName+ \"] invalid index folder name, rename to [\" + indexPathId + \"]\");\n+ }\n+ } else {\n+ logger.debug(\"[{}] failed to find metadata for existing index location\", indexFolderName);\n+ }\n+ }\n+ return indexMetaDataList;\n+ }\n+\n /**\n * Loads the global state, *without* index state, see {@link #loadFullState()} for that.\n */\n@@ -129,13 +155,22 @@ MetaData loadGlobalState() throws IOException {\n /**\n * Writes the index state.\n */\n- void writeIndex(String reason, IndexMetaData indexMetaData, @Nullable IndexMetaData previousIndexMetaData) throws Exception {\n- logger.trace(\"[{}] writing state, reason [{}]\", indexMetaData.getIndex(), reason);\n+ void writeIndex(String reason, IndexMetaData indexMetaData) throws IOException {\n+ writeIndex(reason, indexMetaData, 
nodeEnv.indexPaths(indexMetaData.getIndex()));\n+ }\n+\n+ /**\n+ * Writes the index state in <code>locations</code>, use {@link #writeGlobalState(String, MetaData)}\n+ * to write index state in index paths\n+ */\n+ void writeIndex(String reason, IndexMetaData indexMetaData, Path[] locations) throws IOException {\n+ final Index index = indexMetaData.getIndex();\n+ logger.trace(\"[{}] writing state, reason [{}]\", index, reason);\n try {\n- indexStateFormat.write(indexMetaData, indexMetaData.getVersion(), nodeEnv.indexPaths(indexMetaData.getIndex().getName()));\n+ indexStateFormat.write(indexMetaData, indexMetaData.getVersion(), locations);\n } catch (Throwable ex) {\n- logger.warn(\"[{}]: failed to write index state\", ex, indexMetaData.getIndex());\n- throw new IOException(\"failed to write state for [\" + indexMetaData.getIndex() + \"]\", ex);\n+ logger.warn(\"[{}]: failed to write index state\", ex, index);\n+ throw new IOException(\"failed to write state for [\" + index + \"]\", ex);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/gateway/MetaStateService.java",
"status": "modified"
},
{
"diff": "@@ -29,30 +29,27 @@\n import java.nio.file.FileStore;\n import java.nio.file.Files;\n import java.nio.file.Path;\n-import java.util.HashMap;\n import java.util.Map;\n \n public final class ShardPath {\n public static final String INDEX_FOLDER_NAME = \"index\";\n public static final String TRANSLOG_FOLDER_NAME = \"translog\";\n \n private final Path path;\n- private final String indexUUID;\n private final ShardId shardId;\n private final Path shardStatePath;\n private final boolean isCustomDataPath;\n \n- public ShardPath(boolean isCustomDataPath, Path dataPath, Path shardStatePath, String indexUUID, ShardId shardId) {\n+ public ShardPath(boolean isCustomDataPath, Path dataPath, Path shardStatePath, ShardId shardId) {\n assert dataPath.getFileName().toString().equals(Integer.toString(shardId.id())) : \"dataPath must end with the shard ID but didn't: \" + dataPath.toString();\n assert shardStatePath.getFileName().toString().equals(Integer.toString(shardId.id())) : \"shardStatePath must end with the shard ID but didn't: \" + dataPath.toString();\n- assert dataPath.getParent().getFileName().toString().equals(shardId.getIndexName()) : \"dataPath must end with index/shardID but didn't: \" + dataPath.toString();\n- assert shardStatePath.getParent().getFileName().toString().equals(shardId.getIndexName()) : \"shardStatePath must end with index/shardID but didn't: \" + dataPath.toString();\n+ assert dataPath.getParent().getFileName().toString().equals(shardId.getIndex().getUUID()) : \"dataPath must end with index path id but didn't: \" + dataPath.toString();\n+ assert shardStatePath.getParent().getFileName().toString().equals(shardId.getIndex().getUUID()) : \"shardStatePath must end with index path id but didn't: \" + dataPath.toString();\n if (isCustomDataPath && dataPath.equals(shardStatePath)) {\n throw new IllegalArgumentException(\"shard state path must be different to the data path when using custom data paths\");\n }\n this.isCustomDataPath = isCustomDataPath;\n this.path = dataPath;\n- this.indexUUID = indexUUID;\n this.shardId = shardId;\n this.shardStatePath = shardStatePath;\n }\n@@ -73,10 +70,6 @@ public boolean exists() {\n return Files.exists(path);\n }\n \n- public String getIndexUUID() {\n- return indexUUID;\n- }\n-\n public ShardId getShardId() {\n return shardId;\n }\n@@ -144,7 +137,7 @@ public static ShardPath loadShardPath(ESLogger logger, NodeEnvironment env, Shar\n dataPath = statePath;\n }\n logger.debug(\"{} loaded data path [{}], state path [{}]\", shardId, dataPath, statePath);\n- return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, indexUUID, shardId);\n+ return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, shardId);\n }\n }\n \n@@ -168,34 +161,6 @@ public static void deleteLeftoverShardDirectory(ESLogger logger, NodeEnvironment\n }\n }\n \n- /** Maps each path.data path to a \"guess\" of how many bytes the shards allocated to that path might additionally use over their\n- * lifetime; we do this so a bunch of newly allocated shards won't just all go the path with the most free space at this moment. 
*/\n- private static Map<Path,Long> getEstimatedReservedBytes(NodeEnvironment env, long avgShardSizeInBytes, Iterable<IndexShard> shards) throws IOException {\n- long totFreeSpace = 0;\n- for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n- totFreeSpace += nodePath.fileStore.getUsableSpace();\n- }\n-\n- // Very rough heuristic of how much disk space we expect the shard will use over its lifetime, the max of current average\n- // shard size across the cluster and 5% of the total available free space on this node:\n- long estShardSizeInBytes = Math.max(avgShardSizeInBytes, (long) (totFreeSpace/20.0));\n-\n- // Collate predicted (guessed!) disk usage on each path.data:\n- Map<Path,Long> reservedBytes = new HashMap<>();\n- for (IndexShard shard : shards) {\n- Path dataPath = NodeEnvironment.shardStatePathToDataPath(shard.shardPath().getShardStatePath());\n-\n- // Remove indices/<index>/<shardID> subdirs from the statePath to get back to the path.data/<lockID>:\n- Long curBytes = reservedBytes.get(dataPath);\n- if (curBytes == null) {\n- curBytes = 0L;\n- }\n- reservedBytes.put(dataPath, curBytes + estShardSizeInBytes);\n- } \n-\n- return reservedBytes;\n- }\n-\n public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shardId, IndexSettings indexSettings,\n long avgShardSizeInBytes, Map<Path,Integer> dataPathToShardCount) throws IOException {\n \n@@ -206,7 +171,6 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n dataPath = env.resolveCustomLocation(indexSettings, shardId);\n statePath = env.nodePaths()[0].resolve(shardId);\n } else {\n-\n long totFreeSpace = 0;\n for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n totFreeSpace += nodePath.fileStore.getUsableSpace();\n@@ -241,9 +205,7 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n statePath = bestPath.resolve(shardId);\n dataPath = statePath;\n }\n-\n- final String indexUUID = indexSettings.getUUID();\n- return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, indexUUID, shardId);\n+ return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, shardId);\n }\n \n @Override\n@@ -258,9 +220,6 @@ public boolean equals(Object o) {\n if (shardId != null ? !shardId.equals(shardPath.shardId) : shardPath.shardId != null) {\n return false;\n }\n- if (indexUUID != null ? !indexUUID.equals(shardPath.indexUUID) : shardPath.indexUUID != null) {\n- return false;\n- }\n if (path != null ? !path.equals(shardPath.path) : shardPath.path != null) {\n return false;\n }\n@@ -271,7 +230,6 @@ public boolean equals(Object o) {\n @Override\n public int hashCode() {\n int result = path != null ? path.hashCode() : 0;\n- result = 31 * result + (indexUUID != null ? indexUUID.hashCode() : 0);\n result = 31 * result + (shardId != null ? shardId.hashCode() : 0);\n return result;\n }\n@@ -280,7 +238,6 @@ public int hashCode() {\n public String toString() {\n return \"ShardPath{\" +\n \"path=\" + path +\n- \", indexUUID='\" + indexUUID + '\\'' +\n \", shard=\" + shardId +\n '}';\n }",
"filename": "core/src/main/java/org/elasticsearch/index/shard/ShardPath.java",
"status": "modified"
},
{
"diff": "@@ -531,7 +531,7 @@ private void deleteIndexStore(String reason, Index index, IndexSettings indexSet\n }\n // this is a pure protection to make sure this index doesn't get re-imported as a dangling index.\n // we should in the future rather write a tombstone rather than wiping the metadata.\n- MetaDataStateFormat.deleteMetaState(nodeEnv.indexPaths(index.getName()));\n+ MetaDataStateFormat.deleteMetaState(nodeEnv.indexPaths(index));\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.IndexFolderUpgrader;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -105,6 +106,8 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n List<String> indexes;\n List<String> unsupportedIndexes;\n+ static String singleDataPathNodeName;\n+ static String multiDataPathNodeName;\n static Path singleDataPath;\n static Path[] multiDataPath;\n \n@@ -127,6 +130,8 @@ private List<String> loadIndexesList(String prefix) throws IOException {\n \n @AfterClass\n public static void tearDownStatics() {\n+ singleDataPathNodeName = null;\n+ multiDataPathNodeName = null;\n singleDataPath = null;\n multiDataPath = null;\n }\n@@ -157,15 +162,17 @@ void setupCluster() throws Exception {\n InternalTestCluster.Async<String> multiDataPathNode = internalCluster().startNodeAsync(nodeSettings.build());\n \n // find single data path dir\n- Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNode.get()).nodeDataPaths();\n+ singleDataPathNodeName = singleDataPathNode.get();\n+ Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNodeName).nodeDataPaths();\n assertEquals(1, nodePaths.length);\n singleDataPath = nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER);\n assertFalse(Files.exists(singleDataPath));\n Files.createDirectories(singleDataPath);\n logger.info(\"--> Single data path: {}\", singleDataPath);\n \n // find multi data path dirs\n- nodePaths = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNode.get()).nodeDataPaths();\n+ multiDataPathNodeName = multiDataPathNode.get();\n+ nodePaths = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNodeName).nodeDataPaths();\n assertEquals(2, nodePaths.length);\n multiDataPath = new Path[] {nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER),\n nodePaths[1].resolve(NodeEnvironment.INDICES_FOLDER)};\n@@ -178,6 +185,13 @@ void setupCluster() throws Exception {\n replicas.get(); // wait for replicas\n }\n \n+ void upgradeIndexFolder() throws Exception {\n+ final NodeEnvironment nodeEnvironment = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNodeName);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnvironment);\n+ final NodeEnvironment nodeEnv = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNodeName);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnv);\n+ }\n+\n String loadIndex(String indexFile) throws Exception {\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -296,6 +310,10 @@ public void testOldIndexes() throws Exception {\n void assertOldIndexWorks(String index) throws Exception {\n Version version = extractVersion(index);\n String indexName = loadIndex(index);\n+ // we explicitly upgrade the index folders as these indices\n+ // are imported as dangling indices and not available on\n+ // node startup\n+ upgradeIndexFolder();\n importIndex(indexName);\n assertIndexSanity(indexName, version);\n assertBasicSearchWorks(indexName);",
"filename": "core/src/test/java/org/elasticsearch/bwcompat/OldIndexBackwardsCompatibilityIT.java",
"status": "modified"
},
{
"diff": "@@ -92,22 +92,22 @@ public void testRandomDiskUsage() {\n }\n \n public void testFillShardLevelInfo() {\n- final Index index = new Index(\"test\", \"_na_\");\n+ final Index index = new Index(\"test\", \"0xdeadbeef\");\n ShardRouting test_0 = ShardRouting.newUnassigned(index, 0, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n ShardRoutingHelper.initialize(test_0, \"node1\");\n ShardRoutingHelper.moveToStarted(test_0);\n- Path test0Path = createTempDir().resolve(\"indices\").resolve(\"test\").resolve(\"0\");\n+ Path test0Path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(\"0\");\n CommonStats commonStats0 = new CommonStats();\n commonStats0.store = new StoreStats(100, 1);\n ShardRouting test_1 = ShardRouting.newUnassigned(index, 1, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n ShardRoutingHelper.initialize(test_1, \"node2\");\n ShardRoutingHelper.moveToStarted(test_1);\n- Path test1Path = createTempDir().resolve(\"indices\").resolve(\"test\").resolve(\"1\");\n+ Path test1Path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(\"1\");\n CommonStats commonStats1 = new CommonStats();\n commonStats1.store = new StoreStats(1000, 1);\n ShardStats[] stats = new ShardStats[] {\n- new ShardStats(test_0, new ShardPath(false, test0Path, test0Path, \"0xdeadbeef\", test_0.shardId()), commonStats0 , null),\n- new ShardStats(test_1, new ShardPath(false, test1Path, test1Path, \"0xdeadbeef\", test_1.shardId()), commonStats1 , null)\n+ new ShardStats(test_0, new ShardPath(false, test0Path, test0Path, test_0.shardId()), commonStats0 , null),\n+ new ShardStats(test_1, new ShardPath(false, test1Path, test1Path, test_1.shardId()), commonStats1 , null)\n };\n ImmutableOpenMap.Builder<String, Long> shardSizes = ImmutableOpenMap.builder();\n ImmutableOpenMap.Builder<ShardRouting, String> routingToPath = ImmutableOpenMap.builder();",
"filename": "core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java",
"status": "modified"
},
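With `ShardId` now carrying a full `Index` (name plus UUID), `ShardPath` drops its separate UUID argument and test paths are built under `indices/{uuid}` instead of `indices/{name}`. A minimal sketch of the constructor usage exercised above; `dataRoot` and the class name are assumptions.

```java
// Minimal sketch using the ShardPath/ShardId shapes shown in the diff above:
// the ShardId carries the Index, so ShardPath no longer takes a UUID string.
import java.nio.file.Path;

import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.shard.ShardPath;

class ShardPathSketch {
    static ShardPath shardPathFor(Path dataRoot, Index index, int shard) {
        // data lives under indices/{index.uuid}/{shard}, not indices/{index.name}/{shard}
        Path path = dataRoot.resolve("indices").resolve(index.getUUID()).resolve(Integer.toString(shard));
        return new ShardPath(false, path, path, new ShardId(index, shard));
    }
}
```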
{
"diff": "@@ -22,8 +22,10 @@\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.health.ClusterHealthStatus;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n@@ -42,6 +44,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n@@ -226,9 +229,10 @@ private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exc\n assertThat(state.getRoutingNodes().node(state.nodes().resolveNode(node_1).id()).get(0).state(), equalTo(ShardRoutingState.STARTED));\n \n client().prepareIndex(\"test\", \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).execute().actionGet();\n+ final Index index = resolveIndex(\"test\");\n \n logger.info(\"--> closing all nodes\");\n- Path[] shardLocation = internalCluster().getInstance(NodeEnvironment.class, node_1).availableShardPaths(new ShardId(\"test\", \"_na_\", 0));\n+ Path[] shardLocation = internalCluster().getInstance(NodeEnvironment.class, node_1).availableShardPaths(new ShardId(index, 0));\n assertThat(FileSystemUtils.exists(shardLocation), equalTo(true)); // make sure the data is there!\n internalCluster().closeNonSharedNodes(false); // don't wipe data directories the index needs to be there!\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java",
"status": "modified"
},
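Integration tests can no longer fabricate a `ShardId` with the `"_na_"` placeholder UUID; they first resolve the concrete `Index` from the cluster state. A small sketch assuming the `ESIntegTestCase` helpers used in the diff (`resolveIndex`, `internalCluster`, `availableShardPaths`); the class and method names are illustrative.

```java
// A minimal sketch, assuming the ESIntegTestCase helpers shown in the diff;
// nodeName is an illustrative parameter.
import java.nio.file.Path;

import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.test.ESIntegTestCase;

public class ShardLocationLookupIT extends ESIntegTestCase {
    Path[] shardLocationsOf(String indexName, String nodeName) {
        Index index = resolveIndex(indexName);  // carries the real UUID, not "_na_"
        NodeEnvironment env = internalCluster().getInstance(NodeEnvironment.class, nodeName);
        return env.availableShardPaths(new ShardId(index, 0));
    }
}
```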
{
"diff": "@@ -0,0 +1,366 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import org.apache.lucene.util.CollectionUtil;\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.bwcompat.OldIndexBackwardsCompatibilityIT;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.AllocationId;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.io.FileSystemUtils;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n+import org.elasticsearch.gateway.MetaStateService;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.shard.ShardPath;\n+import org.elasticsearch.index.shard.ShardStateMetaData;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.BufferedWriter;\n+import java.io.FileNotFoundException;\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.net.URISyntaxException;\n+import java.nio.charset.StandardCharsets;\n+import java.nio.file.DirectoryStream;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Map;\n+import java.util.Set;\n+\n+import static org.hamcrest.core.Is.is;\n+\n+@LuceneTestCase.SuppressFileSystems(\"ExtrasFS\")\n+public class IndexFolderUpgraderTests extends ESTestCase {\n+\n+ private static MetaDataStateFormat<IndexMetaData> indexMetaDataStateFormat =\n+ new MetaDataStateFormat<IndexMetaData>(XContentType.SMILE, MetaStateService.INDEX_STATE_FILE_PREFIX) {\n+\n+ @Override\n+ public void toXContent(XContentBuilder builder, IndexMetaData state) throws IOException {\n+ IndexMetaData.Builder.toXContent(state, builder, ToXContent.EMPTY_PARAMS);\n+ }\n+\n+ @Override\n+ public IndexMetaData fromXContent(XContentParser parser) throws IOException {\n+ return IndexMetaData.Builder.fromXContent(parser);\n+ }\n+ };\n+\n+ /**\n+ * tests custom data paths are upgraded\n+ */\n+ public void testUpgradeCustomDataPath() throws IOException {\n+ Path customPath = 
createTempDir();\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n+ .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_DATA_PATH, customPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ /**\n+ * tests upgrade on partially upgraded index, when we crash while upgrading\n+ */\n+ public void testPartialUpgradeCustomDataPath() throws IOException {\n+ Path customPath = createTempDir();\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n+ .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_DATA_PATH, customPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv) {\n+ @Override\n+ void upgrade(Index index, Path source, Path target) throws IOException {\n+ if(randomBoolean()) {\n+ throw new FileNotFoundException(\"simulated\");\n+ }\n+ }\n+ };\n+ // only upgrade some paths\n+ try {\n+ helper.upgrade(index.getName());\n+ } catch (IOException e) {\n+ assertTrue(e instanceof FileNotFoundException);\n+ }\n+ helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ // try to upgrade again\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ public void testUpgrade() throws IOException {\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean()).build();\n+ try 
(NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ public void testUpgradeIndices() throws IOException {\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ Map<IndexSettings, Tuple<Integer, Integer>> indexSettingsMap = new HashMap<>();\n+ for (int i = 0; i < randomIntBetween(2, 5); i++) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ Tuple<Integer, Integer> fileCounts = new Tuple<>(randomIntBetween(1, 5), randomIntBetween(1, 5));\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ indexSettingsMap.put(indexSettings, fileCounts);\n+ writeIndex(nodeEnv, indexSettings, fileCounts.v1(), fileCounts.v2());\n+ }\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(nodeSettings, nodeEnv);\n+ for (Map.Entry<IndexSettings, Tuple<Integer, Integer>> entry : indexSettingsMap.entrySet()) {\n+ checkIndex(nodeEnv, entry.getKey(), entry.getValue().v1(), entry.getValue().v2());\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Run upgrade on a real bwc index\n+ */\n+ public void testUpgradeRealIndex() throws IOException, URISyntaxException {\n+ List<Path> indexes = new ArrayList<>();\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(getBwcIndicesPath(), \"index-*.zip\")) {\n+ for (Path path : stream) {\n+ indexes.add(path);\n+ }\n+ }\n+ CollectionUtil.introSort(indexes, (o1, o2) -> o1.getFileName().compareTo(o2.getFileName()));\n+ final Path path = randomFrom(indexes);\n+ final String indexName = path.getFileName().toString().replace(\".zip\", \"\").toLowerCase(Locale.ROOT);\n+ try (NodeEnvironment nodeEnvironment = newNodeEnvironment()) {\n+ Path unzipDir = createTempDir();\n+ Path unzipDataDir = unzipDir.resolve(\"data\");\n+ // decompress the index\n+ try (InputStream stream = Files.newInputStream(path)) {\n+ TestUtil.unzip(stream, unzipDir);\n+ }\n+ // check it is unique\n+ assertTrue(Files.exists(unzipDataDir));\n+ Path[] list = FileSystemUtils.files(unzipDataDir);\n+ if (list.length != 1) {\n+ throw new 
IllegalStateException(\"Backwards index must contain exactly one cluster but was \" + list.length);\n+ }\n+ // the bwc scripts packs the indices under this path\n+ Path src = list[0].resolve(\"nodes/0/indices/\" + indexName);\n+ assertTrue(\"[\" + path + \"] missing index dir: \" + src.toString(), Files.exists(src));\n+ final Path indicesPath = randomFrom(nodeEnvironment.nodePaths()).indicesPath;\n+ logger.info(\"--> injecting index [{}] into [{}]\", indexName, indicesPath);\n+ OldIndexBackwardsCompatibilityIT.copyIndex(logger, src, indexName, indicesPath);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnvironment);\n+\n+ // ensure old index folder is deleted\n+ Set<String> indexFolders = nodeEnvironment.availableIndexFolders();\n+ assertEquals(indexFolders.size(), 1);\n+\n+ // ensure index metadata is moved\n+ IndexMetaData indexMetaData = indexMetaDataStateFormat.loadLatestState(logger,\n+ nodeEnvironment.resolveIndexFolder(indexFolders.iterator().next()));\n+ assertNotNull(indexMetaData);\n+ Index index = indexMetaData.getIndex();\n+ assertEquals(index.getName(), indexName);\n+\n+ Set<ShardId> shardIds = nodeEnvironment.findAllShardIds(index);\n+ // ensure all shards are moved\n+ assertEquals(shardIds.size(), indexMetaData.getNumberOfShards());\n+ for (ShardId shardId : shardIds) {\n+ final ShardPath shardPath = ShardPath.loadShardPath(logger, nodeEnvironment, shardId,\n+ new IndexSettings(indexMetaData, Settings.EMPTY));\n+ final Path translog = shardPath.resolveTranslog();\n+ final Path idx = shardPath.resolveIndex();\n+ final Path state = shardPath.getShardStatePath().resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ assertTrue(shardPath.exists());\n+ assertTrue(Files.exists(translog));\n+ assertTrue(Files.exists(idx));\n+ assertTrue(Files.exists(state));\n+ }\n+ }\n+ }\n+\n+ public void testNeedsUpgrade() throws IOException {\n+ final Index index = new Index(\"foo\", Strings.randomBase64UUID());\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName())\n+ .settings(Settings.builder()\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .build();\n+ try (NodeEnvironment nodeEnvironment = newNodeEnvironment()) {\n+ indexMetaDataStateFormat.write(indexState, 1, nodeEnvironment.indexPaths(index));\n+ assertFalse(IndexFolderUpgrader.needsUpgrade(index, index.getUUID()));\n+ }\n+ }\n+\n+ private void checkIndex(NodeEnvironment nodeEnv, IndexSettings indexSettings,\n+ int numIdxFiles, int numTranslogFiles) throws IOException {\n+ final Index index = indexSettings.getIndex();\n+ // ensure index state can be loaded\n+ IndexMetaData loadLatestState = indexMetaDataStateFormat.loadLatestState(logger, nodeEnv.indexPaths(index));\n+ assertNotNull(loadLatestState);\n+ assertEquals(loadLatestState.getIndex(), index);\n+ for (int shardId = 0; shardId < indexSettings.getNumberOfShards(); shardId++) {\n+ // ensure shard path can be loaded\n+ ShardPath targetShardPath = ShardPath.loadShardPath(logger, nodeEnv, new ShardId(index, shardId), indexSettings);\n+ assertNotNull(targetShardPath);\n+ // ensure shard contents are copied over\n+ final Path translog = targetShardPath.resolveTranslog();\n+ final Path idx = targetShardPath.resolveIndex();\n+\n+ // ensure index and translog files are copied over\n+ assertEquals(numTranslogFiles, FileSystemUtils.files(translog).length);\n+ assertEquals(numIdxFiles, FileSystemUtils.files(idx).length);\n+ Path[] files = 
FileSystemUtils.files(translog);\n+ final HashSet<Path> translogFiles = new HashSet<>(Arrays.asList(files));\n+ for (int i = 0; i < numTranslogFiles; i++) {\n+ final String name = Integer.toString(i);\n+ translogFiles.contains(translog.resolve(name + \".translog\"));\n+ byte[] content = Files.readAllBytes(translog.resolve(name + \".translog\"));\n+ assertEquals(name , new String(content, StandardCharsets.UTF_8));\n+ }\n+ Path[] indexFileList = FileSystemUtils.files(idx);\n+ final HashSet<Path> idxFiles = new HashSet<>(Arrays.asList(indexFileList));\n+ for (int i = 0; i < numIdxFiles; i++) {\n+ final String name = Integer.toString(i);\n+ idxFiles.contains(idx.resolve(name + \".tst\"));\n+ byte[] content = Files.readAllBytes(idx.resolve(name + \".tst\"));\n+ assertEquals(name, new String(content, StandardCharsets.UTF_8));\n+ }\n+ }\n+ }\n+\n+ private void writeIndex(NodeEnvironment nodeEnv, IndexSettings indexSettings,\n+ int numIdxFiles, int numTranslogFiles) throws IOException {\n+ NodeEnvironment.NodePath[] nodePaths = nodeEnv.nodePaths();\n+ Path[] oldIndexPaths = new Path[nodePaths.length];\n+ for (int i = 0; i < nodePaths.length; i++) {\n+ oldIndexPaths[i] = nodePaths[i].indicesPath.resolve(indexSettings.getIndex().getName());\n+ }\n+ indexMetaDataStateFormat.write(indexSettings.getIndexMetaData(), 1, oldIndexPaths);\n+ for (int id = 0; id < indexSettings.getNumberOfShards(); id++) {\n+ Path oldIndexPath = randomFrom(oldIndexPaths);\n+ ShardId shardId = new ShardId(indexSettings.getIndex(), id);\n+ if (indexSettings.hasCustomDataPath()) {\n+ Path customIndexPath = nodeEnv.resolveBaseCustomLocation(indexSettings).resolve(indexSettings.getIndex().getName());\n+ writeShard(shardId, customIndexPath, numIdxFiles, numTranslogFiles);\n+ } else {\n+ writeShard(shardId, oldIndexPath, numIdxFiles, numTranslogFiles);\n+ }\n+ ShardStateMetaData state = new ShardStateMetaData(true, indexSettings.getUUID(), AllocationId.newInitializing());\n+ ShardStateMetaData.FORMAT.write(state, 1, oldIndexPath.resolve(String.valueOf(shardId.getId())));\n+ }\n+ }\n+\n+ private void writeShard(ShardId shardId, Path indexLocation,\n+ final int numIdxFiles, final int numTranslogFiles) throws IOException {\n+ Path oldShardDataPath = indexLocation.resolve(String.valueOf(shardId.getId()));\n+ final Path translogPath = oldShardDataPath.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n+ final Path idxPath = oldShardDataPath.resolve(ShardPath.INDEX_FOLDER_NAME);\n+ Files.createDirectories(translogPath);\n+ Files.createDirectories(idxPath);\n+ for (int i = 0; i < numIdxFiles; i++) {\n+ String filename = Integer.toString(i);\n+ try (BufferedWriter w = Files.newBufferedWriter(idxPath.resolve(filename + \".tst\"),\n+ StandardCharsets.UTF_8)) {\n+ w.write(filename);\n+ }\n+ }\n+ for (int i = 0; i < numTranslogFiles; i++) {\n+ String filename = Integer.toString(i);\n+ try (BufferedWriter w = Files.newBufferedWriter(translogPath.resolve(filename + \".translog\"),\n+ StandardCharsets.UTF_8)) {\n+ w.write(filename);\n+ }\n+ }\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/util/IndexFolderUpgraderTests.java",
"status": "added"
},
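The new test writes an index under the legacy name-keyed layout, runs the upgrader, and verifies state, translog and index files under the UUID-keyed folder, including a simulated crash mid-upgrade. A minimal retry sketch of that idempotency property, assuming the package-visible `IndexFolderUpgrader` API the test uses; the sketch sits in the upgrader's package, as the test does, and the class name is illustrative.

```java
// A retry sketch only: the test above relies on a second upgrader run
// completing a partially finished move.
package org.elasticsearch.common.util;

import java.io.IOException;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.index.Index;

class UpgradeRetrySketch {
    static void upgradeWithRetry(Settings settings, NodeEnvironment nodeEnv, Index index) throws IOException {
        try {
            new IndexFolderUpgrader(settings, nodeEnv).upgrade(index.getName());
        } catch (IOException e) {
            // partial upgrade (e.g. simulated FileNotFoundException); a fresh
            // upgrader run is expected to finish the remaining paths
            new IndexFolderUpgrader(settings, nodeEnv).upgrade(index.getName());
        }
    }
}
```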
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.shard.ShardId;\n@@ -36,7 +37,11 @@\n import java.io.IOException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.List;\n+import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -129,33 +134,34 @@ public void testNodeLockMultipleEnvironment() throws IOException {\n public void testShardLock() throws IOException {\n final NodeEnvironment env = newNodeEnvironment();\n \n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n- assertEquals(new ShardId(\"foo\", \"_na_\", 0), fooLock.getShardId());\n+ Index index = new Index(\"foo\", \"fooUUID\");\n+ ShardLock fooLock = env.shardLock(new ShardId(index, 0));\n+ assertEquals(new ShardId(index, 0), fooLock.getShardId());\n \n try {\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n+ env.shardLock(new ShardId(index, 0));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n Files.createDirectories(path.resolve(\"0\"));\n Files.createDirectories(path.resolve(\"1\"));\n }\n try {\n- env.lockAllForIndex(new Index(\"foo\", \"_na_\"), idxSettings, randomIntBetween(0, 10));\n+ env.lockAllForIndex(index, idxSettings, randomIntBetween(0, 10));\n fail(\"shard 0 is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n \n fooLock.close();\n // can lock again?\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0)).close();\n+ env.shardLock(new ShardId(index, 0)).close();\n \n- List<ShardLock> locks = env.lockAllForIndex(new Index(\"foo\", \"_na_\"), idxSettings, randomIntBetween(0, 10));\n+ List<ShardLock> locks = env.lockAllForIndex(index, idxSettings, randomIntBetween(0, 10));\n try {\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n+ env.shardLock(new ShardId(index, 0));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n@@ -165,63 +171,91 @@ public void testShardLock() throws IOException {\n env.close();\n }\n \n- public void testGetAllIndices() throws Exception {\n+ public void testAvailableIndexFolders() throws Exception {\n final NodeEnvironment env = newNodeEnvironment();\n final int numIndices = randomIntBetween(1, 10);\n+ Set<String> actualPaths = new HashSet<>();\n for (int i = 0; i < numIndices; i++) {\n- for (Path path : env.indexPaths(\"foo\" + i)) {\n- Files.createDirectories(path);\n+ Index index = new Index(\"foo\" + i, \"fooUUID\" + i);\n+ for (Path path : env.indexPaths(index)) {\n+ Files.createDirectories(path.resolve(MetaDataStateFormat.STATE_DIR_NAME));\n+ actualPaths.add(path.getFileName().toString());\n }\n }\n- Set<String> indices = env.findAllIndices();\n- assertEquals(indices.size(), numIndices);\n+\n+ assertThat(actualPaths, equalTo(env.availableIndexFolders()));\n+ assertTrue(\"LockedShards: \" + env.lockedShards(), env.lockedShards().isEmpty());\n+ env.close();\n+ }\n+\n+ public void testResolveIndexFolders() throws Exception {\n+ final NodeEnvironment env 
= newNodeEnvironment();\n+ final int numIndices = randomIntBetween(1, 10);\n+ Map<String, List<Path>> actualIndexDataPaths = new HashMap<>();\n for (int i = 0; i < numIndices; i++) {\n- assertTrue(indices.contains(\"foo\" + i));\n+ Index index = new Index(\"foo\" + i, \"fooUUID\" + i);\n+ Path[] indexPaths = env.indexPaths(index);\n+ for (Path path : indexPaths) {\n+ Files.createDirectories(path);\n+ String fileName = path.getFileName().toString();\n+ List<Path> paths = actualIndexDataPaths.get(fileName);\n+ if (paths == null) {\n+ paths = new ArrayList<>();\n+ }\n+ paths.add(path);\n+ actualIndexDataPaths.put(fileName, paths);\n+ }\n+ }\n+ for (Map.Entry<String, List<Path>> actualIndexDataPathEntry : actualIndexDataPaths.entrySet()) {\n+ List<Path> actual = actualIndexDataPathEntry.getValue();\n+ Path[] actualPaths = actual.toArray(new Path[actual.size()]);\n+ assertThat(actualPaths, equalTo(env.resolveIndexFolder(actualIndexDataPathEntry.getKey())));\n }\n assertTrue(\"LockedShards: \" + env.lockedShards(), env.lockedShards().isEmpty());\n env.close();\n }\n \n public void testDeleteSafe() throws IOException, InterruptedException {\n final NodeEnvironment env = newNodeEnvironment();\n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n- assertEquals(new ShardId(\"foo\", \"_na_\", 0), fooLock.getShardId());\n+ final Index index = new Index(\"foo\", \"fooUUID\");\n+ ShardLock fooLock = env.shardLock(new ShardId(index, 0));\n+ assertEquals(new ShardId(index, 0), fooLock.getShardId());\n \n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n Files.createDirectories(path.resolve(\"0\"));\n Files.createDirectories(path.resolve(\"1\"));\n }\n \n try {\n- env.deleteShardDirectorySafe(new ShardId(\"foo\", \"_na_\", 0), idxSettings);\n+ env.deleteShardDirectorySafe(new ShardId(index, 0), idxSettings);\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path.resolve(\"0\")));\n assertTrue(Files.exists(path.resolve(\"1\")));\n \n }\n \n- env.deleteShardDirectorySafe(new ShardId(\"foo\", \"_na_\", 1), idxSettings);\n+ env.deleteShardDirectorySafe(new ShardId(index, 1), idxSettings);\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path.resolve(\"0\")));\n assertFalse(Files.exists(path.resolve(\"1\")));\n }\n \n try {\n- env.deleteIndexDirectorySafe(new Index(\"foo\", \"_na_\"), randomIntBetween(0, 10), idxSettings);\n+ env.deleteIndexDirectorySafe(index, randomIntBetween(0, 10), idxSettings);\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n fooLock.close();\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path));\n }\n \n@@ -242,7 +276,7 @@ public void onFailure(Throwable t) {\n @Override\n protected void doRun() throws Exception {\n start.await();\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"_na_\", 0))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(index, 0))) {\n blockLatch.countDown();\n Thread.sleep(randomIntBetween(1, 10));\n }\n@@ -257,11 +291,11 @@ protected void doRun() throws Exception {\n start.countDown();\n blockLatch.await();\n \n- env.deleteIndexDirectorySafe(new Index(\"foo\", \"_na_\"), 5000, idxSettings);\n+ env.deleteIndexDirectorySafe(index, 
5000, idxSettings);\n \n assertNull(threadException.get());\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertFalse(Files.exists(path));\n }\n latch.await();\n@@ -300,7 +334,7 @@ public void run() {\n for (int i = 0; i < iters; i++) {\n int shard = randomIntBetween(0, counts.length - 1);\n try {\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"_na_\", shard), scaledRandomIntBetween(0, 10))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"fooUUID\", shard), scaledRandomIntBetween(0, 10))) {\n counts[shard].value++;\n countsAtomic[shard].incrementAndGet();\n assertEquals(flipFlop[shard].incrementAndGet(), 1);\n@@ -334,37 +368,38 @@ public void testCustomDataPaths() throws Exception {\n String[] dataPaths = tmpPaths();\n NodeEnvironment env = newNodeEnvironment(dataPaths, \"/tmp\", Settings.EMPTY);\n \n- IndexSettings s1 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.EMPTY);\n- IndexSettings s2 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.builder().put(IndexMetaData.SETTING_DATA_PATH, \"/tmp/foo\").build());\n- Index index = new Index(\"myindex\", \"_na_\");\n+ final Settings indexSettings = Settings.builder().put(IndexMetaData.SETTING_INDEX_UUID, \"myindexUUID\").build();\n+ IndexSettings s1 = IndexSettingsModule.newIndexSettings(\"myindex\", indexSettings);\n+ IndexSettings s2 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_DATA_PATH, \"/tmp/foo\").build());\n+ Index index = new Index(\"myindex\", \"myindexUUID\");\n ShardId sid = new ShardId(index, 0);\n \n assertFalse(\"no settings should mean no custom data path\", s1.hasCustomDataPath());\n assertTrue(\"settings with path_data should have a custom data path\", s2.hasCustomDataPath());\n \n assertThat(env.availableShardPaths(sid), equalTo(env.availableShardPaths(sid)));\n- assertThat(env.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/0/myindex/0\")));\n+ assertThat(env.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/0/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"shard paths with a custom data_path should contain only regular paths\",\n env.availableShardPaths(sid),\n- equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex/0\")));\n+ equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"index paths uses the regular template\",\n- env.indexPaths(index.getName()), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex\")));\n+ env.indexPaths(index), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID())));\n \n env.close();\n NodeEnvironment env2 = newNodeEnvironment(dataPaths, \"/tmp\",\n Settings.builder().put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), false).build());\n \n assertThat(env2.availableShardPaths(sid), equalTo(env2.availableShardPaths(sid)));\n- assertThat(env2.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/myindex/0\")));\n+ assertThat(env2.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"shard paths with a custom data_path should contain only regular paths\",\n env2.availableShardPaths(sid),\n- equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex/0\")));\n+ equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID() + 
\"/0\")));\n \n assertThat(\"index paths uses the regular template\",\n- env2.indexPaths(index.getName()), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex\")));\n+ env2.indexPaths(index), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID())));\n \n env2.close();\n }",
"filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java",
"status": "modified"
},
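`NodeEnvironment` now enumerates index folders by their on-disk name (the index UUID) and resolves folder names back to concrete paths: `availableIndexFolders()` and `resolveIndexFolder()` replace the old `findAllIndices()` listing. A short sketch of that pair, using only the calls shown in the tests; the class name is illustrative.

```java
// Sketch of the enumeration APIs exercised in NodeEnvironmentTests: index
// folders are keyed by UUID and resolved back to their per-data-path locations.
import java.nio.file.Path;
import java.util.Set;

import org.elasticsearch.env.NodeEnvironment;

class IndexFolderListingSketch {
    static void listIndexFolders(NodeEnvironment env) {
        Set<String> folderNames = env.availableIndexFolders();   // folder names, i.e. index UUIDs
        for (String folderName : folderNames) {
            for (Path path : env.resolveIndexFolder(folderName)) {
                System.out.println(folderName + " -> " + path);
            }
        }
    }
}
```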
{
"diff": "@@ -29,6 +29,7 @@\n \n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n import java.util.Map;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -53,6 +54,47 @@ public void testCleanupWhenEmpty() throws Exception {\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n }\n+ public void testDanglingIndicesDiscovery() throws Exception {\n+ try (NodeEnvironment env = newNodeEnvironment()) {\n+ MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n+ DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n+\n+ assertTrue(danglingState.getDanglingIndices().isEmpty());\n+ MetaData metaData = MetaData.builder().build();\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, \"test1UUID\");\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ assertTrue(newDanglingIndices.containsKey(dangledIndex.getIndex()));\n+ metaData = MetaData.builder().put(dangledIndex, false).build();\n+ newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ assertFalse(newDanglingIndices.containsKey(dangledIndex.getIndex()));\n+ }\n+ }\n+\n+ public void testInvalidIndexFolder() throws Exception {\n+ try (NodeEnvironment env = newNodeEnvironment()) {\n+ MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n+ DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n+\n+ MetaData metaData = MetaData.builder().build();\n+ final String uuid = \"test1UUID\";\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, uuid);\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n+ for (Path path : env.resolveIndexFolder(uuid)) {\n+ if (Files.exists(path)) {\n+ Files.move(path, path.resolveSibling(\"invalidUUID\"), StandardCopyOption.ATOMIC_MOVE);\n+ }\n+ }\n+ try {\n+ danglingState.findNewDanglingIndices(metaData);\n+ fail(\"no exception thrown for invalid folder name\");\n+ } catch (IllegalStateException e) {\n+ assertThat(e.getMessage(), equalTo(\"[invalidUUID] invalid index folder name, rename to [test1UUID]\"));\n+ }\n+ }\n+ }\n \n public void testDanglingProcessing() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n@@ -61,59 +103,40 @@ public void testDanglingProcessing() throws Exception {\n \n MetaData metaData = MetaData.builder().build();\n \n- IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", dangledIndex, null);\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, \"test1UUID\");\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n \n // check that several runs when not in the metadata still keep the dangled index around\n int numberOfChecks = randomIntBetween(1, 10);\n for (int i = 0; i < numberOfChecks; i++) {\n- Map<String, IndexMetaData> newDanglingIndices = 
danglingState.findNewDanglingIndices(metaData);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n assertThat(newDanglingIndices.size(), equalTo(1));\n- assertThat(newDanglingIndices.keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(newDanglingIndices.keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n \n for (int i = 0; i < numberOfChecks; i++) {\n danglingState.findNewAndAddDanglingIndices(metaData);\n \n assertThat(danglingState.getDanglingIndices().size(), equalTo(1));\n- assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n }\n \n // simulate allocation to the metadata\n metaData = MetaData.builder(metaData).put(dangledIndex, true).build();\n \n // check that several runs when in the metadata, but not cleaned yet, still keeps dangled\n for (int i = 0; i < numberOfChecks; i++) {\n- Map<String, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n assertTrue(newDanglingIndices.isEmpty());\n \n assertThat(danglingState.getDanglingIndices().size(), equalTo(1));\n- assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n }\n \n danglingState.cleanupAllocatedDangledIndices(metaData);\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n }\n-\n- public void testRenameOfIndexState() throws Exception {\n- try (NodeEnvironment env = newNodeEnvironment()) {\n- MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n- DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n-\n- MetaData metaData = MetaData.builder().build();\n-\n- IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", dangledIndex, null);\n-\n- for (Path path : env.indexPaths(\"test1\")) {\n- Files.move(path, path.getParent().resolve(\"test1_renamed\"));\n- }\n-\n- Map<String, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n- assertThat(newDanglingIndices.size(), equalTo(1));\n- assertThat(newDanglingIndices.keySet(), Matchers.hasItems(\"test1_renamed\"));\n- }\n- }\n }",
"filename": "core/src/test/java/org/elasticsearch/gateway/DanglingIndicesStateTests.java",
"status": "modified"
},
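Dangling-index detection now loads the index metadata from every folder and keys its results by `Index`; a folder whose name does not match the UUID recorded in its own metadata is rejected outright. A hypothetical check mirroring the error message the new test asserts:

```java
// Hypothetical validation sketch: the folder name on disk must equal the UUID
// recorded in the index metadata it contains, otherwise detection fails fast.
class IndexFolderNameCheckSketch {
    static void verifyFolderMatchesUuid(String folderName, String metadataUuid) {
        if (folderName.equals(metadataUuid) == false) {
            throw new IllegalStateException(
                "[" + folderName + "] invalid index folder name, rename to [" + metadataUuid + "]");
        }
    }
}
```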
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n@@ -68,14 +69,15 @@ public void testMetaIsRemovedIfAllShardsFromIndexRemoved() throws Exception {\n index(index, \"doc\", \"1\", jsonBuilder().startObject().field(\"text\", \"some text\").endObject());\n ensureGreen();\n assertIndexInMetaState(node1, index);\n- assertIndexDirectoryDeleted(node2, index);\n+ Index resolveIndex = resolveIndex(index);\n+ assertIndexDirectoryDeleted(node2, resolveIndex);\n assertIndexInMetaState(masterNode, index);\n \n logger.debug(\"relocating index...\");\n client().admin().indices().prepareUpdateSettings(index).setSettings(Settings.builder().put(IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING.getKey() + \"_name\", node2)).get();\n client().admin().cluster().prepareHealth().setWaitForRelocatingShards(0).get();\n ensureGreen();\n- assertIndexDirectoryDeleted(node1, index);\n+ assertIndexDirectoryDeleted(node1, resolveIndex);\n assertIndexInMetaState(node2, index);\n assertIndexInMetaState(masterNode, index);\n }\n@@ -146,10 +148,10 @@ public void testMetaWrittenWhenIndexIsClosedAndMetaUpdated() throws Exception {\n assertThat(indicesMetaData.get(index).getState(), equalTo(IndexMetaData.State.OPEN));\n }\n \n- protected void assertIndexDirectoryDeleted(final String nodeName, final String indexName) throws Exception {\n+ protected void assertIndexDirectoryDeleted(final String nodeName, final Index index) throws Exception {\n assertBusy(() -> {\n logger.info(\"checking if index directory exists...\");\n- assertFalse(\"Expecting index directory of \" + indexName + \" to be deleted from node \" + nodeName, indexDirectoryExists(nodeName, indexName));\n+ assertFalse(\"Expecting index directory of \" + index + \" to be deleted from node \" + nodeName, indexDirectoryExists(nodeName, index));\n }\n );\n }\n@@ -168,9 +170,9 @@ protected void assertIndexInMetaState(final String nodeName, final String indexN\n }\n \n \n- private boolean indexDirectoryExists(String nodeName, String indexName) {\n+ private boolean indexDirectoryExists(String nodeName, Index index) {\n NodeEnvironment nodeEnv = ((InternalTestCluster) cluster()).getInstance(NodeEnvironment.class, nodeName);\n- for (Path path : nodeEnv.indexPaths(indexName)) {\n+ for (Path path : nodeEnv.indexPaths(index)) {\n if (Files.exists(path)) {\n return true;\n }",
"filename": "core/src/test/java/org/elasticsearch/gateway/MetaDataWriteDataNodesIT.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -43,15 +44,15 @@ public void testWriteLoadIndex() throws Exception {\n MetaStateService metaStateService = new MetaStateService(randomSettings(), env);\n \n IndexMetaData index = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", index, null);\n- assertThat(metaStateService.loadIndexState(\"test1\"), equalTo(index));\n+ metaStateService.writeIndex(\"test_write\", index);\n+ assertThat(metaStateService.loadIndexState(index.getIndex()), equalTo(index));\n }\n }\n \n public void testLoadMissingIndex() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n MetaStateService metaStateService = new MetaStateService(randomSettings(), env);\n- assertThat(metaStateService.loadIndexState(\"test1\"), nullValue());\n+ assertThat(metaStateService.loadIndexState(new Index(\"test1\", \"test1UUID\")), nullValue());\n }\n }\n \n@@ -94,7 +95,7 @@ public void testLoadGlobal() throws Exception {\n .build();\n \n metaStateService.writeGlobalState(\"test_write\", metaData);\n- metaStateService.writeIndex(\"test_write\", index, null);\n+ metaStateService.writeIndex(\"test_write\", index);\n \n MetaData loadedState = metaStateService.loadFullState();\n assertThat(loadedState.persistentSettings(), equalTo(metaData.persistentSettings()));",
"filename": "core/src/test/java/org/elasticsearch/gateway/MetaStateServiceTests.java",
"status": "modified"
},
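`MetaStateService.writeIndex` no longer takes an explicit target location, and `loadIndexState` is keyed by `Index` rather than by name. A minimal round-trip sketch of those two calls as the test uses them; `env` and `indexSettings` are assumed to come from a fixture supplying valid index settings (UUID, version, shard counts), and the class name is illustrative.

```java
// Round-trip sketch of the MetaStateService calls shown in the diff above.
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.gateway.MetaStateService;

class IndexStateRoundTripSketch {
    static IndexMetaData writeAndReload(NodeEnvironment env, Settings nodeSettings, Settings indexSettings) throws Exception {
        MetaStateService metaStateService = new MetaStateService(nodeSettings, env);
        IndexMetaData index = IndexMetaData.builder("test1").settings(indexSettings).build();
        metaStateService.writeIndex("test_write", index);          // no target path argument any more
        return metaStateService.loadIndexState(index.getIndex());  // keyed by Index, not by name
    }
}
```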
{
"diff": "@@ -70,6 +70,7 @@\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.env.ShardLock;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.NodeServicesProvider;\n@@ -97,6 +98,7 @@\n import org.elasticsearch.test.IndexSettingsModule;\n import org.elasticsearch.test.InternalSettingsPlugin;\n import org.elasticsearch.test.VersionUtils;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n \n import java.io.IOException;\n import java.nio.file.Files;\n@@ -141,33 +143,35 @@ protected Collection<Class<? extends Plugin>> getPlugins() {\n \n public void testWriteShardState() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n- ShardId id = new ShardId(\"foo\", \"_na_\", 1);\n+ ShardId id = new ShardId(\"foo\", \"fooUUID\", 1);\n long version = between(1, Integer.MAX_VALUE / 2);\n boolean primary = randomBoolean();\n AllocationId allocationId = randomBoolean() ? null : randomAllocationId();\n- ShardStateMetaData state1 = new ShardStateMetaData(version, primary, \"foo\", allocationId);\n+ ShardStateMetaData state1 = new ShardStateMetaData(version, primary, \"fooUUID\", allocationId);\n write(state1, env.availableShardPaths(id));\n ShardStateMetaData shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state1);\n \n- ShardStateMetaData state2 = new ShardStateMetaData(version, primary, \"foo\", allocationId);\n+ ShardStateMetaData state2 = new ShardStateMetaData(version, primary, \"fooUUID\", allocationId);\n write(state2, env.availableShardPaths(id));\n shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state1);\n \n- ShardStateMetaData state3 = new ShardStateMetaData(version + 1, primary, \"foo\", allocationId);\n+ ShardStateMetaData state3 = new ShardStateMetaData(version + 1, primary, \"fooUUID\", allocationId);\n write(state3, env.availableShardPaths(id));\n shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state3);\n- assertEquals(\"foo\", state3.indexUUID);\n+ assertEquals(\"fooUUID\", state3.indexUUID);\n }\n }\n \n public void testLockTryingToDelete() throws Exception {\n createIndex(\"test\");\n ensureGreen();\n NodeEnvironment env = getInstanceFromNode(NodeEnvironment.class);\n- Path[] shardPaths = env.availableShardPaths(new ShardId(\"test\", \"_na_\", 0));\n+ ClusterService cs = getInstanceFromNode(ClusterService.class);\n+ final Index index = cs.state().metaData().index(\"test\").getIndex();\n+ Path[] shardPaths = env.availableShardPaths(new ShardId(index, 0));\n logger.info(\"--> paths: [{}]\", (Object)shardPaths);\n // Should not be able to acquire the lock because it's already open\n try {\n@@ -179,7 +183,7 @@ public void testLockTryingToDelete() throws Exception {\n // Test without the regular shard lock to assume we can acquire it\n // (worst case, meaning that the shard lock could be acquired and\n // we're green to delete the shard's directory)\n- ShardLock sLock = new DummyShardLock(new ShardId(\"test\", \"_na_\", 0));\n+ ShardLock sLock = new DummyShardLock(new ShardId(index, 0));\n try {\n env.deleteShardDirectoryUnderLock(sLock, IndexSettingsModule.newIndexSettings(\"test\", Settings.EMPTY));\n fail(\"should not have been able to delete the directory\");",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
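Where this test previously hard-coded a `"_na_"` UUID, it now pulls the real `Index` from the node's cluster state before resolving shard paths. A fragment-level sketch assuming an `ESSingleNodeTestCase` context providing `getInstanceFromNode()`, as the test above has; the class and method names are illustrative.

```java
// Sketch of the cluster-state lookup used in the diff above.
import java.nio.file.Path;

import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.test.ESSingleNodeTestCase;

public class ShardPathLookupTests extends ESSingleNodeTestCase {
    Path[] shardPathsOf(String indexName) {
        ClusterService cs = getInstanceFromNode(ClusterService.class);
        Index index = cs.state().metaData().index(indexName).getIndex();   // real name + UUID
        NodeEnvironment env = getInstanceFromNode(NodeEnvironment.class);
        return env.availableShardPaths(new ShardId(index, 0));
    }
}
```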
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n@@ -42,13 +43,13 @@ public void testLoadShardPath() throws IOException {\n Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", \"0xDEADBEEF\", 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, \"0xDEADBEEF\", AllocationId.newInitializing()), 2, path);\n ShardPath shardPath = ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), settings));\n assertEquals(path, shardPath.getDataPath());\n- assertEquals(\"0xDEADBEEF\", shardPath.getIndexUUID());\n+ assertEquals(\"0xDEADBEEF\", shardPath.getShardId().getIndex().getUUID());\n assertEquals(\"foo\", shardPath.getShardId().getIndexName());\n assertEquals(path.resolve(\"translog\"), shardPath.resolveTranslog());\n assertEquals(path.resolve(\"index\"), shardPath.resolveIndex());\n@@ -57,14 +58,15 @@ public void testLoadShardPath() throws IOException {\n \n public void testFailLoadShardPathOnMultiState() throws IOException {\n try (final NodeEnvironment env = newNodeEnvironment(settingsBuilder().build())) {\n- Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n+ final String indexUUID = \"0xDEADBEEF\";\n+ Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, indexUUID)\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", indexUUID, 0);\n Path[] paths = env.availableShardPaths(shardId);\n assumeTrue(\"This test tests multi data.path but we only got one\", paths.length > 1);\n int id = randomIntBetween(1, 10);\n- ShardStateMetaData.FORMAT.write(new ShardStateMetaData(id, true, \"0xDEADBEEF\", AllocationId.newInitializing()), id, paths);\n+ ShardStateMetaData.FORMAT.write(new ShardStateMetaData(id, true, indexUUID, AllocationId.newInitializing()), id, paths);\n ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), settings));\n fail(\"Expected IllegalStateException\");\n } catch (IllegalStateException e) {\n@@ -77,7 +79,7 @@ public void testFailLoadShardPathIndexUUIDMissmatch() throws IOException {\n Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"foobar\")\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", \"foobar\", 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n int id = randomIntBetween(1, 10);\n@@ -90,18 +92,20 @@ public void testFailLoadShardPathIndexUUIDMissmatch() throws IOException {\n }\n \n public void testIllegalCustomDataPath() {\n- final Path path = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Index index = new Index(\"foo\", 
\"foo\");\n+ final Path path = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n try {\n- new ShardPath(true, path, path, \"foo\", new ShardId(\"foo\", \"_na_\", 0));\n+ new ShardPath(true, path, path, new ShardId(index, 0));\n fail(\"Expected IllegalArgumentException\");\n } catch (IllegalArgumentException e) {\n assertThat(e.getMessage(), is(\"shard state path must be different to the data path when using custom data paths\"));\n }\n }\n \n public void testValidCtor() {\n- final Path path = createTempDir().resolve(\"foo\").resolve(\"0\");\n- ShardPath shardPath = new ShardPath(false, path, path, \"foo\", new ShardId(\"foo\", \"_na_\", 0));\n+ Index index = new Index(\"foo\", \"foo\");\n+ final Path path = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n+ ShardPath shardPath = new ShardPath(false, path, path, new ShardId(index, 0));\n assertFalse(shardPath.isCustomDataPath());\n assertEquals(shardPath.getDataPath(), path);\n assertEquals(shardPath.getShardStatePath(), path);\n@@ -111,8 +115,9 @@ public void testGetRootPaths() throws IOException {\n boolean useCustomDataPath = randomBoolean();\n final Settings indexSettings;\n final Settings nodeSettings;\n+ final String indexUUID = \"0xDEADBEEF\";\n Settings.Builder indexSettingsBuilder = settingsBuilder()\n- .put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n+ .put(IndexMetaData.SETTING_INDEX_UUID, indexUUID)\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n final Path customPath;\n if (useCustomDataPath) {\n@@ -132,10 +137,10 @@ public void testGetRootPaths() throws IOException {\n nodeSettings = Settings.EMPTY;\n }\n try (final NodeEnvironment env = newNodeEnvironment(nodeSettings)) {\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", indexUUID, 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n- ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, \"0xDEADBEEF\", AllocationId.newInitializing()), 2, path);\n+ ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, indexUUID, AllocationId.newInitializing()), 2, path);\n ShardPath shardPath = ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), indexSettings));\n boolean found = false;\n for (Path p : env.nodeDataPaths()) {",
"filename": "core/src/test/java/org/elasticsearch/index/shard/ShardPathTests.java",
"status": "modified"
},
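`ShardPath.getIndexUUID()` is gone; the UUID is now read through the `ShardId`'s `Index` after loading the shard path. A small sketch of that lookup; `logger`, `env` and `indexSettings` are assumed fixture objects and the class name is illustrative.

```java
// Sketch of loading a shard path and reading the UUID via the ShardId,
// matching the updated assertions in ShardPathTests.
import java.io.IOException;

import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.shard.ShardPath;

class ShardUuidSketch {
    static String uuidOf(ESLogger logger, NodeEnvironment env, ShardId shardId, IndexSettings indexSettings) throws IOException {
        ShardPath shardPath = ShardPath.loadShardPath(logger, env, shardId, indexSettings);
        // the UUID travels with the ShardId's Index, not with the ShardPath
        return shardPath.getShardId().getIndex().getUUID();
    }
}
```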
{
"diff": "@@ -49,6 +49,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.gateway.PrimaryShardAllocator;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.MergePolicyConfig;\n import org.elasticsearch.index.shard.IndexEventListener;\n@@ -571,8 +572,9 @@ private int numShards(String... index) {\n private Map<String, List<Path>> findFilesToCorruptForReplica() throws IOException {\n Map<String, List<Path>> filesToNodes = new HashMap<>();\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index test = state.metaData().index(\"test\").getIndex();\n for (ShardRouting shardRouting : state.getRoutingTable().allShards(\"test\")) {\n- if (shardRouting.primary() == true) {\n+ if (shardRouting.primary()) {\n continue;\n }\n assertTrue(shardRouting.assignedToNode());\n@@ -582,8 +584,7 @@ private Map<String, List<Path>> findFilesToCorruptForReplica() throws IOExceptio\n filesToNodes.put(nodeStats.getNode().getName(), files);\n for (FsInfo.Path info : nodeStats.getFs()) {\n String path = info.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/index\";\n- Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n+ Path file = PathUtils.get(path).resolve(\"indices\").resolve(test.getUUID()).resolve(Integer.toString(shardRouting.getId())).resolve(\"index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {\n@@ -604,6 +605,7 @@ private ShardRouting corruptRandomPrimaryFile() throws IOException {\n \n private ShardRouting corruptRandomPrimaryFile(final boolean includePerCommitFiles) throws IOException {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index test = state.metaData().index(\"test\").getIndex();\n GroupShardsIterator shardIterators = state.getRoutingNodes().getRoutingTable().activePrimaryShardsGrouped(new String[]{\"test\"}, false);\n List<ShardIterator> iterators = iterableAsArrayList(shardIterators);\n ShardIterator shardIterator = RandomPicks.randomFrom(getRandom(), iterators);\n@@ -616,8 +618,7 @@ private ShardRouting corruptRandomPrimaryFile(final boolean includePerCommitFile\n Set<Path> files = new TreeSet<>(); // treeset makes sure iteration order is deterministic\n for (FsInfo.Path info : nodeStatses.getNodes()[0].getFs()) {\n String path = info.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/index\";\n- Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n+ Path file = PathUtils.get(path).resolve(\"indices\").resolve(test.getUUID()).resolve(Integer.toString(shardRouting.getId())).resolve(\"index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {\n@@ -676,12 +677,13 @@ private void pruneOldDeleteGenerations(Set<Path> files) {\n \n public List<Path> listShardFiles(ShardRouting routing) throws IOException {\n NodesStatsResponse nodeStatses = client().admin().cluster().prepareNodesStats(routing.currentNodeId()).setFs(true).get();\n-\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ final Index test = 
state.metaData().index(\"test\").getIndex();\n assertThat(routing.toString(), nodeStatses.getNodes().length, equalTo(1));\n List<Path> files = new ArrayList<>();\n for (FsInfo.Path info : nodeStatses.getNodes()[0].getFs()) {\n String path = info.getPath();\n- Path file = PathUtils.get(path).resolve(\"indices/test/\" + Integer.toString(routing.getId()) + \"/index\");\n+ Path file = PathUtils.get(path).resolve(\"indices/\" + test.getUUID() + \"/\" + Integer.toString(routing.getId()) + \"/index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {",
"filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedFileIT.java",
"status": "modified"
},
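The corruption tests now compose on-disk locations as `{data.path}/indices/{index.uuid}/{shard}/index` (and `.../translog`), resolving the index UUID from the cluster state instead of hard-coding the index name in the path. A pure `java.nio` sketch of that layout; the helper and class names are illustrative.

```java
// Path-layout sketch: index data is keyed by UUID below the indices folder.
import java.nio.file.Path;
import java.nio.file.Paths;

class DataPathLayoutSketch {
    static Path indexFilesDir(String dataPath, String indexUUID, int shardId) {
        return Paths.get(dataPath)
            .resolve("indices")
            .resolve(indexUUID)                     // was the index *name* before this change
            .resolve(Integer.toString(shardId))
            .resolve("index");
    }
}
```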
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.MockEngineFactoryPlugin;\n import org.elasticsearch.monitor.fs.FsInfo;\n@@ -110,6 +111,7 @@ public void testCorruptTranslogFiles() throws Exception {\n private void corruptRandomTranslogFiles() throws IOException {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n GroupShardsIterator shardIterators = state.getRoutingNodes().getRoutingTable().activePrimaryShardsGrouped(new String[]{\"test\"}, false);\n+ final Index test = state.metaData().index(\"test\").getIndex();\n List<ShardIterator> iterators = iterableAsArrayList(shardIterators);\n ShardIterator shardIterator = RandomPicks.randomFrom(getRandom(), iterators);\n ShardRouting shardRouting = shardIterator.nextOrNull();\n@@ -121,7 +123,7 @@ private void corruptRandomTranslogFiles() throws IOException {\n Set<Path> files = new TreeSet<>(); // treeset makes sure iteration order is deterministic\n for (FsInfo.Path fsPath : nodeStatses.getNodes()[0].getFs()) {\n String path = fsPath.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/translog\";\n+ final String relativeDataLocationPath = \"indices/\"+ test.getUUID() +\"/\" + Integer.toString(shardRouting.getId()) + \"/translog\";\n Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n if (Files.exists(file)) {\n logger.info(\"--> path: {}\", file);",
"filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedTranslogIT.java",
"status": "modified"
},
{
"diff": "@@ -46,9 +46,9 @@ public void testHasSleepWrapperOnSharedFS() throws IOException {\n IndexSettings settings = IndexSettingsModule.newIndexSettings(\"foo\", build);\n IndexStoreConfig config = new IndexStoreConfig(build);\n IndexStore store = new IndexStore(settings, config);\n- Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Path tempDir = createTempDir().resolve(settings.getUUID()).resolve(\"0\");\n Files.createDirectories(tempDir);\n- ShardPath path = new ShardPath(false, tempDir, tempDir, settings.getUUID(), new ShardId(settings.getIndex(), 0));\n+ ShardPath path = new ShardPath(false, tempDir, tempDir, new ShardId(settings.getIndex(), 0));\n FsDirectoryService fsDirectoryService = new FsDirectoryService(settings, store, path);\n Directory directory = fsDirectoryService.newDirectory();\n assertTrue(directory instanceof RateLimitedFSDirectory);\n@@ -62,9 +62,9 @@ public void testHasNoSleepWrapperOnNormalFS() throws IOException {\n IndexSettings settings = IndexSettingsModule.newIndexSettings(\"foo\", build);\n IndexStoreConfig config = new IndexStoreConfig(build);\n IndexStore store = new IndexStore(settings, config);\n- Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Path tempDir = createTempDir().resolve(settings.getUUID()).resolve(\"0\");\n Files.createDirectories(tempDir);\n- ShardPath path = new ShardPath(false, tempDir, tempDir, settings.getUUID(), new ShardId(settings.getIndex(), 0));\n+ ShardPath path = new ShardPath(false, tempDir, tempDir, new ShardId(settings.getIndex(), 0));\n FsDirectoryService fsDirectoryService = new FsDirectoryService(settings, store, path);\n Directory directory = fsDirectoryService.newDirectory();\n assertTrue(directory instanceof RateLimitedFSDirectory);",
"filename": "core/src/test/java/org/elasticsearch/index/store/FsDirectoryServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -47,13 +47,14 @@\n public class IndexStoreTests extends ESTestCase {\n \n public void testStoreDirectory() throws IOException {\n- final Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Index index = new Index(\"foo\", \"fooUUID\");\n+ final Path tempDir = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n final IndexModule.Type[] values = IndexModule.Type.values();\n final IndexModule.Type type = RandomPicks.randomFrom(random(), values);\n Settings settings = Settings.settingsBuilder().put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), type.name().toLowerCase(Locale.ROOT))\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n IndexSettings indexSettings = IndexSettingsModule.newIndexSettings(\"foo\", settings);\n- FsDirectoryService service = new FsDirectoryService(indexSettings, null, new ShardPath(false, tempDir, tempDir, \"foo\", new ShardId(\"foo\", \"_na_\", 0)));\n+ FsDirectoryService service = new FsDirectoryService(indexSettings, null, new ShardPath(false, tempDir, tempDir, new ShardId(index, 0)));\n try (final Directory directory = service.newFSDirectory(tempDir, NoLockFactory.INSTANCE)) {\n switch (type) {\n case NIOFS:\n@@ -84,8 +85,9 @@ public void testStoreDirectory() throws IOException {\n }\n \n public void testStoreDirectoryDefault() throws IOException {\n- final Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n- FsDirectoryService service = new FsDirectoryService(IndexSettingsModule.newIndexSettings(\"foo\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()), null, new ShardPath(false, tempDir, tempDir, \"foo\", new ShardId(\"foo\", \"_na_\", 0)));\n+ Index index = new Index(\"bar\", \"foo\");\n+ final Path tempDir = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n+ FsDirectoryService service = new FsDirectoryService(IndexSettingsModule.newIndexSettings(\"bar\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()), null, new ShardPath(false, tempDir, tempDir, new ShardId(index, 0)));\n try (final Directory directory = service.newFSDirectory(tempDir, NoLockFactory.INSTANCE)) {\n if (Constants.WINDOWS) {\n assertTrue(directory.toString(), directory instanceof MMapDirectory || directory instanceof SimpleFSDirectory);",
"filename": "core/src/test/java/org/elasticsearch/index/store/IndexStoreTests.java",
"status": "modified"
},
{
"diff": "@@ -112,12 +112,14 @@ public void testIndexCleanup() throws Exception {\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1))\n );\n ensureGreen(\"test\");\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n \n logger.info(\"--> making sure that shard and its replica are allocated on node_1 and node_2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n \n logger.info(\"--> starting node server3\");\n final String node_3 = internalCluster().startNode(Settings.builder().put(Node.NODE_MASTER_SETTING.getKey(), false));\n@@ -128,12 +130,12 @@ public void testIndexCleanup() throws Exception {\n .get();\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_3, \"test\")), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_3, index)), equalTo(false));\n \n logger.info(\"--> move shard from node_1 to node_3, and wait for relocation to finish\");\n \n@@ -161,12 +163,12 @@ public void testIndexCleanup() throws Exception {\n .get();\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n- assertThat(waitForShardDeletion(node_1, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_1, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_3, \"test\")), equalTo(true));\n+ assertThat(waitForShardDeletion(node_1, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_1, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_3, index)), equalTo(true));\n \n }\n \n@@ -180,16 +182,18 @@ public void testShardCleanupIfShardDeletionAfterRelocationFailedAndIndexDeleted(\n 
.put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n );\n ensureGreen(\"test\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n \n final String node_2 = internalCluster().startDataOnlyNode(Settings.builder().build());\n assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"2\").get().isTimedOut());\n \n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(false));\n \n // add a transport delegate that will prevent the shard active request to succeed the first time after relocation has finished.\n // node_1 will then wait for the next cluster state change before it tries a next attempt to delete the shard.\n@@ -220,14 +224,14 @@ public void sendRequest(DiscoveryNode node, long requestId, String action, Trans\n // it must still delete the shard, even if it cannot find it anymore in indicesservice\n client().admin().indices().prepareDelete(\"test\").get();\n \n- assertThat(waitForShardDeletion(node_1, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_1, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(false));\n- assertThat(waitForShardDeletion(node_2, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_2, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(false));\n+ assertThat(waitForShardDeletion(node_1, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_1, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(false));\n+ assertThat(waitForShardDeletion(node_2, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_2, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(false));\n }\n \n public void testShardsCleanup() throws Exception {\n@@ -241,9 +245,11 @@ public void testShardsCleanup() throws Exception {\n );\n ensureGreen(\"test\");\n \n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n logger.info(\"--> making sure that shard and its replica are allocated on node_1 and node_2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), 
equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n \n logger.info(\"--> starting node server3\");\n String node_3 = internalCluster().startNode();\n@@ -255,10 +261,10 @@ public void testShardsCleanup() throws Exception {\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n logger.info(\"--> making sure that shard is not allocated on server3\");\n- assertThat(waitForShardDeletion(node_3, \"test\", 0), equalTo(false));\n+ assertThat(waitForShardDeletion(node_3, index, 0), equalTo(false));\n \n- Path server2Shard = shardDirectory(node_2, \"test\", 0);\n- logger.info(\"--> stopping node {}\", node_2);\n+ Path server2Shard = shardDirectory(node_2, index, 0);\n+ logger.info(\"--> stopping node \" + node_2);\n internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_2));\n \n logger.info(\"--> running cluster_health\");\n@@ -273,9 +279,9 @@ public void testShardsCleanup() throws Exception {\n assertThat(Files.exists(server2Shard), equalTo(true));\n \n logger.info(\"--> making sure that shard and its replica exist on server1, server2 and server3\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n assertThat(Files.exists(server2Shard), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n \n logger.info(\"--> starting node node_4\");\n final String node_4 = internalCluster().startNode();\n@@ -284,9 +290,9 @@ public void testShardsCleanup() throws Exception {\n ensureGreen();\n \n logger.info(\"--> making sure that shard and its replica are allocated on server1 and server3 but not on server2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n- assertThat(waitForShardDeletion(node_4, \"test\", 0), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n+ assertThat(waitForShardDeletion(node_4, index, 0), equalTo(false));\n }\n \n public void testShardActiveElsewhereDoesNotDeleteAnother() throws Exception {\n@@ -426,30 +432,30 @@ public void onFailure(String source, Throwable t) {\n waitNoPendingTasksOnAll();\n logger.info(\"Checking if shards aren't removed\");\n for (int shard : node2Shards) {\n- assertTrue(waitForShardDeletion(nonMasterNode, \"test\", shard));\n+ assertTrue(waitForShardDeletion(nonMasterNode, index, shard));\n }\n }\n \n- private Path indexDirectory(String server, String index) {\n+ private Path indexDirectory(String server, Index index) {\n NodeEnvironment env = internalCluster().getInstance(NodeEnvironment.class, server);\n final Path[] paths = env.indexPaths(index);\n assert paths.length == 1;\n return paths[0];\n }\n \n- private Path shardDirectory(String server, String index, int shard) {\n+ private Path shardDirectory(String server, Index index, int shard) {\n NodeEnvironment env = internalCluster().getInstance(NodeEnvironment.class, server);\n- final Path[] paths = env.availableShardPaths(new ShardId(index, \"_na_\", shard));\n+ final Path[] paths = env.availableShardPaths(new ShardId(index, 
shard));\n assert paths.length == 1;\n return paths[0];\n }\n \n- private boolean waitForShardDeletion(final String server, final String index, final int shard) throws InterruptedException {\n+ private boolean waitForShardDeletion(final String server, final Index index, final int shard) throws InterruptedException {\n awaitBusy(() -> !Files.exists(shardDirectory(server, index, shard)));\n return Files.exists(shardDirectory(server, index, shard));\n }\n \n- private boolean waitForIndexDeletion(final String server, final String index) throws InterruptedException {\n+ private boolean waitForIndexDeletion(final String server, final Index index) throws InterruptedException {\n awaitBusy(() -> !Files.exists(indexDirectory(server, index)));\n return Files.exists(indexDirectory(server, index));\n }",
"filename": "core/src/test/java/org/elasticsearch/indices/store/IndicesStoreIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -4,6 +4,17 @@\n This section discusses the changes that you need to be aware of when migrating\n your application to Elasticsearch 5.0.\n \n+[float]\n+=== Indices created before 5.0\n+\n+Elasticsearch 5.0 can read indices created in version 2.0 and above. If any\n+of your indices were created before 2.0 you will need to upgrade to the\n+latest 2.x version of Elasticsearch first, in order to upgrade your indices or\n+to delete the old indices. Elasticsearch will not start in the presence of old\n+indices. To upgrade 2.x indices, first start a node which have access to all\n+the data folders and let it upgrade all the indices before starting up rest of\n+the cluster.\n+\n [IMPORTANT]\n .Reindex indices from Elasticseach 1.x or before\n =========================================",
"filename": "docs/reference/migration/migrate_5_0.asciidoc",
"status": "modified"
},
{
"diff": "@@ -52,6 +52,7 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n public void testUpgradeOldMapping() throws IOException, ExecutionException, InterruptedException {\n final String indexName = \"index-mapper-murmur3-2.0.0\";\n+ final String indexUUID = \"1VzJds59TTK7lRu17W0mcg\";\n InternalTestCluster.Async<String> master = internalCluster().startNodeAsync();\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -72,6 +73,7 @@ public void testUpgradeOldMapping() throws IOException, ExecutionException, Inte\n assertFalse(Files.exists(dataPath));\n Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n Files.move(src, dataPath);\n+ Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID));\n \n master.get();\n // force reloading dangling indices with a cluster state republish",
"filename": "plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapperUpgradeTests.java",
"status": "modified"
},
{
"diff": "@@ -53,6 +53,7 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n public void testUpgradeOldMapping() throws IOException, ExecutionException, InterruptedException {\n final String indexName = \"index-mapper-size-2.0.0\";\n+ final String indexUUID = \"ENCw7sG0SWuTPcH60bHheg\";\n InternalTestCluster.Async<String> master = internalCluster().startNodeAsync();\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -73,6 +74,7 @@ public void testUpgradeOldMapping() throws IOException, ExecutionException, Inte\n assertFalse(Files.exists(dataPath));\n Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n Files.move(src, dataPath);\n+ Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID));\n master.get();\n // force reloading dangling indices with a cluster state republish\n client().admin().cluster().prepareReroute().get();",
"filename": "plugins/mapper-size/src/test/java/org/elasticsearch/index/mapper/size/SizeFieldMapperUpgradeTests.java",
"status": "modified"
}
]
}
|
{
"body": "after the lucene 5.3 upgrade, i looked at how ES uses lucene's filesystem locking. most places are ok, obtaining a lock and doing stuff in a try/finally. However NodeEnvironment is a totally different story. Can we fix the use of locking here?\n1. `deleteShardDirectorySafe` is anything but safe. it calls `deleteShardDirectoryUnderLock` which doesn't actually delete under a lock either!!!! It calls this bogus method: `acquireFSLockForPaths` which acquires _then releases_ locks. Why? Why? Why?\n2. `assertEnvIsLocked` is only called under assert. why? Look at `findAllIndices`, its about to do something really expensive, why can't the call to `ensureValid` be a real check?\n3. `assertEnvIsLocked` has a bunch of leniency, why in the hell would it return `true` when closed or when there are no locks at all, thats broken.\n\nAfter this stuff is fixed, any places here doing heavy operations (e.g. N filesystem operations) should seriously consider calling `ensureValid` on any locks that are supposed to be held. It means you do N+1 operations or whatever but man, if what we are doing is not important, then why are we using fs locks?\n",
"comments": [
{
"body": "> deleteShardDirectorySafe is anything but safe. it calls deleteShardDirectoryUnderLock which doesn't actually delete under a lock either!!!! It calls this bogus method: acquireFSLockForPaths which acquires then releases locks. Why? Why? Why?\n\nso we need to clarify the naming here, we actually delete under lock but it's not the IW lock it's a shard lock that we maintain internally per Node.\n\n``` Java\npublic void deleteShardDirectorySafe(ShardId shardId, @IndexSettings Settings indexSettings) throws IOException {\n // This is to ensure someone doesn't use Settings.EMPTY\n assert indexSettings != Settings.EMPTY;\n final Path[] paths = availableShardPaths(shardId);\n logger.trace(\"deleting shard {} directory, paths: [{}]\", shardId, paths);\n try (ShardLock lock = shardLock(shardId)) { // <==== here we lock and keep the lock - it's JVM internal\n deleteShardDirectoryUnderLock(lock, indexSettings);\n }\n }\n```\n\nthe `Why? Why? Why?` is especially interesting and I think you should talk to the guy who reviewed the change: `https://github.com/elastic/elasticsearch/pull/11127` He said:\n\n`yes, I am +1 for this approach. Maybe we can add a line to the javadocs just mentioning this is the case. Especially if you think about locking on shared filesystems, we should not rush to do something complex.`\n\nI am not sure if the Javadoc happened but it need clarification.\n\n> assertEnvIsLocked is only called under assert. why? Look at findAllIndices, its about to do something really expensive, why can't the call to ensureValid be a real check?\n\n+1 to make it a real check where it makes sense...\n\n> assertEnvIsLocked has a bunch of leniency, why in the hell would it return true when closed or when there are no locks at all, thats broken.\n\n+1 to beef it up.\n",
"created_at": "2015-09-02T06:42:50Z"
},
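Points 2 and 3 of the issue above ask for `assertEnvIsLocked` to become a strict check rather than a lenient assert. A rough sketch of what such a hard check could look like, assuming Lucene's `Lock#ensureValid()` API; the class and method names here are hypothetical, not the actual NodeEnvironment code:

```java
import org.apache.lucene.store.Lock;

// Hypothetical sketch: instead of an assert that leniently returns true when
// the environment is closed or holds no locks, throw so that expensive
// filesystem walks fail fast when the node locks are gone or stale.
final class StrictEnvLockCheck {
    private final Lock[] nodeLocks;   // one lock per data path, held for the node's lifetime
    private volatile boolean closed;

    StrictEnvLockCheck(Lock[] nodeLocks) {
        this.nodeLocks = nodeLocks;
    }

    void close() {
        closed = true;
    }

    void ensureEnvIsLocked() {
        if (closed) {
            throw new IllegalStateException("node environment is closed");
        }
        if (nodeLocks == null || nodeLocks.length == 0) {
            throw new IllegalStateException("no node locks are held");
        }
        for (Lock lock : nodeLocks) {
            try {
                lock.ensureValid();   // verifies the underlying filesystem lock is still intact
            } catch (Exception e) {
                throw new IllegalStateException("node lock [" + lock + "] is no longer valid", e);
            }
        }
    }
}
```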
{
"body": "Yes, I remember this, but now is our chance to fix it, so locking is as good as we can make it. It seems we are broken because of the placement of the lock files being underneath what is deleted, but that is something fixable.\n\nIts 2.0, there is no constraint about back compat here, so I think its time to fix it correctly.\n\nAdditionally we spent lots of time, and added lots of paranoia in lucene to actually help with shitty behavior from shared filesystems, so it would be nice if it stands a chance.\n\nAs far as the shard lock, i have no idea what that is. How is it better than a filesystem lock? Its definitely got a shitload of abstractions, but i can't tell if its anything more than a in-process RWL.\n",
"created_at": "2015-09-02T06:49:20Z"
},
{
"body": "I think we should make this straight forward and add a `/locks` directory to each root path we are using. This directory can then be full of locks and will never be deleted. The crucial part is that it has to be on the same mount as the actual data it protects otherwise it will likely not help at all in the shared FS case.\n",
"created_at": "2015-09-02T07:01:26Z"
},
{
"body": "yeah, i think something along those lines: though is 'never deleted' a problem with ppl that have tons and tons of shards cycling through? accumulating a bunch of 0-byte files sounds dangerous and eventually the directory is gonna crap its pants. \n\nDeleting an NIOFS lock file is especially tricky and we just don't do it ever in lucene (we leave the lock file around). I dont know how to fix that without adding a \"master\" lock file that always stays around and is acquired around individual lock acquire/release+delete.\n",
"created_at": "2015-09-02T08:14:02Z"
},
{
"body": "> Deleting an NIOFS lock file is especially tricky and we just don't do it ever in lucene (we leave the lock file around). I dont know how to fix that without adding a \"master\" lock file that always stays around and is acquired around individual lock acquire/release+delete.\n\nyeah we won't have a way around that I guess. I think what we can do is to have an `$index_name.lock` that you need to own to make changes to the `$index_name_$shardId.lock` which also allows to delete it. That reduces the set to the number of indices. We can safely delete the `$index_name.lock` once the last shard of the index is deleted?\n",
"created_at": "2015-09-03T07:34:26Z"
},
{
"body": "Why do that? just have global.lock. Its only needed around the actaul _acquire_ and _release+delete_. Its not gonna cause a concurrency issue. \n\nDoing this in a more fine grained way makes zero sense.\n",
"created_at": "2015-09-03T07:44:45Z"
},
{
"body": "> Doing this in a more fine grained way makes zero sense.\n\nhaving an index level lock make sense to have here anyway since we also have index metadata we want to protect from concurrent modifications. All I was saying here is that we might be able to get away with not locking the global lock as long as we are in the context of an index. \n",
"created_at": "2015-09-03T07:57:52Z"
}
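The thread above converges on keeping lock files in a dedicated `locks/` directory on the same mount as the data, with a `global.lock` that is never deleted and is held only around the acquire and the release+delete of the per-shard lock files. A rough sketch of that idea using Lucene's `NativeFSLockFactory`; beyond the layout described in the discussion, the names and structure are illustrative:

```java
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Lock;
import org.apache.lucene.store.NativeFSLockFactory;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: per-shard lock files live under a "locks" directory;
// "global.lock" stays around forever and guards the creation and the
// release+delete of the individual shard lock files.
final class ShardLockFilesSketch {
    private static final String GLOBAL_LOCK = "global.lock";

    private final Directory locksDir;
    private final Path locksPath;

    ShardLockFilesSketch(Path locksPath) throws IOException {
        this.locksPath = locksPath;
        this.locksDir = FSDirectory.open(locksPath, NativeFSLockFactory.INSTANCE);
    }

    /** Acquires a per-shard lock file while holding the global lock. */
    Lock acquire(String shardLockName) throws IOException {
        try (Lock global = locksDir.obtainLock(GLOBAL_LOCK)) {
            return locksDir.obtainLock(shardLockName);
        } // global lock released here; the shard lock stays held by the caller
    }

    /** Releases a per-shard lock and deletes its file, again under the global lock. */
    void releaseAndDelete(Lock shardLock, String shardLockName) throws IOException {
        try (Lock global = locksDir.obtainLock(GLOBAL_LOCK)) {
            shardLock.close();
            Files.deleteIfExists(locksPath.resolve(shardLockName));
        }
    }
}
```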
],
"number": 13264,
"title": "Locking in NodeEnvironment is completely broken"
}
|
{
"body": "Following up https://github.com/elastic/elasticsearch/pull/16217 , This PR uses `${data.paths}/nodes/{node.id}/indices/{index.uuid}` \ninstead of `${data.paths}/nodes/{node.id}/indices/{index.name}` pattern to store index \nfolder on disk.\nThis way we avoid collision between indices that are named the same (deleted and recreated).\n\nCloses #13265\nCloses #13264\nCloses #14932\nCloses #15853\nCloses #14512\n",
"number": 16442,
"review_comments": [
{
"body": "can we always use the Index class to refer to an index? we should get in this habit so we always have both name and uuid available.\n",
"created_at": "2016-02-04T07:43:22Z"
},
{
"body": "this should really be `hasIndex(Index index)` no?\n",
"created_at": "2016-02-04T16:37:31Z"
},
{
"body": "I think we should add a `Index#getPathIdentifier()` to have a single place to calculate it?\n",
"created_at": "2016-02-04T16:38:21Z"
},
{
"body": "++\n",
"created_at": "2016-02-04T16:38:29Z"
},
{
"body": "I think we need to fix this too that we don't read the index name from the directory name. We have to read some descriptor which I think is available everywhere now if not it's not an index. Then when we have that we can just use Index.java as a key\n",
"created_at": "2016-02-04T16:40:16Z"
},
{
"body": "fixed\n",
"created_at": "2016-02-05T03:23:35Z"
},
{
"body": "this has been removed\n",
"created_at": "2016-02-05T03:23:39Z"
},
{
"body": "great idea, thanks for the suggestion!\n",
"created_at": "2016-02-05T03:23:42Z"
},
{
"body": "Now we load all the indices state files upfront and then add them to dangling indices if they are not present in the cluster state or are not identified as dangling already. Is this what you were getting at by \"read some descriptor\"? This behaviour is different from before where we only used to load the state of indices that were relevant. We also don't try to rename the index according to the folder name, like before.\n",
"created_at": "2016-02-05T03:32:13Z"
},
{
"body": "looking at the MetaDataStateFormat code, we should be able to read the format from the file - something stinks here :) . Also (different change) I think we should remove that setting and always write smile (and read what ever we find).\n",
"created_at": "2016-03-08T12:52:09Z"
},
{
"body": "why do we need all these file copies? can't we just do a top level folder rename? consolidating to one folder is a 2.0 feature?\n",
"created_at": "2016-03-08T13:04:44Z"
},
{
"body": "can we check the uuid and make sure it's the same and if not log a warning?\n",
"created_at": "2016-03-08T13:08:25Z"
},
{
"body": "this feels too lenient to me. At the very least we should add a parameter indicating whether it's OK to be lenient (dangling indices detection ) and be strict (throw an exception on node start up). \n",
"created_at": "2016-03-08T13:20:03Z"
},
{
"body": "I don't think this is possible any more? we added a filter on the path names?\n",
"created_at": "2016-03-08T13:24:03Z"
},
{
"body": "> looking at the MetaDataStateFormat code, we should be able to read the format from the file\n\nDo you mean that MetaDataStateFormat#loadLatestState reads the format from the file? I hardcoded the format to be `SMILE`\n\n> Also (different change) I think we should remove that setting and always write smile (and read what ever we find).\n\nI will open an issue for this.\n",
"created_at": "2016-03-09T14:30:59Z"
},
{
"body": "Simplified the upgrade by renaming the top level folder. Thanks for the suggestion.\n",
"created_at": "2016-03-09T14:31:43Z"
},
{
"body": "added\n",
"created_at": "2016-03-09T14:31:54Z"
},
{
"body": "Now we throw an ISE on startup when an invalid index folder name is found\n",
"created_at": "2016-03-09T14:33:48Z"
},
{
"body": "I think we can make this trace?\n",
"created_at": "2016-03-09T16:06:27Z"
},
{
"body": "can we add something that will explain why we're doing this of the unordained user? something ala upgrading indexing folder to new naming conventions\n",
"created_at": "2016-03-09T16:08:06Z"
},
{
"body": "can we add a comment into why need this check? (we already renamed it before)\n",
"created_at": "2016-03-09T16:11:33Z"
},
{
"body": "call it upgradeIndicesIfNeeded?\n",
"created_at": "2016-03-09T16:11:37Z"
},
{
"body": "this might also be a `FileNotFoundException`? ie `} catch (NoSuchFileException| | FileNotFoundException ignored) {`\n",
"created_at": "2016-03-13T13:25:33Z"
},
{
"body": "wow this is scary as shit I guess this means we are restarting multiple nodes at the same time in a full cluster restart. I think we can't do this neither support it on a shared FS. If we run into this situation we should fail and not swallow exceptions IMO\n",
"created_at": "2016-03-13T13:28:39Z"
},
{
"body": "lets document that in the migration guides\n",
"created_at": "2016-03-13T13:29:06Z"
},
{
"body": "@mikemccand this is unused and get removed in 4d38856f7017275e326df52a44b90662b2f3da6a - was this a mistake or did you just not remove this method?\n",
"created_at": "2016-03-13T13:36:38Z"
},
{
"body": "this should never happen right? we just got it from a dir list?\n",
"created_at": "2016-03-14T10:08:12Z"
},
{
"body": "can we log this in debug? we're going to log it on every node start.. \n",
"created_at": "2016-03-14T10:08:50Z"
},
{
"body": "wondering if this should be a warn... it means we have an unknow folder in our universe? \n",
"created_at": "2016-03-14T10:09:29Z"
},
{
"body": "can we name this readOnlyMetaDataMetaDataStateFormat\n",
"created_at": "2016-03-14T10:11:00Z"
}
],
"title": "Rename index folder to index_uuid"
}
|
{
"commits": [
{
"message": "Add upgrader to upgrade old indices to new naming convention"
},
{
"message": "remove redundant getters in MetaData"
},
{
"message": "use index uuid as folder name to decouple index folder name from index name"
},
{
"message": "adapt tests to use index uuid as folder name"
}
],
"files": [
{
"diff": "@@ -79,7 +79,7 @@ public ClusterStateHealth(MetaData clusterMetaData, RoutingTable routingTables)\n * @param clusterState The current cluster state. Must not be null.\n */\n public ClusterStateHealth(ClusterState clusterState) {\n- this(clusterState, clusterState.metaData().concreteAllIndices());\n+ this(clusterState, clusterState.metaData().getConcreteAllIndices());\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/cluster/health/ClusterStateHealth.java",
"status": "modified"
},
{
"diff": "@@ -432,7 +432,7 @@ private Map<String, Set<String>> resolveSearchRoutingAllIndices(MetaData metaDat\n if (routing != null) {\n Set<String> r = Strings.splitStringByCommaToSet(routing);\n Map<String, Set<String>> routings = new HashMap<>();\n- String[] concreteIndices = metaData.concreteAllIndices();\n+ String[] concreteIndices = metaData.getConcreteAllIndices();\n for (String index : concreteIndices) {\n routings.put(index, r);\n }\n@@ -472,7 +472,7 @@ static boolean isExplicitAllPattern(List<String> aliasesOrIndices) {\n */\n boolean isPatternMatchingAllIndices(MetaData metaData, String[] indicesOrAliases, String[] concreteIndices) {\n // if we end up matching on all indices, check, if its a wildcard parameter, or a \"-something\" structure\n- if (concreteIndices.length == metaData.concreteAllIndices().length && indicesOrAliases.length > 0) {\n+ if (concreteIndices.length == metaData.getConcreteAllIndices().length && indicesOrAliases.length > 0) {\n \n //we might have something like /-test1,+test1 that would identify all indices\n //or something like /-test1 with test1 index missing and IndicesOptions.lenient()\n@@ -728,11 +728,11 @@ private boolean isEmptyOrTrivialWildcard(List<String> expressions) {\n \n private List<String> resolveEmptyOrTrivialWildcard(IndicesOptions options, MetaData metaData, boolean assertEmpty) {\n if (options.expandWildcardsOpen() && options.expandWildcardsClosed()) {\n- return Arrays.asList(metaData.concreteAllIndices());\n+ return Arrays.asList(metaData.getConcreteAllIndices());\n } else if (options.expandWildcardsOpen()) {\n- return Arrays.asList(metaData.concreteAllOpenIndices());\n+ return Arrays.asList(metaData.getConcreteAllOpenIndices());\n } else if (options.expandWildcardsClosed()) {\n- return Arrays.asList(metaData.concreteAllClosedIndices());\n+ return Arrays.asList(metaData.getConcreteAllClosedIndices());\n } else {\n assert assertEmpty : \"Shouldn't end up here\";\n return Collections.emptyList();",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java",
"status": "modified"
},
{
"diff": "@@ -370,26 +370,14 @@ public ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> findM\n /**\n * Returns all the concrete indices.\n */\n- public String[] concreteAllIndices() {\n- return allIndices;\n- }\n-\n public String[] getConcreteAllIndices() {\n- return concreteAllIndices();\n- }\n-\n- public String[] concreteAllOpenIndices() {\n- return allOpenIndices;\n+ return allIndices;\n }\n \n public String[] getConcreteAllOpenIndices() {\n return allOpenIndices;\n }\n \n- public String[] concreteAllClosedIndices() {\n- return allClosedIndices;\n- }\n-\n public String[] getConcreteAllClosedIndices() {\n return allClosedIndices;\n }\n@@ -795,9 +783,9 @@ public static MetaData addDefaultUnitsIfNeeded(ESLogger logger, MetaData metaDat\n metaData.getIndices(),\n metaData.getTemplates(),\n metaData.getCustoms(),\n- metaData.concreteAllIndices(),\n- metaData.concreteAllOpenIndices(),\n- metaData.concreteAllClosedIndices(),\n+ metaData.getConcreteAllIndices(),\n+ metaData.getConcreteAllOpenIndices(),\n+ metaData.getConcreteAllClosedIndices(),\n metaData.getAliasAndIndexLookup());\n } else {\n // No changes:",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,154 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n+import org.elasticsearch.gateway.MetaStateService;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexSettings;\n+\n+import java.io.FileNotFoundException;\n+import java.io.IOException;\n+import java.nio.file.Files;\n+import java.nio.file.NoSuchFileException;\n+import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n+\n+/**\n+ * Renames index folders from {index.name} to {index.uuid}\n+ */\n+public class IndexFolderUpgrader {\n+ private final NodeEnvironment nodeEnv;\n+ private final Settings settings;\n+ private final ESLogger logger = Loggers.getLogger(IndexFolderUpgrader.class);\n+ private final MetaDataStateFormat<IndexMetaData> indexStateFormat = readOnlyIndexMetaDataStateFormat();\n+\n+ /**\n+ * Creates a new upgrader instance\n+ * @param settings node settings\n+ * @param nodeEnv the node env to operate on\n+ */\n+ IndexFolderUpgrader(Settings settings, NodeEnvironment nodeEnv) {\n+ this.settings = settings;\n+ this.nodeEnv = nodeEnv;\n+ }\n+\n+ /**\n+ * Moves the index folder found in <code>source</code> to <code>target</code>\n+ */\n+ void upgrade(final Index index, final Path source, final Path target) throws IOException {\n+ boolean success = false;\n+ try {\n+ Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);\n+ success = true;\n+ } catch (NoSuchFileException | FileNotFoundException exception) {\n+ // thrown when the source is non-existent because the folder was renamed\n+ // by another node (shared FS) after we checked if the target exists\n+ logger.error(\"multiple nodes trying to upgrade [{}] in parallel, retry upgrading with single node\",\n+ exception, target);\n+ throw exception;\n+ } finally {\n+ if (success) {\n+ logger.info(\"{} moved from [{}] to [{}]\", index, source, target);\n+ logger.trace(\"{} syncing directory [{}]\", index, target);\n+ IOUtils.fsync(target, true);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Renames <code>indexFolderName</code> index folders found in node paths and custom path\n+ * iff {@link #needsUpgrade(Index, String)} is true.\n+ * Index folder in custom paths are renamed first followed by index 
folders in each node path.\n+ */\n+ void upgrade(final String indexFolderName) throws IOException {\n+ for (NodeEnvironment.NodePath nodePath : nodeEnv.nodePaths()) {\n+ final Path indexFolderPath = nodePath.indicesPath.resolve(indexFolderName);\n+ final IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger, indexFolderPath);\n+ if (indexMetaData != null) {\n+ final Index index = indexMetaData.getIndex();\n+ if (needsUpgrade(index, indexFolderName)) {\n+ logger.info(\"{} upgrading [{}] to new naming convention\", index, indexFolderPath);\n+ final IndexSettings indexSettings = new IndexSettings(indexMetaData, settings);\n+ if (indexSettings.hasCustomDataPath()) {\n+ // we rename index folder in custom path before renaming them in any node path\n+ // to have the index state under a not-yet-upgraded index folder, which we use to\n+ // continue renaming after a incomplete upgrade.\n+ final Path customLocationSource = nodeEnv.resolveBaseCustomLocation(indexSettings)\n+ .resolve(indexFolderName);\n+ final Path customLocationTarget = customLocationSource.resolveSibling(index.getUUID());\n+ // we rename the folder in custom path only the first time we encounter a state\n+ // in a node path, which needs upgrading, it is a no-op for subsequent node paths\n+ if (Files.exists(customLocationSource) // might not exist if no data was written for this index\n+ && Files.exists(customLocationTarget) == false) {\n+ upgrade(index, customLocationSource, customLocationTarget);\n+ } else {\n+ logger.info(\"[{}] no upgrade needed - already upgraded\", customLocationTarget);\n+ }\n+ }\n+ upgrade(index, indexFolderPath, indexFolderPath.resolveSibling(index.getUUID()));\n+ } else {\n+ logger.debug(\"[{}] no upgrade needed - already upgraded\", indexFolderPath);\n+ }\n+ } else {\n+ logger.warn(\"[{}] no index state found - ignoring\", indexFolderPath);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Upgrades all indices found under <code>nodeEnv</code>. Already upgraded indices are ignored.\n+ */\n+ public static void upgradeIndicesIfNeeded(final Settings settings, final NodeEnvironment nodeEnv) throws IOException {\n+ final IndexFolderUpgrader upgrader = new IndexFolderUpgrader(settings, nodeEnv);\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ upgrader.upgrade(indexFolderName);\n+ }\n+ }\n+\n+ static boolean needsUpgrade(Index index, String indexFolderName) {\n+ return indexFolderName.equals(index.getUUID()) == false;\n+ }\n+\n+ static MetaDataStateFormat<IndexMetaData> readOnlyIndexMetaDataStateFormat() {\n+ // NOTE: XContentType param is not used as we use the format read from the serialized index state\n+ return new MetaDataStateFormat<IndexMetaData>(XContentType.SMILE, MetaStateService.INDEX_STATE_FILE_PREFIX) {\n+\n+ @Override\n+ public void toXContent(XContentBuilder builder, IndexMetaData state) throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public IndexMetaData fromXContent(XContentParser parser) throws IOException {\n+ return IndexMetaData.Builder.fromXContent(parser);\n+ }\n+ };\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/util/IndexFolderUpgrader.java",
"status": "added"
},
{
"diff": "@@ -70,7 +70,6 @@\n import java.util.concurrent.Semaphore;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n-import java.util.stream.Collectors;\n \n import static java.util.Collections.unmodifiableSet;\n \n@@ -89,7 +88,7 @@ public static class NodePath {\n * not running on Linux, or we hit an exception trying), True means the device possibly spins and False means it does not. */\n public final Boolean spins;\n \n- public NodePath(Path path, Environment environment) throws IOException {\n+ public NodePath(Path path) throws IOException {\n this.path = path;\n this.indicesPath = path.resolve(INDICES_FOLDER);\n this.fileStore = Environment.getFileStore(path);\n@@ -102,16 +101,18 @@ public NodePath(Path path, Environment environment) throws IOException {\n \n /**\n * Resolves the given shards directory against this NodePath\n+ * ${data.paths}/nodes/{node.id}/indices/{index.uuid}/{shard.id}\n */\n public Path resolve(ShardId shardId) {\n return resolve(shardId.getIndex()).resolve(Integer.toString(shardId.id()));\n }\n \n /**\n- * Resolves the given indexes directory against this NodePath\n+ * Resolves index directory against this NodePath\n+ * ${data.paths}/nodes/{node.id}/indices/{index.uuid}\n */\n public Path resolve(Index index) {\n- return indicesPath.resolve(index.getName());\n+ return indicesPath.resolve(index.getUUID());\n }\n \n @Override\n@@ -131,7 +132,7 @@ public String toString() {\n \n private final int localNodeId;\n private final AtomicBoolean closed = new AtomicBoolean(false);\n- private final Map<ShardLockKey, InternalShardLock> shardLocks = new HashMap<>();\n+ private final Map<ShardId, InternalShardLock> shardLocks = new HashMap<>();\n \n /**\n * Maximum number of data nodes that should run in an environment.\n@@ -186,7 +187,7 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n logger.trace(\"obtaining node lock on {} ...\", dir.toAbsolutePath());\n try {\n locks[dirIndex] = luceneDir.obtainLock(NODE_LOCK_FILENAME);\n- nodePaths[dirIndex] = new NodePath(dir, environment);\n+ nodePaths[dirIndex] = new NodePath(dir);\n localNodeId = possibleLockId;\n } catch (LockObtainFailedException ex) {\n logger.trace(\"failed to obtain node lock on {}\", dir.toAbsolutePath());\n@@ -445,11 +446,11 @@ public void deleteIndexDirectorySafe(Index index, long lockTimeoutMS, IndexSetti\n * @param indexSettings settings for the index being deleted\n */\n public void deleteIndexDirectoryUnderLock(Index index, IndexSettings indexSettings) throws IOException {\n- final Path[] indexPaths = indexPaths(index.getName());\n+ final Path[] indexPaths = indexPaths(index);\n logger.trace(\"deleting index {} directory, paths({}): [{}]\", index, indexPaths.length, indexPaths);\n IOUtils.rm(indexPaths);\n if (indexSettings.hasCustomDataPath()) {\n- Path customLocation = resolveCustomLocation(indexSettings, index.getName());\n+ Path customLocation = resolveIndexCustomLocation(indexSettings);\n logger.trace(\"deleting custom index {} directory [{}]\", index, customLocation);\n IOUtils.rm(customLocation);\n }\n@@ -517,17 +518,16 @@ public ShardLock shardLock(ShardId id) throws IOException {\n */\n public ShardLock shardLock(final ShardId shardId, long lockTimeoutMS) throws IOException {\n logger.trace(\"acquiring node shardlock on [{}], timeout [{}]\", shardId, lockTimeoutMS);\n- final ShardLockKey shardLockKey = new ShardLockKey(shardId);\n final InternalShardLock shardLock;\n final boolean acquired;\n synchronized (shardLocks) 
{\n- if (shardLocks.containsKey(shardLockKey)) {\n- shardLock = shardLocks.get(shardLockKey);\n+ if (shardLocks.containsKey(shardId)) {\n+ shardLock = shardLocks.get(shardId);\n shardLock.incWaitCount();\n acquired = false;\n } else {\n- shardLock = new InternalShardLock(shardLockKey);\n- shardLocks.put(shardLockKey, shardLock);\n+ shardLock = new InternalShardLock(shardId);\n+ shardLocks.put(shardId, shardLock);\n acquired = true;\n }\n }\n@@ -547,7 +547,7 @@ public ShardLock shardLock(final ShardId shardId, long lockTimeoutMS) throws IOE\n @Override\n protected void closeInternal() {\n shardLock.release();\n- logger.trace(\"released shard lock for [{}]\", shardLockKey);\n+ logger.trace(\"released shard lock for [{}]\", shardId);\n }\n };\n }\n@@ -559,51 +559,7 @@ protected void closeInternal() {\n */\n public Set<ShardId> lockedShards() {\n synchronized (shardLocks) {\n- Set<ShardId> lockedShards = shardLocks.keySet().stream()\n- .map(shardLockKey -> new ShardId(new Index(shardLockKey.indexName, \"_na_\"), shardLockKey.shardId)).collect(Collectors.toSet());\n- return unmodifiableSet(lockedShards);\n- }\n- }\n-\n- // a key for the shard lock. we can't use shardIds, because the contain\n- // the index uuid, but we want the lock semantics to the same as we map indices to disk folders, i.e., without the uuid (for now).\n- private final class ShardLockKey {\n- final String indexName;\n- final int shardId;\n-\n- public ShardLockKey(final ShardId shardId) {\n- this.indexName = shardId.getIndexName();\n- this.shardId = shardId.id();\n- }\n-\n- @Override\n- public String toString() {\n- return \"[\" + indexName + \"][\" + shardId + \"]\";\n- }\n-\n- @Override\n- public boolean equals(Object o) {\n- if (this == o) {\n- return true;\n- }\n- if (o == null || getClass() != o.getClass()) {\n- return false;\n- }\n-\n- ShardLockKey that = (ShardLockKey) o;\n-\n- if (shardId != that.shardId) {\n- return false;\n- }\n- return indexName.equals(that.indexName);\n-\n- }\n-\n- @Override\n- public int hashCode() {\n- int result = indexName.hashCode();\n- result = 31 * result + shardId;\n- return result;\n+ return unmodifiableSet(new HashSet<>(shardLocks.keySet()));\n }\n }\n \n@@ -616,10 +572,10 @@ private final class InternalShardLock {\n */\n private final Semaphore mutex = new Semaphore(1);\n private int waitCount = 1; // guarded by shardLocks\n- private final ShardLockKey lockKey;\n+ private final ShardId shardId;\n \n- InternalShardLock(ShardLockKey id) {\n- lockKey = id;\n+ InternalShardLock(ShardId shardId) {\n+ this.shardId = shardId;\n mutex.acquireUninterruptibly();\n }\n \n@@ -639,10 +595,10 @@ private void decWaitCount() {\n synchronized (shardLocks) {\n assert waitCount > 0 : \"waitCount is \" + waitCount + \" but should be > 0\";\n --waitCount;\n- logger.trace(\"shard lock wait count for [{}] is now [{}]\", lockKey, waitCount);\n+ logger.trace(\"shard lock wait count for {} is now [{}]\", shardId, waitCount);\n if (waitCount == 0) {\n- logger.trace(\"last shard lock wait decremented, removing lock for [{}]\", lockKey);\n- InternalShardLock remove = shardLocks.remove(lockKey);\n+ logger.trace(\"last shard lock wait decremented, removing lock for {}\", shardId);\n+ InternalShardLock remove = shardLocks.remove(shardId);\n assert remove != null : \"Removed lock was null\";\n }\n }\n@@ -651,11 +607,11 @@ private void decWaitCount() {\n void acquire(long timeoutInMillis) throws LockObtainFailedException{\n try {\n if (mutex.tryAcquire(timeoutInMillis, TimeUnit.MILLISECONDS) == false) {\n- throw new 
LockObtainFailedException(\"Can't lock shard \" + lockKey + \", timed out after \" + timeoutInMillis + \"ms\");\n+ throw new LockObtainFailedException(\"Can't lock shard \" + shardId + \", timed out after \" + timeoutInMillis + \"ms\");\n }\n } catch (InterruptedException e) {\n Thread.currentThread().interrupt();\n- throw new LockObtainFailedException(\"Can't lock shard \" + lockKey + \", interrupted\", e);\n+ throw new LockObtainFailedException(\"Can't lock shard \" + shardId + \", interrupted\", e);\n }\n }\n }\n@@ -698,11 +654,11 @@ public NodePath[] nodePaths() {\n /**\n * Returns all index paths.\n */\n- public Path[] indexPaths(String indexName) {\n+ public Path[] indexPaths(Index index) {\n assert assertEnvIsLocked();\n Path[] indexPaths = new Path[nodePaths.length];\n for (int i = 0; i < nodePaths.length; i++) {\n- indexPaths[i] = nodePaths[i].indicesPath.resolve(indexName);\n+ indexPaths[i] = nodePaths[i].resolve(index);\n }\n return indexPaths;\n }\n@@ -725,25 +681,47 @@ public Path[] availableShardPaths(ShardId shardId) {\n return shardLocations;\n }\n \n- public Set<String> findAllIndices() throws IOException {\n+ /**\n+ * Returns all folder names in ${data.paths}/nodes/{node.id}/indices folder\n+ */\n+ public Set<String> availableIndexFolders() throws IOException {\n if (nodePaths == null || locks == null) {\n throw new IllegalStateException(\"node is not configured to store local location\");\n }\n assert assertEnvIsLocked();\n- Set<String> indices = new HashSet<>();\n+ Set<String> indexFolders = new HashSet<>();\n for (NodePath nodePath : nodePaths) {\n Path indicesLocation = nodePath.indicesPath;\n if (Files.isDirectory(indicesLocation)) {\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesLocation)) {\n for (Path index : stream) {\n if (Files.isDirectory(index)) {\n- indices.add(index.getFileName().toString());\n+ indexFolders.add(index.getFileName().toString());\n }\n }\n }\n }\n }\n- return indices;\n+ return indexFolders;\n+\n+ }\n+\n+ /**\n+ * Resolves all existing paths to <code>indexFolderName</code> in ${data.paths}/nodes/{node.id}/indices\n+ */\n+ public Path[] resolveIndexFolder(String indexFolderName) throws IOException {\n+ if (nodePaths == null || locks == null) {\n+ throw new IllegalStateException(\"node is not configured to store local location\");\n+ }\n+ assert assertEnvIsLocked();\n+ List<Path> paths = new ArrayList<>(nodePaths.length);\n+ for (NodePath nodePath : nodePaths) {\n+ Path indexFolder = nodePath.indicesPath.resolve(indexFolderName);\n+ if (Files.exists(indexFolder)) {\n+ paths.add(indexFolder);\n+ }\n+ }\n+ return paths.toArray(new Path[paths.size()]);\n }\n \n /**\n@@ -761,13 +739,13 @@ public Set<ShardId> findAllShardIds(final Index index) throws IOException {\n }\n assert assertEnvIsLocked();\n final Set<ShardId> shardIds = new HashSet<>();\n- String indexName = index.getName();\n+ final String indexUniquePathId = index.getUUID();\n for (final NodePath nodePath : nodePaths) {\n Path location = nodePath.indicesPath;\n if (Files.isDirectory(location)) {\n try (DirectoryStream<Path> indexStream = Files.newDirectoryStream(location)) {\n for (Path indexPath : indexStream) {\n- if (indexName.equals(indexPath.getFileName().toString())) {\n+ if (indexUniquePathId.equals(indexPath.getFileName().toString())) {\n shardIds.addAll(findAllShardsForIndex(indexPath, index));\n }\n }\n@@ -778,7 +756,7 @@ public Set<ShardId> findAllShardIds(final Index index) throws IOException {\n }\n \n private static Set<ShardId> 
findAllShardsForIndex(Path indexPath, Index index) throws IOException {\n- assert indexPath.getFileName().toString().equals(index.getName());\n+ assert indexPath.getFileName().toString().equals(index.getUUID());\n Set<ShardId> shardIds = new HashSet<>();\n if (Files.isDirectory(indexPath)) {\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {\n@@ -861,7 +839,7 @@ Settings getSettings() { // for testing\n *\n * @param indexSettings settings for the index\n */\n- private Path resolveCustomLocation(IndexSettings indexSettings) {\n+ public Path resolveBaseCustomLocation(IndexSettings indexSettings) {\n String customDataDir = indexSettings.customDataPath();\n if (customDataDir != null) {\n // This assert is because this should be caught by MetaDataCreateIndexService\n@@ -882,10 +860,9 @@ private Path resolveCustomLocation(IndexSettings indexSettings) {\n * the root path for the index.\n *\n * @param indexSettings settings for the index\n- * @param indexName index to resolve the path for\n */\n- private Path resolveCustomLocation(IndexSettings indexSettings, final String indexName) {\n- return resolveCustomLocation(indexSettings).resolve(indexName);\n+ private Path resolveIndexCustomLocation(IndexSettings indexSettings) {\n+ return resolveBaseCustomLocation(indexSettings).resolve(indexSettings.getUUID());\n }\n \n /**\n@@ -897,7 +874,7 @@ private Path resolveCustomLocation(IndexSettings indexSettings, final String ind\n * @param shardId shard to resolve the path to\n */\n public Path resolveCustomLocation(IndexSettings indexSettings, final ShardId shardId) {\n- return resolveCustomLocation(indexSettings, shardId.getIndexName()).resolve(Integer.toString(shardId.id()));\n+ return resolveIndexCustomLocation(indexSettings).resolve(Integer.toString(shardId.id()));\n }\n \n /**\n@@ -921,22 +898,24 @@ private void assertCanWrite() throws IOException {\n for (Path path : nodeDataPaths()) { // check node-paths are writable\n tryWriteTempFile(path);\n }\n- for (String index : this.findAllIndices()) {\n- for (Path path : this.indexPaths(index)) { // check index paths are writable\n- Path statePath = path.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n- tryWriteTempFile(statePath);\n- tryWriteTempFile(path);\n- }\n- for (ShardId shardID : this.findAllShardIds(new Index(index, IndexMetaData.INDEX_UUID_NA_VALUE))) {\n- Path[] paths = this.availableShardPaths(shardID);\n- for (Path path : paths) { // check shard paths are writable\n- Path indexDir = path.resolve(ShardPath.INDEX_FOLDER_NAME);\n- Path statePath = path.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n- Path translogDir = path.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n- tryWriteTempFile(indexDir);\n- tryWriteTempFile(translogDir);\n- tryWriteTempFile(statePath);\n- tryWriteTempFile(path);\n+ for (String indexFolderName : this.availableIndexFolders()) {\n+ for (Path indexPath : this.resolveIndexFolder(indexFolderName)) { // check index paths are writable\n+ Path indexStatePath = indexPath.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ tryWriteTempFile(indexStatePath);\n+ tryWriteTempFile(indexPath);\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexPath)) {\n+ for (Path shardPath : stream) {\n+ String fileName = shardPath.getFileName().toString();\n+ if (Files.isDirectory(shardPath) && fileName.chars().allMatch(Character::isDigit)) {\n+ Path indexDir = shardPath.resolve(ShardPath.INDEX_FOLDER_NAME);\n+ Path statePath = shardPath.resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ Path translogDir = 
shardPath.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n+ tryWriteTempFile(indexDir);\n+ tryWriteTempFile(translogDir);\n+ tryWriteTempFile(statePath);\n+ tryWriteTempFile(shardPath);\n+ }\n+ }\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -19,19 +19,25 @@\n \n package org.elasticsearch.gateway;\n \n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n \n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.stream.Collectors;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.unmodifiableMap;\n@@ -47,7 +53,7 @@ public class DanglingIndicesState extends AbstractComponent {\n private final MetaStateService metaStateService;\n private final LocalAllocateDangledIndices allocateDangledIndices;\n \n- private final Map<String, IndexMetaData> danglingIndices = ConcurrentCollections.newConcurrentMap();\n+ private final Map<Index, IndexMetaData> danglingIndices = ConcurrentCollections.newConcurrentMap();\n \n @Inject\n public DanglingIndicesState(Settings settings, NodeEnvironment nodeEnv, MetaStateService metaStateService,\n@@ -74,7 +80,7 @@ public void processDanglingIndices(MetaData metaData) {\n /**\n * The current set of dangling indices.\n */\n- Map<String, IndexMetaData> getDanglingIndices() {\n+ Map<Index, IndexMetaData> getDanglingIndices() {\n // This might be a good use case for CopyOnWriteHashMap\n return unmodifiableMap(new HashMap<>(danglingIndices));\n }\n@@ -83,10 +89,16 @@ Map<String, IndexMetaData> getDanglingIndices() {\n * Cleans dangling indices if they are already allocated on the provided meta data.\n */\n void cleanupAllocatedDangledIndices(MetaData metaData) {\n- for (String danglingIndex : danglingIndices.keySet()) {\n- if (metaData.hasIndex(danglingIndex)) {\n- logger.debug(\"[{}] no longer dangling (created), removing from dangling list\", danglingIndex);\n- danglingIndices.remove(danglingIndex);\n+ for (Index index : danglingIndices.keySet()) {\n+ final IndexMetaData indexMetaData = metaData.index(index);\n+ if (indexMetaData != null && indexMetaData.getIndex().getName().equals(index.getName())) {\n+ if (indexMetaData.getIndex().getUUID().equals(index.getUUID()) == false) {\n+ logger.warn(\"[{}] can not be imported as a dangling index, as there is already another index \" +\n+ \"with the same name but a different uuid. 
local index will be ignored (but not deleted)\", index);\n+ } else {\n+ logger.debug(\"[{}] no longer dangling (created), removing from dangling list\", index);\n+ }\n+ danglingIndices.remove(index);\n }\n }\n }\n@@ -104,36 +116,30 @@ void findNewAndAddDanglingIndices(MetaData metaData) {\n * that have state on disk, but are not part of the provided meta data, or not detected\n * as dangled already.\n */\n- Map<String, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n- final Set<String> indices;\n- try {\n- indices = nodeEnv.findAllIndices();\n- } catch (Throwable e) {\n- logger.warn(\"failed to list dangling indices\", e);\n- return emptyMap();\n+ Map<Index, IndexMetaData> findNewDanglingIndices(MetaData metaData) {\n+ final Set<String> excludeIndexPathIds = new HashSet<>(metaData.indices().size() + danglingIndices.size());\n+ for (ObjectCursor<IndexMetaData> cursor : metaData.indices().values()) {\n+ excludeIndexPathIds.add(cursor.value.getIndex().getUUID());\n }\n-\n- Map<String, IndexMetaData> newIndices = new HashMap<>();\n- for (String indexName : indices) {\n- if (metaData.hasIndex(indexName) == false && danglingIndices.containsKey(indexName) == false) {\n- try {\n- IndexMetaData indexMetaData = metaStateService.loadIndexState(indexName);\n- if (indexMetaData != null) {\n- logger.info(\"[{}] dangling index, exists on local file system, but not in cluster metadata, auto import to cluster state\", indexName);\n- if (!indexMetaData.getIndex().getName().equals(indexName)) {\n- logger.info(\"dangled index directory name is [{}], state name is [{}], renaming to directory name\", indexName, indexMetaData.getIndex());\n- indexMetaData = IndexMetaData.builder(indexMetaData).index(indexName).build();\n- }\n- newIndices.put(indexName, indexMetaData);\n- } else {\n- logger.debug(\"[{}] dangling index directory detected, but no state found\", indexName);\n- }\n- } catch (Throwable t) {\n- logger.warn(\"[{}] failed to load index state for detected dangled index\", t, indexName);\n+ excludeIndexPathIds.addAll(danglingIndices.keySet().stream().map(Index::getUUID).collect(Collectors.toList()));\n+ try {\n+ final List<IndexMetaData> indexMetaDataList = metaStateService.loadIndicesStates(excludeIndexPathIds::contains);\n+ Map<Index, IndexMetaData> newIndices = new HashMap<>(indexMetaDataList.size());\n+ for (IndexMetaData indexMetaData : indexMetaDataList) {\n+ if (metaData.hasIndex(indexMetaData.getIndex().getName())) {\n+ logger.warn(\"[{}] can not be imported as a dangling index, as index with same name already exists in cluster metadata\",\n+ indexMetaData.getIndex());\n+ } else {\n+ logger.info(\"[{}] dangling index, exists on local file system, but not in cluster metadata, auto import to cluster state\",\n+ indexMetaData.getIndex());\n+ newIndices.put(indexMetaData.getIndex(), indexMetaData);\n }\n }\n+ return newIndices;\n+ } catch (IOException e) {\n+ logger.warn(\"failed to list dangling indices\", e);\n+ return emptyMap();\n }\n- return newIndices;\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/gateway/DanglingIndicesState.java",
"status": "modified"
},
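The rewritten findNewDanglingIndices in the diff above first collects the UUIDs of indices already present in the cluster metadata (or already tracked as dangling) and only then loads the remaining on-disk states. Below is a rough, self-contained illustration of that exclude-then-load pattern; the record type, helper, and example UUIDs are made up for the example and stand in for MetaStateService.loadIndicesStates.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

// Illustration of excluding known index UUIDs before loading on-disk index states.
public class DanglingDetectionSketch {

    record IndexState(String name, String uuid) {}

    // stands in for loading index metadata from every folder whose name passes the predicate
    static List<IndexState> loadStatesExcluding(Map<String, IndexState> onDisk, Predicate<String> exclude) {
        List<IndexState> result = new ArrayList<>();
        for (Map.Entry<String, IndexState> entry : onDisk.entrySet()) {
            if (exclude.test(entry.getKey()) == false) { // folder name == index UUID
                result.add(entry.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // indices known to the cluster state or already flagged as dangling
        Set<String> excludeUuids = new HashSet<>(Set.of("uuidA", "uuidB"));
        // index folders found on disk, keyed by folder name (the UUID)
        Map<String, IndexState> onDisk = Map.of(
                "uuidA", new IndexState("logs", "uuidA"),
                "uuidC", new IndexState("orphaned", "uuidC"));
        List<IndexState> candidates = loadStatesExcluding(onDisk, excludeUuids::contains);
        candidates.forEach(s -> System.out.println("dangling candidate: " + s.name() + " [" + s.uuid() + "]"));
    }
}
```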
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.IndexFolderUpgrader;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.Index;\n \n@@ -86,6 +87,7 @@ public GatewayMetaState(Settings settings, NodeEnvironment nodeEnv, MetaStateSer\n try {\n ensureNoPre019State();\n pre20Upgrade();\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(settings, nodeEnv);\n long startNS = System.nanoTime();\n metaStateService.loadFullState();\n logger.debug(\"took {} to load state\", TimeValue.timeValueMillis(TimeValue.nsecToMSec(System.nanoTime() - startNS)));\n@@ -130,7 +132,7 @@ public void clusterChanged(ClusterChangedEvent event) {\n for (IndexMetaData indexMetaData : newMetaData) {\n IndexMetaData indexMetaDataOnDisk = null;\n if (indexMetaData.getState().equals(IndexMetaData.State.CLOSE)) {\n- indexMetaDataOnDisk = metaStateService.loadIndexState(indexMetaData.getIndex().getName());\n+ indexMetaDataOnDisk = metaStateService.loadIndexState(indexMetaData.getIndex());\n }\n if (indexMetaDataOnDisk != null) {\n newPreviouslyWrittenIndices.add(indexMetaDataOnDisk.getIndex());\n@@ -158,15 +160,14 @@ public void clusterChanged(ClusterChangedEvent event) {\n // check and write changes in indices\n for (IndexMetaWriteInfo indexMetaWrite : writeInfo) {\n try {\n- metaStateService.writeIndex(indexMetaWrite.reason, indexMetaWrite.newMetaData, indexMetaWrite.previousMetaData);\n+ metaStateService.writeIndex(indexMetaWrite.reason, indexMetaWrite.newMetaData);\n } catch (Throwable e) {\n success = false;\n }\n }\n }\n \n danglingIndicesState.processDanglingIndices(newMetaData);\n-\n if (success) {\n previousMetaData = newMetaData;\n previouslyWrittenIndices = unmodifiableSet(relevantIndices);\n@@ -233,7 +234,8 @@ private void pre20Upgrade() throws Exception {\n // We successfully checked all indices for backward compatibility and found no non-upgradable indices, which\n // means the upgrade can continue. Now it's safe to overwrite index metadata with the new version.\n for (IndexMetaData indexMetaData : updateIndexMetaData) {\n- metaStateService.writeIndex(\"upgrade\", indexMetaData, null);\n+ // since we still haven't upgraded the index folders, we write index state in the old folder\n+ metaStateService.writeIndex(\"upgrade\", indexMetaData, nodeEnv.resolveIndexFolder(indexMetaData.getIndex().getName()));\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java",
"status": "modified"
},
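GatewayMetaState now calls IndexFolderUpgrader.upgradeIndicesIfNeeded before loading state, so name-based folders left behind by older versions end up under their UUID. The snippet below is only a minimal sketch of such a rename step, assuming the UUID has already been read from the folder's state file; the real upgrader also deals with multiple data paths and custom data paths, so this is not its actual code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: rename indices/{index.name} to indices/{index.uuid} if it hasn't been upgraded yet.
public class FolderUpgradeSketch {

    static void upgradeIfNeeded(Path indicesDir, String indexName, String indexUUID) throws IOException {
        Path source = indicesDir.resolve(indexName);
        Path target = indicesDir.resolve(indexUUID);
        if (Files.exists(target) || Files.exists(source) == false) {
            return; // already upgraded, or nothing to do
        }
        // an atomic top-level rename keeps a crash from leaving two partial copies behind
        Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path indicesDir = Files.createTempDirectory("indices");
        Files.createDirectories(indicesDir.resolve("my-index"));
        upgradeIfNeeded(indicesDir, "my-index", "Cu3CqzqbQOSMUYt5i6fZKg");
        System.out.println(Files.exists(indicesDir.resolve("Cu3CqzqbQOSMUYt5i6fZKg"))); // true
    }
}
```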
{
"diff": "@@ -33,9 +33,12 @@\n import org.elasticsearch.index.Index;\n \n import java.io.IOException;\n+import java.nio.file.Path;\n+import java.util.ArrayList;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Map;\n-import java.util.Set;\n+import java.util.function.Predicate;\n \n /**\n * Handles writing and loading both {@link MetaData} and {@link IndexMetaData}\n@@ -45,7 +48,7 @@ public class MetaStateService extends AbstractComponent {\n static final String FORMAT_SETTING = \"gateway.format\";\n \n static final String GLOBAL_STATE_FILE_PREFIX = \"global-\";\n- private static final String INDEX_STATE_FILE_PREFIX = \"state-\";\n+ public static final String INDEX_STATE_FILE_PREFIX = \"state-\";\n \n private final NodeEnvironment nodeEnv;\n \n@@ -91,14 +94,12 @@ MetaData loadFullState() throws Exception {\n } else {\n metaDataBuilder = MetaData.builder();\n }\n-\n- final Set<String> indices = nodeEnv.findAllIndices();\n- for (String index : indices) {\n- IndexMetaData indexMetaData = loadIndexState(index);\n- if (indexMetaData == null) {\n- logger.debug(\"[{}] failed to find metadata for existing index location\", index);\n- } else {\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger, nodeEnv.resolveIndexFolder(indexFolderName));\n+ if (indexMetaData != null) {\n metaDataBuilder.put(indexMetaData, false);\n+ } else {\n+ logger.debug(\"[{}] failed to find metadata for existing index location\", indexFolderName);\n }\n }\n return metaDataBuilder.build();\n@@ -108,10 +109,35 @@ MetaData loadFullState() throws Exception {\n * Loads the index state for the provided index name, returning null if doesn't exists.\n */\n @Nullable\n- IndexMetaData loadIndexState(String index) throws IOException {\n+ IndexMetaData loadIndexState(Index index) throws IOException {\n return indexStateFormat.loadLatestState(logger, nodeEnv.indexPaths(index));\n }\n \n+ /**\n+ * Loads all indices states available on disk\n+ */\n+ List<IndexMetaData> loadIndicesStates(Predicate<String> excludeIndexPathIdsPredicate) throws IOException {\n+ List<IndexMetaData> indexMetaDataList = new ArrayList<>();\n+ for (String indexFolderName : nodeEnv.availableIndexFolders()) {\n+ if (excludeIndexPathIdsPredicate.test(indexFolderName)) {\n+ continue;\n+ }\n+ IndexMetaData indexMetaData = indexStateFormat.loadLatestState(logger,\n+ nodeEnv.resolveIndexFolder(indexFolderName));\n+ if (indexMetaData != null) {\n+ final String indexPathId = indexMetaData.getIndex().getUUID();\n+ if (indexFolderName.equals(indexPathId)) {\n+ indexMetaDataList.add(indexMetaData);\n+ } else {\n+ throw new IllegalStateException(\"[\" + indexFolderName+ \"] invalid index folder name, rename to [\" + indexPathId + \"]\");\n+ }\n+ } else {\n+ logger.debug(\"[{}] failed to find metadata for existing index location\", indexFolderName);\n+ }\n+ }\n+ return indexMetaDataList;\n+ }\n+\n /**\n * Loads the global state, *without* index state, see {@link #loadFullState()} for that.\n */\n@@ -129,13 +155,22 @@ MetaData loadGlobalState() throws IOException {\n /**\n * Writes the index state.\n */\n- void writeIndex(String reason, IndexMetaData indexMetaData, @Nullable IndexMetaData previousIndexMetaData) throws Exception {\n- logger.trace(\"[{}] writing state, reason [{}]\", indexMetaData.getIndex(), reason);\n+ void writeIndex(String reason, IndexMetaData indexMetaData) throws IOException {\n+ writeIndex(reason, indexMetaData, 
nodeEnv.indexPaths(indexMetaData.getIndex()));\n+ }\n+\n+ /**\n+ * Writes the index state in <code>locations</code>, use {@link #writeGlobalState(String, MetaData)}\n+ * to write index state in index paths\n+ */\n+ void writeIndex(String reason, IndexMetaData indexMetaData, Path[] locations) throws IOException {\n+ final Index index = indexMetaData.getIndex();\n+ logger.trace(\"[{}] writing state, reason [{}]\", index, reason);\n try {\n- indexStateFormat.write(indexMetaData, indexMetaData.getVersion(), nodeEnv.indexPaths(indexMetaData.getIndex().getName()));\n+ indexStateFormat.write(indexMetaData, indexMetaData.getVersion(), locations);\n } catch (Throwable ex) {\n- logger.warn(\"[{}]: failed to write index state\", ex, indexMetaData.getIndex());\n- throw new IOException(\"failed to write state for [\" + indexMetaData.getIndex() + \"]\", ex);\n+ logger.warn(\"[{}]: failed to write index state\", ex, index);\n+ throw new IOException(\"failed to write state for [\" + index + \"]\", ex);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/gateway/MetaStateService.java",
"status": "modified"
},
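loadIndicesStates in the diff above is strict about folder naming: the folder a state file was read from must match the UUID recorded in that state, otherwise an IllegalStateException is thrown (the DanglingIndicesStateTests entry further down exercises exactly this). A compact, hypothetical illustration of that check:

```java
// Illustration only: validate that an index folder is named after the UUID in its metadata.
public class FolderNameCheckSketch {

    static void validate(String folderName, String uuidFromState) {
        if (folderName.equals(uuidFromState) == false) {
            throw new IllegalStateException(
                    "[" + folderName + "] invalid index folder name, rename to [" + uuidFromState + "]");
        }
    }

    public static void main(String[] args) {
        validate("Cu3CqzqbQOSMUYt5i6fZKg", "Cu3CqzqbQOSMUYt5i6fZKg"); // passes silently
        try {
            validate("invalidUUID", "test1UUID");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // [invalidUUID] invalid index folder name, rename to [test1UUID]
        }
    }
}
```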
{
"diff": "@@ -29,30 +29,27 @@\n import java.nio.file.FileStore;\n import java.nio.file.Files;\n import java.nio.file.Path;\n-import java.util.HashMap;\n import java.util.Map;\n \n public final class ShardPath {\n public static final String INDEX_FOLDER_NAME = \"index\";\n public static final String TRANSLOG_FOLDER_NAME = \"translog\";\n \n private final Path path;\n- private final String indexUUID;\n private final ShardId shardId;\n private final Path shardStatePath;\n private final boolean isCustomDataPath;\n \n- public ShardPath(boolean isCustomDataPath, Path dataPath, Path shardStatePath, String indexUUID, ShardId shardId) {\n+ public ShardPath(boolean isCustomDataPath, Path dataPath, Path shardStatePath, ShardId shardId) {\n assert dataPath.getFileName().toString().equals(Integer.toString(shardId.id())) : \"dataPath must end with the shard ID but didn't: \" + dataPath.toString();\n assert shardStatePath.getFileName().toString().equals(Integer.toString(shardId.id())) : \"shardStatePath must end with the shard ID but didn't: \" + dataPath.toString();\n- assert dataPath.getParent().getFileName().toString().equals(shardId.getIndexName()) : \"dataPath must end with index/shardID but didn't: \" + dataPath.toString();\n- assert shardStatePath.getParent().getFileName().toString().equals(shardId.getIndexName()) : \"shardStatePath must end with index/shardID but didn't: \" + dataPath.toString();\n+ assert dataPath.getParent().getFileName().toString().equals(shardId.getIndex().getUUID()) : \"dataPath must end with index path id but didn't: \" + dataPath.toString();\n+ assert shardStatePath.getParent().getFileName().toString().equals(shardId.getIndex().getUUID()) : \"shardStatePath must end with index path id but didn't: \" + dataPath.toString();\n if (isCustomDataPath && dataPath.equals(shardStatePath)) {\n throw new IllegalArgumentException(\"shard state path must be different to the data path when using custom data paths\");\n }\n this.isCustomDataPath = isCustomDataPath;\n this.path = dataPath;\n- this.indexUUID = indexUUID;\n this.shardId = shardId;\n this.shardStatePath = shardStatePath;\n }\n@@ -73,10 +70,6 @@ public boolean exists() {\n return Files.exists(path);\n }\n \n- public String getIndexUUID() {\n- return indexUUID;\n- }\n-\n public ShardId getShardId() {\n return shardId;\n }\n@@ -144,7 +137,7 @@ public static ShardPath loadShardPath(ESLogger logger, NodeEnvironment env, Shar\n dataPath = statePath;\n }\n logger.debug(\"{} loaded data path [{}], state path [{}]\", shardId, dataPath, statePath);\n- return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, indexUUID, shardId);\n+ return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, shardId);\n }\n }\n \n@@ -168,34 +161,6 @@ public static void deleteLeftoverShardDirectory(ESLogger logger, NodeEnvironment\n }\n }\n \n- /** Maps each path.data path to a \"guess\" of how many bytes the shards allocated to that path might additionally use over their\n- * lifetime; we do this so a bunch of newly allocated shards won't just all go the path with the most free space at this moment. 
*/\n- private static Map<Path,Long> getEstimatedReservedBytes(NodeEnvironment env, long avgShardSizeInBytes, Iterable<IndexShard> shards) throws IOException {\n- long totFreeSpace = 0;\n- for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n- totFreeSpace += nodePath.fileStore.getUsableSpace();\n- }\n-\n- // Very rough heuristic of how much disk space we expect the shard will use over its lifetime, the max of current average\n- // shard size across the cluster and 5% of the total available free space on this node:\n- long estShardSizeInBytes = Math.max(avgShardSizeInBytes, (long) (totFreeSpace/20.0));\n-\n- // Collate predicted (guessed!) disk usage on each path.data:\n- Map<Path,Long> reservedBytes = new HashMap<>();\n- for (IndexShard shard : shards) {\n- Path dataPath = NodeEnvironment.shardStatePathToDataPath(shard.shardPath().getShardStatePath());\n-\n- // Remove indices/<index>/<shardID> subdirs from the statePath to get back to the path.data/<lockID>:\n- Long curBytes = reservedBytes.get(dataPath);\n- if (curBytes == null) {\n- curBytes = 0L;\n- }\n- reservedBytes.put(dataPath, curBytes + estShardSizeInBytes);\n- } \n-\n- return reservedBytes;\n- }\n-\n public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shardId, IndexSettings indexSettings,\n long avgShardSizeInBytes, Map<Path,Integer> dataPathToShardCount) throws IOException {\n \n@@ -206,7 +171,6 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n dataPath = env.resolveCustomLocation(indexSettings, shardId);\n statePath = env.nodePaths()[0].resolve(shardId);\n } else {\n-\n long totFreeSpace = 0;\n for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n totFreeSpace += nodePath.fileStore.getUsableSpace();\n@@ -241,9 +205,7 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n statePath = bestPath.resolve(shardId);\n dataPath = statePath;\n }\n-\n- final String indexUUID = indexSettings.getUUID();\n- return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, indexUUID, shardId);\n+ return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, shardId);\n }\n \n @Override\n@@ -258,9 +220,6 @@ public boolean equals(Object o) {\n if (shardId != null ? !shardId.equals(shardPath.shardId) : shardPath.shardId != null) {\n return false;\n }\n- if (indexUUID != null ? !indexUUID.equals(shardPath.indexUUID) : shardPath.indexUUID != null) {\n- return false;\n- }\n if (path != null ? !path.equals(shardPath.path) : shardPath.path != null) {\n return false;\n }\n@@ -271,7 +230,6 @@ public boolean equals(Object o) {\n @Override\n public int hashCode() {\n int result = path != null ? path.hashCode() : 0;\n- result = 31 * result + (indexUUID != null ? indexUUID.hashCode() : 0);\n result = 31 * result + (shardId != null ? shardId.hashCode() : 0);\n return result;\n }\n@@ -280,7 +238,6 @@ public int hashCode() {\n public String toString() {\n return \"ShardPath{\" +\n \"path=\" + path +\n- \", indexUUID='\" + indexUUID + '\\'' +\n \", shard=\" + shardId +\n '}';\n }",
"filename": "core/src/main/java/org/elasticsearch/index/shard/ShardPath.java",
"status": "modified"
},
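ShardPath no longer carries its own indexUUID field; instead the constructor asserts that the parent folder of both the data path and the state path is named after the index UUID. The standalone sketch below shows that invariant check with hypothetical names and paths, not the Elasticsearch class itself.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the invariant: .../indices/{index.uuid}/{shard.id} for both data and state paths.
public class ShardPathInvariantSketch {

    static void checkLayout(Path dataPath, Path statePath, String indexUUID, int shardId) {
        String id = Integer.toString(shardId);
        if (dataPath.getFileName().toString().equals(id) == false
                || statePath.getFileName().toString().equals(id) == false) {
            throw new IllegalStateException("paths must end with the shard id: " + dataPath + ", " + statePath);
        }
        if (dataPath.getParent().getFileName().toString().equals(indexUUID) == false
                || statePath.getParent().getFileName().toString().equals(indexUUID) == false) {
            throw new IllegalStateException("parent folder must be the index uuid: " + dataPath + ", " + statePath);
        }
    }

    public static void main(String[] args) {
        Path shard = Paths.get("/var/data/nodes/0/indices/Cu3CqzqbQOSMUYt5i6fZKg/0");
        checkLayout(shard, shard, "Cu3CqzqbQOSMUYt5i6fZKg", 0); // passes
    }
}
```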
{
"diff": "@@ -531,7 +531,7 @@ private void deleteIndexStore(String reason, Index index, IndexSettings indexSet\n }\n // this is a pure protection to make sure this index doesn't get re-imported as a dangling index.\n // we should in the future rather write a tombstone rather than wiping the metadata.\n- MetaDataStateFormat.deleteMetaState(nodeEnv.indexPaths(index.getName()));\n+ MetaDataStateFormat.deleteMetaState(nodeEnv.indexPaths(index));\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.IndexFolderUpgrader;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -105,6 +106,8 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n List<String> indexes;\n List<String> unsupportedIndexes;\n+ static String singleDataPathNodeName;\n+ static String multiDataPathNodeName;\n static Path singleDataPath;\n static Path[] multiDataPath;\n \n@@ -127,6 +130,8 @@ private List<String> loadIndexesList(String prefix) throws IOException {\n \n @AfterClass\n public static void tearDownStatics() {\n+ singleDataPathNodeName = null;\n+ multiDataPathNodeName = null;\n singleDataPath = null;\n multiDataPath = null;\n }\n@@ -157,15 +162,17 @@ void setupCluster() throws Exception {\n InternalTestCluster.Async<String> multiDataPathNode = internalCluster().startNodeAsync(nodeSettings.build());\n \n // find single data path dir\n- Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNode.get()).nodeDataPaths();\n+ singleDataPathNodeName = singleDataPathNode.get();\n+ Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNodeName).nodeDataPaths();\n assertEquals(1, nodePaths.length);\n singleDataPath = nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER);\n assertFalse(Files.exists(singleDataPath));\n Files.createDirectories(singleDataPath);\n logger.info(\"--> Single data path: {}\", singleDataPath);\n \n // find multi data path dirs\n- nodePaths = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNode.get()).nodeDataPaths();\n+ multiDataPathNodeName = multiDataPathNode.get();\n+ nodePaths = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNodeName).nodeDataPaths();\n assertEquals(2, nodePaths.length);\n multiDataPath = new Path[] {nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER),\n nodePaths[1].resolve(NodeEnvironment.INDICES_FOLDER)};\n@@ -178,6 +185,13 @@ void setupCluster() throws Exception {\n replicas.get(); // wait for replicas\n }\n \n+ void upgradeIndexFolder() throws Exception {\n+ final NodeEnvironment nodeEnvironment = internalCluster().getInstance(NodeEnvironment.class, singleDataPathNodeName);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnvironment);\n+ final NodeEnvironment nodeEnv = internalCluster().getInstance(NodeEnvironment.class, multiDataPathNodeName);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnv);\n+ }\n+\n String loadIndex(String indexFile) throws Exception {\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -296,6 +310,10 @@ public void testOldIndexes() throws Exception {\n void assertOldIndexWorks(String index) throws Exception {\n Version version = extractVersion(index);\n String indexName = loadIndex(index);\n+ // we explicitly upgrade the index folders as these indices\n+ // are imported as dangling indices and not available on\n+ // node startup\n+ upgradeIndexFolder();\n importIndex(indexName);\n assertIndexSanity(indexName, version);\n assertBasicSearchWorks(indexName);",
"filename": "core/src/test/java/org/elasticsearch/bwcompat/OldIndexBackwardsCompatibilityIT.java",
"status": "modified"
},
{
"diff": "@@ -92,22 +92,22 @@ public void testRandomDiskUsage() {\n }\n \n public void testFillShardLevelInfo() {\n- final Index index = new Index(\"test\", \"_na_\");\n+ final Index index = new Index(\"test\", \"0xdeadbeef\");\n ShardRouting test_0 = ShardRouting.newUnassigned(index, 0, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n ShardRoutingHelper.initialize(test_0, \"node1\");\n ShardRoutingHelper.moveToStarted(test_0);\n- Path test0Path = createTempDir().resolve(\"indices\").resolve(\"test\").resolve(\"0\");\n+ Path test0Path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(\"0\");\n CommonStats commonStats0 = new CommonStats();\n commonStats0.store = new StoreStats(100, 1);\n ShardRouting test_1 = ShardRouting.newUnassigned(index, 1, null, false, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"foo\"));\n ShardRoutingHelper.initialize(test_1, \"node2\");\n ShardRoutingHelper.moveToStarted(test_1);\n- Path test1Path = createTempDir().resolve(\"indices\").resolve(\"test\").resolve(\"1\");\n+ Path test1Path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(\"1\");\n CommonStats commonStats1 = new CommonStats();\n commonStats1.store = new StoreStats(1000, 1);\n ShardStats[] stats = new ShardStats[] {\n- new ShardStats(test_0, new ShardPath(false, test0Path, test0Path, \"0xdeadbeef\", test_0.shardId()), commonStats0 , null),\n- new ShardStats(test_1, new ShardPath(false, test1Path, test1Path, \"0xdeadbeef\", test_1.shardId()), commonStats1 , null)\n+ new ShardStats(test_0, new ShardPath(false, test0Path, test0Path, test_0.shardId()), commonStats0 , null),\n+ new ShardStats(test_1, new ShardPath(false, test1Path, test1Path, test_1.shardId()), commonStats1 , null)\n };\n ImmutableOpenMap.Builder<String, Long> shardSizes = ImmutableOpenMap.builder();\n ImmutableOpenMap.Builder<ShardRouting, String> routingToPath = ImmutableOpenMap.builder();",
"filename": "core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java",
"status": "modified"
},
{
"diff": "@@ -22,8 +22,10 @@\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.health.ClusterHealthStatus;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n@@ -42,6 +44,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n@@ -226,9 +229,10 @@ private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exc\n assertThat(state.getRoutingNodes().node(state.nodes().resolveNode(node_1).id()).get(0).state(), equalTo(ShardRoutingState.STARTED));\n \n client().prepareIndex(\"test\", \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).execute().actionGet();\n+ final Index index = resolveIndex(\"test\");\n \n logger.info(\"--> closing all nodes\");\n- Path[] shardLocation = internalCluster().getInstance(NodeEnvironment.class, node_1).availableShardPaths(new ShardId(\"test\", \"_na_\", 0));\n+ Path[] shardLocation = internalCluster().getInstance(NodeEnvironment.class, node_1).availableShardPaths(new ShardId(index, 0));\n assertThat(FileSystemUtils.exists(shardLocation), equalTo(true)); // make sure the data is there!\n internalCluster().closeNonSharedNodes(false); // don't wipe data directories the index needs to be there!\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,366 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import org.apache.lucene.util.CollectionUtil;\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.bwcompat.OldIndexBackwardsCompatibilityIT;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.AllocationId;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.io.FileSystemUtils;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n+import org.elasticsearch.gateway.MetaStateService;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.shard.ShardPath;\n+import org.elasticsearch.index.shard.ShardStateMetaData;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.BufferedWriter;\n+import java.io.FileNotFoundException;\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.net.URISyntaxException;\n+import java.nio.charset.StandardCharsets;\n+import java.nio.file.DirectoryStream;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Map;\n+import java.util.Set;\n+\n+import static org.hamcrest.core.Is.is;\n+\n+@LuceneTestCase.SuppressFileSystems(\"ExtrasFS\")\n+public class IndexFolderUpgraderTests extends ESTestCase {\n+\n+ private static MetaDataStateFormat<IndexMetaData> indexMetaDataStateFormat =\n+ new MetaDataStateFormat<IndexMetaData>(XContentType.SMILE, MetaStateService.INDEX_STATE_FILE_PREFIX) {\n+\n+ @Override\n+ public void toXContent(XContentBuilder builder, IndexMetaData state) throws IOException {\n+ IndexMetaData.Builder.toXContent(state, builder, ToXContent.EMPTY_PARAMS);\n+ }\n+\n+ @Override\n+ public IndexMetaData fromXContent(XContentParser parser) throws IOException {\n+ return IndexMetaData.Builder.fromXContent(parser);\n+ }\n+ };\n+\n+ /**\n+ * tests custom data paths are upgraded\n+ */\n+ public void testUpgradeCustomDataPath() throws IOException {\n+ Path customPath = 
createTempDir();\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n+ .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_DATA_PATH, customPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ /**\n+ * tests upgrade on partially upgraded index, when we crash while upgrading\n+ */\n+ public void testPartialUpgradeCustomDataPath() throws IOException {\n+ Path customPath = createTempDir();\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean())\n+ .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), customPath.toAbsolutePath().toString()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_DATA_PATH, customPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv) {\n+ @Override\n+ void upgrade(Index index, Path source, Path target) throws IOException {\n+ if(randomBoolean()) {\n+ throw new FileNotFoundException(\"simulated\");\n+ }\n+ }\n+ };\n+ // only upgrade some paths\n+ try {\n+ helper.upgrade(index.getName());\n+ } catch (IOException e) {\n+ assertTrue(e instanceof FileNotFoundException);\n+ }\n+ helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ // try to upgrade again\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ public void testUpgrade() throws IOException {\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean()).build();\n+ try 
(NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ int numIdxFiles = randomIntBetween(1, 5);\n+ int numTranslogFiles = randomIntBetween(1, 5);\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ writeIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ IndexFolderUpgrader helper = new IndexFolderUpgrader(settings, nodeEnv);\n+ helper.upgrade(indexSettings.getIndex().getName());\n+ checkIndex(nodeEnv, indexSettings, numIdxFiles, numTranslogFiles);\n+ }\n+ }\n+\n+ public void testUpgradeIndices() throws IOException {\n+ final Settings nodeSettings = Settings.builder()\n+ .put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), randomBoolean()).build();\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(nodeSettings)) {\n+ Map<IndexSettings, Tuple<Integer, Integer>> indexSettingsMap = new HashMap<>();\n+ for (int i = 0; i < randomIntBetween(2, 5); i++) {\n+ final Index index = new Index(randomAsciiOfLength(10), Strings.randomBase64UUID());\n+ Settings settings = Settings.builder()\n+ .put(nodeSettings)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_0_0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 5))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName()).settings(settings).build();\n+ Tuple<Integer, Integer> fileCounts = new Tuple<>(randomIntBetween(1, 5), randomIntBetween(1, 5));\n+ IndexSettings indexSettings = new IndexSettings(indexState, nodeSettings);\n+ indexSettingsMap.put(indexSettings, fileCounts);\n+ writeIndex(nodeEnv, indexSettings, fileCounts.v1(), fileCounts.v2());\n+ }\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(nodeSettings, nodeEnv);\n+ for (Map.Entry<IndexSettings, Tuple<Integer, Integer>> entry : indexSettingsMap.entrySet()) {\n+ checkIndex(nodeEnv, entry.getKey(), entry.getValue().v1(), entry.getValue().v2());\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Run upgrade on a real bwc index\n+ */\n+ public void testUpgradeRealIndex() throws IOException, URISyntaxException {\n+ List<Path> indexes = new ArrayList<>();\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(getBwcIndicesPath(), \"index-*.zip\")) {\n+ for (Path path : stream) {\n+ indexes.add(path);\n+ }\n+ }\n+ CollectionUtil.introSort(indexes, (o1, o2) -> o1.getFileName().compareTo(o2.getFileName()));\n+ final Path path = randomFrom(indexes);\n+ final String indexName = path.getFileName().toString().replace(\".zip\", \"\").toLowerCase(Locale.ROOT);\n+ try (NodeEnvironment nodeEnvironment = newNodeEnvironment()) {\n+ Path unzipDir = createTempDir();\n+ Path unzipDataDir = unzipDir.resolve(\"data\");\n+ // decompress the index\n+ try (InputStream stream = Files.newInputStream(path)) {\n+ TestUtil.unzip(stream, unzipDir);\n+ }\n+ // check it is unique\n+ assertTrue(Files.exists(unzipDataDir));\n+ Path[] list = FileSystemUtils.files(unzipDataDir);\n+ if (list.length != 1) {\n+ throw new 
IllegalStateException(\"Backwards index must contain exactly one cluster but was \" + list.length);\n+ }\n+ // the bwc scripts packs the indices under this path\n+ Path src = list[0].resolve(\"nodes/0/indices/\" + indexName);\n+ assertTrue(\"[\" + path + \"] missing index dir: \" + src.toString(), Files.exists(src));\n+ final Path indicesPath = randomFrom(nodeEnvironment.nodePaths()).indicesPath;\n+ logger.info(\"--> injecting index [{}] into [{}]\", indexName, indicesPath);\n+ OldIndexBackwardsCompatibilityIT.copyIndex(logger, src, indexName, indicesPath);\n+ IndexFolderUpgrader.upgradeIndicesIfNeeded(Settings.EMPTY, nodeEnvironment);\n+\n+ // ensure old index folder is deleted\n+ Set<String> indexFolders = nodeEnvironment.availableIndexFolders();\n+ assertEquals(indexFolders.size(), 1);\n+\n+ // ensure index metadata is moved\n+ IndexMetaData indexMetaData = indexMetaDataStateFormat.loadLatestState(logger,\n+ nodeEnvironment.resolveIndexFolder(indexFolders.iterator().next()));\n+ assertNotNull(indexMetaData);\n+ Index index = indexMetaData.getIndex();\n+ assertEquals(index.getName(), indexName);\n+\n+ Set<ShardId> shardIds = nodeEnvironment.findAllShardIds(index);\n+ // ensure all shards are moved\n+ assertEquals(shardIds.size(), indexMetaData.getNumberOfShards());\n+ for (ShardId shardId : shardIds) {\n+ final ShardPath shardPath = ShardPath.loadShardPath(logger, nodeEnvironment, shardId,\n+ new IndexSettings(indexMetaData, Settings.EMPTY));\n+ final Path translog = shardPath.resolveTranslog();\n+ final Path idx = shardPath.resolveIndex();\n+ final Path state = shardPath.getShardStatePath().resolve(MetaDataStateFormat.STATE_DIR_NAME);\n+ assertTrue(shardPath.exists());\n+ assertTrue(Files.exists(translog));\n+ assertTrue(Files.exists(idx));\n+ assertTrue(Files.exists(state));\n+ }\n+ }\n+ }\n+\n+ public void testNeedsUpgrade() throws IOException {\n+ final Index index = new Index(\"foo\", Strings.randomBase64UUID());\n+ IndexMetaData indexState = IndexMetaData.builder(index.getName())\n+ .settings(Settings.builder()\n+ .put(IndexMetaData.SETTING_INDEX_UUID, index.getUUID())\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .build();\n+ try (NodeEnvironment nodeEnvironment = newNodeEnvironment()) {\n+ indexMetaDataStateFormat.write(indexState, 1, nodeEnvironment.indexPaths(index));\n+ assertFalse(IndexFolderUpgrader.needsUpgrade(index, index.getUUID()));\n+ }\n+ }\n+\n+ private void checkIndex(NodeEnvironment nodeEnv, IndexSettings indexSettings,\n+ int numIdxFiles, int numTranslogFiles) throws IOException {\n+ final Index index = indexSettings.getIndex();\n+ // ensure index state can be loaded\n+ IndexMetaData loadLatestState = indexMetaDataStateFormat.loadLatestState(logger, nodeEnv.indexPaths(index));\n+ assertNotNull(loadLatestState);\n+ assertEquals(loadLatestState.getIndex(), index);\n+ for (int shardId = 0; shardId < indexSettings.getNumberOfShards(); shardId++) {\n+ // ensure shard path can be loaded\n+ ShardPath targetShardPath = ShardPath.loadShardPath(logger, nodeEnv, new ShardId(index, shardId), indexSettings);\n+ assertNotNull(targetShardPath);\n+ // ensure shard contents are copied over\n+ final Path translog = targetShardPath.resolveTranslog();\n+ final Path idx = targetShardPath.resolveIndex();\n+\n+ // ensure index and translog files are copied over\n+ assertEquals(numTranslogFiles, FileSystemUtils.files(translog).length);\n+ assertEquals(numIdxFiles, FileSystemUtils.files(idx).length);\n+ Path[] files = 
FileSystemUtils.files(translog);\n+ final HashSet<Path> translogFiles = new HashSet<>(Arrays.asList(files));\n+ for (int i = 0; i < numTranslogFiles; i++) {\n+ final String name = Integer.toString(i);\n+ translogFiles.contains(translog.resolve(name + \".translog\"));\n+ byte[] content = Files.readAllBytes(translog.resolve(name + \".translog\"));\n+ assertEquals(name , new String(content, StandardCharsets.UTF_8));\n+ }\n+ Path[] indexFileList = FileSystemUtils.files(idx);\n+ final HashSet<Path> idxFiles = new HashSet<>(Arrays.asList(indexFileList));\n+ for (int i = 0; i < numIdxFiles; i++) {\n+ final String name = Integer.toString(i);\n+ idxFiles.contains(idx.resolve(name + \".tst\"));\n+ byte[] content = Files.readAllBytes(idx.resolve(name + \".tst\"));\n+ assertEquals(name, new String(content, StandardCharsets.UTF_8));\n+ }\n+ }\n+ }\n+\n+ private void writeIndex(NodeEnvironment nodeEnv, IndexSettings indexSettings,\n+ int numIdxFiles, int numTranslogFiles) throws IOException {\n+ NodeEnvironment.NodePath[] nodePaths = nodeEnv.nodePaths();\n+ Path[] oldIndexPaths = new Path[nodePaths.length];\n+ for (int i = 0; i < nodePaths.length; i++) {\n+ oldIndexPaths[i] = nodePaths[i].indicesPath.resolve(indexSettings.getIndex().getName());\n+ }\n+ indexMetaDataStateFormat.write(indexSettings.getIndexMetaData(), 1, oldIndexPaths);\n+ for (int id = 0; id < indexSettings.getNumberOfShards(); id++) {\n+ Path oldIndexPath = randomFrom(oldIndexPaths);\n+ ShardId shardId = new ShardId(indexSettings.getIndex(), id);\n+ if (indexSettings.hasCustomDataPath()) {\n+ Path customIndexPath = nodeEnv.resolveBaseCustomLocation(indexSettings).resolve(indexSettings.getIndex().getName());\n+ writeShard(shardId, customIndexPath, numIdxFiles, numTranslogFiles);\n+ } else {\n+ writeShard(shardId, oldIndexPath, numIdxFiles, numTranslogFiles);\n+ }\n+ ShardStateMetaData state = new ShardStateMetaData(true, indexSettings.getUUID(), AllocationId.newInitializing());\n+ ShardStateMetaData.FORMAT.write(state, 1, oldIndexPath.resolve(String.valueOf(shardId.getId())));\n+ }\n+ }\n+\n+ private void writeShard(ShardId shardId, Path indexLocation,\n+ final int numIdxFiles, final int numTranslogFiles) throws IOException {\n+ Path oldShardDataPath = indexLocation.resolve(String.valueOf(shardId.getId()));\n+ final Path translogPath = oldShardDataPath.resolve(ShardPath.TRANSLOG_FOLDER_NAME);\n+ final Path idxPath = oldShardDataPath.resolve(ShardPath.INDEX_FOLDER_NAME);\n+ Files.createDirectories(translogPath);\n+ Files.createDirectories(idxPath);\n+ for (int i = 0; i < numIdxFiles; i++) {\n+ String filename = Integer.toString(i);\n+ try (BufferedWriter w = Files.newBufferedWriter(idxPath.resolve(filename + \".tst\"),\n+ StandardCharsets.UTF_8)) {\n+ w.write(filename);\n+ }\n+ }\n+ for (int i = 0; i < numTranslogFiles; i++) {\n+ String filename = Integer.toString(i);\n+ try (BufferedWriter w = Files.newBufferedWriter(translogPath.resolve(filename + \".translog\"),\n+ StandardCharsets.UTF_8)) {\n+ w.write(filename);\n+ }\n+ }\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/util/IndexFolderUpgraderTests.java",
"status": "added"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n+import org.elasticsearch.gateway.MetaDataStateFormat;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.shard.ShardId;\n@@ -36,7 +37,11 @@\n import java.io.IOException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.List;\n+import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -129,33 +134,34 @@ public void testNodeLockMultipleEnvironment() throws IOException {\n public void testShardLock() throws IOException {\n final NodeEnvironment env = newNodeEnvironment();\n \n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n- assertEquals(new ShardId(\"foo\", \"_na_\", 0), fooLock.getShardId());\n+ Index index = new Index(\"foo\", \"fooUUID\");\n+ ShardLock fooLock = env.shardLock(new ShardId(index, 0));\n+ assertEquals(new ShardId(index, 0), fooLock.getShardId());\n \n try {\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n+ env.shardLock(new ShardId(index, 0));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n Files.createDirectories(path.resolve(\"0\"));\n Files.createDirectories(path.resolve(\"1\"));\n }\n try {\n- env.lockAllForIndex(new Index(\"foo\", \"_na_\"), idxSettings, randomIntBetween(0, 10));\n+ env.lockAllForIndex(index, idxSettings, randomIntBetween(0, 10));\n fail(\"shard 0 is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n \n fooLock.close();\n // can lock again?\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0)).close();\n+ env.shardLock(new ShardId(index, 0)).close();\n \n- List<ShardLock> locks = env.lockAllForIndex(new Index(\"foo\", \"_na_\"), idxSettings, randomIntBetween(0, 10));\n+ List<ShardLock> locks = env.lockAllForIndex(index, idxSettings, randomIntBetween(0, 10));\n try {\n- env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n+ env.shardLock(new ShardId(index, 0));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n@@ -165,63 +171,91 @@ public void testShardLock() throws IOException {\n env.close();\n }\n \n- public void testGetAllIndices() throws Exception {\n+ public void testAvailableIndexFolders() throws Exception {\n final NodeEnvironment env = newNodeEnvironment();\n final int numIndices = randomIntBetween(1, 10);\n+ Set<String> actualPaths = new HashSet<>();\n for (int i = 0; i < numIndices; i++) {\n- for (Path path : env.indexPaths(\"foo\" + i)) {\n- Files.createDirectories(path);\n+ Index index = new Index(\"foo\" + i, \"fooUUID\" + i);\n+ for (Path path : env.indexPaths(index)) {\n+ Files.createDirectories(path.resolve(MetaDataStateFormat.STATE_DIR_NAME));\n+ actualPaths.add(path.getFileName().toString());\n }\n }\n- Set<String> indices = env.findAllIndices();\n- assertEquals(indices.size(), numIndices);\n+\n+ assertThat(actualPaths, equalTo(env.availableIndexFolders()));\n+ assertTrue(\"LockedShards: \" + env.lockedShards(), env.lockedShards().isEmpty());\n+ env.close();\n+ }\n+\n+ public void testResolveIndexFolders() throws Exception {\n+ final NodeEnvironment env 
= newNodeEnvironment();\n+ final int numIndices = randomIntBetween(1, 10);\n+ Map<String, List<Path>> actualIndexDataPaths = new HashMap<>();\n for (int i = 0; i < numIndices; i++) {\n- assertTrue(indices.contains(\"foo\" + i));\n+ Index index = new Index(\"foo\" + i, \"fooUUID\" + i);\n+ Path[] indexPaths = env.indexPaths(index);\n+ for (Path path : indexPaths) {\n+ Files.createDirectories(path);\n+ String fileName = path.getFileName().toString();\n+ List<Path> paths = actualIndexDataPaths.get(fileName);\n+ if (paths == null) {\n+ paths = new ArrayList<>();\n+ }\n+ paths.add(path);\n+ actualIndexDataPaths.put(fileName, paths);\n+ }\n+ }\n+ for (Map.Entry<String, List<Path>> actualIndexDataPathEntry : actualIndexDataPaths.entrySet()) {\n+ List<Path> actual = actualIndexDataPathEntry.getValue();\n+ Path[] actualPaths = actual.toArray(new Path[actual.size()]);\n+ assertThat(actualPaths, equalTo(env.resolveIndexFolder(actualIndexDataPathEntry.getKey())));\n }\n assertTrue(\"LockedShards: \" + env.lockedShards(), env.lockedShards().isEmpty());\n env.close();\n }\n \n public void testDeleteSafe() throws IOException, InterruptedException {\n final NodeEnvironment env = newNodeEnvironment();\n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", \"_na_\", 0));\n- assertEquals(new ShardId(\"foo\", \"_na_\", 0), fooLock.getShardId());\n+ final Index index = new Index(\"foo\", \"fooUUID\");\n+ ShardLock fooLock = env.shardLock(new ShardId(index, 0));\n+ assertEquals(new ShardId(index, 0), fooLock.getShardId());\n \n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n Files.createDirectories(path.resolve(\"0\"));\n Files.createDirectories(path.resolve(\"1\"));\n }\n \n try {\n- env.deleteShardDirectorySafe(new ShardId(\"foo\", \"_na_\", 0), idxSettings);\n+ env.deleteShardDirectorySafe(new ShardId(index, 0), idxSettings);\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path.resolve(\"0\")));\n assertTrue(Files.exists(path.resolve(\"1\")));\n \n }\n \n- env.deleteShardDirectorySafe(new ShardId(\"foo\", \"_na_\", 1), idxSettings);\n+ env.deleteShardDirectorySafe(new ShardId(index, 1), idxSettings);\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path.resolve(\"0\")));\n assertFalse(Files.exists(path.resolve(\"1\")));\n }\n \n try {\n- env.deleteIndexDirectorySafe(new Index(\"foo\", \"_na_\"), randomIntBetween(0, 10), idxSettings);\n+ env.deleteIndexDirectorySafe(index, randomIntBetween(0, 10), idxSettings);\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n }\n fooLock.close();\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertTrue(Files.exists(path));\n }\n \n@@ -242,7 +276,7 @@ public void onFailure(Throwable t) {\n @Override\n protected void doRun() throws Exception {\n start.await();\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"_na_\", 0))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(index, 0))) {\n blockLatch.countDown();\n Thread.sleep(randomIntBetween(1, 10));\n }\n@@ -257,11 +291,11 @@ protected void doRun() throws Exception {\n start.countDown();\n blockLatch.await();\n \n- env.deleteIndexDirectorySafe(new Index(\"foo\", \"_na_\"), 5000, idxSettings);\n+ env.deleteIndexDirectorySafe(index, 
5000, idxSettings);\n \n assertNull(threadException.get());\n \n- for (Path path : env.indexPaths(\"foo\")) {\n+ for (Path path : env.indexPaths(index)) {\n assertFalse(Files.exists(path));\n }\n latch.await();\n@@ -300,7 +334,7 @@ public void run() {\n for (int i = 0; i < iters; i++) {\n int shard = randomIntBetween(0, counts.length - 1);\n try {\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"_na_\", shard), scaledRandomIntBetween(0, 10))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", \"fooUUID\", shard), scaledRandomIntBetween(0, 10))) {\n counts[shard].value++;\n countsAtomic[shard].incrementAndGet();\n assertEquals(flipFlop[shard].incrementAndGet(), 1);\n@@ -334,37 +368,38 @@ public void testCustomDataPaths() throws Exception {\n String[] dataPaths = tmpPaths();\n NodeEnvironment env = newNodeEnvironment(dataPaths, \"/tmp\", Settings.EMPTY);\n \n- IndexSettings s1 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.EMPTY);\n- IndexSettings s2 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.builder().put(IndexMetaData.SETTING_DATA_PATH, \"/tmp/foo\").build());\n- Index index = new Index(\"myindex\", \"_na_\");\n+ final Settings indexSettings = Settings.builder().put(IndexMetaData.SETTING_INDEX_UUID, \"myindexUUID\").build();\n+ IndexSettings s1 = IndexSettingsModule.newIndexSettings(\"myindex\", indexSettings);\n+ IndexSettings s2 = IndexSettingsModule.newIndexSettings(\"myindex\", Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_DATA_PATH, \"/tmp/foo\").build());\n+ Index index = new Index(\"myindex\", \"myindexUUID\");\n ShardId sid = new ShardId(index, 0);\n \n assertFalse(\"no settings should mean no custom data path\", s1.hasCustomDataPath());\n assertTrue(\"settings with path_data should have a custom data path\", s2.hasCustomDataPath());\n \n assertThat(env.availableShardPaths(sid), equalTo(env.availableShardPaths(sid)));\n- assertThat(env.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/0/myindex/0\")));\n+ assertThat(env.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/0/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"shard paths with a custom data_path should contain only regular paths\",\n env.availableShardPaths(sid),\n- equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex/0\")));\n+ equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"index paths uses the regular template\",\n- env.indexPaths(index.getName()), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex\")));\n+ env.indexPaths(index), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID())));\n \n env.close();\n NodeEnvironment env2 = newNodeEnvironment(dataPaths, \"/tmp\",\n Settings.builder().put(NodeEnvironment.ADD_NODE_ID_TO_CUSTOM_PATH.getKey(), false).build());\n \n assertThat(env2.availableShardPaths(sid), equalTo(env2.availableShardPaths(sid)));\n- assertThat(env2.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/myindex/0\")));\n+ assertThat(env2.resolveCustomLocation(s2, sid), equalTo(PathUtils.get(\"/tmp/foo/\" + index.getUUID() + \"/0\")));\n \n assertThat(\"shard paths with a custom data_path should contain only regular paths\",\n env2.availableShardPaths(sid),\n- equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex/0\")));\n+ equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID() + 
\"/0\")));\n \n assertThat(\"index paths uses the regular template\",\n- env2.indexPaths(index.getName()), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/myindex\")));\n+ env2.indexPaths(index), equalTo(stringsToPaths(dataPaths, \"elasticsearch/nodes/0/indices/\" + index.getUUID())));\n \n env2.close();\n }",
"filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,7 @@\n \n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.nio.file.StandardCopyOption;\n import java.util.Map;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -53,6 +54,47 @@ public void testCleanupWhenEmpty() throws Exception {\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n }\n+ public void testDanglingIndicesDiscovery() throws Exception {\n+ try (NodeEnvironment env = newNodeEnvironment()) {\n+ MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n+ DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n+\n+ assertTrue(danglingState.getDanglingIndices().isEmpty());\n+ MetaData metaData = MetaData.builder().build();\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, \"test1UUID\");\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ assertTrue(newDanglingIndices.containsKey(dangledIndex.getIndex()));\n+ metaData = MetaData.builder().put(dangledIndex, false).build();\n+ newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ assertFalse(newDanglingIndices.containsKey(dangledIndex.getIndex()));\n+ }\n+ }\n+\n+ public void testInvalidIndexFolder() throws Exception {\n+ try (NodeEnvironment env = newNodeEnvironment()) {\n+ MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n+ DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n+\n+ MetaData metaData = MetaData.builder().build();\n+ final String uuid = \"test1UUID\";\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, uuid);\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n+ for (Path path : env.resolveIndexFolder(uuid)) {\n+ if (Files.exists(path)) {\n+ Files.move(path, path.resolveSibling(\"invalidUUID\"), StandardCopyOption.ATOMIC_MOVE);\n+ }\n+ }\n+ try {\n+ danglingState.findNewDanglingIndices(metaData);\n+ fail(\"no exception thrown for invalid folder name\");\n+ } catch (IllegalStateException e) {\n+ assertThat(e.getMessage(), equalTo(\"[invalidUUID] invalid index folder name, rename to [test1UUID]\"));\n+ }\n+ }\n+ }\n \n public void testDanglingProcessing() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n@@ -61,59 +103,40 @@ public void testDanglingProcessing() throws Exception {\n \n MetaData metaData = MetaData.builder().build();\n \n- IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", dangledIndex, null);\n+ final Settings.Builder settings = Settings.builder().put(indexSettings).put(IndexMetaData.SETTING_INDEX_UUID, \"test1UUID\");\n+ IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(settings).build();\n+ metaStateService.writeIndex(\"test_write\", dangledIndex);\n \n // check that several runs when not in the metadata still keep the dangled index around\n int numberOfChecks = randomIntBetween(1, 10);\n for (int i = 0; i < numberOfChecks; i++) {\n- Map<String, IndexMetaData> newDanglingIndices = 
danglingState.findNewDanglingIndices(metaData);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n assertThat(newDanglingIndices.size(), equalTo(1));\n- assertThat(newDanglingIndices.keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(newDanglingIndices.keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n \n for (int i = 0; i < numberOfChecks; i++) {\n danglingState.findNewAndAddDanglingIndices(metaData);\n \n assertThat(danglingState.getDanglingIndices().size(), equalTo(1));\n- assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n }\n \n // simulate allocation to the metadata\n metaData = MetaData.builder(metaData).put(dangledIndex, true).build();\n \n // check that several runs when in the metadata, but not cleaned yet, still keeps dangled\n for (int i = 0; i < numberOfChecks; i++) {\n- Map<String, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n+ Map<Index, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n assertTrue(newDanglingIndices.isEmpty());\n \n assertThat(danglingState.getDanglingIndices().size(), equalTo(1));\n- assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(\"test1\"));\n+ assertThat(danglingState.getDanglingIndices().keySet(), Matchers.hasItems(dangledIndex.getIndex()));\n }\n \n danglingState.cleanupAllocatedDangledIndices(metaData);\n assertTrue(danglingState.getDanglingIndices().isEmpty());\n }\n }\n-\n- public void testRenameOfIndexState() throws Exception {\n- try (NodeEnvironment env = newNodeEnvironment()) {\n- MetaStateService metaStateService = new MetaStateService(Settings.EMPTY, env);\n- DanglingIndicesState danglingState = new DanglingIndicesState(Settings.EMPTY, env, metaStateService, null);\n-\n- MetaData metaData = MetaData.builder().build();\n-\n- IndexMetaData dangledIndex = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", dangledIndex, null);\n-\n- for (Path path : env.indexPaths(\"test1\")) {\n- Files.move(path, path.getParent().resolve(\"test1_renamed\"));\n- }\n-\n- Map<String, IndexMetaData> newDanglingIndices = danglingState.findNewDanglingIndices(metaData);\n- assertThat(newDanglingIndices.size(), equalTo(1));\n- assertThat(newDanglingIndices.keySet(), Matchers.hasItems(\"test1_renamed\"));\n- }\n- }\n }",
"filename": "core/src/test/java/org/elasticsearch/gateway/DanglingIndicesStateTests.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n@@ -68,14 +69,15 @@ public void testMetaIsRemovedIfAllShardsFromIndexRemoved() throws Exception {\n index(index, \"doc\", \"1\", jsonBuilder().startObject().field(\"text\", \"some text\").endObject());\n ensureGreen();\n assertIndexInMetaState(node1, index);\n- assertIndexDirectoryDeleted(node2, index);\n+ Index resolveIndex = resolveIndex(index);\n+ assertIndexDirectoryDeleted(node2, resolveIndex);\n assertIndexInMetaState(masterNode, index);\n \n logger.debug(\"relocating index...\");\n client().admin().indices().prepareUpdateSettings(index).setSettings(Settings.builder().put(IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING.getKey() + \"_name\", node2)).get();\n client().admin().cluster().prepareHealth().setWaitForRelocatingShards(0).get();\n ensureGreen();\n- assertIndexDirectoryDeleted(node1, index);\n+ assertIndexDirectoryDeleted(node1, resolveIndex);\n assertIndexInMetaState(node2, index);\n assertIndexInMetaState(masterNode, index);\n }\n@@ -146,10 +148,10 @@ public void testMetaWrittenWhenIndexIsClosedAndMetaUpdated() throws Exception {\n assertThat(indicesMetaData.get(index).getState(), equalTo(IndexMetaData.State.OPEN));\n }\n \n- protected void assertIndexDirectoryDeleted(final String nodeName, final String indexName) throws Exception {\n+ protected void assertIndexDirectoryDeleted(final String nodeName, final Index index) throws Exception {\n assertBusy(() -> {\n logger.info(\"checking if index directory exists...\");\n- assertFalse(\"Expecting index directory of \" + indexName + \" to be deleted from node \" + nodeName, indexDirectoryExists(nodeName, indexName));\n+ assertFalse(\"Expecting index directory of \" + index + \" to be deleted from node \" + nodeName, indexDirectoryExists(nodeName, index));\n }\n );\n }\n@@ -168,9 +170,9 @@ protected void assertIndexInMetaState(final String nodeName, final String indexN\n }\n \n \n- private boolean indexDirectoryExists(String nodeName, String indexName) {\n+ private boolean indexDirectoryExists(String nodeName, Index index) {\n NodeEnvironment nodeEnv = ((InternalTestCluster) cluster()).getInstance(NodeEnvironment.class, nodeName);\n- for (Path path : nodeEnv.indexPaths(indexName)) {\n+ for (Path path : nodeEnv.indexPaths(index)) {\n if (Files.exists(path)) {\n return true;\n }",
"filename": "core/src/test/java/org/elasticsearch/gateway/MetaDataWriteDataNodesIT.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -43,15 +44,15 @@ public void testWriteLoadIndex() throws Exception {\n MetaStateService metaStateService = new MetaStateService(randomSettings(), env);\n \n IndexMetaData index = IndexMetaData.builder(\"test1\").settings(indexSettings).build();\n- metaStateService.writeIndex(\"test_write\", index, null);\n- assertThat(metaStateService.loadIndexState(\"test1\"), equalTo(index));\n+ metaStateService.writeIndex(\"test_write\", index);\n+ assertThat(metaStateService.loadIndexState(index.getIndex()), equalTo(index));\n }\n }\n \n public void testLoadMissingIndex() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n MetaStateService metaStateService = new MetaStateService(randomSettings(), env);\n- assertThat(metaStateService.loadIndexState(\"test1\"), nullValue());\n+ assertThat(metaStateService.loadIndexState(new Index(\"test1\", \"test1UUID\")), nullValue());\n }\n }\n \n@@ -94,7 +95,7 @@ public void testLoadGlobal() throws Exception {\n .build();\n \n metaStateService.writeGlobalState(\"test_write\", metaData);\n- metaStateService.writeIndex(\"test_write\", index, null);\n+ metaStateService.writeIndex(\"test_write\", index);\n \n MetaData loadedState = metaStateService.loadFullState();\n assertThat(loadedState.persistentSettings(), equalTo(metaData.persistentSettings()));",
"filename": "core/src/test/java/org/elasticsearch/gateway/MetaStateServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -70,6 +70,7 @@\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.env.ShardLock;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.NodeServicesProvider;\n@@ -97,6 +98,7 @@\n import org.elasticsearch.test.IndexSettingsModule;\n import org.elasticsearch.test.InternalSettingsPlugin;\n import org.elasticsearch.test.VersionUtils;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n \n import java.io.IOException;\n import java.nio.file.Files;\n@@ -141,33 +143,35 @@ protected Collection<Class<? extends Plugin>> getPlugins() {\n \n public void testWriteShardState() throws Exception {\n try (NodeEnvironment env = newNodeEnvironment()) {\n- ShardId id = new ShardId(\"foo\", \"_na_\", 1);\n+ ShardId id = new ShardId(\"foo\", \"fooUUID\", 1);\n long version = between(1, Integer.MAX_VALUE / 2);\n boolean primary = randomBoolean();\n AllocationId allocationId = randomBoolean() ? null : randomAllocationId();\n- ShardStateMetaData state1 = new ShardStateMetaData(version, primary, \"foo\", allocationId);\n+ ShardStateMetaData state1 = new ShardStateMetaData(version, primary, \"fooUUID\", allocationId);\n write(state1, env.availableShardPaths(id));\n ShardStateMetaData shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state1);\n \n- ShardStateMetaData state2 = new ShardStateMetaData(version, primary, \"foo\", allocationId);\n+ ShardStateMetaData state2 = new ShardStateMetaData(version, primary, \"fooUUID\", allocationId);\n write(state2, env.availableShardPaths(id));\n shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state1);\n \n- ShardStateMetaData state3 = new ShardStateMetaData(version + 1, primary, \"foo\", allocationId);\n+ ShardStateMetaData state3 = new ShardStateMetaData(version + 1, primary, \"fooUUID\", allocationId);\n write(state3, env.availableShardPaths(id));\n shardStateMetaData = load(logger, env.availableShardPaths(id));\n assertEquals(shardStateMetaData, state3);\n- assertEquals(\"foo\", state3.indexUUID);\n+ assertEquals(\"fooUUID\", state3.indexUUID);\n }\n }\n \n public void testLockTryingToDelete() throws Exception {\n createIndex(\"test\");\n ensureGreen();\n NodeEnvironment env = getInstanceFromNode(NodeEnvironment.class);\n- Path[] shardPaths = env.availableShardPaths(new ShardId(\"test\", \"_na_\", 0));\n+ ClusterService cs = getInstanceFromNode(ClusterService.class);\n+ final Index index = cs.state().metaData().index(\"test\").getIndex();\n+ Path[] shardPaths = env.availableShardPaths(new ShardId(index, 0));\n logger.info(\"--> paths: [{}]\", (Object)shardPaths);\n // Should not be able to acquire the lock because it's already open\n try {\n@@ -179,7 +183,7 @@ public void testLockTryingToDelete() throws Exception {\n // Test without the regular shard lock to assume we can acquire it\n // (worst case, meaning that the shard lock could be acquired and\n // we're green to delete the shard's directory)\n- ShardLock sLock = new DummyShardLock(new ShardId(\"test\", \"_na_\", 0));\n+ ShardLock sLock = new DummyShardLock(new ShardId(index, 0));\n try {\n env.deleteShardDirectoryUnderLock(sLock, IndexSettingsModule.newIndexSettings(\"test\", Settings.EMPTY));\n fail(\"should not have been able to delete the directory\");",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n@@ -42,13 +43,13 @@ public void testLoadShardPath() throws IOException {\n Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", \"0xDEADBEEF\", 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, \"0xDEADBEEF\", AllocationId.newInitializing()), 2, path);\n ShardPath shardPath = ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), settings));\n assertEquals(path, shardPath.getDataPath());\n- assertEquals(\"0xDEADBEEF\", shardPath.getIndexUUID());\n+ assertEquals(\"0xDEADBEEF\", shardPath.getShardId().getIndex().getUUID());\n assertEquals(\"foo\", shardPath.getShardId().getIndexName());\n assertEquals(path.resolve(\"translog\"), shardPath.resolveTranslog());\n assertEquals(path.resolve(\"index\"), shardPath.resolveIndex());\n@@ -57,14 +58,15 @@ public void testLoadShardPath() throws IOException {\n \n public void testFailLoadShardPathOnMultiState() throws IOException {\n try (final NodeEnvironment env = newNodeEnvironment(settingsBuilder().build())) {\n- Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n+ final String indexUUID = \"0xDEADBEEF\";\n+ Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, indexUUID)\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", indexUUID, 0);\n Path[] paths = env.availableShardPaths(shardId);\n assumeTrue(\"This test tests multi data.path but we only got one\", paths.length > 1);\n int id = randomIntBetween(1, 10);\n- ShardStateMetaData.FORMAT.write(new ShardStateMetaData(id, true, \"0xDEADBEEF\", AllocationId.newInitializing()), id, paths);\n+ ShardStateMetaData.FORMAT.write(new ShardStateMetaData(id, true, indexUUID, AllocationId.newInitializing()), id, paths);\n ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), settings));\n fail(\"Expected IllegalStateException\");\n } catch (IllegalStateException e) {\n@@ -77,7 +79,7 @@ public void testFailLoadShardPathIndexUUIDMissmatch() throws IOException {\n Settings.Builder builder = settingsBuilder().put(IndexMetaData.SETTING_INDEX_UUID, \"foobar\")\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n Settings settings = builder.build();\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", \"foobar\", 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n int id = randomIntBetween(1, 10);\n@@ -90,18 +92,20 @@ public void testFailLoadShardPathIndexUUIDMissmatch() throws IOException {\n }\n \n public void testIllegalCustomDataPath() {\n- final Path path = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Index index = new Index(\"foo\", 
\"foo\");\n+ final Path path = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n try {\n- new ShardPath(true, path, path, \"foo\", new ShardId(\"foo\", \"_na_\", 0));\n+ new ShardPath(true, path, path, new ShardId(index, 0));\n fail(\"Expected IllegalArgumentException\");\n } catch (IllegalArgumentException e) {\n assertThat(e.getMessage(), is(\"shard state path must be different to the data path when using custom data paths\"));\n }\n }\n \n public void testValidCtor() {\n- final Path path = createTempDir().resolve(\"foo\").resolve(\"0\");\n- ShardPath shardPath = new ShardPath(false, path, path, \"foo\", new ShardId(\"foo\", \"_na_\", 0));\n+ Index index = new Index(\"foo\", \"foo\");\n+ final Path path = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n+ ShardPath shardPath = new ShardPath(false, path, path, new ShardId(index, 0));\n assertFalse(shardPath.isCustomDataPath());\n assertEquals(shardPath.getDataPath(), path);\n assertEquals(shardPath.getShardStatePath(), path);\n@@ -111,8 +115,9 @@ public void testGetRootPaths() throws IOException {\n boolean useCustomDataPath = randomBoolean();\n final Settings indexSettings;\n final Settings nodeSettings;\n+ final String indexUUID = \"0xDEADBEEF\";\n Settings.Builder indexSettingsBuilder = settingsBuilder()\n- .put(IndexMetaData.SETTING_INDEX_UUID, \"0xDEADBEEF\")\n+ .put(IndexMetaData.SETTING_INDEX_UUID, indexUUID)\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT);\n final Path customPath;\n if (useCustomDataPath) {\n@@ -132,10 +137,10 @@ public void testGetRootPaths() throws IOException {\n nodeSettings = Settings.EMPTY;\n }\n try (final NodeEnvironment env = newNodeEnvironment(nodeSettings)) {\n- ShardId shardId = new ShardId(\"foo\", \"_na_\", 0);\n+ ShardId shardId = new ShardId(\"foo\", indexUUID, 0);\n Path[] paths = env.availableShardPaths(shardId);\n Path path = randomFrom(paths);\n- ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, \"0xDEADBEEF\", AllocationId.newInitializing()), 2, path);\n+ ShardStateMetaData.FORMAT.write(new ShardStateMetaData(2, true, indexUUID, AllocationId.newInitializing()), 2, path);\n ShardPath shardPath = ShardPath.loadShardPath(logger, env, shardId, IndexSettingsModule.newIndexSettings(shardId.getIndex(), indexSettings));\n boolean found = false;\n for (Path p : env.nodeDataPaths()) {",
"filename": "core/src/test/java/org/elasticsearch/index/shard/ShardPathTests.java",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.gateway.PrimaryShardAllocator;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.MergePolicyConfig;\n import org.elasticsearch.index.shard.IndexEventListener;\n@@ -571,8 +572,9 @@ private int numShards(String... index) {\n private Map<String, List<Path>> findFilesToCorruptForReplica() throws IOException {\n Map<String, List<Path>> filesToNodes = new HashMap<>();\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index test = state.metaData().index(\"test\").getIndex();\n for (ShardRouting shardRouting : state.getRoutingTable().allShards(\"test\")) {\n- if (shardRouting.primary() == true) {\n+ if (shardRouting.primary()) {\n continue;\n }\n assertTrue(shardRouting.assignedToNode());\n@@ -582,8 +584,7 @@ private Map<String, List<Path>> findFilesToCorruptForReplica() throws IOExceptio\n filesToNodes.put(nodeStats.getNode().getName(), files);\n for (FsInfo.Path info : nodeStats.getFs()) {\n String path = info.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/index\";\n- Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n+ Path file = PathUtils.get(path).resolve(\"indices\").resolve(test.getUUID()).resolve(Integer.toString(shardRouting.getId())).resolve(\"index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {\n@@ -604,6 +605,7 @@ private ShardRouting corruptRandomPrimaryFile() throws IOException {\n \n private ShardRouting corruptRandomPrimaryFile(final boolean includePerCommitFiles) throws IOException {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index test = state.metaData().index(\"test\").getIndex();\n GroupShardsIterator shardIterators = state.getRoutingNodes().getRoutingTable().activePrimaryShardsGrouped(new String[]{\"test\"}, false);\n List<ShardIterator> iterators = iterableAsArrayList(shardIterators);\n ShardIterator shardIterator = RandomPicks.randomFrom(getRandom(), iterators);\n@@ -616,8 +618,7 @@ private ShardRouting corruptRandomPrimaryFile(final boolean includePerCommitFile\n Set<Path> files = new TreeSet<>(); // treeset makes sure iteration order is deterministic\n for (FsInfo.Path info : nodeStatses.getNodes()[0].getFs()) {\n String path = info.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/index\";\n- Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n+ Path file = PathUtils.get(path).resolve(\"indices\").resolve(test.getUUID()).resolve(Integer.toString(shardRouting.getId())).resolve(\"index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {\n@@ -676,12 +677,13 @@ private void pruneOldDeleteGenerations(Set<Path> files) {\n \n public List<Path> listShardFiles(ShardRouting routing) throws IOException {\n NodesStatsResponse nodeStatses = client().admin().cluster().prepareNodesStats(routing.currentNodeId()).setFs(true).get();\n-\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ final Index test = 
state.metaData().index(\"test\").getIndex();\n assertThat(routing.toString(), nodeStatses.getNodes().length, equalTo(1));\n List<Path> files = new ArrayList<>();\n for (FsInfo.Path info : nodeStatses.getNodes()[0].getFs()) {\n String path = info.getPath();\n- Path file = PathUtils.get(path).resolve(\"indices/test/\" + Integer.toString(routing.getId()) + \"/index\");\n+ Path file = PathUtils.get(path).resolve(\"indices/\" + test.getUUID() + \"/\" + Integer.toString(routing.getId()) + \"/index\");\n if (Files.exists(file)) { // multi data path might only have one path in use\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(file)) {\n for (Path item : stream) {",
"filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedFileIT.java",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.MockEngineFactoryPlugin;\n import org.elasticsearch.monitor.fs.FsInfo;\n@@ -110,6 +111,7 @@ public void testCorruptTranslogFiles() throws Exception {\n private void corruptRandomTranslogFiles() throws IOException {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n GroupShardsIterator shardIterators = state.getRoutingNodes().getRoutingTable().activePrimaryShardsGrouped(new String[]{\"test\"}, false);\n+ final Index test = state.metaData().index(\"test\").getIndex();\n List<ShardIterator> iterators = iterableAsArrayList(shardIterators);\n ShardIterator shardIterator = RandomPicks.randomFrom(getRandom(), iterators);\n ShardRouting shardRouting = shardIterator.nextOrNull();\n@@ -121,7 +123,7 @@ private void corruptRandomTranslogFiles() throws IOException {\n Set<Path> files = new TreeSet<>(); // treeset makes sure iteration order is deterministic\n for (FsInfo.Path fsPath : nodeStatses.getNodes()[0].getFs()) {\n String path = fsPath.getPath();\n- final String relativeDataLocationPath = \"indices/test/\" + Integer.toString(shardRouting.getId()) + \"/translog\";\n+ final String relativeDataLocationPath = \"indices/\"+ test.getUUID() +\"/\" + Integer.toString(shardRouting.getId()) + \"/translog\";\n Path file = PathUtils.get(path).resolve(relativeDataLocationPath);\n if (Files.exists(file)) {\n logger.info(\"--> path: {}\", file);",
"filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedTranslogIT.java",
"status": "modified"
},
{
"diff": "@@ -46,9 +46,9 @@ public void testHasSleepWrapperOnSharedFS() throws IOException {\n IndexSettings settings = IndexSettingsModule.newIndexSettings(\"foo\", build);\n IndexStoreConfig config = new IndexStoreConfig(build);\n IndexStore store = new IndexStore(settings, config);\n- Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Path tempDir = createTempDir().resolve(settings.getUUID()).resolve(\"0\");\n Files.createDirectories(tempDir);\n- ShardPath path = new ShardPath(false, tempDir, tempDir, settings.getUUID(), new ShardId(settings.getIndex(), 0));\n+ ShardPath path = new ShardPath(false, tempDir, tempDir, new ShardId(settings.getIndex(), 0));\n FsDirectoryService fsDirectoryService = new FsDirectoryService(settings, store, path);\n Directory directory = fsDirectoryService.newDirectory();\n assertTrue(directory instanceof RateLimitedFSDirectory);\n@@ -62,9 +62,9 @@ public void testHasNoSleepWrapperOnNormalFS() throws IOException {\n IndexSettings settings = IndexSettingsModule.newIndexSettings(\"foo\", build);\n IndexStoreConfig config = new IndexStoreConfig(build);\n IndexStore store = new IndexStore(settings, config);\n- Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Path tempDir = createTempDir().resolve(settings.getUUID()).resolve(\"0\");\n Files.createDirectories(tempDir);\n- ShardPath path = new ShardPath(false, tempDir, tempDir, settings.getUUID(), new ShardId(settings.getIndex(), 0));\n+ ShardPath path = new ShardPath(false, tempDir, tempDir, new ShardId(settings.getIndex(), 0));\n FsDirectoryService fsDirectoryService = new FsDirectoryService(settings, store, path);\n Directory directory = fsDirectoryService.newDirectory();\n assertTrue(directory instanceof RateLimitedFSDirectory);",
"filename": "core/src/test/java/org/elasticsearch/index/store/FsDirectoryServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -47,13 +47,14 @@\n public class IndexStoreTests extends ESTestCase {\n \n public void testStoreDirectory() throws IOException {\n- final Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n+ Index index = new Index(\"foo\", \"fooUUID\");\n+ final Path tempDir = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n final IndexModule.Type[] values = IndexModule.Type.values();\n final IndexModule.Type type = RandomPicks.randomFrom(random(), values);\n Settings settings = Settings.settingsBuilder().put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), type.name().toLowerCase(Locale.ROOT))\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n IndexSettings indexSettings = IndexSettingsModule.newIndexSettings(\"foo\", settings);\n- FsDirectoryService service = new FsDirectoryService(indexSettings, null, new ShardPath(false, tempDir, tempDir, \"foo\", new ShardId(\"foo\", \"_na_\", 0)));\n+ FsDirectoryService service = new FsDirectoryService(indexSettings, null, new ShardPath(false, tempDir, tempDir, new ShardId(index, 0)));\n try (final Directory directory = service.newFSDirectory(tempDir, NoLockFactory.INSTANCE)) {\n switch (type) {\n case NIOFS:\n@@ -84,8 +85,9 @@ public void testStoreDirectory() throws IOException {\n }\n \n public void testStoreDirectoryDefault() throws IOException {\n- final Path tempDir = createTempDir().resolve(\"foo\").resolve(\"0\");\n- FsDirectoryService service = new FsDirectoryService(IndexSettingsModule.newIndexSettings(\"foo\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()), null, new ShardPath(false, tempDir, tempDir, \"foo\", new ShardId(\"foo\", \"_na_\", 0)));\n+ Index index = new Index(\"bar\", \"foo\");\n+ final Path tempDir = createTempDir().resolve(index.getUUID()).resolve(\"0\");\n+ FsDirectoryService service = new FsDirectoryService(IndexSettingsModule.newIndexSettings(\"bar\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()), null, new ShardPath(false, tempDir, tempDir, new ShardId(index, 0)));\n try (final Directory directory = service.newFSDirectory(tempDir, NoLockFactory.INSTANCE)) {\n if (Constants.WINDOWS) {\n assertTrue(directory.toString(), directory instanceof MMapDirectory || directory instanceof SimpleFSDirectory);",
"filename": "core/src/test/java/org/elasticsearch/index/store/IndexStoreTests.java",
"status": "modified"
},
{
"diff": "@@ -112,12 +112,14 @@ public void testIndexCleanup() throws Exception {\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1))\n );\n ensureGreen(\"test\");\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n \n logger.info(\"--> making sure that shard and its replica are allocated on node_1 and node_2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n \n logger.info(\"--> starting node server3\");\n final String node_3 = internalCluster().startNode(Settings.builder().put(Node.NODE_MASTER_SETTING.getKey(), false));\n@@ -128,12 +130,12 @@ public void testIndexCleanup() throws Exception {\n .get();\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_3, \"test\")), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_3, index)), equalTo(false));\n \n logger.info(\"--> move shard from node_1 to node_3, and wait for relocation to finish\");\n \n@@ -161,12 +163,12 @@ public void testIndexCleanup() throws Exception {\n .get();\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n- assertThat(waitForShardDeletion(node_1, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_1, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_3, \"test\")), equalTo(true));\n+ assertThat(waitForShardDeletion(node_1, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_1, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_3, index)), equalTo(true));\n \n }\n \n@@ -180,16 +182,18 @@ public void testShardCleanupIfShardDeletionAfterRelocationFailedAndIndexDeleted(\n 
.put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n );\n ensureGreen(\"test\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n \n final String node_2 = internalCluster().startDataOnlyNode(Settings.builder().build());\n assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"2\").get().isTimedOut());\n \n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(false));\n \n // add a transport delegate that will prevent the shard active request to succeed the first time after relocation has finished.\n // node_1 will then wait for the next cluster state change before it tries a next attempt to delete the shard.\n@@ -220,14 +224,14 @@ public void sendRequest(DiscoveryNode node, long requestId, String action, Trans\n // it must still delete the shard, even if it cannot find it anymore in indicesservice\n client().admin().indices().prepareDelete(\"test\").get();\n \n- assertThat(waitForShardDeletion(node_1, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_1, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_1, \"test\")), equalTo(false));\n- assertThat(waitForShardDeletion(node_2, \"test\", 0), equalTo(false));\n- assertThat(waitForIndexDeletion(node_2, \"test\"), equalTo(false));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(false));\n- assertThat(Files.exists(indexDirectory(node_2, \"test\")), equalTo(false));\n+ assertThat(waitForShardDeletion(node_1, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_1, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_1, index)), equalTo(false));\n+ assertThat(waitForShardDeletion(node_2, index, 0), equalTo(false));\n+ assertThat(waitForIndexDeletion(node_2, index), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(false));\n+ assertThat(Files.exists(indexDirectory(node_2, index)), equalTo(false));\n }\n \n public void testShardsCleanup() throws Exception {\n@@ -241,9 +245,11 @@ public void testShardsCleanup() throws Exception {\n );\n ensureGreen(\"test\");\n \n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ Index index = state.metaData().index(\"test\").getIndex();\n logger.info(\"--> making sure that shard and its replica are allocated on node_1 and node_2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), 
equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_2, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_2, index, 0)), equalTo(true));\n \n logger.info(\"--> starting node server3\");\n String node_3 = internalCluster().startNode();\n@@ -255,10 +261,10 @@ public void testShardsCleanup() throws Exception {\n assertThat(clusterHealth.isTimedOut(), equalTo(false));\n \n logger.info(\"--> making sure that shard is not allocated on server3\");\n- assertThat(waitForShardDeletion(node_3, \"test\", 0), equalTo(false));\n+ assertThat(waitForShardDeletion(node_3, index, 0), equalTo(false));\n \n- Path server2Shard = shardDirectory(node_2, \"test\", 0);\n- logger.info(\"--> stopping node {}\", node_2);\n+ Path server2Shard = shardDirectory(node_2, index, 0);\n+ logger.info(\"--> stopping node \" + node_2);\n internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_2));\n \n logger.info(\"--> running cluster_health\");\n@@ -273,9 +279,9 @@ public void testShardsCleanup() throws Exception {\n assertThat(Files.exists(server2Shard), equalTo(true));\n \n logger.info(\"--> making sure that shard and its replica exist on server1, server2 and server3\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n assertThat(Files.exists(server2Shard), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n \n logger.info(\"--> starting node node_4\");\n final String node_4 = internalCluster().startNode();\n@@ -284,9 +290,9 @@ public void testShardsCleanup() throws Exception {\n ensureGreen();\n \n logger.info(\"--> making sure that shard and its replica are allocated on server1 and server3 but not on server2\");\n- assertThat(Files.exists(shardDirectory(node_1, \"test\", 0)), equalTo(true));\n- assertThat(Files.exists(shardDirectory(node_3, \"test\", 0)), equalTo(true));\n- assertThat(waitForShardDeletion(node_4, \"test\", 0), equalTo(false));\n+ assertThat(Files.exists(shardDirectory(node_1, index, 0)), equalTo(true));\n+ assertThat(Files.exists(shardDirectory(node_3, index, 0)), equalTo(true));\n+ assertThat(waitForShardDeletion(node_4, index, 0), equalTo(false));\n }\n \n public void testShardActiveElsewhereDoesNotDeleteAnother() throws Exception {\n@@ -426,30 +432,30 @@ public void onFailure(String source, Throwable t) {\n waitNoPendingTasksOnAll();\n logger.info(\"Checking if shards aren't removed\");\n for (int shard : node2Shards) {\n- assertTrue(waitForShardDeletion(nonMasterNode, \"test\", shard));\n+ assertTrue(waitForShardDeletion(nonMasterNode, index, shard));\n }\n }\n \n- private Path indexDirectory(String server, String index) {\n+ private Path indexDirectory(String server, Index index) {\n NodeEnvironment env = internalCluster().getInstance(NodeEnvironment.class, server);\n final Path[] paths = env.indexPaths(index);\n assert paths.length == 1;\n return paths[0];\n }\n \n- private Path shardDirectory(String server, String index, int shard) {\n+ private Path shardDirectory(String server, Index index, int shard) {\n NodeEnvironment env = internalCluster().getInstance(NodeEnvironment.class, server);\n- final Path[] paths = env.availableShardPaths(new ShardId(index, \"_na_\", shard));\n+ final Path[] paths = env.availableShardPaths(new ShardId(index, 
shard));\n assert paths.length == 1;\n return paths[0];\n }\n \n- private boolean waitForShardDeletion(final String server, final String index, final int shard) throws InterruptedException {\n+ private boolean waitForShardDeletion(final String server, final Index index, final int shard) throws InterruptedException {\n awaitBusy(() -> !Files.exists(shardDirectory(server, index, shard)));\n return Files.exists(shardDirectory(server, index, shard));\n }\n \n- private boolean waitForIndexDeletion(final String server, final String index) throws InterruptedException {\n+ private boolean waitForIndexDeletion(final String server, final Index index) throws InterruptedException {\n awaitBusy(() -> !Files.exists(indexDirectory(server, index)));\n return Files.exists(indexDirectory(server, index));\n }",
"filename": "core/src/test/java/org/elasticsearch/indices/store/IndicesStoreIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -4,6 +4,17 @@\n This section discusses the changes that you need to be aware of when migrating\n your application to Elasticsearch 5.0.\n \n+[float]\n+=== Indices created before 5.0\n+\n+Elasticsearch 5.0 can read indices created in version 2.0 and above. If any\n+of your indices were created before 2.0 you will need to upgrade to the\n+latest 2.x version of Elasticsearch first, in order to upgrade your indices or\n+to delete the old indices. Elasticsearch will not start in the presence of old\n+indices. To upgrade 2.x indices, first start a node which have access to all\n+the data folders and let it upgrade all the indices before starting up rest of\n+the cluster.\n+\n [IMPORTANT]\n .Reindex indices from Elasticseach 1.x or before\n =========================================",
"filename": "docs/reference/migration/migrate_5_0.asciidoc",
"status": "modified"
},
{
"diff": "@@ -52,6 +52,7 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n public void testUpgradeOldMapping() throws IOException, ExecutionException, InterruptedException {\n final String indexName = \"index-mapper-murmur3-2.0.0\";\n+ final String indexUUID = \"1VzJds59TTK7lRu17W0mcg\";\n InternalTestCluster.Async<String> master = internalCluster().startNodeAsync();\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -72,6 +73,7 @@ public void testUpgradeOldMapping() throws IOException, ExecutionException, Inte\n assertFalse(Files.exists(dataPath));\n Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n Files.move(src, dataPath);\n+ Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID));\n \n master.get();\n // force reloading dangling indices with a cluster state republish",
"filename": "plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapperUpgradeTests.java",
"status": "modified"
},
{
"diff": "@@ -53,6 +53,7 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n \n public void testUpgradeOldMapping() throws IOException, ExecutionException, InterruptedException {\n final String indexName = \"index-mapper-size-2.0.0\";\n+ final String indexUUID = \"ENCw7sG0SWuTPcH60bHheg\";\n InternalTestCluster.Async<String> master = internalCluster().startNodeAsync();\n Path unzipDir = createTempDir();\n Path unzipDataDir = unzipDir.resolve(\"data\");\n@@ -73,6 +74,7 @@ public void testUpgradeOldMapping() throws IOException, ExecutionException, Inte\n assertFalse(Files.exists(dataPath));\n Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n Files.move(src, dataPath);\n+ Files.move(dataPath.resolve(indexName), dataPath.resolve(indexUUID));\n master.get();\n // force reloading dangling indices with a cluster state republish\n client().admin().cluster().prepareReroute().get();",
"filename": "plugins/mapper-size/src/test/java/org/elasticsearch/index/mapper/size/SizeFieldMapperUpgradeTests.java",
"status": "modified"
}
]
}
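The test diffs in this record all switch from name-based paths such as `indices/test/0/index` to paths resolved through the index UUID taken from cluster state, and the dangling-indices test expects a folder whose name disagrees with the index UUID to be rejected. Below is a minimal, self-contained sketch of that layout and check using plain JDK calls and made-up paths; it is not Elasticsearch's own `NodeEnvironment`/`ShardPath` code.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch only: mirrors the path arithmetic used in the updated tests above
// (e.g. CorruptedFileIT) with made-up values; these are not Elasticsearch APIs.
public class UuidShardPathSketch {

    // Compose <data path>/indices/<index uuid>/<shard id>/index
    static Path shardIndexDir(Path dataPath, String indexUUID, int shardId) {
        return dataPath.resolve("indices")
                       .resolve(indexUUID)
                       .resolve(Integer.toString(shardId))
                       .resolve("index");
    }

    // The dangling-indices test above expects a folder whose name does not match
    // the index UUID from its state file to be rejected with this message.
    static void checkFolderMatchesUUID(Path indexFolder, String expectedUUID) {
        String folderName = indexFolder.getFileName().toString();
        if (folderName.equals(expectedUUID) == false) {
            throw new IllegalStateException(
                "[" + folderName + "] invalid index folder name, rename to [" + expectedUUID + "]");
        }
    }

    public static void main(String[] args) {
        Path dataPath = Paths.get("/var/data/elasticsearch/nodes/0"); // hypothetical node data path
        System.out.println(shardIndexDir(dataPath, "0xDEADBEEF", 0));
        // prints something like /var/data/elasticsearch/nodes/0/indices/0xDEADBEEF/0/index
        checkFolderMatchesUUID(dataPath.resolve("indices").resolve("0xDEADBEEF"), "0xDEADBEEF"); // passes
    }
}
```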
|
{
"body": "Also removes `ensureCanWrite` since this already passes the\n`TRUNCATE_EXISTING` flag when opening.\n\nAdds a REST test that fails without this fix due to the classloader\nisolation.\n\nAdditionally, move `SmbDirectoryWrapper` into an elasticsearch package instead of a lucene one, so this would be found at compile time instead of runtime.\n",
"comments": [
{
"body": "This is only for 2.x, since a lot of files were moved all over the place I will forward-port separately.\n",
"created_at": "2016-02-02T20:15:22Z"
},
{
"body": "+1\n",
"created_at": "2016-02-02T20:16:44Z"
},
{
"body": "Actually, I believe this is also a problem is 2.2.0, so I will backport this to the `2.2` branch so it's available in the 2.2.1 release (whenever that is)\n",
"created_at": "2016-02-03T00:04:04Z"
},
{
"body": "It might be beneficial to consider https://github.com/elastic/elasticsearch/pull/16325 as well. IMO its a serious bug :)\n",
"created_at": "2016-02-03T01:26:22Z"
},
{
"body": "@rmuir I agree, I'm going to backport it to 2.2.1\n",
"created_at": "2016-02-03T16:09:07Z"
}
],
"number": 16383,
"title": "Fix calling ensureOpen() on the wrong directory"
}
|
{
"body": "Also removes ensureCanWrite since this already passes the\nTRUNCATE_EXISTING flag when opening.\n\nAdds a REST test that fails without this fix due to the classloader\nisolation.\n\nAdditionally, move SmbDirectoryWrapper into an elasticsearch package instead of a lucene one, so this would be found at compile time instead of runtime.\n\nForward-port of #16383\n",
"number": 16395,
"review_comments": [],
"title": "Fix calling ensureOpen() on the wrong directory (master forwardport)"
}
|
{
"commits": [
{
"message": "Fix calling ensureOpen() on the wrong directory\n\nAlso removes ensureCanWrite since this already passes the\nTRUNCATE_EXISTING flag when opening.\n\nAdds a REST test that fails without this fix due to the classloader\nisolation.\n\nAdditionally, move SmbDirectoryWrapper into an elasticsearch package instead of a lucene one, so this would be found at compile time instead of runtime.\n\nForward-port of #16383"
}
],
"files": [
{
"diff": "@@ -22,11 +22,11 @@\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.LockFactory;\n import org.apache.lucene.store.MMapDirectory;\n-import org.apache.lucene.store.SmbDirectoryWrapper;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.shard.ShardPath;\n import org.elasticsearch.index.store.FsDirectoryService;\n import org.elasticsearch.index.store.IndexStore;\n+import org.elasticsearch.index.store.SmbDirectoryWrapper;\n \n import java.io.IOException;\n import java.nio.file.Path;",
"filename": "plugins/store-smb/src/main/java/org/elasticsearch/index/store/smbmmapfs/SmbMmapFsDirectoryService.java",
"status": "modified"
},
{
"diff": "@@ -22,11 +22,11 @@\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.LockFactory;\n import org.apache.lucene.store.SimpleFSDirectory;\n-import org.apache.lucene.store.SmbDirectoryWrapper;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.shard.ShardPath;\n import org.elasticsearch.index.store.FsDirectoryService;\n import org.elasticsearch.index.store.IndexStore;\n+import org.elasticsearch.index.store.SmbDirectoryWrapper;\n \n import java.io.IOException;\n import java.nio.file.Path;",
"filename": "plugins/store-smb/src/main/java/org/elasticsearch/index/store/smbsimplefs/SmbSimpleFsDirectoryService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,30 @@\n+\"Test the smb_mmap_fs directory wrapper\":\n+ - do:\n+ indices.create:\n+ index: smb-test\n+ body:\n+ index:\n+ store.type: smb_mmap_fs\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: yellow\n+\n+ - do:\n+ index:\n+ index: smb-test\n+ type: doc\n+ id: 1\n+ body: { foo: bar }\n+\n+ - do:\n+ get:\n+ index: smb-test\n+ type: doc\n+ id: 1\n+\n+ - match: { _index: smb-test }\n+ - match: { _type: doc }\n+ - match: { _id: \"1\"}\n+ - match: { _version: 1}\n+ - match: { _source: { foo: bar }}",
"filename": "plugins/store-smb/src/test/resources/rest-api-spec/test/store_smb/15_index_creation.yaml",
"status": "added"
}
]
}
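The PR body notes that `ensureCanWrite` is redundant because the write path already opens with the `TRUNCATE_EXISTING` flag. Below is a small, generic JDK example of what that flag combination does on open; it is not the SMB wrapper itself, and the temp directory and file name are invented for the demo.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Plain-JDK illustration: opening with CREATE + TRUNCATE_EXISTING + WRITE
// already creates the file if missing and truncates it if present, so a
// separate "can write" probe adds nothing.
public class TruncateOnOpen {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempDirectory("smb-demo").resolve("output.bin");
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.TRUNCATE_EXISTING,
                StandardOpenOption.WRITE)) {
            channel.write(ByteBuffer.wrap(new byte[] {1, 2, 3}));
        }
        System.out.println("wrote " + Files.size(file) + " bytes to " + file);
    }
}
```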
|
{
"body": "Integration tests for plugins are failing on Windows platforms because:\n- it tries to print message using a wrong format (see #16367)\n- when installing the plugin using a file, it mixes up the file path (ex: `file://C:\\whatever`) and thinks it refers to a plugin to download on Maven Central (ex: `org.elasticsearch:mapper-attachments:3.0.0`) because of the number of `:` in the name\n\n```\n....\nbin\\plugin install file:/Y:/jenkins/workspace/es_core_master_window-2012/plugins/analysis-icu/build/cluster/integTest%20node0/plugins%20tmp/analysis-icu-3.0.0-SNAPSHOT.zip\nSuccessfully started process 'command 'cmd''\nPlugins directory [Y:\\jenkins\\workspace\\es_core_master_window-2012\\plugins\\analysis-icu\\build\\cluster\\integTest node0\\elasticsearch-3.0.0-SNAPSHOT\\plugins] does not exist. Creating...\nException in thread \"main\" java.util.IllegalFormatWidthException: 20\n at java.util.Formatter$FormatSpecifier.checkText(Formatter.java:3044)\n at java.util.Formatter$FormatSpecifier.<init>(Formatter.java:2733)\n at java.util.Formatter.parse(Formatter.java:2560)\n at java.util.Formatter.format(Formatter.java:2501)\n at java.util.Formatter.format(Formatter.java:2455)\n at java.lang.String.format(String.java:2981)\n at org.elasticsearch.common.cli.Terminal$SystemTerminal.doPrint(Terminal.java:161)\n at org.elasticsearch.common.cli.Terminal.print(Terminal.java:110)\n at org.elasticsearch.common.cli.Terminal.println(Terminal.java:105)\n at org.elasticsearch.common.cli.Terminal.println(Terminal.java:93)\n at org.elasticsearch.plugins.InstallPluginCommand.download(InstallPluginCommand.java:168)\n at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:140)\n at org.elasticsearch.common.cli.CliTool.execute(CliTool.java:145)\n at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:74)\n:plugins:analysis-icu:integTest#installAnalysisIcuPlugin FAILED\n:plugins:analysis-icu:integTest#installAnalysisIcuPlugin (Thread[main,5,main]) completed. Took 1.304 secs.\n\nFAILURE: Build failed with an exception.\n\n* What went wrong:\nExecution failed for task ':plugins:analysis-icu:integTest#installAnalysisIcuPlugin'.\n> Process 'command 'cmd'' finished with non-zero exit value 1\n```\n",
"comments": [],
"number": 16376,
"title": "Integration tests for plugins fail on Windows platforms"
}
|
{
"body": "Identifying when a plugin id is maven coordinates is currently done by\nchecking if the plugin id contains 2 colons. However, a valid url could\nhave 2 colons, for example when a port is specified. This change adds\nanother check, ensuring the plugin id with maven coordinates does not\ncontain a slash, which only a url would have.\n\ncloses #16376\n",
"number": 16384,
"review_comments": [],
"title": "Plugin cli: Improve maven coordinates detection"
}
|
{
"commits": [
{
"message": "Plugin cli: Improve maven coordinates detection\n\nIdentifying when a plugin id is maven coordinates is currently done by\nchecking if the plugin id contains 2 colons. However, a valid url could\nhave 2 colons, for example when a port is specified. This change adds\nanother check, ensuring the plugin id with maven coordinates does not\ncontain a slash, which only a url would have.\n\ncloses #16376"
}
],
"files": [
{
"diff": "@@ -160,9 +160,9 @@ private Path download(String pluginId, Path tmpDir) throws Exception {\n return downloadZipAndChecksum(url, tmpDir);\n }\n \n- // now try as maven coordinates, a valid URL would only have a single colon\n+ // now try as maven coordinates, a valid URL would only have a colon and slash\n String[] coordinates = pluginId.split(\":\");\n- if (coordinates.length == 3) {\n+ if (coordinates.length == 3 && pluginId.contains(\"/\") == false) {\n String mavenUrl = String.format(Locale.ROOT, \"https://repo1.maven.org/maven2/%1$s/%2$s/%3$s/%2$s-%3$s.zip\",\n coordinates[0].replace(\".\", \"/\") /* groupId */, coordinates[1] /* artifactId */, coordinates[2] /* version */);\n terminal.println(\"-> Downloading \" + pluginId + \" from maven central\");",
"filename": "core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.plugins;\n \n import java.io.IOException;\n+import java.net.MalformedURLException;\n import java.net.URL;\n import java.nio.charset.StandardCharsets;\n import java.nio.file.DirectoryStream;\n@@ -205,6 +206,14 @@ public void testSpaceInUrl() throws Exception {\n assertPlugin(\"fake\", pluginDir, env);\n }\n \n+ public void testMalformedUrlNotMaven() throws Exception {\n+ // has two colons, so it appears similar to maven coordinates\n+ MalformedURLException e = expectThrows(MalformedURLException.class, () -> {\n+ installPlugin(\"://host:1234\", createEnv());\n+ });\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"no protocol\"));\n+ }\n+\n public void testPluginsDirMissing() throws Exception {\n Environment env = createEnv();\n Files.delete(env.pluginsFile());",
"filename": "qa/evil-tests/src/test/java/org/elasticsearch/plugins/InstallPluginCommandTests.java",
"status": "modified"
}
]
}
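The `InstallPluginCommand` diff above boils down to a simple heuristic: a plugin id is treated as maven coordinates only when it splits into exactly three colon-separated parts and contains no slash, since any URL (including one with a port) contains a slash. Here is a standalone sketch of that check; the class name and example inputs are illustrative and not part of the PR.

```java
// Sketch of the heuristic in the InstallPluginCommand diff above: a plugin id is
// treated as maven coordinates only if it has exactly three colon-separated parts
// and no slash (any URL would contain a slash).
public class MavenCoordinatesCheck {

    static boolean looksLikeMavenCoordinates(String pluginId) {
        String[] coordinates = pluginId.split(":");
        return coordinates.length == 3 && pluginId.contains("/") == false;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeMavenCoordinates("org.elasticsearch:mapper-attachments:3.0.0")); // true
        System.out.println(looksLikeMavenCoordinates("http://host:1234/plugin.zip"));  // false: contains slashes
        System.out.println(looksLikeMavenCoordinates("file:/C:/plugins/plugin.zip"));  // false: contains slashes
    }
}
```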
|
{
"body": "Integration tests for plugins are failing on Windows platforms because:\n- it tries to print message using a wrong format (see #16367)\n- when installing the plugin using a file, it mixes up the file path (ex: `file://C:\\whatever`) and thinks it refers to a plugin to download on Maven Central (ex: `org.elasticsearch:mapper-attachments:3.0.0`) because of the number of `:` in the name\n\n```\n....\nbin\\plugin install file:/Y:/jenkins/workspace/es_core_master_window-2012/plugins/analysis-icu/build/cluster/integTest%20node0/plugins%20tmp/analysis-icu-3.0.0-SNAPSHOT.zip\nSuccessfully started process 'command 'cmd''\nPlugins directory [Y:\\jenkins\\workspace\\es_core_master_window-2012\\plugins\\analysis-icu\\build\\cluster\\integTest node0\\elasticsearch-3.0.0-SNAPSHOT\\plugins] does not exist. Creating...\nException in thread \"main\" java.util.IllegalFormatWidthException: 20\n at java.util.Formatter$FormatSpecifier.checkText(Formatter.java:3044)\n at java.util.Formatter$FormatSpecifier.<init>(Formatter.java:2733)\n at java.util.Formatter.parse(Formatter.java:2560)\n at java.util.Formatter.format(Formatter.java:2501)\n at java.util.Formatter.format(Formatter.java:2455)\n at java.lang.String.format(String.java:2981)\n at org.elasticsearch.common.cli.Terminal$SystemTerminal.doPrint(Terminal.java:161)\n at org.elasticsearch.common.cli.Terminal.print(Terminal.java:110)\n at org.elasticsearch.common.cli.Terminal.println(Terminal.java:105)\n at org.elasticsearch.common.cli.Terminal.println(Terminal.java:93)\n at org.elasticsearch.plugins.InstallPluginCommand.download(InstallPluginCommand.java:168)\n at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:140)\n at org.elasticsearch.common.cli.CliTool.execute(CliTool.java:145)\n at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:74)\n:plugins:analysis-icu:integTest#installAnalysisIcuPlugin FAILED\n:plugins:analysis-icu:integTest#installAnalysisIcuPlugin (Thread[main,5,main]) completed. Took 1.304 secs.\n\nFAILURE: Build failed with an exception.\n\n* What went wrong:\nExecution failed for task ':plugins:analysis-icu:integTest#installAnalysisIcuPlugin'.\n> Process 'command 'cmd'' finished with non-zero exit value 1\n```\n",
"comments": [],
"number": 16376,
"title": "Integration tests for plugins fail on Windows platforms"
}
|
{
"body": "closes #16376\n",
"number": 16377,
"review_comments": [
{
"body": "I don't like doing this \"try a url and on error do something\". Can we instead add an extra condition to the clause, so in addition to having 3 coordinates separated by colon, it does not contain `/`? All URLs will have a slash. Then we don't have to worry about this code trapping a malformed URL and attempting as maven coordinates, and the URL handling below can handle malformed urls.\n",
"created_at": "2016-02-02T17:00:31Z"
}
],
"title": "Fix plugins integration tests on Windows"
}
|
{
"commits": [
{
"message": "Fix plugins integration tests on Windows\n\ncloses #16376"
}
],
"files": [
{
"diff": "@@ -24,6 +24,8 @@\n import java.io.InputStream;\n import java.io.InputStreamReader;\n import java.io.OutputStream;\n+import java.net.MalformedURLException;\n+import java.net.URISyntaxException;\n import java.net.URL;\n import java.net.URLDecoder;\n import java.nio.charset.StandardCharsets;\n@@ -160,13 +162,15 @@ private Path download(String pluginId, Path tmpDir) throws Exception {\n return downloadZipAndChecksum(url, tmpDir);\n }\n \n- // now try as maven coordinates, a valid URL would only have a single colon\n- String[] coordinates = pluginId.split(\":\");\n- if (coordinates.length == 3) {\n- String mavenUrl = String.format(Locale.ROOT, \"https://repo1.maven.org/maven2/%1$s/%2$s/%3$s/%2$s-%3$s.zip\",\n- coordinates[0].replace(\".\", \"/\") /* groupId */, coordinates[1] /* artifactId */, coordinates[2] /* version */);\n- terminal.println(\"-> Downloading \" + pluginId + \" from maven central\");\n- return downloadZipAndChecksum(mavenUrl, tmpDir);\n+ // now try as maven coordinates\n+ if (false == isURL(pluginId)) {\n+ String[] coordinates = pluginId.split(\":\");\n+ if (coordinates.length == 3) {\n+ String mavenUrl = String.format(Locale.ROOT, \"https://repo1.maven.org/maven2/%1$s/%2$s/%3$s/%2$s-%3$s.zip\",\n+ coordinates[0].replace(\".\", \"/\") /* groupId */, coordinates[1] /* artifactId */, coordinates[2] /* version */);\n+ terminal.println(\"-> Downloading \" + pluginId + \" from maven central\");\n+ return downloadZipAndChecksum(mavenUrl, tmpDir);\n+ }\n }\n \n // fall back to plain old URL\n@@ -395,4 +399,13 @@ private void installConfig(PluginInfo info, Path tmpConfigDir, Path destConfigDi\n }\n IOUtils.rm(tmpConfigDir); // clean up what we just copied\n }\n+\n+ private boolean isURL(String s) {\n+ try {\n+ new URL(s).toURI();\n+ return true;\n+ } catch (MalformedURLException | URISyntaxException e) {\n+ return false;\n+ }\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java",
"status": "modified"
}
]
}
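For comparison with the slash check the reviewer suggested above, this PR's `isURL` helper probes the plugin id by actually parsing it as a URL and only falls back to maven coordinates when parsing fails. A trimmed-down, runnable version of that approach follows; the inputs are illustrative.

```java
import java.net.MalformedURLException;
import java.net.URISyntaxException;
import java.net.URL;

// Trimmed-down version of the isURL probe from the diff above; inputs are illustrative.
public class UrlProbe {

    static boolean isURL(String s) {
        try {
            new URL(s).toURI();
            return true;
        } catch (MalformedURLException | URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isURL("file:/C:/plugins/analysis-icu.zip"));           // true: parses as a URL
        System.out.println(isURL("org.elasticsearch:mapper-attachments:3.0.0"));  // false: unknown protocol
    }
}
```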
|
{
"body": "here is how to reproduce (i know, kinda crazy):\n1. force GroovyScriptEngineService to make NPE on close:\n\n```\n+++ b/core/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java\n@@ -99,17 +99,19 @@ public class GroovyScriptEngineService extends AbstractComponent implements Scri\n }\n\n // Groovy class loader to isolate Groovy-land code\n- this.loader = new GroovyClassLoader(getClass().getClassLoader(), config);\n+ this.loader = null; //new GroovyClassLoader(getClass().getClassLoader(), config);\n```\n1. mvn install -DskipTests from core/\n2. mvn test from plugins/cloud-azure. \n\nLeakFS detects a file leak:\n\n```\nThrowable #1: java.lang.RuntimeException: file handle leaks: [FileChannel(/home/rmuir/workspace/elasticsearch/plugins/cloud-azure/target/J0/temp/org.elasticsearch.index.store.SmbMMapFsTests_2C60FEE3D73A0680-001/tempDir-001/d0/SUITE-CHILD_VM=[0]-CLUSTER_SEED=[-1301520842927563395]-HASH=[11EC1C68B1C5F]-cluster/nodes/2/node.lock), \n```\n\nThe issue might just be our test harness stuff, but my concern is it could happen \"for real\" too. In the case of SimpleFSLock it could be annoying (user has to then remove lock file).\n",
"comments": [],
"number": 13685,
"title": "Possibly leak of lock, if plugin hits exception on close()"
}
|
{
"body": "We are leaking all kinds of resources if something during Node#close() barfs.\nThis commit cuts over to a list of closeables to release resources that\nalso closed remaining services if one or more services fail to close.\n\nCloses #13685\n",
"number": 16316,
"review_comments": [],
"title": "Ensure all resources are closed on Node#close()"
}
|
{
"commits": [
{
"message": "Ensure all resoruces are closed on Node#close()\n\nWe are leaking all kinds of resources if something during Node#close() barfs.\nThis commit cuts over to a list of closeables to release resources that\nalso closed remaining services if one or more services fail to close.\n\nCloses #13685"
},
{
"message": "Use IOUtils#close() where needed"
},
{
"message": "Use IOUtils#close() where needed"
},
{
"message": "Disambiguate TestCluster implementation since Client is now also Closeable and if we call IOUtils it might interpret it as a Iterable<Closeable>"
},
{
"message": "Cleanup Relesables now that we can delegate to IOUtils"
},
{
"message": "Fix AzureRepositoryF to handle exceptions on close\nFix TribeUnitTests to handle exceptions on close"
},
{
"message": "Call latch in a finally block"
}
],
"files": [
{
"diff": "@@ -20,7 +20,9 @@\n package org.elasticsearch.bootstrap;\n \n import org.apache.lucene.util.Constants;\n+import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.StringHelper;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.PidFile;\n import org.elasticsearch.common.SuppressForbidden;\n@@ -40,6 +42,7 @@\n import org.elasticsearch.node.internal.InternalSettingsPreparer;\n \n import java.io.ByteArrayOutputStream;\n+import java.io.IOException;\n import java.io.PrintStream;\n import java.nio.file.Path;\n import java.util.Locale;\n@@ -114,7 +117,11 @@ public static void initializeNatives(Path tmpFile, boolean mlockAll, boolean sec\n public boolean handle(int code) {\n if (CTRL_CLOSE_EVENT == code) {\n logger.info(\"running graceful exit on windows\");\n- Bootstrap.stop();\n+ try {\n+ Bootstrap.stop();\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(\"failed to stop node\", e);\n+ }\n return true;\n }\n return false;\n@@ -153,8 +160,10 @@ private void setup(boolean addShutdownHook, Settings settings, Environment envir\n Runtime.getRuntime().addShutdownHook(new Thread() {\n @Override\n public void run() {\n- if (node != null) {\n- node.close();\n+ try {\n+ IOUtils.close(node);\n+ } catch (IOException ex) {\n+ throw new ElasticsearchException(\"failed to stop node\", ex);\n }\n }\n });\n@@ -221,9 +230,9 @@ private void start() {\n keepAliveThread.start();\n }\n \n- static void stop() {\n+ static void stop() throws IOException {\n try {\n- Releasables.close(INSTANCE.node);\n+ IOUtils.close(INSTANCE.node);\n } finally {\n INSTANCE.keepAliveLatch.countDown();\n }",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.bootstrap;\n \n+import java.io.IOException;\n+\n /**\n * This class starts elasticsearch.\n */\n@@ -48,7 +50,7 @@ public static void main(String[] args) throws StartupError {\n *\n * NOTE: If this method is renamed and/or moved, make sure to update service.bat!\n */\n- static void close(String[] args) {\n+ static void close(String[] args) throws IOException {\n Bootstrap.stop();\n }\n-}\n\\ No newline at end of file\n+}",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.recycler.AbstractRecyclerC;\n import org.elasticsearch.common.recycler.Recycler;\n import org.elasticsearch.common.settings.Settings;\n@@ -38,7 +39,7 @@\n import static org.elasticsearch.common.recycler.Recyclers.none;\n \n /** A recycler of fixed-size pages. */\n-public class PageCacheRecycler extends AbstractComponent {\n+public class PageCacheRecycler extends AbstractComponent implements Releasable {\n \n public static final String TYPE = \"recycler.page.type\";\n public static final String LIMIT_HEAP = \"recycler.page.limit.heap\";\n@@ -49,6 +50,7 @@ public class PageCacheRecycler extends AbstractComponent {\n private final Recycler<long[]> longPage;\n private final Recycler<Object[]> objectPage;\n \n+ @Override\n public void close() {\n bytePage.close();\n intPage.close();",
"filename": "core/src/main/java/org/elasticsearch/cache/recycler/PageCacheRecycler.java",
"status": "modified"
},
{
"diff": "@@ -21,10 +21,12 @@\n \n import org.elasticsearch.ElasticsearchException;\n \n+import java.io.Closeable;\n+\n /**\n * Specialization of {@link AutoCloseable} that may only throw an {@link ElasticsearchException}.\n */\n-public interface Releasable extends AutoCloseable {\n+public interface Releasable extends Closeable {\n \n @Override\n void close();",
"filename": "core/src/main/java/org/elasticsearch/common/lease/Releasable.java",
"status": "modified"
},
{
"diff": "@@ -19,38 +19,24 @@\n \n package org.elasticsearch.common.lease;\n \n+import org.apache.lucene.util.IOUtils;\n+\n+import java.io.IOException;\n import java.util.Arrays;\n \n /** Utility methods to work with {@link Releasable}s. */\n public enum Releasables {\n ;\n \n- private static void rethrow(Throwable t) {\n- if (t instanceof RuntimeException) {\n- throw (RuntimeException) t;\n- }\n- if (t instanceof Error) {\n- throw (Error) t;\n- }\n- throw new RuntimeException(t);\n- }\n-\n private static void close(Iterable<? extends Releasable> releasables, boolean ignoreException) {\n- Throwable th = null;\n- for (Releasable releasable : releasables) {\n- if (releasable != null) {\n- try {\n- releasable.close();\n- } catch (Throwable t) {\n- if (th == null) {\n- th = t;\n- }\n- }\n+ try {\n+ // this does the right thing with respect to add suppressed and not wrapping errors etc.\n+ IOUtils.close(releasables);\n+ } catch (Throwable t) {\n+ if (ignoreException == false) {\n+ IOUtils.reThrowUnchecked(t);\n }\n }\n- if (th != null && !ignoreException) {\n- rethrow(th);\n- }\n }\n \n /** Release the provided {@link Releasable}s. */\n@@ -99,25 +85,11 @@ public static void close(boolean success, Releasable... releasables) {\n * </pre>\n */\n public static Releasable wrap(final Iterable<Releasable> releasables) {\n- return new Releasable() {\n-\n- @Override\n- public void close() {\n- Releasables.close(releasables);\n- }\n-\n- };\n+ return () -> close(releasables);\n }\n \n /** @see #wrap(Iterable) */\n public static Releasable wrap(final Releasable... releasables) {\n- return new Releasable() {\n-\n- @Override\n- public void close() {\n- Releasables.close(releasables);\n- }\n-\n- };\n+ return () -> close(releasables);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/lease/Releasables.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.common.cache.RemovalNotification;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader;\n import org.elasticsearch.common.settings.Setting;\n@@ -52,7 +53,7 @@\n \n /**\n */\n-public class IndicesFieldDataCache extends AbstractComponent implements RemovalListener<IndicesFieldDataCache.Key, Accountable> {\n+public class IndicesFieldDataCache extends AbstractComponent implements RemovalListener<IndicesFieldDataCache.Key, Accountable>, Releasable{\n \n public static final Setting<TimeValue> INDICES_FIELDDATA_CLEAN_INTERVAL_SETTING = Setting.positiveTimeSetting(\"indices.fielddata.cache.cleanup_interval\", TimeValue.timeValueMinutes(1), false, Setting.Scope.CLUSTER);\n public static final Setting<ByteSizeValue> INDICES_FIELDDATA_CACHE_SIZE_KEY = Setting.byteSizeSetting(\"indices.fielddata.cache.size\", new ByteSizeValue(-1), false, Setting.Scope.CLUSTER);\n@@ -84,6 +85,7 @@ public IndicesFieldDataCache(Settings settings, IndicesFieldDataCacheListener in\n new FieldDataCacheCleaner(this.cache, this.logger, this.threadPool, this.cleanInterval));\n }\n \n+ @Override\n public void close() {\n cache.invalidateAll();\n this.closed = true;",
"filename": "core/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.node;\n \n+import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.Build;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n@@ -100,6 +101,7 @@\n import org.elasticsearch.watcher.ResourceWatcherService;\n \n import java.io.BufferedWriter;\n+import java.io.Closeable;\n import java.io.IOException;\n import java.net.Inet6Address;\n import java.net.InetAddress;\n@@ -108,9 +110,11 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.nio.file.StandardCopyOption;\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n+import java.util.List;\n import java.util.concurrent.TimeUnit;\n import java.util.function.Function;\n \n@@ -120,7 +124,7 @@\n * A node represent a node within a cluster (<tt>cluster.name</tt>). The {@link #client()} can be used\n * in order to use a {@link Client} to perform actions/operations against the cluster.\n */\n-public class Node implements Releasable {\n+public class Node implements Closeable {\n \n public static final Setting<Boolean> WRITE_PORTS_FIELD_SETTING = Setting.boolSetting(\"node.portsfile\", false, false, Setting.Scope.CLUSTER);\n public static final Setting<Boolean> NODE_CLIENT_SETTING = Setting.boolSetting(\"node.client\", false, false, Setting.Scope.CLUSTER);\n@@ -351,7 +355,7 @@ private Node stop() {\n // If not, the hook that is added in Bootstrap#setup() will be useless: close() might not be executed, in case another (for example api) call\n // to close() has already set some lifecycles to stopped. In this case the process will be terminated even if the first call to close() has not finished yet.\n @Override\n- public synchronized void close() {\n+ public synchronized void close() throws IOException {\n if (lifecycle.started()) {\n stop();\n }\n@@ -361,88 +365,80 @@ public synchronized void close() {\n \n ESLogger logger = Loggers.getLogger(Node.class, settings.get(\"name\"));\n logger.info(\"closing ...\");\n-\n+ List<Closeable> toClose = new ArrayList<>();\n StopWatch stopWatch = new StopWatch(\"node_close\");\n- stopWatch.start(\"tribe\");\n- injector.getInstance(TribeService.class).close();\n- stopWatch.stop().start(\"node_service\");\n- try {\n- injector.getInstance(NodeService.class).close();\n- } catch (IOException e) {\n- logger.warn(\"NodeService close failed\", e);\n- }\n- stopWatch.stop().start(\"http\");\n+ toClose.add(() -> stopWatch.start(\"tribe\"));\n+ toClose.add(injector.getInstance(TribeService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"node_service\"));\n+ toClose.add(injector.getInstance(NodeService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"http\"));\n if (settings.getAsBoolean(\"http.enabled\", true)) {\n- injector.getInstance(HttpServer.class).close();\n+ toClose.add(injector.getInstance(HttpServer.class));\n }\n- stopWatch.stop().start(\"snapshot_service\");\n- injector.getInstance(SnapshotsService.class).close();\n- injector.getInstance(SnapshotShardsService.class).close();\n- stopWatch.stop().start(\"client\");\n+ toClose.add(() ->stopWatch.stop().start(\"snapshot_service\"));\n+ toClose.add(injector.getInstance(SnapshotsService.class));\n+ toClose.add(injector.getInstance(SnapshotShardsService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"client\"));\n Releasables.close(injector.getInstance(Client.class));\n- stopWatch.stop().start(\"indices_cluster\");\n- 
injector.getInstance(IndicesClusterStateService.class).close();\n- stopWatch.stop().start(\"indices\");\n- injector.getInstance(IndicesTTLService.class).close();\n- injector.getInstance(IndicesService.class).close();\n+ toClose.add(() ->stopWatch.stop().start(\"indices_cluster\"));\n+ toClose.add(injector.getInstance(IndicesClusterStateService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"indices\"));\n+ toClose.add(injector.getInstance(IndicesTTLService.class));\n+ toClose.add(injector.getInstance(IndicesService.class));\n // close filter/fielddata caches after indices\n- injector.getInstance(IndicesQueryCache.class).close();\n- injector.getInstance(IndicesFieldDataCache.class).close();\n- injector.getInstance(IndicesStore.class).close();\n- stopWatch.stop().start(\"routing\");\n- injector.getInstance(RoutingService.class).close();\n- stopWatch.stop().start(\"cluster\");\n- injector.getInstance(ClusterService.class).close();\n- stopWatch.stop().start(\"discovery\");\n- injector.getInstance(DiscoveryService.class).close();\n- stopWatch.stop().start(\"monitor\");\n- injector.getInstance(MonitorService.class).close();\n- stopWatch.stop().start(\"gateway\");\n- injector.getInstance(GatewayService.class).close();\n- stopWatch.stop().start(\"search\");\n- injector.getInstance(SearchService.class).close();\n- stopWatch.stop().start(\"rest\");\n- injector.getInstance(RestController.class).close();\n- stopWatch.stop().start(\"transport\");\n- injector.getInstance(TransportService.class).close();\n- stopWatch.stop().start(\"percolator_service\");\n- injector.getInstance(PercolatorService.class).close();\n+ toClose.add(injector.getInstance(IndicesQueryCache.class));\n+ toClose.add(injector.getInstance(IndicesFieldDataCache.class));\n+ toClose.add(injector.getInstance(IndicesStore.class));\n+ toClose.add(() ->stopWatch.stop().start(\"routing\"));\n+ toClose.add(injector.getInstance(RoutingService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"cluster\"));\n+ toClose.add(injector.getInstance(ClusterService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"discovery\"));\n+ toClose.add(injector.getInstance(DiscoveryService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"monitor\"));\n+ toClose.add(injector.getInstance(MonitorService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"gateway\"));\n+ toClose.add(injector.getInstance(GatewayService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"search\"));\n+ toClose.add(injector.getInstance(SearchService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"rest\"));\n+ toClose.add(injector.getInstance(RestController.class));\n+ toClose.add(() ->stopWatch.stop().start(\"transport\"));\n+ toClose.add(injector.getInstance(TransportService.class));\n+ toClose.add(() ->stopWatch.stop().start(\"percolator_service\"));\n+ toClose.add(injector.getInstance(PercolatorService.class));\n \n for (Class<? 
extends LifecycleComponent> plugin : pluginsService.nodeServices()) {\n- stopWatch.stop().start(\"plugin(\" + plugin.getName() + \")\");\n- injector.getInstance(plugin).close();\n+ toClose.add(() ->stopWatch.stop().start(\"plugin(\" + plugin.getName() + \")\"));\n+ toClose.add(injector.getInstance(plugin));\n }\n \n- stopWatch.stop().start(\"script\");\n- try {\n- injector.getInstance(ScriptService.class).close();\n- } catch(IOException e) {\n- logger.warn(\"ScriptService close failed\", e);\n- }\n+ toClose.add(() ->stopWatch.stop().start(\"script\"));\n+ toClose.add(injector.getInstance(ScriptService.class));\n \n- stopWatch.stop().start(\"thread_pool\");\n+ toClose.add(() ->stopWatch.stop().start(\"thread_pool\"));\n // TODO this should really use ThreadPool.terminate()\n- injector.getInstance(ThreadPool.class).shutdown();\n- try {\n- injector.getInstance(ThreadPool.class).awaitTermination(10, TimeUnit.SECONDS);\n- } catch (InterruptedException e) {\n- // ignore\n- }\n- stopWatch.stop().start(\"thread_pool_force_shutdown\");\n- try {\n- injector.getInstance(ThreadPool.class).shutdownNow();\n- } catch (Exception e) {\n- // ignore\n- }\n- stopWatch.stop();\n+ toClose.add(() -> injector.getInstance(ThreadPool.class).shutdown());\n+ toClose.add(() -> {\n+ try {\n+ injector.getInstance(ThreadPool.class).awaitTermination(10, TimeUnit.SECONDS);\n+ } catch (InterruptedException e) {\n+ // ignore\n+ }\n+ });\n+\n+ toClose.add(() ->stopWatch.stop().start(\"thread_pool_force_shutdown\"));\n+ toClose.add(() -> injector.getInstance(ThreadPool.class).shutdownNow());\n+ toClose.add(() -> stopWatch.stop());\n+\n+\n+ toClose.add(injector.getInstance(NodeEnvironment.class));\n+ toClose.add(injector.getInstance(PageCacheRecycler.class));\n \n if (logger.isTraceEnabled()) {\n logger.trace(\"Close times for each service:\\n{}\", stopWatch.prettyPrint());\n }\n-\n- injector.getInstance(NodeEnvironment.class).close();\n- injector.getInstance(PageCacheRecycler.class).close();\n-\n+ IOUtils.close(toClose);\n logger.info(\"closed\");\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/node/Node.java",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.text.Text;\n@@ -85,7 +86,7 @@\n import static org.apache.lucene.search.BooleanClause.Occur.FILTER;\n import static org.apache.lucene.search.BooleanClause.Occur.MUST;\n \n-public class PercolatorService extends AbstractComponent {\n+public class PercolatorService extends AbstractComponent implements Releasable {\n \n public final static float NO_SCORE = Float.NEGATIVE_INFINITY;\n public final static String TYPE_NAME = \".percolator\";\n@@ -304,6 +305,7 @@ static PercolateShardResponse doPercolate(PercolateContext context, PercolatorQu\n }\n }\n \n+ @Override\n public void close() {\n cache.close();\n }",
"filename": "core/src/main/java/org/elasticsearch/percolator/PercolatorService.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,8 @@\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n import org.elasticsearch.transport.TransportService;\n \n+import java.io.IOException;\n+\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n@@ -48,7 +50,7 @@ public void testPickingUpChangesInDiscoveryNode() {\n \n }\n \n- public void testNodeVersionIsUpdated() {\n+ public void testNodeVersionIsUpdated() throws IOException {\n TransportClient client = (TransportClient) internalCluster().client();\n TransportClientNodesService nodeService = client.nodeService();\n Node node = new Node(Settings.builder()",
"filename": "core/src/test/java/org/elasticsearch/client/transport/TransportClientIT.java",
"status": "modified"
},
{
"diff": "@@ -279,7 +279,7 @@ public void testDynamicUpdateMinimumMasterNodes() throws Exception {\n setMinimumMasterNodes(2);\n \n // make sure it has been processed on all nodes (master node spawns a secondary cluster state update task)\n- for (Client client : internalCluster()) {\n+ for (Client client : internalCluster().getClients()) {\n assertThat(client.admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setLocal(true).get().isTimedOut(),\n equalTo(false));\n }\n@@ -303,7 +303,7 @@ private void assertNoMasterBlockOnAllNodes() throws InterruptedException {\n assertTrue(awaitBusy(\n () -> {\n boolean success = true;\n- for (Client client : internalCluster()) {\n+ for (Client client : internalCluster().getClients()) {\n boolean clientHasNoMasterBlock = hasNoMasterBlock.test(client);\n if (logger.isDebugEnabled()) {\n logger.debug(\"Checking for NO_MASTER_BLOCK on client: {} NO_MASTER_BLOCK: [{}]\", client, clientHasNoMasterBlock);",
"filename": "core/src/test/java/org/elasticsearch/cluster/MinimumMasterNodesIT.java",
"status": "modified"
},
{
"diff": "@@ -167,7 +167,7 @@ protected void testConflict(String mapping, String mappingUpdate, String... erro\n \n private void compareMappingOnNodes(GetMappingsResponse previousMapping) {\n // make sure all nodes have same cluster state\n- for (Client client : cluster()) {\n+ for (Client client : cluster().getClients()) {\n GetMappingsResponse currentMapping = client.admin().indices().prepareGetMappings(INDEX).addTypes(TYPE).setLocal(true).get();\n assertThat(previousMapping.getMappings().get(INDEX).get(TYPE).source(), equalTo(currentMapping.getMappings().get(INDEX).get(TYPE).source()));\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingOnClusterIT.java",
"status": "modified"
},
{
"diff": "@@ -92,7 +92,7 @@ public void testExplainValidateQueryTwoNodes() throws IOException {\n \n refresh();\n \n- for (Client client : internalCluster()) {\n+ for (Client client : internalCluster().getClients()) {\n ValidateQueryResponse response = client.admin().indices().prepareValidateQuery(\"test\")\n .setQuery(QueryBuilders.wrapperQuery(\"foo\".getBytes(StandardCharsets.UTF_8)))\n .setExplain(true)\n@@ -104,7 +104,7 @@ public void testExplainValidateQueryTwoNodes() throws IOException {\n \n }\n \n- for (Client client : internalCluster()) {\n+ for (Client client : internalCluster().getClients()) {\n ValidateQueryResponse response = client.admin().indices().prepareValidateQuery(\"test\")\n .setQuery(QueryBuilders.queryStringQuery(\"foo\"))\n .setExplain(true)",
"filename": "core/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java",
"status": "modified"
},
{
"diff": "@@ -19,12 +19,15 @@\n \n package org.elasticsearch.repositories.azure;\n \n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.node.MockNode;\n import org.elasticsearch.node.Node;\n import org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin;\n \n+import java.io.IOException;\n import java.util.Collections;\n import java.util.concurrent.CountDownLatch;\n \n@@ -112,8 +115,13 @@ public static void main(String[] args) throws Throwable {\n Runtime.getRuntime().addShutdownHook(new Thread() {\n @Override\n public void run() {\n- node.close();\n- latch.countDown();\n+ try {\n+ IOUtils.close(node);\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(e);\n+ } finally {\n+ latch.countDown();\n+ }\n }\n });\n node.start();",
"filename": "plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositoryF.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.tribe;\n \n+import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -34,6 +35,7 @@\n import org.junit.AfterClass;\n import org.junit.BeforeClass;\n \n+import java.io.IOException;\n import java.nio.file.Path;\n \n import static org.hamcrest.CoreMatchers.either;\n@@ -76,10 +78,9 @@ public static void createTribes() {\n }\n \n @AfterClass\n- public static void closeTribes() {\n- tribe1.close();\n+ public static void closeTribes() throws IOException {\n+ IOUtils.close(tribe1, tribe2);\n tribe1 = null;\n- tribe2.close();\n tribe2 = null;\n }\n ",
"filename": "qa/evil-tests/src/test/java/org/elasticsearch/tribe/TribeUnitTests.java",
"status": "modified"
},
{
"diff": "@@ -241,8 +241,8 @@ public String getClusterName() {\n }\n \n @Override\n- public synchronized Iterator<Client> iterator() {\n- return Collections.singleton(client()).iterator();\n+ public synchronized Iterable<Client> getClients() {\n+ return Collections.singleton(client());\n }\n \n /**",
"filename": "test/framework/src/main/java/org/elasticsearch/test/CompositeTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -129,6 +129,7 @@\n import org.junit.Before;\n import org.junit.BeforeClass;\n \n+import java.io.Closeable;\n import java.io.IOException;\n import java.io.InputStream;\n import java.lang.annotation.Annotation;\n@@ -675,7 +676,7 @@ public static Client dataNodeClient() {\n }\n \n public static Iterable<Client> clients() {\n- return cluster();\n+ return cluster().getClients();\n }\n \n protected int minimumNumberOfShards() {\n@@ -1099,7 +1100,7 @@ protected void ensureClusterStateConsistency() throws IOException {\n Map<String, Object> masterStateMap = convertToMap(masterClusterState);\n int masterClusterStateSize = masterClusterState.toString().length();\n String masterId = masterClusterState.nodes().masterNodeId();\n- for (Client client : cluster()) {\n+ for (Client client : cluster().getClients()) {\n ClusterState localClusterState = client.admin().cluster().prepareState().all().setLocal(true).get().getState();\n byte[] localClusterStateBytes = ClusterState.Builder.toBytes(localClusterState);\n // remove local node reference",
"filename": "test/framework/src/main/java/org/elasticsearch/test/ESIntegTestCase.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.test;\n \n+import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;\n@@ -51,6 +52,7 @@\n import org.junit.Before;\n import org.junit.BeforeClass;\n \n+import java.io.IOException;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n@@ -68,7 +70,7 @@ public abstract class ESSingleNodeTestCase extends ESTestCase {\n \n private static Node NODE = null;\n \n- private void reset() {\n+ private void reset() throws IOException {\n assert NODE != null;\n stopNode();\n startNode();\n@@ -83,13 +85,13 @@ private void startNode() {\n assertFalse(clusterHealthResponse.isTimedOut());\n }\n \n- private static void stopNode() {\n+ private static void stopNode() throws IOException {\n Node node = NODE;\n NODE = null;\n- Releasables.close(node);\n+ IOUtils.close(node);\n }\n \n- private void cleanup(boolean resetNode) {\n+ private void cleanup(boolean resetNode) throws IOException {\n assertAcked(client().admin().indices().prepareDelete(\"*\").get());\n if (resetNode) {\n reset();\n@@ -126,7 +128,7 @@ public static void setUpClass() throws Exception {\n }\n \n @AfterClass\n- public static void tearDownClass() {\n+ public static void tearDownClass() throws IOException {\n stopNode();\n }\n ",
"filename": "test/framework/src/main/java/org/elasticsearch/test/ESSingleNodeTestCase.java",
"status": "modified"
},
{
"diff": "@@ -167,8 +167,8 @@ public void ensureEstimatedStats() {\n }\n \n @Override\n- public Iterator<Client> iterator() {\n- return Collections.singleton(client).iterator();\n+ public Iterable<Client> getClients() {\n+ return Collections.singleton(client);\n }\n \n @Override",
"filename": "test/framework/src/main/java/org/elasticsearch/test/ExternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -815,7 +815,7 @@ void resetClient() throws IOException {\n }\n }\n \n- void closeNode() {\n+ void closeNode() throws IOException {\n registerDataPath();\n node.close();\n }\n@@ -1720,27 +1720,29 @@ synchronized String routingKeyForShard(String index, String type, int shard, Ran\n return null;\n }\n \n- @Override\n- public synchronized Iterator<Client> iterator() {\n+ public synchronized Iterable<Client> getClients() {\n ensureOpen();\n- final Iterator<NodeAndClient> iterator = nodes.values().iterator();\n- return new Iterator<Client>() {\n+ return () -> {\n+ ensureOpen();\n+ final Iterator<NodeAndClient> iterator = nodes.values().iterator();\n+ return new Iterator<Client>() {\n \n- @Override\n- public boolean hasNext() {\n- return iterator.hasNext();\n- }\n+ @Override\n+ public boolean hasNext() {\n+ return iterator.hasNext();\n+ }\n \n- @Override\n- public Client next() {\n- return iterator.next().client(random);\n- }\n+ @Override\n+ public Client next() {\n+ return iterator.next().client(random);\n+ }\n \n- @Override\n- public void remove() {\n- throw new UnsupportedOperationException(\"\");\n- }\n+ @Override\n+ public void remove() {\n+ throw new UnsupportedOperationException(\"\");\n+ }\n \n+ };\n };\n }\n ",
"filename": "test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -43,7 +43,7 @@\n * Base test cluster that exposes the basis to run tests against any elasticsearch cluster, whose layout\n * (e.g. number of nodes) is predefined and cannot be changed during the tests execution\n */\n-public abstract class TestCluster implements Iterable<Client>, Closeable {\n+public abstract class TestCluster implements Closeable {\n \n protected final ESLogger logger = Loggers.getLogger(getClass());\n private final long seed;\n@@ -228,5 +228,10 @@ public void wipeRepositories(String... repositories) {\n */\n public abstract String getClusterName();\n \n+ /**\n+ * Returns an {@link Iterable} over all clients in this test cluster\n+ */\n+ public abstract Iterable<Client> getClients();\n+\n \n }",
"filename": "test/framework/src/main/java/org/elasticsearch/test/TestCluster.java",
"status": "modified"
},
{
"diff": "@@ -130,8 +130,8 @@ public void testBeforeTest() throws Exception {\n cluster1.beforeTest(random, random.nextDouble());\n }\n assertArrayEquals(cluster0.getNodeNames(), cluster1.getNodeNames());\n- Iterator<Client> iterator1 = cluster1.iterator();\n- for (Client client : cluster0) {\n+ Iterator<Client> iterator1 = cluster1.getClients().iterator();\n+ for (Client client : cluster0.getClients()) {\n assertTrue(iterator1.hasNext());\n Client other = iterator1.next();\n assertSettings(client.settings(), other.settings(), false);",
"filename": "test/framework/src/test/java/org/elasticsearch/test/test/InternalTestClusterTests.java",
"status": "modified"
}
]
}
|
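The essence of the Node#close() rewrite in the record above is that each shutdown step is deferred into a `List<Closeable>` and the whole list is handed to Lucene's `IOUtils.close(...)`, so a failure in one service no longer prevents the remaining services (or the `NodeEnvironment`, which holds the node lock from #13685) from being closed. The sketch below is a simplified illustration of the closing semantics that pattern relies on; it is not Elasticsearch or Lucene code.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the close-everything behavior used in the diff above:
// every Closeable is closed, the first failure is rethrown at the end, and any
// later failures are attached to it as suppressed exceptions.
class CloseAllSketch {

    static void closeAll(List<? extends Closeable> toClose) throws IOException {
        Throwable first = null;
        for (Closeable closeable : toClose) {
            try {
                if (closeable != null) {
                    closeable.close();
                }
            } catch (Throwable t) {
                if (first == null) {
                    first = t;
                } else {
                    first.addSuppressed(t);
                }
            }
        }
        if (first instanceof IOException) {
            throw (IOException) first;
        } else if (first instanceof RuntimeException) {
            throw (RuntimeException) first;
        } else if (first != null) {
            throw new IOException(first);
        }
    }

    public static void main(String[] args) throws IOException {
        Closeable ok = () -> System.out.println("closed cleanly");
        Closeable broken = () -> { throw new IOException("boom"); };
        closeAll(Arrays.asList(ok, broken, ok)); // both "ok" entries still close, then "boom" is thrown
    }
}
```

The stop-watch lambdas in the diff work for the same reason the lambdas above do: `Closeable` has a single `close()` method, so `() -> stopWatch.stop().start("http")` is itself a `Closeable` that runs in order within the same close pass.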
{
"body": "I do store ES configs in git repo.\nIt required to have `scripts` sub folder, but i can't commit to git empty folder, need to put there .gitkeep file. \nI got this exception:\n\n```\nes_data_1 | java.lang.IllegalArgumentException: script file extension not supported [gitkeep]\nes_data_1 | at org.elasticsearch.script.ScriptService.getScriptEngineServiceForFileExt(ScriptService.java:221)\nes_data_1 | at org.elasticsearch.script.ScriptService.access$1200(ScriptService.java:83)\nes_data_1 | at org.elasticsearch.script.ScriptService$ScriptChangesListener.onFileInit(ScriptService.java:537)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher$FileObserver.onFileCreated(FileWatcher.java:256)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher$FileObserver.init(FileWatcher.java:166)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher$FileObserver.createChild(FileWatcher.java:173)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher$FileObserver.listChildren(FileWatcher.java:188)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher$FileObserver.onDirectoryCreated(FileWatcher.java:299)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher$FileObserver.init(FileWatcher.java:162)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher$FileObserver.access$000(FileWatcher.java:75)\nes_data_1 | at org.elasticsearch.watcher.FileWatcher.doInit(FileWatcher.java:65)\nes_data_1 | at org.elasticsearch.watcher.AbstractResourceWatcher.init(AbstractResourceWatcher.java:36)\nes_data_1 | at org.elasticsearch.watcher.ResourceWatcherService.add(ResourceWatcherService.java:133)\nes_data_1 | at org.elasticsearch.watcher.ResourceWatcherService.add(ResourceWatcherService.java:126)\nes_data_1 | at org.elasticsearch.script.ScriptService.<init>(ScriptService.java:192)\nes_data_1 | at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\nes_data_1 | at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\nes_data_1 | at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nes_data_1 | at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n```\n",
"comments": [
{
"body": "It looks like you can work around this by creating a file without any extension - those are ignored by the ScriptsService.\n\nI'll let someone more opinionated than I am decide if this is a bug or a feature. I will say that it wouldn't be a ton of work to ignore hidden files in this directory. To declare them \"not a script\" and just log a WARNING or something.\n",
"created_at": "2015-12-07T18:16:52Z"
},
{
"body": "I'm guessing this just showed up in 2.0. @nizsheanez, can you confirm when you first saw it?\n",
"created_at": "2015-12-07T18:17:44Z"
},
{
"body": "We check for this with plugins, we should use the same utility: `FileSystemUtils.isHidden`\n",
"created_at": "2015-12-07T19:17:26Z"
},
{
"body": "> We check for this with plugins, we should use the same utility: FileSystemUtils.isHidden\n\nYeah. Also it feels wrong that we should ever be ok running hidden scripts. That just screams \"abuse me!\"\n",
"created_at": "2015-12-07T19:25:31Z"
},
{
"body": "@nizsheanez I've changed the title to make it more obvious what behavior you were relying on before and marked this as a bug.\n",
"created_at": "2015-12-07T19:26:16Z"
},
{
"body": "> Also it feels wrong that we should ever be ok running hidden scripts. \n\nI don't think this would actually happen, because the \"hidden script\" would have an empty string for a name (and I assume, maybe incorrectly, that the script service checks for an empty string when a file script is specified)\n",
"created_at": "2015-12-07T19:28:35Z"
},
{
"body": "Hey,\nI was looking at the ScriptService.java, mainly in the inner class ScriptChangesListener.\nWe could ignore the hidden files on the **onFileInit(Path file)** method if we add the check that was suggested above: _FileSystemUtils.isHidden_, then add some warning as well. \nIf you guys agree, can I take this one?\n",
"created_at": "2016-01-27T01:47:35Z"
},
{
"body": "@fforbeck Please do!\n",
"created_at": "2016-01-27T02:09:52Z"
},
{
"body": "@rjernst \nAlright! I sent a PR. \nCould you please take a look when possible?\n\nThanks\n",
"created_at": "2016-01-28T10:17:09Z"
},
{
"body": "Is there any chance this being backported to 2.3.x ? \n",
"created_at": "2016-06-24T08:01:02Z"
},
{
"body": "> Is there any chance this being backported to 2.3.x ?\n\nThat is very unlikely.\n",
"created_at": "2016-06-24T11:01:52Z"
}
],
"number": 15269,
"title": "Hidden files should be ignored by the ScriptsService"
}
|
{
"body": "If hidden files are detected during the ScriptService initialization, they will be ignored and a warning message will be logged.\n\nCloses #15269 \n",
"number": 16286,
"review_comments": [
{
"body": "I think this should go before the previous check because otherwise a trace log will read that the script file is being loaded, and then that it is being skipped.\n",
"created_at": "2016-01-28T15:06:32Z"
},
{
"body": "I'm not sure if warn is the right level here, maybe debug but I admit uncertainty. Can you remove the leading \"--- \" and add the word \"script\" to the log message somewhere?\n",
"created_at": "2016-01-28T15:07:22Z"
},
{
"body": "This is gone from master now, you'll have to integrate master and rework.\n",
"created_at": "2016-01-28T15:08:31Z"
},
{
"body": "There are several logging statements like this one and I'm not sure what they are adding to the test? If the test fails, a stack trace will give a pretty clear indication how far the test had progressed.\n",
"created_at": "2016-01-28T15:14:02Z"
},
{
"body": "I agree with @jasontedor . I think `debug` is fine here. If I had to guess this is a leftover from debugging the implementation. I know I tend to use stuff like `---` or `asdfewfca` to mark any temporary logs I add.\n",
"created_at": "2016-01-28T15:27:48Z"
},
{
"body": "This test is not really testing what we want to be testing here. The reason that it's not is because the cache key for a file named `\".hidden_file\"` is not `\"hidden_file\"`, but rather it is `\"\"`. A file named `\".hidden_file\"` never would have been processed by the compilation engine because it doesn't have an extension. So this will ultimately throw, but not for the right reason. \n",
"created_at": "2016-01-28T15:30:24Z"
},
{
"body": "Hi @jasontedor, thanks for reviewing. \nI have fixed the other points, but only this one I am not sure what to do.\nIn this case then, would be better to use the key as `\"\"`?\n",
"created_at": "2016-01-29T00:56:45Z"
},
{
"body": "I don't think so, that's relying too much on an internal implementation detail (that there is a cache, that file scripts are compiled and put into the static cache, that the key is the prefix of the filename, etc.). The purpose behind the PR is to get the script service to ignore hidden files, and that is what needs to be tested. I haven't looked too closely, but I suspect that you're going to have to hook into the script service or maybe the resource watcher and possibly refactor a little bit to expose the pieces needed to in fact make this assertion. Let me know if that's enough to get you started, I'm happy to take a closer look if needed. :)\n",
"created_at": "2016-01-29T01:01:35Z"
},
{
"body": "Alright! Yep, I think I can find a way out from this point.\nI let you know. Thanks a lot!\n",
"created_at": "2016-01-29T01:09:41Z"
},
{
"body": "Hi @jasontedor, \nI was wondering if we could just change the method:\n`private Tuple<String, String> scriptNameExt(Path file)` from `ScriptChangesListener` to ignore any kind of file that does not have a name. I mean, hidden files. \n\nIt starts with `.` but the method allows you to retrieve an empty script name for this case. If we just use the `if (extIndex > 0)` we would ignore this kind of file. In addition I would move the method to the parent class and drop the access modifier to create the unit test. We would not need to check if the file is a hidden file or not.\nPlease, let me know what do you think. \nThanks\n",
"created_at": "2016-02-02T11:56:38Z"
}
],
"title": "Skipping hidden files compilation for script service"
}
|
{
"commits": [
{
"message": "Skipping hidden files compilation for script service"
},
{
"message": "Minor fixes after review"
},
{
"message": "Ignoring hidden script files and files with invalid names"
},
{
"message": "testing script compiled once dot files detected"
}
],
"files": [
{
"diff": "@@ -55,7 +55,6 @@\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.query.TemplateQueryParser;\n-import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.lookup.SearchLookup;\n import org.elasticsearch.watcher.FileChangesListener;\n import org.elasticsearch.watcher.FileWatcher;\n@@ -225,6 +224,8 @@ private ScriptEngineService getScriptEngineServiceForFileExt(String fileExtensio\n return scriptEngineService;\n }\n \n+\n+\n /**\n * Checks if a script can be executed and compiles it if needed, or returns the previously compiled and cached script.\n */\n@@ -516,46 +517,53 @@ public void onRemoval(RemovalNotification<CacheKey, CompiledScript> notification\n \n private class ScriptChangesListener extends FileChangesListener {\n \n- private Tuple<String, String> scriptNameExt(Path file) {\n+ private Tuple<String, String> getScriptNameExt(Path file) {\n Path scriptPath = scriptsDirectory.relativize(file);\n int extIndex = scriptPath.toString().lastIndexOf('.');\n- if (extIndex != -1) {\n- String ext = scriptPath.toString().substring(extIndex + 1);\n- String scriptName = scriptPath.toString().substring(0, extIndex).replace(scriptPath.getFileSystem().getSeparator(), \"_\");\n- return new Tuple<>(scriptName, ext);\n- } else {\n+ if (extIndex <= 0) {\n+ return null;\n+ }\n+\n+ String ext = scriptPath.toString().substring(extIndex + 1);\n+ if (ext.isEmpty()) {\n return null;\n }\n+\n+ String scriptName = scriptPath.toString().substring(0, extIndex).replace(scriptPath.getFileSystem().getSeparator(), \"_\");\n+ return new Tuple<>(scriptName, ext);\n }\n \n @Override\n public void onFileInit(Path file) {\n+ Tuple<String, String> scriptNameExt = getScriptNameExt(file);\n+ if (scriptNameExt == null) {\n+ logger.debug(\"Skipped script with invalid extension : [{}]\", file);\n+ return;\n+ }\n if (logger.isTraceEnabled()) {\n logger.trace(\"Loading script file : [{}]\", file);\n }\n- Tuple<String, String> scriptNameExt = scriptNameExt(file);\n- if (scriptNameExt != null) {\n- ScriptEngineService engineService = getScriptEngineServiceForFileExt(scriptNameExt.v2());\n- if (engineService == null) {\n- logger.warn(\"no script engine found for [{}]\", scriptNameExt.v2());\n- } else {\n- try {\n- //we don't know yet what the script will be used for, but if all of the operations for this lang\n- // with file scripts are disabled, it makes no sense to even compile it and cache it.\n- if (isAnyScriptContextEnabled(engineService.getTypes().get(0), engineService, ScriptType.FILE)) {\n- logger.info(\"compiling script file [{}]\", file.toAbsolutePath());\n- try(InputStreamReader reader = new InputStreamReader(Files.newInputStream(file), StandardCharsets.UTF_8)) {\n- String script = Streams.copyToString(reader);\n- CacheKey cacheKey = new CacheKey(engineService, scriptNameExt.v1(), null, Collections.emptyMap());\n- staticCache.put(cacheKey, new CompiledScript(ScriptType.FILE, scriptNameExt.v1(), engineService.getTypes().get(0), engineService.compile(script, Collections.emptyMap())));\n- scriptMetrics.onCompilation();\n- }\n- } else {\n- logger.warn(\"skipping compile of script file [{}] as all scripted operations are disabled for file scripts\", file.toAbsolutePath());\n+\n+ ScriptEngineService engineService = getScriptEngineServiceForFileExt(scriptNameExt.v2());\n+ if (engineService == null) {\n+ logger.warn(\"No script engine found for [{}]\", scriptNameExt.v2());\n+ } else {\n+ try {\n+ //we don't 
know yet what the script will be used for, but if all of the operations for this lang\n+ // with file scripts are disabled, it makes no sense to even compile it and cache it.\n+ if (isAnyScriptContextEnabled(engineService.getTypes().get(0), engineService, ScriptType.FILE)) {\n+ logger.info(\"compiling script file [{}]\", file.toAbsolutePath());\n+ try (InputStreamReader reader = new InputStreamReader(Files.newInputStream(file), StandardCharsets.UTF_8)) {\n+ String script = Streams.copyToString(reader);\n+ CacheKey cacheKey = new CacheKey(engineService, scriptNameExt.v1(), null, Collections.emptyMap());\n+ staticCache.put(cacheKey, new CompiledScript(ScriptType.FILE, scriptNameExt.v1(), engineService.getTypes().get(0), engineService.compile(script, Collections.emptyMap())));\n+ scriptMetrics.onCompilation();\n }\n- } catch (Throwable e) {\n- logger.warn(\"failed to load/compile script [{}]\", e, scriptNameExt.v1());\n+ } else {\n+ logger.warn(\"skipping compile of script file [{}] as all scripted operations are disabled for file scripts\", file.toAbsolutePath());\n }\n+ } catch (Throwable e) {\n+ logger.warn(\"failed to load/compile script [{}]\", e, scriptNameExt.v1());\n }\n }\n }\n@@ -567,7 +575,7 @@ public void onFileCreated(Path file) {\n \n @Override\n public void onFileDeleted(Path file) {\n- Tuple<String, String> scriptNameExt = scriptNameExt(file);\n+ Tuple<String, String> scriptNameExt = getScriptNameExt(file);\n if (scriptNameExt != null) {\n ScriptEngineService engineService = getScriptEngineServiceForFileExt(scriptNameExt.v2());\n assert engineService != null;",
"filename": "core/src/main/java/org/elasticsearch/script/ScriptService.java",
"status": "modified"
},
{
"diff": "@@ -122,26 +122,21 @@ public void testNotSupportedDisableDynamicSetting() throws IOException {\n }\n \n public void testScriptsWithoutExtensions() throws IOException {\n-\n buildScriptService(Settings.EMPTY);\n- logger.info(\"--> setup two test files one with extension and another without\");\n Path testFileNoExt = scriptsFilePath.resolve(\"test_no_ext\");\n Path testFileWithExt = scriptsFilePath.resolve(\"test_script.tst\");\n Streams.copy(\"test_file_no_ext\".getBytes(\"UTF-8\"), Files.newOutputStream(testFileNoExt));\n Streams.copy(\"test_file\".getBytes(\"UTF-8\"), Files.newOutputStream(testFileWithExt));\n resourceWatcherService.notifyNow();\n \n- logger.info(\"--> verify that file with extension was correctly processed\");\n CompiledScript compiledScript = scriptService.compile(new Script(\"test_script\", ScriptType.FILE, \"test\", null),\n ScriptContext.Standard.SEARCH, Collections.emptyMap());\n assertThat(compiledScript.compiled(), equalTo((Object) \"compiled_test_file\"));\n \n- logger.info(\"--> delete both files\");\n Files.delete(testFileNoExt);\n Files.delete(testFileWithExt);\n resourceWatcherService.notifyNow();\n \n- logger.info(\"--> verify that file with extension was correctly removed\");\n try {\n scriptService.compile(new Script(\"test_script\", ScriptType.FILE, \"test\", null), ScriptContext.Standard.SEARCH,\n Collections.emptyMap());\n@@ -151,6 +146,25 @@ public void testScriptsWithoutExtensions() throws IOException {\n }\n }\n \n+ public void testScriptCompiledOnceHiddenFileDetected() throws IOException {\n+ buildScriptService(Settings.EMPTY);\n+\n+ Path testHiddenFile = scriptsFilePath.resolve(\".hidden_file\");\n+ Streams.copy(\"test_hidden_file\".getBytes(\"UTF-8\"), Files.newOutputStream(testHiddenFile));\n+\n+ Path testFileScript = scriptsFilePath.resolve(\"file_script.tst\");\n+ Streams.copy(\"test_file_script\".getBytes(\"UTF-8\"), Files.newOutputStream(testFileScript));\n+ resourceWatcherService.notifyNow();\n+\n+ CompiledScript compiledScript = scriptService.compile(new Script(\"file_script\", ScriptType.FILE, \"test\", null),\n+ ScriptContext.Standard.SEARCH, Collections.emptyMap());\n+ assertThat(compiledScript.compiled(), equalTo((Object) \"compiled_test_file_script\"));\n+\n+ Files.delete(testHiddenFile);\n+ Files.delete(testFileScript);\n+ resourceWatcherService.notifyNow();\n+ }\n+\n public void testInlineScriptCompiledOnceCache() throws IOException {\n buildScriptService(Settings.EMPTY);\n CompiledScript compiledScript1 = scriptService.compile(new Script(\"1+1\", ScriptType.INLINE, \"test\", null),",
"filename": "core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java",
"status": "modified"
}
]
}
|
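With the change in the record above, the listener never reaches extension lookup for a file such as `.gitkeep`: `getScriptNameExt` returns null when the last `.` is missing or sits at position 0 (a hidden file) or the extension is empty, and `onFileInit` logs at debug and skips the file. The following is a small self-contained sketch of that name/extension split using plain `java.nio.file` types; it is not the actual ScriptService code.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the name/extension parsing from the diff above: hidden files such as
// ".gitkeep" and files without an extension yield null and are therefore skipped.
class ScriptNameSketch {

    static String[] scriptNameAndExt(Path scriptsDirectory, Path file) {
        String relative = scriptsDirectory.relativize(file).toString();
        int extIndex = relative.lastIndexOf('.');
        if (extIndex <= 0) {        // -1: no extension at all, 0: hidden file like ".gitkeep"
            return null;
        }
        String ext = relative.substring(extIndex + 1);
        if (ext.isEmpty()) {        // trailing dot, e.g. "script."
            return null;
        }
        String name = relative.substring(0, extIndex).replace(file.getFileSystem().getSeparator(), "_");
        return new String[] { name, ext };
    }

    public static void main(String[] args) {
        Path dir = Paths.get("config", "scripts");
        System.out.println(scriptNameAndExt(dir, dir.resolve(".gitkeep")));        // null -> skipped
        System.out.println(scriptNameAndExt(dir, dir.resolve("calc.groovy"))[1]);  // groovy
    }
}
```

Because `onFileDeleted` goes through the same parser in the diff, the same files are ignored on deletion as on creation, which keeps the static cache handling consistent.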
{
"body": "When a primary is relocating from `node_1` to `node_2`, there can be a short time where the old primary is removed from the node already (closed, not deleted) but the new primary is still in `POST_RECOVERY`. In this state indexing requests might be sent back and forth between `node_1` and `node_2` endlessly.\n\nCourse of events: \n1. primary (`[index][0]`) relocates from `node_1` to `node_2`\n2. `node_2` is done recovering, moves its shard to `IndexShardState.POST_RECOVERY` and sends a message to master that the shard is `ShardRoutingState.STARTED` \n \n ```\n Cluster state 1: \n node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1) \n node_2: [index][0] INITIALIZING (ShardRoutingState), (at this point already POST_RECOVERY from IndexShardState perspective on node_2) \n ```\n3. master receives shard started and updates cluster state to: \n \n ```\n Cluster state 2: \n node_1: [index][0] no shard \n node_2: [index][0] STARTED (ShardRoutingState), (at this point still in POST_RECOVERY from IndexShardState perspective on node_2) \n ```\n \n master sends this to `node_1` and `node_2`\n4. `node_1` receives the new cluster state and removes its shard because it is not allocated on `node_1` anymore \n5. index a document \n\nAt this point `node_1` is already on cluster state 2 and does not have the shard anymore so it forwards the request to `node_2`. But `node_2` is behind with cluster state processing, is still on cluster state 1 and therefore has the shard in `IndexShardState.POST_RECOVERY` and thinks `node_1` has the primary. So it will send the request back to `node_1`. This goes on until either `node_2` finally catches up and processes cluster state 2 or both nodes OOM.\n\nI will make a pull request with a test shortly\n",
"comments": [
{
"body": "here is a test that reproduces this: #12574\n",
"created_at": "2015-07-31T11:44:17Z"
},
{
"body": "I think this will be closed by https://github.com/elastic/elasticsearch/pull/15900\n",
"created_at": "2016-01-26T17:30:31Z"
},
{
"body": "I've opened #16274 to address this issue.\n",
"created_at": "2016-01-27T18:26:35Z"
}
],
"number": 12573,
"title": "Cluster state delay can cause endless index request loop"
}
|
{
"body": "Relates to #12573\n\nWhen relocating a primary shard, there is a cluster state update at the end of relocation where the active primary is switched from the relocation source to the relocation target. If relocation source receives and processes this cluster state before the relocation target, there is a time span where relocation source believes active primary to be on relocation target and relocation target believes active primary to be on relocation source. This results in index/delete/flush requests being sent back and forth and can end in an OOM on both nodes.\n\nThis PR adds a field to the index/delete/flush request that helps detect the case where we locally have stale routing information. In case this staleness is detected, we wait until we have received an up-to-date cluster state before rerouting the request.\n\nI have included the test from #12574 in this PR to demonstrate the fix in an integration test. That integration test will not be part of the final commit, however.\n",
"number": 16274,
"review_comments": [
{
"body": "can we assert that the version is not set?\n",
"created_at": "2016-02-01T13:18:49Z"
},
{
"body": "I see where you are going with the minimumClusterStateVersion here. I would prefer being more explicit and naming it `routedBasedOnClusterStateVersion` or something that implies when it's set. In your original suggestion this was a hard requirement, but now we only use it on failure to resolve a primary. \n",
"created_at": "2016-02-01T13:29:57Z"
},
{
"body": "Got confused. It actually likely the previous routing node has an older cluster state in which we should override the existing value.. nevermind\n",
"created_at": "2016-02-01T16:39:14Z"
},
{
"body": "agree\n",
"created_at": "2016-02-01T17:39:30Z"
},
{
"body": "can we add some java docs here?\n",
"created_at": "2016-02-01T17:49:51Z"
},
{
"body": "maybe better to say \"failed to find primary despite of request being routing here. local cluster state version [{}]] is older than sending node (version [{}]), scheduling a retry... \"\n",
"created_at": "2016-02-01T17:51:55Z"
},
{
"body": "maybe - failed to find primary but cluster state version [{}] is stale (expected at least [{}]).\n",
"created_at": "2016-02-01T17:52:52Z"
}
],
"title": "Prevent TransportReplicationAction to route request based on stale local routing table"
}
|
{
"commits": [
{
"message": "Prevent TransportReplicationAction to route request based on stale local routing table\n\nCloses #16274\nCloses #12573\nCloses #12574"
}
],
"files": [
{
"diff": "@@ -55,6 +55,8 @@ public abstract class ReplicationRequest<Request extends ReplicationRequest<Requ\n \n private WriteConsistencyLevel consistencyLevel = WriteConsistencyLevel.DEFAULT;\n \n+ private long routedBasedOnClusterVersion = 0;\n+\n public ReplicationRequest() {\n \n }\n@@ -141,6 +143,20 @@ public final Request consistencyLevel(WriteConsistencyLevel consistencyLevel) {\n return (Request) this;\n }\n \n+ /**\n+ * Sets the minimum version of the cluster state that is required on the next node before we redirect to another primary.\n+ * Used to prevent redirect loops, see also {@link TransportReplicationAction.ReroutePhase#doRun()}\n+ */\n+ @SuppressWarnings(\"unchecked\")\n+ Request routedBasedOnClusterVersion(long routedBasedOnClusterVersion) {\n+ this.routedBasedOnClusterVersion = routedBasedOnClusterVersion;\n+ return (Request) this;\n+ }\n+\n+ long routedBasedOnClusterVersion() {\n+ return routedBasedOnClusterVersion;\n+ }\n+\n @Override\n public ActionRequestValidationException validate() {\n ActionRequestValidationException validationException = null;\n@@ -161,6 +177,7 @@ public void readFrom(StreamInput in) throws IOException {\n consistencyLevel = WriteConsistencyLevel.fromId(in.readByte());\n timeout = TimeValue.readTimeValue(in);\n index = in.readString();\n+ routedBasedOnClusterVersion = in.readVLong();\n }\n \n @Override\n@@ -175,6 +192,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeByte(consistencyLevel.id());\n timeout.writeTo(out);\n out.writeString(index);\n+ out.writeVLong(routedBasedOnClusterVersion);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java",
"status": "modified"
},
{
"diff": "@@ -472,6 +472,15 @@ protected void doRun() {\n }\n performAction(node, transportPrimaryAction, true);\n } else {\n+ if (state.version() < request.routedBasedOnClusterVersion()) {\n+ logger.trace(\"failed to find primary [{}] for request [{}] despite sender thinking it would be here. Local cluster state version [{}]] is older than on sending node (version [{}]), scheduling a retry...\", request.shardId(), request, state.version(), request.routedBasedOnClusterVersion());\n+ retryBecauseUnavailable(request.shardId(), \"failed to find primary as current cluster state with version [\" + state.version() + \"] is stale (expected at least [\" + request.routedBasedOnClusterVersion() + \"]\");\n+ return;\n+ } else {\n+ // chasing the node with the active primary for a second hop requires that we are at least up-to-date with the current cluster state version\n+ // this prevents redirect loops between two nodes when a primary was relocated and the relocation target is not aware that it is the active primary shard already.\n+ request.routedBasedOnClusterVersion(state.version());\n+ }\n if (logger.isTraceEnabled()) {\n logger.trace(\"send action [{}] on primary [{}] for request [{}] with cluster state version [{}] to [{}]\", actionName, request.shardId(), request, state.version(), primary.currentNodeId());\n }",
"filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java",
"status": "modified"
},
{
"diff": "@@ -42,6 +42,8 @@\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -53,6 +55,7 @@\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardNotFoundException;\n import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.test.ESAllocationTestCase;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.cluster.TestClusterService;\n import org.elasticsearch.test.transport.CapturingTransport;\n@@ -67,6 +70,7 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n@@ -205,6 +209,56 @@ public void testNotStartedPrimary() throws InterruptedException, ExecutionExcept\n assertIndexShardCounter(1);\n }\n \n+ /**\n+ * When relocating a primary shard, there is a cluster state update at the end of relocation where the active primary is switched from\n+ * the relocation source to the relocation target. If relocation source receives and processes this cluster state\n+ * before the relocation target, there is a time span where relocation source believes active primary to be on\n+ * relocation target and relocation target believes active primary to be on relocation source. This results in replication\n+ * requests being sent back and forth.\n+ *\n+ * This test checks that replication request is not routed back from relocation target to relocation source in case of\n+ * stale index routing table on relocation target.\n+ */\n+ public void testNoRerouteOnStaleClusterState() throws InterruptedException, ExecutionException {\n+ final String index = \"test\";\n+ final ShardId shardId = new ShardId(index, 0);\n+ ClusterState state = state(index, true, ShardRoutingState.RELOCATING);\n+ String relocationTargetNode = state.getRoutingTable().shardRoutingTable(shardId).primaryShard().relocatingNodeId();\n+ state = ClusterState.builder(state).nodes(DiscoveryNodes.builder(state.nodes()).localNodeId(relocationTargetNode)).build();\n+ clusterService.setState(state);\n+ logger.debug(\"--> relocation ongoing state:\\n{}\", clusterService.state().prettyPrint());\n+\n+ Request request = new Request(shardId).timeout(\"1ms\").routedBasedOnClusterVersion(clusterService.state().version() + 1);\n+ PlainActionFuture<Response> listener = new PlainActionFuture<>();\n+ TransportReplicationAction.ReroutePhase reroutePhase = action.new ReroutePhase(request, listener);\n+ reroutePhase.run();\n+ assertListenerThrows(\"cluster state too old didn't cause a timeout\", listener, UnavailableShardsException.class);\n+\n+ request = new Request(shardId).routedBasedOnClusterVersion(clusterService.state().version() + 1);\n+ listener = new PlainActionFuture<>();\n+ reroutePhase = action.new ReroutePhase(request, listener);\n+ reroutePhase.run();\n+ assertFalse(\"cluster state too old didn't cause a retry\", listener.isDone());\n+\n+ // finish relocation\n+ ShardRouting relocationTarget = clusterService.state().getRoutingTable().shardRoutingTable(shardId).shardsWithState(ShardRoutingState.INITIALIZING).get(0);\n+ 
AllocationService allocationService = ESAllocationTestCase.createAllocationService();\n+ RoutingAllocation.Result result = allocationService.applyStartedShards(state, Arrays.asList(relocationTarget));\n+ ClusterState updatedState = ClusterState.builder(clusterService.state()).routingResult(result).build();\n+\n+ clusterService.setState(updatedState);\n+ logger.debug(\"--> relocation complete state:\\n{}\", clusterService.state().prettyPrint());\n+\n+ IndexShardRoutingTable shardRoutingTable = clusterService.state().routingTable().index(index).shard(shardId.id());\n+ final String primaryNodeId = shardRoutingTable.primaryShard().currentNodeId();\n+ final List<CapturingTransport.CapturedRequest> capturedRequests =\n+ transport.getCapturedRequestsByTargetNodeAndClear().get(primaryNodeId);\n+ assertThat(capturedRequests, notNullValue());\n+ assertThat(capturedRequests.size(), equalTo(1));\n+ assertThat(capturedRequests.get(0).action, equalTo(\"testAction[p]\"));\n+ assertIndexShardCounter(1);\n+ }\n+\n public void testUnknownIndexOrShardOnReroute() throws InterruptedException {\n final String index = \"test\";\n // no replicas in oder to skip the replication part",
"filename": "core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java",
"status": "modified"
}
]
}
|
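The test added in the diff above hinges on one idea: a replication request remembers the cluster-state version it was routed on, and a node whose local cluster state is older than that version must wait for (or time out on) a newer state rather than re-route the request, which is what prevents the request from bouncing back to the relocation source. The following is only a minimal, self-contained sketch of that version gate in plain Java; the class and method names are hypothetical and are not the Elasticsearch API.

```java
/** Hypothetical illustration: only re-route a request once the local
 *  cluster state has caught up to the version the sender routed it on. */
public class VersionGatedRouting {

    /** Carries the cluster-state version the sender used when routing. */
    static final class ReplicationRequest {
        final long routedOnClusterVersion;
        ReplicationRequest(long routedOnClusterVersion) {
            this.routedOnClusterVersion = routedOnClusterVersion;
        }
    }

    /** True if the local state is too stale to safely resolve the primary again. */
    static boolean mustWaitForNewerState(ReplicationRequest request, long localClusterStateVersion) {
        return localClusterStateVersion < request.routedOnClusterVersion;
    }

    public static void main(String[] args) {
        ReplicationRequest request = new ReplicationRequest(42);
        System.out.println(mustWaitForNewerState(request, 41)); // true: defer and retry on a newer state
        System.out.println(mustWaitForNewerState(request, 42)); // false: safe to resolve the primary and route
    }
}
```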
{
"body": "Or `toXContent` is broken.\n\nReproduce:\n\n```\nDELETE test_index\n\nPUT test_index\n{\n \"mappings\": {\n \"type1\": {\n \"properties\": {\n \"field\": {\n \"type\": \"string\"\n }\n }\n },\n \"type2\": {\n \"properties\": {\n \"field\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\nGET test_index/_mapping/field/field?include_defaults\n\nPUT test_index/type1/_mapping?update_all_types\n{\n \"properties\": {\n \"field\": {\n \"type\": \"string\",\n \"search_analyzer\": \"keyword\",\n \"analyzer\": \"default\"\n }\n }\n}\n\n```\n\nIn the mapping I retrieve with the GET _mapping API in the mapping only type1 is updated, although with the GET field mapping API I get the correct mapping.\n\n```\nGET test_index/_mapping\n\n{\n \"test_index\": {\n \"mappings\": {\n \"type1\": {\n \"properties\": {\n \"field\": {\n \"type\": \"string\",\n \"analyzer\": \"default\",\n \"search_analyzer\": \"keyword\"\n }\n }\n },\n \"type2\": {\n \"properties\": {\n \"field\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n}\n\nGET test_index/_mapping/field/field\n\n\n{\n \"test_index\": {\n \"mappings\": {\n \"type2\": {\n \"field\": {\n \"full_name\": \"field\",\n \"mapping\": {\n \"field\": {\n \"type\": \"string\",\n \"analyzer\": \"default\",\n \"search_analyzer\": \"keyword\"\n }\n }\n }\n },\n \"type1\": {\n \"field\": {\n \"full_name\": \"field\",\n \"mapping\": {\n \"field\": {\n \"type\": \"string\",\n \"analyzer\": \"default\",\n \"search_analyzer\": \"keyword\"\n }\n }\n }\n }\n }\n }\n}\n\n```\n\nWhen I restart the node the change has vanished from the field mapping result but still is in the get mapping result.\n",
"comments": [
{
"body": "tested only 2.1.1\n",
"created_at": "2016-01-26T16:16:17Z"
},
{
"body": "I tried on 2.2 and see the same behavior. The get mapping api is implemented by looking at the cluster state from the master, and doing a manual inspection of the cluster state shows the same inconsistency.\n\nI have a hunch this is a problem in `MetaDataMappingService.PutMappingExecutor`, where I believe it should now simply be regenerating the entire mappings for the given index instead of the current complicated merging into the existing `MappingMetaData` of the index based on the type in the request. /cc @jpountz \n",
"created_at": "2016-01-27T06:14:49Z"
},
{
"body": "Good catch @rjernst, this was indeed one of the causes of the problem. I opened #16264 for it.\n\nBut there was also a serialization bug as Britta initially thought, which would be fixed by #16255.\n",
"created_at": "2016-01-27T13:41:24Z"
}
],
"number": 16239,
"title": "PUT mapping with with search_analyzer and `update_all_types` does not actually update all types "
}
|
{
"body": "Today put mapping operations only update metadata of the type that is being\nmodified, which is not enough since some modifications may have side-effects\non other types.\n\nCloses #16239\n",
"number": 16264,
"review_comments": [
{
"body": "Is this else really necessary anymore? Looks like just some logging?\n",
"created_at": "2016-02-08T19:28:21Z"
},
{
"body": "It is logging only indeed. But whether or not to log things tends to be a bit controversial so I'd rather do it in another change if we want to remove it\n",
"created_at": "2016-02-09T10:55:47Z"
},
{
"body": "Sure, it was just an observation. :)\n",
"created_at": "2016-02-09T18:38:38Z"
}
],
"title": "Put mapping operations must update metadata of all types."
}
|
{
"commits": [
{
"message": "Put mapping operations must update metadata of all types.\n\nToday put mapping operations only update metadata of the type that is being\nmodified, which is not enough since some modifications may have side-effects\non other types.\n\nCloses #16239"
}
],
"files": [
{
"diff": "@@ -290,7 +290,7 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt\n if (!MapperService.DEFAULT_MAPPING.equals(mappingType) && !PercolatorService.TYPE_NAME.equals(mappingType) && mappingType.charAt(0) == '_') {\n throw new InvalidTypeNameException(\"Document mapping type name can't start with '_'\");\n }\n- final Map<String, MappingMetaData> mappings = new HashMap<>();\n+ MetaData.Builder builder = MetaData.builder(currentState.metaData());\n for (String index : request.indices()) {\n // do the actual merge here on the master, and update the mapping source\n IndexService indexService = indicesService.indexService(index);\n@@ -311,7 +311,6 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt\n // same source, no changes, ignore it\n } else {\n // use the merged mapping source\n- mappings.put(index, new MappingMetaData(mergedMapper));\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}] update_mapping [{}] with source [{}]\", index, mergedMapper.type(), updatedSource);\n } else if (logger.isInfoEnabled()) {\n@@ -320,28 +319,24 @@ private ClusterState applyRequest(ClusterState currentState, PutMappingClusterSt\n \n }\n } else {\n- mappings.put(index, new MappingMetaData(mergedMapper));\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}] create_mapping [{}] with source [{}]\", index, mappingType, updatedSource);\n } else if (logger.isInfoEnabled()) {\n logger.info(\"[{}] create_mapping [{}]\", index, mappingType);\n }\n }\n- }\n- if (mappings.isEmpty()) {\n- // no changes, return\n- return currentState;\n- }\n- MetaData.Builder builder = MetaData.builder(currentState.metaData());\n- for (String indexName : request.indices()) {\n- IndexMetaData indexMetaData = currentState.metaData().index(indexName);\n+\n+ IndexMetaData indexMetaData = currentState.metaData().index(index);\n if (indexMetaData == null) {\n- throw new IndexNotFoundException(indexName);\n+ throw new IndexNotFoundException(index);\n }\n- MappingMetaData mappingMd = mappings.get(indexName);\n- if (mappingMd != null) {\n- builder.put(IndexMetaData.builder(indexMetaData).putMapping(mappingMd));\n+ IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(indexMetaData);\n+ // Mapping updates on a single type may have side-effects on other types so we need to\n+ // update mapping metadata on all types\n+ for (DocumentMapper mapper : indexService.mapperService().docMappers(true)) {\n+ indexMetaDataBuilder.putMapping(new MappingMetaData(mapper.mappingSource()));\n }\n+ builder.put(indexMetaDataBuilder);\n }\n \n return ClusterState.builder(currentState).metaData(builder).build();",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.hamcrest.Matchers;\n \n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.List;\n@@ -337,4 +338,20 @@ public void testPutMappingsWithBlocks() throws Exception {\n }\n }\n }\n+\n+ public void testUpdateMappingOnAllTypes() throws IOException {\n+ assertAcked(prepareCreate(\"index\").addMapping(\"type1\", \"f\", \"type=string\").addMapping(\"type2\", \"f\", \"type=string\"));\n+\n+ assertAcked(client().admin().indices().preparePutMapping(\"index\")\n+ .setType(\"type1\")\n+ .setUpdateAllTypes(true)\n+ .setSource(\"f\", \"type=string,analyzer=default,null_value=n/a\")\n+ .get());\n+\n+ GetMappingsResponse mappings = client().admin().indices().prepareGetMappings(\"index\").setTypes(\"type2\").get();\n+ MappingMetaData type2Mapping = mappings.getMappings().get(\"index\").get(\"type2\").get();\n+ Map<String, Object> properties = (Map<String, Object>) type2Mapping.sourceAsMap().get(\"properties\");\n+ Map<String, Object> f = (Map<String, Object>) properties.get(\"f\");\n+ assertEquals(\"n/a\", f.get(\"null_value\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/indices/mapping/UpdateMappingIntegrationIT.java",
"status": "modified"
}
]
}
|
{
"body": "This is a duplicate of https://github.com/elastic/elasticsearch/issues/15880\n\nTo replicate, copy the config directory to a new location and add a simple tribe config, eg:\n\n```\ncp -a elasticsearch-2.2.0/config tribe\necho \"tribe.t1.cluster.name: foo\" >> tribe/elasticsearch.yml\n```\n\nStart the tribe node as follows:\n\n```\n./elasticsearch-2.2.0/bin/elasticsearch --path.conf tribe\n```\n\nThe process dies with:\n\n```\nException in thread \"main\" java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/Users/clinton/workspace/servers/elasticsearch-2.2.0/config/hunspell\" \"read\")\nat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\nat java.security.AccessController.checkPermission(AccessController.java:884)\nat java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\nat java.lang.SecurityManager.checkRead(SecurityManager.java:888)\nat sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)\nat sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:49)\nat sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)\nat java.nio.file.Files.readAttributes(Files.java:1737)\nat java.nio.file.Files.isDirectory(Files.java:2192)\nat org.elasticsearch.indices.analysis.HunspellService.scanAndLoadDictionaries(HunspellService.java:127)\nat org.elasticsearch.indices.analysis.HunspellService.<init>(HunspellService.java:102)\nat sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\nat sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\nat sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nat java.lang.reflect.Constructor.newInstance(Constructor.java:422)\nat <<<guice>>>\nat org.elasticsearch.node.Node.<init>(Node.java:200)\nat org.elasticsearch.tribe.TribeClientNode.<init>(TribeClientNode.java:35)\nat org.elasticsearch.tribe.TribeService.<init>(TribeService.java:141)\nat sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\nat sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\nat sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\nat java.lang.reflect.Constructor.newInstance(Constructor.java:422)\nat <<<guice>>>\nat org.elasticsearch.node.Node.<init>(Node.java:200)\nat org.elasticsearch.node.Node.<init>(Node.java:128)\nat org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\nat org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)\nat org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)\nat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n```\n\nThe reason for this is that, when the tribe node tries to create a client to connect to the cluster, it doesn't pass on the `path.conf` setting to the client, so that when HunspellService tries to [resolve the hunspell directory](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/indices/analysis/HunspellService.java#L114) it uses `path.home` instead of the custom `path.conf`.\n\nAdding the absolute config path to the tribe node configuration works around this:\n\n```\necho \"tribe.t1.path.conf: /Users/clinton/workspace/servers/tribe/\" >> tribe/elasticsearch.yml\n```\n",
"comments": [
{
"body": "@clintongormley hi. I do as you said, but it doesn't work!\n\n```\nelasticsearch@repository:/$ cat /etc/elasticsearch/elasticsearch.yml \ntransport.tcp.port: 9300\nhttp.port: 9200\nnode.name: es-center\npath.data: /var/lib/elasticsearch\npath.logs: /var/log/elasticsearch\nnetwork.host: 0.0.0.0\ntribe:\n t1:\n path.conf: /etc/tribe-client/\n cluster.name: ydnj-es\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"10.2.89.182:50002\"]\n t2:\n path.conf: /etc/tribe-client/\n cluster.name: ytyq-es\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"10.2.89.182:50003\"]\n```\n\nelasticsearch.yml in /etc/tribe-client/ without anything in it.\n\n```\nelasticsearch@repository:/$ ls -al /etc/tribe-client/elasticsearch.yml \n-rw-r--r-- 1 elasticsearch root 0 Jan 27 20:01 /etc/tribe-client/elasticsearch.yml\n```\n\nHOWEVER, I get the result.\n\n```\nelasticsearch@repository:/$ /usr/share/elasticsearch/bin/elasticsearch --path.conf /etc/elasticsearch\n[2016-01-28 11:00:06,314][INFO ][node ] [es-center] version[2.1.1], pid[126326], build[40e2c53/2015-12-15T13:05:55Z]\n[2016-01-28 11:00:06,314][INFO ][node ] [es-center] initializing ...\n[2016-01-28 11:00:06,353][INFO ][plugins ] [es-center] loaded [], sites []\n[2016-01-28 11:00:07,421][INFO ][node ] [es-center/t1] version[2.1.1], pid[126326], build[40e2c53/2015-12-15T13:05:55Z]\n[2016-01-28 11:00:07,421][INFO ][node ] [es-center/t1] initializing ...\n[2016-01-28 11:00:07,480][INFO ][plugins ] [es-center/t1] loaded [], sites []\nException in thread \"main\" java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/etc/tribe-client/hunspell\" \"read\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkRead(SecurityManager.java:888)\n at sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)\n at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:49)\n at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)\n at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)\n at java.nio.file.Files.readAttributes(Files.java:1737)\n at java.nio.file.Files.isDirectory(Files.java:2192)\n at org.elasticsearch.indices.analysis.HunspellService.scanAndLoadDictionaries(HunspellService.java:127)\n at org.elasticsearch.indices.analysis.HunspellService.<init>(HunspellService.java:102)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:423)\n```\n\nseems that the path.conf send to the tribe. but it still search for hunspell.\n\nEnviroment: es 2.1.1; java 1.8.0_72\n",
"created_at": "2016-01-28T03:05:15Z"
},
{
"body": "the same configuration with environment: es 2.0.2\n\n```\n[2016-01-28 11:07:17,016][INFO ][node ] [es-center] version[2.0.2], pid[127071], build[6abf5d8/2015-12-16T12:49:58Z]\n[2016-01-28 11:07:17,017][INFO ][node ] [es-center] initializing ...\n[2016-01-28 11:07:17,064][INFO ][plugins ] [es-center] loaded [], sites []\n[2016-01-28 11:07:18,040][ERROR][bootstrap ] Guice Exception: java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/etc/tribe-client/elasticsearch.yml\" \"read\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkRead(SecurityManager.java:888)\n at sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)\n at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:290)\n at java.nio.file.Files.exists(Files.java:2385)\n```\n",
"created_at": "2016-01-28T03:08:05Z"
},
{
"body": "@gbjuno for the workaround to work, the `path.conf` that you specify on the command line must be the same as the `path.conf` that you specify in the tribe node config.\n",
"created_at": "2016-01-28T10:41:36Z"
},
{
"body": "@clintongormley ,thank you.After changing the path.conf to /etc/elasticsearch, the tribe node started running.\n",
"created_at": "2016-01-29T06:28:55Z"
}
],
"number": 16253,
"title": "Tribe node clients using wrong config path"
}
|
{
"body": "If we don't do this, and some path.conf is set when starting the tribe node, that path.conf will be ignored and the inner tribe clients will try to read elsewhere, where they most likely don't have permissions to read from.\n\nCloses #16253\n",
"number": 16258,
"review_comments": [],
"title": "Tribe node: pass path.conf to inner tribe clients"
}
|
{
"commits": [
{
"message": "Tribe node: pass path.conf to inner tribe clients\n\nIf we don't do this, and some path.conf is set when starting the tribe node, that path.conf will be ignored and the inner tribe clients will try to read elsewhere, where they most likely don't have permissions to read from.\n\nCloses #16253\nCloses #16258"
}
],
"files": [
{
"diff": "@@ -135,6 +135,9 @@ public TribeService(Settings settings, ClusterService clusterService, DiscoveryS\n Settings.Builder sb = Settings.builder().put(entry.getValue());\n sb.put(\"name\", settings.get(\"name\") + \"/\" + entry.getKey());\n sb.put(Environment.PATH_HOME_SETTING.getKey(), Environment.PATH_HOME_SETTING.get(settings)); // pass through ES home dir\n+ if (Environment.PATH_CONF_SETTING.exists(settings)) {\n+ sb.put(Environment.PATH_CONF_SETTING.getKey(), Environment.PATH_CONF_SETTING.get(settings));\n+ }\n sb.put(TRIBE_NAME, entry.getKey());\n if (sb.get(\"http.enabled\") == null) {\n sb.put(\"http.enabled\", false);",
"filename": "core/src/main/java/org/elasticsearch/tribe/TribeService.java",
"status": "modified"
},
{
"diff": "@@ -91,7 +91,7 @@ public void testThatTribeClientsIgnoreGlobalSysProps() throws Exception {\n System.setProperty(\"es.tribe.t2.discovery.id.seed\", Long.toString(random().nextLong()));\n \n try {\n- assertTribeNodeSuccesfullyCreated(Settings.EMPTY);\n+ assertTribeNodeSuccessfullyCreated(Settings.EMPTY);\n } finally {\n System.clearProperty(\"es.cluster.name\");\n System.clearProperty(\"es.tribe.t1.cluster.name\");\n@@ -108,10 +108,10 @@ public void testThatTribeClientsIgnoreGlobalConfig() throws Exception {\n .put(InternalSettingsPreparer.IGNORE_SYSTEM_PROPERTIES_SETTING, true)\n .put(Environment.PATH_CONF_SETTING.getKey(), pathConf)\n .build();\n- assertTribeNodeSuccesfullyCreated(settings);\n+ assertTribeNodeSuccessfullyCreated(settings);\n }\n \n- private static void assertTribeNodeSuccesfullyCreated(Settings extraSettings) throws Exception {\n+ private static void assertTribeNodeSuccessfullyCreated(Settings extraSettings) throws Exception {\n //tribe node doesn't need the node.mode setting, as it's forced local internally anyways. The tribe clients do need it to make sure\n //they can find their corresponding tribes using the proper transport\n Settings settings = Settings.builder().put(\"http.enabled\", false).put(\"node.name\", \"tribe_node\")",
"filename": "qa/evil-tests/src/test/java/org/elasticsearch/tribe/TribeUnitTests.java",
"status": "modified"
}
]
}
|
{
"body": "PR for #16246\n",
"comments": [
{
"body": "LGTM\n",
"created_at": "2016-01-26T21:43:37Z"
}
],
"number": 16248,
"title": "The IngestDocument copy constructor should make a deep copy"
}
|
{
"body": "updated this test to reflect changes in #16248 which fix #16246 around `_simulate?verbose` results mutating between processor executions because `IngestDocument`s were not properly being copied.\n",
"number": 16251,
"review_comments": [],
"title": "update test to verify that documents are deep-copied between verbose results"
}
|
{
"commits": [
{
"message": "[ingest] update test to verify that documents are deep-copied between verbose results"
}
],
"files": [
{
"diff": "@@ -207,7 +207,7 @@\n {\n \"set\" : {\n \"tag\" : \"processor[set]-0\",\n- \"field\" : \"field2\",\n+ \"field\" : \"field2.value\",\n \"value\" : \"_value\"\n }\n },\n@@ -216,6 +216,16 @@\n \"field\" : \"field3\",\n \"value\" : \"third_val\"\n }\n+ },\n+ {\n+ \"uppercase\" : {\n+ \"field\" : \"field2.value\"\n+ }\n+ },\n+ {\n+ \"lowercase\" : {\n+ \"field\" : \"foo.bar.0.item\"\n+ }\n }\n ]\n },\n@@ -225,25 +235,39 @@\n \"_type\": \"type\",\n \"_id\": \"id\",\n \"_source\": {\n- \"foo\": \"bar\"\n+ \"foo\": {\n+ \"bar\" : [ {\"item\": \"HELLO\"} ]\n+ }\n }\n }\n ]\n }\n - length: { docs: 1 }\n- - length: { docs.0.processor_results: 2 }\n+ - length: { docs.0.processor_results: 4 }\n - match: { docs.0.processor_results.0.tag: \"processor[set]-0\" }\n - length: { docs.0.processor_results.0.doc._source: 2 }\n- - match: { docs.0.processor_results.0.doc._source.foo: \"bar\" }\n- - match: { docs.0.processor_results.0.doc._source.field2: \"_value\" }\n+ - match: { docs.0.processor_results.0.doc._source.foo.bar.0.item: \"HELLO\" }\n+ - match: { docs.0.processor_results.0.doc._source.field2.value: \"_value\" }\n - length: { docs.0.processor_results.0.doc._ingest: 1 }\n - is_true: docs.0.processor_results.0.doc._ingest.timestamp\n - length: { docs.0.processor_results.1.doc._source: 3 }\n- - match: { docs.0.processor_results.1.doc._source.foo: \"bar\" }\n- - match: { docs.0.processor_results.1.doc._source.field2: \"_value\" }\n+ - match: { docs.0.processor_results.1.doc._source.foo.bar.0.item: \"HELLO\" }\n+ - match: { docs.0.processor_results.1.doc._source.field2.value: \"_value\" }\n - match: { docs.0.processor_results.1.doc._source.field3: \"third_val\" }\n - length: { docs.0.processor_results.1.doc._ingest: 1 }\n - is_true: docs.0.processor_results.1.doc._ingest.timestamp\n+ - length: { docs.0.processor_results.2.doc._source: 3 }\n+ - match: { docs.0.processor_results.2.doc._source.foo.bar.0.item: \"HELLO\" }\n+ - match: { docs.0.processor_results.2.doc._source.field2.value: \"_VALUE\" }\n+ - match: { docs.0.processor_results.2.doc._source.field3: \"third_val\" }\n+ - length: { docs.0.processor_results.2.doc._ingest: 1 }\n+ - is_true: docs.0.processor_results.2.doc._ingest.timestamp\n+ - length: { docs.0.processor_results.3.doc._source: 3 }\n+ - match: { docs.0.processor_results.3.doc._source.foo.bar.0.item: \"hello\" }\n+ - match: { docs.0.processor_results.3.doc._source.field2.value: \"_VALUE\" }\n+ - match: { docs.0.processor_results.3.doc._source.field3: \"third_val\" }\n+ - length: { docs.0.processor_results.3.doc._ingest: 1 }\n+ - is_true: docs.0.processor_results.3.doc._ingest.timestamp\n \n ---\n \"Test simulate with exception thrown\":",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/ingest/40_simulate.yaml",
"status": "modified"
}
]
}
|
{
"body": "Processors appear to be firing out of sequence. \n\nIn example 1, when the geoip processor is the only processor in the pipeline, the resulting object has a continent_name property with the value of \"North America\". Note that is it not all uppercase.\n\nIn example 2, the same geoip processor and input doc is used, but there is an additional uppercase processor after the geoip processor. Note that the output of BOTH processors in the pipeline have a continent_name property that is all uppercase.\n## Example 1\n### Request Body\n\n```\n{\n \"pipeline\": {\n \"processors\": [\n {\n \"geoip\": {\n \"processor_id\": \"processor_1\",\n \"source_field\": \"_raw\",\n \"target_field\": \"geoip\"\n }\n }\n ]\n },\n \"docs\": [\n {\n \"_source\": {\n \"_raw\": \"64.242.88.10\"\n }\n }\n ]\n}\n```\n### Response Body\n\n```\n{\n \"docs\": [\n {\n \"processor_results\": [\n {\n \"processor_id\": \"processor_1\",\n \"doc\": {\n \"_type\": \"_type\",\n \"_routing\": null,\n \"_ttl\": null,\n \"_index\": \"_index\",\n \"_timestamp\": null,\n \"_parent\": null,\n \"_id\": \"_id\",\n \"_source\": {\n \"geoip\": {\n \"continent_name\": \"North America\",\n \"city_name\": \"Chesterfield\",\n \"country_iso_code\": \"US\",\n \"region_name\": \"Missouri\",\n \"location\": [\n -90.5771,\n 38.6631\n ]\n },\n \"_raw\": \"64.242.88.10\"\n },\n \"_ingest\": {\n \"timestamp\": \"2016-01-26T20:41:59.806+0000\"\n }\n }\n }\n ]\n }\n ]\n}\n```\n## Example 2\n### Request Body\n\n```\n{\n \"pipeline\": {\n \"processors\": [\n {\n \"geoip\": {\n \"processor_id\": \"processor_1\",\n \"source_field\": \"_raw\",\n \"target_field\": \"geoip\"\n }\n },\n {\n \"uppercase\": {\n \"processor_id\": \"processor_2\",\n \"field\": \"geoip.continent_name\"\n }\n }\n ]\n },\n \"docs\": [\n {\n \"_source\": {\n \"_raw\": \"64.242.88.10\"\n }\n }\n ]\n}\n```\n### Response Body\n\n```\n{\n \"docs\": [\n {\n \"processor_results\": [\n {\n \"processor_id\": \"processor_1\",\n \"doc\": {\n \"_type\": \"_type\",\n \"_routing\": null,\n \"_ttl\": null,\n \"_index\": \"_index\",\n \"_timestamp\": null,\n \"_parent\": null,\n \"_id\": \"_id\",\n \"_source\": {\n \"geoip\": {\n \"continent_name\": \"NORTH AMERICA\",\n \"city_name\": \"Chesterfield\",\n \"country_iso_code\": \"US\",\n \"region_name\": \"Missouri\",\n \"location\": [\n -90.5771,\n 38.6631\n ]\n },\n \"_raw\": \"64.242.88.10\"\n },\n \"_ingest\": {\n \"timestamp\": \"2016-01-26T20:39:10.407+0000\"\n }\n }\n },\n {\n \"processor_id\": \"processor_2\",\n \"doc\": {\n \"_type\": \"_type\",\n \"_routing\": null,\n \"_ttl\": null,\n \"_index\": \"_index\",\n \"_timestamp\": null,\n \"_parent\": null,\n \"_id\": \"_id\",\n \"_source\": {\n \"geoip\": {\n \"continent_name\": \"NORTH AMERICA\",\n \"city_name\": \"Chesterfield\",\n \"country_iso_code\": \"US\",\n \"region_name\": \"Missouri\",\n \"location\": [\n -90.5771,\n 38.6631\n ]\n },\n \"_raw\": \"64.242.88.10\"\n },\n \"_ingest\": {\n \"timestamp\": \"2016-01-26T20:39:10.407+0000\"\n }\n }\n }\n ]\n }\n ]\n}\n```\n",
"comments": [
{
"body": "Seems to be a bigger issue. The same effect happens with the lowercase, and split processors. Is it related to the fact that I am operating on a nested object? ie: geoip.continent_name\n\nAm I doing this incorrectly?\n",
"created_at": "2016-01-26T20:50:11Z"
},
{
"body": "I believe the copy constructor for `IngestDocument` does not properly do a deep copy of the underlying source Map.\n",
"created_at": "2016-01-26T20:56:34Z"
},
{
"body": "@BigFunger Usage is correct, there seems to be something wrong with the verbose version of the simulate api.\n",
"created_at": "2016-01-26T20:56:39Z"
},
{
"body": "@BigFunger Bug has been fixed in the master branch. Make sure when you're going to work from master, that you need to change `processor_id` to `tag`.\n",
"created_at": "2016-01-26T21:50:38Z"
}
],
"number": 16246,
"title": "node ingest - simulate - uppercase processor affecting output of previous processors"
}
|
{
"body": "updated this test to reflect changes in #16248 which fix #16246 around `_simulate?verbose` results mutating between processor executions because `IngestDocument`s were not properly being copied.\n",
"number": 16251,
"review_comments": [],
"title": "update test to verify that documents are deep-copied between verbose results"
}
|
{
"commits": [
{
"message": "[ingest] update test to verify that documents are deep-copied between verbose results"
}
],
"files": [
{
"diff": "@@ -207,7 +207,7 @@\n {\n \"set\" : {\n \"tag\" : \"processor[set]-0\",\n- \"field\" : \"field2\",\n+ \"field\" : \"field2.value\",\n \"value\" : \"_value\"\n }\n },\n@@ -216,6 +216,16 @@\n \"field\" : \"field3\",\n \"value\" : \"third_val\"\n }\n+ },\n+ {\n+ \"uppercase\" : {\n+ \"field\" : \"field2.value\"\n+ }\n+ },\n+ {\n+ \"lowercase\" : {\n+ \"field\" : \"foo.bar.0.item\"\n+ }\n }\n ]\n },\n@@ -225,25 +235,39 @@\n \"_type\": \"type\",\n \"_id\": \"id\",\n \"_source\": {\n- \"foo\": \"bar\"\n+ \"foo\": {\n+ \"bar\" : [ {\"item\": \"HELLO\"} ]\n+ }\n }\n }\n ]\n }\n - length: { docs: 1 }\n- - length: { docs.0.processor_results: 2 }\n+ - length: { docs.0.processor_results: 4 }\n - match: { docs.0.processor_results.0.tag: \"processor[set]-0\" }\n - length: { docs.0.processor_results.0.doc._source: 2 }\n- - match: { docs.0.processor_results.0.doc._source.foo: \"bar\" }\n- - match: { docs.0.processor_results.0.doc._source.field2: \"_value\" }\n+ - match: { docs.0.processor_results.0.doc._source.foo.bar.0.item: \"HELLO\" }\n+ - match: { docs.0.processor_results.0.doc._source.field2.value: \"_value\" }\n - length: { docs.0.processor_results.0.doc._ingest: 1 }\n - is_true: docs.0.processor_results.0.doc._ingest.timestamp\n - length: { docs.0.processor_results.1.doc._source: 3 }\n- - match: { docs.0.processor_results.1.doc._source.foo: \"bar\" }\n- - match: { docs.0.processor_results.1.doc._source.field2: \"_value\" }\n+ - match: { docs.0.processor_results.1.doc._source.foo.bar.0.item: \"HELLO\" }\n+ - match: { docs.0.processor_results.1.doc._source.field2.value: \"_value\" }\n - match: { docs.0.processor_results.1.doc._source.field3: \"third_val\" }\n - length: { docs.0.processor_results.1.doc._ingest: 1 }\n - is_true: docs.0.processor_results.1.doc._ingest.timestamp\n+ - length: { docs.0.processor_results.2.doc._source: 3 }\n+ - match: { docs.0.processor_results.2.doc._source.foo.bar.0.item: \"HELLO\" }\n+ - match: { docs.0.processor_results.2.doc._source.field2.value: \"_VALUE\" }\n+ - match: { docs.0.processor_results.2.doc._source.field3: \"third_val\" }\n+ - length: { docs.0.processor_results.2.doc._ingest: 1 }\n+ - is_true: docs.0.processor_results.2.doc._ingest.timestamp\n+ - length: { docs.0.processor_results.3.doc._source: 3 }\n+ - match: { docs.0.processor_results.3.doc._source.foo.bar.0.item: \"hello\" }\n+ - match: { docs.0.processor_results.3.doc._source.field2.value: \"_VALUE\" }\n+ - match: { docs.0.processor_results.3.doc._source.field3: \"third_val\" }\n+ - length: { docs.0.processor_results.3.doc._ingest: 1 }\n+ - is_true: docs.0.processor_results.3.doc._ingest.timestamp\n \n ---\n \"Test simulate with exception thrown\":",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/ingest/40_simulate.yaml",
"status": "modified"
}
]
}
|
{
"body": "This PR is based on 2.2. It's a start for #16206, but I still need to make a test case.\n\nI fixed `IndexShard.recovering` to forcefully activate the shard (and have IMC re-budget indexing buffers), and also fixed `IndexShard.checkIdle` to always return `false` (still active) if the shard is still recovering.\n",
"comments": [
{
"body": "@bleskes I pushed changes based on your feedback (pushing activation \"lower\", to when recovery actually begins ... and I changed `markLastWrite` to use `System.nanoTime` instead of the operation's start time) can you look? Thanks!\n",
"created_at": "2016-01-25T14:54:07Z"
},
{
"body": "LGTM. I wonder if we can add an easy unit test test to make sure this doesn't regress. Maybe check that a shard is active post recovery from store in `testRecoverFromStore`?\n\nAlso - although not needed in master, I wonder what parts of this should go there as well? (i.e., mark the shard as active up recovery from primary)\n",
"created_at": "2016-01-25T15:01:25Z"
},
{
"body": "@bleskes I added a couple unit tests ^^\n\nThey fail before the change and pass with the change.\n\nI'll think about master ...\n",
"created_at": "2016-01-25T15:11:17Z"
},
{
"body": "Awesome. Thanks mike\n",
"created_at": "2016-01-25T15:15:25Z"
},
{
"body": "Thanks @bleskes, I'll run all tests and push to 2.2 and 2.1.x.\n\n@dadoonet can you maybe test this with your script and see if recovery is as fast as the original indexing again? Thanks!\n",
"created_at": "2016-01-25T15:42:43Z"
},
{
"body": "Well, some tests are angry ... digging.\n",
"created_at": "2016-01-25T16:11:44Z"
},
{
"body": "So the rest test (`Rest6IT`) is angry because, immediately after the local recovery, which just does `activate()` but does not set the `lastWriteNS`, the shard goes to inactive and a sync flush is done, giving the shard a `sync_id` when the test doesn't want one (since it didn't flush).\n\nIf I change the `activate()` to `markLastWrite()` in `internalPerformTranslogRecovery` the test is happy again (no sync flush happens, since the shard never goes to inactive again during the test). Or I can fix the test (`10_basic.yaml`) to accept a sync_id....\n\nOther tests are tripping on the new `assert` I added, which is spooky, but I fear it could be a concurrency issue where one thread is activating, and then another deactivates, before the first thread gets to its assert ... these failures don't repro ... still digging.\n",
"created_at": "2016-01-25T16:56:40Z"
},
{
"body": "> If I change the `activate()` to `markLastWrite()` in `internalPerformTranslogRecovery`\n\nI'm going to push this ... it simplifies things again because `activate()` can be folded into `markLastWrite()` again ... I think the \"purity\" of separating \"I'm writing via my local translog\" vs \"I'm writing via a peer's xlog\" is not worth the strange \"active -> inactive -> active\" that would happen as a result.\n\nBut I'm still struggling with:\n\n> Other tests are tripping on the new assert I added,\n\nMany tests seem to trip on this, where a shard becomes active, asks IMC to set its buffer, but after that it still has an inactive buffer ... but only intermittently ... I'm still not sure why. I'll dig some more, but this is possibly a pre-existing issue (I'll confirm) and we can perhaps decouple from this blocker.\n",
"created_at": "2016-01-25T19:40:32Z"
},
{
"body": "+1 to just use markLastWrite.. Make sense to me.\n\nOn 25 jan. 2016 8:41 PM +0100, Michael McCandlessnotifications@github.com, wrote:\n\n> > If I change theactivate()tomarkLastWrite()ininternalPerformTranslogRecovery\n> \n> I'm going to push this ... it simplifies things again becauseactivate()can be folded intomarkLastWrite()again ... I think the \"purity\" of separating \"I'm writing via my local translog\" vs \"I'm writing via a peer's xlog\" is not worth the strange \"active ->inactive ->active\" that would happen as a result.\n> \n> But I'm still struggling with:\n> \n> > Other tests are tripping on the new assert I added,\n> \n> Many tests seem to trip on this, where a shard becomes active, asks IMC to set its buffer, but after that it still has an inactive buffer ... but only intermittently ... I'm still not sure why. I'll dig some more, but this is possibly a pre-existing issue (I'll confirm) and we can perhaps decouple from this blocker.\n> \n> —\n> Reply to this email directly orview it on GitHub(https://github.com/elastic/elasticsearch/pull/16209#issuecomment-174634910).\n",
"created_at": "2016-01-25T19:55:02Z"
},
{
"body": "@mikemccand here is what I did:\n- checkout 2.1\n- cherry-picked your commits in 2.1\n- build the distribution\n- run my scenario\n\nI can confirm that everything works perfectly now! Thanks for fixing this Mike\n",
"created_at": "2016-01-26T08:10:44Z"
},
{
"body": "Thank you for confirming @dadoonet!\n\nI pushed a fix to restrict the new assertion to only the one thread that actually did the activation (invoked IMC's `forceCheck`), and `mvn verify` is passing (well at least once), but this is hard to explain the test failures I was seeing because local recovery should be single threaded...\n\nThe assert does NOT appear trip if I add it to a clean 2.2 clone, meaning it was only ever tripping when activating during translog replay.\n\nIf `mvn verify` passes again I'm going to push!\n",
"created_at": "2016-01-26T09:07:56Z"
},
{
"body": "OK I pushed this fix to 2.2 with a disastrously accidental commit message: https://github.com/elastic/elasticsearch/commit/9c612d4a7baa8c1ced0eb5ed0bfc0daac05337c6\n\nWorking on 2.1.x backport now ...\n",
"created_at": "2016-01-26T10:40:56Z"
},
{
"body": "I pushed the fix to 2.1 and 2.2, but the new assert still intermittently trips, e.g.: https://build-us-01.elastic.co/job/elastic+elasticsearch+2.2+multijob-intake/128/console\n\nI've commented it out for now, but I still can't explain it.\n",
"created_at": "2016-01-26T11:13:06Z"
},
{
"body": "Another example failure: http://build-us-00.elastic.co/job/es_core_22_oracle_6/70/consoleFull\n",
"created_at": "2016-01-26T11:14:11Z"
},
{
"body": "The failures look like this:\n\n```\n > Caused by: java.lang.AssertionError: active=true state=RECOVERING shard=[test][0]\n > at __randomizedtesting.SeedInfo.seed([582017D9CD0B9CD1]:0)\n > at org.elasticsearch.index.shard.IndexShard.markLastWrite(IndexShard.java:1005)\n > at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:923)\n > at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:897)\n > at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n > at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n > at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n > at java.lang.Thread.run(Thread.java:745)\n 1> [2016-01-26 03:08:21,706][INFO ][index \n```\n\nMy only guess at this point is when IMC asks `IndicesService` for all indices + shards, that somehow this index is not yet visible? I.e., that somehow the `StoreRecoveryService` is recovering this shard before `IndicesService` knows about the index ... is that possible?\n",
"created_at": "2016-01-26T11:43:15Z"
},
{
"body": "Hmm I think for the `IndexWithShadowReplicasIT.testIndexWithFewDocuments` failure, the node is being closed while (concurrently) an index shard is trying to recover, and so when IMC iterates all active shards, there are none.\n",
"created_at": "2016-01-26T12:17:33Z"
},
{
"body": "I think your theory about closing while recovering is correct. An index is recovered on a concurrent thread (to not block the cluster service thread). If the shard is removed before it's recovered (which I guess is likely on test cleanup), the shard is first removed from the lookup tables and then shut down. In between those the state of the shard is not closed, but the IMC ignores it. Not sure what a simple fix would be - we don't have access to indexservice from the shard (good!). we can add \"pending delete\" flag to the indexshard marking it as being deleted (while still in the pool), we can change the the assert to wait on close on failure. All of this feels like an overkill to just removing the assert and just relying on the unit tests.\n",
"created_at": "2016-01-26T13:37:13Z"
},
{
"body": "Thanks @bleskes ... my inclination would be to just leave the assert off, as long as we can convince ourselves these cases when it trips are not important in practice ...\n\nI'm chasing this failure now, which is different...: https://build-us-01.elastic.co/job/elastic+elasticsearch+2.2+multijob-os-compatibility/os=elasticsearch&&debian/107/console\n",
"created_at": "2016-01-26T15:27:01Z"
},
{
"body": "> I'm chasing this failure now, which is different.\n\nAlright, I've explained this failure, and I'm running tests for a simple fix now.\n\nThe issue is that when a shard creates its `Engine` it pulls the current indexing buffer (and other settings) from its `EngineConfig`, but then possibly a lot of time can elapse (e.g. replaying local translog) before it finally sets the `Engine` instances into the shard's `AtomicReference currentEngineReference`.\n\nDuring that window, if IMC goes and updates the indexing buffer for this shard, since the current engine reference is still null, that shard will update the `EngineConfig` but not the engine, and the change is \"lost\" in the sense that unless IMC further changes the indexing buffer, `IndexShard.updateBufferSize`'s change detection logic will think nothing had actually changed and won't push the change down to the (now initialized) engine.\n\nThis is really a pre-existing issue, but my change here exacerbated it by invoking IMC more frequently, especially when N shards are being initialized which happens nearly concurrently.\n\nMy proposed fix is this is to call `onSettingsChanged()` after we finally set the engine instance:\n\n```\ndiff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java b/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java\nindex 69f575b..26bc3ed 100644\n--- a/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java\n+++ b/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java\n@@ -1433,6 +1433,12 @@ public class IndexShard extends AbstractIndexShardComponent {\n assert this.currentEngineReference.get() == null;\n this.currentEngineReference.set(newEngine(skipTranslogRecovery, config));\n }\n+\n+ // time elapses after the engine is created above (pulling the config settings) until we set the engine reference, during which\n+ // IMC may have updated our indexing buffer, e.g. if N other shards are being created and applying their translogs, and we would\n+ // otherwise not realize the index buffer had changed while we were concurrently \"startint up\" so here we forcefully push any config\n+ // chagnes to the new engine:\n+ engine().onSettingsChanged();\n }\n\n protected Engine newEngine(boolean skipTranslogRecovery, EngineConfig config) {\n```\n\nThis fixes the above (reproducible, yay!) test failure, and `mvn verify` seems happy at least once as well...\n",
"created_at": "2016-01-26T18:31:09Z"
},
{
"body": "+1 on this move. Strictly speaking we are still missing the update to the translog buffer, which is a pain in the... . The current infra doesn't allow fixing it directly without doing something ugly and the IMC will set it right in it's next run. I'm good with letting this go and proceed with your suggestion.\n",
"created_at": "2016-01-26T18:40:29Z"
},
{
"body": "Thanks @bleskes, I pushed that change (I added `null` check just in case ;) ).\n\nI think this change is done for 2.1, 2.2, 2.x.\n\nI'll open a separate PR for master changes, and resolve this ...\n",
"created_at": "2016-01-26T20:52:18Z"
}
],
"number": 16209,
"title": "Make sure IndexShard is active during recovery so it gets its fair share of the indexing buffer"
}
|
{
"body": "This is the master port for #16209 ... it's simpler because in master\nwe already pushed `lastWriteNanos` down to engine, so it's already being\nset properly on translog replay.\n\nI carried forward setting `IndexShard.active` to `true` when translog\nrecovery starts; while this won't impact indexing buffer decisions\n(it's totally different in master with #14121), I think this is still\nimportant so sync'd flush will later notice there is something to\nsync, if all the shard does is wake up, play translog, but then do no\nindexing ops.\n\nI also carried forward the \"refresh engine settings after engine is\ndone starting\" change. While it's not important for indexing buffer\n(since in master indexing buffer is always a huge value), I think if\nother settings changes happened (e.g. merge scheduler) they may need\nto be pushed as well.\n",
"number": 16250,
"review_comments": [
{
"body": "minor note - this is only called when recovering from a primary on a replica. When a replica becomes inactive we don't sync flush (we do that just on the primary). The change is still good, but I think we should change the comment.\n",
"created_at": "2016-01-27T08:44:17Z"
},
{
"body": "Ahh, thanks @bleskes, I'll fix the comment to say \"so we can invoke the `onShardInactive` listeners\" instead ... and explain that sync'd flush won't actually occur :)\n",
"created_at": "2016-01-27T09:32:53Z"
}
],
"title": "Mark shard active during recovery; push settings after engine finally inits"
}
|
{
"commits": [
{
"message": "mark shard active during recovery; push settings after engine finally inits"
},
{
"message": "put back test asserts"
},
{
"message": "remove unused import"
},
{
"message": "fix comment"
}
],
"files": [
{
"diff": "@@ -854,6 +854,10 @@ public int performBatchRecovery(Iterable<Translog.Operation> operations) {\n if (state != IndexShardState.RECOVERING) {\n throw new IndexShardNotRecoveringException(shardId, state);\n }\n+ // We set active because we are now writing operations to the engine; this way, if we go idle after some time and become inactive,\n+ // we still invoke any onShardInactive listeners ... we won't sync'd flush in this case because we only do that on primary and this\n+ // is a replica\n+ active.set(true);\n return engineConfig.getTranslogRecoveryPerformer().performBatchRecovery(getEngine(), operations);\n }\n \n@@ -883,6 +887,11 @@ private void internalPerformTranslogRecovery(boolean skipTranslogRecovery, boole\n // but we need to make sure we don't loose deletes until we are done recovering\n engineConfig.setEnableGcDeletes(false);\n engineConfig.setCreate(indexExists == false);\n+ if (skipTranslogRecovery == false) {\n+ // We set active because we are now writing operations to the engine; this way, if we go idle after some time and become inactive,\n+ // we still give sync'd flush a chance to run:\n+ active.set(true);\n+ }\n createNewEngine(skipTranslogRecovery, engineConfig);\n \n }\n@@ -1043,6 +1052,10 @@ public void deleteShardState() throws IOException {\n MetaDataStateFormat.deleteMetaState(shardPath().getDataPath());\n }\n \n+ public boolean isActive() {\n+ return active.get();\n+ }\n+\n public ShardPath shardPath() {\n return path;\n }\n@@ -1302,6 +1315,15 @@ private void createNewEngine(boolean skipTranslogRecovery, EngineConfig config)\n assert this.currentEngineReference.get() == null;\n this.currentEngineReference.set(newEngine(skipTranslogRecovery, config));\n }\n+\n+ // time elapses after the engine is created above (pulling the config settings) until we set the engine reference, during which\n+ // settings changes could possibly have happened, so here we forcefully push any config changes to the new engine:\n+ Engine engine = getEngineOrNull();\n+\n+ // engine could perhaps be null if we were e.g. concurrently closed:\n+ if (engine != null) {\n+ engine.onSettingsChanged();\n+ }\n }\n \n protected Engine newEngine(boolean skipTranslogRecovery, EngineConfig config) {",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -1087,4 +1087,65 @@ public void testTranslogRecoverySyncsTranslog() throws IOException {\n newShard.performBatchRecovery(operations);\n assertFalse(newShard.getTranslog().syncNeeded());\n }\n+\n+ public void testIndexingBufferDuringInternalRecovery() throws IOException {\n+ createIndex(\"index\");\n+ client().admin().indices().preparePutMapping(\"index\").setType(\"testtype\").setSource(jsonBuilder().startObject()\n+ .startObject(\"testtype\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject().endObject().endObject()).get();\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"index\");\n+ IndexShard shard = test.getShardOrNull(0);\n+ ShardRouting routing = new ShardRouting(shard.routingEntry());\n+ test.removeShard(0, \"b/c britta says so\");\n+ IndexShard newShard = test.createShard(routing);\n+ newShard.shardRouting = routing;\n+ DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n+ newShard.markAsRecovering(\"for testing\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.REPLICA, localNode, localNode));\n+ // Shard is still inactive since we haven't started recovering yet\n+ assertFalse(newShard.isActive());\n+ newShard.prepareForIndexRecovery();\n+ // Shard is still inactive since we haven't started recovering yet\n+ assertFalse(newShard.isActive());\n+ newShard.performTranslogRecovery(true);\n+ // Shard should now be active since we did recover:\n+ assertTrue(newShard.isActive());\n+ }\n+\n+ public void testIndexingBufferDuringPeerRecovery() throws IOException {\n+ createIndex(\"index\");\n+ client().admin().indices().preparePutMapping(\"index\").setType(\"testtype\").setSource(jsonBuilder().startObject()\n+ .startObject(\"testtype\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject().endObject().endObject()).get();\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"index\");\n+ IndexShard shard = test.getShardOrNull(0);\n+ ShardRouting routing = new ShardRouting(shard.routingEntry());\n+ test.removeShard(0, \"b/c britta says so\");\n+ IndexShard newShard = test.createShard(routing);\n+ newShard.shardRouting = routing;\n+ DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n+ newShard.markAsRecovering(\"for testing\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.REPLICA, localNode, localNode));\n+ // Shard is still inactive since we haven't started recovering yet\n+ assertFalse(newShard.isActive());\n+ List<Translog.Operation> operations = new ArrayList<>();\n+ operations.add(new Translog.Index(\"testtype\", \"1\", jsonBuilder().startObject().field(\"foo\", \"bar\").endObject().bytes().toBytes()));\n+ newShard.prepareForIndexRecovery();\n+ newShard.skipTranslogRecovery();\n+ // Shard is still inactive since we haven't started recovering yet\n+ assertFalse(newShard.isActive());\n+ newShard.performBatchRecovery(operations);\n+ // Shard should now be active since we did recover:\n+ assertTrue(newShard.isActive());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
}
]
}
|
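The comments in the row above describe a race: the engine is built from a snapshot of its config, and a settings update that arrives before the engine reference is published can be silently lost, which is why the committed diff calls `onSettingsChanged()` right after setting the reference. Below is only a plain-Java analogue of that publish-then-refresh pattern, under assumed names; it is a sketch of the shape of the race, not the Elasticsearch code itself.

```java
import java.util.concurrent.atomic.AtomicReference;

/** Hypothetical sketch: a component is built from a settings snapshot, so any
 *  settings update arriving before the reference is published would be lost
 *  unless the current settings are re-applied after publication. */
public class PublishThenRefresh {

    static final class Settings { volatile long bufferBytes = 1024; }

    static final class Engine {
        long bufferBytes;
        Engine(Settings snapshot) { this.bufferBytes = snapshot.bufferBytes; } // snapshot read at creation time
        void onSettingsChanged(Settings current) { this.bufferBytes = current.bufferBytes; }
    }

    private final Settings settings = new Settings();
    private final AtomicReference<Engine> engineRef = new AtomicReference<>();

    void createEngine() {
        Engine engine = new Engine(settings);   // (1) snapshot taken here
        // ... potentially slow work (e.g. replaying a translog) happens here, during which
        // another thread may update `settings` while engineRef is still null ...
        engineRef.set(engine);                  // (2) engine becomes visible to other threads
        Engine published = engineRef.get();
        if (published != null) {                // (3) re-push current settings so nothing is lost
            published.onSettingsChanged(settings);
        }
    }
}
```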
{
"body": "Processors appear to be firing out of sequence. \n\nIn example 1, when the geoip processor is the only processor in the pipeline, the resulting object has a continent_name property with the value of \"North America\". Note that is it not all uppercase.\n\nIn example 2, the same geoip processor and input doc is used, but there is an additional uppercase processor after the geoip processor. Note that the output of BOTH processors in the pipeline have a continent_name property that is all uppercase.\n## Example 1\n### Request Body\n\n```\n{\n \"pipeline\": {\n \"processors\": [\n {\n \"geoip\": {\n \"processor_id\": \"processor_1\",\n \"source_field\": \"_raw\",\n \"target_field\": \"geoip\"\n }\n }\n ]\n },\n \"docs\": [\n {\n \"_source\": {\n \"_raw\": \"64.242.88.10\"\n }\n }\n ]\n}\n```\n### Response Body\n\n```\n{\n \"docs\": [\n {\n \"processor_results\": [\n {\n \"processor_id\": \"processor_1\",\n \"doc\": {\n \"_type\": \"_type\",\n \"_routing\": null,\n \"_ttl\": null,\n \"_index\": \"_index\",\n \"_timestamp\": null,\n \"_parent\": null,\n \"_id\": \"_id\",\n \"_source\": {\n \"geoip\": {\n \"continent_name\": \"North America\",\n \"city_name\": \"Chesterfield\",\n \"country_iso_code\": \"US\",\n \"region_name\": \"Missouri\",\n \"location\": [\n -90.5771,\n 38.6631\n ]\n },\n \"_raw\": \"64.242.88.10\"\n },\n \"_ingest\": {\n \"timestamp\": \"2016-01-26T20:41:59.806+0000\"\n }\n }\n }\n ]\n }\n ]\n}\n```\n## Example 2\n### Request Body\n\n```\n{\n \"pipeline\": {\n \"processors\": [\n {\n \"geoip\": {\n \"processor_id\": \"processor_1\",\n \"source_field\": \"_raw\",\n \"target_field\": \"geoip\"\n }\n },\n {\n \"uppercase\": {\n \"processor_id\": \"processor_2\",\n \"field\": \"geoip.continent_name\"\n }\n }\n ]\n },\n \"docs\": [\n {\n \"_source\": {\n \"_raw\": \"64.242.88.10\"\n }\n }\n ]\n}\n```\n### Response Body\n\n```\n{\n \"docs\": [\n {\n \"processor_results\": [\n {\n \"processor_id\": \"processor_1\",\n \"doc\": {\n \"_type\": \"_type\",\n \"_routing\": null,\n \"_ttl\": null,\n \"_index\": \"_index\",\n \"_timestamp\": null,\n \"_parent\": null,\n \"_id\": \"_id\",\n \"_source\": {\n \"geoip\": {\n \"continent_name\": \"NORTH AMERICA\",\n \"city_name\": \"Chesterfield\",\n \"country_iso_code\": \"US\",\n \"region_name\": \"Missouri\",\n \"location\": [\n -90.5771,\n 38.6631\n ]\n },\n \"_raw\": \"64.242.88.10\"\n },\n \"_ingest\": {\n \"timestamp\": \"2016-01-26T20:39:10.407+0000\"\n }\n }\n },\n {\n \"processor_id\": \"processor_2\",\n \"doc\": {\n \"_type\": \"_type\",\n \"_routing\": null,\n \"_ttl\": null,\n \"_index\": \"_index\",\n \"_timestamp\": null,\n \"_parent\": null,\n \"_id\": \"_id\",\n \"_source\": {\n \"geoip\": {\n \"continent_name\": \"NORTH AMERICA\",\n \"city_name\": \"Chesterfield\",\n \"country_iso_code\": \"US\",\n \"region_name\": \"Missouri\",\n \"location\": [\n -90.5771,\n 38.6631\n ]\n },\n \"_raw\": \"64.242.88.10\"\n },\n \"_ingest\": {\n \"timestamp\": \"2016-01-26T20:39:10.407+0000\"\n }\n }\n }\n ]\n }\n ]\n}\n```\n",
"comments": [
{
"body": "Seems to be a bigger issue. The same effect happens with the lowercase, and split processors. Is it related to the fact that I am operating on a nested object? ie: geoip.continent_name\n\nAm I doing this incorrectly?\n",
"created_at": "2016-01-26T20:50:11Z"
},
{
"body": "I believe the copy constructor for `IngestDocument` does not properly do a deep copy of the underlying source Map.\n",
"created_at": "2016-01-26T20:56:34Z"
},
{
"body": "@BigFunger Usage is correct, there seems to be something wrong with the verbose version of the simulate api.\n",
"created_at": "2016-01-26T20:56:39Z"
},
{
"body": "@BigFunger Bug has been fixed in the master branch. Make sure when you're going to work from master, that you need to change `processor_id` to `tag`.\n",
"created_at": "2016-01-26T21:50:38Z"
}
],
"number": 16246,
"title": "node ingest - simulate - uppercase processor affecting output of previous processors"
}
|
{
"body": "PR for #16246\n",
"number": 16248,
"review_comments": [],
"title": "The IngestDocument copy constructor should make a deep copy"
}
|
{
"commits": [
{
"message": "ingest: The IngestDocument copy constructor should make a deep copy instead of shallow copy\n\nCloses #16246"
}
],
"files": [
{
"diff": "@@ -81,7 +81,7 @@ public IngestDocument(String index, String type, String id, String routing, Stri\n * Copy constructor that creates a new {@link IngestDocument} which has exactly the same properties as the one provided as argument\n */\n public IngestDocument(IngestDocument other) {\n- this(new HashMap<>(other.sourceAndMetadata), new HashMap<>(other.ingestMetadata));\n+ this(deepCopyMap(other.sourceAndMetadata), deepCopyMap(other.ingestMetadata));\n }\n \n /**\n@@ -470,6 +470,35 @@ public Map<String, Object> getSourceAndMetadata() {\n return this.sourceAndMetadata;\n }\n \n+ @SuppressWarnings(\"unchecked\")\n+ private static <K, V> Map<K, V> deepCopyMap(Map<K, V> source) {\n+ return (Map<K, V>) deepCopy(source);\n+ }\n+\n+ private static Object deepCopy(Object value) {\n+ if (value instanceof Map) {\n+ Map<?, ?> mapValue = (Map<?, ?>) value;\n+ Map<Object, Object> copy = new HashMap<>(mapValue.size());\n+ for (Map.Entry<?, ?> entry : mapValue.entrySet()) {\n+ copy.put(entry.getKey(), deepCopy(entry.getValue()));\n+ }\n+ return copy;\n+ } else if (value instanceof List) {\n+ List<?> listValue = (List<?>) value;\n+ List<Object> copy = new ArrayList<>(listValue.size());\n+ for (Object itemValue : listValue) {\n+ copy.add(deepCopy(itemValue));\n+ }\n+ return copy;\n+ } else if (value == null || value instanceof String || value instanceof Integer ||\n+ value instanceof Long || value instanceof Float ||\n+ value instanceof Double || value instanceof Boolean) {\n+ return value;\n+ } else {\n+ throw new IllegalArgumentException(\"unexpected value type [\" + value.getClass() + \"]\");\n+ }\n+ }\n+\n @Override\n public boolean equals(Object obj) {\n if (obj == this) { return true; }",
"filename": "core/src/main/java/org/elasticsearch/ingest/core/IngestDocument.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.ingest.core;\n \n import org.elasticsearch.ingest.RandomDocumentPicks;\n-import org.elasticsearch.ingest.core.IngestDocument;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.Before;\n \n@@ -970,7 +969,31 @@ public void testIngestMetadataTimestamp() throws Exception {\n public void testCopyConstructor() {\n IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random());\n IngestDocument copy = new IngestDocument(ingestDocument);\n- assertThat(ingestDocument.getSourceAndMetadata(), not(sameInstance(copy.getSourceAndMetadata())));\n- assertThat(ingestDocument.getSourceAndMetadata(), equalTo(copy.getSourceAndMetadata()));\n+ recursiveEqualsButNotSameCheck(ingestDocument.getSourceAndMetadata(), copy.getSourceAndMetadata());\n+ }\n+\n+ private void recursiveEqualsButNotSameCheck(Object a, Object b) {\n+ assertThat(a, not(sameInstance(b)));\n+ assertThat(a, equalTo(b));\n+ if (a instanceof Map) {\n+ Map<?, ?> mapA = (Map<?, ?>) a;\n+ Map<?, ?> mapB = (Map<?, ?>) b;\n+ for (Map.Entry<?, ?> entry : mapA.entrySet()) {\n+ if (entry.getValue() instanceof List || entry.getValue() instanceof Map) {\n+ recursiveEqualsButNotSameCheck(entry.getValue(), mapB.get(entry.getKey()));\n+ }\n+ }\n+ } else if (a instanceof List) {\n+ List<?> listA = (List<?>) a;\n+ List<?> listB = (List<?>) b;\n+ for (int i = 0; i < listA.size(); i++) {\n+ Object value = listA.get(i);\n+ if (value instanceof List || value instanceof Map) {\n+ recursiveEqualsButNotSameCheck(value, listB.get(i));\n+ }\n+ }\n+ }\n+\n }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/ingest/core/IngestDocumentTests.java",
"status": "modified"
}
]
}
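A note on the change above: the copy constructor previously shared nested containers between the original document and the copy, so mutating the copy could also mutate the original. The following standalone sketch (a made-up class, not the Elasticsearch code, and a simplified `deepCopy` that only approximates the one in the diff) shows the difference the recursive copy makes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ShallowVsDeepCopyDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Map<String, Object> source = new HashMap<>();
        source.put("numbers", new ArrayList<>(Arrays.asList(1, 2, 3, 4)));

        // Shallow copy: the outer map is new, but the nested list is still shared.
        Map<String, Object> shallow = new HashMap<>(source);
        ((List<Integer>) shallow.get("numbers")).add(99);
        System.out.println(source.get("numbers")); // [1, 2, 3, 4, 99] -> the original was mutated

        // Deep copy: nested containers are copied too, so the original stays untouched.
        Map<String, Object> deep = deepCopy(source);
        ((List<Integer>) deep.get("numbers")).add(42);
        System.out.println(source.get("numbers")); // still [1, 2, 3, 4, 99]
    }

    // Simplified recursive copy: handles maps of maps and lists of scalars; the real
    // change also recurses into list elements and rejects unexpected value types.
    @SuppressWarnings("unchecked")
    static Map<String, Object> deepCopy(Map<String, Object> map) {
        Map<String, Object> copy = new HashMap<>(map.size());
        for (Map.Entry<String, Object> entry : map.entrySet()) {
            Object value = entry.getValue();
            if (value instanceof Map) {
                copy.put(entry.getKey(), deepCopy((Map<String, Object>) value));
            } else if (value instanceof List) {
                copy.put(entry.getKey(), new ArrayList<>((List<Object>) value));
            } else {
                copy.put(entry.getKey(), value); // immutable scalars can safely be shared
            }
        }
        return copy;
    }
}
```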
|
{
"body": "In the presence of top level inner hits query level inner hits are missing from the result, even if there are no name collisions. This necessitates rewriting query level inner hits as top level inner hits with a duplicate of the main query, which I assume is suboptimal.\n\nES 2.1.1.\n",
"comments": [
{
"body": "@zygfryd Thanks for reporting this. This is indeed suboptimal and not expected behaviour.\n\nSimple reproduction:\n\n```\nPUT index\n{\n \"mappings\": {\n \"parent\" : {\n },\n \"child\" : {\n \"_parent\": {\n \"type\": \"parent\"\n }\n }\n }\n}\n\nPUT /index/parent/1\n{ \n}\n\nPUT /index/child/1?parent=1\n{ \n}\n\nGET /index/_search\n{\n \"query\": {\n \"has_child\": {\n \"type\": \"child\",\n \"query\": {\n \"match_all\": {}\n },\n \"inner_hits\" : {}\n }\n },\n \"inner_hits\" : {\n \"my-inner-hits\" : {\n \"type\" : {\n \"child\" : {\n\n }\n }\n }\n }\n}\n```\n\nTwo inner hits are expected here, but instead only the top level inner hit is returned.\n",
"created_at": "2016-01-26T10:16:41Z"
}
],
"number": 16218,
"title": "Top level inner hits override query level inner hits"
}
|
{
"body": "PR for #16218\n",
"number": 16222,
"review_comments": [],
"title": "Query and top level inner hit definitions shouldn't overwrite each other"
}
|
{
"commits": [],
"files": []
}
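This record carries no commits or files, so only the idea in the PR title can be illustrated. Below is a generic sketch with made-up names, assuming nothing about the actual fix: keep query-level and top-level inner hit definitions in one registry keyed by name, and fail loudly on a real name collision instead of letting one definition silently replace the other.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical registry, not the Elasticsearch class.
public class InnerHitsRegistrySketch {
    private final Map<String, Object> definitions = new HashMap<>();

    public void add(String name, Object definition) {
        Object previous = definitions.putIfAbsent(name, definition);
        if (previous != null) {
            throw new IllegalArgumentException("inner_hits definition [" + name + "] is already registered");
        }
    }

    public Map<String, Object> all() {
        return definitions;
    }

    public static void main(String[] args) {
        InnerHitsRegistrySketch registry = new InnerHitsRegistrySketch();
        registry.add("my-inner-hits", "top level definition");
        registry.add("child", "query level definition");
        System.out.println(registry.all().keySet()); // both names survive
    }
}
```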
|
{
"body": "This starts with ES 2.2 as we move to the script modules. The below script works in ES 2.1, but not ES 2.2 (the impact of which could be pretty big):\n\n``` http\nPUT /test/type/1\n{\n \"message\": \"This is a message!\",\n \"numbers\" : [1, 2, 3, 4]\n}\n\nPUT /test/type/2\n{\n \"message\": \"This is a message! too\",\n \"numbers\" : [4, 2, 3, 5]\n}\n\nGET /test/_search\n{\n \"script_fields\": {\n \"use_closure\": {\n \"script\": {\n \"inline\": \"doc['numbers'].values.findAll { it % 2 == 0 }\"\n }\n }\n }\n}\n```\n\nI noticed that the issue seemed to be ClassNotFound errors related to the security manager, so I started adding permissions until I hit a wall because it could not access the generated closure's class.\n\n```\n permission org.elasticsearch.script.ClassPermission \"groovy.lang.Closure\";\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.reflection.CachedMethod\";\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.reflection.ClassInfo\";\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.runtime.DefaultGroovyMethodsSupport\";\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.runtime.GeneratedClosure\";\n```\n\ncc @rasroh \n",
"comments": [
{
"body": "Relates apache/groovy#248.\n",
"created_at": "2016-01-26T11:40:51Z"
},
{
"body": "Closed via bdddea2dd0ecd3624e5484b3a34e668ca74cd6d0 from #16196.\n",
"created_at": "2016-01-26T17:56:27Z"
}
],
"number": 16194,
"title": "Groovy Closures inaccessible"
}
|
{
"body": "The purpose behind this pull request is to add some permissions that Groovy needs to use closures in its current state. While granting `java.lang.reflect.ReflectPermission \"supressAccessChecks\"` is undesirable, there is an upstream pull request in apache/groovy#248 to modify the Groovy compiler so that these permissions are not necessary. Therefore, the long-term plan is to grant these permissions so as to not break existing scripts, and remove these permissions when the upstream pull request is accepted (and make a breaking change in a major release if not). These permissions are not staying around forever.\n\nCloses #16194 \n",
"number": 16196,
"review_comments": [
{
"body": "@rmuir should look at this...there was an enormous amount of work that went into removing this permission across elasticsearch.\n",
"created_at": "2016-01-23T02:38:07Z"
}
],
"title": "Security permissions for Groovy closures"
}
|
{
"commits": [
{
"message": "Security permissions for Groovy closures\n\nThis commit adds some permissions that Groovy needs to use closures."
},
{
"message": "Groovy closure integration test to unit test\n\nThis commit converts an integration test for Groovy closures to a unit\ntest."
},
{
"message": "Additional security permission for Groovy closures\n\nThis commit adds an additional security permission for Groovy closures\nand a corresponding test."
}
],
"files": [
{
"diff": "@@ -25,6 +25,7 @@ grant {\n // needed by groovy engine\n permission java.lang.RuntimePermission \"accessDeclaredMembers\";\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.reflect\";\n+ permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\n // needed by GroovyScriptEngineService to close its classloader (why?)\n permission java.lang.RuntimePermission \"closeClassLoader\";\n // Allow executing groovy scripts with codesource of /untrusted\n@@ -48,4 +49,9 @@ grant {\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation\";\n permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.vmplugin.v7.IndyInterface\";\n permission org.elasticsearch.script.ClassPermission \"sun.reflect.ConstructorAccessorImpl\";\n+\n+ permission org.elasticsearch.script.ClassPermission \"groovy.lang.Closure\";\n+ permission org.elasticsearch.script.ClassPermission \"org.codehaus.groovy.runtime.GeneratedClosure\";\n+ permission org.elasticsearch.script.ClassPermission \"groovy.lang.MetaClass\";\n+ permission org.elasticsearch.script.ClassPermission \"groovy.lang.Range\";\n };",
"filename": "modules/lang-groovy/src/main/plugin-metadata/plugin-security.policy",
"status": "modified"
},
{
"diff": "@@ -87,6 +87,9 @@ public void testEvilGroovyScripts() throws Exception {\n assertSuccess(\"def t = Instant.now().getMillis()\");\n // GroovyCollections\n assertSuccess(\"def n = [1,2,3]; GroovyCollections.max(n)\");\n+ // Groovy closures\n+ assertSuccess(\"[1, 2, 3, 4].findAll { it % 2 == 0 }\");\n+ assertSuccess(\"def buckets=[ [2, 4, 6, 8], [10, 12, 16, 14], [18, 22, 20, 24] ]; buckets[-3..-1].every { it.every { i -> i % 2 == 0 } }\");\n \n // Fail cases:\n assertFailure(\"pr = Runtime.getRuntime().exec(\\\"touch /tmp/gotcha\\\"); pr.waitFor()\", MissingPropertyException.class);",
"filename": "modules/lang-groovy/src/test/java/org/elasticsearch/script/groovy/GroovySecurityTests.java",
"status": "modified"
}
]
}
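For context on why these grants are needed: the closure in the failing script is compiled by Groovy into a generated class implementing `groovy.lang.Closure`, and loading that class is what the security manager blocks without the added `ClassPermission` entries. The sketch below evaluates the same kind of closure with a plain `GroovyShell`, outside Elasticsearch and without a security manager, purely to show what the grants make possible:

```java
import groovy.lang.Binding;
import groovy.lang.GroovyShell;

import java.util.Arrays;

public class GroovyClosureDemo {
    public static void main(String[] args) {
        Binding binding = new Binding();
        // Stand-in for doc['numbers'].values in the script field example.
        binding.setVariable("numbers", Arrays.asList(1, 2, 3, 4));

        GroovyShell shell = new GroovyShell(binding);
        // The closure `{ it % 2 == 0 }` is compiled into a synthetic class extending
        // groovy.lang.Closure; loading such classes is what the policy additions permit.
        Object evens = shell.evaluate("numbers.findAll { it % 2 == 0 }");
        System.out.println(evens); // [2, 4]
    }
}
```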
|
{
"body": "> Reported by @jimmyjones2 in https://github.com/elastic/kibana/issues/3335#issuecomment-168061914\n\nWhile indexing a small amount of data to test https://github.com/elastic/kibana/issues/3335 we discovered a strange inconsistency with how elasticsearch validated aggregations. What we observed:\n\nAfter creating a fresh index (two shards and a `nofielddata` field) any aggregation using the `nofielddata` field can be sent to the index and responses will come back as \"successful\". As shards start to get documents these shards will start to fail with `\"java.lang.IllegalStateException: Field data loading is forbidden on nofielddata\"`, but responses will still be formatted as partial successes. Once the final shard has a document requests will start to actually fail (response with a top level \"error\" property).\n\nThis feels like a serious edge-case, but I figured there isn't any harm in reporting it. \n\n---\n\nHere is a sense script to replicate this:\n\n``` sh\nDELETE /test?ignore_unavailable=true\n\nPOST /test\n{\n \"settings\": {\n \"number_of_shards\": 2\n },\n \"mappings\": {\n \"test\" : {\n \"properties\": {\n \"nofielddata\" : {\n \"type\": \"string\",\n \"fielddata\" : { \"format\" : \"disabled\" }\n }\n }\n }\n }\n}\n\nPOST /test/test\n{ \"noindex\" : \"foo\", \"nofielddata\" : \"foo\" }\n\nGET /test/test/_search\n{\n \"query\": {\n \"match\": {\n \"_id\": \"AVJhGARbdsS1hY8oBX1Z\"\n }\n },\n \"aggs\" : {\n \"test\" : {\n \"terms\" : { \"field\" : \"nofielddata\" }\n }\n }\n}\n\nPOST /test/test\n{ \"noindex\" : \"bar\", \"nofielddata\" : \"bar\" }\n\nGET /test/test/_search\n{\n \"query\": {\n \"match\": {\n \"_id\": \"AVJhGARbdsS1hY8oBX1Z\"\n }\n },\n \"aggs\" : {\n \"test\" : {\n \"terms\" : { \"field\" : \"nofielddata\" }\n }\n }\n}\n```\n",
"comments": [
{
"body": "Agreed this is a bug.\n",
"created_at": "2016-01-20T22:54:22Z"
}
],
"number": 16135,
"title": "Aggs with fielddata disabled fails inconsistently"
}
|
{
"body": "Currently this fails when loading data from a segment, which means that it will\nnever fail on an empty index since it does not have segments.\n\nCloses #16135\n",
"number": 16179,
"review_comments": [],
"title": "Make disabled fielddata loading fail earlier."
}
|
{
"commits": [
{
"message": "Make disabled fielddata loading fail earlier.\n\nCurrently this fails when loading data from a segment, which means that it will\nnever fail on an empty index since it does not have segments.\n\nCloses #16135"
}
],
"files": [
{
"diff": "@@ -28,7 +28,6 @@\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.fielddata.plain.AbstractGeoPointDVIndexFieldData;\n import org.elasticsearch.index.fielddata.plain.BytesBinaryDVIndexFieldData;\n-import org.elasticsearch.index.fielddata.plain.DisabledIndexFieldData;\n import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData;\n import org.elasticsearch.index.fielddata.plain.GeoPointArrayIndexFieldData;\n import org.elasticsearch.index.fielddata.plain.IndexIndexFieldData;\n@@ -79,6 +78,14 @@ public class IndexFieldDataService extends AbstractIndexComponent implements Clo\n private static final String DOC_VALUES_FORMAT = \"doc_values\";\n private static final String PAGED_BYTES_FORMAT = \"paged_bytes\";\n \n+ private static final IndexFieldData.Builder DISABLED_BUILDER = new IndexFieldData.Builder() {\n+ @Override\n+ public IndexFieldData<?> build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache,\n+ CircuitBreakerService breakerService, MapperService mapperService) {\n+ throw new IllegalStateException(\"Field data loading is forbidden on [\" + fieldType.name() + \"]\");\n+ }\n+ };\n+\n private final static Map<String, IndexFieldData.Builder> buildersByType;\n private final static Map<String, IndexFieldData.Builder> docValuesBuildersByType;\n private final static Map<Tuple<String, String>, IndexFieldData.Builder> buildersByTypeAndFormat;\n@@ -96,7 +103,7 @@ public class IndexFieldDataService extends AbstractIndexComponent implements Clo\n buildersByTypeBuilder.put(\"geo_point\", new GeoPointArrayIndexFieldData.Builder());\n buildersByTypeBuilder.put(ParentFieldMapper.NAME, new ParentChildIndexFieldData.Builder());\n buildersByTypeBuilder.put(IndexFieldMapper.NAME, new IndexIndexFieldData.Builder());\n- buildersByTypeBuilder.put(\"binary\", new DisabledIndexFieldData.Builder());\n+ buildersByTypeBuilder.put(\"binary\", DISABLED_BUILDER);\n buildersByTypeBuilder.put(BooleanFieldMapper.CONTENT_TYPE, MISSING_DOC_VALUES_BUILDER);\n buildersByType = unmodifiableMap(buildersByTypeBuilder);\n \n@@ -117,35 +124,35 @@ public class IndexFieldDataService extends AbstractIndexComponent implements Clo\n buildersByTypeAndFormat = MapBuilder.<Tuple<String, String>, IndexFieldData.Builder>newMapBuilder()\n .put(Tuple.tuple(\"string\", PAGED_BYTES_FORMAT), new PagedBytesIndexFieldData.Builder())\n .put(Tuple.tuple(\"string\", DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder())\n- .put(Tuple.tuple(\"string\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"string\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(\"float\", DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder().numericType(IndexNumericFieldData.NumericType.FLOAT))\n- .put(Tuple.tuple(\"float\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"float\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(\"double\", DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder().numericType(IndexNumericFieldData.NumericType.DOUBLE))\n- .put(Tuple.tuple(\"double\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"double\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(\"byte\", DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder().numericType(IndexNumericFieldData.NumericType.BYTE))\n- .put(Tuple.tuple(\"byte\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"byte\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n 
.put(Tuple.tuple(\"short\", DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder().numericType(IndexNumericFieldData.NumericType.SHORT))\n- .put(Tuple.tuple(\"short\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"short\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(\"int\", DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder().numericType(IndexNumericFieldData.NumericType.INT))\n- .put(Tuple.tuple(\"int\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"int\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(\"long\", DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder().numericType(IndexNumericFieldData.NumericType.LONG))\n- .put(Tuple.tuple(\"long\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"long\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(\"geo_point\", ARRAY_FORMAT), new GeoPointArrayIndexFieldData.Builder())\n .put(Tuple.tuple(\"geo_point\", DOC_VALUES_FORMAT), new AbstractGeoPointDVIndexFieldData.Builder())\n- .put(Tuple.tuple(\"geo_point\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"geo_point\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(\"binary\", DOC_VALUES_FORMAT), new BytesBinaryDVIndexFieldData.Builder())\n- .put(Tuple.tuple(\"binary\", DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(\"binary\", DISABLED_FORMAT), DISABLED_BUILDER)\n \n .put(Tuple.tuple(BooleanFieldMapper.CONTENT_TYPE, DOC_VALUES_FORMAT), new DocValuesIndexFieldData.Builder().numericType(IndexNumericFieldData.NumericType.BOOLEAN))\n- .put(Tuple.tuple(BooleanFieldMapper.CONTENT_TYPE, DISABLED_FORMAT), new DisabledIndexFieldData.Builder())\n+ .put(Tuple.tuple(BooleanFieldMapper.CONTENT_TYPE, DISABLED_FORMAT), DISABLED_BUILDER)\n \n .immutableMap();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataService.java",
"status": "modified"
},
{
"diff": "@@ -234,4 +234,23 @@ public void testRequireDocValuesOnDoubles() {\n public void testRequireDocValuesOnBools() {\n doTestRequireDocValues(new BooleanFieldMapper.BooleanFieldType());\n }\n+\n+ public void testDisabled() {\n+ ThreadPool threadPool = new ThreadPool(\"random_threadpool_name\");\n+ StringFieldMapper.StringFieldType ft = new StringFieldMapper.StringFieldType();\n+ try {\n+ IndicesFieldDataCache cache = new IndicesFieldDataCache(Settings.EMPTY, null, threadPool);\n+ IndexFieldDataService ifds = new IndexFieldDataService(IndexSettingsModule.newIndexSettings(new Index(\"test\"), Settings.EMPTY), cache, null, null);\n+ ft.setName(\"some_str\");\n+ ft.setFieldDataType(new FieldDataType(\"string\", Settings.builder().put(FieldDataType.FORMAT_KEY, \"disabled\").build()));\n+ try {\n+ ifds.getForField(ft);\n+ fail();\n+ } catch (IllegalStateException e) {\n+ assertThat(e.getMessage(), containsString(\"Field data loading is forbidden on [some_str]\"));\n+ }\n+ } finally {\n+ threadPool.shutdown();\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/fielddata/IndexFieldDataServiceTests.java",
"status": "modified"
}
]
}
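The diff above replaces `DisabledIndexFieldData.Builder` with a builder whose `build` method throws immediately, so the misconfiguration is reported during field-data lookup rather than only when a segment is read (which never happens on an empty shard). A minimal sketch of that fail-in-the-factory pattern, using made-up interfaces rather than the real `IndexFieldData` API:

```java
// Hypothetical, simplified interfaces -- not the real Elasticsearch types.
interface FieldData {
    void load(); // in the old behaviour, only this call failed
}

interface FieldDataBuilder {
    FieldData build(String fieldName);
}

final class DisabledFieldDataBuilder implements FieldDataBuilder {
    @Override
    public FieldData build(String fieldName) {
        // Failing here means even a search against an empty index (no segments, so
        // load() would never run) reports the misconfiguration consistently.
        throw new IllegalStateException("Field data loading is forbidden on [" + fieldName + "]");
    }
}

public class FailEarlyDemo {
    public static void main(String[] args) {
        FieldDataBuilder builder = new DisabledFieldDataBuilder();
        try {
            builder.build("nofielddata");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```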
|
{
"body": "I'm trying to get `inner_hits` to work on an a two level deep `has_child` query, but the grandchild hits just seem to return an empty array. If I specify the inner hits at the top level I get the results I want, but its rather too slow. Using inner hits on the has child query seems much faster, but doesn't return the grandchild hits.\n\nBelow is an example query:\n\n``` json\n{\n \"from\" : 0,\n \"size\" : 25,\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"multi_match\" : {\n \"query\" : \"asia\",\n \"fields\" : [ \"_search\" ],\n \"operator\" : \"AND\",\n \"analyzer\" : \"library_synonyms\",\n \"fuzziness\" : \"1\"\n }\n },\n \"filter\" : {\n \"and\" : {\n \"filters\" : [ {\n \"terms\" : {\n \"range\" : [ \"Global\" ]\n }\n } ]\n }\n }\n }\n },\n \"child_type\" : \"document-ref\",\n \"inner_hits\" : {\n \"name\" : \"document-ref\"\n }\n }\n },\n \"child_type\" : \"class\",\n \"inner_hits\" : {\n \"size\" : 1000,\n \"_source\" : false,\n \"fielddata_fields\" : [ \"class\" ],\n \"name\" : \"class\"\n }\n }\n },\n \"fielddata_fields\" : [ \"name\" ]\n}\n```\n\nThe `document-ref` inner hits just always returns an empty array. Should this work (and, if so, any ideas why it isn't?), or is it beyond the means of what inner hits can currently do?\n",
"comments": [
{
"body": "I've created a simpler test case for this, and it seems a little clearer that this doesn't currently work.\n\n**Add mappings:**\n\n```\ncurl -XPOST 'http://localhost:9200/grandchildren' -d '{\n \"mappings\" : {\n \"parent\" : {\n \"properties\" : {\n \"parent-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n },\n \"child\" : {\n \"_parent\" : {\n \"type\" : \"parent\"\n },\n \"_routing\" : {\n \"required\" : true\n },\n \"properties\" : {\n \"parent-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n },\n \"grandchild\" : {\n \"_parent\" : {\n \"type\" : \"child\"\n },\n \"_routing\" : {\n \"required\" : true\n },\n \"properties\" : {\n \"grandchild-name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n }\n }\n}'\n```\n\n**Populate:**\n\n```\ncurl -XPOST 'http://localhost:9200/grandchildren/parent/parent' -d '{ \"parent-name\" : \"Parent\" }'\ncurl -XPOST 'http://localhost:9200/grandchildren/child/child?parent=parent&routing=parent' -d '{ \"child-name\" : \"Child\" }'\ncurl -XPOST 'http://localhost:9200/grandchildren/grandchild/grandchild?parent=child&routing=parent' -d '{ \"grandchild-name\" : \"Grandchild\" }'\n```\n\n**Query:**\n\n```\ncurl -XGET 'http://localhost:9200/grandchildren/_search?pretty' -d '{\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"child_type\" : \"grandchild\",\n \"inner_hits\" : {\n \"name\" : \"grandchild\"\n }\n }\n },\n \"child_type\" : \"child\",\n \"inner_hits\" : {\n \"name\" : \"child\"\n }\n }\n }\n}'\n```\n\n**Result:**\n\n```\n{\n \"took\" : 2,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"parent\",\n \"_id\" : \"parent\",\n \"_score\" : 1.0,\n \"_source\":{ \"parent-name\" : \"Parent\" },\n \"inner_hits\" : {\n \"grandchild\" : {\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : null,\n \"hits\" : [ ]\n }\n },\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"child\",\n \"_id\" : \"child\",\n \"_score\" : 1.0,\n \"_source\":{ \"child-name\" : \"Child\" }\n } ]\n }\n }\n }\n } ]\n }\n}\n```\n\nNot only is the grandchild hits empty, but they're also not nested within the child hits, so aren't going to give me what I want anyway. I'm not sure what the intended/expected behaviour would be here, but guess I need to try something else for now.\n",
"created_at": "2015-05-12T13:52:39Z"
},
{
"body": "Above with inner hits at top-level:\n\n**Query:**\n\n```\ncurl -XGET 'http://localhost:9200/grandchildren/_search?pretty' -d '{\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"has_child\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"child_type\" : \"grandchild\"\n }\n },\n \"child_type\" : \"child\"\n }\n },\n \"inner_hits\" : {\n \"child\" : {\n \"type\" : {\n \"child\" : {\n \"inner_hits\" : {\n \"child\" : {\n \"type\" : {\n \"grandchild\" : {}\n }\n }\n }\n }\n }\n }\n }\n}'\n```\n\n**Result:**\n\n```\n{\n \"took\" : 3,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"parent\",\n \"_id\" : \"parent\",\n \"_score\" : 1.0,\n \"_source\":{ \"parent-name\" : \"Parent\" },\n \"inner_hits\" : {\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"child\",\n \"_id\" : \"child\",\n \"_score\" : 1.0,\n \"_source\":{ \"child-name\" : \"Child\" },\n \"inner_hits\" : {\n \"child\" : {\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"grandchildren\",\n \"_type\" : \"grandchild\",\n \"_id\" : \"grandchild\",\n \"_score\" : 1.0,\n \"_source\":{ \"grandchild-name\" : \"Grandchild\" }\n } ]\n }\n }\n }\n } ]\n }\n }\n }\n } ]\n }\n}\n```\n\nHere the grandchild inner hit is correctly found, and nested\n",
"created_at": "2015-05-12T14:03:05Z"
},
{
"body": "[I'm using 1.5.2]\n",
"created_at": "2015-05-12T14:12:57Z"
},
{
"body": "I managed to track down why this wasn't working. When parsing 'has child' queries, inner hits were always being added to the `parseContext`, and never as child inner hits to the parent inner hits.\n\nI've put together a very quick and dirty fix for this (https://github.com/lukens/elasticsearch/commit/fb22d622e7b24074b5f4fe3e26cffd4c38cff75b) which has got it working for my case, but I don't think is suitable for inclusion in a release.\n\nIssues I see with my with my fix:\n1. It's just messy, it doesn't really fix the issue, but just cleans up after it. Children still add their inner hits to the `parseContext`, the parent just removes them again afterwards, and adds them as children to its own inner hits.\n2. The parent removes them again afterwards by mutating a Map it gets from the current `SearchContext`'s `InnerHitsContext`. This is obviously bad and messy, and would be broken if `InnerHitsContext` was changed to return a copy of the map, rather than the map itself. Nasty dependencies between classes.\n3. I think a child can specify inner hits even if a parent doesn't. These would currently get lost. I'm not sure what the behaviour should be here, it should probably be considered an invalid query.\n4. If this was done properly, descendants at different levels could add inner hits with the same name, whereas currently this could cause issues. Maybe you shouldn't be able to have the same name at different levels, but if implemented correctly, there should be no need to enforce this. \n\nI considered submitting as a pull request, but felt it was far too rough and ready.\n",
"created_at": "2015-05-13T12:44:35Z"
},
{
"body": "@lukens If you want nested inner hits then you need to use the top level inner hits, the inner_hits on a query doesn't support nesting. The fact that your grandparents inner hits is empty is clearly a bug, thanks for bringing this up!\n",
"created_at": "2015-05-13T16:41:55Z"
},
{
"body": "Hi, it's grandchild, rather than grandparent, that isn't working. Though you also seem to be suggesting it shouldn't be. Either way, the change I committed shows that it can work, my change just isn't a very nice way to make it work.\n",
"created_at": "2015-05-14T09:30:04Z"
},
{
"body": "Ah, or are you saying it should work, but just shouldn't be nested? I'm not really sure what the point of it would be if it wasn't nested, though that may just be because it doesn't fit my use case, and I can't think of a use case where it would be useful.\n\nI think it would be good if nesting did work, as that would still allow either use case, really.\n\nThe problem with top level inner_hits is that I have to apply the query once in the has_child query, and then again in the inner_hits query, which makes everything slower than it would otherwise need to be.\n",
"created_at": "2015-05-14T09:36:15Z"
},
{
"body": "@lukens yes, I meant grandchild. The reason it is a bug is, because the inner_hits in your response shouldn't be empty.\n\nThe top level inner hits and inner hits defined on a query internally to ES is the same thing and either way of defining inner hits will yield the same performance in terms of query time. The nested inner hits support in the query dsl was left out to reduce complexity and most of the times there is just a single level relationship. Obviously that means for your use case that you need to use top level inner hits. \n\nMaybe the inner hits support in the query dsl should support multi level relationships too, but I think the parsing logic shouldn't be get super complex because this. I need to think more about this. Like you said if it the grandchild isn't nested its hits in the response, then it isn't very helpful.\n\nThe only overhead of top level inner hits is that queries are defined twice, so the request body gets larger. If you're concerned with that, you can consider using search templates, so that you don't you reduce the amount of data send to your cluster.\n",
"created_at": "2015-05-14T10:20:30Z"
},
{
"body": "Isn't there also an overhead in running the query twice with the top level inner_hits, or does Elasticsearch do something clever so that it only gets run once? (and, if so, does it need to be specified again in the inner hits?)\n\nI've not yet come across search templates, are these compatible with highly dynamic searches?\n\nI'd like try and spend a bit more time on getting nesting working in a less hacky manner, but am under enormous pressure just to get a project completed at the moment. For the case when grandchildren aren't nested in the query, I guess the simple solution is to also not nest them in the response. I don't think this would overcomplicate the parsing too much.\n\nI have a fairly nice way to handle all this in my mind, just not the time to implement it at the moment.\n",
"created_at": "2015-05-14T10:46:55Z"
},
{
"body": "OK, switching to top level hits doesn't seem to have affected performance, so I can work with that for now. It had seemed much slower before, but once I'd actually got inner_hits on the query working, that ended up just as slow, until I tweaked some other things.\n\nThe \"fix\" I've committed above is probably only really of any use as a reference if you want to go with nesting, as it doesn't solve the problem when not nested. I expect there's something in the inner workings of inner_hits that currently relies on them being nested.\n",
"created_at": "2015-05-14T11:12:21Z"
},
{
"body": "> Isn't there also an overhead in running the query twice with the top level inner_hits, or does Elasticsearch do something clever so that it only gets run once? (and, if so, does it need to be specified again in the inner hits?)\n\ninner_hits runs as part of the fetch phase and always executes an additional search to fetch the inner hits. The search being executed is cheap. It only runs an a single shard and just runs a query that fetches top child docs that matches with `_parent:[parent_id]` (all docs associated with parent `parent_id`) and the inner query defined in the `has_child` query. This is a query that ES (actually Lucene) can execute relatively quickly. This mini search is executed for each hit being returned.\n\n> I've not yet come across search templates, are these compatible with highly dynamic searches?\n\nYes, the dynamic part of the search request can be templated.\n\n> The \"fix\" I've committed above is probably only really of any use as a reference if you want to go with nesting, as it doesn't solve the problem when not nested. I expect there's something in the inner workings of inner_hits that currently relies on them being nested.\n\nYes, the inner hits features relies on the fact that grandchild and child are nested. When using the top-level inner hits notation this works out, but when using inner_hits as part of the query dsl this doesn't work out, because grandchild inner hit definition isn't nested under the child inner hit definition.\n\nIn order to fix this properly the query dsl parsing logic should just support nested inner hits. I think the format doesn't need to change in order to support this. Just because the fact the two `has_child` queries are nested should be enough for automatically nest the two inner hit definitions.\n",
"created_at": "2015-05-15T09:55:48Z"
},
{
"body": "Allso seeing this issue, in my case multi-level nested documents (as in https://github.com/elastic/elasticsearch/issues/13064). Would be great to get a solution to this.\n",
"created_at": "2015-09-01T23:28:25Z"
},
{
"body": "Will this issue be fixed at 2.x? Really looking forward to see the nested inner hits in query dsl.\n",
"created_at": "2015-11-02T11:08:31Z"
},
{
"body": "When will this be fixed, its a blocker for us !! \n",
"created_at": "2015-11-26T13:49:40Z"
},
{
"body": "+1\n",
"created_at": "2015-12-20T13:30:05Z"
},
{
"body": "+1\n",
"created_at": "2016-01-09T11:29:34Z"
},
{
"body": "@martijnvg just pinging you about this as a reminder\n",
"created_at": "2016-01-18T19:16:53Z"
},
{
"body": "+1\n",
"created_at": "2016-02-01T13:21:45Z"
},
{
"body": "+1\n",
"created_at": "2016-03-01T09:54:42Z"
},
{
"body": "Hi Everyone,\r\nwhere is document data ",
"created_at": "2017-04-27T05:37:57Z"
},
{
"body": "It looks like this issue is still unsolved, at least in elasticsearch 7.1",
"created_at": "2020-02-28T03:06:16Z"
}
],
"number": 11118,
"title": "has_child and inner_hits for grandchild hit doesn't work"
}
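As the maintainer's comments above describe, each hit returned by the main query triggers a small per-shard search during the fetch phase that combines the inner query with a filter on `_parent:[parent_id]`. A rough Lucene-level sketch of that combination follows; the `_parent` term encoding and the surrounding fetch-phase plumbing are simplified assumptions, not the actual implementation:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

import java.io.IOException;

public class InnerHitsSketch {
    // Fetch the top child docs for one parent hit: inner query AND a _parent filter.
    static TopDocs childHitsForParent(IndexSearcher searcher, Query innerQuery,
                                      String parentId, int size) throws IOException {
        BooleanQuery.Builder perParent = new BooleanQuery.Builder();
        perParent.add(innerQuery, BooleanClause.Occur.MUST);
        perParent.add(new TermQuery(new Term("_parent", parentId)), BooleanClause.Occur.FILTER);
        return searcher.search(perParent.build(), size);
    }

    public static void main(String[] args) {
        // Without a real index there is nothing to search; just show the query shape.
        BooleanQuery.Builder perParent = new BooleanQuery.Builder();
        perParent.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
        perParent.add(new TermQuery(new Term("_parent", "parent")), BooleanClause.Occur.FILTER);
        System.out.println(perParent.build()); // prints something like: +*:* #_parent:parent
    }
}
```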
|
{
"body": "Fix support for using inner hits hierarchically when using nested `has_child`, `has_parent` or `nested` queries in the query dsl.\n\nPR for #11118\n",
"number": 16143,
"review_comments": [],
"title": "Fix the use of nested `inner_hits` in query dsl"
}
|
{
"commits": [
{
"message": "inner_hits: Fix support for using inner hits hierarchically when using nested `has_child`, `has_parent` or `nested` queries in the query dsl.\n\nCloses #11118"
}
],
"files": [
{
"diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.join.JoinUtil;\n import org.apache.lucene.search.join.ScoreMode;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -40,7 +41,9 @@\n import org.elasticsearch.search.fetch.innerhits.InnerHitsSubSearchContext;\n \n import java.io.IOException;\n+import java.util.Collections;\n import java.util.Locale;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -205,13 +208,7 @@ public String getWriteableName() {\n \n @Override\n protected Query doToQuery(QueryShardContext context) throws IOException {\n- String[] previousTypes = QueryShardContext.setTypesWithPrevious(type);\n- Query innerQuery;\n- try {\n- innerQuery = query.toQuery(context);\n- } finally {\n- QueryShardContext.setTypes(previousTypes);\n- }\n+ Query innerQuery = processInnerQuery(context, query, type, queryInnerHits);\n if (innerQuery == null) {\n return null;\n }\n@@ -223,21 +220,8 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n if (parentFieldMapper.active() == false) {\n throw new QueryShardException(context, \"[\" + NAME + \"] _parent field has no parent type configured\");\n }\n- if (queryInnerHits != null) {\n- try (XContentParser parser = queryInnerHits.getXcontentParser()) {\n- XContentParser.Token token = parser.nextToken();\n- if (token != XContentParser.Token.START_OBJECT) {\n- throw new IllegalStateException(\"start object expected but was: [\" + token + \"]\");\n- }\n- InnerHitsSubSearchContext innerHits = context.getInnerHitsContext(parser);\n- if (innerHits != null) {\n- ParsedQuery parsedQuery = new ParsedQuery(innerQuery, context.copyNamedQueries());\n- InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.getSubSearchContext(), parsedQuery, null, context.getMapperService(), childDocMapper);\n- String name = innerHits.getName() != null ? 
innerHits.getName() : type;\n- context.addInnerHits(name, parentChildInnerHits);\n- }\n- }\n- }\n+\n+ processInnerHits(queryInnerHits, context, innerQuery, type, childDocMapper);\n \n String parentType = parentFieldMapper.type();\n DocumentMapper parentDocMapper = context.getMapperService().documentMapper(parentType);\n@@ -262,6 +246,44 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n return new LateParsingQuery(parentDocMapper.typeFilter(), innerQuery, minChildren(), maxChildren, parentType, scoreMode, parentChildIndexFieldData);\n }\n \n+ static Query processInnerQuery(QueryShardContext context, QueryBuilder query, String type, QueryInnerHits queryInnerHits) throws IOException {\n+ String[] previousTypes = QueryShardContext.setTypesWithPrevious(type);\n+ boolean previous = context.hasParentQueryWithInnerHits();\n+ try {\n+ context.setHasParentQueryWithInnerHits(queryInnerHits != null);\n+ return query.toQuery(context);\n+ } finally {\n+ QueryShardContext.setTypes(previousTypes);\n+ context.setHasParentQueryWithInnerHits(previous);\n+ }\n+ }\n+\n+ static void processInnerHits(QueryInnerHits queryInnerHits, QueryShardContext context, Query innerQuery, String type, DocumentMapper documentMapper) throws IOException {\n+ if (queryInnerHits == null) {\n+ return;\n+ }\n+\n+ try (XContentParser parser = queryInnerHits.getXcontentParser()) {\n+ XContentParser.Token token = parser.nextToken();\n+ if (token != XContentParser.Token.START_OBJECT) {\n+ throw new IllegalStateException(\"start object expected but was: [\" + token + \"]\");\n+ }\n+ InnerHitsSubSearchContext innerHits = context.getInnerHitsContext(parser);\n+ if (innerHits != null) {\n+ ParsedQuery parsedQuery = new ParsedQuery(innerQuery, context.copyNamedQueries());\n+ String name = innerHits.getName() != null ? innerHits.getName() : type;\n+ InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(\n+ innerHits.getSubSearchContext(), parsedQuery, context.getChildInnerHits(), context.getMapperService(), documentMapper\n+ );\n+ if (context.hasParentQueryWithInnerHits()) {\n+ context.setChildInnerHits(name, parentChildInnerHits);\n+ } else {\n+ context.addInnerHits(name, parentChildInnerHits);\n+ }\n+ }\n+ }\n+ }\n+\n final static class LateParsingQuery extends Query {\n \n private final Query toQuery;",
"filename": "core/src/main/java/org/elasticsearch/index/query/HasChildQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -26,13 +26,10 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n import org.elasticsearch.index.query.support.QueryInnerHits;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsContext;\n-import org.elasticsearch.search.fetch.innerhits.InnerHitsSubSearchContext;\n \n import java.io.IOException;\n import java.util.HashSet;\n@@ -118,14 +115,7 @@ public QueryInnerHits innerHit() {\n \n @Override\n protected Query doToQuery(QueryShardContext context) throws IOException {\n- Query innerQuery;\n- String[] previousTypes = QueryShardContext.setTypesWithPrevious(type);\n- try {\n- innerQuery = query.toQuery(context);\n- } finally {\n- QueryShardContext.setTypes(previousTypes);\n- }\n-\n+ Query innerQuery = HasChildQueryBuilder.processInnerQuery(context, query, type, innerHit);\n if (innerQuery == null) {\n return null;\n }\n@@ -135,21 +125,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n + \"] is not a valid type\");\n }\n \n- if (innerHit != null) {\n- try (XContentParser parser = innerHit.getXcontentParser()) {\n- XContentParser.Token token = parser.nextToken();\n- if (token != XContentParser.Token.START_OBJECT) {\n- throw new IllegalStateException(\"start object expected but was: [\" + token + \"]\");\n- }\n- InnerHitsSubSearchContext innerHits = context.getInnerHitsContext(parser);\n- if (innerHits != null) {\n- ParsedQuery parsedQuery = new ParsedQuery(innerQuery, context.copyNamedQueries());\n- InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.getSubSearchContext(), parsedQuery, null, context.getMapperService(), parentDocMapper);\n- String name = innerHits.getName() != null ? innerHits.getName() : type;\n- context.addInnerHits(name, parentChildInnerHits);\n- }\n- }\n- }\n+ HasChildQueryBuilder.processInnerHits(innerHit, context, innerQuery, type, parentDocMapper);\n \n Set<String> parentTypes = new HashSet<>(5);\n parentTypes.add(parentDocMapper.type());",
"filename": "core/src/main/java/org/elasticsearch/index/query/HasParentQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -189,8 +189,10 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n final Query childFilter;\n final ObjectMapper parentObjectMapper;\n final Query innerQuery;\n+ boolean previous = context.hasParentQueryWithInnerHits();\n ObjectMapper objectMapper = context.nestedScope().getObjectMapper();\n try {\n+ context.setHasParentQueryWithInnerHits(queryInnerHits != null);\n if (objectMapper == null) {\n parentFilter = context.bitsetFilter(Queries.newNonNestedFilter());\n } else {\n@@ -204,6 +206,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n }\n } finally {\n context.nestedScope().previousLevel();\n+ context.setHasParentQueryWithInnerHits(previous);\n }\n \n if (queryInnerHits != null) {\n@@ -216,9 +219,13 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n if (innerHits != null) {\n ParsedQuery parsedQuery = new ParsedQuery(innerQuery, context.copyNamedQueries());\n \n- InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.getSubSearchContext(), parsedQuery, null, parentObjectMapper, nestedObjectMapper);\n+ InnerHitsContext.NestedInnerHits nestedInnerHits = new InnerHitsContext.NestedInnerHits(innerHits.getSubSearchContext(), parsedQuery, context.getChildInnerHits(), parentObjectMapper, nestedObjectMapper);\n String name = innerHits.getName() != null ? innerHits.getName() : path;\n- context.addInnerHits(name, nestedInnerHits);\n+ if (context.hasParentQueryWithInnerHits()) {\n+ context.setChildInnerHits(name, nestedInnerHits);\n+ } else {\n+ context.addInnerHits(name, nestedInnerHits);\n+ }\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -109,6 +110,8 @@ public static void removeTypes() {\n private NestedScope nestedScope;\n private QueryParseContext parseContext;\n boolean isFilter; // pkg private for testing\n+ private boolean hasParentQueryWithInnerHits;\n+ private Map<String, InnerHitsContext.BaseInnerHits> childInnerHits;\n \n public QueryShardContext(IndexSettings indexSettings, Client client, BitsetFilterCache bitsetFilterCache, IndexFieldDataService indexFieldDataService, MapperService mapperService, SimilarityService similarityService, ScriptService scriptService,\n final IndicesQueriesRegistry indicesQueriesRegistry) {\n@@ -246,6 +249,29 @@ public void addInnerHits(String name, InnerHitsContext.BaseInnerHits context) {\n innerHitsContext.addInnerHitDefinition(name, context);\n }\n \n+ public void setChildInnerHits(String name, InnerHitsContext.BaseInnerHits innerHits) {\n+ this.childInnerHits = Collections.singletonMap(name, innerHits);\n+ }\n+\n+ /**\n+ * @return Any inner hits that an inner query has processed if {@link #hasParentQueryWithInnerHits()} was set\n+ * to true before processing the inner query.\n+ */\n+ public Map<String, InnerHitsContext.BaseInnerHits> getChildInnerHits() {\n+ return childInnerHits;\n+ }\n+\n+ /**\n+ * @return Whether a parent query in the dsl has inner hits enabled\n+ */\n+ public boolean hasParentQueryWithInnerHits() {\n+ return hasParentQueryWithInnerHits;\n+ }\n+\n+ public void setHasParentQueryWithInnerHits(boolean hasParentQueryWithInnerHits) {\n+ this.hasParentQueryWithInnerHits = hasParentQueryWithInnerHits;\n+ }\n+\n public Collection<String> simpleMatchToIndexNames(String pattern) {\n return mapperService.simpleMatchToIndexNames(pattern);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java",
"status": "modified"
},
{
"diff": "@@ -1219,4 +1219,255 @@ public void testDontExplode() throws Exception {\n assertHitCount(response, 1);\n }\n \n+ public void testParentChildHierarchy() throws Exception {\n+ assertAcked(prepareCreate(\"index1\")\n+ .addMapping(\"level1\")\n+ .addMapping(\"level2\", \"_parent\", \"type=level1\")\n+ .addMapping(\"level3\", \"_parent\", \"type=level2\")\n+ .addMapping(\"level4\", \"_parent\", \"type=level3\")\n+ );\n+\n+ client().prepareIndex(\"index1\", \"level1\", \"1\").setSource(\"{}\").get();\n+ client().prepareIndex(\"index1\", \"level2\", \"2\").setParent(\"1\").setRouting(\"1\").setSource(\"{}\").get();\n+ client().prepareIndex(\"index1\", \"level3\", \"3\").setParent(\"2\").setRouting(\"1\").setSource(\"{}\").get();\n+ client().prepareIndex(\"index1\", \"level4\", \"4\").setParent(\"3\").setRouting(\"1\").setSource(\"{}\").get();\n+ refresh();\n+\n+ SearchResponse response = client().prepareSearch(\"index1\")\n+ .setQuery(\n+ hasChildQuery(\"level2\",\n+ hasChildQuery(\"level3\",\n+ hasChildQuery(\"level4\", matchAllQuery()).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ )\n+ .get();\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(response.getHits().getAt(0).getType(), equalTo(\"level1\"));\n+ assertThat(response.getHits().getAt(0).getIndex(), equalTo(\"index1\"));\n+\n+ assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ SearchHits innerHits = response.getHits().getAt(0).getInnerHits().get(\"level2\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level2\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level3\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level3\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"3\"));\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level4\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level4\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"4\"));\n+ assertThat(innerHits.getAt(0).getInnerHits(), nullValue());\n+\n+ response = client().prepareSearch(\"index1\")\n+ .setQuery(\n+ hasParentQuery(\"level3\",\n+ hasParentQuery(\"level2\",\n+ hasParentQuery(\"level1\", matchAllQuery()).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ )\n+ .get();\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getAt(0).getId(), equalTo(\"4\"));\n+ assertThat(response.getHits().getAt(0).getType(), equalTo(\"level4\"));\n+ assertThat(response.getHits().getAt(0).getIndex(), equalTo(\"index1\"));\n+\n+ assertThat(response.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = response.getHits().getAt(0).getInnerHits().get(\"level3\");\n+ assertThat(innerHits, 
notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level3\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"3\"));\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level2\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level2\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level1\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level1\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n+ }\n+\n+ public void testNestedHierarchy() throws Exception {\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"level1\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"level2\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"level3\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"level4\")\n+ .field(\"type\", \"nested\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().endObject();\n+ assertAcked(prepareCreate(\"index\")\n+ .addMapping(\"type\", mapping)\n+ );\n+\n+ XContentBuilder source = jsonBuilder().startObject()\n+ .startArray(\"level1\")\n+ .startObject()\n+ .field(\"field\", \"value1\")\n+ .startArray(\"level2\")\n+ .startObject()\n+ .field(\"field\", \"value2\")\n+ .startArray(\"level3\")\n+ .startObject()\n+ .field(\"field\", \"value3\")\n+ .startArray(\"level4\")\n+ .startObject()\n+ .field(\"field\", \"value4\")\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .endArray()\n+ .endObject()\n+ .endArray()\n+ .endObject();\n+ client().prepareIndex(\"index\", \"type\", \"1\").setSource(source).get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"index\")\n+ .setQuery(\n+ nestedQuery(\"level1\",\n+ nestedQuery(\"level1.level2\",\n+ nestedQuery(\"level1.level2.level3\",\n+ nestedQuery(\"level1.level2.level3.level4\", matchAllQuery()).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ )\n+ .get();\n+\n+ assertHitCount(searchResponse, 1);\n+ assertThat(searchResponse.getHits().getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(searchResponse.getHits().getAt(0).getType(), equalTo(\"type\"));\n+ assertThat(searchResponse.getHits().getAt(0).getNestedIdentity(), nullValue());\n+\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ SearchHits innerHits = searchResponse.getHits().getAt(0).getInnerHits().get(\"level1\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"type\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n+ 
assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"level1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild(), nullValue());\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level1.level2\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"type\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"level1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"level2\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild(), nullValue());\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level1.level2.level3\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"type\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"level1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"level2\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getField().string(), equalTo(\"level3\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getChild(), nullValue());\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level1.level2.level3.level4\");\n+ assertThat(innerHits, notNullValue());\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"type\"));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"level1\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"level2\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getField().string(), equalTo(\"level3\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getChild().getField().string(), equalTo(\"level4\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getChild().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild().getChild().getChild().getChild(), nullValue());\n+ }\n+\n+ public void testParentChildAndNestedHierarchy() throws Exception {\n+ 
assertAcked(prepareCreate(\"index\")\n+ .addMapping(\"level1\")\n+ .addMapping(\"level2\", \"_parent\", \"type=level1\", \"level3\", \"type=nested\")\n+ );\n+\n+ client().prepareIndex(\"index\", \"level1\", \"1\").setSource(\"{}\").get();\n+ XContentBuilder source = jsonBuilder().startObject()\n+ .startArray(\"level3\")\n+ .startObject()\n+ .field(\"field\", \"value\")\n+ .endObject()\n+ .endArray()\n+ .endObject();\n+ client().prepareIndex(\"index\", \"level2\", \"2\").setParent(\"1\").setSource(source).get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"index\")\n+ .setQuery(\n+ hasChildQuery(\"level2\",\n+ nestedQuery(\"level3\", matchAllQuery()).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))\n+ )\n+ .get();\n+ assertHitCount(searchResponse, 1);\n+ assertThat(searchResponse.getHits().getAt(0).getId(), equalTo(\"1\"));\n+ assertThat(searchResponse.getHits().getAt(0).getType(), equalTo(\"level1\"));\n+\n+ assertThat(searchResponse.getHits().getAt(0).getInnerHits().size(), equalTo(1));\n+ SearchHits innerHits = searchResponse.getHits().getAt(0).getInnerHits().get(\"level2\");\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level2\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity(), nullValue());\n+\n+ assertThat(innerHits.getAt(0).getInnerHits().size(), equalTo(1));\n+ innerHits = innerHits.getAt(0).getInnerHits().get(\"level3\");\n+ assertThat(innerHits.getTotalHits(), equalTo(1L));\n+ assertThat(innerHits.getAt(0).getId(), equalTo(\"2\"));\n+ assertThat(innerHits.getAt(0).getType(), equalTo(\"level2\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"level3\"));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(innerHits.getAt(0).getNestedIdentity().getChild(), nullValue());\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/search/innerhits/InnerHitsIT.java",
"status": "modified"
}
]
}
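Putting the fix together with the reporter's original index layout, a query built with the same 2.x builders used in the test diff (`hasChildQuery`, `QueryInnerHits`, `InnerHitsBuilder.InnerHit`) might look like the sketch below; the exact import for `InnerHitsBuilder` is not shown in the diff, so it is left as a comment rather than guessed:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.support.QueryInnerHits;
// InnerHitsBuilder.InnerHit is used exactly as in InnerHitsIT above; its import is
// assumed to match the 2.x test sources.

import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;
import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;

public class GrandchildInnerHitsSketch {

    // parent -> child -> grandchild, with inner hits requested at both child levels,
    // mirroring the hierarchy exercised by testParentChildHierarchy in the diff above.
    static QueryBuilder grandchildQuery() {
        return hasChildQuery("child",
                hasChildQuery("grandchild", matchAllQuery())
                        .innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()))
        ).innerHit(new QueryInnerHits(null, new InnerHitsBuilder.InnerHit()));
    }

    static SearchResponse run(Client client) {
        // After the fix, the grandchild hits appear nested inside each child inner hit
        // rather than being dropped or attached at the top level.
        return client.prepareSearch("grandchildren").setQuery(grandchildQuery()).get();
    }
}
```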
|
{
"body": "Bug introduced in #13779: we don't filter anymore credentials because we were filtering `cloud.azure.storage.account` and `cloud.azure.storage.key` but now credentials are like `cloud.azure.storage.XXX.account` and `cloud.azure.storage.XXX.key` where `XXX` can be a storage setting id.\n\nSee https://github.com/elastic/elasticsearch/blob/master/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageSettingsFilter.java#L35\n",
"comments": [],
"number": 14843,
"title": "Filter cloud azure credentials"
}
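The fix this issue asks for amounts to filtering every per-client credential key under the `cloud.azure.storage.` prefix instead of only the two legacy flat keys. The sketch below shows that matching rule on a plain settings map; the real plugin registers its patterns through Elasticsearch's settings filtering, which is not reproduced here:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AzureCredentialFilterSketch {
    private static final String PREFIX = "cloud.azure.storage.";

    // Drops any account/key entry under the cloud.azure.storage. prefix. Because the
    // legacy flat keys also end in ".account" / ".key", this covers both the old form
    // and the newer per-client form cloud.azure.storage.<id>.account / .key.
    static Map<String, String> filterCredentials(Map<String, String> settings) {
        Map<String, String> filtered = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : settings.entrySet()) {
            String key = entry.getKey();
            boolean secret = key.startsWith(PREFIX)
                    && (key.endsWith(".account") || key.endsWith(".key"));
            if (secret == false) {
                filtered.put(key, entry.getValue());
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        Map<String, String> settings = new LinkedHashMap<>();
        settings.put("cloud.azure.storage.my_account.account", "some-account");
        settings.put("cloud.azure.storage.my_account.key", "some-key");
        settings.put("cloud.azure.storage.timeout", "30s");
        System.out.println(filterCredentials(settings)); // {cloud.azure.storage.timeout=30s}
    }
}
```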
|
{
"body": "One test we forgot in #14843 and #13779 is the default client selection.\n\nMost of the time, users won't define explicitly which client they want to use because they are providing only one connection to Azure storage:\n\n``` yml\ncloud:\n azure:\n storage:\n my_account:\n account: your_azure_storage_account\n key: your_azure_storage_key\n```\n\nThen using the default client like this:\n\n``` sh\n# This one will use the default account (my_account1)\ncurl -XPUT localhost:9200/_snapshot/my_backup1?pretty -d '{\n \"type\": \"azure\"\n}'\n```\n\nThis commit adds tests to check that the right client is still selected when no client is explicitly set when creating the snapshot.\n",
"number": 16088,
"review_comments": [],
"title": "Add more tests for Azure Repository client selection"
}
|
{
"commits": [
{
"message": "Add more tests for Azure Repository client selection\n\nOne test we forgot in #14843 and #13779 is the default client selection.\n\nMost of the time, users won't define explicitly which client they want to use because they are providing only one connection to Azure storage:\n\n```yml\ncloud:\n azure:\n storage:\n my_account:\n account: your_azure_storage_account\n key: your_azure_storage_key\n```\n\nThen using the default client like this:\n\n```sh\n# This one will use the default account (my_account1)\ncurl -XPUT localhost:9200/_snapshot/my_backup1?pretty -d '{\n \"type\": \"azure\"\n}'\n```\n\nThis commit adds tests to check that the right client is still selected when no client is explicitly set when creating the snapshot."
}
],
"files": [
{
"diff": "@@ -61,6 +61,16 @@ public void testGetSelectedClientWithNoSecondary() {\n assertThat(client.getEndpoint(), is(URI.create(\"https://azure1\")));\n }\n \n+ public void testGetDefaultClientWithNoSecondary() {\n+ AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(Settings.builder()\n+ .put(\"cloud.azure.storage.azure1.account\", \"myaccount1\")\n+ .put(\"cloud.azure.storage.azure1.key\", \"mykey1\")\n+ .build());\n+ azureStorageService.doStart();\n+ CloudBlobClient client = azureStorageService.getSelectedClient(null, LocationMode.PRIMARY_ONLY);\n+ assertThat(client.getEndpoint(), is(URI.create(\"https://azure1\")));\n+ }\n+\n public void testGetSelectedClientPrimary() {\n AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(settings);\n azureStorageService.doStart();\n@@ -82,6 +92,13 @@ public void testGetSelectedClientSecondary2() {\n assertThat(client.getEndpoint(), is(URI.create(\"https://azure3\")));\n }\n \n+ public void testGetDefaultClientWithPrimaryAndSecondaries() {\n+ AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(settings);\n+ azureStorageService.doStart();\n+ CloudBlobClient client = azureStorageService.getSelectedClient(null, LocationMode.PRIMARY_ONLY);\n+ assertThat(client.getEndpoint(), is(URI.create(\"https://azure1\")));\n+ }\n+\n public void testGetSelectedClientNonExisting() {\n AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(settings);\n azureStorageService.doStart();",
"filename": "plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceTest.java",
"status": "modified"
}
]
}
|
{
"body": "The following is a search query. The term full_name uses a synonym analyzer. \nThe synonym expand file has the synonym: rice,arroz. \nNow if we run the validate query we get the below results:\n\n```\nGET /products_v2/product/_validate/query?explain\n{\n \"query\": {\n \"bool\": {\n \"must\": [\n {\n \"match\": {\n \"full_name\": {\n \"query\": \"rice\",\n \"fuzziness\": 1,\n \"minimum_should_match\": \"3<90%\"\n }\n }\n }\n ],\n \"should\": [\n {\n \"match_phrase\": {\n \"name\": {\n \"query\": \"rice\",\n \"slop\": 50\n }\n }\n }\n ]\n }\n }\n}\n\n{\n \"valid\": true,\n \"_shards\": {\n \"total\": 1,\n \"successful\": 1,\n \"failed\": 0\n },\n \"explanations\": [\n {\n \"index\": \"products_v2\",\n \"valid\": true,\n \"explanation\": \"filtered(+((full_name:arroz~1 full_name:rice~1)~2) (name:arroz name:rice))->cache(_type:product)\"\n }\n ]\n}\n```\n\nBoth rice and arroz are position 1. The minimum_should_match doesn't get applied to the position but the number of words in position 1.\n\nLet's say the synonym file has another synonym : Brown Sugar, brownsugar.\nThe following is the result of the validate query calling brownsugar\n\n```\n{\n \"valid\": true,\n \"_shards\": {\n \"total\": 1,\n \"successful\": 1,\n \"failed\": 0\n },\n \"explanations\": [\n {\n \"index\": \"products_v4\",\n \"valid\": true,\n \"explanation\": \"filtered(+(((full_name:brownsugar~1 full_name:brown~1) full_name:sugar~1)~2) name:\\\"(brownsugar brown) sugar\\\"~50)->cache(_type:product)\"\n }\n ]\n}\n```\n\nIn this case the minimum to match is applied based on positions. brownsugar and brown are position 1, sugar is position 2. The minimum_should_match gets applied as 2.\n\nIf minimum_should_match gets applied based on the number of positions there are, I believe the above example for rice should have minimum_should_match should be 1 and not 2. This is a problem in the way ES handles minimum_should_match for one word synonyms.\n",
"comments": [
{
"body": "Simpler recreation here:\n\n```\nPUT t\n{\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"syns\": {\n \"tokenizer\": \"standard\",\n \"filter\": [\n \"lowercase\",\n \"syns\"\n ]\n }\n },\n \"filter\": {\n \"syns\": {\n \"type\": \"synonym\",\n \"synonyms\": [\n \"arroz,rice\",\n \"brown sugar,brownsugar\"\n ]\n }\n }\n }\n },\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"full_name\": {\n \"type\": \"string\",\n \"analyzer\": \"syns\"\n }\n }\n }\n }\n}\n\nGET t/_validate/query?explain\n{\n \"query\": {\n \"match\": {\n \"full_name\": {\n \"query\": \"rice\",\n \"minimum_should_match\": 2\n }\n }\n }\n}\n\nGET t/_validate/query?explain\n{\n \"query\": {\n \"match\": {\n \"full_name\": {\n \"query\": \"brown sugar\",\n \"minimum_should_match\": 2\n }\n }\n }\n}\n```\n\nNot sure what can be done for the second multi-word synonym example (as multi-word synonyms are really problematic) but it does feel like the first example should not apply the min-should-match to stacked tokens.\n\n@mikemccand @jpountz what do you think=\n",
"created_at": "2016-01-10T10:02:34Z"
},
{
"body": "+1\n",
"created_at": "2016-01-11T08:50:05Z"
},
{
"body": "@clintongormley I changed the behavior of minimum should match to fix this issue: https://github.com/elastic/elasticsearch/issues/15521. In 2.2 and master we always honor the value even if the final number of optional clauses (after expansion) is smaller. This is why your first example returns no result. For the multi word synonym example the problem is similar but the resolution would be different simply because the generated query is wrong:\n\n```\n\"explanation\": \"((full_name:brown full_name:brownsugar) full_name:sugar)\"\n```\n",
"created_at": "2016-01-11T10:37:40Z"
},
{
"body": "@jimferenczi Maybe I'm the one confused but I think the issue is different here: we want the number of SHOULD clauses to be equal to the number of unique positions instead of the number of tokens?\n",
"created_at": "2016-01-11T16:49:41Z"
},
{
"body": "I am also a little unsure as to what @jimferenczi is saying. I want the minimum_should_match for rice,arroz to be equal to 1 as it should be based on the number of positions (in this case 1), but it is 2. It is correct for the multiword brown sugar, brownsugar as it is 2 and the number of positions in that case is 2.\n",
"created_at": "2016-01-11T17:06:08Z"
},
{
"body": "@jzbahrai @jpountz I just answered regarding @clintongormley's recreation where the minimum should match is statically set to 2. I agree that there is a bug with the example used by @jzbahrai where minimum should match is set with \"3<90%\". I'll take a look.\n",
"created_at": "2016-01-11T18:13:59Z"
},
{
"body": "The bug is when there is only one position but multiple clauses in this position. it is related to a small optimization that is done to reduce the number of clauses when there is only one position. In such case we build only one level of boolean query and the optional clauses are all at the root level, this level is then used to compute the number of optional clauses in the query. For instance the query \"rice\" is expanded into:\n(rice arrow) => 2 optional clauses at the root level.\nbut to correctly compute the number of positions we should generate another level to indicate that the query contains only one position:\n((rice arrow)) => 1 optional clause at the root level and 2 optional clauses at the inner level.\nI'll work on a fix tomorrow.\n",
"created_at": "2016-01-11T18:39:21Z"
},
{
"body": "The bug/problem is at the Lucene level. I opened https://issues.apache.org/jira/browse/LUCENE-6972, @jpountz can you take a look ?\n",
"created_at": "2016-01-12T09:05:22Z"
}
],
"number": 15858,
"title": "Minimum_should_match for one word query"
}
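
The discussion above converges on counting positions rather than expanded terms when applying `minimum_should_match`. A sketch using Lucene's `BooleanQuery` builder (the Lucene 5.x-era API used by the Elasticsearch versions discussed here) shows how stacked synonyms at one position could be grouped into a single optional clause so the setting is applied per position; this is an illustration, not the actual fix.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Stacked synonyms ("rice", "arroz") share one position, so they are grouped into a
// single nested SHOULD clause. minimum_should_match is then applied to the number of
// positions (1), not the number of expanded terms (2).
public class SynonymMsmSketch {
    public static void main(String[] args) {
        BooleanQuery.Builder synonymsAtPos0 = new BooleanQuery.Builder();
        synonymsAtPos0.add(new TermQuery(new Term("full_name", "rice")), Occur.SHOULD);
        synonymsAtPos0.add(new TermQuery(new Term("full_name", "arroz")), Occur.SHOULD);

        BooleanQuery.Builder outer = new BooleanQuery.Builder();
        outer.add(synonymsAtPos0.build(), Occur.SHOULD); // one optional clause per position
        outer.setMinimumNumberShouldMatch(1);            // 1 position -> msm of 1, not 2

        Query q = outer.build();
        System.out.println(q);
    }
}
```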
|
{
"body": "Fixes #15858\n",
"number": 16078,
"review_comments": [
{
"body": "can you leave a comment to explain that this means that this booleanquery was generated from synonyms?\n",
"created_at": "2016-01-19T09:46:45Z"
}
],
"title": "Do not apply minimum-should-match on a boolean query if the coords are disabled"
}
|
{
"commits": [
{
"message": "Do not apply minimum-should-match on a boolean query if the coords are disabled.\nFixes #15858"
}
],
"files": [
{
"diff": "@@ -117,6 +117,9 @@ public static Query applyMinimumShouldMatch(BooleanQuery query, @Nullable String\n if (minimumShouldMatch == null) {\n return query;\n }\n+ if (query.isCoordDisabled()) {\n+ return query;\n+ }\n int optionalClauses = 0;\n for (BooleanClause c : query.clauses()) {\n if (c.getOccur() == BooleanClause.Occur.SHOULD) {",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java",
"status": "modified"
},
{
"diff": "@@ -272,6 +272,17 @@ public void testMinShouldMatchBiggerThanNumberOfShouldClauses() throws Exception\n assertEquals(3, bq.getMinimumNumberShouldMatch());\n }\n \n+ public void testMinShouldMatchDisableCoord() throws Exception {\n+ BooleanQuery bq = (BooleanQuery) parseQuery(\n+ boolQuery()\n+ .should(termQuery(\"foo\", \"bar\"))\n+ .should(termQuery(\"foo2\", \"bar2\"))\n+ .minimumNumberShouldMatch(\"3\")\n+ .disableCoord(true)\n+ .buildAsBytes()).toQuery(createShardContext());\n+ assertEquals(0, bq.getMinimumNumberShouldMatch());\n+ }\n+\n public void testFromJson() throws IOException {\n String query =\n \"{\" +",
"filename": "core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -288,7 +288,7 @@ protected void doAssertLuceneQuery(SimpleQueryStringBuilder queryBuilder, Query\n Map.Entry<String, Float> field = fieldsIterator.next();\n assertTermOrBoostQuery(booleanClause.getQuery(), field.getKey(), queryBuilder.value(), field.getValue());\n }\n- if (queryBuilder.minimumShouldMatch() != null) {\n+ if (queryBuilder.minimumShouldMatch() != null && !boolQuery.isCoordDisabled()) {\n assertThat(boolQuery.getMinimumNumberShouldMatch(), greaterThan(0));\n }\n } else if (queryBuilder.fields().size() == 1) {",
"filename": "core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java",
"status": "modified"
}
]
}
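
The diff above adds the guard without the comment the reviewer asked for. Below is a short sketch of the same guard carrying that explanation, written against the Lucene 5.x API of that time and simplified to take an `Integer` instead of the string-valued `minimum_should_match`; it is an illustration, not the final `Queries#applyMinimumShouldMatch` code.

```java
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

// Sketch of the guard: a coord-disabled boolean query signals (per the review above)
// that its clauses were generated from synonym expansion, so minimum_should_match
// must not be applied to them.
public class MinShouldMatchGuardSketch {
    static Query applyMinimumShouldMatch(BooleanQuery query, Integer minimumShouldMatch) {
        if (minimumShouldMatch == null || query.isCoordDisabled()) {
            return query;
        }
        BooleanQuery.Builder builder = new BooleanQuery.Builder();
        for (BooleanClause clause : query.clauses()) {
            builder.add(clause); // copy clauses unchanged
        }
        builder.setMinimumNumberShouldMatch(minimumShouldMatch);
        return builder.build();
    }
}
```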
|
{
"body": "Consider the following Sense script:\n\n``` sh\nDELETE /test-index\n\nPOST /test-index\n{\n \"mappings\": {\n \"test-type\": {\n \"properties\": {\n \"ip\": {\n \"type\": \"ip\"\n }\n }\n }\n }\n}\n\nGET test-index/_mapping/field/ip\n\nPOST /test-index/test-type/1\n{\n \"ip\": \"192.168.0.1\"\n}\n\nPOST test-index/test-type/_search?size=1\n\nPOST logstash-0/apache/_search\n{\n \"query\": {\n \"match\": {\n \"clientip\": \"192.168.0.1\"\n }\n }\n}\n```\n\nThis executes just fine in 2.x but fails at the final step in master. Since https://github.com/elastic/elasticsearch/pull/14874 the `match` query now requires that IP addresses use valid CIDR notation (`\"192.168.0.1\"` must be `\"192.168.0.1/32\"`). The pr seems clear on the point that ES shouldn't be lenient when it comes to parsing CIDR notation, but should the match query on IP address fields really only accept CIDR notation? It seems like it should accept a single IPV4 address as well.\n",
"comments": [
{
"body": "Agreed that it should.\n",
"created_at": "2016-01-18T17:50:01Z"
}
],
"number": 16058,
"title": "Match query on IP requires CIDR notation"
}
|
{
"body": "This commit modifies IpFieldMapper#termQuery to permit single IPv4\naddresses for use in match and query_string queries.\n\nCloses #16058 \n",
"number": 16068,
"review_comments": [],
"title": "Single IPv4 addresses in IP field term queries"
}
|
{
"commits": [
{
"message": "Single IPv4 addresses in IP field term queries\n\nThis commit modifies IpFieldMapper#termQuery to permit single IPv4\naddresses for use in match and query_string queries."
}
],
"files": [
{
"diff": "@@ -213,11 +213,24 @@ public BytesRef indexedValueForSearch(Object value) {\n @Override\n public Query termQuery(Object value, @Nullable QueryShardContext context) {\n if (value != null) {\n- long[] fromTo;\n+ String term;\n if (value instanceof BytesRef) {\n- fromTo = Cidrs.cidrMaskToMinMax(((BytesRef) value).utf8ToString());\n+ term = ((BytesRef) value).utf8ToString();\n+ } else {\n+ term = value.toString();\n+ }\n+ long[] fromTo;\n+ // assume that the term is either a CIDR range or the\n+ // term is a single IPv4 address; if either of these\n+ // assumptions is wrong, the CIDR parsing will fail\n+ // anyway, and that is okay\n+ if (term.contains(\"/\")) {\n+ // treat the term as if it is in CIDR notation\n+ fromTo = Cidrs.cidrMaskToMinMax(term);\n } else {\n- fromTo = Cidrs.cidrMaskToMinMax(value.toString());\n+ // treat the term as if it is a single IPv4, and\n+ // apply a CIDR mask equivalent to the host route\n+ fromTo = Cidrs.cidrMaskToMinMax(term + \"/32\");\n }\n if (fromTo != null) {\n return rangeQuery(fromTo[0] == 0 ? null : fromTo[0],",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -40,6 +40,7 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;\n import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertFailures;\n@@ -124,6 +125,16 @@ public void testIpCidr() throws Exception {\n refresh();\n \n SearchResponse search = client().prepareSearch()\n+ .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"192.168.0.1\")))\n+ .execute().actionGet();\n+ assertHitCount(search, 1L);\n+\n+ search = client().prepareSearch()\n+ .setQuery(queryStringQuery(\"ip: 192.168.0.1\"))\n+ .execute().actionGet();\n+ assertHitCount(search, 1L);\n+\n+ search = client().prepareSearch()\n .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"192.168.0.1/32\")))\n .execute().actionGet();\n assertHitCount(search, 1l);",
"filename": "core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java",
"status": "modified"
}
]
}
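
The diff above treats a bare IPv4 term as the host route by appending "/32" before the CIDR-to-range conversion. The following self-contained sketch shows that conversion in plain Java with hypothetical helper names; it is not the actual `Cidrs#cidrMaskToMinMax` implementation.

```java
// Treat a bare IPv4 term as the host route, i.e. equivalent to appending "/32",
// before computing the [min, max] long range used by the numeric range query.
public class IpTermSketch {
    static long[] cidrToMinMax(String cidr) {
        String[] parts = cidr.split("/");
        String[] octets = parts[0].split("\\.");
        long ip = 0;
        for (String o : octets) {
            ip = (ip << 8) | Long.parseLong(o);
        }
        int prefix = Integer.parseInt(parts[1]);
        long mask = prefix == 0 ? 0 : (-1L << (32 - prefix)) & 0xFFFFFFFFL;
        long min = ip & mask;
        long max = min | (~mask & 0xFFFFFFFFL);
        return new long[] { min, max };
    }

    static long[] termToMinMax(String term) {
        // no slash -> single address, apply the /32 host mask
        return cidrToMinMax(term.contains("/") ? term : term + "/32");
    }

    public static void main(String[] args) {
        long[] range = termToMinMax("192.168.0.1");
        System.out.println(range[0] + ".." + range[1]); // identical min and max for /32
    }
}
```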
|
{
"body": "Hi team,\n\nI get an error using a `multi_match` query with `cross_fields` type and a numeric query.\nI'm using v2.1.1 on OSX, installed via Homebrew.\n\n**Index basic data**\n\n```\ncurl -XPUT http://localhost:9200/blog/post/1?pretty=1 -d '{\"foo\":123, \"bar\":\"xyzzy\", \"baz\":456}'\n```\n\n**Use a `multi_match` query with `cross_fields` type and a numeric query**\n\n```\ncurl -XGET http://localhost:9200/blog/post/_search?pretty=1 -d '{\"query\": {\"multi_match\": {\"type\": \"cross_fields\", \"query\": \"100\", \"lenient\": true, \"fields\": [\"foo\", \"bar\", \"baz\"]}}}'\n```\n\n**Error**\n\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Illegal shift value, must be 0..63; got shift=2147483647\"\n } ],\n \"type\" : \"search_phase_execution_exception\",\n \"reason\" : \"all shards failed\",\n \"phase\" : \"query\",\n \"grouped\" : true,\n \"failed_shards\" : [ {\n \"shard\" : 0,\n \"index\" : \"blog\",\n \"node\" : \"0TxGVVWsSu2qX63hZdOv2w\",\n \"reason\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Illegal shift value, must be 0..63; got shift=2147483647\"\n }\n } ]\n },\n \"status\" : 400\n}\n```\n\n**Note that the error does not appear if I specify only 1 numeric field in search.**\n\n**Stack trace**\n\n```\nCaused by: java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647\n at org.apache.lucene.util.NumericUtils.longToPrefixCodedBytes(NumericUtils.java:147)\n at org.apache.lucene.util.NumericUtils.longToPrefixCoded(NumericUtils.java:121)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.getBytesRef(NumericTokenStream.java:163)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.clone(NumericTokenStream.java:217)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.clone(NumericTokenStream.java:148)\n at org.apache.lucene.util.AttributeSource$State.clone(AttributeSource.java:54)\n at org.apache.lucene.util.AttributeSource.captureState(AttributeSource.java:281)\n at org.apache.lucene.analysis.CachingTokenFilter.fillCache(CachingTokenFilter.java:96)\n at org.apache.lucene.analysis.CachingTokenFilter.incrementToken(CachingTokenFilter.java:70)\n at org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:223)\n at org.apache.lucene.util.QueryBuilder.createBooleanQuery(QueryBuilder.java:87)\n at org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:178)\n at org.elasticsearch.index.search.MultiMatchQuery.parseAndApply(MultiMatchQuery.java:55)\n at org.elasticsearch.index.search.MultiMatchQuery.access$000(MultiMatchQuery.java:42)\n at org.elasticsearch.index.search.MultiMatchQuery$QueryBuilder.parseGroup(MultiMatchQuery.java:118)\n at org.elasticsearch.index.search.MultiMatchQuery$CrossFieldsQueryBuilder.buildGroupedQueries(MultiMatchQuery.java:198)\n at org.elasticsearch.index.search.MultiMatchQuery.parse(MultiMatchQuery.java:86)\n at org.elasticsearch.index.query.MultiMatchQueryParser.parse(MultiMatchQueryParser.java:163)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:257)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:303)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:206)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:201)\n at 
org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:831)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:651)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:617)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n```\n",
"comments": [
{
"body": "Fun times. Reproduces in master. I'll work on fixing it there and backporting it after that.\n",
"created_at": "2016-01-08T18:33:44Z"
},
{
"body": "FYI, I have reported a similar bug with the same symptoms (1 numeric field included is ok but 2+ numeric fields give `number_format_exception`). Don't know if it could be related.\n\nhttps://github.com/elastic/elasticsearch/issues/3975#issuecomment-167577538\n",
"created_at": "2016-01-08T20:01:37Z"
},
{
"body": "Good timing! I've figure out what is up and I've started on a solution. I've only got about two more hours left to work on it so I might not have anything before the weekend but, yeah, I'll have something soon.\n\nThe `lenient: true` issue is similar so I'll work on it while I'm in there.\n",
"created_at": "2016-01-08T20:22:56Z"
},
{
"body": "Had to revert the change. I'll get it in there though.\n",
"created_at": "2016-01-11T17:27:25Z"
}
],
"number": 15860,
"title": "multi_match query gives java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647"
}
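
The report above notes that the failure occurs even with `lenient: true`, which is meant to swallow per-field parse failures. The generic sketch below illustrates that intended per-field behaviour only; it is not Elasticsearch's actual `MultiMatchQuery` code, and the crash reported above happens deeper, in Lucene's `NumericTokenStream`.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BiFunction;

// Generic sketch of lenient per-field query building: a field whose type cannot parse
// the query text is skipped instead of failing the whole request.
public class LenientFieldQuerySketch {
    static List<String> buildPerFieldQueries(String text, List<String> fields,
                                             BiFunction<String, String, String> builder,
                                             boolean lenient) {
        List<String> queries = new ArrayList<>();
        for (String field : fields) {
            try {
                queries.add(builder.apply(field, text));
            } catch (IllegalArgumentException e) {
                if (!lenient) {
                    throw e; // without lenient, a single unparsable field fails the query
                }
                // lenient: true -> ignore this field and keep going
            }
        }
        return queries;
    }

    public static void main(String[] args) {
        BiFunction<String, String, String> builder = (field, text) -> {
            if (field.equals("bar")) {              // pretend "bar" is a text field
                return field + ":" + text;
            }
            Long.parseLong(text);                   // pretend the other fields are numeric
            return field + ":[" + text + " TO " + text + "]";
        };
        System.out.println(buildPerFieldQueries("100", Arrays.asList("foo", "bar", "baz"), builder, true));
        System.out.println(buildPerFieldQueries("abc", Arrays.asList("foo", "bar", "baz"), builder, true));
    }
}
```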
|
{
"body": "See http://build-us-00.elastic.co/job/es_core_master_window-2012/2308/testReport/junit/org.elasticsearch.index.query/MatchQueryBuilderTests/testToQuery/ for test failure details.\n\nThe change fixes the above test failure.\n\n@nik9000 can you take a quick look and check this makes sense wrt. #15860 \n",
"number": 16056,
"review_comments": [],
"title": "Fix for MatchQueryBuilderTests.testToQuery test failure"
}
|
{
"commits": [
{
"message": "Fixes test failure with numeric range query."
}
],
"files": [
{
"diff": "@@ -196,21 +196,20 @@ protected void doAssertLuceneQuery(MatchQueryBuilder queryBuilder, Query query,\n assertTrue(numericRangeQuery.includesMax());\n \n double value;\n+ double width = 0;\n try {\n value = Double.parseDouble(queryBuilder.value().toString());\n } catch (NumberFormatException e) {\n // Maybe its a date\n value = ISODateTimeFormat.dateTimeParser().parseMillis(queryBuilder.value().toString());\n+ width = queryBuilder.fuzziness().asTimeValue().getMillis();\n }\n- double width;\n- if (queryBuilder.fuzziness().equals(Fuzziness.AUTO)) {\n- width = 1;\n- } else {\n- try {\n+\n+ if (width == 0) {\n+ if (queryBuilder.fuzziness().equals(Fuzziness.AUTO)) {\n+ width = 1;\n+ } else {\n width = queryBuilder.fuzziness().asDouble();\n- } catch (NumberFormatException e) {\n- // Maybe a time value?\n- width = queryBuilder.fuzziness().asTimeValue().getMillis();\n }\n }\n assertEquals(value - width, numericRangeQuery.getMin().doubleValue(), width * .1);",
"filename": "core/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java",
"status": "modified"
}
]
}
|
{
"body": "On ES 2.1.1. Ran into this when trying to verify if a setting has persisted. In this case, I am purposely setting the codec to an invalid codec name `best_compression1` instead of `best_compression`:\n\n```\nPOST /largeevent/_close\nPUT /largeevent/_settings\n{\n \"index.codec\":\"best_compression1\"\n}\nPOST /largeevent/_open\nGET /largeevent/_settings\n```\n\nAs soon as, I reopen the index after making the setting change, the shard gets into a failure loop trying to initialize the shard and will not start up.\n\n```\n[2016-01-15 18:12:18,597][WARN ][cluster.action.shard ] [node-1] [largeevent][1] received shard failed for [largeevent][1], node[O373snrkTTGclRFL0t7H5g], [P], v[1553], s[INITIALIZING], a[id=w3VoZACfQKq_QcsDGLaZ3Q], unassigned_info[[reason=ALLOCATION_FAILED], at[2016-01-16T02:12:18.563Z], details[failed recovery, failure IndexShardRecoveryException[failed recovery]; nested: IllegalArgumentException[failed to find codec [best_compression1]]; ]], indexUUID [JEPoX4QkR_Om-W69VKV6iQ], message [failed recovery], failure [IndexShardRecoveryException[failed recovery]; nested: IllegalArgumentException[failed to find codec [best_compression1]]; ]\n[largeevent][[largeevent][1]] IndexShardRecoveryException[failed recovery]; nested: IllegalArgumentException[failed to find codec [best_compression1]];\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:179)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.IllegalArgumentException: failed to find codec [best_compression1]\n at org.elasticsearch.index.codec.CodecService.codec(CodecService.java:94)\n at org.elasticsearch.index.engine.EngineConfig.getCodec(EngineConfig.java:255)\n at org.elasticsearch.index.engine.InternalEngine.createWriter(InternalEngine.java:1057)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:150)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n ... 3 more\n```\n\nWill be nice if we can handle this better.\n",
"comments": [
{
"body": "With the changes coming in #16054 you will be able to close the index, update the settings, and reopen the index. The continuous failures can only be prevented once we remove Guice. \n",
"created_at": "2016-01-18T13:01:17Z"
}
],
"number": 16032,
"title": "Setting invalid codec causes shard to keep failing and stuck in initializing state"
}
|
{
"body": "This change moves all `index.*` settings over to the new infrastructure. This means in short that:\n- every setting that has an index scope must be registered up-front\n- index settings are validated on index creation, template creation, index settings update\n- node level settings starting with `index.*` are validated on node startup\n- settings that are private to ES like `index.version.created` can only be set by tests when they install a specific test plugin.\n- all index settings can be reset by passing `null` as their value on update\n- all index settings defaults can be listed via the settings APIs\n\nCloses #12854\nCloses #6732\nCloses #16032\nCloses #12790\n\nThanks @brwe who helped me a lot by going through the mechanical part of converting settings. Much appreciated!\n",
"number": 16054,
"review_comments": [
{
"body": "Wrong javadoc?\n",
"created_at": "2016-01-18T14:31:50Z"
},
{
"body": "s/IndexScopeSettings/IndexScopedSettings/ ?\n",
"created_at": "2016-01-18T14:32:12Z"
},
{
"body": "s/handels/handles/\n\nHandel was a composer.\n",
"created_at": "2016-01-18T14:34:54Z"
},
{
"body": "I think this'd be more clear: `enabled ? minReplicas + \"-\" + maxReplicas : \"false\"`\n",
"created_at": "2016-01-18T14:38:40Z"
},
{
"body": "Can/should this be private?\n",
"created_at": "2016-01-18T14:39:26Z"
},
{
"body": "++ on doing this\n",
"created_at": "2016-01-18T14:39:35Z"
},
{
"body": "Not yet! Those are still just strings but I suspect this note is to make sure that they grow their validation at the setting level.\n",
"created_at": "2016-01-18T14:43:31Z"
},
{
"body": "Should these be settings references? I'm not sure why they aren't but I'm sure there was a good reason but I don't know if its still true.\n",
"created_at": "2016-01-18T14:45:29Z"
},
{
"body": "I know you are just keeping this logic from before but it'd be cool to have some comment explaining it.\n",
"created_at": "2016-01-18T14:51:24Z"
},
{
"body": "Having both the SETTING_$NAME and INDEX_$NAME_SETTING is a bit confusing.\n",
"created_at": "2016-01-18T15:05:59Z"
},
{
"body": "The name of this one seems wrong.\n",
"created_at": "2016-01-18T15:06:07Z"
},
{
"body": "If looks like if you didn't specify the value it used to leave it unchanged on all the indexes but now it'll remove all the blocks I think.\n",
"created_at": "2016-01-18T15:06:53Z"
},
{
"body": "Very nice\n",
"created_at": "2016-01-18T15:09:30Z"
},
{
"body": "Its not really a builder, right?\n",
"created_at": "2016-01-18T15:11:47Z"
},
{
"body": "Ooops - wasn't reading right. Ignore.\n",
"created_at": "2016-01-18T15:12:05Z"
},
{
"body": "This seems like a pretty heavy way to validate a setting. Like, its building a bunch of maps and things.\n",
"created_at": "2016-01-18T15:13:55Z"
},
{
"body": "maybe have `dynamicOnly` instead? It feels more precise.\n",
"created_at": "2016-01-18T15:25:20Z"
},
{
"body": "It'd be nice if this got a class the same way AutoExpandReplicas did. Not required at all, but pleasant.\n",
"created_at": "2016-01-18T16:01:51Z"
},
{
"body": "Side note: this one's come up lately as some people have been using it as a big hammer to fix caching issues. They know its a bad idea but its the only tool they have.\n",
"created_at": "2016-01-18T16:03:22Z"
},
{
"body": "I removed all of this - it's a pre 2.0 BWC layer we don't need anymore in master\n",
"created_at": "2016-01-19T08:21:06Z"
}
],
"title": "Cut over all index scope settings to the new setting infrastrucuture"
}
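
The PR above moves index settings onto typed `Setting` objects that are validated when an index is created, a template is stored, or settings are updated, which is what keeps a bad value such as the invalid `index.codec` in #16032 from only surfacing during shard recovery. The sketch below is a drastically simplified model of that idea, not the real `org.elasticsearch.common.settings.Setting` class.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Simplified model of a typed, eagerly validated setting: the parser runs when the value
// is read back, so a bad value can be rejected at update time instead of later.
public class TypedSettingSketch {
    static final class Setting<T> {
        final String key;
        final T defaultValue;
        final Function<String, T> parser;

        Setting(String key, T defaultValue, Function<String, T> parser) {
            this.key = key;
            this.defaultValue = defaultValue;
            this.parser = parser;
        }

        T get(Map<String, String> settings) {
            String raw = settings.get(key);
            return raw == null ? defaultValue : parser.apply(raw); // parsing doubles as validation
        }
    }

    static final List<String> KNOWN_CODECS = Arrays.asList("default", "best_compression");

    static final Setting<String> INDEX_CODEC = new Setting<>("index.codec", "default", value -> {
        if (!KNOWN_CODECS.contains(value)) {
            throw new IllegalArgumentException("failed to find codec [" + value + "]");
        }
        return value;
    });

    public static void main(String[] args) {
        System.out.println(INDEX_CODEC.get(Collections.singletonMap("index.codec", "best_compression")));
        INDEX_CODEC.get(Collections.singletonMap("index.codec", "best_compression1")); // rejected up front
    }
}
```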
|
{
"commits": [
{
"message": "Convert index level setting to the new setting infrastrucutre\n\nthis is an initial commit of cutting over simple string key based settings\nto a more contained scoped settings infrastructure."
},
{
"message": "convert index.translog.durability"
},
{
"message": "convert index.warmer.enabled"
},
{
"message": "convert refresh interval"
},
{
"message": "convert index.max_result_window"
},
{
"message": "converted index.routing.allocation.total_shards_per_node"
},
{
"message": "convert gc_deletes"
},
{
"message": "cut over index.unassigned.node_left.delayed_timeout"
},
{
"message": "register missing setting"
},
{
"message": "cut over index.routing.rebalance.enable and index.routing.allocation.enable"
},
{
"message": "convert index.ttl.disable_purge"
},
{
"message": "cut over indexing slow log"
},
{
"message": "add unittest for indexing slow log settings"
},
{
"message": "convert translog.flush_threshold_size"
},
{
"message": "convert all slow logs"
},
{
"message": "convert compound_format"
},
{
"message": "remove unused noCFSRatio"
},
{
"message": "convert expunge_deletes_allowed"
},
{
"message": "convert index allocation filtering"
},
{
"message": "we assume from now on that settings are reset if we pass empty settings"
},
{
"message": "convert merge.policy.floor_segment"
},
{
"message": "convert max_merge_at_once"
},
{
"message": "convert max_merge_at_once_explicit"
},
{
"message": "convert all setting in IndexMetaData"
},
{
"message": "use a valid default"
},
{
"message": "convert max_merged_segment"
},
{
"message": "convert segments_per_tier"
},
{
"message": "convert reclaim_deletes_weight and remove superfluous methods"
},
{
"message": "first cut at integrating new settings infra"
},
{
"message": "first cut at integrating new settings infra"
}
],
"files": [
{
"diff": "@@ -22,14 +22,9 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.metadata.MetaData;\n-import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n \n-import java.util.HashSet;\n-import java.util.Map;\n-import java.util.Set;\n-\n import static org.elasticsearch.cluster.ClusterState.builder;\n \n /**\n@@ -57,11 +52,11 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin\n boolean changed = false;\n Settings.Builder transientSettings = Settings.settingsBuilder();\n transientSettings.put(currentState.metaData().transientSettings());\n- changed |= apply(transientToApply, transientSettings, transientUpdates, \"transient\");\n+ changed |= clusterSettings.updateDynamicSettings(transientToApply, transientSettings, transientUpdates, \"transient\");\n \n Settings.Builder persistentSettings = Settings.settingsBuilder();\n persistentSettings.put(currentState.metaData().persistentSettings());\n- changed |= apply(persistentToApply, persistentSettings, persistentUpdates, \"persistent\");\n+ changed |= clusterSettings.updateDynamicSettings(persistentToApply, persistentSettings, persistentUpdates, \"persistent\");\n \n if (!changed) {\n return currentState;\n@@ -86,42 +81,5 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin\n return build;\n }\n \n- private boolean apply(Settings toApply, Settings.Builder target, Settings.Builder updates, String type) {\n- boolean changed = false;\n- final Set<String> toRemove = new HashSet<>();\n- Settings.Builder settingsBuilder = Settings.settingsBuilder();\n- for (Map.Entry<String, String> entry : toApply.getAsMap().entrySet()) {\n- if (entry.getValue() == null) {\n- toRemove.add(entry.getKey());\n- } else if (clusterSettings.isLoggerSetting(entry.getKey()) || clusterSettings.hasDynamicSetting(entry.getKey())) {\n- settingsBuilder.put(entry.getKey(), entry.getValue());\n- updates.put(entry.getKey(), entry.getValue());\n- changed = true;\n- } else {\n- throw new IllegalArgumentException(type + \" setting [\" + entry.getKey() + \"], not dynamically updateable\");\n- }\n-\n- }\n- changed |= applyDeletes(toRemove, target);\n- target.put(settingsBuilder.build());\n- return changed;\n- }\n \n- private final boolean applyDeletes(Set<String> deletes, Settings.Builder builder) {\n- boolean changed = false;\n- for (String entry : deletes) {\n- Set<String> keysToRemove = new HashSet<>();\n- Set<String> keySet = builder.internalMap().keySet();\n- for (String key : keySet) {\n- if (Regex.simpleMatch(entry, key)) {\n- keysToRemove.add(key);\n- }\n- }\n- for (String key : keysToRemove) {\n- builder.remove(key);\n- changed = true;\n- }\n- }\n- return changed;\n- }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java",
"status": "modified"
},
{
"diff": "@@ -62,7 +62,7 @@ protected ClusterBlockException checkBlock(UpdateSettingsRequest request, Cluste\n if (globalBlock != null) {\n return globalBlock;\n }\n- if (request.settings().getAsMap().size() == 1 && (request.settings().get(IndexMetaData.SETTING_BLOCKS_METADATA) != null || request.settings().get(IndexMetaData.SETTING_READ_ONLY) != null )) {\n+ if (request.settings().getAsMap().size() == 1 && IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.exists(request.settings()) || IndexMetaData.INDEX_READ_ONLY_SETTING.exists(request.settings())) {\n return null;\n }\n return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indexNameExpressionResolver.concreteIndices(state, request));",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java",
"status": "modified"
},
{
"diff": "@@ -25,9 +25,11 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.IndexScopedSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n@@ -38,13 +40,15 @@\n public class TransportPutIndexTemplateAction extends TransportMasterNodeAction<PutIndexTemplateRequest, PutIndexTemplateResponse> {\n \n private final MetaDataIndexTemplateService indexTemplateService;\n+ private final IndexScopedSettings indexScopedSettings;\n \n @Inject\n public TransportPutIndexTemplateAction(Settings settings, TransportService transportService, ClusterService clusterService,\n ThreadPool threadPool, MetaDataIndexTemplateService indexTemplateService,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n+ ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, IndexScopedSettings indexScopedSettings) {\n super(settings, PutIndexTemplateAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, PutIndexTemplateRequest::new);\n this.indexTemplateService = indexTemplateService;\n+ this.indexScopedSettings = indexScopedSettings;\n }\n \n @Override\n@@ -69,11 +73,13 @@ protected void masterOperation(final PutIndexTemplateRequest request, final Clus\n if (cause.length() == 0) {\n cause = \"api\";\n }\n-\n+ final Settings.Builder templateSettingsBuilder = Settings.settingsBuilder();\n+ templateSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n+ indexScopedSettings.validate(templateSettingsBuilder);\n indexTemplateService.putTemplate(new MetaDataIndexTemplateService.PutRequest(cause, request.name())\n .template(request.template())\n .order(request.order())\n- .settings(request.settings())\n+ .settings(templateSettingsBuilder.build())\n .mappings(request.mappings())\n .aliases(request.aliases())\n .customs(request.customs())",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,7 @@ public final class AutoCreateIndex {\n @Inject\n public AutoCreateIndex(Settings settings, IndexNameExpressionResolver resolver) {\n this.resolver = resolver;\n- dynamicMappingDisabled = !settings.getAsBoolean(MapperService.INDEX_MAPPER_DYNAMIC_SETTING, MapperService.INDEX_MAPPER_DYNAMIC_DEFAULT);\n+ dynamicMappingDisabled = !MapperService.INDEX_MAPPER_DYNAMIC_SETTING.get(settings);\n String value = settings.get(\"action.auto_create_index\");\n if (value == null || Booleans.isExplicitTrue(value)) {\n needToCheck = true;",
"filename": "core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java",
"status": "modified"
},
{
"diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.cluster.action.index.NodeIndexDeletedAction;\n import org.elasticsearch.cluster.action.index.NodeMappingRefreshAction;\n import org.elasticsearch.cluster.action.shard.ShardStateAction;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.IndexTemplateFilter;\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n@@ -36,7 +35,6 @@\n import org.elasticsearch.cluster.node.DiscoveryNodeService;\n import org.elasticsearch.cluster.routing.OperationRouting;\n import org.elasticsearch.cluster.routing.RoutingService;\n-import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;\n import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;\n@@ -56,27 +54,12 @@\n import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.cluster.service.InternalClusterService;\n-import org.elasticsearch.cluster.settings.DynamicSettings;\n-import org.elasticsearch.cluster.settings.Validator;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.ExtensionPoint;\n import org.elasticsearch.gateway.GatewayAllocator;\n-import org.elasticsearch.gateway.PrimaryShardAllocator;\n-import org.elasticsearch.index.IndexSettings;\n-import org.elasticsearch.index.IndexingSlowLog;\n-import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.search.stats.SearchSlowLog;\n-import org.elasticsearch.index.settings.IndexDynamicSettings;\n-import org.elasticsearch.index.MergePolicyConfig;\n-import org.elasticsearch.index.MergeSchedulerConfig;\n-import org.elasticsearch.index.store.IndexStore;\n-import org.elasticsearch.indices.IndicesWarmer;\n-import org.elasticsearch.indices.cache.request.IndicesRequestCache;\n-import org.elasticsearch.indices.ttl.IndicesTTLService;\n-import org.elasticsearch.search.internal.DefaultSearchContext;\n \n import java.util.Arrays;\n import java.util.Collections;\n@@ -107,7 +90,6 @@ public class ClusterModule extends AbstractModule {\n SnapshotInProgressAllocationDecider.class));\n \n private final Settings settings;\n- private final DynamicSettings.Builder indexDynamicSettings = new DynamicSettings.Builder();\n private final ExtensionPoint.SelectedType<ShardsAllocator> shardsAllocators = new ExtensionPoint.SelectedType<>(\"shards_allocator\", ShardsAllocator.class);\n private final ExtensionPoint.ClassSet<AllocationDecider> allocationDeciders = new ExtensionPoint.ClassSet<>(\"allocation_decider\", AllocationDecider.class, AllocationDeciders.class);\n private final ExtensionPoint.ClassSet<IndexTemplateFilter> indexTemplateFilters = new ExtensionPoint.ClassSet<>(\"index_template_filter\", IndexTemplateFilter.class);\n@@ -117,79 +99,13 @@ public class ClusterModule extends AbstractModule {\n \n public ClusterModule(Settings settings) {\n this.settings = settings;\n-\n- registerBuiltinIndexSettings();\n-\n for (Class<? 
extends AllocationDecider> decider : ClusterModule.DEFAULT_ALLOCATION_DECIDERS) {\n registerAllocationDecider(decider);\n }\n registerShardsAllocator(ClusterModule.BALANCED_ALLOCATOR, BalancedShardsAllocator.class);\n registerShardsAllocator(ClusterModule.EVEN_SHARD_COUNT_ALLOCATOR, BalancedShardsAllocator.class);\n }\n \n- private void registerBuiltinIndexSettings() {\n- registerIndexDynamicSetting(IndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(IndexStore.INDEX_STORE_THROTTLE_TYPE, Validator.EMPTY);\n- registerIndexDynamicSetting(MergeSchedulerConfig.MAX_THREAD_COUNT, Validator.NON_NEGATIVE_INTEGER);\n- registerIndexDynamicSetting(MergeSchedulerConfig.MAX_MERGE_COUNT, Validator.EMPTY);\n- registerIndexDynamicSetting(MergeSchedulerConfig.AUTO_THROTTLE, Validator.EMPTY);\n- registerIndexDynamicSetting(FilterAllocationDecider.INDEX_ROUTING_REQUIRE_GROUP + \"*\", Validator.EMPTY);\n- registerIndexDynamicSetting(FilterAllocationDecider.INDEX_ROUTING_INCLUDE_GROUP + \"*\", Validator.EMPTY);\n- registerIndexDynamicSetting(FilterAllocationDecider.INDEX_ROUTING_EXCLUDE_GROUP + \"*\", Validator.EMPTY);\n- registerIndexDynamicSetting(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, Validator.EMPTY);\n- registerIndexDynamicSetting(EnableAllocationDecider.INDEX_ROUTING_REBALANCE_ENABLE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, Validator.NON_NEGATIVE_INTEGER);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_READ_ONLY, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_BLOCKS_READ, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_BLOCKS_WRITE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_BLOCKS_METADATA, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_PRIORITY, Validator.NON_NEGATIVE_INTEGER);\n- registerIndexDynamicSetting(IndicesTTLService.INDEX_TTL_DISABLE_PURGE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexSettings.INDEX_REFRESH_INTERVAL, Validator.TIME);\n- registerIndexDynamicSetting(PrimaryShardAllocator.INDEX_RECOVERY_INITIAL_SHARDS, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexSettings.INDEX_GC_DELETES_SETTING, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_TRACE, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_REFORMAT, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_LEVEL, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG, Validator.EMPTY);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_WARN, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_INFO, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_DEBUG, Validator.TIME);\n- 
registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_TRACE, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_WARN, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_INFO, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_DEBUG, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_TRACE, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_REFORMAT, Validator.EMPTY);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_LEVEL, Validator.EMPTY);\n- registerIndexDynamicSetting(ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE, Validator.INTEGER);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED, Validator.DOUBLE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_FLOOR_SEGMENT, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE, Validator.INTEGER_GTE_2);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT, Validator.INTEGER_GTE_2);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_SEGMENTS_PER_TIER, Validator.DOUBLE_GTE_2);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT, Validator.NON_NEGATIVE_DOUBLE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_COMPOUND_FORMAT, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexSettings.INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(IndexSettings.INDEX_TRANSLOG_DURABILITY, Validator.EMPTY);\n- registerIndexDynamicSetting(IndicesWarmer.INDEX_WARMER_ENABLED, Validator.EMPTY);\n- registerIndexDynamicSetting(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED, Validator.BOOLEAN);\n- registerIndexDynamicSetting(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, Validator.TIME);\n- registerIndexDynamicSetting(DefaultSearchContext.MAX_RESULT_WINDOW, Validator.POSITIVE_INTEGER);\n- registerIndexDynamicSetting(MapperService.INDEX_MAPPING_NESTED_FIELDS_LIMIT_SETTING, Validator.NON_NEGATIVE_INTEGER);\n- }\n-\n- public void registerIndexDynamicSetting(String setting, Validator validator) {\n- indexDynamicSettings.addSetting(setting, validator);\n- }\n-\n-\n public void registerAllocationDecider(Class<? extends AllocationDecider> allocationDecider) {\n allocationDeciders.registerExtension(allocationDecider);\n }\n@@ -204,8 +120,6 @@ public void registerIndexTemplateFilter(Class<? extends IndexTemplateFilter> ind\n \n @Override\n protected void configure() {\n- bind(DynamicSettings.class).annotatedWith(IndexDynamicSettings.class).toInstance(indexDynamicSettings.build());\n-\n // bind ShardsAllocator\n String shardsAllocatorType = shardsAllocators.bindType(binder(), settings, ClusterModule.SHARDS_ALLOCATOR_TYPE_KEY, ClusterModule.BALANCED_ALLOCATOR);\n if (shardsAllocatorType.equals(ClusterModule.EVEN_SHARD_COUNT_ALLOCATOR)) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterModule.java",
"status": "modified"
},
{
"diff": "@@ -306,16 +306,16 @@ public Builder addBlocks(IndexMetaData indexMetaData) {\n if (indexMetaData.getState() == IndexMetaData.State.CLOSE) {\n addIndexBlock(indexMetaData.getIndex(), MetaDataIndexStateService.INDEX_CLOSED_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_READ_ONLY, false)) {\n+ if (IndexMetaData.INDEX_READ_ONLY_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_READ_ONLY_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_BLOCKS_READ, false)) {\n+ if (IndexMetaData.INDEX_BLOCKS_READ_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_READ_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_BLOCKS_WRITE, false)) {\n+ if (IndexMetaData.INDEX_BLOCKS_WRITE_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_WRITE_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_BLOCKS_METADATA, false)) {\n+ if (IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_METADATA_BLOCK);\n }\n return this;",
"filename": "core/src/main/java/org/elasticsearch/cluster/block/ClusterBlocks.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,92 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.elasticsearch.common.Booleans;\n+import org.elasticsearch.common.settings.Setting;\n+\n+/**\n+ * This class acts as a functional wrapper around the <tt>index.auto_expand_replicas</tt> setting.\n+ * This setting or rather it's value is expanded into a min and max value which requires special handling\n+ * based on the number of datanodes in the cluster. This class handles all the parsing and streamlines the access to these values.\n+ */\n+final class AutoExpandReplicas {\n+ // the value we recognize in the \"max\" position to mean all the nodes\n+ private static final String ALL_NODES_VALUE = \"all\";\n+ public static final Setting<AutoExpandReplicas> SETTING = new Setting<>(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, \"false\", (value) -> {\n+ final int min;\n+ final int max;\n+ if (Booleans.parseBoolean(value, true) == false) {\n+ return new AutoExpandReplicas(0, 0, false);\n+ }\n+ final int dash = value.indexOf('-');\n+ if (-1 == dash) {\n+ throw new IllegalArgumentException(\"failed to parse [\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] from value: [\" + value + \"] at index \" + dash);\n+ }\n+ final String sMin = value.substring(0, dash);\n+ try {\n+ min = Integer.parseInt(sMin);\n+ } catch (NumberFormatException e) {\n+ throw new IllegalArgumentException(\"failed to parse [\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] from value: [\" + value + \"] at index \" + dash, e);\n+ }\n+ String sMax = value.substring(dash + 1);\n+ if (sMax.equals(ALL_NODES_VALUE)) {\n+ max = Integer.MAX_VALUE;\n+ } else {\n+ try {\n+ max = Integer.parseInt(sMax);\n+ } catch (NumberFormatException e) {\n+ throw new IllegalArgumentException(\"failed to parse [\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] from value: [\" + value + \"] at index \" + dash, e);\n+ }\n+ }\n+ return new AutoExpandReplicas(min, max, true);\n+ }, true, Setting.Scope.INDEX);\n+\n+ private final int minReplicas;\n+ private final int maxReplicas;\n+ private final boolean enabled;\n+\n+ private AutoExpandReplicas(int minReplicas, int maxReplicas, boolean enabled) {\n+ if (minReplicas > maxReplicas) {\n+ throw new IllegalArgumentException(\"[\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] minReplicas must be =< maxReplicas but wasn't \" + minReplicas + \" > \" + maxReplicas);\n+ }\n+ this.minReplicas = minReplicas;\n+ this.maxReplicas = maxReplicas;\n+ this.enabled = enabled;\n+ }\n+\n+ int getMinReplicas() {\n+ return minReplicas;\n+ }\n+\n+ int getMaxReplicas(int numDataNodes) {\n+ return Math.min(maxReplicas, numDataNodes-1);\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return enabled ? 
minReplicas + \"-\" + maxReplicas : \"false\";\n+ }\n+\n+ boolean isEnabled() {\n+ return enabled;\n+ }\n+}\n+\n+",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/AutoExpandReplicas.java",
"status": "added"
},
{
"diff": "@@ -29,14 +29,17 @@\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.node.DiscoveryNodeFilters;\n+import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.collect.ImmutableOpenIntMap;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.collect.MapBuilder;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.loader.SettingsLoader;\n import org.elasticsearch.common.xcontent.FromXContentBuilder;\n@@ -58,6 +61,7 @@\n import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n+import java.util.function.Function;\n \n import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.AND;\n import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.OR;\n@@ -70,10 +74,6 @@\n */\n public class IndexMetaData implements Diffable<IndexMetaData>, FromXContentBuilder<IndexMetaData>, ToXContent {\n \n- public static final IndexMetaData PROTO = IndexMetaData.builder(\"\")\n- .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n- .numberOfShards(1).numberOfReplicas(0).build();\n-\n public interface Custom extends Diffable<Custom>, ToXContent {\n \n String type();\n@@ -152,27 +152,53 @@ public static State fromString(String state) {\n }\n public static final String INDEX_SETTING_PREFIX = \"index.\";\n public static final String SETTING_NUMBER_OF_SHARDS = \"index.number_of_shards\";\n+ public static final Setting<Integer> INDEX_NUMBER_OF_SHARDS_SETTING = Setting.intSetting(SETTING_NUMBER_OF_SHARDS, 5, 1, false, Setting.Scope.INDEX);\n public static final String SETTING_NUMBER_OF_REPLICAS = \"index.number_of_replicas\";\n+ public static final Setting<Integer> INDEX_NUMBER_OF_REPLICAS_SETTING = Setting.intSetting(SETTING_NUMBER_OF_REPLICAS, 1, 0, true, Setting.Scope.INDEX);\n public static final String SETTING_SHADOW_REPLICAS = \"index.shadow_replicas\";\n+ public static final Setting<Boolean> INDEX_SHADOW_REPLICAS_SETTING = Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, false, Setting.Scope.INDEX);\n+\n public static final String SETTING_SHARED_FILESYSTEM = \"index.shared_filesystem\";\n+ public static final Setting<Boolean> INDEX_SHARED_FILESYSTEM_SETTING = Setting.boolSetting(SETTING_SHARED_FILESYSTEM, false, false, Setting.Scope.INDEX);\n+\n public static final String SETTING_AUTO_EXPAND_REPLICAS = \"index.auto_expand_replicas\";\n+ public static final Setting<AutoExpandReplicas> INDEX_AUTO_EXPAND_REPLICAS_SETTING = AutoExpandReplicas.SETTING;\n public static final String SETTING_READ_ONLY = \"index.blocks.read_only\";\n+ public static final Setting<Boolean> INDEX_READ_ONLY_SETTING = Setting.boolSetting(SETTING_READ_ONLY, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_BLOCKS_READ = \"index.blocks.read\";\n+ public static final Setting<Boolean> INDEX_BLOCKS_READ_SETTING = Setting.boolSetting(SETTING_BLOCKS_READ, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_BLOCKS_WRITE = 
\"index.blocks.write\";\n+ public static final Setting<Boolean> INDEX_BLOCKS_WRITE_SETTING = Setting.boolSetting(SETTING_BLOCKS_WRITE, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_BLOCKS_METADATA = \"index.blocks.metadata\";\n+ public static final Setting<Boolean> INDEX_BLOCKS_METADATA_SETTING = Setting.boolSetting(SETTING_BLOCKS_METADATA, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_VERSION_CREATED = \"index.version.created\";\n public static final String SETTING_VERSION_CREATED_STRING = \"index.version.created_string\";\n public static final String SETTING_VERSION_UPGRADED = \"index.version.upgraded\";\n public static final String SETTING_VERSION_UPGRADED_STRING = \"index.version.upgraded_string\";\n public static final String SETTING_VERSION_MINIMUM_COMPATIBLE = \"index.version.minimum_compatible\";\n public static final String SETTING_CREATION_DATE = \"index.creation_date\";\n public static final String SETTING_PRIORITY = \"index.priority\";\n+ public static final Setting<Integer> INDEX_PRIORITY_SETTING = Setting.intSetting(\"index.priority\", 1, 0, true, Setting.Scope.INDEX);\n public static final String SETTING_CREATION_DATE_STRING = \"index.creation_date_string\";\n public static final String SETTING_INDEX_UUID = \"index.uuid\";\n public static final String SETTING_DATA_PATH = \"index.data_path\";\n+ public static final Setting<String> INDEX_DATA_PATH_SETTING = new Setting<>(SETTING_DATA_PATH, \"\", Function.identity(), false, Setting.Scope.INDEX);\n public static final String SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE = \"index.shared_filesystem.recover_on_any_node\";\n+ public static final Setting<Boolean> INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING = Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, true, Setting.Scope.INDEX);\n public static final String INDEX_UUID_NA_VALUE = \"_na_\";\n \n+ public static final Setting<Settings> INDEX_ROUTING_REQUIRE_GROUP_SETTING = Setting.groupSetting(\"index.routing.allocation.require.\", true, Setting.Scope.INDEX);\n+ public static final Setting<Settings> INDEX_ROUTING_INCLUDE_GROUP_SETTING = Setting.groupSetting(\"index.routing.allocation.include.\", true, Setting.Scope.INDEX);\n+ public static final Setting<Settings> INDEX_ROUTING_EXCLUDE_GROUP_SETTING = Setting.groupSetting(\"index.routing.allocation.exclude.\", true, Setting.Scope.INDEX);\n+\n+ public static final IndexMetaData PROTO = IndexMetaData.builder(\"\")\n+ .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1).numberOfReplicas(0).build();\n+\n public static final String KEY_ACTIVE_ALLOCATIONS = \"active_allocations\";\n \n private final int numberOfShards;\n@@ -627,10 +653,6 @@ public Builder creationDate(long creationDate) {\n return this;\n }\n \n- public long creationDate() {\n- return settings.getAsLong(SETTING_CREATION_DATE, -1l);\n- }\n-\n public Builder settings(Settings.Builder settings) {\n this.settings = settings.build();\n return this;\n@@ -645,11 +667,6 @@ public MappingMetaData mapping(String type) {\n return mappings.get(type);\n }\n \n- public Builder removeMapping(String mappingType) {\n- mappings.remove(mappingType);\n- return this;\n- }\n-\n public Builder putMapping(String type, String source) throws IOException {\n try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) {\n putMapping(new MappingMetaData(type, parser.mapOrdered()));\n@@ -692,24 +709,11 @@ public Builder 
putCustom(String type, Custom customIndexMetaData) {\n return this;\n }\n \n- public Builder removeCustom(String type) {\n- this.customs.remove(type);\n- return this;\n- }\n-\n- public Custom getCustom(String type) {\n- return this.customs.get(type);\n- }\n-\n public Builder putActiveAllocationIds(int shardId, Set<String> allocationIds) {\n activeAllocationIds.put(shardId, new HashSet(allocationIds));\n return this;\n }\n \n- public Set<String> getActiveAllocationIds(int shardId) {\n- return activeAllocationIds.get(shardId);\n- }\n-\n public long version() {\n return this.version;\n }\n@@ -758,22 +762,21 @@ public IndexMetaData build() {\n filledActiveAllocationIds.put(i, Collections.emptySet());\n }\n }\n-\n- Map<String, String> requireMap = settings.getByPrefix(\"index.routing.allocation.require.\").getAsMap();\n+ final Map<String, String> requireMap = INDEX_ROUTING_REQUIRE_GROUP_SETTING.get(settings).getAsMap();\n final DiscoveryNodeFilters requireFilters;\n if (requireMap.isEmpty()) {\n requireFilters = null;\n } else {\n requireFilters = DiscoveryNodeFilters.buildFromKeyValue(AND, requireMap);\n }\n- Map<String, String> includeMap = settings.getByPrefix(\"index.routing.allocation.include.\").getAsMap();\n+ Map<String, String> includeMap = INDEX_ROUTING_INCLUDE_GROUP_SETTING.get(settings).getAsMap();\n final DiscoveryNodeFilters includeFilters;\n if (includeMap.isEmpty()) {\n includeFilters = null;\n } else {\n includeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, includeMap);\n }\n- Map<String, String> excludeMap = settings.getByPrefix(\"index.routing.allocation.exclude.\").getAsMap();\n+ Map<String, String> excludeMap = INDEX_ROUTING_EXCLUDE_GROUP_SETTING.get(settings).getAsMap();\n final DiscoveryNodeFilters excludeFilters;\n if (excludeMap.isEmpty()) {\n excludeFilters = null;",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java",
"status": "modified"
},
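The diff above replaces raw string keys such as `index.number_of_replicas` with typed `Setting<T>` constants that bundle a key, a default, bounds, a dynamic flag, and an index scope. To make the pattern concrete, here is a minimal standalone sketch of what such a typed setting buys over `settings.getAsInt(key, default)`; it is not the real `org.elasticsearch.common.settings.Setting` class, and the `TypedSetting` name and its methods are invented for illustration only.

```java
import java.util.Map;
import java.util.function.Function;

// Minimal stand-in for a typed setting: key + default + parser + validation live together.
final class TypedSetting<T> {
    private final String key;
    private final T defaultValue;
    private final Function<String, T> parser;

    TypedSetting(String key, T defaultValue, Function<String, T> parser) {
        this.key = key;
        this.defaultValue = defaultValue;
        this.parser = parser;
    }

    T get(Map<String, String> settings) {
        String raw = settings.get(key);
        return raw == null ? defaultValue : parser.apply(raw);
    }

    static TypedSetting<Integer> intSetting(String key, int defaultValue, int minValue) {
        return new TypedSetting<>(key, defaultValue, s -> {
            int value = Integer.parseInt(s);
            if (value < minValue) {
                throw new IllegalArgumentException("[" + key + "] must be >= " + minValue + " but was " + value);
            }
            return value;
        });
    }
}

public class TypedSettingDemo {
    public static void main(String[] args) {
        TypedSetting<Integer> replicas = TypedSetting.intSetting("index.number_of_replicas", 1, 0);
        System.out.println(replicas.get(Map.of()));                                // 1 (default)
        System.out.println(replicas.get(Map.of("index.number_of_replicas", "3"))); // 3
        // a value of "-1" would throw here: validation belongs to the setting, not to every caller
    }
}
```

The point of the change is exactly this colocation: callers no longer re-implement defaults and range checks at every read site.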
{
"diff": "@@ -47,6 +47,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.regex.Regex;\n+import org.elasticsearch.common.settings.IndexScopedSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n@@ -103,13 +104,14 @@ public class MetaDataCreateIndexService extends AbstractComponent {\n private final IndexTemplateFilter indexTemplateFilter;\n private final Environment env;\n private final NodeServicesProvider nodeServicesProvider;\n+ private final IndexScopedSettings indexScopedSettings;\n \n \n @Inject\n public MetaDataCreateIndexService(Settings settings, ClusterService clusterService,\n IndicesService indicesService, AllocationService allocationService,\n Version version, AliasValidator aliasValidator,\n- Set<IndexTemplateFilter> indexTemplateFilters, Environment env, NodeServicesProvider nodeServicesProvider) {\n+ Set<IndexTemplateFilter> indexTemplateFilters, Environment env, NodeServicesProvider nodeServicesProvider, IndexScopedSettings indexScopedSettings) {\n super(settings);\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n@@ -118,6 +120,7 @@ public MetaDataCreateIndexService(Settings settings, ClusterService clusterServi\n this.aliasValidator = aliasValidator;\n this.env = env;\n this.nodeServicesProvider = nodeServicesProvider;\n+ this.indexScopedSettings = indexScopedSettings;\n \n if (indexTemplateFilters.isEmpty()) {\n this.indexTemplateFilter = DEFAULT_INDEX_TEMPLATE_FILTER;\n@@ -174,6 +177,7 @@ public void validateIndexName(String index, ClusterState state) {\n public void createIndex(final CreateIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n Settings.Builder updatedSettingsBuilder = Settings.settingsBuilder();\n updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n+ indexScopedSettings.validate(updatedSettingsBuilder);\n request.settings(updatedSettingsBuilder.build());\n \n clusterService.submitStateUpdateTask(\"create-index [\" + request.index() + \"], cause [\" + request.cause() + \"]\",\n@@ -460,16 +464,17 @@ public void validateIndexSettings(String indexName, Settings settings) throws In\n }\n \n List<String> getIndexSettingsValidationErrors(Settings settings) {\n- String customPath = settings.get(IndexMetaData.SETTING_DATA_PATH, null);\n+ String customPath = IndexMetaData.INDEX_DATA_PATH_SETTING.get(settings);\n List<String> validationErrors = new ArrayList<>();\n- if (customPath != null && env.sharedDataFile() == null) {\n+ if (Strings.isEmpty(customPath) == false && env.sharedDataFile() == null) {\n validationErrors.add(\"path.shared_data must be set in order to use custom data paths\");\n- } else if (customPath != null) {\n+ } else if (Strings.isEmpty(customPath) == false) {\n Path resolvedPath = PathUtils.get(new Path[]{env.sharedDataFile()}, customPath);\n if (resolvedPath == null) {\n validationErrors.add(\"custom path [\" + customPath + \"] is not a sub-path of path.shared_data [\" + env.sharedDataFile() + \"]\");\n }\n }\n+ //norelease - this can be removed?\n Integer number_of_primaries = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);\n Integer number_of_replicas = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null);\n if (number_of_primaries != null && number_of_primaries <= 0) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
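This diff wires an `IndexScopedSettings` instance into the create-index path and calls `validate(...)` on the normalized request settings before the cluster-state task is submitted, so unknown index settings fail up front instead of being stored silently. A small standalone sketch of that "every key must be registered" idea follows; `SettingsValidator` and its methods are hypothetical, not the Elasticsearch API.

```java
import java.util.Map;
import java.util.Set;

// Sketch of upfront validation: reject any settings key that is not in the registry.
final class SettingsValidator {
    private final Set<String> registeredKeys;

    SettingsValidator(Set<String> registeredKeys) {
        this.registeredKeys = registeredKeys;
    }

    void validate(Map<String, String> requested) {
        for (String key : requested.keySet()) {
            if (!registeredKeys.contains(key)) {
                throw new IllegalArgumentException("unknown setting [" + key + "]");
            }
        }
    }
}

public class CreateIndexValidationDemo {
    public static void main(String[] args) {
        SettingsValidator validator = new SettingsValidator(
                Set.of("index.number_of_shards", "index.number_of_replicas"));
        validator.validate(Map.of("index.number_of_shards", "5"));      // accepted
        try {
            validator.validate(Map.of("index.numberof_shards", "5"));   // typo rejected at creation time
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```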
{
"diff": "@@ -21,7 +21,6 @@\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.apache.lucene.analysis.Analyzer;\n import org.elasticsearch.Version;\n-import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -70,7 +69,6 @@ public IndexMetaData upgradeIndexMetaData(IndexMetaData indexMetaData) {\n }\n checkSupportedVersion(indexMetaData);\n IndexMetaData newMetaData = indexMetaData;\n- newMetaData = addDefaultUnitsIfNeeded(newMetaData);\n checkMappingsCompatibility(newMetaData);\n newMetaData = markAsUpgraded(newMetaData);\n return newMetaData;\n@@ -113,111 +111,14 @@ private static boolean isSupportedVersion(IndexMetaData indexMetaData) {\n return false;\n }\n \n- /** All known byte-sized settings for an index. */\n- public static final Set<String> INDEX_BYTES_SIZE_SETTINGS = unmodifiableSet(newHashSet(\n- \"index.merge.policy.floor_segment\",\n- \"index.merge.policy.max_merged_segment\",\n- \"index.merge.policy.max_merge_size\",\n- \"index.merge.policy.min_merge_size\",\n- \"index.shard.recovery.file_chunk_size\",\n- \"index.shard.recovery.translog_size\",\n- \"index.store.throttle.max_bytes_per_sec\",\n- \"index.translog.flush_threshold_size\",\n- \"index.translog.fs.buffer_size\",\n- \"index.version_map_size\"));\n-\n- /** All known time settings for an index. */\n- public static final Set<String> INDEX_TIME_SETTINGS = unmodifiableSet(newHashSet(\n- \"index.gateway.wait_for_mapping_update_post_recovery\",\n- \"index.shard.wait_for_mapping_update_post_recovery\",\n- \"index.gc_deletes\",\n- \"index.indexing.slowlog.threshold.index.debug\",\n- \"index.indexing.slowlog.threshold.index.info\",\n- \"index.indexing.slowlog.threshold.index.trace\",\n- \"index.indexing.slowlog.threshold.index.warn\",\n- \"index.refresh_interval\",\n- \"index.search.slowlog.threshold.fetch.debug\",\n- \"index.search.slowlog.threshold.fetch.info\",\n- \"index.search.slowlog.threshold.fetch.trace\",\n- \"index.search.slowlog.threshold.fetch.warn\",\n- \"index.search.slowlog.threshold.query.debug\",\n- \"index.search.slowlog.threshold.query.info\",\n- \"index.search.slowlog.threshold.query.trace\",\n- \"index.search.slowlog.threshold.query.warn\",\n- \"index.shadow.wait_for_initial_commit\",\n- \"index.store.stats_refresh_interval\",\n- \"index.translog.flush_threshold_period\",\n- \"index.translog.interval\",\n- \"index.translog.sync_interval\",\n- \"index.shard.inactive_time\",\n- UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING));\n-\n- /**\n- * Elasticsearch 2.0 requires units on byte/memory and time settings; this method adds the default unit to any such settings that are\n- * missing units.\n- */\n- private IndexMetaData addDefaultUnitsIfNeeded(IndexMetaData indexMetaData) {\n- if (indexMetaData.getCreationVersion().before(Version.V_2_0_0_beta1)) {\n- // TODO: can we somehow only do this *once* for a pre-2.0 index? Maybe we could stuff a \"fake marker setting\" here? 
Seems hackish...\n- // Created lazily if we find any settings that are missing units:\n- Settings settings = indexMetaData.getSettings();\n- Settings.Builder newSettings = null;\n- for(String byteSizeSetting : INDEX_BYTES_SIZE_SETTINGS) {\n- String value = settings.get(byteSizeSetting);\n- if (value != null) {\n- try {\n- Long.parseLong(value);\n- } catch (NumberFormatException nfe) {\n- continue;\n- }\n- // It's a naked number that previously would be interpreted as default unit (bytes); now we add it:\n- logger.warn(\"byte-sized index setting [{}] with value [{}] is missing units; assuming default units (b) but in future versions this will be a hard error\", byteSizeSetting, value);\n- if (newSettings == null) {\n- newSettings = Settings.builder();\n- newSettings.put(settings);\n- }\n- newSettings.put(byteSizeSetting, value + \"b\");\n- }\n- }\n- for(String timeSetting : INDEX_TIME_SETTINGS) {\n- String value = settings.get(timeSetting);\n- if (value != null) {\n- try {\n- Long.parseLong(value);\n- } catch (NumberFormatException nfe) {\n- continue;\n- }\n- // It's a naked number that previously would be interpreted as default unit (ms); now we add it:\n- logger.warn(\"time index setting [{}] with value [{}] is missing units; assuming default units (ms) but in future versions this will be a hard error\", timeSetting, value);\n- if (newSettings == null) {\n- newSettings = Settings.builder();\n- newSettings.put(settings);\n- }\n- newSettings.put(timeSetting, value + \"ms\");\n- }\n- }\n- if (newSettings != null) {\n- // At least one setting was changed:\n- return IndexMetaData.builder(indexMetaData)\n- .version(indexMetaData.getVersion())\n- .settings(newSettings.build())\n- .build();\n- }\n- }\n-\n- // No changes:\n- return indexMetaData;\n- }\n-\n-\n /**\n * Checks the mappings for compatibility with the current version\n */\n private void checkMappingsCompatibility(IndexMetaData indexMetaData) {\n try {\n // We cannot instantiate real analysis server at this point because the node might not have\n // been started yet. However, we don't really need real analyzers at this stage - so we can fake it\n- IndexSettings indexSettings = new IndexSettings(indexMetaData, this.settings, Collections.emptyList());\n+ IndexSettings indexSettings = new IndexSettings(indexMetaData, this.settings);\n SimilarityService similarityService = new SimilarityService(indexSettings, Collections.emptyMap());\n \n try (AnalysisService analysisService = new FakeAnalysisService(indexSettings)) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java",
"status": "modified"
},
{
"diff": "@@ -30,19 +30,20 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ClusterStateListener;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n+import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n-import org.elasticsearch.cluster.settings.DynamicSettings;\n-import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.IndexScopedSettings;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.index.settings.IndexDynamicSettings;\n+import org.elasticsearch.index.IndexNotFoundException;\n \n import java.util.ArrayList;\n import java.util.HashMap;\n@@ -59,25 +60,21 @@\n */\n public class MetaDataUpdateSettingsService extends AbstractComponent implements ClusterStateListener {\n \n- // the value we recognize in the \"max\" position to mean all the nodes\n- private static final String ALL_NODES_VALUE = \"all\";\n-\n private final ClusterService clusterService;\n \n private final AllocationService allocationService;\n \n- private final DynamicSettings dynamicSettings;\n-\n private final IndexNameExpressionResolver indexNameExpressionResolver;\n+ private final IndexScopedSettings indexScopedSettings;\n \n @Inject\n- public MetaDataUpdateSettingsService(Settings settings, ClusterService clusterService, AllocationService allocationService, @IndexDynamicSettings DynamicSettings dynamicSettings, IndexNameExpressionResolver indexNameExpressionResolver) {\n+ public MetaDataUpdateSettingsService(Settings settings, ClusterService clusterService, AllocationService allocationService, IndexScopedSettings indexScopedSettings, IndexNameExpressionResolver indexNameExpressionResolver) {\n super(settings);\n this.clusterService = clusterService;\n this.indexNameExpressionResolver = indexNameExpressionResolver;\n this.clusterService.add(this);\n this.allocationService = allocationService;\n- this.dynamicSettings = dynamicSettings;\n+ this.indexScopedSettings = indexScopedSettings;\n }\n \n @Override\n@@ -90,69 +87,43 @@ public void clusterChanged(ClusterChangedEvent event) {\n final int dataNodeCount = event.state().nodes().dataNodes().size();\n \n Map<Integer, List<String>> nrReplicasChanged = new HashMap<>();\n-\n // we need to do this each time in case it was changed by update settings\n for (final IndexMetaData indexMetaData : event.state().metaData()) {\n- String autoExpandReplicas = indexMetaData.getSettings().get(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS);\n- if (autoExpandReplicas != null && Booleans.parseBoolean(autoExpandReplicas, true)) { // Booleans only work for false values, just as we want it here\n- try {\n- final int min;\n- final int max;\n-\n- final int dash = autoExpandReplicas.indexOf('-');\n- if (-1 == dash) {\n- logger.warn(\"failed to set [{}] for index [{}], it should be dash delimited [{}]\",\n- IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), autoExpandReplicas);\n- continue;\n- }\n- final String sMin 
= autoExpandReplicas.substring(0, dash);\n- try {\n- min = Integer.parseInt(sMin);\n- } catch (NumberFormatException e) {\n- logger.warn(\"failed to set [{}] for index [{}], minimum value is not a number [{}]\",\n- e, IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), sMin);\n- continue;\n- }\n- String sMax = autoExpandReplicas.substring(dash + 1);\n- if (sMax.equals(ALL_NODES_VALUE)) {\n- max = dataNodeCount - 1;\n- } else {\n- try {\n- max = Integer.parseInt(sMax);\n- } catch (NumberFormatException e) {\n- logger.warn(\"failed to set [{}] for index [{}], maximum value is neither [{}] nor a number [{}]\",\n- e, IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), ALL_NODES_VALUE, sMax);\n- continue;\n- }\n- }\n+ AutoExpandReplicas autoExpandReplicas = IndexMetaData.INDEX_AUTO_EXPAND_REPLICAS_SETTING.get(indexMetaData.getSettings());\n+ if (autoExpandReplicas.isEnabled()) {\n+ /*\n+ * we have to expand the number of replicas for this index to at least min and at most max nodes here\n+ * so we are bumping it up if we have to or reduce it depending on min/max and the number of datanodes.\n+ * If we change the number of replicas we just let the shard allocator do it's thing once we updated it\n+ * since it goes through the index metadata to figure out if something needs to be done anyway. Do do that\n+ * we issue a cluster settings update command below and kicks off a reroute.\n+ */\n+ final int min = autoExpandReplicas.getMinReplicas();\n+ final int max = autoExpandReplicas.getMaxReplicas(dataNodeCount);\n+ int numberOfReplicas = dataNodeCount - 1;\n+ if (numberOfReplicas < min) {\n+ numberOfReplicas = min;\n+ } else if (numberOfReplicas > max) {\n+ numberOfReplicas = max;\n+ }\n+ // same value, nothing to do there\n+ if (numberOfReplicas == indexMetaData.getNumberOfReplicas()) {\n+ continue;\n+ }\n \n- int numberOfReplicas = dataNodeCount - 1;\n- if (numberOfReplicas < min) {\n- numberOfReplicas = min;\n- } else if (numberOfReplicas > max) {\n- numberOfReplicas = max;\n- }\n+ if (numberOfReplicas >= min && numberOfReplicas <= max) {\n \n- // same value, nothing to do there\n- if (numberOfReplicas == indexMetaData.getNumberOfReplicas()) {\n- continue;\n+ if (!nrReplicasChanged.containsKey(numberOfReplicas)) {\n+ nrReplicasChanged.put(numberOfReplicas, new ArrayList<>());\n }\n \n- if (numberOfReplicas >= min && numberOfReplicas <= max) {\n-\n- if (!nrReplicasChanged.containsKey(numberOfReplicas)) {\n- nrReplicasChanged.put(numberOfReplicas, new ArrayList<String>());\n- }\n-\n- nrReplicasChanged.get(numberOfReplicas).add(indexMetaData.getIndex());\n- }\n- } catch (Exception e) {\n- logger.warn(\"[{}] failed to parse auto expand replicas\", e, indexMetaData.getIndex());\n+ nrReplicasChanged.get(numberOfReplicas).add(indexMetaData.getIndex());\n }\n }\n }\n \n if (nrReplicasChanged.size() > 0) {\n+ // update settings and kick of a reroute (implicit) for them to take effect\n for (final Integer fNumberOfReplicas : nrReplicasChanged.keySet()) {\n Settings settings = Settings.settingsBuilder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, fNumberOfReplicas).build();\n final List<String> indices = nrReplicasChanged.get(fNumberOfReplicas);\n@@ -182,42 +153,30 @@ public void onFailure(Throwable t) {\n }\n \n public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n- Settings.Builder updatedSettingsBuilder = Settings.settingsBuilder();\n- 
updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n+ final Settings normalizedSettings = Settings.settingsBuilder().put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build();\n+ Settings.Builder settingsForClosedIndices = Settings.builder();\n+ Settings.Builder settingsForOpenIndices = Settings.builder();\n+ Settings.Builder skipppedSettings = Settings.builder();\n+\n+ indexScopedSettings.validate(normalizedSettings);\n // never allow to change the number of shards\n- for (String key : updatedSettingsBuilder.internalMap().keySet()) {\n- if (key.equals(IndexMetaData.SETTING_NUMBER_OF_SHARDS)) {\n+ for (Map.Entry<String, String> entry : normalizedSettings.getAsMap().entrySet()) {\n+ if (entry.getKey().equals(IndexMetaData.SETTING_NUMBER_OF_SHARDS)) {\n listener.onFailure(new IllegalArgumentException(\"can't change the number of shards for an index\"));\n return;\n }\n- }\n-\n- final Settings closeSettings = updatedSettingsBuilder.build();\n-\n- final Set<String> removedSettings = new HashSet<>();\n- final Set<String> errors = new HashSet<>();\n- for (Map.Entry<String, String> setting : updatedSettingsBuilder.internalMap().entrySet()) {\n- if (!dynamicSettings.hasDynamicSetting(setting.getKey())) {\n- removedSettings.add(setting.getKey());\n+ Setting setting = indexScopedSettings.get(entry.getKey());\n+ assert setting != null; // we already validated the normalized settings\n+ settingsForClosedIndices.put(entry.getKey(), entry.getValue());\n+ if (setting.isDynamic()) {\n+ settingsForOpenIndices.put(entry.getKey(), entry.getValue());\n } else {\n- String error = dynamicSettings.validateDynamicSetting(setting.getKey(), setting.getValue(), clusterService.state());\n- if (error != null) {\n- errors.add(\"[\" + setting.getKey() + \"] - \" + error);\n- }\n+ skipppedSettings.put(entry.getKey(), entry.getValue());\n }\n }\n-\n- if (!errors.isEmpty()) {\n- listener.onFailure(new IllegalArgumentException(\"can't process the settings: \" + errors.toString()));\n- return;\n- }\n-\n- if (!removedSettings.isEmpty()) {\n- for (String removedSetting : removedSettings) {\n- updatedSettingsBuilder.remove(removedSetting);\n- }\n- }\n- final Settings openSettings = updatedSettingsBuilder.build();\n+ final Settings skippedSettigns = skipppedSettings.build();\n+ final Settings closedSettings = settingsForClosedIndices.build();\n+ final Settings openSettings = settingsForOpenIndices.build();\n \n clusterService.submitStateUpdateTask(\"update-settings\",\n new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request, listener) {\n@@ -245,16 +204,16 @@ public ClusterState execute(ClusterState currentState) {\n }\n }\n \n- if (closeIndices.size() > 0 && closeSettings.get(IndexMetaData.SETTING_NUMBER_OF_REPLICAS) != null) {\n+ if (closeIndices.size() > 0 && closedSettings.get(IndexMetaData.SETTING_NUMBER_OF_REPLICAS) != null) {\n throw new IllegalArgumentException(String.format(Locale.ROOT,\n \"Can't update [%s] on closed indices [%s] - can leave index in an unopenable state\", IndexMetaData.SETTING_NUMBER_OF_REPLICAS,\n closeIndices\n ));\n }\n- if (!removedSettings.isEmpty() && !openIndices.isEmpty()) {\n+ if (!skippedSettigns.getAsMap().isEmpty() && !openIndices.isEmpty()) {\n throw new IllegalArgumentException(String.format(Locale.ROOT,\n \"Can't update non dynamic settings[%s] for open indices [%s]\",\n- removedSettings,\n+ skippedSettigns.getAsMap().keySet(),\n openIndices\n ));\n }\n@@ -267,71 +226,73 @@ public 
ClusterState execute(ClusterState currentState) {\n }\n \n ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n- Boolean updatedReadOnly = openSettings.getAsBoolean(IndexMetaData.SETTING_READ_ONLY, null);\n- if (updatedReadOnly != null) {\n- for (String index : actualIndices) {\n- if (updatedReadOnly) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_READ_ONLY_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_READ_ONLY_BLOCK);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_READ_ONLY_BLOCK, IndexMetaData.INDEX_READ_ONLY_SETTING, openSettings);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_METADATA_BLOCK, IndexMetaData.INDEX_BLOCKS_METADATA_SETTING, openSettings);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_WRITE_BLOCK, IndexMetaData.INDEX_BLOCKS_WRITE_SETTING, openSettings);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_READ_BLOCK, IndexMetaData.INDEX_BLOCKS_READ_SETTING, openSettings);\n+\n+ if (!openIndices.isEmpty()) {\n+ for (String index : openIndices) {\n+ IndexMetaData indexMetaData = metaDataBuilder.get(index);\n+ if (indexMetaData == null) {\n+ throw new IndexNotFoundException(index);\n }\n- }\n- }\n- Boolean updateMetaDataBlock = openSettings.getAsBoolean(IndexMetaData.SETTING_BLOCKS_METADATA, null);\n- if (updateMetaDataBlock != null) {\n- for (String index : actualIndices) {\n- if (updateMetaDataBlock) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_METADATA_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_METADATA_BLOCK);\n+ Settings.Builder updates = Settings.builder();\n+ Settings.Builder indexSettings = Settings.builder().put(indexMetaData.getSettings());\n+ if (indexScopedSettings.updateDynamicSettings(openSettings, indexSettings, updates, index)) {\n+ metaDataBuilder.put(IndexMetaData.builder(indexMetaData).settings(indexSettings));\n }\n }\n }\n \n- Boolean updateWriteBlock = openSettings.getAsBoolean(IndexMetaData.SETTING_BLOCKS_WRITE, null);\n- if (updateWriteBlock != null) {\n- for (String index : actualIndices) {\n- if (updateWriteBlock) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_WRITE_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_WRITE_BLOCK);\n+ if (!closeIndices.isEmpty()) {\n+ for (String index : closeIndices) {\n+ IndexMetaData indexMetaData = metaDataBuilder.get(index);\n+ if (indexMetaData == null) {\n+ throw new IndexNotFoundException(index);\n }\n- }\n- }\n-\n- Boolean updateReadBlock = openSettings.getAsBoolean(IndexMetaData.SETTING_BLOCKS_READ, null);\n- if (updateReadBlock != null) {\n- for (String index : actualIndices) {\n- if (updateReadBlock) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_READ_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_READ_BLOCK);\n+ Settings.Builder updates = Settings.builder();\n+ Settings.Builder indexSettings = Settings.builder().put(indexMetaData.getSettings());\n+ if (indexScopedSettings.updateSettings(closedSettings, indexSettings, updates, index)) {\n+ metaDataBuilder.put(IndexMetaData.builder(indexMetaData).settings(indexSettings));\n }\n }\n }\n \n- if (!openIndices.isEmpty()) {\n- String[] indices = openIndices.toArray(new String[openIndices.size()]);\n- metaDataBuilder.updateSettings(openSettings, indices);\n- }\n-\n- if (!closeIndices.isEmpty()) {\n- String[] indices = closeIndices.toArray(new String[closeIndices.size()]);\n- 
metaDataBuilder.updateSettings(closeSettings, indices);\n- }\n-\n \n ClusterState updatedState = ClusterState.builder(currentState).metaData(metaDataBuilder).routingTable(routingTableBuilder.build()).blocks(blocks).build();\n \n // now, reroute in case things change that require it (like number of replicas)\n RoutingAllocation.Result routingResult = allocationService.reroute(updatedState, \"settings update\");\n updatedState = ClusterState.builder(updatedState).routingResult(routingResult).build();\n-\n+ for (String index : openIndices) {\n+ indexScopedSettings.dryRun(updatedState.metaData().index(index).getSettings());\n+ }\n+ for (String index : closeIndices) {\n+ indexScopedSettings.dryRun(updatedState.metaData().index(index).getSettings());\n+ }\n return updatedState;\n }\n });\n }\n \n+ /**\n+ * Updates the cluster block only iff the setting exists in the given settings\n+ */\n+ private static void maybeUpdateClusterBlock(String[] actualIndices, ClusterBlocks.Builder blocks, ClusterBlock block, Setting<Boolean> setting, Settings openSettings) {\n+ if (setting.exists(openSettings)) {\n+ final boolean updateReadBlock = setting.get(openSettings);\n+ for (String index : actualIndices) {\n+ if (updateReadBlock) {\n+ blocks.addIndexBlock(index, block);\n+ } else {\n+ blocks.removeIndexBlock(index, block);\n+ }\n+ }\n+ }\n+ }\n+\n+\n public void upgradeIndexSettings(final UpgradeSettingsClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n \n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
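The update-settings rewrite above splits the normalized request into settings that may be applied to open indices (dynamic), settings that may only be applied while an index is closed, and skipped non-dynamic settings that trigger an error when any target index is open; the four block toggles are also folded into `maybeUpdateClusterBlock`. The following standalone sketch shows only the partitioning step; the types and names are simplified stand-ins, not the Elasticsearch classes.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: split a settings update into "dynamic" (open indices) and "any registered" (closed indices).
public class SettingsPartitionDemo {

    record Partition(Map<String, String> forOpenIndices, Map<String, String> forClosedIndices) {}

    static Partition partition(Map<String, String> requested, Map<String, Boolean> dynamicByKey) {
        Map<String, String> open = new HashMap<>();
        Map<String, String> closed = new HashMap<>();
        for (Map.Entry<String, String> e : requested.entrySet()) {
            Boolean dynamic = dynamicByKey.get(e.getKey());
            if (dynamic == null) {
                throw new IllegalArgumentException("unknown setting [" + e.getKey() + "]");
            }
            closed.put(e.getKey(), e.getValue());     // closed indices may take any registered setting
            if (dynamic) {
                open.put(e.getKey(), e.getValue());   // open indices only take dynamic ones
            }
        }
        return new Partition(open, closed);
    }

    public static void main(String[] args) {
        Map<String, Boolean> registry = Map.of(
                "index.number_of_replicas", true,     // dynamic
                "index.codec", false);                // static
        Partition p = partition(
                Map.of("index.number_of_replicas", "2", "index.codec", "best_compression"), registry);
        System.out.println("open indices get:   " + p.forOpenIndices());
        System.out.println("closed indices get: " + p.forClosedIndices());
    }
}
```

Compared with the old code path, the registry decides what is dynamic, so the ad-hoc `DynamicSettings` checks and the hand-rolled auto-expand-replicas parsing both disappear.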
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -41,10 +42,10 @@\n public class UnassignedInfo implements ToXContent, Writeable<UnassignedInfo> {\n \n public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"dateOptionalTime\");\n-\n- public static final String INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING = \"index.unassigned.node_left.delayed_timeout\";\n private static final TimeValue DEFAULT_DELAYED_NODE_LEFT_TIMEOUT = TimeValue.timeValueMinutes(1);\n \n+ public static final Setting<TimeValue> INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING = Setting.timeSetting(\"index.unassigned.node_left.delayed_timeout\", DEFAULT_DELAYED_NODE_LEFT_TIMEOUT, true, Setting.Scope.INDEX);\n+\n /**\n * Reason why the shard is in unassigned state.\n * <p>\n@@ -215,7 +216,7 @@ public long getAllocationDelayTimeoutSettingNanos(Settings settings, Settings in\n if (reason != Reason.NODE_LEFT) {\n return 0;\n }\n- TimeValue delayTimeout = indexSettings.getAsTime(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, settings.getAsTime(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, DEFAULT_DELAYED_NODE_LEFT_TIMEOUT));\n+ TimeValue delayTimeout = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexSettings, settings);\n return Math.max(0l, delayTimeout.nanos());\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java",
"status": "modified"
},
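Here the delayed-allocation timeout becomes a typed time setting and is resolved through the new two-argument `get(indexSettings, settings)` visible in the diff: an index-level value overrides the node-level one, with the built-in default as the last resort. A tiny standalone sketch of that fallback chain, using plain maps rather than the real `Settings` class:

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the two-level fallback a typed setting provides: index settings, then node settings, then default.
public class FallbackSettingDemo {

    static String get(String key, String defaultValue,
                      Map<String, String> indexSettings, Map<String, String> nodeSettings) {
        return Optional.ofNullable(indexSettings.get(key))
                .or(() -> Optional.ofNullable(nodeSettings.get(key)))
                .orElse(defaultValue);
    }

    public static void main(String[] args) {
        String key = "index.unassigned.node_left.delayed_timeout";
        Map<String, String> node = Map.of(key, "5m");
        Map<String, String> index = Map.of(key, "30s");
        System.out.println(get(key, "1m", Map.of(), Map.of())); // 1m  (built-in default)
        System.out.println(get(key, "1m", Map.of(), node));     // 5m  (node-level value)
        System.out.println(get(key, "1m", index, node));        // 30s (index-level value wins)
    }
}
```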
{
"diff": "@@ -32,7 +32,7 @@\n \n /**\n * This allocation decider allows shard allocations / rebalancing via the cluster wide settings {@link #CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING} /\n- * {@link #CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} and the per index setting {@link #INDEX_ROUTING_ALLOCATION_ENABLE} / {@link #INDEX_ROUTING_REBALANCE_ENABLE}.\n+ * {@link #CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} and the per index setting {@link #INDEX_ROUTING_ALLOCATION_ENABLE_SETTING} / {@link #INDEX_ROUTING_REBALANCE_ENABLE_SETTING}.\n * The per index settings overrides the cluster wide setting.\n *\n * <p>\n@@ -61,10 +61,10 @@ public class EnableAllocationDecider extends AllocationDecider {\n public static final String NAME = \"enable\";\n \n public static final Setting<Allocation> CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING = new Setting<>(\"cluster.routing.allocation.enable\", Allocation.ALL.name(), Allocation::parse, true, Setting.Scope.CLUSTER);\n- public static final String INDEX_ROUTING_ALLOCATION_ENABLE= \"index.routing.allocation.enable\";\n+ public static final Setting<Allocation> INDEX_ROUTING_ALLOCATION_ENABLE_SETTING = new Setting<>(\"index.routing.allocation.enable\", Allocation.ALL.name(), Allocation::parse, true, Setting.Scope.INDEX);\n \n public static final Setting<Rebalance> CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING = new Setting<>(\"cluster.routing.rebalance.enable\", Rebalance.ALL.name(), Rebalance::parse, true, Setting.Scope.CLUSTER);\n- public static final String INDEX_ROUTING_REBALANCE_ENABLE = \"index.routing.rebalance.enable\";\n+ public static final Setting<Rebalance> INDEX_ROUTING_REBALANCE_ENABLE_SETTING = new Setting<>(\"index.routing.rebalance.enable\", Rebalance.ALL.name(), Rebalance::parse, true, Setting.Scope.INDEX);\n \n private volatile Rebalance enableRebalance;\n private volatile Allocation enableAllocation;\n@@ -92,11 +92,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n return allocation.decision(Decision.YES, NAME, \"allocation disabling is ignored\");\n }\n \n- IndexMetaData indexMetaData = allocation.metaData().index(shardRouting.getIndex());\n- String enableIndexValue = indexMetaData.getSettings().get(INDEX_ROUTING_ALLOCATION_ENABLE);\n+ final IndexMetaData indexMetaData = allocation.metaData().index(shardRouting.getIndex());\n final Allocation enable;\n- if (enableIndexValue != null) {\n- enable = Allocation.parse(enableIndexValue);\n+ if (INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.exists(indexMetaData.getSettings())) {\n+ enable = INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.get(indexMetaData.getSettings());\n } else {\n enable = this.enableAllocation;\n }\n@@ -129,10 +128,9 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca\n }\n \n Settings indexSettings = allocation.routingNodes().metaData().index(shardRouting.index()).getSettings();\n- String enableIndexValue = indexSettings.get(INDEX_ROUTING_REBALANCE_ENABLE);\n final Rebalance enable;\n- if (enableIndexValue != null) {\n- enable = Rebalance.parse(enableIndexValue);\n+ if (INDEX_ROUTING_REBALANCE_ENABLE_SETTING.exists(indexSettings)) {\n+ enable = INDEX_ROUTING_REBALANCE_ENABLE_SETTING.get(indexSettings);\n } else {\n enable = this.enableRebalance;\n }\n@@ -160,7 +158,7 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca\n \n /**\n * Allocation values or rather their string representation to be used used with\n- * {@link EnableAllocationDecider#CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING} / {@link 
EnableAllocationDecider#INDEX_ROUTING_ALLOCATION_ENABLE}\n+ * {@link EnableAllocationDecider#CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING} / {@link EnableAllocationDecider#INDEX_ROUTING_ALLOCATION_ENABLE_SETTING}\n * via cluster / index settings.\n */\n public enum Allocation {\n@@ -186,7 +184,7 @@ public static Allocation parse(String strValue) {\n \n /**\n * Rebalance values or rather their string representation to be used used with\n- * {@link EnableAllocationDecider#CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} / {@link EnableAllocationDecider#INDEX_ROUTING_REBALANCE_ENABLE}\n+ * {@link EnableAllocationDecider#CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} / {@link EnableAllocationDecider#INDEX_ROUTING_REBALANCE_ENABLE_SETTING}\n * via cluster / index settings.\n */\n public enum Rebalance {",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/EnableAllocationDecider.java",
"status": "modified"
},
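The enable/rebalance flags above become enum-valued settings constructed with a parse function, and the decider uses `exists(indexSettings)` to decide between the per-index value and the cluster-wide default. A self-contained sketch of that pattern with an invented mini enum (not the real `EnableAllocationDecider.Allocation`):

```java
import java.util.Locale;
import java.util.Map;

// Sketch of an enum-valued setting where an explicit per-index value overrides the cluster-wide default.
public class EnumSettingDemo {

    enum Allocation {
        NONE, NEW_PRIMARIES, PRIMARIES, ALL;

        static Allocation parse(String value) {
            return valueOf(value.toUpperCase(Locale.ROOT));
        }
    }

    static Allocation effectiveEnable(Map<String, String> indexSettings, Allocation clusterWide) {
        String key = "index.routing.allocation.enable";
        // the per-index value wins only if it is explicitly set on the index
        return indexSettings.containsKey(key) ? Allocation.parse(indexSettings.get(key)) : clusterWide;
    }

    public static void main(String[] args) {
        Allocation clusterWide = Allocation.ALL;
        System.out.println(effectiveEnable(Map.of(), clusterWide));                                           // ALL
        System.out.println(effectiveEnable(Map.of("index.routing.allocation.enable", "none"), clusterWide));  // NONE
    }
}
```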
{
"diff": "@@ -60,10 +60,6 @@ public class FilterAllocationDecider extends AllocationDecider {\n \n public static final String NAME = \"filter\";\n \n- public static final String INDEX_ROUTING_REQUIRE_GROUP = \"index.routing.allocation.require.\";\n- public static final String INDEX_ROUTING_INCLUDE_GROUP = \"index.routing.allocation.include.\";\n- public static final String INDEX_ROUTING_EXCLUDE_GROUP = \"index.routing.allocation.exclude.\";\n-\n public static final Setting<Settings> CLUSTER_ROUTING_REQUIRE_GROUP_SETTING = Setting.groupSetting(\"cluster.routing.allocation.require.\", true, Setting.Scope.CLUSTER);\n public static final Setting<Settings> CLUSTER_ROUTING_INCLUDE_GROUP_SETTING = Setting.groupSetting(\"cluster.routing.allocation.include.\", true, Setting.Scope.CLUSTER);\n public static final Setting<Settings> CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING = Setting.groupSetting(\"cluster.routing.allocation.exclude.\", true, Setting.Scope.CLUSTER);",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java",
"status": "modified"
},
{
"diff": "@@ -32,12 +32,12 @@\n /**\n * This {@link AllocationDecider} limits the number of shards per node on a per\n * index or node-wide basis. The allocator prevents a single node to hold more\n- * than {@value #INDEX_TOTAL_SHARDS_PER_NODE} per index and\n+ * than <tt>index.routing.allocation.total_shards_per_node</tt> per index and\n * <tt>cluster.routing.allocation.total_shards_per_node</tt> globally during the allocation\n * process. The limits of this decider can be changed in real-time via a the\n * index settings API.\n * <p>\n- * If {@value #INDEX_TOTAL_SHARDS_PER_NODE} is reset to a negative value shards\n+ * If <tt>index.routing.allocation.total_shards_per_node</tt> is reset to a negative value shards\n * per index are unlimited per node. Shards currently in the\n * {@link ShardRoutingState#RELOCATING relocating} state are ignored by this\n * {@link AllocationDecider} until the shard changed its state to either\n@@ -59,12 +59,13 @@ public class ShardsLimitAllocationDecider extends AllocationDecider {\n * Controls the maximum number of shards per index on a single Elasticsearch\n * node. Negative values are interpreted as unlimited.\n */\n- public static final String INDEX_TOTAL_SHARDS_PER_NODE = \"index.routing.allocation.total_shards_per_node\";\n+ public static final Setting<Integer> INDEX_TOTAL_SHARDS_PER_NODE_SETTING = Setting.intSetting(\"index.routing.allocation.total_shards_per_node\", -1, -1, true, Setting.Scope.INDEX);\n+\n /**\n * Controls the maximum number of shards per node on a global level.\n * Negative values are interpreted as unlimited.\n */\n- public static final Setting<Integer> CLUSTER_TOTAL_SHARDS_PER_NODE_SETTING = Setting.intSetting(\"cluster.routing.allocation.total_shards_per_node\", -1, true, Setting.Scope.CLUSTER);\n+ public static final Setting<Integer> CLUSTER_TOTAL_SHARDS_PER_NODE_SETTING = Setting.intSetting(\"cluster.routing.allocation.total_shards_per_node\", -1, -1, true, Setting.Scope.CLUSTER);\n \n \n @Inject\n@@ -81,7 +82,7 @@ private void setClusterShardLimit(int clusterShardLimit) {\n @Override\n public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\n IndexMetaData indexMd = allocation.routingNodes().metaData().index(shardRouting.index());\n- int indexShardLimit = indexMd.getSettings().getAsInt(INDEX_TOTAL_SHARDS_PER_NODE, -1);\n+ final int indexShardLimit = INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(indexMd.getSettings(), settings);\n // Capture the limit here in case it changes during this method's\n // execution\n final int clusterShardLimit = this.clusterShardLimit;\n@@ -118,7 +119,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n @Override\n public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\n IndexMetaData indexMd = allocation.routingNodes().metaData().index(shardRouting.index());\n- int indexShardLimit = indexMd.getSettings().getAsInt(INDEX_TOTAL_SHARDS_PER_NODE, -1);\n+ final int indexShardLimit = INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(indexMd.getSettings(), settings);\n // Capture the limit here in case it changes during this method's\n // execution\n final int clusterShardLimit = this.clusterShardLimit;",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java",
"status": "modified"
},
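The per-index shard limit is now declared as an int setting with both a default and a minimum of `-1`, where any negative value means "unlimited", and it is read through the typed setting with a node-level fallback. A short sketch of the decision the decider makes with that value (simplified; not the actual allocation-decider code):

```java
// Sketch of the per-node shard limit check: a negative limit means "unlimited".
public class ShardLimitDemo {

    static boolean canAllocate(int shardsOfIndexOnNode, int indexShardLimit) {
        if (indexShardLimit < 0) {
            return true;                                   // no limit configured
        }
        return shardsOfIndexOnNode + 1 <= indexShardLimit; // would the new copy still fit?
    }

    public static void main(String[] args) {
        System.out.println(canAllocate(1, -1)); // true, unlimited
        System.out.println(canAllocate(1, 2));  // true, second copy still fits
        System.out.println(canAllocate(2, 2));  // false, node already holds the limit
    }
}
```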
{
"diff": "@@ -21,9 +21,13 @@\n \n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.regex.Regex;\n+import org.elasticsearch.common.util.set.Sets;\n \n import java.util.ArrayList;\n+import java.util.Collections;\n import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n@@ -43,17 +47,31 @@ public abstract class AbstractScopedSettings extends AbstractComponent {\n \n protected AbstractScopedSettings(Settings settings, Set<Setting<?>> settingsSet, Setting.Scope scope) {\n super(settings);\n+ this.lastSettingsApplied = Settings.EMPTY;\n+ this.scope = scope;\n for (Setting<?> entry : settingsSet) {\n- if (entry.getScope() != scope) {\n- throw new IllegalArgumentException(\"Setting must be a cluster setting but was: \" + entry.getScope());\n- }\n- if (entry.hasComplexMatcher()) {\n- complexMatchers.put(entry.getKey(), entry);\n- } else {\n- keySettings.put(entry.getKey(), entry);\n- }\n+ addSetting(entry);\n+ }\n+ }\n+\n+ protected AbstractScopedSettings(Settings nodeSettings, Settings scopeSettings, AbstractScopedSettings other) {\n+ super(nodeSettings);\n+ this.lastSettingsApplied = scopeSettings;\n+ this.scope = other.scope;\n+ complexMatchers.putAll(other.complexMatchers);\n+ keySettings.putAll(other.keySettings);\n+ settingUpdaters.addAll(other.settingUpdaters);\n+ }\n+\n+ protected final void addSetting(Setting<?> setting) {\n+ if (setting.getScope() != scope) {\n+ throw new IllegalArgumentException(\"Setting must be a \" + scope + \" setting but was: \" + setting.getScope());\n+ }\n+ if (setting.hasComplexMatcher()) {\n+ complexMatchers.putIfAbsent(setting.getKey(), setting);\n+ } else {\n+ keySettings.putIfAbsent(setting.getKey(), setting);\n }\n- this.scope = scope;\n }\n \n public Setting.Scope getScope() {\n@@ -161,6 +179,34 @@ public synchronized <T> void addSettingsUpdateConsumer(Setting<T> setting, Consu\n addSettingsUpdateConsumer(setting, consumer, (s) -> {});\n }\n \n+ /**\n+ * Validates that all settings in the builder are registered and valid\n+ */\n+ public final void validate(Settings.Builder settingsBuilder) {\n+ validate(settingsBuilder.build());\n+ }\n+\n+ /**\n+ * * Validates that all given settings are registered and valid\n+ */\n+ public final void validate(Settings settings) {\n+ for (Map.Entry<String, String> entry : settings.getAsMap().entrySet()) {\n+ validate(entry.getKey(), settings);\n+ }\n+ }\n+\n+\n+ /**\n+ * Validates that the setting is valid\n+ */\n+ public final void validate(String key, Settings settings) {\n+ Setting setting = get(key);\n+ if (setting == null) {\n+ throw new IllegalArgumentException(\"unknown setting [\" + key + \"]\");\n+ }\n+ setting.get(settings);\n+ }\n+\n /**\n * Transactional interface to update settings.\n * @see Setting\n@@ -253,4 +299,93 @@ public Settings diff(Settings source, Settings defaultSettings) {\n return builder.build();\n }\n \n+ /**\n+ * Returns the value for the given setting.\n+ */\n+ public <T> T get(Setting<T> setting) {\n+ if (setting.getScope() != scope) {\n+ throw new IllegalArgumentException(\"settings scope doesn't match the setting scope [\" + this.scope + \"] != [\" + setting.getScope() + \"]\");\n+ }\n+ if (get(setting.getKey()) == null) {\n+ throw new IllegalArgumentException(\"setting \" + setting.getKey() + \" has not been registered\");\n+ }\n+ return setting.get(this.lastSettingsApplied, settings);\n+ }\n+\n+ /**\n+ * Updates a target settings 
builder with new, updated or deleted settings from a given settings builder.\n+ * <p>\n+ * Note: This method will only allow updates to dynamic settings. if a non-dynamic setting is updated an {@link IllegalArgumentException} is thrown instead.\n+ *</p>\n+ * @param toApply the new settings to apply\n+ * @param target the target settings builder that the updates are applied to. All keys that have explicit null value in toApply will be removed from this builder\n+ * @param updates a settings builder that holds all updates applied to target\n+ * @param type a free text string to allow better exceptions messages\n+ * @return <code>true</code> if the target has changed otherwise <code>false</code>\n+ */\n+ public boolean updateDynamicSettings(Settings toApply, Settings.Builder target, Settings.Builder updates, String type) {\n+ return updateSettings(toApply, target, updates, type, true);\n+ }\n+\n+ /**\n+ * Updates a target settings builder with new, updated or deleted settings from a given settings builder.\n+ * @param toApply the new settings to apply\n+ * @param target the target settings builder that the updates are applied to. All keys that have explicit null value in toApply will be removed from this builder\n+ * @param updates a settings builder that holds all updates applied to target\n+ * @param type a free text string to allow better exceptions messages\n+ * @return <code>true</code> if the target has changed otherwise <code>false</code>\n+ */\n+ public boolean updateSettings(Settings toApply, Settings.Builder target, Settings.Builder updates, String type) {\n+ return updateSettings(toApply, target, updates, type, false);\n+ }\n+\n+ /**\n+ * Updates a target settings builder with new, updated or deleted settings from a given settings builder.\n+ * @param toApply the new settings to apply\n+ * @param target the target settings builder that the updates are applied to. All keys that have explicit null value in toApply will be removed from this builder\n+ * @param updates a settings builder that holds all updates applied to target\n+ * @param type a free text string to allow better exceptions messages\n+ * @param onlyDynamic if <code>false</code> all settings are updated otherwise only dynamic settings are updated. 
if set to <code>true</code> and a non-dynamic setting is updated an exception is thrown.\n+ * @return <code>true</code> if the target has changed otherwise <code>false</code>\n+ */\n+ private boolean updateSettings(Settings toApply, Settings.Builder target, Settings.Builder updates, String type, boolean onlyDynamic) {\n+ boolean changed = false;\n+ final Set<String> toRemove = new HashSet<>();\n+ Settings.Builder settingsBuilder = Settings.settingsBuilder();\n+ for (Map.Entry<String, String> entry : toApply.getAsMap().entrySet()) {\n+ if (entry.getValue() == null) {\n+ toRemove.add(entry.getKey());\n+ } else if ((onlyDynamic == false && get(entry.getKey()) != null) || hasDynamicSetting(entry.getKey())) {\n+ validate(entry.getKey(), toApply);\n+ settingsBuilder.put(entry.getKey(), entry.getValue());\n+ updates.put(entry.getKey(), entry.getValue());\n+ changed = true;\n+ } else {\n+ throw new IllegalArgumentException(type + \" setting [\" + entry.getKey() + \"], not dynamically updateable\");\n+ }\n+\n+ }\n+ changed |= applyDeletes(toRemove, target);\n+ target.put(settingsBuilder.build());\n+ return changed;\n+ }\n+\n+ private static final boolean applyDeletes(Set<String> deletes, Settings.Builder builder) {\n+ boolean changed = false;\n+ for (String entry : deletes) {\n+ Set<String> keysToRemove = new HashSet<>();\n+ Set<String> keySet = builder.internalMap().keySet();\n+ for (String key : keySet) {\n+ if (Regex.simpleMatch(entry, key)) {\n+ keysToRemove.add(key);\n+ }\n+ }\n+ for (String key : keysToRemove) {\n+ builder.remove(key);\n+ changed = true;\n+ }\n+ }\n+ return changed;\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
},
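The new `updateSettings`/`updateDynamicSettings` methods above apply a delta to a target settings builder: explicit null values delete keys (with simple wildcard matching), every surviving key is validated against the registry, and in dynamic-only mode touching a non-dynamic key throws. A condensed standalone sketch of that delta application, using plain maps and invented names rather than `Settings.Builder`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of applying a settings delta: null deletes a key, allowed keys are copied, others fail fast.
public class SettingsDeltaDemo {

    static boolean apply(Map<String, String> delta, Map<String, String> target,
                         Set<String> dynamicKeys, boolean onlyDynamic) {
        boolean changed = false;
        for (Map.Entry<String, String> e : delta.entrySet()) {
            if (e.getValue() == null) {
                changed |= target.remove(e.getKey()) != null;                     // explicit null means "remove"
            } else if (!onlyDynamic || dynamicKeys.contains(e.getKey())) {
                changed |= !e.getValue().equals(target.put(e.getKey(), e.getValue()));
            } else {
                throw new IllegalArgumentException("setting [" + e.getKey() + "] is not dynamically updateable");
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        Map<String, String> target = new HashMap<>(Map.of("index.refresh_interval", "1s", "index.codec", "default"));
        Map<String, String> delta = new HashMap<>();
        delta.put("index.refresh_interval", "30s");
        delta.put("index.codec", null);                                           // reset/remove this key
        System.out.println(apply(delta, target, Set.of("index.refresh_interval"), false)); // true
        System.out.println(target);                                               // {index.refresh_interval=30s}
    }
}
```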
{
"diff": "@@ -38,6 +38,8 @@\n import org.elasticsearch.discovery.DiscoverySettings;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n+import org.elasticsearch.gateway.PrimaryShardAllocator;\n+import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.store.IndexStoreConfig;\n import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService;\n import org.elasticsearch.indices.recovery.RecoverySettings;\n@@ -62,7 +64,6 @@ public ClusterSettings(Settings settings, Set<Setting<?>> settingsSet) {\n super(settings, settingsSet, Setting.Scope.CLUSTER);\n }\n \n-\n @Override\n public synchronized Settings applySettings(Settings newSettings) {\n Settings settings = super.applySettings(newSettings);\n@@ -83,6 +84,11 @@ public synchronized Settings applySettings(Settings newSettings) {\n return settings;\n }\n \n+ @Override\n+ public boolean hasDynamicSetting(String key) {\n+ return isLoggerSetting(key) || super.hasDynamicSetting(key);\n+ }\n+\n /**\n * Returns <code>true</code> if the settings is a logger setting.\n */\n@@ -149,5 +155,8 @@ public boolean isLoggerSetting(String key) {\n HierarchyCircuitBreakerService.FIELDDATA_CIRCUIT_BREAKER_TYPE_SETTING,\n HierarchyCircuitBreakerService.REQUEST_CIRCUIT_BREAKER_TYPE_SETTING,\n Transport.TRANSPORT_PROFILES_SETTING,\n- Transport.TRANSPORT_TCP_COMPRESS)));\n+ Transport.TRANSPORT_TCP_COMPRESS,\n+ IndexSettings.QUERY_STRING_ANALYZE_WILDCARD,\n+ IndexSettings.QUERY_STRING_ALLOW_LEADING_WILDCARD,\n+ PrimaryShardAllocator.NODE_INITIAL_SHARDS_SETTING)));\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,155 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.common.settings;\n+\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n+import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider;\n+import org.elasticsearch.gateway.PrimaryShardAllocator;\n+import org.elasticsearch.index.IndexModule;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.IndexingSlowLog;\n+import org.elasticsearch.index.MergePolicyConfig;\n+import org.elasticsearch.index.MergeSchedulerConfig;\n+import org.elasticsearch.index.SearchSlowLog;\n+import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n+import org.elasticsearch.index.engine.EngineConfig;\n+import org.elasticsearch.index.fielddata.IndexFieldDataService;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.percolator.PercolatorQueriesRegistry;\n+import org.elasticsearch.index.store.FsDirectoryService;\n+import org.elasticsearch.index.store.IndexStore;\n+import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.indices.cache.request.IndicesRequestCache;\n+import org.elasticsearch.search.SearchService;\n+\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.Set;\n+import java.util.function.Predicate;\n+\n+/**\n+ * Encapsulates all valid index level settings.\n+ * @see org.elasticsearch.common.settings.Setting.Scope#INDEX\n+ */\n+public final class IndexScopedSettings extends AbstractScopedSettings {\n+\n+ public static final Predicate<String> INDEX_SETTINGS_KEY_PREDICATE = (s) -> s.startsWith(IndexMetaData.INDEX_SETTING_PREFIX);\n+\n+ public static Set<Setting<?>> BUILT_IN_INDEX_SETTINGS = Collections.unmodifiableSet(new HashSet<>(Arrays.asList(\n+ IndexSettings.INDEX_TTL_DISABLE_PURGE_SETTING,\n+ IndexStore.INDEX_STORE_THROTTLE_TYPE_SETTING,\n+ IndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC_SETTING,\n+ MergeSchedulerConfig.AUTO_THROTTLE_SETTING,\n+ MergeSchedulerConfig.MAX_MERGE_COUNT_SETTING,\n+ MergeSchedulerConfig.MAX_THREAD_COUNT_SETTING,\n+ IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING,\n+ IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING,\n+ IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING,\n+ IndexMetaData.INDEX_AUTO_EXPAND_REPLICAS_SETTING,\n+ IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING,\n+ IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING,\n+ IndexMetaData.INDEX_SHADOW_REPLICAS_SETTING,\n+ IndexMetaData.INDEX_SHARED_FILESYSTEM_SETTING,\n+ 
IndexMetaData.INDEX_READ_ONLY_SETTING,\n+ IndexMetaData.INDEX_BLOCKS_READ_SETTING,\n+ IndexMetaData.INDEX_BLOCKS_WRITE_SETTING,\n+ IndexMetaData.INDEX_BLOCKS_METADATA_SETTING,\n+ IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING,\n+ IndexMetaData.INDEX_PRIORITY_SETTING,\n+ IndexMetaData.INDEX_DATA_PATH_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_DEBUG_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_WARN_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_INFO_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_TRACE_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_WARN_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_DEBUG_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_INFO_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_TRACE_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_LEVEL,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_REFORMAT,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_TRACE_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_LEVEL_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_REFORMAT_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG_SETTING,\n+ MergePolicyConfig.INDEX_COMPOUND_FORMAT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_FLOOR_SEGMENT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_SEGMENTS_PER_TIER_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT_SETTING,\n+ IndexSettings.INDEX_TRANSLOG_DURABILITY_SETTING,\n+ IndexSettings.INDEX_WARMER_ENABLED_SETTING,\n+ IndexSettings.INDEX_REFRESH_INTERVAL_SETTING,\n+ IndexSettings.MAX_RESULT_WINDOW_SETTING,\n+ IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING,\n+ IndexSettings.DEFAULT_FIELD_SETTING,\n+ IndexSettings.QUERY_STRING_LENIENT_SETTING,\n+ IndexSettings.ALLOW_UNMAPPED,\n+ IndexSettings.INDEX_CHECK_ON_STARTUP,\n+ ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE_SETTING,\n+ IndexSettings.INDEX_GC_DELETES_SETTING,\n+ IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING,\n+ UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING,\n+ EnableAllocationDecider.INDEX_ROUTING_REBALANCE_ENABLE_SETTING,\n+ EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE_SETTING,\n+ IndexSettings.INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTTING,\n+ IndexFieldDataService.INDEX_FIELDDATA_CACHE_KEY,\n+ FieldMapper.IGNORE_MALFORMED_SETTING,\n+ FieldMapper.COERCE_SETTING,\n+ Store.INDEX_STORE_STATS_REFRESH_INTERVAL_SETTING,\n+ PercolatorQueriesRegistry.INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING,\n+ MapperService.INDEX_MAPPER_DYNAMIC_SETTING,\n+ MapperService.INDEX_MAPPING_NESTED_FIELDS_LIMIT_SETTING,\n+ BitsetFilterCache.INDEX_LOAD_RANDOM_ACCESS_FILTERS_EAGERLY_SETTING,\n+ IndexModule.INDEX_STORE_TYPE_SETTING,\n+ IndexModule.INDEX_QUERY_CACHE_TYPE_SETTING,\n+ IndexModule.INDEX_QUERY_CACHE_EVERYTHING_SETTING,\n+ PrimaryShardAllocator.INDEX_RECOVERY_INITIAL_SHARDS_SETTING,\n+ FsDirectoryService.INDEX_LOCK_FACTOR_SETTING,\n+ EngineConfig.INDEX_CODEC_SETTING,\n+ 
SearchService.INDEX_NORMS_LOADING_SETTING,\n+ // this sucks but we can't really validate all the analyzers/similarity in here\n+ Setting.groupSetting(\"index.similarity.\", false, Setting.Scope.INDEX), // this allows similarity settings to be passed\n+ Setting.groupSetting(\"index.analysis.\", false, Setting.Scope.INDEX) // this allows analysis settings to be passed\n+\n+ )));\n+\n+ public static final IndexScopedSettings DEFAULT_SCOPED_SETTINGS = new IndexScopedSettings(Settings.EMPTY, IndexScopedSettings.BUILT_IN_INDEX_SETTINGS);\n+\n+ public IndexScopedSettings(Settings settings, Set<Setting<?>> settingsSet) {\n+ super(settings, settingsSet, Setting.Scope.INDEX);\n+ }\n+\n+ private IndexScopedSettings(Settings settings, IndexScopedSettings other, IndexMetaData metaData) {\n+ super(settings, metaData.getSettings(), other);\n+ }\n+\n+ public IndexScopedSettings copy(Settings settings, IndexMetaData metaData) {\n+ return new IndexScopedSettings(settings, this, metaData);\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java",
"status": "added"
},
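The new `IndexScopedSettings` class enumerates every valid index-level setting and, for namespaces that cannot be validated key by key, registers whole prefix groups (`index.analysis.*`, `index.similarity.*`); `INDEX_SETTINGS_KEY_PREDICATE` is what selects `index.`-prefixed keys out of node settings. The following standalone sketch shows a registry that mixes exact keys with prefix groups; the key sets are illustrative, not the full built-in list.

```java
import java.util.Map;
import java.util.Set;

// Sketch of a settings registry that accepts exact keys plus whole prefix groups.
public class IndexSettingsRegistryDemo {

    static final Set<String> EXACT_KEYS = Set.of("index.number_of_shards", "index.refresh_interval");
    static final Set<String> GROUP_PREFIXES = Set.of("index.analysis.", "index.similarity.");

    static boolean isRegistered(String key) {
        return EXACT_KEYS.contains(key)
                || GROUP_PREFIXES.stream().anyMatch(key::startsWith);
    }

    public static void main(String[] args) {
        for (String key : Map.of(
                "index.refresh_interval", "5s",
                "index.analysis.analyzer.my_analyzer.type", "custom",
                "index.refreshinterval", "5s").keySet()) {
            System.out.println(key + " -> " + (isRegistered(key) ? "accepted" : "rejected"));
        }
    }
}
```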
{
"diff": "@@ -36,6 +36,7 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.Objects;\n import java.util.function.BiConsumer;\n import java.util.function.Consumer;\n import java.util.function.Function;\n@@ -167,6 +168,16 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params)\n return builder;\n }\n \n+ /**\n+ * Returns the value for this setting but falls back to the second provided settings object\n+ */\n+ public final T get(Settings primary, Settings secondary) {\n+ if (exists(primary)) {\n+ return get(primary);\n+ }\n+ return get(secondary);\n+ }\n+\n /**\n * The settings scope - settings can either be cluster settings or per index settings.\n */\n@@ -225,7 +236,7 @@ public String toString() {\n }\n \n \n- private class Updater implements AbstractScopedSettings.SettingUpdater<T> {\n+ private final class Updater implements AbstractScopedSettings.SettingUpdater<T> {\n private final Consumer<T> consumer;\n private final ESLogger logger;\n private final Consumer<T> accept;\n@@ -265,8 +276,8 @@ public T getValue(Settings current, Settings previous) {\n }\n \n @Override\n- public void apply(T value, Settings current, Settings previous) {\n- logger.info(\"update [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n+ public final void apply(T value, Settings current, Settings previous) {\n+ logger.info(\"updating [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n consumer.accept(value);\n }\n }\n@@ -294,6 +305,10 @@ public static Setting<Integer> intSetting(String key, int defaultValue, int minV\n return new Setting<>(key, (s) -> Integer.toString(defaultValue), (s) -> parseInt(s, minValue, key), dynamic, scope);\n }\n \n+ public static Setting<Long> longSetting(String key, long defaultValue, long minValue, boolean dynamic, Scope scope) {\n+ return new Setting<>(key, (s) -> Long.toString(defaultValue), (s) -> parseLong(s, minValue, key), dynamic, scope);\n+ }\n+\n public static int parseInt(String s, int minValue, String key) {\n int value = Integer.parseInt(s);\n if (value < minValue) {\n@@ -302,6 +317,14 @@ public static int parseInt(String s, int minValue, String key) {\n return value;\n }\n \n+ public static long parseLong(String s, long minValue, String key) {\n+ long value = Long.parseLong(s);\n+ if (value < minValue) {\n+ throw new IllegalArgumentException(\"Failed to parse value [\" + s + \"] for setting [\" + key + \"] must be >= \" + minValue);\n+ }\n+ return value;\n+ }\n+\n public static Setting<Integer> intSetting(String key, int defaultValue, boolean dynamic, Scope scope) {\n return intSetting(key, defaultValue, Integer.MIN_VALUE, dynamic, scope);\n }\n@@ -430,6 +453,7 @@ public Settings getValue(Settings current, Settings previous) {\n \n @Override\n public void apply(Settings value, Settings current, Settings previous) {\n+ logger.info(\"updating [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n consumer.accept(value);\n }\n \n@@ -470,4 +494,16 @@ public static Setting<Double> doubleSetting(String key, double defaultValue, dou\n }, dynamic, scope);\n }\n \n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) return true;\n+ if (o == null || getClass() != o.getClass()) return false;\n+ Setting<?> setting = (Setting<?>) o;\n+ return Objects.equals(key, setting.key);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return Objects.hash(key);\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java",
"status": "modified"
},
{
"diff": "@@ -58,6 +58,7 @@\n import java.util.SortedMap;\n import java.util.TreeMap;\n import java.util.concurrent.TimeUnit;\n+import java.util.function.Predicate;\n import java.util.regex.Matcher;\n import java.util.regex.Pattern;\n \n@@ -212,6 +213,19 @@ public Settings getByPrefix(String prefix) {\n return builder.build();\n }\n \n+ /**\n+ * Returns a new settings object that contains all setting of the current one filtered by the given settings key predicate.\n+ */\n+ public Settings filter(Predicate<String> predicate) {\n+ Builder builder = new Builder();\n+ for (Map.Entry<String, String> entry : getAsMap().entrySet()) {\n+ if (predicate.test(entry.getKey())) {\n+ builder.put(entry.getKey(), entry.getValue());\n+ }\n+ }\n+ return builder.build();\n+ }\n+\n /**\n * Returns the settings mapped to the given setting name.\n */",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -34,35 +34,47 @@ public class SettingsModule extends AbstractModule {\n \n private final Settings settings;\n private final SettingsFilter settingsFilter;\n- private final Map<String, Setting<?>> clusterDynamicSettings = new HashMap<>();\n-\n+ private final Map<String, Setting<?>> clusterSettings = new HashMap<>();\n+ private final Map<String, Setting<?>> indexSettings = new HashMap<>();\n \n public SettingsModule(Settings settings, SettingsFilter settingsFilter) {\n this.settings = settings;\n this.settingsFilter = settingsFilter;\n for (Setting<?> setting : ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) {\n registerSetting(setting);\n }\n+ for (Setting<?> setting : IndexScopedSettings.BUILT_IN_INDEX_SETTINGS) {\n+ registerSetting(setting);\n+ }\n }\n \n @Override\n protected void configure() {\n+ final IndexScopedSettings indexScopedSettings = new IndexScopedSettings(settings, new HashSet<>(this.indexSettings.values()));\n+ // by now we are fully configured, lets check node level settings for unregistered index settings\n+ indexScopedSettings.validate(settings.filter(IndexScopedSettings.INDEX_SETTINGS_KEY_PREDICATE));\n bind(Settings.class).toInstance(settings);\n bind(SettingsFilter.class).toInstance(settingsFilter);\n- final ClusterSettings clusterSettings = new ClusterSettings(settings, new HashSet<>(clusterDynamicSettings.values()));\n+ final ClusterSettings clusterSettings = new ClusterSettings(settings, new HashSet<>(this.clusterSettings.values()));\n+\n bind(ClusterSettings.class).toInstance(clusterSettings);\n+ bind(IndexScopedSettings.class).toInstance(indexScopedSettings);\n }\n \n public void registerSetting(Setting<?> setting) {\n switch (setting.getScope()) {\n case CLUSTER:\n- if (clusterDynamicSettings.containsKey(setting.getKey())) {\n+ if (clusterSettings.containsKey(setting.getKey())) {\n throw new IllegalArgumentException(\"Cannot register setting [\" + setting.getKey() + \"] twice\");\n }\n- clusterDynamicSettings.put(setting.getKey(), setting);\n+ clusterSettings.put(setting.getKey(), setting);\n break;\n case INDEX:\n- throw new UnsupportedOperationException(\"not yet implemented\");\n+ if (indexSettings.containsKey(setting.getKey())) {\n+ throw new IllegalArgumentException(\"Cannot register setting [\" + setting.getKey() + \"] twice\");\n+ }\n+ indexSettings.put(setting.getKey(), setting);\n+ break;\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java",
"status": "modified"
},
{
"diff": "@@ -20,12 +20,10 @@\n package org.elasticsearch.common.unit;\n \n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.settings.Settings;\n \n import java.io.IOException;\n import java.util.Locale;\n@@ -176,7 +174,6 @@ public static ByteSizeValue parseBytesSizeValue(String sValue, String settingNam\n \n public static ByteSizeValue parseBytesSizeValue(String sValue, ByteSizeValue defaultValue, String settingName) throws ElasticsearchParseException {\n settingName = Objects.requireNonNull(settingName);\n- assert settingName.startsWith(\"index.\") == false || MetaDataIndexUpgradeService.INDEX_BYTES_SIZE_SETTINGS.contains(settingName);\n if (sValue == null) {\n return defaultValue;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java",
"status": "modified"
},
{
"diff": "@@ -20,12 +20,10 @@\n package org.elasticsearch.common.unit;\n \n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.settings.Settings;\n import org.joda.time.Period;\n import org.joda.time.PeriodType;\n import org.joda.time.format.PeriodFormat;\n@@ -254,7 +252,6 @@ public String getStringRep() {\n \n public static TimeValue parseTimeValue(String sValue, TimeValue defaultValue, String settingName) {\n settingName = Objects.requireNonNull(settingName);\n- assert settingName.startsWith(\"index.\") == false || MetaDataIndexUpgradeService.INDEX_TIME_SETTINGS.contains(settingName) : settingName;\n if (sValue == null) {\n return defaultValue;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/unit/TimeValue.java",
"status": "modified"
},
{
"diff": "@@ -342,7 +342,7 @@ public static void acquireFSLockForPaths(IndexSettings indexSettings, Path... sh\n // resolve the directory the shard actually lives in\n Path p = shardPaths[i].resolve(\"index\");\n // open a directory (will be immediately closed) on the shard's location\n- dirs[i] = new SimpleFSDirectory(p, FsDirectoryService.buildLockFactory(indexSettings));\n+ dirs[i] = new SimpleFSDirectory(p, indexSettings.getValue(FsDirectoryService.INDEX_LOCK_FACTOR_SETTING));\n // create a lock for the \"write.lock\" file\n try {\n locks[i] = dirs[i].obtainLock(IndexWriter.WRITE_LOCK_NAME);",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexSettings;\n \n@@ -40,6 +41,7 @@\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.function.Function;\n import java.util.stream.Collectors;\n \n /**\n@@ -48,15 +50,30 @@\n */\n public abstract class PrimaryShardAllocator extends AbstractComponent {\n \n- @Deprecated\n- public static final String INDEX_RECOVERY_INITIAL_SHARDS = \"index.recovery.initial_shards\";\n+ private static final Function<String, String> INITIAL_SHARDS_PARSER = (value) -> {\n+ switch (value) {\n+ case \"quorum\":\n+ case \"quorum-1\":\n+ case \"half\":\n+ case \"one\":\n+ case \"full\":\n+ case \"full-1\":\n+ case \"all-1\":\n+ case \"all\":\n+ return value;\n+ default:\n+ Integer.parseInt(value); // it can be parsed that's all we care here?\n+ return value;\n+ }\n+ };\n \n- private final String initialShards;\n+ public static final Setting<String> NODE_INITIAL_SHARDS_SETTING = new Setting<>(\"gateway.initial_shards\", (settings) -> settings.get(\"gateway.local.initial_shards\", \"quorum\"), INITIAL_SHARDS_PARSER, true, Setting.Scope.CLUSTER);\n+ @Deprecated\n+ public static final Setting<String> INDEX_RECOVERY_INITIAL_SHARDS_SETTING = new Setting<>(\"index.recovery.initial_shards\", (settings) -> NODE_INITIAL_SHARDS_SETTING.get(settings) , INITIAL_SHARDS_PARSER, true, Setting.Scope.INDEX);\n \n public PrimaryShardAllocator(Settings settings) {\n super(settings);\n- this.initialShards = settings.get(\"gateway.initial_shards\", settings.get(\"gateway.local.initial_shards\", \"quorum\"));\n- logger.debug(\"using initial_shards [{}]\", initialShards);\n+ logger.debug(\"using initial_shards [{}]\", NODE_INITIAL_SHARDS_SETTING.get(settings));\n }\n \n public boolean allocateUnassigned(RoutingAllocation allocation) {\n@@ -73,7 +90,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n }\n \n final IndexMetaData indexMetaData = metaData.index(shard.getIndex());\n- final IndexSettings indexSettings = new IndexSettings(indexMetaData, settings, Collections.emptyList());\n+ final IndexSettings indexSettings = new IndexSettings(indexMetaData, settings);\n \n if (shard.allocatedPostIndexCreate(indexMetaData) == false) {\n // when we create a fresh index\n@@ -209,29 +226,25 @@ private boolean isEnoughVersionBasedAllocationsFound(ShardRouting shard, IndexMe\n // check if the counts meets the minimum set\n int requiredAllocation = 1;\n // if we restore from a repository one copy is more then enough\n- try {\n- String initialShards = indexMetaData.getSettings().get(INDEX_RECOVERY_INITIAL_SHARDS, settings.get(INDEX_RECOVERY_INITIAL_SHARDS, this.initialShards));\n- if (\"quorum\".equals(initialShards)) {\n- if (indexMetaData.getNumberOfReplicas() > 1) {\n- requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2) + 1;\n- }\n- } else if (\"quorum-1\".equals(initialShards) || \"half\".equals(initialShards)) {\n- if (indexMetaData.getNumberOfReplicas() > 2) {\n- requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2);\n- }\n- } else if (\"one\".equals(initialShards)) {\n- requiredAllocation = 1;\n- } else if (\"full\".equals(initialShards) || \"all\".equals(initialShards)) {\n- 
requiredAllocation = indexMetaData.getNumberOfReplicas() + 1;\n- } else if (\"full-1\".equals(initialShards) || \"all-1\".equals(initialShards)) {\n- if (indexMetaData.getNumberOfReplicas() > 1) {\n- requiredAllocation = indexMetaData.getNumberOfReplicas();\n- }\n- } else {\n- requiredAllocation = Integer.parseInt(initialShards);\n+ String initialShards = INDEX_RECOVERY_INITIAL_SHARDS_SETTING.get(indexMetaData.getSettings(), settings);\n+ if (\"quorum\".equals(initialShards)) {\n+ if (indexMetaData.getNumberOfReplicas() > 1) {\n+ requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2) + 1;\n+ }\n+ } else if (\"quorum-1\".equals(initialShards) || \"half\".equals(initialShards)) {\n+ if (indexMetaData.getNumberOfReplicas() > 2) {\n+ requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2);\n+ }\n+ } else if (\"one\".equals(initialShards)) {\n+ requiredAllocation = 1;\n+ } else if (\"full\".equals(initialShards) || \"all\".equals(initialShards)) {\n+ requiredAllocation = indexMetaData.getNumberOfReplicas() + 1;\n+ } else if (\"full-1\".equals(initialShards) || \"all-1\".equals(initialShards)) {\n+ if (indexMetaData.getNumberOfReplicas() > 1) {\n+ requiredAllocation = indexMetaData.getNumberOfReplicas();\n }\n- } catch (Exception e) {\n- logger.warn(\"[{}][{}] failed to derived initial_shards from value {}, ignore allocation for {}\", shard.index(), shard.id(), initialShards, shard);\n+ } else {\n+ requiredAllocation = Integer.parseInt(initialShards);\n }\n \n return nodesAndVersions.allocationsFound >= requiredAllocation;\n@@ -336,7 +349,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n */\n private boolean recoverOnAnyNode(IndexSettings indexSettings) {\n return indexSettings.isOnSharedFilesystem()\n- && indexSettings.getSettings().getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false);\n+ && IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING.get(indexSettings.getSettings());\n }\n \n protected abstract AsyncShardFetch.FetchResult<TransportNodesListGatewayStartedShards.NodeGatewayStartedShards> fetchData(ShardRouting shard, RoutingAllocation allocation);",
"filename": "core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java",
"status": "modified"
},
{
"diff": "@@ -56,7 +56,7 @@ public final int compare(ShardRouting o1, ShardRouting o2) {\n }\n \n private int priority(Settings settings) {\n- return settings.getAsInt(IndexMetaData.SETTING_PRIORITY, 1);\n+ return IndexMetaData.INDEX_PRIORITY_SETTING.get(settings);\n }\n \n private long timeCreated(Settings settings) {",
"filename": "core/src/main/java/org/elasticsearch/gateway/PriorityComparator.java",
"status": "modified"
},
{
"diff": "@@ -130,7 +130,7 @@ protected NodeGatewayStartedShards nodeOperation(NodeRequest request) {\n if (metaData != null) {\n ShardPath shardPath = null;\n try {\n- IndexSettings indexSettings = new IndexSettings(metaData, settings, Collections.emptyList());\n+ IndexSettings indexSettings = new IndexSettings(metaData, settings);\n shardPath = ShardPath.loadShardPath(logger, nodeEnv, shardId, indexSettings);\n if (shardPath == null) {\n throw new IllegalStateException(shardId + \" no shard path found\");",
"filename": "core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayStartedShards.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.index;\n \n import org.apache.lucene.util.SetOnce;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.analysis.AnalysisRegistry;\n@@ -35,7 +37,6 @@\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.index.store.IndexStore;\n import org.elasticsearch.index.store.IndexStoreConfig;\n-import org.elasticsearch.indices.IndexingMemoryController;\n import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n import org.elasticsearch.indices.mapper.MapperRegistry;\n \n@@ -47,6 +48,7 @@\n import java.util.Set;\n import java.util.function.BiFunction;\n import java.util.function.Consumer;\n+import java.util.function.Function;\n \n /**\n * IndexModule represents the central extension point for index level custom implementations like:\n@@ -57,25 +59,24 @@\n * <tt>\"index.similarity.my_similarity.type : \"BM25\"</tt> can be used.</li>\n * <li>{@link IndexStore} - Custom {@link IndexStore} instances can be registered via {@link #addIndexStore(String, BiFunction)}</li>\n * <li>{@link IndexEventListener} - Custom {@link IndexEventListener} instances can be registered via {@link #addIndexEventListener(IndexEventListener)}</li>\n- * <li>Settings update listener - Custom settings update listener can be registered via {@link #addIndexSettingsListener(Consumer)}</li>\n+ * <li>Settings update listener - Custom settings update listener can be registered via {@link #addSettingsUpdateConsumer(Setting, Consumer)}</li>\n * </ul>\n */\n public final class IndexModule {\n \n- public static final String STORE_TYPE = \"index.store.type\";\n+ public static final Setting<String> INDEX_STORE_TYPE_SETTING = new Setting<>(\"index.store.type\", \"\", Function.identity(), false, Setting.Scope.INDEX);\n public static final String SIMILARITY_SETTINGS_PREFIX = \"index.similarity\";\n public static final String INDEX_QUERY_CACHE = \"index\";\n public static final String NONE_QUERY_CACHE = \"none\";\n- public static final String QUERY_CACHE_TYPE = \"index.queries.cache.type\";\n+ public static final Setting<String> INDEX_QUERY_CACHE_TYPE_SETTING = new Setting<>(\"index.queries.cache.type\", INDEX_QUERY_CACHE, Function.identity(), false, Setting.Scope.INDEX);\n // for test purposes only\n- public static final String QUERY_CACHE_EVERYTHING = \"index.queries.cache.everything\";\n+ public static final Setting<Boolean> INDEX_QUERY_CACHE_EVERYTHING_SETTING = Setting.boolSetting(\"index.queries.cache.everything\", false, false, Setting.Scope.INDEX);\n private final IndexSettings indexSettings;\n private final IndexStoreConfig indexStoreConfig;\n private final AnalysisRegistry analysisRegistry;\n // pkg private so tests can mock\n final SetOnce<EngineFactory> engineFactory = new SetOnce<>();\n private SetOnce<IndexSearcherWrapperFactory> indexSearcherWrapper = new SetOnce<>();\n- private final Set<Consumer<Settings>> settingsConsumers = new HashSet<>();\n private final Set<IndexEventListener> indexEventListeners = new HashSet<>();\n private IndexEventListener listener;\n private final Map<String, BiFunction<String, Settings, SimilarityProvider>> similarities = new HashMap<>();\n@@ -92,17 +93,13 @@ public IndexModule(IndexSettings indexSettings, IndexStoreConfig indexStoreConfi\n }\n \n /**\n- * Adds a settings consumer for this index\n+ * Adds a Setting and it's 
consumer for this index.\n */\n- public void addIndexSettingsListener(Consumer<Settings> listener) {\n- if (listener == null) {\n- throw new IllegalArgumentException(\"listener must not be null\");\n+ public <T> void addSettingsUpdateConsumer(Setting<T> setting, Consumer<T> consumer) {\n+ if (setting == null) {\n+ throw new IllegalArgumentException(\"setting must not be null\");\n }\n-\n- if (settingsConsumers.contains(listener)) {\n- throw new IllegalStateException(\"listener already registered\");\n- }\n- settingsConsumers.add(listener);\n+ indexSettings.getScopedSettings().addSettingsUpdateConsumer(setting, consumer);\n }\n \n /**\n@@ -245,27 +242,29 @@ public interface IndexSearcherWrapperFactory {\n \n public IndexService newIndexService(NodeEnvironment environment, IndexService.ShardStoreDeleter shardStoreDeleter, NodeServicesProvider servicesProvider, MapperRegistry mapperRegistry,\n IndexingOperationListener... listeners) throws IOException {\n- final IndexSettings settings = indexSettings.newWithListener(settingsConsumers);\n IndexSearcherWrapperFactory searcherWrapperFactory = indexSearcherWrapper.get() == null ? (shard) -> null : indexSearcherWrapper.get();\n IndexEventListener eventListener = freeze();\n- final String storeType = settings.getSettings().get(STORE_TYPE);\n+ final String storeType = indexSettings.getValue(INDEX_STORE_TYPE_SETTING);\n final IndexStore store;\n- if (storeType == null || isBuiltinType(storeType)) {\n- store = new IndexStore(settings, indexStoreConfig);\n+ if (Strings.isEmpty(storeType) || isBuiltinType(storeType)) {\n+ store = new IndexStore(indexSettings, indexStoreConfig);\n } else {\n BiFunction<IndexSettings, IndexStoreConfig, IndexStore> factory = storeTypes.get(storeType);\n if (factory == null) {\n throw new IllegalArgumentException(\"Unknown store type [\" + storeType + \"]\");\n }\n- store = factory.apply(settings, indexStoreConfig);\n+ store = factory.apply(indexSettings, indexStoreConfig);\n if (store == null) {\n throw new IllegalStateException(\"store must not be null\");\n }\n }\n- final String queryCacheType = settings.getSettings().get(IndexModule.QUERY_CACHE_TYPE, IndexModule.INDEX_QUERY_CACHE);\n+ indexSettings.getScopedSettings().addSettingsUpdateConsumer(IndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC_SETTING, store::setMaxRate);\n+ indexSettings.getScopedSettings().addSettingsUpdateConsumer(IndexStore.INDEX_STORE_THROTTLE_TYPE_SETTING, store::setType);\n+ final String queryCacheType = indexSettings.getValue(INDEX_QUERY_CACHE_TYPE_SETTING);\n final BiFunction<IndexSettings, IndicesQueryCache, QueryCache> queryCacheProvider = queryCaches.get(queryCacheType);\n- final QueryCache queryCache = queryCacheProvider.apply(settings, servicesProvider.getIndicesQueryCache());\n- return new IndexService(settings, environment, new SimilarityService(settings, similarities), shardStoreDeleter, analysisRegistry, engineFactory.get(),\n+ final QueryCache queryCache = queryCacheProvider.apply(indexSettings, servicesProvider.getIndicesQueryCache());\n+ return new IndexService(indexSettings, environment, new SimilarityService(indexSettings, similarities), shardStoreDeleter, analysisRegistry, engineFactory.get(),\n servicesProvider, queryCache, store, eventListener, searcherWrapperFactory, mapperRegistry, listeners);\n }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/index/IndexModule.java",
"status": "modified"
}
]
}
|
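The Setting.java and Settings.java diffs in the row above add a two-level value lookup (`Setting#get(Settings primary, Settings secondary)`) and a predicate-based `Settings#filter`. Below is a minimal sketch of how those two helpers might be used together; it assumes the post-PR Elasticsearch classes are on the classpath, and the concrete setting values are invented for illustration.

```java
import java.util.function.Predicate;

import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.Settings;

// Sketch only: exercises the helpers added in the diffs above.
public class SettingFallbackSketch {
    public static void main(String[] args) {
        Settings indexSettings = Settings.settingsBuilder()
                .put("index.priority", 5)               // index-level override
                .build();
        Settings nodeSettings = Settings.settingsBuilder()
                .put("index.priority", 1)               // node-level fallback
                .put("cluster.name", "demo")
                .build();

        // Setting#get(primary, secondary): the index-level value wins,
        // the node-level value is only used when the key is absent.
        int priority = IndexMetaData.INDEX_PRIORITY_SETTING.get(indexSettings, nodeSettings);

        // Settings#filter(Predicate): keep only the index.* keys, similar to
        // what SettingsModule does with INDEX_SETTINGS_KEY_PREDICATE.
        Predicate<String> isIndexSetting = key -> key.startsWith("index.");
        Settings indexOnly = nodeSettings.filter(isIndexSetting);

        System.out.println(priority + " -> " + indexOnly.getAsMap());
    }
}
```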
{
"body": "When creating a new index using settings from an old index, the \"creation_date\" of new index is same as that of old index. Steps - \n1. Get settings of existing index using GetIndexRequestBuilder()\n2. Create new index using CreateIndexRequestBuilder() by doing - \na. copying the mappings getIndexResponse.getMappings()\nb. copying the settings getIndexResponse.getSettings()\nc. copying headers getIndexResponse.getHeaders()\n\nNow the new index has same \"creation_date\" timestamp as old index. Version - 1.4\n",
"comments": [
{
"body": "Hi @ajeis \n\nThanks for reporting. This is indeed a bug. Confirmed on master with the REST api. Also `version.created` should not be settable either:\n\n```\nPUT x\n{\n \"settings\": {\n \"index\": {\n \"creation_date\": \"1439300111000\",\n \"version\": {\n \"created\": \"2000123\"\n }\n }\n }\n}\n\nGET x/_settings\n```\n\nReturns:\n\n```\n{\n \"x\": {\n \"settings\": {\n \"index\": {\n \"creation_date\": \"1439300111000\",\n \"uuid\": \"plte5zsHRwSTYzolfP537w\",\n \"number_of_replicas\": \"1\",\n \"number_of_shards\": \"5\",\n \"version\": {\n \"created\": \"2000123\"\n }\n }\n }\n }\n}\n```\n",
"created_at": "2015-08-11T13:40:23Z"
},
{
"body": "`version.created` is set in tests (to check backcompat behavior) _a lot_ (at least within mapping tests).\n",
"created_at": "2015-08-11T17:49:54Z"
},
{
"body": "Where would the fix for this be - IndexSettingsModule, InternalIndicesService, IndexSettingsService or somewhere else? I am still fairly new to the codebase.\n",
"created_at": "2015-08-13T06:51:24Z"
},
{
"body": "Hi , I already started working on this issue. I am also a beginner (let's join hands and work together)..\nI think I spotted the issue ... \nI thinks it's [here](https://github.com/elastic/elasticsearch/blob/2.0/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java#L331-L339) ...\n\nInvestigating why they are fetching it from settings\nnot sure if it was done intentionally for some purpose like migration where they need to maintain such information. I am just guessing ... Elasticsearch developers or experts can throw more light on this ...\nI am almost there with the fix ...\n\n``` java\nif (indexSettingsBuilder.get(SETTING_VERSION_CREATED) == null) {\n DiscoveryNodes nodes = currentState.nodes();\n final Version createdVersion = Version.smallest(version, nodes.smallestNonClientNodeVersion());\n indexSettingsBuilder.put(SETTING_VERSION_CREATED, createdVersion);\n }\n\n if (indexSettingsBuilder.get(SETTING_CREATION_DATE) == null) {\n indexSettingsBuilder.put(SETTING_CREATION_DATE, new DateTime(DateTimeZone.UTC).getMillis());\n }\n```\n",
"created_at": "2015-08-13T07:50:16Z"
},
{
"body": " I fixed it and it started working ..\ninput settings \n\n``` javascript\n{\n 'settings': \n { 'index': \n {'number_of_replicas': '1', \n 'version': {'created': '2000010'}, \n 'creation_date': '1439384094544'\n 'uuid': 'ayzHgwB3Sgey-Pk_Okgwyg',\n 'number_of_shards': '5', \n }\n }\n}\n```\n\nnew index settings \n\n``` javascript\n{\n 'settings': \n {'index': \n {'number_of_replicas': '1', \n 'version': {'created': '2000001'}, \n 'creation_date': '1439456346424', \n 'uuid': 'fddpIVxNTTKPzqDakLAL4A', \n 'number_of_shards': '5'\n }\n }\n}\n```\n\nBut the ramifications are something to be worried about ..... Now I understand what @rjernst\nis talking about in https://github.com/elastic/elasticsearch/issues/12790#issuecomment-129989979\nThe fix caused \n\n> 474 suites, 2648 tests, <b> 43 errors, 22 failures </b>\n\n(...tears in my eyes ...)\n",
"created_at": "2015-08-13T09:10:01Z"
},
{
"body": "Well it is not just for tests, that setting is inserted so we know when the index was created for things like backcompat checks, and also feature behavior. For example, when the `_field_names` was added back in 1.3, that setting helped determine for which indexes the new field should be added (those created on or after 1.3).\n",
"created_at": "2015-08-13T09:30:01Z"
},
{
"body": "This version.created seems like a bit dangerous entity to play with .\nI just was playing with the version number and unfortunately I gave version. created as 1 ..\nit accepted without any problem.. but then when I stopped it and restarted ... <b>it could not start</b> . It was throwing [errors](https://gist.github.com/HarishAtGitHub/6072670467d7d768ebf3) that I \"The index [demo6] was created before v0.90.0 and wasn't upgraded. This index should be open using a version before 2.0.0-beta1-SNAPSHOT and upgraded using the upgrade API.\" ...\nI cleared the data dir then to fix ..\n\nHmm.. so dangerous ....\n",
"created_at": "2015-08-13T10:44:08Z"
},
{
"body": "Pull request for date replication : https://github.com/elastic/elasticsearch/pull/12854\n",
"created_at": "2015-08-13T12:03:33Z"
},
{
"body": "Ok .. what is the final take on version.created ?\n\napart from that: Is my comment https://github.com/elastic/elasticsearch/issues/12790#issuecomment-130609798 a valid one ?\nIs it good to allow such a dangerous property out .\nWhy I am asking this is .. anything that can easily Fool/Trick/Ruin the system is dangerous and should be avoided ? So by that logic the comment seems valid ...\n\nor can it be ignored as index creation is mostly carried out by admins ? (I saw most of the index creation,deletion api's inside admin) ? and you have left it knowingly based on the belief that admins are supposed to perform such an operation with discretion ?\n",
"created_at": "2015-08-13T12:24:14Z"
},
{
"body": "@HarishAtGitHub `version.created` is an internal setting. It is not meant to be set by a user (and thus not documented). In an ideal world, there would be some separate section of the settings (probably outside the `settings` key) which would be used for these internal settings. However, I think this is currently difficult to change. In the meantime, the answer may be something like a \"clone\" index metadata action? I don't know how this would look, but it could ignore the known internal settings. I'm curious to here what @jpountz @clintongormley and others think about either of these ideas.\n",
"created_at": "2015-08-14T23:06:53Z"
},
{
"body": "I'd definitely like to see a list of \"banned\" settings, things that can only be set internally.\n",
"created_at": "2015-08-15T08:56:03Z"
},
{
"body": "Ok ... seems like a critical thing ...\nso why should the creation_date replication patch wait for this ?\ncan we review that patch and pull if it deserves a pull ?\nand track the version.created separately ? \n",
"created_at": "2015-08-16T09:02:50Z"
},
{
"body": "FYI: I just started to work on this on \"version.created\" part . I will update my progress in a day ....\nwhat is the timeline for this issue completion(as I see 2.0 tag in this thats y asking this Q) ? \nIs it ok for a person(me) who has has a week experience in code base do this ?\n\nIf you want this to be done by experts(for sooner completion for 2.0 release) I am fine , as I don't want to be a blocker . No worries ...\nI am fine with any decision u take ... @rjernst , @clintongormley ???\n",
"created_at": "2015-08-17T11:36:34Z"
},
{
"body": "May be input creation_date feature information should also be removed from the docs [here](https://www.elastic.co/guide/en/elasticsearch/reference/1.7/indices-create-index.html#_creation_date)\n",
"created_at": "2015-08-17T12:23:38Z"
},
{
"body": "Hi,\n made a commit to the same PR for handling version.created problem also: https://github.com/elastic/elasticsearch/pull/12854\n\nPlease give feedback and I can refactor the patch.\n\nNote about Test case results:\n\n---\n\nI built and all tests PASSED except the \"Smoke test shaded jar\"\nsee build log here: https://gist.github.com/HarishAtGitHub/aa8539a5c56cb4ad48b9\n. and I hope this has nothing to do with my code(as all related tests passed).\nand suspecting some other problem I resumed the build and I landed in https://gist.github.com/HarishAtGitHub/fcc519bd9863ab8a95af\n\nwhich is reported in https://github.com/elastic/elasticsearch/issues/12791 .\n",
"created_at": "2015-08-18T07:47:45Z"
},
{
"body": "we decided to move this out to 2.1\n",
"created_at": "2015-10-08T18:23:39Z"
}
],
"number": 12790,
"title": "Index \"creation_date\" not accurate when created with settings from another index"
}
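A hypothetical workaround for the behaviour described in this issue, assuming the old index's settings are available as a flat key/value map (as returned by the settings API): drop the internally managed keys before reusing the map for the create-index request. The helper below and its name are not part of Elasticsearch; the key names come from the discussion above.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical helper: copy an index's settings but drop keys that are managed
// internally by Elasticsearch, so the new index gets its own creation_date,
// uuid and version.created instead of inheriting the old index's values.
public final class SettingsCopy {

    private static final Set<String> INTERNAL_KEYS = new HashSet<>(Arrays.asList(
            "index.creation_date",
            "index.uuid",
            "index.version.created"));

    private SettingsCopy() {}

    public static Map<String, String> withoutInternalKeys(Map<String, String> oldIndexSettings) {
        Map<String, String> copy = new HashMap<>(oldIndexSettings);
        copy.keySet().removeAll(INTERNAL_KEYS);
        return copy;
    }
}
```

The filtered map could then be passed to the create-index request in place of the raw getIndexResponse.getSettings() output.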
|
{
"body": "This change moves all `index.*` settings over to the new infrastructure. This means in short that:\n- every setting that has an index scope must be registered up-front\n- index settings are validated on index creation, template creation, index settings update\n- node level settings starting with `index.*` are validated on node startup\n- settings that are private to ES like `index.version.created` can only be set by tests when they install a specific test plugin.\n- all index settings can be reset by passing `null` as their value on update\n- all index settings defaults can be listed via the settings APIs\n\nCloses #12854\nCloses #6732\nCloses #16032\nCloses #12790\n\nThanks @brwe who helped me a lot by going through the mechanical part of converting settings. Much appreciated!\n",
"number": 16054,
"review_comments": [
{
"body": "Wrong javadoc?\n",
"created_at": "2016-01-18T14:31:50Z"
},
{
"body": "s/IndexScopeSettings/IndexScopedSettings/ ?\n",
"created_at": "2016-01-18T14:32:12Z"
},
{
"body": "s/handels/handles/\n\nHandel was a composer.\n",
"created_at": "2016-01-18T14:34:54Z"
},
{
"body": "I think this'd be more clear: `enabled ? minReplicas + \"-\" + maxReplicas : \"false\"`\n",
"created_at": "2016-01-18T14:38:40Z"
},
{
"body": "Can/should this be private?\n",
"created_at": "2016-01-18T14:39:26Z"
},
{
"body": "++ on doing this\n",
"created_at": "2016-01-18T14:39:35Z"
},
{
"body": "Not yet! Those are still just strings but I suspect this note is to make sure that they grow their validation at the setting level.\n",
"created_at": "2016-01-18T14:43:31Z"
},
{
"body": "Should these be settings references? I'm not sure why they aren't but I'm sure there was a good reason but I don't know if its still true.\n",
"created_at": "2016-01-18T14:45:29Z"
},
{
"body": "I know you are just keeping this logic from before but it'd be cool to have some comment explaining it.\n",
"created_at": "2016-01-18T14:51:24Z"
},
{
"body": "Having both the SETTING_$NAME and INDEX_$NAME_SETTING is a bit confusing.\n",
"created_at": "2016-01-18T15:05:59Z"
},
{
"body": "The name of this one seems wrong.\n",
"created_at": "2016-01-18T15:06:07Z"
},
{
"body": "If looks like if you didn't specify the value it used to leave it unchanged on all the indexes but now it'll remove all the blocks I think.\n",
"created_at": "2016-01-18T15:06:53Z"
},
{
"body": "Very nice\n",
"created_at": "2016-01-18T15:09:30Z"
},
{
"body": "Its not really a builder, right?\n",
"created_at": "2016-01-18T15:11:47Z"
},
{
"body": "Ooops - wasn't reading right. Ignore.\n",
"created_at": "2016-01-18T15:12:05Z"
},
{
"body": "This seems like a pretty heavy way to validate a setting. Like, its building a bunch of maps and things.\n",
"created_at": "2016-01-18T15:13:55Z"
},
{
"body": "maybe have `dynamicOnly` instead? It feels more precise.\n",
"created_at": "2016-01-18T15:25:20Z"
},
{
"body": "It'd be nice if this got a class the same way AutoExpandReplicas did. Not required at all, but pleasant.\n",
"created_at": "2016-01-18T16:01:51Z"
},
{
"body": "Side note: this one's come up lately as some people have been using it as a big hammer to fix caching issues. They know its a bad idea but its the only tool they have.\n",
"created_at": "2016-01-18T16:03:22Z"
},
{
"body": "I removed all of this - it's a pre 2.0 BWC layer we don't need anymore in master\n",
"created_at": "2016-01-19T08:21:06Z"
}
],
"title": "Cut over all index scope settings to the new setting infrastrucuture"
}
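A rough sketch of the up-front registration pattern this PR describes, pieced together from the SettingsModule and IndexMetaData diffs in these rows. The `index.example.enabled` key is invented purely for illustration, and the code assumes the post-PR classes are available on the classpath.

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.SettingsModule;

// Sketch: an index-scoped, dynamically updatable setting is declared once as a
// constant and then registered with the SettingsModule, so that index creation,
// template creation and settings updates can all validate it against this definition.
public class ExampleSettingRegistration {

    // key, default value, dynamic flag, scope -- mirroring the boolSetting
    // declarations in the IndexMetaData diff (the key itself is made up).
    public static final Setting<Boolean> EXAMPLE_ENABLED_SETTING =
            Setting.boolSetting("index.example.enabled", false, true, Setting.Scope.INDEX);

    public static void register(SettingsModule settingsModule) {
        settingsModule.registerSetting(EXAMPLE_ENABLED_SETTING);
    }
}
```

Once registered, an unknown or malformed `index.*` key is rejected at index creation, template creation, or settings update time, as the PR body describes.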
|
{
"commits": [
{
"message": "Convert index level setting to the new setting infrastrucutre\n\nthis is an initial commit of cutting over simple string key based settings\nto a more contained scoped settings infrastructure."
},
{
"message": "convert index.translog.durability"
},
{
"message": "convert index.warmer.enabled"
},
{
"message": "convert refresh interval"
},
{
"message": "convert index.max_result_window"
},
{
"message": "converted index.routing.allocation.total_shards_per_node"
},
{
"message": "convert gc_deletes"
},
{
"message": "cut over index.unassigned.node_left.delayed_timeout"
},
{
"message": "register missing setting"
},
{
"message": "cut over index.routing.rebalance.enable and index.routing.allocation.enable"
},
{
"message": "convert index.ttl.disable_purge"
},
{
"message": "cut over indexing slow log"
},
{
"message": "add unittest for indexing slow log settings"
},
{
"message": "convert translog.flush_threshold_size"
},
{
"message": "convert all slow logs"
},
{
"message": "convert compound_format"
},
{
"message": "remove unused noCFSRatio"
},
{
"message": "convert expunge_deletes_allowed"
},
{
"message": "convert index allocation filtering"
},
{
"message": "we assume from now on that settings are reset if we pass empty settings"
},
{
"message": "convert merge.policy.floor_segment"
},
{
"message": "convert max_merge_at_once"
},
{
"message": "convert max_merge_at_once_explicit"
},
{
"message": "convert all setting in IndexMetaData"
},
{
"message": "use a valid default"
},
{
"message": "convert max_merged_segment"
},
{
"message": "convert segments_per_tier"
},
{
"message": "convert reclaim_deletes_weight and remove superfluous methods"
},
{
"message": "first cut at integrating new settings infra"
},
{
"message": "first cut at integrating new settings infra"
}
],
"files": [
{
"diff": "@@ -22,14 +22,9 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.metadata.MetaData;\n-import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n \n-import java.util.HashSet;\n-import java.util.Map;\n-import java.util.Set;\n-\n import static org.elasticsearch.cluster.ClusterState.builder;\n \n /**\n@@ -57,11 +52,11 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin\n boolean changed = false;\n Settings.Builder transientSettings = Settings.settingsBuilder();\n transientSettings.put(currentState.metaData().transientSettings());\n- changed |= apply(transientToApply, transientSettings, transientUpdates, \"transient\");\n+ changed |= clusterSettings.updateDynamicSettings(transientToApply, transientSettings, transientUpdates, \"transient\");\n \n Settings.Builder persistentSettings = Settings.settingsBuilder();\n persistentSettings.put(currentState.metaData().persistentSettings());\n- changed |= apply(persistentToApply, persistentSettings, persistentUpdates, \"persistent\");\n+ changed |= clusterSettings.updateDynamicSettings(persistentToApply, persistentSettings, persistentUpdates, \"persistent\");\n \n if (!changed) {\n return currentState;\n@@ -86,42 +81,5 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin\n return build;\n }\n \n- private boolean apply(Settings toApply, Settings.Builder target, Settings.Builder updates, String type) {\n- boolean changed = false;\n- final Set<String> toRemove = new HashSet<>();\n- Settings.Builder settingsBuilder = Settings.settingsBuilder();\n- for (Map.Entry<String, String> entry : toApply.getAsMap().entrySet()) {\n- if (entry.getValue() == null) {\n- toRemove.add(entry.getKey());\n- } else if (clusterSettings.isLoggerSetting(entry.getKey()) || clusterSettings.hasDynamicSetting(entry.getKey())) {\n- settingsBuilder.put(entry.getKey(), entry.getValue());\n- updates.put(entry.getKey(), entry.getValue());\n- changed = true;\n- } else {\n- throw new IllegalArgumentException(type + \" setting [\" + entry.getKey() + \"], not dynamically updateable\");\n- }\n-\n- }\n- changed |= applyDeletes(toRemove, target);\n- target.put(settingsBuilder.build());\n- return changed;\n- }\n \n- private final boolean applyDeletes(Set<String> deletes, Settings.Builder builder) {\n- boolean changed = false;\n- for (String entry : deletes) {\n- Set<String> keysToRemove = new HashSet<>();\n- Set<String> keySet = builder.internalMap().keySet();\n- for (String key : keySet) {\n- if (Regex.simpleMatch(entry, key)) {\n- keysToRemove.add(key);\n- }\n- }\n- for (String key : keysToRemove) {\n- builder.remove(key);\n- changed = true;\n- }\n- }\n- return changed;\n- }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java",
"status": "modified"
},
{
"diff": "@@ -62,7 +62,7 @@ protected ClusterBlockException checkBlock(UpdateSettingsRequest request, Cluste\n if (globalBlock != null) {\n return globalBlock;\n }\n- if (request.settings().getAsMap().size() == 1 && (request.settings().get(IndexMetaData.SETTING_BLOCKS_METADATA) != null || request.settings().get(IndexMetaData.SETTING_READ_ONLY) != null )) {\n+ if (request.settings().getAsMap().size() == 1 && IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.exists(request.settings()) || IndexMetaData.INDEX_READ_ONLY_SETTING.exists(request.settings())) {\n return null;\n }\n return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indexNameExpressionResolver.concreteIndices(state, request));",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java",
"status": "modified"
},
{
"diff": "@@ -25,9 +25,11 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.IndexScopedSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n@@ -38,13 +40,15 @@\n public class TransportPutIndexTemplateAction extends TransportMasterNodeAction<PutIndexTemplateRequest, PutIndexTemplateResponse> {\n \n private final MetaDataIndexTemplateService indexTemplateService;\n+ private final IndexScopedSettings indexScopedSettings;\n \n @Inject\n public TransportPutIndexTemplateAction(Settings settings, TransportService transportService, ClusterService clusterService,\n ThreadPool threadPool, MetaDataIndexTemplateService indexTemplateService,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n+ ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, IndexScopedSettings indexScopedSettings) {\n super(settings, PutIndexTemplateAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, PutIndexTemplateRequest::new);\n this.indexTemplateService = indexTemplateService;\n+ this.indexScopedSettings = indexScopedSettings;\n }\n \n @Override\n@@ -69,11 +73,13 @@ protected void masterOperation(final PutIndexTemplateRequest request, final Clus\n if (cause.length() == 0) {\n cause = \"api\";\n }\n-\n+ final Settings.Builder templateSettingsBuilder = Settings.settingsBuilder();\n+ templateSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n+ indexScopedSettings.validate(templateSettingsBuilder);\n indexTemplateService.putTemplate(new MetaDataIndexTemplateService.PutRequest(cause, request.name())\n .template(request.template())\n .order(request.order())\n- .settings(request.settings())\n+ .settings(templateSettingsBuilder.build())\n .mappings(request.mappings())\n .aliases(request.aliases())\n .customs(request.customs())",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,7 @@ public final class AutoCreateIndex {\n @Inject\n public AutoCreateIndex(Settings settings, IndexNameExpressionResolver resolver) {\n this.resolver = resolver;\n- dynamicMappingDisabled = !settings.getAsBoolean(MapperService.INDEX_MAPPER_DYNAMIC_SETTING, MapperService.INDEX_MAPPER_DYNAMIC_DEFAULT);\n+ dynamicMappingDisabled = !MapperService.INDEX_MAPPER_DYNAMIC_SETTING.get(settings);\n String value = settings.get(\"action.auto_create_index\");\n if (value == null || Booleans.isExplicitTrue(value)) {\n needToCheck = true;",
"filename": "core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java",
"status": "modified"
},
{
"diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.cluster.action.index.NodeIndexDeletedAction;\n import org.elasticsearch.cluster.action.index.NodeMappingRefreshAction;\n import org.elasticsearch.cluster.action.shard.ShardStateAction;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.IndexTemplateFilter;\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n@@ -36,7 +35,6 @@\n import org.elasticsearch.cluster.node.DiscoveryNodeService;\n import org.elasticsearch.cluster.routing.OperationRouting;\n import org.elasticsearch.cluster.routing.RoutingService;\n-import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;\n import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;\n@@ -56,27 +54,12 @@\n import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.cluster.service.InternalClusterService;\n-import org.elasticsearch.cluster.settings.DynamicSettings;\n-import org.elasticsearch.cluster.settings.Validator;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.ExtensionPoint;\n import org.elasticsearch.gateway.GatewayAllocator;\n-import org.elasticsearch.gateway.PrimaryShardAllocator;\n-import org.elasticsearch.index.IndexSettings;\n-import org.elasticsearch.index.IndexingSlowLog;\n-import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.search.stats.SearchSlowLog;\n-import org.elasticsearch.index.settings.IndexDynamicSettings;\n-import org.elasticsearch.index.MergePolicyConfig;\n-import org.elasticsearch.index.MergeSchedulerConfig;\n-import org.elasticsearch.index.store.IndexStore;\n-import org.elasticsearch.indices.IndicesWarmer;\n-import org.elasticsearch.indices.cache.request.IndicesRequestCache;\n-import org.elasticsearch.indices.ttl.IndicesTTLService;\n-import org.elasticsearch.search.internal.DefaultSearchContext;\n \n import java.util.Arrays;\n import java.util.Collections;\n@@ -107,7 +90,6 @@ public class ClusterModule extends AbstractModule {\n SnapshotInProgressAllocationDecider.class));\n \n private final Settings settings;\n- private final DynamicSettings.Builder indexDynamicSettings = new DynamicSettings.Builder();\n private final ExtensionPoint.SelectedType<ShardsAllocator> shardsAllocators = new ExtensionPoint.SelectedType<>(\"shards_allocator\", ShardsAllocator.class);\n private final ExtensionPoint.ClassSet<AllocationDecider> allocationDeciders = new ExtensionPoint.ClassSet<>(\"allocation_decider\", AllocationDecider.class, AllocationDeciders.class);\n private final ExtensionPoint.ClassSet<IndexTemplateFilter> indexTemplateFilters = new ExtensionPoint.ClassSet<>(\"index_template_filter\", IndexTemplateFilter.class);\n@@ -117,79 +99,13 @@ public class ClusterModule extends AbstractModule {\n \n public ClusterModule(Settings settings) {\n this.settings = settings;\n-\n- registerBuiltinIndexSettings();\n-\n for (Class<? 
extends AllocationDecider> decider : ClusterModule.DEFAULT_ALLOCATION_DECIDERS) {\n registerAllocationDecider(decider);\n }\n registerShardsAllocator(ClusterModule.BALANCED_ALLOCATOR, BalancedShardsAllocator.class);\n registerShardsAllocator(ClusterModule.EVEN_SHARD_COUNT_ALLOCATOR, BalancedShardsAllocator.class);\n }\n \n- private void registerBuiltinIndexSettings() {\n- registerIndexDynamicSetting(IndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(IndexStore.INDEX_STORE_THROTTLE_TYPE, Validator.EMPTY);\n- registerIndexDynamicSetting(MergeSchedulerConfig.MAX_THREAD_COUNT, Validator.NON_NEGATIVE_INTEGER);\n- registerIndexDynamicSetting(MergeSchedulerConfig.MAX_MERGE_COUNT, Validator.EMPTY);\n- registerIndexDynamicSetting(MergeSchedulerConfig.AUTO_THROTTLE, Validator.EMPTY);\n- registerIndexDynamicSetting(FilterAllocationDecider.INDEX_ROUTING_REQUIRE_GROUP + \"*\", Validator.EMPTY);\n- registerIndexDynamicSetting(FilterAllocationDecider.INDEX_ROUTING_INCLUDE_GROUP + \"*\", Validator.EMPTY);\n- registerIndexDynamicSetting(FilterAllocationDecider.INDEX_ROUTING_EXCLUDE_GROUP + \"*\", Validator.EMPTY);\n- registerIndexDynamicSetting(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, Validator.EMPTY);\n- registerIndexDynamicSetting(EnableAllocationDecider.INDEX_ROUTING_REBALANCE_ENABLE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, Validator.NON_NEGATIVE_INTEGER);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_READ_ONLY, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_BLOCKS_READ, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_BLOCKS_WRITE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_BLOCKS_METADATA, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexMetaData.SETTING_PRIORITY, Validator.NON_NEGATIVE_INTEGER);\n- registerIndexDynamicSetting(IndicesTTLService.INDEX_TTL_DISABLE_PURGE, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexSettings.INDEX_REFRESH_INTERVAL, Validator.TIME);\n- registerIndexDynamicSetting(PrimaryShardAllocator.INDEX_RECOVERY_INITIAL_SHARDS, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexSettings.INDEX_GC_DELETES_SETTING, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_TRACE, Validator.TIME);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_REFORMAT, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_LEVEL, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexingSlowLog.INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG, Validator.EMPTY);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_WARN, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_INFO, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_DEBUG, Validator.TIME);\n- 
registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_TRACE, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_WARN, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_INFO, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_DEBUG, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_TRACE, Validator.TIME);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_REFORMAT, Validator.EMPTY);\n- registerIndexDynamicSetting(SearchSlowLog.INDEX_SEARCH_SLOWLOG_LEVEL, Validator.EMPTY);\n- registerIndexDynamicSetting(ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE, Validator.INTEGER);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED, Validator.DOUBLE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_FLOOR_SEGMENT, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE, Validator.INTEGER_GTE_2);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT, Validator.INTEGER_GTE_2);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_SEGMENTS_PER_TIER, Validator.DOUBLE_GTE_2);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT, Validator.NON_NEGATIVE_DOUBLE);\n- registerIndexDynamicSetting(MergePolicyConfig.INDEX_COMPOUND_FORMAT, Validator.EMPTY);\n- registerIndexDynamicSetting(IndexSettings.INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE, Validator.BYTES_SIZE);\n- registerIndexDynamicSetting(IndexSettings.INDEX_TRANSLOG_DURABILITY, Validator.EMPTY);\n- registerIndexDynamicSetting(IndicesWarmer.INDEX_WARMER_ENABLED, Validator.EMPTY);\n- registerIndexDynamicSetting(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED, Validator.BOOLEAN);\n- registerIndexDynamicSetting(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, Validator.TIME);\n- registerIndexDynamicSetting(DefaultSearchContext.MAX_RESULT_WINDOW, Validator.POSITIVE_INTEGER);\n- registerIndexDynamicSetting(MapperService.INDEX_MAPPING_NESTED_FIELDS_LIMIT_SETTING, Validator.NON_NEGATIVE_INTEGER);\n- }\n-\n- public void registerIndexDynamicSetting(String setting, Validator validator) {\n- indexDynamicSettings.addSetting(setting, validator);\n- }\n-\n-\n public void registerAllocationDecider(Class<? extends AllocationDecider> allocationDecider) {\n allocationDeciders.registerExtension(allocationDecider);\n }\n@@ -204,8 +120,6 @@ public void registerIndexTemplateFilter(Class<? extends IndexTemplateFilter> ind\n \n @Override\n protected void configure() {\n- bind(DynamicSettings.class).annotatedWith(IndexDynamicSettings.class).toInstance(indexDynamicSettings.build());\n-\n // bind ShardsAllocator\n String shardsAllocatorType = shardsAllocators.bindType(binder(), settings, ClusterModule.SHARDS_ALLOCATOR_TYPE_KEY, ClusterModule.BALANCED_ALLOCATOR);\n if (shardsAllocatorType.equals(ClusterModule.EVEN_SHARD_COUNT_ALLOCATOR)) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterModule.java",
"status": "modified"
},
{
"diff": "@@ -306,16 +306,16 @@ public Builder addBlocks(IndexMetaData indexMetaData) {\n if (indexMetaData.getState() == IndexMetaData.State.CLOSE) {\n addIndexBlock(indexMetaData.getIndex(), MetaDataIndexStateService.INDEX_CLOSED_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_READ_ONLY, false)) {\n+ if (IndexMetaData.INDEX_READ_ONLY_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_READ_ONLY_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_BLOCKS_READ, false)) {\n+ if (IndexMetaData.INDEX_BLOCKS_READ_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_READ_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_BLOCKS_WRITE, false)) {\n+ if (IndexMetaData.INDEX_BLOCKS_WRITE_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_WRITE_BLOCK);\n }\n- if (indexMetaData.getSettings().getAsBoolean(IndexMetaData.SETTING_BLOCKS_METADATA, false)) {\n+ if (IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.get(indexMetaData.getSettings())) {\n addIndexBlock(indexMetaData.getIndex(), IndexMetaData.INDEX_METADATA_BLOCK);\n }\n return this;",
"filename": "core/src/main/java/org/elasticsearch/cluster/block/ClusterBlocks.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,92 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.elasticsearch.common.Booleans;\n+import org.elasticsearch.common.settings.Setting;\n+\n+/**\n+ * This class acts as a functional wrapper around the <tt>index.auto_expand_replicas</tt> setting.\n+ * This setting or rather it's value is expanded into a min and max value which requires special handling\n+ * based on the number of datanodes in the cluster. This class handles all the parsing and streamlines the access to these values.\n+ */\n+final class AutoExpandReplicas {\n+ // the value we recognize in the \"max\" position to mean all the nodes\n+ private static final String ALL_NODES_VALUE = \"all\";\n+ public static final Setting<AutoExpandReplicas> SETTING = new Setting<>(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, \"false\", (value) -> {\n+ final int min;\n+ final int max;\n+ if (Booleans.parseBoolean(value, true) == false) {\n+ return new AutoExpandReplicas(0, 0, false);\n+ }\n+ final int dash = value.indexOf('-');\n+ if (-1 == dash) {\n+ throw new IllegalArgumentException(\"failed to parse [\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] from value: [\" + value + \"] at index \" + dash);\n+ }\n+ final String sMin = value.substring(0, dash);\n+ try {\n+ min = Integer.parseInt(sMin);\n+ } catch (NumberFormatException e) {\n+ throw new IllegalArgumentException(\"failed to parse [\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] from value: [\" + value + \"] at index \" + dash, e);\n+ }\n+ String sMax = value.substring(dash + 1);\n+ if (sMax.equals(ALL_NODES_VALUE)) {\n+ max = Integer.MAX_VALUE;\n+ } else {\n+ try {\n+ max = Integer.parseInt(sMax);\n+ } catch (NumberFormatException e) {\n+ throw new IllegalArgumentException(\"failed to parse [\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] from value: [\" + value + \"] at index \" + dash, e);\n+ }\n+ }\n+ return new AutoExpandReplicas(min, max, true);\n+ }, true, Setting.Scope.INDEX);\n+\n+ private final int minReplicas;\n+ private final int maxReplicas;\n+ private final boolean enabled;\n+\n+ private AutoExpandReplicas(int minReplicas, int maxReplicas, boolean enabled) {\n+ if (minReplicas > maxReplicas) {\n+ throw new IllegalArgumentException(\"[\" + IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS + \"] minReplicas must be =< maxReplicas but wasn't \" + minReplicas + \" > \" + maxReplicas);\n+ }\n+ this.minReplicas = minReplicas;\n+ this.maxReplicas = maxReplicas;\n+ this.enabled = enabled;\n+ }\n+\n+ int getMinReplicas() {\n+ return minReplicas;\n+ }\n+\n+ int getMaxReplicas(int numDataNodes) {\n+ return Math.min(maxReplicas, numDataNodes-1);\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return enabled ? 
minReplicas + \"-\" + maxReplicas : \"false\";\n+ }\n+\n+ boolean isEnabled() {\n+ return enabled;\n+ }\n+}\n+\n+",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/AutoExpandReplicas.java",
"status": "added"
},
{
"diff": "@@ -29,14 +29,17 @@\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.node.DiscoveryNodeFilters;\n+import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.collect.ImmutableOpenIntMap;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.collect.MapBuilder;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.loader.SettingsLoader;\n import org.elasticsearch.common.xcontent.FromXContentBuilder;\n@@ -58,6 +61,7 @@\n import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n+import java.util.function.Function;\n \n import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.AND;\n import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.OR;\n@@ -70,10 +74,6 @@\n */\n public class IndexMetaData implements Diffable<IndexMetaData>, FromXContentBuilder<IndexMetaData>, ToXContent {\n \n- public static final IndexMetaData PROTO = IndexMetaData.builder(\"\")\n- .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n- .numberOfShards(1).numberOfReplicas(0).build();\n-\n public interface Custom extends Diffable<Custom>, ToXContent {\n \n String type();\n@@ -152,27 +152,53 @@ public static State fromString(String state) {\n }\n public static final String INDEX_SETTING_PREFIX = \"index.\";\n public static final String SETTING_NUMBER_OF_SHARDS = \"index.number_of_shards\";\n+ public static final Setting<Integer> INDEX_NUMBER_OF_SHARDS_SETTING = Setting.intSetting(SETTING_NUMBER_OF_SHARDS, 5, 1, false, Setting.Scope.INDEX);\n public static final String SETTING_NUMBER_OF_REPLICAS = \"index.number_of_replicas\";\n+ public static final Setting<Integer> INDEX_NUMBER_OF_REPLICAS_SETTING = Setting.intSetting(SETTING_NUMBER_OF_REPLICAS, 1, 0, true, Setting.Scope.INDEX);\n public static final String SETTING_SHADOW_REPLICAS = \"index.shadow_replicas\";\n+ public static final Setting<Boolean> INDEX_SHADOW_REPLICAS_SETTING = Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, false, Setting.Scope.INDEX);\n+\n public static final String SETTING_SHARED_FILESYSTEM = \"index.shared_filesystem\";\n+ public static final Setting<Boolean> INDEX_SHARED_FILESYSTEM_SETTING = Setting.boolSetting(SETTING_SHARED_FILESYSTEM, false, false, Setting.Scope.INDEX);\n+\n public static final String SETTING_AUTO_EXPAND_REPLICAS = \"index.auto_expand_replicas\";\n+ public static final Setting<AutoExpandReplicas> INDEX_AUTO_EXPAND_REPLICAS_SETTING = AutoExpandReplicas.SETTING;\n public static final String SETTING_READ_ONLY = \"index.blocks.read_only\";\n+ public static final Setting<Boolean> INDEX_READ_ONLY_SETTING = Setting.boolSetting(SETTING_READ_ONLY, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_BLOCKS_READ = \"index.blocks.read\";\n+ public static final Setting<Boolean> INDEX_BLOCKS_READ_SETTING = Setting.boolSetting(SETTING_BLOCKS_READ, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_BLOCKS_WRITE = 
\"index.blocks.write\";\n+ public static final Setting<Boolean> INDEX_BLOCKS_WRITE_SETTING = Setting.boolSetting(SETTING_BLOCKS_WRITE, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_BLOCKS_METADATA = \"index.blocks.metadata\";\n+ public static final Setting<Boolean> INDEX_BLOCKS_METADATA_SETTING = Setting.boolSetting(SETTING_BLOCKS_METADATA, false, true, Setting.Scope.INDEX);\n+\n public static final String SETTING_VERSION_CREATED = \"index.version.created\";\n public static final String SETTING_VERSION_CREATED_STRING = \"index.version.created_string\";\n public static final String SETTING_VERSION_UPGRADED = \"index.version.upgraded\";\n public static final String SETTING_VERSION_UPGRADED_STRING = \"index.version.upgraded_string\";\n public static final String SETTING_VERSION_MINIMUM_COMPATIBLE = \"index.version.minimum_compatible\";\n public static final String SETTING_CREATION_DATE = \"index.creation_date\";\n public static final String SETTING_PRIORITY = \"index.priority\";\n+ public static final Setting<Integer> INDEX_PRIORITY_SETTING = Setting.intSetting(\"index.priority\", 1, 0, true, Setting.Scope.INDEX);\n public static final String SETTING_CREATION_DATE_STRING = \"index.creation_date_string\";\n public static final String SETTING_INDEX_UUID = \"index.uuid\";\n public static final String SETTING_DATA_PATH = \"index.data_path\";\n+ public static final Setting<String> INDEX_DATA_PATH_SETTING = new Setting<>(SETTING_DATA_PATH, \"\", Function.identity(), false, Setting.Scope.INDEX);\n public static final String SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE = \"index.shared_filesystem.recover_on_any_node\";\n+ public static final Setting<Boolean> INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING = Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, true, Setting.Scope.INDEX);\n public static final String INDEX_UUID_NA_VALUE = \"_na_\";\n \n+ public static final Setting<Settings> INDEX_ROUTING_REQUIRE_GROUP_SETTING = Setting.groupSetting(\"index.routing.allocation.require.\", true, Setting.Scope.INDEX);\n+ public static final Setting<Settings> INDEX_ROUTING_INCLUDE_GROUP_SETTING = Setting.groupSetting(\"index.routing.allocation.include.\", true, Setting.Scope.INDEX);\n+ public static final Setting<Settings> INDEX_ROUTING_EXCLUDE_GROUP_SETTING = Setting.groupSetting(\"index.routing.allocation.exclude.\", true, Setting.Scope.INDEX);\n+\n+ public static final IndexMetaData PROTO = IndexMetaData.builder(\"\")\n+ .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1).numberOfReplicas(0).build();\n+\n public static final String KEY_ACTIVE_ALLOCATIONS = \"active_allocations\";\n \n private final int numberOfShards;\n@@ -627,10 +653,6 @@ public Builder creationDate(long creationDate) {\n return this;\n }\n \n- public long creationDate() {\n- return settings.getAsLong(SETTING_CREATION_DATE, -1l);\n- }\n-\n public Builder settings(Settings.Builder settings) {\n this.settings = settings.build();\n return this;\n@@ -645,11 +667,6 @@ public MappingMetaData mapping(String type) {\n return mappings.get(type);\n }\n \n- public Builder removeMapping(String mappingType) {\n- mappings.remove(mappingType);\n- return this;\n- }\n-\n public Builder putMapping(String type, String source) throws IOException {\n try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) {\n putMapping(new MappingMetaData(type, parser.mapOrdered()));\n@@ -692,24 +709,11 @@ public Builder 
putCustom(String type, Custom customIndexMetaData) {\n return this;\n }\n \n- public Builder removeCustom(String type) {\n- this.customs.remove(type);\n- return this;\n- }\n-\n- public Custom getCustom(String type) {\n- return this.customs.get(type);\n- }\n-\n public Builder putActiveAllocationIds(int shardId, Set<String> allocationIds) {\n activeAllocationIds.put(shardId, new HashSet(allocationIds));\n return this;\n }\n \n- public Set<String> getActiveAllocationIds(int shardId) {\n- return activeAllocationIds.get(shardId);\n- }\n-\n public long version() {\n return this.version;\n }\n@@ -758,22 +762,21 @@ public IndexMetaData build() {\n filledActiveAllocationIds.put(i, Collections.emptySet());\n }\n }\n-\n- Map<String, String> requireMap = settings.getByPrefix(\"index.routing.allocation.require.\").getAsMap();\n+ final Map<String, String> requireMap = INDEX_ROUTING_REQUIRE_GROUP_SETTING.get(settings).getAsMap();\n final DiscoveryNodeFilters requireFilters;\n if (requireMap.isEmpty()) {\n requireFilters = null;\n } else {\n requireFilters = DiscoveryNodeFilters.buildFromKeyValue(AND, requireMap);\n }\n- Map<String, String> includeMap = settings.getByPrefix(\"index.routing.allocation.include.\").getAsMap();\n+ Map<String, String> includeMap = INDEX_ROUTING_INCLUDE_GROUP_SETTING.get(settings).getAsMap();\n final DiscoveryNodeFilters includeFilters;\n if (includeMap.isEmpty()) {\n includeFilters = null;\n } else {\n includeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, includeMap);\n }\n- Map<String, String> excludeMap = settings.getByPrefix(\"index.routing.allocation.exclude.\").getAsMap();\n+ Map<String, String> excludeMap = INDEX_ROUTING_EXCLUDE_GROUP_SETTING.get(settings).getAsMap();\n final DiscoveryNodeFilters excludeFilters;\n if (excludeMap.isEmpty()) {\n excludeFilters = null;",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java",
"status": "modified"
},
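The diff above replaces raw string keys in `IndexMetaData` with typed `Setting<T>` constants that carry a default, a lower bound, a dynamic flag and an `INDEX` scope. A minimal sketch of reading such a constant, assuming the factory signatures and defaults exactly as declared in this diff (replicas default 1, shards default 5); the class name is illustrative only:

```java
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.Settings;

public class TypedIndexSettingSketch {
    public static void main(String[] args) {
        // settings as they would appear in an index's metadata
        Settings indexSettings = Settings.builder()
            .put("index.number_of_replicas", 2)
            .build();

        // the typed constant knows its default and lower bound, so callers
        // no longer parse the raw string value themselves
        int replicas = IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.get(indexSettings); // -> 2
        int shards = IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(indexSettings);     // -> 5 (default)
        System.out.println(replicas + " / " + shards);
    }
}
```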
{
"diff": "@@ -47,6 +47,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.regex.Regex;\n+import org.elasticsearch.common.settings.IndexScopedSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n@@ -103,13 +104,14 @@ public class MetaDataCreateIndexService extends AbstractComponent {\n private final IndexTemplateFilter indexTemplateFilter;\n private final Environment env;\n private final NodeServicesProvider nodeServicesProvider;\n+ private final IndexScopedSettings indexScopedSettings;\n \n \n @Inject\n public MetaDataCreateIndexService(Settings settings, ClusterService clusterService,\n IndicesService indicesService, AllocationService allocationService,\n Version version, AliasValidator aliasValidator,\n- Set<IndexTemplateFilter> indexTemplateFilters, Environment env, NodeServicesProvider nodeServicesProvider) {\n+ Set<IndexTemplateFilter> indexTemplateFilters, Environment env, NodeServicesProvider nodeServicesProvider, IndexScopedSettings indexScopedSettings) {\n super(settings);\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n@@ -118,6 +120,7 @@ public MetaDataCreateIndexService(Settings settings, ClusterService clusterServi\n this.aliasValidator = aliasValidator;\n this.env = env;\n this.nodeServicesProvider = nodeServicesProvider;\n+ this.indexScopedSettings = indexScopedSettings;\n \n if (indexTemplateFilters.isEmpty()) {\n this.indexTemplateFilter = DEFAULT_INDEX_TEMPLATE_FILTER;\n@@ -174,6 +177,7 @@ public void validateIndexName(String index, ClusterState state) {\n public void createIndex(final CreateIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n Settings.Builder updatedSettingsBuilder = Settings.settingsBuilder();\n updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n+ indexScopedSettings.validate(updatedSettingsBuilder);\n request.settings(updatedSettingsBuilder.build());\n \n clusterService.submitStateUpdateTask(\"create-index [\" + request.index() + \"], cause [\" + request.cause() + \"]\",\n@@ -460,16 +464,17 @@ public void validateIndexSettings(String indexName, Settings settings) throws In\n }\n \n List<String> getIndexSettingsValidationErrors(Settings settings) {\n- String customPath = settings.get(IndexMetaData.SETTING_DATA_PATH, null);\n+ String customPath = IndexMetaData.INDEX_DATA_PATH_SETTING.get(settings);\n List<String> validationErrors = new ArrayList<>();\n- if (customPath != null && env.sharedDataFile() == null) {\n+ if (Strings.isEmpty(customPath) == false && env.sharedDataFile() == null) {\n validationErrors.add(\"path.shared_data must be set in order to use custom data paths\");\n- } else if (customPath != null) {\n+ } else if (Strings.isEmpty(customPath) == false) {\n Path resolvedPath = PathUtils.get(new Path[]{env.sharedDataFile()}, customPath);\n if (resolvedPath == null) {\n validationErrors.add(\"custom path [\" + customPath + \"] is not a sub-path of path.shared_data [\" + env.sharedDataFile() + \"]\");\n }\n }\n+ //norelease - this can be removed?\n Integer number_of_primaries = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);\n Integer number_of_replicas = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null);\n if (number_of_primaries != null && number_of_primaries <= 0) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
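With `IndexScopedSettings` injected into the create-index service, requested settings are validated before a cluster state update is submitted, so unknown or malformed index settings fail fast. A hedged sketch of that validation step, using only the `validate` overloads and the prefix normalization visible in this diff (class name illustrative):

```java
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Settings;

public class CreateIndexValidationSketch {
    public static void main(String[] args) {
        IndexScopedSettings indexScopedSettings = IndexScopedSettings.DEFAULT_SCOPED_SETTINGS;

        // settings as a user might send them, without the "index." prefix
        Settings.Builder requested = Settings.settingsBuilder().put("number_of_shards", 3);
        requested.normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX); // -> index.number_of_shards

        // throws IllegalArgumentException for unknown settings or out-of-range values
        indexScopedSettings.validate(requested);
        System.out.println("accepted: " + requested.build().getAsMap());
    }
}
```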
{
"diff": "@@ -21,7 +21,6 @@\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.apache.lucene.analysis.Analyzer;\n import org.elasticsearch.Version;\n-import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -70,7 +69,6 @@ public IndexMetaData upgradeIndexMetaData(IndexMetaData indexMetaData) {\n }\n checkSupportedVersion(indexMetaData);\n IndexMetaData newMetaData = indexMetaData;\n- newMetaData = addDefaultUnitsIfNeeded(newMetaData);\n checkMappingsCompatibility(newMetaData);\n newMetaData = markAsUpgraded(newMetaData);\n return newMetaData;\n@@ -113,111 +111,14 @@ private static boolean isSupportedVersion(IndexMetaData indexMetaData) {\n return false;\n }\n \n- /** All known byte-sized settings for an index. */\n- public static final Set<String> INDEX_BYTES_SIZE_SETTINGS = unmodifiableSet(newHashSet(\n- \"index.merge.policy.floor_segment\",\n- \"index.merge.policy.max_merged_segment\",\n- \"index.merge.policy.max_merge_size\",\n- \"index.merge.policy.min_merge_size\",\n- \"index.shard.recovery.file_chunk_size\",\n- \"index.shard.recovery.translog_size\",\n- \"index.store.throttle.max_bytes_per_sec\",\n- \"index.translog.flush_threshold_size\",\n- \"index.translog.fs.buffer_size\",\n- \"index.version_map_size\"));\n-\n- /** All known time settings for an index. */\n- public static final Set<String> INDEX_TIME_SETTINGS = unmodifiableSet(newHashSet(\n- \"index.gateway.wait_for_mapping_update_post_recovery\",\n- \"index.shard.wait_for_mapping_update_post_recovery\",\n- \"index.gc_deletes\",\n- \"index.indexing.slowlog.threshold.index.debug\",\n- \"index.indexing.slowlog.threshold.index.info\",\n- \"index.indexing.slowlog.threshold.index.trace\",\n- \"index.indexing.slowlog.threshold.index.warn\",\n- \"index.refresh_interval\",\n- \"index.search.slowlog.threshold.fetch.debug\",\n- \"index.search.slowlog.threshold.fetch.info\",\n- \"index.search.slowlog.threshold.fetch.trace\",\n- \"index.search.slowlog.threshold.fetch.warn\",\n- \"index.search.slowlog.threshold.query.debug\",\n- \"index.search.slowlog.threshold.query.info\",\n- \"index.search.slowlog.threshold.query.trace\",\n- \"index.search.slowlog.threshold.query.warn\",\n- \"index.shadow.wait_for_initial_commit\",\n- \"index.store.stats_refresh_interval\",\n- \"index.translog.flush_threshold_period\",\n- \"index.translog.interval\",\n- \"index.translog.sync_interval\",\n- \"index.shard.inactive_time\",\n- UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING));\n-\n- /**\n- * Elasticsearch 2.0 requires units on byte/memory and time settings; this method adds the default unit to any such settings that are\n- * missing units.\n- */\n- private IndexMetaData addDefaultUnitsIfNeeded(IndexMetaData indexMetaData) {\n- if (indexMetaData.getCreationVersion().before(Version.V_2_0_0_beta1)) {\n- // TODO: can we somehow only do this *once* for a pre-2.0 index? Maybe we could stuff a \"fake marker setting\" here? 
Seems hackish...\n- // Created lazily if we find any settings that are missing units:\n- Settings settings = indexMetaData.getSettings();\n- Settings.Builder newSettings = null;\n- for(String byteSizeSetting : INDEX_BYTES_SIZE_SETTINGS) {\n- String value = settings.get(byteSizeSetting);\n- if (value != null) {\n- try {\n- Long.parseLong(value);\n- } catch (NumberFormatException nfe) {\n- continue;\n- }\n- // It's a naked number that previously would be interpreted as default unit (bytes); now we add it:\n- logger.warn(\"byte-sized index setting [{}] with value [{}] is missing units; assuming default units (b) but in future versions this will be a hard error\", byteSizeSetting, value);\n- if (newSettings == null) {\n- newSettings = Settings.builder();\n- newSettings.put(settings);\n- }\n- newSettings.put(byteSizeSetting, value + \"b\");\n- }\n- }\n- for(String timeSetting : INDEX_TIME_SETTINGS) {\n- String value = settings.get(timeSetting);\n- if (value != null) {\n- try {\n- Long.parseLong(value);\n- } catch (NumberFormatException nfe) {\n- continue;\n- }\n- // It's a naked number that previously would be interpreted as default unit (ms); now we add it:\n- logger.warn(\"time index setting [{}] with value [{}] is missing units; assuming default units (ms) but in future versions this will be a hard error\", timeSetting, value);\n- if (newSettings == null) {\n- newSettings = Settings.builder();\n- newSettings.put(settings);\n- }\n- newSettings.put(timeSetting, value + \"ms\");\n- }\n- }\n- if (newSettings != null) {\n- // At least one setting was changed:\n- return IndexMetaData.builder(indexMetaData)\n- .version(indexMetaData.getVersion())\n- .settings(newSettings.build())\n- .build();\n- }\n- }\n-\n- // No changes:\n- return indexMetaData;\n- }\n-\n-\n /**\n * Checks the mappings for compatibility with the current version\n */\n private void checkMappingsCompatibility(IndexMetaData indexMetaData) {\n try {\n // We cannot instantiate real analysis server at this point because the node might not have\n // been started yet. However, we don't really need real analyzers at this stage - so we can fake it\n- IndexSettings indexSettings = new IndexSettings(indexMetaData, this.settings, Collections.emptyList());\n+ IndexSettings indexSettings = new IndexSettings(indexMetaData, this.settings);\n SimilarityService similarityService = new SimilarityService(indexSettings, Collections.emptyMap());\n \n try (AnalysisService analysisService = new FakeAnalysisService(indexSettings)) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java",
"status": "modified"
},
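For context on the upgrade step removed above: it scanned pre-2.0 index settings for bare numbers in byte-sized and time settings and appended the default unit ("b" or "ms"). A standalone sketch of that rewrite reduced to its core idea; the helper name here is illustrative, not one of the original methods:

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultUnitsSketch {
    /** Append a default unit to values that parse as a bare long; leave everything else untouched. */
    static Map<String, String> addDefaultUnit(Map<String, String> settings, String key, String unit) {
        Map<String, String> out = new HashMap<>(settings);
        String value = settings.get(key);
        if (value != null) {
            try {
                Long.parseLong(value);      // only bare numbers are rewritten
                out.put(key, value + unit);
            } catch (NumberFormatException ignored) {
                // already has a unit (e.g. "60s", "16mb"); keep as-is
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> old = new HashMap<>();
        old.put("index.refresh_interval", "5");   // pre-2.0 style, no unit
        old.put("index.gc_deletes", "60s");       // already has a unit
        Map<String, String> upgraded = addDefaultUnit(old, "index.refresh_interval", "ms");
        upgraded = addDefaultUnit(upgraded, "index.gc_deletes", "ms");
        System.out.println(upgraded); // -> {index.refresh_interval=5ms, index.gc_deletes=60s}
    }
}
```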
{
"diff": "@@ -30,19 +30,20 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ClusterStateListener;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n+import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n-import org.elasticsearch.cluster.settings.DynamicSettings;\n-import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.IndexScopedSettings;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.index.settings.IndexDynamicSettings;\n+import org.elasticsearch.index.IndexNotFoundException;\n \n import java.util.ArrayList;\n import java.util.HashMap;\n@@ -59,25 +60,21 @@\n */\n public class MetaDataUpdateSettingsService extends AbstractComponent implements ClusterStateListener {\n \n- // the value we recognize in the \"max\" position to mean all the nodes\n- private static final String ALL_NODES_VALUE = \"all\";\n-\n private final ClusterService clusterService;\n \n private final AllocationService allocationService;\n \n- private final DynamicSettings dynamicSettings;\n-\n private final IndexNameExpressionResolver indexNameExpressionResolver;\n+ private final IndexScopedSettings indexScopedSettings;\n \n @Inject\n- public MetaDataUpdateSettingsService(Settings settings, ClusterService clusterService, AllocationService allocationService, @IndexDynamicSettings DynamicSettings dynamicSettings, IndexNameExpressionResolver indexNameExpressionResolver) {\n+ public MetaDataUpdateSettingsService(Settings settings, ClusterService clusterService, AllocationService allocationService, IndexScopedSettings indexScopedSettings, IndexNameExpressionResolver indexNameExpressionResolver) {\n super(settings);\n this.clusterService = clusterService;\n this.indexNameExpressionResolver = indexNameExpressionResolver;\n this.clusterService.add(this);\n this.allocationService = allocationService;\n- this.dynamicSettings = dynamicSettings;\n+ this.indexScopedSettings = indexScopedSettings;\n }\n \n @Override\n@@ -90,69 +87,43 @@ public void clusterChanged(ClusterChangedEvent event) {\n final int dataNodeCount = event.state().nodes().dataNodes().size();\n \n Map<Integer, List<String>> nrReplicasChanged = new HashMap<>();\n-\n // we need to do this each time in case it was changed by update settings\n for (final IndexMetaData indexMetaData : event.state().metaData()) {\n- String autoExpandReplicas = indexMetaData.getSettings().get(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS);\n- if (autoExpandReplicas != null && Booleans.parseBoolean(autoExpandReplicas, true)) { // Booleans only work for false values, just as we want it here\n- try {\n- final int min;\n- final int max;\n-\n- final int dash = autoExpandReplicas.indexOf('-');\n- if (-1 == dash) {\n- logger.warn(\"failed to set [{}] for index [{}], it should be dash delimited [{}]\",\n- IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), autoExpandReplicas);\n- continue;\n- }\n- final String sMin 
= autoExpandReplicas.substring(0, dash);\n- try {\n- min = Integer.parseInt(sMin);\n- } catch (NumberFormatException e) {\n- logger.warn(\"failed to set [{}] for index [{}], minimum value is not a number [{}]\",\n- e, IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), sMin);\n- continue;\n- }\n- String sMax = autoExpandReplicas.substring(dash + 1);\n- if (sMax.equals(ALL_NODES_VALUE)) {\n- max = dataNodeCount - 1;\n- } else {\n- try {\n- max = Integer.parseInt(sMax);\n- } catch (NumberFormatException e) {\n- logger.warn(\"failed to set [{}] for index [{}], maximum value is neither [{}] nor a number [{}]\",\n- e, IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), ALL_NODES_VALUE, sMax);\n- continue;\n- }\n- }\n+ AutoExpandReplicas autoExpandReplicas = IndexMetaData.INDEX_AUTO_EXPAND_REPLICAS_SETTING.get(indexMetaData.getSettings());\n+ if (autoExpandReplicas.isEnabled()) {\n+ /*\n+ * we have to expand the number of replicas for this index to at least min and at most max nodes here\n+ * so we are bumping it up if we have to or reduce it depending on min/max and the number of datanodes.\n+ * If we change the number of replicas we just let the shard allocator do it's thing once we updated it\n+ * since it goes through the index metadata to figure out if something needs to be done anyway. Do do that\n+ * we issue a cluster settings update command below and kicks off a reroute.\n+ */\n+ final int min = autoExpandReplicas.getMinReplicas();\n+ final int max = autoExpandReplicas.getMaxReplicas(dataNodeCount);\n+ int numberOfReplicas = dataNodeCount - 1;\n+ if (numberOfReplicas < min) {\n+ numberOfReplicas = min;\n+ } else if (numberOfReplicas > max) {\n+ numberOfReplicas = max;\n+ }\n+ // same value, nothing to do there\n+ if (numberOfReplicas == indexMetaData.getNumberOfReplicas()) {\n+ continue;\n+ }\n \n- int numberOfReplicas = dataNodeCount - 1;\n- if (numberOfReplicas < min) {\n- numberOfReplicas = min;\n- } else if (numberOfReplicas > max) {\n- numberOfReplicas = max;\n- }\n+ if (numberOfReplicas >= min && numberOfReplicas <= max) {\n \n- // same value, nothing to do there\n- if (numberOfReplicas == indexMetaData.getNumberOfReplicas()) {\n- continue;\n+ if (!nrReplicasChanged.containsKey(numberOfReplicas)) {\n+ nrReplicasChanged.put(numberOfReplicas, new ArrayList<>());\n }\n \n- if (numberOfReplicas >= min && numberOfReplicas <= max) {\n-\n- if (!nrReplicasChanged.containsKey(numberOfReplicas)) {\n- nrReplicasChanged.put(numberOfReplicas, new ArrayList<String>());\n- }\n-\n- nrReplicasChanged.get(numberOfReplicas).add(indexMetaData.getIndex());\n- }\n- } catch (Exception e) {\n- logger.warn(\"[{}] failed to parse auto expand replicas\", e, indexMetaData.getIndex());\n+ nrReplicasChanged.get(numberOfReplicas).add(indexMetaData.getIndex());\n }\n }\n }\n \n if (nrReplicasChanged.size() > 0) {\n+ // update settings and kick of a reroute (implicit) for them to take effect\n for (final Integer fNumberOfReplicas : nrReplicasChanged.keySet()) {\n Settings settings = Settings.settingsBuilder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, fNumberOfReplicas).build();\n final List<String> indices = nrReplicasChanged.get(fNumberOfReplicas);\n@@ -182,42 +153,30 @@ public void onFailure(Throwable t) {\n }\n \n public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n- Settings.Builder updatedSettingsBuilder = Settings.settingsBuilder();\n- 
updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n+ final Settings normalizedSettings = Settings.settingsBuilder().put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build();\n+ Settings.Builder settingsForClosedIndices = Settings.builder();\n+ Settings.Builder settingsForOpenIndices = Settings.builder();\n+ Settings.Builder skipppedSettings = Settings.builder();\n+\n+ indexScopedSettings.validate(normalizedSettings);\n // never allow to change the number of shards\n- for (String key : updatedSettingsBuilder.internalMap().keySet()) {\n- if (key.equals(IndexMetaData.SETTING_NUMBER_OF_SHARDS)) {\n+ for (Map.Entry<String, String> entry : normalizedSettings.getAsMap().entrySet()) {\n+ if (entry.getKey().equals(IndexMetaData.SETTING_NUMBER_OF_SHARDS)) {\n listener.onFailure(new IllegalArgumentException(\"can't change the number of shards for an index\"));\n return;\n }\n- }\n-\n- final Settings closeSettings = updatedSettingsBuilder.build();\n-\n- final Set<String> removedSettings = new HashSet<>();\n- final Set<String> errors = new HashSet<>();\n- for (Map.Entry<String, String> setting : updatedSettingsBuilder.internalMap().entrySet()) {\n- if (!dynamicSettings.hasDynamicSetting(setting.getKey())) {\n- removedSettings.add(setting.getKey());\n+ Setting setting = indexScopedSettings.get(entry.getKey());\n+ assert setting != null; // we already validated the normalized settings\n+ settingsForClosedIndices.put(entry.getKey(), entry.getValue());\n+ if (setting.isDynamic()) {\n+ settingsForOpenIndices.put(entry.getKey(), entry.getValue());\n } else {\n- String error = dynamicSettings.validateDynamicSetting(setting.getKey(), setting.getValue(), clusterService.state());\n- if (error != null) {\n- errors.add(\"[\" + setting.getKey() + \"] - \" + error);\n- }\n+ skipppedSettings.put(entry.getKey(), entry.getValue());\n }\n }\n-\n- if (!errors.isEmpty()) {\n- listener.onFailure(new IllegalArgumentException(\"can't process the settings: \" + errors.toString()));\n- return;\n- }\n-\n- if (!removedSettings.isEmpty()) {\n- for (String removedSetting : removedSettings) {\n- updatedSettingsBuilder.remove(removedSetting);\n- }\n- }\n- final Settings openSettings = updatedSettingsBuilder.build();\n+ final Settings skippedSettigns = skipppedSettings.build();\n+ final Settings closedSettings = settingsForClosedIndices.build();\n+ final Settings openSettings = settingsForOpenIndices.build();\n \n clusterService.submitStateUpdateTask(\"update-settings\",\n new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request, listener) {\n@@ -245,16 +204,16 @@ public ClusterState execute(ClusterState currentState) {\n }\n }\n \n- if (closeIndices.size() > 0 && closeSettings.get(IndexMetaData.SETTING_NUMBER_OF_REPLICAS) != null) {\n+ if (closeIndices.size() > 0 && closedSettings.get(IndexMetaData.SETTING_NUMBER_OF_REPLICAS) != null) {\n throw new IllegalArgumentException(String.format(Locale.ROOT,\n \"Can't update [%s] on closed indices [%s] - can leave index in an unopenable state\", IndexMetaData.SETTING_NUMBER_OF_REPLICAS,\n closeIndices\n ));\n }\n- if (!removedSettings.isEmpty() && !openIndices.isEmpty()) {\n+ if (!skippedSettigns.getAsMap().isEmpty() && !openIndices.isEmpty()) {\n throw new IllegalArgumentException(String.format(Locale.ROOT,\n \"Can't update non dynamic settings[%s] for open indices [%s]\",\n- removedSettings,\n+ skippedSettigns.getAsMap().keySet(),\n openIndices\n ));\n }\n@@ -267,71 +226,73 @@ public 
ClusterState execute(ClusterState currentState) {\n }\n \n ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n- Boolean updatedReadOnly = openSettings.getAsBoolean(IndexMetaData.SETTING_READ_ONLY, null);\n- if (updatedReadOnly != null) {\n- for (String index : actualIndices) {\n- if (updatedReadOnly) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_READ_ONLY_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_READ_ONLY_BLOCK);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_READ_ONLY_BLOCK, IndexMetaData.INDEX_READ_ONLY_SETTING, openSettings);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_METADATA_BLOCK, IndexMetaData.INDEX_BLOCKS_METADATA_SETTING, openSettings);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_WRITE_BLOCK, IndexMetaData.INDEX_BLOCKS_WRITE_SETTING, openSettings);\n+ maybeUpdateClusterBlock(actualIndices, blocks, IndexMetaData.INDEX_READ_BLOCK, IndexMetaData.INDEX_BLOCKS_READ_SETTING, openSettings);\n+\n+ if (!openIndices.isEmpty()) {\n+ for (String index : openIndices) {\n+ IndexMetaData indexMetaData = metaDataBuilder.get(index);\n+ if (indexMetaData == null) {\n+ throw new IndexNotFoundException(index);\n }\n- }\n- }\n- Boolean updateMetaDataBlock = openSettings.getAsBoolean(IndexMetaData.SETTING_BLOCKS_METADATA, null);\n- if (updateMetaDataBlock != null) {\n- for (String index : actualIndices) {\n- if (updateMetaDataBlock) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_METADATA_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_METADATA_BLOCK);\n+ Settings.Builder updates = Settings.builder();\n+ Settings.Builder indexSettings = Settings.builder().put(indexMetaData.getSettings());\n+ if (indexScopedSettings.updateDynamicSettings(openSettings, indexSettings, updates, index)) {\n+ metaDataBuilder.put(IndexMetaData.builder(indexMetaData).settings(indexSettings));\n }\n }\n }\n \n- Boolean updateWriteBlock = openSettings.getAsBoolean(IndexMetaData.SETTING_BLOCKS_WRITE, null);\n- if (updateWriteBlock != null) {\n- for (String index : actualIndices) {\n- if (updateWriteBlock) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_WRITE_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_WRITE_BLOCK);\n+ if (!closeIndices.isEmpty()) {\n+ for (String index : closeIndices) {\n+ IndexMetaData indexMetaData = metaDataBuilder.get(index);\n+ if (indexMetaData == null) {\n+ throw new IndexNotFoundException(index);\n }\n- }\n- }\n-\n- Boolean updateReadBlock = openSettings.getAsBoolean(IndexMetaData.SETTING_BLOCKS_READ, null);\n- if (updateReadBlock != null) {\n- for (String index : actualIndices) {\n- if (updateReadBlock) {\n- blocks.addIndexBlock(index, IndexMetaData.INDEX_READ_BLOCK);\n- } else {\n- blocks.removeIndexBlock(index, IndexMetaData.INDEX_READ_BLOCK);\n+ Settings.Builder updates = Settings.builder();\n+ Settings.Builder indexSettings = Settings.builder().put(indexMetaData.getSettings());\n+ if (indexScopedSettings.updateSettings(closedSettings, indexSettings, updates, index)) {\n+ metaDataBuilder.put(IndexMetaData.builder(indexMetaData).settings(indexSettings));\n }\n }\n }\n \n- if (!openIndices.isEmpty()) {\n- String[] indices = openIndices.toArray(new String[openIndices.size()]);\n- metaDataBuilder.updateSettings(openSettings, indices);\n- }\n-\n- if (!closeIndices.isEmpty()) {\n- String[] indices = closeIndices.toArray(new String[closeIndices.size()]);\n- 
metaDataBuilder.updateSettings(closeSettings, indices);\n- }\n-\n \n ClusterState updatedState = ClusterState.builder(currentState).metaData(metaDataBuilder).routingTable(routingTableBuilder.build()).blocks(blocks).build();\n \n // now, reroute in case things change that require it (like number of replicas)\n RoutingAllocation.Result routingResult = allocationService.reroute(updatedState, \"settings update\");\n updatedState = ClusterState.builder(updatedState).routingResult(routingResult).build();\n-\n+ for (String index : openIndices) {\n+ indexScopedSettings.dryRun(updatedState.metaData().index(index).getSettings());\n+ }\n+ for (String index : closeIndices) {\n+ indexScopedSettings.dryRun(updatedState.metaData().index(index).getSettings());\n+ }\n return updatedState;\n }\n });\n }\n \n+ /**\n+ * Updates the cluster block only iff the setting exists in the given settings\n+ */\n+ private static void maybeUpdateClusterBlock(String[] actualIndices, ClusterBlocks.Builder blocks, ClusterBlock block, Setting<Boolean> setting, Settings openSettings) {\n+ if (setting.exists(openSettings)) {\n+ final boolean updateReadBlock = setting.get(openSettings);\n+ for (String index : actualIndices) {\n+ if (updateReadBlock) {\n+ blocks.addIndexBlock(index, block);\n+ } else {\n+ blocks.removeIndexBlock(index, block);\n+ }\n+ }\n+ }\n+ }\n+\n+\n public void upgradeIndexSettings(final UpgradeSettingsClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n \n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
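The rewritten auto-expand logic delegates parsing of `index.auto_expand_replicas` (for example `"0-all"`) to the `AutoExpandReplicas` setting and then clamps `dataNodeCount - 1` between the parsed min and max. A standalone sketch of just that clamping rule, in plain Java with no Elasticsearch types:

```java
public class AutoExpandReplicasSketch {
    /** Desired replica count for a "min-max" auto-expand range, given the number of data nodes. */
    static int desiredReplicas(int min, int max, int dataNodeCount) {
        int numberOfReplicas = dataNodeCount - 1;  // one copy per remaining data node
        if (numberOfReplicas < min) {
            numberOfReplicas = min;
        } else if (numberOfReplicas > max) {
            numberOfReplicas = max;
        }
        return numberOfReplicas;
    }

    public static void main(String[] args) {
        // "0-all" with 4 data nodes: max is dataNodeCount - 1 == 3
        System.out.println(desiredReplicas(0, 3, 4)); // -> 3
        // "0-1" with 5 data nodes: clamped down to 1
        System.out.println(desiredReplicas(0, 1, 5)); // -> 1
        // single node: no replicas possible
        System.out.println(desiredReplicas(0, 3, 1)); // -> 0
    }
}
```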
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -41,10 +42,10 @@\n public class UnassignedInfo implements ToXContent, Writeable<UnassignedInfo> {\n \n public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"dateOptionalTime\");\n-\n- public static final String INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING = \"index.unassigned.node_left.delayed_timeout\";\n private static final TimeValue DEFAULT_DELAYED_NODE_LEFT_TIMEOUT = TimeValue.timeValueMinutes(1);\n \n+ public static final Setting<TimeValue> INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING = Setting.timeSetting(\"index.unassigned.node_left.delayed_timeout\", DEFAULT_DELAYED_NODE_LEFT_TIMEOUT, true, Setting.Scope.INDEX);\n+\n /**\n * Reason why the shard is in unassigned state.\n * <p>\n@@ -215,7 +216,7 @@ public long getAllocationDelayTimeoutSettingNanos(Settings settings, Settings in\n if (reason != Reason.NODE_LEFT) {\n return 0;\n }\n- TimeValue delayTimeout = indexSettings.getAsTime(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, settings.getAsTime(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, DEFAULT_DELAYED_NODE_LEFT_TIMEOUT));\n+ TimeValue delayTimeout = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexSettings, settings);\n return Math.max(0l, delayTimeout.nanos());\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java",
"status": "modified"
},
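`INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING` becomes a typed, dynamic `Setting<TimeValue>`, and the delay lookup uses the new two-argument `get(indexSettings, nodeSettings)` that falls back to the node level when the index does not override it. A minimal sketch, assuming the signatures shown in this diff; the class name is illustrative:

```java
import org.elasticsearch.cluster.routing.UnassignedInfo;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;

public class DelayedTimeoutSketch {
    public static void main(String[] args) {
        Settings nodeSettings = Settings.builder()
            .put("index.unassigned.node_left.delayed_timeout", "5m")
            .build();
        Settings indexSettings = Settings.EMPTY; // the index does not override the timeout

        // index value wins when present, otherwise the node-level value, otherwise the 1m default
        TimeValue delay = UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexSettings, nodeSettings);
        System.out.println(delay); // 5 minutes, taken from the node settings
    }
}
```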
{
"diff": "@@ -32,7 +32,7 @@\n \n /**\n * This allocation decider allows shard allocations / rebalancing via the cluster wide settings {@link #CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING} /\n- * {@link #CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} and the per index setting {@link #INDEX_ROUTING_ALLOCATION_ENABLE} / {@link #INDEX_ROUTING_REBALANCE_ENABLE}.\n+ * {@link #CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} and the per index setting {@link #INDEX_ROUTING_ALLOCATION_ENABLE_SETTING} / {@link #INDEX_ROUTING_REBALANCE_ENABLE_SETTING}.\n * The per index settings overrides the cluster wide setting.\n *\n * <p>\n@@ -61,10 +61,10 @@ public class EnableAllocationDecider extends AllocationDecider {\n public static final String NAME = \"enable\";\n \n public static final Setting<Allocation> CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING = new Setting<>(\"cluster.routing.allocation.enable\", Allocation.ALL.name(), Allocation::parse, true, Setting.Scope.CLUSTER);\n- public static final String INDEX_ROUTING_ALLOCATION_ENABLE= \"index.routing.allocation.enable\";\n+ public static final Setting<Allocation> INDEX_ROUTING_ALLOCATION_ENABLE_SETTING = new Setting<>(\"index.routing.allocation.enable\", Allocation.ALL.name(), Allocation::parse, true, Setting.Scope.INDEX);\n \n public static final Setting<Rebalance> CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING = new Setting<>(\"cluster.routing.rebalance.enable\", Rebalance.ALL.name(), Rebalance::parse, true, Setting.Scope.CLUSTER);\n- public static final String INDEX_ROUTING_REBALANCE_ENABLE = \"index.routing.rebalance.enable\";\n+ public static final Setting<Rebalance> INDEX_ROUTING_REBALANCE_ENABLE_SETTING = new Setting<>(\"index.routing.rebalance.enable\", Rebalance.ALL.name(), Rebalance::parse, true, Setting.Scope.INDEX);\n \n private volatile Rebalance enableRebalance;\n private volatile Allocation enableAllocation;\n@@ -92,11 +92,10 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n return allocation.decision(Decision.YES, NAME, \"allocation disabling is ignored\");\n }\n \n- IndexMetaData indexMetaData = allocation.metaData().index(shardRouting.getIndex());\n- String enableIndexValue = indexMetaData.getSettings().get(INDEX_ROUTING_ALLOCATION_ENABLE);\n+ final IndexMetaData indexMetaData = allocation.metaData().index(shardRouting.getIndex());\n final Allocation enable;\n- if (enableIndexValue != null) {\n- enable = Allocation.parse(enableIndexValue);\n+ if (INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.exists(indexMetaData.getSettings())) {\n+ enable = INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.get(indexMetaData.getSettings());\n } else {\n enable = this.enableAllocation;\n }\n@@ -129,10 +128,9 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca\n }\n \n Settings indexSettings = allocation.routingNodes().metaData().index(shardRouting.index()).getSettings();\n- String enableIndexValue = indexSettings.get(INDEX_ROUTING_REBALANCE_ENABLE);\n final Rebalance enable;\n- if (enableIndexValue != null) {\n- enable = Rebalance.parse(enableIndexValue);\n+ if (INDEX_ROUTING_REBALANCE_ENABLE_SETTING.exists(indexSettings)) {\n+ enable = INDEX_ROUTING_REBALANCE_ENABLE_SETTING.get(indexSettings);\n } else {\n enable = this.enableRebalance;\n }\n@@ -160,7 +158,7 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca\n \n /**\n * Allocation values or rather their string representation to be used used with\n- * {@link EnableAllocationDecider#CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING} / {@link 
EnableAllocationDecider#INDEX_ROUTING_ALLOCATION_ENABLE}\n+ * {@link EnableAllocationDecider#CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING} / {@link EnableAllocationDecider#INDEX_ROUTING_ALLOCATION_ENABLE_SETTING}\n * via cluster / index settings.\n */\n public enum Allocation {\n@@ -186,7 +184,7 @@ public static Allocation parse(String strValue) {\n \n /**\n * Rebalance values or rather their string representation to be used used with\n- * {@link EnableAllocationDecider#CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} / {@link EnableAllocationDecider#INDEX_ROUTING_REBALANCE_ENABLE}\n+ * {@link EnableAllocationDecider#CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING} / {@link EnableAllocationDecider#INDEX_ROUTING_REBALANCE_ENABLE_SETTING}\n * via cluster / index settings.\n */\n public enum Rebalance {",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/EnableAllocationDecider.java",
"status": "modified"
},
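The decider now asks the typed setting whether an index-level override exists before falling back to the cluster-wide value, instead of null-checking a raw string and re-parsing it. A hedged sketch of that pattern, mirroring the `exists`/`get` calls in the diff (the `clusterWide` variable stands in for the value the decider keeps from cluster settings):

```java
import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;
import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider.Allocation;
import org.elasticsearch.common.settings.Settings;

public class PerIndexOverrideSketch {
    public static void main(String[] args) {
        Allocation clusterWide = Allocation.ALL;
        Settings indexSettings = Settings.builder()
            .put("index.routing.allocation.enable", "new_primaries")
            .build();

        final Allocation enable;
        if (EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.exists(indexSettings)) {
            // per-index override wins; the setting parses the value via Allocation::parse
            enable = EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.get(indexSettings);
        } else {
            enable = clusterWide;
        }
        System.out.println(enable); // -> NEW_PRIMARIES
    }
}
```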
{
"diff": "@@ -60,10 +60,6 @@ public class FilterAllocationDecider extends AllocationDecider {\n \n public static final String NAME = \"filter\";\n \n- public static final String INDEX_ROUTING_REQUIRE_GROUP = \"index.routing.allocation.require.\";\n- public static final String INDEX_ROUTING_INCLUDE_GROUP = \"index.routing.allocation.include.\";\n- public static final String INDEX_ROUTING_EXCLUDE_GROUP = \"index.routing.allocation.exclude.\";\n-\n public static final Setting<Settings> CLUSTER_ROUTING_REQUIRE_GROUP_SETTING = Setting.groupSetting(\"cluster.routing.allocation.require.\", true, Setting.Scope.CLUSTER);\n public static final Setting<Settings> CLUSTER_ROUTING_INCLUDE_GROUP_SETTING = Setting.groupSetting(\"cluster.routing.allocation.include.\", true, Setting.Scope.CLUSTER);\n public static final Setting<Settings> CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING = Setting.groupSetting(\"cluster.routing.allocation.exclude.\", true, Setting.Scope.CLUSTER);",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java",
"status": "modified"
},
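The per-index allocation filter prefixes removed here now live on `IndexMetaData` as `Setting.groupSetting(...)` constants (see that file's diff above); a group setting collects every key under its prefix into a `Settings` object with the prefix stripped, matching the old `getByPrefix` behaviour. A minimal sketch under that assumption:

```java
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.Settings;

public class GroupSettingSketch {
    public static void main(String[] args) {
        Settings indexSettings = Settings.builder()
            .put("index.routing.allocation.require.zone", "us-east-1a")
            .put("index.routing.allocation.require.box_type", "hot")
            .build();

        // the group setting returns the key/value pairs under its prefix
        Settings require = IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING.get(indexSettings);
        System.out.println(require.getAsMap()); // e.g. {zone=us-east-1a, box_type=hot}
    }
}
```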
{
"diff": "@@ -32,12 +32,12 @@\n /**\n * This {@link AllocationDecider} limits the number of shards per node on a per\n * index or node-wide basis. The allocator prevents a single node to hold more\n- * than {@value #INDEX_TOTAL_SHARDS_PER_NODE} per index and\n+ * than <tt>index.routing.allocation.total_shards_per_node</tt> per index and\n * <tt>cluster.routing.allocation.total_shards_per_node</tt> globally during the allocation\n * process. The limits of this decider can be changed in real-time via a the\n * index settings API.\n * <p>\n- * If {@value #INDEX_TOTAL_SHARDS_PER_NODE} is reset to a negative value shards\n+ * If <tt>index.routing.allocation.total_shards_per_node</tt> is reset to a negative value shards\n * per index are unlimited per node. Shards currently in the\n * {@link ShardRoutingState#RELOCATING relocating} state are ignored by this\n * {@link AllocationDecider} until the shard changed its state to either\n@@ -59,12 +59,13 @@ public class ShardsLimitAllocationDecider extends AllocationDecider {\n * Controls the maximum number of shards per index on a single Elasticsearch\n * node. Negative values are interpreted as unlimited.\n */\n- public static final String INDEX_TOTAL_SHARDS_PER_NODE = \"index.routing.allocation.total_shards_per_node\";\n+ public static final Setting<Integer> INDEX_TOTAL_SHARDS_PER_NODE_SETTING = Setting.intSetting(\"index.routing.allocation.total_shards_per_node\", -1, -1, true, Setting.Scope.INDEX);\n+\n /**\n * Controls the maximum number of shards per node on a global level.\n * Negative values are interpreted as unlimited.\n */\n- public static final Setting<Integer> CLUSTER_TOTAL_SHARDS_PER_NODE_SETTING = Setting.intSetting(\"cluster.routing.allocation.total_shards_per_node\", -1, true, Setting.Scope.CLUSTER);\n+ public static final Setting<Integer> CLUSTER_TOTAL_SHARDS_PER_NODE_SETTING = Setting.intSetting(\"cluster.routing.allocation.total_shards_per_node\", -1, -1, true, Setting.Scope.CLUSTER);\n \n \n @Inject\n@@ -81,7 +82,7 @@ private void setClusterShardLimit(int clusterShardLimit) {\n @Override\n public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\n IndexMetaData indexMd = allocation.routingNodes().metaData().index(shardRouting.index());\n- int indexShardLimit = indexMd.getSettings().getAsInt(INDEX_TOTAL_SHARDS_PER_NODE, -1);\n+ final int indexShardLimit = INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(indexMd.getSettings(), settings);\n // Capture the limit here in case it changes during this method's\n // execution\n final int clusterShardLimit = this.clusterShardLimit;\n@@ -118,7 +119,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n @Override\n public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\n IndexMetaData indexMd = allocation.routingNodes().metaData().index(shardRouting.index());\n- int indexShardLimit = indexMd.getSettings().getAsInt(INDEX_TOTAL_SHARDS_PER_NODE, -1);\n+ final int indexShardLimit = INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(indexMd.getSettings(), settings);\n // Capture the limit here in case it changes during this method's\n // execution\n final int clusterShardLimit = this.clusterShardLimit;",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java",
"status": "modified"
},
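Both the index and cluster variants of `total_shards_per_node` are now `intSetting`s with a minimum of `-1` (meaning unlimited), and the per-index read falls back to node settings via the two-argument `get`. A small sketch of the bound check that `Setting.intSetting(key, default, minValue, ...)` adds, assuming the constants as declared in this diff:

```java
import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider;
import org.elasticsearch.common.settings.Settings;

public class ShardsPerNodeLimitSketch {
    public static void main(String[] args) {
        Settings ok = Settings.builder()
            .put("index.routing.allocation.total_shards_per_node", 2)
            .build();
        int limit = ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(ok, Settings.EMPTY);
        System.out.println(limit); // -> 2

        Settings bad = Settings.builder()
            .put("index.routing.allocation.total_shards_per_node", -5)
            .build();
        try {
            // values below the declared minimum of -1 are rejected at parse time
            ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(bad, Settings.EMPTY);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```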
{
"diff": "@@ -21,9 +21,13 @@\n \n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.regex.Regex;\n+import org.elasticsearch.common.util.set.Sets;\n \n import java.util.ArrayList;\n+import java.util.Collections;\n import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n@@ -43,17 +47,31 @@ public abstract class AbstractScopedSettings extends AbstractComponent {\n \n protected AbstractScopedSettings(Settings settings, Set<Setting<?>> settingsSet, Setting.Scope scope) {\n super(settings);\n+ this.lastSettingsApplied = Settings.EMPTY;\n+ this.scope = scope;\n for (Setting<?> entry : settingsSet) {\n- if (entry.getScope() != scope) {\n- throw new IllegalArgumentException(\"Setting must be a cluster setting but was: \" + entry.getScope());\n- }\n- if (entry.hasComplexMatcher()) {\n- complexMatchers.put(entry.getKey(), entry);\n- } else {\n- keySettings.put(entry.getKey(), entry);\n- }\n+ addSetting(entry);\n+ }\n+ }\n+\n+ protected AbstractScopedSettings(Settings nodeSettings, Settings scopeSettings, AbstractScopedSettings other) {\n+ super(nodeSettings);\n+ this.lastSettingsApplied = scopeSettings;\n+ this.scope = other.scope;\n+ complexMatchers.putAll(other.complexMatchers);\n+ keySettings.putAll(other.keySettings);\n+ settingUpdaters.addAll(other.settingUpdaters);\n+ }\n+\n+ protected final void addSetting(Setting<?> setting) {\n+ if (setting.getScope() != scope) {\n+ throw new IllegalArgumentException(\"Setting must be a \" + scope + \" setting but was: \" + setting.getScope());\n+ }\n+ if (setting.hasComplexMatcher()) {\n+ complexMatchers.putIfAbsent(setting.getKey(), setting);\n+ } else {\n+ keySettings.putIfAbsent(setting.getKey(), setting);\n }\n- this.scope = scope;\n }\n \n public Setting.Scope getScope() {\n@@ -161,6 +179,34 @@ public synchronized <T> void addSettingsUpdateConsumer(Setting<T> setting, Consu\n addSettingsUpdateConsumer(setting, consumer, (s) -> {});\n }\n \n+ /**\n+ * Validates that all settings in the builder are registered and valid\n+ */\n+ public final void validate(Settings.Builder settingsBuilder) {\n+ validate(settingsBuilder.build());\n+ }\n+\n+ /**\n+ * * Validates that all given settings are registered and valid\n+ */\n+ public final void validate(Settings settings) {\n+ for (Map.Entry<String, String> entry : settings.getAsMap().entrySet()) {\n+ validate(entry.getKey(), settings);\n+ }\n+ }\n+\n+\n+ /**\n+ * Validates that the setting is valid\n+ */\n+ public final void validate(String key, Settings settings) {\n+ Setting setting = get(key);\n+ if (setting == null) {\n+ throw new IllegalArgumentException(\"unknown setting [\" + key + \"]\");\n+ }\n+ setting.get(settings);\n+ }\n+\n /**\n * Transactional interface to update settings.\n * @see Setting\n@@ -253,4 +299,93 @@ public Settings diff(Settings source, Settings defaultSettings) {\n return builder.build();\n }\n \n+ /**\n+ * Returns the value for the given setting.\n+ */\n+ public <T> T get(Setting<T> setting) {\n+ if (setting.getScope() != scope) {\n+ throw new IllegalArgumentException(\"settings scope doesn't match the setting scope [\" + this.scope + \"] != [\" + setting.getScope() + \"]\");\n+ }\n+ if (get(setting.getKey()) == null) {\n+ throw new IllegalArgumentException(\"setting \" + setting.getKey() + \" has not been registered\");\n+ }\n+ return setting.get(this.lastSettingsApplied, settings);\n+ }\n+\n+ /**\n+ * Updates a target settings 
builder with new, updated or deleted settings from a given settings builder.\n+ * <p>\n+ * Note: This method will only allow updates to dynamic settings. if a non-dynamic setting is updated an {@link IllegalArgumentException} is thrown instead.\n+ *</p>\n+ * @param toApply the new settings to apply\n+ * @param target the target settings builder that the updates are applied to. All keys that have explicit null value in toApply will be removed from this builder\n+ * @param updates a settings builder that holds all updates applied to target\n+ * @param type a free text string to allow better exceptions messages\n+ * @return <code>true</code> if the target has changed otherwise <code>false</code>\n+ */\n+ public boolean updateDynamicSettings(Settings toApply, Settings.Builder target, Settings.Builder updates, String type) {\n+ return updateSettings(toApply, target, updates, type, true);\n+ }\n+\n+ /**\n+ * Updates a target settings builder with new, updated or deleted settings from a given settings builder.\n+ * @param toApply the new settings to apply\n+ * @param target the target settings builder that the updates are applied to. All keys that have explicit null value in toApply will be removed from this builder\n+ * @param updates a settings builder that holds all updates applied to target\n+ * @param type a free text string to allow better exceptions messages\n+ * @return <code>true</code> if the target has changed otherwise <code>false</code>\n+ */\n+ public boolean updateSettings(Settings toApply, Settings.Builder target, Settings.Builder updates, String type) {\n+ return updateSettings(toApply, target, updates, type, false);\n+ }\n+\n+ /**\n+ * Updates a target settings builder with new, updated or deleted settings from a given settings builder.\n+ * @param toApply the new settings to apply\n+ * @param target the target settings builder that the updates are applied to. All keys that have explicit null value in toApply will be removed from this builder\n+ * @param updates a settings builder that holds all updates applied to target\n+ * @param type a free text string to allow better exceptions messages\n+ * @param onlyDynamic if <code>false</code> all settings are updated otherwise only dynamic settings are updated. 
if set to <code>true</code> and a non-dynamic setting is updated an exception is thrown.\n+ * @return <code>true</code> if the target has changed otherwise <code>false</code>\n+ */\n+ private boolean updateSettings(Settings toApply, Settings.Builder target, Settings.Builder updates, String type, boolean onlyDynamic) {\n+ boolean changed = false;\n+ final Set<String> toRemove = new HashSet<>();\n+ Settings.Builder settingsBuilder = Settings.settingsBuilder();\n+ for (Map.Entry<String, String> entry : toApply.getAsMap().entrySet()) {\n+ if (entry.getValue() == null) {\n+ toRemove.add(entry.getKey());\n+ } else if ((onlyDynamic == false && get(entry.getKey()) != null) || hasDynamicSetting(entry.getKey())) {\n+ validate(entry.getKey(), toApply);\n+ settingsBuilder.put(entry.getKey(), entry.getValue());\n+ updates.put(entry.getKey(), entry.getValue());\n+ changed = true;\n+ } else {\n+ throw new IllegalArgumentException(type + \" setting [\" + entry.getKey() + \"], not dynamically updateable\");\n+ }\n+\n+ }\n+ changed |= applyDeletes(toRemove, target);\n+ target.put(settingsBuilder.build());\n+ return changed;\n+ }\n+\n+ private static final boolean applyDeletes(Set<String> deletes, Settings.Builder builder) {\n+ boolean changed = false;\n+ for (String entry : deletes) {\n+ Set<String> keysToRemove = new HashSet<>();\n+ Set<String> keySet = builder.internalMap().keySet();\n+ for (String key : keySet) {\n+ if (Regex.simpleMatch(entry, key)) {\n+ keysToRemove.add(key);\n+ }\n+ }\n+ for (String key : keysToRemove) {\n+ builder.remove(key);\n+ changed = true;\n+ }\n+ }\n+ return changed;\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
},
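`updateDynamicSettings` / `updateSettings` apply a delta onto an existing settings builder: explicit `null` values delete keys (with simple wildcard matching), non-dynamic keys are rejected in the dynamic variant, and the return value reports whether anything changed. A hedged sketch of how a caller such as the update-settings service above drives it, assuming the method signatures in this diff:

```java
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Settings;

public class UpdateDynamicSettingsSketch {
    public static void main(String[] args) {
        IndexScopedSettings indexScopedSettings = IndexScopedSettings.DEFAULT_SCOPED_SETTINGS;

        // current index settings and the requested delta
        Settings.Builder target = Settings.builder()
            .put("index.number_of_replicas", 1)
            .put("index.refresh_interval", "30s");
        Settings toApply = Settings.builder()
            .put("index.number_of_replicas", 2)  // dynamic setting: accepted
            .build();

        Settings.Builder updates = Settings.builder();
        boolean changed = indexScopedSettings.updateDynamicSettings(toApply, target, updates, "my-index");
        System.out.println(changed);                    // -> true
        System.out.println(updates.build().getAsMap()); // only the keys that were actually applied
        System.out.println(target.build().getAsMap());  // merged result, replicas now 2
    }
}
```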
{
"diff": "@@ -38,6 +38,8 @@\n import org.elasticsearch.discovery.DiscoverySettings;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n+import org.elasticsearch.gateway.PrimaryShardAllocator;\n+import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.store.IndexStoreConfig;\n import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService;\n import org.elasticsearch.indices.recovery.RecoverySettings;\n@@ -62,7 +64,6 @@ public ClusterSettings(Settings settings, Set<Setting<?>> settingsSet) {\n super(settings, settingsSet, Setting.Scope.CLUSTER);\n }\n \n-\n @Override\n public synchronized Settings applySettings(Settings newSettings) {\n Settings settings = super.applySettings(newSettings);\n@@ -83,6 +84,11 @@ public synchronized Settings applySettings(Settings newSettings) {\n return settings;\n }\n \n+ @Override\n+ public boolean hasDynamicSetting(String key) {\n+ return isLoggerSetting(key) || super.hasDynamicSetting(key);\n+ }\n+\n /**\n * Returns <code>true</code> if the settings is a logger setting.\n */\n@@ -149,5 +155,8 @@ public boolean isLoggerSetting(String key) {\n HierarchyCircuitBreakerService.FIELDDATA_CIRCUIT_BREAKER_TYPE_SETTING,\n HierarchyCircuitBreakerService.REQUEST_CIRCUIT_BREAKER_TYPE_SETTING,\n Transport.TRANSPORT_PROFILES_SETTING,\n- Transport.TRANSPORT_TCP_COMPRESS)));\n+ Transport.TRANSPORT_TCP_COMPRESS,\n+ IndexSettings.QUERY_STRING_ANALYZE_WILDCARD,\n+ IndexSettings.QUERY_STRING_ALLOW_LEADING_WILDCARD,\n+ PrimaryShardAllocator.NODE_INITIAL_SHARDS_SETTING)));\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,155 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.common.settings;\n+\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n+import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider;\n+import org.elasticsearch.gateway.PrimaryShardAllocator;\n+import org.elasticsearch.index.IndexModule;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.IndexingSlowLog;\n+import org.elasticsearch.index.MergePolicyConfig;\n+import org.elasticsearch.index.MergeSchedulerConfig;\n+import org.elasticsearch.index.SearchSlowLog;\n+import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n+import org.elasticsearch.index.engine.EngineConfig;\n+import org.elasticsearch.index.fielddata.IndexFieldDataService;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.percolator.PercolatorQueriesRegistry;\n+import org.elasticsearch.index.store.FsDirectoryService;\n+import org.elasticsearch.index.store.IndexStore;\n+import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.indices.cache.request.IndicesRequestCache;\n+import org.elasticsearch.search.SearchService;\n+\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.Set;\n+import java.util.function.Predicate;\n+\n+/**\n+ * Encapsulates all valid index level settings.\n+ * @see org.elasticsearch.common.settings.Setting.Scope#INDEX\n+ */\n+public final class IndexScopedSettings extends AbstractScopedSettings {\n+\n+ public static final Predicate<String> INDEX_SETTINGS_KEY_PREDICATE = (s) -> s.startsWith(IndexMetaData.INDEX_SETTING_PREFIX);\n+\n+ public static Set<Setting<?>> BUILT_IN_INDEX_SETTINGS = Collections.unmodifiableSet(new HashSet<>(Arrays.asList(\n+ IndexSettings.INDEX_TTL_DISABLE_PURGE_SETTING,\n+ IndexStore.INDEX_STORE_THROTTLE_TYPE_SETTING,\n+ IndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC_SETTING,\n+ MergeSchedulerConfig.AUTO_THROTTLE_SETTING,\n+ MergeSchedulerConfig.MAX_MERGE_COUNT_SETTING,\n+ MergeSchedulerConfig.MAX_THREAD_COUNT_SETTING,\n+ IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING,\n+ IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING,\n+ IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING,\n+ IndexMetaData.INDEX_AUTO_EXPAND_REPLICAS_SETTING,\n+ IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING,\n+ IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING,\n+ IndexMetaData.INDEX_SHADOW_REPLICAS_SETTING,\n+ IndexMetaData.INDEX_SHARED_FILESYSTEM_SETTING,\n+ 
IndexMetaData.INDEX_READ_ONLY_SETTING,\n+ IndexMetaData.INDEX_BLOCKS_READ_SETTING,\n+ IndexMetaData.INDEX_BLOCKS_WRITE_SETTING,\n+ IndexMetaData.INDEX_BLOCKS_METADATA_SETTING,\n+ IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING,\n+ IndexMetaData.INDEX_PRIORITY_SETTING,\n+ IndexMetaData.INDEX_DATA_PATH_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_DEBUG_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_WARN_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_INFO_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_FETCH_TRACE_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_WARN_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_DEBUG_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_INFO_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_THRESHOLD_QUERY_TRACE_SETTING,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_LEVEL,\n+ SearchSlowLog.INDEX_SEARCH_SLOWLOG_REFORMAT,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_TRACE_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_LEVEL_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_REFORMAT_SETTING,\n+ IndexingSlowLog.INDEX_INDEXING_SLOWLOG_MAX_SOURCE_CHARS_TO_LOG_SETTING,\n+ MergePolicyConfig.INDEX_COMPOUND_FORMAT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_FLOOR_SEGMENT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_SEGMENTS_PER_TIER_SETTING,\n+ MergePolicyConfig.INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT_SETTING,\n+ IndexSettings.INDEX_TRANSLOG_DURABILITY_SETTING,\n+ IndexSettings.INDEX_WARMER_ENABLED_SETTING,\n+ IndexSettings.INDEX_REFRESH_INTERVAL_SETTING,\n+ IndexSettings.MAX_RESULT_WINDOW_SETTING,\n+ IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING,\n+ IndexSettings.DEFAULT_FIELD_SETTING,\n+ IndexSettings.QUERY_STRING_LENIENT_SETTING,\n+ IndexSettings.ALLOW_UNMAPPED,\n+ IndexSettings.INDEX_CHECK_ON_STARTUP,\n+ ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE_SETTING,\n+ IndexSettings.INDEX_GC_DELETES_SETTING,\n+ IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING,\n+ UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING,\n+ EnableAllocationDecider.INDEX_ROUTING_REBALANCE_ENABLE_SETTING,\n+ EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE_SETTING,\n+ IndexSettings.INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTTING,\n+ IndexFieldDataService.INDEX_FIELDDATA_CACHE_KEY,\n+ FieldMapper.IGNORE_MALFORMED_SETTING,\n+ FieldMapper.COERCE_SETTING,\n+ Store.INDEX_STORE_STATS_REFRESH_INTERVAL_SETTING,\n+ PercolatorQueriesRegistry.INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING,\n+ MapperService.INDEX_MAPPER_DYNAMIC_SETTING,\n+ MapperService.INDEX_MAPPING_NESTED_FIELDS_LIMIT_SETTING,\n+ BitsetFilterCache.INDEX_LOAD_RANDOM_ACCESS_FILTERS_EAGERLY_SETTING,\n+ IndexModule.INDEX_STORE_TYPE_SETTING,\n+ IndexModule.INDEX_QUERY_CACHE_TYPE_SETTING,\n+ IndexModule.INDEX_QUERY_CACHE_EVERYTHING_SETTING,\n+ PrimaryShardAllocator.INDEX_RECOVERY_INITIAL_SHARDS_SETTING,\n+ FsDirectoryService.INDEX_LOCK_FACTOR_SETTING,\n+ EngineConfig.INDEX_CODEC_SETTING,\n+ 
SearchService.INDEX_NORMS_LOADING_SETTING,\n+ // this sucks but we can't really validate all the analyzers/similarity in here\n+ Setting.groupSetting(\"index.similarity.\", false, Setting.Scope.INDEX), // this allows similarity settings to be passed\n+ Setting.groupSetting(\"index.analysis.\", false, Setting.Scope.INDEX) // this allows analysis settings to be passed\n+\n+ )));\n+\n+ public static final IndexScopedSettings DEFAULT_SCOPED_SETTINGS = new IndexScopedSettings(Settings.EMPTY, IndexScopedSettings.BUILT_IN_INDEX_SETTINGS);\n+\n+ public IndexScopedSettings(Settings settings, Set<Setting<?>> settingsSet) {\n+ super(settings, settingsSet, Setting.Scope.INDEX);\n+ }\n+\n+ private IndexScopedSettings(Settings settings, IndexScopedSettings other, IndexMetaData metaData) {\n+ super(settings, metaData.getSettings(), other);\n+ }\n+\n+ public IndexScopedSettings copy(Settings settings, IndexMetaData metaData) {\n+ return new IndexScopedSettings(settings, this, metaData);\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java",
"status": "added"
},
{
"diff": "@@ -36,6 +36,7 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.Objects;\n import java.util.function.BiConsumer;\n import java.util.function.Consumer;\n import java.util.function.Function;\n@@ -167,6 +168,16 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params)\n return builder;\n }\n \n+ /**\n+ * Returns the value for this setting but falls back to the second provided settings object\n+ */\n+ public final T get(Settings primary, Settings secondary) {\n+ if (exists(primary)) {\n+ return get(primary);\n+ }\n+ return get(secondary);\n+ }\n+\n /**\n * The settings scope - settings can either be cluster settings or per index settings.\n */\n@@ -225,7 +236,7 @@ public String toString() {\n }\n \n \n- private class Updater implements AbstractScopedSettings.SettingUpdater<T> {\n+ private final class Updater implements AbstractScopedSettings.SettingUpdater<T> {\n private final Consumer<T> consumer;\n private final ESLogger logger;\n private final Consumer<T> accept;\n@@ -265,8 +276,8 @@ public T getValue(Settings current, Settings previous) {\n }\n \n @Override\n- public void apply(T value, Settings current, Settings previous) {\n- logger.info(\"update [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n+ public final void apply(T value, Settings current, Settings previous) {\n+ logger.info(\"updating [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n consumer.accept(value);\n }\n }\n@@ -294,6 +305,10 @@ public static Setting<Integer> intSetting(String key, int defaultValue, int minV\n return new Setting<>(key, (s) -> Integer.toString(defaultValue), (s) -> parseInt(s, minValue, key), dynamic, scope);\n }\n \n+ public static Setting<Long> longSetting(String key, long defaultValue, long minValue, boolean dynamic, Scope scope) {\n+ return new Setting<>(key, (s) -> Long.toString(defaultValue), (s) -> parseLong(s, minValue, key), dynamic, scope);\n+ }\n+\n public static int parseInt(String s, int minValue, String key) {\n int value = Integer.parseInt(s);\n if (value < minValue) {\n@@ -302,6 +317,14 @@ public static int parseInt(String s, int minValue, String key) {\n return value;\n }\n \n+ public static long parseLong(String s, long minValue, String key) {\n+ long value = Long.parseLong(s);\n+ if (value < minValue) {\n+ throw new IllegalArgumentException(\"Failed to parse value [\" + s + \"] for setting [\" + key + \"] must be >= \" + minValue);\n+ }\n+ return value;\n+ }\n+\n public static Setting<Integer> intSetting(String key, int defaultValue, boolean dynamic, Scope scope) {\n return intSetting(key, defaultValue, Integer.MIN_VALUE, dynamic, scope);\n }\n@@ -430,6 +453,7 @@ public Settings getValue(Settings current, Settings previous) {\n \n @Override\n public void apply(Settings value, Settings current, Settings previous) {\n+ logger.info(\"updating [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n consumer.accept(value);\n }\n \n@@ -470,4 +494,16 @@ public static Setting<Double> doubleSetting(String key, double defaultValue, dou\n }, dynamic, scope);\n }\n \n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) return true;\n+ if (o == null || getClass() != o.getClass()) return false;\n+ Setting<?> setting = (Setting<?>) o;\n+ return Objects.equals(key, setting.key);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return Objects.hash(key);\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java",
"status": "modified"
},
{
"diff": "@@ -58,6 +58,7 @@\n import java.util.SortedMap;\n import java.util.TreeMap;\n import java.util.concurrent.TimeUnit;\n+import java.util.function.Predicate;\n import java.util.regex.Matcher;\n import java.util.regex.Pattern;\n \n@@ -212,6 +213,19 @@ public Settings getByPrefix(String prefix) {\n return builder.build();\n }\n \n+ /**\n+ * Returns a new settings object that contains all setting of the current one filtered by the given settings key predicate.\n+ */\n+ public Settings filter(Predicate<String> predicate) {\n+ Builder builder = new Builder();\n+ for (Map.Entry<String, String> entry : getAsMap().entrySet()) {\n+ if (predicate.test(entry.getKey())) {\n+ builder.put(entry.getKey(), entry.getValue());\n+ }\n+ }\n+ return builder.build();\n+ }\n+\n /**\n * Returns the settings mapped to the given setting name.\n */",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -34,35 +34,47 @@ public class SettingsModule extends AbstractModule {\n \n private final Settings settings;\n private final SettingsFilter settingsFilter;\n- private final Map<String, Setting<?>> clusterDynamicSettings = new HashMap<>();\n-\n+ private final Map<String, Setting<?>> clusterSettings = new HashMap<>();\n+ private final Map<String, Setting<?>> indexSettings = new HashMap<>();\n \n public SettingsModule(Settings settings, SettingsFilter settingsFilter) {\n this.settings = settings;\n this.settingsFilter = settingsFilter;\n for (Setting<?> setting : ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) {\n registerSetting(setting);\n }\n+ for (Setting<?> setting : IndexScopedSettings.BUILT_IN_INDEX_SETTINGS) {\n+ registerSetting(setting);\n+ }\n }\n \n @Override\n protected void configure() {\n+ final IndexScopedSettings indexScopedSettings = new IndexScopedSettings(settings, new HashSet<>(this.indexSettings.values()));\n+ // by now we are fully configured, lets check node level settings for unregistered index settings\n+ indexScopedSettings.validate(settings.filter(IndexScopedSettings.INDEX_SETTINGS_KEY_PREDICATE));\n bind(Settings.class).toInstance(settings);\n bind(SettingsFilter.class).toInstance(settingsFilter);\n- final ClusterSettings clusterSettings = new ClusterSettings(settings, new HashSet<>(clusterDynamicSettings.values()));\n+ final ClusterSettings clusterSettings = new ClusterSettings(settings, new HashSet<>(this.clusterSettings.values()));\n+\n bind(ClusterSettings.class).toInstance(clusterSettings);\n+ bind(IndexScopedSettings.class).toInstance(indexScopedSettings);\n }\n \n public void registerSetting(Setting<?> setting) {\n switch (setting.getScope()) {\n case CLUSTER:\n- if (clusterDynamicSettings.containsKey(setting.getKey())) {\n+ if (clusterSettings.containsKey(setting.getKey())) {\n throw new IllegalArgumentException(\"Cannot register setting [\" + setting.getKey() + \"] twice\");\n }\n- clusterDynamicSettings.put(setting.getKey(), setting);\n+ clusterSettings.put(setting.getKey(), setting);\n break;\n case INDEX:\n- throw new UnsupportedOperationException(\"not yet implemented\");\n+ if (indexSettings.containsKey(setting.getKey())) {\n+ throw new IllegalArgumentException(\"Cannot register setting [\" + setting.getKey() + \"] twice\");\n+ }\n+ indexSettings.put(setting.getKey(), setting);\n+ break;\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java",
"status": "modified"
},
{
"diff": "@@ -20,12 +20,10 @@\n package org.elasticsearch.common.unit;\n \n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.settings.Settings;\n \n import java.io.IOException;\n import java.util.Locale;\n@@ -176,7 +174,6 @@ public static ByteSizeValue parseBytesSizeValue(String sValue, String settingNam\n \n public static ByteSizeValue parseBytesSizeValue(String sValue, ByteSizeValue defaultValue, String settingName) throws ElasticsearchParseException {\n settingName = Objects.requireNonNull(settingName);\n- assert settingName.startsWith(\"index.\") == false || MetaDataIndexUpgradeService.INDEX_BYTES_SIZE_SETTINGS.contains(settingName);\n if (sValue == null) {\n return defaultValue;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java",
"status": "modified"
},
{
"diff": "@@ -20,12 +20,10 @@\n package org.elasticsearch.common.unit;\n \n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.settings.Settings;\n import org.joda.time.Period;\n import org.joda.time.PeriodType;\n import org.joda.time.format.PeriodFormat;\n@@ -254,7 +252,6 @@ public String getStringRep() {\n \n public static TimeValue parseTimeValue(String sValue, TimeValue defaultValue, String settingName) {\n settingName = Objects.requireNonNull(settingName);\n- assert settingName.startsWith(\"index.\") == false || MetaDataIndexUpgradeService.INDEX_TIME_SETTINGS.contains(settingName) : settingName;\n if (sValue == null) {\n return defaultValue;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/unit/TimeValue.java",
"status": "modified"
},
{
"diff": "@@ -342,7 +342,7 @@ public static void acquireFSLockForPaths(IndexSettings indexSettings, Path... sh\n // resolve the directory the shard actually lives in\n Path p = shardPaths[i].resolve(\"index\");\n // open a directory (will be immediately closed) on the shard's location\n- dirs[i] = new SimpleFSDirectory(p, FsDirectoryService.buildLockFactory(indexSettings));\n+ dirs[i] = new SimpleFSDirectory(p, indexSettings.getValue(FsDirectoryService.INDEX_LOCK_FACTOR_SETTING));\n // create a lock for the \"write.lock\" file\n try {\n locks[i] = dirs[i].obtainLock(IndexWriter.WRITE_LOCK_NAME);",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexSettings;\n \n@@ -40,6 +41,7 @@\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.function.Function;\n import java.util.stream.Collectors;\n \n /**\n@@ -48,15 +50,30 @@\n */\n public abstract class PrimaryShardAllocator extends AbstractComponent {\n \n- @Deprecated\n- public static final String INDEX_RECOVERY_INITIAL_SHARDS = \"index.recovery.initial_shards\";\n+ private static final Function<String, String> INITIAL_SHARDS_PARSER = (value) -> {\n+ switch (value) {\n+ case \"quorum\":\n+ case \"quorum-1\":\n+ case \"half\":\n+ case \"one\":\n+ case \"full\":\n+ case \"full-1\":\n+ case \"all-1\":\n+ case \"all\":\n+ return value;\n+ default:\n+ Integer.parseInt(value); // it can be parsed that's all we care here?\n+ return value;\n+ }\n+ };\n \n- private final String initialShards;\n+ public static final Setting<String> NODE_INITIAL_SHARDS_SETTING = new Setting<>(\"gateway.initial_shards\", (settings) -> settings.get(\"gateway.local.initial_shards\", \"quorum\"), INITIAL_SHARDS_PARSER, true, Setting.Scope.CLUSTER);\n+ @Deprecated\n+ public static final Setting<String> INDEX_RECOVERY_INITIAL_SHARDS_SETTING = new Setting<>(\"index.recovery.initial_shards\", (settings) -> NODE_INITIAL_SHARDS_SETTING.get(settings) , INITIAL_SHARDS_PARSER, true, Setting.Scope.INDEX);\n \n public PrimaryShardAllocator(Settings settings) {\n super(settings);\n- this.initialShards = settings.get(\"gateway.initial_shards\", settings.get(\"gateway.local.initial_shards\", \"quorum\"));\n- logger.debug(\"using initial_shards [{}]\", initialShards);\n+ logger.debug(\"using initial_shards [{}]\", NODE_INITIAL_SHARDS_SETTING.get(settings));\n }\n \n public boolean allocateUnassigned(RoutingAllocation allocation) {\n@@ -73,7 +90,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n }\n \n final IndexMetaData indexMetaData = metaData.index(shard.getIndex());\n- final IndexSettings indexSettings = new IndexSettings(indexMetaData, settings, Collections.emptyList());\n+ final IndexSettings indexSettings = new IndexSettings(indexMetaData, settings);\n \n if (shard.allocatedPostIndexCreate(indexMetaData) == false) {\n // when we create a fresh index\n@@ -209,29 +226,25 @@ private boolean isEnoughVersionBasedAllocationsFound(ShardRouting shard, IndexMe\n // check if the counts meets the minimum set\n int requiredAllocation = 1;\n // if we restore from a repository one copy is more then enough\n- try {\n- String initialShards = indexMetaData.getSettings().get(INDEX_RECOVERY_INITIAL_SHARDS, settings.get(INDEX_RECOVERY_INITIAL_SHARDS, this.initialShards));\n- if (\"quorum\".equals(initialShards)) {\n- if (indexMetaData.getNumberOfReplicas() > 1) {\n- requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2) + 1;\n- }\n- } else if (\"quorum-1\".equals(initialShards) || \"half\".equals(initialShards)) {\n- if (indexMetaData.getNumberOfReplicas() > 2) {\n- requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2);\n- }\n- } else if (\"one\".equals(initialShards)) {\n- requiredAllocation = 1;\n- } else if (\"full\".equals(initialShards) || \"all\".equals(initialShards)) {\n- 
requiredAllocation = indexMetaData.getNumberOfReplicas() + 1;\n- } else if (\"full-1\".equals(initialShards) || \"all-1\".equals(initialShards)) {\n- if (indexMetaData.getNumberOfReplicas() > 1) {\n- requiredAllocation = indexMetaData.getNumberOfReplicas();\n- }\n- } else {\n- requiredAllocation = Integer.parseInt(initialShards);\n+ String initialShards = INDEX_RECOVERY_INITIAL_SHARDS_SETTING.get(indexMetaData.getSettings(), settings);\n+ if (\"quorum\".equals(initialShards)) {\n+ if (indexMetaData.getNumberOfReplicas() > 1) {\n+ requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2) + 1;\n+ }\n+ } else if (\"quorum-1\".equals(initialShards) || \"half\".equals(initialShards)) {\n+ if (indexMetaData.getNumberOfReplicas() > 2) {\n+ requiredAllocation = ((1 + indexMetaData.getNumberOfReplicas()) / 2);\n+ }\n+ } else if (\"one\".equals(initialShards)) {\n+ requiredAllocation = 1;\n+ } else if (\"full\".equals(initialShards) || \"all\".equals(initialShards)) {\n+ requiredAllocation = indexMetaData.getNumberOfReplicas() + 1;\n+ } else if (\"full-1\".equals(initialShards) || \"all-1\".equals(initialShards)) {\n+ if (indexMetaData.getNumberOfReplicas() > 1) {\n+ requiredAllocation = indexMetaData.getNumberOfReplicas();\n }\n- } catch (Exception e) {\n- logger.warn(\"[{}][{}] failed to derived initial_shards from value {}, ignore allocation for {}\", shard.index(), shard.id(), initialShards, shard);\n+ } else {\n+ requiredAllocation = Integer.parseInt(initialShards);\n }\n \n return nodesAndVersions.allocationsFound >= requiredAllocation;\n@@ -336,7 +349,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n */\n private boolean recoverOnAnyNode(IndexSettings indexSettings) {\n return indexSettings.isOnSharedFilesystem()\n- && indexSettings.getSettings().getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false);\n+ && IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING.get(indexSettings.getSettings());\n }\n \n protected abstract AsyncShardFetch.FetchResult<TransportNodesListGatewayStartedShards.NodeGatewayStartedShards> fetchData(ShardRouting shard, RoutingAllocation allocation);",
"filename": "core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java",
"status": "modified"
},
{
"diff": "@@ -56,7 +56,7 @@ public final int compare(ShardRouting o1, ShardRouting o2) {\n }\n \n private int priority(Settings settings) {\n- return settings.getAsInt(IndexMetaData.SETTING_PRIORITY, 1);\n+ return IndexMetaData.INDEX_PRIORITY_SETTING.get(settings);\n }\n \n private long timeCreated(Settings settings) {",
"filename": "core/src/main/java/org/elasticsearch/gateway/PriorityComparator.java",
"status": "modified"
},
{
"diff": "@@ -130,7 +130,7 @@ protected NodeGatewayStartedShards nodeOperation(NodeRequest request) {\n if (metaData != null) {\n ShardPath shardPath = null;\n try {\n- IndexSettings indexSettings = new IndexSettings(metaData, settings, Collections.emptyList());\n+ IndexSettings indexSettings = new IndexSettings(metaData, settings);\n shardPath = ShardPath.loadShardPath(logger, nodeEnv, shardId, indexSettings);\n if (shardPath == null) {\n throw new IllegalStateException(shardId + \" no shard path found\");",
"filename": "core/src/main/java/org/elasticsearch/gateway/TransportNodesListGatewayStartedShards.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.index;\n \n import org.apache.lucene.util.SetOnce;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.analysis.AnalysisRegistry;\n@@ -35,7 +37,6 @@\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.index.store.IndexStore;\n import org.elasticsearch.index.store.IndexStoreConfig;\n-import org.elasticsearch.indices.IndexingMemoryController;\n import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n import org.elasticsearch.indices.mapper.MapperRegistry;\n \n@@ -47,6 +48,7 @@\n import java.util.Set;\n import java.util.function.BiFunction;\n import java.util.function.Consumer;\n+import java.util.function.Function;\n \n /**\n * IndexModule represents the central extension point for index level custom implementations like:\n@@ -57,25 +59,24 @@\n * <tt>\"index.similarity.my_similarity.type : \"BM25\"</tt> can be used.</li>\n * <li>{@link IndexStore} - Custom {@link IndexStore} instances can be registered via {@link #addIndexStore(String, BiFunction)}</li>\n * <li>{@link IndexEventListener} - Custom {@link IndexEventListener} instances can be registered via {@link #addIndexEventListener(IndexEventListener)}</li>\n- * <li>Settings update listener - Custom settings update listener can be registered via {@link #addIndexSettingsListener(Consumer)}</li>\n+ * <li>Settings update listener - Custom settings update listener can be registered via {@link #addSettingsUpdateConsumer(Setting, Consumer)}</li>\n * </ul>\n */\n public final class IndexModule {\n \n- public static final String STORE_TYPE = \"index.store.type\";\n+ public static final Setting<String> INDEX_STORE_TYPE_SETTING = new Setting<>(\"index.store.type\", \"\", Function.identity(), false, Setting.Scope.INDEX);\n public static final String SIMILARITY_SETTINGS_PREFIX = \"index.similarity\";\n public static final String INDEX_QUERY_CACHE = \"index\";\n public static final String NONE_QUERY_CACHE = \"none\";\n- public static final String QUERY_CACHE_TYPE = \"index.queries.cache.type\";\n+ public static final Setting<String> INDEX_QUERY_CACHE_TYPE_SETTING = new Setting<>(\"index.queries.cache.type\", INDEX_QUERY_CACHE, Function.identity(), false, Setting.Scope.INDEX);\n // for test purposes only\n- public static final String QUERY_CACHE_EVERYTHING = \"index.queries.cache.everything\";\n+ public static final Setting<Boolean> INDEX_QUERY_CACHE_EVERYTHING_SETTING = Setting.boolSetting(\"index.queries.cache.everything\", false, false, Setting.Scope.INDEX);\n private final IndexSettings indexSettings;\n private final IndexStoreConfig indexStoreConfig;\n private final AnalysisRegistry analysisRegistry;\n // pkg private so tests can mock\n final SetOnce<EngineFactory> engineFactory = new SetOnce<>();\n private SetOnce<IndexSearcherWrapperFactory> indexSearcherWrapper = new SetOnce<>();\n- private final Set<Consumer<Settings>> settingsConsumers = new HashSet<>();\n private final Set<IndexEventListener> indexEventListeners = new HashSet<>();\n private IndexEventListener listener;\n private final Map<String, BiFunction<String, Settings, SimilarityProvider>> similarities = new HashMap<>();\n@@ -92,17 +93,13 @@ public IndexModule(IndexSettings indexSettings, IndexStoreConfig indexStoreConfi\n }\n \n /**\n- * Adds a settings consumer for this index\n+ * Adds a Setting and it's 
consumer for this index.\n */\n- public void addIndexSettingsListener(Consumer<Settings> listener) {\n- if (listener == null) {\n- throw new IllegalArgumentException(\"listener must not be null\");\n+ public <T> void addSettingsUpdateConsumer(Setting<T> setting, Consumer<T> consumer) {\n+ if (setting == null) {\n+ throw new IllegalArgumentException(\"setting must not be null\");\n }\n-\n- if (settingsConsumers.contains(listener)) {\n- throw new IllegalStateException(\"listener already registered\");\n- }\n- settingsConsumers.add(listener);\n+ indexSettings.getScopedSettings().addSettingsUpdateConsumer(setting, consumer);\n }\n \n /**\n@@ -245,27 +242,29 @@ public interface IndexSearcherWrapperFactory {\n \n public IndexService newIndexService(NodeEnvironment environment, IndexService.ShardStoreDeleter shardStoreDeleter, NodeServicesProvider servicesProvider, MapperRegistry mapperRegistry,\n IndexingOperationListener... listeners) throws IOException {\n- final IndexSettings settings = indexSettings.newWithListener(settingsConsumers);\n IndexSearcherWrapperFactory searcherWrapperFactory = indexSearcherWrapper.get() == null ? (shard) -> null : indexSearcherWrapper.get();\n IndexEventListener eventListener = freeze();\n- final String storeType = settings.getSettings().get(STORE_TYPE);\n+ final String storeType = indexSettings.getValue(INDEX_STORE_TYPE_SETTING);\n final IndexStore store;\n- if (storeType == null || isBuiltinType(storeType)) {\n- store = new IndexStore(settings, indexStoreConfig);\n+ if (Strings.isEmpty(storeType) || isBuiltinType(storeType)) {\n+ store = new IndexStore(indexSettings, indexStoreConfig);\n } else {\n BiFunction<IndexSettings, IndexStoreConfig, IndexStore> factory = storeTypes.get(storeType);\n if (factory == null) {\n throw new IllegalArgumentException(\"Unknown store type [\" + storeType + \"]\");\n }\n- store = factory.apply(settings, indexStoreConfig);\n+ store = factory.apply(indexSettings, indexStoreConfig);\n if (store == null) {\n throw new IllegalStateException(\"store must not be null\");\n }\n }\n- final String queryCacheType = settings.getSettings().get(IndexModule.QUERY_CACHE_TYPE, IndexModule.INDEX_QUERY_CACHE);\n+ indexSettings.getScopedSettings().addSettingsUpdateConsumer(IndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC_SETTING, store::setMaxRate);\n+ indexSettings.getScopedSettings().addSettingsUpdateConsumer(IndexStore.INDEX_STORE_THROTTLE_TYPE_SETTING, store::setType);\n+ final String queryCacheType = indexSettings.getValue(INDEX_QUERY_CACHE_TYPE_SETTING);\n final BiFunction<IndexSettings, IndicesQueryCache, QueryCache> queryCacheProvider = queryCaches.get(queryCacheType);\n- final QueryCache queryCache = queryCacheProvider.apply(settings, servicesProvider.getIndicesQueryCache());\n- return new IndexService(settings, environment, new SimilarityService(settings, similarities), shardStoreDeleter, analysisRegistry, engineFactory.get(),\n+ final QueryCache queryCache = queryCacheProvider.apply(indexSettings, servicesProvider.getIndicesQueryCache());\n+ return new IndexService(indexSettings, environment, new SimilarityService(indexSettings, similarities), shardStoreDeleter, analysisRegistry, engineFactory.get(),\n servicesProvider, queryCache, store, eventListener, searcherWrapperFactory, mapperRegistry, listeners);\n }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/index/IndexModule.java",
"status": "modified"
}
]
}
|
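The Setting changes shown in the diffs above (the new `longSetting` factory with min-value validation via `parseLong`, the two-argument `get(primary, secondary)` fallback, and index-scoped settings via `Setting.Scope.INDEX`) combine as sketched below. This is an illustration only, not part of the PR: the setting key `index.my_plugin.max_items`, the class name, and the values are made up, and the sketch assumes the API exactly as it appears in the diffs.

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;

public class IndexSettingSketch {

    // longSetting(key, default, minValue, dynamic, scope): parseLong rejects values below minValue
    static final Setting<Long> MAX_ITEMS_SETTING =
            Setting.longSetting("index.my_plugin.max_items", 100L, 1L, true, Setting.Scope.INDEX);

    public static void main(String[] args) {
        Settings indexLevel = Settings.builder().put("index.my_plugin.max_items", "250").build();
        Settings nodeLevel = Settings.builder().put("index.my_plugin.max_items", "500").build();

        // get(primary, secondary) falls back to the second settings object
        // when the key is absent from the first one
        long fromIndex = MAX_ITEMS_SETTING.get(indexLevel, nodeLevel);     // 250
        long fromNode  = MAX_ITEMS_SETTING.get(Settings.EMPTY, nodeLevel); // 500
        long byDefault = MAX_ITEMS_SETTING.get(Settings.EMPTY);            // 100 (the default)

        System.out.println(fromIndex + " " + fromNode + " " + byDefault);
    }
}
```

The index-level value wins when present, which is the same primary/secondary pattern the diff uses for `index.recovery.initial_shards` falling back to `gateway.initial_shards`.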
{
"body": "Here is a script to reproduce the problem:\n\n``` shell\nES_SERVER=http://localhost:9200\ncurl -XPUT $ES_SERVER/myindex?pretty\n\n# Insert mapping with geo_shape type\ncurl -XPUT $ES_SERVER/myindex/_mapping/mytype?pretty -d '\n{\n \"mytype\" : {\n \"properties\": {\n \"location\": {\n \"type\": \"geo_shape\",\n \"tree\": \"quadtree\",\n \"precision\": \"1m\"\n }\n }\n }\n}\n'\n\n# Check that mapping is correct\ncurl -XGET $ES_SERVER/myindex/_mapping/?pretty\n\ncurl -XPUT $ES_SERVER/myindex/mytype/mydocument?pretty -d'\n{\n \"location\" : {\n \"type\" : \"point\",\n \"coordinates\" : [1.44207, 43.59959]\n }\n}\n'\n\n# Check that we can retrieve our document with a normal query\ncurl -XGET $ES_SERVER/myindex/mytype/_search?pretty -d '\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [[0, 50],[2, 40]]\n }\n }\n }\n }\n}\n'\n# Try to submit the same query to the percolator. FAIL!\ncurl -XPUT $ES_SERVER/myindex/.percolator/myquery?pretty -d '\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [[0, 50],[2, 40]]\n }\n }\n }\n }\n}\n'\n\n```\n\nEverything goes as expected except the last request which returns:\n\n``` json\n{\n \"error\" : \"PercolatorException[[myindex] failed to parse query [myquery]]; nested: QueryParsingException[[myindex] Field [location] is not a geo_shape]; \",\n \"status\" : 500\n}\n```\n\nFollowing the documentation, it seems that it should have worked. I'll try to figure out the problem and submit a patch (whether it turns out to be a documentation patch or a code patch), but it would be great if someone could confirm if it's a bug or a misuse from my end.\n",
"comments": [
{
"body": "Hi @amarandon \n\nI've just tried this on 1.5.0, and it works correctly. \n\n```\nDELETE myindex\n\nPUT /myindex?pretty\n\n# Insert mapping with geo_shape type\nPUT /myindex/_mapping/mytype?pretty\n{\n \"mytype\": {\n \"properties\": {\n \"location\": {\n \"type\": \"geo_shape\",\n \"tree\": \"quadtree\",\n \"precision\": \"1m\"\n }\n }\n }\n}\n\n# Check that mapping is correct\nGET /myindex/_mapping/?pretty\n\n\nPUT /myindex/mytype/mydocument?pretty\n{\n \"location\": {\n \"type\": \"point\",\n \"coordinates\": [\n 1.44207,\n 43.59959\n ]\n }\n}\n\n# Check that we can retrieve our document with a normal query\n\nGET /myindex/mytype/_search?pretty\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [[0, 50],[2, 40]]\n }\n }\n }\n }\n}\n\n# Try to submit the same query to the percolator. Works\nPUT /myindex/.percolator/myquery?pretty\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [[0, 50],[2, 40]]\n }\n }\n }\n }\n}\n\n# Percolate request works too\nPOST myindex/mytype/_percolate\n{\n \"doc\": {\n \"location\": {\n \"type\": \"point\",\n \"coordinates\": [\n 1.44207,\n 43.59959\n ]\n }\n }\n}\n```\n",
"created_at": "2015-04-12T14:39:06Z"
},
{
"body": "Hi @clintongormley Thanks for trying it out. I found out that the issue is triggered by having `index.percolator.map_unmapped_fields_as_string: true` in my config file. I'm in the process of migrating an app built against an earlier version of Elasticsearch and found out that I had to enable that option to keep it working because not all the percolator queries we record have corresponding mappings.\n",
"created_at": "2015-04-12T15:07:44Z"
},
{
"body": "@clintongormley Does it still work for you with `index.percolator.map_unmapped_fields_as_string: true` in your config file?\n",
"created_at": "2015-04-14T06:26:19Z"
},
{
"body": "@amarandon if you're using that option, then I'm not surprised it fails... A geoshape query can't work on a string field. You need to have the field specified in the mapping before you can create a percolator which uses it. Otherwise (with that setting enabled) it will assume that the missing field is a string, and... fail\n",
"created_at": "2015-04-14T13:22:03Z"
},
{
"body": "@clintongormley But I do have the field specified in the mapping before I create the percolator which uses it. In the test script I provided, we create that mapping explicitly and even check that it's been properly created with a GET request before trying to create a percolator query against it. In other words the geo_shape field is not unmapped and shouldn't be affected by that option.\n",
"created_at": "2015-04-14T13:49:57Z"
},
{
"body": "Gotcha! And I can recreate, too. Agreed, this is a bug.\n\n@martijnvg please could you take a look\n\nHere's the full recreation:\n\n```\nPUT /myindex?pretty\n{\n \"settings\": {\n \"index.percolator.map_unmapped_fields_as_string\":true\n }\n}\n\n# Insert mapping with geo_shape type\nPUT /myindex/_mapping/mytype?pretty\n{\n \"mytype\": {\n \"properties\": {\n \"location\": {\n \"type\": \"geo_shape\",\n \"tree\": \"quadtree\",\n \"precision\": \"1m\"\n }\n }\n }\n}\n\n# Check that mapping is correct\nGET /myindex/_mapping/?pretty\n\n\nPUT /myindex/mytype/mydocument?pretty\n{\n \"location\": {\n \"type\": \"point\",\n \"coordinates\": [\n 1.44207,\n 43.59959\n ]\n }\n}\n\n# Check that we can retrieve our document with a normal query\n\nGET /myindex/mytype/_search?pretty\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [[0, 50],[2, 40]]\n }\n }\n }\n }\n}\n\n# Try to submit the same query to the percolator. Works\nPUT /myindex/.percolator/myquery?pretty\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [[0, 50],[2, 40]]\n }\n }\n }\n }\n}\n\n# Percolate request works too\nPOST myindex/mytype/_percolate\n{\n \"doc\": {\n \"location\": {\n \"type\": \"point\",\n \"coordinates\": [\n 1.44207,\n 43.59959\n ]\n }\n }\n}\n```\n",
"created_at": "2015-04-14T14:17:44Z"
},
{
"body": "This still fails in 2.2 and master, but it fails when trying to PUT the percolator query with `location is not a geoshape`.\n\n@martijnvg are percolator queries still parse-once, or are they parsed on each execution? If the latter, then could we just remove the `index.percolator.map_unmapped_fields_as_string` setting?\n",
"created_at": "2016-01-17T16:53:56Z"
},
{
"body": "@clintongormley Percolator queries are still parsed once. \n\nThis issue was caused by a bug that if the 'map unmapped fields as strings' was enabled it would even substitute found fields with string fields! The `geo_shape` query has a hard check if what type of field is being returned from the mapping as therefor fails. Luckily this is easy to fix: #16043\n",
"created_at": "2016-01-17T21:15:08Z"
}
],
"number": 10500,
"title": "Cannot put percolator query on geo_shape field"
}
|
{
"body": "PR for #10500\n",
"number": 16043,
"review_comments": [
{
"body": "I will fix this on 2.x, 2.2 branches too, but when back porting I will not remove this bwc code here.\n",
"created_at": "2016-01-17T21:11:10Z"
}
],
"title": "Don't replace found fields if map unmapped fields as string is enabled"
}
|
{
"commits": [
{
"message": "percolator: If `index.percolator.map_unmapped_fields_as_string` is enabled then don't replace found fields with dummy string field\n\nCloses #10500"
}
],
"files": [
{
"diff": "@@ -282,20 +282,14 @@ public void setMapUnmappedFieldAsString(boolean mapUnmappedFieldAsString) {\n this.mapUnmappedFieldAsString = mapUnmappedFieldAsString;\n }\n \n- private MappedFieldType failIfFieldMappingNotFound(String name, MappedFieldType fieldMapping) {\n- if (allowUnmappedFields) {\n+ MappedFieldType failIfFieldMappingNotFound(String name, MappedFieldType fieldMapping) {\n+ if (fieldMapping != null || allowUnmappedFields) {\n return fieldMapping;\n } else if (mapUnmappedFieldAsString) {\n StringFieldMapper.Builder builder = MapperBuilders.stringField(name);\n return builder.build(new Mapper.BuilderContext(indexSettings.getSettings(), new ContentPath(1))).fieldType();\n } else {\n- Version indexCreatedVersion = indexSettings.getIndexVersionCreated();\n- if (fieldMapping == null && indexCreatedVersion.onOrAfter(Version.V_1_4_0_Beta1)) {\n- throw new QueryShardException(this, \"Strict field resolution and no field mapping can be found for the field with name [\"\n- + name + \"]\");\n- } else {\n- return fieldMapping;\n- }\n+ throw new QueryShardException(this, \"No field mapping can be found for the field with name [{}]\", name);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,83 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.query;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.core.StringFieldMapper;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.Collections;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.nullValue;\n+import static org.hamcrest.Matchers.sameInstance;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n+\n+public class QueryShardContextTests extends ESTestCase {\n+\n+ public void testFailIfFieldMappingNotFound() {\n+ IndexMetaData.Builder indexMetadata = new IndexMetaData.Builder(\"index\");\n+ indexMetadata.settings(Settings.builder().put(\"index.version.created\", Version.CURRENT)\n+ .put(\"index.number_of_shards\", 1)\n+ .put(\"index.number_of_replicas\", 1)\n+ );\n+ IndexSettings indexSettings = new IndexSettings(indexMetadata.build(), Settings.EMPTY);\n+ MapperService mapperService = mock(MapperService.class);\n+ when(mapperService.getIndexSettings()).thenReturn(indexSettings);\n+ QueryShardContext context = new QueryShardContext(\n+ indexSettings, null, null, null, mapperService, null, null, null\n+ );\n+\n+ context.setAllowUnmappedFields(false);\n+ MappedFieldType fieldType = new StringFieldMapper.StringFieldType();\n+ MappedFieldType result = context.failIfFieldMappingNotFound(\"name\", fieldType);\n+ assertThat(result, sameInstance(fieldType));\n+ try {\n+ context.failIfFieldMappingNotFound(\"name\", null);\n+ fail(\"exception expected\");\n+ } catch (QueryShardException e) {\n+ assertThat(e.getMessage(), equalTo(\"No field mapping can be found for the field with name [name]\"));\n+ }\n+\n+ context.setAllowUnmappedFields(true);\n+ result = context.failIfFieldMappingNotFound(\"name\", fieldType);\n+ assertThat(result, sameInstance(fieldType));\n+ result = context.failIfFieldMappingNotFound(\"name\", null);\n+ assertThat(result, nullValue());\n+\n+ context.setAllowUnmappedFields(false);\n+ context.setMapUnmappedFieldAsString(true);\n+ result = context.failIfFieldMappingNotFound(\"name\", fieldType);\n+ assertThat(result, sameInstance(fieldType));\n+ result = context.failIfFieldMappingNotFound(\"name\", null);\n+ assertThat(result, notNullValue());\n+ assertThat(result, 
instanceOf(StringFieldMapper.StringFieldType.class));\n+ assertThat(result.name(), equalTo(\"name\"));\n+ }\n+\n+}",
"filename": "core/src/test/java/org/elasticsearch/index/query/QueryShardContextTests.java",
"status": "added"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.percolator;\n \n+import com.vividsolutions.jts.geom.Coordinate;\n import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;\n import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;\n@@ -30,6 +31,7 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.Requests;\n+import org.elasticsearch.common.geo.builders.ShapeBuilders;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.Settings.Builder;\n@@ -71,6 +73,7 @@\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.geoShapeQuery;\n import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n@@ -1836,6 +1839,33 @@ public void testMapUnmappedFieldAsString() throws IOException{\n assertThat(response1.getMatches(), arrayWithSize(1));\n }\n \n+ public void testGeoShapeWithMapUnmappedFieldAsString() throws Exception {\n+ // If index.percolator.map_unmapped_fields_as_string is set to true, unmapped field is mapped as an analyzed string.\n+ Settings.Builder settings = Settings.settingsBuilder()\n+ .put(indexSettings())\n+ .put(\"index.percolator.map_unmapped_fields_as_string\", true);\n+ assertAcked(prepareCreate(\"test\")\n+ .setSettings(settings)\n+ .addMapping(\"type\", \"location\", \"type=geo_shape\"));\n+ client().prepareIndex(\"test\", PercolatorService.TYPE_NAME, \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"query\", geoShapeQuery(\"location\", ShapeBuilders.newEnvelope(new Coordinate(0d, 50d), new Coordinate(2d, 40d)))).endObject())\n+ .get();\n+ refresh();\n+\n+ PercolateResponse response1 = client().preparePercolate()\n+ .setIndices(\"test\").setDocumentType(\"type\")\n+ .setPercolateDoc(docBuilder().setDoc(jsonBuilder().startObject()\n+ .startObject(\"location\")\n+ .field(\"type\", \"point\")\n+ .field(\"coordinates\", Arrays.asList(1.44207d, 43.59959d))\n+ .endObject()\n+ .endObject()))\n+ .execute().actionGet();\n+ assertMatchCount(response1, 1L);\n+ assertThat(response1.getMatches().length, equalTo(1));\n+ assertThat(response1.getMatches()[0].getId().string(), equalTo(\"1\"));\n+ }\n+\n public void testFailNicelyWithInnerHits() throws Exception {\n XContentBuilder mapping = XContentFactory.jsonBuilder().startObject()\n .startObject(\"mapping\")",
"filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorIT.java",
"status": "modified"
}
]
}
|
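The fix to `QueryShardContext.failIfFieldMappingNotFound` in the diff above boils down to reordering the checks so that a mapping that was actually found always wins over the "map unmapped field as string" substitution. The following is a simplified, self-contained model of that decision order; it does not use the real Elasticsearch types, and `FieldType`, the method names, and the exception messages are invented for illustration.

```java
public class UnmappedFieldResolutionSketch {

    enum FieldType { GEO_SHAPE, STRING_DUMMY }

    // Old order: with mapUnmappedAsString enabled, even a resolved geo_shape mapping
    // was replaced by the dummy string type, so the geo_shape query later failed.
    static FieldType resolveOld(FieldType found, boolean allowUnmapped, boolean mapUnmappedAsString) {
        if (allowUnmapped) {
            return found;
        } else if (mapUnmappedAsString) {
            return FieldType.STRING_DUMMY; // bug: ignores a mapping that was actually found
        } else if (found == null) {
            throw new IllegalStateException("no field mapping found");
        }
        return found;
    }

    // Fixed order: a found mapping always wins; the dummy string type is only used
    // when nothing was found and the setting is enabled.
    static FieldType resolveFixed(FieldType found, boolean allowUnmapped, boolean mapUnmappedAsString) {
        if (found != null || allowUnmapped) {
            return found;
        } else if (mapUnmappedAsString) {
            return FieldType.STRING_DUMMY;
        }
        throw new IllegalStateException("no field mapping found");
    }

    public static void main(String[] args) {
        // Scenario from the issue: the field IS mapped as geo_shape and the setting is enabled.
        System.out.println(resolveOld(FieldType.GEO_SHAPE, false, true));   // STRING_DUMMY -> "not a geo_shape" error
        System.out.println(resolveFixed(FieldType.GEO_SHAPE, false, true)); // GEO_SHAPE -> query parses
    }
}
```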
{
"body": "Spinoff from #14121...\n\nToday, when ES detects it's using too much heap vs the configured indexing buffer (default 10% of JVM heap) it opens a new searcher to force Lucene to move the bytes to disk, clear version map, etc.\n\nBut this has the unexpected side effect of making newly indexed/deleted documents visible to future searches, which is not nice for users who are trying to prevent that, e.g. #3593.\n\nAs @uschindler suggested in that issue, I think ES should have two separate searchers from the engine: one for search visibility, only ever refreshed according to the user's wishes, and another, used internally for freeing up heap, version map lookups, etc. Lucene will be efficient about this, sharing segment readers across those two searchers.\n\nI haven't started on this (need to finish #14121 first!) so if someone wants to take it, please feel free!\n",
"comments": [
{
"body": "I'll try to tackle this ... it doesn't look too hard, given the changes in #14121 which already begins separate \"write indexing buffer to disk\" from \"refresh\".\n",
"created_at": "2016-01-06T14:46:27Z"
},
{
"body": "Note that with this change, a refresh only happens when the user expects it to: on the periodic (default: every 1 second) interval, or when refresh API is explicitly invoked.\n\nBut this is a biggish change to ES's behavior vs today, e.g. `flush`, `forceMerge`, moving indexing buffers to disk because they are too big, etc. does NOT refresh, and a good number of tests are angry because of this ... so I'm slowly inserting `refresh()` for such tests.\n\nIt also has implications for transient disk usage, since ES will \"secretly\" refresh less often, meaning we hold segments, which may now be merged or deleted, open for longer. Users who disable `refresh_interval` (set to -1) need to be careful to invoke refresh API at important times (after `flush` or `forceMerge`).\n\nStill I think it is important we make ES's semantics/behavior crisp and well defined: `refresh`, and `refresh` alone, makes recent index changes visible to searches. No other operation should do this as an \"accidental\" side effect.\n",
"created_at": "2016-01-06T18:56:38Z"
},
{
"body": "Just a note - since ES moves shard around on it’s own will, if we want to support this, we’ll have to make sure shard relocation (i.e., copy all the files) maintains this semantics. This will be tricky for many reasons - for example, the user may issue a refresh command when the target shard is not yet ready to receive it (engine closed). Today we refresh the target at the end of every recovery for this reason. Have a “refresh when I say and only when I say” is much more complicated then the current “refresh at least when I say” (but whenever you want as well) semantics. I’m not sure it’s worth the complexity imho.\n\n> On 06 Jan 2016, at 19:56, Michael McCandless notifications@github.com wrote:\n> \n> Note that with this change, a refresh only happens when the user expects it to: on the periodic (default: every 1 second) interval, or when refresh API is explicitly invoked.\n> \n> But this is a biggish change to ES's behavior vs today, e.g. flush, forceMerge, moving indexing buffers to disk because they are too big, etc. does NOT refresh, and a good number of tests are angry because of this ... so I'm slowly inserting refresh() for such tests.\n> \n> It also has implications for transient disk usage, since ES will \"secretly\" refresh less often, meaning we hold segments, which may now be merged or deleted, open for longer. Users who disable refresh_interval (set to -1) need to be careful to invoke refresh API at important times (after flush or forceMerge).\n> \n> Still I think it is important we make ES's semantics/behavior crisp and well defined: refresh, and refresh alone, makes recent index changes visible to searches. No other operation should do this as an \"accidental\" side effect.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2016-01-06T19:11:02Z"
},
{
"body": "Thanks @bleskes, I agree this is too difficult to achieve \"perfectly\", and I think recovery should be unchanged here (refresh when the shard is done recovering).\n\nSimilarly, primary and each replica are in general searching slightly different of a shard today, i.e. when each refreshes every 1s by default, it's a different set of indexed docs that become visible, in general.\n\nFile-based replication would make this easier ;)\n\nSo I think those should remain out of scope, here, and we should still state that ES is a \"refresh at least when I say\", but with this issue \"less often when I don't say\" than today.\n\nOr are you saying we shouldn't even try to make any change here, i.e. leave the engine doing a normal search-visible refresh when e.g. it wants to free up heap used by version map?\n",
"created_at": "2016-01-06T22:40:34Z"
},
{
"body": "> So I think those should remain out of scope, here, and we should still state that ES is a \"refresh at least when I say\", but with this issue \"less often when I don't say\" than today.\n\n+1 to this - I think for the replication case we should refresh since we have to but for stuff like clearing version maps etc. we can improve the situation.\n",
"created_at": "2016-01-11T20:22:12Z"
}
],
"number": 15768,
"title": "Use separate searchers for \"search visibility\" vs \"move indexing buffer to disk\""
}
|
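The idea discussed in this issue can be shown at the Lucene level: two `SearcherManager` instances over the same `IndexWriter` refresh independently, so an internal refresh (to free heap or clear a version map) does not make new documents visible through the externally used manager. The sketch below is a standalone Lucene illustration, not Elasticsearch code; the class name and variables are made up.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.store.RAMDirectory;

public class TwoSearcherManagersSketch {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        // Two independent managers over the same writer: one "internal", one "external".
        SearcherManager internal = new SearcherManager(writer, true, null);
        SearcherManager external = new SearcherManager(writer, true, null);

        Document doc = new Document();
        doc.add(new StringField("id", "1", Field.Store.NO));
        writer.addDocument(doc);

        internal.maybeRefreshBlocking(); // e.g. to free heap / clear a version map

        IndexSearcher ext = external.acquire();
        try {
            // still 0: the external view only changes when it is refreshed explicitly
            System.out.println("externally visible docs: " + ext.getIndexReader().numDocs());
        } finally {
            external.release(ext);
        }

        external.maybeRefreshBlocking(); // the explicit, user-visible refresh
        ext = external.acquire();
        try {
            System.out.println("externally visible docs: " + ext.getIndexReader().numDocs()); // 1
        } finally {
            external.release(ext);
        }

        internal.close();
        external.close();
        writer.close();
        dir.close();
    }
}
```

Segment readers are shared between the two managers, so the extra manager is cheap; only the point at which changes become searchable differs.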
{
"body": "This change makes it more predictable/controllable to users when recently indexed documents become visible for searches via refresh by making version map use a private searcher so that when it needs to free up heap, its refreshes won't be visible to users.\n\nNote that this also means flush no longer also refreshes, so I had to fix some tests that were relying on this.\n\nNote that as @bleskes explained on #157658 this won't be perfect, e.g. when shards relocate there is a user-visible refresh automatically done. Progress not perfection...\n\nCloses #15768 \n",
"number": 16028,
"review_comments": [
{
"body": "This feels a bit weird. Maybe we should separate this into two methods:\n1. create the reader and read the lastComittedInfos\n2. create a searcherManger based on the given reader. If one needs a dedicated directoryReader per searcher manager (not sure of lucene semantics), can we maybe call `lastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store);` once in the constructor?\n",
"created_at": "2016-01-17T06:54:17Z"
},
{
"body": "I think this may a leak an unclosed searchManager if the second one runs into trouble (than only the second one will be closed). Maybe we should deal with all of that the constructor? we already have a try catch there?\n",
"created_at": "2016-01-17T06:55:29Z"
},
{
"body": "this seems to bypass the IndexSearcherWrapper defined in IndexShard (and passed via the searchFactory)\n",
"created_at": "2016-01-17T06:59:14Z"
},
{
"body": "why do we need to acquire a searcher here?\n",
"created_at": "2016-01-17T06:59:59Z"
},
{
"body": "Is this a separate bug you found?\n",
"created_at": "2016-01-17T07:00:26Z"
},
{
"body": "correct me if I'm wrong, but this is about trimming the version map, right?\n",
"created_at": "2016-01-17T07:01:11Z"
},
{
"body": "do we want to unify this code with the refresh method by making this get a manager to work on? (+ a string description for failures)\n",
"created_at": "2016-01-17T07:02:13Z"
},
{
"body": "it's done in refreshInternal as well.. so I'm probably missing something w.r.t. lucene. Looking to learn ;)\n",
"created_at": "2016-01-17T07:03:20Z"
},
{
"body": "This is a great idea (single `DirectoryReader` on init, two searcher managers).\n",
"created_at": "2016-01-18T16:53:28Z"
},
{
"body": "I think we are actually OK here: the finally clause in the ctor would close the first searcherManager. I agree it's confusing ...\n",
"created_at": "2016-01-18T16:55:39Z"
},
{
"body": "Good catch, no reason!\n",
"created_at": "2016-01-18T16:59:25Z"
},
{
"body": "This is pre-existing, and I don't know why refresh means it's a good time to ask merge scheduler to pull any settings changes. We already call this in `onSettingsChanged`... I suspect it's not really necessary :)\n",
"created_at": "2016-01-18T17:02:55Z"
},
{
"body": "There were RT get tests that were angry, because apparently an \"RT get from xlog\" can sometimes fail (if there were mapping changes maybe, not sure?) where \"RT get from the index\" would succeed. So if an explicit refresh is invoked I need to refresh both internal and external readers.\n",
"created_at": "2016-01-18T17:08:21Z"
},
{
"body": "++\n",
"created_at": "2016-01-18T17:08:49Z"
},
{
"body": "no you can remove this - it's called when needed in onSettingsChanged\n",
"created_at": "2016-01-19T11:19:08Z"
},
{
"body": "I think the internalSearcherManager doesn't need the same searcher factory as we use for the external one since it doesn't need to do warming etc? I think it should be very lean?\n",
"created_at": "2016-01-19T11:21:20Z"
},
{
"body": "++\n",
"created_at": "2016-01-19T11:21:55Z"
}
],
"title": "Refreshes to clear heap used by version map should not be visible to users"
}
|
{
"commits": [
{
"message": "use separate private searcher for version map refreshing so user has more control over when refresh makes recently indexed docs visible"
},
{
"message": "refactor"
}
],
"files": [
{
"diff": "@@ -336,13 +336,25 @@ public final GetResult get(Get get) throws EngineException {\n * @see Searcher#close()\n */\n public final Searcher acquireSearcher(String source) throws EngineException {\n+ return acquireSearcher(source, getSearcherManager());\n+ }\n+\n+ /**\n+ * Returns a new searcher instance from the specified {@link SearcherManager}. The consumer of this\n+ * API is responsible for releasing the returned seacher in a\n+ * safe manner, preferably in a try/finally block.\n+ *\n+ * @see Searcher#close()\n+ */\n+ protected final Searcher acquireSearcher(String source, SearcherManager manager) throws EngineException {\n+ assert manager != null;\n+\n boolean success = false;\n /* Acquire order here is store -> manager since we need\n * to make sure that the store is not closed before\n * the searcher is acquired. */\n store.incRef();\n try {\n- final SearcherManager manager = getSearcherManager(); // can never be null\n /* This might throw NPE but that's fine we will run ensureOpen()\n * in the catch block and throw the right exception */\n final IndexSearcher searcher = manager.acquire();",
"filename": "core/src/main/java/org/elasticsearch/index/engine/Engine.java",
"status": "modified"
},
{
"diff": "@@ -100,8 +100,13 @@ public class InternalEngine extends Engine {\n private final IndexWriter indexWriter;\n \n private final SearcherFactory searcherFactory;\n+\n+ // Used to make recent indexing changes visible to incoming searches:\n private final SearcherManager searcherManager;\n \n+ // Used to move indexing buffer to disk w/o making searches see the changes:\n+ private final SearcherManager internalSearcherManager;\n+\n private final Lock flushLock = new ReentrantLock();\n private final ReentrantLock optimizeLock = new ReentrantLock();\n \n@@ -128,8 +133,9 @@ public InternalEngine(EngineConfig engineConfig, boolean skipInitialTranslogReco\n store.incRef();\n IndexWriter writer = null;\n Translog translog = null;\n- SearcherManager manager = null;\n EngineMergeScheduler scheduler = null;\n+ SearcherManager searcherManager = null;\n+ SearcherManager internalSearcherManager = null;\n boolean success = false;\n try {\n this.lastDeleteVersionPruneTimeMSec = engineConfig.getThreadPool().estimatedTimeInMillis();\n@@ -161,23 +167,28 @@ public InternalEngine(EngineConfig engineConfig, boolean skipInitialTranslogReco\n }\n }\n this.translog = translog;\n- manager = createSearcherManager();\n- this.searcherManager = manager;\n- this.versionMap.setManager(searcherManager);\n+ searcherManager = createSearcherManager();\n+ this.searcherManager = searcherManager;\n+\n+ internalSearcherManager = createSearcherManager();\n+ this.internalSearcherManager = internalSearcherManager;\n+\n+ this.versionMap.setManager(internalSearcherManager);\n try {\n if (skipInitialTranslogRecovery) {\n // make sure we point at the latest translog from now on..\n commitIndexWriter(writer, translog, lastCommittedSegmentInfos.getUserData().get(SYNC_COMMIT_ID));\n } else {\n recoverFromTranslog(engineConfig, translogGeneration);\n+ // IndexShard.finalizeRecovery will refresh() us, so we don't here\n }\n } catch (IOException | EngineException ex) {\n throw new EngineCreationFailureException(shardId, \"failed to recover from translog\", ex);\n }\n success = true;\n } finally {\n if (success == false) {\n- IOUtils.closeWhileHandlingException(writer, translog, manager, scheduler);\n+ IOUtils.closeWhileHandlingException(writer, translog, searcherManager, internalSearcherManager, scheduler);\n versionMap.clear();\n if (isClosed.get() == false) {\n // failure we need to dec the store reference\n@@ -285,14 +296,15 @@ private Translog.TranslogGeneration loadTranslogIdFromCommit(IndexWriter writer)\n }\n \n private SearcherManager createSearcherManager() throws EngineException {\n- boolean success = false;\n+ DirectoryReader directoryReader = null;\n SearcherManager searcherManager = null;\n try {\n try {\n- final DirectoryReader directoryReader = ElasticsearchDirectoryReader.wrap(DirectoryReader.open(indexWriter, true), shardId);\n+ directoryReader = ElasticsearchDirectoryReader.wrap(DirectoryReader.open(indexWriter, true), shardId);\n searcherManager = new SearcherManager(directoryReader, searcherFactory);\n- lastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store);\n- success = true;\n+ if (lastCommittedSegmentInfos == null) {\n+ lastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store);\n+ }\n return searcherManager;\n } catch (IOException e) {\n maybeFailEngine(\"start\", e);\n@@ -304,8 +316,8 @@ private SearcherManager createSearcherManager() throws EngineException {\n throw new EngineCreationFailureException(shardId, \"failed to open reader on writer\", e);\n }\n } 
finally {\n- if (success == false) { // release everything we created on a failure\n- IOUtils.closeWhileHandlingException(searcherManager, indexWriter);\n+ if (searcherManager == null) { // release everything we created on a failure\n+ IOUtils.closeWhileHandlingException(directoryReader, indexWriter);\n }\n }\n }\n@@ -330,13 +342,19 @@ public GetResult get(Get get, Function<String, Searcher> searcherFactory) throws\n return new GetResult(true, versionValue.version(), op.getSource());\n }\n }\n- }\n \n- // no version, get the version from the index, we know that we refresh on flush\n- return getFromSearcher(get, searcherFactory);\n+ return getFromSearcher(get, this::acquireInternalSearcher);\n+ } else {\n+ // no version, get the version from the index, we know that we refresh on flush\n+ return getFromSearcher(get, searcherFactory);\n+ }\n }\n }\n \n+ private final Searcher acquireInternalSearcher(String source) throws EngineException {\n+ return acquireSearcher(source, internalSearcherManager);\n+ }\n+\n @Override\n public boolean index(Index index) {\n final boolean created;\n@@ -495,6 +513,8 @@ public void refresh(String source) throws EngineException {\n try (ReleasableLock lock = readLock.acquire()) {\n ensureOpen();\n searcherManager.maybeRefreshBlocking();\n+ IndexSearcher s = searcherManager.acquire();\n+ searcherManager.release(s);\n } catch (AlreadyClosedException e) {\n ensureOpen();\n maybeFailEngine(\"refresh\", e);\n@@ -505,7 +525,29 @@ public void refresh(String source) throws EngineException {\n throw new RefreshFailedEngineException(shardId, t);\n }\n \n- // TODO: maybe we should just put a scheduled job in threadPool?\n+ mergeScheduler.refreshConfig();\n+\n+ // we must also refresh our internal searcher, so that a subsequent real-time get (which uses internal searcher) comes from the\n+ // index and not xlog:\n+ refreshInternal();\n+ }\n+\n+ private void refreshInternal() throws EngineException {\n+ // we obtain a read lock here, since we don't want a flush to happen while we are refreshing\n+ // since it flushes the index as well (though, in terms of concurrency, we are allowed to do it)\n+ try (ReleasableLock lock = readLock.acquire()) {\n+ ensureOpen();\n+ internalSearcherManager.maybeRefreshBlocking();\n+ } catch (AlreadyClosedException e) {\n+ ensureOpen();\n+ maybeFailEngine(\"refreshInternal\", e);\n+ } catch (EngineClosedException e) {\n+ throw e;\n+ } catch (Throwable t) {\n+ failEngine(\"refreshInternal failed\", t);\n+ throw new RefreshFailedEngineException(shardId, t);\n+ }\n+\n // We check for pruning in each delete request, but we also prune here e.g. 
in case a delete burst comes in and then no more deletes\n // for a long time:\n maybePruneDeletedTombstones();\n@@ -532,7 +574,7 @@ public void writeIndexingBuffer() throws EngineException {\n // The version map is using > 25% of the indexing buffer, so we do a refresh so the version map also clears\n logger.debug(\"use refresh to write indexing buffer (heap size=[{}]), to also clear version map (heap size=[{}])\",\n new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes));\n- refresh(\"write indexing buffer\");\n+ refreshInternal();\n } else {\n // Most of our heap is used by the indexing buffer, so we do a cheaper (just writes segments, doesn't open a new searcher) IW.flush:\n logger.debug(\"use IndexWriter.flush to write indexing buffer (heap size=[{}]) since version map is small (heap size=[{}])\",\n@@ -599,9 +641,6 @@ final boolean tryRenewSyncCommit() {\n maybeFailEngine(\"renew sync commit\", ex);\n throw new EngineException(shardId, \"failed to renew sync commit\", ex);\n }\n- if (renewed) { // refresh outside of the write lock\n- refresh(\"renew sync commit\");\n- }\n \n return renewed;\n }\n@@ -643,7 +682,7 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti\n commitIndexWriter(indexWriter, translog);\n logger.trace(\"finished commit for flush\");\n // we need to refresh in order to clear older version values\n- refresh(\"version_table_flush\");\n+ refreshInternal();\n // after refresh documents can be retrieved from the index so we can now commit the translog\n translog.commit();\n } catch (Throwable e) {\n@@ -691,7 +730,7 @@ private void pruneDeletedTombstones() {\n \n // TODO: not good that we reach into LiveVersionMap here; can we move this inside VersionMap instead? problem is the dirtyLock...\n \n- // we only need to prune the deletes map; the current/old version maps are cleared on refresh:\n+ // we only need to prune the deletes map; the current/old version maps are cleared on refreshInternal:\n for (Map.Entry<BytesRef, VersionValue> entry : versionMap.getAllTombstones()) {\n BytesRef uid = entry.getKey();\n synchronized (dirtyLock(uid)) { // can we do it without this lock on each value? maybe batch to a set and get the lock once per set?\n@@ -865,6 +904,11 @@ protected final void closeNoLock(String reason) {\n } catch (Throwable t) {\n logger.warn(\"Failed to close SearcherManager\", t);\n }\n+ try {\n+ IOUtils.close(internalSearcherManager);\n+ } catch (Throwable t) {\n+ logger.warn(\"Failed to close internal SearcherManager\", t);\n+ }\n try {\n IOUtils.close(translog);\n } catch (Throwable t) {\n@@ -902,8 +946,11 @@ private Object dirtyLock(Term uid) {\n }\n \n private long loadCurrentVersionFromIndex(Term uid) throws IOException {\n- try (final Searcher searcher = acquireSearcher(\"load_version\")) {\n- return Versions.loadVersion(searcher.reader(), uid);\n+ IndexSearcher searcher = internalSearcherManager.acquire();\n+ try {\n+ return Versions.loadVersion(searcher.getIndexReader(), uid);\n+ } finally {\n+ internalSearcherManager.release(searcher);\n }\n }\n \n@@ -1112,7 +1159,7 @@ public synchronized void afterMerge(OnGoingMerge merge) {\n @Override\n public void onFailure(Throwable t) {\n if (isClosed.get() == false) {\n- logger.warn(\"failed to flush after merge has finished\");\n+ logger.warn(\"failed to flush after merge has finished\", t);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -47,6 +47,7 @@ public void setupIndex() {\n client().prepareIndex(\"test\", \"type1\", id).setSource(\"text\", \"sometext\").get();\n }\n client().admin().indices().prepareFlush(\"test\").setWaitIfOngoing(true).get();\n+ client().admin().indices().prepareRefresh(\"test\").get();\n }\n \n public void testBasic() {",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentsRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -425,6 +425,7 @@ void assertUpgradeWorks(String indexName, boolean alreadyLatest) throws Exceptio\n UpgradeIT.assertNotUpgraded(client(), indexName);\n }\n assertNoFailures(client().admin().indices().prepareUpgrade(indexName).get());\n+ refresh(indexName);\n UpgradeIT.assertUpgraded(client(), indexName);\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/bwcompat/OldIndexBackwardsCompatibilityIT.java",
"status": "modified"
},
{
"diff": "@@ -127,8 +127,9 @@ public void testSimpleGet() {\n assertThat(response.getField(\"field1\").getValues().get(0).toString(), equalTo(\"value1\"));\n assertThat(response.getField(\"field2\"), nullValue());\n \n- logger.info(\"--> flush the index, so we load it from it\");\n+ logger.info(\"--> flush and refresh the index, so we load it from it\");\n flush();\n+ refresh();\n \n logger.info(\"--> realtime get 1 (loaded from index)\");\n response = client().prepareGet(indexOrAlias(), \"type1\", \"1\").get();",
"filename": "core/src/test/java/org/elasticsearch/get/GetActionIT.java",
"status": "modified"
},
{
"diff": "@@ -1037,13 +1037,15 @@ public void testForceMerge() throws IOException {\n assertEquals(numDocs, test.reader().numDocs());\n }\n engine.forceMerge(true, 1, false, false, false);\n+ engine.refresh(\"test\");\n assertEquals(engine.segments(true).size(), 1);\n \n ParsedDocument doc = testParsedDocument(Integer.toString(0), Integer.toString(0), \"test\", null, -1, -1, testDocument(), B_1, null);\n Engine.Index index = new Engine.Index(newUid(Integer.toString(0)), doc);\n engine.delete(new Engine.Delete(index.type(), index.id(), index.uid()));\n engine.forceMerge(true, 10, true, false, false); //expunge deletes\n \n+ engine.refresh(\"test\");\n assertEquals(engine.segments(true).size(), 1);\n try (Engine.Searcher test = engine.acquireSearcher(\"test\")) {\n assertEquals(numDocs - 1, test.reader().numDocs());\n@@ -1055,6 +1057,7 @@ public void testForceMerge() throws IOException {\n engine.delete(new Engine.Delete(index.type(), index.id(), index.uid()));\n engine.forceMerge(true, 10, false, false, false); //expunge deletes\n \n+ engine.refresh(\"test\");\n assertEquals(engine.segments(true).size(), 1);\n try (Engine.Searcher test = engine.acquireSearcher(\"test\")) {\n assertEquals(numDocs - 2, test.reader().numDocs());\n@@ -1651,6 +1654,7 @@ public void testTranslogReplayWithFailure() throws IOException {\n // no mock directory, no fun.\n engine = createEngine(store, primaryTranslogDir);\n }\n+ engine.refresh(\"test\");\n try (Engine.Searcher searcher = engine.acquireSearcher(\"test\")) {\n TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), randomIntBetween(numDocs, numDocs + 10));\n assertThat(topDocs.totalHits, equalTo(numDocs));\n@@ -1807,6 +1811,7 @@ public void testTranslogReplay() throws IOException {\n engine.close();\n engine.config().setCreate(false);\n engine = new InternalEngine(engine.config(), false); // we need to reuse the engine config unless the parser.mappingModified won't work\n+ engine.refresh(\"test\");\n \n try (Engine.Searcher searcher = engine.acquireSearcher(\"test\")) {\n TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), randomIntBetween(numDocs, numDocs + 10));\n@@ -1853,6 +1858,7 @@ public void testTranslogReplay() throws IOException {\n \n engine.close();\n engine = createEngine(store, primaryTranslogDir);\n+ engine.refresh(\"test\");\n try (Engine.Searcher searcher = engine.acquireSearcher(\"test\")) {\n TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), numDocs + 1);\n assertThat(topDocs.totalHits, equalTo(numDocs + 1));\n@@ -1861,11 +1867,10 @@ public void testTranslogReplay() throws IOException {\n assertEquals(flush ? 
1 : 2, parser.recoveredOps.get());\n engine.delete(new Engine.Delete(\"test\", Integer.toString(randomId), newUid(uuidValue)));\n if (randomBoolean()) {\n- engine.refresh(\"test\");\n- } else {\n engine.close();\n engine = createEngine(store, primaryTranslogDir);\n }\n+ engine.refresh(\"test\");\n try (Engine.Searcher searcher = engine.acquireSearcher(\"test\")) {\n TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), numDocs);\n assertThat(topDocs.totalHits, equalTo(numDocs));\n@@ -1947,6 +1952,7 @@ public void testRecoverFromForeignTranslog() throws IOException {\n }\n \n engine = createEngine(store, primaryTranslogDir); // and recover again!\n+ engine.refresh(\"test\");\n try (Engine.Searcher searcher = engine.acquireSearcher(\"test\")) {\n TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), randomIntBetween(numDocs, numDocs + 10));\n assertThat(topDocs.totalHits, equalTo(numDocs));",
"filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
},
{
"diff": "@@ -538,6 +538,7 @@ public void testSegmentsStats() {\n \n client().admin().indices().prepareFlush().get();\n client().admin().indices().prepareForceMerge().setMaxNumSegments(1).execute().actionGet();\n+ refresh(\"test1\");\n stats = client().admin().indices().prepareStats().setSegments(true).get();\n \n assertThat(stats.getTotal().getSegments(), notNullValue());",
"filename": "core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java",
"status": "modified"
},
{
"diff": "@@ -515,6 +515,7 @@ public void testHasChildAndHasParentFailWhenSomeSegmentsDontContainAnyParentOrCh\n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"p_field\", 1).get();\n client().prepareIndex(\"test\", \"child\", \"1\").setParent(\"1\").setSource(\"c_field\", 1).get();\n client().admin().indices().prepareFlush(\"test\").get();\n+ refresh(\"test\");\n \n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"p_field\", 1).get();\n client().admin().indices().prepareFlush(\"test\").get();\n@@ -778,6 +779,7 @@ public void testHasChildAndHasParentFilter_withFilter() throws Exception {\n \n client().prepareIndex(\"test\", \"type1\", \"3\").setSource(\"p_field\", 2).get();\n client().admin().indices().prepareFlush(\"test\").get();\n+ refresh(\"test\");\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", termQuery(\"c_field\", 1)))).get();",
"filename": "core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java",
"status": "modified"
},
{
"diff": "@@ -78,8 +78,9 @@ public void testSimpleNested() throws Exception {\n .endObject()).execute().actionGet();\n \n waitForRelocation(ClusterHealthStatus.GREEN);\n- // flush, so we fetch it from the index (as see that we filter nested docs)\n+ // flush and refresh, so we fetch it from the index (as see that we filter nested docs)\n flush();\n+ refresh();\n GetResponse getResponse = client().prepareGet(\"test\", \"type1\", \"1\").get();\n assertThat(getResponse.isExists(), equalTo(true));\n assertThat(getResponse.getSourceAsBytes(), notNullValue());\n@@ -124,8 +125,9 @@ public void testSimpleNested() throws Exception {\n .endArray()\n .endObject()).execute().actionGet();\n waitForRelocation(ClusterHealthStatus.GREEN);\n- // flush, so we fetch it from the index (as see that we filter nested docs)\n+ // flush and refresh, so we fetch it from the index (as see that we filter nested docs)\n flush();\n+ refresh();\n assertDocumentCount(\"test\", 6);\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\",\n@@ -149,8 +151,8 @@ public void testSimpleNested() throws Exception {\n DeleteResponse deleteResponse = client().prepareDelete(\"test\", \"type1\", \"2\").execute().actionGet();\n assertThat(deleteResponse.isFound(), equalTo(true));\n \n- // flush, so we fetch it from the index (as see that we filter nested docs)\n- flush();\n+ // refresh, so we fetch it from the index (as see that we filter nested docs)\n+ refresh();\n assertDocumentCount(\"test\", 3);\n \n searchResponse = client().prepareSearch(\"test\").setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\", \"n_value1_1\"))).execute().actionGet();\n@@ -181,8 +183,9 @@ public void testMultiNested() throws Exception {\n .endArray()\n .endObject()).execute().actionGet();\n \n- // flush, so we fetch it from the index (as see that we filter nested docs)\n+ // flush and refresh, so we fetch it from the index (as see that we filter nested docs)\n flush();\n+ refresh();\n GetResponse getResponse = client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet();\n assertThat(getResponse.isExists(), equalTo(true));\n waitForRelocation(ClusterHealthStatus.GREEN);\n@@ -1057,14 +1060,10 @@ public void testCheckFixedBitSetCache() throws Exception {\n assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), equalTo(0l));\n }\n \n- /**\n- */\n private void assertDocumentCount(String index, long numdocs) {\n IndicesStatsResponse stats = admin().indices().prepareStats(index).clear().setDocs(true).get();\n assertNoFailures(stats);\n assertThat(stats.getIndex(index).getPrimaries().docs.getCount(), is(numdocs));\n \n }\n-\n-\n }",
"filename": "core/src/test/java/org/elasticsearch/search/nested/SimpleNestedIT.java",
"status": "modified"
},
{
"diff": "@@ -114,6 +114,7 @@ public void testUpdateScripts() {\n ensureGreen(\"test_index\");\n client().prepareIndex(\"test_index\", \"test_type\", \"1\").setSource(\"{\\\"foo\\\":\\\"bar\\\"}\").get();\n flush(\"test_index\");\n+ refresh(\"test_index\");\n \n int iterations = randomIntBetween(2, 11);\n for (int i = 1; i < iterations; i++) {",
"filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/IndexedScriptTests.java",
"status": "modified"
}
]
}
|
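The InternalEngine diff above introduces a second SearcherManager that is refreshed independently of the one serving searches, so real-time gets and version lookups can see changes that were moved to disk without making them visible to ordinary searches. Below is a self-contained Lucene sketch of that idea, with a made-up class name and Lucene 5.x-era constructors matching the era of this PR; it is not the Elasticsearch implementation.

```java
import java.nio.file.Files;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class DualSearcherManagerSketch {
    public static void main(String[] args) throws Exception {
        try (Directory dir = FSDirectory.open(Files.createTempDirectory("dual-sm"));
             IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {

            // One manager serves regular searches, the other backs internal
            // operations such as real-time get and version lookups.
            SearcherManager external = new SearcherManager(writer, true, new SearcherFactory());
            SearcherManager internal = new SearcherManager(writer, true, new SearcherFactory());

            Document doc = new Document();
            doc.add(new StringField("id", "1", Field.Store.YES));
            writer.addDocument(doc);

            // Refresh only the internal manager: internal lookups now see the
            // document, but external searches still do not.
            internal.maybeRefreshBlocking();
            System.out.println("internal sees " + numDocs(internal) + " docs"); // 1
            System.out.println("external sees " + numDocs(external) + " docs"); // 0

            // An explicit refresh makes the change visible to external searches too.
            external.maybeRefreshBlocking();
            System.out.println("external sees " + numDocs(external) + " docs"); // 1

            external.close();
            internal.close();
        }
    }

    private static int numDocs(SearcherManager manager) throws Exception {
        IndexSearcher searcher = manager.acquire();
        try {
            return searcher.getIndexReader().numDocs();
        } finally {
            manager.release(searcher);
        }
    }
}
```

The point of the split is that maybeRefreshBlocking on one manager opens a new near-real-time reader only for that manager, so visibility of indexing changes can be controlled per consumer.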
{
"body": "It seems that if you disable the `_source` when first creating the index, and subsequent updates to the mapping will fail with `Merge failed with failures {[Cannot update enabled setting for [_source]]}`, even if the update doesn't change `_source` at all.\n\nTested on ES 2.1\n\n``` json\n{\n \"name\": \"Domo\",\n \"cluster_name\": \"elasticsearch\",\n \"version\": {\n \"number\": \"2.1.0\",\n \"build_hash\": \"72cd1f1a3eee09505e036106146dc1949dc5dc87\",\n \"build_timestamp\": \"2015-11-18T22:40:03Z\",\n \"build_snapshot\": false,\n \"lucene_version\": \"5.3.1\"\n },\n \"tagline\": \"You Know, for Search\"\n}\n```\n### Example\n\n``` js\nDELETE /data\n\nPUT /data\n{\n \"mappings\": {\n \"data\": {\n \"_source\": {\n \"enabled\": false\n },\n \"properties\": {\n \"float\": {\n \"type\": \"float\"\n },\n \"double\": {\n \"type\": \"double\"\n }\n }\n }\n }\n}\n\nGET /data/_mapping\n```\n\n``` json\n{\n \"data\": {\n \"mappings\": {\n \"data\": {\n \"_source\": {\n \"enabled\": false\n },\n \"properties\": {\n \"double\": {\n \"type\": \"double\"\n },\n \"float\": {\n \"type\": \"float\"\n }\n }\n }\n }\n }\n}\n```\n\nAll good, but if we try to update the mapping:\n\n``` js\nPUT /data/_mapping/data\n{\n \"properties\": {\n \"long\": {\n \"type\": \"long\"\n }\n }\n}\n```\n\n``` json\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"merge_mapping_exception\",\n \"reason\": \"Merge failed with failures {[Cannot update enabled setting for [_source]]}\"\n }\n ],\n \"type\": \"merge_mapping_exception\",\n \"reason\": \"Merge failed with failures {[Cannot update enabled setting for [_source]]}\"\n },\n \"status\": 400\n}\n```\n\nIf you repeat the process without touching `_source`, everything works as expected:\n\n``` js\nDELETE /data\n\nPUT /data\n{\n \"mappings\": {\n \"data\": {\n \"properties\": {\n \"float\": {\n \"type\": \"float\"\n },\n \"double\": {\n \"type\": \"double\"\n }\n }\n }\n }\n}\n\nGET /data/_mapping\n\nPUT /data/_mapping/data\n{\n \"properties\": {\n \"long\": {\n \"type\": \"long\"\n }\n }\n}\n\nGET /data/_mapping\n```\n\n``` json\n{\n \"data\": {\n \"mappings\": {\n \"data\": {\n \"properties\": {\n \"double\": {\n \"type\": \"double\"\n },\n \"float\": {\n \"type\": \"float\"\n },\n \"long\": {\n \"type\": \"long\"\n }\n }\n }\n }\n }\n}\n```\n\nAnd it doesn't seem to matter if you enable or disable _source, both cases will cause the error. It seems to be the presence of specifying _source that breaks future updates.\n",
"comments": [
{
"body": "I should note, if you explicitly set `_source` in your update it works as expected:\n\n``` js\nPUT /data/_mapping/data\n{\n \"_source\": {\n \"enabled\": false\n },\n \"properties\": {\n \"long\": {\n \"type\": \"long\"\n }\n }\n}\n```\n",
"created_at": "2016-01-14T18:20:03Z"
},
{
"body": "It looks to me like this would also be an issue with some other metadata mappers, eg `_timestamp`. The issue is currently when initializing the builder for a type (eg when parsing mappings), we use the current fieldtype for each current metadata mapper to get a metadata mapper for that specific type (so that by default it will match the metadata mapper used by other document types). However, we really need the entire metadata mapper, since there are settings like this that are not part of the field type. Also, SourceFieldMapper does not even use the existing field type (although that does not matter in this case).\n",
"created_at": "2016-01-14T20:12:41Z"
},
{
"body": "Same thing goes for `_parent`:\n\n```\nPUT t \n{\n \"mappings\": {\n \"parent\": {},\n \"child\": {\n \"_parent\": {\n \"type\": \"parent\"\n }\n }\n }\n}\n\nPUT t/_mapping/child\n{\n \"properties\": {}\n}\n```\n\nReturns:\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"The _parent field's type option can't be changed: [parent]->[null]\"\n }\n ],\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"The _parent field's type option can't be changed: [parent]->[null]\"\n },\n \"status\": 400\n}\n```\n",
"created_at": "2016-01-15T11:32:42Z"
}
],
"number": 15997,
"title": "Disabling _source prevents any further updates to mapping"
}
|
{
"body": "When a metadata mapper is not specified in a mapping update, it should default\nto the current metadata mapper instead of the general default in order for the\nupdate to not conflict with the current mapping.\n\nCloses #15997\n",
"number": 16023,
"review_comments": [],
"title": "Reuse metadata mappers for dynamic updates."
}
|
{
"commits": [
{
"message": "Reuse metadata mappers for dynamic updates.\n\nWhen a metadata mapper is not specified in a mapping update, it should default\nto the current metadata mapper instead of the general default in order for the\nupdate to not conflict with the current mapping.\n\nCloses #15997"
}
],
"files": [
{
"diff": "@@ -79,10 +79,20 @@ public Builder(RootObjectMapper.Builder builder, MapperService mapperService) {\n this.builderContext = new Mapper.BuilderContext(indexSettings, new ContentPath(1));\n this.rootObjectMapper = builder.build(builderContext);\n \n+ final String type = rootObjectMapper.name();\n+ DocumentMapper existingMapper = mapperService.documentMapper(type);\n for (Map.Entry<String, MetadataFieldMapper.TypeParser> entry : mapperService.mapperRegistry.getMetadataMapperParsers().entrySet()) {\n final String name = entry.getKey();\n- final TypeParser parser = entry.getValue();\n- final MetadataFieldMapper metadataMapper = parser.getDefault(indexSettings, mapperService.fullName(name), builder.name());\n+ final MetadataFieldMapper existingMetadataMapper = existingMapper == null\n+ ? null\n+ : (MetadataFieldMapper) existingMapper.mappers().getMapper(name);\n+ final MetadataFieldMapper metadataMapper;\n+ if (existingMetadataMapper == null) {\n+ final TypeParser parser = entry.getValue();\n+ metadataMapper = parser.getDefault(indexSettings, mapperService.fullName(name), builder.name());\n+ } else {\n+ metadataMapper = existingMetadataMapper;\n+ }\n metadataMappers.put(metadataMapper.getClass(), metadataMapper);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n+import java.io.IOException;\n import java.util.concurrent.CyclicBarrier;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicReference;\n@@ -203,4 +204,28 @@ public void run() {\n throw error.get();\n }\n }\n+\n+ public void testDoNotRepeatOriginalMapping() throws IOException {\n+ CompressedXContent mapping = new CompressedXContent(XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_source\")\n+ .field(\"enabled\", false)\n+ .endObject()\n+ .endObject().endObject().bytes());\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n+ mapperService.merge(\"type\", mapping, true, false);\n+\n+ CompressedXContent update = new CompressedXContent(XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().bytes());\n+ DocumentMapper mapper = mapperService.merge(\"type\", update, false, false);\n+\n+ assertNotNull(mapper.mappers().getMapper(\"foo\"));\n+ assertFalse(mapper.sourceMapper().enabled());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/merge/TestMergeMapperTests.java",
"status": "modified"
}
]
}
|
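The DocumentMapper.Builder diff above implements a simple rule: when building the mappers for a mapping update, any metadata mapper the update does not mention is taken from the type's existing mapping rather than from the general default. Here is a stripped-down, self-contained sketch of that rule in plain Java, with made-up names and strings standing in for mapper instances; it is not Elasticsearch code.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative only: strings stand in for MetadataFieldMapper instances.
public class MetadataMapperReuseSketch {

    // For each registered metadata mapper, prefer the instance already attached to
    // the current mapping; fall back to the general default only when the type is new.
    static Map<String, String> buildMetadataMappers(Map<String, Supplier<String>> registry,
                                                    Map<String, String> existingMapping) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, Supplier<String>> entry : registry.entrySet()) {
            String name = entry.getKey();
            String existing = existingMapping == null ? null : existingMapping.get(name);
            result.put(name, existing != null ? existing : entry.getValue().get());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Supplier<String>> registry = new HashMap<>();
        registry.put("_source", () -> "_source{enabled=true}"); // general default

        Map<String, String> existingMapping = new HashMap<>();
        existingMapping.put("_source", "_source{enabled=false}"); // what the index was created with

        // Old behaviour (existing mapping not consulted): the update carries the default
        // and conflicts with the stored mapping ("Cannot update enabled setting for [_source]").
        System.out.println(buildMetadataMappers(registry, null));
        // New behaviour: the existing metadata mapper is reused, so merging is a no-op.
        System.out.println(buildMetadataMappers(registry, existingMapping));
    }
}
```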
{
"body": "Hello, on 2.1.1 3 nodes cluster seeing `timeInQueue` growing faster than it should?\n\n1) start three node cluster whose pending_tasks queue grows right after starting\n2) call GET _cat/pending_tasks every second\n\n```\nabonuccelli@w530 ~ $ while true;do date;curl -XGET -u admin:r1ng3r -k 'https://w530:9200/_cat/pending_tasks';sleep 1;done\njue ene 14 15:36:23 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:24 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:25 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:26 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:28 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:29 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:30 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:31 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:32 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:33 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:34 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:35 CET 2016\ncurl: (7) Failed to connect to w530 port 9200: Connection refused\njue ene 14 15:36:36 CET 2016\n5 11.3m NORMAL local-gateway-elected-state \n7 6m URGENT zen-disco-join(join from node[{node3}{zPm--nb4Q1GyDfUgPRki0w}{192.168.1.105}{w530/192.168.1.105:9302}{master=true}]) \n6 10.7m HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:36:37 CET 2016\n8 14.2s HIGH cluster_reroute(async_shard_fetch) \n9 1.4s HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:36:38 CET 2016\n 9 19.4m HIGH cluster_reroute(async_shard_fetch) \n10 9.3m URGENT shard-started ([logstash-unparsed-2016.01.14][0], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[23], s[INITIALIZING], a[id=6bPUppNYR96GpmRzRA2lVA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n11 9.1m URGENT shard-started ([logstash-unparsed-2016.01.14][2], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[26], s[INITIALIZING], a[id=tKqikFUkTqmHtr4SwQKmzQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n12 1.4m URGENT shard-started ([logstash-auth-2016.01.14][2], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[22], s[INITIALIZING], a[id=0uNZSK5bTjav1G5U462xPQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n13 1.1m URGENT shard-started ([logstash-auth-2016.01.14][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[21], s[INITIALIZING], a[id=-FSh28JvTw6CzPlT-rKslQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n14 1m URGENT shard-started ([logstash-unparsed-2016.01.14][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[26], s[INITIALIZING], a[id=A6yx_FkqRb6BRmNyn4v9dA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n15 47s URGENT shard-started ([logstash-auth-2016.01.14][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[22], s[INITIALIZING], a[id=7LEi3mQMRlOebCBTVITYqQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after 
recovery from store] \njue ene 14 15:36:39 CET 2016\n17 5.1m URGENT shard-started ([logstash-auth-2016.01.12][0], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[40], s[INITIALIZING], a[id=oRhDl4tWQjyXrHjYtyyFfw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n18 3.9m URGENT shard-started ([logstash-auth-2016.01.13][2], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[32], s[INITIALIZING], a[id=WM9zpWx5T4ujXxPdskbuOA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n19 3.9m URGENT shard-started ([logstash-unparsed-2016.01.13][1], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[36], s[INITIALIZING], a[id=NZmkrLKfTMKT4a4iSfGtBQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n20 3.1m URGENT shard-started ([logstash-auth-2016.01.13][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=7_fdmALYRYCE6Hs1Ai0Lfg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n22 1.2m URGENT shard-started ([.marvel-es-data][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[26], s[INITIALIZING], a[id=2E_4eG24S8m1mUDSg1NLCQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n21 1.2m URGENT shard-started ([.watch_history-2016.01.05][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[29], s[INITIALIZING], a[id=tUm4Rt7XQhK7zJTRMqawFg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n16 20.8m HIGH _add_listener_ \njue ene 14 15:36:40 CET 2016\n29 16.5m URGENT shard-started ([logstash-syslog-2016.01.12][1], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[38], s[INITIALIZING], a[id=Cz2-XhQJSBCkUtLTU_XoYg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n30 16.2m URGENT shard-started ([logstash-syslog-2016.01.12][1], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[38], s[INITIALIZING], a[id=Cz2-XhQJSBCkUtLTU_XoYg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [master {node2}{I_iHcd03Q3aX2KXBVsjMhA}{192.168.1.105}{192.168.1.105:9301}{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n31 14.5m URGENT shard-started ([logstash-syslog-2016.01.12][2], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=W3HDLC-nTWm33uJt9mCvyQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n33 8.2m URGENT shard-started ([logstash-unparsed-2016.01.12][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[37], s[INITIALIZING], a[id=70JqKeAaTgO9HGqX0UYaWg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n36 47.8s URGENT shard-started ([.watch_history-2016.01.12][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[34], s[INITIALIZING], a[id=L9Wpm4MlQtGuhq2Ps7-3Gg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n32 12m URGENT shard-started ([logstash-auth-2016.01.12][2], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[33], s[INITIALIZING], a[id=jAEM0xJMSFCAw7xAp3Zh5g], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n34 8m URGENT shard-started ([logstash-auth-2016.01.12][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[33], s[INITIALIZING], 
a[id=aQYnVPBWTx24BR5L-H9egQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n35 8m URGENT shard-started ([logstash-unparsed-2016.01.12][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=nFYleWmMQumM9tnc-58TrA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n16 44.9m HIGH _add_listener_ \njue ene 14 15:36:42 CET 2016\n46 9.3m URGENT shard-started ([logstash-unparsed-2016.01.10][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=8US-Te0YSQCym42iTKKAjw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n47 3.9m URGENT shard-started ([logstash-unparsed-2016.01.10][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=8US-Te0YSQCym42iTKKAjw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [master {node2}{I_iHcd03Q3aX2KXBVsjMhA}{192.168.1.105}{192.168.1.105:9301}{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n49 2.4m URGENT shard-started ([logstash-syslog-2016.01.10][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=3NzmZFu6S8yYVgkir8_Hfg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n48 2.6m URGENT shard-started ([logstash-syslog-2016.01.10][2], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=VGbLKgt7RYu0WmSuVg3ZDQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n53 25.6s URGENT shard-started ([.watch_history-2016.01.10][0], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=hykDFlrTQnOVJjY6WPSR4g], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [master {node2}{I_iHcd03Q3aX2KXBVsjMhA}{192.168.1.105}{192.168.1.105:9301}{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n50 1.7m URGENT shard-started ([logstash-unparsed-2016.01.10][2], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=rn21EPXSQbOw2YKx_bTilg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n51 1.6m URGENT shard-started ([logstash-unparsed-2016.01.10][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[34], s[INITIALIZING], a[id=yrciN2zZQPaY0XFWkpwUGQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n52 1.3m URGENT shard-started ([.watch_history-2016.01.10][0], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=hykDFlrTQnOVJjY6WPSR4g], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n16 1h HIGH _add_listener_ \n54 25.6s URGENT shard-started ([logstash-unparsed-2016.01.10][2], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=rn21EPXSQbOw2YKx_bTilg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [master {node2}{I_iHcd03Q3aX2KXBVsjMhA}{192.168.1.105}{192.168.1.105:9301}{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \njue ene 14 15:36:43 CET 2016\n64 8.4m URGENT shard-started ([logstash-syslog-2016.01.09][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=RnkOosAyRpiaOGvX0hBpTQ], 
unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n65 6m URGENT shard-started ([logstash-unparsed-2016.01.08][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[40], s[INITIALIZING], a[id=I5Q3OFfpSZ-1inpSGLiwZQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n66 5.6m URGENT shard-started ([logstash-unparsed-2016.01.08][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[44], s[INITIALIZING], a[id=AyqMhVCHRpeSdGGZblgKig], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n69 1.2m URGENT shard-started ([.watch_history-2016.01.09][0], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=4yoPNJg8RjSSSpcMNwolHA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n67 5.5m URGENT shard-started ([logstash-syslog-2016.01.09][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=RnkOosAyRpiaOGvX0hBpTQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [master {node2}{I_iHcd03Q3aX2KXBVsjMhA}{192.168.1.105}{192.168.1.105:9301}{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n68 5.1m URGENT shard-started ([logstash-syslog-2016.01.08][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[34], s[INITIALIZING], a[id=gWL8rqL5RICFBlOPbXXVlg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n16 1.4h HIGH _add_listener_ \n70 44.1s URGENT shard-started ([logstash-syslog-2016.01.09][1], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=z21y4FyJTeGab0hhunzPFg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \njue ene 14 15:36:44 CET 2016\n78 12.7m URGENT shard-started ([.watch_history-2016.01.07][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[38], s[INITIALIZING], a[id=HnIbZsKmRw2F811HF42Hbw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n79 11.7m URGENT shard-started ([.watch_history-2016.01.07][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[38], s[INITIALIZING], a[id=HnIbZsKmRw2F811HF42Hbw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [master {node2}{I_iHcd03Q3aX2KXBVsjMhA}{192.168.1.105}{192.168.1.105:9301}{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n81 8.8m URGENT shard-started ([logstash-syslog-2016.01.07][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[36], s[INITIALIZING], a[id=3qO53WStT9CXgbkFnUksiA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n80 9.3m URGENT shard-started ([logstash-syslog-2016.01.07][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[34], s[INITIALIZING], a[id=axCQvobnTLC9PvJPjWAMGw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n83 6.2m URGENT shard-started ([.watch_history-2016.01.08][0], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=hXmEuKK6Sy2HWm-rHEDAew], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n87 14s URGENT shard-started ([test-geo][1], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[85], s[INITIALIZING], a[id=AGzIixN_RROIhjgaVgQ0GA], 
unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery from store] \n82 6.3m URGENT shard-started ([logstash-syslog-2016.01.08][2], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[34], s[INITIALIZING], a[id=tz9YgDHwT7yvXKEXQOX66A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n84 2.8m URGENT shard-started ([.watches][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[47], s[INITIALIZING], a[id=snXyoQs_QeaA06FhtMKQrg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n85 40.5s URGENT shard-started ([logstash-unparsed-2016.01.07][0], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[35], s[INITIALIZING], a[id=nzKhYoHqSa6v5mjLWQ-kyg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery from store] \n86 38.6s URGENT shard-started ([.watches][0], node[zPm--nb4Q1GyDfUgPRki0w], [P], v[47], s[INITIALIZING], a[id=snXyoQs_QeaA06FhtMKQrg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [master {node2}{I_iHcd03Q3aX2KXBVsjMhA}{192.168.1.105}{192.168.1.105:9301}{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started] \n16 1.7h HIGH _add_listener_ \njue ene 14 15:36:45 CET 2016\n101 2.9m URGENT shard-started ([test-idx][1], node[IjwKfD1UQMqtdKXPsLR8Cw], [P], v[90], s[INITIALIZING], a[id=0BHaiJdMQcSt2NeZJM09pA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.661Z]]), reason [after recovery from store] \n102 1.8m URGENT shard-started ([logstash-unparsed-2016.01.13][1], node[zPm--nb4Q1GyDfUgPRki0w], [R], v[38], s[INITIALIZING], a[id=j9ubCTjyS8qIamqigOblLQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.664Z]]), reason [after recovery (replica) from node [{node1}{IjwKfD1UQMqtdKXPsLR8Cw}{192.168.1.105}{w530/192.168.1.105:9300}{master=true}]] \n103 35.3s URGENT shard-started ([logstash-unparsed-2016.01.14][2], node[zPm--nb4Q1GyDfUgPRki0w], [R], v[28], s[INITIALIZING], a[id=12I5tGEJRzC6y65FTq4hdw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-01-14T14:36:35.665Z]]), reason [after recovery (replica) from node [{node1}{IjwKfD1UQMqtdKXPsLR8Cw}{192.168.1.105}{w530/192.168.1.105:9300}{master=true}]] \n 16 2.1h HIGH _add_listener_ \n 97 12.7m HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:36:47 CET 2016\njue ene 14 15:36:48 CET 2016\njue ene 14 15:36:49 CET 2016\njue ene 14 15:36:51 CET 2016\njue ene 14 15:36:52 CET 2016\njue ene 14 15:36:53 CET 2016\njue ene 14 15:36:54 CET 2016\njue ene 14 15:36:55 CET 2016\njue ene 14 15:36:56 CET 2016\njue ene 14 15:36:58 CET 2016\njue ene 14 15:36:59 CET 2016\njue ene 14 15:37:00 CET 2016\njue ene 14 15:37:01 CET 2016\njue ene 14 15:37:02 CET 2016\njue ene 14 15:37:03 CET 2016\njue ene 14 15:37:04 CET 2016\njue ene 14 15:37:06 CET 2016\njue ene 14 15:37:07 CET 2016\njue ene 14 15:37:08 CET 2016\njue ene 14 15:37:09 CET 2016\njue ene 14 15:37:10 CET 2016\njue ene 14 15:37:11 CET 2016\njue ene 14 15:37:12 CET 2016\njue ene 14 15:37:14 CET 2016\njue ene 14 15:37:15 CET 2016\njue ene 14 15:37:16 CET 2016\njue ene 14 15:37:17 CET 2016\njue ene 14 15:37:18 CET 2016\njue ene 14 15:37:19 CET 2016\njue ene 14 15:37:20 CET 2016\njue ene 14 15:37:21 CET 2016\njue ene 14 15:37:22 CET 2016\njue ene 14 15:37:23 CET 2016\njue ene 14 15:37:25 CET 2016\njue ene 14 15:37:26 CET 2016\njue ene 14 15:37:27 CET 2016\njue ene 14 15:37:28 CET 2016\njue ene 14 15:37:29 CET 2016\njue 
ene 14 15:37:30 CET 2016\njue ene 14 15:37:31 CET 2016\njue ene 14 15:37:32 CET 2016\njue ene 14 15:37:33 CET 2016\njue ene 14 15:37:35 CET 2016\njue ene 14 15:37:36 CET 2016\n108 12.9m URGENT delete-index [.shield_audit_log-2016.01.14] \n109 12.5m HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:37 CET 2016\n108 33.5m URGENT delete-index [.shield_audit_log-2016.01.14] \n109 33.2m HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:38 CET 2016\n108 52.8m URGENT delete-index [.shield_audit_log-2016.01.14] \n109 52.5m HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:39 CET 2016\n108 1.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 1.1h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:40 CET 2016\n108 1.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 1.4h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:41 CET 2016\n108 1.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 1.7h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:42 CET 2016\n108 2.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 2h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:44 CET 2016\n108 2.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 2.3h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:45 CET 2016\n108 2.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 2.7h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:46 CET 2016\n108 3h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 3h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:47 CET 2016\n108 3.3h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 3.3h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:48 CET 2016\n108 3.6h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 3.6h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:49 CET 2016\n108 3.9h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 3.9h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:50 CET 2016\n108 4.2h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 4.2h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:51 CET 2016\n108 4.5h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 4.5h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:52 CET 2016\n108 4.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 4.8h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:53 CET 2016\n108 5.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 5.1h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:55 CET 2016\n108 5.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 5.4h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:56 CET 2016\n108 5.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 5.7h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:57 CET 2016\n108 6.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 6.1h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:58 CET 2016\n108 6.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 6.4h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:37:59 CET 2016\n108 6.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 6.7h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:38:00 CET 2016\n108 7h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 7h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:38:01 CET 2016\n108 7.3h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 7.3h HIGH 
cluster_reroute(async_shard_fetch) \njue ene 14 15:38:02 CET 2016\n108 7.6h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 7.6h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:38:04 CET 2016\n108 7.9h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 7.9h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:38:05 CET 2016\n108 8.2h URGENT delete-index [.shield_audit_log-2016.01.14] \n109 8.2h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:38:06 CET 2016\n108 8.5h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 10.6m URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 8.5h HIGH cluster_reroute(async_shard_fetch) \njue ene 14 15:38:07 CET 2016\n108 8.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 29.7m URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 8.8h HIGH cluster_reroute(async_shard_fetch) \n111 7.8m HIGH _add_listener_ \njue ene 14 15:38:08 CET 2016\n108 9.2h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 48.4m URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 9.1h HIGH cluster_reroute(async_shard_fetch) \n111 26.5m HIGH _add_listener_ \njue ene 14 15:38:09 CET 2016\n108 9.5h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 1.1h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 9.5h HIGH cluster_reroute(async_shard_fetch) \n111 44.9m HIGH _add_listener_ \njue ene 14 15:38:10 CET 2016\n108 9.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 1.4h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 9.8h HIGH cluster_reroute(async_shard_fetch) \n111 1h HIGH _add_listener_ \njue ene 14 15:38:11 CET 2016\n108 10.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 1.7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 10.1h HIGH cluster_reroute(async_shard_fetch) \n111 1.3h HIGH _add_listener_ \njue ene 14 15:38:12 CET 2016\n108 10.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 2h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 10.4h HIGH cluster_reroute(async_shard_fetch) \n111 1.6h HIGH _add_listener_ \njue ene 14 15:38:14 CET 2016\n108 10.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 2.3h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 10.7h HIGH cluster_reroute(async_shard_fetch) \n111 1.9h HIGH _add_listener_ \njue ene 14 15:38:15 CET 2016\n108 11h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 2.6h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 11h HIGH cluster_reroute(async_shard_fetch) \n111 2.2h HIGH _add_listener_ \njue ene 14 15:38:16 CET 2016\n108 11.3h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 2.9h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 11.3h HIGH cluster_reroute(async_shard_fetch) \n111 2.6h HIGH _add_listener_ \njue ene 14 15:38:17 CET 2016\n108 11.6h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 3.2h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 11.6h HIGH cluster_reroute(async_shard_fetch) \n111 2.9h HIGH _add_listener_ \njue ene 14 15:38:18 CET 2016\n108 11.9h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 3.5h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 11.9h HIGH cluster_reroute(async_shard_fetch) \n111 3.2h HIGH 
_add_listener_ \njue ene 14 15:38:19 CET 2016\n108 12.2h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 3.9h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 12.2h HIGH cluster_reroute(async_shard_fetch) \n111 3.5h HIGH _add_listener_ \njue ene 14 15:38:20 CET 2016\n108 12.6h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 4.2h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 12.6h HIGH cluster_reroute(async_shard_fetch) \n111 3.8h HIGH _add_listener_ \njue ene 14 15:38:21 CET 2016\n108 12.9h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 4.5h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 12.9h HIGH cluster_reroute(async_shard_fetch) \n111 4.1h HIGH _add_listener_ \njue ene 14 15:38:22 CET 2016\n108 13.2h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 4.8h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 13.2h HIGH cluster_reroute(async_shard_fetch) \n111 4.4h HIGH _add_listener_ \njue ene 14 15:38:24 CET 2016\n108 13.5h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 5.1h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 13.5h HIGH cluster_reroute(async_shard_fetch) \n111 4.7h HIGH _add_listener_ \njue ene 14 15:38:25 CET 2016\n108 13.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 5.4h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 13.8h HIGH cluster_reroute(async_shard_fetch) \n111 5h HIGH _add_listener_ \njue ene 14 15:38:26 CET 2016\n108 14.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 5.7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 14.1h HIGH cluster_reroute(async_shard_fetch) \n111 5.4h HIGH _add_listener_ \njue ene 14 15:38:27 CET 2016\n108 14.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 6h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 14.4h HIGH cluster_reroute(async_shard_fetch) \n111 5.7h HIGH _add_listener_ \njue ene 14 15:38:28 CET 2016\n108 14.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 6.4h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 14.8h HIGH cluster_reroute(async_shard_fetch) \n111 6h HIGH _add_listener_ \njue ene 14 15:38:29 CET 2016\n108 15.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 6.7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 15.1h HIGH cluster_reroute(async_shard_fetch) \n111 6.3h HIGH _add_listener_ \njue ene 14 15:38:30 CET 2016\n108 15.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 15.4h HIGH cluster_reroute(async_shard_fetch) \n111 6.6h HIGH _add_listener_ \njue ene 14 15:38:31 CET 2016\n108 15.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 7.3h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 15.7h HIGH cluster_reroute(async_shard_fetch) \n111 6.9h HIGH _add_listener_ \njue ene 14 15:38:33 CET 2016\n108 16h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 7.6h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 16h HIGH cluster_reroute(async_shard_fetch) \n111 7.2h HIGH _add_listener_ \njue ene 14 15:38:34 CET 2016\n108 16.3h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 7.9h URGENT create-index 
[.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 16.3h HIGH cluster_reroute(async_shard_fetch) \n111 7.5h HIGH _add_listener_ \njue ene 14 15:38:35 CET 2016\n108 16.6h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 8.2h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 16.6h HIGH cluster_reroute(async_shard_fetch) \n111 7.8h HIGH _add_listener_ \njue ene 14 15:38:36 CET 2016\n108 16.9h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 8.5h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 16.9h HIGH cluster_reroute(async_shard_fetch) \n111 8.2h HIGH _add_listener_ \njue ene 14 15:38:37 CET 2016\n108 17.2h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 8.8h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 17.2h HIGH cluster_reroute(async_shard_fetch) \n111 8.5h HIGH _add_listener_ \njue ene 14 15:38:38 CET 2016\n108 17.5h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 9.1h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 17.5h HIGH cluster_reroute(async_shard_fetch) \n111 8.8h HIGH _add_listener_ \njue ene 14 15:38:39 CET 2016\n108 17.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 9.4h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 17.8h HIGH cluster_reroute(async_shard_fetch) \n111 9.1h HIGH _add_listener_ \n112 4m HIGH _add_listener_ \njue ene 14 15:38:40 CET 2016\n108 18.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 9.7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 18.1h HIGH cluster_reroute(async_shard_fetch) \n111 9.4h HIGH _add_listener_ \n112 22.2m HIGH _add_listener_ \njue ene 14 15:38:41 CET 2016\n108 18.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 10.1h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 18.4h HIGH cluster_reroute(async_shard_fetch) \n111 9.7h HIGH _add_listener_ \n112 40.4m HIGH _add_listener_ \njue ene 14 15:38:43 CET 2016\n108 18.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 10.4h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 18.7h HIGH cluster_reroute(async_shard_fetch) \n111 10h HIGH _add_listener_ \n112 58.6m HIGH _add_listener_ \njue ene 14 15:38:44 CET 2016\n108 19.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 10.7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 19.1h HIGH cluster_reroute(async_shard_fetch) \n111 10.3h HIGH _add_listener_ \n112 1.2h HIGH _add_listener_ \njue ene 14 15:38:45 CET 2016\n108 19.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 11h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 19.4h HIGH cluster_reroute(async_shard_fetch) \n111 10.6h HIGH _add_listener_ \n112 1.6h HIGH _add_listener_ \njue ene 14 15:38:46 CET 2016\n108 19.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 11.3h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 19.7h HIGH cluster_reroute(async_shard_fetch) \n111 10.9h HIGH _add_listener_ \n112 1.9h HIGH _add_listener_ \njue ene 14 15:38:47 CET 2016\n108 20h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 11.6h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 20h HIGH cluster_reroute(async_shard_fetch) \n111 11.3h HIGH _add_listener_ \n112 2.2h HIGH _add_listener_ \njue ene 14 
15:38:48 CET 2016\n108 20.3h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 11.9h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 20.3h HIGH cluster_reroute(async_shard_fetch) \n111 11.6h HIGH _add_listener_ \n112 2.5h HIGH _add_listener_ \njue ene 14 15:38:49 CET 2016\n108 20.6h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 12.2h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 20.6h HIGH cluster_reroute(async_shard_fetch) \n111 11.9h HIGH _add_listener_ \n112 2.8h HIGH _add_listener_ \njue ene 14 15:38:50 CET 2016\n108 20.9h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 12.5h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 20.9h HIGH cluster_reroute(async_shard_fetch) \n111 12.2h HIGH _add_listener_ \n112 3.1h HIGH _add_listener_ \njue ene 14 15:38:51 CET 2016\n108 21.2h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 12.8h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 21.2h HIGH cluster_reroute(async_shard_fetch) \n111 12.5h HIGH _add_listener_ \n112 3.4h HIGH _add_listener_ \njue ene 14 15:38:53 CET 2016\n108 21.5h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 13.1h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 21.5h HIGH cluster_reroute(async_shard_fetch) \n111 12.8h HIGH _add_listener_ \n112 3.7h HIGH _add_listener_ \njue ene 14 15:38:54 CET 2016\n108 21.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 13.4h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 21.8h HIGH cluster_reroute(async_shard_fetch) \n111 13.1h HIGH _add_listener_ \n112 4h HIGH _add_listener_ \njue ene 14 15:38:55 CET 2016\n108 22.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 13.7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 22.1h HIGH cluster_reroute(async_shard_fetch) \n111 13.4h HIGH _add_listener_ \n112 4.3h HIGH _add_listener_ \njue ene 14 15:38:56 CET 2016\n108 22.5h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 14.1h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 22.5h HIGH cluster_reroute(async_shard_fetch) \n111 13.7h HIGH _add_listener_ \n112 4.7h HIGH _add_listener_ \njue ene 14 15:38:57 CET 2016\n108 22.8h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 14.4h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 22.8h HIGH cluster_reroute(async_shard_fetch) \n111 14h HIGH _add_listener_ \n112 5h HIGH _add_listener_ \njue ene 14 15:38:58 CET 2016\n108 23.1h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 14.7h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 23.1h HIGH cluster_reroute(async_shard_fetch) \n111 14.3h HIGH _add_listener_ \n112 5.3h HIGH _add_listener_ \njue ene 14 15:38:59 CET 2016\n108 23.4h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 15h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 23.4h HIGH cluster_reroute(async_shard_fetch) \n111 14.6h HIGH _add_listener_ \n112 5.6h HIGH _add_listener_ \njue ene 14 15:39:00 CET 2016\n108 23.7h URGENT delete-index [.shield_audit_log-2016.01.14] \n110 15.3h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 23.7h HIGH cluster_reroute(async_shard_fetch) \n111 14.9h HIGH _add_listener_ \n112 5.9h HIGH _add_listener_ \njue ene 14 15:39:01 CET 2016\n108 
1d URGENT delete-index [.shield_audit_log-2016.01.14] \n110 15.6h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 15.3h HIGH _add_listener_ \n112 6.2h HIGH _add_listener_ \njue ene 14 15:39:03 CET 2016\n108 1d URGENT delete-index [.shield_audit_log-2016.01.14] \n110 15.9h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 15.6h HIGH _add_listener_ \n112 6.5h HIGH _add_listener_ \njue ene 14 15:39:04 CET 2016\n108 1d URGENT delete-index [.shield_audit_log-2016.01.14] \n110 16.2h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 15.9h HIGH _add_listener_ \n112 6.8h HIGH _add_listener_ \njue ene 14 15:39:05 CET 2016\n108 1d URGENT delete-index [.shield_audit_log-2016.01.14] \n110 16.5h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 16.2h HIGH _add_listener_ \n112 7.1h HIGH _add_listener_ \njue ene 14 15:39:06 CET 2016\n108 1d URGENT delete-index [.shield_audit_log-2016.01.14] \n113 13.8m URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 16.5h HIGH _add_listener_ \n112 7.4h HIGH _add_listener_ \njue ene 14 15:39:07 CET 2016\n108 1d URGENT delete-index [.shield_audit_log-2016.01.14] \n113 32.6m URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 16.8h HIGH _add_listener_ \n112 7.7h HIGH _add_listener_ \njue ene 14 15:39:08 CET 2016\n108 1d URGENT delete-index [.shield_audit_log-2016.01.14] \n113 50.8m URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 17.1h HIGH _add_listener_ \n112 8h HIGH _add_listener_ \njue ene 14 15:39:09 CET 2016\n108 1d URGENT delete-index [.shield_audit_log-2016.01.14] \n113 1.1h URGENT create-index [.shield_audit_log-2016.01.14], cause [auto(bulk api)] \n109 1d HIGH cluster_reroute(async_shard_fetch) \n111 17.4h HIGH _add_listener_ \n112 8.3h HIGH _add_listener_ \n^C\nabonuccelli@w530 ~ $ \n```\n",
"comments": [
{
"body": "Confirmed on 2.2.0-SNAPSHOT - within a few seconds it is reporting timeInQueue of several hours!\n",
"created_at": "2016-01-14T16:30:22Z"
},
{
"body": "@clintongormley Should be [dividing by `1000000` instead of `1000`](https://github.com/elastic/elasticsearch/blob/a954e4e8e5f235e0279ed38609a2bbd2f0abaf68/core/src/main/java/org/elasticsearch/common/util/concurrent/PrioritizedRunnable.java#L45) (so the value being returned is microseconds, not milliseconds). Or even better, just use built-in methods to convert nanoseconds to milliseconds. I'll throw up a quick pull request.\n",
"created_at": "2016-01-14T16:43:01Z"
},
{
"body": "@clintongormley I opened #15995.\n",
"created_at": "2016-01-14T17:02:49Z"
}
],
"number": 15988,
"title": "timeInQueue for pending_tasks growing too fast?"
}
|
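The root cause discussed in #15988 above is a nanoseconds-to-milliseconds conversion that divides by 1,000, which actually yields microseconds and inflates the reported age by a factor of 1,000. A minimal standalone sketch of the wrong and the correct conversion (the fix itself follows in the next cells); the class and field names here are illustrative, not Elasticsearch code:

```java
import java.util.concurrent.TimeUnit;

class AgeSketch {
    // hypothetical creation timestamp captured with a monotonic clock
    private final long queuedAtNanos = System.nanoTime();

    // Buggy: nanos / 1000 is microseconds, so the "millis" value is 1000x too large.
    long ageInMillisBuggy() {
        return Math.max(0, (System.nanoTime() - queuedAtNanos) / 1000);
    }

    // Correct: let TimeUnit do the conversion (effectively dividing by 1_000_000).
    long ageInMillis() {
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - queuedAtNanos);
    }
}
```

With the buggy version, a task queued for about ten seconds is reported as roughly 2.8 hours, which matches the pending-tasks output quoted above.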
{
"body": "This commit addresses a time unit conversion bug in calculating the age\nof a PrioritizedRunnable. The issue was an incorrect conversion from\nnanoseconds to milliseconds as instead the conversion was to\nmicroseconds. This leads to the timeInQueue metric for pending tasks to\nbe off by three orders of magnitude.\n\nCloses #15988\n",
"number": 15995,
"review_comments": [
{
"body": "Could you just take a long here?\n",
"created_at": "2016-01-14T23:46:51Z"
},
{
"body": "Ah. I see.\n",
"created_at": "2016-01-14T23:47:02Z"
}
],
"title": "Fix calculation of age of pending tasks"
}
|
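Besides correcting the conversion, the fix in the next cell swaps the direct `System.nanoTime()` call for an injected `LongSupplier`, which is what the short review exchange above ("Could you just take a long here?" / "Ah. I see.") is about: a plain `long` would freeze the clock, while a supplier lets a test advance time deterministically. A small, hedged sketch of that pattern outside of Elasticsearch, with illustrative names:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

class Timed {
    private final LongSupplier nanoClock; // injectable time source
    private final long createdNanos;

    Timed(LongSupplier nanoClock) {
        this.nanoClock = nanoClock;
        this.createdNanos = nanoClock.getAsLong();
    }

    long ageInMillis() {
        return TimeUnit.NANOSECONDS.toMillis(nanoClock.getAsLong() - createdNanos);
    }

    public static void main(String[] args) {
        AtomicLong fakeNanos = new AtomicLong();            // deterministic fake clock for tests
        Timed t = new Timed(fakeNanos::get);
        fakeNanos.addAndGet(TimeUnit.MILLISECONDS.toNanos(42));
        System.out.println(t.ageInMillis());                // prints 42
    }
}
```

In production code the supplier would simply be `System::nanoTime`.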
{
"commits": [
{
"message": "Fix calculation of age of pending tasks\n\nThis commit addresses a time unit conversion bug in calculating the age\nof a PrioritizedRunnable. The issue was an incorrect conversion from\nnanoseconds to milliseconds as instead the conversion was to\nmicroseconds. This leads to the timeInQueue metric for pending tasks to\nbe off by three orders of magnitude."
}
],
"files": [
{
"diff": "@@ -20,29 +20,39 @@\n \n import org.elasticsearch.common.Priority;\n \n+import java.util.concurrent.TimeUnit;\n+import java.util.function.LongSupplier;\n+\n /**\n *\n */\n public abstract class PrioritizedRunnable implements Runnable, Comparable<PrioritizedRunnable> {\n \n private final Priority priority;\n private final long creationDate;\n+ private final LongSupplier relativeTimeProvider;\n \n public static PrioritizedRunnable wrap(Runnable runnable, Priority priority) {\n return new Wrapped(runnable, priority);\n }\n \n protected PrioritizedRunnable(Priority priority) {\n+ this(priority, System::nanoTime);\n+ }\n+\n+ // package visible for testing\n+ PrioritizedRunnable(Priority priority, LongSupplier relativeTimeProvider) {\n this.priority = priority;\n- creationDate = System.nanoTime();\n+ this.creationDate = relativeTimeProvider.getAsLong();\n+ this.relativeTimeProvider = relativeTimeProvider;\n }\n \n public long getCreationDateInNanos() {\n return creationDate;\n }\n \n public long getAgeInMillis() {\n- return Math.max(0, (System.nanoTime() - creationDate) / 1000);\n+ return TimeUnit.MILLISECONDS.convert(relativeTimeProvider.getAsLong() - creationDate, TimeUnit.NANOSECONDS);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/common/util/concurrent/PrioritizedRunnable.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,43 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util.concurrent;\n+\n+import org.elasticsearch.common.Priority;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicLong;\n+\n+public class PrioritizedRunnableTests extends ESTestCase {\n+ public void testGetAgeInMillis() throws Exception {\n+ AtomicLong time = new AtomicLong();\n+\n+ PrioritizedRunnable runnable = new PrioritizedRunnable(Priority.NORMAL, time::get) {\n+ @Override\n+ public void run() {\n+\n+ }\n+ };\n+ assertEquals(0, runnable.getAgeInMillis());\n+ int milliseconds = randomIntBetween(1, 256);\n+ time.addAndGet(TimeUnit.NANOSECONDS.convert(milliseconds, TimeUnit.MILLISECONDS));\n+ assertEquals(milliseconds, runnable.getAgeInMillis());\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/util/concurrent/PrioritizedRunnableTests.java",
"status": "added"
}
]
}
|
{
"body": "There is no need to involve DNS in this!\n",
"comments": [
{
"body": "Lgtm. Good catch.\n",
"created_at": "2016-01-08T06:35:57Z"
},
{
"body": "grrrr... can we make this a forbidden API and only use it where it's justified / suppressed for a reason?\n",
"created_at": "2016-01-08T08:13:38Z"
},
{
"body": "++\n\n> On 08 Jan 2016, at 09:13, Simon Willnauer notifications@github.com wrote:\n> \n> grrrr... can we make this a forbidden API and only use it where it's justified / suppressed for a reason?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2016-01-08T08:21:44Z"
},
{
"body": "+1 to remove\n\nI thought it would be useful for cases hostnames are provided instead of ip addresses. Although this is maybe unlikely, I thought it would help.\n",
"created_at": "2016-01-08T08:34:47Z"
},
{
"body": "and in the case just an ip address is provided no actual dns lookup is being done.\n",
"created_at": "2016-01-08T08:41:18Z"
},
{
"body": "If the user has data with hostnames, they have a real problem on their hands to resolve these in any efficient way, we can't do it \"on the side\".\n\nUsing InetAddress is not a good solution, it will result in memory leaks (we must assume each host is cached infinitely). In general with any massive use of DNS (via InetAddress, JNDI, etc) we have to worry about craziness like exhaustion of entropy on the machine (securerandom is used for DNS port randomization to \"help\" address spoofing), bogus data (e.g. NXDOMAIN hijacking), tons of network traffic, throttling, triggering alarms, etc.\n\nI am aware that doing DNS operations on millions of log records, etc is sometimes something people want. But DNS is a very complex serious beast, its definitely a distributed database query, and should always be explicit. Doing it for lots of data is even more complicated if you want it to be anywhere near correct or efficient. \n",
"created_at": "2016-01-08T08:49:12Z"
},
{
"body": "> and in the case just an ip address is provided no actual dns lookup is being done.\n\nRight, my specific concern is right now it will silently do tons of DNS in cases where an ip address is not provided, e.g. `HostNameLookups On` (http://httpd.apache.org/docs/2.4/mod/core.html#hostnamelookups). We should fail rather than silently be slow in this case!\n",
"created_at": "2016-01-08T08:52:35Z"
},
{
"body": "Thanks for the explanation. I underestimated the slowness / costs here. LGTM\n",
"created_at": "2016-01-08T09:00:22Z"
},
{
"body": "like @s1monw suggested, we should make this is a forbidden api.\n",
"created_at": "2016-01-08T09:01:35Z"
},
{
"body": "I'll followup with a forbidden api patch for master. I'm concerned about false positive rate, but I totally agree with the trap (esp. considering the trappiness of InetAddress here, which is why @jasontedor added a parsing method that doesn't go to DNS).\n",
"created_at": "2016-01-08T18:16:44Z"
}
],
"number": 15851,
"title": "[ingest] Don't do DNS lookups from GeoIpProcessor"
}
|
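The thread above boils down to one trap: `InetAddress.getByName` parses IP literals but silently performs a DNS lookup when handed a hostname. The follow-up PR (next cells) pushes callers toward `org.elasticsearch.common.network.InetAddresses.forString`, which only accepts literals and fails fast otherwise, and toward an explicit `getAllByName` when resolution is truly intended. A hedged sketch of that split; everything other than those two methods is illustrative:

```java
import java.net.InetAddress;
import org.elasticsearch.common.network.InetAddresses;

class ParseVsResolve {

    /** Parse an IP literal; never touches DNS and fails fast on anything that is not a literal. */
    static InetAddress parseLiteral(String ipLiteral) {
        return InetAddresses.forString(ipLiteral);
    }

    /** Resolve a hostname on purpose; the DNS round trip is explicit and returns all addresses. */
    static InetAddress[] resolveExplicitly(String hostname) throws Exception {
        return InetAddress.getAllByName(hostname);
    }
}
```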
{
"body": "followup to #15851\n\nThe idea here is that only a very few places really need to do DNS lookups, we should prevent it happening by accident. Abuse cases of InetAddress.getByName for other purposes (e.g. parsing addresses, getting the loopback address) can be done instead with other safer methods. \n\nEspecially parsing addresses, its better to use `InetAddresses.forString`, which won't do a lookup but just throw an exception if the address is wrong.\n\nThis cleans up the tests around this for the most part. There are some exceptions where we just `SuppressForbidden`, such as tests for InetAddresses.forString itself, which intentionally do round-trip testing against InetAddress, etc.\n",
"number": 15969,
"review_comments": [
{
"body": "Maybe extract the part the must use the forbidden method into a little method so you don't have to annotate the whole thing? It'd be nice to be able to be sure that we're not doing dns lookups when we try to get the loopback address, for instance.\n",
"created_at": "2016-01-14T02:11:09Z"
},
{
"body": "Same here I think.\n",
"created_at": "2016-01-14T02:11:52Z"
},
{
"body": "I'm not happy to suppress a whole class either! In this case I understand wanting to suppress for all those newly forbidden methods across the class but this'll suppress failures for stuff like System.out....\n",
"created_at": "2016-01-14T02:13:49Z"
},
{
"body": "++\n",
"created_at": "2016-01-14T02:14:29Z"
},
{
"body": "i don't think we should make code more complex for this reason. the method's name is `resolveInternal` and i think its clear that it does DNS resolution.\n",
"created_at": "2016-01-14T02:19:59Z"
},
{
"body": "ooops. ++\n",
"created_at": "2016-01-14T07:37:11Z"
},
{
"body": "I totally agree with you. Actually when I wrote the code, I did the same thing which was done for EC2 plugin.\nI think that we can totally remove that and only support IP address. \n\nIt can be done in a follow up issue though. I'd add a TODO here or open an issue.\n",
"created_at": "2016-01-14T07:38:56Z"
}
],
"title": "ban DNS lookups with forbidden apis"
}
|
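One of the review comments above suggests extracting the code that legitimately needs a forbidden call into a small annotated method instead of suppressing a whole class. A hedged sketch of that shape, using the `@SuppressForbidden(reason = ...)` annotation that appears throughout the diffs below; the class and method names are illustrative:

```java
import java.net.InetAddress;
import org.elasticsearch.common.SuppressForbidden;

class HostResolver {

    // Keep the suppression on the one small method that intentionally resolves hostnames,
    // so the rest of the class still gets full forbidden-apis coverage.
    @SuppressForbidden(reason = "this is the one place we deliberately resolve a user-supplied hostname")
    static InetAddress[] resolve(String hostname) throws Exception {
        return InetAddress.getAllByName(hostname);
    }
}
```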
{
"commits": [
{
"message": "ban dns lookups with forbidden APIs\n\nfollowup to https://github.com/elastic/elasticsearch/pull/15851\n\nIn general, we should avoid InetAddress.getByName under most circumstances:\n* If you really want to do a lookup, why not getAllByName (why an arbitrary address)\n* if you really just want to parse an address, we have a separate method for that in InetAddresses"
},
{
"message": "Merge branch 'master' into dns"
}
],
"files": [
{
"diff": "@@ -18,9 +18,6 @@\n java.net.URL#getPath()\n java.net.URL#getFile()\n \n-@defaultMessage Usage of getLocalHost is discouraged\n-java.net.InetAddress#getLocalHost()\n-\n @defaultMessage Use java.nio.file instead of java.io.File API\n java.util.jar.JarFile\n java.util.zip.ZipFile\n@@ -86,11 +83,13 @@ java.net.Inet4Address#getHostAddress()\n java.net.Inet6Address#getHostAddress()\n java.net.InetSocketAddress#toString()\n \n-@defaultMessage avoid DNS lookups by accident: if you have a valid reason, then @SuppressWarnings with that reason so its completely clear\n+@defaultMessage avoid DNS lookups by accident: if you have a valid reason, then @SuppressForbidden with that reason so its completely clear\n+java.net.InetAddress#getAllByName(java.lang.String)\n java.net.InetAddress#getHostName()\n java.net.InetAddress#getCanonicalHostName()\n-\n+java.net.InetAddress#getByName(java.lang.String) @ Use InetAddresses.forString to parse an address, or InetAddress.getAllByName to resolve a hostname\n java.net.InetSocketAddress#getHostName() @ Use getHostString() instead, which avoids a DNS lookup\n+java.net.InetAddress#getLocalHost() @ Use getLoopbackAddress() instead, which avoids a DNS lookup\n \n @defaultMessage Do not violate java's access system\n java.lang.Class#getDeclaredClasses() @ Do not violate java's access system: Use getClasses() instead\n@@ -128,4 +127,4 @@ java.util.Collections#EMPTY_SET\n java.util.Collections#shuffle(java.util.List) @ Use java.util.Collections#shuffle(java.util.List, java.util.Random) with a reproducible source of randomness\n @defaultMessage Use org.elasticsearch.common.Randomness#get for reproducible sources of randomness\n java.util.Random#<init>()\n-java.util.concurrent.ThreadLocalRandom\n\\ No newline at end of file\n+java.util.concurrent.ThreadLocalRandom",
"filename": "buildSrc/src/main/resources/forbidden/all-signatures.txt",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.network;\n \n+import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n@@ -210,6 +211,7 @@ private InetAddress[] resolveInetAddresses(String hosts[]) throws IOException {\n }\n \n /** resolves a single host specification */\n+ @SuppressForbidden(reason = \"does DNS lookup when bind/publish host is specified as hostname\")\n private InetAddress[] resolveInternal(String host) throws IOException {\n if ((host.startsWith(\"#\") && host.endsWith(\"#\")) || (host.startsWith(\"_\") && host.endsWith(\"_\"))) {\n host = host.substring(1, host.length() - 1);",
"filename": "core/src/main/java/org/elasticsearch/common/network/NetworkService.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@ public interface TransportAddress extends Writeable<TransportAddress> {\n /**\n * Returns the address string for this transport address\n */\n+ // TODO: can this be a byte[]/InetAddress ?\n String getAddress();\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.network.NetworkAddress;\n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.network.NetworkService;\n@@ -436,7 +437,7 @@ private void writePortsFile(String type, BoundTransportAddress boundAddress) {\n Path tmpPortsFile = environment.logsFile().resolve(type + \".ports.tmp\");\n try (BufferedWriter writer = Files.newBufferedWriter(tmpPortsFile, Charset.forName(\"UTF-8\"))) {\n for (TransportAddress address : boundAddress.boundAddresses()) {\n- InetAddress inetAddress = InetAddress.getByName(address.getAddress());\n+ InetAddress inetAddress = InetAddresses.forString(address.getAddress());\n if (inetAddress instanceof Inet6Address && inetAddress.isLinkLocalAddress()) {\n // no link local, just causes problems\n continue;",
"filename": "core/src/main/java/org/elasticsearch/node/Node.java",
"status": "modified"
},
{
"diff": "@@ -692,6 +692,7 @@ public TransportAddress[] addressesFromString(String address, int perAddressLimi\n private static final Pattern BRACKET_PATTERN = Pattern.compile(\"^\\\\[(.*:.*)\\\\](?::([\\\\d\\\\-]*))?$\");\n \n /** parse a hostname+port range spec into its equivalent addresses */\n+ @SuppressForbidden(reason = \"does DNS lookup when unicast hosts are hostnames\")\n static TransportAddress[] parse(String hostPortString, String defaultPortRange, int perAddressLimit) throws UnknownHostException {\n Objects.requireNonNull(hostPortString);\n String host;",
"filename": "core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java",
"status": "modified"
},
{
"diff": "@@ -20,15 +20,14 @@\n package org.elasticsearch.cluster.node;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.DummyTransportAddress;\n import org.elasticsearch.common.transport.InetSocketTransportAddress;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.AfterClass;\n import org.junit.BeforeClass;\n \n-import java.net.InetAddress;\n-import java.net.UnknownHostException;\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.HashMap;\n@@ -48,8 +47,8 @@ public class DiscoveryNodeFiltersTests extends ESTestCase {\n private static InetSocketTransportAddress localAddress;\n \n @BeforeClass\n- public static void createLocalAddress() throws UnknownHostException {\n- localAddress = new InetSocketTransportAddress(InetAddress.getByName(\"192.1.1.54\"), 9999);\n+ public static void createLocalAddress() {\n+ localAddress = new InetSocketTransportAddress(InetAddresses.forString(\"192.1.1.54\"), 9999);\n }\n \n @AfterClass",
"filename": "core/src/test/java/org/elasticsearch/cluster/node/DiscoveryNodeFiltersTests.java",
"status": "modified"
},
{
"diff": "@@ -16,11 +16,13 @@\n \n package org.elasticsearch.common.network;\n \n+import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.test.ESTestCase;\n \n import java.net.InetAddress;\n import java.net.UnknownHostException;\n \n+@SuppressForbidden(reason = \"checks that InetAddresses logic is consistent with InetAddress\")\n public class InetAddressesTests extends ESTestCase {\n public void testForStringBogusInput() {\n String[] bogusInputs = {",
"filename": "core/src/test/java/org/elasticsearch/common/network/InetAddressesTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.network;\n \n+import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n@@ -88,6 +89,7 @@ public void testNoScopeID() throws Exception {\n }\n \n /** Test that ipv4 address formatting round trips */\n+ @SuppressForbidden(reason = \"checks against InetAddress on purpose\")\n public void testRoundTripV4() throws Exception {\n byte bytes[] = new byte[4];\n Random random = random();\n@@ -101,6 +103,7 @@ public void testRoundTripV4() throws Exception {\n }\n \n /** Test that ipv6 address formatting round trips */\n+ @SuppressForbidden(reason = \"checks against InetAddress on purpose\")\n public void testRoundTripV6() throws Exception {\n byte bytes[] = new byte[16];\n Random random = random();\n@@ -115,13 +118,13 @@ public void testRoundTripV6() throws Exception {\n \n /** creates address without any lookups. hostname can be null, for missing */\n private InetAddress forge(String hostname, String address) throws IOException {\n- byte bytes[] = InetAddress.getByName(address).getAddress();\n+ byte bytes[] = InetAddresses.forString(address).getAddress();\n return InetAddress.getByAddress(hostname, bytes);\n }\n \n /** creates scoped ipv6 address without any lookups. hostname can be null, for missing */\n private InetAddress forgeScoped(String hostname, String address, int scopeid) throws IOException {\n- byte bytes[] = InetAddress.getByName(address).getAddress();\n+ byte bytes[] = InetAddresses.forString(address).getAddress();\n return Inet6Address.getByAddress(hostname, bytes, scopeid);\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/network/NetworkAddressTests.java",
"status": "modified"
},
{
"diff": "@@ -88,15 +88,15 @@ public void testPublishMulticastV6() throws Exception {\n */\n public void testBindAnyLocalV4() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n- assertEquals(InetAddress.getByName(\"0.0.0.0\"), service.resolveBindHostAddresses(new String[] { \"0.0.0.0\" })[0]);\n+ assertEquals(InetAddresses.forString(\"0.0.0.0\"), service.resolveBindHostAddresses(new String[] { \"0.0.0.0\" })[0]);\n }\n \n /**\n * ensure specifying wildcard ipv6 address will bind to all interfaces\n */\n public void testBindAnyLocalV6() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n- assertEquals(InetAddress.getByName(\"::\"), service.resolveBindHostAddresses(new String[] { \"::\" })[0]);\n+ assertEquals(InetAddresses.forString(\"::\"), service.resolveBindHostAddresses(new String[] { \"::\" })[0]);\n }\n \n /**",
"filename": "core/src/test/java/org/elasticsearch/common/network/NetworkServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -32,8 +32,8 @@ public class NetworkUtilsTests extends ESTestCase {\n * test sort key order respects PREFER_IPV4\n */\n public void testSortKey() throws Exception {\n- InetAddress localhostv4 = InetAddress.getByName(\"127.0.0.1\");\n- InetAddress localhostv6 = InetAddress.getByName(\"::1\");\n+ InetAddress localhostv4 = InetAddresses.forString(\"127.0.0.1\");\n+ InetAddress localhostv6 = InetAddresses.forString(\"::1\");\n assertTrue(NetworkUtils.sortKey(localhostv4, false) < NetworkUtils.sortKey(localhostv6, false));\n assertTrue(NetworkUtils.sortKey(localhostv6, true) < NetworkUtils.sortKey(localhostv4, true));\n }\n@@ -42,15 +42,15 @@ public void testSortKey() throws Exception {\n * test ordinary addresses sort before private addresses\n */\n public void testSortKeySiteLocal() throws Exception {\n- InetAddress siteLocal = InetAddress.getByName(\"172.16.0.1\");\n+ InetAddress siteLocal = InetAddresses.forString(\"172.16.0.1\");\n assert siteLocal.isSiteLocalAddress();\n- InetAddress ordinary = InetAddress.getByName(\"192.192.192.192\");\n+ InetAddress ordinary = InetAddresses.forString(\"192.192.192.192\");\n assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(siteLocal, true));\n assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(siteLocal, false));\n \n- InetAddress siteLocal6 = InetAddress.getByName(\"fec0::1\");\n+ InetAddress siteLocal6 = InetAddresses.forString(\"fec0::1\");\n assert siteLocal6.isSiteLocalAddress();\n- InetAddress ordinary6 = InetAddress.getByName(\"fddd::1\");\n+ InetAddress ordinary6 = InetAddresses.forString(\"fddd::1\");\n assertTrue(NetworkUtils.sortKey(ordinary6, true) < NetworkUtils.sortKey(siteLocal6, true));\n assertTrue(NetworkUtils.sortKey(ordinary6, false) < NetworkUtils.sortKey(siteLocal6, false));\n }\n@@ -59,9 +59,9 @@ public void testSortKeySiteLocal() throws Exception {\n * test private addresses sort before link local addresses\n */\n public void testSortKeyLinkLocal() throws Exception {\n- InetAddress linkLocal = InetAddress.getByName(\"fe80::1\");\n+ InetAddress linkLocal = InetAddresses.forString(\"fe80::1\");\n assert linkLocal.isLinkLocalAddress();\n- InetAddress ordinary = InetAddress.getByName(\"fddd::1\");\n+ InetAddress ordinary = InetAddresses.forString(\"fddd::1\");\n assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(linkLocal, true));\n assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(linkLocal, false));\n }\n@@ -70,8 +70,8 @@ public void testSortKeyLinkLocal() throws Exception {\n * Test filtering out ipv4/ipv6 addresses\n */\n public void testFilter() throws Exception {\n- InetAddress addresses[] = { InetAddress.getByName(\"::1\"), InetAddress.getByName(\"127.0.0.1\") };\n- assertArrayEquals(new InetAddress[] { InetAddress.getByName(\"127.0.0.1\") }, NetworkUtils.filterIPV4(addresses));\n- assertArrayEquals(new InetAddress[] { InetAddress.getByName(\"::1\") }, NetworkUtils.filterIPV6(addresses));\n+ InetAddress addresses[] = { InetAddresses.forString(\"::1\"), InetAddresses.forString(\"127.0.0.1\") };\n+ assertArrayEquals(new InetAddress[] { InetAddresses.forString(\"127.0.0.1\") }, NetworkUtils.filterIPV4(addresses));\n+ assertArrayEquals(new InetAddress[] { InetAddresses.forString(\"::1\") }, NetworkUtils.filterIPV6(addresses));\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.io.stream.ByteBufferStreamInput;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.test.ESTestCase;\n \n import java.net.InetAddress;\n@@ -39,11 +40,9 @@\n public class BoundTransportAddressTests extends ESTestCase {\n \n public void testSerialization() throws Exception {\n- InetAddress[] inetAddresses = InetAddress.getAllByName(\"0.0.0.0\");\n+ InetAddress inetAddress = InetAddresses.forString(\"0.0.0.0\");\n List<InetSocketTransportAddress> transportAddressList = new ArrayList<>();\n- for (InetAddress address : inetAddresses) {\n- transportAddressList.add(new InetSocketTransportAddress(address, randomIntBetween(9200, 9299)));\n- }\n+ transportAddressList.add(new InetSocketTransportAddress(inetAddress, randomIntBetween(9200, 9299)));\n final BoundTransportAddress transportAddress = new BoundTransportAddress(transportAddressList.toArray(new InetSocketTransportAddress[0]), transportAddressList.get(0));\n assertThat(transportAddress.boundAddresses().length, equalTo(transportAddressList.size()));\n ",
"filename": "core/src/test/java/org/elasticsearch/common/transport/BoundTransportAddressTests.java",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.InetSocketTransportAddress;\n import org.elasticsearch.common.transport.LocalTransportAddress;\n@@ -57,7 +58,6 @@\n import org.hamcrest.Matchers;\n \n import java.io.IOException;\n-import java.net.InetAddress;\n import java.net.UnknownHostException;\n import java.util.ArrayList;\n import java.util.Collections;\n@@ -291,7 +291,7 @@ public void testHandleNodeJoin_incompatibleMinVersion() throws UnknownHostExcept\n String nodeName = internalCluster().startNode(nodeSettings, Version.V_2_0_0_beta1);\n ZenDiscovery zenDiscovery = (ZenDiscovery) internalCluster().getInstance(Discovery.class, nodeName);\n ClusterService clusterService = internalCluster().getInstance(ClusterService.class, nodeName);\n- DiscoveryNode node = new DiscoveryNode(\"_node_id\", new InetSocketTransportAddress(InetAddress.getByName(\"0.0.0.0\"), 0), Version.V_1_6_0);\n+ DiscoveryNode node = new DiscoveryNode(\"_node_id\", new InetSocketTransportAddress(InetAddresses.forString(\"0.0.0.0\"), 0), Version.V_1_6_0);\n final AtomicReference<IllegalStateException> holder = new AtomicReference<>();\n zenDiscovery.handleJoinRequest(node, clusterService.state(), new MembershipAction.JoinCallback() {\n @Override",
"filename": "core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryIT.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;\n import org.elasticsearch.client.transport.TransportClient;\n import org.elasticsearch.cluster.health.ClusterHealthStatus;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.network.NetworkAddress;\n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.settings.Settings;\n@@ -33,7 +34,6 @@\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n import org.elasticsearch.test.junit.annotations.Network;\n \n-import java.net.InetAddress;\n import java.util.Locale;\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n@@ -76,7 +76,7 @@ public void testThatTransportClientCanConnect() throws Exception {\n .put(\"path.home\", createTempDir().toString())\n .build();\n try (TransportClient transportClient = TransportClient.builder().settings(settings).build()) {\n- transportClient.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(\"127.0.0.1\"), randomPort));\n+ transportClient.addTransportAddress(new InetSocketTransportAddress(InetAddresses.forString(\"127.0.0.1\"), randomPort));\n ClusterHealthResponse response = transportClient.admin().cluster().prepareHealth().get();\n assertThat(response.getStatus(), is(ClusterHealthStatus.GREEN));\n }",
"filename": "core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -45,13 +45,13 @@ protected MockTransportService build(Settings settings, Version version, NamedWr\n return transportService;\n }\n \n+ // this is not a particularly good test: what if tcp blackhole is enabled (very slow), or the port is in use?\n public void testConnectException() throws UnknownHostException {\n try {\n- serviceA.connectToNode(new DiscoveryNode(\"C\", new InetSocketTransportAddress(InetAddress.getByName(\"localhost\"), 9876), Version.CURRENT));\n+ serviceA.connectToNode(new DiscoveryNode(\"C\", new InetSocketTransportAddress(InetAddress.getLoopbackAddress(), 9876), Version.CURRENT));\n fail(\"Expected ConnectTransportException\");\n } catch (ConnectTransportException e) {\n assertThat(e.getMessage(), containsString(\"connect_timeout\"));\n- assertThat(e.getMessage(), containsString(\"[localhost/127.0.0.1:9876]\"));\n }\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/transport/netty/SimpleNettyTransportTests.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.cloud.aws.AwsEc2ServiceImpl;\n+import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.network.NetworkService.CustomNameResolver;\n import org.elasticsearch.common.settings.Settings;\n@@ -91,6 +92,7 @@ public Ec2NameResolver(Settings settings) {\n * @return the appropriate host resolved from ec2 meta-data, or null if it cannot be obtained.\n * @see CustomNameResolver#resolveIfPossible(String)\n */\n+ @SuppressForbidden(reason = \"resolves ec2 hostnames if asked to\")\n public InetAddress[] resolve(Ec2HostnameType type) throws IOException {\n InputStream in = null;\n String metadataUrl = AwsEc2ServiceImpl.EC2_METADATA_URL + type.ec2Name;\n@@ -104,10 +106,11 @@ public InetAddress[] resolve(Ec2HostnameType type) throws IOException {\n \n String metadataResult = urlReader.readLine();\n if (metadataResult == null || metadataResult.length() == 0) {\n- throw new IOException(\"no gce metadata returned from [\" + url + \"] for [\" + type.configName + \"]\");\n+ throw new IOException(\"no ec2 metadata returned from [\" + url + \"] for [\" + type.configName + \"]\");\n }\n- // only one address: because we explicitly ask for only one via the Ec2HostnameType\n- return new InetAddress[] { InetAddress.getByName(metadataResult) };\n+ // really should be only one address: because we explicitly ask for only one via the Ec2HostnameType\n+ // but why do we even allow configuring this by hostname...\n+ return InetAddress.getAllByName(metadataResult);\n } catch (IOException e) {\n throw new IOException(\"IOException caught when fetching InetAddress from [\" + metadataUrl + \"]\", e);\n } finally {",
"filename": "plugins/discovery-ec2/src/main/java/org/elasticsearch/cloud/aws/network/Ec2NameResolver.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.cloud.gce.GceComputeService;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.network.NetworkService.CustomNameResolver;\n import org.elasticsearch.common.settings.Settings;\n@@ -82,6 +83,7 @@ public GceNameResolver(Settings settings, GceComputeService gceComputeService) {\n * @return the appropriate host resolved from gce meta-data.\n * @see CustomNameResolver#resolveIfPossible(String)\n */\n+ @SuppressForbidden(reason = \"resolves gce hostnames if asked to\")\n private InetAddress[] resolve(String value) throws IOException {\n String gceMetadataPath;\n if (value.equals(GceAddressResolverType.GCE.configName)) {\n@@ -109,8 +111,9 @@ private InetAddress[] resolve(String value) throws IOException {\n if (metadataResult == null || metadataResult.length() == 0) {\n throw new IOException(\"no gce metadata returned from [\" + gceMetadataPath + \"] for [\" + value + \"]\");\n }\n- // only one address: because we explicitly ask for only one via the GceHostnameType\n- return new InetAddress[] { InetAddress.getByName(metadataResult) };\n+ // really should be only one address: because we explicitly ask for only one via the Ec2HostnameType\n+ // but why do we even allow configuring this by hostname...\n+ return InetAddress.getAllByName(metadataResult);\n } catch (IOException e) {\n throw new IOException(\"IOException caught when fetching InetAddress from [\" + gceMetadataPath + \"]\", e);\n }",
"filename": "plugins/discovery-gce/src/main/java/org/elasticsearch/cloud/gce/network/GceNameResolver.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.discovery.gce;\n \n import org.elasticsearch.cloud.gce.network.GceNameResolver;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.network.NetworkService;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ESTestCase;\n@@ -39,21 +40,21 @@ public class GceNetworkTests extends ESTestCase {\n * Test for network.host: _gce_\n */\n public void testNetworkHostGceDefault() throws IOException {\n- resolveGce(\"_gce_\", InetAddress.getByName(\"10.240.0.2\"));\n+ resolveGce(\"_gce_\", InetAddresses.forString(\"10.240.0.2\"));\n }\n \n /**\n * Test for network.host: _gce:privateIp_\n */\n public void testNetworkHostPrivateIp() throws IOException {\n- resolveGce(\"_gce:privateIp_\", InetAddress.getByName(\"10.240.0.2\"));\n+ resolveGce(\"_gce:privateIp_\", InetAddresses.forString(\"10.240.0.2\"));\n }\n \n /**\n * Test for network.host: _gce:hostname_\n */\n public void testNetworkHostPrivateDns() throws IOException {\n- resolveGce(\"_gce:hostname_\", InetAddress.getByName(\"localhost\"));\n+ resolveGce(\"_gce:hostname_\", InetAddresses.forString(\"127.0.0.1\"));\n }\n \n /**\n@@ -70,8 +71,8 @@ public void testNetworkHostWrongSetting() throws IOException {\n * network.host: _gce:privateIp:1_\n */\n public void testNetworkHostPrivateIpInterface() throws IOException {\n- resolveGce(\"_gce:privateIp:0_\", InetAddress.getByName(\"10.240.0.2\"));\n- resolveGce(\"_gce:privateIp:1_\", InetAddress.getByName(\"10.150.0.1\"));\n+ resolveGce(\"_gce:privateIp:0_\", InetAddresses.forString(\"10.240.0.2\"));\n+ resolveGce(\"_gce:privateIp:1_\", InetAddresses.forString(\"10.150.0.1\"));\n }\n \n /**",
"filename": "plugins/discovery-gce/src/test/java/org/elasticsearch/discovery/gce/GceNetworkTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.plugin.example;\n \n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.BufferedReader;\n@@ -33,7 +34,7 @@ public class ExampleExternalIT extends ESTestCase {\n public void testExample() throws Exception {\n String stringAddress = Objects.requireNonNull(System.getProperty(\"external.address\"));\n URL url = new URL(\"http://\" + stringAddress);\n- InetAddress address = InetAddress.getByName(url.getHost());\n+ InetAddress address = InetAddresses.forString(url.getHost());\n try (Socket socket = new Socket(address, url.getPort());\n BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {\n assertEquals(\"TEST\", reader.readLine());",
"filename": "plugins/jvm-example/src/test/java/org/elasticsearch/plugin/example/ExampleExternalIT.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.transport.TransportClient;\n+import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.settings.Settings;\n@@ -106,6 +107,7 @@ private static Client startClient(Path tempDir, TransportAddress... transportAdd\n return client;\n }\n \n+ @SuppressForbidden(reason = \"does DNS lookup to connect to external server\")\n private static Client startClient() throws IOException {\n String[] stringAddresses = clusterAddresses.split(\",\");\n TransportAddress[] transportAddresses = new TransportAddress[stringAddresses.length];",
"filename": "qa/smoke-test-client/src/test/java/org/elasticsearch/smoketest/ESSmokeClientTestCase.java",
"status": "modified"
}
]
}
|
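The signatures change above also bans `InetAddress.getLocalHost()`, which resolves the machine's own hostname and can block or fail on misconfigured DNS, in favor of `getLoopbackAddress()`, which never leaves the JVM. A tiny illustrative comparison:

```java
import java.net.InetAddress;

class LoopbackSketch {
    public static void main(String[] args) throws Exception {
        // Performs a hostname lookup for the local machine; slow or broken DNS makes this slow or throw.
        InetAddress viaDns = InetAddress.getLocalHost();

        // No DNS involved; this is what tests and "bind to localhost" code usually want.
        InetAddress loopback = InetAddress.getLoopbackAddress();

        System.out.println(viaDns + " vs " + loopback);
    }
}
```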
{
"body": "It is possible for a null pointer to happen when using a named query when you also have nested docs (seen on 1.4.3, I can't reproduce on 2.1.1 but given the number of things that have to be in place to reproduce I can't be that confident that it doesn't exist somewhere in the latest version).\n\nA script to reproduce it can be found here: https://gist.github.com/tstibbs/92de2206531ca37e0764\n\nThe last command in the gist is a search request that should result in a null pointer, I see something like the following in the logs:\n\n```\n[2016-01-12 16:51:30,174][DEBUG][action.search.type ] [Black Fox] [myindex][0], node[WYV4NA6SRNyRXepgS8QEQQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@43cc9843]\njava.lang.NullPointerException\n at org.apache.lucene.search.DisjunctionScorer.advance(DisjunctionScorer.java:155)\n at org.apache.lucene.search.ConstantScoreQuery$ConstantScorer.advance(ConstantScoreQuery.java:278)\n at org.elasticsearch.search.fetch.matchedqueries.MatchedQueriesFetchSubPhase.addMatchedQueries(MatchedQueriesFetchSubPhase.java:108)\n at org.elasticsearch.search.fetch.matchedqueries.MatchedQueriesFetchSubPhase.hitExecute(MatchedQueriesFetchSubPhase.java:80)\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:211)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:372)\n at org.elasticsearch.search.action.SearchServiceTransportAction$11.call(SearchServiceTransportAction.java:333)\n at org.elasticsearch.search.action.SearchServiceTransportAction$11.call(SearchServiceTransportAction.java:330)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n[2016-01-12 16:51:30,174][DEBUG][action.search.type ] [Black Fox] All shards failed for phase: [query_fetch]\n```\n\nI'm not sure if everything in the script is strictly required, but the following three things all seem to be required in order to reproduce it:\n- A large number of (possibly empty) documents – see org.apache.lucene.search.ConstantScoreAutoRewrite:91.\n- Multiple matching docs on the same shard.\n- Multiple matching documents that have nested children – the nested docs cause issues in [MatchedQueriesFetchSubPhase](https://github.com/elastic/elasticsearch/blob/1.4/src/main/java/org/elasticsearch/search/fetch/matchedqueries/MatchedQueriesFetchSubPhase.java#L108) where the filterIterator can be advanced beyond NO_MORE_DOCS (I’m not sure if that’s allowed or not, but the org.apache.lucene.search.DisjunctionScorer.advance method assumes it isn’t).\n",
"comments": [
{
"body": "I'm able to reproduce this in 1.7.3, but not in 2.0.0 or beyond. @jpountz @martijnvg do you know for sure whether this has been fixed?\n",
"created_at": "2016-01-13T11:25:50Z"
},
{
"body": "I don't know when it was fixed but I think I found the bug: in 1.7 MatchedQueriesFetchSubPhase might call DocIdSetIterator.advance after it already returned NO_MORE_DOCS, which is illegal.\n",
"created_at": "2016-01-13T14:20:34Z"
},
{
"body": "Is this no longer the case in 2.x? In which case we can close?\n",
"created_at": "2016-01-13T17:18:41Z"
},
{
"body": "This rule still exists in 2.x but the code is fine and doesn't call advance on an exhausted iterator. Do we want to get this fixed in 1.7?\n",
"created_at": "2016-01-13T17:20:05Z"
},
{
"body": "@jpountz depends how big a job it is i suppose? \n",
"created_at": "2016-01-13T17:22:33Z"
},
{
"body": "It looks simple. I'll give it a try and give up if it's more complicated than I expected.\n",
"created_at": "2016-01-13T17:33:20Z"
},
{
"body": "Fixed via #15962\n",
"created_at": "2016-01-14T16:36:16Z"
}
],
"number": 15949,
"title": "Nullpointer using named query when there are nested docs"
}
|
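The crash above comes down to Lucene's `DocIdSetIterator` contract: `advance(target)` may only be called with a target strictly beyond the iterator's current position, and never after the iterator has returned `NO_MORE_DOCS`. A hedged sketch of lockstep iteration that respects the contract; the `doc > filterDoc` guard is exactly the effect of the one-character fix in the cells below, since once the filter is exhausted (`NO_MORE_DOCS` is `Integer.MAX_VALUE`) no document id can exceed it:

```java
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;

class LockstepIteration {

    /** Visits the doc ids from {@code docs} that are also present in {@code filter}. */
    static void matchingDocs(DocIdSetIterator docs, DocIdSetIterator filter) throws IOException {
        int filterDoc = -1;
        for (int doc = docs.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = docs.nextDoc()) {
            // Only advance the filter when the target is strictly ahead of its current position;
            // this also guarantees we never advance an already exhausted iterator.
            if (doc > filterDoc) {
                filterDoc = filter.advance(doc);
            }
            if (filterDoc == doc) {
                // 'doc' matches the filter; handle it here.
            }
        }
    }
}
```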
{
"body": "Closes #15949\n",
"number": 15962,
"review_comments": [],
"title": "Fix MatchedQueriesFetchSubPhase's consumption of DocIdSetIterator."
}
|
{
"commits": [
{
"message": "Fix MatchedQueriesFetchSubPhase's consumption of DocIdSetIterator."
}
],
"files": [
{
"diff": "@@ -104,7 +104,7 @@ private void addMatchedQueries(HitContext hitContext, ImmutableMap<String, Filte\n if (filterIterator != null && docAndNestedDocsIterator != null) {\n int matchedDocId = -1;\n for (int docId = docAndNestedDocsIterator.nextDoc(); docId < DocIdSetIterator.NO_MORE_DOCS; docId = docAndNestedDocsIterator.nextDoc()) {\n- if (docId != matchedDocId) {\n+ if (docId > matchedDocId) {\n matchedDocId = filterIterator.advance(docId);\n }\n if (matchedDocId == docId) {",
"filename": "src/main/java/org/elasticsearch/search/fetch/matchedqueries/MatchedQueriesFetchSubPhase.java",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,7 @@\n import org.junit.Test;\n \n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.Iterator;\n import java.util.List;\n \n@@ -791,7 +792,7 @@ public void testNestedFetchFeatures() {\n assertThat(version, equalTo(1l));\n \n // Can't use named queries for the same reason explain doesn't work:\n- assertThat(searchHit.matchedQueries(), emptyArray());\n+ assertThat(Arrays.asList(searchHit.matchedQueries()), equalTo(Arrays.asList(\"test\")));\n \n SearchHitField field = searchHit.field(\"comments.user\");\n assertThat(field.getValue().toString(), equalTo(\"a\"));",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/TopHitsTests.java",
"status": "modified"
},
{
"diff": "@@ -24,8 +24,13 @@\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.hamcrest.ElasticsearchAssertions;\n import org.junit.Test;\n \n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.Collections;\n+\n import static org.elasticsearch.index.query.FilterBuilders.*;\n import static org.elasticsearch.index.query.QueryBuilders.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n@@ -376,4 +381,24 @@ public void testMatchedWithWrapperQuery() throws Exception {\n assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo(\"abc\"));\n }\n }\n+\n+ public void test15949() {\n+ client().admin().indices().prepareCreate(\"test\")\n+ .setSettings(\"index.number_of_shards\", 1)\n+ .addMapping(\"test\", \"children\", \"type=nested\")\n+ .get();\n+ client().prepareIndex(\"test\", \"test\", \"1\").setSource(\"foo\", \"bar\", \"children\", Arrays.asList(Collections.singletonMap(\"a\", \"b\"))).get();\n+ client().prepareIndex(\"test\", \"test\", \"2\").setSource().get();\n+ client().prepareIndex(\"test\", \"test\", \"2\").setSource().get();\n+ refresh();\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(QueryBuilders.boolQuery()\n+ .should(QueryBuilders.matchAllQuery())\n+ .should(QueryBuilders.boolQuery()\n+ .queryName(\"abc\")\n+ .should(QueryBuilders.matchQuery(\"foo\", \"bar\"))\n+ .should(QueryBuilders.matchQuery(\"foo\", \"bar\"))))\n+ .get();\n+ ElasticsearchAssertions.assertNoFailures(searchResponse);\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/search/matchedqueries/MatchedQueriesTests.java",
"status": "modified"
}
]
}
|
{
"body": "We can end up with a circular reference in an Exception A, which has as suppressed Exception B which has as cause A again if an Exception is thrown in the try block of `Translog.ensureSynced()` (Translog.java#L562).\n\nThis is easy to see if you pick https://github.com/brwe/elasticsearch/commit/bb108994d9b4929481f2f74b3e01bc8c0f756b18 and run the `TranslogTests`.\n\nThe way this happens is so:\n1. We call `Translog.ensureSynced()` but unfortunately when `TranslogWriter.sync()` is called (via `current.syncUpTo()`) the writing of the checkpoint fails (TranslogWriter.java#L151) with Exception A. This causes `TranslogWriter.tragedy` to be set to Exception A before it is thrown (`TranslogWriter.closeWithTragicEvent()`). \n2. Because of Exception A we now get to the catch clause in `Translog.ensureSynced()` where we call `Translog.closeOnTragicEvent()` and pass as parameter the Exception A _that we just set as tragic exception_ (`TranslogWriter.tragedy`).\n3. In `Translog.closeOnTragicEvent()` we call `close()` but that will fail too because we closed the translog already before (because of the tragic event A). We will get Exception B which is an `AlreadyClosedException` with cause `TranslogWriter.tragedy` (Exception A). (It is created in `TranslogWriter.ensureOpen()`). \n4. Because of Exception B now we get to the catch clause of `Translog.closeOnTragicEvent()` and have\n - `inner` Exception B, the `AlreadyClosedException` with cause `TranslogWriter.tragedy` (Exception A)\n - method parameter Exception (`ex`) which is `TranslogWriter.tragedy` (Exception A)\n\nWe the set B as suppressed Exception for the parameter exception A and therefore end up with a circular reference A that has suppressed B that has cause A.\n\nAs a result, when we try to serialize this, we get a stackoverflow.\n\nThis can be seen also in this build failure: http://build-us-00.elastic.co/job/es_core_21_metal/326/\n\nIf this explanation is too confusing let me know. \n",
"comments": [
{
"body": "thanks so much @brwe for tracking this down. I know exactly what kind of time-sink this is/can be. The explanation makes perfect sense to me after reading it a couple of times :). I think the fix is relatively straight forward, no? I mean we should no add `AlreadyClosedExceptions` as suppressed exceptions when we call `Translog#close()` inside `Translog#closeOnTragicEvent(Throwable), what do you think?\n",
"created_at": "2016-01-13T08:27:31Z"
}
],
"number": 15941,
"title": "Circular reference in Exception when checkpoint writing fails "
}
|
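The cycle described in #15941 is easy to reproduce in plain Java: exception A suppresses B while B's cause is A, and anything that walks the cause/suppressed graph without cycle detection (such as exception serialization) can recurse until it overflows the stack. A standalone sketch of the problem and of a cause-check guard; the real fix in the cells below special-cases `AlreadyClosedException` instead, and the exception types here are just stand-ins:

```java
class CircularSuppressionSketch {
    public static void main(String[] args) {
        Exception tragedy = new RuntimeException("checkpoint write failed");               // Exception A
        Exception closed = new IllegalStateException("translog already closed", tragedy);  // Exception B, cause = A

        // Unguarded, this creates the cycle: A suppresses B, and B's cause is A.
        //   tragedy.addSuppressed(closed);

        // Guarded: only suppress when the inner exception does not point back at us.
        if (closed.getCause() != tragedy) {
            tragedy.addSuppressed(closed);
        }

        System.out.println("suppressed: " + tragedy.getSuppressed().length); // 0, no cycle
    }
}
```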
{
"body": "Don't set the suppressed Exception in Translog.closeOnTragicEvent(Exception ex) if it is an\nAlreadyClosedException. ACE is thrown by the TranslogWriter and as cause might\ncontain the Exception that we add the suppressed ACE to. We then end up with a\ncircular reference where Exception A has a suppressed Exception B that has as cause A.\nThis would cause a stackoverflow when we try to serialize it.\nFor a more detailed description see #15941\n\ncloses #15941\n",
"number": 15952,
"review_comments": [
{
"body": "maybe assert before we add?\n",
"created_at": "2016-01-13T13:20:32Z"
}
],
"title": "Avoid circular reference in exception"
}
|
{
"commits": [
{
"message": "Avoid circular reference in exception\n\nDon't set the suppressed Exception in Translog.closeOnTragicEvent(Exception ex) if it is an\nAlreadyClosedException. ACE is thrown by the TranslogWriter and as cause might\ncontain the Exception that we add the suppressed ACE to. We then end up with a\ncircular reference where Exception A has a suppressed Exception B that has as cause A.\nThis would cause a stackoverflow when we try to serialize it.\nFor a more detailed description see #15941\n\ncloses #15941"
},
{
"message": "assert before we set the suppressed"
}
],
"files": [
{
"diff": "@@ -576,7 +576,11 @@ private void closeOnTragicEvent(Throwable ex) {\n if (current.getTragicException() != null) {\n try {\n close();\n+ } catch (AlreadyClosedException inner) {\n+ // don't do anything in this case. The AlreadyClosedException comes from TranslogWriter and we should not add it as suppressed because\n+ // will contain the Exception ex as cause. See also https://github.com/elastic/elasticsearch/issues/15941\n } catch (Exception inner) {\n+ assert (ex != inner.getCause());\n ex.addSuppressed(inner);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java",
"status": "modified"
}
]
}
|
{
"body": "The mpercolate call appears to be broken in 2.1.1 in clusters with more than one node. The following sequence works perfectly on my laptop but fails in two separate three node clusters (in \"green\" state). Sometimes the call succeeds on one of the three machines but not on the others. I could not find any errors in the logs.\nPUT http://localhost:9200/test\n[mapping.txt](https://github.com/elastic/elasticsearch/files/86558/mapping.txt)\nPOST http://localhost:9200/test/.percolator/Certificate_Expiration_Warning_Medium\n[percolator.txt](https://github.com/elastic/elasticsearch/files/86560/percolator.txt)\nPOST http://localhost:9200/test/_mpercolate\n[mpercolate.txt](https://github.com/elastic/elasticsearch/files/86562/mpercolate.txt)\nExpected Output:\n[output.txt](https://github.com/elastic/elasticsearch/files/86566/output.txt)\nActual Output:\n[actual.txt](https://github.com/elastic/elasticsearch/files/86577/actual.txt)\n",
"comments": [
{
"body": "Simple recreation: \n\nStart two nodes, run this:\n\n```\nPUT test\n{ \n \"settings\":{ \n \"number_of_shards\":1,\n \"number_of_replicas\":1\n },\n \"mappings\":{ \n \"cert_localmachine_my\":{ \n \"properties\":{ \n \"cert_localmachine_my\":{ \n \"properties\":{ \n \"NotAfter\":{ \n \"type\":\"date\",\n \"format\":\"strict_date_optional_time||epoch_millis\"\n }\n }\n }\n }\n }\n }\n}\nPOST /test/.percolator/Certificate_Expiration_Warning_Medium\n{ \n \"query\":{ \n \"bool\":{ \n \"filter\":{ \n \"range\":{ \n \"cert_localmachine_my.NotAfter\":{ \n \"lt\":\"now+90d\"\n }\n }\n }\n }\n }\n}\n\n\nPOST test/_mpercolate\n{\"percolate\":{\"type\":\"cert_localmachine_my\"}}\n{\"doc\":{\"cert_localmachine_my\":{\"NotAfter\":\"2015-07-21T10:28:01-07:00\"}}}\n```\n\nThen restart one node and run the mpercolate command a few times. Sometimes it matches sometimes it doesn't.\n",
"created_at": "2016-01-12T11:53:29Z"
},
{
"body": "The problem is not related to the node restart. It also doesn't match before the restart. The problem is that the mpercolate api doesn't serialise the start time of the request to other nodes that participate in mpercolate execution. I'll open a pr for that. \n\n@rabx88 Can you confirm that this problem only occurs with range query/filters that use `now`?\n",
"created_at": "2016-01-12T17:43:42Z"
},
{
"body": "That is correct. We tried a few other examples using combinations of term filters and it worked as expected. \n",
"created_at": "2016-01-12T19:07:57Z"
}
],
"number": 15908,
"title": "mpercolate does not function properly in elasticsearch 2.1.1"
}
|
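As the comments above explain, queries that use `now` only behave consistently across nodes if every shard-level request carries the start time captured on the coordinating node, which means that timestamp has to be part of the request's wire format. A hedged sketch of what that looks like against the 2.x-era streaming API visible in the diffs below; the class and field names are illustrative, not the actual percolator classes:

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.transport.TransportRequest;

// Illustrative shard-level request that carries the coordinating node's start time,
// so that "now"-relative date math resolves to the same instant on every node.
public class ExampleShardRequest extends TransportRequest {

    private long startTimeMillis;

    public ExampleShardRequest() {
    }

    public ExampleShardRequest(long startTimeMillis) {
        this.startTimeMillis = startTimeMillis;
    }

    public long startTimeMillis() {
        return startTimeMillis;
    }

    @Override
    public void readFrom(StreamInput in) throws IOException {
        super.readFrom(in);
        startTimeMillis = in.readVLong();   // forgetting this read/write pair is exactly the class of bug in #15908
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        out.writeVLong(startTimeMillis);
    }
}
```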
{
"body": "PR for #15908 \n\n(when back porting to other branches I'll add version checks for serialisation)\n",
"number": 15938,
"review_comments": [
{
"body": "don't we need to read the original indices anymore here? not sure, but if that's the case we may be able to remove that specific PercolateShardRequest constructor.\n",
"created_at": "2016-01-13T07:42:29Z"
},
{
"body": "good point. We need to take the original indices into account. Let me change this.\n",
"created_at": "2016-01-13T09:08:16Z"
},
{
"body": "I think this change is ok and we do take the original indices into account. We do this when we create `PercolateShardRequest` instance in TransportMultiPercolateAction at line 206.\n",
"created_at": "2016-01-13T09:14:40Z"
},
{
"body": "you are right, the original indices are read/written in the super class, good! You can then remove the PercolateShardRequest that takes shardId and originalIndices as arguments I think.\n",
"created_at": "2016-01-13T09:26:27Z"
},
{
"body": "I tried to remove it but unit tests are using it (PercolateDocumentParserTests), the percolator document parser relies on the shard id to be set on the percolate shard request. So lets keep it?\n",
"created_at": "2016-01-13T13:30:02Z"
},
{
"body": "I think it's odd to have a public constructor for a test only... remove the OriginalIndices parameter at least? it's always set to null I think. \n",
"created_at": "2016-01-13T14:34:30Z"
}
],
"title": "mpercolate api should serialise start time "
}
|
{
"commits": [
{
"message": "percolator: Make sure that start time is serialized on the mpercolate shard requests\n\nCloses #15908"
}
],
"files": [
{
"diff": "@@ -52,10 +52,6 @@ public PercolateShardRequest() {\n this.startTime = request.startTime;\n }\n \n- public PercolateShardRequest(ShardId shardId, OriginalIndices originalIndices) {\n- super(shardId, originalIndices);\n- }\n-\n PercolateShardRequest(ShardId shardId, PercolateRequest request) {\n super(shardId, request);\n this.documentType = request.documentType();",
"filename": "core/src/main/java/org/elasticsearch/action/percolate/PercolateShardRequest.java",
"status": "modified"
},
{
"diff": "@@ -160,12 +160,8 @@ public void readFrom(StreamInput in) throws IOException {\n items = new ArrayList<>(size);\n for (int i = 0; i < size; i++) {\n int slot = in.readVInt();\n- OriginalIndices originalIndices = OriginalIndices.readOriginalIndices(in);\n- PercolateShardRequest shardRequest = new PercolateShardRequest(new ShardId(index, shardId), originalIndices);\n- shardRequest.documentType(in.readString());\n- shardRequest.source(in.readBytesReference());\n- shardRequest.docSource(in.readBytesReference());\n- shardRequest.onlyCount(in.readBoolean());\n+ PercolateShardRequest shardRequest = new PercolateShardRequest();\n+ shardRequest.readFrom(in);\n Item item = new Item(slot, shardRequest);\n items.add(item);\n }\n@@ -179,11 +175,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeVInt(items.size());\n for (Item item : items) {\n out.writeVInt(item.slot);\n- OriginalIndices.writeOriginalIndices(item.request.originalIndices(), out);\n- out.writeString(item.request.documentType());\n- out.writeBytesReference(item.request.source());\n- out.writeBytesReference(item.request.docSource());\n- out.writeBoolean(item.request.onlyCount());\n+ item.request.writeTo(out);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/percolate/TransportShardMultiPercolateAction.java",
"status": "modified"
},
{
"diff": "@@ -33,12 +33,14 @@\n import java.io.IOException;\n \n import static org.elasticsearch.action.percolate.PercolateSourceBuilder.docBuilder;\n+import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.smileBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.yamlBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n import static org.elasticsearch.percolator.PercolatorTestUtil.convertFromTextArray;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertMatchCount;\n@@ -363,6 +365,33 @@ public void testNestedMultiPercolation() throws IOException {\n assertEquals(response.getItems()[1].getResponse().getMatches()[0].getId().string(), \"Q\");\n }\n \n+ public void testStartTimeIsPropagatedToShardRequests() throws Exception {\n+ // See: https://github.com/elastic/elasticsearch/issues/15908\n+ internalCluster().ensureAtLeastNumDataNodes(2);\n+ client().admin().indices().prepareCreate(\"test\")\n+ .setSettings(settingsBuilder()\n+ .put(\"index.number_of_shards\", 1)\n+ .put(\"index.number_of_replicas\", 1)\n+ )\n+ .addMapping(\"type\", \"date_field\", \"type=date,format=strict_date_optional_time||epoch_millis\")\n+ .get();\n+ ensureGreen();\n+\n+ client().prepareIndex(\"test\", \".percolator\", \"1\")\n+ .setSource(jsonBuilder().startObject().field(\"query\", rangeQuery(\"date_field\").lt(\"now+90d\")).endObject())\n+ .setRefresh(true)\n+ .get();\n+\n+ for (int i = 0; i < 32; i++) {\n+ MultiPercolateResponse response = client().prepareMultiPercolate()\n+ .add(client().preparePercolate().setDocumentType(\"type\").setIndices(\"test\")\n+ .setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc(\"date_field\", \"2015-07-21T10:28:01-07:00\")))\n+ .get();\n+ assertThat(response.getItems()[0].getResponse().getCount(), equalTo(1L));\n+ assertThat(response.getItems()[0].getResponse().getMatches()[0].getId().string(), equalTo(\"1\"));\n+ }\n+ }\n+\n void initNestedIndexAndPercolation() throws IOException {\n XContentBuilder mapping = XContentFactory.jsonBuilder();\n mapping.startObject().startObject(\"properties\").startObject(\"companyname\").field(\"type\", \"string\").endObject()",
"filename": "core/src/test/java/org/elasticsearch/percolator/MultiPercolatorIT.java",
"status": "modified"
},
{
"diff": "@@ -66,14 +66,13 @@\n \n public class PercolateDocumentParserTests extends ESTestCase {\n \n- private Index index;\n private MapperService mapperService;\n private PercolateDocumentParser parser;\n private QueryShardContext queryShardContext;\n+ private PercolateShardRequest request;\n \n @Before\n public void init() {\n- index = new Index(\"_index\");\n IndexSettings indexSettings = new IndexSettings(new IndexMetaData.Builder(\"_index\").settings(\n Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n@@ -97,6 +96,10 @@ public void init() {\n parser = new PercolateDocumentParser(\n highlightPhase, new SortParseElement(), aggregationPhase, mappingUpdatedAction\n );\n+\n+ request = Mockito.mock(PercolateShardRequest.class);\n+ Mockito.when(request.shardId()).thenReturn(new ShardId(new Index(\"_index\"), 0));\n+ Mockito.when(request.documentType()).thenReturn(\"type\");\n }\n \n public void testParseDoc() throws Exception {\n@@ -105,9 +108,7 @@ public void testParseDoc() throws Exception {\n .field(\"field1\", \"value1\")\n .endObject()\n .endObject();\n- PercolateShardRequest request = new PercolateShardRequest(new ShardId(index, 0), null);\n- request.documentType(\"type\");\n- request.source(source.bytes());\n+ Mockito.when(request.source()).thenReturn(source.bytes());\n \n PercolateContext context = new PercolateContext(request, new SearchShardTarget(\"_node\", \"_index\", 0), mapperService);\n ParsedDocument parsedDocument = parser.parse(request, context, mapperService, queryShardContext);\n@@ -126,9 +127,7 @@ public void testParseDocAndOtherOptions() throws Exception {\n .field(\"size\", 123)\n .startObject(\"sort\").startObject(\"_score\").endObject().endObject()\n .endObject();\n- PercolateShardRequest request = new PercolateShardRequest(new ShardId(index, 0), null);\n- request.documentType(\"type\");\n- request.source(source.bytes());\n+ Mockito.when(request.source()).thenReturn(source.bytes());\n \n PercolateContext context = new PercolateContext(request, new SearchShardTarget(\"_node\", \"_index\", 0), mapperService);\n ParsedDocument parsedDocument = parser.parse(request, context, mapperService, queryShardContext);\n@@ -151,10 +150,8 @@ public void testParseDocSource() throws Exception {\n XContentBuilder docSource = jsonBuilder().startObject()\n .field(\"field1\", \"value1\")\n .endObject();\n- PercolateShardRequest request = new PercolateShardRequest(new ShardId(index, 0), null);\n- request.documentType(\"type\");\n- request.source(source.bytes());\n- request.docSource(docSource.bytes());\n+ Mockito.when(request.source()).thenReturn(source.bytes());\n+ Mockito.when(request.docSource()).thenReturn(docSource.bytes());\n \n PercolateContext context = new PercolateContext(request, new SearchShardTarget(\"_node\", \"_index\", 0), mapperService);\n ParsedDocument parsedDocument = parser.parse(request, context, mapperService, queryShardContext);\n@@ -180,10 +177,8 @@ public void testParseDocSourceAndSource() throws Exception {\n XContentBuilder docSource = jsonBuilder().startObject()\n .field(\"field1\", \"value1\")\n .endObject();\n- PercolateShardRequest request = new PercolateShardRequest(new ShardId(index, 0), null);\n- request.documentType(\"type\");\n- request.source(source.bytes());\n- request.docSource(docSource.bytes());\n+ Mockito.when(request.source()).thenReturn(source.bytes());\n+ Mockito.when(request.docSource()).thenReturn(docSource.bytes());\n \n PercolateContext context = new 
PercolateContext(request, new SearchShardTarget(\"_node\", \"_index\", 0), mapperService);\n try {",
"filename": "core/src/test/java/org/elasticsearch/percolator/PercolateDocumentParserTests.java",
"status": "modified"
}
]
}
|
{
"body": "Client and Server on 2.1.0\nThis is superficially similar to https://github.com/elastic/elasticsearch/issues/2218.\n\nI have an index called index1 with many types.\ncurl 'http://127.0.0.1:9200/index1/type1/_search?q=test\nreturns one result\ncurl 'http://127.0.0.1:9200/index1/type2/_search?q=test\nreturns one result.\nThese are different results with the correct type.\ncurl 'http://127.0.0.1:9200/index1/foo,bar,whatever/_search?q=test\nreturns both results\ncurl 'http://127.0.0.1:9200/index1/type1,foo/_search?q=test\nreturns results of type1 and type2 (both results)\n\nThis index and its mappings were created using the Java API. I cannot recreate this issue when using curl with a new index and mappings.\n",
"comments": [
{
"body": "Can you reproduce consistently with the Java API? If so, can you provide an example snippet in a gist or the like?\n",
"created_at": "2016-01-04T20:00:56Z"
},
{
"body": "Hi, apologies for the misleading issue above I have finally reproduced with curl (I had missed something from my mapping previously). This seems to be cause by nested fields. In the last step outlined below there are 2 results. If the mapping for entityContent is removed then there are no results (this is correct)\n\nN.B. Not sure if this helps or not but this is a regression from 1.7.3 caught by one of our tests as part of me upgrading to 2.1.0\n\nSteps:\n\n```\n#Create Index\ncurl -XPUT \"http://localhost:9200/index1\"\n\n#Create mappings\ncurl -XPUT --data @index1.type1.json \"http://localhost:9200/index1/type1/_mapping\"\ncurl -XPUT --data @index1.type2.json \"http://localhost:9200/index1/type2/_mapping\"\n\n#Add Data\ncurl -XPUT --data @index1.data.json \"http://localhost:9200/index1/type1/2\"\ncurl -XPUT --data @index1.data.json \"http://localhost:9200/index1/type2/2\"\n\n#Creates 2 hits\ncurl \"http://localhost:9200/index1/a,b/_search?q=test\"\n```\n\nFiles:\nindex1.type1.json\n\n```\n{\n \"type1\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"string\"\n },\n \"entityContent\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"field1\" : {\n \"type\" : \"string\"\n }\n }\n }\n }\n }\n}\n```\n\nindex1.type2.json\n\n```\n{\n \"type2\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"string\"\n }\n }\n }\n}\n```\n\nindex1.data.json\n\n```\n{\n \"content\" : \"This is a Test.\"\n}\n```\n",
"created_at": "2016-01-05T10:19:48Z"
},
{
"body": "Having a quick read through the elasticsearch code base I assume this is going wrong on line 534 of \nhttps://github.com/elastic/elasticsearch/blob/1a47226d9af373e4627cd360ad8a8a7b93ea7882/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java though not familiar enough to say for certain.\n",
"created_at": "2016-01-05T17:13:36Z"
},
{
"body": "Hi @TheHound \n\nI don't understand this issue at all, or how what you see is different from what you expect. The example you give above returns two documents, which is what I'd expect to get. You're searching the `_all` field for the word `test`, which matches both documents.\n",
"created_at": "2016-01-10T12:59:12Z"
},
{
"body": "Hi @clintongormley \n\nI would expect the path part of the URL /a,b/ to act as a type filter. Since those types do not exist I would expect no results.\n\nSo:\n\n```\ncurl 'http://127.0.0.1:9200/index1/type1/_search?q=test\n```\n\nOnly returns 1 hit. I would therefore expect this to only return 1 hit:\n\n```\ncurl 'http://127.0.0.1:9200/index1/type1,type3/_search?q=test\n```\n\nBut instead returns 2, 1 for type1 and 1 for type2.\n",
"created_at": "2016-01-11T09:07:26Z"
},
{
"body": "OK gotcha, here's a recreation:\n\n```\nPUT index1\n\nPUT index1/type1/_mapping\n{\n \"type1\": {\n \"properties\": {\n \"entityContent\": {\n \"type\": \"nested\",\n \"properties\": {\n \"field1\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n}\n\nPUT index1/type1/1\n{\n \"content\" : \"This is a Test.\"\n}\n\nPUT index1/type2/2\n{\n \"content\" : \"This is a test.\"\n}\n```\n\nQuerying with a known type plus an unknown type returns both documents incorrectly:\n\n```\nGET index1/type1,type3/_search?q=test\n```\n\nThe explanation for this query (when there is a nested document mapped):\n\n```\nGET index1/type1,type3/_validate/query?q=test&explain\n```\n\nis:\n\n```\n\"+_all:test #ConstantScore(+ConstantScore(ConstantScore(_type:type1) _type:type3 +(+*:* -_type:__*)))\"\n```\n\nHowever, if you repeat the explanation without the nested doc mapping, you get:\n\n```\n\"+_all:test #ConstantScore(+ConstantScore(ConstantScore(_type:type1) _type:type3))\"\n```\n",
"created_at": "2016-01-11T13:29:04Z"
},
{
"body": "@TheHound Thanks for reporting this issue! I'll back port it to all 2.x branches.\n",
"created_at": "2016-01-13T08:51:03Z"
}
],
"number": 15757,
"title": "Types filter doesn't work for multiple types (EDIT - with nested fields)"
}
|
{
"body": "Search filter should wrap the types filters in a separate boolean as should clauses\n\nSo that a document must either match with one of the types and the non nested clause.\n\nCloses #15757\n",
"number": 15923,
"review_comments": [],
"title": "Fix MapperService#searchFilter(...)"
}
|
{
"commits": [
{
"message": "mappings: Search filter should wrap the types filters in a separate boolean as should clauses\n\nSo that a document must either match with one of the types and the non nested clause.\n\nCloses #15757"
}
],
"files": [
{
"diff": "@@ -519,16 +519,17 @@ public Query searchFilter(String... types) {\n return termsFilter;\n }\n } else {\n- // Current bool filter requires that at least one should clause matches, even with a must clause.\n- BooleanQuery.Builder bool = new BooleanQuery.Builder();\n+ BooleanQuery.Builder typesBool = new BooleanQuery.Builder();\n for (String type : types) {\n DocumentMapper docMapper = documentMapper(type);\n if (docMapper == null) {\n- bool.add(new TermQuery(new Term(TypeFieldMapper.NAME, type)), BooleanClause.Occur.SHOULD);\n+ typesBool.add(new TermQuery(new Term(TypeFieldMapper.NAME, type)), BooleanClause.Occur.SHOULD);\n } else {\n- bool.add(docMapper.typeFilter(), BooleanClause.Occur.SHOULD);\n+ typesBool.add(docMapper.typeFilter(), BooleanClause.Occur.SHOULD);\n }\n }\n+ BooleanQuery.Builder bool = new BooleanQuery.Builder();\n+ bool.add(typesBool.build(), Occur.MUST);\n if (filterPercolateType) {\n bool.add(percolatorType, BooleanClause.Occur.MUST_NOT);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -19,9 +19,17 @@\n \n package org.elasticsearch.index.mapper;\n \n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.BooleanClause;\n+import org.apache.lucene.search.BooleanQuery;\n+import org.apache.lucene.search.ConstantScoreQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.TermQuery;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.junit.Rule;\n import org.junit.rules.ExpectedException;\n@@ -32,6 +40,7 @@\n import java.util.concurrent.ExecutionException;\n \n import static org.hamcrest.CoreMatchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasToString;\n \n public class MapperServiceTests extends ESSingleNodeTestCase {\n@@ -122,4 +131,22 @@ public void testIndexIntoDefaultMapping() throws Throwable {\n }\n assertFalse(indexService.mapperService().hasMapping(MapperService.DEFAULT_MAPPING));\n }\n+\n+ public void testSearchFilter() {\n+ IndexService indexService = createIndex(\"index1\", client().admin().indices().prepareCreate(\"index1\")\n+ .addMapping(\"type1\", \"field1\", \"type=nested\")\n+ .addMapping(\"type2\", new Object[0])\n+ );\n+\n+ Query searchFilter = indexService.mapperService().searchFilter(\"type1\", \"type3\");\n+ Query expectedQuery = new BooleanQuery.Builder()\n+ .add(new BooleanQuery.Builder()\n+ .add(new ConstantScoreQuery(new TermQuery(new Term(TypeFieldMapper.NAME, \"type1\"))), BooleanClause.Occur.SHOULD)\n+ .add(new TermQuery(new Term(TypeFieldMapper.NAME, \"type3\")), BooleanClause.Occur.SHOULD)\n+ .build(), BooleanClause.Occur.MUST\n+ )\n+ .add(Queries.newNonNestedFilter(), BooleanClause.Occur.MUST)\n+ .build();\n+ assertThat(searchFilter, equalTo(new ConstantScoreQuery(expectedQuery)));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java",
"status": "modified"
}
]
}
|
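As a side note on the MapperService#searchFilter fix above, here is a minimal Lucene sketch of the query shape the patch produces, assuming a `nonNestedFilter` parameter as a stand-in for Elasticsearch's Queries.newNonNestedFilter() and an illustrative class name (this is not the actual MapperService code). The per-type filters become SHOULD clauses inside their own boolean query, which is then added as a MUST clause next to the non-nested filter, so a hit has to match at least one requested type and also be a top-level, non-nested document.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Illustrative sketch of the fixed filter shape; not the actual MapperService code.
final class TypesFilterSketch {

    static Query searchFilter(Query nonNestedFilter, String... types) {
        // All requested types go into their own boolean query as SHOULD clauses,
        // so at least one of them has to match.
        BooleanQuery.Builder typesBool = new BooleanQuery.Builder();
        for (String type : types) {
            typesBool.add(new TermQuery(new Term("_type", type)), Occur.SHOULD);
        }
        // The types query and the non-nested filter are separate MUST clauses:
        // a hit must match one of the types AND must not be a hidden nested document.
        BooleanQuery.Builder bool = new BooleanQuery.Builder();
        bool.add(typesBool.build(), Occur.MUST);
        bool.add(nonNestedFilter, Occur.MUST);
        return new ConstantScoreQuery(bool.build());
    }
}
```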
{
"body": "Using this docker container: https://hub.docker.com/_/elasticsearch/\nVersion: 2.1.0\n\nElastic throws NullPointerException while I'm trying to update single document using inline script\n\n**First GET document**\n\n```\ncurl -XGET 'http://172.18.0.3:9200/counters-20160107/counters/20160107T1100:1:3:2?routing=1:2&pretty'\n{\n \"_index\" : \"counters-20160107\",\n \"_type\" : \"counters\",\n \"_id\" : \"20160107T1100:1:3:2\",\n \"_version\" : 1,\n \"_routing\" : \"1:2\",\n \"found\" : true,\n \"_source\":{\"company\":2,\"time\":1452167502000,\"counters\":{...}}\n}\n```\n\n**And update**\n\n```\ncurl -XPOST 'http://172.18.0.3:9200/counters-20160107/counters/20160107T1100:1:3:2/_update?routing=1:2' -d@/root/tmp/update\n{\"error\":{\"root_cause\":[{\"type\":\"null_pointer_exception\",\"reason\":null}],\"type\":\"null_pointer_exception\",\"reason\":null},\"status\":500}\n```\n\n**Content of /root/tmp/update**\n\n```\n\"script\" : {\n \"inline\": \"ctx._source.company = cid\",\n \"params\": {\n \"cid\": \"1\"\n }\n}\n```\n\n**Log**\n\n```\ntail -f /var/log/elasticsearch/elasticsearch.log\n[2016-01-07 12:26:25,986][INFO ][rest.suppressed ] /counters-20160107/counters/20160107T1100:1:3:2/_update Params: {routing=1:2, index=counters-20160107, id=20160107T1100:1:3:2, type=counters}\njava.lang.NullPointerException\n```\n",
"comments": [
{
"body": "Do you have a stack trace in the logs below the NullPointerException?\n",
"created_at": "2016-01-07T14:13:06Z"
},
{
"body": "No\n",
"created_at": "2016-01-07T14:27:11Z"
},
{
"body": "This NPE is caused by the malformed request body (missing opening `{`). A simple recreation:\n\n```\ncurl -XPOST \"http://localhost:9200/t/t/1/_update\" -d'\n\"doc\": {}\n'\n```\n",
"created_at": "2016-01-10T11:24:08Z"
}
],
"number": 15822,
"title": "java.lang.NullPointerException while updating document"
}
|
{
"body": "If the content type could not be determined, the UpdateRequest would still try to parse the content instead\nof throwing the standard ElasticsearchParseException. This manifests when\npassing illegal JSON in the request body that does not begin with a '{'.\nBy trying to parse the content from an unknown request body content type,\nthe UpdateRequest was throwing a null pointer exception. This has been\nfixed to throw an ElasticsearchParseException, to be consistent with the\nbehavior of all other requests in the face of undecipherable request\ncontent types.\n\nCloses #15822\n",
"number": 15904,
"review_comments": [],
"title": "Throw exception if content type could not be determined in Update API"
}
|
{
"commits": [
{
"message": "Fixes an issue where, if the content type of the request body could not be\ndetermined, the UpdateRequest would still try to parse the content instead\nof throwing the standard ElasticsearchParseException. This manifests when\npassing illegal JSON in the request body that does not begin with a '{'.\nBy trying to parse the content from an unknown request body content type,\nthe UpdateRequest was throwing a null pointer exception. This has been\nfixed to throw an ElasticsearchParseException, to be consistent with the\nbehavior of all other requests in the face of undecipherable request\ncontent types.\n\nCloses #15822"
}
],
"files": [
{
"diff": "@@ -639,8 +639,7 @@ public UpdateRequest source(BytesReference source) throws Exception {\n ScriptParameterParser scriptParameterParser = new ScriptParameterParser();\n Map<String, Object> scriptParams = null;\n Script script = null;\n- XContentType xContentType = XContentFactory.xContentType(source);\n- try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(source)) {\n+ try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) {\n XContentParser.Token token = parser.nextToken();\n if (token == null) {\n return this;\n@@ -657,10 +656,12 @@ public UpdateRequest source(BytesReference source) throws Exception {\n } else if (\"scripted_upsert\".equals(currentFieldName)) {\n scriptedUpsert = parser.booleanValue();\n } else if (\"upsert\".equals(currentFieldName)) {\n+ XContentType xContentType = XContentFactory.xContentType(source);\n XContentBuilder builder = XContentFactory.contentBuilder(xContentType);\n builder.copyCurrentStructure(parser);\n safeUpsertRequest().source(builder);\n } else if (\"doc\".equals(currentFieldName)) {\n+ XContentType xContentType = XContentFactory.xContentType(source);\n XContentBuilder docBuilder = XContentFactory.contentBuilder(xContentType);\n docBuilder.copyCurrentStructure(parser);\n safeDoc().source(docBuilder);",
"filename": "core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java",
"status": "modified"
},
{
"diff": "@@ -133,6 +133,9 @@ public static XContentBuilder contentBuilder(XContentType type) throws IOExcepti\n * Returns the {@link org.elasticsearch.common.xcontent.XContent} for the provided content type.\n */\n public static XContent xContent(XContentType type) {\n+ if (type == null) {\n+ throw new IllegalArgumentException(\"Cannot get xcontent for unknown type\");\n+ }\n return type.xContent();\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.update;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.common.io.stream.Streamable;\n@@ -166,4 +167,15 @@ public void testUpdateRequestWithTTL() throws Exception {\n indexAction = (IndexRequest) action;\n assertThat(indexAction.ttl(), is(providedTTLValue));\n }\n+\n+ // Related to issue #15822\n+ public void testInvalidBodyThrowsParseException() throws Exception {\n+ UpdateRequest request = new UpdateRequest(\"test\", \"type\", \"1\");\n+ try {\n+ request.source(new byte[] { (byte) '\"' });\n+ fail(\"Should have thrown a ElasticsearchParseException\");\n+ } catch (ElasticsearchParseException e) {\n+ assertThat(e.getMessage(), equalTo(\"Failed to derive xcontent\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/update/UpdateRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -97,6 +97,14 @@ public void testEmptyStream() throws Exception {\n assertNull(XContentFactory.xContentType(is));\n }\n \n+ public void testInvalidStream() throws Exception {\n+ byte[] bytes = new byte[] { (byte) '\"' };\n+ assertNull(XContentFactory.xContentType(bytes));\n+\n+ bytes = new byte[] { (byte) 'x' };\n+ assertNull(XContentFactory.xContentType(bytes));\n+ }\n+\n public void testJsonFromBytesOptionallyPrecededByUtf8Bom() throws Exception {\n byte[] bytes = new byte[] {(byte) '{', (byte) '}'};\n assertThat(XContentFactory.xContentType(bytes), equalTo(XContentType.JSON));",
"filename": "core/src/test/java/org/elasticsearch/common/xcontent/XContentFactoryTests.java",
"status": "modified"
}
]
}
|
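A small, self-contained sketch of the pattern behind the UpdateRequest fix above: detect the content type of the raw request body before handing it to a parser, and fail fast with a parse error when it cannot be determined. The class, method names, and the whitespace-skipping heuristic are illustrative assumptions, not the Elasticsearch implementation, which relies on XContentFactory and throws ElasticsearchParseException.

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch only; Elasticsearch does this via XContentFactory and
// throws ElasticsearchParseException ("Failed to derive xcontent").
final class ContentTypeSniffSketch {

    enum ContentType { JSON, UNKNOWN }

    // Very rough detection: a JSON object body must start with '{' (ignoring leading whitespace).
    static ContentType detect(byte[] body) {
        for (byte b : body) {
            if (b == ' ' || b == '\t' || b == '\r' || b == '\n') {
                continue;
            }
            return b == '{' ? ContentType.JSON : ContentType.UNKNOWN;
        }
        return ContentType.UNKNOWN; // empty body
    }

    static void parse(byte[] body) {
        if (detect(body) == ContentType.UNKNOWN) {
            // fail fast instead of letting an unknown content type turn into a NullPointerException
            throw new IllegalArgumentException("Failed to derive content type from request body");
        }
        // ... hand the body to a real JSON parser here ...
    }

    public static void main(String[] args) {
        parse("{\"doc\": {}}".getBytes(StandardCharsets.UTF_8)); // accepted
        try {
            parse("\"doc\": {}".getBytes(StandardCharsets.UTF_8)); // body does not start with '{'
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```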
{
"body": "Relates to #12573\n\nWhen relocating a primary shard, there is a cluster state update at the end of relocation where the active primary is switched from the relocation source to the relocation target. If relocation source receives and processes this cluster state before the relocation target, there is a time span where relocation source believes active primary to be on relocation target and relocation target believes active primary to be on relocation source. This results in index/delete/flush requests being sent back and forth and can end in an OOM on both nodes.\n\nThis PR adds a field to the index/delete/flush request that helps detect the case where we locally have stale routing information. In case this staleness is detected, we wait until we have received an up-to-date cluster state before rerouting the request.\n\nI have included the test from #12574 in this PR to demonstrate the fix in an integration test. That integration test will not be part of the final commit, however.\n",
"comments": [
{
"body": "@bleskes instead of using the cluster state version, we could as well use the index metadata version. The index metadata version is updated whenever a new shard is started (thanks to active allocation ids). wdyt?\n\nOn a related note, we could use this field as well to wait for dynamic mapping updates to be applied. (for that the update mappings api would have to return the current index metadata version).\n",
"created_at": "2016-01-27T18:25:13Z"
},
{
"body": "@bleskes renamed the field and removed integration test.\n",
"created_at": "2016-02-01T17:43:44Z"
},
{
"body": "LGTM . Thanks @ywelsch - Left some minor comments, no need for another cycle.\n",
"created_at": "2016-02-01T17:53:26Z"
}
],
"number": 16274,
"title": "Prevent TransportReplicationAction to route request based on stale local routing table"
}
|
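Before the primary-relocation PR below, a hypothetical sketch of the staleness check described in #16274: the request carries the cluster state version it was routed on, and a node that has not yet applied a state at least that new waits for it instead of rerouting, which could otherwise bounce the request straight back. The interface and names here are illustrative assumptions, not the actual TransportReplicationAction API.

```java
// Hypothetical sketch; names such as routedOnClusterStateVersion and
// waitForNewClusterState are illustrative, not the Elasticsearch API.
final class StaleRoutingCheckSketch {

    interface ClusterStateObserver {
        // Invokes retry once a cluster state with version >= minVersion has been applied locally.
        void waitForNewClusterState(long minVersion, Runnable retry);
    }

    static void handleReroutedRequest(long routedOnClusterStateVersion,
                                      long localClusterStateVersion,
                                      ClusterStateObserver observer,
                                      Runnable executeLocally) {
        if (localClusterStateVersion < routedOnClusterStateVersion) {
            // The sender routed this request on a newer cluster state than we have applied;
            // rerouting now could send it straight back and loop. Wait for the newer state.
            observer.waitForNewClusterState(routedOnClusterStateVersion, executeLocally);
        } else {
            executeLocally.run();
        }
    }
}
```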
{
"body": "When primary relocation completes, a cluster state is propagated that deactivates the old primary and marks the new primary as active. As cluster state changes are not applied synchronously on all nodes, there can be a time interval where the relocation target has processed the cluster state and believes to be the active primary and the relocation source has not yet processed the cluster state update and still believes itself to be the active primary. This PR ensures that, before completing the relocation, the relocation source deactivates writing to its store and delegates requests to the relocation target.\n\nThe change is motivated as follows:\n\n1) We need to ensure that we only start writing data into the new primary once all the writes into the old primary have been completely replicated (among others to the new primary). This ensures that the new primary operates on the proper document version numbers. Document versions are increased when writing to the primary and then used on the replica to make sure that newer documents are not overridden by older documents (in the presence of concurrent replication). A scenario for this would be: Write document with id \"K\" and value \"X\" to old primary (gets version 1) and replicate it to new primary as well as replica. Assume that another document with id \"K\" but value \"Y\" is written on the new primary before the new primary gets the replicated write of \"K\" with value \"X\". Unaware of the other write it will then assign the same version number (namely 1) to the document with value \"Y\" and replicate it to the replica. Depending on the order in which replicated writes from old and new primary arrive at the replica, it will then either store \"X\" or \"Y\", which means that the new primary and the replica can become out of sync.\n\n2) We have to ensure that no new writes are done on the old primary once we start writing into the new primary. This helps with the following scenario. Assume primary relocation completes and master broadcasts cluster state which now only contains the new primary. Due to the distributed nature of Elasticsearch, cluster states are not applied in full synchrony on all nodes. For a brief moment nodes in the cluster have a different view of which node is the primary. In particular, it's possible that the node holding the old primary (node A) still believes to be the primary whereas the node holding the new primary (node B) believes to be the primary as well. If we send a document to node B, it will get indexed into the new primary and acknowledged (but won't exist on the old primary). If we then issue a delete request for the same document to the node A (which can happen if we send requests round-robin to nodes), then that node will not find the document in its old primary and fail the request.\n\nThis PR (in combination with #19013) implements the following solution:\n\nBefore completing the relocation, node A (holding the primary relocation source) deactivates writing to its shard copy (and temporarily puts all new incoming requests for that shard into a queue), then waits for all ongoing operations to be fully replicated. Once that is done, it delegates all new incoming requests to node B (holding the new primary) and also sends all the elements in the queue there. It uses a special action to delegate requests to node B, which bypasses the standard reroute phase when accepting requests as standard rerouting is based on the current cluster state on the node. 
At that moment, indexing requests that directly go to the node B will still be rerouted back to node A with the old primary. This means that node A is still in charge of indexing, but will use the physical shard copy on node B to do so. Node B finally asks the master to activate the new primary (finish the relocation). The master then broadcasts a new cluster state where the old primary on node A is removed and the new primary on node B is active. It doesn't matter now in which order the cluster state is applied on the nodes A and B:\n1) If the cluster state is first applied on the node B, both nodes will send their index requests to the shard copy that is on node B.\n2) If the cluster state is first applied on node A, requests to node A will be rerouted to node B and requests to node B will be rerouted to node A. To prevent redirect loops during the time period where cluster states on node A and node B differ, #16274 makes requests that are coming from node A wait on node B until B has processed the cluster state where relocation is completed.\n\nsupersedes #15532\n",
"number": 15900,
"review_comments": [
{
"body": "can we keep the shard reference? I know it's not needed here - but it serves as a proxy to the shard allowing to do more things - see https://github.com/elastic/elasticsearch/pull/15485/files#diff-a8aefbf42f29dc0fcc7c0a144863948eR1104\n",
"created_at": "2016-01-21T10:48:53Z"
},
{
"body": "this is tricky - it leaves us in a potentially scenario where there are two active primaries - the target and the source. I don't have a clean solution for this. My suggestion is to fail the shard and let the master promote another replica.\n",
"created_at": "2016-01-21T11:01:39Z"
},
{
"body": "why is relocated removed here?\n",
"created_at": "2016-01-21T11:04:28Z"
},
{
"body": "can you add the exception?\n",
"created_at": "2016-01-21T11:06:18Z"
},
{
"body": "we throw the exception and thus take care of the interrupt. We don't need to set it...\n",
"created_at": "2016-01-21T11:06:44Z"
},
{
"body": "can you add the suppressed interrupted exception? Also, I'm not sure about using IllegalIndexShardStateException here. It's ignore by the replication logic assuming the shard is not yet started or has finished relocating and was shut down. Interruption is a bigger problem which should never happen? \n",
"created_at": "2016-01-21T11:10:40Z"
},
{
"body": "get we call the method getIndexShardReference? Also I like explicit naming here better then a primary boolean. See [seq_no branch](https://github.com/elastic/elasticsearch/blob/feature/seq_no/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java#L710) for an example\n",
"created_at": "2016-01-28T10:43:49Z"
},
{
"body": "This method is becoming a bit of a monster. How about simplifying it like this? \n\nhttps://gist.github.com/bleskes/84c46a548d2782e6064a\n\nNote that I removed the non-existent nodes check - I really wonder when this can happen these days. Also, I rather take the shardrouting from the indexShard and know it's consistent with the IndexShard#relocated()\n",
"created_at": "2016-01-28T12:11:04Z"
},
{
"body": "can we change this block to use the same flow as the doRun (it's equivalent, but I have to double check every time)\n\n```\n if (shard.primary() == false && executeOnReplica == false) {\n numberOfIgnoredShardInstances++;\n continue;\n }\n if (shard.unassigned()) {\n numberOfIgnoredShardInstances++;\n continue;\n }\n if (nodes.localNodeId().equals(shard.currentNodeId()) == false) {\n numberOfPendingShardInstances++;\n }\n if (shard.relocating() && shard.relocatingNodeId().equals(nodes.localNodeId()) == false) {\n numberOfPendingShardInstances++;\n }\n```\n",
"created_at": "2016-01-28T12:37:45Z"
},
{
"body": "can we also change the comment to say that the local shard can be a relocation target of the primary ? \n",
"created_at": "2016-01-28T12:39:23Z"
},
{
"body": "Did you change anything here from the dedicate PR? (we should have comitted the other one, so it will be clear)\n",
"created_at": "2016-01-28T12:40:04Z"
},
{
"body": "we can remove the mutex here, but tbh this whole check can go away - It's just an extra safety mechanism. We can keep it simple\n",
"created_at": "2016-01-28T12:45:11Z"
},
{
"body": "I added the method `acquireUninterruptibly`. It's in the second commit ;-) As said before, I prefer to commit data structures when we know how to use them.\n",
"created_at": "2016-01-28T12:46:59Z"
},
{
"body": "can we make this an enumSet called writeAllowedStateForReplica (and the same for primary). It's getting out of hand :) also please right them in order recovering, post_recovery, started, relocated\n",
"created_at": "2016-01-28T12:47:29Z"
},
{
"body": "we don't really sue the term phases here. how about aquirePrimaryOpertaionLock?\n",
"created_at": "2016-01-28T12:48:10Z"
},
{
"body": "acquireReplicaOpertaionLock\n",
"created_at": "2016-01-28T12:48:35Z"
},
{
"body": "I think we can fail the shard on _any_ failure here. failAndRemoveShard already does logging under WARN\n",
"created_at": "2016-01-28T12:50:52Z"
},
{
"body": "the shard.routingEntry is always primary. How about renaming the RecoveryState.Type to PRIMARY_RELOCATION and just check that? \n",
"created_at": "2016-01-28T12:53:59Z"
},
{
"body": "can we add some randomization to the replica states?\n",
"created_at": "2016-01-28T12:57:46Z"
},
{
"body": "this should be part of the tests for the replication phase - with random cluster state and all..\n",
"created_at": "2016-01-28T12:59:47Z"
},
{
"body": "can we add some comments like // shard should be relocated until we are done?\n",
"created_at": "2016-01-28T13:02:25Z"
},
{
"body": "why do we need assertBusy here? I rather use two signaling methods - the one you have now and the one signaling the some/all the threads have acquired the ops lock. Note that now I think you have a race condition with the recovery thread can sneak in first.\n",
"created_at": "2016-01-28T13:07:35Z"
},
{
"body": "use recoveryThread.join()?\n",
"created_at": "2016-01-28T13:08:00Z"
},
{
"body": "can we turn on debug logging for this one?\n",
"created_at": "2016-01-28T13:09:05Z"
},
{
"body": "you can do : `internalCluster().getInstance(ClusterService.class, nodeA).localNode().id();`\n",
"created_at": "2016-01-28T13:11:12Z"
},
{
"body": "can we replace these test with a simpler to understand test, paying the price of things being less targeted? experience have shown that this type of tests are very hard to maintain and often don't reproduce exactly what was intended anyway (because it's so hard).. \n",
"created_at": "2016-01-28T13:34:52Z"
},
{
"body": "Thanks. I'm with you, but this is just a big change I would prefered to get the counter in to mimic the old behavior first and then build on it (and change it) here. No big one - water under the bridge :)\n",
"created_at": "2016-01-28T14:29:15Z"
},
{
"body": "I have added your changes but kept the exception type of AbstractRunnable.doRun() as before (Exception, not Throwable).\n",
"created_at": "2016-01-28T18:28:26Z"
},
{
"body": "done.\n\nalternatively, the list of shards to replicate to could be built in the constructor.\nWe would then only iterate over the list here.\n",
"created_at": "2016-01-28T18:29:26Z"
},
{
"body": "removed it.\n",
"created_at": "2016-01-28T18:30:03Z"
}
],
"title": "Primary relocation handoff"
}
|
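A short usage sketch of the handoff pattern discussed in the review above, written against the SuspendableRefContainer whose full source appears in the pr_details that follow. The wrapper class and its relocated flag are illustrative stand-ins for IndexShard, not its actual code: each write holds an operation lock for its whole duration, and the handoff calls blockAcquisition(), which keeps new writes out and waits for in-flight ones before the state is flipped.

```java
import org.elasticsearch.common.lease.Releasable;
import org.elasticsearch.common.util.concurrent.SuspendableRefContainer;

// Illustrative stand-in for the IndexShard usage pattern; not the actual IndexShard code.
final class RelocationHandoffSketch {

    private final SuspendableRefContainer operationLocks = new SuspendableRefContainer();
    private volatile boolean relocated = false;

    // Every write acquires an operation lock for its whole duration.
    void performWrite(Runnable write) {
        try (Releasable ignored = operationLocks.acquireUninterruptibly()) {
            if (relocated) {
                throw new IllegalStateException("shard is relocated; writes must go to the new primary");
            }
            write.run();
        }
    }

    // Block new acquisitions and wait for in-flight writes, then flip state, so that no
    // write can still be running on this copy once it is marked as relocated.
    void markRelocated() {
        try (Releasable block = operationLocks.blockAcquisition()) {
            relocated = true;
        }
    }
}
```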
{
"commits": [
{
"message": "Add operation counter for IndexShard\n\nAdds a container that represents a resource with reference counting capabilities. Provides operations to suspend acquisition of new references. Useful for resource management when resources are intermittently unavailable.\n\nCloses #15956"
},
{
"message": "Add proper handoff between old and new copy of relocating primary shard\n\nWhen primary relocation completes, a cluster state is propagated that deactivates the old primary and marks the new primary as active.\nAs cluster state changes are not applied synchronously on all nodes, there can be a time interval where the relocation target has processed\nthe cluster state and believes to be the active primary and the relocation source has not yet processed the cluster state update and\nstill believes itself to be the active primary. This commit ensures that, before completing the relocation, the reloction source deactivates\nwriting to its store and delegates requests to the relocation target.\n\nCloses #15900"
}
],
"files": [
{
"diff": "@@ -58,7 +58,7 @@ protected ReplicationResponse newResponseInstance() {\n }\n \n @Override\n- protected Tuple<ReplicationResponse, ShardFlushRequest> shardOperationOnPrimary(MetaData metaData, ShardFlushRequest shardRequest) throws Throwable {\n+ protected Tuple<ReplicationResponse, ShardFlushRequest> shardOperationOnPrimary(MetaData metaData, ShardFlushRequest shardRequest) {\n IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId().getIndex()).getShard(shardRequest.shardId().id());\n indexShard.flush(shardRequest.getRequest());\n logger.trace(\"{} flush request executed on primary\", indexShard.shardId());",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java",
"status": "modified"
},
{
"diff": "@@ -60,7 +60,7 @@ protected ReplicationResponse newResponseInstance() {\n }\n \n @Override\n- protected Tuple<ReplicationResponse, BasicReplicationRequest> shardOperationOnPrimary(MetaData metaData, BasicReplicationRequest shardRequest) throws Throwable {\n+ protected Tuple<ReplicationResponse, BasicReplicationRequest> shardOperationOnPrimary(MetaData metaData, BasicReplicationRequest shardRequest) {\n IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId().getIndex()).getShard(shardRequest.shardId().id());\n indexShard.refresh(\"api\");\n logger.trace(\"{} refresh request executed on primary\", indexShard.shardId());",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java",
"status": "modified"
},
{
"diff": "@@ -140,7 +140,7 @@ protected IndexResponse newResponseInstance() {\n }\n \n @Override\n- protected Tuple<IndexResponse, IndexRequest> shardOperationOnPrimary(MetaData metaData, IndexRequest request) throws Throwable {\n+ protected Tuple<IndexResponse, IndexRequest> shardOperationOnPrimary(MetaData metaData, IndexRequest request) throws Exception {\n \n // validate, if routing is required, that we got routing\n IndexMetaData indexMetaData = metaData.index(request.shardId().getIndex());\n@@ -200,7 +200,7 @@ public static Engine.Index prepareIndexOperationOnPrimary(IndexRequest request,\n * Execute the given {@link IndexRequest} on a primary shard, throwing a\n * {@link RetryOnPrimaryException} if the operation needs to be re-tried.\n */\n- public static WriteResult<IndexResponse> executeIndexRequestOnPrimary(IndexRequest request, IndexShard indexShard, MappingUpdatedAction mappingUpdatedAction) throws Throwable {\n+ public static WriteResult<IndexResponse> executeIndexRequestOnPrimary(IndexRequest request, IndexShard indexShard, MappingUpdatedAction mappingUpdatedAction) throws Exception {\n Engine.Index operation = prepareIndexOperationOnPrimary(request, indexShard);\n Mapping update = operation.parsedDoc().dynamicMappingsUpdate();\n final ShardId shardId = indexShard.shardId();",
"filename": "core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java",
"status": "modified"
},
{
"diff": "@@ -56,6 +56,7 @@\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.VersionConflictEngineException;\n import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.translog.Translog;\n import org.elasticsearch.indices.IndicesService;\n@@ -156,10 +157,11 @@ protected void resolveRequest(MetaData metaData, String concreteIndex, Request r\n \n /**\n * Primary operation on node with primary copy, the provided metadata should be used for request validation if needed\n+ *\n * @return A tuple containing not null values, as first value the result of the primary operation and as second value\n * the request to be executed on the replica shards.\n */\n- protected abstract Tuple<Response, ReplicaRequest> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Throwable;\n+ protected abstract Tuple<Response, ReplicaRequest> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Exception;\n \n /**\n * Replica operation on nodes with replica copies\n@@ -299,7 +301,7 @@ public RetryOnReplicaException(ShardId shardId, String msg) {\n setShard(shardId);\n }\n \n- public RetryOnReplicaException(StreamInput in) throws IOException{\n+ public RetryOnReplicaException(StreamInput in) throws IOException {\n super(in);\n }\n }\n@@ -326,8 +328,8 @@ public void onFailure(Throwable t) {\n public void onNewClusterState(ClusterState state) {\n context.close();\n // Forking a thread on local node via transport service so that custom transport service have an\n- // opportunity to execute custom logic before the replica operation begins\n- String extraMessage = \"action [\" + transportReplicaAction + \"], request[\" + request + \"]\";\n+ // opportunity to execute custom logic before the replica operation begins\n+ String extraMessage = \"action [\" + transportReplicaAction + \"], request[\" + request + \"]\";\n TransportChannelResponseHandler<TransportResponse.Empty> handler = TransportChannelResponseHandler.emptyResponseHandler(logger, channel, extraMessage);\n transportService.sendRequest(clusterService.localNode(), transportReplicaAction, request, handler);\n }\n@@ -352,6 +354,7 @@ public void onTimeout(TimeValue timeout) {\n }\n }\n }\n+\n private void failReplicaIfNeeded(Throwable t) {\n String index = request.shardId().getIndex().getName();\n int shardId = request.shardId().id();\n@@ -383,7 +386,7 @@ protected void responseWithFailure(Throwable t) {\n @Override\n protected void doRun() throws Exception {\n assert request.shardId() != null : \"request shardId must be set\";\n- try (Releasable ignored = getIndexShardOperationsCounter(request.shardId())) {\n+ try (Releasable ignored = getIndexShardReferenceOnReplica(request.shardId())) {\n shardOperationOnReplica(request);\n if (logger.isTraceEnabled()) {\n logger.trace(\"action [{}] completed on shard [{}] for request [{}]\", transportReplicaAction, request.shardId(), request);\n@@ -399,7 +402,7 @@ public RetryOnPrimaryException(ShardId shardId, String msg) {\n setShard(shardId);\n }\n \n- public RetryOnPrimaryException(StreamInput in) throws IOException{\n+ public RetryOnPrimaryException(StreamInput in) throws IOException {\n super(in);\n }\n }\n@@ -445,6 +448,7 @@ protected void doRun() {\n handleBlockException(blockException);\n return;\n }\n+\n // request does not have a shardId yet, we need to pass the concrete index to resolve shardId\n 
resolveRequest(state.metaData(), concreteIndex, request);\n assert request.shardId() != null : \"request shardId must be set in resolveRequest\";\n@@ -584,60 +588,71 @@ void retryBecauseUnavailable(ShardId shardId, String message) {\n }\n \n /**\n- * Responsible for performing primary operation locally and delegating to replication action once successful\n+ * Responsible for performing primary operation locally or delegating primary operation to relocation target in case where shard has\n+ * been marked as RELOCATED. Delegates to replication action once successful.\n * <p>\n * Note that as soon as we move to replication action, state responsibility is transferred to {@link ReplicationPhase}.\n */\n- final class PrimaryPhase extends AbstractRunnable {\n+ class PrimaryPhase extends AbstractRunnable {\n private final Request request;\n+ private final ShardId shardId;\n private final TransportChannel channel;\n private final ClusterState state;\n private final AtomicBoolean finished = new AtomicBoolean();\n- private Releasable indexShardReference;\n+ private IndexShardReference indexShardReference;\n \n PrimaryPhase(Request request, TransportChannel channel) {\n this.state = clusterService.state();\n this.request = request;\n+ assert request.shardId() != null : \"request shardId must be set prior to primary phase\";\n+ this.shardId = request.shardId();\n this.channel = channel;\n }\n \n @Override\n public void onFailure(Throwable e) {\n+ if (ExceptionsHelper.status(e) == RestStatus.CONFLICT) {\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"failed to execute [{}] on [{}]\", e, request, shardId);\n+ }\n+ } else {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to execute [{}] on [{}]\", e, request, shardId);\n+ }\n+ }\n finishAsFailed(e);\n }\n \n @Override\n protected void doRun() throws Exception {\n // request shardID was set in ReroutePhase\n- assert request.shardId() != null : \"request shardID must be set prior to primary phase\";\n- final ShardId shardId = request.shardId();\n final String writeConsistencyFailure = checkWriteConsistency(shardId);\n if (writeConsistencyFailure != null) {\n finishBecauseUnavailable(shardId, writeConsistencyFailure);\n return;\n }\n- final ReplicationPhase replicationPhase;\n- try {\n- indexShardReference = getIndexShardOperationsCounter(shardId);\n+ // closed in finishAsFailed(e) in the case of error\n+ indexShardReference = getIndexShardReferenceOnPrimary(shardId);\n+ if (indexShardReference.isRelocated() == false) {\n+ // execute locally\n Tuple<Response, ReplicaRequest> primaryResponse = shardOperationOnPrimary(state.metaData(), request);\n if (logger.isTraceEnabled()) {\n logger.trace(\"action [{}] completed on shard [{}] for request [{}] with cluster state version [{}]\", transportPrimaryAction, shardId, request, state.version());\n }\n- replicationPhase = new ReplicationPhase(primaryResponse.v2(), primaryResponse.v1(), shardId, channel, indexShardReference);\n- } catch (Throwable e) {\n- if (ExceptionsHelper.status(e) == RestStatus.CONFLICT) {\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"failed to execute [{}] on [{}]\", e, request, shardId);\n- }\n- } else {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"failed to execute [{}] on [{}]\", e, request, shardId);\n- }\n- }\n- finishAsFailed(e);\n- return;\n+ ReplicationPhase replicationPhase = new ReplicationPhase(primaryResponse.v2(), primaryResponse.v1(), shardId, channel, indexShardReference);\n+ finishAndMoveToReplication(replicationPhase);\n+ } else {\n+ // delegate 
primary phase to relocation target\n+ // it is safe to execute primary phase on relocation target as there are no more in-flight operations where primary\n+ // phase is executed on local shard and all subsequent operations are executed on relocation target as primary phase.\n+ final ShardRouting primary = indexShardReference.routingEntry();\n+ indexShardReference.close();\n+ assert primary.relocating() : \"indexShard is marked as relocated but routing isn't\" + primary;\n+ DiscoveryNode relocatingNode = state.nodes().get(primary.relocatingNodeId());\n+ transportService.sendRequest(relocatingNode, transportPrimaryAction, request, transportOptions,\n+ TransportChannelResponseHandler.responseHandler(logger, TransportReplicationAction.this::newResponseInstance, channel,\n+ \"rerouting indexing to target primary \" + primary));\n }\n- finishAndMoveToReplication(replicationPhase);\n }\n \n /**\n@@ -721,10 +736,24 @@ void finishBecauseUnavailable(ShardId shardId, String message) {\n }\n }\n \n- protected Releasable getIndexShardOperationsCounter(ShardId shardId) {\n+ /**\n+ * returns a new reference to {@link IndexShard} to perform a primary operation. Released after performing primary operation locally\n+ * and replication of the operation to all replica shards is completed / failed (see {@link ReplicationPhase}).\n+ */\n+ protected IndexShardReference getIndexShardReferenceOnPrimary(ShardId shardId) {\n+ IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex());\n+ IndexShard indexShard = indexService.getShard(shardId.id());\n+ return new IndexShardReferenceImpl(indexShard, true);\n+ }\n+\n+ /**\n+ * returns a new reference to {@link IndexShard} on a node that the request is replicated to. The reference is closed as soon as\n+ * replication is completed on the node.\n+ */\n+ protected IndexShardReference getIndexShardReferenceOnReplica(ShardId shardId) {\n IndexService indexService = indicesService.indexServiceSafe(shardId.getIndex());\n IndexShard indexShard = indexService.getShard(shardId.id());\n- return new IndexShardReference(indexShard);\n+ return new IndexShardReferenceImpl(indexShard, false);\n }\n \n /**\n@@ -777,25 +806,28 @@ public ReplicationPhase(ReplicaRequest replicaRequest, Response finalResponse, S\n int numberOfIgnoredShardInstances = 0;\n int numberOfPendingShardInstances = 0;\n for (ShardRouting shard : shards) {\n+ // the following logic to select the shards to replicate to is mirrored and explained in the doRun method below\n if (shard.primary() == false && executeOnReplica == false) {\n numberOfIgnoredShardInstances++;\n- } else if (shard.unassigned()) {\n+ continue;\n+ }\n+ if (shard.unassigned()) {\n numberOfIgnoredShardInstances++;\n- } else {\n- if (shard.currentNodeId().equals(nodes.localNodeId()) == false) {\n- numberOfPendingShardInstances++;\n- }\n- if (shard.relocating()) {\n- numberOfPendingShardInstances++;\n- }\n+ continue;\n+ }\n+ if (nodes.localNodeId().equals(shard.currentNodeId()) == false) {\n+ numberOfPendingShardInstances++;\n+ }\n+ if (shard.relocating() && nodes.localNodeId().equals(shard.relocatingNodeId()) == false) {\n+ numberOfPendingShardInstances++;\n }\n }\n // one for the local primary copy\n this.totalShards = 1 + numberOfPendingShardInstances + numberOfIgnoredShardInstances;\n this.pending = new AtomicInteger(numberOfPendingShardInstances);\n if (logger.isTraceEnabled()) {\n logger.trace(\"replication phase started. 
pending [{}], action [{}], request [{}], cluster state version used [{}]\", pending.get(),\n- transportReplicaAction, replicaRequest, state.version());\n+ transportReplicaAction, replicaRequest, state.version());\n }\n }\n \n@@ -860,7 +892,8 @@ protected void doRun() {\n performOnReplica(shard);\n }\n // send operation to relocating shard\n- if (shard.relocating()) {\n+ // local shard can be a relocation target of a primary that is in relocated state\n+ if (shard.relocating() && nodes.localNodeId().equals(shard.relocatingNodeId()) == false) {\n performOnReplica(shard.buildTargetRelocatingShard());\n }\n }\n@@ -898,22 +931,22 @@ public void handleException(TransportException exp) {\n String message = String.format(Locale.ROOT, \"failed to perform %s on replica on node %s\", transportReplicaAction, node);\n logger.warn(\"[{}] {}\", exp, shardId, message);\n shardStateAction.shardFailed(\n- shard,\n- indexUUID,\n- message,\n- exp,\n- new ShardStateAction.Listener() {\n- @Override\n- public void onSuccess() {\n- onReplicaFailure(nodeId, exp);\n- }\n-\n- @Override\n- public void onFailure(Throwable t) {\n- // TODO: handle catastrophic non-channel failures\n- onReplicaFailure(nodeId, exp);\n+ shard,\n+ indexUUID,\n+ message,\n+ exp,\n+ new ShardStateAction.Listener() {\n+ @Override\n+ public void onSuccess() {\n+ onReplicaFailure(nodeId, exp);\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ // TODO: handle catastrophic non-channel failures\n+ onReplicaFailure(nodeId, exp);\n+ }\n }\n- }\n );\n }\n }\n@@ -993,21 +1026,39 @@ protected boolean shouldExecuteReplication(Settings settings) {\n return IndexMetaData.isIndexUsingShadowReplicas(settings) == false;\n }\n \n- static class IndexShardReference implements Releasable {\n+ interface IndexShardReference extends Releasable {\n+ boolean isRelocated();\n \n- final private IndexShard counter;\n- private final AtomicBoolean closed = new AtomicBoolean();\n+ ShardRouting routingEntry();\n+ }\n+\n+ static final class IndexShardReferenceImpl implements IndexShardReference {\n+\n+ private final IndexShard indexShard;\n+ private final Releasable operationLock;\n \n- IndexShardReference(IndexShard counter) {\n- counter.incrementOperationCounter();\n- this.counter = counter;\n+ IndexShardReferenceImpl(IndexShard indexShard, boolean primaryAction) {\n+ this.indexShard = indexShard;\n+ if (primaryAction) {\n+ operationLock = indexShard.acquirePrimaryOperationLock();\n+ } else {\n+ operationLock = indexShard.acquireReplicaOperationLock();\n+ }\n }\n \n @Override\n public void close() {\n- if (closed.compareAndSet(false, true)) {\n- counter.decrementOperationCounter();\n- }\n+ operationLock.close();\n+ }\n+\n+ @Override\n+ public boolean isRelocated() {\n+ return indexShard.state() == IndexShardState.RELOCATED;\n+ }\n+\n+ @Override\n+ public ShardRouting routingEntry() {\n+ return indexShard.routingEntry();\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java",
"status": "modified"
},
{
"diff": "@@ -94,15 +94,15 @@ public void onFailure(Throwable e) {\n }\n }\n \n- public void updateMappingOnMasterAsynchronously(String index, String type, Mapping mappingUpdate) throws Throwable {\n+ public void updateMappingOnMasterAsynchronously(String index, String type, Mapping mappingUpdate) throws Exception {\n updateMappingOnMaster(index, type, mappingUpdate, dynamicMappingUpdateTimeout, null);\n }\n \n /**\n * Same as {@link #updateMappingOnMasterSynchronously(String, String, Mapping, TimeValue)}\n * using the default timeout.\n */\n- public void updateMappingOnMasterSynchronously(String index, String type, Mapping mappingUpdate) throws Throwable {\n+ public void updateMappingOnMasterSynchronously(String index, String type, Mapping mappingUpdate) throws Exception {\n updateMappingOnMasterSynchronously(index, type, mappingUpdate, dynamicMappingUpdateTimeout);\n }\n \n@@ -111,7 +111,7 @@ public void updateMappingOnMasterSynchronously(String index, String type, Mappin\n * {@code timeout}. When this method returns successfully mappings have\n * been applied to the master node and propagated to data nodes.\n */\n- public void updateMappingOnMasterSynchronously(String index, String type, Mapping mappingUpdate, TimeValue timeout) throws Throwable {\n+ public void updateMappingOnMasterSynchronously(String index, String type, Mapping mappingUpdate, TimeValue timeout) throws Exception {\n if (updateMappingRequest(index, type, mappingUpdate, timeout).get().isAcknowledged() == false) {\n throw new TimeoutException(\"Failed to acknowledge mapping update within [\" + timeout + \"]\");\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/action/index/MappingUpdatedAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,117 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util.concurrent;\n+\n+import org.elasticsearch.common.lease.Releasable;\n+\n+import java.util.concurrent.Semaphore;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+/**\n+ * Container that represents a resource with reference counting capabilities. Provides operations to suspend acquisition of new references.\n+ * This is useful for resource management when resources are intermittently unavailable.\n+ *\n+ * Assumes less than Integer.MAX_VALUE references are concurrently being held at one point in time.\n+ */\n+public final class SuspendableRefContainer {\n+ private static final int TOTAL_PERMITS = Integer.MAX_VALUE;\n+ private final Semaphore semaphore;\n+\n+ public SuspendableRefContainer() {\n+ // fair semaphore to ensure that blockAcquisition() does not starve under thread contention\n+ this.semaphore = new Semaphore(TOTAL_PERMITS, true);\n+ }\n+\n+ /**\n+ * Tries acquiring a reference. Returns reference holder if reference acquisition is not blocked at the time of invocation (see\n+ * {@link #blockAcquisition()}). Returns null if reference acquisition is blocked at the time of invocation.\n+ *\n+ * @return reference holder if reference acquisition is not blocked, null otherwise\n+ * @throws InterruptedException if the current thread is interrupted\n+ */\n+ public Releasable tryAcquire() throws InterruptedException {\n+ if (semaphore.tryAcquire(1, 0, TimeUnit.SECONDS)) { // the untimed tryAcquire methods do not honor the fairness setting\n+ return idempotentRelease(1);\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ /**\n+ * Acquires a reference. Blocks if reference acquisition is blocked at the time of invocation.\n+ *\n+ * @return reference holder\n+ * @throws InterruptedException if the current thread is interrupted\n+ */\n+ public Releasable acquire() throws InterruptedException {\n+ semaphore.acquire();\n+ return idempotentRelease(1);\n+ }\n+\n+ /**\n+ * Acquires a reference. 
Blocks if reference acquisition is blocked at the time of invocation.\n+ *\n+ * @return reference holder\n+ */\n+ public Releasable acquireUninterruptibly() {\n+ semaphore.acquireUninterruptibly();\n+ return idempotentRelease(1);\n+ }\n+\n+ /**\n+ * Disables reference acquisition and waits until all existing references are released.\n+ * When released, reference acquisition is enabled again.\n+ * This guarantees that between successful acquisition and release, no one is holding a reference.\n+ *\n+ * @return references holder to all references\n+ */\n+ public Releasable blockAcquisition() {\n+ semaphore.acquireUninterruptibly(TOTAL_PERMITS);\n+ return idempotentRelease(TOTAL_PERMITS);\n+ }\n+\n+ /**\n+ * Helper method that ensures permits are only released once\n+ *\n+ * @return reference holder\n+ */\n+ private Releasable idempotentRelease(int permits) {\n+ AtomicBoolean closed = new AtomicBoolean();\n+ return () -> {\n+ if (closed.compareAndSet(false, true)) {\n+ semaphore.release(permits);\n+ }\n+ };\n+ }\n+\n+ /**\n+ * Returns the number of references currently being held.\n+ */\n+ public int activeRefs() {\n+ int availablePermits = semaphore.availablePermits();\n+ if (availablePermits == 0) {\n+ // when blockAcquisition is holding all permits\n+ return 0;\n+ } else {\n+ return TOTAL_PERMITS - availablePermits;\n+ }\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/util/concurrent/SuspendableRefContainer.java",
"status": "added"
},
{
"diff": "@@ -42,17 +42,17 @@\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n-import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.metrics.MeanMetric;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.Callback;\n-import org.elasticsearch.common.util.concurrent.AbstractRefCounted;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n+import org.elasticsearch.common.util.concurrent.SuspendableRefContainer;\n import org.elasticsearch.gateway.MetaDataStateFormat;\n import org.elasticsearch.index.IndexModule;\n import org.elasticsearch.index.IndexSettings;\n@@ -189,9 +189,17 @@ public class IndexShard extends AbstractIndexShardComponent {\n \n private final ShardPath path;\n \n- private final IndexShardOperationCounter indexShardOperationCounter;\n+ private final SuspendableRefContainer suspendableRefContainer;\n \n- private final EnumSet<IndexShardState> readAllowedStates = EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED, IndexShardState.POST_RECOVERY);\n+ private static final EnumSet<IndexShardState> readAllowedStates = EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED, IndexShardState.POST_RECOVERY);\n+ // for primaries, we only allow to write when actually started (so the cluster has decided we started)\n+ // in case we have a relocation of a primary, we also allow to write after phase 2 completed, where the shard may be\n+ // in state RECOVERING or POST_RECOVERY. After a primary has been marked as RELOCATED, we only allow writes to the relocation target\n+ // which can be either in POST_RECOVERY or already STARTED (this prevents writing concurrently to two primaries).\n+ public static final EnumSet<IndexShardState> writeAllowedStatesForPrimary = EnumSet.of(IndexShardState.RECOVERING, IndexShardState.POST_RECOVERY, IndexShardState.STARTED);\n+ // replication is also allowed while recovering, since we index also during recovery to replicas and rely on version checks to make sure its consistent\n+ // a relocated shard can also be target of a replication if the relocation target has not been marked as active yet and is syncing it's changes back to the relocation source\n+ private static final EnumSet<IndexShardState> writeAllowedStatesForReplica = EnumSet.of(IndexShardState.RECOVERING, IndexShardState.POST_RECOVERY, IndexShardState.STARTED, IndexShardState.RELOCATED);\n \n private final IndexSearcherWrapper searcherWrapper;\n \n@@ -250,7 +258,7 @@ public IndexShard(ShardId shardId, IndexSettings indexSettings, ShardPath path,\n }\n \n this.engineConfig = newEngineConfig(translogConfig, cachingPolicy);\n- this.indexShardOperationCounter = new IndexShardOperationCounter(logger, shardId);\n+ this.suspendableRefContainer = new SuspendableRefContainer();\n this.provider = provider;\n this.searcherWrapper = indexSearcherWrapper;\n this.percolatorQueriesRegistry = new PercolatorQueriesRegistry(shardId, indexSettings, newQueryShardContext());\n@@ -321,6 +329,8 @@ public QueryCachingPolicy getQueryCachingPolicy() {\n * Updates the shards routing entry. 
This mutate the shards internal state depending\n * on the changes that get introduced by the new routing value. This method will persist shard level metadata\n * unless explicitly disabled.\n+ *\n+ * @throws IndexShardRelocatedException if shard is marked as relocated and relocation aborted\n */\n public void updateRoutingEntry(final ShardRouting newRouting, final boolean persistState) {\n final ShardRouting currentRouting = this.shardRouting;\n@@ -368,6 +378,14 @@ public void updateRoutingEntry(final ShardRouting newRouting, final boolean pers\n }\n }\n }\n+\n+ if (state == IndexShardState.RELOCATED &&\n+ (newRouting.relocating() == false || newRouting.equalsIgnoringMetaData(currentRouting) == false)) {\n+ // if the shard is marked as RELOCATED we have to fail when any changes in shard routing occur (e.g. due to recovery\n+ // failure / cancellation). The reason is that at the moment we cannot safely move back to STARTED without risking two\n+ // active primaries.\n+ throw new IndexShardRelocatedException(shardId(), \"Shard is marked as relocated, cannot safely move to state \" + newRouting.state());\n+ }\n this.shardRouting = newRouting;\n indexEventListener.shardRoutingChanged(this, currentRouting, newRouting);\n } finally {\n@@ -404,12 +422,16 @@ public IndexShardState markAsRecovering(String reason, RecoveryState recoverySta\n }\n \n public IndexShard relocated(String reason) throws IndexShardNotStartedException {\n- synchronized (mutex) {\n- if (state != IndexShardState.STARTED) {\n- throw new IndexShardNotStartedException(shardId, state);\n+ try (Releasable block = suspendableRefContainer.blockAcquisition()) {\n+ // no shard operation locks are being held here, move state from started to relocated\n+ synchronized (mutex) {\n+ if (state != IndexShardState.STARTED) {\n+ throw new IndexShardNotStartedException(shardId, state);\n+ }\n+ changeState(IndexShardState.RELOCATED, reason);\n }\n- changeState(IndexShardState.RELOCATED, reason);\n }\n+\n return this;\n }\n \n@@ -796,7 +818,6 @@ public void close(String reason, boolean flushEngine) throws IOException {\n refreshScheduledFuture = null;\n }\n changeState(IndexShardState.CLOSED, reason);\n- indexShardOperationCounter.decRef();\n } finally {\n final Engine engine = this.currentEngineReference.getAndSet(null);\n try {\n@@ -810,7 +831,6 @@ public void close(String reason, boolean flushEngine) throws IOException {\n }\n }\n \n-\n public IndexShard postRecovery(String reason) throws IndexShardStartedException, IndexShardRelocatedException, IndexShardClosedException {\n if (mapperService.hasMapping(PercolatorService.TYPE_NAME)) {\n refresh(\"percolator_load_queries\");\n@@ -967,16 +987,17 @@ private void ensureWriteAllowed(Engine.Operation op) throws IllegalIndexShardSta\n IndexShardState state = this.state; // one time volatile read\n \n if (origin == Engine.Operation.Origin.PRIMARY) {\n- // for primaries, we only allow to write when actually started (so the cluster has decided we started)\n- // otherwise, we need to retry, we also want to still allow to index if we are relocated in case it fails\n- if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED) {\n- throw new IllegalIndexShardStateException(shardId, state, \"operation only allowed when started/recovering, origin [\" + origin + \"]\");\n+ if (writeAllowedStatesForPrimary.contains(state) == false) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"operation only allowed when shard state is one of \" + writeAllowedStatesForPrimary + \", origin 
[\" + origin + \"]\");\n+ }\n+ } else if (origin == Engine.Operation.Origin.RECOVERY) {\n+ if (state != IndexShardState.RECOVERING) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"operation only allowed when recovering, origin [\" + origin + \"]\");\n }\n } else {\n- // for replicas, we allow to write also while recovering, since we index also during recovery to replicas\n- // and rely on version checks to make sure its consistent\n- if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED && state != IndexShardState.RECOVERING && state != IndexShardState.POST_RECOVERY) {\n- throw new IllegalIndexShardStateException(shardId, state, \"operation only allowed when started/recovering, origin [\" + origin + \"]\");\n+ assert origin == Engine.Operation.Origin.REPLICA;\n+ if (writeAllowedStatesForReplica.contains(state) == false) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"operation only allowed when shard state is one of \" + writeAllowedStatesForReplica + \", origin [\" + origin + \"]\");\n }\n }\n }\n@@ -995,7 +1016,7 @@ private void verifyNotClosed() throws IllegalIndexShardStateException {\n private void verifyNotClosed(Throwable suppressed) throws IllegalIndexShardStateException {\n IndexShardState state = this.state; // one time volatile read\n if (state == IndexShardState.CLOSED) {\n- final IllegalIndexShardStateException exc = new IllegalIndexShardStateException(shardId, state, \"operation only allowed when not closed\");\n+ final IllegalIndexShardStateException exc = new IndexShardClosedException(shardId, \"operation only allowed when not closed\");\n if (suppressed != null) {\n exc.addSuppressed(suppressed);\n }\n@@ -1390,37 +1411,21 @@ protected void operationProcessed() {\n idxSettings.getSettings().getAsTime(IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING, IndexingMemoryController.SHARD_DEFAULT_INACTIVE_TIME));\n }\n \n- private static class IndexShardOperationCounter extends AbstractRefCounted {\n- final private ESLogger logger;\n- private final ShardId shardId;\n-\n- public IndexShardOperationCounter(ESLogger logger, ShardId shardId) {\n- super(\"index-shard-operations-counter\");\n- this.logger = logger;\n- this.shardId = shardId;\n- }\n-\n- @Override\n- protected void closeInternal() {\n- logger.debug(\"operations counter reached 0, will not accept any further writes\");\n- }\n-\n- @Override\n- protected void alreadyClosed() {\n- throw new IndexShardClosedException(shardId, \"could not increment operation counter. shard is closed.\");\n+ public Releasable acquirePrimaryOperationLock() {\n+ verifyNotClosed();\n+ if (shardRouting.primary() == false) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"shard is not a primary\");\n }\n+ return suspendableRefContainer.acquireUninterruptibly();\n }\n \n- public void incrementOperationCounter() {\n- indexShardOperationCounter.incRef();\n- }\n-\n- public void decrementOperationCounter() {\n- indexShardOperationCounter.decRef();\n+ public Releasable acquireReplicaOperationLock() {\n+ verifyNotClosed();\n+ return suspendableRefContainer.acquireUninterruptibly();\n }\n \n- public int getOperationsCount() {\n- return Math.max(0, indexShardOperationCounter.refCount() - 1); // refCount is incremented on creation and decremented on close\n+ public int getActiveOperationsCount() {\n+ return suspendableRefContainer.activeRefs(); // refCount is incremented on creation and decremented on close\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
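The `IndexShard` diff above replaces the old `IndexShardOperationCounter` with a `SuspendableRefContainer`: every primary/replica operation acquires a `Releasable` ref, and `relocated(...)` uses `blockAcquisition()` to wait for in-flight operations to drain before flipping the state to RELOCATED, so no new write can slip in during the handoff. Below is a minimal sketch of that acquire/block pattern built on a fair `Semaphore`; the class and helper names are illustrative, not the actual Elasticsearch implementation.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the acquire/block pattern, not the real SuspendableRefContainer.
public final class RefContainerSketch {
    private static final int TOTAL_PERMITS = Integer.MAX_VALUE;
    // fair, so a pending blockAcquisition() is not starved by a stream of new operations
    private final Semaphore semaphore = new Semaphore(TOTAL_PERMITS, true);

    /** Take one ref for an operation; close the returned handle when the operation finishes. */
    public AutoCloseable acquire() {
        semaphore.acquireUninterruptibly(1);
        return releaseOnce(1);
    }

    /** Wait for all active refs to be released and keep new acquisitions blocked until closed. */
    public AutoCloseable blockAcquisition() {
        semaphore.acquireUninterruptibly(TOTAL_PERMITS);
        return releaseOnce(TOTAL_PERMITS);
    }

    /** Refs currently held by operations (0 while a block is in place). */
    public int activeRefs() {
        int available = semaphore.availablePermits();
        return available == 0 ? 0 : TOTAL_PERMITS - available;
    }

    // idempotent release, mirroring the "close() is idempotent" behaviour exercised by the tests
    private AutoCloseable releaseOnce(int permits) {
        AtomicBoolean released = new AtomicBoolean();
        return () -> {
            if (released.compareAndSet(false, true)) {
                semaphore.release(permits);
            }
        };
    }
}
```

With something like this in place, `relocated(...)` in the diff simply wraps the state change in `try (Releasable block = suspendableRefContainer.blockAcquisition()) { ... }`, which is exactly what the hunk shows.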
{
"diff": "@@ -29,10 +29,14 @@\n public class IndexShardRelocatedException extends IllegalIndexShardStateException {\n \n public IndexShardRelocatedException(ShardId shardId) {\n- super(shardId, IndexShardState.RELOCATED, \"Already relocated\");\n+ this(shardId, \"Already relocated\");\n+ }\n+\n+ public IndexShardRelocatedException(ShardId shardId, String reason) {\n+ super(shardId, IndexShardState.RELOCATED, reason);\n }\n \n public IndexShardRelocatedException(StreamInput in) throws IOException{\n super(in);\n }\n-}\n\\ No newline at end of file\n+}",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShardRelocatedException.java",
"status": "modified"
},
{
"diff": "@@ -492,7 +492,11 @@ private void applyNewOrUpdatedShards(final ClusterChangedEvent event) {\n // shadow replicas do not support primary promotion. The master would reinitialize the shard, giving it a new allocation, meaning we should be there.\n assert (shardRouting.primary() && currentRoutingEntry.primary() == false) == false || indexShard.allowsPrimaryPromotion() :\n \"shard for doesn't support primary promotion but master promoted it with changing allocation. New routing \" + shardRouting + \", current routing \" + currentRoutingEntry;\n- indexShard.updateRoutingEntry(shardRouting, event.state().blocks().disableStatePersistence() == false);\n+ try {\n+ indexShard.updateRoutingEntry(shardRouting, event.state().blocks().disableStatePersistence() == false);\n+ } catch (Throwable e) {\n+ failAndRemoveShard(shardRouting, indexService.indexUUID(), indexService, true, \"failed updating shard routing entry\", e);\n+ }\n }\n }\n \n@@ -626,7 +630,7 @@ private void applyInitializingShard(final ClusterState state, final IndexMetaDat\n // For primaries: requests in any case are routed to both when its relocating and that way we handle\n // the edge case where its mark as relocated, and we might need to roll it back...\n // For replicas: we are recovering a backup from a primary\n- RecoveryState.Type type = shardRouting.primary() ? RecoveryState.Type.RELOCATION : RecoveryState.Type.REPLICA;\n+ RecoveryState.Type type = shardRouting.primary() ? RecoveryState.Type.PRIMARY_RELOCATION : RecoveryState.Type.REPLICA;\n RecoveryState recoveryState = new RecoveryState(indexShard.shardId(), shardRouting.primary(), type, sourceNode, nodes.localNode());\n indexShard.markAsRecovering(\"from \" + sourceNode, recoveryState);\n recoveryTarget.startRecovery(indexShard, type, sourceNode, new PeerRecoveryListener(shardRouting, indexService, indexMetaData));",
"filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java",
"status": "modified"
},
{
"diff": "@@ -435,7 +435,7 @@ private InFlightOpsResponse performInFlightOps(InFlightOpsRequest request) {\n if (indexShard.routingEntry().primary() == false) {\n throw new IllegalStateException(\"[\" + request.shardId() +\"] expected a primary shard\");\n }\n- int opCount = indexShard.getOperationsCount();\n+ int opCount = indexShard.getActiveOperationsCount();\n logger.trace(\"{} in flight operations sampled at [{}]\", request.shardId(), opCount);\n return new InFlightOpsResponse(opCount);\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/flush/SyncedFlushService.java",
"status": "modified"
},
{
"diff": "@@ -61,8 +61,7 @@ public static class Actions {\n \n private final ClusterService clusterService;\n \n- private final OngoingRecoveres ongoingRecoveries = new OngoingRecoveres();\n-\n+ private final OngoingRecoveries ongoingRecoveries = new OngoingRecoveries();\n \n @Inject\n public RecoverySource(Settings settings, TransportService transportService, IndicesService indicesService,\n@@ -107,11 +106,11 @@ private RecoveryResponse recover(final StartRecoveryRequest request) throws IOEx\n }\n if (!targetShardRouting.initializing()) {\n logger.debug(\"delaying recovery of {} as it is not listed as initializing on the target node {}. known shards state is [{}]\",\n- request.shardId(), request.targetNode(), targetShardRouting.state());\n+ request.shardId(), request.targetNode(), targetShardRouting.state());\n throw new DelayRecoveryException(\"source node has the state of the target shard to be [\" + targetShardRouting.state() + \"], expecting to be [initializing]\");\n }\n \n- logger.trace(\"[{}][{}] starting recovery to {}, mark_as_relocated {}\", request.shardId().getIndex().getName(), request.shardId().id(), request.targetNode(), request.markAsRelocated());\n+ logger.trace(\"[{}][{}] starting recovery to {}\", request.shardId().getIndex().getName(), request.shardId().id(), request.targetNode());\n final RecoverySourceHandler handler;\n if (shard.indexSettings().isOnSharedFilesystem()) {\n handler = new SharedFSRecoverySourceHandler(shard, request, recoverySettings, transportService, logger);\n@@ -134,8 +133,7 @@ public void messageReceived(final StartRecoveryRequest request, final TransportC\n }\n }\n \n-\n- private static final class OngoingRecoveres {\n+ private static final class OngoingRecoveries {\n private final Map<IndexShard, Set<RecoverySourceHandler>> ongoingRecoveries = new HashMap<>();\n \n synchronized void add(IndexShard shard, RecoverySourceHandler handler) {",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java",
"status": "modified"
},
{
"diff": "@@ -393,9 +393,11 @@ public void run() throws InterruptedException {\n }\n });\n \n-\n- if (request.markAsRelocated()) {\n- // TODO what happens if the recovery process fails afterwards, we need to mark this back to started\n+ if (isPrimaryRelocation()) {\n+ /**\n+ * if the recovery process fails after setting the shard state to RELOCATED, both relocation source and\n+ * target are failed (see {@link IndexShard#updateRoutingEntry}).\n+ */\n try {\n shard.relocated(\"to \" + request.targetNode());\n } catch (IllegalIndexShardStateException e) {\n@@ -406,7 +408,11 @@ public void run() throws InterruptedException {\n }\n stopWatch.stop();\n logger.trace(\"[{}][{}] finalizing recovery to {}: took [{}]\",\n- indexName, shardId, request.targetNode(), stopWatch.totalTime());\n+ indexName, shardId, request.targetNode(), stopWatch.totalTime());\n+ }\n+\n+ protected boolean isPrimaryRelocation() {\n+ return request.recoveryType() == RecoveryState.Type.PRIMARY_RELOCATION;\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java",
"status": "modified"
},
{
"diff": "@@ -101,7 +101,7 @@ public static enum Type {\n STORE((byte) 0),\n SNAPSHOT((byte) 1),\n REPLICA((byte) 2),\n- RELOCATION((byte) 3);\n+ PRIMARY_RELOCATION((byte) 3);\n \n private static final Type[] TYPES = new Type[Type.values().length];\n ",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java",
"status": "modified"
},
{
"diff": "@@ -138,7 +138,6 @@ public void startRecovery(final IndexShard indexShard, final RecoveryState.Type\n // create a new recovery status, and process...\n final long recoveryId = onGoingRecoveries.startRecovery(indexShard, sourceNode, listener, recoverySettings.activityTimeout());\n threadPool.generic().execute(new RecoveryRunner(recoveryId));\n-\n }\n \n protected void retryRecovery(final RecoveryStatus recoveryStatus, final Throwable reason, TimeValue retryAfter, final StartRecoveryRequest currentRequest) {\n@@ -178,7 +177,7 @@ private void doRecovery(final RecoveryStatus recoveryStatus) {\n return;\n }\n final StartRecoveryRequest request = new StartRecoveryRequest(recoveryStatus.shardId(), recoveryStatus.sourceNode(), clusterService.localNode(),\n- false, metadataSnapshot, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n+ metadataSnapshot, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n \n final AtomicReference<RecoveryResponse> responseHolder = new AtomicReference<>();\n try {\n@@ -267,7 +266,6 @@ public RecoveryResponse newInstance() {\n onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source shard is closed\", cause), false);\n return;\n }\n-\n onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, e), true);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
{
"diff": "@@ -84,8 +84,4 @@ protected int sendSnapshot(Translog.Snapshot snapshot) {\n return 0;\n }\n \n- private boolean isPrimaryRelocation() {\n- return request.recoveryType() == RecoveryState.Type.RELOCATION && shard.routingEntry().primary();\n- }\n-\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/SharedFSRecoverySourceHandler.java",
"status": "modified"
},
{
"diff": "@@ -41,8 +41,6 @@ public class StartRecoveryRequest extends TransportRequest {\n \n private DiscoveryNode targetNode;\n \n- private boolean markAsRelocated;\n-\n private Store.MetadataSnapshot metadataSnapshot;\n \n private RecoveryState.Type recoveryType;\n@@ -56,12 +54,11 @@ public StartRecoveryRequest() {\n * @param sourceNode The node to recover from\n * @param targetNode The node to recover to\n */\n- public StartRecoveryRequest(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, boolean markAsRelocated, Store.MetadataSnapshot metadataSnapshot, RecoveryState.Type recoveryType, long recoveryId) {\n+ public StartRecoveryRequest(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, Store.MetadataSnapshot metadataSnapshot, RecoveryState.Type recoveryType, long recoveryId) {\n this.recoveryId = recoveryId;\n this.shardId = shardId;\n this.sourceNode = sourceNode;\n this.targetNode = targetNode;\n- this.markAsRelocated = markAsRelocated;\n this.recoveryType = recoveryType;\n this.metadataSnapshot = metadataSnapshot;\n }\n@@ -82,10 +79,6 @@ public DiscoveryNode targetNode() {\n return targetNode;\n }\n \n- public boolean markAsRelocated() {\n- return markAsRelocated;\n- }\n-\n public RecoveryState.Type recoveryType() {\n return recoveryType;\n }\n@@ -101,7 +94,6 @@ public void readFrom(StreamInput in) throws IOException {\n shardId = ShardId.readShardId(in);\n sourceNode = DiscoveryNode.readNode(in);\n targetNode = DiscoveryNode.readNode(in);\n- markAsRelocated = in.readBoolean();\n metadataSnapshot = new Store.MetadataSnapshot(in);\n recoveryType = RecoveryState.Type.fromId(in.readByte());\n \n@@ -114,7 +106,6 @@ public void writeTo(StreamOutput out) throws IOException {\n shardId.writeTo(out);\n sourceNode.writeTo(out);\n targetNode.writeTo(out);\n- out.writeBoolean(markAsRelocated);\n metadataSnapshot.writeTo(out);\n out.writeByte(recoveryType.id());\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java",
"status": "modified"
},
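With the `markAsRelocated` flag gone, the relocation intent now travels with the recovery type. A hedged sketch of constructing the request after this change; the constructor signature comes from the diff above, while the local variables are placeholders.

```java
// shardId, sourceNode, targetNode, metadataSnapshot and recoveryId are placeholders here.
StartRecoveryRequest request = new StartRecoveryRequest(shardId, sourceNode, targetNode,
        metadataSnapshot, RecoveryState.Type.PRIMARY_RELOCATION, recoveryId);

// The source side can now derive "is this a primary relocation?" from the type alone:
boolean isPrimaryRelocation = request.recoveryType() == RecoveryState.Type.PRIMARY_RELOCATION;
```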
{
"diff": "@@ -23,14 +23,15 @@\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n+import java.util.function.Supplier;\n \n /**\n * Base class for delegating transport response to a transport channel\n */\n public abstract class TransportChannelResponseHandler<T extends TransportResponse> implements TransportResponseHandler<T> {\n \n /**\n- * Convenience method for delegating an empty response to the provided changed\n+ * Convenience method for delegating an empty response to the provided transport channel\n */\n public static TransportChannelResponseHandler<TransportResponse.Empty> emptyResponseHandler(ESLogger logger, TransportChannel channel, String extraInfoOnError) {\n return new TransportChannelResponseHandler<TransportResponse.Empty>(logger, channel, extraInfoOnError) {\n@@ -41,6 +42,19 @@ public TransportResponse.Empty newInstance() {\n };\n }\n \n+ /**\n+ * Convenience method for delegating a response provided by supplier to the provided transport channel\n+ */\n+ public static <T extends TransportResponse> TransportChannelResponseHandler responseHandler(ESLogger logger, Supplier<T> responseSupplier, TransportChannel channel, String extraInfoOnError) {\n+ return new TransportChannelResponseHandler<T>(logger, channel, extraInfoOnError) {\n+ @Override\n+ public T newInstance() {\n+ return responseSupplier.get();\n+ }\n+ };\n+ }\n+\n+\n private final ESLogger logger;\n private final TransportChannel channel;\n private final String extraInfoOnError;",
"filename": "core/src/main/java/org/elasticsearch/transport/TransportChannelResponseHandler.java",
"status": "modified"
},
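The new `Supplier`-based factory avoids writing an anonymous subclass whenever the response type has an accessible no-arg constructor. A hedged usage sketch; `MyResponse`, `logger` and `channel` are illustrative placeholders, only the factory signature is taken from the diff.

```java
// MyResponse stands in for any TransportResponse with a no-arg constructor.
TransportChannelResponseHandler handler =
        TransportChannelResponseHandler.responseHandler(logger, MyResponse::new, channel,
                "failed forwarding response");
// The handler can then be passed to TransportService#sendRequest so that the reply from the
// remote node is written straight back to the original transport channel.
```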
{
"diff": "@@ -56,12 +56,12 @@ public class ClusterStateCreationUtils {\n /**\n * Creates cluster state with and index that has one shard and #(replicaStates) replicas\n *\n- * @param index name of the index\n- * @param primaryLocal if primary should coincide with the local node in the cluster state\n- * @param primaryState state of primary\n- * @param replicaStates states of the replicas. length of this array determines also the number of replicas\n+ * @param index name of the index\n+ * @param activePrimaryLocal if active primary should coincide with the local node in the cluster state\n+ * @param primaryState state of primary\n+ * @param replicaStates states of the replicas. length of this array determines also the number of replicas\n */\n- public static ClusterState state(String index, boolean primaryLocal, ShardRoutingState primaryState, ShardRoutingState... replicaStates) {\n+ public static ClusterState state(String index, boolean activePrimaryLocal, ShardRoutingState primaryState, ShardRoutingState... replicaStates) {\n final int numberOfReplicas = replicaStates.length;\n \n int numberOfNodes = numberOfReplicas + 1;\n@@ -97,7 +97,7 @@ public static ClusterState state(String index, boolean primaryLocal, ShardRoutin\n String relocatingNode = null;\n UnassignedInfo unassignedInfo = null;\n if (primaryState != ShardRoutingState.UNASSIGNED) {\n- if (primaryLocal) {\n+ if (activePrimaryLocal) {\n primaryNode = newNode(0).id();\n unassignedNodes.remove(primaryNode);\n } else {\n@@ -173,13 +173,13 @@ public static ClusterState stateWithAssignedPrimariesAndOneReplica(String index,\n * Creates cluster state with and index that has one shard and as many replicas as numberOfReplicas.\n * Primary will be STARTED in cluster state but replicas will be one of UNASSIGNED, INITIALIZING, STARTED or RELOCATING.\n *\n- * @param index name of the index\n- * @param primaryLocal if primary should coincide with the local node in the cluster state\n- * @param numberOfReplicas number of replicas\n+ * @param index name of the index\n+ * @param activePrimaryLocal if active primary should coincide with the local node in the cluster state\n+ * @param numberOfReplicas number of replicas\n */\n- public static ClusterState stateWithStartedPrimary(String index, boolean primaryLocal, int numberOfReplicas) {\n+ public static ClusterState stateWithActivePrimary(String index, boolean activePrimaryLocal, int numberOfReplicas) {\n int assignedReplicas = randomIntBetween(0, numberOfReplicas);\n- return stateWithStartedPrimary(index, primaryLocal, assignedReplicas, numberOfReplicas - assignedReplicas);\n+ return stateWithActivePrimary(index, activePrimaryLocal, assignedReplicas, numberOfReplicas - assignedReplicas);\n }\n \n /**\n@@ -188,11 +188,11 @@ public static ClusterState stateWithStartedPrimary(String index, boolean primary\n * some (assignedReplicas) will be one of INITIALIZING, STARTED or RELOCATING.\n *\n * @param index name of the index\n- * @param primaryLocal if primary should coincide with the local node in the cluster state\n+ * @param activePrimaryLocal if active primary should coincide with the local node in the cluster state\n * @param assignedReplicas number of replicas that should have INITIALIZING, STARTED or RELOCATING state\n * @param unassignedReplicas number of replicas that should be unassigned\n */\n- public static ClusterState stateWithStartedPrimary(String index, boolean primaryLocal, int assignedReplicas, int unassignedReplicas) {\n+ public static ClusterState stateWithActivePrimary(String index, 
boolean activePrimaryLocal, int assignedReplicas, int unassignedReplicas) {\n ShardRoutingState[] replicaStates = new ShardRoutingState[assignedReplicas + unassignedReplicas];\n // no point in randomizing - node assignment later on does it too.\n for (int i = 0; i < assignedReplicas; i++) {\n@@ -201,7 +201,7 @@ public static ClusterState stateWithStartedPrimary(String index, boolean primary\n for (int i = assignedReplicas; i < replicaStates.length; i++) {\n replicaStates[i] = ShardRoutingState.UNASSIGNED;\n }\n- return state(index, primaryLocal, randomFrom(ShardRoutingState.STARTED, ShardRoutingState.RELOCATING), replicaStates);\n+ return state(index, activePrimaryLocal, randomFrom(ShardRoutingState.STARTED, ShardRoutingState.RELOCATING), replicaStates);\n }\n \n /**",
"filename": "core/src/test/java/org/elasticsearch/action/support/replication/ClusterStateCreationUtils.java",
"status": "modified"
},
{
"diff": "@@ -37,6 +37,7 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n@@ -75,9 +76,10 @@\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReference;\n \n import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.state;\n-import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.stateWithStartedPrimary;\n+import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.stateWithActivePrimary;\n import static org.hamcrest.CoreMatchers.not;\n import static org.hamcrest.Matchers.arrayWithSize;\n import static org.hamcrest.Matchers.empty;\n@@ -225,7 +227,7 @@ public void testRoutePhaseExecutesRequest() {\n final String index = \"test\";\n final ShardId shardId = new ShardId(index, \"_na_\", 0);\n \n- clusterService.setState(stateWithStartedPrimary(index, randomBoolean(), 3));\n+ clusterService.setState(stateWithActivePrimary(index, randomBoolean(), 3));\n \n logger.debug(\"using state: \\n{}\", clusterService.state().prettyPrint());\n \n@@ -249,33 +251,73 @@ public void testRoutePhaseExecutesRequest() {\n assertIndexShardUninitialized();\n }\n \n- public void testPrimaryPhaseExecutesRequest() throws InterruptedException, ExecutionException {\n+ public void testPrimaryPhaseExecutesOrDelegatesRequestToRelocationTarget() throws InterruptedException, ExecutionException {\n final String index = \"test\";\n final ShardId shardId = new ShardId(index, \"_na_\", 0);\n- clusterService.setState(state(index, true, ShardRoutingState.STARTED, ShardRoutingState.STARTED));\n+ ClusterState state = stateWithActivePrimary(index, true, randomInt(5));\n+ clusterService.setState(state);\n Request request = new Request(shardId).timeout(\"1ms\");\n PlainActionFuture<Response> listener = new PlainActionFuture<>();\n- TransportReplicationAction.PrimaryPhase primaryPhase = action.new PrimaryPhase(request, createTransportChannel(listener));\n+ AtomicBoolean movedToReplication = new AtomicBoolean();\n+ TransportReplicationAction.PrimaryPhase primaryPhase = action.new PrimaryPhase(request, createTransportChannel(listener)) {\n+ @Override\n+ void finishAndMoveToReplication(TransportReplicationAction.ReplicationPhase replicationPhase) {\n+ super.finishAndMoveToReplication(replicationPhase);\n+ movedToReplication.set(true);\n+ }\n+ };\n+ ShardRouting primaryShard = state.getRoutingTable().shardRoutingTable(shardId).primaryShard();\n+ boolean executeOnPrimary = true;\n+ if (primaryShard.relocating() && randomBoolean()) { // whether shard has been marked as relocated already (i.e. 
relocation completed)\n+ isRelocated.set(true);\n+ indexShardRouting.set(primaryShard);\n+ executeOnPrimary = false;\n+ }\n primaryPhase.run();\n- assertThat(\"request was not processed on primary\", request.processedOnPrimary.get(), equalTo(true));\n- final String replicaNodeId = clusterService.state().getRoutingTable().shardRoutingTable(index, shardId.id()).replicaShards().get(0).currentNodeId();\n- final List<CapturingTransport.CapturedRequest> requests = transport.getCapturedRequestsByTargetNodeAndClear().get(replicaNodeId);\n- assertThat(requests, notNullValue());\n- assertThat(requests.size(), equalTo(1));\n- assertThat(\"replica request was not sent\", requests.get(0).action, equalTo(\"testAction[r]\"));\n+ assertThat(request.processedOnPrimary.get(), equalTo(executeOnPrimary));\n+ assertThat(movedToReplication.get(), equalTo(executeOnPrimary));\n+ if (executeOnPrimary == false) {\n+ final List<CapturingTransport.CapturedRequest> requests = transport.capturedRequestsByTargetNode().get(primaryShard.relocatingNodeId());\n+ assertThat(requests, notNullValue());\n+ assertThat(requests.size(), equalTo(1));\n+ assertThat(\"primary request was not delegated to relocation target\", requests.get(0).action, equalTo(\"testAction[p]\"));\n+ }\n+ }\n+\n+ public void testPrimaryPhaseExecutesDelegatedRequestOnRelocationTarget() throws InterruptedException, ExecutionException {\n+ final String index = \"test\";\n+ final ShardId shardId = new ShardId(index, \"_na_\", 0);\n+ ClusterState state = state(index, true, ShardRoutingState.RELOCATING);\n+ String primaryTargetNodeId = state.getRoutingTable().shardRoutingTable(shardId).primaryShard().relocatingNodeId();\n+ // simulate execution of the primary phase on the relocation target node\n+ state = ClusterState.builder(state).nodes(DiscoveryNodes.builder(state.nodes()).localNodeId(primaryTargetNodeId)).build();\n+ clusterService.setState(state);\n+ Request request = new Request(shardId).timeout(\"1ms\");\n+ PlainActionFuture<Response> listener = new PlainActionFuture<>();\n+ AtomicBoolean movedToReplication = new AtomicBoolean();\n+ TransportReplicationAction.PrimaryPhase primaryPhase = action.new PrimaryPhase(request, createTransportChannel(listener)) {\n+ @Override\n+ void finishAndMoveToReplication(TransportReplicationAction.ReplicationPhase replicationPhase) {\n+ super.finishAndMoveToReplication(replicationPhase);\n+ movedToReplication.set(true);\n+ }\n+ };\n+ primaryPhase.run();\n+ assertThat(\"request was not processed on primary relocation target\", request.processedOnPrimary.get(), equalTo(true));\n+ assertThat(movedToReplication.get(), equalTo(true));\n }\n \n public void testAddedReplicaAfterPrimaryOperation() {\n final String index = \"test\";\n final ShardId shardId = new ShardId(index, \"_na_\", 0);\n // start with no replicas\n- clusterService.setState(stateWithStartedPrimary(index, true, 0));\n+ clusterService.setState(stateWithActivePrimary(index, true, 0));\n logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n final ClusterState stateWithAddedReplicas = state(index, true, ShardRoutingState.STARTED, randomBoolean() ? 
ShardRoutingState.INITIALIZING : ShardRoutingState.STARTED);\n \n final Action actionWithAddedReplicaAfterPrimaryOp = new Action(Settings.EMPTY, \"testAction\", transportService, clusterService, threadPool) {\n @Override\n- protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Throwable {\n+ protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Exception {\n final Tuple<Response, Request> operationOnPrimary = super.shardOperationOnPrimary(metaData, shardRequest);\n // add replicas after primary operation\n ((TestClusterService) clusterService).setState(stateWithAddedReplicas);\n@@ -302,13 +344,13 @@ public void testRelocatingReplicaAfterPrimaryOperation() {\n final String index = \"test\";\n final ShardId shardId = new ShardId(index, \"_na_\", 0);\n // start with a replica\n- clusterService.setState(state(index, true, ShardRoutingState.STARTED, randomBoolean() ? ShardRoutingState.INITIALIZING : ShardRoutingState.STARTED));\n+ clusterService.setState(state(index, true, ShardRoutingState.STARTED, randomBoolean() ? ShardRoutingState.INITIALIZING : ShardRoutingState.STARTED));\n logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n final ClusterState stateWithRelocatingReplica = state(index, true, ShardRoutingState.STARTED, ShardRoutingState.RELOCATING);\n \n final Action actionWithRelocatingReplicasAfterPrimaryOp = new Action(Settings.EMPTY, \"testAction\", transportService, clusterService, threadPool) {\n @Override\n- protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Throwable {\n+ protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Exception {\n final Tuple<Response, Request> operationOnPrimary = super.shardOperationOnPrimary(metaData, shardRequest);\n // set replica to relocating\n ((TestClusterService) clusterService).setState(stateWithRelocatingReplica);\n@@ -341,7 +383,7 @@ public void testIndexDeletedAfterPrimaryOperation() {\n \n final Action actionWithDeletedIndexAfterPrimaryOp = new Action(Settings.EMPTY, \"testAction\", transportService, clusterService, threadPool) {\n @Override\n- protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Throwable {\n+ protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Exception {\n final Tuple<Response, Request> operationOnPrimary = super.shardOperationOnPrimary(metaData, shardRequest);\n // delete index after primary op\n ((TestClusterService) clusterService).setState(stateWithDeletedIndex);\n@@ -432,7 +474,13 @@ public void testReplication() throws ExecutionException, InterruptedException {\n final String index = \"test\";\n final ShardId shardId = new ShardId(index, \"_na_\", 0);\n \n- clusterService.setState(stateWithStartedPrimary(index, true, randomInt(5)));\n+ ClusterState state = stateWithActivePrimary(index, true, randomInt(5));\n+ ShardRouting primaryShard = state.getRoutingTable().shardRoutingTable(shardId).primaryShard();\n+ if (primaryShard.relocating() && randomBoolean()) {\n+ // simulate execution of the replication phase on the relocation target node after relocation source was marked as relocated\n+ state = ClusterState.builder(state).nodes(DiscoveryNodes.builder(state.nodes()).localNodeId(primaryShard.relocatingNodeId())).build();\n+ }\n+ clusterService.setState(state);\n \n 
final IndexShardRoutingTable shardRoutingTable = clusterService.state().routingTable().index(index).shard(shardId.id());\n int assignedReplicas = 0;\n@@ -455,12 +503,19 @@ public void testReplicationWithShadowIndex() throws ExecutionException, Interrup\n final String index = \"test\";\n final ShardId shardId = new ShardId(index, \"_na_\", 0);\n \n- ClusterState state = stateWithStartedPrimary(index, true, randomInt(5));\n+ ClusterState state = stateWithActivePrimary(index, true, randomInt(5));\n MetaData.Builder metaData = MetaData.builder(state.metaData());\n Settings.Builder settings = Settings.builder().put(metaData.get(index).getSettings());\n settings.put(IndexMetaData.SETTING_SHADOW_REPLICAS, true);\n metaData.put(IndexMetaData.builder(metaData.get(index)).settings(settings));\n- clusterService.setState(ClusterState.builder(state).metaData(metaData));\n+ state = ClusterState.builder(state).metaData(metaData).build();\n+\n+ ShardRouting primaryShard = state.getRoutingTable().shardRoutingTable(shardId).primaryShard();\n+ if (primaryShard.relocating() && randomBoolean()) {\n+ // simulate execution of the primary phase on the relocation target node\n+ state = ClusterState.builder(state).nodes(DiscoveryNodes.builder(state.nodes()).localNodeId(primaryShard.relocatingNodeId())).build();\n+ }\n+ clusterService.setState(state);\n \n final IndexShardRoutingTable shardRoutingTable = clusterService.state().routingTable().index(index).shard(shardId.id());\n int assignedReplicas = 0;\n@@ -507,8 +562,9 @@ action.new ReplicationPhase(request,\n assertEquals(request.shardId, replicationRequest.shardId);\n }\n \n+ String localNodeId = clusterService.state().getNodes().localNodeId();\n // no request was sent to the local node\n- assertThat(nodesSentTo.keySet(), not(hasItem(clusterService.state().getNodes().localNodeId())));\n+ assertThat(nodesSentTo.keySet(), not(hasItem(localNodeId)));\n \n // requests were sent to the correct shard copies\n for (ShardRouting shard : clusterService.state().getRoutingTable().shardRoutingTable(shardId)) {\n@@ -518,11 +574,11 @@ action.new ReplicationPhase(request,\n if (shard.unassigned()) {\n continue;\n }\n- if (shard.primary() == false) {\n- nodesSentTo.remove(shard.currentNodeId());\n+ if (localNodeId.equals(shard.currentNodeId()) == false) {\n+ assertThat(nodesSentTo.remove(shard.currentNodeId()), notNullValue());\n }\n- if (shard.relocating()) {\n- nodesSentTo.remove(shard.relocatingNodeId());\n+ if (shard.relocating() && localNodeId.equals(shard.relocatingNodeId()) == false) { // for relocating primaries, we replicate from target to source if source is marked as relocated\n+ assertThat(nodesSentTo.remove(shard.relocatingNodeId()), notNullValue());\n }\n }\n \n@@ -629,6 +685,7 @@ public void run() {\n // shard operation should be ongoing, so the counter is at 2\n // we have to wait here because increment happens in thread\n assertBusy(() -> assertIndexShardCounter(2));\n+\n assertThat(transport.capturedRequests().length, equalTo(0));\n ((ActionWithDelay) action).countDownLatch.countDown();\n t.join();\n@@ -726,12 +783,28 @@ private void assertIndexShardCounter(int expected) {\n \n private final AtomicInteger count = new AtomicInteger(0);\n \n+ private final AtomicBoolean isRelocated = new AtomicBoolean(false);\n+\n+ private final AtomicReference<ShardRouting> indexShardRouting = new AtomicReference<>();\n+\n /*\n * Returns testIndexShardOperationsCounter or initializes it if it was already created in this test run.\n * */\n- private synchronized Releasable 
getOrCreateIndexShardOperationsCounter() {\n+ private synchronized TransportReplicationAction.IndexShardReference getOrCreateIndexShardOperationsCounter() {\n count.incrementAndGet();\n- return new Releasable() {\n+ return new TransportReplicationAction.IndexShardReference() {\n+ @Override\n+ public boolean isRelocated() {\n+ return isRelocated.get();\n+ }\n+\n+ @Override\n+ public ShardRouting routingEntry() {\n+ ShardRouting shardRouting = indexShardRouting.get();\n+ assert shardRouting != null;\n+ return shardRouting;\n+ }\n+\n @Override\n public void close() {\n count.decrementAndGet();\n@@ -783,7 +856,7 @@ protected Response newResponseInstance() {\n }\n \n @Override\n- protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Throwable {\n+ protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Exception {\n boolean executedBefore = shardRequest.processedOnPrimary.getAndSet(true);\n assert executedBefore == false : \"request has already been executed on the primary\";\n return new Tuple<>(new Response(), shardRequest);\n@@ -805,7 +878,11 @@ protected boolean resolveIndex() {\n }\n \n @Override\n- protected Releasable getIndexShardOperationsCounter(ShardId shardId) {\n+ protected IndexShardReference getIndexShardReferenceOnPrimary(ShardId shardId) {\n+ return getOrCreateIndexShardOperationsCounter();\n+ }\n+\n+ protected IndexShardReference getIndexShardReferenceOnReplica(ShardId shardId) {\n return getOrCreateIndexShardOperationsCounter();\n }\n }\n@@ -832,7 +909,7 @@ class ActionWithExceptions extends Action {\n }\n \n @Override\n- protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Throwable {\n+ protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) {\n return throwException(shardRequest.shardId());\n }\n \n@@ -870,7 +947,7 @@ class ActionWithDelay extends Action {\n }\n \n @Override\n- protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Throwable {\n+ protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Request shardRequest) throws Exception {\n awaitLatch();\n return new Tuple<>(new Response(), shardRequest);\n }",
"filename": "core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.cluster.action.shard;\n \n import org.apache.lucene.index.CorruptIndexException;\n+import org.elasticsearch.action.support.replication.ClusterStateCreationUtils;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ClusterStateObserver;\n@@ -33,7 +34,6 @@\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.discovery.Discovery;\n-import org.elasticsearch.index.shard.ShardNotFoundException;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.cluster.TestClusterService;\n import org.elasticsearch.test.transport.CapturingTransport;\n@@ -55,7 +55,6 @@\n import java.util.concurrent.atomic.AtomicReference;\n import java.util.function.LongConsumer;\n \n-import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.stateWithStartedPrimary;\n import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.CoreMatchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n@@ -127,7 +126,7 @@ public static void stopThreadPool() {\n public void testSuccess() throws InterruptedException {\n final String index = \"test\";\n \n- clusterService.setState(stateWithStartedPrimary(index, true, randomInt(5)));\n+ clusterService.setState(ClusterStateCreationUtils.stateWithActivePrimary(index, true, randomInt(5)));\n \n String indexUUID = clusterService.state().metaData().index(index).getIndexUUID();\n \n@@ -169,7 +168,7 @@ public void onFailure(Throwable t) {\n public void testNoMaster() throws InterruptedException {\n final String index = \"test\";\n \n- clusterService.setState(stateWithStartedPrimary(index, true, randomInt(5)));\n+ clusterService.setState(ClusterStateCreationUtils.stateWithActivePrimary(index, true, randomInt(5)));\n \n DiscoveryNodes.Builder noMasterBuilder = DiscoveryNodes.builder(clusterService.state().nodes());\n noMasterBuilder.masterNodeId(null);\n@@ -207,7 +206,7 @@ public void onFailure(Throwable e) {\n public void testMasterChannelException() throws InterruptedException {\n final String index = \"test\";\n \n- clusterService.setState(stateWithStartedPrimary(index, true, randomInt(5)));\n+ clusterService.setState(ClusterStateCreationUtils.stateWithActivePrimary(index, true, randomInt(5)));\n \n String indexUUID = clusterService.state().metaData().index(index).getIndexUUID();\n \n@@ -264,7 +263,7 @@ public void onFailure(Throwable t) {\n public void testUnhandledFailure() {\n final String index = \"test\";\n \n- clusterService.setState(stateWithStartedPrimary(index, true, randomInt(5)));\n+ clusterService.setState(ClusterStateCreationUtils.stateWithActivePrimary(index, true, randomInt(5)));\n \n String indexUUID = clusterService.state().metaData().index(index).getIndexUUID();\n \n@@ -294,7 +293,7 @@ public void onFailure(Throwable t) {\n public void testShardNotFound() throws InterruptedException {\n final String index = \"test\";\n \n- clusterService.setState(stateWithStartedPrimary(index, true, randomInt(5)));\n+ clusterService.setState(ClusterStateCreationUtils.stateWithActivePrimary(index, true, randomInt(5)));\n \n String indexUUID = clusterService.state().metaData().index(index).getIndexUUID();\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/action/shard/ShardStateActionTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,115 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util.concurrent;\n+\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.nullValue;\n+\n+public class SuspendableRefContainerTests extends ESTestCase {\n+\n+ public void testBasicAcquire() throws InterruptedException {\n+ SuspendableRefContainer refContainer = new SuspendableRefContainer();\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+\n+ Releasable lock1 = randomLockingMethod(refContainer);\n+ assertThat(refContainer.activeRefs(), equalTo(1));\n+ Releasable lock2 = randomLockingMethod(refContainer);\n+ assertThat(refContainer.activeRefs(), equalTo(2));\n+ lock1.close();\n+ assertThat(refContainer.activeRefs(), equalTo(1));\n+ lock1.close(); // check idempotence\n+ assertThat(refContainer.activeRefs(), equalTo(1));\n+ lock2.close();\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ }\n+\n+ public void testAcquisitionBlockingBlocksNewAcquisitions() throws InterruptedException {\n+ SuspendableRefContainer refContainer = new SuspendableRefContainer();\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+\n+ try (Releasable block = refContainer.blockAcquisition()) {\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ assertThat(refContainer.tryAcquire(), nullValue());\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ }\n+ try (Releasable lock = refContainer.tryAcquire()) {\n+ assertThat(refContainer.activeRefs(), equalTo(1));\n+ }\n+\n+ // same with blocking acquire\n+ AtomicBoolean acquired = new AtomicBoolean();\n+ Thread t = new Thread(() -> {\n+ try (Releasable lock = randomBoolean() ? 
refContainer.acquire() : refContainer.acquireUninterruptibly()) {\n+ acquired.set(true);\n+ assertThat(refContainer.activeRefs(), equalTo(1));\n+ } catch (InterruptedException e) {\n+ fail(\"Interrupted\");\n+ }\n+ });\n+ try (Releasable block = refContainer.blockAcquisition()) {\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ t.start();\n+ // check that blocking acquire really blocks\n+ assertThat(acquired.get(), equalTo(false));\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ }\n+ t.join();\n+ assertThat(acquired.get(), equalTo(true));\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ }\n+\n+ public void testAcquisitionBlockingWaitsOnExistingAcquisitions() throws InterruptedException {\n+ SuspendableRefContainer refContainer = new SuspendableRefContainer();\n+\n+ AtomicBoolean acquired = new AtomicBoolean();\n+ Thread t = new Thread(() -> {\n+ try (Releasable block = refContainer.blockAcquisition()) {\n+ acquired.set(true);\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ }\n+ });\n+ try (Releasable lock = randomLockingMethod(refContainer)) {\n+ assertThat(refContainer.activeRefs(), equalTo(1));\n+ t.start();\n+ assertThat(acquired.get(), equalTo(false));\n+ assertThat(refContainer.activeRefs(), equalTo(1));\n+ }\n+ t.join();\n+ assertThat(acquired.get(), equalTo(true));\n+ assertThat(refContainer.activeRefs(), equalTo(0));\n+ }\n+\n+ private Releasable randomLockingMethod(SuspendableRefContainer refContainer) throws InterruptedException {\n+ switch (randomInt(2)) {\n+ case 0: return refContainer.tryAcquire();\n+ case 1: return refContainer.acquire();\n+ case 2: return refContainer.acquireUninterruptibly();\n+ }\n+ throw new IllegalArgumentException(\"randomLockingMethod inconsistent\");\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/util/concurrent/SuspendableRefContainerTests.java",
"status": "added"
},
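For reference, the contract exercised by these tests boils down to the following usage pattern, a short sketch using only the methods the tests call (`tryAcquire`, `acquireUninterruptibly`, `blockAcquisition`, `activeRefs`).

```java
SuspendableRefContainer refs = new SuspendableRefContainer();

// an operation holds a ref for its whole duration
try (Releasable op = refs.acquireUninterruptibly()) {
    assert refs.activeRefs() == 1;
}

// while a block is held, no new refs can be taken and tryAcquire() returns null
try (Releasable block = refs.blockAcquisition()) {
    assert refs.tryAcquire() == null;
    assert refs.activeRefs() == 0;
}
```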
{
"diff": "@@ -57,6 +57,8 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.Settings;\n@@ -108,6 +110,7 @@\n import java.util.List;\n import java.util.Set;\n import java.util.concurrent.BrokenBarrierException;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.CyclicBarrier;\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.atomic.AtomicBoolean;\n@@ -125,6 +128,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n \n /**\n * Simple unit-test IndexShard related operations.\n@@ -316,36 +320,41 @@ public void testShardStateMetaHashCodeEquals() {\n \n }\n \n- public void testDeleteIndexDecreasesCounter() throws InterruptedException, ExecutionException, IOException {\n+ public void testDeleteIndexPreventsNewOperations() throws InterruptedException, ExecutionException, IOException {\n assertAcked(client().admin().indices().prepareCreate(\"test\").setSettings(Settings.builder().put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0)).get());\n ensureGreen(\"test\");\n IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n IndexService indexService = indicesService.indexServiceSafe(\"test\");\n IndexShard indexShard = indexService.getShardOrNull(0);\n client().admin().indices().prepareDelete(\"test\").get();\n- assertThat(indexShard.getOperationsCount(), equalTo(0));\n+ assertThat(indexShard.getActiveOperationsCount(), equalTo(0));\n try {\n- indexShard.incrementOperationCounter();\n+ indexShard.acquirePrimaryOperationLock();\n+ fail(\"we should not be able to increment anymore\");\n+ } catch (IndexShardClosedException e) {\n+ // expected\n+ }\n+ try {\n+ indexShard.acquireReplicaOperationLock();\n fail(\"we should not be able to increment anymore\");\n } catch (IndexShardClosedException e) {\n // expected\n }\n }\n \n- public void testIndexShardCounter() throws InterruptedException, ExecutionException, IOException {\n+ public void testIndexOperationsCounter() throws InterruptedException, ExecutionException, IOException {\n assertAcked(client().admin().indices().prepareCreate(\"test\").setSettings(Settings.builder().put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0)).get());\n ensureGreen(\"test\");\n IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n IndexService indexService = indicesService.indexServiceSafe(\"test\");\n IndexShard indexShard = indexService.getShardOrNull(0);\n- assertEquals(0, indexShard.getOperationsCount());\n- indexShard.incrementOperationCounter();\n- assertEquals(1, indexShard.getOperationsCount());\n- indexShard.incrementOperationCounter();\n- assertEquals(2, indexShard.getOperationsCount());\n- indexShard.decrementOperationCounter();\n- indexShard.decrementOperationCounter();\n- assertEquals(0, indexShard.getOperationsCount());\n+ assertEquals(0, indexShard.getActiveOperationsCount());\n+ Releasable operation1 = 
indexShard.acquirePrimaryOperationLock();\n+ assertEquals(1, indexShard.getActiveOperationsCount());\n+ Releasable operation2 = indexShard.acquirePrimaryOperationLock();\n+ assertEquals(2, indexShard.getActiveOperationsCount());\n+ Releasables.close(operation1, operation2);\n+ assertEquals(0, indexShard.getActiveOperationsCount());\n }\n \n public void testMarkAsInactiveTriggersSyncedFlush() throws Exception {\n@@ -777,6 +786,89 @@ public void run() {\n assertEquals(total + 1, shard.flushStats().getTotal());\n }\n \n+ public void testLockingBeforeAndAfterRelocated() throws Exception {\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0)\n+ ).get());\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ final IndexShard shard = test.getShardOrNull(0);\n+ CountDownLatch latch = new CountDownLatch(1);\n+ Thread recoveryThread = new Thread(() -> {\n+ latch.countDown();\n+ shard.relocated(\"simulated recovery\");\n+ });\n+\n+ try (Releasable ignored = shard.acquirePrimaryOperationLock()) {\n+ // start finalization of recovery\n+ recoveryThread.start();\n+ latch.await();\n+ // recovery can only be finalized after we release the current primaryOperationLock\n+ assertThat(shard.state(), equalTo(IndexShardState.STARTED));\n+ }\n+ // recovery can be now finalized\n+ recoveryThread.join();\n+ assertThat(shard.state(), equalTo(IndexShardState.RELOCATED));\n+ try (Releasable ignored = shard.acquirePrimaryOperationLock()) {\n+ // lock can again be acquired\n+ assertThat(shard.state(), equalTo(IndexShardState.RELOCATED));\n+ }\n+ }\n+\n+ public void testStressRelocated() throws Exception {\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0)\n+ ).get());\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ final IndexShard shard = test.getShardOrNull(0);\n+ final int numThreads = randomIntBetween(2, 4);\n+ Thread[] indexThreads = new Thread[numThreads];\n+ CountDownLatch somePrimaryOperationLockAcquired = new CountDownLatch(1);\n+ CyclicBarrier barrier = new CyclicBarrier(numThreads + 1);\n+ for (int i = 0; i < indexThreads.length; i++) {\n+ indexThreads[i] = new Thread() {\n+ @Override\n+ public void run() {\n+ try (Releasable operationLock = shard.acquirePrimaryOperationLock()) {\n+ somePrimaryOperationLockAcquired.countDown();\n+ barrier.await();\n+ } catch (InterruptedException | BrokenBarrierException e) {\n+ throw new RuntimeException(e);\n+ }\n+ }\n+ };\n+ indexThreads[i].start();\n+ }\n+ AtomicBoolean relocated = new AtomicBoolean();\n+ final Thread recoveryThread = new Thread(() -> {\n+ shard.relocated(\"simulated recovery\");\n+ relocated.set(true);\n+ });\n+ // ensure we wait for at least one primary operation lock to be acquired\n+ somePrimaryOperationLockAcquired.await();\n+ // start recovery thread\n+ recoveryThread.start();\n+ assertThat(relocated.get(), equalTo(false));\n+ assertThat(shard.getActiveOperationsCount(), greaterThan(0));\n+ // ensure we only transition to RELOCATED state after pending operations completed\n+ assertThat(shard.state(), equalTo(IndexShardState.STARTED));\n+ // complete pending operations\n+ barrier.await();\n+ // 
complete recovery/relocation\n+ recoveryThread.join();\n+ // ensure relocated successfully once pending operations are done\n+ assertThat(relocated.get(), equalTo(true));\n+ assertThat(shard.state(), equalTo(IndexShardState.RELOCATED));\n+ assertThat(shard.getActiveOperationsCount(), equalTo(0));\n+\n+ for (Thread indexThread : indexThreads) {\n+ indexThread.join();\n+ }\n+ }\n+\n public void testRecoverFromStore() throws IOException {\n createIndex(\"test\");\n ensureGreen();\n@@ -857,6 +949,27 @@ public void testFailIfIndexNotPresentInRecoverFromStore() throws IOException {\n assertHitCount(client().prepareSearch().get(), 1);\n }\n \n+ public void testRecoveryFailsAfterMovingToRelocatedState() throws InterruptedException {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ final IndexShard shard = test.getShardOrNull(0);\n+ ShardRouting origRouting = shard.routingEntry();\n+ assertThat(shard.state(), equalTo(IndexShardState.STARTED));\n+ ShardRouting inRecoveryRouting = new ShardRouting(origRouting);\n+ ShardRoutingHelper.relocate(inRecoveryRouting, \"some_node\");\n+ shard.updateRoutingEntry(inRecoveryRouting, true);\n+ shard.relocated(\"simulate mark as relocated\");\n+ assertThat(shard.state(), equalTo(IndexShardState.RELOCATED));\n+ ShardRouting failedRecoveryRouting = new ShardRouting(origRouting);\n+ try {\n+ shard.updateRoutingEntry(failedRecoveryRouting, true);\n+ fail(\"Expected IndexShardRelocatedException\");\n+ } catch (IndexShardRelocatedException expected) {\n+ }\n+ }\n+\n public void testRestoreShard() throws IOException {\n createIndex(\"test\");\n createIndex(\"test_target\");",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -58,6 +58,7 @@\n import static org.elasticsearch.index.shard.IndexShardState.CREATED;\n import static org.elasticsearch.index.shard.IndexShardState.POST_RECOVERY;\n import static org.elasticsearch.index.shard.IndexShardState.RECOVERING;\n+import static org.elasticsearch.index.shard.IndexShardState.RELOCATED;\n import static org.elasticsearch.index.shard.IndexShardState.STARTED;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.CoreMatchers.equalTo;\n@@ -181,7 +182,7 @@ public void testIndexStateShardChanged() throws Throwable {\n ensureGreen();\n \n //the 3 relocated shards get closed on the first node\n- assertShardStatesMatch(stateChangeListenerNode1, 3, CLOSED);\n+ assertShardStatesMatch(stateChangeListenerNode1, 3, RELOCATED, CLOSED);\n //the 3 relocated shards get created on the second node\n assertShardStatesMatch(stateChangeListenerNode2, 3, CREATED, RECOVERING, POST_RECOVERY, STARTED);\n ",
"filename": "core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerIT.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -110,8 +111,7 @@ public void testSyncFailsIfOperationIsInFlight() throws InterruptedException {\n \n SyncedFlushService flushService = getInstanceFromNode(SyncedFlushService.class);\n final ShardId shardId = shard.shardId();\n- shard.incrementOperationCounter();\n- try {\n+ try (Releasable operationLock = shard.acquirePrimaryOperationLock()) {\n SyncedFlushUtil.LatchedListener<ShardsSyncedFlushResult> listener = new SyncedFlushUtil.LatchedListener<>();\n flushService.attemptSyncedFlush(shardId, listener);\n listener.latch.await();\n@@ -121,8 +121,6 @@ public void testSyncFailsIfOperationIsInFlight() throws InterruptedException {\n assertEquals(0, syncedFlushResult.successfulShards());\n assertNotEquals(0, syncedFlushResult.totalShards());\n assertEquals(\"[1] ongoing operations on primary\", syncedFlushResult.failureReason());\n- } finally {\n- shard.decrementOperationCounter();\n }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/indices/flush/SyncedFlushSingleNodeTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,89 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.indices.recovery;\n+\n+import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n+import org.elasticsearch.action.delete.DeleteResponse;\n+import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n+import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n+\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+@TestLogging(\"_root:DEBUG\")\n+@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST)\n+public class IndexPrimaryRelocationIT extends ESIntegTestCase {\n+\n+ private static final int RELOCATION_COUNT = 25;\n+\n+ public void testPrimaryRelocationWhileIndexing() throws Exception {\n+ internalCluster().ensureAtLeastNumDataNodes(randomIntBetween(2, 3));\n+ client().admin().indices().prepareCreate(\"test\")\n+ .setSettings(Settings.settingsBuilder().put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0))\n+ .addMapping(\"type\", \"field\", \"type=string\")\n+ .get();\n+ ensureGreen(\"test\");\n+\n+ final AtomicBoolean finished = new AtomicBoolean(false);\n+ Thread indexingThread = new Thread() {\n+ @Override\n+ public void run() {\n+ while (finished.get() == false) {\n+ IndexResponse indexResponse = client().prepareIndex(\"test\", \"type\", \"id\").setSource(\"field\", \"value\").get();\n+ assertThat(\"deleted document was found\", indexResponse.isCreated(), equalTo(true));\n+ DeleteResponse deleteResponse = client().prepareDelete(\"test\", \"type\", \"id\").get();\n+ assertThat(\"indexed document was not found\", deleteResponse.isFound(), equalTo(true));\n+ }\n+ }\n+ };\n+ indexingThread.start();\n+\n+ ClusterState initialState = client().admin().cluster().prepareState().get().getState();\n+ DiscoveryNode[] dataNodes = initialState.getNodes().dataNodes().values().toArray(DiscoveryNode.class);\n+ DiscoveryNode relocationSource = initialState.getNodes().dataNodes().get(initialState.getRoutingTable().shardRoutingTable(\"test\", 0).primaryShard().currentNodeId());\n+ for (int i = 0; i < RELOCATION_COUNT; i++) {\n+ DiscoveryNode relocationTarget = randomFrom(dataNodes);\n+ while (relocationTarget.equals(relocationSource)) {\n+ relocationTarget = randomFrom(dataNodes);\n+ }\n+ logger.info(\"--> [iteration {}] relocating from {} to {} \", i, relocationSource.getName(), relocationTarget.getName());\n+ 
client().admin().cluster().prepareReroute()\n+ .add(new MoveAllocationCommand(\"test\", 0, relocationSource.getId(), relocationTarget.getId()))\n+ .execute().actionGet();\n+ ClusterHealthResponse clusterHealthResponse = client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForRelocatingShards(0).execute().actionGet();\n+ assertThat(clusterHealthResponse.isTimedOut(), equalTo(false));\n+ logger.info(\"--> [iteration {}] relocation complete\", i);\n+ relocationSource = relocationTarget;\n+ if (indexingThread.isAlive() == false) { // indexing process aborted early, no need for more relocations as test has already failed\n+ break;\n+ }\n+\n+ }\n+ finished.set(true);\n+ indexingThread.join();\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/IndexPrimaryRelocationIT.java",
"status": "added"
},
{
"diff": "@@ -286,7 +286,7 @@ public void run() {\n assertRecoveryState(nodeARecoveryStates.get(0), 0, Type.STORE, Stage.DONE, nodeA, nodeA, false);\n validateIndexRecoveryState(nodeARecoveryStates.get(0).getIndex());\n \n- assertOnGoingRecoveryState(nodeBRecoveryStates.get(0), 0, Type.RELOCATION, nodeA, nodeB, false);\n+ assertOnGoingRecoveryState(nodeBRecoveryStates.get(0), 0, Type.PRIMARY_RELOCATION, nodeA, nodeB, false);\n validateIndexRecoveryState(nodeBRecoveryStates.get(0).getIndex());\n \n logger.info(\"--> request node recovery stats\");\n@@ -339,7 +339,7 @@ public void run() {\n recoveryStates = response.shardRecoveryStates().get(INDEX_NAME);\n assertThat(recoveryStates.size(), equalTo(1));\n \n- assertRecoveryState(recoveryStates.get(0), 0, Type.RELOCATION, Stage.DONE, nodeA, nodeB, false);\n+ assertRecoveryState(recoveryStates.get(0), 0, Type.PRIMARY_RELOCATION, Stage.DONE, nodeA, nodeB, false);\n validateIndexRecoveryState(recoveryStates.get(0).getIndex());\n \n statsResponse = client().admin().cluster().prepareNodesStats().clear().setIndices(new CommonStatsFlags(CommonStatsFlags.Flag.Recovery)).get();\n@@ -400,7 +400,7 @@ public void run() {\n assertRecoveryState(nodeARecoveryStates.get(0), 0, Type.REPLICA, Stage.DONE, nodeB, nodeA, false);\n validateIndexRecoveryState(nodeARecoveryStates.get(0).getIndex());\n \n- assertRecoveryState(nodeBRecoveryStates.get(0), 0, Type.RELOCATION, Stage.DONE, nodeA, nodeB, false);\n+ assertRecoveryState(nodeBRecoveryStates.get(0), 0, Type.PRIMARY_RELOCATION, Stage.DONE, nodeA, nodeB, false);\n validateIndexRecoveryState(nodeBRecoveryStates.get(0).getIndex());\n \n // relocations of replicas are marked as REPLICA and the source node is the node holding the primary (B)\n@@ -421,7 +421,7 @@ public void run() {\n nodeCRecoveryStates = findRecoveriesForTargetNode(nodeC, recoveryStates);\n assertThat(nodeCRecoveryStates.size(), equalTo(1));\n \n- assertRecoveryState(nodeBRecoveryStates.get(0), 0, Type.RELOCATION, Stage.DONE, nodeA, nodeB, false);\n+ assertRecoveryState(nodeBRecoveryStates.get(0), 0, Type.PRIMARY_RELOCATION, Stage.DONE, nodeA, nodeB, false);\n validateIndexRecoveryState(nodeBRecoveryStates.get(0).getIndex());\n \n // relocations of replicas are marked as REPLICA and the source node is the node holding the primary (B)\n@@ -503,16 +503,16 @@ private IndicesStatsResponse createAndPopulateIndex(String name, int nodeCount,\n final IndexRequestBuilder[] docs = new IndexRequestBuilder[numDocs];\n \n for (int i = 0; i < numDocs; i++) {\n- docs[i] = client().prepareIndex(INDEX_NAME, INDEX_TYPE).\n+ docs[i] = client().prepareIndex(name, INDEX_TYPE).\n setSource(\"foo-int\", randomInt(),\n \"foo-string\", randomAsciiOfLength(32),\n \"foo-float\", randomFloat());\n }\n \n indexRandom(true, docs);\n flush();\n- assertThat(client().prepareSearch(INDEX_NAME).setSize(0).get().getHits().totalHits(), equalTo((long) numDocs));\n- return client().admin().indices().prepareStats(INDEX_NAME).execute().actionGet();\n+ assertThat(client().prepareSearch(name).setSize(0).get().getHits().totalHits(), equalTo((long) numDocs));\n+ return client().admin().indices().prepareStats(name).execute().actionGet();\n }\n \n private void validateIndexRecoveryState(RecoveryState.Index indexState) {",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/IndexRecoveryIT.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,7 @@ public void testSendFiles() throws Throwable {\n StartRecoveryRequest request = new StartRecoveryRequest(shardId,\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n- randomBoolean(), null, RecoveryState.Type.STORE, randomLong());\n+ null, RecoveryState.Type.STORE, randomLong());\n Store store = newStore(createTempDir());\n RecoverySourceHandler handler = new RecoverySourceHandler(null, request, recoverySettings, null, logger);\n Directory dir = store.directory();\n@@ -118,7 +118,7 @@ public void testHandleCorruptedIndexOnSendSendFiles() throws Throwable {\n StartRecoveryRequest request = new StartRecoveryRequest(shardId,\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n- randomBoolean(), null, RecoveryState.Type.STORE, randomLong());\n+ null, RecoveryState.Type.STORE, randomLong());\n Path tempDir = createTempDir();\n Store store = newStore(tempDir, false);\n AtomicBoolean failedEngine = new AtomicBoolean(false);\n@@ -181,7 +181,7 @@ public void testHandleExceptinoOnSendSendFiles() throws Throwable {\n StartRecoveryRequest request = new StartRecoveryRequest(shardId,\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n- randomBoolean(), null, RecoveryState.Type.STORE, randomLong());\n+ null, RecoveryState.Type.STORE, randomLong());\n Path tempDir = createTempDir();\n Store store = newStore(tempDir, false);\n AtomicBoolean failedEngine = new AtomicBoolean(false);",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java",
"status": "modified"
},
{
"diff": "@@ -43,11 +43,9 @@ public void testSerialization() throws Exception {\n new ShardId(\"test\", \"_na_\", 0),\n new DiscoveryNode(\"a\", new LocalTransportAddress(\"1\"), targetNodeVersion),\n new DiscoveryNode(\"b\", new LocalTransportAddress(\"1\"), targetNodeVersion),\n- true,\n Store.MetadataSnapshot.EMPTY,\n- RecoveryState.Type.RELOCATION,\n+ RecoveryState.Type.PRIMARY_RELOCATION,\n 1L\n-\n );\n ByteArrayOutputStream outBuffer = new ByteArrayOutputStream();\n OutputStreamStreamOutput out = new OutputStreamStreamOutput(outBuffer);\n@@ -63,7 +61,6 @@ public void testSerialization() throws Exception {\n assertThat(outRequest.shardId(), equalTo(inRequest.shardId()));\n assertThat(outRequest.sourceNode(), equalTo(inRequest.sourceNode()));\n assertThat(outRequest.targetNode(), equalTo(inRequest.targetNode()));\n- assertThat(outRequest.markAsRelocated(), equalTo(inRequest.markAsRelocated()));\n assertThat(outRequest.metadataSnapshot().asMap(), equalTo(inRequest.metadataSnapshot().asMap()));\n assertThat(outRequest.recoveryId(), equalTo(inRequest.recoveryId()));\n assertThat(outRequest.recoveryType(), equalTo(inRequest.recoveryType()));",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/StartRecoveryRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -151,15 +151,15 @@ public void testNoRebalanceOnRollingRestart() throws Exception {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n RecoveryResponse recoveryResponse = client().admin().indices().prepareRecoveries(\"test\").get();\n for (RecoveryState recoveryState : recoveryResponse.shardRecoveryStates().get(\"test\")) {\n- assertTrue(\"relocated from: \" + recoveryState.getSourceNode() + \" to: \" + recoveryState.getTargetNode() + \"\\n\" + state.prettyPrint(), recoveryState.getType() != RecoveryState.Type.RELOCATION);\n+ assertTrue(\"relocated from: \" + recoveryState.getSourceNode() + \" to: \" + recoveryState.getTargetNode() + \"\\n\" + state.prettyPrint(), recoveryState.getType() != RecoveryState.Type.PRIMARY_RELOCATION);\n }\n internalCluster().restartRandomDataNode();\n ensureGreen();\n ClusterState afterState = client().admin().cluster().prepareState().get().getState();\n \n recoveryResponse = client().admin().indices().prepareRecoveries(\"test\").get();\n for (RecoveryState recoveryState : recoveryResponse.shardRecoveryStates().get(\"test\")) {\n- assertTrue(\"relocated from: \" + recoveryState.getSourceNode() + \" to: \" + recoveryState.getTargetNode()+ \"-- \\nbefore: \\n\" + state.prettyPrint() + \"\\nafter: \\n\" + afterState.prettyPrint(), recoveryState.getType() != RecoveryState.Type.RELOCATION);\n+ assertTrue(\"relocated from: \" + recoveryState.getSourceNode() + \" to: \" + recoveryState.getTargetNode()+ \"-- \\nbefore: \\n\" + state.prettyPrint() + \"\\nafter: \\n\" + afterState.prettyPrint(), recoveryState.getType() != RecoveryState.Type.PRIMARY_RELOCATION);\n }\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/recovery/FullRollingRestartIT.java",
"status": "modified"
},
{
"diff": "@@ -1036,7 +1036,7 @@ private void assertShardIndexCounter() {\n IndicesService indexServices = getInstance(IndicesService.class, nodeAndClient.name);\n for (IndexService indexService : indexServices) {\n for (IndexShard indexShard : indexService) {\n- assertThat(\"index shard counter on shard \" + indexShard.shardId() + \" on node \" + nodeAndClient.name + \" not 0\", indexShard.getOperationsCount(), equalTo(0));\n+ assertThat(\"index shard counter on shard \" + indexShard.shardId() + \" on node \" + nodeAndClient.name + \" not 0\", indexShard.getActiveOperationsCount(), equalTo(0));\n }\n }\n }",
"filename": "test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
}
]
}
|
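The tests in the record above assert that a primary shard only transitions to RELOCATED once every in-flight operation has released its `acquirePrimaryOperationLock()` handle, and that new locks can still be acquired afterwards. As a rough illustration of that contract only — the class below is hypothetical and is not Elasticsearch's `IndexShard` implementation — here is a minimal Java sketch that models the hand-off with a read/write lock: operations take the read lock, relocation takes the write lock and therefore waits for them to drain.

```java
// Simplified, hypothetical model of the behavior asserted by the tests above:
// a primary hands off to RELOCATED only after every in-flight operation lock
// has been released. This is NOT Elasticsearch's IndexShard implementation,
// just a sketch of the locking contract using a ReentrantReadWriteLock.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RelocatableShardModel {
    enum State { STARTED, RELOCATED }

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private volatile State state = State.STARTED;

    /** Acquire an "operation lock"; the caller must close the returned handle. */
    public AutoCloseable acquirePrimaryOperationLock() {
        lock.readLock().lock();                 // many concurrent operations allowed
        return () -> lock.readLock().unlock();  // Releasable-style handle
    }

    /** Blocks until all operation locks are released, then flips to RELOCATED. */
    public void relocated() {
        lock.writeLock().lock();                // waits for all readers to drain
        try {
            state = State.RELOCATED;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public State state() {
        return state;
    }

    public static void main(String[] args) throws Exception {
        RelocatableShardModel shard = new RelocatableShardModel();
        try (AutoCloseable op = shard.acquirePrimaryOperationLock()) {
            // While an operation is in flight the shard must still be STARTED.
            System.out.println(shard.state()); // STARTED
        }
        shard.relocated();                      // only completes once locks are released
        System.out.println(shard.state());      // RELOCATED
    }
}
```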
{
"body": "Hi team,\n\nI get an error using a `multi_match` query with `cross_fields` type and a numeric query.\nI'm using v2.1.1 on OSX, installed via Homebrew.\n\n**Index basic data**\n\n```\ncurl -XPUT http://localhost:9200/blog/post/1?pretty=1 -d '{\"foo\":123, \"bar\":\"xyzzy\", \"baz\":456}'\n```\n\n**Use a `multi_match` query with `cross_fields` type and a numeric query**\n\n```\ncurl -XGET http://localhost:9200/blog/post/_search?pretty=1 -d '{\"query\": {\"multi_match\": {\"type\": \"cross_fields\", \"query\": \"100\", \"lenient\": true, \"fields\": [\"foo\", \"bar\", \"baz\"]}}}'\n```\n\n**Error**\n\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Illegal shift value, must be 0..63; got shift=2147483647\"\n } ],\n \"type\" : \"search_phase_execution_exception\",\n \"reason\" : \"all shards failed\",\n \"phase\" : \"query\",\n \"grouped\" : true,\n \"failed_shards\" : [ {\n \"shard\" : 0,\n \"index\" : \"blog\",\n \"node\" : \"0TxGVVWsSu2qX63hZdOv2w\",\n \"reason\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Illegal shift value, must be 0..63; got shift=2147483647\"\n }\n } ]\n },\n \"status\" : 400\n}\n```\n\n**Note that the error does not appear if I specify only 1 numeric field in search.**\n\n**Stack trace**\n\n```\nCaused by: java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647\n at org.apache.lucene.util.NumericUtils.longToPrefixCodedBytes(NumericUtils.java:147)\n at org.apache.lucene.util.NumericUtils.longToPrefixCoded(NumericUtils.java:121)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.getBytesRef(NumericTokenStream.java:163)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.clone(NumericTokenStream.java:217)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.clone(NumericTokenStream.java:148)\n at org.apache.lucene.util.AttributeSource$State.clone(AttributeSource.java:54)\n at org.apache.lucene.util.AttributeSource.captureState(AttributeSource.java:281)\n at org.apache.lucene.analysis.CachingTokenFilter.fillCache(CachingTokenFilter.java:96)\n at org.apache.lucene.analysis.CachingTokenFilter.incrementToken(CachingTokenFilter.java:70)\n at org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:223)\n at org.apache.lucene.util.QueryBuilder.createBooleanQuery(QueryBuilder.java:87)\n at org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:178)\n at org.elasticsearch.index.search.MultiMatchQuery.parseAndApply(MultiMatchQuery.java:55)\n at org.elasticsearch.index.search.MultiMatchQuery.access$000(MultiMatchQuery.java:42)\n at org.elasticsearch.index.search.MultiMatchQuery$QueryBuilder.parseGroup(MultiMatchQuery.java:118)\n at org.elasticsearch.index.search.MultiMatchQuery$CrossFieldsQueryBuilder.buildGroupedQueries(MultiMatchQuery.java:198)\n at org.elasticsearch.index.search.MultiMatchQuery.parse(MultiMatchQuery.java:86)\n at org.elasticsearch.index.query.MultiMatchQueryParser.parse(MultiMatchQueryParser.java:163)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:257)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:303)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:206)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:201)\n at 
org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:831)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:651)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:617)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n```\n",
"comments": [
{
"body": "Fun times. Reproduces in master. I'll work on fixing it there and backporting it after that.\n",
"created_at": "2016-01-08T18:33:44Z"
},
{
"body": "FYI, I have reported a similar bug with the same symptoms (1 numeric field included is ok but 2+ numeric fields give `number_format_exception`). Don't know if it could be related.\n\nhttps://github.com/elastic/elasticsearch/issues/3975#issuecomment-167577538\n",
"created_at": "2016-01-08T20:01:37Z"
},
{
"body": "Good timing! I've figure out what is up and I've started on a solution. I've only got about two more hours left to work on it so I might not have anything before the weekend but, yeah, I'll have something soon.\n\nThe `lenient: true` issue is similar so I'll work on it while I'm in there.\n",
"created_at": "2016-01-08T20:22:56Z"
},
{
"body": "Had to revert the change. I'll get it in there though.\n",
"created_at": "2016-01-11T17:27:25Z"
}
],
"number": 15860,
"title": "multi_match query gives java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647"
}
|
{
"body": "Closes #15860\n",
"number": 15894,
"review_comments": [
{
"body": "it looks a bit inconsistent to catch Exception here and RuntimeException above?\n",
"created_at": "2016-01-13T08:30:04Z"
}
],
"title": "Fix blended terms take 2"
}
|
{
"commits": [
{
"message": "Fix blended terms for non-strings take 2\n\nIt had some funky errors, like lenient:true not working and queries with\ntwo integer fields blowing up if there was no analyzer defined on the\nquery. This throws a bunch more tests at it and rejiggers how non-strings\nare handled so they don't wander off into scary QueryBuilder-land unless\nthey have a nice strong analyzer to protect them."
}
],
"files": [
{
"diff": "@@ -389,7 +389,12 @@ public boolean useTermQueryWithQueryString() {\n return false;\n }\n \n- /** Creates a term associated with the field of this mapper for the given value */\n+ /**\n+ * Creates a term associated with the field of this mapper for the given\n+ * value. Its important to use termQuery when building term queries because\n+ * things like ParentFieldMapper override it to make more interesting\n+ * queries.\n+ */\n protected Term createTerm(Object value) {\n return new Term(name(), indexedValueForSearch(value));\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java",
"status": "modified"
},
{
"diff": "@@ -212,10 +212,6 @@ public void setZeroTermsQuery(ZeroTermsQuery zeroTermsQuery) {\n this.zeroTermsQuery = zeroTermsQuery;\n }\n \n- protected boolean forceAnalyzeQueryString() {\n- return false;\n- }\n-\n protected Analyzer getAnalyzer(MappedFieldType fieldType) {\n if (this.analyzer == null) {\n if (fieldType != null) {\n@@ -240,17 +236,19 @@ public Query parse(Type type, String fieldName, Object value) throws IOException\n field = fieldName;\n }\n \n- if (fieldType != null && fieldType.useTermQueryWithQueryString() && !forceAnalyzeQueryString()) {\n- try {\n- return fieldType.termQuery(value, context);\n- } catch (RuntimeException e) {\n- if (lenient) {\n- return null;\n- }\n- throw e;\n- }\n-\n+ /*\n+ * If the user forced an analyzer we really don't care if they are\n+ * searching a type that wants term queries to be used with query string\n+ * because the QueryBuilder will take care of it. If they haven't forced\n+ * an analyzer then types like NumberFieldType that want terms with\n+ * query string will blow up because their analyzer isn't capable of\n+ * passing through QueryBuilder.\n+ */\n+ boolean noForcedAnalyzer = this.analyzer == null;\n+ if (fieldType != null && fieldType.useTermQueryWithQueryString() && noForcedAnalyzer) {\n+ return termQuery(fieldType, value);\n }\n+\n Analyzer analyzer = getAnalyzer(fieldType);\n assert analyzer != null;\n MatchQueryBuilder builder = new MatchQueryBuilder(analyzer, fieldType);\n@@ -282,27 +280,47 @@ public Query parse(Type type, String fieldName, Object value) throws IOException\n }\n }\n \n+ /**\n+ * Creates a TermQuery-like-query for MappedFieldTypes that don't support\n+ * QueryBuilder which is very string-ish. Just delegates to the\n+ * MappedFieldType for MatchQuery but gets more complex for blended queries.\n+ */\n+ protected Query termQuery(MappedFieldType fieldType, Object value) {\n+ return termQuery(fieldType, value, lenient);\n+ }\n+\n+ protected final Query termQuery(MappedFieldType fieldType, Object value, boolean lenient) {\n+ try {\n+ return fieldType.termQuery(value, context);\n+ } catch (RuntimeException e) {\n+ if (lenient) {\n+ return null;\n+ }\n+ throw e;\n+ }\n+ }\n+\n protected Query zeroTermsQuery() {\n return zeroTermsQuery == DEFAULT_ZERO_TERMS_QUERY ? 
Queries.newMatchNoDocsQuery() : Queries.newMatchAllQuery();\n }\n \n private class MatchQueryBuilder extends QueryBuilder {\n \n private final MappedFieldType mapper;\n+\n /**\n * Creates a new QueryBuilder using the given analyzer.\n */\n public MatchQueryBuilder(Analyzer analyzer, @Nullable MappedFieldType mapper) {\n super(analyzer);\n this.mapper = mapper;\n- }\n+ }\n \n @Override\n protected Query newTermQuery(Term term) {\n return blendTermQuery(term, mapper);\n }\n \n-\n public Query createPhrasePrefixQuery(String field, String queryText, int phraseSlop, int maxExpansions) {\n final Query query = createFieldQuery(getAnalyzer(), Occur.MUST, field, queryText, true, phraseSlop);\n final MultiPhrasePrefixQuery prefixQuery = new MultiPhrasePrefixQuery();\n@@ -352,21 +370,42 @@ public Query createCommonTermsQuery(String field, String queryText, Occur highFr\n protected Query blendTermQuery(Term term, MappedFieldType fieldType) {\n if (fuzziness != null) {\n if (fieldType != null) {\n- Query query = fieldType.fuzzyQuery(term.text(), fuzziness, fuzzyPrefixLength, maxExpansions, transpositions);\n- if (query instanceof FuzzyQuery) {\n- QueryParsers.setRewriteMethod((FuzzyQuery) query, fuzzyRewriteMethod);\n+ try {\n+ Query query = fieldType.fuzzyQuery(term.text(), fuzziness, fuzzyPrefixLength, maxExpansions, transpositions);\n+ if (query instanceof FuzzyQuery) {\n+ QueryParsers.setRewriteMethod((FuzzyQuery) query, fuzzyRewriteMethod);\n+ }\n+ return query;\n+ } catch (RuntimeException e) {\n+ return new TermQuery(term);\n+ // See long comment below about why we're lenient here.\n }\n- return query;\n }\n int edits = fuzziness.asDistance(term.text());\n FuzzyQuery query = new FuzzyQuery(term, edits, fuzzyPrefixLength, maxExpansions, transpositions);\n QueryParsers.setRewriteMethod(query, fuzzyRewriteMethod);\n return query;\n }\n if (fieldType != null) {\n- Query termQuery = fieldType.queryStringTermQuery(term);\n- if (termQuery != null) {\n- return termQuery;\n+ /*\n+ * Its a bit weird to default to lenient here but its the backwards\n+ * compatible. It makes some sense when you think about what we are\n+ * doing here: at this point the user has forced an analyzer and\n+ * passed some string to the match query. We cut it up using the\n+ * analyzer and then tried to cram whatever we get into the field.\n+ * lenient=true here means that we try the terms in the query and on\n+ * the off chance that they are actually valid terms then we\n+ * actually try them. lenient=false would mean that we blow up the\n+ * query if they aren't valid terms. \"valid\" in this context means\n+ * \"parses properly to something of the type being queried.\" So \"1\"\n+ * is a valid number, etc.\n+ *\n+ * We use the text form here because we we've received the term from\n+ * an analyzer that cut some string into text.\n+ */\n+ Query query = termQuery(fieldType, term.bytes(), true);\n+ if (query != null) {\n+ return query;\n }\n }\n return new TermQuery(term);",
"filename": "core/src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -27,7 +27,6 @@\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.DisjunctionMaxQuery;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.index.mapper.MappedFieldType;\n@@ -104,7 +103,7 @@ public QueryBuilder(boolean groupDismax, float tieBreaker) {\n this.tieBreaker = tieBreaker;\n }\n \n- public List<Query> buildGroupedQueries(MultiMatchQueryBuilder.Type type, Map<String, Float> fieldNames, Object value, String minimumShouldMatch) throws IOException{\n+ public List<Query> buildGroupedQueries(MultiMatchQueryBuilder.Type type, Map<String, Float> fieldNames, Object value, String minimumShouldMatch) throws IOException{\n List<Query> queries = new ArrayList<>();\n for (String fieldName : fieldNames.keySet()) {\n Float boostValue = fieldNames.get(fieldName);\n@@ -146,8 +145,8 @@ public Query blendTerm(Term term, MappedFieldType fieldType) {\n return MultiMatchQuery.super.blendTermQuery(term, fieldType);\n }\n \n- public boolean forceAnalyzeQueryString() {\n- return false;\n+ public Query termQuery(MappedFieldType fieldType, Object value) {\n+ return MultiMatchQuery.this.termQuery(fieldType, value, lenient);\n }\n }\n \n@@ -196,8 +195,13 @@ public List<Query> buildGroupedQueries(MultiMatchQueryBuilder.Type type, Map<Str\n } else {\n blendedFields = null;\n }\n- final FieldAndFieldType fieldAndFieldType = group.get(0);\n- Query q = parseGroup(type.matchQueryType(), fieldAndFieldType.field, 1f, value, minimumShouldMatch);\n+ /*\n+ * We have to pick some field to pass through the superclass so\n+ * we just pick the first field. It shouldn't matter because\n+ * fields are already grouped by their analyzers/types.\n+ */\n+ String representativeField = group.get(0).field;\n+ Query q = parseGroup(type.matchQueryType(), representativeField, 1f, value, minimumShouldMatch);\n if (q != null) {\n queries.add(q);\n }\n@@ -206,11 +210,6 @@ public List<Query> buildGroupedQueries(MultiMatchQueryBuilder.Type type, Map<Str\n return queries.isEmpty() ? null : queries;\n }\n \n- @Override\n- public boolean forceAnalyzeQueryString() {\n- return blendedFields != null;\n- }\n-\n @Override\n public Query blendTerm(Term term, MappedFieldType fieldType) {\n if (blendedFields == null) {\n@@ -231,6 +230,16 @@ public Query blendTerm(Term term, MappedFieldType fieldType) {\n }\n return BlendedTermQuery.dismaxBlendedQuery(terms, blendedBoost, tieBreaker);\n }\n+\n+ @Override\n+ public Query termQuery(MappedFieldType fieldType, Object value) {\n+ /*\n+ * Use the string value of the term because we're reusing the\n+ * portion of the query is usually after the analyzer has run on\n+ * each term. 
We just skip that analyzer phase.\n+ */\n+ return blendTerm(new Term(fieldType.name(), value.toString()), fieldType);\n+ }\n }\n \n @Override\n@@ -241,6 +250,15 @@ protected Query blendTermQuery(Term term, MappedFieldType fieldType) {\n return queryBuilder.blendTerm(term, fieldType);\n }\n \n+ @Override\n+ protected Query termQuery(MappedFieldType fieldType, Object value) {\n+ if (queryBuilder == null) {\n+ // Can be null when the MultiMatchQuery collapses into a MatchQuery\n+ return super.termQuery(fieldType, value);\n+ }\n+ return queryBuilder.termQuery(fieldType, value);\n+ }\n+\n private static final class FieldAndFieldType {\n final String field;\n final MappedFieldType fieldType;\n@@ -255,18 +273,17 @@ private FieldAndFieldType(String field, MappedFieldType fieldType, float boost)\n \n public Term newTerm(String value) {\n try {\n- final BytesRef bytesRef = fieldType.indexedValueForSearch(value);\n- return new Term(field, bytesRef);\n- } catch (Exception ex) {\n+ /*\n+ * Note that this ignore any overrides the fieldType might do\n+ * for termQuery, meaning things like _parent won't work here.\n+ */\n+ return new Term(fieldType.name(), fieldType.indexedValueForSearch(value));\n+ } catch (RuntimeException ex) {\n // we can't parse it just use the incoming value -- it will\n // just have a DF of 0 at the end of the day and will be ignored\n+ // Note that this is like lenient = true allways\n }\n return new Term(field, value);\n }\n }\n-\n- @Override\n- protected boolean forceAnalyzeQueryString() {\n- return this.queryBuilder == null ? super.forceAnalyzeQueryString() : this.queryBuilder.forceAnalyzeQueryString();\n- }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -24,14 +24,18 @@\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.FuzzyQuery;\n import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.NumericRangeQuery;\n import org.apache.lucene.search.PhraseQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.common.lucene.search.Queries;\n+import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.search.MatchQuery;\n import org.elasticsearch.index.search.MatchQuery.ZeroTermsQuery;\n+import org.hamcrest.Matcher;\n+import org.joda.time.format.ISODateTimeFormat;\n \n import java.io.IOException;\n import java.util.Locale;\n@@ -120,15 +124,15 @@ protected void doAssertLuceneQuery(MatchQueryBuilder queryBuilder, Query query,\n switch (queryBuilder.type()) {\n case BOOLEAN:\n assertThat(query, either(instanceOf(BooleanQuery.class)).or(instanceOf(ExtendedCommonTermsQuery.class))\n- .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)));\n+ .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)).or(instanceOf(NumericRangeQuery.class)));\n break;\n case PHRASE:\n assertThat(query, either(instanceOf(BooleanQuery.class)).or(instanceOf(PhraseQuery.class))\n- .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)));\n+ .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)).or(instanceOf(NumericRangeQuery.class)));\n break;\n case PHRASE_PREFIX:\n assertThat(query, either(instanceOf(BooleanQuery.class)).or(instanceOf(MultiPhrasePrefixQuery.class))\n- .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)));\n+ .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)).or(instanceOf(NumericRangeQuery.class)));\n break;\n }\n \n@@ -173,10 +177,45 @@ protected void doAssertLuceneQuery(MatchQueryBuilder queryBuilder, Query query,\n // compare lowercased terms here\n String originalTermLc = queryBuilder.value().toString().toLowerCase(Locale.ROOT);\n String actualTermLc = fuzzyQuery.getTerm().text().toLowerCase(Locale.ROOT);\n- assertThat(actualTermLc, equalTo(originalTermLc));\n+ Matcher<String> termLcMatcher = equalTo(originalTermLc);\n+ if (\"false\".equals(originalTermLc) || \"true\".equals(originalTermLc)) {\n+ // Booleans become t/f when querying a boolean field\n+ termLcMatcher = either(termLcMatcher).or(equalTo(originalTermLc.substring(0, 1)));\n+ }\n+ assertThat(actualTermLc, termLcMatcher);\n assertThat(queryBuilder.prefixLength(), equalTo(fuzzyQuery.getPrefixLength()));\n assertThat(queryBuilder.fuzzyTranspositions(), equalTo(fuzzyQuery.getTranspositions()));\n }\n+\n+ if (query instanceof NumericRangeQuery) {\n+ // These are fuzzy numeric queries\n+ assertTrue(queryBuilder.fuzziness() != null);\n+ @SuppressWarnings(\"unchecked\")\n+ NumericRangeQuery<Number> numericRangeQuery = (NumericRangeQuery<Number>) query;\n+ assertTrue(numericRangeQuery.includesMin());\n+ assertTrue(numericRangeQuery.includesMax());\n+\n+ double value;\n+ try {\n+ value = Double.parseDouble(queryBuilder.value().toString());\n+ } catch (NumberFormatException e) {\n+ // Maybe its a date\n+ value = ISODateTimeFormat.dateTimeParser().parseMillis(queryBuilder.value().toString());\n+ }\n+ double width;\n+ if (queryBuilder.fuzziness().equals(Fuzziness.AUTO)) {\n+ width = 1;\n+ } else {\n+ try {\n+ width = queryBuilder.fuzziness().asDouble();\n+ } 
catch (NumberFormatException e) {\n+ // Maybe a time value?\n+ width = queryBuilder.fuzziness().asTimeValue().getMillis();\n+ }\n+ }\n+ assertEquals(value - width, numericRangeQuery.getMin().doubleValue(), width * .1);\n+ assertEquals(value + width, numericRangeQuery.getMax().doubleValue(), width * .1);\n+ }\n }\n \n public void testIllegalValues() {",
"filename": "core/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.apache.lucene.search.FuzzyQuery;\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.MatchNoDocsQuery;\n+import org.apache.lucene.search.NumericRangeQuery;\n import org.apache.lucene.search.PhraseQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n@@ -132,7 +133,8 @@ protected void doAssertLuceneQuery(MultiMatchQueryBuilder queryBuilder, Query qu\n .or(instanceOf(BooleanQuery.class)).or(instanceOf(DisjunctionMaxQuery.class))\n .or(instanceOf(FuzzyQuery.class)).or(instanceOf(MultiPhrasePrefixQuery.class))\n .or(instanceOf(MatchAllDocsQuery.class)).or(instanceOf(ExtendedCommonTermsQuery.class))\n- .or(instanceOf(MatchNoDocsQuery.class)).or(instanceOf(PhraseQuery.class)));\n+ .or(instanceOf(MatchNoDocsQuery.class)).or(instanceOf(PhraseQuery.class))\n+ .or(instanceOf(NumericRangeQuery.class)));\n }\n \n public void testIllegaArguments() {",
"filename": "core/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.query;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -230,6 +231,12 @@ public void testSingleField() throws NoSuchFieldException, IllegalAccessExceptio\n .setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\"))).get();\n assertNoFailures(searchResponse);\n assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\", \"int-field\")).analyzer(\"category\")).get();\n+ assertNoFailures(searchResponse);\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n String[] fields = {\"full_name\", \"first_name\", \"last_name\", \"last_name_phrase\", \"first_name_phrase\", \"category_phrase\", \"category\"};\n \n String[] query = {\"marvel\",\"hero\", \"captain\", \"america\", \"15\", \"17\", \"1\", \"5\", \"ultimate\", \"Man\",\n@@ -459,18 +466,65 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n assertHitCount(searchResponse, 1l);\n assertFirstHit(searchResponse, hasId(\"theone\"));\n \n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"full_name\", \"first_name\", \"last_name\", \"category\", \"skill\", \"int-field\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\")\n+ .operator(Operator.AND))).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"skill\", \"full_name\", \"first_name\", \"last_name\", \"category\", \"int-field\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\")\n+ .operator(Operator.AND))).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"first_name\", \"last_name\", \"skill\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n .analyzer(\"category\"))).get();\n assertFirstHit(searchResponse, hasId(\"theone\"));\n \n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"int-field\", \"skill\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n .analyzer(\"category\"))).get();\n assertFirstHit(searchResponse, hasId(\"theone\"));\n \n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"first_name\", \"int-field\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, 
hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"int-field\", \"skill\", \"first_name\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"int-field\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(randomizeType(multiMatchQuery(\"captain america marvel hero\", \"first_name\", \"last_name\", \"category\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n@@ -529,6 +583,46 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n assertFirstHit(searchResponse, hasId(\"ultimate2\"));\n assertSecondHit(searchResponse, hasId(\"ultimate1\"));\n assertThat(searchResponse.getHits().hits()[0].getScore(), greaterThan(searchResponse.getHits().hits()[1].getScore()));\n+\n+ // Test group based on numeric fields\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\", \"first_name\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ // Two numeric fields together caused trouble at one point!\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"int-field\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"int-field\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"alpha 15\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .lenient(true))).get();\n+ assertFirstHit(searchResponse, hasId(\"ultimate1\"));\n+ /*\n+ * Doesn't find theone because \"alpha 15\" isn't a number and we don't\n+ * break on spaces.\n+ */\n+ assertHitCount(searchResponse, 1);\n+\n+ // Lenient wasn't always properly lenient with two numeric fields\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"alpha 15\", \"int-field\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .lenient(true))).get();\n+ assertFirstHit(searchResponse, hasId(\"ultimate1\"));\n }\n \n private static final void assertEquivalent(String query, SearchResponse left, SearchResponse right) {",
"filename": "core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java",
"status": "modified"
}
]
}
|
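The `MatchQuery`/`MultiMatchQuery` diff above routes non-string fields through a lenient `termQuery(fieldType, value, lenient)` helper: it tries to build the typed clause and, if the value cannot be parsed for that field and `lenient` is set, drops the clause instead of blowing up the whole query. The following is a self-contained sketch of that pattern with made-up names (`LenientTermBuilder` and `buildClause` are illustrative, not Elasticsearch APIs):

```java
// Hypothetical, simplified illustration of the lenient term-building pattern
// used in the MatchQuery change above: attempt to build a typed clause and,
// when the value cannot be parsed for the field and leniency is enabled,
// drop the clause (return null) rather than failing the whole query.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class LenientTermBuilder {
    /** Builds a clause, or returns null when lenient and the value is unusable for this field. */
    static <T> T buildClause(Function<String, T> fieldParser, String value, boolean lenient) {
        try {
            return fieldParser.apply(value);
        } catch (RuntimeException e) {
            if (lenient) {
                return null; // skip this field instead of failing the query
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        // One "numeric" field and one "string" field, queried with the text "alpha 15".
        Function<String, String> intField = s -> "int:" + Integer.parseInt(s); // throws on "alpha"
        Function<String, String> textField = s -> "text:" + s;                 // accepts anything

        List<String> clauses = new ArrayList<>();
        for (String token : new String[] {"alpha", "15"}) {
            for (Function<String, String> field : List.of(intField, textField)) {
                String clause = buildClause(field, token, true); // lenient = true
                if (clause != null) {
                    clauses.add(clause);
                }
            }
        }
        // "alpha" only matches the text field; "15" matches both.
        System.out.println(clauses); // [text:alpha, int:15, text:15]
    }
}
```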
{
"body": "Each of my `logstash-YYYY.MM.DD` indices has a filtered alias added to it named `error_messages-YYYY.MM.DD`. Any that are older than 30 days are closed.\n\nI'm seeing strange behavior when trying to do a match_all query on `error_messages-*`.\n\nThis is using the Java transport client. ES version 1.6.0.\n\n```\nSearchResponse sr = client.prepareSearch(\"error_messages-*\")\n .setQuery(QueryBuilders.matchAllQuery())\n .execute().actionGet();\nthrows:\norg.elasticsearch.indices.IndexClosedException: [logstash-2015.07.30] closed\n```\n\nSo I set `IndicesOptions.lenientExpandOpen()` to ignore unavailable indices and got a different (but similiar) error.\n\n```\nSearchResponse sr = client.prepareSearch(\"error_messages-*\")\n .setQuery(QueryBuilders.matchAllQuery())\n .setIndicesOptions(IndicesOptions.lenientExpandOpen())\n .execute().actionGet();\nthrows:\norg.elasticsearch.cluster.block.ClusterBlockException: blocked by: [FORBIDDEN/4/index closed]\n```\n\nInterestingly, this query works via HTTP API in Marvel:\n\n```\nPOST error_messages-*/_search?ignore_unavailable=true\n{\n \"query\": {\n \"match_all\": {}\n }\n}\n```\n\nAlso interesting is if I use the concrete index pattern `logstash-*`, the queries work in Java.\n",
"comments": [
{
"body": "@javanna any ideas here?\n",
"created_at": "2015-09-05T12:06:59Z"
},
{
"body": "sorry it took me a while to get to this, I confirm it's a bug, we don't look at the state of the aliases although we should after #9057. Need to dig deeper to see if this is a regression or just something that we never really covered.\n",
"created_at": "2015-10-15T16:44:50Z"
},
{
"body": "Nice, thanks for the confirmation \n",
"created_at": "2015-10-15T16:48:31Z"
},
{
"body": "I'm seeing the same issue. When can we expect a fix and in which version of es? (currently on 1.6)\nThx\n",
"created_at": "2016-01-09T21:12:56Z"
}
],
"number": 13278,
"title": "Searching multiple aliases using wildcard and ignore_unavailable=true throws index closed exception"
}
|
{
"body": "We fail today with ClusterBlockExceptions if an alias expands to a closed index\nduring search since we miss to check the index option down the road after we expanded\naliases.\n\nCloses #13278\n",
"number": 15882,
"review_comments": [
{
"body": "remove added newline?\n",
"created_at": "2016-01-11T12:00:39Z"
}
],
"title": "Check lenient_expand_open after aliases have been resolved"
}
|
{
"commits": [
{
"message": "Check lenient_expand_open after aliases have been resolved\n\nWe fail today with ClusterBlockExceptions if an alias expands to a closed index\nduring search since we miss to check the index option down the road after we expanded\naliases.\n\nCloses #13278"
}
],
"files": [
{
"diff": "@@ -237,7 +237,7 @@ public String resolveDateMathExpression(String dateExpression) {\n public String[] filteringAliases(ClusterState state, String index, String... expressions) {\n // expand the aliases wildcard\n List<String> resolvedExpressions = expressions != null ? Arrays.asList(expressions) : Collections.<String>emptyList();\n- Context context = new Context(state, IndicesOptions.lenientExpandOpen());\n+ Context context = new Context(state, IndicesOptions.lenientExpandOpen(), true);\n for (ExpressionResolver expressionResolver : expressionResolvers) {\n resolvedExpressions = expressionResolver.resolve(context, resolvedExpressions);\n }\n@@ -459,17 +459,25 @@ final static class Context {\n private final ClusterState state;\n private final IndicesOptions options;\n private final long startTime;\n+ private final boolean preserveAliases;\n \n Context(ClusterState state, IndicesOptions options) {\n- this.state = state;\n- this.options = options;\n- startTime = System.currentTimeMillis();\n+ this(state, options, System.currentTimeMillis());\n+ }\n+\n+ Context(ClusterState state, IndicesOptions options, boolean preserveAliases) {\n+ this(state, options, System.currentTimeMillis(), preserveAliases);\n }\n \n public Context(ClusterState state, IndicesOptions options, long startTime) {\n+ this(state, options, startTime, false);\n+ }\n+\n+ public Context(ClusterState state, IndicesOptions options, long startTime, boolean preserveAliases) {\n this.state = state;\n this.options = options;\n this.startTime = startTime;\n+ this.preserveAliases = preserveAliases;\n }\n \n public ClusterState getState() {\n@@ -483,6 +491,15 @@ public IndicesOptions getOptions() {\n public long getStartTime() {\n return startTime;\n }\n+\n+ /**\n+ * This is used to prevent resolving aliases to concrete indices but this also means\n+ * that we might return aliases that point to a closed index. 
This is currently only used\n+ * by {@link #filteringAliases(ClusterState, String, String...)} since it's the only one that needs aliases\n+ */\n+ boolean isPreserveAliases() {\n+ return preserveAliases;\n+ }\n }\n \n private interface ExpressionResolver {\n@@ -531,6 +548,9 @@ public List<String> resolve(Context context, List<String> expressions) {\n }\n continue;\n }\n+ if (Strings.isEmpty(expression)) {\n+ throw new IndexNotFoundException(expression);\n+ }\n boolean add = true;\n if (expression.charAt(0) == '+') {\n // if its the first, add empty result set\n@@ -612,21 +632,24 @@ public List<String> resolve(Context context, List<String> expressions) {\n .filter(e -> Regex.simpleMatch(pattern, e.getKey()))\n .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));\n }\n+ Set<String> expand = new HashSet<>();\n for (Map.Entry<String, AliasOrIndex> entry : matches.entrySet()) {\n AliasOrIndex aliasOrIndex = entry.getValue();\n- if (aliasOrIndex.isAlias() == false) {\n- AliasOrIndex.Index index = (AliasOrIndex.Index) aliasOrIndex;\n- if (excludeState != null && index.getIndex().getState() == excludeState) {\n- continue;\n- }\n- }\n-\n- if (add) {\n- result.add(entry.getKey());\n+ if (context.isPreserveAliases() && aliasOrIndex.isAlias()) {\n+ expand.add(entry.getKey());\n } else {\n- result.remove(entry.getKey());\n+ for (IndexMetaData meta : aliasOrIndex.getIndices()) {\n+ if (excludeState == null || meta.getState() != excludeState) {\n+ expand.add(meta.getIndex());\n+ }\n+ }\n }\n }\n+ if (add) {\n+ result.addAll(expand);\n+ } else {\n+ result.removeAll(expand);\n+ }\n \n if (matches.isEmpty() && options.allowNoIndices() == false) {\n IndexNotFoundException infe = new IndexNotFoundException(expression);",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java",
"status": "modified"
},
{
"diff": "@@ -192,7 +192,7 @@ public void testIndexOptionsLenient() {\n \n context = new IndexNameExpressionResolver.Context(state, lenientExpand);\n results = indexNameExpressionResolver.concreteIndices(context, Strings.EMPTY_ARRAY);\n- assertEquals(4, results.length);\n+ assertEquals(Arrays.toString(results), 4, results.length);\n \n context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());\n results = indexNameExpressionResolver.concreteIndices(context, \"foofoo*\");\n@@ -867,4 +867,37 @@ private MetaData metaDataBuilder(String... indices) {\n }\n return mdBuilder.build();\n }\n+\n+ public void testFilterClosedIndicesOnAliases() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"test-0\").state(State.OPEN).putAlias(AliasMetaData.builder(\"alias-0\")))\n+ .put(indexBuilder(\"test-1\").state(IndexMetaData.State.CLOSE).putAlias(AliasMetaData.builder(\"alias-1\")));\n+ ClusterState state = ClusterState.builder(new ClusterName(\"_name\")).metaData(mdBuilder).build();\n+\n+ IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());\n+ String[] strings = indexNameExpressionResolver.concreteIndices(context, \"alias-*\");\n+ assertArrayEquals(new String[] {\"test-0\"}, strings);\n+\n+ context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen());\n+ strings = indexNameExpressionResolver.concreteIndices(context, \"alias-*\");\n+\n+ assertArrayEquals(new String[] {\"test-0\"}, strings);\n+ }\n+\n+ public void testFilteringAliases() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"test-0\").state(State.OPEN).putAlias(AliasMetaData.builder(\"alias-0\").filter(\"{ \\\"term\\\": \\\"foo\\\"}\")))\n+ .put(indexBuilder(\"test-1\").state(State.OPEN).putAlias(AliasMetaData.builder(\"alias-1\")));\n+ ClusterState state = ClusterState.builder(new ClusterName(\"_name\")).metaData(mdBuilder).build();\n+\n+ String[] strings = indexNameExpressionResolver.filteringAliases(state, \"test-0\", \"alias-*\");\n+ assertArrayEquals(new String[] {\"alias-0\"}, strings);\n+\n+ // concrete index supersedes filtering alias\n+ strings = indexNameExpressionResolver.filteringAliases(state, \"test-0\", \"test-0,alias-*\");\n+ assertNull(strings);\n+\n+ strings = indexNameExpressionResolver.filteringAliases(state, \"test-0\", \"test-*,alias-*\");\n+ assertNull(strings);\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java",
"status": "modified"
},
{
"diff": "@@ -59,7 +59,7 @@ public void testConvertWildcardsTests() {\n IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver();\n \n IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen());\n- assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testYY*\", \"alias*\"))), equalTo(newHashSet(\"alias1\", \"alias2\", \"alias3\", \"testYYY\")));\n+ assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"testYY*\", \"alias*\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"-kuku\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"+test*\", \"-testYYY\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\")));\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"+testX*\", \"+testYYY\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.routing;\n \n+import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.common.Priority;\n@@ -27,16 +28,35 @@\n import java.util.HashMap;\n import java.util.Map;\n import java.util.Set;\n+import java.util.concurrent.ExecutionException;\n \n import static org.elasticsearch.cluster.metadata.AliasAction.newAddAliasAction;\n import static org.elasticsearch.common.util.set.Sets.newHashSet;\n+import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.nullValue;\n \n /**\n *\n */\n public class AliasResolveRoutingIT extends ESIntegTestCase {\n+\n+\n+ // see https://github.com/elastic/elasticsearch/issues/13278\n+ public void testSearchClosedWildcardIndex() throws ExecutionException, InterruptedException {\n+ createIndex(\"test-0\");\n+ createIndex(\"test-1\");\n+ ensureGreen();\n+ client().admin().indices().prepareAliases().addAlias(\"test-0\", \"alias-0\").addAlias(\"test-1\", \"alias-1\").get();\n+ client().admin().indices().prepareClose(\"test-1\").get();\n+ indexRandom(true, client().prepareIndex(\"test-0\", \"type1\", \"1\").setSource(\"field1\", \"the quick brown fox jumps\"),\n+ client().prepareIndex(\"test-0\", \"type1\", \"2\").setSource(\"field1\", \"quick brown\"),\n+ client().prepareIndex(\"test-0\", \"type1\", \"3\").setSource(\"field1\", \"quick\"));\n+ refresh(\"test-*\");\n+ assertHitCount(client().prepareSearch().setIndices(\"alias-*\").setIndicesOptions(IndicesOptions.lenientExpandOpen()).setQuery(matchQuery(\"_all\", \"quick\")).get(), 3l);\n+ }\n+\n public void testResolveIndexRouting() throws Exception {\n createIndex(\"test1\");\n createIndex(\"test2\");",
"filename": "core/src/test/java/org/elasticsearch/routing/AliasResolveRoutingIT.java",
"status": "modified"
}
]
}
|
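As a quick illustration of the closed-index/alias behaviour exercised by the `AliasResolveRoutingIT` test above, here is a minimal REST-level sketch in Python. It assumes a local Elasticsearch 2.x node at `http://localhost:9200` and the `requests` library; the index and alias names mirror the test, and the query-string options are only a rough equivalent of `IndicesOptions.lenientExpandOpen()`.

```python
import requests

ES = "http://localhost:9200"

# Recreate the test fixture: two indices, two aliases, one index closed.
requests.put(ES + "/test-0")
requests.put(ES + "/test-1")
requests.post(ES + "/_aliases", json={"actions": [
    {"add": {"index": "test-0", "alias": "alias-0"}},
    {"add": {"index": "test-1", "alias": "alias-1"}},
]})
requests.post(ES + "/test-1/_close")
requests.put(ES + "/test-0/type1/1?refresh=true",
             json={"field1": "the quick brown fox jumps"})

# Rough REST equivalent of IndicesOptions.lenientExpandOpen(): the closed index
# behind alias-1 is skipped instead of failing the whole request.
resp = requests.post(
    ES + "/alias-*/_search?ignore_unavailable=true"
         "&allow_no_indices=true&expand_wildcards=open",
    json={"query": {"match": {"field1": "quick"}}},
)
print(resp.json()["hits"]["total"])
```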
{
"body": "I am having trouble with my query after migrating to Elasticsearch 2.0/2.1. Previously used version 1.4. I am trying to do a GeoDistance query using geohash. That's the excerpt from ES response:\n\n```\n\"root_cause\": [\n {\n \"type\": \"query_parsing_exception\",\n \"reason\": \"failed to find geo_point field [geohash]\",\n \"index\": \"campaigns\",\n \"line\": 1,\n \"col\": 234\n }\n]\n```\n\nThe minimal index usable to reproduce this problem is as follows:\n\n```\ncurl -s -X PUT \"http://localhost:9200/campaigns\" -d'{\n \"settings\": {\n \"analysis\": {\n \"filter\": {\n \"name_ngram\": {\n \"type\": \"nGram\",\n \"min_gram\": \"2\",\n \"max_gram\": \"50\"\n }\n },\n \"analyzer\": {\n \"search_analyzer\": {\n \"filter\": [\n \"standard\",\n \"asciifolding\",\n \"lowercase\",\n \"trim\"\n ],\n \"tokenizer\": \"standard\"\n },\n \"name_analyzer\": {\n \"filter\": [\n \"name_ngram\",\n \"standard\",\n \"asciifolding\",\n \"lowercase\",\n \"trim\"\n ],\n \"tokenizer\": \"standard\"\n }\n } \n } \n }, \n \"mappings\": { \n \"campaign\": { \n \"properties\": { \n \"locations\": { \n \"properties\": { \n \"id\": { \n \"include_in_all\": false, \n \"type\": \"integer\" \n }, \n \"geohash\": { \n \"type\": \"geo_point\" \n } \n }, \n \"type\": \"nested\" \n }, \n \"id\": { \n \"type\": \"integer\" \n } \n }, \n \"_all\": { \n \"search_analyzer\": \"search_analyzer\", \n \"analyzer\": \"name_analyzer\" \n } \n } \n } \n}'\n```\n\nI am using two analyzers here (as this is what I am using in my project, tried also with one analyzer and the result is the same).\n\nThe minimal data indexed in ES is as follows:\n\n```\ncurl -s -X POST \"http://localhost:9200/campaigns/campaign\" -d'{\n \"id\": 12,\n \"locations\": [\n {\n \"id\": 13,\n \"geohash\": {\n \"geohash\": \"s7ws01wyd7ws\"\n }\n }\n ]\n}'\n```\n\nThe mapping in ES:\n\n```\ncurl -s -X GET \"http://localhost:9200/campaigns/_mapping?pretty\"\n{\n \"campaigns\" : {\n \"mappings\" : {\n \"campaign\" : {\n \"_all\" : {\n \"analyzer\" : \"name_analyzer\",\n \"search_analyzer\" : \"search_analyzer\"\n },\n \"properties\" : {\n \"id\" : {\n \"type\" : \"integer\"\n },\n \"locations\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"geohash\" : {\n \"type\" : \"geo_point\"\n },\n \"id\" : {\n \"type\" : \"integer\",\n \"include_in_all\" : false\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nWorking query to search by \"locations.id\":\n\n```\ncurl -s -X POST \"http://localhost:9200/campaigns/_search\" -d@'{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"nested\": {\n \"path\": \"locations\",\n \"filter\": {\n \"term\": {\n \"locations.id\": \"13\"\n }\n }\n }\n }\n }\n }\n}' | jq ''\n{\n \"took\": 5,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"campaigns\",\n \"_type\": \"campaign\",\n \"_id\": \"AVFYQn4uic5h_a2BXp1k\",\n \"_score\": 1,\n \"_source\": {\n \"id\": 12,\n \"locations\": [\n {\n \"id\": 13,\n \"geohash\": {\n \"geohash\": \"s7ws01wyd7ws\"\n }\n }\n ]\n }\n }\n ]\n }\n}\n```\n\nNot working query to search by \"geohash\" (only when the data is posted from file, but the same response is returned in my application):\n\n```\ncat query_geohash.json \n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"nested\": {\n \"path\": \"locations\",\n \"filter\": {\n \"geo_distance\": {\n \"distance\": \"100.0km\",\n \"geohash\": \"s7ws01wyd7ws\"\n }\n }\n }\n }\n }\n }\n}\ncurl -s -X POST 
\"http://localhost:9200/campaigns/_search\" -d@query_geohash.json | jq ''\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"query_parsing_exception\",\n \"reason\": \"failed to find geo_point field [geohash]\",\n \"index\": \"campaigns\",\n \"line\": 1,\n \"col\": 234\n }\n ],\n \"type\": \"search_phase_execution_exception\",\n \"reason\": \"all shards failed\",\n \"phase\": \"query\",\n \"grouped\": true,\n \"failed_shards\": [\n {\n \"shard\": 0,\n \"index\": \"campaigns\",\n \"node\": \"cqrNlEmRSPGcGewKuUvlnA\",\n \"reason\": {\n \"type\": \"query_parsing_exception\",\n \"reason\": \"failed to find geo_point field [geohash]\",\n \"index\": \"campaigns\",\n \"line\": 1,\n \"col\": 234\n }\n }\n ]\n },\n \"status\": 400\n}\n```\n\nStrangely, the same query works OK when the query is not read from file, but posted \"inline\":\n\n```\ncurl -s -X POST \"http://localhost:9200/campaigns/_search\" -d@'{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"nested\": {\n \"path\": \"locations\",\n \"filter\": {\n \"geo_distance\": {\n \"distance\": \"100.0km\",\n \"geohash\": \"s7ws01wyd7ws\"\n }\n }\n }\n }\n }\n }\n}' | jq ''\n{\n \"took\": 11,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"campaigns\",\n \"_type\": \"campaign\",\n \"_id\": \"AVFYQn4uic5h_a2BXp1k\",\n \"_score\": 1,\n \"_source\": {\n \"id\": 12,\n \"locations\": [\n {\n \"id\": 13,\n \"geohash\": {\n \"geohash\": \"s7ws01wyd7ws\"\n }\n }\n ]\n }\n }\n ]\n }\n}\n```\n\nWhat am I missing here? The geohash query itself looks OK (response is the same when I use \"locations.geohash\" instead of \"geohash\"). The only difference is the way the query is fed to curl. Also, both versions work correctly in ES 1.4.\n\nI am not sure if this is a bug in Elasticsearch or if I made some mistake in my query. Anyway, I'll be really grateful for any help!\n",
"comments": [
{
"body": "Hi @jmatraszek \n\nThanks for the full recreation. \n\n> Strangely, the same query works OK when the query is not read from file, but posted \"inline\":\n\nI get the same failure whether I use a file or post inline. The first issue is that you always need to specify full paths for fields, so you should specify `locations.geohash` instead of just `geohash`. However, when you do this, you now get the exception `failed to find geo_point field [locations]`.\n\nThe reason for this is the use of the `{ \"field\": {\"geohash\": \"s7ws01wyd7ws\" }}` format, which is undocumented and I was certainly unaware that it existed. The documented format is `{\"field\": \"s7ws01wyd7ws\"}`.\n\nHowever, this undocumented format is interfering with your query as it strips `geohash` off the path name to leave just `locations`. It works if you run it as follows:\n\n```\nPOST /campaigns/_search\n{\n \"query\": {\n \"nested\": {\n \"path\": \"locations\",\n \"filter\": {\n \"geo_distance\": {\n \"distance\": \"100.0km\",\n \"locations.geohash.geohash\": \"s7ws01wyd7ws\"\n }\n }\n }\n }\n}\n```\n\nI think we should just remove the ability to specify a geohash with `{ \"field\": { \"geohash\": \"s7ws01wyd7ws\"}}` and this problem will be solved.\n",
"created_at": "2015-12-02T12:55:53Z"
},
{
"body": "This is an interesting one. Turned out to be a very good catch! Its actually not a problem that you can specify a geohash with `{\"field\" : { \"geohash\": \"...\" }}` Its superfluous, but has been supported since 1.0.0.Beta1 (see PR #3352 where documentation is mentioned but not actually added). Removing it will have no effect here but I'll open a separate issue to remove it anyway.\n\nIn this case its a simple name clash. The mapping defines a sub-field `\"geohash\"` for the `\"locations\"` nested field. The string \"geohash\" is the same as the `geohash` extension that signals to the query parser the point is encoded as a geohash. \n\nSo if you pass `\"locations.geohash\"` the query parser sees the `.geohash` extension, parses the point, strips the extension, then fails to find a `geo_point` type for the `locations` nested type.\n\nOne immediate solution is exactly what @clintongormley suggested: pass the full `\"locations.geohash.geohash\"` path. This tells the query parser, \"the subfield named 'geohash', for nested field 'locations' will be in a geohash format\". \nThe other solution is to simply rename the `geohash` sub-field so there's no naming clash.\n\nIn pre-2.x (before GeoPointV2) its actually more scary if you name clash with multi-fields. No exception will be thrown because it won't query on the expected `geohash` sub-field. Certainly problematic for prefix queries. So again, great find - this is definitely trappy! I'll open a PR to fix it on master and backport.\n",
"created_at": "2016-01-09T04:50:40Z"
}
],
"number": 15179,
"title": "Geohash query not working after migrating to Elasticsearch 2.0/2.1"
}
|
{
"body": "Occasionally the .geohash suffix in Geo{Distance|DistanceRange}Query would conflict with a mapping that defines a sub-field by the same name. This occurs often with nested and multi-fields when a mapping defines a geo_point sub-field using the field name \"geohash\". Since the QueryParser already handles parsing geohash encoded geopoints, without requiring the \".geohash\" suffix, the suffix parsing can be removed altogether.\n\nThis PR removes the .geohash suffix parsing, adds explicit test coverage for the nested query use-case, and adds random distance queries to the nested query test suite.\n\ncloses #15179 \n",
"number": 15871,
"review_comments": [
{
"body": "Shouldn't we be able to remove `GeoPointFieldMapper.Names.GEOHASH_SUFFIX` entirely from the code base?\n",
"created_at": "2016-02-26T12:12:46Z"
},
{
"body": "This is not related to your PR but to me it looks weird that we need to create an instance of a test class to get a query builder. I think we should change that to a cleaner API (but not in this PR).\n",
"created_at": "2016-02-26T12:14:37Z"
},
{
"body": "I can see no reference to `GeoPointDistanceQuery` in this file. Is this import needed?\n",
"created_at": "2016-02-26T12:15:09Z"
},
{
"body": "There's no reason for it. It looks to me like `GeoDistanceRangeQueryTests.createTestQueryBuilder` and all referenced methods can be static? They don't require any instance state.\n",
"created_at": "2016-03-08T19:23:12Z"
},
{
"body": "Well, except the generic. But I think that can be fixed too? You're right, I think its worth exploring in a separate cleanup issue?\n",
"created_at": "2016-03-08T19:26:12Z"
}
],
"title": "Remove .geohash suffix from GeoDistanceQuery and GeoDistanceRangeQuery"
}
|
{
"commits": [
{
"message": "Remove .geohash suffix from GeoDistanceQuery and GeoDistanceRangeQuery\n\nOccasionally the .geohash suffix in Geo{Distance|DistanceRange}Query would conflict with a mapping that defines a sub-field by the same name. This occurs often with nested and multi-fields a mapping defines a geo_point sub-field using the field name \"geohash\". Since the QueryParser already handles parsing geohash encoded geopoints without requiring the \".geohash\" suffix, the suffix parsing can be removed altogether.\n\nThis commit removes the .geohash suffix parsing, adds explicit test coverage for the nested query use-case, and adds random distance queries to the nested query test suite."
}
],
"files": [
{
"diff": "@@ -65,7 +65,6 @@ public static class Names {\n public static final String LON = \"lon\";\n public static final String LON_SUFFIX = \".\" + LON;\n public static final String GEOHASH = \"geohash\";\n- public static final String GEOHASH_SUFFIX = \".\" + GEOHASH;\n public static final String IGNORE_MALFORMED = \"ignore_malformed\";\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/geo/BaseGeoPointFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -120,9 +120,6 @@ public GeoDistanceQueryBuilder fromXContent(QueryParseContext parseContext) thro\n } else if (currentFieldName.endsWith(GeoPointFieldMapper.Names.LON_SUFFIX)) {\n point.resetLon(parser.doubleValue());\n fieldName = currentFieldName.substring(0, currentFieldName.length() - GeoPointFieldMapper.Names.LON_SUFFIX.length());\n- } else if (currentFieldName.endsWith(GeoPointFieldMapper.Names.GEOHASH_SUFFIX)) {\n- point.resetFromGeoHash(parser.text());\n- fieldName = currentFieldName.substring(0, currentFieldName.length() - GeoPointFieldMapper.Names.GEOHASH_SUFFIX.length());\n } else if (parseContext.parseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.NAME_FIELD)) {\n queryName = parser.text();\n } else if (parseContext.parseFieldMatcher().match(currentFieldName, AbstractQueryBuilder.BOOST_FIELD)) {",
"filename": "core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -196,15 +196,6 @@ public GeoDistanceRangeQueryBuilder fromXContent(QueryParseContext parseContext)\n point = new GeoPoint();\n }\n point.resetLon(parser.doubleValue());\n- } else if (currentFieldName.endsWith(GeoPointFieldMapper.Names.GEOHASH_SUFFIX)) {\n- String maybeFieldName = currentFieldName.substring(0, currentFieldName.length() - GeoPointFieldMapper.Names.GEOHASH_SUFFIX.length());\n- if (fieldName == null || fieldName.equals(maybeFieldName)) {\n- fieldName = maybeFieldName;\n- } else {\n- throw new ParsingException(parser.getTokenLocation(), \"[\" + GeoDistanceRangeQueryBuilder.NAME +\n- \"] field name already set to [\" + fieldName + \"] but found [\" + currentFieldName + \"]\");\n- }\n- point = GeoPoint.fromGeohash(parser.text());\n } else if (parseContext.parseFieldMatcher().match(currentFieldName, NAME_FIELD)) {\n queryName = parser.text();\n } else if (parseContext.parseFieldMatcher().match(currentFieldName, BOOST_FIELD)) {",
"filename": "core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -140,6 +140,7 @@ public abstract class AbstractQueryTestCase<QB extends AbstractQueryBuilder<QB>>\n protected static final String DATE_FIELD_NAME = \"mapped_date\";\n protected static final String OBJECT_FIELD_NAME = \"mapped_object\";\n protected static final String GEO_POINT_FIELD_NAME = \"mapped_geo_point\";\n+ protected static final String GEO_POINT_FIELD_MAPPING = \"type=geo_point,lat_lon=true,geohash=true,geohash_prefix=true\";\n protected static final String GEO_SHAPE_FIELD_NAME = \"mapped_geo_shape\";\n protected static final String[] MAPPED_FIELD_NAMES = new String[] { STRING_FIELD_NAME, INT_FIELD_NAME, DOUBLE_FIELD_NAME,\n BOOLEAN_FIELD_NAME, DATE_FIELD_NAME, OBJECT_FIELD_NAME, GEO_POINT_FIELD_NAME, GEO_SHAPE_FIELD_NAME };\n@@ -300,7 +301,7 @@ public void onRemoval(ShardId shardId, Accountable accountable) {\n BOOLEAN_FIELD_NAME, \"type=boolean\",\n DATE_FIELD_NAME, \"type=date\",\n OBJECT_FIELD_NAME, \"type=object\",\n- GEO_POINT_FIELD_NAME, \"type=geo_point,lat_lon=true,geohash=true,geohash_prefix=true\",\n+ GEO_POINT_FIELD_NAME, GEO_POINT_FIELD_MAPPING,\n GEO_SHAPE_FIELD_NAME, \"type=geo_shape\"\n ).string()), MapperService.MergeReason.MAPPING_UPDATE, false);\n // also add mappings for two inner field in the object field",
"filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java",
"status": "modified"
},
{
"diff": "@@ -24,10 +24,12 @@\n import org.apache.lucene.spatial.util.GeoDistanceUtils;\n import org.apache.lucene.util.NumericUtils;\n import org.elasticsearch.Version;\n+import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.geo.GeoDistance;\n import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.unit.DistanceUnit;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.search.geo.GeoDistanceRangeQuery;\n import org.elasticsearch.test.geo.RandomGeoGenerator;\n \n@@ -296,6 +298,36 @@ public void testInvalidDistanceUnit() {\n }\n }\n \n+ public void testNestedRangeQuery() throws IOException {\n+ // create a nested geo_point type with a subfield named \"geohash\" (explicit testing for ISSUE #15179)\n+ MapperService mapperService = queryShardContext().getMapperService();\n+ String nestedMapping =\n+ \"{\\\"nested_doc\\\" : {\\\"properties\\\" : {\" +\n+ \"\\\"locations\\\": {\\\"properties\\\": {\" +\n+ \"\\\"geohash\\\": {\\\"type\\\": \\\"geo_point\\\"}},\" +\n+ \"\\\"type\\\": \\\"nested\\\"}\" +\n+ \"}}}\";\n+ mapperService.merge(\"nested_doc\", new CompressedXContent(nestedMapping), MapperService.MergeReason.MAPPING_UPDATE, false);\n+\n+ // create a range query on the nested locations.geohash sub-field\n+ String queryJson =\n+ \"{\\n\" +\n+ \" \\\"nested\\\": {\\n\" +\n+ \" \\\"path\\\": \\\"locations\\\",\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"geo_distance_range\\\": {\\n\" +\n+ \" \\\"from\\\": \\\"0.0km\\\",\\n\" +\n+ \" \\\"to\\\" : \\\"200.0km\\\",\\n\" +\n+ \" \\\"locations.geohash\\\": \\\"s7ws01wyd7ws\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\\n\";\n+ NestedQueryBuilder builder = (NestedQueryBuilder) parseQuery(queryJson);\n+ QueryShardContext context = createShardContext();\n+ builder.toQuery(context);\n+ }\n+\n public void testFromJson() throws IOException {\n String json =\n \"{\\n\" +",
"filename": "core/src/test/java/org/elasticsearch/index/query/GeoDistanceRangeQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -63,7 +63,7 @@ protected void doAssertLuceneQuery(Builder queryBuilder, Query query, QueryShard\n assertThat(query, instanceOf(TermQuery.class));\n TermQuery termQuery = (TermQuery) query;\n Term term = termQuery.getTerm();\n- assertThat(term.field(), equalTo(queryBuilder.fieldName() + GeoPointFieldMapper.Names.GEOHASH_SUFFIX));\n+ assertThat(term.field(), equalTo(queryBuilder.fieldName() + \".\" + GeoPointFieldMapper.Names.GEOHASH));\n String geohash = queryBuilder.geohash();\n if (queryBuilder.precision() != null) {\n int len = Math.min(queryBuilder.precision(), geohash.length());",
"filename": "core/src/test/java/org/elasticsearch/index/query/GeohashCellQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -52,6 +52,7 @@ public void setUp() throws Exception {\n BOOLEAN_FIELD_NAME, \"type=boolean\",\n DATE_FIELD_NAME, \"type=date\",\n OBJECT_FIELD_NAME, \"type=object\",\n+ GEO_POINT_FIELD_NAME, GEO_POINT_FIELD_MAPPING,\n \"nested1\", \"type=nested\"\n ).string()), MapperService.MergeReason.MAPPING_UPDATE, false);\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java",
"status": "modified"
}
]
}
|
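For reference, a minimal sketch of the nested `geo_distance` query from the report above, assuming the `campaigns` index and mapping shown in the issue, a local node at `http://localhost:9200`, and the Python `requests` library. With the `.geohash` suffix parsing removed by this PR the plain field path is enough; on affected 2.x releases the workaround discussed in the issue was the full `locations.geohash.geohash` path.

```python
import requests

query = {
    "query": {
        "filtered": {               # 2.x query syntax, as in the original report
            "filter": {
                "nested": {
                    "path": "locations",
                    "filter": {
                        "geo_distance": {
                            "distance": "100.0km",
                            # Plain field path once the .geohash suffix is gone;
                            # "locations.geohash.geohash" was the pre-fix workaround.
                            "locations.geohash": "s7ws01wyd7ws",
                        }
                    },
                }
            }
        }
    }
}

resp = requests.post("http://localhost:9200/campaigns/_search", json=query)
print(resp.json())
```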
{
"body": "Hi team,\n\nI get an error using a `multi_match` query with `cross_fields` type and a numeric query.\nI'm using v2.1.1 on OSX, installed via Homebrew.\n\n**Index basic data**\n\n```\ncurl -XPUT http://localhost:9200/blog/post/1?pretty=1 -d '{\"foo\":123, \"bar\":\"xyzzy\", \"baz\":456}'\n```\n\n**Use a `multi_match` query with `cross_fields` type and a numeric query**\n\n```\ncurl -XGET http://localhost:9200/blog/post/_search?pretty=1 -d '{\"query\": {\"multi_match\": {\"type\": \"cross_fields\", \"query\": \"100\", \"lenient\": true, \"fields\": [\"foo\", \"bar\", \"baz\"]}}}'\n```\n\n**Error**\n\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Illegal shift value, must be 0..63; got shift=2147483647\"\n } ],\n \"type\" : \"search_phase_execution_exception\",\n \"reason\" : \"all shards failed\",\n \"phase\" : \"query\",\n \"grouped\" : true,\n \"failed_shards\" : [ {\n \"shard\" : 0,\n \"index\" : \"blog\",\n \"node\" : \"0TxGVVWsSu2qX63hZdOv2w\",\n \"reason\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Illegal shift value, must be 0..63; got shift=2147483647\"\n }\n } ]\n },\n \"status\" : 400\n}\n```\n\n**Note that the error does not appear if I specify only 1 numeric field in search.**\n\n**Stack trace**\n\n```\nCaused by: java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647\n at org.apache.lucene.util.NumericUtils.longToPrefixCodedBytes(NumericUtils.java:147)\n at org.apache.lucene.util.NumericUtils.longToPrefixCoded(NumericUtils.java:121)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.getBytesRef(NumericTokenStream.java:163)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.clone(NumericTokenStream.java:217)\n at org.apache.lucene.analysis.NumericTokenStream$NumericTermAttributeImpl.clone(NumericTokenStream.java:148)\n at org.apache.lucene.util.AttributeSource$State.clone(AttributeSource.java:54)\n at org.apache.lucene.util.AttributeSource.captureState(AttributeSource.java:281)\n at org.apache.lucene.analysis.CachingTokenFilter.fillCache(CachingTokenFilter.java:96)\n at org.apache.lucene.analysis.CachingTokenFilter.incrementToken(CachingTokenFilter.java:70)\n at org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:223)\n at org.apache.lucene.util.QueryBuilder.createBooleanQuery(QueryBuilder.java:87)\n at org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:178)\n at org.elasticsearch.index.search.MultiMatchQuery.parseAndApply(MultiMatchQuery.java:55)\n at org.elasticsearch.index.search.MultiMatchQuery.access$000(MultiMatchQuery.java:42)\n at org.elasticsearch.index.search.MultiMatchQuery$QueryBuilder.parseGroup(MultiMatchQuery.java:118)\n at org.elasticsearch.index.search.MultiMatchQuery$CrossFieldsQueryBuilder.buildGroupedQueries(MultiMatchQuery.java:198)\n at org.elasticsearch.index.search.MultiMatchQuery.parse(MultiMatchQuery.java:86)\n at org.elasticsearch.index.query.MultiMatchQueryParser.parse(MultiMatchQueryParser.java:163)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:257)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:303)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:206)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:201)\n at 
org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:831)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:651)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:617)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n```\n",
"comments": [
{
"body": "Fun times. Reproduces in master. I'll work on fixing it there and backporting it after that.\n",
"created_at": "2016-01-08T18:33:44Z"
},
{
"body": "FYI, I have reported a similar bug with the same symptoms (1 numeric field included is ok but 2+ numeric fields give `number_format_exception`). Don't know if it could be related.\n\nhttps://github.com/elastic/elasticsearch/issues/3975#issuecomment-167577538\n",
"created_at": "2016-01-08T20:01:37Z"
},
{
"body": "Good timing! I've figure out what is up and I've started on a solution. I've only got about two more hours left to work on it so I might not have anything before the weekend but, yeah, I'll have something soon.\n\nThe `lenient: true` issue is similar so I'll work on it while I'm in there.\n",
"created_at": "2016-01-08T20:22:56Z"
},
{
"body": "Had to revert the change. I'll get it in there though.\n",
"created_at": "2016-01-11T17:27:25Z"
}
],
"number": 15860,
"title": "multi_match query gives java.lang.IllegalArgumentException: Illegal shift value, must be 0..63; got shift=2147483647"
}
|
{
"body": "It had some funky errors, like lenient:true not working and queries with\ntwo integer fields blowing up if there was no analyzer defined on the\nquery. This throws a bunch more tests at it and rejiggers how non-strings\nare handled so they don't wander off into scary QueryBuilder-land unless\nthey have a nice strong analyzer to protect them.\n\nCloses #15860\n",
"number": 15869,
"review_comments": [],
"title": "Fix blended terms for non-strings"
}
|
{
"commits": [
{
"message": "Fix blended terms for non-strings\n\nIt had some funky errors, like lenient:true not working and queries with\ntwo integer fields blowing up if there was no analyzer defined on the\nquery. This throws a bunch more tests at it and rejiggers how non-strings\nare handled so they don't wander off into scary QueryBuilder-land unless\nthey have a nice strong analyzer to protect them.\n\nCloses #15860"
}
],
"files": [
{
"diff": "@@ -390,7 +390,7 @@ public boolean useTermQueryWithQueryString() {\n }\n \n /** Creates a term associated with the field of this mapper for the given value */\n- protected Term createTerm(Object value) {\n+ public Term createTerm(Object value) {\n return new Term(name(), indexedValueForSearch(value));\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java",
"status": "modified"
},
{
"diff": "@@ -212,10 +212,6 @@ public void setZeroTermsQuery(ZeroTermsQuery zeroTermsQuery) {\n this.zeroTermsQuery = zeroTermsQuery;\n }\n \n- protected boolean forceAnalyzeQueryString() {\n- return false;\n- }\n-\n protected Analyzer getAnalyzer(MappedFieldType fieldType) {\n if (this.analyzer == null) {\n if (fieldType != null) {\n@@ -240,9 +236,18 @@ public Query parse(Type type, String fieldName, Object value) throws IOException\n field = fieldName;\n }\n \n- if (fieldType != null && fieldType.useTermQueryWithQueryString() && !forceAnalyzeQueryString()) {\n+ /*\n+ * If the user forced an analyzer we really don't care if they are\n+ * searching a type that wants term queries to be used with query string\n+ * because the QueryBuilder will take care of it. If they haven't forced\n+ * an analyzer then types like NumberFieldType that want terms with\n+ * query string will blow up because their analyzer isn't capable of\n+ * passing through QueryBuilder.\n+ */\n+ boolean noForcedAnalyzer = this.analyzer == null;\n+ if (fieldType != null && fieldType.useTermQueryWithQueryString() && noForcedAnalyzer) {\n try {\n- return fieldType.termQuery(value, context);\n+ return termQuery(fieldType, value);\n } catch (RuntimeException e) {\n if (lenient) {\n return null;\n@@ -251,6 +256,7 @@ public Query parse(Type type, String fieldName, Object value) throws IOException\n }\n \n }\n+\n Analyzer analyzer = getAnalyzer(fieldType);\n assert analyzer != null;\n MatchQueryBuilder builder = new MatchQueryBuilder(analyzer, fieldType);\n@@ -282,6 +288,15 @@ public Query parse(Type type, String fieldName, Object value) throws IOException\n }\n }\n \n+ /**\n+ * Creates a TermQuery-like-query for MappedFieldTypes that don't support\n+ * QueryBuilder which is very string-ish. Just delegates to the\n+ * MappedFieldType for MatchQuery but gets more complex for blended queries.\n+ */\n+ protected Query termQuery(MappedFieldType fieldType, Object value) {\n+ return fieldType.termQuery(value, context);\n+ }\n+\n protected Query zeroTermsQuery() {\n return zeroTermsQuery == DEFAULT_ZERO_TERMS_QUERY ? Queries.newMatchNoDocsQuery() : Queries.newMatchAllQuery();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -149,6 +149,10 @@ public Query blendTerm(Term term, MappedFieldType fieldType) {\n public boolean forceAnalyzeQueryString() {\n return false;\n }\n+\n+ public Query termQuery(MappedFieldType fieldType, Object value) {\n+ return fieldType.termQuery(value, context);\n+ }\n }\n \n public class CrossFieldsQueryBuilder extends QueryBuilder {\n@@ -196,8 +200,13 @@ public List<Query> buildGroupedQueries(MultiMatchQueryBuilder.Type type, Map<Str\n } else {\n blendedFields = null;\n }\n- final FieldAndFieldType fieldAndFieldType = group.get(0);\n- Query q = parseGroup(type.matchQueryType(), fieldAndFieldType.field, 1f, value, minimumShouldMatch);\n+ /*\n+ * We have to pick some field to pass through the superclass so\n+ * we just pick the first field. It shouldn't matter because\n+ * fields are already grouped by their analyzers/types.\n+ */\n+ String representativeField = group.get(0).field;\n+ Query q = parseGroup(type.matchQueryType(), representativeField, 1f, value, minimumShouldMatch);\n if (q != null) {\n queries.add(q);\n }\n@@ -206,6 +215,28 @@ public List<Query> buildGroupedQueries(MultiMatchQueryBuilder.Type type, Map<Str\n return queries.isEmpty() ? null : queries;\n }\n \n+ /**\n+ * Pick the field for parsing. If any of the fields in the group do\n+ * *not* useTermQueryWithQueryString then we return that one to force\n+ * analysis. If some of the fields would useTermQueryWithQueryString\n+ * then we assume that that parsing field's parser is good enough for\n+ * them and return it. Otherwise we just return the first field. You\n+ * should only get mixed groups like this when you force a certain\n+ * analyzer on a query and use string and integer fields because of the\n+ * way that grouping is done. That means that the use *asked* for the\n+ * integer fields to be searched using a string analyzer so this is\n+ * technically doing exactly what they asked for even if it is a bit\n+ * funky.\n+ */\n+ private String fieldForParsing(List<FieldAndFieldType> group) {\n+ for (FieldAndFieldType field: group) {\n+ if (field.fieldType.useTermQueryWithQueryString()) {\n+ return field.field;\n+ }\n+ }\n+ return group.get(0).field;\n+ }\n+\n @Override\n public boolean forceAnalyzeQueryString() {\n return blendedFields != null;\n@@ -231,6 +262,11 @@ public Query blendTerm(Term term, MappedFieldType fieldType) {\n }\n return BlendedTermQuery.dismaxBlendedQuery(terms, blendedBoost, tieBreaker);\n }\n+\n+ @Override\n+ public Query termQuery(MappedFieldType fieldType, Object value) {\n+ return blendTerm(fieldType.createTerm(value), fieldType);\n+ }\n }\n \n @Override\n@@ -266,7 +302,11 @@ public Term newTerm(String value) {\n }\n \n @Override\n- protected boolean forceAnalyzeQueryString() {\n- return this.queryBuilder == null ? super.forceAnalyzeQueryString() : this.queryBuilder.forceAnalyzeQueryString();\n+ protected Query termQuery(MappedFieldType fieldType, Object value) {\n+ if (queryBuilder == null) {\n+ // Can be null when the MultiMatchQuery collapses into a MatchQuery\n+ return super.termQuery(fieldType, value);\n+ }\n+ return queryBuilder.termQuery(fieldType, value);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.query;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -459,6 +460,23 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n assertHitCount(searchResponse, 1l);\n assertFirstHit(searchResponse, hasId(\"theone\"));\n \n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"full_name\", \"first_name\", \"last_name\", \"category\", \"skill\", \"int-field\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\")\n+ .operator(Operator.AND))).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"skill\", \"full_name\", \"first_name\", \"last_name\", \"category\", \"int-field\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\")\n+ .operator(Operator.AND))).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"first_name\", \"last_name\", \"skill\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n@@ -471,6 +489,24 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n .analyzer(\"category\"))).get();\n assertFirstHit(searchResponse, hasId(\"theone\"));\n \n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"first_name\", \"int-field\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"int-field\", \"skill\", \"first_name\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"25 15\", \"int-field\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .analyzer(\"category\"))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(randomizeType(multiMatchQuery(\"captain america marvel hero\", \"first_name\", \"last_name\", \"category\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n@@ -529,6 +565,46 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n assertFirstHit(searchResponse, hasId(\"ultimate2\"));\n assertSecondHit(searchResponse, hasId(\"ultimate1\"));\n assertThat(searchResponse.getHits().hits()[0].getScore(), greaterThan(searchResponse.getHits().hits()[1].getScore()));\n+\n+ // Test group based on numeric fields\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ 
.setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\", \"first_name\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ // Two numeric fields together caused trouble at one point!\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"int-field\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"int-field\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS))).get();\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"alpha 15\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .lenient(true))).get();\n+ assertFirstHit(searchResponse, hasId(\"ultimate1\"));\n+ /*\n+ * Doesn't find theone because \"alpha 15\" isn't a number and we don't\n+ * break on spaces.\n+ */\n+ assertHitCount(searchResponse, 1);\n+\n+ // Lenient wasn't always properly lenient with two numeric fields\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"alpha 15\", \"int-field\", \"first_name\", \"skill\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .lenient(true))).get();\n+ assertFirstHit(searchResponse, hasId(\"ultimate1\"));\n }\n \n private static final void assertEquivalent(String query, SearchResponse left, SearchResponse right) {",
"filename": "core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java",
"status": "modified"
}
]
}
|
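To make the failure mode concrete, here is a minimal sketch of the `cross_fields` query over mixed string and numeric fields, assuming the one-document `blog` recreation from the issue, a local node at `http://localhost:9200`, and the Python `requests` library. With the fix applied, `lenient: true` should suppress the per-field parse failures instead of tripping the shift exception.

```python
import requests

ES = "http://localhost:9200"

# One document with two numeric fields and one string field, as in the recreation.
requests.put(ES + "/blog/post/1?refresh=true",
             json={"foo": 123, "bar": "xyzzy", "baz": 456})

query = {
    "query": {
        "multi_match": {
            "type": "cross_fields",
            "query": "100",
            "lenient": True,          # should swallow per-field parse failures
            "fields": ["foo", "bar", "baz"],
        }
    }
}
resp = requests.post(ES + "/blog/post/_search", json=query)
print(resp.status_code, resp.json().get("hits", {}).get("total"))
```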
{
"body": "Environment:\n- 62 indices\n- 50+ types\n- Java 7.0_67\n- JVM arguments `-Xms8G -Xmx8G`\n- Elasticsearch version 1.7.5 (branch 1.7)\n\nSteps:\n1. Start Elasticsearch\n2. Execute the following query `http://localhost:9200/logstash/_mapping/field/*` (produced by Kibana)\n3. Wait the response\n4. Run the same query again\n\nHere is the result (screenshot from JVisualVM);\n\n\nAs you can see the first execution of the query took about 1G of memory and the second execution of the query took about 2.5G (more than 2 times more memory than the first execution).\nI tried to force GC but no memory is free up.\n\nUsing MAT, I can see that the memory is filled with `FieldMappingMetaData`.\nI'm not sure if this is intended but I `GetFieldMappingsResponse` keeps a reference of the mapping: https://github.com/elastic/elasticsearch/blob/1.7/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java#L39\n\nI didn't try on Elasticsearch 2.x but the code is more or less the same.\n",
"comments": [
{
"body": "> I tried to force GC but no memory is free up.\n\nThis could be because elasticsearch disables explicit GCs. What happens if you keep calling this API, does Elasticsearch run out of memory?\n",
"created_at": "2016-01-06T14:08:10Z"
},
{
"body": "> This could be because elasticsearch disables explicit GCs. What happens if you keep calling this API, does Elasticsearch run out of memory?\n\nYes, if I execute the query one more time, Elasticsearch hangs forever. Most of the time when there's activity on the node, Elasticsearch terminates abruptly (OutOfMemory).\n",
"created_at": "2016-01-06T15:39:03Z"
},
{
"body": "If you want I can upload a heap dump\n",
"created_at": "2016-01-06T16:08:28Z"
},
{
"body": "That might help, thanks.\n",
"created_at": "2016-01-06T16:20:22Z"
},
{
"body": "Here's a hprof file: https://drive.google.com/file/d/0B7t8xIk1T8NjejFLVzY1Z1ZzTDg/view?usp=sharing (~ 3Go)\n",
"created_at": "2016-01-08T10:56:03Z"
},
{
"body": "I removed all breakpoints on Intellij and now the graph looks like this:\n\n_somehow the debugging on Intellij blocked the garbage collection_\n\nStill the memory usage is huge and can threaten the stability of the cluster with big spike of heap used and concurrents queries can achieve the same result (ie. Elasticsearch hangs forever or becomes unresponsive for minutes):\n\n\n_Elasticsearch was unresponsive between 13:17 and 13:22_\n\nIs there a circuit breaker available to prevent this operation from causing an `OutOfMemoryError` ?\n",
"created_at": "2016-01-08T12:26:57Z"
},
{
"body": "@Mogztter thanks for the heapdump I looked at it and found on JNI Global references holding on to the memory, any chance you are running your JVM in debug mode? I also wonder if you can run with `-Xcheck:jni` and separately can you try running this with the latest JVM ie a java 8 JVM? Are you using any kind of native plugins or something like this?\n",
"created_at": "2016-01-08T14:10:56Z"
},
{
"body": "ooh nevermind then I didn't see your latest reply!!!\n",
"created_at": "2016-01-08T14:11:57Z"
},
{
"body": "I looked at this again with @jpountz and we found a smoking gun... @jpountz will take care of a fix. Thanks for opening this @Mogztter \n",
"created_at": "2016-01-08T14:26:19Z"
},
{
"body": "The issue appears to be that we over-allocate the data-structure that we use to store the serialized representation of the field mappings, which can become an issue if there are many fields. I'll look into it.\n",
"created_at": "2016-01-08T14:31:06Z"
},
{
"body": "@clintongormley I think we should make this a release blocker as well? @jpountz WDYT? this is 1.7.5\n",
"created_at": "2016-01-08T14:36:02Z"
},
{
"body": "We already need to make a 1.7.5 anyway because of the bitset cache bug so that works for me.\n",
"created_at": "2016-01-08T14:37:38Z"
},
{
"body": "@Mogztter once @jpountz has pushed #15864 to 1.7 can you give it a try just to confirm the situation is better after that?\n",
"created_at": "2016-01-08T19:26:22Z"
},
{
"body": "@jpountz Thanks for the quick fix :+1: \n@s1monw Sure, unfortunately I won't be able to confirm the improvement before Tuesday.\n",
"created_at": "2016-01-08T20:25:14Z"
},
{
"body": "> @s1monw Sure, unfortunately I won't be able to confirm the improvement before Tuesday.\n\nTuesday is more than soon enough :) thanks\n",
"created_at": "2016-01-10T21:40:43Z"
},
{
"body": "The fix has been propagated to all branches.\n",
"created_at": "2016-01-11T08:39:51Z"
},
{
"body": "@jpountz Thanks!\n@s1monw Yes the situation is _way_ better:\n\n\n\nNow the memory taken is about 250 Mo and stable on each call :+1: \n",
"created_at": "2016-01-12T08:45:44Z"
},
{
"body": ":+1: Thanks a lot for confirming.\n",
"created_at": "2016-01-12T08:46:40Z"
},
{
"body": "awesome! thanks for reporting @Mogztter \n",
"created_at": "2016-01-12T09:32:02Z"
},
{
"body": "@jpountz Hello Adrien, I saw that all `v1.7.5` issues are closed :+1: any ETA for Elasticsearch 1.7.5 ? thanks\n",
"created_at": "2016-01-21T12:00:11Z"
},
{
"body": "I can't promise anything but it should go out next week.\n",
"created_at": "2016-01-22T18:16:52Z"
},
{
"body": "Yay 1.7.5 is out, upgrading now, thanks :smile: \n",
"created_at": "2016-02-02T21:52:46Z"
},
{
"body": "@Mogztter thank YOU!\n",
"created_at": "2016-02-03T08:54:21Z"
}
],
"number": 15789,
"title": "High heap usage on get field mapping API"
}
|
{
"body": "It currently tries to align to the page size (16KB) by default. However, this\nmight waste a significant memory (if many BytesStreamOutputs are allocated)\nand is also useless given that BytesStreamOutput does not recycle (on the\ncontrary to ReleasableBytesStreamOutput). So the initial size has been changed\nto 0.\n\nCloses #15789\n",
"number": 15864,
"review_comments": [],
"title": "Fix initial sizing of BytesStreamOutput."
}
|
{
"commits": [
{
"message": "Fix initial sizing of BytesStreamOutput.\n\nIt currently tries to align to the page size (16KB) by default. However, this\nmight waste a significant memory (if many BytesStreamOutputs are allocated)\nand is also useless given that BytesStreamOutput does not recycle (on the\ncontrary to ReleasableBytesStreamOutput). So the initial size has been changed\nto 0.\n\nCloses #15789"
}
],
"files": [
{
"diff": "@@ -39,10 +39,12 @@ public class BytesStreamOutput extends StreamOutput implements BytesStream {\n protected int count;\n \n /**\n- * Create a non recycling {@link BytesStreamOutput} with 1 initial page acquired.\n+ * Create a non recycling {@link BytesStreamOutput} with an initial capacity of 0.\n */\n public BytesStreamOutput() {\n- this(BigArrays.PAGE_SIZE_IN_BYTES);\n+ // since this impl is not recycling anyway, don't bother aligning to\n+ // the page size, this will even save memory\n+ this(0);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/common/io/stream/BytesStreamOutput.java",
"status": "modified"
}
]
}
|
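A rough back-of-the-envelope sketch (plain Python, not Elasticsearch code) of why the old 16KB initial page hurts when many small `BytesStreamOutput` instances are alive at once, as with the per-field `FieldMappingMetaData` blobs seen in the heap dump above; the instance count and payload size are made-up assumptions for illustration.

```python
PAGE_SIZE = 16 * 1024       # old default initial capacity of BytesStreamOutput
num_outputs = 200_000       # hypothetical number of small serialized field mappings
avg_payload = 200           # hypothetical average serialized size in bytes

old_footprint = num_outputs * PAGE_SIZE    # each output pre-allocates a full page
new_footprint = num_outputs * avg_payload  # starting at 0, buffers grow roughly with the payload

print(f"old: ~{old_footprint / 2**30:.2f} GiB, new: ~{new_footprint / 2**20:.1f} MiB")
```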
{
"body": "I have a `restaurant` index, and a type named `japanese`. So I use the following URL for searching:\n`http: //localhost:9200/restaurant/japanese/_search`\n\nAnd my query body is:\n\n```\n{\n \"query\": {\n \"match\": {\n \"title\": {\n \"query\": \"sushi\"\n }\n }\n },\n \"highlight\": {\n \"fragment_size\": 70,\n \"no_match_size\": 70,\n \"number_of_fragments\": 1,\n \"require_field_match\": false,\n \"fields\": {\n \"title\": {},\n \"description\": {}\n }\n }\n}\n```\n\nIt just search through the title field with 'sushi', but the type name 'japanese' is highlighted as well.\nIs this a BUG, or did I do something wrong?\n\nES version is 2.0.0.\n",
"comments": [
{
"body": "me too.\nI use Elasticsearch-2.1.1.\n",
"created_at": "2016-01-06T05:00:35Z"
},
{
"body": "I imagine that its a bug. I haven't tried it myself but it looks like it from your description. Can you work around it by specifying a `highlight_query`?\n",
"created_at": "2016-01-06T13:46:52Z"
},
{
"body": "Confirmed as a bug:\n\n```\ncurl -XDELETE localhost:9200/test?pretty\ncurl -XPOST 'localhost:9200/test/japanese?pretty&refresh' -d'{\"foo\": \"japanese sushi\"}'\ncurl -XPOST localhost:9200/test/japanese/_search?pretty -d'{\n \"query\": {\n \"match\": {\n \"foo\": \"sushi\"\n }\n },\n \"highlight\": {\n \"require_field_match\": false,\n \"fields\": {\n \"foo\": {}\n }\n }\n}'\n```\n\nSpits out:\n\n```\n \"foo\" : [ \"<em>japanese</em> <em>sushi</em>\" ]\n```\n\nWhich it really shouldn't.\n",
"created_at": "2016-01-06T14:18:34Z"
},
{
"body": "I've confirmed you can work around this by setting require_field_match to true or by supplying the original query as a highlight_query.\n",
"created_at": "2016-01-06T14:20:02Z"
}
],
"number": 15689,
"title": "Type name is highlighted when searching in specific type"
}
|
{
"body": "These filters leak into highlighting and probably other places and cause things like the type name to be highlighted when using requireFieldMatch=false. We could have special hacks to keep them out of highlighting but it feals better to keep them out of any variable named \"originalQuery\".\n\nCloses #15689\n",
"number": 15793,
"review_comments": [
{
"body": "Sorry for moving the code around here. The big change is using `filteredQuery` instead of `parsedQuery`.\n",
"created_at": "2016-01-06T17:16:36Z"
},
{
"body": "one too many new line? :)\n",
"created_at": "2016-01-12T15:15:48Z"
}
],
"title": "Don't override originalQuery with request filters"
}
|
{
"commits": [
{
"message": "Add a test that the typename isn't highlighted"
},
{
"message": "Add test for alias filter leaking into highlighter"
},
{
"message": "Don't override originalQuery with request filters\n\nThese filters leak into highlighting and probably other places and cause\nthings like the type name to be highlighted when using\nrequireFieldMatch=false. We could have special hacks to keep them out of\nhighlighting but it feals better to keep them out of any variable named\n\"originalQuery\".\n\nCloses #15689"
}
],
"files": [
{
"diff": "@@ -182,11 +182,10 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re\n searchContext.preProcess();\n \n valid = true;\n- if (request.explain()) {\n- explanation = searchContext.parsedQuery().query().toString();\n- }\n if (request.rewrite()) {\n explanation = getRewrittenQuery(searcher.searcher(), searchContext.query());\n+ } else if (request.explain()) {\n+ explanation = searchContext.filteredQuery().query().toString();\n }\n } catch (QueryShardException|ParsingException e) {\n valid = false;",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java",
"status": "modified"
},
{
"diff": "@@ -125,7 +125,20 @@ public static class Defaults {\n private Sort sort;\n private Float minimumScore;\n private boolean trackScores = false; // when sorting, track scores as well...\n+ /**\n+ * The original query as sent by the user without the types and aliases\n+ * applied. Putting things in here leaks them into highlighting so don't add\n+ * things like the type filter or alias filters.\n+ */\n private ParsedQuery originalQuery;\n+ /**\n+ * Just like originalQuery but with the filters from types and aliases\n+ * applied.\n+ */\n+ private ParsedQuery filteredQuery;\n+ /**\n+ * The query to actually execute.\n+ */\n private Query query;\n private ParsedQuery postFilter;\n private Query aliasFilter;\n@@ -209,29 +222,34 @@ public void preProcess() {\n if (queryBoost() != AbstractQueryBuilder.DEFAULT_BOOST) {\n parsedQuery(new ParsedQuery(new FunctionScoreQuery(query(), new WeightFactorFunction(queryBoost)), parsedQuery()));\n }\n- Query searchFilter = searchFilter(types());\n- if (searchFilter != null) {\n- if (Queries.isConstantMatchAllQuery(query())) {\n- Query q = new ConstantScoreQuery(searchFilter);\n- if (query().getBoost() != AbstractQueryBuilder.DEFAULT_BOOST) {\n- q = new BoostQuery(q, query().getBoost());\n- }\n- parsedQuery(new ParsedQuery(q, parsedQuery()));\n- } else {\n- BooleanQuery filtered = new BooleanQuery.Builder()\n- .add(query(), Occur.MUST)\n- .add(searchFilter, Occur.FILTER)\n- .build();\n- parsedQuery(new ParsedQuery(filtered, parsedQuery()));\n- }\n- }\n+ filteredQuery(buildFilteredQuery());\n try {\n this.query = searcher().rewrite(this.query);\n } catch (IOException e) {\n throw new QueryPhaseExecutionException(this, \"Failed to rewrite main query\", e);\n }\n }\n \n+ private ParsedQuery buildFilteredQuery() {\n+ Query searchFilter = searchFilter(types());\n+ if (searchFilter == null) {\n+ return originalQuery;\n+ }\n+ Query result;\n+ if (Queries.isConstantMatchAllQuery(query())) {\n+ result = new ConstantScoreQuery(searchFilter);\n+ if (query().getBoost() != AbstractQueryBuilder.DEFAULT_BOOST) {\n+ result = new BoostQuery(result, query().getBoost());\n+ }\n+ } else {\n+ result = new BooleanQuery.Builder()\n+ .add(query, Occur.MUST)\n+ .add(searchFilter, Occur.FILTER)\n+ .build();\n+ }\n+ return new ParsedQuery(result, originalQuery);\n+ }\n+\n @Override\n public Query searchFilter(String[] types) {\n Query filter = mapperService().searchFilter(types);\n@@ -546,6 +564,15 @@ public SearchContext parsedQuery(ParsedQuery query) {\n return this;\n }\n \n+ public ParsedQuery filteredQuery() {\n+ return filteredQuery;\n+ }\n+\n+ private void filteredQuery(ParsedQuery filteredQuery) {\n+ this.filteredQuery = filteredQuery;\n+ this.query = filteredQuery.query();\n+ }\n+\n @Override\n public ParsedQuery parsedQuery() {\n return this.originalQuery;",
"filename": "core/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.highlight;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -2533,6 +2534,43 @@ public void testPostingsHighlighterManyDocs() throws Exception {\n }\n }\n \n+ public void testDoesNotHighlightTypeName() throws Exception {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"typename\").startObject(\"properties\")\n+ .startObject(\"foo\").field(\"type\", \"string\")\n+ .field(\"index_options\", \"offsets\")\n+ .field(\"term_vector\", \"with_positions_offsets\")\n+ .endObject().endObject().endObject().endObject();\n+ assertAcked(prepareCreate(\"test\").addMapping(\"typename\", mapping));\n+ ensureGreen();\n+\n+ indexRandom(true, client().prepareIndex(\"test\", \"typename\").setSource(\"foo\", \"test typename\"));\n+\n+ for (String highlighter: new String[] {\"plain\", \"fvh\", \"postings\"}) {\n+ SearchResponse response = client().prepareSearch(\"test\").setTypes(\"typename\").setQuery(matchQuery(\"foo\", \"test\"))\n+ .highlighter(new HighlightBuilder().field(\"foo\").highlighterType(highlighter).requireFieldMatch(false)).get();\n+ assertHighlight(response, 0, \"foo\", 0, 1, equalTo(\"<em>test</em> typename\"));\n+ }\n+ }\n+\n+ public void testDoesNotHighlightAliasFilters() throws Exception {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"typename\").startObject(\"properties\")\n+ .startObject(\"foo\").field(\"type\", \"string\")\n+ .field(\"index_options\", \"offsets\")\n+ .field(\"term_vector\", \"with_positions_offsets\")\n+ .endObject().endObject().endObject().endObject();\n+ assertAcked(prepareCreate(\"test\").addMapping(\"typename\", mapping));\n+ assertAcked(client().admin().indices().prepareAliases().addAlias(\"test\", \"filtered_alias\", matchQuery(\"foo\", \"japanese\")));\n+ ensureGreen();\n+\n+ indexRandom(true, client().prepareIndex(\"test\", \"typename\").setSource(\"foo\", \"test japanese\"));\n+\n+ for (String highlighter: new String[] {\"plain\", \"fvh\", \"postings\"}) {\n+ SearchResponse response = client().prepareSearch(\"filtered_alias\").setTypes(\"typename\").setQuery(matchQuery(\"foo\", \"test\"))\n+ .highlighter(new HighlightBuilder().field(\"foo\").highlighterType(highlighter).requireFieldMatch(false)).get();\n+ assertHighlight(response, 0, \"foo\", 0, 1, equalTo(\"<em>test</em> japanese\"));\n+ }\n+ }\n+\n @AwaitsFix(bugUrl=\"Broken now that BoostingQuery does not extend BooleanQuery anymore\")\n public void testFastVectorHighlighterPhraseBoost() throws Exception {\n assertAcked(prepareCreate(\"test\").addMapping(\"type1\", type1TermVectorMapping()));",
"filename": "core/src/test/java/org/elasticsearch/search/highlight/HighlighterSearchIT.java",
"status": "modified"
},
{
"diff": "@@ -105,7 +105,7 @@ public void testExplainValidateQueryTwoNodes() throws IOException {\n }\n \n for (Client client : internalCluster()) {\n- ValidateQueryResponse response = client.admin().indices().prepareValidateQuery(\"test\")\n+ ValidateQueryResponse response = client.admin().indices().prepareValidateQuery(\"test\")\n .setQuery(QueryBuilders.queryStringQuery(\"foo\"))\n .setExplain(true)\n .execute().actionGet();",
"filename": "core/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java",
"status": "modified"
}
]
}
|
{
"body": "When executing a replication request against a shard that is relocating, the request is routed to the target of the relocation. However, if that request fails we currently fail the source of the relocation. This correctly causes the target of the relocation to be marked as failed but since it leads to both the source and the target being marked as failed, it leads to an unnecessary recovery.\n",
"comments": [],
"number": 15790,
"title": "Failed replication requests on relocating shards causes unnecessary recoveries"
}
|
{
"body": "This commit addresses an issue when handling a failed replication\nrequest against a relocating target shard. Namely, if a replication\nrequest fails against the target of a relocation we currently fail both\nthe source and the target. This leads to an unnecessary\nrecovery. Instead, only the target of the relocation should be failed.\n\nCloses #15790\n",
"number": 15791,
"review_comments": [
{
"body": "I'm fine with this for this fix (to keep it small), but it's a shame we create these objects. As a followup I think we should change IndexShardRoutingTable#activeShards to include relocation targets (as allInitializingShards does). Then we can use that list which is cached and we also won't need the relocation check,\n",
"created_at": "2016-01-06T16:43:20Z"
},
{
"body": "we have a utility method RoutingTable.shardRoutingTable which gives you what you want. Also IndexShardRoutingTabel is iterable so we don't need the shard list.\n",
"created_at": "2016-01-06T16:44:34Z"
},
{
"body": "use == false . Also where do we check that the total number of captured requests is what we expect? (to guarantee it doesn't contain the node the primary shard is on).\n",
"created_at": "2016-01-06T16:47:16Z"
},
{
"body": "== false\n",
"created_at": "2016-01-06T16:47:40Z"
},
{
"body": "we always run on the node with primary, so I don't think we need this check?\n",
"created_at": "2016-01-06T16:48:21Z"
},
{
"body": "I wonder if we should use equality here?\n",
"created_at": "2016-01-06T16:49:19Z"
},
{
"body": "sorry - got confused - I thought this code skips the primary. I think it will be clear if the if here would say `ShardRouting.primary() == false`\n",
"created_at": "2016-01-06T16:52:44Z"
},
{
"body": "this can go away now...\n",
"created_at": "2016-01-06T17:49:25Z"
},
{
"body": "@bleskes Removed in bb4d857e447b64c31812fbbde0af7619b4518aed.\n",
"created_at": "2016-01-06T17:52:04Z"
},
{
"body": "> As a followup I think we should change IndexShardRoutingTable#activeShards to include relocation targets (as allInitializingShards does). Then we can use that list which is cached and we also won't need the relocation check,\n\n@bleskes I opened #15798.\n",
"created_at": "2016-01-06T18:50:23Z"
}
],
"title": "Only fail the relocation target when a replication request on it fails"
}
|
{
"commits": [
{
"message": "Assert that we fail the correct shard when a replication request fails\n\nThis commit adds an assertion to\nTransportReplicationActionTests#runReplicateTest that when a replication\nrequest fails, we fail the correct shard."
},
{
"message": "Only fail the relocation target when a replication request on it fails\n\nThis commit addresses an issue when handling a failed replication\nrequest against a relocating target shard. Namely, if a replication\nrequest fails against the target of a relocation we currently fail both\nthe source and the target. This leads to an unnecessary\nrecovery. Instead, only the target of the relocation should be failed."
},
{
"message": "Assert that replication requests are sent to the correct shard copies\n\nThis commit adds tighter assertions in\nTransportReplicationActionTests#runReplicateTest that replication\nrequests are sent to the correct shard copies."
},
{
"message": "Cleanup TransportReplicationActionTests#runReplicateTest\n\nThis commit cleans up some of the assertions in\nTransportReplicationActionTests#runReplicateTest:\n - use a Map to track actual vs. expected requests\n - assert that no request was sent to the local node\n - use RoutingTable#shardRoutingTable convenience method\n - explicitly use false in boolean conditions\n - clarify requests are expected on replica shards when assigned and\n execution on replicas is true\n - test ShardRouting equality when checking the failed shard request"
},
{
"message": "Redundant assertion in TransportReplicationActionTests#runReplicateTest"
}
],
"files": [
{
"diff": "@@ -844,21 +844,22 @@ protected void doRun() {\n // we never execute replication operation locally as primary operation has already completed locally\n // hence, we ignore any local shard for replication\n if (nodes.localNodeId().equals(shard.currentNodeId()) == false) {\n- performOnReplica(shard, shard.currentNodeId());\n+ performOnReplica(shard);\n }\n // send operation to relocating shard\n if (shard.relocating()) {\n- performOnReplica(shard, shard.relocatingNodeId());\n+ performOnReplica(shard.buildTargetRelocatingShard());\n }\n }\n }\n \n /**\n * send replica operation to target node\n */\n- void performOnReplica(final ShardRouting shard, final String nodeId) {\n+ void performOnReplica(final ShardRouting shard) {\n // if we don't have that node, it means that it might have failed and will be created again, in\n // this case, we don't have to do the operation, and just let it failover\n+ String nodeId = shard.currentNodeId();\n if (!nodes.nodeExists(nodeId)) {\n logger.trace(\"failed to send action [{}] on replica [{}] for request [{}] due to unknown node [{}]\", transportReplicaAction, shard.shardId(), replicaRequest, nodeId);\n onReplicaFailure(nodeId, null);",
"filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java",
"status": "modified"
},
{
"diff": "@@ -302,6 +302,10 @@ public ShardRoutingEntry() {\n this.failure = failure;\n }\n \n+ public ShardRouting getShardRouting() {\n+ return shardRouting;\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);",
"filename": "core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java",
"status": "modified"
},
{
"diff": "@@ -65,6 +65,7 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n import java.util.concurrent.CountDownLatch;\n@@ -75,9 +76,13 @@\n \n import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.state;\n import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.stateWithStartedPrimary;\n+import static org.hamcrest.CoreMatchers.not;\n import static org.hamcrest.Matchers.arrayWithSize;\n+import static org.hamcrest.Matchers.empty;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasItem;\n import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.nullValue;\n \n@@ -486,7 +491,39 @@ action.new ReplicationPhase(request,\n replicationPhase.run();\n final CapturingTransport.CapturedRequest[] capturedRequests = transport.capturedRequests();\n transport.clear();\n- assertThat(capturedRequests.length, equalTo(assignedReplicas));\n+\n+ HashMap<String, Request> nodesSentTo = new HashMap<>();\n+ boolean executeOnReplica =\n+ action.shouldExecuteReplication(clusterService.state().getMetaData().index(shardId.getIndex()).getSettings());\n+ for (CapturingTransport.CapturedRequest capturedRequest : capturedRequests) {\n+ // no duplicate requests\n+ Request replicationRequest = (Request) capturedRequest.request;\n+ assertNull(nodesSentTo.put(capturedRequest.node.getId(), replicationRequest));\n+ // the request is hitting the correct shard\n+ assertEquals(request.shardId, replicationRequest.shardId);\n+ }\n+\n+ // no request was sent to the local node\n+ assertThat(nodesSentTo.keySet(), not(hasItem(clusterService.state().getNodes().localNodeId())));\n+\n+ // requests were sent to the correct shard copies\n+ for (ShardRouting shard : clusterService.state().getRoutingTable().shardRoutingTable(shardId.getIndex(), shardId.id())) {\n+ if (shard.primary() == false && executeOnReplica == false) {\n+ continue;\n+ }\n+ if (shard.unassigned()) {\n+ continue;\n+ }\n+ if (shard.primary() == false) {\n+ nodesSentTo.remove(shard.currentNodeId());\n+ }\n+ if (shard.relocating()) {\n+ nodesSentTo.remove(shard.relocatingNodeId());\n+ }\n+ }\n+\n+ assertThat(nodesSentTo.entrySet(), is(empty()));\n+\n if (assignedReplicas > 0) {\n assertThat(\"listener is done, but there are outstanding replicas\", listener.isDone(), equalTo(false));\n }\n@@ -511,6 +548,12 @@ action.new ReplicationPhase(request,\n transport.clear();\n assertEquals(1, shardFailedRequests.length);\n CapturingTransport.CapturedRequest shardFailedRequest = shardFailedRequests[0];\n+ // get the shard the request was sent to\n+ ShardRouting routing = clusterService.state().getRoutingNodes().node(capturedRequest.node.id()).get(request.shardId.id());\n+ // and the shard that was requested to be failed\n+ ShardStateAction.ShardRoutingEntry shardRoutingEntry = (ShardStateAction.ShardRoutingEntry)shardFailedRequest.request;\n+ // the shard the request was sent to and the shard to be failed should be the same\n+ assertEquals(shardRoutingEntry.getShardRouting(), routing);\n failures.add(shardFailedRequest);\n transport.handleResponse(shardFailedRequest.requestId, TransportResponse.Empty.INSTANCE);\n }",
"filename": "core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java",
"status": "modified"
}
]
}
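The heart of the fix above is that a replica operation is now addressed by a `ShardRouting` rather than by a bare node id, so when the request fails, the failure handler holds the relocation target's own routing and fails only that copy. A minimal sketch of the fan-out, reusing only names that appear in the diff (`currentNodeId()`, `relocating()`, `buildTargetRelocatingShard()`); it is a fragment, not the full `TransportReplicationAction`:

``` java
// Sketch of the replication fan-out after the fix (assumes the Elasticsearch core
// classes referenced in the diff: ShardRouting, DiscoveryNodes).
for (ShardRouting shard : shards) {
    // the primary operation already ran locally, so the local copy is skipped
    if (nodes.localNodeId().equals(shard.currentNodeId()) == false) {
        performOnReplica(shard);
    }
    if (shard.relocating()) {
        // the relocation target is addressed through its own ShardRouting, so a failure
        // of this request marks only the target as failed, never the relocation source
        performOnReplica(shard.buildTargetRelocatingShard());
    }
}
```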
|
{
"body": "In versions of Elasticsearch prior to 2.0 the documentation for the path_hierarchy analyzer says the 'type' field has these requirements. \n\n```\nRequired. Must be set to PathHierarchy (case-sensitive).\n```\n\nhttps://www.elastic.co/guide/en/elasticsearch/reference/1.7/analysis-pathhierarchy-tokenizer.html\n\nIn 2.0+ that is now incorrect and if you have an index that uses type: 'PathHierarchy' the upgrade process from 1.7.\\* to 2.\\* fails on each shard of the index. It requires it to be type: 'path_hierarchy'. \n\nWe have a user that has an index with many billions of records in it that uses this analyzer as specified by the 1.7 documentation. This index is so large it is infeasible to reindex to simply change this part of the mapping. Is this an intentional change in 2._? Is there any way to patch the mapping attached to the shards so that an upgrade to 2._ is possible?\n\nNote: this issue is not flagged by the migration plugin. We were testing the upgrade on a test cluster and only discovered the issue when the shards attempted to upgrade. It just logs the following stack repeatedly for each shard and effectively corrupts the index since other indices had upgraded successfully while this type fails. In that scenario the cluster can't be reverted without data loss.\n\n```\n[2015-12-14 18:42:56,825][WARN ][indices.cluster ] [node] [[logs-2015.12.11][3]] marking and sending shard failed due to [failed to create index]\n[logs-2015.12.11] IndexCreationException[failed to create index]; nested: IllegalArgumentException[Unknown Tokenizer type [PathHierarchy] for [field_tokens]];\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:362)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewIndices(IndicesClusterStateService.java:307)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:176)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:494)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.IllegalArgumentException: Unknown Tokenizer type [PathHierarchy] for [field_tokens]\n at org.elasticsearch.index.analysis.AnalysisModule.configure(AnalysisModule.java:267)\n at org.elasticsearch.common.inject.AbstractModule.configure(AbstractModule.java:61)\n at org.elasticsearch.common.inject.spi.Elements$RecordingBinder.install(Elements.java:233)\n at org.elasticsearch.common.inject.spi.Elements.getElements(Elements.java:105)\n at org.elasticsearch.common.inject.InjectorShell$Builder.build(InjectorShell.java:143)\n at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:99)\n at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:159)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:55)\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:358)\n ... 8 more\n```\n",
"comments": [
{
"body": "This is indeed a bug. We will provide a fix for the next release. If you can't wait for the next release, you might try the following before upgrading to 2.x (try it first on a test cluster, and make backups before proceeding): Close indices. Update settings of indices by changing PathHierarchy to path_hierarchy. Open indices. That should be it. You can then upgrade to 2.x.\n\nExample (tested on 1.7.4):\n\n```\n\n# This is the example index we start with\ncurl -X POST http://localhost:9200/myindex/ -d'\n{\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"path-analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"path-tokenizer\"\n }\n },\n \"tokenizer\": {\n \"path-tokenizer\": {\n \"type\": \"PathHierarchy\",\n \"delimiter\": \".\"\n }\n }\n }\n },\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"text\": {\n \"type\": \"string\",\n \"analyzer\": \"path-analyzer\"\n }\n }\n }\n }\n}'\n\n# Close index\ncurl -X POST http://localhost:9200/myindex/_close\n\n# Update settings by changing PathHierarchy to path_hierarchy\ncurl -X PUT http://localhost:9200/myindex/_settings -d'\n{\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"path-analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"path-tokenizer\"\n }\n },\n \"tokenizer\": {\n \"path-tokenizer\": {\n \"type\": \"path_hierarchy\",\n \"delimiter\": \".\"\n }\n }\n }\n }\n}'\n\n# reopen index\ncurl -X POST http://localhost:9200/myindex/_open\n```\n",
"created_at": "2016-01-05T16:35:46Z"
},
{
"body": "Fixed in #15785\n",
"created_at": "2016-01-06T13:51:19Z"
}
],
"number": 15756,
"title": "PathHierarchy vs path_hierarchy"
}
|
{
"body": "Relates to #15756\n",
"number": 15785,
"review_comments": [],
"title": "Add PathHierarchy type back to path_hierarchy tokenizer for backward compatibility with 1.x"
}
|
{
"commits": [
{
"message": "Add PathHierarchy type back to path_hierarchy tokenizer for backward compatibility with 1.x\n\nCloses #15785"
}
],
"files": [
{
"diff": "@@ -181,6 +181,7 @@ private void registerBuiltInTokenizer(Map<String, AnalysisModule.AnalysisProvide\n tokenizers.put(\"standard\", StandardTokenizerFactory::new);\n tokenizers.put(\"uax_url_email\", UAX29URLEmailTokenizerFactory::new);\n tokenizers.put(\"path_hierarchy\", PathHierarchyTokenizerFactory::new);\n+ tokenizers.put(\"PathHierarchy\", PathHierarchyTokenizerFactory::new);\n tokenizers.put(\"keyword\", KeywordTokenizerFactory::new);\n tokenizers.put(\"letter\", LetterTokenizerFactory::new);\n tokenizers.put(\"lowercase\", LowerCaseTokenizerFactory::new);\n@@ -409,6 +410,7 @@ private PrebuiltAnalysis() {\n // Tokenizer aliases\n tokenizerFactories.put(\"nGram\", new PreBuiltTokenizerFactoryFactory(PreBuiltTokenizers.NGRAM.getTokenizerFactory(Version.CURRENT)));\n tokenizerFactories.put(\"edgeNGram\", new PreBuiltTokenizerFactoryFactory(PreBuiltTokenizers.EDGE_NGRAM.getTokenizerFactory(Version.CURRENT)));\n+ tokenizerFactories.put(\"PathHierarchy\", new PreBuiltTokenizerFactoryFactory(PreBuiltTokenizers.PATH_HIERARCHY.getTokenizerFactory(Version.CURRENT)));\n \n \n // Token filters",
"filename": "core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java",
"status": "modified"
}
]
}
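The change is purely additive: the legacy spelling is registered as a second key that resolves to the same factory, so mappings created against the 1.x documentation keep working after an upgrade. A tiny, self-contained sketch of that alias pattern in plain Java (illustrative names only, not the Elasticsearch registry API):

``` java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

final class TokenizerAliasSketch {
    public static void main(String[] args) {
        // both the documented name and the legacy 1.x spelling point at the same factory
        Map<String, Supplier<String>> tokenizers = new HashMap<>();
        Supplier<String> pathHierarchyFactory = () -> "path_hierarchy tokenizer";
        tokenizers.put("path_hierarchy", pathHierarchyFactory);
        tokenizers.put("PathHierarchy", pathHierarchyFactory); // legacy alias for upgrades from 1.x

        // a mapping that still says "PathHierarchy" resolves exactly like "path_hierarchy"
        System.out.println(tokenizers.get("PathHierarchy").get());
    }
}
```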
|
{
"body": "This test rarely fails and is often not reproducible, but I had a new failure on my local CI on my feature branch that I found reliably reproduces on master too:\n\n``` bash\n$ gradle \\\n> :core:clean \\\n> :core:integTest \\\n> -Dtests.seed=7B8A12D17560A5D \\\n> -Dtests.class=org.elasticsearch.search.basic.SearchWithRandomIOExceptionsIT \\\n> -Dtests.method=testRandomDirectoryIOExceptions\n```\n\nTake note that the output logs approach 45 MB in size.\n\nI ran `git-bisect` and it looks like this issue was introduced with fcfd98e9e89231d748ae66c81791b0b08b0c6200.\n",
"comments": [
{
"body": "@jasontedor was this a ci run? if so, can you add a link? also, stack trace would be good, just to know at a glance what failed...\n",
"created_at": "2016-01-04T15:25:44Z"
},
{
"body": "@bleskes It failed on my local CI, not the public CI and the entire log output is 45 MB. Here's a relevant snippet of the logs:\n\n```\n[2016-01-03 14:26:38,400][WARN ][org.elasticsearch.index.engine] [node_s0] [test][1] failed engine [index]\njava.io.IOException: a random IOException (_0.fdx)\n at org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOException(MockDirectoryWrapper.java:445)\n at org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:151)\n at org.apache.lucene.store.MockIndexOutputWrapper.writeByte(MockIndexOutputWrapper.java:127)\n at org.apache.lucene.store.DataOutput.writeInt(DataOutput.java:70)\n at org.apache.lucene.codecs.CodecUtil.writeHeader(CodecUtil.java:91)\n at org.apache.lucene.codecs.CodecUtil.writeIndexHeader(CodecUtil.java:134)\n at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:116)\n at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)\n at org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)\n at org.apache.lucene.codecs.asserting.AssertingStoredFieldsFormat.fieldsWriter(AssertingStoredFieldsFormat.java:49)\n at org.apache.lucene.index.DefaultIndexingChain.initStoredFieldsWriter(DefaultIndexingChain.java:81)\n at org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:258)\n at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:295)\n at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:234)\n at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)\n at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1477)\n at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)\n at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:407)\n at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:358)\n at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:503)\n at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnReplica(TransportIndexAction.java:187)\n at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnReplica(TransportIndexAction.java:169)\n at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnReplica(TransportIndexAction.java:64)\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:380)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:286)\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:283)\n at org.elasticsearch.transport.local.LocalTransport$2.doRun(LocalTransport.java:296)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2016-01-03 14:26:38,403][WARN ][org.elasticsearch.action.index] [node_s1] 
[test][1] failed to perform indices:data/write/index[r] on node {node_s0}{TCOt3YwhSvefWkM7AUfErQ}{local}{local[59]}[mode=>local]\nRemoteTransportException[[node_s1][local[60]][indices:data/write/index[r]]]; nested: IndexFailedEngineException[Index failed for [type#5]]; nested: NotSerializableExceptionWrapper[a random IOException (_0.fdx)];\nCaused by: [test][[test][1]] IndexFailedEngineException[Index failed for [type#5]]; nested: NotSerializableExceptionWrapper[a random IOException (_0.fdx)];\n at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:363)\n at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:503)\n at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnReplica(TransportIndexAction.java:187)\n at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnReplica(TransportIndexAction.java:169)\n at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnReplica(TransportIndexAction.java:64)\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:380)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:286)\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:283)\n at org.elasticsearch.transport.local.LocalTransport$2.doRun(LocalTransport.java:296)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: NotSerializableExceptionWrapper[a random IOException (_0.fdx)]\n at org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOException(MockDirectoryWrapper.java:445)\n at org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:151)\n at org.apache.lucene.store.MockIndexOutputWrapper.writeByte(MockIndexOutputWrapper.java:127)\n at org.apache.lucene.store.DataOutput.writeInt(DataOutput.java:70)\n at org.apache.lucene.codecs.CodecUtil.writeHeader(CodecUtil.java:91)\n at org.apache.lucene.codecs.CodecUtil.writeIndexHeader(CodecUtil.java:134)\n at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:116)\n at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)\n at org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)\n at org.apache.lucene.codecs.asserting.AssertingStoredFieldsFormat.fieldsWriter(AssertingStoredFieldsFormat.java:49)\n at org.apache.lucene.index.DefaultIndexingChain.initStoredFieldsWriter(DefaultIndexingChain.java:81)\n at org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:258)\n at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:295)\n at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:234)\n at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)\n at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1477)\n at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)\n at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:407)\n at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:358)\n ... 13 more\n```\n\n@bleskes Let me know if you want me to share the entire log output somewhere, but since this seems to reproduce 100% of the time with this seed I don't think that will be needed?\n",
"created_at": "2016-01-04T15:29:09Z"
}
],
"number": 15754,
"title": "Reproducible SearchWithRandomIOExceptionsIT failure"
}
|
{
"body": "If we fail to create a writer all recovered translog readers are not\nclosed today which causes all open files to leak.\n\nCloses #15754\n\n@jasontedor @mikemccand can you take a look\n",
"number": 15762,
"review_comments": [],
"title": "Close recovered translog readers if createWriter fails"
}
|
{
"commits": [
{
"message": "Close recovered translog readers if createWriter fails\n\nIf we fail to create a writer all recovered translog readers are not\nclosed today which causes all open files to leak.\n\nCloses #15754"
}
],
"files": [
{
"diff": "@@ -167,8 +167,19 @@ public Translog(TranslogConfig config) throws IOException {\n if (recoveredTranslogs.isEmpty()) {\n throw new IllegalStateException(\"at least one reader must be recovered\");\n }\n- current = createWriter(checkpoint.generation + 1);\n- this.lastCommittedTranslogFileGeneration = translogGeneration.translogFileGeneration;\n+ boolean success = false;\n+ try {\n+ current = createWriter(checkpoint.generation + 1);\n+ this.lastCommittedTranslogFileGeneration = translogGeneration.translogFileGeneration;\n+ success = true;\n+ } finally {\n+ // we have to close all the recovered ones otherwise we leak file handles here\n+ // for instance if we have a lot of tlog and we can't create the writer we keep on holding\n+ // on to all the uncommitted tlog files if we don't close\n+ if (success == false) {\n+ IOUtils.closeWhileHandlingException(recoveredTranslogs);\n+ }\n+ }\n } else {\n this.recoveredTranslogs = Collections.emptyList();\n IOUtils.rm(location);",
"filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java",
"status": "modified"
},
{
"diff": "@@ -138,7 +138,7 @@ protected Translog.Operation read(BufferedChecksumStreamInput inStream) throws I\n abstract protected void readBytes(ByteBuffer buffer, long position) throws IOException;\n \n @Override\n- public void close() throws IOException {\n+ public final void close() throws IOException {\n if (closed.compareAndSet(false, true)) {\n channelReference.decRef();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/TranslogReader.java",
"status": "modified"
},
{
"diff": "@@ -1590,4 +1590,27 @@ public int write(ByteBuffer src) throws IOException {\n private static final class UnknownException extends RuntimeException {\n \n }\n+\n+ // see https://github.com/elastic/elasticsearch/issues/15754\n+ public void testFailWhileCreateWriteWithRecoveredTLogs() throws IOException {\n+ Path tempDir = createTempDir();\n+ TranslogConfig config = getTranslogConfig(tempDir);\n+ Translog translog = new Translog(config);\n+ translog.add(new Translog.Index(\"test\", \"boom\", \"boom\".getBytes(Charset.forName(\"UTF-8\"))));\n+ Translog.TranslogGeneration generation = translog.getGeneration();\n+ translog.close();\n+ config.setTranslogGeneration(generation);\n+ try {\n+ new Translog(config) {\n+ @Override\n+ protected TranslogWriter createWriter(long fileGeneration) throws IOException {\n+ throw new MockDirectoryWrapper.FakeIOException();\n+ }\n+ };\n+ // if we have a LeakFS here we fail if not all resources are closed\n+ fail(\"should have been failed\");\n+ } catch (MockDirectoryWrapper.FakeIOException ex) {\n+ // all is well\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java",
"status": "modified"
}
]
}
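The fix is an instance of a common "release what you already acquired if a later step throws" idiom: track success with a flag, and close the recovered readers in a finally block when the writer cannot be created, so the original exception still propagates. A self-contained sketch of the idiom in plain Java (illustrative names, not the Translog API):

``` java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

final class CloseOnFailureSketch {

    interface WriterFactory {
        Closeable create() throws IOException;
    }

    static Closeable openWriterOrCleanUp(List<? extends Closeable> recoveredReaders,
                                         WriterFactory factory) throws IOException {
        boolean success = false;
        try {
            Closeable writer = factory.create(); // may throw, e.g. on a full or failing disk
            success = true;
            return writer;
        } finally {
            if (success == false) {
                // close everything recovered so far so no file handles leak; secondary
                // close failures are swallowed and the original exception propagates
                for (Closeable reader : recoveredReaders) {
                    try {
                        reader.close();
                    } catch (IOException suppressed) {
                        // intentionally ignored
                    }
                }
            }
        }
    }
}
```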
|
{
"body": "`missing` values are great to work with - but they do not work within more complex queries.\n\nHere a simple example to reproduce the bug.\nLet's say, we have some index, with optional fields:\n\n```\nPUT /missing-values/test/1_with_my_field\n{\n \"someField\": \"optionalField_is_present\",\n \"optionalField\": \"present\"\n}\nPUT /missing-values/test/1_without_my_field\n{\n \"someField\": \"optionalField_is_missing\"\n}\n```\n\nNow I want to use the fields in some term aggregation. I want information on all items; even if the optional field is missing, so I can use this code:\n\n```\nGET /missing-values/test/_search\n{\n \"aggregations\" : {\n \"test\": {\n \"terms\" : {\n \"field\" : \"optionalField\",\n \"missing\": \"n/a\"\n },\n \"aggregations\" : {\n \"test\": {\n \"terms\" : {\n \"field\" : \"someField\"\n }\n }\n }\n }\n }\n}\n```\n\nThat works fine as long as the `missing` stuff is applied on the outer level only.\nIf I change the order of the fields:\n\n```\nGET /missing-values/test/_search\n{\n \"aggregations\" : {\n \"test\": {\n \"terms\" : {\n \"field\" : \"someField\"\n },\n \"aggregations\" : {\n \"test\": {\n \"terms\" : {\n \"field\" : \"optionalField\",\n \"missing\": \"n/a\"\n }\n }\n }\n }\n }\n}\n```\n\nI receive an exception due to internally used ordinals:\n`org.elasticsearch.search.aggregations.support.MissingValues$6 cannot be cast to org.elasticsearch.search.aggregations.support.ValuesSource$Bytes$WithOrdinals$FieldData`\n",
"comments": [
{
"body": "I can confirm this bug in `2.1.1` version of `ES` using `java API`\n",
"created_at": "2015-12-30T01:52:41Z"
}
],
"number": 14882,
"title": "MissingValues don't work inside sub aggregations ()"
}
|
{
"body": "There are two bugs:\n- the 'global_ordinals_low_cardinality' mode requires a fielddata-based impl so\n that it can extract the segment to global ordinal mapping\n- the 'global_ordinals_hash' mode abusively casts to the values source to a\n fielddata-based impl while it is not needed\n\nCloses #14882\n",
"number": 15746,
"review_comments": [],
"title": "Make `missing` on terms aggs work with all execution modes."
}
|
{
"commits": [
{
"message": "Make `missing` on terms aggs work with all execution modes.\n\nThere are two bugs:\n - the 'global_ordinals_low_cardinality' mode requires a fielddata-based impl so\n that it can extract the segment to global ordinal mapping\n - the 'global_ordinals_hash' mode abusively casts to the values source to a\n fielddata-based impl while it is not needed\n\nCloses #14882"
}
],
"files": [
{
"diff": "@@ -267,9 +267,9 @@ public static class WithHash extends GlobalOrdinalsStringTermsAggregator {\n \n private final LongHash bucketOrds;\n \n- public WithHash(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals.FieldData valuesSource,\n+ public WithHash(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals valuesSource,\n Terms.Order order, BucketCountThresholds bucketCountThresholds, IncludeExclude.OrdinalsFilter includeExclude, AggregationContext aggregationContext,\n- Aggregator parent, SubAggCollectionMode collectionMode,\n+ Aggregator parent, SubAggCollectionMode collectionMode,\n boolean showTermDocCountError, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData)\n throws IOException {\n super(name, factories, valuesSource, order, bucketCountThresholds, includeExclude, aggregationContext, parent, collectionMode,\n@@ -341,7 +341,7 @@ public static class LowCardinality extends GlobalOrdinalsStringTermsAggregator {\n private RandomAccessOrds segmentOrds;\n \n public LowCardinality(String name, AggregatorFactories factories, ValuesSource.Bytes.WithOrdinals valuesSource,\n- Terms.Order order,\n+ Terms.Order order,\n BucketCountThresholds bucketCountThresholds, AggregationContext aggregationContext, Aggregator parent,\n SubAggCollectionMode collectionMode, boolean showTermDocCountError, List<PipelineAggregator> pipelineAggregators,\n Map<String, Object> metaData) throws IOException {\n@@ -411,11 +411,10 @@ private void mapSegmentCountsToGlobalCounts() {\n // This is the cleanest way I can think of so far\n \n GlobalOrdinalMapping mapping;\n- if (globalOrds instanceof GlobalOrdinalMapping) {\n- mapping = (GlobalOrdinalMapping) globalOrds;\n- } else {\n- assert globalOrds.getValueCount() == segmentOrds.getValueCount();\n+ if (globalOrds.getValueCount() == segmentOrds.getValueCount()) {\n mapping = null;\n+ } else {\n+ mapping = (GlobalOrdinalMapping) globalOrds;\n }\n for (long i = 1; i < segmentDocCounts.size(); i++) {\n // We use set(...) here, because we need to reset the slow to 0.",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -94,7 +94,7 @@ Aggregator create(String name, AggregatorFactories factories, ValuesSource value\n throws IOException {\n final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter();\n return new GlobalOrdinalsStringTermsAggregator.WithHash(name, factories,\n- (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, order, bucketCountThresholds, filter, aggregationContext,\n+ (ValuesSource.Bytes.WithOrdinals) valuesSource, order, bucketCountThresholds, filter, aggregationContext,\n parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData);\n }\n \n@@ -111,7 +111,10 @@ Aggregator create(String name, AggregatorFactories factories, ValuesSource value\n AggregationContext aggregationContext, Aggregator parent, SubAggCollectionMode subAggCollectMode,\n boolean showTermDocCountError, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData)\n throws IOException {\n- if (includeExclude != null || factories.count() > 0) {\n+ if (includeExclude != null || factories.count() > 0\n+ // we need the FieldData impl to be able to extract the\n+ // segment to global ord mapping\n+ || valuesSource.getClass() != ValuesSource.Bytes.FieldData.class) {\n return GLOBAL_ORDINALS.create(name, factories, valuesSource, order, bucketCountThresholds, includeExclude,\n aggregationContext, parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory.ExecutionMode;\n import org.elasticsearch.search.aggregations.metrics.cardinality.Cardinality;\n import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBounds;\n import org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroid;\n@@ -68,18 +69,24 @@ public void testUnmappedTerms() {\n }\n \n public void testStringTerms() {\n- SearchResponse response = client().prepareSearch(\"idx\").addAggregation(terms(\"my_terms\").field(\"str\").missing(\"bar\")).get();\n- assertSearchResponse(response);\n- Terms terms = response.getAggregations().get(\"my_terms\");\n- assertEquals(2, terms.getBuckets().size());\n- assertEquals(1, terms.getBucketByKey(\"foo\").getDocCount());\n- assertEquals(1, terms.getBucketByKey(\"bar\").getDocCount());\n-\n- response = client().prepareSearch(\"idx\").addAggregation(terms(\"my_terms\").field(\"str\").missing(\"foo\")).get();\n- assertSearchResponse(response);\n- terms = response.getAggregations().get(\"my_terms\");\n- assertEquals(1, terms.getBuckets().size());\n- assertEquals(2, terms.getBucketByKey(\"foo\").getDocCount());\n+ for (ExecutionMode mode : ExecutionMode.values()) {\n+ SearchResponse response = client().prepareSearch(\"idx\").addAggregation(\n+ terms(\"my_terms\")\n+ .field(\"str\")\n+ .executionHint(mode.toString())\n+ .missing(\"bar\")).get();\n+ assertSearchResponse(response);\n+ Terms terms = response.getAggregations().get(\"my_terms\");\n+ assertEquals(2, terms.getBuckets().size());\n+ assertEquals(1, terms.getBucketByKey(\"foo\").getDocCount());\n+ assertEquals(1, terms.getBucketByKey(\"bar\").getDocCount());\n+\n+ response = client().prepareSearch(\"idx\").addAggregation(terms(\"my_terms\").field(\"str\").missing(\"foo\")).get();\n+ assertSearchResponse(response);\n+ terms = response.getAggregations().get(\"my_terms\");\n+ assertEquals(1, terms.getBuckets().size());\n+ assertEquals(2, terms.getBucketByKey(\"foo\").getDocCount());\n+ }\n }\n \n public void testLongTerms() {",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/MissingValueIT.java",
"status": "modified"
}
]
}
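Because the regression only shows up when the `missing` wrapper meets the ordinals-based execution modes, the added test loops over every execution hint. A fragment in the same style (it assumes the `ESIntegTestCase`/`MissingValueIT` context and static imports visible in the diff, plus an index `idx` where `optionalField` is absent from some documents; it is not a standalone program):

``` java
for (ExecutionMode mode : ExecutionMode.values()) {
    SearchResponse response = client().prepareSearch("idx")
            .addAggregation(terms("my_terms")
                    .field("optionalField")
                    .executionHint(mode.toString()) // map, global_ordinals, global_ordinals_hash, ...
                    .missing("n/a"))                // bucket for documents without the field
            .get();
    assertSearchResponse(response);
}
```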
|
{
"body": "Reproduce with the following:\n\n``` javascript\n \"mappings\" : {\n \"type\": {\n \"properties\": {\n \"location\": {\n \"type\": \"geo_point\",\n \"fields\": {\n \"geohash\" : {\n \"type\" : \"geo_point\",\n \"geohash_precision\" : 12,\n \"geohash_prefix\" : true\n },\n \"latlon\" : {\n \"type\" : \"geo_point\",\n \"lat_lon\" : true\n }\n }\n }\n }\n }\n }\n```\n\n``` javascript\n{ \"location\" : [-0.1485188, 51.5250666] }\n```\n\nThrows the following error:\n\n``` javascript\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"parse_exception\",\n \"reason\": \"geo_point expected\"\n }\n ],\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"failed to parse\",\n \"caused_by\": {\n \"type\": \"parse_exception\",\n \"reason\": \"geo_point expected\"\n }\n },\n \"status\": 400\n}\n```\n",
"comments": [],
"number": 15701,
"title": "Multi-Fields not working for GeoPoint type"
}
|
{
"body": "This PR fixes multi-field support in `BaseGeoPointFieldMapper` by passing an externalValueContext to the multiField parser. Unit testing is added to ensure coverage.\n\ncloses #15701 \n",
"number": 15702,
"review_comments": [],
"title": "Fix multi-field support for GeoPoint types"
}
|
{
"commits": [
{
"message": "Fix multi-field support for GeoPoint types\n\nThis commit fixes multiField support for GeoPointFieldMapper by passing an externalValueContext to the multiField parser. Unit testing is added for multi field coverage."
},
{
"message": "Reconcile GeoPoint toString and fromString methods\n\nGeoPoint.toString prints as a json array of values, but resetFromString expects comma delimited. This commit reconciles the methods."
}
],
"files": [
{
"diff": "@@ -146,7 +146,7 @@ public int hashCode() {\n \n @Override\n public String toString() {\n- return \"[\" + lat + \", \" + lon + \"]\";\n+ return lat + \", \" + lon;\n }\n \n public static GeoPoint parseFromLatLon(String latLon) {",
"filename": "core/src/main/java/org/elasticsearch/common/geo/GeoPoint.java",
"status": "modified"
},
{
"diff": "@@ -338,7 +338,7 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n */\n public static ValueAndBoost parseCreateFieldForString(ParseContext context, String nullValue, float defaultBoost) throws IOException {\n if (context.externalValueSet()) {\n- return new ValueAndBoost((String) context.externalValue(), defaultBoost);\n+ return new ValueAndBoost(context.externalValue().toString(), defaultBoost);\n }\n XContentParser parser = context.parser();\n if (parser.currentToken() == XContentParser.Token.VALUE_NULL) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -412,7 +412,7 @@ protected void parse(ParseContext context, GeoPoint point, String geoHash) throw\n latMapper.parse(context.createExternalValueContext(point.lat()));\n lonMapper.parse(context.createExternalValueContext(point.lon()));\n }\n- multiFields.parse(this, context);\n+ multiFields.parse(this, context.createExternalValueContext(point));\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/geo/BaseGeoPointFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -37,6 +38,7 @@\n import org.elasticsearch.search.SearchHitField;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.test.VersionUtils;\n+import org.elasticsearch.test.geo.RandomGeoGenerator;\n \n import java.util.List;\n import java.util.Map;\n@@ -787,4 +789,32 @@ public void testGeoHashSearchWithPrefix() throws Exception {\n assertEquals(\"dr5regy6rc6y\".substring(0, numHashes-i), hashes.get(i));\n }\n }\n+\n+ public void testMultiField() throws Exception {\n+ int numDocs = randomIntBetween(10, 100);\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"pin\").startObject(\"properties\").startObject(\"location\")\n+ .field(\"type\", \"geo_point\").startObject(\"fields\")\n+ .startObject(\"geohash\").field(\"type\", \"geo_point\").field(\"geohash_precision\", 12).field(\"geohash_prefix\", true).endObject()\n+ .startObject(\"latlon\").field(\"type\", \"geo_point\").field(\"lat_lon\", true).endObject().endObject()\n+ .endObject().endObject().endObject().endObject().string();\n+ CreateIndexRequestBuilder mappingRequest = client().admin().indices().prepareCreate(\"test\")\n+ .addMapping(\"pin\", mapping);\n+ mappingRequest.execute().actionGet();\n+\n+ // create index and add random test points\n+ client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForGreenStatus().execute().actionGet();\n+ for (int i=0; i<numDocs; ++i) {\n+ final GeoPoint pt = RandomGeoGenerator.randomPoint(random());\n+ client().prepareIndex(\"test\", \"pin\").setSource(jsonBuilder().startObject().startObject(\"location\").field(\"lat\", pt.lat())\n+ .field(\"lon\", pt.lon()).endObject().endObject()).setRefresh(true).execute().actionGet();\n+ }\n+\n+ // query by geohash subfield\n+ SearchResponse searchResponse = client().prepareSearch().addField(\"location.geohash\").setQuery(matchAllQuery()).execute().actionGet();\n+ assertEquals(numDocs, searchResponse.getHits().totalHits());\n+\n+ // query by latlon subfield\n+ searchResponse = client().prepareSearch().addField(\"location.latlon\").setQuery(matchAllQuery()).execute().actionGet();\n+ assertEquals(numDocs, searchResponse.getHits().totalHits());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -121,12 +122,13 @@ public void testGeoPointMultiField() throws Exception {\n assertThat(bField.get(\"type\").toString(), equalTo(\"string\"));\n assertThat(bField.get(\"index\").toString(), equalTo(\"not_analyzed\"));\n \n- client().prepareIndex(\"my-index\", \"my-type\", \"1\").setSource(\"a\", \"51,19\").setRefresh(true).get();\n+ GeoPoint point = new GeoPoint(51, 19);\n+ client().prepareIndex(\"my-index\", \"my-type\", \"1\").setSource(\"a\", point.toString()).setRefresh(true).get();\n SearchResponse countResponse = client().prepareSearch(\"my-index\").setSize(0)\n .setQuery(constantScoreQuery(geoDistanceQuery(\"a\").point(51, 19).distance(50, DistanceUnit.KILOMETERS)))\n .get();\n assertThat(countResponse.getHits().totalHits(), equalTo(1l));\n- countResponse = client().prepareSearch(\"my-index\").setSize(0).setQuery(matchQuery(\"a.b\", \"51,19\")).get();\n+ countResponse = client().prepareSearch(\"my-index\").setSize(0).setQuery(matchQuery(\"a.b\", point.toString())).get();\n assertThat(countResponse.getHits().totalHits(), equalTo(1l));\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldsIntegrationIT.java",
"status": "modified"
}
]
}
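The second commit exists because the multi-field path now hands the `GeoPoint` to sub-fields as an external value, and a string sub-field indexes its `toString()` form, so that form has to be something the point/string parsers accept again. A small fragment of the round trip, assuming the `GeoPoint` class from the diff (not a standalone program):

``` java
GeoPoint point = new GeoPoint(51, 19);
// after the change, toString() prints the comma-delimited "lat, lon" form instead of "[lat, lon]",
// so the text indexed into a string sub-field can also be used as a query value, as the updated
// MultiFieldsIntegrationIT does with matchQuery("a.b", point.toString())
String text = point.toString();
```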
|
{
"body": "I'm seeing multi-fields of type boolean silently being reduced to a normal boolean field in 1.2.1 which wasn't the behavior in 0.90.9. See https://gist.github.com/Omega359/0c2a93690b4db30693a1 for an example of this.\n\nTo me it seems like it should work - the boolean field mapper seems to be calling out to multiFieldsBuilder - but I'm not versed enough in the internals of ES to know where if at all it's broken.\n",
"comments": [
{
"body": "@martijnvg can you take a look at this?\n",
"created_at": "2014-06-24T10:38:54Z"
},
{
"body": "We're currently hitting the same problem. Our mapping (which used to work fine in 0.90.x and is necessary to allow certain queries based on user input) is:\n\n```\n {\n \"template_boolean\": {\n \"match\": \"*\",\n \"match_mapping_type\": \"boolean\",\n \"mapping\": {\n \"type\": \"multi_field\",\n \"fields\": {\n \"{name}\": {\n \"index\": \"not_analyzed\", // or \"analyzed\", doesn't really matter\n \"type\": \"boolean\",\n \"include_in_all\": true\n },\n \"untouched\": {\n \"type\": \"boolean\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n }\n```\n\nWhich is silently ignored, the result is simply a {\"type\":\"boolean\"}. For other types than boolean, the same kind of template still works fine.\n",
"created_at": "2014-10-02T09:03:48Z"
},
{
"body": "@martijnvg I just realised we were still using the pre-1.0 syntax for multifields. However, it doesn't really improve with the proper new syntax: https://gist.github.com/KlausBrunner/9016653829d295ae96f2\n\nAlso, existing multifield mappings work just fine - but it's seemingly impossible to create new ones.\n",
"created_at": "2014-10-07T08:22:18Z"
},
{
"body": "@clintongormley Not to be a nuisance, but we're a bit bothered by this bug and I don't see an easy/obvious fix from browsing the code. Could someone look into this?\n",
"created_at": "2014-10-14T08:02:21Z"
},
{
"body": "@KlausBrunner yes, currently there is no `fields` support for fields of type `boolean`. Out of interest, what are you trying to achieve with this mapping? The example you give just maps the same value in the same way twice.\n\nI could imagine having a mapping like this:\n\n```\n{\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"some_boolean_field\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\",\n \"fields\": {\n \"raw\": {\n \"type\": \"boolean\"\n }\n }\n }\n }\n }\n }\n}\n```\n\nwhich would index a not-analyzed string version and a boolean version, but the example you give doesn't make sense.\n",
"created_at": "2014-10-16T12:10:13Z"
},
{
"body": "@clintongormley You're right that the exact mapping in my example doesn't make a lot of sense, but in the case of booleans it doesn't matter for us anyway. We defined an additional \"raw\" (not_analyzed) mapping for all types of fields and rely on it to exist when we build queries. Until we switched to 1.x, this worked fine - now we need to have special treatment just for boolean fields, which is quite annoying.\n",
"created_at": "2014-10-16T13:30:12Z"
},
{
"body": "I'll echo Klaus - we have exactly the same scenario and special casing boolean fields just seems broken.\n",
"created_at": "2014-10-16T13:32:12Z"
},
{
"body": "@KlausBrunner and @Omega359 so you'd rather index double the data? That seems odd to me...\n",
"created_at": "2014-10-16T18:44:08Z"
},
{
"body": "Than have custom code to handle a single type definition that doesn't behave like every other type we use? Yes.\n",
"created_at": "2014-10-16T18:48:27Z"
},
{
"body": "I need this feature to store a boolean that wasn't previously stored.\n \"type\":\"boolean\",\n \"fields\" : {\n \"stored\" : {\n \"type\":\"boolean\",\n \"index\":\"no\",\n \"store\":\"yes\"\n }\n }\n",
"created_at": "2015-02-06T18:34:40Z"
}
],
"number": 6587,
"title": "ES 1.2.1 - boolean multifield silently ignored"
}
|
{
"body": "`bool` is our only core mapper that does not support sub fields.\n\nClose #6587\n",
"number": 15636,
"review_comments": [],
"title": "Add sub-fields support to `bool` fields."
}
|
{
"commits": [
{
"message": "Add sub-fields support to `bool` fields.\n\n`bool` is our only core mapper that does not support sub fields.\n\nClose #6587"
}
],
"files": [
{
"diff": "@@ -43,6 +43,7 @@\n import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue;\n import static org.elasticsearch.index.mapper.MapperBuilders.booleanField;\n import static org.elasticsearch.index.mapper.core.TypeParsers.parseField;\n+import static org.elasticsearch.index.mapper.core.TypeParsers.parseMultiField;\n \n /**\n * A field mapper for boolean fields.\n@@ -107,6 +108,8 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n }\n builder.nullValue(nodeBooleanValue(propNode));\n iterator.remove();\n+ } else if (parseMultiField(builder, name, parserContext, propName, propNode)) {\n+ iterator.remove();\n }\n }\n return builder;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/core/BooleanFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.RAMDirectory;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -110,4 +111,27 @@ public void testSerialization() throws IOException {\n builder.endObject();\n assertEquals(\"{\\\"field\\\":{\\\"type\\\":\\\"boolean\\\",\\\"doc_values\\\":false,\\\"null_value\\\":true}}\", builder.string());\n }\n+\n+ public void testMultiFields() throws IOException {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"boolean\")\n+ .startObject(\"fields\")\n+ .startObject(\"as_string\")\n+ .field(\"type\", \"string\")\n+ .field(\"index\", \"not_analyzed\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper mapper = indexService.mapperService().merge(\"type\", new CompressedXContent(mapping), true, false);\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+ BytesReference source = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"field\", false)\n+ .endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", source);\n+ assertNotNull(doc.rootDoc().getField(\"field.as_string\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/core/BooleanFieldMapperTests.java",
"status": "modified"
}
]
}
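With `parseMultiField` wired into the boolean type parser, a `boolean` field can carry sub-fields just like the other core types, which covers the use cases in the comments above (a stored copy, or a not_analyzed string copy). A fragment mirroring the mapping built in the new test (it uses `XContentFactory.jsonBuilder` as in the diff and is not a standalone program):

``` java
String mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
        .startObject("properties")
            .startObject("field")
                .field("type", "boolean")
                .startObject("fields")
                    .startObject("as_string")          // indexed alongside the boolean value
                        .field("type", "string")
                        .field("index", "not_analyzed")
                    .endObject()
                .endObject()
            .endObject()
        .endObject()
        .endObject().endObject().string();
```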
|
{
"body": "If you have a dynamic mapping update for a field that already exists on another field, we have a hack in https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java#L615 to try to not introduce a conflict with an existing type. However this is very fragile as it doesn't copy all parameters. Instead we should try to directly reuse the existing field type.\n",
"comments": [],
"number": 15568,
"title": "Cross-type dynamic mappings are fragile"
}
|
{
"body": "Today when dynamically mapping a field that is already defined in another type,\nwe use the regular dynamic mapping logic and try to copy some settings to avoid\nintroducing conflicts. However this is quite fragile as we don't deal with every\nexisting setting. This proposes a different approach that will just borrow a\nfield mapper from another type (which is safe since they are immutable).\n\nThis has a couple downsides, like eg. the fact that sub-fields will be borrowed\nas well, but overall I think it goes into the right direction of having more\nsimilar mappings across types.\n\nClose #15568\n",
"number": 15633,
"review_comments": [],
"title": "Improve cross-type dynamic mapping updates."
}
|
{
"commits": [
{
"message": "Improve cross-type dynamic mapping updates.\n\nToday when dynamically mapping a field that is already defined in another type,\nwe use the regular dynamic mapping logic and try to copy some settings to avoid\nintroducing conflicts. However this is quite fragile as we don't deal with every\nexisting setting. This proposes a different approach that will just reuse the\nshared field type.\n\nClose #15568"
}
],
"files": [
{
"diff": "@@ -596,40 +596,22 @@ private static ObjectMapper parseDynamicValue(final ParseContext context, Object\n if (dynamic == ObjectMapper.Dynamic.FALSE) {\n return null;\n }\n+ final String path = context.path().pathAsText(currentFieldName);\n final Mapper.BuilderContext builderContext = new Mapper.BuilderContext(context.indexSettings(), context.path());\n- final MappedFieldType existingFieldType = context.mapperService().fullName(context.path().pathAsText(currentFieldName));\n+ final MappedFieldType existingFieldType = context.mapperService().fullName(path);\n Mapper.Builder builder = null;\n if (existingFieldType != null) {\n // create a builder of the same type\n builder = createBuilderFromFieldType(context, existingFieldType, currentFieldName);\n- if (builder != null) {\n- // best-effort to not introduce a conflict\n- if (builder instanceof StringFieldMapper.Builder) {\n- StringFieldMapper.Builder stringBuilder = (StringFieldMapper.Builder) builder;\n- stringBuilder.fieldDataSettings(existingFieldType.fieldDataType().getSettings());\n- stringBuilder.store(existingFieldType.stored());\n- stringBuilder.indexOptions(existingFieldType.indexOptions());\n- stringBuilder.tokenized(existingFieldType.tokenized());\n- stringBuilder.omitNorms(existingFieldType.omitNorms());\n- stringBuilder.docValues(existingFieldType.hasDocValues());\n- stringBuilder.indexAnalyzer(existingFieldType.indexAnalyzer());\n- stringBuilder.searchAnalyzer(existingFieldType.searchAnalyzer());\n- } else if (builder instanceof NumberFieldMapper.Builder) {\n- NumberFieldMapper.Builder<?,?> numberBuilder = (NumberFieldMapper.Builder<?, ?>) builder;\n- numberBuilder.fieldDataSettings(existingFieldType.fieldDataType().getSettings());\n- numberBuilder.store(existingFieldType.stored());\n- numberBuilder.indexOptions(existingFieldType.indexOptions());\n- numberBuilder.tokenized(existingFieldType.tokenized());\n- numberBuilder.omitNorms(existingFieldType.omitNorms());\n- numberBuilder.docValues(existingFieldType.hasDocValues());\n- numberBuilder.precisionStep(existingFieldType.numericPrecisionStep());\n- }\n- }\n }\n if (builder == null) {\n builder = createBuilderFromDynamicValue(context, token, currentFieldName);\n }\n Mapper mapper = builder.build(builderContext);\n+ if (existingFieldType != null) {\n+ // try to not introduce a conflict\n+ mapper = mapper.updateFieldType(Collections.singletonMap(path, existingFieldType));\n+ }\n \n mapper = parseAndMergeUpdate(mapper, context);\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -363,6 +363,8 @@ public FieldMapper updateFieldType(Map<String, MappedFieldType> fullNameToFieldT\n final MappedFieldType newFieldType = fullNameToFieldType.get(fieldType.name());\n if (newFieldType == null) {\n throw new IllegalStateException();\n+ } else if (fieldType.getClass() != newFieldType.getClass()) {\n+ throw new IllegalStateException(\"Mixing up field types: \" + fieldType.getClass() + \" != \" + newFieldType.getClass());\n }\n MultiFields updatedMultiFields = multiFields.updateFieldType(fullNameToFieldType);\n if (fieldType == newFieldType && multiFields == updatedMultiFields) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.index.mapper;\n \n+import org.apache.lucene.index.IndexOptions;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -30,8 +31,13 @@\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.index.mapper.core.DateFieldMapper.DateFieldType;\n+import org.elasticsearch.index.mapper.core.DoubleFieldMapper;\n import org.elasticsearch.index.mapper.core.FloatFieldMapper;\n import org.elasticsearch.index.mapper.core.IntegerFieldMapper;\n+import org.elasticsearch.index.mapper.core.LongFieldMapper;\n+import org.elasticsearch.index.mapper.core.LongFieldMapper.LongFieldType;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n@@ -367,17 +373,52 @@ public void testComplexArray() throws Exception {\n }\n \n public void testReuseExistingMappings() throws IOException, Exception {\n- IndexService indexService = createIndex(\"test\", Settings.EMPTY, \"type\", \"my_field1\", \"type=string,store=yes\", \"my_field2\", \"type=integer,precision_step=10\");\n+ IndexService indexService = createIndex(\"test\", Settings.EMPTY, \"type\",\n+ \"my_field1\", \"type=string,store=yes\",\n+ \"my_field2\", \"type=integer,precision_step=10\",\n+ \"my_field3\", \"type=long,doc_values=false\",\n+ \"my_field4\", \"type=float,index_options=freqs\",\n+ \"my_field5\", \"type=double,precision_step=14\",\n+ \"my_field6\", \"type=date,doc_values=false\");\n \n // Even if the dynamic type of our new field is long, we already have a mapping for the same field\n // of type string so it should be mapped as a string\n DocumentMapper newMapper = indexService.mapperService().documentMapperWithAutoCreate(\"type2\").getDocumentMapper();\n Mapper update = parse(newMapper, indexService.mapperService().documentMapperParser(),\n- XContentFactory.jsonBuilder().startObject().field(\"my_field1\", 42).endObject());\n+ XContentFactory.jsonBuilder().startObject()\n+ .field(\"my_field1\", 42)\n+ .field(\"my_field2\", 43)\n+ .field(\"my_field3\", 44)\n+ .field(\"my_field4\", 45)\n+ .field(\"my_field5\", 46)\n+ .field(\"my_field6\", 47)\n+ .endObject());\n Mapper myField1Mapper = null;\n+ Mapper myField2Mapper = null;\n+ Mapper myField3Mapper = null;\n+ Mapper myField4Mapper = null;\n+ Mapper myField5Mapper = null;\n+ Mapper myField6Mapper = null;\n for (Mapper m : update) {\n- if (m.name().equals(\"my_field1\")) {\n+ switch (m.name()) {\n+ case \"my_field1\":\n myField1Mapper = m;\n+ break;\n+ case \"my_field2\":\n+ myField2Mapper = m;\n+ break;\n+ case \"my_field3\":\n+ myField3Mapper = m;\n+ break;\n+ case \"my_field4\":\n+ myField4Mapper = m;\n+ break;\n+ case \"my_field5\":\n+ myField5Mapper = m;\n+ break;\n+ case \"my_field6\":\n+ myField6Mapper = m;\n+ break;\n }\n }\n assertNotNull(myField1Mapper);\n@@ -388,20 +429,28 @@ public void testReuseExistingMappings() throws IOException, Exception {\n \n // Even if dynamic mappings would map a numeric field as a long, here it should map it as a integer\n // since we already have a mapping of type integer\n- update = parse(newMapper, indexService.mapperService().documentMapperParser(),\n- XContentFactory.jsonBuilder().startObject().field(\"my_field2\", 
42).endObject());\n- Mapper myField2Mapper = null;\n- for (Mapper m : update) {\n- if (m.name().equals(\"my_field2\")) {\n- myField2Mapper = m;\n- }\n- }\n assertNotNull(myField2Mapper);\n // same type\n assertTrue(myField2Mapper instanceof IntegerFieldMapper);\n // and same option\n assertEquals(10, ((IntegerFieldMapper) myField2Mapper).fieldType().numericPrecisionStep());\n \n+ assertNotNull(myField3Mapper);\n+ assertTrue(myField3Mapper instanceof LongFieldMapper);\n+ assertFalse(((LongFieldType) ((LongFieldMapper) myField3Mapper).fieldType()).hasDocValues());\n+\n+ assertNotNull(myField4Mapper);\n+ assertTrue(myField4Mapper instanceof FloatFieldMapper);\n+ assertEquals(IndexOptions.DOCS_AND_FREQS, ((FieldMapper) myField4Mapper).fieldType().indexOptions());\n+\n+ assertNotNull(myField5Mapper);\n+ assertTrue(myField5Mapper instanceof DoubleFieldMapper);\n+ assertEquals(14, ((DoubleFieldMapper) myField5Mapper).fieldType().numericPrecisionStep());\n+\n+ assertNotNull(myField6Mapper);\n+ assertTrue(myField6Mapper instanceof DateFieldMapper);\n+ assertFalse(((DateFieldType) ((DateFieldMapper) myField6Mapper).fieldType()).hasDocValues());\n+\n // This can't work\n try {\n parse(newMapper, indexService.mapperService().documentMapperParser(),",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java",
"status": "modified"
}
]
}
|
{
"body": "When running on FreeBSD, using the hot_threads API causes this exception is the logs:\n\n```\n[2015-12-20 22:21:42,473][DEBUG][action.admin.cluster.node.hotthreads] [Noh-Varr] failed to execute on node [fQuWKkdWRauzZNKz21sskA]\nRemoteTransportException[[Noh-Varr][127.0.0.1:9300][cluster:monitor/nodes/hot_threads[n]]]; nested: ElasticsearchException[failed to detect hot threads]; nested: IllegalStateException[MBean doesn't support thread CPU Time];\nCaused by: ElasticsearchException[failed to detect hot threads]; nested: IllegalStateException[MBean doesn't support thread CPU Time];\n at org.elasticsearch.action.admin.cluster.node.hotthreads.TransportNodesHotThreadsAction.nodeOperation(TransportNodesHotThreadsAction.java:88)\n at org.elasticsearch.action.admin.cluster.node.hotthreads.TransportNodesHotThreadsAction.nodeOperation(TransportNodesHotThreadsAction.java:45)\n at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:218)\n at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:214)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:335)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.IllegalStateException: MBean doesn't support thread CPU Time\n at org.elasticsearch.monitor.jvm.HotThreads.innerDetect(HotThreads.java:153)\n at org.elasticsearch.monitor.jvm.HotThreads.detect(HotThreads.java:89)\n at org.elasticsearch.action.admin.cluster.node.hotthreads.TransportNodesHotThreadsAction.nodeOperation(TransportNodesHotThreadsAction.java:86)\n ... 8 more\n```\n\nThe HTTP response however, is an empty 200 response:\n\n```\n» get -v 127.0.0.1:9200/_nodes/hot_threads\n* Trying 127.0.0.1...\n* Connected to 127.0.0.1 (127.0.0.1) port 9200 (#0)\n> GET /_nodes/hot_threads HTTP/1.1\n> Host: 127.0.0.1:9200\n> User-Agent: curl/7.46.0\n> Accept: */*\n> \n< HTTP/1.1 200 OK\n< Content-Type: text/plain; charset=UTF-8\n< Content-Length: 0\n< \n* Connection #0 to host 127.0.0.1 left intact\n```\n\n(Note this is running from the master branch of Elasticsearch)\n",
"comments": [],
"number": 15562,
"title": "`/_nodes/hot_threads` fails entirely with no message on FreeBSD"
}
|
{
"body": "This adds the required changes/checks so that the build can run on\nFreeBSD.\n\nThere are a few things that differ between FreeBSD and Linux:\n- CPU probes return -1 for CPU usage\n- `hot_threads` cannot be supported on FreeBSD\n\nFrom OpenJDK's `os_bsd.cpp`:\n\n``` c++\nbool os::is_thread_cpu_time_supported() {\n #ifdef __APPLE__\n return true;\n #else\n return false;\n #endif\n}\n```\n\nSo this API now returns (for each FreeBSD node):\n\n```\ncurl -s localhost:9200/_nodes/hot_threads\n::: {Devil Hunter Gabriel}{q8OJnKCcQS6EB9fygU4R4g}{127.0.0.1}{127.0.0.1:9300}\n hot_threads is not supported on FreeBSD\n```\n- multicast fails in native `join` method - known bug:\n https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193246\n\nWhich causes:\n\n```\n1> Caused by: java.net.SocketException: Invalid argument\n1> at java.net.PlainDatagramSocketImpl.join(Native Method)\n1> at java.net.AbstractPlainDatagramSocketImpl.join(AbstractPlainDatagramSocketImpl.java:179)\n1> at java.net.MulticastSocket.joinGroup(MulticastSocket.java:323)\n1> at org.elasticsearch.plugin.discovery.multicast.MulticastChannel$Plain.buildMulticastSocket(MulticastChannel.java:309)\n```\n\nSo these tests are skipped on FreeBSD.\n\nResolves #15562\n",
"number": 15617,
"review_comments": [],
"title": "Fix build to run correctly on FreeBSD"
}
|
{
"commits": [
{
"message": "Fix build to run correctly on FreeBSD\n\nThis adds the required changes/checks so that the build can run on\nFreeBSD.\n\nThere are a few things that differ between FreeBSD and Linux:\n\n- CPU probes return -1 for CPU usage\n- `hot_threads` cannot be supported on FreeBSD\n\nFrom OpenJDK's `os_bsd.cpp`:\n\n```c++\nbool os::is_thread_cpu_time_supported() {\n #ifdef __APPLE__\n return true;\n #else\n return false;\n #endif\n}\n```\n\nSo this API now returns (for each FreeBSD node):\n\n```\ncurl -s localhost:9200/_nodes/hot_threads\n::: {Devil Hunter Gabriel}{q8OJnKCcQS6EB9fygU4R4g}{127.0.0.1}{127.0.0.1:9300}\n hot_threads is not supported on FreeBSD\n```\n\n- multicast fails in native `join` method - known bug:\n https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193246\n\nWhich causes:\n\n```\n1> Caused by: java.net.SocketException: Invalid argument\n1> at java.net.PlainDatagramSocketImpl.join(Native Method)\n1> at java.net.AbstractPlainDatagramSocketImpl.join(AbstractPlainDatagramSocketImpl.java:179)\n1> at java.net.MulticastSocket.joinGroup(MulticastSocket.java:323)\n1> at org.elasticsearch.plugin.discovery.multicast.MulticastChannel$Plain.buildMulticastSocket(MulticastChannel.java:309)\n```\n\nSo these tests are skipped on FreeBSD.\n\nResolves #15562"
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.monitor.jvm;\n \n import org.apache.lucene.util.CollectionUtil;\n+import org.apache.lucene.util.Constants;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -131,6 +132,11 @@ private static boolean isIdleThread(ThreadInfo threadInfo) {\n private String innerDetect() throws Exception {\n StringBuilder sb = new StringBuilder();\n \n+ if (Constants.FREE_BSD) {\n+ sb.append(\"hot_threads is not supported on FreeBSD\");\n+ return sb.toString();\n+ }\n+\n sb.append(\"Hot threads at \");\n sb.append(DATE_TIME_FORMATTER.printer().print(System.currentTimeMillis()));\n sb.append(\", interval=\");",
"filename": "core/src/main/java/org/elasticsearch/monitor/jvm/HotThreads.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.action.admin;\n \n+import org.apache.lucene.util.Constants;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.node.hotthreads.NodeHotThreads;\n import org.elasticsearch.action.admin.cluster.node.hotthreads.NodesHotThreadsRequestBuilder;\n@@ -40,6 +41,7 @@\n import static org.hamcrest.Matchers.lessThan;\n \n public class HotThreadsIT extends ESIntegTestCase {\n+\n public void testHotThreadsDontFail() throws ExecutionException, InterruptedException {\n /**\n * This test just checks if nothing crashes or gets stuck etc.\n@@ -125,6 +127,7 @@ public void onFailure(Throwable e) {\n }\n \n public void testIgnoreIdleThreads() throws ExecutionException, InterruptedException {\n+ assumeTrue(\"no support for hot_threads on FreeBSD\", Constants.FREE_BSD == false);\n \n // First time, don't ignore idle threads:\n NodesHotThreadsRequestBuilder builder = client().admin().cluster().prepareNodesHotThreads();\n@@ -158,12 +161,19 @@ public void testTimestampAndParams() throws ExecutionException, InterruptedExcep\n \n NodesHotThreadsResponse response = client().admin().cluster().prepareNodesHotThreads().execute().get();\n \n- for (NodeHotThreads node : response.getNodesMap().values()) {\n- String result = node.getHotThreads();\n- assertTrue(result.indexOf(\"Hot threads at\") != -1);\n- assertTrue(result.indexOf(\"interval=500ms\") != -1);\n- assertTrue(result.indexOf(\"busiestThreads=3\") != -1);\n- assertTrue(result.indexOf(\"ignoreIdleThreads=true\") != -1);\n+ if (Constants.FREE_BSD) {\n+ for (NodeHotThreads node : response.getNodesMap().values()) {\n+ String result = node.getHotThreads();\n+ assertTrue(result.indexOf(\"hot_threads is not supported\") != -1);\n+ }\n+ } else {\n+ for (NodeHotThreads node : response.getNodesMap().values()) {\n+ String result = node.getHotThreads();\n+ assertTrue(result.indexOf(\"Hot threads at\") != -1);\n+ assertTrue(result.indexOf(\"interval=500ms\") != -1);\n+ assertTrue(result.indexOf(\"busiestThreads=3\") != -1);\n+ assertTrue(result.indexOf(\"ignoreIdleThreads=true\") != -1);\n+ }\n }\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/HotThreadsIT.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.plugin.discovery.multicast;\n \n+import org.apache.lucene.util.Constants;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -45,6 +46,7 @@\n import java.net.MulticastSocket;\n \n public class MulticastZenPingTests extends ESTestCase {\n+\n private Settings buildRandomMulticast(Settings settings) {\n Settings.Builder builder = Settings.builder().put(settings);\n builder.put(\"discovery.zen.ping.multicast.group\", \"224.2.3.\" + randomIntBetween(0, 255));\n@@ -57,6 +59,7 @@ private Settings buildRandomMulticast(Settings settings) {\n }\n \n public void testSimplePings() throws InterruptedException {\n+ assumeTrue(\"https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193246\", Constants.FREE_BSD == false);\n Settings settings = Settings.EMPTY;\n settings = buildRandomMulticast(settings);\n Thread.sleep(30000);\n@@ -129,8 +132,16 @@ public boolean nodeHasJoinedClusterOnce() {\n }\n }\n \n+ // This test is here because when running on FreeBSD, if no tests are\n+ // executed for the 'multicast' project it will assume everything\n+ // failed, so we need to have at least one test that runs.\n+ public void testAlwaysRun() throws Exception {\n+ assertTrue(true);\n+ }\n+\n @SuppressForbidden(reason = \"I bind to wildcard addresses. I am a total nightmare\")\n public void testExternalPing() throws Exception {\n+ assumeTrue(\"https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193246\", Constants.FREE_BSD == false);\n Settings settings = Settings.EMPTY;\n settings = buildRandomMulticast(settings);\n ",
"filename": "plugins/discovery-multicast/src/test/java/org/elasticsearch/plugin/discovery/multicast/MulticastZenPingTests.java",
"status": "modified"
},
{
"diff": "@@ -6,17 +6,17 @@\n \n - match:\n $body: |\n- / #host ip heap.percent ram.percent cpu load node.role master name\n- ^ (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ (-)?\\d*(\\.\\d+)? \\s+ [-dc] \\s+ [-*mx] \\s+ (\\S+\\s?)+ \\n)+ $/\n+ / #host ip heap.percent ram.percent cpu load node.role master name\n+ ^ (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d* \\s+ (-)?\\d* \\s+ (-)?\\d*(\\.\\d+)? \\s+ [-dc] \\s+ [-*mx] \\s+ (\\S+\\s?)+ \\n)+ $/\n \n - do:\n cat.nodes:\n v: true\n \n - match:\n $body: |\n- /^ host \\s+ ip \\s+ heap\\.percent \\s+ ram\\.percent \\s+ cpu \\s+ load \\s+ node\\.role \\s+ master \\s+ name \\n\n- (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d* \\s+ \\d* \\s+ (-)?\\d*(\\.\\d+)? \\s+ [-dc] \\s+ [-*mx] \\s+ (\\S+\\s?)+ \\n)+ $/\n+ /^ host \\s+ ip \\s+ heap\\.percent \\s+ ram\\.percent \\s+ cpu \\s+ load \\s+ node\\.role \\s+ master \\s+ name \\n\n+ (\\S+ \\s+ (\\d{1,3}\\.){3}\\d{1,3} \\s+ \\d+ \\s+ \\d* \\s+ (-)?\\d* \\s+ (-)?\\d*(\\.\\d+)? \\s+ [-dc] \\s+ [-*mx] \\s+ (\\S+\\s?)+ \\n)+ $/\n \n - do:\n cat.nodes:",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cat.nodes/10_basic.yaml",
"status": "modified"
}
]
}
|
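The FreeBSD limitation in the record above ultimately comes from the JMX thread MXBean reporting no per-thread CPU time support, which is what `HotThreads#innerDetect` trips over. Below is a minimal, standalone sketch of probing that capability before attempting any per-thread CPU sampling; it is an illustration, not the Elasticsearch implementation.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCpuTimeProbe {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // On FreeBSD the JVM answers false here, which is why hot_threads cannot work there.
        if (threads.isThreadCpuTimeSupported() == false) {
            System.out.println("per-thread CPU time is not supported on this platform");
            return;
        }
        long nanos = threads.getThreadCpuTime(Thread.currentThread().getId());
        System.out.println("current thread CPU time: " + nanos + "ns");
    }
}
```

Checking the capability directly would cover any platform where the JVM declines per-thread CPU time, not just FreeBSD.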
{
"body": "I've seen a couple cases recently where users \"tuned\" up the number of threads in the thread pools for indexing, and then didn't realize this leads to crazy issues like Lucene writing an insane number of segments and doing an insane number of merges, etc.\n\nI don't see any reason to allow this value to exceed the number of CPU cores?\n",
"comments": [
{
"body": "+1\n",
"created_at": "2015-12-21T21:36:11Z"
},
{
"body": "I guess that's a good limit to have\n",
"created_at": "2015-12-21T22:02:10Z"
},
{
"body": "then how to deal with high concurrency indexing situation?\nI \"tuned\" up the number of threads just in order to handle it although I know well its side effort.\n",
"created_at": "2015-12-22T07:49:01Z"
},
{
"body": "@makeyang increasing the bulk and index thread pool sizes beyond the number of cores your hardware does not help indexing throughput ... it will hurt it, as you're seeing in #15538. See https://www.elastic.co/blog/performance-indexing-2-0 for ways to optimize for indexing.\n",
"created_at": "2015-12-22T09:26:40Z"
},
{
"body": "@mikemccand \nit's not about indexing performance. \nmy situation is here: I have a ES cluster and many other teams will write their data into this cluster, the data is very samll though the clients are many, so sometime when all bulk/index thread is occupied, the client will get rejecte exception. the write is not heavy at all. cpu/memory is all used little. but reject exception happened, how to deal with this situation? \n",
"created_at": "2015-12-22T09:40:26Z"
},
{
"body": "@makeyang in that case you should increase your bulk `queue_size`, not the `size`.\n",
"created_at": "2015-12-22T09:42:45Z"
},
{
"body": "@mikemccand got it, thanks.\nI have another question to ask: I have done load test(especially for indexing performance) and found out that single node can handle 40000 docs/second, over this, ES will throw reject exception to client. but at the max throughput, the node's resouce is not completely occupied, \ncpu usage is 80%\niops is 170(spin disk, I have done iops test with other tools, iops can reacheve 500).\nso my question is why it can't write more while resouce is not fully occupied?\nwhat is bottleneck and how to identify it?\n",
"created_at": "2015-12-22T09:48:01Z"
},
{
"body": "@mikemccand please help to share any thoughts, insights, anything\n",
"created_at": "2015-12-22T10:44:35Z"
},
{
"body": "@makeyang did you try the suggestions from the above blog post?\n",
"created_at": "2015-12-22T11:03:20Z"
},
{
"body": "yeah. try all of them.\n",
"created_at": "2015-12-23T01:38:59Z"
},
{
"body": "@mikemccand do you drive car? How about wearing helmet while driving? You know I've seen a couple of cases recently where wearing helmets while driving cars could have saved lives.\r\n\r\nWhen you people would realise that all your \"couple cases\" stuff is absolute nonsense. Haven't heard about [survivorship bias](https://en.wikipedia.org/wiki/Survivorship_bias)? There're bicycles for kids and adults, they are somewhat different in (and by) design — manufacturers realise that different groups won't be satisfied with single approach. And I don't see \"Elasticsearch for Juniors\" label on.",
"created_at": "2018-12-19T04:41:39Z"
}
],
"number": 15582,
"title": "Don't allow `bulk` and `indexing` thread pools size larger than number of CPU cores"
}
|
{
"body": "I only limit `bulk` and `index` thread pools.\n\nCloses #15582 \n",
"number": 15585,
"review_comments": [],
"title": "Limit the max size of bulk and index thread pools to bounded number of processors"
}
|
{
"commits": [
{
"message": "limit the max size of bulk and index thread pools to bounded number of processors"
},
{
"message": "add logger.warn if thread pool size is clipped; fix test failure"
}
],
"files": [
{
"diff": "@@ -458,7 +458,7 @@ private ExecutorHolder rebuild(String name, ExecutorHolder previousExecutorHolde\n if (ThreadPoolType.FIXED == previousInfo.getThreadPoolType()) {\n SizeValue updatedQueueSize = getAsSizeOrUnbounded(settings, \"capacity\", getAsSizeOrUnbounded(settings, \"queue\", getAsSizeOrUnbounded(settings, \"queue_size\", previousInfo.getQueueSize())));\n if (Objects.equals(previousInfo.getQueueSize(), updatedQueueSize)) {\n- int updatedSize = settings.getAsInt(\"size\", previousInfo.getMax());\n+ int updatedSize = applyHardSizeLimit(name, settings.getAsInt(\"size\", previousInfo.getMax()));\n if (previousInfo.getMax() != updatedSize) {\n logger.debug(\"updating thread_pool [{}], type [{}], size [{}], queue_size [{}]\", name, type, updatedSize, updatedQueueSize);\n // if you think this code is crazy: that's because it is!\n@@ -480,7 +480,7 @@ private ExecutorHolder rebuild(String name, ExecutorHolder previousExecutorHolde\n defaultQueueSize = previousInfo.getQueueSize();\n }\n \n- int size = settings.getAsInt(\"size\", defaultSize);\n+ int size = applyHardSizeLimit(name, settings.getAsInt(\"size\", defaultSize));\n SizeValue queueSize = getAsSizeOrUnbounded(settings, \"capacity\", getAsSizeOrUnbounded(settings, \"queue\", getAsSizeOrUnbounded(settings, \"queue_size\", defaultQueueSize)));\n logger.debug(\"creating thread_pool [{}], type [{}], size [{}], queue_size [{}]\", name, type, size, queueSize);\n Executor executor = EsExecutors.newFixed(name, size, queueSize == null ? -1 : (int) queueSize.singles(), threadFactory);\n@@ -533,6 +533,21 @@ private ExecutorHolder rebuild(String name, ExecutorHolder previousExecutorHolde\n throw new IllegalArgumentException(\"No type found [\" + type + \"], for [\" + name + \"]\");\n }\n \n+ private int applyHardSizeLimit(String name, int size) {\n+ int availableProcessors = EsExecutors.boundedNumberOfProcessors(settings);\n+ if ((name.equals(Names.BULK) || name.equals(Names.INDEX)) && size > availableProcessors) {\n+ // We use a hard max size for the indexing pools, because if too many threads enter Lucene's IndexWriter, it means\n+ // too many segments written, too frequently, too much merging, etc:\n+ // TODO: I would love to be loud here (throw an exception if you ask for a too-big size), but I think this is dangerous\n+ // because on upgrade this setting could be in cluster state and hard for the user to correct?\n+ logger.warn(\"requested thread pool size [{}] for [{}] is too large; setting to maximum [{}] instead\",\n+ size, name, availableProcessors);\n+ size = availableProcessors;\n+ }\n+\n+ return size;\n+ }\n+\n private void updateSettings(Settings settings) {\n Map<String, Settings> groupSettings = settings.getAsGroups();\n if (groupSettings.isEmpty()) {",
"filename": "core/src/main/java/org/elasticsearch/threadpool/ThreadPool.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.threadpool.ThreadPool.Names;\n@@ -89,6 +90,51 @@ public void testThreadPoolCanNotOverrideThreadPoolType() throws InterruptedExcep\n }\n }\n \n+ public void testIndexingThreadPoolsMaxSize() throws InterruptedException {\n+ String threadPoolName = randomThreadPoolName();\n+ for (String name : new String[] {ThreadPool.Names.BULK, ThreadPool.Names.INDEX}) {\n+ ThreadPool threadPool = null;\n+ try {\n+\n+ int maxSize = EsExecutors.boundedNumberOfProcessors(Settings.EMPTY);\n+\n+ // try to create a too-big (maxSize+1) thread pool\n+ threadPool = new ThreadPool(settingsBuilder()\n+ .put(\"name\", \"testIndexingThreadPoolsMaxSize\")\n+ .put(\"threadpool.\" + name + \".size\", maxSize+1)\n+ .build());\n+\n+ // confirm it clipped us at the maxSize:\n+ assertEquals(maxSize, ((ThreadPoolExecutor) threadPool.executor(name)).getMaximumPoolSize());\n+\n+ ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);\n+ threadPool.setClusterSettings(clusterSettings);\n+\n+ // update it to a tiny size:\n+ clusterSettings.applySettings(\n+ settingsBuilder()\n+ .put(\"threadpool.\" + name + \".size\", 1)\n+ .build()\n+ );\n+\n+ // confirm it worked:\n+ assertEquals(1, ((ThreadPoolExecutor) threadPool.executor(name)).getMaximumPoolSize());\n+\n+ // try to update to too-big size:\n+ clusterSettings.applySettings(\n+ settingsBuilder()\n+ .put(\"threadpool.\" + name + \".size\", maxSize+1)\n+ .build()\n+ );\n+\n+ // confirm it clipped us at the maxSize:\n+ assertEquals(maxSize, ((ThreadPoolExecutor) threadPool.executor(name)).getMaximumPoolSize());\n+ } finally {\n+ terminateThreadPoolIfNeeded(threadPool);\n+ }\n+ }\n+ }\n+\n public void testUpdateSettingsCanNotChangeThreadPoolType() throws InterruptedException {\n String threadPoolName = randomThreadPoolName();\n ThreadPool.ThreadPoolType invalidThreadPoolType = randomIncorrectThreadPoolType(threadPoolName);\n@@ -165,6 +211,14 @@ public void testCachedExecutorType() throws InterruptedException {\n }\n }\n \n+ private static int getExpectedThreadPoolSize(Settings settings, String name, int size) {\n+ if (name.equals(ThreadPool.Names.BULK) || name.equals(ThreadPool.Names.INDEX)) {\n+ return Math.min(size, EsExecutors.boundedNumberOfProcessors(settings));\n+ } else {\n+ return size;\n+ }\n+ }\n+\n public void testFixedExecutorType() throws InterruptedException {\n String threadPoolName = randomThreadPool(ThreadPool.ThreadPoolType.FIXED);\n ThreadPool threadPool = null;\n@@ -179,12 +233,14 @@ public void testFixedExecutorType() throws InterruptedException {\n Settings settings = clusterSettings.applySettings(settingsBuilder()\n .put(\"threadpool.\" + threadPoolName + \".size\", \"15\")\n .build());\n+\n+ int expectedSize = getExpectedThreadPoolSize(nodeSettings, threadPoolName, 15);\n assertEquals(info(threadPool, threadPoolName).getThreadPoolType(), ThreadPool.ThreadPoolType.FIXED);\n assertThat(threadPool.executor(threadPoolName), instanceOf(EsThreadPoolExecutor.class));\n- assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getCorePoolSize(), equalTo(15));\n- assertThat(((EsThreadPoolExecutor) 
threadPool.executor(threadPoolName)).getMaximumPoolSize(), equalTo(15));\n- assertThat(info(threadPool, threadPoolName).getMin(), equalTo(15));\n- assertThat(info(threadPool, threadPoolName).getMax(), equalTo(15));\n+ assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getCorePoolSize(), equalTo(expectedSize));\n+ assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getMaximumPoolSize(), equalTo(expectedSize));\n+ assertThat(info(threadPool, threadPoolName).getMin(), equalTo(expectedSize));\n+ assertThat(info(threadPool, threadPoolName).getMax(), equalTo(expectedSize));\n // keep alive does not apply to fixed thread pools\n assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getKeepAliveTime(TimeUnit.MINUTES), equalTo(0L));\n \n@@ -194,20 +250,23 @@ public void testFixedExecutorType() throws InterruptedException {\n // Make sure keep alive value is not used\n assertThat(info(threadPool, threadPoolName).getKeepAlive(), nullValue());\n // Make sure keep pool size value were reused\n- assertThat(info(threadPool, threadPoolName).getMin(), equalTo(15));\n- assertThat(info(threadPool, threadPoolName).getMax(), equalTo(15));\n+ assertThat(info(threadPool, threadPoolName).getMin(), equalTo(expectedSize));\n+ assertThat(info(threadPool, threadPoolName).getMax(), equalTo(expectedSize));\n assertThat(threadPool.executor(threadPoolName), instanceOf(EsThreadPoolExecutor.class));\n- assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getCorePoolSize(), equalTo(15));\n- assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getMaximumPoolSize(), equalTo(15));\n+ assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getCorePoolSize(), equalTo(expectedSize));\n+ assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getMaximumPoolSize(), equalTo(expectedSize));\n \n // Change size\n Executor oldExecutor = threadPool.executor(threadPoolName);\n settings = clusterSettings.applySettings(settingsBuilder().put(settings).put(\"threadpool.\" + threadPoolName + \".size\", \"10\").build());\n+\n+ expectedSize = getExpectedThreadPoolSize(nodeSettings, threadPoolName, 10);\n+\n // Make sure size values changed\n- assertThat(info(threadPool, threadPoolName).getMax(), equalTo(10));\n- assertThat(info(threadPool, threadPoolName).getMin(), equalTo(10));\n- assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getMaximumPoolSize(), equalTo(10));\n- assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getCorePoolSize(), equalTo(10));\n+ assertThat(info(threadPool, threadPoolName).getMax(), equalTo(expectedSize));\n+ assertThat(info(threadPool, threadPoolName).getMin(), equalTo(expectedSize));\n+ assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getMaximumPoolSize(), equalTo(expectedSize));\n+ assertThat(((EsThreadPoolExecutor) threadPool.executor(threadPoolName)).getCorePoolSize(), equalTo(expectedSize));\n // Make sure executor didn't change\n assertEquals(info(threadPool, threadPoolName).getThreadPoolType(), ThreadPool.ThreadPoolType.FIXED);\n assertThat(threadPool.executor(threadPoolName), sameInstance(oldExecutor));",
"filename": "core/src/test/java/org/elasticsearch/threadpool/UpdateThreadPoolSettingsTests.java",
"status": "modified"
}
]
}
|
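A minimal sketch of the clipping behaviour introduced by `applyHardSizeLimit` in the diff above, using `Runtime#availableProcessors` as a stand-in for `EsExecutors.boundedNumberOfProcessors` (both the stand-in and the class name are assumptions for illustration only):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ThreadPoolSizeClamp {
    private static final Set<String> INDEXING_POOLS = new HashSet<>(Arrays.asList("bulk", "index"));

    /** Clip the requested size of the indexing pools to the processor count, as the PR does. */
    static int applyHardSizeLimit(String name, int requestedSize) {
        // stand-in for EsExecutors.boundedNumberOfProcessors(settings)
        int availableProcessors = Runtime.getRuntime().availableProcessors();
        if (INDEXING_POOLS.contains(name) && requestedSize > availableProcessors) {
            System.out.printf("requested thread pool size [%d] for [%s] is too large; using [%d] instead%n",
                    requestedSize, name, availableProcessors);
            return availableProcessors;
        }
        return requestedSize;
    }

    public static void main(String[] args) {
        System.out.println(applyHardSizeLimit("bulk", 64));   // clipped to the core count
        System.out.println(applyHardSizeLimit("search", 64)); // other pools are left alone
    }
}
```

For the rejection problem discussed in the issue comments, the advice in the thread is to raise `threadpool.bulk.queue_size` rather than the pool `size`.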
{
"body": "For programmatically-generated queries, you sometimes want to apply min-match constraints on your bool should clauses. Is there thinking behind why minimum_should_match is ignored when the number of should clauses is lesser than it?\n\nTo be clear, I'm looking for the behavior where the following query returns 0 results:\n\n```\n{\n \"query\": {\n \"bool\": {\n \"should\": [{\n \"match\": {\n \"_all\": \"elizabeth\"\n }\n }, {\n \"match\": {\n \"_all\": \"smith\"\n }\n }],\n \"minimum_should_match\": 3\n }\n }\n}\n```\n\nWe're using a query-understanding pipeline which should get 0 results from one of the programmatically generated queries if there isn't enough support in the query to make the match. However, that is not currently happening right now.\n\nAny way I can resolve this in the current set up? Or would this be a feature/bug request?\n",
"comments": [
{
"body": "I agree this is a bug and this query should return no hits. The issue seems to stem from Queries.calculateMinShouldMatch which returns the number of optional clauses in case minimum_should_match gets higher.\n",
"created_at": "2015-12-18T13:23:44Z"
}
],
"number": 15521,
"title": "minimum_should_match ignored when number of `should` clauses smaller "
}
|
{
"body": "Changes Queries.calculateMinShouldMatch to return the number of \"min should match\" clauses that the user wanted even if the number of optional clauses is smaller than the provided number.\nIn such case the query now returns no result.\nCloses #15521\n",
"number": 15571,
"review_comments": [
{
"body": "do you know if there is a good reason to return 0 if we got a negative value, or could we just return 'result' directly? (if this doesn't make any tests fail, this is good enough for me)\n",
"created_at": "2015-12-21T13:41:32Z"
},
{
"body": "I agree it is clearer this way. Then we don't need the comment anymore?\n",
"created_at": "2015-12-21T13:42:38Z"
},
{
"body": "It's ok to return 'result' directly (the value is re-checked in applyMinimumShouldMatch) except for ExtendedCommonTermsQuery which uses calculateMinShouldMatch directly without checking the returned result. I can modify ExtendedCommonTermsQuery if you think that it's clearer this way.\n",
"created_at": "2015-12-21T14:23:05Z"
},
{
"body": "Sure, I'll remove the comment.\n",
"created_at": "2015-12-21T14:23:19Z"
},
{
"body": "> I can modify ExtendedCommonTermsQuery if you think that it's clearer this way.\n\nActually I was more thinking that our query parsers should not replace negative values with \"0\". They should either propagate to the query and let them deal with it or throw an exception instead of being lenient?\n",
"created_at": "2015-12-21T14:33:49Z"
},
{
"body": "The boolean query throws an IllegalArgumentException when min should match is a negative value but only on some cases, for instance if you don't have should clauses then the query works.\nI can add more tests in order to emphasize the kind of problem we could have with this approach but I think we should first define what is a valid min should match value first. For instance what should we do if min should match is set but there is no \"should\" clause ? \n",
"created_at": "2015-12-21T14:45:37Z"
},
{
"body": "> The boolean query throws an IllegalArgumentException when min should match is a negative value but only on some cases, for instance if you don't have should clauses then the query works.\n\nOK, I didn't know that, it's more complex than I thought. Let's merge this PR as-in then and work on another PR to make minimum_should_match validation more consistent.\n",
"created_at": "2015-12-21T14:54:31Z"
},
{
"body": "Sure, thanks.\n",
"created_at": "2015-12-21T15:03:32Z"
}
],
"title": "Min should match greater than the number of optional clauses should return no result"
}
|
{
"commits": [
{
"message": "Queries.calculateMinShouldMatch returns the number of \"min should match\" clauses that the user wanted\neven if the number of optional clauses is smaller than the provided number.\nIn such case the query now returns no result.\nCloses #15521"
}
],
"files": [
{
"diff": "@@ -179,8 +179,6 @@ public static int calculateMinShouldMatch(int optionalClauseCount, String spec)\n result = calc < 0 ? result + calc : calc;\n }\n \n- return (optionalClauseCount < result ?\n- optionalClauseCount : (result < 0 ? 0 : result));\n-\n+ return result < 0 ? 0 : result;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java",
"status": "modified"
},
{
"diff": "@@ -273,8 +273,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n return new MatchAllDocsQuery();\n }\n final String minimumShouldMatch;\n- if (context.isFilter() && this.minimumShouldMatch == null) {\n- //will be applied for real only if there are should clauses\n+ if (context.isFilter() && this.minimumShouldMatch == null && shouldClauses.size() > 0) {\n minimumShouldMatch = \"1\";\n } else {\n minimumShouldMatch = this.minimumShouldMatch;",
"filename": "core/src/main/java/org/elasticsearch/index/query/BoolQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -254,6 +254,24 @@ public void testMinShouldMatchFilterWithShouldClauses() throws Exception {\n assertThat(innerBooleanClause2.getQuery(), instanceOf(MatchAllDocsQuery.class));\n }\n \n+ public void testMinShouldMatchBiggerThanNumberOfShouldClauses() throws Exception {\n+ BooleanQuery bq = (BooleanQuery) parseQuery(\n+ boolQuery()\n+ .should(termQuery(\"foo\", \"bar\"))\n+ .should(termQuery(\"foo2\", \"bar2\"))\n+ .minimumNumberShouldMatch(\"3\")\n+ .buildAsBytes()).toQuery(createShardContext());\n+ assertEquals(3, bq.getMinimumNumberShouldMatch());\n+\n+ bq = (BooleanQuery) parseQuery(\n+ boolQuery()\n+ .should(termQuery(\"foo\", \"bar\"))\n+ .should(termQuery(\"foo2\", \"bar2\"))\n+ .minimumNumberShouldMatch(3)\n+ .buildAsBytes()).toQuery(createShardContext());\n+ assertEquals(3, bq.getMinimumNumberShouldMatch());\n+ }\n+\n public void testFromJson() throws IOException {\n String query =\n \"{\" +",
"filename": "core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -1016,6 +1016,59 @@ public void testMultiMatchQueryMinShouldMatch() {\n searchResponse = client().prepareSearch().setQuery(multiMatchQuery).get();\n assertHitCount(searchResponse, 1l);\n assertFirstHit(searchResponse, hasId(\"1\"));\n+ // Min should match > # optional clauses returns no docs.\n+ multiMatchQuery = multiMatchQuery(\"value1 value2 value3\", \"field1\", \"field2\");\n+ multiMatchQuery.minimumShouldMatch(\"4\");\n+ searchResponse = client().prepareSearch().setQuery(multiMatchQuery).get();\n+ assertHitCount(searchResponse, 0l);\n+ }\n+\n+ public void testBoolQueryMinShouldMatchBiggerThanNumberOfShouldClauses() throws IOException {\n+ createIndex(\"test\");\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field1\", new String[]{\"value1\", \"value2\", \"value3\"}).get();\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field2\", \"value1\").get();\n+ refresh();\n+\n+ BoolQueryBuilder boolQuery = boolQuery()\n+ .must(termQuery(\"field1\", \"value1\"))\n+ .should(boolQuery()\n+ .should(termQuery(\"field1\", \"value1\"))\n+ .should(termQuery(\"field1\", \"value2\"))\n+ .minimumNumberShouldMatch(3));\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(boolQuery).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"1\"));\n+\n+ boolQuery = boolQuery()\n+ .must(termQuery(\"field1\", \"value1\"))\n+ .should(boolQuery()\n+ .should(termQuery(\"field1\", \"value1\"))\n+ .should(termQuery(\"field1\", \"value2\"))\n+ .minimumNumberShouldMatch(1))\n+ // Only one should clause is defined, returns no docs.\n+ .minimumNumberShouldMatch(2);\n+ searchResponse = client().prepareSearch().setQuery(boolQuery).get();\n+ assertHitCount(searchResponse, 0l);\n+\n+ boolQuery = boolQuery()\n+ .should(termQuery(\"field1\", \"value1\"))\n+ .should(boolQuery()\n+ .should(termQuery(\"field1\", \"value1\"))\n+ .should(termQuery(\"field1\", \"value2\"))\n+ .minimumNumberShouldMatch(3))\n+ .minimumNumberShouldMatch(1);\n+ searchResponse = client().prepareSearch().setQuery(boolQuery).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"1\"));\n+\n+ boolQuery = boolQuery()\n+ .must(termQuery(\"field1\", \"value1\"))\n+ .must(boolQuery()\n+ .should(termQuery(\"field1\", \"value1\"))\n+ .should(termQuery(\"field1\", \"value2\"))\n+ .minimumNumberShouldMatch(3));\n+ searchResponse = client().prepareSearch().setQuery(boolQuery).get();\n+ assertHitCount(searchResponse, 0l);\n }\n \n public void testFuzzyQueryString() {",
"filename": "core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java",
"status": "modified"
}
]
}
|
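The behavioural change to `Queries.calculateMinShouldMatch` in the diff above boils down to no longer clamping the requested value to the number of optional clauses. The following is a simplified sketch for plain integer specs only; the real method also handles negative and percentage specs, which are omitted here:

```java
public class MinShouldMatchExample {
    /** Old behaviour: silently clamp to the number of optional clauses. */
    static int clampedMinShouldMatch(int optionalClauseCount, int spec) {
        return Math.min(optionalClauseCount, Math.max(0, spec));
    }

    /** New behaviour: keep what was asked for, so an unsatisfiable requirement matches nothing. */
    static int strictMinShouldMatch(int optionalClauseCount, int spec) {
        return Math.max(0, spec);
    }

    public static void main(String[] args) {
        int optionalClauses = 2; // e.g. should clauses on "elizabeth" and "smith"
        int spec = 3;            // minimum_should_match: 3
        System.out.println("old: " + clampedMinShouldMatch(optionalClauses, spec)); // 2 -> query can still match
        System.out.println("new: " + strictMinShouldMatch(optionalClauses, spec));  // 3 -> no document can satisfy it
    }
}
```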
{
"body": "When installing the Windows service for Elasticsearch 2.x, some part of the batch file breaks when the ES_HOME path contains parentheses. This behavior is not present in 1.7.\n\nExample:\n\n> C:\\Program Files (x86)\\elasticsearch-2.1.0> bin\\service.bat install\n> Installing service : \"elasticsearch-service-x64\"\n> Using JAVA_HOME (64-bit): \"C:\\Program Files\\Java\\jdk1.8.0_45\"\n> \\elasticsearch-2.1.0/lib/elasticsearch-2.1.0.jar was unexpected at this time.\n",
"comments": [
{
"body": "@gmarz could you take a look at this please\n",
"created_at": "2015-12-11T13:31:44Z"
},
{
"body": "It looks like this originates from the new if statement around setting ES_CLASSPATH in elasticsearch.in.bat\n",
"created_at": "2015-12-11T18:21:59Z"
},
{
"body": "@kstachowiak-kcura that looks like it may be part of the issue, but the service also fails to start even after properly escaping the path in the bat file. I'll dig into this further tomorrow.\n",
"created_at": "2015-12-15T04:48:10Z"
},
{
"body": "In my case, the problem occurs when the path where ES is installed contains `)` - closing parantheses. In this case `Program Files (x86)`. Just tested with different patterns, like: `Program Files (x86`, `Program Files x86` all work fine. The closing parentheses is causing the problem.\n",
"created_at": "2015-12-22T04:40:55Z"
}
],
"number": 15349,
"title": "Windows service installer doesn't escape file path in 2.x"
}
|
{
"body": "When ES_HOME contains parentheses, parsing of the if statement around ES_CLASSPATH is thrown off. This fix enables delayed expansion in service.bat (already enabled in elasticsearch.bat).\n\nCloses #15349\n",
"number": 15549,
"review_comments": [],
"title": "Fix Windows service installation failure"
}
|
{
"commits": [
{
"message": "Fix Windows service installation failure when ES_HOME contains parantheses\n\nCloses #15349"
}
],
"files": [
{
"diff": "@@ -93,7 +93,7 @@ set JAVA_OPTS=%JAVA_OPTS% -Djna.nosys=true\n \n REM check in case a user was using this mechanism\n if \"%ES_CLASSPATH%\" == \"\" (\n-set ES_CLASSPATH=%ES_HOME%/lib/elasticsearch-${project.version}.jar;%ES_HOME%/lib/*\n+set ES_CLASSPATH=!ES_HOME!/lib/elasticsearch-${project.version}.jar;!ES_HOME!/lib/*\n ) else (\n ECHO Error: Don't modify the classpath with ES_CLASSPATH, Best is to add 1>&2\n ECHO additional elements via the plugin mechanism, or if code must really be 1>&2",
"filename": "distribution/src/main/resources/bin/elasticsearch.in.bat",
"status": "modified"
},
{
"diff": "@@ -1,5 +1,5 @@\n @echo off\n-SETLOCAL\n+SETLOCAL enabledelayedexpansion\n \n TITLE Elasticsearch Service ${project.version}\n ",
"filename": "distribution/src/main/resources/bin/service.bat",
"status": "modified"
}
]
}
|
{
"body": "same ES2.1.0 server and client. \n\n[DEBUG][action.admin.cluster.node.info] [node1] failed to execute on node [yQYHZ3hxTWOwrQPq8w4rrQ]\nRemoteTransportException[[node2][node2:9305][cluster:monitor/nodes/info[n]]]; nested: NotSerializableExceptionWrapper;\nCaused by: NotSerializableExceptionWrapper[null]\nat java.util.ArrayList.sort(ArrayList.java:1456)\nat java.util.Collections.sort(Collections.java:175)\nat org.elasticsearch.action.admin.cluster.node.info.PluginsInfo.getInfos(PluginsInfo.java:55)\nat org.elasticsearch.action.admin.cluster.node.info.PluginsInfo.writeTo(PluginsInfo.java:86)\nat org.elasticsearch.action.admin.cluster.node.info.NodeInfo.writeTo(NodeInfo.java:284)\nat org.elasticsearch.transport.netty.NettyTransportChannel.sendResponse(NettyTransportChannel.java:97)\nat org.elasticsearch.transport.netty.NettyTransportChannel.sendResponse(NettyTransportChannel.java:75)\nat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:211)\nat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:207)\nat org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:299)\nat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\nat java.lang.Thread.run(Thread.java:745)\n",
"comments": [
{
"body": "no, the cluster noeds are all 2.1.0\n",
"created_at": "2015-12-18T10:31:55Z"
},
{
"body": "The exception that is being thrown here is very likely a `ConcurrentModificationException` which means something rather nefarious is going on. What plugins do you have installed? Can you provide the steps that led to you seeing this so that we have a reproduction?\n",
"created_at": "2015-12-18T11:25:14Z"
},
{
"body": "I can reproduce this locally. This can be caused by concurrent requests to any of the nodes info, cluster stats and the cat plugins APIs. I will open a pull request to address shortly.\n",
"created_at": "2015-12-18T12:31:14Z"
},
{
"body": "I'm also receiving these errors intermittently. I left my 5 node 2.1.0 cluster running over the Christmas break collecting about 120k events per minute from firewalls. Have the following plugins installed...\n\n```\n- license\n- marvel-agent\n- cloud-aws\n- shield\n- watcher\n```\n\nReceiving the following error on all cluster nodes...\n\n[2015-12-29 07:26:26,735][DEBUG][action.admin.cluster.node.info] [elkrp11] failed to execute on node [m_uC_oh1TAmasv2k1n3zJg]\nRemoteTransportException[[elkrp7][10.10.60.121:9300][cluster:monitor/nodes/info[n]]]; nested: NotSerializableExceptionWrapper;\nCaused by: NotSerializableExceptionWrapper[null]\n at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)\n at java.util.ArrayList$Itr.next(ArrayList.java:851)\n at org.elasticsearch.action.admin.cluster.node.info.PluginsInfo.writeTo(PluginsInfo.java:86)\n at org.elasticsearch.action.admin.cluster.node.info.NodeInfo.writeTo(NodeInfo.java:284)\n at org.elasticsearch.transport.netty.NettyTransportChannel.sendResponse(NettyTransportChannel.java:97)\n at org.elasticsearch.transport.netty.NettyTransportChannel.sendResponse(NettyTransportChannel.java:75)\n at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:211)\n at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:207)\n at org.elasticsearch.shield.transport.ShieldServerTransportService$ProfileSecuredRequestHandler.messageReceived(ShieldServerTransportService.java:165)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:299)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n",
"created_at": "2015-12-29T00:48:17Z"
},
{
"body": "Closed in the 2.1 branch by 0cd08afd53b90523ed9cb052869653bc2184cf25 and in the 2.0 branch by 916218d1f87ea95785e659f514fae5f9a524005d.\n",
"created_at": "2015-12-29T01:13:17Z"
},
{
"body": "I'm running a 2.1.1 cluster and this issue still persists\n",
"created_at": "2015-12-29T05:10:22Z"
},
{
"body": "> I'm running a 2.1.1 cluster and this issue still persists\n\nThe fix for this was put into the 2.1 branch for inclusion in an eventual 2.1.2 patch release by commit 0cd08afd53b90523ed9cb052869653bc2184cf25, and the into the 2.0 branch for inclusion in a potential (but not guaranteed) 2.0.3 release by commit 916218d1f87ea95785e659f514fae5f9a524005d. The version labels on #15541 are v2.0.3 and v2.1.2 showing the potential releases that will contain the fix.\n",
"created_at": "2015-12-29T12:20:01Z"
},
{
"body": "Issue seems still persists on v2.4.0:\n\n```\nCaused by: NotSerializableExceptionWrapper[too_complex_to_determinize_exception: Determinizing automaton with 77 states and 317 transitions would result in more than 10000 states.]\n at org.apache.lucene.util.automaton.Operations.determinize(Operations.java:743)\n at org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:58)\n at org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:515)\n at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:495)\n at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)\n at org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude.toAutomaton(IncludeExclude.java:384)\n at org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude.convertToOrdinalsFilter(IncludeExclude.java:408)\n at org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory$ExecutionMode$2.create(TermsAggregatorFactory.java:71)\n at org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory.doCreateInternal(TermsAggregatorFactory.java:246)\n at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.createInternal(ValuesSourceAggregatorFactory.java:64)\n at org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:102)\n at org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:87)\n at org.elasticsearch.search.aggregations.AggregationPhase.preProcess(AggregationPhase.java:85)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:111)\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:372)\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:480)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:392)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:389)\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:293)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\n```\n\n```\n{\n\"name\": \"opcart-es-data-1\",\n\"cluster_name\": \"opcart-prod-v2\",\n\"version\": {\n\"number\": \"2.4.0\",\n\"build_hash\": \"ce9f0c7394dee074091dd1bc4e9469251181fc55\",\n\"build_timestamp\": \"2016-08-29T09:14:17Z\",\n\"build_snapshot\": false,\n\"lucene_version\": \"5.5.2\"\n},\n\"tagline\": \"You Know, for Search\"\n}\n```\n",
"created_at": "2016-09-09T13:40:24Z"
},
{
"body": "@oleg-andreyev This is a completely different issue. You're hitting a `TooComplexToDeterminizeException` inside Lucene, which we do not serialize, hence the wrapper, hence the message:\n\n> `too_complex_to_determinize_exception: Determinizing automaton with 77 states and 317 transitions would result in more than 10000 states.`\n\nIt looks like you have a regular expression that is too complex? I suggest opening a discuss thread on the [Elastic Discourse forums](https://discuss.elastic.co) if you have additional questions.\n",
"created_at": "2016-09-09T14:23:46Z"
}
],
"number": 15537,
"title": "ES2.1.0 keep throw NotSerializableExceptionWrapper"
}
|
{
"body": "A ConcurrentModificationException can arise on an Elasticsearch cluster\nrunning OpenJDK 8 based JVMs from concurrent requests to the nodes info,\nnodes stats and cat plugins endpoints. The issue can arise because\nPlugsinInfo#getInfos currently performs a sort of its backing list. This\nmethod is used in each of those endpoints and two concurrent requests\nwill cause this backing list to be sorted concurrently. Since sorting\nthe backing list modifies it, and concurrent modifications are bad, the\nsort implementation in OpenJDK 8 added protection against concurrent\nmodifications.\n\nThis commit addresses this issue by removing the sort\nfrom the Plugins#getInfos method, but still preserves the sorted result\nfrom this method.\n\nCloses #15537\n",
"number": 15541,
"review_comments": [],
"title": "Fix ConcurrentModificationException from nodes info and nodes stats"
}
|
{
"commits": [
{
"message": "Fix ConcurrentModificationException from nodes info and nodes stats\n\nA ConcurrentModificationException can arise on an Elasticsearch cluster\nrunning OpenJDK 8 based JVMs from concurrent requests to the nodes info,\nnodes stats and cat plugins endpoints. The issue can arise because\nPlugsinInfo#getInfos currently performs a sort of its backing list. This\nmethod is used in each of those endpoints and two concurrent requests\nwill cause this backing list to be sorted concurrently. Since sorting\nthe backing list modifies it, and concurrent modifications are bad, the\nsort implementation in OpenJDK 8 added protection against concurrent\nmodifications.\n\nThis commit addresses this issue by removing the sort\nfrom the Plugins#getInfos method, but still preserves the sorted result\nfrom this method.\n\nCloses #15537"
}
],
"files": [
{
"diff": "@@ -28,38 +28,33 @@\n import org.elasticsearch.plugins.PluginInfo;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Collections;\n+import java.util.Arrays;\n import java.util.Comparator;\n import java.util.List;\n+import java.util.Set;\n+import java.util.TreeSet;\n \n public class PluginsInfo implements Streamable, ToXContent {\n static final class Fields {\n static final XContentBuilderString PLUGINS = new XContentBuilderString(\"plugins\");\n }\n \n- private List<PluginInfo> infos;\n+ private Set<PluginInfo> infos;\n \n public PluginsInfo() {\n- infos = new ArrayList<>();\n- }\n-\n- public PluginsInfo(int size) {\n- infos = new ArrayList<>(size);\n- }\n-\n- /**\n- * @return an ordered list based on plugins name\n- */\n- public List<PluginInfo> getInfos() {\n- Collections.sort(infos, new Comparator<PluginInfo>() {\n+ infos = new TreeSet<>(new Comparator<PluginInfo>() {\n @Override\n public int compare(final PluginInfo o1, final PluginInfo o2) {\n return o1.getName().compareTo(o2.getName());\n }\n });\n+ }\n \n- return infos;\n+ /**\n+ * @return an ordered list based on plugins name\n+ */\n+ public List<PluginInfo> getInfos() {\n+ return Arrays.asList(infos.toArray(new PluginInfo[infos.size()]));\n }\n \n public void add(PluginInfo info) {\n@@ -75,6 +70,7 @@ public static PluginsInfo readPluginsInfo(StreamInput in) throws IOException {\n @Override\n public void readFrom(StreamInput in) throws IOException {\n int plugins_size = in.readInt();\n+ infos.clear();\n for (int i = 0; i < plugins_size; i++) {\n infos.add(PluginInfo.readFromStream(in));\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/PluginsInfo.java",
"status": "modified"
},
{
"diff": "@@ -28,8 +28,12 @@\n import java.io.OutputStream;\n import java.nio.file.Files;\n import java.nio.file.Path;\n+import java.util.ArrayList;\n+import java.util.ConcurrentModificationException;\n import java.util.List;\n import java.util.Properties;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.atomic.AtomicBoolean;\n \n import static org.elasticsearch.common.util.CollectionUtils.eagerTransform;\n import static org.hamcrest.Matchers.contains;\n@@ -273,7 +277,7 @@ public void testReadFromPropertiesSitePluginWithoutSite() throws Exception {\n }\n \n public void testPluginListSorted() {\n- PluginsInfo pluginsInfo = new PluginsInfo(5);\n+ PluginsInfo pluginsInfo = new PluginsInfo();\n pluginsInfo.add(new PluginInfo(\"c\", \"foo\", true, \"dummy\", true, \"dummyclass\", true));\n pluginsInfo.add(new PluginInfo(\"b\", \"foo\", true, \"dummy\", true, \"dummyclass\", true));\n pluginsInfo.add(new PluginInfo(\"e\", \"foo\", true, \"dummy\", true, \"dummyclass\", true));\n@@ -289,4 +293,42 @@ public String apply(PluginInfo input) {\n });\n assertThat(names, contains(\"a\", \"b\", \"c\", \"d\", \"e\"));\n }\n+\n+ public void testConcurrentModificationsAreAvoided() throws InterruptedException {\n+ final PluginsInfo pluginsInfo = new PluginsInfo();\n+ int numberOfPlugins = randomIntBetween(128, 256);\n+ for (int i = 0; i < numberOfPlugins; i++) {\n+ pluginsInfo.add(new PluginInfo(\"name\", \"description\", false, \"version\", true, \"classname\", true));\n+ }\n+\n+ int randomNumberOfThreads = randomIntBetween(2, 8);\n+ final int numberOfAttempts = randomIntBetween(2048, 4096);\n+ final CountDownLatch latch = new CountDownLatch(1 + randomNumberOfThreads);\n+ List<Thread> threads = new ArrayList<>(randomNumberOfThreads);\n+ final AtomicBoolean cme = new AtomicBoolean();\n+ for (int i = 0; i < randomNumberOfThreads; i++) {\n+ Thread thread = new Thread(new Runnable() {\n+ @Override\n+ public void run() {\n+ latch.countDown();\n+ for (int j = 0; j < numberOfAttempts; j++) {\n+ try {\n+ pluginsInfo.getInfos();\n+ } catch (ConcurrentModificationException e) {\n+ cme.set(true);\n+ }\n+ }\n+ }\n+ });\n+ threads.add(thread);\n+ thread.start();\n+ }\n+\n+ latch.countDown();\n+ for (Thread thread : threads) {\n+ thread.join();\n+ }\n+\n+ assertFalse(cme.get());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/plugins/PluginInfoTests.java",
"status": "modified"
}
]
}
|
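The fix in the diff above works because the backing collection is kept ordered at insertion time, so readers only copy it and never mutate shared state. A minimal sketch of that shape, with hypothetical names (the record above shows the actual `PluginsInfo` change):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PluginListing {
    // Ordered on insertion; reads no longer need Collections.sort, which mutated the old backing list.
    private final Set<String> names = new TreeSet<>();

    void add(String name) {
        names.add(name);
    }

    /** Concurrent readers each get an independent, already-sorted copy. */
    List<String> getNames() {
        return new ArrayList<>(names);
    }

    public static void main(String[] args) {
        PluginListing plugins = new PluginListing();
        plugins.add("marvel-agent");
        plugins.add("cloud-aws");
        plugins.add("license");
        System.out.println(plugins.getNames()); // [cloud-aws, license, marvel-agent]
    }
}
```

As in the original change, this assumes the set is populated before it is read concurrently; a plain TreeSet is not safe for concurrent writes.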
{
"body": "here is a link to a relevant failure: http://build-us-00.elasticsearch.org/job/es_core_master_metal/5665/\n\nthe test is indexing into a single shard with 0 replicas and this only shard is moving from node 0 to node 1. During the time we index the documents we mark the primary as started but until the clusterstate is published on node 0 it depends on which node you hit for indexing if the doc goes into both shards or not. Now if you hit node 1 the doc only goes into the new started (relocated) primary. Yet if you are fast enough and hit node 0 for the delete we can't find the doc in the currently relocating (clusterstate is not yet updated) - the logic knows it needs to push the delete into the new primary on node 1 so it really get's deleted but we return the response from node 0 which has `isFound = false`\n",
"comments": [
{
"body": "FYI - I disabled the check for now since it might bring up other failures here and there if timing allows. We should reenable the check once we have a fix - I added a log line instead\n",
"created_at": "2014-11-29T14:42:48Z"
},
{
"body": "a short update, now that we disabled the hard failure, we see that the tests fail to delete the doc: http://build-us-00.elasticsearch.org/job/es_bwc_1x/5883/CHECK_BRANCH=origin%2F1.4,jdk=JDK7,label=bwc/testReport/junit/org.elasticsearch.bwcompat/BasicBackwardsCompatibilityTest/testRecoverFromPreviousVersion/\n",
"created_at": "2014-12-19T13:37:48Z"
}
],
"number": 8706,
"title": "Delete might returns false `isFound()` while primary is relocated "
}
|
{
"body": "This implements a clean handoff between shard copies during primary relocation. \nAfter replaying the translog to target copy during relocation, the source copy is \nmarked as `relocated`. Further writes are blocked on `relocated` shard copies \nand are retried until relocation completes (waits for the cluster state to point to \nthe new copy). \nThe recovery process blocks until all pending writes complete on the source copy. \nIn case of failure/cancellation of recoveries after the source copy has been marked \nas `relocated`, the source state is marked back to `started` and resumes to accept\nwrites.\n\nrelates #8706\n",
"number": 15532,
"review_comments": [],
"title": "Implement proper handoff between primary copies during relocation"
}
|
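The handoff described in the PR above is essentially "stop admitting new primary operations, then wait for the in-flight ones to drain". A simplified latch-based sketch of that idea follows; the names are illustrative, and the diff below shows the actual IndexShard changes, including the re-check that closes the admit/drain race:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class PrimaryHandoff {
    private volatile boolean blocked = false;
    private final AtomicInteger inFlight = new AtomicInteger();
    private final CountDownLatch drained = new CountDownLatch(1);

    /** Write path: refuse new operations once the handoff has started. */
    void beforeOperation() {
        if (blocked) {
            throw new IllegalStateException("primary is relocating; retry against the new primary");
        }
        inFlight.incrementAndGet();
        // the real code re-checks the flag after incrementing to close the race with blockAndDrain()
    }

    /** Write path: signal the handoff once the last in-flight operation finishes. */
    void afterOperation() {
        if (inFlight.decrementAndGet() == 0 && blocked) {
            drained.countDown();
        }
    }

    /** Relocation: mark the copy as relocated, then wait for pending operations to complete. */
    void blockAndDrain() throws InterruptedException {
        blocked = true;
        if (inFlight.get() > 0) {
            drained.await();
        }
    }
}
```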
{
"commits": [
{
"message": "Implement proper handoff between shard copies in primary relocation\n\nNow we mark the source copy as relocated after replaying the translog\nto target during relocation, blocking till all pending operations complete.\nAny further write operations are blocked on relocated shard copies, they are\nretried when relocation has completed"
}
],
"files": [
{
"diff": "@@ -130,6 +130,8 @@\n import java.util.Map;\n import java.util.Objects;\n import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.ExecutionException;\n import java.util.concurrent.ScheduledFuture;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n@@ -206,6 +208,9 @@ public class IndexShard extends AbstractIndexShardComponent {\n private final IndexSearcherWrapper searcherWrapper;\n private final TimeValue inactiveTime;\n \n+ private volatile boolean blockOperations = false;\n+ private AtomicReference<CountDownLatch> pendingOps = new AtomicReference<>();\n+\n /**\n * True if this shard is still indexing (recently) and false if we've been idle for long enough (as periodically checked by {@link\n * IndexingMemoryController}).\n@@ -393,6 +398,16 @@ public void updateRoutingEntry(final ShardRouting newRouting, final boolean pers\n }\n }\n }\n+ if (state == IndexShardState.RELOCATED && newRouting.relocating() == false) {\n+ // if the shard is marked RELOCATED but the shard is not in relocating state (due to recovery failure)\n+ // we move to STARTED\n+ synchronized (mutex) {\n+ if (state == IndexShardState.RELOCATED) {\n+ changeState(IndexShardState.STARTED, \"primary relocation failed\");\n+ blockOperations = false;\n+ }\n+ }\n+ }\n this.shardRouting = newRouting;\n indexEventListener.shardRoutingChanged(this, currentRouting, newRouting);\n } finally {\n@@ -428,13 +443,14 @@ public IndexShardState markAsRecovering(String reason, RecoveryState recoverySta\n }\n }\n \n- public IndexShard relocated(String reason) throws IndexShardNotStartedException {\n+ public IndexShard relocated(String reason) throws InterruptedException {\n synchronized (mutex) {\n if (state != IndexShardState.STARTED) {\n throw new IndexShardNotStartedException(shardId, state);\n }\n changeState(IndexShardState.RELOCATED, reason);\n }\n+ blockOperations();\n return this;\n }\n \n@@ -795,7 +811,6 @@ public void close(String reason, boolean flushEngine) throws IOException {\n }\n }\n \n-\n public IndexShard postRecovery(String reason) throws IndexShardStartedException, IndexShardRelocatedException, IndexShardClosedException {\n if (mapperService.hasMapping(PercolatorService.TYPE_NAME)) {\n refresh(\"percolator_load_queries\");\n@@ -952,9 +967,13 @@ private void ensureWriteAllowed(Engine.Operation op) throws IllegalIndexShardSta\n \n if (origin == Engine.Operation.Origin.PRIMARY) {\n // for primaries, we only allow to write when actually started (so the cluster has decided we started)\n- // otherwise, we need to retry, we also want to still allow to index if we are relocated in case it fails\n- if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED) {\n- throw new IllegalIndexShardStateException(shardId, state, \"operation only allowed when started/recovering, origin [\" + origin + \"]\");\n+ // otherwise, we need to retry\n+ if (state != IndexShardState.STARTED) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"operation only allowed when started, origin [\" + origin + \"]\");\n+ }\n+ // we can miss blocking for operation when we increment the operation counter after we check for pending ops\n+ if (blockOperations) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"operation blocked, origin [\" + origin + \"]\");\n }\n } else {\n // for replicas, we allow to write also while recovering, since we index also during recovery to replicas\n@@ -1495,12 +1514,40 @@ 
protected void alreadyClosed() {\n }\n }\n \n+ /**\n+ * blocks indefinitely until pending operations complete\n+ */\n+ public void blockOperations() throws InterruptedException {\n+ boolean success = false;\n+ try {\n+ blockOperations = true;\n+ pendingOps.compareAndSet(null, new CountDownLatch(1));\n+ if (indexShardOperationCounter.refCount() - 1 > 0) {\n+ pendingOps.get().await();\n+ }\n+ success = true;\n+ } finally {\n+ if (success == false) {\n+ blockOperations = false;\n+ }\n+ pendingOps.set(null);\n+ }\n+ }\n+\n public void incrementOperationCounter() {\n+ if (blockOperations && shardRouting.primary()) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"primary has relocated\");\n+ }\n indexShardOperationCounter.incRef();\n }\n \n public void decrementOperationCounter() {\n indexShardOperationCounter.decRef();\n+ if (blockOperations && indexShardOperationCounter.refCount() - 1 == 0) {\n+ if (pendingOps.get() != null) {\n+ pendingOps.get().countDown();\n+ }\n+ }\n }\n \n public int getOperationsCount() {",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -28,13 +28,16 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportChannel;\n import org.elasticsearch.transport.TransportRequestHandler;\n+import org.elasticsearch.transport.TransportResponse;\n import org.elasticsearch.transport.TransportService;\n \n import java.util.ArrayList;\n@@ -60,8 +63,7 @@ public static class Actions {\n \n private final ClusterService clusterService;\n \n- private final OngoingRecoveres ongoingRecoveries = new OngoingRecoveres();\n-\n+ private final OngoingRecoveries ongoingRecoveries = new OngoingRecoveries();\n \n @Inject\n public RecoverySource(Settings settings, TransportService transportService, IndicesService indicesService,\n@@ -106,11 +108,11 @@ private RecoveryResponse recover(final StartRecoveryRequest request) {\n }\n if (!targetShardRouting.initializing()) {\n logger.debug(\"delaying recovery of {} as it is not listed as initializing on the target node {}. known shards state is [{}]\",\n- request.shardId(), request.targetNode(), targetShardRouting.state());\n+ request.shardId(), request.targetNode(), targetShardRouting.state());\n throw new DelayRecoveryException(\"source node has the state of the target shard to be [\" + targetShardRouting.state() + \"], expecting to be [initializing]\");\n }\n \n- logger.trace(\"[{}][{}] starting recovery to {}, mark_as_relocated {}\", request.shardId().index().name(), request.shardId().id(), request.targetNode(), request.markAsRelocated());\n+ logger.trace(\"[{}][{}] starting recovery to {}\", request.shardId().index().name(), request.shardId().id(), request.targetNode());\n final RecoverySourceHandler handler;\n if (shard.indexSettings().isOnSharedFilesystem()) {\n handler = new SharedFSRecoverySourceHandler(shard, request, recoverySettings, transportService, logger);\n@@ -133,8 +135,7 @@ public void messageReceived(final StartRecoveryRequest request, final TransportC\n }\n }\n \n-\n- private static final class OngoingRecoveres {\n+ private static final class OngoingRecoveries {\n private final Map<IndexShard, Set<RecoverySourceHandler>> ongoingRecoveries = new HashMap<>();\n \n synchronized void add(IndexShard shard, RecoverySourceHandler handler) {",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java",
"status": "modified"
},
{
"diff": "@@ -396,11 +396,10 @@ public void run() throws InterruptedException {\n }\n });\n \n-\n- if (request.markAsRelocated()) {\n- // TODO what happens if the recovery process fails afterwards, we need to mark this back to started\n+ if (isPrimaryRelocation()) {\n+ // if the recovery process fails afterwards, relocated shard is marked back to started\n try {\n- shard.relocated(\"to \" + request.targetNode());\n+ cancellableThreads.execute(() -> shard.relocated(\"to\" + request.targetNode()));\n } catch (IllegalIndexShardStateException e) {\n // we can ignore this exception since, on the other node, when it moved to phase3\n // it will also send shard started, which might cause the index shard we work against\n@@ -409,7 +408,11 @@ public void run() throws InterruptedException {\n }\n stopWatch.stop();\n logger.trace(\"[{}][{}] finalizing recovery to {}: took [{}]\",\n- indexName, shardId, request.targetNode(), stopWatch.totalTime());\n+ indexName, shardId, request.targetNode(), stopWatch.totalTime());\n+ }\n+\n+ protected boolean isPrimaryRelocation() {\n+ return request.recoveryType() == RecoveryState.Type.RELOCATION && shard.routingEntry().primary();\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java",
"status": "modified"
},
{
"diff": "@@ -178,7 +178,7 @@ private void doRecovery(final RecoveryStatus recoveryStatus) {\n return;\n }\n final StartRecoveryRequest request = new StartRecoveryRequest(recoveryStatus.shardId(), recoveryStatus.sourceNode(), clusterService.localNode(),\n- false, metadataSnapshot, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n+ metadataSnapshot, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n \n final AtomicReference<RecoveryResponse> responseHolder = new AtomicReference<>();\n try {\n@@ -267,7 +267,6 @@ public RecoveryResponse newInstance() {\n onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source shard is closed\", cause), false);\n return;\n }\n-\n onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, e), true);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
{
"diff": "@@ -84,8 +84,4 @@ protected int sendSnapshot(Translog.Snapshot snapshot) {\n return 0;\n }\n \n- private boolean isPrimaryRelocation() {\n- return request.recoveryType() == RecoveryState.Type.RELOCATION && shard.routingEntry().primary();\n- }\n-\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/SharedFSRecoverySourceHandler.java",
"status": "modified"
},
{
"diff": "@@ -41,8 +41,6 @@ public class StartRecoveryRequest extends TransportRequest {\n \n private DiscoveryNode targetNode;\n \n- private boolean markAsRelocated;\n-\n private Store.MetadataSnapshot metadataSnapshot;\n \n private RecoveryState.Type recoveryType;\n@@ -56,12 +54,11 @@ public StartRecoveryRequest() {\n * @param sourceNode The node to recover from\n * @param targetNode The node to recover to\n */\n- public StartRecoveryRequest(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, boolean markAsRelocated, Store.MetadataSnapshot metadataSnapshot, RecoveryState.Type recoveryType, long recoveryId) {\n+ public StartRecoveryRequest(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, Store.MetadataSnapshot metadataSnapshot, RecoveryState.Type recoveryType, long recoveryId) {\n this.recoveryId = recoveryId;\n this.shardId = shardId;\n this.sourceNode = sourceNode;\n this.targetNode = targetNode;\n- this.markAsRelocated = markAsRelocated;\n this.recoveryType = recoveryType;\n this.metadataSnapshot = metadataSnapshot;\n }\n@@ -82,10 +79,6 @@ public DiscoveryNode targetNode() {\n return targetNode;\n }\n \n- public boolean markAsRelocated() {\n- return markAsRelocated;\n- }\n-\n public RecoveryState.Type recoveryType() {\n return recoveryType;\n }\n@@ -101,7 +94,6 @@ public void readFrom(StreamInput in) throws IOException {\n shardId = ShardId.readShardId(in);\n sourceNode = DiscoveryNode.readNode(in);\n targetNode = DiscoveryNode.readNode(in);\n- markAsRelocated = in.readBoolean();\n metadataSnapshot = new Store.MetadataSnapshot(in);\n recoveryType = RecoveryState.Type.fromId(in.readByte());\n \n@@ -114,7 +106,6 @@ public void writeTo(StreamOutput out) throws IOException {\n shardId.writeTo(out);\n sourceNode.writeTo(out);\n targetNode.writeTo(out);\n- out.writeBoolean(markAsRelocated);\n metadataSnapshot.writeTo(out);\n out.writeByte(recoveryType.id());\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java",
"status": "modified"
},
{
"diff": "@@ -36,6 +36,7 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n@@ -46,6 +47,7 @@\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n import org.elasticsearch.index.shard.IndexShardNotStartedException;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n@@ -293,7 +295,7 @@ public void testRelocatingReplicaAfterPrimaryOperation() {\n final String index = \"test\";\n final ShardId shardId = new ShardId(index, 0);\n // start with a replica\n- clusterService.setState(state(index, true, ShardRoutingState.STARTED, randomBoolean() ? ShardRoutingState.INITIALIZING : ShardRoutingState.STARTED));\n+ clusterService.setState(state(index, true, ShardRoutingState.STARTED, randomBoolean() ? ShardRoutingState.INITIALIZING : ShardRoutingState.STARTED));\n logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n final ClusterState stateWithRelocatingReplica = state(index, true, ShardRoutingState.STARTED, ShardRoutingState.RELOCATING);\n \n@@ -348,6 +350,25 @@ protected Tuple<Response, Request> shardOperationOnPrimary(MetaData metaData, Re\n assertThat(\"replication phase should be skipped if index gets deleted after primary operation\", transport.capturedRequestsByTargetNode().size(), equalTo(0));\n }\n \n+ public void testRelocatedStateBlocksIndexing() throws InterruptedException {\n+ final String index = \"test\";\n+ final ShardId shardId = new ShardId(index, 0);\n+ clusterService.setState(state(index, true, ShardRoutingState.RELOCATING));\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ final Action actionOnRelocatedPrimary = new Action(Settings.EMPTY, \"testAction\", transportService, clusterService, threadPool) {\n+ @Override\n+ protected Releasable getIndexShardOperationsCounter(ShardId shardId) {\n+ throw new IllegalIndexShardStateException(shardId, IndexShardState.RELOCATED, \"primary has relocated\");\n+ }\n+ };\n+ Request request = new Request(shardId);\n+ PlainActionFuture<Response> listener = new PlainActionFuture<>();\n+ TransportReplicationAction<Request, Request, Response>.PrimaryPhase primaryPhase = actionOnRelocatedPrimary.new PrimaryPhase(request, createTransportChannel(listener));\n+ primaryPhase.run();\n+ assertListenerThrows(\"relocated shard must throw retryable exception\", listener, IllegalIndexShardStateException.class);\n+ assertThat(action.retryPrimaryException(new IllegalIndexShardStateException(shardId, IndexShardState.RELOCATED, \"relocated exception\")), equalTo(true));\n+ }\n+\n public void testWriteConsistency() throws ExecutionException, InterruptedException {\n action = new ActionWithConsistency(Settings.EMPTY, \"testActionWithConsistency\", transportService, clusterService, threadPool);\n final String index = \"test\";\n@@ -477,9 +498,8 @@ protected void runReplicateTest(IndexShardRoutingTable shardRoutingTable, int as\n assertIndexShardCounter(2);\n // TODO: set a default timeout\n 
TransportReplicationAction<Request, Request, Response>.ReplicationPhase replicationPhase =\n- action.new ReplicationPhase(request,\n- new Response(),\n- request.shardId(), createTransportChannel(listener), reference, null);\n+ action.new ReplicationPhase(request, new Response(), request.shardId(),\n+ createTransportChannel(listener), reference, null);\n \n assertThat(replicationPhase.totalShards(), equalTo(totalShards));\n assertThat(replicationPhase.pending(), equalTo(assignedReplicas));",
"filename": "core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java",
"status": "modified"
},
{
"diff": "@@ -119,6 +119,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n \n /**\n * Simple unit-test IndexShard related operations.\n@@ -768,6 +769,82 @@ public void run() {\n assertEquals(total + 1, shard.flushStats().getTotal());\n }\n \n+ public void testIndexBlockAfterRelocated() throws Exception {\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0)\n+ ).get());\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ final IndexShard shard = test.getShardOrNull(0);\n+ shard.relocated(\"simulated recovery\");\n+ try {\n+ shard.incrementOperationCounter();\n+ fail(\"indexing must be blocked after shard relocated\");\n+ } catch (IllegalIndexShardStateException expected) {\n+ assertThat(expected.currentState(), equalTo(IndexShardState.RELOCATED));\n+ }\n+ }\n+\n+ public void testStressRelocated() throws Exception {\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0)\n+ ).get());\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ final IndexShard shard = test.getShardOrNull(0);\n+ final int numThreads = randomIntBetween(2, 4);\n+ Thread[] indexThreads = new Thread[numThreads];\n+ CyclicBarrier barrier = new CyclicBarrier(numThreads + 1);\n+ for (int i = 0; i < indexThreads.length; i++) {\n+ indexThreads[i] = new Thread() {\n+ @Override\n+ public void run() {\n+ shard.incrementOperationCounter();\n+ try {\n+ barrier.await();\n+ } catch (InterruptedException | BrokenBarrierException e) {\n+ throw new RuntimeException(e);\n+ }\n+ shard.decrementOperationCounter();\n+ }\n+ };\n+ indexThreads[i].start();\n+ }\n+ AtomicBoolean success = new AtomicBoolean();\n+ final Thread recoveryThread = new Thread(() -> {\n+ try {\n+ shard.relocated(\"simulated recovery\");\n+ success.set(true);\n+ } catch (InterruptedException e) {\n+ throw new RuntimeException(e);\n+ }\n+ });\n+ recoveryThread.start();\n+ // ensure we mark the shard as relocated first\n+ assertBusy(() -> {\n+ assertThat(shard.state(), equalTo(IndexShardState.RELOCATED));\n+ });\n+ // ensure we block for pending operations to complete\n+ assertBusy(() -> {\n+ assertThat(success.get(), equalTo(false));\n+ assertThat(shard.getOperationsCount(), greaterThan(0));\n+ });\n+ // complete pending operations\n+ barrier.await();\n+ // ensure relocated successfully once pending operations are done\n+ assertBusy(() -> {\n+ assertThat(success.get(), equalTo(true));\n+ assertThat(shard.getOperationsCount(), equalTo(0));\n+ });\n+\n+ recoveryThread.join();\n+ for (Thread indexThread : indexThreads) {\n+ indexThread.join();\n+ }\n+ }\n+\n public void testRecoverFromStore() throws IOException {\n createIndex(\"test\");\n ensureGreen();\n@@ -848,6 +925,23 @@ public void testFailIfIndexNotPresentInRecoverFromStore() throws IOException {\n assertHitCount(client().prepareSearch().get(), 1);\n }\n \n+ public void testStateAfterRecoveryFails() throws InterruptedException {\n+ 
createIndex(\"test\");\n+ ensureGreen();\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ final IndexShard shard = test.getShardOrNull(0);\n+ ShardRouting origRouting = shard.routingEntry();\n+ ShardRouting inRecoveryRouting = new ShardRouting(origRouting);\n+ ShardRoutingHelper.relocate(inRecoveryRouting, \"some_node\");\n+ shard.updateRoutingEntry(inRecoveryRouting, true);\n+ shard.relocated(\"simulate mark as relocated\");\n+ assertThat(shard.state(), equalTo(IndexShardState.RELOCATED));\n+ ShardRouting failedRecoveryRouting = new ShardRouting(origRouting);\n+ shard.updateRoutingEntry(failedRecoveryRouting, true);\n+ assertThat(shard.state(), equalTo(IndexShardState.STARTED));\n+ }\n+\n public void testRestoreShard() throws IOException {\n createIndex(\"test\");\n createIndex(\"test_target\");",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -58,6 +58,7 @@\n import static org.elasticsearch.index.shard.IndexShardState.CREATED;\n import static org.elasticsearch.index.shard.IndexShardState.POST_RECOVERY;\n import static org.elasticsearch.index.shard.IndexShardState.RECOVERING;\n+import static org.elasticsearch.index.shard.IndexShardState.RELOCATED;\n import static org.elasticsearch.index.shard.IndexShardState.STARTED;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.CoreMatchers.equalTo;\n@@ -181,7 +182,7 @@ public void testIndexStateShardChanged() throws Throwable {\n ensureGreen();\n \n //the 3 relocated shards get closed on the first node\n- assertShardStatesMatch(stateChangeListenerNode1, 3, CLOSED);\n+ assertShardStatesMatch(stateChangeListenerNode1, 3, RELOCATED, CLOSED);\n //the 3 relocated shards get created on the second node\n assertShardStatesMatch(stateChangeListenerNode2, 3, CREATED, RECOVERING, POST_RECOVERY, STARTED);\n ",
"filename": "core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerIT.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.indices.recovery;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;\n import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;\n@@ -32,13 +33,21 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.discovery.DiscoveryService;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.NodeServicesProvider;\n import org.elasticsearch.index.recovery.RecoveryStats;\n+import org.elasticsearch.index.shard.IndexEventListener;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.indices.IndicesService;\n@@ -50,9 +59,11 @@\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n import org.elasticsearch.test.InternalTestCluster;\n+import org.elasticsearch.test.MockIndexEventListener;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.store.MockFSDirectoryService;\n import org.elasticsearch.test.transport.MockTransportService;\n+import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.ConnectTransportException;\n import org.elasticsearch.transport.Transport;\n import org.elasticsearch.transport.TransportException;\n@@ -65,8 +76,10 @@\n import java.util.Collection;\n import java.util.List;\n import java.util.Map;\n+import java.util.concurrent.Callable;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.atomic.AtomicBoolean;\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -97,7 +110,7 @@ public class IndexRecoveryIT extends ESIntegTestCase {\n \n @Override\n protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n- return pluginList(MockTransportService.TestPlugin.class);\n+ return pluginList(MockTransportService.TestPlugin.class, MockIndexEventListener.TestPlugin.class);\n }\n \n private void assertRecoveryStateWithoutStage(RecoveryState state, int shardId, Type type,\n@@ -202,6 +215,59 @@ public void testGatewayRecoveryTestActiveOnly() throws Exception {\n assertThat(recoveryStates.size(), equalTo(0)); // Should not expect any responses back\n }\n \n+ public void testPrimaryRecoveryRelocated() throws Exception {\n+ logger.info(\"--> start node A\");\n+ String nodeA = internalCluster().startNode();\n+\n+ CountDownLatch blockClose = new CountDownLatch(1);\n+ IndexEventListener listener = new IndexEventListener() {\n+ @Override\n+ public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard, Settings indexSettings) {\n+ try {\n+ blockClose.await();\n+ } catch (InterruptedException e) {\n+ }\n+ }\n+ };\n+\n+ internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, nodeA).setNewDelegate(listener);\n+ createAndPopulateIndex(INDEX_NAME, 1, SHARD_COUNT, REPLICA_COUNT);\n+\n+ logger.info(\"--> start node B\");\n+ String nodeB = internalCluster().startNode();\n+ ensureGreen();\n+\n+ IndicesService indicesService = internalCluster().getInstance(IndicesService.class, nodeA);\n+ final IndexService indexService = indicesService.indexServiceSafe(INDEX_NAME);\n+ final IndexShard sourceShard = indexService.getShard(0);\n+ logger.info(\"--> simulate pending operation\");\n+ sourceShard.incrementOperationCounter();\n+ CountDownLatch startingGun = new CountDownLatch(1);\n+ final Thread finishPendingOps = new Thread() {\n+ @Override\n+ public void run() {\n+ try {\n+ startingGun.await();\n+ sourceShard.decrementOperationCounter();\n+ } catch (InterruptedException e) {\n+ }\n+ }\n+ };\n+ finishPendingOps.start();\n+ logger.info(\"--> move shard from: {} to: {}\", nodeA, nodeB);\n+ client().admin().cluster().prepareReroute()\n+ .add(new MoveAllocationCommand(new ShardId(INDEX_NAME, 0), nodeA, nodeB))\n+ .get();\n+ logger.info(\"--> recovery should be stuck in relocated state\");\n+ startingGun.countDown();\n+ assertBusy(() -> assertThat(sourceShard.state(), equalTo(IndexShardState.RELOCATED)));\n+ logger.info(\"--> finish pending operation\");\n+ finishPendingOps.join();\n+ blockClose.countDown();\n+ logger.info(\"--> source shard should be closed after recovery\");\n+ assertBusy(() -> assertThat(sourceShard.state(), equalTo(IndexShardState.CLOSED)));\n+ }\n+\n public void testReplicaRecovery() throws Exception {\n logger.info(\"--> start node A\");\n String nodeA = internalCluster().startNode();",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/IndexRecoveryIT.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,7 @@ public void testSendFiles() throws Throwable {\n StartRecoveryRequest request = new StartRecoveryRequest(shardId,\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n- randomBoolean(), null, RecoveryState.Type.STORE, randomLong());\n+ null, RecoveryState.Type.STORE, randomLong());\n Store store = newStore(createTempDir());\n RecoverySourceHandler handler = new RecoverySourceHandler(null, request, recoverySettings, null, logger);\n Directory dir = store.directory();\n@@ -118,7 +118,7 @@ public void testHandleCorruptedIndexOnSendSendFiles() throws Throwable {\n StartRecoveryRequest request = new StartRecoveryRequest(shardId,\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n- randomBoolean(), null, RecoveryState.Type.STORE, randomLong());\n+ null, RecoveryState.Type.STORE, randomLong());\n Path tempDir = createTempDir();\n Store store = newStore(tempDir, false);\n AtomicBoolean failedEngine = new AtomicBoolean(false);\n@@ -181,7 +181,7 @@ public void testHandleExceptinoOnSendSendFiles() throws Throwable {\n StartRecoveryRequest request = new StartRecoveryRequest(shardId,\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n new DiscoveryNode(\"b\", DummyTransportAddress.INSTANCE, Version.CURRENT),\n- randomBoolean(), null, RecoveryState.Type.STORE, randomLong());\n+ null, RecoveryState.Type.STORE, randomLong());\n Path tempDir = createTempDir();\n Store store = newStore(tempDir, false);\n AtomicBoolean failedEngine = new AtomicBoolean(false);",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java",
"status": "modified"
},
{
"diff": "@@ -43,8 +43,7 @@ public void testSerialization() throws Exception {\n new ShardId(\"test\", 0),\n new DiscoveryNode(\"a\", new LocalTransportAddress(\"1\"), targetNodeVersion),\n new DiscoveryNode(\"b\", new LocalTransportAddress(\"1\"), targetNodeVersion),\n- true,\n- Store.MetadataSnapshot.EMPTY,\n+ Store.MetadataSnapshot.EMPTY,\n RecoveryState.Type.RELOCATION,\n 1l\n \n@@ -63,7 +62,6 @@ public void testSerialization() throws Exception {\n assertThat(outRequest.shardId(), equalTo(inRequest.shardId()));\n assertThat(outRequest.sourceNode(), equalTo(inRequest.sourceNode()));\n assertThat(outRequest.targetNode(), equalTo(inRequest.targetNode()));\n- assertThat(outRequest.markAsRelocated(), equalTo(inRequest.markAsRelocated()));\n assertThat(outRequest.metadataSnapshot().asMap(), equalTo(inRequest.metadataSnapshot().asMap()));\n assertThat(outRequest.recoveryId(), equalTo(inRequest.recoveryId()));\n assertThat(outRequest.recoveryType(), equalTo(inRequest.recoveryType()));",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/StartRecoveryRequestTests.java",
"status": "modified"
}
]
}
|
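The IndexShard changes in the diff above hinge on one pattern: a volatile flag that rejects new operations once the primary is marked RELOCATED, plus a latch that lets relocated() wait for in-flight operations to drain before recovery finalizes. Below is a minimal, self-contained Java sketch of that shape; the class name, the plain AtomicInteger standing in for Elasticsearch's internal ref counter, and the IllegalStateException are illustrative assumptions, not the real IndexShard API.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the pattern in the IndexShard diff above, not the real class:
// a volatile flag rejects new operations once the primary is marked relocated,
// and a latch lets the relocation thread wait for in-flight operations to drain.
// An AtomicInteger stands in for the internal ref counter.
class OperationBlocker {

    private volatile boolean blockOperations = false;
    private final AtomicInteger pendingOperations = new AtomicInteger();
    private final CountDownLatch drained = new CountDownLatch(1);

    void incrementOperationCounter() {
        // note: as the diff itself points out, this check can race with
        // blockOperations(), so an operation that slips past here may not be waited on
        if (blockOperations) {
            throw new IllegalStateException("operations are blocked, primary has relocated");
        }
        pendingOperations.incrementAndGet();
    }

    void decrementOperationCounter() {
        if (pendingOperations.decrementAndGet() == 0 && blockOperations) {
            drained.countDown(); // last in-flight operation just finished
        }
    }

    /** Blocks indefinitely until all pending operations have completed. */
    void blockOperations() throws InterruptedException {
        blockOperations = true;
        if (pendingOperations.get() > 0) {
            drained.await();
        }
    }
}
```

As in the diff, the flag check on the increment path is deliberately best-effort; the updateRoutingEntry hunk above is what clears the block and moves the shard back to STARTED when a relocation fails.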
{
"body": "Consider this dataset:\n\n```\nPOST _bulk\n{\"index\": { \"_index\": \"aggtest\", \"_type\": \"default\" }}\n{\"a\": 1, \"b\": 1}\n{\"index\": { \"_index\": \"aggtest\", \"_type\": \"default\" }}\n{\"a\": 1, \"b\": 2}\n{\"index\": { \"_index\": \"aggtest\", \"_type\": \"default\" }}\n{\"a\": 3, \"b\": 1}\n{\"index\": { \"_index\": \"aggtest\", \"_type\": \"default\" }}\n{\"a\": 3, \"b\": 3}\n```\n\nRun this aggregation on it:\n\n```\nPOST aggtest/default/_search?size=0\n{\n \"aggs\": {\n \"by_a\": {\n \"histogram\": {\n \"field\": \"a\",\n \"interval\": 1\n },\n \"aggs\": {\n \"by_b\": {\n \"range\": {\n \"field\": \"b\",\n \"ranges\": [\n { \"from\": 1, \"to\": 2 },\n { \"from\": 2, \"to\": 3 },\n { \"from\": 3, \"to\": 4 }\n ]\n },\n \"aggs\": {\n \"the_filter\": {\n \"bucket_selector\": {\n \"buckets_path\": {\n \"the_doc_count\": \"_count\"\n },\n \"script\": {\n \"inline\": \"the_doc_count > 0\",\n \"lang\": \"expression\"\n }\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nThis is the result. Note how the empty buckets of `by_b` are not removed by the script when the parent bucket of `by_a` is empty.\n\n\n",
"comments": [
{
"body": "@colings86 could you take a look please?\n",
"created_at": "2015-12-16T11:47:29Z"
},
{
"body": "@AndreKR thanks for raising this, I have merged a PR with the fix and it should be available from version 2.2. In the meantime a workaround could be to set min_doc_count on the histogram to 1.\n",
"created_at": "2015-12-18T14:50:52Z"
}
],
"number": 15471,
"title": "Bucket selector script is not applied to empty histogram buckets"
}
|
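The workaround mentioned in the comments above (setting min_doc_count to 1 on the outer histogram so that empty parent buckets are never emitted) would look roughly like this with the 2.x Java test API that appears in the diffs of the fix. The test class name is hypothetical, it assumes the "aggtest" index from the report already exists, and the bucket_selector sub-aggregation is left out for brevity.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.test.ESIntegTestCase;

import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;
import static org.elasticsearch.search.aggregations.AggregationBuilders.range;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;

public class MinDocCountWorkaroundIT extends ESIntegTestCase {

    // Assumes the "aggtest" index from the report above has been created and
    // populated. With min_doc_count=1 the outer histogram no longer emits the
    // empty parent buckets, so the bucket_selector (omitted here for brevity)
    // never has to prune inner range buckets that belong to an empty parent.
    public void testMinDocCountWorkaround() {
        SearchResponse response = client().prepareSearch("aggtest")
                .setSize(0)
                .addAggregation(histogram("by_a").field("a").interval(1).minDocCount(1)
                        .subAggregation(range("by_b").field("b")
                                .addRange(1, 2)
                                .addRange(2, 3)
                                .addRange(3, 4)))
                .get();
        assertSearchResponse(response);
    }
}
```

This only sidesteps the symptom; the fix in the next cells instead makes the reduce phase build properly reduced sub-aggregations for the empty buckets, so pipeline aggregations run on them as well.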
{
"body": "Closes #15471\n",
"number": 15519,
"review_comments": [
{
"body": "hmm, why did imports move that much?\n",
"created_at": "2015-12-18T10:13:06Z"
},
{
"body": "is it possible to write this test in a way that it either does not require a script, or uses a mock script?\n",
"created_at": "2015-12-18T10:14:42Z"
},
{
"body": "I think something might have changed in the import ordering in the IDE. Not sure if that was a project setting changed in master or if it's just my Eclipse, I will check.\n",
"created_at": "2015-12-18T12:09:47Z"
}
],
"title": "Run pipeline aggregations for empty buckets added in the Range Aggregation"
}
|
{
"commits": [
{
"message": "Aggregations: Run pipeline aggregations for empty buckets added in the Range Aggregation\n\nCloses #15471"
}
],
"files": [
{
"diff": "@@ -391,20 +391,24 @@ protected boolean lessThan(IteratorAndCurrent<B> a, IteratorAndCurrent<B> b) {\n return reducedBuckets;\n }\n \n- private void addEmptyBuckets(List<B> list) {\n+ private void addEmptyBuckets(List<B> list, ReduceContext reduceContext) {\n B lastBucket = null;\n ExtendedBounds bounds = emptyBucketInfo.bounds;\n ListIterator<B> iter = list.listIterator();\n \n // first adding all the empty buckets *before* the actual data (based on th extended_bounds.min the user requested)\n+ InternalAggregations reducedEmptySubAggs = InternalAggregations.reduce(Collections.singletonList(emptyBucketInfo.subAggregations),\n+ reduceContext);\n if (bounds != null) {\n B firstBucket = iter.hasNext() ? list.get(iter.nextIndex()) : null;\n if (firstBucket == null) {\n if (bounds.min != null && bounds.max != null) {\n long key = bounds.min;\n long max = bounds.max;\n while (key <= max) {\n- iter.add(getFactory().createBucket(key, 0, emptyBucketInfo.subAggregations, keyed, formatter));\n+ iter.add(getFactory().createBucket(key, 0,\n+ reducedEmptySubAggs,\n+ keyed, formatter));\n key = emptyBucketInfo.rounding.nextRoundingValue(key);\n }\n }\n@@ -413,7 +417,9 @@ private void addEmptyBuckets(List<B> list) {\n long key = bounds.min;\n if (key < firstBucket.key) {\n while (key < firstBucket.key) {\n- iter.add(getFactory().createBucket(key, 0, emptyBucketInfo.subAggregations, keyed, formatter));\n+ iter.add(getFactory().createBucket(key, 0,\n+ reducedEmptySubAggs,\n+ keyed, formatter));\n key = emptyBucketInfo.rounding.nextRoundingValue(key);\n }\n }\n@@ -428,7 +434,9 @@ private void addEmptyBuckets(List<B> list) {\n if (lastBucket != null) {\n long key = emptyBucketInfo.rounding.nextRoundingValue(lastBucket.key);\n while (key < nextBucket.key) {\n- iter.add(getFactory().createBucket(key, 0, emptyBucketInfo.subAggregations, keyed, formatter));\n+ iter.add(getFactory().createBucket(key, 0,\n+ reducedEmptySubAggs, keyed,\n+ formatter));\n key = emptyBucketInfo.rounding.nextRoundingValue(key);\n }\n assert key == nextBucket.key;\n@@ -441,7 +449,9 @@ private void addEmptyBuckets(List<B> list) {\n long key = emptyBucketInfo.rounding.nextRoundingValue(lastBucket.key);\n long max = bounds.max;\n while (key <= max) {\n- iter.add(getFactory().createBucket(key, 0, emptyBucketInfo.subAggregations, keyed, formatter));\n+ iter.add(getFactory().createBucket(key, 0,\n+ reducedEmptySubAggs, keyed,\n+ formatter));\n key = emptyBucketInfo.rounding.nextRoundingValue(key);\n }\n }\n@@ -453,7 +463,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n \n // adding empty buckets if needed\n if (minDocCount == 0) {\n- addEmptyBuckets(reducedBuckets);\n+ addEmptyBuckets(reducedBuckets, reduceContext);\n }\n \n if (order == InternalOrder.KEY_ASC) {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java",
"status": "modified"
},
{
"diff": "@@ -45,12 +45,14 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.sum;\n+import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.derivative;\n import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.having;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.lessThan;\n import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.nullValue;\n \n @ESIntegTestCase.SuiteScopeTestCase\n public class BucketSelectorTests extends ESIntegTestCase {\n@@ -74,6 +76,7 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n public void setupSuiteScopeCluster() throws Exception {\n createIndex(\"idx\");\n createIndex(\"idx_unmapped\");\n+ createIndex(\"idx_with_gaps\");\n \n interval = randomIntBetween(1, 50);\n numDocs = randomIntBetween(10, 500);\n@@ -84,6 +87,10 @@ public void setupSuiteScopeCluster() throws Exception {\n for (int docs = 0; docs < numDocs; docs++) {\n builders.add(client().prepareIndex(\"idx\", \"type\").setSource(newDocBuilder()));\n }\n+ builders.add(client().prepareIndex(\"idx_with_gaps\", \"type\").setSource(newDocBuilder(1, 1, 0, 0)));\n+ builders.add(client().prepareIndex(\"idx_with_gaps\", \"type\").setSource(newDocBuilder(1, 2, 0, 0)));\n+ builders.add(client().prepareIndex(\"idx_with_gaps\", \"type\").setSource(newDocBuilder(3, 1, 0, 0)));\n+ builders.add(client().prepareIndex(\"idx_with_gaps\", \"type\").setSource(newDocBuilder(3, 3, 0, 0)));\n \n client().preparePutIndexedScript().setId(\"my_script\").setScriptLang(GroovyScriptEngineService.NAME)\n .setSource(\"{ \\\"script\\\": \\\"Double.isNaN(_value0) ? 
false : (_value0 + _value1 > 100)\\\" }\").get();\n@@ -93,12 +100,17 @@ public void setupSuiteScopeCluster() throws Exception {\n }\n \n private XContentBuilder newDocBuilder() throws IOException {\n+ return newDocBuilder(randomIntBetween(minNumber, maxNumber), randomIntBetween(minNumber, maxNumber),\n+ randomIntBetween(minNumber, maxNumber), randomIntBetween(minNumber, maxNumber));\n+ }\n+\n+ private XContentBuilder newDocBuilder(int field1Value, int field2Value, int field3Value, int field4Value) throws IOException {\n XContentBuilder jsonBuilder = jsonBuilder();\n jsonBuilder.startObject();\n- jsonBuilder.field(FIELD_1_NAME, randomIntBetween(minNumber, maxNumber));\n- jsonBuilder.field(FIELD_2_NAME, randomIntBetween(minNumber, maxNumber));\n- jsonBuilder.field(FIELD_3_NAME, randomIntBetween(minNumber, maxNumber));\n- jsonBuilder.field(FIELD_4_NAME, randomIntBetween(minNumber, maxNumber));\n+ jsonBuilder.field(FIELD_1_NAME, field1Value);\n+ jsonBuilder.field(FIELD_2_NAME, field2Value);\n+ jsonBuilder.field(FIELD_3_NAME, field3Value);\n+ jsonBuilder.field(FIELD_4_NAME, field4Value);\n jsonBuilder.endObject();\n return jsonBuilder;\n }\n@@ -451,4 +463,70 @@ public void testPartiallyUnmapped() throws Exception {\n assertThat(field2SumValue + field3SumValue, greaterThan(100.0));\n }\n }\n+\n+ public void testEmptyBuckets() {\n+ SearchResponse response = client().prepareSearch(\"idx_with_gaps\")\n+ .addAggregation(histogram(\"histo\").field(FIELD_1_NAME).interval(1)\n+ .subAggregation(histogram(\"inner_histo\").field(FIELD_1_NAME).interval(1).extendedBounds(1l, 4l).minDocCount(0)\n+ .subAggregation(derivative(\"derivative\").setBucketsPaths(\"_count\").gapPolicy(GapPolicy.INSERT_ZEROS))))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ InternalHistogram<Bucket> histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? extends Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(3));\n+\n+ Histogram.Bucket bucket = buckets.get(0);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsString(), equalTo(\"1\"));\n+ Histogram innerHisto = bucket.getAggregations().get(\"inner_histo\");\n+ assertThat(innerHisto, notNullValue());\n+ List<? 
extends Histogram.Bucket> innerBuckets = innerHisto.getBuckets();\n+ assertThat(innerBuckets, notNullValue());\n+ assertThat(innerBuckets.size(), equalTo(4));\n+ for (int i = 0; i < innerBuckets.size(); i++) {\n+ Histogram.Bucket innerBucket = innerBuckets.get(i);\n+ if (i == 0) {\n+ assertThat(innerBucket.getAggregations().get(\"derivative\"), nullValue());\n+ } else {\n+ assertThat(innerBucket.getAggregations().get(\"derivative\"), notNullValue());\n+ }\n+ }\n+\n+ bucket = buckets.get(1);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsString(), equalTo(\"2\"));\n+ innerHisto = bucket.getAggregations().get(\"inner_histo\");\n+ assertThat(innerHisto, notNullValue());\n+ innerBuckets = innerHisto.getBuckets();\n+ assertThat(innerBuckets, notNullValue());\n+ assertThat(innerBuckets.size(), equalTo(4));\n+ for (int i = 0; i < innerBuckets.size(); i++) {\n+ Histogram.Bucket innerBucket = innerBuckets.get(i);\n+ if (i == 0) {\n+ assertThat(innerBucket.getAggregations().get(\"derivative\"), nullValue());\n+ } else {\n+ assertThat(innerBucket.getAggregations().get(\"derivative\"), notNullValue());\n+ }\n+ }\n+ bucket = buckets.get(2);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsString(), equalTo(\"3\"));\n+ innerHisto = bucket.getAggregations().get(\"inner_histo\");\n+ assertThat(innerHisto, notNullValue());\n+ innerBuckets = innerHisto.getBuckets();\n+ assertThat(innerBuckets, notNullValue());\n+ assertThat(innerBuckets.size(), equalTo(4));\n+ for (int i = 0; i < innerBuckets.size(); i++) {\n+ Histogram.Bucket innerBucket = innerBuckets.get(i);\n+ if (i == 0) {\n+ assertThat(innerBucket.getAggregations().get(\"derivative\"), nullValue());\n+ } else {\n+ assertThat(innerBucket.getAggregations().get(\"derivative\"), notNullValue());\n+ }\n+ }\n+ }\n }",
"filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/BucketSelectorTests.java",
"status": "modified"
},
{
"diff": "@@ -58,7 +58,6 @@\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.lessThanOrEqualTo;\n import static org.hamcrest.Matchers.notNullValue;\n-import static org.hamcrest.Matchers.nullValue;\n import static org.hamcrest.Matchers.sameInstance;\n \n @ClusterScope(scope = Scope.SUITE)\n@@ -739,6 +738,10 @@ public void testEmptyAggregation() throws Exception {\n ScriptedMetric scriptedMetric = bucket.getAggregations().get(\"scripted\");\n assertThat(scriptedMetric, notNullValue());\n assertThat(scriptedMetric.getName(), equalTo(\"scripted\"));\n- assertThat(scriptedMetric.aggregation(), nullValue());\n+ assertThat(scriptedMetric.aggregation(), notNullValue());\n+ assertThat(scriptedMetric.aggregation(), instanceOf(List.class));\n+ List<Integer> aggregationResult = (List<Integer>) scriptedMetric.aggregation();\n+ assertThat(aggregationResult.size(), equalTo(1));\n+ assertThat(aggregationResult.get(0), equalTo(0));\n }\n }",
"filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/ScriptedMetricTests.java",
"status": "modified"
}
]
}
|
{
"body": "Hello,\n\nI've got an application which uses match_phrase queries to look for, and highlight, exact matches within file texts. It worked fantastically on ElasticSearch version 2.0.0 and 2.0.1, but breaks on 2.1.0.\n\nThe query appears to run correctly but the highlighter highlights ALL instances of terms in the query instead of just those that appear within matched phrases. For example, if the query contains the word \"is\", then every instance of the word \"is\" - throughout the entire document - is highlighted.\n# Example:\n\nAs an example, I've prepared a series of commands in the Sense chrome plugin to replicate the problem:\n\n```\nPOST /test_index\n{\n \"settings\": {\n \"number_of_shards\": 1,\n \"number_of_replicas\": 0\n },\n \"mappings\" : {\n \"file\" : {\n \"properties\" : {\n \"text\" : { \"type\" : \"string\"}\n }\n }\n }\n}\n\nPOST /test_index/file\n{\n \"text\": \"The quick brown fox jumped over the other, lazier, fox.\"\n}\n\nPOST /test_index/file\n{\n \"text\": \"Doc Brown is one quick fox.\"\n}\n\nPOST /test_index/file/_search\n{\n \"query\": {\n \"match_phrase\": {\n \"text\": \"quick brown fox\"\n }\n },\n \"highlight\": {\n \"fields\": {\n \"text\": {\n \"number_of_fragments\": 0\n }\n }\n }\n}\n```\n## Version 2.0.1 results\n\nUnder version 2.0.1, the search call returns the following hits array:\n\n``` JSON\n\"hits\": [\n {\n \"_index\": \"test_index\",\n \"_type\": \"file\",\n \"_id\": \"AVF-PCoeysoVw9xNBYa3\",\n \"_score\": 0.55737644,\n \"_source\":{\n \"text\": \"The quick brown fox jumped over the other, lazier, fox.\"\n },\n \"highlight\": {\n \"text\": [\n \"The <em>quick</em> <em>brown</em> <em>fox</em> jumped over the other, lazier, fox.\"\n ]\n }\n }\n]\n```\n\nThis is the expected and desired output.\n## Version 2.1.0 results\n\nUnder version 2.1.0, the search call returns the following hits array:\n\n``` JSON\n\"hits\": [\n {\n \"_index\": \"test_index\",\n \"_type\": \"file\",\n \"_id\": \"AVF-MdXsEcdhEnh7dUcO\",\n \"_score\": 0.55737644,\n \"_source\": {\n \"text\": \"The quick brown fox jumped over the other, lazier, fox.\"\n },\n \"highlight\": {\n \"text\": [\n \"The <em>quick</em> <em>brown</em> <em>fox</em> jumped over the other, lazier, <em>fox</em>.\"\n ]\n }\n }\n]\n```\n\nThis is the same match, and even has the same score, but the word \"fox\" is incorrectly highlighted when it occurs without the rest of the phrase.\n## Tested solutions\n\nIn attempt to rectify this problem, I attempted two things:\n1. First, on the theory that perhaps it had switched to the postings highlighter rather than the plain highlighter, I attempted to force the use of the plain version with the `\"type\": \"plain\"` flag. Unfortunately, this had no visible effect.\n2. Second, duplicated the original query within a `\"highlight_query\"` block. This too, unfortunately, had no visible effect.\n# Theory\n\nCurrently, my theory is that this problem is related to the upgrade to lucene-5.3.0. (https://github.com/elastic/elasticsearch/pull/13239)\nSpecifically, I wonder if it has to do with how NearSpansOrdered are processed. (https://issues.apache.org/jira/browse/LUCENE-6537)\n\nThat said, this is purely conjecture. It's quite possible - or even likely - that the problem lies elsewhere.\nI'm hoping that an elasticsearch expert might have a better idea.\n\nThank you.\n",
"comments": [
{
"body": "Thanks for the detailed bug report, I could easily reproduce the issue. Actually the problem does not come from Lucene but that a change that I did when integrating the 5.3 release. I opened a pr at #15516\n",
"created_at": "2015-12-17T16:06:14Z"
},
{
"body": "Fantastic, thank you!\n",
"created_at": "2015-12-17T16:23:22Z"
}
],
"number": 15291,
"title": "Highlighter tags extraneous terms when used with match_phrase query (v2.1.0)"
}
|
{
"body": "This is a bug that I introduced in #13239 while thinking that the differences\nwere due to changes in Lucene: extractUnknownQuery is also called when span\nextraction already succeeded, so we should only fall back to Weight.extractTerms\nif no spans have been extracted yet.\n\nClose #15291\n",
"number": 15516,
"review_comments": [],
"title": "Fix spans extraction to not also include individual terms."
}
|
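The regression is easy to exercise without a cluster by driving Lucene's plain Highlighter with the scorer the fix touches, much like the unit test this PR adds in the diff below. The sketch here is an assumption-laden variant of that test: the class name, the SimpleAnalyzer, and the sample sentence taken from the bug report are illustrative choices, not part of the PR.

```java
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.elasticsearch.search.highlight.CustomQueryScorer;

// Drives the plain-highlighter path directly, adapted from the unit test the
// PR adds below. With the fix, only the consecutive "quick brown fox" terms
// should be wrapped in <B> tags; the trailing standalone "fox" should not be.
public class PhraseHighlightCheck {
    public static void main(String[] args) throws Exception {
        Query phrase = new PhraseQuery.Builder()
                .add(new Term("text", "quick"))
                .add(new Term("text", "brown"))
                .add(new Term("text", "fox"))
                .build();
        Highlighter highlighter = new Highlighter(new CustomQueryScorer(phrase));
        String fragment = highlighter.getBestFragment(new SimpleAnalyzer(), "text",
                "The quick brown fox jumped over the other, lazier, fox.");
        System.out.println(fragment);
    }
}
```

Before the guard below, the flat-term fallback also tagged that last "fox", which is exactly the behaviour reported against 2.1.0.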
{
"commits": [
{
"message": "Fix spans extraction to not also include individual terms.\n\nThis is a bug that I introduced in #13239 while thinking that the differences\nwere due to changes in Lucene: extractUnknownQuery is also called when span\nextraction already succeeded, so we should only fall back to Weight.extractTerms\nif no spans have been extracted yet.\n\nClose #15291"
}
],
"files": [
{
"diff": "@@ -82,7 +82,7 @@ protected void extractUnknownQuery(Query query,\n } else if (query instanceof FiltersFunctionScoreQuery) {\n query = ((FiltersFunctionScoreQuery) query).getSubQuery();\n extract(query, query.getBoost(), terms);\n- } else {\n+ } else if (terms.isEmpty()) {\n extractWeightedTerms(terms, query, query.getBoost());\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/highlight/CustomQueryScorer.java",
"status": "modified"
},
{
"diff": "@@ -1557,7 +1557,7 @@ public void testPlainHighlightDifferentFragmenter() throws Exception {\n .fragmenter(\"simple\"))).get();\n \n assertHighlight(response, 0, \"tags\", 0, equalTo(\"this is a really <em>long</em> <em>tag</em> i would like to highlight\"));\n- assertHighlight(response, 0, \"tags\", 1, 2, equalTo(\"here is another one that is very <em>long</em> <em>tag</em> and has the <em>tag</em> token near the end\"));\n+ assertHighlight(response, 0, \"tags\", 1, 2, equalTo(\"here is another one that is very <em>long</em> <em>tag</em> and has the tag token near the end\"));\n \n response = client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.matchQuery(\"tags\", \"long tag\").type(MatchQuery.Type.PHRASE))\n@@ -1566,7 +1566,7 @@ public void testPlainHighlightDifferentFragmenter() throws Exception {\n .fragmenter(\"span\"))).get();\n \n assertHighlight(response, 0, \"tags\", 0, equalTo(\"this is a really <em>long</em> <em>tag</em> i would like to highlight\"));\n- assertHighlight(response, 0, \"tags\", 1, 2, equalTo(\"here is another one that is very <em>long</em> <em>tag</em> and has the <em>tag</em> token near the end\"));\n+ assertHighlight(response, 0, \"tags\", 1, 2, equalTo(\"here is another one that is very <em>long</em> <em>tag</em> and has the tag token near the end\"));\n \n assertFailures(client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.matchQuery(\"tags\", \"long tag\").type(MatchQuery.Type.PHRASE))\n@@ -2062,7 +2062,7 @@ public void testPostingsHighlighter() throws Exception {\n \n searchResponse = client().search(searchRequest(\"test\").source(source)).actionGet();\n \n- assertHighlight(searchResponse, 0, \"field2\", 0, 1, equalTo(\"The <xxx>quick</xxx> <xxx>brown</xxx> fox jumps over the lazy <xxx>quick</xxx> dog\"));\n+ assertHighlight(searchResponse, 0, \"field2\", 0, 1, equalTo(\"The <xxx>quick</xxx> <xxx>brown</xxx> fox jumps over the lazy quick dog\"));\n }\n \n public void testPostingsHighlighterMultipleFields() throws Exception {",
"filename": "core/src/test/java/org/elasticsearch/search/highlight/HighlighterSearchIT.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,42 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.highlight;\n+\n+import org.apache.lucene.analysis.MockAnalyzer;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.PhraseQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.highlight.QueryScorer;\n+import org.apache.lucene.util.LuceneTestCase;\n+\n+public class PlainHighlighterTests extends LuceneTestCase {\n+\n+ public void testHighlightPhrase() throws Exception {\n+ Query query = new PhraseQuery.Builder()\n+ .add(new Term(\"field\", \"foo\"))\n+ .add(new Term(\"field\", \"bar\"))\n+ .build();\n+ QueryScorer queryScorer = new CustomQueryScorer(query);\n+ org.apache.lucene.search.highlight.Highlighter highlighter = new org.apache.lucene.search.highlight.Highlighter(queryScorer);\n+ String[] frags = highlighter.getBestFragments(new MockAnalyzer(random()), \"field\", \"bar foo bar foo\", 10);\n+ assertArrayEquals(new String[] {\"bar <B>foo</B> <B>bar</B> foo\"}, frags);\n+ }\n+\n+}",
"filename": "core/src/test/java/org/elasticsearch/search/highlight/PlainHighlighterTests.java",
"status": "added"
}
]
}
|
{
"body": "There should be a null check in https://github.com/elastic/elasticsearch/blob/148265bd164cd5a614cd020fb480d5974f523d81/core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java#L1237\n\nIt should probably raise a unique exception in this event. We encountered this in Logstash with some users running custom plugins.\n\nJackson also does not perform a null check, leading to a confusing NPE in some Jackson codes.\n",
"comments": [
{
"body": "@javanna could you look at this one please\n",
"created_at": "2015-10-30T09:41:54Z"
},
{
"body": "@javanna mind taking a look here?\n",
"created_at": "2015-12-11T23:12:33Z"
}
],
"number": 14346,
"title": "XContent Builder doesn't check for null keys in maps. It should raise an exception"
}
|
{
"body": "Throw exception when trying to write map with null keys in XContentBuilder, otherwise jackson will break and return a cryptic error message.\n\nCloses #14346\n",
"number": 15479,
"review_comments": [
{
"body": "can you undo this change given that we will soon ban wildcard imports? #15395\n",
"created_at": "2015-12-16T15:14:24Z"
},
{
"body": "undo wildcard import?\n",
"created_at": "2015-12-16T15:14:39Z"
},
{
"body": "ok I had missed all that discussion.\n",
"created_at": "2015-12-16T15:41:39Z"
}
],
"title": "Throw exception when trying to write map with null keys"
}
|
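A minimal sketch of what callers observe once this guard is in place: writing a map that contains a null key fails fast with a descriptive IllegalArgumentException instead of an NPE from deep inside Jackson. The class name is hypothetical; the try/catch mirrors the unit tests added in the diff below.

```java
import java.util.Collections;

import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

// Sketch: a map with a null key now fails fast with a clear
// IllegalArgumentException instead of a cryptic NPE from inside Jackson.
public class NullKeyExample {
    public static void main(String[] args) throws Exception {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        try {
            builder.map(Collections.singletonMap(null, "test"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // "field name cannot be null"
        }
    }
}
```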
{
"commits": [
{
"message": "Throw exception when trying to write map with null keys\n\nCloses #14346"
}
],
"files": [
{
"diff": "@@ -53,7 +53,7 @@\n */\n public final class XContentBuilder implements BytesStream, Releasable {\n \n- public static enum FieldCaseConversion {\n+ public enum FieldCaseConversion {\n /**\n * No conversion will occur.\n */\n@@ -251,14 +251,7 @@ public XContentBuilder endArray() throws IOException {\n }\n \n public XContentBuilder field(XContentBuilderString name) throws IOException {\n- if (fieldCaseConversion == FieldCaseConversion.UNDERSCORE) {\n- generator.writeFieldName(name.underscore());\n- } else if (fieldCaseConversion == FieldCaseConversion.CAMELCASE) {\n- generator.writeFieldName(name.camelCase());\n- } else {\n- generator.writeFieldName(name.underscore());\n- }\n- return this;\n+ return field(name, fieldCaseConversion);\n }\n \n public XContentBuilder field(XContentBuilderString name, FieldCaseConversion conversion) throws IOException {\n@@ -273,22 +266,13 @@ public XContentBuilder field(XContentBuilderString name, FieldCaseConversion con\n }\n \n public XContentBuilder field(String name) throws IOException {\n- if (fieldCaseConversion == FieldCaseConversion.UNDERSCORE) {\n- if (cachedStringBuilder == null) {\n- cachedStringBuilder = new StringBuilder();\n- }\n- name = Strings.toUnderscoreCase(name, cachedStringBuilder);\n- } else if (fieldCaseConversion == FieldCaseConversion.CAMELCASE) {\n- if (cachedStringBuilder == null) {\n- cachedStringBuilder = new StringBuilder();\n- }\n- name = Strings.toCamelCase(name, cachedStringBuilder);\n- }\n- generator.writeFieldName(name);\n- return this;\n+ return field(name, fieldCaseConversion);\n }\n \n public XContentBuilder field(String name, FieldCaseConversion conversion) throws IOException {\n+ if (name == null) {\n+ throw new IllegalArgumentException(\"field name cannot be null\");\n+ }\n if (conversion == FieldCaseConversion.UNDERSCORE) {\n if (cachedStringBuilder == null) {\n cachedStringBuilder = new StringBuilder();",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/XContentBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,6 @@\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.io.FastCharArrayWriter;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -39,6 +38,7 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Calendar;\n+import java.util.Collections;\n import java.util.Date;\n import java.util.GregorianCalendar;\n import java.util.HashMap;\n@@ -51,9 +51,6 @@\n import static org.elasticsearch.common.xcontent.XContentBuilder.FieldCaseConversion.UNDERSCORE;\n import static org.hamcrest.Matchers.equalTo;\n \n-/**\n- *\n- */\n public class XContentBuilderTests extends ESTestCase {\n public void testPrettyWithLfAtEnd() throws Exception {\n ByteArrayOutputStream os = new ByteArrayOutputStream();\n@@ -350,4 +347,33 @@ public void testRenderGeoPoint() throws IOException {\n \"}\", string.trim());\n }\n \n+ public void testWriteMapWithNullKeys() throws IOException {\n+ XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n+ try {\n+ builder.map(Collections.singletonMap(null, \"test\"));\n+ fail(\"write map should have failed\");\n+ } catch(IllegalArgumentException e) {\n+ assertThat(e.getMessage(), equalTo(\"field name cannot be null\"));\n+ }\n+ }\n+\n+ public void testWriteMapValueWithNullKeys() throws IOException {\n+ XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n+ try {\n+ builder.value(Collections.singletonMap(null, \"test\"));\n+ fail(\"write map should have failed\");\n+ } catch(IllegalArgumentException e) {\n+ assertThat(e.getMessage(), equalTo(\"field name cannot be null\"));\n+ }\n+ }\n+\n+ public void testWriteFieldMapWithNullKeys() throws IOException {\n+ XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));\n+ try {\n+ builder.field(\"map\", Collections.singletonMap(null, \"test\"));\n+ fail(\"write map should have failed\");\n+ } catch(IllegalArgumentException e) {\n+ assertThat(e.getMessage(), equalTo(\"field name cannot be null\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/xcontent/builder/XContentBuilderTests.java",
"status": "modified"
}
]
}
|
{
"body": "Today we are super lenient (how could I missed that for f**k sake) with failing\n/ closing the translog writer when we hit an exception. It's actually worse, we allow\nto further write to it and don't care what has been already written to disk and what hasn't.\nWe keep the buffer in memory and try to write it again on the next operation.\n\nWhen we hit a disk-full expcetion due to for instance a big merge we are likely adding document to the\ntranslog but fail to write them to disk. Once the merge failed and freed up it's diskspace (note this is\na small window when concurrently indexing and failing the shard due to out of space exceptions) we will\nallow in-flight operations to add to the translog and then once we fail the shard fsync it. These operations\nare written to disk and fsynced which is fine but the previous buffer flush might have written some bytes\nto disk which are not corrupting the translog. That wouldn't be an issue if we prevented the fsync.\n\nCloses #15333\n",
"comments": [
{
"body": "LGTM. Great catch.\n",
"created_at": "2015-12-14T14:27:31Z"
},
{
"body": "@mikemccand @bleskes pushed a new commit\n",
"created_at": "2015-12-14T15:10:16Z"
},
{
"body": "last change LGTM\n",
"created_at": "2015-12-14T15:12:49Z"
},
{
"body": "LGTM, thanks @s1monw!\n",
"created_at": "2015-12-14T15:21:22Z"
},
{
"body": "I let CI run on this before backporting\n",
"created_at": "2015-12-14T17:30:18Z"
},
{
"body": "pushed to \n- 2.x in https://github.com/elastic/elasticsearch/commit/b3e5518c4b8920e3223a9a6f714ce395b2c20ca2\n- 2.1 in https://github.com/elastic/elasticsearch/commit/d51ecc9f0702552eeabe97a5faea65b9a46296c6\n- 2.0 in https://github.com/elastic/elasticsearch/commit/aeee579a752ce50eb7ef8dd925dabd619e328dee\n",
"created_at": "2015-12-14T18:26:57Z"
}
],
"number": 15420,
"title": "Fail and close translog hard if writing to disk fails"
}
|
{
"body": "Today we only test this when writing sequentially. Yet, in practice we mainly\nwrite concurrently, this commit adds a test that tests that concurrent writes with\nsudden fatal failure will not corrupt our translog.\n\nRelates to #15420\n",
"number": 15475,
"review_comments": [
{
"body": "we repeat this pattern in a couple of places and it's easy to get wrong. Maybe add a closeOnTragicEvent method?\n",
"created_at": "2015-12-16T09:47:49Z"
},
{
"body": "this is problematic since you then need to change the signature of the original exceptin and it's even harder to get wrong -1 on doing this\n",
"created_at": "2015-12-16T09:49:08Z"
},
{
"body": "what happens if all operations succeeded before the fail was set to true?\n",
"created_at": "2015-12-16T09:53:17Z"
},
{
"body": "I meant like this:\n\n```\n private void closeOnTragicEvent(Throwable ex) {\n if (current.getTragicException() != null) {\n try {\n close();\n } catch (Exception inner) {\n ex.addSuppressed(inner);\n }\n }\n }\n```\n\nNot sure I follow the signature change comment?\n",
"created_at": "2015-12-16T10:01:25Z"
},
{
"body": "yeah so I am afraid we are missing to rethrow then\n",
"created_at": "2015-12-16T11:13:47Z"
},
{
"body": "I will make sure I call sync afterwards :)\n",
"created_at": "2015-12-16T11:15:24Z"
}
],
"title": "Beef up TranslogTests with concurrent fatal exceptions test"
}
|
{
"commits": [
{
"message": "Beef up TranslogTests with concurrent fatal exceptions test\n\nToday we only test this when writing sequentially. Yet, in practice we mainly\nwrite concurrently, this commit adds a test that tests that concurrent writes with\nsudden fatal failure will not corrupt our translog.\n\nRelates to #15420"
},
{
"message": "apply review from @bleskes"
}
],
"files": [
{
"diff": "@@ -158,7 +158,7 @@ public Translog(TranslogConfig config) throws IOException {\n \n try {\n if (translogGeneration != null) {\n- final Checkpoint checkpoint = Checkpoint.read(location.resolve(CHECKPOINT_FILE_NAME));\n+ final Checkpoint checkpoint = readCheckpoint();\n this.recoveredTranslogs = recoverFromFiles(translogGeneration, checkpoint);\n if (recoveredTranslogs.isEmpty()) {\n throw new IllegalStateException(\"at least one reader must be recovered\");\n@@ -421,13 +421,7 @@ public Location add(Operation operation) throws IOException {\n return location;\n }\n } catch (AlreadyClosedException | IOException ex) {\n- if (current.getTragicException() != null) {\n- try {\n- close();\n- } catch (Exception inner) {\n- ex.addSuppressed(inner);\n- }\n- }\n+ closeOnTragicEvent(ex);\n throw ex;\n } catch (Throwable e) {\n throw new TranslogException(shardId, \"Failed to write operation [\" + operation + \"]\", e);\n@@ -507,13 +501,7 @@ public void sync() throws IOException {\n current.sync();\n }\n } catch (AlreadyClosedException | IOException ex) {\n- if (current.getTragicException() != null) {\n- try {\n- close();\n- } catch (Exception inner) {\n- ex.addSuppressed(inner);\n- }\n- }\n+ closeOnTragicEvent(ex);\n throw ex;\n }\n }\n@@ -545,10 +533,23 @@ public boolean ensureSynced(Location location) throws IOException {\n ensureOpen();\n return current.syncUpTo(location.translogLocation + location.size);\n }\n+ } catch (AlreadyClosedException | IOException ex) {\n+ closeOnTragicEvent(ex);\n+ throw ex;\n }\n return false;\n }\n \n+ private void closeOnTragicEvent(Throwable ex) {\n+ if (current.getTragicException() != null) {\n+ try {\n+ close();\n+ } catch (Exception inner) {\n+ ex.addSuppressed(inner);\n+ }\n+ }\n+ }\n+\n /**\n * return stats\n */\n@@ -1433,4 +1434,9 @@ public Throwable getTragicException() {\n return current.getTragicException();\n }\n \n+ /** Reads and returns the current checkpoint */\n+ final Checkpoint readCheckpoint() throws IOException {\n+ return Checkpoint.read(location.resolve(CHECKPOINT_FILE_NAME));\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.mockfile.FilterFileChannel;\n import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.store.ByteArrayDataOutput;\n+import org.apache.lucene.store.MockDirectoryWrapper;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.LineFileDocs;\n import org.apache.lucene.util.LuceneTestCase;\n@@ -62,6 +63,7 @@\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicLong;\n import java.util.concurrent.atomic.AtomicReference;\n+import java.util.function.Predicate;\n \n import static org.hamcrest.Matchers.*;\n \n@@ -1242,11 +1244,11 @@ private static class TranslogThread extends Thread {\n private final CountDownLatch downLatch;\n private final int opsPerThread;\n private final int threadId;\n- private final BlockingQueue<LocationOperation> writtenOperations;\n+ private final Collection<LocationOperation> writtenOperations;\n private final Throwable[] threadExceptions;\n private final Translog translog;\n \n- public TranslogThread(Translog translog, CountDownLatch downLatch, int opsPerThread, int threadId, BlockingQueue<LocationOperation> writtenOperations, Throwable[] threadExceptions) {\n+ public TranslogThread(Translog translog, CountDownLatch downLatch, int opsPerThread, int threadId, Collection<LocationOperation> writtenOperations, Throwable[] threadExceptions) {\n this.translog = translog;\n this.downLatch = downLatch;\n this.opsPerThread = opsPerThread;\n@@ -1276,76 +1278,58 @@ public void run() {\n throw new ElasticsearchException(\"not supported op type\");\n }\n \n- Translog.Location loc = translog.add(op);\n+ Translog.Location loc = add(op);\n writtenOperations.add(new LocationOperation(op, loc));\n+ afterAdd();\n }\n } catch (Throwable t) {\n threadExceptions[threadId] = t;\n }\n }\n+\n+ protected Translog.Location add(Translog.Operation op) throws IOException {\n+ return translog.add(op);\n+ }\n+\n+ protected void afterAdd() throws IOException {}\n }\n \n public void testFailFlush() throws IOException {\n Path tempDir = createTempDir();\n- final AtomicBoolean simulateDiskFull = new AtomicBoolean();\n+ final AtomicBoolean fail = new AtomicBoolean();\n TranslogConfig config = getTranslogConfig(tempDir);\n- Translog translog = new Translog(config) {\n- @Override\n- TranslogWriter.ChannelFactory getChannelFactory() {\n- final TranslogWriter.ChannelFactory factory = super.getChannelFactory();\n-\n- return new TranslogWriter.ChannelFactory() {\n- @Override\n- public FileChannel open(Path file) throws IOException {\n- FileChannel channel = factory.open(file);\n- return new FilterFileChannel(channel) {\n-\n- @Override\n- public int write(ByteBuffer src) throws IOException {\n- if (simulateDiskFull.get()) {\n- if (src.limit() > 1) {\n- final int pos = src.position();\n- final int limit = src.limit();\n- src.limit(limit / 2);\n- super.write(src);\n- src.position(pos);\n- src.limit(limit);\n- throw new IOException(\"__FAKE__ no space left on device\");\n- }\n- }\n- return super.write(src);\n- }\n- };\n- }\n- };\n- }\n- };\n+ Translog translog = getFailableTranslog(fail, config);\n \n List<Translog.Location> locations = new ArrayList<>();\n int opsSynced = 0;\n- int opsAdded = 0;\n boolean failed = false;\n while(failed == false) {\n try {\n locations.add(translog.add(new Translog.Index(\"test\", \"\" + opsSynced, Integer.toString(opsSynced).getBytes(Charset.forName(\"UTF-8\")))));\n- opsAdded++;\n translog.sync();\n opsSynced++;\n+ } catch 
(MockDirectoryWrapper.FakeIOException ex) {\n+ failed = true;\n+ assertFalse(translog.isOpen());\n } catch (IOException ex) {\n failed = true;\n assertFalse(translog.isOpen());\n assertEquals(\"__FAKE__ no space left on device\", ex.getMessage());\n }\n- simulateDiskFull.set(randomBoolean());\n+ fail.set(randomBoolean());\n }\n- simulateDiskFull.set(false);\n+ fail.set(false);\n if (randomBoolean()) {\n try {\n locations.add(translog.add(new Translog.Index(\"test\", \"\" + opsSynced, Integer.toString(opsSynced).getBytes(Charset.forName(\"UTF-8\")))));\n fail(\"we are already closed\");\n } catch (AlreadyClosedException ex) {\n assertNotNull(ex.getCause());\n- assertEquals(ex.getCause().getMessage(), \"__FAKE__ no space left on device\");\n+ if (ex.getCause() instanceof MockDirectoryWrapper.FakeIOException) {\n+ assertNull(ex.getCause().getMessage());\n+ } else {\n+ assertEquals(ex.getCause().getMessage(), \"__FAKE__ no space left on device\");\n+ }\n }\n \n }\n@@ -1402,4 +1386,152 @@ public void testTranslogOpsCountIsCorrect() throws IOException {\n }\n }\n }\n+\n+ public void testFatalIOExceptionsWhileWritingConcurrently() throws IOException, InterruptedException {\n+ Path tempDir = createTempDir();\n+ final AtomicBoolean fail = new AtomicBoolean(false);\n+\n+ TranslogConfig config = getTranslogConfig(tempDir);\n+ Translog translog = getFailableTranslog(fail, config);\n+\n+ final int threadCount = randomIntBetween(1, 5);\n+ Thread[] threads = new Thread[threadCount];\n+ final Throwable[] threadExceptions = new Throwable[threadCount];\n+ final CountDownLatch downLatch = new CountDownLatch(1);\n+ final CountDownLatch added = new CountDownLatch(randomIntBetween(10, 100));\n+ List<LocationOperation> writtenOperations = Collections.synchronizedList(new ArrayList<>());\n+ for (int i = 0; i < threadCount; i++) {\n+ final int threadId = i;\n+ threads[i] = new TranslogThread(translog, downLatch, 200, threadId, writtenOperations, threadExceptions) {\n+ @Override\n+ protected Translog.Location add(Translog.Operation op) throws IOException {\n+ Translog.Location add = super.add(op);\n+ added.countDown();\n+ return add;\n+ }\n+\n+ @Override\n+ protected void afterAdd() throws IOException {\n+ if (randomBoolean()) {\n+ translog.sync();\n+ }\n+ }\n+ };\n+ threads[i].setDaemon(true);\n+ threads[i].start();\n+ }\n+ downLatch.countDown();\n+ added.await();\n+ try (Translog.View view = translog.newView()) {\n+ // this holds a reference to the current tlog channel such that it's not closed\n+ // if we hit a tragic event. 
this is important to ensure that asserts inside the Translog#add doesn't trip\n+ // otherwise our assertions here are off by one sometimes.\n+ fail.set(true);\n+ for (int i = 0; i < threadCount; i++) {\n+ threads[i].join();\n+ }\n+ boolean atLeastOneFailed = false;\n+ for (Throwable ex : threadExceptions) {\n+ if (ex != null) {\n+ atLeastOneFailed = true;\n+ break;\n+ }\n+ }\n+ if (atLeastOneFailed == false) {\n+ try {\n+ boolean syncNeeded = translog.syncNeeded();\n+ translog.close();\n+ assertFalse(\"should have failed if sync was needed\", syncNeeded);\n+ } catch (IOException ex) {\n+ // boom now we failed\n+ }\n+ }\n+ Collections.sort(writtenOperations, (a, b) -> a.location.compareTo(b.location));\n+ assertFalse(translog.isOpen());\n+ final Checkpoint checkpoint = Checkpoint.read(config.getTranslogPath().resolve(Translog.CHECKPOINT_FILE_NAME));\n+ Iterator<LocationOperation> iterator = writtenOperations.iterator();\n+ while (iterator.hasNext()) {\n+ LocationOperation next = iterator.next();\n+ if (checkpoint.offset < (next.location.translogLocation + next.location.size)) {\n+ // drop all that haven't been synced\n+ iterator.remove();\n+ }\n+ }\n+ config.setTranslogGeneration(translog.getGeneration());\n+ try (Translog tlog = new Translog(config)) {\n+ try (Translog.Snapshot snapshot = tlog.newSnapshot()) {\n+ if (writtenOperations.size() != snapshot.estimatedTotalOperations()) {\n+ for (int i = 0; i < threadCount; i++) {\n+ if (threadExceptions[i] != null)\n+ threadExceptions[i].printStackTrace();\n+ }\n+ }\n+ assertEquals(writtenOperations.size(), snapshot.estimatedTotalOperations());\n+ for (int i = 0; i < writtenOperations.size(); i++) {\n+ assertEquals(\"expected operation\" + i + \" to be in the previous translog but wasn't\", tlog.currentFileGeneration() - 1, writtenOperations.get(i).location.generation);\n+ Translog.Operation next = snapshot.next();\n+ assertNotNull(\"operation \" + i + \" must be non-null\", next);\n+ assertEquals(next, writtenOperations.get(i).operation);\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ private Translog getFailableTranslog(final AtomicBoolean fail, final TranslogConfig config) throws IOException {\n+ return new Translog(config) {\n+ @Override\n+ TranslogWriter.ChannelFactory getChannelFactory() {\n+ final TranslogWriter.ChannelFactory factory = super.getChannelFactory();\n+\n+ return new TranslogWriter.ChannelFactory() {\n+ @Override\n+ public FileChannel open(Path file) throws IOException {\n+ FileChannel channel = factory.open(file);\n+ return new ThrowingFileChannel(fail, randomBoolean(), channel);\n+ }\n+ };\n+ }\n+ };\n+ }\n+\n+ public static class ThrowingFileChannel extends FilterFileChannel {\n+ private final AtomicBoolean fail;\n+ private final boolean partialWrite;\n+\n+ public ThrowingFileChannel(AtomicBoolean fail, boolean partialWrite, FileChannel delegate) {\n+ super(delegate);\n+ this.fail = fail;\n+ this.partialWrite = partialWrite;\n+ }\n+\n+ @Override\n+ public long write(ByteBuffer[] srcs, int offset, int length) throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public int write(ByteBuffer src, long position) throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+\n+ public int write(ByteBuffer src) throws IOException {\n+ if (fail.get()) {\n+ if (partialWrite) {\n+ if (src.limit() > 1) {\n+ final int pos = src.position();\n+ final int limit = src.limit();\n+ src.limit(limit / 2);\n+ super.write(src);\n+ src.position(pos);\n+ src.limit(limit);\n+ throw new IOException(\"__FAKE__ 
no space left on device\");\n+ }\n+ }\n+ throw new MockDirectoryWrapper.FakeIOException();\n+ }\n+ return super.write(src);\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java",
"status": "modified"
}
]
}
|
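The diff above factors the repeated failure handling into one `closeOnTragicEvent` method: when `add`, `sync`, or `ensureSynced` hits an `AlreadyClosedException` or `IOException` and the current writer has recorded a tragic exception, the whole translog is closed (any secondary failure suppressed) and the original exception rethrown, so no later fsync can persist a partial write. A stripped-down, standalone sketch of that pattern, with `FakeWriter` standing in for the real `TranslogWriter`:

```java
import java.io.IOException;

// Standalone sketch of the "close on tragic event" pattern; FakeWriter stands in
// for the real TranslogWriter and simply fails every write.
class TragicCloseSketch {
    static class FakeWriter {
        private Throwable tragedy;                       // first fatal failure, if any
        Throwable getTragicException() { return tragedy; }
        void write(byte[] op) throws IOException {
            IOException ex = new IOException("no space left on device");
            tragedy = ex;                                // remember the fatal failure
            throw ex;
        }
    }

    private final FakeWriter current = new FakeWriter();
    private boolean open = true;

    boolean isOpen() { return open; }
    void close() { open = false; }

    // If the writer hit a tragic event, close the translog so no further writes or
    // fsyncs can sneak in, suppressing any secondary failure from close() itself.
    private void closeOnTragicEvent(Throwable ex) {
        if (current.getTragicException() != null) {
            try {
                close();
            } catch (Exception inner) {
                ex.addSuppressed(inner);
            }
        }
    }

    void add(byte[] op) throws IOException {
        try {
            current.write(op);
        } catch (IOException ex) {
            closeOnTragicEvent(ex);
            throw ex;                                    // always rethrow the original
        }
    }

    public static void main(String[] args) {
        TragicCloseSketch translog = new TragicCloseSketch();
        try {
            translog.add(new byte[]{1});
        } catch (IOException e) {
            System.out.println("closed after tragedy: " + !translog.isOpen());
        }
    }
}
```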
{
"body": "These two parameters are not serialized (https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/monitor/os/OsInfo.java#L107), although they are build in the json response (https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/monitor/os/OsInfo.java#L82). Consequently, when I start two nodes on one machine I get for one node\n\n```\n\"os\": {\n \"refresh_interval_in_millis\": 1000,\n \"name\": \"Linux\",\n \"arch\": \"amd64\",\n \"version\": \"3.13.0-39-generic\",\n \"available_processors\": 12,\n \"allocated_processors\": 12\n },\n```\n\nand\n\n```\n\"os\": {\n \"refresh_interval_in_millis\": 1000,\n \"available_processors\": 12,\n \"allocated_processors\": 12\n },\n```\n\nfor the one that did not handle the request.\nIs there any reason for it or should I fix it?\n",
"comments": [
{
"body": "+1 to fix\n",
"created_at": "2015-12-14T16:35:04Z"
},
{
"body": "Seems worth fixing.\n",
"created_at": "2015-12-14T16:36:48Z"
}
],
"number": 15422,
"title": "node info does not contain os arch and name"
}
|
{
"body": "These three properties are build in the JSON response but were not\ntransported when a node sends the response.\n\ncloses #15422\n",
"number": 15454,
"review_comments": [],
"title": "serialize os name, arch and version too"
}
|
{
"commits": [
{
"message": "serialize os name, arch and version too\n\nThese three properties are build in the jason response but were not\ntransported when a node sends the response.\n\ncloses #15422"
}
],
"files": [
{
"diff": "@@ -111,6 +111,11 @@ public void readFrom(StreamInput in) throws IOException {\n if (in.getVersion().onOrAfter(Version.V_2_1_0)) {\n allocatedProcessors = in.readInt();\n }\n+ if (in.getVersion().onOrAfter(Version.V_2_2_0)) {\n+ name = in.readOptionalString();\n+ arch = in.readOptionalString();\n+ version = in.readOptionalString();\n+ }\n }\n \n @Override\n@@ -120,5 +125,10 @@ public void writeTo(StreamOutput out) throws IOException {\n if (out.getVersion().onOrAfter(Version.V_2_1_0)) {\n out.writeInt(allocatedProcessors);\n }\n+ if (out.getVersion().onOrAfter(Version.V_2_2_0)) {\n+ out.writeOptionalString(name);\n+ out.writeOptionalString(arch);\n+ out.writeOptionalString(version);\n+ }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/monitor/os/OsInfo.java",
"status": "modified"
},
{
"diff": "@@ -74,6 +74,7 @@ public void testNodeInfoStreaming() throws IOException {\n assertExpectedUnchanged(nodeInfo, readNodeInfo);\n \n comparePluginsAndModulesOnOrAfter2_2_0(nodeInfo, readNodeInfo);\n+ compareOSOnOrAfter2_2_0(nodeInfo, readNodeInfo);\n \n // test version before V_2_2_0\n version = VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.V_2_1_1);\n@@ -87,6 +88,7 @@ public void testNodeInfoStreaming() throws IOException {\n assertExpectedUnchanged(nodeInfo, readNodeInfo);\n \n comparePluginsAndModulesBefore2_2_0(nodeInfo, readNodeInfo);\n+ compareOSOnOrBefore2_2_0(nodeInfo, readNodeInfo);\n }\n \n // checks all properties that are expected to be unchanged. Once we start changing them between versions this method has to be changed as well\n@@ -101,8 +103,6 @@ private void assertExpectedUnchanged(NodeInfo nodeInfo, NodeInfo readNodeInfo) t\n }\n compareJsonOutput(nodeInfo.getHttp(), readNodeInfo.getHttp());\n compareJsonOutput(nodeInfo.getJvm(), readNodeInfo.getJvm());\n- // see issue https://github.com/elastic/elasticsearch/issues/15422\n- // compareJsonOutput(nodeInfo.getOs(), readNodeInfo.getOs());\n compareJsonOutput(nodeInfo.getProcess(), readNodeInfo.getProcess());\n compareJsonOutput(nodeInfo.getSettings(), readNodeInfo.getSettings());\n compareJsonOutput(nodeInfo.getThreadPool(), readNodeInfo.getThreadPool());\n@@ -142,6 +142,19 @@ private void compareJsonOutput(ToXContent param1, ToXContent param2) throws IOEx\n assertThat(param1Builder.string(), equalTo(param2Builder.string()));\n }\n \n+ // see https://github.com/elastic/elasticsearch/issues/15422\n+ private void compareOSOnOrBefore2_2_0(NodeInfo nodeInfo, NodeInfo readNodeInfo) {\n+ OsInfo osInfo = nodeInfo.getOs();\n+ OsInfo readOsInfo = readNodeInfo.getOs();\n+ assertThat(osInfo.getAllocatedProcessors(), equalTo(readOsInfo.getAllocatedProcessors()));\n+ assertThat(osInfo.getAvailableProcessors(), equalTo(readOsInfo.getAvailableProcessors()));\n+ assertThat(osInfo.getRefreshInterval(), equalTo(readOsInfo.getRefreshInterval()));\n+ }\n+\n+ private void compareOSOnOrAfter2_2_0(NodeInfo nodeInfo, NodeInfo readNodeInfo) throws IOException {\n+ compareJsonOutput(nodeInfo.getOs(), readNodeInfo.getOs());\n+ }\n+\n private NodeInfo createNodeInfo() {\n Build build = Build.CURRENT;\n DiscoveryNode node = new DiscoveryNode(\"test_node\", DummyTransportAddress.INSTANCE, VersionUtils.randomVersion(random()));",
"filename": "core/src/test/java/org/elasticsearch/nodesinfo/NodeInfoStreamingTests.java",
"status": "modified"
}
]
}
|
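The fix above follows the usual wire-compatibility pattern: the new fields are written and read only when the stream version is on or after the release that introduced them, so mixed-version clusters keep working. A hedged, standalone sketch of that pattern (plain `DataOutput`/`DataInput` stand in for Elasticsearch's `StreamOutput`/`StreamInput`, and the version constant here is purely illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Standalone sketch (not the real OsInfo) of version-gated serialization: optional
// fields are only exchanged when the stream version knows about them.
class OsInfoSketch {
    static final int V_2_2_0 = 2_02_00;   // illustrative stand-in for Version.V_2_2_0

    String name;
    String arch;
    String osVersion;
    int allocatedProcessors;

    void writeTo(DataOutput out, int streamVersion) throws IOException {
        out.writeInt(allocatedProcessors);
        if (streamVersion >= V_2_2_0) {    // older peers don't know these fields
            writeOptionalString(out, name);
            writeOptionalString(out, arch);
            writeOptionalString(out, osVersion);
        }
    }

    void readFrom(DataInput in, int streamVersion) throws IOException {
        allocatedProcessors = in.readInt();
        if (streamVersion >= V_2_2_0) {
            name = readOptionalString(in);
            arch = readOptionalString(in);
            osVersion = readOptionalString(in);
        }
    }

    private static void writeOptionalString(DataOutput out, String s) throws IOException {
        out.writeBoolean(s != null);
        if (s != null) {
            out.writeUTF(s);
        }
    }

    private static String readOptionalString(DataInput in) throws IOException {
        return in.readBoolean() ? in.readUTF() : null;
    }

    public static void main(String[] args) throws IOException {
        OsInfoSketch info = new OsInfoSketch();
        info.name = "Linux";
        info.arch = "amd64";
        info.osVersion = "3.13.0-39-generic";
        info.allocatedProcessors = 12;

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        info.writeTo(new DataOutputStream(bytes), V_2_2_0);

        OsInfoSketch read = new OsInfoSketch();
        read.readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), V_2_2_0);
        System.out.println(read.name + " " + read.arch + " " + read.osVersion);
    }
}
```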
{
"body": "Using the default settings, if a delete request is issued for a single document against a non-existing index, Elasticsearch will create the index. Steps to reproduce:\n\n```\nDELETE /foo\n```\n\n```\nDELETE /foo/bar/1\n```\n\nThen:\n\n```\nGET /foo\n```\n\nresponds with HTTP 200 OK and response body\n\n```\n{\n \"foo\" : {\n \"aliases\" : { },\n \"mappings\" : { },\n \"settings\" : {\n \"index\" : {\n \"creation_date\" : \"1450124918393\",\n \"number_of_shards\" : \"5\",\n \"number_of_replicas\" : \"1\",\n \"uuid\" : \"Ny3PruSCTIG5TlYecU0_XA\",\n \"version\" : {\n \"created\" : \"3000099\"\n }\n }\n },\n \"warmers\" : { }\n }\n}\n```\n\nshowing that it created the index.\n",
"comments": [
{
"body": "+1 this just cost me a day of work.\n",
"created_at": "2016-08-08T17:27:03Z"
},
{
"body": "Carrying the discussion over from #21926 \r\n\r\n> I think we should have a dedicated setting for this which defaults to false.\r\n\r\nI would prefer to make deletes with an external version auto create indices rather than have a settings that controls deletes in general. It seems that's the only use case that needs it so we can have it contained. ",
"created_at": "2016-12-02T10:29:20Z"
},
{
"body": "++ @bleskes ",
"created_at": "2016-12-02T10:54:31Z"
},
{
"body": "+1, I was using an index management scheme that was open loop deleting potentially existent indices based on time range. This then creates thousands of ghost indices that cannot be deleted, and completely kills the cluster performance. Poor behaviour. I am on v5.20",
"created_at": "2017-04-25T16:06:47Z"
},
{
"body": "Hey guys, as this appears to be still an issue in master, I would like to give it a try iff nobody else is working on it.\r\n\r\nAs the discussion was held over time in multiple threads, let me sum up what the expected behavior should be : \r\n* if an external version is used : create the index ( the same behavior as of now )\r\n* otherwise : throw an `index_not_found`\r\n(the change should not introduce an additional `IndexOption`)",
"created_at": "2017-04-28T12:52:03Z"
}
],
"number": 15425,
"title": "Deleting a document from a non-existing index creates the index"
}
|
{
"body": "Closes #15425\n",
"number": 15451,
"review_comments": [],
"title": "Only create index on document deletion if external versioning is used"
}
|
{
"commits": [
{
"message": "Deleting a document from a non-existing index only creates the index if external versioning is used"
}
],
"files": [
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.action;\n \n import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.index.VersionType;\n \n /**\n * Generic interface to group ActionRequest, which work on single document level\n@@ -62,4 +63,16 @@ public interface DocumentRequest<T> extends IndicesRequest {\n * @return the Routing\n */\n String routing();\n+\n+ /**\n+ * Set the version type for this request\n+ * @return the Request\n+ */\n+ T versionType(VersionType versionType);\n+\n+ /**\n+ * Get the version type for this request\n+ * @return the version type\n+ */\n+ VersionType versionType();\n }",
"filename": "core/src/main/java/org/elasticsearch/action/DocumentRequest.java",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,7 @@\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndexAlreadyExistsException;\n import org.elasticsearch.indices.IndexClosedException;\n@@ -100,15 +101,23 @@ protected void doExecute(final BulkRequest bulkRequest, final ActionListener<Bul\n for (ActionRequest request : bulkRequest.requests) {\n if (request instanceof DocumentRequest) {\n DocumentRequest req = (DocumentRequest) request;\n- Set<String> types = indicesAndTypes.get(req.index());\n- if (types == null) {\n- indicesAndTypes.put(req.index(), types = new HashSet<>());\n+ if (req instanceof IndexRequest\n+ || req instanceof UpdateRequest\n+ || req instanceof DeleteRequest && (req.versionType() == VersionType.EXTERNAL || (req.versionType() == VersionType.EXTERNAL_GTE))) {\n+ Set<String> types = indicesAndTypes.get(req.index());\n+ if (types == null) {\n+ indicesAndTypes.put(req.index(), types = new HashSet<>());\n+ }\n+ types.add(req.type());\n }\n- types.add(req.type());\n } else {\n throw new ElasticsearchException(\"Parsed unknown request in bulk actions: \" + request.getClass().getSimpleName());\n }\n }\n+ if (indicesAndTypes.isEmpty()) {\n+ executeBulk(bulkRequest, startTime, listener, responses);\n+ return;\n+ }\n final AtomicInteger counter = new AtomicInteger(indicesAndTypes.size());\n ClusterState state = clusterService.state();\n for (Map.Entry<String, Set<String>> entry : indicesAndTypes.entrySet()) {",
"filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java",
"status": "modified"
},
{
"diff": "@@ -71,7 +71,8 @@ public TransportDeleteAction(Settings settings, TransportService transportServic\n @Override\n protected void doExecute(final DeleteRequest request, final ActionListener<DeleteResponse> listener) {\n ClusterState state = clusterService.state();\n- if (autoCreateIndex.shouldAutoCreate(request.index(), state)) {\n+ if ((request.versionType() == VersionType.EXTERNAL || request.versionType() == VersionType.EXTERNAL_GTE)\n+ && autoCreateIndex.shouldAutoCreate(request.index(), state)) {\n createIndexAction.execute(new CreateIndexRequest(request).index(request.index()).cause(\"auto(delete api)\").masterNodeTimeout(request.timeout()), new ActionListener<CreateIndexResponse>() {\n @Override\n public void onResponse(CreateIndexResponse result) {",
"filename": "core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java",
"status": "modified"
},
{
"diff": "@@ -21,11 +21,17 @@\n package org.elasticsearch.action.bulk;\n \n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.delete.DeleteRequest;\n+import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.VersionType;\n import org.elasticsearch.test.ESIntegTestCase;\n \n import java.nio.charset.StandardCharsets;\n \n import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.notNullValue;\n \n public class BulkIntegrationIT extends ESIntegTestCase {\n public void testBulkIndexCreatesMapping() throws Exception {\n@@ -42,4 +48,23 @@ public void run() {\n }\n });\n }\n+\n+ public void testBulkRequestWithDeleteDoesNotCreateMapping() {\n+ BulkItemResponse[] items = client().prepareBulk().add(new DeleteRequest(\"test\", \"test\", \"test\")).get().getItems();\n+ assertThat(items.length, equalTo(1));\n+ assertTrue(items[0].isFailed());\n+ assertThat(items[0].getFailure().getCause(), instanceOf(IndexNotFoundException.class));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().getMetaData().getIndices().size(), equalTo(0));\n+ }\n+\n+ public void testBulkRequestWithDeleteAndExternalVersioningCreatesMapping() {\n+ BulkItemResponse[] items = client().prepareBulk().add(\n+ new DeleteRequest(\"test\", \"test\", \"test\")\n+ .version(randomIntBetween(0, 1000))\n+ .versionType(randomFrom(VersionType.EXTERNAL, VersionType.EXTERNAL_GTE)))\n+ .get().getItems();\n+ assertThat(items.length, equalTo(1));\n+ assertFalse(items[0].isFailed());\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().getMetaData().index(\"test\"), notNullValue());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/bulk/BulkIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,48 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.delete;\n+\n+import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.VersionType;\n+import org.elasticsearch.test.ESIntegTestCase;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.notNullValue;\n+\n+public class DeleteRequestIT extends ESIntegTestCase {\n+\n+ public void testDeleteDoesNotCreateMapping() {\n+ try {\n+ client().prepareDelete(\"test\", \"test\", \"test\").get();\n+ fail(\"Expected to fail with IndexNotFoundException\");\n+ } catch (IndexNotFoundException expected) {\n+ // ignore\n+ }\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().getMetaData().getIndices().size(), equalTo(0));\n+ }\n+\n+ public void testDeleteWithExternalVersioningCreatesMapping() {\n+ client().prepareDelete(\"test\", \"test\", \"test\")\n+ .setVersion(randomIntBetween(0, 1000))\n+ .setVersionType(randomFrom(VersionType.EXTERNAL, VersionType.EXTERNAL_GTE))\n+ .get();\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().getMetaData().index(\"test\"), notNullValue());\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/action/delete/DeleteRequestIT.java",
"status": "added"
},
{
"diff": "@@ -78,8 +78,11 @@ field _parent, which is in the format parent_type#parent_id.\n [[delete-index-creation]]\n === Automatic index creation\n \n-The delete operation automatically creates an index if it has not been\n-created before (check out the <<indices-create-index,create index API>>\n+Automatic index creation only applies to delete operations that use\n+<<docs-index_,external versioning>>. With version type set to\n+`external`, `external_gt` or `external_gte` the delete operation\n+automatically creates an index if it has not been created\n+before (check out the <<indices-create-index,create index API>>\n for manually creating an index), and also automatically creates a\n dynamic type mapping for the specific type if it has not been created\n before (check out the <<indices-put-mapping,put mapping>>",
"filename": "docs/reference/docs/delete.asciidoc",
"status": "modified"
},
{
"diff": "@@ -14,3 +14,8 @@ your application to Elasticsearch 2.2.\n The field stats' response format has been changed for number based and date fields. The `min_value` and\n `max_value` elements now return values as number and the new `min_value_as_string` and `max_value_as_string`\n return the values as string.\n+\n+==== Delete API\n+\n+Deleting a document from a non-existing index now only creates the index if <<docs-index_,external versioning>>\n+is used (i.e. `version_type` set to `external`, `external_gt` or `external_gte`).",
"filename": "docs/reference/migration/migrate_2_2.asciidoc",
"status": "modified"
}
]
}
|
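The behaviour change above reduces to a small predicate: a delete against a missing index may only auto-create it when the version type is `external` or `external_gte`, matching the docs change in the diff. A hedged, standalone sketch of that gate (the enum mirrors `VersionType`; the method name and surrounding class are illustrative only):

```java
// Standalone sketch of the auto-create gate added in TransportDeleteAction /
// TransportBulkAction above; shouldAutoCreateForDelete is an illustrative name.
class DeleteAutoCreateSketch {
    enum VersionType { INTERNAL, EXTERNAL, EXTERNAL_GTE, FORCE }

    static boolean shouldAutoCreateForDelete(VersionType versionType, boolean autoCreateEnabled) {
        boolean external = versionType == VersionType.EXTERNAL || versionType == VersionType.EXTERNAL_GTE;
        // plain deletes against a missing index now fail with index_not_found instead
        return external && autoCreateEnabled;
    }

    public static void main(String[] args) {
        System.out.println(shouldAutoCreateForDelete(VersionType.INTERNAL, true));     // false
        System.out.println(shouldAutoCreateForDelete(VersionType.EXTERNAL_GTE, true)); // true
    }
}
```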
{
"body": "elasticsearch v2.1.0\n\n`http://127.0.0.1:9200/_analyze?text=13ab2abcd&analyzer=keyword1`\n`http://127.0.0.1:9200/_analyze?text=13ab2abcd&tokenizer=keyword1`\n`http://127.0.0.1:9200/_analyze?text=13ab2abcd&tokenizer=keyword&token_filters=keyword1`\n\nthe `keyword1` is not exist,if you use them for testing,the server will not responding.\nerror in the console is:\n\n`\n[2015-12-01 21:10:05,347][ERROR][transport ] [Crossbones] failed to handle exception for action [indices:admin/analyze[s]], handler [org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$1@64daa69a]\njava.lang.NullPointerException\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:195)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.access$700(TransportSingleShardAction.java:115)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$1.handleException(TransportSingleShardAction.java:174)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:821)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:799)\n at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:361)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-01 21:11:06,503][ERROR][transport ] [Crossbones] failed to handle exception for action [indices:admin/analyze[s]], handler [org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$1@5560501]\njava.lang.NullPointerException\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:195)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.access$700(TransportSingleShardAction.java:115)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$1.handleException(TransportSingleShardAction.java:174)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:821)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:799)\n at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:361)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-01 21:12:04,471][ERROR][transport ] [Crossbones] failed to handle exception for action [indices:admin/analyze[s]], handler [org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$1@26a7dd07]\njava.lang.NullPointerException\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:195)\n at 
org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.access$700(TransportSingleShardAction.java:115)\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$1.handleException(TransportSingleShardAction.java:174)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:821)\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:799)\n at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:361)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n`\n",
"comments": [
{
"body": "This also happens if the analyzer is attached to a specific index, and the API is called out of context:\n\nThis works:\n\n```\nGET /my_index/_analyze\n{\n \"analyzer\" : \"some_analyzer\",\n \"text\" : \"3 bedroom\"\n}\n```\n\nThis hangs, and throws a java.lang.NullPointerException\n\n```\nGET /_analyze\n{\n \"analyzer\" : \"some_analyzer\",\n \"text\" : \"3 bedroom\"\n}\n```\n",
"created_at": "2015-12-08T19:11:32Z"
},
{
"body": "@PhaedrusTheGreek answer was proven correct.\n",
"created_at": "2015-12-09T10:40:33Z"
}
],
"number": 15148,
"title": "Analyze API will not responding if the analyzer or the tokenizer is not exist"
}
|
{
"body": "Fix error handling in TransportSingleShardAction without shardIt\n\nCloses #15148\n",
"number": 15447,
"review_comments": [
{
"body": "leftover?\n",
"created_at": "2015-12-15T15:35:22Z"
},
{
"body": "temporary test?\n",
"created_at": "2015-12-15T15:35:25Z"
},
{
"body": "Ahh... Good catch... I have to fix...\n",
"created_at": "2015-12-15T15:36:48Z"
},
{
"body": "Just catch IllegalArgumentException?\n",
"created_at": "2015-12-15T15:43:15Z"
},
{
"body": "Do we need this?\n",
"created_at": "2015-12-15T15:45:52Z"
},
{
"body": "Just catch IllegalArgumentException I think. Maybe assert something about the message but its not a big deal.\n",
"created_at": "2015-12-15T15:46:13Z"
},
{
"body": "If you do that then the action will not be retried in another shard. It's ok for the analyze api because it would fail on every shard but it seems like it would break other actions. I think the problem you're trying to fix is that perform does not check if shardIt is null before accessing it. \n",
"created_at": "2015-12-15T16:32:31Z"
},
{
"body": "It is a big deal! Otherwise we have no idea if this test is hitting a completely different error than what was expected. Testing exceptions should always check something in the exception message to be sure the _right_ exception was caught (we use IAE all over the place, so the exception class alone is not sufficient).\n",
"created_at": "2015-12-15T19:23:54Z"
},
{
"body": "Yeah, you are right. Things like IllegalArgumentException should always check it the message. There are exceptions for which its not as important because they are more specific. Though most of the time its just worth checking anyway.\n",
"created_at": "2015-12-15T19:35:48Z"
},
{
"body": "@jimferenczi Thanks for your comment. However ,I'm not sure what you say... I thought we have already the null check of shardIt , https://github.com/johtani/elasticsearch/blob/fix/no_respond_analyze_api_without_index/core/src/main/java/org/elasticsearch/action/support/single/shard/TransportSingleShardAction.java#L156 .\nThen I thought perform method always fail.\nIs my understanding wrong? Do I overlook what it is?\n",
"created_at": "2015-12-17T14:38:08Z"
},
{
"body": "@johtani Yes you're right, I misread the changes. Sorry ;)\n",
"created_at": "2015-12-17T14:50:51Z"
}
],
"title": "Analysis : Fix no response from Analyze API without specified index"
}
|
{
"commits": [],
"files": []
}
|
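The review discussion above centres on `perform` dereferencing `shardIt` without a null check, which turned an unknown-analyzer error on an index-less `GET /_analyze` into an NPE and a client that never got a response. A purely illustrative, standalone sketch of the guarded failure path (class and method names are stand-ins, not the real `TransportSingleShardAction`):

```java
import java.util.Iterator;

// Illustrative sketch of the bug class fixed in #15447: when no shard iterator
// exists, the failure handler must not dereference it, otherwise the original
// error is swallowed by an NPE and the client hangs.
class SingleShardRetrySketch {
    interface Listener { void onFailure(Throwable t); }

    private final Iterator<String> shardIt;   // null when the request targets no index
    private final Listener listener;

    SingleShardRetrySketch(Iterator<String> shardIt, Listener listener) {
        this.shardIt = shardIt;
        this.listener = listener;
    }

    void handleException(Throwable e) {
        if (shardIt == null || !shardIt.hasNext()) {
            listener.onFailure(e);             // no shard copies left: report the real cause
        } else {
            performOn(shardIt.next(), e);      // otherwise retry on the next copy
        }
    }

    private void performOn(String shardCopy, Throwable previous) {
        // ... forward the request to the next shard copy ...
    }

    public static void main(String[] args) {
        new SingleShardRetrySketch(null, t -> System.out.println("failed: " + t.getMessage()))
                .handleException(new IllegalArgumentException("failed to find analyzer [keyword1]"));
    }
}
```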
{
"body": "Even with `index.mapper.dynamic: false` in the config file, automatic mapping is being created.\n[Documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#index-creation) states that:\n\n> Automatic mapping creation can be disabled by setting index.mapper.dynamic to false in the config files\n# steps to reproduce\n\nstep 1. Make sure setting exists:\n\n```\ncat config/elasticsearch.yml | grep mapper\nindex.mapper.dynamic: false\n```\n\nstep 2. Make sure no mapping or template exists:\n\n```\ncurl http://localhost:9200/_mapping\n{}\ncurl http://localhost:9200/_template\n{}\n```\n\nstep 3. Create an index:\n\n```\ncurl -XPUT http://localhost:9200/myindex/mytype/1 -d '{\n \"myfield\": 1\n}'\n{\"_index\":\"myindex\",\"_type\":\"mytype\",\"_id\":\"1\",\"_version\":1,\"_shards\":{\"total\":1,\"successful\":1,\"failed\":0},\"created\":true}\n```\n\nstep 4. Notice, that a mapping was automatically created:\n\n```\ncurl http://localhost:9200/_mapping\n{\"myindex\":{\"mappings\":{\"mytype\":{\"properties\":{\"myfield\":{\"type\":\"long\"}}}}}}\n```\n# expected\n\nNo mapping should be automatically created at step 4.\n# other info\n\n```\ncurl http://localhost:9200\n{\n \"name\" : \"master\",\n \"cluster_name\" : \"traveltime\",\n \"version\" : {\n \"number\" : \"2.1.0\",\n \"build_hash\" : \"72cd1f1a3eee09505e036106146dc1949dc5dc87\",\n \"build_timestamp\" : \"2015-11-18T22:40:03Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"5.3.1\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n```\n",
"comments": [
{
"body": "It appears that this parameter is ignored during the index creation process, eg:\n\n```\nPUT t \n{\n \"settings\": {\n \"index.mapper.dynamic\": false\n }\n}\n```\n\nThis throws an exception correctly:\n\n```\nPUT t/x/1\n{}\n```\n\nBut start a node with `--index.mapper.dynamic false`, then:\n\n```\nDELETE t\n```\n\nThis creates the `t` index with the `x` type:\n\n```\nPUT t/x/1\n{} \n```\n\nAnd this then fails with the correct exception:\n\n```\nPUT t/y/2\n{}\n```\n",
"created_at": "2015-12-11T09:35:42Z"
},
{
"body": "I took a stab at this one: https://github.com/elastic/elasticsearch/pull/15424. Had a little trouble with the test setup so could use some input.\n",
"created_at": "2015-12-14T17:35:47Z"
}
],
"number": 15381,
"title": "Config `index.mapper.dynamic: false` is not honored."
}
|
{
"body": "The MapperService doesn't currently check the\nindex.mapper.dynamic setting during index creation,\nso indices can be created with dynamic mappings even\nif this setting is false. Add a check that throws an\nexception in this case. Fixes #15381\n",
"number": 15424,
"review_comments": [
{
"body": "This setting is per index and it is honored every time (no bug). What's not working is when you set this setting for all indices.\nTo do this you can override nodeSettings in your test: \n\n```\nprotected Settings nodeSettings(int nodeOrdinal) {\n return settingsBuilder().put(super.nodeSettings(nodeOrdinal))\n .put(\"index.mapper.dynamic\", \"false\")\n .build();\n}\n```\n",
"created_at": "2015-12-17T14:27:52Z"
},
{
"body": "You cannot do the check here. You have no way to know if the type is created automatically by an index operation or through a legitimate create index.\n",
"created_at": "2015-12-17T14:28:53Z"
},
{
"body": "Can we make `\"index.mapper.dynamic\"` a constant in `MapperService`?\n",
"created_at": "2015-12-17T23:18:37Z"
},
{
"body": "We greatly appreciate the contribution, but we do not place attribution in our source code like this. :)\n",
"created_at": "2015-12-17T23:19:40Z"
},
{
"body": "Assert on the exception message, to prevent situations where the same exception is thrown but for a different reason.\n",
"created_at": "2015-12-17T23:20:20Z"
},
{
"body": "Use the factored out [constant](https://github.com/elastic/elasticsearch/pull/15424/files#r47975685) for the setting key here.\n",
"created_at": "2015-12-17T23:20:41Z"
},
{
"body": "Is there any way to just make this a unit test (extending `ESTestCase`) instead of a full blown integration test?\n",
"created_at": "2015-12-17T23:21:31Z"
},
{
"body": "If this [does end up needing to be an integration test](https://github.com/elastic/elasticsearch/pull/15424/files#r47975786), this file will need a license header.\n",
"created_at": "2015-12-17T23:22:06Z"
},
{
"body": "Also, I think that we should factor the default value out into a constant as well, and reuse it here and in `MapperService`.\n",
"created_at": "2015-12-17T23:22:44Z"
},
{
"body": "TypeMissingException ? It's a not a parsing exception per say. Can you add the index name and the type in the message.\n",
"created_at": "2015-12-18T08:45:44Z"
},
{
"body": "Nit: Can you make this `INDEX_MAPPER_DYNAMIC_SETTING`?\n",
"created_at": "2015-12-18T14:27:37Z"
},
{
"body": "Nit: Can you make this `INDEX_MAPPER_DYNAMIC_DEFAULT`?\n",
"created_at": "2015-12-18T14:27:52Z"
},
{
"body": "This should be using the default value of the setting, instead of an explicit `true`.\n",
"created_at": "2015-12-18T14:28:17Z"
},
{
"body": "I can! How did you come up with that name?\n",
"created_at": "2015-12-18T14:30:17Z"
},
{
"body": "The key used in settings is `index.mapper.dynamic`; I changed the `.` to `_` and made it uppercase.\n",
"created_at": "2015-12-18T14:31:44Z"
},
{
"body": "Can you fold this logic into AutoCreateIndex so that it will also apply to delete/bulk/update requests?\n",
"created_at": "2015-12-28T19:00:58Z"
},
{
"body": "This is going to fail the forbidden APIs check for using the default locale; have you been running the `precommit` task or any tasks that depend on `precommit`?\n",
"created_at": "2015-12-28T19:09:17Z"
},
{
"body": "You can get away with it right now because there is only one test, but this should be initialized once before the test suite, not once before each test in the suite.\n",
"created_at": "2015-12-28T19:11:05Z"
},
{
"body": "Since this is static, the name should be `THREAD_POOL`.\n",
"created_at": "2015-12-28T19:11:18Z"
},
{
"body": "Call this `destroyThreadPool`?\n",
"created_at": "2015-12-28T19:12:03Z"
},
{
"body": "Instead of passing `null` you can pass `new Index(request.index())`.\n",
"created_at": "2015-12-28T19:29:08Z"
},
{
"body": "Can we change this to just an `assertTrue` that `e.getMessage()` contains just the substring you added in `TransportIndexAction`?\n",
"created_at": "2015-12-28T19:31:35Z"
},
{
"body": "Should assert that `e` is an instance of `TypeMissingException`.\n",
"created_at": "2015-12-28T19:31:48Z"
},
{
"body": "If neither `onFailure` nor `onResponse` are mistakenly not executed, this test will still pass. I think that we should add something to avoid that possibility.\n",
"created_at": "2015-12-28T19:36:05Z"
},
{
"body": "The `new HashSet<>()` can be replaced with `Collections.emptySet()` (and then you'll have an import to remove).\n",
"created_at": "2015-12-28T19:49:04Z"
},
{
"body": "Can you make this a JUnit assertion instead of a Java assertion? It's preferable to use a test assertion with matchers for this because then nice error messages are produced automatically when the assertion fails.\n",
"created_at": "2015-12-29T14:26:48Z"
},
{
"body": "You can just use the literal boolean `false` instead of the string `\"false\"`.\n",
"created_at": "2015-12-29T14:31:29Z"
},
{
"body": "Let's also make this a JUnit assertion instead of a Java assertion.\n",
"created_at": "2015-12-29T14:32:09Z"
},
{
"body": "You can just make this an anonymous ActionListener as you had before, capturing an AtomicBoolean to flag whether or not `onFailure` was actually invoked. I think that it will clearer if it's inline with the test, and there will be a little less boilerplate because you can drop the field and the setter.\n",
"created_at": "2015-12-29T14:33:42Z"
},
{
"body": "Sorry, what I meant in my previous comment about matchers is that this should be something like\n\n``` java\nassertThat(e, instanceOf(IndexNotFoundException.class));\n```\n\nI'm going off-the-cuff here, you'll have to lookup the exact syntax and the appropriate matcher to statically import (`CoreMatchers#instanceOf`?). :)\n\nThis gives nicer error messages like \"`Expected: an instance of org.elasticsearch.index.IndexNotFoundException but: ...`\" instead of generic assertion errors.\n",
"created_at": "2015-12-29T23:20:52Z"
}
],
"title": "MapperService: check index.mapper.dynamic during index creation"
}
|
{
"commits": [
{
"message": "MapperService: check index.mapper.dynamic during index creation\nThe MapperService doesn't currently check the\nindex.mapper.dynamic setting during index creation,\nso indices can be created with dynamic mappings even\nif this setting is false. Add a check that throws an\nexception in this case. Fixes #15381"
},
{
"message": "Check for dynamic disabled in TransportIndexAction.java"
},
{
"message": "Licensing change and factor index.mapper.dynamic to a constant"
},
{
"message": "dynamic mapping exception cleanup\nthrow a TypeMissingException whose message\ncontains the index and type"
},
{
"message": "variable renames\nDYNAMIC_MAPPING_ENABLED_SETTING -> INDEX_MAPPER_DYNAMIC_SETTING,\nDYNAMIC_MAPPING_ENABLED_DEFAULT -> INDEX_MAPPER_DYNAMIC_DEFAULT"
},
{
"message": "Make DynamicMappingDisabledTests a unit test"
},
{
"message": "Remove unused import"
},
{
"message": "Review cleanups 12/28\n- move dynamic mapping disabled check to AutoCreateIndex\n- move test threadpool setup to an @BeforeClass method\n- rename threadPool -> THREAD_POOL, afterClass -> destroyThreadPool\n- make sure onFailure gets called in the test"
},
{
"message": "Review cleanups 12/29\n- use an AtomicBoolean and an inline ActionListener\n- use JUnit assertions instead of Java"
},
{
"message": "Add comment and use default false AtomicBoolean and assertThat"
}
],
"files": [
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.mapper.MapperService;\n \n /**\n * Encapsulates the logic of whether a new index should be automatically created when\n@@ -35,13 +36,15 @@ public final class AutoCreateIndex {\n \n private final boolean needToCheck;\n private final boolean globallyDisabled;\n+ private final boolean dynamicMappingDisabled;\n private final String[] matches;\n private final String[] matches2;\n private final IndexNameExpressionResolver resolver;\n \n @Inject\n public AutoCreateIndex(Settings settings, IndexNameExpressionResolver resolver) {\n this.resolver = resolver;\n+ dynamicMappingDisabled = !settings.getAsBoolean(MapperService.INDEX_MAPPER_DYNAMIC_SETTING, MapperService.INDEX_MAPPER_DYNAMIC_DEFAULT);\n String value = settings.get(\"action.auto_create_index\");\n if (value == null || Booleans.isExplicitTrue(value)) {\n needToCheck = true;\n@@ -82,7 +85,7 @@ public boolean shouldAutoCreate(String index, ClusterState state) {\n if (exists) {\n return false;\n }\n- if (globallyDisabled) {\n+ if (globallyDisabled || dynamicMappingDisabled) {\n return false;\n }\n // matches not set, default value of \"true\"",
"filename": "core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java",
"status": "modified"
},
{
"diff": "@@ -68,6 +68,8 @@\n public class MapperService extends AbstractIndexComponent implements Closeable {\n \n public static final String DEFAULT_MAPPING = \"_default_\";\n+ public static final String INDEX_MAPPER_DYNAMIC_SETTING = \"index.mapper.dynamic\";\n+ public static final boolean INDEX_MAPPER_DYNAMIC_DEFAULT = true;\n private static ObjectHashSet<String> META_FIELDS = ObjectHashSet.from(\n \"_uid\", \"_id\", \"_type\", \"_all\", \"_parent\", \"_routing\", \"_index\",\n \"_size\", \"_timestamp\", \"_ttl\"\n@@ -120,7 +122,7 @@ public MapperService(IndexSettings indexSettings, AnalysisService analysisServic\n this.searchQuoteAnalyzer = new MapperAnalyzerWrapper(analysisService.defaultSearchQuoteAnalyzer(), p -> p.searchQuoteAnalyzer());\n this.mapperRegistry = mapperRegistry;\n \n- this.dynamic = this.indexSettings.getSettings().getAsBoolean(\"index.mapper.dynamic\", true);\n+ this.dynamic = this.indexSettings.getSettings().getAsBoolean(INDEX_MAPPER_DYNAMIC_SETTING, INDEX_MAPPER_DYNAMIC_DEFAULT);\n defaultPercolatorMappingSource = \"{\\n\" +\n \"\\\"_default_\\\":{\\n\" +\n \"\\\"properties\\\" : {\\n\" +",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,114 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.action.index.TransportIndexAction;\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.AutoCreateIndex;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.elasticsearch.transport.TransportService;\n+import org.elasticsearch.transport.local.LocalTransport;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n+import org.elasticsearch.test.cluster.TestClusterService;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+import static org.hamcrest.CoreMatchers.instanceOf;\n+\n+import java.util.Collections;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+public class DynamicMappingDisabledTests extends ESSingleNodeTestCase {\n+\n+ private static ThreadPool THREAD_POOL;\n+ private TestClusterService clusterService;\n+ private LocalTransport transport;\n+ private TransportService transportService;\n+ private IndicesService indicesService;\n+ private ShardStateAction shardStateAction;\n+ private ActionFilters actionFilters;\n+ private IndexNameExpressionResolver indexNameExpressionResolver;\n+ private AutoCreateIndex autoCreateIndex;\n+ private Settings settings;\n+\n+ @BeforeClass\n+ public static void createThreadPool() {\n+ THREAD_POOL = new ThreadPool(\"DynamicMappingDisabledTests\");\n+ }\n+\n+ @Override\n+ public void setUp() throws Exception {\n+ super.setUp();\n+ settings = Settings.builder()\n+ .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING, false)\n+ .build();\n+ clusterService = new TestClusterService(THREAD_POOL);\n+ transport = new LocalTransport(settings, THREAD_POOL, Version.CURRENT, new NamedWriteableRegistry());\n+ transportService = new TransportService(transport, THREAD_POOL);\n+ indicesService = getInstanceFromNode(IndicesService.class);\n+ shardStateAction = new ShardStateAction(settings, clusterService, transportService, null, null);\n+ actionFilters = new ActionFilters(Collections.emptySet());\n+ indexNameExpressionResolver = new IndexNameExpressionResolver(settings);\n+ autoCreateIndex = new 
AutoCreateIndex(settings, indexNameExpressionResolver);\n+ }\n+\n+ @AfterClass\n+ public static void destroyThreadPool() {\n+ ThreadPool.terminate(THREAD_POOL, 30, TimeUnit.SECONDS);\n+ // since static must set to null to be eligible for collection\n+ THREAD_POOL = null;\n+ }\n+\n+ public void testDynamicDisabled() {\n+ TransportIndexAction action = new TransportIndexAction(settings, transportService, clusterService,\n+ indicesService, THREAD_POOL, shardStateAction, null, null, actionFilters, indexNameExpressionResolver,\n+ autoCreateIndex);\n+\n+ IndexRequest request = new IndexRequest(\"index\", \"type\", \"1\");\n+ request.source(\"foo\", 3);\n+ final AtomicBoolean onFailureCalled = new AtomicBoolean();\n+\n+ action.execute(request, new ActionListener<IndexResponse>() {\n+ @Override\n+ public void onResponse(IndexResponse indexResponse) {\n+ fail(\"Indexing request should have failed\");\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ onFailureCalled.set(true);\n+ assertThat(e, instanceOf(IndexNotFoundException.class));\n+ assertEquals(e.getMessage(), \"no such index\");\n+ }\n+ });\n+\n+ assertTrue(onFailureCalled.get());\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingDisabledTests.java",
"status": "added"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.test.ESIntegTestCase;\n \n import java.util.Arrays;\n@@ -40,7 +41,7 @@ public void testGetSettingsWithBlocks() throws Exception {\n .setSettings(Settings.settingsBuilder()\n .put(\"index.refresh_interval\", -1)\n .put(\"index.merge.policy.expunge_deletes_allowed\", \"30\")\n- .put(\"index.mapper.dynamic\", false)));\n+ .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING, false)));\n \n for (String block : Arrays.asList(SETTING_BLOCKS_READ, SETTING_BLOCKS_WRITE, SETTING_READ_ONLY)) {\n try {\n@@ -49,7 +50,7 @@ public void testGetSettingsWithBlocks() throws Exception {\n assertThat(response.getIndexToSettings().size(), greaterThanOrEqualTo(1));\n assertThat(response.getSetting(\"test\", \"index.refresh_interval\"), equalTo(\"-1\"));\n assertThat(response.getSetting(\"test\", \"index.merge.policy.expunge_deletes_allowed\"), equalTo(\"30\"));\n- assertThat(response.getSetting(\"test\", \"index.mapper.dynamic\"), equalTo(\"false\"));\n+ assertThat(response.getSetting(\"test\", MapperService.INDEX_MAPPER_DYNAMIC_SETTING), equalTo(\"false\"));\n } finally {\n disableIndexBlock(\"test\", block);\n }",
"filename": "core/src/test/java/org/elasticsearch/indices/settings/GetSettingsBlocksIT.java",
"status": "modified"
}
]
}
|
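The change above moves the check into `AutoCreateIndex`, so a node-level `index.mapper.dynamic: false` stops index, delete, bulk, and update requests from auto-creating a missing index. A minimal, standalone sketch of that decision (a plain `Map` stands in for `Settings`, and the `action.auto_create_index` pattern handling is omitted for brevity):

```java
import java.util.Collections;
import java.util.Map;

// Standalone sketch of the shouldAutoCreate decision after the change above;
// a plain Map stands in for the real Settings object.
class AutoCreateIndexSketch {
    static final String INDEX_MAPPER_DYNAMIC_SETTING = "index.mapper.dynamic";
    static final boolean INDEX_MAPPER_DYNAMIC_DEFAULT = true;

    static boolean shouldAutoCreate(Map<String, String> nodeSettings, boolean indexExists) {
        if (indexExists) {
            return false;                                  // nothing to create
        }
        String value = nodeSettings.getOrDefault(INDEX_MAPPER_DYNAMIC_SETTING,
                Boolean.toString(INDEX_MAPPER_DYNAMIC_DEFAULT));
        boolean dynamicMappingDisabled = !Boolean.parseBoolean(value);
        return !dynamicMappingDisabled;                    // disabled means: never auto-create
    }

    public static void main(String[] args) {
        Map<String, String> dynamicOff = Collections.singletonMap(INDEX_MAPPER_DYNAMIC_SETTING, "false");
        System.out.println(shouldAutoCreate(dynamicOff, false));                 // false
        System.out.println(shouldAutoCreate(Collections.emptyMap(), false));     // true
    }
}
```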
{
"body": "Today, the disk got full and ElasticSearch is not able to go back again. Isn't there a built-in system that prevents such failures. I agree that we should be monitoring the hard space and not let this happen in first place, but some times things happen.\n\nMy setup is a single node at present. Using ES 2.1.0, which was supposed to have this fix.\n\nI don't see a clear way to recover the node. A post at https://t37.net/how-to-fix-your-elasticsearch-cluster-stuck-in-initializing-shards-mode.html seemed to help, but still few indices got corrupted and I have no way to recovering them.\n\nAt the end, I ended up deleted the indices, but that's not the way it should be. Such things must be taken care of ultimately. But this is clearly a bug with ES 2.1.0\n",
"comments": [
{
"body": "maybe you can tell us what prevented you from starting up again?\n",
"created_at": "2015-12-09T11:13:22Z"
},
{
"body": "There were 15 indices on the node. Of those 15 indices, 9 indices had issues with their shards and ES status was red.\n\nIssue curl -XGET http://localhost:9200/_cat/shards command, listed 52 shards UNASSIGNED and 4 shards in INITIALIZING status.\n\nI issued reroute command (localhost:9200/_cluster/reroute) to move UNASSIGNED to force shard allocation. \n\nHowever, the shards that were in INITIALIZING status stay there. The CPU usage was 100% (8-cores busy) for more than 4 hours, before I gave up and started deleting all indices that were causing the problem. Data was about 50GB and 6 Million records.\n\nEven issuing systemctl stop elasticsearch.service took forever.\n\nIs this what you were looking for, if not let me know what you are looking for and I will reply ASAP\n",
"created_at": "2015-12-09T11:53:33Z"
},
{
"body": "there are lots of open questions, do you have some logs telling why the shards where unassigned? did you just upgrade? Why do you force them to allocate? did you run into any disk space issues?\n",
"created_at": "2015-12-09T15:40:08Z"
},
{
"body": "Yes, the disk got full and then after the issue started happening as ES\nstopped responding\n\nRegards,\nKetan\n\nOn Dec 9, 2015, at 9:11 PM, Simon Willnauer notifications@github.com\nwrote:\n\nthere are lots of open questions, do you have some logs telling why the\nshards where unassigned? did you just upgrade? Why do you force them to\nallocate? did you run into any disk space issues?\n\n—\nReply to this email directly or view it on GitHub\nhttps://github.com/elastic/elasticsearch/issues/15333#issuecomment-163296271\n.\n",
"created_at": "2015-12-09T15:59:51Z"
},
{
"body": "@kpcool Please could you provide the logs and also answers for the questions asked by @s1monw . The information you have provided up until now provides no clues at to why the shards were not reassigned, etc.\n",
"created_at": "2015-12-10T12:42:43Z"
},
{
"body": "Here's the log around that time.\n\n```\n[2015-12-09 00:00:18,560][ERROR][index.engine ] [Mister Jip] [topbeat-2015.12.09][3] failed to merge\njava.io.IOException: No space left on device\n at sun.nio.ch.FileDispatcherImpl.write0(Native Method)\n at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)\n at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)\n at sun.nio.ch.IOUtil.write(IOUtil.java:65)\n at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)\n at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)\n at java.nio.channels.Channels.writeFully(Channels.java:101)\n at java.nio.channels.Channels.access$000(Channels.java:61)\n at java.nio.channels.Channels$1.write(Channels.java:174)\n at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:271)\n at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)\n at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)\n at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)\n at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)\n at org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:73)\n at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)\n at org.apache.lucene.util.packed.DirectWriter.flush(DirectWriter.java:86)\n at org.apache.lucene.util.packed.DirectWriter.add(DirectWriter.java:78)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:218)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:80)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addSortedNumericField(Lucene50DocValuesConsumer.java:470)\n at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedNumericField(PerFieldDocValuesFormat.java:126)\n at org.apache.lucene.codecs.DocValuesConsumer.mergeSortedNumericField(DocValuesConsumer.java:417)\n at org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:236)\n at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:150)\n at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:105)\n at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4089)\n at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)\n at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)\n at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)\n at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)\n[2015-12-09 00:00:18,997][WARN ][index.engine ] [Mister Jip] [topbeat-2015.12.09][3] failed engine [already closed by tragic event]\njava.io.IOException: No space left on device\n at sun.nio.ch.FileDispatcherImpl.write0(Native Method)\n at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)\n at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)\n at sun.nio.ch.IOUtil.write(IOUtil.java:65)\n at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)\n at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)\n at java.nio.channels.Channels.writeFully(Channels.java:101)\n at java.nio.channels.Channels.access$000(Channels.java:61)\n at java.nio.channels.Channels$1.write(Channels.java:174)\n at 
org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:271)\n at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)\n at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)\n at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)\n at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)\n at org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:73)\n at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)\n at org.apache.lucene.util.packed.DirectWriter.flush(DirectWriter.java:86)\n at org.apache.lucene.util.packed.DirectWriter.add(DirectWriter.java:78)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:218)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:80)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addSortedNumericField(Lucene50DocValuesConsumer.java:470)\n at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedNumericField(PerFieldDocValuesFormat.java:126)\n at org.apache.lucene.codecs.DocValuesConsumer.mergeSortedNumericField(DocValuesConsumer.java:417)\n at org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:236)\n at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:150)\n at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:105)\n at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4089)\n at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)\n at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)\n at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)\n at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)\n[2015-12-09 00:00:19,015][WARN ][indices.cluster ] [Mister Jip] [[topbeat-2015.12.09][3]] marking and sending shard failed due to [engine failure, reason [already closed by tragic event]]\njava.io.IOException: No space left on device\n at sun.nio.ch.FileDispatcherImpl.write0(Native Method)\n at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)\n at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)\n at sun.nio.ch.IOUtil.write(IOUtil.java:65)\n at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)\n at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)\n at java.nio.channels.Channels.writeFully(Channels.java:101)\n at java.nio.channels.Channels.access$000(Channels.java:61)\n at java.nio.channels.Channels$1.write(Channels.java:174)\n at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:271)\n at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)\n at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)\n at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)\n at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)\n at org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:73)\n at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)\n at org.apache.lucene.util.packed.DirectWriter.flush(DirectWriter.java:86)\n at org.apache.lucene.util.packed.DirectWriter.add(DirectWriter.java:78)\n at 
org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:218)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:80)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addSortedNumericField(Lucene50DocValuesConsumer.java:470)\n at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedNumericField(PerFieldDocValuesFormat.java:126)\n at org.apache.lucene.codecs.DocValuesConsumer.mergeSortedNumericField(DocValuesConsumer.java:417)\n at org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:236)\n at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:150)\n at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:105)\n at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4089)\n at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)\n at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)\n at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)\n at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)\n[2015-12-09 00:00:19,015][WARN ][cluster.action.shard ] [Mister Jip] [topbeat-2015.12.09][3] received shard failed for [topbeat-2015.12.09][3], node[HmS7B_CdRFqPFT1UeUZEfA], [P], v[5], s[INITIALIZING], a[id=3Yn-3bO6QtClvHUDwYnClw], unassigned_info[[reason=ALLOCATION_FAILED], at[2015-12-09T04:58:37.185Z], details[engine failure, reason [merge failed], failure MergeException[java.io.IOException: No space left on device]; nested: IOException[No space left on device]; ]], indexUUID [rvUixkXqTty2osh3-PMubw], message [engine failure, reason [already closed by tragic event]], failure [IOException[No space left on device]]\njava.io.IOException: No space left on device\n at sun.nio.ch.FileDispatcherImpl.write0(Native Method)\n at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)\n at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)\n at sun.nio.ch.IOUtil.write(IOUtil.java:65)\n at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)\n at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)\n at java.nio.channels.Channels.writeFully(Channels.java:101)\n at java.nio.channels.Channels.access$000(Channels.java:61)\n at java.nio.channels.Channels$1.write(Channels.java:174)\n at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:271)\n at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)\n at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)\n at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)\n at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)\n at org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:73)\n at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)\n at org.apache.lucene.util.packed.DirectWriter.flush(DirectWriter.java:86)\n at org.apache.lucene.util.packed.DirectWriter.add(DirectWriter.java:78)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:218)\n at org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addNumericField(Lucene50DocValuesConsumer.java:80)\n at 
org.apache.lucene.codecs.lucene50.Lucene50DocValuesConsumer.addSortedNumericField(Lucene50DocValuesConsumer.java:470)\n at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedNumericField(PerFieldDocValuesFormat.java:126)\n at org.apache.lucene.codecs.DocValuesConsumer.mergeSortedNumericField(DocValuesConsumer.java:417)\n at org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:236)\n at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:150)\n at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:105)\n at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4089)\n at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)\n at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)\n at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)\n at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)\n\n[2015-12-09 00:00:19,887][WARN ][index.translog ] [Mister Jip] [topbeat-2015.12.09][0] failed to delete temp file /var/lib/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/0/translog/translog-6857015315422195400.tlog\njava.nio.file.NoSuchFileException: /var/lib/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/0/translog/translog-6857015315422195400.tlog\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)\n at java.nio.file.Files.delete(Files.java:1126)\n at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:324)\n at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-09 00:00:24,760][WARN ][cluster.routing.allocation.decider] [Mister Jip] high disk watermark [90%] exceeded on [HmS7B_CdRFqPFT1UeUZEfA][Mister Jip][/var/lib/elasticsearch/DC_Reports/nodes/0] free: 1.3mb[0%], shards will be relocated away from this node\n[2015-12-09 00:00:24,760][INFO ][cluster.routing.allocation.decider] [Mister Jip] rerouting shards: [high 
disk watermark exceeded on one or more nodes]\n\n[2015-12-09 00:00:24,851][INFO ][rest.suppressed ] /dealscornerin-50 Params: {index=dealscornerin-50}\n[dealscornerin-50] IndexAlreadyExistsException[already exists]\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:168)\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:520)\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.access$200(MetaDataCreateIndexService.java:97)\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:241)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:388)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-09 00:00:54,241][ERROR][marvel.agent ] [Mister Jip] background thread had an uncaught exception\nElasticsearchException[failed to flush exporter bulks]\n at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)\n at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)\n at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)\n at java.lang.Thread.run(Thread.java:745)\n Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:\n[0]: index [.marvel-es-2015.12.09], type [index_recovery], id [AVGFHCn_dr-UG15JaoIa], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[1]: index [.marvel-es-2015.12.09], type [indices_stats], id [AVGFHCn_dr-UG15JaoIb], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[2]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.04:3:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[3]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.04:3:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[4]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.04:1:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. 
Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[5]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.04:1:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[6]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.04:2:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[7]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.04:2:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[8]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.04:4:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[9]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.04:4:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[10]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.04:0:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[11]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.04:0:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[12]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.03:3:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[13]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.03:3:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[14]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.03:1:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. 
Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[15]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.03:1:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[16]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.03:2:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[17]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.03:2:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[18]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:topbeat-2015.12.03:4:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[19]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:_na:topbeat-2015.12.03:4:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n--------------Similar logs--------------\n[98]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:dealscornerin-49:0:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]\n[99]: index [.marvel-es-2015.12.09], type [shards], id [nbwEWrIlSBWjVgm32O2hAA:HmS7B_CdRFqPFT1UeUZEfA:packetbeat-2015.12.03:3:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3e3d36d4]]]\n at org.elasticsearch.marvel.agent.exporter.local.LocalBulk.flush(LocalBulk.java:114)\n at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:101)\n ... 
3 more\n[2015-12-09 00:00:55,213][WARN ][cluster.routing.allocation.decider] [Mister Jip] high disk watermark [90%] exceeded on [HmS7B_CdRFqPFT1UeUZEfA][Mister Jip][/var/lib/elasticsearch/DC_Reports/nodes/0] free: 20kb[3.8E-5%], shards will be relocated away from this node\n[2015-12-09 00:01:04,257][DEBUG][action.admin.indices.stats] [Mister Jip] [indices:monitor/stats] failed to execute operation for shard [[topbeat-2015.12.09][4], node[HmS7B_CdRFqPFT1UeUZEfA], [P], v[5], s[INITIALIZING], a[id=n5bBcfxdS7ey8IpgEyxwzA], unassigned_info[[reason=ALLOCATION_FAILED], at[2015-12-09T04:58:37.227Z], details[engine failure, reason [merge failed], failure MergeException[java.io.IOException: No space left on device]; nested: IOException[No space left on device]; ]]]\n[topbeat-2015.12.09][[topbeat-2015.12.09][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [topbeat-2015.12.09][[topbeat-2015.12.09][4]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]\n at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:974)\n at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:808)\n at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:628)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)\n ... 
7 more\n[2015-12-09 00:01:04,257][DEBUG][action.admin.indices.stats] [Mister Jip] [indices:monitor/stats] failed to execute operation for shard [[topbeat-2015.12.09][2], node[HmS7B_CdRFqPFT1UeUZEfA], [P], v[33], s[INITIALIZING], a[id=hsMorSXnRYCQa28IlkksYQ], unassigned_info[[reason=ALLOCATION_FAILED], at[2015-12-09T04:58:37.227Z], details[engine failure, reason [already closed by tragic event], failure IOException[No space left on device]]]]\n[topbeat-2015.12.09][[topbeat-2015.12.09][2]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [topbeat-2015.12.09][[topbeat-2015.12.09][2]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]\n at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:974)\n at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:808)\n at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:628)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)\n ... 7 more\n```\n",
"created_at": "2015-12-10T13:02:02Z"
},
{
"body": "OK, so here the disk is full. What happened in the logs after you cleared out space on the disk?\n",
"created_at": "2015-12-10T13:09:34Z"
},
{
"body": "Here's the log when I tried to start the ES after shutting it down:\n\n```\n[2015-12-09 01:53:35,386][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.\n[2015-12-09 01:53:35,632][INFO ][node ] [Maxam] version[2.1.0], pid[5742], build[72cd1f1/2015-11-18T22:40:03Z]\n[2015-12-09 01:53:35,632][INFO ][node ] [Maxam] initializing ...\n[2015-12-09 01:53:36,167][INFO ][plugins ] [Maxam] loaded [license, marvel-agent], sites [kopf]\n[2015-12-09 01:53:36,219][INFO ][env ] [Maxam] using [1] data paths, mounts [[/home (/dev/mapper/centos-home)]], net usable_space [826.2gb], net total_space [872.6gb], spins? [possibly], types [xfs]\n[2015-12-09 01:53:38,666][INFO ][node ] [Maxam] initialized\n[2015-12-09 01:53:38,666][INFO ][node ] [Maxam] starting ...\n[2015-12-09 01:53:38,877][INFO ][transport ] [Maxam] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}\n[2015-12-09 01:53:38,897][INFO ][discovery ] [Maxam] DC_Reports/ywHqZlB2Ty6FKboZPgRoZQ\n[2015-12-09 01:53:41,926][INFO ][cluster.service ] [Maxam] new_master {Maxam}{ywHqZlB2Ty6FKboZPgRoZQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)\n[2015-12-09 01:53:41,939][INFO ][http ] [Maxam] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}\n[2015-12-09 01:53:41,940][INFO ][node ] [Maxam] started\n[2015-12-09 01:53:46,499][INFO ][license.plugin.core ] [Maxam] license [07b70bf8-cc41-45d7-900c-67a16d05b960] - valid\n[2015-12-09 01:53:46,500][ERROR][license.plugin.core ] [Maxam]\n#\n# License will expire on [Thursday, December 31, 2015]. If you have a new license, please update it.\n# Otherwise, please reach out to your support contact.\n#\n# Commercial plugins operate with reduced functionality on license expiration:\n# - marvel\n# - The agent will stop collecting cluster and indices metrics\n[2015-12-09 01:53:47,706][INFO ][gateway ] [Maxam] recovered [27] indices into cluster_state\n[2015-12-09 01:53:48,164][WARN ][index.translog ] [Maxam] [topbeat-2015.12.09][3] failed to delete temp file /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/3/translog/translog-4819800625171304865.tlog\njava.nio.file.NoSuchFileException: /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/3/translog/translog-4819800625171304865.tlog\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)\n at java.nio.file.Files.delete(Files.java:1126)\n at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:324)\n at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n at 
org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-09 01:53:48,172][WARN ][index.translog ] [Maxam] [topbeat-2015.12.09][2] failed to delete temp file /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/2/translog/translog-4092628000966967177.tlog\njava.nio.file.NoSuchFileException: /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/2/translog/translog-4092628000966967177.tlog\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)\n at java.nio.file.Files.delete(Files.java:1126)\n at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:324)\n at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-09 01:53:48,172][WARN ][index.translog ] [Maxam] [topbeat-2015.12.09][1] failed to delete temp file /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/1/translog/translog-1515358772559515929.tlog\njava.nio.file.NoSuchFileException: /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/1/translog/translog-1515358772559515929.tlog\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)\n at java.nio.file.Files.delete(Files.java:1126)\n at 
org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:324)\n at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-09 01:53:48,164][WARN ][index.translog ] [Maxam] [topbeat-2015.12.09][4] failed to delete temp file /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/4/translog/translog-7914277937547324566.tlog\njava.nio.file.NoSuchFileException: /home/elkuser/elasticsearch/DC_Reports/nodes/0/indices/topbeat-2015.12.09/4/translog/translog-7914277937547324566.tlog\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)\n at java.nio.file.Files.delete(Files.java:1126)\n at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:324)\n at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-12-09 01:53:48,819][DEBUG][action.admin.indices.stats] [Maxam] [indices:monitor/stats] failed to execute operation for shard 
[[topbeat-2015.12.09][1], node[ywHqZlB2Ty6FKboZPgRoZQ], [P], v[3], s[INITIALIZING], a[id=beI3ZtSZRLSjmp382hpjGA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-12-09T06:53:42.007Z]]]\n[topbeat-2015.12.09][[topbeat-2015.12.09][1]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [topbeat-2015.12.09][[topbeat-2015.12.09][1]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]\n at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:974)\n at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:808)\n at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:628)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)\n ... 
7 more\n[2015-12-09 01:53:48,827][DEBUG][action.admin.indices.stats] [Maxam] [indices:monitor/stats] failed to execute operation for shard [[topbeat-2015.12.09][4], node[ywHqZlB2Ty6FKboZPgRoZQ], [P], v[3], s[INITIALIZING], a[id=ibydvhMyTG-8uFU_y2Gx1g], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-12-09T06:53:42.007Z]]]\n[topbeat-2015.12.09][[topbeat-2015.12.09][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [topbeat-2015.12.09][[topbeat-2015.12.09][4]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]\n at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:974)\n at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:808)\n at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:628)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)\n ... 
7 more\n[2015-12-09 01:53:48,835][DEBUG][action.admin.indices.stats] [Maxam] [indices:monitor/stats] failed to execute operation for shard [[topbeat-2015.12.09][3], node[ywHqZlB2Ty6FKboZPgRoZQ], [P], v[3], s[INITIALIZING], a[id=bOgSHC15TWakUYPIMhEz7A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-12-09T06:53:42.007Z]]]\n[topbeat-2015.12.09][[topbeat-2015.12.09][3]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [topbeat-2015.12.09][[topbeat-2015.12.09][3]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]\n at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:974)\n at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:808)\n at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:628)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)\n ... 
7 more\n[2015-12-09 01:53:48,839][DEBUG][action.admin.indices.stats] [Maxam] [indices:monitor/stats] failed to execute operation for shard [[topbeat-2015.12.09][2], node[ywHqZlB2Ty6FKboZPgRoZQ], [P], v[3], s[INITIALIZING], a[id=QqLsO7WMRn-P24LXF1LuiQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-12-09T06:53:42.007Z]]]\n[topbeat-2015.12.09][[topbeat-2015.12.09][2]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [topbeat-2015.12.09][[topbeat-2015.12.09][2]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]\n at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:974)\n at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:808)\n at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:628)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)\n ... 
7 more\n[2015-12-09 01:53:49,020][DEBUG][action.admin.indices.stats] [Maxam] [indices:monitor/stats] failed to execute operation for shard [[topbeat-2015.12.09][2], node[ywHqZlB2Ty6FKboZPgRoZQ], [P], v[3], s[INITIALIZING], a[id=QqLsO7WMRn-P24LXF1LuiQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-12-09T06:53:42.007Z]]]\n[topbeat-2015.12.09][[topbeat-2015.12.09][2]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)\n at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [topbeat-2015.12.09][[topbeat-2015.12.09][2]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]\n at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:974)\n at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:808)\n at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:628)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)\n ... 7 more\n[2015-12-09 01:54:49,046][ERROR][marvel.agent ] [Maxam] background thread had an uncaught exception\nElasticsearchException[failed to flush exporter bulks]\n at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)\n at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)\n at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)\n at java.lang.Thread.run(Thread.java:745)\n Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:\n[0]: index [.marvel-es-2015.12.09], type [index_recovery], id [AVGFhHRp_6d18XbNgIBf], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. 
Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[1]: index [.marvel-es-2015.12.09], type [indices_stats], id [AVGFhHRp_6d18XbNgIBg], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[2]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:1:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[3]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:1:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[4]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:4:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[5]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:4:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[6]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:3:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[7]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:3:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[8]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:2:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[9]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:2:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[10]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:0:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[11]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.04:0:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. 
Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[12]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:1:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[13]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:1:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[14]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:4:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[15]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:4:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[16]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:3:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[17]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:3:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[18]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:2:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[19]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:2:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[20]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:0:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[21]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.03:0:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[22]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.08:1:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. 
Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[23]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.08:1:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[24]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.08:4:p], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n[25]: index [.marvel-es-2015.12.09], type [shards], id [KZr_5qQeRbiay0_pdQRUjw:_na:topbeat-2015.12.08:4:r], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n```\n",
"created_at": "2015-12-11T04:30:37Z"
},
{
"body": "with your last log I can't see exceptions that indicate that recovery failed. Does the cluster come back up or do you get stuck in recoveries? The `failed to delete temp file /home/elkuser/elasticsearch/...` warn logs are annoying but harmless and fixed already. \n",
"created_at": "2015-12-11T10:28:51Z"
},
{
"body": "Here's the log where there's unhandled exception.\n\n```\n[2015-12-09 01:54:49,046][ERROR][marvel.agent ] [Maxam] background thread had an uncaught exception\nElasticsearchException[failed to flush exporter bulks]\n at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)\n at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)\n at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)\n at java.lang.Thread.run(Thread.java:745)\n Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:\n[0]: index [.marvel-es-2015.12.09], type [index_recovery], id [AVGFhHRp_6d18XbNgIBf], message [UnavailableShardsException[[.marvel-es-2015.12.09][0] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@1a525917]]\n```\n\nAlso, the cluster stuck in recovery for more than 4 hours, before I gave up and started removing indices giving problem. Basically, shards were in UNASSIGNED state and some were in INITIALIZING state.\n\nI can upload the whole log file which about 42MB if that would help (for entire day).\n",
"created_at": "2015-12-11T12:00:36Z"
},
{
"body": "> I can upload the whole log file which about 42MB if that would help (for entire day).\n\nplease\n\n> Also, the cluster stuck in recovery for more than 4 hours, before I gave up and started removing indices giving problem. Basically, shards were in UNASSIGNED state and some were in INITIALIZING state.\n\nI can't see why this is happening. so the logs would be awesome\n",
"created_at": "2015-12-11T12:56:30Z"
},
{
"body": "[dc20151208.tar.gz](https://github.com/elastic/elasticsearch/files/60052/dc20151208.tar.gz)\n[dc.tar.gz](https://github.com/elastic/elasticsearch/files/60050/dc.tar.gz)\n\ndc20151208 - was when the disk was not full but about to get full\ndc.tar.gz - is when the disk was full and es couldn't initialize all shards.\n",
"created_at": "2015-12-12T03:59:06Z"
},
{
"body": "the interesting exceptions are here:\n\n```\nCaused by: [packetbeat-2015.12.09][[packetbeat-2015.12.09][1]] EngineException[failed to recover from translog]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: TranslogCorruptedException[translog stream is corrupted, expected: 0x203a77f9, got: 0x2c22706f];\n at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:254)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:175)\n ... 11 more\nCaused by: TranslogCorruptedException[translog corruption while reading from stream]; nested: TranslogCorruptedException[translog stream is corrupted, expected: 0x203a77f9, got: 0x2c22706f];\n at org.elasticsearch.index.translog.Translog.readOperation(Translog.java:1636)\n at org.elasticsearch.index.translog.TranslogReader.read(TranslogReader.java:132)\n at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.readOperation(TranslogReader.java:299)\n at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.next(TranslogReader.java:290)\n at org.elasticsearch.index.translog.MultiSnapshot.next(MultiSnapshot.java:70)\n at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:240)\n ... 12 more\nCaused by: TranslogCorruptedException[translog stream is corrupted, expected: 0x203a77f9, got: 0x2c22706f]\n at org.elasticsearch.index.translog.Translog.verifyChecksum(Translog.java:1593)\n at org.elasticsearch.index.translog.Translog.readOperation(Translog.java:1626)\n ... 17 more\n```\n\nI still need to investigate what's going on but can you tell me what system you are running this on? Is this a local machine or a cloud machine? I am also curious what filesystem you are using?\n",
"created_at": "2015-12-12T22:48:27Z"
},
{
"body": "Its a standalone node. \nMachine Info: 16GB DDR3, 1TB HDD , Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz. \nFileSystem: xfs\nCentOS: 7.0 64Bit\nJava : \"1.8.0_65\"\n",
"created_at": "2015-12-14T06:07:28Z"
},
{
"body": "alright I think I found the issue here @kpcool your logfiles brought the conclusion thanks you very much. This is actually a serious issue with our transaction log which basically corrupts itself when you hit a disk-full exception. I will keep you posted on this issue. Thanks for baring with me and helping to figure this out.\n",
"created_at": "2015-12-14T09:08:18Z"
},
{
"body": "What happens here is that when we hit a disk full expection while we are flushing the transaction log we might be able to write a portion of the data but we will try to flush the entire data block over and over again. Yet, in the most of the scenarios the disk-full happens during a merge and that merge will fail and release disk-space. Once that is done we might be able to flush the translog again but we already wrote big chunks of data to disk which are now 1. corrupted and 2. treated as non-existing since our internal offsets haven't advanced. \n",
"created_at": "2015-12-14T09:12:30Z"
},
{
"body": "Encountered this issue on one of the older clusters after disk full issue. Is there any way to recover the index? Losing something from the translog is not a big issue for me.\n\nHere is the expection while starting the node:\n\n```\n[2016-07-29 07:48:15,265][WARN ][indices.cluster ] [Recorder] [[myindex][0]] marking and sending shard failed due to [failed recovery]\n[myindex][[myindex][0]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: TranslogCorruptedException[translog stream is corrupted, expected: 0x88b7b1d6, got: 0x2c202266];\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [myindex][[myindex][0]] EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: TranslogCorruptedException[translog stream is corrupted, expected: 0x88b7b1d6, got: 0x2c202266];\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:177)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1509)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1493)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:966)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:938)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)\n ... 5 more\nCaused by: [myindex][[myindex][0]] EngineException[failed to recover from translog]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: TranslogCorruptedException[translog stream is corrupted, expected: 0x88b7b1d6, got: 0x2c202266];\n at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:240)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:174)\n ... 
11 more\nCaused by: TranslogCorruptedException[translog corruption while reading from stream]; nested: TranslogCorruptedException[translog stream is corrupted, expected: 0x88b7b1d6, got: 0x2c202266];\n at org.elasticsearch.index.translog.Translog.readOperation(Translog.java:1717)\n at org.elasticsearch.index.translog.TranslogReader.read(TranslogReader.java:132)\n at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.readOperation(TranslogReader.java:296)\n at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.next(TranslogReader.java:287)\n at org.elasticsearch.index.translog.MultiSnapshot.next(MultiSnapshot.java:70)\n at org.elasticsearch.index.shard.TranslogRecoveryPerformer.recoveryFromSnapshot(TranslogRecoveryPerformer.java:105)\n at org.elasticsearch.index.shard.IndexShard$1.recoveryFromSnapshot(IndexShard.java:1578)\n at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:238)\n ... 12 more\nCaused by: TranslogCorruptedException[translog stream is corrupted, expected: 0x88b7b1d6, got: 0x2c202266]\n at org.elasticsearch.index.translog.Translog.verifyChecksum(Translog.java:1675)\n at org.elasticsearch.index.translog.Translog.readOperation(Translog.java:1707)\n ... 19 more\n```\n\nThese are the contents of the index translog directory:\n\n```\nelasticsearch/nodes/0/indices/myindex/0/translog# ls\ntranslog-1445516620591.ckp translog-1445516623424.ckp translog-1445516631010.tlog translog-1445516631019.tlog translog-1445516631028.tlog translog-1445516631037.tlog translog-1445516631046.tlog\ntranslog-1445516620592.ckp translog-1445516624605.ckp translog-1445516631011.ckp translog-1445516631020.ckp translog-1445516631029.ckp translog-1445516631038.ckp translog-1445516631047.ckp\ntranslog-1445516620593.ckp translog-1445516624606.ckp translog-1445516631011.tlog translog-1445516631020.tlog translog-1445516631029.tlog translog-1445516631038.tlog translog-1445516631047.tlog\ntranslog-1445516620594.ckp translog-1445516624607.ckp translog-1445516631012.ckp translog-1445516631021.ckp translog-1445516631030.ckp translog-1445516631039.ckp translog-1445516631048.ckp\ntranslog-1445516620595.ckp translog-1445516624608.ckp translog-1445516631012.tlog translog-1445516631021.tlog translog-1445516631030.tlog translog-1445516631039.tlog translog-1445516631048.tlog\ntranslog-1445516620596.ckp translog-1445516624609.ckp translog-1445516631013.ckp translog-1445516631022.ckp translog-1445516631031.ckp translog-1445516631040.ckp translog-1445516631049.ckp\ntranslog-1445516620790.ckp translog-1445516624610.ckp translog-1445516631013.tlog translog-1445516631022.tlog translog-1445516631031.tlog translog-1445516631040.tlog translog-1445516631049.tlog\ntranslog-1445516621043.ckp translog-1445516624611.ckp translog-1445516631014.ckp translog-1445516631023.ckp translog-1445516631032.ckp translog-1445516631041.ckp translog-1445516631050.ckp\ntranslog-1445516621044.ckp translog-1445516624612.ckp translog-1445516631014.tlog translog-1445516631023.tlog translog-1445516631032.tlog translog-1445516631041.tlog translog-1445516631050.tlog\ntranslog-1445516621237.ckp translog-1445516625986.ckp translog-1445516631015.ckp translog-1445516631024.ckp translog-1445516631033.ckp translog-1445516631042.ckp translog-1445516631051.ckp\ntranslog-1445516621238.ckp translog-1445516628096.ckp translog-1445516631015.tlog translog-1445516631024.tlog translog-1445516631033.tlog translog-1445516631042.tlog translog-1445516631051.tlog\ntranslog-1445516621239.ckp translog-1445516628097.ckp 
translog-1445516631016.ckp translog-1445516631025.ckp translog-1445516631034.ckp translog-1445516631043.ckp translog-1445516631052.ckp\ntranslog-1445516621240.ckp translog-1445516628624.ckp translog-1445516631016.tlog translog-1445516631025.tlog translog-1445516631034.tlog translog-1445516631043.tlog translog-1445516631052.tlog\ntranslog-1445516621380.ckp translog-1445516628625.ckp translog-1445516631017.ckp translog-1445516631026.ckp translog-1445516631035.ckp translog-1445516631044.ckp translog-1445516631053.tlog\ntranslog-1445516623082.ckp translog-1445516629747.ckp translog-1445516631017.tlog translog-1445516631026.tlog translog-1445516631035.tlog translog-1445516631044.tlog translog.ckp\ntranslog-1445516623417.ckp translog-1445516631009.ckp translog-1445516631018.ckp translog-1445516631027.ckp translog-1445516631036.ckp translog-1445516631045.ckp\ntranslog-1445516623418.ckp translog-1445516631009.tlog translog-1445516631018.tlog translog-1445516631027.tlog translog-1445516631036.tlog translog-1445516631045.tlog\ntranslog-1445516623423.ckp translog-1445516631010.ckp translog-1445516631019.ckp translog-1445516631028.ckp translog-1445516631037.ckp translog-1445516631046.ckp\n```\n\n@s1monw Is there any theoretical chance to fix this? Maybe removing part of the translog a persuading elasticsearch it is complete?\n",
"created_at": "2016-07-29T08:02:01Z"
},
{
"body": "FYI I saw this same error in an Elasticsearch 2.3.3 environment that ran out of disk space. I was surprised I had to restart Elasticsearch in order to recover. \r\nI was hoping this bug was fixed by #15420 but looks like that fix *is* in 2.3.3, so there must be additional bugs. \r\nHopefully latest Elasticsearch is automatically recovers from full disk problems.",
"created_at": "2017-12-12T20:03:50Z"
}
],
"number": 15333,
"title": "Failure to recover shards after disk is full"
}
|
{
"body": "Today we are super lenient (how could I missed that for f**k sake) with failing\n/ closing the translog writer when we hit an exception. It's actually worse, we allow\nto further write to it and don't care what has been already written to disk and what hasn't.\nWe keep the buffer in memory and try to write it again on the next operation.\n\nWhen we hit a disk-full expcetion due to for instance a big merge we are likely adding document to the\ntranslog but fail to write them to disk. Once the merge failed and freed up it's diskspace (note this is\na small window when concurrently indexing and failing the shard due to out of space exceptions) we will\nallow in-flight operations to add to the translog and then once we fail the shard fsync it. These operations\nare written to disk and fsynced which is fine but the previous buffer flush might have written some bytes\nto disk which are not corrupting the translog. That wouldn't be an issue if we prevented the fsync.\n\nCloses #15333\n",
"number": 15420,
"review_comments": [
{
"body": "we need to increment the operationCounter here as well (probably means our tests are not strong enough to catch docs that are bigger then the buffer..)\n",
"created_at": "2015-12-14T12:53:42Z"
},
{
"body": "might as well synchonize the method now\n",
"created_at": "2015-12-14T12:58:19Z"
},
{
"body": "unless under lock, I don't think this buys us much(and we check in the flush method)?. we increment the channel reference here, so that might be enough as well.\n",
"created_at": "2015-12-14T13:04:14Z"
},
{
"body": "should we move this to the write lock? I'm worried now we will write an offsetToSync that is not correlated to the operation counter. \n",
"created_at": "2015-12-14T13:08:25Z"
},
{
"body": "only set this after the checkpoint was written?\n",
"created_at": "2015-12-14T13:11:29Z"
},
{
"body": "can we add some java docs about this?\n",
"created_at": "2015-12-14T13:15:11Z"
},
{
"body": "nice!\n",
"created_at": "2015-12-14T13:16:43Z"
},
{
"body": "when does this one happen?\n",
"created_at": "2015-12-14T13:20:13Z"
},
{
"body": "I added a comment why this is there\n",
"created_at": "2015-12-14T13:20:14Z"
},
{
"body": "we shouldn't be here, right? knowing that we failed before, we should have a tragic exception in the queue? feels like I miss something.\n",
"created_at": "2015-12-14T13:21:54Z"
},
{
"body": "also, shouldn't this be opsAdded?\n",
"created_at": "2015-12-14T13:28:19Z"
},
{
"body": "we catch IOExceptions in #add but not in sync I will add a comment\n",
"created_at": "2015-12-14T13:30:47Z"
},
{
"body": "yeah we wont' get there - leftover\n",
"created_at": "2015-12-14T13:31:05Z"
},
{
"body": "in that case can look at the cause and it should have no space left on device, right?\n",
"created_at": "2015-12-14T13:59:23Z"
},
{
"body": "can we say this was a translog tragic event?\n",
"created_at": "2015-12-14T14:17:58Z"
},
{
"body": "Is the incoming throwable really allowed to be null sometimes? Seems dangerous because then we could close ourselves here but `.getTragicException()` would return null, as if no tragedy occurred?\n",
"created_at": "2015-12-14T14:55:47Z"
},
{
"body": "Why do we need to assign to local `offset` and `opsCount` here? Since we hold the write lock, `writtenOffset` cannot change between the call to `checkpoint` and when we assign it to `lastSyncedOffset`?\n",
"created_at": "2015-12-14T14:58:49Z"
},
{
"body": "Can we change the message to make it clear it's \"faked\" by this test? Or maybe just use Lucene's `FakeIOException`? (Just for future sanity when debugging test failures...).\n",
"created_at": "2015-12-14T15:00:16Z"
},
{
"body": "I just did this for consistency with the other impls.\n",
"created_at": "2015-12-14T15:04:10Z"
},
{
"body": "sure thing\n",
"created_at": "2015-12-14T15:04:25Z"
},
{
"body": "Hmm should we catch `Throwable` instead of `IOException` like we do in the other tragic cases...? But I guess then we'd need to exclude `AlreadyClosedException` (that's not tragic)?\n",
"created_at": "2015-12-14T15:05:22Z"
},
{
"body": "oh hmm that's a leftover - good catch\n",
"created_at": "2015-12-14T15:05:40Z"
},
{
"body": "Hmm that's no good, we should fix the other impls too. Code that looks like it's doing something important for a reason, but in fact is pointless, is dangerous :) Or is it not pointless?\n",
"created_at": "2015-12-14T15:15:26Z"
}
],
"title": "Fail and close translog hard if writing to disk fails"
}
|
{
"commits": [
{
"message": "Fail and close translog hard if writing to disk fails\n\nToday we are super lenient (how could I missed that for f**k sake) with failing\n/ closing the translog writer when we hit an exception. It's actually worse, we allow\nto further write to it and don't care what has been already written to disk and what hasn't.\nWe keep the buffer in memory and try to write it again on the next operation.\n\nWhen we hit a disk-full expcetion due to for instance a big merge we are likely adding document to the\ntranslog but fail to write them to disk. Once the merge failed and freed up it's diskspace (note this is\na small window when concurrently indexing and failing the shard due to out of space exceptions) we will\nallow in-flight operations to add to the translog and then once we fail the shard fsync it. These operations\nare written to disk and fsynced which is fine but the previous buffer flush might have written some bytes\nto disk which are not corrupting the translog. That wouldn't be an issue if we prevented the fsync.\n\nCloses #15333"
},
{
"message": "apply feedback from @bleskes"
},
{
"message": "Expose tragic event to translog, close translog once we hit a tragic even and fail engine if we hit one too"
},
{
"message": "apply feedback from @mikemccand"
},
{
"message": "simplify code and use members directly"
}
],
"files": [
{
"diff": "@@ -781,10 +781,14 @@ protected boolean maybeFailEngine(String source, Throwable t) {\n // we need to fail the engine. it might have already been failed before\n // but we are double-checking it's failed and closed\n if (indexWriter.isOpen() == false && indexWriter.getTragicException() != null) {\n- failEngine(\"already closed by tragic event\", indexWriter.getTragicException());\n+ failEngine(\"already closed by tragic event on the index writer\", indexWriter.getTragicException());\n+ } else if (translog.isOpen() == false && translog.getTragicException() != null) {\n+ failEngine(\"already closed by tragic event on the translog\", translog.getTragicException());\n }\n return true;\n- } else if (t != null && indexWriter.isOpen() == false && indexWriter.getTragicException() == t) {\n+ } else if (t != null &&\n+ ((indexWriter.isOpen() == false && indexWriter.getTragicException() == t)\n+ || (translog.isOpen() == false && translog.getTragicException() == t))) {\n // this spot on - we are handling the tragic event exception here so we have to fail the engine\n // right away\n failEngine(source, t);",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -48,33 +48,45 @@ public BufferingTranslogWriter(ShardId shardId, long generation, ChannelReferenc\n public Translog.Location add(BytesReference data) throws IOException {\n try (ReleasableLock lock = writeLock.acquire()) {\n ensureOpen();\n- operationCounter++;\n final long offset = totalOffset;\n if (data.length() >= buffer.length) {\n flush();\n // we use the channel to write, since on windows, writing to the RAF might not be reflected\n // when reading through the channel\n- data.writeTo(channel);\n+ try {\n+ data.writeTo(channel);\n+ } catch (Throwable ex) {\n+ closeWithTragicEvent(ex);\n+ throw ex;\n+ }\n writtenOffset += data.length();\n totalOffset += data.length();\n- return new Translog.Location(generation, offset, data.length());\n- }\n- if (data.length() > buffer.length - bufferCount) {\n- flush();\n+ } else {\n+ if (data.length() > buffer.length - bufferCount) {\n+ flush();\n+ }\n+ data.writeTo(bufferOs);\n+ totalOffset += data.length();\n }\n- data.writeTo(bufferOs);\n- totalOffset += data.length();\n+ operationCounter++;\n return new Translog.Location(generation, offset, data.length());\n }\n }\n \n protected final void flush() throws IOException {\n assert writeLock.isHeldByCurrentThread();\n if (bufferCount > 0) {\n+ ensureOpen();\n // we use the channel to write, since on windows, writing to the RAF might not be reflected\n // when reading through the channel\n- Channels.writeToChannel(buffer, 0, bufferCount, channel);\n- writtenOffset += bufferCount;\n+ final int bufferSize = bufferCount;\n+ try {\n+ Channels.writeToChannel(buffer, 0, bufferSize, channel);\n+ } catch (Throwable ex) {\n+ closeWithTragicEvent(ex);\n+ throw ex;\n+ }\n+ writtenOffset += bufferSize;\n bufferCount = 0;\n }\n }\n@@ -102,20 +114,28 @@ public boolean syncNeeded() {\n }\n \n @Override\n- public void sync() throws IOException {\n- if (!syncNeeded()) {\n- return;\n- }\n- synchronized (this) {\n+ public synchronized void sync() throws IOException {\n+ if (syncNeeded()) {\n+ ensureOpen(); // this call gives a better exception that the incRef if we are closed by a tragic event\n channelReference.incRef();\n try {\n+ final long offsetToSync;\n+ final int opsCounter;\n try (ReleasableLock lock = writeLock.acquire()) {\n flush();\n- lastSyncedOffset = totalOffset;\n+ offsetToSync = totalOffset;\n+ opsCounter = operationCounter;\n }\n // we can do this outside of the write lock but we have to protect from\n // concurrent syncs\n- checkpoint(lastSyncedOffset, operationCounter, channelReference);\n+ ensureOpen(); // just for kicks - the checkpoint happens or not either way\n+ try {\n+ checkpoint(offsetToSync, opsCounter, channelReference);\n+ } catch (Throwable ex) {\n+ closeWithTragicEvent(ex);\n+ throw ex;\n+ }\n+ lastSyncedOffset = offsetToSync;\n } finally {\n channelReference.decRef();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/BufferingTranslogWriter.java",
"status": "modified"
},
{
"diff": "@@ -115,7 +115,7 @@ public class Translog extends AbstractIndexShardComponent implements IndexShardC\n private final Path location;\n private TranslogWriter current;\n private volatile ImmutableTranslogReader currentCommittingTranslog;\n- private long lastCommittedTranslogFileGeneration = -1; // -1 is safe as it will not cause an translog deletion.\n+ private volatile long lastCommittedTranslogFileGeneration = -1; // -1 is safe as it will not cause an translog deletion.\n private final AtomicBoolean closed = new AtomicBoolean();\n private final TranslogConfig config;\n private final String translogUUID;\n@@ -279,7 +279,8 @@ public void updateBuffer(ByteSizeValue bufferSize) {\n }\n }\n \n- boolean isOpen() {\n+ /** Returns {@code true} if this {@code Translog} is still open. */\n+ public boolean isOpen() {\n return closed.get() == false;\n }\n \n@@ -288,10 +289,14 @@ public void close() throws IOException {\n if (closed.compareAndSet(false, true)) {\n try (ReleasableLock lock = writeLock.acquire()) {\n try {\n- IOUtils.close(current, currentCommittingTranslog);\n+ current.sync();\n } finally {\n- IOUtils.close(recoveredTranslogs);\n- recoveredTranslogs.clear();\n+ try {\n+ IOUtils.close(current, currentCommittingTranslog);\n+ } finally {\n+ IOUtils.close(recoveredTranslogs);\n+ recoveredTranslogs.clear();\n+ }\n }\n } finally {\n FutureUtils.cancel(syncScheduler);\n@@ -354,7 +359,7 @@ public long sizeInBytes() {\n TranslogWriter createWriter(long fileGeneration) throws IOException {\n TranslogWriter newFile;\n try {\n- newFile = TranslogWriter.create(config.getType(), shardId, translogUUID, fileGeneration, location.resolve(getFilename(fileGeneration)), new OnCloseRunnable(), config.getBufferSize());\n+ newFile = TranslogWriter.create(config.getType(), shardId, translogUUID, fileGeneration, location.resolve(getFilename(fileGeneration)), new OnCloseRunnable(), config.getBufferSize(), getChannelFactory());\n } catch (IOException e) {\n throw new TranslogException(shardId, \"failed to create new translog file\", e);\n }\n@@ -393,7 +398,7 @@ public Translog.Operation read(Location location) {\n * @see Index\n * @see org.elasticsearch.index.translog.Translog.Delete\n */\n- public Location add(Operation operation) throws TranslogException {\n+ public Location add(Operation operation) throws IOException {\n final ReleasableBytesStreamOutput out = new ReleasableBytesStreamOutput(bigArrays);\n try {\n final BufferedChecksumStreamOutput checksumStreamOutput = new BufferedChecksumStreamOutput(out);\n@@ -415,7 +420,14 @@ public Location add(Operation operation) throws TranslogException {\n assert current.assertBytesAtLocation(location, bytes);\n return location;\n }\n- } catch (AlreadyClosedException ex) {\n+ } catch (AlreadyClosedException | IOException ex) {\n+ if (current.getTragicException() != null) {\n+ try {\n+ close();\n+ } catch (Exception inner) {\n+ ex.addSuppressed(inner);\n+ }\n+ }\n throw ex;\n } catch (Throwable e) {\n throw new TranslogException(shardId, \"Failed to write operation [\" + operation + \"]\", e);\n@@ -429,6 +441,7 @@ public Location add(Operation operation) throws TranslogException {\n * Snapshots are fixed in time and will not be updated with future operations.\n */\n public Snapshot newSnapshot() {\n+ ensureOpen();\n try (ReleasableLock lock = readLock.acquire()) {\n ArrayList<TranslogReader> toOpen = new ArrayList<>();\n toOpen.addAll(recoveredTranslogs);\n@@ -493,6 +506,15 @@ public void sync() throws IOException {\n if (closed.get() == false) {\n 
current.sync();\n }\n+ } catch (AlreadyClosedException | IOException ex) {\n+ if (current.getTragicException() != null) {\n+ try {\n+ close();\n+ } catch (Exception inner) {\n+ ex.addSuppressed(inner);\n+ }\n+ }\n+ throw ex;\n }\n }\n \n@@ -520,6 +542,7 @@ static String getCommitCheckpointFileName(long generation) {\n public boolean ensureSynced(Location location) throws IOException {\n try (ReleasableLock lock = readLock.acquire()) {\n if (location.generation == current.generation) { // if we have a new one it's already synced\n+ ensureOpen();\n return current.syncUpTo(location.translogLocation + location.size);\n }\n }\n@@ -548,31 +571,29 @@ public TranslogConfig getConfig() {\n private final class OnCloseRunnable implements Callback<ChannelReference> {\n @Override\n public void handle(ChannelReference channelReference) {\n- try (ReleasableLock lock = writeLock.acquire()) {\n- if (isReferencedGeneration(channelReference.getGeneration()) == false) {\n- Path translogPath = channelReference.getPath();\n- assert channelReference.getPath().getParent().equals(location) : \"translog files must be in the location folder: \" + location + \" but was: \" + translogPath;\n- // if the given translogPath is not the current we can safely delete the file since all references are released\n- logger.trace(\"delete translog file - not referenced and not current anymore {}\", translogPath);\n- IOUtils.deleteFilesIgnoringExceptions(translogPath);\n- IOUtils.deleteFilesIgnoringExceptions(translogPath.resolveSibling(getCommitCheckpointFileName(channelReference.getGeneration())));\n+ if (isReferencedGeneration(channelReference.getGeneration()) == false) {\n+ Path translogPath = channelReference.getPath();\n+ assert channelReference.getPath().getParent().equals(location) : \"translog files must be in the location folder: \" + location + \" but was: \" + translogPath;\n+ // if the given translogPath is not the current we can safely delete the file since all references are released\n+ logger.trace(\"delete translog file - not referenced and not current anymore {}\", translogPath);\n+ IOUtils.deleteFilesIgnoringExceptions(translogPath);\n+ IOUtils.deleteFilesIgnoringExceptions(translogPath.resolveSibling(getCommitCheckpointFileName(channelReference.getGeneration())));\n \n- }\n- try (DirectoryStream<Path> stream = Files.newDirectoryStream(location)) {\n- for (Path path : stream) {\n- Matcher matcher = PARSE_STRICT_ID_PATTERN.matcher(path.getFileName().toString());\n- if (matcher.matches()) {\n- long generation = Long.parseLong(matcher.group(1));\n- if (isReferencedGeneration(generation) == false) {\n- logger.trace(\"delete translog file - not referenced and not current anymore {}\", path);\n- IOUtils.deleteFilesIgnoringExceptions(path);\n- IOUtils.deleteFilesIgnoringExceptions(path.resolveSibling(getCommitCheckpointFileName(channelReference.getGeneration())));\n- }\n+ }\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(location)) {\n+ for (Path path : stream) {\n+ Matcher matcher = PARSE_STRICT_ID_PATTERN.matcher(path.getFileName().toString());\n+ if (matcher.matches()) {\n+ long generation = Long.parseLong(matcher.group(1));\n+ if (isReferencedGeneration(generation) == false) {\n+ logger.trace(\"delete translog file - not referenced and not current anymore {}\", path);\n+ IOUtils.deleteFilesIgnoringExceptions(path);\n+ IOUtils.deleteFilesIgnoringExceptions(path.resolveSibling(getCommitCheckpointFileName(channelReference.getGeneration())));\n }\n }\n- } catch (IOException e) {\n- logger.warn(\"failed 
to delete unreferenced translog files\", e);\n }\n+ } catch (IOException e) {\n+ logger.warn(\"failed to delete unreferenced translog files\", e);\n }\n }\n }\n@@ -1294,6 +1315,7 @@ public void prepareCommit() throws IOException {\n throw new IllegalStateException(\"already committing a translog with generation: \" + currentCommittingTranslog.getGeneration());\n }\n final TranslogWriter oldCurrent = current;\n+ oldCurrent.ensureOpen();\n oldCurrent.sync();\n currentCommittingTranslog = current.immutableReader();\n Path checkpoint = location.resolve(CHECKPOINT_FILE_NAME);\n@@ -1389,7 +1411,7 @@ long getFirstOperationPosition() { // for testing\n \n private void ensureOpen() {\n if (closed.get()) {\n- throw new AlreadyClosedException(\"translog is already closed\");\n+ throw new AlreadyClosedException(\"translog is already closed\", current.getTragicException());\n }\n }\n \n@@ -1400,4 +1422,15 @@ int getNumOpenViews() {\n return outstandingViews.size();\n }\n \n+ TranslogWriter.ChannelFactory getChannelFactory() {\n+ return TranslogWriter.ChannelFactory.DEFAULT;\n+ }\n+\n+ /** If this {@code Translog} was closed as a side-effect of a tragic exception,\n+ * e.g. disk full while flushing a new segment, this returns the root cause exception.\n+ * Otherwise (no tragic exception has occurred) it returns null. */\n+ public Throwable getTragicException() {\n+ return current.getTragicException();\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java",
"status": "modified"
},
{
"diff": "@@ -140,16 +140,16 @@ protected Translog.Operation read(BufferedChecksumStreamInput inStream) throws I\n @Override\n public void close() throws IOException {\n if (closed.compareAndSet(false, true)) {\n- doClose();\n+ channelReference.decRef();\n }\n }\n \n- protected void doClose() throws IOException {\n- channelReference.decRef();\n+ protected final boolean isClosed() {\n+ return closed.get();\n }\n \n protected void ensureOpen() {\n- if (closed.get()) {\n+ if (isClosed()) {\n throw new AlreadyClosedException(\"translog [\" + getGeneration() + \"] is already closed\");\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/TranslogReader.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.translog;\n \n import org.apache.lucene.codecs.CodecUtil;\n+import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.store.OutputStreamDataOutput;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n@@ -54,6 +55,9 @@ public class TranslogWriter extends TranslogReader {\n protected volatile int operationCounter;\n /* the offset in bytes written to the file */\n protected volatile long writtenOffset;\n+ /* if we hit an exception that we can't recover from we assign it to this var and ship it with every AlreadyClosedException we throw */\n+ private volatile Throwable tragedy;\n+\n \n public TranslogWriter(ShardId shardId, long generation, ChannelReference channelReference) throws IOException {\n super(generation, channelReference, channelReference.getChannel().position());\n@@ -65,10 +69,10 @@ public TranslogWriter(ShardId shardId, long generation, ChannelReference channel\n this.lastSyncedOffset = channelReference.getChannel().position();;\n }\n \n- public static TranslogWriter create(Type type, ShardId shardId, String translogUUID, long fileGeneration, Path file, Callback<ChannelReference> onClose, int bufferSize) throws IOException {\n+ public static TranslogWriter create(Type type, ShardId shardId, String translogUUID, long fileGeneration, Path file, Callback<ChannelReference> onClose, int bufferSize, ChannelFactory channelFactory) throws IOException {\n final BytesRef ref = new BytesRef(translogUUID);\n final int headerLength = CodecUtil.headerLength(TRANSLOG_CODEC) + ref.length + RamUsageEstimator.NUM_BYTES_INT;\n- final FileChannel channel = FileChannel.open(file, StandardOpenOption.WRITE, StandardOpenOption.READ, StandardOpenOption.CREATE_NEW);\n+ final FileChannel channel = channelFactory.open(file);\n try {\n // This OutputStreamDataOutput is intentionally not closed because\n // closing it will close the FileChannel\n@@ -90,6 +94,12 @@ public static TranslogWriter create(Type type, ShardId shardId, String translogU\n throw throwable;\n }\n }\n+ /** If this {@code TranslogWriter} was closed as a side-effect of a tragic exception,\n+ * e.g. disk full while flushing a new segment, this returns the root cause exception.\n+ * Otherwise (no tragic exception has occurred) it returns null. 
*/\n+ public Throwable getTragicException() {\n+ return tragedy;\n+ }\n \n public enum Type {\n \n@@ -118,6 +128,16 @@ public static Type fromString(String type) {\n }\n }\n \n+ protected final void closeWithTragicEvent(Throwable throwable) throws IOException {\n+ try (ReleasableLock lock = writeLock.acquire()) {\n+ if (tragedy == null) {\n+ tragedy = throwable;\n+ } else {\n+ tragedy.addSuppressed(throwable);\n+ }\n+ close();\n+ }\n+ }\n \n /**\n * add the given bytes to the translog and return the location they were written at\n@@ -127,9 +147,14 @@ public Translog.Location add(BytesReference data) throws IOException {\n try (ReleasableLock lock = writeLock.acquire()) {\n ensureOpen();\n position = writtenOffset;\n- data.writeTo(channel);\n+ try {\n+ data.writeTo(channel);\n+ } catch (Throwable e) {\n+ closeWithTragicEvent(e);\n+ throw e;\n+ }\n writtenOffset = writtenOffset + data.length();\n- operationCounter = operationCounter + 1;\n+ operationCounter++;;\n }\n return new Translog.Location(generation, position, data.length());\n }\n@@ -143,12 +168,13 @@ public void updateBufferSize(int bufferSize) throws TranslogException {\n /**\n * write all buffered ops to disk and fsync file\n */\n- public void sync() throws IOException {\n+ public synchronized void sync() throws IOException { // synchronized to ensure only one sync happens a time\n // check if we really need to sync here...\n if (syncNeeded()) {\n try (ReleasableLock lock = writeLock.acquire()) {\n+ ensureOpen();\n+ checkpoint(writtenOffset, operationCounter, channelReference);\n lastSyncedOffset = writtenOffset;\n- checkpoint(lastSyncedOffset, operationCounter, channelReference);\n }\n }\n }\n@@ -262,15 +288,6 @@ public boolean syncUpTo(long offset) throws IOException {\n return false;\n }\n \n- @Override\n- protected final void doClose() throws IOException {\n- try (ReleasableLock lock = writeLock.acquire()) {\n- sync();\n- } finally {\n- super.doClose();\n- }\n- }\n-\n @Override\n protected void readBytes(ByteBuffer buffer, long position) throws IOException {\n try (ReleasableLock lock = readLock.acquire()) {\n@@ -288,4 +305,20 @@ private static void writeCheckpoint(long syncPosition, int numOperations, Path t\n Checkpoint checkpoint = new Checkpoint(syncPosition, numOperations, generation);\n Checkpoint.write(checkpointFile, checkpoint, options);\n }\n+\n+ static class ChannelFactory {\n+\n+ static final ChannelFactory DEFAULT = new ChannelFactory();\n+\n+ // only for testing until we have a disk-full FileSystemt\n+ public FileChannel open(Path file) throws IOException {\n+ return FileChannel.open(file, StandardOpenOption.WRITE, StandardOpenOption.READ, StandardOpenOption.CREATE_NEW);\n+ }\n+ }\n+\n+ protected final void ensureOpen() {\n+ if (isClosed()) {\n+ throw new AlreadyClosedException(\"translog [\" + getGeneration() + \"] is already closed\", tragedy);\n+ }\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java",
"status": "modified"
},
{
"diff": "@@ -34,13 +34,12 @@\n public class BufferedTranslogTests extends TranslogTests {\n \n @Override\n- protected Translog create(Path path) throws IOException {\n+ protected TranslogConfig getTranslogConfig(Path path) {\n Settings build = Settings.settingsBuilder()\n .put(\"index.translog.fs.type\", TranslogWriter.Type.BUFFERED.name())\n .put(\"index.translog.fs.buffer_size\", 10 + randomInt(128 * 1024), ByteSizeUnit.BYTES)\n .put(IndexMetaData.SETTING_VERSION_CREATED, org.elasticsearch.Version.CURRENT)\n .build();\n- TranslogConfig translogConfig = new TranslogConfig(shardId, path, IndexSettingsModule.newIndexSettings(shardId.index(), build), Translog.Durabilty.REQUEST, BigArrays.NON_RECYCLING_INSTANCE, null);\n- return new Translog(translogConfig);\n+ return new TranslogConfig(shardId, path, IndexSettingsModule.newIndexSettings(shardId.index(), build), Translog.Durabilty.REQUEST, BigArrays.NON_RECYCLING_INSTANCE, null);\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/translog/BufferedTranslogTests.java",
"status": "modified"
},
{
"diff": "@@ -22,9 +22,11 @@\n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import org.apache.lucene.codecs.CodecUtil;\n import org.apache.lucene.index.Term;\n+import org.apache.lucene.mockfile.FilterFileChannel;\n import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.store.ByteArrayDataOutput;\n import org.apache.lucene.util.IOUtils;\n+import org.apache.lucene.util.LineFileDocs;\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -110,16 +112,19 @@ public void tearDown() throws Exception {\n }\n }\n \n- protected Translog create(Path path) throws IOException {\n+ private Translog create(Path path) throws IOException {\n+ return new Translog(getTranslogConfig(path));\n+ }\n+\n+ protected TranslogConfig getTranslogConfig(Path path) {\n Settings build = Settings.settingsBuilder()\n .put(TranslogConfig.INDEX_TRANSLOG_FS_TYPE, TranslogWriter.Type.SIMPLE.name())\n .put(IndexMetaData.SETTING_VERSION_CREATED, org.elasticsearch.Version.CURRENT)\n .build();\n- TranslogConfig translogConfig = new TranslogConfig(shardId, path, IndexSettingsModule.newIndexSettings(shardId.index(), build), Translog.Durabilty.REQUEST, BigArrays.NON_RECYCLING_INSTANCE, null);\n- return new Translog(translogConfig);\n+ return new TranslogConfig(shardId, path, IndexSettingsModule.newIndexSettings(shardId.index(), build), Translog.Durabilty.REQUEST, BigArrays.NON_RECYCLING_INSTANCE, null);\n }\n \n- protected void addToTranslogAndList(Translog translog, ArrayList<Translog.Operation> list, Translog.Operation op) {\n+ protected void addToTranslogAndList(Translog translog, ArrayList<Translog.Operation> list, Translog.Operation op) throws IOException {\n list.add(op);\n translog.add(op);\n }\n@@ -330,7 +335,7 @@ public void testStats() throws IOException {\n }\n }\n \n- public void testSnapshot() {\n+ public void testSnapshot() throws IOException {\n ArrayList<Translog.Operation> ops = new ArrayList<>();\n Translog.Snapshot snapshot = translog.newSnapshot();\n assertThat(snapshot, SnapshotMatchers.size(0));\n@@ -389,7 +394,7 @@ public void testSnapshotOnClosedTranslog() throws IOException {\n Translog.Snapshot snapshot = translog.newSnapshot();\n fail(\"translog is closed\");\n } catch (AlreadyClosedException ex) {\n- assertThat(ex.getMessage(), containsString(\"translog-1.tlog is already closed can't increment\"));\n+ assertEquals(ex.getMessage(), \"translog is already closed\");\n }\n }\n \n@@ -634,7 +639,7 @@ public void testConcurrentWriteViewsAndSnapshot() throws Throwable {\n final String threadId = \"writer_\" + i;\n writers[i] = new Thread(new AbstractRunnable() {\n @Override\n- public void doRun() throws BrokenBarrierException, InterruptedException {\n+ public void doRun() throws BrokenBarrierException, InterruptedException, IOException {\n barrier.await();\n int counter = 0;\n while (run.get()) {\n@@ -1279,4 +1284,122 @@ public void run() {\n }\n }\n }\n+\n+ public void testFailFlush() throws IOException {\n+ Path tempDir = createTempDir();\n+ final AtomicBoolean simulateDiskFull = new AtomicBoolean();\n+ TranslogConfig config = getTranslogConfig(tempDir);\n+ Translog translog = new Translog(config) {\n+ @Override\n+ TranslogWriter.ChannelFactory getChannelFactory() {\n+ final TranslogWriter.ChannelFactory factory = super.getChannelFactory();\n+\n+ return new TranslogWriter.ChannelFactory() {\n+ @Override\n+ public FileChannel open(Path file) throws IOException 
{\n+ FileChannel channel = factory.open(file);\n+ return new FilterFileChannel(channel) {\n+\n+ @Override\n+ public int write(ByteBuffer src) throws IOException {\n+ if (simulateDiskFull.get()) {\n+ if (src.limit() > 1) {\n+ final int pos = src.position();\n+ final int limit = src.limit();\n+ src.limit(limit / 2);\n+ super.write(src);\n+ src.position(pos);\n+ src.limit(limit);\n+ throw new IOException(\"__FAKE__ no space left on device\");\n+ }\n+ }\n+ return super.write(src);\n+ }\n+ };\n+ }\n+ };\n+ }\n+ };\n+\n+ List<Translog.Location> locations = new ArrayList<>();\n+ int opsSynced = 0;\n+ int opsAdded = 0;\n+ boolean failed = false;\n+ while(failed == false) {\n+ try {\n+ locations.add(translog.add(new Translog.Index(\"test\", \"\" + opsSynced, Integer.toString(opsSynced).getBytes(Charset.forName(\"UTF-8\")))));\n+ opsAdded++;\n+ translog.sync();\n+ opsSynced++;\n+ } catch (IOException ex) {\n+ failed = true;\n+ assertFalse(translog.isOpen());\n+ assertEquals(\"__FAKE__ no space left on device\", ex.getMessage());\n+ }\n+ simulateDiskFull.set(randomBoolean());\n+ }\n+ simulateDiskFull.set(false);\n+ if (randomBoolean()) {\n+ try {\n+ locations.add(translog.add(new Translog.Index(\"test\", \"\" + opsSynced, Integer.toString(opsSynced).getBytes(Charset.forName(\"UTF-8\")))));\n+ fail(\"we are already closed\");\n+ } catch (AlreadyClosedException ex) {\n+ assertNotNull(ex.getCause());\n+ assertEquals(ex.getCause().getMessage(), \"__FAKE__ no space left on device\");\n+ }\n+\n+ }\n+ Translog.TranslogGeneration translogGeneration = translog.getGeneration();\n+ try {\n+ translog.newSnapshot();\n+ fail(\"already closed\");\n+ } catch (AlreadyClosedException ex) {\n+ // all is well\n+ assertNotNull(ex.getCause());\n+ assertSame(translog.getTragicException(), ex.getCause());\n+ }\n+\n+ try {\n+ translog.commit();\n+ fail(\"already closed\");\n+ } catch (AlreadyClosedException ex) {\n+ assertNotNull(ex.getCause());\n+ assertSame(translog.getTragicException(), ex.getCause());\n+ }\n+\n+ assertFalse(translog.isOpen());\n+ translog.close(); // we are closed\n+ config.setTranslogGeneration(translogGeneration);\n+ try (Translog tlog = new Translog(config)){\n+ assertEquals(\"lastCommitted must be 1 less than current\", translogGeneration.translogFileGeneration + 1, tlog.currentFileGeneration());\n+ assertFalse(tlog.syncNeeded());\n+\n+ try (Translog.Snapshot snapshot = tlog.newSnapshot()) {\n+ assertEquals(opsSynced, snapshot.estimatedTotalOperations());\n+ for (int i = 0; i < opsSynced; i++) {\n+ assertEquals(\"expected operation\" + i + \" to be in the previous translog but wasn't\", tlog.currentFileGeneration() - 1, locations.get(i).generation);\n+ Translog.Operation next = snapshot.next();\n+ assertNotNull(\"operation \" + i + \" must be non-null\", next);\n+ assertEquals(i, Integer.parseInt(next.getSource().source.toUtf8()));\n+ }\n+ }\n+ }\n+ }\n+\n+ public void testTranslogOpsCountIsCorrect() throws IOException {\n+ List<Translog.Location> locations = new ArrayList<>();\n+ int numOps = randomIntBetween(100, 200);\n+ LineFileDocs lineFileDocs = new LineFileDocs(random()); // writes pretty big docs so we cross buffer boarders regularly\n+ for (int opsAdded = 0; opsAdded < numOps; opsAdded++) {\n+ locations.add(translog.add(new Translog.Index(\"test\", \"\" + opsAdded, lineFileDocs.nextDoc().toString().getBytes(Charset.forName(\"UTF-8\")))));\n+ try (Translog.Snapshot snapshot = translog.newSnapshot()) {\n+ assertEquals(opsAdded+1, snapshot.estimatedTotalOperations());\n+ for (int i = 0; i < 
opsAdded; i++) {\n+ assertEquals(\"expected operation\" + i + \" to be in the current translog but wasn't\", translog.currentFileGeneration(), locations.get(i).generation);\n+ Translog.Operation next = snapshot.next();\n+ assertNotNull(\"operation \" + i + \" must be non-null\", next);\n+ }\n+ }\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java",
"status": "modified"
}
]
}
|
{
"body": "QueryParsingException[[range] query does not support [_name]]\n at org.elasticsearch.index.query.RangeQueryParser.parse(RangeQueryParser.java:108)\n\nLooking at RangeQueryParser the _name field has been removed between 1.7 and 2.0. Possibly related to #11744 ?\n\nIt appears to be fixed again in 'master' but may need to be fixed independently for the 2.x line.\n\nFollowing query was used\n\n```\n{\n \"nested\": {\n \"path\": \"inner_name_only\",\n \"score_mode\": \"avg\",\n \"query\": {\n \"range\": {\n \"inner_name_only.inner_date_field\": {\n \"gte\" : \"1999/01/01\",\n \"lte\" : \"2020/01/01\",\n \"_name\" : \"query1\"\n }\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "Thanks for reporting this, to me it looks like the expected place for the \"_name\" parameter was changed once with a0af88e9 to be on the top level, so in 2.x the following should work:\n\n```\n\"range\": {\n \"inner_name_only.inner_date_field\": {\n \"gte\": \"1999/01/01\",\n \"lte\": \"2020/01/01\"\n },\n \"_name\": \"query1\"\n }\n```\n\nHowever, as you mentioned in #11974 support for \"_name\" was added again on the main level alongside e.g. \"boost\" on master, but this was not ported to the 2.x branch. I think we should also add this back again on 2.x since the original move to the top-level of the query seems incidental. @javanna wdyt?\n",
"created_at": "2015-12-09T15:30:11Z"
},
{
"body": "Agreed it's weird to have back compat on master that does not exist on 2.x. So we should either remove the back compat from master or add it to 2.x.\n",
"created_at": "2015-12-11T12:53:54Z"
},
{
"body": "@jpountz thanks, I just checked the situation on master again and will add the test if you agree. The version we seem to be deprecating on master is the only one allowed on 2.x, so I'd be in favour of changing that to the behaviour on master (allowing both places, but using ParseField to deprecate the use on the top level). \n",
"created_at": "2015-12-11T13:49:55Z"
},
{
"body": "Closed by #15394\n",
"created_at": "2015-12-14T09:24:36Z"
}
],
"number": 15306,
"title": "Range query does not support _name anymore in 2.x"
}
|
{
"body": "This re-introduces support for the \"_name\" parameter in range queries on the field level that was droped while transitioning to 2.x but was allowed up to 1.7 and is the prefered location on master.\nAlso keeps parsing \"_name\" on the top level but deprecating it via use of ParseField.\n\nCloses #15306 \n",
"number": 15394,
"review_comments": [],
"title": "RangeQueryParser should accept `_name` in inner field"
}
|
{
"commits": [
{
"message": "Query DSL: RangeQueryParser should accept `_name` in inner field\n\nThis re-introduces support for the \"_name\" parameter in range queries\non the field level that was droped while transitioning to 2.x but\nwas allowed up to 1.7 and is the preffered location on master.\n\nAlso keeps parsing \"_name\" on the top level but deprecating it via\nuse of ParseField."
}
],
"files": [
{
"diff": "@@ -104,6 +104,8 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n timeZone = DateTimeZone.forID(parser.text());\n } else if (\"format\".equals(currentFieldName)) {\n forcedDateParser = new DateMathParser(Joda.forPattern(parser.text()));\n+ } else if (\"_name\".equals(currentFieldName)) {\n+ queryName = parser.text();\n } else {\n throw new QueryParsingException(parseContext, \"[range] query does not support [\" + currentFieldName + \"]\");\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/RangeQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -800,6 +800,23 @@ public void testRange2Query() throws IOException {\n assertThat(rangeQuery.includesMax(), equalTo(false));\n }\n \n+ /**\n+ * test that \"_name\" gets parsed on field level and on query level\n+ */\n+ @Test\n+ public void testNamedRangeQuery() throws IOException {\n+ IndexQueryParserService queryParser = queryParser();\n+ String query = \"{ range: { age: { gte:\\\"23\\\", lt:\\\"54\\\", _name:\\\"middle_aged\\\"}}}\";\n+ ParsedQuery parsedQuery = queryParser.parse(query);\n+ assertThat(parsedQuery.query(), instanceOf(NumericRangeQuery.class));\n+ assertThat(parsedQuery.namedFilters().keySet(), contains(\"middle_aged\"));\n+\n+ query = \"{ range: { age: { gte:\\\"23\\\", lt:\\\"54\\\"}, _name:\\\"middle_aged_toplevel\\\"}}\";\n+ parsedQuery = queryParser.parse(query);\n+ assertThat(parsedQuery.query(), instanceOf(NumericRangeQuery.class));\n+ assertThat(parsedQuery.namedFilters().keySet(), contains(\"middle_aged_toplevel\"));\n+ }\n+\n @Test\n public void testRangeFilteredQueryBuilder() throws IOException {\n IndexQueryParserService queryParser = queryParser();",
"filename": "core/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java",
"status": "modified"
}
]
}
|
{
"body": "QueryParsingException[[range] query does not support [_name]]\n at org.elasticsearch.index.query.RangeQueryParser.parse(RangeQueryParser.java:108)\n\nLooking at RangeQueryParser the _name field has been removed between 1.7 and 2.0. Possibly related to #11744 ?\n\nIt appears to be fixed again in 'master' but may need to be fixed independently for the 2.x line.\n\nFollowing query was used\n\n```\n{\n \"nested\": {\n \"path\": \"inner_name_only\",\n \"score_mode\": \"avg\",\n \"query\": {\n \"range\": {\n \"inner_name_only.inner_date_field\": {\n \"gte\" : \"1999/01/01\",\n \"lte\" : \"2020/01/01\",\n \"_name\" : \"query1\"\n }\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "Thanks for reporting this, to me it looks like the expected place for the \"_name\" parameter was changed once with a0af88e9 to be on the top level, so in 2.x the following should work:\n\n```\n\"range\": {\n \"inner_name_only.inner_date_field\": {\n \"gte\": \"1999/01/01\",\n \"lte\": \"2020/01/01\"\n },\n \"_name\": \"query1\"\n }\n```\n\nHowever, as you mentioned in #11974 support for \"_name\" was added again on the main level alongside e.g. \"boost\" on master, but this was not ported to the 2.x branch. I think we should also add this back again on 2.x since the original move to the top-level of the query seems incidental. @javanna wdyt?\n",
"created_at": "2015-12-09T15:30:11Z"
},
{
"body": "Agreed it's weird to have back compat on master that does not exist on 2.x. So we should either remove the back compat from master or add it to 2.x.\n",
"created_at": "2015-12-11T12:53:54Z"
},
{
"body": "@jpountz thanks, I just checked the situation on master again and will add the test if you agree. The version we seem to be deprecating on master is the only one allowed on 2.x, so I'd be in favour of changing that to the behaviour on master (allowing both places, but using ParseField to deprecate the use on the top level). \n",
"created_at": "2015-12-11T13:49:55Z"
},
{
"body": "Closed by #15394\n",
"created_at": "2015-12-14T09:24:36Z"
}
],
"number": 15306,
"title": "Range query does not support _name anymore in 2.x"
}
|
{
"body": "The issue in #15306 brought up the question where the query parser accepts the \"_name\" parameter for range queries. This adds a test for the current situation on master which expects \"_name\" in the inner field but also allows but deprecates it on the top level.\n",
"number": 15391,
"review_comments": [],
"title": "Tests: Add test for parsing \"_name\" field in RangeQueryParser"
}
|
{
"commits": [
{
"message": "Tests: Add test for parsing \"_name\" field in RangeQueryParser"
}
],
"files": [
{
"diff": "@@ -23,7 +23,9 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermRangeQuery;\n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.lucene.BytesRefs;\n+import org.hamcrest.core.IsEqual;\n import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n \n@@ -353,4 +355,42 @@ public void testFromJson() throws IOException {\n assertEquals(json, \"2015-01-01 00:00:00\", parsed.from());\n assertEquals(json, \"now\", parsed.to());\n }\n+\n+ public void testNamedQueryParsing() throws IOException {\n+ String json =\n+ \"{\\n\" +\n+ \" \\\"range\\\" : {\\n\" +\n+ \" \\\"timestamp\\\" : {\\n\" +\n+ \" \\\"from\\\" : \\\"2015-01-01 00:00:00\\\",\\n\" +\n+ \" \\\"to\\\" : \\\"now\\\",\\n\" +\n+ \" \\\"boost\\\" : 1.0,\\n\" +\n+ \" \\\"_name\\\" : \\\"my_range\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ assertNotNull(parseQuery(json));\n+\n+ json =\n+ \"{\\n\" +\n+ \" \\\"range\\\" : {\\n\" +\n+ \" \\\"timestamp\\\" : {\\n\" +\n+ \" \\\"from\\\" : \\\"2015-01-01 00:00:00\\\",\\n\" +\n+ \" \\\"to\\\" : \\\"now\\\",\\n\" +\n+ \" \\\"boost\\\" : 1.0\\n\" +\n+ \" },\\n\" +\n+ \" \\\"_name\\\" : \\\"my_range\\\"\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ // non strict parsing should accept \"_name\" on top level\n+ assertNotNull(parseQuery(json, ParseFieldMatcher.EMPTY));\n+\n+ // with strict parsing, ParseField will throw exception\n+ try {\n+ parseQuery(json, ParseFieldMatcher.STRICT);\n+ fail(\"Strict parsing should trigger exception for '_name' on top level\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), equalTo(\"Deprecated field [_name] used, replaced by [query name is not supported in short version of range query]\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java",
"status": "modified"
}
]
}
|
{
"body": "You should specify that `{\"copy_to\": \"top.child\"}` needs `top` field defined first, otherwise it would throw an error when indexing.\n\nAlso, I'm completely dissatisfied with the documentation on the site. \n",
"comments": [
{
"body": "Hi @celesteking \n\nI'd say this is a bug. Recreation:\n\n```\nPUT test \n{\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"copy_to\": \"top.child\"\n }\n }\n }\n }\n}\n\nPUT test/test/1\n{\n \"foo\": \"bar\"\n}\n```\n\nreturns:\n\n```\n{\n \"error\": \"MapperParsingException[attempt to copy value to non-existing object [top.child]]\",\n \"status\": 400\n}\n```\n\n> Also, I'm completely dissatisfied with the documentation on the site.\n\nYou're welcome to participate in this open source project by sending PRs to improve the documentation, or the code.\n",
"created_at": "2015-05-25T12:38:16Z"
},
{
"body": "There is actually a TODO in the code just before that exception is thrown, stating that we should create the parent object dynamically. I agree it is a bug.\n",
"created_at": "2015-05-26T06:32:40Z"
}
],
"number": 11237,
"title": "copy_to needs top level object "
}
|
{
"body": "Backport from master: Fix copy_to when the target is a dynamic object field.\nCloses #11237\n",
"number": 15385,
"review_comments": [],
"title": "Fix copy_to when the target is a dynamic object field."
}
|
{
"commits": [
{
"message": "Backport from master:\nFix copy_to when the target is a dynamic object field.\nCloses #11237"
}
],
"files": [
{
"diff": "@@ -717,37 +717,64 @@ private static void parseCopy(String field, ParseContext context) throws IOExcep\n // The path of the dest field might be completely different from the current one so we need to reset it\n context = context.overridePath(new ContentPath(0));\n \n+ String[] paths = Strings.splitStringToArray(field, '.');\n+ String fieldName = paths[paths.length-1];\n ObjectMapper mapper = context.root();\n- String objectPath = \"\";\n- String fieldPath = field;\n- int posDot = field.lastIndexOf('.');\n- if (posDot > 0) {\n- objectPath = field.substring(0, posDot);\n- context.path().add(objectPath);\n- mapper = context.docMapper().objectMappers().get(objectPath);\n- fieldPath = field.substring(posDot + 1);\n- }\n- if (mapper == null) {\n- //TODO: Create an object dynamically?\n- throw new MapperParsingException(\"attempt to copy value to non-existing object [\" + field + \"]\");\n- }\n- ObjectMapper update = parseDynamicValue(context, mapper, fieldPath, context.parser().currentToken());\n- assert update != null; // we are parsing a dynamic value so we necessarily created a new mapping\n-\n- // propagate the update to the root\n- while (objectPath.length() > 0) {\n- String parentPath = \"\";\n+ ObjectMapper[] mappers = new ObjectMapper[paths.length-1];\n+ if (paths.length > 1) {\n ObjectMapper parent = context.root();\n- posDot = objectPath.lastIndexOf('.');\n- if (posDot > 0) {\n- parentPath = objectPath.substring(0, posDot);\n- parent = context.docMapper().objectMappers().get(parentPath);\n+ for (int i = 0; i < paths.length-1; i++) {\n+ mapper = context.docMapper().objectMappers().get(context.path().fullPathAsText(paths[i]));\n+ if (mapper == null) {\n+ // One mapping is missing, check if we are allowed to create a dynamic one.\n+ ObjectMapper.Dynamic dynamic = parent.dynamic();\n+ if (dynamic == null) {\n+ dynamic = dynamicOrDefault(context.root().dynamic());\n+ }\n+\n+ switch (dynamic) {\n+ case STRICT:\n+ throw new StrictDynamicMappingException(parent.fullPath(), paths[i]);\n+ case TRUE:\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, paths[i], \"object\");\n+ if (builder == null) {\n+ // if this is a non root object, then explicitly set the dynamic behavior if set\n+ if (!(parent instanceof RootObjectMapper) && parent.dynamic() != ObjectMapper.Defaults.DYNAMIC) {\n+ ((ObjectMapper.Builder) builder).dynamic(parent.dynamic());\n+ }\n+ builder = MapperBuilders.object(paths[i]).enabled(true).pathType(parent.pathType());\n+ }\n+ Mapper.BuilderContext builderContext = new Mapper.BuilderContext(context.indexSettings(), context.path());\n+ mapper = (ObjectMapper) builder.build(builderContext);\n+ if (mapper.nested() != ObjectMapper.Nested.NO) {\n+ throw new MapperParsingException(\"It is forbidden to create dynamic nested objects ([\" + context.path().fullPathAsText(paths[i]) + \"]) through `copy_to`\");\n+ }\n+ break;\n+ case FALSE:\n+ // Maybe we should log something to tell the user that the copy_to is ignored in this case.\n+ break;\n+ default:\n+ throw new AssertionError(\"Unexpected dynamic type \" + dynamic);\n+\n+ }\n+ }\n+ context.path().add(paths[i]);\n+ mappers[i] = mapper;\n+ parent = mapper;\n }\n- if (parent == null) {\n- throw new IllegalStateException(\"[\" + objectPath + \"] has no parent for path [\" + parentPath + \"]\");\n+ }\n+ ObjectMapper update = parseDynamicValue(context, mapper, fieldName, context.parser().currentToken());\n+ assert update != null; // we are parsing a dynamic value so we necessarily created a new mapping\n+\n+ if 
(paths.length > 1) {\n+ for (int i = paths.length - 2; i >= 0; i--) {\n+ ObjectMapper parent = context.root();\n+ if (i > 0) {\n+ parent = mappers[i-1];\n+ }\n+ assert parent != null;\n+ update = parent.mappingUpdate(update);\n }\n- update = parent.mappingUpdate(update);\n- objectPath = parentPath;\n }\n context.addDynamicMappingsUpdate(update);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n \n import java.io.IOException;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;\n \n@@ -72,6 +73,25 @@ public void testDynamicTemplateCopyTo() throws Exception {\n \n }\n \n+ public void testDynamicObjectCopyTo() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"doc\").startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"root.top.child\")\n+ .endObject()\n+ .endObject().endObject().endObject().string();\n+ assertAcked(\n+ client().admin().indices().prepareCreate(\"test-idx\")\n+ .addMapping(\"doc\", mapping)\n+ );\n+ client().prepareIndex(\"test-idx\", \"doc\", \"1\")\n+ .setSource(\"foo\", \"bar\")\n+ .get();\n+ client().admin().indices().prepareRefresh(\"test-idx\").execute().actionGet();\n+ SearchResponse response = client().prepareSearch(\"test-idx\")\n+ .setQuery(QueryBuilders.termQuery(\"root.top.child\", \"bar\")).get();\n+ assertThat(response.getHits().totalHits(), equalTo(1L));\n+ }\n \n private XContentBuilder createDynamicTemplateMapping() throws IOException {\n return XContentFactory.jsonBuilder().startObject().startObject(\"doc\")",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/copyto/CopyToMapperIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -174,27 +174,126 @@ public void testCopyToFieldsInnerObjectParsing() throws Exception {\n \n @SuppressWarnings(\"unchecked\")\n @Test\n- public void testCopyToFieldsNonExistingInnerObjectParsing() throws Exception {\n- String mapping = jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n-\n+ public void testCopyToDynamicInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\")\n .startObject(\"copy_test\")\n- .field(\"type\", \"string\")\n- .field(\"copy_to\", \"very.inner.field\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.inner.field\")\n .endObject()\n-\n- .endObject().endObject().endObject().string();\n+ .endObject()\n+ .endObject().endObject().string();\n \n DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n BytesReference json = jsonBuilder().startObject()\n .field(\"copy_test\", \"foo\")\n+ .field(\"new_field\", \"bar\")\n .endObject().bytes();\n \n+ ParseContext.Document doc = docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ assertThat(doc.getFields(\"copy_test\").length, equalTo(1));\n+ assertThat(doc.getFields(\"copy_test\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"very.inner.field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"very.inner.field\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"new_field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"new_field\")[0].stringValue(), equalTo(\"bar\"));\n+ }\n+\n+ public void testCopyToDynamicInnerInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.far.inner.field\")\n+ .endObject()\n+ .startObject(\"very\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"far\")\n+ .field(\"type\", \"object\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = jsonBuilder().startObject()\n+ .field(\"copy_test\", \"foo\")\n+ .field(\"new_field\", \"bar\")\n+ .endObject().bytes();\n+\n+ ParseContext.Document doc = docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ assertThat(doc.getFields(\"copy_test\").length, equalTo(1));\n+ assertThat(doc.getFields(\"copy_test\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"very.far.inner.field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"very.far.inner.field\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"new_field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"new_field\")[0].stringValue(), equalTo(\"bar\"));\n+ }\n+\n+ public void testCopyToStrictDynamicInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .field(\"dynamic\", \"strict\")\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.inner.field\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = 
jsonBuilder().startObject()\n+ .field(\"copy_test\", \"foo\")\n+ .endObject().bytes();\n+\n+ try {\n+ docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ fail();\n+ } catch (MapperParsingException ex) {\n+ assertThat(ex.getMessage(), startsWith(\"mapping set to strict, dynamic introduction of [very] within [type1] is not allowed\"));\n+ }\n+ }\n+\n+ public void testCopyToInnerStrictDynamicInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.far.field\")\n+ .endObject()\n+ .startObject(\"very\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"far\")\n+ .field(\"type\", \"object\")\n+ .field(\"dynamic\", \"strict\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = jsonBuilder().startObject()\n+ .field(\"copy_test\", \"foo\")\n+ .endObject().bytes();\n+\n try {\n docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n fail();\n } catch (MapperParsingException ex) {\n- assertThat(ex.getMessage(), startsWith(\"attempt to copy value to non-existing object\"));\n+ assertThat(ex.getMessage(), startsWith(\"mapping set to strict, dynamic introduction of [field] within [very.far] is not allowed\"));\n }\n }\n \n@@ -346,6 +445,41 @@ public void testCopyToNestedField() throws Exception {\n }\n }\n \n+ public void testCopyToDynamicNestedObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startArray(\"dynamic_templates\")\n+ .startObject()\n+ .startObject(\"objects\")\n+ .field(\"match_mapping_type\", \"object\")\n+ .startObject(\"mapping\")\n+ .field(\"type\", \"nested\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endArray()\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.inner.field\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = jsonBuilder().startObject()\n+ .field(\"copy_test\", \"foo\")\n+ .field(\"new_field\", \"bar\")\n+ .endObject().bytes();\n+\n+ try {\n+ docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ fail();\n+ } catch (MapperParsingException ex) {\n+ assertThat(ex.getMessage(), startsWith(\"It is forbidden to create dynamic nested objects ([very]) through `copy_to`\"));\n+ }\n+ }\n+\n private void assertFieldValue(Document doc, String field, Number... expected) {\n IndexableField[] values = doc.getFields(field);\n if (values == null) {",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/copyto/CopyToMapperTests.java",
"status": "modified"
}
]
}
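
For illustration only, alongside the records above: a minimal sketch of the mapping shape this fix enables, built with the same 2.x `XContentBuilder` API the tests use. The `very` object is deliberately not declared; after this change the document parser creates `very.inner` dynamically instead of rejecting the document.

```java
import org.elasticsearch.common.xcontent.XContentBuilder;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

public class CopyToDynamicObjectSketch {
    public static void main(String[] args) throws Exception {
        // Mirrors testCopyToDynamicInnerObjectParsing above: copy_to points at
        // "very.inner.field" even though no "very" object is defined in the mapping.
        XContentBuilder mapping = jsonBuilder().startObject()
                .startObject("type1").startObject("properties")
                    .startObject("copy_test")
                        .field("type", "string")
                        .field("copy_to", "very.inner.field")
                    .endObject()
                .endObject().endObject()
                .endObject();
        System.out.println(mapping.string());
    }
}
```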
|
{
"body": "I was trying to create aliases with routing on an index that includes parent/child docs. Posting to an alias endpoint while specifying \"parent=X\" causes an error. I was thinking it shouldn't, because the parent/child routing is to ensure the docs will wind up on the same shard, but this is already guaranteed by the routed alias.\n\nCurl recreation:\n\nhttps://gist.github.com/erikcameron/5621421\n\nAs noted, it looks like it works if you give parent and routing explicitly. \n\nThanks!\n-E\n",
"comments": [
{
"body": "Makes sense, ES shouldn't throw an error when indexing into an alias and `parent` is set. Assuming that someone wants to override the parent routing (which is automatically set when `parent` is present in a index request) when indexing into an index alias, ES should use the routing specified in the index alias.\n",
"created_at": "2013-05-29T13:29:53Z"
},
{
"body": "Just ran into this same issue, is it likely to be fixed soon?\n",
"created_at": "2014-08-24T23:15:04Z"
},
{
"body": "Just had this issue too, I've lost lot of time finding out why it failed like this :).\n",
"created_at": "2014-10-21T17:13:33Z"
},
{
"body": "Any news on this issue @martijnvg ?\n",
"created_at": "2014-11-28T18:06:52Z"
},
{
"body": "Any chance of this getting back-ported to ES 2.x? This is a rather high priority bug for us since we can't determine the routing of the parent without investigating the alias directly before every insert (which is very expensive).\n",
"created_at": "2016-04-28T11:18:20Z"
},
{
"body": "any news on this? This bug is happening still in 2.3.3, has been ported to any 2.x tag?\n",
"created_at": "2016-07-29T09:23:49Z"
}
],
"number": 3068,
"title": "conflict between alias routing and parent/child routing"
}
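
For illustration only: the request shape that triggers this issue, sketched with the 2.x Java API (alias, type, and ids are made up). The request only sets `parent`; the routing is expected to come from the routed alias it is indexed through.

```java
import org.elasticsearch.action.index.IndexRequest;

public class ParentViaRoutedAliasSketch {
    public static void main(String[] args) {
        // "children-alias" is assumed to be an alias with index routing configured;
        // no explicit routing is set on the request, only the parent id.
        IndexRequest request = new IndexRequest("children-alias", "child-type", "1")
                .parent("parent-1")
                .source("field", "value");
        System.out.println(request);
    }
}
```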
|
{
"body": "Separates routing and parent in all documentrequest in order to be able to distinguish an explicit routing value from a parent routing.\n\nThe final value for the routing is returned by MetaData.resolveIndexRouting which\nresolves conflicts between routing, parent routing and alias routing with the following rules:\n- If the routing is specified in the request then parent routing and alias routing are ignored.\n- If the routing is not specified:\n - The parent routing is ignored if there is an alias routing that matches the request.\n - Otherwise the parent routing is applied.\n Fixes #3068\n",
"number": 15371,
"review_comments": [
{
"body": "should we remove this?\n",
"created_at": "2015-12-10T15:14:56Z"
},
{
"body": "where is it set?\n",
"created_at": "2015-12-10T15:15:41Z"
},
{
"body": "Sorry my cherry-pick was wrong. I'll fix it.\n",
"created_at": "2015-12-10T15:29:34Z"
},
{
"body": "Sorry my cherry-pick was wrong. I'll fix it.\n",
"created_at": "2015-12-10T15:29:39Z"
},
{
"body": "Don't these need to be guarded on `getVersion()` for 2.2+?\n",
"created_at": "2015-12-10T19:26:13Z"
},
{
"body": "Ditto here, guard with version check\n",
"created_at": "2015-12-10T19:26:39Z"
},
{
"body": "version check?\n",
"created_at": "2015-12-10T19:27:10Z"
},
{
"body": "Can we not ignore but throw an error back to the user? We shouldn't silently ignore something the user has passed in, it may be they are confused about the api and don't realize that part of their request is being ignored.\n",
"created_at": "2015-12-10T19:29:01Z"
},
{
"body": "Ignore this (and my other comments about version checks) if it is going to master only.\n",
"created_at": "2015-12-10T19:30:00Z"
},
{
"body": "@rjernst that's how it was done before but the purpose of this PR is to prioritize an explicit routing (set by the user) over an alias routing and a parent routing. IMO if the user adds a routing to the request we should always honor it and ignore the rest of the rules (parent and alias). Regarding the users confusion when they use the API I don't think the current situation is better (see #3068). I can restore the current behavior and only apply the parent/alias priority proposed in #3068 if you want but I have the feeling that it would bring more confusion. Bottom line is that if the user sets a routing then we should trust that he knows what he's doing ;).\n",
"created_at": "2015-12-11T08:36:50Z"
},
{
"body": "@jimferenczi How do you know the the user didn't make a mistake in his code and set routing when he did not mean to, but meant to only set parent? Being lenient is bad, but being silently lenient is even worse. \n",
"created_at": "2015-12-11T08:44:34Z"
},
{
"body": "I agree that being lenient is bad but what about being too strict ;). IMO it's not a question of being lenient or not but the question is how the rules for routing are applied and in which order. If we want to keep the same behavior then there would be no way to override a parent routing or an alias routing explicitly. I am fine with that but what do you think of the workaround for #3068 ? I could alos change this PR to handle the parent routing vs alias routing only. This would mean that if an alias routing is defined for the request then the parent routing is ignored. @rjernst @jpountz which one do you prefer ?\n",
"created_at": "2015-12-11T09:01:58Z"
},
{
"body": "I think the main use-case for routed aliases is to be able to transparently query a multi-tenant index holding data for several users as if each user had a dedicated index. The consumers of these aliases might not even know that these are aliases, it might be something that is set up by the system administrator. So I think we need the following rules when an alias routing is configured:\n- alias _routing wins over _parent (like the PR already does)\n- if _routing is set and there is an alias _routing, fail if the routing keys are different (like before) otherwise we would either allow to put documents in shards that are not visible to the alias (bad) or silently ignore the user-provided routing key which I believe could be confusing. Given that routing is an expert feature, I think it's totally fine to fail in that case: if an alias with routing is configured, this is exactly so that consumers of this alias don't have to deal with routing explicitly?\n",
"created_at": "2015-12-11T09:05:03Z"
},
{
"body": "@jpountz great, I'll change the PR to handle the \"alias_routing wins over _parent\" only and we'll continue to fail if the routing clashes with an alias routing. Thanks.\n",
"created_at": "2015-12-11T09:13:25Z"
},
{
"body": "Can we have unit tests for resolveIndexRouting? I think in `o.e.cluster.metadata.MetaDataTests`?\n",
"created_at": "2015-12-11T14:58:12Z"
},
{
"body": "Sure, its pushed (with a bug fix ;) ).\n",
"created_at": "2015-12-11T15:29:15Z"
},
{
"body": "we should check the exception is what we expect (assert part or all of the message)\n",
"created_at": "2015-12-11T15:40:44Z"
},
{
"body": "What about cases without an alias?\n",
"created_at": "2015-12-11T15:41:28Z"
}
],
"title": "Resolves the conflict between alias routing and parent routing by applying the alias routing and ignoring the parent routing."
}
|
{
"commits": [
{
"message": "Separates routing and parent in all documentrequest in order to be able to distinguish an explicit routing value from a parent routing.\nResolves conflicts between parent routing and alias routing with the following rule:\n * The parent routing is ignored if there is an alias routing that matches the request.\nCloses #3068"
}
],
"files": [
{
"diff": "@@ -62,4 +62,12 @@ public interface DocumentRequest<T> extends IndicesRequest {\n * @return the Routing\n */\n String routing();\n+\n+\n+ /**\n+ * Get the parent for this request\n+ * @return the Parent\n+ */\n+ String parent();\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/action/DocumentRequest.java",
"status": "modified"
},
{
"diff": "@@ -239,7 +239,7 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n }\n } else {\n concreteIndices.resolveIfAbsent(req);\n- req.routing(clusterState.metaData().resolveIndexRouting(req.routing(), req.index()));\n+ req.routing(clusterState.metaData().resolveIndexRouting(req.parent(), req.routing(), req.index()));\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java",
"status": "modified"
},
{
"diff": "@@ -50,6 +50,8 @@ public class DeleteRequest extends ReplicationRequest<DeleteRequest> implements\n private String id;\n @Nullable\n private String routing;\n+ @Nullable\n+ private String parent;\n private boolean refresh;\n private long version = Versions.MATCH_ANY;\n private VersionType versionType = VersionType.INTERNAL;\n@@ -94,6 +96,7 @@ public DeleteRequest(DeleteRequest request, ActionRequest originalRequest) {\n this.type = request.type();\n this.id = request.id();\n this.routing = request.routing();\n+ this.parent = request.parent();\n this.refresh = request.refresh();\n this.version = request.version();\n this.versionType = request.versionType();\n@@ -155,13 +158,18 @@ public DeleteRequest id(String id) {\n }\n \n /**\n- * Sets the parent id of this document. Will simply set the routing to this value, as it is only\n- * used for routing with delete requests.\n+ * @return The parent for this request.\n+ */\n+ @Override\n+ public String parent() {\n+ return parent;\n+ }\n+\n+ /**\n+ * Sets the parent id of this document.\n */\n public DeleteRequest parent(String parent) {\n- if (routing == null) {\n- routing = parent;\n- }\n+ this.parent = parent;\n return this;\n }\n \n@@ -230,6 +238,7 @@ public void readFrom(StreamInput in) throws IOException {\n type = in.readString();\n id = in.readString();\n routing = in.readOptionalString();\n+ parent = in.readOptionalString();\n refresh = in.readBoolean();\n version = in.readLong();\n versionType = VersionType.fromValue(in.readByte());\n@@ -241,6 +250,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(type);\n out.writeString(id);\n out.writeOptionalString(routing());\n+ out.writeOptionalString(parent());\n out.writeBoolean(refresh);\n out.writeLong(version);\n out.writeByte(versionType.getValue());",
"filename": "core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java",
"status": "modified"
},
{
"diff": "@@ -95,7 +95,7 @@ public void onFailure(Throwable e) {\n \n @Override\n protected void resolveRequest(final MetaData metaData, String concreteIndex, DeleteRequest request) {\n- request.routing(metaData.resolveIndexRouting(request.routing(), request.index()));\n+ request.routing(metaData.resolveIndexRouting(request.parent(), request.routing(), request.index()));\n if (metaData.hasIndex(concreteIndex)) {\n // check if routing is required, if so, do a broadcast delete\n MappingMetaData mappingMd = metaData.index(concreteIndex).mappingOrDefault(request.type());",
"filename": "core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,7 @@ public class GetRequest extends SingleShardRequest<GetRequest> implements Realti\n private String type;\n private String id;\n private String routing;\n+ private String parent;\n private String preference;\n \n private String[] fields;\n@@ -77,6 +78,7 @@ public GetRequest(GetRequest getRequest, ActionRequest originalRequest) {\n this.type = getRequest.type;\n this.id = getRequest.id;\n this.routing = getRequest.routing;\n+ this.parent = getRequest.parent;\n this.preference = getRequest.preference;\n this.fields = getRequest.fields;\n this.fetchSourceContext = getRequest.fetchSourceContext;\n@@ -153,13 +155,17 @@ public GetRequest id(String id) {\n }\n \n /**\n- * Sets the parent id of this document. Will simply set the routing to this value, as it is only\n- * used for routing with delete requests.\n+ * @return The parent for this request.\n+ */\n+ public String parent() {\n+ return parent;\n+ }\n+\n+ /**\n+ * Sets the parent id of this document.\n */\n public GetRequest parent(String parent) {\n- if (routing == null) {\n- routing = parent;\n- }\n+ this.parent = parent;\n return this;\n }\n \n@@ -291,6 +297,7 @@ public void readFrom(StreamInput in) throws IOException {\n type = in.readString();\n id = in.readString();\n routing = in.readOptionalString();\n+ parent = in.readOptionalString();\n preference = in.readOptionalString();\n refresh = in.readBoolean();\n int size = in.readInt();\n@@ -320,6 +327,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(type);\n out.writeString(id);\n out.writeOptionalString(routing);\n+ out.writeOptionalString(parent);\n out.writeOptionalString(preference);\n \n out.writeBoolean(refresh);",
"filename": "core/src/main/java/org/elasticsearch/action/get/GetRequest.java",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,7 @@ public static class Item implements Streamable, IndicesRequest {\n private String type;\n private String id;\n private String routing;\n+ private String parent;\n private String[] fields;\n private long version = Versions.MATCH_ANY;\n private VersionType versionType = VersionType.INTERNAL;\n@@ -116,12 +117,17 @@ public String routing() {\n }\n \n public Item parent(String parent) {\n- if (routing == null) {\n- this.routing = parent;\n- }\n+ this.parent = parent;\n return this;\n }\n \n+ /**\n+ * @return The parent for this request.\n+ */\n+ public String parent() {\n+ return parent;\n+ }\n+\n public Item fields(String... fields) {\n this.fields = fields;\n return this;\n@@ -173,6 +179,7 @@ public void readFrom(StreamInput in) throws IOException {\n type = in.readOptionalString();\n id = in.readString();\n routing = in.readOptionalString();\n+ parent = in.readOptionalString();\n int size = in.readVInt();\n if (size > 0) {\n fields = new String[size];\n@@ -192,6 +199,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalString(type);\n out.writeString(id);\n out.writeOptionalString(routing);\n+ out.writeOptionalString(parent);\n if (fields == null) {\n out.writeVInt(0);\n } else {\n@@ -221,6 +229,7 @@ public boolean equals(Object o) {\n if (!id.equals(item.id)) return false;\n if (!index.equals(item.index)) return false;\n if (routing != null ? !routing.equals(item.routing) : item.routing != null) return false;\n+ if (parent != null ? !parent.equals(item.parent) : item.parent != null) return false;\n if (type != null ? !type.equals(item.type) : item.type != null) return false;\n if (versionType != item.versionType) return false;\n \n@@ -233,6 +242,7 @@ public int hashCode() {\n result = 31 * result + (type != null ? type.hashCode() : 0);\n result = 31 * result + id.hashCode();\n result = 31 * result + (routing != null ? routing.hashCode() : 0);\n+ result = 31 * result + (parent != null ? parent.hashCode() : 0);\n result = 31 * result + (fields != null ? Arrays.hashCode(fields) : 0);\n result = 31 * result + Long.hashCode(version);\n result = 31 * result + versionType.hashCode();",
"filename": "core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java",
"status": "modified"
},
{
"diff": "@@ -82,7 +82,7 @@ protected void resolveRequest(ClusterState state, InternalRequest request) {\n request.request().preference(Preference.PRIMARY.type());\n }\n // update the routing (request#index here is possibly an alias)\n- request.request().routing(state.metaData().resolveIndexRouting(request.request().routing(), request.request().index()));\n+ request.request().routing(state.metaData().resolveIndexRouting(request.request().parent(), request.request().routing(), request.request().index()));\n // Fail fast on the node that received the request.\n if (request.request().routing() == null && state.getMetaData().routingRequired(request.concreteIndex(), request.request().type())) {\n throw new RoutingMissingException(request.concreteIndex(), request.request().type(), request.request().id());",
"filename": "core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,7 @@ protected void doExecute(final MultiGetRequest request, final ActionListener<Mul\n responses.set(i, new MultiGetItemResponse(null, new MultiGetResponse.Failure(item.index(), item.type(), item.id(), new IndexNotFoundException(item.index()))));\n continue;\n }\n- item.routing(clusterState.metaData().resolveIndexRouting(item.routing(), item.index()));\n+ item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), item.index()));\n String concreteSingleIndex = indexNameExpressionResolver.concreteSingleIndex(clusterState, item);\n if (item.routing() == null && clusterState.getMetaData().routingRequired(concreteSingleIndex, item.type())) {\n responses.set(i, new MultiGetItemResponse(null, new MultiGetResponse.Failure(concreteSingleIndex, item.type(), item.id(),",
"filename": "core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java",
"status": "modified"
},
{
"diff": "@@ -304,14 +304,10 @@ public String routing() {\n }\n \n /**\n- * Sets the parent id of this document. If routing is not set, automatically set it as the\n- * routing as well.\n+ * Sets the parent id of this document.\n */\n public IndexRequest parent(String parent) {\n this.parent = parent;\n- if (routing == null) {\n- routing = parent;\n- }\n return this;\n }\n \n@@ -593,7 +589,7 @@ private Version getVersion(MetaData metaData, String concreteIndex) {\n \n public void process(MetaData metaData, @Nullable MappingMetaData mappingMd, boolean allowIdGeneration, String concreteIndex) {\n // resolve the routing if needed\n- routing(metaData.resolveIndexRouting(routing, index));\n+ routing(metaData.resolveIndexRouting(parent, routing, index));\n \n // resolve timestamp if provided externally\n if (timestamp != null) {",
"filename": "core/src/main/java/org/elasticsearch/action/index/IndexRequest.java",
"status": "modified"
},
{
"diff": "@@ -65,6 +65,8 @@ public class TermVectorsRequest extends SingleShardRequest<TermVectorsRequest> i\n \n private String routing;\n \n+ private String parent;\n+\n private VersionType versionType = VersionType.INTERNAL;\n \n private long version = Versions.MATCH_ANY;\n@@ -162,6 +164,7 @@ public TermVectorsRequest(TermVectorsRequest other) {\n this.flagsEnum = other.getFlags().clone();\n this.preference = other.preference();\n this.routing = other.routing();\n+ this.parent = other.parent();\n if (other.selectedFields != null) {\n this.selectedFields = new HashSet<>(other.selectedFields);\n }\n@@ -181,6 +184,7 @@ public TermVectorsRequest(MultiGetRequest.Item item) {\n this.type = item.type();\n this.selectedFields(item.fields());\n this.routing(item.routing());\n+ this.parent(item.parent());\n }\n \n public EnumSet<Flag> getFlags() {\n@@ -259,14 +263,16 @@ public TermVectorsRequest routing(String routing) {\n return this;\n }\n \n+ @Override\n+ public String parent() {\n+ return parent;\n+ }\n+\n /**\n- * Sets the parent id of this document. Will simply set the routing to this\n- * value, as it is only used for routing with delete requests.\n+ * Sets the parent id of this document.\n */\n public TermVectorsRequest parent(String parent) {\n- if (routing == null) {\n- routing = parent;\n- }\n+ this.parent = parent;\n return this;\n }\n \n@@ -506,6 +512,7 @@ public void readFrom(StreamInput in) throws IOException {\n doc = in.readBytesReference();\n }\n routing = in.readOptionalString();\n+ parent = in.readOptionalString();\n preference = in.readOptionalString();\n long flags = in.readVLong();\n \n@@ -545,6 +552,7 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBytesReference(doc);\n }\n out.writeOptionalString(routing);\n+ out.writeOptionalString(parent);\n out.writeOptionalString(preference);\n long longFlags = 0;\n for (Flag flag : flagsEnum) {\n@@ -629,6 +637,8 @@ public static void parseRequest(TermVectorsRequest termVectorsRequest, XContentP\n termVectorsRequest.doc(jsonBuilder().copyCurrentStructure(parser));\n } else if (\"_routing\".equals(currentFieldName) || \"routing\".equals(currentFieldName)) {\n termVectorsRequest.routing = parser.text();\n+ } else if (\"_parent\".equals(currentFieldName) || \"parent\".equals(currentFieldName)) {\n+ termVectorsRequest.parent = parser.text();\n } else if (\"_version\".equals(currentFieldName) || \"version\".equals(currentFieldName)) {\n termVectorsRequest.version = parser.longValue();\n } else if (\"_version_type\".equals(currentFieldName) || \"_versionType\".equals(currentFieldName) || \"version_type\".equals(currentFieldName) || \"versionType\".equals(currentFieldName)) {",
"filename": "core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java",
"status": "modified"
},
{
"diff": "@@ -66,7 +66,7 @@ protected void doExecute(final MultiTermVectorsRequest request, final ActionList\n for (int i = 0; i < request.requests.size(); i++) {\n TermVectorsRequest termVectorsRequest = request.requests.get(i);\n termVectorsRequest.startTime = System.currentTimeMillis();\n- termVectorsRequest.routing(clusterState.metaData().resolveIndexRouting(termVectorsRequest.routing(), termVectorsRequest.index()));\n+ termVectorsRequest.routing(clusterState.metaData().resolveIndexRouting(termVectorsRequest.parent(), termVectorsRequest.routing(), termVectorsRequest.index()));\n if (!clusterState.metaData().hasConcreteIndex(termVectorsRequest.index())) {\n responses.set(i, new MultiTermVectorsItemResponse(null, new MultiTermVectorsResponse.Failure(termVectorsRequest.index(),\n termVectorsRequest.type(), termVectorsRequest.id(), new IndexNotFoundException(termVectorsRequest.index()))));\n@@ -88,12 +88,12 @@ protected void doExecute(final MultiTermVectorsRequest request, final ActionList\n }\n shardRequest.add(i, termVectorsRequest);\n }\n- \n+\n if (shardRequests.size() == 0) {\n // only failures..\n listener.onResponse(new MultiTermVectorsResponse(responses.toArray(new MultiTermVectorsItemResponse[responses.length()])));\n }\n- \n+\n final AtomicInteger counter = new AtomicInteger(shardRequests.size());\n for (final MultiTermVectorsShardRequest shardRequest : shardRequests.values()) {\n shardAction.execute(shardRequest, new ActionListener<MultiTermVectorsShardResponse>() {",
"filename": "core/src/main/java/org/elasticsearch/action/termvectors/TransportMultiTermVectorsAction.java",
"status": "modified"
},
{
"diff": "@@ -71,8 +71,8 @@ protected boolean resolveIndex(TermVectorsRequest request) {\n \n @Override\n protected void resolveRequest(ClusterState state, InternalRequest request) {\n- // update the routing (request#index here is possibly an alias)\n- request.request().routing(state.metaData().resolveIndexRouting(request.request().routing(), request.request().index()));\n+ // update the routing (request#index here is possibly an alias or a parent)\n+ request.request().routing(state.metaData().resolveIndexRouting(request.request().parent(), request.request().routing(), request.request().index()));\n // Fail fast on the node that received the request.\n if (request.request().routing() == null && state.getMetaData().routingRequired(request.concreteIndex(), request.request().type())) {\n throw new RoutingMissingException(request.concreteIndex(), request.request().type(), request.request().id());",
"filename": "core/src/main/java/org/elasticsearch/action/termvectors/TransportTermVectorsAction.java",
"status": "modified"
},
{
"diff": "@@ -101,7 +101,7 @@ protected boolean retryOnFailure(Throwable e) {\n \n @Override\n protected boolean resolveRequest(ClusterState state, UpdateRequest request, ActionListener<UpdateResponse> listener) {\n- request.routing((state.metaData().resolveIndexRouting(request.routing(), request.index())));\n+ request.routing((state.metaData().resolveIndexRouting(request.parent(), request.routing(), request.index())));\n // Fail fast on the node that received the request, rather than failing when translating on the index or delete request.\n if (request.routing() == null && state.getMetaData().routingRequired(request.concreteIndex(), request.type())) {\n throw new RoutingMissingException(request.concreteIndex(), request.type(), request.id());",
"filename": "core/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java",
"status": "modified"
},
{
"diff": "@@ -184,13 +184,10 @@ public String routing() {\n }\n \n /**\n- * The parent id is used for the upsert request and also implicitely sets the routing if not already set.\n+ * The parent id is used for the upsert request.\n */\n public UpdateRequest parent(String parent) {\n this.parent = parent;\n- if (routing == null) {\n- routing = parent;\n- }\n return this;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java",
"status": "modified"
},
{
"diff": "@@ -441,13 +441,19 @@ public String[] getConcreteAllClosedIndices() {\n */\n // TODO: This can be moved to IndexNameExpressionResolver too, but this means that we will support wildcards and other expressions\n // in the index,bulk,update and delete apis.\n- public String resolveIndexRouting(@Nullable String routing, String aliasOrIndex) {\n+ public String resolveIndexRouting(@Nullable String parent, @Nullable String routing, String aliasOrIndex) {\n if (aliasOrIndex == null) {\n+ if (routing == null) {\n+ return parent;\n+ }\n return routing;\n }\n \n AliasOrIndex result = getAliasAndIndexLookup().get(aliasOrIndex);\n if (result == null || result.isAlias() == false) {\n+ if (routing == null) {\n+ return parent;\n+ }\n return routing;\n }\n AliasOrIndex.Alias alias = (AliasOrIndex.Alias) result;\n@@ -461,17 +467,19 @@ public String resolveIndexRouting(@Nullable String routing, String aliasOrIndex)\n }\n AliasMetaData aliasMd = alias.getFirstAliasMetaData();\n if (aliasMd.indexRouting() != null) {\n+ if (aliasMd.indexRouting().indexOf(',') != -1) {\n+ throw new IllegalArgumentException(\"index/alias [\" + aliasOrIndex + \"] provided with routing value [\" + aliasMd.getIndexRouting() + \"] that resolved to several routing values, rejecting operation\");\n+ }\n if (routing != null) {\n if (!routing.equals(aliasMd.indexRouting())) {\n throw new IllegalArgumentException(\"Alias [\" + aliasOrIndex + \"] has index routing associated with it [\" + aliasMd.indexRouting() + \"], and was provided with routing value [\" + routing + \"], rejecting operation\");\n }\n }\n- routing = aliasMd.indexRouting();\n+ // Alias routing overrides the parent routing (if any).\n+ return aliasMd.indexRouting();\n }\n- if (routing != null) {\n- if (routing.indexOf(',') != -1) {\n- throw new IllegalArgumentException(\"index/alias [\" + aliasOrIndex + \"] provided with routing value [\" + routing + \"] that resolved to several routing values, rejecting operation\");\n- }\n+ if (routing == null) {\n+ return parent;\n }\n return routing;\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -255,7 +255,7 @@ public void testStreamRequest() throws IOException {\n assertThat(request.positions(), equalTo(req2.positions()));\n assertThat(request.termStatistics(), equalTo(req2.termStatistics()));\n assertThat(request.preference(), equalTo(pref));\n- assertThat(request.routing(), equalTo(parent));\n+ assertThat(request.routing(), equalTo(null));\n \n }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/termvectors/TermVectorsUnitTests.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.test.ESTestCase;\n \n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n \n public class MetaDataTests extends ESTestCase {\n \n@@ -41,4 +42,72 @@ public void testIndexAndAliasWithSameName() {\n }\n }\n \n+ public void testResolveIndexRouting() {\n+ IndexMetaData.Builder builder = IndexMetaData.builder(\"index\")\n+ .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n+ .numberOfShards(1)\n+ .numberOfReplicas(0)\n+ .putAlias(AliasMetaData.builder(\"alias0\").build())\n+ .putAlias(AliasMetaData.builder(\"alias1\").routing(\"1\").build())\n+ .putAlias(AliasMetaData.builder(\"alias2\").routing(\"1,2\").build());\n+ MetaData metaData = MetaData.builder().put(builder).build();\n+\n+ // no alias, no index\n+ assertEquals(metaData.resolveIndexRouting(null, null, null), null);\n+ assertEquals(metaData.resolveIndexRouting(null, \"0\", null), \"0\");\n+ assertEquals(metaData.resolveIndexRouting(\"32\", \"0\", null), \"0\");\n+ assertEquals(metaData.resolveIndexRouting(\"32\", null, null), \"32\");\n+\n+ // index, no alias\n+ assertEquals(metaData.resolveIndexRouting(\"32\", \"0\", \"index\"), \"0\");\n+ assertEquals(metaData.resolveIndexRouting(\"32\", null, \"index\"), \"32\");\n+ assertEquals(metaData.resolveIndexRouting(null, null, \"index\"), null);\n+ assertEquals(metaData.resolveIndexRouting(null, \"0\", \"index\"), \"0\");\n+\n+ // alias with no index routing\n+ assertEquals(metaData.resolveIndexRouting(null, null, \"alias0\"), null);\n+ assertEquals(metaData.resolveIndexRouting(null, \"0\", \"alias0\"), \"0\");\n+ assertEquals(metaData.resolveIndexRouting(\"32\", null, \"alias0\"), \"32\");\n+ assertEquals(metaData.resolveIndexRouting(\"32\", \"0\", \"alias0\"), \"0\");\n+\n+ // alias with index routing.\n+ assertEquals(metaData.resolveIndexRouting(null, null, \"alias1\"), \"1\");\n+ assertEquals(metaData.resolveIndexRouting(\"32\", null, \"alias1\"), \"1\");\n+ assertEquals(metaData.resolveIndexRouting(\"32\", \"1\", \"alias1\"), \"1\");\n+ try {\n+ metaData.resolveIndexRouting(null, \"0\", \"alias1\");\n+ fail(\"should fail\");\n+ } catch (IllegalArgumentException ex) {\n+ assertThat(ex.getMessage(), is(\"Alias [alias1] has index routing associated with it [1], and was provided with routing value [0], rejecting operation\"));\n+ }\n+\n+ try {\n+ metaData.resolveIndexRouting(\"32\", \"0\", \"alias1\");\n+ fail(\"should fail\");\n+ } catch (IllegalArgumentException ex) {\n+ assertThat(ex.getMessage(), is(\"Alias [alias1] has index routing associated with it [1], and was provided with routing value [0], rejecting operation\"));\n+ }\n+\n+ // alias with invalid index routing.\n+ try {\n+ metaData.resolveIndexRouting(null, null, \"alias2\");\n+ fail(\"should fail\");\n+ } catch (IllegalArgumentException ex) {\n+ assertThat(ex.getMessage(), is(\"index/alias [alias2] provided with routing value [1,2] that resolved to several routing values, rejecting operation\"));\n+ }\n+\n+ try {\n+ metaData.resolveIndexRouting(null, \"1\", \"alias2\");\n+ fail(\"should fail\");\n+ } catch (IllegalArgumentException ex) {\n+ assertThat(ex.getMessage(), is(\"index/alias [alias2] provided with routing value [1,2] that resolved to several routing values, rejecting operation\"));\n+ }\n+\n+ try {\n+ metaData.resolveIndexRouting(\"32\", null, \"alias2\");\n+ fail(\"should fail\");\n+ } catch (IllegalArgumentException ex) {\n+ assertThat(ex.getMessage(), is(\"index/alias [alias2] provided 
with routing value [1,2] that resolved to several routing values, rejecting operation\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataTests.java",
"status": "modified"
},
{
"diff": "@@ -51,24 +51,27 @@ public void testResolveIndexRouting() throws Exception {\n client().admin().indices().prepareAliases().addAliasAction(newAddAliasAction(\"test1\", \"alias0\").routing(\"0\")).execute().actionGet();\n client().admin().indices().prepareAliases().addAliasAction(newAddAliasAction(\"test2\", \"alias0\").routing(\"0\")).execute().actionGet();\n \n- assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"test1\"), nullValue());\n- assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"alias\"), nullValue());\n-\n- assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"test1\"), nullValue());\n- assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"alias10\"), equalTo(\"0\"));\n- assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"alias20\"), equalTo(\"0\"));\n- assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"alias21\"), equalTo(\"1\"));\n- assertThat(clusterService().state().metaData().resolveIndexRouting(\"3\", \"test1\"), equalTo(\"3\"));\n- assertThat(clusterService().state().metaData().resolveIndexRouting(\"0\", \"alias10\"), equalTo(\"0\"));\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, null, \"test1\"), nullValue());\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, null, \"alias\"), nullValue());\n+\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, null, \"test1\"), nullValue());\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, null, \"alias10\"), equalTo(\"0\"));\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, null, \"alias20\"), equalTo(\"0\"));\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, null, \"alias21\"), equalTo(\"1\"));\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"3\", \"test1\"), equalTo(\"3\"));\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(null, \"0\", \"alias10\"), equalTo(\"0\"));\n+\n+ // Force the alias routing and ignore the parent.\n+ assertThat(clusterService().state().metaData().resolveIndexRouting(\"1\", null, \"alias10\"), equalTo(\"0\"));\n try {\n- clusterService().state().metaData().resolveIndexRouting(\"1\", \"alias10\");\n+ clusterService().state().metaData().resolveIndexRouting(null, \"1\", \"alias10\");\n fail(\"should fail\");\n } catch (IllegalArgumentException e) {\n // all is well, we can't have two mappings, one provided, and one in the alias\n }\n \n try {\n- clusterService().state().metaData().resolveIndexRouting(null, \"alias0\");\n+ clusterService().state().metaData().resolveIndexRouting(null, null, \"alias0\");\n fail(\"should fail\");\n } catch (IllegalArgumentException ex) {\n // Expected",
"filename": "core/src/test/java/org/elasticsearch/routing/AliasResolveRoutingIT.java",
"status": "modified"
},
{
"diff": "@@ -223,6 +223,7 @@ Can't be used to update the routing of an existing document.\n Parent is used to route the update request to the right shard and sets the\n parent for the upsert request if the document being updated doesn't exist.\n Can't be used to update the `parent` of an existing document.\n+If an alias index routing is specified then it overrides the parent routing and it is used to route the request.\n \n `timeout`::\n ",
"filename": "docs/reference/docs/update.asciidoc",
"status": "modified"
},
{
"diff": "@@ -193,8 +193,8 @@ curl -XPOST 'http://localhost:9200/_aliases' -d '\n As shown in the example above, search routing may contain several values\n separated by comma. Index routing can contain only a single value.\n \n-If an operation that uses routing alias also has a routing parameter, an\n-intersection of both alias routing and routing specified in the\n+If a search operation that uses routing alias also has a routing parameter, an\n+intersection of both search alias routing and routing specified in the\n parameter is used. For example the following command will use \"2\" as a\n routing value:\n \n@@ -203,6 +203,9 @@ routing value:\n curl -XGET 'http://localhost:9200/alias2/_search?q=user:kimchy&routing=2,3'\n --------------------------------------------------\n \n+If an index operation that uses index routing alias also has a parent routing, the\n+parent routing is ignored.\n+\n [float]\n [[alias-adding]]\n === Add a single alias",
"filename": "docs/reference/indices/aliases.asciidoc",
"status": "modified"
}
]
}
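
For illustration only: a simplified, standalone sketch of the precedence settled on in the review above (alias index routing wins over the parent routing, an explicit routing must not conflict with the alias routing, and the parent id is only used as a fallback). This is not the actual `MetaData#resolveIndexRouting` code, just the decision logic in isolation; names are made up.

```java
public final class RoutingPrecedenceSketch {

    /** Resolves the routing for a single-document request against an (optional) alias index routing. */
    static String resolveRouting(String parent, String routing, String aliasIndexRouting) {
        if (aliasIndexRouting != null) {
            if (routing != null && routing.equals(aliasIndexRouting) == false) {
                // same behaviour as before the change: an explicit routing that
                // disagrees with the alias routing is rejected, not silently ignored
                throw new IllegalArgumentException("explicit routing [" + routing
                        + "] conflicts with alias routing [" + aliasIndexRouting + "]");
            }
            return aliasIndexRouting; // alias routing wins over the parent routing
        }
        if (routing != null) {
            return routing;           // explicit routing, no alias routing involved
        }
        return parent;                // fall back to the parent id
    }

    public static void main(String[] args) {
        System.out.println(resolveRouting("parent-1", null, "0"));  // 0 (alias wins over parent)
        System.out.println(resolveRouting("parent-1", null, null)); // parent-1
        System.out.println(resolveRouting(null, "3", null));        // 3
    }
}
```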
|
{
"body": "Hi!\nRunning elasticsearch 2.1.0 I encountered weird behavior when trying to execute update. Tried to execute this query:\n\n```\nPOST my-index/my-type/my_id/_update\n{\"doc\": {\"my_field_name\": \"blabla\"}, \"doc_as_upsert\": true, \"fields\": \"_source\"}\n```\n\nand got **OutOfMemoryError[Java heap space]** (it took Elasticsearch a few seconds to answer).\n\nAs I understand it, the format of the query above is wrong. The value of \"fields\" should be list, not string (indeed when correcting the query the update worked fine). In a case like that I would expect to get some kind of an error indicating my request is not valid but It seems like some internal error have happen in elasticsearch and resulted in OutOfMemoryError.\n\nThis is the stack trace:\n`java.lang.OutOfMemoryError: Java heap space\n at java.util.Arrays.copyOf(Arrays.java:2219)\n at java.util.ArrayList.grow(ArrayList.java:242)\n at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)\n at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)\n at java.util.ArrayList.add(ArrayList.java:440)\n at org.elasticsearch.common.xcontent.support.AbstractXContentParser.readList(AbstractXContentParser.java:290)\n at org.elasticsearch.common.xcontent.support.AbstractXContentParser.readList(AbstractXContentParser.java:253)\n at org.elasticsearch.common.xcontent.support.AbstractXContentParser.list(AbstractXContentParser.java:218)\n at org.elasticsearch.action.update.UpdateRequest.source(UpdateRequest.java:672)\n at org.elasticsearch.rest.action.update.RestUpdateAction.handleRequest(RestUpdateAction.java:101)\n at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)\n at org.elasticsearch.rest.RestController.executeHandler(RestController.java:207)\n at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)\n at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)\n at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)\n at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:348)\n at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:63)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)\n at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at 
org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n`\n",
"comments": [
{
"body": "thanks for reporting @anyatch - nice catch\n",
"created_at": "2015-12-14T15:28:31Z"
}
],
"number": 15338,
"title": "OutOfMemoryError[Java heap space] when executing update with bad fields"
}
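The corrected request mentioned in the report above simply sends `fields` as an array. A minimal sketch of that shape through the 2.x Java `UpdateRequest` API that the fix's test below exercises; index, type and id values are illustrative:

```java
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.common.bytes.BytesArray;

public class FieldsAsArrayExample {
    public static void main(String[] args) throws Exception {
        // Corrected body: "fields" is an array, not a bare string.
        // Index/type/id are made up; the API calls mirror the test added by the fix below.
        UpdateRequest request = new UpdateRequest("my-index", "my-type", "my_id")
            .source(new BytesArray(
                "{\"doc\": {\"my_field_name\": \"blabla\"}, \"doc_as_upsert\": true, \"fields\": [\"_source\"]}"));
        // The requested return fields are now parsed from the array.
        System.out.println(java.util.Arrays.toString(request.fields())); // [_source]
    }
}
```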
|
{
"body": "closes #15338\n",
"number": 15350,
"review_comments": [
{
"body": "Do we ever read lists of objects with this?\n",
"created_at": "2015-12-09T21:07:24Z"
},
{
"body": "maybe this should be `token != null && (token.isValue() || token == VALUE_NULL)`?\n",
"created_at": "2015-12-10T09:33:29Z"
},
{
"body": "Yes, the condition `token != null && token.isValue()` is not enough, we need to consume all the parser and stop when we hit the end of the array or the end of the parent's object.\n",
"created_at": "2015-12-10T13:17:11Z"
},
{
"body": "See previous comment\n",
"created_at": "2015-12-10T13:17:20Z"
},
{
"body": "why do we need to check for END_OBJECT?\n",
"created_at": "2015-12-10T13:18:06Z"
},
{
"body": "Can you test with eg. `{\"foo\": 1, \"bar\": 2}`? I'm afraid calling parser.list() on foo might return [1,2] given how it's implemented?\n",
"created_at": "2015-12-10T13:20:56Z"
},
{
"body": "It is a small safeguard in case some code tries to read a list but only a value is available. `list()` and `readList()` methods should only be called when parsing an array but who knows.\n",
"created_at": "2015-12-10T13:24:44Z"
},
{
"body": "could we throw an exception instead then?\n",
"created_at": "2015-12-10T13:25:11Z"
},
{
"body": "it will also contain '2' in that case right? this feels wrong, doesn't it?\n",
"created_at": "2016-03-17T10:42:22Z"
}
],
"title": "Fix OOM in AbstractXContentParser"
}
|
{
"commits": [
{
"message": "Fix OOM in AbstractXContentParser\n\nThis commit fixes an OOM error that happens when the XContentParser.readList() method is asked to parse a single value instead of an array. It fixes the UpdateRequest parsing as well as remove some leniency in the readList() method so that it expect to be in an array before parsing values.\n\ncloses #15338"
}
],
"files": [
{
"diff": "@@ -44,6 +44,7 @@\n import org.elasticsearch.script.ScriptService.ScriptType;\n \n import java.io.IOException;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n@@ -671,9 +672,15 @@ public UpdateRequest source(BytesReference source) throws Exception {\n } else if (\"detect_noop\".equals(currentFieldName)) {\n detectNoop(parser.booleanValue());\n } else if (\"fields\".equals(currentFieldName)) {\n- List<Object> values = parser.list();\n- String[] fields = values.toArray(new String[values.size()]);\n- fields(fields);\n+ List<Object> fields = null;\n+ if (token == XContentParser.Token.START_ARRAY) {\n+ fields = (List) parser.list();\n+ } else if (token.isValue()) {\n+ fields = Collections.singletonList(parser.text());\n+ }\n+ if (fields != null) {\n+ fields(fields.toArray(new String[fields.size()]));\n+ }\n } else {\n //here we don't have settings available, unable to throw deprecation exceptions\n scriptParameterParser.token(currentFieldName, token, parser, ParseFieldMatcher.EMPTY);",
"filename": "core/src/main/java/org/elasticsearch/action/update/UpdateRequest.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.common.xcontent.support;\n \n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -286,14 +287,21 @@ static Map<String, Object> readMap(XContentParser parser, MapFactory mapFactory)\n \n static List<Object> readList(XContentParser parser, MapFactory mapFactory) throws IOException {\n XContentParser.Token token = parser.currentToken();\n+ if (token == null) {\n+ token = parser.nextToken();\n+ }\n if (token == XContentParser.Token.FIELD_NAME) {\n token = parser.nextToken();\n }\n if (token == XContentParser.Token.START_ARRAY) {\n token = parser.nextToken();\n+ } else {\n+ throw new ElasticsearchParseException(\"Failed to parse list: expecting \"\n+ + XContentParser.Token.START_ARRAY + \" but got \" + token);\n }\n+\n ArrayList<Object> list = new ArrayList<>();\n- for (; token != XContentParser.Token.END_ARRAY; token = parser.nextToken()) {\n+ for (; token != null && token != XContentParser.Token.END_ARRAY; token = parser.nextToken()) {\n list.add(readValue(parser, mapFactory, token));\n }\n return list;",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.io.stream.Streamable;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -36,6 +37,7 @@\n import java.util.Map;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.hamcrest.Matchers.arrayContaining;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n@@ -179,4 +181,17 @@ public void testInvalidBodyThrowsParseException() throws Exception {\n assertThat(e.getMessage(), equalTo(\"Failed to derive xcontent\"));\n }\n }\n+\n+ // Related to issue 15338\n+ public void testFieldsParsing() throws Exception {\n+ UpdateRequest request = new UpdateRequest(\"test\", \"type1\", \"1\")\n+ .source(new BytesArray(\"{\\\"doc\\\": {\\\"field1\\\": \\\"value1\\\"}, \\\"fields\\\": \\\"_source\\\"}\"));\n+ assertThat(request.doc().sourceAsMap().get(\"field1\").toString(), equalTo(\"value1\"));\n+ assertThat(request.fields(), arrayContaining(\"_source\"));\n+\n+ request = new UpdateRequest(\"test\", \"type2\", \"2\")\n+ .source(new BytesArray(\"{\\\"doc\\\": {\\\"field2\\\": \\\"value2\\\"}, \\\"fields\\\": [\\\"field1\\\", \\\"field2\\\"]}\"));\n+ assertThat(request.doc().sourceAsMap().get(\"field2\").toString(), equalTo(\"value2\"));\n+ assertThat(request.fields(), arrayContaining(\"field1\", \"field2\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/update/UpdateRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,78 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.xcontent;\n+\n+import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+import java.util.List;\n+\n+import static org.hamcrest.Matchers.contains;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasSize;\n+import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.nullValue;\n+\n+public class XContentParserTests extends ESTestCase {\n+\n+ public void testReadList() throws IOException {\n+ assertThat(readList(\"{\\\"foo\\\": [\\\"bar\\\"]}\"), contains(\"bar\"));\n+ assertThat(readList(\"{\\\"foo\\\": [\\\"bar\\\",\\\"baz\\\"]}\"), contains(\"bar\", \"baz\"));\n+ assertThat(readList(\"{\\\"foo\\\": [1, 2, 3], \\\"bar\\\": 4}\"), contains(1, 2, 3));\n+ assertThat(readList(\"{\\\"foo\\\": [{\\\"bar\\\":1},{\\\"baz\\\":2},{\\\"qux\\\":3}]}\"), hasSize(3));\n+ assertThat(readList(\"{\\\"foo\\\": [null]}\"), contains(nullValue()));\n+ assertThat(readList(\"{\\\"foo\\\": []}\"), hasSize(0));\n+ assertThat(readList(\"{\\\"foo\\\": [1]}\"), contains(1));\n+ assertThat(readList(\"{\\\"foo\\\": [1,2]}\"), contains(1, 2));\n+ assertThat(readList(\"{\\\"foo\\\": [{},{},{},{}]}\"), hasSize(4));\n+ }\n+\n+ public void testReadListThrowsException() throws IOException {\n+ // Calling XContentParser.list() or listOrderedMap() to read a simple\n+ // value or object should throw an exception\n+ assertReadListThrowsException(\"{\\\"foo\\\": \\\"bar\\\"}\");\n+ assertReadListThrowsException(\"{\\\"foo\\\": 1, \\\"bar\\\": 2}\");\n+ assertReadListThrowsException(\"{\\\"foo\\\": {\\\"bar\\\":\\\"baz\\\"}}\");\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private static <T> List<T> readList(String source) throws IOException {\n+ try (XContentParser parser = XContentType.JSON.xContent().createParser(source)) {\n+ XContentParser.Token token = parser.nextToken();\n+ assertThat(token, equalTo(XContentParser.Token.START_OBJECT));\n+ token = parser.nextToken();\n+ assertThat(token, equalTo(XContentParser.Token.FIELD_NAME));\n+ assertThat(parser.currentName(), equalTo(\"foo\"));\n+ return (List<T>) (randomBoolean() ? parser.listOrderedMap() : parser.list());\n+ }\n+ }\n+\n+ private void assertReadListThrowsException(String source) {\n+ try {\n+ readList(source);\n+ fail(\"should have thrown a parse exception\");\n+ } catch (Exception e) {\n+ assertThat(e, instanceOf(ElasticsearchParseException.class));\n+ assertThat(e.getMessage(), containsString(\"Failed to parse list\"));\n+ }\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/xcontent/XContentParserTests.java",
"status": "added"
}
]
}
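Condensed from the `XContentParserTests` added in this PR, a small sketch of the post-fix contract of `XContentParser.list()`: an array parses into its elements, while a bare value is rejected instead of growing a list until the heap is exhausted. The class name and printed output are illustrative:

```java
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;

public class ReadListExample {
    public static void main(String[] args) throws Exception {
        // An array value parses into its elements.
        try (XContentParser parser = XContentType.JSON.xContent().createParser("{\"fields\": [\"_source\"]}")) {
            parser.nextToken();                 // START_OBJECT
            parser.nextToken();                 // FIELD_NAME "fields"
            System.out.println(parser.list());  // [_source]
        }
        // A bare string is rejected with a parse exception instead of looping until OOM.
        try (XContentParser parser = XContentType.JSON.xContent().createParser("{\"fields\": \"_source\"}")) {
            parser.nextToken();                 // START_OBJECT
            parser.nextToken();                 // FIELD_NAME "fields"
            parser.list();
        } catch (ElasticsearchParseException e) {
            System.out.println(e.getMessage()); // Failed to parse list: expecting START_ARRAY ...
        }
    }
}
```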
|
{
"body": "When you define in `elasticsearch.yml`:\n\n``` yml\nnetwork.bind_host: \"NONEXSISTINGADDRESS\"\n```\n\nIt fails as expected:\n\n```\nException in thread \"main\" BindTransportException[Failed to resolve host [null]]; nested: UnknownHostException[NONEXSISTINGADDRESS: unknown error];\nLikely root cause: java.net.UnknownHostException: NONEXSISTINGADDRESS: unknown error\n at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)\n at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)\n at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)\n at java.net.InetAddress.getAllByName0(InetAddress.java:1276)\n at java.net.InetAddress.getAllByName(InetAddress.java:1192)\n at java.net.InetAddress.getAllByName(InetAddress.java:1126)\n at org.elasticsearch.common.network.NetworkUtils.getAllByName(NetworkUtils.java:203)\n at org.elasticsearch.common.network.NetworkService.resolveInetAddress(NetworkService.java:200)\n at org.elasticsearch.common.network.NetworkService.resolveBindHostAddress(NetworkService.java:111)\n at org.elasticsearch.transport.netty.NettyTransport.bindServerBootstrap(NettyTransport.java:430)\n at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:319)\n at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)\n at org.elasticsearch.transport.TransportService.doStart(TransportService.java:170)\n at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)\n at org.elasticsearch.node.Node.start(Node.java:254)\n at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:221)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:287)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\nRefer to the log for complete error details.\n```\n\nWhen you wrap it in an array, it's totally ignored:\n\n``` yml\nnetwork.bind_host: [\"NONEXSISTINGADDRESS\"]\n```\n\n```\n[2015-12-09 18:00:29,443][INFO ][transport ] [Collector] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[fe80::1]:9300}, {[::1]:9300}\n```\n\nIt means that you can't define a list of bind_host as:\n\n``` yml\nnetwork.bind_host: [\"_eth0:ipv4_\", \"_lo:ipv4_\"]\n```\n\nFrom discussion: https://discuss.elastic.co/t/es-2-1-only-bind-on-localhost/36720\n",
"comments": [
{
"body": "This was never supported, we added it for 2.2: #13954\n\nAlthough IMO, separately the real problem here is that the wrong data type was passed in for a setting, since that one did not support arrays, and nothing gave an error about that\n",
"created_at": "2015-12-09T17:15:03Z"
},
{
"body": "Thanks! I was wondering why it was correct in code and when I was running a unit test...\nWas on 2.x branch! :(\n\nI'm closing this then. I think your comment is already covered by another issue IIRC.\n",
"created_at": "2015-12-09T17:18:16Z"
},
{
"body": "BTW I think we don't have unit test for multiple addresses in 2.x. I just looked at `NetworkServiceTests`.\nDo you think I should send a PR to add this test?\n\n``` java\npublic void testBindMultipleAddresses() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n InetAddress[] addresses = service.resolveBindHostAddresses(new String[]{\"127.0.0.1\", \"127.0.0.2\"});\n assertThat(addresses.length, is(2));\n}\n```\n",
"created_at": "2015-12-09T17:19:26Z"
},
{
"body": "as long as it does not actually bind or rely on local configuration (e.g. number of interfaces) its a good idea.\n",
"created_at": "2015-12-09T17:22:20Z"
}
],
"number": 15340,
"title": "network.bind_host does not support arrays"
}
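For reference, a minimal sketch of how multiple bind addresses resolve once list values are supported (2.2+), mirroring the test proposed in the comment above and added by the follow-up PR below; it assumes the 2.x `NetworkService` API shown in that PR's diff:

```java
import java.net.InetAddress;

import org.elasticsearch.common.network.NetworkService;
import org.elasticsearch.common.settings.Settings;

public class BindHostListExample {
    public static void main(String[] args) throws Exception {
        NetworkService service = new NetworkService(Settings.EMPTY);

        // Two explicit loopback addresses resolve to two bind addresses.
        InetAddress[] addresses = service.resolveBindHostAddresses(new String[]{"127.0.0.1", "127.0.0.2"});
        System.out.println(addresses.length); // 2

        // Mixing a wildcard address with a fixed one is rejected.
        try {
            service.resolveBindHostAddresses(new String[]{"0.0.0.0", "127.0.0.1"});
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // ... is wildcard, but multiple addresses specified ...
        }
    }
}
```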
|
{
"body": "Follow up for #15340\n\nWe test that bind with wilcard IP + fixed IP it raises an exception\nWe test binding multiple IPs\n",
"number": 15342,
"review_comments": [],
"title": "add more tests to network service"
}
|
{
"commits": [
{
"message": "add more tests to network service\n\nFollow up for #15340\n\nWe test that bind with wilcard IP + fixed IP it raises an exception\nWe test binding multiple IPs"
}
],
"files": [
{
"diff": "@@ -24,14 +24,16 @@\n \n import java.net.InetAddress;\n \n+import static org.hamcrest.Matchers.is;\n+\n /**\n * Tests for network service... try to keep them safe depending upon configuration\n * please don't actually bind to anything, just test the addresses.\n */\n public class NetworkServiceTests extends ESTestCase {\n \n- /** \n- * ensure exception if we bind to multicast ipv4 address \n+ /**\n+ * ensure exception if we bind to multicast ipv4 address\n */\n public void testBindMulticastV4() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n@@ -42,9 +44,8 @@ public void testBindMulticastV4() throws Exception {\n assertTrue(e.getMessage().contains(\"invalid: multicast\"));\n }\n }\n- \n- /** \n- * ensure exception if we bind to multicast ipv6 address \n+ /**\n+ * ensure exception if we bind to multicast ipv6 address\n */\n public void testBindMulticastV6() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n@@ -55,9 +56,9 @@ public void testBindMulticastV6() throws Exception {\n assertTrue(e.getMessage().contains(\"invalid: multicast\"));\n }\n }\n- \n- /** \n- * ensure exception if we publish to multicast ipv4 address \n+\n+ /**\n+ * ensure exception if we publish to multicast ipv4 address\n */\n public void testPublishMulticastV4() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n@@ -68,9 +69,9 @@ public void testPublishMulticastV4() throws Exception {\n assertTrue(e.getMessage().contains(\"invalid: multicast\"));\n }\n }\n- \n- /** \n- * ensure exception if we publish to multicast ipv6 address \n+\n+ /**\n+ * ensure exception if we publish to multicast ipv6 address\n */\n public void testPublishMulticastV6() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n@@ -82,37 +83,59 @@ public void testPublishMulticastV6() throws Exception {\n }\n }\n \n- /** \n- * ensure specifying wildcard ipv4 address will bind to all interfaces \n+ /**\n+ * ensure specifying wildcard ipv4 address will bind to all interfaces\n */\n public void testBindAnyLocalV4() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n assertEquals(InetAddress.getByName(\"0.0.0.0\"), service.resolveBindHostAddresses(new String[] { \"0.0.0.0\" })[0]);\n }\n- \n- /** \n- * ensure specifying wildcard ipv6 address will bind to all interfaces \n+\n+ /**\n+ * ensure specifying wildcard ipv6 address will bind to all interfaces\n */\n public void testBindAnyLocalV6() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n assertEquals(InetAddress.getByName(\"::\"), service.resolveBindHostAddresses(new String[] { \"::\" })[0]);\n }\n \n- /** \n- * ensure specifying wildcard ipv4 address selects reasonable publish address \n+ /**\n+ * ensure specifying wildcard ipv4 address selects reasonable publish address\n */\n public void testPublishAnyLocalV4() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n InetAddress address = service.resolvePublishHostAddresses(new String[] { \"0.0.0.0\" });\n assertFalse(address.isAnyLocalAddress());\n }\n \n- /** \n- * ensure specifying wildcard ipv6 address selects reasonable publish address \n+ /**\n+ * ensure specifying wildcard ipv6 address selects reasonable publish address\n */\n public void testPublishAnyLocalV6() throws Exception {\n NetworkService service = new NetworkService(Settings.EMPTY);\n InetAddress address = service.resolvePublishHostAddresses(new String[] { 
\"::\" });\n assertFalse(address.isAnyLocalAddress());\n }\n+\n+ /**\n+ * ensure we can bind to multiple addresses\n+ */\n+ public void testBindMultipleAddresses() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ InetAddress[] addresses = service.resolveBindHostAddresses(new String[]{\"127.0.0.1\", \"127.0.0.2\"});\n+ assertThat(addresses.length, is(2));\n+ }\n+\n+ /**\n+ * ensure we can't bind to multiple addresses when using wildcard\n+ */\n+ public void testBindMultipleAddressesWithWildcard() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ try {\n+ service.resolveBindHostAddresses(new String[]{\"0.0.0.0\", \"127.0.0.1\"});\n+ fail(\"should have hit exception\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"is wildcard, but multiple addresses specified\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/network/NetworkServiceTests.java",
"status": "modified"
}
]
}
|
{
"body": "I was trying to create aliases with routing on an index that includes parent/child docs. Posting to an alias endpoint while specifying \"parent=X\" causes an error. I was thinking it shouldn't, because the parent/child routing is to ensure the docs will wind up on the same shard, but this is already guaranteed by the routed alias.\n\nCurl recreation:\n\nhttps://gist.github.com/erikcameron/5621421\n\nAs noted, it looks like it works if you give parent and routing explicitly. \n\nThanks!\n-E\n",
"comments": [
{
"body": "Makes sense, ES shouldn't throw an error when indexing into an alias and `parent` is set. Assuming that someone wants to override the parent routing (which is automatically set when `parent` is present in a index request) when indexing into an index alias, ES should use the routing specified in the index alias.\n",
"created_at": "2013-05-29T13:29:53Z"
},
{
"body": "Just ran into this same issue, is it likely to be fixed soon?\n",
"created_at": "2014-08-24T23:15:04Z"
},
{
"body": "Just had this issue too, I've lost lot of time finding out why it failed like this :).\n",
"created_at": "2014-10-21T17:13:33Z"
},
{
"body": "Any news on this issue @martijnvg ?\n",
"created_at": "2014-11-28T18:06:52Z"
},
{
"body": "Any chance of this getting back-ported to ES 2.x? This is a rather high priority bug for us since we can't determine the routing of the parent without investigating the alias directly before every insert (which is very expensive).\n",
"created_at": "2016-04-28T11:18:20Z"
},
{
"body": "any news on this? This bug is happening still in 2.3.3, has been ported to any 2.x tag?\n",
"created_at": "2016-07-29T09:23:49Z"
}
],
"number": 3068,
"title": "conflict between alias routing and parent/child routing"
}
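To make the conflict concrete (the original curl recreation is in the gist linked above), the failing case is indexing a child document through an alias that carries its own routing while also passing `parent`. A hypothetical sketch in terms of the 2.x Java `IndexRequest` API; the alias, type, id and field names are invented for illustration:

```java
import org.elasticsearch.action.index.IndexRequest;

public class AliasParentRoutingExample {
    public static void main(String[] args) {
        // "orders-alias" is assumed to be an alias configured with its own index routing.
        // All names and values here are illustrative.
        IndexRequest request = new IndexRequest("orders-alias", "order_line", "1")
            .parent("42")                       // child document pointing at parent "42"
            .source("{\"sku\": \"abc\"}");
        // Before the fix this combination failed with a routing conflict, because the
        // parent-derived routing clashed with the alias routing; after the fix the alias
        // routing takes precedence (see the PR below).
        System.out.println(request.parent());  // 42
    }
}
```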
|
{
"body": "Separates routing and parent in all documentrequest in order to be able to distinguish an explicit routing value from a parent routing.\n\nThe final value for the routing is returned by MetaData.resolveIndexRouting which\nresolves conflicts between routing, parent routing and alias routing with the following rules:\n- If the routing is specified in the request then parent routing and alias routing are ignored.\n- If the routing is not specified:\n - The parent routing is ignored if there is an alias routing that matches the request.\n - Otherwise the parent routing is applied.\n Fixes #3068\n",
"number": 15336,
"review_comments": [
{
"body": "you would need to protect this read with a version check so that it does not fail when reading data coming from an old node - this would look like this:\n\n``` java\nif (in.getVersion().onOrAfter(Version.V_2_2_0)) {\n parent = in.readOptionalString();\n} else {\n parent = null;\n}\n```\n",
"created_at": "2015-12-09T18:18:05Z"
},
{
"body": "similarly to the read, you need to protect the write with a version check. Additionally since you changed the parent setter to not set the routing, I think you would need to do the following:\n\n``` java\nif (out.getVersion().onOrAfter(Version.V_2_2_0)) {\n out.writeOptionalString(routing());\n out.writeOptionalString(parent());\n} else {\n String routing = routing();\n if (routing == null) {\n routing = parent();\n }\n out.writeOptionalString(routing);\n}\n```\n",
"created_at": "2015-12-09T18:20:11Z"
},
{
"body": "version check\n",
"created_at": "2015-12-09T18:23:21Z"
},
{
"body": "version check\n",
"created_at": "2015-12-09T18:23:26Z"
},
{
"body": "you should change the writeTo method to take care of nodes on older versions like in DeleteRequest\n",
"created_at": "2015-12-09T18:25:11Z"
},
{
"body": "similar concerns as DeleteRequest\n",
"created_at": "2015-12-09T18:25:32Z"
},
{
"body": "similar concerns as IndexRequest\n",
"created_at": "2015-12-09T18:25:58Z"
},
{
"body": "can we keep the `if (!routing.equals(aliasMd.indexRouting())) { throw XXX }` in case both a routing key and an alias routing are set?\n",
"created_at": "2015-12-09T18:32:25Z"
},
{
"body": "otherwise you could index into an alias that target a specific shard and yet index in other shards by specifying a routing key, which I guess could be seen as a bug\n",
"created_at": "2015-12-09T18:33:03Z"
},
{
"body": "IMO if the user adds a routing to an index request this means that he wants to override the default behavior, right? (including the alias routing) otherwise we could have the same reasoning when a parent AND a routing are specified. If the priorities are well defined then I don't think it's a bug but rather a RTFM ;) .\n",
"created_at": "2015-12-10T08:42:03Z"
}
],
"title": "Resolve index routing conflicts with priorities"
}
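The rules in the description above reduce to a three-way precedence: explicit request routing, then matching alias routing, then parent routing. A simplified, self-contained sketch of that precedence, not the actual `MetaData.resolveIndexRouting` implementation (which, per the review comments, also validates an explicit routing against the alias routing):

```java
public class RoutingPrecedenceSketch {

    /**
     * Resolves the effective routing following the rules described in the PR:
     * an explicit request routing wins, then a matching alias routing, then the
     * parent routing. Any argument may be null when not supplied.
     */
    static String resolveRouting(String requestRouting, String aliasRouting, String parentRouting) {
        if (requestRouting != null) {
            return requestRouting;   // explicit routing: parent and alias routing are ignored
        }
        if (aliasRouting != null) {
            return aliasRouting;     // alias routing overrides the parent routing
        }
        return parentRouting;        // fall back to the parent routing (may be null)
    }

    public static void main(String[] args) {
        System.out.println(resolveRouting("r1", "a1", "p1")); // r1
        System.out.println(resolveRouting(null, "a1", "p1")); // a1
        System.out.println(resolveRouting(null, null, "p1")); // p1
    }
}
```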
|
{
"commits": [
{
"message": "Merge pull request #15085 from kaneshin/docs/modify/post_filter\n\nRemove a trailing comma from an example data of JSON"
},
{
"message": "Document that _index is a virtual field and only supports term queries\n\nCloses #15070\nCloses #15081"
},
{
"message": "Fixes #14489\n Do not to load fields from _source when using the `fields` option.\n Non stored (non existing) fields are ignored by the fields visitor when using the `fields` option.\n\nFixes #10783\n Support * wildcard to retrieve stored fields when using the `fields` option.\n Supported pattern styles are \"xxx*\", \"*xxx\", \"*xxx*\" and \"xxx*yyy\"."
},
{
"message": "Merge pull request #15017 from jimferenczi/fields_option\n\nRefuse to load fields from _source when using the `fields` option and support wildcards."
},
{
"message": "Tests: Correction in AbstractShapeBuilderTestCase\n\nRemoved check that two shapes that are different according\nto equals() have different hashCode since that is not required\nby the contract of hashCode."
},
{
"message": "add java-api doc about shading / embedding\n\nTwo new sections added\n\n* Dealing with JAR dependency conflicts\n* Embedding jar with dependencies\n\nCloses #15071."
},
{
"message": "Add simple EditorConfig\n\nThe EditorConfig file applies the formatting rules described in CONTRIBUTING.md."
},
{
"message": "Split cluster state update tasks into roles\n\nThis commit splits cluster state update tasks into roles. Those roles\nare:\n - task info\n - task configuration\n - task executor\n - task listener\n\nAll tasks that have the same executor will be executed in batches. This\nremoves the need for local batching as was previously in\nMetaDataMappingService.\n\nAdditionally, this commit reintroduces batching on mapping update calls.\n\nRelates #13627"
},
{
"message": "Add test for cluster state batch updates"
},
{
"message": "Simplify grouping of cluster state update tasks"
},
{
"message": "Simplify InternalClusterService#submitStateUpdateTask with lambdas"
},
{
"message": "Explicitly correspond cluster state tasks and execution results"
},
{
"message": "Simplify loop in InternalClusterService#runTasksForExecutor"
},
{
"message": "Add builder to create cluster state executor results"
},
{
"message": "ClusterStateTaskListener#onNoLongerMaster now throws NotMasterException"
},
{
"message": "Add docs for cluster state update task batching"
},
{
"message": "[doc] Information on JVM fork count\n\nI spent 20 minutes reading gradle docs to figure out how to do this. No one\nelse should have to do that.\n\nAlso, some of the documentation was out of date."
},
{
"message": "Merge pull request #14899 from jasontedor/cluster-state-batch\n\nSplit cluster state update tasks into roles"
},
{
"message": "fix toXContent() for mapper attachments field\n\nWe must use simpleName() instead of name() because otherwise when the mapping\nis generated as a string the field name will be the full path with dots\nand that is illegal from es 2.0 on.\n\ncloses https://github.com/elastic/elasticsearch-mapper-attachments/issues/169"
},
{
"message": "S3 repository: fix spelling error\n\nReported at https://github.com/elastic/elasticsearch-cloud-aws/pull/221"
},
{
"message": "[docs] Updating the Python client docxs"
},
{
"message": "Update scripting.asciidoc\n\nFix script syntax for script_score\r\n\r\nCloses #15096"
},
{
"message": "Merge pull request #15108 from joschi/editorconfig\n\nAdd simple EditorConfig"
},
{
"message": "Build: Increase the number of failed tests shown in test summary\n\nWe had increased this in maven, but it was lost in the transition to\ngradle. This change adds it as a configurable setting the the logger for\nrandomized testing and bumps it to 25."
},
{
"message": "Merge pull request #15126 from rjernst/bump_failures_summary\n\nIncrease the number of failed tests shown in test summary"
},
{
"message": "Fix unit tests to bind to port 0.\n\nI will followup with ITs and other modules. By fixing this, these tests become more reliable (will never sporatically\nfail due to other stuff on your machine: ports are assigned by the OS), and it allows us to move forward with\ngradle parallel builds, in my tests this is a nice speedup, but we can't do it until tests are cleaned up"
},
{
"message": "Merge pull request #15127 from rmuir/unit_tests_port_0\n\nFix unit tests to bind to port 0."
},
{
"message": "AwaitsFix testDynamicUpdates\n\npending on https://github.com/elastic/elasticsearch/issues/15129"
},
{
"message": "multi field names may not contain dots\n\nrelated to #14957"
},
{
"message": "[TEST] mark test as awaitsfix: RareClusterStateIT.testDeleteCreateInOneBulk()"
}
],
"files": [
{
"diff": "@@ -0,0 +1,85 @@\n+((java-mode\n+ .\n+ ((eval\n+ .\n+ (progn\n+ (defun my/point-in-defun-declaration-p ()\n+ (let ((bod (save-excursion (c-beginning-of-defun)\n+ (point))))\n+ (<= bod\n+ (point)\n+ (save-excursion (goto-char bod)\n+ (re-search-forward \"{\")\n+ (point)))))\n+\n+ (defun my/is-string-concatenation-p ()\n+ \"Returns true if the previous line is a string concatenation\"\n+ (save-excursion\n+ (let ((start (point)))\n+ (forward-line -1)\n+ (if (re-search-forward \" \\\\\\+$\" start t) t nil))))\n+\n+ (defun my/inside-java-lambda-p ()\n+ \"Returns true if point is the first statement inside of a lambda\"\n+ (save-excursion\n+ (c-beginning-of-statement-1)\n+ (let ((start (point)))\n+ (forward-line -1)\n+ (if (search-forward \" -> {\" start t) t nil))))\n+\n+ (defun my/trailing-paren-p ()\n+ \"Returns true if point is a training paren and semicolon\"\n+ (save-excursion\n+ (end-of-line)\n+ (let ((endpoint (point)))\n+ (beginning-of-line)\n+ (if (re-search-forward \"[ ]*);$\" endpoint t) t nil))))\n+\n+ (defun my/prev-line-call-with-no-args-p ()\n+ \"Return true if the previous line is a function call with no arguments\"\n+ (save-excursion\n+ (let ((start (point)))\n+ (forward-line -1)\n+ (if (re-search-forward \".($\" start t) t nil))))\n+\n+ (defun my/arglist-cont-nonempty-indentation (arg)\n+ (if (my/inside-java-lambda-p)\n+ '+\n+ (if (my/is-string-concatenation-p)\n+ 16\n+ (unless (my/point-in-defun-declaration-p) '++))))\n+\n+ (defun my/statement-block-intro (arg)\n+ (if (and (c-at-statement-start-p) (my/inside-java-lambda-p)) 0 '+))\n+\n+ (defun my/block-close (arg)\n+ (if (my/inside-java-lambda-p) '- 0))\n+\n+ (defun my/arglist-close (arg) (if (my/trailing-paren-p) 0 '--))\n+\n+ (defun my/arglist-intro (arg)\n+ (if (my/prev-line-call-with-no-args-p) '++ 0))\n+\n+ (c-set-offset 'inline-open 0)\n+ (c-set-offset 'topmost-intro-cont '+)\n+ (c-set-offset 'statement-block-intro 'my/statement-block-intro)\n+ (c-set-offset 'block-close 'my/block-close)\n+ (c-set-offset 'knr-argdecl-intro '+)\n+ (c-set-offset 'substatement-open '+)\n+ (c-set-offset 'substatement-label '+)\n+ (c-set-offset 'case-label '+)\n+ (c-set-offset 'label '+)\n+ (c-set-offset 'statement-case-open '+)\n+ (c-set-offset 'statement-cont '++)\n+ (c-set-offset 'arglist-intro 'my/arglist-intro)\n+ (c-set-offset 'arglist-cont-nonempty '(my/arglist-cont-nonempty-indentation c-lineup-arglist))\n+ (c-set-offset 'arglist-close 'my/arglist-close)\n+ (c-set-offset 'inexpr-class 0)\n+ (c-set-offset 'access-label 0)\n+ (c-set-offset 'inher-intro '++)\n+ (c-set-offset 'inher-cont '++)\n+ (c-set-offset 'brace-list-intro '+)\n+ (c-set-offset 'func-decl-cont '++)\n+ ))\n+ (c-basic-offset . 4)\n+ (c-comment-only-line-offset . (0 . 0)))))",
"filename": ".dir-locals.el",
"status": "added"
},
{
"diff": "@@ -0,0 +1,10 @@\n+# EditorConfig: http://editorconfig.org/\n+\n+root = true\n+\n+[*.java]\n+charset = utf-8\n+indent_style = space\n+indent_size = 4\n+trim_trailing_whitespace = true\n+insert_final_newline = true",
"filename": ".editorconfig",
"status": "added"
},
{
"diff": "@@ -8,8 +8,8 @@ work/\n logs/\n .DS_Store\n build/\n-target/\n-*-execution-hints.log\n+generated-resources/\n+**/.local*\n docs/html/\n docs/build.log\n /tmp/\n@@ -31,3 +31,7 @@ nb-configuration.xml\n nbactions.xml\n \n dependency-reduced-pom.xml\n+\n+# old patterns specific to maven\n+*-execution-hints.log\n+target/",
"filename": ".gitignore",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,32 @@\n+-/target\n+-/core/target\n+-/qa/target\n+-/rest-api-spec/target\n+-/test-framework/target\n+-/plugins/target\n+-/plugins/analysis-icu/target\n+-/plugins/analysis-kuromoji/target\n+-/plugins/analysis-phonetic/target\n+-/plugins/analysis-smartcn/target\n+-/plugins/analysis-stempel/target\n+-/plugins/cloud-aws/target\n+-/plugins/cloud-azure/target\n+-/plugins/cloud-gce/target\n+-/plugins/delete-by-query/target\n+-/plugins/discovery-azure/target\n+-/plugins/discovery-ec2/target\n+-/plugins/discovery-gce/target\n+-/plugins/discovery-multicast/target\n+-/plugins/jvm-example/target\n+-/plugins/lang-expression/target\n+-/plugins/lang-groovy/target\n+-/plugins/lang-javascript/target\n+-/plugins/lang-python/target\n+-/plugins/mapper-murmur3/target\n+-/plugins/mapper-size/target\n+-/plugins/repository-azure/target\n+-/plugins/repository-s3/target\n+-/plugins/site-example/target\n+-/plugins/store-smb/target\n+-/plugins/target\n+-*.class",
"filename": ".projectile",
"status": "added"
},
{
"diff": "@@ -74,11 +74,9 @@ Then sit back and wait. There will probably be discussion about the pull request\n Contributing to the Elasticsearch codebase\n ------------------------------------------\n \n-**Repository:** [https://github.com/elasticsearch/elasticsearch](https://github.com/elastic/elasticsearch)\n+**Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)\n \n-Make sure you have [Maven](http://maven.apache.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE by running `mvn eclipse:eclipse` and then importing the project into their workspace: `File > Import > Existing project into workspace`.\n-\n-Elasticsearch also works perfectly with Eclipse's [m2e](http://www.eclipse.org/m2e/). Once you've installed m2e you can import Elasticsearch as an `Existing Maven Project`.\n+Make sure you have [Gradle](http://gradle.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE: `gradle eclipse` then `File: Import: Existing Projects into Workspace`. Select the option `Search for nested projects`. Additionally you will want to ensure that Eclipse is using 2048m of heap by modifying `eclipse.ini` accordingly to avoid GC overhead errors.\n \n Please follow these formatting guidelines:\n \n@@ -92,15 +90,15 @@ To create a distribution from the source, simply run:\n \n ```sh\n cd elasticsearch/\n-mvn clean package -DskipTests\n+gradle assemble\n ```\n \n-You will find the newly built packages under: `./target/releases/`.\n+You will find the newly built packages under: `./distribution/build/distributions/`.\n \n Before submitting your changes, run the test suite to make sure that nothing is broken, with:\n \n ```sh\n-mvn clean test -Dtests.slow=true\n+gradle check\n ```\n \n Source: [Contributing to elasticsearch](https://www.elastic.co/contributing-to-elasticsearch/)",
"filename": "CONTRIBUTING.md",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,7 @@\n+As a quick helper, below are the equivalent commands from maven to gradle (TESTING.md has also been updated). You can also run \"gradle tasks\" to see all tasks that are available to run.\n+clean -> clean\n+test -> test\n+verify -> check\n+verify -Dskip.unit.tests -> integTest\n+package -DskipTests -> assemble\n+install -DskipTests -> install",
"filename": "GRADLE.CHEATSHEET",
"status": "added"
},
{
"diff": "@@ -200,19 +200,22 @@ We have just covered a very small portion of what Elasticsearch is all about. Fo\n \n h3. Building from Source\n \n-Elasticsearch uses \"Maven\":http://maven.apache.org for its build system.\n+Elasticsearch uses \"Gradle\":http://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.8 should do.\n \n-In order to create a distribution, simply run the @mvn clean package\n--DskipTests@ command in the cloned directory.\n+In order to create a distribution, simply run the @gradle build@ command in the cloned directory.\n \n The distribution for each project will be created under the @target/releases@ directory in that project.\n \n See the \"TESTING\":TESTING.asciidoc file for more information about\n running the Elasticsearch test suite.\n \n-h3. Upgrading to Elasticsearch 1.x?\n+h3. Upgrading from Elasticsearch 1.x?\n \n-In order to ensure a smooth upgrade process from earlier versions of Elasticsearch (< 1.0.0), it is recommended to perform a full cluster restart. Please see the \"setup reference\":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.\n+In order to ensure a smooth upgrade process from earlier versions of\n+Elasticsearch (1.x), it is required to perform a full cluster restart. Please\n+see the \"setup reference\":\n+https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html\n+for more details on the upgrade process.\n \n h1. License\n ",
"filename": "README.textile",
"status": "modified"
},
{
"diff": "@@ -13,7 +13,7 @@ To create a distribution without running the tests, simply run the\n following:\n \n -----------------------------\n-mvn clean package -DskipTests\n+gradle assemble\n -----------------------------\n \n == Other test options\n@@ -35,7 +35,7 @@ Use local transport (default since 1.3):\n Alternatively, you can set the `ES_TEST_LOCAL` environment variable:\n \n -------------------------------------\n-export ES_TEST_LOCAL=true && mvn test\n+export ES_TEST_LOCAL=true && gradle test\n -------------------------------------\n \n === Running Elasticsearch from a checkout\n@@ -44,7 +44,7 @@ In order to run Elasticsearch from source without building a package, you can\n run it using Maven:\n \n -------------------------------------\n-./run.sh\n+gradle run\n -------------------------------------\n \n === Test case filtering.\n@@ -55,20 +55,20 @@ run it using Maven:\n Run a single test case (variants)\n \n ----------------------------------------------------------\n-mvn test -Dtests.class=org.elasticsearch.package.ClassName\n-mvn test \"-Dtests.class=*.ClassName\"\n+gradle test -Dtests.class=org.elasticsearch.package.ClassName\n+gradle test \"-Dtests.class=*.ClassName\"\n ----------------------------------------------------------\n \n Run all tests in a package and sub-packages\n \n ----------------------------------------------------\n-mvn test \"-Dtests.class=org.elasticsearch.package.*\"\n+gradle test \"-Dtests.class=org.elasticsearch.package.*\"\n ----------------------------------------------------\n \n Run any test methods that contain 'esi' (like: ...r*esi*ze...).\n \n -------------------------------\n-mvn test \"-Dtests.method=*esi*\"\n+gradle test \"-Dtests.method=*esi*\"\n -------------------------------\n \n You can also filter tests by certain annotations ie:\n@@ -81,23 +81,23 @@ You can also filter tests by certain annotations ie:\n Those annotation names can be combined into a filter expression like:\n \n ------------------------------------------------\n-mvn test -Dtests.filter=\"@nightly and not @backwards\"\n+gradle test -Dtests.filter=\"@nightly and not @backwards\"\n ------------------------------------------------\n \n to run all nightly test but not the ones that are backwards tests. 
`tests.filter` supports\n the boolean operators `and, or, not` and grouping ie:\n \n \n ---------------------------------------------------------------\n-mvn test -Dtests.filter=\"@nightly and not(@badapple or @backwards)\"\n+gradle test -Dtests.filter=\"@nightly and not(@badapple or @backwards)\"\n ---------------------------------------------------------------\n \n === Seed and repetitions.\n \n Run with a given seed (seed is a hex-encoded long).\n \n ------------------------------\n-mvn test -Dtests.seed=DEADBEEF\n+gradle test -Dtests.seed=DEADBEEF\n ------------------------------\n \n === Repeats _all_ tests of ClassName N times.\n@@ -106,7 +106,7 @@ Every test repetition will have a different method seed\n (derived from a single random master seed).\n \n --------------------------------------------------\n-mvn test -Dtests.iters=N -Dtests.class=*.ClassName\n+gradle test -Dtests.iters=N -Dtests.class=*.ClassName\n --------------------------------------------------\n \n === Repeats _all_ tests of ClassName N times.\n@@ -115,7 +115,7 @@ Every test repetition will have exactly the same master (0xdead) and\n method-level (0xbeef) seed.\n \n ------------------------------------------------------------------------\n-mvn test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.seed=DEAD:BEEF\n+gradle test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.seed=DEAD:BEEF\n ------------------------------------------------------------------------\n \n === Repeats a given test N times\n@@ -125,14 +125,14 @@ ie: testFoo[0], testFoo[1], etc... so using testmethod or tests.method\n ending in a glob is necessary to ensure iterations are run).\n \n -------------------------------------------------------------------------\n-mvn test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.method=mytest*\n+gradle test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.method=mytest*\n -------------------------------------------------------------------------\n \n Repeats N times but skips any tests after the first failure or M initial failures.\n \n -------------------------------------------------------------\n-mvn test -Dtests.iters=N -Dtests.failfast=true -Dtestcase=...\n-mvn test -Dtests.iters=N -Dtests.maxfailures=M -Dtestcase=...\n+gradle test -Dtests.iters=N -Dtests.failfast=true -Dtestcase=...\n+gradle test -Dtests.iters=N -Dtests.maxfailures=M -Dtestcase=...\n -------------------------------------------------------------\n \n === Test groups.\n@@ -142,32 +142,38 @@ Test groups can be enabled or disabled (true/false).\n Default value provided below in [brackets].\n \n ------------------------------------------------------------------\n-mvn test -Dtests.nightly=[false] - nightly test group (@Nightly)\n-mvn test -Dtests.weekly=[false] - weekly tests (@Weekly)\n-mvn test -Dtests.awaitsfix=[false] - known issue (@AwaitsFix)\n+gradle test -Dtests.nightly=[false] - nightly test group (@Nightly)\n+gradle test -Dtests.weekly=[false] - weekly tests (@Weekly)\n+gradle test -Dtests.awaitsfix=[false] - known issue (@AwaitsFix)\n ------------------------------------------------------------------\n \n === Load balancing and caches.\n \n-By default, the tests run sequentially on a single forked JVM.\n+By default the tests run on up to 4 JVMs based on the number of cores. 
If you\n+want to explicitly specify the number of JVMs you can do so on the command\n+line:\n \n-To run with more forked JVMs than the default use:\n+----------------------------\n+gradle test -Dtests.jvms=8\n+----------------------------\n+\n+Or in `~/.gradle/gradle.properties`:\n \n ----------------------------\n-mvn test -Dtests.jvms=8 test\n+systemProp.tests.jvms=8\n ----------------------------\n \n-Don't count hypercores for CPU-intense tests and leave some slack\n-for JVM-internal threads (like the garbage collector). Make sure there is\n-enough RAM to handle child JVMs.\n+Its difficult to pick the \"right\" number here. Hypercores don't count for CPU\n+intensive tests and you should leave some slack for JVM-interal threads like\n+the garbage collector. And you have to have enough RAM to handle each JVM.\n \n === Test compatibility.\n \n It is possible to provide a version that allows to adapt the tests behaviour\n to older features or bugs that have been changed or fixed in the meantime.\n \n -----------------------------------------\n-mvn test -Dtests.compatibility=1.0.0\n+gradle test -Dtests.compatibility=1.0.0\n -----------------------------------------\n \n \n@@ -176,45 +182,50 @@ mvn test -Dtests.compatibility=1.0.0\n Run all tests without stopping on errors (inspect log files).\n \n -----------------------------------------\n-mvn test -Dtests.haltonfailure=false test\n+gradle test -Dtests.haltonfailure=false\n -----------------------------------------\n \n Run more verbose output (slave JVM parameters, etc.).\n \n ----------------------\n-mvn test -verbose test\n+gradle test -verbose\n ----------------------\n \n Change the default suite timeout to 5 seconds for all\n tests (note the exclamation mark).\n \n ---------------------------------------\n-mvn test -Dtests.timeoutSuite=5000! ...\n+gradle test -Dtests.timeoutSuite=5000! 
...\n ---------------------------------------\n \n-Change the logging level of ES (not mvn)\n+Change the logging level of ES (not gradle)\n \n --------------------------------\n-mvn test -Des.logger.level=DEBUG\n+gradle test -Des.logger.level=DEBUG\n --------------------------------\n \n Print all the logging output from the test runs to the commandline\n even if tests are passing.\n \n ------------------------------\n-mvn test -Dtests.output=always\n+gradle test -Dtests.output=always\n ------------------------------\n \n Configure the heap size.\n \n ------------------------------\n-mvn test -Dtests.heap.size=512m\n+gradle test -Dtests.heap.size=512m\n ------------------------------\n \n Pass arbitrary jvm arguments.\n \n ------------------------------\n-mvn test -Dtests.jvm.argline=\"-XX:HeapDumpPath=/path/to/heapdumps\"\n+# specify heap dump path\n+gradle test -Dtests.jvm.argline=\"-XX:HeapDumpPath=/path/to/heapdumps\"\n+# enable gc logging\n+gradle test -Dtests.jvm.argline=\"-verbose:gc\"\n+# enable security debugging\n+gradle test -Dtests.jvm.argline=\"-Djava.security.debug=access,failure\"\n ------------------------------\n \n == Backwards Compatibility Tests\n@@ -225,15 +236,15 @@ To run backwards compatibilty tests untar or unzip a release and run the tests\n with the following command:\n \n ---------------------------------------------------------------------------\n-mvn test -Dtests.filter=\"@backwards\" -Dtests.bwc.version=x.y.z -Dtests.bwc.path=/path/to/elasticsearch -Dtests.security.manager=false\n+gradle test -Dtests.filter=\"@backwards\" -Dtests.bwc.version=x.y.z -Dtests.bwc.path=/path/to/elasticsearch -Dtests.security.manager=false\n ---------------------------------------------------------------------------\n \n Note that backwards tests must be run with security manager disabled.\n If the elasticsearch release is placed under `./backwards/elasticsearch-x.y.z` the path\n can be omitted:\n \n ---------------------------------------------------------------------------\n-mvn test -Dtests.filter=\"@backwards\" -Dtests.bwc.version=x.y.z -Dtests.security.manager=false\n+gradle test -Dtests.filter=\"@backwards\" -Dtests.bwc.version=x.y.z -Dtests.security.manager=false\n ---------------------------------------------------------------------------\n \n To setup the bwc test environment execute the following steps (provided you are\n@@ -245,19 +256,25 @@ $ curl -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elastic\n $ tar -xzf elasticsearch-1.2.1.tar.gz\n ---------------------------------------------------------------------------\n \n-== Running integration tests\n+== Running verification tasks\n \n-To run the integration tests:\n+To run all verification tasks, including static checks, unit tests, and integration tests:\n \n ---------------------------------------------------------------------------\n-mvn verify\n+gradle check\n ---------------------------------------------------------------------------\n \n-Note that this will also run the unit tests first. If you want to just\n-run the integration tests only (because you are debugging them):\n+Note that this will also run the unit tests and precommit tasks first. 
If you want to just\n+run the integration tests (because you are debugging them):\n \n ---------------------------------------------------------------------------\n-mvn verify -Dskip.unit.tests\n+gradle integTest\n+---------------------------------------------------------------------------\n+\n+If you want to just run the precommit checks:\n+\n+---------------------------------------------------------------------------\n+gradle precommit\n ---------------------------------------------------------------------------\n \n == Testing the REST layer\n@@ -269,11 +286,20 @@ The REST layer is tested through specific tests that are shared between all\n the elasticsearch official clients and consist of YAML files that describe the\n operations to be executed and the obtained results that need to be tested.\n \n-The REST tests are run automatically when executing the maven test command. To run only the\n+The REST tests are run automatically when executing the \"gradle check\" command. To run only the\n REST tests use the following command:\n \n ---------------------------------------------------------------------------\n-mvn verify -Dtests.filter=\"@Rest\"\n+gradle :distribution:tar:integTest \\\n+ -Dtests.class=org.elasticsearch.test.rest.RestIT\n+---------------------------------------------------------------------------\n+\n+A specific test case can be run with\n+\n+---------------------------------------------------------------------------\n+gradle :distribution:tar:integTest \\\n+ -Dtests.class=org.elasticsearch.test.rest.RestIT \\\n+ -Dtests.method=\"test {p0=cat.shards/10_basic/Help}\"\n ---------------------------------------------------------------------------\n \n `RestNIT` are the executable test classes that runs all the\n@@ -298,20 +324,6 @@ comma separated list of nodes to connect to (e.g. localhost:9300). A transport c\n be created based on that and used for all the before|after test operations, and to extract\n the http addresses of the nodes so that REST requests can be sent to them.\n \n-== Skip validate\n-\n-To disable validation step (forbidden API or `// NOCOMMIT`) use\n-\n----------------------------------------------------------------------------\n-mvn test -Dvalidate.skip=true\n----------------------------------------------------------------------------\n-\n-You can also skip this by using the \"dev\" profile:\n-\n----------------------------------------------------------------------------\n-mvn test -Pdev\n----------------------------------------------------------------------------\n-\n == Testing scripts\n \n The simplest way to test scripts and the packaged distributions is to use\n@@ -329,147 +341,63 @@ vagrant plugin install vagrant-cachier\n . Validate your installed dependencies:\n \n -------------------------------------\n-mvn -Dtests.vagrant -pl qa/vagrant validate\n+gradle :qa:vagrant:checkVagrantVersion\n -------------------------------------\n \n-. Download the VMs. Since Maven or ant or something eats the progress reports\n-from Vagrant when you run it inside mvn its probably best if you run this one\n-time to setup all the VMs one at a time. 
Run this to download and setup the VMs\n-we use for testing by default:\n-\n---------------------------------------------------------\n-vagrant up --provision trusty --provider virtualbox && vagrant halt trusty\n-vagrant up --provision centos-7 --provider virtualbox && vagrant halt centos-7\n---------------------------------------------------------\n-\n-or run this to download and setup all the VMs:\n-\n--------------------------------------------------------------------------------\n-vagrant halt\n-for box in $(vagrant status | grep 'poweroff\\|not created' | cut -f1 -d' '); do\n- vagrant up --provision $box --provider virtualbox\n- vagrant halt $box\n-done\n--------------------------------------------------------------------------------\n-\n-. Smoke test the maven/ant dance that we use to get vagrant involved in\n-integration testing is working:\n-\n----------------------------------------------\n-mvn -Dtests.vagrant -Psmoke-vms -pl qa/vagrant verify\n----------------------------------------------\n-\n-or this to validate all the VMs:\n-\n--------------------------------------------------\n-mvn -Dtests.vagrant=all -Psmoke-vms -pl qa/vagrant verify\n--------------------------------------------------\n-\n-That will start up the VMs and then immediate quit.\n-\n-. Finally run the tests. The fastest way to get this started is to run:\n-\n------------------------------------\n-mvn clean install -DskipTests\n-mvn -Dtests.vagrant -pl qa/vagrant verify\n------------------------------------\n-\n-You could just run:\n-\n---------------------\n-mvn -Dtests.vagrant verify\n---------------------\n-\n-but that will run all the tests. Which is probably a good thing, but not always\n-what you want.\n+. Download and smoke test the VMs with `gradle vagrantSmokeTest` or\n+`gradle vagrantSmokeTestAllDistros`. The first time you run this it will\n+download the base images and provision the boxes and immediately quit. If you\n+you this again it'll skip the download step.\n \n-Whichever snippet you run mvn will build the tar, zip and deb packages. If you\n-have rpmbuild installed it'll build the rpm package as well. Then mvn will\n-spin up trusty and verify the tar, zip, and deb package. If you have rpmbuild\n-installed it'll spin up centos-7 and verify the tar, zip and rpm packages. We\n-chose those two distributions as the default because they cover deb and rpm\n-packaging and SyvVinit and systemd.\n+. Run the tests with `gradle checkPackages`. This will cause gradle to build\n+the tar, zip, and deb packages and all the plugins. It will then run the tests\n+on ubuntu-1404 and centos-7. We chose those two distributions as the default\n+because they cover deb and rpm packaging and SyvVinit and systemd.\n \n-You can control the boxes that are used for testing like so. Run just\n-fedora-22 with:\n+You can run on all the VMs by running `gradle checkPackagesAllDistros`. You can\n+run a particular VM with a command like `gradle checkOel7`. See `gradle tasks`\n+for a list. 
Its important to know that if you ctrl-c any of these `gradle`\n+commands then the boxes will remain running and you'll have to terminate them\n+with `vagrant halt`.\n \n---------------------------------------------\n-mvn -Dtests.vagrant -pl qa/vagrant verify -DboxesToTest=fedora-22\n---------------------------------------------\n-\n-or run wheezy and trusty:\n-\n-------------------------------------------------------------------\n-mvn -Dtests.vagrant -pl qa/vagrant verify -DboxesToTest='wheezy, trusty'\n-------------------------------------------------------------------\n-\n-or run all the boxes:\n-\n----------------------------------------\n-mvn -Dtests.vagrant=all -pl qa/vagrant verify\n----------------------------------------\n-\n-Its important to know that if you ctrl-c any of these `mvn` runs that you'll\n-probably leave a VM up. You can terminate it by running:\n-\n-------------\n-vagrant halt\n-------------\n-\n-This is just regular vagrant so you can run normal multi box vagrant commands\n-to test things manually. Just run:\n-\n----------------------------------------\n-vagrant up trusty --provider virtualbox && vagrant ssh trusty\n----------------------------------------\n-\n-to get an Ubuntu or\n-\n--------------------------------------------\n-vagrant up centos-7 --provider virtualbox && vagrant ssh centos-7\n--------------------------------------------\n-\n-to get a CentOS. Once you are done with them you should halt them:\n-\n--------------------\n-vagrant halt trusty\n--------------------\n+All the regular vagrant commands should just work so you can get a shell in a\n+VM running trusty by running\n+`vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404`.\n \n These are the linux flavors the Vagrantfile currently supports:\n \n-* precise aka Ubuntu 12.04\n-* trusty aka Ubuntu 14.04\n-* vivid aka Ubuntun 15.04\n-* wheezy aka Debian 7, the current debian oldstable distribution\n-* jessie aka Debian 8, the current debina stable distribution\n+* ubuntu-1204 aka precise\n+* ubuntu-1404 aka trusty\n+* ubuntu-1504 aka vivid\n+* debian-8 aka jessie, the current debian stable distribution\n * centos-6\n * centos-7\n * fedora-22\n * oel-7 aka Oracle Enterprise Linux 7\n+* sles-12\n+* opensuse-13\n \n We're missing the following from the support matrix because there aren't high\n quality boxes available in vagrant atlas:\n \n * sles-11\n-* sles-12\n-* opensuse-13\n * oel-6\n \n We're missing the follow because our tests are very linux/bash centric:\n \n * Windows Server 2012\n \n-Its important to think of VMs like cattle: if they become lame you just shoot\n+Its important to think of VMs like cattle. If they become lame you just shoot\n them and let vagrant reprovision them. 
Say you've hosed your precise VM:\n \n ----------------------------------------------------\n-vagrant ssh precise -c 'sudo rm -rf /bin'; echo oops\n+vagrant ssh ubuntu-1404 -c 'sudo rm -rf /bin'; echo oops\n ----------------------------------------------------\n \n All you've got to do to get another one is\n \n ----------------------------------------------\n-vagrant destroy -f trusty && vagrant up trusty --provider virtualbox\n+vagrant destroy -f ubuntu-1404 && vagrant up ubuntu-1404 --provider virtualbox\n ----------------------------------------------\n \n The whole process takes a minute and a half on a modern laptop, two and a half\n@@ -487,13 +415,8 @@ vagrant halt\n vagrant destroy -f\n ------------------\n \n-\n-----------\n-vagrant up\n-----------\n-\n-would normally start all the VMs but we've prevented that because that'd\n-consume a ton of ram.\n+`vagrant up` would normally start all the VMs but we've prevented that because\n+that'd consume a ton of ram.\n \n == Testing scripts more directly\n \n@@ -502,7 +425,7 @@ destructive. When working with a single package its generally faster to run its\n tests in a tighter loop than maven provides. In one window:\n \n --------------------------------\n-mvn -pl distribution/rpm package\n+gradle :distribution:rpm:assemble\n --------------------------------\n \n and in another window:\n@@ -516,10 +439,7 @@ sudo bats $BATS/*rpm*.bats\n If you wanted to retest all the release artifacts on a single VM you could:\n \n -------------------------------------------------\n-# Build all the distributions fresh but skip recompiling elasticsearch:\n-mvn -amd -pl distribution install -DskipTests\n-# Copy them all the testroot\n-mvn -Dtests.vagrant -pl qa/vagrant pre-integration-test\n+gradle prepareTestRoot\n vagrant up trusty --provider virtualbox && vagrant ssh trusty\n cd $TESTROOT\n sudo bats $BATS/*.bats\n@@ -550,5 +470,22 @@ mvn -Dtests.coverage verify jacoco:report\n \n == Debugging from an IDE\n \n-If you want to run elasticsearch from your IDE, you should execute ./run.sh\n-It opens a remote debugging port that you can connect with your IDE.\n+If you want to run elasticsearch from your IDE, the `gradle run` task\n+supports a remote debugging option:\n+\n+---------------------------------------------------------------------------\n+gradle run --debug-jvm\n+---------------------------------------------------------------------------\n+\n+== Building with extra plugins\n+Additional plugins may be built alongside elasticsearch, where their\n+dependency on elasticsearch will be substituted with the local elasticsearch\n+build. To add your plugin, create a directory called x-plugins as a sibling\n+of elasticsearch. Checkout your plugin underneath x-plugins and the build\n+will automatically pick it up. You can verify the plugin is included as part\n+of the build by checking the projects of the build.\n+\n+---------------------------------------------------------------------------\n+gradle projects\n+---------------------------------------------------------------------------\n+",
"filename": "TESTING.asciidoc",
"status": "modified"
},
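The TESTING.asciidoc changes above end with the new "Building with extra plugins" section, which says the build automatically picks up an `x-plugins` directory checked out next to elasticsearch. As a rough sketch of the kind of settings logic that convention implies (the PR's actual `settings.gradle` is not shown in this diff, so the layout and names here are assumptions):

```groovy
// Hypothetical settings.gradle fragment: include every plugin found in a
// sibling x-plugins checkout, if that directory exists.
File extraPlugins = new File(rootProject.projectDir.parentFile, 'x-plugins')
if (extraPlugins.exists()) {
    extraPlugins.eachDir { pluginDir ->
        String projectPath = ":x-plugins:${pluginDir.name}"
        include projectPath
        project(projectPath).projectDir = pluginDir
    }
}
```

Running `gradle projects` afterwards, as the doc suggests, is how you confirm the extra projects were actually picked up.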
{
"diff": "@@ -22,37 +22,38 @@\n # under the License.\n \n Vagrant.configure(2) do |config|\n- config.vm.define \"precise\" do |config|\n+ config.vm.define \"ubuntu-1204\" do |config|\n config.vm.box = \"ubuntu/precise64\"\n ubuntu_common config\n end\n- config.vm.define \"trusty\" do |config|\n+ config.vm.define \"ubuntu-1404\" do |config|\n config.vm.box = \"ubuntu/trusty64\"\n ubuntu_common config\n end\n- config.vm.define \"vivid\" do |config|\n+ config.vm.define \"ubuntu-1504\" do |config|\n config.vm.box = \"ubuntu/vivid64\"\n- ubuntu_common config\n- end\n- config.vm.define \"wheezy\" do |config|\n- config.vm.box = \"debian/wheezy64\"\n- deb_common(config)\n+ ubuntu_common config, extra: <<-SHELL\n+ # Install Jayatana so we can work around it being present.\n+ [ -f /usr/share/java/jayatanaag.jar ] || install jayatana\n+ SHELL\n end\n- config.vm.define \"jessie\" do |config|\n+ # Wheezy's backports don't contain Openjdk 8 and the backflips required to\n+ # get the sun jdk on there just aren't worth it. We have jessie for testing\n+ # debian and it works fine.\n+ config.vm.define \"debian-8\" do |config|\n config.vm.box = \"debian/jessie64\"\n- deb_common(config)\n+ deb_common config,\n+ 'echo deb http://http.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/backports.list', 'backports'\n end\n config.vm.define \"centos-6\" do |config|\n- # TODO switch from chef to boxcutter to provide?\n- config.vm.box = \"chef/centos-6.6\"\n- rpm_common(config)\n+ config.vm.box = \"boxcutter/centos67\"\n+ rpm_common config\n end\n config.vm.define \"centos-7\" do |config|\n # There is a centos/7 box but it doesn't have rsync or virtualbox guest\n # stuff on there so its slow to use. So chef it is....\n- # TODO switch from chef to boxcutter to provide?\n- config.vm.box = \"chef/centos-7.0\"\n- rpm_common(config)\n+ config.vm.box = \"boxcutter/centos71\"\n+ rpm_common config\n end\n # This box hangs _forever_ on ```yum check-update```. I have no idea why.\n # config.vm.define \"oel-6\", autostart: false do |config|\n@@ -61,20 +62,34 @@ Vagrant.configure(2) do |config|\n # end\n config.vm.define \"oel-7\" do |config|\n config.vm.box = \"boxcutter/oel70\"\n- rpm_common(config)\n+ rpm_common config\n end\n config.vm.define \"fedora-22\" do |config|\n # Fedora hosts their own 'cloud' images that aren't in Vagrant's Atlas but\n # and are missing required stuff like rsync. 
It'd be nice if we could use\n # them but they much slower to get up and running then the boxcutter image.\n config.vm.box = \"boxcutter/fedora22\"\n- dnf_common(config)\n+ dnf_common config\n+ end\n+ config.vm.define \"opensuse-13\" do |config|\n+ config.vm.box = \"chef/opensuse-13\"\n+ config.vm.box_url = \"http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_opensuse-13.2-x86_64_chef-provisionerless.box\"\n+ opensuse_common config\n+ end\n+ # The SLES boxes are not considered to be highest quality, but seem to be sufficient for a test run\n+ config.vm.define \"sles-12\" do |config|\n+ config.vm.box = \"idar/sles12\"\n+ sles_common config\n end\n # Switch the default share for the project root from /vagrant to\n # /elasticsearch because /vagrant is confusing when there is a project inside\n # the elasticsearch project called vagrant....\n config.vm.synced_folder \".\", \"/vagrant\", disabled: true\n- config.vm.synced_folder \"\", \"/elasticsearch\"\n+ config.vm.synced_folder \".\", \"/elasticsearch\"\n+ config.vm.provider \"virtualbox\" do |v|\n+ # Give the boxes 2GB so they can run our tests if they have to.\n+ v.memory = 2048\n+ end\n if Vagrant.has_plugin?(\"vagrant-cachier\")\n config.cache.scope = :box\n end\n@@ -104,72 +119,143 @@ SOURCE_PROMPT\n end\n end\n \n-def ubuntu_common(config)\n- # Ubuntu is noisy\n+def ubuntu_common(config, extra: '')\n+ deb_common config, 'apt-add-repository -y ppa:openjdk-r/ppa > /dev/null 2>&1', 'openjdk-r-*', extra: extra\n+end\n+\n+def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')\n # http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html\n config.vm.provision \"fix-no-tty\", type: \"shell\" do |s|\n s.privileged = false\n s.inline = \"sudo sed -i '/tty/!s/mesg n/tty -s \\\\&\\\\& mesg n/' /root/.profile\"\n end\n- deb_common(config)\n-end\n-\n-def deb_common(config)\n- provision(config, \"apt-get update\", \"/var/cache/apt/archives/last_update\",\n- \"apt-get install -y\", \"openjdk-7-jdk\")\n+ provision(config,\n+ update_command: \"apt-get update\",\n+ update_tracking_file: \"/var/cache/apt/archives/last_update\",\n+ install_command: \"apt-get install -y\",\n+ java_package: \"openjdk-8-jdk\",\n+ extra: <<-SHELL\n+ export DEBIAN_FRONTEND=noninteractive\n+ ls /etc/apt/sources.list.d/#{openjdk_list}.list > /dev/null 2>&1 ||\n+ (echo \"==> Importing java-8 ppa\" &&\n+ #{add_openjdk_repository_command} &&\n+ apt-get update)\n+ #{extra}\n+SHELL\n+ )\n end\n \n def rpm_common(config)\n- provision(config, \"yum check-update\", \"/var/cache/yum/last_update\",\n- \"yum install -y\", \"java-1.7.0-openjdk-devel\")\n+ provision(config,\n+ update_command: \"yum check-update\",\n+ update_tracking_file: \"/var/cache/yum/last_update\",\n+ install_command: \"yum install -y\",\n+ java_package: \"java-1.8.0-openjdk-devel\")\n end\n \n def dnf_common(config)\n- provision(config, \"dnf check-update\", \"/var/cache/dnf/last_update\",\n- \"dnf install -y\", \"java-1.8.0-openjdk-devel\")\n+ provision(config,\n+ update_command: \"dnf check-update\",\n+ update_tracking_file: \"/var/cache/dnf/last_update\",\n+ install_command: \"dnf install -y\",\n+ java_package: \"java-1.8.0-openjdk-devel\")\n if Vagrant.has_plugin?(\"vagrant-cachier\")\n # Autodetect doesn't work....\n config.cache.auto_detect = false\n config.cache.enable :generic, { :cache_dir => \"/var/cache/dnf\" }\n end\n end\n \n+def opensuse_common(config)\n+ suse_common config, ''\n+end\n+\n+def suse_common(config, extra)\n+ provision(config,\n+ update_command: 
\"zypper --non-interactive list-updates\",\n+ update_tracking_file: \"/var/cache/zypp/packages/last_update\",\n+ install_command: \"zypper --non-interactive --quiet install --no-recommends\",\n+ java_package: \"java-1_8_0-openjdk-devel\",\n+ extra: extra)\n+end\n+\n+def sles_common(config)\n+ extra = <<-SHELL\n+ zypper rr systemsmanagement_puppet\n+ zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD1/ dvd1 || true\n+ zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD2/ dvd2 || true\n+ zypper addrepo http://download.opensuse.org/repositories/Java:Factory/SLE_12/Java:Factory.repo || true\n+ zypper --no-gpg-checks --non-interactive refresh\n+ zypper --non-interactive install git-core\n+SHELL\n+ suse_common config, extra\n+end\n \n-def provision(config, update_command, update_tracking_file, install_command, java_package)\n+# Register the main box provisioning script.\n+# @param config Vagrant's config object. Required.\n+# @param update_command [String] The command used to update the package\n+# manager. Required. Think `apt-get update`.\n+# @param update_tracking_file [String] The location of the file tracking the\n+# last time the update command was run. Required. Should be in a place that\n+# is cached by vagrant-cachier.\n+# @param install_command [String] The command used to install a package.\n+# Required. Think `apt-get install #{package}`.\n+# @param java_package [String] The name of the java package. Required.\n+# @param extra [String] Extra provisioning commands run before anything else.\n+# Optional. Used for things like setting up the ppa for Java 8.\n+def provision(config,\n+ update_command: 'required',\n+ update_tracking_file: 'required',\n+ install_command: 'required',\n+ java_package: 'required',\n+ extra: '')\n+ # Vagrant run ruby 2.0.0 which doesn't have required named parameters....\n+ raise ArgumentError.new('update_command is required') if update_command == 'required'\n+ raise ArgumentError.new('update_tracking_file is required') if update_tracking_file == 'required'\n+ raise ArgumentError.new('install_command is required') if install_command == 'required'\n+ raise ArgumentError.new('java_package is required') if java_package == 'required'\n config.vm.provision \"bats dependencies\", type: \"shell\", inline: <<-SHELL\n set -e\n+ set -o pipefail\n installed() {\n command -v $1 2>&1 >/dev/null\n }\n install() {\n # Only apt-get update if we haven't in the last day\n- if [ ! -f /tmp/update ] || [ \"x$(find /tmp/update -mtime +0)\" == \"x/tmp/update\" ]; then\n- sudo #{update_command} || true\n- touch #{update_tracking_file}\n+ if [ ! 
-f #{update_tracking_file} ] || [ \"x$(find #{update_tracking_file} -mtime +0)\" == \"x#{update_tracking_file}\" ]; then\n+ echo \"==> Updating repository\"\n+ #{update_command} || true\n+ touch #{update_tracking_file}\n fi\n- sudo #{install_command} $1\n+ echo \"==> Installing $1\"\n+ #{install_command} $1\n }\n ensure() {\n installed $1 || install $1\n }\n+\n+ #{extra}\n+\n installed java || install #{java_package}\n+ ensure tar\n ensure curl\n ensure unzip\n \n installed bats || {\n # Bats lives in a git repository....\n ensure git\n+ echo \"==> Installing bats\"\n git clone https://github.com/sstephenson/bats /tmp/bats\n # Centos doesn't add /usr/local/bin to the path....\n- sudo /tmp/bats/install.sh /usr\n- sudo rm -rf /tmp/bats\n+ /tmp/bats/install.sh /usr\n+ rm -rf /tmp/bats\n }\n cat \\<\\<VARS > /etc/profile.d/elasticsearch_vars.sh\n-export ZIP=/elasticsearch/distribution/zip/target/releases\n-export TAR=/elasticsearch/distribution/tar/target/releases\n-export RPM=/elasticsearch/distribution/rpm/target/releases\n-export DEB=/elasticsearch/distribution/deb/target/releases\n-export TESTROOT=/elasticsearch/qa/vagrant/target/testroot\n+export ZIP=/elasticsearch/distribution/zip/build/distributions\n+export TAR=/elasticsearch/distribution/tar/build/distributions\n+export RPM=/elasticsearch/distribution/rpm/build/distributions\n+export DEB=/elasticsearch/distribution/deb/build/distributions\n+export TESTROOT=/elasticsearch/qa/vagrant/build/testroot\n export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts\n VARS\n SHELL",
"filename": "Vagrantfile",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,249 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+import com.bmuschko.gradle.nexus.NexusPlugin\n+import org.gradle.plugins.ide.eclipse.model.SourceFolder\n+\n+// common maven publishing configuration\n+subprojects {\n+ group = 'org.elasticsearch'\n+ version = org.elasticsearch.gradle.VersionProperties.elasticsearch\n+\n+ plugins.withType(NexusPlugin).whenPluginAdded {\n+ modifyPom {\n+ project {\n+ url 'https://github.com/elastic/elasticsearch'\n+ inceptionYear '2009'\n+\n+ scm {\n+ url 'https://github.com/elastic/elasticsearch'\n+ connection 'scm:https://elastic@github.com/elastic/elasticsearch'\n+ developerConnection 'scm:git://github.com/elastic/elasticsearch.git'\n+ }\n+\n+ licenses {\n+ license {\n+ name 'The Apache Software License, Version 2.0'\n+ url 'http://www.apache.org/licenses/LICENSE-2.0.txt'\n+ distribution 'repo'\n+ }\n+ }\n+ }\n+ }\n+ extraArchive {\n+ javadoc = true\n+ tests = false\n+ }\n+ // we have our own username/password prompts so that they only happen once\n+ // TODO: add gpg signing prompts\n+ project.gradle.taskGraph.whenReady { taskGraph ->\n+ if (taskGraph.allTasks.any { it.name == 'uploadArchives' }) {\n+ Console console = System.console()\n+ if (project.hasProperty('nexusUsername') == false) {\n+ String nexusUsername = console.readLine('\\nNexus username: ')\n+ project.rootProject.allprojects.each {\n+ it.ext.nexusUsername = nexusUsername\n+ }\n+ }\n+ if (project.hasProperty('nexusPassword') == false) {\n+ String nexusPassword = new String(console.readPassword('\\nNexus password: '))\n+ project.rootProject.allprojects.each {\n+ it.ext.nexusPassword = nexusPassword\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+\n+allprojects {\n+ // injecting groovy property variables into all projects\n+ project.ext {\n+ // for eclipse hacks...\n+ isEclipse = System.getProperty(\"eclipse.launcher\") != null || gradle.startParameter.taskNames.contains('eclipse') || gradle.startParameter.taskNames.contains('cleanEclipse')\n+ }\n+}\n+\n+subprojects {\n+ project.afterEvaluate {\n+ // include license and notice in jars\n+ tasks.withType(Jar) {\n+ into('META-INF') {\n+ from project.rootProject.rootDir\n+ include 'LICENSE.txt'\n+ include 'NOTICE.txt'\n+ }\n+ }\n+ // ignore missing javadocs\n+ tasks.withType(Javadoc) { Javadoc javadoc ->\n+ // the -quiet here is because of a bug in gradle, in that adding a string option\n+ // by itself is not added to the options. By adding quiet, both this option and\n+ // the \"value\" -quiet is added, separated by a space. 
This is ok since the javadoc\n+ // command already adds -quiet, so we are just duplicating it\n+ // see https://discuss.gradle.org/t/add-custom-javadoc-option-that-does-not-take-an-argument/5959\n+ javadoc.options.addStringOption('Xdoclint:all,-missing', '-quiet')\n+ }\n+ }\n+\n+ /* Sets up the dependencies that we build as part of this project but\n+ register as thought they were external to resolve internally. We register\n+ them as external dependencies so the build plugin that we use can be used\n+ to build elasticsearch plugins outside of the elasticsearch source tree. */\n+ ext.projectSubstitutions = [\n+ \"org.elasticsearch:rest-api-spec:${version}\": ':rest-api-spec',\n+ \"org.elasticsearch:elasticsearch:${version}\": ':core',\n+ \"org.elasticsearch:test-framework:${version}\": ':test-framework',\n+ \"org.elasticsearch.distribution.integ-test-zip:elasticsearch:${version}\": ':distribution:integ-test-zip',\n+ \"org.elasticsearch.distribution.zip:elasticsearch:${version}\": ':distribution:zip',\n+ \"org.elasticsearch.distribution.tar:elasticsearch:${version}\": ':distribution:tar',\n+ \"org.elasticsearch.distribution.rpm:elasticsearch:${version}\": ':distribution:rpm',\n+ \"org.elasticsearch.distribution.deb:elasticsearch:${version}\": ':distribution:deb',\n+ ]\n+ configurations.all {\n+ resolutionStrategy.dependencySubstitution { DependencySubstitutions subs ->\n+ projectSubstitutions.each { k,v ->\n+ subs.substitute(subs.module(k)).with(subs.project(v))\n+ }\n+ }\n+ }\n+}\n+\n+// Ensure similar tasks in dependent projects run first. The projectsEvaluated here is\n+// important because, while dependencies.all will pickup future dependencies,\n+// it is not necessarily true that the task exists in both projects at the time\n+// the dependency is added.\n+gradle.projectsEvaluated {\n+ allprojects {\n+ if (project.path == ':test-framework') {\n+ // :test-framework:test cannot run before and after :core:test\n+ return\n+ }\n+ configurations.all {\n+ dependencies.all { Dependency dep ->\n+ Project upstreamProject = null\n+ if (dep instanceof ProjectDependency) {\n+ upstreamProject = dep.dependencyProject\n+ } else {\n+ // gradle doesn't apply substitutions until resolve time, so they won't\n+ // show up as a ProjectDependency above\n+ String substitution = projectSubstitutions.get(\"${dep.group}:${dep.name}:${dep.version}\")\n+ if (substitution != null) {\n+ upstreamProject = findProject(substitution)\n+ }\n+ }\n+ if (upstreamProject != null) {\n+ if (project.path == upstreamProject.path) {\n+ // TODO: distribution integ tests depend on themselves (!), fix that\n+ return\n+ }\n+ for (String taskName : ['test', 'integTest']) {\n+ Task task = project.tasks.findByName(taskName)\n+ Task upstreamTask = upstreamProject.tasks.findByName(taskName)\n+ if (task != null && upstreamTask != null) {\n+ task.mustRunAfter(upstreamTask)\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n+\n+// intellij configuration\n+allprojects {\n+ apply plugin: 'idea'\n+}\n+\n+idea {\n+ project {\n+ languageLevel = org.elasticsearch.gradle.BuildPlugin.minimumJava.toString()\n+ vcs = 'Git'\n+ }\n+}\n+// Make sure gradle idea was run before running anything in intellij (including import).\n+File ideaMarker = new File(projectDir, '.local-idea-is-configured')\n+tasks.idea.doLast {\n+ ideaMarker.setText('', 'UTF-8')\n+}\n+if (System.getProperty('idea.active') != null && ideaMarker.exists() == false) {\n+ throw new GradleException('You must run gradle idea from the root of elasticsearch before importing into IntelliJ')\n+}\n+// add 
buildSrc itself as a groovy project\n+task buildSrcIdea(type: GradleBuild) {\n+ buildFile = 'buildSrc/build.gradle'\n+ tasks = ['cleanIdea', 'ideaModule']\n+}\n+tasks.idea.dependsOn(buildSrcIdea)\n+\n+\n+// eclipse configuration\n+allprojects {\n+ apply plugin: 'eclipse'\n+\n+ plugins.withType(JavaBasePlugin) {\n+ eclipse.classpath.defaultOutputDir = new File(project.buildDir, 'eclipse')\n+ eclipse.classpath.file.whenMerged { classpath ->\n+ // give each source folder a unique corresponding output folder\n+ int i = 0;\n+ classpath.entries.findAll { it instanceof SourceFolder }.each { folder ->\n+ i++;\n+ // this is *NOT* a path or a file.\n+ folder.output = \"build/eclipse/\" + i\n+ }\n+ }\n+ }\n+ task copyEclipseSettings(type: Copy) {\n+ // TODO: \"package this up\" for external builds\n+ from new File(project.rootDir, 'buildSrc/src/main/resources/eclipse.settings')\n+ into '.settings'\n+ }\n+ // otherwise .settings is not nuked entirely\n+ tasks.cleanEclipse {\n+ delete '.settings'\n+ }\n+ // otherwise the eclipse merging is *super confusing*\n+ tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)\n+}\n+\n+// add buildSrc itself as a groovy project\n+task buildSrcEclipse(type: GradleBuild) {\n+ buildFile = 'buildSrc/build.gradle'\n+ tasks = ['cleanEclipse', 'eclipse']\n+}\n+tasks.eclipse.dependsOn(buildSrcEclipse)\n+\n+// we need to add the same --debug-jvm option as\n+// the real RunTask has, so we can pass it through\n+class Run extends DefaultTask {\n+ boolean debug = false\n+\n+ @org.gradle.api.internal.tasks.options.Option(\n+ option = \"debug-jvm\",\n+ description = \"Enable debugging configuration, to allow attaching a debugger to elasticsearch.\"\n+ )\n+ public void setDebug(boolean enabled) {\n+ project.project(':distribution').run.clusterConfig.debug = enabled\n+ }\n+}\n+task run(type: Run) {\n+ dependsOn ':distribution:run'\n+ description = 'Runs elasticsearch in the foreground'\n+ group = 'Verification'\n+ impliesSubProjects = true\n+}",
"filename": "build.gradle",
"status": "added"
},
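The `projectSubstitutions` map in the new root build.gradle is what lets a plugin built outside the core repository declare ordinary external coordinates and still resolve against the in-tree projects. A minimal sketch of such a downstream `build.gradle`, using only coordinates registered above (everything else is illustrative):

```groovy
// Hypothetical plugin build.gradle: the dependencies look external, but when
// built inside the elasticsearch tree the substitution rules redirect them to
// the local ':core' and ':test-framework' projects instead of repository jars.
dependencies {
    compile "org.elasticsearch:elasticsearch:${version}"
    testCompile "org.elasticsearch:test-framework:${version}"
}
```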
{
"diff": "@@ -0,0 +1,92 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+// we must use buildscript + apply so that an external plugin\n+// can apply this file, since the plugins directive is not\n+// supported through file includes\n+buildscript {\n+ repositories {\n+ jcenter()\n+ }\n+ dependencies {\n+ classpath 'com.bmuschko:gradle-nexus-plugin:2.3.1'\n+ }\n+}\n+apply plugin: 'groovy'\n+apply plugin: 'com.bmuschko.nexus'\n+// TODO: move common IDE configuration to a common file to include\n+apply plugin: 'idea'\n+apply plugin: 'eclipse'\n+\n+group = 'org.elasticsearch.gradle'\n+archivesBaseName = 'build-tools'\n+\n+Properties props = new Properties()\n+props.load(project.file('version.properties').newDataInputStream())\n+version = props.getProperty('elasticsearch')\n+\n+repositories {\n+ mavenCentral()\n+ maven {\n+ name 'sonatype-snapshots'\n+ url \"https://oss.sonatype.org/content/repositories/snapshots/\"\n+ }\n+ jcenter()\n+}\n+\n+dependencies {\n+ compile gradleApi()\n+ compile localGroovy()\n+ compile \"com.carrotsearch.randomizedtesting:junit4-ant:${props.getProperty('randomizedrunner')}\"\n+ compile(\"junit:junit:${props.getProperty('junit')}\") {\n+ transitive = false\n+ }\n+ compile 'com.netflix.nebula:gradle-extra-configurations-plugin:3.0.3'\n+ compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'\n+ compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'\n+ compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....\n+ compile 'de.thetaphi:forbiddenapis:2.0'\n+ compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'\n+}\n+\n+processResources {\n+ inputs.file('version.properties')\n+ from 'version.properties'\n+}\n+\n+extraArchive {\n+ javadoc = false\n+ tests = false\n+}\n+\n+eclipse {\n+ classpath {\n+ defaultOutputDir = new File(file('build'), 'eclipse')\n+ }\n+}\n+\n+task copyEclipseSettings(type: Copy) {\n+ from project.file('src/main/resources/eclipse.settings')\n+ into '.settings'\n+}\n+// otherwise .settings is not nuked entirely\n+tasks.cleanEclipse {\n+ delete '.settings'\n+}\n+tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)",
"filename": "buildSrc/build.gradle",
"status": "added"
},
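`buildSrc/build.gradle` bundles `version.properties` into the build-tools jar via `processResources`; that is what allows the `VersionProperties` class referenced from the root build.gradle to read the version back off the classpath. A sketch of that idea, assuming the shipped class differs in detail:

```groovy
// Sketch only -- not the actual VersionProperties implementation.
class VersionPropertiesSketch {
    static final Properties props = new Properties()
    static {
        InputStream stream = VersionPropertiesSketch.class.getResourceAsStream('/version.properties')
        stream.withCloseable { props.load(it) }
    }
    static String getElasticsearch() { return props.getProperty('elasticsearch') }
}
```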
{
"diff": "@@ -0,0 +1,53 @@\n+package com.carrotsearch.gradle.junit4\n+\n+import com.carrotsearch.ant.tasks.junit4.SuiteBalancer\n+import com.carrotsearch.ant.tasks.junit4.balancers.ExecutionTimeBalancer\n+import com.carrotsearch.ant.tasks.junit4.listeners.ExecutionTimesReport\n+import org.apache.tools.ant.types.FileSet\n+\n+class BalancersConfiguration {\n+ // parent task, so executionTime can register an additional listener\n+ RandomizedTestingTask task\n+ List<SuiteBalancer> balancers = new ArrayList<>()\n+\n+ void executionTime(Map<String,Object> properties) {\n+ ExecutionTimeBalancer balancer = new ExecutionTimeBalancer()\n+\n+ FileSet fileSet = new FileSet()\n+ Object filename = properties.remove('cacheFilename')\n+ if (filename == null) {\n+ throw new IllegalArgumentException('cacheFilename is required for executionTime balancer')\n+ }\n+ fileSet.setIncludes(filename.toString())\n+\n+ File cacheDir = task.project.projectDir\n+ Object dir = properties.remove('cacheDir')\n+ if (dir != null) {\n+ cacheDir = new File(dir.toString())\n+ }\n+ fileSet.setDir(cacheDir)\n+ balancer.add(fileSet)\n+\n+ int historySize = 10\n+ Object size = properties.remove('historySize')\n+ if (size instanceof Integer) {\n+ historySize = (Integer)size\n+ } else if (size != null) {\n+ throw new IllegalArgumentException('historySize must be an integer')\n+ }\n+ ExecutionTimesReport listener = new ExecutionTimesReport()\n+ listener.setFile(new File(cacheDir, filename.toString()))\n+ listener.setHistoryLength(historySize)\n+\n+ if (properties.isEmpty() == false) {\n+ throw new IllegalArgumentException('Unknown properties for executionTime balancer: ' + properties.keySet())\n+ }\n+\n+ task.listenersConfig.listeners.add(listener)\n+ balancers.add(balancer)\n+ }\n+\n+ void custom(SuiteBalancer balancer) {\n+ balancers.add(balancer)\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/BalancersConfiguration.groovy",
"status": "added"
},
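`BalancersConfiguration` is consumed through the `balancers` block of the randomized testing task added later in this PR. A hedged usage example; the cache file name and history size are placeholders:

```groovy
// Spread suites across forked JVMs using recorded execution times, keeping
// roughly the last 8 runs in an example cache file in the project directory.
test {
    balancers {
        executionTime cacheFilename: 'test-execution-times.log', historySize: 8
    }
}
```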
{
"diff": "@@ -0,0 +1,25 @@\n+package com.carrotsearch.gradle.junit4\n+\n+import com.carrotsearch.ant.tasks.junit4.listeners.AggregatedEventListener\n+import com.carrotsearch.ant.tasks.junit4.listeners.antxml.AntXmlReport\n+\n+\n+class ListenersConfiguration {\n+ RandomizedTestingTask task\n+ List<AggregatedEventListener> listeners = new ArrayList<>()\n+\n+ void junitReport(Map<String, Object> props) {\n+ AntXmlReport reportListener = new AntXmlReport()\n+ Object dir = props == null ? null : props.get('dir')\n+ if (dir != null) {\n+ reportListener.setDir(task.project.file(dir))\n+ } else {\n+ reportListener.setDir(new File(task.project.buildDir, 'reports' + File.separator + \"${task.name}Junit\"))\n+ }\n+ listeners.add(reportListener)\n+ }\n+\n+ void custom(AggregatedEventListener listener) {\n+ listeners.add(listener)\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/ListenersConfiguration.groovy",
"status": "added"
},
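`ListenersConfiguration` gives build scripts a `listeners` block; `junitReport` writes ant-xml reports, defaulting to `build/reports/testJunit` for the `test` task. An illustrative override of that directory:

```groovy
test {
    listeners {
        junitReport dir: 'build/reports/junit-xml'   // example path; default is build/reports/testJunit
    }
}
```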
{
"diff": "@@ -0,0 +1,64 @@\n+package com.carrotsearch.gradle.junit4\n+\n+import org.gradle.api.logging.LogLevel\n+import org.gradle.api.logging.Logger\n+\n+/**\n+ * Writes data passed to this stream as log messages.\n+ *\n+ * The stream will be flushed whenever a newline is detected.\n+ * Allows setting an optional prefix before each line of output.\n+ */\n+public class LoggingOutputStream extends OutputStream {\n+\n+ /** The starting length of the buffer */\n+ static final int DEFAULT_BUFFER_LENGTH = 4096\n+\n+ /** The buffer of bytes sent to the stream */\n+ byte[] buffer = new byte[DEFAULT_BUFFER_LENGTH]\n+\n+ /** Offset of the start of unwritten data in the buffer */\n+ int start = 0\n+\n+ /** Offset of the end (semi-open) of unwritten data in the buffer */\n+ int end = 0\n+\n+ /** Logger to write stream data to */\n+ Logger logger\n+\n+ /** Prefix to add before each line of output */\n+ String prefix = \"\"\n+\n+ /** Log level to write log messages to */\n+ LogLevel level\n+\n+ void write(final int b) throws IOException {\n+ if (b == 0) return;\n+ if (b == (int)'\\n' as char) {\n+ // always flush with newlines instead of adding to the buffer\n+ flush()\n+ return\n+ }\n+\n+ if (end == buffer.length) {\n+ if (start != 0) {\n+ // first try shifting the used buffer back to the beginning to make space\n+ System.arraycopy(buffer, start, buffer, 0, end - start)\n+ } else {\n+ // need more space, extend the buffer\n+ }\n+ final int newBufferLength = buffer.length + DEFAULT_BUFFER_LENGTH;\n+ final byte[] newBuffer = new byte[newBufferLength];\n+ System.arraycopy(buffer, 0, newBuffer, 0, buffer.length);\n+ buffer = newBuffer;\n+ }\n+\n+ buffer[end++] = (byte) b;\n+ }\n+\n+ void flush() {\n+ if (end == start) return\n+ logger.log(level, prefix + new String(buffer, start, end - start));\n+ start = end\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/LoggingOutputStream.groovy",
"status": "added"
},
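`LoggingOutputStream` is how the report listener later in this PR turns forked-JVM stdout/stderr into prefixed Gradle log lines. A small usage sketch, assuming it runs somewhere a Gradle `logger` is in scope:

```groovy
import org.gradle.api.logging.LogLevel

// Every newline written to the stream is flushed as one prefixed log line.
def logStream = new LoggingOutputStream(logger: logger, level: LogLevel.LIFECYCLE, prefix: '  1> ')
new PrintStream(logStream, true, 'UTF-8').println('node started')   // logs "  1> node started"
```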
{
"diff": "@@ -0,0 +1,47 @@\n+package com.carrotsearch.gradle.junit4\n+\n+import com.carrotsearch.ant.tasks.junit4.JUnit4\n+import org.gradle.api.AntBuilder\n+import org.gradle.api.Plugin\n+import org.gradle.api.Project\n+import org.gradle.api.Task\n+import org.gradle.api.plugins.JavaBasePlugin\n+import org.gradle.api.tasks.TaskContainer\n+import org.gradle.api.tasks.testing.Test\n+\n+class RandomizedTestingPlugin implements Plugin<Project> {\n+\n+ void apply(Project project) {\n+ replaceTestTask(project.tasks)\n+ configureAnt(project.ant)\n+ }\n+\n+ static void replaceTestTask(TaskContainer tasks) {\n+ Test oldTestTask = tasks.findByPath('test')\n+ if (oldTestTask == null) {\n+ // no test task, ok, user will use testing task on their own\n+ return\n+ }\n+ tasks.remove(oldTestTask)\n+\n+ Map properties = [\n+ name: 'test',\n+ type: RandomizedTestingTask,\n+ dependsOn: oldTestTask.dependsOn,\n+ group: JavaBasePlugin.VERIFICATION_GROUP,\n+ description: 'Runs unit tests with the randomized testing framework'\n+ ]\n+ RandomizedTestingTask newTestTask = tasks.create(properties)\n+ newTestTask.classpath = oldTestTask.classpath\n+ newTestTask.testClassesDir = oldTestTask.testClassesDir\n+\n+ // hack so check task depends on custom test\n+ Task checkTask = tasks.findByPath('check')\n+ checkTask.dependsOn.remove(oldTestTask)\n+ checkTask.dependsOn.add(newTestTask)\n+ }\n+\n+ static void configureAnt(AntBuilder ant) {\n+ ant.project.addTaskDefinition('junit4:junit4', JUnit4.class)\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,304 @@\n+package com.carrotsearch.gradle.junit4\n+\n+import com.carrotsearch.ant.tasks.junit4.ListenersList\n+import com.carrotsearch.ant.tasks.junit4.listeners.AggregatedEventListener\n+import com.esotericsoftware.kryo.serializers.FieldSerializer\n+import groovy.xml.NamespaceBuilder\n+import groovy.xml.NamespaceBuilderSupport\n+import org.apache.tools.ant.BuildException\n+import org.apache.tools.ant.DefaultLogger\n+import org.apache.tools.ant.RuntimeConfigurable\n+import org.apache.tools.ant.UnknownElement\n+import org.gradle.api.DefaultTask\n+import org.gradle.api.file.FileCollection\n+import org.gradle.api.file.FileTreeElement\n+import org.gradle.api.internal.tasks.options.Option\n+import org.gradle.api.specs.Spec\n+import org.gradle.api.tasks.*\n+import org.gradle.api.tasks.util.PatternFilterable\n+import org.gradle.api.tasks.util.PatternSet\n+import org.gradle.logging.ProgressLoggerFactory\n+import org.gradle.util.ConfigureUtil\n+\n+import javax.inject.Inject\n+\n+class RandomizedTestingTask extends DefaultTask {\n+\n+ // TODO: change to \"executable\" to match gradle test params?\n+ @Optional\n+ @Input\n+ String jvm = 'java'\n+\n+ @Optional\n+ @Input\n+ File workingDir = new File(project.buildDir, 'testrun' + File.separator + name)\n+\n+ @Optional\n+ @Input\n+ FileCollection classpath\n+\n+ @Input\n+ String parallelism = '1'\n+\n+ @InputDirectory\n+ File testClassesDir\n+\n+ @Optional\n+ @Input\n+ boolean haltOnFailure = true\n+\n+ @Optional\n+ @Input\n+ boolean shuffleOnSlave = true\n+\n+ @Optional\n+ @Input\n+ boolean enableAssertions = true\n+\n+ @Optional\n+ @Input\n+ boolean enableSystemAssertions = true\n+\n+ @Optional\n+ @Input\n+ boolean leaveTemporary = false\n+\n+ @Optional\n+ @Input\n+ String ifNoTests = 'ignore'\n+\n+ TestLoggingConfiguration testLoggingConfig = new TestLoggingConfiguration()\n+\n+ BalancersConfiguration balancersConfig = new BalancersConfiguration(task: this)\n+ ListenersConfiguration listenersConfig = new ListenersConfiguration(task: this)\n+\n+ List<String> jvmArgs = new ArrayList<>()\n+\n+ @Optional\n+ @Input\n+ String argLine = null\n+\n+ Map<String, String> systemProperties = new HashMap<>()\n+ PatternFilterable patternSet = new PatternSet()\n+\n+ RandomizedTestingTask() {\n+ outputs.upToDateWhen {false} // randomized tests are never up to date\n+ listenersConfig.listeners.add(new TestProgressLogger(factory: getProgressLoggerFactory()))\n+ listenersConfig.listeners.add(new TestReportLogger(logger: logger, config: testLoggingConfig))\n+ }\n+\n+ @Inject\n+ ProgressLoggerFactory getProgressLoggerFactory() {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ void jvmArgs(Iterable<String> arguments) {\n+ jvmArgs.addAll(arguments)\n+ }\n+\n+ void jvmArg(String argument) {\n+ jvmArgs.add(argument)\n+ }\n+\n+ void systemProperty(String property, String value) {\n+ systemProperties.put(property, value)\n+ }\n+\n+ void include(String... includes) {\n+ this.patternSet.include(includes);\n+ }\n+\n+ void include(Iterable<String> includes) {\n+ this.patternSet.include(includes);\n+ }\n+\n+ void include(Spec<FileTreeElement> includeSpec) {\n+ this.patternSet.include(includeSpec);\n+ }\n+\n+ void include(Closure includeSpec) {\n+ this.patternSet.include(includeSpec);\n+ }\n+\n+ void exclude(String... 
excludes) {\n+ this.patternSet.exclude(excludes);\n+ }\n+\n+ void exclude(Iterable<String> excludes) {\n+ this.patternSet.exclude(excludes);\n+ }\n+\n+ void exclude(Spec<FileTreeElement> excludeSpec) {\n+ this.patternSet.exclude(excludeSpec);\n+ }\n+\n+ void exclude(Closure excludeSpec) {\n+ this.patternSet.exclude(excludeSpec);\n+ }\n+\n+ @Input\n+ void testLogging(Closure closure) {\n+ ConfigureUtil.configure(closure, testLoggingConfig)\n+ }\n+\n+ @Input\n+ void balancers(Closure closure) {\n+ ConfigureUtil.configure(closure, balancersConfig)\n+ }\n+\n+ @Input\n+ void listeners(Closure closure) {\n+ ConfigureUtil.configure(closure, listenersConfig)\n+ }\n+\n+ @Option(\n+ option = \"tests\",\n+ description = \"Sets test class or method name to be included. This is for IDEs. Use -Dtests.class and -Dtests.method\"\n+ )\n+ void setTestNameIncludePattern(String testNamePattern) {\n+ // This is only implemented to give support for IDEs running tests. There are 3 patterns expected:\n+ // * An exact test class and method\n+ // * An exact test class\n+ // * A package name prefix, ending with .*\n+ // There is no way to distinguish the first two without looking at classes, so we use the rule\n+ // that class names start with an uppercase letter...\n+ // TODO: this doesn't work yet, but not sure why...intellij says it is using --tests, and this work from the command line...\n+ String[] parts = testNamePattern.split('\\\\.')\n+ String lastPart = parts[parts.length - 1]\n+ String classname\n+ String methodname = null\n+ if (lastPart.equals('*') || lastPart.charAt(0).isUpperCase()) {\n+ // package name or class name, just pass through\n+ classname = testNamePattern\n+ } else {\n+ // method name, need to separate\n+ methodname = lastPart\n+ classname = testNamePattern.substring(0, testNamePattern.length() - lastPart.length() - 1)\n+ }\n+ ant.setProperty('tests.class', classname)\n+ if (methodname != null) {\n+ ant.setProperty('tests.method', methodname)\n+ }\n+ }\n+\n+ @TaskAction\n+ void executeTests() {\n+ Map attributes = [\n+ jvm: jvm,\n+ parallelism: parallelism,\n+ heartbeat: testLoggingConfig.slowTests.heartbeat,\n+ dir: workingDir,\n+ tempdir: new File(workingDir, 'temp'),\n+ haltOnFailure: true, // we want to capture when a build failed, but will decide whether to rethrow later\n+ shuffleOnSlave: shuffleOnSlave,\n+ leaveTemporary: leaveTemporary,\n+ ifNoTests: ifNoTests\n+ ]\n+\n+ DefaultLogger listener = null\n+ ByteArrayOutputStream antLoggingBuffer = null\n+ if (logger.isInfoEnabled() == false) {\n+ // in info logging, ant already outputs info level, so we see everything\n+ // but on errors or when debugging, we want to see info level messages\n+ // because junit4 emits jvm output with ant logging\n+ if (testLoggingConfig.outputMode == TestLoggingConfiguration.OutputMode.ALWAYS) {\n+ // we want all output, so just stream directly\n+ listener = new DefaultLogger(\n+ errorPrintStream: System.err,\n+ outputPrintStream: System.out,\n+ messageOutputLevel: org.apache.tools.ant.Project.MSG_INFO)\n+ } else {\n+ // we want to buffer the info, and emit it if the test fails\n+ antLoggingBuffer = new ByteArrayOutputStream()\n+ PrintStream stream = new PrintStream(antLoggingBuffer, true, \"UTF-8\")\n+ listener = new DefaultLogger(\n+ errorPrintStream: stream,\n+ outputPrintStream: stream,\n+ messageOutputLevel: org.apache.tools.ant.Project.MSG_INFO)\n+ }\n+ project.ant.project.addBuildListener(listener)\n+ }\n+\n+ NamespaceBuilderSupport junit4 = NamespaceBuilder.newInstance(ant, 'junit4')\n+ try {\n+ 
junit4.junit4(attributes) {\n+ classpath {\n+ pathElement(path: classpath.asPath)\n+ }\n+ if (enableAssertions) {\n+ jvmarg(value: '-ea')\n+ }\n+ if (enableSystemAssertions) {\n+ jvmarg(value: '-esa')\n+ }\n+ for (String arg : jvmArgs) {\n+ jvmarg(value: arg)\n+ }\n+ if (argLine != null) {\n+ jvmarg(line: argLine)\n+ }\n+ fileset(dir: testClassesDir) {\n+ for (String includePattern : patternSet.getIncludes()) {\n+ include(name: includePattern)\n+ }\n+ for (String excludePattern : patternSet.getExcludes()) {\n+ exclude(name: excludePattern)\n+ }\n+ }\n+ for (Map.Entry<String, String> prop : systemProperties) {\n+ sysproperty key: prop.getKey(), value: prop.getValue()\n+ }\n+ makeListeners()\n+ }\n+ } catch (BuildException e) {\n+ if (antLoggingBuffer != null) {\n+ logger.error('JUnit4 test failed, ant output was:')\n+ logger.error(antLoggingBuffer.toString('UTF-8'))\n+ }\n+ if (haltOnFailure) {\n+ throw e;\n+ }\n+ }\n+\n+ if (listener != null) {\n+ // remove the listener we added so other ant tasks dont have verbose logging!\n+ project.ant.project.removeBuildListener(listener)\n+ }\n+ }\n+\n+ static class ListenersElement extends UnknownElement {\n+ AggregatedEventListener[] listeners\n+\n+ ListenersElement() {\n+ super('listeners')\n+ setNamespace('junit4')\n+ setQName('listeners')\n+ }\n+\n+ public void handleChildren(Object realThing, RuntimeConfigurable wrapper) {\n+ assert realThing instanceof ListenersList\n+ ListenersList list = (ListenersList)realThing\n+\n+ for (AggregatedEventListener listener : listeners) {\n+ list.addConfigured(listener)\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Makes an ant xml element for 'listeners' just as AntBuilder would, except configuring\n+ * the element adds the already created children.\n+ */\n+ def makeListeners() {\n+ def context = ant.getAntXmlContext()\n+ def parentWrapper = context.currentWrapper()\n+ def parent = parentWrapper.getProxy()\n+ UnknownElement element = new ListenersElement(listeners: listenersConfig.listeners)\n+ element.setProject(context.getProject())\n+ element.setRealThing(logger)\n+ ((UnknownElement)parent).addChild(element)\n+ RuntimeConfigurable wrapper = new RuntimeConfigurable(element, element.getQName())\n+ parentWrapper.addChild(wrapper)\n+ return wrapper.getProxy()\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy",
"status": "added"
},
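`RandomizedTestingTask` exposes roughly the same knobs as the underlying junit4 ant task. A hedged example of configuring the replaced `test` task from a build script; the values and property names are illustrative, not taken from this PR:

```groovy
test {
    parallelism = '2'                       // forked JVM count handed to junit4
    jvmArg '-Xmx512m'
    systemProperty 'tests.nightly', 'false' // example system property
    include '**/*Tests.class'
    exclude '**/*IT.class'
}
```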
{
"diff": "@@ -0,0 +1,14 @@\n+package com.carrotsearch.gradle.junit4\n+\n+class SlowTestsConfiguration {\n+ int heartbeat = 0\n+ int summarySize = 0\n+\n+ void heartbeat(int heartbeat) {\n+ this.heartbeat = heartbeat\n+ }\n+\n+ void summarySize(int summarySize) {\n+ this.summarySize = summarySize\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/SlowTestsConfiguration.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,14 @@\n+package com.carrotsearch.gradle.junit4\n+\n+class StackTraceFiltersConfiguration {\n+ List<String> patterns = new ArrayList<>()\n+ List<String> contains = new ArrayList<>()\n+\n+ void regex(String pattern) {\n+ patterns.add(pattern)\n+ }\n+\n+ void contains(String contain) {\n+ contains.add(contain)\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/StackTraceFiltersConfiguration.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,43 @@\n+package com.carrotsearch.gradle.junit4\n+\n+import org.gradle.api.tasks.Input\n+import org.gradle.util.ConfigureUtil\n+\n+class TestLoggingConfiguration {\n+ /** Display mode for output streams. */\n+ static enum OutputMode {\n+ /** Always display the output emitted from tests. */\n+ ALWAYS,\n+ /**\n+ * Display the output only if a test/ suite failed. This requires internal buffering\n+ * so the output will be shown only after a test completes.\n+ */\n+ ONERROR,\n+ /** Don't display the output, even on test failures. */\n+ NEVER\n+ }\n+\n+ OutputMode outputMode = OutputMode.ONERROR\n+ SlowTestsConfiguration slowTests = new SlowTestsConfiguration()\n+ StackTraceFiltersConfiguration stackTraceFilters = new StackTraceFiltersConfiguration()\n+\n+ /** Summarize the first N failures at the end of the test. */\n+ @Input\n+ int showNumFailuresAtEnd = 3 // match TextReport default\n+\n+ void slowTests(Closure closure) {\n+ ConfigureUtil.configure(closure, slowTests)\n+ }\n+\n+ void stackTraceFilters(Closure closure) {\n+ ConfigureUtil.configure(closure, stackTraceFilters)\n+ }\n+\n+ void outputMode(String mode) {\n+ outputMode = mode.toUpperCase() as OutputMode\n+ }\n+\n+ void showNumFailuresAtEnd(int n) {\n+ showNumFailuresAtEnd = n\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestLoggingConfiguration.groovy",
"status": "added"
},
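`TestLoggingConfiguration` ties the reporting pieces together: output buffering mode, the slow-test heartbeat and summary, and stack trace filtering. An illustrative configuration (values are examples only):

```groovy
test {
    testLogging {
        outputMode 'onerror'        // buffer output, show it only for failures
        showNumFailuresAtEnd 5
        slowTests {
            heartbeat 10            // nag about suites stalled for ~10 seconds
            summarySize 5           // list the 5 slowest suites at the end
        }
        stackTraceFilters {
            contains '.SlaveMain.'
            regex '^\\s+at (junit\\.|org\\.junit\\.).*'
        }
    }
}
```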
{
"diff": "@@ -0,0 +1,187 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package com.carrotsearch.gradle.junit4\n+\n+import com.carrotsearch.ant.tasks.junit4.JUnit4\n+import com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.eventbus.Subscribe\n+import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedStartEvent\n+import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedSuiteResultEvent\n+import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedTestResultEvent\n+import com.carrotsearch.ant.tasks.junit4.listeners.AggregatedEventListener\n+import org.gradle.logging.ProgressLogger\n+import org.gradle.logging.ProgressLoggerFactory\n+import org.junit.runner.Description\n+\n+import static com.carrotsearch.ant.tasks.junit4.events.aggregated.TestStatus.*\n+import static com.carrotsearch.ant.tasks.junit4.FormattingUtils.formatDurationInSeconds\n+import static java.lang.Math.max\n+\n+/**\n+ * Adapts junit4's event listeners into gradle's ProgressLogger. Note that\n+ * junit4 guarantees (via guava) that methods on this class won't be called by\n+ * multiple threads simultaneously which is helpful in making it simpler.\n+ *\n+ * Every time a test finishes this class will update the logger. It will log\n+ * the last finished test method on the logger line until the first suite\n+ * finishes. Once the first suite finishes it always logs the last finished\n+ * suite. This means that in test runs with a single suite the logger will be\n+ * updated with the test name the whole time which is useful because these runs\n+ * usually have longer individual tests. For test runs with lots of suites the\n+ * majority of the time is spent showing the last suite that finished which is\n+ * more useful for those test runs because test methods there tend to be very\n+ * quick.\n+ */\n+class TestProgressLogger implements AggregatedEventListener {\n+ /** Factory to build a progress logger when testing starts */\n+ ProgressLoggerFactory factory\n+ ProgressLogger progressLogger\n+ int totalSuites\n+ int totalSlaves\n+\n+ // sprintf formats used to align the integers we print\n+ String suitesFormat\n+ String slavesFormat\n+ String testsFormat\n+\n+ // Counters incremented test completion.\n+ volatile int suitesCompleted = 0\n+ volatile int testsCompleted = 0\n+ volatile int testsFailed = 0\n+ volatile int testsIgnored = 0\n+\n+ // Information about the last, most interesting event.\n+ volatile String eventDescription\n+ volatile int eventSlave\n+ volatile long eventExecutionTime\n+\n+ /** Have we finished a whole suite yet? 
*/\n+ volatile boolean suiteFinished = false\n+ /* Note that we probably overuse volatile here but it isn't hurting us and\n+ lets us move things around without worying about breaking things. */\n+\n+ @Subscribe\n+ void onStart(AggregatedStartEvent e) throws IOException {\n+ totalSuites = e.suiteCount\n+ totalSlaves = e.slaveCount\n+ progressLogger = factory.newOperation(TestProgressLogger)\n+ progressLogger.setDescription('Randomized test runner')\n+ progressLogger.started()\n+ progressLogger.progress(\n+ \"Starting JUnit4 for ${totalSuites} suites on ${totalSlaves} jvms\")\n+\n+ suitesFormat = \"%0${widthForTotal(totalSuites)}d\"\n+ slavesFormat = \"%-${widthForTotal(totalSlaves)}s\"\n+ /* Just guess the number of tests because we can't figure it out from\n+ here and it isn't worth doing anything fancy to prevent the console\n+ from jumping around a little. 200 is a pretty wild guess for the\n+ minimum but it makes REST tests output sanely. */\n+ int totalNumberOfTestsGuess = max(200, totalSuites * 10)\n+ testsFormat = \"%0${widthForTotal(totalNumberOfTestsGuess)}d\"\n+ }\n+\n+ @Subscribe\n+ void onTestResult(AggregatedTestResultEvent e) throws IOException {\n+ testsCompleted++\n+ switch (e.status) {\n+ case ERROR:\n+ case FAILURE:\n+ testsFailed++\n+ break\n+ case IGNORED:\n+ case IGNORED_ASSUMPTION:\n+ testsIgnored++\n+ break\n+ case OK:\n+ break\n+ default:\n+ throw new IllegalArgumentException(\n+ \"Unknown test status: [${e.status}]\")\n+ }\n+ if (!suiteFinished) {\n+ updateEventInfo(e)\n+ }\n+\n+ log()\n+ }\n+\n+ @Subscribe\n+ void onSuiteResult(AggregatedSuiteResultEvent e) throws IOException {\n+ suitesCompleted++\n+ suiteFinished = true\n+ updateEventInfo(e)\n+ log()\n+ }\n+\n+ /**\n+ * Update the suite information with a junit4 event.\n+ */\n+ private void updateEventInfo(Object e) {\n+ eventDescription = simpleName(e.description.className)\n+ if (e.description.methodName != null) {\n+ eventDescription += \"#${e.description.methodName}\"\n+ }\n+ eventSlave = e.slave.id\n+ eventExecutionTime = e.executionTime\n+ }\n+\n+ /**\n+ * Extract a Class#getSimpleName style name from Class#getName style\n+ * string. We can't just use Class#getSimpleName because junit descriptions\n+ * don't alway s set the class field but they always set the className\n+ * field.\n+ */\n+ private static String simpleName(String className) {\n+ return className.substring(className.lastIndexOf('.') + 1)\n+ }\n+\n+ private void log() {\n+ /* Remember that instances of this class are only ever active on one\n+ thread at a time so there really aren't race conditions here. It'd be\n+ OK if there were because they'd only display an overcount\n+ temporarily. */\n+ String log = ''\n+ if (totalSuites > 1) {\n+ /* Skip printing the suites to save space when there is only a\n+ single suite. This is nice because when there is only a single\n+ suite we log the method name and those can be long. */\n+ log += sprintf(\"Suites [${suitesFormat}/${suitesFormat}], \",\n+ [suitesCompleted, totalSuites])\n+ }\n+ log += sprintf(\"Tests [${testsFormat}|%d|%d], \",\n+ [testsCompleted, testsFailed, testsIgnored])\n+ log += \"in ${formatDurationInSeconds(eventExecutionTime)} \"\n+ if (totalSlaves > 1) {\n+ /* Skip printing the slaves if there is only one of them. This is\n+ nice because when there is only a single slave there is often\n+ only a single suite and we could use the extra space to log the\n+ test method names. 
*/\n+ log += \"J${sprintf(slavesFormat, eventSlave)} \"\n+ }\n+ log += \"completed ${eventDescription}\"\n+ progressLogger.progress(log)\n+ }\n+\n+ private static int widthForTotal(int total) {\n+ return ((total - 1) as String).length()\n+ }\n+\n+ @Override\n+ void setOuter(JUnit4 junit) {}\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy",
"status": "added"
},
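The zero-padded formats computed in `onStart` are what keep the single progress line from jittering as the counters grow. A standalone Groovy illustration of `widthForTotal` and the resulting format string:

```groovy
int totalSuites = 250
int width = ((totalSuites - 1) as String).length()   // 3 digits for "249"
String suitesFormat = "%0${width}d"                  // "%03d"
assert sprintf("Suites [${suitesFormat}/${suitesFormat}]", 7, totalSuites) == 'Suites [007/250]'
```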
{
"diff": "@@ -0,0 +1,388 @@\n+package com.carrotsearch.gradle.junit4\n+\n+import com.carrotsearch.ant.tasks.junit4.JUnit4\n+import com.carrotsearch.ant.tasks.junit4.Pluralize\n+import com.carrotsearch.ant.tasks.junit4.TestsSummaryEventListener\n+import com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.base.Strings\n+import com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.eventbus.Subscribe\n+import com.carrotsearch.ant.tasks.junit4.events.*\n+import com.carrotsearch.ant.tasks.junit4.events.aggregated.*\n+import com.carrotsearch.ant.tasks.junit4.events.mirrors.FailureMirror\n+import com.carrotsearch.ant.tasks.junit4.listeners.AggregatedEventListener\n+import com.carrotsearch.ant.tasks.junit4.listeners.StackTraceFilter\n+import org.apache.tools.ant.filters.TokenFilter\n+import org.gradle.api.logging.LogLevel\n+import org.gradle.api.logging.Logger\n+import org.junit.runner.Description\n+\n+import java.util.concurrent.atomic.AtomicBoolean\n+import java.util.concurrent.atomic.AtomicInteger\n+\n+import javax.sound.sampled.AudioSystem;\n+import javax.sound.sampled.Clip;\n+import javax.sound.sampled.Line;\n+import javax.sound.sampled.LineEvent;\n+import javax.sound.sampled.LineListener;\n+\n+import static com.carrotsearch.ant.tasks.junit4.FormattingUtils.*\n+import static com.carrotsearch.gradle.junit4.TestLoggingConfiguration.OutputMode\n+\n+class TestReportLogger extends TestsSummaryEventListener implements AggregatedEventListener {\n+\n+ static final String FAILURE_MARKER = \" <<< FAILURES!\"\n+\n+ /** Status names column. */\n+ static EnumMap<? extends TestStatus, String> statusNames;\n+ static {\n+ statusNames = new EnumMap<>(TestStatus.class);\n+ for (TestStatus s : TestStatus.values()) {\n+ statusNames.put(s,\n+ s == TestStatus.IGNORED_ASSUMPTION\n+ ? \"IGNOR/A\" : s.toString());\n+ }\n+ }\n+\n+ JUnit4 owner\n+\n+ /** Logger to write the report to */\n+ Logger logger\n+\n+ TestLoggingConfiguration config\n+\n+ /** Forked concurrent JVM count. */\n+ int forkedJvmCount\n+\n+ /** Format line for JVM ID string. */\n+ String jvmIdFormat\n+\n+ /** Output stream that logs messages to the given logger */\n+ LoggingOutputStream outStream\n+ LoggingOutputStream errStream\n+\n+ /** A list of failed tests, if to be displayed at the end. */\n+ List<Description> failedTests = new ArrayList<>()\n+\n+ /** Stack trace filters. 
*/\n+ StackTraceFilter stackFilter = new StackTraceFilter()\n+\n+ Map<String, Long> suiteTimes = new HashMap<>()\n+ boolean slowTestsFound = false\n+\n+ int totalSuites\n+ AtomicInteger suitesCompleted = new AtomicInteger()\n+\n+ @Subscribe\n+ void onStart(AggregatedStartEvent e) throws IOException {\n+ this.totalSuites = e.getSuiteCount();\n+ StringBuilder info = new StringBuilder('==> Test Info: ')\n+ info.append('seed=' + owner.getSeed() + '; ')\n+ info.append(Pluralize.pluralize(e.getSlaveCount(), 'jvm') + '=' + e.getSlaveCount() + '; ')\n+ info.append(Pluralize.pluralize(e.getSuiteCount(), 'suite') + '=' + e.getSuiteCount())\n+ logger.lifecycle(info.toString())\n+\n+ forkedJvmCount = e.getSlaveCount();\n+ jvmIdFormat = \" J%-\" + (1 + (int) Math.floor(Math.log10(forkedJvmCount))) + \"d\";\n+\n+ outStream = new LoggingOutputStream(logger: logger, level: LogLevel.LIFECYCLE, prefix: \" 1> \")\n+ errStream = new LoggingOutputStream(logger: logger, level: LogLevel.ERROR, prefix: \" 2> \")\n+\n+ for (String contains : config.stackTraceFilters.contains) {\n+ TokenFilter.ContainsString containsFilter = new TokenFilter.ContainsString()\n+ containsFilter.setContains(contains)\n+ stackFilter.addContainsString(containsFilter)\n+ }\n+ for (String pattern : config.stackTraceFilters.patterns) {\n+ TokenFilter.ContainsRegex regexFilter = new TokenFilter.ContainsRegex()\n+ regexFilter.setPattern(pattern)\n+ stackFilter.addContainsRegex(regexFilter)\n+ }\n+ }\n+\n+ @Subscribe\n+ void onChildBootstrap(ChildBootstrap e) throws IOException {\n+ logger.info(\"Started J\" + e.getSlave().id + \" PID(\" + e.getSlave().getPidString() + \").\");\n+ }\n+\n+ @Subscribe\n+ void onHeartbeat(HeartBeatEvent e) throws IOException {\n+ logger.warn(\"HEARTBEAT J\" + e.getSlave().id + \" PID(\" + e.getSlave().getPidString() + \"): \" +\n+ formatTime(e.getCurrentTime()) + \", stalled for \" +\n+ formatDurationInSeconds(e.getNoEventDuration()) + \" at: \" +\n+ (e.getDescription() == null ? 
\"<unknown>\" : formatDescription(e.getDescription())))\n+ try {\n+ playBeat();\n+ } catch (Exception nosound) { /* handling exceptions with style */ }\n+ slowTestsFound = true\n+ }\n+\n+ void playBeat() throws Exception {\n+ Clip clip = (Clip)AudioSystem.getLine(new Line.Info(Clip.class));\n+ final AtomicBoolean stop = new AtomicBoolean();\n+ clip.addLineListener(new LineListener() {\n+ @Override\n+ public void update(LineEvent event) {\n+ if (event.getType() == LineEvent.Type.STOP) {\n+ stop.set(true);\n+ }\n+ }\n+ });\n+ InputStream stream = getClass().getResourceAsStream(\"/beat.wav\");\n+ try {\n+ clip.open(AudioSystem.getAudioInputStream(stream));\n+ clip.start();\n+ while (!stop.get()) {\n+ Thread.sleep(20);\n+ }\n+ clip.close();\n+ } finally {\n+ stream.close();\n+ }\n+ }\n+\n+ @Subscribe\n+ void onQuit(AggregatedQuitEvent e) throws IOException {\n+ if (config.showNumFailuresAtEnd > 0 && !failedTests.isEmpty()) {\n+ List<Description> sublist = this.failedTests\n+ StringBuilder b = new StringBuilder()\n+ b.append('Tests with failures')\n+ if (sublist.size() > config.showNumFailuresAtEnd) {\n+ sublist = sublist.subList(0, config.showNumFailuresAtEnd)\n+ b.append(\" (first \" + config.showNumFailuresAtEnd + \" out of \" + failedTests.size() + \")\")\n+ }\n+ b.append(':\\n')\n+ for (Description description : sublist) {\n+ b.append(\" - \").append(formatDescription(description, true)).append('\\n')\n+ }\n+ logger.warn(b.toString())\n+ }\n+ if (config.slowTests.summarySize > 0) {\n+ List<Map.Entry<String, Long>> sortedSuiteTimes = new ArrayList<>(suiteTimes.entrySet())\n+ Collections.sort(sortedSuiteTimes, new Comparator<Map.Entry<String, Long>>() {\n+ @Override\n+ int compare(Map.Entry<String, Long> o1, Map.Entry<String, Long> o2) {\n+ return o2.value - o1.value // sort descending\n+ }\n+ })\n+ LogLevel level = slowTestsFound ? 
LogLevel.WARN : LogLevel.INFO\n+ int numToLog = Math.min(config.slowTests.summarySize, sortedSuiteTimes.size())\n+ logger.log(level, 'Slow Tests Summary:')\n+ for (int i = 0; i < numToLog; ++i) {\n+ logger.log(level, String.format(Locale.ENGLISH, '%6.2fs | %s',\n+ sortedSuiteTimes.get(i).value / 1000.0,\n+ sortedSuiteTimes.get(i).key));\n+ }\n+ logger.log(level, '') // extra vertical separation\n+ }\n+ if (failedTests.isEmpty()) {\n+ // summary is already printed for failures\n+ logger.lifecycle('==> Test Summary: ' + getResult().toString())\n+ }\n+ }\n+\n+ @Subscribe\n+ void onSuiteStart(AggregatedSuiteStartedEvent e) throws IOException {\n+ if (isPassthrough()) {\n+ SuiteStartedEvent evt = e.getSuiteStartedEvent();\n+ emitSuiteStart(LogLevel.LIFECYCLE, evt.getDescription());\n+ }\n+ }\n+\n+ @Subscribe\n+ void onOutput(PartialOutputEvent e) throws IOException {\n+ if (isPassthrough()) {\n+ // We only allow passthrough output if there is one JVM.\n+ switch (e.getEvent().getType()) {\n+ case EventType.APPEND_STDERR:\n+ ((IStreamEvent) e.getEvent()).copyTo(errStream);\n+ break;\n+ case EventType.APPEND_STDOUT:\n+ ((IStreamEvent) e.getEvent()).copyTo(outStream);\n+ break;\n+ default:\n+ break;\n+ }\n+ }\n+ }\n+\n+ @Subscribe\n+ void onTestResult(AggregatedTestResultEvent e) throws IOException {\n+ if (isPassthrough() && e.getStatus() != TestStatus.OK) {\n+ flushOutput();\n+ emitStatusLine(LogLevel.ERROR, e, e.getStatus(), e.getExecutionTime());\n+ }\n+\n+ if (!e.isSuccessful()) {\n+ failedTests.add(e.getDescription());\n+ }\n+ }\n+\n+ @Subscribe\n+ void onSuiteResult(AggregatedSuiteResultEvent e) throws IOException {\n+ final int completed = suitesCompleted.incrementAndGet();\n+\n+ if (e.isSuccessful() && e.getTests().isEmpty()) {\n+ return;\n+ }\n+ if (config.slowTests.summarySize > 0) {\n+ suiteTimes.put(e.getDescription().getDisplayName(), e.getExecutionTime())\n+ }\n+\n+ LogLevel level = e.isSuccessful() && config.outputMode != OutputMode.ALWAYS ? LogLevel.INFO : LogLevel.LIFECYCLE\n+\n+ // We must emit buffered test and stream events (in case of failures).\n+ if (!isPassthrough()) {\n+ emitSuiteStart(level, e.getDescription())\n+ emitBufferedEvents(level, e)\n+ }\n+\n+ // Emit a synthetic failure for suite-level errors, if any.\n+ if (!e.getFailures().isEmpty()) {\n+ emitStatusLine(level, e, TestStatus.ERROR, 0)\n+ }\n+\n+ if (!e.getFailures().isEmpty()) {\n+ failedTests.add(e.getDescription())\n+ }\n+\n+ emitSuiteEnd(level, e, completed)\n+ }\n+\n+ /** Suite prologue. 
*/\n+ void emitSuiteStart(LogLevel level, Description description) throws IOException {\n+ logger.log(level, 'Suite: ' + description.getDisplayName());\n+ }\n+\n+ void emitBufferedEvents(LogLevel level, AggregatedSuiteResultEvent e) throws IOException {\n+ if (config.outputMode == OutputMode.NEVER) {\n+ return\n+ }\n+\n+ final IdentityHashMap<TestFinishedEvent,AggregatedTestResultEvent> eventMap = new IdentityHashMap<>();\n+ for (AggregatedTestResultEvent tre : e.getTests()) {\n+ eventMap.put(tre.getTestFinishedEvent(), tre)\n+ }\n+\n+ final boolean emitOutput = config.outputMode == OutputMode.ALWAYS && isPassthrough() == false ||\n+ config.outputMode == OutputMode.ONERROR && e.isSuccessful() == false\n+\n+ for (IEvent event : e.getEventStream()) {\n+ switch (event.getType()) {\n+ case EventType.APPEND_STDOUT:\n+ if (emitOutput) ((IStreamEvent) event).copyTo(outStream);\n+ break;\n+\n+ case EventType.APPEND_STDERR:\n+ if (emitOutput) ((IStreamEvent) event).copyTo(errStream);\n+ break;\n+\n+ case EventType.TEST_FINISHED:\n+ assert eventMap.containsKey(event)\n+ final AggregatedTestResultEvent aggregated = eventMap.get(event);\n+ if (aggregated.getStatus() != TestStatus.OK) {\n+ flushOutput();\n+ emitStatusLine(level, aggregated, aggregated.getStatus(), aggregated.getExecutionTime());\n+ }\n+\n+ default:\n+ break;\n+ }\n+ }\n+\n+ if (emitOutput) {\n+ flushOutput()\n+ }\n+ }\n+\n+ void emitSuiteEnd(LogLevel level, AggregatedSuiteResultEvent e, int suitesCompleted) throws IOException {\n+\n+ final StringBuilder b = new StringBuilder();\n+ b.append(String.format(Locale.ENGLISH, 'Completed [%d/%d]%s in %.2fs, ',\n+ suitesCompleted,\n+ totalSuites,\n+ e.getSlave().slaves > 1 ? ' on J' + e.getSlave().id : '',\n+ e.getExecutionTime() / 1000.0d));\n+ b.append(e.getTests().size()).append(Pluralize.pluralize(e.getTests().size(), ' test'));\n+\n+ int failures = e.getFailureCount();\n+ if (failures > 0) {\n+ b.append(', ').append(failures).append(Pluralize.pluralize(failures, ' failure'));\n+ }\n+\n+ int errors = e.getErrorCount();\n+ if (errors > 0) {\n+ b.append(', ').append(errors).append(Pluralize.pluralize(errors, ' error'));\n+ }\n+\n+ int ignored = e.getIgnoredCount();\n+ if (ignored > 0) {\n+ b.append(', ').append(ignored).append(' skipped');\n+ }\n+\n+ if (!e.isSuccessful()) {\n+ b.append(' <<< FAILURES!');\n+ }\n+\n+ b.append('\\n')\n+ logger.log(level, b.toString());\n+ }\n+\n+ /** Emit status line for an aggregated event. 
*/\n+ void emitStatusLine(LogLevel level, AggregatedResultEvent result, TestStatus status, long timeMillis) throws IOException {\n+ final StringBuilder line = new StringBuilder();\n+\n+ line.append(Strings.padEnd(statusNames.get(status), 8, ' ' as char))\n+ line.append(formatDurationInSeconds(timeMillis))\n+ if (forkedJvmCount > 1) {\n+ line.append(String.format(Locale.ENGLISH, jvmIdFormat, result.getSlave().id))\n+ }\n+ line.append(' | ')\n+\n+ line.append(formatDescription(result.getDescription()))\n+ if (!result.isSuccessful()) {\n+ line.append(FAILURE_MARKER)\n+ }\n+ logger.log(level, line.toString())\n+\n+ PrintWriter writer = new PrintWriter(new LoggingOutputStream(logger: logger, level: level, prefix: ' > '))\n+\n+ if (status == TestStatus.IGNORED && result instanceof AggregatedTestResultEvent) {\n+ writer.write('Cause: ')\n+ writer.write(((AggregatedTestResultEvent) result).getCauseForIgnored())\n+ writer.flush()\n+ }\n+\n+ final List<FailureMirror> failures = result.getFailures();\n+ if (!failures.isEmpty()) {\n+ int count = 0;\n+ for (FailureMirror fm : failures) {\n+ count++;\n+ if (fm.isAssumptionViolation()) {\n+ writer.write(String.format(Locale.ENGLISH,\n+ 'Assumption #%d: %s',\n+ count, fm.getMessage() == null ? '(no message)' : fm.getMessage()));\n+ } else {\n+ writer.write(String.format(Locale.ENGLISH,\n+ 'Throwable #%d: %s',\n+ count,\n+ stackFilter.apply(fm.getTrace())));\n+ }\n+ }\n+ writer.flush()\n+ }\n+ }\n+\n+ void flushOutput() throws IOException {\n+ outStream.flush()\n+ errStream.flush()\n+ }\n+\n+ /** Returns true if output should be logged immediately. */\n+ boolean isPassthrough() {\n+ return forkedJvmCount == 1 && config.outputMode == OutputMode.ALWAYS\n+ }\n+\n+ @Override\n+ void setOuter(JUnit4 task) {\n+ owner = task\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestReportLogger.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,426 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.gradle\n+\n+import org.gradle.process.ExecResult\n+\n+import java.time.ZonedDateTime\n+import java.time.ZoneOffset\n+\n+import nebula.plugin.extraconfigurations.ProvidedBasePlugin\n+import org.elasticsearch.gradle.precommit.PrecommitTasks\n+import org.gradle.api.*\n+import org.gradle.api.artifacts.*\n+import org.gradle.api.artifacts.dsl.RepositoryHandler\n+import org.gradle.api.artifacts.maven.MavenPom\n+import org.gradle.api.tasks.bundling.Jar\n+import org.gradle.api.tasks.compile.JavaCompile\n+import org.gradle.internal.jvm.Jvm\n+import org.gradle.util.GradleVersion\n+\n+/**\n+ * Encapsulates build configuration for elasticsearch projects.\n+ */\n+class BuildPlugin implements Plugin<Project> {\n+\n+ static final JavaVersion minimumJava = JavaVersion.VERSION_1_8\n+\n+ @Override\n+ void apply(Project project) {\n+ project.pluginManager.apply('java')\n+ project.pluginManager.apply('carrotsearch.randomized-testing')\n+ // these plugins add lots of info to our jars\n+ configureJarManifest(project) // jar config must be added before info broker\n+ project.pluginManager.apply('nebula.info-broker')\n+ project.pluginManager.apply('nebula.info-basic')\n+ project.pluginManager.apply('nebula.info-java')\n+ project.pluginManager.apply('nebula.info-scm')\n+ project.pluginManager.apply('nebula.info-jar')\n+ project.pluginManager.apply('com.bmuschko.nexus')\n+ project.pluginManager.apply(ProvidedBasePlugin)\n+\n+ globalBuildInfo(project)\n+ configureRepositories(project)\n+ configureConfigurations(project)\n+ project.ext.versions = VersionProperties.versions\n+ configureCompile(project)\n+\n+ configureTest(project)\n+ configurePrecommit(project)\n+ }\n+\n+ /** Performs checks on the build environment and prints information about the build environment. 
*/\n+ static void globalBuildInfo(Project project) {\n+ if (project.rootProject.ext.has('buildChecksDone') == false) {\n+ String javaHome = findJavaHome()\n+ File gradleJavaHome = Jvm.current().javaHome\n+ String gradleJavaVersionDetails = \"${System.getProperty('java.vendor')} ${System.getProperty('java.version')}\" +\n+ \" [${System.getProperty('java.vm.name')} ${System.getProperty('java.vm.version')}]\"\n+\n+ String javaVersionDetails = gradleJavaVersionDetails\n+ String javaVersion = System.getProperty('java.version')\n+ JavaVersion javaVersionEnum = JavaVersion.current()\n+ if (new File(javaHome).canonicalPath != gradleJavaHome.canonicalPath) {\n+ javaVersionDetails = findJavaVersionDetails(project, javaHome)\n+ javaVersionEnum = JavaVersion.toVersion(findJavaSpecificationVersion(project, javaHome))\n+ javaVersion = findJavaVersion(project, javaHome)\n+ }\n+\n+ // Build debugging info\n+ println '======================================='\n+ println 'Elasticsearch Build Hamster says Hello!'\n+ println '======================================='\n+ println \" Gradle Version : ${project.gradle.gradleVersion}\"\n+ println \" OS Info : ${System.getProperty('os.name')} ${System.getProperty('os.version')} (${System.getProperty('os.arch')})\"\n+ if (gradleJavaVersionDetails != javaVersionDetails) {\n+ println \" JDK Version (gradle) : ${gradleJavaVersionDetails}\"\n+ println \" JDK Version (compile) : ${javaVersionDetails}\"\n+ } else {\n+ println \" JDK Version : ${gradleJavaVersionDetails}\"\n+ }\n+\n+ // enforce gradle version\n+ GradleVersion minGradle = GradleVersion.version('2.8')\n+ if (GradleVersion.current() < minGradle) {\n+ throw new GradleException(\"${minGradle} or above is required to build elasticsearch\")\n+ }\n+\n+ // enforce Java version\n+ if (javaVersionEnum < minimumJava) {\n+ throw new GradleException(\"Java ${minimumJava} or above is required to build Elasticsearch\")\n+ }\n+\n+ project.rootProject.ext.javaHome = javaHome\n+ project.rootProject.ext.javaVersion = javaVersion\n+ project.rootProject.ext.buildChecksDone = true\n+ }\n+ project.targetCompatibility = minimumJava\n+ project.sourceCompatibility = minimumJava\n+ // set java home for each project, so they dont have to find it in the root project\n+ project.ext.javaHome = project.rootProject.ext.javaHome\n+ project.ext.javaVersion = project.rootProject.ext.javaVersion\n+ }\n+\n+ /** Finds and enforces JAVA_HOME is set */\n+ private static String findJavaHome() {\n+ String javaHome = System.getenv('JAVA_HOME')\n+ if (javaHome == null) {\n+ if (System.getProperty(\"idea.active\") != null) {\n+ // intellij doesn't set JAVA_HOME, so we use the jdk gradle was run with\n+ javaHome = Jvm.current().javaHome\n+ } else {\n+ throw new GradleException('JAVA_HOME must be set to build Elasticsearch')\n+ }\n+ }\n+ return javaHome\n+ }\n+\n+ /** Finds printable java version of the given JAVA_HOME */\n+ private static String findJavaVersionDetails(Project project, String javaHome) {\n+ String versionInfoScript = 'print(' +\n+ 'java.lang.System.getProperty(\"java.vendor\") + \" \" + java.lang.System.getProperty(\"java.version\") + ' +\n+ '\" [\" + java.lang.System.getProperty(\"java.vm.name\") + \" \" + java.lang.System.getProperty(\"java.vm.version\") + \"]\");'\n+ return runJavascript(project, javaHome, versionInfoScript).trim()\n+ }\n+\n+ /** Finds the parsable java specification version */\n+ private static String findJavaSpecificationVersion(Project project, String javaHome) {\n+ String versionScript = 
'print(java.lang.System.getProperty(\"java.specification.version\"));'\n+ return runJavascript(project, javaHome, versionScript)\n+ }\n+\n+ /** Finds the parsable java specification version */\n+ private static String findJavaVersion(Project project, String javaHome) {\n+ String versionScript = 'print(java.lang.System.getProperty(\"java.version\"));'\n+ return runJavascript(project, javaHome, versionScript)\n+ }\n+\n+ /** Runs the given javascript using jjs from the jdk, and returns the output */\n+ private static String runJavascript(Project project, String javaHome, String script) {\n+ File tmpScript = File.createTempFile('es-gradle-tmp', '.js')\n+ tmpScript.setText(script, 'UTF-8')\n+ ByteArrayOutputStream output = new ByteArrayOutputStream()\n+ ExecResult result = project.exec {\n+ executable = new File(javaHome, 'bin/jjs')\n+ args tmpScript.toString()\n+ standardOutput = output\n+ errorOutput = new ByteArrayOutputStream()\n+ ignoreExitValue = true // we do not fail so we can first cleanup the tmp file\n+ }\n+ java.nio.file.Files.delete(tmpScript.toPath())\n+ result.assertNormalExitValue()\n+ return output.toString('UTF-8').trim()\n+ }\n+\n+ /** Return the configuration name used for finding transitive deps of the given dependency. */\n+ private static String transitiveDepConfigName(String groupId, String artifactId, String version) {\n+ return \"_transitive_${groupId}:${artifactId}:${version}\"\n+ }\n+\n+ /**\n+ * Makes dependencies non-transitive.\n+ *\n+ * Gradle allows setting all dependencies as non-transitive very easily.\n+ * Sadly this mechanism does not translate into maven pom generation. In order\n+ * to effectively make the pom act as if it has no transitive dependencies,\n+ * we must exclude each transitive dependency of each direct dependency.\n+ *\n+ * Determining the transitive deps of a dependency which has been resolved as\n+ * non-transitive is difficult because the process of resolving removes the\n+ * transitive deps. To sidestep this issue, we create a configuration per\n+ * direct dependency version. This specially named and unique configuration\n+ * will contain all of the transitive dependencies of this particular\n+ * dependency. 
We can then use this configuration during pom generation\n+ * to iterate the transitive dependencies and add excludes.\n+ */\n+ static void configureConfigurations(Project project) {\n+ // fail on any conflicting dependency versions\n+ project.configurations.all({ Configuration configuration ->\n+ if (configuration.name.startsWith('_transitive_')) {\n+ // don't force transitive configurations to not conflict with themselves, since\n+ // we just have them to find *what* transitive deps exist\n+ return\n+ }\n+ configuration.resolutionStrategy.failOnVersionConflict()\n+ })\n+\n+ // force all dependencies added directly to compile/testCompile to be non-transitive, except for ES itself\n+ Closure disableTransitiveDeps = { ModuleDependency dep ->\n+ if (!(dep instanceof ProjectDependency) && dep.getGroup() != 'org.elasticsearch') {\n+ dep.transitive = false\n+\n+ // also create a configuration just for this dependency version, so that later\n+ // we can determine which transitive dependencies it has\n+ String depConfig = transitiveDepConfigName(dep.group, dep.name, dep.version)\n+ if (project.configurations.findByName(depConfig) == null) {\n+ project.configurations.create(depConfig)\n+ project.dependencies.add(depConfig, \"${dep.group}:${dep.name}:${dep.version}\")\n+ }\n+ }\n+ }\n+\n+ project.configurations.compile.dependencies.all(disableTransitiveDeps)\n+ project.configurations.testCompile.dependencies.all(disableTransitiveDeps)\n+ project.configurations.provided.dependencies.all(disableTransitiveDeps)\n+\n+ // add exclusions to the pom directly, for each of the transitive deps of this project's deps\n+ project.modifyPom { MavenPom pom ->\n+ pom.withXml { XmlProvider xml ->\n+ // first find if we have dependencies at all, and grab the node\n+ NodeList depsNodes = xml.asNode().get('dependencies')\n+ if (depsNodes.isEmpty()) {\n+ return\n+ }\n+\n+ // check each dependency for any transitive deps\n+ for (Node depNode : depsNodes.get(0).children()) {\n+ String groupId = depNode.get('groupId').get(0).text()\n+ String artifactId = depNode.get('artifactId').get(0).text()\n+ String version = depNode.get('version').get(0).text()\n+\n+ // collect the transitive deps now that we know what this dependency is\n+ String depConfig = transitiveDepConfigName(groupId, artifactId, version)\n+ Configuration configuration = project.configurations.findByName(depConfig)\n+ if (configuration == null) {\n+ continue // we did not make this dep non-transitive\n+ }\n+ Set<ResolvedArtifact> artifacts = configuration.resolvedConfiguration.resolvedArtifacts\n+ if (artifacts.size() <= 1) {\n+ // this dep has no transitive deps (or the only artifact is itself)\n+ continue\n+ }\n+\n+ // we now know we have something to exclude, so add the exclusion elements\n+ Node exclusions = depNode.appendNode('exclusions')\n+ for (ResolvedArtifact transitiveArtifact : artifacts) {\n+ ModuleVersionIdentifier transitiveDep = transitiveArtifact.moduleVersion.id\n+ if (transitiveDep.group == groupId && transitiveDep.name == artifactId) {\n+ continue; // don't exclude the dependency itself!\n+ }\n+ Node exclusion = exclusions.appendNode('exclusion')\n+ exclusion.appendNode('groupId', transitiveDep.group)\n+ exclusion.appendNode('artifactId', transitiveDep.name)\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ /** Adds repositores used by ES dependencies */\n+ static void configureRepositories(Project project) {\n+ RepositoryHandler repos = project.repositories\n+ repos.mavenCentral()\n+ repos.maven {\n+ name 'sonatype-snapshots'\n+ url 
'http://oss.sonatype.org/content/repositories/snapshots/'\n+ }\n+ String luceneVersion = VersionProperties.lucene\n+ if (luceneVersion.contains('-snapshot')) {\n+ // extract the revision number from the version with a regex matcher\n+ String revision = (luceneVersion =~ /\\w+-snapshot-(\\d+)/)[0][1]\n+ repos.maven {\n+ name 'lucene-snapshots'\n+ url \"http://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/${revision}\"\n+ }\n+ }\n+ }\n+\n+ /** Adds compiler settings to the project */\n+ static void configureCompile(Project project) {\n+ project.ext.compactProfile = 'compact3'\n+ project.afterEvaluate {\n+ // fail on all javac warnings\n+ project.tasks.withType(JavaCompile) {\n+ options.fork = true\n+ options.forkOptions.executable = new File(project.javaHome, 'bin/javac')\n+ options.forkOptions.memoryMaximumSize = \"1g\"\n+ /*\n+ * -path because gradle will send in paths that don't always exist.\n+ * -missing because we have tons of missing @returns and @param.\n+ */\n+ // don't even think about passing args with -J-xxx, oracle will ask you to submit a bug report :)\n+ options.compilerArgs << '-Werror' << '-Xlint:all,-path' << '-Xdoclint:all' << '-Xdoclint:-missing'\n+ // compile with compact 3 profile by default\n+ // NOTE: this is just a compile time check: does not replace testing with a compact3 JRE\n+ if (project.compactProfile != 'full') {\n+ options.compilerArgs << '-profile' << project.compactProfile\n+ }\n+ options.encoding = 'UTF-8'\n+ }\n+ }\n+ }\n+\n+ /** Adds additional manifest info to jars */\n+ static void configureJarManifest(Project project) {\n+ project.tasks.withType(Jar) { Jar jarTask ->\n+ jarTask.doFirst {\n+ // this doFirst is added before the info plugin, therefore it will run\n+ // after the doFirst added by the info plugin, and we can override attributes\n+ jarTask.manifest.attributes(\n+ 'X-Compile-Elasticsearch-Version': VersionProperties.elasticsearch,\n+ 'X-Compile-Lucene-Version': VersionProperties.lucene,\n+ 'Build-Date': ZonedDateTime.now(ZoneOffset.UTC),\n+ 'Build-Java-Version': project.javaVersion)\n+ if (jarTask.manifest.attributes.containsKey('Change') == false) {\n+ logger.warn('Building without git revision id.')\n+ jarTask.manifest.attributes('Change': 'N/A')\n+ }\n+ }\n+ }\n+ }\n+\n+ /** Returns a closure of common configuration shared by unit and integration tests. 
*/\n+ static Closure commonTestConfig(Project project) {\n+ return {\n+ jvm \"${project.javaHome}/bin/java\"\n+ parallelism System.getProperty('tests.jvms', 'auto')\n+ ifNoTests 'fail'\n+ leaveTemporary true\n+\n+ // TODO: why are we not passing maxmemory to junit4?\n+ jvmArg '-Xmx' + System.getProperty('tests.heap.size', '512m')\n+ jvmArg '-Xms' + System.getProperty('tests.heap.size', '512m')\n+ if (JavaVersion.current().isJava7()) {\n+ // some tests need a large permgen, but that only exists on java 7\n+ jvmArg '-XX:MaxPermSize=128m'\n+ }\n+ jvmArg '-XX:MaxDirectMemorySize=512m'\n+ jvmArg '-XX:+HeapDumpOnOutOfMemoryError'\n+ File heapdumpDir = new File(project.buildDir, 'heapdump')\n+ heapdumpDir.mkdirs()\n+ jvmArg '-XX:HeapDumpPath=' + heapdumpDir\n+ argLine System.getProperty('tests.jvm.argline')\n+\n+ // we use './temp' since this is per JVM and tests are forbidden from writing to CWD\n+ systemProperty 'java.io.tmpdir', './temp'\n+ systemProperty 'java.awt.headless', 'true'\n+ systemProperty 'tests.maven', 'true' // TODO: rename this once we've switched to gradle!\n+ systemProperty 'tests.artifact', project.name\n+ systemProperty 'tests.task', path\n+ systemProperty 'tests.security.manager', 'true'\n+ // default test sysprop values\n+ systemProperty 'tests.ifNoTests', 'fail'\n+ systemProperty 'es.logger.level', 'WARN'\n+ for (Map.Entry<String, String> property : System.properties.entrySet()) {\n+ if (property.getKey().startsWith('tests.') ||\n+ property.getKey().startsWith('es.')) {\n+ systemProperty property.getKey(), property.getValue()\n+ }\n+ }\n+\n+ // System assertions (-esa) are disabled for now because of what looks like a\n+ // JDK bug triggered by Groovy on JDK7. We should look at re-enabling system\n+ // assertions when we upgrade to a new version of Groovy (currently 2.4.4) or\n+ // require JDK8. 
See https://issues.apache.org/jira/browse/GROOVY-7528.\n+ enableSystemAssertions false\n+\n+ testLogging {\n+ showNumFailuresAtEnd 25\n+ slowTests {\n+ heartbeat 10\n+ summarySize 5\n+ }\n+ stackTraceFilters {\n+ // custom filters: we carefully only omit test infra noise here\n+ contains '.SlaveMain.'\n+ regex(/^(\\s+at )(org\\.junit\\.)/)\n+ // also includes anonymous classes inside these two:\n+ regex(/^(\\s+at )(com\\.carrotsearch\\.randomizedtesting\\.RandomizedRunner)/)\n+ regex(/^(\\s+at )(com\\.carrotsearch\\.randomizedtesting\\.ThreadLeakControl)/)\n+ regex(/^(\\s+at )(com\\.carrotsearch\\.randomizedtesting\\.rules\\.)/)\n+ regex(/^(\\s+at )(org\\.apache\\.lucene\\.util\\.TestRule)/)\n+ regex(/^(\\s+at )(org\\.apache\\.lucene\\.util\\.AbstractBeforeAfterRule)/)\n+ }\n+ if (System.getProperty('tests.class') != null && System.getProperty('tests.output') == null) {\n+ // if you are debugging, you want to see the output!\n+ outputMode 'always'\n+ } else {\n+ outputMode System.getProperty('tests.output', 'onerror')\n+ }\n+ }\n+\n+ balancers {\n+ executionTime cacheFilename: \".local-${project.version}-${name}-execution-times.log\"\n+ }\n+\n+ listeners {\n+ junitReport()\n+ }\n+\n+ exclude '**/*$*.class'\n+ }\n+ }\n+\n+ /** Configures the test task */\n+ static Task configureTest(Project project) {\n+ Task test = project.tasks.getByName('test')\n+ test.configure(commonTestConfig(project))\n+ test.configure {\n+ include '**/*Tests.class'\n+ }\n+ return test\n+ }\n+\n+ private static configurePrecommit(Project project) {\n+ Task precommit = PrecommitTasks.create(project, true)\n+ project.check.dependsOn(precommit)\n+ project.test.mustRunAfter(precommit)\n+ project.dependencyLicenses.dependencies = project.configurations.runtime - project.configurations.provided\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,48 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.gradle\n+\n+import org.gradle.api.DefaultTask\n+import org.gradle.api.tasks.*\n+import org.gradle.internal.nativeintegration.filesystem.Chmod\n+import java.io.File\n+import javax.inject.Inject\n+\n+/**\n+ * Creates an empty directory.\n+ */\n+class EmptyDirTask extends DefaultTask {\n+ @Input\n+ Object dir\n+\n+ @Input\n+ int dirMode = 0755\n+\n+ @TaskAction\n+ void create() {\n+ dir = dir as File\n+ dir.mkdirs()\n+ getChmod().chmod(dir, dirMode)\n+ }\n+\n+ @Inject\n+ Chmod getChmod() {\n+ throw new UnsupportedOperationException()\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/EmptyDirTask.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,50 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.gradle\n+\n+import org.gradle.api.DefaultTask\n+import org.gradle.api.tasks.*\n+import java.io.File\n+\n+/**\n+ * Creates a file and sets it contents to something.\n+ */\n+class FileContentsTask extends DefaultTask {\n+ /**\n+ * The file to be built. Must be of type File to make @OutputFile happy.\n+ */\n+ @OutputFile\n+ File file\n+\n+ @Input\n+ Object contents\n+\n+ /**\n+ * The file to be built. Takes any objecct and coerces to a file.\n+ */\n+ void setFile(Object file) {\n+ this.file = file as File\n+ }\n+\n+ @TaskAction\n+ void setContents() {\n+ file = file as File\n+ file.text = contents.toString()\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/FileContentsTask.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,42 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.gradle\n+\n+import org.gradle.api.GradleException\n+import org.gradle.api.tasks.Exec\n+\n+/**\n+ * A wrapper around gradle's Exec task to capture output and log on error.\n+ */\n+class LoggedExec extends Exec {\n+ LoggedExec() {\n+ if (logger.isInfoEnabled() == false) {\n+ standardOutput = new ByteArrayOutputStream()\n+ errorOutput = standardOutput\n+ ignoreExitValue = true\n+ doLast {\n+ if (execResult.exitValue != 0) {\n+ standardOutput.toString('UTF-8').eachLine { line -> logger.error(line) }\n+ throw new GradleException(\"Process '${executable} ${args.join(' ')}' finished with non-zero exit value ${execResult.exitValue}\")\n+ }\n+ }\n+ }\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/LoggedExec.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,45 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.gradle\n+\n+import org.apache.tools.ant.filters.ReplaceTokens\n+import org.gradle.api.file.CopySpec\n+\n+/**\n+ * Gradle provides \"expansion\" functionality using groovy's SimpleTemplatingEngine (TODO: check name).\n+ * However, it allows substitutions of the form {@code $foo} (no curlies). Rest tests provide\n+ * some substitution from the test runner, which this form is used for.\n+ *\n+ * This class provides a helper to do maven filtering, where only the form {@code $\\{foo\\}} is supported.\n+ *\n+ * TODO: we should get rid of this hack, and make the rest tests use some other identifier\n+ * for builtin vars\n+ */\n+class MavenFilteringHack {\n+ /**\n+ * Adds a filter to the given copy spec that will substitute maven variables.\n+ * @param CopySpec\n+ */\n+ static void filter(CopySpec copySpec, Map substitutions) {\n+ Map mavenSubstitutions = substitutions.collectEntries() {\n+ key, value -> [\"{${key}\".toString(), value.toString()]\n+ }\n+ copySpec.filter(ReplaceTokens, tokens: mavenSubstitutions, beginToken: '$', endToken: '}')\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/MavenFilteringHack.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,41 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.gradle\n+\n+/**\n+ * Accessor for shared dependency versions used by elasticsearch, namely the elasticsearch and lucene versions.\n+ */\n+class VersionProperties {\n+ static final String elasticsearch\n+ static final String lucene\n+ static final Map<String, String> versions = new HashMap<>()\n+ static {\n+ Properties props = new Properties()\n+ InputStream propsStream = VersionProperties.class.getResourceAsStream('/version.properties')\n+ if (propsStream == null) {\n+ throw new RuntimeException('/version.properties resource missing')\n+ }\n+ props.load(propsStream)\n+ elasticsearch = props.getProperty('elasticsearch')\n+ lucene = props.getProperty('lucene')\n+ for (String property : props.stringPropertyNames()) {\n+ versions.put(property, props.getProperty(property))\n+ }\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/VersionProperties.groovy",
"status": "added"
},
{
"diff": "@@ -0,0 +1,124 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.gradle.plugin\n+\n+import org.elasticsearch.gradle.BuildPlugin\n+import org.elasticsearch.gradle.test.RestIntegTestTask\n+import org.elasticsearch.gradle.test.RunTask\n+import org.gradle.api.Project\n+import org.gradle.api.Task\n+import org.gradle.api.tasks.SourceSet\n+import org.gradle.api.tasks.bundling.Zip\n+\n+/**\n+ * Encapsulates build configuration for an Elasticsearch plugin.\n+ */\n+public class PluginBuildPlugin extends BuildPlugin {\n+\n+ @Override\n+ public void apply(Project project) {\n+ super.apply(project)\n+ configureDependencies(project)\n+ // this afterEvaluate must happen before the afterEvaluate added by integTest creation,\n+ // so that the file name resolution for installing the plugin will be setup\n+ project.afterEvaluate {\n+ String name = project.pluginProperties.extension.name\n+ project.jar.baseName = name\n+ project.bundlePlugin.baseName = name\n+\n+ project.integTest.dependsOn(project.bundlePlugin)\n+ project.tasks.run.dependsOn(project.bundlePlugin)\n+ if (project.path.startsWith(':modules:')) {\n+ project.integTest.clusterConfig.module(project)\n+ project.tasks.run.clusterConfig.module(project)\n+ } else {\n+ project.integTest.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)\n+ project.tasks.run.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)\n+ }\n+ }\n+ createIntegTestTask(project)\n+ createBundleTask(project)\n+ project.tasks.create('run', RunTask) // allow running ES with this plugin in the foreground of a build\n+ }\n+\n+ private static void configureDependencies(Project project) {\n+ project.dependencies {\n+ provided \"org.elasticsearch:elasticsearch:${project.versions.elasticsearch}\"\n+ testCompile \"org.elasticsearch:test-framework:${project.versions.elasticsearch}\"\n+ // we \"upgrade\" these optional deps to provided for plugins, since they will run\n+ // with a full elasticsearch server that includes optional deps\n+ provided \"com.spatial4j:spatial4j:${project.versions.spatial4j}\"\n+ provided \"com.vividsolutions:jts:${project.versions.jts}\"\n+ provided \"log4j:log4j:${project.versions.log4j}\"\n+ provided \"log4j:apache-log4j-extras:${project.versions.log4j}\"\n+ provided \"org.slf4j:slf4j-api:${project.versions.slf4j}\"\n+ provided \"net.java.dev.jna:jna:${project.versions.jna}\"\n+ }\n+ }\n+\n+ /** Adds an integTest task which runs rest tests */\n+ private static void createIntegTestTask(Project project) {\n+ RestIntegTestTask integTest = project.tasks.create('integTest', RestIntegTestTask.class)\n+ integTest.mustRunAfter(project.precommit, project.test)\n+ project.check.dependsOn(integTest)\n+ }\n+\n+ /**\n+ * Adds a bundlePlugin task which 
builds the zip containing the plugin jars,\n+ * metadata, properties, and packaging files\n+ */\n+ private static void createBundleTask(Project project) {\n+ File pluginMetadata = project.file('src/main/plugin-metadata')\n+\n+ // create a task to build the properties file for this plugin\n+ PluginPropertiesTask buildProperties = project.tasks.create('pluginProperties', PluginPropertiesTask.class)\n+\n+ // add the plugin properties and metadata to test resources, so unit tests can\n+ // know about the plugin (used by test security code to statically initialize the plugin in unit tests)\n+ SourceSet testSourceSet = project.sourceSets.test\n+ testSourceSet.output.dir(buildProperties.generatedResourcesDir, builtBy: 'pluginProperties')\n+ testSourceSet.resources.srcDir(pluginMetadata)\n+\n+ // create the actual bundle task, which zips up all the files for the plugin\n+ Zip bundle = project.tasks.create(name: 'bundlePlugin', type: Zip, dependsOn: [project.jar, buildProperties]) {\n+ from buildProperties // plugin properties file\n+ from pluginMetadata // metadata (eg custom security policy)\n+ from project.jar // this plugin's jar\n+ from project.configurations.runtime - project.configurations.provided // the dep jars\n+ // extra files for the plugin to go into the zip\n+ from('src/main/packaging') // TODO: move all config/bin/_size/etc into packaging\n+ from('src/main') {\n+ include 'config/**'\n+ include 'bin/**'\n+ }\n+ from('src/site') {\n+ include '_site/**'\n+ }\n+ }\n+ project.assemble.dependsOn(bundle)\n+\n+ // remove jar from the archives (things that will be published), and set it to the zip\n+ project.configurations.archives.artifacts.removeAll { it.archiveTask.is project.jar }\n+ project.artifacts.add('archives', bundle)\n+\n+ // also make the zip the default artifact (used when depending on this project)\n+ project.configurations.getByName('default').extendsFrom = []\n+ project.artifacts.add('default', bundle)\n+ }\n+}",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy",
"status": "added"
}
]
}
|
{
"body": "This is a minor issue or is not issue.\n\nis it right StatsAggegator? (https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggegator.java)\nI think that StatsAggegator will be StatsAggregator.\n\nThanks.\n",
"comments": [
{
"body": "@colings86 we should probably fix this :)\n",
"created_at": "2015-11-17T14:50:35Z"
},
{
"body": "Submited a PR to fix this.\n",
"created_at": "2015-12-06T03:32:20Z"
}
],
"number": 14730,
"title": "StatsAggegator has a missing character."
}
|
{
"body": "@colings86 Here is the new PR to this issue. Excuse me, I'm new to this and I've done some mistakes with git.\n\nThat should be ok!\n\nCloses #14730\n",
"number": 15321,
"review_comments": [],
"title": "Correct typo in class name of StatsAggregator"
}
|
{
"commits": [
{
"message": "Correct typo in class name of StatsAggregator #15264 (Closes)"
}
],
"files": [
{
"diff": "@@ -34,6 +34,6 @@ public StatsParser() {\n \n @Override\n protected AggregatorFactory createFactory(String aggregationName, ValuesSourceConfig<ValuesSource.Numeric> config) {\n- return new StatsAggegator.Factory(aggregationName, config);\n+ return new StatsAggregator.Factory(aggregationName, config);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsParser.java",
"status": "modified"
}
]
}
|
{
"body": "Hi Team,\n\nI had downloaded elasticsearch-2.1.1.zip and tried to change the node name through command line as given in the following documentation page:\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html\n\n> elasticsearch.bat --cluster.name my_cluster_name --node.name my_node_name\n\nI get the following error:\nUnrecognized option: --cluster.name\n",
"comments": [
{
"body": "This is a bug. Pull request to fix it is here: https://github.com/elastic/elasticsearch/pull/15320\n",
"created_at": "2016-01-20T14:27:42Z"
},
{
"body": "Closed by #15320\n",
"created_at": "2016-02-05T16:27:41Z"
},
{
"body": "@brwe this bug persists in version 5.5.1 as well. i have tested it on ubuntu 15.04 .Please suggesst a fix for this. ",
"created_at": "2017-01-04T05:59:36Z"
},
{
"body": "@chandankmr The short answer to your question is that parameters of the form expressed here are not supported in Elasticsearch 5.x. This is covered in the [migration docs](https://www.elastic.co/guide/en/elasticsearch/reference/5.0/breaking_50_settings_changes.html#_removed_using_double_dashes_to_configure_elasticsearch) and elsewhere in the docs. If you require additional general help, please ask on the [forum](https://discuss.elastic.co).",
"created_at": "2017-01-04T12:11:04Z"
}
],
"number": 16086,
"title": "Elastic search not recognizing --cluster.name and --node.name parameter"
}
|
{
"body": "Options should be passed after `start`.\n\nCloses #15284 \nCloses #16086\n\nTests pass on windows and linux but I don't know much `ant` or `bat` so any help with improving the test is more than welcome. \n",
"number": 15320,
"review_comments": [
{
"body": "had to change this to 2.0.3 to build a local distribution, not sure if this needs changing though. \n",
"created_at": "2016-01-20T15:19:42Z"
}
],
"title": "fix command line options for windows bat file"
}
|
{
"commits": [
{
"message": "fix command line options for windows bat file\n\nOptions should be passed after `start`.\n\ncloses #15284"
},
{
"message": "update version"
}
],
"files": [
{
"diff": "@@ -1,5 +1,5 @@\n <?xml version=\"1.0\"?>\n-<project name=\"elasticsearch-integration-tests\">\n+<project name=\"elasticsearch-integration-tests\" xmlns:if=\"ant:if\">\n <!-- our pid file for easy cleanup -->\n <property name=\"integ.pidfile\" location=\"${integ.scratch}/es.pid\"/>\n \n@@ -144,6 +144,7 @@\n <attribute name=\"es.transport.tcp.port\" default=\"${integ.transport.port}\"/>\n <attribute name=\"es.pidfile\" default=\"${integ.pidfile}\"/>\n <attribute name=\"jvm.args\" default=\"${tests.jvm.argline}\"/>\n+ <attribute name=\"use.dash.p.for.pid.file.param\" default=\"false\"/>\n <element name=\"additional-args\" optional=\"true\"/>\n <sequential>\n <!-- make sure no elasticsearch instance is currently running and listening on the port we need -->\n@@ -154,6 +155,12 @@\n <socket server=\"localhost\" port=\"@{es.http.port}\"></socket>\n </condition>\n </fail>\n+ <condition property=\"use.dash\" value=\"true\">\n+ <equals arg1=\"@{use.dash.p.for.pid.file.param}\" arg2=\"true\" />\n+ </condition>\n+ <condition property=\"use.no.dash\" value=\"true\">\n+ <equals arg1=\"@{use.dash.p.for.pid.file.param}\" arg2=\"false\" />\n+ </condition>\n <!-- run bin/elasticsearch with args -->\n <echo>Starting up external cluster...</echo>\n \n@@ -163,10 +170,12 @@\n <env key=\"JAVA_HOME\" value=\"${java.home}\"/>\n <!-- we pass these as gc options, even if they arent, to avoid conflicting gc options -->\n <env key=\"ES_GC_OPTS\" value=\"@{jvm.args}\"/>\n+ <arg value=\"-p\" if:set=\"use.dash\"/>\n+ <arg value=\"@{es.pidfile}\" if:set=\"use.dash\"/>\n+ <arg value=\"-Des.pidfile=@{es.pidfile}\" if:set=\"use.no.dash\"/>\n <arg value=\"-Des.cluster.name=@{es.cluster.name}\"/>\n <arg value=\"-Des.http.port=@{es.http.port}\"/>\n <arg value=\"-Des.transport.tcp.port=@{es.transport.tcp.port}\"/>\n- <arg value=\"-Des.pidfile=@{es.pidfile}\"/>\n <arg value=\"-Des.discovery.zen.ping.unicast.hosts=@{es.unicast.hosts}\"/>\n <arg value=\"-Des.path.repo=@{home}/repo\"/>\n <arg value=\"-Des.path.shared_data=@{home}/../\"/>",
"filename": "dev-tools/src/main/resources/ant/integration-tests.xml",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,6 @@ IF ERRORLEVEL 1 (\n \tEXIT /B %ERRORLEVEL%\n )\n \n-\"%JAVA_HOME%\\bin\\java\" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% !newparams! -cp \"%ES_CLASSPATH%\" \"org.elasticsearch.bootstrap.Elasticsearch\" start\n+\"%JAVA_HOME%\\bin\\java\" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% -cp \"%ES_CLASSPATH%\" \"org.elasticsearch.bootstrap.Elasticsearch\" start !newparams!\n \n ENDLOCAL",
"filename": "distribution/src/main/resources/bin/elasticsearch.bat",
"status": "modified"
},
{
"diff": "@@ -148,6 +148,7 @@\n <module>smoke-test-plugins</module>\n <module>smoke-test-multinode</module>\n <module>smoke-test-client</module>\n+ <module>smoke-test-command-line-params</module>\n </modules>\n \n <profiles>",
"filename": "qa/pom.xml",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,22 @@\n+<?xml version=\"1.0\"?>\n+<project name=\"smoke-test-comman-line-parameters\"\n+ xmlns:ac=\"antlib:net.sf.antcontrib\">\n+\n+ <import file=\"${elasticsearch.integ.antfile.default}\"/>\n+\n+ <target name=\"stop-one-node\" if=\"integ.pidfile.exists\">\n+ <stop-node es.pidfile=\"${integ.pidfile}\"/>\n+ </target>\n+ \n+ <target name=\"start-one-node\" depends=\"setup-workspace\" unless=\"${shouldskip}\">\n+ <ac:trycatch property=\"failure.message\">\n+ <ac:try>\n+ <startup-elasticsearch use.dash.p.for.pid.file.param=\"true\"/>\n+ </ac:try>\n+ <ac:catch>\n+ <echo>Failed to start node with message: ${failure.message}</echo>\n+ <stop-node es.pidfile=\"${integ.pidfile}\"/>\n+ </ac:catch>\n+ </ac:trycatch>\n+ </target>\n+</project>",
"filename": "qa/smoke-test-command-line-params/integration-tests.xml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,298 @@\n+<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+\n+<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n+ xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n+ xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n+ <modelVersion>4.0.0</modelVersion>\n+\n+ <parent>\n+ <groupId>org.elasticsearch.qa</groupId>\n+ <artifactId>elasticsearch-qa</artifactId>\n+ <version>2.0.3-SNAPSHOT</version>\n+ </parent>\n+\n+ <!-- \n+ This tests command line params such as -p pid file or -h. Only tests pid file now.\n+ -->\n+\n+ <artifactId>smoke-test-command-line-params</artifactId>\n+ <name>QA: Smoke Test Command Line Params</name>\n+ <description>Tests command line params such as -p pidfile work. Currently only tests pid file. Note that we cannot have this as vagrant tests only because windows needs to be checked as well.</description>\n+\n+ <properties>\n+ <skip.unit.tests>true</skip.unit.tests>\n+ <elasticsearch.integ.antfile>${project.basedir}/integration-tests.xml</elasticsearch.integ.antfile>\n+ <tests.rest.suite>smoke_test_command_line_params</tests.rest.suite>\n+ <tests.rest.load_packaged>false</tests.rest.load_packaged>\n+ </properties>\n+ <dependencies>\n+ <dependency>\n+ <groupId>org.elasticsearch</groupId>\n+ <artifactId>elasticsearch</artifactId>\n+ <type>test-jar</type>\n+ <scope>test</scope>\n+ </dependency>\n+\n+ <!-- Provided dependencies by elasticsearch itself -->\n+ <dependency>\n+ <groupId>org.elasticsearch</groupId>\n+ <artifactId>elasticsearch</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-core</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-backward-codecs</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-analyzers-common</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-queries</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-memory</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-highlighter</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-queryparser</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-suggest</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-join</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-spatial</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-expressions</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.spatial4j</groupId>\n+ <artifactId>spatial4j</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.vividsolutions</groupId>\n+ <artifactId>jts</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.github.spullara.mustache.java</groupId>\n+ 
<artifactId>compiler</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.google.guava</groupId>\n+ <artifactId>guava</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.carrotsearch</groupId>\n+ <artifactId>hppc</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>joda-time</groupId>\n+ <artifactId>joda-time</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.joda</groupId>\n+ <artifactId>joda-convert</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.core</groupId>\n+ <artifactId>jackson-core</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.dataformat</groupId>\n+ <artifactId>jackson-dataformat-smile</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.dataformat</groupId>\n+ <artifactId>jackson-dataformat-yaml</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.dataformat</groupId>\n+ <artifactId>jackson-dataformat-cbor</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>io.netty</groupId>\n+ <artifactId>netty</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.ning</groupId>\n+ <artifactId>compress-lzf</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.tdunning</groupId>\n+ <artifactId>t-digest</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>commons-cli</groupId>\n+ <artifactId>commons-cli</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.codehaus.groovy</groupId>\n+ <artifactId>groovy-all</artifactId>\n+ <classifier>indy</classifier>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>log4j</groupId>\n+ <artifactId>log4j</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>log4j</groupId>\n+ <artifactId>apache-log4j-extras</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.slf4j</groupId>\n+ <artifactId>slf4j-api</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>net.java.dev.jna</groupId>\n+ <artifactId>jna</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+\n+ <!-- Required by the REST test framework -->\n+ <!-- TODO: remove this dependency when we will have a REST Test module -->\n+ <dependency>\n+ <groupId>org.apache.httpcomponents</groupId>\n+ <artifactId>httpclient</artifactId>\n+ <scope>test</scope>\n+ </dependency>\n+ </dependencies>\n+\n+ <build>\n+ <plugins>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-dependency-plugin</artifactId>\n+ <executions>\n+ <execution>\n+ <id>integ-setup-dependencies</id>\n+ <phase>pre-integration-test</phase>\n+ <goals>\n+ <goal>copy</goal>\n+ </goals>\n+ <configuration>\n+ <skip>${skip.integ.tests}</skip>\n+ <useBaseVersion>true</useBaseVersion>\n+ <outputDirectory>${integ.deps}/plugins</outputDirectory>\n+\n+ <artifactItems>\n+ <!-- elasticsearch distribution -->\n+ <artifactItem>\n+ <groupId>org.elasticsearch.distribution.zip</groupId>\n+ <artifactId>elasticsearch</artifactId>\n+ <version>${elasticsearch.version}</version>\n+ <type>zip</type>\n+ <overWrite>true</overWrite>\n+ <outputDirectory>${integ.deps}</outputDirectory>\n+ 
</artifactItem>\n+ </artifactItems>\n+ </configuration>\n+ </execution>\n+ </executions>\n+ </plugin>\n+ <!-- integration tests -->\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-antrun-plugin</artifactId>\n+ <executions>\n+ <!-- start up external cluster -->\n+ <execution>\n+ <id>integ-setup</id>\n+ <phase>pre-integration-test</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <ant antfile=\"${elasticsearch.integ.antfile}\" target=\"start-one-node\">\n+ <property name=\"tests.jvm.argline\" value=\"${tests.jvm.argline}\"/>\n+ </ant>\n+ </target>\n+ <skip>${skip.integ.tests}</skip>\n+ </configuration>\n+ </execution>\n+ <!-- shut down external cluster -->\n+ <execution>\n+ <id>integ-teardown</id>\n+ <phase>post-integration-test</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <ant antfile=\"${elasticsearch.integ.antfile}\" target=\"stop-one-node\"/>\n+ </target>\n+ <skip>${skip.integ.tests}</skip>\n+ </configuration>\n+ </execution>\n+ </executions>\n+ <dependencies>\n+ <dependency>\n+ <groupId>ant-contrib</groupId>\n+ <artifactId>ant-contrib</artifactId>\n+ <version>1.0b3</version>\n+ <exclusions>\n+ <exclusion>\n+ <groupId>ant</groupId>\n+ <artifactId>ant</artifactId>\n+ </exclusion>\n+ </exclusions>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.ant</groupId>\n+ <artifactId>ant-nodeps</artifactId>\n+ <version>1.8.1</version>\n+ </dependency>\n+ </dependencies>\n+ </plugin>\n+ </plugins>\n+ </build>\n+\n+</project>",
"filename": "qa/smoke-test-command-line-params/pom.xml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,13 @@\n+# Integration tests for smoke testing multi-node IT\n+# If the local machine which is running the test is low on disk space\n+# We can have one unassigned shard\n+---\n+\"cluster health basic test, wait for both nodes to join\":\n+ - do:\n+ cluster.health:\n+ wait_for_nodes: 1\n+\n+ - is_true: cluster_name\n+ - is_false: timed_out\n+ - gte: { number_of_nodes: 1 }\n+ - gte: { number_of_data_nodes: 1 }",
"filename": "qa/smoke-test-command-line-params/rest-api-spec/test/smoke_test_command_line_params/10_basic.yaml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,41 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.smoketest;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Name;\n+import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;\n+import org.elasticsearch.test.rest.ESRestTestCase;\n+import org.elasticsearch.test.rest.RestTestCandidate;\n+import org.elasticsearch.test.rest.parser.RestTestParseException;\n+\n+import java.io.IOException;\n+\n+public class SmokeTestCommandLineParamsIT extends ESRestTestCase {\n+\n+ public SmokeTestCommandLineParamsIT(@Name(\"yaml\") RestTestCandidate testCandidate) {\n+ super(testCandidate);\n+ }\n+\n+ @ParametersFactory\n+ public static Iterable<Object[]> parameters() throws IOException, RestTestParseException {\n+ return ESRestTestCase.createParameters(0, 1);\n+ }\n+}\n+",
"filename": "qa/smoke-test-command-line-params/src/test/java/org/elasticsearch/smoketest/SmokeTestCommandLineParamsIT.java",
"status": "added"
}
]
}
|
{
"body": "The bat script is broken on windows. The parameters `!newparams!` should be placed at the ned of the line in bat script here: https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch.bat#L46 as @spinscale pointed out.\n\nWe need a test for that too. This was not caught because currently we only test for the `-Des.pidfile` parameter: https://github.com/elastic/elasticsearch/blob/2.x/dev-tools/src/main/resources/ant/integration-tests.xml#L169 (have not looked at master yet). \n\nI can do that just want to open the issue in case someone else is dying to do that...\n",
"comments": [
{
"body": "We do the same (set the `pidfile` setting) in master:\nhttps://github.com/elastic/elasticsearch/blob/70107c5c3cf6958ad4c38a15fc33d25e66673610/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy#L206\n",
"created_at": "2015-12-07T17:18:23Z"
},
{
"body": "Fixed by #15320\n",
"created_at": "2016-02-14T18:46:07Z"
}
],
"number": 15284,
"title": "command line parameters like --help and -p pidfile broken on windows"
}
|
{
"body": "Options should be passed after `start`.\n\nCloses #15284 \nCloses #16086\n\nTests pass on windows and linux but I don't know much `ant` or `bat` so any help with improving the test is more than welcome. \n",
"number": 15320,
"review_comments": [
{
"body": "had to change this to 2.0.3 to build a local distribution, not sure if this needs changing though. \n",
"created_at": "2016-01-20T15:19:42Z"
}
],
"title": "fix command line options for windows bat file"
}
|
{
"commits": [
{
"message": "fix command line options for windows bat file\n\nOptions should be passed after `start`.\n\ncloses #15284"
},
{
"message": "update version"
}
],
"files": [
{
"diff": "@@ -1,5 +1,5 @@\n <?xml version=\"1.0\"?>\n-<project name=\"elasticsearch-integration-tests\">\n+<project name=\"elasticsearch-integration-tests\" xmlns:if=\"ant:if\">\n <!-- our pid file for easy cleanup -->\n <property name=\"integ.pidfile\" location=\"${integ.scratch}/es.pid\"/>\n \n@@ -144,6 +144,7 @@\n <attribute name=\"es.transport.tcp.port\" default=\"${integ.transport.port}\"/>\n <attribute name=\"es.pidfile\" default=\"${integ.pidfile}\"/>\n <attribute name=\"jvm.args\" default=\"${tests.jvm.argline}\"/>\n+ <attribute name=\"use.dash.p.for.pid.file.param\" default=\"false\"/>\n <element name=\"additional-args\" optional=\"true\"/>\n <sequential>\n <!-- make sure no elasticsearch instance is currently running and listening on the port we need -->\n@@ -154,6 +155,12 @@\n <socket server=\"localhost\" port=\"@{es.http.port}\"></socket>\n </condition>\n </fail>\n+ <condition property=\"use.dash\" value=\"true\">\n+ <equals arg1=\"@{use.dash.p.for.pid.file.param}\" arg2=\"true\" />\n+ </condition>\n+ <condition property=\"use.no.dash\" value=\"true\">\n+ <equals arg1=\"@{use.dash.p.for.pid.file.param}\" arg2=\"false\" />\n+ </condition>\n <!-- run bin/elasticsearch with args -->\n <echo>Starting up external cluster...</echo>\n \n@@ -163,10 +170,12 @@\n <env key=\"JAVA_HOME\" value=\"${java.home}\"/>\n <!-- we pass these as gc options, even if they arent, to avoid conflicting gc options -->\n <env key=\"ES_GC_OPTS\" value=\"@{jvm.args}\"/>\n+ <arg value=\"-p\" if:set=\"use.dash\"/>\n+ <arg value=\"@{es.pidfile}\" if:set=\"use.dash\"/>\n+ <arg value=\"-Des.pidfile=@{es.pidfile}\" if:set=\"use.no.dash\"/>\n <arg value=\"-Des.cluster.name=@{es.cluster.name}\"/>\n <arg value=\"-Des.http.port=@{es.http.port}\"/>\n <arg value=\"-Des.transport.tcp.port=@{es.transport.tcp.port}\"/>\n- <arg value=\"-Des.pidfile=@{es.pidfile}\"/>\n <arg value=\"-Des.discovery.zen.ping.unicast.hosts=@{es.unicast.hosts}\"/>\n <arg value=\"-Des.path.repo=@{home}/repo\"/>\n <arg value=\"-Des.path.shared_data=@{home}/../\"/>",
"filename": "dev-tools/src/main/resources/ant/integration-tests.xml",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,6 @@ IF ERRORLEVEL 1 (\n \tEXIT /B %ERRORLEVEL%\n )\n \n-\"%JAVA_HOME%\\bin\\java\" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% !newparams! -cp \"%ES_CLASSPATH%\" \"org.elasticsearch.bootstrap.Elasticsearch\" start\n+\"%JAVA_HOME%\\bin\\java\" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% -cp \"%ES_CLASSPATH%\" \"org.elasticsearch.bootstrap.Elasticsearch\" start !newparams!\n \n ENDLOCAL",
"filename": "distribution/src/main/resources/bin/elasticsearch.bat",
"status": "modified"
},
{
"diff": "@@ -148,6 +148,7 @@\n <module>smoke-test-plugins</module>\n <module>smoke-test-multinode</module>\n <module>smoke-test-client</module>\n+ <module>smoke-test-command-line-params</module>\n </modules>\n \n <profiles>",
"filename": "qa/pom.xml",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,22 @@\n+<?xml version=\"1.0\"?>\n+<project name=\"smoke-test-comman-line-parameters\"\n+ xmlns:ac=\"antlib:net.sf.antcontrib\">\n+\n+ <import file=\"${elasticsearch.integ.antfile.default}\"/>\n+\n+ <target name=\"stop-one-node\" if=\"integ.pidfile.exists\">\n+ <stop-node es.pidfile=\"${integ.pidfile}\"/>\n+ </target>\n+ \n+ <target name=\"start-one-node\" depends=\"setup-workspace\" unless=\"${shouldskip}\">\n+ <ac:trycatch property=\"failure.message\">\n+ <ac:try>\n+ <startup-elasticsearch use.dash.p.for.pid.file.param=\"true\"/>\n+ </ac:try>\n+ <ac:catch>\n+ <echo>Failed to start node with message: ${failure.message}</echo>\n+ <stop-node es.pidfile=\"${integ.pidfile}\"/>\n+ </ac:catch>\n+ </ac:trycatch>\n+ </target>\n+</project>",
"filename": "qa/smoke-test-command-line-params/integration-tests.xml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,298 @@\n+<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+\n+<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n+ xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n+ xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n+ <modelVersion>4.0.0</modelVersion>\n+\n+ <parent>\n+ <groupId>org.elasticsearch.qa</groupId>\n+ <artifactId>elasticsearch-qa</artifactId>\n+ <version>2.0.3-SNAPSHOT</version>\n+ </parent>\n+\n+ <!-- \n+ This tests command line params such as -p pid file or -h. Only tests pid file now.\n+ -->\n+\n+ <artifactId>smoke-test-command-line-params</artifactId>\n+ <name>QA: Smoke Test Command Line Params</name>\n+ <description>Tests command line params such as -p pidfile work. Currently only tests pid file. Note that we cannot have this as vagrant tests only because windows needs to be checked as well.</description>\n+\n+ <properties>\n+ <skip.unit.tests>true</skip.unit.tests>\n+ <elasticsearch.integ.antfile>${project.basedir}/integration-tests.xml</elasticsearch.integ.antfile>\n+ <tests.rest.suite>smoke_test_command_line_params</tests.rest.suite>\n+ <tests.rest.load_packaged>false</tests.rest.load_packaged>\n+ </properties>\n+ <dependencies>\n+ <dependency>\n+ <groupId>org.elasticsearch</groupId>\n+ <artifactId>elasticsearch</artifactId>\n+ <type>test-jar</type>\n+ <scope>test</scope>\n+ </dependency>\n+\n+ <!-- Provided dependencies by elasticsearch itself -->\n+ <dependency>\n+ <groupId>org.elasticsearch</groupId>\n+ <artifactId>elasticsearch</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-core</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-backward-codecs</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-analyzers-common</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-queries</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-memory</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-highlighter</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-queryparser</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-suggest</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-join</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-spatial</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.lucene</groupId>\n+ <artifactId>lucene-expressions</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.spatial4j</groupId>\n+ <artifactId>spatial4j</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.vividsolutions</groupId>\n+ <artifactId>jts</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.github.spullara.mustache.java</groupId>\n+ 
<artifactId>compiler</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.google.guava</groupId>\n+ <artifactId>guava</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.carrotsearch</groupId>\n+ <artifactId>hppc</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>joda-time</groupId>\n+ <artifactId>joda-time</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.joda</groupId>\n+ <artifactId>joda-convert</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.core</groupId>\n+ <artifactId>jackson-core</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.dataformat</groupId>\n+ <artifactId>jackson-dataformat-smile</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.dataformat</groupId>\n+ <artifactId>jackson-dataformat-yaml</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.fasterxml.jackson.dataformat</groupId>\n+ <artifactId>jackson-dataformat-cbor</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>io.netty</groupId>\n+ <artifactId>netty</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.ning</groupId>\n+ <artifactId>compress-lzf</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>com.tdunning</groupId>\n+ <artifactId>t-digest</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>commons-cli</groupId>\n+ <artifactId>commons-cli</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.codehaus.groovy</groupId>\n+ <artifactId>groovy-all</artifactId>\n+ <classifier>indy</classifier>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>log4j</groupId>\n+ <artifactId>log4j</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>log4j</groupId>\n+ <artifactId>apache-log4j-extras</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.slf4j</groupId>\n+ <artifactId>slf4j-api</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+ <dependency>\n+ <groupId>net.java.dev.jna</groupId>\n+ <artifactId>jna</artifactId>\n+ <scope>provided</scope>\n+ </dependency>\n+\n+ <!-- Required by the REST test framework -->\n+ <!-- TODO: remove this dependency when we will have a REST Test module -->\n+ <dependency>\n+ <groupId>org.apache.httpcomponents</groupId>\n+ <artifactId>httpclient</artifactId>\n+ <scope>test</scope>\n+ </dependency>\n+ </dependencies>\n+\n+ <build>\n+ <plugins>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-dependency-plugin</artifactId>\n+ <executions>\n+ <execution>\n+ <id>integ-setup-dependencies</id>\n+ <phase>pre-integration-test</phase>\n+ <goals>\n+ <goal>copy</goal>\n+ </goals>\n+ <configuration>\n+ <skip>${skip.integ.tests}</skip>\n+ <useBaseVersion>true</useBaseVersion>\n+ <outputDirectory>${integ.deps}/plugins</outputDirectory>\n+\n+ <artifactItems>\n+ <!-- elasticsearch distribution -->\n+ <artifactItem>\n+ <groupId>org.elasticsearch.distribution.zip</groupId>\n+ <artifactId>elasticsearch</artifactId>\n+ <version>${elasticsearch.version}</version>\n+ <type>zip</type>\n+ <overWrite>true</overWrite>\n+ <outputDirectory>${integ.deps}</outputDirectory>\n+ 
</artifactItem>\n+ </artifactItems>\n+ </configuration>\n+ </execution>\n+ </executions>\n+ </plugin>\n+ <!-- integration tests -->\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-antrun-plugin</artifactId>\n+ <executions>\n+ <!-- start up external cluster -->\n+ <execution>\n+ <id>integ-setup</id>\n+ <phase>pre-integration-test</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <ant antfile=\"${elasticsearch.integ.antfile}\" target=\"start-one-node\">\n+ <property name=\"tests.jvm.argline\" value=\"${tests.jvm.argline}\"/>\n+ </ant>\n+ </target>\n+ <skip>${skip.integ.tests}</skip>\n+ </configuration>\n+ </execution>\n+ <!-- shut down external cluster -->\n+ <execution>\n+ <id>integ-teardown</id>\n+ <phase>post-integration-test</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <ant antfile=\"${elasticsearch.integ.antfile}\" target=\"stop-one-node\"/>\n+ </target>\n+ <skip>${skip.integ.tests}</skip>\n+ </configuration>\n+ </execution>\n+ </executions>\n+ <dependencies>\n+ <dependency>\n+ <groupId>ant-contrib</groupId>\n+ <artifactId>ant-contrib</artifactId>\n+ <version>1.0b3</version>\n+ <exclusions>\n+ <exclusion>\n+ <groupId>ant</groupId>\n+ <artifactId>ant</artifactId>\n+ </exclusion>\n+ </exclusions>\n+ </dependency>\n+ <dependency>\n+ <groupId>org.apache.ant</groupId>\n+ <artifactId>ant-nodeps</artifactId>\n+ <version>1.8.1</version>\n+ </dependency>\n+ </dependencies>\n+ </plugin>\n+ </plugins>\n+ </build>\n+\n+</project>",
"filename": "qa/smoke-test-command-line-params/pom.xml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,13 @@\n+# Integration tests for smoke testing multi-node IT\n+# If the local machine which is running the test is low on disk space\n+# We can have one unassigned shard\n+---\n+\"cluster health basic test, wait for both nodes to join\":\n+ - do:\n+ cluster.health:\n+ wait_for_nodes: 1\n+\n+ - is_true: cluster_name\n+ - is_false: timed_out\n+ - gte: { number_of_nodes: 1 }\n+ - gte: { number_of_data_nodes: 1 }",
"filename": "qa/smoke-test-command-line-params/rest-api-spec/test/smoke_test_command_line_params/10_basic.yaml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,41 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.smoketest;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Name;\n+import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;\n+import org.elasticsearch.test.rest.ESRestTestCase;\n+import org.elasticsearch.test.rest.RestTestCandidate;\n+import org.elasticsearch.test.rest.parser.RestTestParseException;\n+\n+import java.io.IOException;\n+\n+public class SmokeTestCommandLineParamsIT extends ESRestTestCase {\n+\n+ public SmokeTestCommandLineParamsIT(@Name(\"yaml\") RestTestCandidate testCandidate) {\n+ super(testCandidate);\n+ }\n+\n+ @ParametersFactory\n+ public static Iterable<Object[]> parameters() throws IOException, RestTestParseException {\n+ return ESRestTestCase.createParameters(0, 1);\n+ }\n+}\n+",
"filename": "qa/smoke-test-command-line-params/src/test/java/org/elasticsearch/smoketest/SmokeTestCommandLineParamsIT.java",
"status": "added"
}
]
}
|
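As context for the `elasticsearch.bat` change above: the old script placed `!newparams!` among the JVM options (before `-cp`), so flags such as `-p pidfile` never reached Elasticsearch's own command-line handling; the fix moves them after the `start` command. Below is a rough, purely hypothetical Java sketch of why the ordering matters (this is not the real `org.elasticsearch.bootstrap.Elasticsearch` entry point): a launcher that treats the first program argument as the command only ever sees options placed after it.

```java
// Hypothetical launcher sketch (not the real Elasticsearch bootstrap class):
// the first program argument is the command, everything after it is an option.
// Anything placed before the command never reaches this parser, which is why
// the .bat fix reorders "-p <pidfile>" to come after "start".
public final class LauncherSketch {
    public static void main(String[] args) {
        if (args.length == 0 || !"start".equals(args[0])) {
            System.err.println("usage: launcher start [-p <pidfile>] [-Des.setting=value ...]");
            System.exit(1);
        }
        String pidFile = null;
        for (int i = 1; i < args.length; i++) {
            String arg = args[i];
            if ("-p".equals(arg) && i + 1 < args.length) {
                pidFile = args[++i];                                 // e.g. the integration-test pid file
            } else if (arg.startsWith("-D")) {
                System.out.println("setting: " + arg.substring(2));  // -Des.http.port=9400 and friends
            }
        }
        System.out.println("pidfile: " + pidFile);
    }
}
```

With this sketch, `java LauncherSketch start -p /tmp/es.pid -Des.http.port=9400` picks up the pid file, whereas placing `-p /tmp/es.pid` before `start` never reaches the option loop at all.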
{
"body": "Hi folks,\n\nI'm trying to start tribe node using following config:\n\n<pre>\ntransport.tcp.port: 9301\nhttp.port: 9201\nnetwork.host: 0.0.0.0\npath.data: /var/lib/elasticsearch/\npath.logs: /var/log/elasticsearch/\n\ntribe:\n kibana:\n cluster.name: logstash-kibana\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"127.0.0.1\"]\n els:\n cluster.name: logstash-data\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"10.128.69.48\", \"10.128.75.237\"]\n</pre>\n\nThis config resides in the file /etc/tribe-elasticseach/elasticsearch.yml. I'm starting it using following command:\n\n<pre>\nsudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -Ddefault.path.conf=/etc/tribe-elasticsearch/\n</pre>\n\nElasticsearch fails with following output:\n\n<pre>\n[2015-11-05 17:07:42,433][INFO ][node ] [Bucky] version[2.0.0], pid[25943], build[de54438/2015-10-22T08:09:48Z]\n[2015-11-05 17:07:42,434][INFO ][node ] [Bucky] initializing ...\n[2015-11-05 17:07:42,596][INFO ][plugins ] [Bucky] loaded [], sites []\nException in thread \"main\" java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/usr/share/elasticsearch/config/elasticsearch.yml\" \"read\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:457)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkRead(SecurityManager.java:888)\n at sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)\n at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:290)\n at java.nio.file.Files.exists(Files.java:2385)\n at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:87)\n at org.elasticsearch.node.Node.<init>(Node.java:128)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.tribe.TribeService.<init>(TribeService.java:136)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at <<<guice>>>\n at org.elasticsearch.node.Node.<init>(Node.java:198)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n</pre>\n\nI'm not sure why it tries to access /usr/share/elasticsearch/config/elasticsearch.yml. There is no such file in the elasticsearch deb package. I created this file, but command above still fails with same output. Please advise how this can be resolved.\n\nI'm running elasticsearch 2.0.0 installed from the debian package downloaded from the official site. I'm using ubuntu 14.\n\nThanks,\nKirill.\n",
"comments": [
{
"body": "You're specifying the custom config file location incorrectly. \n\nSee https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking_20_setting_changes.html#_custom_config_file\n",
"created_at": "2015-11-09T11:36:07Z"
},
{
"body": "Hi @clintongormley , thanks for quick response. As you can see from the description I've provided I was using option -Ddefault.path.conf. I tried again same command with option --path.conf. There was no exception because of config access issue, but I had to specify also --path.data and --path.logs because for some reason those settings were ignored in the config I've provided. In my config I also specify nonstandard ports to use and those settings are also not used. Any advise what can be wrong?\n\nThanks,\nKirill.\n",
"created_at": "2015-11-09T19:05:07Z"
},
{
"body": "Looks like config is ignored completely. If I specify all options via command line I still get exception like above:\n\n<pre>\n# sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch --path.conf=/etc/tribe-elasticsearch/ --path.logs=/var/log/elasticsearch --path.data=/var/lib/elasticsearch/ --transport.tcp.port=9301 --http.port=9201 --network.host=0.0.0.0 --tribels.cluster.name=logstash-data --tribe.els.discovery.zen.ping.multicast.enabled=false --tribe.els.discovery.zen.ping.unicast.hosts=[\"10.128.69.48\",\"10.128.75.237\"] \nlog4j:WARN No appenders could be found for logger (bootstrap).\nlog4j:WARN Please initialize the log4j system properly.\nlog4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.\nException in thread \"main\" java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/usr/share/elasticsearch/config/elasticsearch.yml\" \"read\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:457)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkRead(SecurityManager.java:888)\n at sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)\n at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:290)\n at java.nio.file.Files.exists(Files.java:2385)\n at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:87)\n at org.elasticsearch.node.Node.<init>(Node.java:128)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.tribe.TribeService.<init>(TribeService.java:136)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at <<<guice>>>\n at org.elasticsearch.node.Node.<init>(Node.java:198)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n</pre>\n",
"created_at": "2015-11-09T19:23:19Z"
},
{
"body": "Thanks for persisting. I've managed to replicate this and it is indeed a bug.\n\nWhen the tribe node attempts to instantiate a node for the tribe service, it checks for access to the config directory, but that setting is no longer available to it and so it defaults to checking for path.home.\n\nThis can be replicated with a simple config file, saved as `foo/elasticsearch.yml`:\n\n```\nnode.name: foo\n\ntribe:\n foo:\n cluster.name: bar\n```\n\nStart elasticsearch as:\n\n```\n./elasticsearch-2.0.0/bin/elasticsearch --path.conf foo/\n```\n\nAnd it fails with:\n\n```\n[2015-11-17 13:54:47,763][INFO ][node ] [foo] version[2.0.0], pid[5940], build[de54438/2015-10-22T08:09:48Z]\n[2015-11-17 13:54:47,763][INFO ][node ] [foo] initializing ...\n[2015-11-17 13:54:47,836][INFO ][plugins ] [foo] loaded [], sites []\nException in thread \"main\" java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/Users/clinton/workspace/servers/elasticsearch-2.0.0/config/elasticsearch.yml\" \"read\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkRead(SecurityManager.java:888)\n at sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)\n at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:290)\n at java.nio.file.Files.exists(Files.java:2385)\n at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:87)\n at org.elasticsearch.node.Node.<init>(Node.java:128)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.tribe.TribeService.<init>(TribeService.java:136)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at <<<guice>>>\n at org.elasticsearch.node.Node.<init>(Node.java:198)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n```\n",
"created_at": "2015-11-17T12:55:24Z"
},
{
"body": "@javanna could you take a look at this please?\n",
"created_at": "2015-11-17T13:02:57Z"
},
{
"body": "I had a look at this. Only selected settings are forwarded to the inner tribe clients from the tribe node. `path.home` is one of them but `path.conf` is not. That said, if I remember correctly the tribe clients shouldn't read from configuration file (and sysprops) but only inherit a few settings from the parent node (like it happens in `TribeService`), something that we had enforced with #9721. I think something got lost with #13383 where `loadConfigSettings` was removed, which was our way to prevent loading anything from the config file. With that set to `false` I believe we wouldn't even check for the existence of the file, thus we wouldn't need any permission for that. At this point it seems to me that we would have to forward `path.conf` to the tribe clients just because we are going to check for its existence at some point although we have nothing to load from it (otherwise we check for path.home that we have no permissions for)? I think I'd need @rjernst to verify if what I explained makes any sense, it might be that I overlooked something.\n",
"created_at": "2015-11-17T18:36:50Z"
},
{
"body": "If I understand the tribe node correctly, it is no different than any other client node (well, creating multiple client nodes internally). So to me, it should be passing along any settings it needs to configure the node (including `path.conf`). However, I'm not sure what this has to do with the transport client? The transport client by definition now does not use the config file settings (and the stack trace shown above indicates the exception was from building a node, not a transport client).\n",
"created_at": "2015-11-17T21:21:02Z"
},
{
"body": "> However, I'm not sure what this has to do with the transport client?\n\n@rjernst it doesn't have to do directly with the transport client, but the inner tribe nodes have a similar requirement when it comes to loading from config file. They should not be reading out of the config file but only inherit some selected settings from their \"parent\" node (the actual tribe node), and that is why we were previously setting `loadConfigSettings` to `false`, which is now removed though. If my analysis is correct security manager barfs because we check if the config file exists while creating inner client nodes as part of `TribeService`, but we shouldn't need to read from that file at that point anyway. I could forward the `path.conf` setting to the client nodes too, but I feel it is not the right fix given that we should not be reading from that file nor check if it exists. Not sure what the right fix is though.\n",
"created_at": "2015-11-18T17:50:54Z"
},
{
"body": "I looked deeper, I can confirm this is not just a problem around passing in the right `path.conf` to the inner nodes. The inner client nodes must not read from the main configuration file, something that was fixed in #9721. The option to not load from config settings for a node was though removed with #13383. I had expected `TribeUnitTests` to fail after that change but it doesn't unfortunately. If you try setting for instance `transport.tcp.port` in the configuration file, the tribe node will get that port, but the inner nodes will try to get that one too and will fail. The inner nodes should only get some selected settings from their parent node but never read from config file or system properties.\n",
"created_at": "2015-11-19T14:28:21Z"
},
{
"body": "+1\nRemoving the path.conf did not resolve the issue \n\nthe config used \n\n```\nbootstrap:\n mlockall: true\ncluster:\n name: tribe.elk.h2.com\ndiscovery:\n zen:\n minimum_master_nodes: 2\n ping:\n unicast:\n hosts:\n - h2-clt01\n - h2-clt02\n - h2-clt03\nnetwork:\n host: h2-clt01\nnode:\n data: false\n master: true\n name: h2-ct01-h2-ct01\npath:\n data: /data/h2-ct01\ntribe:\n h2:\n cluster:\n name: elk.h2.com\n discovery:\n zen:\n ping:\n unicast:\n hosts:\n - h2-cm01\n - h2-cm02\n - h2-cm03\n h3:\n cluster:\n name: elk.h3.com\n discovery:\n zen:\n ping:\n unicast:\n hosts:\n - h3-cm01\n - h3-cm02\n - h3-cm03\n```\n",
"created_at": "2015-11-19T15:21:35Z"
},
{
"body": "There is a workaround for this bug. Assuming your tribe config directory is `/etc/tribe/`:\n\n```\ncd /etc\ncp -a /etc/tribe /etc/tribe-client\necho \"\" > /etc/tribe-client/elasticsearch.yml\nchown -R elasticsearch /etc/tribe-client\n```\n\nThen edit `/etc/tribe/elasticsearch.yml` and specify a `path.conf` for each tribe cluster, eg:\n\n```\n# arbitrary config\ntransport.tcp.port: 9301\nhttp.port: 9201\nnetwork.host: 0.0.0.0\npath.data: /var/lib/elasticsearch/\npath.logs: /var/log/elasticsearch/\n\ntribe:\n kibana:\n path.conf: /etc/tribe-client ### ADD THIS LINE\n cluster.name: logstash-kibana\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"127.0.0.1\"]\n els:\n path.conf: /etc/tribe-client ### ADD THIS LINE\n cluster.name: logstash-data\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"10.128.69.48\", \"10.128.75.237\"]\n```\n\nThen start elasticsearch as:\n\n```\n./bin/elasticsearch --path.conf /etc/tribe\n```\n\nThe tribe node will use `/etc/tribe/` as its config directory. Then the tribe node starts a node client for each cluster, and will use `/etc/tribe-client` as its config directory, but `/etc/tribe-client/elasticsearch.yml` is empty, so no settings will be loaded.\n",
"created_at": "2015-11-19T15:42:04Z"
},
{
"body": "Workaround above works, the only caveat is that depending on where the additional empty configuration file is located, we might not have the permissions to read from it. I think it should work if we simply add an empty configuration file under the tribe node config and point right to it, not only specifying its parent directory but the complete path that includes the filename:\n\n```\ntribe.t1.path.conf: /path/to/config/tribe.yml\ntribe.t2.path.conf: /path/to/config/tribe.yml\n```\n",
"created_at": "2015-11-19T16:24:43Z"
},
{
"body": "Ran into this last night when attempting to set up a tribe node on 2.0. This will also affect users who attempt to set a custom transport.tcp.port for the tribe node. In this case, setting a custom transport.tcp.port for the tribe node causes a misleading `BindException[Address already in use];` exception when the port specified is not actually already in use.\n\n```\ncluster.name: elasticsearch_2_0_0_tribe_cluster\nnetwork.host: 127.0.0.1\ntransport.tcp.port: 11111\nnode.name: tribe_cluster_node1\ntribe:\n t1:\n cluster.name: elasticsearch_2_0_0_cluster1\n t2:\n cluster.name: elasticsearch_2_0_0_cluster2\n```\n\nSettings for the 2 clusters:\n\n```\ncluster.name: elasticsearch_2_0_0_cluster2\nnetwork.host: 127.0.0.1\ntransport.tcp.port: 9301\nhttp.port: 9201\nnode.name: cluster2_node1\n```\n\nand \n\n```\ncluster.name: elasticsearch_2_0_0_cluster1\nnetwork.host: 127.0.0.1\ntransport.tcp.port: 9300\nhttp.port: 9200\nnode.name: cluster1_node1\n```\n\nThe problem is that the tribe node will not start up as long as I have the transport.tcp.port: 11111 in place. If I don't set a custom transport port for the tribe node, it starts up fine and can connect with the 2 clusters.\n\nThe following is the error that shows up when I attempt to set transport.tcp.port for the tribe node. Note that prior to starting the tribe node, I used lsof to confirm that there's no process on the machine using port 11111 (and it doesn't matter what port I set it to, as long as transport.tcp.port is set for the tribe node, it will throw the same bind exception).\n\n```\n[2015-11-19 01:09:12,816][DEBUG][discovery.zen.elect ] [tribe_cluster_node1/t1] using minimum_master_nodes [-1]\n\n[2015-11-19 01:09:12,816][DEBUG][discovery.zen.ping.unicast] [tribe_cluster_node1/t1] using initial hosts [127.0.0.1, [::1]], with concurrent_connects [10]\n\n[2015-11-19 01:09:12,817][DEBUG][discovery.zen ] [tribe_cluster_node1/t1] using ping.timeout [3s], join.timeout [1m], master_election.filter_client [true], master_election.filter_data [false]\n\n[2015-11-19 01:09:12,817][DEBUG][discovery.zen.fd ] [tribe_cluster_node1/t1] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]\n\n[2015-11-19 01:09:12,817][DEBUG][discovery.zen.fd ] [tribe_cluster_node1/t1] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]\n\n[2015-11-19 01:09:12,820][DEBUG][script ] [tribe_cluster_node1/t1] using script cache with max_size [100], expire [null]\n\n[2015-11-19 01:09:12,853][DEBUG][cluster.routing.allocation.decider] [tribe_cluster_node1/t1] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]\n\n[2015-11-19 01:09:12,853][DEBUG][cluster.routing.allocation.decider] [tribe_cluster_node1/t1] using [cluster_concurrent_rebalance] with [2]\n\n[2015-11-19 01:09:12,854][DEBUG][cluster.routing.allocation.decider] [tribe_cluster_node1/t1] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]\n\n[2015-11-19 01:09:12,855][DEBUG][gateway ] [tribe_cluster_node1/t1] using initial_shards [quorum]\n\n[2015-11-19 01:09:12,885][DEBUG][indices.recovery ] [tribe_cluster_node1/t1] using max_bytes_per_sec[40mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]\n\n[2015-11-19 01:09:12,886][DEBUG][indices.store ] [tribe_cluster_node1/t1] using indices.store.throttle.type [NONE], with index.store.throttle.max_bytes_per_sec [10gb]\n\n[2015-11-19 01:09:12,886][DEBUG][indices.memory ] [tribe_cluster_node1/t1] using indexing buffer 
size [99mb], with indices.memory.min_shard_index_buffer_size [4mb], indices.memory.max_shard_index_buffer_size [512mb], indices.memory.shard_inactive_time [5m], indices.memory.interval [30s]\n\n[2015-11-19 01:09:12,887][DEBUG][indices.cache.query ] [tribe_cluster_node1/t1] using [node] query cache with size [10%], actual_size [99mb], max filter count [1000]\n\n[2015-11-19 01:09:12,887][DEBUG][indices.fielddata.cache ] [tribe_cluster_node1/t1] using size [-1] [-1b], expire [null]\n\n[2015-11-19 01:09:12,897][INFO ][node ] [tribe_cluster_node1/t1] initialized\n\n[2015-11-19 01:09:12,906][INFO ][node ] [tribe_cluster_node1] initialized\n\n[2015-11-19 01:09:12,907][INFO ][node ] [tribe_cluster_node1] starting ...\n\n[2015-11-19 01:09:12,924][DEBUG][netty.channel.socket.nio.SelectorUtil] Using select timeout of 500\n\n[2015-11-19 01:09:12,924][DEBUG][netty.channel.socket.nio.SelectorUtil] Epoll-bug workaround enabled = false\n\n[2015-11-19 01:09:12,947][DEBUG][transport.netty ] [tribe_cluster_node1] using profile[default], worker_count[8], port[11111], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]\n\n[2015-11-19 01:09:12,957][DEBUG][transport.netty ] [tribe_cluster_node1] binding server bootstrap to: 127.0.0.1\n\n[2015-11-19 01:09:12,985][DEBUG][transport.netty ] [tribe_cluster_node1] Bound profile [default] to address {127.0.0.1:11111}\n\n[2015-11-19 01:09:12,986][INFO ][transport ] [tribe_cluster_node1] publish_address {127.0.0.1:11111}, bound_addresses {127.0.0.1:11111}\n\n[2015-11-19 01:09:12,993][DEBUG][discovery.local ] [tribe_cluster_node1] Connected to cluster [Cluster [elasticsearch_2_0_0_tribe_cluster]]\n\n[2015-11-19 01:09:12,996][INFO ][discovery ] [tribe_cluster_node1] elasticsearch_2_0_0_tribe_cluster/baK4hDMwRiaKGS5D8ivYng\n\n[2015-11-19 01:09:12,996][WARN ][discovery ] [tribe_cluster_node1] waited for 0s and no initial state was set by the discovery\n\n[2015-11-19 01:09:12,996][DEBUG][gateway ] [tribe_cluster_node1] can't wait on start for (possibly) reading state from gateway, will do it asynchronously\n\n[2015-11-19 01:09:13,010][DEBUG][http.netty ] [tribe_cluster_node1] Bound http to address {127.0.0.1:22222}\n\n[2015-11-19 01:09:13,011][INFO ][http ] [tribe_cluster_node1] publish_address {127.0.0.1:22222}, bound_addresses {127.0.0.1:22222}\n\n[2015-11-19 01:09:13,011][INFO ][node ] [tribe_cluster_node1/t2] starting ...\n\n[2015-11-19 01:09:13,016][DEBUG][transport.netty ] [tribe_cluster_node1/t2] using profile[default], worker_count[8], port[11111], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]\n\n[2015-11-19 01:09:13,022][DEBUG][transport.netty ] [tribe_cluster_node1/t2] binding server bootstrap to: 127.0.0.1\n\n[2015-11-19 01:09:13,039][INFO ][node ] [tribe_cluster_node1/t2] stopping ...\n\n[2015-11-19 01:09:13,041][INFO ][node ] [tribe_cluster_node1/t2] stopped\n\n[2015-11-19 01:09:13,042][INFO ][node ] [tribe_cluster_node1/t2] closing ...\n\n[2015-11-19 01:09:13,048][INFO ][node ] [tribe_cluster_node1/t2] closed\n\n[2015-11-19 01:09:13,048][INFO ][node ] [tribe_cluster_node1/t1] closing ...\n\n[2015-11-19 01:09:13,052][INFO ][node ] [tribe_cluster_node1/t1] closed\n\nException in thread \"main\" BindTransportException[Failed to bind to [11111]]; nested: ChannelException[Failed to bind to: /127.0.0.1:11111]; nested: BindException[Address already in use];\n\nLikely root cause: 
java.net.BindException: Address already in use\n\nat sun.nio.ch.Net.bind0(Native Method)\n\nat sun.nio.ch.Net.bind(Net.java:444)\n\nat sun.nio.ch.Net.bind(Net.java:436)\n\nat sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)\n\nat sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)\n\nat org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)\n\nat org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)\n\nat org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)\n\nat org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)\n\nat org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n\nat org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\nat java.lang.Thread.run(Thread.java:745)\n\nRefer to the log for complete error details.\n\n[2015-11-19 01:09:13,058][INFO ][node ] [tribe_cluster_node1] stopping ...\n\n[2015-11-19 01:09:13,064][INFO ][node ] [tribe_cluster_node1] stopped\n\n[2015-11-19 01:09:13,064][INFO ][node ] [tribe_cluster_node1] closing ...\n\n[2015-11-19 01:09:13,066][INFO ][node ] [tribe_cluster_node1] closed\n```\n\nNote that I cannot reproduce this on 1.7.2. On 1.7.2, I can set up a custom transport.tcp.port for the tribe node and it will start up fine. \n",
"created_at": "2015-11-19T17:48:42Z"
},
{
"body": "@ppf2 this happens because the tribe node process will start three nodes, the first one will get the configured port, and the second will try to get the same one as it reads from the same configuration file. The workaround provided by Clint above should work till we fix this properly.\n",
"created_at": "2015-11-19T23:21:47Z"
},
{
"body": "@javanna I am going to explore having the tribe node have its own subclass of Node which can customize this single behavior (how to get the node's settings). I don't think we should add back this general purpose flag as we need to keep the tons of ways Nodes can be configured to a minimum.\n",
"created_at": "2015-11-19T23:44:40Z"
},
{
"body": "@rjernst thanks that sounds good to me. \n",
"created_at": "2015-11-20T00:03:33Z"
},
{
"body": "Confirmed that the workaround works to prevent the BindTransportException error, thx!\n",
"created_at": "2015-11-20T00:03:35Z"
},
{
"body": "@rjernst Do we have a sense of whether the fix will make it to the upcoming 2.1 release? Or will it likely be after 2.1 (i.e. use the workaround until a later 2.x release)?\n",
"created_at": "2015-11-20T00:27:10Z"
},
{
"body": "@ppf2 Definitely after 2.1. I would not want to destabilize 2.1 with a refactoring like this. \n",
"created_at": "2015-11-20T01:10:11Z"
},
{
"body": "@rjernst sounds good, thx!\n",
"created_at": "2015-11-20T01:12:15Z"
},
{
"body": "This requires some fairly extensive changes, so we will target this for 2.2. In the meantime, we should document the workaround in the 2.1 docs.\n",
"created_at": "2015-12-02T09:12:57Z"
},
{
"body": "I opened a PR to fix this here: #15300.\n\nNote that I was able to do the fix simply enough that I think it will be ok to backport to 2.1.x\n",
"created_at": "2015-12-08T04:22:34Z"
},
{
"body": "thanks @rjernst \n",
"created_at": "2015-12-09T12:15:45Z"
},
{
"body": "Thanks @rjernst !\n",
"created_at": "2015-12-10T02:26:47Z"
},
{
"body": "I'm late to the party but thought this might be useful for anyone coming across this. I found that the dummy config file isn't needed to work around the issue. Instead for creating a new directory (/etc/tribe-client in the example) path.conf can reference the current configuration directory.\n\nUsing the above example where the config directory was /etc/tribe\n\n# arbitrary config\n\ntransport.tcp.port: 9301\nhttp.port: 9201\nnetwork.host: 0.0.0.0\npath.data: /var/lib/elasticsearch/\npath.logs: /var/log/elasticsearch/\n\ntribe:\n kibana:\n path.conf: /etc/tribe #\n cluster.name: logstash-kibana\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"127.0.0.1\"]\n els:\n path.conf: /etc/tribe #\n cluster.name: logstash-data\n discovery.zen.ping.multicast.enabled: false\n discovery.zen.ping.unicast.hosts: [\"10.128.69.48\", \"10.128.75.237\"]\n",
"created_at": "2015-12-12T23:02:06Z"
},
{
"body": "Is this fixed in 2.1.1?\n",
"created_at": "2015-12-22T00:25:55Z"
},
{
"body": "With v2.1.1, I still have to specify path.conf and I used the valid path as mentioned above by lb425. In my case, I also had to specify path.plugins for similar reason. Otherwise, I kept getting AccessControlException error.\n\nI did not have to specify both path.conf and path.plugins when I was using v1.7.3\n",
"created_at": "2015-12-30T10:50:28Z"
},
{
"body": "WRT ES v2.1.1, I have to do the following to get the tribe node talking to two different clusters: cluster A and cluster B\n\n**# tribe node's configuration (elasticsearch.yml)**\n**network.host:** 0.0.0.0\n**transport.tcp.port:** 9300\n**http.port:** 9200\n**http.enabled:** true\n\ntribe.**t1**.cluster.name: **<cluster A>**\ntribe.**t1**.discovery.zen.ping.unicast.hosts: **<cluster A's master node>**\ntribe.**t1**.discovery.zen.ping.multicast.enabled: false\ntribe.**t1**.path.conf: **<valid path/to/conf>**\ntribe.**t1**.path.plugins: **<valid path/to/plugin>**\ntribe.**t1**.network.bind_host: **0.0.0.0**\ntribe.**t1**.network.publish_host: **<tribe node's IP>**\ntribe.**t1**.transport.tcp.port: **<optional but different from tribe node port above>**\n\n_repeat the same block but replace \"t1\" to \"t2\" for cluster B and fill in proper info related to cluster B but keep the tribe.t2.network.\\* the same with different tribe.t2.transport.tcp.port value from t1 if specified_\n",
"created_at": "2015-12-31T14:20:17Z"
},
{
"body": "@thn-dev Setting network and path settings for tribe nodes (the t1, t2 here) should not be necessary. Can you share your full elasticsearch.yml for both the tribe node, as well as cluster A and cluster B?\n",
"created_at": "2015-12-31T19:12:18Z"
},
{
"body": "@rjernst I did not have to do network and path settings when I was using v1.7.3. It was a surprise to me when v2.1.1 kept giving me AccessControlException error message. Initially, it pointed to the \"plugins\" location, after I set it, it complained about the \"config\" location. If I did not do the network settings for t1 and t2, it was not able to connect to cluster A and/or B. This part is weird too. Again, I did not have to do this in v1.7.3.\n\nAll ES instances are installed using .rpm file, not .zip file.\n\nMy settings for tribe node is above with additional parameters\n- cluster.name\n- discovery.zen.ping.multicast.enabled: false\n\nCluster A and B, each has 1 master node, 3 data nodes with the following parameters' settings (I don't have all information with me at the moment)\n- cluster.name: <cluster A or B>\n- network.host: 0.0.0.0\n- transport.tcp.port: 9300\n- http.port: 9200 (master)\n- http.enabled: true (master)\n- discovery.zen.ping.multicast.enabled: false\n- discovery.zen.ping.unicast.hosts: <master node's IP>\n- path.conf: /data/es/config\n- path.plugins: /data/es/plugins\n- path.data: /data/es\n",
"created_at": "2016-01-01T00:56:30Z"
}
],
"number": 14573,
"title": "elasticsearch fails to start tribe node"
}
|
{
"body": "The tribe node creates one local client node for each cluster it\nconnects to. Refactorings in #13383 broke this so that each local client\nnode now tries to load the full elasticsearch.yml that the real tribe\nnode uses.\n\nThis change fixes the problem by adding a TribeClientNode which is a\nsubclass of Node. The Environment the node uses is now passed in (in\nplace of Settings), and the TribeClientNode simply does not use\nInternalSettingsPreparer.prepareEnvironment.\n\nThe tests around tribe nodes are not great. The existing tests pass, but\nI also manually tested by creating 2 local clusters, and configuring and\nstarting a tribe node. With this I was able to see in the logs the tribe\nnode connecting to each cluster.\n\ncloses #14573\n",
"number": 15300,
"review_comments": [
{
"body": "seems like this is not needed anymore given that we don't go through InternalSettingsPreparer anymore? sysprops will always be ignored I think\n",
"created_at": "2015-12-08T10:44:39Z"
},
{
"body": "agreed that this test is way too delicate, testing the tribe node is not easy. We would need a proper qa test for that with external nodes etc. `TribeUnitTests` should have failed but it didn't, because we specify the config using `path.conf`, which is not passed to the tribe clients, so in our unrealistic test the tribe clients don't actually load the same config because they don't know where it is, and that makes the test succeed. I tried making the test fail by writing our own config file within temp_dir/config but that didn't quite work either (meaning that the test still succeeds). I think too many changes have happened since the test was written, we should get rid of it and have a proper qa test with external nodes.\n",
"created_at": "2015-12-08T13:21:46Z"
},
{
"body": "Removed.\n",
"created_at": "2015-12-08T16:06:26Z"
}
],
"title": "Fix tribe node to load config file for internal client nodes"
}
|
{
"commits": [
{
"message": "Tribe: Fix tribe node to load config file for internal client nodes\n\nThe tribe node creates one local client node for each cluster it\nconnects to. Refactorings in #13383 broke this so that each local client\nnode now tries to load the full elasticsearch.yml that the real tribe\nnode uses.\n\nThis change fixes the problem by adding a TribeClientNode which is a\nsubclass of Node. The Environment the node uses is now passed in (in\nplace of Settings), and the TribeClientNode simply does not use\nInternalSettingsPreparer.prepareEnvironment.\n\nThe tests around tribe nodes are not great. The existing tests pass, but\nI also manually tested by creating 2 local clusters, and configuring and\nstarting a tribe node. With this I was able to see in the logs the tribe\nnode connecting to each cluster.\n\ncloses #13383"
},
{
"message": "Removed unnecessary setting previously used to ignore sysprops in tribe nodes"
}
],
"files": [
{
"diff": "@@ -128,14 +128,13 @@ public class Node implements Releasable {\n * @param preparedSettings Base settings to configure the node with\n */\n public Node(Settings preparedSettings) {\n- this(preparedSettings, Version.CURRENT, Collections.<Class<? extends Plugin>>emptyList());\n+ this(InternalSettingsPreparer.prepareEnvironment(preparedSettings, null), Version.CURRENT, Collections.<Class<? extends Plugin>>emptyList());\n }\n \n- Node(Settings preparedSettings, Version version, Collection<Class<? extends Plugin>> classpathPlugins) {\n- final Settings pSettings = settingsBuilder().put(preparedSettings)\n- .put(Client.CLIENT_TYPE_SETTING, CLIENT_TYPE).build();\n- Environment tmpEnv = InternalSettingsPreparer.prepareEnvironment(pSettings, null);\n- Settings tmpSettings = TribeService.processSettings(tmpEnv.settings());\n+ protected Node(Environment tmpEnv, Version version, Collection<Class<? extends Plugin>> classpathPlugins) {\n+ Settings tmpSettings = settingsBuilder().put(tmpEnv.settings())\n+ .put(Client.CLIENT_TYPE_SETTING, CLIENT_TYPE).build();\n+ tmpSettings = TribeService.processSettings(tmpSettings);\n \n ESLogger logger = Loggers.getLogger(Node.class, tmpSettings.get(\"name\"));\n logger.info(\"version[{}], pid[{}], build[{}/{}]\", version, JvmInfo.jvmInfo().pid(), Build.CURRENT.shortHash(), Build.CURRENT.date());",
"filename": "core/src/main/java/org/elasticsearch/node/Node.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,37 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.tribe;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.node.Node;\n+import org.elasticsearch.plugins.Plugin;\n+\n+import java.util.Collections;\n+\n+/**\n+ * An internal node that connects to a remove cluster, as part of a tribe node.\n+ */\n+class TribeClientNode extends Node {\n+ TribeClientNode(Settings settings) {\n+ super(new Environment(settings), Version.CURRENT, Collections.<Class<? extends Plugin>>emptyList());\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/tribe/TribeClientNode.java",
"status": "added"
},
{
"diff": "@@ -132,14 +132,14 @@ public TribeService(Settings settings, ClusterService clusterService, DiscoveryS\n nodesSettings.remove(\"on_conflict\"); // remove prefix settings that don't indicate a client\n for (Map.Entry<String, Settings> entry : nodesSettings.entrySet()) {\n Settings.Builder sb = Settings.builder().put(entry.getValue());\n- sb.put(\"node.name\", settings.get(\"name\") + \"/\" + entry.getKey());\n+ sb.put(\"name\", settings.get(\"name\") + \"/\" + entry.getKey());\n sb.put(\"path.home\", settings.get(\"path.home\")); // pass through ES home dir\n sb.put(TRIBE_NAME, entry.getKey());\n- sb.put(InternalSettingsPreparer.IGNORE_SYSTEM_PROPERTIES_SETTING, true);\n if (sb.get(\"http.enabled\") == null) {\n sb.put(\"http.enabled\", false);\n }\n- nodes.add(NodeBuilder.nodeBuilder().settings(sb).client(true).build());\n+ sb.put(\"node.client\", true);\n+ nodes.add(new TribeClientNode(sb.build()));\n }\n \n String[] blockIndicesWrite = Strings.EMPTY_ARRAY;",
"filename": "core/src/main/java/org/elasticsearch/tribe/TribeService.java",
"status": "modified"
},
{
"diff": "@@ -54,13 +54,12 @@ public class TribeUnitTests extends ESTestCase {\n @BeforeClass\n public static void createTribes() {\n Settings baseSettings = Settings.builder()\n- .put(InternalSettingsPreparer.IGNORE_SYSTEM_PROPERTIES_SETTING, true)\n .put(\"http.enabled\", false)\n .put(\"node.mode\", NODE_MODE)\n .put(\"path.home\", createTempDir()).build();\n \n- tribe1 = NodeBuilder.nodeBuilder().settings(Settings.builder().put(baseSettings).put(\"cluster.name\", \"tribe1\").put(\"node.name\", \"tribe1_node\")).node();\n- tribe2 = NodeBuilder.nodeBuilder().settings(Settings.builder().put(baseSettings).put(\"cluster.name\", \"tribe2\").put(\"node.name\", \"tribe2_node\")).node();\n+ tribe1 = new TribeClientNode(Settings.builder().put(baseSettings).put(\"cluster.name\", \"tribe1\").put(\"name\", \"tribe1_node\").build()).start();\n+ tribe2 = new TribeClientNode(Settings.builder().put(baseSettings).put(\"cluster.name\", \"tribe2\").put(\"name\", \"tribe2_node\").build()).start();\n }\n \n @AfterClass",
"filename": "qa/evil-tests/src/test/java/org/elasticsearch/tribe/TribeUnitTests.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.Version;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.node.internal.InternalSettingsPreparer;\n import org.elasticsearch.plugins.Plugin;\n \n import java.util.Collection;\n@@ -39,7 +40,7 @@ public class MockNode extends Node {\n private Collection<Class<? extends Plugin>> plugins;\n \n public MockNode(Settings settings, Version version, Collection<Class<? extends Plugin>> classpathPlugins) {\n- super(settings, version, classpathPlugins);\n+ super(InternalSettingsPreparer.prepareEnvironment(settings, null), version, classpathPlugins);\n this.version = version;\n this.plugins = classpathPlugins;\n }",
"filename": "test-framework/src/main/java/org/elasticsearch/node/MockNode.java",
"status": "modified"
}
]
}
|
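Stripped of Elasticsearch specifics, the diffs above follow one pattern: the public `Node` constructor builds its `Environment` itself (and therefore resolves and reads `elasticsearch.yml`), while a new protected constructor accepts a pre-built `Environment`, and the tribe client subclass only ever uses the latter. Here is a hedged sketch of that pattern with invented class names (`NodeSketch`, `EnvironmentSketch`), not the actual Elasticsearch classes:

```java
import java.util.Map;

// Invented names sketching the pattern used by the fix: only the public
// constructor goes through config-file loading; the tribe client bypasses it.
class EnvironmentSketch {
    final Map<String, String> settings;
    EnvironmentSketch(Map<String, String> settings) { this.settings = settings; }
}

class NodeSketch {
    final EnvironmentSketch environment;

    // Public entry point: prepares the environment, which is where the config
    // file (elasticsearch.yml) gets resolved and read.
    NodeSketch(Map<String, String> settings) {
        this(prepareEnvironment(settings));
    }

    // Protected entry point: takes a ready-made environment and never touches
    // the config file at all.
    protected NodeSketch(EnvironmentSketch environment) {
        this.environment = environment;
    }

    private static EnvironmentSketch prepareEnvironment(Map<String, String> settings) {
        // In the real code this role is played by InternalSettingsPreparer.prepareEnvironment,
        // the frame where the AccessControlException in #14573 was thrown.
        return new EnvironmentSketch(settings);
    }
}

// One inner client node is created per configured tribe: it inherits a
// hand-picked subset of the parent node's settings and skips config loading.
class TribeClientNodeSketch extends NodeSketch {
    TribeClientNodeSketch(Map<String, String> inheritedSettings) {
        super(new EnvironmentSketch(inheritedSettings));
    }
}
```

Because the inner nodes never consult the shared `elasticsearch.yml`, they also stop competing for settings such as `transport.tcp.port` that should only apply to the outer tribe node.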
{
"body": "This is a minor issue or is not issue.\n\nis it right StatsAggegator? (https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggegator.java)\nI think that StatsAggegator will be StatsAggregator.\n\nThanks.\n",
"comments": [
{
"body": "@colings86 we should probably fix this :)\n",
"created_at": "2015-11-17T14:50:35Z"
},
{
"body": "Submited a PR to fix this.\n",
"created_at": "2015-12-06T03:32:20Z"
}
],
"number": 14730,
"title": "StatsAggegator has a missing character."
}
|
{
"body": "Closes #14730\n",
"number": 15264,
"review_comments": [],
"title": "Correct typo in class name of StatsAggregator"
}
|
{
"commits": [
{
"message": "Merge remote-tracking branch 'elastic/master' into #14719--url-params-parsing-should-not-be-lenient"
},
{
"message": "Implementing #14719"
},
{
"message": "Implementing #14730"
},
{
"message": "Implementing #14730"
},
{
"message": "Merge branch 'master' into #14719--url-params-parsing-should-not-be-lenient"
}
],
"files": [
{
"diff": "@@ -29,7 +29,9 @@\n \n import java.net.SocketAddress;\n import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.Map;\n+import java.util.Set;\n \n /**\n *\n@@ -41,6 +43,7 @@ public class NettyHttpRequest extends HttpRequest {\n private final Map<String, String> params;\n private final String rawPath;\n private final BytesReference content;\n+ private final Set<String> consumedParams;\n \n public NettyHttpRequest(org.jboss.netty.handler.codec.http.HttpRequest request, Channel channel) {\n this.request = request;\n@@ -60,6 +63,8 @@ public NettyHttpRequest(org.jboss.netty.handler.codec.http.HttpRequest request,\n this.rawPath = uri.substring(0, pathEndPos);\n RestUtils.decodeQueryString(uri, pathEndPos + 1, params);\n }\n+\n+ this.consumedParams = new HashSet<>(params().size());\n }\n \n public org.jboss.netty.handler.codec.http.HttpRequest request() {\n@@ -107,6 +112,11 @@ public Map<String, String> params() {\n return params;\n }\n \n+ @Override\n+ public boolean allParamsConsumed() {\n+ return this.consumedParams.containsAll(this.params().keySet());\n+ }\n+\n @Override\n public boolean hasContent() {\n return content.length() > 0;\n@@ -160,11 +170,13 @@ public boolean hasParam(String key) {\n \n @Override\n public String param(String key) {\n+ this.consumedParams.add(key);\n return params.get(key);\n }\n \n @Override\n public String param(String key, String defaultValue) {\n+ this.consumedParams.add(key);\n String value = params.get(key);\n if (value == null) {\n return defaultValue;",
"filename": "core/src/main/java/org/elasticsearch/http/netty/NettyHttpRequest.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.path.PathTrie;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.http.HttpException;\n import org.elasticsearch.rest.support.RestUtils;\n \n import java.io.IOException;\n@@ -209,7 +210,13 @@ boolean checkRequestParameters(final RestRequest request, final RestChannel chan\n void executeHandler(RestRequest request, RestChannel channel) throws Exception {\n final RestHandler handler = getHandler(request);\n if (handler != null) {\n- handler.handleRequest(request, channel);\n+ handler.handleRequest(request, channel);\n+\n+ //Just validate params to READ operations\n+ if(RestRequest.Method.GET.equals(request.method()) && !request.allParamsConsumed()){\n+ channel.sendResponse(new BytesRestResponse(BAD_REQUEST, \"There are wrong parameters\"));\n+ }\n+\n } else {\n if (request.method() == RestRequest.Method.OPTIONS) {\n // when we have OPTIONS request, simply send OK by default (with the Access Control Origin header which gets automatically added)",
"filename": "core/src/main/java/org/elasticsearch/rest/RestController.java",
"status": "modified"
},
{
"diff": "@@ -88,6 +88,8 @@ public SocketAddress getLocalAddress() {\n \n public abstract Map<String, String> params();\n \n+ public abstract boolean allParamsConsumed();\n+\n public float paramAsFloat(String key, float defaultValue) {\n String sValue = param(key);\n if (sValue == null) {",
"filename": "core/src/main/java/org/elasticsearch/rest/RestRequest.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,6 @@ public StatsParser() {\n \n @Override\n protected AggregatorFactory createFactory(String aggregationName, ValuesSourceConfig<ValuesSource.Numeric> config) {\n- return new StatsAggegator.Factory(aggregationName, config);\n+ return new StatsAggregator.Factory(aggregationName, config);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsParser.java",
"status": "modified"
},
{
"diff": "@@ -94,6 +94,11 @@ public String param(String key) {\n return null;\n }\n \n+ @Override\n+ public boolean allParamsConsumed() {\n+ return true;\n+ }\n+\n @Override\n public Map<String, String> params() {\n return null;",
"filename": "core/src/test/java/org/elasticsearch/rest/RestRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -23,14 +23,18 @@\n import org.elasticsearch.rest.RestRequest;\n \n import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.Map;\n+import java.util.Set;\n \n public class FakeRestRequest extends RestRequest {\n \n private final Map<String, String> headers;\n \n private final Map<String, String> params;\n \n+ private final Set<String> consumedParams;\n+\n public FakeRestRequest() {\n this(new HashMap<String, String>(), new HashMap<String, String>());\n }\n@@ -41,6 +45,7 @@ public FakeRestRequest(Map<String, String> headers, Map<String, String> context)\n putInContext(entry.getKey(), entry.getValue());\n }\n this.params = new HashMap<>();\n+ this.consumedParams = new HashSet<>(params().size());\n }\n \n @Override\n@@ -85,18 +90,26 @@ public boolean hasParam(String key) {\n \n @Override\n public String param(String key) {\n+ this.consumedParams.add(key);\n return params.get(key);\n }\n \n @Override\n public String param(String key, String defaultValue) {\n+ this.consumedParams.add(key);\n String value = params.get(key);\n if (value == null) {\n return defaultValue;\n }\n return value;\n }\n \n+ @Override\n+ public boolean allParamsConsumed() {\n+ return this.consumedParams.containsAll(this.params().keySet());\n+ }\n+\n+\n @Override\n public Map<String, String> params() {\n return params;",
"filename": "test-framework/src/main/java/org/elasticsearch/test/rest/FakeRestRequest.java",
"status": "modified"
}
]
}
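The diffs above track which request parameters a handler actually reads and reject GET requests that carry unread ones. Below is a minimal stand-alone sketch of that consumed-parameter idea; it is illustrative only (the class and method names here are not the Elasticsearch API) and simply mirrors the pattern visible in the FakeRestRequest diff.

```
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative stand-alone class; not the actual RestRequest implementation.
public class ParamTracker {

    private final Map<String, String> params = new HashMap<>();
    private final Set<String> consumed = new HashSet<>();

    public ParamTracker(Map<String, String> params) {
        this.params.putAll(params);
    }

    // Every read marks the key as consumed, mirroring param(String) in the diff.
    public String param(String key) {
        consumed.add(key);
        return params.get(key);
    }

    // True only if every supplied parameter was read by a handler.
    public boolean allParamsConsumed() {
        return consumed.containsAll(params.keySet());
    }

    public static void main(String[] args) {
        Map<String, String> supplied = new HashMap<>();
        supplied.put("pretty", "true");
        supplied.put("tyop", "oops"); // a misspelled parameter that no handler will read

        ParamTracker tracker = new ParamTracker(supplied);
        tracker.param("pretty");
        // Prints false: "tyop" was never consumed, so the request could be rejected.
        System.out.println(tracker.allParamsConsumed());
    }
}
```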
|
{
"body": "I am upgrading from Elasticsearch 1.7.1 to 2.1. I have few test cases that I wanted to validate before upgrading. Attached [BulkProcessorTest.txt](https://github.com/elastic/elasticsearch/files/51364/BulkProcessorTest.txt) is one among them.\n\nHere are the steps\n1) create object with random values and corresponding JSONs are 3K Bytes. \n2) Index 200 such records (could be any number) to index \"test\"\n3) Drop index \"test\"\n4) Index 200K Records to index \"test\"\n\nIn 1.7.1 step 4 used to take ~40 seconds.\nIn 2.1.0 step 4 is taking ~600 seconds\nI reran the same test on 2.0.1 it is taking ~40 seconds. \n\nIf step 3 is not executed Step 4 takes only ~40 seconds in 2.1.0.\n\nI saw lot of these warnings when the performance was slow: \nhigh disk watermark [90%] exceeded on [ZkeRJaTUR4iu3h8DmIiV2w][Thing]... free: 11gb[4.7%], shards will be relocated away from this node.\nAlso, disk and cpu utilization were significantly higher.\n\nThese warnings and high utilization were not seen when step 3 was not executed 2.1.0.\n",
"comments": [
{
"body": "This is because your are benchmarking an abusive and unrealistic use of Elasticsearch by randomly generating field names. This is hitting two changes in particular:\n\n## Mapping changes must be confirmed by the master\n\nPreviously it was possible for the same field to be added to an index on different shards with different mapping. This could result in incorrect results and even data loss down the line. Now, any mapping update must be confirmed by the master (which then publishes a new cluster state to all nodes and waits for confirmation that it has ben received) before continuing with indexing.\n\nYou are adding new random field names in every document, so every document triggers a cluster state update. Not surprising that this is slow!\n\n## Doc values on by default\n\nAggregations, sorting, and scripting need to be able to retrieve the value of a field for a particular document. Previously we used in-memory fielddata to do this, but that had two downsides:\n- A speed bump when loading fielddata from a big segment\n- It filled up your heap and could cause OOMs\n\nNow these values are written to disk at index time as doc values, a format which makes random access very fast. This format requires a value for every field in every document (it is not sparse). You are adding random field names each with one value, so your doc values matrix is growing exponentially. This becomes even worse when small segments are merged into big segments.\n\n(Actually, in Lucene 5.4 there is an optimization for sparse doc values (LUCENE-6863)[https://issues.apache.org/jira/browse/LUCENE-6863] which kicks in for fields that are present in less than 1% of documents).\n\n## Realistic benchmarks\n\nWe optimize for the way people actually use our software. Benchmarks are useless unless they test real world use cases.\n",
"created_at": "2015-12-04T08:54:01Z"
},
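To make the mapping-update point above concrete, here is a hedged sketch (assumed local 2.x cluster on the default transport port; the index, type, and field names are made up and not taken from the reporter's test) contrasting the random-field-name pattern, where nearly every document forces a dynamic mapping update acknowledged by the master, with a fixed field set that is mapped once:

```
import java.net.InetAddress;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class MappingUpdateCost {

    public static void main(String[] args) throws Exception {
        // Assumed: a local 2.x node reachable on the default transport port.
        Client client = TransportClient.builder().build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));

        // Anti-pattern: a brand-new field name per document means every index
        // request carries a dynamic mapping update the master must acknowledge.
        for (int i = 0; i < 100; i++) {
            Map<String, Object> doc = new HashMap<>();
            doc.put("field_" + UUID.randomUUID(), "value");
            client.prepareIndex("test", "doc").setSource(doc).get();
        }

        // Same volume with a fixed field set: the mapping is created once,
        // so subsequent documents skip the master round-trip entirely.
        for (int i = 0; i < 100; i++) {
            Map<String, Object> doc = new HashMap<>();
            doc.put("message", "value " + i);
            client.prepareIndex("test", "doc").setSource(doc).get();
        }

        client.close();
    }
}
```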
{
"body": "@clintongormley I am not generating random field names each time, I am generating random values for the same object. There are 30 different fields and that is constant through out the test. In the attached BulkProcessor.txt you may notice that. I think that is a realistic use case. \n\nBulkProccessor is not slow in all cases. If I am adding 200K documents, it takes 40 seconds. But, if I add some documents, let's say 1000 and delete that index, and add 200K records to the same index, this time it takes 600 seconds. \n\nI repeated the same test on 2.0.1 and 2.0. In both the cases it takes only ~40 seconds. \n",
"created_at": "2015-12-04T10:57:28Z"
},
{
"body": "> In the attached BulkProcessor.txt you may notice that.\n\nOK - that wasn't evident in BulkProcessor - you just refer to a RandomValueGenerator but the code isn't there.\n\n@jasontedor has managed to replicate a slow down here. Reopening\n",
"created_at": "2015-12-04T12:14:56Z"
},
{
"body": "@ksundeepsatya I've managed to reproduce a slow down here, and we have an understanding what is causing the slowdown. However, I was only observing a 2x slowdown, not a 10x slowdown. Given our understanding of what is causing the slowdown, are you by chance on spinning disks instead of SSD? That could explain the difference.\n",
"created_at": "2015-12-04T12:29:11Z"
},
{
"body": "@jasontedor I am using SSD. I was able to see a 10X slow down on multiple machines all having SSD. \n",
"created_at": "2015-12-04T12:36:02Z"
},
{
"body": "> I am using SSD. I was able to see a 10X slow down on multiple machines all having SSD.\n\n@ksundeepsatya Okay, thanks. I don't have a good explanation for observing a 2x difference locally vs your reported 10x, but either way there is a performance regression here that we will address.\n",
"created_at": "2015-12-04T12:38:46Z"
},
{
"body": "@jasontedor just curious to know if if this could be reproducible in any other flow other the way I mentioned. \nI am seeing it in 2.1, but not in 2.0.1. If this is a isolated case, I can upgrade to 2.1 instead of 2.0.1. \n",
"created_at": "2015-12-04T12:44:32Z"
},
{
"body": "> high disk watermark [90%] exceeded on [ZkeRJaTUR4iu3h8DmIiV2w][Thing]... free: 11gb[4.7%], shards will be relocated away from this node.\n\nHow many nodes are you running and how much free disk are you having before runs? It looks like your'e running out of disk too.\n",
"created_at": "2015-12-04T14:44:22Z"
},
{
"body": "@sarwarbhuiyan I am running Embedded Elasticsearch, single node. I have 328G free space on my disk. \nHere is the code snippet\nSettings.Builder elasticsearchSettings = Settings.settingsBuilder()\n .put(\"http.enabled\", \"false\").put(\"path.data\", dataDirectory)\n .put(\"path.home\", dataDirectory).put(\"script.indexed\", \"on\");\n\n```\n client = nodeBuilder().local(true)\n .settings(elasticsearchSettings.build()).node().client();\n```\n",
"created_at": "2015-12-04T15:06:03Z"
},
{
"body": "> just curious to know if if this could be reproducible in any other flow other the way I mentioned. \n\n@ksundeepsatya We are still investigating the scenarios under which this can occur. The reason that I asked about SSD vs. spinning disk is because the underlying cause appears to be due to how frequently merges are occurring.\n",
"created_at": "2015-12-04T15:08:29Z"
},
{
"body": "> How many nodes are you running and how much free disk are you having before runs? It looks like your'e running out of disk too.\n\n@sarwarbhuiyan It's a single embedded node in @ksundeepsatya's test code. The disk space is a red herring and not related to the underlying issue.\n",
"created_at": "2015-12-04T15:09:16Z"
},
{
"body": "> just curious to know if if this could be reproducible in any other flow other the way I mentioned. \n\n@ksundeepsatya Coming back to this question, we have a pretty good understanding now of the underlying cause of the performance regression, why that regression happened, and the scenarios under which it occurred.\n\nThe performance regression was caused by a substantial increase in the number of merges. The merges were caused by a certain indexing buffer (the version map) not being increased in size after bulk indexing started. The reason that the version map was not resized is because the controller that manages the indexing buffers did not detect the change in state for the underlying shards. Finally, the reason that it did not detect the state change is because of the create -> index -> delete -> create -> index sequence that you went through. The second time that the index was created, the shards were regenerated with the same ShardIds that they previously had. The state from the previous shards had not cleared and so the controller did not resize the buffer.\n\nSo, it appears the scenarios under which this regression can occur are rare and special. That said, it led us to rework the controller code so that it is not stateful and we are less likely to run into scenarios like this one.\n\nI've opened #15251 to address this and we will be working to have it released in version 2.1.1.\n\nThank you for discovering and reporting this issue.\n",
"created_at": "2015-12-04T20:13:20Z"
},
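For readers trying to reproduce this, here is a hedged outline of the create -> index -> delete -> create -> index sequence described above, using the 2.x BulkProcessor; the index name, document shape, listener, and batch sizes are assumptions for illustration, not the reporter's original test code.

```
import java.util.concurrent.TimeUnit;

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Client;

public class DeleteRecreateRepro {

    // Bulk-index `docs` small documents into `index` and wait for completion.
    static void bulkIndex(Client client, String index, int docs) throws InterruptedException {
        BulkProcessor processor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
            @Override public void beforeBulk(long id, BulkRequest request) {}
            @Override public void afterBulk(long id, BulkRequest request, BulkResponse response) {}
            @Override public void afterBulk(long id, BulkRequest request, Throwable failure) {}
        }).setBulkActions(1000).build();

        for (int i = 0; i < docs; i++) {
            processor.add(new IndexRequest(index, "doc").source("field", "value " + i));
        }
        // Flushes any remaining requests and closes the processor.
        processor.awaitClose(1, TimeUnit.MINUTES);
    }

    // Mirrors the reported sequence: small load, delete, then a large load into
    // a recreated index whose shards reuse the previous ShardIds.
    static void repro(Client client) throws InterruptedException {
        bulkIndex(client, "test", 200);
        client.admin().indices().prepareDelete("test").get();
        bulkIndex(client, "test", 200000); // this phase was the slow one before the fix
    }
}
```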
{
"body": "Let me clarify that this issue was not a problem in the BulkProcessor (java api code) as the title suggested, but in the indexing code on the server side. I updated the title to reflect that.\n",
"created_at": "2015-12-05T07:43:46Z"
},
{
"body": "Hi javanna / jason.\n\nExperiencing the same over here. Just upgraded our ES server to 2.1.0 and suddenly test execution times are running up 10 times. (Only after deleting the index and reinserting test data)\n\nBut luckily you found the issue and hopefully it is fixed. Can I verify the fix by running a snapshot? if so, where can I find the latest snapshot? (sonatype?)\n\nThanks in advance\n",
"created_at": "2015-12-10T12:13:06Z"
},
{
"body": "> Experiencing the same over here. Just upgraded our ES server to 2.1.0 and suddenly test execution times are running up 10 times. (Only after deleting the index and reinserting test data)\n\nThat sounds like it could be the same issue.\n\n> Can I verify the fix by running a snapshot? if so, where can I find the latest snapshot? (sonatype?)\n\nYes. Clone the 2.1 branch of the source repository and package it using `mvn package -DskipTests`. This will produce a `tar.gz` that you can deploy.\n\n> Thanks in advance\n\nThank you! The sooner that you're able to let us know if you're still seeing performance issues after running against a build with the fix, the more likely it is that we'll be able to address in advance of the next release.\n",
"created_at": "2015-12-10T12:48:31Z"
},
{
"body": "Hi Jason,\n\nVerified. When running with the 2.1.1-SNAPSHOT release from sonatype it simply runs as expected. Rerunning the tests (i.e. deleting / recreating indices) show similar performances when inserting data.\n",
"created_at": "2015-12-10T13:27:46Z"
},
{
"body": "> Verified. When running with the 2.1.1-SNAPSHOT release from sonatype it simply runs as expected. Rerunning the tests (i.e. deleting / recreating indices) show similar performances when inserting data.\n\nThanks much for verifying!\n",
"created_at": "2015-12-10T13:46:52Z"
},
{
"body": "\"Previously it was possible for the same field to be added to an index on different shards with different mapping. This could result in incorrect results and even data loss down the line. Now, any mapping update must be confirmed by the master (which then publishes a new cluster state to all nodes and waits for confirmation that it has ben received) before continuing with indexing.\" \n\nYou should be able to turn this off when creating many types under an index. And then be able to turn it back on when finish. For example if I want to create an index with 200 types all with the same mapping but have different type names. Like aa,ab,ac etc. In 1.7,2 I could just run a script and it would take about 10 Minutes now with 2.3. it takes HOURS. I never could finish I had to go back to 1.7.2\n",
"created_at": "2016-05-28T14:01:45Z"
},
{
"body": "> For example if I want to create an index with 200 types all with the same mapping but have different type names\n\nIf all mappings are the same, maybe you could use a field with a value mapped to what you currently store as type. It will make things much simpler and easier to manage, with the one downside of having to add a filter to each search. This downside can also be mitigated by setting aliases with a filter.\n\nAnother option is to create the mapping in advance, before you index.\n\n> On 2.3 it takes HOURS..\n\nThat's peculiar as mapping updates are batched and processed together. Any idea why? Do you have hot threads of the master?\n",
"created_at": "2016-05-29T21:05:31Z"
},
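As a concrete illustration of the filtered-alias suggestion above, here is a hedged sketch using the 2.x Java admin client; the index, type, field, and alias names are made up, and the exact builder overloads should be checked against the client version in use.

```
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

public class FilteredAliasesInsteadOfTypes {

    static void setup(Client client) {
        // One concrete type; the former type name becomes an ordinary field.
        client.admin().indices().prepareCreate("records")
                .addMapping("record",
                        "{\"properties\":{\"kind\":{\"type\":\"string\",\"index\":\"not_analyzed\"}}}")
                .get();

        // One filtered alias per former type, so searches against "records_aa"
        // behave like searches against the old type "aa".
        for (String kind : new String[] { "aa", "ab", "ac" }) {
            client.admin().indices().prepareAliases()
                    .addAlias("records", "records_" + kind, QueryBuilders.termQuery("kind", kind))
                    .get();
        }

        // Indexing tags each document with its kind instead of using a separate type.
        client.prepareIndex("records", "record").setSource("kind", "aa", "payload", "...").get();
    }
}
```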
{
"body": "We have one index with 3 fairly large and deeply nested types. On ES1 it tooks us 38 minutes to import a dataset of 4.4 million records. With ES 2 (2.1, 2.2, 2.3) it now takes us 2 hours and 39 minutes. Both metrics come from imports on my dev machine (1 shard, 0 replicas).\n\nThat's almost 5 times as slow.\n\nBelow is the mapping for one of the three types:\n\n<details>\n\n```\n{\n \"properties\" : {\n \"sourceSystem\" : {\n \"properties\" : {\n \"code\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"sourceSystemId\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"recordURI\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sourceInstitutionID\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sourceID\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"owner\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"licenceType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"licence\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"unitID\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"collectionType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"title\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"caption\" : {\n \"type\" : \"string\",\n \"index\" : 
\"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"description\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"serviceAccessPoints\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"accessUri\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"format\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"variant\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"type\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"taxonCount\" : {\n \"type\" : \"integer\"\n },\n \"creator\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"copyrightText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"associatedSpecimenReference\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"associatedTaxonReference\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"specimenTypeStatus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"multiMediaPublic\" : {\n \"type\" : \"boolean\"\n },\n \"subjectParts\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"subjectOrientations\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : 
\"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"phasesOrStages\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sexes\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gatheringEvents\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"projectTitle\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"worldRegion\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"continent\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"country\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"iso3166Code\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"provinceState\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"island\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"locality\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"city\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sublocality\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"localityText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n 
\"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"dateTimeBegin\" : {\n \"type\" : \"date\"\n },\n \"dateTimeEnd\" : {\n \"type\" : \"date\"\n },\n \"method\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"altitude\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"altitudeUnifOfMeasurement\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"depth\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"depthUnitOfMeasurement\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gatheringPersons\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"gatheringOrganizations\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n 
}\n }\n },\n \"siteCoordinates\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"longitudeDecimal\" : {\n \"type\" : \"double\"\n },\n \"latitudeDecimal\" : {\n \"type\" : \"double\"\n },\n \"gridCellSystem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gridLatitudeDecimal\" : {\n \"type\" : \"double\"\n },\n \"gridLongitudeDecimal\" : {\n \"type\" : \"double\"\n },\n \"gridCellCode\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gridQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"point\" : {\n \"type\" : \"geo_shape\"\n }\n }\n },\n \"bioStratigraphy\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"youngBioDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngBioName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngFossilZone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngFossilSubZone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngBioCertainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngStratType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bioDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bioPreferredFlag\" : {\n \"type\" : \"boolean\"\n },\n \"rangePosition\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldBioName\" : {\n \"type\" : 
\"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bioIdentifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldFossilzone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldFossilSubzone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldBioCertainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldBioStratType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"chronoStratigraphy\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"youngRegionalSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngRegionalStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngRegionalSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternSystem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" 
: \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternErathem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternEonothem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngChronoName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngCertainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"chronoPreferredFlag\" : {\n \"type\" : \"boolean\"\n },\n \"oldRegionalSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldRegionalStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldRegionalSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternSystem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n 
\"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternErathem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternEonothem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldChronoName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"chronoIdentifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldCertainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"lithoStratigraphy\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"qualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"preferredFlag\" : {\n \"type\" : \"boolean\"\n },\n \"member2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"member\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"informalName2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"informalName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"importedName2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"importedName1\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n 
}\n }\n },\n \"lithoIdentifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formation2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formationGroup2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formationGroup\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formation\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"certainty2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"certainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bed2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bed\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"identifications\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"taxonRank\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"scientificName\" : {\n \"properties\" : {\n \"fullScientificName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"taxonomicStatus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"genusOrMonomial\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : 
\"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"subgenus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"specificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"infraspecificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"infraspecificMarker\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"nameAddendum\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"authorshipVerbatim\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"author\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"year\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"references\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"titleCitation\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"citationDetail\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"uri\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"author\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n 
\"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"publicationDate\" : {\n \"type\" : \"date\"\n }\n }\n },\n \"experts\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n }\n }\n },\n \"defaultClassification\" : {\n \"properties\" : {\n \"kingdom\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"phylum\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"className\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"order\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"superFamily\" : {\n \"type\" : \"string\",\n \"index\" : 
\"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"family\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"genus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"subgenus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"specificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"infraspecificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"infraspecificRank\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"systemClassification\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"rank\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"vernacularNames\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"language\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"preferred\" : {\n \"type\" : \"boolean\"\n },\n \"references\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"titleCitation\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"citationDetail\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n 
\"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"uri\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"author\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"publicationDate\" : {\n \"type\" : \"date\"\n }\n }\n },\n \"experts\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n }\n }\n },\n \"identificationQualifiers\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"dateIdentified\" : {\n \"type\" : \"date\"\n },\n \"identifiers\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : 
\"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"identifyingEpithets\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"theme\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\",\n \"index\" : \"analyzed\"\n },\n \"ci\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n },\n \"dynamic\" : \"strict\"\n}\nLog file: /home/ayco/projects/nba/v2/import/log/print-mapping.2016_06_24_15_06.log\n{\n \"properties\" : {\n \"sourceSystem\" : {\n \"properties\" : {\n \"code\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"sourceSystemId\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"recordURI\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sourceInstitutionID\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sourceID\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"owner\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"licenceType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"licence\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"unitID\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"collectionType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"title\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n 
\"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"caption\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"description\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"serviceAccessPoints\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"accessUri\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"format\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"variant\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"type\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"taxonCount\" : {\n \"type\" : \"integer\"\n },\n \"creator\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"copyrightText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"associatedSpecimenReference\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"associatedTaxonReference\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"specimenTypeStatus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"multiMediaPublic\" : {\n \"type\" : \"boolean\"\n },\n \"subjectParts\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"subjectOrientations\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : 
\"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"phasesOrStages\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sexes\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gatheringEvents\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"projectTitle\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"worldRegion\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"continent\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"country\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"iso3166Code\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"provinceState\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"island\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"locality\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"city\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"sublocality\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"localityText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"dateTimeBegin\" : {\n \"type\" : \"date\"\n },\n \"dateTimeEnd\" : {\n \"type\" : \"date\"\n 
},\n \"method\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"altitude\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"altitudeUnifOfMeasurement\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"depth\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"depthUnitOfMeasurement\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gatheringPersons\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"gatheringOrganizations\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"siteCoordinates\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"longitudeDecimal\" : {\n \"type\" : \"double\"\n },\n \"latitudeDecimal\" : {\n \"type\" : \"double\"\n },\n \"gridCellSystem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gridLatitudeDecimal\" : {\n \"type\" : \"double\"\n },\n 
\"gridLongitudeDecimal\" : {\n \"type\" : \"double\"\n },\n \"gridCellCode\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"gridQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"point\" : {\n \"type\" : \"geo_shape\"\n }\n }\n },\n \"bioStratigraphy\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"youngBioDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngBioName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngFossilZone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngFossilSubZone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngBioCertainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngStratType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bioDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bioPreferredFlag\" : {\n \"type\" : \"boolean\"\n },\n \"rangePosition\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldBioName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bioIdentifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldFossilzone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n 
\"oldFossilSubzone\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldBioCertainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldBioStratType\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"chronoStratigraphy\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"youngRegionalSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngRegionalStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngRegionalSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternSystem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternErathem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngInternEonothem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngChronoName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : 
\"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"youngCertainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldDatingQualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"chronoPreferredFlag\" : {\n \"type\" : \"boolean\"\n },\n \"oldRegionalSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldRegionalStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldRegionalSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternSystem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternSubstage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternStage\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternSeries\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternErathem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldInternEonothem\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldChronoName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"chronoIdentifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"oldCertainty\" : {\n \"type\" : \"string\",\n 
\"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"lithoStratigraphy\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"qualifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"preferredFlag\" : {\n \"type\" : \"boolean\"\n },\n \"member2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"member\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"informalName2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"informalName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"importedName2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"importedName1\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"lithoIdentifier\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formation2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formationGroup2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formationGroup\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"formation\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"certainty2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : 
\"case_insensitive_analyzer\"\n }\n }\n },\n \"certainty\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bed2\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"bed\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"identifications\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"taxonRank\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"scientificName\" : {\n \"properties\" : {\n \"fullScientificName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"taxonomicStatus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"genusOrMonomial\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"subgenus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"specificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"infraspecificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"infraspecificMarker\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"nameAddendum\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"authorshipVerbatim\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n 
\"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"author\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"year\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"references\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"titleCitation\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"citationDetail\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"uri\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"author\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"publicationDate\" : {\n \"type\" : \"date\"\n }\n }\n },\n \"experts\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : 
\"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n }\n }\n },\n \"defaultClassification\" : {\n \"properties\" : {\n \"kingdom\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"phylum\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"className\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"order\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"superFamily\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"family\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"genus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"subgenus\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"specificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"infraspecificEpithet\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"infraspecificRank\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"systemClassification\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"rank\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n 
\"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n },\n \"vernacularNames\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"language\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"preferred\" : {\n \"type\" : \"boolean\"\n },\n \"references\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"titleCitation\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"citationDetail\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"uri\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"author\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"publicationDate\" : {\n \"type\" : \"date\"\n }\n }\n },\n \"experts\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"fullName\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n },\n \"like\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"like_analyzer\"\n }\n }\n },\n \"organization\" : {\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n 
\"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"name\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n }\n }\n },\n \"identificationQualifiers\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"dateIdentified\" : {\n \"type\" : \"date\"\n },\n \"identifiers\" : {\n \"type\" : \"nested\",\n \"properties\" : {\n \"agentText\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n }\n }\n }\n },\n \"identifyingEpithets\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n },\n \"theme\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fields\" : {\n \"analyzed\" : {\n \"type\" : \"string\"\n },\n \"ignoreCase\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"case_insensitive_analyzer\"\n }\n }\n }\n },\n \"dynamic\" : \"strict\"\n}\n```\n\n</details>\n",
"created_at": "2016-06-24T13:32:35Z"
},
{
"body": "I was looking for an answer to the same issue, and the snippet you just pasted broke my finger.\n",
"created_at": "2016-07-06T09:47:39Z"
},
{
"body": "@ayco-at-naturalis i've added `<details>` tags to your incredibly long comment. I think your issue is probably to do with a change to the default `distance_error_pct` (see https://github.com/elastic/elasticsearch/issues/17907)\n",
"created_at": "2016-07-06T12:44:00Z"
},
{
"body": "@clintongormley Thanks for your reply and sorry for the amount of text. It does seem though as if it is indeed the (deep) nesting that is causing the trouble - not de geo stuff that you mention. As soon as I stopped providing data (while indexing) voor \"nested\" objects (which themselves contain \"nested\" objects), perfomance went back to what it was in ES 1.3.4\n\nOf course, since that data is not optional, I would still need to load that data in a separate round.\n\nNote that it isn't the mapping itself that makes it slow. It's when you actually provide data for the nested structures defined in the mapping.\n",
"created_at": "2016-07-11T07:15:11Z"
},
{
"body": "> it is indeed the (deep) nesting that is causing the trouble - not de geo stuff that you mention\n\nmy mistake, the geo issue only kicks in if you specify a `precision`\n",
"created_at": "2016-07-11T13:51:23Z"
}
],
"number": 15225,
"title": "Indexing rate is ~10X lower in Elasticsearch 2.1"
}
|
{
"body": "This commit modifies IndexingMemoryController to be stateless. Rather\nthan statefully tracking the indexing status of shards,\nIndexingMemoryController can grab all available shards, check their idle\nstate, and then resize the buffers based on the number of and which\nshards are not idle.\n\nThe driver for this change is a performance regression that can arise in\nsome scenarios after #13918. One scenario under which this performance\nregression can arise is if an index is deleted and then created\nagain. Because IndexingMemoryController was previously statefully\ntracking the state of shards via a map of ShardIds, the new shards with\nthe same ShardIds as previously existing shards would not be detected\nand therefore their version maps would never be resized from the\ndefaults. This led to an explosion in the number of merges causing a\ndegradation in performance.\n\nCloses #15225\n",
"number": 15251,
"review_comments": [
{
"body": "Can we just absorb this method directly into `run()`?\n",
"created_at": "2015-12-04T20:08:08Z"
},
{
"body": "Done in 5341404f014fbfd0c0b67c61546df38625d9b4ad.\n",
"created_at": "2015-12-04T20:17:57Z"
}
],
"title": "IndexingMemoryController should not track shard index states"
}
|
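To make the change described in the PR above concrete, here is a minimal, self-contained Java sketch of the stateless approach: on every run the checker recomputes the set of non-idle shards from whatever shards currently exist and splits the indexing buffer among them, so a deleted-and-recreated index (new shards reusing previously seen ShardIds) is picked up automatically. The `Shard` stand-in type, field names, and buffer arithmetic are illustrative assumptions, not the actual `IndexingMemoryController` code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a stateless status checker in the spirit of the PR,
// using a simplified stand-in for IndexShard and a fixed total indexing buffer.
class StatelessStatusCheckerSketch {

    // Hypothetical minimal stand-in for IndexShard.
    static final class Shard {
        final String name;
        final boolean idle;
        Shard(String name, boolean idle) { this.name = name; this.idle = idle; }
    }

    // Re-computed on every invocation: no map of previously seen ShardIds is kept,
    // so shards of a recreated index are treated like any other live shard.
    static void run(List<Shard> availableShards, long totalIndexingBufferBytes) {
        List<Shard> activeShards = new ArrayList<>();
        for (Shard shard : availableShards) {
            if (!shard.idle) {                 // plays the role of checkIdle(shard)
                activeShards.add(shard);
            }
        }
        if (activeShards.isEmpty()) {
            return;                            // nothing to resize
        }
        long perShardBytes = totalIndexingBufferBytes / activeShards.size();
        for (Shard shard : activeShards) {
            // plays the role of updateShardBuffers(shard, ...)
            System.out.println("shard " + shard.name + " -> " + perShardBytes + " bytes");
        }
    }

    public static void main(String[] args) {
        List<Shard> shards = new ArrayList<>();
        shards.add(new Shard("idx-a[0]", false));  // active
        shards.add(new Shard("idx-a[1]", true));   // idle
        run(shards, 100L * 1024 * 1024);           // 100 MB, mirroring the test fixture in the diff below
    }
}
```

Contrast this with the removed `shardWasActive` map in the diff below, where buffer recalculation only ran when the status of an already-tracked ShardId changed.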
{
"commits": [
{
"message": "IndexingMemoryController should not track shard index states\n\nThis commit modifies IndexingMemoryController to be stateless. Rather\nthan statefully tracking the indexing status of shards,\nIndexingMemoryController can grab all available shards, check their idle\nstate, and then resize the buffers based on the number of and which\nshards are not idle.\n\nThe driver for this change is a performance regression that can arise in\nsome scenarios after #13918. One scenario under which this performance\nregression can arise is if an index is deleted and then created\nagain. Because IndexingMemoryController was previously statefully\ntracking the state of shards via a map of ShardIds, the new shards with\nthe same ShardIds as previously existing shards would not be detected\nand therefore their version maps would never be resized from the\ndefaults. This led to an explosion in the number of merges causing a\ndegradation in performance.\n\nCloses #15225"
},
{
"message": "Absorb core ShardsIndicesStatusChecker logic into body of run"
}
],
"files": [
{
"diff": "@@ -33,7 +33,6 @@\n import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n-import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.monitor.jvm.JvmInfo;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -200,23 +199,17 @@ public ByteSizeValue translogBufferSize() {\n return translogBuffer;\n }\n \n-\n- protected List<ShardId> availableShards() {\n- ArrayList<ShardId> list = new ArrayList<>();\n+ protected List<IndexShard> availableShards() {\n+ List<IndexShard> activeShards = new ArrayList<>();\n \n for (IndexService indexService : indicesService) {\n- for (IndexShard indexShard : indexService) {\n- if (shardAvailable(indexShard)) {\n- list.add(indexShard.shardId());\n+ for (IndexShard shard : indexService) {\n+ if (shardAvailable(shard)) {\n+ activeShards.add(shard);\n }\n }\n }\n- return list;\n- }\n-\n- /** returns true if shard exists and is availabe for updates */\n- protected boolean shardAvailable(ShardId shardId) {\n- return shardAvailable(getShard(shardId));\n+ return activeShards;\n }\n \n /** returns true if shard exists and is availabe for updates */\n@@ -225,19 +218,8 @@ protected boolean shardAvailable(@Nullable IndexShard shard) {\n return shard != null && shard.canIndex() && CAN_UPDATE_INDEX_BUFFER_STATES.contains(shard.state());\n }\n \n- /** gets an {@link IndexShard} instance for the given shard. returns null if the shard doesn't exist */\n- protected IndexShard getShard(ShardId shardId) {\n- IndexService indexService = indicesService.indexService(shardId.index().name());\n- if (indexService != null) {\n- IndexShard indexShard = indexService.getShardOrNull(shardId.id());\n- return indexShard;\n- }\n- return null;\n- }\n-\n /** set new indexing and translog buffers on this shard. this may cause the shard to refresh to free up heap. */\n- protected void updateShardBuffers(ShardId shardId, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n- final IndexShard shard = getShard(shardId);\n+ protected void updateShardBuffers(IndexShard shard, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n if (shard != null) {\n try {\n shard.updateBufferSize(shardIndexingBufferSize, shardTranslogBufferSize);\n@@ -246,113 +228,33 @@ protected void updateShardBuffers(ShardId shardId, ByteSizeValue shardIndexingBu\n } catch (FlushNotAllowedEngineException e) {\n // ignore\n } catch (Exception e) {\n- logger.warn(\"failed to set shard {} index buffer to [{}]\", e, shardId, shardIndexingBufferSize);\n+ logger.warn(\"failed to set shard {} index buffer to [{}]\", e, shard.shardId(), shardIndexingBufferSize);\n }\n }\n }\n \n- /** returns {@link IndexShard#getActive} if the shard exists, else null */\n- protected Boolean getShardActive(ShardId shardId) {\n- final IndexShard indexShard = getShard(shardId);\n- if (indexShard == null) {\n- return null;\n- }\n- return indexShard.getActive();\n- }\n-\n /** check if any shards active status changed, now. 
*/\n public void forceCheck() {\n statusChecker.run();\n }\n \n class ShardsIndicesStatusChecker implements Runnable {\n-\n- // True if the shard was active last time we checked\n- private final Map<ShardId,Boolean> shardWasActive = new HashMap<>();\n-\n @Override\n public synchronized void run() {\n- EnumSet<ShardStatusChangeType> changes = purgeDeletedAndClosedShards();\n-\n- updateShardStatuses(changes);\n-\n- if (changes.isEmpty() == false) {\n- // Something changed: recompute indexing buffers:\n- calcAndSetShardBuffers(\"[\" + changes + \"]\");\n- }\n- }\n-\n- /**\n- * goes through all existing shards and check whether there are changes in their active status\n- */\n- private void updateShardStatuses(EnumSet<ShardStatusChangeType> changes) {\n- for (ShardId shardId : availableShards()) {\n-\n- // Is the shard active now?\n- Boolean isActive = getShardActive(shardId);\n-\n- if (isActive == null) {\n- // shard was closed..\n- continue;\n- }\n-\n- // Was the shard active last time we checked?\n- Boolean wasActive = shardWasActive.get(shardId);\n- if (wasActive == null) {\n- // First time we are seeing this shard\n- shardWasActive.put(shardId, isActive);\n- changes.add(ShardStatusChangeType.ADDED);\n- } else if (isActive) {\n- // Shard is active now\n- if (wasActive == false) {\n- // Shard became active itself, since we last checked (due to new indexing op arriving)\n- changes.add(ShardStatusChangeType.BECAME_ACTIVE);\n- logger.debug(\"marking shard {} as active indexing wise\", shardId);\n- shardWasActive.put(shardId, true);\n- } else if (checkIdle(shardId) == Boolean.TRUE) {\n- // Make shard inactive now\n- changes.add(ShardStatusChangeType.BECAME_INACTIVE);\n-\n- shardWasActive.put(shardId, false);\n- }\n- }\n- }\n- }\n-\n- /**\n- * purge any existing statuses that are no longer updated\n- *\n- * @return the changes applied\n- */\n- private EnumSet<ShardStatusChangeType> purgeDeletedAndClosedShards() {\n- EnumSet<ShardStatusChangeType> changes = EnumSet.noneOf(ShardStatusChangeType.class);\n-\n- Iterator<ShardId> statusShardIdIterator = shardWasActive.keySet().iterator();\n- while (statusShardIdIterator.hasNext()) {\n- ShardId shardId = statusShardIdIterator.next();\n- if (shardAvailable(shardId) == false) {\n- changes.add(ShardStatusChangeType.DELETED);\n- statusShardIdIterator.remove();\n- }\n- }\n- return changes;\n- }\n-\n- private void calcAndSetShardBuffers(String reason) {\n-\n- // Count how many shards are now active:\n- int activeShardCount = 0;\n- for (Map.Entry<ShardId,Boolean> ent : shardWasActive.entrySet()) {\n- if (ent.getValue()) {\n- activeShardCount++;\n+ List<IndexShard> availableShards = availableShards();\n+ List<IndexShard> activeShards = new ArrayList<>();\n+ for (IndexShard shard : availableShards) {\n+ if (!checkIdle(shard)) {\n+ activeShards.add(shard);\n }\n }\n+ int activeShardCount = activeShards.size();\n \n // TODO: we could be smarter here by taking into account how RAM the IndexWriter on each shard\n // is actually using (using IW.ramBytesUsed), so that small indices (e.g. Marvel) would not\n // get the same indexing buffer as large indices. 
But it quickly gets tricky...\n if (activeShardCount == 0) {\n- logger.debug(\"no active shards (reason={})\", reason);\n+ logger.debug(\"no active shards\");\n return;\n }\n \n@@ -372,13 +274,10 @@ private void calcAndSetShardBuffers(String reason) {\n shardTranslogBufferSize = maxShardTranslogBufferSize;\n }\n \n- logger.debug(\"recalculating shard indexing buffer (reason={}), total is [{}] with [{}] active shards, each shard set to indexing=[{}], translog=[{}]\", reason, indexingBuffer, activeShardCount, shardIndexingBufferSize, shardTranslogBufferSize);\n+ logger.debug(\"recalculating shard indexing buffer, total is [{}] with [{}] active shards, each shard set to indexing=[{}], translog=[{}]\", indexingBuffer, activeShardCount, shardIndexingBufferSize, shardTranslogBufferSize);\n \n- for (Map.Entry<ShardId,Boolean> ent : shardWasActive.entrySet()) {\n- if (ent.getValue()) {\n- // This shard is active\n- updateShardBuffers(ent.getKey(), shardIndexingBufferSize, shardTranslogBufferSize);\n- }\n+ for (IndexShard shard : activeShards) {\n+ updateShardBuffers(shard, shardIndexingBufferSize, shardTranslogBufferSize);\n }\n }\n }\n@@ -389,14 +288,13 @@ protected long currentTimeInNanos() {\n \n /** ask this shard to check now whether it is inactive, and reduces its indexing and translog buffers if so. returns Boolean.TRUE if\n * it did deactive, Boolean.FALSE if it did not, and null if the shard is unknown */\n- protected Boolean checkIdle(ShardId shardId) {\n- String ignoreReason; // eclipse compiler does not know it is really final\n- final IndexShard shard = getShard(shardId);\n+ protected Boolean checkIdle(IndexShard shard) {\n+ String ignoreReason = null; // eclipse compiler does not know it is really final\n if (shard != null) {\n try {\n if (shard.checkIdle()) {\n logger.debug(\"marking shard {} as inactive (inactive_time[{}]) indexing wise\",\n- shardId,\n+ shard.shardId(),\n shard.getInactiveTime());\n return Boolean.TRUE;\n }\n@@ -412,15 +310,11 @@ protected Boolean checkIdle(ShardId shardId) {\n ignoreReason = \"shard not found\";\n }\n if (ignoreReason != null) {\n- logger.trace(\"ignore [{}] while marking shard {} as inactive\", ignoreReason, shardId);\n+ logger.trace(\"ignore [{}] while marking shard {} as inactive\", ignoreReason, shard.shardId());\n }\n return null;\n }\n \n- private static enum ShardStatusChangeType {\n- ADDED, DELETED, BECAME_ACTIVE, BECAME_INACTIVE\n- }\n-\n @Override\n public void onShardActive(IndexShard indexShard) {\n // At least one shard used to be inactive ie. a new write operation just showed up.",
"filename": "core/src/main/java/org/elasticsearch/indices/memory/IndexingMemoryController.java",
"status": "modified"
},
{
"diff": "@@ -22,54 +22,51 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n \n-import java.util.ArrayList;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n \n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.not;\n \n-public class IndexingMemoryControllerTests extends ESTestCase {\n+public class IndexingMemoryControllerTests extends ESSingleNodeTestCase {\n \n static class MockController extends IndexingMemoryController {\n \n final static ByteSizeValue INACTIVE = new ByteSizeValue(-1);\n \n- final Map<ShardId, ByteSizeValue> indexingBuffers = new HashMap<>();\n- final Map<ShardId, ByteSizeValue> translogBuffers = new HashMap<>();\n+ final Map<IndexShard, ByteSizeValue> indexingBuffers = new HashMap<>();\n+ final Map<IndexShard, ByteSizeValue> translogBuffers = new HashMap<>();\n \n- final Map<ShardId, Long> lastIndexTimeNanos = new HashMap<>();\n- final Set<ShardId> activeShards = new HashSet<>();\n+ final Map<IndexShard, Long> lastIndexTimeNanos = new HashMap<>();\n+ final Set<IndexShard> activeShards = new HashSet<>();\n \n long currentTimeSec = TimeValue.timeValueNanos(System.nanoTime()).seconds();\n \n public MockController(Settings settings) {\n super(Settings.builder()\n- .put(SHARD_INACTIVE_INTERVAL_TIME_SETTING, \"200h\") // disable it\n- .put(IndexShard.INDEX_SHARD_INACTIVE_TIME_SETTING, \"1ms\") // nearly immediate\n- .put(settings)\n- .build(),\n- null, null, 100 * 1024 * 1024); // fix jvm mem size to 100mb\n+ .put(SHARD_INACTIVE_INTERVAL_TIME_SETTING, \"200h\") // disable it\n+ .put(IndexShard.INDEX_SHARD_INACTIVE_TIME_SETTING, \"1ms\") // nearly immediate\n+ .put(settings)\n+ .build(),\n+ null, null, 100 * 1024 * 1024); // fix jvm mem size to 100mb\n }\n \n- public void deleteShard(ShardId id) {\n+ public void deleteShard(IndexShard id) {\n indexingBuffers.remove(id);\n translogBuffers.remove(id);\n }\n \n- public void assertBuffers(ShardId id, ByteSizeValue indexing, ByteSizeValue translog) {\n+ public void assertBuffers(IndexShard id, ByteSizeValue indexing, ByteSizeValue translog) {\n assertThat(indexingBuffers.get(id), equalTo(indexing));\n assertThat(translogBuffers.get(id), equalTo(translog));\n }\n \n- public void assertInActive(ShardId id) {\n+ public void assertInactive(IndexShard id) {\n assertThat(indexingBuffers.get(id), equalTo(INACTIVE));\n assertThat(translogBuffers.get(id), equalTo(INACTIVE));\n }\n@@ -80,36 +77,31 @@ protected long currentTimeInNanos() {\n }\n \n @Override\n- protected List<ShardId> availableShards() {\n+ protected List<IndexShard> availableShards() {\n return new ArrayList<>(indexingBuffers.keySet());\n }\n \n @Override\n- protected boolean shardAvailable(ShardId shardId) {\n- return indexingBuffers.containsKey(shardId);\n+ protected boolean shardAvailable(IndexShard shard) {\n+ return indexingBuffers.containsKey(shard);\n }\n 
\n @Override\n- protected Boolean getShardActive(ShardId shardId) {\n- return activeShards.contains(shardId);\n+ protected void updateShardBuffers(IndexShard shard, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n+ indexingBuffers.put(shard, shardIndexingBufferSize);\n+ translogBuffers.put(shard, shardTranslogBufferSize);\n }\n \n @Override\n- protected void updateShardBuffers(ShardId shardId, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n- indexingBuffers.put(shardId, shardIndexingBufferSize);\n- translogBuffers.put(shardId, shardTranslogBufferSize);\n- }\n-\n- @Override\n- protected Boolean checkIdle(ShardId shardId) {\n+ protected Boolean checkIdle(IndexShard shard) {\n final TimeValue inactiveTime = settings.getAsTime(IndexShard.INDEX_SHARD_INACTIVE_TIME_SETTING, TimeValue.timeValueMinutes(5));\n- Long ns = lastIndexTimeNanos.get(shardId);\n+ Long ns = lastIndexTimeNanos.get(shard);\n if (ns == null) {\n return null;\n } else if (currentTimeInNanos() - ns >= inactiveTime.nanos()) {\n- indexingBuffers.put(shardId, INACTIVE);\n- translogBuffers.put(shardId, INACTIVE);\n- activeShards.remove(shardId);\n+ indexingBuffers.put(shard, INACTIVE);\n+ translogBuffers.put(shard, INACTIVE);\n+ activeShards.remove(shard);\n return true;\n } else {\n return false;\n@@ -120,118 +112,126 @@ public void incrementTimeSec(int sec) {\n currentTimeSec += sec;\n }\n \n- public void simulateIndexing(ShardId shardId) {\n- lastIndexTimeNanos.put(shardId, currentTimeInNanos());\n- if (indexingBuffers.containsKey(shardId) == false) {\n+ public void simulateIndexing(IndexShard shard) {\n+ lastIndexTimeNanos.put(shard, currentTimeInNanos());\n+ if (indexingBuffers.containsKey(shard) == false) {\n // First time we are seeing this shard; start it off with inactive buffers as IndexShard does:\n- indexingBuffers.put(shardId, IndexingMemoryController.INACTIVE_SHARD_INDEXING_BUFFER);\n- translogBuffers.put(shardId, IndexingMemoryController.INACTIVE_SHARD_TRANSLOG_BUFFER);\n+ indexingBuffers.put(shard, IndexingMemoryController.INACTIVE_SHARD_INDEXING_BUFFER);\n+ translogBuffers.put(shard, IndexingMemoryController.INACTIVE_SHARD_TRANSLOG_BUFFER);\n }\n- activeShards.add(shardId);\n+ activeShards.add(shard);\n forceCheck();\n }\n }\n \n public void testShardAdditionAndRemoval() {\n+ createIndex(\"test\", Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 3).put(SETTING_NUMBER_OF_REPLICAS, 0).build());\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+\n MockController controller = new MockController(Settings.builder()\n- .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n- .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\").build());\n- final ShardId shard1 = new ShardId(\"test\", 1);\n- controller.simulateIndexing(shard1);\n- controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\").build());\n+ IndexShard shard0 = test.getShard(0);\n+ controller.simulateIndexing(shard0);\n+ controller.assertBuffers(shard0, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n \n // add another shard\n- final ShardId shard2 = new ShardId(\"test\", 2);\n- 
controller.simulateIndexing(shard2);\n+ IndexShard shard1 = test.getShard(1);\n+ controller.simulateIndexing(shard1);\n+ controller.assertBuffers(shard0, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n- controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n \n // remove first shard\n- controller.deleteShard(shard1);\n+ controller.deleteShard(shard0);\n controller.forceCheck();\n- controller.assertBuffers(shard2, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+ controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n \n // remove second shard\n- controller.deleteShard(shard2);\n+ controller.deleteShard(shard1);\n controller.forceCheck();\n \n // add a new one\n- final ShardId shard3 = new ShardId(\"test\", 3);\n- controller.simulateIndexing(shard3);\n- controller.assertBuffers(shard3, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+ IndexShard shard2 = test.getShard(2);\n+ controller.simulateIndexing(shard2);\n+ controller.assertBuffers(shard2, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n }\n \n public void testActiveInactive() {\n- MockController controller = new MockController(Settings.builder()\n- .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n- .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\")\n- .put(IndexShard.INDEX_SHARD_INACTIVE_TIME_SETTING, \"5s\")\n- .build());\n+ createIndex(\"test\", Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 2).put(SETTING_NUMBER_OF_REPLICAS, 0).build());\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n \n- final ShardId shard1 = new ShardId(\"test\", 1);\n+ MockController controller = new MockController(Settings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\")\n+ .put(IndexShard.INDEX_SHARD_INACTIVE_TIME_SETTING, \"5s\")\n+ .build());\n+\n+ IndexShard shard0 = test.getShard(0);\n+ controller.simulateIndexing(shard0);\n+ IndexShard shard1 = test.getShard(1);\n controller.simulateIndexing(shard1);\n- final ShardId shard2 = new ShardId(\"test\", 2);\n- controller.simulateIndexing(shard2);\n+ controller.assertBuffers(shard0, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n- controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n \n // index into both shards, move the clock and see that they are still active\n+ controller.simulateIndexing(shard0);\n controller.simulateIndexing(shard1);\n- controller.simulateIndexing(shard2);\n \n controller.incrementTimeSec(10);\n controller.forceCheck();\n \n // both shards now inactive\n- controller.assertInActive(shard1);\n- controller.assertInActive(shard2);\n+ controller.assertInactive(shard0);\n+ controller.assertInactive(shard1);\n \n // index into one shard only, see it becomes active\n- 
controller.simulateIndexing(shard1);\n- controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n- controller.assertInActive(shard2);\n+ controller.simulateIndexing(shard0);\n+ controller.assertBuffers(shard0, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n+ controller.assertInactive(shard1);\n \n controller.incrementTimeSec(3); // increment but not enough to become inactive\n controller.forceCheck();\n- controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n- controller.assertInActive(shard2);\n+ controller.assertBuffers(shard0, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n+ controller.assertInactive(shard1);\n \n controller.incrementTimeSec(3); // increment some more\n controller.forceCheck();\n- controller.assertInActive(shard1);\n- controller.assertInActive(shard2);\n+ controller.assertInactive(shard0);\n+ controller.assertInactive(shard1);\n \n // index some and shard becomes immediately active\n- controller.simulateIndexing(shard2);\n- controller.assertInActive(shard1);\n- controller.assertBuffers(shard2, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n+ controller.simulateIndexing(shard1);\n+ controller.assertInactive(shard0);\n+ controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n }\n \n public void testMinShardBufferSizes() {\n MockController controller = new MockController(Settings.builder()\n- .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n- .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n- .put(IndexingMemoryController.MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n- .put(IndexingMemoryController.MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, \"40kb\").build());\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n+ .put(IndexingMemoryController.MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, \"40kb\").build());\n \n assertTwoActiveShards(controller, new ByteSizeValue(6, ByteSizeUnit.MB), new ByteSizeValue(40, ByteSizeUnit.KB));\n }\n \n public void testMaxShardBufferSizes() {\n MockController controller = new MockController(Settings.builder()\n- .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n- .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n- .put(IndexingMemoryController.MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, \"3mb\")\n- .put(IndexingMemoryController.MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, \"10kb\").build());\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n+ .put(IndexingMemoryController.MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, \"3mb\")\n+ .put(IndexingMemoryController.MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, \"10kb\").build());\n \n assertTwoActiveShards(controller, new ByteSizeValue(3, ByteSizeUnit.MB), new ByteSizeValue(10, ByteSizeUnit.KB));\n }\n \n public void testRelativeBufferSizes() {\n MockController controller = new MockController(Settings.builder()\n- .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"50%\")\n- .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.5%\")\n- .build());\n+ 
.put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"50%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.5%\")\n+ .build());\n \n assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(50, ByteSizeUnit.MB)));\n assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n@@ -240,34 +240,35 @@ public void testRelativeBufferSizes() {\n \n public void testMinBufferSizes() {\n MockController controller = new MockController(Settings.builder()\n- .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"0.001%\")\n- .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.001%\")\n- .put(IndexingMemoryController.MIN_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n- .put(IndexingMemoryController.MIN_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"0.001%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.001%\")\n+ .put(IndexingMemoryController.MIN_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MIN_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n \n assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(6, ByteSizeUnit.MB)));\n assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n }\n \n public void testMaxBufferSizes() {\n MockController controller = new MockController(Settings.builder()\n- .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"90%\")\n- .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"90%\")\n- .put(IndexingMemoryController.MAX_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n- .put(IndexingMemoryController.MAX_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"90%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"90%\")\n+ .put(IndexingMemoryController.MAX_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MAX_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n \n assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(6, ByteSizeUnit.MB)));\n assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n }\n \n protected void assertTwoActiveShards(MockController controller, ByteSizeValue indexBufferSize, ByteSizeValue translogBufferSize) {\n- final ShardId shard1 = new ShardId(\"test\", 1);\n+ createIndex(\"test\", Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 2).put(SETTING_NUMBER_OF_REPLICAS, 0).build());\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n+ IndexService test = indicesService.indexService(\"test\");\n+ IndexShard shard0 = test.getShard(0);\n+ controller.simulateIndexing(shard0);\n+ IndexShard shard1 = test.getShard(1);\n controller.simulateIndexing(shard1);\n- final ShardId shard2 = new ShardId(\"test\", 2);\n- controller.simulateIndexing(shard2);\n+ controller.assertBuffers(shard0, indexBufferSize, translogBufferSize);\n controller.assertBuffers(shard1, indexBufferSize, translogBufferSize);\n- controller.assertBuffers(shard2, indexBufferSize, translogBufferSize);\n-\n }\n-\n }",
"filename": "core/src/test/java/org/elasticsearch/indices/memory/IndexingMemoryControllerTests.java",
"status": "modified"
}
]
}
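As a worked example of the per-shard split exercised by the `IndexingMemoryControllerTests` diff above (numbers taken directly from the test assertions, not added behaviour): with `indices.memory.index_buffer_size` at 10mb and the translog buffer at 100kb, a single active shard gets indexing=10mb and translog=64kb (the per-shard translog cap), two active shards get 5mb and 50kb each, and once a shard is removed the surviving shard grows back to 10mb/64kb on the next check; the min/max per-shard settings then clamp these derived values, as in `testMinShardBufferSizes` and `testMaxShardBufferSizes`.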
|
{
"body": "Recreation on 2.1.0:\n\n```\nPUT t\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"_id\": {\n \"type\": \"string\"\n }\n }\n },\n \"two\": {}\n }\n}\n```\n\nThis returns OK, but in the logs:\n\n```\n[2015-11-27 10:36:51,848][WARN ][indices.cluster ] [Primevil] [t] failed to add mapping [one], source [{\"one\":{\"properties\":{\"_id\":{\"type\":\"string\"}}}}]\njava.lang.IllegalArgumentException: Mapper for [_id] conflicts with existing mapping in other types:\n[mapper [_id] cannot be changed from type [_id] to [string]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:364)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:315)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:261)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:418)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:372)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:177)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:494)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-11-27 10:36:51,849][WARN ][indices.cluster ] [Primevil] [[t][4]] marking and sending shard failed due to [master [{Primevil}{FZZ2xKQATOSVT6qMODFe2w}{127.0.0.1}{127.0.0.1:9300}] marked shard as started, but shard has not been created, mark shard as failed]\n[2015-11-27 10:36:51,849][WARN ][cluster.action.shard ] [Primevil] [t][4] received shard failed for [t][4], node[FZZ2xKQATOSVT6qMODFe2w], [P], v[2], s[STARTED], a[id=SfSzHTRlSAawEvHtC7cMLg], indexUUID [bcTOLPxjTM2cB95wPE0sJQ], message [master [{Primevil}{FZZ2xKQATOSVT6qMODFe2w}{127.0.0.1}{127.0.0.1:9300}] marked shard as started, but shard has not been created, mark shard as failed], failure [Unknown]\n```\n",
"comments": [
{
"body": "Im experiencing this issue. Is there a known workaround? How do I prevent data loss?\n",
"created_at": "2015-12-04T08:16:54Z"
},
{
"body": "We're working on a bug fix. There isn't a workaround that I'm aware of. You will need to reindex this data to a new index.\n",
"created_at": "2015-12-04T09:03:38Z"
},
{
"body": "Delaying to 2.2.0: I'm on the fence to put such a change in a bugfix release.\n",
"created_at": "2015-12-08T16:12:36Z"
},
{
"body": "Shouldn't dynamic templates recognise these meta-properties (i.e., \"_all\", \"_id\", \"_parent\", \"_routing\", \"_timestamp\", \"_ttl\") and handle them properly?\n\nI have the following template:\n\n```\n{\n \"shop_order_v1\" : {\n \"order\" : 0,\n \"template\" : \"shop_order_v1*\",\n \"settings\" : { },\n \"mappings\" : {\n \"shop_order\" : {\n \"dynamic_templates\" : [ {\n \"store_generic\" : {\n \"mapping\" : {\n \"index\" : \"not_analyzed\",\n \"store\" : true,\n \"doc_values\" : true\n },\n \"match\" : \"*\"\n }\n } ],\n \"properties\" : {\n \"timestamp\" : {\n \"store\" : true,\n \"format\" : \"yyyy-MM-dd HH:mm:ss\",\n \"doc_values\" : true,\n \"type\" : \"date\"\n },\n \"started\" : {\n \"store\" : true,\n \"format\" : \"yyyy-MM-dd HH:mm:ss\",\n \"doc_values\" : true,\n \"type\" : \"date\"\n },\n \"geo_point\" : {\n \"doc_values\" : true,\n \"type\" : \"geo_point\"\n }\n }\n }\n },\n \"aliases\" : {\n \"shop_order_v1\" : { }\n }\n }\n}\n```\n\nI'm hitting this error when I'm trying to index a document that contains the \"_id\" property, e.g.\n\n```\n{\n \"_id\": \"foo\"\n}\n```\n\nAt the moment I'm working around this by removing the \"_id\" property from the docs before indexing (bulk), but this used to work ok on 1.7.4.\n",
"created_at": "2016-01-02T12:44:39Z"
},
{
"body": "@krisb78 the ability to embed metafields in the source of a document has been removed. It required double parsing of the document (once on the coordinating node and once on the data node)\n",
"created_at": "2016-01-10T17:42:51Z"
},
{
"body": "Thought so - thanks, I'll amend my code.\n",
"created_at": "2016-01-10T21:41:44Z"
},
{
"body": "This is a breaking change that is not mentioned in your documentation...\n",
"created_at": "2016-03-02T01:33:55Z"
}
],
"number": 15057,
"title": "Mapping `properties._id` causes conflicts and shard failures"
}
|
{
"body": "There are two ways that a field can be defined twice:\n- by reusing the name of a meta mapper in the root object (`_id`, `_routing`,\n etc.)\n- by defining a sub-field both explicitly in the mapping and through the code\n in a field mapper (like ExternalMapper does)\n\nThis commit adds new checks in order to make sure this never happens.\n\nClose #15057\n",
"number": 15243,
"review_comments": [],
"title": "Validate that fields are defined only once."
}
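To make the first failure mode concrete: with these checks, re-declaring a meta field such as `_id` under `properties` is rejected when the mapping is merged instead of surfacing later as failed shards (the behaviour reported in #15057). A minimal sketch in console form (index and type names are illustrative; the error text is the one asserted in the PR's `UpdateMappingTests`):

```
PUT test
{
  "mappings": {
    "type": {
      "properties": {
        "_id": { "type": "string" }
      }
    }
  }
}
```

This request is now expected to fail with an `IllegalArgumentException` along the lines of `Field [_id] is defined twice in [type]`.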
|
{
"commits": [
{
"message": "Validate that fields are defined only once.\n\nThere are two ways that a field can be defined twice:\n - by reusing the name of a meta mapper in the root object (`_id`, `_routing`,\n etc.)\n - by defining a sub-field both explicitly in the mapping and through the code\n in a field mapper (like ExternalMapper does)\n\nThis commit adds new checks in order to make sure this never happens.\n\nClose #15057"
}
],
"files": [
{
"diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.ElasticsearchGenerationException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -92,7 +91,7 @@ public class MapperService extends AbstractIndexComponent implements Closeable {\n private final ReleasableLock mappingWriteLock = new ReleasableLock(mappingLock.writeLock());\n \n private volatile FieldTypeLookup fieldTypes;\n- private volatile ImmutableOpenMap<String, ObjectMapper> fullPathObjectMappers = ImmutableOpenMap.of();\n+ private volatile Map<String, ObjectMapper> fullPathObjectMappers = new HashMap<>();\n private boolean hasNested = false; // updated dynamically to true when a nested object is added\n \n private final DocumentMapperParser documentParser;\n@@ -300,8 +299,41 @@ private boolean assertSerialization(DocumentMapper mapper) {\n return true;\n }\n \n+ private void checkFieldUniqueness(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers) {\n+ final Set<String> objectFullNames = new HashSet<>();\n+ for (ObjectMapper objectMapper : objectMappers) {\n+ final String fullPath = objectMapper.fullPath();\n+ if (objectFullNames.add(fullPath) == false) {\n+ throw new IllegalArgumentException(\"Object mapper [\" + fullPath + \"] is defined twice in mapping for type [\" + type + \"]\");\n+ }\n+ }\n+\n+ if (indexSettings.getIndexVersionCreated().before(Version.V_3_0_0)) {\n+ // Before 3.0 some metadata mappers are also registered under the root object mapper\n+ // So we avoid false positives by deduplicating mappers\n+ // given that we check exact equality, this would still catch the case that a mapper\n+ // is defined under the root object \n+ Collection<FieldMapper> uniqueFieldMappers = Collections.newSetFromMap(new IdentityHashMap<>());\n+ uniqueFieldMappers.addAll(fieldMappers);\n+ fieldMappers = uniqueFieldMappers;\n+ }\n+\n+ final Set<String> fieldNames = new HashSet<>();\n+ for (FieldMapper fieldMapper : fieldMappers) {\n+ final String name = fieldMapper.name();\n+ if (objectFullNames.contains(name)) {\n+ throw new IllegalArgumentException(\"Field [\" + name + \"] is defined both as an object and a field in [\" + type + \"]\");\n+ } else if (fieldNames.add(name) == false) {\n+ throw new IllegalArgumentException(\"Field [\" + name + \"] is defined twice in [\" + type + \"]\");\n+ }\n+ }\n+ }\n+\n protected void checkMappersCompatibility(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers, boolean updateAllTypes) {\n assert mappingLock.isWriteLockedByCurrentThread();\n+\n+ checkFieldUniqueness(type, objectMappers, fieldMappers);\n+\n for (ObjectMapper newObjectMapper : objectMappers) {\n ObjectMapper existingObjectMapper = fullPathObjectMappers.get(newObjectMapper.fullPath());\n if (existingObjectMapper != null) {\n@@ -313,6 +345,13 @@ protected void checkMappersCompatibility(String type, Collection<ObjectMapper> o\n }\n }\n }\n+\n+ for (FieldMapper fieldMapper : fieldMappers) {\n+ if (fullPathObjectMappers.containsKey(fieldMapper.name())) {\n+ throw new IllegalArgumentException(\"Field [{}] is defined as a field in mapping [\" + fieldMapper.name() + \"] but this name is already used for an object in other types\");\n+ }\n+ }\n+\n fieldTypes.checkCompatibility(type, fieldMappers, 
updateAllTypes);\n }\n \n@@ -330,14 +369,14 @@ protected Tuple<Collection<ObjectMapper>, Collection<FieldMapper>> checkMappersC\n \n protected void addMappers(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers) {\n assert mappingLock.isWriteLockedByCurrentThread();\n- ImmutableOpenMap.Builder<String, ObjectMapper> fullPathObjectMappers = ImmutableOpenMap.builder(this.fullPathObjectMappers);\n+ Map<String, ObjectMapper> fullPathObjectMappers = new HashMap<>(this.fullPathObjectMappers);\n for (ObjectMapper objectMapper : objectMappers) {\n fullPathObjectMappers.put(objectMapper.fullPath(), objectMapper);\n if (objectMapper.nested().isNested()) {\n hasNested = true;\n }\n }\n- this.fullPathObjectMappers = fullPathObjectMappers.build();\n+ this.fullPathObjectMappers = Collections.unmodifiableMap(fullPathObjectMappers);\n this.fieldTypes = this.fieldTypes.copyAndAddAll(type, fieldMappers);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -27,10 +27,12 @@\n \n import java.io.IOException;\n import java.util.Arrays;\n+import java.util.Collections;\n import java.util.Comparator;\n import java.util.HashMap;\n-import java.util.List;\n+import java.util.HashSet;\n import java.util.Map;\n+import java.util.Set;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.unmodifiableMap;\n@@ -41,7 +43,9 @@\n */\n public final class Mapping implements ToXContent {\n \n- public static final List<String> LEGACY_INCLUDE_IN_OBJECT = Arrays.asList(\"_all\", \"_id\", \"_parent\", \"_routing\", \"_timestamp\", \"_ttl\");\n+ // Set of fields that were included into the root object mapper before 2.0\n+ public static final Set<String> LEGACY_INCLUDE_IN_OBJECT = Collections.unmodifiableSet(new HashSet<>(\n+ Arrays.asList(\"_all\", \"_id\", \"_parent\", \"_routing\", \"_timestamp\", \"_ttl\")));\n \n final Version indexCreated;\n final RootObjectMapper root;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/Mapping.java",
"status": "modified"
},
{
"diff": "@@ -87,7 +87,7 @@ public void testExternalValuesWithMultifield() throws Exception {\n .startObject(\"f\")\n .field(\"type\", ExternalMapperPlugin.EXTERNAL_UPPER)\n .startObject(\"fields\")\n- .startObject(\"f\")\n+ .startObject(\"g\")\n .field(\"type\", \"string\")\n .field(\"store\", \"yes\")\n .startObject(\"fields\")\n@@ -107,7 +107,7 @@ public void testExternalValuesWithMultifield() throws Exception {\n refresh();\n \n SearchResponse response = client().prepareSearch(\"test-idx\")\n- .setQuery(QueryBuilders.termQuery(\"f.f.raw\", \"FOO BAR\"))\n+ .setQuery(QueryBuilders.termQuery(\"f.g.raw\", \"FOO BAR\"))\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo((long) 1));",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/externalvalues/ExternalValuesMapperIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -202,6 +202,51 @@ public void testConflictNewTypeUpdate() throws Exception {\n assertNull(mapperService.documentMapper(\"type2\").mapping().root().getMapper(\"foo\"));\n }\n \n+ public void testReuseMetaField() throws IOException {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"_id\").field(\"type\", \"string\").endObject()\n+ .endObject().endObject().endObject();\n+ MapperService mapperService = createIndex(\"test\", Settings.settingsBuilder().build()).mapperService();\n+\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(mapping.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"Field [_id] is defined twice in [type]\"));\n+ }\n+\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(mapping.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"Field [_id] is defined twice in [type]\"));\n+ }\n+ }\n+\n+ public void testReuseMetaFieldBackCompat() throws IOException {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"_id\").field(\"type\", \"string\").endObject()\n+ .endObject().endObject().endObject();\n+ // the logic is different for 2.x indices since they record some meta mappers (including _id)\n+ // in the root object\n+ Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_1_0).build();\n+ MapperService mapperService = createIndex(\"test\", settings).mapperService();\n+\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(mapping.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"Field [_id] is defined twice in [type]\"));\n+ }\n+\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(mapping.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"Field [_id] is defined twice in [type]\"));\n+ }\n+ }\n+\n public void testIndexFieldParsingBackcompat() throws IOException {\n IndexService indexService = createIndex(\"test\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build());\n XContentBuilder indexMapping = XContentFactory.jsonBuilder();",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingTests.java",
"status": "modified"
},
{
"diff": "@@ -392,8 +392,7 @@ public void testStoredFieldsWithoutSource() throws Exception {\n createIndex(\"test\");\n client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForYellowStatus().execute().actionGet();\n \n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n- .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"_source\").field(\"enabled\", false).endObject().startObject(\"properties\")\n .startObject(\"byte_field\").field(\"type\", \"byte\").field(\"store\", \"yes\").endObject()\n .startObject(\"short_field\").field(\"type\", \"short\").field(\"store\", \"yes\").endObject()\n .startObject(\"integer_field\").field(\"type\", \"integer\").field(\"store\", \"yes\").endObject()\n@@ -556,8 +555,7 @@ public void testFieldsPulledFromFieldData() throws Exception {\n createIndex(\"test\");\n client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForYellowStatus().execute().actionGet();\n \n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n- .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"_source\").field(\"enabled\", false).endObject().startObject(\"properties\")\n .startObject(\"string_field\").field(\"type\", \"string\").endObject()\n .startObject(\"byte_field\").field(\"type\", \"byte\").endObject()\n .startObject(\"short_field\").field(\"type\", \"short\").endObject()",
"filename": "modules/lang-groovy/src/test/java/org/elasticsearch/messy/tests/SearchFieldsTests.java",
"status": "modified"
}
]
}
|
{
"body": "You should specify that `{\"copy_to\": \"top.child\"}` needs `top` field defined first, otherwise it would throw an error when indexing.\n\nAlso, I'm completely dissatisfied with the documentation on the site. \n",
"comments": [
{
"body": "Hi @celesteking \n\nI'd say this is a bug. Recreation:\n\n```\nPUT test \n{\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"copy_to\": \"top.child\"\n }\n }\n }\n }\n}\n\nPUT test/test/1\n{\n \"foo\": \"bar\"\n}\n```\n\nreturns:\n\n```\n{\n \"error\": \"MapperParsingException[attempt to copy value to non-existing object [top.child]]\",\n \"status\": 400\n}\n```\n\n> Also, I'm completely dissatisfied with the documentation on the site.\n\nYou're welcome to participate in this open source project by sending PRs to improve the documentation, or the code.\n",
"created_at": "2015-05-25T12:38:16Z"
},
{
"body": "There is actually a TODO in the code just before that exception is thrown, stating that we should create the parent object dynamically. I agree it is a bug.\n",
"created_at": "2015-05-26T06:32:40Z"
}
],
"number": 11237,
"title": "copy_to needs top level object "
}
|
{
"body": "Tentative fix for #111237.\nFixes #11237\n",
"number": 15216,
"review_comments": [
{
"body": "why is it needed?\n",
"created_at": "2015-12-03T19:20:26Z"
},
{
"body": "I think we should throw an exception if `mapper.nested() != Nested.NO`, as this would not make sense (and I'm pretty sure some users have dynamic templates to make all their objects nested by default).\n",
"created_at": "2015-12-03T19:24:46Z"
},
{
"body": "Hmm, shouldn't we look up the object mapper using `context.path().fullPathAsText(\"\")` instead?\n",
"created_at": "2015-12-03T19:31:22Z"
},
{
"body": "So maybe we need a test where two levels of nested objects already exist. This would catch this bug?\n",
"created_at": "2015-12-03T19:40:14Z"
},
{
"body": "You want to disallow the creation of dynamic nested objects throw the copy_to ? It makes sense and if the user wants to force the creation of a nested objects for the copy_to target he just need to add the definition in the mapping. Bottom line (but maybe I am missing something) it seems weird to define a copy_to without defining the type of the target in the mapping (lazy users ? ;) ).\n",
"created_at": "2015-12-04T09:59:06Z"
},
{
"body": "> You want to disallow the creation of dynamic nested objects throw the copy_to ?\n\nExactly.\n\n> if the user wants to force the creation of a nested objects for the copy_to target he just need to add the definition in the mapping\n\nThis belongs to a different issue, but I'm also wondering that copying to a nested field, even if it already exists should not be allowed as we have no idea in which nested doc the value should be copied?\n\nTo be clear, if you are currently parsing a.b.c and copying to a.b.d and a.b is a nested object field, it's fine since the source and target and under the same nested scope (so adding to the current nested doc that is being built is fine). However, if you're copying from a.b.c to a.d.e and a.d is nested, then we have no way to know to which nested document of a.d.e the value should go, so we should reject it?\n\n(I'm starting this discussion here because your code made me think about this potential problem, but please don't try to address it in the current PR, I think this should go into a dedicated PR.)\n\n> Bottom line (but maybe I am missing something) it seems weird to define a copy_to without defining the type of the target in the mapping (lazy users ? ;) ).\n\nI tend to agree. I think the idea is that we already support dynamic field creation for fields, so we want to be consistent and do it with objects as well.\n",
"created_at": "2015-12-04T12:17:51Z"
},
{
"body": "> This belongs to a different issue, but I'm also wondering that copying to a nested field, even if it already exists should not be allowed as we have no idea in which nested doc the value should be copied?\n\nAgreed. See https://github.com/elastic/elasticsearch/issues/14659\n",
"created_at": "2015-12-04T12:21:54Z"
},
{
"body": "Why do we need a good github search when we have @clintongormley :)\n",
"created_at": "2015-12-04T12:26:41Z"
}
],
"title": "Fix copy_to when the target is a dynamic object field."
}
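To visualize the nested case raised in the review discussion: a hypothetical mapping like the sketch below (index and field names are invented for illustration) copies from `a.b.c` into `a.d.e` while `a.d` is `nested`. Because a document may contain several `a.d` entries, there is no single nested document that should receive the copied value, which is why the reviewers lean towards rejecting this combination; that question is tracked separately in #14659 rather than in this PR.

```
PUT copy-to-nested-example
{
  "mappings": {
    "doc": {
      "properties": {
        "a": {
          "properties": {
            "b": {
              "properties": {
                "c": { "type": "string", "copy_to": "a.d.e" }
              }
            },
            "d": {
              "type": "nested",
              "properties": {
                "e": { "type": "string" }
              }
            }
          }
        }
      }
    }
  }
}
```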
|
{
"commits": [
{
"message": "Fix copy_to when the target is a dynamic object field.\nFixes #11237"
}
],
"files": [
{
"diff": "@@ -28,8 +28,6 @@\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.ReleasableLock;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.mapper.core.DateFieldMapper.DateFieldType;\n@@ -47,7 +45,6 @@\n import java.util.Collections;\n import java.util.HashSet;\n import java.util.List;\n-import java.util.Map;\n import java.util.Set;\n \n /** A parser for documents, given mappings from a DocumentMapper */\n@@ -712,37 +709,64 @@ private static void parseCopy(String field, ParseContext context) throws IOExcep\n // The path of the dest field might be completely different from the current one so we need to reset it\n context = context.overridePath(new ContentPath(0));\n \n+ String[] paths = Strings.splitStringToArray(field, '.');\n+ String fieldName = paths[paths.length-1];\n ObjectMapper mapper = context.root();\n- String objectPath = \"\";\n- String fieldPath = field;\n- int posDot = field.lastIndexOf('.');\n- if (posDot > 0) {\n- objectPath = field.substring(0, posDot);\n- context.path().add(objectPath);\n- mapper = context.docMapper().objectMappers().get(objectPath);\n- fieldPath = field.substring(posDot + 1);\n- }\n- if (mapper == null) {\n- //TODO: Create an object dynamically?\n- throw new MapperParsingException(\"attempt to copy value to non-existing object [\" + field + \"]\");\n- }\n- ObjectMapper update = parseDynamicValue(context, mapper, fieldPath, context.parser().currentToken());\n- assert update != null; // we are parsing a dynamic value so we necessarily created a new mapping\n-\n- // propagate the update to the root\n- while (objectPath.length() > 0) {\n- String parentPath = \"\";\n+ ObjectMapper[] mappers = new ObjectMapper[paths.length-1];\n+ if (paths.length > 1) {\n ObjectMapper parent = context.root();\n- posDot = objectPath.lastIndexOf('.');\n- if (posDot > 0) {\n- parentPath = objectPath.substring(0, posDot);\n- parent = context.docMapper().objectMappers().get(parentPath);\n+ for (int i = 0; i < paths.length-1; i++) {\n+ mapper = context.docMapper().objectMappers().get(context.path().fullPathAsText(paths[i]));\n+ if (mapper == null) {\n+ // One mapping is missing, check if we are allowed to create a dynamic one.\n+ ObjectMapper.Dynamic dynamic = parent.dynamic();\n+ if (dynamic == null) {\n+ dynamic = dynamicOrDefault(context.root().dynamic());\n+ }\n+\n+ switch (dynamic) {\n+ case STRICT:\n+ throw new StrictDynamicMappingException(parent.fullPath(), paths[i]);\n+ case TRUE:\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, paths[i], \"object\");\n+ if (builder == null) {\n+ // if this is a non root object, then explicitly set the dynamic behavior if set\n+ if (!(parent instanceof RootObjectMapper) && parent.dynamic() != ObjectMapper.Defaults.DYNAMIC) {\n+ ((ObjectMapper.Builder) builder).dynamic(parent.dynamic());\n+ }\n+ builder = MapperBuilders.object(paths[i]).enabled(true).pathType(parent.pathType());\n+ }\n+ Mapper.BuilderContext builderContext = new Mapper.BuilderContext(context.indexSettings(), context.path());\n+ mapper = (ObjectMapper) builder.build(builderContext);\n+ if (mapper.nested() != ObjectMapper.Nested.NO) {\n+ throw new MapperParsingException(\"It is forbidden to create dynamic nested objects 
([\" + context.path().fullPathAsText(paths[i]) + \"]) through `copy_to`\");\n+ }\n+ break;\n+ case FALSE:\n+ // Maybe we should log something to tell the user that the copy_to is ignored in this case.\n+ break;\n+ default:\n+ throw new AssertionError(\"Unexpected dynamic type \" + dynamic);\n+\n+ }\n+ }\n+ context.path().add(paths[i]);\n+ mappers[i] = mapper;\n+ parent = mapper;\n }\n- if (parent == null) {\n- throw new IllegalStateException(\"[\" + objectPath + \"] has no parent for path [\" + parentPath + \"]\");\n+ }\n+ ObjectMapper update = parseDynamicValue(context, mapper, fieldName, context.parser().currentToken());\n+ assert update != null; // we are parsing a dynamic value so we necessarily created a new mapping\n+\n+ if (paths.length > 1) {\n+ for (int i = paths.length - 2; i >= 0; i--) {\n+ ObjectMapper parent = context.root();\n+ if (i > 0) {\n+ parent = mappers[i-1];\n+ }\n+ assert parent != null;\n+ update = parent.mappingUpdate(update);\n }\n- update = parent.mappingUpdate(update);\n- objectPath = parentPath;\n }\n context.addDynamicMappingsUpdate(update);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n \n import java.io.IOException;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;\n \n@@ -68,6 +69,25 @@ public void testDynamicTemplateCopyTo() throws Exception {\n \n }\n \n+ public void testDynamicObjectCopyTo() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"doc\").startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"root.top.child\")\n+ .endObject()\n+ .endObject().endObject().endObject().string();\n+ assertAcked(\n+ client().admin().indices().prepareCreate(\"test-idx\")\n+ .addMapping(\"doc\", mapping)\n+ );\n+ client().prepareIndex(\"test-idx\", \"doc\", \"1\")\n+ .setSource(\"foo\", \"bar\")\n+ .get();\n+ client().admin().indices().prepareRefresh(\"test-idx\").execute().actionGet();\n+ SearchResponse response = client().prepareSearch(\"test-idx\")\n+ .setQuery(QueryBuilders.termQuery(\"root.top.child\", \"bar\")).get();\n+ assertThat(response.getHits().totalHits(), equalTo(1L));\n+ }\n \n private XContentBuilder createDynamicTemplateMapping() throws IOException {\n return XContentFactory.jsonBuilder().startObject().startObject(\"doc\")",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/copyto/CopyToMapperIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -167,27 +167,126 @@ public void testCopyToFieldsInnerObjectParsing() throws Exception {\n \n }\n \n- public void testCopyToFieldsNonExistingInnerObjectParsing() throws Exception {\n- String mapping = jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n-\n+ public void testCopyToDynamicInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\")\n .startObject(\"copy_test\")\n- .field(\"type\", \"string\")\n- .field(\"copy_to\", \"very.inner.field\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.inner.field\")\n .endObject()\n-\n- .endObject().endObject().endObject().string();\n+ .endObject()\n+ .endObject().endObject().string();\n \n DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n BytesReference json = jsonBuilder().startObject()\n .field(\"copy_test\", \"foo\")\n+ .field(\"new_field\", \"bar\")\n .endObject().bytes();\n \n+ ParseContext.Document doc = docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ assertThat(doc.getFields(\"copy_test\").length, equalTo(1));\n+ assertThat(doc.getFields(\"copy_test\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"very.inner.field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"very.inner.field\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"new_field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"new_field\")[0].stringValue(), equalTo(\"bar\"));\n+ }\n+\n+ public void testCopyToDynamicInnerInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.far.inner.field\")\n+ .endObject()\n+ .startObject(\"very\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"far\")\n+ .field(\"type\", \"object\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = jsonBuilder().startObject()\n+ .field(\"copy_test\", \"foo\")\n+ .field(\"new_field\", \"bar\")\n+ .endObject().bytes();\n+\n+ ParseContext.Document doc = docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ assertThat(doc.getFields(\"copy_test\").length, equalTo(1));\n+ assertThat(doc.getFields(\"copy_test\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"very.far.inner.field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"very.far.inner.field\")[0].stringValue(), equalTo(\"foo\"));\n+\n+ assertThat(doc.getFields(\"new_field\").length, equalTo(1));\n+ assertThat(doc.getFields(\"new_field\")[0].stringValue(), equalTo(\"bar\"));\n+ }\n+\n+ public void testCopyToStrictDynamicInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .field(\"dynamic\", \"strict\")\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.inner.field\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = jsonBuilder().startObject()\n+ .field(\"copy_test\", 
\"foo\")\n+ .endObject().bytes();\n+\n+ try {\n+ docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ fail();\n+ } catch (MapperParsingException ex) {\n+ assertThat(ex.getMessage(), startsWith(\"mapping set to strict, dynamic introduction of [very] within [type1] is not allowed\"));\n+ }\n+ }\n+\n+ public void testCopyToInnerStrictDynamicInnerObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.far.field\")\n+ .endObject()\n+ .startObject(\"very\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"far\")\n+ .field(\"type\", \"object\")\n+ .field(\"dynamic\", \"strict\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = jsonBuilder().startObject()\n+ .field(\"copy_test\", \"foo\")\n+ .endObject().bytes();\n+\n try {\n docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n fail();\n } catch (MapperParsingException ex) {\n- assertThat(ex.getMessage(), startsWith(\"attempt to copy value to non-existing object\"));\n+ assertThat(ex.getMessage(), startsWith(\"mapping set to strict, dynamic introduction of [field] within [very.far] is not allowed\"));\n }\n }\n \n@@ -337,6 +436,41 @@ public void testCopyToNestedField() throws Exception {\n }\n }\n \n+ public void testCopyToDynamicNestedObjectParsing() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type1\")\n+ .startArray(\"dynamic_templates\")\n+ .startObject()\n+ .startObject(\"objects\")\n+ .field(\"match_mapping_type\", \"object\")\n+ .startObject(\"mapping\")\n+ .field(\"type\", \"nested\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endArray()\n+ .startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"very.inner.field\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ BytesReference json = jsonBuilder().startObject()\n+ .field(\"copy_test\", \"foo\")\n+ .field(\"new_field\", \"bar\")\n+ .endObject().bytes();\n+\n+ try {\n+ docMapper.parse(\"test\", \"type1\", \"1\", json).rootDoc();\n+ fail();\n+ } catch (MapperParsingException ex) {\n+ assertThat(ex.getMessage(), startsWith(\"It is forbidden to create dynamic nested objects ([very]) through `copy_to`\"));\n+ }\n+ }\n+\n private void assertFieldValue(Document doc, String field, Number... expected) {\n IndexableField[] values = doc.getFields(field);\n if (values == null) {",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/copyto/CopyToMapperTests.java",
"status": "modified"
}
]
}
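The `testDynamicObjectCopyTo` integration test above exercises the new behaviour end to end; expressed as console requests (index, type and field names mirror the Java test, and this is only an illustrative transcription, not part of the PR), the same scenario is roughly:

```
PUT test-idx
{
  "mappings": {
    "doc": {
      "properties": {
        "foo": { "type": "string", "copy_to": "root.top.child" }
      }
    }
  }
}

PUT test-idx/doc/1
{ "foo": "bar" }

POST test-idx/_refresh

GET test-idx/_search
{ "query": { "term": { "root.top.child": "bar" } } }
```

With the parent objects now created dynamically during `copy_to`, the search is expected to return the document instead of the old `attempt to copy value to non-existing object` failure.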
|
{
"body": "_From @gpaul on November 17, 2015 10:43_\n\nThe following set of steps against a fresh elasticsearch 2.0.0 instance with v3.0.2 of this plugin installed shows that copy_to isn't working for the name field. I doubt it is working for the other metadata fields, either.\n\nYou can copy/paste this in your shell.\n\n```\n# create a mapping\ncurl -XPOST 'http://localhost:9200/test_copyto' -d '{\n \"mappings\": {\n \"person\": {\n \"properties\": {\n \"copy_dst\": { \"type\": \"string\" },\n \"doc\": {\n \"type\": \"attachment\",\n \"fields\": {\n \"name\": { \"copy_to\": \"copy_dst\" }\n }\n }\n }\n }\n }\n}'\n## => {\"acknowledged\":true}\n\n\n# index a document, specifying a document name\ncurl -XPOST 'http://localhost:9200/test_copyto/person/1' -d '{\n \"doc\": {\n \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n \"_name\": \"my-attachment-name.doc\"\n }\n}'\n## => {\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_version\":1,\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true}\n\n\n# search for the document by its contents\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n{\n \"query\": {\n \"query_string\": {\n \"default_field\": \"doc.content\",\n \"fields\": [\"copy_dst\", \"doc.content\"],\n \"query\": \"ipsum\"\n }\n }\n}\n'\n## => {\"took\":5,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\": \"total\":1,\"max_score\":0.04119441,\"hits\":[{\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_score\":0.04119441,\"_source\":{\n# \"doc\": {\n# \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n# \"_name\": \"my-attachment-name.doc\"\n# }\n#}}]}}\n\n\n# search for the document by its name\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n{\n \"query\": {\n \"query_string\": {\n \"default_field\": \"doc.name\",\n \"fields\": [\"doc.name\"],\n \"query\": \"my-test.doc\"\n }\n }\n}\n'\n## => {\"took\":5,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":0.02250402,\"hits\":[{\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_score\":0.02250402,\"_source\":{\n# \"doc\": {\n# \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n# \"_name\": \"my-attachment-name.doc\"\n# }\n#}}]}}\n\n# search for the document by the copy_dst field\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n{\n \"query\": {\n \"query_string\": {\n \"default_field\": \"copy_dst\",\n \"fields\": [\"copy_dst\"],\n \"query\": \"my-test.doc\"\n }\n }\n}\n'\n## => {\"took\":1,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":0,\"max_score\":null,\"hits\":[]}}\n```\n\n_Copied from original issue: elastic/elasticsearch-mapper-attachments#190_\n",
"comments": [
{
"body": "_From @gpaul on November 23, 2015 13:37_\n\nPing. Should I open this issue against the main elasticsearch repository now that this plugin is moving there?\n",
"created_at": "2015-11-23T16:10:55Z"
},
{
"body": "Did you try the same script with elasticsearch 1.7? I'd like to know if it's a regression or if it has always been there.\n\nI know that copy_to feature is supposed to work for the extracted text but I don't think it worked for metadata.\n\nIf I'm right (so it's not an issue but more a feature request), then you can open it in elasticsearch repo.\nIf I'm wrong (so it's a regression), then keep it here.\n",
"created_at": "2015-11-23T16:10:55Z"
},
{
"body": "_From @gpaul on November 23, 2015 14:58_\n\nIt seems like a regression:\nelasticsearch 1.7.0 with mapper-attachments 2.7.1\nyields\n\n```\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n {\n \"query\": {\n \"query_string\": {\n \"default_field\": \"copy_dst\",\n \"fields\": [\"copy_dst\"],\n \"query\": \"my-test.doc\"\n }\n }\n }\n '\n#{\"took\":3,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":0.02250402,\"hits\":[{\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_score\":0.02250402,\"_source\":{\n# \"doc\": {\n# \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n# \"_name\": \"my-test.doc\"\n# }\n#}}]}}\n```\n",
"created_at": "2015-11-23T16:10:55Z"
},
{
"body": "Thank you @gpaul \n",
"created_at": "2015-11-23T16:10:56Z"
},
{
"body": "_From @hhoechtl on November 23, 2015 15:2_\n\nIt's also not working with the .content field\n",
"created_at": "2015-11-23T16:10:56Z"
},
{
"body": "I created a test for elasticsearch 1.7 and it is working well in 1.x series:\n\n``` java\n@Test\npublic void testCopyToMetaData() throws Exception {\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/integration/simple/copy-to-metadata.json\");\n byte[] txt = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/text-in-english.txt\");\n\n client().admin().indices().putMapping(putMappingRequest(\"test\").type(\"person\").source(mapping)).actionGet();\n\n index(\"test\", \"person\", jsonBuilder().startObject()\n .startObject(\"file\")\n .field(\"_content\", txt)\n .field(\"_name\", \"name\")\n .endObject()\n .endObject());\n refresh();\n\n CountResponse countResponse = client().prepareCount(\"test\").setQuery(queryStringQuery(\"name\").defaultField(\"file.name\")).execute().get();\n assertThatWithError(countResponse.getCount(), equalTo(1l));\n\n countResponse = client().prepareCount(\"test\").setQuery(queryStringQuery(\"name\").defaultField(\"copy\")).execute().get();\n assertThatWithError(countResponse.getCount(), equalTo(1l));\n}\n```\n\nI created a test for 2.\\* branches which demonstrates the regression from 2.0.\nIt can be reused to fix this issue: https://github.com/elastic/elasticsearch-mapper-attachments/commit/30aeda668a9090f28929a74d59b9bf81e1738161:\n\n``` yml\n\"Copy To Feature\":\n\n - do:\n indices.create:\n index: test\n body:\n mappings:\n doc:\n properties:\n copy_dst:\n type: string\n doc:\n type: attachment\n fields:\n name:\n copy_to: copy_dst\n - do:\n cluster.health:\n wait_for_status: yellow\n\n - do:\n index:\n index: test\n type: doc\n id: 1\n body:\n doc:\n _content: \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\"\n _name: \"name\"\n\n - do:\n indices.refresh: {}\n\n - do:\n search:\n index: test\n body:\n query:\n match:\n doc.content: \"ipsum\"\n\n - match: { hits.total: 1 }\n\n - do:\n search:\n index: test\n body:\n query:\n match:\n doc.name: \"name\"\n\n - match: { hits.total: 1 }\n\n - do:\n search:\n index: test\n body:\n query:\n match:\n copy_dst: \"name\"\n\n - match: { hits.total: 1 }\n```\n\n@rjernst Could you give a look please?\n",
"created_at": "2015-11-23T16:25:24Z"
},
{
"body": "@dadoonet One odd thing I see is using a copy_to from within a multi field. Seems like we should disallow that? We removed support here:\nhttps://github.com/elastic/elasticsearch/pull/10802/files#diff-ab54166cc098ea6f03350e68a8689a5bL432\n\n The copy's are now handled outside of the mappers, while before there was a lot of spaghetti sharing between object mappers and field mappers that made document parsing complex. If we want to add it back, we will probably need a good refactoring in the way multi fields and copy_tos are handled. The problem is `copy_to` inside a multi field is essentially a nested `copy_to` since multi fields are conceptually just a copy to (see my notes in #10802).\n",
"created_at": "2015-11-30T22:22:23Z"
},
{
"body": "@clintongormley WDYT? \n\nLet me sum up the discussion.\n\nBefore 2.0, we were able to support:\n\n``` js\nPUT /test/person/_mapping\n{\n \"person\": {\n \"properties\": {\n \"file\": {\n \"type\": \"attachment\",\n \"fields\": {\n \"content\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n },\n \"name\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n },\n \"author\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n },\n \"title\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n }\n }\n },\n \"copy\": {\n \"type\": \"string\"\n }\n }\n }\n}\n```\n\nIt means that extracted `content`, `title`, `name` (for filename) and `author` can be indexed in another field `copy` where the user can run a \"fulltext\" search.\n\nFrom 2.0, this is not supported anymore for the reasons @rjernst described.\n\nShould we document that mapper attachments does not support anymore `copy_to` feature on extracted/provided nested fields?\nNote that we don't support [a global `copy_to` on the `BASE64` content](https://github.com/elastic/elasticsearch-mapper-attachments/issues/100) itself.\n\nOr should we try to implement such a thing _only_ for mapper attachments plugin. I can't see another use case today. May be some other community plugins would like to have it but really unsure here.\n\nIMO, users can always run search on multiple fields at the same time so instead of searching in `copy` in the above example, they could search on `file.content`, `file.title`, `file.name` and `file.author`. So not supporting this feature anymore does not sound as a big deal to me.\n\nThoughts?\n",
"created_at": "2015-12-01T08:05:34Z"
},
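To make the workaround mentioned above concrete, here is a minimal sketch (not part of the discussion, names follow the example mapping) that queries the extracted sub-fields directly with the 2.x Java client instead of relying on a `copy` catch-all field. It assumes an already-constructed `Client` instance.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

// Sketch of the "search multiple fields instead of copy_to" workaround.
public class MultiFieldSearchSketch {
    public static SearchResponse search(Client client, String text) {
        // multi_match over the sub-fields the attachment mapper extracts
        return client.prepareSearch("test")
                .setQuery(QueryBuilders.multiMatchQuery(text,
                        "file.content", "file.title", "file.name", "file.author"))
                .get();
    }
}
```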
{
"body": "My feeling is that, long term, we should remove the `attachment` field type. Instead, we should move Tika to being a processor in the node ingest API which will read the binary attachment and add the contents and any meta fields to the `_source` field itself. The result is that attachments stop being magical and become just like any other field which can be configured in standard ways.\n\nWith this goal in mind, it doesn't make sense to add complicated (and likely buggy) hacks to fix this regression in 2.x. But, we should let the user know that copy-to on multi-fields is not supported: we should throw an exception at mapping time instead of silently ignoring the problem.\n",
"created_at": "2015-12-01T11:03:18Z"
},
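As a rough illustration of the extraction step such an ingest processor would perform, here is a hedged sketch that runs Tika's public API directly over a base64 payload and produces plain values that could then live in `_source` like any other field. This is not an Elasticsearch API; it assumes the `tika-parsers` dependency is on the classpath.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Base64;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.metadata.TikaCoreProperties;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

// Extract text and metadata from a base64-encoded attachment with Tika.
public class TikaExtractionSketch {
    public static void extract(String base64Content) throws Exception {
        byte[] bytes = Base64.getDecoder().decode(base64Content);
        AutoDetectParser parser = new AutoDetectParser();
        BodyContentHandler handler = new BodyContentHandler(-1); // -1 = no write limit
        Metadata metadata = new Metadata();
        try (InputStream in = new ByteArrayInputStream(bytes)) {
            parser.parse(in, handler, metadata);
        }
        // These would become ordinary document fields, searchable and copy_to-able like any other.
        String content = handler.toString();
        String title = metadata.get(TikaCoreProperties.TITLE);
        String author = metadata.get(TikaCoreProperties.CREATOR);
        System.out.println(title + " / " + author + ": " + content.trim());
    }
}
```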
{
"body": "So...I took a look at the copy_to and multi_fields and tried to throw an exception when we encounter copy_to in multi_fields - we could do that I guess. But I have the suspicion that re-adding the copy_to to multi_fields is just a matter of shifting three lines. I made a pr here so you can see what I mean: #15152 Tests pass but I might be missing something.\n",
"created_at": "2015-12-01T16:24:25Z"
},
{
"body": "That would be an awesome news @brwe. Was not expecting that. Are the tests I wrote for mapper attachments plugin work as well? Is that what you mean by `Tests pass`?\n",
"created_at": "2015-12-01T16:29:27Z"
},
{
"body": "@dadoonet I mean the tests that were there already and the tests I added in the pr. I suspect that your test would pass as well but did not check.\n\nHowever, we ( @rjernst @clintongormley and I ) had a chat yesterday about this in here is the outcome:\nDoing this fix is not a good idea for several reasons:\n1. it would introduce a dependency between DocumentParser and FieldMapper again which was removed with much effort in #10802\n2. long term we want to replace the implementation of multi fields with `copy_to` mechanism anyway\n3. chains of `copy_to` ala\n\n```\n\"a\" : {\n\"type\": \"string\",\n\"copy_to\": \"b\"\n},\n\"b\": {\n\"type\": \"string\",\n\"copy_to\": \"a\"\n}\n```\n\nshould not be possible because they add complexity in code and usage.\n\nThis reduces flexibility of mappings and how they can transform data. However, the consensus is that elasticsearch is not the right place to perform these kind of transformations anyway and these kind of operations should move to external tools such as the planned node ingest plugin https://github.com/elastic/elasticsearch/issues/14049\n\nApplying the fix I proposed would just delay the removal of the feature and therefore we think we should not do it.\n",
"created_at": "2015-12-02T11:13:08Z"
},
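The `a`/`b` example above is the cycle case that motivates rejecting chains outright. As a standalone illustration (plain Java, nothing from the Elasticsearch codebase, all names made up), treating `copy_to` declarations as a directed graph of field names makes the check straightforward:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Detect cycles in a hypothetical field -> copy_to targets graph via depth-first search.
public class CopyToCycleCheck {
    public static boolean hasCycle(Map<String, List<String>> copyTo) {
        Set<String> finished = new HashSet<>();
        for (String field : copyTo.keySet()) {
            if (visit(field, copyTo, new HashSet<>(), finished)) {
                return true;
            }
        }
        return false;
    }

    private static boolean visit(String field, Map<String, List<String>> copyTo,
                                 Set<String> onPath, Set<String> finished) {
        if (onPath.contains(field)) {
            return true;                  // the field is reachable from itself: a copy_to cycle
        }
        if (!finished.add(field)) {
            return false;                 // already explored from an earlier start, no cycle here
        }
        onPath.add(field);
        for (String target : copyTo.getOrDefault(field, List.of())) {
            if (visit(target, copyTo, onPath, finished)) {
                return true;
            }
        }
        onPath.remove(field);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> chain = Map.of("a", List.of("b"), "b", List.of("a"));
        System.out.println(hasCycle(chain)); // true for the a <-> b example above
    }
}
```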
{
"body": "It seems like you have removed a feature without providing a better alternative. The use of copy_to for custom '_all' fields is well-documented and very useful.\n",
"created_at": "2015-12-02T11:47:32Z"
},
{
"body": "@brwe My 2c on your discussion \n1. it would introduce a dependency between DocumentParser and FieldMapper again which was removed with much effort in #10802\n -> if such cleanup code broke something - then perhaps it was merged too soon. A temporary patch to fix a broken feature until such time as it can be properly reengineered is not unreasonable.\n2. long term we want to replace the implementation of multi fields with copy_to mechanism anyway\n -> perfect - the process I would like to see in that case is: a deprecation note in 2.1 followed by a better alternative and migration path users can follow in preparation for 2.2 - keeping in mind that mappings from older versions of elasticsearch need to be upgraded. There are people who used ES as their primary data store - I'm not one of those, but having to rebuilding indexes when new ES versions are released is unfortunate. \n3. chains of copy_to ... should not be possible because they add complexity in code and usage.\n -> by all means prohibit them. I didn't know this was possible in earlier versions as it seems too easy to define cycles.\n",
"created_at": "2015-12-02T11:59:25Z"
},
{
"body": "@gpaul If our resources were unlimited, then I would agree with you. However, in an effort to clean up a massive code base and to remove complexity, we have to do it incrementally and sometimes we have to remove things that worked before. The mapping cleanup was 5 long months of work, and there is still a good deal more to be done. It brought some huge improvements (just see how many issues were linked to https://github.com/elastic/elasticsearch/issues/8870) but meant that we couldn't support everything that we supported before.\n\nEvery hack that we add into the code adds technical debt and increases the likelihood of introducing new bugs. We'd much rather focus our limited resources on making the system clean, stable, reliable, and maintainable.\n\nThis is why I don't want to make this change. The workaround for your case is to search across multiple fields.\n",
"created_at": "2015-12-02T13:04:42Z"
},
{
"body": "[woops, I was logged in as a friend of mine when I posted this comment a minute ago. I've removed it and this is a repost as myself ><]\n\nThat's fair. Thanks for all the hard work.\n\nAs I'll have to redesign my mappings anyway, should I avoid copy_to in its entirety going forward or is it just the multi-field case that was causing pain? I'd like to avoid features that are on their way out.\n",
"created_at": "2015-12-02T15:02:50Z"
},
{
"body": "@gpaul It is just copy_to in a multi field. \n",
"created_at": "2015-12-02T15:05:47Z"
},
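Since the distinction above ("just copy_to in a multi field") is easy to miss, here is a small sketch, in the same `XContentBuilder` style as this PR's test, of a mapping shape that remains supported: `copy_to` declared directly on a field, outside any `fields` block. The field names (`title`, `catch_all`) are made up for illustration.

```java
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentBuilder;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

// Builds a mapping where copy_to sits directly on a field (still allowed).
public class SupportedCopyToSketch {
    public static XContentBuilder supportedMapping() throws IOException {
        return jsonBuilder()
                .startObject()
                    .startObject("type")
                        .startObject("properties")
                            .startObject("title")
                                .field("type", "string")
                                .field("copy_to", "catch_all") // fine: not inside a multi field
                            .endObject()
                            .startObject("catch_all")
                                .field("type", "string")
                            .endObject()
                        .endObject()
                    .endObject()
                .endObject();
    }
}
```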
{
"body": "Got it, thanks.\n",
"created_at": "2015-12-02T15:11:26Z"
},
{
"body": "Hi, I think I've this problem as well.\n\nFollowing the documentation (?) here: https://github.com/elastic/elasticsearch-mapper-attachments#copy-to-feature the `copyTo` on the `content` should work, but I cannot manage to. Isn't that the correct documentation?\n\nI want to make use of this feature to copy the extracted content into a custom `_all` field. Any hints how to solve this?\n\nIs there a content extraction service/endpoint I could make use of to index prepared content, so that I don't have to rely on copyTo?\n\n_Edit_: These docs mention the `copyTo` feature as well: https://www.elastic.co/guide/en/elasticsearch/plugins/current/mapper-attachments-copy-to.html\n\nThanks!\n",
"created_at": "2016-03-24T19:19:03Z"
}
],
"number": 14946,
"title": "copy_to of mapper attachments metadata field isn't working"
}
|
{
"body": "Copy to within multi field is ignored from 2.0 on, see #10802.\nInstead of just ignoring it, we should throw an exception if this\nis found in the mapping when a mapping is added. For already\nexisting indices we should at least log a warning.\n\nrelated to #14946 \n\nI made it so that a warning is logged each time a mapping is parsed that has a copy_to in a multi field. \nBut the copy_to is not removed from the mapping so now the exception is logged each time the mapping is parsed. Is that OK?\n",
"number": 15213,
"review_comments": [
{
"body": "Can you make the parseCopyFields happen in an else here? Then the setting will not be serialized back out because the multi field will never have a copyTo member.\n",
"created_at": "2015-12-03T23:33:49Z"
},
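A heavily simplified, hypothetical paraphrase of the two options in this review exchange (none of this is the real `TypeParsers` code): parsing the targets and merely warning keeps the ignored `copy_to` around so it can serialize back out, whereas the suggested `else` drops it at parse time.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for the copy_to handling discussed in this review thread.
public class CopyToParseSketch {
    public static List<String> parseCopyTo(boolean withinMultiField, List<String> configuredTargets) {
        if (withinMultiField) {
            // warn-and-drop variant: nothing is kept, so nothing is serialized back out
            System.err.println("copy_to inside a multi field is ignored");
            return new ArrayList<>();
        }
        // normal case: keep the configured targets
        return new ArrayList<>(configuredTargets);
    }
}
```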
{
"body": "Thank you for moving this file into core! It will make unit testing mappers much easier!\n\nI don't see why we need to make this complicated with a bunch of variants to create a mapper service. At least can we have just 2? Simple (takes a temp dir and index settings), and complicated that passes in IndicesModule (already constructed and any types added, or you could also take null to mean \"create it\") and version.\n",
"created_at": "2015-12-03T23:40:40Z"
},
{
"body": "Can we just explicitly test these? Let's have a test that the field is removed, no error, in the old ones, and another test that tan exception is thrown for newer indices. Randomizing the version is fine, but let's keep it to randomize within the versions we expect to have a particular behavior, so that we keep full coverage of what we are testing on every test run.\n",
"created_at": "2015-12-03T23:42:43Z"
},
{
"body": "I can do that but it might be weird with rolling upgrades. The mapping will have the copy_to until an upgraded node becomes master and then this entry suddenly vanishes and the warning stops. Seems like an undesirable behavior. \n@clintongormley and I discussed this yesterday and we thought that it is not a big deal to keep the copy_to because it will only warn whenever the mapping is changed so it will not spam the log with warnings in any case. \nIf one wants to get rid of the warning they can still remove it from the mapping by updating it explicitly.\n",
"created_at": "2015-12-04T11:22:59Z"
},
{
"body": "Hmm thinking about this again... If you're doing a rolling upgrade, then when the shard is assigned to a new node (as part of the upgrade) it should update the mapping and send the new mapping to the master. So we should only get this warning once (I think for the first shard?).\n",
"created_at": "2015-12-04T12:19:05Z"
},
{
"body": "The mappings will get merged on master so if the master is an old node and still has the copy_to and the new mapping without the copy_to comes in it will be merged with the copy_to from master and the resulting mapping will still have it. Only when a new node becomes master it will actually remove the copy_to. \nMy concern is actually not so much that the log message appears too often but more that it appears in any case at least once but then when a user sees it and checks the mapping then the copy_to might or might not be there depending on if the master was upgraded already or not. \n",
"created_at": "2015-12-04T13:16:39Z"
}
],
"title": "throw exception if a copy_to is within a multi field"
}
|
{
"commits": [
{
"message": "throw exception if a copy_to is within a multi field\n\nCopy to within multi field is ignored from 2.0 on, see #10802.\nInstead of just ignoring it, we should throw an exception if this\nis found in the mapping when a mapping is added. For already\nexisting indices we should at least log a warning.\n\nrelated to #14946"
},
{
"message": "simplify MapperTestUtils to only have two methods"
},
{
"message": "explicit test for version with and without exception"
}
],
"files": [
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.mapper;\n \n-import com.google.common.collect.ImmutableMap;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseFieldMatcher;\n@@ -134,6 +133,24 @@ public Version indexVersionCreated() {\n public ParseFieldMatcher parseFieldMatcher() {\n return parseFieldMatcher;\n }\n+\n+ public boolean isWithinMultiField() { return false; }\n+\n+ protected Map<String, TypeParser> typeParsers() { return typeParsers; }\n+\n+ public ParserContext createMultiFieldContext(ParserContext in) {\n+ return new MultiFieldParserContext(in) {\n+ @Override\n+ public boolean isWithinMultiField() { return true; }\n+ };\n+ }\n+\n+ class MultiFieldParserContext extends ParserContext {\n+ MultiFieldParserContext(ParserContext in) {\n+ super(in.type(), in.analysisService, in.similarityLookupService(), in.mapperService(), in.typeParsers(), in.indexVersionCreated(), in.parseFieldMatcher());\n+ }\n+ }\n+\n }\n \n Mapper.Builder<?,?> parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/Mapper.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.loader.SettingsLoader;\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n@@ -184,11 +185,12 @@ public static void parseNumberField(NumberFieldMapper.Builder builder, String na\n public static void parseField(FieldMapper.Builder builder, String name, Map<String, Object> fieldNode, Mapper.TypeParser.ParserContext parserContext) {\n NamedAnalyzer indexAnalyzer = builder.fieldType().indexAnalyzer();\n NamedAnalyzer searchAnalyzer = builder.fieldType().searchAnalyzer();\n+ Version indexVersionCreated = parserContext.indexVersionCreated();\n for (Iterator<Map.Entry<String, Object>> iterator = fieldNode.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n final String propName = Strings.toUnderscoreCase(entry.getKey());\n final Object propNode = entry.getValue();\n- if (propName.equals(\"index_name\") && parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {\n+ if (propName.equals(\"index_name\") && indexVersionCreated.before(Version.V_2_0_0_beta1)) {\n builder.indexName(propNode.toString());\n iterator.remove();\n } else if (propName.equals(\"store\")) {\n@@ -242,7 +244,7 @@ public static void parseField(FieldMapper.Builder builder, String name, Map<Stri\n iterator.remove();\n } else if (propName.equals(\"omit_term_freq_and_positions\")) {\n final IndexOptions op = nodeBooleanValue(propNode) ? IndexOptions.DOCS : IndexOptions.DOCS_AND_FREQS_AND_POSITIONS;\n- if (parserContext.indexVersionCreated().onOrAfter(Version.V_1_0_0_RC2)) {\n+ if (indexVersionCreated.onOrAfter(Version.V_1_0_0_RC2)) {\n throw new ElasticsearchParseException(\"'omit_term_freq_and_positions' is not supported anymore - use ['index_options' : 'docs'] instead\");\n }\n // deprecated option for BW compat\n@@ -252,8 +254,8 @@ public static void parseField(FieldMapper.Builder builder, String name, Map<Stri\n builder.indexOptions(nodeIndexOptionValue(propNode));\n iterator.remove();\n } else if (propName.equals(\"analyzer\") || // for backcompat, reading old indexes, remove for v3.0\n- propName.equals(\"index_analyzer\") && parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {\n- \n+ propName.equals(\"index_analyzer\") && indexVersionCreated.before(Version.V_2_0_0_beta1)) {\n+\n NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString());\n if (analyzer == null) {\n throw new MapperParsingException(\"analyzer [\" + propNode.toString() + \"] not found for field [\" + name + \"]\");\n@@ -270,10 +272,10 @@ public static void parseField(FieldMapper.Builder builder, String name, Map<Stri\n } else if (propName.equals(\"include_in_all\")) {\n builder.includeInAll(nodeBooleanValue(propNode));\n iterator.remove();\n- } else if (propName.equals(\"postings_format\") && parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {\n+ } else if (propName.equals(\"postings_format\") && indexVersionCreated.before(Version.V_2_0_0_beta1)) {\n // ignore for old indexes\n iterator.remove();\n- } else if (propName.equals(\"doc_values_format\") && parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {\n+ } else if (propName.equals(\"doc_values_format\") && 
indexVersionCreated.before(Version.V_2_0_0_beta1)) {\n // ignore for old indexes\n iterator.remove();\n } else if (propName.equals(\"similarity\")) {\n@@ -284,14 +286,23 @@ public static void parseField(FieldMapper.Builder builder, String name, Map<Stri\n builder.fieldDataSettings(settings);\n iterator.remove();\n } else if (propName.equals(\"copy_to\")) {\n+ if (parserContext.isWithinMultiField()) {\n+ if (indexVersionCreated.after(Version.V_2_1_0) ||\n+ (indexVersionCreated.after(Version.V_2_0_1) && indexVersionCreated.before(Version.V_2_1_0))) {\n+ throw new MapperParsingException(\"copy_to in multi fields is not allowed. Found the copy_to in field [\" + name + \"] which is within a multi field.\");\n+ } else {\n+ ESLoggerFactory.getLogger(\"mapping [\" + parserContext.type() + \"]\").warn(\"Found a copy_to in field [\" + name + \"] which is within a multi field. This feature has been removed and the copy_to will be ignored.\");\n+ // we still parse this, otherwise the message will only appear once and the copy_to removed. After that it will appear again. Better to have it always.\n+ }\n+ }\n parseCopyFields(propNode, builder);\n iterator.remove();\n }\n }\n \n if (indexAnalyzer == null) {\n if (searchAnalyzer != null) {\n- // If the index was created before 2.0 then we are trying to upgrade the mappings so use the default indexAnalyzer \n+ // If the index was created before 2.0 then we are trying to upgrade the mappings so use the default indexAnalyzer\n // instead of throwing an exception so the user is able to upgrade\n if (parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {\n indexAnalyzer = parserContext.analysisService().defaultIndexAnalyzer();\n@@ -307,6 +318,7 @@ public static void parseField(FieldMapper.Builder builder, String name, Map<Stri\n }\n \n public static boolean parseMultiField(FieldMapper.Builder builder, String name, Mapper.TypeParser.ParserContext parserContext, String propName, Object propNode) {\n+ parserContext = parserContext.createMultiFieldContext(parserContext);\n if (propName.equals(\"path\") && parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {\n builder.multiFieldPathType(parsePathType(name, propNode.toString()));\n return true;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,102 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+\n+package org.elasticsearch.index.mapper.core;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.VersionUtils;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.hamcrest.core.IsEqual.equalTo;\n+\n+public class MultiFieldCopyToMapperTests extends ESTestCase {\n+\n+ public void testExceptionForCopyToInMultiFields() throws IOException {\n+ XContentBuilder mapping = createMappinmgWithCopyToInMultiField();\n+ Tuple<List<Version>, List<Version>> versionsWithAndWithoutExpectedExceptions = versionsWithAndWithoutExpectedExceptions();\n+\n+ // first check that for newer versions we throw exception if copy_to is found withing multi field\n+ Version indexVersion = randomFrom(versionsWithAndWithoutExpectedExceptions.v1());\n+ MapperService mapperService = MapperTestUtils.newMapperService(createTempDir(), Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, indexVersion).build());\n+ try {\n+ mapperService.parse(\"type\", new CompressedXContent(mapping.string()), true);\n+ fail(\"Parsing should throw an exception because the mapping contains a copy_to in a multi field\");\n+ } catch (MapperParsingException e) {\n+ assertThat(e.getMessage(), equalTo(\"copy_to in multi fields is not allowed. 
Found the copy_to in field [c] which is within a multi field.\"));\n+ }\n+\n+ // now test that with an older version the pasring just works\n+ indexVersion = randomFrom(versionsWithAndWithoutExpectedExceptions.v2());\n+ mapperService = MapperTestUtils.newMapperService(createTempDir(), Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, indexVersion).build());\n+ mapperService.parse(\"type\", new CompressedXContent(mapping.string()), true);\n+ }\n+\n+ private static XContentBuilder createMappinmgWithCopyToInMultiField() throws IOException {\n+ XContentBuilder mapping = jsonBuilder();\n+ mapping.startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"a\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .startObject(\"b\")\n+ .field(\"type\", \"string\")\n+ .startObject(\"fields\")\n+ .startObject(\"c\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"a\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ return mapping;\n+ }\n+\n+ // returs a tuple where\n+ // v1 is a list of versions for which we expect an excpetion when a copy_to in multi fields is found and\n+ // v2 is older versions where we throw no exception and we just log a warning\n+ private static Tuple<List<Version>, List<Version>> versionsWithAndWithoutExpectedExceptions() {\n+ List<Version> versionsWithException = new ArrayList<>();\n+ List<Version> versionsWithoutException = new ArrayList<>();\n+ for (Version version : VersionUtils.allVersions()) {\n+ if (version.after(Version.V_2_1_0) ||\n+ (version.after(Version.V_2_0_1) && version.before(Version.V_2_1_0))) {\n+ versionsWithException.add(version);\n+ } else {\n+ versionsWithoutException.add(version);\n+ }\n+ }\n+ return new Tuple<>(versionsWithException, versionsWithoutException);\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/core/MultiFieldCopyToMapperTests.java",
"status": "added"
},
{
"diff": "@@ -429,3 +429,11 @@ to use the old default of 0. This was done to prevent phrase queries from\n matching across different values of the same term unexpectedly. Specifically,\n 100 was chosen to cause phrase queries with slops up to 99 to match only within\n a single value of a field.\n+\n+==== copy_to and multi fields\n+\n+A <<copy-to,copy_to>> within a <<multi-fields,multi field>> is ignored from version 2.0 on. With any version after\n+2.1 or 2.0.1 creating a mapping that has a copy_to within a multi field will result \n+in an exception.\n+\n+",
"filename": "docs/reference/migration/migrate_2_0/mapping.asciidoc",
"status": "modified"
},
{
"diff": "@@ -22,13 +22,20 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.IndicesModule;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.Before;\n \n public class AttachmentUnitTestCase extends ESTestCase {\n \n protected Settings testSettings;\n- \n+\n+ protected static IndicesModule getIndicesModuleWithRegisteredAttachmentMapper() {\n+ IndicesModule indicesModule = new IndicesModule();\n+ indicesModule.registerMapper(AttachmentMapper.CONTENT_TYPE, new AttachmentMapper.TypeParser());\n+ return indicesModule;\n+ }\n+\n @Before\n public void createSettings() throws Exception {\n testSettings = Settings.builder()",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/AttachmentUnitTestCase.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n import org.junit.Before;\n \n@@ -37,7 +38,7 @@ public class DateAttachmentMapperTests extends AttachmentUnitTestCase {\n \n @Before\n public void setupMapperParser() throws Exception {\n- mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY).documentMapperParser();\n+ mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n }\n \n public void testSimpleMappings() throws Exception {",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/DateAttachmentMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,7 @@\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.mapper.attachments.AttachmentMapper;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n \n import java.io.IOException;\n \n@@ -42,7 +42,7 @@\n public class EncryptedDocMapperTests extends AttachmentUnitTestCase {\n \n public void testMultipleDocsEncryptedLast() throws IOException {\n- DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY).documentMapperParser();\n+ DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n \n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/encrypted/test-mapping.json\");\n DocumentMapper docMapper = mapperParser.parse(mapping);\n@@ -72,7 +72,7 @@ public void testMultipleDocsEncryptedLast() throws IOException {\n }\n \n public void testMultipleDocsEncryptedFirst() throws IOException {\n- DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY).documentMapperParser();\n+ DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/encrypted/test-mapping.json\");\n DocumentMapper docMapper = mapperParser.parse(mapping);\n byte[] html = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/htmlWithValidDateMeta.html\");\n@@ -103,9 +103,8 @@ public void testMultipleDocsEncryptedFirst() throws IOException {\n public void testMultipleDocsEncryptedNotIgnoringErrors() throws IOException {\n try {\n DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(),\n- Settings.builder()\n- .put(\"index.mapping.attachment.ignore_errors\", false)\n- .build()).documentMapperParser();\n+ Settings.builder().put(\"index.mapping.attachment.ignore_errors\", false).build(),\n+ getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n \n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/encrypted/test-mapping.json\");\n DocumentMapper docMapper = mapperParser.parse(mapping);",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/EncryptedDocMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -24,8 +24,8 @@\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n-import org.elasticsearch.mapper.attachments.AttachmentMapper;\n import org.junit.Before;\n \n import java.io.IOException;\n@@ -50,9 +50,8 @@ public void setupMapperParser() throws IOException {\n \n public void setupMapperParser(boolean langDetect) throws IOException {\n DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(),\n- Settings.settingsBuilder()\n- .put(\"index.mapping.attachment.detect_language\", langDetect)\n- .build()).documentMapperParser();\n+ Settings.settingsBuilder().put(\"index.mapping.attachment.detect_language\", langDetect).build(),\n+ getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/language/language-mapping.json\");\n docMapper = mapperParser.parse(mapping);\n ",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/LanguageDetectionAttachmentMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,7 @@\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.mapper.attachments.AttachmentMapper;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n \n import java.io.IOException;\n \n@@ -44,7 +44,7 @@ protected void checkMeta(String filename, Settings otherSettings, Long expectedD\n .put(this.testSettings)\n .put(otherSettings)\n .build();\n- DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), settings).documentMapperParser();\n+ DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), settings, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n \n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/metadata/test-mapping.json\");\n DocumentMapper docMapper = mapperParser.parse(mapping);",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/MetadataMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -27,8 +27,8 @@\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n-import org.elasticsearch.mapper.attachments.AttachmentMapper;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n import org.junit.Before;\n@@ -48,7 +48,7 @@ public class MultifieldAttachmentMapperTests extends AttachmentUnitTestCase {\n \n @Before\n public void setupMapperParser() throws Exception {\n- mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY).documentMapperParser();\n+ mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n \n }\n \n@@ -91,7 +91,7 @@ public void testExternalValues() throws Exception {\n String bytes = Base64.encodeBytes(originalText.getBytes(StandardCharsets.ISO_8859_1));\n threadPool = new ThreadPool(\"testing-only\");\n \n- MapperService mapperService = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY);\n+ MapperService mapperService = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper());\n \n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/multifield/multifield-mapping.json\");\n ",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/MultifieldAttachmentMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.ParseContext;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n import org.junit.Test;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n@@ -42,7 +43,7 @@\n public class SimpleAttachmentMapperTests extends AttachmentUnitTestCase {\n \n public void testSimpleMappings() throws Exception {\n- DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY).documentMapperParser();\n+ DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/simple/test-mapping.json\");\n DocumentMapper docMapper = mapperParser.parse(mapping);\n byte[] html = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/testXHTML.html\");\n@@ -69,9 +70,8 @@ public void testSimpleMappings() throws Exception {\n \n public void testContentBackcompat() throws Exception {\n DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(),\n- Settings.builder()\n- .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id)\n- .build()).documentMapperParser();\n+ Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build(),\n+ getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/simple/test-mapping.json\");\n DocumentMapper docMapper = mapperParser.parse(mapping);\n byte[] html = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/testXHTML.html\");\n@@ -86,7 +86,7 @@ public void testContentBackcompat() throws Exception {\n * test for https://github.com/elastic/elasticsearch-mapper-attachments/issues/179\n */\n public void testSimpleMappingsWithAllFields() throws Exception {\n- DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY).documentMapperParser();\n+ DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/simple/test-mapping-all-fields.json\");\n DocumentMapper docMapper = mapperParser.parse(mapping);\n byte[] html = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/testXHTML.html\");\n@@ -132,7 +132,7 @@ public void testMapperErrorWithDotTwoLevels169() throws Exception {\n .endObject();\n \n byte[] mapping = mappingBuilder.bytes().toBytes();\n- MapperService mapperService = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY);\n+ MapperService mapperService = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper());\n DocumentMapper docMapper = mapperService.parse(\"mail\", new CompressedXContent(mapping), true);\n // this should not throw an exception\n mapperService.parse(\"mail\", new CompressedXContent(docMapper.mapping().toString()), true);",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/SimpleAttachmentMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,7 @@\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.mapper.attachments.AttachmentMapper;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n \n import java.io.FileNotFoundException;\n import java.io.IOException;\n@@ -45,6 +45,7 @@\n import static org.elasticsearch.common.cli.CliToolConfig.Builder.option;\n import static org.elasticsearch.common.io.Streams.copy;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.mapper.attachments.AttachmentUnitTestCase.getIndicesModuleWithRegisteredAttachmentMapper;\n import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath;\n \n /**\n@@ -86,7 +87,7 @@ protected TikaRunner(Terminal terminal, String url, Integer size, String base64t\n this.size = size;\n this.url = url;\n this.base64text = base64text;\n- DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(PathUtils.get(\".\"), Settings.EMPTY).documentMapperParser(); // use CWD b/c it won't be used\n+ DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(PathUtils.get(\".\"), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser(); // use CWD b/c it won't be used\n \n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/standalone/standalone-mapping.json\");\n docMapper = mapperParser.parse(mapping);",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/StandaloneRunner.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.mapper.attachments.AttachmentMapper;\n+import org.elasticsearch.index.mapper.core.MapperTestUtils;\n import org.junit.Before;\n \n import java.io.IOException;\n@@ -48,7 +48,7 @@ public class VariousDocTests extends AttachmentUnitTestCase {\n \n @Before\n public void createMapper() throws IOException {\n- DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY).documentMapperParser();\n+ DocumentMapperParser mapperParser = MapperTestUtils.newMapperService(createTempDir(), Settings.EMPTY, getIndicesModuleWithRegisteredAttachmentMapper()).documentMapperParser();\n \n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/unit/various-doc/test-mapping.json\");\n docMapper = mapperParser.parse(mapping);\n@@ -95,7 +95,7 @@ public void testTxtDocument() throws Exception {\n assertParseable(\"text-in-english.txt\");\n testMapper(\"text-in-english.txt\", false);\n }\n- \n+\n /**\n * Test for .epub\n */\n@@ -131,7 +131,7 @@ void assertException(String filename, String expectedMessage) throws Exception {\n protected void assertParseable(String filename) throws Exception {\n try (InputStream is = VariousDocTests.class.getResourceAsStream(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/\" + filename)) {\n byte bytes[] = IOUtils.toByteArray(is);\n- String parsedContent = TikaImpl.parse(bytes, new Metadata(), -1); \n+ String parsedContent = TikaImpl.parse(bytes, new Metadata(), -1);\n assertThat(parsedContent, not(isEmptyOrNullString()));\n logger.debug(\"extracted content: {}\", parsedContent);\n }",
"filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/VariousDocTests.java",
"status": "modified"
}
]
}
|
{
"body": "Hello all.\n\nI am running 4-node ES cluster with logstash as data producer. Recently I've upgraded ES from 1.6 to 2.1. After upgrade recent indexs' shards are continuously being marked as failed and re-initialized. If failed shard is primary shard then indexing stucks.\n\nOne node:\n\n```\n2015-11-26_07:01:21.22958 [2015-11-26 07:01:21,229][WARN ][index.engine ] [John Falsworth] [logstash-2015.11.26][2] failed engine [refresh failed]\n2015-11-26_07:01:21.22961 java.lang.NullPointerException\n2015-11-26_07:01:21.23006 [2015-11-26 07:01:21,229][WARN ][index.shard ] [John Falsworth] [logstash-2015.11.26][2] Failed to perform scheduled engine refresh\n2015-11-26_07:01:21.23008 [logstash-2015.11.26][[logstash-2015.11.26][2]] RefreshFailedEngineException[Refresh failed]; nested: NullPointerException;\n2015-11-26_07:01:21.23008 at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:686)\n2015-11-26_07:01:21.23008 at org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:615)\n2015-11-26_07:01:21.23009 at org.elasticsearch.index.shard.IndexShard$EngineRefresher$1.run(IndexShard.java:1255)\n2015-11-26_07:01:21.23009 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n2015-11-26_07:01:21.23009 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n2015-11-26_07:01:21.23010 at java.lang.Thread.run(Thread.java:745)\n2015-11-26_07:01:21.23010 Caused by: java.lang.NullPointerException\n2015-11-26_07:01:21.23044 [2015-11-26 07:01:21,230][WARN ][indices.cluster ] [John Falsworth] [[logstash-2015.11.26][2]] marking and sending shard failed due to [eng\nine failure, reason [refresh failed]]\n2015-11-26_07:01:21.23046 java.lang.NullPointerException\n2015-11-26_07:04:18.05160 [2015-11-26 07:04:18,050][WARN ][index.translog ] [John Falsworth] [logstash-2015.11.26][2] failed to delete temp file /var/lib/elasticsea\nrch/dss2es/nodes/0/indices/logstash-2015.11.26/2/translog/translog-8708119625210250383.tlog\n2015-11-26_07:04:18.05162 java.nio.file.NoSuchFileException: /var/lib/elasticsearch/dss2es/nodes/0/indices/logstash-2015.11.26/2/translog/translog-8708119625210250383.tlog\n2015-11-26_07:04:18.05163 at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n2015-11-26_07:04:18.05163 at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n2015-11-26_07:04:18.05163 at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n2015-11-26_07:04:18.05164 at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n2015-11-26_07:04:18.05164 at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)\n2015-11-26_07:04:18.05164 at java.nio.file.Files.delete(Files.java:1126)\n2015-11-26_07:04:18.05165 at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:324)\n2015-11-26_07:04:18.05165 at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)\n2015-11-26_07:04:18.05165 at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)\n2015-11-26_07:04:18.05166 at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)\n2015-11-26_07:04:18.05166 at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n2015-11-26_07:04:18.05167 at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n2015-11-26_07:04:18.05167 at 
org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n2015-11-26_07:04:18.05167 at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n2015-11-26_07:04:18.05168 at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n2015-11-26_07:04:18.05168 at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n2015-11-26_07:04:18.05168 at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n2015-11-26_07:04:18.05169 at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n2015-11-26_07:04:18.05169 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n2015-11-26_07:04:18.05169 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n2015-11-26_07:04:18.05171 at java.lang.Thread.run(Thread.java:745)\n```\n\nAnother node:\n\n```\n2015-11-26_07:04:16.53521 [2015-11-26 07:04:16,534][WARN ][index.engine ] [Turner D. Century] [logstash-2015.11.26][2] failed engine [refresh failed]\n2015-11-26_07:04:16.53524 java.lang.NullPointerException\n2015-11-26_07:04:16.53525 at org.elasticsearch.indices.cache.query.IndicesQueryCache$1.onDocIdSetEviction(IndicesQueryCache.java:158)\n2015-11-26_07:04:16.53525 at org.apache.lucene.search.LRUQueryCache.clearCoreCacheKey(LRUQueryCache.java:313)\n2015-11-26_07:04:16.53525 at org.apache.lucene.search.LRUQueryCache$1.onClose(LRUQueryCache.java:276)\n2015-11-26_07:04:16.53526 at org.apache.lucene.index.SegmentCoreReaders.notifyCoreClosedListeners(SegmentCoreReaders.java:168)\n2015-11-26_07:04:16.53526 at org.apache.lucene.index.SegmentCoreReaders.decRef(SegmentCoreReaders.java:157)\n2015-11-26_07:04:16.53526 at org.apache.lucene.index.SegmentReader.doClose(SegmentReader.java:175)\n2015-11-26_07:04:16.53527 at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)\n2015-11-26_07:04:16.53527 at org.apache.lucene.index.StandardDirectoryReader.doClose(StandardDirectoryReader.java:359)\n2015-11-26_07:04:16.53527 at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)\n2015-11-26_07:04:16.53528 at org.apache.lucene.index.IndexReader.close(IndexReader.java:403)\n2015-11-26_07:04:16.53528 at org.apache.lucene.index.FilterDirectoryReader.doClose(FilterDirectoryReader.java:134)\n2015-11-26_07:04:16.53529 at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)\n2015-11-26_07:04:16.53529 at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:130)\n2015-11-26_07:04:16.53530 at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:58)\n2015-11-26_07:04:16.53531 at org.apache.lucene.search.ReferenceManager.release(ReferenceManager.java:274)\n2015-11-26_07:04:16.53531 at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:189)\n2015-11-26_07:04:16.53531 at org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253)\n2015-11-26_07:04:16.53532 at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:678)\n2015-11-26_07:04:16.53532 at org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:615)\n2015-11-26_07:04:16.53532 at org.elasticsearch.index.shard.IndexShard$EngineRefresher$1.run(IndexShard.java:1255)\n2015-11-26_07:04:16.53532 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n2015-11-26_07:04:16.53533 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n2015-11-26_07:04:16.53533 at java.lang.Thread.run(Thread.java:745)\n```\n",
"comments": [
{
"body": "Anyone? I can handle this by moving failed shard to another node by the way.\n",
"created_at": "2015-11-27T14:29:32Z"
},
{
"body": "It seems like NPE is in cache eviction method which can't aquire stats counters using `stats2`\n\n``` java\nprotected void onDocIdSetEviction(Object readerCoreKey, int numEntries, long sumRamBytesUsed) {\n assert Thread.holdsLock(this);\n super.onDocIdSetEviction(readerCoreKey, numEntries, sumRamBytesUsed);\n // We can't use ShardCoreKeyMap here because its core closed\n // listener is called before the listener of the cache which\n // triggers this eviction. So instead we use use stats2 that\n // we only evict when nothing is cached anymore on the segment\n // instead of relying on close listeners\n final StatsAndCount statsAndCount = stats2.get(readerCoreKey);\n final Stats shardStats = statsAndCount.stats;\n shardStats.cacheSize -= numEntries;\n shardStats.ramBytesUsed -= sumRamBytesUsed;\n statsAndCount.count -= numEntries;\n if (statsAndCount.count == 0) {\n stats2.remove(readerCoreKey);\n }\n }\n```\n\nWhat is the preferable way to fix this? Ignoring stats update if `statsAndCount` is `null` leads to incorrect statistics.\n",
"created_at": "2015-11-27T23:18:43Z"
},
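To make the failure mode concrete, here is a standalone toy (made-up names, not the `IndicesQueryCache` code) of the accounting pattern quoted above: the per-segment counter is removed once it reaches zero, so a late eviction callback that reports zero entries finds no entry and dereferences `null`. The fix in the PR below instead skips the bookkeeping when the callback reports zero entries, as its diff shows further down.

```java
import java.util.HashMap;
import java.util.Map;

// Toy reproduction of the accounting race: a zero-entry eviction callback arrives
// after the per-segment counter has already been removed.
public class EvictionAccountingSketch {
    private static final class Counter { long entries; }

    private final Map<Object, Counter> perSegment = new HashMap<>();

    void onCache(Object segmentKey) {
        perSegment.computeIfAbsent(segmentKey, k -> new Counter()).entries++;
    }

    void onEviction(Object segmentKey, int numEntries) {
        Counter counter = perSegment.get(segmentKey); // null once the segment was fully drained
        counter.entries -= numEntries;                // NullPointerException on the late callback
        if (counter.entries == 0) {
            perSegment.remove(segmentKey);
        }
    }

    public static void main(String[] args) {
        EvictionAccountingSketch sketch = new EvictionAccountingSketch();
        Object segment = new Object();
        sketch.onCache(segment);
        sketch.onEviction(segment, 1); // counter reaches zero and is removed
        sketch.onEviction(segment, 0); // late zero-entry callback -> NPE, as in the stack traces
    }
}
```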
{
"body": "Also, API call `POST /_cache/clear` episodically gives the same error\n\n``` json\n{\n \"_shards\": {\n \"total\": 6830,\n \"successful\": 6827,\n \"failed\": 3,\n \"failures\": [\n {\n \"shard\": 2,\n \"index\": \"logstash-2015.11.26\",\n \"status\": \"INTERNAL_SERVER_ERROR\",\n \"reason\": {\n \"type\": \"null_pointer_exception\",\n \"reason\": null\n }\n }\n ]\n }\n}\n```\n",
"created_at": "2015-11-27T23:26:05Z"
},
{
"body": "@s1monw Is this issue fixed by https://github.com/elastic/elasticsearch/pull/15012 ?\n",
"created_at": "2015-11-28T13:41:48Z"
},
{
"body": "@clintongormley I think this is something else, relating to evictions in the new query cache. Maybe @jpountz can take a look?\n",
"created_at": "2015-11-30T15:34:40Z"
},
{
"body": "Same story here. Updated to 2.1 a couple of days ago also, and ever since we have the same error message that some temporary files could not be deleted:\n\n```\n[2015-12-02 09:31:41,802][WARN ][index.translog ] [Bacon] [.kibana][0] failed to delete temp file /appdata/elasticsearch/data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-4247269950871871367.tlog\n```\n\nThe mentioned files never exist. \nAlso some of our shards are stuck in a translog state. When this first occured a couple of days ago, they would recover from it after some time, but now that's not working anymore and the number of affected shards is growing. Restarting elasticsearch does not help (anymore). The indices also cannot be closed or deleted via the http api anymore. Indexing is stuck, too.\n",
"created_at": "2015-12-02T10:00:04Z"
},
{
"body": "@IngaFeick This warning can be safely ignored. It will be fixed in 2.1.1, see https://github.com/elastic/elasticsearch/pull/14872.\n",
"created_at": "2015-12-02T10:39:34Z"
},
{
"body": "To be clear, only this warning that elasticsearch \"failed to delete\" a temp file will be fixed. The NullPointerException mentioned above is an actual bug however.\n",
"created_at": "2015-12-02T10:53:41Z"
},
{
"body": "Thanks @jpountz , unfortunately we get that NPE also:\n\n```\njava.lang.NullPointerException\n at org.elasticsearch.indices.cache.query.IndicesQueryCache$1.onDocIdSetEviction(IndicesQueryCache.java:158)\n at org.apache.lucene.search.LRUQueryCache.clearCoreCacheKey(LRUQueryCache.java:313)\n at org.apache.lucene.search.LRUQueryCache$1.onClose(LRUQueryCache.java:276)\n at org.apache.lucene.index.SegmentCoreReaders.notifyCoreClosedListeners(SegmentCoreReaders.java:168)\n at org.apache.lucene.index.SegmentCoreReaders.decRef(SegmentCoreReaders.java:157)\n at org.apache.lucene.index.SegmentReader.doClose(SegmentReader.java:175)\n at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)\n at org.apache.lucene.index.ReadersAndUpdates.dropReaders(ReadersAndUpdates.java:182)\n at org.apache.lucene.index.IndexWriter$ReaderPool.dropAll(IndexWriter.java:603)\n at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2075)\n```\n",
"created_at": "2015-12-02T10:57:42Z"
},
{
"body": "As a temporary and dirty workaround I'm running `POST /cache/clear` every ten minutes in cronjob. This slows down search requests but prevents shards format failing.\n",
"created_at": "2015-12-02T10:58:44Z"
},
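The same stop-gap can be expressed with the 2.x Java API instead of cron; a rough sketch follows (it assumes an already-constructed `Client` and omits error handling). Like the curl version, it only hides the symptom until the underlying NPE is fixed.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.elasticsearch.client.Client;

// Periodically clears caches, mirroring the cron workaround above.
public class PeriodicCacheClear {
    public static ScheduledExecutorService schedule(final Client client) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                client.admin().indices().prepareClearCache().get(); // same effect as POST /_cache/clear
            }
        }, 10, 10, TimeUnit.MINUTES);
        return scheduler;
    }
}
```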
{
"body": "@vs-adm we tried that also, to no noticable effect, but I'll give that cronjob a try to prevent additional failures. Thanks!\n",
"created_at": "2015-12-02T11:00:39Z"
},
{
"body": "I managed to find a case in which I'm able to reproduce the above NullPointerException, see #15202. Hopefully this is the same case as this one.\n",
"created_at": "2015-12-02T21:10:12Z"
}
],
"number": 15043,
"title": "ES 2.1 occasional failed engine [refresh failed] due to NullPointerException while evicting query cache."
}
|
{
"body": "This is due to the fact that the query cache will still call the\nonDocIdSetEviction callback in this case but with a number of entries equal to\nzero.\n\nClose #15043\n",
"number": 15202,
"review_comments": [],
"title": "Fix NPE when a segment with an empty cache gets closed."
}
|
{
"commits": [
{
"message": "Fix NPE when a segment with an empty cache gets closed.\n\nThis is due to the fact that the query cache will still call the\nonDocIdSetEviction callback in this case but with a number of entries equal to\nzero.\n\nClose #15043"
}
],
"files": [
{
"diff": "@@ -150,18 +150,23 @@ protected void onDocIdSetCache(Object readerCoreKey, long ramBytesUsed) {\n protected void onDocIdSetEviction(Object readerCoreKey, int numEntries, long sumRamBytesUsed) {\n assert Thread.holdsLock(this);\n super.onDocIdSetEviction(readerCoreKey, numEntries, sumRamBytesUsed);\n- // We can't use ShardCoreKeyMap here because its core closed\n- // listener is called before the listener of the cache which\n- // triggers this eviction. So instead we use use stats2 that\n- // we only evict when nothing is cached anymore on the segment\n- // instead of relying on close listeners\n- final StatsAndCount statsAndCount = stats2.get(readerCoreKey);\n- final Stats shardStats = statsAndCount.stats;\n- shardStats.cacheSize -= numEntries;\n- shardStats.ramBytesUsed -= sumRamBytesUsed;\n- statsAndCount.count -= numEntries;\n- if (statsAndCount.count == 0) {\n- stats2.remove(readerCoreKey);\n+ // onDocIdSetEviction might sometimes be called with a number\n+ // of entries equal to zero if the cache for the given segment\n+ // was already empty when the close listener was called\n+ if (numEntries > 0) {\n+ // We can't use ShardCoreKeyMap here because its core closed\n+ // listener is called before the listener of the cache which\n+ // triggers this eviction. So instead we use use stats2 that\n+ // we only evict when nothing is cached anymore on the segment\n+ // instead of relying on close listeners\n+ final StatsAndCount statsAndCount = stats2.get(readerCoreKey);\n+ final Stats shardStats = statsAndCount.stats;\n+ shardStats.cacheSize -= numEntries;\n+ shardStats.ramBytesUsed -= sumRamBytesUsed;\n+ statsAndCount.count -= numEntries;\n+ if (statsAndCount.count == 0) {\n+ stats2.remove(readerCoreKey);\n+ }\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/indices/cache/query/IndicesQueryCache.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,308 @@\n+package org.elasticsearch.indices.cache.query;\n+\n+import java.io.IOException;\n+\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexWriter;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.ConstantScoreScorer;\n+import org.apache.lucene.search.ConstantScoreWeight;\n+import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.QueryCachingPolicy;\n+import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.search.Weight;\n+import org.apache.lucene.store.Directory;\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.cache.query.QueryCacheStats;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.test.ESTestCase;\n+\n+public class IndicesQueryCacheTests extends ESTestCase {\n+\n+ private static class DummyQuery extends Query {\n+\n+ private final int id;\n+\n+ DummyQuery(int id) {\n+ this.id = id;\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ return super.equals(obj) && id == ((DummyQuery) obj).id;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return 31 * super.hashCode() + id;\n+ }\n+\n+ @Override\n+ public String toString(String field) {\n+ return \"dummy\";\n+ }\n+\n+ @Override\n+ public Weight createWeight(IndexSearcher searcher, boolean needsScores)\n+ throws IOException {\n+ return new ConstantScoreWeight(this) {\n+ @Override\n+ public Scorer scorer(LeafReaderContext context) throws IOException {\n+ return new ConstantScoreScorer(this, score(), DocIdSetIterator.all(context.reader().maxDoc()));\n+ }\n+ };\n+ }\n+\n+ }\n+\n+ public void testBasics() throws IOException {\n+ Directory dir = newDirectory();\n+ IndexWriter w = new IndexWriter(dir, newIndexWriterConfig());\n+ w.addDocument(new Document());\n+ DirectoryReader r = DirectoryReader.open(w, false);\n+ w.close();\n+ ShardId shard = new ShardId(new Index(\"index\"), 0);\n+ r = ElasticsearchDirectoryReader.wrap(r, shard);\n+ IndexSearcher s = new IndexSearcher(r);\n+ s.setQueryCachingPolicy(QueryCachingPolicy.ALWAYS_CACHE);\n+\n+ Settings settings = Settings.builder()\n+ .put(IndicesQueryCache.INDICES_CACHE_QUERY_COUNT, 10)\n+ .build();\n+ IndicesQueryCache cache = new IndicesQueryCache(settings);\n+ s.setQueryCache(cache);\n+\n+ QueryCacheStats stats = cache.getStats(shard);\n+ assertEquals(0L, stats.getCacheSize());\n+ assertEquals(0L, stats.getCacheCount());\n+ assertEquals(0L, stats.getHitCount());\n+ assertEquals(0L, stats.getMissCount());\n+\n+ assertEquals(1, s.count(new DummyQuery(0)));\n+\n+ stats = cache.getStats(shard);\n+ assertEquals(1L, stats.getCacheSize());\n+ assertEquals(1L, stats.getCacheCount());\n+ assertEquals(0L, stats.getHitCount());\n+ assertEquals(1L, stats.getMissCount());\n+\n+ for (int i = 1; i < 20; ++i) {\n+ assertEquals(1, s.count(new DummyQuery(i)));\n+ }\n+\n+ stats = cache.getStats(shard);\n+ assertEquals(10L, stats.getCacheSize());\n+ assertEquals(20L, stats.getCacheCount());\n+ assertEquals(0L, stats.getHitCount());\n+ assertEquals(20L, stats.getMissCount());\n+\n+ s.count(new DummyQuery(10));\n+\n+ stats = cache.getStats(shard);\n+ assertEquals(10L, stats.getCacheSize());\n+ assertEquals(20L, 
stats.getCacheCount());\n+ assertEquals(1L, stats.getHitCount());\n+ assertEquals(20L, stats.getMissCount());\n+\n+ IOUtils.close(r, dir);\n+\n+ // got emptied, but no changes to other metrics\n+ stats = cache.getStats(shard);\n+ assertEquals(0L, stats.getCacheSize());\n+ assertEquals(20L, stats.getCacheCount());\n+ assertEquals(1L, stats.getHitCount());\n+ assertEquals(20L, stats.getMissCount());\n+\n+ cache.onClose(shard);\n+\n+ // forgot everything\n+ stats = cache.getStats(shard);\n+ assertEquals(0L, stats.getCacheSize());\n+ assertEquals(0L, stats.getCacheCount());\n+ assertEquals(0L, stats.getHitCount());\n+ assertEquals(0L, stats.getMissCount());\n+\n+ cache.close(); // this triggers some assertions\n+ }\n+\n+ public void testTwoShards() throws IOException {\n+ Directory dir1 = newDirectory();\n+ IndexWriter w1 = new IndexWriter(dir1, newIndexWriterConfig());\n+ w1.addDocument(new Document());\n+ DirectoryReader r1 = DirectoryReader.open(w1, false);\n+ w1.close();\n+ ShardId shard1 = new ShardId(new Index(\"index\"), 0);\n+ r1 = ElasticsearchDirectoryReader.wrap(r1, shard1);\n+ IndexSearcher s1 = new IndexSearcher(r1);\n+ s1.setQueryCachingPolicy(QueryCachingPolicy.ALWAYS_CACHE);\n+\n+ Directory dir2 = newDirectory();\n+ IndexWriter w2 = new IndexWriter(dir2, newIndexWriterConfig());\n+ w2.addDocument(new Document());\n+ DirectoryReader r2 = DirectoryReader.open(w2, false);\n+ w2.close();\n+ ShardId shard2 = new ShardId(new Index(\"index\"), 1);\n+ r2 = ElasticsearchDirectoryReader.wrap(r2, shard2);\n+ IndexSearcher s2 = new IndexSearcher(r2);\n+ s2.setQueryCachingPolicy(QueryCachingPolicy.ALWAYS_CACHE);\n+\n+ Settings settings = Settings.builder()\n+ .put(IndicesQueryCache.INDICES_CACHE_QUERY_COUNT, 10)\n+ .build();\n+ IndicesQueryCache cache = new IndicesQueryCache(settings);\n+ s1.setQueryCache(cache);\n+ s2.setQueryCache(cache);\n+\n+ assertEquals(1, s1.count(new DummyQuery(0)));\n+\n+ QueryCacheStats stats1 = cache.getStats(shard1);\n+ assertEquals(1L, stats1.getCacheSize());\n+ assertEquals(1L, stats1.getCacheCount());\n+ assertEquals(0L, stats1.getHitCount());\n+ assertEquals(1L, stats1.getMissCount());\n+\n+ QueryCacheStats stats2 = cache.getStats(shard2);\n+ assertEquals(0L, stats2.getCacheSize());\n+ assertEquals(0L, stats2.getCacheCount());\n+ assertEquals(0L, stats2.getHitCount());\n+ assertEquals(0L, stats2.getMissCount());\n+\n+ assertEquals(1, s2.count(new DummyQuery(0)));\n+\n+ stats1 = cache.getStats(shard1);\n+ assertEquals(1L, stats1.getCacheSize());\n+ assertEquals(1L, stats1.getCacheCount());\n+ assertEquals(0L, stats1.getHitCount());\n+ assertEquals(1L, stats1.getMissCount());\n+\n+ stats2 = cache.getStats(shard2);\n+ assertEquals(1L, stats2.getCacheSize());\n+ assertEquals(1L, stats2.getCacheCount());\n+ assertEquals(0L, stats2.getHitCount());\n+ assertEquals(1L, stats2.getMissCount());\n+\n+ for (int i = 0; i < 20; ++i) {\n+ assertEquals(1, s2.count(new DummyQuery(i)));\n+ }\n+\n+ stats1 = cache.getStats(shard1);\n+ assertEquals(0L, stats1.getCacheSize()); // evicted\n+ assertEquals(1L, stats1.getCacheCount());\n+ assertEquals(0L, stats1.getHitCount());\n+ assertEquals(1L, stats1.getMissCount());\n+\n+ stats2 = cache.getStats(shard2);\n+ assertEquals(10L, stats2.getCacheSize());\n+ assertEquals(20L, stats2.getCacheCount());\n+ assertEquals(1L, stats2.getHitCount());\n+ assertEquals(20L, stats2.getMissCount());\n+\n+ IOUtils.close(r1, dir1);\n+\n+ // no changes\n+ stats1 = cache.getStats(shard1);\n+ assertEquals(0L, stats1.getCacheSize());\n+ assertEquals(1L, 
stats1.getCacheCount());\n+ assertEquals(0L, stats1.getHitCount());\n+ assertEquals(1L, stats1.getMissCount());\n+\n+ stats2 = cache.getStats(shard2);\n+ assertEquals(10L, stats2.getCacheSize());\n+ assertEquals(20L, stats2.getCacheCount());\n+ assertEquals(1L, stats2.getHitCount());\n+ assertEquals(20L, stats2.getMissCount());\n+\n+ cache.onClose(shard1);\n+\n+ // forgot everything about shard1\n+ stats1 = cache.getStats(shard1);\n+ assertEquals(0L, stats1.getCacheSize());\n+ assertEquals(0L, stats1.getCacheCount());\n+ assertEquals(0L, stats1.getHitCount());\n+ assertEquals(0L, stats1.getMissCount());\n+\n+ stats2 = cache.getStats(shard2);\n+ assertEquals(10L, stats2.getCacheSize());\n+ assertEquals(20L, stats2.getCacheCount());\n+ assertEquals(1L, stats2.getHitCount());\n+ assertEquals(20L, stats2.getMissCount());\n+\n+ IOUtils.close(r2, dir2);\n+ cache.onClose(shard2);\n+\n+ // forgot everything about shard2\n+ stats1 = cache.getStats(shard1);\n+ assertEquals(0L, stats1.getCacheSize());\n+ assertEquals(0L, stats1.getCacheCount());\n+ assertEquals(0L, stats1.getHitCount());\n+ assertEquals(0L, stats1.getMissCount());\n+\n+ stats2 = cache.getStats(shard2);\n+ assertEquals(0L, stats2.getCacheSize());\n+ assertEquals(0L, stats2.getCacheCount());\n+ assertEquals(0L, stats2.getHitCount());\n+ assertEquals(0L, stats2.getMissCount());\n+\n+ cache.close(); // this triggers some assertions\n+ }\n+\n+ // Make sure the cache behaves correctly when a segment that is associated\n+ // with an empty cache gets closed. In that particular case, the eviction\n+ // callback is called with a number of evicted entries equal to 0\n+ // see https://github.com/elastic/elasticsearch/issues/15043\n+ public void testStatsOnEviction() throws IOException {\n+ Directory dir1 = newDirectory();\n+ IndexWriter w1 = new IndexWriter(dir1, newIndexWriterConfig());\n+ w1.addDocument(new Document());\n+ DirectoryReader r1 = DirectoryReader.open(w1, false);\n+ w1.close();\n+ ShardId shard1 = new ShardId(new Index(\"index\"), 0);\n+ r1 = ElasticsearchDirectoryReader.wrap(r1, shard1);\n+ IndexSearcher s1 = new IndexSearcher(r1);\n+ s1.setQueryCachingPolicy(QueryCachingPolicy.ALWAYS_CACHE);\n+\n+ Directory dir2 = newDirectory();\n+ IndexWriter w2 = new IndexWriter(dir2, newIndexWriterConfig());\n+ w2.addDocument(new Document());\n+ DirectoryReader r2 = DirectoryReader.open(w2, false);\n+ w2.close();\n+ ShardId shard2 = new ShardId(new Index(\"index\"), 1);\n+ r2 = ElasticsearchDirectoryReader.wrap(r2, shard2);\n+ IndexSearcher s2 = new IndexSearcher(r2);\n+ s2.setQueryCachingPolicy(QueryCachingPolicy.ALWAYS_CACHE);\n+\n+ Settings settings = Settings.builder()\n+ .put(IndicesQueryCache.INDICES_CACHE_QUERY_COUNT, 10)\n+ .build();\n+ IndicesQueryCache cache = new IndicesQueryCache(settings);\n+ s1.setQueryCache(cache);\n+ s2.setQueryCache(cache);\n+\n+ assertEquals(1, s1.count(new DummyQuery(0)));\n+\n+ for (int i = 1; i <= 20; ++i) {\n+ assertEquals(1, s2.count(new DummyQuery(i)));\n+ }\n+\n+ QueryCacheStats stats1 = cache.getStats(shard1);\n+ assertEquals(0L, stats1.getCacheSize());\n+ assertEquals(1L, stats1.getCacheCount());\n+\n+ // this used to fail because we were evicting an empty cache on\n+ // the segment from r1\n+ IOUtils.close(r1, dir1);\n+ cache.onClose(shard1);\n+\n+ IOUtils.close(r2, dir2);\n+ cache.onClose(shard2);\n+\n+ cache.close(); // this triggers some assertions\n+ }\n+\n+}",
"filename": "core/src/test/java/org/elasticsearch/indices/cache/query/IndicesQueryCacheTests.java",
"status": "added"
}
]
}
|
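The test diff above pins down the per-shard bookkeeping contract of the shared query cache: hit, miss, and cache counts are attributed to a `ShardId`, survive closing the underlying reader (only the live entry count drops to zero), and are forgotten entirely once `onClose(shard)` is called for that shard. The snippet below is a small standalone model of that contract only, not the Elasticsearch implementation; every name in it is hypothetical, and the real `IndicesQueryCache` delegates the actual caching to Lucene's query cache and layers this kind of bookkeeping on top.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal standalone model of the per-shard statistics contract verified by the
 * test above. All names here are hypothetical; this is not the Elasticsearch class.
 */
public class PerShardQueryCacheStatsModel {

    static final class Stats {
        long cacheSize;   // entries currently held for this shard
        long cacheCount;  // entries ever cached for this shard
        long hitCount;
        long missCount;
    }

    private final Map<String, Stats> statsByShard = new HashMap<>();

    private Stats stats(String shardId) {
        return statsByShard.computeIfAbsent(shardId, k -> new Stats());
    }

    /** A query was executed for the first time: counts as a miss and a new cache entry. */
    void onCache(String shardId) {
        Stats s = stats(shardId);
        s.cacheSize++;
        s.cacheCount++;
        s.missCount++;
    }

    void onHit(String shardId) { stats(shardId).hitCount++; }

    /** Closing a reader only evicts live entries; the historical counts are kept. */
    void onReaderClose(String shardId) { stats(shardId).cacheSize = 0; }

    /** Closing the shard itself forgets everything about it. */
    void onShardClose(String shardId) { statsByShard.remove(shardId); }

    Stats getStats(String shardId) {
        Stats s = statsByShard.get(shardId);
        return s != null ? s : new Stats(); // an unknown shard reports all zeros
    }

    public static void main(String[] args) {
        PerShardQueryCacheStatsModel cache = new PerShardQueryCacheStatsModel();
        cache.onCache("shard-0");        // first execution: miss + cached entry
        cache.onHit("shard-0");          // second execution: hit
        cache.onReaderClose("shard-0");  // entries gone, history kept
        System.out.println("after reader close: hits=" + cache.getStats("shard-0").hitCount
                + " size=" + cache.getStats("shard-0").cacheSize);   // hits=1 size=0
        cache.onShardClose("shard-0");   // everything forgotten
        System.out.println("after shard close: hits=" + cache.getStats("shard-0").hitCount); // 0
    }
}
```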
{
"body": "ClusterRebalanceAllocationDecider did not take unassigned shards into account\nthat are temporarily marked as ignored. This can cause unexpected behaviour\nwhen gateway allocator is still fetching shards or has marked shards as ignored\nsince their quorum is not met yet.\n\nCloses #14670\n",
"comments": [
{
"body": "@s1monw I don't understand the removal of the transactionId stuff, is that not used anywhere?\n",
"created_at": "2015-11-11T15:19:07Z"
},
{
"body": "left some minor comments. LGTM o.w.\n",
"created_at": "2015-11-11T19:20:30Z"
},
{
"body": "> @s1monw I don't understand the removal of the transactionId stuff, is that not used anywhere?\n\nit's not used anymore\n",
"created_at": "2015-11-11T19:46:28Z"
}
],
"number": 14678,
"title": "Take ignored unallocated shards into account when making allocation decision"
}
|
{
"body": "ClusterRebalanceAllocationDecider did not take unassigned shards into account\nthat are temporarily marked as ignored. This can cause unexpected behavior\nwhen gateway allocator is still fetching shards or has marked shards as ignored\nsince their quorum is not met yet.\n\nCloses #14670\nCloses #14678\n\nthis is a backport of #14678 and closes #14776 \n\n@bleskes can you take a look, I had to backport the UnassignedIterator as well from 2.0 but I think it makes things cleaner here as well.\n",
"number": 15195,
"review_comments": [],
"title": "Take ingored unallocated shards into account when making allocation decision"
}
|
{
"commits": [
{
"message": "Take ingored unallocated shards into account when makeing allocation decision\n\nClusterRebalanceAllocationDecider did not take unassigned shards into account\nthat are temporarily marked as ingored. This can cause unexpected behavior\nwhen gateway allocator is still fetching shards or has marked shareds as ignored\nsince their quorum is not met yet.\n\nCloses #14670\nCloses #14678"
}
],
"files": [
{
"diff": "@@ -183,7 +183,7 @@ public void onFailure(String source, Throwable t) {\n \n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- if (oldState != newState && newState.getRoutingNodes().hasUnassigned()) {\n+ if (oldState != newState && newState.getRoutingNodes().unassigned().size() > 0) {\n logger.trace(\"unassigned shards after shard failures. scheduling a reroute.\");\n routingService.reroute(\"unassigned shards after shard failures, scheduling a reroute\");\n }",
"filename": "src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java",
"status": "modified"
},
{
"diff": "@@ -51,9 +51,7 @@ public class RoutingNodes implements Iterable<RoutingNode> {\n \n private final Map<String, RoutingNode> nodesToShards = newHashMap();\n \n- private final UnassignedShards unassignedShards = new UnassignedShards();\n-\n- private final List<MutableShardRouting> ignoredUnassignedShards = newArrayList();\n+ private final UnassignedShards unassignedShards = new UnassignedShards(this);\n \n private final Map<ShardId, List<MutableShardRouting>> assignedShards = newHashMap();\n \n@@ -170,14 +168,6 @@ public int requiredAverageNumberOfShardsPerNode() {\n return totalNumberOfShards / nodesToShards.size();\n }\n \n- public boolean hasUnassigned() {\n- return !unassignedShards.isEmpty();\n- }\n-\n- public List<MutableShardRouting> ignoredUnassigned() {\n- return this.ignoredUnassignedShards;\n- }\n-\n public UnassignedShards unassigned() {\n return this.unassignedShards;\n }\n@@ -223,14 +213,25 @@ public ObjectIntOpenHashMap<String> nodesPerAttributesCounts(String attributeNam\n return nodesPerAttributesCounts;\n }\n \n+ /**\n+ * Returns <code>true</code> iff this {@link RoutingNodes} instance has any unassigned primaries even if the\n+ * primaries are marked as temporarily ignored.\n+ */\n public boolean hasUnassignedPrimaries() {\n- return unassignedShards.numPrimaries() > 0;\n+ return unassignedShards.getNumPrimaries() + unassignedShards.getNumIgnoredPrimaries() > 0;\n }\n \n+ /**\n+ * Returns <code>true</code> iff this {@link RoutingNodes} instance has any unassigned shards even if the\n+ * shards are marked as temporarily ignored.\n+ * @see UnassignedShards#isEmpty()\n+ * @see UnassignedShards#isIgnoredEmpty()\n+ */\n public boolean hasUnassignedShards() {\n- return !unassignedShards.isEmpty();\n+ return unassignedShards.isEmpty() == false || unassignedShards.isIgnoredEmpty() == false;\n }\n \n+\n public boolean hasInactivePrimaries() {\n return inactivePrimaryCount > 0;\n }\n@@ -532,33 +533,23 @@ public void reinitShadowPrimary(MutableShardRouting candidate) {\n }\n \n public final static class UnassignedShards implements Iterable<MutableShardRouting> {\n-\n+ private final RoutingNodes nodes;\n private final List<MutableShardRouting> unassigned;\n-\n+ private final List<MutableShardRouting> ignored;\n private int primaries = 0;\n- private long transactionId = 0;\n- private final UnassignedShards source;\n- private final long sourceTransactionId;\n+ private int ignoredPrimaries = 0;\n \n- public UnassignedShards(UnassignedShards other) {\n- source = other;\n- sourceTransactionId = other.transactionId;\n- unassigned = new ArrayList<>(other.unassigned);\n- primaries = other.primaries;\n- }\n-\n- public UnassignedShards() {\n+ public UnassignedShards(RoutingNodes nodes) {\n unassigned = new ArrayList<>();\n- source = null;\n- sourceTransactionId = -1;\n+ ignored = new ArrayList<>();\n+ this.nodes = nodes;\n }\n \n public void add(MutableShardRouting mutableShardRouting) {\n if(mutableShardRouting.primary()) {\n primaries++;\n }\n unassigned.add(mutableShardRouting);\n- transactionId++;\n }\n \n public void addAll(Collection<MutableShardRouting> mutableShardRoutings) {\n@@ -571,72 +562,162 @@ public void sort(Comparator<ShardRouting> comparator) {\n CollectionUtil.timSort(unassigned, comparator);\n }\n \n- public int size() {\n- return unassigned.size();\n- }\n+ /**\n+ * Returns the size of the non-ignored unassigned shards\n+ */\n+ public int size() { return unassigned.size(); }\n \n- public int numPrimaries() {\n+ /**\n+ * Returns the size of the temporarily marked as 
ignored unassigned shards\n+ */\n+ public int ignoredSize() { return ignored.size(); }\n+\n+ /**\n+ * Returns the number of non-ignored unassigned primaries\n+ */\n+ public int getNumPrimaries() {\n return primaries;\n }\n \n- @Override\n- public Iterator<MutableShardRouting> iterator() {\n- final Iterator<MutableShardRouting> iterator = unassigned.iterator();\n- return new Iterator<MutableShardRouting>() {\n- private MutableShardRouting current;\n- @Override\n- public boolean hasNext() {\n- return iterator.hasNext();\n- }\n+ /**\n+ * Returns the number of temporarily marked as ignored unassigned primaries\n+ */\n+ public int getNumIgnoredPrimaries() { return ignoredPrimaries; }\n \n- @Override\n- public MutableShardRouting next() {\n- return current = iterator.next();\n- }\n+ public UnassignedIterator iterator() {\n+ return new UnassignedIterator();\n+ }\n \n- @Override\n- public void remove() {\n- iterator.remove();\n- if (current.primary()) {\n- primaries--;\n+\n+ /**\n+ * The list of ignored unassigned shards (read only). The ignored unassigned shards\n+ * are not part of the formal unassigned list, but are kept around and used to build\n+ * back the list of unassigned shards as part of the routing table.\n+ */\n+ public List<MutableShardRouting> ignored() {\n+ return Collections.unmodifiableList(ignored);\n+ }\n+\n+ /**\n+ * Marks a shard as temporarily ignored and adds it to the ignore unassigned list.\n+ * Should be used with caution, typically,\n+ * the correct usage is to removeAndIgnore from the iterator.\n+ * @see #ignored()\n+ * @see UnassignedIterator#removeAndIgnore()\n+ * @see #isIgnoredEmpty()\n+ */\n+ public void ignoreShard(MutableShardRouting shard) {\n+ if (shard.primary()) {\n+ ignoredPrimaries++;\n+ }\n+ ignored.add(shard);\n+ }\n+\n+ /**\n+ * Takes all unassigned shards that match the given shard id and moves it to the end of the unassigned list.\n+ */\n+ public void moveToEnd(ShardId shardId) {\n+ if (unassigned.isEmpty() == false) {\n+ Iterator<MutableShardRouting> iterator = unassigned.iterator();\n+ List<MutableShardRouting> shardsToMove = Lists.newArrayList();\n+\n+ while (iterator.hasNext()) {\n+ MutableShardRouting next = iterator.next();\n+ if (next.shardId().equals(shardId)) {\n+ shardsToMove.add(next);\n+ iterator.remove();\n }\n- transactionId++;\n }\n- };\n+ if (shardsToMove.isEmpty() == false) {\n+ unassigned.addAll(shardsToMove);\n+ }\n+ }\n }\n \n- public boolean isEmpty() {\n- return unassigned.isEmpty();\n- }\n+ public class UnassignedIterator implements Iterator<MutableShardRouting> {\n \n- public void shuffle() {\n- Collections.shuffle(unassigned);\n+ private final Iterator<MutableShardRouting> iterator;\n+ private MutableShardRouting current;\n+\n+ public UnassignedIterator() {\n+ this.iterator = unassigned.iterator();\n+ }\n+\n+ @Override\n+ public boolean hasNext() {\n+ return iterator.hasNext();\n+ }\n+\n+ @Override\n+ public MutableShardRouting next() {\n+ return current = iterator.next();\n+ }\n+\n+ /**\n+ * Removes and ignores the unassigned shard (will be ignored for this run, but\n+ * will be added back to unassigned once the metadata is constructed again).\n+ * Typically this is used when an allocation decision prevents a shard from being allocated such\n+ * that subsequent consumers of this API won't try to allocate this shard again.\n+ */\n+ public void removeAndIgnore() {\n+ innerRemove();\n+ ignoreShard(current);\n+ }\n+\n+ /**\n+ * Initializes the current unassigned shard and moves it from the unassigned list.\n+ */\n+ public void 
initialize(String nodeId, long version) {\n+ innerRemove();\n+ nodes.assign(new MutableShardRouting(current, version), nodeId);\n+ }\n+\n+\n+ /**\n+ * Unsupported operation, just there for the interface. Use {@link #removeAndIgnore()} or\n+ * {@link #initialize(String, long)}.\n+ */\n+ @Override\n+ public void remove() {\n+ throw new UnsupportedOperationException(\"remove is not supported in unassigned iterator, use removeAndIgnore or initialize\");\n+ }\n+\n+ private void innerRemove() {\n+ iterator.remove();\n+ if (current.primary()) {\n+ primaries--;\n+ }\n+ }\n }\n \n- public void clear() {\n- transactionId++;\n- unassigned.clear();\n- primaries = 0;\n+ /**\n+ * Returns <code>true</code> iff this collection contains one or more non-ignored unassigned shards.\n+ */\n+ public boolean isEmpty() {\n+ return unassigned.isEmpty();\n }\n \n- public void transactionEnd(UnassignedShards shards) {\n- assert shards.source == this && shards.sourceTransactionId == transactionId :\n- \"Expected ID: \" + shards.sourceTransactionId + \" actual: \" + transactionId + \" Expected Source: \" + shards.source + \" actual: \" + this;\n- transactionId++;\n- this.unassigned.clear();\n- this.unassigned.addAll(shards.unassigned);\n- this.primaries = shards.primaries;\n+ /**\n+ * Returns <code>true</code> iff any unassigned shards are marked as temporarily ignored.\n+ * @see UnassignedShards#ignoreShard(MutableShardRouting)\n+ * @see UnassignedIterator#removeAndIgnore()\n+ */\n+ public boolean isIgnoredEmpty() {\n+ return ignored.isEmpty();\n }\n \n- public UnassignedShards transactionBegin() {\n- return new UnassignedShards(this);\n+ public void shuffle() {\n+ Collections.shuffle(unassigned);\n }\n \n+ /**\n+ * Drains all unassigned shards and returns it.\n+ * This method will not drain ignored shards.\n+ */\n public MutableShardRouting[] drain() {\n MutableShardRouting[] mutableShardRoutings = unassigned.toArray(new MutableShardRouting[unassigned.size()]);\n unassigned.clear();\n primaries = 0;\n- transactionId++;\n return mutableShardRoutings;\n }\n }\n@@ -657,6 +738,7 @@ public static boolean assertShardStats(RoutingNodes routingNodes) {\n return true;\n }\n int unassignedPrimaryCount = 0;\n+ int unassignedIgnoredPrimaryCount = 0;\n int inactivePrimaryCount = 0;\n int inactiveShardCount = 0;\n int relocating = 0;\n@@ -713,8 +795,16 @@ public static boolean assertShardStats(RoutingNodes routingNodes) {\n seenShards.add(shard.shardId());\n }\n \n- assert unassignedPrimaryCount == routingNodes.unassignedShards.numPrimaries() :\n- \"Unassigned primaries is [\" + unassignedPrimaryCount + \"] but RoutingNodes returned unassigned primaries [\" + routingNodes.unassigned().numPrimaries() + \"]\";\n+ for (ShardRouting shard : routingNodes.unassigned().ignored()) {\n+ if (shard.primary()) {\n+ unassignedIgnoredPrimaryCount++;\n+ }\n+ }\n+\n+ assert unassignedPrimaryCount == routingNodes.unassignedShards.getNumPrimaries() :\n+ \"Unassigned primaries is [\" + unassignedPrimaryCount + \"] but RoutingNodes returned unassigned primaries [\" + routingNodes.unassigned().getNumPrimaries() + \"]\";\n+ assert unassignedIgnoredPrimaryCount == routingNodes.unassignedShards.getNumIgnoredPrimaries() :\n+ \"Unassigned ignored primaries is [\" + unassignedIgnoredPrimaryCount + \"] but RoutingNodes returned unassigned ignored primaries [\" + routingNodes.unassigned().getNumIgnoredPrimaries() + \"]\";\n assert inactivePrimaryCount == routingNodes.inactivePrimaryCount :\n \"Inactive Primary count [\" + inactivePrimaryCount + \"] but 
RoutingNodes returned inactive primaries [\" + routingNodes.inactivePrimaryCount + \"]\";\n assert inactiveShardCount == routingNodes.inactiveShardCount :",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java",
"status": "modified"
},
{
"diff": "@@ -300,7 +300,7 @@ public Builder updateNodes(RoutingNodes routingNodes) {\n indexBuilder.addShard(refData, shardRoutingEntry);\n }\n }\n- for (MutableShardRouting shardRoutingEntry : Iterables.concat(routingNodes.unassigned(), routingNodes.ignoredUnassigned())) {\n+ for (MutableShardRouting shardRoutingEntry : Iterables.concat(routingNodes.unassigned(), routingNodes.unassigned().ignored())) {\n String index = shardRoutingEntry.index();\n IndexRoutingTable.Builder indexBuilder = indexRoutingTableBuilders.get(index);\n if (indexBuilder == null) {",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java",
"status": "modified"
},
{
"diff": "@@ -177,7 +177,7 @@ private boolean reroute(RoutingAllocation allocation) {\n changed |= electPrimariesAndUnassignedDanglingReplicas(allocation);\n \n // now allocate all the unassigned to available nodes\n- if (allocation.routingNodes().hasUnassigned()) {\n+ if (allocation.routingNodes().unassigned().isEmpty() == false) {\n changed |= shardsAllocators.allocateUnassigned(allocation);\n // elect primaries again, in case this is needed with unassigned allocation\n changed |= electPrimariesAndUnassignedDanglingReplicas(allocation);\n@@ -487,18 +487,7 @@ private boolean applyFailedShard(RoutingAllocation allocation, ShardRouting fail\n // so we give a chance for other allocations and won't create poison failed allocations\n // that can keep other shards from being allocated (because of limits applied on how many\n // shards we can start per node)\n- List<MutableShardRouting> shardsToMove = Lists.newArrayList();\n- for (Iterator<MutableShardRouting> unassignedIt = routingNodes.unassigned().iterator(); unassignedIt.hasNext(); ) {\n- MutableShardRouting unassignedShardRouting = unassignedIt.next();\n- if (unassignedShardRouting.shardId().equals(failedShard.shardId())) {\n- unassignedIt.remove();\n- shardsToMove.add(unassignedShardRouting);\n- }\n- }\n- if (!shardsToMove.isEmpty()) {\n- routingNodes.unassigned().addAll(shardsToMove);\n- }\n-\n+ routingNodes.unassigned().moveToEnd(failedShard.shardId());\n node.moveToUnassigned(unassignedInfo);\n break;\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java",
"status": "modified"
},
{
"diff": "@@ -322,13 +322,13 @@ private NodeSorter newNodeSorter() {\n return new NodeSorter(nodesArray(), weight, this);\n }\n \n- private boolean initialize(RoutingNodes routing, RoutingNodes.UnassignedShards unassigned) {\n+ private boolean initialize(RoutingNodes routing) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"Start distributing Shards\");\n }\n indices.addAll(allocation.routingTable().indicesRouting().keySet());\n buildModelFromAssigned(routing.shards(assignedFilter));\n- return allocateUnassigned(unassigned, routing.ignoredUnassigned());\n+ return allocateUnassigned(routing.unassigned());\n }\n \n private static float absDelta(float lower, float higher) {\n@@ -382,8 +382,7 @@ private boolean balance(boolean onlyAssign) {\n logger.trace(\"Start assigning unassigned shards\");\n }\n }\n- final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned().transactionBegin();\n- boolean changed = initialize(routingNodes, unassigned);\n+ boolean changed = initialize(routingNodes);\n if (onlyAssign == false && changed == false && allocation.deciders().canRebalance(allocation).type() == Type.YES) {\n NodeSorter sorter = newNodeSorter();\n if (nodes.size() > 1) { /* skip if we only have one node */\n@@ -462,7 +461,6 @@ private boolean balance(boolean onlyAssign) {\n }\n }\n }\n- routingNodes.unassigned().transactionEnd(unassigned);\n return changed;\n }\n \n@@ -537,8 +535,7 @@ public boolean move(MutableShardRouting shard, RoutingNode node ) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"Try moving shard [{}] from [{}]\", shard, node);\n }\n- final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned().transactionBegin();\n- boolean changed = initialize(routingNodes, unassigned);\n+ boolean changed = initialize(routingNodes);\n if (!changed) {\n final ModelNode sourceNode = nodes.get(node.nodeId());\n assert sourceNode != null;\n@@ -574,7 +571,6 @@ public boolean move(MutableShardRouting shard, RoutingNode node ) {\n }\n }\n }\n- routingNodes.unassigned().transactionEnd(unassigned);\n return changed;\n }\n \n@@ -607,7 +603,7 @@ private void buildModelFromAssigned(Iterable<MutableShardRouting> shards) {\n * Allocates all given shards on the minimal eligable node for the shards index\n * with respect to the weight function. 
All given shards must be unassigned.\n */\n- private boolean allocateUnassigned(RoutingNodes.UnassignedShards unassigned, List<MutableShardRouting> ignoredUnassigned) {\n+ private boolean allocateUnassigned(RoutingNodes.UnassignedShards unassigned) {\n assert !nodes.isEmpty();\n if (logger.isTraceEnabled()) {\n logger.trace(\"Start allocating unassigned shards\");\n@@ -656,9 +652,9 @@ public int compare(MutableShardRouting o1,\n if (!shard.primary()) {\n boolean drop = deciders.canAllocate(shard, allocation).type() == Type.NO;\n if (drop) {\n- ignoredUnassigned.add(shard);\n+ unassigned.ignoreShard(shard);\n while(i < primaryLength-1 && comparator.compare(primary[i], primary[i+1]) == 0) {\n- ignoredUnassigned.add(primary[++i]);\n+ unassigned.ignoreShard(primary[++i]);\n }\n continue;\n } else {\n@@ -762,10 +758,10 @@ public int compare(MutableShardRouting o1,\n } else if (logger.isTraceEnabled()) {\n logger.trace(\"No Node found to assign shard [{}]\", shard);\n }\n- ignoredUnassigned.add(shard);\n+ unassigned.ignoreShard(shard);\n if (!shard.primary()) { // we could not allocate it and we are a replica - check if we can ignore the other replicas\n while(secondaryLength > 0 && comparator.compare(shard, secondary[secondaryLength-1]) == 0) {\n- ignoredUnassigned.add(secondary[--secondaryLength]);\n+ unassigned.ignoreShard(secondary[--secondaryLength]);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.MutableShardRouting;\n import org.elasticsearch.cluster.routing.RoutingNode;\n+import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.allocation.RerouteExplanation;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n@@ -221,12 +222,11 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain)\n throw new ElasticsearchIllegalArgumentException(\"[allocate] allocation of \" + shardId + \" on node \" + discoNode + \" is not allowed, reason: \" + decision);\n }\n // go over and remove it from the unassigned\n- for (Iterator<MutableShardRouting> it = allocation.routingNodes().unassigned().iterator(); it.hasNext(); ) {\n+ for (RoutingNodes.UnassignedShards.UnassignedIterator it = allocation.routingNodes().unassigned().iterator(); it.hasNext(); ) {\n if (it.next() != shardRouting) {\n continue;\n }\n- it.remove();\n- allocation.routingNodes().assign(shardRouting, routingNode.nodeId());\n+ it.initialize(routingNode.nodeId(), shardRouting.version());\n if (shardRouting.primary()) {\n // we need to clear the post allocation flag, since its an explicit allocation of the primary shard\n // and we want to force allocate it (and create a new index for it)",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateAllocationCommand.java",
"status": "modified"
},
{
"diff": "@@ -165,7 +165,7 @@ protected Settings getIndexSettings(String index) {\n }\n }); // sort for priority ordering\n // First, handle primaries, they must find a place to be allocated on here\n- Iterator<MutableShardRouting> unassignedIterator = routingNodes.unassigned().iterator();\n+ RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n while (unassignedIterator.hasNext()) {\n MutableShardRouting shard = unassignedIterator.next();\n \n@@ -187,8 +187,7 @@ protected Settings getIndexSettings(String index) {\n if (shardState.hasData() == false) {\n logger.trace(\"{}: ignoring allocation, still fetching shard started state\", shard);\n allocation.setHasPendingAsyncFetch();\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n continue;\n }\n shardState.processAllocation(allocation);\n@@ -316,8 +315,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // if we are restoring this shard we still can allocate\n if (shard.restoreSource() == null) {\n // we can't really allocate, so ignore it and continue\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}][{}]: not allocating, number_of_allocated_shards_found [{}], required_number [{}]\", shard.index(), shard.id(), numberOfAllocationsFound, requiredAllocation);\n }\n@@ -347,8 +345,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // we found a match\n changed = true;\n // make sure we create one with the version from the recovered state\n- allocation.routingNodes().assign(new MutableShardRouting(shard, highestVersion), node.nodeId());\n- unassignedIterator.remove();\n+ unassignedIterator.initialize(node.nodeId(), highestVersion);\n \n // found a node, so no throttling, no \"no\", and break out of the loop\n throttledNodes.clear();\n@@ -367,20 +364,18 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // we found a match\n changed = true;\n // make sure we create one with the version from the recovered state\n- allocation.routingNodes().assign(new MutableShardRouting(shard, highestVersion), node.nodeId());\n- unassignedIterator.remove();\n+ unassignedIterator.initialize(node.nodeId(), highestVersion);\n }\n } else {\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}][{}]: throttling allocation [{}] to [{}] on primary allocation\", shard.index(), shard.id(), shard, throttledNodes);\n }\n // we are throttling this, but we have enough to allocate to this node, ignore it for now\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n }\n }\n \n- if (!routingNodes.hasUnassigned()) {\n+ if (routingNodes.unassigned().isEmpty()) {\n return changed;\n }\n \n@@ -410,8 +405,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n \n if (!canBeAllocatedToAtLeastOneNode) {\n logger.trace(\"{}: ignoring allocation, can't be allocated on any node\", shard);\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n continue;\n }\n \n@@ -424,8 +418,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n if (shardStores.hasData() == false) {\n logger.trace(\"{}: ignoring allocation, still fetching shard stores\", shard);\n allocation.setHasPendingAsyncFetch();\n- unassignedIterator.remove();\n- 
routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n continue; // still fetching\n }\n shardStores.processAllocation(allocation);\n@@ -516,16 +509,14 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n logger.debug(\"[{}][{}]: throttling allocation [{}] to [{}] in order to reuse its unallocated persistent store with total_size [{}]\", shard.index(), shard.id(), shard, lastDiscoNodeMatched, new ByteSizeValue(lastSizeMatched));\n }\n // we are throttling this, but we have enough to allocate to this node, ignore it for now\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n } else {\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}][{}]: allocating [{}] to [{}] in order to reuse its unallocated persistent store with total_size [{}]\", shard.index(), shard.id(), shard, lastDiscoNodeMatched, new ByteSizeValue(lastSizeMatched));\n }\n // we found a match\n changed = true;\n- allocation.routingNodes().assign(shard, lastNodeMatched.nodeId());\n- unassignedIterator.remove();\n+ unassignedIterator.initialize(lastNodeMatched.nodeId(), shard.version());\n }\n } else if (hasReplicaData == false) {\n // if we didn't manage to find *any* data (regardless of matching sizes), check if the allocation\n@@ -541,8 +532,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n * see {@link org.elasticsearch.cluster.routing.RoutingService#clusterChanged(ClusterChangedEvent)}).\n */\n changed = true;\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/gateway/local/LocalGatewayAllocator.java",
"status": "modified"
},
{
"diff": "@@ -89,7 +89,7 @@ public void testDelayedAllocationNodeLeavesAndComesBack() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));\n@@ -139,7 +139,7 @@ public void testDelayedAllocationChangeWithSettingTo100ms() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));\n@@ -167,7 +167,7 @@ public void testDelayedAllocationChangeWithSettingTo0() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));",
"filename": "src/test/java/org/elasticsearch/cluster/routing/DelayedAllocationTests.java",
"status": "modified"
},
{
"diff": "@@ -95,7 +95,7 @@ public void testNoDelayedUnassigned() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n@@ -125,7 +125,7 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n@@ -271,7 +271,7 @@ public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Except\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertEquals(clusterState.routingNodes().unassigned().size(), 0);\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n@@ -338,15 +338,14 @@ public void setShardsToDelay(List<ShardRouting> delayedShards) {\n @Override\n public boolean allocateUnassigned(RoutingAllocation allocation) {\n final RoutingNodes routingNodes = allocation.routingNodes();\n- final Iterator<MutableShardRouting> unassignedIterator = routingNodes.unassigned().iterator();\n+ final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n boolean changed = false;\n while (unassignedIterator.hasNext()) {\n MutableShardRouting shard = unassignedIterator.next();\n for (ShardRouting shardToDelay : delayedShards) {\n if (isSameShard(shard, shardToDelay)) {\n changed = true;\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n }\n }\n }",
"filename": "src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -215,12 +215,12 @@ public void testNodeLeave() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n // verify that NODE_LEAVE is the reason for meta\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(true));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo(), notNullValue());\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo().getReason(), equalTo(UnassignedInfo.Reason.NODE_LEFT));\n@@ -245,12 +245,12 @@ public void testFailedShard() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // fail shard\n ShardRouting shardToFail = clusterState.routingNodes().shardsWithState(STARTED).get(0);\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyFailedShards(clusterState, ImmutableList.of(new FailedRerouteAllocation.FailedShard(shardToFail, \"test fail\")))).build();\n // verify the reason and details\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(true));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo(), notNullValue());\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo().getReason(), equalTo(UnassignedInfo.Reason.ALLOCATION_FAILED));\n@@ -307,7 +307,7 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n 
clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n@@ -331,7 +331,7 @@ public void testFindNextDelayedAllocation() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java",
"status": "modified"
},
{
"diff": "@@ -440,7 +440,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n }\n \n }\n- unassigned.clear();\n+ unassigned.drain();\n return changed;\n }\n }), ClusterInfoService.EMPTY);",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/BalanceConfigurationTests.java",
"status": "modified"
},
{
"diff": "@@ -24,16 +24,19 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.MutableShardRouting;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.gateway.none.NoneGatewayAllocator;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n import org.junit.Test;\n \n+import java.util.Iterator;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n@@ -723,4 +726,109 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n assertEquals(numRelocating, 1);\n \n }\n+\n+ public void testRebalanceWithIgnoredUnassignedShards() {\n+ final AtomicBoolean allocateTest1 = new AtomicBoolean(false);\n+ AllocationService strategy = createAllocationService(ImmutableSettings.EMPTY, new NoneGatewayAllocator() {\n+ @Override\n+ public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ if (allocateTest1.get() == false) {\n+ RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned();\n+ RoutingNodes.UnassignedShards.UnassignedIterator iterator = unassigned.iterator();\n+ while (iterator.hasNext()) {\n+ MutableShardRouting next = iterator.next();\n+ if (\"test1\".equals(next.index())) {\n+ iterator.removeAndIgnore();\n+ }\n+\n+ }\n+ }\n+ return super.allocateUnassigned(allocation);\n+ }\n+ });\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").numberOfShards(2).numberOfReplicas(0))\n+ .put(IndexMetaData.builder(\"test1\").numberOfShards(2).numberOfReplicas(0))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .addAsNew(metaData.index(\"test1\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ logger.info(\"start two nodes\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\"))).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n+ }\n+\n+ logger.debug(\"start all the primary shards for test\");\n+ RoutingNodes routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(\"test\", INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ 
assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+\n+ logger.debug(\"now, start 1 more node, check that rebalancing will not happen since we unassigned shards\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n+ .put(newNode(\"node2\")))\n+ .build();\n+ logger.debug(\"reroute and check that nothing has changed\");\n+ RoutingAllocation.Result reroute = strategy.reroute(clusterState);\n+ assertFalse(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(UNASSIGNED));\n+ }\n+ logger.debug(\"now set allocateTest1 to true and reroute we should see the [test1] index initializing\");\n+ allocateTest1.set(true);\n+ reroute = strategy.reroute(clusterState);\n+ assertTrue(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n+ }\n+\n+ logger.debug(\"now start initializing shards and expect exactly one rebalance from node1 to node 2 sicne index [test] is all on node1\");\n+\n+ routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(\"test1\", INITIALIZING)).routingTable();\n+\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+ int numStarted = 0;\n+ int numRelocating = 0;\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ if (routingTable.index(\"test\").shard(i).primaryShard().state() == STARTED) {\n+ numStarted++;\n+ } else if (routingTable.index(\"test\").shard(i).primaryShard().state() == RELOCATING) {\n+ numRelocating++;\n+ }\n+ }\n+ assertEquals(numStarted, 1);\n+ assertEquals(numRelocating, 1);\n+\n+ }\n }\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/ClusterRebalanceRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,7 @@\n public class PriorityComparatorTests extends ElasticsearchTestCase {\n \n public void testPriorityComparatorSort() {\n- RoutingNodes.UnassignedShards shards = new RoutingNodes.UnassignedShards();\n+ RoutingNodes.UnassignedShards shards = new RoutingNodes.UnassignedShards(null);\n int numIndices = randomIntBetween(3, 99);\n IndexMeta[] indices = new IndexMeta[numIndices];\n final Map<String, IndexMeta> map = new HashMap<>();",
"filename": "src/test/java/org/elasticsearch/gateway/local/PriorityComparatorTests.java",
"status": "modified"
}
]
}
|
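The diffs in the row above replace the separate `ignoredUnassigned()` list with an `UnassignedShards` collection that tracks its own ignored shards and exposes an `UnassignedIterator` whose only mutation paths are `removeAndIgnore()` and `initialize(...)`. The sketch below is a simplified, self-contained model of that iterator pattern, not the Elasticsearch class: the shard type is a plain `String` and the assignment step is a stand-in. It illustrates the bug being fixed, namely that a rebalance check which only consults the plain unassigned list can report "nothing unassigned" while the gateway allocator has parked shards on the ignored list.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * Simplified standalone model of the UnassignedShards / UnassignedIterator pattern
 * from the diffs above. The shard type is a plain String and the assignment step is
 * a stand-in; the real class works on MutableShardRouting and assigns via RoutingNodes.
 */
class UnassignedShardsModel implements Iterable<String> {

    private final List<String> unassigned = new ArrayList<>();
    private final List<String> ignored = new ArrayList<>();

    void add(String shard) { unassigned.add(shard); }

    /** Non-ignored unassigned shards only. */
    boolean isEmpty() { return unassigned.isEmpty(); }

    /** Shards parked for this allocation round (e.g. shard state still being fetched). */
    boolean isIgnoredEmpty() { return ignored.isEmpty(); }

    /** What a rebalance decision has to look at: both lists, not just the first one. */
    boolean hasUnassignedShards() { return !isEmpty() || !isIgnoredEmpty(); }

    @Override
    public UnassignedIterator iterator() { return new UnassignedIterator(); }

    class UnassignedIterator implements Iterator<String> {
        private final Iterator<String> it = unassigned.iterator();
        private String current;

        @Override public boolean hasNext() { return it.hasNext(); }
        @Override public String next()     { return current = it.next(); }

        /** Park the shard for this round; it is rebuilt as unassigned on the next round. */
        void removeAndIgnore() { it.remove(); ignored.add(current); }

        /** Assign the shard to a node and drop it from the unassigned list. */
        void initialize(String nodeId) {
            it.remove();
            System.out.println("assigning " + current + " to " + nodeId);
        }

        /** Plain removal is forbidden so shards cannot silently vanish from both lists. */
        @Override public void remove() {
            throw new UnsupportedOperationException("use removeAndIgnore() or initialize()");
        }
    }

    public static void main(String[] args) {
        UnassignedShardsModel shards = new UnassignedShardsModel();
        shards.add("[test1][0]");
        shards.add("[test1][1]");

        // A gateway-allocator-like pass parks everything because shard state is still being fetched.
        for (UnassignedIterator it = shards.iterator(); it.hasNext(); ) {
            it.next();
            it.removeAndIgnore();
        }

        // The bug being fixed: the plain list is empty, yet shards are still unassigned.
        System.out.println("unassigned list empty: " + shards.isEmpty());             // true
        System.out.println("has unassigned shards: " + shards.hasUnassignedShards()); // true
    }
}
```

Throwing from `remove()` forces every caller to state explicitly whether a shard is being assigned or merely deferred, which is what lets a check like `hasUnassignedShards()` stay accurate for deciders such as ClusterRebalanceAllocationDecider.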
{
"body": "This is _cluster/health output after one of nodes restarted:\n\n```\n{\n \"cluster_name\" : \"<cluster_name>\",\n \"status\" : \"yellow\",\n \"timed_out\" : false,\n \"number_of_nodes\" : 14,\n \"number_of_data_nodes\" : 10,\n \"active_primary_shards\" : 1022,\n \"active_shards\" : 1839,\n \"relocating_shards\" : 2,\n \"initializing_shards\" : 0,\n \"unassigned_shards\" : 205,\n \"delayed_unassigned_shards\" : 0,\n \"number_of_pending_tasks\" : 0,\n \"number_of_in_flight_fetch\" : 0\n}\n```\n\nIt says cluster is in `yellow` status but it is relocating shards.\nThis cluster doesn't have custom `cluster.routing.allocation.allow_rebalance` configuration. So, it should be using `indices_all_active`.\n\nNo relocation should happen in this case?\n",
"comments": [
{
"body": "@masaruh all the shards have initialized, so there isn't a reason why relocation wouldn't be allowed. Also, just because the cluster is yellow doesn't mean that the particular index relocating isn't green?\n",
"created_at": "2015-11-11T00:04:25Z"
},
{
"body": "All primaries have initialized but not all shards are, right?\n\nIt says:\n\n```\n /**\n * Re-balancing is allowed only once all shards on all indices are active. \n */\n INDICES_ALL_ACTIVE;\n```\n\nI expect relocation happens only when cluster state is green.\n",
"created_at": "2015-11-11T01:58:22Z"
},
{
"body": "@masaruh ahh okay, I see what you are saying, my mistake for misinterpreting it :)\n",
"created_at": "2015-11-11T02:03:20Z"
}
],
"number": 14670,
"title": "Shard relocation happens while cluster is yellow"
}
|
{
"body": "ClusterRebalanceAllocationDecider did not take unassigned shards into account\nthat are temporarily marked as ignored. This can cause unexpected behavior\nwhen gateway allocator is still fetching shards or has marked shards as ignored\nsince their quorum is not met yet.\n\nCloses #14670\nCloses #14678\n\nthis is a backport of #14678 and closes #14776 \n\n@bleskes can you take a look, I had to backport the UnassignedIterator as well from 2.0 but I think it makes things cleaner here as well.\n",
"number": 15195,
"review_comments": [],
"title": "Take ingored unallocated shards into account when making allocation decision"
}
|
{
"commits": [
{
"message": "Take ingored unallocated shards into account when makeing allocation decision\n\nClusterRebalanceAllocationDecider did not take unassigned shards into account\nthat are temporarily marked as ingored. This can cause unexpected behavior\nwhen gateway allocator is still fetching shards or has marked shareds as ignored\nsince their quorum is not met yet.\n\nCloses #14670\nCloses #14678"
}
],
"files": [
{
"diff": "@@ -183,7 +183,7 @@ public void onFailure(String source, Throwable t) {\n \n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- if (oldState != newState && newState.getRoutingNodes().hasUnassigned()) {\n+ if (oldState != newState && newState.getRoutingNodes().unassigned().size() > 0) {\n logger.trace(\"unassigned shards after shard failures. scheduling a reroute.\");\n routingService.reroute(\"unassigned shards after shard failures, scheduling a reroute\");\n }",
"filename": "src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java",
"status": "modified"
},
{
"diff": "@@ -51,9 +51,7 @@ public class RoutingNodes implements Iterable<RoutingNode> {\n \n private final Map<String, RoutingNode> nodesToShards = newHashMap();\n \n- private final UnassignedShards unassignedShards = new UnassignedShards();\n-\n- private final List<MutableShardRouting> ignoredUnassignedShards = newArrayList();\n+ private final UnassignedShards unassignedShards = new UnassignedShards(this);\n \n private final Map<ShardId, List<MutableShardRouting>> assignedShards = newHashMap();\n \n@@ -170,14 +168,6 @@ public int requiredAverageNumberOfShardsPerNode() {\n return totalNumberOfShards / nodesToShards.size();\n }\n \n- public boolean hasUnassigned() {\n- return !unassignedShards.isEmpty();\n- }\n-\n- public List<MutableShardRouting> ignoredUnassigned() {\n- return this.ignoredUnassignedShards;\n- }\n-\n public UnassignedShards unassigned() {\n return this.unassignedShards;\n }\n@@ -223,14 +213,25 @@ public ObjectIntOpenHashMap<String> nodesPerAttributesCounts(String attributeNam\n return nodesPerAttributesCounts;\n }\n \n+ /**\n+ * Returns <code>true</code> iff this {@link RoutingNodes} instance has any unassigned primaries even if the\n+ * primaries are marked as temporarily ignored.\n+ */\n public boolean hasUnassignedPrimaries() {\n- return unassignedShards.numPrimaries() > 0;\n+ return unassignedShards.getNumPrimaries() + unassignedShards.getNumIgnoredPrimaries() > 0;\n }\n \n+ /**\n+ * Returns <code>true</code> iff this {@link RoutingNodes} instance has any unassigned shards even if the\n+ * shards are marked as temporarily ignored.\n+ * @see UnassignedShards#isEmpty()\n+ * @see UnassignedShards#isIgnoredEmpty()\n+ */\n public boolean hasUnassignedShards() {\n- return !unassignedShards.isEmpty();\n+ return unassignedShards.isEmpty() == false || unassignedShards.isIgnoredEmpty() == false;\n }\n \n+\n public boolean hasInactivePrimaries() {\n return inactivePrimaryCount > 0;\n }\n@@ -532,33 +533,23 @@ public void reinitShadowPrimary(MutableShardRouting candidate) {\n }\n \n public final static class UnassignedShards implements Iterable<MutableShardRouting> {\n-\n+ private final RoutingNodes nodes;\n private final List<MutableShardRouting> unassigned;\n-\n+ private final List<MutableShardRouting> ignored;\n private int primaries = 0;\n- private long transactionId = 0;\n- private final UnassignedShards source;\n- private final long sourceTransactionId;\n+ private int ignoredPrimaries = 0;\n \n- public UnassignedShards(UnassignedShards other) {\n- source = other;\n- sourceTransactionId = other.transactionId;\n- unassigned = new ArrayList<>(other.unassigned);\n- primaries = other.primaries;\n- }\n-\n- public UnassignedShards() {\n+ public UnassignedShards(RoutingNodes nodes) {\n unassigned = new ArrayList<>();\n- source = null;\n- sourceTransactionId = -1;\n+ ignored = new ArrayList<>();\n+ this.nodes = nodes;\n }\n \n public void add(MutableShardRouting mutableShardRouting) {\n if(mutableShardRouting.primary()) {\n primaries++;\n }\n unassigned.add(mutableShardRouting);\n- transactionId++;\n }\n \n public void addAll(Collection<MutableShardRouting> mutableShardRoutings) {\n@@ -571,72 +562,162 @@ public void sort(Comparator<ShardRouting> comparator) {\n CollectionUtil.timSort(unassigned, comparator);\n }\n \n- public int size() {\n- return unassigned.size();\n- }\n+ /**\n+ * Returns the size of the non-ignored unassigned shards\n+ */\n+ public int size() { return unassigned.size(); }\n \n- public int numPrimaries() {\n+ /**\n+ * Returns the size of the temporarily marked as 
ignored unassigned shards\n+ */\n+ public int ignoredSize() { return ignored.size(); }\n+\n+ /**\n+ * Returns the number of non-ignored unassigned primaries\n+ */\n+ public int getNumPrimaries() {\n return primaries;\n }\n \n- @Override\n- public Iterator<MutableShardRouting> iterator() {\n- final Iterator<MutableShardRouting> iterator = unassigned.iterator();\n- return new Iterator<MutableShardRouting>() {\n- private MutableShardRouting current;\n- @Override\n- public boolean hasNext() {\n- return iterator.hasNext();\n- }\n+ /**\n+ * Returns the number of temporarily marked as ignored unassigned primaries\n+ */\n+ public int getNumIgnoredPrimaries() { return ignoredPrimaries; }\n \n- @Override\n- public MutableShardRouting next() {\n- return current = iterator.next();\n- }\n+ public UnassignedIterator iterator() {\n+ return new UnassignedIterator();\n+ }\n \n- @Override\n- public void remove() {\n- iterator.remove();\n- if (current.primary()) {\n- primaries--;\n+\n+ /**\n+ * The list of ignored unassigned shards (read only). The ignored unassigned shards\n+ * are not part of the formal unassigned list, but are kept around and used to build\n+ * back the list of unassigned shards as part of the routing table.\n+ */\n+ public List<MutableShardRouting> ignored() {\n+ return Collections.unmodifiableList(ignored);\n+ }\n+\n+ /**\n+ * Marks a shard as temporarily ignored and adds it to the ignore unassigned list.\n+ * Should be used with caution, typically,\n+ * the correct usage is to removeAndIgnore from the iterator.\n+ * @see #ignored()\n+ * @see UnassignedIterator#removeAndIgnore()\n+ * @see #isIgnoredEmpty()\n+ */\n+ public void ignoreShard(MutableShardRouting shard) {\n+ if (shard.primary()) {\n+ ignoredPrimaries++;\n+ }\n+ ignored.add(shard);\n+ }\n+\n+ /**\n+ * Takes all unassigned shards that match the given shard id and moves it to the end of the unassigned list.\n+ */\n+ public void moveToEnd(ShardId shardId) {\n+ if (unassigned.isEmpty() == false) {\n+ Iterator<MutableShardRouting> iterator = unassigned.iterator();\n+ List<MutableShardRouting> shardsToMove = Lists.newArrayList();\n+\n+ while (iterator.hasNext()) {\n+ MutableShardRouting next = iterator.next();\n+ if (next.shardId().equals(shardId)) {\n+ shardsToMove.add(next);\n+ iterator.remove();\n }\n- transactionId++;\n }\n- };\n+ if (shardsToMove.isEmpty() == false) {\n+ unassigned.addAll(shardsToMove);\n+ }\n+ }\n }\n \n- public boolean isEmpty() {\n- return unassigned.isEmpty();\n- }\n+ public class UnassignedIterator implements Iterator<MutableShardRouting> {\n \n- public void shuffle() {\n- Collections.shuffle(unassigned);\n+ private final Iterator<MutableShardRouting> iterator;\n+ private MutableShardRouting current;\n+\n+ public UnassignedIterator() {\n+ this.iterator = unassigned.iterator();\n+ }\n+\n+ @Override\n+ public boolean hasNext() {\n+ return iterator.hasNext();\n+ }\n+\n+ @Override\n+ public MutableShardRouting next() {\n+ return current = iterator.next();\n+ }\n+\n+ /**\n+ * Removes and ignores the unassigned shard (will be ignored for this run, but\n+ * will be added back to unassigned once the metadata is constructed again).\n+ * Typically this is used when an allocation decision prevents a shard from being allocated such\n+ * that subsequent consumers of this API won't try to allocate this shard again.\n+ */\n+ public void removeAndIgnore() {\n+ innerRemove();\n+ ignoreShard(current);\n+ }\n+\n+ /**\n+ * Initializes the current unassigned shard and moves it from the unassigned list.\n+ */\n+ public void 
initialize(String nodeId, long version) {\n+ innerRemove();\n+ nodes.assign(new MutableShardRouting(current, version), nodeId);\n+ }\n+\n+\n+ /**\n+ * Unsupported operation, just there for the interface. Use {@link #removeAndIgnore()} or\n+ * {@link #initialize(String, long)}.\n+ */\n+ @Override\n+ public void remove() {\n+ throw new UnsupportedOperationException(\"remove is not supported in unassigned iterator, use removeAndIgnore or initialize\");\n+ }\n+\n+ private void innerRemove() {\n+ iterator.remove();\n+ if (current.primary()) {\n+ primaries--;\n+ }\n+ }\n }\n \n- public void clear() {\n- transactionId++;\n- unassigned.clear();\n- primaries = 0;\n+ /**\n+ * Returns <code>true</code> iff this collection contains one or more non-ignored unassigned shards.\n+ */\n+ public boolean isEmpty() {\n+ return unassigned.isEmpty();\n }\n \n- public void transactionEnd(UnassignedShards shards) {\n- assert shards.source == this && shards.sourceTransactionId == transactionId :\n- \"Expected ID: \" + shards.sourceTransactionId + \" actual: \" + transactionId + \" Expected Source: \" + shards.source + \" actual: \" + this;\n- transactionId++;\n- this.unassigned.clear();\n- this.unassigned.addAll(shards.unassigned);\n- this.primaries = shards.primaries;\n+ /**\n+ * Returns <code>true</code> iff any unassigned shards are marked as temporarily ignored.\n+ * @see UnassignedShards#ignoreShard(MutableShardRouting)\n+ * @see UnassignedIterator#removeAndIgnore()\n+ */\n+ public boolean isIgnoredEmpty() {\n+ return ignored.isEmpty();\n }\n \n- public UnassignedShards transactionBegin() {\n- return new UnassignedShards(this);\n+ public void shuffle() {\n+ Collections.shuffle(unassigned);\n }\n \n+ /**\n+ * Drains all unassigned shards and returns it.\n+ * This method will not drain ignored shards.\n+ */\n public MutableShardRouting[] drain() {\n MutableShardRouting[] mutableShardRoutings = unassigned.toArray(new MutableShardRouting[unassigned.size()]);\n unassigned.clear();\n primaries = 0;\n- transactionId++;\n return mutableShardRoutings;\n }\n }\n@@ -657,6 +738,7 @@ public static boolean assertShardStats(RoutingNodes routingNodes) {\n return true;\n }\n int unassignedPrimaryCount = 0;\n+ int unassignedIgnoredPrimaryCount = 0;\n int inactivePrimaryCount = 0;\n int inactiveShardCount = 0;\n int relocating = 0;\n@@ -713,8 +795,16 @@ public static boolean assertShardStats(RoutingNodes routingNodes) {\n seenShards.add(shard.shardId());\n }\n \n- assert unassignedPrimaryCount == routingNodes.unassignedShards.numPrimaries() :\n- \"Unassigned primaries is [\" + unassignedPrimaryCount + \"] but RoutingNodes returned unassigned primaries [\" + routingNodes.unassigned().numPrimaries() + \"]\";\n+ for (ShardRouting shard : routingNodes.unassigned().ignored()) {\n+ if (shard.primary()) {\n+ unassignedIgnoredPrimaryCount++;\n+ }\n+ }\n+\n+ assert unassignedPrimaryCount == routingNodes.unassignedShards.getNumPrimaries() :\n+ \"Unassigned primaries is [\" + unassignedPrimaryCount + \"] but RoutingNodes returned unassigned primaries [\" + routingNodes.unassigned().getNumPrimaries() + \"]\";\n+ assert unassignedIgnoredPrimaryCount == routingNodes.unassignedShards.getNumIgnoredPrimaries() :\n+ \"Unassigned ignored primaries is [\" + unassignedIgnoredPrimaryCount + \"] but RoutingNodes returned unassigned ignored primaries [\" + routingNodes.unassigned().getNumIgnoredPrimaries() + \"]\";\n assert inactivePrimaryCount == routingNodes.inactivePrimaryCount :\n \"Inactive Primary count [\" + inactivePrimaryCount + \"] but 
RoutingNodes returned inactive primaries [\" + routingNodes.inactivePrimaryCount + \"]\";\n assert inactiveShardCount == routingNodes.inactiveShardCount :",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java",
"status": "modified"
},
{
"diff": "@@ -300,7 +300,7 @@ public Builder updateNodes(RoutingNodes routingNodes) {\n indexBuilder.addShard(refData, shardRoutingEntry);\n }\n }\n- for (MutableShardRouting shardRoutingEntry : Iterables.concat(routingNodes.unassigned(), routingNodes.ignoredUnassigned())) {\n+ for (MutableShardRouting shardRoutingEntry : Iterables.concat(routingNodes.unassigned(), routingNodes.unassigned().ignored())) {\n String index = shardRoutingEntry.index();\n IndexRoutingTable.Builder indexBuilder = indexRoutingTableBuilders.get(index);\n if (indexBuilder == null) {",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java",
"status": "modified"
},
{
"diff": "@@ -177,7 +177,7 @@ private boolean reroute(RoutingAllocation allocation) {\n changed |= electPrimariesAndUnassignedDanglingReplicas(allocation);\n \n // now allocate all the unassigned to available nodes\n- if (allocation.routingNodes().hasUnassigned()) {\n+ if (allocation.routingNodes().unassigned().isEmpty() == false) {\n changed |= shardsAllocators.allocateUnassigned(allocation);\n // elect primaries again, in case this is needed with unassigned allocation\n changed |= electPrimariesAndUnassignedDanglingReplicas(allocation);\n@@ -487,18 +487,7 @@ private boolean applyFailedShard(RoutingAllocation allocation, ShardRouting fail\n // so we give a chance for other allocations and won't create poison failed allocations\n // that can keep other shards from being allocated (because of limits applied on how many\n // shards we can start per node)\n- List<MutableShardRouting> shardsToMove = Lists.newArrayList();\n- for (Iterator<MutableShardRouting> unassignedIt = routingNodes.unassigned().iterator(); unassignedIt.hasNext(); ) {\n- MutableShardRouting unassignedShardRouting = unassignedIt.next();\n- if (unassignedShardRouting.shardId().equals(failedShard.shardId())) {\n- unassignedIt.remove();\n- shardsToMove.add(unassignedShardRouting);\n- }\n- }\n- if (!shardsToMove.isEmpty()) {\n- routingNodes.unassigned().addAll(shardsToMove);\n- }\n-\n+ routingNodes.unassigned().moveToEnd(failedShard.shardId());\n node.moveToUnassigned(unassignedInfo);\n break;\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java",
"status": "modified"
},
{
"diff": "@@ -322,13 +322,13 @@ private NodeSorter newNodeSorter() {\n return new NodeSorter(nodesArray(), weight, this);\n }\n \n- private boolean initialize(RoutingNodes routing, RoutingNodes.UnassignedShards unassigned) {\n+ private boolean initialize(RoutingNodes routing) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"Start distributing Shards\");\n }\n indices.addAll(allocation.routingTable().indicesRouting().keySet());\n buildModelFromAssigned(routing.shards(assignedFilter));\n- return allocateUnassigned(unassigned, routing.ignoredUnassigned());\n+ return allocateUnassigned(routing.unassigned());\n }\n \n private static float absDelta(float lower, float higher) {\n@@ -382,8 +382,7 @@ private boolean balance(boolean onlyAssign) {\n logger.trace(\"Start assigning unassigned shards\");\n }\n }\n- final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned().transactionBegin();\n- boolean changed = initialize(routingNodes, unassigned);\n+ boolean changed = initialize(routingNodes);\n if (onlyAssign == false && changed == false && allocation.deciders().canRebalance(allocation).type() == Type.YES) {\n NodeSorter sorter = newNodeSorter();\n if (nodes.size() > 1) { /* skip if we only have one node */\n@@ -462,7 +461,6 @@ private boolean balance(boolean onlyAssign) {\n }\n }\n }\n- routingNodes.unassigned().transactionEnd(unassigned);\n return changed;\n }\n \n@@ -537,8 +535,7 @@ public boolean move(MutableShardRouting shard, RoutingNode node ) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"Try moving shard [{}] from [{}]\", shard, node);\n }\n- final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned().transactionBegin();\n- boolean changed = initialize(routingNodes, unassigned);\n+ boolean changed = initialize(routingNodes);\n if (!changed) {\n final ModelNode sourceNode = nodes.get(node.nodeId());\n assert sourceNode != null;\n@@ -574,7 +571,6 @@ public boolean move(MutableShardRouting shard, RoutingNode node ) {\n }\n }\n }\n- routingNodes.unassigned().transactionEnd(unassigned);\n return changed;\n }\n \n@@ -607,7 +603,7 @@ private void buildModelFromAssigned(Iterable<MutableShardRouting> shards) {\n * Allocates all given shards on the minimal eligable node for the shards index\n * with respect to the weight function. 
All given shards must be unassigned.\n */\n- private boolean allocateUnassigned(RoutingNodes.UnassignedShards unassigned, List<MutableShardRouting> ignoredUnassigned) {\n+ private boolean allocateUnassigned(RoutingNodes.UnassignedShards unassigned) {\n assert !nodes.isEmpty();\n if (logger.isTraceEnabled()) {\n logger.trace(\"Start allocating unassigned shards\");\n@@ -656,9 +652,9 @@ public int compare(MutableShardRouting o1,\n if (!shard.primary()) {\n boolean drop = deciders.canAllocate(shard, allocation).type() == Type.NO;\n if (drop) {\n- ignoredUnassigned.add(shard);\n+ unassigned.ignoreShard(shard);\n while(i < primaryLength-1 && comparator.compare(primary[i], primary[i+1]) == 0) {\n- ignoredUnassigned.add(primary[++i]);\n+ unassigned.ignoreShard(primary[++i]);\n }\n continue;\n } else {\n@@ -762,10 +758,10 @@ public int compare(MutableShardRouting o1,\n } else if (logger.isTraceEnabled()) {\n logger.trace(\"No Node found to assign shard [{}]\", shard);\n }\n- ignoredUnassigned.add(shard);\n+ unassigned.ignoreShard(shard);\n if (!shard.primary()) { // we could not allocate it and we are a replica - check if we can ignore the other replicas\n while(secondaryLength > 0 && comparator.compare(shard, secondary[secondaryLength-1]) == 0) {\n- ignoredUnassigned.add(secondary[--secondaryLength]);\n+ unassigned.ignoreShard(secondary[--secondaryLength]);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.MutableShardRouting;\n import org.elasticsearch.cluster.routing.RoutingNode;\n+import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.allocation.RerouteExplanation;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n@@ -221,12 +222,11 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain)\n throw new ElasticsearchIllegalArgumentException(\"[allocate] allocation of \" + shardId + \" on node \" + discoNode + \" is not allowed, reason: \" + decision);\n }\n // go over and remove it from the unassigned\n- for (Iterator<MutableShardRouting> it = allocation.routingNodes().unassigned().iterator(); it.hasNext(); ) {\n+ for (RoutingNodes.UnassignedShards.UnassignedIterator it = allocation.routingNodes().unassigned().iterator(); it.hasNext(); ) {\n if (it.next() != shardRouting) {\n continue;\n }\n- it.remove();\n- allocation.routingNodes().assign(shardRouting, routingNode.nodeId());\n+ it.initialize(routingNode.nodeId(), shardRouting.version());\n if (shardRouting.primary()) {\n // we need to clear the post allocation flag, since its an explicit allocation of the primary shard\n // and we want to force allocate it (and create a new index for it)",
"filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateAllocationCommand.java",
"status": "modified"
},
{
"diff": "@@ -165,7 +165,7 @@ protected Settings getIndexSettings(String index) {\n }\n }); // sort for priority ordering\n // First, handle primaries, they must find a place to be allocated on here\n- Iterator<MutableShardRouting> unassignedIterator = routingNodes.unassigned().iterator();\n+ RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n while (unassignedIterator.hasNext()) {\n MutableShardRouting shard = unassignedIterator.next();\n \n@@ -187,8 +187,7 @@ protected Settings getIndexSettings(String index) {\n if (shardState.hasData() == false) {\n logger.trace(\"{}: ignoring allocation, still fetching shard started state\", shard);\n allocation.setHasPendingAsyncFetch();\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n continue;\n }\n shardState.processAllocation(allocation);\n@@ -316,8 +315,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // if we are restoring this shard we still can allocate\n if (shard.restoreSource() == null) {\n // we can't really allocate, so ignore it and continue\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}][{}]: not allocating, number_of_allocated_shards_found [{}], required_number [{}]\", shard.index(), shard.id(), numberOfAllocationsFound, requiredAllocation);\n }\n@@ -347,8 +345,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // we found a match\n changed = true;\n // make sure we create one with the version from the recovered state\n- allocation.routingNodes().assign(new MutableShardRouting(shard, highestVersion), node.nodeId());\n- unassignedIterator.remove();\n+ unassignedIterator.initialize(node.nodeId(), highestVersion);\n \n // found a node, so no throttling, no \"no\", and break out of the loop\n throttledNodes.clear();\n@@ -367,20 +364,18 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n // we found a match\n changed = true;\n // make sure we create one with the version from the recovered state\n- allocation.routingNodes().assign(new MutableShardRouting(shard, highestVersion), node.nodeId());\n- unassignedIterator.remove();\n+ unassignedIterator.initialize(node.nodeId(), highestVersion);\n }\n } else {\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}][{}]: throttling allocation [{}] to [{}] on primary allocation\", shard.index(), shard.id(), shard, throttledNodes);\n }\n // we are throttling this, but we have enough to allocate to this node, ignore it for now\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n }\n }\n \n- if (!routingNodes.hasUnassigned()) {\n+ if (routingNodes.unassigned().isEmpty()) {\n return changed;\n }\n \n@@ -410,8 +405,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n \n if (!canBeAllocatedToAtLeastOneNode) {\n logger.trace(\"{}: ignoring allocation, can't be allocated on any node\", shard);\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n continue;\n }\n \n@@ -424,8 +418,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n if (shardStores.hasData() == false) {\n logger.trace(\"{}: ignoring allocation, still fetching shard stores\", shard);\n allocation.setHasPendingAsyncFetch();\n- unassignedIterator.remove();\n- 
routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n continue; // still fetching\n }\n shardStores.processAllocation(allocation);\n@@ -516,16 +509,14 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n logger.debug(\"[{}][{}]: throttling allocation [{}] to [{}] in order to reuse its unallocated persistent store with total_size [{}]\", shard.index(), shard.id(), shard, lastDiscoNodeMatched, new ByteSizeValue(lastSizeMatched));\n }\n // we are throttling this, but we have enough to allocate to this node, ignore it for now\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n } else {\n if (logger.isDebugEnabled()) {\n logger.debug(\"[{}][{}]: allocating [{}] to [{}] in order to reuse its unallocated persistent store with total_size [{}]\", shard.index(), shard.id(), shard, lastDiscoNodeMatched, new ByteSizeValue(lastSizeMatched));\n }\n // we found a match\n changed = true;\n- allocation.routingNodes().assign(shard, lastNodeMatched.nodeId());\n- unassignedIterator.remove();\n+ unassignedIterator.initialize(lastNodeMatched.nodeId(), shard.version());\n }\n } else if (hasReplicaData == false) {\n // if we didn't manage to find *any* data (regardless of matching sizes), check if the allocation\n@@ -541,8 +532,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n * see {@link org.elasticsearch.cluster.routing.RoutingService#clusterChanged(ClusterChangedEvent)}).\n */\n changed = true;\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/gateway/local/LocalGatewayAllocator.java",
"status": "modified"
},
{
"diff": "@@ -89,7 +89,7 @@ public void testDelayedAllocationNodeLeavesAndComesBack() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));\n@@ -139,7 +139,7 @@ public void testDelayedAllocationChangeWithSettingTo100ms() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));\n@@ -167,7 +167,7 @@ public void testDelayedAllocationChangeWithSettingTo0() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().routingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));",
"filename": "src/test/java/org/elasticsearch/cluster/routing/DelayedAllocationTests.java",
"status": "modified"
},
{
"diff": "@@ -95,7 +95,7 @@ public void testNoDelayedUnassigned() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n@@ -125,7 +125,7 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n@@ -271,7 +271,7 @@ public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Except\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertEquals(clusterState.routingNodes().unassigned().size(), 0);\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n@@ -338,15 +338,14 @@ public void setShardsToDelay(List<ShardRouting> delayedShards) {\n @Override\n public boolean allocateUnassigned(RoutingAllocation allocation) {\n final RoutingNodes routingNodes = allocation.routingNodes();\n- final Iterator<MutableShardRouting> unassignedIterator = routingNodes.unassigned().iterator();\n+ final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n boolean changed = false;\n while (unassignedIterator.hasNext()) {\n MutableShardRouting shard = unassignedIterator.next();\n for (ShardRouting shardToDelay : delayedShards) {\n if (isSameShard(shard, shardToDelay)) {\n changed = true;\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n+ unassignedIterator.removeAndIgnore();\n }\n }\n }",
"filename": "src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -215,12 +215,12 @@ public void testNodeLeave() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n // verify that NODE_LEAVE is the reason for meta\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(true));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo(), notNullValue());\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo().getReason(), equalTo(UnassignedInfo.Reason.NODE_LEFT));\n@@ -245,12 +245,12 @@ public void testFailedShard() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // fail shard\n ShardRouting shardToFail = clusterState.routingNodes().shardsWithState(STARTED).get(0);\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyFailedShards(clusterState, ImmutableList.of(new FailedRerouteAllocation.FailedShard(shardToFail, \"test fail\")))).build();\n // verify the reason and details\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(true));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1));\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo(), notNullValue());\n assertThat(clusterState.routingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo().getReason(), equalTo(UnassignedInfo.Reason.ALLOCATION_FAILED));\n@@ -307,7 +307,7 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n 
clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n@@ -331,7 +331,7 @@ public void testFindNextDelayedAllocation() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.routingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.routingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.routingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java",
"status": "modified"
},
{
"diff": "@@ -440,7 +440,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n }\n \n }\n- unassigned.clear();\n+ unassigned.drain();\n return changed;\n }\n }), ClusterInfoService.EMPTY);",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/BalanceConfigurationTests.java",
"status": "modified"
},
{
"diff": "@@ -24,16 +24,19 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.MutableShardRouting;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.gateway.none.NoneGatewayAllocator;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n import org.junit.Test;\n \n+import java.util.Iterator;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n@@ -723,4 +726,109 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n assertEquals(numRelocating, 1);\n \n }\n+\n+ public void testRebalanceWithIgnoredUnassignedShards() {\n+ final AtomicBoolean allocateTest1 = new AtomicBoolean(false);\n+ AllocationService strategy = createAllocationService(ImmutableSettings.EMPTY, new NoneGatewayAllocator() {\n+ @Override\n+ public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ if (allocateTest1.get() == false) {\n+ RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned();\n+ RoutingNodes.UnassignedShards.UnassignedIterator iterator = unassigned.iterator();\n+ while (iterator.hasNext()) {\n+ MutableShardRouting next = iterator.next();\n+ if (\"test1\".equals(next.index())) {\n+ iterator.removeAndIgnore();\n+ }\n+\n+ }\n+ }\n+ return super.allocateUnassigned(allocation);\n+ }\n+ });\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").numberOfShards(2).numberOfReplicas(0))\n+ .put(IndexMetaData.builder(\"test1\").numberOfShards(2).numberOfReplicas(0))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .addAsNew(metaData.index(\"test1\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ logger.info(\"start two nodes\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\"))).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n+ }\n+\n+ logger.debug(\"start all the primary shards for test\");\n+ RoutingNodes routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(\"test\", INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ 
assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+\n+ logger.debug(\"now, start 1 more node, check that rebalancing will not happen since we unassigned shards\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n+ .put(newNode(\"node2\")))\n+ .build();\n+ logger.debug(\"reroute and check that nothing has changed\");\n+ RoutingAllocation.Result reroute = strategy.reroute(clusterState);\n+ assertFalse(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(UNASSIGNED));\n+ }\n+ logger.debug(\"now set allocateTest1 to true and reroute we should see the [test1] index initializing\");\n+ allocateTest1.set(true);\n+ reroute = strategy.reroute(clusterState);\n+ assertTrue(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n+ }\n+\n+ logger.debug(\"now start initializing shards and expect exactly one rebalance from node1 to node 2 sicne index [test] is all on node1\");\n+\n+ routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(\"test1\", INITIALIZING)).routingTable();\n+\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+ int numStarted = 0;\n+ int numRelocating = 0;\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ if (routingTable.index(\"test\").shard(i).primaryShard().state() == STARTED) {\n+ numStarted++;\n+ } else if (routingTable.index(\"test\").shard(i).primaryShard().state() == RELOCATING) {\n+ numRelocating++;\n+ }\n+ }\n+ assertEquals(numStarted, 1);\n+ assertEquals(numRelocating, 1);\n+\n+ }\n }\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/ClusterRebalanceRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,7 @@\n public class PriorityComparatorTests extends ElasticsearchTestCase {\n \n public void testPriorityComparatorSort() {\n- RoutingNodes.UnassignedShards shards = new RoutingNodes.UnassignedShards();\n+ RoutingNodes.UnassignedShards shards = new RoutingNodes.UnassignedShards(null);\n int numIndices = randomIntBetween(3, 99);\n IndexMeta[] indices = new IndexMeta[numIndices];\n final Map<String, IndexMeta> map = new HashMap<>();",
"filename": "src/test/java/org/elasticsearch/gateway/local/PriorityComparatorTests.java",
"status": "modified"
}
]
}
|
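The diff above replaces the raw iterator `remove()` with an `UnassignedIterator` that either assigns a shard or parks it on an ignored list, and keeps primary counters for both lists. Below is a minimal, self-contained sketch of that pattern; `Shard` and `UnassignedPool` are hypothetical simplifications, not the actual Elasticsearch classes.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of an unassigned pool whose iterator only allows "assign to a node"
// or "ignore for this round", mirroring removeAndIgnore()/initialize() above.
public class UnassignedPoolSketch {

    static final class Shard {
        final String id;
        final boolean primary;
        Shard(String id, boolean primary) { this.id = id; this.primary = primary; }
    }

    static final class UnassignedPool {
        private final List<Shard> unassigned = new ArrayList<>();
        private final List<Shard> ignored = new ArrayList<>();
        private int primaries = 0;

        void add(Shard shard) {
            if (shard.primary) {
                primaries++;
            }
            unassigned.add(shard);
        }

        int numPrimaries() { return primaries; }
        List<Shard> ignored() { return ignored; }

        PoolIterator iterator() { return new PoolIterator(); }

        final class PoolIterator implements Iterator<Shard> {
            private final Iterator<Shard> delegate = unassigned.iterator();
            private Shard current;

            @Override public boolean hasNext() { return delegate.hasNext(); }
            @Override public Shard next() { return current = delegate.next(); }

            // Skip the shard for this allocation round but keep it around so it can be
            // added back to the unassigned list when the routing table is rebuilt.
            void removeAndIgnore() {
                innerRemove();
                ignored.add(current);
            }

            // Assign the shard to a node; callers never mutate the unassigned list directly.
            void initialize(String nodeId) {
                innerRemove();
                System.out.println("assigning " + current.id + " to " + nodeId);
            }

            // A bare remove() would let callers drop shards without the bookkeeping above.
            @Override public void remove() {
                throw new UnsupportedOperationException("use removeAndIgnore or initialize");
            }

            private void innerRemove() {
                delegate.remove();
                if (current.primary) {
                    primaries--;
                }
            }
        }
    }

    public static void main(String[] args) {
        UnassignedPool pool = new UnassignedPool();
        pool.add(new Shard("shard-0", true));
        pool.add(new Shard("shard-1", false));

        UnassignedPool.PoolIterator it = pool.iterator();
        while (it.hasNext()) {
            Shard shard = it.next();
            if (shard.primary) {
                it.initialize("node-1");
            } else {
                it.removeAndIgnore();
            }
        }
        System.out.println("unassigned primaries: " + pool.numPrimaries()
                + ", ignored: " + pool.ignored().size());
    }
}
```

Funneling every removal through `removeAndIgnore()` or `initialize()` keeps the primary and ignored counters consistent, which is what the new assertions in `assertShardStats` in the diff rely on.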
{
"body": "In 1.x it is possible via index templates to create an index with an alias with the same name as the index. The index index must match the index template and have an alias with the same name as the index being created.\n\nRelates to #14706.\n",
"comments": [
{
"body": "If someone runs into this issue, wouldn't it be better to go back to the old version, remove the alias and then attempt to upgrade again?\n",
"created_at": "2015-11-18T18:21:42Z"
},
{
"body": "good question. It just the exception that is being thrown is not nice... and we should off course prevent this from happening \n",
"created_at": "2015-11-18T18:23:46Z"
},
{
"body": "I'm +1 on making the exception better, but more reserved on adding this leniency... Leniency tends to hurt a lot in the long term.\n",
"created_at": "2015-11-18T18:32:20Z"
},
{
"body": "+1 on throwing a better exception, but not allowing aliases that clash with index names to be imported.\n",
"created_at": "2015-11-18T18:45:52Z"
},
{
"body": "@jpountz @clintongormley I've changed to PR to thrown a meaningful error instead.\n",
"created_at": "2015-11-19T06:29:25Z"
},
{
"body": "LGTM. this situation cannot be recreated anymore with 2.0+ right? so the only thing we can do is fail and ask the user to go back and remove the alias?\n",
"created_at": "2015-12-01T19:10:50Z"
},
{
"body": "@javanna correct, although just a note: the only way to remove the alias is to delete the index.\n",
"created_at": "2015-12-02T10:33:09Z"
},
{
"body": "@clintongormley @javanna yes, an index can't be created if there is an index template with an alias that has the same name as the index being created. The create index request will fail now with a class cast exception and when this pr gets in it will fail with a more meaningful error.\n\nSo:\n\n```\nPUT /_template/test\n{\n \"template\": \"test\",\n \"aliases\": {\n \"test\": {}\n }\n}\n```\n\n```\nPUT /test\n```\n\nThe create index call fails on >= 2.0. Personally I would prefer that the create index template call would fail too. Now the create index call fails and then in order to create the index the index template needs to be modified. If the create index template call fails basically notify the user earlier in the process.\n\nI'll merge this PR, because we need this for upgrades and I will open a follow up PR the adds extra validation to the create index template api.\n",
"created_at": "2015-12-02T10:56:06Z"
},
{
"body": "++ thanks for the explanation\n",
"created_at": "2015-12-02T11:03:32Z"
}
],
"number": 14842,
"title": "Throw a meaningful error when loading metadata and an alias and index have the same name"
}
|
{
"body": "This can cause index creation to fail down the line.\n\nFollow up of PR #14842\n",
"number": 15184,
"review_comments": [],
"title": "Disallow index template pattern to be the same as an alias name"
}
|
{
"commits": [
{
"message": "index template: Disallow index template pattern to be the same as an alias name\n\nThis can cause index creation to fail down the line."
}
],
"files": [
{
"diff": "@@ -217,6 +217,9 @@ private void validate(PutRequest request) {\n for (Alias alias : request.aliases) {\n //we validate the alias only partially, as we don't know yet to which index it'll get applied to\n aliasValidator.validateAliasStandalone(alias);\n+ if (request.template.equals(alias.name())) {\n+ throw new IllegalArgumentException(\"Alias [\" + alias.name() + \"] cannot be the same as the template pattern [\" + request.template + \"]\");\n+ }\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.action.admin.indices.template.put;\n \n import org.elasticsearch.Version;\n+import org.elasticsearch.action.admin.indices.alias.Alias;\n+import org.elasticsearch.cluster.metadata.AliasValidator;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;\n@@ -28,13 +30,10 @@\n import org.elasticsearch.indices.InvalidIndexTemplateException;\n import org.elasticsearch.test.ESTestCase;\n \n-import java.util.ArrayList;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n \n import static org.hamcrest.CoreMatchers.containsString;\n+import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.CoreMatchers.instanceOf;\n \n public class MetaDataIndexTemplateServiceTests extends ESTestCase {\n@@ -68,6 +67,17 @@ public void testIndexTemplateValidationAccumulatesValidationErrors() {\n assertThat(throwables.get(0).getMessage(), containsString(\"index must have 1 or more primary shards\"));\n }\n \n+ public void testIndexTemplateWithAliasNameEqualToTemplatePattern() {\n+ PutRequest request = new PutRequest(\"api\", \"foobar_template\");\n+ request.template(\"foobar\");\n+ request.aliases(Collections.singleton(new Alias(\"foobar\")));\n+\n+ List<Throwable> errors = putTemplate(request);\n+ assertThat(errors.size(), equalTo(1));\n+ assertThat(errors.get(0), instanceOf(IllegalArgumentException.class));\n+ assertThat(errors.get(0).getMessage(), equalTo(\"Alias [foobar] cannot be the same as the template pattern [foobar]\"));\n+ }\n+\n private static List<Throwable> putTemplate(PutRequest request) {\n MetaDataCreateIndexService createIndexService = new MetaDataCreateIndexService(\n Settings.EMPTY,\n@@ -79,7 +89,7 @@ private static List<Throwable> putTemplate(PutRequest request) {\n new HashSet<>(),\n null,\n null);\n- MetaDataIndexTemplateService service = new MetaDataIndexTemplateService(Settings.EMPTY, null, createIndexService, null);\n+ MetaDataIndexTemplateService service = new MetaDataIndexTemplateService(Settings.EMPTY, null, createIndexService, new AliasValidator(Settings.EMPTY));\n \n final List<Throwable> throwables = new ArrayList<>();\n service.putTemplate(request, new MetaDataIndexTemplateService.PutListener() {",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java",
"status": "modified"
}
]
}
|
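The validation added in `MetaDataIndexTemplateService.validate` above boils down to a single name comparison between the template pattern and each alias. A minimal sketch of that check, using simplified stand-in types rather than the real `PutRequest`/`Alias` API, might look like this:

```java
import java.util.Arrays;
import java.util.List;

// Stand-alone sketch of the template/alias name collision check from the diff above.
public class TemplateAliasValidationSketch {

    static void validate(String templatePattern, List<String> aliasNames) {
        for (String alias : aliasNames) {
            if (templatePattern.equals(alias)) {
                throw new IllegalArgumentException("Alias [" + alias
                        + "] cannot be the same as the template pattern [" + templatePattern + "]");
            }
        }
    }

    public static void main(String[] args) {
        validate("logs", Arrays.asList("logs-read"));      // passes
        try {
            validate("foobar", Arrays.asList("foobar"));   // rejected, as in the unit test above
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```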
{
"body": "The update API allows conflicting field mappings to be introduced which can then bring a cluster down as it keeps trying and failing to sync the mappings. Recreation:\n\n```\nPUT t\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"nested\",\n \"properties\": {\n \"bar\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT t/two/1\n{}\n\nPOST t/two/1/_update\n{\n \"doc\": {\n \"foo\": {\n \"bar\": 5\n }\n }\n}\n\nGET t/_mapping\n```\n\nreturns:\n\n```\n{\n \"t\": {\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"nested\",\n \"properties\": {\n \"bar\": {\n \"type\": \"string\"\n }\n }\n }\n }\n },\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"properties\": {\n \"bar\": {\n \"type\": \"long\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nFrom: https://discuss.elastic.co/t/cluster-keeps-becoming-unresponsive/35677\n",
"comments": [
{
"body": "This is a more general issue of our mapping API. I haven't figured out the cause yet, but here is a minimal recreation that reproduces the issue all the time:\n\n```\nPUT t\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"long\"\n }\n }\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"long\"\n }\n }\n }\n}\n\nGET t/_mapping\n```\n",
"created_at": "2015-11-27T13:19:16Z"
}
],
"number": 15049,
"title": "Update API allows conflicting mappings"
}
|
{
"body": "Today we only check mapping compatibility when adding mappers to the\nlookup structure. However, at this stage, the mapping has already been merged\npartially, so we can leave mappings in a bad state. This commit removes the\ncompatibility check from Mapper.merge entirely and performs it _before_ we\ncall Mapper.merge.\n\nOne minor regression is that the exception messages don't group together errors\nthat come from MappedFieldType.checkCompatibility and Mapper.merge. Since we\nrun the former before the latter, Mapper.merge won't even have a chance to let\nthe user know about conflicts if conflicts were discovered by\nMappedFieldType.checkCompatibility.\n\nClose #15049\n",
"number": 15175,
"review_comments": [],
"title": "Check mapping compatibility up-front."
}
|
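The PR body above describes moving the compatibility check out of `Mapper.merge` so that it runs before any mapping state is modified. Here is a minimal sketch of that check-then-apply shape, assuming a deliberately simplified conflict rule (a field name must keep the same type); `FieldType` is a placeholder, not the real `MappedFieldType`/`FieldTypeLookup` logic.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: collect every conflict against the current state first, then mutate only
// if the whole update is known to be compatible.
public class CheckBeforeMergeSketch {

    static final class FieldType {
        final String name;
        final String type; // e.g. "string", "long"
        FieldType(String name, String type) { this.name = name; this.type = type; }
    }

    private final Map<String, FieldType> mappings = new HashMap<>();

    // Step 1: a read-only pass that reports conflicts without touching the current state.
    List<String> checkCompatibility(List<FieldType> incoming) {
        List<String> conflicts = new ArrayList<>();
        for (FieldType newField : incoming) {
            FieldType existing = mappings.get(newField.name);
            if (existing != null && !existing.type.equals(newField.type)) {
                conflicts.add("mapper [" + newField.name + "] cannot change type from ["
                        + existing.type + "] to [" + newField.type + "]");
            }
        }
        return conflicts;
    }

    // Step 2: apply the update only if the check passed, so a rejected update can never
    // leave the mappings half-merged.
    void merge(List<FieldType> incoming) {
        List<String> conflicts = checkCompatibility(incoming);
        if (!conflicts.isEmpty()) {
            throw new IllegalArgumentException("mapping conflicts: " + conflicts);
        }
        for (FieldType newField : incoming) {
            mappings.put(newField.name, newField);
        }
    }

    public static void main(String[] args) {
        CheckBeforeMergeSketch state = new CheckBeforeMergeSketch();
        state.merge(Arrays.asList(new FieldType("foo.bar", "string")));
        try {
            state.merge(Arrays.asList(new FieldType("foo.bar", "long"))); // conflicting type
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        // the original, valid mapping is untouched
        System.out.println("foo.bar is still a " + state.mappings.get("foo.bar").type);
    }
}
```

Because the first pass never mutates the map, a rejected update such as the `foo.bar` type change from the issue above cannot leave a half-merged mapping behind.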
{
"commits": [
{
"message": "Check mapping compatibility up-front.\n\nToday we only check mapping compatibility when adding mappers to the\nlookup structure. However, at this stage, the mapping has already been merged\npartially, so we can leave mappings in a bad state. This commit removes the\ncompatibility check from Mapper.merge entirely and performs it _before_ we\ncall Mapper.merge.\n\nOne minor regression is that the exception messages don't group together errors\nthat come from MappedFieldType.checkCompatibility and Mapper.merge. Since we\nrun the former before the latter, Mapper.merge won't even have a chance to let\nthe user know about conflicts if conflicts were discovered by\nMappedFieldType.checkCompatibility.\n\nClose #15049"
}
],
"files": [
{
"diff": "@@ -336,8 +336,6 @@ public boolean isParent(String type) {\n \n private void addMappers(Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers, boolean updateAllTypes) {\n assert mappingLock.isWriteLockedByCurrentThread();\n- // first ensure we don't have any incompatible new fields\n- mapperService.checkNewMappersCompatibility(objectMappers, fieldMappers, updateAllTypes);\n \n // update mappers for this document type\n Map<String, ObjectMapper> builder = new HashMap<>(this.objectMappers);\n@@ -356,6 +354,7 @@ private void addMappers(Collection<ObjectMapper> objectMappers, Collection<Field\n \n public MergeResult merge(Mapping mapping, boolean simulate, boolean updateAllTypes) {\n try (ReleasableLock lock = mappingWriteLock.acquire()) {\n+ mapperService.checkMappersCompatibility(type, mapping, updateAllTypes);\n final MergeResult mergeResult = new MergeResult(simulate, updateAllTypes);\n this.mapping.merge(mapping, mergeResult);\n if (simulate == false) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -307,7 +307,6 @@ public void setFieldTypeReference(MappedFieldTypeReference ref) {\n if (ref.get().equals(fieldType()) == false) {\n throw new IllegalStateException(\"Cannot overwrite field type reference to unequal reference\");\n }\n- ref.incrementAssociatedMappers();\n this.fieldTypeRef = ref;\n }\n \n@@ -380,11 +379,6 @@ public void merge(Mapper mergeWith, MergeResult mergeResult) {\n return;\n }\n \n- boolean strict = this.fieldTypeRef.getNumAssociatedMappers() > 1 && mergeResult.updateAllTypes() == false;\n- fieldType().checkCompatibility(fieldMergeWith.fieldType(), subConflicts, strict);\n- for (String conflict : subConflicts) {\n- mergeResult.addConflict(conflict);\n- }\n multiFields.merge(mergeWith, mergeResult);\n \n if (mergeResult.simulate() == false && mergeResult.hasConflicts() == false) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n \n import java.util.ArrayList;\n import java.util.Collection;\n+import java.util.Collections;\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.List;\n@@ -38,18 +39,49 @@ class FieldTypeLookup implements Iterable<MappedFieldType> {\n /** Full field name to field type */\n private final CopyOnWriteHashMap<String, MappedFieldTypeReference> fullNameToFieldType;\n \n+ /** Full field name to types containing a mapping for this full name. */\n+ private final CopyOnWriteHashMap<String, Set<String>> fullNameToTypes;\n+\n /** Index field name to field type */\n private final CopyOnWriteHashMap<String, MappedFieldTypeReference> indexNameToFieldType;\n \n+ /** Index field name to types containing a mapping for this index name. */\n+ private final CopyOnWriteHashMap<String, Set<String>> indexNameToTypes;\n+\n /** Create a new empty instance. */\n public FieldTypeLookup() {\n fullNameToFieldType = new CopyOnWriteHashMap<>();\n+ fullNameToTypes = new CopyOnWriteHashMap<>();\n indexNameToFieldType = new CopyOnWriteHashMap<>();\n+ indexNameToTypes = new CopyOnWriteHashMap<>();\n+ }\n+\n+ private FieldTypeLookup(\n+ CopyOnWriteHashMap<String, MappedFieldTypeReference> fullName,\n+ CopyOnWriteHashMap<String, Set<String>> fullNameToTypes,\n+ CopyOnWriteHashMap<String, MappedFieldTypeReference> indexName,\n+ CopyOnWriteHashMap<String, Set<String>> indexNameToTypes) {\n+ this.fullNameToFieldType = fullName;\n+ this.fullNameToTypes = fullNameToTypes;\n+ this.indexNameToFieldType = indexName;\n+ this.indexNameToTypes = indexNameToTypes;\n }\n \n- private FieldTypeLookup(CopyOnWriteHashMap<String, MappedFieldTypeReference> fullName, CopyOnWriteHashMap<String, MappedFieldTypeReference> indexName) {\n- fullNameToFieldType = fullName;\n- indexNameToFieldType = indexName;\n+ private static CopyOnWriteHashMap<String, Set<String>> addType(CopyOnWriteHashMap<String, Set<String>> map, String key, String type) {\n+ Set<String> types = map.get(key);\n+ if (types == null) {\n+ return map.copyAndPut(key, Collections.singleton(type));\n+ } else if (types.contains(type)) {\n+ // noting to do\n+ return map;\n+ } else {\n+ Set<String> newTypes = new HashSet<>(types.size() + 1);\n+ newTypes.addAll(types);\n+ newTypes.add(type);\n+ assert newTypes.size() == types.size() + 1;\n+ newTypes = Collections.unmodifiableSet(newTypes);\n+ return map.copyAndPut(key, newTypes);\n+ }\n }\n \n /**\n@@ -63,7 +95,9 @@ public FieldTypeLookup copyAndAddAll(String type, Collection<FieldMapper> newFie\n throw new IllegalArgumentException(\"Default mappings should not be added to the lookup\");\n }\n CopyOnWriteHashMap<String, MappedFieldTypeReference> fullName = this.fullNameToFieldType;\n+ CopyOnWriteHashMap<String, Set<String>> fullNameToTypes = this.fullNameToTypes;\n CopyOnWriteHashMap<String, MappedFieldTypeReference> indexName = this.indexNameToFieldType;\n+ CopyOnWriteHashMap<String, Set<String>> indexNameToTypes = this.indexNameToTypes;\n \n for (FieldMapper fieldMapper : newFieldMappers) {\n MappedFieldType fieldType = fieldMapper.fieldType();\n@@ -91,23 +125,39 @@ public FieldTypeLookup copyAndAddAll(String type, Collection<FieldMapper> newFie\n // this new field bridges between two existing field names (a full and index name), which we cannot support\n throw new IllegalStateException(\"insane mappings found. 
field \" + fieldType.names().fullName() + \" maps across types to field \" + fieldType.names().indexName());\n }\n+\n+ fullNameToTypes = addType(fullNameToTypes, fieldType.names().fullName(), type);\n+ indexNameToTypes = addType(indexNameToTypes, fieldType.names().indexName(), type);\n+ }\n+ return new FieldTypeLookup(fullName, fullNameToTypes, indexName, indexNameToTypes);\n+ }\n+\n+ private static boolean beStrict(String type, Set<String> types, boolean updateAllTypes) {\n+ assert types.size() >= 1;\n+ if (updateAllTypes) {\n+ return false;\n+ } else if (types.size() == 1 && types.contains(type)) {\n+ // we are implicitly updating all types\n+ return false;\n+ } else {\n+ return true;\n }\n- return new FieldTypeLookup(fullName, indexName);\n }\n \n /**\n * Checks if the given mappers' field types are compatible with existing field types.\n * If any are not compatible, an IllegalArgumentException is thrown.\n * If updateAllTypes is true, only basic compatibility is checked.\n */\n- public void checkCompatibility(Collection<FieldMapper> newFieldMappers, boolean updateAllTypes) {\n- for (FieldMapper fieldMapper : newFieldMappers) {\n+ public void checkCompatibility(String type, Collection<FieldMapper> fieldMappers, boolean updateAllTypes) {\n+ for (FieldMapper fieldMapper : fieldMappers) {\n MappedFieldTypeReference ref = fullNameToFieldType.get(fieldMapper.fieldType().names().fullName());\n if (ref != null) {\n List<String> conflicts = new ArrayList<>();\n ref.get().checkTypeName(fieldMapper.fieldType(), conflicts);\n if (conflicts.isEmpty()) { // only check compat if they are the same type\n- boolean strict = updateAllTypes == false;\n+ final Set<String> types = fullNameToTypes.get(fieldMapper.fieldType().names().fullName());\n+ boolean strict = beStrict(type, types, updateAllTypes);\n ref.get().checkCompatibility(fieldMapper.fieldType(), conflicts, strict);\n }\n if (conflicts.isEmpty() == false) {\n@@ -121,7 +171,8 @@ public void checkCompatibility(Collection<FieldMapper> newFieldMappers, boolean\n List<String> conflicts = new ArrayList<>();\n indexNameRef.get().checkTypeName(fieldMapper.fieldType(), conflicts);\n if (conflicts.isEmpty()) { // only check compat if they are the same type\n- boolean strict = updateAllTypes == false;\n+ final Set<String> types = indexNameToTypes.get(fieldMapper.fieldType().names().indexName());\n+ boolean strict = beStrict(type, types, updateAllTypes);\n indexNameRef.get().checkCompatibility(fieldMapper.fieldType(), conflicts, strict);\n }\n if (conflicts.isEmpty() == false) {\n@@ -138,13 +189,31 @@ public MappedFieldType get(String field) {\n return ref.get();\n }\n \n+ /** Get the set of types that have a mapping for the given field. */\n+ public Set<String> getTypes(String field) {\n+ Set<String> types = fullNameToTypes.get(field);\n+ if (types == null) {\n+ types = Collections.emptySet();\n+ }\n+ return types;\n+ }\n+\n /** Returns the field type for the given index name */\n public MappedFieldType getByIndexName(String field) {\n MappedFieldTypeReference ref = indexNameToFieldType.get(field);\n if (ref == null) return null;\n return ref.get();\n }\n \n+ /** Get the set of types that have a mapping for the given field. */\n+ public Set<String> getTypesByIndexName(String field) {\n+ Set<String> types = indexNameToTypes.get(field);\n+ if (types == null) {\n+ types = Collections.emptySet();\n+ }\n+ return types;\n+ }\n+\n /**\n * Returns a list of the index names of a simple match regex like pattern against full name and index name.\n */",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/FieldTypeLookup.java",
"status": "modified"
},
{
"diff": "@@ -23,12 +23,10 @@\n */\n public class MappedFieldTypeReference {\n private MappedFieldType fieldType; // the current field type this reference points to\n- private int numAssociatedMappers;\n \n public MappedFieldTypeReference(MappedFieldType fieldType) {\n fieldType.freeze(); // ensure frozen\n this.fieldType = fieldType;\n- this.numAssociatedMappers = 1;\n }\n \n public MappedFieldType get() {\n@@ -40,11 +38,4 @@ public void set(MappedFieldType fieldType) {\n this.fieldType = fieldType;\n }\n \n- public int getNumAssociatedMappers() {\n- return numAssociatedMappers;\n- }\n-\n- public void incrementAssociatedMappers() {\n- ++numAssociatedMappers;\n- }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldTypeReference.java",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.regex.Regex;\n@@ -260,13 +261,10 @@ private DocumentMapper merge(DocumentMapper mapper, boolean updateAllTypes) {\n assert result.hasConflicts() == false; // we already simulated\n return oldMapper;\n } else {\n- List<ObjectMapper> newObjectMappers = new ArrayList<>();\n- List<FieldMapper> newFieldMappers = new ArrayList<>();\n- for (MetadataFieldMapper metadataMapper : mapper.mapping().metadataMappers) {\n- newFieldMappers.add(metadataMapper);\n- }\n- MapperUtils.collect(mapper.mapping().root, newObjectMappers, newFieldMappers);\n- checkNewMappersCompatibility(newObjectMappers, newFieldMappers, updateAllTypes);\n+ Tuple<Collection<ObjectMapper>, Collection<FieldMapper>> newMappers = checkMappersCompatibility(\n+ mapper.type(), mapper.mapping(), updateAllTypes);\n+ Collection<ObjectMapper> newObjectMappers = newMappers.v1();\n+ Collection<FieldMapper> newFieldMappers = newMappers.v2();\n addMappers(mapper.type(), newObjectMappers, newFieldMappers);\n \n for (DocumentTypeListener typeListener : typeListeners) {\n@@ -302,9 +300,9 @@ private boolean assertSerialization(DocumentMapper mapper) {\n return true;\n }\n \n- protected void checkNewMappersCompatibility(Collection<ObjectMapper> newObjectMappers, Collection<FieldMapper> newFieldMappers, boolean updateAllTypes) {\n+ protected void checkMappersCompatibility(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers, boolean updateAllTypes) {\n assert mappingLock.isWriteLockedByCurrentThread();\n- for (ObjectMapper newObjectMapper : newObjectMappers) {\n+ for (ObjectMapper newObjectMapper : objectMappers) {\n ObjectMapper existingObjectMapper = fullPathObjectMappers.get(newObjectMapper.fullPath());\n if (existingObjectMapper != null) {\n MergeResult result = new MergeResult(true, updateAllTypes);\n@@ -315,7 +313,19 @@ protected void checkNewMappersCompatibility(Collection<ObjectMapper> newObjectMa\n }\n }\n }\n- fieldTypes.checkCompatibility(newFieldMappers, updateAllTypes);\n+ fieldTypes.checkCompatibility(type, fieldMappers, updateAllTypes);\n+ }\n+\n+ protected Tuple<Collection<ObjectMapper>, Collection<FieldMapper>> checkMappersCompatibility(\n+ String type, Mapping mapping, boolean updateAllTypes) {\n+ List<ObjectMapper> objectMappers = new ArrayList<>();\n+ List<FieldMapper> fieldMappers = new ArrayList<>();\n+ for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n+ fieldMappers.add(metadataMapper);\n+ }\n+ MapperUtils.collect(mapping.root, objectMappers, fieldMappers);\n+ checkMappersCompatibility(type, objectMappers, fieldMappers, updateAllTypes);\n+ return new Tuple<>(objectMappers, fieldMappers);\n }\n \n protected void addMappers(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -135,6 +135,15 @@ protected NumberFieldType(NumberFieldType ref) {\n super(ref);\n }\n \n+ @Override\n+ public void checkCompatibility(MappedFieldType other,\n+ List<String> conflicts, boolean strict) {\n+ super.checkCompatibility(other, conflicts, strict);\n+ if (numericPrecisionStep() != other.numericPrecisionStep()) {\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [precision_step] values\");\n+ }\n+ }\n+\n public abstract NumberFieldType clone();\n \n @Override\n@@ -251,11 +260,6 @@ public void merge(Mapper mergeWith, MergeResult mergeResult) {\n return;\n }\n NumberFieldMapper nfmMergeWith = (NumberFieldMapper) mergeWith;\n- if (this.fieldTypeRef.getNumAssociatedMappers() > 1 && mergeResult.updateAllTypes() == false) {\n- if (fieldType().numericPrecisionStep() != nfmMergeWith.fieldType().numericPrecisionStep()) {\n- mergeResult.addConflict(\"mapper [\" + fieldType().names().fullName() + \"] is used by multiple types. Set update_all_types to true to update precision_step across all types.\");\n- }\n- }\n \n if (mergeResult.simulate() == false && mergeResult.hasConflicts() == false) {\n this.includeInAll = nfmMergeWith.includeInAll;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java",
"status": "modified"
},
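With the `numAssociatedMappers` bookkeeping removed, a field-specific conflict such as `precision_step` is now reported from `checkCompatibility` itself, and the lookup above decides how strict to be. The following sketch illustrates that pattern with hypothetical classes; it is not the real `MappedFieldType` hierarchy.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical base field type: the common checks (boost, store, ...) would live here. */
class SketchFieldType {
    final String fullName;
    SketchFieldType(String fullName) { this.fullName = fullName; }
    void checkCompatibility(SketchFieldType other, List<String> conflicts, boolean strict) {
        // base checks omitted in this sketch
    }
}

/** Hypothetical numeric field type: appends its own conflict on top of the base checks. */
class SketchNumberFieldType extends SketchFieldType {
    final int precisionStep;
    SketchNumberFieldType(String fullName, int precisionStep) {
        super(fullName);
        this.precisionStep = precisionStep;
    }
    @Override
    void checkCompatibility(SketchFieldType other, List<String> conflicts, boolean strict) {
        super.checkCompatibility(other, conflicts, strict);
        if (other instanceof SketchNumberFieldType
                && precisionStep != ((SketchNumberFieldType) other).precisionStep) {
            conflicts.add("mapper [" + fullName + "] has different [precision_step] values");
        }
    }
}

class PrecisionStepConflictDemo {
    public static void main(String[] args) {
        List<String> conflicts = new ArrayList<>();
        new SketchNumberFieldType("foo", 8)
                .checkCompatibility(new SketchNumberFieldType("foo", 16), conflicts, true);
        System.out.println(conflicts); // [mapper [foo] has different [precision_step] values]
    }
}
```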
{
"diff": "@@ -37,6 +37,8 @@ public void testEmpty() {\n FieldTypeLookup lookup = new FieldTypeLookup();\n assertNull(lookup.get(\"foo\"));\n assertNull(lookup.getByIndexName(\"foo\"));\n+ assertEquals(Collections.emptySet(), lookup.getTypes(\"foo\"));\n+ assertEquals(Collections.emptySet(), lookup.getTypesByIndexName(\"foo\"));\n Collection<String> names = lookup.simpleMatchToFullName(\"foo\");\n assertNotNull(names);\n assertTrue(names.isEmpty());\n@@ -70,6 +72,14 @@ public void testAddNewField() {\n assertNull(lookup.get(\"bar\"));\n assertEquals(f.fieldType(), lookup2.getByIndexName(\"bar\"));\n assertNull(lookup.getByIndexName(\"foo\"));\n+ assertEquals(Collections.emptySet(), lookup.getTypes(\"foo\"));\n+ assertEquals(Collections.emptySet(), lookup.getTypesByIndexName(\"foo\"));\n+ assertEquals(Collections.emptySet(), lookup.getTypes(\"bar\"));\n+ assertEquals(Collections.emptySet(), lookup.getTypesByIndexName(\"bar\"));\n+ assertEquals(Collections.singleton(\"type\"), lookup2.getTypes(\"foo\"));\n+ assertEquals(Collections.emptySet(), lookup2.getTypesByIndexName(\"foo\"));\n+ assertEquals(Collections.emptySet(), lookup2.getTypes(\"bar\"));\n+ assertEquals(Collections.singleton(\"type\"), lookup2.getTypesByIndexName(\"bar\"));\n assertEquals(1, size(lookup2.iterator()));\n }\n \n@@ -144,7 +154,7 @@ public void testAddExistingBridgeName() {\n public void testCheckCompatibilityNewField() {\n FakeFieldMapper f1 = new FakeFieldMapper(\"foo\", \"bar\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup.checkCompatibility(newList(f1), false);\n+ lookup.checkCompatibility(\"type\", newList(f1), false);\n }\n \n public void testCheckCompatibilityMismatchedTypes() {\n@@ -155,14 +165,14 @@ public void testCheckCompatibilityMismatchedTypes() {\n MappedFieldType ft2 = FakeFieldMapper.makeOtherFieldType(\"foo\", \"foo\");\n FieldMapper f2 = new FakeFieldMapper(\"foo\", ft2);\n try {\n- lookup.checkCompatibility(newList(f2), false);\n+ lookup.checkCompatibility(\"type2\", newList(f2), false);\n fail(\"expected type mismatch\");\n } catch (IllegalArgumentException e) {\n assertTrue(e.getMessage().contains(\"cannot be changed from type [faketype] to [otherfaketype]\"));\n }\n // fails even if updateAllTypes == true\n try {\n- lookup.checkCompatibility(newList(f2), true);\n+ lookup.checkCompatibility(\"type2\", newList(f2), true);\n fail(\"expected type mismatch\");\n } catch (IllegalArgumentException e) {\n assertTrue(e.getMessage().contains(\"cannot be changed from type [faketype] to [otherfaketype]\"));\n@@ -178,25 +188,27 @@ public void testCheckCompatibilityConflict() {\n ft2.setBoost(2.0f);\n FieldMapper f2 = new FakeFieldMapper(\"foo\", ft2);\n try {\n- lookup.checkCompatibility(newList(f2), false);\n+ // different type\n+ lookup.checkCompatibility(\"type2\", newList(f2), false);\n fail(\"expected conflict\");\n } catch (IllegalArgumentException e) {\n assertTrue(e.getMessage().contains(\"to update [boost] across all types\"));\n }\n- lookup.checkCompatibility(newList(f2), true); // boost is updateable, so ok if forcing\n+ lookup.checkCompatibility(\"type\", newList(f2), false); // boost is updateable, so ok since we are implicitly updating all types\n+ lookup.checkCompatibility(\"type2\", newList(f2), true); // boost is updateable, so ok if forcing\n // now with a non changeable setting\n MappedFieldType ft3 = FakeFieldMapper.makeFieldType(\"foo\", \"bar\");\n ft3.setStored(true);\n FieldMapper f3 = new FakeFieldMapper(\"foo\", ft3);\n try {\n- lookup.checkCompatibility(newList(f3), 
false);\n+ lookup.checkCompatibility(\"type2\", newList(f3), false);\n fail(\"expected conflict\");\n } catch (IllegalArgumentException e) {\n assertTrue(e.getMessage().contains(\"has different [store] values\"));\n }\n // even with updateAllTypes == true, incompatible\n try {\n- lookup.checkCompatibility(newList(f3), true);\n+ lookup.checkCompatibility(\"type2\", newList(f3), true);\n fail(\"expected conflict\");\n } catch (IllegalArgumentException e) {\n assertTrue(e.getMessage().contains(\"has different [store] values\"));",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java",
"status": "modified"
},
{
"diff": "@@ -25,12 +25,14 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.MergeResult;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.search.SearchHitField;\n@@ -715,28 +717,25 @@ public void testGeoPointMapperMerge() throws Exception {\n String stage1Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"properties\").startObject(\"point\").field(\"type\", \"geo_point\").field(\"lat_lon\", true)\n .field(\"geohash\", true).endObject().endObject().endObject().endObject().string();\n- DocumentMapperParser parser = createIndex(\"test\", settings).mapperService().documentMapperParser();\n- DocumentMapper stage1 = parser.parse(stage1Mapping);\n+ MapperService mapperService = createIndex(\"test\", settings).mapperService();\n+ DocumentMapper stage1 = mapperService.merge(\"type\", new CompressedXContent(stage1Mapping), true, false);\n String stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"properties\").startObject(\"point\").field(\"type\", \"geo_point\").field(\"lat_lon\", false)\n .field(\"geohash\", false).endObject().endObject().endObject().endObject().string();\n- DocumentMapper stage2 = parser.parse(stage2Mapping);\n-\n- MergeResult mergeResult = stage1.merge(stage2.mapping(), false, false);\n- assertThat(mergeResult.hasConflicts(), equalTo(true));\n- assertThat(mergeResult.buildConflicts().length, equalTo(3));\n- // todo better way of checking conflict?\n- assertThat(\"mapper [point] has different [lat_lon]\", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));\n- assertThat(\"mapper [point] has different [geohash]\", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));\n- assertThat(\"mapper [point] has different [geohash_precision]\", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(stage2Mapping), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"mapper [point] has different [lat_lon]\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [point] has different [geohash]\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [point] has different [geohash_precision]\"));\n+ }\n \n // correct mapping and ensure no failures\n stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"properties\").startObject(\"point\").field(\"type\", \"geo_point\").field(\"lat_lon\", true)\n .field(\"geohash\", true).endObject().endObject().endObject().endObject().string();\n- stage2 = parser.parse(stage2Mapping);\n- mergeResult = stage1.merge(stage2.mapping(), false, false);\n- assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(false));\n+ mapperService.merge(\"type\", new 
CompressedXContent(stage2Mapping), false, false);\n }\n \n public void testGeoHashSearch() throws Exception {",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -22,19 +22,22 @@\n import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n+import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.MergeResult;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n \n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.isIn;\n@@ -376,23 +379,21 @@ public void testGeoShapeMapperMerge() throws Exception {\n .startObject(\"shape\").field(\"type\", \"geo_shape\").field(\"tree\", \"geohash\").field(\"strategy\", \"recursive\")\n .field(\"precision\", \"1m\").field(\"tree_levels\", 8).field(\"distance_error_pct\", 0.01).field(\"orientation\", \"ccw\")\n .endObject().endObject().endObject().endObject().string();\n- DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n- DocumentMapper stage1 = parser.parse(stage1Mapping);\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n+ DocumentMapper stage1 = mapperService.merge(\"type\", new CompressedXContent(stage1Mapping), true, false);\n String stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"properties\").startObject(\"shape\").field(\"type\", \"geo_shape\").field(\"tree\", \"quadtree\")\n .field(\"strategy\", \"term\").field(\"precision\", \"1km\").field(\"tree_levels\", 26).field(\"distance_error_pct\", 26)\n .field(\"orientation\", \"cw\").endObject().endObject().endObject().endObject().string();\n- DocumentMapper stage2 = parser.parse(stage2Mapping);\n-\n- MergeResult mergeResult = stage1.merge(stage2.mapping(), false, false);\n- // check correct conflicts\n- assertThat(mergeResult.hasConflicts(), equalTo(true));\n- assertThat(mergeResult.buildConflicts().length, equalTo(4));\n- ArrayList<String> conflicts = new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()));\n- assertThat(\"mapper [shape] has different [strategy]\", isIn(conflicts));\n- assertThat(\"mapper [shape] has different [tree]\", isIn(conflicts));\n- assertThat(\"mapper [shape] has different [tree_levels]\", isIn(conflicts));\n- assertThat(\"mapper [shape] has different [precision]\", isIn(conflicts));\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(stage2Mapping), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"mapper [shape] has different [strategy]\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [shape] has different [tree]\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [shape] has different [tree_levels]\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [shape] has different [precision]\"));\n+ }\n \n // verify nothing changed\n FieldMapper fieldMapper = stage1.mappers().getMapper(\"shape\");\n@@ -411,11 +412,7 
@@ public void testGeoShapeMapperMerge() throws Exception {\n stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"properties\").startObject(\"shape\").field(\"type\", \"geo_shape\").field(\"precision\", \"1m\")\n .field(\"tree_levels\", 8).field(\"distance_error_pct\", 0.001).field(\"orientation\", \"cw\").endObject().endObject().endObject().endObject().string();\n- stage2 = parser.parse(stage2Mapping);\n- mergeResult = stage1.merge(stage2.mapping(), false, false);\n-\n- // verify mapping changes, and ensure no failures\n- assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(false));\n+ mapperService.merge(\"type\", new CompressedXContent(stage2Mapping), false, false);\n \n fieldMapper = stage1.mappers().getMapper(\"shape\");\n assertThat(fieldMapper, instanceOf(GeoShapeFieldMapper.class));",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -22,16 +22,19 @@\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexableField;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.MergeResult;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n import java.util.Arrays;\n \n import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.nullValue;\n@@ -113,9 +116,9 @@ public void testMergeMultiField() throws Exception {\n \n public void testUpgradeFromMultiFieldTypeToMultiFields() throws Exception {\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/multifield/merge/test-mapping1.json\");\n- DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n \n- DocumentMapper docMapper = parser.parse(mapping);\n+ DocumentMapper docMapper = mapperService.merge(\"person\", new CompressedXContent(mapping), true, false);\n \n assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper(\"name\").fieldType().indexOptions());\n assertThat(docMapper.mappers().getMapper(\"name.indexed\"), nullValue());\n@@ -129,12 +132,7 @@ public void testUpgradeFromMultiFieldTypeToMultiFields() throws Exception {\n \n \n mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/multifield/merge/upgrade1.json\");\n- DocumentMapper docMapper2 = parser.parse(mapping);\n-\n- MergeResult mergeResult = docMapper.merge(docMapper2.mapping(), true, false);\n- assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(false));\n-\n- docMapper.merge(docMapper2.mapping(), false, false);\n+ mapperService.merge(\"person\", new CompressedXContent(mapping), false, false);\n \n assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper(\"name\").fieldType().indexOptions());\n \n@@ -151,12 +149,7 @@ public void testUpgradeFromMultiFieldTypeToMultiFields() throws Exception {\n assertThat(f, notNullValue());\n \n mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/multifield/merge/upgrade2.json\");\n- DocumentMapper docMapper3 = parser.parse(mapping);\n-\n- mergeResult = docMapper.merge(docMapper3.mapping(), true, false);\n- assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(false));\n-\n- docMapper.merge(docMapper3.mapping(), false, false);\n+ mapperService.merge(\"person\", new CompressedXContent(mapping), false, false);\n \n assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper(\"name\").fieldType().indexOptions());\n \n@@ -168,24 +161,19 @@ public void testUpgradeFromMultiFieldTypeToMultiFields() throws Exception {\n \n \n mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/multifield/merge/upgrade3.json\");\n- DocumentMapper docMapper4 = parser.parse(mapping);\n- mergeResult = docMapper.merge(docMapper4.mapping(), true, false);\n- 
assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(true));\n- assertThat(mergeResult.buildConflicts()[0], equalTo(\"mapper [name] has different [index] values\"));\n- assertThat(mergeResult.buildConflicts()[1], equalTo(\"mapper [name] has different [store] values\"));\n-\n- mergeResult = docMapper.merge(docMapper4.mapping(), false, false);\n- assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(true));\n-\n- assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper(\"name\").fieldType().indexOptions());\n- assertThat(mergeResult.buildConflicts()[0], equalTo(\"mapper [name] has different [index] values\"));\n- assertThat(mergeResult.buildConflicts()[1], equalTo(\"mapper [name] has different [store] values\"));\n-\n- // There are conflicts, but the `name.not_indexed3` has been added, b/c that field has no conflicts\n+ try {\n+ mapperService.merge(\"person\", new CompressedXContent(mapping), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"mapper [name] has different [index] values\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [name] has different [store] values\"));\n+ }\n+\n+ // There are conflicts, so the `name.not_indexed3` has not been added\n assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper(\"name\").fieldType().indexOptions());\n assertThat(docMapper.mappers().getMapper(\"name.indexed\"), notNullValue());\n assertThat(docMapper.mappers().getMapper(\"name.not_indexed\"), notNullValue());\n assertThat(docMapper.mappers().getMapper(\"name.not_indexed2\"), notNullValue());\n- assertThat(docMapper.mappers().getMapper(\"name.not_indexed3\"), notNullValue());\n+ assertThat(docMapper.mappers().getMapper(\"name.not_indexed3\"), nullValue());\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/multifield/merge/JavaMultiFieldMergeTests.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.index.IndexableFieldType;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -478,7 +479,7 @@ public void testDisableNorms() throws Exception {\n .startObject(\"properties\").startObject(\"field\").field(\"type\", \"string\").endObject().endObject()\n .endObject().endObject().string();\n \n- DocumentMapper defaultMapper = parser.parse(mapping);\n+ DocumentMapper defaultMapper = indexService.mapperService().merge(\"type\", new CompressedXContent(mapping), true, false);\n \n ParsedDocument doc = defaultMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n .startObject()\n@@ -507,10 +508,12 @@ public void testDisableNorms() throws Exception {\n updatedMapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"properties\").startObject(\"field\").field(\"type\", \"string\").startObject(\"norms\").field(\"enabled\", true).endObject()\n .endObject().endObject().endObject().endObject().string();\n- mergeResult = defaultMapper.merge(parser.parse(updatedMapping).mapping(), true, false);\n- assertTrue(mergeResult.hasConflicts());\n- assertEquals(1, mergeResult.buildConflicts().length);\n- assertTrue(mergeResult.buildConflicts()[0].contains(\"different [omit_norms]\"));\n+ try {\n+ defaultMapper.merge(parser.parse(updatedMapping).mapping(), true, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"different [omit_norms]\"));\n+ }\n }\n \n /**",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.MergeResult;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n@@ -557,17 +558,16 @@ public void testBackcompatParsingTwiceDoesNotChangeTokenizeValue() throws Except\n public void testMergingConflicts() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"_timestamp\").field(\"enabled\", true)\n- .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n .field(\"store\", \"yes\")\n .field(\"index\", \"analyzed\")\n .field(\"path\", \"foo\")\n .field(\"default\", \"1970-01-01\")\n .endObject()\n .endObject().endObject().string();\n Settings indexSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build();\n- DocumentMapperParser parser = createIndex(\"test\", indexSettings).mapperService().documentMapperParser();\n+ MapperService mapperService = createIndex(\"test\", indexSettings).mapperService();\n \n- DocumentMapper docMapper = parser.parse(mapping);\n+ DocumentMapper docMapper = mapperService.merge(\"type\", new CompressedXContent(mapping), true, false);\n assertThat(docMapper.timestampFieldMapper().fieldType().fieldDataType().getLoading(), equalTo(MappedFieldType.Loading.LAZY));\n mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"_timestamp\").field(\"enabled\", false)\n@@ -579,20 +579,32 @@ public void testMergingConflicts() throws Exception {\n .endObject()\n .endObject().endObject().string();\n \n- MergeResult mergeResult = docMapper.merge(parser.parse(mapping).mapping(), true, false);\n- List<String> expectedConflicts = new ArrayList<>(Arrays.asList(\n- \"mapper [_timestamp] has different [index] values\",\n- \"mapper [_timestamp] has different [store] values\",\n- \"Cannot update default in _timestamp value. Value is 1970-01-01 now encountering 1970-01-02\",\n- \"Cannot update path in _timestamp value. 
Value is foo path in merged mapping is bar\"));\n-\n- for (String conflict : mergeResult.buildConflicts()) {\n- assertTrue(\"found unexpected conflict [\" + conflict + \"]\", expectedConflicts.remove(conflict));\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(mapping), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"mapper [_timestamp] has different [index] values\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [_timestamp] has different [store] values\"));\n }\n- assertTrue(\"missing conflicts: \" + Arrays.toString(expectedConflicts.toArray()), expectedConflicts.isEmpty());\n+\n assertThat(docMapper.timestampFieldMapper().fieldType().fieldDataType().getLoading(), equalTo(MappedFieldType.Loading.LAZY));\n assertTrue(docMapper.timestampFieldMapper().enabled());\n- assertThat(docMapper.timestampFieldMapper().fieldType().fieldDataType().getFormat(indexSettings), equalTo(\"doc_values\"));\n+\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .field(\"store\", \"yes\")\n+ .field(\"index\", \"analyzed\")\n+ .field(\"path\", \"bar\")\n+ .field(\"default\", \"1970-01-02\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(mapping), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"Cannot update default in _timestamp value. Value is 1970-01-01 now encountering 1970-01-02\"));\n+ assertThat(e.getMessage(), containsString(\"Cannot update path in _timestamp value. Value is foo path in merged mapping is bar\"));\n+ }\n }\n \n public void testBackcompatMergingConflictsForIndexValues() throws Exception {",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java",
"status": "modified"
},
{
"diff": "@@ -48,7 +48,7 @@ public void testAllEnabled() throws Exception {\n public void testAllConflicts() throws Exception {\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/update/all_mapping_create_index.json\");\n String mappingUpdate = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/update/all_mapping_update_with_conflicts.json\");\n- String[] errorMessage = {\"[_all] enabled is true now encountering false\",\n+ String[] errorMessage = {\n \"[_all] has different [omit_norms] values\",\n \"[_all] has different [store] values\",\n \"[_all] has different [store_term_vector] values\",\n@@ -61,6 +61,13 @@ public void testAllConflicts() throws Exception {\n testConflict(mapping, mappingUpdate, errorMessage);\n }\n \n+ public void testAllDisabled() throws Exception {\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(\"mappings\").startObject(TYPE).startObject(\"_all\").field(\"enabled\", true).endObject().endObject().endObject().endObject();\n+ XContentBuilder mappingUpdate = jsonBuilder().startObject().startObject(\"_all\").field(\"enabled\", false).endObject().startObject(\"properties\").startObject(\"text\").field(\"type\", \"string\").endObject().endObject().endObject();\n+ String errorMessage = \"[_all] enabled is true now encountering false\";\n+ testConflict(mapping.string(), mappingUpdate.string(), errorMessage);\n+ }\n+\n public void testAllWithDefault() throws Exception {\n String defaultMapping = jsonBuilder().startObject().startObject(\"_default_\")\n .startObject(\"_all\")",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingOnClusterIT.java",
"status": "modified"
},
{
"diff": "@@ -123,14 +123,14 @@ public void testConflictSameType() throws Exception {\n mapperService.merge(\"type\", new CompressedXContent(update.string()), false, false);\n fail();\n } catch (IllegalArgumentException e) {\n- assertThat(e.getMessage(), containsString(\"Merge failed\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [foo] cannot be changed from type [long] to [double]\"));\n }\n \n try {\n mapperService.merge(\"type\", new CompressedXContent(update.string()), false, false);\n fail();\n } catch (IllegalArgumentException e) {\n- assertThat(e.getMessage(), containsString(\"Merge failed\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [foo] cannot be changed from type [long] to [double]\"));\n }\n \n assertTrue(mapperService.documentMapper(\"type\").mapping().root().getMapper(\"foo\") instanceof LongFieldMapper);\n@@ -167,7 +167,6 @@ public void testConflictNewType() throws Exception {\n }\n \n // same as the testConflictNewType except that the mapping update is on an existing type\n- @AwaitsFix(bugUrl=\"https://github.com/elastic/elasticsearch/issues/15049\")\n public void testConflictNewTypeUpdate() throws Exception {\n XContentBuilder mapping1 = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n .startObject(\"properties\").startObject(\"foo\").field(\"type\", \"long\").endObject()",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingTests.java",
"status": "modified"
},
{
"diff": "@@ -140,7 +140,7 @@ public void testUpdateMappingWithConflicts() throws Exception {\n .setSource(\"{\\\"type\\\":{\\\"properties\\\":{\\\"body\\\":{\\\"type\\\":\\\"integer\\\"}}}}\").execute().actionGet();\n fail(\"Expected MergeMappingException\");\n } catch (IllegalArgumentException e) {\n- assertThat(e.getMessage(), containsString(\"mapper [body] of different type\"));\n+ assertThat(e.getMessage(), containsString(\"mapper [body] cannot be changed from type [string] to [int]\"));\n }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/indices/mapping/UpdateMappingIntegrationIT.java",
"status": "modified"
}
]
}
|
{
"body": "The update API allows conflicting field mappings to be introduced which can then bring a cluster down as it keeps trying and failing to sync the mappings. Recreation:\n\n```\nPUT t\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"nested\",\n \"properties\": {\n \"bar\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT t/two/1\n{}\n\nPOST t/two/1/_update\n{\n \"doc\": {\n \"foo\": {\n \"bar\": 5\n }\n }\n}\n\nGET t/_mapping\n```\n\nreturns:\n\n```\n{\n \"t\": {\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"nested\",\n \"properties\": {\n \"bar\": {\n \"type\": \"string\"\n }\n }\n }\n }\n },\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"properties\": {\n \"bar\": {\n \"type\": \"long\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nFrom: https://discuss.elastic.co/t/cluster-keeps-becoming-unresponsive/35677\n",
"comments": [
{
"body": "This is a more general issue of our mapping API. I haven't figured out the cause yet, but here is a minimal recreation that reproduces the issue all the time:\n\n```\nPUT t\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"long\"\n }\n }\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"long\"\n }\n }\n }\n}\n\nGET t/_mapping\n```\n",
"created_at": "2015-11-27T13:19:16Z"
}
],
"number": 15049,
"title": "Update API allows conflicting mappings"
}
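For reference, the test changes in the PR above show the behaviour after the fix: a conflicting mapping update now fails the whole merge with an `IllegalArgumentException` instead of being partially applied and endlessly re-synced across the cluster. The snippet below is a test-style sketch modelled on `UpdateMappingTests` from the diff, and assumes the `ESSingleNodeTestCase` helpers used there.

```java
import org.elasticsearch.common.compress.CompressedXContent;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.test.ESSingleNodeTestCase;

import static org.hamcrest.Matchers.containsString;

public class ConflictingUpdateSketchTests extends ESSingleNodeTestCase {

    public void testConflictingUpdateIsRejected() throws Exception {
        MapperService mapperService = createIndex("test").mapperService();
        mapperService.merge("type1",
                new CompressedXContent("{\"type1\":{\"properties\":{\"foo\":{\"type\":\"long\"}}}}"), true, false);
        try {
            // another type tries to map the same field with an incompatible type
            mapperService.merge("type2",
                    new CompressedXContent("{\"type2\":{\"properties\":{\"foo\":{\"type\":\"double\"}}}}"), false, false);
            fail("expected the conflicting update to be rejected");
        } catch (IllegalArgumentException e) {
            assertThat(e.getMessage(), containsString("mapper [foo] cannot be changed from type [long] to [double]"));
        }
    }
}
```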
|
{
"body": "This adds safety that you can't index into the `_default_` type (it was possible\nbefore), and can't add default mappers to the field type lookups (was not\nhappening in tests but I think this is still a good check).\n\nAlso MapperService.types() now excludes `_default` so that eg. the `ids` query\ndoes not try to search on this type anymore.\n\nRelated to #15049\n",
"number": 15156,
"review_comments": [],
"title": "Don't treat _default_ as a regular type."
}
|
{
"commits": [
{
"message": "Don't treat _default_ as a regular type.\n\nThis adds safety that you can't index into the `_default_` type (it was possible\nbefore), and can't add default mappers to the field type lookups (was not\nhappening in tests but I think this is still a good check).\n\nAlso MapperService.types() now excludes `_default` so that eg. the `ids` query\ndoes not try to search on this type anymore."
}
],
"files": [
{
"diff": "@@ -351,7 +351,7 @@ private void addMappers(Collection<ObjectMapper> objectMappers, Collection<Field\n this.fieldMappers = this.fieldMappers.copyAndAllAll(fieldMappers);\n \n // finally update for the entire index\n- mapperService.addMappers(objectMappers, fieldMappers);\n+ mapperService.addMappers(type, objectMappers, fieldMappers);\n }\n \n public MergeResult merge(Mapping mapping, boolean simulate, boolean updateAllTypes) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -79,6 +79,10 @@ public ParsedDocument parseDocument(SourceToParse source) throws MapperParsingEx\n }\n \n private ParsedDocument innerParseDocument(SourceToParse source) throws MapperParsingException {\n+ if (docMapper.type().equals(MapperService.DEFAULT_MAPPING)) {\n+ throw new IllegalArgumentException(\"It is forbidden to index into the default mapping [\" + MapperService.DEFAULT_MAPPING + \"]\");\n+ }\n+\n ParseContext.InternalParseContext context = cache.get();\n \n final Mapping mapping = docMapper.mapping();",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.List;\n+import java.util.Objects;\n import java.util.Set;\n \n /**\n@@ -56,7 +57,11 @@ private FieldTypeLookup(CopyOnWriteHashMap<String, MappedFieldTypeReference> ful\n * from the provided fields. If a field already exists, the field type will be updated\n * to use the new mappers field type.\n */\n- public FieldTypeLookup copyAndAddAll(Collection<FieldMapper> newFieldMappers) {\n+ public FieldTypeLookup copyAndAddAll(String type, Collection<FieldMapper> newFieldMappers) {\n+ Objects.requireNonNull(type, \"type must not be null\");\n+ if (MapperService.DEFAULT_MAPPING.equals(type)) {\n+ throw new IllegalArgumentException(\"Default mappings should not be added to the lookup\");\n+ }\n CopyOnWriteHashMap<String, MappedFieldTypeReference> fullName = this.fullNameToFieldType;\n CopyOnWriteHashMap<String, MappedFieldTypeReference> indexName = this.indexNameToFieldType;\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/FieldTypeLookup.java",
"status": "modified"
},
{
"diff": "@@ -267,7 +267,7 @@ private DocumentMapper merge(DocumentMapper mapper, boolean updateAllTypes) {\n }\n MapperUtils.collect(mapper.mapping().root, newObjectMappers, newFieldMappers);\n checkNewMappersCompatibility(newObjectMappers, newFieldMappers, updateAllTypes);\n- addMappers(newObjectMappers, newFieldMappers);\n+ addMappers(mapper.type(), newObjectMappers, newFieldMappers);\n \n for (DocumentTypeListener typeListener : typeListeners) {\n typeListener.beforeCreate(mapper);\n@@ -318,7 +318,7 @@ protected void checkNewMappersCompatibility(Collection<ObjectMapper> newObjectMa\n fieldTypes.checkCompatibility(newFieldMappers, updateAllTypes);\n }\n \n- protected void addMappers(Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers) {\n+ protected void addMappers(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers) {\n assert mappingLock.isWriteLockedByCurrentThread();\n ImmutableOpenMap.Builder<String, ObjectMapper> fullPathObjectMappers = ImmutableOpenMap.builder(this.fullPathObjectMappers);\n for (ObjectMapper objectMapper : objectMappers) {\n@@ -328,7 +328,7 @@ protected void addMappers(Collection<ObjectMapper> objectMappers, Collection<Fie\n }\n }\n this.fullPathObjectMappers = fullPathObjectMappers.build();\n- this.fieldTypes = this.fieldTypes.copyAndAddAll(fieldMappers);\n+ this.fieldTypes = this.fieldTypes.copyAndAddAll(type, fieldMappers);\n }\n \n public DocumentMapper parse(String mappingType, CompressedXContent mappingSource, boolean applyDefault) throws MapperParsingException {\n@@ -345,10 +345,21 @@ public boolean hasMapping(String mappingType) {\n return mappers.containsKey(mappingType);\n }\n \n+ /**\n+ * Return the set of concrete types that have a mapping.\n+ * NOTE: this does not return the default mapping.\n+ */\n public Collection<String> types() {\n- return mappers.keySet();\n+ final Set<String> types = new HashSet<>(mappers.keySet());\n+ types.remove(DEFAULT_MAPPING);\n+ return Collections.unmodifiableSet(types);\n }\n \n+ /**\n+ * Return the {@link DocumentMapper} for the given type. By using the special\n+ * {@value #DEFAULT_MAPPING} type, you can get a {@link DocumentMapper} for\n+ * the default mapping.\n+ */\n public DocumentMapper documentMapper(String type) {\n return mappers.get(type);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import java.io.IOException;\n import java.util.Arrays;\n import java.util.Collection;\n+import java.util.Collections;\n import java.util.Iterator;\n import java.util.List;\n \n@@ -47,10 +48,20 @@ public void testEmpty() {\n assertFalse(itr.hasNext());\n }\n \n+ public void testDefaultMapping() {\n+ FieldTypeLookup lookup = new FieldTypeLookup();\n+ try {\n+ lookup.copyAndAddAll(MapperService.DEFAULT_MAPPING, Collections.emptyList());\n+ fail();\n+ } catch (IllegalArgumentException expected) {\n+ assertEquals(\"Default mappings should not be added to the lookup\", expected.getMessage());\n+ }\n+ }\n+\n public void testAddNewField() {\n FieldTypeLookup lookup = new FieldTypeLookup();\n FakeFieldMapper f = new FakeFieldMapper(\"foo\", \"bar\");\n- FieldTypeLookup lookup2 = lookup.copyAndAddAll(newList(f));\n+ FieldTypeLookup lookup2 = lookup.copyAndAddAll(\"type\", newList(f));\n assertNull(lookup.get(\"foo\"));\n assertNull(lookup.get(\"bar\"));\n assertNull(lookup.getByIndexName(\"foo\"));\n@@ -67,8 +78,8 @@ public void testAddExistingField() {\n MappedFieldType originalFieldType = f.fieldType();\n FakeFieldMapper f2 = new FakeFieldMapper(\"foo\", \"foo\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f));\n- FieldTypeLookup lookup2 = lookup.copyAndAddAll(newList(f2));\n+ lookup = lookup.copyAndAddAll(\"type1\", newList(f));\n+ FieldTypeLookup lookup2 = lookup.copyAndAddAll(\"type2\", newList(f2));\n \n assertNotSame(originalFieldType, f.fieldType());\n assertSame(f.fieldType(), f2.fieldType());\n@@ -82,8 +93,8 @@ public void testAddExistingIndexName() {\n FakeFieldMapper f2 = new FakeFieldMapper(\"bar\", \"foo\");\n MappedFieldType originalFieldType = f.fieldType();\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f));\n- FieldTypeLookup lookup2 = lookup.copyAndAddAll(newList(f2));\n+ lookup = lookup.copyAndAddAll(\"type1\", newList(f));\n+ FieldTypeLookup lookup2 = lookup.copyAndAddAll(\"type2\", newList(f2));\n \n assertNotSame(originalFieldType, f.fieldType());\n assertSame(f.fieldType(), f2.fieldType());\n@@ -98,8 +109,8 @@ public void testAddExistingFullName() {\n FakeFieldMapper f2 = new FakeFieldMapper(\"foo\", \"bar\");\n MappedFieldType originalFieldType = f.fieldType();\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f));\n- FieldTypeLookup lookup2 = lookup.copyAndAddAll(newList(f2));\n+ lookup = lookup.copyAndAddAll(\"type1\", newList(f));\n+ FieldTypeLookup lookup2 = lookup.copyAndAddAll(\"type2\", newList(f2));\n \n assertNotSame(originalFieldType, f.fieldType());\n assertSame(f.fieldType(), f2.fieldType());\n@@ -113,18 +124,18 @@ public void testAddExistingBridgeName() {\n FakeFieldMapper f = new FakeFieldMapper(\"foo\", \"foo\");\n FakeFieldMapper f2 = new FakeFieldMapper(\"bar\", \"bar\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f, f2));\n+ lookup = lookup.copyAndAddAll(\"type1\", newList(f, f2));\n \n try {\n FakeFieldMapper f3 = new FakeFieldMapper(\"foo\", \"bar\");\n- lookup.copyAndAddAll(newList(f3));\n+ lookup.copyAndAddAll(\"type2\", newList(f3));\n } catch (IllegalStateException e) {\n assertTrue(e.getMessage().contains(\"insane mappings\"));\n }\n \n try {\n FakeFieldMapper f3 = new FakeFieldMapper(\"bar\", \"foo\");\n- lookup.copyAndAddAll(newList(f3));\n+ lookup.copyAndAddAll(\"type2\", newList(f3));\n } catch (IllegalStateException e) {\n 
assertTrue(e.getMessage().contains(\"insane mappings\"));\n }\n@@ -139,7 +150,7 @@ public void testCheckCompatibilityNewField() {\n public void testCheckCompatibilityMismatchedTypes() {\n FieldMapper f1 = new FakeFieldMapper(\"foo\", \"bar\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f1));\n+ lookup = lookup.copyAndAddAll(\"type\", newList(f1));\n \n MappedFieldType ft2 = FakeFieldMapper.makeOtherFieldType(\"foo\", \"foo\");\n FieldMapper f2 = new FakeFieldMapper(\"foo\", ft2);\n@@ -161,7 +172,7 @@ public void testCheckCompatibilityMismatchedTypes() {\n public void testCheckCompatibilityConflict() {\n FieldMapper f1 = new FakeFieldMapper(\"foo\", \"bar\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f1));\n+ lookup = lookup.copyAndAddAll(\"type\", newList(f1));\n \n MappedFieldType ft2 = FakeFieldMapper.makeFieldType(\"foo\", \"bar\");\n ft2.setBoost(2.0f);\n@@ -196,7 +207,7 @@ public void testSimpleMatchIndexNames() {\n FakeFieldMapper f1 = new FakeFieldMapper(\"foo\", \"baz\");\n FakeFieldMapper f2 = new FakeFieldMapper(\"bar\", \"boo\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f1, f2));\n+ lookup = lookup.copyAndAddAll(\"type\", newList(f1, f2));\n Collection<String> names = lookup.simpleMatchToIndexNames(\"b*\");\n assertTrue(names.contains(\"baz\"));\n assertTrue(names.contains(\"boo\"));\n@@ -206,7 +217,7 @@ public void testSimpleMatchFullNames() {\n FakeFieldMapper f1 = new FakeFieldMapper(\"foo\", \"baz\");\n FakeFieldMapper f2 = new FakeFieldMapper(\"bar\", \"boo\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f1, f2));\n+ lookup = lookup.copyAndAddAll(\"type\", newList(f1, f2));\n Collection<String> names = lookup.simpleMatchToFullName(\"b*\");\n assertTrue(names.contains(\"foo\"));\n assertTrue(names.contains(\"bar\"));\n@@ -215,7 +226,7 @@ public void testSimpleMatchFullNames() {\n public void testIteratorImmutable() {\n FakeFieldMapper f1 = new FakeFieldMapper(\"foo\", \"bar\");\n FieldTypeLookup lookup = new FieldTypeLookup();\n- lookup = lookup.copyAndAddAll(newList(f1));\n+ lookup = lookup.copyAndAddAll(\"type\", newList(f1));\n \n try {\n Iterator<MappedFieldType> itr = lookup.iterator();",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,8 @@\n \n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n+import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.index.IndexService;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.junit.Rule;\n import org.junit.rules.ExpectedException;\n@@ -31,6 +33,11 @@\n import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.Matchers.hasToString;\n \n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.concurrent.ExecutionException;\n+\n public class MapperServiceTests extends ESSingleNodeTestCase {\n @Rule\n public ExpectedException expectedException = ExpectedException.none();\n@@ -82,4 +89,56 @@ public void testTypeNameTooLong() {\n .execute()\n .actionGet();\n }\n+\n+ public void testTypes() throws Exception {\n+ IndexService indexService1 = createIndex(\"index1\");\n+ MapperService mapperService = indexService1.mapperService();\n+ assertEquals(Collections.emptySet(), mapperService.types());\n+\n+ mapperService.merge(\"type1\", new CompressedXContent(\"{\\\"type1\\\":{}}\"), true, false);\n+ assertNull(mapperService.documentMapper(MapperService.DEFAULT_MAPPING));\n+ assertEquals(Collections.singleton(\"type1\"), mapperService.types());\n+\n+ mapperService.merge(MapperService.DEFAULT_MAPPING, new CompressedXContent(\"{\\\"_default_\\\":{}}\"), true, false);\n+ assertNotNull(mapperService.documentMapper(MapperService.DEFAULT_MAPPING));\n+ assertEquals(Collections.singleton(\"type1\"), mapperService.types());\n+\n+ mapperService.merge(\"type2\", new CompressedXContent(\"{\\\"type2\\\":{}}\"), true, false);\n+ assertNotNull(mapperService.documentMapper(MapperService.DEFAULT_MAPPING));\n+ assertEquals(new HashSet<>(Arrays.asList(\"type1\", \"type2\")), mapperService.types());\n+ }\n+\n+ public void testIndexIntoDefaultMapping() throws Throwable {\n+ // 1. test implicit index creation\n+ try {\n+ client().prepareIndex(\"index1\", MapperService.DEFAULT_MAPPING, \"1\").setSource(\"{\").execute().get();\n+ fail();\n+ } catch (Throwable t) {\n+ if (t instanceof ExecutionException) {\n+ t = ((ExecutionException) t).getCause();\n+ }\n+ if (t instanceof IllegalArgumentException) {\n+ assertEquals(\"It is forbidden to index into the default mapping [_default_]\", t.getMessage());\n+ } else {\n+ throw t;\n+ }\n+ }\n+\n+ // 2. already existing index\n+ IndexService indexService = createIndex(\"index2\");\n+ try {\n+ client().prepareIndex(\"index2\", MapperService.DEFAULT_MAPPING, \"2\").setSource().execute().get();\n+ fail();\n+ } catch (Throwable t) {\n+ if (t instanceof ExecutionException) {\n+ t = ((ExecutionException) t).getCause();\n+ }\n+ if (t instanceof IllegalArgumentException) {\n+ assertEquals(\"It is forbidden to index into the default mapping [_default_]\", t.getMessage());\n+ } else {\n+ throw t;\n+ }\n+ }\n+ assertFalse(indexService.mapperService().hasMapping(MapperService.DEFAULT_MAPPING));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java",
"status": "modified"
}
]
}
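Condensed from `MapperServiceTests` in the diff above, the contract after this change is that the default mapping is still stored but never reported as a concrete type, and documents can no longer be indexed into it. The sketch below restates that contract under the same test assumptions; it mirrors the tests rather than adding new behaviour.

```java
import java.util.Collections;
import java.util.concurrent.ExecutionException;

import org.elasticsearch.common.compress.CompressedXContent;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.test.ESSingleNodeTestCase;

public class DefaultMappingSketchTests extends ESSingleNodeTestCase {

    public void testDefaultMappingIsNotARegularType() throws Exception {
        MapperService mapperService = createIndex("index1").mapperService();

        mapperService.merge(MapperService.DEFAULT_MAPPING, new CompressedXContent("{\"_default_\":{}}"), true, false);
        mapperService.merge("type1", new CompressedXContent("{\"type1\":{}}"), true, false);

        // the default mapping is kept around ...
        assertNotNull(mapperService.documentMapper(MapperService.DEFAULT_MAPPING));
        // ... but types() only reports concrete types
        assertEquals(Collections.singleton("type1"), mapperService.types());

        // and indexing into it is rejected by DocumentParser, as shown in the diff above
        try {
            client().prepareIndex("index1", MapperService.DEFAULT_MAPPING, "1").setSource("{}").execute().get();
            fail();
        } catch (Exception e) {
            Throwable cause = e instanceof ExecutionException ? e.getCause() : e;
            assertEquals("It is forbidden to index into the default mapping [_default_]", cause.getMessage());
        }
    }
}
```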
|
{
"body": "_From @gpaul on November 17, 2015 10:43_\n\nThe following set of steps against a fresh elasticsearch 2.0.0 instance with v3.0.2 of this plugin installed shows that copy_to isn't working for the name field. I doubt it is working for the other metadata fields, either.\n\nYou can copy/paste this in your shell.\n\n```\n# create a mapping\ncurl -XPOST 'http://localhost:9200/test_copyto' -d '{\n \"mappings\": {\n \"person\": {\n \"properties\": {\n \"copy_dst\": { \"type\": \"string\" },\n \"doc\": {\n \"type\": \"attachment\",\n \"fields\": {\n \"name\": { \"copy_to\": \"copy_dst\" }\n }\n }\n }\n }\n }\n}'\n## => {\"acknowledged\":true}\n\n\n# index a document, specifying a document name\ncurl -XPOST 'http://localhost:9200/test_copyto/person/1' -d '{\n \"doc\": {\n \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n \"_name\": \"my-attachment-name.doc\"\n }\n}'\n## => {\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_version\":1,\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true}\n\n\n# search for the document by its contents\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n{\n \"query\": {\n \"query_string\": {\n \"default_field\": \"doc.content\",\n \"fields\": [\"copy_dst\", \"doc.content\"],\n \"query\": \"ipsum\"\n }\n }\n}\n'\n## => {\"took\":5,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\": \"total\":1,\"max_score\":0.04119441,\"hits\":[{\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_score\":0.04119441,\"_source\":{\n# \"doc\": {\n# \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n# \"_name\": \"my-attachment-name.doc\"\n# }\n#}}]}}\n\n\n# search for the document by its name\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n{\n \"query\": {\n \"query_string\": {\n \"default_field\": \"doc.name\",\n \"fields\": [\"doc.name\"],\n \"query\": \"my-test.doc\"\n }\n }\n}\n'\n## => {\"took\":5,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":0.02250402,\"hits\":[{\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_score\":0.02250402,\"_source\":{\n# \"doc\": {\n# \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n# \"_name\": \"my-attachment-name.doc\"\n# }\n#}}]}}\n\n# search for the document by the copy_dst field\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n{\n \"query\": {\n \"query_string\": {\n \"default_field\": \"copy_dst\",\n \"fields\": [\"copy_dst\"],\n \"query\": \"my-test.doc\"\n }\n }\n}\n'\n## => {\"took\":1,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":0,\"max_score\":null,\"hits\":[]}}\n```\n\n_Copied from original issue: elastic/elasticsearch-mapper-attachments#190_\n",
"comments": [
{
"body": "_From @gpaul on November 23, 2015 13:37_\n\nPing. Should I open this issue against the main elasticsearch repository now that this plugin is moving there?\n",
"created_at": "2015-11-23T16:10:55Z"
},
{
"body": "Did you try the same script with elasticsearch 1.7? I'd like to know if it's a regression or if it has always been there.\n\nI know that copy_to feature is supposed to work for the extracted text but I don't think it worked for metadata.\n\nIf I'm right (so it's not an issue but more a feature request), then you can open it in elasticsearch repo.\nIf I'm wrong (so it's a regression), then keep it here.\n",
"created_at": "2015-11-23T16:10:55Z"
},
{
"body": "_From @gpaul on November 23, 2015 14:58_\n\nIt seems like a regression:\nelasticsearch 1.7.0 with mapper-attachments 2.7.1\nyields\n\n```\ncurl -XPOST 'http://localhost:9200/test_copyto/person/_search' -d '\n {\n \"query\": {\n \"query_string\": {\n \"default_field\": \"copy_dst\",\n \"fields\": [\"copy_dst\"],\n \"query\": \"my-test.doc\"\n }\n }\n }\n '\n#{\"took\":3,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":0.02250402,\"hits\":[{\"_index\":\"test_copyto\",\"_type\":\"person\",\"_id\":\"1\",\"_score\":0.02250402,\"_source\":{\n# \"doc\": {\n# \"_content\": \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\",\n# \"_name\": \"my-test.doc\"\n# }\n#}}]}}\n```\n",
"created_at": "2015-11-23T16:10:55Z"
},
{
"body": "Thank you @gpaul \n",
"created_at": "2015-11-23T16:10:56Z"
},
{
"body": "_From @hhoechtl on November 23, 2015 15:2_\n\nIt's also not working with the .content field\n",
"created_at": "2015-11-23T16:10:56Z"
},
{
"body": "I created a test for elasticsearch 1.7 and it is working well in 1.x series:\n\n``` java\n@Test\npublic void testCopyToMetaData() throws Exception {\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/integration/simple/copy-to-metadata.json\");\n byte[] txt = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/text-in-english.txt\");\n\n client().admin().indices().putMapping(putMappingRequest(\"test\").type(\"person\").source(mapping)).actionGet();\n\n index(\"test\", \"person\", jsonBuilder().startObject()\n .startObject(\"file\")\n .field(\"_content\", txt)\n .field(\"_name\", \"name\")\n .endObject()\n .endObject());\n refresh();\n\n CountResponse countResponse = client().prepareCount(\"test\").setQuery(queryStringQuery(\"name\").defaultField(\"file.name\")).execute().get();\n assertThatWithError(countResponse.getCount(), equalTo(1l));\n\n countResponse = client().prepareCount(\"test\").setQuery(queryStringQuery(\"name\").defaultField(\"copy\")).execute().get();\n assertThatWithError(countResponse.getCount(), equalTo(1l));\n}\n```\n\nI created a test for 2.\\* branches which demonstrates the regression from 2.0.\nIt can be reused to fix this issue: https://github.com/elastic/elasticsearch-mapper-attachments/commit/30aeda668a9090f28929a74d59b9bf81e1738161:\n\n``` yml\n\"Copy To Feature\":\n\n - do:\n indices.create:\n index: test\n body:\n mappings:\n doc:\n properties:\n copy_dst:\n type: string\n doc:\n type: attachment\n fields:\n name:\n copy_to: copy_dst\n - do:\n cluster.health:\n wait_for_status: yellow\n\n - do:\n index:\n index: test\n type: doc\n id: 1\n body:\n doc:\n _content: \"e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=\"\n _name: \"name\"\n\n - do:\n indices.refresh: {}\n\n - do:\n search:\n index: test\n body:\n query:\n match:\n doc.content: \"ipsum\"\n\n - match: { hits.total: 1 }\n\n - do:\n search:\n index: test\n body:\n query:\n match:\n doc.name: \"name\"\n\n - match: { hits.total: 1 }\n\n - do:\n search:\n index: test\n body:\n query:\n match:\n copy_dst: \"name\"\n\n - match: { hits.total: 1 }\n```\n\n@rjernst Could you give a look please?\n",
"created_at": "2015-11-23T16:25:24Z"
},
{
"body": "@dadoonet One odd thing I see is using a copy_to from within a multi field. Seems like we should disallow that? We removed support here:\nhttps://github.com/elastic/elasticsearch/pull/10802/files#diff-ab54166cc098ea6f03350e68a8689a5bL432\n\n The copy's are now handled outside of the mappers, while before there was a lot of spaghetti sharing between object mappers and field mappers that made document parsing complex. If we want to add it back, we will probably need a good refactoring in the way multi fields and copy_tos are handled. The problem is `copy_to` inside a multi field is essentially a nested `copy_to` since multi fields are conceptually just a copy to (see my notes in #10802).\n",
"created_at": "2015-11-30T22:22:23Z"
},
{
"body": "@clintongormley WDYT? \n\nLet me sum up the discussion.\n\nBefore 2.0, we were able to support:\n\n``` js\nPUT /test/person/_mapping\n{\n \"person\": {\n \"properties\": {\n \"file\": {\n \"type\": \"attachment\",\n \"fields\": {\n \"content\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n },\n \"name\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n },\n \"author\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n },\n \"title\": {\n \"type\": \"string\",\n \"copy_to\": \"copy\"\n }\n }\n },\n \"copy\": {\n \"type\": \"string\"\n }\n }\n }\n}\n```\n\nIt means that extracted `content`, `title`, `name` (for filename) and `author` can be indexed in another field `copy` where the user can run a \"fulltext\" search.\n\nFrom 2.0, this is not supported anymore for the reasons @rjernst described.\n\nShould we document that mapper attachments does not support anymore `copy_to` feature on extracted/provided nested fields?\nNote that we don't support [a global `copy_to` on the `BASE64` content](https://github.com/elastic/elasticsearch-mapper-attachments/issues/100) itself.\n\nOr should we try to implement such a thing _only_ for mapper attachments plugin. I can't see another use case today. May be some other community plugins would like to have it but really unsure here.\n\nIMO, users can always run search on multiple fields at the same time so instead of searching in `copy` in the above example, they could search on `file.content`, `file.title`, `file.name` and `file.author`. So not supporting this feature anymore does not sound as a big deal to me.\n\nThoughts?\n",
"created_at": "2015-12-01T08:05:34Z"
},
{
"body": "My feeling is that, long term, we should remove the `attachment` field type. Instead, we should move Tika to being a processor in the node ingest API which will read the binary attachment and add the contents and any meta fields to the `_source` field itself. The result is that attachments stop being magical and become just like any other field which can be configured in standard ways.\n\nWith this goal in mind, it doesn't make sense to add complicated (and likely buggy) hacks to fix this regression in 2.x. But, we should let the user know that copy-to on multi-fields is not supported: we should throw an exception at mapping time instead of silently ignoring the problem.\n",
"created_at": "2015-12-01T11:03:18Z"
},
{
"body": "So...I took a look at the copy_to and multi_fields and tried to throw an exception when we encounter copy_to in multi_fields - we could do that I guess. But I have the suspicion that re-adding the copy_to to multi_fields is just a matter of shifting three lines. I made a pr here so you can see what I mean: #15152 Tests pass but I might be missing something.\n",
"created_at": "2015-12-01T16:24:25Z"
},
{
"body": "That would be an awesome news @brwe. Was not expecting that. Are the tests I wrote for mapper attachments plugin work as well? Is that what you mean by `Tests pass`?\n",
"created_at": "2015-12-01T16:29:27Z"
},
{
"body": "@dadoonet I mean the tests that were there already and the tests I added in the pr. I suspect that your test would pass as well but did not check.\n\nHowever, we ( @rjernst @clintongormley and I ) had a chat yesterday about this in here is the outcome:\nDoing this fix is not a good idea for several reasons:\n1. it would introduce a dependency between DocumentParser and FieldMapper again which was removed with much effort in #10802\n2. long term we want to replace the implementation of multi fields with `copy_to` mechanism anyway\n3. chains of `copy_to` ala\n\n```\n\"a\" : {\n\"type\": \"string\",\n\"copy_to\": \"b\"\n},\n\"b\": {\n\"type\": \"string\",\n\"copy_to\": \"a\"\n}\n```\n\nshould not be possible because they add complexity in code and usage.\n\nThis reduces flexibility of mappings and how they can transform data. However, the consensus is that elasticsearch is not the right place to perform these kind of transformations anyway and these kind of operations should move to external tools such as the planned node ingest plugin https://github.com/elastic/elasticsearch/issues/14049\n\nApplying the fix I proposed would just delay the removal of the feature and therefore we think we should not do it.\n",
"created_at": "2015-12-02T11:13:08Z"
},
{
"body": "It seems like you have removed a feature without providing a better alternative. The use of copy_to for custom '_all' fields is well-documented and very useful.\n",
"created_at": "2015-12-02T11:47:32Z"
},
{
"body": "@brwe My 2c on your discussion \n1. it would introduce a dependency between DocumentParser and FieldMapper again which was removed with much effort in #10802\n -> if such cleanup code broke something - then perhaps it was merged too soon. A temporary patch to fix a broken feature until such time as it can be properly reengineered is not unreasonable.\n2. long term we want to replace the implementation of multi fields with copy_to mechanism anyway\n -> perfect - the process I would like to see in that case is: a deprecation note in 2.1 followed by a better alternative and migration path users can follow in preparation for 2.2 - keeping in mind that mappings from older versions of elasticsearch need to be upgraded. There are people who used ES as their primary data store - I'm not one of those, but having to rebuilding indexes when new ES versions are released is unfortunate. \n3. chains of copy_to ... should not be possible because they add complexity in code and usage.\n -> by all means prohibit them. I didn't know this was possible in earlier versions as it seems too easy to define cycles.\n",
"created_at": "2015-12-02T11:59:25Z"
},
{
"body": "@gpaul If our resources were unlimited, then I would agree with you. However, in an effort to clean up a massive code base and to remove complexity, we have to do it incrementally and sometimes we have to remove things that worked before. The mapping cleanup was 5 long months of work, and there is still a good deal more to be done. It brought some huge improvements (just see how many issues were linked to https://github.com/elastic/elasticsearch/issues/8870) but meant that we couldn't support everything that we supported before.\n\nEvery hack that we add into the code adds technical debt and increases the likelihood of introducing new bugs. We'd much rather focus our limited resources on making the system clean, stable, reliable, and maintainable.\n\nThis is why I don't want to make this change. The workaround for your case is to search across multiple fields.\n",
"created_at": "2015-12-02T13:04:42Z"
},
{
"body": "[woops, I was logged in as a friend of mine when I posted this comment a minute ago. I've removed it and this is a repost as myself ><]\n\nThat's fair. Thanks for all the hard work.\n\nAs I'll have to redesign my mappings anyway, should I avoid copy_to in its entirety going forward or is it just the multi-field case that was causing pain? I'd like to avoid features that are on their way out.\n",
"created_at": "2015-12-02T15:02:50Z"
},
{
"body": "@gpaul It is just copy_to in a multi field. \n",
"created_at": "2015-12-02T15:05:47Z"
},
{
"body": "Got it, thanks.\n",
"created_at": "2015-12-02T15:11:26Z"
},
{
"body": "Hi, I think I've this problem as well.\n\nFollowing the documentation (?) here: https://github.com/elastic/elasticsearch-mapper-attachments#copy-to-feature the `copyTo` on the `content` should work, but I cannot manage to. Isn't that the correct documentation?\n\nI want to make use of this feature to copy the extracted content into a custom `_all` field. Any hints how to solve this?\n\nIs there a content extraction service/endpoint I could make use of to index prepared content, so that I don't have to rely on copyTo?\n\n_Edit_: These docs mention the `copyTo` feature as well: https://www.elastic.co/guide/en/elasticsearch/plugins/current/mapper-attachments-copy-to.html\n\nThanks!\n",
"created_at": "2016-03-24T19:19:03Z"
}
],
"number": 14946,
"title": "copy_to of mapper attachments metadata field isn't working"
}
|
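The workaround settled on in the thread above is to query the extracted sub-fields directly rather than a `copy_to` field. A minimal sketch of what that could look like with the 2.x Java test client, assuming the same integration-test harness as the Java test earlier in that thread; the index, type and field names simply mirror the examples in the discussion and are illustrative:

```java
// Hedged sketch of the suggested workaround: query the extracted attachment sub-fields
// directly with a multi_match instead of relying on a copy_to field.
SearchResponse response = client().prepareSearch("test")
        .setQuery(QueryBuilders.multiMatchQuery("name",
                "file.content", "file.name", "file.author", "file.title"))
        .get();
assertThat(response.getHits().getTotalHits(), equalTo(1L));
```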
{
"body": "This is just a poc for #14946 . I am opening this pull request so that we can discuss better. \n\nI tried re enabling `copy_to` functionality for `multi_fields` and it seems to me this is just a matter of shifting three lines. I have not thought about it much though yet and might be missing something. However, tests with this branch pass.\n",
"number": 15152,
"review_comments": [],
"title": "Enable copy_to in multi_fields"
}
|
{
"commits": [
{
"message": "tests for copy_to in multi fields"
},
{
"message": "move copy_to handling to fieldMapper - this way multi fields can have copy_to too"
}
],
"files": [
{
"diff": "@@ -312,9 +312,6 @@ private static Mapper parseObjectOrField(ParseContext context, Mapper mapper) th\n } else {\n FieldMapper fieldMapper = (FieldMapper)mapper;\n Mapper update = fieldMapper.parse(context);\n- if (fieldMapper.copyTo() != null) {\n- parseCopyFields(context, fieldMapper, fieldMapper.copyTo().copyToFields());\n- }\n return update;\n }\n }\n@@ -683,7 +680,7 @@ private static ObjectMapper parseDynamicValue(final ParseContext context, Object\n }\n \n /** Creates instances of the fields that the current field should be copied to */\n- private static void parseCopyFields(ParseContext context, FieldMapper fieldMapper, List<String> copyToFields) throws IOException {\n+ public static void parseCopyFields(ParseContext context, FieldMapper fieldMapper, List<String> copyToFields) throws IOException {\n if (!context.isWithinCopyTo() && copyToFields.isEmpty() == false) {\n context = context.createCopyToContext();\n for (String field : copyToFields) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -339,6 +339,9 @@ public Mapper parse(ParseContext context) throws IOException {\n throw new MapperParsingException(\"failed to parse [\" + fieldType().names().fullName() + \"]\", e);\n }\n multiFields.parse(this, context);\n+ if (copyTo() != null) {\n+ DocumentParser.parseCopyFields(context, this, this.copyTo().copyToFields());\n+ }\n return null;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,9 @@\n \n import java.io.IOException;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -92,4 +94,26 @@ private XContentBuilder createDynamicTemplateMapping() throws IOException {\n .endArray();\n }\n \n+ public void testCopyToWithinMultiField() throws IOException {\n+ String mapping = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .startObject(\"fields\")\n+ .startObject(\"raw\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"another_field\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().endObject().string();\n+ assertAcked(\n+ client().admin().indices().prepareCreate(\"test-idx\")\n+ .addMapping(\"type\", mapping)\n+ );\n+ client().prepareIndex(\"test-idx\", \"type\").setSource(\"{\\\"copy_test\\\":\\\"foo bar\\\"}\").get();\n+ refresh();\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(QueryBuilders.termQuery(\"another_field\", \"foo\")).get();\n+ assertSearchResponse(searchResponse);\n+ assertThat(searchResponse.getHits().getHits().length, equalTo(1));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/copyto/CopyToMapperIntegrationIT.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.mapper.copyto;\n \n import org.apache.lucene.index.IndexableField;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -40,6 +41,7 @@\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.junit.Test;\n \n+import java.nio.charset.Charset;\n import java.util.Arrays;\n import java.util.List;\n import java.util.Map;\n@@ -358,4 +360,21 @@ private void assertFieldValue(Document doc, String field, Number... expected) {\n assertArrayEquals(expected, actual);\n }\n \n+ @Test\n+ public void testCopyToInMultiFields() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"copy_test\")\n+ .field(\"type\", \"string\")\n+ .startObject(\"fields\")\n+ .startObject(\"raw\")\n+ .field(\"type\", \"string\")\n+ .field(\"copy_to\", \"another_field\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+ ParsedDocument doc = docMapper.parse(\"test\", \"type\", \"1\", new BytesArray(\"{\\\"copy_test\\\":\\\"foo bar\\\"}\"));\n+ assertThat(doc.docs().get(0).getFields(\"another_field\")[0].stringValue(), equalTo(\"foo bar\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/copyto/CopyToMapperTests.java",
"status": "modified"
}
]
}
|
{
"body": "The update API allows conflicting field mappings to be introduced which can then bring a cluster down as it keeps trying and failing to sync the mappings. Recreation:\n\n```\nPUT t\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"nested\",\n \"properties\": {\n \"bar\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT t/two/1\n{}\n\nPOST t/two/1/_update\n{\n \"doc\": {\n \"foo\": {\n \"bar\": 5\n }\n }\n}\n\nGET t/_mapping\n```\n\nreturns:\n\n```\n{\n \"t\": {\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"nested\",\n \"properties\": {\n \"bar\": {\n \"type\": \"string\"\n }\n }\n }\n }\n },\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"properties\": {\n \"bar\": {\n \"type\": \"long\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nFrom: https://discuss.elastic.co/t/cluster-keeps-becoming-unresponsive/35677\n",
"comments": [
{
"body": "This is a more general issue of our mapping API. I haven't figured out the cause yet, but here is a minimal recreation that reproduces the issue all the time:\n\n```\nPUT t\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"long\"\n }\n }\n }\n}\n\nPUT t/two/_mapping\n{\n \"two\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"long\"\n }\n }\n }\n}\n\nGET t/_mapping\n```\n",
"created_at": "2015-11-27T13:19:16Z"
}
],
"number": 15049,
"title": "Update API allows conflicting mappings"
}
|
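The pull request that follows addresses this by refusing to apply a conflicting merge instead of logging and ignoring it. A small, self-contained Java sketch of that simulate-then-apply pattern; every name here is a generic placeholder rather than Elasticsearch API:

```java
import java.util.List;

// Illustrative only: detect conflicts on a dry run first, surface them as an exception,
// and only mutate state when the merge is known to be conflict-free.
final class SimulateThenApply {
    interface Mergeable<T> {
        List<String> conflictsWith(T incoming); // dry run: report conflicts, change nothing
        void apply(T incoming);                 // real merge, only called when conflict-free
    }

    static <T> void merge(Mergeable<T> existing, T incoming) {
        List<String> conflicts = existing.conflictsWith(incoming);
        if (conflicts.isEmpty() == false) {
            throw new IllegalArgumentException("mapping merge conflicts: " + conflicts);
        }
        existing.apply(incoming);
    }
}
```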
{
"body": "Related to #15049\n",
"number": 15144,
"review_comments": [],
"title": "Don't ignore mapping merge failures."
}
|
{
"commits": [
{
"message": "Mappings: Don't ignore merge failures."
}
],
"files": [
{
"diff": "@@ -250,13 +250,14 @@ private DocumentMapper merge(DocumentMapper mapper, boolean updateAllTypes) {\n DocumentMapper oldMapper = mappers.get(mapper.type());\n \n if (oldMapper != null) {\n- MergeResult result = oldMapper.merge(mapper.mapping(), false, updateAllTypes);\n+ // simulate first\n+ MergeResult result = oldMapper.merge(mapper.mapping(), true, updateAllTypes);\n if (result.hasConflicts()) {\n- // TODO: What should we do???\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"merging mapping for type [{}] resulted in conflicts: [{}]\", mapper.type(), Arrays.toString(result.buildConflicts()));\n- }\n+ throw new MergeMappingException(result.buildConflicts());\n }\n+ // then apply for real\n+ result = oldMapper.merge(mapper.mapping(), false, updateAllTypes);\n+ assert result.hasConflicts() == false; // we already simulated\n return oldMapper;\n } else {\n List<ObjectMapper> newObjectMappers = new ArrayList<>();",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,9 @@\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.MergeMappingException;\n import org.elasticsearch.index.mapper.MergeResult;\n+import org.elasticsearch.index.mapper.core.LongFieldMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n import java.io.IOException;\n@@ -107,6 +109,100 @@ protected void testConflictWhileMergingAndMappingUnchanged(XContentBuilder mappi\n assertThat(mappingAfterUpdate, equalTo(mappingBeforeUpdate));\n }\n \n+ public void testConflictSameType() throws Exception {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"foo\").field(\"type\", \"long\").endObject()\n+ .endObject().endObject().endObject();\n+ MapperService mapperService = createIndex(\"test\", Settings.settingsBuilder().build(), \"type\", mapping).mapperService();\n+\n+ XContentBuilder update = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"foo\").field(\"type\", \"double\").endObject()\n+ .endObject().endObject().endObject();\n+\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(update.string()), false, false);\n+ fail();\n+ } catch (MergeMappingException e) {\n+ // expected\n+ }\n+\n+ try {\n+ mapperService.merge(\"type\", new CompressedXContent(update.string()), false, false);\n+ fail();\n+ } catch (MergeMappingException e) {\n+ // expected\n+ }\n+\n+ assertTrue(mapperService.documentMapper(\"type\").mapping().root().getMapper(\"foo\") instanceof LongFieldMapper);\n+ }\n+\n+ public void testConflictNewType() throws Exception {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\").startObject(\"foo\").field(\"type\", \"long\").endObject()\n+ .endObject().endObject().endObject();\n+ MapperService mapperService = createIndex(\"test\", Settings.settingsBuilder().build(), \"type1\", mapping).mapperService();\n+\n+ XContentBuilder update = XContentFactory.jsonBuilder().startObject().startObject(\"type2\")\n+ .startObject(\"properties\").startObject(\"foo\").field(\"type\", \"double\").endObject()\n+ .endObject().endObject().endObject();\n+\n+ try {\n+ mapperService.merge(\"type2\", new CompressedXContent(update.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // expected\n+ assertTrue(e.getMessage().contains(\"conflicts with existing mapping in other types\"));\n+ }\n+\n+ try {\n+ mapperService.merge(\"type2\", new CompressedXContent(update.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // expected\n+ assertTrue(e.getMessage().contains(\"conflicts with existing mapping in other types\"));\n+ }\n+\n+ assertTrue(mapperService.documentMapper(\"type1\").mapping().root().getMapper(\"foo\") instanceof LongFieldMapper);\n+ assertNull(mapperService.documentMapper(\"type2\"));\n+ }\n+\n+ // same as the testConflictNewType except that the mapping update is on an existing type\n+ @AwaitsFix(bugUrl=\"https://github.com/elastic/elasticsearch/issues/15049\")\n+ public void testConflictNewTypeUpdate() throws Exception {\n+ XContentBuilder mapping1 = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"properties\").startObject(\"foo\").field(\"type\", \"long\").endObject()\n+ 
.endObject().endObject().endObject();\n+ XContentBuilder mapping2 = XContentFactory.jsonBuilder().startObject().startObject(\"type2\").endObject().endObject();\n+ MapperService mapperService = createIndex(\"test\", Settings.settingsBuilder().build()).mapperService();\n+\n+ mapperService.merge(\"type1\", new CompressedXContent(mapping1.string()), false, false);\n+ mapperService.merge(\"type2\", new CompressedXContent(mapping2.string()), false, false);\n+\n+ XContentBuilder update = XContentFactory.jsonBuilder().startObject().startObject(\"type2\")\n+ .startObject(\"properties\").startObject(\"foo\").field(\"type\", \"double\").endObject()\n+ .endObject().endObject().endObject();\n+\n+ try {\n+ mapperService.merge(\"type2\", new CompressedXContent(update.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // expected\n+ assertTrue(e.getMessage().contains(\"conflicts with existing mapping in other types\"));\n+ }\n+\n+ try {\n+ mapperService.merge(\"type2\", new CompressedXContent(update.string()), false, false);\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // expected\n+ assertTrue(e.getMessage().contains(\"conflicts with existing mapping in other types\"));\n+ }\n+\n+ assertTrue(mapperService.documentMapper(\"type1\").mapping().root().getMapper(\"foo\") instanceof LongFieldMapper);\n+ assertNotNull(mapperService.documentMapper(\"type2\"));\n+ assertNull(mapperService.documentMapper(\"type2\").mapping().root().getMapper(\"foo\"));\n+ }\n+\n public void testIndexFieldParsingBackcompat() throws IOException {\n IndexService indexService = createIndex(\"test\", Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build());\n XContentBuilder indexMapping = XContentFactory.jsonBuilder();",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingTests.java",
"status": "modified"
}
]
}
|
{
"body": "Test failure: http://build-us-00.elastic.co/job/es_core_master_window-2008/2553/testReport/junit/org.elasticsearch.indices.state/RareClusterStateIT/testDeleteCreateInOneBulk/\n\nThe test fails due to a race in acquiring `ShardLock` locks. \n\nWhen an index is deleted, an asynchronous process is started to process pending deletes on shards of that index. This process first acquires all `ShardLock` locks for the given index in numeric shard order. Meanwhile, the new index can already have been created, and some shard locks can already be held due to shard creation in `IndicesClusterStateService.applyInitializingShard`. For example, shard 0 is locked by `processPendingDeletes` but shard 1 is locked by `applyInitializingShard`. In that case, `processPendingDeletes` cannot lock shard 1 and blocks (and will hold lock on shard 0 for 30 minutes). This means that shard 0 cannot be initialised for 30 minutes.\n\nInteresting bits of stack trace:\n\n```\n\"elasticsearch[node_t1][generic][T#2]\" ID=602 TIMED_WAITING on java.util.concurrent.Semaphore$NonfairSync@2fc45c3b\n at sun.misc.Unsafe.park(Native Method)\n - timed waiting on java.util.concurrent.Semaphore$NonfairSync@2fc45c3b\n at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)\n at java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:409)\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:555)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:485)\n at org.elasticsearch.env.NodeEnvironment.lockAllForIndex(NodeEnvironment.java:429)\n at org.elasticsearch.indices.IndicesService.processPendingDeletes(IndicesService.java:649)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction.lockIndexAndAck(NodeIndexDeletedAction.java:101)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction.access$300(NodeIndexDeletedAction.java:46)\n at org.elasticsearch.cluster.action.index.NodeIndexDeletedAction$1.doRun(NodeIndexDeletedAction.java:90)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n Locked synchronizers:\n - java.util.concurrent.ThreadPoolExecutor$Worker@b17810e\n\n\n\"elasticsearch[node_t1][clusterService#updateTask][T#1]\" ID=591 TIMED_WAITING on java.util.concurrent.Semaphore$NonfairSync@7fdcd730\n at sun.misc.Unsafe.park(Native Method)\n - timed waiting on java.util.concurrent.Semaphore$NonfairSync@7fdcd730\n at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)\n at java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:409)\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:555)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:485)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:234)\n - locked 
org.elasticsearch.index.IndexService@707e1798\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:628)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:528)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n - locked java.lang.Object@773b911a\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:517)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n Locked synchronizers:\n - java.util.concurrent.ThreadPoolExecutor$Worker@26f887da\n```\n",
"comments": [
{
"body": "really this is a bug in how elasticsearch works altogether. all these locks and all the algs we have work on the index name rather than on it's uuid which is a huge problem. That's the place to fix this rather than changing the way we process pending deletes.\n",
"created_at": "2015-11-23T11:03:38Z"
},
{
"body": "we already have some issues related to this: https://github.com/elastic/elasticsearch/issues/13264 and https://github.com/elastic/elasticsearch/issues/13265 (this one is rather unrelated but goes into the same direction of safety)\n",
"created_at": "2015-11-23T13:28:35Z"
},
{
"body": "I disabled the test on branches master, 2.x, 2.1 and 2.0.\n",
"created_at": "2015-12-01T10:50:42Z"
}
],
"number": 14932,
"title": "Processing pending deletes can block shard initialisation for 30 minutes"
}
|
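The stall described in this issue comes from acquiring per-shard locks one at a time with a long blocking timeout while another actor already holds part of the set. The pull request that follows switches to locking as many shards as possible within a bounded deadline and skipping the rest; a minimal, self-contained Java sketch of that idea, with invented class and method names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative only: try each per-shard lock with whatever time is left on a shared deadline,
// skipping shards that are held elsewhere instead of blocking on them for minutes.
final class BestEffortShardLocking {
    static List<Integer> lockAsManyAsPossible(Semaphore[] shardLocks, long timeoutMs) throws InterruptedException {
        List<Integer> acquired = new ArrayList<>();
        long deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        for (int shard = 0; shard < shardLocks.length; shard++) {
            long leftNanos = Math.max(0L, deadlineNanos - System.nanoTime());
            if (shardLocks[shard].tryAcquire(leftNanos, TimeUnit.NANOSECONDS)) {
                acquired.add(shard); // the caller is responsible for releasing these locks
            }
        }
        return acquired;
    }
}
```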
{
"body": "Relates to #14932 which shows that processing pending deletes can block shard initialization for 30 minutes. The core issue is that processing pending deletes (which is an asynchronous operation) is based on index name instead of UUID. If a new index with same name has been recreated in the meantime, we run into some issues.\n\nAs a fix, I suggest the following changes:\n- Pass index UUID when requesting ShardLock. If uuid do not match with uuid of currently held ShardLock for that shard, throw LockObtainFailedException. We use the special wildcard \"null\" to match any uuid. This is useful if we don't know what the index UUID is (fallback to the old behavior).\n- pendingDeletes map is indexed by index uuid, not index name anymore. This means that we process only the pending deletes for the specific UUID.\n- processPendingDeletes should not request all, but as many ShardLocks as possible. If only subset is locked, then index directory should NOT be deleted. Same as before, it should only delete directories for which it has lock.\n\nNote that it is still unsafe in the following case: Let's assume we recreate index with more shards. If the given machine only gets the shard with number higher than any number in old index, then processPendingDeletes might still delete the index directory.\n",
"number": 15139,
"review_comments": [],
"title": "Processing pending deletes should rely on index UUID instead of index name"
}
|
{
"commits": [
{
"message": "wip to deal with pending deletes issue"
}
],
"files": [
{
"diff": "@@ -323,7 +323,7 @@ public static void upgradeMultiDataPath(NodeEnvironment nodeEnv, ESLogger logger\n \n for (String index : allIndices) {\n for (ShardId shardId : findAllShardIds(nodeEnv.indexPaths(new Index(index)))) {\n- try (ShardLock lock = nodeEnv.shardLock(shardId, 0)) {\n+ try (ShardLock lock = nodeEnv.shardLock(shardId, null, 0)) {\n if (upgrader.needsUpgrading(shardId)) {\n final ShardPath shardPath = upgrader.pickShardPath(shardId);\n upgrader.upgrade(shardId, shardPath);",
"filename": "core/src/main/java/org/elasticsearch/common/util/MultiDataPathUpgrader.java",
"status": "modified"
},
{
"diff": "@@ -294,7 +294,7 @@ private static String toString(Collection<String> items) {\n public void deleteShardDirectorySafe(ShardId shardId, IndexSettings indexSettings) throws IOException {\n final Path[] paths = availableShardPaths(shardId);\n logger.trace(\"deleting shard {} directory, paths: [{}]\", shardId, paths);\n- try (ShardLock lock = shardLock(shardId)) {\n+ try (ShardLock lock = shardLock(shardId, indexSettings.getUUID(), TimeUnit.SECONDS.toMillis(5))) {\n deleteShardDirectoryUnderLock(lock, indexSettings);\n }\n }\n@@ -342,7 +342,7 @@ public static void acquireFSLockForPaths(IndexSettings indexSettings, Path... sh\n */\n public void deleteShardDirectoryUnderLock(ShardLock lock, IndexSettings indexSettings) throws IOException {\n final ShardId shardId = lock.getShardId();\n- assert isShardLocked(shardId) : \"shard \" + shardId + \" is not locked\";\n+ assert isShardLocked(shardId, indexSettings.getUUID()) : \"shard \" + shardId + \" is not locked\";\n final Path[] paths = availableShardPaths(shardId);\n logger.trace(\"acquiring locks for {}, paths: [{}]\", shardId, paths);\n acquireFSLockForPaths(indexSettings, paths);\n@@ -358,9 +358,9 @@ public void deleteShardDirectoryUnderLock(ShardLock lock, IndexSettings indexSet\n assert FileSystemUtils.exists(paths) == false;\n }\n \n- private boolean isShardLocked(ShardId id) {\n+ private boolean isShardLocked(ShardId id, String indexUUID) {\n try {\n- shardLock(id, 0).close();\n+ shardLock(id, indexUUID, 0).close();\n return false;\n } catch (IOException ex) {\n return true;\n@@ -426,7 +426,7 @@ public List<ShardLock> lockAllForIndex(Index index, IndexSettings settings, long\n try {\n for (int i = 0; i < numShards; i++) {\n long timeoutLeftMS = Math.max(0, lockTimeoutMS - TimeValue.nsecToMSec((System.nanoTime() - startTimeNS)));\n- allLocks.add(shardLock(new ShardId(index, i), timeoutLeftMS));\n+ allLocks.add(shardLock(new ShardId(index, i), settings.getUUID(), timeoutLeftMS));\n }\n success = true;\n } finally {\n@@ -438,43 +438,79 @@ public List<ShardLock> lockAllForIndex(Index index, IndexSettings settings, long\n return allLocks;\n }\n \n+ /**\n+ * Tries to lock as many local shards as possible for the given index.\n+ *\n+ * @param index the index to lock shards for\n+ * @param lockTimeoutMS how long to wait for acquiring the indices shard locks\n+ * @return the {@link ShardLock} instances for this index.\n+ * @throws IOException if an IOException occurs.\n+ */\n+ public List<ShardLock> lockAsManyAsPossibleForIndex(Index index, IndexSettings settings, long lockTimeoutMS) throws IOException {\n+ final int numShards = settings.getNumberOfShards();\n+ if (numShards <= 0) {\n+ throw new IllegalArgumentException(\"settings must contain a non-null > 0 number of shards\");\n+ }\n+ logger.trace(\"locking as many shards as possible for index {} - [{}]\", index, numShards);\n+ List<ShardLock> successfulLocks = new ArrayList<>(numShards);\n+ long startTimeNS = System.nanoTime();\n+ for (int i = 0; i < numShards; i++) {\n+ try {\n+ long timeoutLeftMS = Math.max(0, lockTimeoutMS - TimeValue.nsecToMSec((System.nanoTime() - startTimeNS)));\n+ successfulLocks.add(shardLock(new ShardId(index, i), settings.getUUID(), timeoutLeftMS));\n+ } catch (Exception e) {\n+ logger.trace(\"failed to lock shard [{}][{}] with index UUID {}\", index, i, settings.getUUID());\n+ }\n+ }\n+ return successfulLocks;\n+ }\n+\n /**\n * Tries to lock the given shards ID. 
A shard lock is required to perform any kind of\n * write operation on a shards data directory like deleting files, creating a new index writer\n- * or recover from a different shard instance into it. If the shard lock can not be acquired\n- * an {@link LockObtainFailedException} is thrown.\n+ * or recover from a different shard instance into it.\n+ * If the shard lock can not be acquired or the {@code indexUuid} values\n+ * do not match, an {@link org.apache.lucene.store.LockObtainFailedException} is thrown\n *\n * Note: this method will return immediately if the lock can't be acquired.\n *\n * @param id the shard ID to lock\n+ * @param indexUuid the index uuid\n * @return the shard lock. Call {@link ShardLock#close()} to release the lock\n * @throws IOException if an IOException occurs.\n */\n- public ShardLock shardLock(ShardId id) throws IOException {\n- return shardLock(id, 0);\n+ public ShardLock shardLock(ShardId id, String indexUuid) throws IOException {\n+ return shardLock(id, indexUuid, 0);\n }\n \n /**\n * Tries to lock the given shards ID. A shard lock is required to perform any kind of\n * write operation on a shards data directory like deleting files, creating a new index writer\n- * or recover from a different shard instance into it. If the shard lock can not be acquired\n- * an {@link org.apache.lucene.store.LockObtainFailedException} is thrown\n+ * or recover from a different shard instance into it.\n+ * If the shard lock can not be acquired or the {@code indexUuid} values\n+ * do not match, an {@link org.apache.lucene.store.LockObtainFailedException} is thrown\n * @param id the shard ID to lock\n+ * @param indexUuid the index uuid. If null, we match any index uuid\n * @param lockTimeoutMS the lock timeout in milliseconds\n * @return the shard lock. Call {@link ShardLock#close()} to release the lock\n * @throws IOException if an IOException occurs.\n */\n- public ShardLock shardLock(final ShardId id, long lockTimeoutMS) throws IOException {\n+ public ShardLock shardLock(final ShardId id, String indexUuid, long lockTimeoutMS) throws IOException {\n logger.trace(\"acquiring node shardlock on [{}], timeout [{}]\", id, lockTimeoutMS);\n final InternalShardLock shardLock;\n final boolean acquired;\n synchronized (shardLocks) {\n if (shardLocks.containsKey(id)) {\n shardLock = shardLocks.get(id);\n- shardLock.incWaitCount();\n- acquired = false;\n+ if (indexUuid == null || shardLock.indexUuid == null || indexUuid.equals(shardLock.indexUuid)) {\n+ shardLock.incWaitCount();\n+ acquired = false;\n+ } else {\n+ throw new LockObtainFailedException(\"Index UUID of caller [\" + indexUuid +\n+ \"] does not match index UUID of current held lock [\" + shardLock.indexUuid + \"]\");\n+ }\n } else {\n- shardLock = new InternalShardLock(id);\n+ shardLock = new InternalShardLock(id, indexUuid);\n shardLocks.put(id, shardLock);\n acquired = true;\n }\n@@ -518,10 +554,12 @@ private final class InternalShardLock {\n */\n private final Semaphore mutex = new Semaphore(1);\n private int waitCount = 1; // guarded by shardLocks\n- private ShardId shardId;\n+ private final ShardId shardId;\n+ private final String indexUuid;\n \n- InternalShardLock(ShardId id) {\n+ InternalShardLock(ShardId id, String indexUuid) {\n shardId = id;\n+ this.indexUuid = indexUuid;\n mutex.acquireUninterruptibly();\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -233,7 +233,7 @@ public synchronized IndexShard createShard(ShardRouting routing) throws IOExcept\n boolean success = false;\n Store store = null;\n IndexShard indexShard = null;\n- final ShardLock lock = nodeEnv.shardLock(shardId, TimeUnit.SECONDS.toMillis(5));\n+ final ShardLock lock = nodeEnv.shardLock(shardId, this.indexSettings.getUUID(), TimeUnit.SECONDS.toMillis(5));\n try {\n eventListener.beforeIndexShardCreated(shardId, indexSettings);\n ShardPath path;",
"filename": "core/src/main/java/org/elasticsearch/index/IndexService.java",
"status": "modified"
},
{
"diff": "@@ -89,7 +89,7 @@ public class IndicesService extends AbstractLifecycleComponent<IndicesService> i\n private final ClusterService clusterService;\n private final IndexNameExpressionResolver indexNameExpressionResolver;\n private volatile Map<String, IndexService> indices = emptyMap();\n- private final Map<Index, List<PendingDelete>> pendingDeletes = new HashMap<>();\n+ private final Map<String, List<PendingDelete>> pendingDeletes = new HashMap<>();\n private final OldShardsStats oldShardsStats = new OldShardsStats();\n private final IndexStoreConfig indexStoreConfig;\n private final MapperRegistry mapperRegistry;\n@@ -576,23 +576,23 @@ public void addPendingDelete(ShardId shardId, IndexSettings settings) {\n throw new IllegalArgumentException(\"settings must not be null\");\n }\n PendingDelete pendingDelete = new PendingDelete(shardId, settings);\n- addPendingDelete(shardId.index(), pendingDelete);\n+ addPendingDelete(settings.getUUID(), pendingDelete);\n }\n \n /**\n * Adds a pending delete for the given index.\n */\n public void addPendingDelete(Index index, IndexSettings settings) {\n PendingDelete pendingDelete = new PendingDelete(index, settings);\n- addPendingDelete(index, pendingDelete);\n+ addPendingDelete(settings.getUUID(), pendingDelete);\n }\n \n- private void addPendingDelete(Index index, PendingDelete pendingDelete) {\n+ private void addPendingDelete(String indexUUID, PendingDelete pendingDelete) {\n synchronized (pendingDeletes) {\n- List<PendingDelete> list = pendingDeletes.get(index);\n+ List<PendingDelete> list = pendingDeletes.get(indexUUID);\n if (list == null) {\n list = new ArrayList<>();\n- pendingDeletes.put(index, list);\n+ pendingDeletes.put(indexUUID, list);\n }\n list.add(pendingDelete);\n }\n@@ -652,15 +652,20 @@ public int compareTo(PendingDelete o) {\n public void processPendingDeletes(Index index, IndexSettings indexSettings, TimeValue timeout) throws IOException, InterruptedException {\n logger.debug(\"{} processing pending deletes\", index);\n final long startTimeNS = System.nanoTime();\n- final List<ShardLock> shardLocks = nodeEnv.lockAllForIndex(index, indexSettings, timeout.millis());\n+ final List<ShardLock> shardLocks = nodeEnv.lockAsManyAsPossibleForIndex(index, indexSettings, timeout.millis());\n+ if (shardLocks.isEmpty()) {\n+ logger.debug(\"{} no shards could be locked\", index);\n+ throw new LockObtainFailedException(\"no shards could be locked\");\n+ }\n+ boolean allShardsLocked = shardLocks.size() >= indexSettings.getNumberOfShards();\n try {\n Map<ShardId, ShardLock> locks = new HashMap<>();\n for (ShardLock lock : shardLocks) {\n locks.put(lock.getShardId(), lock);\n }\n final List<PendingDelete> remove;\n synchronized (pendingDeletes) {\n- remove = pendingDeletes.remove(index);\n+ remove = pendingDeletes.remove(indexSettings.getUUID());\n }\n if (remove != null && remove.isEmpty() == false) {\n CollectionUtil.timSort(remove); // make sure we delete indices first\n@@ -678,7 +683,9 @@ public void processPendingDeletes(Index index, IndexSettings indexSettings, Time\n assert delete.shardId == -1;\n logger.debug(\"{} deleting index store reason [{}]\", index, \"pending delete\");\n try {\n- nodeEnv.deleteIndexDirectoryUnderLock(index, indexSettings);\n+ if (allShardsLocked) {\n+ nodeEnv.deleteIndexDirectoryUnderLock(index, indexSettings);\n+ }\n iterator.remove();\n } catch (IOException ex) {\n logger.debug(\"{} retry pending delete\", ex, index);\n@@ -707,14 +714,18 @@ public void processPendingDeletes(Index index, IndexSettings 
indexSettings, Time\n }\n } while ((System.nanoTime() - startTimeNS) < timeout.nanos());\n }\n+\n+ if (allShardsLocked == false) {\n+ throw new LockObtainFailedException(\"could only obtain \" + shardLocks.size() + \" out of \" + indexSettings.getNumberOfShards() + \" locks\");\n+ }\n } finally {\n IOUtils.close(shardLocks);\n }\n }\n \n- int numPendingDeletes(Index index) {\n+ int numPendingDeletes(String indexUUID) {\n synchronized (pendingDeletes) {\n- List<PendingDelete> deleteList = pendingDeletes.get(index);\n+ List<PendingDelete> deleteList = pendingDeletes.get(indexUUID);\n if (deleteList == null) {\n return 0;\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n@@ -37,6 +38,7 @@\n import java.util.List;\n import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicReference;\n \n@@ -87,11 +89,11 @@ public void testNodeLockMultipleEnvironment() throws IOException {\n public void testShardLock() throws IOException {\n final NodeEnvironment env = newNodeEnvironment();\n \n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", 0));\n+ ShardLock fooLock = env.shardLock(new ShardId(\"foo\", 0), idxSettings.getUUID(), TimeUnit.SECONDS.toMillis(5));\n assertEquals(new ShardId(\"foo\", 0), fooLock.getShardId());\n \n try {\n- env.shardLock(new ShardId(\"foo\", 0));\n+ env.shardLock(new ShardId(\"foo\", 0), idxSettings.getUUID(), TimeUnit.SECONDS.toMillis(5));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n@@ -109,11 +111,11 @@ public void testShardLock() throws IOException {\n \n fooLock.close();\n // can lock again?\n- env.shardLock(new ShardId(\"foo\", 0)).close();\n+ env.shardLock(new ShardId(\"foo\", 0), this.idxSettings.getUUID(), TimeUnit.SECONDS.toMillis(5)).close();\n \n List<ShardLock> locks = env.lockAllForIndex(new Index(\"foo\"), idxSettings, randomIntBetween(0, 10));\n try {\n- env.shardLock(new ShardId(\"foo\", 0));\n+ env.shardLock(new ShardId(\"foo\", 0), this.idxSettings.getUUID(), TimeUnit.SECONDS.toMillis(5));\n fail(\"shard is locked\");\n } catch (LockObtainFailedException ex) {\n // expected\n@@ -142,7 +144,7 @@ public void testGetAllIndices() throws Exception {\n \n public void testDeleteSafe() throws IOException, InterruptedException {\n final NodeEnvironment env = newNodeEnvironment();\n- ShardLock fooLock = env.shardLock(new ShardId(\"foo\", 0));\n+ ShardLock fooLock = env.shardLock(new ShardId(\"foo\", 0), this.idxSettings.getUUID(), TimeUnit.SECONDS.toMillis(5));\n assertEquals(new ShardId(\"foo\", 0), fooLock.getShardId());\n \n \n@@ -200,7 +202,8 @@ public void onFailure(Throwable t) {\n @Override\n protected void doRun() throws Exception {\n start.await();\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", 0))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", 0),\n+ NodeEnvironmentTests.this.idxSettings.getUUID(), TimeUnit.SECONDS.toMillis(5))) {\n blockLatch.countDown();\n Thread.sleep(randomIntBetween(1, 10));\n }\n@@ -258,7 +261,7 @@ public void run() {\n for (int i = 0; i < iters; i++) {\n int shard = randomIntBetween(0, counts.length - 1);\n try {\n- try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", shard), scaledRandomIntBetween(0, 10))) {\n+ try (ShardLock autoCloses = env.shardLock(new ShardId(\"foo\", shard), idxSettings.getUUID(), scaledRandomIntBetween(0, 10))) {\n counts[shard].value++;\n countsAtomic[shard].incrementAndGet();\n assertEquals(flipFlop[shard].incrementAndGet(), 1);",
"filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java",
"status": "modified"
},
{
"diff": "@@ -164,21 +164,21 @@ public void testPendingTasks() throws Exception {\n assertAcked(client().admin().indices().prepareClose(\"test\"));\n assertTrue(path.exists());\n \n- assertEquals(indicesService.numPendingDeletes(test.index()), numPending);\n+ assertEquals(indicesService.numPendingDeletes(test.indexUUID()), numPending);\n \n // shard lock released... we can now delete\n indicesService.processPendingDeletes(test.index(), test.getIndexSettings(), new TimeValue(0, TimeUnit.MILLISECONDS));\n- assertEquals(indicesService.numPendingDeletes(test.index()), 0);\n+ assertEquals(indicesService.numPendingDeletes(test.indexUUID()), 0);\n assertFalse(path.exists());\n \n if (randomBoolean()) {\n indicesService.addPendingDelete(new ShardId(test.index(), 0), test.getIndexSettings());\n indicesService.addPendingDelete(new ShardId(test.index(), 1), test.getIndexSettings());\n indicesService.addPendingDelete(new ShardId(\"bogus\", 1), test.getIndexSettings());\n- assertEquals(indicesService.numPendingDeletes(test.index()), 2);\n+ assertEquals(indicesService.numPendingDeletes(test.indexUUID()), 2);\n // shard lock released... we can now delete\n indicesService.processPendingDeletes(test.index(), test.getIndexSettings(), new TimeValue(0, TimeUnit.MILLISECONDS));\n- assertEquals(indicesService.numPendingDeletes(test.index()), 0);\n+ assertEquals(indicesService.numPendingDeletes(test.indexUUID()), 0);\n }\n assertAcked(client().admin().indices().prepareOpen(\"test\"));\n ",
"filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -1864,7 +1864,7 @@ public void assertAfterTest() throws IOException {\n Set<ShardId> shardIds = env.lockedShards();\n for (ShardId id : shardIds) {\n try {\n- env.shardLock(id, TimeUnit.SECONDS.toMillis(5)).close();\n+ env.shardLock(id, null, TimeUnit.SECONDS.toMillis(5)).close();\n } catch (IOException ex) {\n fail(\"Shard \" + id + \" is still locked after 5 sec waiting\");\n }",
"filename": "test-framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
}
]
}
|
{
"body": "The introduction in #14899 of applying mapping updates in batches brought with it a bug that could cause existing mappings to be lost during an update. In particular, in a scenario in which a batch contained updates for at least two distinct existing types on the same index, the existing mappings for all but the first existing type on the index would be lost. This arises because the workflow for applying mapping updates in batches now looks like:\n1. [if](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L234) any of the indices in the request batch do not exist on master, [create](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L236) them for the purpose of applying the mapping update and [merge](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L244) in the existing index mappings from the cluster state\n2. [apply](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L250-L257) the mapping updates\n3. [if](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L261-L263) any indices were created, [delete](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L262) them from master\n\nThe code for step 1. 
[looks like](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L229-L249):\n\n``` java\nfor (PutMappingClusterStateUpdateRequest request : tasks) {\n // failures here mean something is broken with our cluster state - fail all tasks by letting exceptions bubble up\n for (String index : request.indices()) {\n if (currentState.metaData().hasIndex(index)) {\n // if we don't have the index, we will throw exceptions later;\n if (indicesService.hasIndex(index) == false) {\n final IndexMetaData indexMetaData = currentState.metaData().index(index);\n IndexService indexService = indicesService.createIndex(nodeServicesProvider, indexMetaData, Collections.EMPTY_LIST);\n indicesToClose.add(indexMetaData.getIndex());\n // make sure to add custom default mapping if exists\n if (indexMetaData.getMappings().containsKey(MapperService.DEFAULT_MAPPING)) {\n indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, indexMetaData.getMappings().get(MapperService.DEFAULT_MAPPING).source(), false, request.updateAllTypes());\n }\n // only add the current relevant mapping (if exists)\n if (indexMetaData.getMappings().containsKey(request.type())) {\n indexService.mapperService().merge(request.type(), indexMetaData.getMappings().get(request.type()).source(), false, request.updateAllTypes());\n }\n }\n }\n }\n}\n```\n\nThe flaw is that on the second distinct existing type for the same index, [`indicesService.hasIndex(index) == false`](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L234) will evaluate to false (because the [index was created](https://github.com/elastic/elasticsearch/blob/c4a229819406deb4407d8401d698453d936186cf/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L236) for the first existing type for the index) and the existing mapping will never get merged in. This will then cause the mapping update to overwrite the existing mapping so losing it.\n",
"comments": [],
"number": 15129,
"title": "Batched mapping updates can overwrite existing mappings"
}
|
{
"body": "This commit addresses an issues introduced in #14899 to apply mapping\nupdates in batches. The issue is that an existing mapping for a type\ncould be lost if that type came in a batch that already contained a\nmapping update for another type on the same index. The underlying issue\nwas that the existing mapping would not be merged in because the merging\nlogic was only tripped once per index, rather than for all types seeing\nupdates for each index. Resolving this issue is simply a matter of\nensuring that all existing types seeing updates are merged in.\n\nCloses #15129\n",
"number": 15130,
"review_comments": [
{
"body": "I think this will be clearer if we say - \"// precreate incoming indices and popluate them with the relevant types\"\n",
"created_at": "2015-12-01T08:23:48Z"
},
{
"body": "I think we only want to do this once , when the index is created?\n",
"created_at": "2015-12-01T08:24:41Z"
},
{
"body": "Re the addition of types - I think we can also just check if it's not there? I'm thinking about batching multiple requests for the same type ( commenting here because the code in question is not in the scope of the change)\n",
"created_at": "2015-12-01T08:26:01Z"
},
{
"body": "For the record, my other pull request #15123 is changing this logic to add all types anyway when creating such temporary indices since this is needed for cross-type validation that we need eg. for parent/child (today we are missing some validation checks when the master node does not have the mappers locally).\n",
"created_at": "2015-12-01T09:27:37Z"
}
],
"title": "Preserve existing mappings on batch mapping updates"
}
|
{
"commits": [
{
"message": "Preserve existing mappings on batch mapping updates\n\nThis commit addresses an issues introduced in #14899 to apply mapping\nupdates in batches. The issue is that an existing mapping for a type\ncould be lost if that type came in a batch that already contained a\nmapping update for another type on the same index. The underlying issue\nwas that the existing mapping would not be merged in because the merging\nlogic was only tripped once per index, rather than for all types seeing\nupdates for each index. Resolving this issue is simply a matter of\nensuring that all existing types seeing updates are merged in.\n\nCloses #15129"
},
{
"message": "Add the default mapping at most once on batch mapping updates\n\nWhen creating an index on master for the purpose of updating mappings,\nthe default mapping could needlessly be added multiple times. This\ncommit ensures that the default mapping is added at most once while\npreparing to update mappings."
},
{
"message": "Add each mapping at most once on batch mapping updates\n\nWhen creating an index on master for the purpose of updating mappings, a\nmapping being updated could needlessly be merged multiple times. This\ncommit ensures that each mapping is merged at most once while preparing\nto update mappings."
}
],
"files": [
{
"diff": "@@ -221,26 +221,31 @@ public void refreshMapping(final String index, final String indexUUID, final Str\n class PutMappingExecutor implements ClusterStateTaskExecutor<PutMappingClusterStateUpdateRequest> {\n @Override\n public BatchResult<PutMappingClusterStateUpdateRequest> execute(ClusterState currentState, List<PutMappingClusterStateUpdateRequest> tasks) throws Exception {\n- List<String> indicesToClose = new ArrayList<>();\n+ Set<String> indicesToClose = new HashSet<>();\n BatchResult.Builder<PutMappingClusterStateUpdateRequest> builder = BatchResult.builder();\n- Map<PutMappingClusterStateUpdateRequest, TaskResult> executionResults = new HashMap<>();\n try {\n // precreate incoming indices;\n for (PutMappingClusterStateUpdateRequest request : tasks) {\n // failures here mean something is broken with our cluster state - fail all tasks by letting exceptions bubble up\n for (String index : request.indices()) {\n if (currentState.metaData().hasIndex(index)) {\n // if we don't have the index, we will throw exceptions later;\n- if (indicesService.hasIndex(index) == false) {\n+ if (indicesService.hasIndex(index) == false || indicesToClose.contains(index)) {\n final IndexMetaData indexMetaData = currentState.metaData().index(index);\n- IndexService indexService = indicesService.createIndex(nodeServicesProvider, indexMetaData, Collections.EMPTY_LIST);\n- indicesToClose.add(indexMetaData.getIndex());\n- // make sure to add custom default mapping if exists\n- if (indexMetaData.getMappings().containsKey(MapperService.DEFAULT_MAPPING)) {\n- indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, indexMetaData.getMappings().get(MapperService.DEFAULT_MAPPING).source(), false, request.updateAllTypes());\n+ IndexService indexService;\n+ if (indicesService.hasIndex(index) == false) {\n+ indicesToClose.add(index);\n+ indexService = indicesService.createIndex(nodeServicesProvider, indexMetaData, Collections.EMPTY_LIST);\n+ // make sure to add custom default mapping if exists\n+ if (indexMetaData.getMappings().containsKey(MapperService.DEFAULT_MAPPING)) {\n+ indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, indexMetaData.getMappings().get(MapperService.DEFAULT_MAPPING).source(), false, request.updateAllTypes());\n+ }\n+ } else {\n+ indexService = indicesService.indexService(index);\n }\n- // only add the current relevant mapping (if exists)\n- if (indexMetaData.getMappings().containsKey(request.type())) {\n+ // only add the current relevant mapping (if exists and not yet added)\n+ if (indexMetaData.getMappings().containsKey(request.type()) &&\n+ !indexService.mapperService().hasMapping(request.type())) {\n indexService.mapperService().merge(request.type(), indexMetaData.getMappings().get(request.type()).source(), false, request.updateAllTypes());\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java",
"status": "modified"
},
{
"diff": "@@ -51,7 +51,6 @@\n @ClusterScope(randomDynamicTemplates = false)\n public class UpdateMappingIntegrationIT extends ESIntegTestCase {\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/15129\")\n public void testDynamicUpdates() throws Exception {\n client().admin().indices().prepareCreate(\"test\")\n .setSettings(",
"filename": "core/src/test/java/org/elasticsearch/indices/mapping/UpdateMappingIntegrationIT.java",
"status": "modified"
}
]
}
|
{
"body": "One of the breaking changes of Elasticsearch 2.0 was that field names are no longer allowed to have dots: https://github.com/elastic/elasticsearch/pull/12068\n\nHowever, if you use the new multi-field syntax, Elasticsearch will create field name with dots. The documentation supports the behavior. The aggregation in the example is on a field named \"city.raw\":\n\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html\n\nThe unit tests in the code look for field names with dots: https://github.com/elastic/elasticsearch/blob/2.1/core/src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java#L88\n\nThe tests pass because the check in ObjectMapper$TypeParser only checks the original name.\n\nThe field name is built in ContentPath:\nhttps://github.com/elastic/elasticsearch/blob/2.1/core/src/main/java/org/elasticsearch/index/mapper/ContentPath.java#L50\nhttps://github.com/elastic/elasticsearch/blob/2.1/core/src/main/java/org/elasticsearch/index/mapper/ContentPath.java#L80-L87\n\nThese field names will fail if the indices are run through the MetaDataIndexUpgradeService. Are dots allowed in this context?\n",
"comments": [
{
"body": "well spotted, thanks! Simple recreation, that runs on 2.1 (or on 1.7 and upgrades without warning to 2.1):\n\n```\nPUT t\n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"fields\": {\n \"bar.baz\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT t/t/1\n{\n \"foo\": \"test\"\n}\n\nGET t/t/_search\n{\n \"query\": {\n \"match\": {\n \"foo.bar.baz\": \"test\"\n }\n }\n}\n```\n",
"created_at": "2015-11-24T12:14:31Z"
},
{
"body": "I think it's the same root cause as described for the mapper attachment plugin. Linking to it here https://github.com/elastic/elasticsearch-mapper-attachments/issues/169\n",
"created_at": "2015-11-24T16:50:19Z"
},
{
"body": "I took a look and must admit I do not fully understand what the issue is.\n\n@brusic What exactly is the error you see? I wrote a test for the example and all seems well (https://github.com/brwe/elasticsearch/commit/43e94268e4d59d85b46fb204275d3b24c288bad6#diff-523d9598b02bfef9341f7aa20ebfdf47R543). The ContentPath builds the field path not the name and if I understand correctly that contains as many dots as there are levels.\nI think I am missing something?\n\n@clintongormley We should not be able to create multi fields with dots in the name, that is a bug for sure. I'll try to fix\n",
"created_at": "2015-11-30T15:45:47Z"
},
{
"body": "It all depends on how strict should Elasticsearch should be regarding non-dots-in-field names. Better yet, what is the difference between a field name and a path?\n\nAt the Lucene level, multifield will create a field name with a dot in it. Inspected a one-shard index with Luke, and the fields created using the fields syntax will have a dot. Clinton's example does not fully show the impact since it uses a field name with a dot. Using 'bar' instead of 'bar.baz'.\n\n```\n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"fields\": {\n \"bar\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT t/t/1\n{\n \"foo\": \"test\"\n}\n\nGET t/t/_search\n{\n \"query\": {\n \"match\": {\n \"foo.bar\": \"test\"\n }\n }\n}\n```\n\nA field name of foo.bar was created. Is this field name allowed? There was a discussion on the mailing list were these field names were getting flagged as incorrect when a user attempted to upgrade:\n\nhttps://discuss.elastic.co/t/elasticsearch-2-mapping/34366/9\n\nEither dots should never be allowed or the fields created with multifields should not be flagged as incorrect since they are paths and not field names.\n",
"created_at": "2015-11-30T20:35:44Z"
},
{
"body": "> A field name of foo.bar was created\n\nThis isn't true. We do not create a field with name `foo.bar`. We create the equivalent of the following mapping:\n\n```\n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"foo\": {\n \"type\": \"string\",\n \"copy_to\": \"foo.bar\",\n \"properties\": {\n \"bar\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n }\n }\n}\n```\n\nThe `name` of a field is _just_ the name. In this case, the `name` of the multi field is `bar`. It's path is how you access it in a query, which uses dots to separate path elements. So `bar` is underneath a `foo` field, and so its path is `foo.bar`. This path is why dots are no longer allowed in field names: we cannot distinguish between `foo.bar` being a field on its own, or an object field `foo` with a subfield `bar`.\n",
"created_at": "2015-11-30T20:47:42Z"
},
{
"body": "FYI, the mapping I showed there wont' actually work IIRC. It was just used to explain how multi fields work conceptually. Only an `object` field can have `properties` (ie subfields) hence why the general purpose `fields` exists for concrete data types.\n",
"created_at": "2015-11-30T20:49:45Z"
},
{
"body": "I think the confusion stems from the ambiguous use of the words `field name`. We have 1) a _Lucene_ field name which can contain as many dots as needed (we create them for example [here](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java#L306) and this is also what we can see in Luke) and 2) a `field name` in _elasticsearch mapping_ which is just the innermost identifier for a value in a nested json structure and may not contain any dots in elasticsearch mapping.\n\nFor example:\n\n```\n{\n \"a\": {\n \"b\": {\n \"c\": \"All mappings and no play makes Britta a dull programmer\"\n }\n }\n}\n```\n\n1) will be `a.b.c`\n2) will be `c` \n\nThe path we were talking about before is `a.b` and is needed on Lucene level I think to distinguish `a.b.c` for example from another field that also has the name `c` but is nested in objects `d` and `e` (and therefore has path `d.e`). \nLet me know if this makes sense.\n\nAlso, @brusic just to be sure: did you actually encounter any error? We only check the elasticsearch field name for dots, the Lucene field names are untouched by that.\n",
"created_at": "2015-12-01T13:51:49Z"
},
{
"body": "> The path we were talking about before is a.b\n\nOne minore correction: path would be `a.b.c` there. It is like path vs name for File in java. \n",
"created_at": "2015-12-01T15:18:53Z"
},
{
"body": "I agree with Britta in that the confusion is related to the ambiguity of the words field name, which I acknowledged in my previous comment. I have not seen the error myself, but have seen it a couple of times on the forum. Since I was curious what was happening behind the scenes, I checked it out myself, only looking at the Lucene field names.\n\nIf someone else has a problem with upgrading, they can create a new issue.\n",
"created_at": "2015-12-02T19:45:43Z"
}
],
"number": 14957,
"title": "Multi-fields will create field names with dots '.'"
}
|
{
"body": "related to #14957\n",
"number": 15118,
"review_comments": [
{
"body": "Add an assert that the exception was what we expect (check part of the message)?\n",
"created_at": "2015-11-30T16:43:05Z"
},
{
"body": "Maybe this message should give more context, that this is a multifield for another field (and include that field path)?\n",
"created_at": "2015-11-30T16:44:20Z"
}
],
"title": "Multi field names may not contain dots"
}
|
{
"commits": [
{
"message": "multi field names may not contain dots\n\nrelated to #14957"
}
],
"files": [
{
"diff": "@@ -325,6 +325,9 @@ public static boolean parseMultiField(FieldMapper.Builder builder, String name,\n \n for (Map.Entry<String, Object> multiFieldEntry : multiFieldsPropNodes.entrySet()) {\n String multiFieldName = multiFieldEntry.getKey();\n+ if (multiFieldName.contains(\".\")) {\n+ throw new MapperParsingException(\"Field name [\" + multiFieldName + \"] which is a multi field of [\" + name + \"] cannot contain '.'\");\n+ }\n if (!(multiFieldEntry.getValue() instanceof Map)) {\n throw new MapperParsingException(\"illegal field [\" + multiFieldName + \"], only fields can be specified inside fields\");\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java",
"status": "modified"
},
{
"diff": "@@ -31,35 +31,25 @@\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.support.XContentMapValues;\n import org.elasticsearch.index.IndexService;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.DocumentMapperParser;\n-import org.elasticsearch.index.mapper.MappedFieldType;\n-import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n-import org.elasticsearch.index.mapper.core.CompletionFieldMapper;\n-import org.elasticsearch.index.mapper.core.DateFieldMapper;\n-import org.elasticsearch.index.mapper.core.LongFieldMapper;\n-import org.elasticsearch.index.mapper.core.StringFieldMapper;\n-import org.elasticsearch.index.mapper.core.TokenCountFieldMapper;\n+import org.elasticsearch.index.mapper.core.*;\n import org.elasticsearch.index.mapper.geo.BaseGeoPointFieldMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n-import org.junit.Test;\n import org.elasticsearch.test.VersionUtils;\n+import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.Map;\n import java.util.TreeMap;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.mapper.MapperBuilders.*;\n import static org.elasticsearch.test.StreamsUtils.copyToBytesFromClasspath;\n import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath;\n-import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.mapper.MapperBuilders.doc;\n-import static org.elasticsearch.index.mapper.MapperBuilders.rootObject;\n-import static org.elasticsearch.index.mapper.MapperBuilders.stringField;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.instanceOf;\n-import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.*;\n \n /**\n *\n@@ -536,4 +526,30 @@ public void testNestedFieldNotAllowed() throws Exception {\n assertTrue(e.getMessage().contains(\"cannot be used in multi field\"));\n }\n }\n+\n+ public void testMultiFieldWithDot() throws IOException {\n+ XContentBuilder mapping = jsonBuilder();\n+ mapping.startObject()\n+ .startObject(\"my_type\")\n+ .startObject(\"properties\")\n+ .startObject(\"city\")\n+ .field(\"type\", \"string\")\n+ .startObject(\"fields\")\n+ .startObject(\"raw.foo\")\n+ .field(\"type\", \"string\")\n+ .field(\"index\", \"not_analyzed\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+\n+ MapperService mapperService = createIndex(\"test\").mapperService();\n+ try {\n+ mapperService.documentMapperParser().parse(mapping.string());\n+ fail(\"this should throw an exception because one field contains a dot\");\n+ } catch (MapperParsingException e) {\n+ assertThat(e.getMessage(), equalTo(\"Field name [raw.foo] which is a multi field of [city] cannot contain '.'\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java",
"status": "modified"
}
]
}
|
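To make the TypeParsers change above concrete outside of the mapping parser, here is a minimal Java sketch of the same dot check on multi-field names. The helper class and the use of IllegalArgumentException are illustrative assumptions; the actual code throws MapperParsingException from inside parseMultiField, as the quoted diff and the new MultiFieldTests test show.

```
// Minimal standalone sketch of the dot check added to TypeParsers.parseMultiField.
// MultiFieldNames is a hypothetical helper; Elasticsearch itself throws
// MapperParsingException from within the mapping parser.
public final class MultiFieldNames {
    private MultiFieldNames() {}

    static void validate(String multiFieldName, String parentField) {
        if (multiFieldName.contains(".")) {
            throw new IllegalArgumentException("Field name [" + multiFieldName
                + "] which is a multi field of [" + parentField + "] cannot contain '.'");
        }
    }

    public static void main(String[] args) {
        validate("raw", "city");      // fine: a plain multi-field name
        validate("raw.foo", "city");  // throws, mirroring the new parser error
    }
}
```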
{
"body": "I have updated 1.4.2 to 2.0.0. \nI installed plugin \n `sudo bin/plugin install mapper-murmur3`\n\nI had some data inserted in 1.4.2. After I updated to 2.0.0 I am getting this error while starting elasticsearch.\n\n```\nException: java.lang.IllegalStateException: unable to upgrade the mappings for the index [abc], reason: [no handler for type [murmur3] declared on field [hash]]\nLikely root cause: MapperParsingException[no handler for type [murmur3] declared on field [hash]]\n at org.elasticsearch.index.mapper.core.TypeParsers.parseMultiField(TypeParsers.java:341)\n at org.elasticsearch.index.mapper.core.StringFieldMapper$TypeParser.parse(StringFieldMapper.java:201)\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:315)\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:228)\n at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:137)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:211)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:192)\n at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:368)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:242)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:329)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:112)\n at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:226)\n at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:85)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:408)\n at <<<guice>>>\n at org.elasticsearch.node.Node.<init>(Node.java:198)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n```\n\nMy mapping was:\n\n```\nPUT /abc\n{\n \"mappings\": {\n \"productView\": {\n \"properties\": {\n \"itemId\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\",\n \"fields\": {\n \"hash\": {\n \"type\": \"murmur3\"\n }\n }\n }\n }\n }\n }\n}\n```\n\nBut when I delete the old data and recreate the index it works without problem.\n",
"comments": [
{
"body": "The issue reproduces for me. It seems to be because we try to upgrade the mappings before plugins have a chance to register mappers. So any mappers that are registered through a plugin are affected by this problem.\n",
"created_at": "2015-11-19T10:26:19Z"
},
{
"body": "@rjernst @imotov Any ideas if/how we can fix it?\n",
"created_at": "2015-11-19T10:41:35Z"
},
{
"body": "I think we need to move registering mappers to the node level (which makes sense anyways). Instead of getting the mapper service to register them, the mapper service should take in all the custom types as args. I'm not sure if it will directly fix the issue, but at least then it seems like it should be possible. \n",
"created_at": "2015-11-19T10:47:26Z"
},
{
"body": "> It seems to be because we try to upgrade the mappings before plugins have a chance to register mappers. So any mappers that are registered through a plugin are affected by this problem.\n\n++ That explains as well issues we have seen for mapper attachments plugin. See https://github.com/elastic/elasticsearch-mapper-attachments/issues/180\n",
"created_at": "2015-11-19T11:59:06Z"
},
{
"body": "> I think we need to move registering mappers to the node level\n\nI opened #14896 for this.\n",
"created_at": "2015-11-20T17:04:27Z"
},
{
"body": "This has been fixed by #14896 and then I added tests in #14977. This will be fixed as of 2.1.1. I considered backporting to 2.0 as well but there were so many conflicts that I got a bit afraid of doing more harm than good by introducing new bugs.\n",
"created_at": "2015-11-25T15:59:16Z"
},
{
"body": "same issue in 2.1.1\nroot@dedi:/usr/share/elasticsearch# ./bin/plugin list\nInstalled plugins in /usr/share/elasticsearch/plugins:\n - marvel-agent\n - license\n - mapper-attachments\n\n[2015-12-19 07:31:59,246][INFO ][node ] [Manta] version[2.1.1], pid[28723], build[40e2c53/2015-12-15T13:05:55Z]\n[2015-12-19 07:31:59,247][INFO ][node ] [Manta] initializing ...\n[2015-12-19 07:32:00,314][INFO ][plugins ] [Manta] loaded [license, marvel-agent, mapper-attachments], sites []\n[2015-12-19 07:32:00,356][INFO ][env ] [Manta] using [1] data paths, mounts [[/home (/dev/sda4)]], net usable_space [641.6gb], net total_space [857.8gb], spins? [possibly], types [ext4]\n[2015-12-19 07:32:04,427][ERROR][gateway ] [Manta] failed to read local state, exiting...\njava.lang.IllegalStateException: unable to upgrade the mappings for the index [evol2], reason: [Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]]\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:339)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:116)\n at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:228)\n at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:56)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at 
org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\n at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\n at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)\n at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)\n at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:46)\n at org.elasticsearch.node.Node.<init>(Node.java:200)\n at org.elasticsearch.node.Node.<init>(Node.java:128)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\nCaused by: java.lang.IllegalArgumentException: Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:368)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:319)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:265)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:333)\n ... 
48 more\n[2015-12-19 07:32:04,659][ERROR][gateway ] [Manta] failed to read local state, exiting...\njava.lang.IllegalStateException: unable to upgrade the mappings for the index [evol2], reason: [Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]]\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:339)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:116)\n at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:228)\n at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:56)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\n at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\n at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)\n at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)\n 
at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:46)\n at org.elasticsearch.node.Node.<init>(Node.java:200)\n at org.elasticsearch.node.Node.<init>(Node.java:128)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\nCaused by: java.lang.IllegalArgumentException: Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:368)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:319)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:265)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:333)\n ... 39 more\n[2015-12-19 07:32:04,808][ERROR][gateway ] [Manta] failed to read local state, exiting...\njava.lang.IllegalStateException: unable to upgrade the mappings for the index [evol2], reason: [Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]]\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:339)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:116)\n at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:228)\n at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:56)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:887)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\n at 
org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\n at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\n at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)\n at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)\n at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:46)\n at org.elasticsearch.node.Node.<init>(Node.java:200)\n at org.elasticsearch.node.Node.<init>(Node.java:128)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\nCaused by: java.lang.IllegalArgumentException: Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:368)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:319)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:265)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:333)\n ... 30 more\n[2015-12-19 07:32:04,824][ERROR][bootstrap ] Guice Exception: java.lang.IllegalStateException: unable to upgrade the mappings for the index [evol2], reason: [Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]]\nLikely root cause: java.lang.IllegalArgumentException: Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]\n at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:117)\n at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:368)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:319)\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:265)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:333)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:116)\n at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:228)\n at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at <<<guice>>>\n at org.elasticsearch.node.Node.<init>(Node.java:200)\n at org.elasticsearch.node.Node.<init>(Node.java:128)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)\n at 
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n",
"created_at": "2015-12-19T06:37:22Z"
},
{
"body": "@jmreymond could you open a new issue about this? \n",
"created_at": "2015-12-19T07:34:53Z"
},
{
"body": "@jmreymond @dadoonet This is a different issue actually:\n\n```\nunable to upgrade the mappings for the index [evol2], reason: [Mapper for [docsfile] conflicts with existing mapping in other types:\n[mapper [docsfile.content] cannot be changed from type [string] to [attachment]]]\n```\n\nThe `evol2` index has at least two types, one of them is declaring `docsfile.content` as a `string` field and another type is declaring it as an `attachment` field. This kind of configuration caused bad bugs in elasticsearch 1.x, which is why it is not supported anymore as of version 2.0 and the only fix is to reindex your data in different indices.\n",
"created_at": "2015-12-21T09:31:15Z"
},
{
"body": "Hello, I could not figure out how to make this work with java client api in 2.1.1 when using local node which I am using for integration tests. \nMy settings:\n\n```\nSettings settings = Settings.put(\"plugin.types\", MapperMurmur3Plugin.class.getName())\n```\n\nWith the local node:\n\n```\nNode node = nodeBuilder()\n .settings(settings)\n .local(true).node();\n```\n\nWhile index creation it gives me :\n\n```\nCaused by: org.elasticsearch.index.mapper.MapperParsingException: no handler for type [murmur3] declared on field [hash]\n at org.elasticsearch.index.mapper.core.TypeParsers.parseMultiField(TypeParsers.java:362) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.core.StringFieldMapper$TypeParser.parse(StringFieldMapper.java:203) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:310) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:223) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:140) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:121) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:391) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:265) ~[elasticsearch-2.1.1.jar:2.1.1]\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:378) ~[elasticsearch-2.1.1.jar:2.1.1]\n```\n\nMy mapping is same as I posted in the first comment.\n\nThis error does not happen in 2.0.0 client. Is it related with this issue or am I not registering the plugin in the correct way ?\n",
"created_at": "2015-12-21T12:32:23Z"
},
{
"body": "@maksutspahi You can not start a plugin like this from 2.1.\nRead also this: https://discuss.elastic.co/t/es-plugin-add-to-node-server-programmatically/37616/5\n",
"created_at": "2015-12-21T13:25:48Z"
},
{
"body": "Thank you that worked. Also this bug seems fixed in 2.1.1 thanks.\n",
"created_at": "2015-12-22T07:07:52Z"
}
],
"number": 14828,
"title": "elasticsearch 2.0 handler type for murmur3 not found"
}
|
{
"body": "Relates to #14828\n",
"number": 14977,
"review_comments": [],
"title": "Add a test that upgrades succeed even if a mapping contains fields that come from a plugin."
}
|
{
"commits": [
{
"message": "Add a test that upgrades succeed even if a mapping contains fields that come from a plugin."
}
],
"files": [
{
"diff": "@@ -149,6 +149,16 @@ def start_node(version, release_dir, data_dir, repo_dir, tcp_port=DEFAULT_TRANSP\n cmd.append('-f') # version before 1.0 start in background automatically\n return subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n \n+def install_plugin(version, release_dir, plugin_name):\n+ run_plugin(version, release_dir, 'install', [plugin_name])\n+\n+def remove_plugin(version, release_dir, plugin_name):\n+ run_plugin(version, release_dir, 'remove', [plugin_name])\n+\n+def run_plugin(version, release_dir, plugin_cmd, args):\n+ cmd = [os.path.join(release_dir, 'bin/plugin'), plugin_cmd] + args\n+ subprocess.check_call(cmd)\n+\n def create_client(http_port=DEFAULT_HTTP_TCP_PORT, timeout=30):\n logging.info('Waiting for node to startup')\n for _ in range(0, timeout):",
"filename": "dev-tools/create_bwc_index.py",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,124 @@\n+import create_bwc_index\n+import logging\n+import os\n+import random\n+import shutil\n+import subprocess\n+import sys\n+import tempfile\n+\n+def fetch_version(version):\n+ logging.info('fetching ES version %s' % version)\n+ if subprocess.call([sys.executable, os.path.join(os.path.split(sys.argv[0])[0], 'get-bwc-version.py'), version]) != 0:\n+ raise RuntimeError('failed to download ES version %s' % version)\n+\n+def create_index(plugin, mapping, docs):\n+ '''\n+ Creates a static back compat index (.zip) with mappings using fields defined in plugins.\n+ '''\n+ \n+ logging.basicConfig(format='[%(levelname)s] [%(asctime)s] %(message)s', level=logging.INFO,\n+ datefmt='%Y-%m-%d %I:%M:%S %p')\n+ logging.getLogger('elasticsearch').setLevel(logging.ERROR)\n+ logging.getLogger('urllib3').setLevel(logging.WARN)\n+\n+ tmp_dir = tempfile.mkdtemp()\n+ plugin_installed = False\n+ node = None\n+ try:\n+ data_dir = os.path.join(tmp_dir, 'data')\n+ repo_dir = os.path.join(tmp_dir, 'repo')\n+ logging.info('Temp data dir: %s' % data_dir)\n+ logging.info('Temp repo dir: %s' % repo_dir)\n+\n+ version = '2.0.0'\n+ classifier = '%s-%s' %(plugin, version)\n+ index_name = 'index-%s' % classifier\n+\n+ # Download old ES releases if necessary:\n+ release_dir = os.path.join('backwards', 'elasticsearch-%s' % version)\n+ if not os.path.exists(release_dir):\n+ fetch_version(version)\n+\n+ create_bwc_index.install_plugin(version, release_dir, plugin)\n+ plugin_installed = True\n+ node = create_bwc_index.start_node(version, release_dir, data_dir, repo_dir, cluster_name=index_name)\n+ client = create_bwc_index.create_client()\n+ put_plugin_mappings(client, index_name, mapping, docs)\n+ create_bwc_index.shutdown_node(node)\n+\n+ print('%s server output:\\n%s' % (version, node.stdout.read().decode('utf-8')))\n+ node = None\n+ create_bwc_index.compress_index(classifier, tmp_dir, 'plugins/%s/src/test/resources/indices/bwc' %plugin)\n+ finally:\n+ if node is not None:\n+ create_bwc_index.shutdown_node(node)\n+ if plugin_installed:\n+ create_bwc_index.remove_plugin(version, release_dir, plugin)\n+ shutil.rmtree(tmp_dir)\n+\n+def put_plugin_mappings(client, index_name, mapping, docs):\n+ client.indices.delete(index=index_name, ignore=404)\n+ logging.info('Create single shard test index')\n+\n+ client.indices.create(index=index_name, body={\n+ 'settings': {\n+ 'number_of_shards': 1,\n+ 'number_of_replicas': 0\n+ },\n+ 'mappings': {\n+ 'type': mapping\n+ }\n+ })\n+ health = client.cluster.health(wait_for_status='green', wait_for_relocating_shards=0)\n+ assert health['timed_out'] == False, 'cluster health timed out %s' % health\n+\n+ logging.info('Indexing documents')\n+ for i in range(len(docs)):\n+ client.index(index=index_name, doc_type=\"type\", id=str(i), body=docs[i])\n+ logging.info('Flushing index')\n+ client.indices.flush(index=index_name)\n+\n+ logging.info('Running basic checks')\n+ count = client.count(index=index_name)['count']\n+ assert count == len(docs), \"expected %d docs, got %d\" %(len(docs), count)\n+\n+def main():\n+ docs = [\n+ {\n+ \"foo\": \"abc\"\n+ },\n+ {\n+ \"foo\": \"abcdef\"\n+ },\n+ {\n+ \"foo\": \"a\"\n+ }\n+ ]\n+\n+ murmur3_mapping = {\n+ 'properties': {\n+ 'foo': {\n+ 'type': 'string',\n+ 'fields': {\n+ 'hash': {\n+ 'type': 'murmur3'\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ create_index(\"mapper-murmur3\", murmur3_mapping, docs)\n+\n+ size_mapping = {\n+ '_size': {\n+ 'enabled': True\n+ }\n+ }\n+\n+ create_index(\"mapper-size\", size_mapping, docs)\n+\n+if __name__ == 
'__main__':\n+ main()\n+",
"filename": "dev-tools/create_bwc_index_with_plugin_mappings.py",
"status": "added"
},
{
"diff": "@@ -0,0 +1,77 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.murmur3;\n+\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.plugin.mapper.MapperMurmur3Plugin;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.search.aggregations.AggregationBuilders;\n+import org.elasticsearch.search.aggregations.metrics.cardinality.Cardinality;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.hamcrest.ElasticsearchAssertions;\n+\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.Collection;\n+import java.util.Collections;\n+\n+@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0)\n+@LuceneTestCase.SuppressFileSystems(\"ExtrasFS\")\n+public class Murmur3FieldMapperUpgradeTests extends ESIntegTestCase {\n+\n+ @Override\n+ protected Collection<Class<? extends Plugin>> nodePlugins() {\n+ return Collections.singleton(MapperMurmur3Plugin.class);\n+ }\n+\n+ public void testUpgradeOldMapping() throws IOException {\n+ final String indexName = \"index-mapper-murmur3-2.0.0\";\n+ Path unzipDir = createTempDir();\n+ Path unzipDataDir = unzipDir.resolve(\"data\");\n+ Path backwardsIndex = getBwcIndicesPath().resolve(indexName + \".zip\");\n+ try (InputStream stream = Files.newInputStream(backwardsIndex)) {\n+ TestUtil.unzip(stream, unzipDir);\n+ }\n+ assertTrue(Files.exists(unzipDataDir));\n+\n+ final String node = internalCluster().startNode();\n+ Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, node).nodeDataPaths();\n+ assertEquals(1, nodePaths.length);\n+ Path dataPath = nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER);\n+ assertFalse(Files.exists(dataPath));\n+ Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n+ Files.move(src, dataPath);\n+\n+ ensureYellow();\n+ final SearchResponse countResponse = client().prepareSearch(indexName).setSize(0).get();\n+ ElasticsearchAssertions.assertHitCount(countResponse, 3L);\n+\n+ final SearchResponse cardinalityResponse = client().prepareSearch(indexName).addAggregation(\n+ AggregationBuilders.cardinality(\"card\").field(\"foo.hash\")).get();\n+ Cardinality cardinality = cardinalityResponse.getAggregations().get(\"card\");\n+ assertEquals(3L, cardinality.getValue());\n+ }\n+\n+}",
"filename": "plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapperUpgradeTests.java",
"status": "added"
},
{
"diff": "",
"filename": "plugins/mapper-murmur3/src/test/resources/indices/bwc/index-mapper-murmur3-2.0.0.zip",
"status": "added"
},
{
"diff": "@@ -0,0 +1,88 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.size;\n+\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.plugin.mapper.MapperSizePlugin;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.search.SearchHit;\n+import org.elasticsearch.search.SearchHitField;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.hamcrest.ElasticsearchAssertions;\n+\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.Map;\n+\n+@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0)\n+@LuceneTestCase.SuppressFileSystems(\"ExtrasFS\")\n+public class SizeFieldMapperUpgradeTests extends ESIntegTestCase {\n+\n+ @Override\n+ protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n+ return Collections.singleton(MapperSizePlugin.class);\n+ }\n+\n+ public void testUpgradeOldMapping() throws IOException {\n+ final String indexName = \"index-mapper-size-2.0.0\";\n+ Path unzipDir = createTempDir();\n+ Path unzipDataDir = unzipDir.resolve(\"data\");\n+ Path backwardsIndex = getBwcIndicesPath().resolve(indexName + \".zip\");\n+ try (InputStream stream = Files.newInputStream(backwardsIndex)) {\n+ TestUtil.unzip(stream, unzipDir);\n+ }\n+ assertTrue(Files.exists(unzipDataDir));\n+\n+ final String node = internalCluster().startNode();\n+ Path[] nodePaths = internalCluster().getInstance(NodeEnvironment.class, node).nodeDataPaths();\n+ assertEquals(1, nodePaths.length);\n+ Path dataPath = nodePaths[0].resolve(NodeEnvironment.INDICES_FOLDER);\n+ assertFalse(Files.exists(dataPath));\n+ Path src = unzipDataDir.resolve(indexName + \"/nodes/0/indices\");\n+ Files.move(src, dataPath);\n+\n+ ensureYellow();\n+ final SearchResponse countResponse = client().prepareSearch(indexName).setSize(0).get();\n+ ElasticsearchAssertions.assertHitCount(countResponse, 3L);\n+\n+ final SearchResponse sizeResponse = client().prepareSearch(indexName)\n+ .addField(\"_source\")\n+ .addField(\"_size\")\n+ .get();\n+ ElasticsearchAssertions.assertHitCount(sizeResponse, 3L);\n+ for (SearchHit hit : sizeResponse.getHits().getHits()) {\n+ String source = hit.getSourceAsString();\n+ assertNotNull(source);\n+ Map<String, SearchHitField> fields = hit.getFields();\n+ assertTrue(fields.containsKey(\"_size\"));\n+ Number size = fields.get(\"_size\").getValue();\n+ assertNotNull(size);\n+ assertEquals(source.length(), size.longValue());\n+ }\n+ }\n+\n+}",
"filename": "plugins/mapper-size/src/test/java/org/elasticsearch/index/mapper/size/SizeFieldMapperUpgradeTests.java",
"status": "added"
},
{
"diff": "",
"filename": "plugins/mapper-size/src/test/resources/indices/bwc/index-mapper-size-2.0.0.zip",
"status": "added"
}
]
}
|
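As a small companion to the upgrade tests quoted above: since 2.1 a plugin can no longer be injected through node settings (the `plugin.types` approach from the comments), so integration tests register plugin classes by overriding `nodePlugins()`. The sketch below mirrors that pattern; the test class name and the test method are hypothetical, but `nodePlugins()`, `prepareCreate()`, `addMapping()` and `assertAcked()` are used the same way as in the quoted tests.

```
import java.util.Collection;
import java.util.Collections;

import org.elasticsearch.plugin.mapper.MapperMurmur3Plugin;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;

import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;

// Hypothetical test illustrating the 2.x pattern for loading a plugin in a test cluster.
public class Murmur3PluginLoadingTests extends ESIntegTestCase {

    @Override
    protected Collection<Class<? extends Plugin>> nodePlugins() {
        // Register the plugin with the test cluster instead of passing "plugin.types" settings.
        return Collections.singleton(MapperMurmur3Plugin.class);
    }

    public void testMurmur3MapperIsAvailable() throws Exception {
        // With the plugin loaded, the murmur3 multi-field from the issue above parses fine.
        String mapping = "{\"type\": {\"properties\": {\"foo\": {\"type\": \"string\", "
                + "\"fields\": {\"hash\": {\"type\": \"murmur3\"}}}}}}";
        assertAcked(prepareCreate("test").addMapping("type", mapping));
    }
}
```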
{
"body": "Significant terms were not reduced correctly if they were long terms.\nAlso, clean up the bwc test a little. Upgrades are not needed.\n\nrelated to #13522\n",
"comments": [
{
"body": "LGTM. Is there any chance of making the BWC test share more code with the non-bwc tests? I find that keeps them passing with more regularity. That can wait for another Pr though.\n",
"created_at": "2015-11-23T17:53:46Z"
},
{
"body": "Yeah I can do that. I don't think we have an integ test for long terms with significant terms so it would make a lot of sense to add one. will make another pr for that.\n",
"created_at": "2015-11-23T17:56:32Z"
}
],
"number": 14948,
"title": "Fix significant terms reduce for long terms"
}
|
{
"body": "We had no itegration test before with long terms and several shards only\na bwc test.\n\nrelated to #14948\n",
"number": 14968,
"review_comments": [],
"title": "run bwc test also as integ test and share methods"
}
|
{
"commits": [
{
"message": "run bwc test also as integ test and share methods\n\nWe had no itegration test before with long terms and several shards only\na bwc test.\n\nrelated to #14948"
}
],
"files": [
{
"diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -39,10 +38,7 @@\n import org.elasticsearch.search.aggregations.bucket.filter.InternalFilter;\n import org.elasticsearch.search.aggregations.bucket.script.NativeSignificanceScoreScriptNoParams;\n import org.elasticsearch.search.aggregations.bucket.script.NativeSignificanceScoreScriptWithParams;\n-import org.elasticsearch.search.aggregations.bucket.significant.SignificantStringTerms;\n-import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;\n-import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsAggregatorFactory;\n-import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsBuilder;\n+import org.elasticsearch.search.aggregations.bucket.significant.*;\n import org.elasticsearch.search.aggregations.bucket.significant.heuristics.ChiSquare;\n import org.elasticsearch.search.aggregations.bucket.significant.heuristics.GND;\n import org.elasticsearch.search.aggregations.bucket.significant.heuristics.MutualInformation;\n@@ -56,6 +52,7 @@\n import org.elasticsearch.search.aggregations.bucket.terms.TermsBuilder;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.search.aggregations.bucket.SharedSignificantTermsTestMethods;\n import org.junit.Test;\n \n import java.io.IOException;\n@@ -69,7 +66,6 @@\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n-import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.closeTo;\n@@ -82,12 +78,12 @@\n */\n @ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.SUITE)\n public class SignificantTermsSignificanceScoreIT extends ESIntegTestCase {\n+\n static final String INDEX_NAME = \"testidx\";\n static final String DOC_TYPE = \"doc\";\n static final String TEXT_FIELD = \"text\";\n static final String CLASS_FIELD = \"class\";\n \n-\n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n return pluginList(CustomSignificanceHeuristicPlugin.class);\n@@ -101,7 +97,7 @@ public String randomExecutionHint() {\n public void testPlugin() throws Exception {\n String type = randomBoolean() ? \"string\" : \"long\";\n String settings = \"{\\\"index.number_of_shards\\\": 1, \\\"index.number_of_replicas\\\": 0}\";\n- index01Docs(type, settings);\n+ SharedSignificantTermsTestMethods.index01Docs(type, settings, this);\n SearchResponse response = client().prepareSearch(INDEX_NAME).setTypes(DOC_TYPE)\n .addAggregation(new TermsBuilder(\"class\")\n .field(CLASS_FIELD)\n@@ -257,7 +253,7 @@ public void testXContentResponse() throws Exception {\n \n String type = randomBoolean() ? 
\"string\" : \"long\";\n String settings = \"{\\\"index.number_of_shards\\\": 1, \\\"index.number_of_replicas\\\": 0}\";\n- index01Docs(type, settings);\n+ SharedSignificantTermsTestMethods.index01Docs(type, settings, this);\n SearchResponse response = client().prepareSearch(INDEX_NAME).setTypes(DOC_TYPE)\n .addAggregation(new TermsBuilder(\"class\").field(CLASS_FIELD).subAggregation(new SignificantTermsBuilder(\"sig_terms\").field(TEXT_FIELD)))\n .execute()\n@@ -334,7 +330,7 @@ public void testDeletesIssue7951() throws Exception {\n public void testBackgroundVsSeparateSet() throws Exception {\n String type = randomBoolean() ? \"string\" : \"long\";\n String settings = \"{\\\"index.number_of_shards\\\": 1, \\\"index.number_of_replicas\\\": 0}\";\n- index01Docs(type, settings);\n+ SharedSignificantTermsTestMethods.index01Docs(type, settings, this);\n testBackgroundVsSeparateSet(new MutualInformation.MutualInformationBuilder(true, true), new MutualInformation.MutualInformationBuilder(true, false));\n testBackgroundVsSeparateSet(new ChiSquare.ChiSquareBuilder(true, true), new ChiSquare.ChiSquareBuilder(true, false));\n testBackgroundVsSeparateSet(new GND.GNDBuilder(true), new GND.GNDBuilder(false));\n@@ -395,28 +391,6 @@ public void testBackgroundVsSeparateSet(SignificanceHeuristicBuilder significanc\n assertThat(score11Background, equalTo(score11SeparateSets));\n }\n \n- private void index01Docs(String type, String settings) throws ExecutionException, InterruptedException {\n- String mappings = \"{\\\"doc\\\": {\\\"properties\\\":{\\\"text\\\": {\\\"type\\\":\\\"\" + type + \"\\\"}}}}\";\n- assertAcked(prepareCreate(INDEX_NAME).setSettings(settings).addMapping(\"doc\", mappings));\n- String[] gb = {\"0\", \"1\"};\n- List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"1\")\n- .setSource(TEXT_FIELD, \"1\", CLASS_FIELD, \"1\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"2\")\n- .setSource(TEXT_FIELD, \"1\", CLASS_FIELD, \"1\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"3\")\n- .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"4\")\n- .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"5\")\n- .setSource(TEXT_FIELD, gb, CLASS_FIELD, \"1\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"6\")\n- .setSource(TEXT_FIELD, gb, CLASS_FIELD, \"0\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"7\")\n- .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n- indexRandom(true, false, indexRequestBuilderList);\n- }\n-\n @Test\n public void testScoresEqualForPositiveAndNegative() throws Exception {\n indexEqualTestData();\n@@ -537,4 +511,9 @@ private void indexRandomFrequencies01(String type) throws ExecutionException, In\n }\n indexRandom(true, indexRequestBuilderList);\n }\n+\n+ public void testReduceFromSeveralShards() throws IOException, ExecutionException, InterruptedException {\n+ SharedSignificantTermsTestMethods.aggregateAndCheckFromSeveralShards(this);\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/SignificantTermsSignificanceScoreIT.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,102 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.test.search.aggregations.bucket;\n+\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.search.aggregations.Aggregation;\n+import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;\n+import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsBuilder;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.ESTestCase;\n+import org.junit.Assert;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+public class SharedSignificantTermsTestMethods {\n+ public static final String INDEX_NAME = \"testidx\";\n+ public static final String DOC_TYPE = \"doc\";\n+ public static final String TEXT_FIELD = \"text\";\n+ public static final String CLASS_FIELD = \"class\";\n+\n+ public static void aggregateAndCheckFromSeveralShards(ESIntegTestCase testCase) throws ExecutionException, InterruptedException {\n+ String type = ESTestCase.randomBoolean() ? 
\"string\" : \"long\";\n+ String settings = \"{\\\"index.number_of_shards\\\": 5, \\\"index.number_of_replicas\\\": 0}\";\n+ index01Docs(type, settings, testCase);\n+ testCase.ensureGreen();\n+ testCase.logClusterState();\n+ checkSignificantTermsAggregationCorrect(testCase);\n+ }\n+\n+ private static void checkSignificantTermsAggregationCorrect(ESIntegTestCase testCase) {\n+\n+ SearchResponse response = testCase.client().prepareSearch(INDEX_NAME).setTypes(DOC_TYPE)\n+ .addAggregation(new TermsBuilder(\"class\").field(CLASS_FIELD).subAggregation(\n+ new SignificantTermsBuilder(\"sig_terms\")\n+ .field(TEXT_FIELD)))\n+ .execute()\n+ .actionGet();\n+ assertSearchResponse(response);\n+ StringTerms classes = response.getAggregations().get(\"class\");\n+ Assert.assertThat(classes.getBuckets().size(), equalTo(2));\n+ for (Terms.Bucket classBucket : classes.getBuckets()) {\n+ Map<String, Aggregation> aggs = classBucket.getAggregations().asMap();\n+ Assert.assertTrue(aggs.containsKey(\"sig_terms\"));\n+ SignificantTerms agg = (SignificantTerms) aggs.get(\"sig_terms\");\n+ Assert.assertThat(agg.getBuckets().size(), equalTo(1));\n+ SignificantTerms.Bucket sigBucket = agg.iterator().next();\n+ String term = sigBucket.getKeyAsString();\n+ String classTerm = classBucket.getKeyAsString();\n+ Assert.assertTrue(term.equals(classTerm));\n+ }\n+ }\n+\n+ public static void index01Docs(String type, String settings, ESIntegTestCase testCase) throws ExecutionException, InterruptedException {\n+ String mappings = \"{\\\"doc\\\": {\\\"properties\\\":{\\\"text\\\": {\\\"type\\\":\\\"\" + type + \"\\\"}}}}\";\n+ assertAcked(testCase.prepareCreate(INDEX_NAME).setSettings(settings).addMapping(\"doc\", mappings));\n+ String[] gb = {\"0\", \"1\"};\n+ List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n+ indexRequestBuilderList.add(ESIntegTestCase.client().prepareIndex(INDEX_NAME, DOC_TYPE, \"1\")\n+ .setSource(TEXT_FIELD, \"1\", CLASS_FIELD, \"1\"));\n+ indexRequestBuilderList.add(ESIntegTestCase.client().prepareIndex(INDEX_NAME, DOC_TYPE, \"2\")\n+ .setSource(TEXT_FIELD, \"1\", CLASS_FIELD, \"1\"));\n+ indexRequestBuilderList.add(ESIntegTestCase.client().prepareIndex(INDEX_NAME, DOC_TYPE, \"3\")\n+ .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n+ indexRequestBuilderList.add(ESIntegTestCase.client().prepareIndex(INDEX_NAME, DOC_TYPE, \"4\")\n+ .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n+ indexRequestBuilderList.add(ESIntegTestCase.client().prepareIndex(INDEX_NAME, DOC_TYPE, \"5\")\n+ .setSource(TEXT_FIELD, gb, CLASS_FIELD, \"1\"));\n+ indexRequestBuilderList.add(ESIntegTestCase.client().prepareIndex(INDEX_NAME, DOC_TYPE, \"6\")\n+ .setSource(TEXT_FIELD, gb, CLASS_FIELD, \"0\"));\n+ indexRequestBuilderList.add(ESIntegTestCase.client().prepareIndex(INDEX_NAME, DOC_TYPE, \"7\")\n+ .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n+ testCase.indexRandom(true, false, indexRequestBuilderList);\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/test/search/aggregations/bucket/SharedSignificantTermsTestMethods.java",
"status": "added"
},
{
"diff": "@@ -18,89 +18,17 @@\n */\n package org.elasticsearch.search.aggregations.bucket;\n \n-import org.elasticsearch.action.index.IndexRequestBuilder;\n-import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.search.aggregations.Aggregation;\n-import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;\n-import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsBuilder;\n-import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;\n-import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n-import org.elasticsearch.search.aggregations.bucket.terms.TermsBuilder;\n import org.elasticsearch.test.ESBackcompatTestCase;\n+import org.elasticsearch.test.search.aggregations.bucket.SharedSignificantTermsTestMethods;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.List;\n-import java.util.Map;\n import java.util.concurrent.ExecutionException;\n \n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.hamcrest.Matchers.equalTo;\n-\n-/**\n- */\n public class SignificantTermsBackwardCompatibilityIT extends ESBackcompatTestCase {\n- static final String INDEX_NAME = \"testidx\";\n- static final String DOC_TYPE = \"doc\";\n- static final String TEXT_FIELD = \"text\";\n- static final String CLASS_FIELD = \"class\";\n-\n /**\n * Test for streaming significant terms buckets to old es versions.\n */\n- public void testBucketStreaming() throws IOException, ExecutionException, InterruptedException {\n- logger.debug(\"testBucketStreaming: indexing documents\");\n- String type = randomBoolean() ? \"string\" : \"long\";\n- String settings = \"{\\\"index.number_of_shards\\\": 5, \\\"index.number_of_replicas\\\": 0}\";\n- index01Docs(type, settings);\n- ensureGreen();\n- logClusterState();\n- checkSignificantTermsAggregationCorrect();\n- logger.debug(\"testBucketStreaming: done testing significant terms while upgrading\");\n- }\n-\n- private void index01Docs(String type, String settings) throws ExecutionException, InterruptedException {\n- String mappings = \"{\\\"doc\\\": {\\\"properties\\\":{\\\"\" + TEXT_FIELD + \"\\\": {\\\"type\\\":\\\"\" + type + \"\\\"},\\\"\" + CLASS_FIELD\n- + \"\\\": {\\\"type\\\":\\\"string\\\"}}}}\";\n- assertAcked(prepareCreate(INDEX_NAME).setSettings(settings).addMapping(\"doc\", mappings));\n- String[] gb = {\"0\", \"1\"};\n- List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"1\")\n- .setSource(TEXT_FIELD, \"1\", CLASS_FIELD, \"1\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"2\")\n- .setSource(TEXT_FIELD, \"1\", CLASS_FIELD, \"1\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"3\")\n- .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"4\")\n- .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"5\")\n- .setSource(TEXT_FIELD, gb, CLASS_FIELD, \"1\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"6\")\n- .setSource(TEXT_FIELD, gb, CLASS_FIELD, \"0\"));\n- indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"7\")\n- .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, 
\"0\"));\n- indexRandom(true, indexRequestBuilderList);\n- }\n-\n- private void checkSignificantTermsAggregationCorrect() {\n- SearchResponse response = client().prepareSearch(INDEX_NAME).setTypes(DOC_TYPE)\n- .addAggregation(new TermsBuilder(\"class\").field(CLASS_FIELD).subAggregation(\n- new SignificantTermsBuilder(\"sig_terms\")\n- .field(TEXT_FIELD)))\n- .execute()\n- .actionGet();\n- assertSearchResponse(response);\n- StringTerms classes = response.getAggregations().get(\"class\");\n- assertThat(classes.getBuckets().size(), equalTo(2));\n- for (Terms.Bucket classBucket : classes.getBuckets()) {\n- Map<String, Aggregation> aggs = classBucket.getAggregations().asMap();\n- assertTrue(aggs.containsKey(\"sig_terms\"));\n- SignificantTerms agg = (SignificantTerms) aggs.get(\"sig_terms\");\n- assertThat(agg.getBuckets().size(), equalTo(1));\n- String term = agg.iterator().next().getKeyAsString();\n- String classTerm = classBucket.getKeyAsString();\n- assertTrue(term.equals(classTerm));\n- }\n+ public void testAggregateAndCheckFromSeveralShards() throws IOException, ExecutionException, InterruptedException {\n+ SharedSignificantTermsTestMethods.aggregateAndCheckFromSeveralShards(this);\n }\n }",
"filename": "qa/backwards/shared/src/test/java/org/elasticsearch/search/aggregations/bucket/SignificantTermsBackwardCompatibilityIT.java",
"status": "modified"
}
]
}
|
{
"body": "",
"comments": [
{
"body": "LGTM\n",
"created_at": "2015-11-19T19:57:07Z"
},
{
"body": "Can we remove the variants of error, warn, hell maybe all logger methods that _dont_ take throwable? \n\nMeans you gotta think about it, pass null in the cases you don't have one.\n",
"created_at": "2015-11-19T19:58:49Z"
},
{
"body": "and +1 to this fix, lets just make another issue. i think we still want to ban Throwable.toString()/getMessage() there too and just clean house. no fancy IDE refactoring tools, just a lot of beer and old fashioned cleanup grunt work, that's the only to hunt down and fix all these.\n",
"created_at": "2015-11-19T20:15:51Z"
},
{
"body": "> Can we remove the variants of error, warn, hell maybe all logger methods that dont take throwable?\n\n+1\n",
"created_at": "2015-11-19T20:41:56Z"
},
{
"body": "> Can we remove the variants of error, warn, hell maybe all logger methods that dont take throwable?\n\nOK I'm gonna give this a shot ...\n",
"created_at": "2015-11-19T21:14:59Z"
},
{
"body": "> OK I'm gonna give this a shot ...\n\n_All_ of them? I get removing error that doesn't take one - its pretty rare to have a genuine error without an exception and in those cases we can just call it with null but I figure its reasonably common to have warnings and most trace and info logs won't have an exception to log.\n",
"created_at": "2015-11-20T16:55:06Z"
},
{
"body": "> All of them? \n\nYeah, unfortunately: we use all log levels (even TRACE!) in ES when logging an exception, so I'm exploring making it a required argument to all of them now.\n\nHaving to think about it and pass `null` if you don't have an exception seems like the lesser evil than the 2 dozen or so places I've found so far (including this original one, from #14866) that were failing to include the caught exception in their logging ...\n",
"created_at": "2015-11-20T18:49:16Z"
}
],
"number": 14867,
"title": "Include root-cause exception when we fail to change shard's index buffer"
}
|
{
"body": "This change makes the `Throwable cause` a required argument to `ESlogger.warn` and `.error`, and fixes places where we were not passing the current exception to these methods.\n\nWhen there really is no exception, we can pass null.\n\nThe goal is to try to prevent issues like #14867 in the future.\n\nI also found a few places that had the wrong number of {} vs the number of parameters passed.\n\nCloses #14905 \n",
"number": 14945,
"review_comments": [
{
"body": "\\0/\n",
"created_at": "2015-11-23T16:15:05Z"
},
{
"body": "Maybe use `@param` for cause? Maybe add some explanation like:\n\n```\n* @param cause exception that caused this warning. If no exception caused it pass null. Its intentional that there isn't an overload of this method without this parameter. We want to make not logging the exception inconvenient because logging exception makes issues so much easier to debug.\n```\n",
"created_at": "2015-11-23T16:22:12Z"
},
{
"body": "I think this one was right before.\n",
"created_at": "2015-11-23T16:24:17Z"
},
{
"body": "This one too.\n",
"created_at": "2015-11-23T16:24:26Z"
},
{
"body": "meessage -> message\n",
"created_at": "2015-11-23T16:24:55Z"
},
{
"body": "Thanks for the explanation!\n",
"created_at": "2015-11-23T16:26:00Z"
},
{
"body": "If `shardsClosedTimeout` is a `TimeValue` then its `toString` should output a sane suffix. Forcing this into seconds doesn't seem required.\n",
"created_at": "2015-11-23T16:28:29Z"
},
{
"body": "`LoggerMessageFormat` **should** convert the arrays to strings properly.\n",
"created_at": "2015-11-23T16:31:04Z"
},
{
"body": "Just drop the `t.getMessage()` entirely?\n",
"created_at": "2015-11-23T16:33:30Z"
},
{
"body": "I wonder why this was the way it was....\n",
"created_at": "2015-11-23T16:36:10Z"
},
{
"body": "We don't need to call `toString` on failure - the logger will do that for us.\n\nIt feels like failure's toString should include the reason so it'd be enough to just do\n\n``` java\nlogger.error(\"Shard Failure: [{}]\", failure.getCause(), failure);\n```\n",
"created_at": "2015-11-23T16:39:43Z"
},
{
"body": "Same here.\n",
"created_at": "2015-11-23T16:40:11Z"
},
{
"body": "Nice!\n",
"created_at": "2015-11-23T16:40:27Z"
}
],
"title": "Require the exception cause to ESLogger.warn and .error"
}
|
{
"commits": [
{
"message": "require Throwable cause to ESLogger.warn and error"
},
{
"message": "fixup mismatch params for some logger lines"
},
{
"message": "fix nocommits"
},
{
"message": "feedback"
},
{
"message": "merge master"
},
{
"message": "merge master again"
}
],
"files": [
{
"diff": "@@ -189,10 +189,10 @@ public ClusterState execute(final ClusterState currentState) {\n transientUpdates.put(entry.getKey(), entry.getValue());\n changed = true;\n } else {\n- logger.warn(\"ignoring transient setting [{}], [{}]\", entry.getKey(), error);\n+ logger.warn(\"ignoring transient setting [{}], [{}]\", null, entry.getKey(), error);\n }\n } else {\n- logger.warn(\"ignoring transient setting [{}], not dynamically updateable\", entry.getKey());\n+ logger.warn(\"ignoring transient setting [{}], not dynamically updateable\", null, entry.getKey());\n }\n }\n \n@@ -206,10 +206,10 @@ public ClusterState execute(final ClusterState currentState) {\n persistentUpdates.put(entry.getKey(), entry.getValue());\n changed = true;\n } else {\n- logger.warn(\"ignoring persistent setting [{}], [{}]\", entry.getKey(), error);\n+ logger.warn(\"ignoring persistent setting [{}], [{}]\", null, entry.getKey(), error);\n }\n } else {\n- logger.warn(\"ignoring persistent setting [{}], not dynamically updateable\", entry.getKey());\n+ logger.warn(\"ignoring persistent setting [{}], not dynamically updateable\", null, entry.getKey());\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java",
"status": "modified"
},
{
"diff": "@@ -96,8 +96,10 @@ protected NodesSnapshotStatus newResponse(Request request, AtomicReferenceArray\n nodesList.add((NodeSnapshotStatus) resp);\n } else if (resp instanceof FailedNodeException) {\n failures.add((FailedNodeException) resp);\n+ } else if (resp instanceof Throwable) {\n+ logger.warn(\"unknown response type, expected NodeSnapshotStatus or FailedNodeException\", (Throwable) resp);\n } else {\n- logger.warn(\"unknown response type [{}], expected NodeSnapshotStatus or FailedNodeException\", resp);\n+ logger.warn(\"unknown response type [{}], expected NodeSnapshotStatus or FailedNodeException\", null, resp);\n }\n }\n return new NodesSnapshotStatus(clusterName, nodesList.toArray(new NodeSnapshotStatus[nodesList.size()]),",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java",
"status": "modified"
},
{
"diff": "@@ -109,7 +109,7 @@ protected UpgradeResponse newResponse(UpgradeRequest request, int totalShards, i\n if (primaryCount == metaData.index(index).getNumberOfShards()) {\n updatedVersions.put(index, new Tuple<>(versionEntry.getValue().v1(), versionEntry.getValue().v2().toString()));\n } else {\n- logger.warn(\"Not updating settings for the index [{}] because upgraded of some primary shards failed - expected[{}], received[{}]\", index,\n+ logger.warn(\"Not updating settings for the index [{}] because upgraded of some primary shards failed - expected[{}], received[{}]\", null, index,\n expectedPrimaryCount, primaryCount == null ? 0 : primaryCount);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/TransportUpgradeAction.java",
"status": "modified"
},
{
"diff": "@@ -296,7 +296,7 @@ public void onTimeout(TimeValue timeout) {\n try {\n failReplicaIfNeeded(request.internalShardId.getIndex(), request.internalShardId.id(), t, request);\n } catch (Throwable unexpected) {\n- logger.error(\"{} unexpected error while failing replica\", request.internalShardId.id(), unexpected);\n+ logger.error(\"{} unexpected error while failing replica\", unexpected, request.internalShardId.id());\n } finally {\n responseWithFailure(t);\n }\n@@ -383,7 +383,7 @@ protected void doRun() {\n return;\n }\n if (observer.observedState().nodes().nodeExists(primary.currentNodeId()) == false) {\n- logger.trace(\"primary shard [{}] is assigned to anode we do not know the node, scheduling a retry.\", primary.shardId(), primary.currentNodeId());\n+ logger.trace(\"primary shard [{}] is assigned to a node we do not know the node [{}], scheduling a retry.\", primary.shardId(), primary.currentNodeId());\n retryBecauseUnavailable(shardIt.shardId(), \"Primary shard is not active or isn't assigned to a known node.\");\n return;\n }",
"filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java",
"status": "modified"
},
{
"diff": "@@ -201,7 +201,7 @@ private void perform(@Nullable final Throwable currentFailure) {\n failure = new NoShardAvailableActionException(null, LoggerMessageFormat.format(\"No shard available for [{}]\", internalRequest.request()), failure);\n } else {\n if (logger.isDebugEnabled()) {\n- logger.debug(\"{}: failed to execute [{}]\", failure, null, internalRequest.request());\n+ logger.debug(\"failed to execute [{}]\", failure, internalRequest.request());\n }\n }\n listener.onFailure(failure);",
"filename": "core/src/main/java/org/elasticsearch/action/support/single/shard/TransportSingleShardAction.java",
"status": "modified"
},
{
"diff": "@@ -110,7 +110,7 @@ protected Result prepare(UpdateRequest request, final GetResult getResult) {\n // (the default) or \"none\", meaning abort upsert\n if (!\"create\".equals(scriptOpChoice)) {\n if (!\"none\".equals(scriptOpChoice)) {\n- logger.warn(\"Used upsert operation [{}] for script [{}], doing nothing...\", scriptOpChoice,\n+ logger.warn(\"Used upsert operation [{}] for script [{}], doing nothing...\", null, scriptOpChoice,\n request.script.getScript());\n }\n UpdateResponse update = new UpdateResponse(getResult.getIndex(), getResult.getType(), getResult.getId(),\n@@ -235,7 +235,7 @@ protected Result prepare(UpdateRequest request, final GetResult getResult) {\n update.setGetResult(extractGetResult(request, request.index(), getResult.getVersion(), updatedSourceAsMap, updateSourceContentType, getResult.internalSourceRef()));\n return new Result(update, Operation.NONE, updatedSourceAsMap, updateSourceContentType);\n } else {\n- logger.warn(\"Used update operation [{}] for script [{}], doing nothing...\", operation, request.script.getScript());\n+ logger.warn(\"Used update operation [{}] for script [{}], doing nothing...\", null, operation, request.script.getScript());\n UpdateResponse update = new UpdateResponse(getResult.getIndex(), getResult.getType(), getResult.getId(), getResult.getVersion(), false);\n return new Result(update, Operation.NONE, updatedSourceAsMap, updateSourceContentType);\n }",
"filename": "core/src/main/java/org/elasticsearch/action/update/UpdateHelper.java",
"status": "modified"
},
{
"diff": "@@ -88,7 +88,7 @@ public static void initializeNatives(Path tmpFile, boolean mlockAll, boolean sec\n // check if the user is running as root, and bail\n if (Natives.definitelyRunningAsRoot()) {\n if (Boolean.parseBoolean(System.getProperty(\"es.insecure.allow.root\"))) {\n- logger.warn(\"running as ROOT user. this is a bad idea!\");\n+ logger.warn(\"running as ROOT user. this is a bad idea!\", null);\n } else {\n throw new RuntimeException(\"don't run elasticsearch as root.\");\n }\n@@ -278,7 +278,7 @@ static void init(String[] args) throws Throwable {\n // warn if running using the client VM\n if (JvmInfo.jvmInfo().getVmName().toLowerCase(Locale.ROOT).contains(\"client\")) {\n ESLogger logger = Loggers.getLogger(Bootstrap.class);\n- logger.warn(\"jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line\");\n+ logger.warn(\"jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line\", null);\n }\n \n try {\n@@ -313,7 +313,7 @@ static void init(String[] args) throws Throwable {\n PrintStream ps = new PrintStream(os, false, \"UTF-8\");\n new StartupError(e).printStackTrace(ps);\n ps.flush();\n- logger.error(\"Guice Exception: {}\", os.toString(\"UTF-8\"));\n+ logger.error(\"Guice Exception: {}\", null, os.toString(\"UTF-8\"));\n } else {\n // full exception\n logger.error(\"Exception\", e);",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java",
"status": "modified"
},
{
"diff": "@@ -54,9 +54,9 @@ private JNAKernel32Library() {\n Native.register(\"kernel32\");\n logger.debug(\"windows/Kernel32 library loaded\");\n } catch (NoClassDefFoundError e) {\n- logger.warn(\"JNA not found. native methods and handlers will be disabled.\");\n+ logger.warn(\"JNA not found. native methods and handlers will be disabled.\", e);\n } catch (UnsatisfiedLinkError e) {\n- logger.warn(\"unable to link Windows/Kernel32 library. native methods and handlers will be disabled.\");\n+ logger.warn(\"unable to link Windows/Kernel32 library. native methods and handlers will be disabled.\", e);\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/JNAKernel32Library.java",
"status": "modified"
},
{
"diff": "@@ -74,7 +74,7 @@ static void tryMlockall() {\n softLimit = rlimit.rlim_cur.longValue();\n hardLimit = rlimit.rlim_max.longValue();\n } else {\n- logger.warn(\"Unable to retrieve resource limits: \" + JNACLibrary.strerror(Native.getLastError()));\n+ logger.warn(\"Unable to retrieve resource limits: \" + JNACLibrary.strerror(Native.getLastError()), null);\n }\n }\n } catch (UnsatisfiedLinkError e) {\n@@ -83,23 +83,23 @@ static void tryMlockall() {\n }\n \n // mlockall failed for some reason\n- logger.warn(\"Unable to lock JVM Memory: error=\" + errno + \",reason=\" + errMsg);\n- logger.warn(\"This can result in part of the JVM being swapped out.\");\n+ logger.warn(\"Unable to lock JVM Memory: error=\" + errno + \",reason=\" + errMsg, null);\n+ logger.warn(\"This can result in part of the JVM being swapped out.\", null);\n if (errno == JNACLibrary.ENOMEM) {\n if (rlimitSuccess) {\n- logger.warn(\"Increase RLIMIT_MEMLOCK, soft limit: \" + rlimitToString(softLimit) + \", hard limit: \" + rlimitToString(hardLimit));\n+ logger.warn(\"Increase RLIMIT_MEMLOCK, soft limit: \" + rlimitToString(softLimit) + \", hard limit: \" + rlimitToString(hardLimit), null);\n if (Constants.LINUX) {\n // give specific instructions for the linux case to make it easy\n String user = System.getProperty(\"user.name\");\n logger.warn(\"These can be adjusted by modifying /etc/security/limits.conf, for example: \\n\" +\n \"\\t# allow user '\" + user + \"' mlockall\\n\" +\n \"\\t\" + user + \" soft memlock unlimited\\n\" +\n- \"\\t\" + user + \" hard memlock unlimited\"\n+ \"\\t\" + user + \" hard memlock unlimited\", null\n );\n- logger.warn(\"If you are logged in interactively, you will have to re-login for the new limits to take effect.\");\n+ logger.warn(\"If you are logged in interactively, you will have to re-login for the new limits to take effect.\", null);\n }\n } else {\n- logger.warn(\"Increase RLIMIT_MEMLOCK (ulimit).\");\n+ logger.warn(\"Increase RLIMIT_MEMLOCK (ulimit).\", null);\n }\n }\n }\n@@ -137,7 +137,7 @@ static void tryVirtualLock() {\n // the amount of memory we wish to lock, plus a small overhead (1MB).\n SizeT size = new SizeT(JvmInfo.jvmInfo().getMem().getHeapInit().getBytes() + (1024 * 1024));\n if (!kernel.SetProcessWorkingSetSize(process, size, size)) {\n- logger.warn(\"Unable to lock JVM memory. Failed to set working set size. Error code \" + Native.getLastError());\n+ logger.warn(\"Unable to lock JVM memory. Failed to set working set size. 
Error code \" + Native.getLastError(), null);\n } else {\n JNAKernel32Library.MemoryBasicInformation memInfo = new JNAKernel32Library.MemoryBasicInformation();\n long address = 0;\n@@ -170,7 +170,7 @@ static void addConsoleCtrlHandler(ConsoleCtrlHandler handler) {\n if (result) {\n logger.debug(\"console ctrl handler correctly set\");\n } else {\n- logger.warn(\"unknown error \" + Native.getLastError() + \" when adding console ctrl handler:\");\n+ logger.warn(\"unknown error \" + Native.getLastError() + \" when adding console ctrl handler:\", null);\n }\n } catch (UnsatisfiedLinkError e) {\n // this will have already been logged by Kernel32Library, no need to repeat it\n@@ -186,11 +186,6 @@ static void trySeccomp(Path tmpFile) {\n LOCAL_SECCOMP_ALL = true;\n }\n } catch (Throwable t) {\n- // this is likely to happen unless the kernel is newish, its a best effort at the moment\n- // so we log stacktrace at debug for now...\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"unable to install syscall filter\", t);\n- }\n logger.warn(\"unable to install syscall filter: \", t);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java",
"status": "modified"
},
{
"diff": "@@ -113,12 +113,12 @@ String getWarningMessage() {\n */\n static void check() {\n if (Boolean.parseBoolean(System.getProperty(JVM_BYPASS))) {\n- Loggers.getLogger(JVMCheck.class).warn(\"bypassing jvm version check for version [{}], this can result in data corruption!\", fullVersion());\n+ Loggers.getLogger(JVMCheck.class).warn(\"bypassing jvm version check for version [{}], this can result in data corruption!\", null, fullVersion());\n } else if (\"Oracle Corporation\".equals(Constants.JVM_VENDOR)) {\n HotspotBug bug = JVM_BROKEN_HOTSPOT_VERSIONS.get(Constants.JVM_VERSION);\n if (bug != null) {\n if (bug.workAround != null && ManagementFactory.getRuntimeMXBean().getInputArguments().contains(bug.workAround)) {\n- Loggers.getLogger(JVMCheck.class).warn(bug.getWarningMessage());\n+ Loggers.getLogger(JVMCheck.class).warn(bug.getWarningMessage(), null);\n } else {\n throw new RuntimeException(bug.getErrorMessage());\n }",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/JVMCheck.java",
"status": "modified"
},
{
"diff": "@@ -54,31 +54,31 @@ private Natives() {}\n \n static void tryMlockall() {\n if (!JNA_AVAILABLE) {\n- logger.warn(\"cannot mlockall because JNA is not available\");\n+ logger.warn(\"cannot mlockall because JNA is not available\", null);\n return;\n }\n JNANatives.tryMlockall();\n }\n \n static boolean definitelyRunningAsRoot() {\n if (!JNA_AVAILABLE) {\n- logger.warn(\"cannot check if running as root because JNA is not available\");\n+ logger.warn(\"cannot check if running as root because JNA is not available\", null);\n return false;\n }\n return JNANatives.definitelyRunningAsRoot();\n }\n \n static void tryVirtualLock() {\n if (!JNA_AVAILABLE) {\n- logger.warn(\"cannot mlockall because JNA is not available\");\n+ logger.warn(\"cannot mlockall because JNA is not available\", null);\n return;\n }\n JNANatives.tryVirtualLock();\n }\n \n static void addConsoleCtrlHandler(ConsoleCtrlHandler handler) {\n if (!JNA_AVAILABLE) {\n- logger.warn(\"cannot register console handler because JNA is not available\");\n+ logger.warn(\"cannot register console handler because JNA is not available\", null);\n return;\n }\n JNANatives.addConsoleCtrlHandler(handler);\n@@ -93,7 +93,7 @@ static boolean isMemoryLocked() {\n \n static void trySeccomp(Path tmpFile) {\n if (!JNA_AVAILABLE) {\n- logger.warn(\"cannot install syscall filters because JNA is not available\");\n+ logger.warn(\"cannot install syscall filters because JNA is not available\", null);\n return;\n }\n JNANatives.trySeccomp(tmpFile);",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/Natives.java",
"status": "modified"
},
{
"diff": "@@ -367,7 +367,7 @@ public LivenessResponse newInstance() {\n }\n }).txGet();\n if (!ignoreClusterName && !clusterName.equals(livenessResponse.getClusterName())) {\n- logger.warn(\"node {} not part of the cluster {}, ignoring...\", listedNode, clusterName);\n+ logger.warn(\"node {} not part of the cluster {}, ignoring...\", null, listedNode, clusterName);\n newFilteredNodes.add(listedNode);\n } else if (livenessResponse.getDiscoveryNode() != null) {\n // use discovered information but do keep the original transport address, so people can control which address is exactly used.\n@@ -475,7 +475,7 @@ public void handleException(TransportException e) {\n HashSet<DiscoveryNode> newFilteredNodes = new HashSet<>();\n for (Map.Entry<DiscoveryNode, ClusterStateResponse> entry : clusterStateResponses.entrySet()) {\n if (!ignoreClusterName && !clusterName.equals(entry.getValue().getClusterName())) {\n- logger.warn(\"node {} not part of the cluster {}, ignoring...\", entry.getValue().getState().nodes().localNode(), clusterName);\n+ logger.warn(\"node {} not part of the cluster {}, ignoring...\", null, entry.getValue().getState().nodes().localNode(), clusterName);\n newFilteredNodes.add(entry.getKey());\n continue;\n }",
"filename": "core/src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java",
"status": "modified"
},
{
"diff": "@@ -302,7 +302,7 @@ protected void configure() {\n String shardsAllocatorType = shardsAllocators.bindType(binder(), settings, ClusterModule.SHARDS_ALLOCATOR_TYPE_KEY, ClusterModule.BALANCED_ALLOCATOR);\n if (shardsAllocatorType.equals(ClusterModule.EVEN_SHARD_COUNT_ALLOCATOR)) {\n final ESLogger logger = Loggers.getLogger(getClass(), settings);\n- logger.warn(\"{} allocator has been removed in 2.0 using {} instead\", ClusterModule.EVEN_SHARD_COUNT_ALLOCATOR, ClusterModule.BALANCED_ALLOCATOR);\n+ logger.warn(\"{} allocator has been removed in 2.0 using {} instead\", null, ClusterModule.EVEN_SHARD_COUNT_ALLOCATOR, ClusterModule.BALANCED_ALLOCATOR);\n }\n allocationDeciders.bind(binder());\n indexTemplateFilters.bind(binder());",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterModule.java",
"status": "modified"
},
{
"diff": "@@ -115,7 +115,7 @@ public void onRefreshSettings(Settings settings) {\n \n if (newUpdateFrequency != null) {\n if (newUpdateFrequency.getMillis() < TimeValue.timeValueSeconds(10).getMillis()) {\n- logger.warn(\"[{}] set too low [{}] (< 10s)\", INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, newUpdateFrequency);\n+ logger.warn(\"[{}] set too low [{}] (< 10s)\", null, INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, newUpdateFrequency);\n throw new IllegalStateException(\"Unable to set \" + INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL + \" less than 10 seconds\");\n } else {\n logger.info(\"updating [{}] from [{}] to [{}]\", INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL, updateFrequency, newUpdateFrequency);\n@@ -317,7 +317,7 @@ public void onResponse(NodesStatsResponse nodeStatses) {\n @Override\n public void onFailure(Throwable e) {\n if (e instanceof ReceiveTimeoutTransportException) {\n- logger.error(\"NodeStatsAction timed out for ClusterInfoUpdateJob (reason [{}])\", e.getMessage());\n+ logger.error(\"NodeStatsAction timed out for ClusterInfoUpdateJob\", e);\n } else {\n if (e instanceof ClusterBlockException) {\n if (logger.isTraceEnabled()) {\n@@ -347,7 +347,7 @@ public void onResponse(IndicesStatsResponse indicesStatsResponse) {\n @Override\n public void onFailure(Throwable e) {\n if (e instanceof ReceiveTimeoutTransportException) {\n- logger.error(\"IndicesStatsAction timed out for ClusterInfoUpdateJob (reason [{}])\", e.getMessage());\n+ logger.error(\"IndicesStatsAction timed out for ClusterInfoUpdateJob\", e);\n } else {\n if (e instanceof ClusterBlockException) {\n if (logger.isTraceEnabled()) {\n@@ -367,14 +367,14 @@ public void onFailure(Throwable e) {\n nodeLatch.await(fetchTimeout.getMillis(), TimeUnit.MILLISECONDS);\n } catch (InterruptedException e) {\n Thread.currentThread().interrupt(); // restore interrupt status\n- logger.warn(\"Failed to update node information for ClusterInfoUpdateJob within {} timeout\", fetchTimeout);\n+ logger.warn(\"Failed to update node information for ClusterInfoUpdateJob within {} timeout\", e, fetchTimeout);\n }\n \n try {\n indicesLatch.await(fetchTimeout.getMillis(), TimeUnit.MILLISECONDS);\n } catch (InterruptedException e) {\n Thread.currentThread().interrupt(); // restore interrupt status\n- logger.warn(\"Failed to update shard information for ClusterInfoUpdateJob within {} timeout\", fetchTimeout);\n+ logger.warn(\"Failed to update shard information for ClusterInfoUpdateJob within {} timeout\", e, fetchTimeout);\n }\n ClusterInfo clusterInfo = getClusterInfo();\n for (Listener l : listeners) {\n@@ -405,7 +405,7 @@ static void fillDiskUsagePerNode(ESLogger logger, NodeStats[] nodeStatsArray,\n ImmutableOpenMap.Builder<String, DiskUsage> newMostAvaiableUsages) {\n for (NodeStats nodeStats : nodeStatsArray) {\n if (nodeStats.getFs() == null) {\n- logger.warn(\"Unable to retrieve node FS stats for {}\", nodeStats.getNode().name());\n+ logger.warn(\"Unable to retrieve node FS stats for {}\", null, nodeStats.getNode().name());\n } else {\n FsInfo.Path leastAvailablePath = null;\n FsInfo.Path mostAvailablePath = null;",
"filename": "core/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java",
"status": "modified"
},
{
"diff": "@@ -76,7 +76,7 @@ public void nodeIndexDeleted(final ClusterState clusterState, final String index\n transportService.sendRequest(clusterState.nodes().masterNode(),\n INDEX_DELETED_ACTION_NAME, new NodeIndexDeletedMessage(index, nodeId), EmptyTransportResponseHandler.INSTANCE_SAME);\n if (nodes.localNode().isDataNode() == false) {\n- logger.trace(\"[{}] not acking store deletion (not a data node)\");\n+ logger.trace(\"[{}] not acking store deletion (not a data node)\", nodeId);\n return;\n }\n threadPool.generic().execute(new AbstractRunnable() {\n@@ -102,9 +102,9 @@ private void lockIndexAndAck(String index, DiscoveryNodes nodes, String nodeId,\n transportService.sendRequest(clusterState.nodes().masterNode(),\n INDEX_STORE_DELETED_ACTION_NAME, new NodeIndexStoreDeletedMessage(index, nodeId), EmptyTransportResponseHandler.INSTANCE_SAME);\n } catch (LockObtainFailedException exc) {\n- logger.warn(\"[{}] failed to lock all shards for index - timed out after 30 seconds\", index);\n+ logger.warn(\"[{}] failed to lock all shards for index - timed out after 30 seconds\", exc, index);\n } catch (InterruptedException e) {\n- logger.warn(\"[{}] failed to lock all shards for index - interrupted\", index);\n+ logger.warn(\"[{}] failed to lock all shards for index - interrupted\", e, index);\n }\n }\n \n@@ -191,4 +191,4 @@ public void readFrom(StreamInput in) throws IOException {\n nodeId = in.readString();\n }\n }\n-}\n\\ No newline at end of file\n+}",
"filename": "core/src/main/java/org/elasticsearch/cluster/action/index/NodeIndexDeletedAction.java",
"status": "modified"
},
{
"diff": "@@ -57,7 +57,7 @@ public NodeMappingRefreshAction(Settings settings, TransportService transportSer\n public void nodeMappingRefresh(final ClusterState state, final NodeMappingRefreshRequest request) {\n final DiscoveryNodes nodes = state.nodes();\n if (nodes.masterNode() == null) {\n- logger.warn(\"can't send mapping refresh for [{}][{}], no master known.\", request.index(), Strings.arrayToCommaDelimitedString(request.types()));\n+ logger.warn(\"can't send mapping refresh for [{}][{}], no master known.\", null, request.index(), Strings.arrayToCommaDelimitedString(request.types()));\n return;\n }\n transportService.sendRequest(nodes.masterNode(), ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);",
"filename": "core/src/main/java/org/elasticsearch/cluster/action/index/NodeMappingRefreshAction.java",
"status": "modified"
},
{
"diff": "@@ -86,7 +86,7 @@ public void shardFailed(final ShardRouting shardRouting, final String indexUUID,\n public void shardFailed(final ShardRouting shardRouting, final String indexUUID, final String message, @Nullable final Throwable failure, TimeValue timeout, Listener listener) {\n DiscoveryNode masterNode = clusterService.state().nodes().masterNode();\n if (masterNode == null) {\n- logger.warn(\"can't send shard failed for {}, no master known.\", shardRouting);\n+ logger.warn(\"can't send shard failed for {}, no master known.\", null, shardRouting);\n listener.onShardFailedNoMaster();\n return;\n }\n@@ -122,7 +122,7 @@ public void handleException(TransportException exp) {\n public void shardStarted(final ShardRouting shardRouting, String indexUUID, final String reason) {\n DiscoveryNode masterNode = clusterService.state().nodes().masterNode();\n if (masterNode == null) {\n- logger.warn(\"{} can't send shard started for {}, no master known.\", shardRouting.shardId(), shardRouting);\n+ logger.warn(\"{} can't send shard started for {}, no master known.\", null, shardRouting.shardId(), shardRouting);\n return;\n }\n shardStarted(shardRouting, indexUUID, reason, masterNode);",
"filename": "core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java",
"status": "modified"
},
{
"diff": "@@ -780,7 +780,7 @@ public static MetaData addDefaultUnitsIfNeeded(ESLogger logger, MetaData metaDat\n continue;\n }\n // It's a naked number that previously would be interpreted as default unit (bytes); now we add it:\n- logger.warn(\"byte-sized cluster setting [{}] with value [{}] is missing units; assuming default units (b) but in future versions this will be a hard error\", settingName, settingValue);\n+ logger.warn(\"byte-sized cluster setting [{}] with value [{}] is missing units; assuming default units (b) but in future versions this will be a hard error\", null, settingName, settingValue);\n if (newPersistentSettings == null) {\n newPersistentSettings = Settings.builder();\n newPersistentSettings.put(metaData.persistentSettings());\n@@ -794,7 +794,7 @@ public static MetaData addDefaultUnitsIfNeeded(ESLogger logger, MetaData metaDat\n continue;\n }\n // It's a naked number that previously would be interpreted as default unit (ms); now we add it:\n- logger.warn(\"time cluster setting [{}] with value [{}] is missing units; assuming default units (ms) but in future versions this will be a hard error\", settingName, settingValue);\n+ logger.warn(\"time cluster setting [{}] with value [{}] is missing units; assuming default units (ms) but in future versions this will be a hard error\", null, settingName, settingValue);\n if (newPersistentSettings == null) {\n newPersistentSettings = Settings.builder();\n newPersistentSettings.put(metaData.persistentSettings());",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -172,7 +172,7 @@ private IndexMetaData addDefaultUnitsIfNeeded(IndexMetaData indexMetaData) {\n continue;\n }\n // It's a naked number that previously would be interpreted as default unit (bytes); now we add it:\n- logger.warn(\"byte-sized index setting [{}] with value [{}] is missing units; assuming default units (b) but in future versions this will be a hard error\", byteSizeSetting, value);\n+ logger.warn(\"byte-sized index setting [{}] with value [{}] is missing units; assuming default units (b) but in future versions this will be a hard error\", null, byteSizeSetting, value);\n if (newSettings == null) {\n newSettings = Settings.builder();\n newSettings.put(settings);\n@@ -189,7 +189,7 @@ private IndexMetaData addDefaultUnitsIfNeeded(IndexMetaData indexMetaData) {\n continue;\n }\n // It's a naked number that previously would be interpreted as default unit (ms); now we add it:\n- logger.warn(\"time index setting [{}] with value [{}] is missing units; assuming default units (ms) but in future versions this will be a hard error\", timeSetting, value);\n+ logger.warn(\"time index setting [{}] with value [{}] is missing units; assuming default units (ms) but in future versions this will be a hard error\", null, timeSetting, value);\n if (newSettings == null) {\n newSettings = Settings.builder();\n newSettings.put(settings);",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java",
"status": "modified"
},
{
"diff": "@@ -243,10 +243,10 @@ private boolean processIndexMappingTasks(List<MappingTask> tasks, IndexService i\n continue;\n }\n \n- logger.warn(\"[{}] re-syncing mappings with cluster state for types [{}]\", index, updatedTypes);\n+ logger.warn(\"[{}] re-syncing mappings with cluster state for types [{}]\", null, index, updatedTypes);\n dirty = true;\n } catch (Throwable t) {\n- logger.warn(\"[{}] failed to refresh-mapping in cluster state, types [{}]\", index, refreshTask.types);\n+ logger.warn(\"[{}] failed to refresh-mapping in cluster state, types [{}]\", t, index, refreshTask.types);\n }\n } else if (task instanceof UpdateTask) {\n UpdateTask updateTask = (UpdateTask) task;\n@@ -279,10 +279,10 @@ private boolean processIndexMappingTasks(List<MappingTask> tasks, IndexService i\n builder.putMapping(new MappingMetaData(updatedMapper));\n dirty = true;\n } catch (Throwable t) {\n- logger.warn(\"[{}] failed to update-mapping in cluster state, type [{}]\", index, updateTask.type);\n+ logger.warn(\"[{}] failed to update-mapping in cluster state, type [{}]\", t, index, updateTask.type);\n }\n } else {\n- logger.warn(\"illegal state, got wrong mapping task type [{}]\", task);\n+ logger.warn(\"illegal state, got wrong mapping task type [{}]\", null, task);\n }\n }\n return dirty;",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java",
"status": "modified"
},
{
"diff": "@@ -102,7 +102,7 @@ public void clusterChanged(ClusterChangedEvent event) {\n final int dash = autoExpandReplicas.indexOf('-');\n if (-1 == dash) {\n logger.warn(\"failed to set [{}] for index [{}], it should be dash delimited [{}]\",\n- IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), autoExpandReplicas);\n+ null, IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, indexMetaData.getIndex(), autoExpandReplicas);\n continue;\n }\n final String sMin = autoExpandReplicas.substring(0, dash);\n@@ -173,7 +173,7 @@ public void onResponse(ClusterStateUpdateResponse response) {\n @Override\n public void onFailure(Throwable t) {\n for (String index : indices) {\n- logger.warn(\"[{}] fail to auto expand replicas to [{}]\", index, fNumberOfReplicas);\n+ logger.warn(\"[{}] fail to auto expand replicas to [{}]\", null, index, fNumberOfReplicas);\n }\n }\n });",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
{
"diff": "@@ -98,7 +98,7 @@ public ClusterRebalanceAllocationDecider(Settings settings, NodeSettingsService\n try {\n type = ClusterRebalanceType.parseString(allowRebalance);\n } catch (IllegalStateException e) {\n- logger.warn(\"[{}] has a wrong value {}, defaulting to 'indices_all_active'\", CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, allowRebalance);\n+ logger.warn(\"[{}] has a wrong value {}, defaulting to 'indices_all_active'\", e, CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, allowRebalance);\n type = ClusterRebalanceType.INDICES_ALL_ACTIVE;\n }\n logger.debug(\"using [{}] with [{}]\", CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, type.toString().toLowerCase(Locale.ROOT));",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ClusterRebalanceAllocationDecider.java",
"status": "modified"
},
{
"diff": "@@ -150,7 +150,7 @@ private void warnAboutDiskIfNeeded(DiskUsage usage) {\n // Check absolute disk values\n if (usage.getFreeBytes() < DiskThresholdDecider.this.freeBytesThresholdHigh.bytes()) {\n logger.warn(\"high disk watermark [{}] exceeded on {}, shards will be relocated away from this node\",\n- DiskThresholdDecider.this.freeBytesThresholdHigh, usage);\n+ null, DiskThresholdDecider.this.freeBytesThresholdHigh, usage);\n } else if (usage.getFreeBytes() < DiskThresholdDecider.this.freeBytesThresholdLow.bytes()) {\n logger.info(\"low disk watermark [{}] exceeded on {}, replicas will not be assigned to this node\",\n DiskThresholdDecider.this.freeBytesThresholdLow, usage);\n@@ -159,7 +159,7 @@ private void warnAboutDiskIfNeeded(DiskUsage usage) {\n // Check percentage disk values\n if (usage.getFreeDiskAsPercentage() < DiskThresholdDecider.this.freeDiskThresholdHigh) {\n logger.warn(\"high disk watermark [{}] exceeded on {}, shards will be relocated away from this node\",\n- Strings.format1Decimals(100.0 - DiskThresholdDecider.this.freeDiskThresholdHigh, \"%\"), usage);\n+ null, Strings.format1Decimals(100.0 - DiskThresholdDecider.this.freeDiskThresholdHigh, \"%\"), usage);\n } else if (usage.getFreeDiskAsPercentage() < DiskThresholdDecider.this.freeDiskThresholdLow) {\n logger.info(\"low disk watermark [{}] exceeded on {}, replicas will not be assigned to this node\",\n Strings.format1Decimals(100.0 - DiskThresholdDecider.this.freeDiskThresholdLow, \"%\"), usage);\n@@ -435,13 +435,13 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n long freeBytesAfterShard = freeBytes - shardSize;\n if (freeBytesAfterShard < freeBytesThresholdHigh.bytes()) {\n logger.warn(\"after allocating, node [{}] would have less than the required {} free bytes threshold ({} bytes free), preventing allocation\",\n- node.nodeId(), freeBytesThresholdHigh, freeBytesAfterShard);\n+ null, node.nodeId(), freeBytesThresholdHigh, freeBytesAfterShard);\n return allocation.decision(Decision.NO, NAME, \"after allocation less than required [%s] free on node, free: [%s]\",\n freeBytesThresholdLow, new ByteSizeValue(freeBytesAfterShard));\n }\n if (freeSpaceAfterShard < freeDiskThresholdHigh) {\n logger.warn(\"after allocating, node [{}] would have more than the allowed {} free disk threshold ({} free), preventing allocation\",\n- node.nodeId(), Strings.format1Decimals(freeDiskThresholdHigh, \"%\"), Strings.format1Decimals(freeSpaceAfterShard, \"%\"));\n+ null, node.nodeId(), Strings.format1Decimals(freeDiskThresholdHigh, \"%\"), Strings.format1Decimals(freeSpaceAfterShard, \"%\"));\n return allocation.decision(Decision.NO, NAME, \"after allocation more than allowed [%s%%] used disk on node, free: [%s%%]\",\n usedDiskThresholdLow, freeSpaceAfterShard);\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java",
"status": "modified"
},
{
"diff": "@@ -566,7 +566,7 @@ public void run() {\n \n private void warnAboutSlowTaskIfNeeded(TimeValue executionTime, String source) {\n if (executionTime.getMillis() > slowTaskLoggingThreshold.getMillis()) {\n- logger.warn(\"cluster state update task [{}] took {} above the warn threshold of {}\", source, executionTime, slowTaskLoggingThreshold);\n+ logger.warn(\"cluster state update task [{}] took {} above the warn threshold of {}\", null, source, executionTime, slowTaskLoggingThreshold);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java",
"status": "modified"
},
{
"diff": "@@ -71,7 +71,7 @@ public static byte[] getSecureMungedAddress() {\n }\n \n if (!isValidAddress(address)) {\n- logger.warn(\"Unable to get a valid mac address, will use a dummy address\");\n+ logger.warn(\"Unable to get a valid mac address, will use a dummy address\", null);\n address = constructDummyMulticastAddress();\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/MacAddressProvider.java",
"status": "modified"
},
{
"diff": "@@ -141,6 +141,7 @@ memoryBytesLimit, new ByteSizeValue(memoryBytesLimit),\n }\n if (memoryBytesLimit > 0 && newUsedWithOverhead > memoryBytesLimit) {\n logger.warn(\"[{}] New used memory {} [{}] for data of [{}] would be larger than configured breaker: {} [{}], breaking\",\n+ null,\n this.name,\n newUsedWithOverhead, new ByteSizeValue(newUsedWithOverhead), label,\n memoryBytesLimit, new ByteSizeValue(memoryBytesLimit));",
"filename": "core/src/main/java/org/elasticsearch/common/breaker/ChildMemoryCircuitBreaker.java",
"status": "modified"
},
{
"diff": "@@ -129,7 +129,7 @@ memoryBytesLimit, new ByteSizeValue(memoryBytesLimit),\n }\n if (memoryBytesLimit > 0 && newUsedWithOverhead > memoryBytesLimit) {\n logger.warn(\"New used memory {} [{}] from field [{}] would be larger than configured breaker: {} [{}], breaking\",\n- newUsedWithOverhead, new ByteSizeValue(newUsedWithOverhead), label,\n+ null, newUsedWithOverhead, new ByteSizeValue(newUsedWithOverhead), label,\n memoryBytesLimit, new ByteSizeValue(memoryBytesLimit));\n circuitBreak(label, newUsedWithOverhead);\n }",
"filename": "core/src/main/java/org/elasticsearch/common/breaker/MemoryCircuitBreaker.java",
"status": "modified"
},
{
"diff": "@@ -101,21 +101,19 @@ public interface ESLogger {\n \n /**\n * Logs a WARN level message.\n- */\n- void warn(String msg, Object... params);\n-\n- /**\n- * Logs a WARN level message.\n+ *\n+ * @param cause exception that caused this warning. If you are certain no exception caused it, pass {@code null}. It's intentional\n+ * that there isn't an overload of this method without this parameter: we want to make logging a warning without the exception\n+ * inconvenient because logging exceptions is often vital for debugging.\n */\n void warn(String msg, Throwable cause, Object... params);\n \n /**\n * Logs an ERROR level message.\n- */\n- void error(String msg, Object... params);\n-\n- /**\n- * Logs an ERROR level message.\n+ *\n+ * @param cause exception that caused this error. If you are certain no exception caused it, pass {@code null}. It's intentional\n+ * that there isn't an overload of this method without this parameter: we want to make logging an error without the exception\n+ * inconvenient because logging exceptions is often vital for debugging.\n */\n void error(String msg, Throwable cause, Object... params);\n ",
"filename": "core/src/main/java/org/elasticsearch/common/logging/ESLogger.java",
"status": "modified"
},
{
"diff": "@@ -131,25 +131,13 @@ protected void internalInfo(String msg, Throwable cause) {\n logger.log(record);\n }\n \n- @Override\n- protected void internalWarn(String msg) {\n- LogRecord record = new ESLogRecord(Level.WARNING, msg);\n- logger.log(record);\n- }\n-\n @Override\n protected void internalWarn(String msg, Throwable cause) {\n LogRecord record = new ESLogRecord(Level.WARNING, msg);\n record.setThrown(cause);\n logger.log(record);\n }\n \n- @Override\n- protected void internalError(String msg) {\n- LogRecord record = new ESLogRecord(Level.SEVERE, msg);\n- logger.log(record);\n- }\n-\n @Override\n protected void internalError(String msg, Throwable cause) {\n LogRecord record = new ESLogRecord(Level.SEVERE, msg);",
"filename": "core/src/main/java/org/elasticsearch/common/logging/jdk/JdkESLogger.java",
"status": "modified"
},
{
"diff": "@@ -125,21 +125,11 @@ protected void internalInfo(String msg, Throwable cause) {\n logger.log(FQCN, Level.INFO, msg, cause);\n }\n \n- @Override\n- protected void internalWarn(String msg) {\n- logger.log(FQCN, Level.WARN, msg, null);\n- }\n-\n @Override\n protected void internalWarn(String msg, Throwable cause) {\n logger.log(FQCN, Level.WARN, msg, cause);\n }\n \n- @Override\n- protected void internalError(String msg) {\n- logger.log(FQCN, Level.ERROR, msg, null);\n- }\n-\n @Override\n protected void internalError(String msg, Throwable cause) {\n logger.log(FQCN, Level.ERROR, msg, cause);",
"filename": "core/src/main/java/org/elasticsearch/common/logging/log4j/Log4jESLogger.java",
"status": "modified"
}
]
}
|
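The preceding row describes making the exception cause a required parameter on the logger's `warn` and `error` methods so that callers must pass `null` explicitly when no exception exists. Below is a minimal, self-contained sketch of that pattern; the names `SimpleLogger` and `ConsoleLogger` are hypothetical stand-ins and this is not the actual Elasticsearch `ESLogger` API.

```java
import java.util.Arrays;

// Sketch of the "cause is required" logging pattern: there is deliberately no overload
// without a Throwable, so every call site must decide what to do with the exception.
public class RequiredCauseLoggerSketch {

    interface SimpleLogger {
        // Pass null explicitly when you are certain no exception is involved.
        void warn(String msg, Throwable cause, Object... params);
        void error(String msg, Throwable cause, Object... params);
    }

    static class ConsoleLogger implements SimpleLogger {
        @Override
        public void warn(String msg, Throwable cause, Object... params) {
            log("WARN", msg, cause, params);
        }

        @Override
        public void error(String msg, Throwable cause, Object... params) {
            log("ERROR", msg, cause, params);
        }

        private void log(String level, String msg, Throwable cause, Object... params) {
            // A real implementation would substitute the {} placeholders; this just prints.
            System.out.println(level + " " + msg + " " + Arrays.toString(params));
            if (cause != null) {
                cause.printStackTrace(System.out);
            }
        }
    }

    public static void main(String[] args) {
        SimpleLogger logger = new ConsoleLogger();
        // No exception involved: the caller states that explicitly with null.
        logger.warn("ignoring transient setting [{}], not dynamically updateable", null, "some.setting");
        try {
            throw new IllegalStateException("simulated failure");
        } catch (IllegalStateException e) {
            // The signature forces the caught exception to be considered, not silently dropped.
            logger.error("failed to update shard index buffer", e);
        }
    }
}
```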
{
"body": "```\nPUT t/t/1\n{\n \"num\": 34\n}\n\nGET /_search\n{\n \"query\": {\n \"regexp\": {\n \"num\": {\n \"value\": \"34\"\n }\n }\n }\n}\n```\n\nreturns:\n\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"expected '\\\"' at position 11\"\n }\n ],\n",
"comments": [
{
"body": "Hi, I created a fix for this in the referenced PR, that I believe does the right thing.\n\nAll tests pass.\n",
"created_at": "2015-11-21T02:31:40Z"
}
],
"number": 14782,
"title": "Confusing exception when `regexp` query used on numeric field"
}
|
{
"body": "Fixes #14782\n",
"number": 14910,
"review_comments": [
{
"body": "In this case you can use the `setSource` version that takes a field name and save a few characters. No big deal either way though.\n",
"created_at": "2015-11-21T02:53:21Z"
},
{
"body": "Ah, I missed that method overload. Will fix!\n",
"created_at": "2015-11-21T03:00:44Z"
}
],
"title": "Return a better exception message when `regexp` query is used on a numeric field"
}
|
{
"commits": [
{
"message": "Return a better exception message when regexp query is used on a numeric field"
}
],
"files": [
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.index.query.QueryShardException;\n import org.elasticsearch.index.similarity.SimilarityProvider;\n \n import java.io.IOException;\n@@ -481,6 +482,10 @@ public Query prefixQuery(String value, @Nullable MultiTermQuery.RewriteMethod me\n }\n \n public Query regexpQuery(String value, int flags, int maxDeterminizedStates, @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryShardContext context) {\n+ if (numericType() != null) {\n+ throw new QueryShardException(context, \"Cannot use regular expression to filter numeric field [\" + names.fullName + \"]\");\n+ }\n+\n RegexpQuery query = new RegexpQuery(createTerm(value), flags, maxDeterminizedStates);\n if (method != null) {\n query.setRewriteMethod(method);",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java",
"status": "modified"
},
{
"diff": "@@ -46,6 +46,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n \n public class SimpleSearchIT extends ESIntegTestCase {\n public void testSearchNullIndex() {\n@@ -336,6 +337,18 @@ public void testTooLargeFromAndSizeBackwardsCompatibilityRecommendation() throws\n .setFrom(DefaultSearchContext.Defaults.MAX_RESULT_WINDOW * 10).get(), 1);\n }\n \n+ public void testQueryNumericFieldWithRegex() throws Exception {\n+ createIndex(\"idx\");\n+ indexRandom(true, client().prepareIndex(\"idx\", \"type\").setSource(\"num\", 34));\n+ \n+ try {\n+ client().prepareSearch(\"idx\").setQuery(QueryBuilders.regexpQuery(\"num\", \"34\")).get();\n+ fail(\"SearchPhaseExecutionException should have been thrown\");\n+ } catch (SearchPhaseExecutionException ex) {\n+ assertThat(ex.getCause().getCause().getMessage(), equalTo(\"Cannot use regular expression to filter numeric field [num]\"));\n+ }\n+ }\n+\n private void assertWindowFails(SearchRequestBuilder search) {\n try {\n search.get();",
"filename": "core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java",
"status": "modified"
}
]
}
|
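The row above adds an up-front check so a `regexp` query on a numeric field fails with a clear message instead of the confusing automaton-parser error. The following is an illustrative sketch of that guard under assumed names (`FieldKind`, `validateRegexpQuery`); it is not the real `MappedFieldType` code, which throws a `QueryShardException`.

```java
// Sketch: reject regexp queries on non-string fields before any query is built.
public class RegexpOnNumericFieldSketch {

    enum FieldKind { STRING, LONG, DOUBLE }

    static void validateRegexpQuery(String fieldName, FieldKind kind) {
        if (kind != FieldKind.STRING) {
            // Mirrors the wording of the improved error message in the diff above.
            throw new IllegalArgumentException(
                "Cannot use regular expression to filter numeric field [" + fieldName + "]");
        }
    }

    public static void main(String[] args) {
        validateRegexpQuery("title", FieldKind.STRING);   // allowed
        try {
            validateRegexpQuery("num", FieldKind.LONG);   // rejected with a clear message
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```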
{
"body": "BulkProcessor uses a semaphore to control the level of concurrency on its underlying client. Prior to acquiring a permit on this semaphore, BulkProcessor#execute invokes BulkProcessor.Listener#beforeBulk. If this method throws, then BulkProcessor#execute releases a permit on its semaphore. This is a mistake, and can lead to more than the intended number of threads having a permit on the semaphore.\n",
"comments": [],
"number": 14908,
"title": "BulkProcessor can release too many permits on its semaphore"
}
|
{
"body": "This commit adds an acquired flag to BulkProcessor#execute that is set\nonly after successful acquisition of a permit on the semaphore\nthere. This flag is used to ensure that we do not release a permit on\nthe semaphore when we did not obtain a permit on the semaphore.\n\nCloses #14908\n",
"number": 14909,
"review_comments": [],
"title": "Do not release unacquired semaphore"
}
|
{
"commits": [
{
"message": "Do not release unacquired semaphore\n\nThis commit adds an acquired flag to BulkProcessor#execute that is set\nonly after successful acquisition of a permit on the semaphore\nthere. This flag is used to ensure that we do not release a permit on\nthe semaphore when we did not obtain a permit on the semaphore.\n\nCloses #14908"
}
],
"files": [
{
"diff": "@@ -324,9 +324,11 @@ private void execute() {\n }\n } else {\n boolean success = false;\n+ boolean acquired = false;\n try {\n listener.beforeBulk(executionId, bulkRequest);\n semaphore.acquire();\n+ acquired = true;\n client.bulk(bulkRequest, new ActionListener<BulkResponse>() {\n @Override\n public void onResponse(BulkResponse response) {\n@@ -353,7 +355,7 @@ public void onFailure(Throwable e) {\n } catch (Throwable t) {\n listener.afterBulk(executionId, bulkRequest, t);\n } finally {\n- if (!success) { // if we fail on client.bulk() release the semaphore\n+ if (!success && acquired) { // if we fail on client.bulk() release the semaphore\n semaphore.release();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java",
"status": "modified"
}
]
}
|
{
"body": "I noticed this while reviewing #14773. Namely, the code that exists today in IPv4RangeBuilder.java and modifications proposed in #14773 will silently parse invalid IPv4 address/mask combinations. This is due to a regex pattern\n\n``` java\nprivate static final Pattern MASK_PATTERN = Pattern.compile(\"(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,3})\");\n```\n\nthat will match strings that do not represent valid combinations. For example, it will match \"256.512.256.512/33\".\n\nThe code then proceeds to capture these potentially invalid octets/mask and parse them.\n\n``` java\nint addr = (( Integer.parseInt(matcher.group(1)) << 24 ) & 0xFF000000)\n | (( Integer.parseInt(matcher.group(2)) << 16 ) & 0xFF0000)\n | (( Integer.parseInt(matcher.group(3)) << 8 ) & 0xFF00)\n | ( Integer.parseInt(matcher.group(4)) & 0xFF);\n\nint mask = (-1) << (32 - Integer.parseInt(matcher.group(5)));\n```\n\nWhile there are regex patterns to match IPv4 address/masks, I think that a simple state machine would be vastly simpler.\n",
"comments": [
{
"body": "Thanks for opening this issue. Let's fix this leniency!\n",
"created_at": "2015-11-19T14:54:23Z"
}
],
"number": 14862,
"title": "Current CIDR parsing will silently parse invalid CIDRs"
}
|
{
"body": "This commit fixes some leniency in the parsing of CIDRs. The leniency\nthat existed before includes not validating the octet values nor\nvalidating the network mask; in some cases issues with these values were\nsilently ignored. Parsing is now done according to the guidelines in [RFC\n4632](https://tools.ietf.org/html/rfc4632), [page 6](https://tools.ietf.org/html/rfc4632#section-3.1).\n\nCloses #14862\n",
"number": 14874,
"review_comments": [
{
"body": "the outer parentheses are useless. On the other hand useless parentheses would be better spent to make precedence explicit when bitwise operators are involved. so I would do `long mask = 1L << (32 - networkMask);`\n",
"created_at": "2015-11-20T03:22:47Z"
},
{
"body": "Done in f51d52496f53841b15dee7628414f20a1cb0325a.\n",
"created_at": "2015-11-20T03:36:05Z"
},
{
"body": "it shouldn't hold up the change but i find the test a bit difficult to follow.\n\nI guess i'd recommend adding the simpler tests to the top of the file, as the randomized invalid testing is the most difficult (random test/Tuples/streams/lambas/hamcrest matchers/many helper methods). \n\nI'm not sure anyway that for testing invalid cases that random testing buys us so much? I would personally rather see it test like 8 or 9 explicit bad cases and assert we get the correct exception.\n\nFor valid cases on the other hand, random tests are maybe justifiable , but this can be accomplished by just wiring the first two bytes and exhaustively testing all valid masks for all addresses over that /16, which maybe makes it contained and intuitive.\n",
"created_at": "2015-11-20T04:36:28Z"
},
{
"body": "missing license header.\n",
"created_at": "2015-11-20T06:36:12Z"
},
{
"body": "what about throwing an IllegalFormatException instead? I'm a bit concerned about catching IAE as this is a very generic exception.\n",
"created_at": "2015-11-20T09:44:01Z"
},
{
"body": "IllegalFormatException is intended for bad format strings passed to methods like String#format.\n",
"created_at": "2015-11-20T11:47:16Z"
},
{
"body": "Hmm bad idea indeed. But I'd still feel better about a more specific exception than an IAE?\n",
"created_at": "2015-11-20T11:55:49Z"
},
{
"body": "Added in c754c1c01bed8863a91c02dc171132dcb5c6c731.\n",
"created_at": "2015-11-20T11:58:10Z"
},
{
"body": "just please don't add one. There are too many classes already.\n",
"created_at": "2015-11-20T11:59:25Z"
},
{
"body": "Agreed that adding a new exception would be even worse than throwing IAE. I liked the idea of throwing an exception instead of checking for a null return value but if this exception is the generic IllegalArgumentException then I'm not sure that it is really less error-prone.\n",
"created_at": "2015-11-20T12:12:19Z"
},
{
"body": "I tried to simplify the tests in 4e9a821382cc66d85746c7cc2f5f040dd501b003. What do you think @rmuir?\n",
"created_at": "2015-11-20T12:41:58Z"
},
{
"body": "I've added a note in the Javadoc that this method throws IllegalArgumentException in 45ab5dd6eabdcbcd7dd0d2b68867438e99f22f8e. Either the caller correctly catches these (and just wraps them in one of our fancy exceptions) or the caller doesn't catch these and we still blow up. I see this as no different than the way that NumberFormatException is handled, for example, in the JDK.\n",
"created_at": "2015-11-20T13:09:44Z"
},
{
"body": "ok, fair enough\n",
"created_at": "2015-11-20T13:20:03Z"
},
{
"body": "I'm thinking it could be more user-friendly to suggest a correction, like `\"did you mean \" + ipToString(accumulator & (blockSize - 1)) + \"/\" + networkMask` in the error message\n",
"created_at": "2015-11-23T18:18:22Z"
}
],
"title": "Do not be lenient when parsing CIDRs"
}
|
{
"commits": [
{
"message": "Do not be lenient when parsing CIDRs\n\nThis commit fixes some leniency in the parsing of CIDRs. The leniency\nthat existed before includes not validating the octet values nor\nvalidating the network mask; in some cases issues with these values were\nsilently ignored. Parsing is now done according to the guidelines in RFC\n4632, page 6.\n\nCloses #14862"
}
],
"files": [
{
"diff": "@@ -0,0 +1,116 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.network;\n+\n+import java.util.Arrays;\n+import java.util.Locale;\n+import java.util.Objects;\n+\n+public final class Cidrs {\n+ private Cidrs() {\n+ }\n+\n+ /**\n+ * Parses an IPv4 address block in CIDR notation into a pair of\n+ * longs representing the bottom and top of the address block\n+ *\n+ * @param cidr an address block in CIDR notation a.b.c.d/n\n+ * @return array representing the address block\n+ * @throws IllegalArgumentException if the cidr can not be parsed\n+ */\n+ public static long[] cidrMaskToMinMax(String cidr) {\n+ Objects.requireNonNull(cidr, \"cidr\");\n+ String[] fields = cidr.split(\"/\");\n+ if (fields.length != 2) {\n+ throw new IllegalArgumentException(\n+ String.format(Locale.ROOT, \"invalid IPv4/CIDR; expected [a.b.c.d, e] but was [%s] after splitting on \\\"/\\\" in [%s]\", Arrays.toString(fields), cidr)\n+ );\n+ }\n+ // do not try to parse IPv4-mapped IPv6 address\n+ if (fields[0].contains(\":\")) {\n+ throw new IllegalArgumentException(\n+ String.format(Locale.ROOT, \"invalid IPv4/CIDR; expected [a.b.c.d, e] where a, b, c, d are decimal octets but was [%s] after splitting on \\\"/\\\" in [%s]\", Arrays.toString(fields), cidr)\n+ );\n+ }\n+ byte[] addressBytes;\n+ try {\n+ addressBytes = InetAddresses.forString(fields[0]).getAddress();\n+ } catch (Throwable t) {\n+ throw new IllegalArgumentException(\n+ String.format(Locale.ROOT, \"invalid IPv4/CIDR; unable to parse [%s] as an IP address literal\", fields[0]), t\n+ );\n+ }\n+ long accumulator =\n+ ((addressBytes[0] & 0xFFL) << 24) +\n+ ((addressBytes[1] & 0xFFL) << 16) +\n+ ((addressBytes[2] & 0xFFL) << 8) +\n+ ((addressBytes[3] & 0xFFL));\n+ int networkMask;\n+ try {\n+ networkMask = Integer.parseInt(fields[1]);\n+ } catch (NumberFormatException e) {\n+ throw new IllegalArgumentException(\n+ String.format(Locale.ROOT, \"invalid IPv4/CIDR; invalid network mask [%s] in [%s]\", fields[1], cidr),\n+ e\n+ );\n+ }\n+ if (networkMask < 0 || networkMask > 32) {\n+ throw new IllegalArgumentException(\n+ String.format(Locale.ROOT, \"invalid IPv4/CIDR; invalid network mask [%s], out of range in [%s]\", fields[1], cidr)\n+ );\n+ }\n+\n+ long blockSize = 1L << (32 - networkMask);\n+ // validation\n+ if ((accumulator & (blockSize - 1)) != 0) {\n+ throw new IllegalArgumentException(\n+ String.format(\n+ Locale.ROOT,\n+ \"invalid IPv4/CIDR; invalid address/network mask combination in [%s]; perhaps [%s] was intended?\",\n+ cidr,\n+ octetsToCIDR(longToOctets(accumulator - (accumulator & (blockSize - 1))), networkMask)\n+ )\n+ );\n+ }\n+ return new long[] { accumulator, accumulator + blockSize };\n+ }\n+\n+ static int[] longToOctets(long value) {\n+ 
assert value >= 0 && value <= (1L << 32) : value;\n+ int[] octets = new int[4];\n+ octets[0] = (int)((value >> 24) & 0xFF);\n+ octets[1] = (int)((value >> 16) & 0xFF);\n+ octets[2] = (int)((value >> 8) & 0xFF);\n+ octets[3] = (int)(value & 0xFF);\n+ return octets;\n+ }\n+\n+ static String octetsToString(int[] octets) {\n+ assert octets != null;\n+ assert octets.length == 4;\n+ return String.format(Locale.ROOT, \"%d.%d.%d.%d\", octets[0], octets[1], octets[2], octets[3]);\n+ }\n+\n+ static String octetsToCIDR(int[] octets, int networkMask) {\n+ assert octets != null;\n+ assert octets.length == 4;\n+ return octetsToString(octets) + \"/\" + networkMask;\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/network/Cidrs.java",
"status": "added"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Numbers;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.network.Cidrs;\n import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.Fuzziness;\n@@ -48,6 +49,7 @@\n import org.elasticsearch.index.mapper.core.LongFieldMapper.CustomLongNumericField;\n import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.search.aggregations.bucket.range.ipv4.InternalIPv4Range;\n \n import java.io.IOException;\n import java.util.Iterator;\n@@ -76,7 +78,6 @@ public static String longToIp(long longIp) {\n }\n \n private static final Pattern pattern = Pattern.compile(\"\\\\.\");\n- private static final Pattern MASK_PATTERN = Pattern.compile(\"(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,3})\");\n \n public static long ipToLong(String ip) {\n try {\n@@ -97,64 +98,6 @@ public static long ipToLong(String ip) {\n }\n }\n \n- /**\n- * Computes the min & max ip addresses (represented as long values -\n- * same way as stored in index) represented by the given CIDR mask\n- * expression. The returned array has the length of 2, where the first entry\n- * represents the {@code min} address and the second the {@code max}. A\n- * {@code -1} value for either the {@code min} or the {@code max},\n- * represents an unbounded end. In other words:\n- *\n- * <p>\n- * {@code min == -1 == \"0.0.0.0\" }\n- * </p>\n- *\n- * and\n- *\n- * <p>\n- * {@code max == -1 == \"255.255.255.255\" }\n- * </p>\n- */\n- public static long[] cidrMaskToMinMax(String cidr) {\n- Matcher matcher = MASK_PATTERN.matcher(cidr);\n- if (!matcher.matches()) {\n- return null;\n- }\n- int addr = ((Integer.parseInt(matcher.group(1)) << 24) & 0xFF000000) | ((Integer.parseInt(matcher.group(2)) << 16) & 0xFF0000)\n- | ((Integer.parseInt(matcher.group(3)) << 8) & 0xFF00) | (Integer.parseInt(matcher.group(4)) & 0xFF);\n-\n- int mask = (-1) << (32 - Integer.parseInt(matcher.group(5)));\n-\n- if (Integer.parseInt(matcher.group(5)) == 0) {\n- mask = 0 << 32;\n- }\n-\n- int from = addr & mask;\n- long longFrom = intIpToLongIp(from);\n- if (longFrom == 0) {\n- longFrom = -1;\n- }\n-\n- int to = from + (~mask);\n- long longTo = intIpToLongIp(to) + 1; // we have to +1 here as the range\n- // is non-inclusive on the \"to\"\n- // side\n-\n- if (longTo == MAX_IP) {\n- longTo = -1;\n- }\n-\n- return new long[] { longFrom, longTo };\n- }\n-\n- private static long intIpToLongIp(int i) {\n- long p1 = ((long) ((i >> 24) & 0xFF)) << 24;\n- int p2 = ((i >> 16) & 0xFF) << 16;\n- int p3 = ((i >> 8) & 0xFF) << 8;\n- int p4 = i & 0xFF;\n- return p1 + p2 + p3 + p4;\n- }\n-\n public static class Defaults extends NumberFieldMapper.Defaults {\n public static final String NULL_VALUE = null;\n \n@@ -274,13 +217,13 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) {\n if (value != null) {\n long[] fromTo;\n if (value instanceof BytesRef) {\n- fromTo = cidrMaskToMinMax(((BytesRef) value).utf8ToString());\n+ fromTo = Cidrs.cidrMaskToMinMax(((BytesRef) value).utf8ToString());\n } else {\n- fromTo = cidrMaskToMinMax(value.toString());\n+ fromTo = Cidrs.cidrMaskToMinMax(value.toString());\n }\n if (fromTo != null) {\n- return rangeQuery(fromTo[0] < 0 ? null : fromTo[0],\n- fromTo[1] < 0 ? 
null : fromTo[1], true, false);\n+ return rangeQuery(fromTo[0] == 0 ? null : fromTo[0],\n+ fromTo[1] == InternalIPv4Range.MAX_IP ? null : fromTo[1], true, false);\n }\n }\n return super.termQuery(value, context);",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -19,11 +19,10 @@\n \n package org.elasticsearch.search.aggregations.bucket.range.ipv4;\n \n+import org.elasticsearch.common.network.Cidrs;\n import org.elasticsearch.search.aggregations.bucket.range.AbstractRangeBuilder;\n import org.elasticsearch.search.builder.SearchSourceBuilderException;\n \n-import static org.elasticsearch.index.mapper.ip.IpFieldMapper.cidrMaskToMinMax;\n-\n /**\n * Builder for the {@code IPv4Range} aggregation.\n */\n@@ -59,11 +58,13 @@ public IPv4RangeBuilder addMaskRange(String mask) {\n * Add a range based on a CIDR mask.\n */\n public IPv4RangeBuilder addMaskRange(String key, String mask) {\n- long[] fromTo = cidrMaskToMinMax(mask);\n- if (fromTo == null) {\n- throw new SearchSourceBuilderException(\"invalid CIDR mask [\" + mask + \"] in ip_range aggregation [\" + getName() + \"]\");\n+ long[] fromTo;\n+ try {\n+ fromTo = Cidrs.cidrMaskToMinMax(mask);\n+ } catch (IllegalArgumentException e) {\n+ throw new SearchSourceBuilderException(\"invalid CIDR mask [\" + mask + \"] in ip_range aggregation [\" + getName() + \"]\", e);\n }\n- ranges.add(new Range(key, fromTo[0] < 0 ? null : fromTo[0], fromTo[1] < 0 ? null : fromTo[1]));\n+ ranges.add(new Range(key, fromTo[0] == 0 ? null : fromTo[0], fromTo[1] == InternalIPv4Range.MAX_IP ? null : fromTo[1]));\n return this;\n }\n \n@@ -106,5 +107,4 @@ public IPv4RangeBuilder addUnboundedFrom(String key, String from) {\n public IPv4RangeBuilder addUnboundedFrom(String from) {\n return addUnboundedFrom(null, from);\n }\n-\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ipv4/IPv4RangeBuilder.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n *\n */\n public class InternalIPv4Range extends InternalRange<InternalIPv4Range.Bucket, InternalIPv4Range> {\n+ public static final long MAX_IP = 1L << 32;\n \n public final static Type TYPE = new Type(\"ip_range\", \"iprange\");\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ipv4/InternalIPv4Range.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.bucket.range.ipv4;\n \n+import org.elasticsearch.common.network.Cidrs;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.mapper.ip.IpFieldMapper;\n import org.elasticsearch.search.SearchParseException;\n@@ -125,13 +126,15 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n }\n \n private static void parseMaskRange(String cidr, RangeAggregator.Range range, String aggregationName, SearchContext ctx) {\n- long[] fromTo = IpFieldMapper.cidrMaskToMinMax(cidr);\n- if (fromTo == null) {\n+ long[] fromTo;\n+ try {\n+ fromTo = Cidrs.cidrMaskToMinMax(cidr);\n+ } catch (IllegalArgumentException e) {\n throw new SearchParseException(ctx, \"invalid CIDR mask [\" + cidr + \"] in aggregation [\" + aggregationName + \"]\",\n- null);\n+ null, e);\n }\n- range.from = fromTo[0] < 0 ? Double.NEGATIVE_INFINITY : fromTo[0];\n- range.to = fromTo[1] < 0 ? Double.POSITIVE_INFINITY : fromTo[1];\n+ range.from = fromTo[0] == 0 ? Double.NEGATIVE_INFINITY : fromTo[0];\n+ range.to = fromTo[1] == InternalIPv4Range.MAX_IP ? Double.POSITIVE_INFINITY : fromTo[1];\n if (range.key == null) {\n range.key = cidr;\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ipv4/IpRangeParser.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,192 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.network;\n+\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.network.Cidrs;\n+import org.elasticsearch.search.aggregations.bucket.range.ipv4.IPv4RangeBuilder;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.*;\n+\n+import static org.hamcrest.Matchers.*;\n+\n+public class CidrsTests extends ESTestCase {\n+ public void testNullCidr() {\n+ try {\n+ Cidrs.cidrMaskToMinMax(null);\n+ fail(\"expected NullPointerException\");\n+ } catch (NullPointerException e) {\n+ assertThat(e, hasToString(containsString(\"cidr\")));\n+ }\n+ }\n+\n+ public void testSplittingSlash() {\n+ List<String> cases = new ArrayList<>();\n+ cases.add(\"1.2.3.4\");\n+ cases.add(\"1.2.3.4/32/32\");\n+ cases.add(\"1.2.3.4/\");\n+ cases.add(\"/\");\n+ for (String test : cases) {\n+ try {\n+ Cidrs.cidrMaskToMinMax(test);\n+ fail(\"expected IllegalArgumentException after splitting\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e, hasToString(containsString(\"expected [a.b.c.d, e]\")));\n+ assertThat(e, hasToString(containsString(\"splitting on \\\"/\\\"\")));\n+ }\n+ }\n+ }\n+\n+ public void testSplittingDot() {\n+ List<String> cases = new ArrayList<>();\n+ cases.add(\"1.2.3/32\");\n+ cases.add(\"1/32\");\n+ cases.add(\"1./32\");\n+ cases.add(\"1../32\");\n+ cases.add(\"1.../32\");\n+ cases.add(\"1.2.3.4.5/32\");\n+ cases.add(\"/32\");\n+ for (String test : cases) {\n+ try {\n+ Cidrs.cidrMaskToMinMax(test);\n+ fail(\"expected IllegalArgumentException after splitting\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e, hasToString(containsString(\"unable to parse\")));\n+ assertThat(e, hasToString(containsString(\"as an IP address literal\")));\n+ }\n+ }\n+ }\n+\n+ public void testValidSpecificCases() {\n+ List<Tuple<String, long[]>> cases = new ArrayList<>();\n+ cases.add(new Tuple<>(\"192.168.0.0/24\", new long[]{(192L << 24) + (168 << 16), (192L << 24) + (168 << 16) + (1 << 8)}));\n+ cases.add(new Tuple<>(\"192.168.128.0/17\", new long[]{(192L << 24) + (168 << 16) + (128 << 8), (192L << 24) + (168 << 16) + (128 << 8) + (1 << 15)}));\n+ cases.add(new Tuple<>(\"128.0.0.0/1\", new long[]{128L << 24, (128L << 24) + (1L << 31)})); // edge case\n+ cases.add(new Tuple<>(\"0.0.0.0/0\", new long[]{0, 1L << 32})); // edge case\n+ cases.add(new Tuple<>(\"0.0.0.0/1\", new long[]{0, 1L << 31})); // edge case\n+ cases.add(new Tuple<>(\n+ \"192.168.1.1/32\",\n+ new long[]{(192L << 24) + (168L << 16) + (1L << 8) + 1L, (192L << 24) + (168L << 16) + (1L << 8) + 1L + 1})\n+ ); // edge case\n+ for (Tuple<String, long[]> test : cases) {\n+ long[] actual = Cidrs.cidrMaskToMinMax(test.v1());\n+ 
assertArrayEquals(test.v1(), test.v2(), actual);\n+ }\n+ }\n+\n+ public void testInvalidSpecificOctetCases() {\n+ List<String> cases = new ArrayList<>();\n+ cases.add(\"256.0.0.0/8\"); // first octet out of range\n+ cases.add(\"255.256.0.0/16\"); // second octet out of range\n+ cases.add(\"255.255.256.0/24\"); // third octet out of range\n+ cases.add(\"255.255.255.256/32\"); // fourth octet out of range\n+ cases.add(\"abc.0.0.0/8\"); // octet that can not be parsed\n+ cases.add(\"-1.0.0.0/8\"); // first octet out of range\n+ cases.add(\"128.-1.0.0/16\"); // second octet out of range\n+ cases.add(\"128.128.-1.0/24\"); // third octet out of range\n+ cases.add(\"128.128.128.-1/32\"); // fourth octet out of range\n+\n+ for (String test : cases) {\n+ try {\n+ Cidrs.cidrMaskToMinMax(test);\n+ fail(\"expected invalid address\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e, hasToString(containsString(\"unable to parse\")));\n+ assertThat(e, hasToString(containsString(\"as an IP address literal\")));\n+ }\n+ }\n+ }\n+\n+ public void testInvalidSpecificNetworkMaskCases() {\n+ List<String> cases = new ArrayList<>();\n+ cases.add(\"128.128.128.128/-1\"); // network mask out of range\n+ cases.add(\"128.128.128.128/33\"); // network mask out of range\n+ cases.add(\"128.128.128.128/abc\"); // network mask that can not be parsed\n+\n+ for (String test : cases) {\n+ try {\n+ Cidrs.cidrMaskToMinMax(test);\n+ fail(\"expected invalid network mask\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e, hasToString(containsString(\"network mask\")));\n+ }\n+ }\n+ }\n+\n+ public void testValidCombinations() {\n+ for (long i = 0; i < (1 << 16); i++) {\n+ for (int mask = 16; mask <= 32; mask++) {\n+ String test = Cidrs.octetsToCIDR(Cidrs.longToOctets(i << 16), mask);\n+ long[] actual = Cidrs.cidrMaskToMinMax(test);\n+ assertNotNull(test, actual);\n+ assertEquals(test, 2, actual.length);\n+ assertEquals(test, i << 16, actual[0]);\n+ assertEquals(test, (i << 16) + (1L << (32 - mask)), actual[1]);\n+ }\n+ }\n+ }\n+\n+ public void testInvalidCombinations() {\n+ List<String> cases = new ArrayList<>();\n+ cases.add(\"192.168.0.1/24\"); // invalid because fourth octet is not zero\n+ cases.add(\"192.168.1.0/16\"); // invalid because third octet is not zero\n+ cases.add(\"192.1.0.0/8\"); // invalid because second octet is not zero\n+ cases.add(\"128.0.0.0/0\"); // invalid because first octet is not zero\n+ // create cases that have a bit set outside of the network mask\n+ int value = 1;\n+ for (int i = 0; i < 31; i++) {\n+ cases.add(Cidrs.octetsToCIDR(Cidrs.longToOctets(value), 32 - i - 1));\n+ value <<= 1;\n+ }\n+\n+ for (String test : cases) {\n+ try {\n+ Cidrs.cidrMaskToMinMax(test);\n+ fail(\"expected invalid combination\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(test, e, hasToString(containsString(\"invalid address/network mask combination\")));\n+ }\n+ }\n+ }\n+\n+ public void testRandomValidCombinations() {\n+ List<Tuple<String, Integer>> cases = new ArrayList<>();\n+ // random number of strings with valid octets and valid network masks\n+ for (int i = 0; i < randomIntBetween(1, 1024); i++) {\n+ int networkMask = randomIntBetween(0, 32);\n+ long mask = (1L << (32 - networkMask)) - 1;\n+ long address = randomLongInIPv4Range() & ~mask;\n+ cases.add(new Tuple<>(Cidrs.octetsToCIDR(Cidrs.longToOctets(address), networkMask), networkMask));\n+ }\n+\n+ for (Tuple<String, Integer> test : cases) {\n+ long[] actual = Cidrs.cidrMaskToMinMax(test.v1());\n+ assertNotNull(test.v1(), 
actual);\n+ assertEquals(test.v1(), 2, actual.length);\n+ // assert the resulting block has the right size\n+ assertEquals(test.v1(), 1L << (32 - test.v2()), actual[1] - actual[0]);\n+ }\n+ }\n+\n+ private long randomLongInIPv4Range() {\n+ return randomLong() & 0x00000000FFFFFFFFL;\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/network/CidrsTests.java",
"status": "added"
},
{
"diff": "@@ -107,7 +107,7 @@ public void testSimpleIp() throws Exception {\n assertHitCount(search, 1l);\n }\n \n- public void testIpCIDR() throws Exception {\n+ public void testIpCidr() throws Exception {\n createIndex(\"test\");\n \n client().admin().indices().preparePutMapping(\"test\").setType(\"type1\")\n@@ -129,20 +129,15 @@ public void testIpCIDR() throws Exception {\n assertHitCount(search, 1l);\n \n search = client().prepareSearch()\n- .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"192.168.0.1/24\")))\n+ .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"192.168.0.0/24\")))\n .execute().actionGet();\n assertHitCount(search, 3l);\n \n search = client().prepareSearch()\n- .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"192.168.0.1/8\")))\n+ .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"192.0.0.0/8\")))\n .execute().actionGet();\n assertHitCount(search, 4l);\n \n- search = client().prepareSearch()\n- .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"192.168.1.1/24\")))\n- .execute().actionGet();\n- assertHitCount(search, 1l);\n-\n search = client().prepareSearch()\n .setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"0.0.0.0/0\")))\n .execute().actionGet();\n@@ -155,7 +150,7 @@ public void testIpCIDR() throws Exception {\n \n assertFailures(client().prepareSearch().setQuery(boolQuery().must(QueryBuilders.termQuery(\"ip\", \"0/0/0/0/0\"))),\n RestStatus.BAD_REQUEST,\n- containsString(\"not a valid ip address\"));\n+ containsString(\"invalid IPv4/CIDR; expected [a.b.c.d, e] but was [[0, 0, 0, 0, 0]]\"));\n }\n \n public void testSimpleId() {",
"filename": "core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java",
"status": "modified"
}
]
}
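A short usage sketch of the `Cidrs.cidrMaskToMinMax` method added above. The `main` wrapper is illustrative, but the returned half-open range and the `IllegalArgumentException` with a suggested correction follow from the implementation in the diff:

```java
import org.elasticsearch.common.network.Cidrs;

public class CidrsUsageSketch {
    public static void main(String[] args) {
        // 192.168.0.0/24 -> half-open range [min, min + 256), i.e. a block of 256 addresses
        long[] range = Cidrs.cidrMaskToMinMax("192.168.0.0/24");
        System.out.println("min=" + range[0] + " max=" + range[1] + " size=" + (range[1] - range[0]));

        try {
            // The fourth octet has bits set outside the /24 network mask, so parsing now fails
            Cidrs.cidrMaskToMinMax("192.168.0.1/24");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // message suggests 192.168.0.0/24 instead
        }
    }
}
```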
|
{
"body": "If we run out of disk while recoverying the transaction log\nwe repeatedly fail since we expect the latest tranlog to be uncommitted.\nThis change adds 2 safety levels:\n- uncommitted checkpoints are first written to a temp file and then atomically\n renamed into a committed (recovered) checkpoint\n- if the latest uncommitted checkpoints generation is already recovered it has to be\n identical, if not the recovery fails\n\nThis allows to fail in between recovering the latest uncommitted checkpoint and moving\nthe checkpoint generation to N+1 which can for instance happen in a situation where\nwe can run out of disk. If we run out of disk while recovering the uncommitted checkpoint\neither the temp file writing or the atomic rename will fail such that we never have a\nhalf written or corrupted recovered checkpoint.\n",
"comments": [
{
"body": "this was reported here: https://discuss.elastic.co/t/no-space-disaster/34158\n",
"created_at": "2015-11-11T20:03:20Z"
},
{
"body": "I left some comments but the change looks good to me!\n",
"created_at": "2015-11-12T10:43:17Z"
},
{
"body": "Merged via 1bdf29e2634e9cc8d02a84953bad879b5864353a\n",
"created_at": "2015-11-16T18:19:58Z"
},
{
"body": "Hi expert,\n\nI need expert guide to resolve my issue, i am using following version. \nkibana 4.2.0\nelasticsearch 2.1\n\nelasticsearch log are.\n\n[root@centos elasticsearch]# more elasticsearch.log \n[2015-12-03 11:21:45,534][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, C\nONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed\n[2015-12-03 11:21:45,987][INFO ][node ] [Lin Sun] version[2.1.0], pid[3991], build[72cd1f1/2015-11-18T22:40:03Z]\n[2015-12-03 11:21:45,987][INFO ][node ] [Lin Sun] initializing ...\n[2015-12-03 11:21:46,108][INFO ][plugins ] [Lin Sun] loaded [], sites []\n[2015-12-03 11:21:46,149][INFO ][env ] [Lin Sun] using [1] data paths, mounts [[/ (/dev/mapper/vg_centos-lv_root)]], net usable_space [7\n.7gb], net total_space [17.1gb], spins? [possibly], types [ext4]\n[2015-12-03 11:21:48,854][INFO ][node ] [Lin Sun] initialized\n[2015-12-03 11:21:48,855][INFO ][node ] [Lin Sun] starting ...\n[2015-12-03 11:21:49,128][INFO ][transport ] [Lin Sun] publish_address {192.168.48.63:9300}, bound_addresses {192.168.48.63:9300}\n[2015-12-03 11:21:49,159][INFO ][discovery ] [Lin Sun] elasticsearch/pNZpYIdPQq-X4_OFlqUOPg\n[2015-12-03 11:21:52,234][INFO ][cluster.service ] [Lin Sun] new_master {Lin Sun}{pNZpYIdPQq-X4_OFlqUOPg}{192.168.48.63}{192.168.48.63:9300}, reason\n: zen-disco-join(elected_as_master, [0] joins received)\n[2015-12-03 11:21:52,354][INFO ][http ] [Lin Sun] publish_address {192.168.48.63:9200}, bound_addresses {192.168.48.63:9200}\n[2015-12-03 11:21:52,355][INFO ][node ] [Lin Sun] started\n[2015-12-03 11:21:52,452][INFO ][gateway ] [Lin Sun] recovered [1] indices into cluster_state\n[2015-12-03 11:21:53,276][WARN ][index.translog ] [Lin Sun] [.kibana][0] failed to delete temp file /var/lib/elasticsearch/elasticsearch/nodes/0/in\ndices/.kibana/0/translog/translog-8220187682635755947.tlog\njava.nio.file.NoSuchFileException: /var/lib/elasticsearch/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-8220187682635755947.tlog\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)\n at java.nio.file.Files.delete(Files.java:1126)\n at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:324)\n at org.elasticsearch.index.translog.Translog.<init>(Translog.java:166)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at 
org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n\n---\n\nI tried to use localhost instead of IP in configuration file but no any luck.\n\nMy kibana console shown\nInstalled Plugins\nName Status\nplugin:kibana Ready\n\nplugin:elasticsearch Unable to connect to Elasticsearch at http://0.0.0.0:9200. Retrying in 2.5 seconds.\n\nplugin:kbn_vislib_vis_types Ready\nplugin:markdown_vis Ready\nplugin:metric_vis Ready\nplugin:spyModes Ready \n\nRegards,\n\nAsif,\n",
"created_at": "2015-12-03T12:32:08Z"
},
{
"body": "@asifalis please ask questions like these in the forum http://discuss.elastic.co/\n",
"created_at": "2015-12-03T17:53:33Z"
},
{
"body": "Currently experiencing this or similar issue with Version: 2.2.1, Build: d045fc2/2016-03-09T09:38:54Z, JVM: 1.8.0_66.\n\nStacktrace:\n[2016-05-12 11:20:38,861][WARN ][indices.cluster ] [spx-elastic-FPA2-02] [[releases_2][1]] marking and sending shard failed due to [failed recovery]\n[releases_2][[releases_2][1]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: NoSuchFileException[/opt/esdata/spxA/nodes/0/indices/releases_2/1/translog/translog-867.ckp];\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)\n at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)\n at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: [releases_2][[releases_2][1]] EngineCreationFailureException[failed to create engine]; nested: NoSuchFileException[/opt/esdata/spxA/nodes/0/indices/releases_2/1/translog/translog-867.ckp];\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:155)\n at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)\n at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1510)\n at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1494)\n at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:969)\n at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:941)\n at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)\n ... 5 more\nCaused by: java.nio.file.NoSuchFileException: /opt/esdata/spxA/nodes/0/indices/releases_2/1/translog/translog-867.ckp\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)\n at java.nio.file.Files.newByteChannel(Files.java:361)\n at java.nio.file.Files.newByteChannel(Files.java:407)\n at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)\n at java.nio.file.Files.newInputStream(Files.java:152)\n at org.elasticsearch.index.translog.Checkpoint.read(Checkpoint.java:82)\n at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:330)\n at org.elasticsearch.index.translog.Translog.<init>(Translog.java:179)\n at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:208)\n at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:151)\n ... 
11 more\n\n/opt/esdata/spxA/nodes/0/indices/releases_2/1/translog# ls -al\ntotal 60\ndrwxr-xr-x 2 elasticsearch elasticsearch 4096 May 12 11:31 .\ndrwxr-xr-x 5 elasticsearch elasticsearch 4096 May 10 13:40 ..\n-rw-r--r-- 1 elasticsearch elasticsearch 20 May 12 04:44 translog-865.ckp\n-rw-r--r-- 1 elasticsearch elasticsearch 20 May 12 04:44 translog-866.ckp\n-rw-r--r-- 1 elasticsearch elasticsearch 43 May 12 04:44 translog-867.tlog\n-rw-r--r-- 1 elasticsearch elasticsearch 34570 May 12 04:44 translog-868.tlog\n-rw-r--r-- 1 elasticsearch elasticsearch 20 May 12 04:44 translog.ckp\n",
"created_at": "2016-05-12T09:33:13Z"
},
{
"body": "I encounter this. But I have disk space.\nENV:\n\n```\nJVM: 1.8.0_45 ES: 2.1.0\n```\n",
"created_at": "2016-08-20T13:12:29Z"
}
],
"number": 14695,
"title": "Translog recovery can repeatedly fail if we run out of disk"
}
|
{
"body": "#14695 introduced more careful handling in recovering translog checkpoints. Part of it introduced a temp file which is used to write a new checkpoint if needed. That temp file is not always used and thus needs to be cleaned up. However, if it is used we currently log an ugly warn message about failing to delete it.\n\nHere is an example:\n\n```\n 1> [2015-11-19 22:56:08,049][WARN ][org.elasticsearch.index.translog] [node_t1] [test][0] failed to delete temp file /home/boaz/elasticsearch/core/build/testrun/integTest/J0/temp/org.elasticsearch.rec\n 1> java.nio.file.NoSuchFileException: /home/boaz/elasticsearch/core/build/testrun/integTest/J0/temp/org.elasticsearch.recovery.RelocationIT_720114FFC2D82BCD-002/tempDir-018/data/TEST-CHILD_VM=[0]-CLUS\n 1> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n 1> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n 1> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n 1> at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)\n```\n\n@jpountz can you take a look?\n",
"number": 14872,
"review_comments": [],
"title": "Don't delete temp recovered checkpoint file if it was renamed"
}
|
{
"commits": [
{
"message": "Don't delete temp recovered checkpoint file it was renamed\n\n#14695 introduced more careful handling in recovering translog checkpoints. Part of it introduced a temp file which is used to write a new checkpoint if needed. That temp file is not always used and thus needs to be cleaned up. However, if it is used we currently log an ungly warn message about failing to delete it."
}
],
"files": [
{
"diff": "@@ -129,19 +129,19 @@ public void handle(View view) {\n };\n \n \n-\n /**\n * Creates a new Translog instance. This method will create a new transaction log unless the given {@link TranslogConfig} has\n * a non-null {@link org.elasticsearch.index.translog.Translog.TranslogGeneration}. If the generation is null this method\n * us destructive and will delete all files in the translog path given.\n+ *\n * @see TranslogConfig#getTranslogPath()\n */\n public Translog(TranslogConfig config) throws IOException {\n super(config.getShardId(), config.getIndexSettings());\n this.config = config;\n TranslogGeneration translogGeneration = config.getTranslogGeneration();\n \n- if (translogGeneration == null || translogGeneration.translogUUID == null) { // legacy case\n+ if (translogGeneration == null || translogGeneration.translogUUID == null) { // legacy case\n translogUUID = Strings.randomBase64UUID();\n } else {\n translogUUID = translogGeneration.translogUUID;\n@@ -190,6 +190,7 @@ private final ArrayList<ImmutableTranslogReader> recoverFromFiles(TranslogGenera\n boolean success = false;\n ArrayList<ImmutableTranslogReader> foundTranslogs = new ArrayList<>();\n final Path tempFile = Files.createTempFile(location, TRANSLOG_FILE_PREFIX, TRANSLOG_FILE_SUFFIX); // a temp file to copy checkpoint to - note it must be in on the same FS otherwise atomic move won't work\n+ boolean tempFileRenamed = false;\n try (ReleasableLock lock = writeLock.acquire()) {\n logger.debug(\"open uncommitted translog checkpoint {}\", checkpoint);\n final String checkpointTranslogFile = getFilename(checkpoint.generation);\n@@ -215,6 +216,7 @@ private final ArrayList<ImmutableTranslogReader> recoverFromFiles(TranslogGenera\n Files.copy(location.resolve(CHECKPOINT_FILE_NAME), tempFile, StandardCopyOption.REPLACE_EXISTING);\n IOUtils.fsync(tempFile, false);\n Files.move(tempFile, commitCheckpoint, StandardCopyOption.ATOMIC_MOVE);\n+ tempFileRenamed = true;\n // we only fsync the directory the tempFile was already fsynced\n IOUtils.fsync(commitCheckpoint.getParent(), true);\n }\n@@ -223,10 +225,12 @@ private final ArrayList<ImmutableTranslogReader> recoverFromFiles(TranslogGenera\n if (success == false) {\n IOUtils.closeWhileHandlingException(foundTranslogs);\n }\n- try {\n- Files.delete(tempFile);\n- } catch (IOException ex) {\n- logger.warn(\"failed to delete temp file {}\", ex, tempFile);\n+ if (tempFileRenamed == false) {\n+ try {\n+ Files.delete(tempFile);\n+ } catch (IOException ex) {\n+ logger.warn(\"failed to delete temp file {}\", ex, tempFile);\n+ }\n }\n }\n return foundTranslogs;\n@@ -347,7 +351,6 @@ public long sizeInBytes() {\n }\n \n \n-\n TranslogWriter createWriter(long fileGeneration) throws IOException {\n TranslogWriter newFile;\n try {\n@@ -508,6 +511,7 @@ static String getCommitCheckpointFileName(long generation) {\n \n /**\n * Ensures that the given location has be synced / written to the underlying storage.\n+ *\n * @return Returns <code>true</code> iff this call caused an actual sync operation otherwise <code>false</code>\n */\n public boolean ensureSynced(Location location) throws IOException {\n@@ -749,13 +753,21 @@ public int compareTo(Location o) {\n \n @Override\n public boolean equals(Object o) {\n- if (this == o) return true;\n- if (o == null || getClass() != o.getClass()) return false;\n+ if (this == o) {\n+ return true;\n+ }\n+ if (o == null || getClass() != o.getClass()) {\n+ return false;\n+ }\n \n Location location = (Location) o;\n \n- if (generation != location.generation) return 
false;\n- if (translogLocation != location.translogLocation) return false;\n+ if (generation != location.generation) {\n+ return false;\n+ }\n+ if (translogLocation != location.translogLocation) {\n+ return false;\n+ }\n return size == location.size;\n \n }\n@@ -1089,7 +1101,7 @@ public VersionType versionType() {\n }\n \n @Override\n- public Source getSource(){\n+ public Source getSource() {\n throw new IllegalStateException(\"trying to read doc source from delete operation\");\n }\n \n@@ -1198,7 +1210,7 @@ static Translog.Operation readOperation(BufferedChecksumStreamInput in) throws I\n // to prevent this unfortunately.\n in.mark(opSize);\n \n- in.skip(opSize-4);\n+ in.skip(opSize - 4);\n verifyChecksum(in);\n in.reset();\n }\n@@ -1250,7 +1262,7 @@ public static void writeOperationNoSize(BufferedChecksumStreamOutput out, Transl\n out.writeByte(op.opType().id());\n op.writeTo(out);\n long checksum = out.getChecksum();\n- out.writeInt((int)checksum);\n+ out.writeInt((int) checksum);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/index/translog/Translog.java",
"status": "modified"
}
]
}
|
{
"body": "It looks like `/_field_stats` does not accept milliseconds since epoch for gte/lte unless it is explicitly mapped as such. This isn't the case with say, range filters, which accept `format: \"epoch_millis\"`. This is super important to Kibana as there's no direct mapping from JODA formats to javascript date formatting libraries, so we've always sent everything as epoch. By default date fields are mapped to accept epoch as a format, but if the user uses a custom format, this won't happen.\n\nWe didn't notice this bug until Marvel which sets an explicit mapping format.\n",
"comments": [],
"number": 14804,
"title": "/_field_stats does not accept epoch_millis for date fields"
}
|
{
"body": "Added a `format` for index constraints on date fields to allow dates of a different format to be specified than is defined in the date field's format setting in the mapping.\n\nPR for #14804\n",
"number": 14823,
"review_comments": [
{
"body": "Can we support milliseconds through an explicit format parameter? We just got rid of this fallback mechanism in date fields for 2.0. It can be trappy when a user has no way to disable this builtin format. \n",
"created_at": "2015-11-18T07:38:38Z"
},
{
"body": "Ok, I'll remove the fallback logic and add a `format` option to the api that overrides the format that has been specified in the mapping for a date field.\n",
"created_at": "2015-11-18T08:23:34Z"
},
{
"body": "+1 to _not_ build in another hard coded fall back (we don't know what a number means - might be seconds). I do wonder what goes wrong here. The default date field formatter should fall back to mills: `strict_date_optional_time||epoch_millis` . \n",
"created_at": "2015-11-18T08:23:47Z"
},
{
"body": "@bleskes This is in the case if someone has configured a custom date field format. (see #14804) If that happens Kibana doesn't know about this and it is very hard for them to convert to mills since epoch. So adding a format option would help here a lot.\n",
"created_at": "2015-11-18T09:00:53Z"
},
{
"body": "this feels weird. We have a parameter and not use it. Can we add java docs and throw and operation not supported if someone used it? (this goes for all of those)\n",
"created_at": "2015-11-18T09:48:01Z"
},
{
"body": "can we add an else with an exception?\n",
"created_at": "2015-11-18T09:50:36Z"
},
{
"body": "java docs?\n",
"created_at": "2015-11-18T09:50:57Z"
},
{
"body": "testDateFiltering_optionalFormat?\n",
"created_at": "2015-11-18T09:51:42Z"
},
{
"body": "do we really need so many tests? this is just about parsing? It can probably just have unit testing for this..\n",
"created_at": "2015-11-18T09:53:21Z"
},
{
"body": "can we also test what happens when the value is not parseable by format given?\n",
"created_at": "2015-11-18T09:53:53Z"
},
{
"body": "I'll add the unsupported operation exception.\n\nthere is jdoc on the abstract method. I'll add there it is up to the implementation if the format parameter is used.\n",
"created_at": "2015-11-18T10:49:44Z"
},
{
"body": "it isn't just about parsing, but also about the fact that the date format gets used. But yes the amount of tests can be less :)\n",
"created_at": "2015-11-18T11:19:49Z"
},
{
"body": "can we have one without a format? also, for a future change, can we not call them field 1 to 5 but something with meaning like date_field?\n",
"created_at": "2015-11-18T12:21:28Z"
},
{
"body": "\"Field stats index constraints on date fields optionally accept a format option, used to parse the constraint's value. If missing, the format configured in the field's mapping is used.\"\n",
"created_at": "2015-11-18T12:24:23Z"
},
{
"body": "the other fields are already without a format, so the parsing is properly logic is properly tested.\n",
"created_at": "2015-11-18T12:48:57Z"
}
],
"title": "Field stats: Added `format` option for index constraints"
}
|
{
"commits": [
{
"message": "field stats: Added a `format` option to index constraint that allows to specify date index constraint values in a different format then the for specified in the mapping.\n\nCloses #14804"
}
],
"files": [
{
"diff": "@@ -122,9 +122,11 @@ public long getSumTotalTermFreq() {\n \n /**\n * @param value The string to be parsed\n- * @return The concrete object represented by the string argument\n+ * @param optionalFormat A string describing how to parse the specified value. Whether this parameter is supported\n+ * depends on the implementation. If optionalFormat is specified and the implementation\n+ * doesn't support it an {@link UnsupportedOperationException} is thrown\n */\n- protected abstract T valueOf(String value);\n+ protected abstract T valueOf(String value, String optionalFormat);\n \n /**\n * Merges the provided stats into this stats instance.\n@@ -153,7 +155,7 @@ public void append(FieldStats stats) {\n */\n public boolean match(IndexConstraint constraint) {\n int cmp;\n- T value = valueOf(constraint.getValue());\n+ T value = valueOf(constraint.getValue(), constraint.getOptionalFormat());\n if (constraint.getProperty() == IndexConstraint.Property.MIN) {\n cmp = minValue.compareTo(value);\n } else if (constraint.getProperty() == IndexConstraint.Property.MAX) {\n@@ -245,7 +247,10 @@ public void append(FieldStats stats) {\n }\n \n @Override\n- protected java.lang.Long valueOf(String value) {\n+ protected java.lang.Long valueOf(String value, String optionalFormat) {\n+ if (optionalFormat != null) {\n+ throw new UnsupportedOperationException(\"custom format isn't supported\");\n+ }\n return java.lang.Long.valueOf(value);\n }\n \n@@ -295,7 +300,10 @@ public void append(FieldStats stats) {\n }\n \n @Override\n- protected java.lang.Float valueOf(String value) {\n+ protected java.lang.Float valueOf(String value, String optionalFormat) {\n+ if (optionalFormat != null) {\n+ throw new UnsupportedOperationException(\"custom format isn't supported\");\n+ }\n return java.lang.Float.valueOf(value);\n }\n \n@@ -345,7 +353,10 @@ public void append(FieldStats stats) {\n }\n \n @Override\n- protected java.lang.Double valueOf(String value) {\n+ protected java.lang.Double valueOf(String value, String optionalFormat) {\n+ if (optionalFormat != null) {\n+ throw new UnsupportedOperationException(\"custom format isn't supported\");\n+ }\n return java.lang.Double.valueOf(value);\n }\n \n@@ -399,7 +410,10 @@ public void append(FieldStats stats) {\n }\n \n @Override\n- protected BytesRef valueOf(String value) {\n+ protected BytesRef valueOf(String value, String optionalFormat) {\n+ if (optionalFormat != null) {\n+ throw new UnsupportedOperationException(\"custom format isn't supported\");\n+ }\n return new BytesRef(value);\n }\n \n@@ -448,7 +462,11 @@ public String getMaxValue() {\n }\n \n @Override\n- protected java.lang.Long valueOf(String value) {\n+ protected java.lang.Long valueOf(String value, String optionalFormat) {\n+ FormatDateTimeFormatter dateFormatter = this.dateFormatter;\n+ if (optionalFormat != null) {\n+ dateFormatter = Joda.forPattern(optionalFormat);\n+ }\n return dateFormatter.parser().parseMillis(value);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.fieldstats;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.ValidateActions;\n import org.elasticsearch.action.support.broadcast.BroadcastRequest;\n@@ -121,22 +122,24 @@ private void parseIndexContraints(List<IndexConstraint> indexConstraints, XConte\n currentName = parser.currentName();\n } else if (fieldToken == Token.START_OBJECT) {\n IndexConstraint.Property property = IndexConstraint.Property.parse(currentName);\n- Token propertyToken = parser.nextToken();\n- if (propertyToken != Token.FIELD_NAME) {\n- throw new IllegalArgumentException(\"unexpected token [\" + propertyToken + \"]\");\n- }\n- IndexConstraint.Comparison comparison = IndexConstraint.Comparison.parse(parser.currentName());\n- propertyToken = parser.nextToken();\n- if (propertyToken.isValue() == false) {\n- throw new IllegalArgumentException(\"unexpected token [\" + propertyToken + \"]\");\n- }\n- String value = parser.text();\n- indexConstraints.add(new IndexConstraint(field, property, comparison, value));\n-\n- propertyToken = parser.nextToken();\n- if (propertyToken != Token.END_OBJECT) {\n- throw new IllegalArgumentException(\"unexpected token [\" + propertyToken + \"]\");\n+ String value = null;\n+ String optionalFormat = null;\n+ IndexConstraint.Comparison comparison = null;\n+ for (Token propertyToken = parser.nextToken(); propertyToken != Token.END_OBJECT; propertyToken = parser.nextToken()) {\n+ if (propertyToken.isValue()) {\n+ if (\"format\".equals(parser.currentName())) {\n+ optionalFormat = parser.text();\n+ } else {\n+ comparison = IndexConstraint.Comparison.parse(parser.currentName());\n+ value = parser.text();\n+ }\n+ } else {\n+ if (propertyToken != Token.FIELD_NAME) {\n+ throw new IllegalArgumentException(\"unexpected token [\" + propertyToken + \"]\");\n+ }\n+ }\n }\n+ indexConstraints.add(new IndexConstraint(field, property, comparison, value, optionalFormat));\n } else {\n throw new IllegalArgumentException(\"unexpected token [\" + fieldToken + \"]\");\n }\n@@ -189,6 +192,9 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeByte(indexConstraint.getProperty().getId());\n out.writeByte(indexConstraint.getComparison().getId());\n out.writeString(indexConstraint.getValue());\n+ if (out.getVersion().onOrAfter(Version.V_2_0_1)) {\n+ out.writeOptionalString(indexConstraint.getOptionalFormat());\n+ }\n }\n out.writeString(level);\n }",
"filename": "core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java",
"status": "modified"
},
{
"diff": "@@ -19,48 +19,81 @@\n \n package org.elasticsearch.action.fieldstats;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n \n import java.io.IOException;\n import java.util.Locale;\n+import java.util.Objects;\n \n public class IndexConstraint {\n \n private final String field;\n private final Property property;\n private final Comparison comparison;\n private final String value;\n+ private final String optionalFormat;\n \n IndexConstraint(StreamInput input) throws IOException {\n this.field = input.readString();\n this.property = Property.read(input.readByte());\n this.comparison = Comparison.read(input.readByte());\n this.value = input.readString();\n+ if (input.getVersion().onOrAfter(Version.V_2_0_1)) {\n+ this.optionalFormat = input.readOptionalString();\n+ } else {\n+ this.optionalFormat = null;\n+ }\n }\n \n public IndexConstraint(String field, Property property, Comparison comparison, String value) {\n- this.field = field;\n- this.property = property;\n- this.comparison = comparison;\n- this.value = value;\n+ this(field, property, comparison, value, null);\n+ }\n+\n+ public IndexConstraint(String field, Property property, Comparison comparison, String value, String optionalFormat) {\n+ this.field = Objects.requireNonNull(field);\n+ this.property = Objects.requireNonNull(property);\n+ this.comparison = Objects.requireNonNull(comparison);\n+ this.value = Objects.requireNonNull(value);\n+ this.optionalFormat = optionalFormat;\n }\n \n+ /**\n+ * @return On what field the constraint is going to be applied on\n+ */\n public String getField() {\n return field;\n }\n \n+ /**\n+ * @return How to compare the specified value against the field property (lt, lte, gt and gte)\n+ */\n public Comparison getComparison() {\n return comparison;\n }\n \n+ /**\n+ * @return On what property of a field the contraint is going to be applied on (min or max value)\n+ */\n public Property getProperty() {\n return property;\n }\n \n+ /**\n+ * @return The value to compare against\n+ */\n public String getValue() {\n return value;\n }\n \n+ /**\n+ * @return An optional format, that specifies how the value string is converted in the native value of the field.\n+ * Not all field types support this and right now only date field supports this option.\n+ */\n+ public String getOptionalFormat() {\n+ return optionalFormat;\n+ }\n+\n public enum Property {\n \n MIN((byte) 0),",
"filename": "core/src/main/java/org/elasticsearch/action/fieldstats/IndexConstraint.java",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,7 @@ public void testFieldsParsing() throws Exception {\n assertThat(request.getFields()[3], equalTo(\"field4\"));\n assertThat(request.getFields()[4], equalTo(\"field5\"));\n \n- assertThat(request.getIndexConstraints().length, equalTo(6));\n+ assertThat(request.getIndexConstraints().length, equalTo(8));\n assertThat(request.getIndexConstraints()[0].getField(), equalTo(\"field2\"));\n assertThat(request.getIndexConstraints()[0].getValue(), equalTo(\"9\"));\n assertThat(request.getIndexConstraints()[0].getProperty(), equalTo(MAX));\n@@ -67,6 +67,16 @@ public void testFieldsParsing() throws Exception {\n assertThat(request.getIndexConstraints()[5].getValue(), equalTo(\"9\"));\n assertThat(request.getIndexConstraints()[5].getProperty(), equalTo(MAX));\n assertThat(request.getIndexConstraints()[5].getComparison(), equalTo(LT));\n+ assertThat(request.getIndexConstraints()[6].getField(), equalTo(\"field1\"));\n+ assertThat(request.getIndexConstraints()[6].getValue(), equalTo(\"2014-01-01\"));\n+ assertThat(request.getIndexConstraints()[6].getProperty(), equalTo(MIN));\n+ assertThat(request.getIndexConstraints()[6].getComparison(), equalTo(GTE));\n+ assertThat(request.getIndexConstraints()[6].getOptionalFormat(), equalTo(\"date_optional_time\"));\n+ assertThat(request.getIndexConstraints()[7].getField(), equalTo(\"field1\"));\n+ assertThat(request.getIndexConstraints()[7].getValue(), equalTo(\"2015-01-01\"));\n+ assertThat(request.getIndexConstraints()[7].getProperty(), equalTo(MAX));\n+ assertThat(request.getIndexConstraints()[7].getComparison(), equalTo(LT));\n+ assertThat(request.getIndexConstraints()[7].getOptionalFormat(), equalTo(\"date_optional_time\"));\n }\n \n }",
"filename": "core/src/test/java/org/elasticsearch/action/fieldstats/FieldStatsRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,8 @@\n import org.elasticsearch.action.fieldstats.IndexConstraint;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.joda.time.DateTime;\n+import org.joda.time.DateTimeZone;\n \n import java.util.ArrayList;\n import java.util.List;\n@@ -359,4 +361,33 @@ public void testDateFiltering() {\n assertThat(response.getIndicesMergedFieldStats().get(\"test2\").get(\"value\").getMinValue(), equalTo(\"2014-01-02T00:00:00.000Z\"));\n }\n \n+ public void testDateFiltering_optionalFormat() {\n+ createIndex(\"test1\", Settings.EMPTY, \"type\", \"value\", \"type=date,format=strict_date_optional_time\");\n+ client().prepareIndex(\"test1\", \"type\").setSource(\"value\", \"2014-01-01T00:00:00.000Z\").get();\n+ createIndex(\"test2\", Settings.EMPTY, \"type\", \"value\", \"type=date,format=strict_date_optional_time\");\n+ client().prepareIndex(\"test2\", \"type\").setSource(\"value\", \"2014-01-02T00:00:00.000Z\").get();\n+ client().admin().indices().prepareRefresh().get();\n+\n+ DateTime dateTime1 = new DateTime(2014, 1, 1, 0, 0, 0, 0, DateTimeZone.UTC);\n+ DateTime dateTime2 = new DateTime(2014, 1, 2, 0, 0, 0, 0, DateTimeZone.UTC);\n+ FieldStatsResponse response = client().prepareFieldStats()\n+ .setFields(\"value\")\n+ .setIndexContraints(new IndexConstraint(\"value\", MIN, GT, String.valueOf(dateTime1.getMillis()), \"epoch_millis\"), new IndexConstraint(\"value\", MAX, LTE, String.valueOf(dateTime2.getMillis()), \"epoch_millis\"))\n+ .setLevel(\"indices\")\n+ .get();\n+ assertThat(response.getIndicesMergedFieldStats().size(), equalTo(1));\n+ assertThat(response.getIndicesMergedFieldStats().get(\"test2\").get(\"value\").getMinValue(), equalTo(\"2014-01-02T00:00:00.000Z\"));\n+\n+ try {\n+ client().prepareFieldStats()\n+ .setFields(\"value\")\n+ .setIndexContraints(new IndexConstraint(\"value\", MIN, GT, String.valueOf(dateTime1.getMillis()), \"xyz\"))\n+ .setLevel(\"indices\")\n+ .get();\n+ fail(\"IllegalArgumentException should have been thrown\");\n+ } catch (IllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"Invalid format\"));\n+ }\n+ }\n+\n }\n\\ No newline at end of file",
"filename": "core/src/test/java/org/elasticsearch/fieldstats/FieldStatsTests.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,16 @@\n \"max_value\" : {\n \"lt\": 9\n }\n+ },\n+ \"field1\": {\n+ \"min_value\" : {\n+ \"gte\": \"2014-01-01\",\n+ \"format\" : \"date_optional_time\"\n+ },\n+ \"max_value\" : {\n+ \"lt\": \"2015-01-01\",\n+ \"format\" : \"date_optional_time\"\n+ }\n }\n }\n }\n\\ No newline at end of file",
"filename": "core/src/test/resources/org/elasticsearch/action/fieldstats/fieldstats-index-constraints-request.json",
"status": "modified"
},
{
"diff": "@@ -240,7 +240,7 @@ curl -XPOST \"http://localhost:9200/_field_stats?level=indices\" -d '{\n \"index_constraints\" : { <2>\n \"creation_date\" : { <3>\n \"min_value\" : { <4>\n- \"gte\" : \"2014-01-01T00:00:00.000Z\",\n+ \"gte\" : \"2014-01-01T00:00:00.000Z\"\n },\n \"max_value\" : {\n \"lt\" : \"2015-01-01T00:00:00.000Z\"\n@@ -263,3 +263,25 @@ Each index constraint support the following comparisons:\n `gt`:: \tGreater-than\n `lte`:: \tLess-than or equal to\n `lt`:: \tLess-than\n+\n+Field stats index constraints on date fields optionally accept a `format` option, used to parse the constraint's value.\n+If missing, the format configured in the field's mapping is used.\n+\n+[source,js]\n+--------------------------------------------------\n+curl -XPOST \"http://localhost:9200/_field_stats?level=indices\" -d '{\n+ \"fields\" : [\"answer_count\"] <1>\n+ \"index_constraints\" : { <2>\n+ \"creation_date\" : { <3>\n+ \"min_value\" : { <4>\n+ \"gte\" : \"2014-01-01\",\n+ \"format\" : \"date_optional_time\"\n+ },\n+ \"max_value\" : {\n+ \"lt\" : \"2015-01-01\",\n+ \"format\" : \"date_optional_time\"\n+ }\n+ }\n+ }\n+}'\n+--------------------------------------------------\n\\ No newline at end of file",
"filename": "docs/reference/search/field-stats.asciidoc",
"status": "modified"
}
]
}
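For readers using the Java client, a minimal sketch of the same date-constrained field stats request, pieced together from the calls visible in the test diff above (`prepareFieldStats`, the five-argument `IndexConstraint` constructor, and `setIndexContraints`, spelled as it appears in the source); the nested `Property`/`Comparison` enum names and the field name are assumptions, so treat this as illustrative only:

```java
import static org.elasticsearch.action.fieldstats.IndexConstraint.Comparison.GTE;
import static org.elasticsearch.action.fieldstats.IndexConstraint.Comparison.LT;
import static org.elasticsearch.action.fieldstats.IndexConstraint.Property.MAX;
import static org.elasticsearch.action.fieldstats.IndexConstraint.Property.MIN;

import org.elasticsearch.action.fieldstats.FieldStatsResponse;
import org.elasticsearch.action.fieldstats.IndexConstraint;
import org.elasticsearch.client.Client;

public class FieldStatsFormatExample {
    // Restrict per-index field stats to indices whose creation_date falls in 2014,
    // parsing the constraint values with an explicit "date_optional_time" format
    // instead of the format configured in the field mapping.
    public static FieldStatsResponse constrainedStats(Client client) {
        return client.prepareFieldStats()
                .setFields("creation_date")
                .setIndexContraints(
                        new IndexConstraint("creation_date", MIN, GTE, "2014-01-01", "date_optional_time"),
                        new IndexConstraint("creation_date", MAX, LT, "2015-01-01", "date_optional_time"))
                .setLevel("indices")
                .get();
    }
}
```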
|
{
"body": "Hi all.\nI have the following error trying to execute a spatial query with a ShapeRelation Contain.\naused by: java.lang.IllegalArgumentException: \n at org.elasticsearch.index.query.GeoShapeQueryParser.getArgs(GeoShapeQueryParser.java:192)\n at org.elasticsearch.index.query.GeoShapeQueryParser.parse(GeoShapeQueryParser.java:169)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:250)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:263)\n at org.elasticsearch.index.query.IndexQueryParserService.parseInnerFilter(IndexQueryParserService.java:220)\n at org.elasticsearch.search.query.PostFilterParseElement.parse(PostFilterParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:838)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:654)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:620)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:371)\n\nRegards \n",
"comments": [
{
"body": "@glascaleia can you please post the query you're running that's giving you this error?\n",
"created_at": "2016-04-20T00:15:38Z"
},
{
"body": " {\n \"query\" : {\n \"bool\" : {\n \"must\" : {\n \"geo_shape\" : {\n \"mountainNowArea.location\" : {\n \"shape\" : {\n \"type\" : \"polygon\",\n \"coordinates\" : [ [ [ -4.21875, 25.482951175 ], [ 15.1171875, 27.994401411 ], [ 24.609375, 18.312810846 ], [ 23.5546875, 7.710991655 ], [ -7.03125, 12.897489184 ], [ -4.21875, 25.482951175 ] ] ]\n },\n \"relation\" : \"contains\"\n },\n \"_name\" : null\n }\n }\n }\n },\n \"sort\" : [ {\n \"mountainNowArea.creationDate\" : {\n \"order\" : \"desc\"\n }\n } ]\n}\n",
"created_at": "2016-04-20T07:29:17Z"
},
{
"body": "This fails for me on 2.3.0, but works on master. Full recreation:\n\n```\nPUT t \n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"mountainNowArea\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"geo_shape\"\n },\n \"creationDate\": {\n \"type\": \"date\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT t/t/1\n{\n \"mountainNowArea\": {\n \"location\": {\n \"type\": \"polygon\",\n \"coordinates\": [\n [\n [\n 100,\n 0\n ],\n [\n 101,\n 0\n ],\n [\n 101,\n 1\n ],\n [\n 100,\n 1\n ],\n [\n 100,\n 0\n ]\n ]\n ]\n },\n \"creationDate\": \"2015-01-01\"\n }\n}\n\nGET _search\n{\n \"query\": {\n \"bool\": {\n \"must\": {\n \"geo_shape\": {\n \"mountainNowArea.location\": {\n \"shape\": {\n \"type\": \"polygon\",\n \"coordinates\": [\n [\n [\n -4.21875,\n 25.482951175\n ],\n [\n 15.1171875,\n 27.994401411\n ],\n [\n 24.609375,\n 18.312810846\n ],\n [\n 23.5546875,\n 7.710991655\n ],\n [\n -7.03125,\n 12.897489184\n ],\n [\n -4.21875,\n 25.482951175\n ]\n ]\n ]\n },\n \"relation\": \"contains\"\n },\n \"_name\": null\n }\n }\n }\n },\n \"sort\": [\n {\n \"mountainNowArea.creationDate\": {\n \"order\": \"desc\"\n }\n }\n ]\n}\n```\n",
"created_at": "2016-04-20T12:42:00Z"
},
{
"body": "@glascaleia what version of ES?\n\n@clintongormley can you confirm the following exception?\n\n``` javascript\n\"reason\": \"Failed to find geo_shape field [mountainNowArea.location]\",\n```\n\nCan you confirm success when explicitly searching the `t` index? e.g.:\n\n``` javascript\nGET t/_search\n```\n\nThis failure is unrelated to the original IAE issue. So it sounds to me like there are 2 separate issues.\n",
"created_at": "2016-05-06T16:43:54Z"
},
{
"body": "Fixed by 26b078ff56062140e4a2f309b98eb6c560185f70\n",
"created_at": "2016-05-18T07:34:03Z"
}
],
"number": 17866,
"title": "Error with geo_shape relation CONTAINS"
}
|
{
"body": "At the time of `geo_shape` query conception, `CONTAINS` was not yet a supported spatial operation in Lucene. Since it is now available this PR adds `ShapeRelation.CONTAINS` to `GeoShapeQuery`. Randomized testing is included and documentation is updated.\n\ncloses #14713\nBackport closes #17866\n",
"number": 14810,
"review_comments": [
{
"body": "Can we have a unit test instead of an integration test?\n",
"created_at": "2015-11-17T21:28:57Z"
},
{
"body": "added commit that refactors to ESSingleNodeTestCase\n",
"created_at": "2015-11-18T04:13:12Z"
}
],
"title": "Add CONTAINS relation to geo_shape query"
}
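To illustrate what the new relation enables from the Java API, here is a rough sketch; `ShapeBuilder.newEnvelope()`, its `topLeft`/`bottomRight` setters, and `GeoShapeQueryBuilder#relation()` are assumed to match the 2.x-era API, and the index and field names are made up:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.geo.ShapeRelation;
import org.elasticsearch.common.geo.builders.ShapeBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class ContainsQueryExample {
    // Find documents whose indexed geo_shape in "location" fully CONTAINS the query envelope.
    public static SearchResponse containingShapes(Client client) {
        ShapeBuilder envelope = ShapeBuilder.newEnvelope()
                .topLeft(13.0, 53.0)      // assumed envelope builder setters
                .bottomRight(14.0, 52.0);
        return client.prepareSearch("shapes")
                .setQuery(QueryBuilders.geoShapeQuery("location", envelope)
                        .relation(ShapeRelation.CONTAINS))
                .get();
    }
}
```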
|
{
"commits": [
{
"message": "Add CONTAINS relation to geo_shape query\n\nAt the time of geo_shape query conception, CONTAINS was not yet a supported spatial operation in Lucene. Since it is now available this commit adds ShapeRelation.CONTAINS to GeoShapeQuery. Randomized testing is included and documentation is updated."
}
],
"files": [
{
"diff": "@@ -34,7 +34,8 @@ public enum ShapeRelation implements Writeable<ShapeRelation>{\n \n INTERSECTS(\"intersects\"),\n DISJOINT(\"disjoint\"),\n- WITHIN(\"within\");\n+ WITHIN(\"within\"),\n+ CONTAINS(\"contains\");\n \n private final String relationName;\n ",
"filename": "core/src/main/java/org/elasticsearch/common/geo/ShapeRelation.java",
"status": "modified"
},
{
"diff": "@@ -361,6 +361,8 @@ public static SpatialArgs getArgs(ShapeBuilder shape, ShapeRelation relation) {\n return new SpatialArgs(SpatialOperation.Intersects, shape.build());\n case WITHIN:\n return new SpatialArgs(SpatialOperation.IsWithin, shape.build());\n+ case CONTAINS:\n+ return new SpatialArgs(SpatialOperation.Contains, shape.build());\n default:\n throw new IllegalArgumentException(\"invalid relation [\" + relation + \"]\");\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/GeoShapeQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -40,6 +40,7 @@\n import org.elasticsearch.common.geo.builders.PointCollection;\n import org.elasticsearch.common.geo.builders.PolygonBuilder;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n+import org.elasticsearch.search.geo.GeoShapeQueryTests;\n import org.junit.Assert;\n \n import java.util.Random;\n@@ -155,7 +156,7 @@ private static ShapeBuilder createShape(Random r, Point nearPoint, Rectangle wit\n /**\n * Creates a random shape useful for randomized testing, NOTE: exercise caution when using this to build random GeometryCollections\n * as creating a large random number of random shapes can result in massive resource consumption\n- * see: {@link org.elasticsearch.search.geo.GeoShapeIntegrationIT#testShapeFilterWithRandomGeoCollection}\n+ * see: {@link GeoShapeQueryTests#testShapeFilterWithRandomGeoCollection}\n *\n * The following options are included\n * @param nearPoint Create a shape near a provided point",
"filename": "core/src/test/java/org/elasticsearch/test/geo/RandomShapeGenerator.java",
"status": "modified"
},
{
"diff": "@@ -50,7 +50,8 @@ The following query will find the point using the Elasticsearch's\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\" : [[13.0, 53.0], [14.0, 52.0]]\n- }\n+ },\n+ \"relation\": \"within\"\n }\n }\n }\n@@ -61,7 +62,7 @@ The following query will find the point using the Elasticsearch's\n \n ==== Pre-Indexed Shape\n \n-The Filter also supports using a shape which has already been indexed in\n+The Query also supports using a shape which has already been indexed in\n another index and/or index type. This is particularly useful for when\n you have a pre-defined list of shapes which are useful to your\n application and you want to reference this using a logical name (for\n@@ -101,3 +102,15 @@ shape:\n }\n --------------------------------------------------\n \n+==== Spatial Relations\n+\n+The Query supports the following spatial relations:\n+\n+* `INTERSECTS` - (default) Return all documents whose `geo_shape` field\n+intersects the query geometry.\n+* `DISJOINT` - Return all documents whose `geo_shape` field\n+has nothing in common with the query geometry.\n+* `WITHIN` - Return all documents whose `geo_shape` field\n+is within the query geometry.\n+* `CONTAINS` - Return all documents whose `geo_shape` field\n+contains the query geometry.\n\\ No newline at end of file",
"filename": "docs/reference/query-dsl/geo-shape-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,108 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.messy.tests;\n+\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.common.geo.builders.ShapeBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.geo.GeoShapeFieldMapper;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.script.groovy.GroovyPlugin;\n+import org.elasticsearch.test.ESIntegTestCase;\n+\n+import java.util.Collection;\n+import java.util.Collections;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n+\n+/**\n+ */\n+public class GeoShapeIntegrationTests extends ESIntegTestCase {\n+\n+ @Override\n+ protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n+ return Collections.singleton(GroovyPlugin.class);\n+ }\n+\n+ /**\n+ * Test that orientation parameter correctly persists across cluster restart\n+ */\n+ public void testOrientationPersistence() throws Exception {\n+ String idxName = \"orientation\";\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"shape\")\n+ .startObject(\"properties\").startObject(\"location\")\n+ .field(\"type\", \"geo_shape\")\n+ .field(\"orientation\", \"left\")\n+ .endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ // create index\n+ assertAcked(prepareCreate(idxName).addMapping(\"shape\", mapping));\n+\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"shape\")\n+ .startObject(\"properties\").startObject(\"location\")\n+ .field(\"type\", \"geo_shape\")\n+ .field(\"orientation\", \"right\")\n+ .endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ assertAcked(prepareCreate(idxName+\"2\").addMapping(\"shape\", mapping));\n+ ensureGreen(idxName, idxName+\"2\");\n+\n+ internalCluster().fullRestart();\n+ ensureGreen(idxName, idxName+\"2\");\n+\n+ // left orientation test\n+ IndicesService indicesService = internalCluster().getInstance(IndicesService.class, findNodeName(idxName));\n+ IndexService indexService = indicesService.indexService(idxName);\n+ MappedFieldType fieldType = indexService.mapperService().smartNameFieldType(\"location\");\n+ assertThat(fieldType, instanceOf(GeoShapeFieldMapper.GeoShapeFieldType.class));\n+\n+ GeoShapeFieldMapper.GeoShapeFieldType gsfm = (GeoShapeFieldMapper.GeoShapeFieldType)fieldType;\n+ ShapeBuilder.Orientation orientation = gsfm.orientation();\n+ assertThat(orientation, equalTo(ShapeBuilder.Orientation.CLOCKWISE));\n+ assertThat(orientation, equalTo(ShapeBuilder.Orientation.LEFT));\n+ assertThat(orientation, equalTo(ShapeBuilder.Orientation.CW));\n+\n+ // right orientation test\n+ indicesService = internalCluster().getInstance(IndicesService.class, findNodeName(idxName+\"2\"));\n+ indexService = indicesService.indexService(idxName+\"2\");\n+ fieldType = indexService.mapperService().smartNameFieldType(\"location\");\n+ assertThat(fieldType, instanceOf(GeoShapeFieldMapper.GeoShapeFieldType.class));\n+\n+ gsfm = (GeoShapeFieldMapper.GeoShapeFieldType)fieldType;\n+ orientation = gsfm.orientation();\n+ assertThat(orientation, equalTo(ShapeBuilder.Orientation.COUNTER_CLOCKWISE));\n+ assertThat(orientation, equalTo(ShapeBuilder.Orientation.RIGHT));\n+ assertThat(orientation, equalTo(ShapeBuilder.Orientation.CCW));\n+ }\n+\n+ private String findNodeName(String index) {\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ IndexShardRoutingTable shard = state.getRoutingTable().index(index).shard(0);\n+ String nodeId = shard.assignedShards().get(0).currentNodeId();\n+ return state.getNodes().get(nodeId).name();\n+ }\n+}",
"filename": "plugins/lang-groovy/src/test/java/org/elasticsearch/messy/tests/GeoShapeIntegrationTests.java",
"status": "added"
}
]
}
|
{
"body": "I've spotted multiple instances across the code base where we catch `InterruptedException` and call `Thread.interrupted()` afterwards. The problem is that `Thread.interrupted()` queries and _clears_ the interrupted status (see also [Javadoc](http://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#interrupted--)) which is contrary to what should happen, namely to _set_ the interrupt flag again so code higher up the call stack can check the flag and act accordingly. This is done by calling `Thread.currentThread().interrupt()` (see [Javadoc](http://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#interrupt--)).\n",
"comments": [
{
"body": "It'd be nice to have [BulkProcessor#L348](https://github.com/elastic/elasticsearch/blob/1.7/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java#L348) fixed back in 1.7. I just ran into this today, as I have multiple data source threads running (using ExecutorService), all which populate a single BulkProcessor. We `cancel` the source thread in some cases, which triggers an interrupt and thus the BulkProcessor may sometimes swallow it (more often than not).\n\nIt does look like the Throwable is returned back to the BulkListener, so maybe I can check if it's an InterruptedException and re-interrupt? Thoughts?\n",
"created_at": "2016-06-17T01:46:21Z"
},
{
"body": "> It'd be nice to have BulkProcessor#L348 fixed back in 1.7.\n\nWe only fix critical bugs (e.g. ones that lead to data loss) in 1.7 and I'd not consider this one critical enough.\n\n> It does look like the Throwable is returned back to the BulkListener, so maybe I can check if it's an InterruptedException and re-interrupt? Thoughts?\n\nAdmittedly, this is just a workaround but this should work. Just don't forget to remove it once you upgrade. ;) It won't hurt calling `Thread.currentThread().interrupt()` again in your listener but it is not necessary on newer versions of Elasticsearch (starting with Elasticsearch 2.2.0)\n",
"created_at": "2016-06-17T05:42:15Z"
},
{
"body": "Seems to work ok. Thanks for the feedback @danielmitterdorfer !\n",
"created_at": "2016-06-17T13:26:12Z"
}
],
"number": 14798,
"title": "Thread interrupt flag is not properly restored after an InterruptedException"
}
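A sketch of the listener workaround discussed in the comments above for pre-2.2 clients, assuming the stock `BulkProcessor.Listener` callbacks of that era; whether the failure callback runs on the interrupted thread depends on the processor's concurrency settings, so treat this as illustrative only:

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;

class ReinterruptingListener implements BulkProcessor.Listener {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        // no-op
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        // no-op
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        if (failure instanceof InterruptedException) {
            // The processor swallowed the interrupt; hand it back to the current thread.
            Thread.currentThread().interrupt();
        }
    }
}
```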
|
{
"body": "This commit replaces all occurrences of Thread.interrupted() with\nThread.currentThread().interrupt(). While the former checks and clears the current\nthread's interrupt flag the latter sets it, which is actually intended.\n\nCloses #14798\n",
"number": 14799,
"review_comments": [
{
"body": "As this class is used on the client, which we do not control at all, this should be treated as library code. As library we do not know and should also not assume in which environment we are running and how the client wants to respond to interruption. Hence, we should definitely call `Thread.currentThread().interrupt()` here. However, we should restore the flag at the end of the catch block in order to reduce potential side-effects in listeners which we also do not control.\n",
"created_at": "2015-11-17T14:15:16Z"
},
{
"body": "As a base class it should stick to standard practice (which is to restore the flag). If subclasses want to do anything differently, they can always catch `InterruptedException` and handle it explicitly.\n",
"created_at": "2015-11-17T14:16:34Z"
},
{
"body": "This one is a bit more tricky, but there are also other calls that also set the interruption flag already (see `InternalShardLock#acquire()` which is called by `NodeEnvironment#lockAllForIndex()`). After looking what happens in the error handling code for this case, I'd opt for rethrowing `InterruptedException` directly instead of just setting the flag. In conjunction with that this shows that it makes sense that `AbstractRunnable` restores the flag on `InterruptedException` then (this whole code block is enclosed in a subclass of `AbstractRunnable`).\n",
"created_at": "2015-11-17T14:17:53Z"
},
{
"body": "This is called in `#doStop()` which is called when the component is about to stop anyway and does no harm.\n",
"created_at": "2015-11-17T14:18:23Z"
},
{
"body": "This is used by an internal thread where we control the run loop so the interruption here is also fine. However, I am not sure why we create threads ourselves outside of pools. Btw, I think this thread's implementation could be simplified significantly if we stick to Java's standard methods, i.e. interruption :) (but I would not do this in the scope of this PR).\n",
"created_at": "2015-11-17T14:18:54Z"
},
{
"body": "Called in `#beforeTest()` so would be ok for me to restore the flag here.\n",
"created_at": "2015-11-17T14:19:12Z"
},
{
"body": "Called in the `#stop()` method which can be interrupted when waiting for an external process to stop. I'd opt for rethrowing InterruptedException were we not implementing `Closeable` and delegating to `#stop()` in our implementation of `#close()`. Even then I'd just rethrow and restore the flag just in `#close()`. Client code does not need to be adapted at all as it is already aware of `InterruptedException` now.\n",
"created_at": "2015-11-17T14:20:28Z"
},
{
"body": "In this case I would argue that\n\n> rethrow InterruptedException whenever you do not know what code is higher up the call stack\n\ntranslates to passing the exception to the listener - it's what we do for any other exception.\n",
"created_at": "2015-11-18T10:11:20Z"
},
{
"body": "same here - since we have on onFailure handler, calling is the equivalent of re-throwing the exception, imo.\n",
"created_at": "2015-11-18T10:12:15Z"
},
{
"body": "Imho, the interruption is dealt with here. We don't need to bother the code higher up with it. \n",
"created_at": "2015-11-18T10:18:45Z"
},
{
"body": "Looking what this does, an interruption is ignored and will only used to cause an extra TTL check or shutdown. I think doing nothing is the right move here. \n",
"created_at": "2015-11-18T10:25:38Z"
},
{
"body": "I think the test should fail. We don't know what else have happened due to this interrupt nor whether the node is ready.\n",
"created_at": "2015-11-18T10:26:40Z"
},
{
"body": "+1 to throwing the exception.\n",
"created_at": "2015-11-18T10:28:04Z"
},
{
"body": "This puts the burden on the client (or anybody that implements such a listener), which needs to know that we do not restore the interrupt flag and do it in the listener implementation instead. As a consequence of this decision the client will not be able to cancel a bulk request _unless_ it is aware of the idiosyncrasies of our client API.\n\nThe problem is that `InterruptedException` is not like any other exception and we're really interfering with the expectations of the client here. To me we're assuming too much about the client when we do not restore the flag here.\n",
"created_at": "2015-11-18T12:22:17Z"
},
{
"body": "To me that's also the same as above but as we're on the server-side here I'm not as reluctant as on the client side to agree though I still think it's broken. The separate catch block for `InterruptedException` indicates that the original developer was aware that we have to treat `InterruptedException` differently. It's just that the handling is not correct.\n\nHowever, if we don't handle `InterruptedException` differently, I would remove the separate catch block too:\n\n```\n @Override\n public final void run() {\n try {\n doRun();\n } catch (Throwable t) {\n onFailure(t);\n } finally {\n onAfter();\n }\n }\n```\n\nI'll update the `AbstractRunnable` accordingly.\n",
"created_at": "2015-11-18T12:28:23Z"
},
{
"body": "Agreed. However, as I was trying to explain in my comment, we're dealing with interruption inconsistently in this method. When we're interrupted while we're stuck in `InternalShardLock#acquire()` we throw an exception and will _not_ notify the master (i.e. we will not call the second method `TransportService#sendRequest()` from `IndicesService#lockIndexAndAck()`) but if we are interrupted while we are in a timeout we will still notify the master (as we just return from the method). If that's ok, then I'm fine with that. :)\n",
"created_at": "2015-11-18T12:35:12Z"
},
{
"body": "Ok, I'll implement an empty catch block and just put a comment there.\n",
"created_at": "2015-11-18T12:38:08Z"
},
{
"body": "I'll change that accordingly.\n",
"created_at": "2015-11-18T12:38:52Z"
},
{
"body": "Then I think the most sensible thing is to propagate.\n",
"created_at": "2015-11-18T12:57:25Z"
},
{
"body": "OK. I'm good with fixing this inconsistency by throwing an InterruptedException (and catching it up just like we do with LockObtainFailedException\n",
"created_at": "2015-11-18T13:18:35Z"
},
{
"body": "We've discussed that this will go into a separate PR as it directly affects the client and I need to check anyway whether we're handling all cases in `BulkProcessor` consistently (i.e. numberOfConcurrentRequest > 0 vs. numberOfConcurrentRequest == 0). Hence, I'll revert the changes in the client API in this PR.\n",
"created_at": "2015-11-18T13:35:23Z"
},
{
"body": "did you look at what will happen if we bubble up this exception?\n",
"created_at": "2015-11-18T15:25:35Z"
},
{
"body": "`#close()` could either be invoked directly or from within a try-with-resources block and I did not come across either. We always invoke `#stop()` which now just throws the exception.\n\nUnfortunately, bubbling up is not an option as we implement `Closeable` from the JDK. Hence, I sensed restoring the flag is the most sensible option in this case.\n",
"created_at": "2015-11-18T15:34:16Z"
},
{
"body": "+1 \n\n> On 18 Nov 2015, at 16:34, Daniel Mitterdorfer notifications@github.com wrote:\n> \n> In test-framework/src/main/java/org/elasticsearch/test/ExternalNode.java:\n> \n> > @@ -233,7 +229,11 @@ synchronized boolean running() {\n> > \n> > ```\n> > @Override\n> > public void close() {\n> > ```\n> > - stop();\n> > - try {\n> > - stop();\n> > - } catch (InterruptedException e) {\n> \n> `#close()' could either be invoked directly or from within a try-with-resources block and I did not come across either.\n> \n> Unfortunately, bubbling up is not an option as we implement Closeable from the JDK. Hence, I sensed restoring the flag is the most sensible option in this case.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2015-11-18T15:39:17Z"
}
],
"title": "Restore thread interrupt flag after an InterruptedException"
}
|
{
"commits": [
{
"message": "Restore thread interrupt flag after an InterruptedException\n\nThis commit replaces all occurrences of Thread.interrupted() with\nThread.currentThread().interrupt(). While the former checks and clears the current\nthread's interrupt flag the latter sets it, which is actually intended.\n\nCloses #14798"
}
],
"files": [
{
"diff": "@@ -103,6 +103,8 @@ private void lockIndexAndAck(String index, DiscoveryNodes nodes, String nodeId,\n INDEX_STORE_DELETED_ACTION_NAME, new NodeIndexStoreDeletedMessage(index, nodeId), EmptyTransportResponseHandler.INSTANCE_SAME);\n } catch (LockObtainFailedException exc) {\n logger.warn(\"[{}] failed to lock all shards for index - timed out after 30 seconds\", index);\n+ } catch (InterruptedException e) {\n+ logger.warn(\"[{}] failed to lock all shards for index - interrupted\", index);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/cluster/action/index/NodeIndexDeletedAction.java",
"status": "modified"
},
{
"diff": "@@ -35,9 +35,6 @@ public boolean isForceExecution() {\n public final void run() {\n try {\n doRun();\n- } catch (InterruptedException ex) {\n- Thread.interrupted();\n- onFailure(ex);\n } catch (Throwable t) {\n onFailure(t);\n } finally {",
"filename": "core/src/main/java/org/elasticsearch/common/util/concurrent/AbstractRunnable.java",
"status": "modified"
},
{
"diff": "@@ -643,7 +643,7 @@ public int compareTo(PendingDelete o) {\n * @param index the index to process the pending deletes for\n * @param timeout the timeout used for processing pending deletes\n */\n- public void processPendingDeletes(Index index, IndexSettings indexSettings, TimeValue timeout) throws IOException {\n+ public void processPendingDeletes(Index index, IndexSettings indexSettings, TimeValue timeout) throws IOException, InterruptedException {\n logger.debug(\"{} processing pending deletes\", index);\n final long startTimeNS = System.nanoTime();\n final List<ShardLock> shardLocks = nodeEnv.lockAllForIndex(index, indexSettings, timeout.millis());\n@@ -695,14 +695,9 @@ public void processPendingDeletes(Index index, IndexSettings indexSettings, Time\n }\n if (remove.isEmpty() == false) {\n logger.warn(\"{} still pending deletes present for shards {} - retrying\", index, remove.toString());\n- try {\n- Thread.sleep(sleepTime);\n- sleepTime = Math.min(maxSleepTimeMs, sleepTime * 2); // increase the sleep time gradually\n- logger.debug(\"{} schedule pending delete retry after {} ms\", index, sleepTime);\n- } catch (InterruptedException e) {\n- Thread.interrupted();\n- return;\n- }\n+ Thread.sleep(sleepTime);\n+ sleepTime = Math.min(maxSleepTimeMs, sleepTime * 2); // increase the sleep time gradually\n+ logger.debug(\"{} schedule pending delete retry after {} ms\", index, sleepTime);\n }\n } while ((System.nanoTime() - startTimeNS) < timeout.nanos());\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java",
"status": "modified"
},
{
"diff": "@@ -99,7 +99,7 @@ protected void doStop() {\n try {\n this.purgerThread.shutdown();\n } catch (InterruptedException e) {\n- Thread.interrupted();\n+ // we intentionally do not want to restore the interruption flag, we're about to shutdown anyway\n }\n }\n \n@@ -340,7 +340,7 @@ public void await() {\n try {\n condition.await(timeout.millis(), TimeUnit.MILLISECONDS);\n } catch (InterruptedException e) {\n- Thread.interrupted();\n+ // we intentionally do not want to restore the interruption flag, we're about to shutdown anyway\n } finally {\n lock.unlock();\n }",
"filename": "core/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,6 @@\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n-import java.io.IOException;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -135,7 +134,7 @@ public void testDeleteIndexStore() throws Exception {\n ensureGreen(\"test\");\n }\n \n- public void testPendingTasks() throws IOException {\n+ public void testPendingTasks() throws Exception {\n IndicesService indicesService = getIndicesService();\n IndexService test = createIndex(\"test\");\n ",
"filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -63,19 +63,14 @@ public synchronized void afterTest() throws IOException {\n }\n \n @Override\n- public synchronized void beforeTest(Random random, double transportClientRatio) throws IOException {\n+ public synchronized void beforeTest(Random random, double transportClientRatio) throws IOException, InterruptedException {\n super.beforeTest(random, transportClientRatio);\n cluster.beforeTest(random, transportClientRatio);\n Settings defaultSettings = cluster.getDefaultSettings();\n final Client client = cluster.size() > 0 ? cluster.client() : cluster.clientNodeClient();\n for (int i = 0; i < externalNodes.length; i++) {\n if (!externalNodes[i].running()) {\n- try {\n- externalNodes[i] = externalNodes[i].start(client, defaultSettings, NODE_PREFIX + i, cluster.getClusterName(), i);\n- } catch (InterruptedException e) {\n- Thread.interrupted();\n- return;\n- }\n+ externalNodes[i] = externalNodes[i].start(client, defaultSettings, NODE_PREFIX + i, cluster.getClusterName(), i);\n }\n externalNodes[i].reset(random.nextLong());\n }",
"filename": "test-framework/src/main/java/org/elasticsearch/test/CompositeTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -206,19 +206,15 @@ synchronized void reset(long seed) {\n this.random.setSeed(seed);\n }\n \n- synchronized void stop() {\n+ synchronized void stop() throws InterruptedException {\n if (running()) {\n try {\n if (this.client != null) {\n client.close();\n }\n } finally {\n process.destroy();\n- try {\n- process.waitFor();\n- } catch (InterruptedException e) {\n- Thread.interrupted();\n- }\n+ process.waitFor();\n process = null;\n nodeInfo = null;\n \n@@ -233,7 +229,11 @@ synchronized boolean running() {\n \n @Override\n public void close() {\n- stop();\n+ try {\n+ stop();\n+ } catch (InterruptedException e) {\n+ Thread.currentThread().interrupt();\n+ }\n }\n \n synchronized String getName() {",
"filename": "test-framework/src/main/java/org/elasticsearch/test/ExternalNode.java",
"status": "modified"
},
{
"diff": "@@ -910,7 +910,7 @@ public Client client(Node node, String clusterName) {\n }\n \n @Override\n- public synchronized void beforeTest(Random random, double transportClientRatio) throws IOException {\n+ public synchronized void beforeTest(Random random, double transportClientRatio) throws IOException, InterruptedException {\n super.beforeTest(random, transportClientRatio);\n reset(true);\n }",
"filename": "test-framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -63,7 +63,7 @@ public long seed() {\n /**\n * This method should be executed before each test to reset the cluster to its initial state.\n */\n- public void beforeTest(Random random, double transportClientRatio) throws IOException {\n+ public void beforeTest(Random random, double transportClientRatio) throws IOException, InterruptedException {\n assert transportClientRatio >= 0.0 && transportClientRatio <= 1.0;\n logger.debug(\"Reset test cluster with transport client ratio: [{}]\", transportClientRatio);\n this.transportClientRatio = transportClientRatio;",
"filename": "test-framework/src/main/java/org/elasticsearch/test/TestCluster.java",
"status": "modified"
},
{
"diff": "@@ -94,7 +94,7 @@ public static void assertSettings(Settings left, Settings right, boolean checkCl\n }\n }\n \n- public void testBeforeTest() throws IOException {\n+ public void testBeforeTest() throws Exception {\n long clusterSeed = randomLong();\n int minNumDataNodes = randomIntBetween(0, 3);\n int maxNumDataNodes = randomIntBetween(minNumDataNodes, 4);",
"filename": "test-framework/src/test/java/org/elasticsearch/test/test/InternalTestClusterTests.java",
"status": "modified"
}
]
}
|