issue: dict
pr: dict
pr_details: dict
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**: 2.4.4/5.3.1\r\n\r\nA `_field_stats` call on a type `geo_point` field throws an exception for an index that was created in `2.4.4` and upgraded to `5.3.1`. I also reproduced this on going from `2.3.3` -> `5.3.0`.\r\n\r\nThis causes Kibana to not properly grab the index mappings when defining an index pattern rendering all fields as neither searchable nor aggregatable. \r\n\r\n**Steps to reproduce**:\r\n1. Create an index mapping with a geo_point field\r\n\r\n```\r\nPUT index\r\n{\r\n \"mappings\": {\r\n \"type\": {\r\n \"properties\": {\r\n \"geo_field\": {\r\n \"type\": \"geo_point\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n2. Add a sample document\r\n\r\n```\r\nPUT index/type/1\r\n{\r\n \"geo_field\": \"33.8957, -112.0577\"\r\n}\r\n```\r\n\r\n3. Upgrade to 5.3.1 (I simply copied the data directory over)\r\n\r\n4. Attempt a `_field_stats` call on the geo_field\r\n\r\n```\r\nGET index/_field_stats?fields=geo_field\r\n```\r\nThe response\r\n\r\n```\r\n{\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 4,\r\n \"failed\": 1,\r\n \"failures\": [\r\n {\r\n \"shard\": 3,\r\n \"index\": \"index\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"exception\",\r\n \"reason\": \"java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5\",\r\n \"caused_by\": {\r\n \"type\": \"execution_exception\",\r\n \"reason\": \"java.lang.ArrayIndexOutOfBoundsException: 5\",\r\n \"caused_by\": {\r\n \"type\": \"array_index_out_of_bounds_exception\",\r\n \"reason\": \"5\"\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"indices\": {\r\n \"_all\": {\r\n \"fields\": {}\r\n }\r\n }\r\n}\r\n```\r\n\r\n", "comments": [ { "body": "This seems to be a deserialization error with geo_point encoding in 2.x. @nknize can you take a look ?", "created_at": "2017-04-23T19:52:24Z" }, { "body": "@n0othing also note that field stats is deprecated in favour of the new field caps API (5.4)", "created_at": "2017-04-25T11:32:07Z" }, { "body": "We've had a couple reports of this affecting Kibana users already. Because we use field stats to figure out the searchable/aggregatable status of fields this effectively breaks Kibana for any index patterns containing a geo_point field after upgrading to 5.3.1 from 2.x\r\n\r\nhttps://github.com/elastic/kibana/issues/11379\r\nhttps://github.com/elastic/kibana/issues/11377\r\nhttps://github.com/elastic/kibana/issues/9571#issuecomment-296392234", "created_at": "2017-04-25T14:54:44Z" }, { "body": "PR opened.... 
https://github.com/elastic/elasticsearch/pull/24534", "created_at": "2017-05-06T19:25:39Z" }, { "body": "Per the PR, this is fixed in 5.3.3 and 5.4.1, should we close?", "created_at": "2017-06-05T17:28:40Z" }, { "body": "I'm still seeing this behavior when upgrading from 2.x to 5.4.1 and 5.3.3", "created_at": "2017-06-13T16:27:03Z" }, { "body": "There was a reversed ternary logic bug that wasn't caught by the munged test. Opened fix at #25211 for 5.4.2 release /cc @jimczi @clintongormley ", "created_at": "2017-06-14T00:13:02Z" }, { "body": "fix is merged in #25211 ", "created_at": "2017-06-14T13:58:20Z" } ], "number": 24275, "title": "_field_stats call on geo_point field broken after upgrading from 2.4.4 -> 5.3.1" }
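One of the comments above notes that field stats is deprecated in favour of the field caps API (5.4+), which is also how searchable/aggregatable status is meant to be discovered going forward. As a reference point only, a minimal check of the same field through the low-level REST client might look like the sketch below; the host, port, and index name are placeholders, not values taken from the issue.

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.util.Collections;

public class FieldCapsCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; point this at the upgraded 5.4+ cluster.
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // _field_caps replaces _field_stats for asking whether a field is
            // searchable/aggregatable, which is the information Kibana needs here.
            Response response = client.performRequest("GET", "/index/_field_caps",
                    Collections.singletonMap("fields", "geo_field"));
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```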
{ "body": "The ternary logic in `prefixCodedToGeoPoint` was reversed such that numericEncoded GeoPoints were using the decoding logic for GeoCoded points and vice versa. This PR fixes that boneheaded bug.\r\n\r\ncloses #24275\r\n", "number": 25211, "review_comments": [], "title": "Fix GeoPoint FieldStats ternary logic bug" }
{ "commits": [ { "message": "Fix GeoPoint FieldStats ternary logic bug\n\nThe ternary logic in prefixCodedToGeoPoint was reversed such that numericEncoded GeoPoints were using the decoding logic for GeoCoded points and vice versa. This commit fixes that boneheaded bug." } ], "files": [ { "diff": "@@ -661,8 +661,8 @@ public FieldMapper updateFieldType(Map<String, MappedFieldType> fullNameToFieldT\n return updated;\n }\n \n- private static GeoPoint prefixCodedToGeoPoint(BytesRef val, boolean isGeoCoded) {\n- final long encoded = isGeoCoded ? prefixCodedToGeoCoded(val) : LegacyNumericUtils.prefixCodedToLong(val);\n+ private static GeoPoint prefixCodedToGeoPoint(BytesRef val, boolean numericEncoded) {\n+ final long encoded = numericEncoded ? LegacyNumericUtils.prefixCodedToLong(val) : prefixCodedToGeoCoded(val);\n return new GeoPoint(MortonEncoder.decodeLatitude(encoded), MortonEncoder.decodeLongitude(encoded));\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/BaseGeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -687,22 +687,17 @@ public static FieldStats randomFieldStats(boolean withNullMinMax) throws Unknown\n }\n }\n \n- public void testGeopoint() {\n- Version version = VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.CURRENT);\n+ public void testGeopoint2x() {\n+ Version version = VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.V_2_4_5);\n Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, version).build();\n createIndex(\"test\", settings, \"test\",\n \"field_index\", makeType(\"geo_point\", true, false, false));\n- version = Version.CURRENT;\n- settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, version).build();\n- createIndex(\"test5x\", settings, \"test\",\n- \"field_index\", makeType(\"geo_point\", true, false, false));\n int numDocs = random().nextInt(20);\n for (int i = 0; i <= numDocs; ++i) {\n double lat = GeoTestUtil.nextLatitude();\n double lon = GeoTestUtil.nextLongitude();\n final String src = lat + \",\" + lon;\n client().prepareIndex(\"test\", \"test\").setSource(\"field_index\", src).get();\n- client().prepareIndex(\"test5x\", \"test\").setSource(\"field_index\", src).get();\n }\n \n client().admin().indices().prepareRefresh().get();\n@@ -714,6 +709,25 @@ public void testGeopoint() {\n // which is wildly different from V_5_0 which is point encoded. 
Skipping min/max in favor of testing\n }\n \n+ public void testGeopoint5x() {\n+ Version version = VersionUtils.randomVersionBetween(random(), Version.V_5_0_0, Version.CURRENT);\n+ Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, version).build();\n+ createIndex(\"test\", settings, \"test\",\n+ \"field_index\", makeType(\"geo_point\", true, false, false));\n+ int numDocs = random().nextInt(20);\n+ for (int i = 0; i <= numDocs; ++i) {\n+ double lat = GeoTestUtil.nextLatitude();\n+ double lon = GeoTestUtil.nextLongitude();\n+ final String src = lat + \",\" + lon;\n+ client().prepareIndex(\"test\", \"test\").setSource(\"field_index\", src).get();\n+ }\n+\n+ client().admin().indices().prepareRefresh().get();\n+ FieldStatsResponse result = client().prepareFieldStats().setFields(\"field_index\").get();\n+ FieldStats stats = result.getAllFieldStats().get(\"field_index\");\n+ assertEquals(stats.getDisplayType(), \"geo_point\");\n+ }\n+\n private void assertSerialization(FieldStats stats, Version version) throws IOException {\n BytesStreamOutput output = new BytesStreamOutput();\n output.setVersion(version);", "filename": "core/src/test/java/org/elasticsearch/fieldstats/FieldStatsTests.java", "status": "modified" } ] }
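The diff above renames the flag to `numericEncoded` and swaps the ternary branches so that the branch taken matches what the flag actually means. The following is a simplified, self-contained illustration of that failure mode, not the real Elasticsearch/Lucene code: the two decoders are hypothetical stand-ins for `LegacyNumericUtils.prefixCodedToLong` (the 2.x numeric encoding) and `prefixCodedToGeoCoded` (the geo-coded form), and the out-of-bounds read mirrors the `ArrayIndexOutOfBoundsException` reported in #24275.

```java
// Hypothetical stand-ins: decodeNumeric plays the role of the 2.x numeric decoder,
// decodeGeoCoded the role of the geo-coded decoder. Only the ternary-selection
// pattern matches the real fix; everything else is illustrative.
final class TernarySelectionSketch {

    private TernarySelectionSketch() {}

    static long decodeNumeric(byte[] prefixCoded) {
        return prefixCoded[0] & 0xFFL;            // pretend 2.x decoding, reads one byte
    }

    static long decodeGeoCoded(byte[] prefixCoded) {
        return (prefixCoded[1] & 0xFFL) << 8;     // pretend geo-coded decoding, reads further in
    }

    // Buggy shape: the flag says "this value uses the numeric encoding", but the
    // branch order hands it to the geo-coded decoder (and vice versa).
    static long decodeBuggy(byte[] val, boolean numericEncoded) {
        return numericEncoded ? decodeGeoCoded(val) : decodeNumeric(val);
    }

    // Fixed shape, matching the BaseGeoPointFieldMapper change above: the branch
    // taken now agrees with what the flag means.
    static long decodeFixed(byte[] val, boolean numericEncoded) {
        return numericEncoded ? decodeNumeric(val) : decodeGeoCoded(val);
    }

    public static void main(String[] args) {
        byte[] shortNumericValue = new byte[] {42};                 // a one-byte "2.x" value
        System.out.println(decodeFixed(shortNumericValue, true));   // prints 42
        System.out.println(decodeBuggy(shortNumericValue, true));   // ArrayIndexOutOfBoundsException
    }
}
```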
{ "body": "**Elasticsearch version**: master 186c16ea41406b284bce896ab23771a93e93e7ec, 6.0.0-alpha1 and alpha2\r\n\r\n**Plugins installed**: repository-s3\r\n\r\n**JVM version** (`java -version`): 1.8.0_131-b11\r\n\r\n**OS version** :\r\n- Darwin Kernel Version 15.4.0: Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64\r\n- ubuntu-1604 4.4.0-75-generic #96-Ubuntu SMP Thu Apr 20 09:56:33 UTC 2017 x86_64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen creating snapshots to S3, the first snapshot is often successful, a second snapshot fails most of the time with a SecurityException. This leads to a PARTIAL snapshot with errors in the log or a\r\n`{\"error\":{\"root_cause\":[{\"type\":\"access_control_exception\",\"reason\":\"access denied (\\\"java.net.SocketPermission\\\" \\\"54.231.134.114:443\\\" \\\"connect,resolve\\\")\"}],\"type\":\"access_control_exception\",\"reason\":\"access denied (\\\"java.net.SocketPermission\\\" \\\"54.231.134.114:443\\\" \\\"connect,resolve\\\")\"},\"status\":500}`\r\n`\r\n\r\n**Steps to reproduce**:\r\n 1. Create 10 empty indices\r\n 2. Register S3 repository \r\n 3. Create snapshots in a loop\r\n \r\n(python scripts attached)\r\n\r\n**Analysis**:\r\n\r\nThe exception occurs when a socket gets opened by a S3OutputStream.close() operation. The problem is that a plugin can only use its own code/jars to perform privileged operations. In this case the stack contains elements from the elasticsearch and the lucene-core jar which gets more obvious from the security debugging below.\r\n\r\nA snapshot might succeed if connections got opened using e.g. the listBucket or bucketExists methods and gets reused on S3OutputStream.close() calls.\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-06-13T10:53:06,284][INFO ][o.e.s.SnapshotShardsService] [PoQjxkm] snapshot [elasticsearch-local:20170613t0952-1/Y6AH1fueS2i8hfL8hASjEg] is done\r\n[2017-06-13T10:53:08,940][WARN ][o.e.s.SnapshotsService ] [PoQjxkm] failed to create snapshot [20170613t0952-2/4QbHE3LjTEyN9OR25QVQGg]\r\njava.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\tat 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:89) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:76) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:88) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:60) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$2(S3BlobContainer.java:95) ~[?:?]\r\n\tat java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) 
~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n\tSuppressed: java.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\t\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\t\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\t\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\t\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\t\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\t\tat org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:96) 
~[?:?]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\t\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2017-06-13T10:53:08,963][WARN ][r.suppressed ] path: /_snapshot/elasticsearch-local/20170613t0952-2, params: {repository=elasticsearch-local, snapshot=20170613t0952-2}\r\njava.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\tat org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\tat 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:89) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:76) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:88) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:60) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$2(S3BlobContainer.java:95) ~[?:?]\r\n\tat java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat 
java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n\tSuppressed: java.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\t\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\t\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\t\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\t\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\t\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\t\tat org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:96) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat 
org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\t\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n```\r\n-Djava.security.debug=\"access,failure,domain\"\r\n\r\n```\r\naccess: access denied (\"java.net.SocketPermission\" \"52.218.64.73:443\" \"connect,resolve\")\r\njava.lang.Exception: Stack trace\r\n at java.lang.Thread.dumpStack(Thread.java:1336)\r\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:462)\r\n at java.security.AccessController.checkPermission(AccessController.java:884)\r\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\r\n at java.lang.SecurityManager.checkConnect(SecurityManager.java:1051)\r\n at java.net.Socket.connect(Socket.java:584)\r\n at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)\r\n at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542)\r\n at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412)\r\n at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134)\r\n at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179)\r\n at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328)\r\n at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612)\r\n at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447)\r\n at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)\r\n at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)\r\n at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)\r\n at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837)\r\n at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)\r\n at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)\r\n at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)\r\n at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)\r\n at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654)\r\n at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354)\r\n at org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139)\r\n at 
org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110)\r\n at org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99)\r\n at org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69)\r\n at org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87)\r\n at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)\r\n at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:88)\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:60)\r\n at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$2(S3BlobContainer.java:95)\r\n at java.security.AccessController.doPrivileged(Native Method)\r\n at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48)\r\n at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95)\r\n at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187)\r\n at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:136)\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1008)\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1242)\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:815)\r\n at org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:380)\r\n at org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88)\r\n at org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:334)\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638)\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n at java.lang.Thread.run(Thread.java:748)\r\naccess: domain that failed ProtectionDomain (file:/Users/jodraeger/servers/elasticsearch-6.0.0-alpha3-SNAPSHOT/lib/lucene-core-7.0.0-snapshot-a0aef2f.jar <no signer certificates>)\r\n sun.misc.Launcher$AppClassLoader@18b4aac2\r\n <no principals>\r\n java.security.Permissions@47db5fa5 (\r\n (\"java.lang.RuntimePermission\" \"exitVM\")\r\n (\"java.io.FilePermission\" \"/Users/jodraeger/servers/elasticsearch-6.0.0-alpha3-SNAPSHOT/lib/lucene-core-7.0.0-snapshot-a0aef2f.jar\" \"read\")\r\n)\r\n```\r\n\r\n", "comments": [ { "body": "thanks for testing out 6.0 @joachimdraeger - i've made you an Elastic Pioneer\r\n\r\n@tbrooks8 please could you take a look", "created_at": "2017-06-13T13:09:56Z" }, { "body": "This should be fixed by #25254", "created_at": "2017-07-10T15:37:41Z" } ], "number": 25192, "title": "intermittent SecurityException when creating s3-repository snapshots" }
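The fix that followed (#25206, next record) keeps the whole copy loop, including the `close()` that may open a new S3 connection, inside the plugin's own code, so that only plugin frames sit above the `doPrivileged` call when the socket permission is checked. A minimal sketch of that shape, using hypothetical class and method names rather than the plugin's actual code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

final class PrivilegedCopySketch {

    private PrivilegedCopySketch() {}

    /**
     * Copies a stream entirely inside doPrivileged using only code from this class.
     * In the failing version the copy was delegated to shared utilities (core
     * Streams.copy, which closes via Lucene's IOUtils.close); those jars are not
     * granted SocketPermission by the plugin policy, and because the permission
     * check still inspects every frame above the doPrivileged call, the connection
     * opened by close() was denied.
     */
    static void privilegedCopy(InputStream in, OutputStream out) throws IOException {
        try {
            AccessController.doPrivileged((PrivilegedExceptionAction<Void>) () -> {
                try (InputStream is = in; OutputStream os = out) {
                    byte[] buf = new byte[8 * 1024];
                    int read;
                    while ((read = is.read(buf)) != -1) {
                        os.write(buf, 0, read);
                    }
                }
                return null;
            });
        } catch (PrivilegedActionException e) {
            throw new IOException(e.getCause());
        }
    }
}
```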
{ "body": "Use Apache commons IO to copy streams in repository S3 plugin to avoid SecurityException. A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192\r\n\r\n\r\n", "number": 25206, "review_comments": [ { "body": "I do not think we need a whole new dependency for this.", "created_at": "2017-06-13T17:57:21Z" }, { "body": "I can implement the stream copying, it's not too much code.", "created_at": "2017-06-13T19:01:12Z" } ], "title": "Fix SecurityException in repository-s3 plugin " }
{ "commits": [ { "message": "Use Apache commons IO to copy streams in repository S3 plugin to avoid SecurityException. A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192" }, { "message": "Revert \"Use Apache commons IO to copy streams in repository S3 plugin to avoid SecurityException. A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192\"\n\nThis reverts commit 429f22fe8c66e3f16cdfc791c71a0ee6bcfea383." }, { "message": "Using an own implementation to copy streams in repository-S3 plugin to avoid SecurityException. A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192" } ], "files": [ { "diff": "@@ -47,6 +47,7 @@\n \n class S3BlobContainer extends AbstractBlobContainer {\n \n+ public static final int BUF_SIZE = 8 * 1024;\n protected final S3BlobStore blobStore;\n \n protected final String keyPath;\n@@ -91,9 +92,18 @@ public void writeBlob(String blobName, InputStream inputStream, long blobSize) t\n if (blobExists(blobName)) {\n throw new FileAlreadyExistsException(\"blob [\" + blobName + \"] already exists, cannot overwrite\");\n }\n- try (OutputStream stream = createOutput(blobName)) {\n- SocketAccess.doPrivilegedIOException(() -> Streams.copy(inputStream, stream));\n- }\n+\n+ SocketAccess.doPrivilegedIOException(() -> {\n+ try (OutputStream os = createOutput(blobName);\n+ InputStream is = inputStream) {\n+ byte[] buf = new byte[BUF_SIZE];\n+ int read;\n+ while ((read = is.read(buf)) != -1) {\n+ os.write(buf, 0, read);\n+ }\n+ }\n+ return null;\n+ });\n }\n \n @Override", "filename": "plugins/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3BlobContainer.java", "status": "modified" } ] }
{ "body": "I'd wanted to debug this but it looks like I'm not going to have time. The test makes a snapsot in 5.6 and then does a full cluster restart to upgrade to master. Then it does `GET /_snapshot/repo/_all` and looks for the snapshot and doesn't find it.", "comments": [ { "body": "@imotov got it with https://github.com/elastic/elasticsearch/pull/25204#pullrequestreview-43792712.", "created_at": "2017-06-13T17:14:39Z" } ], "number": 25203, "title": "Snapshot/restore test in full cluster restart/upgrade tests fail" }
{ "body": "Extract the snapshot/restore full cluster restart tests from the translog full cluster restart tests. That way they are easier to read.\r\n\r\nCloses #25203", "number": 25204, "review_comments": [], "title": "Extract the snapshot/restore full cluster restart tests from the translog full cluster restart tests" }
{ "commits": [ { "message": "WIP" }, { "message": "Break restore tests out from translog tests" }, { "message": "WIP" }, { "message": "Ready to investigate" }, { "message": "Merge branch 'master' into recovery_bwc" }, { "message": "Add awaitsfix" }, { "message": "Fix test" }, { "message": "Fix properly\n\nDid `git add .` in the wrong dir..... Silly me!" } ], "files": [ { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.upgrades;\n \n-import org.apache.http.ParseException;\n import org.apache.http.entity.ContentType;\n import org.apache.http.entity.StringEntity;\n import org.apache.http.util.EntityUtils;\n@@ -32,6 +31,7 @@\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.common.xcontent.support.XContentMapValues;\n import org.elasticsearch.test.rest.ESRestTestCase;\n+import org.junit.Before;\n \n import java.io.IOException;\n import java.util.Collections;\n@@ -42,6 +42,7 @@\n import java.util.regex.Pattern;\n \n import static java.util.Collections.emptyMap;\n+import static java.util.Collections.singletonList;\n import static java.util.Collections.singletonMap;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.containsString;\n@@ -54,24 +55,34 @@\n * with {@code tests.is_old_cluster} set to {@code false}.\n */\n public class FullClusterRestartIT extends ESRestTestCase {\n- private static final String REPO = \"/_snapshot/repo\";\n-\n private final boolean runningAgainstOldCluster = Booleans.parseBoolean(System.getProperty(\"tests.is_old_cluster\"));\n private final Version oldClusterVersion = Version.fromString(System.getProperty(\"tests.old_cluster_version\"));\n private final boolean supportsLenientBooleans = oldClusterVersion.onOrAfter(Version.V_6_0_0_alpha1);\n \n+ private String index;\n+\n+ @Before\n+ public void setIndex() {\n+ index = getTestName().toLowerCase(Locale.ROOT);\n+ }\n+\n @Override\n protected boolean preserveIndicesUponCompletion() {\n return true;\n }\n \n+ @Override\n+ protected boolean preserveSnapshotsUponCompletion() {\n+ return true;\n+ }\n+\n @Override\n protected boolean preserveReposUponCompletion() {\n return true;\n }\n \n public void testSearch() throws Exception {\n- String index = getTestName().toLowerCase(Locale.ROOT);\n+ int count;\n if (runningAgainstOldCluster) {\n XContentBuilder mappingsAndSettings = jsonBuilder();\n mappingsAndSettings.startObject();\n@@ -103,8 +114,8 @@ public void testSearch() throws Exception {\n client().performRequest(\"PUT\", \"/\" + index, Collections.emptyMap(),\n new StringEntity(mappingsAndSettings.string(), ContentType.APPLICATION_JSON));\n \n- int numDocs = randomIntBetween(2000, 3000);\n- indexRandomDocuments(index, numDocs, true, i -> {\n+ count = randomIntBetween(2000, 3000);\n+ indexRandomDocuments(count, true, true, i -> {\n return JsonXContent.contentBuilder().startObject()\n .field(\"string\", randomAlphaOfLength(10))\n .field(\"int\", randomInt(100))\n@@ -115,45 +126,51 @@ public void testSearch() throws Exception {\n // TODO a binary field\n .endObject();\n });\n- logger.info(\"Refreshing [{}]\", index);\n- client().performRequest(\"POST\", \"/\" + index + \"/_refresh\");\n+ refresh();\n+ } else {\n+ count = countOfIndexedRandomDocuments();\n }\n- assertBasicSearchWorks(index);\n+ assertBasicSearchWorks(count);\n }\n \n- void assertBasicSearchWorks(String index) throws IOException {\n+ void assertBasicSearchWorks(int count) throws IOException {\n logger.info(\"--> testing basic search\");\n 
Map<String, Object> response = toMap(client().performRequest(\"GET\", \"/\" + index + \"/_search\"));\n assertNoFailures(response);\n- int numDocs1 = (int) XContentMapValues.extractValue(\"hits.total\", response);\n- logger.info(\"Found {} in old index\", numDocs1);\n+ int numDocs = (int) XContentMapValues.extractValue(\"hits.total\", response);\n+ logger.info(\"Found {} in old index\", numDocs);\n+ assertEquals(count, numDocs);\n \n logger.info(\"--> testing basic search with sort\");\n String searchRequestBody = \"{ \\\"sort\\\": [{ \\\"int\\\" : \\\"asc\\\" }]}\";\n response = toMap(client().performRequest(\"GET\", \"/\" + index + \"/_search\", Collections.emptyMap(),\n new StringEntity(searchRequestBody, ContentType.APPLICATION_JSON)));\n assertNoFailures(response);\n- int numDocs2 = (int) XContentMapValues.extractValue(\"hits.total\", response);\n- assertEquals(numDocs1, numDocs2);\n+ numDocs = (int) XContentMapValues.extractValue(\"hits.total\", response);\n+ assertEquals(count, numDocs);\n \n logger.info(\"--> testing exists filter\");\n searchRequestBody = \"{ \\\"query\\\": { \\\"exists\\\" : {\\\"field\\\": \\\"string\\\"} }}\";\n response = toMap(client().performRequest(\"GET\", \"/\" + index + \"/_search\", Collections.emptyMap(),\n new StringEntity(searchRequestBody, ContentType.APPLICATION_JSON)));\n assertNoFailures(response);\n- numDocs2 = (int) XContentMapValues.extractValue(\"hits.total\", response);\n- assertEquals(numDocs1, numDocs2);\n+ numDocs = (int) XContentMapValues.extractValue(\"hits.total\", response);\n+ assertEquals(count, numDocs);\n \n searchRequestBody = \"{ \\\"query\\\": { \\\"exists\\\" : {\\\"field\\\": \\\"field.with.dots\\\"} }}\";\n response = toMap(client().performRequest(\"GET\", \"/\" + index + \"/_search\", Collections.emptyMap(),\n new StringEntity(searchRequestBody, ContentType.APPLICATION_JSON)));\n assertNoFailures(response);\n- numDocs2 = (int) XContentMapValues.extractValue(\"hits.total\", response);\n- assertEquals(numDocs1, numDocs2);\n+ numDocs = (int) XContentMapValues.extractValue(\"hits.total\", response);\n+ assertEquals(count, numDocs);\n }\n \n static Map<String, Object> toMap(Response response) throws IOException {\n- return XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(response.getEntity()), false);\n+ return toMap(EntityUtils.toString(response.getEntity()));\n+ }\n+\n+ static Map<String, Object> toMap(String response) throws IOException {\n+ return XContentHelper.convertToMap(JsonXContent.jsonXContent, response, false);\n }\n \n static void assertNoFailures(Map<String, Object> response) {\n@@ -165,7 +182,7 @@ static void assertNoFailures(Map<String, Object> response) {\n * Tests that a single document survives. 
Super basic smoke test.\n */\n public void testSingleDoc() throws IOException {\n- String docLocation = \"/\" + getTestName().toLowerCase(Locale.ROOT) + \"/doc/1\";\n+ String docLocation = \"/\" + index + \"/doc/1\";\n String doc = \"{\\\"test\\\": \\\"test\\\"}\";\n \n if (runningAgainstOldCluster) {\n@@ -176,11 +193,11 @@ public void testSingleDoc() throws IOException {\n assertThat(EntityUtils.toString(client().performRequest(\"GET\", docLocation).getEntity()), containsString(doc));\n }\n \n- public void testRandomDocumentsAndSnapshot() throws IOException {\n- String testName = getTestName().toLowerCase(Locale.ROOT);\n- String index = testName + \"_data\";\n- String infoDocument = \"/\" + testName + \"_info/doc/info\";\n-\n+ /**\n+ * Tests recovery of an index with or without a translog and the\n+ * statistics we gather about that. \n+ */\n+ public void testRecovery() throws IOException {\n int count;\n boolean shouldHaveTranslog;\n if (runningAgainstOldCluster) {\n@@ -189,34 +206,19 @@ public void testRandomDocumentsAndSnapshot() throws IOException {\n * an index without a translog so we randomize whether\n * or not we have one. */\n shouldHaveTranslog = randomBoolean();\n- logger.info(\"Creating {} documents\", count);\n- indexRandomDocuments(index, count, true, i -> jsonBuilder().startObject().field(\"field\", \"value\").endObject());\n- createSnapshot();\n+\n+ indexRandomDocuments(count, true, true, i -> jsonBuilder().startObject().field(\"field\", \"value\").endObject());\n // Explicitly flush so we're sure to have a bunch of documents in the Lucene index\n client().performRequest(\"POST\", \"/_flush\");\n if (shouldHaveTranslog) {\n // Update a few documents so we are sure to have a translog\n- indexRandomDocuments(index, count / 10, false /* Flushing here would invalidate the whole thing....*/,\n+ indexRandomDocuments(count / 10, false /* Flushing here would invalidate the whole thing....*/, false,\n i -> jsonBuilder().startObject().field(\"field\", \"value\").endObject());\n }\n-\n- // Record how many documents we built so we can compare later\n- XContentBuilder infoDoc = JsonXContent.contentBuilder().startObject();\n- infoDoc.field(\"count\", count);\n- infoDoc.field(\"should_have_translog\", shouldHaveTranslog);\n- infoDoc.endObject();\n- client().performRequest(\"PUT\", infoDocument, singletonMap(\"refresh\", \"true\"),\n- new StringEntity(infoDoc.string(), ContentType.APPLICATION_JSON));\n+ saveInfoDocument(\"should_have_translog\", Boolean.toString(shouldHaveTranslog));\n } else {\n- // Load the number of documents that were written to the old cluster\n- String doc = EntityUtils.toString(\n- client().performRequest(\"GET\", infoDocument, singletonMap(\"filter_path\", \"_source\")).getEntity());\n- Matcher m = Pattern.compile(\"\\\"count\\\":(\\\\d+)\").matcher(doc);\n- assertTrue(doc, m.find());\n- count = Integer.parseInt(m.group(1));\n- m = Pattern.compile(\"\\\"should_have_translog\\\":(true|false)\").matcher(doc);\n- assertTrue(doc, m.find());\n- shouldHaveTranslog = Booleans.parseBoolean(m.group(1));\n+ count = countOfIndexedRandomDocuments();\n+ shouldHaveTranslog = Booleans.parseBoolean(loadInfoDocument(\"should_have_translog\"));\n }\n \n // Count the documents in the index to make sure we have as many as we put there\n@@ -225,133 +227,181 @@ public void testRandomDocumentsAndSnapshot() throws IOException {\n assertThat(countResponse, containsString(\"\\\"total\\\":\" + count));\n \n if (false == runningAgainstOldCluster) {\n- assertTranslogRecoveryStatistics(index, 
shouldHaveTranslog);\n+ boolean restoredFromTranslog = false;\n+ boolean foundPrimary = false;\n+ Map<String, String> params = new HashMap<>();\n+ params.put(\"h\", \"index,shard,type,stage,translog_ops_recovered\");\n+ params.put(\"s\", \"index,shard,type\");\n+ String recoveryResponse = EntityUtils.toString(client().performRequest(\"GET\", \"/_cat/recovery/\" + index, params).getEntity());\n+ for (String line : recoveryResponse.split(\"\\n\")) {\n+ // Find the primaries\n+ foundPrimary = true;\n+ if (false == line.contains(\"done\") && line.contains(\"existing_store\")) {\n+ continue;\n+ }\n+ /* Mark if we see a primary that looked like it restored from the translog.\n+ * Not all primaries will look like this all the time because we modify\n+ * random documents when we want there to be a translog and they might\n+ * not be spread around all the shards. */\n+ Matcher m = Pattern.compile(\"(\\\\d+)$\").matcher(line);\n+ assertTrue(line, m.find());\n+ int translogOps = Integer.parseInt(m.group(1));\n+ if (translogOps > 0) {\n+ restoredFromTranslog = true;\n+ }\n+ }\n+ assertTrue(\"expected to find a primary but didn't\\n\" + recoveryResponse, foundPrimary);\n+ assertEquals(\"mismatch while checking for translog recovery\\n\" + recoveryResponse, shouldHaveTranslog, restoredFromTranslog);\n+\n+ String currentLuceneVersion = Version.CURRENT.luceneVersion.toString();\n+ String bwcLuceneVersion = oldClusterVersion.luceneVersion.toString();\n+ if (shouldHaveTranslog && false == currentLuceneVersion.equals(bwcLuceneVersion)) {\n+ int numCurrentVersion = 0;\n+ int numBwcVersion = 0;\n+ params.clear();\n+ params.put(\"h\", \"prirep,shard,index,version\");\n+ params.put(\"s\", \"prirep,shard,index\");\n+ String segmentsResponse = EntityUtils.toString(\n+ client().performRequest(\"GET\", \"/_cat/segments/\" + index, params).getEntity());\n+ for (String line : segmentsResponse.split(\"\\n\")) {\n+ if (false == line.startsWith(\"p\")) {\n+ continue;\n+ }\n+ Matcher m = Pattern.compile(\"(\\\\d+\\\\.\\\\d+\\\\.\\\\d+)$\").matcher(line);\n+ assertTrue(line, m.find());\n+ String version = m.group(1);\n+ if (currentLuceneVersion.equals(version)) {\n+ numCurrentVersion++;\n+ } else if (bwcLuceneVersion.equals(version)) {\n+ numBwcVersion++;\n+ } else {\n+ fail(\"expected version to be one of [\" + currentLuceneVersion + \",\" + bwcLuceneVersion + \"] but was \" + line);\n+ }\n+ }\n+ assertNotEquals(\"expected at least 1 current segment after translog recovery\", 0, numCurrentVersion);\n+ assertNotEquals(\"expected at least 1 old segment\", 0, numBwcVersion);\n+ }\n }\n+ }\n \n- restoreSnapshot(index, count);\n+ public void testSnapshotRestore() throws IOException {\n+ int count;\n+ if (runningAgainstOldCluster) {\n+ count = between(200, 300);\n+ indexRandomDocuments(count, true, true, i -> jsonBuilder().startObject().field(\"field\", \"value\").endObject());\n+\n+ // Create the repo and the snapshot\n+ XContentBuilder repoConfig = JsonXContent.contentBuilder().startObject(); {\n+ repoConfig.field(\"type\", \"fs\");\n+ repoConfig.startObject(\"settings\"); {\n+ repoConfig.field(\"compress\", randomBoolean());\n+ repoConfig.field(\"location\", System.getProperty(\"tests.path.repo\"));\n+ }\n+ repoConfig.endObject();\n+ }\n+ repoConfig.endObject();\n+ client().performRequest(\"PUT\", \"/_snapshot/repo\", emptyMap(),\n+ new StringEntity(repoConfig.string(), ContentType.APPLICATION_JSON));\n+\n+ XContentBuilder snapshotConfig = JsonXContent.contentBuilder().startObject(); {\n+ snapshotConfig.field(\"indices\", 
index);\n+ }\n+ snapshotConfig.endObject();\n+ client().performRequest(\"PUT\", \"/_snapshot/repo/snap\", singletonMap(\"wait_for_completion\", \"true\"),\n+ new StringEntity(snapshotConfig.string(), ContentType.APPLICATION_JSON));\n \n- // TODO finish adding tests for the things in OldIndexBackwardsCompatibilityIT\n+ // Refresh the index so the count doesn't fail\n+ refresh();\n+ } else {\n+ count = countOfIndexedRandomDocuments();\n+ }\n+\n+ // Count the documents in the index to make sure we have as many as we put there\n+ String countResponse = EntityUtils.toString(\n+ client().performRequest(\"GET\", \"/\" + index + \"/_search\", singletonMap(\"size\", \"0\")).getEntity());\n+ assertThat(countResponse, containsString(\"\\\"total\\\":\" + count));\n+\n+ if (false == runningAgainstOldCluster) {\n+ /* Remove any \"restored\" indices from the old cluster run of this test.\n+ * We intentionally don't remove them while running this against the\n+ * old cluster so we can test starting the node with a restored index\n+ * in the cluster. */\n+ client().performRequest(\"DELETE\", \"/restored_*\");\n+ }\n+\n+ // Check the metadata, especially the version\n+ String response = EntityUtils.toString(\n+ client().performRequest(\"GET\", \"/_snapshot/repo/_all\", singletonMap(\"verbose\", \"true\")).getEntity());\n+ Map<String, Object> map = toMap(response);\n+ assertEquals(response, singletonList(\"snap\"), XContentMapValues.extractValue(\"snapshots.snapshot\", map));\n+ assertEquals(response, singletonList(\"SUCCESS\"), XContentMapValues.extractValue(\"snapshots.state\", map));\n+ assertEquals(response, singletonList(oldClusterVersion.toString()), XContentMapValues.extractValue(\"snapshots.version\", map));\n+\n+ XContentBuilder restoreCommand = JsonXContent.contentBuilder().startObject();\n+ restoreCommand.field(\"include_global_state\", randomBoolean());\n+ restoreCommand.field(\"indices\", index);\n+ restoreCommand.field(\"rename_pattern\", index);\n+ restoreCommand.field(\"rename_replacement\", \"restored_\" + index);\n+ restoreCommand.endObject();\n+ client().performRequest(\"POST\", \"/_snapshot/repo/snap/_restore\", singletonMap(\"wait_for_completion\", \"true\"),\n+ new StringEntity(restoreCommand.string(), ContentType.APPLICATION_JSON));\n+\n+ countResponse = EntityUtils.toString(\n+ client().performRequest(\"GET\", \"/restored_\" + index + \"/_search\", singletonMap(\"size\", \"0\")).getEntity());\n+ assertThat(countResponse, containsString(\"\\\"total\\\":\" + count));\n+ \n }\n \n // TODO tests for upgrades after shrink. 
We've had trouble with shrink in the past.\n \n- private void indexRandomDocuments(String index, int count, boolean flushAllowed,\n+ private void indexRandomDocuments(int count, boolean flushAllowed, boolean saveInfo,\n CheckedFunction<Integer, XContentBuilder, IOException> docSupplier) throws IOException {\n+ logger.info(\"Indexing {} random documents\", count);\n for (int i = 0; i < count; i++) {\n logger.debug(\"Indexing document [{}]\", i);\n client().performRequest(\"POST\", \"/\" + index + \"/doc/\" + i, emptyMap(),\n new StringEntity(docSupplier.apply(i).string(), ContentType.APPLICATION_JSON));\n if (rarely()) {\n- logger.info(\"Refreshing [{}]\", index);\n- client().performRequest(\"POST\", \"/\" + index + \"/_refresh\");\n+ refresh();\n }\n if (flushAllowed && rarely()) {\n- logger.info(\"Flushing [{}]\", index);\n+ logger.debug(\"Flushing [{}]\", index);\n client().performRequest(\"POST\", \"/\" + index + \"/_flush\");\n }\n }\n- }\n-\n- private void createSnapshot() throws IOException {\n- XContentBuilder repoConfig = JsonXContent.contentBuilder().startObject(); {\n- repoConfig.field(\"type\", \"fs\");\n- repoConfig.startObject(\"settings\"); {\n- repoConfig.field(\"compress\", randomBoolean());\n- repoConfig.field(\"location\", System.getProperty(\"tests.path.repo\"));\n- }\n- repoConfig.endObject();\n+ if (saveInfo) {\n+ saveInfoDocument(\"count\", Integer.toString(count));\n }\n- repoConfig.endObject();\n- client().performRequest(\"PUT\", REPO, emptyMap(), new StringEntity(repoConfig.string(), ContentType.APPLICATION_JSON));\n-\n- client().performRequest(\"PUT\", REPO + \"/snap\", singletonMap(\"wait_for_completion\", \"true\"));\n }\n \n- private void assertTranslogRecoveryStatistics(String index, boolean shouldHaveTranslog) throws ParseException, IOException {\n- boolean restoredFromTranslog = false;\n- boolean foundPrimary = false;\n- Map<String, String> params = new HashMap<>();\n- params.put(\"h\", \"index,shard,type,stage,translog_ops_recovered\");\n- params.put(\"s\", \"index,shard,type\");\n- String recoveryResponse = EntityUtils.toString(client().performRequest(\"GET\", \"/_cat/recovery/\" + index, params).getEntity());\n- for (String line : recoveryResponse.split(\"\\n\")) {\n- // Find the primaries\n- foundPrimary = true;\n- if (false == line.contains(\"done\") && line.contains(\"existing_store\")) {\n- continue;\n- }\n- /* Mark if we see a primary that looked like it restored from the translog.\n- * Not all primaries will look like this all the time because we modify\n- * random documents when we want there to be a translog and they might\n- * not be spread around all the shards. 
*/\n- Matcher m = Pattern.compile(\"(\\\\d+)$\").matcher(line);\n- assertTrue(line, m.find());\n- int translogOps = Integer.parseInt(m.group(1));\n- if (translogOps > 0) {\n- restoredFromTranslog = true;\n- }\n- }\n- assertTrue(\"expected to find a primary but didn't\\n\" + recoveryResponse, foundPrimary);\n- assertEquals(\"mismatch while checking for translog recovery\\n\" + recoveryResponse, shouldHaveTranslog, restoredFromTranslog);\n-\n- String currentLuceneVersion = Version.CURRENT.luceneVersion.toString();\n- String bwcLuceneVersion = oldClusterVersion.luceneVersion.toString();\n- if (shouldHaveTranslog && false == currentLuceneVersion.equals(bwcLuceneVersion)) {\n- int numCurrentVersion = 0;\n- int numBwcVersion = 0;\n- params.clear();\n- params.put(\"h\", \"prirep,shard,index,version\");\n- params.put(\"s\", \"prirep,shard,index\");\n- String segmentsResponse = EntityUtils.toString(\n- client().performRequest(\"GET\", \"/_cat/segments/\" + index, params).getEntity());\n- for (String line : segmentsResponse.split(\"\\n\")) {\n- if (false == line.startsWith(\"p\")) {\n- continue;\n- }\n- Matcher m = Pattern.compile(\"(\\\\d+\\\\.\\\\d+\\\\.\\\\d+)$\").matcher(line);\n- assertTrue(line, m.find());\n- String version = m.group(1);\n- if (currentLuceneVersion.equals(version)) {\n- numCurrentVersion++;\n- } else if (bwcLuceneVersion.equals(version)) {\n- numBwcVersion++;\n- } else {\n- fail(\"expected version to be one of [\" + currentLuceneVersion + \",\" + bwcLuceneVersion + \"] but was \" + line);\n- }\n- }\n- assertNotEquals(\"expected at least 1 current segment after translog recovery\", 0, numCurrentVersion);\n- assertNotEquals(\"expected at least 1 old segment\", 0, numBwcVersion);\n- }\n+ private int countOfIndexedRandomDocuments() throws IOException {\n+ return Integer.parseInt(loadInfoDocument(\"count\"));\n }\n \n- private void restoreSnapshot(String index, int count) throws ParseException, IOException {\n- if (false == runningAgainstOldCluster) {\n- /* Remove any \"restored\" indices from the old cluster run of this test.\n- * We intentionally don't remove them while running this against the\n- * old cluster so we can test starting the node with a restored index\n- * in the cluster. */\n- client().performRequest(\"DELETE\", \"/restored_*\");\n- }\n-\n- if (runningAgainstOldCluster) {\n- // TODO restoring the snapshot seems to fail! 
This seems like a bug.\n- XContentBuilder restoreCommand = JsonXContent.contentBuilder().startObject();\n- restoreCommand.field(\"include_global_state\", false);\n- restoreCommand.field(\"indices\", index);\n- restoreCommand.field(\"rename_pattern\", index);\n- restoreCommand.field(\"rename_replacement\", \"restored_\" + index);\n- restoreCommand.endObject();\n- client().performRequest(\"POST\", REPO + \"/snap/_restore\", singletonMap(\"wait_for_completion\", \"true\"),\n- new StringEntity(restoreCommand.string(), ContentType.APPLICATION_JSON));\n-\n- String countResponse = EntityUtils.toString(\n- client().performRequest(\"GET\", \"/restored_\" + index + \"/_search\", singletonMap(\"size\", \"0\")).getEntity());\n- assertThat(countResponse, containsString(\"\\\"total\\\":\" + count));\n- }\n+ private void saveInfoDocument(String type, String value) throws IOException {\n+ XContentBuilder infoDoc = JsonXContent.contentBuilder().startObject();\n+ infoDoc.field(\"value\", value);\n+ infoDoc.endObject();\n+ // Only create the first version so we know how many documents are created when the index is first created\n+ Map<String, String> params = singletonMap(\"op_type\", \"create\");\n+ client().performRequest(\"PUT\", \"/info/doc/\" + index + \"_\" + type, params,\n+ new StringEntity(infoDoc.string(), ContentType.APPLICATION_JSON));\n+ }\n \n+ private String loadInfoDocument(String type) throws IOException {\n+ String doc = EntityUtils.toString(\n+ client().performRequest(\"GET\", \"/info/doc/\" + index + \"_\" + type, singletonMap(\"filter_path\", \"_source\")).getEntity());\n+ Matcher m = Pattern.compile(\"\\\"value\\\":\\\"(.+)\\\"\").matcher(doc);\n+ assertTrue(doc, m.find());\n+ return m.group(1);\n }\n \n private Object randomLenientBoolean() {\n return randomFrom(new Object[] {\"off\", \"no\", \"0\", 0, \"false\", false, \"on\", \"yes\", \"1\", 1, \"true\", true});\n }\n+\n+ private void refresh() throws IOException {\n+ logger.debug(\"Refreshing [{}]\", index);\n+ client().performRequest(\"POST\", \"/\" + index + \"/_refresh\");\n+ }\n }", "filename": "qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java", "status": "modified" }, { "diff": "@@ -178,11 +178,21 @@ protected boolean preserveTemplatesUponCompletion() {\n \n /**\n * Returns whether to preserve the repositories on completion of this test.\n+ * Defaults to not preserving repos. See also\n+ * {@link #preserveSnapshotsUponCompletion()}.\n */\n protected boolean preserveReposUponCompletion() {\n return false;\n }\n \n+ /**\n+ * Returns whether to preserve the snapshots in repositories on completion of this\n+ * test. Defaults to not preserving snapshots. Only works for {@code fs} repositories.\n+ */\n+ protected boolean preserveSnapshotsUponCompletion() {\n+ return false;\n+ }\n+\n private void wipeCluster() throws IOException {\n if (preserveIndicesUponCompletion() == false) {\n // wipe indices\n@@ -214,7 +224,7 @@ private void wipeSnapshots() throws IOException {\n String repoName = repo.getKey();\n Map<?, ?> repoSpec = (Map<?, ?>) repo.getValue();\n String repoType = (String) repoSpec.get(\"type\");\n- if (repoType.equals(\"fs\")) {\n+ if (false == preserveSnapshotsUponCompletion() && repoType.equals(\"fs\")) {\n // All other repo types we really don't have a chance of being able to iterate properly, sadly.\n String url = \"_snapshot/\" + repoName + \"/_all\";\n Map<String, String> params = singletonMap(\"ignore_unavailable\", \"true\");", "filename": "test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java", "status": "modified" } ] }
{ "body": "Following this issue: #25088\r\n\r\n**Elasticsearch version**: 5.4.0 and 5.4.1\r\n**JVM version**: 1.8\r\n**OS version** : Windows 7 \r\n \r\nNo highlights returned using a prefix in a highlight_query when a document attribute has an empty value or no match found\r\n\r\n**Steps to reproduce**:\r\n\r\nCreate the following index\r\n```\r\nPUT foo\r\n{\r\n \"mappings\": {\r\n \"blogpost\": { \r\n \"properties\": { \r\n \"title\": { \"type\": \"text\", \"term_vector\": \"with_positions_offsets\"}, \r\n \"description\": { \"type\": \"text\", \"term_vector\": \"with_positions_offsets\" }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nAdd a new document, notice description is empty (no match for 'sq' string). \r\n\r\n```\r\nPUT foo/entry/1\r\n{\r\n \"title\": \"SQ05_H04_WorldTour test\",\r\n \"description\": \"\"\r\n}\r\n```\r\nRun the following query to highlight the string containing 'sq'. \r\n\r\n```\r\nGET foo/_search\r\n{\r\n \"query\":{\r\n \"match_all\":{ }\r\n },\r\n \"highlight\":{\r\n \"fields\":{\r\n \"title\":{\r\n \"number_of_fragments\":10,\r\n \"type\":\"fvh\",\r\n \"highlight_query\":{\r\n \"prefix\":{\r\n \"title\":\"sq\"\r\n }\r\n }\r\n },\r\n \"description\":{\r\n \"number_of_fragments\":10,\r\n \"type\":\"fvh\",\r\n \"highlight_query\":{\r\n \"prefix\":{\r\n \"description\":\"sq\"\r\n }\r\n }\r\n }\r\n },\r\n \"require_field_match\":false\r\n }\r\n}\r\n```\r\nThe result as expected highlights the string containing 'sq'\r\n\r\n```\r\n{\r\n \"took\": 0,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"foo\",\r\n \"_type\": \"entry\",\r\n \"_id\": \"1\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"title\": \"SQ05_H04_WorldTour test\",\r\n \"description\": \"\"\r\n },\r\n \"highlight\": {\r\n \"title\": [\r\n \"<em>SQ05_H04_WorldTour</em> test\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\nLets re-execute the same query, but this time we change the order of fields.. \r\n```\r\nGET foo/_search\r\n{\r\n \"query\":{\r\n \"match_all\":{ }\r\n },\r\n \"highlight\":{\r\n \"fields\":{\r\n \"description\":{\r\n \"number_of_fragments\":10,\r\n \"type\":\"fvh\",\r\n \"highlight_query\":{\r\n \"prefix\":{\r\n \"description\":\"sq\"\r\n }\r\n }\r\n },\r\n \"title\":{\r\n \"number_of_fragments\":10,\r\n \"type\":\"fvh\",\r\n \"highlight_query\":{\r\n \"prefix\":{\r\n \"title\":\"sq\"\r\n }\r\n }\r\n }\r\n },\r\n \"require_field_match\":false\r\n }\r\n}\r\n```\r\nSurprisingly, no highlighs returned.. \r\n```\r\n{\r\n \"took\": 0,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"foo\",\r\n \"_type\": \"entry\",\r\n \"_id\": \"1\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"title\": \"SQ05_H04_WorldTour test\",\r\n \"description\": \"\"\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n \r\nis this is a bug? same behavior in 2.x", "comments": [ { "body": "This is also true for multi terms query like wildcard, regexp", "created_at": "2017-06-12T22:21:43Z" } ], "number": 25171, "title": "no highlights with a highlight prefix query and FVH when no match found" }
{ "body": "This commit removes the global caching of the field query and replaces it with\r\na caching per field. Each field can use a different `highlight_query` and the rewriting of\r\nsome queries (prefix, automaton, ...) depends on the targeted field so the query used for highlighting\r\nmust be unique per field.\r\nThere might be a small performance penalty when highlighting multiple fields since the query needs to be rewritten\r\nonce per field with this change.\r\n\r\nFixes #25171", "number": 25197, "review_comments": [], "title": "FastVectorHighlighter should not cache the field query globally" }
{ "commits": [ { "message": "FastVectorHighlighter should not cache the field query globally\n\nThis commit removes the global caching of the field query and replaces it with\na caching per field. Each field can use a different `highlight_query` and the rewriting of\nsome queries (prefix, automaton, ...) depends on the targeted field so the query used for highlighting\nmust be unique per field.\nThere might be a small performance penalty when highlighting multiple fields since the query needs to be rewritten\nonce per highlighted field with this change.\n\nFixes #25171" } ], "files": [ { "diff": "@@ -87,29 +87,6 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n HighlighterEntry cache = (HighlighterEntry) hitContext.cache().get(CACHE_KEY);\n \n try {\n- FieldQuery fieldQuery;\n- if (field.fieldOptions().requireFieldMatch()) {\n- if (cache.fieldMatchFieldQuery == null) {\n- /*\n- * we use top level reader to rewrite the query against all readers,\n- * with use caching it across hits (and across readers...)\n- */\n- cache.fieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query,\n- hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch());\n- }\n- fieldQuery = cache.fieldMatchFieldQuery;\n- } else {\n- if (cache.noFieldMatchFieldQuery == null) {\n- /*\n- * we use top level reader to rewrite the query against all readers,\n- * with use caching it across hits (and across readers...)\n- */\n- cache.noFieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query,\n- hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch());\n- }\n- fieldQuery = cache.noFieldMatchFieldQuery;\n- }\n-\n MapperHighlightEntry entry = cache.mappers.get(mapper);\n if (entry == null) {\n FragListBuilder fragListBuilder;\n@@ -151,6 +128,21 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n }\n fragmentsBuilder.setDiscreteMultiValueHighlighting(termVectorMultiValue);\n entry = new MapperHighlightEntry();\n+ if (field.fieldOptions().requireFieldMatch()) {\n+ /**\n+ * we use top level reader to rewrite the query against all readers,\n+ * with use caching it across hits (and across readers...)\n+ */\n+ entry.fieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query,\n+ hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch());\n+ } else {\n+ /**\n+ * we use top level reader to rewrite the query against all readers,\n+ * with use caching it across hits (and across readers...)\n+ */\n+ entry.noFieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query,\n+ hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch());\n+ }\n entry.fragListBuilder = fragListBuilder;\n entry.fragmentsBuilder = fragmentsBuilder;\n if (cache.fvh == null) {\n@@ -162,6 +154,12 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n CustomFieldQuery.highlightFilters.set(field.fieldOptions().highlightFilter());\n cache.mappers.put(mapper, entry);\n }\n+ final FieldQuery fieldQuery;\n+ if (field.fieldOptions().requireFieldMatch()) {\n+ fieldQuery = entry.fieldMatchFieldQuery;\n+ } else {\n+ fieldQuery = entry.noFieldMatchFieldQuery;\n+ }\n cache.fvh.setPhraseLimit(field.fieldOptions().phraseLimit());\n \n String[] fragments;\n@@ -249,12 +247,12 @@ private static BoundaryScanner getBoundaryScanner(Field field) {\n private class MapperHighlightEntry {\n public FragListBuilder fragListBuilder;\n public FragmentsBuilder fragmentsBuilder;\n+ public FieldQuery 
noFieldMatchFieldQuery;\n+ public FieldQuery fieldMatchFieldQuery;\n }\n \n private class HighlighterEntry {\n public org.apache.lucene.search.vectorhighlight.FastVectorHighlighter fvh;\n- public FieldQuery noFieldMatchFieldQuery;\n- public FieldQuery fieldMatchFieldQuery;\n public Map<FieldMapper, MapperHighlightEntry> mappers = new HashMap<>();\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/FastVectorHighlighter.java", "status": "modified" }, { "diff": "@@ -0,0 +1,49 @@\n+setup:\n+ - do:\n+ indices.create:\n+ index: test\n+ body:\n+ mappings:\n+ doc:\n+ \"properties\":\n+ \"title\":\n+ \"type\": \"text\"\n+ \"term_vector\": \"with_positions_offsets\"\n+ \"description\":\n+ \"type\": \"text\"\n+ \"term_vector\": \"with_positions_offsets\"\n+ - do:\n+ index:\n+ index: test\n+ type: doc\n+ id: 1\n+ body:\n+ \"title\" : \"The quick brown fox is brown\"\n+ \"description\" : \"The quick pink panther is pink\"\n+ - do:\n+ indices.refresh: {}\n+\n+---\n+\"Highlight query\":\n+ - skip:\n+ version: \" - 5.5.99\"\n+ reason: bug fixed in 5.6\n+ - do:\n+ search:\n+ body:\n+ highlight:\n+ type: fvh\n+ fields:\n+ description:\n+ type: fvh\n+ highlight_query:\n+ prefix:\n+ description: br\n+ title:\n+ type: fvh\n+ highlight_query:\n+ prefix:\n+ title: br\n+\n+ - match: {hits.hits.0.highlight.title.0: \"The quick <em>brown</em> fox is <em>brown</em>\"}\n+ - is_false: hits.hits.0.highlight.description", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.highlight/20_fvh.yml", "status": "added" } ] }
{ "body": "After #21123 when Elasticsearch receive a HEAD request it returns the Content-Length of the that it would return for a GET request with an empty response body. Except in the document exists, index exists, and type exists requests which return 0. We should fix them to also return the Content-Length that would be in the response.\n", "comments": [ { "body": "I'm adding the v5.1.0 label too, I think we should target a fix there.\n", "created_at": "2016-10-26T05:16:19Z" }, { "body": "These are all addressed now. Closing.", "created_at": "2017-06-12T12:10:12Z" } ], "number": 21125, "title": "Some endpoints return Content-Length: 0 for HEAD requests" }
{ "body": "Today when an exception is thrown handling a HEAD request, the body is swallowed before the channel has a chance to see it. Yet, the channel is where we compute the content length that would be returned as a header in the response. This is a violation of the HTTP specification. This commit addresses the issue. To address this issue, we remove the special handling in bytes rest response for HEAD requests when an exception is thrown. Instead, we let the upstream channel handle the special case, as we already do today for the non-exceptional case.\r\n\r\nRelates #21125\r\n", "number": 25172, "review_comments": [], "title": "Fix handling of exceptions thrown on HEAD requests" }
{ "commits": [ { "message": "Fix handling of exceptions thrown on HEAD requests\n\nToday when an exception is thrown handling a HEAD request, the body is\nswallowed before the channel has a chance to see it. Yet, the channel is\nwhere we compute the content length that would be returned as a header\nin the response. This is a violation of the HTTP specification. This\ncommit addresses the issue. To address this issue, we remove the special\nhandling in bytes rest response for HEAD requests when an exception is\nthrown. Instead, we let the upstream channel handle the special case, as\nwe already do today for the non-exceptional case." }, { "message": "More comments" } ], "files": [ { "diff": "@@ -93,14 +93,9 @@ public BytesRestResponse(RestChannel channel, Exception e) throws IOException {\n \n public BytesRestResponse(RestChannel channel, RestStatus status, Exception e) throws IOException {\n this.status = status;\n- if (channel.request().method() == RestRequest.Method.HEAD) {\n- this.content = BytesArray.EMPTY;\n- this.contentType = TEXT_CONTENT_TYPE;\n- } else {\n- try (XContentBuilder builder = build(channel, status, e)) {\n- this.content = builder.bytes();\n- this.contentType = builder.contentType().mediaType();\n- }\n+ try (XContentBuilder builder = build(channel, status, e)) {\n+ this.content = builder.bytes();\n+ this.contentType = builder.contentType().mediaType();\n }\n if (e instanceof ElasticsearchException) {\n copyHeaders(((ElasticsearchException) e));", "filename": "core/src/main/java/org/elasticsearch/rest/BytesRestResponse.java", "status": "modified" }, { "diff": "@@ -162,6 +162,16 @@ public void testGetSourceAction() throws IOException {\n }\n }\n \n+ public void testException() throws IOException {\n+ /*\n+ * This will throw an index not found exception which will be sent on the channel; previously when handling HEAD requests that would\n+ * throw an exception, the content was swallowed and a content length header of zero was returned. Instead of swallowing the content\n+ * we now let it rise up to the upstream channel so that it can compute the content length that would be returned. This test case is\n+ * a test for this situation.\n+ */\n+ headTestCase(\"/index-not-found-exception\", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0));\n+ }\n+\n private void headTestCase(final String url, final Map<String, String> params, final Matcher<Integer> matcher) throws IOException {\n headTestCase(url, params, OK.getStatus(), matcher);\n }", "filename": "modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: master\r\n**JVM version** (`java -version`): 1.8.0_121\r\n**OS version** (`uname -a` if on a Unix-like system): macOS Sierra\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nIt looks like https://github.com/elastic/elasticsearch/pull/24723 broke fetching the aliases of a wildcard list of indices:\r\n\r\n**Steps to reproduce**:\r\n\r\nmaster:\r\n![image](https://user-images.githubusercontent.com/1329312/26857829-5a3b2188-4ae2-11e7-80c6-c2b37ee188f9.png)\r\n\r\nwith #24723 reverted\r\n![image](https://user-images.githubusercontent.com/1329312/26857756-cb42df66-4ae1-11e7-9465-bb8629c59228.png)\r\n", "comments": [ { "body": "I concur, although this has nothing to do with wildcards, regular index patterns are impacted too. The reason that this is due to #24723 is because `/{index}/_alias` use to be specially handled by the get indices API but now it's handed by the get aliases API on the endpoint `/{index}/_alias/{name}` (so with these requests `name` would default to empty which acts like `_all`).\r\n\r\nIt's not obvious what the right outcome here is. We had the two endpoints before:\r\n - `/{index}/_alias`\r\n - `/{index}/_alias/{name}`\r\n\r\nIt really should have been the case that the former was always handled by the latter. Their behaviors were inconsistent which is why changing the former to be handled by the latter has this outcome. That is, on 5.4.1 today you can see the following:\r\n\r\n```\r\n21:28:56 [jason:~] $ curl -XGET localhost:9200/logstash-0/_alias?pretty=true\r\n{\r\n \"logstash-0\" : {\r\n \"aliases\" : { }\r\n }\r\n}\r\n21:33:39 [jason:~] $ curl -XGET localhost:9200/logstash-0/_alias/_all?pretty=true\r\n{ }\r\n```\r\n\r\nI tend to think the latter behavior is the one that we should maintain as that's how *all* other alias requests were handled except for ones that did not specify any alias name. What is the need for the empty list of aliases if none exist for an index when this can also be inferred from no key matching the index being in the response? So maybe it's enough to document the impact of #24723?", "created_at": "2017-06-07T01:39:15Z" }, { "body": "What do you think @dakrone?", "created_at": "2017-06-07T01:39:43Z" }, { "body": "> I tend to think the latter behavior is the one that we should maintain as that's how all other alias requests were handled except for ones that did not specify any alias name.\r\n> What do you think @dakrone\r\n\r\nI believe I agree with this statement, as @jasontedor said, this isn't about wildcards, as the following still works:\r\n\r\n```\r\nPUT /real\r\n\r\nPUT /real2\r\n\r\nPOST /_aliases?pretty\r\n{\r\n \"actions\": [\r\n {\"add\": {\"index\": \"real\", \"alias\": \"fake\"}},\r\n {\"add\": {\"index\": \"real\", \"alias\": \"fake2\"}},\r\n {\"add\": {\"index\": \"real2\", \"alias\": \"realfake\"}}\r\n ]\r\n}\r\n\r\nGET /real*/_alias\r\n```\r\n\r\nWhich returns\r\n\r\n```\r\n{\r\n \"real\" : {\r\n \"aliases\" : {\r\n \"fake\" : { },\r\n \"fake2\" : { }\r\n }\r\n },\r\n \"real2\" : {\r\n \"aliases\" : {\r\n \"realfake\" : { }\r\n }\r\n }\r\n}\r\n```\r\n\r\nTherefore, I'm in favor of documenting the breaking change also I think. What do you think @spalger?", "created_at": "2017-06-07T02:15:26Z" }, { "body": "> I tend to think the latter behavior is the one that we should maintain as that's how all other alias requests were handled except for ones that did not specify any alias name. 
What is the need for the empty list of aliases if none exist for an index when this can also be inferred from no key matching the index being in the response? So maybe it's enough to document the impact of #24723?\r\n\r\nI disagree. The `GET {index}/_alias` form is widely used to retrieve a list of concrete indices, and it would silently break lots of applications.\r\n\r\nAlso, it would be inconsistent:\r\n\r\n```\r\nPUT t \r\nGET t\r\nGET t/_alias\r\nGET t/_mapping\r\n```\r\n\r\nwhich returns:\r\n\r\n```\r\n# PUT t\r\n{\r\n \"acknowledged\": true,\r\n \"shards_acknowledged\": true\r\n}\r\n\r\n# GET t\r\n{\r\n \"t\": {\r\n \"aliases\": {},\r\n \"mappings\": {},\r\n \"settings\": {\r\n \"index\": {\r\n \"creation_date\": \"1496818805172\",\r\n \"number_of_shards\": \"5\",\r\n \"number_of_replicas\": \"1\",\r\n \"uuid\": \"tx4bhuh6Sheu2dSN_kSo4A\",\r\n \"version\": {\r\n \"created\": \"6000003\"\r\n },\r\n \"provided_name\": \"t\"\r\n }\r\n }\r\n }\r\n}\r\n\r\n# GET t/_alias\r\n{}\r\n\r\n# GET t/_mapping\r\n{}\r\n```\r\n\r\nFor clients that want to check eg aliases, they now need to first check if there is an `_aliases` key before inspecting the contents.\r\n\r\nI'd much prefer a consistent rule:\r\n\r\n* return zero or more concrete indices at the top level\r\n* under each index, there is an `_aliases` key\r\n* under which there is a list of zero or more aliases\r\n\r\nsame applies to mappings and settings (there are always settings)\r\n\r\nIn other words, a final missing parameter behaves like `*`\r\n", "created_at": "2017-06-07T07:03:33Z" }, { "body": "@clintongormley it sounds like you want to see:\r\n\r\n```json\r\nGET /test-*/_alias\r\n{\r\n \"test-1\": {\r\n \"_aliases\": {} // no aliases\r\n },\r\n \"test-2\": {\r\n \"_aliases\": {\r\n \"myalias\": {}\r\n }\r\n }\r\n}\r\n```\r\n\r\n> under each index, there is an `_aliases` key\r\n\r\nAre you sure you want it to be `_aliases` instead of `aliases` like the get indices API? That doesn't seem consistent with the older API to me", "created_at": "2017-06-07T14:31:00Z" }, { "body": "> Are you sure you want it to be _aliases instead of aliases like the get indices API? That doesn't seem consistent with the older API to me\r\n\r\nSorry, meant `aliases`", "created_at": "2017-06-07T14:43:45Z" }, { "body": "Sounds good, I will work on fixing this.", "created_at": "2017-06-07T14:53:18Z" }, { "body": "I pushed two PRs for this #25114 and #25118, so this should be resolved now. 
Thanks for bringing this up @spalger!", "created_at": "2017-06-08T16:58:03Z" }, { "body": "thanks @dakrone ", "created_at": "2017-06-08T17:08:14Z" }, { "body": "I'm seeing a potential issue here still:\r\n\r\n```\r\nPUT foo\r\nPUT .kibana\r\n```\r\n\r\n```\r\nGET _alias\r\n\r\n{\r\n \".kibana\": {\r\n \"aliases\": {}\r\n },\r\n \"foo\": {\r\n \"aliases\": {}\r\n }\r\n}\r\n```\r\n\r\n```\r\nGET */_alias\r\n\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"index_not_found_exception\",\r\n \"reason\": \"no such index\",\r\n \"index_uuid\": \"_na_\",\r\n \"index\": \"_all\"\r\n }\r\n ],\r\n \"type\": \"index_not_found_exception\",\r\n \"reason\": \"no such index\",\r\n \"index_uuid\": \"_na_\",\r\n \"index\": \"_all\"\r\n },\r\n \"status\": 404\r\n}\r\n```\r\n\r\n```\r\nPUT .kibana/_alias/test\r\n```\r\n\r\n```\r\nGET */_alias\r\n\r\n{\r\n \"foo\": {\r\n \"aliases\": {}\r\n },\r\n \".kibana\": {\r\n \"aliases\": {\r\n \"test\": {}\r\n }\r\n }\r\n}\r\n```\r\n\r\nIt appears that wildcard searching for existing indices/aliases only works _if at least one alias already exists_. I don't think that is desired behavior and will cause issues with [an open PR fetching indices](https://github.com/elastic/kibana/pull/12200)", "created_at": "2017-06-14T21:01:53Z" }, { "body": "thanks for bringing this up @chrisronline, I'll look into fixing this.", "created_at": "2017-06-14T21:27:34Z" }, { "body": "@chrisronline I am unable to reproduce this on master, are you using x-pack on this cluster? (perhaps that's something that might be affecting it)", "created_at": "2017-06-16T18:55:34Z" }, { "body": "Yes @dakrone. Good point. I don't see the same behavior without x-pack, but I still see it with x-pack.", "created_at": "2017-06-16T19:06:34Z" }, { "body": "Okay, let me try it with x-pack and see what the issue is there.", "created_at": "2017-06-16T19:08:21Z" }, { "body": "hi @chrisronline ,\r\nthe behaviour that you see is listed in the [security plugin limitations](https://www.elastic.co/guide/en/x-pack/current/security-limitations.html#_changes_in_index_wildcard_behavior). It's not a new thing and it doesn't have to do with these recent changes.", "created_at": "2017-06-19T15:39:09Z" } ], "number": 25090, "title": "_alias API no longer accepts index wildcards" }
{ "body": "Previously this would output:\r\n\r\n```\r\nGET /test-1/_mappings\r\n\r\n{ }\r\n```\r\n\r\nAnd after this change:\r\n\r\n```\r\nGET /test-1/_mappings\r\n\r\n{\r\n \"test-1\": {\r\n \"mappings\": {}\r\n }\r\n}\r\n```\r\n\r\nTo bring parity back to the REST output after #24723.\r\n\r\nRelates to #25090\r\n", "number": 25118, "review_comments": [], "title": "Include empty mappings in GET /{index}/_mappings requests" }
{ "commits": [ { "message": "Include empty mappings in GET /{index}/_mappings requests\n\nPreviously this would output:\n\n```\nGET /test-1/_mappings\n\n{ }\n```\n\nAnd after this change:\n\n```\nGET /test-1/_mappings\n\n{\n \"test-1\": {\n \"mappings\": {}\n }\n}\n```\n\nTo bring parity back to the REST output after #24723.\n\nRelates to #25090" } ], "files": [ { "diff": "@@ -89,9 +89,6 @@ public RestResponse buildResponse(GetMappingsResponse response, XContentBuilder\n \n builder.startObject();\n for (ObjectObjectCursor<String, ImmutableOpenMap<String, MappingMetaData>> indexEntry : mappingsByIndex) {\n- if (indexEntry.value.isEmpty()) {\n- continue;\n- }\n builder.startObject(indexEntry.key);\n builder.startObject(Fields.MAPPINGS);\n for (ObjectObjectCursor<String, MappingMetaData> typeEntry : indexEntry.value) {", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetMappingAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,19 @@ setup:\n type_2: {}\n type_3: {}\n \n+---\n+\"Get /{index}/_mapping with empty mappings\":\n+\n+ - do:\n+ indices.create:\n+ index: t\n+\n+ - do:\n+ indices.get_mapping:\n+ index: t\n+\n+ - match: { t.mappings: {}}\n+\n ---\n \"Get /_mapping\":\n ", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_mapping/10_basic.yml", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.4.0 and 5.4.1\r\n**Regression**: Yes, works fine in ES 2.x\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8\r\n\r\n**OS version** : Windows 7 \r\n\r\n**Description of the problem**:\r\n\r\nNo highlights returned when searching using dis_max with a multi_match queries. \r\nElasticsearch fails with 'null_pointer_exception' \r\n\r\n**Steps to reproduce**:\r\n\r\nCreate the following index\r\n```\r\nPUT blog \r\n{\r\n \"mappings\": {\r\n \"blogpost\": { \r\n \"properties\": { \r\n \"title\": { \"type\": \"text\", \"term_vector\": \"with_positions_offsets\"}, \r\n \"body\": { \"type\": \"text\", \"term_vector\": \"with_positions_offsets\" }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nAdd a new document\r\n\r\n```\r\nPUT blog/blogpost/1\r\n{\r\n \"title\": \"welcome test\",\r\n \"body\": \"foo\"\r\n}\r\n```\r\nRun the following query to search 'test' string. \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": {\r\n \"dis_max\": {\r\n \"queries\": [\r\n {\r\n \"multi_match\": {\r\n \"fields\": [\r\n \"title\"\r\n ], \r\n \"slop\": 0, \r\n \"type\": \"phrase_prefix\", \r\n \"max_expansions\": 10, \r\n \"query\": \"test\"\r\n }\r\n }, \r\n {\r\n \"multi_match\": {\r\n \"fields\": [\r\n \"body\"\r\n ], \r\n \"slop\": 0, \r\n \"type\": \"phrase_prefix\", \r\n \"max_expansions\": 10, \r\n \"query\": \"test\"\r\n }\r\n }]\r\n }\r\n }\r\n }\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"matched_fields\": [\r\n \"title\",\r\n \"body\"\r\n ], \r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n```\r\nResult: \r\n\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 4,\r\n \"failed\": 1,\r\n \"failures\": [\r\n {\r\n \"shard\": 3,\r\n \"index\": \"blog\",\r\n \"node\": \"eMbzhELBQSO9kiRwVss65A\",\r\n \"reason\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n }\r\n ]\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 0.25811607,\r\n \"hits\": []\r\n }\r\n}\r\n```\r\nNow, let's update the same document and replace 'foo' with 'test' in the body attribute: \r\n\r\n```\r\nPUT blog/blogpost/1\r\n{\r\n \"title\": \"welcome test\",\r\n \"body\": \"test\"\r\n}\r\n```\r\n\r\nRe-execute the search query above, it will successfully return the result\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 0.2876821,\r\n \"hits\": [\r\n {\r\n \"_index\": \"blog\",\r\n \"_type\": \"blogpost\",\r\n \"_id\": \"1\",\r\n \"_score\": 0.2876821,\r\n \"_source\": {\r\n \"title\": \"welcome test\",\r\n \"body\": \" test\"\r\n },\r\n \"highlight\": {\r\n \"title\": [\r\n \"w<em>elco</em>me <em>test</em>\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\nAs you can see, the query will always fail if the document does not find a match in both attributes. \r\n\r\n--Cheers! \r\n", "comments": [ { "body": "Highlighting is an inexact science and a highlighter has to balance the competing demands of speed, accuracy and ability to summarize lengthy docs. The reason we have so many highlighter implementations reflects the legacy of various people taking up the challenge of creating the definitive Lucene highlighter.\r\nEach implementation has its pros and cons (preserves phrases, works with n-grams, splits on sentences, is fast etc). 
A failure to highlight is not necessarily a \"bug\" but sometimes a deliberate design trade-off of an implementation (speed over accuracy).\r\n\r\nWhat you are seeing is a deficiency with the `fvh` highlighter and I note that if you set the type to `plain` your example works OK.\r\n\r\nThe latest hot contender for _one-highlighter-to-rule-them-all_ is the `unified` highlighter but I see that this also fails on this example. \r\n\r\nI don't know for sure that the failure in this case is an oversight or a deliberate limitation of fvh in trying to be a _Fast_ Vector Highlighter. The `plain` highlighter offers a workaround in this case and so this issue may ultimately be closed with \"wontfix\".\r\n", "created_at": "2017-06-07T08:41:02Z" }, { "body": "Given that we're wanting to replace the `plain` highlighter with the `unified` highlighter, would be good to fix this.\r\n\r\nThe `unified` highlighter dies with \r\n\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"Less than 2 subSpans.size():1\"\r\n\r\nChanging the `phrase_prefix` query to a `phrase` query works. Makes me wonder if there is something wrong with that query in particular.\r\n\r\n@jimczi any thoughts?\r\n", "created_at": "2017-06-07T11:41:17Z" }, { "body": "The `unified` highlighter has a bug when the `phrase_prefix` query contain a single term. I'll work on a fix but I don't think this is related with the bug in the `fvh`. ", "created_at": "2017-06-07T12:11:59Z" }, { "body": "Thank you guys for your input. Please also note that the following query fails too while both queries works in ES 2.x. \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"should\": [\r\n { \"match_phrase_prefix\" : {\r\n \"title\" : \"test\"\r\n }\r\n \r\n }, { \"match_phrase_prefix\" : {\r\n \"body\" : \"test\"\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"matched_fields\": [\r\n \"title\",\r\n \"body\"\r\n ], \r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n\r\n```\r\n\r\n@markharwood Thanks for your comment, this is a regression compared to ES 2.x. \r\n\r\n\r\n**Update**\r\nAnother attempt with the following: \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"should\": [\r\n { \r\n \"match_phrase_prefix\" : {\r\n \"title\" : \"test\"\r\n }}, { \r\n \"match_phrase_prefix\" : {\r\n \"body\" : \"test\"\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"type\": \"fvh\"\r\n },\r\n \"body\": {\r\n \"number_of_fragments\": 0, \r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n```", "created_at": "2017-06-07T14:34:09Z" }, { "body": "Another simpler query \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"highlight_query\": {\r\n \"match_phrase_prefix\" : {\r\n \"title\" : \"tests\"\r\n }\r\n },\r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n```", "created_at": "2017-06-07T21:06:19Z" } ], "number": 25088, "title": "No highlights returned using dis_max with multi_match queries" }
{ "body": "The FVH fails with an NPE when a match phrase prefix is rewritten in an empty phrase query.\r\nThis change makes sure that the multi match query rewrites to a MatchNoDocsQuery (instead of an empty phrase query) when there is\r\na single term and that term does not expand to any term in the index.\r\n\r\nFixes #25088", "number": 25116, "review_comments": [], "title": "Fix Fast Vector Highlighter NPE on match phrase prefix" }
{ "commits": [ { "message": "Fix Fast Vector Highlighter NPE on match phrase prefix\n\nThe FVH fails with an NPE when a match phrase prefix is rewritten in an empty phrase query.\nThis change makes sure that the multi match query rewrites to a MatchNoDocsQuery (instead of an empty phrase query) when there is\na single term and that term does not expand to any term in the index.\n\nFixes #25088" }, { "message": "fix simple test" } ], "files": [ { "diff": "@@ -164,6 +164,11 @@ public Query rewrite(IndexReader reader) throws IOException {\n }\n }\n if (terms.isEmpty()) {\n+ if (sizeMinus1 == 0) {\n+ // no prefix and the phrase query is empty\n+ return Queries.newMatchNoDocsQuery(\"No terms supplied for \" + MultiPhrasePrefixQuery.class.getName());\n+ }\n+\n // if the terms does not exist we could return a MatchNoDocsQuery but this would break the unified highlighter\n // which rewrites query with an empty reader.\n return new BooleanQuery.Builder()", "filename": "core/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java", "status": "modified" }, { "diff": "@@ -1462,7 +1462,6 @@ public void testPhrasePrefix() throws IOException {\n \n assertHighlight(searchResponse, 0, \"field0\", 0, 1, equalTo(\"The quick <x>brown</x> fox jumps over the lazy dog\"));\n \n-\n source = searchSource()\n .query(matchPhrasePrefixQuery(\"field0\", \"quick bro\"))\n .highlighter(highlight().field(\"field0\").order(\"score\").preTags(\"<x>\").postTags(\"</x>\").highlighterType(type));\n@@ -1472,6 +1471,21 @@ public void testPhrasePrefix() throws IOException {\n assertHighlight(searchResponse, 0, \"field0\", 0, 1, equalTo(\"The <x>quick</x> <x>brown</x> fox jumps over the lazy dog\"));\n \n logger.info(\"--> highlighting and searching on field1\");\n+ source = searchSource()\n+ .query(boolQuery()\n+ .should(matchPhrasePrefixQuery(\"field1\", \"test\"))\n+ .should(matchPhrasePrefixQuery(\"field1\", \"bro\"))\n+ )\n+ .highlighter(highlight().field(\"field1\").order(\"score\").preTags(\"<x>\").postTags(\"</x>\").highlighterType(type));\n+\n+ searchResponse = client().search(searchRequest(\"test\").source(source)).actionGet();\n+ assertThat(searchResponse.getHits().totalHits, equalTo(2L));\n+ for (int i = 0; i < 2; i++) {\n+ assertHighlight(searchResponse, i, \"field1\", 0, 1, anyOf(\n+ equalTo(\"The quick <x>browse</x> button is a fancy thing, right <x>bro</x>?\"),\n+ equalTo(\"The quick <x>brown</x> fox jumps over the lazy dog\")));\n+ }\n+\n source = searchSource()\n .query(matchPhrasePrefixQuery(\"field1\", \"quick bro\"))\n .highlighter(highlight().field(\"field1\").order(\"score\").preTags(\"<x>\").postTags(\"</x>\").highlighterType(type));", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java", "status": "modified" }, { "diff": "@@ -280,9 +280,7 @@ public void testExplainWithRewriteValidateQueryAllShards() throws Exception {\n assertExplanations(QueryBuilders.matchPhrasePrefixQuery(\"field\", \"ju\"),\n Arrays.asList(\n equalTo(\"field:jumps\"),\n- equalTo(\"+MatchNoDocsQuery(\\\"empty MultiPhraseQuery\\\") +MatchNoDocsQuery(\\\"No \" +\n- \"terms supplied for org.elasticsearch.common.lucene.search.\" +\n- \"MultiPhrasePrefixQuery\\\")\")\n+ equalTo(\"field:\\\"ju*\\\"\")\n ), true, true);\n }\n ", "filename": "core/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: master\r\n**JVM version** (`java -version`): 1.8.0_121\r\n**OS version** (`uname -a` if on a Unix-like system): macOS Sierra\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nIt looks like https://github.com/elastic/elasticsearch/pull/24723 broke fetching the aliases of a wildcard list of indices:\r\n\r\n**Steps to reproduce**:\r\n\r\nmaster:\r\n![image](https://user-images.githubusercontent.com/1329312/26857829-5a3b2188-4ae2-11e7-80c6-c2b37ee188f9.png)\r\n\r\nwith #24723 reverted\r\n![image](https://user-images.githubusercontent.com/1329312/26857756-cb42df66-4ae1-11e7-9465-bb8629c59228.png)\r\n", "comments": [ { "body": "I concur, although this has nothing to do with wildcards, regular index patterns are impacted too. The reason that this is due to #24723 is because `/{index}/_alias` use to be specially handled by the get indices API but now it's handed by the get aliases API on the endpoint `/{index}/_alias/{name}` (so with these requests `name` would default to empty which acts like `_all`).\r\n\r\nIt's not obvious what the right outcome here is. We had the two endpoints before:\r\n - `/{index}/_alias`\r\n - `/{index}/_alias/{name}`\r\n\r\nIt really should have been the case that the former was always handled by the latter. Their behaviors were inconsistent which is why changing the former to be handled by the latter has this outcome. That is, on 5.4.1 today you can see the following:\r\n\r\n```\r\n21:28:56 [jason:~] $ curl -XGET localhost:9200/logstash-0/_alias?pretty=true\r\n{\r\n \"logstash-0\" : {\r\n \"aliases\" : { }\r\n }\r\n}\r\n21:33:39 [jason:~] $ curl -XGET localhost:9200/logstash-0/_alias/_all?pretty=true\r\n{ }\r\n```\r\n\r\nI tend to think the latter behavior is the one that we should maintain as that's how *all* other alias requests were handled except for ones that did not specify any alias name. What is the need for the empty list of aliases if none exist for an index when this can also be inferred from no key matching the index being in the response? So maybe it's enough to document the impact of #24723?", "created_at": "2017-06-07T01:39:15Z" }, { "body": "What do you think @dakrone?", "created_at": "2017-06-07T01:39:43Z" }, { "body": "> I tend to think the latter behavior is the one that we should maintain as that's how all other alias requests were handled except for ones that did not specify any alias name.\r\n> What do you think @dakrone\r\n\r\nI believe I agree with this statement, as @jasontedor said, this isn't about wildcards, as the following still works:\r\n\r\n```\r\nPUT /real\r\n\r\nPUT /real2\r\n\r\nPOST /_aliases?pretty\r\n{\r\n \"actions\": [\r\n {\"add\": {\"index\": \"real\", \"alias\": \"fake\"}},\r\n {\"add\": {\"index\": \"real\", \"alias\": \"fake2\"}},\r\n {\"add\": {\"index\": \"real2\", \"alias\": \"realfake\"}}\r\n ]\r\n}\r\n\r\nGET /real*/_alias\r\n```\r\n\r\nWhich returns\r\n\r\n```\r\n{\r\n \"real\" : {\r\n \"aliases\" : {\r\n \"fake\" : { },\r\n \"fake2\" : { }\r\n }\r\n },\r\n \"real2\" : {\r\n \"aliases\" : {\r\n \"realfake\" : { }\r\n }\r\n }\r\n}\r\n```\r\n\r\nTherefore, I'm in favor of documenting the breaking change also I think. What do you think @spalger?", "created_at": "2017-06-07T02:15:26Z" }, { "body": "> I tend to think the latter behavior is the one that we should maintain as that's how all other alias requests were handled except for ones that did not specify any alias name. 
What is the need for the empty list of aliases if none exist for an index when this can also be inferred from no key matching the index being in the response? So maybe it's enough to document the impact of #24723?\r\n\r\nI disagree. The `GET {index}/_alias` form is widely used to retrieve a list of concrete indices, and it would silently break lots of applications.\r\n\r\nAlso, it would be inconsistent:\r\n\r\n```\r\nPUT t \r\nGET t\r\nGET t/_alias\r\nGET t/_mapping\r\n```\r\n\r\nwhich returns:\r\n\r\n```\r\n# PUT t\r\n{\r\n \"acknowledged\": true,\r\n \"shards_acknowledged\": true\r\n}\r\n\r\n# GET t\r\n{\r\n \"t\": {\r\n \"aliases\": {},\r\n \"mappings\": {},\r\n \"settings\": {\r\n \"index\": {\r\n \"creation_date\": \"1496818805172\",\r\n \"number_of_shards\": \"5\",\r\n \"number_of_replicas\": \"1\",\r\n \"uuid\": \"tx4bhuh6Sheu2dSN_kSo4A\",\r\n \"version\": {\r\n \"created\": \"6000003\"\r\n },\r\n \"provided_name\": \"t\"\r\n }\r\n }\r\n }\r\n}\r\n\r\n# GET t/_alias\r\n{}\r\n\r\n# GET t/_mapping\r\n{}\r\n```\r\n\r\nFor clients that want to check eg aliases, they now need to first check if there is an `_aliases` key before inspecting the contents.\r\n\r\nI'd much prefer a consistent rule:\r\n\r\n* return zero or more concrete indices at the top level\r\n* under each index, there is an `_aliases` key\r\n* under which there is a list of zero or more aliases\r\n\r\nsame applies to mappings and settings (there are always settings)\r\n\r\nIn other words, a final missing parameter behaves like `*`\r\n", "created_at": "2017-06-07T07:03:33Z" }, { "body": "@clintongormley it sounds like you want to see:\r\n\r\n```json\r\nGET /test-*/_alias\r\n{\r\n \"test-1\": {\r\n \"_aliases\": {} // no aliases\r\n },\r\n \"test-2\": {\r\n \"_aliases\": {\r\n \"myalias\": {}\r\n }\r\n }\r\n}\r\n```\r\n\r\n> under each index, there is an `_aliases` key\r\n\r\nAre you sure you want it to be `_aliases` instead of `aliases` like the get indices API? That doesn't seem consistent with the older API to me", "created_at": "2017-06-07T14:31:00Z" }, { "body": "> Are you sure you want it to be _aliases instead of aliases like the get indices API? That doesn't seem consistent with the older API to me\r\n\r\nSorry, meant `aliases`", "created_at": "2017-06-07T14:43:45Z" }, { "body": "Sounds good, I will work on fixing this.", "created_at": "2017-06-07T14:53:18Z" }, { "body": "I pushed two PRs for this #25114 and #25118, so this should be resolved now. 
Thanks for bringing this up @spalger!", "created_at": "2017-06-08T16:58:03Z" }, { "body": "thanks @dakrone ", "created_at": "2017-06-08T17:08:14Z" }, { "body": "I'm seeing a potential issue here still:\r\n\r\n```\r\nPUT foo\r\nPUT .kibana\r\n```\r\n\r\n```\r\nGET _alias\r\n\r\n{\r\n \".kibana\": {\r\n \"aliases\": {}\r\n },\r\n \"foo\": {\r\n \"aliases\": {}\r\n }\r\n}\r\n```\r\n\r\n```\r\nGET */_alias\r\n\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"index_not_found_exception\",\r\n \"reason\": \"no such index\",\r\n \"index_uuid\": \"_na_\",\r\n \"index\": \"_all\"\r\n }\r\n ],\r\n \"type\": \"index_not_found_exception\",\r\n \"reason\": \"no such index\",\r\n \"index_uuid\": \"_na_\",\r\n \"index\": \"_all\"\r\n },\r\n \"status\": 404\r\n}\r\n```\r\n\r\n```\r\nPUT .kibana/_alias/test\r\n```\r\n\r\n```\r\nGET */_alias\r\n\r\n{\r\n \"foo\": {\r\n \"aliases\": {}\r\n },\r\n \".kibana\": {\r\n \"aliases\": {\r\n \"test\": {}\r\n }\r\n }\r\n}\r\n```\r\n\r\nIt appears that wildcard searching for existing indices/aliases only works _if at least one alias already exists_. I don't think that is desired behavior and will cause issues with [an open PR fetching indices](https://github.com/elastic/kibana/pull/12200)", "created_at": "2017-06-14T21:01:53Z" }, { "body": "thanks for bringing this up @chrisronline, I'll look into fixing this.", "created_at": "2017-06-14T21:27:34Z" }, { "body": "@chrisronline I am unable to reproduce this on master, are you using x-pack on this cluster? (perhaps that's something that might be affecting it)", "created_at": "2017-06-16T18:55:34Z" }, { "body": "Yes @dakrone. Good point. I don't see the same behavior without x-pack, but I still see it with x-pack.", "created_at": "2017-06-16T19:06:34Z" }, { "body": "Okay, let me try it with x-pack and see what the issue is there.", "created_at": "2017-06-16T19:08:21Z" }, { "body": "hi @chrisronline ,\r\nthe behaviour that you see is listed in the [security plugin limitations](https://www.elastic.co/guide/en/x-pack/current/security-limitations.html#_changes_in_index_wildcard_behavior). It's not a new thing and it doesn't have to do with these recent changes.", "created_at": "2017-06-19T15:39:09Z" } ], "number": 25090, "title": "_alias API no longer accepts index wildcards" }
{ "body": "Previously in #24723 we changed the `_alias` API to not go through the\r\n`RestGetIndicesAction` endpoint, instead creating a `RestGetAliasesAction` that\r\ndid the same thing.\r\n\r\nThis changes the formatting so that it matches the old formatting of the\r\nendpoint, before:\r\n\r\n```\r\nGET /test-1/_alias\r\n\r\n{ }\r\n```\r\n\r\nAnd after this change:\r\n\r\n```\r\nGET /test-1/_alias\r\n\r\n{\r\n \"test-1\": {\r\n \"aliases\": {}\r\n }\r\n}\r\n```\r\n\r\nThis is related to #25090", "number": 25114, "review_comments": [ { "body": "Can you clarify this comment? Something like `the list corresponding to a concrete index will be empty if no aliases are present for that index`?", "created_at": "2017-06-08T01:20:22Z" } ], "title": "Return index name and empty map for /{index}/_alias with no aliases" }
{ "commits": [ { "message": "Return index name and empty map for /{index}/_alias with no aliases\n\nPreviously in #24723 we changed the `_alias` API to not go through the\n`RestGetIndicesAction` endpoint, instead creating a `RestGetAliasesAction` that\ndid the same thing.\n\nThis changes the formatting so that it matches the old formatting of the\nendpoint, before:\n\n```\nGET /test-1/_alias\n\n{ }\n```\n\nAnd after this change:\n\n```\nGET /test-1/_alias\n\n{\n \"test-1\": {\n \"aliases\": {}\n }\n}\n```\n\nThis is related to #25090" } ], "files": [ { "diff": "@@ -243,7 +243,8 @@ public SortedMap<String, AliasOrIndex> getAliasAndIndexLookup() {\n *\n * @param aliases The names of the index aliases to find\n * @param concreteIndices The concrete indexes the index aliases must point to order to be returned.\n- * @return the found index aliases grouped by index\n+ * @return a map of index to a list of alias metadata, the list corresponding to a concrete index will be empty if no aliases are\n+ * present for that index\n */\n public ImmutableOpenMap<String, List<AliasMetaData>> findAliases(final String[] aliases, String[] concreteIndices) {\n assert aliases != null;\n@@ -273,8 +274,8 @@ public int compare(AliasMetaData o1, AliasMetaData o2) {\n return o1.alias().compareTo(o2.alias());\n }\n });\n- mapBuilder.put(index, Collections.unmodifiableList(filteredValues));\n }\n+ mapBuilder.put(index, Collections.unmodifiableList(filteredValues));\n }\n return mapBuilder.build();\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java", "status": "modified" }, { "diff": "@@ -76,6 +76,7 @@ public String getName() {\n \n @Override\n public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {\n+ final boolean namesProvided = request.hasParam(\"name\");\n final String[] aliases = request.paramAsStringArrayOrEmptyIfAll(\"name\");\n final GetAliasesRequest getAliasesRequest = new GetAliasesRequest(aliases);\n final String[] indices = Strings.splitStringByCommaToArray(request.param(\"index\"));\n@@ -89,9 +90,13 @@ public RestResponse buildResponse(GetAliasesResponse response, XContentBuilder b\n final ImmutableOpenMap<String, List<AliasMetaData>> aliasMap = response.getAliases();\n \n final Set<String> aliasNames = new HashSet<>();\n- for (final ObjectCursor<List<AliasMetaData>> cursor : aliasMap.values()) {\n+ final Set<String> indicesToDisplay = new HashSet<>();\n+ for (final ObjectObjectCursor<String, List<AliasMetaData>> cursor : aliasMap) {\n for (final AliasMetaData aliasMetaData : cursor.value) {\n aliasNames.add(aliasMetaData.alias());\n+ if (namesProvided) {\n+ indicesToDisplay.add(cursor.key);\n+ }\n }\n }\n \n@@ -131,17 +136,19 @@ public RestResponse buildResponse(GetAliasesResponse response, XContentBuilder b\n }\n \n for (final ObjectObjectCursor<String, List<AliasMetaData>> entry : response.getAliases()) {\n- builder.startObject(entry.key);\n- {\n- builder.startObject(\"aliases\");\n+ if (namesProvided == false || (namesProvided && indicesToDisplay.contains(entry.key))) {\n+ builder.startObject(entry.key);\n {\n- for (final AliasMetaData alias : entry.value) {\n- AliasMetaData.Builder.toXContent(alias, builder, ToXContent.EMPTY_PARAMS);\n+ builder.startObject(\"aliases\");\n+ {\n+ for (final AliasMetaData alias : entry.value) {\n+ AliasMetaData.Builder.toXContent(alias, builder, ToXContent.EMPTY_PARAMS);\n+ }\n }\n+ builder.endObject();\n }\n builder.endObject();\n }\n- builder.endObject();\n }\n }\n 
builder.endObject();", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.get;\n \n+import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.get.GetIndexRequest.Feature;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n@@ -281,6 +282,8 @@ private void assertEmptyMappings(GetIndexResponse response) {\n \n private void assertEmptyAliases(GetIndexResponse response) {\n assertThat(response.aliases(), notNullValue());\n- assertThat(response.aliases().isEmpty(), equalTo(true));\n+ for (final ObjectObjectCursor<String, List<AliasMetaData>> entry : response.getAliases()) {\n+ assertTrue(entry.value.isEmpty());\n+ }\n }\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/get/GetIndexIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.aliases;\n \n+import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;\n import org.elasticsearch.action.admin.indices.alias.exists.AliasesExistResponse;\n@@ -32,6 +33,7 @@\n import org.elasticsearch.cluster.metadata.AliasOrIndex;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.StopWatch;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentType;\n@@ -49,6 +51,7 @@\n \n import java.util.Arrays;\n import java.util.HashSet;\n+import java.util.List;\n import java.util.Set;\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.ExecutorService;\n@@ -567,20 +570,24 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting alias1\");\n GetAliasesResponse getResponse = admin().indices().prepareGetAliases(\"alias1\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(1));\n+ assertThat(getResponse.getAliases().size(), equalTo(5));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"alias1\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n+ assertTrue(getResponse.getAliases().get(\"test\").isEmpty());\n+ assertTrue(getResponse.getAliases().get(\"test123\").isEmpty());\n+ assertTrue(getResponse.getAliases().get(\"foobarbaz\").isEmpty());\n+ assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n AliasesExistResponse existsResponse = admin().indices().prepareAliasesExist(\"alias1\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n \n logger.info(\"--> getting all aliases that start with alias*\");\n getResponse = admin().indices().prepareGetAliases(\"alias*\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(1));\n+ 
assertThat(getResponse.getAliases().size(), equalTo(5));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(2));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"alias1\"));\n@@ -592,6 +599,10 @@ public void testIndicesGetAliases() throws Exception {\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getSearchRouting(), nullValue());\n+ assertTrue(getResponse.getAliases().get(\"test\").isEmpty());\n+ assertTrue(getResponse.getAliases().get(\"test123\").isEmpty());\n+ assertTrue(getResponse.getAliases().get(\"foobarbaz\").isEmpty());\n+ assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"alias*\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n \n@@ -676,12 +687,13 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting f* for index *bar\");\n getResponse = admin().indices().prepareGetAliases(\"f*\").addIndices(\"*bar\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(1));\n+ assertThat(getResponse.getAliases().size(), equalTo(2));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"foo\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n+ assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"f*\")\n .addIndices(\"*bar\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n@@ -690,13 +702,14 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting f* for index *bac\");\n getResponse = admin().indices().prepareGetAliases(\"foo\").addIndices(\"*bac\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(1));\n+ assertThat(getResponse.getAliases().size(), equalTo(2));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"foo\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n+ assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"foo\")\n .addIndices(\"*bac\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n@@ -729,7 +742,9 @@ public void testIndicesGetAliases() throws Exception {\n .removeAlias(\"foobar\", \"foo\"));\n \n getResponse = admin().indices().prepareGetAliases(\"foo\").addIndices(\"foobar\").get();\n- assertThat(getResponse.getAliases().isEmpty(), equalTo(true));\n+ for (final ObjectObjectCursor<String, List<AliasMetaData>> 
entry : getResponse.getAliases()) {\n+ assertTrue(entry.value.isEmpty());\n+ }\n existsResponse = admin().indices().prepareAliasesExist(\"foo\").addIndices(\"foobar\").get();\n assertThat(existsResponse.exists(), equalTo(false));\n }", "filename": "core/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java", "status": "modified" }, { "diff": "@@ -84,6 +84,10 @@ setup:\n \n ---\n \"check delete with index list\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n - do:\n indices.delete_alias:\n index: \"test_index1,test_index2\"\n@@ -106,6 +110,10 @@ setup:\n \n ---\n \"check delete with prefix* index\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n - do:\n indices.delete_alias:\n index: \"test_*\"\n@@ -129,6 +137,10 @@ setup:\n \n ---\n \"check delete with index list and * aliases\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n - do:\n indices.delete_alias:\n index: \"test_index1,test_index2\"\n@@ -152,6 +164,10 @@ setup:\n \n ---\n \"check delete with index list and _all aliases\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n - do:\n indices.delete_alias:\n index: \"test_index1,test_index2\"\n@@ -175,6 +191,10 @@ setup:\n \n ---\n \"check delete with index list and wildcard aliases\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n - do:\n indices.delete_alias:\n index: \"test_index1,test_index2\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_alias/all_path_options.yml", "status": "modified" }, { "diff": "@@ -40,6 +40,62 @@ setup:\n - match: {test_index.aliases.test_blias: {}}\n - is_false: test_index_2\n \n+---\n+\"Get aliases via /_all/_alias/\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n+ - do:\n+ indices.create:\n+ index: myindex\n+\n+ - do:\n+ indices.get_alias:\n+ index: _all\n+\n+ - match: {test_index.aliases.test_alias: {}}\n+ - match: {test_index.aliases.test_blias: {}}\n+ - match: {test_index_2.aliases.test_alias: {}}\n+ - match: {test_index_2.aliases.test_blias: {}}\n+ - match: {myindex.aliases: {}}\n+\n+---\n+\"Get aliases via /*/_alias/\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n+ - do:\n+ indices.create:\n+ index: myindex\n+\n+ - do:\n+ indices.get_alias:\n+ index: \"*\"\n+\n+ - match: {test_index.aliases.test_alias: {}}\n+ - match: {test_index.aliases.test_blias: {}}\n+ - match: {test_index_2.aliases.test_alias: {}}\n+ - match: {test_index_2.aliases.test_blias: {}}\n+ - match: {myindex.aliases: {}}\n+\n+---\n+\"Get and index with no aliases via /{index}/_alias/\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n+\n+ - do:\n+ indices.create:\n+ index: myindex\n+\n+ - do:\n+ indices.get_alias:\n+ index: myindex\n+\n+ - match: {myindex.aliases: {}}\n+\n ---\n \"Get specific alias via /{index}/_alias/{name}\":\n ", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_alias/10_basic.yml", "status": "modified" }, { "diff": "@@ -14,6 +14,9 @@ setup:\n \n ---\n \"put alias per index\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n \n - do:\n indices.put_alias:\n@@ -69,7 +72,9 @@ setup:\n \n ---\n \"put alias prefix* index\":\n-\n+ - skip:\n+ version: \" - 5.99.99\"\n+ 
reason: only requested indices are included in 6.x\n \n - do:\n indices.put_alias:\n@@ -86,7 +91,9 @@ setup:\n \n ---\n \"put alias in list of indices\":\n-\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: only requested indices are included in 6.x\n \n - do:\n indices.put_alias:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/all_path_options.yml", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.4.0 and 5.4.1\r\n**Regression**: Yes, works fine in ES 2.x\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8\r\n\r\n**OS version** : Windows 7 \r\n\r\n**Description of the problem**:\r\n\r\nNo highlights returned when searching using dis_max with a multi_match queries. \r\nElasticsearch fails with 'null_pointer_exception' \r\n\r\n**Steps to reproduce**:\r\n\r\nCreate the following index\r\n```\r\nPUT blog \r\n{\r\n \"mappings\": {\r\n \"blogpost\": { \r\n \"properties\": { \r\n \"title\": { \"type\": \"text\", \"term_vector\": \"with_positions_offsets\"}, \r\n \"body\": { \"type\": \"text\", \"term_vector\": \"with_positions_offsets\" }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nAdd a new document\r\n\r\n```\r\nPUT blog/blogpost/1\r\n{\r\n \"title\": \"welcome test\",\r\n \"body\": \"foo\"\r\n}\r\n```\r\nRun the following query to search 'test' string. \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": {\r\n \"dis_max\": {\r\n \"queries\": [\r\n {\r\n \"multi_match\": {\r\n \"fields\": [\r\n \"title\"\r\n ], \r\n \"slop\": 0, \r\n \"type\": \"phrase_prefix\", \r\n \"max_expansions\": 10, \r\n \"query\": \"test\"\r\n }\r\n }, \r\n {\r\n \"multi_match\": {\r\n \"fields\": [\r\n \"body\"\r\n ], \r\n \"slop\": 0, \r\n \"type\": \"phrase_prefix\", \r\n \"max_expansions\": 10, \r\n \"query\": \"test\"\r\n }\r\n }]\r\n }\r\n }\r\n }\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"matched_fields\": [\r\n \"title\",\r\n \"body\"\r\n ], \r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n```\r\nResult: \r\n\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 4,\r\n \"failed\": 1,\r\n \"failures\": [\r\n {\r\n \"shard\": 3,\r\n \"index\": \"blog\",\r\n \"node\": \"eMbzhELBQSO9kiRwVss65A\",\r\n \"reason\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n }\r\n ]\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 0.25811607,\r\n \"hits\": []\r\n }\r\n}\r\n```\r\nNow, let's update the same document and replace 'foo' with 'test' in the body attribute: \r\n\r\n```\r\nPUT blog/blogpost/1\r\n{\r\n \"title\": \"welcome test\",\r\n \"body\": \"test\"\r\n}\r\n```\r\n\r\nRe-execute the search query above, it will successfully return the result\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 0.2876821,\r\n \"hits\": [\r\n {\r\n \"_index\": \"blog\",\r\n \"_type\": \"blogpost\",\r\n \"_id\": \"1\",\r\n \"_score\": 0.2876821,\r\n \"_source\": {\r\n \"title\": \"welcome test\",\r\n \"body\": \" test\"\r\n },\r\n \"highlight\": {\r\n \"title\": [\r\n \"w<em>elco</em>me <em>test</em>\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\nAs you can see, the query will always fail if the document does not find a match in both attributes. \r\n\r\n--Cheers! \r\n", "comments": [ { "body": "Highlighting is an inexact science and a highlighter has to balance the competing demands of speed, accuracy and ability to summarize lengthy docs. The reason we have so many highlighter implementations reflects the legacy of various people taking up the challenge of creating the definitive Lucene highlighter.\r\nEach implementation has its pros and cons (preserves phrases, works with n-grams, splits on sentences, is fast etc). 
A failure to highlight is not necessarily a \"bug\" but sometimes a deliberate design trade-off of an implementation (speed over accuracy).\r\n\r\nWhat you are seeing is a deficiency with the `fvh` highlighter and I note that if you set the type to `plain` your example works OK.\r\n\r\nThe latest hot contender for _one-highlighter-to-rule-them-all_ is the `unified` highlighter but I see that this also fails on this example. \r\n\r\nI don't know for sure that the failure in this case is an oversight or a deliberate limitation of fvh in trying to be a _Fast_ Vector Highlighter. The `plain` highlighter offers a workaround in this case and so this issue may ultimately be closed with \"wontfix\".\r\n", "created_at": "2017-06-07T08:41:02Z" }, { "body": "Given that we're wanting to replace the `plain` highlighter with the `unified` highlighter, would be good to fix this.\r\n\r\nThe `unified` highlighter dies with \r\n\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"Less than 2 subSpans.size():1\"\r\n\r\nChanging the `phrase_prefix` query to a `phrase` query works. Makes me wonder if there is something wrong with that query in particular.\r\n\r\n@jimczi any thoughts?\r\n", "created_at": "2017-06-07T11:41:17Z" }, { "body": "The `unified` highlighter has a bug when the `phrase_prefix` query contain a single term. I'll work on a fix but I don't think this is related with the bug in the `fvh`. ", "created_at": "2017-06-07T12:11:59Z" }, { "body": "Thank you guys for your input. Please also note that the following query fails too while both queries works in ES 2.x. \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"should\": [\r\n { \"match_phrase_prefix\" : {\r\n \"title\" : \"test\"\r\n }\r\n \r\n }, { \"match_phrase_prefix\" : {\r\n \"body\" : \"test\"\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"matched_fields\": [\r\n \"title\",\r\n \"body\"\r\n ], \r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n\r\n```\r\n\r\n@markharwood Thanks for your comment, this is a regression compared to ES 2.x. \r\n\r\n\r\n**Update**\r\nAnother attempt with the following: \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"should\": [\r\n { \r\n \"match_phrase_prefix\" : {\r\n \"title\" : \"test\"\r\n }}, { \r\n \"match_phrase_prefix\" : {\r\n \"body\" : \"test\"\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"type\": \"fvh\"\r\n },\r\n \"body\": {\r\n \"number_of_fragments\": 0, \r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n```", "created_at": "2017-06-07T14:34:09Z" }, { "body": "Another simpler query \r\n\r\n```\r\nGET blog/_search\r\n{\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"highlight\": {\r\n \"fields\": {\r\n \"title\": {\r\n \"number_of_fragments\": 0, \r\n \"highlight_query\": {\r\n \"match_phrase_prefix\" : {\r\n \"title\" : \"tests\"\r\n }\r\n },\r\n \"type\": \"fvh\"\r\n }\r\n }, \r\n \"require_field_match\": false\r\n }\r\n}\r\n```", "created_at": "2017-06-07T21:06:19Z" } ], "number": 25088, "title": "No highlights returned using dis_max with multi_match queries" }
{ "body": "The unified highlighter rewrites MultiPhrasePrefixQuery to SpanNearQuery even when there is a single term in the phrase.\r\nHowever, SpanNearQuery throws an exception when the number of clauses is less than 2.\r\nThis change returns a simple PrefixQuery when there is a single term and builds the SpanNearQuery otherwise.\r\n\r\nRelates #25088", "number": 25103, "review_comments": [], "title": "Highlighters: Fix MultiPhrasePrefixQuery rewriting" }
{ "commits": [ { "message": "Higlighters: Fix MultiPhrasePrefixQuery rewriting\n\nThe unified highlighter rewrites MultiPhrasePrefixQuery to SpanNearQuer even when there is a single term in the phrase.\nThough SpanNearQuery throws an exception when the number of clauses is less than 2.\nThis change returns a simple PrefixQuery when there is a single term and builds the SpanNearQuery otherwise.\n\nRelates #25088" } ], "files": [ { "diff": "@@ -182,13 +182,16 @@ private Collection<Query> rewriteCustomQuery(Query query) {\n positionSpanQueries[i] = innerQueries[0];\n }\n }\n+\n+ if (positionSpanQueries.length == 1) {\n+ return Collections.singletonList(positionSpanQueries[0]);\n+ }\n // sum position increments beyond 1\n int positionGaps = 0;\n if (positions.length >= 2) {\n // positions are in increasing order. max(0,...) is just a safeguard.\n positionGaps = Math.max(0, positions[positions.length - 1] - positions[0] - positions.length + 1);\n }\n-\n //if original slop is 0 then require inOrder\n boolean inorder = (mpq.getSlop() == 0);\n return Collections.singletonList(new SpanNearQuery(positionSpanQueries,", "filename": "core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java", "status": "modified" }, { "diff": "@@ -121,6 +121,19 @@ public void testNoMatchSize() throws Exception {\n BreakIterator.getSentenceInstance(Locale.ROOT), 100, inputs);\n }\n \n+ public void testMultiPhrasePrefixQuerySingleTerm() throws Exception {\n+ final String[] inputs = {\n+ \"The quick brown fox.\"\n+ };\n+ final String[] outputs = {\n+ \"The quick <b>brown</b> fox.\"\n+ };\n+ MultiPhrasePrefixQuery query = new MultiPhrasePrefixQuery();\n+ query.add(new Term(\"text\", \"bro\"));\n+ assertHighlightOneDoc(\"text\", inputs, new StandardAnalyzer(), query, Locale.ROOT,\n+ BreakIterator.getSentenceInstance(Locale.ROOT), 0, outputs);\n+ }\n+\n public void testMultiPhrasePrefixQuery() throws Exception {\n final String[] inputs = {\n \"The quick brown fox.\"", "filename": "core/src/test/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighterTests.java", "status": "modified" }, { "diff": "@@ -1455,11 +1455,20 @@ public void testPhrasePrefix() throws IOException {\n \n for (String type : UNIFIED_AND_NULL) {\n SearchSourceBuilder source = searchSource()\n- .query(matchPhrasePrefixQuery(\"field0\", \"quick bro\"))\n+ .query(matchPhrasePrefixQuery(\"field0\", \"bro\"))\n .highlighter(highlight().field(\"field0\").order(\"score\").preTags(\"<x>\").postTags(\"</x>\").highlighterType(type));\n \n SearchResponse searchResponse = client().search(searchRequest(\"test\").source(source)).actionGet();\n \n+ assertHighlight(searchResponse, 0, \"field0\", 0, 1, equalTo(\"The quick <x>brown</x> fox jumps over the lazy dog\"));\n+\n+\n+ source = searchSource()\n+ .query(matchPhrasePrefixQuery(\"field0\", \"quick bro\"))\n+ .highlighter(highlight().field(\"field0\").order(\"score\").preTags(\"<x>\").postTags(\"</x>\").highlighterType(type));\n+\n+ searchResponse = client().search(searchRequest(\"test\").source(source)).actionGet();\n+\n assertHighlight(searchResponse, 0, \"field0\", 0, 1, equalTo(\"The <x>quick</x> <x>brown</x> fox jumps over the lazy dog\"));\n \n logger.info(\"--> highlighting and searching on field1\");", "filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java", "status": "modified" } ] }
{ "body": "In #22144 we disabled the `_all` field by default, which means that unless `_all` was explicitly enabled in a 5.x mapping, it will be disabled in 6.0 after upgrade. In other words, 5.x indices with `_all` explicitly enabled will continue to work in 6.0 as before, while indices with `_all` not explicitly enabled will lose `_all` fields and will start to use our fallback mechanism for all fields queries. \r\n\r\nAfter discussions with @archanid and @dakrone we couldn't come to a clear conclusion on how this situation should be handled. We see at least 2 options here. \r\n\r\n1) It can be fixed by updating all existing 5.x mappings to have `_all` explicitly enabled when the first 6.0 node shows up in the cluster. \r\n\r\n2) Alternatively, we can just update our [documentation](https://www.elastic.co/guide/en/elasticsearch/reference/master/breaking_60_mappings_changes.html#_the_literal__all_literal_meta_field_is_now_disabled_by_default), which doesn't seem to say explicitly what happens to the existing mappings. \r\n", "comments": [ { "body": "This is a bug. 5.x indices should keep working in the same way as before when the cluster is upgraded to 6.x. This means that `_all` should continue to be enabled by default for indices created in 5.x.\r\n\r\nI'd also be OK with upgrading the 5.x mappings to make `_all` explicitly enabled, rather than introducing version logic into the default.", "created_at": "2017-06-06T12:57:22Z" }, { "body": "@clintongormley @dakrone Can you help fill out the table? Given the 5.x scenario on the left, what should happen when they upgrade, and should we raise a deprecation issue in the migration tool?\r\n\r\n| 5.x setting | 6.0? | Deprecation Issue? |\r\n|----------|-------------|------|\r\n| `\"_all\": { \"enabled\": false }` | no setting | no |\r\n| `\"_all\": { \"enabled\": true }` | `\"_all\": { \"enabled\": true }` | yes |\r\n| nothing ~~or improperly configured~~ | `\"_all\": { \"enabled\": true }` | yes |\r\n", "created_at": "2017-06-06T13:23:49Z" }, { "body": "@archanid I've updated the table", "created_at": "2017-06-06T15:16:07Z" }, { "body": "Thanks @clintongormley. Just to clarify, what I meant by \"improperly configured\", and I should have been more clear, is `\"_all\": { }` which is a possible thing. After talking with @imotov and @dakrone this morning we think that case is the same as \"nothing\". We'll still want to keep the `\"enabled\": true` functionality if the index was created in 5.x.", "created_at": "2017-06-06T18:25:05Z" }, { "body": ":tada:", "created_at": "2017-06-08T20:51:11Z" } ], "number": 25068, "title": "Handling of _all field in indices migrated from 5.x to 6.0" }
{ "body": "When we disabled `_all` by default for indices created in 6.0, we missed adding\r\na layer that would handle the situation where `_all` was not enabled in 5.x and\r\nthen the cluster was updated to 6.0. This means that when the cluster was\r\nupdated, the `_all` field would be disabled for 5.x indices and field values\r\nwould not be added to the `_all` field.\r\n\r\nThis adds a compatibility layer for 5.x indices where we treat the default\r\nenabled value for the `_all` field as `true` if unset on 5.x indices.\r\n\r\nResolves #25068\r\n", "number": 25087, "review_comments": [], "title": "Correctly enable _all for older 5.x indices" }
{ "commits": [ { "message": "Correctly enable _all for older 5.x indices\n\nWhen we disabled `_all` by default for indices created in 6.0, we missed adding\na layer that would handle the situation where `_all` was not enabled in 5.x and\nthen the cluster was updated to 6.0, this means that when the cluster was\nupdated the `_all` field would be disabled for 5.x indices and field values\nwould not be added to the `_all` field.\n\nThis adds a compatibility layer for 5.x indices where we treat the default\nenabled value for the `_all` field to be `true` if unset on 5.x indices.\n\nResolves #25068" } ], "files": [ { "diff": "@@ -133,24 +133,38 @@ public MetadataFieldMapper.Builder<?,?> parse(String name, Map<String, Object> n\n }\n \n parseTextField(builder, builder.name, node, parserContext);\n+ boolean enabledSet = false;\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n String fieldName = entry.getKey();\n Object fieldNode = entry.getValue();\n if (fieldName.equals(\"enabled\")) {\n boolean enabled = TypeParsers.nodeBooleanValueLenient(name, \"enabled\", fieldNode);\n builder.enabled(enabled ? EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED);\n+ enabledSet = true;\n iterator.remove();\n }\n }\n+ if (enabledSet == false && parserContext.indexVersionCreated().before(Version.V_6_0_0_alpha1)) {\n+ // So there is no \"enabled\" field, however, the index was created prior to 6.0,\n+ // and therefore the default for this particular index should be \"true\" for\n+ // enabling _all\n+ builder.enabled(EnabledAttributeMapper.ENABLED);\n+ }\n return builder;\n }\n \n @Override\n public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) {\n final Settings indexSettings = context.mapperService().getIndexSettings().getSettings();\n if (fieldType != null) {\n- return new AllFieldMapper(indexSettings, fieldType);\n+ if (context.indexVersionCreated().before(Version.V_6_0_0_alpha1)) {\n+ // The index was created prior to 6.0, and therefore the default for this\n+ // particular index should be \"true\" for enabling _all\n+ return new AllFieldMapper(fieldType.clone(), EnabledAttributeMapper.ENABLED, indexSettings);\n+ } else {\n+ return new AllFieldMapper(indexSettings, fieldType);\n+ }\n } else {\n return parse(NAME, Collections.emptyMap(), context)\n .build(new BuilderContext(indexSettings, new ContentPath(1)));\n@@ -197,7 +211,6 @@ private AllFieldMapper(Settings indexSettings, MappedFieldType existing) {\n private AllFieldMapper(MappedFieldType fieldType, EnabledAttributeMapper enabled, Settings indexSettings) {\n super(NAME, fieldType, Defaults.FIELD_TYPE, indexSettings);\n this.enabledState = enabled;\n-\n }\n \n public boolean enabled() {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java", "status": "modified" }, { "diff": "@@ -0,0 +1,109 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.InternalSettingsPlugin;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n+\n+public class AllFieldIT extends ESIntegTestCase {\n+\n+ @Override\n+ protected Collection<Class<? extends Plugin>> nodePlugins() {\n+ return Arrays.asList(InternalSettingsPlugin.class); // uses index.version.created\n+ }\n+\n+ public void test5xIndicesContinueToUseAll() throws Exception {\n+ // Default 5.x settings\n+ assertAcked(prepareCreate(\"test\").setSettings(\"index.version.created\", Version.V_5_1_1.id));\n+ client().prepareIndex(\"test\", \"type\", \"1\").setSource(\"body\", \"foo\").get();\n+ refresh();\n+ SearchResponse resp = client().prepareSearch(\"test\").setQuery(QueryBuilders.matchQuery(\"_all\", \"foo\")).get();\n+ assertHitCount(resp, 1);\n+ assertSearchHits(resp, \"1\");\n+\n+ // _all explicitly enabled\n+ assertAcked(prepareCreate(\"test2\")\n+ .setSource(jsonBuilder()\n+ .startObject()\n+ .startObject(\"mappings\")\n+ .startObject(\"type\")\n+ .startObject(\"_all\")\n+ .field(\"enabled\", true)\n+ .endObject() // _all\n+ .endObject() // type\n+ .endObject() // mappings\n+ .endObject())\n+ .setSettings(\"index.version.created\", Version.V_5_4_0_ID));\n+ client().prepareIndex(\"test2\", \"type\", \"1\").setSource(\"foo\", \"bar\").get();\n+ refresh();\n+ resp = client().prepareSearch(\"test2\").setQuery(QueryBuilders.matchQuery(\"_all\", \"bar\")).get();\n+ assertHitCount(resp, 1);\n+ assertSearchHits(resp, \"1\");\n+\n+ // _all explicitly disabled\n+ assertAcked(prepareCreate(\"test3\")\n+ .setSource(jsonBuilder()\n+ .startObject()\n+ .startObject(\"mappings\")\n+ .startObject(\"type\")\n+ .startObject(\"_all\")\n+ .field(\"enabled\", false)\n+ .endObject() // _all\n+ .endObject() // type\n+ .endObject() // mappings\n+ .endObject())\n+ .setSettings(\"index.version.created\", Version.V_5_4_0_ID));\n+ client().prepareIndex(\"test3\", \"type\", \"1\").setSource(\"foo\", \"baz\").get();\n+ refresh();\n+ resp = client().prepareSearch(\"test3\").setQuery(QueryBuilders.matchQuery(\"_all\", \"baz\")).get();\n+ assertHitCount(resp, 0);\n+\n+ // _all present, but not enabled or disabled (default settings)\n+ assertAcked(prepareCreate(\"test4\")\n+ .setSource(jsonBuilder()\n+ .startObject()\n+ .startObject(\"mappings\")\n+ 
.startObject(\"type\")\n+ .startObject(\"_all\")\n+ .endObject() // _all\n+ .endObject() // type\n+ .endObject() // mappings\n+ .endObject())\n+ .setSettings(\"index.version.created\", Version.V_5_4_0_ID));\n+ client().prepareIndex(\"test4\", \"type\", \"1\").setSource(\"foo\", \"eggplant\").get();\n+ refresh();\n+ resp = client().prepareSearch(\"test4\").setQuery(QueryBuilders.matchQuery(\"_all\", \"eggplant\")).get();\n+ assertHitCount(resp, 1);\n+ assertSearchHits(resp, \"1\");\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/index/mapper/AllFieldIT.java", "status": "added" }, { "diff": "@@ -46,6 +46,7 @@\n import org.apache.lucene.search.spans.SpanTermQuery;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.automaton.TooComplexToDeterminizeException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.compress.CompressedXContent;\n@@ -833,6 +834,9 @@ public void testToQuerySplitOnWhitespace() throws IOException {\n \n public void testExistsFieldQuery() throws Exception {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ assumeTrue(\"5.x behaves differently, so skip on non-6.x indices\",\n+ indexVersionCreated.onOrAfter(Version.V_6_0_0_alpha1));\n+\n QueryShardContext context = createShardContext();\n QueryStringQueryBuilder queryBuilder = new QueryStringQueryBuilder(\"foo:*\");\n Query query = queryBuilder.toQuery(context);", "filename": "core/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -198,6 +199,9 @@ public void testFieldsCannotBeSetToNull() {\n }\n \n public void testDefaultFieldParsing() throws IOException {\n+ assumeTrue(\"5.x behaves differently, so skip on non-6.x indices\",\n+ indexVersionCreated.onOrAfter(Version.V_6_0_0_alpha1));\n+\n String query = randomAlphaOfLengthBetween(1, 10).toLowerCase(Locale.ROOT);\n String contentString = \"{\\n\" +\n \" \\\"simple_query_string\\\" : {\\n\" +", "filename": "core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java", "status": "modified" }, { "diff": "@@ -12,7 +12,7 @@\n indices.get_mapping:\n index: test_index\n \n- - match: { test_index.mappings.type_1: {}}\n+ - is_true: test_index.mappings.type_1\n \n ---\n \"Create index with settings\":", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.create/10_basic.yml", "status": "modified" }, { "diff": "@@ -42,10 +42,10 @@ setup:\n - do:\n indices.get_mapping: {}\n \n- - match: { test_1.mappings.type_1: {}}\n- - match: { test_1.mappings.type_2: {}}\n- - match: { test_2.mappings.type_2: {}}\n- - match: { test_2.mappings.type_3: {}}\n+ - is_true: test_1.mappings.type_1\n+ - is_true: test_1.mappings.type_2\n+ - is_true: test_2.mappings.type_2\n+ - is_true: test_2.mappings.type_3\n \n ---\n \"Get /{index}/_mapping\":\n@@ -58,8 +58,8 @@ setup:\n indices.get_mapping:\n index: test_1\n \n- - match: { test_1.mappings.type_1: {}}\n- - match: { test_1.mappings.type_2: {}}\n+ - is_true: 
test_1.mappings.type_1\n+ - is_true: test_1.mappings.type_2\n - is_false: test_2\n \n \n@@ -75,8 +75,8 @@ setup:\n index: test_1\n type: _all\n \n- - match: { test_1.mappings.type_1: {}}\n- - match: { test_1.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_1\n+ - is_true: test_1.mappings.type_2\n - is_false: test_2\n \n ---\n@@ -91,8 +91,8 @@ setup:\n index: test_1\n type: '*'\n \n- - match: { test_1.mappings.type_1: {}}\n- - match: { test_1.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_1\n+ - is_true: test_1.mappings.type_2\n - is_false: test_2\n \n ---\n@@ -107,7 +107,6 @@ setup:\n index: test_1\n type: type_1\n \n- - match: { test_1.mappings.type_1: {}}\n - is_false: test_1.mappings.type_2\n - is_false: test_2\n \n@@ -123,8 +122,8 @@ setup:\n index: test_1\n type: type_1,type_2\n \n- - match: { test_1.mappings.type_1: {}}\n- - match: { test_1.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_1\n+ - is_true: test_1.mappings.type_2\n - is_false: test_2\n \n ---\n@@ -139,7 +138,7 @@ setup:\n index: test_1\n type: '*2'\n \n- - match: { test_1.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_2\n - is_false: test_1.mappings.type_1\n - is_false: test_2\n \n@@ -154,8 +153,8 @@ setup:\n indices.get_mapping:\n type: type_2\n \n- - match: { test_1.mappings.type_2: {}}\n- - match: { test_2.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_2\n+ - is_true: test_2.mappings.type_2\n - is_false: test_1.mappings.type_1\n - is_false: test_2.mappings.type_3\n \n@@ -171,8 +170,8 @@ setup:\n index: _all\n type: type_2\n \n- - match: { test_1.mappings.type_2: {}}\n- - match: { test_2.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_2\n+ - is_true: test_2.mappings.type_2\n - is_false: test_1.mappings.type_1\n - is_false: test_2.mappings.type_3\n \n@@ -188,8 +187,8 @@ setup:\n index: '*'\n type: type_2\n \n- - match: { test_1.mappings.type_2: {}}\n- - match: { test_2.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_2\n+ - is_true: test_2.mappings.type_2\n - is_false: test_1.mappings.type_1\n - is_false: test_2.mappings.type_3\n \n@@ -205,8 +204,8 @@ setup:\n index: test_1,test_2\n type: type_2\n \n- - match: { test_1.mappings.type_2: {}}\n- - match: { test_2.mappings.type_2: {}}\n+ - is_true: test_1.mappings.type_2\n+ - is_true: test_2.mappings.type_2\n - is_false: test_2.mappings.type_3\n \n ---\n@@ -221,6 +220,6 @@ setup:\n index: '*2'\n type: type_2\n \n- - match: { test_2.mappings.type_2: {}}\n+ - is_true: test_2.mappings.type_2\n - is_false: test_1\n - is_false: test_2.mappings.type_3", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_mapping/10_basic.yml", "status": "modified" }, { "diff": "@@ -56,8 +56,8 @@ setup:\n indices.get_mapping:\n index: test-x*\n \n- - match: { test-xxx.mappings.type_1: {}}\n- - match: { test-xxy.mappings.type_2: {}}\n+ - is_true: test-xxx.mappings.type_1\n+ - is_true: test-xxy.mappings.type_2\n \n ---\n \"Get test-* with wildcard_expansion=all\":\n@@ -67,9 +67,9 @@ setup:\n index: test-x*\n expand_wildcards: all\n \n- - match: { test-xxx.mappings.type_1: {}}\n- - match: { test-xxy.mappings.type_2: {}}\n- - match: { test-xyy.mappings.type_3: {}}\n+ - is_true: test-xxx.mappings.type_1\n+ - is_true: test-xxy.mappings.type_2\n+ - is_true: test-xyy.mappings.type_3\n \n ---\n \"Get test-* with wildcard_expansion=open\":\n@@ -79,8 +79,8 @@ setup:\n index: test-x*\n expand_wildcards: open\n \n- - match: { test-xxx.mappings.type_1: {}}\n- - match: { test-xxy.mappings.type_2: {}}\n+ - is_true: 
test-xxx.mappings.type_1\n+ - is_true: test-xxy.mappings.type_2\n \n ---\n \"Get test-* with wildcard_expansion=closed\":\n@@ -90,7 +90,7 @@ setup:\n index: test-x*\n expand_wildcards: closed\n \n- - match: { test-xyy.mappings.type_3: {}}\n+ - is_true: test-xyy.mappings.type_3\n \n ---\n \"Get test-* with wildcard_expansion=none\":\n@@ -112,8 +112,6 @@ setup:\n index: test-x*\n expand_wildcards: open,closed\n \n- - match: { test-xxx.mappings.type_1: {}}\n- - match: { test-xxy.mappings.type_2: {}}\n- - match: { test-xyy.mappings.type_3: {}}\n-\n-\n+ - is_true: test-xxx.mappings.type_1\n+ - is_true: test-xxy.mappings.type_2\n+ - is_true: test-xyy.mappings.type_3", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_mapping/50_wildcard_expansion.yml", "status": "modified" }, { "diff": "@@ -147,6 +147,8 @@ public abstract class AbstractQueryTestCase<QB extends AbstractQueryBuilder<QB>>\n DOUBLE_FIELD_NAME, BOOLEAN_FIELD_NAME, DATE_FIELD_NAME, DATE_RANGE_FIELD_NAME, GEO_POINT_FIELD_NAME, };\n private static final int NUMBER_OF_TESTQUERIES = 20;\n \n+ protected static Version indexVersionCreated;\n+\n private static ServiceHolder serviceHolder;\n private static int queryNameId = 0;\n private static Settings nodeSettings;\n@@ -185,8 +187,8 @@ public static void beforeClass() {\n \n protected Settings indexSettings() {\n // we have to prefer CURRENT since with the range of versions we support it's rather unlikely to get the current actually.\n- Version indexVersionCreated = randomBoolean() ? Version.CURRENT\n- : VersionUtils.randomVersionBetween(random(), null, Version.CURRENT);\n+ indexVersionCreated = randomBoolean() ? Version.CURRENT\n+ : VersionUtils.randomVersionBetween(random(), null, Version.CURRENT);\n return Settings.builder()\n .put(IndexMetaData.SETTING_VERSION_CREATED, indexVersionCreated)\n .build();", "filename": "test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java", "status": "modified" } ] }
{ "body": "I was investigating something else and it *looks* like refreshes caused by realtime gets do not get factored into the refresh cycle. I was just looking around the code and that is how it looked. This reproduces the problem, I think:\r\n\r\n```\r\nDELETE /test\r\n\r\nPUT /test\r\n{\r\n \"settings\": {\r\n \"refresh_interval\": -1\r\n }\r\n}\r\n\r\nPUT /test/doc/1\r\n{\r\n \"test\": \"TEST\"\r\n}\r\n\r\nGET /test/doc/1\r\n\r\nGET /test/_stats/refresh\r\n```\r\n\r\nThe refresh stats should show a refresh. Actually, it might be useful to track just how many of these we're getting in their own stat either on their own or in addition to refresh stats because many of them is indicative of a problem.", "comments": [ { "body": "> The refresh stats should show a refresh.\r\n\r\n+1\r\n\r\n> Actually, it might be useful to track just how many of these we're getting in their own stat either on their own or in addition to refresh stats because many of them is indicative of a problem.\r\n\r\nI wonder if we need this? What we really care about is how many refreshes happen per sec. Doesn't matter what caused them (and it's very easy to see what the refresh interval settings is). \r\n", "created_at": "2017-05-19T19:56:26Z" }, { "body": "I agree, there is a bug here. The problem is the stats tracking is done for refresh calls that go through index shard, but not for refresh calls decided inside the engine.\r\n\r\n> Actually, it might be useful to track just how many of these we're getting in their own stat either on their own or in addition to refresh stats because many of them is indicative of a problem.\r\n\r\nI do not think that we need this. A refresh is a refresh.", "created_at": "2017-05-19T19:58:50Z" }, { "body": "> I do not think that we need this. A refresh is a refresh.\r\n\r\nI'm not sure we need this either. If we had this then we could point to it and say \"there, that is why you have so many refreshes\" without having to dig deeply. It'd be an easy thing to tell people to check.", "created_at": "2017-05-19T20:10:38Z" } ], "number": 24806, "title": "Refreshes caused by realtime gets are not counted in the refresh stats" }
{ "body": "The PR takes a different approach to solve #24806 than currently implemented via #25052. The `refreshMetric` that IndexShard maintains is updated using the refresh listeners infrastructure in lucene. This means that we truly count all refreshes that lucene makes and not have to worry about each individual caller (like `IndexShard@refresh` and `Engine#get()`)", "number": 25083, "review_comments": [ { "body": "Can we give this a more meaningful name like `currentRefreshStartTime`?", "created_at": "2017-06-06T21:01:57Z" }, { "body": "I wonder if assertions are enabled if we should clear out `time` (e.g., it to `Long.MAX_VALUE`) and assert that it's cleared in before?", "created_at": "2017-06-06T21:02:56Z" }, { "body": "`refreshListeners`?", "created_at": "2017-06-06T21:04:23Z" }, { "body": "sure", "created_at": "2017-06-06T21:10:08Z" }, { "body": "I doubted about it and decided that the current assertions are enough. If set it to `MAX_VALUE` or something and it goes wrong the stats will be extremely off and hard to recover. Felt like an overkill. ", "created_at": "2017-06-06T21:11:06Z" }, { "body": "Sadly we also have a `RefreshListeners` class. I felt it would be confusing. ", "created_at": "2017-06-06T21:11:38Z" }, { "body": "> If set it to `MAX_VALUE` or something and it goes wrong the stats will be extremely off and hard to recover.\r\n\r\nWell, it would only be when assertions are enabled.\r\n\r\n> Felt like an overkill.\r\n\r\nOkay.", "created_at": "2017-06-06T21:17:55Z" }, { "body": "Probably should be on the line above.", "created_at": "2017-06-06T21:26:38Z" }, { "body": "I wonder if it is worth keeping the assertions about whether or not we refreshed. Certainly up to you.", "created_at": "2017-06-06T21:27:50Z" }, { "body": "Agreed that would strengthen the test, but I feel it's unrelated?", "created_at": "2017-06-06T21:36:03Z" }, { "body": "yeah... auto intelijj formatting..", "created_at": "2017-06-06T21:36:39Z" } ], "title": "Update `IndexShard#refreshMetric` via a `ReferenceManager.RefreshListener`" }
{ "commits": [ { "message": "use refersh listener for real time get" }, { "message": "move to class" }, { "message": "variable name" }, { "message": "line break be gone" }, { "message": "Merge remote-tracking branch 'upstream/master' into refresh_stats_listener" } ], "files": [ { "diff": "@@ -89,7 +89,6 @@\n import java.util.concurrent.locks.ReentrantLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n import java.util.function.Function;\n-import java.util.function.LongConsumer;\n \n public abstract class Engine implements Closeable {\n \n@@ -486,7 +485,7 @@ protected final GetResult getFromSearcher(Get get, Function<String, Searcher> se\n }\n }\n \n- public abstract GetResult get(Get get, Function<String, Searcher> searcherFactory, LongConsumer onRefresh) throws EngineException;\n+ public abstract GetResult get(Get get, Function<String, Searcher> searcherFactory) throws EngineException;\n \n /**\n * Returns a new searcher instance. The consumer of this", "filename": "core/src/main/java/org/elasticsearch/index/engine/Engine.java", "status": "modified" }, { "diff": "@@ -41,6 +41,8 @@\n import org.elasticsearch.indices.IndexingMemoryController;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.util.List;\n+\n /*\n * Holds all the configuration that is used to create an {@link Engine}.\n * Once {@link Engine} has been created with this object, changes to this\n@@ -65,7 +67,7 @@ public final class EngineConfig {\n private final QueryCache queryCache;\n private final QueryCachingPolicy queryCachingPolicy;\n @Nullable\n- private final ReferenceManager.RefreshListener refreshListeners;\n+ private final List<ReferenceManager.RefreshListener> refreshListeners;\n @Nullable\n private final Sort indexSort;\n \n@@ -111,7 +113,7 @@ public EngineConfig(OpenMode openMode, ShardId shardId, ThreadPool threadPool,\n MergePolicy mergePolicy, Analyzer analyzer,\n Similarity similarity, CodecService codecService, Engine.EventListener eventListener,\n TranslogRecoveryPerformer translogRecoveryPerformer, QueryCache queryCache, QueryCachingPolicy queryCachingPolicy,\n- TranslogConfig translogConfig, TimeValue flushMergesAfter, ReferenceManager.RefreshListener refreshListeners,\n+ TranslogConfig translogConfig, TimeValue flushMergesAfter, List<ReferenceManager.RefreshListener> refreshListeners,\n Sort indexSort) {\n if (openMode == null) {\n throw new IllegalArgumentException(\"openMode must not be null\");\n@@ -310,9 +312,9 @@ public enum OpenMode {\n }\n \n /**\n- * {@linkplain ReferenceManager.RefreshListener} instance to configure.\n+ * The refresh listeners to add to Lucene\n */\n- public ReferenceManager.RefreshListener getRefreshListeners() {\n+ public List<ReferenceManager.RefreshListener> getRefreshListeners() {\n return refreshListeners;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.apache.lucene.index.SnapshotDeletionPolicy;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.ReferenceManager;\n import org.apache.lucene.search.SearcherFactory;\n import org.apache.lucene.search.SearcherManager;\n import org.apache.lucene.search.TermQuery;\n@@ -92,7 +93,6 @@\n import java.util.concurrent.locks.Lock;\n import java.util.concurrent.locks.ReentrantLock;\n import java.util.function.Function;\n-import java.util.function.LongConsumer;\n import java.util.function.LongSupplier;\n \n public class 
InternalEngine extends Engine {\n@@ -213,8 +213,8 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException {\n assert pendingTranslogRecovery.get() == false : \"translog recovery can't be pending before we set it\";\n // don't allow commits until we are done with recovering\n pendingTranslogRecovery.set(openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG);\n- if (engineConfig.getRefreshListeners() != null) {\n- searcherManager.addListener(engineConfig.getRefreshListeners());\n+ for (ReferenceManager.RefreshListener listener: engineConfig.getRefreshListeners()) {\n+ searcherManager.addListener(listener);\n }\n success = true;\n } finally {\n@@ -405,7 +405,7 @@ private SearcherManager createSearcherManager() throws EngineException {\n }\n \n @Override\n- public GetResult get(Get get, Function<String, Searcher> searcherFactory, LongConsumer onRefresh) throws EngineException {\n+ public GetResult get(Get get, Function<String, Searcher> searcherFactory) throws EngineException {\n assert Objects.equals(get.uid().field(), uidField) : get.uid().field();\n try (ReleasableLock ignored = readLock.acquire()) {\n ensureOpen();\n@@ -419,9 +419,7 @@ public GetResult get(Get get, Function<String, Searcher> searcherFactory, LongCo\n throw new VersionConflictEngineException(shardId, get.type(), get.id(),\n get.versionType().explainConflictForReads(versionValue.version, get.version()));\n }\n- long time = System.nanoTime();\n refresh(\"realtime_get\");\n- onRefresh.accept(System.nanoTime() - time);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" }, { "diff": "@@ -26,11 +26,13 @@\n import org.apache.lucene.index.SegmentInfos;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.QueryCachingPolicy;\n+import org.apache.lucene.search.ReferenceManager;\n import org.apache.lucene.search.Sort;\n import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;\n import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.ThreadInterruptedException;\n+import org.elasticsearch.Assertions;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n@@ -123,6 +125,7 @@\n import java.nio.channels.ClosedByInterruptException;\n import java.nio.charset.StandardCharsets;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.EnumSet;\n import java.util.List;\n import java.util.Locale;\n@@ -660,7 +663,7 @@ private Engine.DeleteResult delete(Engine engine, Engine.Delete delete) throws I\n \n public Engine.GetResult get(Engine.Get get) {\n readAllowed();\n- return getEngine().get(get, this::acquireSearcher, (timeElapsed) -> refreshMetric.inc(timeElapsed));\n+ return getEngine().get(get, this::acquireSearcher);\n }\n \n /**\n@@ -676,9 +679,7 @@ public void refresh(String source) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"refresh with source [{}] indexBufferRAMBytesUsed [{}]\", source, new ByteSizeValue(bytes));\n }\n- long time = System.nanoTime();\n getEngine().refresh(source);\n- refreshMetric.inc(System.nanoTime() - time);\n } finally {\n if (logger.isTraceEnabled()) {\n logger.trace(\"remove [{}] writing bytes for shard [{}]\", new ByteSizeValue(bytes), shardId());\n@@ -689,9 +690,7 @@ public void refresh(String source) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"refresh with source [{}]\", source);\n }\n- long time = 
System.nanoTime();\n getEngine().refresh(source);\n- refreshMetric.inc(System.nanoTime() - time);\n }\n }\n \n@@ -1847,7 +1846,8 @@ private EngineConfig newEngineConfig(EngineConfig.OpenMode openMode) {\n return new EngineConfig(openMode, shardId,\n threadPool, indexSettings, warmer, store, indexSettings.getMergePolicy(),\n mapperService.indexAnalyzer(), similarityService.similarity(mapperService), codecService, shardEventListener, translogRecoveryPerformer, indexCache.query(), cachingPolicy, translogConfig,\n- IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING.get(indexSettings.getSettings()), refreshListeners, indexSort);\n+ IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING.get(indexSettings.getSettings()),\n+ Arrays.asList(refreshListeners, new RefreshMetricUpdater(refreshMetric)), indexSort);\n }\n \n /**\n@@ -2123,4 +2123,35 @@ protected void delete(Engine engine, Engine.Delete engineDelete) throws IOExcept\n }\n }\n \n+ private static class RefreshMetricUpdater implements ReferenceManager.RefreshListener {\n+\n+ private final MeanMetric refreshMetric;\n+ private long currentRefreshStartTime;\n+ private Thread callingThread = null;\n+\n+ private RefreshMetricUpdater(MeanMetric refreshMetric) {\n+ this.refreshMetric = refreshMetric;\n+ }\n+\n+ @Override\n+ public void beforeRefresh() throws IOException {\n+ if (Assertions.ENABLED) {\n+ assert callingThread == null : \"beforeRefresh was called by \" + callingThread.getName() +\n+ \" without a corresponding call to afterRefresh\";\n+ callingThread = Thread.currentThread();\n+ }\n+ currentRefreshStartTime = System.nanoTime();\n+ }\n+\n+ @Override\n+ public void afterRefresh(boolean didRefresh) throws IOException {\n+ if (Assertions.ENABLED) {\n+ assert callingThread != null : \"afterRefresh called but not beforeRefresh\";\n+ assert callingThread == Thread.currentThread() : \"beforeRefreshed called by a different thread. current [\"\n+ + Thread.currentThread().getName() + \"], thread that called beforeRefresh [\" + callingThread.getName() + \"]\";\n+ callingThread = null;\n+ }\n+ refreshMetric.inc(System.nanoTime() - currentRefreshStartTime);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -172,6 +172,7 @@\n import java.util.stream.Collectors;\n import java.util.stream.LongStream;\n \n+import static java.util.Collections.emptyList;\n import static java.util.Collections.emptyMap;\n import static java.util.Collections.shuffle;\n import static org.elasticsearch.index.engine.Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY;\n@@ -429,11 +430,13 @@ public void onFailedEngine(String reason, @Nullable Exception e) {\n // we don't need to notify anybody in this test\n }\n };\n+ final List<ReferenceManager.RefreshListener> refreshListenerList =\n+ refreshListener == null ? 
emptyList() : Collections.singletonList(refreshListener);\n EngineConfig config = new EngineConfig(openMode, shardId, threadPool, indexSettings, null, store,\n mergePolicy, iwc.getAnalyzer(), iwc.getSimilarity(), new CodecService(null, logger), listener,\n new TranslogHandler(xContentRegistry(), shardId.getIndexName(), indexSettings.getSettings(), logger),\n IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig,\n- TimeValue.timeValueMinutes(5), refreshListener, indexSort);\n+ TimeValue.timeValueMinutes(5), refreshListenerList, indexSort);\n \n return config;\n }\n@@ -921,9 +924,7 @@ public void testConcurrentGetAndFlush() throws Exception {\n \n final AtomicReference<Engine.GetResult> latestGetResult = new AtomicReference<>();\n final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n- final AtomicBoolean refreshed = new AtomicBoolean(false);\n- latestGetResult.set(engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> refreshed.set(true)));\n- assertTrue(\"failed to refresh\", refreshed.get());\n+ latestGetResult.set(engine.get(newGet(true, doc), searcherFactory));\n final AtomicBoolean flushFinished = new AtomicBoolean(false);\n final CyclicBarrier barrier = new CyclicBarrier(2);\n Thread getThread = new Thread(() -> {\n@@ -937,7 +938,7 @@ public void testConcurrentGetAndFlush() throws Exception {\n if (previousGetResult != null) {\n previousGetResult.release();\n }\n- latestGetResult.set(engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause a flush is just done\")));\n+ latestGetResult.set(engine.get(newGet(true, doc), searcherFactory));\n if (latestGetResult.get().exists() == false) {\n break;\n }\n@@ -958,7 +959,6 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n- final AtomicBoolean refreshed = new AtomicBoolean(false);\n \n // create a document\n Document document = testDocumentWithTextField();\n@@ -973,13 +973,12 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // but, not there non realtime\n- Engine.GetResult getResult = engine.get(newGet(false, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have a refresh\"));\n+ Engine.GetResult getResult = engine.get(newGet(false, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(false));\n getResult.release();\n \n // but, we can still get it (in realtime)\n- getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> refreshed.set(true));\n- assertTrue(\"failed to refresh\", refreshed.getAndSet(false));\n+ getResult = engine.get(newGet(true, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -994,7 +993,7 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // also in non realtime\n- getResult = engine.get(newGet(false, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have a refresh\"));\n+ getResult = engine.get(newGet(false, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -1014,8 +1013,7 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // but, we can still get it (in realtime)\n- getResult = engine.get(newGet(true, doc), searcherFactory, 
(onRefresh) -> refreshed.set(true));\n- assertTrue(\"failed to refresh\", refreshed.get());\n+ getResult = engine.get(newGet(true, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -1040,7 +1038,7 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // but, get should not see it (in realtime)\n- getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause the document is deleted\"));\n+ getResult = engine.get(newGet(true, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(false));\n getResult.release();\n \n@@ -1080,7 +1078,7 @@ public void testSimpleOperations() throws Exception {\n engine.flush();\n \n // and, verify get (in real time)\n- getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause a flush is just done\"));\n+ getResult = engine.get(newGet(true, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -1867,7 +1865,6 @@ class OpAndVersion {\n final Term uidTerm = newUid(doc);\n engine.index(indexForDoc(doc));\n final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n- final AtomicBoolean refreshed = new AtomicBoolean(false);\n for (int i = 0; i < thread.length; i++) {\n thread[i] = new Thread(() -> {\n startGun.countDown();\n@@ -1877,7 +1874,7 @@ class OpAndVersion {\n throw new AssertionError(e);\n }\n for (int op = 0; op < opsPerThread; op++) {\n- try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm), searcherFactory, (onRefresh) -> refreshed.set(true))) {\n+ try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm), searcherFactory)) {\n FieldsVisitor visitor = new FieldsVisitor(true);\n get.docIdAndVersion().context.reader().document(get.docIdAndVersion().docId, visitor);\n List<String> values = new ArrayList<>(Strings.commaDelimitedListToSet(visitor.source().utf8ToString()));\n@@ -1905,7 +1902,6 @@ class OpAndVersion {\n for (int i = 0; i < thread.length; i++) {\n thread[i].join();\n }\n- assertTrue(\"failed to refresh\", refreshed.getAndSet(false));\n List<OpAndVersion> sortedHistory = new ArrayList<>(history);\n sortedHistory.sort(Comparator.comparing(o -> o.version));\n Set<String> currentValues = new HashSet<>();\n@@ -1920,8 +1916,7 @@ class OpAndVersion {\n assertTrue(op.added + \" should not exist\", exists);\n }\n \n- try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm), searcherFactory, (onRefresh) -> refreshed.set(true))) {\n- assertTrue(\"failed to refresh\", refreshed.get());\n+ try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm), searcherFactory)) {\n FieldsVisitor visitor = new FieldsVisitor(true);\n get.docIdAndVersion().context.reader().document(get.docIdAndVersion().docId, visitor);\n List<String> values = Arrays.asList(Strings.commaDelimitedListToStringArray(visitor.source().utf8ToString()));\n@@ -2287,7 +2282,7 @@ public void testEnableGcDeletes() throws Exception {\n engine.delete(new Engine.Delete(\"test\", \"1\", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()));\n \n // Get should not find the document\n- Engine.GetResult getResult = 
engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause the document is deleted\"));\n+ Engine.GetResult getResult = engine.get(newGet(true, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(false));\n \n // Give the gc pruning logic a chance to kick in\n@@ -2301,7 +2296,7 @@ public void testEnableGcDeletes() throws Exception {\n engine.delete(new Engine.Delete(\"test\", \"2\", newUid(\"2\"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()));\n \n // Get should not find the document (we never indexed uid=2):\n- getResult = engine.get(new Engine.Get(true, \"type\", \"2\", newUid(\"2\")), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause document doesn't exists\"));\n+ getResult = engine.get(new Engine.Get(true, \"type\", \"2\", newUid(\"2\")), searcherFactory);\n assertThat(getResult.exists(), equalTo(false));\n \n // Try to index uid=1 with a too-old version, should fail:\n@@ -2311,7 +2306,7 @@ public void testEnableGcDeletes() throws Exception {\n assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class));\n \n // Get should still not find the document\n- getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause document doesn't exists\"));\n+ getResult = engine.get(newGet(true, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(false));\n \n // Try to index uid=2 with a too-old version, should fail:\n@@ -2321,7 +2316,7 @@ public void testEnableGcDeletes() throws Exception {\n assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class));\n \n // Get should not find the document\n- getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause document doesn't exists\"));\n+ getResult = engine.get(newGet(true, doc), searcherFactory);\n assertThat(getResult.exists(), equalTo(false));\n }\n }\n@@ -3654,7 +3649,6 @@ public void testOutOfOrderSequenceNumbersWithVersionConflict() throws IOExceptio\n final ParsedDocument doc = testParsedDocument(\"1\", null, document, B_1, null);\n final Term uid = newUid(doc);\n final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n- final AtomicBoolean refreshed = new AtomicBoolean(false);\n for (int i = 0; i < numberOfOperations; i++) {\n if (randomBoolean()) {\n final Engine.Index index = new Engine.Index(\n@@ -3716,8 +3710,7 @@ public void testOutOfOrderSequenceNumbersWithVersionConflict() throws IOExceptio\n }\n \n assertThat(engine.seqNoService().getLocalCheckpoint(), equalTo(expectedLocalCheckpoint));\n- try (Engine.GetResult result = engine.get(new Engine.Get(true, \"type\", \"2\", uid), searcherFactory, (onRefresh) -> refreshed.set(exists))) {\n- assertEquals(\"failed to refresh\", exists, refreshed.get());\n+ try (Engine.GetResult result = engine.get(new Engine.Get(true, \"type\", \"2\", uid), searcherFactory)) {\n assertThat(result.exists(), equalTo(exists));\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java", "status": "modified" }, { "diff": "@@ -139,6 +139,7 @@\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n import static org.hamcrest.Matchers.hasKey;\n import static 
org.hamcrest.Matchers.hasSize;\n import static org.hamcrest.Matchers.hasToString;\n@@ -880,6 +881,26 @@ public void testShardStats() throws IOException {\n closeShards(shard);\n }\n \n+ public void testRefreshMetric() throws IOException {\n+ IndexShard shard = newStartedShard();\n+ assertThat(shard.refreshStats().getTotal(), equalTo(2L)); // one refresh on end of recovery, one on starting shard\n+ long initialTotalTime = shard.refreshStats().getTotalTimeInMillis();\n+ // check time advances\n+ for (int i = 1; shard.refreshStats().getTotalTimeInMillis() == initialTotalTime; i++) {\n+ indexDoc(shard, \"test\", \"test\");\n+ assertThat(shard.refreshStats().getTotal(), equalTo(2L + i - 1));\n+ shard.refresh(\"test\");\n+ assertThat(shard.refreshStats().getTotal(), equalTo(2L + i));\n+ assertThat(shard.refreshStats().getTotalTimeInMillis(), greaterThanOrEqualTo(initialTotalTime));\n+ }\n+ long refreshCount = shard.refreshStats().getTotal();\n+ indexDoc(shard, \"test\", \"test\");\n+ try (Engine.GetResult ignored = shard.get(new Engine.Get(true, \"test\", \"test\", new Term(\"_id\", \"test\")))) {\n+ assertThat(shard.refreshStats().getTotal(), equalTo(refreshCount + 1));\n+ }\n+ closeShards(shard);\n+ }\n+\n private ParsedDocument testParsedDocument(String id, String type, String routing,\n ParseContext.Document document, BytesReference source, Mapping mappingUpdate) {\n Field idField = new Field(\"_id\", id, IdFieldMapper.Defaults.FIELD_TYPE);", "filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java", "status": "modified" }, { "diff": "@@ -64,6 +64,7 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.Collections;\n import java.util.List;\n import java.util.Locale;\n import java.util.concurrent.atomic.AtomicBoolean;\n@@ -120,7 +121,7 @@ public void onFailedEngine(String reason, @Nullable Exception e) {\n store, newMergePolicy(), iwc.getAnalyzer(),\n iwc.getSimilarity(), new CodecService(null, logger), eventListener, translogHandler,\n IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig,\n- TimeValue.timeValueMinutes(5), listeners, null);\n+ TimeValue.timeValueMinutes(5), Collections.singletonList(listeners), null);\n engine = new InternalEngine(config);\n listeners.setTranslog(engine.getTranslog());\n }\n@@ -298,8 +299,7 @@ public void testLotsOfThreads() throws Exception {\n listener.assertNoError();\n \n Engine.Get get = new Engine.Get(false, \"test\", threadId, new Term(IdFieldMapper.NAME, threadId));\n- try (Engine.GetResult getResult = engine.get(get, engine::acquireSearcher,\n- onRefresh -> fail(\"shouldn't have a refresh\"))) {\n+ try (Engine.GetResult getResult = engine.get(get, engine::acquireSearcher)) {\n assertTrue(\"document not found\", getResult.exists());\n assertEquals(iteration, getResult.version());\n SingleFieldsVisitor visitor = new SingleFieldsVisitor(\"test\");", "filename": "core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java", "status": "modified" } ] }
{ "body": "Follow up of #25064\r\n\r\nThis test example shows the problem:\r\n\r\n```java\r\n public void testGroupsWithSecuredSettings() {\r\n MockSecureSettings secureSettings = new MockSecureSettings();\r\n secureSettings.setString(\"s3.client.myconfig.access_key\", \"myconfig_key\");\r\n secureSettings.setString(\"s3.client.myconfig.secret_key\", \"myconfig_secret\");\r\n secureSettings.setString(\"s3.client.default.access_key\", \"default_key\");\r\n secureSettings.setString(\"s3.client.default.secret_key\", \"default_secret\");\r\n Settings settings = Settings.builder().setSecureSettings(secureSettings).build();\r\n\r\n assertThat(settings.getGroups(\"s3.client.\").keySet(), containsInAnyOrder(\"myconfig\", \"default\"));\r\n }\r\n```\r\n\r\nThis gives:\r\n\r\n```\r\njava.lang.AssertionError: \r\nExpected: iterable over [\"myconfig\", \"default\"] in any order\r\n but: No item matches: \"myconfig\", \"default\" in []\r\n```\r\n\r\n", "comments": [], "number": 25069, "title": "Settings getGroups does not play well with secured settings" }
{ "body": "This commit fixes the group methods of Settings to properly include\r\ngrouped secure settings. Previously the secure settings were included\r\nbut without the group prefix being removed.\r\n\r\ncloses #25069", "number": 25076, "review_comments": [], "title": "Settings: Fix setting groups to include secure settings" }
{ "commits": [ { "message": "Settings: Fix setting groups to include secure settings\n\nThis commit fixes the group methdos of Settings to properly include\ngrouped secure settings. Previously the secure settings were included\nbut without the group prefix being removed.\n\ncloses #25069" } ], "files": [ { "diff": "@@ -507,35 +507,21 @@ public Map<String, Settings> getGroups(String settingPrefix, boolean ignoreNonGr\n }\n \n private Map<String, Settings> getGroupsInternal(String settingPrefix, boolean ignoreNonGrouped) throws SettingsException {\n- // we don't really care that it might happen twice\n- Map<String, Map<String, String>> map = new LinkedHashMap<>();\n- for (Object o : settings.keySet()) {\n- String setting = (String) o;\n- if (setting.startsWith(settingPrefix)) {\n- String nameValue = setting.substring(settingPrefix.length());\n- int dotIndex = nameValue.indexOf('.');\n- if (dotIndex == -1) {\n- if (ignoreNonGrouped) {\n- continue;\n- }\n- throw new SettingsException(\"Failed to get setting group for [\" + settingPrefix + \"] setting prefix and setting [\"\n- + setting + \"] because of a missing '.'\");\n- }\n- String name = nameValue.substring(0, dotIndex);\n- String value = nameValue.substring(dotIndex + 1);\n- Map<String, String> groupSettings = map.get(name);\n- if (groupSettings == null) {\n- groupSettings = new LinkedHashMap<>();\n- map.put(name, groupSettings);\n+ Settings prefixSettings = getByPrefix(settingPrefix);\n+ Map<String, Settings> groups = new HashMap<>();\n+ for (String groupName : prefixSettings.names()) {\n+ Settings groupSettings = prefixSettings.getByPrefix(groupName + \".\");\n+ if (groupSettings.isEmpty()) {\n+ if (ignoreNonGrouped) {\n+ continue;\n }\n- groupSettings.put(value, get(setting));\n+ throw new SettingsException(\"Failed to get setting group for [\" + settingPrefix + \"] setting prefix and setting [\"\n+ + settingPrefix + groupName + \"] because of a missing '.'\");\n }\n+ groups.put(groupName, groupSettings);\n }\n- Map<String, Settings> retVal = new LinkedHashMap<>();\n- for (Map.Entry<String, Map<String, String>> entry : map.entrySet()) {\n- retVal.put(entry.getKey(), new Settings(Collections.unmodifiableMap(entry.getValue()), secureSettings));\n- }\n- return Collections.unmodifiableMap(retVal);\n+\n+ return Collections.unmodifiableMap(groups);\n }\n /**\n * Returns group settings for the given setting prefix.", "filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java", "status": "modified" }, { "diff": "@@ -38,6 +38,7 @@\n import static org.hamcrest.Matchers.allOf;\n import static org.hamcrest.Matchers.arrayContaining;\n import static org.hamcrest.Matchers.contains;\n+import static org.hamcrest.Matchers.containsInAnyOrder;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasToString;\n@@ -525,6 +526,29 @@ public void testSecureSettingsPrefix() {\n assertTrue(prefixSettings.names().contains(\"foo\"));\n }\n \n+ public void testGroupPrefix() {\n+ MockSecureSettings secureSettings = new MockSecureSettings();\n+ secureSettings.setString(\"test.key1.foo\", \"somethingsecure\");\n+ secureSettings.setString(\"test.key1.bar\", \"somethingsecure\");\n+ secureSettings.setString(\"test.key2.foo\", \"somethingsecure\");\n+ secureSettings.setString(\"test.key2.bog\", \"somethingsecure\");\n+ Settings.Builder builder = Settings.builder();\n+ builder.put(\"test.key1.baz\", \"blah1\");\n+ builder.put(\"test.key1.other\", \"blah2\");\n+ 
builder.put(\"test.key2.baz\", \"blah3\");\n+ builder.put(\"test.key2.else\", \"blah4\");\n+ builder.setSecureSettings(secureSettings);\n+ Settings settings = builder.build();\n+ Map<String, Settings> groups = settings.getGroups(\"test\");\n+ assertEquals(2, groups.size());\n+ Settings key1 = groups.get(\"key1\");\n+ assertNotNull(key1);\n+ assertThat(key1.names(), containsInAnyOrder(\"foo\", \"bar\", \"baz\", \"other\"));\n+ Settings key2 = groups.get(\"key2\");\n+ assertNotNull(key2);\n+ assertThat(key2.names(), containsInAnyOrder(\"foo\", \"bog\", \"baz\", \"else\"));\n+ }\n+\n public void testEmptyFilterMap() {\n Settings.Builder builder = Settings.builder();\n builder.put(\"a\", \"a1\");", "filename": "core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java", "status": "modified" } ] }
{ "body": "I was investigating something else and it *looks* like refreshes caused by realtime gets do not get factored into the refresh cycle. I was just looking around the code and that is how it looked. This reproduces the problem, I think:\r\n\r\n```\r\nDELETE /test\r\n\r\nPUT /test\r\n{\r\n \"settings\": {\r\n \"refresh_interval\": -1\r\n }\r\n}\r\n\r\nPUT /test/doc/1\r\n{\r\n \"test\": \"TEST\"\r\n}\r\n\r\nGET /test/doc/1\r\n\r\nGET /test/_stats/refresh\r\n```\r\n\r\nThe refresh stats should show a refresh. Actually, it might be useful to track just how many of these we're getting in their own stat either on their own or in addition to refresh stats because many of them is indicative of a problem.", "comments": [ { "body": "> The refresh stats should show a refresh.\r\n\r\n+1\r\n\r\n> Actually, it might be useful to track just how many of these we're getting in their own stat either on their own or in addition to refresh stats because many of them is indicative of a problem.\r\n\r\nI wonder if we need this? What we really care about is how many refreshes happen per sec. Doesn't matter what caused them (and it's very easy to see what the refresh interval settings is). \r\n", "created_at": "2017-05-19T19:56:26Z" }, { "body": "I agree, there is a bug here. The problem is the stats tracking is done for refresh calls that go through index shard, but not for refresh calls decided inside the engine.\r\n\r\n> Actually, it might be useful to track just how many of these we're getting in their own stat either on their own or in addition to refresh stats because many of them is indicative of a problem.\r\n\r\nI do not think that we need this. A refresh is a refresh.", "created_at": "2017-05-19T19:58:50Z" }, { "body": "> I do not think that we need this. A refresh is a refresh.\r\n\r\nI'm not sure we need this either. If we had this then we could point to it and say \"there, that is why you have so many refreshes\" without having to dig deeply. It'd be an easy thing to tell people to check.", "created_at": "2017-05-19T20:10:38Z" } ], "number": 24806, "title": "Refreshes caused by realtime gets are not counted in the refresh stats" }
{ "body": "Is it a way to solve refresh stats tracking by passing `refreshMetric` into `get` function ? And it just keeps the refresh number count as before.\r\n\r\nRelated to #24806 ", "number": 25052, "review_comments": [ { "body": "Sorry, I do not think this should be done by pushing the metric into the engine. Now there are two places the metric can be updated and it's mixing things up. This needs to stay at the index shard level and can be done via callbacks.", "created_at": "2017-06-05T09:31:31Z" }, { "body": "I had not idea we had this! I'd use `Runnable` instead.", "created_at": "2017-06-05T14:02:22Z" }, { "body": "I'd name it `onRefresh` because the engine doesn't want to know about metrics at all, even in names.", "created_at": "2017-06-05T14:05:29Z" }, { "body": "Honestly I'd nuke this rather than make the callback `@Nullable`.", "created_at": "2017-06-05T14:07:23Z" }, { "body": "Let's use a LongConsumer here to avoid boxing.", "created_at": "2017-06-05T14:34:55Z" }, { "body": "+1", "created_at": "2017-06-05T14:35:00Z" }, { "body": "+1", "created_at": "2017-06-05T14:35:21Z" }, { "body": "If we totally delete this, we also need to replace the call of this method in `InternalEngineTests` and `RefreshListenersTests` by:\r\n`get(Get get, Function<String, Searcher> searcherFactory, LongConsumer onRefresh)`\r\nex.https://github.com/PnPie/elasticsearch/blob/test/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java#L923 \r\nFor non-realtime get we can pass `null` for `onRefresh` consumer, for realtime get so we just pass sth. whatever like `(x) -> {}` ?", "created_at": "2017-06-05T16:41:49Z" }, { "body": "I'd pass `() -> fail(\"didn't expect a refresh\")` when you don't expect a GET and do something a bit more elaborate when you expect a refresh:\r\n```\r\nAtomicBoolean refreshed = new AtomicBoolean(false);\r\nengine.get(get, () -> refreshed.set(true));\r\nassertTrue(\"expected a refresh\", refreshed.get());\r\n```", "created_at": "2017-06-05T17:06:08Z" }, { "body": "This import can be dropped now.", "created_at": "2017-06-06T14:26:51Z" }, { "body": "This is the wrong exception type (it's a security exception), and I'm not sure if we even need to throw here, just let it NPE?", "created_at": "2017-06-06T14:28:56Z" }, { "body": "Yeah. NPE should be fine.", "created_at": "2017-06-06T14:30:18Z" } ], "title": "Add refresh stats tracking for realtime get" }
{ "commits": [ { "message": "add refresh stats tracking for realtime get" }, { "message": "modify refresh metric update via callback while doing a realtime get" }, { "message": "fix refresh tracking" }, { "message": "Small cleanup\n\nDropped a null check we don't need anymore, cleaned some now-unused\nimports, and removed an unused field." }, { "message": "checkstyle...." } ], "files": [ { "diff": "@@ -89,6 +89,7 @@\n import java.util.concurrent.locks.ReentrantLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n import java.util.function.Function;\n+import java.util.function.LongConsumer;\n \n public abstract class Engine implements Closeable {\n \n@@ -485,11 +486,7 @@ protected final GetResult getFromSearcher(Get get, Function<String, Searcher> se\n }\n }\n \n- public final GetResult get(Get get) throws EngineException {\n- return get(get, this::acquireSearcher);\n- }\n-\n- public abstract GetResult get(Get get, Function<String, Searcher> searcherFactory) throws EngineException;\n+ public abstract GetResult get(Get get, Function<String, Searcher> searcherFactory, LongConsumer onRefresh) throws EngineException;\n \n /**\n * Returns a new searcher instance. The consumer of this", "filename": "core/src/main/java/org/elasticsearch/index/engine/Engine.java", "status": "modified" }, { "diff": "@@ -92,6 +92,7 @@\n import java.util.concurrent.locks.Lock;\n import java.util.concurrent.locks.ReentrantLock;\n import java.util.function.Function;\n+import java.util.function.LongConsumer;\n import java.util.function.LongSupplier;\n \n public class InternalEngine extends Engine {\n@@ -404,7 +405,7 @@ private SearcherManager createSearcherManager() throws EngineException {\n }\n \n @Override\n- public GetResult get(Get get, Function<String, Searcher> searcherFactory) throws EngineException {\n+ public GetResult get(Get get, Function<String, Searcher> searcherFactory, LongConsumer onRefresh) throws EngineException {\n assert Objects.equals(get.uid().field(), uidField) : get.uid().field();\n try (ReleasableLock ignored = readLock.acquire()) {\n ensureOpen();\n@@ -418,7 +419,9 @@ public GetResult get(Get get, Function<String, Searcher> searcherFactory) throws\n throw new VersionConflictEngineException(shardId, get.type(), get.id(),\n get.versionType().explainConflictForReads(versionValue.version, get.version()));\n }\n+ long time = System.nanoTime();\n refresh(\"realtime_get\");\n+ onRefresh.accept(System.nanoTime() - time);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" }, { "diff": "@@ -661,7 +661,7 @@ private Engine.DeleteResult delete(Engine engine, Engine.Delete delete) throws I\n \n public Engine.GetResult get(Engine.Get get) {\n readAllowed();\n- return getEngine().get(get, this::acquireSearcher);\n+ return getEngine().get(get, this::acquireSearcher, (timeElapsed) -> refreshMetric.inc(timeElapsed));\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -920,7 +920,10 @@ public void testConcurrentGetAndFlush() throws Exception {\n engine.index(indexForDoc(doc));\n \n final AtomicReference<Engine.GetResult> latestGetResult = new AtomicReference<>();\n- latestGetResult.set(engine.get(newGet(true, doc)));\n+ final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final AtomicBoolean refreshed = new AtomicBoolean(false);\n+ latestGetResult.set(engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> 
refreshed.set(true)));\n+ assertTrue(\"failed to refresh\", refreshed.get());\n final AtomicBoolean flushFinished = new AtomicBoolean(false);\n final CyclicBarrier barrier = new CyclicBarrier(2);\n Thread getThread = new Thread(() -> {\n@@ -934,7 +937,7 @@ public void testConcurrentGetAndFlush() throws Exception {\n if (previousGetResult != null) {\n previousGetResult.release();\n }\n- latestGetResult.set(engine.get(newGet(true, doc)));\n+ latestGetResult.set(engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause a flush is just done\")));\n if (latestGetResult.get().exists() == false) {\n break;\n }\n@@ -954,6 +957,9 @@ public void testSimpleOperations() throws Exception {\n MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(0));\n searchResult.close();\n \n+ final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final AtomicBoolean refreshed = new AtomicBoolean(false);\n+\n // create a document\n Document document = testDocumentWithTextField();\n document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE));\n@@ -967,12 +973,13 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // but, not there non realtime\n- Engine.GetResult getResult = engine.get(newGet(false, doc));\n+ Engine.GetResult getResult = engine.get(newGet(false, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have a refresh\"));\n assertThat(getResult.exists(), equalTo(false));\n getResult.release();\n \n // but, we can still get it (in realtime)\n- getResult = engine.get(newGet(true, doc));\n+ getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> refreshed.set(true));\n+ assertTrue(\"failed to refresh\", refreshed.getAndSet(false));\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -987,7 +994,7 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // also in non realtime\n- getResult = engine.get(newGet(false, doc));\n+ getResult = engine.get(newGet(false, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have a refresh\"));\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -1007,7 +1014,8 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // but, we can still get it (in realtime)\n- getResult = engine.get(newGet(true, doc));\n+ getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> refreshed.set(true));\n+ assertTrue(\"failed to refresh\", refreshed.get());\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -1032,7 +1040,7 @@ public void testSimpleOperations() throws Exception {\n searchResult.close();\n \n // but, get should not see it (in realtime)\n- getResult = engine.get(newGet(true, doc));\n+ getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause the document is deleted\"));\n assertThat(getResult.exists(), equalTo(false));\n getResult.release();\n \n@@ -1072,7 +1080,7 @@ public void testSimpleOperations() throws Exception {\n engine.flush();\n \n // and, verify get (in real time)\n- getResult = engine.get(newGet(true, doc));\n+ getResult = engine.get(newGet(true, doc), searcherFactory, 
(onRefresh) -> fail(\"shouldn't have refreshed cause a flush is just done\"));\n assertThat(getResult.exists(), equalTo(true));\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n@@ -1858,6 +1866,8 @@ class OpAndVersion {\n ParsedDocument doc = testParsedDocument(\"1\", null, testDocument(), bytesArray(\"\"), null);\n final Term uidTerm = newUid(doc);\n engine.index(indexForDoc(doc));\n+ final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final AtomicBoolean refreshed = new AtomicBoolean(false);\n for (int i = 0; i < thread.length; i++) {\n thread[i] = new Thread(() -> {\n startGun.countDown();\n@@ -1867,7 +1877,7 @@ class OpAndVersion {\n throw new AssertionError(e);\n }\n for (int op = 0; op < opsPerThread; op++) {\n- try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm))) {\n+ try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm), searcherFactory, (onRefresh) -> refreshed.set(true))) {\n FieldsVisitor visitor = new FieldsVisitor(true);\n get.docIdAndVersion().context.reader().document(get.docIdAndVersion().docId, visitor);\n List<String> values = new ArrayList<>(Strings.commaDelimitedListToSet(visitor.source().utf8ToString()));\n@@ -1895,6 +1905,7 @@ class OpAndVersion {\n for (int i = 0; i < thread.length; i++) {\n thread[i].join();\n }\n+ assertTrue(\"failed to refresh\", refreshed.getAndSet(false));\n List<OpAndVersion> sortedHistory = new ArrayList<>(history);\n sortedHistory.sort(Comparator.comparing(o -> o.version));\n Set<String> currentValues = new HashSet<>();\n@@ -1909,7 +1920,8 @@ class OpAndVersion {\n assertTrue(op.added + \" should not exist\", exists);\n }\n \n- try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm))) {\n+ try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), uidTerm), searcherFactory, (onRefresh) -> refreshed.set(true))) {\n+ assertTrue(\"failed to refresh\", refreshed.get());\n FieldsVisitor visitor = new FieldsVisitor(true);\n get.docIdAndVersion().context.reader().document(get.docIdAndVersion().docId, visitor);\n List<String> values = Arrays.asList(Strings.commaDelimitedListToStringArray(visitor.source().utf8ToString()));\n@@ -2262,6 +2274,8 @@ public void testEnableGcDeletes() throws Exception {\n Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), newMergePolicy(), null))) {\n engine.config().setEnableGcDeletes(false);\n \n+ final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+\n // Add document\n Document document = testDocument();\n document.add(new TextField(\"value\", \"test1\", Field.Store.YES));\n@@ -2273,7 +2287,7 @@ public void testEnableGcDeletes() throws Exception {\n engine.delete(new Engine.Delete(\"test\", \"1\", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()));\n \n // Get should not find the document\n- Engine.GetResult getResult = engine.get(newGet(true, doc));\n+ Engine.GetResult getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause the document is deleted\"));\n assertThat(getResult.exists(), equalTo(false));\n \n // Give the gc pruning logic a chance to kick in\n@@ -2287,7 +2301,7 @@ public void testEnableGcDeletes() throws Exception {\n engine.delete(new Engine.Delete(\"test\", \"2\", newUid(\"2\"), 
SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()));\n \n // Get should not find the document (we never indexed uid=2):\n- getResult = engine.get(new Engine.Get(true, \"type\", \"2\", newUid(\"2\")));\n+ getResult = engine.get(new Engine.Get(true, \"type\", \"2\", newUid(\"2\")), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause document doesn't exists\"));\n assertThat(getResult.exists(), equalTo(false));\n \n // Try to index uid=1 with a too-old version, should fail:\n@@ -2297,7 +2311,7 @@ public void testEnableGcDeletes() throws Exception {\n assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class));\n \n // Get should still not find the document\n- getResult = engine.get(newGet(true, doc));\n+ getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause document doesn't exists\"));\n assertThat(getResult.exists(), equalTo(false));\n \n // Try to index uid=2 with a too-old version, should fail:\n@@ -2307,7 +2321,7 @@ public void testEnableGcDeletes() throws Exception {\n assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class));\n \n // Get should not find the document\n- getResult = engine.get(newGet(true, doc));\n+ getResult = engine.get(newGet(true, doc), searcherFactory, (onRefresh) -> fail(\"shouldn't have refreshed cause document doesn't exists\"));\n assertThat(getResult.exists(), equalTo(false));\n }\n }\n@@ -3639,6 +3653,8 @@ public void testOutOfOrderSequenceNumbersWithVersionConflict() throws IOExceptio\n document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE));\n final ParsedDocument doc = testParsedDocument(\"1\", null, document, B_1, null);\n final Term uid = newUid(doc);\n+ final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final AtomicBoolean refreshed = new AtomicBoolean(false);\n for (int i = 0; i < numberOfOperations; i++) {\n if (randomBoolean()) {\n final Engine.Index index = new Engine.Index(\n@@ -3700,7 +3716,8 @@ public void testOutOfOrderSequenceNumbersWithVersionConflict() throws IOExceptio\n }\n \n assertThat(engine.seqNoService().getLocalCheckpoint(), equalTo(expectedLocalCheckpoint));\n- try (Engine.GetResult result = engine.get(new Engine.Get(true, \"type\", \"2\", uid))) {\n+ try (Engine.GetResult result = engine.get(new Engine.Get(true, \"type\", \"2\", uid), searcherFactory, (onRefresh) -> refreshed.set(exists))) {\n+ assertEquals(\"failed to refresh\", exists, refreshed.get());\n assertThat(result.exists(), equalTo(exists));\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java", "status": "modified" }, { "diff": "@@ -298,7 +298,8 @@ public void testLotsOfThreads() throws Exception {\n listener.assertNoError();\n \n Engine.Get get = new Engine.Get(false, \"test\", threadId, new Term(IdFieldMapper.NAME, threadId));\n- try (Engine.GetResult getResult = engine.get(get)) {\n+ try (Engine.GetResult getResult = engine.get(get, engine::acquireSearcher,\n+ onRefresh -> fail(\"shouldn't have a refresh\"))) {\n assertTrue(\"document not found\", getResult.exists());\n assertEquals(iteration, getResult.version());\n SingleFieldsVisitor visitor = new SingleFieldsVisitor(\"test\");", "filename": "core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java", "status": "modified" } ] }
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.4.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: java version \"1.8.0_131\"\r\n\r\n**OS version**: Debian 3.16.39-1+deb8u2\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nMaking a HEAD request on an existing index to check the existance of a given alias always returns 200 OK, even if the alias doesn't exist (for that index).\r\n\r\nI expect a 404 status like in previous versions of ES.\r\n\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. create an index (i.e. foo_index)\r\n 2. `curl -I -XHEAD 'localhost:9200/foo_index/_alias/nonexisting_alias'`\r\n\r\n```\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-length: 2\r\n```\r\n\r\n", "comments": [ { "body": "We aligned the HEAD and GET methods (as per the HTTP specification). The problem here is that it appears that GET does not return a 404 on a non-existing alias, instead it returns an empty body. I think the reason for this is in case you specify multiple indices (`GET /i/_alias/a,b`) and `a` exists but `b` doesn't we would return 200 OK and `a` and so this devolves to an empty body on `GET /i/_alias/b`). This seems wrong to me. I will solicit the thoughts of others.", "created_at": "2017-05-12T13:33:58Z" }, { "body": "@javanna @clintongormley Thoughts here?", "created_at": "2017-05-12T13:34:47Z" }, { "body": "With indices, you would be able to control how to treat unavailable indices through the `ignore_unavailable` parameter. But when resolving aliases, we never throw exception if the provided names don't exist and there is no option to change that. Hence we return 200 all the time. Maybe we should apply the indices options to the alias side as well and allow for throwing exception hence returning 404 when one or more aliases is missing? Otherwise, I am not sure we should always throw whenever an alias is not found as that becomes problematic when one alias is there but others aren't.\r\n\r\nUnrelated, but I would prefer to have get alias return something that indicates more clearly that the index exists but the alias doesn't, like:\r\n\r\n```\r\n{\r\n \"foo_index\" : {}\r\n}\r\n```\r\n\r\nrather than completely skipping the index object in the response and returning an empty body. But that is a different concern and doesn't address this issue. ", "created_at": "2017-05-12T13:57:28Z" }, { "body": "> Otherwise, I am not sure we should always throw whenever an alias is not found as that becomes problematic when one alias is there but others aren't.\r\n\r\nSure, I meant to question only the behavior when none are found.", "created_at": "2017-05-12T15:13:40Z" }, { "body": "right, we could also throw error in get alias and 404 in alias exists when nothing is found. That would be ok I guess.", "created_at": "2017-05-12T17:34:58Z" }, { "body": "@clintongormley After spelunking through git, it appears the behavior here might have originated with an older [comment](https://github.com/elastic/elasticsearch/issues/4071#issuecomment-32045177). Do you still think we should not 404 if there are no matching aliases?", "created_at": "2017-05-15T13:09:24Z" }, { "body": "\r\n> After spelunking through git, it appears the behavior here might have originated with an older comment. Do you still think we should not 404 if there are no matching aliases?\r\n\r\nIt is hard to know the right thing to do here. I think the problem comes down to the API itself. 
The `{index}` portion is really `{index-or-alias}` as either can be provided. The `{alias}` part means \"filter the result to only show this alias\". I'm not even sure that this filtering functionality is practically useful. When would you ask for filtered aliases rather than all the aliases for an index?\r\n\r\nOn top of that, aligning the GET and HEAD behaviours doesn't make much sense here either as they're asking for different things. The HEAD request in our API is defined as \"does this alias exist on this index?\". \r\n\r\nThe only way I can think to make these two consistent is to throw a 404 if any of the listed aliases doesn't exist. Maybe that's OK. It's not terribly useful for GET, but it is useful for HEAD. Of course, this would be a breaking change\r\n\r\n", "created_at": "2017-05-15T13:36:10Z" }, { "body": "I think this should be mentioned in the [Breaking changes in 5.4](https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-5.4.html#breaking-changes-5.4) document and/or fixed, because `HEAD` request returns `404` for this endpoint in Elasticsearch 5.3 (I guess it works in the same way in all 5.* versions except 5.4 and higher). As far as remember versions 2.* and in 1.* also return `404`.\r\n\r\nAlso the official python client doesn't know how to handle new behaviour (see [the `exists_alias` method of `elasticsearch.client.IndicesClient`](https://github.com/elastic/elasticsearch-py/blob/5.4.0/elasticsearch/client/indices.py#L357) in `elasticsearch>=5.0.0,<6.0.0`), so I guess update to 5.4 breaks a lot of applications that use this package.\r\n\r\nPeople should know where to look. I've spent almost whole day on debugging. I thought it was an error in my application.", "created_at": "2017-06-02T15:28:00Z" }, { "body": "hi @m1kola which breaking change do you mean? You commented on an issue that is opened for discussion, hence no change directly related to this issue has been committed yet.", "created_at": "2017-06-02T15:50:02Z" }, { "body": "@m1kola It's in the [release notes for 5.4.0](https://www.elastic.co/guide/en/elasticsearch/reference/current/release-notes-5.4.0.html) as this is considered a bug fix (the fact that HEAD and GET had mismatched behavior is a specification violation, and therefore a bug). The line item is:\r\n\r\n>Fix alias HEAD requests #23094 (issue: #21125)", "created_at": "2017-06-02T15:53:25Z" }, { "body": "> hi @m1kola which breaking change do you mean? You commented on an issue that is opened for discussion, hence no change directly related to this issue has been committed yet.\r\n\r\nHi @javanna. I know that the issue is marked for discussion, but it looks like initially @jbfewo wanted to report a bug. See [the first comment](https://github.com/elastic/elasticsearch/issues/24644#issue-228274526).\r\n\r\n> I expect a 404 status like in previous versions of ES.\r\n\r\n---\r\n\r\n> @m1kola It's in the release notes for 5.4.0 as this is considered a bug fix (the fact that HEAD and GET had mismatched behavior is a specification violation, and therefore a bug). The line item is:\r\n> \r\n> > Fix alias HEAD requests #23094 (issue: #21125)\r\n\r\nHi @jasontedor. Yes, it fixes specification, but introduces incompatibility with 5.3 and older versions. 
See the demo below.\r\n\r\n# Steps to compare/reproduce\r\n\r\nI compare the behaviour of ES 5.3.3 and 5.4.1 using the official Docker images.\r\n\r\nES 5.3.3:\r\n```\r\ndocker run -p 9200:9200 -e \"http.host=0.0.0.0\" -e \"transport.host=127.0.0.1\" docker.elastic.co/elasticsearch/elasticsearch:5.3.3\r\n```\r\n\r\nES 5.4.1:\r\n```\r\ndocker run -p 9200:9200 -e \"http.host=0.0.0.0\" -e \"transport.host=127.0.0.1\" docker.elastic.co/elasticsearch/elasticsearch:5.4.1\r\n```\r\n\r\n## Check an alias. Index doesn't exist.\r\n\r\nCheck the `this_index_should_be_deleted` alias exists. The `my_index` index doesn't exist, at the moment of a request.\r\n\r\n**Request:**\r\n\r\n```\r\ncurl -H \"Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==\" -H \"Content-Type: application/json; charset=UTF-8\" -I 'http://localhost:9200/my_index/_alias/this_index_should_be_deleted?pretty'\r\n```\r\n\r\n**Response. Both 5.3.3 and 5.4.1 return `404`:**\r\n\r\n```\r\nHTTP/1.1 404 Not Found\r\ncontent-type: text/plain; charset=UTF-8\r\ncontent-length: 0\r\n```\r\n\r\n## Create the `my_index` index\r\n\r\nRequest:\r\n```\r\ncurl -sD - -H \"Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==\" -H \"Content-Type: application/json; charset=UTF-8\" -XPUT 'http://localhost:9200/my_index?pretty' -d '{}'\r\n```\r\n\r\nResponse. Same on both versions:\r\n```\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-length: 60\r\n\r\n{\r\n \"acknowledged\" : true,\r\n \"shards_acknowledged\" : true\r\n}\r\n```\r\n\r\n## Check an alias. Alias doesn't really exist, but index exists\r\n\r\nRequest:\r\n```\r\ncurl -H \"Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==\" -H \"Content-Type: application/json; charset=UTF-8\" -I 'http://localhost:9200/my_index/_alias/this_index_should_be_deleted?pretty'\r\n```\r\nResponse on 5.3.3:\r\n```\r\nHTTP/1.1 404 Not Found\r\ncontent-type: text/plain; charset=UTF-8\r\ncontent-length: 0\r\n```\r\n\r\n&#x1F534; Response on 5.4.1:\r\n```\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-length: 4\r\n```\r\n\r\n## Create an alias\r\n\r\nRequest:\r\n\r\n```\r\ncurl -sD - -H \"Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==\" -H \"Content-Type: application/json; charset=UTF-8\" -XPUT 'http://localhost:9200/my_index/_alias/alias_for_my_index?pretty' -d '{}'\r\n```\r\n\r\nResponse. Same on both versions:\r\n\r\n```\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-length: 28\r\n\r\n{\r\n \"acknowledged\" : true\r\n}\r\n```\r\n\r\n## Check the alias. Both index and alias are exist\r\n\r\nRequest:\r\n\r\n```\r\ncurl -sD - -H \"Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==\" -H \"Content-Type: application/json; charset=UTF-8\" -XPUT 'http://localhost:9200/my_index/_alias/alias_for_my_index?pretty' -d '{}'\r\n```\r\n\r\nResponse. Same on both versions:\r\n\r\n```\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-length: 28\r\n\r\n{\r\n \"acknowledged\" : true\r\n}\r\n```\r\n\r\nHope this helps.", "created_at": "2017-06-02T17:49:41Z" }, { "body": "It's just mislabeled, we are going to fix this.", "created_at": "2017-06-02T18:20:39Z" }, { "body": "I opened #25043.", "created_at": "2017-06-03T12:14:18Z" } ], "number": 24644, "title": "HEAD /index/_alias/ always returns 200 OK even alias is not set" }
{ "body": "Previously the HEAD and GET aliases endpoints were misaligned in behavior. The HEAD verb would 404 if any aliases are missing while the GET verb would not if any aliases existed. When HEAD was aligned with GET, this broke the previous usage of HEAD to serve as an existence check for aliases. It is the behavior of GET that is problematic here though, if any alias is missing the request should 404. This commit addresses this by modifying the behavior of GET to behave in this way. This fixes the behavior for HEAD to also 404 when aliases are missing.\r\n\r\nCloses #24644\r\n", "number": 25043, "review_comments": [ { "body": "You can probably use `aliasMap.values()` since you don't need the key for anything right? I don't think it's necessarily any better though, so either way is fine.", "created_at": "2017-06-06T15:26:03Z" }, { "body": "I think we should have a consistent message regardless of the number of aliases, I know it's bad form for people to write tests against error messages, but that doesn't mean people don't. So, I think we should stick with just `aliases [foo] missing`. What do you think? (I'm only +0 on the change)", "created_at": "2017-06-06T15:31:16Z" }, { "body": "could be `status.getStatus()` in the event we change the local variable above.", "created_at": "2017-06-06T15:32:10Z" }, { "body": "We can remove the `!` if we reverse this if statement, so\r\n\r\n```java\r\nif (difference.isEmpty()) {\r\n status = RestStatus.OK;\r\n} else {\r\n ... the error stuff ...\r\n}", "created_at": "2017-06-06T15:35:17Z" }, { "body": "Personally I think we should return `RestStatus.PARTIAL_CONTENT` (206) in the event that requested aliases were requested and *some* were found while some were not, but it sounds like this ship has already sailed as far as what to return.", "created_at": "2017-06-06T15:38:33Z" }, { "body": "I think this should be `5.4.99`, since this will be backported, correct?", "created_at": "2017-06-06T15:54:15Z" }, { "body": "Same here about version", "created_at": "2017-06-06T15:54:24Z" }, { "body": "Same here for version", "created_at": "2017-06-06T15:54:32Z" }, { "body": "And same here for version", "created_at": "2017-06-06T15:54:41Z" }, { "body": "I disagree, see my top-level comment.", "created_at": "2017-06-06T18:03:35Z" }, { "body": "I disagree, see my top-level comment.", "created_at": "2017-06-06T18:03:38Z" }, { "body": "I disagree, see my top-level comment.", "created_at": "2017-06-06T18:03:40Z" }, { "body": "I disagree, see my top-level comment.", "created_at": "2017-06-06T18:03:44Z" }, { "body": "I disagree that it's bad form, and I have a test for both cases. It's a pet peeve of mine to see messages like \"error(s) were encountered while processing\"; the information to determine whether \"an error was encountered while processing\" or \"errors were encountered while processing\" should be provided is available, the programmer just needs to write some simple code to give a correct error message.\r\n\r\nWhy do you think it should be consistent instead of correct in every case?", "created_at": "2017-06-06T18:08:36Z" }, { "body": "Okay.", "created_at": "2017-06-06T18:10:02Z" }, { "body": "Okay.", "created_at": "2017-06-06T18:10:42Z" }, { "body": "206 is wrong, it's for range requests only. Also, 2xx is often not seen as an error client side while 4xx is. 
\r\n", "created_at": "2017-06-06T18:12:49Z" }, { "body": "> Why do you think it should be consistent instead of correct in every case?\r\n\r\nAs I mentioned above, mostly for users wrapping ES in their own application, anyone that wants to \"wrap\" this error needs to change parsing `aliases [.*] missing` to `alias(es)? [.*] missing`, which will not be obvious to an end user unless they hit the endpoint with both single and mulitple aliases. I would expect someone who wanted to wrap ES to hit the endpoint, see the error, and then write something that wrapped that error in a prettier way for their application, not realizing that the error may change depending on the cardinality of the aliases.\r\n\r\nLike I said above though, I'm only +0 on it, if you don't agree we don't have to change it.", "created_at": "2017-06-06T18:16:00Z" }, { "body": "@dakrone Yeah I'm not sure about that concern since I do not think we offer any guarantees on these error messages.", "created_at": "2017-06-06T18:37:08Z" } ], "title": "GET aliases should 404 if aliases are missing" }
{ "commits": [ { "message": "GET aliases should 404 if aliases are missing\n\nPreviously the HEAD and GET aliases endpoints were misaigned in\nbehavior. The HEAD verb would 404 if any aliases are missing while the\nGET verb would not if any aliases existed. When HEAD was aligned with\nGET, this broke the previous usage of HEAD to serve as an existence\ncheck for aliases. It is the behavior of GET that is problematic here\nthough, if any alias is missing the request should 404. This commit\naddresses this by modifying the behavior of GET to behave in this\nway. This fixes the behavior for HEAD to also 404 when aliases are\nmissing." }, { "message": "Handle wildcards" }, { "message": "Fix comment" }, { "message": "Include existing aliases" }, { "message": "Merge branch 'master' into get-aliases-not-found\n\n* master:\n Add support for clear scroll to high level REST client (#25038)\n Tiny correction in inner-hits.asciidoc (#25066)\n Added release notes for 6.0.0-alpha2\n Expand index expressions against indices only when managing aliases (#23997)\n Collapse inner hits rest test should not skip 5.x\n Settings: Fix secure settings by prefix (#25064)\n add `exclude_keys` option to KeyValueProcessor (#24876)\n Test: update missing body tests to run against versions >= 5.5.0\n Track EWMA[1] of task execution time in search threadpool executor\n Removes an invalid assert in resizing big arrays which does not always hold (resizing can result in a smaller size than the current size, while the assert attempted to verify the new size is always greater than the current).\n Fixed NPEs caused by requests without content. (#23497)\n Plugins can register pre-configured char filters (#25000)\n Build: Allow preserving shared dir (#24962)\n Tests: Make secure settings available from settings builder for tests (#25037)" }, { "message": "Fix status" }, { "message": "Simplify" }, { "message": "Reverse" }, { "message": "Remove import" } ], "files": [ { "diff": "@@ -21,11 +21,19 @@\n \n import java.util.Collection;\n import java.util.Collections;\n+import java.util.EnumSet;\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.Objects;\n import java.util.Set;\n+import java.util.SortedSet;\n+import java.util.TreeSet;\n import java.util.concurrent.ConcurrentHashMap;\n+import java.util.function.BiConsumer;\n+import java.util.function.BinaryOperator;\n+import java.util.function.Function;\n+import java.util.function.Supplier;\n+import java.util.stream.Collector;\n import java.util.stream.Collectors;\n \n public final class Sets {\n@@ -69,6 +77,47 @@ public static <T> Set<T> difference(Set<T> left, Set<T> right) {\n return left.stream().filter(k -> !right.contains(k)).collect(Collectors.toSet());\n }\n \n+ public static <T> SortedSet<T> sortedDifference(Set<T> left, Set<T> right) {\n+ Objects.requireNonNull(left);\n+ Objects.requireNonNull(right);\n+ return left.stream().filter(k -> !right.contains(k)).collect(new SortedSetCollector<>());\n+ }\n+\n+ private static class SortedSetCollector<T> implements Collector<T, SortedSet<T>, SortedSet<T>> {\n+\n+ @Override\n+ public Supplier<SortedSet<T>> supplier() {\n+ return TreeSet::new;\n+ }\n+\n+ @Override\n+ public BiConsumer<SortedSet<T>, T> accumulator() {\n+ return (s, e) -> s.add(e);\n+ }\n+\n+ @Override\n+ public BinaryOperator<SortedSet<T>> combiner() {\n+ return (s, t) -> {\n+ s.addAll(t);\n+ return s;\n+ };\n+ }\n+\n+ @Override\n+ public Function<SortedSet<T>, SortedSet<T>> finisher() {\n+ return Function.identity();\n+ }\n+\n+ static final 
Set<Characteristics> CHARACTERISTICS =\n+ Collections.unmodifiableSet(EnumSet.of(Collector.Characteristics.IDENTITY_FINISH));\n+\n+ @Override\n+ public Set<Characteristics> characteristics() {\n+ return CHARACTERISTICS;\n+ }\n+\n+ }\n+\n public static <T> Set<T> union(Set<T> left, Set<T> right) {\n Objects.requireNonNull(left);\n Objects.requireNonNull(right);", "filename": "core/src/main/java/org/elasticsearch/common/util/set/Sets.java", "status": "modified" }, { "diff": "@@ -19,15 +19,18 @@\n \n package org.elasticsearch.rest.action.admin.indices;\n \n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n-\n import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest;\n import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -39,14 +42,17 @@\n import org.elasticsearch.rest.action.RestBuilderListener;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.HashSet;\n import java.util.List;\n import java.util.Locale;\n+import java.util.Set;\n+import java.util.SortedSet;\n import java.util.stream.Collectors;\n \n import static org.elasticsearch.rest.RestRequest.Method.GET;\n import static org.elasticsearch.rest.RestRequest.Method.HEAD;\n-import static org.elasticsearch.rest.RestStatus.OK;\n \n /**\n * The REST handler for get alias and head alias APIs.\n@@ -80,41 +86,68 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n return channel -> client.admin().indices().getAliases(getAliasesRequest, new RestBuilderListener<GetAliasesResponse>(channel) {\n @Override\n public RestResponse buildResponse(GetAliasesResponse response, XContentBuilder builder) throws Exception {\n- if (response.getAliases().isEmpty()) {\n- // empty body if indices were specified but no matching aliases exist\n- if (indices.length > 0) {\n- return new BytesRestResponse(OK, builder.startObject().endObject());\n+ final ImmutableOpenMap<String, List<AliasMetaData>> aliasMap = response.getAliases();\n+\n+ final Set<String> aliasNames = new HashSet<>();\n+ for (final ObjectCursor<List<AliasMetaData>> cursor : aliasMap.values()) {\n+ for (final AliasMetaData aliasMetaData : cursor.value) {\n+ aliasNames.add(aliasMetaData.alias());\n+ }\n+ }\n+\n+ // first remove requested aliases that are exact matches\n+ final SortedSet<String> difference = Sets.sortedDifference(Arrays.stream(aliases).collect(Collectors.toSet()), aliasNames);\n+\n+ // now remove requested aliases that contain wildcards that are simple matches\n+ final List<String> matches = new ArrayList<>();\n+ outer:\n+ for (final String pattern : difference) {\n+ if (pattern.contains(\"*\")) {\n+ for (final String aliasName : aliasNames) {\n+ if (Regex.simpleMatch(pattern, aliasName)) {\n+ matches.add(pattern);\n+ continue outer;\n+ }\n+ }\n+ }\n+ }\n+ difference.removeAll(matches);\n+\n+ final RestStatus status;\n+ 
builder.startObject();\n+ {\n+ if (difference.isEmpty()) {\n+ status = RestStatus.OK;\n } else {\n- final String message = String.format(Locale.ROOT, \"alias [%s] missing\", toNamesString(getAliasesRequest.aliases()));\n- builder.startObject();\n- {\n- builder.field(\"error\", message);\n- builder.field(\"status\", RestStatus.NOT_FOUND.getStatus());\n+ status = RestStatus.NOT_FOUND;\n+ final String message;\n+ if (difference.size() == 1) {\n+ message = String.format(Locale.ROOT, \"alias [%s] missing\", toNamesString(difference.iterator().next()));\n+ } else {\n+ message = String.format(Locale.ROOT, \"aliases [%s] missing\", toNamesString(difference.toArray(new String[0])));\n }\n- builder.endObject();\n- return new BytesRestResponse(RestStatus.NOT_FOUND, builder);\n+ builder.field(\"error\", message);\n+ builder.field(\"status\", status.getStatus());\n }\n- } else {\n- builder.startObject();\n- {\n- for (final ObjectObjectCursor<String, List<AliasMetaData>> entry : response.getAliases()) {\n- builder.startObject(entry.key);\n+\n+ for (final ObjectObjectCursor<String, List<AliasMetaData>> entry : response.getAliases()) {\n+ builder.startObject(entry.key);\n+ {\n+ builder.startObject(\"aliases\");\n {\n- builder.startObject(\"aliases\");\n- {\n- for (final AliasMetaData alias : entry.value) {\n- AliasMetaData.Builder.toXContent(alias, builder, ToXContent.EMPTY_PARAMS);\n- }\n+ for (final AliasMetaData alias : entry.value) {\n+ AliasMetaData.Builder.toXContent(alias, builder, ToXContent.EMPTY_PARAMS);\n }\n- builder.endObject();\n }\n builder.endObject();\n }\n+ builder.endObject();\n }\n- builder.endObject();\n- return new BytesRestResponse(OK, builder);\n }\n+ builder.endObject();\n+ return new BytesRestResponse(status, builder);\n }\n+\n });\n }\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -108,6 +108,12 @@ public void testAliasExists() throws IOException {\n }\n }\n \n+ public void testAliasDoesNotExist() throws IOException {\n+ createTestDoc();\n+ headTestCase(\"/_alias/test_alias\", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0));\n+ headTestCase(\"/test/_alias/test_alias\", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0));\n+ }\n+\n public void testTemplateExists() throws IOException {\n try (XContentBuilder builder = jsonBuilder()) {\n builder.startObject();", "filename": "modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java", "status": "modified" }, { "diff": "@@ -1,5 +1,8 @@\n ---\n \"Basic test for delete alias\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing aliases\n \n - do:\n indices.create:\n@@ -25,8 +28,10 @@\n name: testali\n \n - do:\n+ catch: missing\n indices.get_alias:\n index: testind\n name: testali\n \n- - match: { '': {}}\n+ - match: { 'status': 404 }\n+ - match: { 'error': 'alias [testali] missing' }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_alias/10_basic.yml", "status": "modified" }, { "diff": "@@ -78,7 +78,6 @@ setup:\n \n ---\n \"Get aliases via /{index}/_alias/prefix*\":\n-\n - do:\n indices.get_alias:\n index: test_index\n@@ -166,25 +165,51 @@ setup:\n \n \n ---\n-\"Non-existent alias on an existing index returns an empty body\":\n+\"Non-existent alias on an existing index returns 404\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing aliases\n \n - do:\n+ catch: missing\n indices.get_alias:\n 
index: test_index\n name: non-existent\n \n- - match: { '': {}}\n+ - match: { 'status': 404}\n+ - match: { 'error': 'alias [non-existent] missing' }\n \n ---\n-\"Existent and non-existent alias returns just the existing\":\n+\"Existent and non-existent alias returns 404 and the existing alias\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing aliases\n \n - do:\n+ catch: missing\n indices.get_alias:\n index: test_index\n name: test_alias,non-existent\n \n- - match: {test_index.aliases.test_alias: {}}\n- - is_false: test_index.aliases.non-existent\n+ - match: { 'status': 404 }\n+ - match: { 'error': 'alias [non-existent] missing' }\n+ - match: { test_index.aliases.test_alias: { } }\n+\n+---\n+\"Existent and non-existent aliases returns 404 and the existing alias\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing aliases\n+\n+ - do:\n+ catch: missing\n+ indices.get_alias:\n+ index: test_index\n+ name: test_alias,non-existent,another-non-existent\n+\n+ - match: { 'status': 404 }\n+ - match: { 'error': 'aliases [another-non-existent,non-existent] missing' }\n+ - match: { test_index.aliases.test_alias: { } }\n \n ---\n \"Getting alias on an non-existent index should return 404\":", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_alias/10_basic.yml", "status": "modified" } ] }
{ "body": "When running a 5.2.2 client against a 5.3 or 5.4 server, when GeoDistance queries with default settings are sent, the server responds with an error:\r\n\r\nCaused by: java.io.IOException: Unknown GeoDistance ordinal [3]\r\n\r\nThis regression was introduced in 5.3 and on in this commit:\r\n\r\nhttps://github.com/elastic/elasticsearch/commit/3a2c628fabe2b30545cd16afc9227542b24bfcfe\r\n\r\nThat commit changed the ordinals of the Enums in a backwards-incompatible way.\r\n\r\nThis should be documented in the breaking changes for 5.3. This also breaks the promise of 5.x clients being able to talk to 5.x servers compatibly (note that the 5.2.2 client is not using any new features here). ", "comments": [ { "body": "Hmm I think we should actually fix those, the break was not intended.", "created_at": "2017-05-23T07:47:03Z" }, { "body": "++ @jpountz \r\n", "created_at": "2017-05-23T07:58:09Z" }, { "body": "closed with ffec0c6 ", "created_at": "2017-06-09T20:28:07Z" } ], "number": 24816, "title": "Document backwards-incompatible change for GeoDistance queries (5.3 breaking change/incompatibility)" }
{ "body": "`GeoDistance` enum ordinals from 5.3+ are not backwards compatible with earlier client versions.. This PR adds backward compatibilty support in `GeoDistance` serialization.\r\n\r\ncloses #24816 ", "number": 25033, "review_comments": [ { "body": "Can you also fix the write side so that if the client is on 5.4 while the cluster is on 5.3 then things would still work?", "created_at": "2017-06-05T07:19:06Z" }, { "body": "we should update version labels on #19846?", "created_at": "2017-06-05T07:30:27Z" }, { "body": "absolutely! Good catch @jpountz made the fix.", "created_at": "2017-06-06T18:27:37Z" }, { "body": "can you add a comment saying it is the ordinal of `PLANE`", "created_at": "2017-06-07T07:37:16Z" }, { "body": "same here", "created_at": "2017-06-07T07:37:32Z" }, { "body": "can you comment those are the ordinals of `ARC` in both cases?", "created_at": "2017-06-07T07:38:06Z" }, { "body": "quickly test writeTo too?", "created_at": "2017-06-07T07:39:39Z" } ], "title": "Fix GeoDistance Ordinal for BWC" }
{ "commits": [ { "message": "Fix GeoDistance Ordinal for BWC\n\nGeoDistance enum ordinals from 5.3.3+ are not backwards compatible with earlier client versions.. This commit adds backward compatibilty support in GeoDistance serialization." } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.geo;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n@@ -40,7 +41,26 @@ public enum GeoDistance implements Writeable {\n \n /** Creates a GeoDistance instance from an input stream */\n public static GeoDistance readFromStream(StreamInput in) throws IOException {\n+ Version clientVersion = in.getVersion();\n int ord = in.readVInt();\n+ // bwc client deprecation for FACTOR and SLOPPY_ARC\n+ if (clientVersion.before(Version.V_5_3_3)) {\n+ switch (ord) {\n+ case 0: return PLANE;\n+ case 1: // FACTOR uses PLANE\n+ // bwc client deprecation for FACTOR\n+ DEPRECATION_LOGGER.deprecated(\"[factor] is deprecated. Using [plane] instead.\");\n+ return PLANE;\n+ case 2: return ARC;\n+ case 3: // SLOPPY_ARC uses ARC\n+ // bwc client deprecation for SLOPPY_ARC\n+ DEPRECATION_LOGGER.deprecated(\"[sloppy_arc] is deprecated. Using [arc] instead.\");\n+ return ARC;\n+ default:\n+ throw new IOException(\"Unknown GeoDistance ordinal [\" + ord + \"]\");\n+ }\n+ }\n+\n if (ord < 0 || ord >= values().length) {\n throw new IOException(\"Unknown GeoDistance ordinal [\" + ord + \"]\");\n }\n@@ -50,6 +70,20 @@ public static GeoDistance readFromStream(StreamInput in) throws IOException {\n /** Writes an instance of a GeoDistance object to an output stream */\n @Override\n public void writeTo(StreamOutput out) throws IOException {\n+ Version clientVersion = out.getVersion();\n+ int ord = this.ordinal();\n+ if (clientVersion.before(Version.V_5_3_3)) {\n+ switch (ord) {\n+ case 0:\n+ out.write(0); // write PLANE ordinal\n+ return;\n+ case 1:\n+ out.write(2); // write bwc ARC ordinal\n+ return;\n+ default:\n+ throw new IOException(\"Unknown GeoDistance ordinal [\" + ord + \"]\");\n+ }\n+ }\n out.writeVInt(this.ordinal());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/common/geo/GeoDistance.java", "status": "modified" }, { "diff": "@@ -19,16 +19,19 @@\n package org.elasticsearch.common.geo;\n \n import org.apache.lucene.geo.Rectangle;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.VersionUtils;\n \n import java.io.IOException;\n \n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.isOneOf;\n import static org.hamcrest.Matchers.lessThan;\n \n /**\n@@ -67,6 +70,37 @@ public void testInvalidReadFrom() throws Exception {\n }\n }\n \n+ public void testReadFromSerializationBWC() throws Exception {\n+ int ordinal = randomInt(3);\n+ try (BytesStreamOutput out = new BytesStreamOutput()) {\n+ out.writeVInt(ordinal);\n+ try (StreamInput in = out.bytes().streamInput()) {\n+ // set client version (should this be done in .streamInput()?)\n+ in.setVersion(VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.V_5_3_2));\n+ GeoDistance copy = 
GeoDistance.readFromStream(in);\n+ assertThat(copy, isOneOf(GeoDistance.PLANE, GeoDistance.ARC));\n+ if (ordinal == 1) {\n+ assertWarnings(\"[factor] is deprecated. Using [plane] instead.\");\n+ } else if (ordinal == 3) {\n+ assertWarnings(\"[sloppy_arc] is deprecated. Using [arc] instead.\");\n+ }\n+ }\n+ }\n+ }\n+\n+ public void testWriteToSerializationBWC() throws Exception {\n+ GeoDistance geoDistance = randomFrom(GeoDistance.PLANE, GeoDistance.ARC);\n+ try (BytesStreamOutput out = new BytesStreamOutput()) {\n+ out.setVersion(VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.V_5_3_2));\n+ geoDistance.writeTo(out);\n+ try (StreamInput in = out.bytes().streamInput()) {\n+ in.setVersion(out.getVersion());\n+ GeoDistance copy = GeoDistance.readFromStream(in);\n+ assertThat(copy, isOneOf(GeoDistance.PLANE, GeoDistance.ARC));\n+ }\n+ }\n+ }\n+\n public void testDistanceCheck() {\n // Note, is within is an approximation, so, even though 0.52 is outside 50mi, we still get \"true\"\n double radius = DistanceUnit.convert(50, DistanceUnit.MILES, DistanceUnit.METERS);", "filename": "core/src/test/java/org/elasticsearch/common/geo/GeoDistanceTests.java", "status": "modified" } ] }
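For readers skimming the diff above, the core idea is version-gated ordinal translation: when an enum loses constants between releases, its ordinals shift, so the peer's stream version must decide which numbering to read and write. The sketch below is a minimal, self-contained illustration of that pattern; the enum, version constant, and method names are illustrative and are not the Elasticsearch `Writeable` API.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Illustrative only: the old wire format had four constants (PLANE=0, FACTOR=1,
// ARC=2, SLOPPY_ARC=3); the new enum keeps only PLANE and ARC, shifting ARC to 1.
enum Distance {
    PLANE, ARC;

    private static final int OLD_VERSION_CUTOFF = 50303; // stands in for Version.V_5_3_3

    static Distance readFrom(DataInput in, int peerVersion) throws IOException {
        int ord = in.readInt();
        if (peerVersion < OLD_VERSION_CUTOFF) {          // peer uses the old ordinals
            switch (ord) {
                case 0: case 1: return PLANE;            // FACTOR collapses into PLANE
                case 2: case 3: return ARC;              // SLOPPY_ARC collapses into ARC
                default: throw new IOException("Unknown ordinal [" + ord + "]");
            }
        }
        if (ord < 0 || ord >= values().length) {
            throw new IOException("Unknown ordinal [" + ord + "]");
        }
        return values()[ord];
    }

    void writeTo(DataOutput out, int peerVersion) throws IOException {
        if (peerVersion < OLD_VERSION_CUTOFF) {
            out.writeInt(this == PLANE ? 0 : 2);         // translate back to the old ordinals
        } else {
            out.writeInt(ordinal());
        }
    }
}
```

As the reviewers note above, the write side needs the same translation as the read side so that a newer node talking to an older one stays compatible.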
{ "body": "Given a document with a field that has arrays of free text values the `significant_text` aggregation treats each array element as a separate document when it comes to counting doc frequencies of terms which can lead to this error from the significance heuristic:\r\n\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"subsetFreq > subsetSize, in JLHScore\"\r\n\r\nThis is down to a bug in where the set of previously-seen tokens is allocated and destroyed. ", "comments": [], "number": 25029, "title": "Aggregations bug: Significant_text fails on arrays of text" }
{ "body": "The set of previously-seen tokens in a doc was allocated per-JSON-field string value rather than once per JSON document meaning the number of docs containing a term could be over-counted leading to exceptions from the checks in significance heuristics. Added unit test for this scenario\r\n\r\nCloses #25029", "number": 25030, "review_comments": [], "title": "Aggregations bug: Significant_text fails on arrays of text." }
{ "commits": [ { "message": "Aggregations bug: Significant_text fails on arrays of text.\nThe set of previously-seen tokens in a doc was allocated per-JSON-field string value rather than once per JSON document meaning the number of docs containing a term could be over-counted leading to exceptions from the checks in significance heuristics. Added unit test for this scenario\n\nCloses #25029" }, { "message": "Added multi-field test" }, { "message": "Checkstyle violation fix" } ], "files": [ { "diff": "@@ -113,45 +113,40 @@ public void collect(int doc, long bucket) throws IOException {\n }\n }\n \n- private void processTokenStream(int doc, long bucket, TokenStream ts, String fieldText) throws IOException{\n+ private void processTokenStream(int doc, long bucket, TokenStream ts, BytesRefHash inDocTerms, String fieldText) \n+ throws IOException{\n if (dupSequenceSpotter != null) {\n ts = new DeDuplicatingTokenFilter(ts, dupSequenceSpotter);\n }\n CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);\n ts.reset();\n try {\n- //Assume tokens will average 5 bytes in length to size number of tokens\n- BytesRefHash inDocTerms = new BytesRefHash(1+(fieldText.length()/5), context.bigArrays());\n- \n- try{\n- while (ts.incrementToken()) {\n- if (dupSequenceSpotter != null) {\n- long newTrieSize = dupSequenceSpotter.getEstimatedSizeInBytes();\n- long growth = newTrieSize - lastTrieSize;\n- // Only update the circuitbreaker after\n- if (growth > MEMORY_GROWTH_REPORTING_INTERVAL_BYTES) {\n- addRequestCircuitBreakerBytes(growth);\n- lastTrieSize = newTrieSize;\n- }\n+ while (ts.incrementToken()) {\n+ if (dupSequenceSpotter != null) {\n+ long newTrieSize = dupSequenceSpotter.getEstimatedSizeInBytes();\n+ long growth = newTrieSize - lastTrieSize;\n+ // Only update the circuitbreaker after\n+ if (growth > MEMORY_GROWTH_REPORTING_INTERVAL_BYTES) {\n+ addRequestCircuitBreakerBytes(growth);\n+ lastTrieSize = newTrieSize;\n }\n- previous.clear();\n- previous.copyChars(termAtt);\n- BytesRef bytes = previous.get();\n- if (inDocTerms.add(bytes) >= 0) {\n- if (includeExclude == null || includeExclude.accept(bytes)) {\n- long bucketOrdinal = bucketOrds.add(bytes);\n- if (bucketOrdinal < 0) { // already seen\n- bucketOrdinal = -1 - bucketOrdinal;\n- collectExistingBucket(sub, doc, bucketOrdinal);\n- } else {\n- collectBucket(sub, doc, bucketOrdinal);\n- }\n+ }\n+ previous.clear();\n+ previous.copyChars(termAtt);\n+ BytesRef bytes = previous.get();\n+ if (inDocTerms.add(bytes) >= 0) {\n+ if (includeExclude == null || includeExclude.accept(bytes)) {\n+ long bucketOrdinal = bucketOrds.add(bytes);\n+ if (bucketOrdinal < 0) { // already seen\n+ bucketOrdinal = -1 - bucketOrdinal;\n+ collectExistingBucket(sub, doc, bucketOrdinal);\n+ } else {\n+ collectBucket(sub, doc, bucketOrdinal);\n }\n }\n }\n- } finally{\n- Releasables.close(inDocTerms);\n }\n+\n } finally{\n ts.close();\n }\n@@ -166,23 +161,28 @@ private void collectFromSource(int doc, long bucket, String indexedFieldName, St\n \n SourceLookup sourceLookup = context.lookup().source();\n sourceLookup.setSegmentAndDocument(ctx, doc);\n+ BytesRefHash inDocTerms = new BytesRefHash(256, context.bigArrays());\n \n- for (String sourceField : sourceFieldNames) {\n- List<Object> textsToHighlight = sourceLookup.extractRawValues(sourceField); \n- textsToHighlight = textsToHighlight.stream().map(obj -> {\n- if (obj instanceof BytesRef) {\n- return fieldType.valueForDisplay(obj).toString();\n- } else {\n- return obj;\n- }\n- }).collect(Collectors.toList()); \n- \n- 
Analyzer analyzer = fieldType.indexAnalyzer(); \n- for (Object fieldValue : textsToHighlight) {\n- String fieldText = fieldValue.toString();\n- TokenStream ts = analyzer.tokenStream(indexedFieldName, fieldText);\n- processTokenStream(doc, bucket, ts, fieldText); \n- } \n+ try { \n+ for (String sourceField : sourceFieldNames) {\n+ List<Object> textsToHighlight = sourceLookup.extractRawValues(sourceField); \n+ textsToHighlight = textsToHighlight.stream().map(obj -> {\n+ if (obj instanceof BytesRef) {\n+ return fieldType.valueForDisplay(obj).toString();\n+ } else {\n+ return obj;\n+ }\n+ }).collect(Collectors.toList()); \n+ \n+ Analyzer analyzer = fieldType.indexAnalyzer(); \n+ for (Object fieldValue : textsToHighlight) {\n+ String fieldText = fieldValue.toString();\n+ TokenStream ts = analyzer.tokenStream(indexedFieldName, fieldText);\n+ processTokenStream(doc, bucket, ts, inDocTerms, fieldText); \n+ } \n+ }\n+ } finally{\n+ Releasables.close(inDocTerms);\n }\n }\n };", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTextAggregator.java", "status": "modified" }, { "diff": "@@ -123,4 +123,36 @@ public void testSignificance() throws IOException {\n }\n }\n }\n+ \n+ /**\n+ * Test documents with arrays of text\n+ */\n+ public void testSignificanceOnTextArrays() throws IOException {\n+ TextFieldType textFieldType = new TextFieldType();\n+ textFieldType.setName(\"text\");\n+ textFieldType.setIndexAnalyzer(new NamedAnalyzer(\"my_analyzer\", AnalyzerScope.GLOBAL, new StandardAnalyzer()));\n+\n+ IndexWriterConfig indexWriterConfig = newIndexWriterConfig();\n+ try (Directory dir = newDirectory(); IndexWriter w = new IndexWriter(dir, indexWriterConfig)) {\n+ for (int i = 0; i < 10; i++) {\n+ Document doc = new Document();\n+ doc.add(new Field(\"text\", \"foo\", textFieldType));\n+ String json =\"{ \\\"text\\\" : [\\\"foo\\\",\\\"foo\\\"], \\\"title\\\" : [\\\"foo\\\", \\\"foo\\\"]}\";\n+ doc.add(new StoredField(\"_source\", new BytesRef(json)));\n+ w.addDocument(doc);\n+ }\n+\n+ SignificantTextAggregationBuilder sigAgg = new SignificantTextAggregationBuilder(\"sig_text\", \"text\");\n+ sigAgg.sourceFieldNames(Arrays.asList(new String [] {\"title\", \"text\"}));\n+ try (IndexReader reader = DirectoryReader.open(w)) {\n+ assertEquals(\"test expects a single segment\", 1, reader.leaves().size());\n+ IndexSearcher searcher = new IndexSearcher(reader); \n+ searchAndReduce(searcher, new TermQuery(new Term(\"text\", \"foo\")), sigAgg, textFieldType);\n+ // No significant results to be found in this test - only checking we don't end up\n+ // with the internal exception discovered in issue https://github.com/elastic/elasticsearch/issues/25029\n+ }\n+ }\n+ }\n+ \n+ \n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTextAggregatorTests.java", "status": "modified" } ] }
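The essence of the fix above is where the per-document deduplication set lives: it must be created once per document and shared across all of that document's field values, otherwise a term repeated across array elements inflates its document frequency and trips the `subsetFreq > subsetSize` check. Below is a minimal, self-contained sketch of that counting logic; the class and method names are invented for illustration and are not the aggregator's real classes.

```java
import java.util.*;

// If the "seen in this doc" set is rebuilt for every field value, a term that
// appears in two array elements of the same document is counted as two docs.
class DocFrequencyCounter {
    final Map<String, Integer> docFreq = new HashMap<>();

    void collect(List<List<String>> fieldValuesOfOneDoc) {
        // Correct: one dedup set per document, shared across all field values.
        Set<String> seenInThisDoc = new HashSet<>();
        for (List<String> tokensOfOneValue : fieldValuesOfOneDoc) {
            // The buggy variant allocated the set here, inside this loop.
            for (String token : tokensOfOneValue) {
                if (seenInThisDoc.add(token)) {
                    docFreq.merge(token, 1, Integer::sum);
                }
            }
        }
    }

    public static void main(String[] args) {
        DocFrequencyCounter counter = new DocFrequencyCounter();
        // One document whose "text" field is the array ["foo", "foo"].
        counter.collect(Arrays.asList(Arrays.asList("foo"), Arrays.asList("foo")));
        System.out.println(counter.docFreq.get("foo")); // 1, not 2
    }
}
```

Running `main` prints 1; with the buggy placement (allocating the set inside the per-value loop) the same single document would be counted twice.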
{ "body": "**Elasticsearch version**: 5.2.0\r\n\r\n**Plugins installed**: found-elasticsearch repository-s3 x-pack (default cloud set)\r\n\r\n**JVM version**: java version \"1.8.0_72\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_72-b15)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)\r\n\r\n**OS version**: Ubuntu 14.04.1 LTS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nUnclear on resulting behavior, but got a ran into it with the following logs.\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n<details>\r\n <summary><code>[2017-02-10T05:10:35,904][WARN ][org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction] not accumulating exceptions, excluding exception from response\r\norg.elasticsearch.action.FailedNodeException: Failed node [WmfKMkelS7qOP_43OOpkVA]\r\n</code></summary>\r\n\r\n```\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:247) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$300(TransportNodesAction.java:160) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:219) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1024) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1126) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1104) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:68) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:123) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.onFailure(SecurityServerTransportInterceptor.java:224) ~[?:?]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:109) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.lambda$messageReceived$0(SecurityServerTransportInterceptor.java:289) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:56) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.lambda$null$2(ServerTransportFilter.java:164) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils$AsyncAuthorizer.maybeRun(AuthorizationUtils.java:127) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils$AsyncAuthorizer.setRunAsRoles(AuthorizationUtils.java:121) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils$AsyncAuthorizer.authorize(AuthorizationUtils.java:109) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.lambda$inbound$3(ServerTransportFilter.java:166) ~[?:?]\r\n\tat 
org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:56) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$authenticateAsync$0(AuthenticationService.java:182) ~[x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$lookForExistingAuthentication$2(AuthenticationService.java:201) ~[x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lookForExistingAuthentication(AuthenticationService.java:213) [x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.authenticateAsync(AuthenticationService.java:180) [x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.access$000(AuthenticationService.java:142) [x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:114) [x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter$NodeProfile.inbound(ServerTransportFilter.java:142) [x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:296) [x-pack-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:610) [elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.0.jar:5.2.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_72]\r\nCaused by: org.elasticsearch.transport.RemoteTransportException: [instance-XX][X.X.X.X:X][cluster:monitor/nodes/stats[n]]\r\nCaused by: org.apache.lucene.store.AlreadyClosedException: translog is already closed\r\n\tat org.elasticsearch.index.translog.Translog.ensureOpen(Translog.java:1310) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.index.translog.Translog.totalOperations(Translog.java:355) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.index.translog.Translog.totalOperations(Translog.java:340) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.index.translog.Translog.stats(Translog.java:572) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.index.shard.IndexShard.translogStats(IndexShard.java:734) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:213) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.indices.IndicesService.stats(IndicesService.java:309) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.node.service.NodeService.stats(NodeService.java:107) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:77) 
~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:42) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:237) ~[?:?]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n\t... 24 more\r\n```\r\n</details>", "comments": [ { "body": "@pickypg I'm assigning this to you as it seems you plan to pick this up. We can debate whether node stats should return errors to the users (rather than log them under WARN) but this is not the cause of this issue. I believe this goes wrong now because we stopped wrapping up internal engine exceptions and that confuses the logic [here](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/indices/IndicesService.java#L319). I think we should teach that clause about AlreadyClosedException. The shard was just closed concurrently to the stats call, which is not a problem ", "created_at": "2017-02-10T08:13:06Z" }, { "body": "@bleskes I totally agree that this is a fake failure, but I do wonder about the value of ever throwing away exceptions to a `TransportNodesAction`?\r\n\r\nIn addition to making the appropriate fix here, I wonder if a secondary fix would be to remove the `accumulateExceptions` method on it?", "created_at": "2017-02-10T16:31:27Z" }, { "body": "> In addition to making the appropriate fix here, I wonder if a secondary fix would be to remove the accumulateExceptions method on it?\r\n\r\nI tend to agree - we should report what happened to the use. It will put an extra burden on finding the right exceptions to ignore, but I think it's the right tradeoff. IMO it should be a separate change.", "created_at": "2017-02-12T07:02:07Z" }, { "body": "Agree it should be a separate change.", "created_at": "2017-02-12T19:15:57Z" }, { "body": "Going to fix this by:\r\n\r\n- [x] Catching `AlreadyClosedException` #25016\r\n- [x] Eliminating `accumulateExceptions` from `TransportNodesAction` (thus always accumulating exceptions) #25017", "created_at": "2017-06-02T00:44:19Z" }, { "body": "This is also occurring with `docker.elastic.co/elasticsearch/elasticsearch:5.4.1`", "created_at": "2017-06-16T08:48:38Z" }, { "body": "This was merged and backported to the respective branches both PRs. Thanks!", "created_at": "2017-06-28T16:58:50Z" } ], "number": 23099, "title": "Failed node exception due to translog already closed " }
{ "body": "This catches `AlreadyClosedException` during `stats` calls to avoid failing a `_nodes/stats` request because of the ignorable, concurrent index closure.\r\n\r\nPart of #23099", "number": 25016, "review_comments": [], "title": "_nodes/stats should not fail due to concurrent AlreadyClosedException" }
{ "commits": [ { "message": "_nodes/stats should not fail due to concurrent AlreadyClosedException\n\nThis catches `AlreadyClosedException` during `stats` calls to avoid\nfailing a `_nodes/stats` request because of the ignorable, concurrent\nindex closure." }, { "message": "refactor to make testable and add test" } ], "files": [ { "diff": "@@ -22,6 +22,7 @@\n import org.apache.logging.log4j.Logger;\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.store.LockObtainFailedException;\n import org.apache.lucene.util.CollectionUtil;\n import org.apache.lucene.util.IOUtils;\n@@ -292,35 +293,49 @@ public NodeIndicesStats stats(boolean includePrevious, CommonStatsFlags flags) {\n }\n }\n \n- Map<Index, List<IndexShardStats>> statsByShard = new HashMap<>();\n- for (IndexService indexService : this) {\n- for (IndexShard indexShard : indexService) {\n+ return new NodeIndicesStats(oldStats, statsByShard(this, flags));\n+ }\n+\n+ Map<Index, List<IndexShardStats>> statsByShard(final IndicesService indicesService, final CommonStatsFlags flags) {\n+ final Map<Index, List<IndexShardStats>> statsByShard = new HashMap<>();\n+\n+ for (final IndexService indexService : indicesService) {\n+ for (final IndexShard indexShard : indexService) {\n try {\n- if (indexShard.routingEntry() == null) {\n+ final IndexShardStats indexShardStats = indicesService.indexShardStats(indicesService, indexShard, flags);\n+\n+ if (indexShardStats == null) {\n continue;\n }\n- IndexShardStats indexShardStats =\n- new IndexShardStats(indexShard.shardId(),\n- new ShardStats[]{\n- new ShardStats(\n- indexShard.routingEntry(),\n- indexShard.shardPath(),\n- new CommonStats(indicesQueryCache, indexShard, flags),\n- indexShard.commitStats(),\n- indexShard.seqNoStats())});\n-\n- if (!statsByShard.containsKey(indexService.index())) {\n+\n+ if (statsByShard.containsKey(indexService.index()) == false) {\n statsByShard.put(indexService.index(), arrayAsArrayList(indexShardStats));\n } else {\n statsByShard.get(indexService.index()).add(indexShardStats);\n }\n- } catch (IllegalIndexShardStateException e) {\n+ } catch (IllegalIndexShardStateException | AlreadyClosedException e) {\n // we can safely ignore illegal state on ones that are closing for example\n logger.trace((Supplier<?>) () -> new ParameterizedMessage(\"{} ignoring shard stats\", indexShard.shardId()), e);\n }\n }\n }\n- return new NodeIndicesStats(oldStats, statsByShard);\n+\n+ return statsByShard;\n+ }\n+\n+ IndexShardStats indexShardStats(final IndicesService indicesService, final IndexShard indexShard, final CommonStatsFlags flags) {\n+ if (indexShard.routingEntry() == null) {\n+ return null;\n+ }\n+\n+ return new IndexShardStats(indexShard.shardId(),\n+ new ShardStats[] {\n+ new ShardStats(indexShard.routingEntry(),\n+ indexShard.shardPath(),\n+ new CommonStats(indicesService.getIndicesQueryCache(), indexShard, flags),\n+ indexShard.commitStats(),\n+ indexShard.seqNoStats())\n+ });\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -18,7 +18,10 @@\n */\n package org.elasticsearch.indices;\n \n+import org.apache.lucene.store.AlreadyClosedException;\n import org.elasticsearch.Version;\n+import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags;\n+import org.elasticsearch.action.admin.indices.stats.IndexShardStats;\n import 
org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexGraveyard;\n@@ -41,6 +44,9 @@\n import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardPath;\n import org.elasticsearch.index.similarity.BM25SimilarityProvider;\n@@ -55,6 +61,7 @@\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n+import java.util.List;\n import java.util.Map;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n@@ -66,6 +73,8 @@\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.not;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n \n public class IndicesServiceTests extends ESSingleNodeTestCase {\n \n@@ -369,4 +378,57 @@ public void testStandAloneMapperServiceWithPlugins() throws IOException {\n assertThat(mapperService.documentMapperParser().parserContext(\"type\").getSimilarity(\"test\"),\n instanceOf(BM25SimilarityProvider.class));\n }\n+\n+ public void testStatsByShardDoesNotDieFromExpectedExceptions() {\n+ final int shardCount = randomIntBetween(2, 5);\n+ final int failedShardId = randomIntBetween(0, shardCount - 1);\n+\n+ final Index index = new Index(\"test-index\", \"abc123\");\n+ // the shard that is going to fail\n+ final ShardId shardId = new ShardId(index, failedShardId);\n+\n+ final List<IndexShard> shards = new ArrayList<>(shardCount);\n+ final List<IndexShardStats> shardStats = new ArrayList<>(shardCount - 1);\n+\n+ final IndexShardState state = randomFrom(IndexShardState.values());\n+ final String message = \"TEST - expected\";\n+\n+ final RuntimeException expectedException =\n+ randomFrom(new IllegalIndexShardStateException(shardId, state, message), new AlreadyClosedException(message));\n+\n+ // this allows us to control the indices that exist\n+ final IndicesService mockIndicesService = mock(IndicesService.class);\n+ final IndexService indexService = mock(IndexService.class);\n+\n+ // generate fake shards and their responses\n+ for (int i = 0; i < shardCount; ++i) {\n+ final IndexShard shard = mock(IndexShard.class);\n+\n+ shards.add(shard);\n+\n+ if (failedShardId != i) {\n+ final IndexShardStats successfulShardStats = mock(IndexShardStats.class);\n+\n+ shardStats.add(successfulShardStats);\n+\n+ when(mockIndicesService.indexShardStats(mockIndicesService, shard, CommonStatsFlags.ALL)).thenReturn(successfulShardStats);\n+ } else {\n+ when(mockIndicesService.indexShardStats(mockIndicesService, shard, CommonStatsFlags.ALL)).thenThrow(expectedException);\n+ }\n+ }\n+\n+ when(mockIndicesService.iterator()).thenReturn(Collections.singleton(indexService).iterator());\n+ when(indexService.iterator()).thenReturn(shards.iterator());\n+ when(indexService.index()).thenReturn(index);\n+\n+ // real one, which has a logger defined\n+ final IndicesService indicesService = getIndicesService();\n+\n+ final Map<Index, List<IndexShardStats>> indexStats = indicesService.statsByShard(mockIndicesService, CommonStatsFlags.ALL);\n+\n+ assertThat(indexStats.isEmpty(), 
equalTo(false));\n+ assertThat(\"index not defined\", indexStats.containsKey(index), equalTo(true));\n+ assertThat(\"unexpected shard stats\", indexStats.get(index), equalTo(shardStats));\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java", "status": "modified" } ] }
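The pattern in the PR above is best-effort iteration: a shard that was closed concurrently should be skipped (and logged at a low level) rather than failing the whole `_nodes/stats` response. Here is a rough, self-contained sketch of that shape, with all names invented for illustration rather than taken from `IndicesService`.

```java
import java.util.*;

// A per-shard failure caused by a concurrent close is logged and skipped so
// that one closing shard cannot fail the whole stats request.
class ShardClosedException extends RuntimeException {
    ShardClosedException(String msg) { super(msg); }
}

interface Shard {
    String id();
    long stats(); // throws ShardClosedException if the shard was closed concurrently
}

class StatsCollector {
    Map<String, Long> statsByShard(Collection<Shard> shards) {
        Map<String, Long> result = new HashMap<>();
        for (Shard shard : shards) {
            try {
                result.put(shard.id(), shard.stats());
            } catch (ShardClosedException e) {
                // The shard was closed while we were iterating; ignorable, as in the PR.
                System.err.println("ignoring stats for closed shard " + shard.id());
            }
        }
        return result;
    }
}
```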
{ "body": "The BigArrays class (which is used by a lot of aggregations) allocates the new array pages when growing the array before it calls the circuit breaker. This means that if the amount of memory required for the grow operation exceeds the available heap we will throw an OOME instead of tripping the circuit breaker.\r\n\r\nThis issue was identified whilst investigating https://github.com/elastic/elasticsearch/issues/15892\r\n\r\nThe fix is to estimate the amount of memory required by big arrays (should be able to estimate within 16KB) and then use this with the circuit breaker before we allocate the arrays.", "comments": [ { "body": "This hit me again last night. I am headed to put in the artificially low circuitbreakers now until this is fixed.\r\n`[2017-05-24T20:46:52,701][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [es10] fatal error in thread [elasticsearch[es10][search][T#5]], exiting\r\njava.lang.OutOfMemoryError: Java heap space\r\n at org.elasticsearch.common.util.PageCacheRecycler$1.newInstance(PageCacheRecycler.java:99) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.PageCacheRecycler$1.newInstance(PageCacheRecycler.java:96) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.recycler.DequeRecycler.obtain(DequeRecycler.java:53) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.recycler.AbstractRecycler.obtain(AbstractRecycler.java:33) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.recycler.DequeRecycler.obtain(DequeRecycler.java:28) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.recycler.FilterRecycler.obtain(FilterRecycler.java:39) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.recycler.Recyclers$3.obtain(Recyclers.java:119) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.recycler.FilterRecycler.obtain(FilterRecycler.java:39) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.PageCacheRecycler.bytePage(PageCacheRecycler.java:147) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.AbstractBigArray.newBytePage(AbstractBigArray.java:112) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.BigByteArray.resize(BigByteArray.java:141) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.BigArrays.resizeInPlace(BigArrays.java:438) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.BigArrays.resize(BigArrays.java:485) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.BigArrays.grow(BigArrays.java:502) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.ensureCapacity(HyperLogLogPlusPlus.java:197) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.collect(HyperLogLogPlusPlus.java:232) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregator$DirectCollector.collect(CardinalityAggregator.java:199) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.LeafBucketCollector$2.collect(LeafBucketCollector.java:67) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:80) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectBucket(BucketsAggregator.java:72) 
~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator$WithHash$2.collect(GlobalOrdinalsStringTermsAggregator.java:304) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.AggregatorFactory$MultiBucketAggregatorWrapper$1.collect(AggregatorFactory.java:136) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.bucket.BestBucketsDeferringCollector.prepareSelectedBuckets(BestBucketsDeferringCollector.java:178) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector.replay(DeferringBucketCollector.java:44) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.AggregatorBase.runDeferredCollections(AggregatorBase.java:206) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator.buildAggregation(GlobalOrdinalsStringTermsAggregator.java:193) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.aggregations.AggregationPhase.execute(AggregationPhase.java:129) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:114) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.indices.IndicesService.lambda$loadIntoContext$16(IndicesService.java:1107) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.indices.IndicesService$$Lambda$1920/2118142078.accept(Unknown Source) ~[?:?]\r\n at org.elasticsearch.indices.IndicesService.lambda$cacheShardLevelResult$18(IndicesService.java:1188) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.indices.IndicesService$$Lambda$1922/164644094.get(Unknown Source) ~[?:?]`", "created_at": "2017-05-25T12:42:14Z" } ], "number": 24790, "title": "BigArrays should call the circuit breaker before allocating the arrays" }
{ "body": "Previously, when allocating bytes for a BigArray, the array was created\r\n(or attempted to be created) and only then would the array be checked\r\nfor the amount of RAM used to see if the circuit breaker should trip.\r\n\r\nThis is problematic because for very large arrays, if creating or\r\nresizing the array, it is possible to attempt to create/resize and get\r\nan OOM error before the circuit breaker trips, because the allocation\r\nhappens before checking with the circuit breaker.\r\n\r\nThis commit ensures that the circuit breaker is checked before all big\r\narray allocations (note, this does not effect the array allocations that\r\nare less than 16kb which use the [Type]ArrayWrapper classes found in\r\nBigArrays.java). If such an allocation or resizing would cause the\r\ncircuit breaker to trip, then the breaker trips before attempting to\r\nallocate and potentially running into an OOM error from the JVM.\r\n\r\nCloses #24790", "number": 25010, "review_comments": [ { "body": "Reflection? :(\r\n\r\nWhat about instead of using an array of `Byte`, `Int`, etc which seems really fragile, we use a list of lambdas that invoke the real method. That way if there is ever a refactoring it will fail during compilation rather than at runtime.", "created_at": "2017-06-01T20:25:33Z" }, { "body": "Same here about using lambdas instead of reflection", "created_at": "2017-06-01T20:25:56Z" }, { "body": "Can you assert the `getBytesWanted()` and `getByteLimit()` are also correct here?", "created_at": "2017-06-01T20:27:43Z" }, { "body": "How would you feel about removing this and making explicit the `dataAlreadyCreated` parameter?", "created_at": "2017-06-01T20:30:41Z" }, { "body": "I prefer that, I made the change", "created_at": "2017-06-02T02:57:53Z" }, { "body": "Added", "created_at": "2017-06-02T03:29:33Z" }, { "body": "Good point - I used reflection as that is what the two tests above it used. I will change all the tests to use lamdas instead.", "created_at": "2017-06-02T03:30:08Z" }, { "body": "could we add an assert that array.ramBytesUsed() is equal to the estimate we computed previously?", "created_at": "2017-06-02T07:01:27Z" }, { "body": "I agree with @dakrone that we need to find another way", "created_at": "2017-06-02T07:01:52Z" }, { "body": "Would it work if we passed `BigArrays.NON_RECYCLING_INSTANCE` instead for the estimators?", "created_at": "2017-06-02T07:07:59Z" }, { "body": "Added", "created_at": "2017-06-02T16:24:55Z" }, { "body": "@jpountz I completely missed the `NON_RECYCLING_INSTANCE`! It works perfectly. I made the change.", "created_at": "2017-06-02T16:32:16Z" }, { "body": "Can you add the `oldMemSize` and new estimate to the assertion message? (In case it fails)", "created_at": "2017-06-02T16:39:22Z" }, { "body": "Can we add an assertion that this is never negative? I don't think it ever will be, but just to be sure...", "created_at": "2017-06-02T16:40:11Z" }, { "body": "I don't think anything uses this `SHALLOW_SIZE`, so I think it can be private?", "created_at": "2017-06-02T16:42:07Z" }, { "body": "added", "created_at": "2017-06-02T16:50:11Z" }, { "body": "added", "created_at": "2017-06-02T16:50:13Z" }, { "body": "@dakrone it needs to be package-private because the other array wrapper classes (e.g. `ByteArrayWrapper` access it. 
Making it `private` throws compilation errors.", "created_at": "2017-06-02T16:51:53Z" }, { "body": "Ahh okay, I missed that usage then, thanks!", "created_at": "2017-06-02T17:07:44Z" } ], "title": "Checks the circuit breaker before allocating bytes for a new big array" }
{ "commits": [ { "message": "Checks the circuit breaker before allocating bytes for a new big array\n\nPreviously, when allocating bytes for a BigArray, the array was created\n(or attempted to be created) and only then would the array be checked\nfor the amount of RAM used to see if the circuit breaker should trip.\n\nThis is problematic because for very large arrays, if creating or\nresizing the array, it is possible to attempt to create/resize and get\nan OOM error before the circuit breaker trips, because the allocation\nhappens before checking with the circuit breaker.\n\nThis commit ensures that the circuit breaker is checked before all big\narray allocations (note, this does not effect the array allocations that\nare less than 16kb which use the [Type]ArrayWrapper classes found in\nBigArrays.java). If such an allocation or resizing would cause the\ncircuit breaker to trip, then the breaker trips before attempting to\nallocate and potentially running into an OOM error from the JVM.\n\nCloses #24790" }, { "message": "address feedback" }, { "message": "make bigarrays non-nullable again" }, { "message": "make asserts great again" } ], "files": [ { "diff": "@@ -41,7 +41,7 @@ abstract class AbstractArray implements BigArray {\n public final void close() {\n if (closed.compareAndSet(false, true)) {\n try {\n- bigArrays.adjustBreaker(-ramBytesUsed());\n+ bigArrays.adjustBreaker(-ramBytesUsed(), true);\n } finally {\n doClose();\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/AbstractArray.java", "status": "modified" }, { "diff": "@@ -87,6 +87,11 @@ public final long size() {\n \n @Override\n public final long ramBytesUsed() {\n+ return ramBytesEstimated(size);\n+ }\n+\n+ /** Given the size of the array, estimate the number of bytes it will use. 
*/\n+ public final long ramBytesEstimated(final long size) {\n // rough approximate, we only take into account the size of the values, not the overhead of the array objects\n return ((long) pageIndex(size - 1) + 1) * pageSize() * numBytesPerElement();\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/AbstractBigArray.java", "status": "modified" }, { "diff": "@@ -25,7 +25,6 @@\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.breaker.CircuitBreaker;\n import org.elasticsearch.common.breaker.CircuitBreakingException;\n-import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.recycler.Recycler;\n@@ -91,7 +90,7 @@ public void close() {\n \n private abstract static class AbstractArrayWrapper extends AbstractArray implements BigArray {\n \n- protected static final long SHALLOW_SIZE = RamUsageEstimator.shallowSizeOfInstance(ByteArrayWrapper.class);\n+ static final long SHALLOW_SIZE = RamUsageEstimator.shallowSizeOfInstance(ByteArrayWrapper.class);\n \n private final Releasable releasable;\n private final long size;\n@@ -377,6 +376,7 @@ public BigArrays(Settings settings, @Nullable final CircuitBreakerService breake\n // Checking the breaker is disabled if not specified\n this(new PageCacheRecycler(settings), breakerService, false);\n }\n+\n // public for tests\n public BigArrays(PageCacheRecycler recycler, @Nullable final CircuitBreakerService breakerService, boolean checkBreaker) {\n this.checkBreaker = checkBreaker;\n@@ -392,9 +392,12 @@ public BigArrays(PageCacheRecycler recycler, @Nullable final CircuitBreakerServi\n /**\n * Adjust the circuit breaker with the given delta, if the delta is\n * negative, or checkBreaker is false, the breaker will be adjusted\n- * without tripping\n+ * without tripping. If the data was already created before calling\n+ * this method, and the breaker trips, we add the delta without breaking\n+ * to account for the created data. 
If the data has not been created yet,\n+ * we do not add the delta to the breaker if it trips.\n */\n- void adjustBreaker(long delta) {\n+ void adjustBreaker(final long delta, final boolean isDataAlreadyCreated) {\n if (this.breakerService != null) {\n CircuitBreaker breaker = this.breakerService.getBreaker(CircuitBreaker.REQUEST);\n if (this.checkBreaker) {\n@@ -404,9 +407,11 @@ void adjustBreaker(long delta) {\n try {\n breaker.addEstimateBytesAndMaybeBreak(delta, \"<reused_arrays>\");\n } catch (CircuitBreakingException e) {\n- // since we've already created the data, we need to\n- // add it so closing the stream re-adjusts properly\n- breaker.addWithoutBreaking(delta);\n+ if (isDataAlreadyCreated) {\n+ // since we've already created the data, we need to\n+ // add it so closing the stream re-adjusts properly\n+ breaker.addWithoutBreaking(delta);\n+ }\n // re-throw the original exception\n throw e;\n }\n@@ -435,15 +440,21 @@ public CircuitBreakerService breakerService() {\n \n private <T extends AbstractBigArray> T resizeInPlace(T array, long newSize) {\n final long oldMemSize = array.ramBytesUsed();\n+ assert oldMemSize == array.ramBytesEstimated(array.size) :\n+ \"ram bytes used should equal that which was previously estimated: ramBytesUsed=\" +\n+ oldMemSize + \", ramBytesEstimated=\" + array.ramBytesEstimated(array.size);\n+ final long estimatedIncreaseInBytes = array.ramBytesEstimated(newSize) - oldMemSize;\n+ assert estimatedIncreaseInBytes >= 0 :\n+ \"estimated increase in bytes for resizing should not be negative: \" + estimatedIncreaseInBytes;\n+ adjustBreaker(estimatedIncreaseInBytes, false);\n array.resize(newSize);\n- adjustBreaker(array.ramBytesUsed() - oldMemSize);\n return array;\n }\n \n private <T extends BigArray> T validate(T array) {\n boolean success = false;\n try {\n- adjustBreaker(array.ramBytesUsed());\n+ adjustBreaker(array.ramBytesUsed(), true);\n success = true;\n } finally {\n if (!success) {\n@@ -459,16 +470,17 @@ private <T extends BigArray> T validate(T array) {\n * @param clearOnResize whether values should be set to 0 on initialization and resize\n */\n public ByteArray newByteArray(long size, boolean clearOnResize) {\n- final ByteArray array;\n if (size > BYTE_PAGE_SIZE) {\n- array = new BigByteArray(size, this, clearOnResize);\n+ // when allocating big arrays, we want to first ensure we have the capacity by\n+ // checking with the circuit breaker before attempting to allocate\n+ adjustBreaker(BigByteArray.estimateRamBytes(size), false);\n+ return new BigByteArray(size, this, clearOnResize);\n } else if (size >= BYTE_PAGE_SIZE / 2 && recycler != null) {\n final Recycler.V<byte[]> page = recycler.bytePage(clearOnResize);\n- array = new ByteArrayWrapper(this, page.v(), size, page, clearOnResize);\n+ return validate(new ByteArrayWrapper(this, page.v(), size, page, clearOnResize));\n } else {\n- array = new ByteArrayWrapper(this, new byte[(int) size], size, null, clearOnResize);\n+ return validate(new ByteArrayWrapper(this, new byte[(int) size], size, null, clearOnResize));\n }\n- return validate(array);\n }\n \n /**\n@@ -541,16 +553,17 @@ public boolean equals(ByteArray array, ByteArray other) {\n * @param clearOnResize whether values should be set to 0 on initialization and resize\n */\n public IntArray newIntArray(long size, boolean clearOnResize) {\n- final IntArray array;\n if (size > INT_PAGE_SIZE) {\n- array = new BigIntArray(size, this, clearOnResize);\n+ // when allocating big arrays, we want to first ensure we have the capacity by\n+ // checking 
with the circuit breaker before attempting to allocate\n+ adjustBreaker(BigIntArray.estimateRamBytes(size), false);\n+ return new BigIntArray(size, this, clearOnResize);\n } else if (size >= INT_PAGE_SIZE / 2 && recycler != null) {\n final Recycler.V<int[]> page = recycler.intPage(clearOnResize);\n- array = new IntArrayWrapper(this, page.v(), size, page, clearOnResize);\n+ return validate(new IntArrayWrapper(this, page.v(), size, page, clearOnResize));\n } else {\n- array = new IntArrayWrapper(this, new int[(int) size], size, null, clearOnResize);\n+ return validate(new IntArrayWrapper(this, new int[(int) size], size, null, clearOnResize));\n }\n- return validate(array);\n }\n \n /**\n@@ -591,16 +604,17 @@ public IntArray grow(IntArray array, long minSize) {\n * @param clearOnResize whether values should be set to 0 on initialization and resize\n */\n public LongArray newLongArray(long size, boolean clearOnResize) {\n- final LongArray array;\n if (size > LONG_PAGE_SIZE) {\n- array = new BigLongArray(size, this, clearOnResize);\n+ // when allocating big arrays, we want to first ensure we have the capacity by\n+ // checking with the circuit breaker before attempting to allocate\n+ adjustBreaker(BigLongArray.estimateRamBytes(size), false);\n+ return new BigLongArray(size, this, clearOnResize);\n } else if (size >= LONG_PAGE_SIZE / 2 && recycler != null) {\n final Recycler.V<long[]> page = recycler.longPage(clearOnResize);\n- array = new LongArrayWrapper(this, page.v(), size, page, clearOnResize);\n+ return validate(new LongArrayWrapper(this, page.v(), size, page, clearOnResize));\n } else {\n- array = new LongArrayWrapper(this, new long[(int) size], size, null, clearOnResize);\n+ return validate(new LongArrayWrapper(this, new long[(int) size], size, null, clearOnResize));\n }\n- return validate(array);\n }\n \n /**\n@@ -641,16 +655,17 @@ public LongArray grow(LongArray array, long minSize) {\n * @param clearOnResize whether values should be set to 0 on initialization and resize\n */\n public DoubleArray newDoubleArray(long size, boolean clearOnResize) {\n- final DoubleArray arr;\n if (size > LONG_PAGE_SIZE) {\n- arr = new BigDoubleArray(size, this, clearOnResize);\n+ // when allocating big arrays, we want to first ensure we have the capacity by\n+ // checking with the circuit breaker before attempting to allocate\n+ adjustBreaker(BigDoubleArray.estimateRamBytes(size), false);\n+ return new BigDoubleArray(size, this, clearOnResize);\n } else if (size >= LONG_PAGE_SIZE / 2 && recycler != null) {\n final Recycler.V<long[]> page = recycler.longPage(clearOnResize);\n- arr = new DoubleArrayWrapper(this, page.v(), size, page, clearOnResize);\n+ return validate(new DoubleArrayWrapper(this, page.v(), size, page, clearOnResize));\n } else {\n- arr = new DoubleArrayWrapper(this, new long[(int) size], size, null, clearOnResize);\n+ return validate(new DoubleArrayWrapper(this, new long[(int) size], size, null, clearOnResize));\n }\n- return validate(arr);\n }\n \n /** Allocate a new {@link DoubleArray} of the given capacity. 
*/\n@@ -688,16 +703,17 @@ public DoubleArray grow(DoubleArray array, long minSize) {\n * @param clearOnResize whether values should be set to 0 on initialization and resize\n */\n public FloatArray newFloatArray(long size, boolean clearOnResize) {\n- final FloatArray array;\n if (size > INT_PAGE_SIZE) {\n- array = new BigFloatArray(size, this, clearOnResize);\n+ // when allocating big arrays, we want to first ensure we have the capacity by\n+ // checking with the circuit breaker before attempting to allocate\n+ adjustBreaker(BigFloatArray.estimateRamBytes(size), false);\n+ return new BigFloatArray(size, this, clearOnResize);\n } else if (size >= INT_PAGE_SIZE / 2 && recycler != null) {\n final Recycler.V<int[]> page = recycler.intPage(clearOnResize);\n- array = new FloatArrayWrapper(this, page.v(), size, page, clearOnResize);\n+ return validate(new FloatArrayWrapper(this, page.v(), size, page, clearOnResize));\n } else {\n- array = new FloatArrayWrapper(this, new int[(int) size], size, null, clearOnResize);\n+ return validate(new FloatArrayWrapper(this, new int[(int) size], size, null, clearOnResize));\n }\n- return validate(array);\n }\n \n /** Allocate a new {@link FloatArray} of the given capacity. */\n@@ -736,14 +752,16 @@ public FloatArray grow(FloatArray array, long minSize) {\n public <T> ObjectArray<T> newObjectArray(long size) {\n final ObjectArray<T> array;\n if (size > OBJECT_PAGE_SIZE) {\n- array = new BigObjectArray<>(size, this);\n+ // when allocating big arrays, we want to first ensure we have the capacity by\n+ // checking with the circuit breaker before attempting to allocate\n+ adjustBreaker(BigObjectArray.estimateRamBytes(size), false);\n+ return new BigObjectArray<>(size, this);\n } else if (size >= OBJECT_PAGE_SIZE / 2 && recycler != null) {\n final Recycler.V<Object[]> page = recycler.objectPage();\n- array = new ObjectArrayWrapper<>(this, page.v(), size, page);\n+ return validate(new ObjectArrayWrapper<>(this, page.v(), size, page));\n } else {\n- array = new ObjectArrayWrapper<>(this, new Object[(int) size], size, null);\n+ return validate(new ObjectArrayWrapper<>(this, new Object[(int) size], size, null));\n }\n- return validate(array);\n }\n \n /** Resize the array to the exact provided size. */", "filename": "core/src/main/java/org/elasticsearch/common/util/BigArrays.java", "status": "modified" }, { "diff": "@@ -33,6 +33,8 @@\n */\n final class BigByteArray extends AbstractBigArray implements ByteArray {\n \n+ private static final BigByteArray ESTIMATOR = new BigByteArray(0, BigArrays.NON_RECYCLING_INSTANCE, false);\n+\n private byte[][] pages;\n \n /** Constructor. */\n@@ -44,7 +46,7 @@ final class BigByteArray extends AbstractBigArray implements ByteArray {\n pages[i] = newBytePage(i);\n }\n }\n- \n+\n @Override\n public byte get(long index) {\n final int pageIndex = pageIndex(index);\n@@ -147,4 +149,9 @@ public void resize(long newSize) {\n this.size = newSize;\n }\n \n+ /** Estimates the number of bytes that would be consumed by an array of the given size. */\n+ public static long estimateRamBytes(final long size) {\n+ return ESTIMATOR.ramBytesEstimated(size);\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/BigByteArray.java", "status": "modified" }, { "diff": "@@ -32,6 +32,8 @@\n */\n final class BigDoubleArray extends AbstractBigArray implements DoubleArray {\n \n+ private static final BigDoubleArray ESTIMATOR = new BigDoubleArray(0, BigArrays.NON_RECYCLING_INSTANCE, false);\n+\n private long[][] pages;\n \n /** Constructor. 
*/\n@@ -110,4 +112,9 @@ public void fill(long fromIndex, long toIndex, double value) {\n }\n }\n \n+ /** Estimates the number of bytes that would be consumed by an array of the given size. */\n+ public static long estimateRamBytes(final long size) {\n+ return ESTIMATOR.ramBytesEstimated(size);\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/BigDoubleArray.java", "status": "modified" }, { "diff": "@@ -32,6 +32,8 @@\n */\n final class BigFloatArray extends AbstractBigArray implements FloatArray {\n \n+ private static final BigFloatArray ESTIMATOR = new BigFloatArray(0, BigArrays.NON_RECYCLING_INSTANCE, false);\n+\n private int[][] pages;\n \n /** Constructor. */\n@@ -110,4 +112,9 @@ public void fill(long fromIndex, long toIndex, float value) {\n }\n }\n \n+ /** Estimates the number of bytes that would be consumed by an array of the given size. */\n+ public static long estimateRamBytes(final long size) {\n+ return ESTIMATOR.ramBytesEstimated(size);\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/BigFloatArray.java", "status": "modified" }, { "diff": "@@ -32,6 +32,8 @@\n */\n final class BigIntArray extends AbstractBigArray implements IntArray {\n \n+ private static final BigIntArray ESTIMATOR = new BigIntArray(0, BigArrays.NON_RECYCLING_INSTANCE, false);\n+\n private int[][] pages;\n \n /** Constructor. */\n@@ -108,4 +110,9 @@ public void resize(long newSize) {\n this.size = newSize;\n }\n \n+ /** Estimates the number of bytes that would be consumed by an array of the given size. */\n+ public static long estimateRamBytes(final long size) {\n+ return ESTIMATOR.ramBytesEstimated(size);\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/BigIntArray.java", "status": "modified" }, { "diff": "@@ -32,6 +32,8 @@\n */\n final class BigLongArray extends AbstractBigArray implements LongArray {\n \n+ private static final BigLongArray ESTIMATOR = new BigLongArray(0, BigArrays.NON_RECYCLING_INSTANCE, false);\n+\n private long[][] pages;\n \n /** Constructor. */\n@@ -111,4 +113,9 @@ public void fill(long fromIndex, long toIndex, long value) {\n }\n }\n \n+ /** Estimates the number of bytes that would be consumed by an array of the given size. */\n+ public static long estimateRamBytes(final long size) {\n+ return ESTIMATOR.ramBytesEstimated(size);\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/BigLongArray.java", "status": "modified" }, { "diff": "@@ -32,6 +32,8 @@\n */\n final class BigObjectArray<T> extends AbstractBigArray implements ObjectArray<T> {\n \n+ private static final BigObjectArray ESTIMATOR = new BigObjectArray(0, BigArrays.NON_RECYCLING_INSTANCE);\n+\n private Object[][] pages;\n \n /** Constructor. */\n@@ -85,4 +87,9 @@ public void resize(long newSize) {\n this.size = newSize;\n }\n \n-}\n\\ No newline at end of file\n+ /** Estimates the number of bytes that would be consumed by an array of the given size. 
*/\n+ public static long estimateRamBytes(final long size) {\n+ return ESTIMATOR.ramBytesEstimated(size);\n+ }\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/common/util/BigObjectArray.java", "status": "modified" }, { "diff": "@@ -33,6 +33,11 @@\n import java.lang.reflect.InvocationTargetException;\n import java.lang.reflect.Method;\n import java.util.Arrays;\n+import java.util.List;\n+import java.util.function.Function;\n+\n+import static org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n \n public class BigArraysTests extends ESTestCase {\n \n@@ -330,22 +335,17 @@ private ByteArray byteArrayWithBytes(byte[] bytes) {\n }\n \n public void testMaxSizeExceededOnNew() throws Exception {\n- final int size = scaledRandomIntBetween(5, 1 << 22);\n- for (String type : Arrays.asList(\"Byte\", \"Int\", \"Long\", \"Float\", \"Double\", \"Object\")) {\n- HierarchyCircuitBreakerService hcbs = new HierarchyCircuitBreakerService(\n- Settings.builder()\n- .put(HierarchyCircuitBreakerService.REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING.getKey(), size - 1, ByteSizeUnit.BYTES)\n- .build(),\n- new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS));\n- BigArrays bigArrays = new BigArrays(null, hcbs, false).withCircuitBreaking();\n- Method create = BigArrays.class.getMethod(\"new\" + type + \"Array\", long.class);\n+ final long size = scaledRandomIntBetween(5, 1 << 22);\n+ final long maxSize = size - 1;\n+ for (BigArraysHelper bigArraysHelper : bigArrayCreators(maxSize, true)) {\n try {\n- create.invoke(bigArrays, size);\n- fail(\"expected an exception on \" + create);\n- } catch (InvocationTargetException e) {\n- assertTrue(e.getCause() instanceof CircuitBreakingException);\n+ bigArraysHelper.arrayAllocator.apply(size);\n+ fail(\"circuit breaker should trip\");\n+ } catch (CircuitBreakingException e) {\n+ assertEquals(maxSize, e.getByteLimit());\n+ assertThat(e.getBytesWanted(), greaterThanOrEqualTo(size));\n }\n- assertEquals(0, hcbs.getBreaker(CircuitBreaker.REQUEST).getUsed());\n+ assertEquals(0, bigArraysHelper.bigArrays.breakerService().getBreaker(CircuitBreaker.REQUEST).getUsed());\n }\n }\n \n@@ -354,7 +354,7 @@ public void testMaxSizeExceededOnResize() throws Exception {\n final long maxSize = randomIntBetween(1 << 10, 1 << 22);\n HierarchyCircuitBreakerService hcbs = new HierarchyCircuitBreakerService(\n Settings.builder()\n- .put(HierarchyCircuitBreakerService.REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING.getKey(), maxSize, ByteSizeUnit.BYTES)\n+ .put(REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING.getKey(), maxSize, ByteSizeUnit.BYTES)\n .build(),\n new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS));\n BigArrays bigArrays = new BigArrays(null, hcbs, false).withCircuitBreaking();\n@@ -377,4 +377,63 @@ public void testMaxSizeExceededOnResize() throws Exception {\n }\n }\n \n+ public void testEstimatedBytesSameAsActualBytes() throws Exception {\n+ final int maxSize = 1 << scaledRandomIntBetween(15, 22);\n+ final long size = randomIntBetween((1 << 14) + 1, maxSize);\n+ for (final BigArraysHelper bigArraysHelper : bigArrayCreators(maxSize, false)) {\n+ final BigArray bigArray = bigArraysHelper.arrayAllocator.apply(size);\n+ assertEquals(bigArraysHelper.ramEstimator.apply(size).longValue(), bigArray.ramBytesUsed());\n+ }\n+ }\n+\n+ private List<BigArraysHelper> bigArrayCreators(final long maxSize, final boolean withBreaking) {\n+ final BigArrays 
byteBigArrays = newBigArraysInstance(maxSize, withBreaking);\n+ BigArraysHelper byteHelper = new BigArraysHelper(byteBigArrays,\n+ (Long size) -> byteBigArrays.newByteArray(size),\n+ (Long size) -> BigByteArray.estimateRamBytes(size));\n+ final BigArrays intBigArrays = newBigArraysInstance(maxSize, withBreaking);\n+ BigArraysHelper intHelper = new BigArraysHelper(intBigArrays,\n+ (Long size) -> intBigArrays.newIntArray(size),\n+ (Long size) -> BigIntArray.estimateRamBytes(size));\n+ final BigArrays longBigArrays = newBigArraysInstance(maxSize, withBreaking);\n+ BigArraysHelper longHelper = new BigArraysHelper(longBigArrays,\n+ (Long size) -> longBigArrays.newLongArray(size),\n+ (Long size) -> BigLongArray.estimateRamBytes(size));\n+ final BigArrays floatBigArrays = newBigArraysInstance(maxSize, withBreaking);\n+ BigArraysHelper floatHelper = new BigArraysHelper(floatBigArrays,\n+ (Long size) -> floatBigArrays.newFloatArray(size),\n+ (Long size) -> BigFloatArray.estimateRamBytes(size));\n+ final BigArrays doubleBigArrays = newBigArraysInstance(maxSize, withBreaking);\n+ BigArraysHelper doubleHelper = new BigArraysHelper(doubleBigArrays,\n+ (Long size) -> doubleBigArrays.newDoubleArray(size),\n+ (Long size) -> BigDoubleArray.estimateRamBytes(size));\n+ final BigArrays objectBigArrays = newBigArraysInstance(maxSize, withBreaking);\n+ BigArraysHelper objectHelper = new BigArraysHelper(objectBigArrays,\n+ (Long size) -> objectBigArrays.newObjectArray(size),\n+ (Long size) -> BigObjectArray.estimateRamBytes(size));\n+ return Arrays.asList(byteHelper, intHelper, longHelper, floatHelper, doubleHelper, objectHelper);\n+ }\n+\n+ private BigArrays newBigArraysInstance(final long maxSize, final boolean withBreaking) {\n+ HierarchyCircuitBreakerService hcbs = new HierarchyCircuitBreakerService(\n+ Settings.builder()\n+ .put(REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING.getKey(), maxSize, ByteSizeUnit.BYTES)\n+ .build(),\n+ new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS));\n+ BigArrays bigArrays = new BigArrays(null, hcbs, false);\n+ return (withBreaking ? bigArrays.withCircuitBreaking() : bigArrays);\n+ }\n+\n+ private static class BigArraysHelper {\n+ final BigArrays bigArrays;\n+ final Function<Long, BigArray> arrayAllocator;\n+ final Function<Long, Long> ramEstimator;\n+\n+ BigArraysHelper(BigArrays bigArrays, Function<Long, BigArray> arrayAllocator, Function<Long, Long> ramEstimator) {\n+ this.bigArrays = bigArrays;\n+ this.arrayAllocator = arrayAllocator;\n+ this.ramEstimator = ramEstimator;\n+ }\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/common/util/BigArraysTests.java", "status": "modified" }, { "diff": "@@ -34,8 +34,6 @@\n import java.util.HashMap;\n import java.util.Map;\n import java.util.Random;\n-import java.util.Set;\n-import java.util.WeakHashMap;\n import java.util.concurrent.ConcurrentHashMap;\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.atomic.AtomicReference;", "filename": "test/framework/src/main/java/org/elasticsearch/common/util/MockBigArrays.java", "status": "modified" } ] }
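The ordering change above can be summarized as "estimate, reserve, then allocate": the breaker is asked for the estimated bytes before any pages are created, so an oversized request fails with a circuit-breaking exception instead of an `OutOfMemoryError`. The following is a stripped-down sketch of that ordering under invented names; the real `BigArrays`/`CircuitBreaker` classes additionally track page-level estimates and parent breakers.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of "reserve before allocating"; not the Elasticsearch API.
class SimpleBreaker {
    private final long limitBytes;
    private final AtomicLong used = new AtomicLong();

    SimpleBreaker(long limitBytes) { this.limitBytes = limitBytes; }

    void addEstimateAndMaybeBreak(long bytes) {
        long newUsed = used.addAndGet(bytes);
        if (newUsed > limitBytes) {
            used.addAndGet(-bytes); // the data was never created, so roll back fully
            throw new IllegalStateException("would use " + newUsed + "b, limit " + limitBytes + "b");
        }
    }

    void release(long bytes) { used.addAndGet(-bytes); }
}

class BreakerAwareArrays {
    private final SimpleBreaker breaker;

    BreakerAwareArrays(SimpleBreaker breaker) { this.breaker = breaker; }

    long[] newLongArray(long size) {
        long estimatedBytes = size * Long.BYTES;          // estimate first...
        breaker.addEstimateAndMaybeBreak(estimatedBytes); // ...then reserve...
        return new long[Math.toIntExact(size)];           // ...and only then allocate
    }
}
```

The resize path follows the same rule: reserve the estimated increase first, and only add to the breaker without breaking when the data already exists (the `isDataAlreadyCreated` flag in the PR).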
{ "body": "Right off the bat, here's a little info:\n\n```\n$ uname -a\nLinux jj-big-box 3.19.0-49-generic #55~14.04.1-Ubuntu SMP Fri Jan 22 11:24:31 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux\n\n$ curl -XGET 'localhost:9200'\n{\n \"status\" : 200,\n \"name\" : \"Bast\",\n \"cluster_name\" : \"elasticsearch\",\n \"version\" : {\n \"number\" : \"1.7.4\",\n \"build_hash\" : \"0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e\",\n \"build_timestamp\" : \"2015-12-15T11:25:18Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"4.10.4\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n```\n\nI've seen some similar posts, but I've had trouble squaring their results with mine. I've noticed that I have do not receive consistent numbers of documents when running scan and scroll in elastic search. Here is python code exhibiting the behavior (hopefully the use of sockets is not too confusing...at first I was trying to make sure the problem had nothing to do with elasticsearch-py and that's why I went the route of raw code):\n\n``` python\nimport socket\nimport httplib\nimport json\nimport re\n\nHOST = 'localhost'\nPORT = 9200\n\nCRLF = \"\\r\\n\\r\\n\"\n\ninit_msg = \"\"\"\nGET /index/document/_search?search_type=scan&scroll=15m&timeout=30&size=10 HTTP/1.1\nHost: localhost:9200\nAccept-Encoding: identity\nContent-Length: 94\nconnection: keep-alive\n\n{\"query\": {\"regexp\": {\"date_publ\": \"2001.*\"}}, \"_source\": [\"doc_id\", \"date_publ\", \"abstract\"]}\n\"\"\"\n\nscroll_msg = \"\"\"\nGET /_search/scroll?scroll=15m HTTP/1.1\nHost: localhost:9200\nAccept-Encoding: identity\nContent-Length: {sid_length}\nconnection: keep-alive\n\n{sid}\n\"\"\"\n\ndef get_stream(host, port, verbose=True):\n # Set up the socket.\n s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n s.connect((HOST, PORT))\n s.send(init_msg)\n\n # Fetch scroll_id and total number of hits.\n data = s.recv(4096)\n payload = json.loads(data.split(CRLF)[-1])\n sid = payload['_scroll_id']\n total_hits = payload['hits']['total']\n\n if verbose:\n print \"Total hits: {}\".format(total_hits)\n\n # Iterate through results.\n while True:\n # Send data request.\n msg = scroll_msg.format(sid=sid, sid_length=len(sid))\n s.send(msg)\n\n # Fetch the response body.\n data = s.recv(1024)\n header, body = data.split(CRLF)\n content_length = int(re.findall('Content-Length: (\\d*)', header)[0])\n while len(body) < content_length:\n body += s.recv(1024)\n\n # Extract results from response body.\n payload = json.loads(body)\n sid = payload['_scroll_id']\n hits = payload['hits']['hits']\n\n #print payload['_shards']\n\n if not hits:\n break\n\n for hit in hits:\n yield hit\n\n\nfor count, _ in enumerate(get_stream(HOST, PORT), 1): pass\n\nprint count\n```\n\nWhen I run that a few times, I get the following:\n\n```\n$ python new_test.py \nTotal hits: 56366\n11650\n$ python new_test.py \nTotal hits: 56366\n24550\n$ python new_test.py \nTotal hits: 56366\n8550\n```\n\nNow if I un-comment the line `#print payload['_shards']`, the ended up being the following during one run:\n\n```\nTotal hits: 56366\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n\n...\n\n{u'successful': 4, u'failed': 0, u'total': 4}\n{u'successful': 4, u'failed': 0, u'total': 4}\n{u'successful': 4, u'failed': 0, u'total': 4}\n{u'successful': 4, 
u'failed': 0, u'total': 4}\n{u'successful': 4, u'failed': 0, u'total': 4}\n{u'successful': 4, u'failed': 0, u'total': 4}\n28110\n```\n\nand ended up as the following the next run:\n\n```\nTotal hits: 56366\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n\n...\n\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 5, u'failed': 0, u'total': 5}\n{u'successful': 3, u'failed': 0, u'total': 3}\n{u'successful': 1, u'failed': 0, u'total': 1}\n{u'successful': 0, u'failed': 0, u'total': 0}\n56366\n```\n\n_Note_: The last run apparently returned _all_ documents. This is the first time I've seen this during this experimentation.\n\nDoes anyone have any idea what's going on here? As far as I can tell, I never run into these issues when not doing the regular expression as part of the search, but other than that I'm at a loss.\n\nThanks for any help!\n", "comments": [ { "body": "Regexp can be a heavy query, and you have a timeout of 30 milliseconds...I get the same results when trying this locally. Setting the timeout higher solves the problem.\n\nWondering if we should change the behaviour if timeouts occur?\n", "created_at": "2016-02-13T14:32:37Z" }, { "body": "@clintongormley is this issue still relevant?", "created_at": "2017-04-26T10:58:01Z" }, { "body": "Yes, I think we should change the behaviour of scroll - it seems wrong that we drop results here.", "created_at": "2017-04-26T11:13:05Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-26T03:25:35Z" }, { "body": "Closing in favour of https://github.com/elastic/elasticsearch/issues/28499 where I will add a comment about needing to deal with partial results from timeouts as well as shard failures", "created_at": "2018-07-27T13:18:32Z" } ], "number": 16555, "title": "Varrying numbers of results from scan and scroll" }
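For readers hitting the same symptom: the cause discussed in the comments is the 30 ms query `timeout` on the initial request, which lets slow shards return partial pages that the loop then consumes without noticing. The sketch below shows a scroll loop that omits the query timeout and refuses partial pages; it assumes a 5.x-era cluster where the scroll id can be sent as a JSON body, and the index name, field, and localhost URL are simply taken from the report above:

```python
import requests

ES = "http://localhost:9200"

def scan(index, query, page_size=500, keep_alive="5m"):
    # `scroll` is the keep-alive of the scroll context. The per-request query
    # `timeout` from the report (30, i.e. 30 ms) is deliberately left out: it
    # lets slow shards return partial pages, which is what drops documents.
    resp = requests.post(f"{ES}/{index}/_search", params={"scroll": keep_alive},
                         json={"size": page_size, "query": query}).json()
    scroll_id, hits = resp["_scroll_id"], resp["hits"]["hits"]
    while hits:
        yield from hits
        resp = requests.post(f"{ES}/_search/scroll",
                             json={"scroll": keep_alive, "scroll_id": scroll_id}).json()
        shards = resp["_shards"]
        if shards["failed"] or shards["successful"] < shards["total"]:
            raise RuntimeError(f"partial scroll page: {shards}")  # don't silently drop hits
        scroll_id, hits = resp["_scroll_id"], resp["hits"]["hits"]

total = sum(1 for _ in scan("index", {"regexp": {"date_publ": "2001.*"}}))
print(total)
```

Checking `_shards` on every page (or using the official elasticsearch-py scan helper) surfaces the partial pages seen in the report instead of silently undercounting.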
{ "body": "Today there is a lot of code duplication and different handling of errors\r\nin the two different scroll modes. Yet, it's not clear if we keep both of\r\nthem but this simplification will help to further refactor this code to also\r\nadd cross cluster search capabilities.\r\n\r\nThis refactoring also fixes bugs when shards failed due to the node dropped out of the cluster in between scroll requests and failures during the fetch phase of the scroll. Both places where simply ignoring the failure and logging to debug. This can cause issues like #16555\r\n\r\n", "number": 24979, "review_comments": [ { "body": "useless return statement? or maybe future-proof? in the latter case we should do the same above in innerOnResponse?", "created_at": "2017-05-31T13:32:46Z" } ], "title": "Extract a common base class for scroll executions" }
{ "commits": [ { "message": "Extract a common base class for scroll executions\n\nToday there is a lot of code duplication and different handling of errors\nin the two different scroll modes. Yet, it's not clear if we keep both of\nthem but this simplificaiton will help to further refactor this code to also\nadd cross cluster search capabilities.\n\n:" }, { "message": "fix redundant modifier" }, { "message": "fix error handling and add several TODOs" }, { "message": "fix line len" }, { "message": "only pull next phase once" }, { "message": "Merge branch 'master' into share_scroll_code" }, { "message": "add unittest and fix shard failure concurrecy" }, { "message": "Merge branch 'master' into share_scroll_code" } ], "files": [ { "diff": "@@ -0,0 +1,226 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+import org.apache.logging.log4j.Logger;\n+import org.apache.logging.log4j.message.ParameterizedMessage;\n+import org.apache.logging.log4j.util.Supplier;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.common.util.concurrent.CountDown;\n+import org.elasticsearch.search.SearchPhaseResult;\n+import org.elasticsearch.search.SearchShardTarget;\n+import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n+import org.elasticsearch.search.internal.InternalSearchResponse;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest;\n+\n+/**\n+ * Abstract base class for scroll execution modes. This class encapsulates the basic logic to\n+ * fan out to nodes and execute the query part of the scroll request. Subclasses can for instance\n+ * run separate fetch phases etc.\n+ */\n+abstract class SearchScrollAsyncAction<T extends SearchPhaseResult> implements Runnable {\n+ /*\n+ * Some random TODO:\n+ * Today we still have a dedicated executing mode for scrolls while we could simplify this by implementing\n+ * scroll like functionality (mainly syntactic sugar) as an ordinary search with search_after. We could even go further and\n+ * make the scroll entirely stateless and encode the state per shard in the scroll ID.\n+ *\n+ * Today we also hold a context per shard but maybe\n+ * we want the context per coordinating node such that we route the scroll to the same coordinator all the time and hold the context\n+ * here? 
This would have the advantage that if we loose that node the entire scroll is deal not just one shard.\n+ *\n+ * Additionally there is the possibility to associate the scroll with a seq. id. such that we can talk to any replica as long as\n+ * the shards engine hasn't advanced that seq. id yet. Such a resume is possible and best effort, it could be even a safety net since\n+ * if you rely on indices being read-only things can change in-between without notification or it's hard to detect if there where any\n+ * changes while scrolling. These are all options to improve the current situation which we can look into down the road\n+ */\n+ protected final Logger logger;\n+ protected final ActionListener<SearchResponse> listener;\n+ protected final ParsedScrollId scrollId;\n+ protected final DiscoveryNodes nodes;\n+ protected final SearchPhaseController searchPhaseController;\n+ protected final SearchScrollRequest request;\n+ private final long startTime;\n+ private final List<ShardSearchFailure> shardFailures = new ArrayList<>();\n+ private final AtomicInteger successfulOps;\n+\n+ protected SearchScrollAsyncAction(ParsedScrollId scrollId, Logger logger, DiscoveryNodes nodes,\n+ ActionListener<SearchResponse> listener, SearchPhaseController searchPhaseController,\n+ SearchScrollRequest request) {\n+ this.startTime = System.currentTimeMillis();\n+ this.scrollId = scrollId;\n+ this.successfulOps = new AtomicInteger(scrollId.getContext().length);\n+ this.logger = logger;\n+ this.listener = listener;\n+ this.nodes = nodes;\n+ this.searchPhaseController = searchPhaseController;\n+ this.request = request;\n+ }\n+\n+ /**\n+ * Builds how long it took to execute the search.\n+ */\n+ private long buildTookInMillis() {\n+ // protect ourselves against time going backwards\n+ // negative values don't make sense and we want to be able to serialize that thing as a vLong\n+ return Math.max(1, System.currentTimeMillis() - startTime);\n+ }\n+\n+ public final void run() {\n+ final ScrollIdForNode[] context = scrollId.getContext();\n+ if (context.length == 0) {\n+ listener.onFailure(new SearchPhaseExecutionException(\"query\", \"no nodes to search on\", ShardSearchFailure.EMPTY_ARRAY));\n+ return;\n+ }\n+ final CountDown counter = new CountDown(scrollId.getContext().length);\n+ for (int i = 0; i < context.length; i++) {\n+ ScrollIdForNode target = context[i];\n+ DiscoveryNode node = nodes.get(target.getNode());\n+ final int shardIndex = i;\n+ if (node != null) { // it might happen that a node is going down in-between scrolls...\n+ InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(target.getScrollId(), request);\n+ // we can't create a SearchShardTarget here since we don't know the index and shard ID we are talking to\n+ // we only know the node and the search context ID. 
Yet, the response will contain the SearchShardTarget\n+ // from the target node instead...that's why we pass null here\n+ SearchActionListener<T> searchActionListener = new SearchActionListener<T>(null, shardIndex) {\n+\n+ @Override\n+ protected void setSearchShardTarget(T response) {\n+ // don't do this - it's part of the response...\n+ assert response.getSearchShardTarget() != null : \"search shard target must not be null\";\n+ }\n+\n+ @Override\n+ protected void innerOnResponse(T result) {\n+ assert shardIndex == result.getShardIndex() : \"shard index mismatch: \" + shardIndex + \" but got: \"\n+ + result.getShardIndex();\n+ onFirstPhaseResult(shardIndex, result);\n+ if (counter.countDown()) {\n+ SearchPhase phase = moveToNextPhase();\n+ try {\n+ phase.run();\n+ } catch (Exception e) {\n+ // we need to fail the entire request here - the entire phase just blew up\n+ // don't call onShardFailure or onFailure here since otherwise we'd countDown the counter\n+ // again which would result in an exception\n+ listener.onFailure(new SearchPhaseExecutionException(phase.getName(), \"Phase failed\", e,\n+ ShardSearchFailure.EMPTY_ARRAY));\n+ }\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Exception t) {\n+ onShardFailure(\"query\", shardIndex, counter, target.getScrollId(), t, null,\n+ SearchScrollAsyncAction.this::moveToNextPhase);\n+ }\n+ };\n+ executeInitialPhase(node, internalRequest, searchActionListener);\n+ } else { // the node is not available we treat this as a shard failure here\n+ onShardFailure(\"query\", shardIndex, counter, target.getScrollId(),\n+ new IllegalStateException(\"node [\" + target.getNode() + \"] is not available\"), null,\n+ SearchScrollAsyncAction.this::moveToNextPhase);\n+ }\n+ }\n+ }\n+\n+ synchronized ShardSearchFailure[] buildShardFailures() { // pkg private for testing\n+ if (shardFailures.isEmpty()) {\n+ return ShardSearchFailure.EMPTY_ARRAY;\n+ }\n+ return shardFailures.toArray(new ShardSearchFailure[shardFailures.size()]);\n+ }\n+\n+ // we do our best to return the shard failures, but its ok if its not fully concurrently safe\n+ // we simply try and return as much as possible\n+ private synchronized void addShardFailure(ShardSearchFailure failure) {\n+ shardFailures.add(failure);\n+ }\n+\n+ protected abstract void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest,\n+ SearchActionListener<T> searchActionListener);\n+\n+ protected abstract SearchPhase moveToNextPhase();\n+\n+ protected abstract void onFirstPhaseResult(int shardId, T result);\n+\n+ protected SearchPhase sendResponsePhase(SearchPhaseController.ReducedQueryPhase queryPhase,\n+ final AtomicArray<? extends SearchPhaseResult> fetchResults) {\n+ return new SearchPhase(\"fetch\") {\n+ @Override\n+ public void run() throws IOException {\n+ sendResponse(queryPhase, fetchResults);\n+ }\n+ };\n+ }\n+\n+ protected final void sendResponse(SearchPhaseController.ReducedQueryPhase queryPhase,\n+ final AtomicArray<? extends SearchPhaseResult> fetchResults) {\n+ try {\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(true, queryPhase, fetchResults.asList(),\n+ fetchResults::get);\n+ // the scroll ID never changes we always return the same ID. 
This ID contains all the shards and their context ids\n+ // such that we can talk to them abgain in the next roundtrip.\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = request.scrollId();\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(),\n+ buildTookInMillis(), buildShardFailures()));\n+ } catch (Exception e) {\n+ listener.onFailure(new ReduceSearchPhaseException(\"fetch\", \"inner finish failed\", e, buildShardFailures()));\n+ }\n+ }\n+\n+ protected void onShardFailure(String phaseName, final int shardIndex, final CountDown counter, final long searchId, Exception failure,\n+ @Nullable SearchShardTarget searchShardTarget,\n+ Supplier<SearchPhase> nextPhaseSupplier) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug((Supplier<?>) () -> new ParameterizedMessage(\"[{}] Failed to execute {} phase\", searchId, phaseName), failure);\n+ }\n+ addShardFailure(new ShardSearchFailure(failure, searchShardTarget));\n+ int successfulOperations = successfulOps.decrementAndGet();\n+ assert successfulOperations >= 0 : \"successfulOperations must be >= 0 but was: \" + successfulOperations;\n+ if (counter.countDown()) {\n+ if (successfulOps.get() == 0) {\n+ listener.onFailure(new SearchPhaseExecutionException(phaseName, \"all shards failed\", failure, buildShardFailures()));\n+ } else {\n+ SearchPhase phase = nextPhaseSupplier.get();\n+ try {\n+ phase.run();\n+ } catch (Exception e) {\n+ e.addSuppressed(failure);\n+ listener.onFailure(new SearchPhaseExecutionException(phase.getName(), \"Phase failed\", e,\n+ ShardSearchFailure.EMPTY_ARRAY));\n+ }\n+ }\n+ }\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchScrollAsyncAction.java", "status": "added" }, { "diff": "@@ -28,6 +28,8 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.common.util.concurrent.CountDown;\n+import org.elasticsearch.search.SearchPhaseResult;\n import org.elasticsearch.search.fetch.QueryFetchSearchResult;\n import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult;\n import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n@@ -39,147 +41,34 @@\n \n import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest;\n \n-final class SearchScrollQueryAndFetchAsyncAction extends AbstractAsyncAction {\n+final class SearchScrollQueryAndFetchAsyncAction extends SearchScrollAsyncAction<ScrollQueryFetchSearchResult> {\n \n- private final Logger logger;\n- private final SearchPhaseController searchPhaseController;\n private final SearchTransportService searchTransportService;\n- private final SearchScrollRequest request;\n private final SearchTask task;\n- private final ActionListener<SearchResponse> listener;\n- private final ParsedScrollId scrollId;\n- private final DiscoveryNodes nodes;\n- private volatile AtomicArray<ShardSearchFailure> shardFailures;\n private final AtomicArray<QueryFetchSearchResult> queryFetchResults;\n- private final AtomicInteger successfulOps;\n- private final AtomicInteger counter;\n \n SearchScrollQueryAndFetchAsyncAction(Logger logger, ClusterService clusterService, SearchTransportService searchTransportService,\n SearchPhaseController searchPhaseController, SearchScrollRequest request, SearchTask task,\n ParsedScrollId scrollId, ActionListener<SearchResponse> listener) {\n- 
this.logger = logger;\n- this.searchPhaseController = searchPhaseController;\n- this.searchTransportService = searchTransportService;\n- this.request = request;\n+ super(scrollId, logger, clusterService.state().nodes(), listener, searchPhaseController, request);\n this.task = task;\n- this.listener = listener;\n- this.scrollId = scrollId;\n- this.nodes = clusterService.state().nodes();\n- this.successfulOps = new AtomicInteger(scrollId.getContext().length);\n- this.counter = new AtomicInteger(scrollId.getContext().length);\n-\n+ this.searchTransportService = searchTransportService;\n this.queryFetchResults = new AtomicArray<>(scrollId.getContext().length);\n }\n \n- private ShardSearchFailure[] buildShardFailures() {\n- if (shardFailures == null) {\n- return ShardSearchFailure.EMPTY_ARRAY;\n- }\n- List<ShardSearchFailure> failures = shardFailures.asList();\n- return failures.toArray(new ShardSearchFailure[failures.size()]);\n- }\n-\n- // we do our best to return the shard failures, but its ok if its not fully concurrently safe\n- // we simply try and return as much as possible\n- private void addShardFailure(final int shardIndex, ShardSearchFailure failure) {\n- if (shardFailures == null) {\n- shardFailures = new AtomicArray<>(scrollId.getContext().length);\n- }\n- shardFailures.set(shardIndex, failure);\n- }\n-\n- public void start() {\n- if (scrollId.getContext().length == 0) {\n- listener.onFailure(new SearchPhaseExecutionException(\"query\", \"no nodes to search on\", ShardSearchFailure.EMPTY_ARRAY));\n- return;\n- }\n-\n- ScrollIdForNode[] context = scrollId.getContext();\n- for (int i = 0; i < context.length; i++) {\n- ScrollIdForNode target = context[i];\n- DiscoveryNode node = nodes.get(target.getNode());\n- if (node != null) {\n- executePhase(i, node, target.getScrollId());\n- } else {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"Node [{}] not available for scroll request [{}]\", target.getNode(), scrollId.getSource());\n- }\n- successfulOps.decrementAndGet();\n- if (counter.decrementAndGet() == 0) {\n- finishHim();\n- }\n- }\n- }\n-\n- for (ScrollIdForNode target : scrollId.getContext()) {\n- DiscoveryNode node = nodes.get(target.getNode());\n- if (node == null) {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"Node [{}] not available for scroll request [{}]\", target.getNode(), scrollId.getSource());\n- }\n- successfulOps.decrementAndGet();\n- if (counter.decrementAndGet() == 0) {\n- finishHim();\n- }\n- }\n- }\n- }\n-\n- void executePhase(final int shardIndex, DiscoveryNode node, final long searchId) {\n- InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(searchId, request);\n- searchTransportService.sendExecuteScrollFetch(node, internalRequest, task,\n- new SearchActionListener<ScrollQueryFetchSearchResult>(null, shardIndex) {\n- @Override\n- protected void setSearchShardTarget(ScrollQueryFetchSearchResult response) {\n- // don't do this - it's part of the response...\n- assert response.getSearchShardTarget() != null : \"search shard target must not be null\";\n- }\n- @Override\n- protected void innerOnResponse(ScrollQueryFetchSearchResult response) {\n- queryFetchResults.set(response.getShardIndex(), response.result());\n- if (counter.decrementAndGet() == 0) {\n- finishHim();\n- }\n- }\n- @Override\n- public void onFailure(Exception t) {\n- onPhaseFailure(t, searchId, shardIndex);\n- }\n- });\n- }\n-\n- private void onPhaseFailure(Exception e, long searchId, int shardIndex) {\n- if (logger.isDebugEnabled()) {\n- logger.debug((Supplier<?>) () 
-> new ParameterizedMessage(\"[{}] Failed to execute query phase\", searchId), e);\n- }\n- addShardFailure(shardIndex, new ShardSearchFailure(e));\n- successfulOps.decrementAndGet();\n- if (counter.decrementAndGet() == 0) {\n- if (successfulOps.get() == 0) {\n- listener.onFailure(new SearchPhaseExecutionException(\"query_fetch\", \"all shards failed\", e, buildShardFailures()));\n- } else {\n- finishHim();\n- }\n- }\n+ @Override\n+ protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest,\n+ SearchActionListener<ScrollQueryFetchSearchResult> searchActionListener) {\n+ searchTransportService.sendExecuteScrollFetch(node, internalRequest, task, searchActionListener);\n }\n \n- private void finishHim() {\n- try {\n- innerFinishHim();\n- } catch (Exception e) {\n- listener.onFailure(new ReduceSearchPhaseException(\"fetch\", \"\", e, buildShardFailures()));\n- }\n+ @Override\n+ protected SearchPhase moveToNextPhase() {\n+ return sendResponsePhase(searchPhaseController.reducedQueryPhase(queryFetchResults.asList(), true), queryFetchResults);\n }\n \n- private void innerFinishHim() throws Exception {\n- List<QueryFetchSearchResult> queryFetchSearchResults = queryFetchResults.asList();\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(true,\n- searchPhaseController.reducedQueryPhase(queryFetchSearchResults, true), queryFetchSearchResults, queryFetchResults::get);\n- String scrollId = null;\n- if (request.scroll() != null) {\n- scrollId = request.scrollId();\n- }\n- listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(),\n- buildTookInMillis(), buildShardFailures()));\n+ @Override\n+ protected void onFirstPhaseResult(int shardId, ScrollQueryFetchSearchResult result) {\n+ queryFetchResults.setOnce(shardId, result.result());\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -21,215 +21,102 @@\n \n import com.carrotsearch.hppc.IntArrayList;\n import org.apache.logging.log4j.Logger;\n-import org.apache.logging.log4j.message.ParameterizedMessage;\n-import org.apache.logging.log4j.util.Supplier;\n import org.apache.lucene.search.ScoreDoc;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n import org.elasticsearch.common.util.concurrent.CountDown;\n import org.elasticsearch.search.fetch.FetchSearchResult;\n import org.elasticsearch.search.fetch.ShardFetchRequest;\n import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n-import org.elasticsearch.search.internal.InternalSearchResponse;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.search.query.ScrollQuerySearchResult;\n \n-import java.util.List;\n-import java.util.concurrent.atomic.AtomicInteger;\n+import java.io.IOException;\n \n import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest;\n \n-final class SearchScrollQueryThenFetchAsyncAction extends AbstractAsyncAction {\n+final class SearchScrollQueryThenFetchAsyncAction extends SearchScrollAsyncAction<ScrollQuerySearchResult> {\n \n- private final Logger logger;\n private final SearchTask task;\n private final SearchTransportService 
searchTransportService;\n- private final SearchPhaseController searchPhaseController;\n- private final SearchScrollRequest request;\n- private final ActionListener<SearchResponse> listener;\n- private final ParsedScrollId scrollId;\n- private final DiscoveryNodes nodes;\n- private volatile AtomicArray<ShardSearchFailure> shardFailures;\n- final AtomicArray<QuerySearchResult> queryResults;\n- final AtomicArray<FetchSearchResult> fetchResults;\n- private final AtomicInteger successfulOps;\n+ private final AtomicArray<FetchSearchResult> fetchResults;\n+ private final AtomicArray<QuerySearchResult> queryResults;\n \n SearchScrollQueryThenFetchAsyncAction(Logger logger, ClusterService clusterService, SearchTransportService searchTransportService,\n SearchPhaseController searchPhaseController, SearchScrollRequest request, SearchTask task,\n ParsedScrollId scrollId, ActionListener<SearchResponse> listener) {\n- this.logger = logger;\n+ super(scrollId, logger, clusterService.state().nodes(), listener, searchPhaseController, request);\n this.searchTransportService = searchTransportService;\n- this.searchPhaseController = searchPhaseController;\n- this.request = request;\n this.task = task;\n- this.listener = listener;\n- this.scrollId = scrollId;\n- this.nodes = clusterService.state().nodes();\n- this.successfulOps = new AtomicInteger(scrollId.getContext().length);\n- this.queryResults = new AtomicArray<>(scrollId.getContext().length);\n this.fetchResults = new AtomicArray<>(scrollId.getContext().length);\n+ this.queryResults = new AtomicArray<>(scrollId.getContext().length);\n }\n \n- private ShardSearchFailure[] buildShardFailures() {\n- if (shardFailures == null) {\n- return ShardSearchFailure.EMPTY_ARRAY;\n- }\n- List<ShardSearchFailure> failures = shardFailures.asList();\n- return failures.toArray(new ShardSearchFailure[failures.size()]);\n- }\n-\n- // we do our best to return the shard failures, but its ok if its not fully concurrently safe\n- // we simply try and return as much as possible\n- private void addShardFailure(final int shardIndex, ShardSearchFailure failure) {\n- if (shardFailures == null) {\n- shardFailures = new AtomicArray<>(scrollId.getContext().length);\n- }\n- shardFailures.set(shardIndex, failure);\n+ protected void onFirstPhaseResult(int shardId, ScrollQuerySearchResult result) {\n+ queryResults.setOnce(shardId, result.queryResult());\n }\n \n- public void start() {\n- if (scrollId.getContext().length == 0) {\n- listener.onFailure(new SearchPhaseExecutionException(\"query\", \"no nodes to search on\", ShardSearchFailure.EMPTY_ARRAY));\n- return;\n- }\n- final CountDown counter = new CountDown(scrollId.getContext().length);\n- ScrollIdForNode[] context = scrollId.getContext();\n- for (int i = 0; i < context.length; i++) {\n- ScrollIdForNode target = context[i];\n- DiscoveryNode node = nodes.get(target.getNode());\n- if (node != null) {\n- executeQueryPhase(i, counter, node, target.getScrollId());\n- } else {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"Node [{}] not available for scroll request [{}]\", target.getNode(), scrollId.getSource());\n- }\n- successfulOps.decrementAndGet();\n- if (counter.countDown()) {\n- try {\n- executeFetchPhase();\n- } catch (Exception e) {\n- listener.onFailure(new SearchPhaseExecutionException(\"query\", \"Fetch failed\", e, ShardSearchFailure.EMPTY_ARRAY));\n- return;\n- }\n- }\n- }\n- }\n+ @Override\n+ protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest,\n+ 
SearchActionListener<ScrollQuerySearchResult> searchActionListener) {\n+ searchTransportService.sendExecuteScrollQuery(node, internalRequest, task, searchActionListener);\n }\n \n- private void executeQueryPhase(final int shardIndex, final CountDown counter, DiscoveryNode node, final long searchId) {\n- InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(searchId, request);\n- searchTransportService.sendExecuteScrollQuery(node, internalRequest, task,\n- new SearchActionListener<ScrollQuerySearchResult>(null, shardIndex) {\n-\n+ @Override\n+ protected SearchPhase moveToNextPhase() {\n+ return new SearchPhase(\"fetch\") {\n @Override\n- protected void setSearchShardTarget(ScrollQuerySearchResult response) {\n- // don't do this - it's part of the response...\n- assert response.getSearchShardTarget() != null : \"search shard target must not be null\";\n- }\n-\n- @Override\n- protected void innerOnResponse(ScrollQuerySearchResult result) {\n- queryResults.setOnce(result.getShardIndex(), result.queryResult());\n- if (counter.countDown()) {\n- try {\n- executeFetchPhase();\n- } catch (Exception e) {\n- onFailure(e);\n- }\n+ public void run() throws IOException {\n+ final SearchPhaseController.ReducedQueryPhase reducedQueryPhase = searchPhaseController.reducedQueryPhase(\n+ queryResults.asList(), true);\n+ if (reducedQueryPhase.scoreDocs.length == 0) {\n+ sendResponse(reducedQueryPhase, fetchResults);\n+ return;\n }\n- }\n \n- @Override\n- public void onFailure(Exception t) {\n- onQueryPhaseFailure(shardIndex, counter, searchId, t);\n- }\n- });\n- }\n-\n- void onQueryPhaseFailure(final int shardIndex, final CountDown counter, final long searchId, Exception failure) {\n- if (logger.isDebugEnabled()) {\n- logger.debug((Supplier<?>) () -> new ParameterizedMessage(\"[{}] Failed to execute query phase\", searchId), failure);\n- }\n- addShardFailure(shardIndex, new ShardSearchFailure(failure));\n- successfulOps.decrementAndGet();\n- if (counter.countDown()) {\n- if (successfulOps.get() == 0) {\n- listener.onFailure(new SearchPhaseExecutionException(\"query\", \"all shards failed\", failure, buildShardFailures()));\n- } else {\n- try {\n- executeFetchPhase();\n- } catch (Exception e) {\n- e.addSuppressed(failure);\n- listener.onFailure(new SearchPhaseExecutionException(\"query\", \"Fetch failed\", e, ShardSearchFailure.EMPTY_ARRAY));\n- }\n- }\n- }\n- }\n-\n- private void executeFetchPhase() throws Exception {\n- final SearchPhaseController.ReducedQueryPhase reducedQueryPhase = searchPhaseController.reducedQueryPhase(queryResults.asList(),\n- true);\n- if (reducedQueryPhase.scoreDocs.length == 0) {\n- finishHim(reducedQueryPhase);\n- return;\n- }\n-\n- final IntArrayList[] docIdsToLoad = searchPhaseController.fillDocIdsToLoad(queryResults.length(), reducedQueryPhase.scoreDocs);\n- final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(reducedQueryPhase, queryResults.length());\n- final CountDown counter = new CountDown(docIdsToLoad.length);\n- for (int i = 0; i < docIdsToLoad.length; i++) {\n- final int index = i;\n- final IntArrayList docIds = docIdsToLoad[index];\n- if (docIds != null) {\n- final QuerySearchResult querySearchResult = queryResults.get(index);\n- ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[index];\n- ShardFetchRequest shardFetchRequest = new ShardFetchRequest(querySearchResult.getRequestId(), docIds, lastEmittedDoc);\n- DiscoveryNode node = nodes.get(querySearchResult.getSearchShardTarget().getNodeId());\n- 
searchTransportService.sendExecuteFetchScroll(node, shardFetchRequest, task,\n- new SearchActionListener<FetchSearchResult>(querySearchResult.getSearchShardTarget(), index) {\n- @Override\n- protected void innerOnResponse(FetchSearchResult response) {\n- fetchResults.setOnce(response.getShardIndex(), response);\n+ final IntArrayList[] docIdsToLoad = searchPhaseController.fillDocIdsToLoad(queryResults.length(),\n+ reducedQueryPhase.scoreDocs);\n+ final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(reducedQueryPhase,\n+ queryResults.length());\n+ final CountDown counter = new CountDown(docIdsToLoad.length);\n+ for (int i = 0; i < docIdsToLoad.length; i++) {\n+ final int index = i;\n+ final IntArrayList docIds = docIdsToLoad[index];\n+ if (docIds != null) {\n+ final QuerySearchResult querySearchResult = queryResults.get(index);\n+ ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[index];\n+ ShardFetchRequest shardFetchRequest = new ShardFetchRequest(querySearchResult.getRequestId(), docIds,\n+ lastEmittedDoc);\n+ DiscoveryNode node = nodes.get(querySearchResult.getSearchShardTarget().getNodeId());\n+ searchTransportService.sendExecuteFetchScroll(node, shardFetchRequest, task,\n+ new SearchActionListener<FetchSearchResult>(querySearchResult.getSearchShardTarget(), index) {\n+ @Override\n+ protected void innerOnResponse(FetchSearchResult response) {\n+ fetchResults.setOnce(response.getShardIndex(), response);\n+ if (counter.countDown()) {\n+ sendResponse(reducedQueryPhase, fetchResults);\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Exception t) {\n+ onShardFailure(getName(), querySearchResult.getShardIndex(), counter, querySearchResult.getRequestId(),\n+ t, querySearchResult.getSearchShardTarget(),\n+ () -> sendResponsePhase(reducedQueryPhase, fetchResults));\n+ }\n+ });\n+ } else {\n+ // the counter is set to the total size of docIdsToLoad\n+ // which can have null values so we have to count them down too\n if (counter.countDown()) {\n- finishHim(reducedQueryPhase);\n+ sendResponse(reducedQueryPhase, fetchResults);\n }\n }\n-\n- @Override\n- public void onFailure(Exception t) {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"Failed to execute fetch phase\", t);\n- }\n- successfulOps.decrementAndGet();\n- if (counter.countDown()) {\n- finishHim(reducedQueryPhase);\n- }\n- }\n- });\n- } else {\n- // the counter is set to the total size of docIdsToLoad which can have null values so we have to count them down too\n- if (counter.countDown()) {\n- finishHim(reducedQueryPhase);\n }\n }\n- }\n+ };\n }\n \n- private void finishHim(SearchPhaseController.ReducedQueryPhase queryPhase) {\n- try {\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(true, queryPhase, fetchResults.asList(),\n- fetchResults::get);\n- String scrollId = null;\n- if (request.scroll() != null) {\n- scrollId = request.scrollId();\n- }\n- listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(),\n- buildTookInMillis(), buildShardFailures()));\n- } catch (Exception e) {\n- listener.onFailure(new ReduceSearchPhaseException(\"fetch\", \"inner finish failed\", e, buildShardFailures()));\n- }\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ protected final void doExecute(SearchScrollRequest request, ActionListener<Searc\n protected void doExecute(Task task, SearchScrollRequest 
request, ActionListener<SearchResponse> listener) {\n try {\n ParsedScrollId scrollId = parseScrollId(request.scrollId());\n- AbstractAsyncAction action;\n+ Runnable action;\n switch (scrollId.getType()) {\n case QUERY_THEN_FETCH_TYPE:\n action = new SearchScrollQueryThenFetchAsyncAction(logger, clusterService, searchTransportService,\n@@ -73,7 +73,7 @@ protected void doExecute(Task task, SearchScrollRequest request, ActionListener<\n default:\n throw new IllegalArgumentException(\"Scroll id type [\" + scrollId.getType() + \"] unrecognized\");\n }\n- action.start();\n+ action.run();\n } catch (Exception e) {\n listener.onFailure(e);\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,407 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.action.search;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.search.Scroll;\n+import org.elasticsearch.search.SearchShardTarget;\n+import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+public class SearchScrollAsyncActionTests extends ESTestCase {\n+\n+ public void testSendRequestsToNodes() throws InterruptedException {\n+\n+ ParsedScrollId scrollId = getParsedScrollId(\n+ new ScrollIdForNode(\"node1\", 1),\n+ new ScrollIdForNode(\"node2\", 2),\n+ new ScrollIdForNode(\"node3\", 17),\n+ new ScrollIdForNode(\"node1\", 0),\n+ new ScrollIdForNode(\"node3\", 0));\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder()\n+ .add(new DiscoveryNode(\"node1\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node2\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node3\", buildNewFakeTransportAddress(), Version.CURRENT)).build();\n+\n+ AtomicArray<SearchAsyncActionTests.TestSearchPhaseResult> results = new AtomicArray<>(scrollId.getContext().length);\n+ SearchScrollRequest request = new SearchScrollRequest();\n+ request.scroll(new Scroll(TimeValue.timeValueMinutes(1)));\n+ CountDownLatch latch = new CountDownLatch(1);\n+ AtomicInteger movedCounter = new AtomicInteger(0);\n+ 
SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult> action =\n+ new SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult>(scrollId, logger, discoveryNodes, null, null, request)\n+ {\n+ @Override\n+ protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest,\n+ SearchActionListener<SearchAsyncActionTests.TestSearchPhaseResult> searchActionListener)\n+ {\n+ new Thread(() -> {\n+ SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n+ new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), node);\n+ testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(node.getId(), new Index(\"test\", \"_na_\"), 1));\n+ searchActionListener.onResponse(testSearchPhaseResult);\n+ }).start();\n+ }\n+\n+ @Override\n+ protected SearchPhase moveToNextPhase() {\n+ assertEquals(1, movedCounter.incrementAndGet());\n+ return new SearchPhase(\"test\") {\n+ @Override\n+ public void run() throws IOException {\n+ latch.countDown();\n+ }\n+ };\n+ }\n+\n+ @Override\n+ protected void onFirstPhaseResult(int shardId, SearchAsyncActionTests.TestSearchPhaseResult result) {\n+ results.setOnce(shardId, result);\n+ }\n+ };\n+\n+ action.run();\n+ latch.await();\n+ ShardSearchFailure[] shardSearchFailures = action.buildShardFailures();\n+ assertEquals(0, shardSearchFailures.length);\n+ ScrollIdForNode[] context = scrollId.getContext();\n+ for (int i = 0; i < results.length(); i++) {\n+ assertNotNull(results.get(i));\n+ assertEquals(context[i].getScrollId(), results.get(i).getRequestId());\n+ assertEquals(context[i].getNode(), results.get(i).node.getId());\n+ }\n+ }\n+\n+ public void testFailNextPhase() throws InterruptedException {\n+\n+ ParsedScrollId scrollId = getParsedScrollId(\n+ new ScrollIdForNode(\"node1\", 1),\n+ new ScrollIdForNode(\"node2\", 2),\n+ new ScrollIdForNode(\"node3\", 17),\n+ new ScrollIdForNode(\"node1\", 0),\n+ new ScrollIdForNode(\"node3\", 0));\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder()\n+ .add(new DiscoveryNode(\"node1\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node2\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node3\", buildNewFakeTransportAddress(), Version.CURRENT)).build();\n+\n+ AtomicArray<SearchAsyncActionTests.TestSearchPhaseResult> results = new AtomicArray<>(scrollId.getContext().length);\n+ SearchScrollRequest request = new SearchScrollRequest();\n+ request.scroll(new Scroll(TimeValue.timeValueMinutes(1)));\n+ CountDownLatch latch = new CountDownLatch(1);\n+ AtomicInteger movedCounter = new AtomicInteger(0);\n+ ActionListener listener = new ActionListener() {\n+ @Override\n+ public void onResponse(Object o) {\n+ try {\n+ fail(\"got a result\");\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ try {\n+ assertTrue(e instanceof SearchPhaseExecutionException);\n+ SearchPhaseExecutionException ex = (SearchPhaseExecutionException) e;\n+ assertEquals(\"BOOM\", ex.getCause().getMessage());\n+ assertEquals(\"TEST_PHASE\", ex.getPhaseName());\n+ assertEquals(\"Phase failed\", ex.getMessage());\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+ };\n+ SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult> action =\n+ new SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult>(scrollId, logger, discoveryNodes, listener, null,\n+ request) {\n+ @Override\n+ protected void executeInitialPhase(DiscoveryNode node, 
InternalScrollSearchRequest internalRequest,\n+ SearchActionListener<SearchAsyncActionTests.TestSearchPhaseResult> searchActionListener)\n+ {\n+ new Thread(() -> {\n+ SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n+ new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), node);\n+ testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(node.getId(), new Index(\"test\", \"_na_\"), 1));\n+ searchActionListener.onResponse(testSearchPhaseResult);\n+ }).start();\n+ }\n+\n+ @Override\n+ protected SearchPhase moveToNextPhase() {\n+ assertEquals(1, movedCounter.incrementAndGet());\n+ return new SearchPhase(\"TEST_PHASE\") {\n+ @Override\n+ public void run() throws IOException {\n+ throw new IllegalArgumentException(\"BOOM\");\n+ }\n+ };\n+ }\n+\n+ @Override\n+ protected void onFirstPhaseResult(int shardId, SearchAsyncActionTests.TestSearchPhaseResult result) {\n+ results.setOnce(shardId, result);\n+ }\n+ };\n+\n+ action.run();\n+ latch.await();\n+ ShardSearchFailure[] shardSearchFailures = action.buildShardFailures();\n+ assertEquals(0, shardSearchFailures.length);\n+ ScrollIdForNode[] context = scrollId.getContext();\n+ for (int i = 0; i < results.length(); i++) {\n+ assertNotNull(results.get(i));\n+ assertEquals(context[i].getScrollId(), results.get(i).getRequestId());\n+ assertEquals(context[i].getNode(), results.get(i).node.getId());\n+ }\n+ }\n+\n+ public void testNodeNotAvailable() throws InterruptedException {\n+ ParsedScrollId scrollId = getParsedScrollId(\n+ new ScrollIdForNode(\"node1\", 1),\n+ new ScrollIdForNode(\"node2\", 2),\n+ new ScrollIdForNode(\"node3\", 17),\n+ new ScrollIdForNode(\"node1\", 0),\n+ new ScrollIdForNode(\"node3\", 0));\n+ // node2 is not available\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder()\n+ .add(new DiscoveryNode(\"node1\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node3\", buildNewFakeTransportAddress(), Version.CURRENT)).build();\n+\n+ AtomicArray<SearchAsyncActionTests.TestSearchPhaseResult> results = new AtomicArray<>(scrollId.getContext().length);\n+ SearchScrollRequest request = new SearchScrollRequest();\n+ request.scroll(new Scroll(TimeValue.timeValueMinutes(1)));\n+ CountDownLatch latch = new CountDownLatch(1);\n+ AtomicInteger movedCounter = new AtomicInteger(0);\n+ SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult> action =\n+ new SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult>(scrollId, logger, discoveryNodes, null, null, request)\n+ {\n+ @Override\n+ protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest,\n+ SearchActionListener<SearchAsyncActionTests.TestSearchPhaseResult> searchActionListener)\n+ {\n+ assertNotEquals(\"node2 is not available\", \"node2\", node.getId());\n+ new Thread(() -> {\n+ SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n+ new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), node);\n+ testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(node.getId(), new Index(\"test\", \"_na_\"), 1));\n+ searchActionListener.onResponse(testSearchPhaseResult);\n+ }).start();\n+ }\n+\n+ @Override\n+ protected SearchPhase moveToNextPhase() {\n+ assertEquals(1, movedCounter.incrementAndGet());\n+ return new SearchPhase(\"test\") {\n+ @Override\n+ public void run() throws IOException {\n+ latch.countDown();\n+ }\n+ };\n+ }\n+\n+ @Override\n+ protected void onFirstPhaseResult(int shardId, 
SearchAsyncActionTests.TestSearchPhaseResult result) {\n+ results.setOnce(shardId, result);\n+ }\n+ };\n+\n+ action.run();\n+ latch.await();\n+ ShardSearchFailure[] shardSearchFailures = action.buildShardFailures();\n+ assertEquals(1, shardSearchFailures.length);\n+ assertEquals(\"IllegalStateException[node [node2] is not available]\", shardSearchFailures[0].reason());\n+\n+ ScrollIdForNode[] context = scrollId.getContext();\n+ for (int i = 0; i < results.length(); i++) {\n+ if (context[i].getNode().equals(\"node2\")) {\n+ assertNull(results.get(i));\n+ } else {\n+ assertNotNull(results.get(i));\n+ assertEquals(context[i].getScrollId(), results.get(i).getRequestId());\n+ assertEquals(context[i].getNode(), results.get(i).node.getId());\n+ }\n+ }\n+ }\n+\n+ public void testShardFailures() throws InterruptedException {\n+ ParsedScrollId scrollId = getParsedScrollId(\n+ new ScrollIdForNode(\"node1\", 1),\n+ new ScrollIdForNode(\"node2\", 2),\n+ new ScrollIdForNode(\"node3\", 17),\n+ new ScrollIdForNode(\"node1\", 0),\n+ new ScrollIdForNode(\"node3\", 0));\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder()\n+ .add(new DiscoveryNode(\"node1\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node2\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node3\", buildNewFakeTransportAddress(), Version.CURRENT)).build();\n+\n+ AtomicArray<SearchAsyncActionTests.TestSearchPhaseResult> results = new AtomicArray<>(scrollId.getContext().length);\n+ SearchScrollRequest request = new SearchScrollRequest();\n+ request.scroll(new Scroll(TimeValue.timeValueMinutes(1)));\n+ CountDownLatch latch = new CountDownLatch(1);\n+ AtomicInteger movedCounter = new AtomicInteger(0);\n+ SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult> action =\n+ new SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult>(scrollId, logger, discoveryNodes, null, null, request)\n+ {\n+ @Override\n+ protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest,\n+ SearchActionListener<SearchAsyncActionTests.TestSearchPhaseResult> searchActionListener)\n+ {\n+ new Thread(() -> {\n+ if (internalRequest.id() == 17) {\n+ searchActionListener.onFailure(new IllegalArgumentException(\"BOOM on shard\"));\n+ } else {\n+ SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n+ new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), node);\n+ testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(node.getId(), new Index(\"test\", \"_na_\"), 1));\n+ searchActionListener.onResponse(testSearchPhaseResult);\n+ }\n+ }).start();\n+ }\n+\n+ @Override\n+ protected SearchPhase moveToNextPhase() {\n+ assertEquals(1, movedCounter.incrementAndGet());\n+ return new SearchPhase(\"test\") {\n+ @Override\n+ public void run() throws IOException {\n+ latch.countDown();\n+ }\n+ };\n+ }\n+\n+ @Override\n+ protected void onFirstPhaseResult(int shardId, SearchAsyncActionTests.TestSearchPhaseResult result) {\n+ results.setOnce(shardId, result);\n+ }\n+ };\n+\n+ action.run();\n+ latch.await();\n+ ShardSearchFailure[] shardSearchFailures = action.buildShardFailures();\n+ assertEquals(1, shardSearchFailures.length);\n+ assertEquals(\"IllegalArgumentException[BOOM on shard]\", shardSearchFailures[0].reason());\n+\n+ ScrollIdForNode[] context = scrollId.getContext();\n+ for (int i = 0; i < results.length(); i++) {\n+ if (context[i].getScrollId() == 17) {\n+ assertNull(results.get(i));\n+ } else {\n+ 
assertNotNull(results.get(i));\n+ assertEquals(context[i].getScrollId(), results.get(i).getRequestId());\n+ assertEquals(context[i].getNode(), results.get(i).node.getId());\n+ }\n+ }\n+ }\n+\n+ public void testAllShardsFailed() throws InterruptedException {\n+ ParsedScrollId scrollId = getParsedScrollId(\n+ new ScrollIdForNode(\"node1\", 1),\n+ new ScrollIdForNode(\"node2\", 2),\n+ new ScrollIdForNode(\"node3\", 17),\n+ new ScrollIdForNode(\"node1\", 0),\n+ new ScrollIdForNode(\"node3\", 0));\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder()\n+ .add(new DiscoveryNode(\"node1\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node2\", buildNewFakeTransportAddress(), Version.CURRENT))\n+ .add(new DiscoveryNode(\"node3\", buildNewFakeTransportAddress(), Version.CURRENT)).build();\n+\n+ AtomicArray<SearchAsyncActionTests.TestSearchPhaseResult> results = new AtomicArray<>(scrollId.getContext().length);\n+ SearchScrollRequest request = new SearchScrollRequest();\n+ request.scroll(new Scroll(TimeValue.timeValueMinutes(1)));\n+ CountDownLatch latch = new CountDownLatch(1);\n+ ActionListener listener = new ActionListener() {\n+ @Override\n+ public void onResponse(Object o) {\n+ try {\n+ fail(\"got a result\");\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ try {\n+ assertTrue(e instanceof SearchPhaseExecutionException);\n+ SearchPhaseExecutionException ex = (SearchPhaseExecutionException) e;\n+ assertEquals(\"BOOM on shard\", ex.getCause().getMessage());\n+ assertEquals(\"query\", ex.getPhaseName());\n+ assertEquals(\"all shards failed\", ex.getMessage());\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+ };\n+ SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult> action =\n+ new SearchScrollAsyncAction<SearchAsyncActionTests.TestSearchPhaseResult>(scrollId, logger, discoveryNodes, listener, null,\n+ request) {\n+ @Override\n+ protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest,\n+ SearchActionListener<SearchAsyncActionTests.TestSearchPhaseResult> searchActionListener)\n+ {\n+ new Thread(() -> searchActionListener.onFailure(new IllegalArgumentException(\"BOOM on shard\"))).start();\n+ }\n+\n+ @Override\n+ protected SearchPhase moveToNextPhase() {\n+ fail(\"don't move all shards failed\");\n+ return null;\n+ }\n+\n+ @Override\n+ protected void onFirstPhaseResult(int shardId, SearchAsyncActionTests.TestSearchPhaseResult result) {\n+ results.setOnce(shardId, result);\n+ }\n+ };\n+\n+ action.run();\n+ latch.await();\n+ ScrollIdForNode[] context = scrollId.getContext();\n+\n+ ShardSearchFailure[] shardSearchFailures = action.buildShardFailures();\n+ assertEquals(context.length, shardSearchFailures.length);\n+ assertEquals(\"IllegalArgumentException[BOOM on shard]\", shardSearchFailures[0].reason());\n+\n+ for (int i = 0; i < results.length(); i++) {\n+ assertNull(results.get(i));\n+ }\n+ }\n+\n+ private static ParsedScrollId getParsedScrollId(ScrollIdForNode... idsForNodes) {\n+ List<ScrollIdForNode> scrollIdForNodes = Arrays.asList(idsForNodes);\n+ Collections.shuffle(scrollIdForNodes, random());\n+ return new ParsedScrollId(\"\", \"test\", scrollIdForNodes.toArray(new ScrollIdForNode[0]));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/action/search/SearchScrollAsyncActionTests.java", "status": "added" } ] }
{ "body": "**Elasticsearch version**: 5.3.0\r\n\r\nThe rollover `max_docs` value uses the doc count from both primary and replica/s. It should be based upon the primary only, since the replica count can be changed.\r\n\r\n(start two nodes, to allow the default 1 replica to allocate)\r\n```\r\nPUT /logs-000001 \r\n{\r\n \"aliases\": {\r\n \"logs_write\": {}\r\n }\r\n}\r\nPUT logs_write/test/1\r\n{\r\n \"blah\": \"foo\"\r\n}\r\nPOST /logs_write/_rollover?dry_run\r\n{\r\n \"conditions\": {\r\n \"max_docs\": 2\r\n }\r\n}\r\n# response\r\n{\r\n \"old_index\": \"logs-000001\",\r\n \"new_index\": \"logs-000002\",\r\n \"rolled_over\": false,\r\n \"dry_run\": true,\r\n \"acknowledged\": false,\r\n \"shards_acknowledged\": false,\r\n \"conditions\": {\r\n \"[max_docs: 2]\": true\r\n }\r\n}\r\n```\r\nchange the replica count to 0:\r\n```\r\nPUT /logs-000001/_settings\r\n{\r\n \"index\": {\r\n \"number_of_replicas\": 0\r\n }\r\n}\r\n# response\r\n{\r\n \"old_index\": \"logs-000001\",\r\n \"new_index\": \"logs-000002\",\r\n \"rolled_over\": false,\r\n \"dry_run\": true,\r\n \"acknowledged\": false,\r\n \"shards_acknowledged\": false,\r\n \"conditions\": {\r\n \"[max_docs: 2]\": false\r\n }\r\n}\r\n```", "comments": [ { "body": "Hi @jpcarey, @clintongormley! I'd like to adopt this issue, if that's OK. It's my first time contributing to the project so if you guys are OK with me picking this up, I'll sign the [CLA](https://www.elastic.co/contributor-agreement/). Thanks! ", "created_at": "2017-04-25T14:01:07Z" }, { "body": "@eticzon great\r\n\r\nThanks for showing interest in contributing to Elasticsearch.\r\n\r\nHere's a guide to how to go about it: https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md", "created_at": "2017-04-25T14:51:10Z" }, { "body": "Appreciate it Eticzon. I too am experiencing the issue. Doubled my max_docs for temporary solution.", "created_at": "2017-05-08T20:16:53Z" }, { "body": "@eticzon would you mind if I will handle this issue? \r\n\r\n@jpcarey @clintongormley I fixed rollover condition (https://github.com/fred84/elasticsearch/commit/0dc2ec3ec86728bebaa136f2e951cdf93ed0a00d) with integration test successfully passing, but I'm currently struggling with passing full \"gradle check\"", "created_at": "2017-05-28T09:57:28Z" } ], "number": 24217, "title": "rollover `max_docs` uses doc count from both primary and replica/s" }
{ "body": "max_doc condition for index rollover should use document count only from primary shards \r\n\r\nFixes #24217\r\n", "number": 24977, "review_comments": [ { "body": "I would much prefer it if you add a unit test in TransportRolloverActionTests instead. The REST yaml tests are there to make sure we pass requests and read responses correctly but it has a big overhead to test simple inner behavior with it. To do so you can add an overload of evaluateConditions that takes IndicesStatsResponse and translates it to a call to an evaluateConditions which gets the number of docs (it seems that's the only stats we use in [here](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java#L197)", "created_at": "2017-05-31T06:50:00Z" }, { "body": "Thanks for suggestion. I'll create unit test. Should I keep yaml test?", "created_at": "2017-05-31T07:02:45Z" }, { "body": "yeah please keep it ", "created_at": "2017-05-31T08:44:17Z" }, { "body": "@bleskes I added 2 more tests to TransportRolloverActionTests", "created_at": "2017-06-01T13:16:09Z" } ], "title": "Rollover max docs should only count primaries" }
{ "commits": [ { "message": "rollover condition uses doc count from primary #24217" }, { "message": "Merge branch 'master' into 24217_rollover_max_docs" }, { "message": "unit test for max_docs condition evaluation in rollover action" }, { "message": "Merge branch 'master' into 24217_rollover_max_docs" }, { "message": "Merge branch 'master' into 24217_rollover_max_docs" }, { "message": "remove duplication in condition testing in TransportRolloverActionTests" }, { "message": "make max_doc condition index rollover yml test pass for both 1 node and multi nodes" }, { "message": "Merge branch 'master' into 24217_rollover_max_docs" }, { "message": "Merge branch 'master' into 24216_rollover_max_docs" }, { "message": "skip index rollover test with max_docs condition for previous versions of ES" } ], "files": [ { "diff": "@@ -119,7 +119,7 @@ protected void masterOperation(final RolloverRequest rolloverRequest, final Clus\n @Override\n public void onResponse(IndicesStatsResponse statsResponse) {\n final Set<Condition.Result> conditionResults = evaluateConditions(rolloverRequest.getConditions(),\n- statsResponse.getTotal().getDocs(), metaData.index(sourceIndexName));\n+ metaData.index(sourceIndexName), statsResponse);\n \n if (rolloverRequest.isDryRun()) {\n listener.onResponse(\n@@ -201,6 +201,11 @@ static Set<Condition.Result> evaluateConditions(final Set<Condition> conditions,\n .collect(Collectors.toSet());\n }\n \n+ static Set<Condition.Result> evaluateConditions(final Set<Condition> conditions, final IndexMetaData metaData,\n+ final IndicesStatsResponse statsResponse) {\n+ return evaluateConditions(conditions, statsResponse.getPrimaries().getDocs(), metaData);\n+ }\n+\n static void validate(MetaData metaData, RolloverRequest request) {\n final AliasOrIndex aliasOrIndex = metaData.getAliasAndIndexLookup().get(request.getAlias());\n if (aliasOrIndex == null) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,8 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesClusterStateUpdateRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest;\n+import org.elasticsearch.action.admin.indices.stats.CommonStats;\n+import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.support.ActiveShardCount;\n import org.elasticsearch.cluster.metadata.AliasAction;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n@@ -40,12 +42,30 @@\n import java.util.Locale;\n import java.util.Set;\n \n+import org.mockito.ArgumentCaptor;\n import static org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.evaluateConditions;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasSize;\n+import static org.mockito.Matchers.any;\n+import static org.mockito.Mockito.verify;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n+\n \n public class TransportRolloverActionTests extends ESTestCase {\n \n+ public void testDocStatsSelectionFromPrimariesOnly() throws Exception {\n+ long docsInPrimaryShards = 100;\n+ long docsInShards = 200;\n+\n+ final Condition condition = createTestCondition();\n+ evaluateConditions(Sets.newHashSet(condition), createMetaData(), createIndecesStatResponse(docsInShards, docsInPrimaryShards));\n+ final ArgumentCaptor<Condition.Stats> argument = 
ArgumentCaptor.forClass(Condition.Stats.class);\n+ verify(condition).evaluate(argument.capture());\n+\n+ assertEquals(docsInPrimaryShards, argument.getValue().numDocs);\n+ }\n+\n public void testEvaluateConditions() throws Exception {\n MaxDocsCondition maxDocsCondition = new MaxDocsCondition(100L);\n MaxAgeCondition maxAgeCondition = new MaxAgeCondition(TimeValue.timeValueHours(2));\n@@ -190,4 +210,37 @@ public void testCreateIndexRequest() throws Exception {\n assertThat(createIndexRequest.index(), equalTo(rolloverIndex));\n assertThat(createIndexRequest.cause(), equalTo(\"rollover_index\"));\n }\n+\n+ private IndicesStatsResponse createIndecesStatResponse(long totalDocs, long primaryDocs) {\n+ final CommonStats primaryStats = mock(CommonStats.class);\n+ when(primaryStats.getDocs()).thenReturn(new DocsStats(primaryDocs, 0));\n+\n+ final CommonStats totalStats = mock(CommonStats.class);\n+ when(totalStats.getDocs()).thenReturn(new DocsStats(totalDocs, 0));\n+\n+ final IndicesStatsResponse response = mock(IndicesStatsResponse.class);\n+ when(response.getPrimaries()).thenReturn(primaryStats);\n+ when(response.getTotal()).thenReturn(totalStats);\n+\n+ return response;\n+ }\n+\n+ private IndexMetaData createMetaData() {\n+ final Settings settings = Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ return IndexMetaData.builder(randomAlphaOfLength(10))\n+ .creationDate(System.currentTimeMillis() - TimeValue.timeValueHours(3).getMillis())\n+ .settings(settings)\n+ .build();\n+ }\n+\n+ private Condition createTestCondition() {\n+ final Condition condition = mock(Condition.class);\n+ when(condition.evaluate(any())).thenReturn(new Condition.Result(condition, true));\n+ return condition;\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverActionTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,57 @@\n+---\n+\"Max docs rollover conditions matches only primary shards\":\n+ - skip:\n+ version: \"- 5.6.1\"\n+ reason: \"matching docs changed from all shards to primary shards\"\n+\n+ # create index with alias and replica\n+ - do:\n+ indices.create:\n+ index: logs-1\n+ wait_for_active_shards: 1\n+ body:\n+ aliases:\n+ logs_search: {}\n+\n+ # index first document and wait for refresh\n+ - do:\n+ index:\n+ index: logs-1\n+ type: test\n+ id: \"1\"\n+ body: { \"foo\": \"hello world\" }\n+ refresh: true\n+\n+ # perform alias rollover with no result\n+ - do:\n+ indices.rollover:\n+ alias: \"logs_search\"\n+ wait_for_active_shards: 1\n+ body:\n+ conditions:\n+ max_docs: 2\n+\n+ - match: { conditions: { \"[max_docs: 2]\": false } }\n+ - match: { rolled_over: false }\n+\n+ # index second document and wait for refresh\n+ - do:\n+ index:\n+ index: logs-1\n+ type: test\n+ id: \"2\"\n+ body: { \"foo\": \"hello world\" }\n+ refresh: true\n+\n+ # perform alias rollover\n+ - do:\n+ indices.rollover:\n+ alias: \"logs_search\"\n+ wait_for_active_shards: 1\n+ body:\n+ conditions:\n+ max_docs: 2\n+\n+ - match: { conditions: { \"[max_docs: 2]\": true } }\n+ - match: { rolled_over: true }\n+", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.rollover/20_max_doc_condition.yml", "status": "added" } ] }
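The heart of the change above is which document count feeds the `max_docs` condition: `statsResponse.getTotal()` includes replica copies, while `statsResponse.getPrimaries()` does not. A small self-contained sketch of why that distinction matters for the bug report follows; the names are hypothetical and this is not the real `TransportRolloverAction` code.

```java
// With one indexed document and one replica, the "total" doc count is already 2,
// so a max_docs: 2 condition fires prematurely; counting primaries only keeps the
// condition independent of the replica setting.
public class RolloverSketch {
    static boolean maxDocsConditionMet(long docCount, long maxDocs) {
        return docCount >= maxDocs;
    }

    public static void main(String[] args) {
        long primaryDocs = 1;                            // one document indexed
        int replicas = 1;                                // default replica count
        long totalDocs = primaryDocs * (1 + replicas);   // what getTotal() reflects

        System.out.println(maxDocsConditionMet(totalDocs, 2));    // true  -> premature rollover
        System.out.println(maxDocsConditionMet(primaryDocs, 2));  // false -> expected behaviour
    }
}
```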
{ "body": "Since the upgrade to Lucene 7 (https://github.com/elastic/elasticsearch/pull/24089), the script-score function has been returning null values instead of the script return value:\r\n\r\n```\r\ncurl -H'Content-type: application/json' -XPUT localhost:9200/t/t/1 -d'{}'\r\n\r\ncurl -H 'Content-Type: application/json' localhost:9200/_search?pretty -d '\r\n{\"sort\": [\r\n {\r\n \"_script\": {\r\n \"type\": \"number\",\r\n \"script\": {\r\n \"inline\": \"return 10.0\",\r\n \"lang\": \"painless\"\r\n },\r\n \"order\": \"desc\"\r\n }\r\n }\r\n ]}'\r\n```\r\n\r\nreturns:\r\n\r\n```\r\n \"hits\" : {\r\n \"total\" : 1,\r\n \"max_score\" : null,\r\n \"hits\" : [\r\n {\r\n \"_index\" : \"t\",\r\n \"_type\" : \"t\",\r\n \"_id\" : \"1\",\r\n \"_score\" : null,\r\n \"_source\" : { },\r\n \"sort\" : [\r\n 1.7976931348623157E308\r\n ]\r\n }\r\n ]\r\n }\r\n```\r\n", "comments": [], "number": 24940, "title": "Script-score returning null values" }
{ "body": "This change fixes the script field sort when the returned type is a number.\r\n\r\nCloses #24940", "number": 24942, "review_comments": [], "title": "Fix script field sort returning Double.MAX_VALUE for all documents" }
{ "commits": [ { "message": "Fix script field sort returning Double.MAX_VALUE for all documents\n\nThis change fixes the script field sort when the returned type is a number.\n\nCloses #24940" } ], "files": [ { "diff": "@@ -293,7 +293,7 @@ protected SortedNumericDoubleValues getValues(LeafReaderContext context) throws\n @Override\n public boolean advanceExact(int doc) throws IOException {\n leafScript.setDocument(doc);\n- return false;\n+ return true;\n }\n @Override\n public double doubleValue() {", "filename": "core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java", "status": "modified" }, { "diff": "@@ -32,10 +32,14 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders;\n import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.script.MockScriptPlugin;\n+import org.elasticsearch.script.Script;\n+import org.elasticsearch.script.ScriptType;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalSettingsPlugin;\n@@ -46,19 +50,23 @@\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n+import java.util.HashMap;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Locale;\n+import java.util.Map;\n import java.util.Map.Entry;\n import java.util.Random;\n import java.util.Set;\n import java.util.TreeMap;\n import java.util.concurrent.ExecutionException;\n+import java.util.function.Function;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.fieldValueFactorFunction;\n+import static org.elasticsearch.script.MockScriptPlugin.NAME;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertFirstHit;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n@@ -76,9 +84,34 @@\n import static org.hamcrest.Matchers.nullValue;\n \n public class FieldSortIT extends ESIntegTestCase {\n+ public static class CustomScriptPlugin extends MockScriptPlugin {\n+ @Override\n+ @SuppressWarnings(\"unchecked\")\n+ protected Map<String, Function<Map<String, Object>, Object>> pluginScripts() {\n+ Map<String, Function<Map<String, Object>, Object>> scripts = new HashMap<>();\n+ scripts.put(\"doc['number'].value\", vars -> sortDoubleScript(vars));\n+ scripts.put(\"doc['keyword'].value\", vars -> sortStringScript(vars));\n+ return scripts;\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ static Double sortDoubleScript(Map<String, Object> vars) {\n+ Map<?, ?> doc = (Map) vars.get(\"doc\");\n+ Double index = ((Number) ((ScriptDocValues<?>) doc.get(\"number\")).getValues().get(0)).doubleValue();\n+ return index;\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ static String sortStringScript(Map<String, Object> vars) {\n+ Map<?, ?> doc = (Map) vars.get(\"doc\");\n+ String value = ((String) ((ScriptDocValues<?>) 
doc.get(\"keyword\")).getValues().get(0));\n+ return value;\n+ }\n+ }\n+\n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n- return Arrays.asList(InternalSettingsPlugin.class);\n+ return Arrays.asList(InternalSettingsPlugin.class, CustomScriptPlugin.class);\n }\n \n @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/9421\")\n@@ -1491,4 +1524,50 @@ public void testCustomFormat() throws Exception {\n assertArrayEquals(new String[] {\"2001:db8::ff00:42:8329\"},\n response.getHits().getAt(0).getSortValues());\n }\n+\n+ public void testScriptFieldSort() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ final int numDocs = randomIntBetween(10, 20);\n+ IndexRequestBuilder[] indexReqs = new IndexRequestBuilder[numDocs];\n+ for (int i = 0; i < numDocs; ++i) {\n+ indexReqs[i] = client().prepareIndex(\"test\", \"t\")\n+ .setSource(\"number\", Integer.toString(i));\n+ }\n+ indexRandom(true, indexReqs);\n+\n+ {\n+ Script script = new Script(ScriptType.INLINE, NAME, \"doc['number'].value\", Collections.emptyMap());\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(matchAllQuery())\n+ .setSize(randomIntBetween(1, numDocs + 5))\n+ .addSort(SortBuilders.scriptSort(script, ScriptSortBuilder.ScriptSortType.NUMBER))\n+ .addSort(SortBuilders.scoreSort())\n+ .execute().actionGet();\n+\n+ int expectedValue = 0;\n+ for (SearchHit hit : searchResponse.getHits()) {\n+ assertThat(hit.getSortValues().length, equalTo(2));\n+ assertThat(hit.getSortValues()[0], equalTo(expectedValue++));\n+ assertThat(hit.getSortValues()[1], equalTo(1f));\n+ }\n+ }\n+\n+ {\n+ Script script = new Script(ScriptType.INLINE, NAME, \"doc['keyword'].value\", Collections.emptyMap());\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(matchAllQuery())\n+ .setSize(randomIntBetween(1, numDocs + 5))\n+ .addSort(SortBuilders.scriptSort(script, ScriptSortBuilder.ScriptSortType.STRING))\n+ .addSort(SortBuilders.scoreSort())\n+ .execute().actionGet();\n+\n+ int expectedValue = 0;\n+ for (SearchHit hit : searchResponse.getHits()) {\n+ assertThat(hit.getSortValues().length, equalTo(2));\n+ assertThat(hit.getSortValues()[0], equalTo(Integer.toString(expectedValue++)));\n+ assertThat(hit.getSortValues()[1], equalTo(1f));\n+ }\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/sort/FieldSortIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.4.0 (on Elastic Cloud)\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Call `GET it_ops_logs/_stats`\r\n 2. Output:\r\n```\r\n{\r\n \"_shards\": {\r\n \"total\": 2,\r\n \"successful\": 0,\r\n \"failed\": 2,\r\n \"failures\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"it_ops_logs\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [ee8ER9CZQZaSSKkdcLJkzQ]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-3]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 0,\r\n \"index\": \"it_ops_logs\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [YAc58bLUTuiYAZy943nTAA]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-3]\"\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"_all\": {\r\n \"primaries\": {},\r\n \"total\": {}\r\n },\r\n \"indices\": {}\r\n}\r\n```\r\n\r\nLogs: Nothing unusual (no errors or such).\r\n", "comments": [ { "body": "If it is possible to reproduce It'd be helpful to have `error_trace` turned on.", "created_at": "2017-05-24T20:31:53Z" }, { "body": "I can't reproduce this with the above steps, I'm sure that there were some node failures in a particular manner that allowed this error to manifest. \r\n\r\n@cwurm Any other steps that you can add for the reproduction? ", "created_at": "2017-05-24T20:36:15Z" }, { "body": "@nik9000 It's a Cloud cluster, I don't think I can turn this on. :-(\r\n\r\n@abeyad It only happens on this index as far as I can tell. I know we resized that Cloud cluster (doubled it in size) earlier today. Cloud console shows all nodes as up and running. No errors in the log.\r\n\r\nI'm not sure what I can do. You can access the cluster if you want - ping me.", "created_at": "2017-05-24T20:51:00Z" }, { "body": "> It's a Cloud cluster, I don't think I can turn this on. :-(\r\n\r\nIt's a request parameter, you can set it (`?error_trace=true`).\r\n\r\n> No errors in the log.\r\n\r\nAre you sure? We warn log all requests that are responded to with a 500 (unless `error_trace` is set to true).", "created_at": "2017-05-24T20:58:34Z" }, { "body": "@jasontedor Oh sorry, my bad. Running `GET it_ops_logs/_stats?error_trace=true` doesn't change the output though.\r\n\r\nI'm searching furiously through the logs UI in Cloud, but can't find anything related. Unfortunately, it seems next to impossible to go through ES logs in Cloud sequentially (new log lines get added all the time and screw up the pagination). What would the exception look like? (I searched for `exception`, `error`, `failed` - anything I could think of).", "created_at": "2017-05-24T21:24:07Z" }, { "body": "I obtained the logs from this instance and I know what the issue is, we have a double decrement bug when handling certain queries that fail in the fetch phase. This double decrement leads to the number of outstanding queries on a shard falling negative and that leads to the serialization issue here.", "created_at": "2017-05-27T17:00:41Z" }, { "body": "I opened #24922.", "created_at": "2017-05-27T17:42:12Z" } ], "number": 24872, "title": "INTERNAL_SERVER_ERROR when calling _stats" }
{ "body": "This commit fixes a double decrement bug on the current query counter. The double decrement arises in a situation when the fetch phase is inlined for a query that is only touching one shard. After the query phase succeeds we decrement the current query counter. If the fetch phase ultimately fails, an exception is thrown and we decrement the current query counter again in the catch block. We also add assertions that all current stats counters remain non-negative at all times.\r\n\r\nRelates #22996, closes #24872\r\n", "number": 24922, "review_comments": [], "title": "Avoid double decrement on current query counter" }
{ "commits": [ { "message": "Avoid double decrement on current query counter\n\nThis commit fixes a double decrement bug on the current query\ncounter. The double decrement arises in a situation when the fetch phase\nis inlined for a query that is only touching one shard. After the query\nphase succeeds we decrement the current query counter. If the fetch\nphase ultimately fails, an exception is thrown and we decrement the\ncurrent query counter again in the catch block. We also add assertions\nthat all current stats counters remain non-negative at all\ntimes." } ], "files": [ { "diff": "@@ -80,8 +80,10 @@ public void onFailedQueryPhase(SearchContext searchContext) {\n computeStats(searchContext, statsHolder -> {\n if (searchContext.hasOnlySuggest()) {\n statsHolder.suggestCurrent.dec();\n+ assert statsHolder.suggestCurrent.count() >= 0;\n } else {\n statsHolder.queryCurrent.dec();\n+ assert statsHolder.queryCurrent.count() >= 0;\n }\n });\n }\n@@ -92,9 +94,11 @@ public void onQueryPhase(SearchContext searchContext, long tookInNanos) {\n if (searchContext.hasOnlySuggest()) {\n statsHolder.suggestMetric.inc(tookInNanos);\n statsHolder.suggestCurrent.dec();\n+ assert statsHolder.suggestCurrent.count() >= 0;\n } else {\n statsHolder.queryMetric.inc(tookInNanos);\n statsHolder.queryCurrent.dec();\n+ assert statsHolder.queryCurrent.count() >= 0;\n }\n });\n }\n@@ -114,6 +118,7 @@ public void onFetchPhase(SearchContext searchContext, long tookInNanos) {\n computeStats(searchContext, statsHolder -> {\n statsHolder.fetchMetric.inc(tookInNanos);\n statsHolder.fetchCurrent.dec();\n+ assert statsHolder.fetchCurrent.count() >= 0;\n });\n }\n \n@@ -174,6 +179,7 @@ public void onNewScrollContext(SearchContext context) {\n @Override\n public void onFreeScrollContext(SearchContext context) {\n totalStats.scrollCurrent.dec();\n+ assert totalStats.scrollCurrent.count() >= 0;\n totalStats.scrollMetric.inc(System.nanoTime() - context.getOriginNanoTime());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java", "status": "modified" }, { "diff": "@@ -251,6 +251,7 @@ public SearchPhaseResult executeQueryPhase(ShardSearchRequest request, SearchTas\n final SearchContext context = createAndPutContext(request);\n final SearchOperationListener operationListener = context.indexShard().getSearchOperationListener();\n context.incRef();\n+ boolean queryPhaseSuccess = false;\n try {\n context.setTask(task);\n operationListener.onPreQueryPhase(context);\n@@ -265,6 +266,7 @@ public SearchPhaseResult executeQueryPhase(ShardSearchRequest request, SearchTas\n contextProcessedSuccessfully(context);\n }\n final long afterQueryTime = System.nanoTime();\n+ queryPhaseSuccess = true;\n operationListener.onQueryPhase(context, afterQueryTime - time);\n if (request.numberOfShards() == 1) {\n return executeFetchPhase(context, operationListener, afterQueryTime);\n@@ -276,7 +278,9 @@ public SearchPhaseResult executeQueryPhase(ShardSearchRequest request, SearchTas\n e = (e.getCause() == null || e.getCause() instanceof Exception) ?\n (Exception) e.getCause() : new ElasticsearchException(e.getCause());\n }\n- operationListener.onFailedQueryPhase(context);\n+ if (!queryPhaseSuccess) {\n+ operationListener.onFailedQueryPhase(context);\n+ }\n logger.trace(\"Query phase failed\", e);\n processFailure(context, e);\n throw ExceptionsHelper.convertToRuntime(e);", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" } ] }
{ "body": "https://bugs.openjdk.java.net/browse/JDK-8162520\r\nSome filesystems can be so large that they return a negative value for their\r\nfree/used/available disk bytes due to being larger than `Long.MAX_VALUE`.\r\n\r\nThis adds protection for our `FsProbe` implementation and adds a test that it\r\ndoes the right thing.\r\n\r\nYou can see a failure from this here: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+nfs/4/console", "comments": [ { "body": "Can you please change the title of this PR to reflect the underlying problem (a JDK bug ID is too opaque), and rewrite the commit message to do the same?", "created_at": "2017-02-12T17:36:49Z" }, { "body": "@jasontedor I pushed a new commit that has a different commit header (and removed the changes so it only affects `total` bytes)", "created_at": "2017-02-13T16:30:52Z" }, { "body": "retest this please", "created_at": "2017-02-13T17:44:36Z" }, { "body": "I backported this to 5.3 and 5.x.", "created_at": "2017-03-19T00:31:17Z" } ], "number": 23093, "title": "Fix total disk bytes returning negative value" }
{ "body": "In #23093 we made a change so that total bytes for a filesystem would not be a\r\nnegative value when the total bytes were > Long.MAX_VALUE.\r\n\r\nThis fixes #24453 which had a related issue where `available` and `free` bytes\r\ncould also be so large that they were negative. These will now return\r\n`Long.MAX_VALUE` for the bytes if the JDK returns a negative value.\r\n", "number": 24911, "review_comments": [], "title": "Adjust available and free bytes to be non-negative on huge FSes" }
{ "commits": [ { "message": "Adjust available and free bytes to be non-negative on huge FSes\n\nIn #23093 we made a change so that total bytes for a filesystem would not be a\nnegative value when the total bytes were > Long.MAX_VALUE.\n\nThis fixes #24453 which had a related issue where `available` and `free` bytes\ncould also be so large that they were negative. These will now return\n`Long.MAX_VALUE` for the bytes if the JDK returns a negative value." } ], "files": [ { "diff": "@@ -155,8 +155,8 @@ public static FsInfo.Path getFSInfo(NodePath nodePath) throws IOException {\n // since recomputing these once per second (default) could be costly,\n // and they should not change:\n fsPath.total = adjustForHugeFilesystems(nodePath.fileStore.getTotalSpace());\n- fsPath.free = nodePath.fileStore.getUnallocatedSpace();\n- fsPath.available = nodePath.fileStore.getUsableSpace();\n+ fsPath.free = adjustForHugeFilesystems(nodePath.fileStore.getUnallocatedSpace());\n+ fsPath.available = adjustForHugeFilesystems(nodePath.fileStore.getUsableSpace());\n fsPath.type = nodePath.fileStore.type();\n fsPath.mount = nodePath.fileStore.toString();\n return fsPath;", "filename": "core/src/main/java/org/elasticsearch/monitor/fs/FsProbe.java", "status": "modified" }, { "diff": "@@ -246,6 +246,8 @@ List<String> readProcDiskStats() throws IOException {\n public void testAdjustForHugeFilesystems() throws Exception {\n NodePath np = new FakeNodePath(createTempDir());\n assertThat(FsProbe.getFSInfo(np).total, greaterThanOrEqualTo(0L));\n+ assertThat(FsProbe.getFSInfo(np).free, greaterThanOrEqualTo(0L));\n+ assertThat(FsProbe.getFSInfo(np).available, greaterThanOrEqualTo(0L));\n }\n \n static class FakeNodePath extends NodeEnvironment.NodePath {\n@@ -284,12 +286,12 @@ public long getTotalSpace() throws IOException {\n \n @Override\n public long getUsableSpace() throws IOException {\n- return 10;\n+ return randomIntBetween(-1000, 1000);\n }\n \n @Override\n public long getUnallocatedSpace() throws IOException {\n- return 10;\n+ return randomIntBetween(-1000, 1000);\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/monitor/fs/FsProbeTests.java", "status": "modified" } ] }
{ "body": "When a remote cluster goes away, and a request to the `_field_caps` API attempts to reach that remote cluster, elasticsearch will not reconnect to the remote cluster even after it has come back online. In order to re-establish a connection a `_search` request to that cluster must be made, which triggers a reconnect and re-enables the `_field_caps` API.\r\n\r\nperhaps related (though I'm happy to file as a separate issue) the failure states of the two API's are pretty dramatically different:\r\n\r\n`_search` API failure state:\r\n```json\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"connect_transport_exception\",\r\n \"reason\": \"[][127.0.0.1:9301] connect_timeout[30s]\"\r\n }\r\n ],\r\n \"type\": \"transport_exception\",\r\n \"reason\": \"unable to communicate with remote cluster [cluster2]\",\r\n \"caused_by\": {\r\n \"type\": \"connect_transport_exception\",\r\n \"reason\": \"[][127.0.0.1:9301] connect_timeout[30s]\",\r\n \"caused_by\": {\r\n \"type\": \"annotated_connect_exception\",\r\n \"reason\": \"Connection refused: /127.0.0.1:9301\",\r\n \"caused_by\": {\r\n \"type\": \"connect_exception\",\r\n \"reason\": \"Connection refused\"\r\n }\r\n }\r\n }\r\n },\r\n \"status\": 500\r\n}\r\n```\r\n\r\n`_field_caps` API failure state:\r\n```json\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"No node available for cluster: cluster2 nodes: []\"\r\n }\r\n ],\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"No node available for cluster: cluster2 nodes: []\"\r\n },\r\n \"status\": 500\r\n}\r\n```", "comments": [ { "body": "I can take a look at this later...", "created_at": "2017-05-23T12:43:41Z" } ], "number": 24763, "title": "_field_caps API doesn't trigger reconnect to remove cluster" }
{ "body": "If a cluster disconnects and comes back up we should ensure that\r\nwe connected to the cluster before we fire the requests.\r\n\r\nCloses #24763", "number": 24845, "review_comments": [ { "body": "can you fix the indentation here?", "created_at": "2017-05-24T07:09:10Z" } ], "title": "Ensure remote cluster is connected before fetching `_field_caps`" }
{ "commits": [ { "message": "Ensure remote cluster is connected before fetching `_field_caps`\n\nIf a cluster disconnects and comes back up we should ensure that\nwe connected to the cluster before we fire the requests.\n\nCloses #24763" }, { "message": "fix indentation" }, { "message": "Merge branch 'master' into issues/24763" } ], "files": [ { "diff": "@@ -118,38 +118,45 @@ public void onFailure(Exception e) {\n for (Map.Entry<String, OriginalIndices> remoteIndices : remoteClusterIndices.entrySet()) {\n String clusterAlias = remoteIndices.getKey();\n OriginalIndices originalIndices = remoteIndices.getValue();\n- Transport.Connection connection = remoteClusterService.getConnection(remoteIndices.getKey());\n- FieldCapabilitiesRequest remoteRequest = new FieldCapabilitiesRequest();\n- remoteRequest.setMergeResults(false); // we need to merge on this node\n- remoteRequest.indicesOptions(originalIndices.indicesOptions());\n- remoteRequest.indices(originalIndices.indices());\n- remoteRequest.fields(request.fields());\n- transportService.sendRequest(connection, FieldCapabilitiesAction.NAME, remoteRequest, TransportRequestOptions.EMPTY,\n- new TransportResponseHandler<FieldCapabilitiesResponse>() {\n- @Override\n- public FieldCapabilitiesResponse newInstance() {\n- return new FieldCapabilitiesResponse();\n- }\n-\n- @Override\n- public void handleResponse(FieldCapabilitiesResponse response) {\n- for (FieldCapabilitiesIndexResponse res : response.getIndexResponses()) {\n- indexResponses.add(new FieldCapabilitiesIndexResponse(RemoteClusterAware.buildRemoteIndexName(clusterAlias,\n- res.getIndexName()), res.get()));\n- }\n- onResponse.run();\n- }\n-\n- @Override\n- public void handleException(TransportException exp) {\n- onResponse.run();\n- }\n-\n- @Override\n- public String executor() {\n- return ThreadPool.Names.SAME;\n- }\n- });\n+ // if we are connected this is basically a no-op, if we are not we try to connect in parallel in a non-blocking fashion\n+ remoteClusterService.ensureConnected(clusterAlias, ActionListener.wrap(v -> {\n+ Transport.Connection connection = remoteClusterService.getConnection(clusterAlias);\n+ FieldCapabilitiesRequest remoteRequest = new FieldCapabilitiesRequest();\n+ remoteRequest.setMergeResults(false); // we need to merge on this node\n+ remoteRequest.indicesOptions(originalIndices.indicesOptions());\n+ remoteRequest.indices(originalIndices.indices());\n+ remoteRequest.fields(request.fields());\n+ transportService.sendRequest(connection, FieldCapabilitiesAction.NAME, remoteRequest, TransportRequestOptions.EMPTY,\n+ new TransportResponseHandler<FieldCapabilitiesResponse>() {\n+\n+ @Override\n+ public FieldCapabilitiesResponse newInstance() {\n+ return new FieldCapabilitiesResponse();\n+ }\n+\n+ @Override\n+ public void handleResponse(FieldCapabilitiesResponse response) {\n+ try {\n+ for (FieldCapabilitiesIndexResponse res : response.getIndexResponses()) {\n+ indexResponses.add(new FieldCapabilitiesIndexResponse(RemoteClusterAware.\n+ buildRemoteIndexName(clusterAlias, res.getIndexName()), res.get()));\n+ }\n+ } finally {\n+ onResponse.run();\n+ }\n+ }\n+\n+ @Override\n+ public void handleException(TransportException exp) {\n+ onResponse.run();\n+ }\n+\n+ @Override\n+ public String executor() {\n+ return ThreadPool.Names.SAME;\n+ }\n+ });\n+ }, e -> onResponse.run()));\n }\n \n }", "filename": "core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesAction.java", "status": "modified" }, { "diff": "@@ -160,12 +160,24 @@ public void 
fetchSearchShards(ClusterSearchShardsRequest searchRequest,\n // we can't proceed with a search on a cluster level.\n // in the future we might want to just skip the remote nodes in such a case but that can already be implemented on the caller\n // end since they provide the listener.\n- connectHandler.connect(ActionListener.wrap((x) -> fetchShardsInternal(searchRequest, listener), listener::onFailure));\n+ ensureConnected(ActionListener.wrap((x) -> fetchShardsInternal(searchRequest, listener), listener::onFailure));\n } else {\n fetchShardsInternal(searchRequest, listener);\n }\n }\n \n+ /**\n+ * Ensures that this cluster is connected. If the cluster is connected this operation\n+ * will invoke the listener immediately.\n+ */\n+ public void ensureConnected(ActionListener<Void> voidActionListener) {\n+ if (connectedNodes.isEmpty()) {\n+ connectHandler.connect(voidActionListener);\n+ } else {\n+ voidActionListener.onResponse(null);\n+ }\n+ }\n+\n private void fetchShardsInternal(ClusterSearchShardsRequest searchShardsRequest,\n final ActionListener<ClusterSearchShardsResponse> listener) {\n final DiscoveryNode node = nodeSupplier.get();", "filename": "core/src/main/java/org/elasticsearch/transport/RemoteClusterConnection.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.LatchedActionListener;\n import org.elasticsearch.action.OriginalIndices;\n import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsGroup;\n import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsRequest;\n@@ -46,6 +47,7 @@\n import java.io.Closeable;\n import java.io.IOException;\n import java.net.InetSocketAddress;\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n@@ -265,6 +267,18 @@ public Transport.Connection getConnection(DiscoveryNode node, String cluster) {\n return connection.getConnection(node);\n }\n \n+ /**\n+ * Ensures that the given cluster alias is connected. 
If the cluster is connected this operation\n+ * will invoke the listener immediately.\n+ */\n+ public void ensureConnected(String clusterAlias, ActionListener<Void> listener) {\n+ RemoteClusterConnection remoteClusterConnection = remoteClusters.get(clusterAlias);\n+ if (remoteClusterConnection == null) {\n+ throw new IllegalArgumentException(\"no such remote cluster: \" + clusterAlias);\n+ }\n+ remoteClusterConnection.ensureConnected(listener);\n+ }\n+\n public Transport.Connection getConnection(String cluster) {\n RemoteClusterConnection connection = remoteClusters.get(cluster);\n if (connection == null) {", "filename": "core/src/main/java/org/elasticsearch/transport/RemoteClusterService.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.Build;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.LatchedActionListener;\n import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;\n import org.elasticsearch.action.admin.cluster.node.info.NodesInfoAction;\n import org.elasticsearch.action.admin.cluster.node.info.NodesInfoRequest;\n@@ -730,4 +731,58 @@ public void onFailure(Exception e) {\n }\n return statsRef.get();\n }\n+\n+ public void testEnsureConnected() throws IOException, InterruptedException {\n+ List<DiscoveryNode> knownNodes = new CopyOnWriteArrayList<>();\n+ try (MockTransportService seedTransport = startTransport(\"seed_node\", knownNodes, Version.CURRENT);\n+ MockTransportService discoverableTransport = startTransport(\"discoverable_node\", knownNodes, Version.CURRENT)) {\n+ DiscoveryNode seedNode = seedTransport.getLocalDiscoNode();\n+ DiscoveryNode discoverableNode = discoverableTransport.getLocalDiscoNode();\n+ knownNodes.add(seedTransport.getLocalDiscoNode());\n+ knownNodes.add(discoverableTransport.getLocalDiscoNode());\n+ Collections.shuffle(knownNodes, random());\n+\n+ try (MockTransportService service = MockTransportService.createNewService(Settings.EMPTY, Version.CURRENT, threadPool, null)) {\n+ service.start();\n+ service.acceptIncomingRequests();\n+ try (RemoteClusterConnection connection = new RemoteClusterConnection(Settings.EMPTY, \"test-cluster\",\n+ Arrays.asList(seedNode), service, Integer.MAX_VALUE, n -> true)) {\n+ assertFalse(service.nodeConnected(seedNode));\n+ assertFalse(service.nodeConnected(discoverableNode));\n+ assertTrue(connection.assertNoRunningConnections());\n+ CountDownLatch latch = new CountDownLatch(1);\n+ connection.ensureConnected(new LatchedActionListener<>(new ActionListener<Void>() {\n+ @Override\n+ public void onResponse(Void aVoid) {\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ throw new AssertionError(e);\n+ }\n+ }, latch));\n+ latch.await();\n+ assertTrue(service.nodeConnected(seedNode));\n+ assertTrue(service.nodeConnected(discoverableNode));\n+ assertTrue(connection.assertNoRunningConnections());\n+\n+ // exec again we are already connected\n+ connection.ensureConnected(new LatchedActionListener<>(new ActionListener<Void>() {\n+ @Override\n+ public void onResponse(Void aVoid) {\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ throw new AssertionError(e);\n+ }\n+ }, latch));\n+ latch.await();\n+ assertTrue(service.nodeConnected(seedNode));\n+ assertTrue(service.nodeConnected(discoverableNode));\n+ assertTrue(connection.assertNoRunningConnections());\n+ }\n+ }\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/transport/RemoteClusterConnectionTests.java", "status": "modified" }, 
{ "diff": "@@ -189,6 +189,44 @@ public void testIncrementallyAddClusters() throws IOException {\n }\n }\n \n+ public void testEnsureConnected() throws IOException {\n+ List<DiscoveryNode> knownNodes = new CopyOnWriteArrayList<>();\n+ try (MockTransportService seedTransport = startTransport(\"cluster_1_node\", knownNodes, Version.CURRENT);\n+ MockTransportService otherSeedTransport = startTransport(\"cluster_2_node\", knownNodes, Version.CURRENT)) {\n+ DiscoveryNode seedNode = seedTransport.getLocalDiscoNode();\n+ DiscoveryNode otherSeedNode = otherSeedTransport.getLocalDiscoNode();\n+ knownNodes.add(seedTransport.getLocalDiscoNode());\n+ knownNodes.add(otherSeedTransport.getLocalDiscoNode());\n+ Collections.shuffle(knownNodes, random());\n+\n+ try (MockTransportService transportService = MockTransportService.createNewService(Settings.EMPTY, Version.CURRENT, threadPool,\n+ null)) {\n+ transportService.start();\n+ transportService.acceptIncomingRequests();\n+ Settings.Builder builder = Settings.builder();\n+ builder.putArray(\"search.remote.cluster_1.seeds\", seedNode.getAddress().toString());\n+ builder.putArray(\"search.remote.cluster_2.seeds\", otherSeedNode.getAddress().toString());\n+ try (RemoteClusterService service = new RemoteClusterService(Settings.EMPTY, transportService)) {\n+ assertFalse(service.isCrossClusterSearchEnabled());\n+ service.initializeRemoteClusters();\n+ assertFalse(service.isCrossClusterSearchEnabled());\n+ service.updateRemoteCluster(\"cluster_1\", Collections.singletonList(seedNode.getAddress().address()));\n+ assertTrue(service.isCrossClusterSearchEnabled());\n+ assertTrue(service.isRemoteClusterRegistered(\"cluster_1\"));\n+ service.updateRemoteCluster(\"cluster_2\", Collections.singletonList(otherSeedNode.getAddress().address()));\n+ assertTrue(service.isCrossClusterSearchEnabled());\n+ assertTrue(service.isRemoteClusterRegistered(\"cluster_1\"));\n+ assertTrue(service.isRemoteClusterRegistered(\"cluster_2\"));\n+ service.updateRemoteCluster(\"cluster_2\", Collections.emptyList());\n+ assertFalse(service.isRemoteClusterRegistered(\"cluster_2\"));\n+ IllegalArgumentException iae = expectThrows(IllegalArgumentException.class,\n+ () -> service.updateRemoteCluster(RemoteClusterAware.LOCAL_CLUSTER_GROUP_KEY, Collections.emptyList()));\n+ assertEquals(\"remote clusters must not have the empty string as its key\", iae.getMessage());\n+ }\n+ }\n+ }\n+ }\n+\n public void testRemoteNodeAttribute() throws IOException, InterruptedException {\n final Settings settings =\n Settings.builder().put(\"search.remote.node.attr\", \"gateway\").build();", "filename": "core/src/test/java/org/elasticsearch/transport/RemoteClusterServiceTests.java", "status": "modified" } ] }
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Feature request -->\r\n\r\n**Describe the feature**: integer_range don't match expectations, \r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.3.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): 1.8\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): macOS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n**Steps to reproduce**:\r\n\r\nPlease include a *minimal* but *complete* recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n 1. create index with integer_range\r\n```\r\ncurl -XPUT http://127.0.0.1:9200/test_range -d '\r\n{\r\n \"mappings\": {\r\n \"test\": {\r\n \"properties\": {\r\n \"expected_attendees\": {\r\n \"type\": \"integer_range\"\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n 2. write test data\r\n```\r\ncurl -XPUT http://127.0.0.1:9200/test_range/test/1 -d '\r\n{\r\n \"expected_attendees\" : {\"gte\" : 10, \"lte\" : 20}\r\n}'\r\n```\r\n 3. search data\r\n```\r\ncurl http://127.0.0.1:9200/test_range/_search -d '\r\n{\r\n \"query\" : {\r\n \"range\" : {\r\n \"expected_attendees\" : { \r\n \"gte\" : 1,\r\n \"lte\" : 11,\r\n \"relation\" : \"within\" \r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\n4. response, i think it don't match expectations, it should response hits 0\r\n```\r\n{\"took\":1,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":1.0,\"hits\":[{\"_index\":\"test_range\",\"_type\":\"test\",\"_id\":\"1\",\"_score\":1.0,\"_source\":\r\n{\r\n \"expected_attendees\" : {\"gte\" : 10, \"lte\" : 20}\r\n}}]}}%\r\n```\r\n**Provide logs (if relevant)**:\r\n\r\n", "comments": [], "number": 24744, "title": "integer_range don't match expectations" }
{ "body": "This PR fixes the `RangeFieldMapper` and `RangeQueryBuilder` to pass the correct relation to the `RangeQueryBuilder` when performing a range query over range fields.\r\n\r\ncloses #24744 ", "number": 24808, "review_comments": [], "title": "Fix RangeFieldMapper rangeQuery to properly handle relations" }
{ "commits": [ { "message": "Fix RangeFieldMapper rangeQuery to properly handle relations\n\nThis commit fixes the RangeFieldMapper and RangeQueryBuilder to pass the correct relation to the RangeQuery when performing a range query over range fields." } ], "files": [ { "diff": "@@ -282,12 +282,6 @@ public Query termQuery(Object value, QueryShardContext context) {\n return query;\n }\n \n- @Override\n- public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n- QueryShardContext context) {\n- return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, ShapeRelation.INTERSECTS, context);\n- }\n-\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n ShapeRelation relation, QueryShardContext context) {\n failIfNotIndexed();", "filename": "core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java", "status": "modified" }, { "diff": "@@ -495,9 +495,9 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n \n query = ((DateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper,\n timeZone, getForceDateParser(), context);\n- } else if (mapper instanceof RangeFieldMapper.RangeFieldType && mapper.typeName() == RangeFieldMapper.RangeType.DATE.name) {\n+ } else if (mapper instanceof RangeFieldMapper.RangeFieldType) {\n DateMathParser forcedDateParser = null;\n- if (this.format != null) {\n+ if (mapper.typeName() == RangeFieldMapper.RangeType.DATE.name && this.format != null) {\n forcedDateParser = new DateMathParser(this.format);\n }\n query = ((RangeFieldMapper.RangeFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper,", "filename": "core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java", "status": "modified" }, { "diff": "@@ -79,9 +79,6 @@ protected RangeQueryBuilder doCreateTestQueryBuilder() {\n query.format(\"yyyy-MM-dd'T'HH:mm:ss.SSSZZ\");\n }\n }\n- if (query.fieldName().equals(DATE_RANGE_FIELD_NAME)) {\n- query.relation(RandomPicks.randomFrom(random(), ShapeRelation.values()).getRelationName());\n- }\n break;\n case 2:\n default:\n@@ -97,6 +94,9 @@ protected RangeQueryBuilder doCreateTestQueryBuilder() {\n if (randomBoolean()) {\n query.to(null);\n }\n+ if (query.fieldName().equals(INT_RANGE_FIELD_NAME) || query.fieldName().equals(DATE_RANGE_FIELD_NAME)) {\n+ query.relation(RandomPicks.randomFrom(random(), ShapeRelation.values()).getRelationName());\n+ }\n return query;\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.index.query.MultiMatchQueryBuilder;\n import org.elasticsearch.index.query.Operator;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.index.query.RangeQueryBuilder;\n import org.elasticsearch.index.query.TermQueryBuilder;\n import org.elasticsearch.index.query.WrapperQueryBuilder;\n import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders;\n@@ -1843,4 +1844,20 @@ public void testQueryStringParserCache() throws Exception {\n assertThat(i + \" expected: \" + first + \" actual: \" + actual, Float.compare(first, actual), equalTo(0));\n }\n }\n+\n+ public void testRangeQueryRangeFields_24744() throws Exception {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type1\", \"int_range\", \"type=integer_range\"));\n+\n+ client().prepareIndex(\"test\", \"type1\", \"1\")\n+ .setSource(jsonBuilder()\n+ 
.startObject()\n+ .startObject(\"int_range\").field(\"gte\", 10).field(\"lte\", 20).endObject()\n+ .endObject()).get();\n+ refresh();\n+\n+ RangeQueryBuilder range = new RangeQueryBuilder(\"int_range\").relation(\"intersects\").from(Integer.MIN_VALUE).to(Integer.MAX_VALUE);\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(range).get();\n+ assertHitCount(searchResponse, 1);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java", "status": "modified" } ] }
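The underlying problem in #24744 is that the relation (`within`, `contains`, `intersects`) was dropped for non-date range fields, so the query fell back to the `INTERSECTS` default. Below is a plain-Java sketch of the two relations on the numbers from the issue; it is not the actual Lucene range-query code.

```java
public class RangeRelationSketch {
    static boolean intersects(int fieldFrom, int fieldTo, int queryFrom, int queryTo) {
        return fieldFrom <= queryTo && queryFrom <= fieldTo;
    }

    static boolean within(int fieldFrom, int fieldTo, int queryFrom, int queryTo) {
        return queryFrom <= fieldFrom && fieldTo <= queryTo;
    }

    public static void main(String[] args) {
        // indexed: expected_attendees = [10, 20]; query: gte 1, lte 11
        System.out.println(intersects(10, 20, 1, 11)); // true  -> what the buggy default evaluated
        System.out.println(within(10, 20, 1, 11));     // false -> what "relation": "within" asked for
    }
}
```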
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.3.1, 5.3.2, 5.4.0\r\n\r\n**Plugins installed**: None\r\n\r\n**JVM version**: 1.8.0_121\r\n\r\n**OS version**: Linux alan 4.8.0-41-generic #44-Ubuntu SMP Fri Mar 3 15:27:17 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nProblem and actual behavior: Thread falls into infinte loop in org.elasticsearch.index.query.IndicesQueryBuilder#doRewrite when range query 'from' > max value of the field. \r\nExpected behavior: Query returns a result\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create an index with mapping having a dateTime field\r\n`PUT some_index_name\r\n{\r\n \"mappings\": {\r\n \"some_type\": {\r\n \"properties\": {\r\n \"someDateTimeField\": {\r\n \"type\": \"date\" \r\n }\r\n }\r\n }\r\n }\r\n}`\r\n 2. Index at least one document\r\n`PUT some_index_name/some_type/1\r\n{ \"someDateTimeField\": \"2015-01-01T12:10:30Z\" } `\r\n 3. Send range query with value 'from' > max value of the field\r\n`{\r\n \"from\": 0,\r\n \"size\": 20,\r\n \"query\": {\r\n \"indices\": {\r\n \"indices\": [\r\n \"some_index_name\"\r\n ],\r\n \"query\": {\r\n \"range\": {\r\n \"someDateTimeField\": {\r\n \"from\": \"2017-04-21T22:00:00.000Z\",\r\n \"to\": \"2017-04-22T22:00:00.000Z\",\r\n \"include_lower\": true,\r\n \"include_upper\": true\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}`\r\n\r\n\r\n", "comments": [ { "body": "Closed by https://github.com/elastic/elasticsearch/pull/24736", "created_at": "2017-05-29T09:28:54Z" } ], "number": 24735, "title": "Thread falls into infinite loop when processing Indices query" }
{ "body": "<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\nFixes #24735 \r\n\r\nThis pull request is created against branch 5.x since modified class is marked as deprecated and removed from master already.\r\n", "number": 24736, "review_comments": [], "title": "Thread falls into infinite loop when processing Indices query" }
{ "commits": [ { "message": "fix\n\n(cherry picked from commit 2180a5fd1b6ca79a2a3f543128de7d579bdfa30f)" } ], "files": [ { "diff": "@@ -246,10 +246,10 @@ protected boolean doEquals(IndicesQueryBuilder other) {\n \n @Override\n protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException {\n- QueryBuilder newInnnerQuery = innerQuery.rewrite(queryShardContext);\n+ QueryBuilder newInnerQuery = innerQuery.rewrite(queryShardContext);\n QueryBuilder newNoMatchQuery = noMatchQuery.rewrite(queryShardContext);\n- if (newInnnerQuery != innerQuery || newNoMatchQuery != noMatchQuery) {\n- return new IndicesQueryBuilder(innerQuery, indices).noMatchQuery(noMatchQuery);\n+ if (newInnerQuery != innerQuery || newNoMatchQuery != noMatchQuery) {\n+ return new IndicesQueryBuilder(newInnerQuery, indices).noMatchQuery(newNoMatchQuery);\n }\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/IndicesQueryBuilder.java", "status": "modified" } ] }
{ "body": "\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.4.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): jre1.8.0_131\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Windows 7 Enterprise\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nI have followed the steps outlined for installing elasticsearch on my machine. On clicking on elasticsearch.bat, a black command screen appears for a fraction of a second and disappears. I navigate to http://localhost:9200 and there's nothing. \r\nI am just starting to evaluate elasticsearch and its frustrating to not even get past the install.\r\n\r\nMy JAVA_HOME is C:\\Program Files (x86)\\Java\\jre1.8.0_131\r\n\r\nPlease advise. \r\n\r\n**Steps to reproduce**:\r\n\r\nPlease include a *minimal* but *complete* recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n 1.\r\n 2.\r\n 3.\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n", "comments": [ { "body": "Hi @faraazaamir, we reserve Github for bug reports and feature requests only. Please ask questions like these in the [Elasticsearch forum](https://discuss.elastic.co/c/elasticsearch) instead. Thank you!\r\n\r\nAs a hint: I guess Elasticsearch is terminating due to some problem that is specific to your machine. You should not double-click on the `.bat` file but instead open a terminal (`cmd.exe`) and start the `elasticsearch.bat` file from there. By doing it this way you should be able to see the error. Alternatively, you can also inspect the log files in the `logs` directory of Elasticsearch.", "created_at": "2017-05-16T13:00:22Z" }, { "body": "Thanks for the suggestion.\r\nI also had to follow the steps mentioned in the below to get it working:\r\nhttp://stackoverflow.com/questions/40973584/installing-elasticsearch-5-0-2-on-windows-8-config-jvm-options-was-unexpecte\r\n", "created_at": "2017-05-16T13:42:06Z" }, { "body": "If I read that correctly, the \"fix\" was to modify the batch file (bad idea). The problem is that this user (and probably also you) started Elasticsearch from the `bin` directory. However, Elasticsearch should always be started from its home directory. So instead of invoking `elasticsearch.bat` in `bin`, you should just go the home directory and invoke `.\\bin\\elasticsearch.bat` (as mentioned in the [docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/windows.html#windows-running)).", "created_at": "2017-05-16T13:50:54Z" }, { "body": "I reverted the change and tried per your suggestion:\r\nC:\\Program Files (x86)\\ElasticSearch\\elasticsearch-5.4.0>.\\bin\\elasticsearch.bat\r\n\r\nGot the same error:\r\n\\ElasticSearch\\elasticsearch-5.4.0\\bin\\\\..\\config\\jvm.options was unexpected at\r\nthis time.\r\n\r\nSo the workaround mentioned in the stackoverflow link is needed unless you have any other suggestions on what I might have done wrong.", "created_at": "2017-05-17T06:43:56Z" }, { "body": "@faraazaamir The problem is that you have spaces in your path. This will be fixed with https://github.com/elastic/elasticsearch/pull/24731 (will probably be included in Elasticsearch 5.4.2).", "created_at": "2017-05-17T09:04:42Z" }, { "body": "> The problem is that you have spaces in your path.\r\n\r\nTo be very clear, the problem might be multiple spaces, not a single space (we test all of our builds with paths that include a space). 
However, I thought we fixed all issues with multiple spaces here.\r\n\r\nHave we tried to reproduce this?", "created_at": "2017-05-17T10:22:39Z" }, { "body": "> Have we tried to reproduce this?\r\n\r\nAfter discussion with @russcam, the issue is *parentheses* in the path, not the spaces. You can workaround this right now by removing the parentheses from your install path for Elasticsearch.", "created_at": "2017-05-17T10:43:58Z" }, { "body": "This will be fixed in 5.4.1.", "created_at": "2017-05-17T11:23:39Z" }, { "body": "Hi,\r\n\r\nI was able to run the bat file without changes once I changed my directory structure to not have any spaces.\r\n\r\nThanks all for your help.", "created_at": "2017-05-18T10:00:23Z" } ], "number": 24712, "title": "Cannot start Elasticsearch 5.4.0 on Windows when path contains parentheses" }
{ "body": "variable assignment needs to be quoted to correctly handle the scenario where the batch file path contains parentheses, for example, such as unzipping to a directory under `C:\\Program Files (x86)\\`.\r\n\r\nIf variable assignment is not quoted, the the following error is exhibited\r\n\r\n```\r\n<path after parentheses>\\\\..\\config\\jvm.options was unexpected at this time.\r\n```\r\n\r\nIs is not sufficient to just quote `%~dp0\\..\\config\\jvm.options` because `%ES_JVM_OPTIONS%` will then contain quotes and thus be double quoted when performing the subsequent `findstr` operation, and in addition, the quotes cannot be removed from `\"%ES_JVM_OPTIONS%\"` in the `findstr` operation because they are needed to handle the case where the value is set from an environment variable (which may contain parentheses).\r\n\r\nQuoting the whole assignment handles the case where the value assigned contains parentheses correctly.\r\n\r\nFixes #24712 ", "number": 24731, "review_comments": [], "title": "Handle parentheses in batch file path" }
{ "commits": [ { "message": "Handle spaces in batch file path\n\nvariable assignment needs to be quoted to correctly handle scenario where batch file path contains spaces" } ], "files": [ { "diff": "@@ -37,7 +37,7 @@ SET HOSTNAME=%COMPUTERNAME%\n \n if \"%ES_JVM_OPTIONS%\" == \"\" (\n rem '0' is the batch file, '~dp' appends the drive and path\n-set ES_JVM_OPTIONS=%~dp0\\..\\config\\jvm.options\n+set \"ES_JVM_OPTIONS=%~dp0\\..\\config\\jvm.options\"\n )\n \n @setlocal", "filename": "distribution/src/main/resources/bin/elasticsearch.bat", "status": "modified" } ] }
{ "body": "Hey all,\r\n\r\nI noticed a strange exception when implementing the new collapse feature in Elasticsearch 5.4.\r\nAs soon as you choose a \"from\" in the query, higher than a number of hits that would be returned as a result of the query, the query doesn't get executed and the following exception occurs:\r\n\r\n`\"reason\": \"Validation Failed: 1: no requests added;\"`\r\n\r\nI created this [gist](https://gist.github.com/byronvoorbach/cd9cdf2bbcbede1684201468906ef117#file-collapse-inner_hits-bug-txt) to highlight the problem.", "comments": [ { "body": "Nice catch @byronvoorbach .\r\nThe exception is thrown when the search hits in the response are empty, also it seems that the bug has been introduced in 5.4 (since the gist works as expected in 5.3). I'll work on a fix.", "created_at": "2017-05-15T11:54:55Z" }, { "body": "Nice @jimczi !", "created_at": "2017-05-17T12:17:06Z" } ], "number": 24672, "title": "ES 5.4 - Bug in field collapsing combined with inner_hits and size > results of query" }
{ "body": "This change skips the expand search phase entirely when there is no search hits in the response.\r\n\r\nFixes #24672", "number": 24688, "review_comments": [], "title": "Fix ExpandSearchPhase when response contains no hits" }
{ "commits": [ { "message": "Fix ExpandSearchPhase when response contains no hits\n\nThis change skips the expand search phase entirely when there is no search hits in the response." }, { "message": "Set the rest test to test all versions > 5.4.0" } ], "files": [ { "diff": "@@ -64,7 +64,7 @@ private boolean isCollapseRequest() {\n \n @Override\n public void run() throws IOException {\n- if (isCollapseRequest()) {\n+ if (isCollapseRequest() && searchResponse.getHits().getHits().length > 0) {\n SearchRequest searchRequest = context.getRequest();\n CollapseBuilder collapseBuilder = searchRequest.source().collapse();\n MultiSearchRequest multiRequest = new MultiSearchRequest();", "filename": "core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java", "status": "modified" }, { "diff": "@@ -196,4 +196,35 @@ public void run() throws IOException {\n assertNotNull(reference.get());\n assertEquals(1, mockSearchPhaseContext.phasesExecuted.get());\n }\n+\n+ public void testSkipExpandCollapseNoHits() throws IOException {\n+ MockSearchPhaseContext mockSearchPhaseContext = new MockSearchPhaseContext(1);\n+ mockSearchPhaseContext.searchTransport = new SearchTransportService(\n+ Settings.builder().put(\"search.remote.connect\", false).build(), null) {\n+\n+ @Override\n+ void sendExecuteMultiSearch(MultiSearchRequest request, SearchTask task, ActionListener<MultiSearchResponse> listener) {\n+ fail(\"expand should not try to send empty multi search request\");\n+ }\n+ };\n+ mockSearchPhaseContext.getRequest().source(new SearchSourceBuilder()\n+ .collapse(new CollapseBuilder(\"someField\").setInnerHits(new InnerHitBuilder().setName(\"foobarbaz\"))));\n+\n+ SearchHits hits = new SearchHits(new SearchHit[0], 1, 1.0f);\n+ InternalSearchResponse internalSearchResponse = new InternalSearchResponse(hits, null, null, null, false, null, 1);\n+ SearchResponse response = mockSearchPhaseContext.buildSearchResponse(internalSearchResponse, null);\n+ AtomicReference<SearchResponse> reference = new AtomicReference<>();\n+ ExpandSearchPhase phase = new ExpandSearchPhase(mockSearchPhaseContext, response, r ->\n+ new SearchPhase(\"test\") {\n+ @Override\n+ public void run() throws IOException {\n+ reference.set(r);\n+ }\n+ }\n+ );\n+ phase.run();\n+ mockSearchPhaseContext.assertNoFailure();\n+ assertNotNull(reference.get());\n+ assertEquals(1, mockSearchPhaseContext.phasesExecuted.get());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/search/ExpandSearchPhaseTests.java", "status": "modified" }, { "diff": "@@ -147,7 +147,6 @@ setup:\n - match: { hits.hits.2.inner_hits.sub_hits.hits.hits.0._id: \"5\" }\n - match: { hits.hits.2.inner_hits.sub_hits.hits.hits.1._id: \"4\" }\n \n-\n ---\n \"field collapsing, inner_hits and maxConcurrentGroupRequests\":\n \n@@ -247,3 +246,22 @@ setup:\n match_all: {}\n query_weight: 1\n rescore_query_weight: 2\n+\n+---\n+\"no hits and inner_hits\":\n+\n+ - skip:\n+ version: \" - 5.4.0\"\n+ reason: \"bug fixed in 5.4.1\"\n+\n+ - do:\n+ search:\n+ index: test\n+ type: test\n+ body:\n+ size: 0\n+ collapse: { field: numeric_group, inner_hits: { name: sub_hits, size: 1} }\n+ sort: [{ sort: desc }]\n+\n+ - match: { hits.total: 6 }\n+ - length: { hits.hits: 0 }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/110_field_collapsing.yml", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: openjdk version \"1.8.0_65\"\r\n\r\n**OS version**: CentOS release 6.8 \r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n`more_like_this` query does not work correctly with child documents, when using document reference in `like` parameter. Queries always return 0 hits. It works fine for same query and documents, if I index them without child-parent relationship.\r\n\r\nAdding `_routing` does not change anything. I tried (as an experiment) providing the reference with `_parent` parameter, as documentation tells that like parameter _\"is similar to the one used by the Multi GET API\"_, but it leads to \"failed to parse More Like This item. unknown field [_parent]\".\r\n\r\nWorks when providing text in like parameter directly.\r\nIt worked fine with same query in version 2.x.\r\n\r\n**Steps to reproduce**:\r\n 1. Create mappings A and B, where B has A as a parent\r\n 2. Index some documents\r\n 3. Query B with `more_like_this`\r\n\r\nA query for reproduction can be basic:\r\n```\r\n{\r\n \"query\":{\r\n \"more_like_this\":{\r\n \"fields\":[\r\n \"text\"\r\n ],\r\n \"like\":[\r\n {\r\n \"_index\":\"index_name\",\r\n \"_type\":\"B\",\r\n \"_id\":\"child_id\"\r\n }\r\n ],\r\n \"min_term_freq\":1,\r\n \"min_doc_freq\":1\r\n }\r\n }\r\n}\r\n```\r\n\r\n", "comments": [ { "body": "This reproduces on master with the following script:\r\n```\r\nPUT my_index\r\n{\r\n \"mappings\": {\r\n \"my_parent\": {},\r\n \"my_child\": {\r\n \"_parent\": {\r\n \"type\": \"my_parent\" \r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT my_index/my_parent/1 \r\n{\r\n \"text\": \"This is a parent document\"\r\n}\r\n\r\nPUT my_index/my_child/2?parent=1 \r\n{\r\n \"text\": \"This is a child document\"\r\n}\r\n\r\nPUT my_index/my_child/3?parent=1&refresh=true \r\n{\r\n \"text\": \"This is another child document\"\r\n}\r\n\r\n\r\nPUT my_index/my_parent/4\r\n{\r\n \"text\": \"This is a parent document\"\r\n}\r\n\r\nPUT my_index/my_child/5?parent=4\r\n{\r\n \"text\": \"This is a child document\"\r\n}\r\n\r\nPUT my_index/my_child/6?parent=4&refresh=true \r\n{\r\n \"text\": \"This is another child document\"\r\n}\r\n\r\n# Works and returns hits\r\nGET my_index/_search\r\n{\r\n \"query\":{\r\n \"more_like_this\":{\r\n \"fields\":[\r\n \"text\"\r\n ],\r\n \"like\":[\r\n {\r\n \"_index\":\"my_index\",\r\n \"_type\":\"my_parent\",\r\n \"_id\":\"1\"\r\n }\r\n ],\r\n \"min_term_freq\":1,\r\n \"min_doc_freq\":1\r\n }\r\n }\r\n}\r\n\r\n# Returns 0 hits\r\nGET my_index/_search\r\n{\r\n \"query\":{\r\n \"more_like_this\":{\r\n \"fields\":[\r\n \"text\"\r\n ],\r\n \"like\":[\r\n {\r\n \"_index\":\"my_index\",\r\n \"_type\":\"my_child\",\r\n \"_id\":\"2\"\r\n }\r\n ],\r\n \"min_term_freq\":1,\r\n \"min_doc_freq\":1\r\n }\r\n }\r\n}\r\n```\r\nEven if you specify the `_routing` using the parent's id it still returns 0 hits:\r\n\r\n```\r\n# Returns 0 hits\r\nGET my_index/_search\r\n{\r\n \"query\":{\r\n \"more_like_this\":{\r\n \"fields\":[\r\n \"text\"\r\n ],\r\n \"like\":[\r\n {\r\n \"_index\":\"my_index\",\r\n \"_type\":\"my_child\",\r\n \"_id\":\"2\",\r\n \"_routing\": \"1\"\r\n }\r\n ],\r\n \"min_term_freq\":1,\r\n \"min_doc_freq\":1\r\n }\r\n }\r\n}\r\n```", "created_at": "2017-03-22T15:15:48Z" } ], "number": 23699, "title": "more_like_this query doesn't work with child documents" }
{ "body": "When retrieving documents to extract terms from as part of a more like this query, the _routing value can be set, yet it gets lost. That leads to not being able to retrieve the documents, hence more_like_this used to return no matches all the time.\r\n\r\nCloses #23699", "number": 24679, "review_comments": [], "title": "Pass over _routing value with more_like_this items to be retrieved" }
{ "commits": [ { "message": "Pass over _routing value with more_like_this items to be retrieved\n\nWhen retrieving documents to extract terms from as part of a more like this query, the _routing value can be set, yet it gets lost. That leads to not being able to retrieve the documents, hence more_like_this used to return no matches all the time.\n\nCloses #23699" } ], "files": [ { "diff": "@@ -224,10 +224,6 @@ public String[] getLikeTexts() {\n return likeText;\n }\n \n- public void setLikeText(String likeText) {\n- setLikeText(new String[]{likeText});\n- }\n-\n public void setLikeText(String... likeText) {\n this.likeText = likeText;\n }\n@@ -236,15 +232,15 @@ public Fields[] getLikeFields() {\n return likeFields;\n }\n \n- public void setLikeText(Fields... likeFields) {\n+ public void setLikeFields(Fields... likeFields) {\n this.likeFields = likeFields;\n }\n \n public void setLikeText(List<String> likeText) {\n setLikeText(likeText.toArray(Strings.EMPTY_ARRAY));\n }\n \n- public void setUnlikeText(Fields... unlikeFields) {\n+ public void setUnlikeFields(Fields... unlikeFields) {\n this.unlikeFields = unlikeFields;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/common/lucene/search/MoreLikeThisQuery.java", "status": "modified" }, { "diff": "@@ -178,6 +178,7 @@ public Item() {\n this.index = copy.index;\n this.type = copy.type;\n this.id = copy.id;\n+ this.routing = copy.routing;\n this.doc = copy.doc;\n this.xContentType = copy.xContentType;\n this.fields = copy.fields;\n@@ -343,7 +344,7 @@ XContentType xContentType() {\n /**\n * Convert this to a {@link TermVectorsRequest} for fetching the terms of the document.\n */\n- public TermVectorsRequest toTermVectorsRequest() {\n+ TermVectorsRequest toTermVectorsRequest() {\n TermVectorsRequest termVectorsRequest = new TermVectorsRequest(index, type, id)\n .selectedFields(fields)\n .routing(routing)\n@@ -1085,14 +1086,14 @@ private Query handleItems(QueryShardContext context, MoreLikeThisQuery mltQuery,\n // fetching the items with multi-termvectors API\n MultiTermVectorsResponse likeItemsResponse = fetchResponse(context.getClient(), likeItems);\n // getting the Fields for liked items\n- mltQuery.setLikeText(getFieldsFor(likeItemsResponse));\n+ mltQuery.setLikeFields(getFieldsFor(likeItemsResponse));\n \n // getting the Fields for unliked items\n if (unlikeItems.length > 0) {\n MultiTermVectorsResponse unlikeItemsResponse = fetchResponse(context.getClient(), unlikeItems);\n org.apache.lucene.index.Fields[] unlikeFields = getFieldsFor(unlikeItemsResponse);\n if (unlikeFields.length > 0) {\n- mltQuery.setUnlikeText(unlikeFields);\n+ mltQuery.setUnlikeFields(unlikeFields);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.index.Fields;\n import org.apache.lucene.index.MultiFields;\n import org.apache.lucene.index.memory.MemoryIndex;\n+import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.ElasticsearchException;\n@@ -61,6 +62,7 @@\n import static org.elasticsearch.index.query.QueryBuilders.moreLikeThisQuery;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.instanceOf;\n \n public class MoreLikeThisQueryBuilderTests extends 
AbstractQueryTestCase<MoreLikeThisQueryBuilder> {\n@@ -264,6 +266,13 @@ private static Fields generateFields(String[] fieldNames, String text) throws IO\n protected void doAssertLuceneQuery(MoreLikeThisQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException {\n if (queryBuilder.likeItems() != null && queryBuilder.likeItems().length > 0) {\n assertThat(query, instanceOf(BooleanQuery.class));\n+ BooleanQuery booleanQuery = (BooleanQuery) query;\n+ for (BooleanClause booleanClause : booleanQuery) {\n+ if (booleanClause.getQuery() instanceof MoreLikeThisQuery) {\n+ MoreLikeThisQuery moreLikeThisQuery = (MoreLikeThisQuery) booleanClause.getQuery();\n+ assertThat(moreLikeThisQuery.getLikeFields().length, greaterThan(0));\n+ }\n+ }\n } else {\n // we rely on integration tests for a deeper check here\n assertThat(query, instanceOf(MoreLikeThisQuery.class));\n@@ -310,6 +319,12 @@ public void testItemSerialization() throws IOException {\n assertEquals(expectedItem, newItem);\n }\n \n+ public void testItemCopy() throws IOException {\n+ Item expectedItem = generateRandomItem();\n+ Item newItem = new Item(expectedItem);\n+ assertEquals(expectedItem, newItem);\n+ }\n+\n public void testItemFromXContent() throws IOException {\n Item expectedItem = generateRandomItem();\n String json = expectedItem.toXContent(XContentFactory.jsonBuilder(), ToXContent.EMPTY_PARAMS).string();", "filename": "core/src/test/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -623,4 +623,18 @@ public void testSelectFields() throws IOException, ExecutionException, Interrupt\n assertSearchResponse(response);\n assertHitCount(response, 1);\n }\n+\n+ public void testWithRouting() throws IOException {\n+ client().prepareIndex(\"index\", \"type\", \"1\").setRouting(\"3\").setSource(\"text\", \"this is a document\").get();\n+ client().prepareIndex(\"index\", \"type\", \"2\").setRouting(\"1\").setSource(\"text\", \"this is another document\").get();\n+ client().prepareIndex(\"index\", \"type\", \"3\").setRouting(\"4\").setSource(\"text\", \"this is yet another document\").get();\n+ refresh(\"index\");\n+\n+ Item item = new Item(\"index\", \"type\", \"2\").routing(\"1\");\n+ MoreLikeThisQueryBuilder moreLikeThisQueryBuilder = new MoreLikeThisQueryBuilder(new String[]{\"text\"}, null, new Item[]{item});\n+ moreLikeThisQueryBuilder.minTermFreq(1);\n+ moreLikeThisQueryBuilder.minDocFreq(1);\n+ SearchResponse searchResponse = client().prepareSearch(\"index\").setQuery(moreLikeThisQueryBuilder).get();\n+ assertEquals(2, searchResponse.getHits().totalHits);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/morelikethis/MoreLikeThisIT.java", "status": "modified" } ] }
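To make the behaviour above concrete, here is a small, hypothetical Java sketch of a `more_like_this` query whose "like" item carries an explicit `_routing` value, mirroring the `testWithRouting` case added in the PR. The index, type, id and routing values are illustrative only.

```java
import org.elasticsearch.index.query.MoreLikeThisQueryBuilder;
import org.elasticsearch.index.query.MoreLikeThisQueryBuilder.Item;

public class MoreLikeThisRoutingExample {
    // Builds an MLT query for a child document that lives on its parent's shard;
    // without the routing value the term vectors lookup cannot find the document,
    // which is why the query used to return no matches.
    public static MoreLikeThisQueryBuilder childAwareMoreLikeThis() {
        Item likeItem = new Item("my_index", "my_child", "2").routing("1"); // routed via parent id "1"
        MoreLikeThisQueryBuilder mlt =
                new MoreLikeThisQueryBuilder(new String[] {"text"}, null, new Item[] {likeItem});
        mlt.minTermFreq(1);
        mlt.minDocFreq(1);
        return mlt;
    }
}
```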
{ "body": "Perhaps a common scenario here where in a 2 data node system one data node gets hammered while the other is idle. The reason seems to lie in default Kibana and search routing config.\r\n\r\n* Kibana looks to use a preference key for routing searches based on a session ID. It also uses _msearch for dashboards to bulk up requests.\r\n* All primaries were on one data node and the replicas on the other (not uncommon after a restart)\r\n\r\nWhen elasticsearch gets a request to route on a `preference` setting that is a session ID it looks to select a choice of primary vs replica by [hashing the preference string only](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java#L181). If each shard of an index presents their list of primaries and replicas in the same order (and I haven't confirmed this is the case!) then this routing algo will pick the same node for all searches given the same session key which is what the user was seeing.\r\n\r\nIf we hashed the preference key AND the shard number we would randomize the choice of primary vs replica and hence node choice for each shard. This would spread load more evenly.", "comments": [ { "body": "Isn't the whole point of the `preference` to query the same shard copy all the time when the value is the same?", "created_at": "2017-05-12T11:42:38Z" }, { "body": "Yes. This change proposes that for each shardID we deterministically pick the same replica given the same session key _but not using the same policy across all shardIDs_ which leads to uneven loads.", "created_at": "2017-05-12T12:04:16Z" }, { "body": "We discussed this on FixItFriday and decided to adopt the proposed change of including shard_id in the hash of the preference key but with appropriate check for backwards-compatibility.", "created_at": "2017-05-12T12:46:24Z" }, { "body": "@markharwood I am +1 on this but we need to make sure that we preserve BWC. It should be rather simple here like in `OperationRouting` you can do this:\r\n\r\n```Java\r\nprivate ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable indexShard, String localNodeId, DiscoveryNodes nodes, @Nullable String preference) {\r\n // ...\r\n if (nodes.getMinNodeVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) {\r\n // use new method\r\n } else {\r\n // use old method\r\n }\r\n}\r\n```", "created_at": "2017-05-12T12:47:12Z" }, { "body": "Sorry, but I don´t get the point. Why is so important to preserve BWC? \"eventually\" primary and replica can diverge, so hitting same shards in a \"real time\" scenario along the (say) session is fine. But if you update your cluster, restart nodes, and so on... to install this \"patch\" What´s the problem to hit another shards? They should be synced and almost sure your session is changed.\r\n", "created_at": "2017-07-07T15:39:00Z" } ], "number": 24642, "title": "Kibana + 2 data nodes = uneven search loads?" }
{ "body": "A user reported uneven balancing of load on nodes handling search requests from Kibana which supplies a session ID in a routing preference. Each shardId was selecting the same node for a given session ID from the list of allocations (in their case 2 data nodes with all primaries on one node and replicas on the other).\r\nThis change counteracts the tendency to opt for the same node given the same user-supplied preference by incorporating shard ID in the hash of the preference key. This will help randomise node choices across shards.\r\n\r\nCloses #24642", "number": 24671, "review_comments": [ { "body": "can we use `indexShard.shardId.hashCode()` - it's good to have different hashing across indices as well.", "created_at": "2017-05-17T14:46:37Z" }, { "body": "can we say that the shard ordering is the determined by a fixed node order (and point to the AllocationService for details) and thus we need to make sure we scatter across shards? it's better to say why than just saying better ;)", "created_at": "2017-05-17T14:48:07Z" }, { "body": "why do we need the cluster service, can't we just use the generated state?", "created_at": "2017-05-17T14:49:18Z" }, { "body": "can we use random number of replicas, with more than one sometimes ?", "created_at": "2017-05-17T14:49:48Z" }, { "body": "randomize?", "created_at": "2017-05-17T14:50:22Z" }, { "body": "can we randomize the session id?", "created_at": "2017-05-17T14:51:39Z" }, { "body": "can we check that the distribution is balanced in a multi index / multi shard case? like that thing differ at most with 1.", "created_at": "2017-05-17T14:52:51Z" }, { "body": "Copy and pasted from other preference test. Will remove", "created_at": "2017-05-18T09:09:22Z" }, { "body": "ClusterStateCreationUtils could use an additional helper method to support this?\r\n\r\nstateWithActivePrimary(..) lets me pick num replicas but not shards.\r\nstateWithAssignedPrimariesAndOneReplica(...) lets me pick num shards but not replicas\r\n\r\n\r\n", "created_at": "2017-05-18T09:32:06Z" }, { "body": "sure", "created_at": "2017-05-18T09:54:30Z" }, { "body": "<s>So with random num shards and replicas I'm going to assume for this balancing test that shard assignments are optimal i.e. num nodes = shards x replicas (or only one Lucene index on each node). Otherwise I can't reason simply about how \"bunched up\" replicas are on each node and the expected number of nodes servicing a search </s>\r\n\r\nIgnore above- if I have each shard on a different node then the original routing code minus the fix passes because all nodes selected are different. I need num nodes to be < num shards to identify the failure and once you starting adding randomizable numbers of replicas then assignment logic and expected behaviour testing becomes too complex.", "created_at": "2017-05-18T10:58:17Z" }, { "body": "@markharwood remember to take hash % copies# collisions into account. So if you see two shards being search on the same node, they should have the same hash modulo number of shard copies.", "created_at": "2017-05-18T12:09:46Z" }, { "body": "nit: can you use assertThat or expose the actual values in the message.", "created_at": "2017-05-23T09:38:03Z" }, { "body": "same comment about exposing values.", "created_at": "2017-05-23T09:38:32Z" } ], "title": "Search: Fairer balancing when routing searches by session ID" }
{ "commits": [ { "message": "Search: Fairer balancing when routing searches by user-supplied preference values.\nA user reported uneven balancing of load on nodes handling search requests from Kibana which supplies a session ID in a routing preference. Each shardId was selecting the same node for a given session ID because one data node had all primaries and the other data node held all replicas after cluster startup.\nThis change counteracts the tendency to opt for the same node given the same user-supplied preference by incorporating shard ID in the hash of the preference key. This will help randomise node choices across shards.\n\nCloses #24642" }, { "message": "Addressed review comments - more randomisation. Removed use of Cluster Service\nAdded test for multiple indices and regression test that reproduces the hashing logic." }, { "message": "Changed assertEquals to assertThat" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster.routing;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n@@ -177,10 +178,20 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index\n }\n }\n // if not, then use it as the index\n+ int routingHash = Murmur3HashFunction.hash(preference);\n+ if (nodes.getMinNodeVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) {\n+ // The AllocationService lists shards in a fixed order based on nodes\n+ // so earlier versions of this class would have a tendency to\n+ // select the same node across different shardIds.\n+ // Better overall balancing can be achieved if each shardId opts\n+ // for a different element in the list by also incorporating the\n+ // shard ID into the hash of the user-supplied preference key.\n+ routingHash = 31 * routingHash + indexShard.shardId.hashCode();\n+ }\n if (awarenessAttributes.length == 0) {\n- return indexShard.activeInitializingShardsIt(Murmur3HashFunction.hash(preference));\n+ return indexShard.activeInitializingShardsIt(routingHash);\n } else {\n- return indexShard.preferAttributesActiveInitializingShardsIt(awarenessAttributes, nodes, Murmur3HashFunction.hash(preference));\n+ return indexShard.preferAttributesActiveInitializingShardsIt(awarenessAttributes, nodes, routingHash);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.RoutingTable.Builder;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.TestShardRouting;\n import org.elasticsearch.cluster.routing.UnassignedInfo;\n@@ -271,6 +272,51 @@ public static ClusterState stateWithAssignedPrimariesAndOneReplica(String index,\n state.routingTable(RoutingTable.builder().add(indexRoutingTableBuilder.build()).build());\n return state.build();\n }\n+ \n+ \n+ /**\n+ * Creates cluster state with several indexes, shards and replicas and all shards STARTED.\n+ */\n+ public static ClusterState stateWithAssignedPrimariesAndReplicas(String[] indices, int numberOfShards, int numberOfReplicas) {\n+\n+ int numberOfDataNodes = numberOfReplicas + 1; \n+ DiscoveryNodes.Builder discoBuilder = 
DiscoveryNodes.builder();\n+ for (int i = 0; i < numberOfDataNodes + 1; i++) {\n+ final DiscoveryNode node = newNode(i);\n+ discoBuilder = discoBuilder.add(node);\n+ }\n+ discoBuilder.localNodeId(newNode(0).getId());\n+ discoBuilder.masterNodeId(newNode(numberOfDataNodes + 1).getId()); \n+ ClusterState.Builder state = ClusterState.builder(new ClusterName(\"test\"));\n+ state.nodes(discoBuilder);\n+ Builder routingTableBuilder = RoutingTable.builder();\n+\n+ org.elasticsearch.cluster.metadata.MetaData.Builder metadataBuilder = MetaData.builder();\n+\n+ for (String index : indices) {\n+ IndexMetaData indexMetaData = IndexMetaData.builder(index)\n+ .settings(Settings.builder().put(SETTING_VERSION_CREATED, Version.CURRENT).put(SETTING_NUMBER_OF_SHARDS, numberOfShards)\n+ .put(SETTING_NUMBER_OF_REPLICAS, numberOfReplicas).put(SETTING_CREATION_DATE, System.currentTimeMillis()))\n+ .build();\n+ metadataBuilder.put(indexMetaData, false).generateClusterUuidIfNeeded();\n+ IndexRoutingTable.Builder indexRoutingTableBuilder = IndexRoutingTable.builder(indexMetaData.getIndex());\n+ for (int i = 0; i < numberOfShards; i++) {\n+ final ShardId shardId = new ShardId(index, \"_na_\", i);\n+ IndexShardRoutingTable.Builder indexShardRoutingBuilder = new IndexShardRoutingTable.Builder(shardId);\n+ indexShardRoutingBuilder\n+ .addShard(TestShardRouting.newShardRouting(index, i, newNode(0).getId(), null, true, ShardRoutingState.STARTED));\n+ for (int replica = 0; replica < numberOfReplicas; replica++) {\n+ indexShardRoutingBuilder.addShard(TestShardRouting.newShardRouting(index, i, newNode(replica + 1).getId(), null, false,\n+ ShardRoutingState.STARTED));\n+ }\n+ indexRoutingTableBuilder.addIndexShard(indexShardRoutingBuilder.build());\n+ }\n+ routingTableBuilder.add(indexRoutingTableBuilder.build());\n+ }\n+ state.metaData(metadataBuilder);\n+ state.routingTable(routingTableBuilder.build());\n+ return state.build();\n+ } \n \n /**\n * Creates cluster state with and index that has one shard and as many replicas as numberOfReplicas.", "filename": "core/src/test/java/org/elasticsearch/action/support/replication/ClusterStateCreationUtils.java", "status": "modified" }, { "diff": "@@ -21,8 +21,11 @@\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.support.replication.ClusterStateCreationUtils;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n@@ -44,6 +47,7 @@\n import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.Matchers.containsInAnyOrder;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.object.HasToString.hasToString;\n \n public class OperationRoutingTests extends ESTestCase{\n@@ -367,6 +371,65 @@ public void testPreferNodes() throws InterruptedException, IOException {\n terminate(threadPool);\n }\n }\n+ \n+ public void testFairSessionIdPreferences() throws InterruptedException, IOException {\n+ // Ensure that a user session is re-routed back to same nodes for\n+ // subsequent searches and that the nodes are selected fairly i.e.\n+ // given identically sorted lists of nodes 
across all shard IDs\n+ // each shard ID doesn't pick the same node.\n+ final int numIndices = randomIntBetween(1, 3);\n+ final int numShards = randomIntBetween(2, 10);\n+ final int numReplicas = randomIntBetween(1, 3);\n+ final String[] indexNames = new String[numIndices];\n+ for (int i = 0; i < numIndices; i++) {\n+ indexNames[i] = \"test\" + i;\n+ }\n+ ClusterState state = ClusterStateCreationUtils.stateWithAssignedPrimariesAndReplicas(indexNames, numShards, numReplicas);\n+ final int numRepeatedSearches = 4;\n+ List<ShardRouting> sessionsfirstSearch = null;\n+ OperationRouting opRouting = new OperationRouting(Settings.EMPTY,\n+ new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS));\n+ String sessionKey = randomAlphaOfLength(10);\n+ for (int i = 0; i < numRepeatedSearches; i++) {\n+ List<ShardRouting> searchedShards = new ArrayList<>(numShards);\n+ Set<String> selectedNodes = new HashSet<>(numShards);\n+ final GroupShardsIterator<ShardIterator> groupIterator = opRouting.searchShards(state, indexNames, null, sessionKey);\n+\n+ assertThat(\"One group per index shard\", groupIterator.size(), equalTo(numIndices * numShards));\n+ for (ShardIterator shardIterator : groupIterator) {\n+ assertThat(shardIterator.size(), equalTo(numReplicas + 1));\n+\n+ ShardRouting firstChoice = shardIterator.nextOrNull();\n+ assertNotNull(firstChoice);\n+ ShardRouting duelFirst = duelGetShards(state, firstChoice.shardId(), sessionKey).nextOrNull();\n+ assertThat(\"Regression test failure\", duelFirst, equalTo(firstChoice));\n+\n+ searchedShards.add(firstChoice);\n+ selectedNodes.add(firstChoice.currentNodeId());\n+ }\n+ if (sessionsfirstSearch == null) {\n+ sessionsfirstSearch = searchedShards;\n+ } else {\n+ assertThat(\"Sessions must reuse same replica choices\", searchedShards, equalTo(sessionsfirstSearch));\n+ }\n+\n+ // 2 is the bare minimum number of nodes we can reliably expect from\n+ // randomized tests in my experiments over thousands of iterations.\n+ // Ideally we would test for greater levels of machine utilisation\n+ // given a configuration with many nodes but the nature of hash\n+ // collisions means we can't always rely on optimal node usage in\n+ // all cases.\n+ assertThat(\"Search should use more than one of the nodes\", selectedNodes.size(), greaterThan(1));\n+ }\n+ }\n+ \n+ // Regression test for the routing logic - implements same hashing logic\n+ private ShardIterator duelGetShards(ClusterState clusterState, ShardId shardId, String sessionId) {\n+ final IndexShardRoutingTable indexShard = clusterState.getRoutingTable().shardRoutingTable(shardId.getIndexName(), shardId.getId());\n+ int routingHash = Murmur3HashFunction.hash(sessionId);\n+ routingHash = 31 * routingHash + indexShard.shardId.hashCode();\n+ return indexShard.activeInitializingShardsIt(routingHash); \n+ }\n \n public void testThatOnlyNodesSupportNodeIds() throws InterruptedException, IOException {\n TestThreadPool threadPool = null;", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/OperationRoutingTests.java", "status": "modified" } ] }
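To illustrate the change above, here is a small sketch of the selection logic as described in the diff: the user-supplied preference is hashed once with Murmur3 and then combined with the shard's hash code, so different shards given the same session key no longer all pick the same copy. The helper class and method names are hypothetical; only the hash combination itself comes from the change.

```java
import org.elasticsearch.cluster.routing.Murmur3HashFunction;
import org.elasticsearch.index.shard.ShardId;

public class PreferenceRoutingSketch {
    // Deterministic for a given (preference, shard) pair, but scattered across shards:
    // each shard id shifts the hash, so the chosen copy differs per shard even though a
    // repeated search with the same session key keeps hitting the same copies.
    public static int routingHashFor(String sessionPreference, ShardId shardId) {
        int routingHash = Murmur3HashFunction.hash(sessionPreference);
        return 31 * routingHash + shardId.hashCode();
    }
}
```

The multiply-by-31 combination simply mirrors the diff above; the property that matters is that the combined hash stays deterministic per (preference, shard) pair while varying across shards.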
{ "body": "**Elasticsearch version**: \r\n[5.1.2]\r\n\r\n**JVM version**:\r\njava version \"1.8.0_112\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_112-b16)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode)\r\n\r\n**OS version**:\r\nmacOS 10.12.2\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n```\r\n{\r\n \"size\": 0,\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"term\": {\r\n \"TestId\": {\r\n \"value\": \"2\",\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"term\": {\r\n \"content\": {\r\n \"value\": \"test\",\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"term\": {\r\n \"catalogs\": {\r\n \"value\": \"1\",\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"catalogs\": {\r\n \"value\": \"3\",\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"catalogs\": {\r\n \"value\": \"4\",\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"catalogs\": {\r\n \"value\": \"5\",\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"catalogs\": {\r\n \"value\": \"6\",\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"catalogs\": {\r\n \"value\": \"7\",\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"version\": true,\r\n \"aggregations\": {\r\n \"variantGrouping\": {\r\n \"terms\": {\r\n \"field\": \"variantId\",\r\n \"size\": 10,\r\n \"min_doc_count\": 1,\r\n \"shard_min_doc_count\": 0,\r\n \"show_term_doc_count_error\": false,\r\n \"order\":\r\n {\r\n \"top_hit\": \"desc\"\r\n }\r\n },\r\n \"aggregations\": {\r\n \"item\": {\r\n \"top_hits\": {\r\n \"from\": 0,\r\n \"size\": 1,\r\n \"version\": false,\r\n \"explain\": false\r\n }\r\n },\r\n \"top_hit\": {\r\n \"max\": {\r\n \"script\": {\r\n \"inline\": \"_score\",\r\n \"lang\": \"painless\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nI have a query (see above) that uses a term aggregation combined with a top-hits aggregation to achieve field collapsing. 
\r\nAs suggested in the top-hits documentation an additional max-aggregation is used to sort the term buckets by their score.\r\nThis is working when the request is submitted using curl but NOT using the java API.\r\n\r\nUsing curl I can adjust the \"size\" of the term aggregation from x to x + y and the first x elements are sorted EXACTLY the same, which is needed for paging.\r\nUsing the java API the order is 1) different from the curl request and 2) changes when i adjust the size parameter.\r\nIn my case, the _score for every bucket is the exact same and using the java API the sorting doesn't seem to be stable.\r\n\r\nI have found a workaround for my case using a second sort criteria as a tie breaker: \r\n\r\n```\r\n\"order\": [\r\n\t{\r\n\t\"top_hit\": \"desc\"\r\n\t},\r\n\t{\r\n\t\"_term\": \"asc\"\r\n\t}\r\n]\r\n```\r\n**Steps to reproduce**:\r\n```\r\nSearchRequestBuilder srb = buildSearch();\r\nSearchResponse response = srb.execute().actionGet();\r\n```\r\nUsing the SearchRequestBuilder I can copy the JSON request and execute it using curl.\r\nNow I can compare the ordering of the buckets returned by curl and in the SearchResponse.\r\n\r\nIs this a bug or am I doing something stupid?\r\n\r\n\r\n", "comments": [ { "body": "If you are using the `TermsAggregationBuilder#order(Terms.Order)` method in your Java code to add just that single aggregation order then it would explain this behavior. The curl REST request parser uses the `TermsAggregationBuilder#order(java.util.List<Terms.Order>)` method instead which calls `order(Terms.Order.compound(orders));` internally (even if there is only one order). The `CompoundOrder` constructor checks if the last order is a tie-breaker, i.e. `_term`, **and if not adds term order ascending as a tie-breaker**. As you mentioned the `_score` is the same for some or all of the hits so without a tie-breaker the final order is undefined.\r\n\r\nIMO this is a bug or at the very least unexpected/surprising behavior, especially given that we get different results for `SearchRequestBuilder#execute().actionGet()` vs using `SearchRequestBuilder#toString()` in the REST API.\r\n\r\n@colings86 thoughts on this? Possible solution is to use a `CompoundOrder` if the single order is not a tie-breaker. Perhaps something I can do in #22343 ?\r\n", "created_at": "2017-03-18T06:01:41Z" }, { "body": "@qwerty4030 I think we should just have `TermsAggregationBuilder#order(Terms.Order)` call `TermsAggregationBuilder#order(java.util.List<Terms.Order>)` with `Collections.SingletonList(Terms.Order)` and then the behaviour will be guaranteed to be the same for both cases?", "created_at": "2017-03-20T11:07:02Z" }, { "body": "@colings86 Yeah thats the idea. Might need to refactor a bit because `TermsAggregationBuilder#order(java.util.List<Terms.Order>)` calls `TermsAggregationBuilder#order(Terms.Order)` internally so that would cause a stack overflow 😵 \r\nIf its alright with you I'll make that change in #22343 so a `CompoundOrder` will be used if the individual order is not a tie breaker. Definitely going to be a merge conflict otherwise.\r\n\r\n@Tobsucht **Workaround** is to call `TermsAggregationBuilder#order(java.util.List<Terms.Order>)` with `Collections.SingletonList(yourOrder)` or call `TermsAggregationBuilder#order(Terms.Order)` with `Terms.Order.compoud(yourOrder)`.\r\nFYI this API will change in #22343. Nothing major, just renaming/moving some classes and methods. Functionality will be identical (and this issue will be fixed). 
Thanks", "created_at": "2017-03-20T16:04:22Z" }, { "body": "Thanks for the fast resolution!", "created_at": "2017-03-22T13:26:44Z" }, { "body": "@colings86 Is it worth the effort to port the fix for this to the 5.x brach? Should be a small change to this method: https://github.com/elastic/elasticsearch/pull/22343/files#diff-56aadcd205034c94cb0e60801704afd9R194", "created_at": "2017-05-12T03:40:45Z" }, { "body": "@qwerty4030 I think it would be good to backport just think fix to 5.x, since it shouldn't change the API at all and it is a bug fix. Would you like to raise a PR for this against the 5.x branch?", "created_at": "2017-05-12T09:36:38Z" }, { "body": "@colings86 created #24658 ", "created_at": "2017-05-14T01:56:51Z" } ], "number": 23613, "title": "Inconsistent bucket ordering using term aggregation" }
{ "body": "This commit fixes inconsistent terms aggregation order by ensuring the order contains a tie breaker.\r\nIf needed a tie-breaker (_term asc) is added by using a compound order.\r\n\r\nCloses #23613 for the 5.X branch (backport from #22343).\r\n", "number": 24658, "review_comments": [ { "body": "Instead of doing this are we not able to just call `order(Collections.singletonList(order)`?", "created_at": "2017-05-15T09:00:49Z" }, { "body": "No because that method calls this one again. Also this may result in a nested `CompoundOrder`, which doesn't make sense.", "created_at": "2017-05-15T22:53:49Z" }, { "body": "ok, makes sense", "created_at": "2017-05-16T11:20:31Z" } ], "title": "Fix inconsistent terms aggregation order for 5.x" }
{ "commits": [ { "message": "Fix inconsistent terms aggregation order.\n\nThis commit fixes inconsistent terms aggregation order by ensuring the order contains a tie breaker.\nIf needed a tie-breaker (_term asc) is added by using a compound order.\n\nCloses #23613 for the 5.X branch (backport from #22343)." } ], "files": [ { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n+import org.elasticsearch.search.aggregations.bucket.terms.InternalOrder.CompoundOrder;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms.Order;\n import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds;\n import org.elasticsearch.search.aggregations.bucket.terms.support.IncludeExclude;\n@@ -196,7 +197,11 @@ public TermsAggregationBuilder order(Terms.Order order) {\n if (order == null) {\n throw new IllegalArgumentException(\"[order] must not be null: [\" + name + \"]\");\n }\n- this.order = order;\n+ if(order instanceof CompoundOrder || InternalOrder.isTermOrder(order)) {\n+ this.order = order; // if order already contains a tie-breaker we are good to go\n+ } else { // otherwise add a tie-breaker by using a compound order\n+ this.order = Terms.Order.compound(order);\n+ }\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java", "status": "modified" } ] }
{ "body": "Some of the CI boxes for Elasticsearch have inconsistently configured environments when it comes to Vagrant. Vagrant requires $HOME to be set, as well as VirtualBox to be installed. This PR checks those two things are present before enabling tests with VagrantFixtures in the HDFS Repo (only place vagrant fixtures are used right now).", "comments": [ { "body": "@jbaiera I have merged in https://github.com/elastic/elasticsearch/pull/24643 in order to temporarily disable the task that causes build failures. Adding it here to remind you to re-enable once this is ready to go in.", "created_at": "2017-05-12T11:59:57Z" }, { "body": "I've moved the environment sensing out of the hdfs plugin build script as well as the vagrant test plugin into it's own `VagrantSupportPlugin`. Both vagrant and virtualbox are checked to see if they are installed. After the plugin is applied, if the root project hasn't been configured by it yet, then it does the environment checks and installs the results (as `Installation` objects) into the root project. It then transfers the `Installation` objects into the project applying the plugin. `Installation` is just a simple object that couples together the results of the checks: If the commands are installed, if the versions are correct, what the version number is if they meet the requirements, and any errors encountered while checking if they were installed. The plugin also creates the tasks to check the version for Vagrant and VirtualBox, so that hdfs repo and the packaging tests don't have to any more. These tasks no longer perform the actual lookup, but rather call the `verify()` method on the Installation objects, which throws if the Installation is not valid.", "created_at": "2017-05-12T20:35:50Z" }, { "body": "The VagrantTestPlugin definitely depends on this VagrantSupportPlugin. I'm not aware of any good ways to orchestrate that dependency in the build script. @rjernst any thoughts?", "created_at": "2017-05-12T20:37:43Z" }, { "body": "@rjernst How are we on the recent changes for this? I'd like to get this merged soon.", "created_at": "2017-05-31T03:13:45Z" }, { "body": "@jbaiera I had not looked at it since I left my last comments. Please ping me in the future when it is ready for another review. I will look now.", "created_at": "2017-05-31T03:30:39Z" } ], "number": 24636, "title": "Sense for VirtualBox and $HOME when deciding to turn on vagrant testing." }
{ "body": "This keeps failing the build so I am temporarily disabling it\r\nuntil #24636 gets merged.\r\n\r\n\r\n", "number": 24643, "review_comments": [], "title": "[TEST] Temporarily disable the secure fixture for hdfs tests" }
{ "commits": [ { "message": "[TEST] Temporarily disable the secure fixture for hdfs tests\n\nThis keeps failing the build so I am temporarily disabling it\nuntil #24636 gets merged." } ], "files": [ { "diff": "@@ -180,7 +180,7 @@ if (fixtureSupported) {\n }\n \n // Create a Integration Test suite just for security based tests\n-if (secureFixtureSupported) {\n+if (secureFixtureSupported && false) { // This fails due to a vagrant configuration issue - remove the false check to re-enable\n // This must execute before the afterEvaluate block from integTestSecure\n project.afterEvaluate {\n Path elasticsearchKT = project(':test:fixtures:krb5kdc-fixture').buildDir.toPath().resolve(\"keytabs\").resolve(\"elasticsearch.keytab\").toAbsolutePath()", "filename": "plugins/repository-hdfs/build.gradle", "status": "modified" } ] }
{ "body": "With the current implementation, `SniffNodesSampler` might close the\r\ncurrent connection right after a request is sent but before the response\r\nis correctly handled. This causes to timeouts in the transport client\r\nwhen the sniffing is activated in all versions since #22828.\r\n\r\ncloses #24575\r\ncloses #24557", "comments": [ { "body": "@bleskes @jasontedor the PR does not have tests, I created it to point it to you what _I think_ is the cause of #24575 (and also the deleted #24557). I'd be happy if you could confirm or invalidate that this is the cause of transport client exceptions.", "created_at": "2017-05-11T20:56:22Z" }, { "body": "one thing that I am puzzled about is why this causes timeouts instead of triggering onException on the handler? I think the reason is that we don't notify the TransportService when a connection is closed but only if a node is disconnected that is a different bug here. Both are independent and should be handled independently. so I think your fix is sufficient for the issues referenced in the description.", "created_at": "2017-05-12T06:10:36Z" }, { "body": "I opened #24639 for the notification part", "created_at": "2017-05-12T07:16:43Z" }, { "body": "I also marked this as a blocker for 5.4.1", "created_at": "2017-05-12T07:28:44Z" }, { "body": "> one thing that I am puzzled about is why this causes timeouts instead of triggering onException on the handler? I think the reason is that we don't notify the TransportService when a connection is closed but only if a node is disconnected that is a different bug here. Both are independent and should be handled independently. so I think your fix is sufficient for the issues referenced in the description.\r\n\r\nI agree - I didn't spot this problem but my knowledge of the TransportService is limited, I'm glad you already proposed a fix.\r\n\r\nI updated the PR according to your comments.", "created_at": "2017-05-12T12:49:51Z" }, { "body": "Thanks @s1monw @bleskes ", "created_at": "2017-05-12T14:39:07Z" }, { "body": "I'm getting exactly this error with the version 5.4.1 across the board. What's the quickest way for me to recover?", "created_at": "2017-06-07T02:39:41Z" }, { "body": "@konste I just ran another test this morning with a fresh 5.4.1 installation and a PreBuiltTransportClient with sniff option set to true and everything worked as expected (while the error was really obvious and appears at startup time).\r\n\r\nCan you please provide the logs of both transport client and elasticsearch node please? As well as the transport client settings?", "created_at": "2017-06-07T07:00:16Z" }, { "body": "@tlrx Sorry I had to restore functionality ASAP and lost the repro.", "created_at": "2017-06-07T14:52:19Z" } ], "number": 24632, "title": "SniffNodesSampler should close connection after handling responses" }
{ "body": "Today we prune transport handlers in TransportService when a node is disconnected.\r\nThis can cause connections to starve in the TransportService if the connection is\r\nopened as a short living connection ie. without sharing the connection to a node\r\nvia registering in the transport itself. This change now moves to pruning based\r\non the connections cache key to ensure we notify handlers as soon as the connection\r\nis closed for all connections not just for registered connections.\r\n\r\nRelates to #24632\r\nRelates to #24575\r\nRelates to #24557", "number": 24639, "review_comments": [ { "body": "🤣", "created_at": "2017-05-12T13:13:28Z" }, { "body": "yeah I know who put that there 💃 ", "created_at": "2017-05-12T13:29:31Z" }, { "body": "😉", "created_at": "2017-05-12T13:33:10Z" } ], "title": "Notify onConnectionClosed rather than onNodeDisconnect to prune transport handlers" }
{ "commits": [ { "message": "Notify onConnectionClosed rather than onNodeDisconnect to prune transport handlers\n\nToday we prune transport handlers in TransporService when a node is disconnected.\nThis can cause connections to starve in the TransportService if the connection is\nopened as a short living connection ie. without sharing the connection to a node\nvia registering in the transport itself. This change now moves to pruning based\non the connections cache key to ensure we notify handlers as soon as the connection\nis closed for all connections not just for registered connections.\n\nRelates to #24632\nRelates to #24575\nRelates to #24557" }, { "message": "fix line len" } ], "files": [ { "diff": "@@ -101,6 +101,7 @@\n import java.util.concurrent.atomic.AtomicReference;\n import java.util.concurrent.locks.ReadWriteLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n+import java.util.function.Consumer;\n import java.util.regex.Matcher;\n import java.util.regex.Pattern;\n import java.util.stream.Collectors;\n@@ -357,8 +358,9 @@ public final class NodeChannels implements Connection {\n private final DiscoveryNode node;\n private final AtomicBoolean closed = new AtomicBoolean(false);\n private final Version version;\n+ private final Consumer<Connection> onClose;\n \n- public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile connectionProfile) {\n+ public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile connectionProfile, Consumer<Connection> onClose) {\n this.node = node;\n this.channels = channels;\n assert channels.length == connectionProfile.getNumConnections() : \"expected channels size to be == \"\n@@ -369,13 +371,15 @@ public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile co\n typeMapping.put(type, handle);\n }\n version = node.getVersion();\n+ this.onClose = onClose;\n }\n \n NodeChannels(NodeChannels channels, Version handshakeVersion) {\n this.node = channels.node;\n this.channels = channels.channels;\n this.typeMapping = channels.typeMapping;\n this.version = handshakeVersion;\n+ this.onClose = channels.onClose;\n }\n \n @Override\n@@ -408,6 +412,7 @@ public Channel channel(TransportRequestOptions.Type type) {\n public synchronized void close() throws IOException {\n if (closed.compareAndSet(false, true)) {\n closeChannels(Arrays.stream(channels).filter(Objects::nonNull).collect(Collectors.toList()));\n+ onClose.accept(this);\n }\n }\n \n@@ -519,8 +524,8 @@ public final NodeChannels openConnection(DiscoveryNode node, ConnectionProfile c\n final TimeValue handshakeTimeout = connectionProfile.getHandshakeTimeout() == null ?\n connectTimeout : connectionProfile.getHandshakeTimeout();\n final Version version = executeHandshake(node, channel, handshakeTimeout);\n- transportServiceAdapter.onConnectionOpened(node);\n- nodeChannels = new NodeChannels(nodeChannels, version);// clone the channels - we now have the correct version\n+ transportServiceAdapter.onConnectionOpened(nodeChannels);\n+ nodeChannels = new NodeChannels(nodeChannels, version); // clone the channels - we now have the correct version\n success = true;\n return nodeChannels;\n } catch (ConnectTransportException e) {", "filename": "core/src/main/java/org/elasticsearch/transport/TcpTransport.java", "status": "modified" }, { "diff": "@@ -132,5 +132,13 @@ void sendRequest(long requestId, String action, TransportRequest request, Transp\n default Version getVersion() {\n return getNode().getVersion();\n }\n+\n+ /**\n+ * Returns a key that this 
connection can be cached on. Delegating subclasses must delegate method call to\n+ * the original connection.\n+ */\n+ default Object getCacheKey() {\n+ return this;\n+ }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/transport/Transport.java", "status": "modified" }, { "diff": "@@ -33,8 +33,14 @@ default void onNodeConnected(DiscoveryNode node) {}\n */\n default void onNodeDisconnected(DiscoveryNode node) {}\n \n+ /**\n+ * Called once a node connection is closed. The connection might not have been registered in the\n+ * transport as a shared connection to a specific node\n+ */\n+ default void onConnectionClosed(Transport.Connection connection) {}\n+\n /**\n * Called once a node connection is opened.\n */\n- default void onConnectionOpened(DiscoveryNode node) {}\n+ default void onConnectionOpened(Transport.Connection connection) {}\n }", "filename": "core/src/main/java/org/elasticsearch/transport/TransportConnectionListener.java", "status": "modified" }, { "diff": "@@ -569,7 +569,7 @@ private <T extends TransportResponse> void sendRequestInternal(final Transport.C\n }\n Supplier<ThreadContext.StoredContext> storedContextSupplier = threadPool.getThreadContext().newRestorableContext(true);\n TransportResponseHandler<T> responseHandler = new ContextRestoreResponseHandler<>(storedContextSupplier, handler);\n- clientHandlers.put(requestId, new RequestHolder<>(responseHandler, connection.getNode(), action, timeoutHandler));\n+ clientHandlers.put(requestId, new RequestHolder<>(responseHandler, connection, action, timeoutHandler));\n if (lifecycle.stoppedOrClosed()) {\n // if we are not started the exception handling will remove the RequestHolder again and calls the handler to notify\n // the caller. It will only notify if the toStop code hasn't done the work yet.\n@@ -810,7 +810,7 @@ public TransportResponseHandler onResponseReceived(final long requestId) {\n }\n holder.cancelTimeout();\n if (traceEnabled() && shouldTraceAction(holder.action())) {\n- traceReceivedResponse(requestId, holder.node(), holder.action());\n+ traceReceivedResponse(requestId, holder.connection().getNode(), holder.action());\n }\n return holder.handler();\n }\n@@ -855,12 +855,12 @@ public void onNodeConnected(final DiscoveryNode node) {\n }\n \n @Override\n- public void onConnectionOpened(DiscoveryNode node) {\n+ public void onConnectionOpened(Transport.Connection connection) {\n // capture listeners before spawning the background callback so the following pattern won't trigger a call\n // connectToNode(); connection is completed successfully\n // addConnectionListener(); this listener shouldn't be called\n final Stream<TransportConnectionListener> listenersToNotify = TransportService.this.connectionListeners.stream();\n- threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(node)));\n+ threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(connection)));\n }\n \n @Override\n@@ -871,20 +871,28 @@ public void onNodeDisconnected(final DiscoveryNode node) {\n connectionListener.onNodeDisconnected(node);\n }\n });\n+ } catch (EsRejectedExecutionException ex) {\n+ logger.debug(\"Rejected execution on NodeDisconnected\", ex);\n+ }\n+ }\n+\n+ @Override\n+ public void onConnectionClosed(Transport.Connection connection) {\n+ try {\n for (Map.Entry<Long, RequestHolder> entry : clientHandlers.entrySet()) {\n RequestHolder holder = entry.getValue();\n- if (holder.node().equals(node)) {\n+ if 
(holder.connection().getCacheKey().equals(connection.getCacheKey())) {\n final RequestHolder holderToNotify = clientHandlers.remove(entry.getKey());\n if (holderToNotify != null) {\n // callback that an exception happened, but on a different thread since we don't\n // want handlers to worry about stack overflows\n- threadPool.generic().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException(node,\n- holderToNotify.action())));\n+ threadPool.generic().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException(\n+ connection.getNode(), holderToNotify.action())));\n }\n }\n }\n } catch (EsRejectedExecutionException ex) {\n- logger.debug(\"Rejected execution on NodeDisconnected\", ex);\n+ logger.debug(\"Rejected execution on onConnectionClosed\", ex);\n }\n }\n \n@@ -929,13 +937,14 @@ public void run() {\n if (holder != null) {\n // add it to the timeout information holder, in case we are going to get a response later\n long timeoutTime = System.currentTimeMillis();\n- timeoutInfoHandlers.put(requestId, new TimeoutInfoHolder(holder.node(), holder.action(), sentTime, timeoutTime));\n+ timeoutInfoHandlers.put(requestId, new TimeoutInfoHolder(holder.connection().getNode(), holder.action(), sentTime,\n+ timeoutTime));\n // now that we have the information visible via timeoutInfoHandlers, we try to remove the request id\n final RequestHolder removedHolder = clientHandlers.remove(requestId);\n if (removedHolder != null) {\n assert removedHolder == holder : \"two different holder instances for request [\" + requestId + \"]\";\n removedHolder.handler().handleException(\n- new ReceiveTimeoutTransportException(holder.node(), holder.action(),\n+ new ReceiveTimeoutTransportException(holder.connection().getNode(), holder.action(),\n \"request_id [\" + requestId + \"] timed out after [\" + (timeoutTime - sentTime) + \"ms]\"));\n } else {\n // response was processed, remove timeout info.\n@@ -990,15 +999,15 @@ static class RequestHolder<T extends TransportResponse> {\n \n private final TransportResponseHandler<T> handler;\n \n- private final DiscoveryNode node;\n+ private final Transport.Connection connection;\n \n private final String action;\n \n private final TimeoutHandler timeoutHandler;\n \n- RequestHolder(TransportResponseHandler<T> handler, DiscoveryNode node, String action, TimeoutHandler timeoutHandler) {\n+ RequestHolder(TransportResponseHandler<T> handler, Transport.Connection connection, String action, TimeoutHandler timeoutHandler) {\n this.handler = handler;\n- this.node = node;\n+ this.connection = connection;\n this.action = action;\n this.timeoutHandler = timeoutHandler;\n }\n@@ -1007,8 +1016,8 @@ public TransportResponseHandler<T> handler() {\n return handler;\n }\n \n- public DiscoveryNode node() {\n- return this.node;\n+ public Transport.Connection connection() {\n+ return this.connection;\n }\n \n public String action() {", "filename": "core/src/main/java/org/elasticsearch/transport/TransportService.java", "status": "modified" }, { "diff": "@@ -604,8 +604,8 @@ public void testResolveReuseExistingNodeConnections() throws ExecutionException,\n // install a listener to check that no new connections are made\n handleA.transportService.addConnectionListener(new TransportConnectionListener() {\n @Override\n- public void onConnectionOpened(DiscoveryNode node) {\n- fail(\"should not open any connections. got [\" + node + \"]\");\n+ public void onConnectionOpened(Transport.Connection connection) {\n+ fail(\"should not open any connections. 
got [\" + connection.getNode() + \"]\");\n }\n });\n ", "filename": "core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java", "status": "modified" }, { "diff": "@@ -204,7 +204,7 @@ protected void sendMessage(Object o, BytesReference reference, ActionListener li\n \n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n- return new NodeChannels(node, new Object[profile.getNumConnections()], profile);\n+ return new NodeChannels(node, new Object[profile.getNumConnections()], profile, c -> {});\n }\n \n @Override\n@@ -220,7 +220,7 @@ public long serverOpen() {\n @Override\n public NodeChannels getConnection(DiscoveryNode node) {\n return new NodeChannels(node, new Object[MockTcpTransport.LIGHT_PROFILE.getNumConnections()],\n- MockTcpTransport.LIGHT_PROFILE);\n+ MockTcpTransport.LIGHT_PROFILE, c -> {});\n }\n };\n DiscoveryNode node = new DiscoveryNode(\"foo\", buildNewFakeTransportAddress(), Version.CURRENT);", "filename": "core/src/test/java/org/elasticsearch/transport/TCPTransportTests.java", "status": "modified" }, { "diff": "@@ -320,7 +320,7 @@ public long serverOpen() {\n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) {\n final Channel[] channels = new Channel[profile.getNumConnections()];\n- final NodeChannels nodeChannels = new NodeChannels(node, channels, profile);\n+ final NodeChannels nodeChannels = new NodeChannels(node, channels, profile, transportServiceAdapter::onConnectionClosed);\n boolean success = false;\n try {\n final TimeValue connectTimeout;", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java", "status": "modified" }, { "diff": "@@ -777,6 +777,11 @@ public void sendRequest(long requestId, String action, TransportRequest request,\n public void close() throws IOException {\n connection.close();\n }\n+\n+ @Override\n+ public Object getCacheKey() {\n+ return connection.getCacheKey();\n+ }\n }\n \n public Transport getOriginalTransport() {", "filename": "test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java", "status": "modified" }, { "diff": "@@ -2099,9 +2099,6 @@ public void handleException(TransportException exp) {\n \n @Override\n public String executor() {\n- if (1 == 1)\n- return \"same\";\n-\n return randomFrom(executors);\n }\n };\n@@ -2111,4 +2108,59 @@ public String executor() {\n latch.await();\n }\n \n+ public void testHandlerIsInvokedOnConnectionClose() throws IOException, InterruptedException {\n+ List<String> executors = new ArrayList<>(ThreadPool.THREAD_POOL_TYPES.keySet());\n+ CollectionUtil.timSort(executors); // makes sure it's reproducible\n+ TransportService serviceC = build(Settings.builder().put(\"name\", \"TS_TEST\").build(), version0, null, true);\n+ serviceC.registerRequestHandler(\"action\", TestRequest::new, ThreadPool.Names.SAME,\n+ (request, channel) -> {\n+ // do nothing\n+ });\n+ serviceC.start();\n+ serviceC.acceptIncomingRequests();\n+ CountDownLatch latch = new CountDownLatch(1);\n+ TransportResponseHandler<TransportResponse> transportResponseHandler = new TransportResponseHandler<TransportResponse>() {\n+ @Override\n+ public TransportResponse newInstance() {\n+ return TransportResponse.Empty.INSTANCE;\n+ }\n+\n+ @Override\n+ public void handleResponse(TransportResponse response) {\n+ try {\n+ fail(\"no response expected\");\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public void 
handleException(TransportException exp) {\n+ try {\n+ assertTrue(exp.getClass().toString(), exp instanceof NodeDisconnectedException);\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public String executor() {\n+ return randomFrom(executors);\n+ }\n+ };\n+ ConnectionProfile.Builder builder = new ConnectionProfile.Builder();\n+ builder.addConnections(1,\n+ TransportRequestOptions.Type.BULK,\n+ TransportRequestOptions.Type.PING,\n+ TransportRequestOptions.Type.RECOVERY,\n+ TransportRequestOptions.Type.REG,\n+ TransportRequestOptions.Type.STATE);\n+ Transport.Connection connection = serviceB.openConnection(serviceC.getLocalNode(), builder.build());\n+ serviceB.sendRequest(connection, \"action\", new TestRequest(randomFrom(\"fail\", \"pass\")), TransportRequestOptions.EMPTY,\n+ transportResponseHandler);\n+ connection.close();\n+ latch.await();\n+ serviceC.close();\n+ }\n+\n }", "filename": "test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java", "status": "modified" }, { "diff": "@@ -180,7 +180,8 @@ private void readMessage(MockChannel mockChannel, StreamInput input) throws IOEx\n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n final MockChannel[] mockChannels = new MockChannel[1];\n- final NodeChannels nodeChannels = new NodeChannels(node, mockChannels, LIGHT_PROFILE); // we always use light here\n+ final NodeChannels nodeChannels = new NodeChannels(node, mockChannels, LIGHT_PROFILE,\n+ transportServiceAdapter::onConnectionClosed); // we always use light here\n boolean success = false;\n final MockSocket socket = new MockSocket();\n try {", "filename": "test/framework/src/main/java/org/elasticsearch/transport/MockTcpTransport.java", "status": "modified" } ] }
{ "body": "Hi, I have a rest service using Netty as basis and connecting to ElasticSearch backend via java transport client API.\r\nIt worked very well with Netty 4.1.8 and ES 5.3.0.\r\nNow I tried to upgrade ES backend and transport client to 5.4.0, and also Netty to 4.1.9. Then following problems happened:\r\n\r\n10 May 2017;17:01:59.645 Developer linux-68qh [elasticsearch[_client_][generic][T#3]] INFO o.e.c.t.TransportClientNodesService - failed to get local cluster state for {#transport#-1}{WlTQjgcGQ1uqyNNsw4ZnAw}{127.0.0.1}{127.0.0.1:9300}, disconnecting...\r\norg.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9300][cluster:monitor/state] request_id [7] timed out after [5001ms]\r\n\tat org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:925)\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\n\r\nI roll back the transport client to 5.3.0 but keep backend 5.4.0. \r\n\r\nThen it is able to connect to Es backend.\r\nI use SBT and the build dependencies for the error are:\r\n\r\n\"io.netty\" % \"netty-all\" % \"4.1.9.Final\"\r\n\"org.elasticsearch\" % \"elasticsearch\" % \"5.4.0\"\r\n \"org.elasticsearch.client\" % \"transport\" % \"5.4.0\",\r\nand \"io.netty\" % \"netty-transport-native-epoll\" % \"4.1.9.Final\" classifier \"linux-x86_64\"\r\n\r\nEnvironment:\r\n\r\nopenjdk version \"1.8.0_121\"\r\nOpenJDK Runtime Environment (IcedTea 3.3.0) (suse-3.3-x86_64)\r\nOpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n\r\nLinux linux-68qh 4.10.13-1-default #1 SMP PREEMPT Thu Apr 27 12:23:31 UTC 2017 (e5d11ce) x86_64 x86_64 x86_64 GNU/Linux\r\n\r\nThanks\r\n\r\n\r\n\r\n", "comments": [ { "body": "It looks like a bug to me. Is sniffing enabled on your transport client?", "created_at": "2017-05-11T08:47:54Z" }, { "body": "Yes it is enabled. ", "created_at": "2017-05-11T09:57:09Z" }, { "body": "Same issue here. We have:\r\n- Spring Boot v1.5.3.RELEASE,\r\n- Switched from Elasticsearch 5.3.2 to 5.4.0,\r\n- using Transport Client with sniff enabled.\r\n\r\nClient and Elasticsearch both on the same machine, connecting through localhost:\r\n- When using TransportClient 5.3.2 to connect to Elastic 5.4.0 => OK,\r\n- 5.4.0 to 5.4.0 => KO.\r\n\r\nThe exception we have on startup:\r\n> org.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9300][cluster:monitor/state] request_id [7] timed out after [5000ms]\r\n\tat org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:925) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.0.jar:5.4.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]", "created_at": "2017-05-11T18:50:04Z" }, { "body": "I am seeing this issue as well on some nodes connecting to ES. We run a service that has multiple machines that each connect to ES, some of them are able to connect successfully and others do not. 
", "created_at": "2017-05-11T20:36:44Z" }, { "body": "Thanks for reporting, I think I know where the issue is.", "created_at": "2017-05-11T20:44:03Z" }, { "body": "Thanks @tlrx. I'm not sure if you are also aware, but I also saw errors that looked like the following when I disabled sniffing. \r\n\r\n```\r\n20:38:44.935 [elasticsearch[_client_][generic][T#2]] DEBUG - failed to connect to discovered node [{i-0562d98cb14e42358}{Gzbd-MEzRo-OHMUoEajvXA}{x6V2--f3SS-NzVk5wAQQYg}{10.178.212.242}{127.0.0.1:4374}{aws_availability_zone=us-east-1a}]\r\nConnectTransportException[[i-0562d98cb14e42358][127.0.0.1:4374] handshake failed. unexpected remote node {i-01bae8d9b0f31ac54}{MUjAv_3JR5KmzEdn-eJeSA}{qJdTT_oaSRCJ1TLO1W2A6w}{10.158.100.27}{10.158.100.27:9300}{aws_availability_zone=us-east-1b}]\r\n\tat org.elasticsearch.transport.TransportService.lambda$connectToNode$3(TransportService.java:319)\r\n\tat org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:466)\r\n\tat org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:315)\r\n\tat org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:302)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.validateNewNodes(TransportClientNodesService.java:374)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:442)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:358)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$ScheduledNodeSampler.run(TransportClientNodesService.java:391)\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\n```\r\n\r\nIf it helps we have a service discovery framework to discover services: (https://medium.com/airbnb-engineering/smartstack-service-discovery-in-the-cloud-4b8a080de619). We \"randomly\" pick an ES box to connect to and then use sniffing (if enabled) to discover the rest. Even though ES is running on 9200/9300 we use a different port on our client machines because of the service discovery framework does the correct routing. Both the service discovery port and the \"direct access\" port are reachable over the network. \r\n\r\nI am rolling back our transport client version to 5.3.2 and will report back on the results.\r\nUpdate: 5.3.2 works great", "created_at": "2017-05-11T20:51:05Z" }, { "body": "Same here... 5.4.0 to 5.4.0 fails.... but 5.3.0 to 5.4.0 works", "created_at": "2017-05-12T11:03:32Z" }, { "body": "Same here... 5.4.0 to 5.4.0 fails.... but 5.3.0 to 5.4.0 works", "created_at": "2017-05-29T12:30:20Z" }, { "body": "I am seeing a similar exception in 2.3.1. 
Below is the exception:-\r\n\r\n```\r\nINFO [2017-08-08 20:14:18,019] [U:3,129,F:822,T:3,950,M:3,950] elasticsearch.client.transport:[TransportClientNodesService$SniffNodesSampler$1$1:handleException:455] - [elasticsearch[Edward \"Ned\" Buckman][generic][T#61]] - [Edward \"Ned\" Buckman] failed to get local cluster state for {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}, disconnecting...\r\nReceiveTimeoutTransportException[[][localhost/127.0.0.1:9300][cluster:monitor/state] request_id [341654] timed out after [5001ms]]\r\n\tat org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\n```\r\n\r\nIs the issue not fixed in 2.3.1?", "created_at": "2017-10-06T15:54:42Z" }, { "body": "> Is the issue not fixed in 2.3.1?\r\n\r\nI didn't test in 2.3.1 since the fix fixed a bug introduced in #22828 for 5.4.0. It's possible that this bug exists in 2.3.1 but this version is EOL and not supported anymore.", "created_at": "2017-10-09T09:10:40Z" }, { "body": "ok thanks for the update.\n\nSent from GMail on Android\n\nOn Oct 9, 2017 2:42 PM, \"Tanguy Leroux\" <notifications@github.com> wrote:\n\n> Is the issue not fixed in 2.3.1?\n>\n> I didn't test in 2.3.1 since the fix fixed a bug introduced in #22828\n> <https://github.com/elastic/elasticsearch/pull/22828> for 5.4.0. It's\n> possible that this bug exists in 2.3.1 but this version is EOL and not\n> supported anymore.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/24575#issuecomment-335102848>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AHw8JPzLkGBjp7CpN19QJ6_lhFvamNlLks5sqeOCgaJpZM4NWYB_>\n> .\n>\n", "created_at": "2017-10-09T09:15:05Z" } ], "number": 24575, "title": "5.4.0 transport client failed to get local cluster state while using 5.3.0 to connect to 5.4.0 servers works" }
{ "body": "Today we prune transport handlers in TransportService when a node is disconnected.\r\nThis can cause connections to starve in the TransportService if the connection is\r\nopened as a short living connection ie. without sharing the connection to a node\r\nvia registering in the transport itself. This change now moves to pruning based\r\non the connections cache key to ensure we notify handlers as soon as the connection\r\nis closed for all connections not just for registered connections.\r\n\r\nRelates to #24632\r\nRelates to #24575\r\nRelates to #24557", "number": 24639, "review_comments": [ { "body": "🤣", "created_at": "2017-05-12T13:13:28Z" }, { "body": "yeah I know who put that there 💃 ", "created_at": "2017-05-12T13:29:31Z" }, { "body": "😉", "created_at": "2017-05-12T13:33:10Z" } ], "title": "Notify onConnectionClosed rather than onNodeDisconnect to prune transport handlers" }
{ "commits": [ { "message": "Notify onConnectionClosed rather than onNodeDisconnect to prune transport handlers\n\nToday we prune transport handlers in TransporService when a node is disconnected.\nThis can cause connections to starve in the TransportService if the connection is\nopened as a short living connection ie. without sharing the connection to a node\nvia registering in the transport itself. This change now moves to pruning based\non the connections cache key to ensure we notify handlers as soon as the connection\nis closed for all connections not just for registered connections.\n\nRelates to #24632\nRelates to #24575\nRelates to #24557" }, { "message": "fix line len" } ], "files": [ { "diff": "@@ -101,6 +101,7 @@\n import java.util.concurrent.atomic.AtomicReference;\n import java.util.concurrent.locks.ReadWriteLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n+import java.util.function.Consumer;\n import java.util.regex.Matcher;\n import java.util.regex.Pattern;\n import java.util.stream.Collectors;\n@@ -357,8 +358,9 @@ public final class NodeChannels implements Connection {\n private final DiscoveryNode node;\n private final AtomicBoolean closed = new AtomicBoolean(false);\n private final Version version;\n+ private final Consumer<Connection> onClose;\n \n- public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile connectionProfile) {\n+ public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile connectionProfile, Consumer<Connection> onClose) {\n this.node = node;\n this.channels = channels;\n assert channels.length == connectionProfile.getNumConnections() : \"expected channels size to be == \"\n@@ -369,13 +371,15 @@ public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile co\n typeMapping.put(type, handle);\n }\n version = node.getVersion();\n+ this.onClose = onClose;\n }\n \n NodeChannels(NodeChannels channels, Version handshakeVersion) {\n this.node = channels.node;\n this.channels = channels.channels;\n this.typeMapping = channels.typeMapping;\n this.version = handshakeVersion;\n+ this.onClose = channels.onClose;\n }\n \n @Override\n@@ -408,6 +412,7 @@ public Channel channel(TransportRequestOptions.Type type) {\n public synchronized void close() throws IOException {\n if (closed.compareAndSet(false, true)) {\n closeChannels(Arrays.stream(channels).filter(Objects::nonNull).collect(Collectors.toList()));\n+ onClose.accept(this);\n }\n }\n \n@@ -519,8 +524,8 @@ public final NodeChannels openConnection(DiscoveryNode node, ConnectionProfile c\n final TimeValue handshakeTimeout = connectionProfile.getHandshakeTimeout() == null ?\n connectTimeout : connectionProfile.getHandshakeTimeout();\n final Version version = executeHandshake(node, channel, handshakeTimeout);\n- transportServiceAdapter.onConnectionOpened(node);\n- nodeChannels = new NodeChannels(nodeChannels, version);// clone the channels - we now have the correct version\n+ transportServiceAdapter.onConnectionOpened(nodeChannels);\n+ nodeChannels = new NodeChannels(nodeChannels, version); // clone the channels - we now have the correct version\n success = true;\n return nodeChannels;\n } catch (ConnectTransportException e) {", "filename": "core/src/main/java/org/elasticsearch/transport/TcpTransport.java", "status": "modified" }, { "diff": "@@ -132,5 +132,13 @@ void sendRequest(long requestId, String action, TransportRequest request, Transp\n default Version getVersion() {\n return getNode().getVersion();\n }\n+\n+ /**\n+ * Returns a key that this 
connection can be cached on. Delegating subclasses must delegate method call to\n+ * the original connection.\n+ */\n+ default Object getCacheKey() {\n+ return this;\n+ }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/transport/Transport.java", "status": "modified" }, { "diff": "@@ -33,8 +33,14 @@ default void onNodeConnected(DiscoveryNode node) {}\n */\n default void onNodeDisconnected(DiscoveryNode node) {}\n \n+ /**\n+ * Called once a node connection is closed. The connection might not have been registered in the\n+ * transport as a shared connection to a specific node\n+ */\n+ default void onConnectionClosed(Transport.Connection connection) {}\n+\n /**\n * Called once a node connection is opened.\n */\n- default void onConnectionOpened(DiscoveryNode node) {}\n+ default void onConnectionOpened(Transport.Connection connection) {}\n }", "filename": "core/src/main/java/org/elasticsearch/transport/TransportConnectionListener.java", "status": "modified" }, { "diff": "@@ -569,7 +569,7 @@ private <T extends TransportResponse> void sendRequestInternal(final Transport.C\n }\n Supplier<ThreadContext.StoredContext> storedContextSupplier = threadPool.getThreadContext().newRestorableContext(true);\n TransportResponseHandler<T> responseHandler = new ContextRestoreResponseHandler<>(storedContextSupplier, handler);\n- clientHandlers.put(requestId, new RequestHolder<>(responseHandler, connection.getNode(), action, timeoutHandler));\n+ clientHandlers.put(requestId, new RequestHolder<>(responseHandler, connection, action, timeoutHandler));\n if (lifecycle.stoppedOrClosed()) {\n // if we are not started the exception handling will remove the RequestHolder again and calls the handler to notify\n // the caller. It will only notify if the toStop code hasn't done the work yet.\n@@ -810,7 +810,7 @@ public TransportResponseHandler onResponseReceived(final long requestId) {\n }\n holder.cancelTimeout();\n if (traceEnabled() && shouldTraceAction(holder.action())) {\n- traceReceivedResponse(requestId, holder.node(), holder.action());\n+ traceReceivedResponse(requestId, holder.connection().getNode(), holder.action());\n }\n return holder.handler();\n }\n@@ -855,12 +855,12 @@ public void onNodeConnected(final DiscoveryNode node) {\n }\n \n @Override\n- public void onConnectionOpened(DiscoveryNode node) {\n+ public void onConnectionOpened(Transport.Connection connection) {\n // capture listeners before spawning the background callback so the following pattern won't trigger a call\n // connectToNode(); connection is completed successfully\n // addConnectionListener(); this listener shouldn't be called\n final Stream<TransportConnectionListener> listenersToNotify = TransportService.this.connectionListeners.stream();\n- threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(node)));\n+ threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(connection)));\n }\n \n @Override\n@@ -871,20 +871,28 @@ public void onNodeDisconnected(final DiscoveryNode node) {\n connectionListener.onNodeDisconnected(node);\n }\n });\n+ } catch (EsRejectedExecutionException ex) {\n+ logger.debug(\"Rejected execution on NodeDisconnected\", ex);\n+ }\n+ }\n+\n+ @Override\n+ public void onConnectionClosed(Transport.Connection connection) {\n+ try {\n for (Map.Entry<Long, RequestHolder> entry : clientHandlers.entrySet()) {\n RequestHolder holder = entry.getValue();\n- if (holder.node().equals(node)) {\n+ if 
(holder.connection().getCacheKey().equals(connection.getCacheKey())) {\n final RequestHolder holderToNotify = clientHandlers.remove(entry.getKey());\n if (holderToNotify != null) {\n // callback that an exception happened, but on a different thread since we don't\n // want handlers to worry about stack overflows\n- threadPool.generic().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException(node,\n- holderToNotify.action())));\n+ threadPool.generic().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException(\n+ connection.getNode(), holderToNotify.action())));\n }\n }\n }\n } catch (EsRejectedExecutionException ex) {\n- logger.debug(\"Rejected execution on NodeDisconnected\", ex);\n+ logger.debug(\"Rejected execution on onConnectionClosed\", ex);\n }\n }\n \n@@ -929,13 +937,14 @@ public void run() {\n if (holder != null) {\n // add it to the timeout information holder, in case we are going to get a response later\n long timeoutTime = System.currentTimeMillis();\n- timeoutInfoHandlers.put(requestId, new TimeoutInfoHolder(holder.node(), holder.action(), sentTime, timeoutTime));\n+ timeoutInfoHandlers.put(requestId, new TimeoutInfoHolder(holder.connection().getNode(), holder.action(), sentTime,\n+ timeoutTime));\n // now that we have the information visible via timeoutInfoHandlers, we try to remove the request id\n final RequestHolder removedHolder = clientHandlers.remove(requestId);\n if (removedHolder != null) {\n assert removedHolder == holder : \"two different holder instances for request [\" + requestId + \"]\";\n removedHolder.handler().handleException(\n- new ReceiveTimeoutTransportException(holder.node(), holder.action(),\n+ new ReceiveTimeoutTransportException(holder.connection().getNode(), holder.action(),\n \"request_id [\" + requestId + \"] timed out after [\" + (timeoutTime - sentTime) + \"ms]\"));\n } else {\n // response was processed, remove timeout info.\n@@ -990,15 +999,15 @@ static class RequestHolder<T extends TransportResponse> {\n \n private final TransportResponseHandler<T> handler;\n \n- private final DiscoveryNode node;\n+ private final Transport.Connection connection;\n \n private final String action;\n \n private final TimeoutHandler timeoutHandler;\n \n- RequestHolder(TransportResponseHandler<T> handler, DiscoveryNode node, String action, TimeoutHandler timeoutHandler) {\n+ RequestHolder(TransportResponseHandler<T> handler, Transport.Connection connection, String action, TimeoutHandler timeoutHandler) {\n this.handler = handler;\n- this.node = node;\n+ this.connection = connection;\n this.action = action;\n this.timeoutHandler = timeoutHandler;\n }\n@@ -1007,8 +1016,8 @@ public TransportResponseHandler<T> handler() {\n return handler;\n }\n \n- public DiscoveryNode node() {\n- return this.node;\n+ public Transport.Connection connection() {\n+ return this.connection;\n }\n \n public String action() {", "filename": "core/src/main/java/org/elasticsearch/transport/TransportService.java", "status": "modified" }, { "diff": "@@ -604,8 +604,8 @@ public void testResolveReuseExistingNodeConnections() throws ExecutionException,\n // install a listener to check that no new connections are made\n handleA.transportService.addConnectionListener(new TransportConnectionListener() {\n @Override\n- public void onConnectionOpened(DiscoveryNode node) {\n- fail(\"should not open any connections. got [\" + node + \"]\");\n+ public void onConnectionOpened(Transport.Connection connection) {\n+ fail(\"should not open any connections. 
got [\" + connection.getNode() + \"]\");\n }\n });\n ", "filename": "core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java", "status": "modified" }, { "diff": "@@ -204,7 +204,7 @@ protected void sendMessage(Object o, BytesReference reference, ActionListener li\n \n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n- return new NodeChannels(node, new Object[profile.getNumConnections()], profile);\n+ return new NodeChannels(node, new Object[profile.getNumConnections()], profile, c -> {});\n }\n \n @Override\n@@ -220,7 +220,7 @@ public long serverOpen() {\n @Override\n public NodeChannels getConnection(DiscoveryNode node) {\n return new NodeChannels(node, new Object[MockTcpTransport.LIGHT_PROFILE.getNumConnections()],\n- MockTcpTransport.LIGHT_PROFILE);\n+ MockTcpTransport.LIGHT_PROFILE, c -> {});\n }\n };\n DiscoveryNode node = new DiscoveryNode(\"foo\", buildNewFakeTransportAddress(), Version.CURRENT);", "filename": "core/src/test/java/org/elasticsearch/transport/TCPTransportTests.java", "status": "modified" }, { "diff": "@@ -320,7 +320,7 @@ public long serverOpen() {\n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) {\n final Channel[] channels = new Channel[profile.getNumConnections()];\n- final NodeChannels nodeChannels = new NodeChannels(node, channels, profile);\n+ final NodeChannels nodeChannels = new NodeChannels(node, channels, profile, transportServiceAdapter::onConnectionClosed);\n boolean success = false;\n try {\n final TimeValue connectTimeout;", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java", "status": "modified" }, { "diff": "@@ -777,6 +777,11 @@ public void sendRequest(long requestId, String action, TransportRequest request,\n public void close() throws IOException {\n connection.close();\n }\n+\n+ @Override\n+ public Object getCacheKey() {\n+ return connection.getCacheKey();\n+ }\n }\n \n public Transport getOriginalTransport() {", "filename": "test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java", "status": "modified" }, { "diff": "@@ -2099,9 +2099,6 @@ public void handleException(TransportException exp) {\n \n @Override\n public String executor() {\n- if (1 == 1)\n- return \"same\";\n-\n return randomFrom(executors);\n }\n };\n@@ -2111,4 +2108,59 @@ public String executor() {\n latch.await();\n }\n \n+ public void testHandlerIsInvokedOnConnectionClose() throws IOException, InterruptedException {\n+ List<String> executors = new ArrayList<>(ThreadPool.THREAD_POOL_TYPES.keySet());\n+ CollectionUtil.timSort(executors); // makes sure it's reproducible\n+ TransportService serviceC = build(Settings.builder().put(\"name\", \"TS_TEST\").build(), version0, null, true);\n+ serviceC.registerRequestHandler(\"action\", TestRequest::new, ThreadPool.Names.SAME,\n+ (request, channel) -> {\n+ // do nothing\n+ });\n+ serviceC.start();\n+ serviceC.acceptIncomingRequests();\n+ CountDownLatch latch = new CountDownLatch(1);\n+ TransportResponseHandler<TransportResponse> transportResponseHandler = new TransportResponseHandler<TransportResponse>() {\n+ @Override\n+ public TransportResponse newInstance() {\n+ return TransportResponse.Empty.INSTANCE;\n+ }\n+\n+ @Override\n+ public void handleResponse(TransportResponse response) {\n+ try {\n+ fail(\"no response expected\");\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public void 
handleException(TransportException exp) {\n+ try {\n+ assertTrue(exp.getClass().toString(), exp instanceof NodeDisconnectedException);\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public String executor() {\n+ return randomFrom(executors);\n+ }\n+ };\n+ ConnectionProfile.Builder builder = new ConnectionProfile.Builder();\n+ builder.addConnections(1,\n+ TransportRequestOptions.Type.BULK,\n+ TransportRequestOptions.Type.PING,\n+ TransportRequestOptions.Type.RECOVERY,\n+ TransportRequestOptions.Type.REG,\n+ TransportRequestOptions.Type.STATE);\n+ Transport.Connection connection = serviceB.openConnection(serviceC.getLocalNode(), builder.build());\n+ serviceB.sendRequest(connection, \"action\", new TestRequest(randomFrom(\"fail\", \"pass\")), TransportRequestOptions.EMPTY,\n+ transportResponseHandler);\n+ connection.close();\n+ latch.await();\n+ serviceC.close();\n+ }\n+\n }", "filename": "test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java", "status": "modified" }, { "diff": "@@ -180,7 +180,8 @@ private void readMessage(MockChannel mockChannel, StreamInput input) throws IOEx\n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n final MockChannel[] mockChannels = new MockChannel[1];\n- final NodeChannels nodeChannels = new NodeChannels(node, mockChannels, LIGHT_PROFILE); // we always use light here\n+ final NodeChannels nodeChannels = new NodeChannels(node, mockChannels, LIGHT_PROFILE,\n+ transportServiceAdapter::onConnectionClosed); // we always use light here\n boolean success = false;\n final MockSocket socket = new MockSocket();\n try {", "filename": "test/framework/src/main/java/org/elasticsearch/transport/MockTcpTransport.java", "status": "modified" } ] }
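One detail in the diffs above is easy to miss: the Javadoc added to `getCacheKey()` requires delegating connections to forward the call to the original connection, which is exactly what the `MockTransportService` change does. The standalone sketch below shows the shape of such a wrapper; `DelegatingConnectionSketch` and `TracingConnection` are hypothetical names invented here, not types from the codebase.

```java
/**
 * Hypothetical wrapper (names invented here) that decorates a connection, e.g. to add
 * tracing, while keeping cache-key based pruning intact.
 */
public class DelegatingConnectionSketch {

    interface Connection {
        default Object getCacheKey() {
            return this;
        }

        void close();
    }

    static final class TracingConnection implements Connection {
        private final Connection delegate;

        TracingConnection(Connection delegate) {
            this.delegate = delegate;
        }

        @Override
        public Object getCacheKey() {
            // Forward to the original connection; returning `this` would mean close
            // notifications for the underlying connection no longer match our requests.
            return delegate.getCacheKey();
        }

        @Override
        public void close() {
            delegate.close();
        }
    }
}
```

If the wrapper returned `this` instead, the cache keys of the wrapper and the wrapped connection would never compare equal, and close notifications for the underlying channel would fail to prune the wrapper's pending requests.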
{ "body": " **Elasticsearch version**:\r\n5.4.0\r\n\r\n**Plugins installed**:\r\nNode\r\n\r\n**JVM version**:\r\n1.8.0_102\r\n\r\n**OS version**:\r\nLinux globevm 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nI'm using Elasticsearch 5.4.0 with Transport Client with the following problems:\r\n - you must launch the application several times before it can connect to the cluster.\r\n - when connection is established, after a cluster restart, the connection is no more recovered, with this stack:\r\n\r\n```\r\nNoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{s-HKw1m4S9aMgCkx5iBuYg}{192.168.203.128}{192.168.203.128:9500}]]\r\nat org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:348)\r\nat org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:246)\r\nat org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)\r\nat org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)\r\nat org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)\r\nat org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:730)\r\nat org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)\r\nat org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)\r\nat org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:69)\r\n```\r\n\r\nThe Transport Client is setting as following:\r\n```\r\n Settings.Builder settingsBuilder = Settings.builder();\r\n\r\n\t\tsettingsBuilder.put(\"cluster.name\", \"globevmes5\");\r\n\t\tsettingsBuilder.put(\"client.transport.sniff\", true);\r\n\t\t \t\r\n\t\tclient = new PreBuiltTransportClient(settingsBuilder.build());\r\n\t\ttry {\r\n\t\t\tclient.addTransportAddress(new \r\n InetSocketTransportAddress(InetAddress.getByName(\"192.168.203.128\"), 9500));\r\n\t\t\t\r\n\t\t} catch (Exception e) {\r\n\t\t\tSystem.out.println(e.getMessage());\t\r\n\t\t}\r\n```\r\n\r\nElastic node configuration:\r\n - network.host: 192.168.203.128\r\n - http.port: 9400\r\n - transport.profiles.default.port: 9500-9600\r\n\r\nWith previous Elasticsearch 5.3.2 it worked fine.\r\nSetting \"client.transport.sniff\" to false works fine.\r\n\r\n**Provide logs**:\r\nElastic node log:\r\n\r\n```\r\n[2017-05-09T11:14:24,336][WARN ][o.e.b.Natives ] unable to load JNA native support library, native methods will be disabled.\r\njava.lang.UnsatisfiedLinkError: /tmp/jna--1077556979/jna7634687564598757394.tmp: /lib64/libc.so.6: version `GLIBC_2.7' not found (required by /tmp/jna--1077556979/jna7634687564598757394.tmp)\r\n at java.lang.ClassLoader$NativeLibrary.load(Native Method) ~[?:1.8.0_102]\r\n at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941) ~[?:1.8.0_102]\r\n at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824) ~[?:1.8.0_102]\r\n at java.lang.Runtime.load0(Runtime.java:809) ~[?:1.8.0_102]\r\n at java.lang.System.load(System.java:1086) ~[?:1.8.0_102]\r\n at com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:947) ~[jna-4.4.0.jar:4.4.0 (b0)]\r\n at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:922) ~[jna-4.4.0.jar:4.4.0 (b0)]\r\n at com.sun.jna.Native.<clinit>(Native.java:190) ~[jna-4.4.0.jar:4.4.0 (b0)]\r\n at 
java.lang.Class.forName0(Native Method) ~[?:1.8.0_102]\r\n at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_102]\r\n at org.elasticsearch.bootstrap.Natives.<clinit>(Natives.java:45) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:204) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.4.0.jar:5.4.0]\r\n[2017-05-09T11:14:24,342][WARN ][o.e.b.Natives ] cannot check if running as root because JNA is not available\r\n[2017-05-09T11:14:24,342][WARN ][o.e.b.Natives ] cannot register console handler because JNA is not available\r\n[2017-05-09T11:14:24,344][WARN ][o.e.b.Natives ] cannot getrlimit RLIMIT_NPROC because JNA is not available\r\n[2017-05-09T11:14:24,344][WARN ][o.e.b.Natives ] cannot getrlimit RLIMIT_AS beacuse JNA is not available\r\n[2017-05-09T11:14:24,493][INFO ][o.e.n.Node ] [globevmes5-node] initializing ...\r\n[2017-05-09T11:14:24,615][INFO ][o.e.e.NodeEnvironment ] [globevmes5-node] using [1] data paths, mounts [[/methode (/dev/mapper/VolGroup01-LogVol02)]], net usable_space [8.1gb], net total_space [72.8gb], spins? 
[possibly], types [ext3]\r\n[2017-05-09T11:14:24,615][INFO ][o.e.e.NodeEnvironment ] [globevmes5-node] heap size [1007.3mb], compressed ordinary object pointers [true]\r\n[2017-05-09T11:14:24,659][INFO ][o.e.n.Node ] [globevmes5-node] node name [globevmes5-node], node ID [M1_iHcSKRX6wHkD_Va0uDg]\r\n[2017-05-09T11:14:24,659][INFO ][o.e.n.Node ] [globevmes5-node] version[5.4.0], pid[31204], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Linux/2.6.18-194.el5/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_102/25.102-b14]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [aggs-matrix-stats]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [ingest-common]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-expression]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-groovy]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-mustache]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-painless]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [percolator]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [reindex]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [transport-netty3]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [transport-netty4]\r\n[2017-05-09T11:14:26,982][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded plugin [eom-elasticsearch-plugin]\r\n[2017-05-09T11:14:28,406][INFO ][o.e.d.DiscoveryModule ] [globevmes5-node] using discovery type [zen]\r\n[2017-05-09T11:14:28,972][INFO ][o.e.n.Node ] [globevmes5-node] initialized\r\n[2017-05-09T11:14:28,972][INFO ][o.e.n.Node ] [globevmes5-node] starting ...\r\n[2017-05-09T11:14:29,100][INFO ][o.e.t.TransportService ] [globevmes5-node] publish_address {192.168.203.128:9500}, bound_addresses {192.168.203.128:9500}\r\n[2017-05-09T11:14:29,106][INFO ][o.e.b.BootstrapChecks ] [globevmes5-node] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks\r\n[2017-05-09T11:14:32,227][INFO ][o.e.c.s.ClusterService ] [globevmes5-node] new_master {globevmes5-node}{M1_iHcSKRX6wHkD_Va0uDg}{j8GR_IGFSfW2y502jd-SKA}{192.168.203.128}{192.168.203.128:9500}, reason: zen-disco-elected-as-master ([0] nodes joined)\r\n[2017-05-09T11:14:32,379][INFO ][o.e.h.n.Netty4HttpServerTransport] [globevmes5-node] publish_address {192.168.203.128:9400}, bound_addresses {192.168.203.128:9400}\r\n[2017-05-09T11:14:32,386][INFO ][o.e.n.Node ] [globevmes5-node] started\r\n[2017-05-09T11:14:32,590][INFO ][o.e.g.GatewayService ] [globevmes5-node] recovered [6] indices into cluster_state\r\n[2017-05-09T11:14:33,173][INFO ][o.e.c.r.a.AllocationService] [globevmes5-node] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).\r\n[2017-05-09T11:15:02,357][INFO ][o.e.c.r.a.DiskThresholdMonitor] [globevmes5-node] low disk watermark [85%] exceeded on [M1_iHcSKRX6wHkD_Va0uDg][globevmes5-node][/methode/meth01/mnt/elasticsearch-5.4.0/data/nodes/0] free: 8.1gb[11.1%], replicas will not be assigned to this node \r\n```\r\n\r\n", "comments": [ { "body": "It looks like a bug to me, TransportClient's cluster state requests timed out in my local tests. 
It seems like some requests hang, maybe because of a concurrent disconnection or a Netty issue. @jasontedor or @bleskes can you have a look?", "created_at": "2017-05-11T10:34:40Z" } ], "number": 24557, "title": "Elasticsearch Transport Client fails to recover connection after cluster restart" }
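Both reports above (#24575 and #24557) point at the same practical mitigation until a fixed release is available: run a 5.3.x transport client against the 5.4.0 cluster, or disable sniffing. Below is a minimal sketch of the latter, based on the client settings quoted in #24557; the cluster name and address are the reporter's values, not defaults, and this is only a sketch of the workaround, not of the fix itself.

```java
import java.net.InetAddress;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class SniffWorkaround {
    public static void main(String[] args) throws Exception {
        Settings settings = Settings.builder()
                .put("cluster.name", "globevmes5")
                // Workaround reported in the issue: leave sniffing off until the client and
                // cluster both run a release that contains the onConnectionClosed fix.
                .put("client.transport.sniff", false)
                .build();

        try (PreBuiltTransportClient client = new PreBuiltTransportClient(settings)) {
            client.addTransportAddress(
                    new InetSocketTransportAddress(InetAddress.getByName("192.168.203.128"), 9500));
            // ... use the client ...
        }
    }
}
```

With sniffing off the client skips the cluster-state sampling that relies on the short-lived connections described in the PR, which is presumably why reporters saw it work; it is a mitigation, not a fix.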
{ "body": "Today we prune transport handlers in TransportService when a node is disconnected.\r\nThis can cause connections to starve in the TransportService if the connection is\r\nopened as a short living connection ie. without sharing the connection to a node\r\nvia registering in the transport itself. This change now moves to pruning based\r\non the connections cache key to ensure we notify handlers as soon as the connection\r\nis closed for all connections not just for registered connections.\r\n\r\nRelates to #24632\r\nRelates to #24575\r\nRelates to #24557", "number": 24639, "review_comments": [ { "body": "🤣", "created_at": "2017-05-12T13:13:28Z" }, { "body": "yeah I know who put that there 💃 ", "created_at": "2017-05-12T13:29:31Z" }, { "body": "😉", "created_at": "2017-05-12T13:33:10Z" } ], "title": "Notify onConnectionClosed rather than onNodeDisconnect to prune transport handlers" }
{ "commits": [ { "message": "Notify onConnectionClosed rather than onNodeDisconnect to prune transport handlers\n\nToday we prune transport handlers in TransporService when a node is disconnected.\nThis can cause connections to starve in the TransportService if the connection is\nopened as a short living connection ie. without sharing the connection to a node\nvia registering in the transport itself. This change now moves to pruning based\non the connections cache key to ensure we notify handlers as soon as the connection\nis closed for all connections not just for registered connections.\n\nRelates to #24632\nRelates to #24575\nRelates to #24557" }, { "message": "fix line len" } ], "files": [ { "diff": "@@ -101,6 +101,7 @@\n import java.util.concurrent.atomic.AtomicReference;\n import java.util.concurrent.locks.ReadWriteLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n+import java.util.function.Consumer;\n import java.util.regex.Matcher;\n import java.util.regex.Pattern;\n import java.util.stream.Collectors;\n@@ -357,8 +358,9 @@ public final class NodeChannels implements Connection {\n private final DiscoveryNode node;\n private final AtomicBoolean closed = new AtomicBoolean(false);\n private final Version version;\n+ private final Consumer<Connection> onClose;\n \n- public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile connectionProfile) {\n+ public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile connectionProfile, Consumer<Connection> onClose) {\n this.node = node;\n this.channels = channels;\n assert channels.length == connectionProfile.getNumConnections() : \"expected channels size to be == \"\n@@ -369,13 +371,15 @@ public NodeChannels(DiscoveryNode node, Channel[] channels, ConnectionProfile co\n typeMapping.put(type, handle);\n }\n version = node.getVersion();\n+ this.onClose = onClose;\n }\n \n NodeChannels(NodeChannels channels, Version handshakeVersion) {\n this.node = channels.node;\n this.channels = channels.channels;\n this.typeMapping = channels.typeMapping;\n this.version = handshakeVersion;\n+ this.onClose = channels.onClose;\n }\n \n @Override\n@@ -408,6 +412,7 @@ public Channel channel(TransportRequestOptions.Type type) {\n public synchronized void close() throws IOException {\n if (closed.compareAndSet(false, true)) {\n closeChannels(Arrays.stream(channels).filter(Objects::nonNull).collect(Collectors.toList()));\n+ onClose.accept(this);\n }\n }\n \n@@ -519,8 +524,8 @@ public final NodeChannels openConnection(DiscoveryNode node, ConnectionProfile c\n final TimeValue handshakeTimeout = connectionProfile.getHandshakeTimeout() == null ?\n connectTimeout : connectionProfile.getHandshakeTimeout();\n final Version version = executeHandshake(node, channel, handshakeTimeout);\n- transportServiceAdapter.onConnectionOpened(node);\n- nodeChannels = new NodeChannels(nodeChannels, version);// clone the channels - we now have the correct version\n+ transportServiceAdapter.onConnectionOpened(nodeChannels);\n+ nodeChannels = new NodeChannels(nodeChannels, version); // clone the channels - we now have the correct version\n success = true;\n return nodeChannels;\n } catch (ConnectTransportException e) {", "filename": "core/src/main/java/org/elasticsearch/transport/TcpTransport.java", "status": "modified" }, { "diff": "@@ -132,5 +132,13 @@ void sendRequest(long requestId, String action, TransportRequest request, Transp\n default Version getVersion() {\n return getNode().getVersion();\n }\n+\n+ /**\n+ * Returns a key that this 
connection can be cached on. Delegating subclasses must delegate method call to\n+ * the original connection.\n+ */\n+ default Object getCacheKey() {\n+ return this;\n+ }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/transport/Transport.java", "status": "modified" }, { "diff": "@@ -33,8 +33,14 @@ default void onNodeConnected(DiscoveryNode node) {}\n */\n default void onNodeDisconnected(DiscoveryNode node) {}\n \n+ /**\n+ * Called once a node connection is closed. The connection might not have been registered in the\n+ * transport as a shared connection to a specific node\n+ */\n+ default void onConnectionClosed(Transport.Connection connection) {}\n+\n /**\n * Called once a node connection is opened.\n */\n- default void onConnectionOpened(DiscoveryNode node) {}\n+ default void onConnectionOpened(Transport.Connection connection) {}\n }", "filename": "core/src/main/java/org/elasticsearch/transport/TransportConnectionListener.java", "status": "modified" }, { "diff": "@@ -569,7 +569,7 @@ private <T extends TransportResponse> void sendRequestInternal(final Transport.C\n }\n Supplier<ThreadContext.StoredContext> storedContextSupplier = threadPool.getThreadContext().newRestorableContext(true);\n TransportResponseHandler<T> responseHandler = new ContextRestoreResponseHandler<>(storedContextSupplier, handler);\n- clientHandlers.put(requestId, new RequestHolder<>(responseHandler, connection.getNode(), action, timeoutHandler));\n+ clientHandlers.put(requestId, new RequestHolder<>(responseHandler, connection, action, timeoutHandler));\n if (lifecycle.stoppedOrClosed()) {\n // if we are not started the exception handling will remove the RequestHolder again and calls the handler to notify\n // the caller. It will only notify if the toStop code hasn't done the work yet.\n@@ -810,7 +810,7 @@ public TransportResponseHandler onResponseReceived(final long requestId) {\n }\n holder.cancelTimeout();\n if (traceEnabled() && shouldTraceAction(holder.action())) {\n- traceReceivedResponse(requestId, holder.node(), holder.action());\n+ traceReceivedResponse(requestId, holder.connection().getNode(), holder.action());\n }\n return holder.handler();\n }\n@@ -855,12 +855,12 @@ public void onNodeConnected(final DiscoveryNode node) {\n }\n \n @Override\n- public void onConnectionOpened(DiscoveryNode node) {\n+ public void onConnectionOpened(Transport.Connection connection) {\n // capture listeners before spawning the background callback so the following pattern won't trigger a call\n // connectToNode(); connection is completed successfully\n // addConnectionListener(); this listener shouldn't be called\n final Stream<TransportConnectionListener> listenersToNotify = TransportService.this.connectionListeners.stream();\n- threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(node)));\n+ threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(connection)));\n }\n \n @Override\n@@ -871,20 +871,28 @@ public void onNodeDisconnected(final DiscoveryNode node) {\n connectionListener.onNodeDisconnected(node);\n }\n });\n+ } catch (EsRejectedExecutionException ex) {\n+ logger.debug(\"Rejected execution on NodeDisconnected\", ex);\n+ }\n+ }\n+\n+ @Override\n+ public void onConnectionClosed(Transport.Connection connection) {\n+ try {\n for (Map.Entry<Long, RequestHolder> entry : clientHandlers.entrySet()) {\n RequestHolder holder = entry.getValue();\n- if (holder.node().equals(node)) {\n+ if 
(holder.connection().getCacheKey().equals(connection.getCacheKey())) {\n final RequestHolder holderToNotify = clientHandlers.remove(entry.getKey());\n if (holderToNotify != null) {\n // callback that an exception happened, but on a different thread since we don't\n // want handlers to worry about stack overflows\n- threadPool.generic().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException(node,\n- holderToNotify.action())));\n+ threadPool.generic().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException(\n+ connection.getNode(), holderToNotify.action())));\n }\n }\n }\n } catch (EsRejectedExecutionException ex) {\n- logger.debug(\"Rejected execution on NodeDisconnected\", ex);\n+ logger.debug(\"Rejected execution on onConnectionClosed\", ex);\n }\n }\n \n@@ -929,13 +937,14 @@ public void run() {\n if (holder != null) {\n // add it to the timeout information holder, in case we are going to get a response later\n long timeoutTime = System.currentTimeMillis();\n- timeoutInfoHandlers.put(requestId, new TimeoutInfoHolder(holder.node(), holder.action(), sentTime, timeoutTime));\n+ timeoutInfoHandlers.put(requestId, new TimeoutInfoHolder(holder.connection().getNode(), holder.action(), sentTime,\n+ timeoutTime));\n // now that we have the information visible via timeoutInfoHandlers, we try to remove the request id\n final RequestHolder removedHolder = clientHandlers.remove(requestId);\n if (removedHolder != null) {\n assert removedHolder == holder : \"two different holder instances for request [\" + requestId + \"]\";\n removedHolder.handler().handleException(\n- new ReceiveTimeoutTransportException(holder.node(), holder.action(),\n+ new ReceiveTimeoutTransportException(holder.connection().getNode(), holder.action(),\n \"request_id [\" + requestId + \"] timed out after [\" + (timeoutTime - sentTime) + \"ms]\"));\n } else {\n // response was processed, remove timeout info.\n@@ -990,15 +999,15 @@ static class RequestHolder<T extends TransportResponse> {\n \n private final TransportResponseHandler<T> handler;\n \n- private final DiscoveryNode node;\n+ private final Transport.Connection connection;\n \n private final String action;\n \n private final TimeoutHandler timeoutHandler;\n \n- RequestHolder(TransportResponseHandler<T> handler, DiscoveryNode node, String action, TimeoutHandler timeoutHandler) {\n+ RequestHolder(TransportResponseHandler<T> handler, Transport.Connection connection, String action, TimeoutHandler timeoutHandler) {\n this.handler = handler;\n- this.node = node;\n+ this.connection = connection;\n this.action = action;\n this.timeoutHandler = timeoutHandler;\n }\n@@ -1007,8 +1016,8 @@ public TransportResponseHandler<T> handler() {\n return handler;\n }\n \n- public DiscoveryNode node() {\n- return this.node;\n+ public Transport.Connection connection() {\n+ return this.connection;\n }\n \n public String action() {", "filename": "core/src/main/java/org/elasticsearch/transport/TransportService.java", "status": "modified" }, { "diff": "@@ -604,8 +604,8 @@ public void testResolveReuseExistingNodeConnections() throws ExecutionException,\n // install a listener to check that no new connections are made\n handleA.transportService.addConnectionListener(new TransportConnectionListener() {\n @Override\n- public void onConnectionOpened(DiscoveryNode node) {\n- fail(\"should not open any connections. got [\" + node + \"]\");\n+ public void onConnectionOpened(Transport.Connection connection) {\n+ fail(\"should not open any connections. 
got [\" + connection.getNode() + \"]\");\n }\n });\n ", "filename": "core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java", "status": "modified" }, { "diff": "@@ -204,7 +204,7 @@ protected void sendMessage(Object o, BytesReference reference, ActionListener li\n \n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n- return new NodeChannels(node, new Object[profile.getNumConnections()], profile);\n+ return new NodeChannels(node, new Object[profile.getNumConnections()], profile, c -> {});\n }\n \n @Override\n@@ -220,7 +220,7 @@ public long serverOpen() {\n @Override\n public NodeChannels getConnection(DiscoveryNode node) {\n return new NodeChannels(node, new Object[MockTcpTransport.LIGHT_PROFILE.getNumConnections()],\n- MockTcpTransport.LIGHT_PROFILE);\n+ MockTcpTransport.LIGHT_PROFILE, c -> {});\n }\n };\n DiscoveryNode node = new DiscoveryNode(\"foo\", buildNewFakeTransportAddress(), Version.CURRENT);", "filename": "core/src/test/java/org/elasticsearch/transport/TCPTransportTests.java", "status": "modified" }, { "diff": "@@ -320,7 +320,7 @@ public long serverOpen() {\n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) {\n final Channel[] channels = new Channel[profile.getNumConnections()];\n- final NodeChannels nodeChannels = new NodeChannels(node, channels, profile);\n+ final NodeChannels nodeChannels = new NodeChannels(node, channels, profile, transportServiceAdapter::onConnectionClosed);\n boolean success = false;\n try {\n final TimeValue connectTimeout;", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java", "status": "modified" }, { "diff": "@@ -777,6 +777,11 @@ public void sendRequest(long requestId, String action, TransportRequest request,\n public void close() throws IOException {\n connection.close();\n }\n+\n+ @Override\n+ public Object getCacheKey() {\n+ return connection.getCacheKey();\n+ }\n }\n \n public Transport getOriginalTransport() {", "filename": "test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java", "status": "modified" }, { "diff": "@@ -2099,9 +2099,6 @@ public void handleException(TransportException exp) {\n \n @Override\n public String executor() {\n- if (1 == 1)\n- return \"same\";\n-\n return randomFrom(executors);\n }\n };\n@@ -2111,4 +2108,59 @@ public String executor() {\n latch.await();\n }\n \n+ public void testHandlerIsInvokedOnConnectionClose() throws IOException, InterruptedException {\n+ List<String> executors = new ArrayList<>(ThreadPool.THREAD_POOL_TYPES.keySet());\n+ CollectionUtil.timSort(executors); // makes sure it's reproducible\n+ TransportService serviceC = build(Settings.builder().put(\"name\", \"TS_TEST\").build(), version0, null, true);\n+ serviceC.registerRequestHandler(\"action\", TestRequest::new, ThreadPool.Names.SAME,\n+ (request, channel) -> {\n+ // do nothing\n+ });\n+ serviceC.start();\n+ serviceC.acceptIncomingRequests();\n+ CountDownLatch latch = new CountDownLatch(1);\n+ TransportResponseHandler<TransportResponse> transportResponseHandler = new TransportResponseHandler<TransportResponse>() {\n+ @Override\n+ public TransportResponse newInstance() {\n+ return TransportResponse.Empty.INSTANCE;\n+ }\n+\n+ @Override\n+ public void handleResponse(TransportResponse response) {\n+ try {\n+ fail(\"no response expected\");\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public void 
handleException(TransportException exp) {\n+ try {\n+ assertTrue(exp.getClass().toString(), exp instanceof NodeDisconnectedException);\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public String executor() {\n+ return randomFrom(executors);\n+ }\n+ };\n+ ConnectionProfile.Builder builder = new ConnectionProfile.Builder();\n+ builder.addConnections(1,\n+ TransportRequestOptions.Type.BULK,\n+ TransportRequestOptions.Type.PING,\n+ TransportRequestOptions.Type.RECOVERY,\n+ TransportRequestOptions.Type.REG,\n+ TransportRequestOptions.Type.STATE);\n+ Transport.Connection connection = serviceB.openConnection(serviceC.getLocalNode(), builder.build());\n+ serviceB.sendRequest(connection, \"action\", new TestRequest(randomFrom(\"fail\", \"pass\")), TransportRequestOptions.EMPTY,\n+ transportResponseHandler);\n+ connection.close();\n+ latch.await();\n+ serviceC.close();\n+ }\n+\n }", "filename": "test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java", "status": "modified" }, { "diff": "@@ -180,7 +180,8 @@ private void readMessage(MockChannel mockChannel, StreamInput input) throws IOEx\n @Override\n protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n final MockChannel[] mockChannels = new MockChannel[1];\n- final NodeChannels nodeChannels = new NodeChannels(node, mockChannels, LIGHT_PROFILE); // we always use light here\n+ final NodeChannels nodeChannels = new NodeChannels(node, mockChannels, LIGHT_PROFILE,\n+ transportServiceAdapter::onConnectionClosed); // we always use light here\n boolean success = false;\n final MockSocket socket = new MockSocket();\n try {", "filename": "test/framework/src/main/java/org/elasticsearch/transport/MockTcpTransport.java", "status": "modified" } ] }
{ "body": "Hi, I have a rest service using Netty as basis and connecting to ElasticSearch backend via java transport client API.\r\nIt worked very well with Netty 4.1.8 and ES 5.3.0.\r\nNow I tried to upgrade ES backend and transport client to 5.4.0, and also Netty to 4.1.9. Then following problems happened:\r\n\r\n10 May 2017;17:01:59.645 Developer linux-68qh [elasticsearch[_client_][generic][T#3]] INFO o.e.c.t.TransportClientNodesService - failed to get local cluster state for {#transport#-1}{WlTQjgcGQ1uqyNNsw4ZnAw}{127.0.0.1}{127.0.0.1:9300}, disconnecting...\r\norg.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9300][cluster:monitor/state] request_id [7] timed out after [5001ms]\r\n\tat org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:925)\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\n\r\nI roll back the transport client to 5.3.0 but keep backend 5.4.0. \r\n\r\nThen it is able to connect to Es backend.\r\nI use SBT and the build dependencies for the error are:\r\n\r\n\"io.netty\" % \"netty-all\" % \"4.1.9.Final\"\r\n\"org.elasticsearch\" % \"elasticsearch\" % \"5.4.0\"\r\n \"org.elasticsearch.client\" % \"transport\" % \"5.4.0\",\r\nand \"io.netty\" % \"netty-transport-native-epoll\" % \"4.1.9.Final\" classifier \"linux-x86_64\"\r\n\r\nEnvironment:\r\n\r\nopenjdk version \"1.8.0_121\"\r\nOpenJDK Runtime Environment (IcedTea 3.3.0) (suse-3.3-x86_64)\r\nOpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n\r\nLinux linux-68qh 4.10.13-1-default #1 SMP PREEMPT Thu Apr 27 12:23:31 UTC 2017 (e5d11ce) x86_64 x86_64 x86_64 GNU/Linux\r\n\r\nThanks\r\n\r\n\r\n\r\n", "comments": [ { "body": "It looks like a bug to me. Is sniffing enabled on your transport client?", "created_at": "2017-05-11T08:47:54Z" }, { "body": "Yes it is enabled. ", "created_at": "2017-05-11T09:57:09Z" }, { "body": "Same issue here. We have:\r\n- Spring Boot v1.5.3.RELEASE,\r\n- Switched from Elasticsearch 5.3.2 to 5.4.0,\r\n- using Transport Client with sniff enabled.\r\n\r\nClient and Elasticsearch both on the same machine, connecting through localhost:\r\n- When using TransportClient 5.3.2 to connect to Elastic 5.4.0 => OK,\r\n- 5.4.0 to 5.4.0 => KO.\r\n\r\nThe exception we have on startup:\r\n> org.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9300][cluster:monitor/state] request_id [7] timed out after [5000ms]\r\n\tat org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:925) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.0.jar:5.4.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]", "created_at": "2017-05-11T18:50:04Z" }, { "body": "I am seeing this issue as well on some nodes connecting to ES. We run a service that has multiple machines that each connect to ES, some of them are able to connect successfully and others do not. 
", "created_at": "2017-05-11T20:36:44Z" }, { "body": "Thanks for reporting, I think I know where the issue is.", "created_at": "2017-05-11T20:44:03Z" }, { "body": "Thanks @tlrx. I'm not sure if you are also aware, but I also saw errors that looked like the following when I disabled sniffing. \r\n\r\n```\r\n20:38:44.935 [elasticsearch[_client_][generic][T#2]] DEBUG - failed to connect to discovered node [{i-0562d98cb14e42358}{Gzbd-MEzRo-OHMUoEajvXA}{x6V2--f3SS-NzVk5wAQQYg}{10.178.212.242}{127.0.0.1:4374}{aws_availability_zone=us-east-1a}]\r\nConnectTransportException[[i-0562d98cb14e42358][127.0.0.1:4374] handshake failed. unexpected remote node {i-01bae8d9b0f31ac54}{MUjAv_3JR5KmzEdn-eJeSA}{qJdTT_oaSRCJ1TLO1W2A6w}{10.158.100.27}{10.158.100.27:9300}{aws_availability_zone=us-east-1b}]\r\n\tat org.elasticsearch.transport.TransportService.lambda$connectToNode$3(TransportService.java:319)\r\n\tat org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:466)\r\n\tat org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:315)\r\n\tat org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:302)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.validateNewNodes(TransportClientNodesService.java:374)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:442)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:358)\r\n\tat org.elasticsearch.client.transport.TransportClientNodesService$ScheduledNodeSampler.run(TransportClientNodesService.java:391)\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\n```\r\n\r\nIf it helps we have a service discovery framework to discover services: (https://medium.com/airbnb-engineering/smartstack-service-discovery-in-the-cloud-4b8a080de619). We \"randomly\" pick an ES box to connect to and then use sniffing (if enabled) to discover the rest. Even though ES is running on 9200/9300 we use a different port on our client machines because of the service discovery framework does the correct routing. Both the service discovery port and the \"direct access\" port are reachable over the network. \r\n\r\nI am rolling back our transport client version to 5.3.2 and will report back on the results.\r\nUpdate: 5.3.2 works great", "created_at": "2017-05-11T20:51:05Z" }, { "body": "Same here... 5.4.0 to 5.4.0 fails.... but 5.3.0 to 5.4.0 works", "created_at": "2017-05-12T11:03:32Z" }, { "body": "Same here... 5.4.0 to 5.4.0 fails.... but 5.3.0 to 5.4.0 works", "created_at": "2017-05-29T12:30:20Z" }, { "body": "I am seeing a similar exception in 2.3.1. 
Below is the exception:-\r\n\r\n```\r\nINFO [2017-08-08 20:14:18,019] [U:3,129,F:822,T:3,950,M:3,950] elasticsearch.client.transport:[TransportClientNodesService$SniffNodesSampler$1$1:handleException:455] - [elasticsearch[Edward \"Ned\" Buckman][generic][T#61]] - [Edward \"Ned\" Buckman] failed to get local cluster state for {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}, disconnecting...\r\nReceiveTimeoutTransportException[[][localhost/127.0.0.1:9300][cluster:monitor/state] request_id [341654] timed out after [5001ms]]\r\n\tat org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\n```\r\n\r\nIs the issue not fixed in 2.3.1?", "created_at": "2017-10-06T15:54:42Z" }, { "body": "> Is the issue not fixed in 2.3.1?\r\n\r\nI didn't test in 2.3.1 since the fix fixed a bug introduced in #22828 for 5.4.0. It's possible that this bug exists in 2.3.1 but this version is EOL and not supported anymore.", "created_at": "2017-10-09T09:10:40Z" }, { "body": "ok thanks for the update.\n\nSent from GMail on Android\n\nOn Oct 9, 2017 2:42 PM, \"Tanguy Leroux\" <notifications@github.com> wrote:\n\n> Is the issue not fixed in 2.3.1?\n>\n> I didn't test in 2.3.1 since the fix fixed a bug introduced in #22828\n> <https://github.com/elastic/elasticsearch/pull/22828> for 5.4.0. It's\n> possible that this bug exists in 2.3.1 but this version is EOL and not\n> supported anymore.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/24575#issuecomment-335102848>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AHw8JPzLkGBjp7CpN19QJ6_lhFvamNlLks5sqeOCgaJpZM4NWYB_>\n> .\n>\n", "created_at": "2017-10-09T09:15:05Z" } ], "number": 24575, "title": "5.4.0 transport client failed to get local cluster state while using 5.3.0 to connect to 5.4.0 servers works" }
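For reference, the setup the reporters describe boils down to a 5.x `PreBuiltTransportClient` with `client.transport.sniff` enabled. A sketch of that configuration follows; the cluster name and address are placeholders rather than values from the report, and only the sniff setting is the relevant part.

```java
import java.net.InetAddress;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

// Sketch of the reporters' setup: a 5.x TransportClient with sniffing enabled.
public class SniffingClientExample {
    public static void main(String[] args) throws Exception {
        Settings settings = Settings.builder()
            .put("cluster.name", "my-cluster")      // placeholder
            .put("client.transport.sniff", true)    // sniffing is what triggers the bug
            .build();

        try (PreBuiltTransportClient client = new PreBuiltTransportClient(settings)) {
            client.addTransportAddress(
                new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
            // With a 5.4.0 client this sampling step can time out
            // (ReceiveTimeoutTransportException on cluster:monitor/state) until the fix lands.
            System.out.println("connected nodes: " + client.connectedNodes());
        }
    }
}
```

The workarounds mentioned in the thread are to set `client.transport.sniff` to false or to keep the client on 5.3.x until the fix is released.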
{ "body": "With the current implementation, `SniffNodesSampler` might close the\r\ncurrent connection right after a request is sent but before the response\r\nis correctly handled. This causes to timeouts in the transport client\r\nwhen the sniffing is activated in all versions since #22828.\r\n\r\ncloses #24575\r\ncloses #24557", "number": 24632, "review_comments": [ { "body": "we also need to close the connection in `public void onFailure(Exception e) {` since we might get rejected or something like this.", "created_at": "2017-05-12T06:02:29Z" }, { "body": "this is unrelated?", "created_at": "2017-05-12T06:02:38Z" }, { "body": "this is unrelated?", "created_at": "2017-05-12T06:02:42Z" }, { "body": "testing will be tricky but doable. I have some ideas here similar to what I did on `RemoteClusterConnectionTests` where we basically mock the calls to clusterstate and return a pre-build state but we can also put some sleeps into it.", "created_at": "2017-05-12T06:04:13Z" }, { "body": "maybe we should unify the `latch.countDown()` and `closeConnection()` into a single method called \"onDone\" on the AbstractRunnable that everyone calls? this it's less trappy and people wouldn't forget to do one but not the other.", "created_at": "2017-05-12T09:44:42Z" }, { "body": "I wrote a test which would have failed before the fix. That would be great if you can have a look.", "created_at": "2017-05-12T12:46:22Z" }, { "body": "Yes, this should not have been commited, thanks.", "created_at": "2017-05-12T12:46:42Z" }, { "body": "can we call the latch in a finally block just to be absolutely sure", "created_at": "2017-05-12T14:27:15Z" }, { "body": "Sure", "created_at": "2017-05-12T14:28:48Z" } ], "title": "SniffNodesSampler should close connection after handling responses" }
{ "commits": [ { "message": "SniffNodesSampler should close connection after handling responses\n\nWith the current implementation, SniffNodesSampler might close the\ncurrent connection right after a request is sent but before the response\nis correctly handled. This causes to timeouts in the transport client\nwhen the sniffing is activated.\n\ncloses #24575\ncloses #24557" }, { "message": "Apply feedback" }, { "message": "Close connection before counting down the latch" }, { "message": "add finally block" } ], "files": [ { "diff": "@@ -469,14 +469,17 @@ protected void doSample() {\n */\n Transport.Connection connectionToClose = null;\n \n- @Override\n- public void onAfter() {\n- IOUtils.closeWhileHandlingException(connectionToClose);\n+ void onDone() {\n+ try {\n+ IOUtils.closeWhileHandlingException(connectionToClose);\n+ } finally {\n+ latch.countDown();\n+ }\n }\n \n @Override\n public void onFailure(Exception e) {\n- latch.countDown();\n+ onDone();\n if (e instanceof ConnectTransportException) {\n logger.debug((Supplier<?>)\n () -> new ParameterizedMessage(\"failed to connect to node [{}], ignoring...\", nodeToPing), e);\n@@ -522,7 +525,7 @@ public String executor() {\n @Override\n public void handleResponse(ClusterStateResponse response) {\n clusterStateResponses.put(nodeToPing, response);\n- latch.countDown();\n+ onDone();\n }\n \n @Override\n@@ -532,9 +535,8 @@ public void handleException(TransportException e) {\n \"failed to get local cluster state for {}, disconnecting...\", nodeToPing), e);\n try {\n hostFailureListener.onNodeDisconnected(nodeToPing, e);\n- }\n- finally {\n- latch.countDown();\n+ } finally {\n+ onDone();\n }\n }\n });", "filename": "core/src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java", "status": "modified" }, { "diff": "@@ -19,41 +19,54 @@\n \n package org.elasticsearch.client.transport;\n \n-import java.io.Closeable;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.concurrent.CountDownLatch;\n-import java.util.concurrent.TimeUnit;\n-import java.util.concurrent.atomic.AtomicInteger;\n-import java.util.concurrent.atomic.AtomicReference;\n-\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.node.liveness.LivenessResponse;\n import org.elasticsearch.action.admin.cluster.node.liveness.TransportLivenessAction;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateAction;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.TransportAddress;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.node.Node;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.threadpool.TestThreadPool;\n import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.ConnectionProfile;\n import org.elasticsearch.transport.Transport;\n+import org.elasticsearch.transport.TransportChannel;\n 
import org.elasticsearch.transport.TransportException;\n import org.elasticsearch.transport.TransportInterceptor;\n import org.elasticsearch.transport.TransportRequest;\n+import org.elasticsearch.transport.TransportRequestHandler;\n import org.elasticsearch.transport.TransportRequestOptions;\n import org.elasticsearch.transport.TransportResponse;\n import org.elasticsearch.transport.TransportResponseHandler;\n import org.elasticsearch.transport.TransportService;\n import org.hamcrest.CustomMatcher;\n \n+import java.io.Closeable;\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static org.elasticsearch.test.transport.MockTransportService.createNewService;\n import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.CoreMatchers.everyItem;\n import static org.hamcrest.CoreMatchers.hasItem;\n@@ -322,6 +335,157 @@ public boolean matches(Object item) {\n }\n }\n \n+ public void testSniffNodesSamplerClosesConnections() throws Exception {\n+ final TestThreadPool threadPool = new TestThreadPool(\"testSniffNodesSamplerClosesConnections\");\n+\n+ Settings remoteSettings = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), \"remote\").build();\n+ try (MockTransportService remoteService = createNewService(remoteSettings, Version.CURRENT, threadPool, null)) {\n+ final MockHandler handler = new MockHandler(remoteService);\n+ remoteService.registerRequestHandler(ClusterStateAction.NAME, ClusterStateRequest::new, ThreadPool.Names.SAME, handler);\n+ remoteService.start();\n+ remoteService.acceptIncomingRequests();\n+\n+ Settings clientSettings = Settings.builder()\n+ .put(TransportClient.CLIENT_TRANSPORT_SNIFF.getKey(), true)\n+ .put(TransportClient.CLIENT_TRANSPORT_PING_TIMEOUT.getKey(), TimeValue.timeValueSeconds(1))\n+ .put(TransportClient.CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL.getKey(), TimeValue.timeValueSeconds(30))\n+ .build();\n+\n+ try (MockTransportService clientService = createNewService(clientSettings, Version.CURRENT, threadPool, null)) {\n+ final List<MockConnection> establishedConnections = new CopyOnWriteArrayList<>();\n+ final List<MockConnection> reusedConnections = new CopyOnWriteArrayList<>();\n+\n+ clientService.addDelegate(remoteService, new MockTransportService.DelegateTransport(clientService.original()) {\n+ @Override\n+ public Connection openConnection(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n+ MockConnection connection = new MockConnection(super.openConnection(node, profile));\n+ establishedConnections.add(connection);\n+ return connection;\n+ }\n+\n+ @Override\n+ public Connection getConnection(DiscoveryNode node) {\n+ MockConnection connection = new MockConnection(super.getConnection(node));\n+ reusedConnections.add(connection);\n+ return connection;\n+ }\n+ });\n+\n+ clientService.start();\n+ clientService.acceptIncomingRequests();\n+\n+ try (TransportClientNodesService transportClientNodesService =\n+ new TransportClientNodesService(clientSettings, clientService, threadPool, (a, b) -> {})) {\n+ assertEquals(0, transportClientNodesService.connectedNodes().size());\n+ assertEquals(0, 
establishedConnections.size());\n+ assertEquals(0, reusedConnections.size());\n+\n+ transportClientNodesService.addTransportAddresses(remoteService.getLocalDiscoNode().getAddress());\n+ assertEquals(1, transportClientNodesService.connectedNodes().size());\n+ assertClosedConnections(establishedConnections, 1);\n+\n+ transportClientNodesService.doSample();\n+ assertClosedConnections(establishedConnections, 2);\n+ assertOpenConnections(reusedConnections, 1);\n+\n+ handler.blockRequest();\n+ Thread thread = new Thread(transportClientNodesService::doSample);\n+ thread.start();\n+\n+ assertBusy(() -> assertEquals(3, establishedConnections.size()));\n+ assertFalse(\"Temporary ping connection must be opened\", establishedConnections.get(2).isClosed());\n+\n+ handler.releaseRequest();\n+ thread.join();\n+\n+ assertClosedConnections(establishedConnections, 3);\n+ }\n+ }\n+ } finally {\n+ terminate(threadPool);\n+ }\n+ }\n+\n+ private void assertClosedConnections(final List<MockConnection> connections, final int size) {\n+ assertEquals(\"Expecting \" + size + \" closed connections but got \" + connections.size(), size, connections.size());\n+ connections.forEach(c -> assertConnection(c, true));\n+ }\n+\n+ private void assertOpenConnections(final List<MockConnection> connections, final int size) {\n+ assertEquals(\"Expecting \" + size + \" open connections but got \" + connections.size(), size, connections.size());\n+ connections.forEach(c -> assertConnection(c, false));\n+ }\n+\n+ private static void assertConnection(final MockConnection connection, final boolean closed) {\n+ assertEquals(\"Connection [\" + connection + \"] must be \" + (closed ? \"closed\" : \"open\"), closed, connection.isClosed());\n+ }\n+\n+ class MockConnection implements Transport.Connection {\n+ private final AtomicBoolean closed = new AtomicBoolean(false);\n+ private final Transport.Connection connection;\n+\n+ private MockConnection(Transport.Connection connection) {\n+ this.connection = connection;\n+ }\n+\n+ @Override\n+ public DiscoveryNode getNode() {\n+ return connection.getNode();\n+ }\n+\n+ @Override\n+ public Version getVersion() {\n+ return connection.getVersion();\n+ }\n+\n+ @Override\n+ public void sendRequest(long requestId, String action, TransportRequest request, TransportRequestOptions options)\n+ throws IOException, TransportException {\n+ connection.sendRequest(requestId, action, request, options);\n+ }\n+\n+ @Override\n+ public void close() throws IOException {\n+ if (closed.compareAndSet(false, true)) {\n+ connection.close();\n+ }\n+ }\n+\n+ boolean isClosed() {\n+ return closed.get();\n+ }\n+ }\n+\n+ class MockHandler implements TransportRequestHandler<ClusterStateRequest> {\n+ private final AtomicBoolean block = new AtomicBoolean(false);\n+ private final CountDownLatch release = new CountDownLatch(1);\n+ private final MockTransportService transportService;\n+\n+ MockHandler(MockTransportService transportService) {\n+ this.transportService = transportService;\n+ }\n+\n+ @Override\n+ public void messageReceived(ClusterStateRequest request, TransportChannel channel) throws Exception {\n+ if (block.get()) {\n+ release.await();\n+ return;\n+ }\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder().add(transportService.getLocalDiscoNode()).build();\n+ ClusterState build = ClusterState.builder(ClusterName.DEFAULT).nodes(discoveryNodes).build();\n+ channel.sendResponse(new ClusterStateResponse(ClusterName.DEFAULT, build, 0L));\n+ }\n+\n+ void blockRequest() {\n+ if (block.compareAndSet(false, true) == 
false) {\n+ throw new AssertionError(\"Request handler is already marked as blocking\");\n+ }\n+ }\n+ void releaseRequest() {\n+ release.countDown();\n+ }\n+ }\n+\n public static class TestRequest extends TransportRequest {\n \n }", "filename": "core/src/test/java/org/elasticsearch/client/transport/TransportClientNodesServiceTests.java", "status": "modified" } ] }
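The heart of the fix above is the ordering in `onDone()`: the temporary ping connection is closed only after `handleResponse`/`handleException` has run, and the latch is counted down in a `finally` so a failed close can never hang the sampler. Below is a stripped-down, hypothetical illustration of that ordering; it is not the real `TransportClientNodesService` code, just the pattern.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of the onDone() pattern from the diff above: handle the
// response first, then close the per-ping connection, then release the latch.
final class PingTask {

    private final Closeable connectionToClose;
    private final CountDownLatch latch;

    PingTask(Closeable connectionToClose, CountDownLatch latch) {
        this.connectionToClose = connectionToClose;
        this.latch = latch;
    }

    void handleResponse(String clusterStateResponse) {
        System.out.println("sampled: " + clusterStateResponse);
        onDone();
    }

    void handleException(Exception e) {
        try {
            System.err.println("ping failed: " + e.getMessage());
        } finally {
            onDone();
        }
    }

    // Close the connection and count down the latch exactly once per ping,
    // and only after the handler has run -- closing earlier (as the old
    // onAfter() did) could cut off the in-flight cluster state response.
    private void onDone() {
        try {
            connectionToClose.close();
        } catch (IOException e) {
            // swallow, mirroring IOUtils.closeWhileHandlingException
        } finally {
            latch.countDown();
        }
    }
}
```

Previously the close could race the response handling, which is what surfaced as the `ReceiveTimeoutTransportException` in the linked issues once sniffing was enabled.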
{ "body": " **Elasticsearch version**:\r\n5.4.0\r\n\r\n**Plugins installed**:\r\nNode\r\n\r\n**JVM version**:\r\n1.8.0_102\r\n\r\n**OS version**:\r\nLinux globevm 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nI'm using Elasticsearch 5.4.0 with Transport Client with the following problems:\r\n - you must launch the application several times before it can connect to the cluster.\r\n - when connection is established, after a cluster restart, the connection is no more recovered, with this stack:\r\n\r\n```\r\nNoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{s-HKw1m4S9aMgCkx5iBuYg}{192.168.203.128}{192.168.203.128:9500}]]\r\nat org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:348)\r\nat org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:246)\r\nat org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)\r\nat org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)\r\nat org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)\r\nat org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:730)\r\nat org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)\r\nat org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)\r\nat org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:69)\r\n```\r\n\r\nThe Transport Client is setting as following:\r\n```\r\n Settings.Builder settingsBuilder = Settings.builder();\r\n\r\n\t\tsettingsBuilder.put(\"cluster.name\", \"globevmes5\");\r\n\t\tsettingsBuilder.put(\"client.transport.sniff\", true);\r\n\t\t \t\r\n\t\tclient = new PreBuiltTransportClient(settingsBuilder.build());\r\n\t\ttry {\r\n\t\t\tclient.addTransportAddress(new \r\n InetSocketTransportAddress(InetAddress.getByName(\"192.168.203.128\"), 9500));\r\n\t\t\t\r\n\t\t} catch (Exception e) {\r\n\t\t\tSystem.out.println(e.getMessage());\t\r\n\t\t}\r\n```\r\n\r\nElastic node configuration:\r\n - network.host: 192.168.203.128\r\n - http.port: 9400\r\n - transport.profiles.default.port: 9500-9600\r\n\r\nWith previous Elasticsearch 5.3.2 it worked fine.\r\nSetting \"client.transport.sniff\" to false works fine.\r\n\r\n**Provide logs**:\r\nElastic node log:\r\n\r\n```\r\n[2017-05-09T11:14:24,336][WARN ][o.e.b.Natives ] unable to load JNA native support library, native methods will be disabled.\r\njava.lang.UnsatisfiedLinkError: /tmp/jna--1077556979/jna7634687564598757394.tmp: /lib64/libc.so.6: version `GLIBC_2.7' not found (required by /tmp/jna--1077556979/jna7634687564598757394.tmp)\r\n at java.lang.ClassLoader$NativeLibrary.load(Native Method) ~[?:1.8.0_102]\r\n at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941) ~[?:1.8.0_102]\r\n at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824) ~[?:1.8.0_102]\r\n at java.lang.Runtime.load0(Runtime.java:809) ~[?:1.8.0_102]\r\n at java.lang.System.load(System.java:1086) ~[?:1.8.0_102]\r\n at com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:947) ~[jna-4.4.0.jar:4.4.0 (b0)]\r\n at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:922) ~[jna-4.4.0.jar:4.4.0 (b0)]\r\n at com.sun.jna.Native.<clinit>(Native.java:190) ~[jna-4.4.0.jar:4.4.0 (b0)]\r\n at 
java.lang.Class.forName0(Native Method) ~[?:1.8.0_102]\r\n at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_102]\r\n at org.elasticsearch.bootstrap.Natives.<clinit>(Natives.java:45) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:204) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.4.0.jar:5.4.0]\r\n[2017-05-09T11:14:24,342][WARN ][o.e.b.Natives ] cannot check if running as root because JNA is not available\r\n[2017-05-09T11:14:24,342][WARN ][o.e.b.Natives ] cannot register console handler because JNA is not available\r\n[2017-05-09T11:14:24,344][WARN ][o.e.b.Natives ] cannot getrlimit RLIMIT_NPROC because JNA is not available\r\n[2017-05-09T11:14:24,344][WARN ][o.e.b.Natives ] cannot getrlimit RLIMIT_AS beacuse JNA is not available\r\n[2017-05-09T11:14:24,493][INFO ][o.e.n.Node ] [globevmes5-node] initializing ...\r\n[2017-05-09T11:14:24,615][INFO ][o.e.e.NodeEnvironment ] [globevmes5-node] using [1] data paths, mounts [[/methode (/dev/mapper/VolGroup01-LogVol02)]], net usable_space [8.1gb], net total_space [72.8gb], spins? 
[possibly], types [ext3]\r\n[2017-05-09T11:14:24,615][INFO ][o.e.e.NodeEnvironment ] [globevmes5-node] heap size [1007.3mb], compressed ordinary object pointers [true]\r\n[2017-05-09T11:14:24,659][INFO ][o.e.n.Node ] [globevmes5-node] node name [globevmes5-node], node ID [M1_iHcSKRX6wHkD_Va0uDg]\r\n[2017-05-09T11:14:24,659][INFO ][o.e.n.Node ] [globevmes5-node] version[5.4.0], pid[31204], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Linux/2.6.18-194.el5/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_102/25.102-b14]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [aggs-matrix-stats]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [ingest-common]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-expression]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-groovy]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-mustache]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [lang-painless]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [percolator]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [reindex]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [transport-netty3]\r\n[2017-05-09T11:14:26,981][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded module [transport-netty4]\r\n[2017-05-09T11:14:26,982][INFO ][o.e.p.PluginsService ] [globevmes5-node] loaded plugin [eom-elasticsearch-plugin]\r\n[2017-05-09T11:14:28,406][INFO ][o.e.d.DiscoveryModule ] [globevmes5-node] using discovery type [zen]\r\n[2017-05-09T11:14:28,972][INFO ][o.e.n.Node ] [globevmes5-node] initialized\r\n[2017-05-09T11:14:28,972][INFO ][o.e.n.Node ] [globevmes5-node] starting ...\r\n[2017-05-09T11:14:29,100][INFO ][o.e.t.TransportService ] [globevmes5-node] publish_address {192.168.203.128:9500}, bound_addresses {192.168.203.128:9500}\r\n[2017-05-09T11:14:29,106][INFO ][o.e.b.BootstrapChecks ] [globevmes5-node] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks\r\n[2017-05-09T11:14:32,227][INFO ][o.e.c.s.ClusterService ] [globevmes5-node] new_master {globevmes5-node}{M1_iHcSKRX6wHkD_Va0uDg}{j8GR_IGFSfW2y502jd-SKA}{192.168.203.128}{192.168.203.128:9500}, reason: zen-disco-elected-as-master ([0] nodes joined)\r\n[2017-05-09T11:14:32,379][INFO ][o.e.h.n.Netty4HttpServerTransport] [globevmes5-node] publish_address {192.168.203.128:9400}, bound_addresses {192.168.203.128:9400}\r\n[2017-05-09T11:14:32,386][INFO ][o.e.n.Node ] [globevmes5-node] started\r\n[2017-05-09T11:14:32,590][INFO ][o.e.g.GatewayService ] [globevmes5-node] recovered [6] indices into cluster_state\r\n[2017-05-09T11:14:33,173][INFO ][o.e.c.r.a.AllocationService] [globevmes5-node] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).\r\n[2017-05-09T11:15:02,357][INFO ][o.e.c.r.a.DiskThresholdMonitor] [globevmes5-node] low disk watermark [85%] exceeded on [M1_iHcSKRX6wHkD_Va0uDg][globevmes5-node][/methode/meth01/mnt/elasticsearch-5.4.0/data/nodes/0] free: 8.1gb[11.1%], replicas will not be assigned to this node \r\n```\r\n\r\n", "comments": [ { "body": "It looks like a bug to me, TransportClient's cluster state requests timed out in my local tests. 
It seems like some requests hang out, maybe because of a concurrent disconnection or a Netty issue. @jasontedor or @bleskes can you have a look?", "created_at": "2017-05-11T10:34:40Z" } ], "number": 24557, "title": "Elasticsearch Transport Client fails to recovery connection after cluster restart" }
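One way to make the reported behaviour visible is to poll cluster health around a full cluster restart and watch whether the sniffing client ever regains its node list. The helper below is hypothetical and not part of the Elasticsearch API; the retry count and sleep interval are arbitrary choices for illustration.

```java
import java.util.concurrent.TimeUnit;

import org.elasticsearch.client.transport.NoNodeAvailableException;
import org.elasticsearch.client.transport.TransportClient;

// Hypothetical probe: poll cluster health after a full cluster restart and log
// whether the client re-establishes its connected node list.
final class ClusterRecoveryProbe {

    static void waitForRecovery(TransportClient client, int attempts) throws InterruptedException {
        for (int i = 1; i <= attempts; i++) {
            try {
                String status = client.admin().cluster().prepareHealth().get().getStatus().name();
                System.out.println("attempt " + i + ": cluster is " + status
                    + ", connected nodes " + client.connectedNodes().size());
                return;
            } catch (NoNodeAvailableException e) {
                // In the reported scenario a 5.4.0 sniffing client can stay in this
                // state indefinitely; with sniffing disabled it recovers.
                System.out.println("attempt " + i + ": no nodes available yet");
                TimeUnit.SECONDS.sleep(5);
            }
        }
    }
}
```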
{ "body": "With the current implementation, `SniffNodesSampler` might close the\r\ncurrent connection right after a request is sent but before the response\r\nis correctly handled. This causes to timeouts in the transport client\r\nwhen the sniffing is activated in all versions since #22828.\r\n\r\ncloses #24575\r\ncloses #24557", "number": 24632, "review_comments": [ { "body": "we also need to close the connection in `public void onFailure(Exception e) {` since we might get rejected or something like this.", "created_at": "2017-05-12T06:02:29Z" }, { "body": "this is unrelated?", "created_at": "2017-05-12T06:02:38Z" }, { "body": "this is unrelated?", "created_at": "2017-05-12T06:02:42Z" }, { "body": "testing will be tricky but doable. I have some ideas here similar to what I did on `RemoteClusterConnectionTests` where we basically mock the calls to clusterstate and return a pre-build state but we can also put some sleeps into it.", "created_at": "2017-05-12T06:04:13Z" }, { "body": "maybe we should unify the `latch.countDown()` and `closeConnection()` into a single method called \"onDone\" on the AbstractRunnable that everyone calls? this it's less trappy and people wouldn't forget to do one but not the other.", "created_at": "2017-05-12T09:44:42Z" }, { "body": "I wrote a test which would have failed before the fix. That would be great if you can have a look.", "created_at": "2017-05-12T12:46:22Z" }, { "body": "Yes, this should not have been commited, thanks.", "created_at": "2017-05-12T12:46:42Z" }, { "body": "can we call the latch in a finally block just to be absolutely sure", "created_at": "2017-05-12T14:27:15Z" }, { "body": "Sure", "created_at": "2017-05-12T14:28:48Z" } ], "title": "SniffNodesSampler should close connection after handling responses" }
{ "commits": [ { "message": "SniffNodesSampler should close connection after handling responses\n\nWith the current implementation, SniffNodesSampler might close the\ncurrent connection right after a request is sent but before the response\nis correctly handled. This causes to timeouts in the transport client\nwhen the sniffing is activated.\n\ncloses #24575\ncloses #24557" }, { "message": "Apply feedback" }, { "message": "Close connection before counting down the latch" }, { "message": "add finally block" } ], "files": [ { "diff": "@@ -469,14 +469,17 @@ protected void doSample() {\n */\n Transport.Connection connectionToClose = null;\n \n- @Override\n- public void onAfter() {\n- IOUtils.closeWhileHandlingException(connectionToClose);\n+ void onDone() {\n+ try {\n+ IOUtils.closeWhileHandlingException(connectionToClose);\n+ } finally {\n+ latch.countDown();\n+ }\n }\n \n @Override\n public void onFailure(Exception e) {\n- latch.countDown();\n+ onDone();\n if (e instanceof ConnectTransportException) {\n logger.debug((Supplier<?>)\n () -> new ParameterizedMessage(\"failed to connect to node [{}], ignoring...\", nodeToPing), e);\n@@ -522,7 +525,7 @@ public String executor() {\n @Override\n public void handleResponse(ClusterStateResponse response) {\n clusterStateResponses.put(nodeToPing, response);\n- latch.countDown();\n+ onDone();\n }\n \n @Override\n@@ -532,9 +535,8 @@ public void handleException(TransportException e) {\n \"failed to get local cluster state for {}, disconnecting...\", nodeToPing), e);\n try {\n hostFailureListener.onNodeDisconnected(nodeToPing, e);\n- }\n- finally {\n- latch.countDown();\n+ } finally {\n+ onDone();\n }\n }\n });", "filename": "core/src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java", "status": "modified" }, { "diff": "@@ -19,41 +19,54 @@\n \n package org.elasticsearch.client.transport;\n \n-import java.io.Closeable;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.concurrent.CountDownLatch;\n-import java.util.concurrent.TimeUnit;\n-import java.util.concurrent.atomic.AtomicInteger;\n-import java.util.concurrent.atomic.AtomicReference;\n-\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.node.liveness.LivenessResponse;\n import org.elasticsearch.action.admin.cluster.node.liveness.TransportLivenessAction;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateAction;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.TransportAddress;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.node.Node;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.threadpool.TestThreadPool;\n import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.ConnectionProfile;\n import org.elasticsearch.transport.Transport;\n+import org.elasticsearch.transport.TransportChannel;\n 
import org.elasticsearch.transport.TransportException;\n import org.elasticsearch.transport.TransportInterceptor;\n import org.elasticsearch.transport.TransportRequest;\n+import org.elasticsearch.transport.TransportRequestHandler;\n import org.elasticsearch.transport.TransportRequestOptions;\n import org.elasticsearch.transport.TransportResponse;\n import org.elasticsearch.transport.TransportResponseHandler;\n import org.elasticsearch.transport.TransportService;\n import org.hamcrest.CustomMatcher;\n \n+import java.io.Closeable;\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static org.elasticsearch.test.transport.MockTransportService.createNewService;\n import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.CoreMatchers.everyItem;\n import static org.hamcrest.CoreMatchers.hasItem;\n@@ -322,6 +335,157 @@ public boolean matches(Object item) {\n }\n }\n \n+ public void testSniffNodesSamplerClosesConnections() throws Exception {\n+ final TestThreadPool threadPool = new TestThreadPool(\"testSniffNodesSamplerClosesConnections\");\n+\n+ Settings remoteSettings = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), \"remote\").build();\n+ try (MockTransportService remoteService = createNewService(remoteSettings, Version.CURRENT, threadPool, null)) {\n+ final MockHandler handler = new MockHandler(remoteService);\n+ remoteService.registerRequestHandler(ClusterStateAction.NAME, ClusterStateRequest::new, ThreadPool.Names.SAME, handler);\n+ remoteService.start();\n+ remoteService.acceptIncomingRequests();\n+\n+ Settings clientSettings = Settings.builder()\n+ .put(TransportClient.CLIENT_TRANSPORT_SNIFF.getKey(), true)\n+ .put(TransportClient.CLIENT_TRANSPORT_PING_TIMEOUT.getKey(), TimeValue.timeValueSeconds(1))\n+ .put(TransportClient.CLIENT_TRANSPORT_NODES_SAMPLER_INTERVAL.getKey(), TimeValue.timeValueSeconds(30))\n+ .build();\n+\n+ try (MockTransportService clientService = createNewService(clientSettings, Version.CURRENT, threadPool, null)) {\n+ final List<MockConnection> establishedConnections = new CopyOnWriteArrayList<>();\n+ final List<MockConnection> reusedConnections = new CopyOnWriteArrayList<>();\n+\n+ clientService.addDelegate(remoteService, new MockTransportService.DelegateTransport(clientService.original()) {\n+ @Override\n+ public Connection openConnection(DiscoveryNode node, ConnectionProfile profile) throws IOException {\n+ MockConnection connection = new MockConnection(super.openConnection(node, profile));\n+ establishedConnections.add(connection);\n+ return connection;\n+ }\n+\n+ @Override\n+ public Connection getConnection(DiscoveryNode node) {\n+ MockConnection connection = new MockConnection(super.getConnection(node));\n+ reusedConnections.add(connection);\n+ return connection;\n+ }\n+ });\n+\n+ clientService.start();\n+ clientService.acceptIncomingRequests();\n+\n+ try (TransportClientNodesService transportClientNodesService =\n+ new TransportClientNodesService(clientSettings, clientService, threadPool, (a, b) -> {})) {\n+ assertEquals(0, transportClientNodesService.connectedNodes().size());\n+ assertEquals(0, 
establishedConnections.size());\n+ assertEquals(0, reusedConnections.size());\n+\n+ transportClientNodesService.addTransportAddresses(remoteService.getLocalDiscoNode().getAddress());\n+ assertEquals(1, transportClientNodesService.connectedNodes().size());\n+ assertClosedConnections(establishedConnections, 1);\n+\n+ transportClientNodesService.doSample();\n+ assertClosedConnections(establishedConnections, 2);\n+ assertOpenConnections(reusedConnections, 1);\n+\n+ handler.blockRequest();\n+ Thread thread = new Thread(transportClientNodesService::doSample);\n+ thread.start();\n+\n+ assertBusy(() -> assertEquals(3, establishedConnections.size()));\n+ assertFalse(\"Temporary ping connection must be opened\", establishedConnections.get(2).isClosed());\n+\n+ handler.releaseRequest();\n+ thread.join();\n+\n+ assertClosedConnections(establishedConnections, 3);\n+ }\n+ }\n+ } finally {\n+ terminate(threadPool);\n+ }\n+ }\n+\n+ private void assertClosedConnections(final List<MockConnection> connections, final int size) {\n+ assertEquals(\"Expecting \" + size + \" closed connections but got \" + connections.size(), size, connections.size());\n+ connections.forEach(c -> assertConnection(c, true));\n+ }\n+\n+ private void assertOpenConnections(final List<MockConnection> connections, final int size) {\n+ assertEquals(\"Expecting \" + size + \" open connections but got \" + connections.size(), size, connections.size());\n+ connections.forEach(c -> assertConnection(c, false));\n+ }\n+\n+ private static void assertConnection(final MockConnection connection, final boolean closed) {\n+ assertEquals(\"Connection [\" + connection + \"] must be \" + (closed ? \"closed\" : \"open\"), closed, connection.isClosed());\n+ }\n+\n+ class MockConnection implements Transport.Connection {\n+ private final AtomicBoolean closed = new AtomicBoolean(false);\n+ private final Transport.Connection connection;\n+\n+ private MockConnection(Transport.Connection connection) {\n+ this.connection = connection;\n+ }\n+\n+ @Override\n+ public DiscoveryNode getNode() {\n+ return connection.getNode();\n+ }\n+\n+ @Override\n+ public Version getVersion() {\n+ return connection.getVersion();\n+ }\n+\n+ @Override\n+ public void sendRequest(long requestId, String action, TransportRequest request, TransportRequestOptions options)\n+ throws IOException, TransportException {\n+ connection.sendRequest(requestId, action, request, options);\n+ }\n+\n+ @Override\n+ public void close() throws IOException {\n+ if (closed.compareAndSet(false, true)) {\n+ connection.close();\n+ }\n+ }\n+\n+ boolean isClosed() {\n+ return closed.get();\n+ }\n+ }\n+\n+ class MockHandler implements TransportRequestHandler<ClusterStateRequest> {\n+ private final AtomicBoolean block = new AtomicBoolean(false);\n+ private final CountDownLatch release = new CountDownLatch(1);\n+ private final MockTransportService transportService;\n+\n+ MockHandler(MockTransportService transportService) {\n+ this.transportService = transportService;\n+ }\n+\n+ @Override\n+ public void messageReceived(ClusterStateRequest request, TransportChannel channel) throws Exception {\n+ if (block.get()) {\n+ release.await();\n+ return;\n+ }\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder().add(transportService.getLocalDiscoNode()).build();\n+ ClusterState build = ClusterState.builder(ClusterName.DEFAULT).nodes(discoveryNodes).build();\n+ channel.sendResponse(new ClusterStateResponse(ClusterName.DEFAULT, build, 0L));\n+ }\n+\n+ void blockRequest() {\n+ if (block.compareAndSet(false, true) == 
false) {\n+ throw new AssertionError(\"Request handler is already marked as blocking\");\n+ }\n+ }\n+ void releaseRequest() {\n+ release.countDown();\n+ }\n+ }\n+\n public static class TestRequest extends TransportRequest {\n \n }", "filename": "core/src/test/java/org/elasticsearch/client/transport/TransportClientNodesServiceTests.java", "status": "modified" } ] }
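On the test side, the interesting technique is wrapping every opened connection so the test can later assert that it was closed exactly once (the `MockConnection` class in the diff, with its `AtomicBoolean`). The sketch below shows that wrapper pattern in a self-contained form with illustrative names; the real test builds on `MockTransportService` and `Transport.Connection`.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the test technique above: record every opened
// connection and later assert that each one has been closed.
final class TrackedConnection implements Closeable {

    private final Closeable delegate;
    private final AtomicBoolean closed = new AtomicBoolean(false);

    TrackedConnection(Closeable delegate) {
        this.delegate = delegate;
    }

    @Override
    public void close() throws IOException {
        // compareAndSet guards against double-closing the underlying connection.
        if (closed.compareAndSet(false, true)) {
            delegate.close();
        }
    }

    boolean isClosed() {
        return closed.get();
    }

    public static void main(String[] args) throws IOException {
        List<TrackedConnection> established = new CopyOnWriteArrayList<>();
        TrackedConnection c = new TrackedConnection(() -> System.out.println("real close"));
        established.add(c);

        c.close();
        // The kind of assertion the test makes after each sampling round:
        established.forEach(conn -> {
            if (!conn.isClosed()) {
                throw new AssertionError("connection leaked");
            }
        });
        System.out.println("all connections closed");
    }
}
```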
{ "body": "**Elasticsearch version**: 5.0.2 - 5.3.2\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8.0_131\r\n\r\n**OS version**: Ubuntu 14.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nI'm trying to migrate a cluster from 2.4.5 to 5.x; executing a `_percolate` query on one of my index results in an internal server error with reason `query must be rewritten first`\r\n\r\n**Steps to reproduce**:\r\nI've created the index on ES2 with these settings:\r\n\r\n```\r\n{\r\n \"quote_application_user\" : {\r\n \"settings\" : {\r\n \"index\" : {\r\n \"creation_date\" : \"1491213458276\",\r\n \"number_of_shards\" : \"5\",\r\n \"number_of_replicas\" : \"1\",\r\n \"uuid\" : \"wpk9Ct2ZTLew82j0eagwNQ\",\r\n \"version\" : {\r\n \"created\" : \"2010299\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nand this mapping:\r\n\r\n```\r\n{\r\n \"quote_application_user\" : {\r\n \"mappings\" : {\r\n \".percolator\" : {\r\n \"properties\" : {\r\n \"active\" : {\r\n \"type\" : \"boolean\"\r\n },\r\n \"enabled\" : {\r\n \"type\" : \"boolean\"\r\n },\r\n \"query\" : {\r\n \"type\" : \"object\",\r\n \"enabled\" : false\r\n },\r\n \"ranking\" : {\r\n \"type\" : \"float\"\r\n },\r\n \"subscription_application_user_id\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n }\r\n }\r\n },\r\n \"quote\" : {\r\n \"_meta\" : {\r\n \"model\" : \"Entity\\\\Quote\"\r\n },\r\n \"date_detection\" : false,\r\n \"properties\" : {\r\n \"affiliateUser\" : {\r\n \"type\" : \"integer\"\r\n },\r\n \"applicationUser\" : {\r\n \"properties\" : {\r\n \"email\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n },\r\n \"id\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n },\r\n \"phone\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n }\r\n }\r\n },\r\n \"assignedAt\" : {\r\n \"type\" : \"date\",\r\n \"format\" : \"date_time_no_millis\"\r\n },\r\n \"bids\" : {\r\n \"properties\" : {\r\n \"applicationUser\" : {\r\n \"properties\" : {\r\n \"id\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n }\r\n }\r\n },\r\n \"auto_bid\" : {\r\n \"type\" : \"boolean\"\r\n },\r\n \"id\" : {\r\n \"type\" : \"integer\"\r\n },\r\n \"refund_status\" : {\r\n \"type\" : \"integer\"\r\n }\r\n }\r\n },\r\n \"checked\" : {\r\n \"type\" : \"boolean\"\r\n },\r\n \"contactTimes\" : {\r\n \"type\" : \"integer\"\r\n },\r\n \"contactType\" : {\r\n \"type\" : \"integer\"\r\n },\r\n \"coordinates\" : {\r\n \"type\" : \"geo_point\"\r\n },\r\n \"createdAt\" : {\r\n \"type\" : \"date\",\r\n \"format\" : \"date_time_no_millis\"\r\n },\r\n \"createdAt_date\" : {\r\n \"type\" : \"date\",\r\n \"format\" : \"date\"\r\n },\r\n \"expireAt\" : {\r\n \"type\" : \"date\",\r\n \"format\" : \"date_time_no_millis\"\r\n },\r\n \"locality\" : {\r\n \"properties\" : {\r\n \"coordinates\" : {\r\n \"type\" : \"geo_point\"\r\n },\r\n \"id\" : {\r\n \"type\" : \"integer\"\r\n },\r\n \"province\" : {\r\n \"properties\" : {\r\n \"id\" : {\r\n \"type\" : \"integer\"\r\n }\r\n }\r\n },\r\n \"zip\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n }\r\n }\r\n },\r\n \"position\" : {\r\n \"type\" : \"geo_shape\"\r\n },\r\n \"price\" : {\r\n \"type\" : \"float\"\r\n },\r\n \"published\" : {\r\n \"type\" : \"boolean\"\r\n },\r\n \"rating\" : {\r\n \"type\" : \"float\"\r\n },\r\n \"service\" : {\r\n \"properties\" : {\r\n \"id\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n }\r\n }\r\n },\r\n \"serviceData\" : {\r\n \"type\" : 
\"nested\",\r\n \"properties\" : {\r\n \"key\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n },\r\n \"label\" : {\r\n \"type\" : \"string\"\r\n },\r\n \"other\" : {\r\n \"type\" : \"string\"\r\n },\r\n \"type\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n },\r\n \"value\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n },\r\n \"value_as_date\" : {\r\n \"type\" : \"date\",\r\n \"format\" : \"date_time_no_millis\"\r\n }\r\n }\r\n },\r\n \"serviceForm\" : {\r\n \"properties\" : {\r\n \"id\" : {\r\n \"type\" : \"integer\"\r\n }\r\n }\r\n },\r\n \"status\" : {\r\n \"properties\" : {\r\n \"code\" : {\r\n \"type\" : \"string\",\r\n \"index\" : \"not_analyzed\"\r\n },\r\n \"id\" : {\r\n \"type\" : \"integer\"\r\n }\r\n }\r\n },\r\n \"updatedAt\" : {\r\n \"type\" : \"date\",\r\n \"format\" : \"date_time_no_millis\"\r\n },\r\n \"urgency\" : {\r\n \"properties\" : {\r\n \"id\" : {\r\n \"type\" : \"integer\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis is the indexed document:\r\n\r\n```\r\n{\r\n \"_index\" : \"quote_application_user\",\r\n \"_type\" : \"quote\",\r\n \"_id\" : \"146085\",\r\n \"_version\" : 3,\r\n \"found\" : true,\r\n \"_source\" : {\r\n \"createdAt\" : \"2017-04-06T07:01:00+00:00\",\r\n \"updatedAt\" : \"2017-04-06T07:01:12+00:00\",\r\n \"expireAt\" : \"2017-05-06T07:01:00+00:00\",\r\n \"assignedAt\" : null,\r\n \"published\" : true,\r\n \"checked\" : false,\r\n \"price\" : 8,\r\n \"contactType\" : 2,\r\n \"rating\" : 4,\r\n \"service\" : {\r\n \"id\" : 2147\r\n },\r\n \"applicationUser\" : {\r\n \"id\" : \"00000000-0000-0000-0000-000000000001\",\r\n \"email\" : “xxx@gmail.com\",\r\n \"phone\" : \"+39*****\"\r\n },\r\n \"coordinates\" : [\r\n 12.7245045,\r\n 41.9939694\r\n ],\r\n \"position\" : {\r\n \"type\" : \"point\",\r\n \"coordinates\" : [\r\n 12.7245045,\r\n 41.9939694\r\n ]\r\n },\r\n \"status\" : {\r\n \"id\" : 2,\r\n \"code\" : \"invendita\"\r\n },\r\n \"urgency\" : {\r\n \"id\" : 3\r\n },\r\n \"serviceForm\" : {\r\n \"id\" : 21401\r\n },\r\n \"serviceData\" : [\r\n {\r\n \"key\" : \"choice_extended1\",\r\n \"label\" : \"Per quale tipo di evento ti serve?\",\r\n \"type\" : \"choice_extended\",\r\n \"value\" : [\r\n \"key16\"\r\n ],\r\n \"other\" : \"casa\"\r\n },\r\n {\r\n \"key\" : \"date1\",\r\n \"label\" : \"Indica la data dell'evento (anche approssimativa)\",\r\n \"type\" : \"date\",\r\n \"value\" : \"2017-04-06\",\r\n \"value_as_date\" : \"2017-04-06T00:00:00+0000\"\r\n },\r\n {\r\n \"key\" : \"text1\",\r\n \"label\" : \"Circa quante persone attenderanno l'evento?\",\r\n \"type\" : \"text\",\r\n \"value\" : \"5\"\r\n },\r\n {\r\n \"key\" : \"textarea1\",\r\n \"label\" : \"Descrivi il tipo di struttura che ti serve\",\r\n \"type\" : \"text\",\r\n \"value\" : \"Soluzione porticato casa\"\r\n }\r\n ],\r\n \"createdAt_date\" : \"2017-04-06\",\r\n \"bids\" : [ ],\r\n \"contactTimes\" : [ ]\r\n }\r\n}\r\n```\r\n\r\nand this is the query indexed in the percolator:\r\n\r\n```\r\n{\r\n \"_index\" : \"quote_application_user\",\r\n \"_type\" : \".percolator\",\r\n \"_id\" : \"id_1\",\r\n \"_version\" : 2,\r\n \"found\" : true,\r\n \"_source\" : {\r\n \"query\" : {\r\n \"bool\" : {\r\n \"filter\" : [ {\r\n \"terms\" : {\r\n \"service.id\" : [ 289, 311, 312, 313, 314, 315, 316, 317, 1976, 1977, 1978, 1980, 1981, 1983, 1985, 1986, 1988, 1990, 1992, 1994, 1996, 1997, 1999, 2000, 2001, 2002, 2004, 2006, 2007, 2008, 2010, 2012, 2013, 2015, 2016, 2018, 2020, 2023, 2024, 2026, 2027, 2029, 2030, 2032, 
2033, 2035, 2511, 2005, 2082, 2083, 2084, 2085, 2086, 2087, 2088, 2089, 2090, 2091, 2092, 2093, 2094, 2095, 2096, 2097, 2098, 2099, 2100, 2101, 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119, 2120, 2121, 2122, 2123, 2124, 2125, 2126, 2127, 2128, 2129, 2130, 2131, 2132, 2133, 2134, 2135, 2136, 2137, 2138, 2139, 2140, 2142, 2143, 2144, 2145, 2146, 2147, 2154, 2167, 2169, 2171, 2173, 2175, 2177, 2178, 2180, 2182, 2183, 2185, 2187, 2188, 2190, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216, 2217, 2334, 2335, 2336 ]\r\n }\r\n }, {\r\n \"bool\" : {\r\n \"should\" : [ {\r\n \"geo_shape\" : {\r\n \"position\" : {\r\n \"indexed_shape\" : {\r\n \"id\" : \"AVs5qTCObQYo4fl70pBM\",\r\n \"type\" : \"administrative_level_2\",\r\n \"index\" : \"geoshapes\",\r\n \"path\" : \"polygon\"\r\n },\r\n \"relation\" : \"intersects\"\r\n }\r\n }\r\n } ]\r\n }\r\n } ]\r\n }\r\n },\r\n \"subscription_application_user_id\" : \"00000000-0000-0000-0000-000000000002\"\r\n }\r\n}\r\n```\r\n\r\nExecuting\r\n`curl -XGET 'http://localhost:9200/quote_application_user/quote/146085/_percolate' -d '{\"query\":{\"term\":{\"subscription_application_user_id\":{\"value\":\"00000000-0000-0000-0000-000000000002\"}}}}'`\r\n\r\nwill return on ES 2.4.5:\r\n`{\"took\":8,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"total\":1,\"matches\":[{\"_index\":\"quote_application_user\",\"_id\":\"id_1\"}]}`\r\n\r\nand on 5.3.2:\r\n`{\"took\":329,\"_shards\":{\"total\":5,\"successful\":3,\"failed\":2,\"failures\":[{\"shard\":1,\"index\":\"quote_application_user\",\"status\":\"INTERNAL_SERVER_ERROR\",\"reason\":{\"type\":\"unsupported_operation_exception\",\"reason\":\"query must be rewritten first\"}}]},\"total\":0,\"matches\":[]}`\r\n\r\nwhile expecting the same result.\r\nI've tried with 5.0.0 and 5.0.2 version also, with same results.\r\n\r\n**Logs**:\r\nWhen upgrading, the node wrote this into the log:\r\n\r\n```\r\n[2017-05-04T09:57:16,542][WARN ][o.e.c.l.LogConfigurator ] ignoring unsupported logging configuration file [/etc/elasticsearch/logging.yml], logging is configured via [/etc/elasticsearch/log4j2.p\r\nroperties]\r\n[2017-05-04T09:57:16,744][INFO ][o.e.n.Node ] [] initializing ...\r\n[2017-05-04T09:57:16,830][INFO ][o.e.e.NodeEnvironment ] [YBT8YBw] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [26.4gb], net total_space [39.3gb], spins? 
[possibly], types\r\n[ext4]\r\n[2017-05-04T09:57:16,830][INFO ][o.e.e.NodeEnvironment ] [YBT8YBw] heap size [2.9gb], compressed ordinary object pointers [true]\r\n[2017-05-04T09:57:16,924][INFO ][o.e.n.Node ] node name [YBT8YBw] derived from node ID [YBT8YBwDRBu5QYEUbpF7Zw]; set [node.name] to override\r\n[2017-05-04T09:57:16,924][INFO ][o.e.n.Node ] version[5.3.2], pid[6732], build[3068195/2017-04-24T16:15:59.481Z], OS[Linux/3.13.0-117-generic/amd64], JVM[Oracle Corporation/Java HotS\r\npot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]\r\n[2017-05-04T09:57:17,676][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [aggs-matrix-stats]\r\n[2017-05-04T09:57:17,676][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [ingest-common]\r\n[2017-05-04T09:57:17,676][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [lang-expression]\r\n[2017-05-04T09:57:17,676][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [lang-groovy]\r\n[2017-05-04T09:57:17,676][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [lang-mustache]\r\n[2017-05-04T09:57:17,676][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [lang-painless]\r\n[2017-05-04T09:57:17,676][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [percolator]\r\n[2017-05-04T09:57:17,677][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [reindex]\r\n[2017-05-04T09:57:17,677][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [transport-netty3]\r\n[2017-05-04T09:57:17,677][INFO ][o.e.p.PluginsService ] [YBT8YBw] loaded module [transport-netty4]\r\n[2017-05-04T09:57:17,677][INFO ][o.e.p.PluginsService ] [YBT8YBw] no plugins loaded\r\n[2017-05-04T09:57:19,846][INFO ][o.e.c.u.IndexFolderUpgrader] [geoshapes/LhUFlhVuSVCogo46-O4KPA] upgrading [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/geoshapes] to new naming convention\r\n[2017-05-04T09:57:19,847][INFO ][o.e.c.u.IndexFolderUpgrader] [geoshapes/LhUFlhVuSVCogo46-O4KPA] moved from [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/geoshapes] to [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/LhUFlhVuSVCogo46-O4KPA]\r\n[2017-05-04T09:57:19,885][INFO ][o.e.c.u.IndexFolderUpgrader] [quote_application_user/wpk9Ct2ZTLew82j0eagwNQ] upgrading [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/quote_application_user] to new naming convention\r\n[2017-05-04T09:57:19,885][INFO ][o.e.c.u.IndexFolderUpgrader] [quote_application_user/wpk9Ct2ZTLew82j0eagwNQ] moved from [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/quote_application_user] to [/var/lib/elasticsearch/elasticsearch/nodes/0/indices/wpk9Ct2ZTLew82j0eagwNQ]\r\n[2017-05-04T09:57:20,247][INFO ][o.e.n.Node ] initialized\r\n[2017-05-04T09:57:20,247][INFO ][o.e.n.Node ] [YBT8YBw] starting ...\r\n[2017-05-04T09:57:20,484][INFO ][o.e.t.TransportService ] [YBT8YBw] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}\r\n[2017-05-04T09:57:23,563][INFO ][o.e.c.s.ClusterService ] [YBT8YBw] new_master {YBT8YBw}{YBT8YBwDRBu5QYEUbpF7Zw}{jjKMACwFROSwYe-HbgSEdg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)\r\n[2017-05-04T09:57:23,618][INFO ][o.e.h.n.Netty4HttpServerTransport] [YBT8YBw] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}\r\n[2017-05-04T09:57:23,621][INFO ][o.e.n.Node ] [YBT8YBw] started\r\n[2017-05-04T09:57:23,821][INFO ][o.e.g.GatewayService ] [YBT8YBw] recovered [10] indices into cluster_state\r\n[2017-05-04T09:57:24,287][WARN ][o.e.c.m.MetaDataMappingService] [YBT8YBw] [geoshapes] re-syncing mappings with cluster 
state because of types [[administrative_level_3, administrative_level_2, administrative_level_1]]\r\n[2017-05-04T09:57:24,532][WARN ][o.e.c.m.MetaDataMappingService] [YBT8YBw] [quote_application_user] re-syncing mappings with cluster state because of types [[quote, .percolator]]\r\n[2017-05-04T09:57:26,009][INFO ][o.e.c.r.a.AllocationService] [YBT8YBw] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[quote_application_user][4]] ...]).\r\n```\r\n\r\nAnd when executing the query:\r\n\r\n```\r\n[2017-05-04T10:00:02,021][DEBUG][o.e.a.s.TransportSearchAction] [YBT8YBw] [quote_application_user][1], node[YBT8YBwDRBu5QYEUbpF7Zw], [P], s[STARTED], a[id=b9NIbVDrRfK9ICcds-TYIg]: Failed to execut\r\ne [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[quote_application_user], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, ex\r\npand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"query\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"term\" : {\r\n \"subscription_application_user_id\" : {\r\n \"value\" : \"00000000-0000-0000-0000-000000000002\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n }\r\n ],\r\n \"filter\" : [\r\n {\r\n \"percolate\" : {\r\n \"document_type\" : \"quote\",\r\n \"field\" : \"query\",\r\n \"document\" : {\r\n \"createdAt\" : \"2017-04-06T07:01:00+00:00\",\r\n \"updatedAt\" : \"2017-04-06T07:01:12+00:00\",\r\n \"expireAt\" : \"2017-05-06T07:01:00+00:00\",\r\n \"assignedAt\" : null,\r\n \"published\" : true,\r\n \"checked\" : false,\r\n \"price\" : 8,\r\n \"contactType\" : 2,\r\n \"rating\" : 4,\r\n \"service\" : {\r\n \"id\" : 2147\r\n },\r\n \"applicationUser\" : {\r\n \"id\" : \"00000000-0000-0000-0000-000000000001\",\r\n \"email\" : “xxx@gmail.com\",\r\n \"phone\" : \"+39*****”\r\n },\r\n \"coordinates\" : [\r\n 12.7245045,\r\n 41.9939694\r\n ],\r\n \"position\" : {\r\n \"type\" : \"point\",\r\n \"coordinates\" : [\r\n 12.7245045,\r\n 41.9939694\r\n ]\r\n },\r\n \"status\" : {\r\n \"id\" : 2,\r\n \"code\" : \"invendita\"\r\n },\r\n \"urgency\" : {\r\n \"id\" : 3\r\n },\r\n \"serviceForm\" : {\r\n \"id\" : 21401\r\n },\r\n \"serviceData\" : [\r\n {\r\n \"key\" : \"choice_extended1\",\r\n \"label\" : \"Per quale tipo di evento ti serve?\",\r\n \"type\" : \"choice_extended\",\r\n \"value\" : [\r\n \"key16\"\r\n ],\r\n \"other\" : \"casa\"\r\n },\r\n {\r\n \"key\" : \"date1\",\r\n \"label\" : \"Indica la data dell'evento (anche approssimativa)\",\r\n \"type\" : \"date\",\r\n \"value\" : \"2017-04-06\",\r\n \"value_as_date\" : \"2017-04-06T00:00:00+0000\"\r\n },\r\n {\r\n \"key\" : \"text1\",\r\n \"label\" : \"Circa quante persone attenderanno l'evento?\",\r\n \"type\" : \"text\",\r\n \"value\" : \"5\"\r\n },\r\n {\r\n \"key\" : \"textarea1\",\r\n \"label\" : \"Descrivi il tipo di struttura che ti serve\",\r\n \"type\" : \"text\",\r\n \"value\" : \"Soluzione porticato casa\"\r\n }\r\n ],\r\n \"createdAt_date\" : \"2017-04-06\",\r\n \"bids\" : [ ],\r\n \"contactTimes\" : [ ]\r\n },\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n}}] lastShard [true]\r\norg.elasticsearch.transport.RemoteTransportException: [YBT8YBw][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Query Failed [Failed 
to execute main query]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:423) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:108) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:247) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:261) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:331) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:328) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) [elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.2.jar:5.3.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: java.lang.UnsupportedOperationException: query must be rewritten first\r\n at org.elasticsearch.index.query.GeoShapeQueryBuilder.doToQuery(GeoShapeQueryBuilder.java:317) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:96) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.addBooleanClauses(BoolQueryBuilder.java:442) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.doToQuery(BoolQueryBuilder.java:418) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:96) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toFilter(AbstractQueryBuilder.java:118) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.addBooleanClauses(BoolQueryBuilder.java:446) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.doToQuery(BoolQueryBuilder.java:419) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:96) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.percolator.PercolatorFieldMapper.toQuery(PercolatorFieldMapper.java:343) ~[?:?]\r\n at org.elasticsearch.percolator.PercolatorFieldMapper.parseQuery(PercolatorFieldMapper.java:325) ~[?:?]\r\n at org.elasticsearch.percolator.PercolateQueryBuilder.lambda$null$3(PercolateQueryBuilder.java:547) ~[?:?]\r\n at org.elasticsearch.percolator.PercolateQuery$1$2.matchDocId(PercolateQuery.java:170) ~[?:?]\r\n at org.elasticsearch.percolator.PercolateQuery$BaseScorer$1.matches(PercolateQuery.java:256) ~[?:?]\r\n at org.apache.lucene.search.ConjunctionDISI$ConjunctionTwoPhaseIterator.matches(ConjunctionDISI.java:345) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - 
ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:228) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:172) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:397) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:108) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:247) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:261) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:331) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:328) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_131]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_131]\r\n at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]\r\n[2017-05-04T10:00:02,022][DEBUG][o.e.a.s.TransportSearchAction] [YBT8YBw] [quote_application_user][2], node[YBT8YBwDRBu5QYEUbpF7Zw], [P], s[STARTED], a[id=qvQX4P41S_u-ZXtDLw_R_A]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[quote_application_user], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"query\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"term\" : {\r\n \"subscription_application_user_id\" : {\r\n \"value\" : \"00000000-0000-0000-0000-000000000002\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n }\r\n ],\r\n \"filter\" : [\r\n {\r\n \"percolate\" : {\r\n \"document_type\" : \"quote\",\r\n \"field\" : \"query\",\r\n \"document\" : {\r\n \"createdAt\" : \"2017-04-06T07:01:00+00:00\",\r\n \"updatedAt\" : \"2017-04-06T07:01:12+00:00\",\r\n 
\"expireAt\" : \"2017-05-06T07:01:00+00:00\",\r\n \"assignedAt\" : null,\r\n \"published\" : true,\r\n \"checked\" : false,\r\n \"price\" : 8,\r\n \"contactType\" : 2,\r\n \"rating\" : 4,\r\n \"service\" : {\r\n \"id\" : 2147\r\n },\r\n \"applicationUser\" : {\r\n \"id\" : \"00000000-0000-0000-0000-000000000001\",\r\n \"email\" : “xxx@gmail.com\",\r\n \"phone\" : \"+39*****\"\r\n },\r\n \"coordinates\" : [\r\n 12.7245045,\r\n 41.9939694\r\n ],\r\n \"position\" : {\r\n \"type\" : \"point\",\r\n \"coordinates\" : [\r\n 12.7245045,\r\n 41.9939694\r\n ]\r\n },\r\n \"status\" : {\r\n \"id\" : 2,\r\n \"code\" : \"invendita\"\r\n },\r\n \"urgency\" : {\r\n \"id\" : 3\r\n },\r\n \"serviceForm\" : {\r\n \"id\" : 21401\r\n },\r\n \"serviceData\" : [\r\n {\r\n \"key\" : \"choice_extended1\",\r\n \"label\" : \"Per quale tipo di evento ti serve?\",\r\n \"type\" : \"choice_extended\",\r\n \"value\" : [\r\n \"key16\"\r\n ],\r\n \"other\" : \"casa\"\r\n },\r\n {\r\n \"key\" : \"date1\",\r\n \"label\" : \"Indica la data dell'evento (anche approssimativa)\",\r\n \"type\" : \"date\",\r\n \"value\" : \"2017-04-06\",\r\n \"value_as_date\" : \"2017-04-06T00:00:00+0000\"\r\n },\r\n {\r\n \"key\" : \"text1\",\r\n \"label\" : \"Circa quante persone attenderanno l'evento?\",\r\n \"type\" : \"text\",\r\n \"value\" : \"5\"\r\n },\r\n {\r\n \"key\" : \"textarea1\",\r\n \"label\" : \"Descrivi il tipo di struttura che ti serve\",\r\n \"type\" : \"text\",\r\n \"value\" : \"Soluzione porticato casa\"\r\n }\r\n ],\r\n \"createdAt_date\" : \"2017-04-06\",\r\n \"bids\" : [ ],\r\n \"contactTimes\" : [ ]\r\n },\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n}}]\r\norg.elasticsearch.transport.RemoteTransportException: [YBT8YBw][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Query Failed [Failed to execute main query]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:423) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:108) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:247) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:261) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:331) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:328) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) [elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.2.jar:5.3.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n at java.lang.Thread.run(Thread.java:748) 
[?:1.8.0_131]\r\nCaused by: java.lang.UnsupportedOperationException: query must be rewritten first\r\n at org.elasticsearch.index.query.GeoShapeQueryBuilder.doToQuery(GeoShapeQueryBuilder.java:317) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:96) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.addBooleanClauses(BoolQueryBuilder.java:442) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.doToQuery(BoolQueryBuilder.java:418) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:96) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toFilter(AbstractQueryBuilder.java:118) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.addBooleanClauses(BoolQueryBuilder.java:446) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.doToQuery(BoolQueryBuilder.java:419) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:96) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.percolator.PercolatorFieldMapper.toQuery(PercolatorFieldMapper.java:343) ~[?:?]\r\n at org.elasticsearch.percolator.PercolatorFieldMapper.parseQuery(PercolatorFieldMapper.java:325) ~[?:?]\r\n at org.elasticsearch.percolator.PercolateQueryBuilder.lambda$null$3(PercolateQueryBuilder.java:547) ~[?:?]\r\n at org.elasticsearch.percolator.PercolateQuery$1$2.matchDocId(PercolateQuery.java:170) ~[?:?]\r\n at org.elasticsearch.percolator.PercolateQuery$BaseScorer$1.matches(PercolateQuery.java:256) ~[?:?]\r\n at org.apache.lucene.search.ConjunctionDISI$ConjunctionTwoPhaseIterator.matches(ConjunctionDISI.java:345) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:228) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:172) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:397) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:108) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:247) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:261) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:331) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at 
org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:328) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.3.2.jar:5.3.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_131]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_131]\r\n at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]\r\n```\r\n", "comments": [ { "body": "@alekitto Thank you for reporting this issue. This is a migration bug. In 5.0 and later the percolator rewrites the query upon indexing and when percolating expects that queries have been rewritten, so that we don't have to rewrite each time ES percolates. The assumption that a query has been rewritten isn't true when upgrading from 2.4.x and before. This should be fixed.\r\n\r\nWhat you can do as work around is to reindex the problematic queries, that should resolve this issue.", "created_at": "2017-05-11T11:37:12Z" }, { "body": "Closed by #24617", "created_at": "2017-05-26T13:58:37Z" } ], "number": 24485, "title": "Percolate query complains that is not rewritten when upgrading to ES 5.x from 2.x" }
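The workaround suggested in the comments is to reindex the affected percolator queries so that they are parsed, and therefore rewritten, by the 5.x mapper at index time. A minimal sketch of one way to do that — the destination index name is hypothetical, and its mappings would also need the fields that the stored queries reference (the stack trace shows a geo_shape query among them):

```
PUT queries_5x
{
  "mappings": {
    "queries": {
      "properties": {
        "query": { "type": "percolator" }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "quote_application_user", "type": ".percolator" },
  "dest": { "index": "queries_5x", "type": "queries" }
}
```

Queries indexed this way go through `PercolatorFieldMapper#parse`, which rewrites them up front, so the `query must be rewritten first` failure above no longer applies to them.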
{ "body": "This fix is only necessary for 5.4 and 5.x branches.\r\n\r\nPR for #24485", "number": 24617, "review_comments": [ { "body": "Why not QueryBuilder#rewriteQuery ? Otherwise you can still have non-primitive queries ? \r\nAlso, we should rewrite query rewritten at index time (post v5) too ? Otherwise all rewritten rules must be the same at index and query time ?", "created_at": "2017-05-15T16:33:59Z" }, { "body": "@jimczi Good point. \r\n\r\n> Also, we should rewrite query rewritten at index time (post v5) too ?\r\n\r\nThe query is already rewritten at index time. (`PercolatorFieldMapper#parse(...)`)", "created_at": "2017-05-16T10:56:44Z" }, { "body": "I still think we should rewrite the query all the time. The percolator in 5.x does rewrite at index time but we may have new rewrite rules that should be applied in minor releases ? My point here is that it should be safe to rewrite the query even when the query has been rewritten already.", "created_at": "2017-05-16T11:15:41Z" }, { "body": "That is true, let me change that.", "created_at": "2017-05-22T09:14:44Z" }, { "body": "It turns out to be a bit more complicated. In case of range query builders this can result in no matches, there the rewrite checks the relation and in case of disjoint returns a match none query builder. The problem is that is uses the index reader from the shard and not from the in-memory index.\r\n\r\nSomehow switching to use the in-memory index reader should solve that problem, but that is a much bigger change. So for now I'll keep use the version check.", "created_at": "2017-05-22T09:54:48Z" } ], "title": "For legacy indices rewrite percolator query upon percolation time" }
{ "commits": [ { "message": "Rewrite percolator queries in legacy format at query time\n\nRewriting the query at percolate time, because this is sometimes necessary:\n* From 5.0 and onwards the percolator rewrites the query at index time,\n this is not the case for percolator queries in indices created before 5.0\n Doing so fixes percolator query upgrade issues.\n\nCloses #24485" } ], "files": [ { "diff": "@@ -31,12 +31,13 @@\n import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.IndexSearcher;\n-import org.apache.lucene.search.Query;\n import org.apache.lucene.search.MatchNoDocsQuery;\n+import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermInSetQuery;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.BytesRefBuilder;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.Setting;\n@@ -282,7 +283,7 @@ public Mapper parse(ParseContext context) throws IOException {\n );\n verifyQuery(queryBuilder);\n // Fetching of terms, shapes and indexed scripts happen during this rewrite:\n- queryBuilder = queryBuilder.rewrite(queryShardContext);\n+ queryBuilder = QueryBuilder.rewriteQuery(queryBuilder, queryShardContext);\n \n try (XContentBuilder builder = XContentFactory.contentBuilder(QUERY_BUILDER_CONTENT_TYPE)) {\n queryBuilder.toXContent(builder, new MapParams(Collections.emptyMap()));\n@@ -344,6 +345,14 @@ static Query toQuery(QueryShardContext context, boolean mapUnmappedFieldsAsStrin\n // as an analyzed string.\n context.setAllowUnmappedFields(false);\n context.setMapUnmappedFieldAsString(mapUnmappedFieldsAsString);\n+\n+ // Rewriting the query at percolate time, because this is sometimes necessary:\n+ // * From 5.0 and onwards the percolator rewrites the query at index time,\n+ // this is not the case for percolator queries in indices created before 5.0\n+ if (context.getIndexSettings().getIndexVersionCreated().before(Version.V_5_0_0_alpha1)) {\n+ queryBuilder = QueryBuilder.rewriteQuery(queryBuilder, context);\n+ }\n+\n return queryBuilder.toQuery(context);\n }\n ", "filename": "modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorFieldMapper.java", "status": "modified" }, { "diff": "@@ -21,7 +21,6 @@\n import org.apache.lucene.util.LuceneTestCase;\n import org.apache.lucene.util.TestUtil;\n import org.elasticsearch.Version;\n-import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n@@ -44,7 +43,6 @@\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.scriptQuery;\n import static org.elasticsearch.percolator.PercolatorTestUtil.preparePercolate;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -72,7 +70,7 @@ public void testOldPercolatorIndex() throws Exception {\n \n // verify cluster state:\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n- assertThat(state.metaData().indices().size(), equalTo(1));\n+ assertThat(state.metaData().indices().size(), equalTo(2));\n 
assertThat(state.metaData().indices().get(INDEX_NAME), notNullValue());\n assertThat(state.metaData().indices().get(INDEX_NAME).getCreationVersion(), equalTo(Version.V_2_0_0));\n assertThat(state.metaData().indices().get(INDEX_NAME).getUpgradedVersion(), equalTo(Version.CURRENT));\n@@ -88,17 +86,18 @@ public void testOldPercolatorIndex() throws Exception {\n .setTypes(\".percolator\")\n .addSort(\"_uid\", SortOrder.ASC)\n .get();\n- assertThat(searchResponse.getHits().getTotalHits(), equalTo(4L));\n- assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"1\"));\n- assertThat(searchResponse.getHits().getAt(1).id(), equalTo(\"2\"));\n- assertThat(searchResponse.getHits().getAt(2).id(), equalTo(\"3\"));\n- assertThat(searchResponse.getHits().getAt(3).id(), equalTo(\"4\"));\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(5L));\n+ assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"0\"));\n+ assertThat(searchResponse.getHits().getAt(1).id(), equalTo(\"1\"));\n+ assertThat(searchResponse.getHits().getAt(2).id(), equalTo(\"2\"));\n+ assertThat(searchResponse.getHits().getAt(3).id(), equalTo(\"3\"));\n+ assertThat(searchResponse.getHits().getAt(4).id(), equalTo(\"4\"));\n assertThat(XContentMapValues.extractValue(\"query.script.script.inline\",\n- searchResponse.getHits().getAt(3).sourceAsMap()), equalTo(\"return true\"));\n+ searchResponse.getHits().getAt(4).sourceAsMap()), equalTo(\"return true\"));\n // we don't upgrade the script definitions so that they include explicitly the lang,\n // because we read / parse the query at search time.\n assertThat(XContentMapValues.extractValue(\"query.script.script.lang\",\n- searchResponse.getHits().getAt(3).sourceAsMap()), nullValue());\n+ searchResponse.getHits().getAt(4).sourceAsMap()), nullValue());\n \n // verify percolate response\n PercolateResponse percolateResponse = preparePercolate(client())\n@@ -107,21 +106,23 @@ public void testOldPercolatorIndex() throws Exception {\n .setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc(\"{}\"))\n .get();\n \n- assertThat(percolateResponse.getCount(), equalTo(1L));\n- assertThat(percolateResponse.getMatches().length, equalTo(1));\n+ assertThat(percolateResponse.getCount(), equalTo(2L));\n+ assertThat(percolateResponse.getMatches().length, equalTo(2));\n assertThat(percolateResponse.getMatches()[0].getId().string(), equalTo(\"4\"));\n+ assertThat(percolateResponse.getMatches()[1].getId().string(), equalTo(\"0\"));\n \n percolateResponse = preparePercolate(client())\n .setIndices(INDEX_NAME)\n .setDocumentType(\"message\")\n .setPercolateDoc(new PercolateSourceBuilder.DocBuilder().setDoc(\"message\", \"the quick brown fox jumps over the lazy dog\"))\n .get();\n \n- assertThat(percolateResponse.getCount(), equalTo(3L));\n- assertThat(percolateResponse.getMatches().length, equalTo(3));\n+ assertThat(percolateResponse.getCount(), equalTo(4L));\n+ assertThat(percolateResponse.getMatches().length, equalTo(4));\n assertThat(percolateResponse.getMatches()[0].getId().string(), equalTo(\"1\"));\n assertThat(percolateResponse.getMatches()[1].getId().string(), equalTo(\"2\"));\n assertThat(percolateResponse.getMatches()[2].getId().string(), equalTo(\"4\"));\n+ assertThat(percolateResponse.getMatches()[3].getId().string(), equalTo(\"0\"));\n \n // add an extra query and verify the results\n client().prepareIndex(INDEX_NAME, \".percolator\", \"5\")\n@@ -135,11 +136,13 @@ public void testOldPercolatorIndex() throws Exception {\n .setPercolateDoc(new 
PercolateSourceBuilder.DocBuilder().setDoc(\"message\", \"the quick brown fox jumps over the lazy dog\"))\n .get();\n \n- assertThat(percolateResponse.getCount(), equalTo(4L));\n- assertThat(percolateResponse.getMatches().length, equalTo(4));\n+ assertThat(percolateResponse.getCount(), equalTo(5L));\n+ assertThat(percolateResponse.getMatches().length, equalTo(5));\n assertThat(percolateResponse.getMatches()[0].getId().string(), equalTo(\"1\"));\n assertThat(percolateResponse.getMatches()[1].getId().string(), equalTo(\"2\"));\n assertThat(percolateResponse.getMatches()[2].getId().string(), equalTo(\"4\"));\n+ assertThat(percolateResponse.getMatches()[3].getId().string(), equalTo(\"0\"));\n+ assertThat(percolateResponse.getMatches()[4].getId().string(), equalTo(\"5\"));\n }\n \n private void setupNode() throws Exception {", "filename": "modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorBackwardsCompatibilityTests.java", "status": "modified" }, { "diff": "", "filename": "modules/percolator/src/test/resources/indices/percolator/bwc_index_2.0.0.zip", "status": "modified" } ] }
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**: 2.4.4/5.3.1\r\n\r\nA `_field_stats` call on a type `geo_point` field throws an exception for an index that was created in `2.4.4` and upgraded to `5.3.1`. I also reproduced this on going from `2.3.3` -> `5.3.0`.\r\n\r\nThis causes Kibana to not properly grab the index mappings when defining an index pattern rendering all fields as neither searchable nor aggregatable. \r\n\r\n**Steps to reproduce**:\r\n1. Create an index mapping with a geo_point field\r\n\r\n```\r\nPUT index\r\n{\r\n \"mappings\": {\r\n \"type\": {\r\n \"properties\": {\r\n \"geo_field\": {\r\n \"type\": \"geo_point\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n2. Add a sample document\r\n\r\n```\r\nPUT index/type/1\r\n{\r\n \"geo_field\": \"33.8957, -112.0577\"\r\n}\r\n```\r\n\r\n3. Upgrade to 5.3.1 (I simply copied the data directory over)\r\n\r\n4. Attempt a `_field_stats` call on the geo_field\r\n\r\n```\r\nGET index/_field_stats?fields=geo_field\r\n```\r\nThe response\r\n\r\n```\r\n{\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 4,\r\n \"failed\": 1,\r\n \"failures\": [\r\n {\r\n \"shard\": 3,\r\n \"index\": \"index\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"exception\",\r\n \"reason\": \"java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5\",\r\n \"caused_by\": {\r\n \"type\": \"execution_exception\",\r\n \"reason\": \"java.lang.ArrayIndexOutOfBoundsException: 5\",\r\n \"caused_by\": {\r\n \"type\": \"array_index_out_of_bounds_exception\",\r\n \"reason\": \"5\"\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"indices\": {\r\n \"_all\": {\r\n \"fields\": {}\r\n }\r\n }\r\n}\r\n```\r\n\r\n", "comments": [ { "body": "This seems to be a deserialization error with geo_point encoding in 2.x. @nknize can you take a look ?", "created_at": "2017-04-23T19:52:24Z" }, { "body": "@n0othing also note that field stats is deprecated in favour of the new field caps API (5.4)", "created_at": "2017-04-25T11:32:07Z" }, { "body": "We've had a couple reports of this affecting Kibana users already. Because we use field stats to figure out the searchable/aggregatable status of fields this effectively breaks Kibana for any index patterns containing a geo_point field after upgrading to 5.3.1 from 2.x\r\n\r\nhttps://github.com/elastic/kibana/issues/11379\r\nhttps://github.com/elastic/kibana/issues/11377\r\nhttps://github.com/elastic/kibana/issues/9571#issuecomment-296392234", "created_at": "2017-04-25T14:54:44Z" }, { "body": "PR opened.... 
https://github.com/elastic/elasticsearch/pull/24534", "created_at": "2017-05-06T19:25:39Z" }, { "body": "Per the PR, this is fixed in 5.3.3 and 5.4.1, should we close?", "created_at": "2017-06-05T17:28:40Z" }, { "body": "I'm still seeing this behavior when upgrading from 2.x to 5.4.1 and 5.3.3", "created_at": "2017-06-13T16:27:03Z" }, { "body": "There was a reversed ternary logic bug that wasn't caught by the munged test. Opened fix at #25211 for 5.4.2 release /cc @jimczi @clintongormley ", "created_at": "2017-06-14T00:13:02Z" }, { "body": "fix is merged in #25211 ", "created_at": "2017-06-14T13:58:20Z" } ], "number": 24275, "title": "_field_stats call on geo_point field broken after upgrading from 2.4.4 -> 5.3.1" }
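One of the comments points out that `_field_stats` is deprecated in favour of the field capabilities API added in 5.4. For the searchable/aggregatable flags that Kibana needs, the equivalent call against the reproduction index above would presumably be:

```
GET index/_field_caps?fields=geo_field
```

which reports whether `geo_field` is searchable and aggregatable without relying on the min/max term decoding that fails here.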
{ "body": "`LegacyGeoPointField` was using the wrong decoding for min/max prefix coded GeoPoint Terms. This PR applies the correct decoding. Note that the min/max values, though, are likely useless anyway since they map to a low resolution morton encoded version of the point; not something that is really of value for field stats.\r\n\r\ncloses #24275 ", "number": 24534, "review_comments": [ { "body": "Can we test more than one version at a time ? At least one of v2 and one of v5.", "created_at": "2017-05-11T22:05:19Z" } ], "title": "Fix legacy GeoPointField decoding in FieldStats" }
{ "commits": [ { "message": "Fix legacy GeoPointField decoding in FieldStats\n\nLegacyGeoPointField was using the wrong decoding for min/max prefix coded GeoPoint Terms. This commit applies the correcct decoding. Note that the min/max values, though, are likely useless anyway since they map to a low resolution morton encoded version of the point; not something that is really of value for field stats." } ], "files": [ { "diff": "@@ -25,8 +25,9 @@\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.index.Terms;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.spatial.util.MortonEncoder;\n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.LegacyNumericUtils;\n-import org.apache.lucene.util.NumericUtils;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.fieldstats.FieldStats;\n@@ -306,6 +307,7 @@ public static class LegacyGeoPointFieldType extends GeoPointFieldType {\n \n protected MappedFieldType latFieldType;\n protected MappedFieldType lonFieldType;\n+ protected boolean numericEncoded;\n \n LegacyGeoPointFieldType() {}\n \n@@ -316,6 +318,7 @@ public static class LegacyGeoPointFieldType extends GeoPointFieldType {\n this.geoHashPrefixEnabled = ref.geoHashPrefixEnabled;\n this.latFieldType = ref.latFieldType; // copying ref is ok, this can never be modified\n this.lonFieldType = ref.lonFieldType; // copying ref is ok, this can never be modified\n+ this.numericEncoded = ref.numericEncoded;\n }\n \n @Override\n@@ -329,15 +332,16 @@ public boolean equals(Object o) {\n LegacyGeoPointFieldType that = (LegacyGeoPointFieldType) o;\n return geoHashPrecision == that.geoHashPrecision &&\n geoHashPrefixEnabled == that.geoHashPrefixEnabled &&\n+ numericEncoded == that.numericEncoded &&\n java.util.Objects.equals(geoHashFieldType, that.geoHashFieldType) &&\n java.util.Objects.equals(latFieldType, that.latFieldType) &&\n java.util.Objects.equals(lonFieldType, that.lonFieldType);\n }\n \n @Override\n public int hashCode() {\n- return java.util.Objects.hash(super.hashCode(), geoHashFieldType, geoHashPrecision, geoHashPrefixEnabled, latFieldType,\n- lonFieldType);\n+ return java.util.Objects.hash(super.hashCode(), geoHashFieldType, geoHashPrecision, geoHashPrefixEnabled,\n+ numericEncoded, latFieldType, lonFieldType);\n }\n \n @Override\n@@ -437,10 +441,9 @@ public FieldStats.GeoPoint stats(IndexReader reader) throws IOException {\n if (terms == null) {\n return new FieldStats.GeoPoint(reader.maxDoc(), 0L, -1L, -1L, isSearchable(), isAggregatable());\n }\n- GeoPoint minPt = GeoPoint.fromGeohash(NumericUtils.sortableBytesToLong(terms.getMin().bytes, terms.getMin().offset));\n- GeoPoint maxPt = GeoPoint.fromGeohash(NumericUtils.sortableBytesToLong(terms.getMax().bytes, terms.getMax().offset));\n return new FieldStats.GeoPoint(reader.maxDoc(), terms.getDocCount(), -1L, terms.getSumTotalTermFreq(), isSearchable(),\n- isAggregatable(), minPt, maxPt);\n+ isAggregatable(), prefixCodedToGeoPoint(terms.getMin(), numericEncoded),\n+ prefixCodedToGeoPoint(terms.getMax(), numericEncoded));\n }\n }\n \n@@ -657,4 +660,19 @@ public FieldMapper updateFieldType(Map<String, MappedFieldType> fullNameToFieldT\n updated.lonMapper = lonUpdated;\n return updated;\n }\n+\n+ private static GeoPoint prefixCodedToGeoPoint(BytesRef val, boolean isGeoCoded) {\n+ final long encoded = isGeoCoded ? 
prefixCodedToGeoCoded(val) : LegacyNumericUtils.prefixCodedToLong(val);\n+ return new GeoPoint(MortonEncoder.decodeLatitude(encoded), MortonEncoder.decodeLongitude(encoded));\n+ }\n+\n+ private static long prefixCodedToGeoCoded(BytesRef val) {\n+ long result = fromBytes((byte)0, (byte)0, (byte)0, (byte)0, val.bytes[val.offset + 0], val.bytes[val.offset + 1],\n+ val.bytes[val.offset + 2], val.bytes[val.offset + 3]);\n+ return result << 32;\n+ }\n+\n+ private static long fromBytes(byte b1, byte b2, byte b3, byte b4, byte b5, byte b6, byte b7, byte b8) {\n+ return ((long)b1 & 255L) << 56 | ((long)b2 & 255L) << 48 | ((long)b3 & 255L) << 40 | ((long)b4 & 255L) << 32 | ((long)b5 & 255L) << 24 | ((long)b6 & 255L) << 16 | ((long)b7 & 255L) << 8 | (long)b8 & 255L;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/BaseGeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -79,6 +79,7 @@ public GeoPointFieldMapper build(BuilderContext context, String simpleName, Mapp\n if (context.indexCreatedVersion().before(Version.V_2_3_0)) {\n fieldType.setNumericPrecisionStep(GeoPointField.PRECISION_STEP);\n fieldType.setNumericType(FieldType.LegacyNumericType.LONG);\n+ ((LegacyGeoPointFieldType)fieldType).numericEncoded = true;\n }\n setupFieldType(context);\n return new GeoPointFieldMapper(simpleName, fieldType, defaultFieldType, indexSettings, latMapper, lonMapper,", "filename": "core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -19,24 +19,30 @@\n \n package org.elasticsearch.fieldstats;\n \n+import org.apache.lucene.geo.GeoTestUtil;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.fieldstats.FieldStats;\n import org.elasticsearch.action.fieldstats.FieldStatsResponse;\n import org.elasticsearch.action.fieldstats.IndexConstraint;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.elasticsearch.test.InternalSettingsPlugin;\n+import org.elasticsearch.test.VersionUtils;\n import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n \n import java.io.IOException;\n import java.net.InetAddress;\n import java.net.UnknownHostException;\n import java.util.ArrayList;\n+import java.util.Collection;\n import java.util.Date;\n import java.util.List;\n import java.util.Locale;\n@@ -52,6 +58,11 @@\n /**\n */\n public class FieldStatsTests extends ESSingleNodeTestCase {\n+ @Override\n+ protected Collection<Class<? 
extends Plugin>> getPlugins() {\n+ return pluginList(InternalSettingsPlugin.class);\n+ }\n+\n public void testByte() {\n testNumberRange(\"field1\", \"byte\", 12, 18);\n testNumberRange(\"field1\", \"byte\", -5, 5);\n@@ -676,6 +687,31 @@ public static FieldStats randomFieldStats(boolean withNullMinMax) throws Unknown\n }\n }\n \n+ public void testGeopoint() {\n+ Version version = VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.CURRENT);\n+ Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, version).build();\n+ createIndex(\"test\", settings, \"test\",\n+ \"field_index\", makeType(\"geo_point\", true, false, false));\n+ version = Version.CURRENT;\n+ settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, version).build();\n+ createIndex(\"test5x\", settings, \"test\",\n+ \"field_index\", makeType(\"geo_point\", true, false, false));\n+ int numDocs = random().nextInt(20);\n+ for (int i = 0; i <= numDocs; ++i) {\n+ double lat = GeoTestUtil.nextLatitude();\n+ double lon = GeoTestUtil.nextLongitude();\n+ client().prepareIndex(random().nextBoolean() ? \"test\" : \"test5x\", \"test\").setSource(\"field_index\", lat + \",\" + lon).get();\n+ }\n+\n+ client().admin().indices().prepareRefresh().get();\n+ FieldStatsResponse result = client().prepareFieldStats().setFields(\"field_index\").get();\n+ FieldStats stats = result.getAllFieldStats().get(\"field_index\");\n+ assertEquals(stats.getDisplayType(), \"geo_point\");\n+ // min/max random testing is not straightforward; there are 3 different encodings since V_2_0\n+ // e.g., before V2_3 used legacy numeric encoding which is wildly different from V_2_3 which is morton encoded\n+ // which is wildly different from V_5_0 which is point encoded. Skipping min/max in favor of testing\n+ }\n+\n private void assertSerialization(FieldStats stats, Version version) throws IOException {\n BytesStreamOutput output = new BytesStreamOutput();\n output.setVersion(version);", "filename": "core/src/test/java/org/elasticsearch/fieldstats/FieldStatsTests.java", "status": "modified" } ] }
{ "body": "Using the default settings, if a delete request is issued for a single document against a non-existing index, Elasticsearch will create the index. Steps to reproduce:\n\n```\nDELETE /foo\n```\n\n```\nDELETE /foo/bar/1\n```\n\nThen:\n\n```\nGET /foo\n```\n\nresponds with HTTP 200 OK and response body\n\n```\n{\n \"foo\" : {\n \"aliases\" : { },\n \"mappings\" : { },\n \"settings\" : {\n \"index\" : {\n \"creation_date\" : \"1450124918393\",\n \"number_of_shards\" : \"5\",\n \"number_of_replicas\" : \"1\",\n \"uuid\" : \"Ny3PruSCTIG5TlYecU0_XA\",\n \"version\" : {\n \"created\" : \"3000099\"\n }\n }\n },\n \"warmers\" : { }\n }\n}\n```\n\nshowing that it created the index.\n", "comments": [ { "body": "+1 this just cost me a day of work.\n", "created_at": "2016-08-08T17:27:03Z" }, { "body": "Carrying the discussion over from #21926 \r\n\r\n> I think we should have a dedicated setting for this which defaults to false.\r\n\r\nI would prefer to make deletes with an external version auto create indices rather than have a settings that controls deletes in general. It seems that's the only use case that needs it so we can have it contained. ", "created_at": "2016-12-02T10:29:20Z" }, { "body": "++ @bleskes ", "created_at": "2016-12-02T10:54:31Z" }, { "body": "+1, I was using an index management scheme that was open loop deleting potentially existent indices based on time range. This then creates thousands of ghost indices that cannot be deleted, and completely kills the cluster performance. Poor behaviour. I am on v5.20", "created_at": "2017-04-25T16:06:47Z" }, { "body": "Hey guys, as this appears to be still an issue in master, I would like to give it a try iff nobody else is working on it.\r\n\r\nAs the discussion was held over time in multiple threads, let me sum up what the expected behavior should be : \r\n* if an external version is used : create the index ( the same behavior as of now )\r\n* otherwise : throw an `index_not_found`\r\n(the change should not introduce an additional `IndexOption`)", "created_at": "2017-04-28T12:52:03Z" } ], "number": 15425, "title": "Deleting a document from a non-existing index creates the index" }
{ "body": "Currently a `delete document` request against a non-existing index actually **creates** this index.\r\n\r\nWith this change the `delete document` no longer creates the previously non-existing index and throws an `index_not_found` exception instead.\r\n\r\nHowever as discussed in https://github.com/elastic/elasticsearch/pull/15451#issuecomment-165772026, if an external version is explicitly used, the current behavior is preserved and the index is still created and the document is marked for deletion.\r\n\r\nFixes #15425 ", "number": 24518, "review_comments": [ { "body": "The goal of this test is to check how delete behaves where it doesn't find the document in question. I don't think there is a need to test the index creation on the rest client tests (this is done in the core testing). Just create the index and let the test what it does.", "created_at": "2017-05-12T13:28:36Z" }, { "body": "I think this is the wrong place for this - this suite checks that index names are passed correctly to sub request. IMO you should create a unit test `TransportBulkActionTests` class, which inherits from `ESTestCase` and test how the `TransportBulkAction` decides which indices need to be auto created. You can look at how `TransportBulkActionIngestTests.TestTransportBulkAction` is set up as an example. Let me know if you need more guidance and I'm happy to help", "created_at": "2017-05-12T13:37:08Z" }, { "body": "maybe change this to a delete request with an external version?", "created_at": "2017-05-12T13:41:15Z" }, { "body": "can you mention external_gte too?", "created_at": "2017-05-12T13:41:49Z" }, { "body": "What about adding a link here as there are actually 3 external version types : `external`, `external_gt` and `external_gte` ?", "created_at": "2017-05-14T16:20:12Z" }, { "body": "I'm fine with something generic here (if one of the external versioning variants is used) and linking to the versioning page.", "created_at": "2017-05-15T07:28:48Z" }, { "body": "@bleskes thanks for the tip, I will add a `TransportBulkActionTests`.\r\n\r\nHm... It appeared to me that at the time the issue was opened, the `delete` was not using `BulkAction` ( based on some comments ). So I decided to add the test here to verify that explicitly using the `delete` endpoint will not create the index. But of course I can remove it ;)", "created_at": "2017-05-15T13:17:29Z" }, { "body": "why was this added?", "created_at": "2017-05-18T13:05:26Z" }, { "body": "can you add a comment here as to why we have a special handling for external versions and deletes here?", "created_at": "2017-05-18T13:07:49Z" }, { "body": "nit: left over formatting.. can you please remove?", "created_at": "2017-05-18T13:08:10Z" }, { "body": "can we call this `testDeleteNonExistingDocDoesNotCreateIndices`", "created_at": "2017-05-18T13:13:49Z" }, { "body": "same comment about naming. Can we add \"CreatesIndex\"?", "created_at": "2017-05-18T13:15:10Z" }, { "body": "Hm... you want me revert and leave the imports not organized?\r\n\r\nIn general I agree that major reformatting should happen separately but it is just a single import ;)", "created_at": "2017-05-18T15:22:12Z" }, { "body": "I hear you. On the other hand, it's just a reset to commit away :)", "created_at": "2017-05-18T15:42:59Z" }, { "body": "I could not figure out how else to explicitly create an index in the REST client... 
Do you have a suggestion?", "created_at": "2017-05-18T15:54:16Z" }, { "body": "I think you still need to use the low level client for that, using `client().performRequest()`. That said, why not change the order of the test blocks around and move the `// Testing deletion` block to be first. It will create the index.", "created_at": "2017-05-18T19:19:20Z" }, { "body": "Well, duh! :blush: Thanks @bleskes! ", "created_at": "2017-05-18T20:27:53Z" } ], "title": "If the index does not exist, delete document will not auto create it" }
{ "commits": [ { "message": "If the index does not exist, delete document will not auto create it\nunless an external versioning is used" }, { "message": "creating TransportBulkActionTests\nreverting the changes to the REST client tests and the IndicesRequestIT\n\nadd a section to the indices.asciidoc" }, { "message": "integrating remarks" }, { "message": "correcting CrudIT" } ], "files": [ { "diff": "@@ -55,24 +55,13 @@\n import java.io.IOException;\n import java.util.Collections;\n import java.util.Map;\n-import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicReference;\n \n import static java.util.Collections.singletonMap;\n \n public class CrudIT extends ESRestHighLevelClientTestCase {\n \n public void testDelete() throws IOException {\n- {\n- // Testing non existing document\n- String docId = \"does_not_exist\";\n- DeleteRequest deleteRequest = new DeleteRequest(\"index\", \"type\", docId);\n- DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync);\n- assertEquals(\"index\", deleteResponse.getIndex());\n- assertEquals(\"type\", deleteResponse.getType());\n- assertEquals(docId, deleteResponse.getId());\n- assertEquals(DocWriteResponse.Result.NOT_FOUND, deleteResponse.getResult());\n- }\n {\n // Testing deletion\n String docId = \"id\";\n@@ -87,6 +76,16 @@ public void testDelete() throws IOException {\n assertEquals(docId, deleteResponse.getId());\n assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult());\n }\n+ {\n+ // Testing non existing document\n+ String docId = \"does_not_exist\";\n+ DeleteRequest deleteRequest = new DeleteRequest(\"index\", \"type\", docId);\n+ DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync);\n+ assertEquals(\"index\", deleteResponse.getIndex());\n+ assertEquals(\"type\", deleteResponse.getType());\n+ assertEquals(docId, deleteResponse.getId());\n+ assertEquals(DocWriteResponse.Result.NOT_FOUND, deleteResponse.getResult());\n+ }\n {\n // Testing version conflict\n String docId = \"version_conflict\";", "filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java", "status": "modified" }, { "diff": "@@ -54,6 +54,7 @@\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndexClosedException;\n import org.elasticsearch.ingest.IngestService;\n@@ -144,6 +145,11 @@ protected void doExecute(Task task, BulkRequest bulkRequest, ActionListener<Bulk\n // Attempt to create all the indices that we're going to need during the bulk before we start.\n // Step 1: collect all the indices in the request\n final Set<String> indices = bulkRequest.requests.stream()\n+ // delete requests should not attempt to create the index (if the index does not\n+ // exists), unless an external versioning is used\n+ .filter(request -> request.opType() != DocWriteRequest.OpType.DELETE \n+ || request.versionType() == VersionType.EXTERNAL \n+ || request.versionType() == VersionType.EXTERNAL_GTE)\n .map(DocWriteRequest::index)\n .collect(Collectors.toSet());\n /* Step 2: filter that to indices that don't exist and we can create. 
At the same time build a map of indices we can't create", "filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.VersionType;\n import org.elasticsearch.tasks.Task;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.transport.TransportService;\n@@ -66,7 +67,7 @@ public void testAllFail() {\n BulkRequest bulkRequest = new BulkRequest();\n bulkRequest.add(new IndexRequest(\"no\"));\n bulkRequest.add(new IndexRequest(\"can't\"));\n- bulkRequest.add(new DeleteRequest(\"do\"));\n+ bulkRequest.add(new DeleteRequest(\"do\").version(0).versionType(VersionType.EXTERNAL));\n bulkRequest.add(new UpdateRequest(\"nothin\", randomAlphaOfLength(5), randomAlphaOfLength(5)));\n indicesThatCannotBeCreatedTestCase(new HashSet<>(Arrays.asList(\"no\", \"can't\", \"do\", \"nothin\")), bulkRequest, index -> {\n throw new IndexNotFoundException(\"Can't make it because I say so\");", "filename": "core/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionIndicesThatCannotBeCreatedTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,135 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.bulk;\n+\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n+import org.elasticsearch.action.bulk.TransportBulkActionTookTests.Resolver;\n+import org.elasticsearch.action.delete.DeleteRequest;\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.AutoCreateIndex;\n+import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.VersionType;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.transport.CapturingTransport;\n+import org.elasticsearch.threadpool.TestThreadPool;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+import org.junit.After;\n+import org.junit.Before;\n+\n+import java.util.Collections;\n+import java.util.concurrent.TimeUnit;\n+\n+import static org.elasticsearch.test.ClusterServiceUtils.createClusterService;\n+\n+public class TransportBulkActionTests extends ESTestCase {\n+\n+ /** Services needed by bulk action */\n+ private TransportService transportService;\n+ private ClusterService clusterService;\n+ private ThreadPool threadPool;\n+ \n+ private TestTransportBulkAction bulkAction;\n+\n+ class TestTransportBulkAction extends TransportBulkAction {\n+\n+ boolean indexCreated = false; // set when the \"real\" index is created\n+\n+ TestTransportBulkAction() {\n+ super(Settings.EMPTY, TransportBulkActionTests.this.threadPool, transportService, clusterService, null, null,\n+ null, new ActionFilters(Collections.emptySet()), new Resolver(Settings.EMPTY),\n+ new AutoCreateIndex(Settings.EMPTY, clusterService.getClusterSettings(), new Resolver(Settings.EMPTY)));\n+ }\n+\n+ @Override\n+ protected boolean needToCheck() {\n+ return true;\n+ }\n+\n+ @Override\n+ void createIndex(String index, TimeValue timeout, ActionListener<CreateIndexResponse> listener) {\n+ indexCreated = true;\n+ listener.onResponse(null);\n+ }\n+ }\n+\n+ @Before\n+ public void setUp() throws Exception {\n+ super.setUp();\n+ threadPool = new TestThreadPool(\"TransportBulkActionTookTests\");\n+ clusterService = createClusterService(threadPool);\n+ CapturingTransport capturingTransport = new CapturingTransport();\n+ transportService = new TransportService(clusterService.getSettings(), capturingTransport, threadPool,\n+ TransportService.NOOP_TRANSPORT_INTERCEPTOR,\n+ boundAddress -> clusterService.localNode(), null);\n+ transportService.start();\n+ transportService.acceptIncomingRequests();\n+ bulkAction = new TestTransportBulkAction();\n+ }\n+\n+ @After\n+ public void tearDown() throws Exception {\n+ ThreadPool.terminate(threadPool, 30, TimeUnit.SECONDS);\n+ threadPool = null;\n+ clusterService.close();\n+ super.tearDown();\n+ }\n+\n+ public void testDeleteNonExistingDocDoesNotCreateIndex() throws Exception {\n+ BulkRequest bulkRequest = new BulkRequest().add(new DeleteRequest(\"index\", \"type\", \"id\"));\n+\n+ bulkAction.execute(null, bulkRequest, ActionListener.wrap(response -> {\n+ assertFalse(bulkAction.indexCreated);\n+ BulkItemResponse[] bulkResponses = ((BulkResponse) response).getItems();\n+ assertEquals(bulkResponses.length, 1);\n+ 
assertTrue(bulkResponses[0].isFailed());\n+ assertTrue(bulkResponses[0].getFailure().getCause() instanceof IndexNotFoundException);\n+ assertEquals(\"index\", bulkResponses[0].getFailure().getIndex());\n+ }, exception -> {\n+ throw new AssertionError(exception);\n+ }));\n+ }\n+\n+ public void testDeleteNonExistingDocExternalVersionCreatesIndex() throws Exception {\n+ BulkRequest bulkRequest = new BulkRequest()\n+ .add(new DeleteRequest(\"index\", \"type\", \"id\").versionType(VersionType.EXTERNAL).version(0));\n+\n+ bulkAction.execute(null, bulkRequest, ActionListener.wrap(response -> {\n+ assertTrue(bulkAction.indexCreated);\n+ }, exception -> {\n+ throw new AssertionError(exception);\n+ }));\n+ }\n+\n+ public void testDeleteNonExistingDocExternalGteVersionCreatesIndex() throws Exception {\n+ BulkRequest bulkRequest = new BulkRequest()\n+ .add(new DeleteRequest(\"index2\", \"type\", \"id\").versionType(VersionType.EXTERNAL_GTE).version(0));\n+\n+ bulkAction.execute(null, bulkRequest, ActionListener.wrap(response -> {\n+ assertTrue(bulkAction.indexCreated);\n+ }, exception -> {\n+ throw new AssertionError(exception);\n+ }));\n+ }\n+}\n\\ No newline at end of file", "filename": "core/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTests.java", "status": "added" }, { "diff": "@@ -105,7 +105,8 @@ thrown instead.\n [[delete-index-creation]]\n === Automatic index creation\n \n-The delete operation automatically creates an index if it has not been\n+If an <<docs-index_,external versioning variant>> is used,\n+the delete operation automatically creates an index if it has not been\n created before (check out the <<indices-create-index,create index API>>\n for manually creating an index), and also automatically creates a\n dynamic type mapping for the specific type if it has not been created", "filename": "docs/reference/docs/delete.asciidoc", "status": "modified" }, { "diff": "@@ -44,3 +44,9 @@ The default value of the `allow_no_indices` option for the Open/Close index API\n has been changed from `false` to `true` so it is aligned with the behaviour of the\n Delete index API. As a result, Open/Close index API don't return an error by\n default when a provided wildcard expression doesn't match any closed/open index.\n+\n+==== Delete a document\n+\n+Delete a document from non-existing index has been modified to not create the index.\n+However if an external versioning is used the index will be created and the document\n+will be marked for deletion. ", "filename": "docs/reference/migration/migrate_6_0/indices.asciidoc", "status": "modified" } ] }
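Because the fix lives in `TransportBulkAction`, the same behaviour is visible through the `_bulk` API. A console-level equivalent of `testDeleteNonExistingDocDoesNotCreateIndex` above (index, type and id are illustrative) would be expected to report the item as failed with `index_not_found_exception`, without creating the index:

```
POST _bulk
{ "delete": { "_index": "index", "_type": "type", "_id": "id" } }
```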
{ "body": "This problem was reported at https://discuss.elastic.co/t/keyword-filter-requires-either-keywords-or-keywords-path-to-be-configured/80332/3. If you update analysis settings with an empty list of keywords in a keyword marker filter on a closed index, the index will fail to reopen and there does not seem to be any way to recover from this situation. This problem was reported on Elasticsearch 5.2 but reproduces on master:\r\n\r\n```\r\nDELETE my_index\r\n\r\nPUT my_index\r\n\r\nPOST my_index/_close\r\n\r\nPUT my_index/_settings\r\n{\r\n \"index\": {\r\n \"analysis\": {\r\n \"filter\": {\r\n \"my_keywords\": {\r\n \"type\": \"keyword_marker\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nOnce we reach this point, it does not seem possible to open the index again. For instance, the commands below that try to set a dummy list of keywords to make the filter valid again fail:\r\n\r\n```\r\nPOST my_index/_open\r\n\r\nPOST my_index/_close\r\n\r\nPUT my_index/_settings\r\n{\r\n \"index\": {\r\n \"analysis\": {\r\n \"filter\": {\r\n \"my_keywords\": {\r\n \"type\": \"keyword_marker\",\r\n \"keywords\": [ \"foo\", \"bar\" ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```", "comments": [ { "body": "I dug into this a little to learn more about how things work.\r\n\r\nLooks like nothing in the MetadataUpdateSettingsService actually verifies the contents of these analyzers. (any json metadata objects would be accepted)\r\n\r\nThere is some verification code in place [here](https://github.com/talevy/elasticsearch/blob/4f694a3312a8bf57794ac9c3613ca6b5a2b52c58/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java#L281)\r\n\r\nWhen it finally reaches a point that it is prepared with a temporary IndexService for checking the metadata update, it never verifies the analyzers, like [how is verified during a true new IndexService creation](https://github.com/talevy/elasticsearch/blob/9a0b216c36e3debb9d48374ced1e58d6568429bd/core/src/main/java/org/elasticsearch/index/IndexService.java#L150)\r\n\r\nSince there AnalyzerRegistry in the updateMetadata context, I do not see a way to achieve the same as is done during index creation without more refactoring. If the IndexService kept the AnalysisRegistry that is passed-in around, then it would be as easy to bring that same verification logic into the update code. [Here](https://github.com/talevy/elasticsearch/blob/9a0b216c36e3debb9d48374ced1e58d6568429bd/core/src/main/java/org/elasticsearch/index/IndexService.java#L607) is where the indexSettings are updated and ready to be verified\r\n\r\nI may have an incomplete view into how to properly tackle this, but thought I'd leave these notes just in case it helps", "created_at": "2017-03-30T22:36:02Z" } ], "number": 23787, "title": "Cannot recover from bad analysis settings" }
{ "body": "We allow non-dynamic settings to be updated on closed indices but we don't\r\ncheck if the updated settings can be used to open/create the index.\r\nThis can lead to unrecoverable state where the settings are updated but the index\r\ncannot be reopened since the settings are not valid. Trying to update the invalid settings\r\nis also not possible since the update will fail to validate the current settings.\r\nThis change adds the validation of the updated settings for closed indices and make sure that the new settings do not prevent the reopen of the index.\r\n\r\nFixes #23787", "number": 24487, "review_comments": [], "title": "Validates updated settings on closed indices" }
{ "commits": [ { "message": "Validates updated settings on closed indices\n\nWe allow non-dynamic settings to be updated on closed indices but we don't\ncheck if the updated settings can be used to open/create the index.\nThis can lead to unrecoverable state where the settings are updated but the index\ncannot be reopened since the settings are not valid. Trying to update the invalid settings\nis also not possible since the update will fail to validate the current settings.\nThis change adds the validation of the updated settings for closed indices and make sure that the new settings\ndo not prevent the reopen of the index.\n\nFixes #23787" } ], "files": [ { "diff": "@@ -276,7 +276,11 @@ public ClusterState execute(ClusterState currentState) {\n for (Index index : closeIndices) {\n final IndexMetaData currentMetaData = currentState.getMetaData().getIndexSafe(index);\n final IndexMetaData updatedMetaData = updatedState.metaData().getIndexSafe(index);\n+ // Verifies that the current index settings can be updated with the updated dynamic settings.\n indicesService.verifyIndexMetadata(currentMetaData, updatedMetaData);\n+ // Now check that we can create the index with the updated settings (dynamic and non-dynamic).\n+ // This step is mandatory since we allow to update non-dynamic settings on closed indices.\n+ indicesService.verifyIndexMetadata(updatedMetaData, updatedMetaData);\n }\n } catch (IOException ex) {\n throw ExceptionsHelper.convertToElastic(ex);", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java", "status": "modified" }, { "diff": "@@ -49,6 +49,18 @@\n import static org.hamcrest.Matchers.nullValue;\n \n public class UpdateSettingsIT extends ESIntegTestCase {\n+ public void testInvalidUpdateOnClosedIndex() {\n+ createIndex(\"test\");\n+ assertAcked(client().admin().indices().prepareClose(\"test\").get());\n+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () ->\n+ client()\n+ .admin()\n+ .indices()\n+ .prepareUpdateSettings(\"test\")\n+ .setSettings(Settings.builder().put(\"index.analysis.char_filter.invalid_char.type\", \"invalid\"))\n+ .get());\n+ assertEquals(exception.getMessage(), \"Unknown char_filter type [invalid] for [invalid_char]\");\n+ }\n \n public void testInvalidDynamicUpdate() {\n createIndex(\"test\");", "filename": "core/src/test/java/org/elasticsearch/indices/settings/UpdateSettingsIT.java", "status": "modified" } ] }
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**: 5.2.1 docker image, 5.1.2 docker image, 5.0.1 RPM package\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_121/25.121-b13\r\n\r\n**OS version**: Linux/4.3.0-1.el6.elrepo.x86_64/amd64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nI am able to reproduce the issue presented in #15997 with Elasticsearch 5.2.1,\r\n5.1.2 docker images, and another 5.0.1 RPM package.\r\n\r\nSpecifically:\r\n\r\n```\r\nPUT t \r\n{\r\n \"mappings\": {\r\n \"parent\": {},\r\n \"child\": {\r\n \"_parent\": {\r\n \"type\": \"parent\"\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT t/_mapping/child\r\n{\r\n \"properties\": {}\r\n}\r\n```\r\n\r\nresults in `illegal_argument_exception`.\r\n\r\nAgain, providing `_parent` type definition in update request body worked\r\naround the issue.\r\n\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-02-27T19:51:08,706][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [default] failed to put mappings on indices [[[t/AK3xoZCBSCK1Wj6N81GGlA]]], type [child]\r\njava.lang.IllegalArgumentException: The _parent field's type option can't be changed: [parent]->[null]\r\n\tat org.elasticsearch.index.mapper.ParentFieldMapper.doMerge(ParentFieldMapper.java:301) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.index.mapper.FieldMapper.merge(FieldMapper.java:333) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.index.mapper.MetadataFieldMapper.merge(MetadataFieldMapper.java:73) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.index.mapper.Mapping.merge(Mapping.java:96) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.index.mapper.DocumentMapper.merge(DocumentMapper.java:333) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:267) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:674) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:653) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:612) [elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) 
[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.1.jar:5.2.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]\r\n```", "comments": [ { "body": "Hi there, \r\nI'd be interested in trying to tackle this one.", "created_at": "2017-03-13T09:07:23Z" }, { "body": "Actually, the other issue that I am working on is taking longer than expected. If anyone else wants to take this in the meantime, they are welcome to.", "created_at": "2017-03-14T23:36:49Z" }, { "body": "Hello,\r\n\r\nI see in the [ParentFieldMapper.java](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java), when we do a Mapping update of a type, we merge all the `MetadataFieldMapper` by calling its merge method with the new mappers, so in the implementation of `ParentFieldMapper`\r\nhttps://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java#L300\r\nwhen we merge the current with the new one, we checked if they are not equal, we throw an Exception, but if we didn't explicitly specify the `_parent` field in the new mapping, the `parentType` field of new mapper should be `null`, in this case we can just keep the original mapper unchanged as we intended instead of throw an Exception of trying to change it from `[sometype]` to `[null]`.\r\n\r\nI also made a PR with it, thanks for having a check.", "created_at": "2017-05-04T12:49:37Z" } ], "number": 23381, "title": "Need to respecify `_parent` when updating mapping of child type" }
{ "body": "Be able to update child type mapping without specifying it's `_parent` field, because it should be considered as unchanged if it is not explicitly specified.\r\nIn this case, the merged mapper's `parentType` field should be `null`, so we can just merge it to keep the original one instead of throwing an exception of \"trying to change it to 'null' \"\r\n\r\nClose #23381 ", "number": 24407, "review_comments": [ { "body": "Replace this try-catch block with an `expectThrows(...)` call?\r\n\r\n```java\r\nException e = expectThrows(IllegalArgumentException.class, () -> initMapper.merge(modParentMapper.mapping(), false))\r\nassertThat(e.getMessage(), containsString(\"The _parent field's type option can't be changed:\"));\r\n```", "created_at": "2017-05-31T08:21:33Z" }, { "body": "Maybe also add a merge with current parent field and a new field?\r\n\r\n```java\r\nString updatedMapping = XContentFactory.jsonBuilder().startObject().startObject(\"child\")\r\n .startObject(\"_parent\").field(\"type\", \"parent\").endObject()\r\n .startObject(\"properties\")\r\n .startObject(\"field2\").field(\"type\", \"text\").endObject()\r\n .endObject().endObject().endObject().string();\r\n DocumentMapper updatedMapper = parser.parse(\"child\", new CompressedXContent(updatedMapping));\r\n DocumentMapper mergedMapper = initMapper.merge(updatedMapper.mapping(), false);\r\n```", "created_at": "2017-05-31T08:23:38Z" } ], "title": "keep _parent field while updating child type mapping" }
{ "commits": [ { "message": "Keep _parent field mapping while updating child mapping (#23381)\n\nbe able to update child type mapping without specifying _parent\nfield instead of trying to update it to \"null\" as previously." }, { "message": "Fix _parent field's name changing while merging\n\nThe parent field should not be updatable, but the it's fieldType is changed and the new one is applied. So this patch make it keep the original one while merging." }, { "message": "Merge github.com:elastic/elasticsearch" } ], "files": [ { "diff": "@@ -295,9 +295,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n @Override\n protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ ParentFieldType currentFieldType = (ParentFieldType) fieldType.clone();\n super.doMerge(mergeWith, updateAllTypes);\n ParentFieldMapper fieldMergeWith = (ParentFieldMapper) mergeWith;\n- if (Objects.equals(parentType, fieldMergeWith.parentType) == false) {\n+ if (fieldMergeWith.parentType != null && Objects.equals(parentType, fieldMergeWith.parentType) == false) {\n throw new IllegalArgumentException(\"The _parent field's type option can't be changed: [\" + parentType + \"]->[\" + fieldMergeWith.parentType + \"]\");\n }\n \n@@ -308,7 +309,7 @@ protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n }\n \n if (active()) {\n- fieldType = fieldMergeWith.fieldType.clone();\n+ fieldType = currentFieldType;\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java", "status": "modified" }, { "diff": "@@ -168,9 +168,9 @@ public void run() {\n barrier.await();\n for (int i = 0; i < 200 && stopped.get() == false; i++) {\n final String fieldName = Integer.toString(i);\n- ParsedDocument doc = documentMapper.parse(SourceToParse.source(\"test\", \n- \"test\", \n- fieldName, \n+ ParsedDocument doc = documentMapper.parse(SourceToParse.source(\"test\",\n+ \"test\",\n+ fieldName,\n new BytesArray(\"{ \\\"\" + fieldName + \"\\\" : \\\"test\\\" }\"),\n XContentType.JSON));\n Mapping update = doc.dynamicMappingsUpdate();\n@@ -191,10 +191,10 @@ public void run() {\n while(stopped.get() == false) {\n final String fieldName = lastIntroducedFieldName.get();\n final BytesReference source = new BytesArray(\"{ \\\"\" + fieldName + \"\\\" : \\\"test\\\" }\");\n- ParsedDocument parsedDoc = documentMapper.parse(SourceToParse.source(\"test\", \n- \"test\", \n- \"random\", \n- source, \n+ ParsedDocument parsedDoc = documentMapper.parse(SourceToParse.source(\"test\",\n+ \"test\",\n+ \"random\",\n+ source,\n XContentType.JSON));\n if (parsedDoc.dynamicMappingsUpdate() != null) {\n // not in the mapping yet, try again\n@@ -235,4 +235,65 @@ public void testDoNotRepeatOriginalMapping() throws IOException {\n assertNotNull(mapper.mappers().getMapper(\"foo\"));\n assertFalse(mapper.sourceMapper().enabled());\n }\n+\n+ public void testMergeChildType() throws IOException {\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ String initMapping = XContentFactory.jsonBuilder().startObject().startObject(\"child\")\n+ .startObject(\"_parent\").field(\"type\", \"parent\").endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper initMapper = parser.parse(\"child\", new CompressedXContent(initMapping));\n+\n+ assertThat(initMapper.mappers().getMapper(\"_parent#parent\"), notNullValue());\n+\n+ String updatedMapping1 = XContentFactory.jsonBuilder().startObject().startObject(\"child\")\n+ 
.startObject(\"properties\")\n+ .startObject(\"name\").field(\"type\", \"text\").endObject()\n+ .endObject().endObject().endObject().string();\n+ DocumentMapper updatedMapper1 = parser.parse(\"child\", new CompressedXContent(updatedMapping1));\n+ DocumentMapper mergedMapper1 = initMapper.merge(updatedMapper1.mapping(), false);\n+\n+ assertThat(mergedMapper1.mappers().getMapper(\"_parent#parent\"), notNullValue());\n+ assertThat(mergedMapper1.mappers().getMapper(\"name\"), notNullValue());\n+\n+ String updatedMapping2 = XContentFactory.jsonBuilder().startObject().startObject(\"child\")\n+ .startObject(\"_parent\").field(\"type\", \"parent\").endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"age\").field(\"type\", \"byte\").endObject()\n+ .endObject().endObject().endObject().string();\n+ DocumentMapper updatedMapper2 = parser.parse(\"child\", new CompressedXContent(updatedMapping2));\n+ DocumentMapper mergedMapper2 = mergedMapper1.merge(updatedMapper2.mapping(), false);\n+\n+ assertThat(mergedMapper2.mappers().getMapper(\"_parent#parent\"), notNullValue());\n+ assertThat(mergedMapper2.mappers().getMapper(\"name\"), notNullValue());\n+ assertThat(mergedMapper2.mappers().getMapper(\"age\"), notNullValue());\n+\n+ String modParentMapping = XContentFactory.jsonBuilder().startObject().startObject(\"child\")\n+ .startObject(\"_parent\").field(\"type\", \"new_parent\").endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper modParentMapper = parser.parse(\"child\", new CompressedXContent(modParentMapping));\n+ Exception e = expectThrows(IllegalArgumentException.class, () -> initMapper.merge(modParentMapper.mapping(), false));\n+ assertThat(e.getMessage(), containsString(\"The _parent field's type option can't be changed: [parent]->[new_parent]\"));\n+ }\n+\n+ public void testMergeAddingParent() throws IOException {\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ String initMapping = XContentFactory.jsonBuilder().startObject().startObject(\"cowboy\")\n+ .startObject(\"properties\")\n+ .startObject(\"name\").field(\"type\", \"text\").endObject()\n+ .endObject().endObject().endObject().string();\n+ DocumentMapper initMapper = parser.parse(\"cowboy\", new CompressedXContent(initMapping));\n+\n+ assertThat(initMapper.mappers().getMapper(\"name\"), notNullValue());\n+\n+ String updatedMapping = XContentFactory.jsonBuilder().startObject().startObject(\"cowboy\")\n+ .startObject(\"_parent\").field(\"type\", \"parent\").endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"age\").field(\"type\", \"byte\").endObject()\n+ .endObject().endObject().endObject().string();\n+ DocumentMapper updatedMapper = parser.parse(\"cowboy\", new CompressedXContent(updatedMapping));\n+ Exception e = expectThrows(IllegalArgumentException.class, () -> initMapper.merge(updatedMapper.mapping(), false));\n+ assertThat(e.getMessage(), containsString(\"The _parent field's type option can't be changed: [null]->[parent]\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentMapperMergeTests.java", "status": "modified" } ] }
{ "body": "These are returned from the snapshot _status api (examples from 1.x and 2.x) when retrieving the status of a previous snapshot. It happens to have missing metadata files so it is throwing the exceptions below.\r\n\r\n```\r\n{\r\n \"error\" : \"IndexShardRestoreFailedException[[index_name_3.2017-04][0] failed to read shard snapshot file]; nested: FileNotFoundException[/path_to_index/0/snapshot-index_name_3.2017-04 (No such file or directory)]; \",\r\n \"status\" : 500\r\n}\r\n```\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [ {\r\n \"type\" : \"index_shard_restore_failed_exception\",\r\n \"reason\" : \"failed to read shard snapshot file\",\r\n \"shard\" : \"0\",\r\n \"index\" : \"index_name_3.2017-04\"\r\n } ],\r\n \"type\" : \"index_shard_restore_failed_exception\",\r\n \"reason\" : \"failed to read shard snapshot file\",\r\n \"shard\" : \"0\",\r\n \"index\" : \"index_name_3.2017-04\",\r\n \"caused_by\" : {\r\n \"type\" : \"no_such_file_exception\",\r\n \"reason\" : \"/path_to_index/0/snapshot-index_name_3.2017-04\"\r\n }\r\n },\r\n \"status\" : 500\r\n}\r\n```\r\n\r\nThe above exceptions are misleading since it references IndexShardRestoreFailedException and index_shard_restore_failed_exception. Given that the _status api is only for snapshots (not for snapshot restores), the exception strings can be misleading to the end user.", "comments": [], "number": 24225, "title": "IndexShardRestoreFailedException returning from the snapshot _status api" }
{ "body": "Changes the snapshot status read exception from the (misleading)\r\nIndexShardRestoreFailedException to the generic SnapshotException\r\n\r\nCloses #24225", "number": 24355, "review_comments": [], "title": "Change snapshot status error to use generic SnapshotException" }
{ "commits": [ { "message": "Changes the snapshot status read exception from the (misleading)\nIndexShardRestoreFailedException to the generic SnapshotException\n\nCloses #24225" } ], "files": [ { "diff": "@@ -954,7 +954,7 @@ public BlobStoreIndexShardSnapshot loadSnapshot() {\n try {\n return indexShardSnapshotFormat(version).read(blobContainer, snapshotId.getUUID());\n } catch (IOException ex) {\n- throw new IndexShardRestoreFailedException(shardId, \"failed to read shard snapshot file\", ex);\n+ throw new SnapshotException(metadata.name(), snapshotId, \"failed to read shard snapshot file for \" + shardId, ex);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java", "status": "modified" } ] }
{ "body": "Right now there is a big comment at the top of QueryDSLDocumentationTests saying that we hope that they lines in there match the docs in the java API documentation. We should really use the `include-tagged::{doc-tests}/DeleteDocumentationIT.java[delete-request]` style syntax to include it directly into the asciidoc. The tags that you'd end up putting in `QueryDSLDocumentationTests` would make it *super* clear that this file is included in documentation. That way someone who missed the comment won't break the tests.", "comments": [ { "body": "I'll keep this one on my list to look at eventually. If someone gets to it before I assign it to myself then please be my guest.", "created_at": "2017-04-25T19:42:25Z" } ], "number": 24320, "title": "QueryDSLDocumentationTests should be imported into the java API docs" }
{ "body": "We've had `QueryDSLDocumentationTests` for a while but it had a very\r\nhopeful comment at the top about how we want to make sure that the\r\nexample in the query-dsl docs match up with the test but we never\r\nhad anything that made *sure* that they did. This changes that!\r\n\r\nNow the examples from the query-dsl docs are all built from the\r\n`QueryDSLDocumentationTests`. All except for the percolator example\r\nbecause that is hard to do as it stands now.\r\n\r\nTo make this easier this change moves `QueryDSLDocumentationTests`\r\nfrom core and into the high level rest client. This is useful for\r\ntwo reasons:\r\n1. We expect the high level rest client to be able to use the builders.\r\n2. The code that builds that docs doesn't check out all of\r\nElasticsearch. It only checks out certain directories. Since we're\r\nalready including snippets from that directory we don't have to\r\nmake any changes to that process.\r\n\r\nCloses #24320\r\n", "number": 24354, "review_comments": [ { "body": "Grumble.... I moved this file to make it easier for the docs framework to pick up but that made this much harder to review.....", "created_at": "2017-04-26T21:36:16Z" } ], "title": "Build the java query DSL api docs from a test" }
{ "commits": [ { "message": "Build that java api docs from a test\n\nWe've had `QueryDSLDocumentationTests` for a while but it had a very\nhopeful comment at the top about how we want to make sure that the\nexample in the query-dsl docs match up with the test but we never\nhad anything that made *sure* that they did. This changes that!\n\nNow the examples from the query-dsl docs are all built from the\n`QueryDSLDocumentationTests`. All except for the percolator example\nbecause that is hard to do as it stands now.\n\nTo make this easier this change moves `QueryDSLDocumentationTests`\nfrom core and into the high level rest client. This is useful for\ntwo reasons:\n1. We expect the high level rest client to be able to use the builders.\n2. The code that builds that docs doesn't check out all of\nElasticsearch. It only checks out certain directories. Since we're\nalready including snippets from that directory we don't have to\nmake any changes to that process.\n\nCloses #24320" } ], "files": [ { "diff": "@@ -0,0 +1,453 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.client.documentation;\n+\n+import org.apache.lucene.search.join.ScoreMode;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.ShapeRelation;\n+import org.elasticsearch.common.geo.builders.CoordinatesBuilder;\n+import org.elasticsearch.common.geo.builders.ShapeBuilders;\n+import org.elasticsearch.common.unit.DistanceUnit;\n+import org.elasticsearch.index.query.GeoShapeQueryBuilder;\n+import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;\n+import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.FilterFunctionBuilder;\n+import org.elasticsearch.script.Script;\n+import org.elasticsearch.script.ScriptType;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static java.util.Collections.singletonMap;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.boostingQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.commonTermsQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.disMaxQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.existsQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.fuzzyQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.geoBoundingBoxQuery;\n+import static 
org.elasticsearch.index.query.QueryBuilders.geoDistanceQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.geoPolygonQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.geoShapeQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.hasParentQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.idsQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.moreLikeThisQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.prefixQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.regexpQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.scriptQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.simpleQueryStringQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.spanContainingQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.spanFirstQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.spanMultiTermQueryBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.spanNearQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.spanNotQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.spanOrQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.spanTermQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.spanWithinQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termsQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.typeQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.wildcardQuery;\n+import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.exponentialDecayFunction;\n+import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.randomFunction;\n+\n+/**\n+ * Examples of using the transport client that are imported into the transport client documentation.\n+ * There are no assertions here because we're mostly concerned with making sure that the examples\n+ * compile and don't throw weird runtime exceptions. 
Assertions and example data would be nice, but\n+ * that is secondary.\n+ */\n+public class QueryDSLDocumentationTests extends ESTestCase {\n+ public void testBool() {\n+ // tag::bool\n+ boolQuery()\n+ .must(termQuery(\"content\", \"test1\")) // <1>\n+ .must(termQuery(\"content\", \"test4\")) // <1>\n+ .mustNot(termQuery(\"content\", \"test2\")) // <2>\n+ .should(termQuery(\"content\", \"test3\")) // <3>\n+ .filter(termQuery(\"content\", \"test5\")); // <4>\n+ // end::bool\n+ }\n+\n+ public void testBoosting() {\n+ // tag::boosting\n+ boostingQuery(\n+ termQuery(\"name\",\"kimchy\"), // <1>\n+ termQuery(\"name\",\"dadoonet\")) // <2>\n+ .negativeBoost(0.2f); // <3>\n+ // end::boosting\n+ }\n+\n+ public void testCommonTerms() {\n+ // tag::common_terms\n+ commonTermsQuery(\"name\", // <1>\n+ \"kimchy\"); // <2>\n+ // end::common_terms\n+ }\n+\n+ public void testConstantScore() {\n+ // tag::constant_score\n+ constantScoreQuery(\n+ termQuery(\"name\",\"kimchy\")) // <1>\n+ .boost(2.0f); // <2>\n+ // end::constant_score\n+ }\n+\n+ public void testDisMax() {\n+ // tag::dis_max\n+ disMaxQuery()\n+ .add(termQuery(\"name\", \"kimchy\")) // <1>\n+ .add(termQuery(\"name\", \"elasticsearch\")) // <2>\n+ .boost(1.2f) // <3>\n+ .tieBreaker(0.7f); // <4>\n+ // end::dis_max\n+ }\n+\n+ public void testExists() {\n+ // tag::exists\n+ existsQuery(\"name\"); // <1>\n+ // end::exists\n+ }\n+\n+ public void testFunctionScore() {\n+ // tag::function_score\n+ FilterFunctionBuilder[] functions = {\n+ new FunctionScoreQueryBuilder.FilterFunctionBuilder(\n+ matchQuery(\"name\", \"kimchy\"), // <1>\n+ randomFunction(\"ABCDEF\")), // <2>\n+ new FunctionScoreQueryBuilder.FilterFunctionBuilder(\n+ exponentialDecayFunction(\"age\", 0L, 1L)) // <3>\n+ };\n+ functionScoreQuery(functions);\n+ // end::function_score\n+ }\n+\n+ public void testFuzzy() {\n+ // tag::fuzzy\n+ fuzzyQuery(\n+ \"name\", // <1>\n+ \"kimchy\"); // <2>\n+ // end::fuzzy\n+ }\n+\n+ public void testGeoBoundingBox() {\n+ // tag::geo_bounding_box\n+ geoBoundingBoxQuery(\"pin.location\") // <1>\n+ .setCorners(40.73, -74.1, // <2>\n+ 40.717, -73.99); // <3>\n+ // end::geo_bounding_box\n+ }\n+\n+ public void testGeoDistance() {\n+ // tag::geo_distance\n+ geoDistanceQuery(\"pin.location\") // <1>\n+ .point(40, -70) // <2>\n+ .distance(200, DistanceUnit.KILOMETERS); // <3>\n+ // end::geo_distance\n+ }\n+\n+ public void testGeoPolygon() {\n+ // tag::geo_polygon\n+ List<GeoPoint> points = new ArrayList<GeoPoint>(); // <1>\n+ points.add(new GeoPoint(40, -70));\n+ points.add(new GeoPoint(30, -80));\n+ points.add(new GeoPoint(20, -90));\n+ geoPolygonQuery(\"pin.location\", points); // <2>\n+ // end::geo_polygon\n+ }\n+\n+ public void testGeoShape() throws IOException {\n+ {\n+ // tag::geo_shape\n+ GeoShapeQueryBuilder qb = geoShapeQuery(\n+ \"pin.location\", // <1>\n+ ShapeBuilders.newMultiPoint( // <2>\n+ new CoordinatesBuilder()\n+ .coordinate(0, 0)\n+ .coordinate(0, 10)\n+ .coordinate(10, 10)\n+ .coordinate(10, 0)\n+ .coordinate(0, 0)\n+ .build()));\n+ qb.relation(ShapeRelation.WITHIN); // <3>\n+ // end::geo_shape\n+ }\n+\n+ {\n+ // tag::indexed_geo_shape\n+ // Using pre-indexed shapes\n+ GeoShapeQueryBuilder qb = geoShapeQuery(\n+ \"pin.location\", // <1>\n+ \"DEU\", // <2>\n+ \"countries\"); // <3>\n+ qb.relation(ShapeRelation.WITHIN) // <4>\n+ .indexedShapeIndex(\"shapes\") // <5>\n+ .indexedShapePath(\"location\"); // <6>\n+ // end::indexed_geo_shape\n+ }\n+ }\n+\n+ public void testHasChild() {\n+ // tag::has_child\n+ hasChildQuery(\n+ \"blog_tag\", // <1>\n+ 
termQuery(\"tag\",\"something\"), // <2>\n+ ScoreMode.None); // <3>\n+ // end::has_child\n+ }\n+\n+ public void testHasParent() {\n+ // tag::has_parent\n+ hasParentQuery(\n+ \"blog\", // <1>\n+ termQuery(\"tag\",\"something\"), // <2>\n+ false); // <3>\n+ // end::has_parent\n+ }\n+\n+ public void testIds() {\n+ // tag::ids\n+ idsQuery(\"my_type\", \"type2\")\n+ .addIds(\"1\", \"4\", \"100\");\n+\n+ idsQuery() // <1>\n+ .addIds(\"1\", \"4\", \"100\");\n+ // end::ids\n+ }\n+\n+ public void testMatchAll() {\n+ // tag::match_all\n+ matchAllQuery();\n+ // end::match_all\n+ }\n+\n+ public void testMatch() {\n+ // tag::match\n+ matchQuery(\n+ \"name\", // <1>\n+ \"kimchy elasticsearch\"); // <2>\n+ // end::match\n+ }\n+\n+ public void testMoreLikeThis() {\n+ // tag::more_like_this\n+ String[] fields = {\"name.first\", \"name.last\"}; // <1>\n+ String[] texts = {\"text like this one\"}; // <2>\n+\n+ moreLikeThisQuery(fields, texts, null)\n+ .minTermFreq(1) // <3>\n+ .maxQueryTerms(12); // <4>\n+ // end::more_like_this\n+ }\n+\n+ public void testMultiMatch() {\n+ // tag::multi_match\n+ multiMatchQuery(\n+ \"kimchy elasticsearch\", // <1>\n+ \"user\", \"message\"); // <2>\n+ // end::multi_match\n+ }\n+\n+ public void testNested() {\n+ // tag::nested\n+ nestedQuery(\n+ \"obj1\", // <1>\n+ boolQuery() // <2>\n+ .must(matchQuery(\"obj1.name\", \"blue\"))\n+ .must(rangeQuery(\"obj1.count\").gt(5)),\n+ ScoreMode.Avg); // <3>\n+ // end::nested\n+ }\n+\n+ public void testPrefix() {\n+ // tag::prefix\n+ prefixQuery(\n+ \"brand\", // <1>\n+ \"heine\"); // <2>\n+ // end::prefix\n+ }\n+\n+ public void testQueryString() {\n+ // tag::query_string\n+ queryStringQuery(\"+kimchy -elasticsearch\");\n+ // end::query_string\n+ }\n+\n+ public void testRange() {\n+ // tag::range\n+ rangeQuery(\"price\") // <1>\n+ .from(5) // <2>\n+ .to(10) // <3>\n+ .includeLower(true) // <4>\n+ .includeUpper(false); // <5>\n+ // end::range\n+\n+ // tag::range_simplified\n+ // A simplified form using gte, gt, lt or lte\n+ rangeQuery(\"age\") // <1>\n+ .gte(\"10\") // <2>\n+ .lt(\"20\"); // <3>\n+ // end::range_simplified\n+ }\n+\n+ public void testRegExp() {\n+ // tag::regexp\n+ regexpQuery(\n+ \"name.first\", // <1>\n+ \"s.*y\"); // <2>\n+ // end::regexp\n+ }\n+\n+ public void testScript() {\n+ // tag::script_inline\n+ scriptQuery(\n+ new Script(\"doc['num1'].value > 1\") // <1>\n+ );\n+ // end::script_inline\n+\n+ // tag::script_file\n+ Map<String, Object> parameters = new HashMap<>();\n+ parameters.put(\"param1\", 5);\n+ scriptQuery(new Script(\n+ ScriptType.FILE, // <1>\n+ \"painless\", // <2>\n+ \"myscript\", // <3>\n+ singletonMap(\"param1\", 5))); // <4>\n+ // end::script_file\n+ }\n+\n+ public void testSimpleQueryString() {\n+ // tag::simple_query_string\n+ simpleQueryStringQuery(\"+kimchy -elasticsearch\");\n+ // end::simple_query_string\n+ }\n+\n+ public void testSpanContaining() {\n+ // tag::span_containing\n+ spanContainingQuery(\n+ spanNearQuery(spanTermQuery(\"field1\",\"bar\"), 5) // <1>\n+ .addClause(spanTermQuery(\"field1\",\"baz\"))\n+ .inOrder(true),\n+ spanTermQuery(\"field1\",\"foo\")); // <2>\n+ // end::span_containing\n+ }\n+\n+ public void testSpanFirst() {\n+ // tag::span_first\n+ spanFirstQuery(\n+ spanTermQuery(\"user\", \"kimchy\"), // <1>\n+ 3 // <2>\n+ );\n+ // end::span_first\n+ }\n+\n+ public void testSpanMultiTerm() {\n+ // tag::span_multi\n+ spanMultiTermQueryBuilder(\n+ prefixQuery(\"user\", \"ki\")); // <1>\n+ // end::span_multi\n+ }\n+\n+ public void testSpanNear() {\n+ // tag::span_near\n+ 
spanNearQuery(\n+ spanTermQuery(\"field\",\"value1\"), // <1>\n+ 12) // <2>\n+ .addClause(spanTermQuery(\"field\",\"value2\")) // <1>\n+ .addClause(spanTermQuery(\"field\",\"value3\")) // <1>\n+ .inOrder(false); // <3>\n+ // end::span_near\n+ }\n+\n+ public void testSpanNot() {\n+ // tag::span_not\n+ spanNotQuery(\n+ spanTermQuery(\"field\",\"value1\"), // <1>\n+ spanTermQuery(\"field\",\"value2\")); // <2>\n+ // end::span_not\n+ }\n+\n+ public void testSpanOr() {\n+ // tag::span_or\n+ spanOrQuery(spanTermQuery(\"field\",\"value1\")) // <1>\n+ .addClause(spanTermQuery(\"field\",\"value2\")) // <1>\n+ .addClause(spanTermQuery(\"field\",\"value3\")); // <1>\n+ // end::span_or\n+ }\n+\n+ public void testSpanTerm() {\n+ // tag::span_term\n+ spanTermQuery(\n+ \"user\", // <1>\n+ \"kimchy\"); // <2>\n+ // end::span_term\n+ }\n+\n+ public void testSpanWithin() {\n+ // tag::span_within\n+ spanWithinQuery(\n+ spanNearQuery(spanTermQuery(\"field1\", \"bar\"), 5) // <1>\n+ .addClause(spanTermQuery(\"field1\", \"baz\"))\n+ .inOrder(true),\n+ spanTermQuery(\"field1\", \"foo\")); // <2>\n+ // end::span_within\n+ }\n+\n+ public void testTerm() {\n+ // tag::term\n+ termQuery(\n+ \"name\", // <1>\n+ \"kimchy\"); // <2>\n+ // end::term\n+ }\n+\n+ public void testTerms() {\n+ // tag::terms\n+ termsQuery(\"tags\", // <1>\n+ \"blue\", \"pill\"); // <2>\n+ // end::terms\n+ }\n+\n+ public void testType() {\n+ // tag::type\n+ typeQuery(\"my_type\"); // <1>\n+ // end::type\n+ }\n+\n+ public void testWildcard() {\n+ // tag::wildcard\n+ wildcardQuery(\n+ \"user\", // <1>\n+ \"k?mch*\"); // <2>\n+ // end::wildcard\n+ }\n+}", "filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java", "status": "added" }, { "diff": "@@ -19,6 +19,8 @@ Note that you can easily print (aka debug) JSON generated queries using\n The `QueryBuilder` can then be used with any API that accepts a query,\n such as `count` and `search`.\n \n+:query-dsl-test: {docdir}/../../client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java\n+\n include::query-dsl/match-all-query.asciidoc[]\n \n include::query-dsl/full-text-queries.asciidoc[]\n@@ -35,4 +37,4 @@ include::query-dsl/special-queries.asciidoc[]\n \n include::query-dsl/span-queries.asciidoc[]\n \n-\n+:query-dsl-test!:", "filename": "docs/java-api/query-dsl.asciidoc", "status": "modified" }, { "diff": "@@ -3,17 +3,11 @@\n \n See {ref}/query-dsl-bool-query.html[Bool Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = boolQuery()\n- .must(termQuery(\"content\", \"test1\")) <1>\n- .must(termQuery(\"content\", \"test4\")) <1>\n- .mustNot(termQuery(\"content\", \"test2\")) <2>\n- .should(termQuery(\"content\", \"test3\")) <3>\n- .filter(termQuery(\"content\", \"test5\")); <4>\n+include-tagged::{query-dsl-test}[bool]\n --------------------------------------------------\n <1> must query\n <2> must not query\n <3> should query\n <4> a query that must appear in the matching documents but doesn't contribute to scoring.\n-", "filename": "docs/java-api/query-dsl/bool-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,14 +3,10 @@\n \n See {ref}/query-dsl-boosting-query.html[Boosting Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = boostingQuery(\n- termQuery(\"name\",\"kimchy\"), 
<1> \n- termQuery(\"name\",\"dadoonet\")) <2>\n- .negativeBoost(0.2f); <3>\n+include-tagged::{query-dsl-test}[boosting]\n --------------------------------------------------\n <1> query that will promote documents\n <2> query that will demote documents\n <3> negative boost\n-", "filename": "docs/java-api/query-dsl/boosting-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,10 +3,9 @@\n \n See {ref}/query-dsl-common-terms-query.html[Common Terms Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = commonTermsQuery(\"name\", <1>\n- \"kimchy\"); <2>\n+include-tagged::{query-dsl-test}[common_terms]\n --------------------------------------------------\n <1> field\n <2> value", "filename": "docs/java-api/query-dsl/common-terms-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,12 +3,9 @@\n \n See {ref}/query-dsl-constant-score-query.html[Constant Score Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = constantScoreQuery(\n- termQuery(\"name\",\"kimchy\") <1>\n- )\n- .boost(2.0f); <2>\n+include-tagged::{query-dsl-test}[constant_score]\n --------------------------------------------------\n <1> your query\n <2> query score", "filename": "docs/java-api/query-dsl/constant-score-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,13 +3,9 @@\n \n See {ref}/query-dsl-dis-max-query.html[Dis Max Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = disMaxQuery()\n- .add(termQuery(\"name\", \"kimchy\")) <1>\n- .add(termQuery(\"name\", \"elasticsearch\")) <2>\n- .boost(1.2f) <3>\n- .tieBreaker(0.7f); <4>\n+include-tagged::{query-dsl-test}[dis_max]\n --------------------------------------------------\n <1> add your queries\n <2> add your queries", "filename": "docs/java-api/query-dsl/dis-max-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,9 +3,8 @@\n \n See {ref}/query-dsl-exists-query.html[Exists Query].\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = existsQuery(\"name\"); <1>\n+include-tagged::{query-dsl-test}[exists]\n --------------------------------------------------\n <1> field\n-", "filename": "docs/java-api/query-dsl/exists-query.asciidoc", "status": "modified" }, { "diff": "@@ -10,18 +10,10 @@ To use `ScoreFunctionBuilders` just import them in your class:\n import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.*;\n --------------------------------------------------\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-FilterFunctionBuilder[] functions = {\n- new FunctionScoreQueryBuilder.FilterFunctionBuilder(\n- matchQuery(\"name\", \"kimchy\"), <1>\n- randomFunction(\"ABCDEF\")), <2>\n- new FunctionScoreQueryBuilder.FilterFunctionBuilder(\n- exponentialDecayFunction(\"age\", 0L, 1L)) <3>\n-};\n-QueryBuilder qb = QueryBuilders.functionScoreQuery(functions);\n+include-tagged::{query-dsl-test}[function_score]\n --------------------------------------------------\n <1> Add a first function based on a query\n <2> And randomize the score based on a given seed\n <3> Add another function based on the age field\n-", "filename": 
"docs/java-api/query-dsl/function-score-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,13 +3,9 @@\n \n See {ref}/query-dsl-fuzzy-query.html[Fuzzy Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = fuzzyQuery(\n- \"name\", <1>\n- \"kimzhy\" <2>\n-);\n+include-tagged::{query-dsl-test}[fuzzy]\n --------------------------------------------------\n <1> field\n <2> text\n-", "filename": "docs/java-api/query-dsl/fuzzy-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,14 +3,10 @@\n \n See {ref}/query-dsl-geo-bounding-box-query.html[Geo Bounding Box Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = geoBoundingBoxQuery(\"pin.location\") <1>\n- .setCorners(40.73, -74.1, <2>\n- 40.717, -73.99); <3>\n+include-tagged::{query-dsl-test}[geo_bounding_box]\n --------------------------------------------------\n <1> field\n <2> bounding box top left point\n <3> bounding box bottom right point\n-\n-", "filename": "docs/java-api/query-dsl/geo-bounding-box-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,13 +3,10 @@\n \n See {ref}/query-dsl-geo-distance-query.html[Geo Distance Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = geoDistanceQuery(\"pin.location\") <1>\n- .point(40, -70) <2>\n- .distance(200, DistanceUnit.KILOMETERS); <3>\n+include-tagged::{query-dsl-test}[geo_bounding_box]\n --------------------------------------------------\n <1> field\n <2> center point\n <3> distance from center point\n-", "filename": "docs/java-api/query-dsl/geo-distance-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,16 +3,9 @@\n \n See {ref}/query-dsl-geo-polygon-query.html[Geo Polygon Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-List<GeoPoint> points = new ArrayList<>(); <1>\n-points.add(new GeoPoint(40, -70));\n-points.add(new GeoPoint(30, -80));\n-points.add(new GeoPoint(20, -90));\n-\n-QueryBuilder qb = \n- geoPolygonQuery(\"pin.location\", points); <2>\n+include-tagged::{query-dsl-test}[geo_polygon]\n --------------------------------------------------\n <1> add your polygon of points a document should fall within\n <2> initialise the query with field and points\n-", "filename": "docs/java-api/query-dsl/geo-polygon-query.asciidoc", "status": "modified" }, { "diff": "@@ -37,34 +37,17 @@ import org.elasticsearch.common.geo.ShapeRelation;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n --------------------------------------------------\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-List<Coordinate> points = new ArrayList<>();\n-points.add(new Coordinate(0, 0));\n-points.add(new Coordinate(0, 10));\n-points.add(new Coordinate(10, 10));\n-points.add(new Coordinate(10, 0));\n-points.add(new Coordinate(0, 0));\n-\n-QueryBuilder qb = geoShapeQuery(\n- \"pin.location\", <1>\n- ShapeBuilders.newMultiPoint(points) <2>\n- .relation(ShapeRelation.WITHIN); <3>\n+include-tagged::{query-dsl-test}[geo_shape]\n --------------------------------------------------\n <1> field\n <2> shape\n <3> relation can be `ShapeRelation.CONTAINS`, `ShapeRelation.WITHIN`, `ShapeRelation.INTERSECTS` or 
`ShapeRelation.DISJOINT`\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-// Using pre-indexed shapes\n-QueryBuilder qb = geoShapeQuery(\n- \"pin.location\", <1>\n- \"DEU\", <2>\n- \"countries\") <3>\n- .relation(ShapeRelation.WITHIN)) <4>\n- .indexedShapeIndex(\"shapes\") <5>\n- .indexedShapePath(\"location\"); <6>\n+include-tagged::{query-dsl-test}[indexed_geo_shape]\n --------------------------------------------------\n <1> field\n <2> The ID of the document that containing the pre-indexed shape.", "filename": "docs/java-api/query-dsl/geo-shape-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,15 +3,10 @@\n \n See {ref}/query-dsl-has-child-query.html[Has Child Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = hasChildQuery(\n- \"blog_tag\", <1>\n- termQuery(\"tag\",\"something\"), <2>\n- ScoreMode.Avg <3>\n-);\n+include-tagged::{query-dsl-test}[has_child]\n --------------------------------------------------\n <1> child type to query against\n <2> query\n <3> score mode can be `ScoreMode.Avg`, `ScoreMode.Max`, `ScoreMode.Min`, `ScoreMode.None` or `ScoreMode.Total`\n-", "filename": "docs/java-api/query-dsl/has-child-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,14 +3,10 @@\n \n See {ref}/query-dsl-has-parent-query.html[Has Parent]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = hasParentQuery(\n- \"blog\", <1>\n- termQuery(\"tag\",\"something\"), <2>\n- false <3>\n-);\n+include-tagged::{query-dsl-test}[has_parent]\n --------------------------------------------------\n <1> parent type to query against\n <2> query\n-<3> whether the score from the parent hit should propogate to the child hit\n+<3> whether the score from the parent hit should propagate to the child hit", "filename": "docs/java-api/query-dsl/has-parent-query.asciidoc", "status": "modified" }, { "diff": "@@ -4,13 +4,8 @@\n \n See {ref}/query-dsl-ids-query.html[Ids Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = idsQuery(\"my_type\", \"type2\")\n- .addIds(\"1\", \"4\", \"100\");\n-\n-QueryBuilder qb = idsQuery() <1>\n- .addIds(\"1\", \"4\", \"100\");\n+include-tagged::{query-dsl-test}[ids]\n --------------------------------------------------\n <1> type is optional\n-", "filename": "docs/java-api/query-dsl/ids-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,7 +3,7 @@\n \n See {ref}/query-dsl-match-all-query.html[Match All Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = matchAllQuery();\n+include-tagged::{query-dsl-test}[match_all]\n --------------------------------------------------", "filename": "docs/java-api/query-dsl/match-all-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,13 +3,9 @@\n \n See {ref}/query-dsl-match-query.html[Match Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = matchQuery(\n- \"name\", <1>\n- \"kimchy elasticsearch\" <2>\n-);\n+include-tagged::{query-dsl-test}[match]\n --------------------------------------------------\n <1> field\n <2> text\n-", 
"filename": "docs/java-api/query-dsl/match-query.asciidoc", "status": "modified" }, { "diff": "@@ -1,18 +1,11 @@\n [[java-query-dsl-mlt-query]]\n ==== More Like This Query\n \n-See:\n- * {ref}/query-dsl-mlt-query.html[More Like This Query]\n+See {ref}/query-dsl-mlt-query.html[More Like This Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-String[] fields = {\"name.first\", \"name.last\"}; <1>\n-String[] texts = {\"text like this one\"}; <2>\n-Item[] items = null;\n- \n-QueryBuilder qb = moreLikeThisQuery(fields, texts, items)\n- .minTermFreq(1) <3>\n- .maxQueryTerms(12); <4>\n+include-tagged::{query-dsl-test}[more_like_this]\n --------------------------------------------------\n <1> fields\n <2> text", "filename": "docs/java-api/query-dsl/mlt-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,12 +3,9 @@\n \n See {ref}/query-dsl-multi-match-query.html[Multi Match Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = multiMatchQuery(\n- \"kimchy elasticsearch\", <1>\n- \"user\", \"message\" <2>\n-);\n+include-tagged::{query-dsl-test}[multi_match]\n --------------------------------------------------\n <1> text\n <2> fields", "filename": "docs/java-api/query-dsl/multi-match-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,15 +3,9 @@\n \n See {ref}/query-dsl-nested-query.html[Nested Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = nestedQuery(\n- \"obj1\", <1>\n- boolQuery() <2>\n- .must(matchQuery(\"obj1.name\", \"blue\"))\n- .must(rangeQuery(\"obj1.count\").gt(5)),\n- ScoreMode.Avg <3>\n- );\n+include-tagged::{query-dsl-test}[nested]\n --------------------------------------------------\n <1> path to nested document\n <2> your query. 
Any fields referenced inside the query must use the complete path (fully qualified).", "filename": "docs/java-api/query-dsl/nested-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,14 +3,9 @@\n \n See {ref}/query-dsl-prefix-query.html[Prefix Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = prefixQuery(\n- \"brand\", <1>\n- \"heine\" <2>\n-);\n+include-tagged::{query-dsl-test}[prefix]\n --------------------------------------------------\n <1> field\n <2> prefix\n-\n-", "filename": "docs/java-api/query-dsl/prefix-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,8 +3,7 @@\n \n See {ref}/query-dsl-query-string-query.html[Query String Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = queryStringQuery(\"+kimchy -elasticsearch\"); <1>\n+include-tagged::{query-dsl-test}[query_string]\n --------------------------------------------------\n-<1> text", "filename": "docs/java-api/query-dsl/query-string-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,26 +3,19 @@\n \n See {ref}/query-dsl-range-query.html[Range Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = rangeQuery(\"price\") <1>\n- .from(5) <2>\n- .to(10) <3>\n- .includeLower(true) <4>\n- .includeUpper(false); <5>\n+include-tagged::{query-dsl-test}[range]\n --------------------------------------------------\n <1> field\n <2> from\n <3> to\n <4> include lower value means that `from` is `gt` when `false` or `gte` when `true`\n <5> include upper value means that `to` is `lt` when `false` or `lte` when `true`\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-// A simplified form using gte, gt, lt or lte\n-QueryBuilder qb = rangeQuery(\"age\") <1>\n- .gte(\"10\") <2>\n- .lt(\"20\"); <3>\n+include-tagged::{query-dsl-test}[range_simplified]\n --------------------------------------------------\n <1> field\n <2> set `from` to 10 and `includeLower` to `true`", "filename": "docs/java-api/query-dsl/range-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,11 +3,9 @@\n \n See {ref}/query-dsl-regexp-query.html[Regexp Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = regexpQuery(\n- \"name.first\", <1>\n- \"s.*y\"); <2>\n+include-tagged::{query-dsl-test}[regexp]\n --------------------------------------------------\n <1> field\n <2> regexp", "filename": "docs/java-api/query-dsl/regexp-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,11 +3,9 @@\n \n See {ref}/query-dsl-script-query.html[Script Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = scriptQuery(\n- new Script(\"doc['num1'].value > 1\") <1>\n-);\n+include-tagged::{query-dsl-test}[script_inline]\n --------------------------------------------------\n <1> inlined script\n \n@@ -21,17 +19,11 @@ doc['num1'].value > params.param1\n \n You can use it then with:\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = scriptQuery(\n- new Script(\n- 
ScriptType.FILE, <1>\n- \"painless\", <2>\n- \"myscript\", <3>\n- Collections.singletonMap(\"param1\", 5)) <4>\n-);\n+include-tagged::{query-dsl-test}[script_file]\n --------------------------------------------------\n <1> Script type: either `ScriptType.FILE`, `ScriptType.INLINE` or `ScriptType.INDEXED`\n <2> Scripting engine\n <3> Script name\n-<4> Parameters as a `Map` of `<String, Object>`\n+<4> Parameters as a `Map<String, Object>`", "filename": "docs/java-api/query-dsl/script-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,8 +3,7 @@\n \n See {ref}/query-dsl-simple-query-string-query.html[Simple Query String Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = simpleQueryStringQuery(\"+kimchy -elasticsearch\"); <1>\n+include-tagged::{query-dsl-test}[simple_query_string]\n --------------------------------------------------\n-<1> text", "filename": "docs/java-api/query-dsl/simple-query-string-query.asciidoc", "status": "modified" }, { "diff": "@@ -3,14 +3,9 @@\n \n See {ref}/query-dsl-span-containing-query.html[Span Containing Query]\n \n-[source,java]\n+[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n --------------------------------------------------\n-QueryBuilder qb = spanContainingQuery(\n- spanNearQuery(spanTermQuery(\"field1\",\"bar\"), 5) <1>\n- .addClause(spanTermQuery(\"field1\",\"baz\"))\n- .inOrder(true),\n- spanTermQuery(\"field1\",\"foo\")); <2>\n+include-tagged::{query-dsl-test}[span_containing]\n --------------------------------------------------\n <1> `big` part\n <2> `little` part\n-", "filename": "docs/java-api/query-dsl/span-containing-query.asciidoc", "status": "modified" } ] }
{ "body": "An interesting edge case that I found when working on #24330: It seems that `percentiles_bucket` relies on the `percents` parameter beeing specifies in increasing order or magnitude. If the list of percentiles to calculate is provided in a different order, e.g.\r\n```\r\n\"percentiles_monthly_sales\": {\r\n \"percentiles_bucket\": {\r\n \"buckets_path\": \"sales_per_month>sales\", \r\n \"percents\": [ 50.0, 25.0, 75.0 ] \r\n }\r\n }\r\n```\r\nthe following error is thrown:\r\n```\r\n\"type\": \"illegal_argument_exception\",\r\n \"reason\": \"Percent requested [50.0] was not one of the computed percentiles. Available keys are: [50.0, 25.0, 75.0]\"\r\n```\r\n\r\nThe reason seems to be that in `InternalPercentileBucket#percentile(double percent)` (https://github.com/elastic/elasticsearch/blob/f217eb8ad8d3ecc82e2f926222b1f036b0d555b7/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/InternalPercentilesBucket.java#L75) the internal array is assumed to be sorted, but we don't enforce this, e.g. in the PercentilesBucketPipelineAggregator or even before when parsing the request.\r\n\r\nI think we could either throw an error early if we think the `percents` need to be provided in order, re-sort the array if the user provided it unsorted or change the implementation of `InternalPercentileBucket` so it doesn't rely on an ordered array.", "comments": [ { "body": "@colings86 do you have any preferences whether we should throw an error here or should try to make InternalPercentileBucket work without relying on an ordered array?", "created_at": "2017-04-26T12:28:03Z" }, { "body": "I think we should not rely on the order of the array the user provides", "created_at": "2017-04-26T12:46:50Z" }, { "body": "Good catch @cbuescher ", "created_at": "2017-05-02T08:58:48Z" } ], "number": 24331, "title": "Percentiles Bucket Aggregation relies on ordered `percents` array" }
{ "body": "Currently `InternalPercentilesBucket#percentile()` relies on the percent array passed in to be in sorted order. This changes the aggregation to store an internal lookup table that is constructed from the percent/percentiles arrays passed in that can be used to look up the percentile values.\r\n\r\nCloses #24331", "number": 24336, "review_comments": [ { "body": "Maybe we should even throw an error if this is not the case, wdyt?", "created_at": "2017-04-26T14:06:00Z" }, { "body": "++ I can't see anything good happening if they aren't the same length. The Iterator for example will throw an out-of-range exception if they don't match", "created_at": "2017-04-26T14:33:41Z" }, { "body": "I will change this to throw an IllegalArgumentException then and add a test for it.", "created_at": "2017-04-26T14:47:30Z" } ], "title": "InternalPercentilesBucket should not rely on ordered percents array" }
{ "commits": [ { "message": "InternalPercentilesBucket should not rely on ordered percents array" }, { "message": "Add exception when argument arrays are not equal size" }, { "message": "Fixing checkstyle issue" } ], "files": [ { "diff": "@@ -31,21 +31,35 @@\n \n import java.io.IOException;\n import java.util.Arrays;\n+import java.util.HashMap;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n+import java.util.Objects;\n \n public class InternalPercentilesBucket extends InternalNumericMetricsAggregation.MultiValue implements PercentilesBucket {\n private double[] percentiles;\n private double[] percents;\n+ private final transient Map<Double, Double> percentileLookups = new HashMap<>();\n \n public InternalPercentilesBucket(String name, double[] percents, double[] percentiles,\n DocValueFormat formatter, List<PipelineAggregator> pipelineAggregators,\n Map<String, Object> metaData) {\n super(name, pipelineAggregators, metaData);\n+ if ((percentiles.length == percents.length) == false) {\n+ throw new IllegalArgumentException(\"The number of provided percents and percentiles didn't match. percents: \"\n+ + Arrays.toString(percents) + \", percentiles: \" + Arrays.toString(percentiles));\n+ }\n this.format = formatter;\n this.percentiles = percentiles;\n this.percents = percents;\n+ computeLookup();\n+ }\n+\n+ private void computeLookup() {\n+ for (int i = 0; i < percents.length; i++) {\n+ percentileLookups.put(percents[i], percentiles[i]);\n+ }\n }\n \n /**\n@@ -56,6 +70,7 @@ public InternalPercentilesBucket(StreamInput in) throws IOException {\n format = in.readNamedWriteable(DocValueFormat.class);\n percentiles = in.readDoubleArray();\n percents = in.readDoubleArray();\n+ computeLookup();\n }\n \n @Override\n@@ -72,12 +87,12 @@ public String getWriteableName() {\n \n @Override\n public double percentile(double percent) throws IllegalArgumentException {\n- int index = Arrays.binarySearch(percents, percent);\n- if (index < 0) {\n+ Double percentile = percentileLookups.get(percent);\n+ if (percentile == null) {\n throw new IllegalArgumentException(\"Percent requested [\" + String.valueOf(percent) + \"] was not\" +\n \" one of the computed percentiles. 
Available keys are: \" + Arrays.toString(percents));\n }\n- return percentiles[index];\n+ return percentile;\n }\n \n @Override\n@@ -116,6 +131,17 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n return builder;\n }\n \n+ @Override\n+ protected boolean doEquals(Object obj) {\n+ InternalPercentilesBucket that = (InternalPercentilesBucket) obj;\n+ return Arrays.equals(percents, that.percents) && Arrays.equals(percentiles, that.percentiles);\n+ }\n+\n+ @Override\n+ protected int doHashCode() {\n+ return Objects.hash(Arrays.hashCode(percents), Arrays.hashCode(percentiles));\n+ }\n+\n public static class Iter implements Iterator<Percentile> {\n \n private final double[] percents;", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/InternalPercentilesBucket.java", "status": "modified" }, { "diff": "@@ -50,7 +50,7 @@ protected T createTestInstance(String name, List<PipelineAggregator> pipelineAgg\n protected abstract T createTestInstance(String name, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData,\n boolean keyed, DocValueFormat format, double[] percents, double[] values);\n \n- protected static double[] randomPercents() {\n+ public static double[] randomPercents() {\n List<Double> randomCdfValues = randomSubsetOf(randomIntBetween(1, 7), 0.01d, 0.05d, 0.25d, 0.50d, 0.75d, 0.95d, 0.99d);\n double[] percents = new double[randomCdfValues.size()];\n for (int i = 0; i < randomCdfValues.size(); i++) {", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/percentiles/InternalPercentilesTestCase.java", "status": "modified" }, { "diff": "@@ -0,0 +1,92 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile;\n+\n+import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.InternalAggregationTestCase;\n+import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n+\n+import java.util.Collections;\n+import java.util.Iterator;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.elasticsearch.search.aggregations.metrics.percentiles.InternalPercentilesTestCase.randomPercents;\n+\n+public class InternalPercentilesBucketTests extends InternalAggregationTestCase<InternalPercentilesBucket> {\n+\n+ @Override\n+ protected InternalPercentilesBucket createTestInstance(String name, List<PipelineAggregator> pipelineAggregators,\n+ Map<String, Object> metaData) {\n+ return createTestInstance(name, pipelineAggregators, metaData, randomPercents());\n+ }\n+\n+ private static InternalPercentilesBucket createTestInstance(String name, List<PipelineAggregator> pipelineAggregators,\n+ Map<String, Object> metaData, double[] percents) {\n+ DocValueFormat format = randomNumericDocValueFormat();\n+ final double[] percentiles = new double[percents.length];\n+ for (int i = 0; i < percents.length; ++i) {\n+ percentiles[i] = frequently() ? randomDouble() : Double.NaN;\n+ }\n+ return new InternalPercentilesBucket(name, percents, percentiles, format, pipelineAggregators, metaData);\n+ }\n+\n+ @Override\n+ public void testReduceRandom() {\n+ expectThrows(UnsupportedOperationException.class,\n+ () -> createTestInstance(\"name\", Collections.emptyList(), null).reduce(null, null));\n+ }\n+\n+ @Override\n+ protected void assertReduced(InternalPercentilesBucket reduced, List<InternalPercentilesBucket> inputs) {\n+ // no test since reduce operation is unsupported\n+ }\n+\n+ @Override\n+ protected Writeable.Reader<InternalPercentilesBucket> instanceReader() {\n+ return InternalPercentilesBucket::new;\n+ }\n+\n+ /**\n+ * check that we don't rely on the percent array order and that the iterator returns the values in the original order\n+ */\n+ public void testPercentOrder() {\n+ final double[] percents = new double[]{ 0.50, 0.25, 0.01, 0.99, 0.60 };\n+ InternalPercentilesBucket aggregation = createTestInstance(\"test\", Collections.emptyList(), Collections.emptyMap(), percents);\n+ Iterator<Percentile> iterator = aggregation.iterator();\n+ for (double percent : percents) {\n+ assertTrue(iterator.hasNext());\n+ Percentile percentile = iterator.next();\n+ assertEquals(percent, percentile.getPercent(), 0.0d);\n+ assertEquals(aggregation.percentile(percent), percentile.getValue(), 0.0d);\n+ }\n+ }\n+\n+ public void testErrorOnDifferentArgumentSize() {\n+ final double[] percents = new double[]{ 0.1, 0.2, 0.3};\n+ final double[] percentiles = new double[]{ 0.10, 0.2};\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> new InternalPercentilesBucket(\"test\", percents,\n+ percentiles, DocValueFormat.RAW, Collections.emptyList(), Collections.emptyMap()));\n+ assertEquals(\"The number of provided percents and percentiles didn't match. 
percents: [0.1, 0.2, 0.3], percentiles: [0.1, 0.2]\",\n+ e.getMessage());\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/InternalPercentilesBucketTests.java", "status": "added" } ] }
{ "body": "The simplest way to reproduce locally is with `gradle run` and then:\r\n\r\n```\r\nPOST /test/doc\r\n{ \"test\": \"test\" }\r\n\r\n\r\nPUT /test/_settings\r\n{\r\n \"settings\": {\r\n \"index.routing.allocation.require._name\": \"shrink_node_name\", \r\n \"index.blocks.write\": true \r\n }\r\n}\r\n\r\nPOST /test/_shrink/shrunk\r\n\r\nPUT /_snapshot/test_repo\r\n{\r\n \"type\": \"fs\",\r\n \"settings\": {\r\n \"compress\": true,\r\n \"location\": \"/Users/manybubbles/Workspaces/Elasticsearch/master/elasticsearch/distribution/build/cluster/shared/repo/test_repo\"\r\n }\r\n}\r\n\r\nPOST /_snapshot/test_repo/test_snapshot?wait_for_completion=true\r\n\r\nPOST /_snapshot/test_repo/test_snapshot/_restore?wait_for_completion=true\r\n{\r\n \"indices\": [\"shrunk\"]\r\n}\r\n```\r\n\r\nKibana will give you \"socket hang up\" error because Elasticsearch has tripped an assertion and killed itself:\r\n```\r\n[elasticsearch] java.lang.AssertionError: all settings must have been upgraded before\r\n[elasticsearch] at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:77) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n```\r\n\r\nIf you run without `-ea` then the shrunken index won't allocation properly because of the `index.routing.allocation.initial_recovery` setting. That setting cannot be cleared.", "comments": [ { "body": "> If you run without `-ea` then the shrunken index won't allocation properly because of the index.routing.allocation.initial_recovery setting. That setting cannot be cleared.\r\n\r\nClarification: if you run without `-ea` it won't recover properly unless the old node is around. But that is still a problem because then you can't snapshot in one cluster and restore to another. Or you can't restore after that node has been decommissioned. ", "created_at": "2017-04-21T20:38:10Z" }, { "body": "> abeyad self-assigned this 3 minutes ago\r\n\r\nGood luck!", "created_at": "2017-04-21T20:45:54Z" } ], "number": 24257, "title": "Snapshot and restore and shrink do not play well together" }
{ "body": "When an index is shrunk using the shrink APIs, the shrink operation adds\r\nsome internal index settings to the shrink index, for example\r\n`index.shrink.source.name|uuid` to denote the source index, as well as\r\n`index.routing.allocation.initial_recovery._id` to denote the node on\r\nwhich all shards for the source index resided when the shrunken index\r\nwas created. However, this presents a problem when taking a snapshot of\r\nthe shrunken index and restoring it to a cluster where the initial\r\nrecovery node is not present, or restoring to the same cluster where the\r\ninitial recovery node is offline or decommissioned. The restore\r\noperation fails to allocate the shard in the shrunken index to a node\r\nwhen the initial recovery node is not present, and a restore type of\r\nrecovery will *not* go through the PrimaryShardAllocator, meaning that\r\nit will not have the chance to force allocate the primary to a node in\r\nthe cluster. Rather, restore initiated shard allocation goes through\r\nthe BalancedShardAllocator which does not attempt to force allocate a\r\nprimary.\r\n\r\nThis commit fixes the aforementioned problem by not requiring allocation\r\nto occur on the initial recovery node when the recovery type is a\r\nrestore of a snapshot. This commit also ensures that the internal\r\nshrink index settings are recognized and not archived (which can trip an\r\nassertion in the restore scenario).\r\n\r\nCloses #24257", "number": 24322, "review_comments": [ { "body": "removing dead code here", "created_at": "2017-04-25T21:18:46Z" }, { "body": "@ywelsch can you look at this one please", "created_at": "2017-04-26T09:14:15Z" }, { "body": "maybe use `return IndexMetaData.INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getRawKey().match(key)`", "created_at": "2017-04-26T09:17:10Z" }, { "body": "@abeyad leave a comment why we dont' have snapshot in here?", "created_at": "2017-04-26T09:21:52Z" }, { "body": "As this is only used by the public method `isInitialRecovery`, which in turn is only used by `FilterAllocationDecider`, I prefer that you remove this definition and the `isInitialRecovery` method from the `RecoverySource` class and instead define a 2-element EnumSet in `FilterAllocationDecider` so that it can also be used by `FilterAllocationDeciderTests`. The remaining two usages of `isInitialRecovery() == false` in `IndexRoutingTable` and `RoutingTableTests` can be changed to `== EXISTING_STORE`.", "created_at": "2017-04-26T09:43:16Z" }, { "body": "Also, while you're changing the `if` conditions in `IndexRoutingTable` and `RoutingTableTests`, can you also replace `primaryShard.initializing() && primaryShard.relocating() == false` there by just `primaryShard.initializing()`, the second part is unnecessary and always follows from the first part.", "created_at": "2017-04-26T09:53:41Z" }, { "body": "no need to make this public; package-visible is good enough.", "created_at": "2017-04-26T17:26:28Z" }, { "body": "the problem is that `FilterAllocationDeciderTests` is not part of the same package, and moving it to the same package as `FilterAllocationDecider` breaks package visibility on the `AllocationService#reroute` protected method that the test depends on.", "created_at": "2017-04-26T17:30:39Z" }, { "body": "just use the public one with one less parameter...", "created_at": "2017-04-26T17:33:45Z" } ], "title": "Fixes restore of a shrunken index when initial recovery node is gone" }
{ "commits": [ { "message": "Fixes restore of a shrunken index when initial recovery node is gone\n\nWhen an index is shrunk using the shrink APIs, the shrink operation adds\nsome internal index settings to the shrink index, for example\n`index.shrink.source.name|uuid` to denote the source index, as well as\n`index.routing.allocation.initial_recovery._id` to denote the node on\nwhich all shards for the source index resided when the shrunken index\nwas created. However, this presents a problem when taking a snapshot of\nthe shrunken index and restoring it to a cluster where the initial\nrecovery node is not present, or restoring to the same cluster where the\ninitial recovery node is offline or decomissioned. The restore\noperation fails to allocate the shard in the shrunken index to a node\nwhen the initial recovery node is not present, and a restore type of\nrecovery will *not* go through the PrimaryShardAllocator, meaning that\nit will not have the chance to force allocate the primary to a node in\nthe cluster. Rather, restore initiated shard allocation goes through\nthe BalancedShardAllocator which does not attempt to force allocate a\nprimary.\n\nThis commit fixes the aforementioned problem by not requiring allocation\nto occur on the initial recovery node when the recovery type is a\nrestore of a snapshot. This commit also ensures that the internal\nshrink index settings are recognized and not archived (which can trip an\nassertion in the restore scenario).\n\nCloses #24257" }, { "message": "fix test" }, { "message": "fix test" }, { "message": "address feedback" }, { "message": "better return" }, { "message": "move test" } ], "files": [ { "diff": "@@ -41,8 +41,6 @@\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.logging.DeprecationLogger;\n-import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.settings.Settings;\n@@ -62,7 +60,6 @@\n import org.joda.time.DateTimeZone;\n \n import java.io.IOException;\n-import java.text.ParseException;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.EnumSet;\n@@ -440,12 +437,14 @@ public MappingMetaData mapping(String mappingType) {\n return mappings.get(mappingType);\n }\n \n- public static final Setting<String> INDEX_SHRINK_SOURCE_UUID = Setting.simpleString(\"index.shrink.source.uuid\");\n- public static final Setting<String> INDEX_SHRINK_SOURCE_NAME = Setting.simpleString(\"index.shrink.source.name\");\n+ public static final String INDEX_SHRINK_SOURCE_UUID_KEY = \"index.shrink.source.uuid\";\n+ public static final String INDEX_SHRINK_SOURCE_NAME_KEY = \"index.shrink.source.name\";\n+ public static final Setting<String> INDEX_SHRINK_SOURCE_UUID = Setting.simpleString(INDEX_SHRINK_SOURCE_UUID_KEY);\n+ public static final Setting<String> INDEX_SHRINK_SOURCE_NAME = Setting.simpleString(INDEX_SHRINK_SOURCE_NAME_KEY);\n \n \n public Index getMergeSourceIndex() {\n- return INDEX_SHRINK_SOURCE_UUID.exists(settings) ? new Index(INDEX_SHRINK_SOURCE_NAME.get(settings), INDEX_SHRINK_SOURCE_UUID.get(settings)) : null;\n+ return INDEX_SHRINK_SOURCE_UUID.exists(settings) ? 
new Index(INDEX_SHRINK_SOURCE_NAME.get(settings), INDEX_SHRINK_SOURCE_UUID.get(settings)) : null;\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java", "status": "modified" }, { "diff": "@@ -598,7 +598,7 @@ static void prepareShrinkIndexSettings(ClusterState currentState, Set<String> ma\n indexSettingsBuilder\n // we use \"i.r.a.initial_recovery\" rather than \"i.r.a.require|include\" since we want the replica to allocate right away\n // once we are allocated.\n- .put(\"index.routing.allocation.initial_recovery._id\",\n+ .put(IndexMetaData.INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getKey() + \"_id\",\n Strings.arrayToCommaDelimitedString(nodesToAllocateOn.toArray()))\n // we only try once and then give up with a shrink index\n .put(\"index.allocation.max_retries\", 1)", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -139,8 +139,8 @@ boolean validate(MetaData metaData) {\n \"allocation set \" + inSyncAllocationIds);\n }\n \n- if (shardRouting.primary() && shardRouting.initializing() && shardRouting.relocating() == false &&\n- RecoverySource.isInitialRecovery(shardRouting.recoverySource().getType()) == false &&\n+ if (shardRouting.primary() && shardRouting.initializing() &&\n+ shardRouting.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE &&\n inSyncAllocationIds.contains(shardRouting.allocationId().getId()) == false)\n throw new IllegalStateException(\"a primary shard routing \" + shardRouting + \" is a primary that is recovering from \" +\n \"a known allocation id but has no corresponding entry in the in-sync \" +", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.cluster.routing;\n \n import org.elasticsearch.Version;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n@@ -29,7 +28,6 @@\n import org.elasticsearch.snapshots.Snapshot;\n \n import java.io.IOException;\n-import java.util.EnumSet;\n import java.util.Objects;\n \n /**\n@@ -249,14 +247,4 @@ public String toString() {\n return \"peer recovery\";\n }\n }\n-\n- private static EnumSet<RecoverySource.Type> INITIAL_RECOVERY_TYPES = EnumSet.of(Type.EMPTY_STORE, Type.LOCAL_SHARDS, Type.SNAPSHOT);\n-\n- /**\n- * returns true for recovery types that indicate that a primary is being allocated for the very first time.\n- * This recoveries can be controlled by {@link IndexMetaData#INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING}\n- */\n- public static boolean isInitialRecovery(RecoverySource.Type type) {\n- return INITIAL_RECOVERY_TYPES.contains(type);\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/RecoverySource.java", "status": "modified" }, { "diff": "@@ -30,6 +30,8 @@\n import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.settings.Settings;\n \n+import java.util.EnumSet;\n+\n import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.IP_VALIDATOR;\n import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.AND;\n import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.OR;\n@@ -75,6 +77,17 @@ public class FilterAllocationDecider extends AllocationDecider {\n public static final 
Setting<Settings> CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING =\n Setting.groupSetting(CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX + \".\", IP_VALIDATOR, Property.Dynamic, Property.NodeScope);\n \n+ /**\n+ * The set of {@link RecoverySource.Type} values for which the\n+ * {@link IndexMetaData#INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING} should apply.\n+ * Note that we do not include the {@link RecoverySource.Type#SNAPSHOT} type here\n+ * because if the snapshot is restored to a different cluster that does not contain\n+ * the initial recovery node id, or to the same cluster where the initial recovery node\n+ * id has been decommissioned, then the primary shards will never be allocated.\n+ */\n+ static EnumSet<RecoverySource.Type> INITIAL_RECOVERY_TYPES =\n+ EnumSet.of(RecoverySource.Type.EMPTY_STORE, RecoverySource.Type.LOCAL_SHARDS);\n+\n private volatile DiscoveryNodeFilters clusterRequireFilters;\n private volatile DiscoveryNodeFilters clusterIncludeFilters;\n private volatile DiscoveryNodeFilters clusterExcludeFilters;\n@@ -98,7 +111,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n IndexMetaData indexMd = allocation.metaData().getIndexSafe(shardRouting.index());\n DiscoveryNodeFilters initialRecoveryFilters = indexMd.getInitialRecoveryFilters();\n if (initialRecoveryFilters != null &&\n- RecoverySource.isInitialRecovery(shardRouting.recoverySource().getType()) &&\n+ INITIAL_RECOVERY_TYPES.contains(shardRouting.recoverySource().getType()) &&\n initialRecoveryFilters.match(node.node()) == false) {\n String explanation = (shardRouting.recoverySource().getType() == RecoverySource.Type.LOCAL_SHARDS) ?\n \"initial allocation of the shrunken index is only allowed on nodes [%s] that hold a copy of every shard in the index\" :", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java", "status": "modified" }, { "diff": "@@ -191,9 +191,11 @@ protected boolean isPrivateSetting(String key) {\n case IndexMetaData.SETTING_VERSION_UPGRADED:\n case IndexMetaData.SETTING_INDEX_PROVIDED_NAME:\n case MergePolicyConfig.INDEX_MERGE_ENABLED:\n+ case IndexMetaData.INDEX_SHRINK_SOURCE_UUID_KEY:\n+ case IndexMetaData.INDEX_SHRINK_SOURCE_NAME_KEY:\n return true;\n default:\n- return false;\n+ return IndexMetaData.INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getRawKey().match(key);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java", "status": "modified" }, { "diff": "@@ -352,8 +352,7 @@ public static IndexMetaData updateActiveAllocations(IndexRoutingTable indexRouti\n Set<String> insyncAids = shardTable.activeShards().stream().map(\n shr -> shr.allocationId().getId()).collect(Collectors.toSet());\n final ShardRouting primaryShard = shardTable.primaryShard();\n- if (primaryShard.initializing() && primaryShard.relocating() == false &&\n- RecoverySource.isInitialRecovery(primaryShard.recoverySource().getType()) == false ) {\n+ if (primaryShard.initializing() && shardRouting.recoverySource().getType() == RecoverySource.Type.EXISTING_STORE) {\n // simulate a primary was initialized based on aid\n insyncAids.add(primaryShard.allocationId().getId());\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingTableTests.java", "status": "modified" }, { "diff": "@@ -54,6 +54,7 @@\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.XContentParser;\n import 
org.elasticsearch.discovery.zen.ElectMasterService;\n+import org.elasticsearch.env.Environment;\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.node.Node;\n import org.elasticsearch.plugins.Plugin;\n@@ -127,7 +128,6 @@ private void registerBuiltinWritables() {\n NonSnapshottableGatewayMetadata::readDiffFrom, NonSnapshottableGatewayMetadata::fromXContent);\n registerMetaDataCustom(SnapshotableGatewayNoApiMetadata.TYPE, SnapshotableGatewayNoApiMetadata::readFrom,\n NonSnapshottableGatewayMetadata::readDiffFrom, SnapshotableGatewayNoApiMetadata::fromXContent);\n-\n }\n \n @Override\n@@ -154,8 +154,6 @@ public void testRestorePersistentSettings() throws Exception {\n logger.info(\"--> wait for the second node to join the cluster\");\n assertThat(client.admin().cluster().prepareHealth().setWaitForNodes(\"2\").get().isTimedOut(), equalTo(false));\n \n- int random = randomIntBetween(10, 42);\n-\n logger.info(\"--> set test persistent setting\");\n client.admin().cluster().prepareUpdateSettings().setPersistentSettings(\n Settings.builder()\n@@ -723,7 +721,6 @@ public void sendResponse(RestResponse response) {\n if (clusterStateError.get() != null) {\n throw clusterStateError.get();\n }\n-\n }\n \n public void testMasterShutdownDuringSnapshot() throws Exception {\n@@ -801,33 +798,72 @@ public void run() {\n assertEquals(0, snapshotInfo.failedShards());\n }\n \n+ /**\n+ * Tests that a shrunken index (created via the shrink APIs) and subsequently snapshotted\n+ * can be restored when the node the shrunken index was created on is no longer part of\n+ * the cluster.\n+ */\n+ public void testRestoreShrinkIndex() throws Exception {\n+ logger.info(\"--> starting a master node and a data node\");\n+ internalCluster().startMasterOnlyNode();\n+ internalCluster().startDataOnlyNode();\n \n- private boolean snapshotIsDone(String repository, String snapshot) {\n- try {\n- SnapshotsStatusResponse snapshotsStatusResponse = client().admin().cluster().prepareSnapshotStatus(repository).setSnapshots(snapshot).get();\n- if (snapshotsStatusResponse.getSnapshots().isEmpty()) {\n- return false;\n- }\n- for (SnapshotStatus snapshotStatus : snapshotsStatusResponse.getSnapshots()) {\n- if (snapshotStatus.getState().completed()) {\n- return true;\n- }\n- }\n- return false;\n- } catch (SnapshotMissingException ex) {\n- return false;\n- }\n- }\n+ final Client client = client();\n+ final String repo = \"test-repo\";\n+ final String snapshot = \"test-snap\";\n+ final String sourceIdx = \"test-idx\";\n+ final String shrunkIdx = \"test-idx-shrunk\";\n \n- private void createTestIndex(String name) {\n- assertAcked(prepareCreate(name, 0, Settings.builder().put(\"number_of_shards\", between(1, 6))\n- .put(\"number_of_replicas\", between(1, 6))));\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(repo).setType(\"fs\")\n+ .setSettings(Settings.builder().put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())));\n+\n+ assertAcked(prepareCreate(sourceIdx, 0, Settings.builder()\n+ .put(\"number_of_shards\", between(1, 20)).put(\"number_of_replicas\", 0)));\n+ ensureGreen();\n \n- logger.info(\"--> indexing some data into {}\", name);\n- for (int i = 0; i < between(10, 500); i++) {\n- index(name, \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ logger.info(\"--> indexing some data\");\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[randomIntBetween(10, 100)];\n+ for (int i = 0; i < builders.length; i++) {\n+ 
builders[i] = client().prepareIndex(sourceIdx, \"type1\",\n+ Integer.toString(i)).setSource(\"field1\", \"bar \" + i);\n }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+\n+ logger.info(\"--> shrink the index\");\n+ assertAcked(client.admin().indices().prepareUpdateSettings(sourceIdx)\n+ .setSettings(Settings.builder().put(\"index.blocks.write\", true)).get());\n+ assertAcked(client.admin().indices().prepareShrinkIndex(sourceIdx, shrunkIdx).get());\n+\n+ logger.info(\"--> snapshot the shrunk index\");\n+ CreateSnapshotResponse createResponse = client.admin().cluster()\n+ .prepareCreateSnapshot(repo, snapshot)\n+ .setWaitForCompletion(true).setIndices(shrunkIdx).get();\n+ assertEquals(SnapshotState.SUCCESS, createResponse.getSnapshotInfo().state());\n \n+ logger.info(\"--> delete index and stop the data node\");\n+ assertAcked(client.admin().indices().prepareDelete(sourceIdx).get());\n+ assertAcked(client.admin().indices().prepareDelete(shrunkIdx).get());\n+ internalCluster().stopRandomDataNode();\n+ client().admin().cluster().prepareHealth().setTimeout(\"30s\").setWaitForNodes(\"1\");\n+\n+ logger.info(\"--> start a new data node\");\n+ final Settings dataSettings = Settings.builder()\n+ .put(Node.NODE_NAME_SETTING.getKey(), randomAlphaOfLength(5))\n+ .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir()) // to get a new node id\n+ .build();\n+ internalCluster().startDataOnlyNode(dataSettings);\n+ client().admin().cluster().prepareHealth().setTimeout(\"30s\").setWaitForNodes(\"2\");\n+\n+ logger.info(\"--> restore the shrunk index and ensure all shards are allocated\");\n+ RestoreSnapshotResponse restoreResponse = client().admin().cluster()\n+ .prepareRestoreSnapshot(repo, snapshot).setWaitForCompletion(true)\n+ .setIndices(shrunkIdx).get();\n+ assertEquals(restoreResponse.getRestoreInfo().totalShards(),\n+ restoreResponse.getRestoreInfo().successfulShards());\n+ ensureYellow();\n }\n \n public static class SnapshottableMetadata extends TestCustomMetaData {", "filename": "core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java", "status": "modified" } ] }
{ "body": "Today when removing a plugin, we attempt to move the plugin directory to a temporary directory and then delete that directory from the filesystem. We do this to avoid a plugin being in a half-removed state. We previously tried an atomic move, and fell back to a non-atomic move if that failed. Atomic moves can fail on union filesystems when the plugin directory is not in the top layer of the filesystem. Interestingly, the regular move can fail as well. This is because when the JDK is executing such a move, it first tries to rename the source directory to the target directory and if this fails with EXDEV (as in the case of an atomic move failing), it falls back to copying the source to the target, and then attempts to rmdir the source. The bug here is that the JDK never deleted the contents of the source so the rmdir will always fail (except in the case of an empty directory).\r\n\r\nGiven all this silliness, we were inspired to find a different strategy. The strategy is simple. We will add a marker file to the plugin directory that indicates the plugin is in a state of removal. This file will be the last file out the door during removal. If this file exists during startup, we fail startup.\r\n\r\nCloses #24231\r\n ", "comments": [ { "body": "Thanks @rjernst and @s1monw.", "created_at": "2017-04-21T22:18:32Z" } ], "number": 24252, "title": "Use a marker file when removing a plugin" }
{ "body": "This commit fixes an issue when deleting the plugin directory while executing the remove plugin command. Namely, we take out a file descriptor on the plugin directory to traverse its contents to obtain the list of files to delete. We leaked this file descriptor. On Unix-based filesystems, this is not a problem, deleting the plugin directory deletes the plugin directory. On Windows though, a delete is not executed until the last file descriptor is closed. Since we leaked this file descriptor, the plugin was not actually deleted. This led to test failures that tried to cleanup left behind temporary directories but these test failures were just exposing this bug. This commit fixes this issue by ensuring that we close the file descriptor to the plugin directory when we are finished with it.\r\n\r\nRelates #24252\r\n \r\n", "number": 24266, "review_comments": [], "title": "Fix delete of plugin directory on remove plugin" }
{ "commits": [ { "message": "Fix delete of plugin directory on remove plugin\n\nThis commit fixes an issue when deleting the plugin directory while\nexecuting the remove plugin command. Namely, we take out a file\ndescriptor on the plugin directory to traverse its contents to obtain\nthe list of files to delete. We leaked this file descriptor. On\nUnix-based filesystems, this is not a problem, deleting the plugin\ndirectory deletes the plugin directory. On Windows though, a delete is\nnot executed until the last file descriptor is closed. Since we leaked\nthis file descriptor, the plugin was not actually deleted. This led to\ntest failures that tried to cleanup left behind temporary directories\nbut these test failures were just exposing this bug. This commit fixes\nthis issue by ensuring that we close the file descriptor to the plugin\ndirectory when we are finished with it." }, { "message": "Formatting" }, { "message": "Close the stream immediately" }, { "message": "Fix comment" } ], "files": [ { "diff": "@@ -36,6 +36,8 @@\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Locale;\n+import java.util.stream.Collectors;\n+import java.util.stream.Stream;\n \n import static org.elasticsearch.cli.Terminal.Verbosity.VERBOSE;\n \n@@ -110,7 +112,9 @@ void execute(final Terminal terminal, final String pluginName, final Environment\n * Add the contents of the plugin directory before creating the marker file and adding it to the list of paths to be deleted so\n * that the marker file is the last file to be deleted.\n */\n- Files.list(pluginDir).forEach(pluginPaths::add);\n+ try (Stream<Path> paths = Files.list(pluginDir)) {\n+ pluginPaths.addAll(paths.collect(Collectors.toList()));\n+ }\n try {\n Files.createFile(removing);\n } catch (final FileAlreadyExistsException e) {\n@@ -122,9 +126,10 @@ void execute(final Terminal terminal, final String pluginName, final Environment\n }\n // now add the marker file\n pluginPaths.add(removing);\n+ // finally, add the plugin directory\n+ pluginPaths.add(pluginDir);\n IOUtils.rm(pluginPaths.toArray(new Path[pluginPaths.size()]));\n- // at this point, the plugin directory is empty and we can execute a simple directory removal\n- Files.delete(pluginDir);\n+\n \n /*\n * We preserve the config files in case the user is upgrading the plugin, but we print a", "filename": "distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/RemovePluginCommand.java", "status": "modified" } ] }
{ "body": "When parsing StoredSearchScript we were adding a Content type option that was forbidden (by a check that threw an exception) by the parser thats used to parse the template when we read it from the cluster state. This was stopping Elastisearch from starting after stored search templates had been added.\r\n\r\nThis change no longer adds the content type option to the StoredScriptSource object when parsing from the put search template request. This is safe because the StoredScriptSource content is always JSON when its stored in the cluster state since we do a conversion to JSON before this point.\r\n\r\nAlso removes the check for the content type in the options when parsing StoredScriptSource so users who already have stored scripts can start Elasticsearch.\r\n\r\nCloses #24227\r\n", "comments": [ { "body": "I'm away from my laptop now so unless anyone else can/wants to merge and back port this PR I'll do it first thing Monday morning. ", "created_at": "2017-04-21T20:52:12Z" }, { "body": "> I'm away from my laptop now so unless anyone else can/wants to merge and back port this PR I'll do it first thing Monday morning.\r\n\r\nI've kicked off a test run of this and #24258 locally. If it passes I'll merge and start the backport dance. But I'm going to be in and out all day so it probably won't be until tomorrow when I finish it.", "created_at": "2017-04-22T15:51:23Z" }, { "body": "All backported.", "created_at": "2017-04-23T00:04:40Z" }, { "body": "@nik9000 thanks very much for doing this", "created_at": "2017-04-24T09:16:51Z" } ], "number": 24251, "title": "No longer add illegal content type option to stored search templates" }
{ "body": "In #24251 we fix an issue with stored search templates that\r\nthis test would have discovered: stored search templates cause\r\nthe node to refuse to start. Technically a \"restart\" test would\r\nhave caught this as well and would have caught it more quickly.\r\nBut we already *have* an upgrade test and we don't have restart tests.\r\nAnd testing this on upgrade is a good thing too.\r\n", "number": 24258, "review_comments": [], "title": "Test search templates during rolling upgrade test" }
{ "commits": [ { "message": "Test search templates during rolling upgrade test\n\nIn #24251 we fix an issue with stored search templates that\nthis test would have discovered: stored search templates cause\nthe node to refuse to start. Technically a \"restart\" test would\nhave caught this as well and would have caught it more quickly.\nBut we already *have* an upgrade test and we don't have restart tests.\nAnd testing this on upgrade is a good thing too." } ], "files": [ { "diff": "@@ -59,6 +59,16 @@\n \n - match: { hits.total: 10 }\n \n+---\n+\"Verify that we can still find things with the template\":\n+ - do:\n+ search_template:\n+ body:\n+ id: test_search_template\n+ params:\n+ f1: v5_old\n+ - match: { hits.total: 1 }\n+\n ---\n \"Verify custom cluster metadata still exists during upgrade\":\n - do:", "filename": "qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yaml", "status": "modified" }, { "diff": "@@ -1,15 +1,15 @@\n ---\n-\"Index data and search on the old cluster\":\n- - do:\n- indices.create:\n+\"Index data, search, and create things in the cluster state that we'll validate are there after the ugprade\":\n+ - do:\n+ indices.create:\n index: test_index\n body:\n settings:\n index:\n number_of_replicas: 0\n \n- - do:\n- bulk:\n+ - do:\n+ bulk:\n refresh: true\n body:\n - '{\"index\": {\"_index\": \"test_index\", \"_type\": \"test_type\"}}'\n@@ -23,18 +23,16 @@\n - '{\"index\": {\"_index\": \"test_index\", \"_type\": \"test_type\"}}'\n - '{\"f1\": \"v5_old\", \"f2\": 4}'\n \n- - do:\n- indices.flush:\n+ - do:\n+ indices.flush:\n index: test_index\n \n- - do:\n- search:\n+ - do:\n+ search:\n index: test_index\n \n- - match: { hits.total: 5 }\n+ - match: { hits.total: 5 }\n \n----\n-\"Add stuff to cluster state so that we can verify that it remains to exist during and after the rolling upgrade\":\n - do:\n snapshot.create_repository:\n repository: my_repo\n@@ -54,3 +52,20 @@\n ]\n }\n - match: { \"acknowledged\": true }\n+\n+ - do:\n+ put_template:\n+ id: test_search_template\n+ body:\n+ query:\n+ match:\n+ f1: \"{{f1}}\"\n+ - match: { acknowledged: true }\n+\n+ - do:\n+ search_template:\n+ body:\n+ id: test_search_template\n+ params:\n+ f1: v5_old\n+ - match: { hits.total: 1 }", "filename": "qa/rolling-upgrade/src/test/resources/rest-api-spec/test/old_cluster/10_basic.yaml", "status": "modified" }, { "diff": "@@ -36,6 +36,16 @@\n \n - match: { hits.total: 15 } # 10 docs from previous clusters plus 5 new docs\n \n+---\n+\"Verify that we can still find things with the template\":\n+ - do:\n+ search_template:\n+ body:\n+ id: test_search_template\n+ params:\n+ f1: v5_old\n+ - match: { hits.total: 1 }\n+\n ---\n \"Verify custom cluster metadata still exists after rolling upgrade\":\n - do:", "filename": "qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yaml", "status": "modified" } ] }
{ "body": "As observed by @clintongormley in #20912 max_concurrent_searches is missing from the _msearch/template endpoint. This PR adds the missing parameter.\r\n\r\nCloses #20912\r\n\r\nOne observation: Looking at the code there is quite some duplication between _msearch and _msearch/template due to the latter one having been moved to the mustache module. Not sure if this could be reduced in a follow-up.\r\n\r\n@martijnvg would be great if you could take a look, as you introduced the max_concurrent_searches parameter for _msearch.", "comments": [ { "body": "Superseded by #24255", "created_at": "2017-04-21T18:45:51Z" } ], "number": 21907, "title": "This adds max_concurrent_searches to multi-search-template endpoint." }
{ "body": "Reuses #21907 and rewired multi search template api to delegate to multi search api, that the `max_concurrent_searches` parameter can just be pushed down to the multi search api instead of duplicating multi concurrent search logic in multi search template api.\r\n\r\nPR for #20912", "number": 24255, "review_comments": [ { "body": "we need to check the version, don't we?", "created_at": "2017-05-03T12:38:32Z" }, { "body": "same here?", "created_at": "2017-05-03T12:38:39Z" }, { "body": "yes we do! I'll change that.", "created_at": "2017-05-09T13:10:26Z" } ], "title": "Add max concurrent searches to multi template search" }
{ "commits": [ { "message": "This adds max_concurrent_searches to multi-search-template endpoint.\n\nCloses #20912" }, { "message": "Rewrite multi search template api to delegate to multi search api instead of to search template api.\n\nThe max concurrent searches logic is complex and we shouldn't duplicate that in multi search template api,\nso we should template each individual template search request and then delegate to multi search api." }, { "message": "Rewrite multi search template api to delegate to multi search api instead of to search template api.\n\nThe max concurrent searches logic is complex and we shouldn't duplicate that in multi search template api,\nso we should template each individual template search request and then delegate to multi search api." } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.script.mustache;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.CompositeIndicesRequest;\n@@ -34,6 +35,7 @@\n \n public class MultiSearchTemplateRequest extends ActionRequest implements CompositeIndicesRequest {\n \n+ private int maxConcurrentSearchRequests = 0;\n private List<SearchTemplateRequest> requests = new ArrayList<>();\n \n private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpenAndForbidClosed();\n@@ -56,6 +58,26 @@ public MultiSearchTemplateRequest add(SearchTemplateRequest request) {\n return this;\n }\n \n+\n+ /**\n+ * Returns the amount of search requests specified in this multi search requests are allowed to be ran concurrently.\n+ */\n+ public int maxConcurrentSearchRequests() {\n+ return maxConcurrentSearchRequests;\n+ }\n+\n+ /**\n+ * Sets how many search requests specified in this multi search requests are allowed to be ran concurrently.\n+ */\n+ public MultiSearchTemplateRequest maxConcurrentSearchRequests(int maxConcurrentSearchRequests) {\n+ if (maxConcurrentSearchRequests < 1) {\n+ throw new IllegalArgumentException(\"maxConcurrentSearchRequests must be positive\");\n+ }\n+\n+ this.maxConcurrentSearchRequests = maxConcurrentSearchRequests;\n+ return this;\n+ }\n+\n public List<SearchTemplateRequest> requests() {\n return this.requests;\n }\n@@ -90,12 +112,18 @@ public MultiSearchTemplateRequest indicesOptions(IndicesOptions indicesOptions)\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n+ if (in.getVersion().onOrAfter(Version.V_5_5_0_UNRELEASED)) {\n+ maxConcurrentSearchRequests = in.readVInt();\n+ }\n requests = in.readStreamableList(SearchTemplateRequest::new);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n+ if (out.getVersion().onOrAfter(Version.V_5_5_0_UNRELEASED)) {\n+ out.writeVInt(maxConcurrentSearchRequests);\n+ }\n out.writeStreamableList(requests);\n }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/MultiSearchTemplateRequest.java", "status": "modified" }, { "diff": "@@ -30,10 +30,6 @@ protected MultiSearchTemplateRequestBuilder(ElasticsearchClient client, MultiSea\n super(client, action, new MultiSearchTemplateRequest());\n }\n \n- public MultiSearchTemplateRequestBuilder(ElasticsearchClient client) {\n- this(client, MultiSearchTemplateAction.INSTANCE);\n- }\n-\n public MultiSearchTemplateRequestBuilder add(SearchTemplateRequest request) {\n if (request.getRequest().indicesOptions() == 
IndicesOptions.strictExpandOpenAndForbidClosed()\n && request().indicesOptions() != IndicesOptions.strictExpandOpenAndForbidClosed()) {\n@@ -58,4 +54,12 @@ public MultiSearchTemplateRequestBuilder setIndicesOptions(IndicesOptions indice\n request().indicesOptions(indicesOptions);\n return this;\n }\n+\n+ /**\n+ * Sets how many search requests specified in this multi search requests are allowed to be ran concurrently.\n+ */\n+ public MultiSearchTemplateRequestBuilder setMaxConcurrentSearchRequests(int maxConcurrentSearchRequests) {\n+ request().maxConcurrentSearchRequests(maxConcurrentSearchRequests);\n+ return this;\n+ }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/MultiSearchTemplateRequestBuilder.java", "status": "modified" }, { "diff": "@@ -70,6 +70,10 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client\n */\n public static MultiSearchTemplateRequest parseRequest(RestRequest restRequest, boolean allowExplicitIndex) throws IOException {\n MultiSearchTemplateRequest multiRequest = new MultiSearchTemplateRequest();\n+ if (restRequest.hasParam(\"max_concurrent_searches\")) {\n+ multiRequest.maxConcurrentSearchRequests(restRequest.paramAsInt(\"max_concurrent_searches\", 0));\n+ }\n+\n RestMultiSearchAction.parseMultiLineRequest(restRequest, multiRequest.indicesOptions(), allowExplicitIndex,\n (searchRequest, bytes) -> {\n try {", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/RestMultiSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -20,59 +20,81 @@\n package org.elasticsearch.script.mustache;\n \n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.search.MultiSearchRequest;\n+import org.elasticsearch.action.search.MultiSearchResponse;\n+import org.elasticsearch.action.search.SearchRequest;\n+import org.elasticsearch.action.search.TransportMultiSearchAction;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import static org.elasticsearch.script.mustache.TransportSearchTemplateAction.convert;\n \n public class TransportMultiSearchTemplateAction extends HandledTransportAction<MultiSearchTemplateRequest, MultiSearchTemplateResponse> {\n \n- private final TransportSearchTemplateAction searchTemplateAction;\n+ private final ScriptService scriptService;\n+ private final NamedXContentRegistry xContentRegistry;\n+ private final TransportMultiSearchAction multiSearchAction;\n \n @Inject\n public TransportMultiSearchTemplateAction(Settings settings, ThreadPool threadPool, TransportService transportService,\n ActionFilters actionFilters, IndexNameExpressionResolver resolver,\n- TransportSearchTemplateAction searchTemplateAction) {\n+ ScriptService scriptService, NamedXContentRegistry xContentRegistry,\n+ TransportMultiSearchAction multiSearchAction) {\n super(settings, MultiSearchTemplateAction.NAME, 
threadPool, transportService, actionFilters, resolver,\n MultiSearchTemplateRequest::new);\n- this.searchTemplateAction = searchTemplateAction;\n+ this.scriptService = scriptService;\n+ this.xContentRegistry = xContentRegistry;\n+ this.multiSearchAction = multiSearchAction;\n }\n \n @Override\n protected void doExecute(MultiSearchTemplateRequest request, ActionListener<MultiSearchTemplateResponse> listener) {\n- final AtomicArray<MultiSearchTemplateResponse.Item> responses = new AtomicArray<>(request.requests().size());\n- final AtomicInteger counter = new AtomicInteger(responses.length());\n-\n- for (int i = 0; i < responses.length(); i++) {\n- final int index = i;\n- searchTemplateAction.execute(request.requests().get(i), new ActionListener<SearchTemplateResponse>() {\n- @Override\n- public void onResponse(SearchTemplateResponse searchTemplateResponse) {\n- responses.set(index, new MultiSearchTemplateResponse.Item(searchTemplateResponse, null));\n- if (counter.decrementAndGet() == 0) {\n- finishHim();\n- }\n- }\n+ List<Integer> originalSlots = new ArrayList<>();\n+ MultiSearchRequest multiSearchRequest = new MultiSearchRequest();\n+ multiSearchRequest.indicesOptions(request.indicesOptions());\n+ if (request.maxConcurrentSearchRequests() != 0) {\n+ multiSearchRequest.maxConcurrentSearchRequests(request.maxConcurrentSearchRequests());\n+ }\n \n- @Override\n- public void onFailure(Exception e) {\n- responses.set(index, new MultiSearchTemplateResponse.Item(null, e));\n- if (counter.decrementAndGet() == 0) {\n- finishHim();\n- }\n- }\n+ MultiSearchTemplateResponse.Item[] items = new MultiSearchTemplateResponse.Item[request.requests().size()];\n+ for (int i = 0; i < items.length; i++) {\n+ SearchTemplateRequest searchTemplateRequest = request.requests().get(i);\n+ SearchTemplateResponse searchTemplateResponse = new SearchTemplateResponse();\n+ SearchRequest searchRequest;\n+ try {\n+ searchRequest = convert(searchTemplateRequest, searchTemplateResponse, scriptService, xContentRegistry);\n+ } catch (Exception e) {\n+ items[i] = new MultiSearchTemplateResponse.Item(null, e);\n+ continue;\n+ }\n+ items[i] = new MultiSearchTemplateResponse.Item(searchTemplateResponse, null);\n+ if (searchRequest != null) {\n+ multiSearchRequest.add(searchRequest);\n+ originalSlots.add(i);\n+ }\n+ }\n \n- private void finishHim() {\n- MultiSearchTemplateResponse.Item[] items = responses.toArray(new MultiSearchTemplateResponse.Item[responses.length()]);\n- listener.onResponse(new MultiSearchTemplateResponse(items));\n+ multiSearchAction.execute(multiSearchRequest, ActionListener.wrap(r -> {\n+ for (int i = 0; i < r.getResponses().length; i++) {\n+ MultiSearchResponse.Item item = r.getResponses()[i];\n+ int originalSlot = originalSlots.get(i);\n+ if (item.isFailure()) {\n+ items[originalSlot] = new MultiSearchTemplateResponse.Item(null, item.getFailure());\n+ } else {\n+ items[originalSlot].getResponse().setResponse(item.getResponse());\n }\n- });\n- }\n+ }\n+ listener.onResponse(new MultiSearchTemplateResponse(items));\n+ }, listener::onFailure));\n }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/TransportMultiSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -38,9 +38,11 @@\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n import org.elasticsearch.template.CompiledTemplate;\n+import org.elasticsearch.template.CompiledTemplate;\n import org.elasticsearch.threadpool.ThreadPool;\n import 
org.elasticsearch.transport.TransportService;\n \n+import java.io.IOException;\n import java.util.Collections;\n \n import static org.elasticsearch.script.ScriptContext.Standard.SEARCH;\n@@ -69,27 +71,8 @@ public TransportSearchTemplateAction(Settings settings, ThreadPool threadPool, T\n protected void doExecute(SearchTemplateRequest request, ActionListener<SearchTemplateResponse> listener) {\n final SearchTemplateResponse response = new SearchTemplateResponse();\n try {\n- Script script = new Script(request.getScriptType(), TEMPLATE_LANG, request.getScript(),\n- request.getScriptParams() == null ? Collections.emptyMap() : request.getScriptParams());\n- CompiledTemplate compiledScript = scriptService.compileTemplate(script, SEARCH);\n- BytesReference source = compiledScript.run(script.getParams());\n- response.setSource(source);\n-\n- if (request.isSimulate()) {\n- listener.onResponse(response);\n- return;\n- }\n-\n- // Executes the search\n- SearchRequest searchRequest = request.getRequest();\n- //we can assume the template is always json as we convert it before compiling it\n- try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(xContentRegistry, source)) {\n- SearchSourceBuilder builder = SearchSourceBuilder.searchSource();\n- builder.parseXContent(new QueryParseContext(parser));\n- builder.explain(request.isExplain());\n- builder.profile(request.isProfile());\n- searchRequest.source(builder);\n-\n+ SearchRequest searchRequest = convert(request, response, scriptService, xContentRegistry);\n+ if (searchRequest != null) {\n searchAction.execute(searchRequest, new ActionListener<SearchResponse>() {\n @Override\n public void onResponse(SearchResponse searchResponse) {\n@@ -106,9 +89,35 @@ public void onFailure(Exception t) {\n listener.onFailure(t);\n }\n });\n+ } else {\n+ listener.onResponse(response);\n }\n- } catch (Exception t) {\n- listener.onFailure(t);\n+ } catch (IOException e) {\n+ listener.onFailure(e);\n+ }\n+ }\n+\n+ static SearchRequest convert(SearchTemplateRequest searchTemplateRequest, SearchTemplateResponse response, ScriptService scriptService,\n+ NamedXContentRegistry xContentRegistry) throws IOException {\n+ Script script = new Script(searchTemplateRequest.getScriptType(), TEMPLATE_LANG, searchTemplateRequest.getScript(),\n+ searchTemplateRequest.getScriptParams() == null ? 
Collections.emptyMap() : searchTemplateRequest.getScriptParams());\n+ CompiledTemplate compiledScript = scriptService.compileTemplate(script, SEARCH);\n+ BytesReference source = compiledScript.run(script.getParams());\n+ response.setSource(source);\n+\n+ SearchRequest searchRequest = searchTemplateRequest.getRequest();\n+ response.setSource(source);\n+ if (searchTemplateRequest.isSimulate()) {\n+ return null;\n+ }\n+\n+ try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(xContentRegistry, source)) {\n+ SearchSourceBuilder builder = SearchSourceBuilder.searchSource();\n+ builder.parseXContent(new QueryParseContext(parser));\n+ builder.explain(searchTemplateRequest.isExplain());\n+ builder.profile(searchTemplateRequest.isProfile());\n+ searchRequest.source(builder);\n }\n+ return searchRequest;\n }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/TransportSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.script.mustache;\n \n+import org.elasticsearch.action.search.MultiSearchRequest;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.rest.RestRequest;\n@@ -90,4 +91,12 @@ public void testParseWithCarriageReturn() throws Exception {\n assertEquals(\"{\\\"query\\\":{\\\"match_{{template}}\\\":{}}}\", request.requests().get(0).getScript());\n assertEquals(1, request.requests().get(0).getScriptParams().size());\n }\n+\n+ public void testMaxConcurrentSearchRequests() {\n+ MultiSearchTemplateRequest request = new MultiSearchTemplateRequest();\n+ request.maxConcurrentSearchRequests(randomIntBetween(1, Integer.MAX_VALUE));\n+ expectThrows(IllegalArgumentException.class, () ->\n+ request.maxConcurrentSearchRequests(randomIntBetween(Integer.MIN_VALUE, 0)));\n+ }\n+\n }", "filename": "modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/MultiSearchTemplateRequestTests.java", "status": "modified" }, { "diff": "@@ -24,6 +24,10 @@\n \"typed_keys\": {\n \"type\" : \"boolean\",\n \"description\" : \"Specify whether aggregation and suggester names should be prefixed by their respective types in the response\"\n+ },\n+ \"max_concurrent_searches\" : {\n+ \"type\" : \"number\",\n+ \"description\" : \"Controls the maximum number of concurrent searches the multi search api will execute\"\n }\n }\n },", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/msearch_template.json", "status": "modified" } ] }
{ "body": "Multi-search supports the `max_concurrent_searches` QS parameter, but this is missing from the `/_msearch/template` endpoint:\n\n```\nPUT test/foo/1\n{\"user\":\"john\"}\n\nGET _msearch/template?max_concurrent_searches=1\n{\"index\": \"test\"}\n{\"inline\": {\"query\": {\"match\": {\"user\" : \"{{username}}\" }}}, \"params\": {\"username\": \"john\"}} \n\n```\n\nthrows:\n\n```\n\"request [/_msearch/template] contains unrecognized parameter: [max_concurrent_searches]\"\n```\n", "comments": [ { "body": "Is this issue still there? I have seen the pull request is showing \"This branch has conflicts that must be resolved\". So it means this PR hasn't accepted yet. Would you mind if I taking over and fix the issue there?", "created_at": "2017-03-13T09:24:45Z" }, { "body": "To be honest that PR looks pretty close. @martijnvg, would you like to adopt the PR?", "created_at": "2017-03-13T15:36:35Z" }, { "body": "I can try working on this issue now, would it be okay to assign this to me?", "created_at": "2017-03-13T22:42:39Z" }, { "body": "@nik9000 That PR only adds the setting to the request and request builder. Now I think about this more I think we shouldn't try to duplicate the `max_concurrent_searches` logic in this api, but rather duplicate the templating logic from the search template api. This `max_concurrent_searches` logic has turned out to be [pretty](https://github.com/elastic/elasticsearch/pull/23527) [tricky](https://github.com/elastic/elasticsearch/pull/23538).", "created_at": "2017-03-17T16:13:15Z" } ], "number": 20912, "title": "Add `max_concurrent_searches` to msearch-template" }
{ "body": "Reuses #21907 and rewired multi search template api to delegate to multi search api, that the `max_concurrent_searches` parameter can just be pushed down to the multi search api instead of duplicating multi concurrent search logic in multi search template api.\r\n\r\nPR for #20912", "number": 24255, "review_comments": [ { "body": "we need to check the version, don't we?", "created_at": "2017-05-03T12:38:32Z" }, { "body": "same here?", "created_at": "2017-05-03T12:38:39Z" }, { "body": "yes we do! I'll change that.", "created_at": "2017-05-09T13:10:26Z" } ], "title": "Add max concurrent searches to multi template search" }
{ "commits": [ { "message": "This adds max_concurrent_searches to multi-search-template endpoint.\n\nCloses #20912" }, { "message": "Rewrite multi search template api to delegate to multi search api instead of to search template api.\n\nThe max concurrent searches logic is complex and we shouldn't duplicate that in multi search template api,\nso we should template each individual template search request and then delegate to multi search api." }, { "message": "Rewrite multi search template api to delegate to multi search api instead of to search template api.\n\nThe max concurrent searches logic is complex and we shouldn't duplicate that in multi search template api,\nso we should template each individual template search request and then delegate to multi search api." } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.script.mustache;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.CompositeIndicesRequest;\n@@ -34,6 +35,7 @@\n \n public class MultiSearchTemplateRequest extends ActionRequest implements CompositeIndicesRequest {\n \n+ private int maxConcurrentSearchRequests = 0;\n private List<SearchTemplateRequest> requests = new ArrayList<>();\n \n private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpenAndForbidClosed();\n@@ -56,6 +58,26 @@ public MultiSearchTemplateRequest add(SearchTemplateRequest request) {\n return this;\n }\n \n+\n+ /**\n+ * Returns the amount of search requests specified in this multi search requests are allowed to be ran concurrently.\n+ */\n+ public int maxConcurrentSearchRequests() {\n+ return maxConcurrentSearchRequests;\n+ }\n+\n+ /**\n+ * Sets how many search requests specified in this multi search requests are allowed to be ran concurrently.\n+ */\n+ public MultiSearchTemplateRequest maxConcurrentSearchRequests(int maxConcurrentSearchRequests) {\n+ if (maxConcurrentSearchRequests < 1) {\n+ throw new IllegalArgumentException(\"maxConcurrentSearchRequests must be positive\");\n+ }\n+\n+ this.maxConcurrentSearchRequests = maxConcurrentSearchRequests;\n+ return this;\n+ }\n+\n public List<SearchTemplateRequest> requests() {\n return this.requests;\n }\n@@ -90,12 +112,18 @@ public MultiSearchTemplateRequest indicesOptions(IndicesOptions indicesOptions)\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n+ if (in.getVersion().onOrAfter(Version.V_5_5_0_UNRELEASED)) {\n+ maxConcurrentSearchRequests = in.readVInt();\n+ }\n requests = in.readStreamableList(SearchTemplateRequest::new);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n+ if (out.getVersion().onOrAfter(Version.V_5_5_0_UNRELEASED)) {\n+ out.writeVInt(maxConcurrentSearchRequests);\n+ }\n out.writeStreamableList(requests);\n }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/MultiSearchTemplateRequest.java", "status": "modified" }, { "diff": "@@ -30,10 +30,6 @@ protected MultiSearchTemplateRequestBuilder(ElasticsearchClient client, MultiSea\n super(client, action, new MultiSearchTemplateRequest());\n }\n \n- public MultiSearchTemplateRequestBuilder(ElasticsearchClient client) {\n- this(client, MultiSearchTemplateAction.INSTANCE);\n- }\n-\n public MultiSearchTemplateRequestBuilder add(SearchTemplateRequest request) {\n if (request.getRequest().indicesOptions() == 
IndicesOptions.strictExpandOpenAndForbidClosed()\n && request().indicesOptions() != IndicesOptions.strictExpandOpenAndForbidClosed()) {\n@@ -58,4 +54,12 @@ public MultiSearchTemplateRequestBuilder setIndicesOptions(IndicesOptions indice\n request().indicesOptions(indicesOptions);\n return this;\n }\n+\n+ /**\n+ * Sets how many search requests specified in this multi search requests are allowed to be ran concurrently.\n+ */\n+ public MultiSearchTemplateRequestBuilder setMaxConcurrentSearchRequests(int maxConcurrentSearchRequests) {\n+ request().maxConcurrentSearchRequests(maxConcurrentSearchRequests);\n+ return this;\n+ }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/MultiSearchTemplateRequestBuilder.java", "status": "modified" }, { "diff": "@@ -70,6 +70,10 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client\n */\n public static MultiSearchTemplateRequest parseRequest(RestRequest restRequest, boolean allowExplicitIndex) throws IOException {\n MultiSearchTemplateRequest multiRequest = new MultiSearchTemplateRequest();\n+ if (restRequest.hasParam(\"max_concurrent_searches\")) {\n+ multiRequest.maxConcurrentSearchRequests(restRequest.paramAsInt(\"max_concurrent_searches\", 0));\n+ }\n+\n RestMultiSearchAction.parseMultiLineRequest(restRequest, multiRequest.indicesOptions(), allowExplicitIndex,\n (searchRequest, bytes) -> {\n try {", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/RestMultiSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -20,59 +20,81 @@\n package org.elasticsearch.script.mustache;\n \n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.search.MultiSearchRequest;\n+import org.elasticsearch.action.search.MultiSearchResponse;\n+import org.elasticsearch.action.search.SearchRequest;\n+import org.elasticsearch.action.search.TransportMultiSearchAction;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import static org.elasticsearch.script.mustache.TransportSearchTemplateAction.convert;\n \n public class TransportMultiSearchTemplateAction extends HandledTransportAction<MultiSearchTemplateRequest, MultiSearchTemplateResponse> {\n \n- private final TransportSearchTemplateAction searchTemplateAction;\n+ private final ScriptService scriptService;\n+ private final NamedXContentRegistry xContentRegistry;\n+ private final TransportMultiSearchAction multiSearchAction;\n \n @Inject\n public TransportMultiSearchTemplateAction(Settings settings, ThreadPool threadPool, TransportService transportService,\n ActionFilters actionFilters, IndexNameExpressionResolver resolver,\n- TransportSearchTemplateAction searchTemplateAction) {\n+ ScriptService scriptService, NamedXContentRegistry xContentRegistry,\n+ TransportMultiSearchAction multiSearchAction) {\n super(settings, MultiSearchTemplateAction.NAME, 
threadPool, transportService, actionFilters, resolver,\n MultiSearchTemplateRequest::new);\n- this.searchTemplateAction = searchTemplateAction;\n+ this.scriptService = scriptService;\n+ this.xContentRegistry = xContentRegistry;\n+ this.multiSearchAction = multiSearchAction;\n }\n \n @Override\n protected void doExecute(MultiSearchTemplateRequest request, ActionListener<MultiSearchTemplateResponse> listener) {\n- final AtomicArray<MultiSearchTemplateResponse.Item> responses = new AtomicArray<>(request.requests().size());\n- final AtomicInteger counter = new AtomicInteger(responses.length());\n-\n- for (int i = 0; i < responses.length(); i++) {\n- final int index = i;\n- searchTemplateAction.execute(request.requests().get(i), new ActionListener<SearchTemplateResponse>() {\n- @Override\n- public void onResponse(SearchTemplateResponse searchTemplateResponse) {\n- responses.set(index, new MultiSearchTemplateResponse.Item(searchTemplateResponse, null));\n- if (counter.decrementAndGet() == 0) {\n- finishHim();\n- }\n- }\n+ List<Integer> originalSlots = new ArrayList<>();\n+ MultiSearchRequest multiSearchRequest = new MultiSearchRequest();\n+ multiSearchRequest.indicesOptions(request.indicesOptions());\n+ if (request.maxConcurrentSearchRequests() != 0) {\n+ multiSearchRequest.maxConcurrentSearchRequests(request.maxConcurrentSearchRequests());\n+ }\n \n- @Override\n- public void onFailure(Exception e) {\n- responses.set(index, new MultiSearchTemplateResponse.Item(null, e));\n- if (counter.decrementAndGet() == 0) {\n- finishHim();\n- }\n- }\n+ MultiSearchTemplateResponse.Item[] items = new MultiSearchTemplateResponse.Item[request.requests().size()];\n+ for (int i = 0; i < items.length; i++) {\n+ SearchTemplateRequest searchTemplateRequest = request.requests().get(i);\n+ SearchTemplateResponse searchTemplateResponse = new SearchTemplateResponse();\n+ SearchRequest searchRequest;\n+ try {\n+ searchRequest = convert(searchTemplateRequest, searchTemplateResponse, scriptService, xContentRegistry);\n+ } catch (Exception e) {\n+ items[i] = new MultiSearchTemplateResponse.Item(null, e);\n+ continue;\n+ }\n+ items[i] = new MultiSearchTemplateResponse.Item(searchTemplateResponse, null);\n+ if (searchRequest != null) {\n+ multiSearchRequest.add(searchRequest);\n+ originalSlots.add(i);\n+ }\n+ }\n \n- private void finishHim() {\n- MultiSearchTemplateResponse.Item[] items = responses.toArray(new MultiSearchTemplateResponse.Item[responses.length()]);\n- listener.onResponse(new MultiSearchTemplateResponse(items));\n+ multiSearchAction.execute(multiSearchRequest, ActionListener.wrap(r -> {\n+ for (int i = 0; i < r.getResponses().length; i++) {\n+ MultiSearchResponse.Item item = r.getResponses()[i];\n+ int originalSlot = originalSlots.get(i);\n+ if (item.isFailure()) {\n+ items[originalSlot] = new MultiSearchTemplateResponse.Item(null, item.getFailure());\n+ } else {\n+ items[originalSlot].getResponse().setResponse(item.getResponse());\n }\n- });\n- }\n+ }\n+ listener.onResponse(new MultiSearchTemplateResponse(items));\n+ }, listener::onFailure));\n }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/TransportMultiSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -38,9 +38,11 @@\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n import org.elasticsearch.template.CompiledTemplate;\n+import org.elasticsearch.template.CompiledTemplate;\n import org.elasticsearch.threadpool.ThreadPool;\n import 
org.elasticsearch.transport.TransportService;\n \n+import java.io.IOException;\n import java.util.Collections;\n \n import static org.elasticsearch.script.ScriptContext.Standard.SEARCH;\n@@ -69,27 +71,8 @@ public TransportSearchTemplateAction(Settings settings, ThreadPool threadPool, T\n protected void doExecute(SearchTemplateRequest request, ActionListener<SearchTemplateResponse> listener) {\n final SearchTemplateResponse response = new SearchTemplateResponse();\n try {\n- Script script = new Script(request.getScriptType(), TEMPLATE_LANG, request.getScript(),\n- request.getScriptParams() == null ? Collections.emptyMap() : request.getScriptParams());\n- CompiledTemplate compiledScript = scriptService.compileTemplate(script, SEARCH);\n- BytesReference source = compiledScript.run(script.getParams());\n- response.setSource(source);\n-\n- if (request.isSimulate()) {\n- listener.onResponse(response);\n- return;\n- }\n-\n- // Executes the search\n- SearchRequest searchRequest = request.getRequest();\n- //we can assume the template is always json as we convert it before compiling it\n- try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(xContentRegistry, source)) {\n- SearchSourceBuilder builder = SearchSourceBuilder.searchSource();\n- builder.parseXContent(new QueryParseContext(parser));\n- builder.explain(request.isExplain());\n- builder.profile(request.isProfile());\n- searchRequest.source(builder);\n-\n+ SearchRequest searchRequest = convert(request, response, scriptService, xContentRegistry);\n+ if (searchRequest != null) {\n searchAction.execute(searchRequest, new ActionListener<SearchResponse>() {\n @Override\n public void onResponse(SearchResponse searchResponse) {\n@@ -106,9 +89,35 @@ public void onFailure(Exception t) {\n listener.onFailure(t);\n }\n });\n+ } else {\n+ listener.onResponse(response);\n }\n- } catch (Exception t) {\n- listener.onFailure(t);\n+ } catch (IOException e) {\n+ listener.onFailure(e);\n+ }\n+ }\n+\n+ static SearchRequest convert(SearchTemplateRequest searchTemplateRequest, SearchTemplateResponse response, ScriptService scriptService,\n+ NamedXContentRegistry xContentRegistry) throws IOException {\n+ Script script = new Script(searchTemplateRequest.getScriptType(), TEMPLATE_LANG, searchTemplateRequest.getScript(),\n+ searchTemplateRequest.getScriptParams() == null ? 
Collections.emptyMap() : searchTemplateRequest.getScriptParams());\n+ CompiledTemplate compiledScript = scriptService.compileTemplate(script, SEARCH);\n+ BytesReference source = compiledScript.run(script.getParams());\n+ response.setSource(source);\n+\n+ SearchRequest searchRequest = searchTemplateRequest.getRequest();\n+ response.setSource(source);\n+ if (searchTemplateRequest.isSimulate()) {\n+ return null;\n+ }\n+\n+ try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(xContentRegistry, source)) {\n+ SearchSourceBuilder builder = SearchSourceBuilder.searchSource();\n+ builder.parseXContent(new QueryParseContext(parser));\n+ builder.explain(searchTemplateRequest.isExplain());\n+ builder.profile(searchTemplateRequest.isProfile());\n+ searchRequest.source(builder);\n }\n+ return searchRequest;\n }\n }", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/TransportSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.script.mustache;\n \n+import org.elasticsearch.action.search.MultiSearchRequest;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.rest.RestRequest;\n@@ -90,4 +91,12 @@ public void testParseWithCarriageReturn() throws Exception {\n assertEquals(\"{\\\"query\\\":{\\\"match_{{template}}\\\":{}}}\", request.requests().get(0).getScript());\n assertEquals(1, request.requests().get(0).getScriptParams().size());\n }\n+\n+ public void testMaxConcurrentSearchRequests() {\n+ MultiSearchTemplateRequest request = new MultiSearchTemplateRequest();\n+ request.maxConcurrentSearchRequests(randomIntBetween(1, Integer.MAX_VALUE));\n+ expectThrows(IllegalArgumentException.class, () ->\n+ request.maxConcurrentSearchRequests(randomIntBetween(Integer.MIN_VALUE, 0)));\n+ }\n+\n }", "filename": "modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/MultiSearchTemplateRequestTests.java", "status": "modified" }, { "diff": "@@ -24,6 +24,10 @@\n \"typed_keys\": {\n \"type\" : \"boolean\",\n \"description\" : \"Specify whether aggregation and suggester names should be prefixed by their respective types in the response\"\n+ },\n+ \"max_concurrent_searches\" : {\n+ \"type\" : \"number\",\n+ \"description\" : \"Controls the maximum number of concurrent searches the multi search api will execute\"\n }\n }\n },", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/msearch_template.json", "status": "modified" } ] }
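Taken together, the changes in this pull request wire the existing `max_concurrent_searches` query-string parameter of `_msearch` through to `_msearch/template`. A minimal usage sketch follows; the index name, template, and limit are illustrative values (modeled on the example in the original issue), not taken from the change itself:

```
# Illustrative request; the parameter name comes from the REST spec change above, the values are made up.
GET _msearch/template?max_concurrent_searches=4
{"index": "test"}
{"inline": {"query": {"match": {"user": "{{username}}"}}}, "params": {"username": "john"}}
{"index": "test"}
{"inline": {"query": {"match": {"user": "{{username}}"}}}, "params": {"username": "jane"}}
```

On the Java side the same limit can be set with the newly added `MultiSearchTemplateRequestBuilder.setMaxConcurrentSearchRequests(int)`; values below 1 are rejected with an `IllegalArgumentException`, as exercised by the added `testMaxConcurrentSearchRequests` test.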
{ "body": "> I POST an search template using REST API, and it works as excepted.\r\n> But when I restart the es after registering the template, it throws an exception.\r\nOrigin question [Link](https://discuss.elastic.co/t/posting-search-template-as-rest-api-causes-elastic-to-fail-on-start/82552) https://discuss.elastic.co/t/posting-search-template-as-rest-api-causes-elastic-to-fail-on-start/82552\r\n\r\nI am also getting the same exception.\r\n ", "comments": [ { "body": "I think there's indeed a real issue here starting in 5.3.0, maybe related to #22206?", "created_at": "2017-04-21T13:53:59Z" } ], "number": 24227, "title": "\"POST’ing Search Template as REST API causes Elastic to fail on start\"" }
{ "body": "When parsing StoredSearchScript we were adding a Content type option that was forbidden (by a check that threw an exception) by the parser thats used to parse the template when we read it from the cluster state. This was stopping Elastisearch from starting after stored search templates had been added.\r\n\r\nThis change no longer adds the content type option to the StoredScriptSource object when parsing from the put search template request. This is safe because the StoredScriptSource content is always JSON when its stored in the cluster state since we do a conversion to JSON before this point.\r\n\r\nAlso removes the check for the content type in the options when parsing StoredScriptSource so users who already have stored scripts can start Elasticsearch.\r\n\r\nCloses #24227\r\n", "number": 24251, "review_comments": [ { "body": "Since the check is being removed, maybe it makes sense to continue to store this with the content-type set to JSON? This only works now because of the default in the mustache engine which is likely fine because I don't see the default changing.", "created_at": "2017-04-21T17:42:35Z" }, { "body": "Maybe add a comment here about how we don't really like the content_type option but we have to support it for 5.3.0 and 5.3.1?", "created_at": "2017-04-21T17:48:06Z" }, { "body": "I think we should not store the content type so we can add the check back in a later version?", "created_at": "2017-04-21T17:48:55Z" }, { "body": "I think that we should have different code-paths for when we validate user inputs and when we parse stuff from cluster state. We should not fail if we set the content-type ourselves, but only if it was set by users, I guess. In that case we wouldn't have to remove the check in `setOptions`, if it was only performed for user input. Yet, I am not too familiar with this validation check and I cannot tell how important it is, nor when it was introduced.\r\n\r\nSide note: the reason why removing the content-type from the options map doesn't affect the parsing side of things seems to be that json is the default mime type anyways, so it didn't need to be set to json explicitly, see https://github.com/elastic/elasticsearch/blob/master/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/CustomMustacheFactory.java#L79 .", "created_at": "2017-04-21T20:00:43Z" }, { "body": "I agree with @javanna that we should ultimately have different paths for user input and loading cluster state, but I think that can be done in a follow up PR.\r\n\r\nI added the ability to add compiler options because it may be needed for Painless in the future and there was already a single option there for templates anyway so it made sense to encapsulate it. Content-type is reserved internally for templates only, so this check was preventing non-template scripts from specifying content-type. The search template API never allows options to be input anyway so it's not a problem there. 
Removal of this check should be safe for now since scripting languages at worst will ignore compiler options or in the case of Painless see an unrecognized option and fail at compile time.\r\n\r\nAlso my apologies for introducing the bug, I hadn't fully thought through the code path shared by both user-input and internal cluster state loading.", "created_at": "2017-04-21T20:35:59Z" }, { "body": "ok I am totally fine with removing the check for now and reintroducing it later once we have two different code-paths.", "created_at": "2017-04-21T20:49:38Z" } ], "title": "No longer add illegal content type option to stored search templates" }
{ "commits": [ { "message": "No longer add illegal content type option to stored search templates\n\nWhen parsing StoredSearchScript we were adding a Content type option that was forbidden (by a check that threw an exception) by the parser thats used to parse the template when we read it from the cluster state. This was stopping Elastisearch from starting after stored search templates had been added.\n\nThis change no longer adds the content type option to the StoredScriptSource object when parsing from the put search template request. This is safe because the StoredScriptSource content is always JSON when its stored in the cluster state since we do a conversion to JSON before this point.\n\nAlso removes the check for the content type in the options when parsing StoredScriptSource so users who already have stored scripts can start Elasticsearch.\n\nCloses #24227" } ], "files": [ { "diff": "@@ -123,10 +123,6 @@ private void setCode(XContentParser parser) {\n * Appends the user-defined compiler options with the internal compiler options.\n */\n private void setOptions(Map<String, String> options) {\n- if (options.containsKey(Script.CONTENT_TYPE_OPTION)) {\n- throw new IllegalArgumentException(Script.CONTENT_TYPE_OPTION + \" cannot be user-specified\");\n- }\n-\n this.options.putAll(options);\n }\n \n@@ -266,8 +262,7 @@ public static StoredScriptSource parse(String lang, BytesReference content, XCon\n //this is really for search templates, that need to be converted to json format\n try (XContentBuilder builder = XContentFactory.jsonBuilder()) {\n builder.copyCurrentStructure(parser);\n- return new StoredScriptSource(lang, builder.string(),\n- Collections.singletonMap(Script.CONTENT_TYPE_OPTION, XContentType.JSON.mediaType()));\n+ return new StoredScriptSource(lang, builder.string(), Collections.emptyMap());\n }\n }\n \n@@ -283,8 +278,7 @@ public static StoredScriptSource parse(String lang, BytesReference content, XCon\n token = parser.nextToken();\n \n if (token == Token.VALUE_STRING) {\n- return new StoredScriptSource(lang, parser.text(),\n- Collections.singletonMap(Script.CONTENT_TYPE_OPTION, XContentType.JSON.mediaType()));\n+ return new StoredScriptSource(lang, parser.text(), Collections.emptyMap());\n }\n }\n \n@@ -297,8 +291,7 @@ public static StoredScriptSource parse(String lang, BytesReference content, XCon\n builder.copyCurrentStructure(parser);\n }\n \n- return new StoredScriptSource(lang, builder.string(),\n- Collections.singletonMap(Script.CONTENT_TYPE_OPTION, XContentType.JSON.mediaType()));\n+ return new StoredScriptSource(lang, builder.string(), Collections.emptyMap());\n }\n }\n } catch (IOException ioe) {", "filename": "core/src/main/java/org/elasticsearch/script/StoredScriptSource.java", "status": "modified" }, { "diff": "@@ -0,0 +1,68 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.script;\n+\n+import org.elasticsearch.common.io.stream.Writeable.Reader;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.test.AbstractSerializingTestCase;\n+\n+import java.io.IOException;\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+public class StoredScriptSourceTests extends AbstractSerializingTestCase<StoredScriptSource> {\n+\n+ @Override\n+ protected StoredScriptSource createTestInstance() {\n+ String lang = randomAlphaOfLengthBetween(1, 20);\n+ XContentType xContentType = randomFrom(XContentType.JSON, XContentType.YAML);\n+ try {\n+ XContentBuilder template = XContentBuilder.builder(xContentType.xContent());\n+ template.startObject();\n+ template.startObject(\"query\");\n+ template.startObject(\"match\");\n+ template.field(\"title\", \"{{query_string}}\");\n+ template.endObject();\n+ template.endObject();\n+ template.endObject();\n+ Map<String, String> options = new HashMap<>();\n+ if (randomBoolean()) {\n+ options.put(Script.CONTENT_TYPE_OPTION, xContentType.mediaType());\n+ }\n+ return StoredScriptSource.parse(lang, template.bytes(), xContentType);\n+ } catch (IOException e) {\n+ throw new AssertionError(\"Failed to create test instance\", e);\n+ }\n+ }\n+\n+ @Override\n+ protected StoredScriptSource doParseInstance(XContentParser parser) throws IOException {\n+ return StoredScriptSource.fromXContent(parser);\n+ }\n+\n+ @Override\n+ protected Reader<StoredScriptSource> instanceReader() {\n+ return StoredScriptSource::new;\n+ }\n+\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/script/StoredScriptSourceTests.java", "status": "added" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.script;\n \n import org.elasticsearch.ResourceNotFoundException;\n-import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -198,8 +197,7 @@ public void testSourceParsing() throws Exception {\n builder.startObject().field(\"template\", \"code\").endObject();\n \n StoredScriptSource parsed = StoredScriptSource.parse(\"lang\", builder.bytes(), XContentType.JSON);\n- StoredScriptSource source = new StoredScriptSource(\"lang\", \"code\",\n- Collections.singletonMap(Script.CONTENT_TYPE_OPTION, builder.contentType().mediaType()));\n+ StoredScriptSource source = new StoredScriptSource(\"lang\", \"code\", Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n }\n@@ -214,8 +212,7 @@ public void testSourceParsing() throws Exception {\n }\n \n StoredScriptSource parsed = StoredScriptSource.parse(\"lang\", builder.bytes(), XContentType.JSON);\n- StoredScriptSource source = new StoredScriptSource(\"lang\", code,\n- Collections.singletonMap(Script.CONTENT_TYPE_OPTION, builder.contentType().mediaType()));\n+ StoredScriptSource source = new StoredScriptSource(\"lang\", code, Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n }\n@@ -230,8 +227,7 @@ public void testSourceParsing() throws Exception {\n }\n \n StoredScriptSource parsed = StoredScriptSource.parse(\"lang\", builder.bytes(), XContentType.JSON);\n- StoredScriptSource source = new StoredScriptSource(\"lang\", code,\n- 
Collections.singletonMap(Script.CONTENT_TYPE_OPTION, builder.contentType().mediaType()));\n+ StoredScriptSource source = new StoredScriptSource(\"lang\", code, Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n }\n@@ -246,8 +242,7 @@ public void testSourceParsing() throws Exception {\n }\n \n StoredScriptSource parsed = StoredScriptSource.parse(\"lang\", builder.bytes(), XContentType.JSON);\n- StoredScriptSource source = new StoredScriptSource(\"lang\", code,\n- Collections.singletonMap(Script.CONTENT_TYPE_OPTION, builder.contentType().mediaType()));\n+ StoredScriptSource source = new StoredScriptSource(\"lang\", code, Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n }\n@@ -328,16 +323,6 @@ public void testSourceParsingErrors() throws Exception {\n StoredScriptSource.parse(null, builder.bytes(), XContentType.JSON));\n assertThat(iae.getMessage(), equalTo(\"illegal compiler options [{option=option}] specified\"));\n }\n-\n- // check for illegal use of content type option\n- try (XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON)) {\n- builder.startObject().field(\"script\").startObject().field(\"lang\", \"lang\").field(\"code\", \"code\")\n- .startObject(\"options\").field(\"content_type\", \"option\").endObject().endObject().endObject();\n-\n- ParsingException pe = expectThrows(ParsingException.class, () ->\n- StoredScriptSource.parse(null, builder.bytes(), XContentType.JSON));\n- assertThat(pe.getRootCause().getMessage(), equalTo(\"content_type cannot be user-specified\"));\n- }\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/script/StoredScriptTests.java", "status": "modified" } ] }
{ "body": "This PR is meant to address the permission errors that are encountered in the HDFS Repository Plugin as described in https://github.com/elastic/elasticsearch/issues/22156.\r\n\r\nWhen Hadoop security is enabled, the HDFS client requests the current logged in Subject for a hadoop based Credentials object, which trips a missing permission in the plugin's policy file. This is not caught during testing since we neither use the actual HDFS client code nor do we execute with Kerberos security enabled.\r\n\r\nI'm working on testing this on a local environment at the moment since it requires a secured HDFS service to activate the code path. My main concern is that there may be other permissions that have not yet had the chance to trip up the plugin because they have not yet been reached in the code.\r\n\r\nCloses #22156", "comments": [ { "body": "I do not think we should add a permission that we cannot test (all the time, not manually). Can you improve the hdfs fixture so that we can use credentials with it to test this actually works?", "created_at": "2017-03-01T20:48:20Z" }, { "body": "@rjernst I completely agree with you on that. I'm still trying to dig through all the permission violations at the moment against a local kerberized cluster, then will circle back to assess the effort for the test fixtures.", "created_at": "2017-03-02T18:51:39Z" }, { "body": "So glad to see this issue being worked on. W are waiting on this fix for the hdfs plugin issue with Elastic Search 5.2 in order to take snapshots on hdfs repository. Any ETA on when this will get released?", "created_at": "2017-03-14T19:32:48Z" }, { "body": "@surekhabalaji It's hard to say at this stage. It's been a long process of testing, seeing new permissions that are missing, retesting, repeat, and the iteration cycle for testing is fairly large due to the overhead of requiring a secure HDFS environment. Rest assured that it is being actively worked on.", "created_at": "2017-03-15T20:29:31Z" }, { "body": "At this point the code should be complete. I'm able to manually test this in a sandbox environment with a Kerberos-enabled HDFS cluster. Repository creation, snapshots and restores are working in that environment. @rjernst and @jasontedor, does it make sense to get this change in then immediately circle back around for the testing fixtures after?", "created_at": "2017-03-22T13:48:13Z" }, { "body": "@jbaiera This looks like an elaborate and difficult change to have worked through, and I appreciate the effort and the documentation within the change. However, building off what @rjernst said, I think that we need a testing plan before this can be integrated (n.b.: plan). As we discussed via another channel, if the fixture approach is too much effort I'm fine with testing this through a VM.", "created_at": "2017-03-22T13:49:34Z" }, { "body": "The changes look alright, but as I said before, I really don't trust any of it without tests. This may fix the issue for one particular configuration (which you tested manually), but we don't know that it works for other configurations. In addition to a plan for how to test this, I think we need an explicitly documented set of authentication types that we claim to support with hadoop. Then what needs to be tested should come naturally from that.", "created_at": "2017-03-22T18:54:30Z" }, { "body": "After further scoping out testing requirements for this, I've come across some potentially nasty problems with how Hadoop tries to refresh Kerberos tickets in the CCACHE.\r\n\r\n1. 
Hadoop spawns a thread that refreshes the ticket in the background\r\n2. Hadoop needs the kerberos client packages installed on the local OS because...\r\n3. Hadoop spawns a child process (default is `kinit -R`) to do the ticket refresh\r\n\r\nI haven't personally run into this issue yet, but that's because I haven't let the repository soak long enough for the testing ticket I've been using to need refreshing. Hadoop is basically sitting on its hands waiting to execute code that is questionable for our security model. I'm going to work on a clean reproduction of this to confirm, but the uncertainty around it makes me uncomfortable about making it a blocker for 5.3.0.", "created_at": "2017-03-23T02:01:11Z" }, { "body": "I've made some local changes to see if I could force the command execution to trip the security checks and, to my dismay, succeeded. Alas, it seems that using a Kerberos TGT from the CCACHE in Elasticsearch is not going to be supportable unless Hadoop finds a better way to refresh Kerberos Tickets instead of forking a process to do so.\r\n\r\nI have seen that logging in with a keytab does not launch this background refresh process in the Hadoop code. I'll dig into how the Kerberos auth module works to see if there's some potential to still allow authentication.", "created_at": "2017-03-23T14:43:23Z" }, { "body": "test this please", "created_at": "2017-04-07T12:56:13Z" }, { "body": "Rebasing onto the most recent master to see if I can get the CI build to pass.", "created_at": "2017-04-20T18:43:28Z" }, { "body": "@rjernst @jasontedor, Docs are completed. This is probably ready for another round of review.\r\n\r\nRegarding the testing strategy, I have a different branch built out with the following changes, but they depend on this branch at the moment:\r\n1. Splits the build Fixture class into an interface and implementation (Fixture and AntFixture).\r\n2. Introduces a VagrantFixture implementation of Fixture based on the existing VagrantCommand task.\r\n3. Vagrantfile containing setup and provisioning for MIT Kerberos 5 (KRB5) added.\r\n4. Modifications to HDFS Test Fixture to allow it to interface with kerberos hosted in vagrant\r\n5. Create a new secure hdfs fixture task in the repository build with dependencies on a vagrant based kerberos fixture task\r\n6. New test suite specifically modeled for interacting with secure HDFS. Configured a test runner specifically for that suite to run using the secure HDFS fixture and the kerberos vagrant fixture.\r\n\r\nI can merge all that into this branch and we can review it here, but It's rather a lot of code for all that, so I'd like to open a different PR for it after this is merged in. Thoughts?", "created_at": "2017-04-21T15:13:36Z" }, { "body": "I think separate PRs would be good, at least for the first 2 bullets which are necessary, but semi unrelated.", "created_at": "2017-04-21T15:43:26Z" }, { "body": "I opened #24249 for the vagrant test fixture changes.", "created_at": "2017-04-21T16:33:49Z" }, { "body": "@rjernst @jasontedor Any thing else needed for this at this time? I'd like to see if I can get this merged in.", "created_at": "2017-05-03T13:07:17Z" }, { "body": "Hi,\r\nThanks for the information.\r\nI need some information on configure hdfs repository with kerebos which i am not able to do.\r\ni read from the posts that its still in progress as :\r\nKerberos authentication for the HDFS Repository Snapshot plugin does not function correctly in 5.x at the moment. 
There is currently an effort to re-add this functionality so that it may function well with the internal security manager.\r\nCan you please help me when can i get this functionality added or any other method for using this plugin.\r\n\r\nDivya\r\n", "created_at": "2017-05-03T14:26:30Z" } ], "number": 23439, "title": "Fixing permission errors for `KERBEROS` security mode for HDFS Repository" }
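For anyone tracking this issue, the change adds the security-manager permissions (and documentation) needed to register a snapshot repository against a Kerberos-secured HDFS cluster. A sketch of what such a registration might look like; the `security.principal` setting name and the expectation that the principal's keytab lives in the plugin's configuration directory are assumptions based on the repository-hdfs documentation this PR adds, and the URI, path, and principal values are placeholders:

```
# Illustrative only; uri, path, and principal are placeholders, and the security.principal
# setting name is an assumption based on the documentation added by this PR.
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode.example.com:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "security.principal": "elasticsearch@REALM.EXAMPLE.COM"
  }
}
```

Note that, per the discussion above, authentication via a ticket cache is not supportable under the security manager because Hadoop forks a `kinit -R` child process to renew tickets, so keytab-based login is the mode this change targets.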
{ "body": "This PR extends the functionality of the existing testing fixture code in our gradle build source. Previously, testing fixtures were locked into being executed by an Ant task. In order to take advantage of the existing enhanced Vagrant logging support in the build code, I have split the existing Fixture object into an interface (`Fixture.groovy`) and an implementation (`AntFixture.groovy`), and added a new implementation (`VagrantFixture.groovy`) that is based on the existing `VagrantCommand` task.\r\n\r\nThis is related to #23439 in that the integration tests will need to spin up a Kerberos KDC based in a virtual machine run by Vagrant.", "number": 24249, "review_comments": [ { "body": "I think you could change this to return the String task name instead of the actual task object, so you would not have to even have the Task here.", "created_at": "2017-04-25T19:18:50Z" }, { "body": "I think you could create the stop task here directly, not rely on calling getStopTask. See my comment about decoupling the two.", "created_at": "2017-04-25T19:19:25Z" }, { "body": "Do you mean create the stop task directly in the afterEvaluate method or just in the constructor proper? The issue I ran into with doing it directly in the constructor was that the stop task was being created before the VagrantFixture was done being configured, which was causing some issues.", "created_at": "2017-04-28T15:44:06Z" }, { "body": "I mean in the afterEvaluate.", "created_at": "2017-04-28T16:29:56Z" }, { "body": "I'm running into a strange problem with this. The build and tests run fine but the stop task on the vagrant fixture ceases to pick up the environment variables for the vagrant command, and thus cannot find the original Vagrantfile used to set up the krb5kdc vm. Without it, the command exits complaining that it doesn't know the vm that it's being asked to shut down. \r\n\r\nI'm not sure I'm able to spot where exactly it's dropping the variables on the floor, since all my attempts at logging the values have returned exactly what I expected them to.", "created_at": "2017-04-28T19:11:09Z" }, { "body": "I finally tracked down the issue. The stop task for the `VagrantFixture` is a `VagrantCommandTask` which has a project `afterEvaluate` call that sets the environment variables for the vagrant command. It seems that the `afterEvaluate` was either being ignored or called before the stop task was able to configure the environment variables for the command. New commit landing soon to address these things.", "created_at": "2017-04-28T19:43:31Z" }, { "body": "You'll need to be careful here. afterEvaluate is very tricky for this exact reason. Calling afterEvaluate inside an afterEvaluate is just wrong. In this case, I would create the task outside of afterEvalute. You can also override setBoxName and setEnvironmentVars, so that those are copied to the stop task whenever they are changed in the dsl. With setting `args`, I would use a closure to delay evaluation. So instead of `this.boxName`, use `\"${-> this.boxName}\"`. I think separately we could also cleanup VagrantCommandTask so that it has a `command` in addition the `boxName`, and then any args are just appended, instead of having to redefine the args for every subclass and needing to use boxName.", "created_at": "2017-04-28T20:43:05Z" }, { "body": "Yeah, I ended up doing the afterEvaluate closure scheduling, and then created the task outside of it right after. Didn't think of overloading the setter methods though. 
I might end up doing that instead since it would be much clearer.", "created_at": "2017-04-28T21:19:49Z" }, { "body": "Just use `this.remoteCommand = Objects.requireNonNull(remoteComment)` instead of the `if` above?", "created_at": "2017-05-01T21:23:46Z" }, { "body": "We should not be overriding and calling the task action directly. Gradle actually has plans to fail if trying to call execute directly in a future version. I think instead you can setup the command line and env vars in a doFirst closure?", "created_at": "2017-05-01T21:26:03Z" }, { "body": "Remove commented out code. Here and all the places below.", "created_at": "2017-05-01T21:26:46Z" }, { "body": "I'm finding out that calling `doFirst` from within the constructor still slots the main task action in front of the closure given in the constructor. Ironically, to do this it seems that it needs to be called from within a `project.afterEvaluate` call. I feel like that's an appropriate use of `afterEvaluate`, but before I do, I wanted to check to see if there's a better way.", "created_at": "2017-05-01T23:34:52Z" }, { "body": "I would need to see example code to comment.", "created_at": "2017-05-02T00:23:00Z" }, { "body": "Since we are already in an afterEvaluate, I don't think we need the doFirst?", "created_at": "2017-05-03T21:02:06Z" }, { "body": "I think we should leave it as is for the sake of least astonishment. The biggest thing I struggled with in this PR was that configurations I wanted to preempt or override from the superclass were being captured before my subclass's logic could change them because of how `afterEvaluate` schedules things. In a sense, the superclass logic was happening before the subclasses logic could override it, which is something that I imagine most people wouldn't expect.", "created_at": "2017-05-03T21:22:27Z" }, { "body": "Then please add a note explaining why both afterEvaluate and doFirst are used.", "created_at": "2017-05-03T22:24:04Z" } ], "title": "Add Vagrant based testing fixture" }
{ "commits": [ { "message": "Extracted a Fixture interface out of the existing Fixture class (now AntFixture)" }, { "message": "Lazy init the vagrant fixture's stop task" }, { "message": "Updating some doc strings" }, { "message": "Changing Fixture to return either a task or a name of a task not yet created" }, { "message": "Improving the VagrantCommandTask to deal with less boilerplate. Simplify VagrantFixture" }, { "message": "Fixing up vagrant command improvements" }, { "message": "Fixing some errors in the package testing.\n\nRemoved the project.afterEvaluate calls from the constructors that perform configuration of the task. Scheduling of the afterEvaluate calls was causing problems with values not being set because of the order the closures were scheduled. Because this is such an easy thing to mess up I've gone through and removed all the calls and replaced the logic with either eager setup code or code that runs when methods on the task are called." }, { "message": "Objects.requireNonNull fix for remote command" }, { "message": "Do not override task command, instead use doFirst" }, { "message": "Kill commented code" }, { "message": "Ok need to use project.afterEvaluate to set the doFirst closure in the constructor." }, { "message": "Adding reasoning for using afterEvaluate with doFirst instead of just afterEvaluate" } ], "files": [ { "diff": "@@ -0,0 +1,291 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.gradle.test\n+\n+import org.apache.tools.ant.taskdefs.condition.Os\n+import org.elasticsearch.gradle.AntTask\n+import org.elasticsearch.gradle.LoggedExec\n+import org.gradle.api.GradleException\n+import org.gradle.api.Task\n+import org.gradle.api.tasks.Exec\n+import org.gradle.api.tasks.Input\n+\n+/**\n+ * A fixture for integration tests which runs in a separate process launched by Ant.\n+ */\n+public class AntFixture extends AntTask implements Fixture {\n+\n+ /** The path to the executable that starts the fixture. */\n+ @Input\n+ String executable\n+\n+ private final List<Object> arguments = new ArrayList<>()\n+\n+ @Input\n+ public void args(Object... args) {\n+ arguments.addAll(args)\n+ }\n+\n+ /**\n+ * Environment variables for the fixture process. The value can be any object, which\n+ * will have toString() called at execution time.\n+ */\n+ private final Map<String, Object> environment = new HashMap<>()\n+\n+ @Input\n+ public void env(String key, Object value) {\n+ environment.put(key, value)\n+ }\n+\n+ /** A flag to indicate whether the command should be executed from a shell. 
*/\n+ @Input\n+ boolean useShell = false\n+\n+ /**\n+ * A flag to indicate whether the fixture should be run in the foreground, or spawned.\n+ * It is protected so subclasses can override (eg RunTask).\n+ */\n+ protected boolean spawn = true\n+\n+ /**\n+ * A closure to call before the fixture is considered ready. The closure is passed the fixture object,\n+ * as well as a groovy AntBuilder, to enable running ant condition checks. The default wait\n+ * condition is for http on the http port.\n+ */\n+ @Input\n+ Closure waitCondition = { AntFixture fixture, AntBuilder ant ->\n+ File tmpFile = new File(fixture.cwd, 'wait.success')\n+ ant.get(src: \"http://${fixture.addressAndPort}\",\n+ dest: tmpFile.toString(),\n+ ignoreerrors: true, // do not fail on error, so logging information can be flushed\n+ retries: 10)\n+ return tmpFile.exists()\n+ }\n+\n+ private final Task stopTask\n+\n+ public AntFixture() {\n+ stopTask = createStopTask()\n+ finalizedBy(stopTask)\n+ }\n+\n+ @Override\n+ public Task getStopTask() {\n+ return stopTask\n+ }\n+\n+ @Override\n+ protected void runAnt(AntBuilder ant) {\n+ project.delete(baseDir) // reset everything\n+ cwd.mkdirs()\n+ final String realExecutable\n+ final List<Object> realArgs = new ArrayList<>()\n+ final Map<String, Object> realEnv = environment\n+ // We need to choose which executable we are using. In shell mode, or when we\n+ // are spawning and thus using the wrapper script, the executable is the shell.\n+ if (useShell || spawn) {\n+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n+ realExecutable = 'cmd'\n+ realArgs.add('/C')\n+ realArgs.add('\"') // quote the entire command\n+ } else {\n+ realExecutable = 'sh'\n+ }\n+ } else {\n+ realExecutable = executable\n+ realArgs.addAll(arguments)\n+ }\n+ if (spawn) {\n+ writeWrapperScript(executable)\n+ realArgs.add(wrapperScript)\n+ realArgs.addAll(arguments)\n+ }\n+ if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) {\n+ realArgs.add('\"')\n+ }\n+ commandString.eachLine { line -> logger.info(line) }\n+\n+ ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) {\n+ realEnv.each { key, value -> env(key: key, value: value) }\n+ realArgs.each { arg(value: it) }\n+ }\n+\n+ String failedProp = \"failed${name}\"\n+ // first wait for resources, or the failure marker from the wrapper script\n+ ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) {\n+ or {\n+ resourceexists {\n+ file(file: failureMarker.toString())\n+ }\n+ and {\n+ resourceexists {\n+ file(file: pidFile.toString())\n+ }\n+ resourceexists {\n+ file(file: portsFile.toString())\n+ }\n+ }\n+ }\n+ }\n+\n+ if (ant.project.getProperty(failedProp) || failureMarker.exists()) {\n+ fail(\"Failed to start ${name}\")\n+ }\n+\n+ // the process is started (has a pid) and is bound to a network interface\n+ // so now wait undil the waitCondition has been met\n+ // TODO: change this to a loop?\n+ boolean success\n+ try {\n+ success = waitCondition(this, ant) == false\n+ } catch (Exception e) {\n+ String msg = \"Wait condition caught exception for ${name}\"\n+ logger.error(msg, e)\n+ fail(msg, e)\n+ }\n+ if (success == false) {\n+ fail(\"Wait condition failed for ${name}\")\n+ }\n+ }\n+\n+ /** Returns a debug string used to log information about how the fixture was run. 
*/\n+ protected String getCommandString() {\n+ String commandString = \"\\n${name} configuration:\\n\"\n+ commandString += \"-----------------------------------------\\n\"\n+ commandString += \" cwd: ${cwd}\\n\"\n+ commandString += \" command: ${executable} ${arguments.join(' ')}\\n\"\n+ commandString += ' environment:\\n'\n+ environment.each { k, v -> commandString += \" ${k}: ${v}\\n\" }\n+ if (spawn) {\n+ commandString += \"\\n [${wrapperScript.name}]\\n\"\n+ wrapperScript.eachLine('UTF-8', { line -> commandString += \" ${line}\\n\"})\n+ }\n+ return commandString\n+ }\n+\n+ /**\n+ * Writes a script to run the real executable, so that stdout/stderr can be captured.\n+ * TODO: this could be removed if we do use our own ProcessBuilder and pump output from the process\n+ */\n+ private void writeWrapperScript(String executable) {\n+ wrapperScript.parentFile.mkdirs()\n+ String argsPasser = '\"$@\"'\n+ String exitMarker = \"; if [ \\$? != 0 ]; then touch run.failed; fi\"\n+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n+ argsPasser = '%*'\n+ exitMarker = \"\\r\\n if \\\"%errorlevel%\\\" neq \\\"0\\\" ( type nul >> run.failed )\"\n+ }\n+ wrapperScript.setText(\"\\\"${executable}\\\" ${argsPasser} > run.log 2>&1 ${exitMarker}\", 'UTF-8')\n+ }\n+\n+ /** Fail the build with the given message, and logging relevant info*/\n+ private void fail(String msg, Exception... suppressed) {\n+ if (logger.isInfoEnabled() == false) {\n+ // We already log the command at info level. No need to do it twice.\n+ commandString.eachLine { line -> logger.error(line) }\n+ }\n+ logger.error(\"${name} output:\")\n+ logger.error(\"-----------------------------------------\")\n+ logger.error(\" failure marker exists: ${failureMarker.exists()}\")\n+ logger.error(\" pid file exists: ${pidFile.exists()}\")\n+ logger.error(\" ports file exists: ${portsFile.exists()}\")\n+ // also dump the log file for the startup script (which will include ES logging output to stdout)\n+ if (runLog.exists()) {\n+ logger.error(\"\\n [log]\")\n+ runLog.eachLine { line -> logger.error(\" ${line}\") }\n+ }\n+ logger.error(\"-----------------------------------------\")\n+ GradleException toThrow = new GradleException(msg)\n+ for (Exception e : suppressed) {\n+ toThrow.addSuppressed(e)\n+ }\n+ throw toThrow\n+ }\n+\n+ /** Adds a task to kill an elasticsearch node with the given pidfile */\n+ private Task createStopTask() {\n+ final AntFixture fixture = this\n+ final Object pid = \"${ -> fixture.pid }\"\n+ Exec stop = project.tasks.create(name: \"${name}#stop\", type: LoggedExec)\n+ stop.onlyIf { fixture.pidFile.exists() }\n+ stop.doFirst {\n+ logger.info(\"Shutting down ${fixture.name} with pid ${pid}\")\n+ }\n+ if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n+ stop.executable = 'Taskkill'\n+ stop.args('/PID', pid, '/F')\n+ } else {\n+ stop.executable = 'kill'\n+ stop.args('-9', pid)\n+ }\n+ stop.doLast {\n+ project.delete(fixture.pidFile)\n+ }\n+ return stop\n+ }\n+\n+ /**\n+ * A path relative to the build dir that all configuration and runtime files\n+ * will live in for this fixture\n+ */\n+ protected File getBaseDir() {\n+ return new File(project.buildDir, \"fixtures/${name}\")\n+ }\n+\n+ /** Returns the working directory for the process. Defaults to \"cwd\" inside baseDir. */\n+ protected File getCwd() {\n+ return new File(baseDir, 'cwd')\n+ }\n+\n+ /** Returns the file the process writes its pid to. Defaults to \"pid\" inside baseDir. 
*/\n+ protected File getPidFile() {\n+ return new File(baseDir, 'pid')\n+ }\n+\n+ /** Reads the pid file and returns the process' pid */\n+ public int getPid() {\n+ return Integer.parseInt(pidFile.getText('UTF-8').trim())\n+ }\n+\n+ /** Returns the file the process writes its bound ports to. Defaults to \"ports\" inside baseDir. */\n+ protected File getPortsFile() {\n+ return new File(baseDir, 'ports')\n+ }\n+\n+ /** Returns an address and port suitable for a uri to connect to this node over http */\n+ public String getAddressAndPort() {\n+ return portsFile.readLines(\"UTF-8\").get(0)\n+ }\n+\n+ /** Returns a file that wraps around the actual command when {@code spawn == true}. */\n+ protected File getWrapperScript() {\n+ return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run')\n+ }\n+\n+ /** Returns a file that the wrapper script writes when the command failed. */\n+ protected File getFailureMarker() {\n+ return new File(cwd, 'run.failed')\n+ }\n+\n+ /** Returns a file that the wrapper script writes when the command failed. */\n+ protected File getRunLog() {\n+ return new File(cwd, 'run.log')\n+ }\n+}", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/AntFixture.groovy", "status": "added" }, { "diff": "@@ -208,7 +208,7 @@ class ClusterFormationTasks {\n start.finalizedBy(stop)\n for (Object dependency : config.dependencies) {\n if (dependency instanceof Fixture) {\n- Task depStop = ((Fixture)dependency).stopTask\n+ def depStop = ((Fixture)dependency).stopTask\n runner.finalizedBy(depStop)\n start.finalizedBy(depStop)\n }", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy", "status": "modified" }, { "diff": "@@ -16,272 +16,15 @@\n * specific language governing permissions and limitations\n * under the License.\n */\n-\n package org.elasticsearch.gradle.test\n \n-import org.apache.tools.ant.taskdefs.condition.Os\n-import org.elasticsearch.gradle.AntTask\n-import org.elasticsearch.gradle.LoggedExec\n-import org.gradle.api.GradleException\n-import org.gradle.api.Task\n-import org.gradle.api.tasks.Exec\n-import org.gradle.api.tasks.Input\n-\n /**\n- * A fixture for integration tests which runs in a separate process.\n+ * Any object that can produce an accompanying stop task, meant to tear down\n+ * a previously instantiated service.\n */\n-public class Fixture extends AntTask {\n-\n- /** The path to the executable that starts the fixture. */\n- @Input\n- String executable\n-\n- private final List<Object> arguments = new ArrayList<>()\n-\n- @Input\n- public void args(Object... args) {\n- arguments.addAll(args)\n- }\n-\n- /**\n- * Environment variables for the fixture process. The value can be any object, which\n- * will have toString() called at execution time.\n- */\n- private final Map<String, Object> environment = new HashMap<>()\n-\n- @Input\n- public void env(String key, Object value) {\n- environment.put(key, value)\n- }\n-\n- /** A flag to indicate whether the command should be executed from a shell. */\n- @Input\n- boolean useShell = false\n-\n- /**\n- * A flag to indicate whether the fixture should be run in the foreground, or spawned.\n- * It is protected so subclasses can override (eg RunTask).\n- */\n- protected boolean spawn = true\n-\n- /**\n- * A closure to call before the fixture is considered ready. The closure is passed the fixture object,\n- * as well as a groovy AntBuilder, to enable running ant condition checks. 
The default wait\n- * condition is for http on the http port.\n- */\n- @Input\n- Closure waitCondition = { Fixture fixture, AntBuilder ant ->\n- File tmpFile = new File(fixture.cwd, 'wait.success')\n- ant.get(src: \"http://${fixture.addressAndPort}\",\n- dest: tmpFile.toString(),\n- ignoreerrors: true, // do not fail on error, so logging information can be flushed\n- retries: 10)\n- return tmpFile.exists()\n- }\n+public interface Fixture {\n \n /** A task which will stop this fixture. This should be used as a finalizedBy for any tasks that use the fixture. */\n- public final Task stopTask\n-\n- public Fixture() {\n- stopTask = createStopTask()\n- finalizedBy(stopTask)\n- }\n-\n- @Override\n- protected void runAnt(AntBuilder ant) {\n- project.delete(baseDir) // reset everything\n- cwd.mkdirs()\n- final String realExecutable\n- final List<Object> realArgs = new ArrayList<>()\n- final Map<String, Object> realEnv = environment\n- // We need to choose which executable we are using. In shell mode, or when we\n- // are spawning and thus using the wrapper script, the executable is the shell.\n- if (useShell || spawn) {\n- if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n- realExecutable = 'cmd'\n- realArgs.add('/C')\n- realArgs.add('\"') // quote the entire command\n- } else {\n- realExecutable = 'sh'\n- }\n- } else {\n- realExecutable = executable\n- realArgs.addAll(arguments)\n- }\n- if (spawn) {\n- writeWrapperScript(executable)\n- realArgs.add(wrapperScript)\n- realArgs.addAll(arguments)\n- }\n- if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) {\n- realArgs.add('\"')\n- }\n- commandString.eachLine { line -> logger.info(line) }\n-\n- ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) {\n- realEnv.each { key, value -> env(key: key, value: value) }\n- realArgs.each { arg(value: it) }\n- }\n-\n- String failedProp = \"failed${name}\"\n- // first wait for resources, or the failure marker from the wrapper script\n- ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) {\n- or {\n- resourceexists {\n- file(file: failureMarker.toString())\n- }\n- and {\n- resourceexists {\n- file(file: pidFile.toString())\n- }\n- resourceexists {\n- file(file: portsFile.toString())\n- }\n- }\n- }\n- }\n-\n- if (ant.project.getProperty(failedProp) || failureMarker.exists()) {\n- fail(\"Failed to start ${name}\")\n- }\n-\n- // the process is started (has a pid) and is bound to a network interface\n- // so now wait undil the waitCondition has been met\n- // TODO: change this to a loop?\n- boolean success\n- try {\n- success = waitCondition(this, ant) == false\n- } catch (Exception e) {\n- String msg = \"Wait condition caught exception for ${name}\"\n- logger.error(msg, e)\n- fail(msg, e)\n- }\n- if (success == false) {\n- fail(\"Wait condition failed for ${name}\")\n- }\n- }\n-\n- /** Returns a debug string used to log information about how the fixture was run. 
*/\n- protected String getCommandString() {\n- String commandString = \"\\n${name} configuration:\\n\"\n- commandString += \"-----------------------------------------\\n\"\n- commandString += \" cwd: ${cwd}\\n\"\n- commandString += \" command: ${executable} ${arguments.join(' ')}\\n\"\n- commandString += ' environment:\\n'\n- environment.each { k, v -> commandString += \" ${k}: ${v}\\n\" }\n- if (spawn) {\n- commandString += \"\\n [${wrapperScript.name}]\\n\"\n- wrapperScript.eachLine('UTF-8', { line -> commandString += \" ${line}\\n\"})\n- }\n- return commandString\n- }\n-\n- /**\n- * Writes a script to run the real executable, so that stdout/stderr can be captured.\n- * TODO: this could be removed if we do use our own ProcessBuilder and pump output from the process\n- */\n- private void writeWrapperScript(String executable) {\n- wrapperScript.parentFile.mkdirs()\n- String argsPasser = '\"$@\"'\n- String exitMarker = \"; if [ \\$? != 0 ]; then touch run.failed; fi\"\n- if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n- argsPasser = '%*'\n- exitMarker = \"\\r\\n if \\\"%errorlevel%\\\" neq \\\"0\\\" ( type nul >> run.failed )\"\n- }\n- wrapperScript.setText(\"\\\"${executable}\\\" ${argsPasser} > run.log 2>&1 ${exitMarker}\", 'UTF-8')\n- }\n-\n- /** Fail the build with the given message, and logging relevant info*/\n- private void fail(String msg, Exception... suppressed) {\n- if (logger.isInfoEnabled() == false) {\n- // We already log the command at info level. No need to do it twice.\n- commandString.eachLine { line -> logger.error(line) }\n- }\n- logger.error(\"${name} output:\")\n- logger.error(\"-----------------------------------------\")\n- logger.error(\" failure marker exists: ${failureMarker.exists()}\")\n- logger.error(\" pid file exists: ${pidFile.exists()}\")\n- logger.error(\" ports file exists: ${portsFile.exists()}\")\n- // also dump the log file for the startup script (which will include ES logging output to stdout)\n- if (runLog.exists()) {\n- logger.error(\"\\n [log]\")\n- runLog.eachLine { line -> logger.error(\" ${line}\") }\n- }\n- logger.error(\"-----------------------------------------\")\n- GradleException toThrow = new GradleException(msg)\n- for (Exception e : suppressed) {\n- toThrow.addSuppressed(e)\n- }\n- throw toThrow\n- }\n-\n- /** Adds a task to kill an elasticsearch node with the given pidfile */\n- private Task createStopTask() {\n- final Fixture fixture = this\n- final Object pid = \"${ -> fixture.pid }\"\n- Exec stop = project.tasks.create(name: \"${name}#stop\", type: LoggedExec)\n- stop.onlyIf { fixture.pidFile.exists() }\n- stop.doFirst {\n- logger.info(\"Shutting down ${fixture.name} with pid ${pid}\")\n- }\n- if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n- stop.executable = 'Taskkill'\n- stop.args('/PID', pid, '/F')\n- } else {\n- stop.executable = 'kill'\n- stop.args('-9', pid)\n- }\n- stop.doLast {\n- project.delete(fixture.pidFile)\n- }\n- return stop\n- }\n-\n- /**\n- * A path relative to the build dir that all configuration and runtime files\n- * will live in for this fixture\n- */\n- protected File getBaseDir() {\n- return new File(project.buildDir, \"fixtures/${name}\")\n- }\n-\n- /** Returns the working directory for the process. Defaults to \"cwd\" inside baseDir. */\n- protected File getCwd() {\n- return new File(baseDir, 'cwd')\n- }\n-\n- /** Returns the file the process writes its pid to. Defaults to \"pid\" inside baseDir. 
*/\n- protected File getPidFile() {\n- return new File(baseDir, 'pid')\n- }\n-\n- /** Reads the pid file and returns the process' pid */\n- public int getPid() {\n- return Integer.parseInt(pidFile.getText('UTF-8').trim())\n- }\n-\n- /** Returns the file the process writes its bound ports to. Defaults to \"ports\" inside baseDir. */\n- protected File getPortsFile() {\n- return new File(baseDir, 'ports')\n- }\n-\n- /** Returns an address and port suitable for a uri to connect to this node over http */\n- public String getAddressAndPort() {\n- return portsFile.readLines(\"UTF-8\").get(0)\n- }\n-\n- /** Returns a file that wraps around the actual command when {@code spawn == true}. */\n- protected File getWrapperScript() {\n- return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run')\n- }\n-\n- /** Returns a file that the wrapper script writes when the command failed. */\n- protected File getFailureMarker() {\n- return new File(cwd, 'run.failed')\n- }\n+ public Object getStopTask()\n \n- /** Returns a file that the wrapper script writes when the command failed. */\n- protected File getRunLog() {\n- return new File(cwd, 'run.log')\n- }\n }", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy", "status": "modified" }, { "diff": "@@ -129,7 +129,7 @@ public class RestIntegTestTask extends DefaultTask {\n runner.dependsOn(dependencies)\n for (Object dependency : dependencies) {\n if (dependency instanceof Fixture) {\n- runner.finalizedBy(((Fixture)dependency).stopTask)\n+ runner.finalizedBy(((Fixture)dependency).getStopTask())\n }\n }\n return this\n@@ -140,7 +140,7 @@ public class RestIntegTestTask extends DefaultTask {\n runner.setDependsOn(dependencies)\n for (Object dependency : dependencies) {\n if (dependency instanceof Fixture) {\n- runner.finalizedBy(((Fixture)dependency).stopTask)\n+ runner.finalizedBy(((Fixture)dependency).getStopTask())\n }\n }\n }", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy", "status": "modified" }, { "diff": "@@ -0,0 +1,54 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.gradle.test\n+\n+import org.elasticsearch.gradle.vagrant.VagrantCommandTask\n+import org.gradle.api.Task\n+\n+/**\n+ * A fixture for integration tests which runs in a virtual machine launched by Vagrant.\n+ */\n+class VagrantFixture extends VagrantCommandTask implements Fixture {\n+\n+ private VagrantCommandTask stopTask\n+\n+ public VagrantFixture() {\n+ this.stopTask = project.tasks.create(name: \"${name}#stop\", type: VagrantCommandTask) {\n+ command 'halt'\n+ }\n+ finalizedBy this.stopTask\n+ }\n+\n+ @Override\n+ void setBoxName(String boxName) {\n+ super.setBoxName(boxName)\n+ this.stopTask.setBoxName(boxName)\n+ }\n+\n+ @Override\n+ void setEnvironmentVars(Map<String, String> environmentVars) {\n+ super.setEnvironmentVars(environmentVars)\n+ this.stopTask.setEnvironmentVars(environmentVars)\n+ }\n+\n+ @Override\n+ public Task getStopTask() {\n+ return this.stopTask\n+ }\n+}", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy", "status": "added" }, { "diff": "@@ -27,12 +27,15 @@ import org.gradle.api.tasks.Input\n public class BatsOverVagrantTask extends VagrantCommandTask {\n \n @Input\n- String command\n+ String remoteCommand\n \n BatsOverVagrantTask() {\n- project.afterEvaluate {\n- args 'ssh', boxName, '--command', command\n- }\n+ command = 'ssh'\n+ }\n+\n+ void setRemoteCommand(String remoteCommand) {\n+ this.remoteCommand = Objects.requireNonNull(remoteCommand)\n+ setArgs(['--command', remoteCommand])\n }\n \n @Override", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy", "status": "modified" }, { "diff": "@@ -21,16 +21,28 @@ package org.elasticsearch.gradle.vagrant\n import org.apache.commons.io.output.TeeOutputStream\n import org.elasticsearch.gradle.LoggedExec\n import org.gradle.api.tasks.Input\n+import org.gradle.api.tasks.Optional\n+import org.gradle.api.tasks.TaskAction\n import org.gradle.internal.logging.progress.ProgressLoggerFactory\n \n import javax.inject.Inject\n+import java.util.concurrent.CountDownLatch\n+import java.util.concurrent.locks.Lock\n+import java.util.concurrent.locks.ReadWriteLock\n+import java.util.concurrent.locks.ReentrantLock\n \n /**\n * Runs a vagrant command. Pretty much like Exec task but with a nicer output\n * formatter and defaults to `vagrant` as first part of commandLine.\n */\n public class VagrantCommandTask extends LoggedExec {\n \n+ @Input\n+ String command\n+\n+ @Input @Optional\n+ String subcommand\n+\n @Input\n String boxName\n \n@@ -40,11 +52,27 @@ public class VagrantCommandTask extends LoggedExec {\n public VagrantCommandTask() {\n executable = 'vagrant'\n \n+ // We're using afterEvaluate here to slot in some logic that captures configurations and\n+ // modifies the command line right before the main execution happens. The reason that we\n+ // call doFirst instead of just doing the work in the afterEvaluate is that the latter\n+ // restricts how subclasses can extend functionality. 
Calling afterEvaluate is like having\n+ // all the logic of a task happening at construction time, instead of at execution time\n+ // where a subclass can override or extend the logic.\n project.afterEvaluate {\n- // It'd be nice if --machine-readable were, well, nice\n- standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())\n- if (environmentVars != null) {\n- environment environmentVars\n+ doFirst {\n+ if (environmentVars != null) {\n+ environment environmentVars\n+ }\n+\n+ // Build our command line for vagrant\n+ def vagrantCommand = [executable, command]\n+ if (subcommand != null) {\n+ vagrantCommand = vagrantCommand + subcommand\n+ }\n+ commandLine([*vagrantCommand, boxName, *args])\n+\n+ // It'd be nice if --machine-readable were, well, nice\n+ standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())\n }\n }\n }", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy", "status": "modified" }, { "diff": "@@ -391,21 +391,23 @@ class VagrantTestPlugin implements Plugin<Project> {\n \n // always add a halt task for all boxes, so clean makes sure they are all shutdown\n Task halt = project.tasks.create(\"vagrant${boxTask}#halt\", VagrantCommandTask) {\n+ command 'halt'\n boxName box\n environmentVars vagrantEnvVars\n- args 'halt', box\n }\n stop.dependsOn(halt)\n \n Task update = project.tasks.create(\"vagrant${boxTask}#update\", VagrantCommandTask) {\n+ command 'box'\n+ subcommand 'update'\n boxName box\n environmentVars vagrantEnvVars\n- args 'box', 'update', box\n dependsOn vagrantCheckVersion, virtualboxCheckVersion\n }\n update.mustRunAfter(setupBats)\n \n Task up = project.tasks.create(\"vagrant${boxTask}#up\", VagrantCommandTask) {\n+ command 'up'\n boxName box\n environmentVars vagrantEnvVars\n /* Its important that we try to reprovision the box even if it already\n@@ -418,7 +420,7 @@ class VagrantTestPlugin implements Plugin<Project> {\n vagrant's default but its possible to change that default and folks do.\n But the boxes that we use are unlikely to work properly with other\n virtualization providers. Thus the lock. */\n- args 'up', box, '--provision', '--provider', 'virtualbox'\n+ args '--provision', '--provider', 'virtualbox'\n /* It'd be possible to check if the box is already up here and output\n SKIPPED but that would require running vagrant status which is slow! 
*/\n dependsOn update\n@@ -434,11 +436,11 @@ class VagrantTestPlugin implements Plugin<Project> {\n vagrantSmokeTest.dependsOn(smoke)\n \n Task packaging = project.tasks.create(\"vagrant${boxTask}#packagingTest\", BatsOverVagrantTask) {\n+ remoteCommand BATS_TEST_COMMAND\n boxName box\n environmentVars vagrantEnvVars\n dependsOn up, setupBats\n finalizedBy halt\n- command BATS_TEST_COMMAND\n }\n \n TaskExecutionAdapter packagingReproListener = new TaskExecutionAdapter() {\n@@ -461,11 +463,12 @@ class VagrantTestPlugin implements Plugin<Project> {\n }\n \n Task platform = project.tasks.create(\"vagrant${boxTask}#platformTest\", VagrantCommandTask) {\n+ command 'ssh'\n boxName box\n environmentVars vagrantEnvVars\n dependsOn up\n finalizedBy halt\n- args 'ssh', boxName, '--command', PLATFORM_TEST_COMMAND + \" -Dtests.seed=${-> project.extensions.esvagrant.formattedTestSeed}\"\n+ args '--command', PLATFORM_TEST_COMMAND + \" -Dtests.seed=${-> project.extensions.esvagrant.formattedTestSeed}\"\n }\n TaskExecutionAdapter platformReproListener = new TaskExecutionAdapter() {\n @Override", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy", "status": "modified" }, { "diff": "@@ -33,7 +33,7 @@ dependencies {\n exampleFixture project(':test:fixtures:example-fixture')\n }\n \n-task exampleFixture(type: org.elasticsearch.gradle.test.Fixture) {\n+task exampleFixture(type: org.elasticsearch.gradle.test.AntFixture) {\n dependsOn project.configurations.exampleFixture\n executable = new File(project.javaHome, 'bin/java')\n args '-cp', \"${ -> project.configurations.exampleFixture.asPath }\",", "filename": "plugins/jvm-example/build.gradle", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ dependencyLicenses {\n mapping from: /hadoop-.*/, to: 'hadoop'\n }\n \n-task hdfsFixture(type: org.elasticsearch.gradle.test.Fixture) {\n+task hdfsFixture(type: org.elasticsearch.gradle.test.AntFixture) {\n dependsOn project.configurations.hdfsFixture\n executable = new File(project.javaHome, 'bin/java')\n env 'CLASSPATH', \"${ -> project.configurations.hdfsFixture.asPath }\"", "filename": "plugins/repository-hdfs/build.gradle", "status": "modified" } ] }
{ "body": "[cat RestAliasAction](https://github.com/elastic/elasticsearch/blob/6265ef1c1ba1d308bcc28d00dccccac555e33b89/core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java#L43) exposes an endpoint taking `{alias}` which feeds this **single** `alias` into the [GetAliasRequest constructor](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/action/admin/indices/alias/get/GetAliasesRequest.java#L42) exactly [as documented](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-alias.html)\r\n\r\nHowever [cat.aliases](https://github.com/elastic/elasticsearch/blob/6265ef1c1ba1d308bcc28d00dccccac555e33b89/rest-api-spec/src/main/resources/rest-api-spec/api/cat.aliases.json#L9-L12) documents it as a `list` instead of a `string`\r\n\r\n\r\n\r\n", "comments": [ { "body": "This bug can be reproduced with the following script:\r\n```\r\nPUT test\r\n{\r\n \"aliases\": {\r\n \"alias-1\": {},\r\n \"alias-2\": {}\r\n }\r\n}\r\n\r\nPUT test2\r\n{\r\n \"aliases\": {\r\n \"alias-1\": {},\r\n \"alias-2\": {},\r\n \"alias-3\": {}\r\n }\r\n}\r\n\r\nPUT test3\r\n{\r\n \"aliases\": {\r\n \"alias-3\": {},\r\n \"alias-4\": {}\r\n }\r\n}\r\n\r\n# Correctly returns test and test2 indices\r\nGET _cat/aliases/alias-1?v\r\n\r\n# Should return test and test2 indices but returns nothing but the headers\r\nGET _cat/aliases/alias-1,alias-2?v\r\n```\r\n", "created_at": "2017-03-21T09:50:51Z" }, { "body": "Hi, I would like to contribute to the project. Can I take this issue? ", "created_at": "2017-03-21T15:30:16Z" }, { "body": "@glefloch yes, we would very much appreciate you tackling this issue if you want to, so feel free to submit a Pull Request with a fix if you wish. You may want to read https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md to get started on contributing.", "created_at": "2017-03-21T17:59:58Z" } ], "number": 23661, "title": "cat aliases {name} documented as list instead of string" }
{ "body": "relates to #23661 ", "number": 24180, "review_comments": [], "title": "updated RestAliasAction to take list of aliases" }
{ "commits": [ { "message": "updated RestAliasAction to take list of aliases" } ], "files": [ { "diff": "@@ -46,7 +46,7 @@ public RestAliasAction(Settings settings, RestController controller) {\n @Override\n protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) {\n final GetAliasesRequest getAliasesRequest = request.hasParam(\"alias\") ?\n- new GetAliasesRequest(request.param(\"alias\")) :\n+ new GetAliasesRequest(request.paramAsStringArrayOrEmptyIfAll(\"alias\")) :\n new GetAliasesRequest();\n getAliasesRequest.local(request.paramAsBoolean(\"local\", getAliasesRequest.local()));\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java", "status": "modified" } ] }
{ "body": "Tested on master, 5.3, 5.2, 5.0, and 2.3:\r\n\r\n```http\r\nPUT /_bulk\r\n{ \"index\": { \"_index\": \"test\", \"_type\": \"type\", \"_id\": \"\" } }\r\n{ \"doc\": \"1\" }\r\n{ \"index\": { \"_index\": \"test\", \"_type\": \"type\", \"_id\": \"\" } }\r\n{ \"doc\": \"2\" }\r\n{ \"index\": { \"_index\": \"test\", \"_type\": \"type\" } }\r\n{ \"doc\": \"3\" }\r\n```\r\n\r\nThis creates **two** documents:\r\n\r\n```json\r\n{\r\n \"took\": 439,\r\n \"errors\": false,\r\n \"items\": [\r\n {\r\n \"index\": {\r\n \"_index\": \"test\",\r\n \"_type\": \"type\",\r\n \"_id\": \"\",\r\n \"_version\": 1,\r\n \"result\": \"created\",\r\n \"_shards\": {\r\n \"total\": 2,\r\n \"successful\": 1,\r\n \"failed\": 0\r\n },\r\n \"_seq_no\": 0,\r\n \"created\": true,\r\n \"status\": 201\r\n }\r\n },\r\n {\r\n \"index\": {\r\n \"_index\": \"test\",\r\n \"_type\": \"type\",\r\n \"_id\": \"\",\r\n \"_version\": 2,\r\n \"result\": \"updated\",\r\n \"_shards\": {\r\n \"total\": 2,\r\n \"successful\": 1,\r\n \"failed\": 0\r\n },\r\n \"_seq_no\": 1,\r\n \"created\": false,\r\n \"status\": 200\r\n }\r\n },\r\n {\r\n \"index\": {\r\n \"_index\": \"test\",\r\n \"_type\": \"type\",\r\n \"_id\": \"AVttiPbjiL4QtkRlx6WB\",\r\n \"_version\": 1,\r\n \"result\": \"created\",\r\n \"_shards\": {\r\n \"total\": 2,\r\n \"successful\": 1,\r\n \"failed\": 0\r\n },\r\n \"_seq_no\": 0,\r\n \"created\": true,\r\n \"status\": 201\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThis should create 3 documents, all with flake IDs (like the third document here). It's a bug because you can't actually `GET` a document with an empty `_id`.\r\n\r\n```http\r\nGET /test/type/\r\n```\r\n\r\nspits out:\r\n\r\n```json\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"No endpoint or operation is available at [type]\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"No endpoint or operation is available at [type]\"\r\n },\r\n \"status\": 400\r\n}\r\n```", "comments": [ { "body": "It's always been like this (at least back to the 1.x series), it's not just a since 5.x thing. I do agree it's problematic though, for example you can not get such a document via the get API.\r\n\r\n> This should create 3 documents, all with flake IDs (like the third document here).\r\n\r\nMaybe, or perhaps it should simply be rejected as an invalid request?", "created_at": "2017-04-14T18:05:00Z" }, { "body": "@jasontedor \r\n\r\nYeah, I've been slowly testing more and more builds.\r\n\r\n> Maybe, or perhaps it should simply be rejected as an invalid request?\r\n\r\nThat works for me too, specifically because individual indexing requests with a blank `_id` are rejected:\r\n\r\n```http\r\nPUT /test/type/\r\n{ \"doc\": 1 }\r\n```", "created_at": "2017-04-14T18:06:20Z" }, { "body": "The reason I think that it should be rejected is because allowing `_id: \"\"` is a form of leniency. An application bug could easily unintentionally introduce `_id: \"\"` on a request and if I were an application developer accidentally introducing such a bug I would want Elasticsearch to reject it. We already have a way to specify auto-generated IDs: do not send an explicit ID field `_id` on the request, I don't think that we need a second way.", "created_at": "2017-04-14T18:09:41Z" }, { "body": "Agree that we should reject empty ID's. 
By the way, the issue is only with empty ID's, because if it's a space, you can still get the document by ID:\r\n\r\n```\r\nPUT /_bulk\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"type\",\"_id\":\" \"}}\r\n{\"doc\":\"1\"}\r\n\r\nGET test/type/%20\r\n```\r\n\r\n```\r\n{\r\n \"_index\": \"test\",\r\n \"_type\": \"type\",\r\n \"_id\": \" \",\r\n \"_version\": 1,\r\n \"found\": true,\r\n \"_source\": {\r\n \"doc\": \"1\"\r\n }\r\n}\r\n```\r\n\r\nTBH, not sure if it should be rejected, or treated like it needs to auto-generate the ID. That's another option, right? ", "created_at": "2017-04-14T18:21:40Z" }, { "body": "Yes, I do not think that we should reject a space as the `_id` as those can still be obtained exactly as you say.\r\n\r\n> TBH, not sure if it should be rejected, or treated like it needs to auto-generate the ID. That's another option, right?\r\n\r\nI do not think so, that would be a breaking change.\r\n", "created_at": "2017-04-14T18:29:16Z" }, { "body": "The issue of `_id` being empty however is already broken, there's nothing to break but rather to fix.", "created_at": "2017-04-14T18:31:15Z" }, { "body": "I opened #24118.", "created_at": "2017-04-14T19:15:27Z" } ], "number": 24116, "title": "HTTP _bulk allows \"_id\": \"\" to set empty _id" }
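For context, a minimal console sketch of the behaviour introduced by the fix in the PR record below (index and type names are illustrative; the expected response follows the REST test added in PR #24118): a bulk item that explicitly sets an empty `_id` is rejected, while omitting `_id` still auto-generates one.

```
POST /_bulk
{ "index": { "_index": "test", "_type": "type", "_id": "" } }
{ "doc": "1" }
{ "index": { "_index": "test", "_type": "type" } }
{ "doc": "2" }
```

Per that test, the response has `errors: true`, the first item fails with status 400 and an `illegal_argument_exception` whose reason is "if _id is specified it must not be empty", and the second item is created with an auto-generated ID.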
{ "body": "When indexing a document via the bulk API where IDs can be explicitly specified, we currently accept an empty ID. This is problematic because such a document can not be obtained via the get API. Instead, we should rejected these requets as accepting them could be a dangerous form of leniency. Additionally, we already have a way of specifying auto-generated IDs and that is to not explicitly specify an ID so we do not need a second way. This commit the individual requests where ID is specified but empty.\r\n\r\nCloses #24116", "number": 24118, "review_comments": [ { "body": "Out of curiousity, should we do `Strings.hasText(id) == false` here, to disallow the id `\" \"` also?", "created_at": "2017-04-14T19:33:43Z" }, { "body": "I don't think we should reject \" \" (at least in this PR), it can be obtained via the get API (with escaping) and rejecting it would be a breaking change. On the indexing API we take care to return a properly-escaped location header.", "created_at": "2017-04-14T19:42:45Z" }, { "body": "Sounds good to me, I think it falls into the larger input validation category anyway", "created_at": "2017-04-14T20:15:32Z" } ], "title": "Reject empty IDs" }
{ "commits": [ { "message": "Reject empty IDs\n\nWhen indexing a document via the bulk API where IDs can be explicitly\nspecified, we currently accept an empty ID. This is problematic because\nsuch a document can not be obtained via the get API. Instead, we should\nrejected these requets as accepting them could be a dangerous form of\nleniency. Additionally, we already have a way of specifying\nauto-generated IDs and that is to not explicitly specify an ID so we do\nnot need a second way. This commit the individual requests where ID is\nspecified but empty." }, { "message": "Fix failing test" }, { "message": "Merge branch 'master' into reject-empty-id\n\n* master:\n Use sequence numbers to identify out of order delivery in replicas & recovery (#24060)\n Remove customization of ES_USER and ES_GROUP" }, { "message": "Add skip for BWC reasons" }, { "message": "Fix spacing" } ], "files": [ { "diff": "@@ -308,7 +308,7 @@ public void testIndex() throws IOException {\n \n assertEquals(RestStatus.BAD_REQUEST, exception.status());\n assertEquals(\"Elasticsearch exception [type=illegal_argument_exception, \" +\n- \"reason=Can't specify parent if no parent field has been configured]\", exception.getMessage());\n+ \"reason=can't specify parent if no parent field has been configured]\", exception.getMessage());\n }\n {\n ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> {", "filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java", "status": "modified" }, { "diff": "@@ -279,7 +279,7 @@ protected void doRun() throws Exception {\n break;\n default: throw new AssertionError(\"request type not supported: [\" + docWriteRequest.opType() + \"]\");\n }\n- } catch (ElasticsearchParseException | RoutingMissingException e) {\n+ } catch (ElasticsearchParseException | IllegalArgumentException | RoutingMissingException e) {\n BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex.getName(), docWriteRequest.type(), docWriteRequest.id(), e);\n BulkItemResponse bulkItemResponse = new BulkItemResponse(i, docWriteRequest.opType(), failure);\n responses.set(i, bulkItemResponse);", "filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" }, { "diff": "@@ -491,14 +491,18 @@ public void process(@Nullable MappingMetaData mappingMd, String concreteIndex) {\n }\n \n if (parent != null && !mappingMd.hasParentField()) {\n- throw new IllegalArgumentException(\"Can't specify parent if no parent field has been configured\");\n+ throw new IllegalArgumentException(\"can't specify parent if no parent field has been configured\");\n }\n } else {\n if (parent != null) {\n- throw new IllegalArgumentException(\"Can't specify parent if no parent field has been configured\");\n+ throw new IllegalArgumentException(\"can't specify parent if no parent field has been configured\");\n }\n }\n \n+ if (\"\".equals(id)) {\n+ throw new IllegalArgumentException(\"if _id is specified it must not be empty\");\n+ }\n+\n // generate id if not already provided\n if (id == null) {\n assert autoGeneratedTimestamp == -1 : \"timestamp has already been generated!\";", "filename": "core/src/main/java/org/elasticsearch/action/index/IndexRequest.java", "status": "modified" }, { "diff": "@@ -1182,13 +1182,13 @@ public void testIndexChildDocWithNoParentMapping() throws IOException {\n client().prepareIndex(\"test\", \"child1\", \"c1\").setParent(\"p1\").setSource(\"c_field\", \"blue\").get();\n fail();\n } catch 
(IllegalArgumentException e) {\n- assertThat(e.toString(), containsString(\"Can't specify parent if no parent field has been configured\"));\n+ assertThat(e.toString(), containsString(\"can't specify parent if no parent field has been configured\"));\n }\n try {\n client().prepareIndex(\"test\", \"child2\", \"c2\").setParent(\"p1\").setSource(\"c_field\", \"blue\").get();\n fail();\n } catch (IllegalArgumentException e) {\n- assertThat(e.toString(), containsString(\"Can't specify parent if no parent field has been configured\"));\n+ assertThat(e.toString(), containsString(\"can't specify parent if no parent field has been configured\"));\n }\n \n refresh();", "filename": "core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java", "status": "modified" }, { "diff": "@@ -20,14 +20,18 @@\n package org.elasticsearch.index.reindex;\n \n import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;\n+import org.elasticsearch.action.bulk.byscroll.BulkByScrollResponse;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.query.QueryBuilder;\n \n import static org.elasticsearch.index.query.QueryBuilders.hasParentQuery;\n import static org.elasticsearch.index.query.QueryBuilders.idsQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasToString;\n+import static org.hamcrest.Matchers.instanceOf;\n \n /**\n * Index-by-search tests for parent/child.\n@@ -76,12 +80,11 @@ public void testErrorMessageWhenBadParentChild() throws Exception {\n createParentChildDocs(\"source\");\n \n ReindexRequestBuilder copy = reindex().source(\"source\").destination(\"dest\").filter(findsCity);\n- try {\n- copy.get();\n- fail(\"Expected exception\");\n- } catch (IllegalArgumentException e) {\n- assertThat(e.getMessage(), equalTo(\"Can't specify parent if no parent field has been configured\"));\n- }\n+ final BulkByScrollResponse response = copy.get();\n+ assertThat(response.getBulkFailures().size(), equalTo(1));\n+ final Exception cause = response.getBulkFailures().get(0).getCause();\n+ assertThat(cause, instanceOf(IllegalArgumentException.class));\n+ assertThat(cause, hasToString(containsString(\"can't specify parent if no parent field has been configured\")));\n }\n \n /**", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexParentChildTests.java", "status": "modified" }, { "diff": "@@ -23,3 +23,38 @@\n \n - match: {count: 2}\n \n+---\n+\"Empty _id\":\n+ - skip:\n+ version: \" - 5.3.0\"\n+ reason: empty IDs were not rejected until 5.3.1\n+ - do:\n+ bulk:\n+ refresh: true\n+ body:\n+ - index:\n+ _index: test\n+ _type: type\n+ _id: ''\n+ - f: 1\n+ - index:\n+ _index: test\n+ _type: type\n+ _id: id\n+ - f: 2\n+ - index:\n+ _index: test\n+ _type: type\n+ - f: 3\n+ - match: { errors: true }\n+ - match: { items.0.index.status: 400 }\n+ - match: { items.0.index.error.type: illegal_argument_exception }\n+ - match: { items.0.index.error.reason: if _id is specified it must not be empty }\n+ - match: { items.1.index.created: true }\n+ - match: { items.2.index.created: true }\n+\n+ - do:\n+ count:\n+ index: test\n+\n+ - match: { count: 2 }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/bulk/10_basic.yaml", "status": "modified" } ] }
{ "body": "ES version: 5.2.2.\r\nJDK: 1.8.0_65\r\nOS version: Ubuntu 14.04.4 \r\n\r\nWe faced high memory usage issue when using percolator.\r\nDedicated index is configured for percolator.\r\nIt works as expected and JVM heap usage picture is \r\n\r\n![without_nested_queries](https://cloud.githubusercontent.com/assets/27484990/25040396/b5e6e152-2111-11e7-92db-3182e11dc73c.JPG)\r\n\r\nAfter registering a nested nested query the picture changes and there is a steady grow of used memory and CPU on the same input document set\r\n\r\n![with_nested_query](https://cloud.githubusercontent.com/assets/27484990/25040463/2acc2f0e-2112-11e7-8a10-2edc730e4fc9.JPG)\r\n\r\nInput documents contain up to hundred of nested documents. Caching settings are by default.\r\n\r\nMemory dump:\r\n![memory1](https://cloud.githubusercontent.com/assets/27484990/25040531/a17841ba-2112-11e7-98d3-b1060a68ad4a.JPG)\r\n![memory2](https://cloud.githubusercontent.com/assets/27484990/25040536/a59bfbe2-2112-11e7-8e43-16212e0d7333.JPG)\r\n", "comments": [ { "body": "@anatoly21 Just double checking: So after registering a single percolator query that has a `nested` query this memory issue happens? Also I assume that search requests (containing `percolate` queries) are being executed. Do you have a lot of these search requests or just a few?", "created_at": "2017-04-14T10:50:41Z" }, { "body": "Yes, this issue occurs after registerering a single nested query. The attached pictures show results after execution of about 300 000 search requests (percolate queries). ", "created_at": "2017-04-14T11:07:48Z" }, { "body": "@anatoly21 Thanks, are the search requests (with `percolate` query) similar (besides the document being percolated)? If possible can you share the search request (or at least how this query is structured)? I like to figure out if the `percolate` query itself gets cached.", "created_at": "2017-04-14T11:16:45Z" }, { "body": "Mapping has a lot of keyword fields, therefore the attachment is a simplified version. Search requests are quite different. \r\n[env.zip](https://github.com/elastic/elasticsearch/files/922050/env.zip)\r\n\r\nPossibly this issue is related to https://github.com/elastic/elasticsearch/issues/23859 \r\n\r\n\r\n", "created_at": "2017-04-14T12:44:14Z" }, { "body": "Thanks for reporting @anatoly21 and this is indeed a bad bug.\r\n\r\nThis problem is quite easily reproducible and affects all 5.x releases.\r\nThe cause of the memory issues / OOM is that the index reader for the memory index isn't closed,\r\nwhich then causes the cache entries in `BitsetFilterCache` to never get cleaned up.\r\n\r\nPrior to 5.0 the percolator had its own search context (PercolateContext), which always closed the readers of the in-memory index.\r\n\r\nI'll work on a fix. The percolator should just never use the `BitsetFilterCache` and it doesn't need since it is querying an in-memory index.", "created_at": "2017-04-14T15:02:21Z" }, { "body": "Thank you. ", "created_at": "2017-04-14T15:08:10Z" } ], "number": 24108, "title": "Percolator: High memory usage issue when nested query is registered" }
{ "body": "The percolator doesn't close the IndexReader of the memory index any more.\r\nPrior to 2.x the percolator had its own SearchContext (PercolatorContext) that did this,\r\nbut that was removed when the percolator was refactored as part of the 5.0 release.\r\n\r\nI think an alternative way to fix this is to let percolator not use the bitset and fielddata caches,\r\nthat way we prevent the memory leak.\r\n\r\nAdding a WIP label to this as I'm not happy with the current test. It is not easy to test that we don't use the bitset or fielddata cache for the percolator, because non percolator operations may use these caches, which is valid.\r\n\r\nPR for #24108", "number": 24115, "review_comments": [ { "body": "Still need to think how to add a test for this too...", "created_at": "2017-04-14T16:20:25Z" }, { "body": "should it be a constructor arg?", "created_at": "2017-04-24T08:05:12Z" }, { "body": "let's try to make it final and give it a better name?", "created_at": "2017-04-24T08:05:37Z" }, { "body": "I don't have suggestions though...", "created_at": "2017-04-24T08:05:51Z" }, { "body": "Me neither... I've named it now releaseDelegator", "created_at": "2017-04-24T11:49:07Z" }, { "body": "done - it is now a final field", "created_at": "2017-04-24T11:49:32Z" }, { "body": "should we just remove this one completely?", "created_at": "2017-04-24T16:15:45Z" }, { "body": "That method is still being used in other places. For example in `MetaDataCreateIndexService` and `MetaDataIndexAliasesService`. In these places we don't have access to a search context and we test there if alias filters parse correctly, which means we will not be supporting `percolate` queries there, which I think is ok.", "created_at": "2017-04-24T17:26:45Z" }, { "body": "I removed this method, wdyt?", "created_at": "2017-04-25T19:15:32Z" } ], "title": "Fix memory leak when percolator uses bitset or field data cache" }
{ "commits": [ { "message": "[percolator] Fix memory leak when percolator uses bitset or field data cache.\n\nThe percolator doesn't close the IndexReader of the memory index any more.\nPrior to 2.x the percolator had its own SearchContext (PercolatorContext) that did this,\nbut that was removed when the percolator was refactored as part of the 5.0 release.\n\nI think an alternative way to fix this is to let percolator not use the bitset and fielddata caches,\nthat way we prevent the memory leak.\n\nCloses #24108" } ], "files": [ { "diff": "@@ -121,8 +121,7 @@ private BitSet getAndLoadIfNotPresent(final Query query, final LeafReaderContext\n }\n final IndexReader.CacheKey coreCacheReader = cacheHelper.getKey();\n final ShardId shardId = ShardUtils.extractShardId(context.reader());\n- if (shardId != null // can't require it because of the percolator\n- && indexSettings.getIndex().equals(shardId.getIndex()) == false) {\n+ if (indexSettings.getIndex().equals(shardId.getIndex()) == false) {\n // insanity\n throw new IllegalStateException(\"Trying to load bit set for index \" + shardId.getIndex()\n + \" with cache of index \" + indexSettings.getIndex());", "filename": "core/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations;\n \n import org.apache.lucene.index.CompositeReaderContext;\n+import org.apache.lucene.index.DirectoryReader;\n import org.apache.lucene.index.IndexReaderContext;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.Collector;\n@@ -31,8 +32,10 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.MockBigArrays;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache.Listener;\n@@ -48,6 +51,7 @@\n import org.elasticsearch.index.mapper.ObjectMapper.Nested;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.support.NestedScope;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.breaker.CircuitBreakerService;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache;\n@@ -289,4 +293,8 @@ public String toString() {\n return \"ShardSearcher(\" + ctx.get(0) + \")\";\n }\n }\n+\n+ protected static DirectoryReader wrap(DirectoryReader directoryReader) throws IOException {\n+ return ElasticsearchDirectoryReader.wrap(directoryReader, new ShardId(new Index(\"_index\", \"_na_\"), 0));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ public void testNoDocs() throws IOException {\n try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) {\n // intentionally not writing any docs\n }\n- try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG,\n 
NESTED_OBJECT);\n MaxAggregationBuilder maxAgg = new MaxAggregationBuilder(MAX_AGG_NAME)\n@@ -112,7 +112,7 @@ public void testSingleNestingMax() throws IOException {\n }\n iw.commit();\n }\n- try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG,\n NESTED_OBJECT);\n MaxAggregationBuilder maxAgg = new MaxAggregationBuilder(MAX_AGG_NAME)\n@@ -160,7 +160,7 @@ public void testDoubleNestingMax() throws IOException {\n }\n iw.commit();\n }\n- try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG,\n NESTED_OBJECT + \".\" + NESTED_OBJECT2);\n MaxAggregationBuilder maxAgg = new MaxAggregationBuilder(MAX_AGG_NAME)\n@@ -213,7 +213,7 @@ public void testOrphanedDocs() throws IOException {\n iw.addDocuments(documents);\n iw.commit();\n }\n- try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG,\n NESTED_OBJECT);\n SumAggregationBuilder sumAgg = new SumAggregationBuilder(SUM_AGG_NAME)\n@@ -292,7 +292,7 @@ public void testResetRootDocId() throws Exception {\n iw.commit();\n iw.close();\n }\n- try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n \n NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG,\n \"nested_field\");", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorTests.java", "status": "modified" }, { "diff": "@@ -54,7 +54,7 @@ public void testNoDocs() throws IOException {\n try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) {\n // intentionally not writing any docs\n }\n- try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG,\n NESTED_OBJECT);\n ReverseNestedAggregationBuilder reverseNestedBuilder\n@@ -117,7 +117,7 @@ public void testMaxFromParentDocs() throws IOException {\n }\n iw.commit();\n }\n- try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG,\n NESTED_OBJECT);\n ReverseNestedAggregationBuilder reverseNestedBuilder", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregatorTests.java", "status": "modified" }, { "diff": "@@ -23,16 +23,22 @@\n import org.apache.lucene.analysis.DelegatingAnalyzerWrapper;\n import org.apache.lucene.index.BinaryDocValues;\n import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexReaderContext;\n import org.apache.lucene.index.IndexWriter;\n import org.apache.lucene.index.IndexWriterConfig;\n import org.apache.lucene.index.LeafReader;\n+import org.apache.lucene.index.ReaderUtil;\n import org.apache.lucene.index.memory.MemoryIndex;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n import 
org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.Weight;\n+import org.apache.lucene.search.join.BitSetProducer;\n import org.apache.lucene.store.RAMDirectory;\n+import org.apache.lucene.util.BitDocIdSet;\n+import org.apache.lucene.util.BitSet;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ResourceNotFoundException;\n@@ -51,6 +57,8 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.analysis.FieldNameAnalyzer;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n+import org.elasticsearch.index.fielddata.IndexFieldDataCache;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperForType;\n import org.elasticsearch.index.mapper.MappedFieldType;\n@@ -62,6 +70,8 @@\n import org.elasticsearch.index.query.QueryRewriteContext;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.QueryShardException;\n+import org.elasticsearch.indices.breaker.CircuitBreakerService;\n+import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n \n import java.io.IOException;\n import java.util.Objects;\n@@ -412,12 +422,9 @@ protected Analyzer getWrappedAnalyzer(String fieldName) {\n docSearcher.setQueryCache(null);\n }\n \n- Version indexVersionCreated = context.getIndexSettings().getIndexVersionCreated();\n boolean mapUnmappedFieldsAsString = context.getIndexSettings()\n .getValue(PercolatorFieldMapper.INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING);\n- // We have to make a copy of the QueryShardContext here so we can have a unfrozen version for parsing the legacy\n- // percolator queries\n- QueryShardContext percolateShardContext = new QueryShardContext(context);\n+ QueryShardContext percolateShardContext = wrap(context);\n MappedFieldType fieldType = context.fieldMapper(field);\n if (fieldType == null) {\n throw new QueryShardException(context, \"field [\" + field + \"] does not exist\");\n@@ -503,4 +510,36 @@ private static PercolateQuery.QueryStore createStore(PercolatorFieldMapper.Field\n };\n }\n \n+ static QueryShardContext wrap(QueryShardContext shardContext) {\n+ return new QueryShardContext(shardContext) {\n+\n+ @Override\n+ public BitSetProducer bitsetFilter(Query query) {\n+ return context -> {\n+ final IndexReaderContext topLevelContext = ReaderUtil.getTopLevelContext(context);\n+ final IndexSearcher searcher = new IndexSearcher(topLevelContext);\n+ searcher.setQueryCache(null);\n+ final Weight weight = searcher.createNormalizedWeight(query, false);\n+ final Scorer s = weight.scorer(context);\n+\n+ if (s != null) {\n+ return new BitDocIdSet(BitSet.of(s.iterator(), context.reader().maxDoc())).bits();\n+ } else {\n+ return null;\n+ }\n+ };\n+ }\n+\n+ @Override\n+ @SuppressWarnings(\"unchecked\")\n+ public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType) {\n+ IndexFieldData.Builder builder = fieldType.fielddataBuilder();\n+ IndexFieldDataCache cache = new IndexFieldDataCache.None();\n+ CircuitBreakerService circuitBreaker = new NoneCircuitBreakerService();\n+ return (IFD) builder.build(shardContext.getIndexSettings(), fieldType, cache, circuitBreaker,\n+ shardContext.getMapperService());\n+ }\n+ };\n+ }\n+\n }", "filename": 
"modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQueryBuilder.java", "status": "modified" }, { "diff": "@@ -26,9 +26,12 @@\n import org.elasticsearch.action.support.WriteRequest;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n+import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.query.MatchPhraseQueryBuilder;\n import org.elasticsearch.index.query.MultiMatchQueryBuilder;\n@@ -39,6 +42,7 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptType;\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;\n+import org.elasticsearch.search.lookup.LeafDocLookup;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n@@ -83,6 +87,11 @@ public static class CustomScriptPlugin extends MockScriptPlugin {\n protected Map<String, Function<Map<String, Object>, Object>> pluginScripts() {\n Map<String, Function<Map<String, Object>, Object>> scripts = new HashMap<>();\n scripts.put(\"1==1\", vars -> Boolean.TRUE);\n+ scripts.put(\"use_fielddata_please\", vars -> {\n+ LeafDocLookup leafDocLookup = (LeafDocLookup) vars.get(\"_doc\");\n+ ScriptDocValues scriptDocValues = leafDocLookup.get(\"employees.name\");\n+ return \"virginia_potts\".equals(scriptDocValues.get(0));\n+ });\n return scripts;\n }\n }\n@@ -606,6 +615,119 @@ public void testPercolateQueryWithNestedDocuments() throws Exception {\n assertHitCount(response, 0);\n }\n \n+ public void testPercolateQueryWithNestedDocuments_doNotLeakBitsetCacheEntries() throws Exception {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder();\n+ mapping.startObject().startObject(\"properties\").startObject(\"companyname\").field(\"type\", \"text\").endObject()\n+ .startObject(\"employee\").field(\"type\", \"nested\").startObject(\"properties\")\n+ .startObject(\"name\").field(\"type\", \"text\").endObject().endObject().endObject().endObject()\n+ .endObject();\n+ createIndex(\"test\", client().admin().indices().prepareCreate(\"test\")\n+ // to avoid normal document from being cached by BitsetFilterCache\n+ .setSettings(Settings.builder().put(BitsetFilterCache.INDEX_LOAD_RANDOM_ACCESS_FILTERS_EAGERLY_SETTING.getKey(), false))\n+ .addMapping(\"employee\", mapping)\n+ .addMapping(\"queries\", \"query\", \"type=percolator\")\n+ );\n+ client().prepareIndex(\"test\", \"queries\", \"q1\").setSource(jsonBuilder().startObject()\n+ .field(\"query\", QueryBuilders.nestedQuery(\"employee\",\n+ QueryBuilders.matchQuery(\"employee.name\", \"virginia potts\").operator(Operator.AND), ScoreMode.Avg)\n+ ).endObject())\n+ .get();\n+ client().admin().indices().prepareRefresh().get();\n+\n+ for (int i = 0; i < 32; i++) {\n+ SearchResponse response = client().prepareSearch()\n+ .setQuery(new PercolateQueryBuilder(\"query\", \"employee\",\n+ XContentFactory.jsonBuilder()\n+ .startObject().field(\"companyname\", \"stark\")\n+ .startArray(\"employee\")\n+ .startObject().field(\"name\", \"virginia potts\").endObject()\n+ .startObject().field(\"name\", \"tony stark\").endObject()\n+ .endArray()\n+ 
.endObject().bytes(), XContentType.JSON))\n+ .addSort(\"_doc\", SortOrder.ASC)\n+ // size 0, because other wise load bitsets for normal document in FetchPhase#findRootDocumentIfNested(...)\n+ .setSize(0)\n+ .get();\n+ assertHitCount(response, 1);\n+ }\n+\n+ // We can't check via api... because BitsetCacheListener requires that it can extract shardId from index reader\n+ // and for percolator it can't do that, but that means we don't keep track of\n+ // memory for BitsetCache in case of percolator\n+ long bitsetSize = client().admin().cluster().prepareClusterStats().get()\n+ .getIndicesStats().getSegments().getBitsetMemoryInBytes();\n+ assertEquals(\"The percolator works with in-memory index and therefor shouldn't use bitset cache\", 0L, bitsetSize);\n+ }\n+\n+ public void testPercolateQueryWithNestedDocuments_doLeakFieldDataCacheEntries() throws Exception {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder();\n+ mapping.startObject();\n+ {\n+ mapping.startObject(\"properties\");\n+ {\n+ mapping.startObject(\"companyname\");\n+ mapping.field(\"type\", \"text\");\n+ mapping.endObject();\n+ }\n+ {\n+ mapping.startObject(\"employees\");\n+ mapping.field(\"type\", \"nested\");\n+ {\n+ mapping.startObject(\"properties\");\n+ {\n+ mapping.startObject(\"name\");\n+ mapping.field(\"type\", \"text\");\n+ mapping.field(\"fielddata\", true);\n+ mapping.endObject();\n+ }\n+ mapping.endObject();\n+ }\n+ mapping.endObject();\n+ }\n+ mapping.endObject();\n+ }\n+ mapping.endObject();\n+ createIndex(\"test\", client().admin().indices().prepareCreate(\"test\")\n+ .addMapping(\"employee\", mapping)\n+ .addMapping(\"queries\", \"query\", \"type=percolator\")\n+ );\n+ Script script = new Script(ScriptType.INLINE, MockScriptPlugin.NAME, \"use_fielddata_please\", Collections.emptyMap());\n+ client().prepareIndex(\"test\", \"queries\", \"q1\").setSource(jsonBuilder().startObject()\n+ .field(\"query\", QueryBuilders.nestedQuery(\"employees\",\n+ QueryBuilders.scriptQuery(script), ScoreMode.Avg)\n+ ).endObject()).get();\n+ client().admin().indices().prepareRefresh().get();\n+ XContentBuilder doc = jsonBuilder();\n+ doc.startObject();\n+ {\n+ doc.field(\"companyname\", \"stark\");\n+ doc.startArray(\"employees\");\n+ {\n+ doc.startObject();\n+ doc.field(\"name\", \"virginia_potts\");\n+ doc.endObject();\n+ }\n+ {\n+ doc.startObject();\n+ doc.field(\"name\", \"tony_stark\");\n+ doc.endObject();\n+ }\n+ doc.endArray();\n+ }\n+ doc.endObject();\n+ for (int i = 0; i < 32; i++) {\n+ SearchResponse response = client().prepareSearch()\n+ .setQuery(new PercolateQueryBuilder(\"query\", \"employee\", doc.bytes(), XContentType.JSON))\n+ .addSort(\"_doc\", SortOrder.ASC)\n+ .get();\n+ assertHitCount(response, 1);\n+ }\n+\n+ long fieldDataSize = client().admin().cluster().prepareClusterStats().get()\n+ .getIndicesStats().getFieldData().getMemorySizeInBytes();\n+ assertEquals(\"The percolator works with in-memory index and therefor shouldn't use field-data cache\", 0L, fieldDataSize);\n+ }\n+\n public void testPercolatorQueryViaMultiSearch() throws Exception {\n createIndex(\"test\", client().admin().indices().prepareCreate(\"test\")\n .addMapping(\"type\", \"field1\", \"type=text\")", "filename": "modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchIT.java", "status": "modified" } ] }
{ "body": "The deprecation of lenient booleans seems to have made parsing of lenient boolean values in documents many times (in the order of 1000x) slower. Here is a relevant stack trace from the hot threads API.\r\n\r\n```\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n 
java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n 
java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n 
java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n 
java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.match(Pattern.java:4785)\r\n 
java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$CharProperty.match(Pattern.java:3777)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Loop.matchInit(Pattern.java:4804)\r\n java.util.regex.Pattern$Prolog.match(Pattern.java:4741)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$Slice.match(Pattern.java:3972)\r\n java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)\r\n java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)\r\n java.util.regex.Pattern$Curly.match0(Pattern.java:4247)\r\n java.util.regex.Pattern$Curly.match(Pattern.java:4234)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4604)\r\n java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4602)\r\n java.util.regex.Pattern$Branch.match(Pattern.java:4602)\r\n java.util.regex.Pattern$Curly.match0(Pattern.java:4279)\r\n java.util.regex.Pattern$Curly.match(Pattern.java:4234)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Curly.match0(Pattern.java:4279)\r\n java.util.regex.Pattern$Curly.match(Pattern.java:4234)\r\n java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)\r\n java.util.regex.Pattern$Curly.match0(Pattern.java:4279)\r\n java.util.regex.Pattern$Curly.match(Pattern.java:4234)\r\n java.util.regex.Pattern$Slice.match(Pattern.java:3972)\r\n java.util.regex.Matcher.match(Matcher.java:1270)\r\n java.util.regex.Matcher.matches(Matcher.java:604)\r\n org.elasticsearch.common.logging.DeprecationLogger.extractWarningValueFromWarningHeader(DeprecationLogger.java:230)\r\n org.elasticsearch.common.logging.DeprecationLogger$$Lambda$1875/1145573162.apply(Unknown Source)\r\n java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)\r\n java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)\r\n java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)\r\n java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)\r\n java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)\r\n java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)\r\n java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)\r\n org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.putResponse(ThreadContext.java:421)\r\n org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.access$1100(ThreadContext.java:337)\r\n org.elasticsearch.common.util.concurrent.ThreadContext.addResponseHeader(ThreadContext.java:280)\r\n org.elasticsearch.common.logging.DeprecationLogger.deprecated(DeprecationLogger.java:254)\r\n org.elasticsearch.common.logging.DeprecationLogger.deprecated(DeprecationLogger.java:124)\r\n org.elasticsearch.common.xcontent.support.AbstractXContentParser.booleanValue(AbstractXContentParser.java:115)\r\n org.elasticsearch.index.mapper.BooleanFieldMapper.parseCreateField(BooleanFieldMapper.java:247)\r\n org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:287)\r\n```\r\n\r\nIt is probably fine that lenient booleans are a bit slower to index, but the difference here is too much?", "comments": [ { "body": "Looks like anything that results in 
`extractWarningValueFromWarningHeader` will be slow. The regex can cause a lot of backtracking - https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java#L206", "created_at": "2017-04-10T13:54:55Z" }, { "body": "I have assigned this one to myself as there's a broader problem here, namely the extraction of the warning value from the warning header.", "created_at": "2017-04-10T14:00:16Z" }, { "body": "make sure `-Xss` value,eg:linux jdk8_172 `-Xss256`,`Exception in thread \"I/O dispatcher 1\" java.lang.StackOverflowError` will happen\r\n", "created_at": "2022-07-23T07:54:49Z" } ], "number": 24018, "title": "Documents that trigger warning headers are very slow to index" }
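A rough, self-contained illustration of the failure mode described in this issue. This is hedged: the real `WARNING_HEADER_PATTERN` from `DeprecationLogger` is not reproduced here, and the class name, pattern, and input below are invented purely for demonstration. It contrasts a nested-quantifier regex, which backtracks exponentially on a near-miss input, with a hand-rolled linear scan for the same language:

```java
import java.util.regex.Pattern;

public class BacktrackingDemo {
    public static void main(String[] args) {
        // Classic nested-quantifier pattern: on a near-miss input the engine retries an
        // exponential number of ways to split the run of 'a's before giving up.
        Pattern slow = Pattern.compile("(a+)+b");
        String input = "aaaaa".repeat(5); // 25 'a's and no trailing 'b'

        long start = System.nanoTime();
        boolean regexMatched = slow.matcher(input).matches();
        System.out.printf("regex:  matched=%b in %d ms%n", regexMatched, (System.nanoTime() - start) / 1_000_000);

        // Hand-rolled check for the same language (one or more 'a's followed by 'b'):
        // a single linear pass, no backtracking.
        start = System.nanoTime();
        boolean manualMatched = input.length() >= 2
                && input.charAt(input.length() - 1) == 'b'
                && input.chars().limit(input.length() - 1).allMatch(c -> c == 'a');
        System.out.printf("manual: matched=%b in %d ms%n", manualMatched, (System.nanoTime() - start) / 1_000_000);
    }
}
```

The regex branch takes on the order of seconds for a 25-character input while the manual scan finishes in well under a millisecond, which is the same shape of regression the hot-threads dump above shows for warning-header de-duplication.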
{ "body": "When building headers for a REST response, we de-duplicate the warning headers based on the actual warning value. The current implementation of this uses a capturing regular expression that is prone to excessive backtracking. In cases a request involves a large number of warnings, this extraction can be a severe performance penalty. An example where this can arise is a bulk indexing request that utilizes a deprecated feature (e.g., using deprecated forms of boolean values). This commit is an attempt to address this performance regression. We already know the format of the warning header, so we do not need to use a regular expression to parse it but rather can parse it by hand to extract the warning value. This gains back the vast majority of the performance lost due to the usage of a deprecated feature. There is still a performance loss due to logging the deprecation message but we do not address that concern in this commit.\r\n\r\nCloses #24018", "number": 24114, "review_comments": [], "title": "Improve performance of extracting warning value" }
{ "commits": [ { "message": "Improve performance of extracting warning value\n\nWhen building headers for a REST response, we de-duplicate the warning\nheaders based on the actual warning value. The current implementation of\nthis uses a capturing regular expression that is prone to excessive\nbacktracking. In cases a request involves a large number of warnings,\nthis extraction can be a severe performance penalty. An example where\nthis can arise is a bulk indexing request that utilizes a deprecated\nfeature (e.g., using deprecated forms of boolean values). This commit is\nan attempt to address this performance regression. We already know the\nformat of the warning header, so we do not need to use a regular\nexpression to parse it but rather can parse it by hand to extract the\nwarning value. This gains back the vast majority of the performance lost\ndue to the usage of a deprecated feature. There is still a performance\nloss due to logging the deprecation message but we do not address that\nconcern in this commit." } ], "files": [ { "diff": "@@ -226,10 +226,41 @@ public void deprecated(String msg, Object... params) {\n * @return the extracted warning value\n */\n public static String extractWarningValueFromWarningHeader(final String s) {\n+ /*\n+ * We know the exact format of the warning header, so to extract the warning value we can skip forward from the front to the first\n+ * quote, and skip backwards from the end to the penultimate quote:\n+ *\n+ * 299 Elasticsearch-6.0.0 \"warning value\" \"Sat, 25, Feb 2017 10:27:43 GMT\"\n+ * ^ ^ ^\n+ * firstQuote penultimateQuote lastQuote\n+ *\n+ * We do it this way rather than seeking forward after the first quote because there could be escaped quotes in the warning value\n+ * but since there are none in the warning date, we can skip backwards to find the quote that closes the quoted warning value.\n+ *\n+ * We parse this manually rather than using the capturing regular expression because the regular expression involves a lot of\n+ * backtracking and carries a performance penalty. However, when assertions are enabled, we still use the regular expression to\n+ * verify that we are maintaining the warning header format.\n+ */\n+ final int firstQuote = s.indexOf('\\\"');\n+ final int lastQuote = s.lastIndexOf('\\\"');\n+ final int penultimateQuote = s.lastIndexOf('\\\"', lastQuote - 1);\n+ final String warningValue = s.substring(firstQuote + 1, penultimateQuote - 2);\n+ assert assertWarningValue(s, warningValue);\n+ return warningValue;\n+ }\n+\n+ /**\n+ * Assert that the specified string has the warning value equal to the provided warning value.\n+ *\n+ * @param s the string representing a full warning header\n+ * @param warningValue the expected warning header\n+ * @return {@code true} if the specified string has the expected warning value\n+ */\n+ private static boolean assertWarningValue(final String s, final String warningValue) {\n final Matcher matcher = WARNING_HEADER_PATTERN.matcher(s);\n final boolean matches = matcher.matches();\n assert matches;\n- return matcher.group(1);\n+ return matcher.group(1).equals(warningValue);\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java", "status": "modified" } ] }
{ "body": "In Elasticsearch 5.3.0 a bug was introduced in the merging of default settings when the target setting existed as an array. This arose due to the fact that when a target setting is an array, the setting key is broken into key.0, key.1, ..., key.n, one for each element of the array. When settings are replaced by default.key, we are looking for the target key but not the target key.0. This leads to key, and key.0, ..., key.n being present in the constructed settings object. This commit addresses two issues here. The first is that we fix the merging of the keys so that when we try to merge default.key, we also check for the presence of the flattened keys. The second is that when we try to get a setting value as an array from a settings object, we check whether or not the backing map contains the top-level key as well as the flattened keys. This latter check would have caught the first bug. For kicks, we add some tests.\r\n\r\nRelates #24052, relates #23981", "comments": [], "number": 24074, "title": "Correct handling of default and array settings" }
{ "body": "Today Elasticsearch allows default settings to be used only if the actual setting is not set. These settings are trappy, and the complexity invites bugs. This commit removes support for default settings with the exception of default.path.data, default.path.conf, and default.path.logs which are maintainted to support packaging. A follow-up will remove support for these as well.\r\n\r\nRelates #23981, relates #24052, relates #24074\r\n\r\n", "number": 24093, "review_comments": [], "title": "Remove support for default settings" }
{ "commits": [ { "message": "Remove support for default settings\n\nToday Elasticsearch allows default settings to be used only if the\nactual setting is not set. These settings are trappy, and the complexity\ninvites bugs. This commit removes support for default settings with the\nexception of default.path.data, default.path.conf, and default.path.logs\nwhich are maintainted to support packaging. A follow-up will remove\nsupport for these as well." }, { "message": "Fix typo" }, { "message": "Fix another typo" } ], "files": [ { "diff": "@@ -311,9 +311,12 @@ public void apply(Settings value, Settings current, Settings previous) {\n HunspellService.HUNSPELL_IGNORE_CASE,\n HunspellService.HUNSPELL_DICTIONARY_OPTIONS,\n IndicesStore.INDICES_STORE_DELETE_SHARD_TIMEOUT,\n+ Environment.DEFAULT_PATH_CONF_SETTING,\n Environment.PATH_CONF_SETTING,\n+ Environment.DEFAULT_PATH_DATA_SETTING,\n Environment.PATH_DATA_SETTING,\n Environment.PATH_HOME_SETTING,\n+ Environment.DEFAULT_PATH_LOGS_SETTING,\n Environment.PATH_LOGS_SETTING,\n Environment.PATH_REPO_SETTING,\n Environment.PATH_SCRIPTS_SETTING,", "filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java", "status": "modified" }, { "diff": "@@ -1063,12 +1063,10 @@ public Builder loadFromStream(String resourceName, InputStream is) throws IOExce\n return this;\n }\n \n- public Builder putProperties(Map<String, String> esSettings, Predicate<String> keyPredicate, Function<String, String> keyFunction) {\n+ public Builder putProperties(final Map<String, String> esSettings, final Function<String, String> keyFunction) {\n for (final Map.Entry<String, String> esSetting : esSettings.entrySet()) {\n final String key = esSetting.getKey();\n- if (keyPredicate.test(key)) {\n- map.put(keyFunction.apply(key), esSetting.getValue());\n- }\n+ map.put(keyFunction.apply(key), esSetting.getValue());\n }\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java", "status": "modified" }, { "diff": "@@ -49,11 +49,17 @@\n // public+forbidden api!\n public class Environment {\n public static final Setting<String> PATH_HOME_SETTING = Setting.simpleString(\"path.home\", Property.NodeScope);\n- public static final Setting<String> PATH_CONF_SETTING = Setting.simpleString(\"path.conf\", Property.NodeScope);\n+ public static final Setting<String> DEFAULT_PATH_CONF_SETTING = Setting.simpleString(\"default.path.conf\", Property.NodeScope);\n+ public static final Setting<String> PATH_CONF_SETTING =\n+ new Setting<>(\"path.conf\", DEFAULT_PATH_CONF_SETTING, Function.identity(), Property.NodeScope);\n public static final Setting<String> PATH_SCRIPTS_SETTING = Setting.simpleString(\"path.scripts\", Property.NodeScope);\n+ public static final Setting<List<String>> DEFAULT_PATH_DATA_SETTING =\n+ Setting.listSetting(\"default.path.data\", Collections.emptyList(), Function.identity(), Property.NodeScope);\n public static final Setting<List<String>> PATH_DATA_SETTING =\n- Setting.listSetting(\"path.data\", Collections.emptyList(), Function.identity(), Property.NodeScope);\n- public static final Setting<String> PATH_LOGS_SETTING = Setting.simpleString(\"path.logs\", Property.NodeScope);\n+ Setting.listSetting(\"path.data\", DEFAULT_PATH_DATA_SETTING, Function.identity(), Property.NodeScope);\n+ public static final Setting<String> DEFAULT_PATH_LOGS_SETTING = Setting.simpleString(\"default.path.logs\", Property.NodeScope);\n+ public static final Setting<String> PATH_LOGS_SETTING =\n+ new Setting<>(\"path.logs\", 
DEFAULT_PATH_LOGS_SETTING, Function.identity(), Property.NodeScope);\n public static final Setting<List<String>> PATH_REPO_SETTING =\n Setting.listSetting(\"path.repo\", Collections.emptyList(), Function.identity(), Property.NodeScope);\n public static final Setting<String> PATH_SHARED_DATA_SETTING = Setting.simpleString(\"path.shared_data\", Property.NodeScope);\n@@ -115,7 +121,8 @@ public Environment(Settings settings) {\n throw new IllegalStateException(PATH_HOME_SETTING.getKey() + \" is not configured\");\n }\n \n- if (PATH_CONF_SETTING.exists(settings)) {\n+ // this is trappy, Setting#get(Settings) will get a fallback setting yet return false for Settings#exists(Settings)\n+ if (PATH_CONF_SETTING.exists(settings) || DEFAULT_PATH_CONF_SETTING.exists(settings)) {\n configFile = PathUtils.get(cleanPath(PATH_CONF_SETTING.get(settings)));\n } else {\n configFile = homeFile.resolve(\"config\");\n@@ -156,7 +163,9 @@ public Environment(Settings settings) {\n } else {\n repoFiles = new Path[0];\n }\n- if (PATH_LOGS_SETTING.exists(settings)) {\n+\n+ // this is trappy, Setting#get(Settings) will get a fallback setting yet return false for Settings#exists(Settings)\n+ if (PATH_LOGS_SETTING.exists(settings) || DEFAULT_PATH_LOGS_SETTING.exists(settings)) {\n logsFile = PathUtils.get(cleanPath(PATH_LOGS_SETTING.get(settings)));\n } else {\n logsFile = homeFile.resolve(\"logs\");", "filename": "core/src/main/java/org/elasticsearch/env/Environment.java", "status": "modified" }, { "diff": "@@ -37,17 +37,12 @@\n import java.util.Map;\n import java.util.Set;\n import java.util.function.Function;\n-import java.util.function.Predicate;\n-import java.util.function.UnaryOperator;\n \n import static org.elasticsearch.common.Strings.cleanPath;\n \n public class InternalSettingsPreparer {\n \n private static final String[] ALLOWED_SUFFIXES = {\".yml\", \".yaml\", \".json\"};\n- private static final String PROPERTY_DEFAULTS_PREFIX = \"default.\";\n- private static final Predicate<String> PROPERTY_DEFAULTS_PREDICATE = key -> key.startsWith(PROPERTY_DEFAULTS_PREFIX);\n- private static final UnaryOperator<String> STRIP_PROPERTY_DEFAULTS_PREFIX = key -> key.substring(PROPERTY_DEFAULTS_PREFIX.length());\n \n public static final String SECRET_PROMPT_VALUE = \"${prompt.secret}\";\n public static final String TEXT_PROMPT_VALUE = \"${prompt.text}\";\n@@ -125,22 +120,16 @@ public static Environment prepareEnvironment(Settings input, Terminal terminal,\n }\n \n /**\n- * Initializes the builder with the given input settings, and applies settings and default settings from the specified map (these\n- * settings typically come from the command line). 
The default settings are applied only if the setting does not exist in the specified\n- * output.\n+ * Initializes the builder with the given input settings, and applies settings from the specified map (these settings typically come\n+ * from the command line).\n *\n * @param output the settings builder to apply the input and default settings to\n * @param input the input settings\n- * @param esSettings a map from which to apply settings and default settings\n+ * @param esSettings a map from which to apply settings\n */\n static void initializeSettings(final Settings.Builder output, final Settings input, final Map<String, String> esSettings) {\n output.put(input);\n- output.putProperties(esSettings,\n- PROPERTY_DEFAULTS_PREDICATE\n- .and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key)) == null)\n- .and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key) + \".0\") == null),\n- STRIP_PROPERTY_DEFAULTS_PREFIX);\n- output.putProperties(esSettings, PROPERTY_DEFAULTS_PREDICATE.negate(), Function.identity());\n+ output.putProperties(esSettings, Function.identity());\n output.replacePropertyPlaceholders();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java", "status": "modified" }, { "diff": "@@ -23,10 +23,12 @@\n \n import java.io.IOException;\n import java.net.URL;\n+import java.nio.file.Path;\n \n import static org.hamcrest.CoreMatchers.endsWith;\n import static org.hamcrest.CoreMatchers.notNullValue;\n import static org.hamcrest.CoreMatchers.nullValue;\n+import static org.hamcrest.Matchers.equalTo;\n \n /**\n * Simple unit-tests for Environment.java\n@@ -71,4 +73,91 @@ public void testRepositoryResolution() throws IOException {\n assertThat(environment.resolveRepoURL(new URL(\"jar:http://localhost/test/../repo1?blah!/repo/\")), nullValue());\n }\n \n+ public void testDefaultPathData() {\n+ final Path defaultPathData = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", createTempDir().toAbsolutePath())\n+ .put(\"default.path.data\", defaultPathData)\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.dataFiles(), equalTo(new Path[] { defaultPathData }));\n+ }\n+\n+ public void testPathDataOverrideDefaultPathData() {\n+ final Path pathData = createTempDir().toAbsolutePath();\n+ final Path defaultPathData = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", createTempDir().toAbsolutePath())\n+ .put(\"path.data\", pathData)\n+ .put(\"default.path.data\", defaultPathData)\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.dataFiles(), equalTo(new Path[] { pathData }));\n+ }\n+\n+ public void testPathDataWhenNotSet() {\n+ final Path pathHome = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder().put(\"path.home\", pathHome).build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.dataFiles(), equalTo(new Path[]{pathHome.resolve(\"data\")}));\n+ }\n+\n+ public void testDefaultPathLogs() {\n+ final Path defaultPathLogs = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", createTempDir().toAbsolutePath())\n+ .put(\"default.path.logs\", defaultPathLogs)\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.logsFile(), equalTo(defaultPathLogs));\n+ }\n+\n+ public void 
testPathLogsOverrideDefaultPathLogs() {\n+ final Path pathLogs = createTempDir().toAbsolutePath();\n+ final Path defaultPathLogs = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", createTempDir().toAbsolutePath())\n+ .put(\"path.logs\", pathLogs)\n+ .put(\"default.path.logs\", defaultPathLogs)\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.logsFile(), equalTo(pathLogs));\n+ }\n+\n+ public void testPathLogsWhenNotSet() {\n+ final Path pathHome = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder().put(\"path.home\", pathHome).build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.logsFile(), equalTo(pathHome.resolve(\"logs\")));\n+ }\n+\n+ public void testDefaultPathConf() {\n+ final Path defaultPathConf = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", createTempDir().toAbsolutePath())\n+ .put(\"default.path.conf\", defaultPathConf)\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.configFile(), equalTo(defaultPathConf));\n+ }\n+\n+ public void testPathConfOverrideDefaultPathConf() {\n+ final Path pathConf = createTempDir().toAbsolutePath();\n+ final Path defaultPathConf = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", createTempDir().toAbsolutePath())\n+ .put(\"path.conf\", pathConf)\n+ .put(\"default.path.conf\", defaultPathConf)\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.configFile(), equalTo(pathConf));\n+ }\n+\n+ public void testPathConfWhenNotSet() {\n+ final Path pathHome = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder().put(\"path.home\", pathHome).build();\n+ final Environment environment = new Environment(settings);\n+ assertThat(environment.configFile(), equalTo(pathHome.resolve(\"config\")));\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/env/EnvironmentTests.java", "status": "modified" }, { "diff": "@@ -182,29 +182,11 @@ public void testSecureSettings() {\n assertEquals(\"secret\", fakeSetting.get(env.settings()).toString());\n }\n \n- public void testDefaultProperties() throws Exception {\n+ public void testDefaultPropertiesDoNothing() throws Exception {\n Map<String, String> props = Collections.singletonMap(\"default.setting\", \"foo\");\n Environment env = InternalSettingsPreparer.prepareEnvironment(baseEnvSettings, null, props);\n- assertEquals(\"foo\", env.settings().get(\"setting\"));\n- }\n-\n- public void testDefaultPropertiesOverride() throws Exception {\n- Path configDir = homeDir.resolve(\"config\");\n- Files.createDirectories(configDir);\n- Files.write(configDir.resolve(\"elasticsearch.yml\"), Collections.singletonList(\"setting: bar\"), StandardCharsets.UTF_8);\n- Map<String, String> props = Collections.singletonMap(\"default.setting\", \"foo\");\n- Environment env = InternalSettingsPreparer.prepareEnvironment(baseEnvSettings, null, props);\n- assertEquals(\"bar\", env.settings().get(\"setting\"));\n- }\n-\n- public void testDefaultWithArray() {\n- final Settings.Builder output = Settings.builder().put(\"foobar.0\", \"bar\").put(\"foobar.1\", \"baz\");\n- final Map<String, String> esSettings = Collections.singletonMap(\"default.foobar\", \"foo\");\n- InternalSettingsPreparer.initializeSettings(output, Settings.EMPTY, 
esSettings);\n- final Settings settings = output.build();\n- assertThat(settings.get(\"foobar.0\"), equalTo(\"bar\"));\n- assertThat(settings.get(\"foobar.1\"), equalTo(\"baz\"));\n- assertNull(settings.get(\"foobar\"));\n+ assertEquals(\"foo\", env.settings().get(\"default.setting\"));\n+ assertNull(env.settings().get(\"setting\"));\n }\n \n }", "filename": "core/src/test/java/org/elasticsearch/node/InternalSettingsPreparerTests.java", "status": "modified" }, { "diff": "@@ -89,23 +89,6 @@ Enter value for [node.name]:\n NOTE: Elasticsearch will not start if `${prompt.text}` or `${prompt.secret}`\n is used in the settings and the process is run as a service or in the background.\n \n-[float]\n-=== Setting default settings\n-\n-New default settings may be specified on the command line using the\n-`default.` prefix. This will specify a value that will be used by\n-default unless another value is specified in the config file.\n-\n-For instance, if Elasticsearch is started as follows:\n-\n-[source,sh]\n----------------------------\n-./bin/elasticsearch -Edefault.node.name=My_Node\n----------------------------\n-\n-the value for `node.name` will be `My_Node`, unless it is overwritten on the\n-command line with `es.node.name` or in the config file with `node.name`.\n-\n [float]\n [[logging]]\n == Logging configuration", "filename": "docs/reference/setup/configuration.asciidoc", "status": "modified" }, { "diff": "@@ -227,8 +227,6 @@ The image offers several methods for configuring Elasticsearch settings with the\n ===== A. Present the parameters via Docker environment variables\n For example, to define the cluster name with `docker run` you can pass `-e \"cluster.name=mynewclustername\"`. Double quotes are required.\n \n-NOTE: There is a difference between defining <<_setting_default_settings,default settings>> and normal settings. The former are prefixed with `default.` and cannot override normal settings, if defined.\n-\n ===== B. Bind-mounted configuration\n Create your custom config file and mount this over the image's corresponding file.\n For example, bind-mounting a `custom_elasticsearch.yml` with `docker run` can be accomplished with the parameter:", "filename": "docs/reference/setup/install/docker.asciidoc", "status": "modified" } ] }
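One subtlety worth calling out from the diff above is its in-code comment that a fallback setting influences `get` but not `exists`. Below is a toy model of that trap, assuming a simplified stand-in rather than the real `Setting`/`Settings` classes; it shows why the `Environment` constructor has to check both `PATH_CONF_SETTING.exists(settings)` and `DEFAULT_PATH_CONF_SETTING.exists(settings)`:

```java
import java.util.HashMap;
import java.util.Map;

public class FallbackSettingDemo {
    /** Simplified stand-in for a setting whose value falls back to another key when its own key is absent. */
    static final class FallbackSetting {
        final String key;
        final String fallbackKey;

        FallbackSetting(String key, String fallbackKey) {
            this.key = key;
            this.fallbackKey = fallbackKey;
        }

        String get(Map<String, String> settings) {
            String value = settings.get(key);
            return value != null ? value : settings.getOrDefault(fallbackKey, "");
        }

        // only the setting's own key is consulted here, mirroring the trap the diff comment describes
        boolean exists(Map<String, String> settings) {
            return settings.containsKey(key);
        }
    }

    public static void main(String[] args) {
        FallbackSetting pathConf = new FallbackSetting("path.conf", "default.path.conf");
        Map<String, String> settings = new HashMap<>();
        settings.put("default.path.conf", "/etc/elasticsearch");

        // get() resolves through the fallback, yet exists() still reports false,
        // which is why the production code also checks the default setting explicitly.
        System.out.println(pathConf.get(settings));    // prints /etc/elasticsearch
        System.out.println(pathConf.exists(settings)); // prints false
    }
}
```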
{ "body": "**Elasticsearch version**: 5.3.0\r\n\r\n**OS version**: Tried in Centos 7 and OS X, though I suspect it doesn't matter.\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen using multiple data paths, or even specifying just a single data path as an array in yaml, elasticearch includes whatever is passed to it as default.path.data, such as when installed via RPM or DEB packages, from the systemd unit OR the init.d script, since you set this when you execute the es binary:\r\n\r\n`-Edefault.path.data=${DATA_DIR}`\r\n\r\n**Steps to reproduce**:\r\nRepro:\r\n\r\nSet this in the elasticsearch.yml\r\n\r\n```\r\npath.data:\r\n- \"/some/datapath1\"\r\n```\r\n\r\nrun this:\r\n\r\n`bin/elasticsearch -Edefault.path.data=\"/some/datapath2\"`\r\n\r\nAnd elasticsearch will start configured with both data paths.\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n\r\n```\r\n[2017-04-07T13:30:49,987][INFO ][o.e.e.NodeEnvironment ] [jCWIeyS] using [2] data paths, mounts [[/ (/dev/disk1)]], net usable_space [198.8gb], net total_space [464.7gb], spins? [unknown], types [hfs]\r\n```\r\n\r\nThis also happens when you use this array format:\r\n\r\n`path.data: [\"/some/datapath1\"]`\r\n", "comments": [ { "body": "I know what is happening here and I see a fix, I will open a PR soon. This is an incredibly unfortunate consequence of the fact that when a setting is specified as an array, we break it down into new keys `path.data.0`, `path.data.1`, ..., `path.data.n`, yet the default setting is applied as `path.data` so none of the keys from the array setting override as is done with other default settings.\r\n", "created_at": "2017-04-07T20:08:05Z" }, { "body": "This is a *horrifically* bad bug, it appears to have been introduced in 5.3.0, and it means that shards can end up in `/var/lib/elasticsearch` even if a user did not intend for that to be the case because they configured `path.data` as an array but the packaging specifies `default.path.data`. If `/var/lib/elasticsearch` is sitting on the root partition because why not since the user wants the data elsewhere, the user can easily end up with:\r\n - shards not where they wanted them\r\n - the root partition can fill up\r\n\r\nIt gets worse. Let's say that we fix this bug in 5.3.1 and a user migrates from 5.3.0 not aware of the fiasco that is ensuing here. They start up Elasticsearch on 5.3.1, and now their shards are missing! If they're not careful, they are in data loss territory.", "created_at": "2017-04-07T20:36:59Z" }, { "body": "😭 ", "created_at": "2017-04-07T20:39:48Z" }, { "body": "> Let's say that we fix this bug in 5.3.1 and a user migrates from 5.3.0 not aware of the fiasco that is ensuing here. They start up Elasticsearch on 5.3.1, and now their shards are missing! If they're not careful, they are in data loss territory.\r\n\r\nIf we just fix the config issue and someone upgrades one node at a time they shouldn't lose any data so long as they have replicas. The entire shard will have to recover from the primary, but that is a fairly normal thing. They'd have to manually clean up the shards in accidental data directory. 
Which isn't horrible, but it isn't great at all.\r\n\r\nWe could do more to try and copy the data or make tool or something.\r\n\r\nMinimally anyone on 5.3 will want to be able to know whether they are effected by this issue before they upgrade so they know what they have to do as part of the upgrade.\r\n\r\n", "created_at": "2017-04-08T01:40:28Z" }, { "body": "> If we just fix the config issue and someone upgrades one node at a time they shouldn't lose any data so long as they have replicas.\r\n\r\nThat's the problem, we do not have complete control over this.\r\n\r\n> They'd have to manually clean up the shards in accidental data directory. Which isn't horrible, but it isn't great at all.\r\n\r\nYes, if they are aware of the problem and do not panic. Or they do not do something like practice immutable infrastructure.\r\n\r\n> We could do more to try and copy the data or make tool or something.\r\n\r\nYes, but it's quite delicate.\r\n\r\n> Minimally anyone on 5.3 will want to be able to know whether they are effected by this issue before they upgrade so they know what they have to do as part of the upgrade.\r\n\r\nIndeed, and therein lies the rub.", "created_at": "2017-04-08T01:44:33Z" }, { "body": "Yeah, I just wanted to start talking about this. I certainly don't claim to have answers. I guess I'm fairly sure that the person doing the upgrade is going to want to know if they are effected by this. Lots of people won't and 5.3.0 will just be a normal version for them. For the effected people, even if we handle the whole thing automatically, they are going to need to know.", "created_at": "2017-04-08T01:49:25Z" }, { "body": "> Yeah, I just wanted to start talking about this.\r\n\r\nIndeed, we need more people thinking about this. I discussed some of the complexities with @dakrone earlier in the evening via another channel, and I've been thinking about it all night. Sorry that I haven't taken the time to write my thoughts down yet.\r\n\r\nAnother complexity that I thought of is that the fix for this has to live for the rest of the 5.x series lifecycle because someone can upgrade from 5.3.0 to any future release in the 5.x series.\r\n\r\nAnd yet another complexity that I thought is a situation where someone has their `default.path.data` included as one of the values in `path.data` (assuming multiple `path.data`).\r\n\r\n> For the effected people, even if we handle the whole thing automatically, they are going to need to know.\r\n\r\nI completely agree.", "created_at": "2017-04-08T01:57:34Z" }, { "body": "Do we have set limitations yet on rolling upgrades to 6.0? Would 5.3.0 ->\n6.0 be allowed? If so, the fix has to carry over there as well.\n\nOn Fri, Apr 7, 2017, 8:58 PM Jason Tedor <notifications@github.com> wrote:\n\n> Yeah, I just wanted to start talking about this.\n>\n> Indeed, I need more people to talk to about this. 
I discussed some of the\n> complexities with @dakrone <https://github.com/dakrone> earlier in the\n> evening via another channel, and I've been thinking about it all night.\n> Sorry that I haven't taken the time to write my thoughts down yet.\n>\n> Another complexity that I thought of is that the fix for this has to live\n> for the rest of the 5.x series lifecycle because someone can upgrade from\n> 5.3.0 to any future release in the 5.x series.\n>\n> And yet another complexity that I thought is a situation where someone has\n> their default.path.data included as one of the values in path.data\n> (assuming multiple path.data).\n>\n> For the effected people, even if we handle the whole thing automatically,\n> they are going to need to know.\n>\n> I completely agree.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/23981#issuecomment-292687280>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AB1ZYHwbhJ9d1nXtq3sn8vTdcT_x87O0ks5rtumtgaJpZM4M3P2Y>\n> .\n>\n", "created_at": "2017-04-08T02:00:11Z" }, { "body": "No, this is absolutely not allowed. You can only rolling restart upgrade from the latest 5.x that is released when 6.0.0 goes GA.", "created_at": "2017-04-08T02:01:36Z" }, { "body": "> Do we have set limitations yet on rolling upgrades to 6.0? Would 5.3.0 ->\r\n> 6.0 be allowed? If so, the fix has to carry over there as well.\r\n\r\nWe'll make sure there is a 5.4 and so that doesn't have to happen.\r\n\r\n> And yet another complexity that I thought is a situation where someone has their default.path.data included as one of the values in path.data (assuming multiple path.data).\r\n\r\nSo any detection we have will produce a false positive for them, right?", "created_at": "2017-04-08T02:01:49Z" }, { "body": "In fact, full cluster shutdown major version upgrades are even MORE dangerous in this instance. Because all of the shard disappear at the same time. ", "created_at": "2017-04-08T02:01:58Z" }, { "body": "> So any detection we have will produce a false positive for them, right?\r\n\r\nI think it has to be smart enough to not produce a false positive for them.", "created_at": "2017-04-08T02:02:23Z" }, { "body": "Just because a 5.4 exists, doesn't mean people will be there when they upgrade.", "created_at": "2017-04-08T02:02:51Z" }, { "body": "> In fact, full cluster shutdown major version upgrades are even MORE dangerous in this instance. Because all of the shard disappear at the same time.\r\n\r\nIndeed, that means that we need a fix in 6.x as well, someone could full cluster restart upgrade from 5.3.0.", "created_at": "2017-04-08T02:03:01Z" }, { "body": "> Just because a 5.4 exists, doesn't mean people will be there when they upgrade.\r\n\r\nRight, sorry. They won't be able to do a rolling upgrade. Just a full cluster restart upgrade.", "created_at": "2017-04-08T02:04:12Z" }, { "body": "I think we should look into removing the default option here. It's too much magic IMO and we can give people a good error message in that case. We can even advice how to copy over stuff to the actual data directories OR do it ourself. We had code for this in 2.x that we can reuse when we moved away from rolling our own software raid.", "created_at": "2017-04-10T10:02:37Z" }, { "body": "> I think we should look into removing the default option here.\r\n\r\nToday this is used to support the packaging. 
I think that we can remove this if we instead ship the elasticsearch.yml with the distribution packaging to include explicit values path.data, path.logs, and path.conf. Sadly, I think that as a result of the situation here, we can not do this until 7.0.0.", "created_at": "2017-04-10T15:22:23Z" }, { "body": "> Sadly, I think that as a result of the situation here, we can not do this until 7.0.0\r\n\r\nthat is a critical issue, I think we have to find a way to either fix it reliably or we need to drop stuff and I am not hesitating to even do this in a minor if it means dataloss or anything along those lines.", "created_at": "2017-04-11T06:19:53Z" } ], "number": 23981, "title": "default.path.data included as a data path when path.data configured as yaml array" }
{ "body": "In Elasticsearch 5.3.0 a bug was introduced in the merging of default settings when the target setting existed as an array. This arose due to the fact that when a target setting is an array, the setting key is broken into key.0, key.1, ..., key.n, one for each element of the array. When settings are replaced by default.key, we are looking for the target key but not the target key.0. This leads to key, and key.0, ..., key.n being present in the constructed settings object. This commit addresses two issues here. The first is that we fix the merging of the keys so that when we try to merge default.key, we also check for the presence of the flattened keys. The second is that when we try to get a setting value as an array from a settings object, we check whether or not the backing map contains the top-level key as well as the flattened keys. This latter check would have caught the first bug. For kicks, we add some tests.\r\n\r\nRelates #24052, relates #23981", "number": 24074, "review_comments": [], "title": "Correct handling of default and array settings" }
{ "commits": [ { "message": "Correct handling of default and array settings\n\nIn Elasticsearch 5.3.0 a bug was introduced in the merging of default\nsettings when the target setting existed as an array. This arose due to\nthe fact that when a target setting is an array, the setting key is\nbroken into key.0, key.1, ..., key.n, one for each element of the\narray. When settings are replaced by default.key, we are looking for the\ntarget key but not the target key.0. This leads to key, and key.0, ...,\nkey.n being present in the constructed settings object. This commit\naddresses two issues here. The first is that we fix the merging of the\nkeys so that when we try to merge default.key, we also check for the\npresence of the flattened keys. The second is that when we try to get a\nsetting value as an array from a settings object, we check whether or\nnot the backing map contains the top-level key as well as the flattened\nkeys. This latter check would have caught the first bug. For kicks, we\nadd some tests." }, { "message": "Merge branch 'master' into default-settings-array\n\n* master:\n Remove more hidden file leniency from plugins\n Register error listener in evil logger tests\n Detect using logging before configuration\n [DOCS] Added note about Elastic Cloud to improve 'elastic aws' SERP results.\n Add version constant for 5.5 (#24075)\n Add unit tests for NestedAggregator (#24054)" }, { "message": "Remove extraneous newline" } ], "files": [ { "diff": "@@ -57,6 +57,7 @@\n import java.util.Iterator;\n import java.util.LinkedHashMap;\n import java.util.List;\n+import java.util.Locale;\n import java.util.Map;\n import java.util.NoSuchElementException;\n import java.util.Objects;\n@@ -442,6 +443,20 @@ public String[] getAsArray(String settingPrefix, String[] defaultArray) throws S\n public String[] getAsArray(String settingPrefix, String[] defaultArray, Boolean commaDelimited) throws SettingsException {\n List<String> result = new ArrayList<>();\n \n+ final String valueFromPrefix = get(settingPrefix);\n+ final String valueFromPreifx0 = get(settingPrefix + \".0\");\n+\n+ if (valueFromPrefix != null && valueFromPreifx0 != null) {\n+ final String message = String.format(\n+ Locale.ROOT,\n+ \"settings object contains values for [%s=%s] and [%s=%s]\",\n+ settingPrefix,\n+ valueFromPrefix,\n+ settingPrefix + \".0\",\n+ valueFromPreifx0);\n+ throw new IllegalStateException(message);\n+ }\n+\n if (get(settingPrefix) != null) {\n if (commaDelimited) {\n String[] strings = Strings.splitStringByCommaToArray(get(settingPrefix));", "filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java", "status": "modified" }, { "diff": "@@ -125,14 +125,21 @@ public static Environment prepareEnvironment(Settings input, Terminal terminal,\n }\n \n /**\n- * Initializes the builder with the given input settings, and loads system properties settings if allowed.\n- * If loadDefaults is true, system property default settings are loaded.\n+ * Initializes the builder with the given input settings, and applies settings and default settings from the specified map (these\n+ * settings typically come from the command line). 
The default settings are applied only if the setting does not exist in the specified\n+ * output.\n+ *\n+ * @param output the settings builder to apply the input and default settings to\n+ * @param input the input settings\n+ * @param esSettings a map from which to apply settings and default settings\n */\n- private static void initializeSettings(Settings.Builder output, Settings input, Map<String, String> esSettings) {\n+ static void initializeSettings(final Settings.Builder output, final Settings input, final Map<String, String> esSettings) {\n output.put(input);\n output.putProperties(esSettings,\n- PROPERTY_DEFAULTS_PREDICATE.and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key)) == null),\n- STRIP_PROPERTY_DEFAULTS_PREFIX);\n+ PROPERTY_DEFAULTS_PREDICATE\n+ .and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key)) == null)\n+ .and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key) + \".0\") == null),\n+ STRIP_PROPERTY_DEFAULTS_PREFIX);\n output.putProperties(esSettings, PROPERTY_DEFAULTS_PREDICATE.negate(), Function.identity());\n output.replacePropertyPlaceholders();\n }", "filename": "core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java", "status": "modified" }, { "diff": "@@ -562,4 +562,16 @@ public void testSecureSettingConflict() {\n IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> setting.get(settings));\n assertTrue(e.getMessage().contains(\"must be stored inside the Elasticsearch keystore\"));\n }\n+\n+ public void testGetAsArrayFailsOnDuplicates() {\n+ final Settings settings =\n+ Settings.builder()\n+ .put(\"foobar.0\", \"bar\")\n+ .put(\"foobar.1\", \"baz\")\n+ .put(\"foobar\", \"foo\")\n+ .build();\n+ final IllegalStateException e = expectThrows(IllegalStateException.class, () -> settings.getAsArray(\"foobar\"));\n+ assertThat(e, hasToString(containsString(\"settings object contains values for [foobar=foo] and [foobar.0=bar]\")));\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.3.0\r\n\r\n**OS version**: Tried in Centos 7 and OS X, though I suspect it doesn't matter.\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen using multiple data paths, or even specifying just a single data path as an array in yaml, elasticearch includes whatever is passed to it as default.path.data, such as when installed via RPM or DEB packages, from the systemd unit OR the init.d script, since you set this when you execute the es binary:\r\n\r\n`-Edefault.path.data=${DATA_DIR}`\r\n\r\n**Steps to reproduce**:\r\nRepro:\r\n\r\nSet this in the elasticsearch.yml\r\n\r\n```\r\npath.data:\r\n- \"/some/datapath1\"\r\n```\r\n\r\nrun this:\r\n\r\n`bin/elasticsearch -Edefault.path.data=\"/some/datapath2\"`\r\n\r\nAnd elasticsearch will start configured with both data paths.\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n\r\n```\r\n[2017-04-07T13:30:49,987][INFO ][o.e.e.NodeEnvironment ] [jCWIeyS] using [2] data paths, mounts [[/ (/dev/disk1)]], net usable_space [198.8gb], net total_space [464.7gb], spins? [unknown], types [hfs]\r\n```\r\n\r\nThis also happens when you use this array format:\r\n\r\n`path.data: [\"/some/datapath1\"]`\r\n", "comments": [ { "body": "I know what is happening here and I see a fix, I will open a PR soon. This is an incredibly unfortunate consequence of the fact that when a setting is specified as an array, we break it down into new keys `path.data.0`, `path.data.1`, ..., `path.data.n`, yet the default setting is applied as `path.data` so none of the keys from the array setting override as is done with other default settings.\r\n", "created_at": "2017-04-07T20:08:05Z" }, { "body": "This is a *horrifically* bad bug, it appears to have been introduced in 5.3.0, and it means that shards can end up in `/var/lib/elasticsearch` even if a user did not intend for that to be the case because they configured `path.data` as an array but the packaging specifies `default.path.data`. If `/var/lib/elasticsearch` is sitting on the root partition because why not since the user wants the data elsewhere, the user can easily end up with:\r\n - shards not where they wanted them\r\n - the root partition can fill up\r\n\r\nIt gets worse. Let's say that we fix this bug in 5.3.1 and a user migrates from 5.3.0 not aware of the fiasco that is ensuing here. They start up Elasticsearch on 5.3.1, and now their shards are missing! If they're not careful, they are in data loss territory.", "created_at": "2017-04-07T20:36:59Z" }, { "body": "😭 ", "created_at": "2017-04-07T20:39:48Z" }, { "body": "> Let's say that we fix this bug in 5.3.1 and a user migrates from 5.3.0 not aware of the fiasco that is ensuing here. They start up Elasticsearch on 5.3.1, and now their shards are missing! If they're not careful, they are in data loss territory.\r\n\r\nIf we just fix the config issue and someone upgrades one node at a time they shouldn't lose any data so long as they have replicas. The entire shard will have to recover from the primary, but that is a fairly normal thing. They'd have to manually clean up the shards in accidental data directory. 
Which isn't horrible, but it isn't great at all.\r\n\r\nWe could do more to try and copy the data or make tool or something.\r\n\r\nMinimally anyone on 5.3 will want to be able to know whether they are effected by this issue before they upgrade so they know what they have to do as part of the upgrade.\r\n\r\n", "created_at": "2017-04-08T01:40:28Z" }, { "body": "> If we just fix the config issue and someone upgrades one node at a time they shouldn't lose any data so long as they have replicas.\r\n\r\nThat's the problem, we do not have complete control over this.\r\n\r\n> They'd have to manually clean up the shards in accidental data directory. Which isn't horrible, but it isn't great at all.\r\n\r\nYes, if they are aware of the problem and do not panic. Or they do not do something like practice immutable infrastructure.\r\n\r\n> We could do more to try and copy the data or make tool or something.\r\n\r\nYes, but it's quite delicate.\r\n\r\n> Minimally anyone on 5.3 will want to be able to know whether they are effected by this issue before they upgrade so they know what they have to do as part of the upgrade.\r\n\r\nIndeed, and therein lies the rub.", "created_at": "2017-04-08T01:44:33Z" }, { "body": "Yeah, I just wanted to start talking about this. I certainly don't claim to have answers. I guess I'm fairly sure that the person doing the upgrade is going to want to know if they are effected by this. Lots of people won't and 5.3.0 will just be a normal version for them. For the effected people, even if we handle the whole thing automatically, they are going to need to know.", "created_at": "2017-04-08T01:49:25Z" }, { "body": "> Yeah, I just wanted to start talking about this.\r\n\r\nIndeed, we need more people thinking about this. I discussed some of the complexities with @dakrone earlier in the evening via another channel, and I've been thinking about it all night. Sorry that I haven't taken the time to write my thoughts down yet.\r\n\r\nAnother complexity that I thought of is that the fix for this has to live for the rest of the 5.x series lifecycle because someone can upgrade from 5.3.0 to any future release in the 5.x series.\r\n\r\nAnd yet another complexity that I thought is a situation where someone has their `default.path.data` included as one of the values in `path.data` (assuming multiple `path.data`).\r\n\r\n> For the effected people, even if we handle the whole thing automatically, they are going to need to know.\r\n\r\nI completely agree.", "created_at": "2017-04-08T01:57:34Z" }, { "body": "Do we have set limitations yet on rolling upgrades to 6.0? Would 5.3.0 ->\n6.0 be allowed? If so, the fix has to carry over there as well.\n\nOn Fri, Apr 7, 2017, 8:58 PM Jason Tedor <notifications@github.com> wrote:\n\n> Yeah, I just wanted to start talking about this.\n>\n> Indeed, I need more people to talk to about this. 
I discussed some of the\n> complexities with @dakrone <https://github.com/dakrone> earlier in the\n> evening via another channel, and I've been thinking about it all night.\n> Sorry that I haven't taken the time to write my thoughts down yet.\n>\n> Another complexity that I thought of is that the fix for this has to live\n> for the rest of the 5.x series lifecycle because someone can upgrade from\n> 5.3.0 to any future release in the 5.x series.\n>\n> And yet another complexity that I thought is a situation where someone has\n> their default.path.data included as one of the values in path.data\n> (assuming multiple path.data).\n>\n> For the effected people, even if we handle the whole thing automatically,\n> they are going to need to know.\n>\n> I completely agree.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/23981#issuecomment-292687280>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AB1ZYHwbhJ9d1nXtq3sn8vTdcT_x87O0ks5rtumtgaJpZM4M3P2Y>\n> .\n>\n", "created_at": "2017-04-08T02:00:11Z" }, { "body": "No, this is absolutely not allowed. You can only rolling restart upgrade from the latest 5.x that is released when 6.0.0 goes GA.", "created_at": "2017-04-08T02:01:36Z" }, { "body": "> Do we have set limitations yet on rolling upgrades to 6.0? Would 5.3.0 ->\r\n> 6.0 be allowed? If so, the fix has to carry over there as well.\r\n\r\nWe'll make sure there is a 5.4 and so that doesn't have to happen.\r\n\r\n> And yet another complexity that I thought is a situation where someone has their default.path.data included as one of the values in path.data (assuming multiple path.data).\r\n\r\nSo any detection we have will produce a false positive for them, right?", "created_at": "2017-04-08T02:01:49Z" }, { "body": "In fact, full cluster shutdown major version upgrades are even MORE dangerous in this instance. Because all of the shard disappear at the same time. ", "created_at": "2017-04-08T02:01:58Z" }, { "body": "> So any detection we have will produce a false positive for them, right?\r\n\r\nI think it has to be smart enough to not produce a false positive for them.", "created_at": "2017-04-08T02:02:23Z" }, { "body": "Just because a 5.4 exists, doesn't mean people will be there when they upgrade.", "created_at": "2017-04-08T02:02:51Z" }, { "body": "> In fact, full cluster shutdown major version upgrades are even MORE dangerous in this instance. Because all of the shard disappear at the same time.\r\n\r\nIndeed, that means that we need a fix in 6.x as well, someone could full cluster restart upgrade from 5.3.0.", "created_at": "2017-04-08T02:03:01Z" }, { "body": "> Just because a 5.4 exists, doesn't mean people will be there when they upgrade.\r\n\r\nRight, sorry. They won't be able to do a rolling upgrade. Just a full cluster restart upgrade.", "created_at": "2017-04-08T02:04:12Z" }, { "body": "I think we should look into removing the default option here. It's too much magic IMO and we can give people a good error message in that case. We can even advice how to copy over stuff to the actual data directories OR do it ourself. We had code for this in 2.x that we can reuse when we moved away from rolling our own software raid.", "created_at": "2017-04-10T10:02:37Z" }, { "body": "> I think we should look into removing the default option here.\r\n\r\nToday this is used to support the packaging. 
I think that we can remove this if we instead ship the elasticsearch.yml with the distribution packaging to include explicit values path.data, path.logs, and path.conf. Sadly, I think that as a result of the situation here, we can not do this until 7.0.0.", "created_at": "2017-04-10T15:22:23Z" }, { "body": "> Sadly, I think that as a result of the situation here, we can not do this until 7.0.0\r\n\r\nthat is a critical issue, I think we have to find a way to either fix it reliably or we need to drop stuff and I am not hesitating to even do this in a minor if it means dataloss or anything along those lines.", "created_at": "2017-04-11T06:19:53Z" } ], "number": 23981, "title": "default.path.data included as a data path when path.data configured as yaml array" }
{ "body": "In Elasticsearch 5.3.0 a bug was introduced in the merging of default settings when the target setting existed as an array. This arose due to the fact that when a target setting is an array, the setting key is broken into key.0, key.1, ..., key.n, one for each element of the array. When settings are replaced by default.key, we are looking for the target key but not the target key.0. This leads to key, and key.0, ..., key.n being present in the constructed settings object. When this concerns path.data, we end up in a situation where path.data.0, ..., path.data.n are configured and path.data is configured too. Since our packaging sets default.path.data, users that configure multiple data paths vian an array and use the packaging are subject to having shards land in default.path.data when that is very likely not what they intended.\r\n\r\nThis commit is an attempt to rectify this situation. First, we fix the merging of default settings when the target setting exists an array. We have to hold on to default.path.data though so that we can detect its presence. For this, we elevate default.path.data to an actual setting and give it special treatment when merging settings.\r\n\r\nAfter we have done this, we take a lock on all configured data directories in path.data, and default.path.data too. We look for the presence of indices in default.path.data and if there are any, we fail the node. \r\n\r\nCloses #23981", "number": 24052, "review_comments": [ { "body": "Probably worth a comment for posterity. We want to keep this one to track down an issue caused by 5.3.0. That way come 7 we can remove it.", "created_at": "2017-04-11T20:51:26Z" }, { "body": "Can you add one of those `Version.current.lessThanOrEqualTo(Version.fromId(<whatever ID makes 7.0.0-alpha1>)` things so we can know we can drop this then?", "created_at": "2017-04-11T20:53:49Z" }, { "body": "I think this should be \r\n\r\n```\r\n\"do not include default.path.data [%s] in an array of path.data settings %s\",\r\n```", "created_at": "2017-04-11T21:20:29Z" }, { "body": "Can you be explicit with `this` in this case? We already shadow vars above in a large constructor and it means less code scrolling when it's explicit.", "created_at": "2017-04-11T21:22:58Z" }, { "body": "Annotate with `@Nullable`?", "created_at": "2017-04-11T21:23:11Z" }, { "body": "Why do we expect `default.path.data` to be unset with `path.data` is not set? Isn't that the point of it? Isn't this going to trip for every rpm and deb installation that doesn't touch `elasticsearch.yml`?", "created_at": "2017-04-11T21:27:01Z" }, { "body": "Annotate with `@Nullable`?", "created_at": "2017-04-11T21:27:50Z" }, { "body": "How do you feel about reversing the negative? I feel like `environment.defaultPathData() == null ? 0 : 1;` is easier to read since it's less like a double-negative", "created_at": "2017-04-11T21:29:48Z" }, { "body": "The shadowing here makes this much harder to follow, especially since this isn't a constructor argument (where we usually shadow class vars), would you be open to renaming it something to be explicit?", "created_at": "2017-04-11T21:31:29Z" }, { "body": "Checking the offset from within the loop seems like a warning sign to me, especially since we check it twice and have to add an assert just to tell if we are on the correct loop index. 
Can we factor the loop initialization logic into a separate method for the regular `environment.dataFiles()` and then have a separate initialization for `environment.defaultPathData()`?", "created_at": "2017-04-11T21:36:01Z" }, { "body": "Reverse the order of this `!= null` `if` statement?", "created_at": "2017-04-11T21:37:04Z" }, { "body": "I pushed b9e1a28250b5a2511f038ff5d192db61d7c44e68.", "created_at": "2017-04-11T21:44:42Z" }, { "body": "How about `if (availableIndexFolders.size() > 0) {` to avoid the double-negative?", "created_at": "2017-04-11T21:45:18Z" }, { "body": "Can you add a comment that we only expect `nodeEnv.defaultNodePath()` to be returned in the event that `path.data` was already configured *and* `default.path.data` is configured? Just the name of the function makes it seem like it'd be returned if the setting exists at all, when it's really set to `null` when it \"becomes\" path.data", "created_at": "2017-04-11T21:48:30Z" }, { "body": "Use `NodeEnvironment.NODE_LOCK_FILENAME` here?", "created_at": "2017-04-11T21:56:20Z" }, { "body": "And same here", "created_at": "2017-04-11T21:56:26Z" }, { "body": "Can you randomize between `.put(\"path.data.0\", zero).put(\"path.data.1\", one)` and `.putArray(\"path.data\", zero, one)`?", "created_at": "2017-04-11T21:58:23Z" }, { "body": "This seems like an overly complex way of doing\r\n\r\n```java\r\nfor (int i = 0; i < numberOfIndices; i++) {\r\n Files.createDirectories(defaultPathData.resolve(\"nodes/0/indices\").resolve(UUIDs.randomBase64UUID()));\r\n}\r\n```\r\n\r\nWhat do you think?", "created_at": "2017-04-11T22:01:30Z" }, { "body": "This line could be moved above the previous `indexExists` if statement and then the two `if (indexExists)` blocks could be combined", "created_at": "2017-04-11T22:02:47Z" }, { "body": "Same comment here about the streams when a normal `for` loop would work? I suppose here you actually save the indices, but it still seems simpler to just add them to a regular ArrayList", "created_at": "2017-04-11T22:05:19Z" }, { "body": "Might as well `spy(logger)` instead of creating a new mocked instance? At least then you'll see the logging message(s) in the test output", "created_at": "2017-04-11T22:08:55Z" }, { "body": "This is a case when *nothing* was set not even `default.path.data` (think of starting from an archive distribution). You see, if `path.data` is not set and `default.path.data` is, then `default.path.data` is copied into `path.data` and we are not in this branch of the code, instead it's covered by the branch above us when the condition data paths is not empty.", "created_at": "2017-04-11T22:37:06Z" }, { "body": "The output will show that it is an array already because we're using `Arrays.toString(dataFiles)` here (so we get the brackets, etc.).", "created_at": "2017-04-11T22:42:52Z" }, { "body": "I pushed 4f4762df108863bba8cd4515664c0ad6a524a561.", "created_at": "2017-04-11T22:43:20Z" }, { "body": "I pushed 8876ca4512df09e05561a7972140548c58406781.", "created_at": "2017-04-11T22:46:31Z" }, { "body": "I pushed dcf42c5657f1972a05123d1003b0478d4d310cc6.", "created_at": "2017-04-11T22:47:50Z" }, { "body": "Sorry, I'm struggling to understand this comment? There is nothing being returned here? 
Help?", "created_at": "2017-04-11T22:50:14Z" }, { "body": "Yes, I understand, what I meant is that it's entirely okay for `default.path.data=/var/lib/elasticsearch` and `path.data=/var/lib/elasticsearch`, it's *not* okay for `default.path.data=/var/lib/elasticsearch` and `path.data=/var/lib/elasticsearch,/mnt/data2` (where path.data is an array), I didn't want the message to imply that `path.data` can *never* include `default.path.data`", "created_at": "2017-04-11T22:51:21Z" }, { "body": "Okay, the setting overwriting (that we set `path.data` from the default and then nullify the `default.path.data` setting) makes it a little more complex to follow", "created_at": "2017-04-11T22:53:50Z" }, { "body": "I dislike checking size when we really only care about whether or not it's empty.\r\n\r\nI pushed 9503f6fbb77f905e9fb007136aaa0622baf48d6c though, what do you think?", "created_at": "2017-04-11T23:03:42Z" } ], "title": "Correct handling of default settings and path.data" }
{ "commits": [ { "message": "Correct handling of default settings and path.data\n\nIn Elasticsearch 5.3.0 a bug was introduced in the merging of default\nsettings when the target setting existed as an array. This arose due to\nthe fact that when a target setting is an array, the setting key is\nbroken into key.0, key.1, ..., key.n, one for each element of the\narray. When settings are replaced by default.key, we are looking for the\ntarget key but not the target key.0. This leads to key, and key.0, ...,\nkey.n being present in the constructed settings object. When this\nconcerns path.data, we end up in a situation where path.data.0, ...,\npath.data.n are configured and path.data is configured too. Since our\npackaging sets default.path.data, users that configure multiple data\npaths vian an array and use the packaging are subject to having shards\nland in default.path.data when that is very likely not what they\nintended.\n\nThis commit is an attempt to rectify this situation. First, we fix the\nmerging of default settings when the target setting exists an array. We\nhave to hold on to default.path.data though so that we can detect its\npresence. For this, we elevate default.path.data to an actual setting\nand give it special treatment when merging settings.\n\nAfter we have done this, we take a lock on all configured data\ndirectories in path.data, and default.path.data too. We look for the\npresence of indices in default.path.data and if there are any, we fail\nthe node." }, { "message": "Include full index path in log message" }, { "message": "Skip default path data check if no local storage" }, { "message": "Remove extraneous blank line" }, { "message": "Add comment explaing default.path.data hack" }, { "message": "Explicit this in Envrionment constructor" }, { "message": "Reverse condition" }, { "message": "Reverse another condition" }, { "message": "Reverse yet another condition" }, { "message": "Cleanup" }, { "message": "Simplify!" 
}, { "message": "Remove unneeded variable" }, { "message": "Use a constant" }, { "message": "Nullable, and a Javadoc" }, { "message": "Javadocs for availableIndexFoldersForPath" } ], "files": [ { "diff": "@@ -269,6 +269,9 @@ static void addFilePermissions(Permissions policy, Environment environment) {\n for (Path path : environment.dataFiles()) {\n addPath(policy, Environment.PATH_DATA_SETTING.getKey(), path, \"read,readlink,write,delete\");\n }\n+ if (environment.defaultPathData() != null) {\n+ addPath(policy, Environment.DEFAULT_PATH_DATA_SETTING.getKey(), environment.defaultPathData(), \"read,readlink,write,delete\");\n+ }\n for (Path path : environment.repoFiles()) {\n addPath(policy, Environment.PATH_REPO_SETTING.getKey(), path, \"read,readlink,write,delete\");\n }", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Security.java", "status": "modified" }, { "diff": "@@ -311,6 +311,7 @@ public void apply(Settings value, Settings current, Settings previous) {\n HunspellService.HUNSPELL_IGNORE_CASE,\n HunspellService.HUNSPELL_DICTIONARY_OPTIONS,\n IndicesStore.INDICES_STORE_DELETE_SHARD_TIMEOUT,\n+ Environment.DEFAULT_PATH_DATA_SETTING,\n Environment.PATH_CONF_SETTING,\n Environment.PATH_DATA_SETTING,\n Environment.PATH_HOME_SETTING,", "filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.env;\n \n import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Setting;\n@@ -34,10 +35,14 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Locale;\n import java.util.Objects;\n+import java.util.Set;\n import java.util.function.Function;\n+import java.util.stream.Collectors;\n \n import static org.elasticsearch.common.Strings.cleanPath;\n \n@@ -53,6 +58,7 @@ public class Environment {\n public static final Setting<String> PATH_SCRIPTS_SETTING = Setting.simpleString(\"path.scripts\", Property.NodeScope);\n public static final Setting<List<String>> PATH_DATA_SETTING =\n Setting.listSetting(\"path.data\", Collections.emptyList(), Function.identity(), Property.NodeScope);\n+ public static final Setting<String> DEFAULT_PATH_DATA_SETTING = Setting.simpleString(\"default.path.data\", Property.NodeScope);\n public static final Setting<String> PATH_LOGS_SETTING = Setting.simpleString(\"path.logs\", Property.NodeScope);\n public static final Setting<List<String>> PATH_REPO_SETTING =\n Setting.listSetting(\"path.repo\", Collections.emptyList(), Function.identity(), Property.NodeScope);\n@@ -63,6 +69,9 @@ public class Environment {\n \n private final Path[] dataFiles;\n \n+ @Nullable\n+ private final Path defaultPathData;\n+\n private final Path[] dataWithClusterFiles;\n \n private final Path[] repoFiles;\n@@ -138,9 +147,32 @@ public Environment(Settings settings) {\n dataFiles[i] = PathUtils.get(dataPaths.get(i));\n dataWithClusterFiles[i] = dataFiles[i].resolve(clusterName.value());\n }\n+ if (DEFAULT_PATH_DATA_SETTING.exists(settings)) {\n+ final String defaultPathDataValue = DEFAULT_PATH_DATA_SETTING.get(settings);\n+ final Set<Path> dataFilesSet = Arrays.stream(dataFiles).collect(Collectors.toSet());\n+ final Path defaultPathData = 
PathUtils.get(defaultPathDataValue);\n+ if (dataFilesSet.size() == 1 && dataFilesSet.contains(defaultPathData)) {\n+ // default path data was used to set path data\n+ this.defaultPathData = null;\n+ } else if (dataFilesSet.contains(defaultPathData)) {\n+ final String message = String.format(\n+ Locale.ROOT,\n+ \"do not include default.path.data [%s] in path.data %s\",\n+ defaultPathData,\n+ Arrays.toString(dataFiles));\n+ throw new IllegalStateException(message);\n+ } else {\n+ this.defaultPathData = defaultPathData;\n+ }\n+ } else {\n+ this.defaultPathData = null;\n+ }\n } else {\n dataFiles = new Path[]{homeFile.resolve(\"data\")};\n dataWithClusterFiles = new Path[]{homeFile.resolve(\"data\").resolve(clusterName.value())};\n+ assert !DEFAULT_PATH_DATA_SETTING.exists(settings)\n+ : \"expected default.path.data to be unset but was [\" + DEFAULT_PATH_DATA_SETTING.get(settings) + \"]\";\n+ this.defaultPathData = null;\n }\n if (PATH_SHARED_DATA_SETTING.exists(settings)) {\n sharedDataFile = PathUtils.get(cleanPath(PATH_SHARED_DATA_SETTING.get(settings)));\n@@ -194,6 +226,16 @@ public Path[] dataFiles() {\n return dataFiles;\n }\n \n+ /**\n+ * The default data path which is set only if default.path.data did not overwrite path.data.\n+ *\n+ * @return the default data path\n+ */\n+ @Nullable\n+ public Path defaultPathData() {\n+ return defaultPathData;\n+ }\n+\n /**\n * The shared data location\n */", "filename": "core/src/main/java/org/elasticsearch/env/Environment.java", "status": "modified" }, { "diff": "@@ -142,6 +142,7 @@ public String toString() {\n }\n \n private final NodePath[] nodePaths;\n+ private final NodePath defaultNodePath;\n private final Path sharedDataPath;\n private final Lock[] locks;\n \n@@ -179,6 +180,7 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n \n if (!DiscoveryNode.nodeRequiresLocalStorage(settings)) {\n nodePaths = null;\n+ defaultNodePath = null;\n sharedDataPath = null;\n locks = null;\n nodeLockId = -1;\n@@ -187,7 +189,10 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n return;\n }\n final NodePath[] nodePaths = new NodePath[environment.dataWithClusterFiles().length];\n- final Lock[] locks = new Lock[nodePaths.length];\n+ NodePath defaultNodePath = null;\n+ final int extra = environment.defaultPathData() == null ? 
0 : 1;\n+ final Lock[] locks = new Lock[nodePaths.length + extra];\n+\n boolean success = false;\n \n // trace logger to debug issues before the default node name is derived from the node id\n@@ -199,17 +204,27 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n IOException lastException = null;\n int maxLocalStorageNodes = MAX_LOCAL_STORAGE_NODES_SETTING.get(settings);\n for (int possibleLockId = 0; possibleLockId < maxLocalStorageNodes; possibleLockId++) {\n- for (int dirIndex = 0; dirIndex < environment.dataFiles().length; dirIndex++) {\n- Path dataDirWithClusterName = environment.dataWithClusterFiles()[dirIndex];\n- Path dataDir = environment.dataFiles()[dirIndex];\n+ for (int dirIndex = 0; dirIndex < environment.dataFiles().length + extra; dirIndex++) {\n+ final Path dataDir;\n+ if (dirIndex < environment.dataFiles().length) {\n+ dataDir = environment.dataFiles()[dirIndex];\n+ } else {\n+ dataDir = environment.defaultPathData();\n+ }\n Path dir = dataDir.resolve(NODES_FOLDER).resolve(Integer.toString(possibleLockId));\n Files.createDirectories(dir);\n \n try (Directory luceneDir = FSDirectory.open(dir, NativeFSLockFactory.INSTANCE)) {\n startupTraceLogger.trace(\"obtaining node lock on {} ...\", dir.toAbsolutePath());\n try {\n- locks[dirIndex] = luceneDir.obtainLock(NODE_LOCK_FILENAME);\n- nodePaths[dirIndex] = new NodePath(dir);\n+ if (dirIndex < environment.dataFiles().length) {\n+ locks[dirIndex] = luceneDir.obtainLock(NODE_LOCK_FILENAME);\n+ nodePaths[dirIndex] = new NodePath(dir);\n+ } else {\n+ assert dirIndex == environment.dataFiles().length;\n+ locks[dirIndex] = luceneDir.obtainLock(NODE_LOCK_FILENAME);\n+ defaultNodePath = new NodePath(dir);\n+ }\n nodeLockId = possibleLockId;\n } catch (LockObtainFailedException ex) {\n startupTraceLogger.trace(\"failed to obtain node lock on {}\", dir.toAbsolutePath());\n@@ -244,15 +259,19 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n maxLocalStorageNodes);\n throw new IllegalStateException(message, lastException);\n }\n+\n this.nodeMetaData = loadOrCreateNodeMetaData(settings, startupTraceLogger, nodePaths);\n this.logger = Loggers.getLogger(getClass(), Node.addNodeNameIfNeeded(settings, this.nodeMetaData.nodeId()));\n \n this.nodeLockId = nodeLockId;\n this.locks = locks;\n this.nodePaths = nodePaths;\n-\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"using node location [{}], local_lock_id [{}]\", nodePaths, nodeLockId);\n+ if (environment.defaultPathData() == null) {\n+ assert defaultNodePath == null;\n+ this.defaultNodePath = null;\n+ } else {\n+ assert defaultNodePath != null;\n+ this.defaultNodePath = defaultNodePath;\n }\n \n maybeLogPathDetails();\n@@ -724,6 +743,14 @@ public NodePath[] nodePaths() {\n return nodePaths;\n }\n \n+ public NodePath defaultNodePath() {\n+ assertEnvIsLocked();\n+ if (nodePaths == null || locks == null) {\n+ throw new IllegalStateException(\"node is not configured to store local location\");\n+ }\n+ return defaultNodePath;\n+ }\n+\n /**\n * Returns all index paths.\n */\n@@ -764,19 +791,36 @@ public Set<String> availableIndexFolders() throws IOException {\n assertEnvIsLocked();\n Set<String> indexFolders = new HashSet<>();\n for (NodePath nodePath : nodePaths) {\n- Path indicesLocation = nodePath.indicesPath;\n- if (Files.isDirectory(indicesLocation)) {\n- try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesLocation)) {\n- for (Path index : stream) {\n- if (Files.isDirectory(index)) {\n- 
indexFolders.add(index.getFileName().toString());\n- }\n+ indexFolders.addAll(availableIndexFoldersForPath(nodePath));\n+ }\n+ return indexFolders;\n+\n+ }\n+\n+ /**\n+ * Return all directory names in the nodes/{node.id}/indices directory for the given node path.\n+ *\n+ * @param nodePath the node path\n+ * @return all directories that could be indices for the given node path.\n+ * @throws IOException if an I/O exception occurs traversing the filesystem\n+ */\n+ public Set<String> availableIndexFoldersForPath(final NodePath nodePath) throws IOException {\n+ if (nodePaths == null || locks == null) {\n+ throw new IllegalStateException(\"node is not configured to store local location\");\n+ }\n+ assertEnvIsLocked();\n+ final Set<String> indexFolders = new HashSet<>();\n+ Path indicesLocation = nodePath.indicesPath;\n+ if (Files.isDirectory(indicesLocation)) {\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesLocation)) {\n+ for (Path index : stream) {\n+ if (Files.isDirectory(index)) {\n+ indexFolders.add(index.getFileName().toString());\n }\n }\n }\n }\n return indexFolders;\n-\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java", "status": "modified" }, { "diff": "@@ -19,6 +19,14 @@\n \n package org.elasticsearch.node;\n \n+import org.elasticsearch.cli.Terminal;\n+import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.settings.SettingsException;\n+import org.elasticsearch.env.Environment;\n+\n import java.io.IOException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n@@ -32,14 +40,6 @@\n import java.util.function.Predicate;\n import java.util.function.UnaryOperator;\n \n-import org.elasticsearch.cli.Terminal;\n-import org.elasticsearch.cluster.ClusterName;\n-import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.collect.Tuple;\n-import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.settings.SettingsException;\n-import org.elasticsearch.env.Environment;\n-\n import static org.elasticsearch.common.Strings.cleanPath;\n \n public class InternalSettingsPreparer {\n@@ -128,11 +128,22 @@ public static Environment prepareEnvironment(Settings input, Terminal terminal,\n * Initializes the builder with the given input settings, and loads system properties settings if allowed.\n * If loadDefaults is true, system property default settings are loaded.\n */\n- private static void initializeSettings(Settings.Builder output, Settings input, Map<String, String> esSettings) {\n+ static void initializeSettings(Settings.Builder output, Settings input, Map<String, String> esSettings) {\n output.put(input);\n output.putProperties(esSettings,\n- PROPERTY_DEFAULTS_PREDICATE.and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key)) == null),\n+ PROPERTY_DEFAULTS_PREDICATE\n+ .and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key)) == null)\n+ .and(key -> output.get(STRIP_PROPERTY_DEFAULTS_PREFIX.apply(key) + \".0\") == null),\n STRIP_PROPERTY_DEFAULTS_PREFIX);\n+ /*\n+ * We have to treat default.path.data separately due to a bug in Elasticsearch 5.3.0 where if multiple path.data were specified as\n+ * an array and default.path.data was configured then the settings were not properly merged. 
We need to preserve default.path.data\n+ * so that we can detect this situation.\n+ */\n+ final String key = Environment.DEFAULT_PATH_DATA_SETTING.getKey();\n+ if (esSettings.containsKey(key)) {\n+ output.put(Environment.DEFAULT_PATH_DATA_SETTING.getKey(), esSettings.get(key));\n+ }\n output.putProperties(esSettings, PROPERTY_DEFAULTS_PREDICATE.negate(), Function.identity());\n output.replacePropertyPlaceholders();\n }", "filename": "core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java", "status": "modified" }, { "diff": "@@ -146,7 +146,9 @@\n import java.util.Collection;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Locale;\n import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n import java.util.function.Consumer;\n@@ -262,6 +264,9 @@ protected Node(final Environment environment, Collection<Class<? extends Plugin>\n Logger logger = Loggers.getLogger(Node.class, tmpSettings);\n final String nodeId = nodeEnvironment.nodeId();\n tmpSettings = addNodeNameIfNeeded(tmpSettings, nodeId);\n+ if (DiscoveryNode.nodeRequiresLocalStorage(tmpSettings)) {\n+ checkForIndexDataInDefaultPathData(nodeEnvironment, logger);\n+ }\n // this must be captured after the node name is possibly added to the settings\n final String nodeName = NODE_NAME_SETTING.get(tmpSettings);\n if (hadPredefinedNodeName == false) {\n@@ -500,6 +505,31 @@ protected Node(final Environment environment, Collection<Class<? extends Plugin>\n }\n }\n \n+ static void checkForIndexDataInDefaultPathData(final NodeEnvironment nodeEnv, final Logger logger) throws IOException {\n+ if (nodeEnv.defaultNodePath() == null) {\n+ return;\n+ }\n+\n+ final Set<String> availableIndexFolders = nodeEnv.availableIndexFoldersForPath(nodeEnv.defaultNodePath());\n+ if (availableIndexFolders.isEmpty()) {\n+ return;\n+ }\n+\n+ final String message = String.format(\n+ Locale.ROOT,\n+ \"detected index data in default.path.data [%s] where there should not be any\",\n+ nodeEnv.defaultNodePath().indicesPath);\n+ logger.error(message);\n+ for (final String availableIndexFolder : availableIndexFolders) {\n+ logger.info(\n+ \"index folder [{}] in default.path.data [{}] must be moved to any of {}\",\n+ availableIndexFolder,\n+ nodeEnv.defaultNodePath().indicesPath,\n+ Arrays.stream(nodeEnv.nodePaths()).map(np -> np.indicesPath).collect(Collectors.toList()));\n+ }\n+ throw new IllegalStateException(message);\n+ }\n+\n // visible for testing\n static void warnIfPreRelease(final Version version, final boolean isSnapshot, final Logger logger) {\n if (!version.isRelease() || isSnapshot) {", "filename": "core/src/main/java/org/elasticsearch/node/Node.java", "status": "modified" }, { "diff": "@@ -18,15 +18,24 @@\n */\n package org.elasticsearch.env;\n \n+import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n import java.net.URL;\n+import java.nio.file.Path;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.Set;\n \n+import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.CoreMatchers.endsWith;\n+import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.CoreMatchers.notNullValue;\n import static org.hamcrest.CoreMatchers.nullValue;\n+import static org.hamcrest.Matchers.hasToString;\n \n /**\n * Simple unit-tests for 
Environment.java\n@@ -71,4 +80,53 @@ public void testRepositoryResolution() throws IOException {\n assertThat(environment.resolveRepoURL(new URL(\"jar:http://localhost/test/../repo1?blah!/repo/\")), nullValue());\n }\n \n+ public void testDefaultPathDataSet() {\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", \"/home\")\n+ .put(\"path.data.0\", \"/mnt/zero\")\n+ .put(\"path.data.1\", \"/mnt/one\")\n+ .put(\"default.path.data\", \"/mnt/default\")\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ final Set<Path> dataFiles = new HashSet<>(Arrays.asList(environment.dataFiles()));\n+ assertThat(dataFiles, equalTo(new HashSet<>(Arrays.asList(PathUtils.get(\"/mnt/zero\"), PathUtils.get(\"/mnt/one\")))));\n+ assertThat(environment.defaultPathData(), equalTo(PathUtils.get(\"/mnt/default\")));\n+ }\n+\n+ public void testDefaultPathDataDoesNotSet() {\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", \"/home\")\n+ .put(\"path.data.0\", \"/mnt/zero\")\n+ .put(\"path.data.1\", \"/mnt/one\")\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ final Set<Path> actual = new HashSet<>(Arrays.asList(environment.dataFiles()));\n+ final HashSet<Path> expected = new HashSet<>(Arrays.asList(PathUtils.get(\"/mnt/zero\"), PathUtils.get(\"/mnt/one\")));\n+ assertThat(actual, equalTo(expected));\n+ assertNull(environment.defaultPathData());\n+ }\n+\n+ public void testPathDataNotSet() {\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", \"/home\")\n+ .build();\n+ final Environment environment = new Environment(settings);\n+ final Set<Path> actual = new HashSet<>(Arrays.asList(environment.dataFiles()));\n+ final HashSet<Path> expected = new HashSet<>(Collections.singletonList(PathUtils.get(\"/home/data\")));\n+ assertThat(actual, equalTo(expected));\n+ assertNull(environment.defaultPathData());\n+ }\n+\n+ public void testPathDataContainsDefaultPathData() {\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", \"/home\")\n+ .put(\"path.data.0\", \"/mnt/zero\")\n+ .put(\"path.data.1\", \"/mnt/one\")\n+ .put(\"path.data.2\", \"/mnt/default\")\n+ .put(\"default.path.data\", \"/mnt/default\")\n+ .build();\n+ final IllegalStateException e = expectThrows(IllegalStateException.class, () -> new Environment(settings));\n+ assertThat(e, hasToString(containsString(\"do not include default.path.data [/mnt/default] in path.data\")));\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/env/EnvironmentTests.java", "status": "modified" }, { "diff": "@@ -37,6 +37,7 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n@@ -45,6 +46,7 @@\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicReference;\n+import java.util.stream.Collectors;\n \n import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.Matchers.arrayWithSize;\n@@ -425,6 +427,41 @@ private Path[] stringsToPaths(String[] strings, String additional) {\n return locations;\n }\n \n+ public void testDefaultPathData() throws IOException {\n+ final Path zero = createTempDir().toAbsolutePath();\n+ final Path one = createTempDir().toAbsolutePath();\n+\n+ final Settings.Builder builder = Settings.builder()\n+ .put(\"path.home\", \"/home\")\n+ .put(\"path.data.0\", zero)\n+ 
.put(\"path.data.1\", one);\n+ final boolean defaultPathDataSet = randomBoolean();\n+ final Path defaultPathData;\n+ if (defaultPathDataSet) {\n+ defaultPathData = createTempDir().toAbsolutePath();\n+ builder.put(\"default.path.data\", defaultPathData);\n+ } else {\n+ defaultPathData = null;\n+ }\n+ try (NodeEnvironment nodeEnv = newNodeEnvironment(builder.build())) {\n+ final Set<Path> actual = Arrays.stream(nodeEnv.nodePaths()).map(np -> np.path).collect(Collectors.toSet());\n+ final Set<Path> expected = new HashSet<>(Arrays.asList(zero.resolve(\"nodes/0\"), one.resolve(\"nodes/0\")));\n+ assertThat(actual, equalTo(expected));\n+\n+ if (defaultPathDataSet) {\n+ assertThat(nodeEnv.defaultNodePath().path, equalTo(defaultPathData.resolve(\"nodes/0\")));\n+ }\n+\n+ for (final NodeEnvironment.NodePath nodePath : nodeEnv.nodePaths()) {\n+ assertTrue(Files.exists(nodePath.path.resolve(NodeEnvironment.NODE_LOCK_FILENAME)));\n+ }\n+\n+ if (defaultPathDataSet) {\n+ assertTrue(Files.exists(nodeEnv.defaultNodePath().path.resolve(NodeEnvironment.NODE_LOCK_FILENAME)));\n+ }\n+ }\n+ }\n+\n @Override\n public String[] tmpPaths() {\n final int numPaths = randomIntBetween(1, 3);", "filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java", "status": "modified" }, { "diff": "@@ -22,23 +22,32 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.bootstrap.BootstrapCheck;\n import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.BoundTransportAddress;\n import org.elasticsearch.env.Environment;\n+import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.transport.MockTcpTransportPlugin;\n \n import java.io.IOException;\n+import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Locale;\n import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.function.Supplier;\n+import java.util.stream.Collectors;\n+import java.util.stream.IntStream;\n \n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasToString;\n import static org.mockito.Mockito.mock;\n import static org.mockito.Mockito.reset;\n import static org.mockito.Mockito.verify;\n@@ -165,6 +174,103 @@ public void testNodeAttributes() throws IOException {\n }\n }\n \n+ public void testNodeConstructionWithDefaultPathDataSet() throws IOException {\n+ final Path home = createTempDir().toAbsolutePath();\n+ final Path zero = createTempDir().toAbsolutePath();\n+ final Path one = createTempDir().toAbsolutePath();\n+ final Path defaultPathData = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", home)\n+ .put(\"path.data.0\", zero)\n+ .put(\"path.data.1\", one)\n+ .put(\"default.path.data\", defaultPathData)\n+ .put(\"http.enabled\", false)\n+ .put(\"transport.type\", \"mock-socket-network\")\n+ .build();\n+ Files.createDirectories(defaultPathData.resolve(\"nodes/0\"));\n+ final boolean indexExists = randomBoolean();\n+ if (indexExists) {\n+ for (int i = 0; i < randomIntBetween(1, 3); i++) {\n+ 
Files.createDirectories(defaultPathData.resolve(\"nodes/0/indices\").resolve(UUIDs.randomBase64UUID()));\n+ }\n+ }\n+ final Supplier<MockNode> constructor = () -> new MockNode(settings, Collections.singletonList(MockTcpTransportPlugin.class));\n+ if (indexExists) {\n+ final IllegalStateException e = expectThrows(IllegalStateException.class, constructor::get);\n+ final String message = String.format(\n+ Locale.ROOT,\n+ \"detected index data in default.path.data [%s] where there should not be any\",\n+ defaultPathData.resolve(\"nodes/0/indices\"));\n+ assertThat(e, hasToString(containsString(message)));\n+ } else {\n+ try (Node ignored = constructor.get()) {\n+ // node construction should be okay\n+ }\n+ }\n+ }\n+\n+ public void testDefaultPathDataSet() throws IOException {\n+ final Path zero = createTempDir().toAbsolutePath();\n+ final Path one = createTempDir().toAbsolutePath();\n+ final Path defaultPathData = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", \"/home\")\n+ .put(\"path.data.0\", zero)\n+ .put(\"path.data.1\", one)\n+ .put(\"default.path.data\", defaultPathData)\n+ .build();\n+ try (NodeEnvironment nodeEnv = new NodeEnvironment(settings, new Environment(settings))) {\n+ final boolean indexExists = randomBoolean();\n+ final List<String> indices;\n+ if (indexExists) {\n+ indices = IntStream.range(0, randomIntBetween(1, 3)).mapToObj(i -> UUIDs.randomBase64UUID()).collect(Collectors.toList());\n+ for (final String index : indices) {\n+ Files.createDirectories(nodeEnv.defaultNodePath().indicesPath.resolve(index));\n+ }\n+ } else {\n+ indices = Collections.emptyList();\n+ }\n+ final Logger mock = mock(Logger.class);\n+ if (indexExists) {\n+ final IllegalStateException e = expectThrows(\n+ IllegalStateException.class,\n+ () -> Node.checkForIndexDataInDefaultPathData(nodeEnv, mock));\n+ final String message = String.format(\n+ Locale.ROOT,\n+ \"detected index data in default.path.data [%s] where there should not be any\",\n+ defaultPathData.resolve(\"nodes/0/indices\"));\n+ assertThat(e, hasToString(containsString(message)));\n+ verify(mock).error(message);\n+ for (final String index : indices) {\n+ verify(mock).info(\n+ \"index folder [{}] in default.path.data [{}] must be moved to any of {}\",\n+ index,\n+ nodeEnv.defaultNodePath().indicesPath,\n+ Arrays.stream(nodeEnv.nodePaths()).map(np -> np.indicesPath).collect(Collectors.toList()));\n+ }\n+ verifyNoMoreInteractions(mock);\n+ } else {\n+ Node.checkForIndexDataInDefaultPathData(nodeEnv, mock);\n+ verifyNoMoreInteractions(mock);\n+ }\n+ }\n+ }\n+\n+ public void testDefaultPathDataNotSet() throws IOException {\n+ final Path zero = createTempDir().toAbsolutePath();\n+ final Path one = createTempDir().toAbsolutePath();\n+ final Settings settings = Settings.builder()\n+ .put(\"path.home\", \"/home\")\n+ .put(\"path.data.0\", zero)\n+ .put(\"path.data.1\", one)\n+ .build();\n+ try (NodeEnvironment nodeEnv = new NodeEnvironment(settings, new Environment(settings))) {\n+ final Logger mock = mock(Logger.class);\n+ Node.checkForIndexDataInDefaultPathData(nodeEnv, mock);\n+ verifyNoMoreInteractions(mock);\n+ }\n+ }\n+\n private static Settings.Builder baseSettings() {\n final Path tempDir = createTempDir();\n return Settings.builder()", "filename": "test/framework/src/main/java/org/elasticsearch/node/NodeTests.java", "status": "modified" } ] }
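The `Node.checkForIndexDataInDefaultPathData` change in the record above boils down to a directory scan: if the (now unused) default data path still contains index folders under `nodes/<lock id>/indices` (lock id 0 in the common case), the node refuses to start so shards cannot silently go missing after the settings fix. Below is a minimal standalone sketch of that scan, assuming the on-disk layout shown in the diff; the real implementation goes through `NodeEnvironment` and its directory locks.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the startup check; names and layout assumptions are for illustration.
public class DefaultPathDataCheckSketch {

    static void failIfIndicesPresent(Path defaultPathData) throws IOException {
        Path indicesPath = defaultPathData.resolve("nodes").resolve("0").resolve("indices");
        if (Files.isDirectory(indicesPath) == false) {
            return; // nothing was ever written there, node is unaffected
        }
        List<String> indexFolders = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesPath)) {
            for (Path index : stream) {
                if (Files.isDirectory(index)) {
                    indexFolders.add(index.getFileName().toString());
                }
            }
        }
        if (indexFolders.isEmpty() == false) {
            // fail hard: these folders must be moved to one of the configured path.data locations
            throw new IllegalStateException(
                    "detected index data in default.path.data [" + indicesPath + "] where there should not be any: " + indexFolders);
        }
    }

    public static void main(String[] args) throws IOException {
        failIfIndicesPresent(Paths.get("/var/lib/elasticsearch"));
    }
}
```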
{ "body": "PR for #24009 \r\n\r\nBefore this change if `NestedChildrenQuery` were to be cached it could lead to memory leak, because this query keeps a reference to the IndexReader. The chance that it would be cached is low, because this query is different for each search request and search hit it is trying to fetch inner hits for.", "comments": [ { "body": "Should it be merged to 5.3.1? cc @clintongormley ", "created_at": "2017-04-10T12:43:38Z" }, { "body": "`ParentChildrenBlockJoinQuery` only exists in Lucene 6.5 but it should be easy to copy/paste if we wanted that change to be in Elasticsearch 5.3?", "created_at": "2017-04-10T14:34:56Z" }, { "body": "> ParentChildrenBlockJoinQuery only exists in Lucene 6.5 but it should be easy to copy/paste if we wanted that change to be in Elasticsearch 5.3?\r\n\r\nIf we are ok with backporting this to 5.3 branch, then I'll make it in a different pr.", "created_at": "2017-04-10T15:12:39Z" }, { "body": "@martijnvg Can you update the description of this PR so that it better explains the bug it fixes?\r\n\r\n> If we are ok with backporting this to 5.3 branch, then I'll make it in a different pr.\r\n\r\n+1", "created_at": "2017-04-10T15:42:52Z" }, { "body": "@jpountz updated the description.", "created_at": "2017-04-10T15:49:28Z" } ], "number": 24016, "title": "Replace NestedChildrenQuery with ParentChildrenBlockJoinQuery" }
{ "body": "Backport of #24016\r\n", "number": 24039, "review_comments": [], "title": "Replace `NestedChildrenQuery` with `ParentChildrenBlockJoinQuery`" }
{ "commits": [ { "message": "inner_hits: Replace `NestedChildrenQuery` with `ParentChildrenBlockJoinQuery`.\n\nCloses #24009" } ], "files": [ { "diff": "@@ -0,0 +1,210 @@\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+package org.apache.lucene.search.join;\n+\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.ReaderUtil;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.search.Weight;\n+import org.apache.lucene.util.BitSet;\n+\n+import java.io.IOException;\n+import java.util.Set;\n+\n+/**\n+ * A query that returns all the matching child documents for a specific parent document\n+ * indexed together in the same block. The provided child query determines which matching\n+ * child doc is being returned.\n+ *\n+ * @lucene.experimental\n+ */\n+// FORKED: backported from lucene 6.5 to ES, because lucene 6.4 doesn't have this query\n+public class ParentChildrenBlockJoinQuery extends Query {\n+\n+ private final BitSetProducer parentFilter;\n+ private final Query childQuery;\n+ private final int parentDocId;\n+\n+ /**\n+ * Creates a <code>ParentChildrenBlockJoinQuery</code> instance\n+ *\n+ * @param parentFilter A filter identifying parent documents.\n+ * @param childQuery A child query that determines which child docs are matching\n+ * @param parentDocId The top level doc id of that parent to return children documents for\n+ */\n+ public ParentChildrenBlockJoinQuery(BitSetProducer parentFilter, Query childQuery, int parentDocId) {\n+ this.parentFilter = parentFilter;\n+ this.childQuery = childQuery;\n+ this.parentDocId = parentDocId;\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (sameClassAs(obj) == false) {\n+ return false;\n+ }\n+ ParentChildrenBlockJoinQuery other = (ParentChildrenBlockJoinQuery) obj;\n+ return parentFilter.equals(other.parentFilter)\n+ && childQuery.equals(other.childQuery)\n+ && parentDocId == other.parentDocId;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ int hash = classHash();\n+ hash = 31 * hash + parentFilter.hashCode();\n+ hash = 31 * hash + childQuery.hashCode();\n+ hash = 31 * hash + parentDocId;\n+ return hash;\n+ }\n+\n+ @Override\n+ public String toString(String field) {\n+ return \"ParentChildrenBlockJoinQuery (\" + childQuery + \")\";\n+ }\n+\n+ @Override\n+ public Query rewrite(IndexReader reader) throws IOException {\n+ final Query childRewrite = childQuery.rewrite(reader);\n+ if (childRewrite != childQuery) {\n+ return new 
ParentChildrenBlockJoinQuery(parentFilter, childRewrite, parentDocId);\n+ } else {\n+ return super.rewrite(reader);\n+ }\n+ }\n+\n+ @Override\n+ public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException {\n+ final Weight childWeight = childQuery.createWeight(searcher, needsScores);\n+ final int readerIndex = ReaderUtil.subIndex(parentDocId, searcher.getIndexReader().leaves());\n+ return new Weight(this) {\n+\n+ @Override\n+ public void extractTerms(Set<Term> terms) {\n+ childWeight.extractTerms(terms);\n+ }\n+\n+ @Override\n+ public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n+ return Explanation.noMatch(\"Not implemented, use ToParentBlockJoinQuery explain why a document matched\");\n+ }\n+\n+ @Override\n+ public float getValueForNormalization() throws IOException {\n+ return childWeight.getValueForNormalization();\n+ }\n+\n+ @Override\n+ public void normalize(float norm, float boost) {\n+ childWeight.normalize(norm, boost);\n+ }\n+\n+ @Override\n+ public Scorer scorer(LeafReaderContext context) throws IOException {\n+ // Childs docs only reside in a single segment, so no need to evaluate all segments\n+ if (context.ord != readerIndex) {\n+ return null;\n+ }\n+\n+ final int localParentDocId = parentDocId - context.docBase;\n+ // If parentDocId == 0 then a parent doc doesn't have child docs, because child docs are stored\n+ // before the parent doc and because parent doc is 0 we can safely assume that there are no child docs.\n+ if (localParentDocId == 0) {\n+ return null;\n+ }\n+\n+ final BitSet parents = parentFilter.getBitSet(context);\n+ final int firstChildDocId = parents.prevSetBit(localParentDocId - 1) + 1;\n+ // A parent doc doesn't have child docs, so we can early exit here:\n+ if (firstChildDocId == localParentDocId) {\n+ return null;\n+ }\n+\n+ final Scorer childrenScorer = childWeight.scorer(context);\n+ if (childrenScorer == null) {\n+ return null;\n+ }\n+ DocIdSetIterator childrenIterator = childrenScorer.iterator();\n+ final DocIdSetIterator it = new DocIdSetIterator() {\n+\n+ int doc = -1;\n+\n+ @Override\n+ public int docID() {\n+ return doc;\n+ }\n+\n+ @Override\n+ public int nextDoc() throws IOException {\n+ return advance(doc + 1);\n+ }\n+\n+ @Override\n+ public int advance(int target) throws IOException {\n+ target = Math.max(firstChildDocId, target);\n+ if (target >= localParentDocId) {\n+ // We're outside the child nested scope, so it is done\n+ return doc = NO_MORE_DOCS;\n+ } else {\n+ int advanced = childrenIterator.advance(target);\n+ if (advanced >= localParentDocId) {\n+ // We're outside the child nested scope, so it is done\n+ return doc = NO_MORE_DOCS;\n+ } else {\n+ return doc = advanced;\n+ }\n+ }\n+ }\n+\n+ @Override\n+ public long cost() {\n+ return Math.min(childrenIterator.cost(), localParentDocId - firstChildDocId);\n+ }\n+\n+ };\n+ return new Scorer(this) {\n+ @Override\n+ public int docID() {\n+ return it.docID();\n+ }\n+\n+ @Override\n+ public float score() throws IOException {\n+ return childrenScorer.score();\n+ }\n+\n+ @Override\n+ public int freq() throws IOException {\n+ return childrenScorer.freq();\n+ }\n+\n+ @Override\n+ public DocIdSetIterator iterator() {\n+ return it;\n+ }\n+ };\n+ }\n+ };\n+ }\n+}", "filename": "core/src/main/java/org/apache/lucene/search/join/ParentChildrenBlockJoinQuery.java", "status": "added" }, { "diff": "@@ -19,26 +19,18 @@\n \n package org.elasticsearch.search.fetch.subphase;\n \n-import org.apache.lucene.index.LeafReader;\n-import 
org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n-import org.apache.lucene.search.ConstantScoreScorer;\n-import org.apache.lucene.search.ConstantScoreWeight;\n-import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.DocValuesTermsQuery;\n-import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.TopDocs;\n import org.apache.lucene.search.TopDocsCollector;\n import org.apache.lucene.search.TopFieldCollector;\n import org.apache.lucene.search.TopScoreDocCollector;\n-import org.apache.lucene.search.Weight;\n import org.apache.lucene.search.join.BitSetProducer;\n-import org.apache.lucene.util.BitSet;\n+import org.apache.lucene.search.join.ParentChildrenBlockJoinQuery;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -48,6 +40,7 @@\n import org.elasticsearch.index.mapper.ParentFieldMapper;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.UidFieldMapper;\n+import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.SearchHitField;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n import org.elasticsearch.search.internal.InternalSearchHit;\n@@ -133,7 +126,8 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n BitSetProducer parentFilter = context.bitsetFilterCache().getBitSetProducer(rawParentFilter);\n Query childFilter = childObjectMapper.nestedTypeFilter();\n- Query q = Queries.filtered(query(), new NestedChildrenQuery(parentFilter, childFilter, hitContext));\n+ int parentDocId = hitContext.readerContext().docBase + hitContext.docId();\n+ Query q = Queries.filtered(query(), new ParentChildrenBlockJoinQuery(parentFilter, childFilter, parentDocId));\n \n if (size() == 0) {\n return new TopDocs(context.searcher().count(q), Lucene.EMPTY_SCORE_DOCS, 0);\n@@ -158,120 +152,6 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n }\n \n- // A filter that only emits the nested children docs of a specific nested parent doc\n- static class NestedChildrenQuery extends Query {\n-\n- private final BitSetProducer parentFilter;\n- private final Query childFilter;\n- private final int docId;\n- private final LeafReader leafReader;\n-\n- NestedChildrenQuery(BitSetProducer parentFilter, Query childFilter, FetchSubPhase.HitContext hitContext) {\n- this.parentFilter = parentFilter;\n- this.childFilter = childFilter;\n- this.docId = hitContext.docId();\n- this.leafReader = hitContext.readerContext().reader();\n- }\n-\n- @Override\n- public boolean equals(Object obj) {\n- if (sameClassAs(obj) == false) {\n- return false;\n- }\n- NestedChildrenQuery other = (NestedChildrenQuery) obj;\n- return parentFilter.equals(other.parentFilter)\n- && childFilter.equals(other.childFilter)\n- && docId == other.docId\n- && leafReader.getCoreCacheKey() == other.leafReader.getCoreCacheKey();\n- }\n-\n- @Override\n- public int hashCode() {\n- int hash = classHash();\n- hash = 31 * hash + parentFilter.hashCode();\n- hash = 31 * hash + childFilter.hashCode();\n- hash = 31 * hash + docId;\n- hash = 31 * hash + leafReader.getCoreCacheKey().hashCode();\n- return hash;\n- }\n-\n- @Override\n- 
public String toString(String field) {\n- return \"NestedChildren(parent=\" + parentFilter + \",child=\" + childFilter + \")\";\n- }\n-\n- @Override\n- public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException {\n- final Weight childWeight = childFilter.createWeight(searcher, false);\n- return new ConstantScoreWeight(this) {\n- @Override\n- public Scorer scorer(LeafReaderContext context) throws IOException {\n- // Nested docs only reside in a single segment, so no need to evaluate all segments\n- if (!context.reader().getCoreCacheKey().equals(leafReader.getCoreCacheKey())) {\n- return null;\n- }\n-\n- // If docId == 0 then we a parent doc doesn't have child docs, because child docs are stored\n- // before the parent doc and because parent doc is 0 we can safely assume that there are no child docs.\n- if (docId == 0) {\n- return null;\n- }\n-\n- final BitSet parents = parentFilter.getBitSet(context);\n- final int firstChildDocId = parents.prevSetBit(docId - 1) + 1;\n- // A parent doc doesn't have child docs, so we can early exit here:\n- if (firstChildDocId == docId) {\n- return null;\n- }\n-\n- final Scorer childrenScorer = childWeight.scorer(context);\n- if (childrenScorer == null) {\n- return null;\n- }\n- DocIdSetIterator childrenIterator = childrenScorer.iterator();\n- final DocIdSetIterator it = new DocIdSetIterator() {\n-\n- int doc = -1;\n-\n- @Override\n- public int docID() {\n- return doc;\n- }\n-\n- @Override\n- public int nextDoc() throws IOException {\n- return advance(doc + 1);\n- }\n-\n- @Override\n- public int advance(int target) throws IOException {\n- target = Math.max(firstChildDocId, target);\n- if (target >= docId) {\n- // We're outside the child nested scope, so it is done\n- return doc = NO_MORE_DOCS;\n- } else {\n- int advanced = childrenIterator.advance(target);\n- if (advanced >= docId) {\n- // We're outside the child nested scope, so it is done\n- return doc = NO_MORE_DOCS;\n- } else {\n- return doc = advanced;\n- }\n- }\n- }\n-\n- @Override\n- public long cost() {\n- return Math.min(childrenIterator.cost(), docId - firstChildDocId);\n- }\n-\n- };\n- return new ConstantScoreScorer(this, score(), it);\n- }\n- };\n- }\n- }\n-\n }\n \n public static final class ParentChildInnerHits extends BaseInnerHits {", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java", "status": "modified" } ] }
{ "body": "This can cause memory leaks if that query gets cached. Unfortunately we only have access to the right APIs to fix it once we upgrade to Lucene 6.5.1.", "comments": [ { "body": "Maybe we should use `ParentChildrenBlockJoinQuery` (was added in lucene 6.5) and remove `NestedChildrenQuery`?", "created_at": "2017-04-10T09:45:47Z" }, { "body": "+1!", "created_at": "2017-04-10T14:34:05Z" } ], "number": 24009, "title": "NestedChildrenQuery references an IndexReader" }
{ "body": "PR for #24009 \r\n\r\nBefore this change if `NestedChildrenQuery` were to be cached it could lead to memory leak, because this query keeps a reference to the IndexReader. The chance that it would be cached is low, because this query is different for each search request and search hit it is trying to fetch inner hits for.", "number": 24016, "review_comments": [], "title": "Replace NestedChildrenQuery with ParentChildrenBlockJoinQuery" }
{ "commits": [ { "message": "inner_hits: Replace `NestedChildrenQuery` with `ParentChildrenBlockJoinQuery`.\n\nCloses #24009" } ], "files": [ { "diff": "@@ -19,26 +19,18 @@\n \n package org.elasticsearch.search.fetch.subphase;\n \n-import org.apache.lucene.index.LeafReader;\n-import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n-import org.apache.lucene.search.ConstantScoreScorer;\n-import org.apache.lucene.search.ConstantScoreWeight;\n-import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.DocValuesTermsQuery;\n-import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.TopDocs;\n import org.apache.lucene.search.TopDocsCollector;\n import org.apache.lucene.search.TopFieldCollector;\n import org.apache.lucene.search.TopScoreDocCollector;\n-import org.apache.lucene.search.Weight;\n import org.apache.lucene.search.join.BitSetProducer;\n-import org.apache.lucene.util.BitSet;\n+import org.apache.lucene.search.join.ParentChildrenBlockJoinQuery;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -48,9 +40,9 @@\n import org.elasticsearch.index.mapper.ParentFieldMapper;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.UidFieldMapper;\n+import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.SearchHitField;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n-import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.internal.SubSearchContext;\n \n@@ -131,7 +123,8 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n BitSetProducer parentFilter = context.bitsetFilterCache().getBitSetProducer(rawParentFilter);\n Query childFilter = childObjectMapper.nestedTypeFilter();\n- Query q = Queries.filtered(query(), new NestedChildrenQuery(parentFilter, childFilter, hitContext));\n+ int parentDocId = hitContext.readerContext().docBase + hitContext.docId();\n+ Query q = Queries.filtered(query(), new ParentChildrenBlockJoinQuery(parentFilter, childFilter, parentDocId));\n \n if (size() == 0) {\n return new TopDocs(context.searcher().count(q), Lucene.EMPTY_SCORE_DOCS, 0);\n@@ -156,120 +149,6 @@ public TopDocs topDocs(SearchContext context, FetchSubPhase.HitContext hitContex\n }\n }\n \n- // A filter that only emits the nested children docs of a specific nested parent doc\n- static class NestedChildrenQuery extends Query {\n-\n- private final BitSetProducer parentFilter;\n- private final Query childFilter;\n- private final int docId;\n- private final LeafReader leafReader;\n-\n- NestedChildrenQuery(BitSetProducer parentFilter, Query childFilter, FetchSubPhase.HitContext hitContext) {\n- this.parentFilter = parentFilter;\n- this.childFilter = childFilter;\n- this.docId = hitContext.docId();\n- this.leafReader = hitContext.readerContext().reader();\n- }\n-\n- @Override\n- public boolean equals(Object obj) {\n- if (sameClassAs(obj) == false) {\n- return false;\n- }\n- NestedChildrenQuery other = (NestedChildrenQuery) obj;\n- return parentFilter.equals(other.parentFilter)\n- && childFilter.equals(other.childFilter)\n- && docId == 
other.docId\n- && leafReader.getCoreCacheKey() == other.leafReader.getCoreCacheKey();\n- }\n-\n- @Override\n- public int hashCode() {\n- int hash = classHash();\n- hash = 31 * hash + parentFilter.hashCode();\n- hash = 31 * hash + childFilter.hashCode();\n- hash = 31 * hash + docId;\n- hash = 31 * hash + leafReader.getCoreCacheKey().hashCode();\n- return hash;\n- }\n-\n- @Override\n- public String toString(String field) {\n- return \"NestedChildren(parent=\" + parentFilter + \",child=\" + childFilter + \")\";\n- }\n-\n- @Override\n- public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException {\n- final Weight childWeight = childFilter.createWeight(searcher, false);\n- return new ConstantScoreWeight(this) {\n- @Override\n- public Scorer scorer(LeafReaderContext context) throws IOException {\n- // Nested docs only reside in a single segment, so no need to evaluate all segments\n- if (!context.reader().getCoreCacheKey().equals(leafReader.getCoreCacheKey())) {\n- return null;\n- }\n-\n- // If docId == 0 then we a parent doc doesn't have child docs, because child docs are stored\n- // before the parent doc and because parent doc is 0 we can safely assume that there are no child docs.\n- if (docId == 0) {\n- return null;\n- }\n-\n- final BitSet parents = parentFilter.getBitSet(context);\n- final int firstChildDocId = parents.prevSetBit(docId - 1) + 1;\n- // A parent doc doesn't have child docs, so we can early exit here:\n- if (firstChildDocId == docId) {\n- return null;\n- }\n-\n- final Scorer childrenScorer = childWeight.scorer(context);\n- if (childrenScorer == null) {\n- return null;\n- }\n- DocIdSetIterator childrenIterator = childrenScorer.iterator();\n- final DocIdSetIterator it = new DocIdSetIterator() {\n-\n- int doc = -1;\n-\n- @Override\n- public int docID() {\n- return doc;\n- }\n-\n- @Override\n- public int nextDoc() throws IOException {\n- return advance(doc + 1);\n- }\n-\n- @Override\n- public int advance(int target) throws IOException {\n- target = Math.max(firstChildDocId, target);\n- if (target >= docId) {\n- // We're outside the child nested scope, so it is done\n- return doc = NO_MORE_DOCS;\n- } else {\n- int advanced = childrenIterator.advance(target);\n- if (advanced >= docId) {\n- // We're outside the child nested scope, so it is done\n- return doc = NO_MORE_DOCS;\n- } else {\n- return doc = advanced;\n- }\n- }\n- }\n-\n- @Override\n- public long cost() {\n- return Math.min(childrenIterator.cost(), docId - firstChildDocId);\n- }\n-\n- };\n- return new ConstantScoreScorer(this, score(), it);\n- }\n- };\n- }\n- }\n-\n }\n \n public static final class ParentChildInnerHits extends BaseInnerHits {", "filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java", "status": "modified" } ] }
{ "body": "\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**:\r\n5.2.2\r\n**Plugins installed**: []\r\nhead\r\n**JVM version**:\r\njava version \"1.8.0_121\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_121-b13)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n**OS version**:\r\nLinux 3.10.0-229.el7.x86_64\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWith two index named `foo_foo` and `bar_bar` with the same alias `foo` like below:\r\n\r\n| index | alias |\r\n| ------- | -------|\r\n| foo_foo | foo |\r\n| bar_bar | foo |\r\n \r\nIf use the following to remove the alias, only the alias of 'foo_foo' is removed, that's fine.\r\n```\r\nPOST /_aliases \r\n{\r\n \"actions\": [\r\n {\r\n \"remove\": {\r\n \"index\": \"foo_*\",\r\n \"alias\": \"foo\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\nBUT, if use the following to remove, both the alias of `foo_foo` and `bar_bar` will be removed, while I think only the `foo_foo` 's alias should be removed\r\n\r\n```\r\nPOST /_aliases \r\n{\r\n \"actions\": [\r\n {\r\n \"remove\": {\r\n \"index\": \"foo*\",\r\n \"alias\": \"foo\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\n**Steps to reproduce**:\r\n 1. Create Two Index using the following:\r\n```\r\nPUT /foo_foo\r\nPUT /bar_bar\r\n```\r\n 2. Add Alias for them using:\r\n```\r\nPOST /_aliases\r\n{\r\n \"actions\" : [\r\n { \"add\" : { \"index\" : \"foo_foo\", \"alias\" : \"foo\" } },\r\n { \"add\" : { \"index\" : \"bar_bar\", \"alias\" : \"foo\" } }\r\n ]\r\n}\r\n```\r\n 3. delete alias using:\r\n```\r\nPOST /_aliases \r\n{\r\n \"actions\": [\r\n {\r\n \"remove\": {\r\n \"index\": \"foo_*\",\r\n \"alias\": \"foo\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n", "comments": [ { "body": "This looks like a low hanging fruit. If so, I would like to have a look at it.", "created_at": "2017-04-07T09:21:16Z" }, { "body": "@olcbean I just marked the issue as adopt me ;) ", "created_at": "2017-04-07T09:24:13Z" }, { "body": "@jimczi I am a first timer and would like to work on this issue. Could you please share the file where I can start to look at ?", "created_at": "2017-04-08T08:28:25Z" }, { "body": "@myrfy001 i cannot replicate the issue. I have followed the step and i am able to query bar_bar with the alias foo.", "created_at": "2017-04-11T17:29:54Z" }, { "body": "@kunal642 the last step of the reproduction has a typo. I can reproduce the problem with the following request:\r\n\r\n`````\r\nPOST /_aliases \r\n{\r\n \"actions\": [\r\n {\r\n \"remove\": {\r\n \"index\": \"foo*\",\r\n \"alias\": \"foo\"\r\n }\r\n }\r\n ]\r\n}\r\n`````\r\n\r\nThe problem here is that we resolve `index: foo*` to all indices *and* aliases that match the pattern. \r\n@olcbean has a PR open that resolve the pattern on indices only.", "created_at": "2017-04-11T17:45:57Z" } ], "number": 23960, "title": "Wrong behaviour deleting alias." }
{ "body": "Indices wildcards were resolved against all indices and aliases. And if an alias matched, then all indices with this alias were returned as matching.\r\n\r\nIn other words\r\n```\r\nPOST /_aliases\r\n{\r\n \"actions\" : [\r\n { \"add\" : { \"index\" : \"foo_foo\", \"alias\" : \"foo\" } },\r\n { \"add\" : { \"index\" : \"bar_bar\", \"alias\" : \"foo\" } }\r\n ]\r\n}\r\n```\r\n```\r\nPOST /_aliases \r\n{\r\n \"actions\": [\r\n {\r\n \"remove\": {\r\n \"index\": \"foo*\",\r\n \"alias\": \"foo\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\nwill actually return both indices `foo_foo` and `bar_bar` as the common alias `foo` matched the wildcard. Which led to deleting the alias for both indices.\r\n\r\nRelated to #23960\r\n\r\nRelates to #10106 \r\n", "number": 23997, "review_comments": [ { "body": "In general, I would prefer a straightforward comment explaining the issue being tested. A link to a GitHub issues means that I have to be online and context switch to obtain more information.", "created_at": "2017-04-10T22:30:25Z" }, { "body": "In general, I would prefer a straightforward comment explaining the issue being tested. A link to a GitHub issues means that I have to be online and context switch to obtain more information.", "created_at": "2017-04-10T22:30:34Z" }, { "body": "@jasontedor Good point! I just followed the lead of another test in this class which is abbreviated only with the issue number. \r\n\r\nMay I assume that generally a test should be commented with a short description why the test has been introduced (and maybe the issue for further reference)?", "created_at": "2017-04-11T18:30:03Z" }, { "body": "If the test and its purpose is straightforward, I would say that no comment is necessary. If the test is for an tricky bug, or its purpose is unclear, I would say a comment is a necessity.", "created_at": "2017-04-11T19:49:18Z" }, { "body": "@jasontedor Thank you for answering a trivial question! I understand this is common sense, but somehow common sense manages to differ between projects... Do you know if there is a resource where the basic coding practices are defined for new comers? If there is, a link will be really appreciated.", "created_at": "2017-04-13T09:28:37Z" }, { "body": "I think a lot of what you're looking for is not written down, it's tribal knowledge that we all accumulate with experience in the project. We explain some straightforward things in the contributing docs but the vast majority of our conventions are not written down (and I think that's okay, we catch them during code review).", "created_at": "2017-04-14T03:31:11Z" }, { "body": "this method and the read need to handle backwards compatibility: see https://github.com/elastic/elasticsearch/commit/7548b2edb782a2732aca5e9bae9016c6a01cb6e6 for a similar change with bw comp layer from when we added `allowAliasesToMultipleIndices`", "created_at": "2017-05-08T13:21:19Z" }, { "body": "I think that we have to change also the default value for GetAliasesRequest ?", "created_at": "2017-05-08T13:22:30Z" }, { "body": "nit: replace the lambda expressions with method references? `Map.Entry::getKey` and `Map.Entry::getValue` ?", "created_at": "2017-05-08T13:33:40Z" }, { "body": "`.filter(e -> context.getOptions().ignoreAliases() == false || e.getValue().isAlias() == false)` ?", "created_at": "2017-05-08T13:35:15Z" }, { "body": "here we are fixing wildcard matching, but shouldn't we also honour the new option when resolving provided concrete names? e.g. 
what if I specify an alias amongst the indices, that are supposed to be concrete indices names only? I think if we fix this, it would also help with #10106 as we would not allow to create an alias that points to another alias anymore (which does not do what users think it does).", "created_at": "2017-05-08T13:45:56Z" }, { "body": "can you also add tests for the new option when resolving expressions that don't contain wildcards?", "created_at": "2017-05-08T13:46:53Z" }, { "body": "I tend to think that this comment is not needed. Maybe add a small comment close to where you create the two indices options?", "created_at": "2017-05-08T13:47:47Z" }, { "body": "same as above.", "created_at": "2017-05-08T13:48:02Z" }, { "body": "would you mind adding a couple of integration tests to `IndexAliasesIT` also? Maybe test both IndicesAliasesRequest and GetAliasesRequest ?", "created_at": "2017-05-08T13:49:27Z" }, { "body": "Oh.. I was considering to leave the changes to `GetAliasesRequest` for another PR ( for better traceability ). But np, I will include them in this one. ", "created_at": "2017-05-11T17:01:06Z" }, { "body": "I am fine either way, if you prefer you can send another followup PR to address that.", "created_at": "2017-05-12T09:22:36Z" }, { "body": "not sure why this is still mark unreleased, we have already released alpha1, I think you should use alpha2.", "created_at": "2017-05-15T15:27:33Z" }, { "body": "can you add a comment on what this does and why for future reference?", "created_at": "2017-05-15T15:38:22Z" }, { "body": "can you expand IndicesOptionsTests#testSerialization to pass in also the ignoreAliases flag when creating the random indices options and check that they get written/read correctly. You will need an if based on the version there as well.", "created_at": "2017-05-15T15:40:49Z" }, { "body": "can you expand IndicesOptionsTests#testFromOptions using and testing this new flag too?", "created_at": "2017-05-15T15:42:25Z" }, { "body": "can you do here as well `p.getValue().isAlias() == false` ?", "created_at": "2017-05-15T15:47:02Z" }, { "body": "nit: can you use assertEquals rather than `assertThat(..., equalTo())` ? I know we do that in quite some places in our codebase but we seem to be preferring the former over the latter nowadays.", "created_at": "2017-05-15T15:50:26Z" }, { "body": "maybe adding a get aliases request to the mix as well before the deletion so we make sure that it resolves things correctly?", "created_at": "2017-05-15T15:51:26Z" }, { "body": "you can use expectThrows here and do it in a single line.", "created_at": "2017-05-15T15:53:39Z" }, { "body": "can you remove the empty lines between assigning indexNames the corresponding assertions? Also above? If you want to divide this into blocks you could also do:\r\n\r\n```\r\n{\r\n List<String> indexNames = Arrays.asList(indexNameExpressionResolver.concreteIndexNames(state, indicesAndAliasesOptions, \"foo*\"));\r\n assert.....\r\n}\r\n```\r\n\r\nThis way you declare the list each time and it doesn't get reused.", "created_at": "2017-05-15T15:56:12Z" }, { "body": "could you rename this to highlight what it holds? 
contextIndicesAndAliases?", "created_at": "2017-05-15T15:57:30Z" }, { "body": "contextIndicesOnly ?", "created_at": "2017-05-15T15:57:40Z" }, { "body": "I'd change change this to `if (aliasOrIndex == null || (aliasOrIndex.isAlias() && context.getOptions().ignoreAliases()))` and remove the new if above", "created_at": "2017-05-15T16:00:35Z" }, { "body": "can you also add a test that makes sure that when adding an alias, you can not make it point to another alias anymore as the indices get resolved to indices only?", "created_at": "2017-05-15T16:03:45Z" } ], "title": "Wrong behavior deleting alias" }
{ "commits": [ { "message": "Resolve indices only against indices" }, { "message": "Added more descriptive comments in the tests" }, { "message": "Revert the changes to the IndexNameExpressionResolver and introducing\nmore conservative approach" }, { "message": "Introduce 'ignoreAliases' IndicesOption\nWhen ignoreAliases is set, the wildcard will be resolved only against\nthe available indices" }, { "message": "integrating remarks" }, { "message": "adding tests and integrating remarks\n\nchanged ndices.asciidoc" }, { "message": "integrating remarks" }, { "message": "adding a remove_index test" } ], "files": [ { "diff": "@@ -59,9 +59,10 @@\n public class IndicesAliasesRequest extends AcknowledgedRequest<IndicesAliasesRequest> {\n private List<AliasActions> allAliasActions = new ArrayList<>();\n \n- //indices options that require every specified index to exist, expand wildcards only to open indices and\n- //don't allow that no indices are resolved from wildcard expressions\n- private static final IndicesOptions INDICES_OPTIONS = IndicesOptions.fromOptions(false, false, true, false);\n+ // indices options that require every specified index to exist, expand wildcards only to open\n+ // indices, don't allow that no indices are resolved from wildcard expressions and resolve the\n+ // expressions only against indices\n+ private static final IndicesOptions INDICES_OPTIONS = IndicesOptions.fromOptions(false, false, true, false, true, false, true);\n \n public IndicesAliasesRequest() {\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.action.support;\n \n \n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.rest.RestRequest;\n@@ -43,6 +44,7 @@ public class IndicesOptions {\n private static final byte EXPAND_WILDCARDS_CLOSED = 8;\n private static final byte FORBID_ALIASES_TO_MULTIPLE_INDICES = 16;\n private static final byte FORBID_CLOSED_INDICES = 32;\n+ private static final byte IGNORE_ALIASES = 64;\n \n private static final byte STRICT_EXPAND_OPEN = 6;\n private static final byte LENIENT_EXPAND_OPEN = 7;\n@@ -51,10 +53,10 @@ public class IndicesOptions {\n private static final byte STRICT_SINGLE_INDEX_NO_EXPAND_FORBID_CLOSED = 48;\n \n static {\n- byte max = 1 << 6;\n+ short max = 1 << 7;\n VALUES = new IndicesOptions[max];\n- for (byte id = 0; id < max; id++) {\n- VALUES[id] = new IndicesOptions(id);\n+ for (short id = 0; id < max; id++) {\n+ VALUES[id] = new IndicesOptions((byte)id);\n }\n }\n \n@@ -106,18 +108,31 @@ public boolean forbidClosedIndices() {\n * @return whether aliases pointing to multiple indices are allowed\n */\n public boolean allowAliasesToMultipleIndices() {\n- //true is default here, for bw comp we keep the first 16 values\n- //in the array same as before + the default value for the new flag\n+ // true is default here, for bw comp we keep the first 16 values\n+ // in the array same as before + the default value for the new flag\n return (id & FORBID_ALIASES_TO_MULTIPLE_INDICES) == 0;\n }\n \n+ /**\n+ * @return whether aliases should be ignored (when resolving a wildcard)\n+ */\n+ public boolean ignoreAliases() {\n+ return (id & IGNORE_ALIASES) != 0;\n+ }\n+ \n public void writeIndicesOptions(StreamOutput out) throws IOException {\n- out.write(id);\n+ if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha2)) {\n+ 
out.write(id);\n+ } else {\n+ // if we are talking to a node that doesn't support the newly added flag (ignoreAliases)\n+ // flip to 0 all the bits starting from the 7th\n+ out.write(id & 0x3f);\n+ }\n }\n \n public static IndicesOptions readIndicesOptions(StreamInput in) throws IOException {\n- //if we read from a node that doesn't support the newly added flag (allowAliasesToMultipleIndices)\n- //we just receive the old corresponding value with the new flag set to true (default)\n+ //if we read from a node that doesn't support the newly added flag (ignoreAliases)\n+ //we just receive the old corresponding value with the new flag set to false (default)\n byte id = in.readByte();\n if (id >= VALUES.length) {\n throw new IllegalArgumentException(\"No valid missing index type id: \" + id);\n@@ -133,8 +148,16 @@ public static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allo\n return fromOptions(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, defaultOptions.allowAliasesToMultipleIndices(), defaultOptions.forbidClosedIndices());\n }\n \n- static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allowNoIndices, boolean expandToOpenIndices, boolean expandToClosedIndices, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices) {\n- byte id = toByte(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, allowAliasesToMultipleIndices, forbidClosedIndices);\n+ public static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allowNoIndices, boolean expandToOpenIndices,\n+ boolean expandToClosedIndices, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices) {\n+ return fromOptions(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, allowAliasesToMultipleIndices,\n+ forbidClosedIndices, false);\n+ }\n+\n+ public static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allowNoIndices, boolean expandToOpenIndices,\n+ boolean expandToClosedIndices, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices, boolean ignoreAliases) {\n+ byte id = toByte(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, allowAliasesToMultipleIndices,\n+ forbidClosedIndices, ignoreAliases);\n return VALUES[id];\n }\n \n@@ -246,7 +269,7 @@ public static IndicesOptions lenientExpandOpen() {\n }\n \n private static byte toByte(boolean ignoreUnavailable, boolean allowNoIndices, boolean wildcardExpandToOpen,\n- boolean wildcardExpandToClosed, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices) {\n+ boolean wildcardExpandToClosed, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices, boolean ignoreAliases) {\n byte id = 0;\n if (ignoreUnavailable) {\n id |= IGNORE_UNAVAILABLE;\n@@ -268,6 +291,9 @@ private static byte toByte(boolean ignoreUnavailable, boolean allowNoIndices, bo\n if (forbidClosedIndices) {\n id |= FORBID_CLOSED_INDICES;\n }\n+ if (ignoreAliases) {\n+ id |= IGNORE_ALIASES;\n+ }\n return id;\n }\n \n@@ -281,6 +307,7 @@ public String toString() {\n \", expand_wildcards_closed=\" + expandWildcardsClosed() +\n \", allow_aliases_to_multiple_indices=\" + allowAliasesToMultipleIndices() +\n \", forbid_closed_indices=\" + forbidClosedIndices() +\n+ \", ignore_aliases=\" + ignoreAliases() +\n ']';\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java", "status": "modified" }, { "diff": "@@ -50,6 +50,7 @@\n import java.util.Locale;\n import java.util.Map;\n import 
java.util.Set;\n+import java.util.SortedMap;\n import java.util.function.Predicate;\n import java.util.stream.Collectors;\n \n@@ -104,7 +105,7 @@ public String[] concreteIndexNames(ClusterState state, IndicesOptions options, S\n return concreteIndexNames(context, indexExpressions);\n }\n \n- /**\n+ /**\n * Translates the provided index expression into actual concrete indices, properly deduplicated.\n *\n * @param state the cluster state containing all the data to resolve to expressions to concrete indices\n@@ -181,7 +182,7 @@ Index[] concreteIndices(Context context, String... indexExpressions) {\n final Set<Index> concreteIndices = new HashSet<>(expressions.size());\n for (String expression : expressions) {\n AliasOrIndex aliasOrIndex = metaData.getAliasAndIndexLookup().get(expression);\n- if (aliasOrIndex == null) {\n+ if (aliasOrIndex == null || (aliasOrIndex.isAlias() && context.getOptions().ignoreAliases())) {\n if (failNoIndices) {\n IndexNotFoundException infe = new IndexNotFoundException(expression);\n infe.setResources(\"index_expression\", expression);\n@@ -638,7 +639,7 @@ private Set<String> innerResolve(Context context, List<String> expressions, Indi\n }\n \n final IndexMetaData.State excludeState = excludeState(options);\n- final Map<String, AliasOrIndex> matches = matches(metaData, expression);\n+ final Map<String, AliasOrIndex> matches = matches(context, metaData, expression);\n Set<String> expand = expand(context, excludeState, matches);\n if (add) {\n result.addAll(expand);\n@@ -693,31 +694,44 @@ private static IndexMetaData.State excludeState(IndicesOptions options) {\n return excludeState;\n }\n \n- private static Map<String, AliasOrIndex> matches(MetaData metaData, String expression) {\n+ public static Map<String, AliasOrIndex> matches(Context context, MetaData metaData, String expression) {\n if (Regex.isMatchAllPattern(expression)) {\n // Can only happen if the expressions was initially: '-*'\n- return metaData.getAliasAndIndexLookup();\n+ if (context.getOptions().ignoreAliases()) {\n+ return metaData.getAliasAndIndexLookup().entrySet().stream()\n+ .filter(e -> e.getValue().isAlias() == false)\n+ .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));\n+ } else {\n+ return metaData.getAliasAndIndexLookup();\n+ }\n } else if (expression.indexOf(\"*\") == expression.length() - 1) {\n- return suffixWildcard(metaData, expression);\n+ return suffixWildcard(context, metaData, expression);\n } else {\n- return otherWildcard(metaData, expression);\n+ return otherWildcard(context, metaData, expression);\n }\n }\n \n- private static Map<String, AliasOrIndex> suffixWildcard(MetaData metaData, String expression) {\n+ private static Map<String, AliasOrIndex> suffixWildcard(Context context, MetaData metaData, String expression) {\n assert expression.length() >= 2 : \"expression [\" + expression + \"] should have at least a length of 2\";\n String fromPrefix = expression.substring(0, expression.length() - 1);\n char[] toPrefixCharArr = fromPrefix.toCharArray();\n toPrefixCharArr[toPrefixCharArr.length - 1]++;\n String toPrefix = new String(toPrefixCharArr);\n- return metaData.getAliasAndIndexLookup().subMap(fromPrefix, toPrefix);\n+ SortedMap<String,AliasOrIndex> subMap = metaData.getAliasAndIndexLookup().subMap(fromPrefix, toPrefix);\n+ if (context.getOptions().ignoreAliases()) {\n+ return subMap.entrySet().stream()\n+ .filter(entry -> entry.getValue().isAlias() == false)\n+ .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));\n+ }\n+ return subMap;\n }\n 
\n- private static Map<String, AliasOrIndex> otherWildcard(MetaData metaData, String expression) {\n+ private static Map<String, AliasOrIndex> otherWildcard(Context context, MetaData metaData, String expression) {\n final String pattern = expression;\n return metaData.getAliasAndIndexLookup()\n .entrySet()\n .stream()\n+ .filter(e -> context.getOptions().ignoreAliases() == false || e.getValue().isAlias() == false)\n .filter(e -> Regex.simpleMatch(pattern, e.getKey()))\n .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java", "status": "modified" }, { "diff": "@@ -32,7 +32,7 @@ public void testSerialization() throws Exception {\n int iterations = randomIntBetween(5, 20);\n for (int i = 0; i < iterations; i++) {\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(\n- randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean());\n+ randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean());\n \n BytesStreamOutput output = new BytesStreamOutput();\n Version outputVersion = randomVersion(random());\n@@ -50,6 +50,12 @@ public void testSerialization() throws Exception {\n \n assertThat(indicesOptions2.forbidClosedIndices(), equalTo(indicesOptions.forbidClosedIndices()));\n assertThat(indicesOptions2.allowAliasesToMultipleIndices(), equalTo(indicesOptions.allowAliasesToMultipleIndices()));\n+\n+ if (output.getVersion().onOrAfter(Version.V_6_0_0_alpha2)) {\n+ assertEquals(indicesOptions2.ignoreAliases(), indicesOptions.ignoreAliases());\n+ } else {\n+ assertFalse(indicesOptions2.ignoreAliases());\n+ }\n }\n }\n \n@@ -62,9 +68,11 @@ public void testFromOptions() {\n boolean expandToClosedIndices = randomBoolean();\n boolean allowAliasesToMultipleIndices = randomBoolean();\n boolean forbidClosedIndices = randomBoolean();\n+ boolean ignoreAliases = randomBoolean();\n+\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(\n ignoreUnavailable, allowNoIndices,expandToOpenIndices, expandToClosedIndices,\n- allowAliasesToMultipleIndices, forbidClosedIndices\n+ allowAliasesToMultipleIndices, forbidClosedIndices, ignoreAliases\n );\n \n assertThat(indicesOptions.ignoreUnavailable(), equalTo(ignoreUnavailable));\n@@ -74,6 +82,7 @@ public void testFromOptions() {\n assertThat(indicesOptions.allowAliasesToMultipleIndices(), equalTo(allowAliasesToMultipleIndices));\n assertThat(indicesOptions.allowAliasesToMultipleIndices(), equalTo(allowAliasesToMultipleIndices));\n assertThat(indicesOptions.forbidClosedIndices(), equalTo(forbidClosedIndices));\n+ assertEquals(ignoreAliases, indicesOptions.ignoreAliases());\n }\n }\n }", "filename": "core/src/test/java/org/elasticsearch/action/support/IndicesOptionsTests.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.aliases;\n \n-import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;\n import org.elasticsearch.action.admin.indices.alias.exists.AliasesExistResponse;\n@@ -36,6 +35,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.query.QueryBuilder;\n import 
org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException;\n@@ -63,7 +63,6 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_BLOCKS_READ;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_BLOCKS_WRITE;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_READ_ONLY;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.test.hamcrest.CollectionAssertions.hasKey;\n@@ -425,6 +424,23 @@ public void testDeleteAliases() throws Exception {\n \n AliasesExistResponse response = admin().indices().prepareAliasesExist(aliases).get();\n assertThat(response.exists(), equalTo(false));\n+\n+ logger.info(\"--> creating index [foo_foo] and [bar_bar]\");\n+ assertAcked(prepareCreate(\"foo_foo\"));\n+ assertAcked(prepareCreate(\"bar_bar\"));\n+ ensureGreen();\n+\n+ logger.info(\"--> adding [foo] alias to [foo_foo] and [bar_bar]\");\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"foo_foo\", \"foo\"));\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"bar_bar\", \"foo\"));\n+\n+ assertAcked(admin().indices().prepareAliases().addAliasAction(AliasActions.remove().index(\"foo*\").alias(\"foo\")).execute().get());\n+\n+ assertTrue(admin().indices().prepareAliasesExist(\"foo\").get().exists());\n+ assertFalse(admin().indices().prepareAliasesExist(\"foo\").setIndices(\"foo_foo\").get().exists());\n+ assertTrue(admin().indices().prepareAliasesExist(\"foo\").setIndices(\"bar_bar\").get().exists());\n+ expectThrows(IndexNotFoundException.class, () -> admin().indices().prepareAliases()\n+ .addAliasAction(AliasActions.remove().index(\"foo\").alias(\"foo\")).execute().actionGet());\n }\n \n public void testWaitForAliasCreationMultipleShards() throws Exception {\n@@ -785,6 +801,21 @@ public void testCreateIndexWithAliasesFilterNotValid() {\n }\n }\n \n+ public void testAliasesCanBeAddedToIndicesOnly() throws Exception {\n+ logger.info(\"--> creating index [2017-05-20]\");\n+ assertAcked(prepareCreate(\"2017-05-20\"));\n+ ensureGreen();\n+\n+ logger.info(\"--> adding [week_20] alias to [2017-05-20]\");\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"2017-05-20\", \"week_20\"));\n+\n+ IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> admin().indices().prepareAliases()\n+ .addAliasAction(AliasActions.add().index(\"week_20\").alias(\"tmp\")).execute().actionGet());\n+ assertEquals(\"week_20\", infe.getIndex().getName());\n+\n+ assertAcked(admin().indices().prepareAliases().addAliasAction(AliasActions.add().index(\"2017-05-20\").alias(\"tmp\")).execute().get());\n+ }\n+\n // Before 2.0 alias filters were parsed at alias creation time, in order\n // for filters to work correctly ES required that fields mentioned in those\n // filters exist in the mapping.\n@@ -864,6 +895,26 @@ public void testAliasesWithBlocks() {\n }\n }\n \n+ public void testAliasActionRemoveIndex() throws InterruptedException, ExecutionException {\n+ assertAcked(prepareCreate(\"foo_foo\"));\n+ assertAcked(prepareCreate(\"bar_bar\"));\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"foo_foo\", \"foo\"));\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"bar_bar\", \"foo\"));\n+\n+ expectThrows(IndexNotFoundException.class,\n+ () -> 
client().admin().indices().prepareAliases().removeIndex(\"foo\").execute().actionGet());\n+\n+ assertAcked(client().admin().indices().prepareAliases().removeIndex(\"foo*\").execute().get());\n+ assertFalse(client().admin().indices().prepareExists(\"foo_foo\").execute().actionGet().isExists());\n+ assertTrue(admin().indices().prepareAliasesExist(\"foo\").get().exists());\n+ assertTrue(client().admin().indices().prepareExists(\"bar_bar\").execute().actionGet().isExists());\n+ assertTrue(admin().indices().prepareAliasesExist(\"foo\").setIndices(\"bar_bar\").get().exists());\n+\n+ assertAcked(client().admin().indices().prepareAliases().removeIndex(\"bar_bar\"));\n+ assertFalse(admin().indices().prepareAliasesExist(\"foo\").get().exists());\n+ assertFalse(client().admin().indices().prepareExists(\"bar_bar\").execute().actionGet().isExists());\n+ }\n+\n public void testRemoveIndexAndReplaceWithAlias() throws InterruptedException, ExecutionException {\n assertAcked(client().admin().indices().prepareCreate(\"test\"));\n indexRandom(true, client().prepareIndex(\"test_2\", \"test\", \"test\").setSource(\"test\", \"test\"));", "filename": "core/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashSet;\n+import java.util.List;\n \n import static org.elasticsearch.common.util.set.Sets.newHashSet;\n import static org.hamcrest.Matchers.arrayContaining;\n@@ -643,6 +644,60 @@ public void testConcreteIndicesWildcardWithNegation() {\n assertEquals(0, indexNames.length);\n }\n \n+ public void testConcreteIndicesWildcardAndAliases() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"foo_foo\").state(State.OPEN).putAlias(AliasMetaData.builder(\"foo\")))\n+ .put(indexBuilder(\"bar_bar\").state(State.OPEN).putAlias(AliasMetaData.builder(\"foo\")));\n+ ClusterState state = ClusterState.builder(new ClusterName(\"_name\")).metaData(mdBuilder).build();\n+\n+ // when ignoreAliases option is set, concreteIndexNames resolves the provided expressions\n+ // only against the defined indices\n+ IndicesOptions ignoreAliasesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, true);\n+ \n+ String[] indexNamesIndexWildcard = indexNameExpressionResolver.concreteIndexNames(state, ignoreAliasesOptions, \"foo*\");\n+\n+ assertEquals(1, indexNamesIndexWildcard.length);\n+ assertEquals(\"foo_foo\", indexNamesIndexWildcard[0]);\n+\n+ indexNamesIndexWildcard = indexNameExpressionResolver.concreteIndexNames(state, ignoreAliasesOptions, \"*o\");\n+\n+ assertEquals(1, indexNamesIndexWildcard.length);\n+ assertEquals(\"foo_foo\", indexNamesIndexWildcard[0]);\n+\n+ indexNamesIndexWildcard = indexNameExpressionResolver.concreteIndexNames(state, ignoreAliasesOptions, \"f*o\");\n+\n+ assertEquals(1, indexNamesIndexWildcard.length);\n+ assertEquals(\"foo_foo\", indexNamesIndexWildcard[0]);\n+\n+ IndexNotFoundException infe = expectThrows(IndexNotFoundException.class,\n+ () -> indexNameExpressionResolver.concreteIndexNames(state, ignoreAliasesOptions, \"foo\"));\n+ assertThat(infe.getIndex().getName(), equalTo(\"foo\"));\n+\n+ // when ignoreAliases option is not set, concreteIndexNames resolves the provided\n+ // expressions against the defined indices and aliases\n+ IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, false);\n+\n+ List<String> indexNames = 
Arrays.asList(indexNameExpressionResolver.concreteIndexNames(state, indicesAndAliasesOptions, \"foo*\"));\n+ assertEquals(2, indexNames.size());\n+ assertTrue(indexNames.contains(\"foo_foo\"));\n+ assertTrue(indexNames.contains(\"bar_bar\"));\n+\n+ indexNames = Arrays.asList(indexNameExpressionResolver.concreteIndexNames(state, indicesAndAliasesOptions, \"*o\"));\n+ assertEquals(2, indexNames.size());\n+ assertTrue(indexNames.contains(\"foo_foo\"));\n+ assertTrue(indexNames.contains(\"bar_bar\"));\n+\n+ indexNames = Arrays.asList(indexNameExpressionResolver.concreteIndexNames(state, indicesAndAliasesOptions, \"f*o\"));\n+ assertEquals(2, indexNames.size());\n+ assertTrue(indexNames.contains(\"foo_foo\"));\n+ assertTrue(indexNames.contains(\"bar_bar\"));\n+\n+ indexNames = Arrays.asList(indexNameExpressionResolver.concreteIndexNames(state, indicesAndAliasesOptions, \"foo\"));\n+ assertEquals(2, indexNames.size());\n+ assertTrue(indexNames.contains(\"foo_foo\"));\n+ assertTrue(indexNames.contains(\"bar_bar\"));\n+ }\n+\n /**\n * test resolving _all pattern (null, empty array or \"_all\") for random IndicesOptions\n */", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData.State;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.Arrays;\n@@ -125,6 +126,59 @@ public void testAll() {\n assertThat(newHashSet(resolver.resolve(context, Arrays.asList(\"_all\"))), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n }\n \n+ public void testConcreteIndicesWildcardAndAliases() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"foo_foo\").state(State.OPEN).putAlias(AliasMetaData.builder(\"foo\")))\n+ .put(indexBuilder(\"bar_bar\").state(State.OPEN).putAlias(AliasMetaData.builder(\"foo\")));\n+ ClusterState state = ClusterState.builder(new ClusterName(\"_name\")).metaData(mdBuilder).build();\n+\n+ // when ignoreAliases option is not set, WildcardExpressionResolver resolves the provided\n+ // expressions against the defined indices and aliases\n+ IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, false);\n+ IndexNameExpressionResolver.Context indicesAndAliasesContext = new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions);\n+\n+ // ignoreAliases option is set, WildcardExpressionResolver resolves the provided expressions\n+ // only against the defined indices\n+ IndicesOptions onlyIndicesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, true);\n+ IndexNameExpressionResolver.Context onlyIndicesContext = new IndexNameExpressionResolver.Context(state, onlyIndicesOptions);\n+\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(indicesAndAliasesContext, state.getMetaData(), \"*\").keySet(),\n+ equalTo(newHashSet(\"bar_bar\", \"foo_foo\", \"foo\")));\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(onlyIndicesContext, state.getMetaData(), \"*\").keySet(),\n+ equalTo(newHashSet(\"bar_bar\", \"foo_foo\")));\n+\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(indicesAndAliasesContext, state.getMetaData(), \"foo*\").keySet(),\n+ 
equalTo(newHashSet(\"foo\", \"foo_foo\")));\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(onlyIndicesContext, state.getMetaData(), \"foo*\").keySet(),\n+ equalTo(newHashSet(\"foo_foo\")));\n+\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(indicesAndAliasesContext, state.getMetaData(), \"f*o\").keySet(),\n+ equalTo(newHashSet(\"foo\", \"foo_foo\")));\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(onlyIndicesContext, state.getMetaData(), \"f*o\").keySet(),\n+ equalTo(newHashSet(\"foo_foo\")));\n+\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(indicesAndAliasesContext, state.getMetaData(), \"foo\").keySet(),\n+ equalTo(newHashSet(\"foo\")));\n+ assertThat(\n+ IndexNameExpressionResolver.WildcardExpressionResolver\n+ .matches(onlyIndicesContext, state.getMetaData(), \"foo\").keySet(),\n+ equalTo(newHashSet()));\n+ }\n+\n private IndexMetaData.Builder indexBuilder(String index) {\n return IndexMetaData.builder(index).settings(settings(Version.CURRENT).put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0));\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java", "status": "modified" }, { "diff": "@@ -50,3 +50,9 @@ default when a provided wildcard expression doesn't match any closed/open index.\n Delete a document from non-existing index has been modified to not create the index.\n However if an external versioning is used the index will be created and the document\n will be marked for deletion. \n+\n+==== Indices aliases api resolves indices expressions only against indices\n+\n+The index parameter in the update-aliases, put-alias, and delete-alias APIs no\n+longer accepts alias names. Instead, it accepts only index names (or wildcards\n+which will expand to matching indices).", "filename": "docs/reference/migration/migrate_6_0/indices.asciidoc", "status": "modified" } ] }
{ "body": "Quick fix, skip hidden files when loading plugins. [https://github.com/elastic/elasticsearch/issues/12433]\n", "comments": [ { "body": "Thanks @xuzha !\n", "created_at": "2015-07-27T08:07:24Z" }, { "body": "I do not think we should use Files.isHidden, anywhere in our code.\n\nThis is too specific to the OS environment. This should be looking for a match with .DS_Store or startsWith(\".\") but not looking at any os or filesystem-specific attributes.\n\nIf we start going this route, then things are going to get confusing quickly.\n", "created_at": "2015-07-27T11:30:11Z" }, { "body": "For the record, I merged this pull request because I saw we were already ignoring hidden files this way in other places, but I'm good with changing hidden file detection to startsWith(\".\").\n\nI opened #12480\n", "created_at": "2015-07-27T11:46:10Z" }, { "body": "That's a good point. Thanks Robert.\n", "created_at": "2015-07-27T15:52:41Z" } ], "number": 12465, "title": "Skip hidden files" }
{ "body": "This commit removes some leniency from the plugin service which skips hidden files in the plugins directory. We really want to ensure the integrity of the plugin folder, so hasta la vista leniency.\r\n\r\nRelates #12465\r\n", "number": 23982, "review_comments": [], "title": "Remove hidden file leniency from plugin service" }
{ "commits": [ { "message": "Remove hidden file leniency from plugin service\n\nThis commit removes some leniency from the plugin service which skips\nhidden files in the plugins directory. We really want to ensure the\nintegrity of the plugin folder, so hasta la vista leniency." }, { "message": "Merge branch 'master' into skip-skipping-hidden-files\n\n* master:\n Discovery EC2: Remove region setting (#23991)\n AWS Plugins: Remove signer type setting (#23984)\n Settings: Disallow secure setting to exist in normal settings (#23976)\n Add registration of new discovery settings\n Settings: Migrate ec2 discovery sensitive settings to elasticsearch keystore (#23961)\n Fix throttled reindex_from_remote (#23953)\n Add comment why we check for null fetch results during merge" }, { "message": "Add test for not skipping hidden files\n\nThis commit adds a test that hidden files are not skipped in the plugins\nfolder." } ], "files": [ { "diff": "@@ -305,10 +305,6 @@ static Set<Bundle> getPluginBundles(Path pluginsDirectory) throws IOException {\n \n try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginsDirectory)) {\n for (Path plugin : stream) {\n- if (FileSystemUtils.isHidden(plugin)) {\n- logger.trace(\"--- skip hidden plugin file[{}]\", plugin.toAbsolutePath());\n- continue;\n- }\n logger.trace(\"--- adding plugin [{}]\", plugin.toAbsolutePath());\n final PluginInfo info;\n try {", "filename": "core/src/main/java/org/elasticsearch/plugins/PluginsService.java", "status": "modified" }, { "diff": "@@ -19,17 +19,19 @@\n \n package org.elasticsearch.plugins;\n \n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.index.IndexModule;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.Arrays;\n import java.util.List;\n \n-import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.common.inject.AbstractModule;\n-import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.env.Environment;\n-import org.elasticsearch.index.IndexModule;\n-import org.elasticsearch.test.ESTestCase;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.hasToString;\n \n public class PluginsServiceTests extends ESTestCase {\n public static class AdditionalSettingsPlugin1 extends Plugin {\n@@ -99,4 +101,22 @@ public void testFilterPlugins() {\n assertEquals(1, scriptPlugins.size());\n assertEquals(FilterablePlugin.class, scriptPlugins.get(0).getClass());\n }\n+\n+ public void testHiddenFiles() throws IOException {\n+ final Path home = createTempDir();\n+ final Settings settings =\n+ Settings.builder()\n+ .put(Environment.PATH_HOME_SETTING.getKey(), home)\n+ .build();\n+ final Path hidden = home.resolve(\"plugins\").resolve(\".hidden\");\n+ Files.createDirectories(hidden);\n+ @SuppressWarnings(\"unchecked\")\n+ final IllegalStateException e = expectThrows(\n+ IllegalStateException.class,\n+ () -> newPluginsService(settings));\n+\n+ final String expected = \"Could not load plugin descriptor for existing plugin [.hidden]\";\n+ assertThat(e, hasToString(containsString(expected)));\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/plugins/PluginsServiceTests.java", "status": "modified" }, { "diff": "@@ -42,3 +42,8 @@ See {plugins}/repository-azure-usage.html#repository-azure-repository-settings[A\n \n * The region setting has been removed. 
This includes the settings `cloud.aws.region`\n and `cloud.aws.ec2.region`. Instead, specify the full endpoint.\n+\n+==== Ignoring hidden folders\n+\n+Previous versions of Elasticsearch would skip hidden files and directories when\n+scanning the plugins folder. This leniency has been removed.", "filename": "docs/reference/migration/migrate_6_0/plugins.asciidoc", "status": "modified" } ] }
{ "body": "The HttpAsyncReponseConsumerFactory 's access modifier as it sits in version 5.3.0 of Java REST Client libraries is package protected, i.e, no access modifier has been specified therefore users will not be able to implement it outside the org.elasticsearch.client package.\r\n\r\nHowever, the javadoc for the org.elasticsearch.client.HttpAsyncResponseConsumerFactory says :\r\n\r\n/**\r\n* Factory used to create instances of {@link HttpAsyncResponseConsumer}. Each request retry needs its own instance of the\r\n* consumer object. Users can implement this interface and pass their own instance to the specialized\r\n* performRequest methods that accept an {@link HttpAsyncResponseConsumerFactory} instance as argument.\r\n*/\r\n\r\nI believe implementing the aforementioned interface is the only way to specify a buffer limit of higher than 100 MB?\r\n\r\n**Elasticsearch version**: 5.3.0\r\n\r\n**JVM version**: 1.8", "comments": [ { "body": "hi @mkhan24 you are right, the buffer limit was supposed to be configurable but it is configurable only from within the `org.elasticsearch.client` package at the moment. I will fix that. In the meantime, the workaround is to implement the `HttpAsyncResponseConsumerFactory` interface and borrow the few lines of code needed from `HeapBufferedResponseConsumerFactory`.", "created_at": "2017-04-07T08:15:38Z" }, { "body": "Hi @javanna , thanks for the changes. Is there a tentative timeline on the official release of 5.3.1 (I presume it's going to be part of that?)", "created_at": "2017-04-10T10:33:56Z" }, { "body": "hi @mkhan24 I am in the process of backporting the change to 5.x and 5.3. 5.3.1 should be out soon, but I cannot guarantee yet that this change will make it in. Stay tuned.", "created_at": "2017-04-10T12:40:40Z" } ], "number": 23958, "title": "Cannot implement org.elasticsearch.client.HttpAsyncResponseConsumerFactory interface" }
{ "body": "The buffer limit should have been configurable already, but the factory constructor is package private so it is truly configurable only from the org.elasticsearch.client package. Also the HttpAsyncResponseConsumerFactory interface was package private, so it could only be implemented from the org.elasticsearch.client package.\r\n\r\nCloses #23958", "number": 23970, "review_comments": [ { "body": "Double `consumer`? It looks like only one should be necessary to escape the `org.elasticsearch.client` package.", "created_at": "2017-04-07T13:52:25Z" }, { "body": "I'd probably assert this with reflection.", "created_at": "2017-04-07T13:55:12Z" }, { "body": "I appreciate this comment.", "created_at": "2017-04-07T13:55:13Z" }, { "body": "This line probably exceeds the 100-column limit. I know that we are actively discussing what to do about that, so I'm fine with either bringing it under the limit or suppressing the file for now.", "created_at": "2017-04-07T13:55:49Z" }, { "body": "Oh, I like that suggestion! Then the custom package isn't needed.", "created_at": "2017-04-07T13:59:31Z" }, { "body": "yea I don't know what happened there :)", "created_at": "2017-04-07T14:04:20Z" }, { "body": "Good idea, I pushed a new commit. It addresses the constructor visibility problem. Also the interface was mistakenly package private, but that is not caught by the test with reflection in the same package. Do you guys have ideas on how to achieve that too?", "created_at": "2017-04-07T14:23:07Z" }, { "body": "I think something like this would work:\r\n\r\n```java\r\n public void testVisibility() throws ClassNotFoundException {\r\n final Class<?> clazz =\r\n Class.forName(\"org.elasticsearch.client.HttpAsyncResponseConsumerFactory\");\r\n assertThat(clazz.getModifiers() & Modifier.PUBLIC, equalTo(Modifier.PUBLIC));\r\n }\r\n```", "created_at": "2017-04-07T14:32:25Z" }, { "body": "of course! thank you", "created_at": "2017-04-07T14:47:50Z" }, { "body": "I'd add a comment that you are using reflection here to make sure that the ctor is public.", "created_at": "2017-04-07T16:33:36Z" }, { "body": "makes sense", "created_at": "2017-04-07T19:23:34Z" }, { "body": "If we are going to load the class directly (i.e., via the class itself) there is no need to use `Class.forName`, you can just say `final Class<?> class = HttpAsyncResponseConsumerFactory.class;`.", "created_at": "2017-04-07T20:31:01Z" }, { "body": "yep that's silly, I will fix thanks", "created_at": "2017-04-07T20:52:58Z" } ], "title": "Make buffer limit configurable in HeapBufferedConsumerFactory" }
{ "commits": [ { "message": "Make buffer limit configurable in HeapBufferedConsumerFactory\n\nThe buffer limit should have been configurable already, but the factory constructor is package private so it is truly configurable only from the org.elasticsearch.client package. Also the HttpAsyncResponseConsumerFactory interface was package private, so it could only be implemented from the org.elasticsearch.client package.\n\nCloses #23958" }, { "message": "use reflection instead of test in separate package" }, { "message": "add test for interface visibility" }, { "message": "add assertion and comment on constructor access modifiers" }, { "message": "remove needless Class.forName" } ], "files": [ { "diff": "@@ -42,6 +42,7 @@\n <suppress files=\"client[/\\\\]test[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]client[/\\\\]RestClientTestUtil.java\" checks=\"LineLength\" />\n <suppress files=\"client[/\\\\]rest[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]client[/\\\\]RestClientTests.java\" checks=\"LineLength\" />\n <suppress files=\"client[/\\\\]rest[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]client[/\\\\]SyncResponseListenerTests.java\" checks=\"LineLength\" />\n+ <suppress files=\"client[/\\\\]rest[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]client[/\\\\]HeapBufferedAsyncResponseConsumerTests.java\" checks=\"LineLength\" />\n <suppress files=\"client[/\\\\]rest-high-level[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]client[/\\\\]Request.java\" checks=\"LineLength\" />\n <suppress files=\"client[/\\\\]rest-high-level[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]client[/\\\\]RestHighLevelClient.java\" checks=\"LineLength\" />\n <suppress files=\"client[/\\\\]rest-high-level[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]client[/\\\\]CrudIT.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -29,7 +29,7 @@\n * consumer object. 
Users can implement this interface and pass their own instance to the specialized\n * performRequest methods that accept an {@link HttpAsyncResponseConsumerFactory} instance as argument.\n */\n-interface HttpAsyncResponseConsumerFactory {\n+public interface HttpAsyncResponseConsumerFactory {\n \n /**\n * Creates the default type of {@link HttpAsyncResponseConsumer}, based on heap buffering with a buffer limit of 100MB.\n@@ -53,7 +53,7 @@ class HeapBufferedResponseConsumerFactory implements HttpAsyncResponseConsumerFa\n \n private final int bufferLimit;\n \n- HeapBufferedResponseConsumerFactory(int bufferLimitBytes) {\n+ public HeapBufferedResponseConsumerFactory(int bufferLimitBytes) {\n this.bufferLimit = bufferLimitBytes;\n }\n ", "filename": "client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java", "status": "modified" }, { "diff": "@@ -24,19 +24,24 @@\n import org.apache.http.HttpResponse;\n import org.apache.http.ProtocolVersion;\n import org.apache.http.StatusLine;\n-import org.apache.http.entity.BasicHttpEntity;\n import org.apache.http.entity.ContentType;\n import org.apache.http.entity.StringEntity;\n import org.apache.http.message.BasicHttpResponse;\n import org.apache.http.message.BasicStatusLine;\n import org.apache.http.nio.ContentDecoder;\n import org.apache.http.nio.IOControl;\n+import org.apache.http.nio.protocol.HttpAsyncResponseConsumer;\n import org.apache.http.protocol.HttpContext;\n \n+import java.lang.reflect.Constructor;\n+import java.lang.reflect.InvocationTargetException;\n+import java.lang.reflect.Modifier;\n import java.util.concurrent.atomic.AtomicReference;\n \n+import static org.hamcrest.CoreMatchers.instanceOf;\n import static org.junit.Assert.assertEquals;\n import static org.junit.Assert.assertSame;\n+import static org.junit.Assert.assertThat;\n import static org.junit.Assert.assertTrue;\n import static org.mockito.Mockito.mock;\n import static org.mockito.Mockito.spy;\n@@ -97,6 +102,26 @@ public void testConfiguredBufferLimit() throws Exception {\n bufferLimitTest(consumer, bufferLimit);\n }\n \n+ public void testCanConfigureHeapBufferLimitFromOutsidePackage() throws ClassNotFoundException, NoSuchMethodException,\n+ IllegalAccessException, InvocationTargetException, InstantiationException {\n+ int bufferLimit = randomIntBetween(1, Integer.MAX_VALUE);\n+ //we use reflection to make sure that the class can be instantiated from the outside, and the constructor is public\n+ Constructor<?> constructor = HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.class.getConstructor(Integer.TYPE);\n+ assertEquals(Modifier.PUBLIC, constructor.getModifiers() & Modifier.PUBLIC);\n+ Object object = constructor.newInstance(bufferLimit);\n+ assertThat(object, instanceOf(HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.class));\n+ HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory consumerFactory =\n+ (HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory) object;\n+ HttpAsyncResponseConsumer<HttpResponse> consumer = consumerFactory.createHttpAsyncResponseConsumer();\n+ assertThat(consumer, instanceOf(HeapBufferedAsyncResponseConsumer.class));\n+ HeapBufferedAsyncResponseConsumer bufferedAsyncResponseConsumer = (HeapBufferedAsyncResponseConsumer) consumer;\n+ assertEquals(bufferLimit, bufferedAsyncResponseConsumer.getBufferLimit());\n+ }\n+\n+ public void testHttpAsyncResponseConsumerFactoryVisibility() throws ClassNotFoundException {\n+ assertEquals(Modifier.PUBLIC, 
HttpAsyncResponseConsumerFactory.class.getModifiers() & Modifier.PUBLIC);\n+ }\n+\n private static void bufferLimitTest(HeapBufferedAsyncResponseConsumer consumer, int bufferLimit) throws Exception {\n ProtocolVersion protocolVersion = new ProtocolVersion(\"HTTP\", 1, 1);\n StatusLine statusLine = new BasicStatusLine(protocolVersion, 200, \"OK\");", "filename": "client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java", "status": "modified" } ] }
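As an aside to the record above, here is a minimal sketch of how a caller outside the org.elasticsearch.client package might use the now-public factory to raise the default 100MB buffer limit. The host, endpoint, 500MB limit, and the exact performRequest overload are assumptions about the 5.x low-level REST client, not code taken from the PR itself.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.util.Collections;

public class BufferLimitExample {
    public static void main(String[] args) throws Exception {
        // hypothetical local node; adjust host/port as needed
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // raise the heap buffer limit from the 100MB default to 500MB for large responses
            HeapBufferedResponseConsumerFactory consumerFactory =
                    new HeapBufferedResponseConsumerFactory(500 * 1024 * 1024);
            Response response = client.performRequest(
                    "GET", "/_search", Collections.emptyMap(), null, consumerFactory);
            System.out.println(response.getStatusLine());
        }
    }
}
```

The key point is simply that, with the constructor and interface public, this configuration no longer requires placing user code inside the org.elasticsearch.client package.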
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**: 5.3\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8\r\n\r\n**OS version**: MacOS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nRemote reindex sends an invalid request to the source ES when `requests_per_second` is specified.\r\n\r\nProblem is most likely in `org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.java` method `scrollParams` calls `keepAlive.toString()` on `TimeValue` object.\r\nBut `TimeValue.toString` comment says: \r\n\r\n> Note that this method might produce fractional time values (ex 1.6m) which cannot be parsed by method like ...\r\n\r\n**Steps to reproduce**:\r\n 1. start a remote reindex with parameter requests_per_second specified\r\n 2. source database responses with HTTP 200 but includes error \"fractional time values are not supported\"\r\n \r\nEXPECTED - remote reindex should run ok\r\n\r\n**Provide logs (if relevant)**:\r\n```json\r\n{\r\n \"completed\": true, \r\n \"error\": {\r\n \"caused_by\": {\r\n \"reason\": \"POST https://company.com:443/_search/scroll?scroll=5.1m: HTTP/1.1 400 Bad Request\\n{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"parse_exception\\\",\\\"reason\\\":\\\"failed to parse [5.1m], fractional time values are not supported\\\"}],\\\"type\\\":\\\"parse_exception\\\",\\\"reason\\\":\\\"failed to parse [5.1m], fractional time values are not supported\\\",\\\"caused_by\\\":{\\\"type\\\":\\\"number_format_exception\\\",\\\"reason\\\":\\\"For input string: \\\\\\\"5.1\\\\\\\"\\\"}},\\\"status\\\":400}\", \r\n \"type\": \"response_exception\"\r\n }, \r\n \"reason\": \"body={\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"parse_exception\\\",\\\"reason\\\":\\\"failed to parse [5.1m], fractional time values are not supported\\\"}],\\\"type\\\":\\\"parse_exception\\\",\\\"reason\\\":\\\"failed to parse [5.1m], fractional time values are not supported\\\",\\\"caused_by\\\":{\\\"type\\\":\\\"number_format_exception\\\",\\\"reason\\\":\\\"For input string: \\\\\\\"5.1\\\\\\\"\\\"}},\\\"status\\\":400}\", \r\n \"type\": \"status_exception\"\r\n }, \r\n \"task\": {\r\n \"action\": \"indices:data/write/reindex\", \r\n \"cancellable\": true, \r\n \"description\": \"reindex from [scheme=https host=company.com port=443 query={\\n \\\"match_all\\\" : {\\n \\\"boost\\\" : 1.0\\n }\\n}][index1] to [index1]\", \r\n \"id\": 25587, \r\n \"node\": \"7LvaOqfdSZGsfRWElVkqIA\", \r\n \"running_time_in_nanos\": 10876439850, \r\n \"start_time_in_millis\": 1491479089618, \r\n \"status\": {\r\n \"batches\": 1, \r\n \"created\": 0, \r\n \"deleted\": 0, \r\n \"noops\": 0, \r\n \"requests_per_second\": 100.0, \r\n \"retries\": {\r\n \"bulk\": 0, \r\n \"search\": 0\r\n }, \r\n 
\"throttled_millis\": 0, \r\n \"throttled_until_millis\": 0, \r\n \"total\": 19433, \r\n \"updated\": 0, \r\n \"version_conflicts\": 1000\r\n }, \r\n \"type\": \"transport\"\r\n }\r\n}\r\n```", "comments": [ { "body": "Fixed by #23953. Should be fixed in 5.4.0.", "created_at": "2017-04-10T16:49:13Z" } ], "number": 23945, "title": "Reindex - requests_per_second causes scroll to use fractional time value" }
{ "body": "reindex_from_remote was using `TimeValue#toString` to generate the\r\nscroll timeout which is bad because that generates fractional\r\ntime values that are useful for people but bad for Elasticsearch\r\nwhich doesn't like to parse them. This switches it to using\r\n`TimeValue#getStringRep` which spits out whole time values.\r\n\r\nCloses to #23945\r\n\r\nMakes #23828 even more desirable.\r\n", "number": 23953, "review_comments": [], "title": "Fix throttled reindex_from_remote" }
{ "commits": [ { "message": "Fix throttle reindex_from_remote\n\nreindex_from_remote was using `TimeValue#toString` to generate the\nscroll timeout which is bad because that generates fractional\ntime values that are useful for people but bad for Elasticsearch\nwhich doesn't like to parse them. This switches it to using\n`TimeValue#getStringRep` which spits out whole time values.\n\nCloses to #23945\n\nMakes #23828 even more desirable" }, { "message": "Merge branch 'master' into remote_throttle" }, { "message": "Merge branch 'master' into remote_throttle" }, { "message": "Switch test to parseTimeValue\n\nThat should make it more clear that the parsing is what is what\nthe other side is going to do so it is what has to work." } ], "files": [ { "diff": "@@ -59,7 +59,7 @@ static String initialSearchPath(SearchRequest searchRequest) {\n static Map<String, String> initialSearchParams(SearchRequest searchRequest, Version remoteVersion) {\n Map<String, String> params = new HashMap<>();\n if (searchRequest.scroll() != null) {\n- params.put(\"scroll\", searchRequest.scroll().keepAlive().toString());\n+ params.put(\"scroll\", searchRequest.scroll().keepAlive().getStringRep());\n }\n params.put(\"size\", Integer.toString(searchRequest.source().size()));\n if (searchRequest.source().version() == null || searchRequest.source().version() == true) {\n@@ -168,7 +168,7 @@ static String scrollPath() {\n }\n \n static Map<String, String> scrollParams(TimeValue keepAlive) {\n- return singletonMap(\"scroll\", keepAlive.toString());\n+ return singletonMap(\"scroll\", keepAlive.getStringRep());\n }\n \n static HttpEntity scrollEntity(String scroll, Version remoteVersion) {", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuilders.java", "status": "modified" }, { "diff": "@@ -35,11 +35,11 @@\n import java.nio.charset.StandardCharsets;\n import java.util.Map;\n \n+import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.clearScrollEntity;\n import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchEntity;\n import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchParams;\n import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchPath;\n import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scrollEntity;\n-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.clearScrollEntity;\n import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scrollParams;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.either;\n@@ -150,7 +150,11 @@ public void testInitialSearchParamsMisc() {\n \n Map<String, String> params = initialSearchParams(searchRequest, remoteVersion);\n \n- assertThat(params, scroll == null ? not(hasKey(\"scroll\")) : hasEntry(\"scroll\", scroll.toString()));\n+ if (scroll == null) {\n+ assertThat(params, not(hasKey(\"scroll\")));\n+ } else {\n+ assertEquals(scroll, TimeValue.parseTimeValue(params.get(\"scroll\"), \"scroll\"));\n+ }\n assertThat(params, hasEntry(\"size\", Integer.toString(size)));\n assertThat(params, fetchVersion == null || fetchVersion == true ? 
hasEntry(\"version\", null) : not(hasEntry(\"version\", null)));\n }\n@@ -181,7 +185,7 @@ public void testInitialSearchEntity() throws IOException {\n \n public void testScrollParams() {\n TimeValue scroll = TimeValue.parseTimeValue(randomPositiveTimeValue(), \"test\");\n- assertThat(scrollParams(scroll), hasEntry(\"scroll\", scroll.toString()));\n+ assertEquals(scroll, TimeValue.parseTimeValue(scrollParams(scroll).get(\"scroll\"), \"scroll\"));\n }\n \n public void testScrollEntity() throws IOException {", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuildersTests.java", "status": "modified" }, { "diff": "@@ -459,3 +459,87 @@\n id: 1\n - match: { _source.text: \"test\" }\n - is_false: _source.filtered\n+\n+---\n+\"Reindex from remote with rethrottle\":\n+ # Throttling happens between each scroll batch so we need to control the size of the batch by using a single shard\n+ # and a small batch size on the request\n+ - do:\n+ indices.create:\n+ index: source\n+ body:\n+ settings:\n+ number_of_shards: \"1\"\n+ number_of_replicas: \"0\"\n+ - do:\n+ index:\n+ index: source\n+ type: foo\n+ id: 1\n+ body: { \"text\": \"test\" }\n+ - do:\n+ index:\n+ index: source\n+ type: foo\n+ id: 2\n+ body: { \"text\": \"test\" }\n+ - do:\n+ index:\n+ index: source\n+ type: foo\n+ id: 3\n+ body: { \"text\": \"test\" }\n+ - do:\n+ indices.refresh: {}\n+\n+\n+ # Fetch the http host. We use the host of the master because we know there will always be a master.\n+ - do:\n+ cluster.state: {}\n+ - set: { master_node: master }\n+ - do:\n+ nodes.info:\n+ metric: [ http ]\n+ - is_true: nodes.$master.http.publish_address\n+ - set: {nodes.$master.http.publish_address: host}\n+ - do:\n+ reindex:\n+ requests_per_second: .00000001 # About 9.5 years to complete the request\n+ wait_for_completion: false\n+ refresh: true\n+ body:\n+ source:\n+ remote:\n+ host: http://${host}\n+ index: source\n+ size: 1\n+ dest:\n+ index: dest\n+ - match: {task: '/.+:\\d+/'}\n+ - set: {task: task}\n+\n+ - do:\n+ reindex_rethrottle:\n+ requests_per_second: -1\n+ task_id: $task\n+\n+ - do:\n+ tasks.get:\n+ wait_for_completion: true\n+ task_id: $task\n+\n+ - do:\n+ search:\n+ index: dest\n+ body:\n+ query:\n+ match:\n+ text: test\n+ - match: {hits.total: 3}\n+\n+ # Make sure reindex closed all the scroll contexts\n+ - do:\n+ indices.stats:\n+ index: source\n+ metric: search\n+ - match: {indices.source.total.search.open_contexts: 0}", "filename": "modules/reindex/src/test/resources/rest-api-spec/test/reindex/90_remote.yaml", "status": "modified" } ] }
{ "body": "Not sure this is a bug, but I'm looking at deprecation warnings when creating new indices (settings, mappings). When I use the following in Console:\r\n\r\n```\r\nPUT /test\r\n{\r\n \"settings\": {\r\n \"number_of_shards\": 1,\r\n \"shadow_replicas\": true,\r\n \"shared_filesystem\": false\r\n },\r\n \"mappings\": {\r\n \"type\": {\r\n \"properties\": {\r\n \"field\": {\r\n \"type\": \"string\"\r\n },\r\n \"field2\": {\r\n \"type\": \"long\",\r\n \"store\" : \"no\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nI see the following in `elasticsearch_deprecation.log`:\r\n\r\n```\r\n[2017-04-06T17:05:36,473][WARN ][o.e.d.c.s.Setting ] [index.shadow_replicas] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\r\n[2017-04-06T17:05:36,481][WARN ][o.e.d.c.s.Setting ] [index.shared_filesystem] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\r\n[2017-04-06T17:05:36,481][WARN ][o.e.d.c.s.Setting ] [index.shadow_replicas] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\r\n[2017-04-06T17:05:36,487][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [field]\r\n[2017-04-06T17:05:36,488][WARN ][o.e.d.c.x.s.XContentMapValues] Expected a boolean [true/false] for property [field2.store] but got [no]\r\n```\r\n\r\nBut I only get the following deprecation warnings in the warning headers (in Console):\r\n\r\n```\r\n#! Deprecation: [index.shadow_replicas] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\r\n#! Deprecation: [index.shared_filesystem] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\r\n```\r\nI also checked the same with curl, I also get only the two setting warning headers there:\r\n\r\n```\r\ncurl -H \"Content-Type: application/json\" -XPUT localhost:9200/test -d '{\r\n \"settings\": {\r\n \"number_of_shards\" : 1,\r\n \"shadow_replicas\": true,\r\n \"shared_filesystem\": false\r\n },\r\n \"mappings\": {\r\n \"type\": {\r\n \"properties\": {\r\n \"field\" : { \"type\" : \"long\", \"store\" : \"no\" },\r\n \"field2\" : { \"type\":\"string\"}\r\n }\r\n }\r\n }\r\n}' -i\r\n```\r\n\r\nAm I missing something?\r\n\r\n\r\n**Elasticsearch version**: 5.3.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8.0_121\r\n", "comments": [ { "body": "I don't think you are missing anything, I can reproduce and I'd expect those two warnings to be returned as a Warning header, but they are not.", "created_at": "2017-04-06T15:36:14Z" }, { "body": "This looks bad. It appears to have something to do with the mapping update happening on the cluster state update task thread. I'm discussing this one with @jaymode now.", "created_at": "2017-04-06T16:06:14Z" }, { "body": "Good find @cbuescher ", "created_at": "2017-04-06T19:58:54Z" } ], "number": 23947, "title": "Mapping deprecations warnings in logs but not in warning headers" }
{ "body": "This commit preserves the response headers when creating an index and updating settings for an\r\nindex.\r\n\r\nCloses #23947", "number": 23950, "review_comments": [], "title": "Preserve response headers when creating an index" }
{ "commits": [ { "message": "Preserve response headers when creating an index\n\nThis commit preserves the response headers when creating an index and updating settings for an\nindex.\n\nCloses #23947" } ], "files": [ { "diff": "@@ -50,4 +50,12 @@ public void onFailure(Exception e) {\n delegate.onFailure(e);\n }\n }\n+\n+ /**\n+ * Wraps the provided action listener in a {@link ContextPreservingActionListener} that will\n+ * also copy the response headers when the {@link ThreadContext.StoredContext} is closed\n+ */\n+ public static <R> ContextPreservingActionListener<R> wrapPreservingContext(ActionListener<R> listener, ThreadContext threadContext) {\n+ return new ContextPreservingActionListener<>(threadContext.newRestorableContext(true), listener);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java", "status": "modified" }, { "diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest;\n import org.elasticsearch.action.support.ActiveShardCount;\n import org.elasticsearch.action.support.ActiveShardsObserver;\n-import org.elasticsearch.action.support.ContextPreservingActionListener;\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n@@ -93,6 +92,7 @@\n import java.util.function.BiFunction;\n import java.util.function.Predicate;\n \n+import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_CREATION_DATE;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_INDEX_UUID;\n@@ -222,7 +222,9 @@ private void onlyCreateIndex(final CreateIndexClusterStateUpdateRequest request,\n request.settings(updatedSettingsBuilder.build());\n \n clusterService.submitStateUpdateTask(\"create-index [\" + request.index() + \"], cause [\" + request.cause() + \"]\",\n- new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request, wrapPreservingContext(listener)) {\n+ new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request,\n+ wrapPreservingContext(listener, threadPool.getThreadContext())) {\n+\n @Override\n protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n return new ClusterStateUpdateResponse(acknowledged);\n@@ -476,10 +478,6 @@ public void onFailure(String source, Exception e) {\n });\n }\n \n- private ContextPreservingActionListener<ClusterStateUpdateResponse> wrapPreservingContext(ActionListener<ClusterStateUpdateResponse> listener) {\n- return new ContextPreservingActionListener<>(threadPool.getThreadContext().newRestorableContext(false), listener);\n- }\n-\n private List<IndexTemplateMetaData> findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException {\n List<IndexTemplateMetaData> templateMetadata = new ArrayList<>();\n for (ObjectCursor<IndexTemplateMetaData> cursor : state.metaData().templates().values()) {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsClusterStateUpdateRequest;\n import 
org.elasticsearch.action.admin.indices.upgrade.post.UpgradeSettingsClusterStateUpdateRequest;\n-import org.elasticsearch.action.support.ContextPreservingActionListener;\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterState;\n@@ -56,6 +55,8 @@\n import java.util.Map;\n import java.util.Set;\n \n+import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext;\n+\n /**\n * Service responsible for submitting update index settings requests\n */\n@@ -180,7 +181,8 @@ public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request\n final boolean preserveExisting = request.isPreserveExisting();\n \n clusterService.submitStateUpdateTask(\"update-settings\",\n- new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request, wrapPreservingContext(listener)) {\n+ new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request,\n+ wrapPreservingContext(listener, threadPool.getThreadContext())) {\n \n @Override\n protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n@@ -284,10 +286,6 @@ public ClusterState execute(ClusterState currentState) {\n });\n }\n \n- private ContextPreservingActionListener<ClusterStateUpdateResponse> wrapPreservingContext(ActionListener<ClusterStateUpdateResponse> listener) {\n- return new ContextPreservingActionListener<>(threadPool.getThreadContext().newRestorableContext(false), listener);\n- }\n-\n /**\n * Updates the cluster block only iff the setting exists in the given settings\n */\n@@ -307,7 +305,8 @@ private static void maybeUpdateClusterBlock(String[] actualIndices, ClusterBlock\n \n public void upgradeIndexSettings(final UpgradeSettingsClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n clusterService.submitStateUpdateTask(\"update-index-compatibility-versions\",\n- new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request, wrapPreservingContext(listener)) {\n+ new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request,\n+ wrapPreservingContext(listener, threadPool.getThreadContext())) {\n \n @Override\n protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java", "status": "modified" }, { "diff": "@@ -33,11 +33,10 @@ public void testOriginalContextIsPreservedAfterOnResponse() throws IOException {\n if (nonEmptyContext) {\n threadContext.putHeader(\"not empty\", \"value\");\n }\n- ContextPreservingActionListener<Void> actionListener;\n+ final ContextPreservingActionListener<Void> actionListener;\n try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {\n threadContext.putHeader(\"foo\", \"bar\");\n- actionListener = new ContextPreservingActionListener<>(threadContext.newRestorableContext(true),\n- new ActionListener<Void>() {\n+ final ActionListener<Void> delegate = new ActionListener<Void>() {\n @Override\n public void onResponse(Void aVoid) {\n assertEquals(\"bar\", threadContext.getHeader(\"foo\"));\n@@ -48,7 +47,12 @@ public void onResponse(Void aVoid) {\n public void onFailure(Exception e) {\n throw new RuntimeException(\"onFailure shouldn't be called\", e);\n }\n- });\n+ };\n+ if (randomBoolean()) {\n+ actionListener = new ContextPreservingActionListener<>(threadContext.newRestorableContext(true), 
delegate);\n+ } else {\n+ actionListener = ContextPreservingActionListener.wrapPreservingContext(delegate, threadContext);\n+ }\n }\n \n assertNull(threadContext.getHeader(\"foo\"));\n@@ -67,22 +71,28 @@ public void testOriginalContextIsPreservedAfterOnFailure() throws Exception {\n if (nonEmptyContext) {\n threadContext.putHeader(\"not empty\", \"value\");\n }\n- ContextPreservingActionListener<Void> actionListener;\n+ final ContextPreservingActionListener<Void> actionListener;\n try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {\n threadContext.putHeader(\"foo\", \"bar\");\n- actionListener = new ContextPreservingActionListener<>(threadContext.newRestorableContext(true),\n- new ActionListener<Void>() {\n- @Override\n- public void onResponse(Void aVoid) {\n- throw new RuntimeException(\"onResponse shouldn't be called\");\n- }\n-\n- @Override\n- public void onFailure(Exception e) {\n- assertEquals(\"bar\", threadContext.getHeader(\"foo\"));\n- assertNull(threadContext.getHeader(\"not empty\"));\n- }\n- });\n+ final ActionListener<Void> delegate = new ActionListener<Void>() {\n+ @Override\n+ public void onResponse(Void aVoid) {\n+ throw new RuntimeException(\"onResponse shouldn't be called\");\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ assertEquals(\"bar\", threadContext.getHeader(\"foo\"));\n+ assertNull(threadContext.getHeader(\"not empty\"));\n+ }\n+ };\n+\n+ if (randomBoolean()) {\n+ actionListener = new ContextPreservingActionListener<>(threadContext.newRestorableContext(true), delegate);\n+ } else {\n+ actionListener = ContextPreservingActionListener.wrapPreservingContext(delegate, threadContext);\n+ }\n+\n }\n \n assertNull(threadContext.getHeader(\"foo\"));\n@@ -101,25 +111,30 @@ public void testOriginalContextIsWhenListenerThrows() throws Exception {\n if (nonEmptyContext) {\n threadContext.putHeader(\"not empty\", \"value\");\n }\n- ContextPreservingActionListener<Void> actionListener;\n+ final ContextPreservingActionListener<Void> actionListener;\n try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {\n threadContext.putHeader(\"foo\", \"bar\");\n- actionListener = new ContextPreservingActionListener<>(threadContext.newRestorableContext(true),\n- new ActionListener<Void>() {\n- @Override\n- public void onResponse(Void aVoid) {\n- assertEquals(\"bar\", threadContext.getHeader(\"foo\"));\n- assertNull(threadContext.getHeader(\"not empty\"));\n- throw new RuntimeException(\"onResponse called\");\n- }\n-\n- @Override\n- public void onFailure(Exception e) {\n- assertEquals(\"bar\", threadContext.getHeader(\"foo\"));\n- assertNull(threadContext.getHeader(\"not empty\"));\n- throw new RuntimeException(\"onFailure called\");\n- }\n- });\n+ final ActionListener<Void> delegate = new ActionListener<Void>() {\n+ @Override\n+ public void onResponse(Void aVoid) {\n+ assertEquals(\"bar\", threadContext.getHeader(\"foo\"));\n+ assertNull(threadContext.getHeader(\"not empty\"));\n+ throw new RuntimeException(\"onResponse called\");\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ assertEquals(\"bar\", threadContext.getHeader(\"foo\"));\n+ assertNull(threadContext.getHeader(\"not empty\"));\n+ throw new RuntimeException(\"onFailure called\");\n+ }\n+ };\n+\n+ if (randomBoolean()) {\n+ actionListener = new ContextPreservingActionListener<>(threadContext.newRestorableContext(true), delegate);\n+ } else {\n+ actionListener = ContextPreservingActionListener.wrapPreservingContext(delegate, threadContext);\n+ }\n }\n \n 
assertNull(threadContext.getHeader(\"foo\"));", "filename": "core/src/test/java/org/elasticsearch/action/support/ContextPreservingActionListenerTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,28 @@\n+---\n+\"Create index with deprecated settings\":\n+\n+ - skip:\n+ version: \"all\"\n+ reason: removed in 6.0\n+ features: \"warnings\"\n+ - do:\n+ indices.create:\n+ index: test_index\n+ body:\n+ settings:\n+ number_of_shards: 1\n+ shadow_replicas: true\n+ shared_filesystem: false\n+ mappings:\n+ type:\n+ properties:\n+ field:\n+ type: \"string\"\n+ field2:\n+ type: \"long\"\n+ store : \"no\"\n+ warnings:\n+ - \"[index.shadow_replicas] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\"\n+ - \"[index.shared_filesystem] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\"\n+ - \"The [string] field is deprecated, please use [text] or [keyword] instead on [field]\"\n+ - \"Expected a boolean [true/false] for property [field2.store] but got [no]\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.create/20_warnings.yaml", "status": "added" } ] }
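A rough standalone sketch of the helper this PR introduces. The Void listener, the sample Warning header, and the stashed-context simulation are illustrative assumptions modeled on the tests above, not code from the change itself.

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;

import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext;

public class PreserveWarningsSketch {
    public static void main(String[] args) {
        ThreadContext threadContext = new ThreadContext(Settings.EMPTY);

        ActionListener<Void> listener = new ActionListener<Void>() {
            @Override
            public void onResponse(Void aVoid) {
                // warnings recorded while the (simulated) cluster state update ran
                // should be visible again here, so they can reach the response headers
                System.out.println(threadContext.getResponseHeaders());
            }

            @Override
            public void onFailure(Exception e) {
                e.printStackTrace();
            }
        };

        ActionListener<Void> wrapped = wrapPreservingContext(listener, threadContext);

        // simulate another thread context adding a deprecation warning
        // before completing the listener
        try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
            threadContext.addResponseHeader("Warning",
                    "[index.shadow_replicas] setting was deprecated in Elasticsearch ...");
            wrapped.onResponse(null);
        }
    }
}
```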
{ "body": "Shingle filters creates a graph token stream that the query parser is now able to consume.\r\nThough when shingles of different size are produced the number of paths in the graph can explode.\r\nThis is also the case when `output_unigram` is set to true.\r\nIn 5.3 all paths are generated before building the query so a node can OOM easily on a single big input query. In 5.4 and beyond we detect the explosion earlier but we fail the entire request.\r\nInstead we should be able to detect the problematic token filters and disable the graph analysis for these fields.\r\n", "comments": [ { "body": "we should also update the docs to explain better config \r\n\r\ni'd consider removing the min/max shingles settings in favour of a single size, and removing output_unigram too?", "created_at": "2017-04-05T13:59:14Z" }, { "body": "@clintongormley 👍 for a doc on what a better config would be (if possible without altering the outcome of a request)", "created_at": "2017-04-10T12:11:05Z" } ], "number": 23918, "title": "Shingle filters that produce shingles of different size can create gigantic queries" }
{ "body": "Shingle filters that produce shingles of different size and CJK filters that produce bigram AND unigram are problematic when\r\n we analyze the graph they produce. The position for each shingle size are not aligned so each position has at least two side paths.\r\n So in order to avoid paths explosion this change disables the graph analysis at query time for field analyzers that contain these filters\r\n with a problematic configuration.\r\n\r\nCloses #23918\r\n", "number": 23920, "review_comments": [ { "body": "should we also check that the correct query is built if the analyzer is sane and does not output unigrams?", "created_at": "2017-04-05T17:16:22Z" } ], "title": "Disable graph analysis at query time for shingle and cjk filters producing tokens of different size" }
{ "commits": [ { "message": "Disable graph analysis at query time for shingle and cjk filters producing tokens of different size\n\nShingle filters that produce shingles of different size and CJK filters that produce bigram AND unigram are problematic when\n we analyze the graph they produce. The position for each shingle size are not aligned so each position has at least two side paths.\n So in order to avoid paths explosion this change disables the graph analysis at query time for field analyzers that contain these filters\n with a problematic configuration.\n\nCloses #23918" }, { "message": "fix style checks" }, { "message": "the last checkstyle error" }, { "message": "Disables graph analysis for phrase queries as well" }, { "message": "Cleanup dead code" } ], "files": [ { "diff": "@@ -0,0 +1,32 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.analysis.miscellaneous;\n+\n+import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.util.Attribute;\n+import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;\n+\n+/**\n+ * This attribute can be used to indicate that the {@link PositionLengthAttribute}\n+ * should not be taken in account in this {@link TokenStream}.\n+ * Query parsers can extract this information to decide if this token stream should be analyzed\n+ * as a graph or not.\n+ */\n+public interface DisableGraphAttribute extends Attribute {}", "filename": "core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttribute.java", "status": "added" }, { "diff": "@@ -0,0 +1,38 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.analysis.miscellaneous;\n+\n+import org.apache.lucene.util.AttributeImpl;\n+import org.apache.lucene.util.AttributeReflector;\n+\n+/** Default implementation of {@link DisableGraphAttribute}. 
*/\n+public class DisableGraphAttributeImpl extends AttributeImpl implements DisableGraphAttribute {\n+ public DisableGraphAttributeImpl() {}\n+\n+ @Override\n+ public void clear() {}\n+\n+ @Override\n+ public void reflectWith(AttributeReflector reflector) {\n+ }\n+\n+ @Override\n+ public void copyTo(AttributeImpl target) {}\n+}", "filename": "core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttributeImpl.java", "status": "added" }, { "diff": "@@ -20,6 +20,7 @@\n package org.apache.lucene.queryparser.classic;\n \n import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n@@ -49,14 +50,14 @@\n import org.elasticsearch.index.mapper.StringFieldType;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.support.QueryParsers;\n+import org.elasticsearch.index.analysis.ShingleTokenFilterFactory;\n \n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collection;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n-import java.util.Objects;\n \n import static java.util.Collections.unmodifiableMap;\n import static org.elasticsearch.common.lucene.search.Queries.fixNegativeQueryIfNeeded;\n@@ -805,4 +806,30 @@ public Query parse(String query) throws ParseException {\n }\n return super.parse(query);\n }\n+\n+ /**\n+ * Checks if graph analysis should be enabled for the field depending\n+ * on the provided {@link Analyzer}\n+ */\n+ protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field,\n+ String queryText, boolean quoted, int phraseSlop) {\n+ assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST;\n+\n+ // Use the analyzer to get all the tokens, and then build an appropriate\n+ // query based on the analysis chain.\n+ try (TokenStream source = analyzer.tokenStream(field, queryText)) {\n+ if (source.hasAttribute(DisableGraphAttribute.class)) {\n+ /**\n+ * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid\n+ * paths explosion. 
See {@link ShingleTokenFilterFactory} for details.\n+ */\n+ setEnableGraphQueries(false);\n+ }\n+ Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop);\n+ setEnableGraphQueries(true);\n+ return query;\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"Error analyzing query text\", e);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.cjk.CJKBigramFilter;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.IndexSettings;\n@@ -74,7 +75,17 @@ public CJKBigramFilterFactory(IndexSettings indexSettings, Environment environme\n \n @Override\n public TokenStream create(TokenStream tokenStream) {\n- return new CJKBigramFilter(tokenStream, flags, outputUnigrams);\n+ CJKBigramFilter filter = new CJKBigramFilter(tokenStream, flags, outputUnigrams);\n+ if (outputUnigrams) {\n+ /**\n+ * We disable the graph analysis on this token stream\n+ * because it produces bigrams AND unigrams.\n+ * Graph analysis on such token stream is useless and dangerous as it may create too many paths\n+ * since shingles of different size are not aligned in terms of positions.\n+ */\n+ filter.addAttribute(DisableGraphAttribute.class);\n+ }\n+ return filter;\n }\n \n }", "filename": "core/src/main/java/org/elasticsearch/index/analysis/CJKBigramFilterFactory.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.analysis;\n \n import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.shingle.ShingleFilter;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n@@ -86,6 +87,15 @@ public TokenStream create(TokenStream tokenStream) {\n filter.setOutputUnigramsIfNoShingles(outputUnigramsIfNoShingles);\n filter.setTokenSeparator(tokenSeparator);\n filter.setFillerToken(fillerToken);\n+ if (outputUnigrams || (minShingleSize != maxShingleSize)) {\n+ /**\n+ * We disable the graph analysis on this token stream\n+ * because it produces shingles of different size.\n+ * Graph analysis on such token stream is useless and dangerous as it may create too many paths\n+ * since shingles of different size are not aligned in terms of positions.\n+ */\n+ filter.addAttribute(DisableGraphAttribute.class);\n+ }\n return filter;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactory.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n@@ -31,6 +32,7 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.SynonymQuery;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.index.analysis.ShingleTokenFilterFactory;\n import org.elasticsearch.index.mapper.MappedFieldType;\n \n import java.io.IOException;\n@@ -167,6 +169,32 @@ public Query 
newPrefixQuery(String text) {\n return super.simplify(bq.build());\n }\n \n+ /**\n+ * Checks if graph analysis should be enabled for the field depending\n+ * on the provided {@link Analyzer}\n+ */\n+ protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field,\n+ String queryText, boolean quoted, int phraseSlop) {\n+ assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST;\n+\n+ // Use the analyzer to get all the tokens, and then build an appropriate\n+ // query based on the analysis chain.\n+ try (TokenStream source = analyzer.tokenStream(field, queryText)) {\n+ if (source.hasAttribute(DisableGraphAttribute.class)) {\n+ /**\n+ * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid\n+ * paths explosion. See {@link ShingleTokenFilterFactory} for details.\n+ */\n+ setEnableGraphQueries(false);\n+ }\n+ Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop);\n+ setEnableGraphQueries(true);\n+ return query;\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"Error analyzing query text\", e);\n+ }\n+ }\n+\n private static Query wrapWithBoost(Query query, float boost) {\n if (boost != AbstractQueryBuilder.DEFAULT_BOOST) {\n return new BoostQuery(query, boost);", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java", "status": "modified" }, { "diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.index.search;\n \n import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n+import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.ExtendedCommonTermsQuery;\n import org.apache.lucene.search.BooleanClause;\n@@ -49,6 +51,7 @@\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.unit.Fuzziness;\n+import org.elasticsearch.index.analysis.ShingleTokenFilterFactory;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.support.QueryParsers;\n@@ -320,6 +323,32 @@ protected Query newSynonymQuery(Term[] terms) {\n return blendTermsQuery(terms, mapper);\n }\n \n+ /**\n+ * Checks if graph analysis should be enabled for the field depending\n+ * on the provided {@link Analyzer}\n+ */\n+ protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field,\n+ String queryText, boolean quoted, int phraseSlop) {\n+ assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST;\n+\n+ // Use the analyzer to get all the tokens, and then build an appropriate\n+ // query based on the analysis chain.\n+ try (TokenStream source = analyzer.tokenStream(field, queryText)) {\n+ if (source.hasAttribute(DisableGraphAttribute.class)) {\n+ /**\n+ * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid\n+ * paths explosion. 
See {@link ShingleTokenFilterFactory} for details.\n+ */\n+ setEnableGraphQueries(false);\n+ }\n+ Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop);\n+ setEnableGraphQueries(true);\n+ return query;\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"Error analyzing query text\", e);\n+ }\n+ }\n+\n public Query createPhrasePrefixQuery(String field, String queryText, int phraseSlop, int maxExpansions) {\n final Query query = createFieldQuery(getAnalyzer(), Occur.MUST, field, queryText, true, phraseSlop);\n return toMultiPhrasePrefix(query, phraseSlop, maxExpansions);", "filename": "core/src/main/java/org/elasticsearch/index/search/MatchQuery.java", "status": "modified" }, { "diff": "@@ -19,7 +19,9 @@\n \n package org.elasticsearch.index.analysis;\n \n+import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.Tokenizer;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.standard.StandardTokenizer;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.ESTokenStreamTestCase;\n@@ -69,4 +71,25 @@ public void testHanUnigramOnly() throws IOException {\n tokenizer.setReader(new StringReader(source));\n assertTokenStreamContents(tokenFilter.create(tokenizer), expected);\n }\n+\n+ public void testDisableGraph() throws IOException {\n+ ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromClassPath(createTempDir(), RESOURCE);\n+ TokenFilterFactory allFlagsFactory = analysis.tokenFilter.get(\"cjk_all_flags\");\n+ TokenFilterFactory hanOnlyFactory = analysis.tokenFilter.get(\"cjk_han_only\");\n+\n+ String source = \"多くの学生が試験に落ちた。\";\n+ Tokenizer tokenizer = new StandardTokenizer();\n+ tokenizer.setReader(new StringReader(source));\n+ try (TokenStream tokenStream = allFlagsFactory.create(tokenizer)) {\n+ // This config outputs different size of ngrams so graph analysis is disabled\n+ assertTrue(tokenStream.hasAttribute(DisableGraphAttribute.class));\n+ }\n+\n+ tokenizer = new StandardTokenizer();\n+ tokenizer.setReader(new StringReader(source));\n+ try (TokenStream tokenStream = hanOnlyFactory.create(tokenizer)) {\n+ // This config uses only bigrams so graph analysis is enabled\n+ assertFalse(tokenStream.hasAttribute(DisableGraphAttribute.class));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/analysis/CJKFilterFactoryTests.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.Tokenizer;\n import org.apache.lucene.analysis.core.WhitespaceTokenizer;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.ESTokenStreamTestCase;\n \n@@ -80,4 +81,25 @@ public void testFillerToken() throws IOException {\n TokenStream stream = new StopFilter(tokenizer, StopFilter.makeStopSet(\"the\"));\n assertTokenStreamContents(tokenFilter.create(stream), expected);\n }\n+\n+ public void testDisableGraph() throws IOException {\n+ ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromClassPath(createTempDir(), RESOURCE);\n+ TokenFilterFactory shingleFiller = analysis.tokenFilter.get(\"shingle_filler\");\n+ TokenFilterFactory shingleInverse = analysis.tokenFilter.get(\"shingle_inverse\");\n+\n+ String source = \"hello world\";\n+ Tokenizer tokenizer = new WhitespaceTokenizer();\n+ tokenizer.setReader(new StringReader(source));\n+ try 
(TokenStream stream = shingleFiller.create(tokenizer)) {\n+ // This config uses different size of shingles so graph analysis is disabled\n+ assertTrue(stream.hasAttribute(DisableGraphAttribute.class));\n+ }\n+\n+ tokenizer = new WhitespaceTokenizer();\n+ tokenizer.setReader(new StringReader(source));\n+ try (TokenStream stream = shingleInverse.create(tokenizer)) {\n+ // This config uses a single size of shingles so graph analysis is enabled\n+ assertFalse(stream.hasAttribute(DisableGraphAttribute.class));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,251 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.query;\n+\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.SynonymQuery;\n+import org.apache.lucene.search.BooleanQuery;\n+import org.apache.lucene.search.BooleanClause;\n+import org.apache.lucene.search.TermQuery;\n+import org.apache.lucene.search.PhraseQuery;\n+import org.apache.lucene.search.DisjunctionMaxQuery;\n+import org.apache.lucene.search.MultiPhraseQuery;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.search.MatchQuery;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.junit.After;\n+import org.junit.Before;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n+\n+/**\n+ * Makes sure that graph analysis is disabled with shingle filters of different size\n+ */\n+public class DisableGraphQueryTests extends ESSingleNodeTestCase {\n+ private static IndexService indexService;\n+ private static QueryShardContext shardContext;\n+ private static Query expectedQuery;\n+ private static Query expectedPhraseQuery;\n+ private static Query expectedQueryWithUnigram;\n+ private static Query expectedPhraseQueryWithUnigram;\n+\n+ @Before\n+ public void setup() {\n+ Settings settings = Settings.builder()\n+ .put(\"index.analysis.filter.shingle.type\", \"shingle\")\n+ .put(\"index.analysis.filter.shingle.output_unigrams\", false)\n+ .put(\"index.analysis.filter.shingle.min_size\", 2)\n+ .put(\"index.analysis.filter.shingle.max_size\", 2)\n+ .put(\"index.analysis.filter.shingle_unigram.type\", \"shingle\")\n+ .put(\"index.analysis.filter.shingle_unigram.output_unigrams\", true)\n+ .put(\"index.analysis.filter.shingle_unigram.min_size\", 2)\n+ .put(\"index.analysis.filter.shingle_unigram.max_size\", 2)\n+ .put(\"index.analysis.analyzer.text_shingle.tokenizer\", \"whitespace\")\n+ .put(\"index.analysis.analyzer.text_shingle.filter\", 
\"lowercase, shingle\")\n+ .put(\"index.analysis.analyzer.text_shingle_unigram.tokenizer\", \"whitespace\")\n+ .put(\"index.analysis.analyzer.text_shingle_unigram.filter\",\n+ \"lowercase, shingle_unigram\")\n+ .build();\n+ indexService = createIndex(\"test\", settings, \"t\",\n+ \"text_shingle\", \"type=text,analyzer=text_shingle\",\n+ \"text_shingle_unigram\", \"type=text,analyzer=text_shingle_unigram\");\n+ shardContext = indexService.newQueryShardContext(0, null, () -> 0L);\n+\n+ // parsed queries for \"text_shingle_unigram:(foo bar baz)\" with query parsers\n+ // that ignores position length attribute\n+ expectedQueryWithUnigram= new BooleanQuery.Builder()\n+ .add(\n+ new SynonymQuery(\n+ new Term(\"text_shingle_unigram\", \"foo\"),\n+ new Term(\"text_shingle_unigram\", \"foo bar\")\n+ ), BooleanClause.Occur.SHOULD)\n+ .add(\n+ new SynonymQuery(\n+ new Term(\"text_shingle_unigram\", \"bar\"),\n+ new Term(\"text_shingle_unigram\", \"bar baz\")\n+ ), BooleanClause.Occur.SHOULD)\n+ .add(\n+ new TermQuery(\n+ new Term(\"text_shingle_unigram\", \"baz\")\n+ ), BooleanClause.Occur.SHOULD)\n+ .build();\n+\n+ // parsed query for \"text_shingle_unigram:\\\"foo bar baz\\\" with query parsers\n+ // that ignores position length attribute\n+ expectedPhraseQueryWithUnigram = new MultiPhraseQuery.Builder()\n+ .add(\n+ new Term[] {\n+ new Term(\"text_shingle_unigram\", \"foo\"),\n+ new Term(\"text_shingle_unigram\", \"foo bar\")\n+ }, 0)\n+ .add(\n+ new Term[] {\n+ new Term(\"text_shingle_unigram\", \"bar\"),\n+ new Term(\"text_shingle_unigram\", \"bar baz\")\n+ }, 1)\n+ .add(\n+ new Term[] {\n+ new Term(\"text_shingle_unigram\", \"baz\"),\n+ }, 2)\n+ .build();\n+\n+ // parsed query for \"text_shingle:(foo bar baz)\n+ expectedQuery = new BooleanQuery.Builder()\n+ .add(\n+ new TermQuery(new Term(\"text_shingle\", \"foo bar\")),\n+ BooleanClause.Occur.SHOULD\n+ )\n+ .add(\n+ new TermQuery(new Term(\"text_shingle\",\"bar baz\")),\n+ BooleanClause.Occur.SHOULD\n+ )\n+ .add(\n+ new TermQuery(new Term(\"text_shingle\",\"baz biz\")),\n+ BooleanClause.Occur.SHOULD\n+ )\n+ .build();\n+\n+ // parsed query for \"text_shingle:\"foo bar baz\"\n+ expectedPhraseQuery = new PhraseQuery.Builder()\n+ .add(\n+ new Term(\"text_shingle\", \"foo bar\")\n+ )\n+ .add(\n+ new Term(\"text_shingle\",\"bar baz\")\n+ )\n+ .add(\n+ new Term(\"text_shingle\",\"baz biz\")\n+ )\n+ .build();\n+ }\n+\n+ @After\n+ public void cleanup() {\n+ indexService = null;\n+ shardContext = null;\n+ expectedQuery = null;\n+ expectedPhraseQuery = null;\n+ }\n+\n+ public void testMatchPhraseQuery() throws IOException {\n+ MatchPhraseQueryBuilder builder =\n+ new MatchPhraseQueryBuilder(\"text_shingle_unigram\", \"foo bar baz\");\n+ Query query = builder.doToQuery(shardContext);\n+ assertThat(expectedPhraseQueryWithUnigram, equalTo(query));\n+\n+ builder =\n+ new MatchPhraseQueryBuilder(\"text_shingle\", \"foo bar baz biz\");\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedPhraseQuery, equalTo(query));\n+ }\n+\n+ public void testMatchQuery() throws IOException {\n+ MatchQueryBuilder builder =\n+ new MatchQueryBuilder(\"text_shingle_unigram\", \"foo bar baz\");\n+ Query query = builder.doToQuery(shardContext);\n+ assertThat(expectedQueryWithUnigram, equalTo(query));\n+\n+ builder = new MatchQueryBuilder(\"text_shingle\", \"foo bar baz biz\");\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedQuery, equalTo(query));\n+ }\n+\n+ public void testMultiMatchQuery() throws IOException {\n+ MultiMatchQueryBuilder builder = new 
MultiMatchQueryBuilder(\"foo bar baz\",\n+ \"text_shingle_unigram\");\n+ Query query = builder.doToQuery(shardContext);\n+ assertThat(expectedQueryWithUnigram, equalTo(query));\n+\n+ builder.type(MatchQuery.Type.PHRASE);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedPhraseQueryWithUnigram, equalTo(query));\n+\n+ builder = new MultiMatchQueryBuilder(\"foo bar baz biz\", \"text_shingle\");\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedQuery, equalTo(query));\n+\n+ builder.type(MatchQuery.Type.PHRASE);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedPhraseQuery, equalTo(query));\n+ }\n+\n+ public void testSimpleQueryString() throws IOException {\n+ SimpleQueryStringBuilder builder = new SimpleQueryStringBuilder(\"foo bar baz\");\n+ builder.field(\"text_shingle_unigram\");\n+ builder.flags(SimpleQueryStringFlag.NONE);\n+ Query query = builder.doToQuery(shardContext);\n+ assertThat(expectedQueryWithUnigram, equalTo(query));\n+\n+ builder = new SimpleQueryStringBuilder(\"\\\"foo bar baz\\\"\");\n+ builder.field(\"text_shingle_unigram\");\n+ builder.flags(SimpleQueryStringFlag.PHRASE);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedPhraseQueryWithUnigram, equalTo(query));\n+\n+ builder = new SimpleQueryStringBuilder(\"foo bar baz biz\");\n+ builder.field(\"text_shingle\");\n+ builder.flags(SimpleQueryStringFlag.NONE);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedQuery, equalTo(query));\n+\n+ builder = new SimpleQueryStringBuilder(\"\\\"foo bar baz biz\\\"\");\n+ builder.field(\"text_shingle\");\n+ builder.flags(SimpleQueryStringFlag.PHRASE);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedPhraseQuery, equalTo(query));\n+ }\n+\n+ public void testQueryString() throws IOException {\n+ QueryStringQueryBuilder builder = new QueryStringQueryBuilder(\"foo bar baz\");\n+ builder.field(\"text_shingle_unigram\");\n+ builder.splitOnWhitespace(false);\n+ Query query = builder.doToQuery(shardContext);\n+ assertThat(expectedQueryWithUnigram, equalTo(query));\n+\n+ builder = new QueryStringQueryBuilder(\"\\\"foo bar baz\\\"\");\n+ builder.field(\"text_shingle_unigram\");\n+ builder.splitOnWhitespace(false);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n+ DisjunctionMaxQuery maxQuery = (DisjunctionMaxQuery) query;\n+ assertThat(maxQuery.getDisjuncts().size(), equalTo(1));\n+ assertThat(expectedPhraseQueryWithUnigram, equalTo(maxQuery.getDisjuncts().get(0)));\n+\n+ builder = new QueryStringQueryBuilder(\"foo bar baz biz\");\n+ builder.field(\"text_shingle\");\n+ builder.splitOnWhitespace(false);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(expectedQuery, equalTo(query));\n+\n+ builder = new QueryStringQueryBuilder(\"\\\"foo bar baz biz\\\"\");\n+ builder.field(\"text_shingle\");\n+ builder.splitOnWhitespace(false);\n+ query = builder.doToQuery(shardContext);\n+ assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n+ maxQuery = (DisjunctionMaxQuery) query;\n+ assertThat(maxQuery.getDisjuncts().size(), equalTo(1));\n+ assertThat(expectedPhraseQuery, equalTo(maxQuery.getDisjuncts().get(0)));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/index/query/DisableGraphQueryTests.java", "status": "added" } ] }
{ "body": "**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**: [analysis-phonetic]\r\n\r\n**JVM version**: Oracle java 1.8.0_77 (build 1.8.0_77-b03)\r\n\r\n**OS version**: Windows 7\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen using create index API, I can define custom analyzers in \"analysis\" section.\r\nI can also define the default mapping in the \"mapping\" section.\r\nBut when I want to use one of my custom analyzers in the mapping section, ElasticSearch fails.\r\n\r\n**Steps to reproduce**:\r\n`\r\nDELETE test_custom_analyzer\r\n`\r\n`\r\nPUT test_custom_analyzer\r\n{\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"whitespace\",\r\n \"filter\": [\r\n \"elision\",\r\n \"lowercase\",\r\n \"asciifolding\"\r\n ]\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"field1\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n`\r\n\r\n**Provide logs (if relevant)**:\r\n\r\nelasticsearch response:\r\n`{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"analyzer [my_analyzer] not found for field [field1]\"\r\n }\r\n ],\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"Failed to parse mapping [_default_]: analyzer [my_analyzer] not found for field [field1]\",\r\n \"caused_by\": {\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"analyzer [my_analyzer] not found for field [field1]\"\r\n }\r\n },\r\n \"status\": 400\r\n}`\r\n\r\n\r\n\r\n", "comments": [ { "body": "Note : after a few tests, it seems that elastic ignores the \"analysis\" section when you provide a \"mapping\" section. If you provide a \"mapping\" section, you have to embed the \"analysis\" section into a \"settings\"/\"index\" section.\r\n\r\nThe following syntax reports no error:\r\n`PUT test_custom_analyzer\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"whitespace\",\r\n \"filter\": [\r\n \"elision\",\r\n \"lowercase\",\r\n \"asciifolding\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"field1\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer\"\r\n }\r\n }\r\n }\r\n }\r\n}`", "created_at": "2017-03-27T04:11:39Z" }, { "body": "Yep, it looks like illegal syntax is ignored in this API e.g.\r\n\r\n\tPUT test\r\n\t{\r\n\t\t\"FOO_SHOULD_BE_ILLEGAL_HERE\": {\r\n\t\t\t\"BAR_IS_THE_SAME\": 42\r\n\t\t},\r\n\t\t\"mappings\": {\r\n\t\t\t\"test\": {\r\n\t\t\t\t\"properties\": {\r\n\t\t\t\t\t\"field1\": {\r\n\t\t\t\t\t\t\"type\": \"text\"\r\n\t\t\t\t\t}\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\t} \r\n\r\n\t{\r\n\t \"acknowledged\": true,\r\n\t \"shards_acknowledged\": true\r\n\t}\r\n\r\nYour illegal \"analysis\" section like my illegal \"foo\" section above is overlooked and could give the false impression that it was accepted.", "created_at": "2017-03-27T08:13:18Z" }, { "body": "We should complain about unknown keys at the same level as `mappings`/`settings`", "created_at": "2017-03-28T13:31:37Z" }, { "body": "It looks like the top level keys for create index (`mappings` `settings`and `aliases`) are not verified. \r\nI would like to have a go at it :)", "created_at": "2017-03-30T12:59:38Z" } ], "number": 23755, "title": "Create index : unable to use custom analyzer in default mapping at index creation time" }
{ "body": "Create index should accept only `settings`, `mappings` or `aliases` as top-level keys. Any other entry will result in an exception.\r\n\r\nThe behavior of the create index request till now was that iff none of the supported keys were found (`settings`, `mappings` or `aliases`) then the top level elements were treated as being `settings`.\r\n\r\nIn other words\r\n```\r\nPUT test \r\n{ \r\n \"index.number_of_shards\" : 3 \r\n}\r\n```\r\nactually created the index `test` with 3 shards. The implicit parsing of any keys other than the supported ones as `settings` is removed by this PR (please refer to https://github.com/elastic/elasticsearch/pull/23846#discussion_r109138114).\r\n\r\n```\r\nPUT test\r\n{\r\n\t\"FOO_SHOULD_BE_ILLEGAL_HERE\": {\r\n\t\t\"BAR_IS_THE_SAME\": 42\r\n\t},\r\n\t\"mappings\": {\r\n\t\t\"test\": {\r\n\t\t\t\"properties\": {\r\n\t\t\t\t\"field1\": {\r\n\t\t\t\t\t\"type\": \"text\"\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n} \r\n```\r\n```\r\nPUT test\r\n{\r\n\t\"FOO_SHOULD_BE_ILLEGAL_HERE\": {\r\n\t\t\"BAR_IS_THE_SAME\": 42\r\n\t}\r\n} \r\n```\r\nboth throw and exception\r\n\r\n(As in 5.x only the first one will throw an exception)\r\n\r\nAddresses #23755 ", "number": 23869, "review_comments": [ { "body": "I think this message is a little deceptive, since this custom elements are allowed. I think just keep it simple, something like \"unsupported key X for create index\"?", "created_at": "2017-09-11T18:14:22Z" }, { "body": "Typically we would use JsonXContent.contentBuilder() (as can be seen in testSerialization above). Can you please switch to that?", "created_at": "2017-09-11T18:17:12Z" } ], "title": "Validate top-level keys for create index request (#23755)" }
{ "commits": [ { "message": "Validate top-level keys for create index request (#23755)\n\n\"settings\" must be explicitly set, otherwise an exception will be thrown" }, { "message": "Merge branch 'master' into CreateIndex_checkKeys" }, { "message": "Tweak error message" }, { "message": "Fix bogus rest test" }, { "message": "Fix another bogus rest test" }, { "message": "Fix docs test" }, { "message": "Merge branch 'master' into CreateIndex_checkKeys" } ], "files": [ { "diff": "@@ -374,38 +374,32 @@ public CreateIndexRequest source(BytesReference source, XContentType xContentTyp\n */\n @SuppressWarnings(\"unchecked\")\n public CreateIndexRequest source(Map<String, ?> source) {\n- boolean found = false;\n for (Map.Entry<String, ?> entry : source.entrySet()) {\n String name = entry.getKey();\n if (name.equals(\"settings\")) {\n- found = true;\n settings((Map<String, Object>) entry.getValue());\n } else if (name.equals(\"mappings\")) {\n- found = true;\n Map<String, Object> mappings = (Map<String, Object>) entry.getValue();\n for (Map.Entry<String, Object> entry1 : mappings.entrySet()) {\n mapping(entry1.getKey(), (Map<String, Object>) entry1.getValue());\n }\n } else if (name.equals(\"aliases\")) {\n- found = true;\n aliases((Map<String, Object>) entry.getValue());\n } else {\n // maybe custom?\n IndexMetaData.Custom proto = IndexMetaData.lookupPrototype(name);\n if (proto != null) {\n- found = true;\n try {\n customs.put(name, proto.fromMap((Map<String, Object>) entry.getValue()));\n } catch (IOException e) {\n throw new ElasticsearchParseException(\"failed to parse custom metadata for [{}]\", name);\n }\n+ } else {\n+ // found a key which is neither custom defined nor one of the supported ones\n+ throw new ElasticsearchParseException(\"unknown key [{}] for create index\", name);\n }\n }\n }\n- if (!found) {\n- // the top level are settings, use them\n- settings(source);\n- }\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -31,6 +32,7 @@\n import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n import java.util.HashMap;\n+import java.util.Locale;\n import java.util.Map;\n \n public class CreateIndexRequestBuilderTests extends ESTestCase {\n@@ -58,16 +60,23 @@ public void tearDown() throws Exception {\n */\n public void testSetSource() throws IOException {\n CreateIndexRequestBuilder builder = new CreateIndexRequestBuilder(this.testClient, CreateIndexAction.INSTANCE);\n- builder.setSource(\"{\\\"\"+KEY+\"\\\" : \\\"\"+VALUE+\"\\\"}\", XContentType.JSON);\n+ \n+ ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, \n+ () -> {builder.setSource(\"{\\\"\"+KEY+\"\\\" : \\\"\"+VALUE+\"\\\"}\", XContentType.JSON);});\n+ assertEquals(String.format(Locale.ROOT, \"unknown key [%s] for create index\", KEY), e.getMessage());\n+ \n+ builder.setSource(\"{\\\"settings\\\" : {\\\"\"+KEY+\"\\\" : \\\"\"+VALUE+\"\\\"}}\", XContentType.JSON);\n assertEquals(VALUE, builder.request().settings().get(KEY));\n \n- XContentBuilder xContent = XContentFactory.jsonBuilder().startObject().field(KEY, VALUE).endObject();\n+ XContentBuilder xContent = 
XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"settings\").field(KEY, VALUE).endObject().endObject();\n xContent.close();\n builder.setSource(xContent);\n assertEquals(VALUE, builder.request().settings().get(KEY));\n \n ByteArrayOutputStream docOut = new ByteArrayOutputStream();\n- XContentBuilder doc = XContentFactory.jsonBuilder(docOut).startObject().field(KEY, VALUE).endObject();\n+ XContentBuilder doc = XContentFactory.jsonBuilder(docOut).startObject()\n+ .startObject(\"settings\").field(KEY, VALUE).endObject().endObject();\n doc.close();\n builder.setSource(docOut.toByteArray(), XContentType.JSON);\n assertEquals(VALUE, builder.request().settings().get(KEY));", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilderTests.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.xcontent.XContentType;\n@@ -45,4 +46,27 @@ public void testSerialization() throws IOException {\n }\n }\n }\n+ \n+ public void testTopLevelKeys() throws IOException {\n+ String createIndex =\n+ \"{\\n\"\n+ + \" \\\"FOO_SHOULD_BE_ILLEGAL_HERE\\\": {\\n\"\n+ + \" \\\"BAR_IS_THE_SAME\\\": 42\\n\"\n+ + \" },\\n\"\n+ + \" \\\"mappings\\\": {\\n\"\n+ + \" \\\"test\\\": {\\n\"\n+ + \" \\\"properties\\\": {\\n\"\n+ + \" \\\"field1\\\": {\\n\"\n+ + \" \\\"type\\\": \\\"text\\\"\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \"}\";\n+\n+ CreateIndexRequest request = new CreateIndexRequest();\n+ ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, \n+ () -> {request.source(createIndex, XContentType.JSON);});\n+ assertEquals(\"unknown key [FOO_SHOULD_BE_ILLEGAL_HERE] for create index\", e.getMessage());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java", "status": "modified" }, { "diff": "@@ -86,25 +86,27 @@ Here is an example:\n --------------------------------------------------\n PUT /compound_word_example\n {\n- \"index\": {\n- \"analysis\": {\n- \"analyzer\": {\n- \"my_analyzer\": {\n- \"type\": \"custom\",\n- \"tokenizer\": \"standard\",\n- \"filter\": [\"dictionary_decompounder\", \"hyphenation_decompounder\"]\n- }\n- },\n- \"filter\": {\n- \"dictionary_decompounder\": {\n- \"type\": \"dictionary_decompounder\",\n- \"word_list\": [\"one\", \"two\", \"three\"]\n+ \"settings\": {\n+ \"index\": {\n+ \"analysis\": {\n+ \"analyzer\": {\n+ \"my_analyzer\": {\n+ \"type\": \"custom\",\n+ \"tokenizer\": \"standard\",\n+ \"filter\": [\"dictionary_decompounder\", \"hyphenation_decompounder\"]\n+ }\n },\n- \"hyphenation_decompounder\": {\n- \"type\" : \"hyphenation_decompounder\",\n- \"word_list_path\": \"analysis/example_word_list.txt\",\n- \"hyphenation_patterns_path\": \"analysis/hyphenation_patterns.xml\",\n- \"max_subword_size\": 22\n+ \"filter\": {\n+ \"dictionary_decompounder\": {\n+ \"type\": \"dictionary_decompounder\",\n+ \"word_list\": [\"one\", \"two\", \"three\"]\n+ },\n+ \"hyphenation_decompounder\": {\n+ \"type\" : \"hyphenation_decompounder\",\n+ \"word_list_path\": \"analysis/example_word_list.txt\",\n+ \"hyphenation_patterns_path\": \"analysis/hyphenation_patterns.xml\",\n+ \"max_subword_size\": 22\n+ }\n }\n }\n }", "filename": 
"docs/reference/analysis/tokenfilters/compound-word-tokenfilter.asciidoc", "status": "modified" }, { "diff": "@@ -3,8 +3,9 @@\n indices.create:\n index: smb-test\n body:\n- index:\n- store.type: smb_mmap_fs\n+ settings:\n+ index:\n+ store.type: smb_mmap_fs\n \n - do:\n index:", "filename": "plugins/store-smb/src/test/resources/rest-api-spec/test/store_smb/15_index_creation.yml", "status": "modified" }, { "diff": "@@ -47,7 +47,7 @@\n - do:\n indices.create:\n index: test\n- body: { \"index.number_of_shards\": 1, \"index.number_of_replicas\": 9 }\n+ body: { \"settings\": { \"index.number_of_shards\": 1, \"index.number_of_replicas\": 9 } }\n \n - do:\n cluster.state:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cluster.allocation_explain/10_basic.yml", "status": "modified" } ] }
{ "body": "Remote nodes in cross-cluster search can be marked as eligible for acting a gateway node via a remote node attribute setting. For example, if search.remote.node.attr is set to \"gateway\", only nodes that have node.attr.gateway set to \"true\" can be connected to for cross-cluster search. Unfortunately, there is a bug in the handling of these attributes due to the use of a dangerous method Boolean#getBoolean(String) which obtains the system property with specified name as a boolean. We are not looking at system properties here, but node settings. This commit fixes this situation, and adds a test. A follow-up will ban the use of Boolean#getBoolean.", "comments": [ { "body": "> A follow-up will ban the use of Boolean#getBoolean.\r\n\r\nI opened #23864 for this.", "created_at": "2017-04-01T16:02:28Z" } ], "number": 23863, "title": "Fix cross-cluster remote node gateway attributes" }
{ "body": "The method Boolean#getBoolean is dangerous. It is too easy to mistakenly invoke this method thinking that it is parsing a string as a boolean. However, what it actually does is get a system property with the specified string, and then attempts to use usual crappy boolean parsing in the JDK to parse that system property as boolean with complete leniency (it parses every input value into either true or false); that is, this method amounts to invoking Boolean#parseBoolean on the result of System#getProperty(String). Boo. This commit bans usage of this method.\r\n\r\nRelates #23863\r\n", "number": 23864, "review_comments": [], "title": "Ban Boolean#getBoolean" }
{ "commits": [ { "message": "Ban Boolean#getBoolean\n\nThe method Boolean#getBoolean is dangerous. It is too easy to mistakenly\ninvoke this method thinking that it is parsing a string as a\nboolean. However, what it actually does is get a system property with\nthe specified string, and then attempts to use usual crappy boolean\nparsing in the JDK to parse that system property as boolean with\ncomplete leniency (it parses every input value into either true or\nfalse); that is, this method amounts to invoking\nBoolean#parseBoolean(String) on the result of\nSystem#getProperty(String). Boo. This commit bans usage of this method." }, { "message": "Fix signature" }, { "message": "Remove excess parenthesis" }, { "message": "I am bad at signatures and should feel bad" }, { "message": "Fix comment" }, { "message": "Merge branch 'master' into ban-get-boolean\n\n* master:\n Fix language in some docs\n CONSOLEify lang-analyzer docs\n Stricter parsing of remote node attribute\n Fix cross-cluster remote node gateway attributes\n FieldCapabilitiesRequest should implements Replaceable since it accepts index patterns\n Cleanup: Remove unused FieldMappers class (#23851)\n Fix FieldCapabilities compilation in Eclipse (#23855)\n Add extra debugging to reindex cancel tests\n Cluster stats should not render empty http/transport types (#23735)" } ], "files": [ { "diff": "@@ -44,4 +44,13 @@ java.net.URLConnection#getInputStream()\n java.net.Socket#connect(java.net.SocketAddress)\n java.net.Socket#connect(java.net.SocketAddress, int)\n java.nio.channels.SocketChannel#open(java.net.SocketAddress)\n-java.nio.channels.SocketChannel#connect(java.net.SocketAddress)\n\\ No newline at end of file\n+java.nio.channels.SocketChannel#connect(java.net.SocketAddress)\n+\n+# This method is misleading, and uses lenient boolean parsing under the hood. If you intend to parse\n+# a system property as a boolean, use\n+# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) on the result of\n+# java.lang.SystemProperty#getProperty(java.lang.String) instead. 
If you were not intending to parse\n+# a system property as a boolean, but instead parse a string to a boolean, use\n+# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) directly on the string.\n+@defaultMessage use org.elasticsearch.common.Booleans#parseBoolean(java.lang.String)\n+java.lang.Boolean#getBoolean(java.lang.String)", "filename": "buildSrc/src/main/resources/forbidden/es-all-signatures.txt", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.randomizedtesting.RandomizedRunner;\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.SecureSM;\n+import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.io.FileSystemUtils;\n@@ -119,7 +120,9 @@ public class BootstrapForTesting {\n perms.add(new FilePermission(System.getProperty(\"tests.config\"), \"read,readlink\"));\n }\n // jacoco coverage output file\n- if (Boolean.getBoolean(\"tests.coverage\")) {\n+ final boolean testsCoverage =\n+ Booleans.parseBoolean(System.getProperty(\"tests.coverage\"));\n+ if (testsCoverage) {\n Path coverageDir = PathUtils.get(System.getProperty(\"tests.coverage.dir\"));\n perms.add(new FilePermission(coverageDir.resolve(\"jacoco.exec\").toString(), \"read,write\"));\n // in case we get fancy and use the -integration goals later:", "filename": "test/framework/src/main/java/org/elasticsearch/bootstrap/BootstrapForTesting.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**: [analysis-phonetic]\r\n\r\n**JVM version**: Oracle java 1.8.0_77 (build 1.8.0_77-b03)\r\n\r\n**OS version**: Windows 7\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen using create index API, I can define custom analyzers in \"analysis\" section.\r\nI can also define the default mapping in the \"mapping\" section.\r\nBut when I want to use one of my custom analyzers in the mapping section, ElasticSearch fails.\r\n\r\n**Steps to reproduce**:\r\n`\r\nDELETE test_custom_analyzer\r\n`\r\n`\r\nPUT test_custom_analyzer\r\n{\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"whitespace\",\r\n \"filter\": [\r\n \"elision\",\r\n \"lowercase\",\r\n \"asciifolding\"\r\n ]\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"field1\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n`\r\n\r\n**Provide logs (if relevant)**:\r\n\r\nelasticsearch response:\r\n`{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"analyzer [my_analyzer] not found for field [field1]\"\r\n }\r\n ],\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"Failed to parse mapping [_default_]: analyzer [my_analyzer] not found for field [field1]\",\r\n \"caused_by\": {\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"analyzer [my_analyzer] not found for field [field1]\"\r\n }\r\n },\r\n \"status\": 400\r\n}`\r\n\r\n\r\n\r\n", "comments": [ { "body": "Note : after a few tests, it seems that elastic ignores the \"analysis\" section when you provide a \"mapping\" section. If you provide a \"mapping\" section, you have to embed the \"analysis\" section into a \"settings\"/\"index\" section.\r\n\r\nThe following syntax reports no error:\r\n`PUT test_custom_analyzer\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"whitespace\",\r\n \"filter\": [\r\n \"elision\",\r\n \"lowercase\",\r\n \"asciifolding\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"field1\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer\"\r\n }\r\n }\r\n }\r\n }\r\n}`", "created_at": "2017-03-27T04:11:39Z" }, { "body": "Yep, it looks like illegal syntax is ignored in this API e.g.\r\n\r\n\tPUT test\r\n\t{\r\n\t\t\"FOO_SHOULD_BE_ILLEGAL_HERE\": {\r\n\t\t\t\"BAR_IS_THE_SAME\": 42\r\n\t\t},\r\n\t\t\"mappings\": {\r\n\t\t\t\"test\": {\r\n\t\t\t\t\"properties\": {\r\n\t\t\t\t\t\"field1\": {\r\n\t\t\t\t\t\t\"type\": \"text\"\r\n\t\t\t\t\t}\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\t} \r\n\r\n\t{\r\n\t \"acknowledged\": true,\r\n\t \"shards_acknowledged\": true\r\n\t}\r\n\r\nYour illegal \"analysis\" section like my illegal \"foo\" section above is overlooked and could give the false impression that it was accepted.", "created_at": "2017-03-27T08:13:18Z" }, { "body": "We should complain about unknown keys at the same level as `mappings`/`settings`", "created_at": "2017-03-28T13:31:37Z" }, { "body": "It looks like the top level keys for create index (`mappings` `settings`and `aliases`) are not verified. \r\nI would like to have a go at it :)", "created_at": "2017-03-30T12:59:38Z" } ], "number": 23755, "title": "Create index : unable to use custom analyzer in default mapping at index creation time" }
{ "body": "Create index accepts only `settings`, `mappings` or `aliases` as top-level keys.\r\n\r\nThe behavior of the create index request is that iff none of the supported keys are found (`settings`, `mappings`, `aliases`) then the top level elements are treated as settings.\r\n\r\nIn other words\r\n```\r\nPUT test \r\n{ \r\n \"index.number_of_shards\" : 3 \r\n}\r\n```\r\nwould actually create the index `test` with 3 shards. This behavior is preserved with this PR.\r\n\r\nThe implicit parsing of any keys other than the supported ones as `settings` is deprecated for 5.x and should be removed in the next major version (please refer to https://github.com/elastic/elasticsearch/pull/23846#discussion_r109138114)\r\n\r\nHowever now \r\n```\r\nPUT test\r\n{\r\n\t\"FOO_SHOULD_BE_ILLEGAL_HERE\": {\r\n\t\t\"BAR_IS_THE_SAME\": 42\r\n\t},\r\n\t\"mappings\": {\r\n\t\t\"test\": {\r\n\t\t\t\"properties\": {\r\n\t\t\t\t\"field1\": {\r\n\t\t\t\t\t\"type\": \"text\"\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n} \r\n```\r\nwill throw an exception as there is a `mapping` defined.\r\n\r\nAddresses #23755 \r\n", "number": 23862, "review_comments": [], "title": "Validate top-level keys for create index request 5.x deprecation (#23755)" }
{ "commits": [ { "message": "Validate top-level keys for create index request (#23755)\n\nDeprecate the implicit parsing of any keys other than 'settings',\n'mappin' or 'aliases' as 'settings'.\nShould be removed in the next major version." } ], "files": [ { "diff": "@@ -35,6 +35,8 @@\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -82,6 +84,8 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>\n private boolean updateAllTypes = false;\n \n private ActiveShardCount waitForActiveShards = ActiveShardCount.DEFAULT;\n+ \n+ private static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(CreateIndexRequest.class));\n \n public CreateIndexRequest() {\n }\n@@ -383,6 +387,7 @@ public CreateIndexRequest source(BytesReference source, XContentType xContentTyp\n @SuppressWarnings(\"unchecked\")\n public CreateIndexRequest source(Map<String, ?> source) {\n boolean found = false;\n+ String unsupportedKey = null;\n for (Map.Entry<String, ?> entry : source.entrySet()) {\n String name = entry.getKey();\n if (name.equals(\"settings\")) {\n@@ -407,12 +412,25 @@ public CreateIndexRequest source(Map<String, ?> source) {\n } catch (IOException e) {\n throw new ElasticsearchParseException(\"failed to parse custom metadata for [{}]\", name);\n }\n+ } else {\n+ // found a key which is neither custom defined nor one of the supported ones\n+ if (unsupportedKey == null) {\n+ unsupportedKey = name;\n+ }\n }\n }\n- }\n+ } \n if (!found) {\n // the top level are settings, use them\n settings(source);\n+ deprecationLogger.deprecated(\"Implicit parsing of [{}] as [settings] is deprecated: \"\n+ + \"instead use \\\"settings\\\": { \\\"{}\\\": ... }\", unsupportedKey, unsupportedKey);\n+\n+ }\n+ if (found && unsupportedKey != null) {\n+ throw new ElasticsearchParseException(\n+ \"unknown key [{}] for a [{}], expected [settings], [mappings] or [aliases]\",\n+ unsupportedKey, XContentParser.Token.START_OBJECT);\n }\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n import java.util.HashMap;\n+import java.util.Locale;\n import java.util.Map;\n \n public class CreateIndexRequestBuilderTests extends ESTestCase {\n@@ -60,17 +61,27 @@ public void testSetSource() throws IOException {\n CreateIndexRequestBuilder builder = new CreateIndexRequestBuilder(this.testClient, CreateIndexAction.INSTANCE);\n builder.setSource(\"{\\\"\"+KEY+\"\\\" : \\\"\"+VALUE+\"\\\"}\", XContentType.JSON);\n assertEquals(VALUE, builder.request().settings().get(KEY));\n+ assertWarnings(String.format(Locale.ROOT,\n+ \"Implicit parsing of [%s] as [settings] is deprecated: \"\n+ + \"instead use \\\"settings\\\": { \\\"%s\\\": ... 
}\", KEY, KEY));\n \n XContentBuilder xContent = XContentFactory.jsonBuilder().startObject().field(KEY, VALUE).endObject();\n xContent.close();\n builder.setSource(xContent);\n assertEquals(VALUE, builder.request().settings().get(KEY));\n+ assertWarnings(String.format(Locale.ROOT,\n+ \"Implicit parsing of [%s] as [settings] is deprecated: \"\n+ + \"instead use \\\"settings\\\": { \\\"%s\\\": ... }\", KEY, KEY));\n \n ByteArrayOutputStream docOut = new ByteArrayOutputStream();\n XContentBuilder doc = XContentFactory.jsonBuilder(docOut).startObject().field(KEY, VALUE).endObject();\n doc.close();\n builder.setSource(docOut.toByteArray(), XContentType.JSON);\n assertEquals(VALUE, builder.request().settings().get(KEY));\n+ assertWarnings(String.format(Locale.ROOT, \n+ \"Implicit parsing of [%s] as [settings] is deprecated: \"\n+ + \"instead use \\\"settings\\\": { \\\"%s\\\": ... }\", KEY, KEY));\n+\n \n Map<String, String> settingsMap = new HashMap<>();\n settingsMap.put(KEY, VALUE);", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilderTests.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -30,6 +31,8 @@\n import java.io.IOException;\n import java.util.Base64;\n \n+import static org.hamcrest.Matchers.containsString;\n+\n public class CreateIndexRequestTests extends ESTestCase {\n \n public void testSerialization() throws IOException {\n@@ -69,4 +72,29 @@ public void testSerializationBwc() throws IOException {\n }\n }\n }\n+ \n+ public void testTopLevelKeys() throws IOException {\n+ String createIndexString =\n+ \"{\\n\"\n+ + \" \\\"FOO_SHOULD_BE_ILLEGAL_HERE\\\": {\\n\"\n+ + \" \\\"BAR_IS_THE_SAME\\\": 42\\n\"\n+ + \" },\\n\"\n+ + \" \\\"mappings\\\": {\\n\"\n+ + \" \\\"test\\\": {\\n\"\n+ + \" \\\"properties\\\": {\\n\"\n+ + \" \\\"field1\\\": {\\n\"\n+ + \" \\\"type\\\": \\\"text\\\"\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \"}\";\n+\n+ CreateIndexRequest request = new CreateIndexRequest();\n+ ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, \n+ () -> {request.source(createIndexString, XContentType.JSON);});\n+ assertThat(e.toString(), containsString(\n+ \"unknown key [FOO_SHOULD_BE_ILLEGAL_HERE] for a [START_OBJECT], \"\n+ + \"expected [settings], [mappings] or [aliases]\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java", "status": "modified" }, { "diff": "@@ -86,25 +86,27 @@ Here is an example:\n --------------------------------------------------\n PUT /compound_word_example\n {\n- \"index\": {\n- \"analysis\": {\n- \"analyzer\": {\n- \"my_analyzer\": {\n- \"type\": \"custom\",\n- \"tokenizer\": \"standard\",\n- \"filter\": [\"dictionary_decompounder\", \"hyphenation_decompounder\"]\n- }\n- },\n- \"filter\": {\n- \"dictionary_decompounder\": {\n- \"type\": \"dictionary_decompounder\",\n- \"word_list\": [\"one\", \"two\", \"three\"]\n+ \"settings\" : {\n+ \"index\": {\n+ \"analysis\": {\n+ \"analyzer\": {\n+ \"my_analyzer\": {\n+ \"type\": \"custom\",\n+ \"tokenizer\": \"standard\",\n+ \"filter\": [\"dictionary_decompounder\", \"hyphenation_decompounder\"]\n+ }\n },\n- \"hyphenation_decompounder\": {\n- \"type\" : 
\"hyphenation_decompounder\",\n- \"word_list_path\": \"analysis/example_word_list.txt\",\n- \"hyphenation_patterns_path\": \"analysis/hyphenation_patterns.xml\",\n- \"max_subword_size\": 22\n+ \"filter\": {\n+ \"dictionary_decompounder\": {\n+ \"type\": \"dictionary_decompounder\",\n+ \"word_list\": [\"one\", \"two\", \"three\"]\n+ },\n+ \"hyphenation_decompounder\": {\n+ \"type\" : \"hyphenation_decompounder\",\n+ \"word_list_path\": \"analysis/example_word_list.txt\",\n+ \"hyphenation_patterns_path\": \"analysis/hyphenation_patterns.xml\",\n+ \"max_subword_size\": 22\n+ }\n }\n }\n }", "filename": "docs/reference/analysis/tokenfilters/compound-word-tokenfilter.asciidoc", "status": "modified" }, { "diff": "@@ -3,8 +3,9 @@\n indices.create:\n index: smb-test\n body:\n- index:\n- store.type: smb_mmap_fs\n+ settings:\n+ index:\n+ store.type: smb_mmap_fs\n \n - do:\n index:", "filename": "plugins/store-smb/src/test/resources/rest-api-spec/test/store_smb/15_index_creation.yaml", "status": "modified" }, { "diff": "@@ -42,7 +42,8 @@\n - do:\n indices.create:\n index: test\n- body: { \"index.number_of_shards\": 1, \"index.number_of_replicas\": 9 }\n+ body:\n+ settings : { \"index.number_of_shards\": 1, \"index.number_of_replicas\": 9 }\n \n - do:\n cluster.state:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cluster.allocation_explain/10_basic.yaml", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**: [analysis-phonetic]\r\n\r\n**JVM version**: Oracle java 1.8.0_77 (build 1.8.0_77-b03)\r\n\r\n**OS version**: Windows 7\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen using create index API, I can define custom analyzers in \"analysis\" section.\r\nI can also define the default mapping in the \"mapping\" section.\r\nBut when I want to use one of my custom analyzers in the mapping section, ElasticSearch fails.\r\n\r\n**Steps to reproduce**:\r\n`\r\nDELETE test_custom_analyzer\r\n`\r\n`\r\nPUT test_custom_analyzer\r\n{\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"whitespace\",\r\n \"filter\": [\r\n \"elision\",\r\n \"lowercase\",\r\n \"asciifolding\"\r\n ]\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"field1\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n`\r\n\r\n**Provide logs (if relevant)**:\r\n\r\nelasticsearch response:\r\n`{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"analyzer [my_analyzer] not found for field [field1]\"\r\n }\r\n ],\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"Failed to parse mapping [_default_]: analyzer [my_analyzer] not found for field [field1]\",\r\n \"caused_by\": {\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"analyzer [my_analyzer] not found for field [field1]\"\r\n }\r\n },\r\n \"status\": 400\r\n}`\r\n\r\n\r\n\r\n", "comments": [ { "body": "Note : after a few tests, it seems that elastic ignores the \"analysis\" section when you provide a \"mapping\" section. If you provide a \"mapping\" section, you have to embed the \"analysis\" section into a \"settings\"/\"index\" section.\r\n\r\nThe following syntax reports no error:\r\n`PUT test_custom_analyzer\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"whitespace\",\r\n \"filter\": [\r\n \"elision\",\r\n \"lowercase\",\r\n \"asciifolding\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"field1\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer\"\r\n }\r\n }\r\n }\r\n }\r\n}`", "created_at": "2017-03-27T04:11:39Z" }, { "body": "Yep, it looks like illegal syntax is ignored in this API e.g.\r\n\r\n\tPUT test\r\n\t{\r\n\t\t\"FOO_SHOULD_BE_ILLEGAL_HERE\": {\r\n\t\t\t\"BAR_IS_THE_SAME\": 42\r\n\t\t},\r\n\t\t\"mappings\": {\r\n\t\t\t\"test\": {\r\n\t\t\t\t\"properties\": {\r\n\t\t\t\t\t\"field1\": {\r\n\t\t\t\t\t\t\"type\": \"text\"\r\n\t\t\t\t\t}\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\t} \r\n\r\n\t{\r\n\t \"acknowledged\": true,\r\n\t \"shards_acknowledged\": true\r\n\t}\r\n\r\nYour illegal \"analysis\" section like my illegal \"foo\" section above is overlooked and could give the false impression that it was accepted.", "created_at": "2017-03-27T08:13:18Z" }, { "body": "We should complain about unknown keys at the same level as `mappings`/`settings`", "created_at": "2017-03-28T13:31:37Z" }, { "body": "It looks like the top level keys for create index (`mappings` `settings`and `aliases`) are not verified. \r\nI would like to have a go at it :)", "created_at": "2017-03-30T12:59:38Z" } ], "number": 23755, "title": "Create index : unable to use custom analyzer in default mapping at index creation time" }
{ "body": "Create index accepts only `settings`, `mappings` or `aliases` as top-level keys.\r\n\r\nCloses #23755", "number": 23846, "review_comments": [ { "body": "can we throw the exception here directly?", "created_at": "2017-03-31T10:34:48Z" }, { "body": "The behavior of the create index request is that iff none of the supported keys are found (`settings`, `mappings`, `aliases`) then the top level elements are treated as `settings`.\r\n\r\nIn other words \r\n`PUT test\r\n{\r\n \"index.number_of_shards\" : 3\r\n}`\r\nwould actually create the index `test` with 3 shards.\r\n\r\nSo if we simply throw here, the preexisting behavior will be altered. And I am reluctant to do so, as there could be users relying on this behavior... (arguably wrong but preexisting)", "created_at": "2017-03-31T10:46:55Z" }, { "body": "Documentation was removed for this back in 1.0 but yes, some people may still be using it. We should add deprecation logging now, and throw an exception in 6.0", "created_at": "2017-03-31T11:11:02Z" }, { "body": "@clintongormley I am not sure what you mean by 'deprecation logging'. Throw with a msg that this is no longer supported?", "created_at": "2017-03-31T11:18:55Z" }, { "body": "See https://github.com/elastic/elasticsearch/blob/5.x/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java#L98 for an example", "created_at": "2017-03-31T11:24:15Z" } ], "title": "Validate top-level keys for create index request " }
{ "commits": [ { "message": "Validate top-level keys for create index request (#23755)" } ], "files": [ { "diff": "@@ -383,6 +383,7 @@ public CreateIndexRequest source(BytesReference source, XContentType xContentTyp\n @SuppressWarnings(\"unchecked\")\n public CreateIndexRequest source(Map<String, ?> source) {\n boolean found = false;\n+ String unsupportedKey = null;\n for (Map.Entry<String, ?> entry : source.entrySet()) {\n String name = entry.getKey();\n if (name.equals(\"settings\")) {\n@@ -407,13 +408,23 @@ public CreateIndexRequest source(Map<String, ?> source) {\n } catch (IOException e) {\n throw new ElasticsearchParseException(\"failed to parse custom metadata for [{}]\", name);\n }\n+ } else {\n+ // found a key which is neither custom defined nor one of the supported ones\n+ if (unsupportedKey == null) {\n+ unsupportedKey = name;\n+ }\n }\n }\n- }\n+ } \n if (!found) {\n // the top level are settings, use them\n settings(source);\n }\n+ if (found && unsupportedKey != null) {\n+ throw new ElasticsearchParseException(\n+ \"unknown key [{}] for a [{}], expected [settings], [mappings] or [aliases]\",\n+ unsupportedKey, XContentParser.Token.START_OBJECT);\n+ }\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -30,6 +31,8 @@\n import java.io.IOException;\n import java.util.Base64;\n \n+import static org.hamcrest.Matchers.containsString;\n+\n public class CreateIndexRequestTests extends ESTestCase {\n \n public void testSerialization() throws IOException {\n@@ -69,4 +72,29 @@ public void testSerializationBwc() throws IOException {\n }\n }\n }\n+ \n+ public void testTopLevelKeys() throws IOException {\n+ String createIndexString =\n+ \"{\\n\"\n+ + \" \\\"FOO_SHOULD_BE_ILLEGAL_HERE\\\": {\\n\"\n+ + \" \\\"BAR_IS_THE_SAME\\\": 42\\n\"\n+ + \" },\\n\"\n+ + \" \\\"mappings\\\": {\\n\"\n+ + \" \\\"test\\\": {\\n\"\n+ + \" \\\"properties\\\": {\\n\"\n+ + \" \\\"field1\\\": {\\n\"\n+ + \" \\\"type\\\": \\\"text\\\"\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \" }\\n\"\n+ + \"}\";\n+\n+ CreateIndexRequest request = new CreateIndexRequest();\n+ ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, \n+ () -> {request.source(createIndexString.getBytes(), XContentType.JSON);});\n+ assertThat(e.toString(), containsString(\n+ \"unknown key [FOO_SHOULD_BE_ILLEGAL_HERE] for a [START_OBJECT], \"\n+ + \"expected [settings], [mappings] or [aliases]\"));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java", "status": "modified" } ] }
{ "body": "When a snapshot fails, the snapshot/_status will return a 500 error. It seems the only way to fetch the actual \"FAILED\" status is by listing the repository/_all. To me, the 500 exception returned when calling the snapshot/_status seems wrong.\r\n\r\n**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**: x-pack\r\n\r\n``` json\r\nPUT /_snapshot/my_backup\r\n{\r\n \"type\": \"fs\",\r\n \"settings\": {\r\n \"compress\": true,\r\n \"location\": \"repo_test\"\r\n }\r\n}\r\n\r\nPUT test1\r\n\r\nPUT /_snapshot/my_backup/snapshot_1\r\n{\r\n \"indices\": \"test1\",\r\n \"ignore_unavailable\": true,\r\n \"include_global_state\": false\r\n}\r\n\r\nGET _snapshot/my_backup/snapshot_1/_status\r\n` response `\r\n{\r\n \"snapshots\": [\r\n {\r\n \"snapshot\": \"snapshot_1\",\r\n \"repository\": \"my_backup\",\r\n \"uuid\": \"8KxZ0zSlQFyh77dqvxc3Mw\",\r\n \"state\": \"SUCCESS\",\r\n\r\n}]}\r\n\r\n`make a \"bad\" index... `\r\nPUT /_cluster/settings\r\n{\r\n \"transient\": {\r\n \"cluster.routing.allocation.enable\": \"none\"\r\n }\r\n}\r\n\r\nPUT test2\r\n\r\nPUT /_snapshot/my_backup/snapshot_2\r\n{\r\n \"indices\": \"test1,test2\",\r\n \"ignore_unavailable\": true,\r\n \"include_global_state\": false\r\n}\r\n\r\nGET _snapshot/my_backup/snapshot_2/_status\r\n` response `\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"index_shard_restore_failed_exception\",\r\n \"reason\": \"failed to read shard snapshot file\",\r\n \"index_uuid\": \"_f7dq3AMSEejQMZF4sbqYA\",\r\n \"shard\": \"0\",\r\n \"index\": \"test1\"\r\n }\r\n ],\r\n \"type\": \"index_shard_restore_failed_exception\",\r\n \"reason\": \"failed to read shard snapshot file\",\r\n \"index_uuid\": \"_f7dq3AMSEejQMZF4sbqYA\",\r\n \"shard\": \"0\",\r\n \"index\": \"test1\",\r\n \"caused_by\": {\r\n \"type\": \"no_such_file_exception\",\r\n \"reason\": \"/Users/jared/tmp/repo_test/indices/5H7x7fA-QsK7xqs6MdO0Bw/0/snap-2XWQ_Sd4QMCdSo1wU4VkoA.dat\"\r\n }\r\n },\r\n \"status\": 500\r\n}\r\n\r\nGET /_snapshot/my_backup/_all?filter_path=*.snapshot,*.state\r\n` response `\r\n{\r\n \"snapshots\": [\r\n {\r\n \"snapshot\": \"snapshot_1\",\r\n \"state\": \"SUCCESS\"\r\n },\r\n {\r\n \"snapshot\": \"snapshot_2\",\r\n \"state\": \"FAILED\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\n", "comments": [ { "body": "This sounds like a legit request to me, @imotov what do you think?", "created_at": "2017-03-24T11:47:42Z" }, { "body": "I agree, the `_status` endpoint for a failed snapshot should return information about the failure in a standard response, not a 500.", "created_at": "2017-03-24T13:47:55Z" }, { "body": "thanks @abeyad ! I will mark adoptme then.", "created_at": "2017-03-24T13:48:17Z" }, { "body": "@abeyad that feels like a bug and not enhancement. What do you think?", "created_at": "2017-03-24T14:25:13Z" }, { "body": "@imotov agreed, i'll change the label", "created_at": "2017-03-24T14:26:18Z" }, { "body": "++ thanks for taking it @abeyad ", "created_at": "2017-03-24T14:43:39Z" }, { "body": "@jpcarey the steps you outlined above does *not* reproduce for me on 5.2.2. 
Instead, for\r\n\r\n```\r\ncurl -XGET \"localhost:9200/_snapshot/fs_repo/snap1\"\r\n```\r\n\r\nI get: \r\n```\r\n{\r\n \"snapshots\" : [\r\n {\r\n \"snapshot\" : \"snap1\",\r\n \"uuid\" : \"iTxr6rgSQMqjGOEOtk1C3g\",\r\n \"version_id\" : 5020299,\r\n \"version\" : \"5.2.2\",\r\n \"indices\" : [\r\n \"idx2\"\r\n ],\r\n \"state\" : \"FAILED\",\r\n \"reason\" : \"Indices don't have primary shards [idx2]\",\r\n \"start_time\" : \"2017-03-30T17:25:56.191Z\",\r\n \"start_time_in_millis\" : 1490894756191,\r\n \"end_time\" : \"2017-03-30T17:25:56.199Z\",\r\n \"end_time_in_millis\" : 1490894756199,\r\n \"duration_in_millis\" : 8,\r\n \"failures\" : [\r\n {\r\n \"index\" : \"idx2\",\r\n \"index_uuid\" : \"idx2\",\r\n \"shard_id\" : 3,\r\n \"reason\" : \"primary shard is not allocated\",\r\n \"status\" : \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\" : \"idx2\",\r\n \"index_uuid\" : \"idx2\",\r\n \"shard_id\" : 2,\r\n \"reason\" : \"primary shard is not allocated\",\r\n \"status\" : \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\" : \"idx2\",\r\n \"index_uuid\" : \"idx2\",\r\n \"shard_id\" : 4,\r\n \"reason\" : \"primary shard is not allocated\",\r\n \"status\" : \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\" : \"idx2\",\r\n \"index_uuid\" : \"idx2\",\r\n \"shard_id\" : 0,\r\n \"reason\" : \"primary shard is not allocated\",\r\n \"status\" : \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\" : \"idx2\",\r\n \"index_uuid\" : \"idx2\",\r\n \"shard_id\" : 1,\r\n \"reason\" : \"primary shard is not allocated\",\r\n \"status\" : \"INTERNAL_SERVER_ERROR\"\r\n }\r\n ],\r\n \"shards\" : {\r\n \"total\" : 5,\r\n \"failed\" : 5,\r\n \"successful\" : 0\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nFor getting the status:\r\n```\r\ncurl -XGET \"localhost:9200/_snapshot/fs_repo/snap1/_status\"\r\n```\r\n\r\nI get:\r\n```\r\n{\r\n \"snapshots\" : [\r\n {\r\n \"snapshot\" : \"snap1\",\r\n \"repository\" : \"fs_repo\",\r\n \"uuid\" : \"iTxr6rgSQMqjGOEOtk1C3g\",\r\n \"state\" : \"FAILED\",\r\n \"shards_stats\" : {\r\n \"initializing\" : 0,\r\n \"started\" : 0,\r\n \"finalizing\" : 0,\r\n \"done\" : 0,\r\n \"failed\" : 5,\r\n \"total\" : 5\r\n },\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 0,\r\n \"time_in_millis\" : 0\r\n },\r\n \"indices\" : {\r\n \"idx2\" : {\r\n \"shards_stats\" : {\r\n \"initializing\" : 0,\r\n \"started\" : 0,\r\n \"finalizing\" : 0,\r\n \"done\" : 0,\r\n \"failed\" : 5,\r\n \"total\" : 5\r\n },\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 0,\r\n \"time_in_millis\" : 0\r\n },\r\n \"shards\" : {\r\n \"0\" : {\r\n \"stage\" : \"FAILURE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 0,\r\n \"time_in_millis\" : 0\r\n },\r\n \"reason\" : \"primary shard is not allocated\"\r\n },\r\n \"1\" : {\r\n \"stage\" : \"FAILURE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 0,\r\n \"time_in_millis\" : 0\r\n },\r\n \"reason\" : \"primary shard is not allocated\"\r\n },\r\n \"2\" : {\r\n \"stage\" : \"FAILURE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 
0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 0,\r\n \"time_in_millis\" : 0\r\n },\r\n \"reason\" : \"primary shard is not allocated\"\r\n },\r\n \"3\" : {\r\n \"stage\" : \"FAILURE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 0,\r\n \"time_in_millis\" : 0\r\n },\r\n \"reason\" : \"primary shard is not allocated\"\r\n },\r\n \"4\" : {\r\n \"stage\" : \"FAILURE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 0,\r\n \"time_in_millis\" : 0\r\n },\r\n \"reason\" : \"primary shard is not allocated\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n}\r\n```", "created_at": "2017-03-30T17:40:14Z" }, { "body": "@abeyad I re-ran the steps I provided (without x-pack), and still get the error with 5.2.2 (fresh untar). Reading the error, it is complaining about index `test1`, which is odd. I went back and made sure to add documents to the index, incase it was an issue around a blank index - same results.\r\n\r\nmacOS Sierra 10.12.3 (16D32)\r\njava version \"1.8.0_65\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_65-b17)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)\r\n\r\n```\r\ncurl 'localhost:9200/_snapshot/my_backup/snapshot_2/_status?pretty'\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"index_shard_restore_failed_exception\",\r\n \"reason\" : \"failed to read shard snapshot file\",\r\n \"index_uuid\" : \"RnhkQinqT4yYodBnq4fARQ\",\r\n \"shard\" : \"0\",\r\n \"index\" : \"test1\"\r\n }\r\n ],\r\n \"type\" : \"index_shard_restore_failed_exception\",\r\n \"reason\" : \"failed to read shard snapshot file\",\r\n \"index_uuid\" : \"RnhkQinqT4yYodBnq4fARQ\",\r\n \"shard\" : \"0\",\r\n \"index\" : \"test1\",\r\n \"caused_by\" : {\r\n \"type\" : \"no_such_file_exception\",\r\n \"reason\" : \"/Users/jared/tmp/repo_test/indices/uRZ1_CzRQ-eL3LyKwSvHcA/0/snap-ndxheQU0QgixJnHsLBmXJg.dat\"\r\n }\r\n },\r\n \"status\" : 500\r\n}\r\n```", "created_at": "2017-03-30T18:18:30Z" }, { "body": "@jpcarey I reproduced the problem - the issue is if you specify the snapshot to have only \"bad\" indices, then getting its status works fine. If the snapshot contains a mix of good and bad indices, then I get the same error you got.", "created_at": "2017-03-30T18:32:13Z" } ], "number": 23716, "title": "failed snapshot _status returns 500" }
{ "body": "If a snapshot is taken on multiple indices, and some of them are \"good\"\r\nindices that don't contain any corruption or failures, and some of them\r\nare \"bad\" indices that contain missing shards or corrupted shards, and\r\nif the snapshot request is set to partial=false (meaning don't take a\r\nsnapshot if there are any failures), then the good indices will not be\r\nsnapshotted either. Previously, when getting the status of such a\r\nsnapshot, a 500 error would be thrown, because the snap-*.dat blob for\r\nthe shards in the good index could not be found.\r\n\r\nThis commit fixes the problem by reporting shards of good indices as\r\nfailed due to a failed snapshot, instead of throwing the\r\nNoSuchFileException.\r\n\r\nCloses #23716", "number": 23833, "review_comments": [ { "body": "I was wondering if this logic can be moved inside `repository.getShardSnapshotStatus()` instead?", "created_at": "2017-03-31T08:14:34Z" }, { "body": "So, the problem here is that these calls are always failing when the snapshot is in a FAILED state, right? So, wouldn't it it make sense to not even try to call getShardSnapshotStatus if snapshot failed by just wrapping these 3 lines into a an if statement that checks if the snapshot has a failed status and simply generate the fake shard status you are generating in line 609 below. ", "created_at": "2017-03-31T14:15:26Z" }, { "body": "@imotov is correct, we only need to check the snapshot status here and execute the read based on it, I pushed https://github.com/elastic/elasticsearch/pull/23833/commits/45c994d98b955c834aa8533de21e54d3798199c5", "created_at": "2017-04-06T15:28:36Z" }, { "body": "@dadoonet the problem here is that I need to depend on the snapshot status itself now (see https://github.com/elastic/elasticsearch/pull/23833/commits/45c994d98b955c834aa8533de21e54d3798199c5) to determine if the shard status should be read, and that snapshot status is not passed in to this method. Otherwise, it would've been a good proposition!", "created_at": "2017-04-06T15:30:12Z" }, { "body": "This will be repeated for every single good shard in the status and might hide the shards that actually caused the issue. Maybe we can say something short here like \"skipped\". If we want to improve readability, maybe we should improve the error message on the snapshot itself, so we don't have to repeat this for every shard.", "created_at": "2017-04-06T18:26:44Z" }, { "body": "No longer needed", "created_at": "2017-04-06T18:27:15Z" }, { "body": "Same here", "created_at": "2017-04-06T18:27:22Z" }, { "body": "You might want to mark this one as final to ensure that it gets assigned. ", "created_at": "2017-04-06T18:28:19Z" }, { "body": "fixed", "created_at": "2017-04-06T19:19:10Z" }, { "body": "fixed", "created_at": "2017-04-06T19:19:13Z" }, { "body": "done", "created_at": "2017-04-06T19:19:18Z" }, { "body": "good idea, i think we can just keep this as \"skipped\" for now, as that captures the essence of what happened with that shard.", "created_at": "2017-04-06T19:20:08Z" } ], "title": "Fixes snapshot status on failed snapshots" }
{ "commits": [ { "message": "Fixes snapshot status on failed snapshots\n\nIf a snapshot is taken on multiple indices, and some of them are \"good\"\nindices that don't contain any corruption or failures, and some of them\nare \"bad\" indices that contain missing shards or corrupted shards, and\nif the snapshot request is set to partial=false (meaning don't take a\nsnapshot if there are any failures), then the good indices will not be\nsnapshotted either. Previously, when getting the status of such a\nsnapshot, a 500 error would be thrown, because the snap-*.dat blob for\nthe shards in the good index could not be found.\n\nThis commit fixes the problem by reporting shards of good indices as\nfailed due to a failed snapshot, instead of throwing the\nNoSuchFileException.\n\nCloses #23716" }, { "message": "don't read snapshot shard status if status is failed and shard status\nhas no exception" }, { "message": "addresses feedback" }, { "message": "fix test" } ], "files": [ { "diff": "@@ -550,7 +550,8 @@ public List<SnapshotsInProgress.Entry> currentSnapshots(final String repository,\n /**\n * Returns status of shards currently finished snapshots\n * <p>\n- * This method is executed on master node and it's complimentary to the {@link SnapshotShardsService#currentSnapshotShards(Snapshot)} because it\n+ * This method is executed on master node and it's complimentary to the\n+ * {@link SnapshotShardsService#currentSnapshotShards(Snapshot)} because it\n * returns similar information but for already finished snapshots.\n * </p>\n *\n@@ -578,8 +579,25 @@ public Map<ShardId, IndexShardSnapshotStatus> snapshotShards(final String reposi\n shardSnapshotStatus.failure(shardFailure.reason());\n shardStatus.put(shardId, shardSnapshotStatus);\n } else {\n- IndexShardSnapshotStatus shardSnapshotStatus =\n- repository.getShardSnapshotStatus(snapshotInfo.snapshotId(), snapshotInfo.version(), indexId, shardId);\n+ final IndexShardSnapshotStatus shardSnapshotStatus;\n+ if (snapshotInfo.state() == SnapshotState.FAILED) {\n+ // If the snapshot failed, but the shard's snapshot does\n+ // not have an exception, it means that partial snapshots\n+ // were disabled and in this case, the shard snapshot will\n+ // *not* have any metadata, so attempting to read the shard\n+ // snapshot status will throw an exception. 
Instead, we create\n+ // a status for the shard to indicate that the shard snapshot\n+ // could not be taken due to partial being set to false.\n+ shardSnapshotStatus = new IndexShardSnapshotStatus();\n+ shardSnapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.FAILURE);\n+ shardSnapshotStatus.failure(\"skipped\");\n+ } else {\n+ shardSnapshotStatus = repository.getShardSnapshotStatus(\n+ snapshotInfo.snapshotId(),\n+ snapshotInfo.version(),\n+ indexId,\n+ shardId);\n+ }\n shardStatus.put(shardId, shardSnapshotStatus);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -54,6 +54,7 @@\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataIndexStateService;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n+import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.Strings;\n@@ -2737,4 +2738,74 @@ public void testSnapshotSucceedsAfterSnapshotFailure() throws Exception {\n assertEquals(SnapshotState.SUCCESS, getSnapshotsResponse.getSnapshots().get(0).state());\n }\n \n+ public void testSnapshotStatusOnFailedIndex() throws Exception {\n+ logger.info(\"--> creating repository\");\n+ final Path repoPath = randomRepoPath();\n+ final Client client = client();\n+ assertAcked(client.admin().cluster()\n+ .preparePutRepository(\"test-repo\")\n+ .setType(\"fs\")\n+ .setVerify(false)\n+ .setSettings(Settings.builder().put(\"location\", repoPath)));\n+\n+ logger.info(\"--> creating good index\");\n+ assertAcked(prepareCreate(\"test-idx-good\")\n+ .setSettings(Settings.builder()\n+ .put(SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(SETTING_NUMBER_OF_REPLICAS, 0)));\n+ ensureGreen();\n+ final int numDocs = randomIntBetween(1, 5);\n+ for (int i = 0; i < numDocs; i++) {\n+ index(\"test-idx-good\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n+ refresh();\n+\n+ logger.info(\"--> creating bad index\");\n+ assertAcked(prepareCreate(\"test-idx-bad\")\n+ .setWaitForActiveShards(ActiveShardCount.NONE)\n+ .setSettings(Settings.builder()\n+ .put(SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(SETTING_NUMBER_OF_REPLICAS, 0)\n+ // set shard allocation to none so the primary cannot be\n+ // allocated - simulates a \"bad\" index that fails to snapshot\n+ .put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(),\n+ \"none\")));\n+\n+ logger.info(\"--> snapshot bad index and get status\");\n+ client.admin().cluster()\n+ .prepareCreateSnapshot(\"test-repo\", \"test-snap1\")\n+ .setWaitForCompletion(true)\n+ .setIndices(\"test-idx-bad\")\n+ .get();\n+ SnapshotsStatusResponse snapshotsStatusResponse = client.admin().cluster()\n+ .prepareSnapshotStatus(\"test-repo\")\n+ .setSnapshots(\"test-snap1\")\n+ .get();\n+ assertEquals(1, snapshotsStatusResponse.getSnapshots().size());\n+ assertEquals(State.FAILED, snapshotsStatusResponse.getSnapshots().get(0).getState());\n+\n+ logger.info(\"--> snapshot both good and bad index and get status\");\n+ client.admin().cluster()\n+ .prepareCreateSnapshot(\"test-repo\", \"test-snap2\")\n+ .setWaitForCompletion(true)\n+ .setIndices(\"test-idx-good\", \"test-idx-bad\")\n+ .get();\n+ snapshotsStatusResponse = client.admin().cluster()\n+ .prepareSnapshotStatus(\"test-repo\")\n+ .setSnapshots(\"test-snap2\")\n+ .get();\n+ assertEquals(1, 
snapshotsStatusResponse.getSnapshots().size());\n+ // verify a FAILED status is returned instead of a 500 status code\n+ // see https://github.com/elastic/elasticsearch/issues/23716\n+ SnapshotStatus snapshotStatus = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertEquals(State.FAILED, snapshotStatus.getState());\n+ for (SnapshotIndexShardStatus shardStatus : snapshotStatus.getShards()) {\n+ assertEquals(SnapshotIndexShardStage.FAILURE, shardStatus.getStage());\n+ if (shardStatus.getIndex().equals(\"test-idx-good\")) {\n+ assertEquals(\"skipped\", shardStatus.getFailure());\n+ } else {\n+ assertEquals(\"primary shard is not allocated\", shardStatus.getFailure());\n+ }\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**: none\r\n\r\n**JVM version**: openjdk version \"1.8.0_121\"\r\n\r\n**OS version**: docker container \"elasticsearch:5.2.2\"\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nAdding an offset to a date histogram with extended bounds does not behave like described in the documentation (https://www.elastic.co/guide/en/elasticsearch/reference/5.2/search-aggregations-bucket-datehistogram-aggregation.html#_offset).\r\n\r\nHere's my request:\r\n```\r\n{\r\n 'aggs': {\r\n 'start_time': {\r\n 'date_histogram': {\r\n 'extended_bounds': {\r\n 'min': '2016-01-01T06:00:00Z',\r\n 'max': '2016-01-03T08:00:00Z'\r\n },\r\n 'field': 'start_time',\r\n 'interval': '1d',\r\n 'min_doc_count': 0,\r\n 'offset': '+6h'\r\n }\r\n }\r\n },\r\n}\r\n```\r\n\r\nHere's the response I get (note the keys of the buckets):\r\n```\r\n{\r\n 'aggregations': {\r\n 'start_time': {\r\n 'buckets': [\r\n {\r\n 'doc_count': 0,\r\n 'key': 1451606400000,\r\n 'key_as_string': '2016-01-01T00:00:00.000Z'\r\n },\r\n {\r\n 'doc_count': 3,\r\n 'key': 1451628000000,\r\n 'key_as_string': '2016-01-01T06:00:00.000Z'\r\n },\r\n {\r\n 'doc_count': 0,\r\n 'key': 1451692800000,\r\n 'key_as_string': '2016-01-02T00:00:00.000Z'\r\n },\r\n {\r\n 'doc_count': 0,\r\n 'key': 1451779200000,\r\n 'key_as_string': '2016-01-03T00:00:00.000Z'\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n\r\nWhat I expected to get as a response (which I get using Elasticsearch 1.7):\r\n```\r\n{\r\n 'aggregations': {\r\n 'start_time': {\r\n 'buckets': [\r\n {\r\n 'doc_count': 3,\r\n 'key': 1451628000000,\r\n 'key_as_string': '2016-01-01T06:00:00.000Z'\r\n },\r\n {\r\n 'doc_count': 0,\r\n 'key': 1451714400000,\r\n 'key_as_string': '2016-01-02T06:00:00.000Z'\r\n },\r\n {\r\n 'doc_count': 0,\r\n 'key': 1451800800000,\r\n 'key_as_string': '2016-01-03T06:00:00.000Z'\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n\r\nI don't understand why I get so many buckets, and why do some of the buckets have starting date at midnight, even though I specified a `+6h` output.\r\nIf this is expected behaviour, how can I improve the request to get the result I got using Elasticsearch 1.7?\r\n\r\nThanks in advance for your help!", "comments": [ { "body": "Thanks for reporting @pmourlanne. It looks like there's a strange behavior with extended_bounds combined with an offset.\r\n\r\nI used the following scenario to reproduce on 5.2.2 and the last result does look strange to me too. @cbuescher worked on extended bounds, maybe you have an idea?\r\n\r\n```\r\n\r\nPUT /events/ '{\r\n \"mappings\" : {\r\n \"event\" : {\r\n \"properties\" : {\r\n \"start_time\" : {\r\n \"type\" : \"date\"\r\n }\r\n }\r\n }\r\n }\r\n }'\r\n\r\nPOST /events/event '{ \r\n \"start_time\": \"2016-01-01T01:00:00.000Z\"\r\n}'\r\n\r\nPOST /events/event '{ \r\n \"start_time\": \"2016-01-01T03:00:00.000Z\"\r\n}'\r\n\r\nPOST /events/event '{ \r\n \"start_time\": \"2016-01-01T05:00:00.000Z\"\r\n}'\r\n```\r\nAggregation with extended bounds and no offset returns:\r\n<details>\r\n\r\n```\r\nPOST '/_search?size=0' '{\r\n \"aggs\": {\r\n \"start_time\": {\r\n \"date_histogram\": {\r\n \"extended_bounds\": {\r\n \"min\": \"2016-01-01T06:00:00.000Z\",\r\n \"max\": \"2016-01-03T08:00:00.000Z\"\r\n },\r\n \"field\": \"start_time\",\r\n \"interval\": \"1d\",\r\n \"min_doc_count\": 0\r\n }\r\n }\r\n }\r\n}'\r\n{\r\n ... 
\r\n \"aggregations\" : {\r\n \"start_time\" : {\r\n \"buckets\" : [\r\n {\r\n \"key_as_string\" : \"2016-01-01T00:00:00.000Z\",\r\n \"key\" : 1451606400000,\r\n \"doc_count\" : 3\r\n },\r\n {\r\n \"key_as_string\" : \"2016-01-02T00:00:00.000Z\",\r\n \"key\" : 1451692800000,\r\n \"doc_count\" : 0\r\n },\r\n {\r\n \"key_as_string\" : \"2016-01-03T00:00:00.000Z\",\r\n \"key\" : 1451779200000,\r\n \"doc_count\" : 0\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n</details>\r\n\r\nAnd the same with offset `+1h` returns a strange result:\r\n<details>\r\n\r\n```\r\nPOST '/_search?size=0' '{\r\n \"aggs\": {\r\n \"start_time\": {\r\n \"date_histogram\": {\r\n \"extended_bounds\": {\r\n \"min\": \"2016-01-01T06:00:00.000Z\",\r\n \"max\": \"2016-01-03T08:00:00.000Z\"\r\n },\r\n \"field\": \"start_time\",\r\n \"interval\": \"1d\",\r\n \"min_doc_count\": 0,\r\n \"offset\": \"+1h\"\r\n }\r\n }\r\n }\r\n}'\r\n{\r\n ...\r\n \"aggregations\" : {\r\n \"start_time\" : {\r\n \"buckets\" : [\r\n {\r\n \"key_as_string\" : \"2016-01-01T00:00:00.000Z\",\r\n \"key\" : 1451606400000,\r\n \"doc_count\" : 0\r\n },\r\n {\r\n \"key_as_string\" : \"2016-01-01T01:00:00.000Z\",\r\n \"key\" : 1451610000000,\r\n \"doc_count\" : 3\r\n },\r\n {\r\n \"key_as_string\" : \"2016-01-02T00:00:00.000Z\",\r\n \"key\" : 1451692800000,\r\n \"doc_count\" : 0\r\n },\r\n {\r\n \"key_as_string\" : \"2016-01-03T00:00:00.000Z\",\r\n \"key\" : 1451779200000,\r\n \"doc_count\" : 0\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n</details>\r\n", "created_at": "2017-03-28T18:26:12Z" }, { "body": "I will look into this.", "created_at": "2017-03-28T19:03:04Z" } ], "number": 23776, "title": "Date histogram: offset not working as expected" }
{ "body": "This fixes a bug in 'date_histogram' when using 'extended_bounds'\r\ntogether with some 'offset'. Offsets should be applied after rounding the\r\nextended bounds and also be applied when adding empty buckets during the reduce\r\nphase in InternalDateHistogram.\r\n\r\nCloses #23776\r\n", "number": 23789, "review_comments": [ { "body": "should it be `rounding.round(X - offset) + offset` instead?", "created_at": "2017-03-30T13:40:29Z" }, { "body": "I don't think so. The min/max values should initially be the ones defined by the user in the query (e.g. \"2016-01-01T04:00:00Z\"). I we subtract the offset before rounding and e.g. do 1day intervals, we would get to 2015-12-31 in the example above, which I think is not the intention of the user. Does this make sense or am I missing something?", "created_at": "2017-03-30T15:47:02Z" } ], "title": "DateHistogram: Fix `extended_bounds` with `offset`" }
{ "commits": [ { "message": "DateHistogram: Fix 'extended_bounds' with 'offset'\n\nThis fixes a bug in 'date_histogram' when using 'extended_bounds'\ntogether with some 'offset'. Offsets should be applied after rounding the\nextended bounds and also be applied when adding empty buckets during the reduce\nphase in InternalDateHistogram.\n\nCloses #23776" }, { "message": "Don't add offset in ExtendedBounds#round()" } ], "files": [ { "diff": "@@ -361,16 +361,16 @@ private void addEmptyBuckets(List<Bucket> list, ReduceContext reduceContext) {\n Bucket firstBucket = iter.hasNext() ? list.get(iter.nextIndex()) : null;\n if (firstBucket == null) {\n if (bounds.getMin() != null && bounds.getMax() != null) {\n- long key = bounds.getMin();\n- long max = bounds.getMax();\n+ long key = bounds.getMin() + offset;\n+ long max = bounds.getMax() + offset;\n while (key <= max) {\n iter.add(new InternalDateHistogram.Bucket(key, 0, keyed, format, reducedEmptySubAggs));\n key = nextKey(key).longValue();\n }\n }\n } else {\n if (bounds.getMin() != null) {\n- long key = bounds.getMin();\n+ long key = bounds.getMin() + offset;\n if (key < firstBucket.key) {\n while (key < firstBucket.key) {\n iter.add(new InternalDateHistogram.Bucket(key, 0, keyed, format, reducedEmptySubAggs));\n@@ -397,12 +397,12 @@ private void addEmptyBuckets(List<Bucket> list, ReduceContext reduceContext) {\n }\n \n // finally, adding the empty buckets *after* the actual data (based on the extended_bounds.max requested by the user)\n- if (bounds != null && lastBucket != null && bounds.getMax() != null && bounds.getMax() > lastBucket.key) {\n- long key = emptyBucketInfo.rounding.nextRoundingValue(lastBucket.key);\n- long max = bounds.getMax();\n+ if (bounds != null && lastBucket != null && bounds.getMax() != null && bounds.getMax() + offset > lastBucket.key) {\n+ long key = nextKey(lastBucket.key).longValue();\n+ long max = bounds.getMax() + offset;\n while (key <= max) {\n iter.add(new InternalDateHistogram.Bucket(key, 0, keyed, format, reducedEmptySubAggs));\n- key = emptyBucketInfo.rounding.nextRoundingValue(key);\n+ key = nextKey(key).longValue();\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalDateHistogram.java", "status": "modified" }, { "diff": "@@ -1048,7 +1048,61 @@ public void testSingleValueFieldWithExtendedBoundsTimezone() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(0L));\n }\n }\n- internalCluster().wipeIndices(\"test12278\");\n+ internalCluster().wipeIndices(index);\n+ }\n+\n+ /**\n+ * Test date histogram aggregation with day interval, offset and\n+ * extended bounds (see https://github.com/elastic/elasticsearch/issues/23776)\n+ */\n+ public void testSingleValueFieldWithExtendedBoundsOffset() throws Exception {\n+ String index = \"test23776\";\n+ prepareCreate(index)\n+ .setSettings(Settings.builder().put(indexSettings()).put(\"index.number_of_shards\", 1).put(\"index.number_of_replicas\", 0))\n+ .execute().actionGet();\n+\n+ List<IndexRequestBuilder> builders = new ArrayList<>();\n+ builders.add(indexDoc(index, DateTime.parse(\"2016-01-03T08:00:00.000Z\"), 1));\n+ builders.add(indexDoc(index, DateTime.parse(\"2016-01-03T08:00:00.000Z\"), 2));\n+ builders.add(indexDoc(index, DateTime.parse(\"2016-01-06T08:00:00.000Z\"), 3));\n+ builders.add(indexDoc(index, DateTime.parse(\"2016-01-06T08:00:00.000Z\"), 4));\n+ indexRandom(true, builders);\n+ ensureSearchable(index);\n+\n+ SearchResponse response = null;\n+ // retrieve those docs with the same time 
zone and extended bounds\n+ response = client()\n+ .prepareSearch(index)\n+ .addAggregation(\n+ dateHistogram(\"histo\").field(\"date\").dateHistogramInterval(DateHistogramInterval.days(1)).offset(\"+6h\").minDocCount(0)\n+ .extendedBounds(new ExtendedBounds(\"2016-01-01T06:00:00Z\", \"2016-01-08T08:00:00Z\"))\n+ ).execute().actionGet();\n+ assertSearchResponse(response);\n+\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? extends Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(8));\n+\n+ assertEquals(\"2016-01-01T06:00:00.000Z\", buckets.get(0).getKeyAsString());\n+ assertEquals(0, buckets.get(0).getDocCount());\n+ assertEquals(\"2016-01-02T06:00:00.000Z\", buckets.get(1).getKeyAsString());\n+ assertEquals(0, buckets.get(1).getDocCount());\n+ assertEquals(\"2016-01-03T06:00:00.000Z\", buckets.get(2).getKeyAsString());\n+ assertEquals(2, buckets.get(2).getDocCount());\n+ assertEquals(\"2016-01-04T06:00:00.000Z\", buckets.get(3).getKeyAsString());\n+ assertEquals(0, buckets.get(3).getDocCount());\n+ assertEquals(\"2016-01-05T06:00:00.000Z\", buckets.get(4).getKeyAsString());\n+ assertEquals(0, buckets.get(4).getDocCount());\n+ assertEquals(\"2016-01-06T06:00:00.000Z\", buckets.get(5).getKeyAsString());\n+ assertEquals(2, buckets.get(5).getDocCount());\n+ assertEquals(\"2016-01-07T06:00:00.000Z\", buckets.get(6).getKeyAsString());\n+ assertEquals(0, buckets.get(6).getDocCount());\n+ assertEquals(\"2016-01-08T06:00:00.000Z\", buckets.get(7).getKeyAsString());\n+ assertEquals(0, buckets.get(7).getDocCount());\n+\n+ internalCluster().wipeIndices(index);\n }\n \n public void testSingleValueWithMultipleDateFormatsFromMapping() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramIT.java", "status": "modified" } ] }
{ "body": "A user [ran into an issue](https://discuss.elastic.co/t/apply-fuzziness-on-multiple-terms-in-a-not-analyzed-field/79798/4) when they passed an array for the \"value\" property in a fuzzy query:\r\n\r\n\t{\r\n\t \"query\": {\r\n\t\t\"fuzzy\": {\r\n\t\t \"name\": {\r\n\t\t\t\"value\": [\r\n\t\t\t \"donald\",\r\n\t\t\t \"trump\",\r\n\t\t\t \"president\"\r\n\t\t\t],\r\n\t\t\t\"boost\": 1,\r\n\t\t\t\"fuzziness\": 1,\r\n\t\t\t\"prefix_length\": 0,\r\n\t\t\t\"max_expansions\": 50\r\n\t\t }\r\n\t\t}\r\n\t }\r\n\t}\r\n\r\nRather than failing with a parse error (e.g. \"expected string not an array\") it runs a query which fails to match anything. ", "comments": [ { "body": "In this case the `FuzzyQueryBuilder` would actually result in `\"value\": \"]\"` : hence the unexpected result.\r\n\r\nIf OK I would like to try to fix the issue.", "created_at": "2017-03-27T12:26:26Z" }, { "body": "@olcbean I already fixed it and was about to push the PR, sorry I should have assigned this to me.", "created_at": "2017-03-27T12:30:41Z" } ], "number": 23759, "title": "Fuzzy query parsing logic allows illegal syntax" }
{ "body": "An array of values is illegal in the `fuzzy` query and should result in a parsing error.\r\n\r\nCloses #23759\r\n\r\n", "number": 23762, "review_comments": [], "title": "FuzzyQueryBuilder should error when parsing array of values" }
{ "commits": [ { "message": "FuzzyQueryBuilder should error when parsing array of values\n\nCloses #23759" } ], "files": [ { "diff": "@@ -275,7 +275,7 @@ public static FuzzyQueryBuilder fromXContent(QueryParseContext parseContext) thr\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n- } else {\n+ } else if (token.isValue()) {\n if (TERM_FIELD.match(currentFieldName)) {\n value = parser.objectBytes();\n } else if (VALUE_FIELD.match(currentFieldName)) {\n@@ -298,6 +298,9 @@ public static FuzzyQueryBuilder fromXContent(QueryParseContext parseContext) thr\n throw new ParsingException(parser.getTokenLocation(),\n \"[fuzzy] query does not support [\" + currentFieldName + \"]\");\n }\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(),\n+ \"[\" + NAME + \"] unexpected token [\" + token + \"] after [\" + currentFieldName + \"]\");\n }\n }\n } else {", "filename": "core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java", "status": "modified" }, { "diff": "@@ -190,4 +190,17 @@ public void testParseFailsWithMultipleFields() throws IOException {\n e = expectThrows(ParsingException.class, () -> parseQuery(shortJson));\n assertEquals(\"[fuzzy] query doesn't support multiple fields, found [message1] and [message2]\", e.getMessage());\n }\n+\n+ public void testParseFailsWithValueArray() {\n+ String query = \"{\\n\" +\n+ \" \\\"fuzzy\\\" : {\\n\" +\n+ \" \\\"message1\\\" : {\\n\" +\n+ \" \\\"value\\\" : [ \\\"one\\\", \\\"two\\\", \\\"three\\\"]\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ ParsingException e = expectThrows(ParsingException.class, () -> parseQuery(query));\n+ assertEquals(\"[fuzzy] unexpected token [START_ARRAY] after [value]\", e.getMessage());\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/FuzzyQueryBuilderTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.2.0\r\n**JVM version**: Java(TM) SE Runtime Environment (build 1.8.0_111-b14)\r\n**OS version**: OSX 10.11.6\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nThis was found when upgrading from ES version 1.7 to 5.2. ES 1.7 had expected results.\r\n\r\nWhen executing a range query on a date where `\"include_upper\": true`, a date value at or near `Long.MAX_VALUE` causes the `RangeQueryBuilder` rewrite to overflow the `to` field resulting in an invalid `range` query, ie:\r\n```\r\n{\r\n \"query\": {\r\n \"range\": {\r\n \"test_date\": {\r\n \"from\": null,\r\n \"to\": 9223372036854775807,\r\n \"include_lower\": true,\r\n \"include_upper\": true\r\n }\r\n }\r\n }\r\n}\r\n```\r\nRegardless of data the result set here is `0` hits. The expectation would be for the above query to be equivalent to:\r\n```\r\n{\r\n \"query\": {\r\n \"range\": {\r\n \"test_date\": {\r\n \"from\": null,\r\n \"to\": null,\r\n \"include_lower\": true,\r\n \"include_upper\": true\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n**Steps to reproduce**:\r\n```\r\ncurl -XPUT http://localhost:9200/test -d '{\r\n \"mappings\": {\r\n \"test\": {\r\n \"properties\": {\r\n \"test_date\": {\r\n \"type\": \"date\"\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n\r\ncurl -XPOST http://localhost:9200/test/test -d '{\r\n \"test_date\": 1488387860020\r\n}'\r\n\r\ncurl -XPOST http://localhost:9200/test/test -d '{\r\n \"test_date\": 1588387860020\r\n}'\r\n\r\n## Query should return 1 hit (the first document indexed)\r\n## PASSES\r\ncurl -XPOST http://localhost:9200/test/_search -d '{\r\n \"query\": {\r\n \"range\": {\r\n \"test_date\": {\r\n \"from\": null,\r\n \"to\": 1500000000000,\r\n \"include_lower\": true,\r\n \"include_upper\": true\r\n }\r\n }\r\n }\r\n}' | jq \r\n\r\n## Query should return 2 hits being effectively a boundless range\r\n## PASSES\r\ncurl -XPOST http://localhost:9200/test/_search -d '{\r\n \"query\": {\r\n \"range\": {\r\n \"test_date\": {\r\n \"from\": null,\r\n \"to\": null,\r\n \"include_lower\": true,\r\n \"include_upper\": true\r\n }\r\n }\r\n }\r\n}' | jq\r\n\r\n## Query should return 2 hits being effectively a boundless range. \r\n## FAILS - returns no hits\r\ncurl -XPOST http://localhost:9200/test/_search -d '{\r\n \"query\": {\r\n \"range\": {\r\n \"test_date\": {\r\n \"from\": null,\r\n \"to\": 9223372036854775807,\r\n \"include_lower\": true,\r\n \"include_upper\": true\r\n }\r\n }\r\n }\r\n}' | jq \r\n\r\n## I noted that this only occurs when the include_upper is true.\r\n## PASSES\r\ncurl -XPOST http://localhost:9200/test/_search -d '{\r\n \"query\": {\r\n \"range\": {\r\n \"test_date\": {\r\n \"from\": null,\r\n \"to\": 9223372036854775807,\r\n \"include_lower\": true,\r\n \"include_upper\": false\r\n }\r\n }\r\n }\r\n}' | jq \r\n```\r\n**Analysis**:\r\n\r\nI noted that this only occurs when `\"include_upper\": true`. 
This is because the date parsing will attempt to round the value up and by doing so will cause the `to` field to overflow such that `to < from`.\r\n\r\nThis can be easily reproducible with the following: (which is effectively what the `RangeQueryBuilder` rewrite will do)\r\n```\r\nFormatDateTimeFormatter DEFAULT_DATE_TIME_FORMATTER = Joda.forPattern(\"strict_date_optional_time||epoch_millis\", Locale.ROOT);\r\nDateMathParser parser = new DateMathParser(DEFAULT_DATE_TIME_FORMATTER);\r\nassert parser.parse(\"9223372036854775807\", System::currentTimeMillis, true, DateTimeZone.UTC) > 0;\r\n```\r\n\r\n**Conclusion**:\r\nGoing forward we will make sure the `to` and `from` are both null in the event the query should be boundless. I understand this is probably the better way to go anyway, the outcome was still surprising. ", "comments": [ { "body": "Hi I am new to the project. Could I try and tackle this?", "created_at": "2017-03-12T22:19:42Z" }, { "body": "@njlawton sure! feel free to grab any issue tagged with \"adoptme\"", "created_at": "2017-03-12T22:23:01Z" }, { "body": "Hi @njlawton Are you still going to work on this issue?", "created_at": "2017-03-19T07:12:35Z" }, { "body": "Sorry I am no longer working on this issue. Feel free to pick it up.", "created_at": "2017-03-20T22:11:12Z" }, { "body": "Sorry go for it... I am going to pick another one", "created_at": "2017-03-20T22:29:55Z" }, { "body": "@johnvint thanks a lot again for reporting this bug. Since this seems to be cause by a Joda rounding issue (details in https://github.com/elastic/elasticsearch/pull/23741), it is unlikely that this is going to be fixed upstream soon and its a rare edge case, I'm going to close this issue. Still good to keep this as a known issue though. Feel free to reopen if you think otherwise.", "created_at": "2017-11-08T14:50:57Z" } ], "number": 23436, "title": "RangeQuery where to is near Long.MAX_VALUE causes invalid rewrite" }
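The overflow described in the analysis can be reproduced without Elasticsearch or Joda at all: any "round the upper bound up to the end of its unit" step can push a value past `Long.MAX_VALUE` and wrap negative, turning the range into `to < from`. A minimal sketch (the day-end rounding here illustrates the failure mode, not the exact Joda code path):

```java
public class RoundUpOverflowSketch {
    // Round a millisecond timestamp up to the last millisecond of its day,
    // the kind of adjustment applied when include_upper=true on a date bound.
    static long roundUpToDayEnd(long millis) {
        long dayMillis = 24L * 60 * 60 * 1000;
        long startOfDay = Math.floorDiv(millis, dayMillis) * dayMillis;
        return startOfDay + dayMillis - 1; // overflows for values near Long.MAX_VALUE
    }

    public static void main(String[] args) {
        System.out.println(roundUpToDayEnd(1488387860020L)); // an ordinary 2017 date: fine
        System.out.println(roundUpToDayEnd(Long.MAX_VALUE)); // negative: the upper bound wraps below the lower bound
    }
}
```

The second call prints a negative number, which is why the query silently matches nothing once it reaches the shards.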
{ "body": "Currently DateFieldType#parseToMilliseconds parses the toString representation\r\nif long values are directly passed in, e.g. as bound in a RangeQuery. This is\r\nnot only inefficient but also introduces strange edge cases like the\r\nLong.MAX_VALUE overflow exibited in #23436 that results in a wrong rewrite of\r\nthe query. Instead of converting long values to String and parse them back we\r\nshould directly return them. Rounding and time zone settings should not effect\r\nthose values since they are assumed to be UTC.\r\n\r\nCloses #23436", "number": 23741, "review_comments": [], "title": "DateFieldType#parseToMilliseconds should return Long values directly" }
{ "commits": [ { "message": "DateFieldType#parseToMilliseconds should return Long values directly\n\nCurrently DateFieldType#parseToMilliseconds parses the toString representation\nif long values are directly passed in, e.g. as bound in a RangeQuery. This is\nnot only inefficient but also introduces strange edge cases like the\nLong.MAX_VALUE overflow exibited in #23436 that results in a wrong rewrite of\nthe query. Instead of converting long values to String and parse them back we\nshould directly return them. Rounding and time zone settings should not effect\nthose values since they are assumed to be UTC.\n\nCloses #23436" } ], "files": [ { "diff": "@@ -20,13 +20,12 @@\n package org.apache.lucene.queryparser.classic;\n \n import org.apache.lucene.analysis.Analyzer;\n-import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause;\n-import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.DisjunctionMaxQuery;\n import org.apache.lucene.search.FuzzyQuery;\n@@ -50,15 +49,15 @@\n import org.elasticsearch.index.mapper.StringFieldType;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.support.QueryParsers;\n-import org.elasticsearch.index.analysis.ShingleTokenFilterFactory;\n \n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Collection;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n-import java.util.Collections;\n+\n import static java.util.Collections.unmodifiableMap;\n import static org.elasticsearch.common.lucene.search.Queries.fixNegativeQueryIfNeeded;\n \n@@ -313,7 +312,7 @@ private Query getRangeQuerySingle(String field, String part1, String part2,\n if (currentFieldType instanceof DateFieldMapper.DateFieldType && settings.timeZone() != null) {\n DateFieldMapper.DateFieldType dateFieldType = (DateFieldMapper.DateFieldType) this.currentFieldType;\n rangeQuery = dateFieldType.rangeQuery(part1Binary, part2Binary,\n- startInclusive, endInclusive, settings.timeZone(), null, context);\n+ startInclusive, endInclusive, settings.timeZone(), null, context::nowInMillis);\n } else {\n rangeQuery = currentFieldType.rangeQuery(part1Binary, part2Binary, startInclusive, endInclusive, context);\n }\n@@ -731,6 +730,7 @@ public Query parse(String query) throws ParseException {\n * Checks if graph analysis should be enabled for the field depending\n * on the provided {@link Analyzer}\n */\n+ @Override\n protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field,\n String queryText, boolean quoted, int phraseSlop) {\n assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST;", "filename": "core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java", "status": "modified" }, { "diff": "@@ -19,14 +19,14 @@\n \n package org.elasticsearch.index.mapper;\n \n-import org.apache.lucene.document.StoredField;\n-import org.apache.lucene.document.SortedNumericDocValuesField;\n import org.apache.lucene.document.LongPoint;\n+import 
org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.document.StoredField;\n import org.apache.lucene.index.FieldInfo;\n-import org.apache.lucene.index.PointValues;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.IndexableField;\n+import org.apache.lucene.index.PointValues;\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.IndexOrDocValuesQuery;\n import org.apache.lucene.search.Query;\n@@ -54,6 +54,8 @@\n import java.util.Locale;\n import java.util.Map;\n import java.util.Objects;\n+import java.util.function.LongSupplier;\n+\n import static org.elasticsearch.index.mapper.TypeParsers.parseDateTimeFormatter;\n \n /** A {@link FieldMapper} for ip addresses. */\n@@ -244,7 +246,7 @@ long parse(String value) {\n \n @Override\n public Query termQuery(Object value, @Nullable QueryShardContext context) {\n- Query query = innerRangeQuery(value, value, true, true, null, null, context);\n+ Query query = innerRangeQuery(value, value, true, true, null, null, context::nowInMillis);\n if (boost() != 1f) {\n query = new BoostQuery(query, boost());\n }\n@@ -254,17 +256,17 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) {\n @Override\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) {\n failIfNotIndexed();\n- return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null, context);\n+ return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null, context::nowInMillis);\n }\n \n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n- @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) {\n+ @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, LongSupplier nowInMillis) {\n failIfNotIndexed();\n- return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context);\n+ return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, nowInMillis);\n }\n \n Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n- @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) {\n+ @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, LongSupplier nowInMillis) {\n failIfNotIndexed();\n DateMathParser parser = forcedDateParser == null\n ? 
dateMathParser\n@@ -273,15 +275,15 @@ Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower,\n if (lowerTerm == null) {\n l = Long.MIN_VALUE;\n } else {\n- l = parseToMilliseconds(lowerTerm, !includeLower, timeZone, parser, context);\n+ l = parseToMilliseconds(lowerTerm, !includeLower, timeZone, parser, nowInMillis);\n if (includeLower == false) {\n ++l;\n }\n }\n if (upperTerm == null) {\n u = Long.MAX_VALUE;\n } else {\n- u = parseToMilliseconds(upperTerm, includeUpper, timeZone, parser, context);\n+ u = parseToMilliseconds(upperTerm, includeUpper, timeZone, parser, nowInMillis);\n if (includeUpper == false) {\n --u;\n }\n@@ -295,7 +297,11 @@ Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower,\n }\n \n public long parseToMilliseconds(Object value, boolean roundUp,\n- @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser, QueryRewriteContext context) {\n+ @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser, LongSupplier nowInMillis) {\n+ if (value instanceof Long) {\n+ return (Long) value;\n+ }\n+\n DateMathParser dateParser = dateMathParser();\n if (forcedDateParser != null) {\n dateParser = forcedDateParser;\n@@ -307,7 +313,7 @@ public long parseToMilliseconds(Object value, boolean roundUp,\n } else {\n strValue = value.toString();\n }\n- return dateParser.parse(strValue, context::nowInMillis, roundUp, zone);\n+ return dateParser.parse(strValue, nowInMillis, roundUp, zone);\n }\n \n @Override\n@@ -339,7 +345,7 @@ public Relation isFieldWithinQuery(IndexReader reader,\n \n long fromInclusive = Long.MIN_VALUE;\n if (from != null) {\n- fromInclusive = parseToMilliseconds(from, !includeLower, timeZone, dateParser, context);\n+ fromInclusive = parseToMilliseconds(from, !includeLower, timeZone, dateParser, context::nowInMillis);\n if (includeLower == false) {\n if (fromInclusive == Long.MAX_VALUE) {\n return Relation.DISJOINT;\n@@ -350,7 +356,7 @@ public Relation isFieldWithinQuery(IndexReader reader,\n \n long toInclusive = Long.MAX_VALUE;\n if (to != null) {\n- toInclusive = parseToMilliseconds(to, includeUpper, timeZone, dateParser, context);\n+ toInclusive = parseToMilliseconds(to, includeUpper, timeZone, dateParser, context::nowInMillis);\n if (includeUpper == false) {\n if (toInclusive == Long.MIN_VALUE) {\n return Relation.DISJOINT;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -494,7 +494,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n if (mapper instanceof DateFieldMapper.DateFieldType) {\n \n query = ((DateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper,\n- timeZone, getForceDateParser(), context);\n+ timeZone, getForceDateParser(), context::nowInMillis);\n } else if (mapper instanceof RangeFieldMapper.RangeFieldType) {\n DateMathParser forcedDateParser = null;\n if (mapper.typeName() == RangeFieldMapper.RangeType.DATE.name && this.format != null) {", "filename": "core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java", "status": "modified" }, { "diff": "@@ -43,8 +43,8 @@\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n import org.elasticsearch.index.fielddata.NumericDoubleValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n-import org.elasticsearch.index.mapper.GeoPointFieldMapper.GeoPointFieldType;\n import org.elasticsearch.index.mapper.DateFieldMapper;\n+import 
org.elasticsearch.index.mapper.GeoPointFieldMapper.GeoPointFieldType;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.NumberFieldMapper;\n import org.elasticsearch.index.query.QueryShardContext;\n@@ -309,7 +309,7 @@ private AbstractDistanceScoreFunction parseDateVariable(XContentParser parser, Q\n if (originString == null) {\n origin = context.nowInMillis();\n } else {\n- origin = ((DateFieldMapper.DateFieldType) dateFieldType).parseToMilliseconds(originString, false, null, null, context);\n+ origin = ((DateFieldMapper.DateFieldType) dateFieldType).parseToMilliseconds(originString, false, null, null, context::nowInMillis);\n }\n \n if (scaleString == null) {", "filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.index.mapper.ParseContext.Document;\n import org.elasticsearch.index.query.QueryRewriteContext;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n import org.junit.Before;\n \n@@ -107,6 +108,10 @@ private void doTestIsFieldWithinQuery(DateFieldType ft, DirectoryReader reader,\n false, true, null, null, context));\n assertEquals(Relation.INTERSECTS, ft.isFieldWithinQuery(reader, \"2015-10-12\", \"2016-04-03\",\n true, false, null, null, context));\n+ // everything should be WITHIN query with upper bound Long.MAXVALUE\n+ // see https://github.com/elastic/elasticsearch/issues/23436\n+ assertEquals(Relation.WITHIN, ft.isFieldWithinQuery(reader, null, Long.MAX_VALUE,\n+ true, true, null, null, context));\n }\n \n public void testIsFieldWithinQuery() throws IOException {\n@@ -210,4 +215,35 @@ public void testRangeQuery() throws IOException {\n () -> ft.rangeQuery(date1, date2, true, true, context));\n assertEquals(\"Cannot search on field [field] since it is not indexed.\", e.getMessage());\n }\n+\n+ public void testParseToMilliseconds() {\n+ DateFieldType ft = (DateFieldType) createDefaultFieldType();\n+ long value = randomLong();\n+ // rounding, \"now\" and timezone should not affect long values\n+ DateTimeZone tz = randomDateTimeZone();\n+ assertEquals(value,\n+ ft.parseToMilliseconds(value, randomBoolean(), tz,\n+ new DateMathParser(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER),\n+ () -> randomLong()));\n+\n+ // print date without time zone information, parsing back with tz should get back original\n+ DateTime date = new DateTime(randomIntBetween(-9999, 9999),\n+ randomIntBetween(1, 12), randomIntBetween(1, 28), randomIntBetween(0, 23),\n+ randomIntBetween(0, 59), randomIntBetween(0, 59), randomIntBetween(0, 999),\n+ tz);\n+ assertEquals(date.getMillis(),\n+ ft.parseToMilliseconds(date.toString(\"yyyy-MM-dd'T'HH:mm:ss.SSS\"), randomBoolean(),\n+ tz, new DateMathParser(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER),\n+ () -> randomLong()));\n+\n+ // test rounding\n+ assertEquals(DateTime.parse(\"2017-01-01T00:00:00.000+00:00\").getMillis(),\n+ ft.parseToMilliseconds(date.toString(\"2017-01-01\"), false,\n+ DateTimeZone.UTC, new DateMathParser(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER),\n+ () -> randomLong()));\n+ assertEquals(DateTime.parse(\"2017-01-01T23:59:59.999+00:00\").getMillis(),\n+ ft.parseToMilliseconds(date.toString(\"2017-01-01\"), true,\n+ DateTimeZone.UTC, new DateMathParser(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER),\n+ () -> randomLong()));\n+ }\n }", "filename": 
"core/src/test/java/org/elasticsearch/index/mapper/DateFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -138,7 +138,8 @@ protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query,\n assertThat(query, instanceOf(IndexOrDocValuesQuery.class));\n query = ((IndexOrDocValuesQuery) query).getIndexQuery();\n assertThat(query, instanceOf(PointRangeQuery.class));\n- MapperService mapperService = context.getQueryShardContext().getMapperService();\n+ QueryShardContext queryShardContext = context.getQueryShardContext();\n+ MapperService mapperService = queryShardContext.getMapperService();\n MappedFieldType mappedFieldType = mapperService.fullName(DATE_FIELD_NAME);\n final Long fromInMillis;\n final Long toInMillis;\n@@ -148,12 +149,12 @@ protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query,\n ((DateFieldMapper.DateFieldType) mappedFieldType).parseToMilliseconds(queryBuilder.from(),\n queryBuilder.includeLower(),\n queryBuilder.getDateTimeZone(),\n- queryBuilder.getForceDateParser(), context.getQueryShardContext());\n+ queryBuilder.getForceDateParser(), queryShardContext::nowInMillis);\n toInMillis = queryBuilder.to() == null ? null :\n ((DateFieldMapper.DateFieldType) mappedFieldType).parseToMilliseconds(queryBuilder.to(),\n queryBuilder.includeUpper(),\n queryBuilder.getDateTimeZone(),\n- queryBuilder.getForceDateParser(), context.getQueryShardContext());\n+ queryBuilder.getForceDateParser(), queryShardContext::nowInMillis);\n } else {\n fromInMillis = toInMillis = null;\n fail(\"unexpected mapped field type: [\" + mappedFieldType.getClass() + \"] \" + mappedFieldType.toString());", "filename": "core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -94,7 +94,6 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThirdHit;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasScore;\n-import static org.hamcrest.Matchers.allOf;\n import static org.hamcrest.Matchers.closeTo;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n@@ -1720,11 +1719,14 @@ public void testRangeQueryWithTimeZone() throws Exception {\n assertHitCount(searchResponse, 1L);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"3\"));\n \n- // When we use long values, it means we have ms since epoch UTC based so we don't apply any transformation\n- Exception e = expectThrows(SearchPhaseExecutionException.class, () ->\n- client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date\").from(1388534400000L).to(1388537940999L).timeZone(\"+01:00\"))\n- .get());\n+ // When we use long values, it means we have ms since epoch UTC based so we don't apply any\n+ // time zone transformation\n+ searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"date\")\n+ .from(DateTime.parse(\"2014-01-01T04:00:00+03:00\").getMillis())\n+ .to(DateTime.parse(\"2014-01-01T04:59:00+03:00\").getMillis())\n+ .timeZone(randomTimeZone(random()).getID())).get();\n+ assertHitCount(searchResponse, 1L);\n+ assertThat(searchResponse.getHits().getAt(0).getId(), is(\"3\"));\n \n searchResponse = client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01\").to(\"2014-01-01T00:59:00\").timeZone(\"-01:00\"))\n@@ -1739,7 +1741,7 @@ public void testRangeQueryWithTimeZone() throws Exception 
{\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"4\"));\n \n // A Range Filter on a numeric field with a TimeZone should raise an exception\n- e = expectThrows(SearchPhaseExecutionException.class, () ->\n+ Exception e = expectThrows(SearchPhaseExecutionException.class, () ->\n client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.rangeQuery(\"num\").from(\"0\").to(\"4\").timeZone(\"-01:00\"))\n .get());", "filename": "core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java", "status": "modified" } ] }
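Stripped of the surrounding mapper machinery, the shape of the fix is a type check up front: a bound that already arrives as a `Long` is taken to be UTC epoch milliseconds and returned as-is, so date-math parsing, rounding, and time-zone handling (and with them the overflow above) are skipped. A minimal sketch, with `Instant.parse` standing in for the real date parser:

```java
import java.time.Instant;

public class LongPassThroughSketch {
    // Values that are already Long are treated as UTC epoch millis and returned untouched.
    static long toEpochMillis(Object bound) {
        if (bound instanceof Long) {
            return (Long) bound;
        }
        // otherwise parse the textual form (the real code also handles date math, formats, zones)
        return Instant.parse(bound.toString()).toEpochMilli();
    }

    public static void main(String[] args) {
        System.out.println(toEpochMillis(Long.MAX_VALUE));          // 9223372036854775807
        System.out.println(toEpochMillis("2017-01-01T00:00:00Z"));  // 1483228800000
    }
}
```

This also avoids the wasteful long-to-String-to-long round trip the PR description calls out.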
{ "body": "[cat RestAliasAction](https://github.com/elastic/elasticsearch/blob/6265ef1c1ba1d308bcc28d00dccccac555e33b89/core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java#L43) exposes an endpoint taking `{alias}` which feeds this **single** `alias` into the [GetAliasRequest constructor](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/action/admin/indices/alias/get/GetAliasesRequest.java#L42) exactly [as documented](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-alias.html)\r\n\r\nHowever [cat.aliases](https://github.com/elastic/elasticsearch/blob/6265ef1c1ba1d308bcc28d00dccccac555e33b89/rest-api-spec/src/main/resources/rest-api-spec/api/cat.aliases.json#L9-L12) documents it as a `list` instead of a `string`\r\n\r\n\r\n\r\n", "comments": [ { "body": "This bug can be reproduced with the following script:\r\n```\r\nPUT test\r\n{\r\n \"aliases\": {\r\n \"alias-1\": {},\r\n \"alias-2\": {}\r\n }\r\n}\r\n\r\nPUT test2\r\n{\r\n \"aliases\": {\r\n \"alias-1\": {},\r\n \"alias-2\": {},\r\n \"alias-3\": {}\r\n }\r\n}\r\n\r\nPUT test3\r\n{\r\n \"aliases\": {\r\n \"alias-3\": {},\r\n \"alias-4\": {}\r\n }\r\n}\r\n\r\n# Correctly returns test and test2 indices\r\nGET _cat/aliases/alias-1?v\r\n\r\n# Should return test and test2 indices but returns nothing but the headers\r\nGET _cat/aliases/alias-1,alias-2?v\r\n```\r\n", "created_at": "2017-03-21T09:50:51Z" }, { "body": "Hi, I would like to contribute to the project. Can I take this issue? ", "created_at": "2017-03-21T15:30:16Z" }, { "body": "@glefloch yes, we would very much appreciate you tackling this issue if you want to, so feel free to submit a Pull Request with a fix if you wish. You may want to read https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md to get started on contributing.", "created_at": "2017-03-21T17:59:58Z" } ], "number": 23661, "title": "cat aliases {name} documented as list instead of string" }
{ "body": "This pull request aims to handle a list of `alias` in `GetAliasRequest` instead of a single one.\r\n\r\nCloses #23661 ", "number": 23698, "review_comments": [ { "body": "Can you revert this change, I do not see any tests added that rely on the visibility being changed?", "created_at": "2017-04-04T01:16:03Z" }, { "body": "I wonder if rewording this to the following is simpler?\r\n\r\n\r\n> If you only want to get information about specific aliases, you can specify the aliases in comma-delimited format as the last component of the URL path, e.g., `/_cat/aliases/aliases/alias1,alias2`.", "created_at": "2017-04-04T01:18:51Z" }, { "body": "I think that this test should add a third index, and a third alias, and not request that alias (as you've already done) just so that we can be sure that a non-requested alias is not returned.", "created_at": "2017-04-04T01:19:44Z" }, { "body": "Yes I'm gonna revert this change", "created_at": "2017-04-04T06:30:27Z" }, { "body": "I will do that", "created_at": "2017-04-04T06:32:21Z" }, { "body": "I think we should consider changing the parameter from `alias` to `aliases` as it is misleading to pack a `list` in `alias`.", "created_at": "2017-04-11T11:18:22Z" }, { "body": "you could use `Strings.commaDelimitedListToStringArray` here as we do in other places.", "created_at": "2017-04-18T14:24:28Z" }, { "body": "I didn't knew about this class. I will use it.", "created_at": "2017-04-18T18:18:10Z" }, { "body": "I think we can and should make a stronger assertion here. We can request only the `alias` and `index` fields, request that the cat API sort the output on the `index` field, and then assert the *exact* expected output:\r\n\r\n```\r\nfoo test\r\nbar test2\r\n```", "created_at": "2017-04-20T13:30:44Z" }, { "body": "I think this would be clearer if you put all the pieces that belong on a line together on the same line.", "created_at": "2017-04-20T20:54:52Z" } ], "title": "Handle multiple aliases in _cat/aliases api" }
{ "commits": [ { "message": "fix defect 23661" }, { "message": "feat(all): to rework" }, { "message": "Add rest test" }, { "message": "Merge branch 'master' of https://github.com/elastic/elasticsearch" }, { "message": "fix line ending" }, { "message": "fix line ending" }, { "message": "remove unwanted file" }, { "message": "code review fix" }, { "message": "fix indentation" }, { "message": "fix code review" }, { "message": "Merge remote-tracking branch 'origin/master' into fix/23661" }, { "message": "skip test for previous versions" }, { "message": "update test assert" }, { "message": "fix test assert" } ], "files": [ { "diff": "@@ -46,7 +46,7 @@ public RestAliasAction(Settings settings, RestController controller) {\n @Override\n protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) {\n final GetAliasesRequest getAliasesRequest = request.hasParam(\"alias\") ?\n- new GetAliasesRequest(request.param(\"alias\")) :\n+ new GetAliasesRequest(Strings.commaDelimitedListToStringArray(request.param(\"alias\"))) :\n new GetAliasesRequest();\n getAliasesRequest.local(request.paramAsBoolean(\"local\", getAliasesRequest.local()));\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java", "status": "modified" }, { "diff": "@@ -54,5 +54,6 @@ alias4 test1 - 2 1,2\n The output shows that `alias2` has configured a filter, and specific routing\n configurations in `alias3` and `alias4`.\n \n-If you only want to get information about a single alias, you can specify\n-the alias in the URL, for example `/_cat/aliases/alias1`.\n+If you only want to get information about specific aliases, you can specify \n+the aliases in comma-delimited format as a URL parameter, e.g., \n+/_cat/aliases/aliases/alias1,alias2.\n\\ No newline at end of file", "filename": "docs/reference/cat/alias.asciidoc", "status": "modified" }, { "diff": "@@ -126,6 +126,52 @@\n - match:\n $body: / (^|\\n)test_2 .+ \\n/\n \n+---\n+\"Multiple alias names\":\n+\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: multiple aliases are supported only from 6.0.0 on\n+\n+ - do:\n+ indices.create:\n+ index: test\n+\n+ - do:\n+ indices.create:\n+ index: test2\n+ - do:\n+ indices.create:\n+ index: test3\n+\n+ - do:\n+ indices.put_alias:\n+ index: test\n+ name: foo\n+\n+ - do:\n+ indices.put_alias:\n+ index: test2\n+ name: bar\n+ - do:\n+ indices.put_alias:\n+ index: test3\n+ name: baz\n+\n+ - do:\n+ cat.aliases:\n+ name: foo,bar\n+ v: true\n+ h: [alias, index]\n+ s: [index]\n+\n+ - match:\n+ $body: |\n+ /^ alias \\s+ index \\n\n+ foo \\s+ test \\n\n+ bar \\s+ test2\n+ $/\n+\n ---\n \"Column headers\":\n ", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cat.aliases/10_basic.yaml", "status": "modified" } ] }
{ "body": "This strange combination of three parameters just came in via CI:\r\nhttps://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+5.x+multijob-unix-compatibility/os=debian/252/console\r\n\r\nThe query that trips this looks like this (simplified):\r\n```\r\n\"multi_match\" : {\r\n \"query\" : 6.075210893508043E-4,\r\n \"fields\" : [\r\n \"mapped_double^1.0\"\r\n ], \r\n \"analyzer\" : \"simple\"\r\n }\r\n```\r\n\r\nSo although this is a double field, the value gets analyzed in lucenes QueryBuilder#analyzeTerm which leaves only the String \"e\" which in turn cannot be parsed to a double. This later leads to a NumberFormatException:\r\n\r\n```\r\njava.lang.NumberFormatException: For input string: \"e\"\r\n\tat __randomizedtesting.SeedInfo.seed([8BA49CFAC7249C2F:7C5F9EC4B6A759C5]:0)\r\n\tat sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)\r\n\tat sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)\r\n\tat java.lang.Double.parseDouble(Double.java:538)\r\n\tat org.elasticsearch.index.mapper.NumberFieldMapper$NumberType$3.parse(NumberFieldMapper.java:350)\r\n\tat org.elasticsearch.index.mapper.NumberFieldMapper$NumberType$3.termQuery(NumberFieldMapper.java:360)\r\n\tat org.elasticsearch.index.mapper.NumberFieldMapper$NumberFieldType.termQuery(NumberFieldMapper.java:801)\r\n\tat org.elasticsearch.index.search.MatchQuery.termQuery(MatchQuery.java:273)\r\n\tat org.elasticsearch.index.search.MatchQuery.blendTermQuery(MatchQuery.java:385)\r\n\tat org.elasticsearch.index.search.MultiMatchQuery.blendTermQuery(MultiMatchQuery.java:316)\r\n\tat org.elasticsearch.index.search.MatchQuery$MatchQueryBuilder.newTermQuery(MatchQuery.java:303)\r\n\tat org.apache.lucene.util.QueryBuilder.analyzeTerm(QueryBuilder.java:277)\r\n\tat org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:241)\r\n\tat org.apache.lucene.util.QueryBuilder.createBooleanQuery(QueryBuilder.java:88)\r\n\tat org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:249)\r\n\tat org.elasticsearch.index.search.MultiMatchQuery.parseAndApply(MultiMatchQuery.java:63)\r\n\tat org.elasticsearch.index.search.MultiMatchQuery.parse(MultiMatchQuery.java:81)\r\n\tat org.elasticsearch.index.query.MultiMatchQueryBuilder.doToQuery(MultiMatchQueryBuilder.java:748)\r\n\tat org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97)\r\n\tat org.elasticsearch.index.query.MultiMatchQueryBuilderTests.testAnalyzerOnDoubleField(MultiMatchQueryBuilderTests.java:313)\r\n```\r\n\r\nI wonder where we should catch this. Should we always throw an error when specifying an analyzer together with a numeric field in the query builder already? I think that would make sense. ", "comments": [ { "body": "Related to #21489 (another scientific notation issue)\n", "created_at": "2016-11-18T15:52:21Z" }, { "body": "@dakrone yes, but only partially related. Scientific notation is okay as long as we don't analyze it here.\n", "created_at": "2016-11-18T15:53:55Z" }, { "body": "The \"match\" query seems to have the similar problem.\n", "created_at": "2016-11-18T17:03:39Z" }, { "body": "> Should we always throw an error when specifying an analyzer together with a numeric field in the query builder already?\n\nMulti-match can have multiple fields, so perhaps we only use the analyzer for text and keyword fields. 
(I'm in two minds about keyword fields, but I could imagine lower casing the search term).\n\nFor match, where only one field can be specified, we should probably throw an exception on fields that are neither text nor keyword.\n", "created_at": "2016-11-18T18:55:56Z" }, { "body": "here is the REPRODUCE line: `REPRODUCE WITH: gradle :core:test -Dtests.seed=8BA49CFAC7249C2F -Dtests.class=org.elasticsearch.index.query.MultiMatchQueryBuilderTests -Dtests.method=\"testToQuery\" -Dtests.security.manager=true -Dtests.locale=lv-LV -Dtests.timezone=Asia/Kathmandu`", "created_at": "2016-11-29T10:44:25Z" }, { "body": "Closing this after discussion with @jpountz in https://github.com/elastic/elasticsearch/pull/23684. We agreed that the error we currently throw in this edge case is okay and we should change the test to avoid running into this in our randomization.", "created_at": "2017-04-12T08:47:44Z" } ], "number": 21665, "title": "Error when running \"multi_match\" on double field with scientific notation and analyzer" }
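The "only an `e` survives" failure mode is easy to see without Lucene: the `simple` analyzer is, roughly, a lowercase letter-only tokenizer, and applying that to a double printed in scientific notation leaves nothing but the exponent marker. A plain-Java approximation (a stand-in for the analyzer, not the actual analysis chain):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class SimpleAnalyzerEffectSketch {
    // Rough stand-in for the "simple" analyzer: lowercase, keep only runs of letters.
    static List<String> letterTokens(String input) {
        List<String> tokens = new ArrayList<>();
        for (String t : input.toLowerCase(Locale.ROOT).split("[^a-z]+")) {
            if (!t.isEmpty()) {
                tokens.add(t);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<String> tokens = letterTokens(Double.toString(6.075210893508043E-4));
        System.out.println(tokens); // [e]
        try {
            Double.parseDouble(tokens.get(0)); // what the numeric field then tries to do per shard
        } catch (NumberFormatException e) {
            System.out.println(e); // java.lang.NumberFormatException: For input string: "e"
        }
    }
}
```

Which is why the PR below moves the check to the builder: an explicit analyzer only makes sense for string query values.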
{ "body": "Currently the user can set an analyzer on e.g. a numeric query, but explicitely\r\nsetting an analyzer only make sense for input text. Otherwise we might\r\nanalyze e.g. floats in scientific notation like `1.34e-6` to just an `e` which\r\nleads to NumberFormatExceptions on each shard later (if targeting a numeric\r\nfield).\r\n\r\nCloses #21665\r\n", "number": 23684, "review_comments": [], "title": "Match- and MultiMatchQueryBuilder should only allow setting analyzer on string values" }
{ "commits": [ { "message": "Match- and MultiMatchQ.B. should only allow setting analyzer on string values\n\nCurrently the user can set an analyzer on e.g. a numeric query, but explicitely\nsetting an analyzer only make sense for input text. Otherwise we might\nanalyze e.g. floats in scientific notation like `1.34e-6` to just an `e` which\nleads to NumberFormatExceptions on each shard later (if targeting a numeric\nfield).\n\nCloses #21665" } ], "files": [ { "diff": "@@ -210,9 +210,14 @@ public Operator operator() {\n /**\n * Explicitly set the analyzer to use. Defaults to use explicit mapping config for the field, or, if not\n * set, the default search analyzer.\n+ * @throws IllegalArgumentException when analyzer is used with a non-String value\n */\n public MatchQueryBuilder analyzer(String analyzer) {\n this.analyzer = analyzer;\n+ if (analyzer != null && value instanceof String == false) {\n+ throw new IllegalArgumentException(\"Setting analyzers is only allowed for string \"\n+ + \"values but was [\" + value.getClass() + \"]\");\n+ }\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java", "status": "modified" }, { "diff": "@@ -331,6 +331,10 @@ public Operator operator() {\n */\n public MultiMatchQueryBuilder analyzer(String analyzer) {\n this.analyzer = analyzer;\n+ if (analyzer != null && value instanceof String == false) {\n+ throw new IllegalArgumentException(\"Setting analyzers is only allowed for string \"\n+ + \"values but was [\" + value.getClass() + \"]\");\n+ }\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java", "status": "modified" }, { "diff": "@@ -80,16 +80,11 @@ protected MatchQueryBuilder doCreateTestQueryBuilder() {\n MatchQueryBuilder matchQuery = new MatchQueryBuilder(fieldName, value);\n matchQuery.operator(randomFrom(Operator.values()));\n \n- if (randomBoolean()) {\n- if (fieldName.equals(DATE_FIELD_NAME)) {\n- // tokenized dates would trigger parse errors\n- matchQuery.analyzer(randomFrom(\"keyword\", \"whitespace\"));\n- } else {\n- matchQuery.analyzer(randomFrom(\"simple\", \"keyword\", \"whitespace\"));\n- }\n+ if (randomBoolean() && fieldName.equals(STRING_FIELD_NAME)) {\n+ matchQuery.analyzer(randomFrom(\"simple\", \"keyword\", \"whitespace\"));\n }\n \n- if (fieldName.equals(STRING_FIELD_NAME) && randomBoolean()) {\n+ if (randomBoolean() && fieldName.equals(STRING_FIELD_NAME) ) {\n matchQuery.fuzziness(randomFuzziness(fieldName));\n }\n \n@@ -360,16 +355,8 @@ public void testFuzzinessOnNonStringField() throws Exception {\n () -> query.toQuery(context));\n assertEquals(\"Can only use fuzzy queries on keyword and text fields - not on [mapped_int] which is of type [integer]\",\n e.getMessage());\n- query.analyzer(\"keyword\"); // triggers a different code path\n- e = expectThrows(IllegalArgumentException.class,\n- () -> query.toQuery(context));\n- assertEquals(\"Can only use fuzzy queries on keyword and text fields - not on [mapped_int] which is of type [integer]\",\n- e.getMessage());\n-\n query.lenient(true);\n query.toQuery(context); // no exception\n- query.analyzer(null);\n- query.toQuery(context); // no exception\n }\n \n public void testExactOnUnsupportedField() throws Exception {\n@@ -382,7 +369,7 @@ public void testExactOnUnsupportedField() throws Exception {\n query.toQuery(context); // no exception\n }\n \n- public void testParseFailsWithMultipleFields() throws IOException {\n+ public void testParseFailsWithMultipleFields() {\n 
String json = \"{\\n\" +\n \" \\\"match\\\" : {\\n\" +\n \" \\\"message1\\\" : {\\n\" +\n@@ -452,6 +439,24 @@ public void testMatchPhrasePrefixWithBoost() throws Exception {\n Query query = builder.toQuery(context);\n assertThat(query, instanceOf(MultiPhrasePrefixQuery.class));\n }\n+ }\n \n+ /**\n+ * Setting an analyzer only makes sense on string fields, otherwise we throw an error\n+ */\n+ public void testAnalyzerOnNumericField() throws Exception {\n+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class,\n+ () -> new MatchQueryBuilder(DOUBLE_FIELD_NAME, 6.075210893508043E-4)\n+ .analyzer(\"simple\"));\n+ assertEquals(\n+ \"Setting analyzers is only allowed for string values but was\"\n+ + \" [class java.lang.Double]\", exception.getMessage());\n+\n+ exception = expectThrows(IllegalArgumentException.class,\n+ () -> new MatchQueryBuilder(DOUBLE_FIELD_NAME, true)\n+ .analyzer(\"simple\"));\n+ assertEquals(\n+ \"Setting analyzers is only allowed for string values but was\"\n+ + \" [class java.lang.Boolean]\", exception.getMessage());\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -85,13 +85,8 @@ protected MultiMatchQueryBuilder doCreateTestQueryBuilder() {\n if (randomBoolean()) {\n query.operator(randomFrom(Operator.values()));\n }\n- if (randomBoolean()) {\n- if (fieldName.equals(DATE_FIELD_NAME)) {\n- // tokenized dates would trigger parse errors\n- query.analyzer(\"keyword\");\n- } else {\n- query.analyzer(randomAnalyzer());\n- }\n+ if (randomBoolean() && fieldName.equals(STRING_FIELD_NAME)) {\n+ query.analyzer(randomAnalyzer());\n }\n if (randomBoolean()) {\n query.slop(randomIntBetween(0, 5));\n@@ -297,14 +292,24 @@ public void testFuzzinessOnNonStringField() throws Exception {\n IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n () -> query.toQuery(context));\n assertThat(e.getMessage(), containsString(\"Can only use fuzzy queries on keyword and text fields\"));\n- query.analyzer(\"keyword\"); // triggers a different code path\n- e = expectThrows(IllegalArgumentException.class,\n- () -> query.toQuery(context));\n- assertThat(e.getMessage(), containsString(\"Can only use fuzzy queries on keyword and text fields\"));\n-\n query.lenient(true);\n query.toQuery(context); // no exception\n- query.analyzer(null);\n- query.toQuery(context); // no exception\n+ }\n+\n+ /**\n+ * Setting an analyzer only makes sense on string fields, otherwise we throw an error\n+ */\n+ public void testAnalyzerOnNumericField() throws Exception {\n+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class,\n+ () -> new MultiMatchQueryBuilder(6.075210893508043E-4, DOUBLE_FIELD_NAME,\n+ STRING_FIELD_NAME).analyzer(\"simple\"));\n+ assertEquals(\"Setting analyzers is only allowed for string values but was\"\n+ + \" [class java.lang.Double]\", exception.getMessage());\n+\n+ exception = expectThrows(IllegalArgumentException.class,\n+ () -> new MultiMatchQueryBuilder(true, DOUBLE_FIELD_NAME, STRING_FIELD_NAME)\n+ .analyzer(\"simple\"));\n+ assertEquals(\"Setting analyzers is only allowed for string values but was\"\n+ + \" [class java.lang.Boolean]\", exception.getMessage());\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java", "status": "modified" } ] }
{ "body": "Throw an exception when specifying `include_in_all` on multi-fields\r\n\r\nCloses https://github.com/elastic/elasticsearch/issues/21710", "comments": [ { "body": "@nik9000 I modified code according to your comments. I'll commit test later. but I have one question related to CopyToMapperTests: it seems in this test, this exception: \r\n```\r\nMapperParsingException(\"copy_to in multi fields is not allowed. Found the copy_to in field [\" + name + \"] which is within a multi field.\")\r\n```\r\nis not test?\r\nif so, why? if it is not the truth, tell me some details. thanks", "created_at": "2016-12-08T03:07:44Z" }, { "body": "> is not test?\r\n\r\nYou mean that test doesn't exist? I suspect we just forgot to make it. If you'd like to make it while you are there that'd be lovely.", "created_at": "2016-12-08T03:26:41Z" }, { "body": "@nik9000 it turns out that it is tested but in another test case file:MultiFieldCopyToMapperTests. so I add this new test case file:MultiFieldIncludeInAllMapperTests\r\nplease help to review. thanks", "created_at": "2016-12-08T11:46:21Z" }, { "body": "@nik9000 can you help to review this?", "created_at": "2016-12-14T11:35:27Z" }, { "body": "@nik9000 can you help to review this please? ", "created_at": "2016-12-15T06:34:08Z" }, { "body": "@nik9000 can you help to review this or ask someone else to help to review it? thanks very much.", "created_at": "2016-12-19T08:26:34Z" }, { "body": "Looks right to me!\r\n\r\nelasticmachine, please test this.", "created_at": "2016-12-19T18:59:40Z" }, { "body": "Thanks @makeyang! I've merged to master and will backport to 5.x:\r\n\r\nmaster: 2e1d152fc0b50633e832b8785d961325e480b54e + 40b80ae10460567144bbb0d8dcb606d4a10418ac\r\n5.x: eae5b30f85a5d886fc336dd27d3950247c5e2d50 + 36559c0beb274d07b5e8bf62d9aa0b442b9f2475", "created_at": "2016-12-19T20:07:57Z" }, { "body": "@nik9000 thanks very much.", "created_at": "2016-12-20T02:00:45Z" }, { "body": "Isn't this a breaking change? This broke our mappings while upgrading from 5.1.x to 5.2.x", "created_at": "2017-03-20T13:24:35Z" }, { "body": "@mfeltscher thanks for reporting and yes, this shouldn't have gone into 5.2. i've opened https://github.com/elastic/elasticsearch/issues/23654", "created_at": "2017-03-20T15:47:31Z" } ], "number": 21971, "title": "Sub-fields should not accept `include_in_all` parameter" }
{ "body": "This reverts #21971 which should only have been applied in master\r\nto preserve backwards compatibility. Instead of throwing an error\r\nwhen you specify `include_in_all` inside a multifield we instead\r\nreturn a deprecation warning. `include_in_all` in a multifield\r\nstill doesn't do anything. But at least people who use it erroneously\r\nwon't break.\r\n\r\nCloses #23654\r\n\r\n", "number": 23656, "review_comments": [], "title": "Switch include_in_all in multifield to warning" }
{ "commits": [ { "message": "Switch include_in_all in multifield to warning\n\nThis reverts #21971 which should only have been applied in master\nto preserve backwards compatibility. Instead of throwing an error\nwhen you specify `include_in_all` inside a multifield we instead\nreturn a deprecation warning. `include_in_all` in a multifield\nstill doesn't do anything. But at least people who use it erroneously\nwon't break.\n\nCloses #23654" } ], "files": [ { "diff": "@@ -253,8 +253,9 @@ && parseNorms(builder, name, propName, propNode, parserContext)) {\n iterator.remove();\n } else if (propName.equals(\"include_in_all\")) {\n if (parserContext.isWithinMultiField()) {\n- throw new MapperParsingException(\"include_in_all in multi fields is not allowed. Found the include_in_all in field [\"\n- + name + \"] which is within a multi field.\");\n+ deprecationLogger.deprecated(\"include_in_all in multi fields is deprecated \"\n+ + \"because it doesn't do anything. Found the include_in_all in field \"\n+ + \"[{}] which is within a multi field.\", name);\n } else {\n deprecationLogger.deprecated(\"field [include_in_all] is deprecated, as [_all] is deprecated, \" +\n \"and will be disallowed in 6.0, use [copy_to] instead.\");", "filename": "core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java", "status": "modified" }, { "diff": "@@ -33,12 +33,11 @@ public class MultiFieldIncludeInAllMapperTests extends ESTestCase {\n public void testExceptionForIncludeInAllInMultiFields() throws IOException {\n XContentBuilder mapping = createMappingWithIncludeInAllInMultiField();\n \n- // first check that for newer versions we throw exception if include_in_all is found withing multi field\n+ // first check that for newer versions we throw exception if include_in_all is found within multi field\n MapperService mapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(), Settings.EMPTY);\n- Exception e = expectThrows(MapperParsingException.class, () ->\n- mapperService.parse(\"type\", new CompressedXContent(mapping.string()), true));\n- assertEquals(\"include_in_all in multi fields is not allowed. Found the include_in_all in field [c] which is within a multi field.\",\n- e.getMessage());\n+ mapperService.parse(\"type\", new CompressedXContent(mapping.string()), true);\n+ assertWarnings(\"include_in_all in multi fields is deprecated because it doesn't do \"\n+ + \"anything. Found the include_in_all in field [c] which is within a multi field.\");\n }\n \n private static XContentBuilder createMappingWithIncludeInAllInMultiField() throws IOException {", "filename": "core/src/test/java/org/elasticsearch/index/mapper/MultiFieldIncludeInAllMapperTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: OpenJDK 1.8.0_111\r\n\r\n**OS version**: Ubuntu Linux 14.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nMapperService#parentTypes is [rewrapped in an UnmodifiableSet](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java#L459) in MapperService#internalMerge every time the cluster state is updated. After thousands of updates the collection is wrapped so deeply that calling a method on it generates a StackOverflowError.\r\n\r\nI encountered this after upgrading from 5.1 to 5.2 (in < 5.2 parentTypes was only conditionally wrapped, it is now wrapped with every cluster state update). In my use case I have been creating an [alias per user](https://www.elastic.co/guide/en/elasticsearch/guide/current/faking-it.html), and each alias creation results in a cluster state update and parentTypes is wrapped again. The error depth will depend on JVM configuration but in my production cluster the StackOverflow error occurs after about 30k cluster updates/new aliases (I had to use XX:MaxJavaStackTraceDepth to obtain the full stack trace since otherwise the trace was being truncated). In the worst case, if all the nodes in my cluster happened to be started at about the same time then they hit this StackOverflowError at the same time and the whole cluster goes down. Presumably prior to the actual error there is also some performance penalty from calls digging through the deeply nested collection object graph.\r\n\r\nI don't really need to be using aliases so am going to factor them out of my use case, but do believe this is a bug, since the StackOverflowError manifests after adding a number of aliases that Elasticsearch should easily be able to handle (and has in past versions).\r\n\r\n**Steps to reproduce**:\r\nThe following bash script reproduces this bug:\r\n\r\n```bash\r\n#!/bin/bash\r\nset -e\r\n\r\n# create an index we can alias\r\ncurl -s -X PUT localhost:9200/test_index >> /dev/null\r\n\r\n# high enough N to generate a StackOverflowError\r\nN=50000\r\nfor i in $(seq 1 $N); do\r\n echo $i\r\n\r\n # create an alias. we don't really care about the alias but creating\r\n # an alias triggers a cluster state update which rewraps MapperService#parentTypes\r\n curl -s -X PUT localhost:9200/test_index/_alias/test_alias_$i >> /dev/null\r\n\r\n # putting a document will eventually cause a StackOverFlowError since it calls\r\n # contains on the wrapped parentTypes collection via DocumentMapper#isParent\r\n curl -s -H \"Content-Type: application/json\" -X PUT -d '{}' localhost:9200/test_index/test_type/1 >> /dev/null\r\n\r\ndone\r\n```\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-03-13T15:44:21,290][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [search-01] fatal error in thread [elasticsearch[search-01][bulk][T#4]], exiting\r\njava.lang.StackOverflowError: null\r\n at java.util.Collections$UnmodifiableCollection.contains(Collections.java:1032) ~[?:1.8.0_111]\r\n at java.util.Collections$UnmodifiableCollection.contains(Collections.java:1032) ~[?:1.8.0_111]\r\n ... 
thousands of identical calls here...\r\n at java.util.Collections$UnmodifiableCollection.contains(Collections.java:1032) ~[?:1.8.0_111]\r\n at java.util.Collections$UnmodifiableCollection.contains(Collections.java:1032) ~[?:1.8.0_111]\r\n at org.elasticsearch.index.mapper.DocumentMapper.isParent(DocumentMapper.java:329) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.mapper.ParentFieldMapper.parseCreateField(ParentFieldMapper.java:233) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:287) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.mapper.ParentFieldMapper.postParse(ParentFieldMapper.java:228) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:97) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:275) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:533) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:510) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:196) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:201) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:348) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.bulk.TransportShardBulkAction.index(TransportShardBulkAction.java:155) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.bulk.TransportShardBulkAction.handleItem(TransportShardBulkAction.java:134) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.bulk.TransportShardBulkAction.onPrimaryShard(TransportShardBulkAction.java:120) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.bulk.TransportShardBulkAction.onPrimaryShard(TransportShardBulkAction.java:73) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:76) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:49) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:914) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:884) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:327) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:262) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at 
org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:864) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:861) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1652) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:873) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:92) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:279) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:258) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:250) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:610) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_111]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_111]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]\r\n```", "comments": [ { "body": "Do we know when this fix will be available?", "created_at": "2017-04-05T21:35:35Z" }, { "body": "> Do we know when this fix will be available?\r\n\r\nIt should be in the recently released 5.3.0.", "created_at": "2017-04-05T21:38:27Z" }, { "body": "Thanks! Guess I'm not on the right email list as I didn't know there was a release last week.", "created_at": "2017-04-05T22:38:01Z" } ], "number": 23604, "title": "MapperService StackOverFlowError" }
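The failure mode in the report above can be reproduced outside Elasticsearch. Below is a minimal, standalone sketch (plain JDK, none of the MapperService code) showing how rewrapping an already-unmodifiable set on every update builds a delegate chain that eventually blows the stack on a simple `contains` call; the class name and iteration count are arbitrary.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class RewrapDemo {
    public static void main(String[] args) {
        Set<String> parentTypes = new HashSet<>();
        parentTypes.add("parent");

        Set<String> wrapped = Collections.unmodifiableSet(parentTypes);
        // Each iteration stands in for one cluster state update that rewraps the
        // already-unmodifiable set instead of reusing it.
        for (int i = 0; i < 100_000; i++) {
            wrapped = Collections.unmodifiableSet(wrapped);
        }
        // On Java 8 every wrapper delegates contains() to the set it wraps, so this
        // call descends through ~100k frames and throws StackOverflowError.
        // (Later JDKs may short-circuit the rewrap, in which case this just prints true.)
        System.out.println(wrapped.contains("parent"));
    }
}
```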
{ "body": "MapperService#parentTypes is rewrapped in an UnmodifiableSet in MapperService#internalMerge every time the cluster state is updated. After thousands of updates the collection is wrapped so deeply that calling a method on it generates a StackOverflowError.\r\n\r\nI have been running this patch in my cluster to address issue #23604", "number": 23605, "review_comments": [], "title": "Fix MapperService StackOverflowError" }
{ "commits": [ { "message": "avoid endlessly rewrapping parentTypes which can result in a StackOverflowError" }, { "message": "don't wrap parentTypes or fullPathObjectMappers unless the references has changed" }, { "message": "for consistency initialize as immutable so it's immutable even before the first merge with DocumentMappers" } ], "files": [ { "diff": "@@ -110,7 +110,7 @@ public enum MergeReason {\n private volatile Map<String, DocumentMapper> mappers = emptyMap();\n \n private volatile FieldTypeLookup fieldTypes;\n- private volatile Map<String, ObjectMapper> fullPathObjectMappers = new HashMap<>();\n+ private volatile Map<String, ObjectMapper> fullPathObjectMappers = emptyMap();\n private boolean hasNested = false; // updated dynamically to true when a nested object is added\n private boolean allEnabled = false; // updated dynamically to true when _all is enabled\n \n@@ -394,6 +394,7 @@ private synchronized Map<String, DocumentMapper> internalMerge(@Nullable Documen\n \n for (ObjectMapper objectMapper : objectMappers) {\n if (fullPathObjectMappers == this.fullPathObjectMappers) {\n+ // first time through the loops\n fullPathObjectMappers = new HashMap<>(this.fullPathObjectMappers);\n }\n fullPathObjectMappers.put(objectMapper.fullPath(), objectMapper);\n@@ -414,6 +415,7 @@ private synchronized Map<String, DocumentMapper> internalMerge(@Nullable Documen\n \n if (oldMapper == null && newMapper.parentFieldMapper().active()) {\n if (parentTypes == this.parentTypes) {\n+ // first time through the loop\n parentTypes = new HashSet<>(this.parentTypes);\n }\n parentTypes.add(mapper.parentFieldMapper().type());\n@@ -456,8 +458,15 @@ private synchronized Map<String, DocumentMapper> internalMerge(@Nullable Documen\n // make structures immutable\n mappers = Collections.unmodifiableMap(mappers);\n results = Collections.unmodifiableMap(results);\n- parentTypes = Collections.unmodifiableSet(parentTypes);\n- fullPathObjectMappers = Collections.unmodifiableMap(fullPathObjectMappers);\n+\n+ // only need to immutably rewrap these if the previous reference was changed.\n+ // if not then they are already implicitly immutable.\n+ if (fullPathObjectMappers != this.fullPathObjectMappers) {\n+ fullPathObjectMappers = Collections.unmodifiableMap(fullPathObjectMappers);\n+ }\n+ if (parentTypes != this.parentTypes) {\n+ parentTypes = Collections.unmodifiableSet(parentTypes);\n+ }\n \n // commit the change\n if (defaultMappingSource != null) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java", "status": "modified" }, { "diff": "@@ -38,6 +38,7 @@\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.ExecutionException;\n import java.util.function.Function;\n \n@@ -189,6 +190,22 @@ public void testMergeWithMap() throws Throwable {\n assertThat(e.getMessage(), startsWith(\"Failed to parse mapping [type1]: \"));\n }\n \n+ public void testMergeParentTypesSame() {\n+ // Verifies that a merge (absent a DocumentMapper change)\n+ // doesn't change the parentTypes reference.\n+ // The collection was being rewrapped with each merge\n+ // in v5.2 resulting in eventual StackOverflowErrors.\n+ // https://github.com/elastic/elasticsearch/issues/23604\n+\n+ IndexService indexService1 = createIndex(\"index1\");\n+ MapperService mapperService = indexService1.mapperService();\n+ Set<String> parentTypes = mapperService.getParentTypes();\n+\n+ Map<String, Map<String, Object>> mappings = new HashMap<>();\n+ 
mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, false);\n+ assertSame(parentTypes, mapperService.getParentTypes());\n+ }\n+\n public void testOtherDocumentMappersOnlyUpdatedWhenChangingFieldType() throws IOException {\n IndexService indexService = createIndex(\"test\");\n ", "filename": "core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java", "status": "modified" } ] }
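The fix in the diff above boils down to a copy-on-write idiom: copy the published collection only when a change is actually needed, and rewrap it in an unmodifiable view only when the working reference differs from the published one. A condensed sketch of that idiom with illustrative names follows; it is not the real MapperService, just the shape of the change.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class ParentTypesHolder {
    // Published immutable view; starts out unmodifiable so it never needs rewrapping.
    private volatile Set<String> parentTypes = Collections.emptySet();

    synchronized void merge(Set<String> newParentTypes) {
        Set<String> updated = this.parentTypes;
        for (String type : newParentTypes) {
            if (!updated.contains(type)) {
                if (updated == this.parentTypes) {
                    // First real change: copy once, then mutate only the copy.
                    updated = new HashSet<>(this.parentTypes);
                }
                updated.add(type);
            }
        }
        // Only wrap when the reference changed; otherwise the published set is
        // already unmodifiable and rewrapping would nest wrappers needlessly.
        if (updated != this.parentTypes) {
            this.parentTypes = Collections.unmodifiableSet(updated);
        }
    }

    Set<String> getParentTypes() {
        return parentTypes;
    }
}
```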
{ "body": "Elasticsearch version: 5.2.2\r\nJVM version: 1.8\r\nOS: Windows 10\r\nI am able to reproduce the bug on all 5.2.x versions of client transport library.\r\n\r\nI have a GetResponse object which i want to display in console by calling the toString method. It throws the following exception:\r\nError building toString out of XContent: com.fasterxml.jackson.core.JsonGenerationException: Can not start an object, expecting field name (context: Object)\r\n\tat com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:1897)\r\n\tat com.fasterxml.jackson.core.json.JsonGeneratorImpl._reportCantWriteValueExpectName(JsonGeneratorImpl.java:244)\r\n\tat com.fasterxml.jackson.core.json.UTF8JsonGenerator._verifyValueWrite(UTF8JsonGenerator.java:1027)\r\n\tat com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeStartObject(UTF8JsonGenerator.java:313)\r\n\tat org.elasticsearch.common.xcontent.json.JsonXContentGenerator.writeStartObject(JsonXContentGenerator.java:161)\r\n\tat org.elasticsearch.common.xcontent.XContentBuilder.startObject(XContentBuilder.java:217)\r\n\tat org.elasticsearch.index.get.GetResult.toXContent(GetResult.java:251)\r\n\tat org.elasticsearch.action.get.GetResponse.toXContent(GetResponse.java:158)\r\n\tat org.elasticsearch.common.Strings.toString(Strings.java:901)\r\n\tat org.elasticsearch.action.get.GetResponse.toString(GetResponse.java:197)\r\n\tat gr.unipi.elastic.App.main(App.java:48)", "comments": [ { "body": "I can reproduce this, but it should only affect the 5.2.x releases and is already fixed in the next minor version (5.3). In the meantime you can use `org.elasticsearch.common.Strings.toString(getResponse, false)` as a temporary workaround. @javanna do you think there should be a fix on the 5.2 branch in case there is going to be a 5.2.3 release?", "created_at": "2017-03-08T05:57:37Z" }, { "body": "Please do fix this ASAP. I have a production system that dump the getResponse result as json string to a custom REST client by relying on this toString method. It caused serious problem. ", "created_at": "2017-03-09T11:17:35Z" }, { "body": "Fixed in 5.3 and upwards, closing this after adding tests to the related branches.", "created_at": "2017-03-13T17:24:58Z" }, { "body": "@fangqiao do you mean that you are using the output of the toString method to send the request via a REST client? If that is the case, I would recommend to move your code to calling the `toXContent` method instead. In fact, there is no guarantee that the `toString` method will always print out valid json, we could potentially change that in the future.", "created_at": "2017-03-17T20:10:33Z" }, { "body": "@javana Thanks. It was a quick and dirty solution. Shall follow your suggestions.", "created_at": "2017-03-24T00:05:49Z" } ], "number": 23505, "title": "GetResponse.toString method throws exception" }
{ "body": "Related to #23505 \r\n", "number": 23545, "review_comments": [], "title": "Tests: Check that GetResponse.toString() outputs json xcontent" }
{ "commits": [ { "message": "Tests: Check that GetResponse.toString() outputs json xcontent" } ], "files": [ { "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.index.get.GetResult;\n import org.elasticsearch.test.ESTestCase;\n \n-import java.io.IOException;\n import java.util.Collections;\n \n import static org.elasticsearch.common.xcontent.XContentHelper.toXContent;\n@@ -62,7 +61,7 @@ public void testToAndFromXContent() throws Exception {\n assertEquals(expectedGetResponse.getSourceAsString(), parsedGetResponse.getSourceAsString());\n }\n \n- public void testToXContent() throws IOException {\n+ public void testToXContent() {\n {\n GetResponse getResponse = new GetResponse(new GetResult(\"index\", \"type\", \"id\", 1, true, new BytesArray(\"{ \\\"field1\\\" : \" +\n \"\\\"value1\\\", \\\"field2\\\":\\\"value2\\\"}\"), Collections.singletonMap(\"field1\", new GetField(\"field1\",\n@@ -78,6 +77,14 @@ public void testToXContent() throws IOException {\n }\n }\n \n+ public void testToString() {\n+ GetResponse getResponse = new GetResponse(\n+ new GetResult(\"index\", \"type\", \"id\", 1, true, new BytesArray(\"{ \\\"field1\\\" : \" + \"\\\"value1\\\", \\\"field2\\\":\\\"value2\\\"}\"),\n+ Collections.singletonMap(\"field1\", new GetField(\"field1\", Collections.singletonList(\"value1\")))));\n+ assertEquals(\"{\\\"_index\\\":\\\"index\\\",\\\"_type\\\":\\\"type\\\",\\\"_id\\\":\\\"id\\\",\\\"_version\\\":1,\\\"found\\\":true,\\\"_source\\\":{ \\\"field1\\\" \"\n+ + \": \\\"value1\\\", \\\"field2\\\":\\\"value2\\\"},\\\"fields\\\":{\\\"field1\\\":[\\\"value1\\\"]}}\", getResponse.toString());\n+ }\n+\n public void testEqualsAndHashcode() {\n checkEqualsAndHashCode(new GetResponse(randomGetResult(XContentType.JSON).v1()), GetResponseTests::copyGetResponse,\n GetResponseTests::mutateGetResponse);", "filename": "core/src/test/java/org/elasticsearch/action/get/GetResponseTests.java", "status": "modified" } ] }
{ "body": "Today when handling a multi-search request, we asynchornously execute as many search requests as the minimum of the number of search requests in the multi-search request and the maximum number of concurrent requests. When these search requests return, we poll more search requests from a queue of search requests from the original multi-search request. The implementation of this was recursive, and if the number of requests in the multi-search request was large, a stack overflow could arise due to the recursive invocation. This commit replaces this recursive implementation with a simple iterative implementation.\r\n\r\nCloses #23523\r\n\r\n", "comments": [ { "body": "Thanks @jpountz and @martijnvg.", "created_at": "2017-03-09T23:32:04Z" } ], "number": 23527, "title": "Avoid stack overflow in multi-search" }
{ "body": "A previous change to the multi-search request execution to avoid stack overflows regressed on limiting the number of concurrent search requests from a batched multi-search request. In particular, the replacement of the tail-recursive call with a loop could asynchronously fire off all of the remaining search requests in the batch while max concurrent search requests are already executing. This commit attempts to address this issue by taking a more careful approach to the initial problem of recurisve calls. The cause of the initial problem was due to possibility of individual requests completing on the same thread as invoked the search action execution. This can happen, for example, in cases when an individual request does not resolve to any shards. To address this problem, when an individual request completes we check if it completed on the same thread as fired off the request. In this case, we loop and otherwise safely recurse. Sadly, there was a unit test to check that the maximum number of concurrent search requests was not exceeded, but that test was broken while modifying the test to reproduce a case that led to the possibility of stack overflow. As such, we randomize whether or not search actions execute on the same thread as the thread that invoked the action.\r\n\r\nRelates #23527", "number": 23538, "review_comments": [ { "body": "can we assert this is null?", "created_at": "2017-03-10T22:54:01Z" }, { "body": "mayb initialize next to the initial request? I think this will mean that we can remove this if and make do with `current = next.getAndSet(null)`", "created_at": "2017-03-10T22:58:18Z" }, { "body": "Can we instead always do: `next.set(requests.poll());`? I think this would make the code simpler?", "created_at": "2017-03-11T00:01:30Z" }, { "body": "We can't do this. Misunderstood the code. ", "created_at": "2017-03-11T00:51:47Z" } ], "title": "Honor max concurrent searches in multi-search" }
{ "commits": [ { "message": "Honor max concurrent searches in multi-search\n\nA previous change to the multi-search request execution to avoid stack\noverflows regressed on limiting the number of concurrent search requests\nfrom a batched multi-search request. In particular, the replacement of\nthe tail-recursive call with a loop could asynchronously fire off all of\nthe remaining search requests in the batch while max concurrent search\nrequests are already executing. This commit attempts to address this\nissue by taking a more careful approach to the initial problem of\nrecurisve calls. The cause of the initial problem was due to possibility\nof individual requests completing on the same thread as invoked the\nsearch action execution. This can happen, for example, in cases when an\nindividual request does not resolve to any shards. To address this\nproblem, when an individual request completes we check if it completed\non the same thread as fired off the request. In this case, we loop and\notherwise safely recurse. Sadly, there was a unit test to check that the\nmaximum number of concurrent search requests was not exceeded, but that\ntest was broken while modifying the test to reproduce a case that led to\nthe possibility of stack overflow. As such, we randomize whether or not\nsearch actions execute on the same thread as the thread that invoked the\naction." }, { "message": "Fork when on the same thread" }, { "message": "Revert unneeded change" } ], "files": [ { "diff": "@@ -159,7 +159,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]MultiSearchRequestBuilder.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]ShardSearchFailure.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]TransportClearScrollAction.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]search[/\\\\]TransportMultiSearchAction.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]suggest[/\\\\]SuggestResponse.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]support[/\\\\]ActionFilter.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]main[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]action[/\\\\]support[/\\\\]DelegatingActionListener.java\" checks=\"LineLength\" />", "filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml", "status": "modified" }, { "diff": "@@ -47,18 +47,17 @@ public class TransportMultiSearchAction extends HandledTransportAction<MultiSear\n @Inject\n public TransportMultiSearchAction(Settings settings, ThreadPool threadPool, TransportService transportService,\n ClusterService clusterService, TransportSearchAction searchAction,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, MultiSearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiSearchRequest::new);\n+ ActionFilters actionFilters, IndexNameExpressionResolver resolver) {\n+ super(settings, MultiSearchAction.NAME, threadPool, transportService, actionFilters, resolver, 
MultiSearchRequest::new);\n this.clusterService = clusterService;\n this.searchAction = searchAction;\n this.availableProcessors = EsExecutors.numberOfProcessors(settings);\n }\n \n- // For testing only:\n TransportMultiSearchAction(ThreadPool threadPool, ActionFilters actionFilters, TransportService transportService,\n ClusterService clusterService, TransportAction<SearchRequest, SearchResponse> searchAction,\n- IndexNameExpressionResolver indexNameExpressionResolver, int availableProcessors) {\n- super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiSearchRequest::new);\n+ IndexNameExpressionResolver resolver, int availableProcessors) {\n+ super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, resolver, MultiSearchRequest::new);\n this.clusterService = clusterService;\n this.searchAction = searchAction;\n this.availableProcessors = availableProcessors;\n@@ -90,10 +89,9 @@ protected void doExecute(MultiSearchRequest request, ActionListener<MultiSearchR\n }\n \n /*\n- * This is not perfect and makes a big assumption, that all nodes have the same thread pool size / have the number\n- * of processors and that shard of the indices the search requests go to are more or less evenly distributed across\n- * all nodes in the cluster. But I think it is a good enough default for most cases, if not then the default should be\n- * overwritten in the request itself.\n+ * This is not perfect and makes a big assumption, that all nodes have the same thread pool size / have the number of processors and\n+ * that shard of the indices the search requests go to are more or less evenly distributed across all nodes in the cluster. But I think\n+ * it is a good enough default for most cases, if not then the default should be overwritten in the request itself.\n */\n static int defaultMaxConcurrentSearches(int availableProcessors, ClusterState state) {\n int numDateNodes = state.getNodes().getDataNodes().size();\n@@ -103,8 +101,20 @@ static int defaultMaxConcurrentSearches(int availableProcessors, ClusterState st\n return Math.max(1, numDateNodes * defaultSearchThreadPoolSize);\n }\n \n- void executeSearch(Queue<SearchRequestSlot> requests, AtomicArray<MultiSearchResponse.Item> responses,\n- AtomicInteger responseCounter, ActionListener<MultiSearchResponse> listener) {\n+ /**\n+ * Executes a single request from the queue of requests. When a request finishes, another request is taken from the queue. When a\n+ * request is executed, a permit is taken on the specified semaphore, and released as each request completes.\n+ *\n+ * @param requests the queue of multi-search requests to execute\n+ * @param responses atomic array to hold the responses corresponding to each search request slot\n+ * @param responseCounter incremented on each response\n+ * @param listener the listener attached to the multi-search request\n+ */\n+ private void executeSearch(\n+ final Queue<SearchRequestSlot> requests,\n+ final AtomicArray<MultiSearchResponse.Item> responses,\n+ final AtomicInteger responseCounter,\n+ final ActionListener<MultiSearchResponse> listener) {\n SearchRequestSlot request = requests.poll();\n if (request == null) {\n /*\n@@ -118,52 +128,43 @@ void executeSearch(Queue<SearchRequestSlot> requests, AtomicArray<MultiSearchRes\n }\n \n /*\n- * With a request in hand, we are going to asynchronously execute the search request. 
When the search request returns, either with\n- * a success or with a failure, we set the response corresponding to the request. Then, we enter a loop that repeatedly pulls\n- * requests off the request queue, this time only setting the response corresponding to the request.\n+ * With a request in hand, we are now prepared to execute the search request. There are two possibilities, either we go asynchronous\n+ * or we do not (this can happen if the request does not resolve to any shards). If we do not go asynchronous, we are going to come\n+ * back on the same thread that attempted to execute the search request. At this point, or any other point where we come back on the\n+ * same thread as when the request was submitted, we should not recurse lest we might descend into a stack overflow. To avoid this,\n+ * when we handle the response rather than going recursive, we fork to another thread, otherwise we recurse.\n */\n+ final Thread thread = Thread.currentThread();\n searchAction.execute(request.request, new ActionListener<SearchResponse>() {\n @Override\n public void onResponse(final SearchResponse searchResponse) {\n handleResponse(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null));\n- executeSearchLoop();\n }\n \n @Override\n public void onFailure(final Exception e) {\n handleResponse(request.responseSlot, new MultiSearchResponse.Item(null, e));\n- executeSearchLoop();\n }\n \n private void handleResponse(final int responseSlot, final MultiSearchResponse.Item item) {\n responses.set(responseSlot, item);\n if (responseCounter.decrementAndGet() == 0) {\n assert requests.isEmpty();\n finish();\n+ } else {\n+ if (thread == Thread.currentThread()) {\n+ // we are on the same thread, we need to fork to another thread to avoid recursive stack overflow on a single thread\n+ threadPool.generic().execute(() -> executeSearch(requests, responses, responseCounter, listener));\n+ } else {\n+ // we are on a different thread (we went asynchronous), it's safe to recurse\n+ executeSearch(requests, responses, responseCounter, listener);\n+ }\n }\n }\n \n private void finish() {\n listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()])));\n }\n-\n- private void executeSearchLoop() {\n- SearchRequestSlot next;\n- while ((next = requests.poll()) != null) {\n- final int nextResponseSlot = next.responseSlot;\n- searchAction.execute(next.request, new ActionListener<SearchResponse>() {\n- @Override\n- public void onResponse(SearchResponse searchResponse) {\n- handleResponse(nextResponseSlot, new MultiSearchResponse.Item(searchResponse, null));\n- }\n-\n- @Override\n- public void onFailure(Exception e) {\n- handleResponse(nextResponseSlot, new MultiSearchResponse.Item(null, e));\n- }\n- });\n- }\n- }\n });\n }\n \n@@ -176,5 +177,7 @@ static final class SearchRequestSlot {\n this.request = request;\n this.responseSlot = responseSlot;\n }\n+\n }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java", "status": "modified" }, { "diff": "@@ -30,14 +30,20 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.Randomness;\n import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.tasks.TaskManager;\n import org.elasticsearch.test.ESTestCase;\n import 
org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n+import java.util.Arrays;\n import java.util.Collections;\n+import java.util.IdentityHashMap;\n+import java.util.List;\n+import java.util.Set;\n+import java.util.concurrent.ExecutorService;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicReference;\n \n@@ -73,17 +79,27 @@ public TaskManager getTaskManager() {\n int maxAllowedConcurrentSearches = scaledRandomIntBetween(1, 16);\n AtomicInteger counter = new AtomicInteger();\n AtomicReference<AssertionError> errorHolder = new AtomicReference<>();\n+ // randomize whether or not requests are executed asynchronously\n+ final List<String> threadPoolNames = Arrays.asList(ThreadPool.Names.GENERIC, ThreadPool.Names.SAME);\n+ Randomness.shuffle(threadPoolNames);\n+ final ExecutorService commonExecutor = threadPool.executor(threadPoolNames.get(0));\n+ final ExecutorService rarelyExecutor = threadPool.executor(threadPoolNames.get(1));\n+ final Set<SearchRequest> requests = Collections.newSetFromMap(Collections.synchronizedMap(new IdentityHashMap<>()));\n TransportAction<SearchRequest, SearchResponse> searchAction = new TransportAction<SearchRequest, SearchResponse>\n (Settings.EMPTY, \"action\", threadPool, actionFilters, resolver, taskManager) {\n @Override\n protected void doExecute(SearchRequest request, ActionListener<SearchResponse> listener) {\n+ requests.add(request);\n int currentConcurrentSearches = counter.incrementAndGet();\n if (currentConcurrentSearches > maxAllowedConcurrentSearches) {\n errorHolder.set(new AssertionError(\"Current concurrent search [\" + currentConcurrentSearches +\n \"] is higher than is allowed [\" + maxAllowedConcurrentSearches + \"]\"));\n }\n- counter.decrementAndGet();\n- listener.onResponse(new SearchResponse());\n+ final ExecutorService executorService = rarely() ? rarelyExecutor : commonExecutor;\n+ executorService.execute(() -> {\n+ counter.decrementAndGet();\n+ listener.onResponse(new SearchResponse());\n+ });\n }\n };\n TransportMultiSearchAction action =\n@@ -104,6 +120,7 @@ protected void doExecute(SearchRequest request, ActionListener<SearchResponse> l\n \n MultiSearchResponse response = action.execute(multiSearchRequest).actionGet();\n assertThat(response.getResponses().length, equalTo(numSearchRequests));\n+ assertThat(requests.size(), equalTo(numSearchRequests));\n assertThat(errorHolder.get(), nullValue());\n } finally {\n assertTrue(ESTestCase.terminate(threadPool));", "filename": "core/src/test/java/org/elasticsearch/action/search/TransportMultiSearchActionTests.java", "status": "modified" } ] }
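The fix above hinges on one observation: a callback may run on the very thread that submitted the request (for example when a request resolves to no shards), and recursing in that case is what piles up stack frames. The sketch below shows the same guard in plain `java.util.concurrent` terms, with nothing Elasticsearch-specific; the names and the random synchronous/asynchronous split are purely illustrative.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;

public class SameThreadGuardDemo {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    // Run one task, then chain to the next. If the callback fired on the thread
    // that submitted the task (a synchronous completion), fork before continuing
    // so the stack never grows with the queue size; otherwise recursing is safe.
    static void executeNext(Queue<Runnable> tasks, Runnable onDone) {
        Runnable task = tasks.poll();
        if (task == null) {
            onDone.run();
            return;
        }
        final Thread submitter = Thread.currentThread();
        runWithCallback(task, () -> {
            if (Thread.currentThread() == submitter) {
                POOL.execute(() -> executeNext(tasks, onDone)); // fork: keep the stack flat
            } else {
                executeNext(tasks, onDone);                     // fresh stack: recurse
            }
        });
    }

    // Stand-in for an action that may complete on the calling thread or on a pool
    // thread, like a search request that resolves to zero shards vs. a real one.
    static void runWithCallback(Runnable task, Runnable callback) {
        if (ThreadLocalRandom.current().nextBoolean()) {
            task.run();
            callback.run();
        } else {
            POOL.execute(() -> { task.run(); callback.run(); });
        }
    }

    public static void main(String[] args) {
        Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 10_000; i++) {
            tasks.add(() -> { });
        }
        executeNext(tasks, () -> {
            System.out.println("all tasks done");
            POOL.shutdown();
        });
    }
}
```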
{ "body": "A multi-search request can stack overflow with large batches due to execute search being called recursively as responses are received.", "comments": [ { "body": "I will work on a fix for this.", "created_at": "2017-03-09T18:50:55Z" } ], "number": 23523, "title": "Multi-search can stack overflow with large batches" }
{ "body": "Today when handling a multi-search request, we asynchornously execute as many search requests as the minimum of the number of search requests in the multi-search request and the maximum number of concurrent requests. When these search requests return, we poll more search requests from a queue of search requests from the original multi-search request. The implementation of this was recursive, and if the number of requests in the multi-search request was large, a stack overflow could arise due to the recursive invocation. This commit replaces this recursive implementation with a simple iterative implementation.\r\n\r\nCloses #23523\r\n\r\n", "number": 23527, "review_comments": [], "title": "Avoid stack overflow in multi-search" }
{ "commits": [ { "message": "Avoid stack overflow in multi-search\n\nToday when handling a multi-search request, we asynchornously execute as\nmany search requests as the minimum of the number of search requests in\nthe multi-search request and the maximum number of concurrent\nrequests. When these search requests return, we poll more search\nrequests from a queue of search requests from the original multi-search\nrequest. The implementation of this was recursive, and if the number of\nrequests in the multi-search request was large, a stack overflow could\narise due to the recursive invocation. This commit replaces this\nrecursive implementation with a simple iterative implementation." } ], "files": [ { "diff": "@@ -46,8 +46,8 @@ public class TransportMultiSearchAction extends HandledTransportAction<MultiSear\n \n @Inject\n public TransportMultiSearchAction(Settings settings, ThreadPool threadPool, TransportService transportService,\n- ClusterService clusterService, TransportSearchAction searchAction,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n+ ClusterService clusterService, TransportSearchAction searchAction,\n+ ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n super(settings, MultiSearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiSearchRequest::new);\n this.clusterService = clusterService;\n this.searchAction = searchAction;\n@@ -107,27 +107,61 @@ void executeSearch(Queue<SearchRequestSlot> requests, AtomicArray<MultiSearchRes\n AtomicInteger responseCounter, ActionListener<MultiSearchResponse> listener) {\n SearchRequestSlot request = requests.poll();\n if (request == null) {\n- // Ok... so there're no more requests then this is ok, we're then waiting for running requests to complete\n+ /*\n+ * The number of times that we poll an item from the queue here is the minimum of the number of requests and the maximum number\n+ * of concurrent requests. At first glance, it appears that we should never poll from the queue and not obtain a request given\n+ * that we only poll here no more times than the number of requests. However, this is not the only consumer of this queue as\n+ * earlier requests that have already completed will poll from the queue too and they could complete before later polls are\n+ * invoked here. Thus, it can be the case that we poll here and and the queue was empty.\n+ */\n return;\n }\n+\n+ /*\n+ * With a request in hand, we are going to asynchronously execute the search request. When the search request returns, either with\n+ * a success or with a failure, we set the response corresponding to the request. 
Then, we enter a loop that repeatedly pulls\n+ * requests off the request queue, this time only setting the response corresponding to the request.\n+ */\n searchAction.execute(request.request, new ActionListener<SearchResponse>() {\n @Override\n- public void onResponse(SearchResponse searchResponse) {\n- responses.set(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null));\n- handleResponse();\n+ public void onResponse(final SearchResponse searchResponse) {\n+ handleResponse(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null));\n+ executeSearchLoop();\n }\n \n @Override\n- public void onFailure(Exception e) {\n- responses.set(request.responseSlot, new MultiSearchResponse.Item(null, e));\n- handleResponse();\n+ public void onFailure(final Exception e) {\n+ handleResponse(request.responseSlot, new MultiSearchResponse.Item(null, e));\n+ executeSearchLoop();\n }\n \n- private void handleResponse() {\n+ private void handleResponse(final int responseSlot, final MultiSearchResponse.Item item) {\n+ responses.set(responseSlot, item);\n if (responseCounter.decrementAndGet() == 0) {\n- listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()])));\n- } else {\n- executeSearch(requests, responses, responseCounter, listener);\n+ assert requests.isEmpty();\n+ finish();\n+ }\n+ }\n+\n+ private void finish() {\n+ listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()])));\n+ }\n+\n+ private void executeSearchLoop() {\n+ SearchRequestSlot next;\n+ while ((next = requests.poll()) != null) {\n+ final int nextResponseSlot = next.responseSlot;\n+ searchAction.execute(next.request, new ActionListener<SearchResponse>() {\n+ @Override\n+ public void onResponse(SearchResponse searchResponse) {\n+ handleResponse(nextResponseSlot, new MultiSearchResponse.Item(searchResponse, null));\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ handleResponse(nextResponseSlot, new MultiSearchResponse.Item(null, e));\n+ }\n+ });\n }\n }\n });", "filename": "core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java", "status": "modified" }, { "diff": "@@ -70,7 +70,7 @@ public TaskManager getTaskManager() {\n \n // Keep track of the number of concurrent searches started by multi search api,\n // and if there are more searches than is allowed create an error and remember that.\n- int maxAllowedConcurrentSearches = scaledRandomIntBetween(1, 20);\n+ int maxAllowedConcurrentSearches = scaledRandomIntBetween(1, 16);\n AtomicInteger counter = new AtomicInteger();\n AtomicReference<AssertionError> errorHolder = new AtomicReference<>();\n TransportAction<SearchRequest, SearchResponse> searchAction = new TransportAction<SearchRequest, SearchResponse>\n@@ -82,24 +82,20 @@ protected void doExecute(SearchRequest request, ActionListener<SearchResponse> l\n errorHolder.set(new AssertionError(\"Current concurrent search [\" + currentConcurrentSearches +\n \"] is higher than is allowed [\" + maxAllowedConcurrentSearches + \"]\"));\n }\n- threadPool.executor(ThreadPool.Names.GENERIC).execute(\n- () -> {\n- try {\n- Thread.sleep(scaledRandomIntBetween(10, 1000));\n- } catch (InterruptedException e) {\n- }\n- counter.decrementAndGet();\n- listener.onResponse(new SearchResponse());\n- }\n- );\n+ counter.decrementAndGet();\n+ listener.onResponse(new SearchResponse());\n }\n };\n TransportMultiSearchAction action =\n new TransportMultiSearchAction(threadPool, 
actionFilters, transportService, clusterService, searchAction, resolver, 10);\n \n // Execute the multi search api and fail if we find an error after executing:\n try {\n- int numSearchRequests = randomIntBetween(16, 128);\n+ /*\n+ * Allow for a large number of search requests in a single batch as previous implementations could stack overflow if the number\n+ * of requests in a single batch was large\n+ */\n+ int numSearchRequests = scaledRandomIntBetween(1, 8192);\n MultiSearchRequest multiSearchRequest = new MultiSearchRequest();\n multiSearchRequest.maxConcurrentSearchRequests(maxAllowedConcurrentSearches);\n for (int i = 0; i < numSearchRequests; i++) {", "filename": "core/src/test/java/org/elasticsearch/action/search/TransportMultiSearchActionTests.java", "status": "modified" } ] }
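Stripped of the search machinery, the change in the diff above is the classic recursion-to-iteration rewrite: instead of each completed request recursively kicking off the next, the completion handler drains the queue in a loop. A toy illustration of why the shape matters, in plain Java with arbitrary names:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RecursiveVsIterativeDrain {

    // Recursive shape: each handled item starts the next from inside the call,
    // so a queue of N items needs N nested frames when everything completes
    // synchronously. Java has no tail-call elimination, so large N overflows.
    static void drainRecursive(Queue<Integer> queue) {
        Integer item = queue.poll();
        if (item == null) {
            return;
        }
        handle(item);
        drainRecursive(queue);
    }

    // Iterative shape used by the fix: one frame regardless of queue size.
    static void drainIterative(Queue<Integer> queue) {
        Integer item;
        while ((item = queue.poll()) != null) {
            handle(item);
        }
    }

    static void handle(Integer item) {
        // process one response slot
    }

    public static void main(String[] args) {
        Queue<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < 1_000_000; i++) {
            queue.add(i);
        }
        drainIterative(queue);   // completes fine
        // drainRecursive(...) on a fresh queue of this size would throw StackOverflowError
        System.out.println("drained");
    }
}
```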
{ "body": "**Elasticsearch version**:\r\n5.2.1\r\n**Plugins installed**: []\r\n**JVM version**:\r\n1.8\r\n**OS version**:\r\nNAME=\"Amazon Linux AMI\"\r\nVERSION=\"2016.09\"\r\nID=\"amzn\"\r\nID_LIKE=\"rhel fedora\"\r\nVERSION_ID=\"2016.09\"\r\nPRETTY_NAME=\"Amazon Linux AMI 2016.09\"\r\nANSI_COLOR=\"0;33\"\r\nCPE_NAME=\"cpe:/o:amazon:linux:2016.09:ga\"\r\nHOME_URL=\"http://aws.amazon.com/amazon-linux-ami/\"\r\nAmazon Linux AMI release 2016.09\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nI've just moved to elastic v5 (5.2.1) from 2.3.5 and am finding that I cannot get nested inner hits back when the nested query is placed inside a dismax query.\r\n\r\n**Steps to reproduce**:\r\n 1. Create new index with nested type\r\n\r\n```\r\ncurl -XPUT localhost:9200/nested-dismax '{\r\n \"mappings\": {\r\n \"tpe\": {\r\n \"properties\": {\r\n \"bob\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"name\": {\r\n \"type\": \"text\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\n 2. Index a document:\r\n\r\n```\r\ncurl -XPUT localhost:9200/nested-dismax/tpe/1 '{\r\n \"bob\" : {\r\n \"name\" : \"Bob\"\r\n }\r\n}'\r\n```\r\n\r\n 3. Try a nested query without a dismax and it works:\r\n\r\n```\r\ncurl -XGET localhost:9200/nested-dismax/_search '{\r\n \"query\": {\r\n \"nested\": {\r\n \"path\": \"bob\",\r\n \"query\": {\r\n \"match\": {\r\n \"bob.name\": \"Bob\"\r\n }\r\n },\r\n \"inner_hits\":{}\r\n }\r\n }\r\n}'\r\n```\r\n\r\n4. Try it with a dismax around it and no inner hits are returned:\r\n\r\n```\r\ncurl -XGET localhost:9200/nested-dismax/_search '{\r\n \"query\": {\r\n \"dis_max\": {\r\n \"queries\": [\r\n {\r\n \"nested\": {\r\n \"path\": \"bob\",\r\n \"query\": {\r\n \"match\": {\r\n \"bob.name\": \"Bob\"\r\n }\r\n },\r\n \"inner_hits\": {}\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n}'\r\n```\r\n\r\nPS - apologies if the curls don't run straight away, I was using sense and my server got shut down before I could test the curl version!\r\n\r\n\r\n\r\n", "comments": [ { "body": "@wboult Thanks for reporting this. Inner hits are indeed not working when nesting it under `dis_max` query.", "created_at": "2017-03-05T14:26:57Z" }, { "body": "I see the fix has been added to v6. Is it expected to be back ported to v5?", "created_at": "2017-06-30T05:51:39Z" } ], "number": 23482, "title": "No inner hits returned when nested query inside dismax query (v5)" }
{ "body": "PR for #23482", "number": 23512, "review_comments": [], "title": "Changed DisMaxQueryBuilder to extract inner hits from leaf queries" }
{ "commits": [ { "message": "[INNER HITS] Changed DisMaxQueryBuilder to extract inner hits from leaf queries.\n\nCloses #23482" } ], "files": [ { "diff": "@@ -33,6 +33,7 @@\n import java.util.ArrayList;\n import java.util.Collection;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -206,4 +207,11 @@ protected boolean doEquals(DisMaxQueryBuilder other) {\n public String getWriteableName() {\n return NAME;\n }\n+\n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitBuilder> innerHits) {\n+ for (QueryBuilder query : queries) {\n+ InnerHitBuilder.extractInnerHits(query, innerHits);\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/DisMaxQueryBuilder.java", "status": "modified" }, { "diff": "@@ -183,6 +183,25 @@ public void testInlineLeafInnerHitsNestedQueryViaBoolQuery() {\n assertThat(innerHitBuilders.get(leafInnerHits.getName()), notNullValue());\n }\n \n+ public void testInlineLeafInnerHitsNestedQueryViaDisMaxQuery() {\n+ InnerHitBuilder leafInnerHits1 = randomInnerHits();\n+ NestedQueryBuilder nestedQueryBuilder = new NestedQueryBuilder(\"path\", new MatchAllQueryBuilder(), ScoreMode.None)\n+ .innerHit(leafInnerHits1, false);\n+\n+ InnerHitBuilder leafInnerHits2 = randomInnerHits();\n+ HasChildQueryBuilder hasChildQueryBuilder = new HasChildQueryBuilder(\"type\", new MatchAllQueryBuilder(), ScoreMode.None)\n+ .innerHit(leafInnerHits2, false);\n+\n+ DisMaxQueryBuilder disMaxQueryBuilder = new DisMaxQueryBuilder();\n+ disMaxQueryBuilder.add(nestedQueryBuilder);\n+ disMaxQueryBuilder.add(hasChildQueryBuilder);\n+ Map<String, InnerHitBuilder> innerHitBuilders = new HashMap<>();\n+ disMaxQueryBuilder.extractInnerHitBuilders(innerHitBuilders);\n+ assertThat(innerHitBuilders.size(), equalTo(2));\n+ assertThat(innerHitBuilders.get(leafInnerHits1.getName()), notNullValue());\n+ assertThat(innerHitBuilders.get(leafInnerHits2.getName()), notNullValue());\n+ }\n+\n public void testInlineLeafInnerHitsNestedQueryViaConstantScoreQuery() {\n InnerHitBuilder leafInnerHits = randomInnerHits();\n NestedQueryBuilder nestedQueryBuilder = new NestedQueryBuilder(\"path\", new MatchAllQueryBuilder(), ScoreMode.None)", "filename": "core/src/test/java/org/elasticsearch/index/query/InnerHitBuilderTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 5.2.1\r\n**Plugins installed**: `repository-azure`\r\n\r\nI created an azure storage account with [Read-access geo-redundant storage (RA-GRS)](https://docs.microsoft.com/en-us/azure/storage/storage-redundancy#read-access-geo-redundant-storage).\r\nThen I configure `elasticsearch.yml` with:\r\n\r\n```yml\r\ncloud:\r\n azure:\r\n storage:\r\n my_account:\r\n account: ACCOUNT\r\n key: KEY\r\n```\r\n\r\nThen:\r\n\r\n```\r\nPUT _snapshot/primary\r\n{\r\n \"type\": \"azure\",\r\n \"settings\": {\r\n \"account\": \"my_account\",\r\n \"container\": \"container\"\r\n }\r\n}\r\nGET _snapshot/primary/_all\r\n```\r\n\r\nThis takes less than a second.\r\n\r\n```\r\nPUT _snapshot/secondary\r\n{\r\n \"type\": \"azure\",\r\n \"settings\": {\r\n \"account\": \"my_account\",\r\n \"container\": \"container\",\r\n \"location_mode\": \"secondary_only\"\r\n }\r\n}\r\nGET _snapshot/secondary/_all\r\n```\r\n\r\nThis takes something like 5 minutes.\r\n\r\nThis seems to be a regression from 5.1 series.", "comments": [], "number": 23480, "title": "Getting all snapshots with a secondary only azure repository takes a long time" }
{ "body": "Previously, the Azure blob store would depend on a 404 StorageException\r\ncoming back from Azure if trying to open an input stream to a\r\nnon-existent blob. This works for Azure repositories which access a\r\nprimary location path. For those configured to access a secondary\r\nlocation path, the Azure SDK keeps trying for a long while before\r\nreturning a 404 StorageException, causing potential delays in the\r\nsnapshot APIs. This commit makes an initial check if the blob exists in\r\nAzure and returns immediately with a NoSuchFileException, instead of\r\ntrying to open the input stream to the blob.\r\n\r\nCloses #23480 ", "number": 23483, "review_comments": [ { "body": "I think we need to add that this test requires an azure storage account defined as a `Read-access geo-redundant storage (RA-GRS)`.", "created_at": "2017-03-03T18:58:41Z" }, { "body": "I think this is not needed. It should use the default account available.", "created_at": "2017-03-03T19:01:03Z" }, { "body": "I think this is not needed. It should use the default account available.", "created_at": "2017-03-03T19:01:08Z" }, { "body": "May be randomize the container name as we do in `AzureSnapshotRestoreTests`?\r\n\r\n```java\r\n private static String getContainerName() {\r\n String testName = \"snapshot-itest-\".concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT));\r\n return testName.contains(\" \") ? Strings.split(testName, \" \")[0] : testName;\r\n }\r\n```\r\n\r\n", "created_at": "2017-03-03T19:02:05Z" }, { "body": "And reuse the randomized container name here?", "created_at": "2017-03-03T19:02:24Z" }, { "body": "Why do you remove and create again?", "created_at": "2017-03-03T20:29:25Z" }, { "body": "May be do that in an After method so it's always removed?", "created_at": "2017-03-03T20:31:10Z" }, { "body": "Remove and create again is not needed I think", "created_at": "2017-03-03T20:31:56Z" }, { "body": "Needed?", "created_at": "2017-03-03T20:47:59Z" }, { "body": "removed", "created_at": "2017-03-03T22:00:56Z" } ], "title": "Azure blob store's readBlob() method first checks if the blob exists" }
{ "commits": [ { "message": "Add secondary azure test for 5.2 branch" }, { "message": "Azure blob store's readBlob() method first checks if the blob exists\n\nPreviously, the Azure blob store would depend on a 404 StorageException\ncoming back from Azure if trying to open an input stream to a\nnon-existent blob. This works for Azure repositories which access a\nprimary location path. For those configured to access a secondary\nlocation path, the Azure SDK keeps trying for a long while before\nreturning a 404 StorageException, causing potential delays in the\nsnapshot APIs. This commit makes an initial check if the blob exists in\nAzure and returns immediately with a NoSuchFileException, instead of\ntrying to open the input stream to the blob.\n\nCloses #23480" }, { "message": "check location mode" }, { "message": "javadocs" }, { "message": "address review" }, { "message": "feedback" }, { "message": "remove unused method" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cloud.azure.blobstore;\n \n+import com.microsoft.azure.storage.LocationMode;\n import com.microsoft.azure.storage.StorageException;\n import org.apache.logging.log4j.Logger;\n import org.elasticsearch.common.Nullable;\n@@ -68,6 +69,16 @@ public boolean blobExists(String blobName) {\n public InputStream readBlob(String blobName) throws IOException {\n logger.trace(\"readBlob({})\", blobName);\n \n+ if (blobStore.getLocationMode() == LocationMode.SECONDARY_ONLY && !blobExists(blobName)) {\n+ // On Azure, if the location path is a secondary location, and the blob does not\n+ // exist, instead of returning immediately from the getInputStream call below\n+ // with a 404 StorageException, Azure keeps trying and trying for a long timeout\n+ // before throwing a storage exception. This can cause long delays in retrieving\n+ // snapshots, so we first check if the blob exists before trying to open an input\n+ // stream to it.\n+ throw new NoSuchFileException(\"Blob [\" + blobName + \"] does not exist\");\n+ }\n+\n try {\n return blobStore.getInputStream(blobStore.container(), buildKey(blobName));\n } catch (StorageException e) {", "filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobContainer.java", "status": "modified" }, { "diff": "@@ -76,6 +76,13 @@ public String container() {\n return container;\n }\n \n+ /**\n+ * Gets the configured {@link LocationMode} for the Azure storage requests.\n+ */\n+ public LocationMode getLocationMode() {\n+ return locMode;\n+ }\n+\n @Override\n public BlobContainer blobContainer(BlobPath path) {\n return new AzureBlobContainer(repositoryName, path, this);", "filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobStore.java", "status": "modified" }, { "diff": "@@ -0,0 +1,117 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.repositories.azure;\n+\n+import com.microsoft.azure.storage.LocationMode;\n+import com.microsoft.azure.storage.StorageException;\n+import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.cloud.azure.AbstractAzureWithThirdPartyIntegTestCase;\n+import org.elasticsearch.cloud.azure.storage.AzureStorageService;\n+import org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.repositories.azure.AzureRepository.Repository;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n+import org.junit.After;\n+import org.junit.Before;\n+\n+import java.net.URISyntaxException;\n+import java.util.concurrent.TimeUnit;\n+\n+import static org.elasticsearch.cloud.azure.AzureTestUtils.readSettingsFromFile;\n+import static org.elasticsearch.repositories.azure.AzureSnapshotRestoreTests.getContainerName;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.lessThanOrEqualTo;\n+\n+/**\n+ * This test needs Azure to run and -Dtests.thirdparty=true to be set\n+ * and -Dtests.config=/path/to/elasticsearch.yml\n+ *\n+ * Note that this test requires an Azure storage account, with the account\n+ * and credentials set in the elasticsearch.yml config file passed in to the\n+ * test. The Azure storage account type must be a Read-access geo-redundant\n+ * storage (RA-GRS) account.\n+ *\n+ * @see AbstractAzureWithThirdPartyIntegTestCase\n+ */\n+@ClusterScope(\n+ scope = ESIntegTestCase.Scope.SUITE,\n+ supportsDedicatedMasters = false, numDataNodes = 1,\n+ transportClientRatio = 0.0)\n+public class AzureSnapshotRestoreListSnapshotsTests extends AbstractAzureWithThirdPartyIntegTestCase {\n+\n+ private final AzureStorageService azureStorageService = new AzureStorageServiceImpl(readSettingsFromFile());\n+ private final String containerName = getContainerName();\n+\n+ public void testList() throws Exception {\n+ Client client = client();\n+ logger.info(\"--> creating azure primary repository\");\n+ PutRepositoryResponse putRepositoryResponsePrimary = client.admin().cluster().preparePutRepository(\"primary\")\n+ .setType(\"azure\").setSettings(Settings.builder()\n+ .put(Repository.CONTAINER_SETTING.getKey(), containerName)\n+ ).get();\n+ assertThat(putRepositoryResponsePrimary.isAcknowledged(), equalTo(true));\n+\n+ logger.info(\"--> start get snapshots on primary\");\n+ long startWait = System.currentTimeMillis();\n+ client.admin().cluster().prepareGetSnapshots(\"primary\").get();\n+ long endWait = System.currentTimeMillis();\n+ // definitely should be done in 30s, and if its not working as expected, it takes over 1m\n+ assertThat(endWait - startWait, lessThanOrEqualTo(30000L));\n+\n+ logger.info(\"--> creating azure secondary repository\");\n+ PutRepositoryResponse putRepositoryResponseSecondary = client.admin().cluster().preparePutRepository(\"secondary\")\n+ .setType(\"azure\").setSettings(Settings.builder()\n+ .put(Repository.CONTAINER_SETTING.getKey(), containerName)\n+ .put(Repository.LOCATION_MODE_SETTING.getKey(), \"secondary_only\")\n+ ).get();\n+ assertThat(putRepositoryResponseSecondary.isAcknowledged(), equalTo(true));\n+\n+ logger.info(\"--> start get snapshots on secondary\");\n+ startWait = 
System.currentTimeMillis();\n+ client.admin().cluster().prepareGetSnapshots(\"secondary\").get();\n+ endWait = System.currentTimeMillis();\n+ logger.info(\"--> end of get snapshots on secondary. Took {} ms\", endWait - startWait);\n+ assertThat(endWait - startWait, lessThanOrEqualTo(30000L));\n+ }\n+\n+ @Before\n+ public void createContainer() throws Exception {\n+ // It could happen that we run this test really close to a previous one\n+ // so we might need some time to be able to create the container\n+ assertBusy(() -> {\n+ try {\n+ azureStorageService.createContainer(null, LocationMode.PRIMARY_ONLY, containerName);\n+ } catch (URISyntaxException e) {\n+ // Incorrect URL. This should never happen.\n+ fail();\n+ } catch (StorageException e) {\n+ // It could happen. Let's wait for a while.\n+ fail();\n+ }\n+ }, 30, TimeUnit.SECONDS);\n+ }\n+\n+ @After\n+ public void removeContainer() throws Exception {\n+ azureStorageService.removeContainer(null, LocationMode.PRIMARY_ONLY, containerName);\n+ }\n+}", "filename": "plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreListSnapshotsTests.java", "status": "added" }, { "diff": "@@ -69,7 +69,7 @@ private String getRepositoryPath() {\n return testName.contains(\" \") ? Strings.split(testName, \" \")[0] : testName;\n }\n \n- private static String getContainerName() {\n+ public static String getContainerName() {\n String testName = \"snapshot-itest-\".concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT));\n return testName.contains(\" \") ? Strings.split(testName, \" \")[0] : testName;\n }", "filename": "plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreTests.java", "status": "modified" } ] }
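The change above short-circuits `readBlob()` with a `NoSuchFileException` only when the repository's `LocationMode` is `SECONDARY_ONLY`, so primary-path repositories keep relying on the 404 `StorageException` from Azure. A minimal sketch of how such a secondary-only repository is registered through the Java client, following `AzureSnapshotRestoreListSnapshotsTests` in the diff (the repository name and container name are illustrative placeholders, and an existing `client` plus the imports used in that test are assumed):

```
// Sketch based on the test added in this PR; "secondary" and "my-container" are placeholders.
PutRepositoryResponse response = client.admin().cluster().preparePutRepository("secondary")
    .setType("azure")
    .setSettings(Settings.builder()
        .put(Repository.CONTAINER_SETTING.getKey(), "my-container")
        // secondary_only is what triggers the new blobExists() pre-check in readBlob()
        .put(Repository.LOCATION_MODE_SETTING.getKey(), "secondary_only"))
    .get();
assert response.isAcknowledged();
```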
{ "body": "\r\n**Elasticsearch version**:\r\n5.1.1\r\n\r\n\r\nIn order to speed up the reloading of scripts, it seems that it's possible to do this:\r\n\r\n```\r\nresource.reload.interval.medium: \"2s\"\r\n```\r\n\r\nWhich works great - but there are no clear docs on how to use these settings, in particular:\r\n- How to change the default interval from medium\r\n- What is the purpose of the `resource.reload.interval` setting.\r\n\r\nStart ES with:\r\n\r\n```\r\nresource.reload.interval: \"2s\"\r\n```\r\n\r\nResults in:\r\n\r\n```\r\nCaused by: java.lang.IllegalArgumentException: unknown setting [resource.reload.interval] did you mean any of [resource.reload.interval.low, resource.reload.interval.high, resource.reload.interval.medium, resource.reload.enabled]?\r\n```", "comments": [ { "body": "I believe this is because the `resource.reload.interval` is not given `NodeScope`, unlike the other settings.", "created_at": "2017-01-26T16:19:11Z" }, { "body": "Does that mean that the documentation here is incorrect? https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-scripting-using.html#reload-scripts\r\n\r\nIt mentions setting `resource.reload.interval`.", "created_at": "2017-04-12T21:19:11Z" }, { "body": "This is already fixed, see: https://github.com/elastic/elasticsearch/blob/07f67cd8b545a14867496800407644d849671b77/core/src/main/java/org/elasticsearch/watcher/ResourceWatcherService.java#L72-L73", "created_at": "2017-08-15T19:50:17Z" }, { "body": "This is **not** solved. This exception\r\n```\r\n java.lang.IllegalArgumentException: unknown setting [resource.reload.interval] did you mean any of [resource.reload.interval.low, resource.reload.interval.high, resource.reload.interval.medium, resource.reload.enabled]?\r\n```\r\nis still thrown. Please update the docs, at the very least.", "created_at": "2017-10-27T11:25:55Z" } ], "number": 22814, "title": "resource.reload.interval not configurable" }
{ "body": "I test the issue mentioned in #22814 with ES-5.2.2, and verify the parameter \"resource.reload.interval\" is still not configurable. This commit expected to fix the problem, Thanks.\r\n--Fanfan ", "number": 23475, "review_comments": [], "title": "fix #22814 resource.reload.interval not configurable" }
{ "commits": [ { "message": "fix #22814 resource.reload.interval not configurable" } ], "files": [ { "diff": "@@ -396,6 +396,7 @@ public void apply(Settings value, Settings current, Settings previous) {\n IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING,\n IndexingMemoryController.SHARD_MEMORY_INTERVAL_TIME_SETTING,\n ResourceWatcherService.ENABLED,\n+ ResourceWatcherService.RELOAD_INTERVAL,\n ResourceWatcherService.RELOAD_INTERVAL_HIGH,\n ResourceWatcherService.RELOAD_INTERVAL_MEDIUM,\n ResourceWatcherService.RELOAD_INTERVAL_LOW,", "filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java", "status": "modified" }, { "diff": "@@ -69,8 +69,10 @@ public enum Frequency {\n public static final Setting<Boolean> ENABLED = Setting.boolSetting(\"resource.reload.enabled\", true, Property.NodeScope);\n public static final Setting<TimeValue> RELOAD_INTERVAL_HIGH =\n Setting.timeSetting(\"resource.reload.interval.high\", Frequency.HIGH.interval, Property.NodeScope);\n- public static final Setting<TimeValue> RELOAD_INTERVAL_MEDIUM = Setting.timeSetting(\"resource.reload.interval.medium\",\n- Setting.timeSetting(\"resource.reload.interval\", Frequency.MEDIUM.interval), Property.NodeScope);\n+ public static final Setting<TimeValue> RELOAD_INTERVAL =\n+ Setting.timeSetting(\"resource.reload.interval\", Frequency.MEDIUM.interval, Property.NodeScope);\n+ public static final Setting<TimeValue> RELOAD_INTERVAL_MEDIUM =\n+ Setting.timeSetting(\"resource.reload.interval.medium\", RELOAD_INTERVAL, Property.NodeScope);\n public static final Setting<TimeValue> RELOAD_INTERVAL_LOW =\n Setting.timeSetting(\"resource.reload.interval.low\", Frequency.LOW.interval, Property.NodeScope);\n ", "filename": "core/src/main/java/org/elasticsearch/watcher/ResourceWatcherService.java", "status": "modified" } ] }
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**: 5.2.1\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nCalling the setGlobalText method on SuggestBuilder should be a shortcut to setting the text() property on each SuggestionBuilder added to the SuggestBuilder. Currently setting just the global text causes the Suggest object returned by SearchResponse to be null.\r\n\r\n**Steps to reproduce**:\r\n `\r\nString searchText = \"foo\";\r\nSuggestBuilder suggestBuilder = new SuggestBuilder(); \r\nsuggestBuilder.setGlobalText(searchText);\r\n\r\nSuggestionBuilder<?> suggestionBuilder = SuggestBuilders.completionSuggestion(\"fieldName\");\r\n// suggestionBuilder.text(searchText); \r\n// uncomment the line above and the searchResponse#getResponse will exist as expected\r\nsuggestBuilder.addSuggestion(\"suggestion-name\", suggestionBuilder);\r\n\r\nsearchResponse.getSuggest() // This is null when only the global text is set.\r\n`\r\nThe Suggest instance should not be null if the global text is set on the SuggestBuilder.\r\n\r\n<!--\r\nIf you are filing a feature request, please remove the above bug\r\nreport block and provide responses for all of the below items.\r\n-->\r\n", "comments": [ { "body": "I dont think this is a bug. The `text` parameter on the SuggestionBuilder level is supposed to be independent of the `globalText` parameter in the top level Suggester. This also reflects what is happening on the REST level. As mentioned in the [docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-suggesters.html): \"The suggest text specified on suggestion level override the suggest text on the global level.\"\r\n\r\nThis is handled later, I think in this part of the code: \r\nhttps://github.com/elastic/elasticsearch/blob/3fb9254b9548d3514c3d187081bfebfce3a948f8/core/src/main/java/org/elasticsearch/search/suggest/SuggestBuilder.java#L175\r\n\r\nSo IMHO `SuggestBuilder#setGlobalText` isn't supposed to be a shortcut for setting the text() property on each SuggestionBuilder added to the SuggestBuilder. Maybe @areek can confirm this.", "created_at": "2017-03-01T16:45:30Z" }, { "body": "Directly above the section in the docs you referenced, this is mentioned:\r\n\r\n> To avoid repetition of the suggest text, it is possible to define a global text. 
In the example below the suggest text is defined globally and applies to the my-suggest-1 and my-suggest-2 suggestions.\r\n\r\n```\r\n POST _search\r\n{\r\n \"suggest\": {\r\n \"text\" : \"tring out Elasticsearch\",\r\n \"my-suggest-1\" : {\r\n \"term\" : {\r\n \"field\" : \"message\"\r\n }\r\n },\r\n \"my-suggest-2\" : {\r\n \"term\" : {\r\n \"field\" : \"user\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\nThis leads me to believe that setting the global text is a shortcut for setting the text() property on each SuggestionBuilder. \r\n\r\nAlso, in the code you referenced:\r\n```\r\nif (suggestionContext.getText() == null) {\r\n if (globalText == null) {\r\n throw new IllegalArgumentException(\"The required text option is missing\");\r\n }\r\n suggestionContext.setText(BytesRefs.toBytesRef(globalText));\r\n}\r\n```\r\nI read that as: \r\nIf the suggestionContext does not have it's text() property set, and globalText is not null, then use the globalText to assign the the text() property of the suggestionContext.\r\n\r\nOtherwise, what would be the point of setting the globalText if you still had to set the text property of each SuggestionBuilder?", "created_at": "2017-03-01T18:35:38Z" }, { "body": "@etay2000 The docs refer to the Rest syntax, not the Java API. Your reading of the code is correct, but here we are already taking about internals (SuggestionContext, SuggestionSearchContext), not the builders. The `globalText` will be used when set on the SuggestBuilder and no `text` from the individual Suggestion overwrite it. But that doesn't mean we need to transfer this to the SuggestionBuilders.", "created_at": "2017-03-01T20:39:04Z" }, { "body": "@cbuescher I assumed the Rest syntax would mirror the Java API (or vice versa) with regards to setting the globalText. I am not too familiar with the internals, so I am still a bit confused by when and why the globalText would actually be used. My interpretation of the benefit of the globalText option was to prevent the repetition of setting the same suggest text. As it stands in the Java API, each SuggestionBuilder is still required to have its text property set even if it shares the same text value with all of the other SuggestionBuilders. If the text property is not set, then why wouldn't the globalText propagate down to it? This to me seems to defeat the purpose of having an option of setting the globalText on the SuggestBuilder in the first place.", "created_at": "2017-03-01T21:24:15Z" }, { "body": "> As it stands in the Java API, each SuggestionBuilder is still required to have its text property set even if it shares the same text value with all of the other SuggestionBuilders. If the text property is not set, then why wouldn't the globalText propagate down to it?\r\n\r\nHave you tried setting the `globalText` on the SuggestBuilder and then using this for the seach? It should work regardless. You don't have to set the `text` on each SuggestionBuilder, these things are merged later (the place I pointed to)...", "created_at": "2017-03-01T21:29:12Z" }, { "body": "Yes that was my point in my first post (sorry for the poor code formatting). If I set the globalText on the SuggestBuilder but not on any SuggestionBuilders I get a null Suggest instance back from the SearchResponse. 
However, if I set the suggest text on the SuggestionBuilders I get a proper Suggest instance as expected.\r\n\r\n```String searchText = \"foo\";\r\nSuggestBuilder suggestBuilder = new SuggestBuilder();\r\nsuggestBuilder.setGlobalText(searchText);\r\n\r\nSuggestionBuilder<?> suggestionBuilder = SuggestBuilders.completionSuggestion(\"fieldName\");\r\n// suggestionBuilder.text(searchText);\r\n// As is, searchResponse#getSuggest will return null,\r\n// uncomment the line above and the searchResponse#getSuggest will return as expected\r\nsuggestBuilder.addSuggestion(\"suggestion-name\", suggestionBuilder);```\r\n\r\n\r\n", "created_at": "2017-03-01T21:48:04Z" }, { "body": "@etay2000 thanks for the clarification, I was able to reproduce this now in a simple test. The problem seems to be limited to completion suggestions where neither `prefix`, `regex` nor `text` specified on the individual suggestion. In this case we internally overwrite the `text` property with the global text set on the Suggest builder. The problem is that internally the completion suggester need either `prefix` or `regex` to be set and throws an error. There should be shard failures in the response you are receiving. \r\nI think in case of missing `text` on the suggestion, we should use the global text to also overwrite the prefix used by the CompletionSuggester. I will open a PR shortly.", "created_at": "2017-03-02T11:35:14Z" }, { "body": "@cbuescher Sure enough there were shard failures and somehow I missed the IllegalArgumentException(\"'prefix' or 'regex' must be defined\") on the stack, so your fix should definitely take care of it. Thanks for the quick response.", "created_at": "2017-03-02T17:06:27Z" }, { "body": "Closed by #23451", "created_at": "2017-03-17T13:25:04Z" } ], "number": 23340, "title": "Setting globalText property on SuggestBuilder instead of the #text property on each SuggestionBuilder returns null from SearchResponse#getSuggest" }
{ "body": "In cases where the user specifies only the `text` option on the top level\r\nsuggest element (either via REST or the java api), this gets transferred to the\r\n`text` property in the SuggestionSearchContext. CompletionSuggestionContext\r\ncurrently requires prefix or regex to be specified, otherwise errors. We should\r\nuse the global `text` property as a fall back if provided in this case.\r\n\r\nNote that when `text` is set on the CompletionSuggestionBuilder directly, we already \r\noverwrite a missing `prefix` property in `SuggestionBuilder#populateCommonFields`. \r\nThis, however, currently doesn't work with the global text overwrite taking place in \r\n`SuggestBuilder#build`\r\n\r\nCloses to #23340", "number": 23451, "review_comments": [], "title": "Completion suggestion should also consider text if prefix/regex is missing" }
{ "commits": [ { "message": "CompletionSuggestionContext#toQuery() should also consider text if prefix/regex missing\n\nIn cases where the user specifies only the `text` option on the top level\nsuggest element (either via REST or the java api), this gets transferred to the\n`text` property in the SuggestionSearchContext. CompletionSuggestionContext\ncurrently requires prefix or regex to be specified, otherwise errors. We should\nuse the global `text` property as a fall back if provided in this case.\n\nCloses to #23340" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.suggest.completion;\n \n import org.apache.lucene.search.suggest.document.CompletionQuery;\n+import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.index.mapper.CompletionFieldMapper;\n import org.elasticsearch.index.query.QueryShardContext;\n@@ -77,15 +78,7 @@ CompletionQuery toQuery() {\n CompletionFieldMapper.CompletionFieldType fieldType = getFieldType();\n final CompletionQuery query;\n if (getPrefix() != null) {\n- if (fuzzyOptions != null) {\n- query = fieldType.fuzzyQuery(getPrefix().utf8ToString(),\n- Fuzziness.fromEdits(fuzzyOptions.getEditDistance()),\n- fuzzyOptions.getFuzzyPrefixLength(), fuzzyOptions.getFuzzyMinLength(),\n- fuzzyOptions.getMaxDeterminizedStates(), fuzzyOptions.isTranspositions(),\n- fuzzyOptions.isUnicodeAware());\n- } else {\n- query = fieldType.prefixQuery(getPrefix());\n- }\n+ query = createCompletionQuery(getPrefix(), fieldType);\n } else if (getRegex() != null) {\n if (fuzzyOptions != null) {\n throw new IllegalArgumentException(\"can not use 'fuzzy' options with 'regex\");\n@@ -95,8 +88,10 @@ CompletionQuery toQuery() {\n }\n query = fieldType.regexpQuery(getRegex(), regexOptions.getFlagsValue(),\n regexOptions.getMaxDeterminizedStates());\n+ } else if (getText() != null) {\n+ query = createCompletionQuery(getText(), fieldType);\n } else {\n- throw new IllegalArgumentException(\"'prefix' or 'regex' must be defined\");\n+ throw new IllegalArgumentException(\"'prefix/text' or 'regex' must be defined\");\n }\n if (fieldType.hasContextMappings()) {\n ContextMappings contextMappings = fieldType.getContextMappings();\n@@ -105,4 +100,18 @@ CompletionQuery toQuery() {\n return query;\n }\n \n+ private CompletionQuery createCompletionQuery(BytesRef prefix, CompletionFieldMapper.CompletionFieldType fieldType) {\n+ final CompletionQuery query;\n+ if (fuzzyOptions != null) {\n+ query = fieldType.fuzzyQuery(prefix.utf8ToString(),\n+ Fuzziness.fromEdits(fuzzyOptions.getEditDistance()),\n+ fuzzyOptions.getFuzzyPrefixLength(), fuzzyOptions.getFuzzyMinLength(),\n+ fuzzyOptions.getMaxDeterminizedStates(), fuzzyOptions.isTranspositions(),\n+ fuzzyOptions.isUnicodeAware());\n+ } else {\n+ query = fieldType.prefixQuery(prefix);\n+ }\n+ return query;\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionContext.java", "status": "modified" }, { "diff": "@@ -68,12 +68,10 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAllSuccessful;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHit;\n import static 
org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasScore;\n import static org.hamcrest.Matchers.contains;\n-import static org.hamcrest.Matchers.containsInAnyOrder;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n@@ -116,6 +114,36 @@ public void testPrefix() throws Exception {\n assertSuggestions(\"foo\", prefix, \"suggestion10\", \"suggestion9\", \"suggestion8\", \"suggestion7\", \"suggestion6\");\n }\n \n+ /**\n+ * test that suggestion works if prefix is either provided via {@link CompletionSuggestionBuilder#text(String)} or\n+ * {@link SuggestBuilder#setGlobalText(String)}\n+ */\n+ public void testTextAndGlobalText() throws Exception {\n+ final CompletionMappingBuilder mapping = new CompletionMappingBuilder();\n+ createIndexAndMapping(mapping);\n+ int numDocs = 10;\n+ List<IndexRequestBuilder> indexRequestBuilders = new ArrayList<>();\n+ for (int i = 1; i <= numDocs; i++) {\n+ indexRequestBuilders.add(client().prepareIndex(INDEX, TYPE, \"\" + i).setSource(jsonBuilder().startObject().startObject(FIELD)\n+ .field(\"input\", \"suggestion\" + i).field(\"weight\", i).endObject().endObject()));\n+ }\n+ indexRandom(true, indexRequestBuilders);\n+ CompletionSuggestionBuilder noText = SuggestBuilders.completionSuggestion(FIELD);\n+ SearchResponse searchResponse = client().prepareSearch(INDEX)\n+ .suggest(new SuggestBuilder().addSuggestion(\"foo\", noText).setGlobalText(\"sugg\")).execute().actionGet();\n+ assertSuggestions(searchResponse, \"foo\", \"suggestion10\", \"suggestion9\", \"suggestion8\", \"suggestion7\", \"suggestion6\");\n+\n+ CompletionSuggestionBuilder withText = SuggestBuilders.completionSuggestion(FIELD).text(\"sugg\");\n+ searchResponse = client().prepareSearch(INDEX)\n+ .suggest(new SuggestBuilder().addSuggestion(\"foo\", withText)).execute().actionGet();\n+ assertSuggestions(searchResponse, \"foo\", \"suggestion10\", \"suggestion9\", \"suggestion8\", \"suggestion7\", \"suggestion6\");\n+\n+ // test that suggestion text takes precedence over global text\n+ searchResponse = client().prepareSearch(INDEX)\n+ .suggest(new SuggestBuilder().addSuggestion(\"foo\", withText).setGlobalText(\"bogus\")).execute().actionGet();\n+ assertSuggestions(searchResponse, \"foo\", \"suggestion10\", \"suggestion9\", \"suggestion8\", \"suggestion7\", \"suggestion6\");\n+ }\n+\n public void testRegex() throws Exception {\n final CompletionMappingBuilder mapping = new CompletionMappingBuilder();\n createIndexAndMapping(mapping);\n@@ -217,7 +245,7 @@ public void testSuggestDocument() throws Exception {\n for (CompletionSuggestion.Entry.Option option : options) {\n assertThat(option.getText().toString(), equalTo(\"suggestion\" + id));\n assertSearchHit(option.getHit(), hasId(\"\" + id));\n- assertSearchHit(option.getHit(), hasScore(((float) id)));\n+ assertSearchHit(option.getHit(), hasScore((id)));\n assertNotNull(option.getHit().getSourceAsMap());\n id--;\n }\n@@ -252,7 +280,7 @@ public void testSuggestDocumentNoSource() throws Exception {\n for (CompletionSuggestion.Entry.Option option : options) {\n assertThat(option.getText().toString(), equalTo(\"suggestion\" + id));\n assertSearchHit(option.getHit(), hasId(\"\" + id));\n- assertSearchHit(option.getHit(), hasScore(((float) id)));\n+ assertSearchHit(option.getHit(), hasScore((id)));\n assertNull(option.getHit().getSourceAsMap());\n id--;\n }\n@@ -289,7 +317,7 @@ 
public void testSuggestDocumentSourceFiltering() throws Exception {\n for (CompletionSuggestion.Entry.Option option : options) {\n assertThat(option.getText().toString(), equalTo(\"suggestion\" + id));\n assertSearchHit(option.getHit(), hasId(\"\" + id));\n- assertSearchHit(option.getHit(), hasScore(((float) id)));\n+ assertSearchHit(option.getHit(), hasScore((id)));\n assertNotNull(option.getHit().getSourceAsMap());\n Set<String> sourceFields = option.getHit().getSourceAsMap().keySet();\n assertThat(sourceFields, contains(\"a\"));", "filename": "core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java", "status": "modified" } ] }
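As the added test also verifies, a `text` set on the individual suggestion still takes precedence over the global text. A brief illustration, using the same placeholder names as the sketch above:

```
// Suggestion-level text wins over the global text, per the last assertion in testTextAndGlobalText.
CompletionSuggestionBuilder withText = SuggestBuilders.completionSuggestion("suggest_field").text("sugg");
SearchResponse response = client.prepareSearch("index")
    .suggest(new SuggestBuilder().addSuggestion("foo", withText).setGlobalText("bogus"))
    .get();
// "sugg" is used as the prefix for "foo"; the global "bogus" is ignored for this suggestion.
```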
{ "body": "**Elasticsearch version**: 5.0.1\r\n\r\n**Plugins installed**: `repository-hdfs`\r\n\r\n**JVM version**:\r\n\r\n```\r\njava version \"1.8.0_92\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_92-b14)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)\r\n```\r\n\r\n**OS version**:\r\n\r\n```\r\nCentOS release 6.7 (Final)\r\nLinux version 2.6.32-573.26.1.el6.x86_64 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Wed May 4 00:57:44 UTC 2016\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen I create repositories, ES response \r\n\r\n```\r\n{\r\n \"acknowledged\": true\r\n}\r\n```\r\n\r\nbut when I create snapshot of index, it throws exception:\r\n\r\n```\r\n[2016-12-12T11:38:04,417][WARN ][r.suppressed ] path: /_snapshot/my_hdfs_repo/20161209-snapshot, params: {repository=my_hdfs_repo, snapshot=20161209-snapshot}\r\norg.elasticsearch.transport.RemoteTransportException: [node-2][10.90.6.234:9340][cluster:admin/snapshot/create]\r\nCaused by: org.elasticsearch.repositories.RepositoryException: [my_hdfs_repo] could not read repository data from index blob\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:751) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_92]\r\nCaused by: java.io.IOException: com.google.protobuf.ServiceException: java.security.AccessControlException: access denied (\"javax.security.auth.PrivateCredentialPermission\" \"org.apache.hadoop.security.Credentials\" \"read\")\r\n at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:580) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]\r\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]\r\n at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_92]\r\n at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]\r\n at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]\r\n at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_92]\r\n at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_92]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:849) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:818) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:721) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]\r\n at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]\r\nCaused by: 
org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: service_exception: java.security.AccessControlException: access denied (\"javax.security.auth.PrivateCredentialPermission\" \"org.apache.hadoop.security.Credentials\" \"read\")\r\n at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:243) ~[?:?]\r\n at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]\r\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]\r\n at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_92]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]\r\n at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]\r\n at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_92]\r\n at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_92]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:849) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:818) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:721) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at 
org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]\r\n at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]\r\nCaused by: java.lang.SecurityException: access denied (\"javax.security.auth.PrivateCredentialPermission\" \"org.apache.hadoop.security.Credentials\" \"read\")\r\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_92]\r\n at java.security.AccessControlContext.checkPermission2(AccessControlContext.java:538) ~[?:1.8.0_92]\r\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:481) ~[?:1.8.0_92]\r\n at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_92]\r\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_92]\r\n at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1414) ~[?:1.8.0_92]\r\n at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372) ~[?:1.8.0_92]\r\n at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767) ~[?:1.8.0_92]\r\n at org.apache.hadoop.security.UserGroupInformation.getCredentialsInternal(UserGroupInformation.java:1499) ~[?:?]\r\n at org.apache.hadoop.security.UserGroupInformation.getTokens(UserGroupInformation.java:1464) ~[?:?]\r\n at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:436) ~[?:?]\r\n at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519) ~[?:?]\r\n at org.apache.hadoop.ipc.Client.call(Client.java:1446) ~[?:?]\r\n at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[?:?]\r\n at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[?:?]\r\n at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]\r\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]\r\n at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_92]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]\r\n at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]\r\n at 
org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]\r\n at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_92]\r\n at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_92]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:849) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:818) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:721) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]\r\n at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]\r\n```\r\n\r\n**Steps to reproduce**:\r\n 1.create repositories\r\n\r\n```\r\nPUT /_snapshot/my_backup\r\n{\r\n\"type\": \"hdfs\",\r\n\"settings\": {\r\n \"path\": \"/path/on/hadoop\",\r\n \"uri\": \"hdfs://hadoop_cluster_domain:[port]\",\r\n \"conf_location\":\"/hadoop/hdfs-site.xml,/hadoop/core-site.xml\",\r\n \"user\":\"hadoop\"\r\n }\r\n}\r\n```\r\n\r\n 2.snapshot my index\r\n\r\n```\r\nPUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true\r\n```\r\n\r\n 3.exception is thrown\r\n", "comments": [ { "body": "@jbaiera could you take a look at this please?", "created_at": "2016-12-14T10:27:08Z" }, { "body": "Dear @jbaiera 
@clintongormley, have you fixed the bug? Or should I provide something more for you to solve it?", "created_at": "2016-12-16T07:04:57Z" }, { "body": "@ervinyang Could you provide some information about how you have HDFS set up? (distribution, version, security on/off) \r\n\r\nThanks!", "created_at": "2016-12-18T06:30:12Z" }, { "body": "@jbaiera\r\n* hdfs-version: 2.2.0\r\n* plugin-security.policy:\r\n```\r\n/*\r\n * Licensed to Elasticsearch under one or more contributor\r\n * license agreements. See the NOTICE file distributed with\r\n * this work for additional information regarding copyright\r\n * ownership. Elasticsearch licenses this file to you under\r\n * the Apache License, Version 2.0 (the \"License\"); you may\r\n * not use this file except in compliance with the License.\r\n * You may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing,\r\n * software distributed under the License is distributed on an\r\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\r\n * KIND, either express or implied. See the License for the\r\n * specific language governing permissions and limitations\r\n * under the License.\r\n */\r\n\r\ngrant {\r\n // Allow connecting to the internet anywhere\r\n permission java.net.SocketPermission \"*\", \"connect,resolve\";\r\n \r\n // Basic permissions needed for Lucene to work:\r\n permission java.util.PropertyPermission \"*\", \"read,write\";\r\n permission java.lang.reflect.ReflectPermission \"*\";\r\n permission java.lang.RuntimePermission \"*\";\r\n\r\n // These two *have* to be spelled out a separate\r\n permission java.lang.management.ManagementPermission \"control\";\r\n permission java.lang.management.ManagementPermission \"monitor\";\r\n\r\n // Solr needs those:\r\n permission java.net.NetPermission \"*\";\r\n permission java.sql.SQLPermission \"*\";\r\n permission java.util.logging.LoggingPermission \"control\";\r\n permission javax.management.MBeanPermission \"*\", \"*\";\r\n permission javax.management.MBeanServerPermission \"*\";\r\n permission javax.management.MBeanTrustPermission \"*\";\r\n permission javax.security.auth.AuthPermission \"*\";\r\n permission javax.security.auth.PrivateCredentialPermission \"org.apache.hadoop.security.Credentials * \\\"*\\\"\", \"read\";\r\n permission java.security.SecurityPermission \"putProviderProperty.SaslPlainServer\";\r\n permission java.security.SecurityPermission \"insertProvider.SaslPlainServer\";\r\n permission javax.xml.bind.JAXBPermission \"setDatatypeConverter\";\r\n \r\n // TIKA uses BouncyCastle and that registers new provider for PDF parsing + MSOffice parsing. Maybe report as bug!\r\n permission java.security.SecurityPermission \"putProviderProperty.BC\";\r\n permission java.security.SecurityPermission \"insertProvider.BC\";\r\n\r\n // Needed for some things in DNS caching in the JVM\r\n permission java.security.SecurityPermission \"getProperty.networkaddress.cache.ttl\";\r\n permission java.security.SecurityPermission \"getProperty.networkaddress.cache.negative.ttl\";\r\n\r\n // SSL related properties for Solr tests\r\n permission java.security.SecurityPermission \"getProperty.ssl.*\";\r\n};\r\n\r\n```\r\nThanks!", "created_at": "2016-12-18T09:30:00Z" }, { "body": "I'm facing the same problem. If I grant all permissions to the plugin it works. 
So I should happen because of missing grant permissions (for org.apache.hadoop.security.Credentials)?\r\n", "created_at": "2016-12-22T07:30:27Z" }, { "body": "@mrauter What do you mean by \"If I grant all permissions to the plugin\" ? Could you paste the plugin-security.policy file? We are hitting this too", "created_at": "2016-12-26T11:03:32Z" }, { "body": "@tangfl \r\npermission java.security.AllPermission;", "created_at": "2016-12-27T09:42:19Z" }, { "body": "Hi everyone,\r\n\r\nSame problem : \r\n- It works on my VM prototype (UBUNTU 16.04/elasticsearch 5.0.2 from zip) with 2 nodes and a repository on a Hubic file system (`sudo hubicfuse /mnt/hubic -o noauto_cache,sync_read,allow_other,uid=XXX,gid=XXX,nonempty`).\r\n- But with my VPS system (UBUNTU 16.04 2 nodes from elastic packets), it's a drama... The fisrt snapshot began : i can see it in `/mnt/hubic/...` but when it ends, it's impossible to consult list of snapshots, neither do a new snapshot.\r\n\r\n**Curl**\r\n`curl -XPUT 'http://XXX.XXX.XXX.XXX:9200/_snapshot/sauvegarde/all?pretty'`\r\n\r\n**Answer :** \r\n`{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"repository_exception\",\r\n \"reason\" : \"[sauvegarde] could not read repository data from index blob\"\r\n }\r\n ],\r\n \"type\" : \"repository_exception\",\r\n \"reason\" : \"[sauvegarde] could not read repository data from index blob\",\r\n \"caused_by\" : {\r\n \"type\" : \"i_o_exception\",\r\n \"reason\" : \"Repérage non permis\"\r\n }\r\n },\r\n \"status\" : 500\r\n}`\r\n\r\n**Log :**\r\n`[2016-12-28T11:30:50,215][WARN ][r.suppressed ] path: /_snapshot/sauvegarde/all, params: {pretty=, repository=sauvegarde, snapshot=all}\r\norg.elasticsearch.transport.RemoteTransportException: [XX-XXXXXXX][XXX.XXX.XXX.XXX:9300][cluster:admin/snapshot/create]\r\nCaused by: org.elasticsearch.repositories.RepositoryException: [sauvegarde] could not read repository data from index blob\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:751) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_111]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_111]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]\r\nCaused by: java.io.IOException: Repérage non permis\r\n at sun.nio.ch.FileChannelImpl.position0(Native 
Method) ~[?:?]\r\n at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:263) ~[?:?]\r\n at sun.nio.ch.ChannelInputStream.available(ChannelInputStream.java:116) ~[?:?]\r\n at java.io.BufferedInputStream.read(BufferedInputStream.java:353) ~[?:1.8.0_111]\r\n at java.io.FilterInputStream.read(FilterInputStream.java:107) ~[?:1.8.0_111]\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:76) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:57) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:737) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_111]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_111]\r\n at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_111]`\r\n\r\nI don't know how to produce the plugin-security.policy extract.", "created_at": "2016-12-28T11:13:49Z" }, { "body": "@mrauter set \" permission java.security.AllPermission ;\" is also throw exception.", "created_at": "2017-01-04T06:37:02Z" }, { "body": "Hi everyone,\r\nI've managed to reproduce the same error when trying to create snapshot on hdfs from elasticsearch.\r\nTried with ES-5.1.1 and repository-hdfs installed through elasticsearch-plugin on centos7.\r\nOpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode) \r\nIt worked the first time and I was able to create a first snapshot.\r\nOnce done, I couldn't access to it or create any other new snapshot and error logs where just the same all time.\r\n\r\nCaused by: java.security.AccessControlException: access denied (\"javax.security.auth.PrivateCredentialPermission\" \"org.apache.hadoop.security.Credentials\" \"read\")\r\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_111]\r\n\r\nI tried to load setting ALL permissions on java policy, but it's like if it doesn't read the config or just ignores it.\r\n\r\nIf you need anymore info or tests, I'm happy to help.\r\nRegards", "created_at": "2017-02-22T10:54:59Z" }, { "body": "same problem", "created_at": "2017-02-28T02:58:18Z" }, { "body": "so bad", "created_at": "2017-03-01T05:52:37Z" }, { "body": "@netmanito Can you please paste the entire stack trace you see in the logs?", "created_at": "2017-03-01T07:09:08Z" }, { "body": 
"Hi, for the following GET request,` GET _snapshot/hdfs_repository/syslog_test` , I get the following message:\r\n\r\n> [2017-03-01T08:23:44,286][WARN ][r.suppressed ] path: /_snapshot/hdfs_repository/syslog_test, params: {repository=hdfs_repository, snapshot=syslog_test}\r\norg.elasticsearch.repositories.RepositoryException: [hdfs_repository] could not read repository data from index blob\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:796) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:142) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:91) [elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:50) [elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:87) [elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:167) [elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.1.jar:5.2.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]\r\nCaused by: java.io.IOException: com.google.protobuf.ServiceException: java.security.AccessControlException: access denied (\"javax.security.auth.PrivateCredentialPermission\" \"org.apache.hadoop.security.Credentials\" \"read\")\r\n at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:580) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]\r\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]\r\n at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]\r\n at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]\r\n at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]\r\n at 
org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_111]\r\n at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_111]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:917) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:900) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:753) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n ... 10 more\r\nCaused by: com.google.protobuf.ServiceException: java.security.AccessControlException: access denied (\"javax.security.auth.PrivateCredentialPermission\" \"org.apache.hadoop.security.Credentials\" \"read\")\r\n at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:243) ~[?:?]\r\n at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]\r\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]\r\n at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]\r\n at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]\r\n at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_111]\r\n 
at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_111]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:917) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:900) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:753) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n ... 10 more\r\nCaused by: java.security.AccessControlException: access denied (\"javax.security.auth.PrivateCredentialPermission\" \"org.apache.hadoop.security.Credentials\" \"read\")\r\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_111]\r\n at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_111]\r\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_111]\r\n at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1414) ~[?:1.8.0_111]\r\n at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372) ~[?:1.8.0_111]\r\n at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767) ~[?:1.8.0_111]\r\n at org.apache.hadoop.security.UserGroupInformation.getCredentialsInternal(UserGroupInformation.java:1499) ~[?:?]\r\n at org.apache.hadoop.security.UserGroupInformation.getTokens(UserGroupInformation.java:1464) ~[?:?]\r\n at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:436) ~[?:?]\r\n at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519) ~[?:?]\r\n at org.apache.hadoop.ipc.Client.call(Client.java:1446) ~[?:?]\r\n at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[?:?]\r\n at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[?:?]\r\n at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]\r\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]\r\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]\r\n at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]\r\n at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]\r\n at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) 
~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_111]\r\n at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_111]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:917) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:900) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:753) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n ... 10 more \r\n\r\nAlso, If I restart any node, there's a connection error on start although connectivity is correct.\r\nHere's the pastebin link http://pastebin.com/GW8TDymK\r\n\r\nRegards\r\n", "created_at": "2017-03-01T08:38:44Z" }, { "body": "I resove this problem by modify plugin source\r\n\r\nplugin-security.policy\r\n```\r\ngrant {\r\n // Hadoop UserGroupInformation, HdfsConstants, PipelineAck clinit\r\n permission java.lang.RuntimePermission \"getClassLoader\";\r\n\r\n // UserGroupInformation (UGI) Metrics clinit\r\n permission java.lang.RuntimePermission \"accessDeclaredMembers\";\r\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\r\n\r\n // org.apache.hadoop.util.StringUtils clinit\r\n permission java.util.PropertyPermission \"*\", \"read,write\";\r\n\r\n // org.apache.hadoop.util.ShutdownHookManager clinit\r\n permission java.lang.RuntimePermission \"shutdownHooks\";\r\n\r\n // JAAS is used always, we use a fake subject, hurts nobody\r\n permission javax.security.auth.AuthPermission \"getSubject\";\r\n permission javax.security.auth.AuthPermission \"doAs\";\r\n permission javax.security.auth.AuthPermission \"modifyPrivateCredentials\";\r\n permission java.lang.RuntimePermission \"accessDeclaredMembers\";\r\n permission java.lang.RuntimePermission \"getClassLoader\";\r\n permission java.lang.RuntimePermission \"shutdownHooks\";\r\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\r\n permission javax.security.auth.AuthPermission \"doAs\";\r\n permission javax.security.auth.AuthPermission \"getSubject\";\r\n permission javax.security.auth.AuthPermission \"modifyPrivateCredentials\";\r\n permission java.util.PropertyPermission \"*\", \"read,write\";\r\n permission javax.security.auth.PrivateCredentialPermission \"org.apache.hadoop.security.Credentials * \\\"*\\\"\", \"read\";\r\n};\r\n```\r\n\r\nHdfsBlobStore.java remove\r\nnew ReflectPermission(\"suppressAccessChecks\"),\r\n new AuthPermission(\"modifyPrivateCredentials\"), new SocketPermission(\"*\", \"connect\")\r\n```\r\n <V> V execute(Operation<V> operation) throws IOException {\r\n SecurityManager sm = System.getSecurityManager();\r\n if (sm != null) {\r\n // unprivileged code such as scripts do not have SpecialPermission\r\n sm.checkPermission(new 
SpecialPermission());\r\n }\r\n if (closed) {\r\n throw new AlreadyClosedException(\"HdfsBlobStore is closed: \" + this);\r\n }\r\n try {\r\n return AccessController.doPrivileged(new PrivilegedExceptionAction<V>() {\r\n @Override\r\n public V run() throws IOException {\r\n return operation.run(fileContext);\r\n }\r\n });\r\n } catch (PrivilegedActionException pae) {\r\n throw (IOException) pae.getException();\r\n }\r\n }\r\n```", "created_at": "2017-03-12T00:57:59Z" }, { "body": "I have solved it by add a Java Security Manager settings in jvm.options\r\nmodify \"plugin-security.policy\":\r\n```\r\ngrant {\r\n // Hadoop UserGroupInformation, HdfsConstants, PipelineAck clinit\r\n permission java.lang.RuntimePermission \"getClassLoader\";\r\n\r\n // UserGroupInformation (UGI) Metrics clinit\r\n permission java.lang.RuntimePermission \"accessDeclaredMembers\";\r\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\r\n\r\n // org.apache.hadoop.util.StringUtils clinit\r\n permission java.util.PropertyPermission \"*\", \"read,write\";\r\n\r\n // org.apache.hadoop.util.ShutdownHookManager clinit\r\n permission java.lang.RuntimePermission \"shutdownHooks\";\r\n\r\n // JAAS is used always, we use a fake subject, hurts nobody\r\n permission javax.security.auth.AuthPermission \"getSubject\";\r\n permission javax.security.auth.AuthPermission \"doAs\";\r\n permission javax.security.auth.AuthPermission \"modifyPrivateCredentials\";\r\n permission java.lang.RuntimePermission \"accessDeclaredMembers\";\r\n permission java.lang.RuntimePermission \"getClassLoader\";\r\n permission java.lang.RuntimePermission \"shutdownHooks\";\r\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\r\n permission javax.security.auth.AuthPermission \"doAs\";\r\n permission javax.security.auth.AuthPermission \"getSubject\";\r\n permission javax.security.auth.AuthPermission \"modifyPrivateCredentials\";\r\n permission java.security.AllPermission;\r\n permission java.util.PropertyPermission \"*\", \"read,write\";\r\n permission javax.security.auth.PrivateCredentialPermission \"org.apache.hadoop.security.Credentials * \\\"*\\\"\", \"read\";\r\n};\r\n```\r\nMy policy file path is `data/soft/elasticsearch-5.0.1/plugins/repository-hdfs/plugin-security.policy`\r\nso I add `-Djava.security.policy=file:///data/soft/elasticsearch-5.0.1/plugins/repository-hdfs/plugin-security.policy` in \"/data/soft/elasticsearch-5.0.1/config/jvm.options\"\r\nand then restart the elasticsearch,and run a command \r\n`curl -XPUT http://localhost:9200/_snapshot/my_hdfs_repository/snapshot_1?wait_for_completion=true`\r\nthe result:\r\n`{\"snapshot\":{\"snapshot\":\"snapshot_1\",\"uuid\":\"SprY4aHXTE6crhi5duJGAQ\",\"version_id\":5000199,\"version\":\"5.0.1\",\"indices\":[\"ttst\",\"test\"],\"state\":\"SUCCESS\",\"start_time\":\"2017-03-16T07:23:54.568Z\",\"start_time_in_millis\":1489649034568,\"end_time\":\"2017-03-16T07:24:03.961Z\",\"end_time_in_millis\":1489649043961,\"duration_in_millis\":9393,\"failures\":[],\"shards\":{\"total\":10,\"failed\":0,\"successful\":10}}}`\r\n it done !", "created_at": "2017-03-16T08:30:16Z" }, { "body": "@YDHui You have included `permission java.security.AllPermission;` which is a security issue (it grants everything) and your other permissions are redundant.", "created_at": "2017-03-16T15:23:30Z" }, { "body": "Any update on this? I got the same problem.", "created_at": "2017-04-06T21:03:07Z" }, { "body": "There's an open PR for it: #23439. 
This is not a simple issue.", "created_at": "2017-04-06T21:08:16Z" }, { "body": "Not sure if it's helpful at this point, but if you need an easy way to reproduce this problem, I ran into this right away with an out-of-the-box [hadoop docker image](https://hub.docker.com/r/sequenceiq/hadoop-docker/) - in fact, all I was looking to do was to give the HDFS plugin a quick test drive.", "created_at": "2017-04-09T19:17:51Z" }, { "body": "Now that Elasticsearch v5.4.0 is out, #23439 does not seem to help.\r\nHow to reproduce:\r\n1. ES 5.4.0 installed\r\n2. repository-hdfs-5.4.0.zip installed\r\n3. PUT /_snapshot/my_hdfs_repository (with necessary payload)\r\n4. POST /_snapshot/my_hdfs_repository/_verify still throws the same exception", "created_at": "2017-05-09T22:54:09Z" }, { "body": "@MrGarry2016 this is fixed in 5.4.1", "created_at": "2017-05-10T10:17:53Z" }, { "body": "@clintongormley We want to install version 5.4.1 of the plugin. How can we install that specific version of the HDFS plugin on a running 5.4.0 ES cluster?", "created_at": "2017-05-15T18:59:19Z" }, { "body": "You have to wait until it is released.", "created_at": "2017-05-15T19:11:03Z" }, { "body": "@adkhare Also, you simply can't install version 5.4.1 of the plugin (when it is released) on a 5.4.0 node.", "created_at": "2017-05-15T19:39:44Z" }, { "body": "Is there a way to track when this is released? If I subscribe to this thread, would that be sufficient?", "created_at": "2017-05-16T14:00:29Z" }, { "body": "> Is there a way to track when this is released? If I subscribe to this thread, would that be sufficient?\r\n\r\n@326TimesBetter Yes; although subscribing to this thread is not sufficient, you can [track releases on the Elastic website](https://www.elastic.co/blog/category/releases).", "created_at": "2017-05-16T14:57:17Z" }, { "body": "@YDHui I used your solution and it worked, except that I didn't have to set \"permission java.security.AllPermission;\" in the plugin-security.policy, thereby not compromising the entire security definitions. Thanks.\r\n\r\nBy the way, my system configuration is:\r\nOS centos7.3.1\r\nDocker 17.05.0-ce\r\nES 5.4.1\r\nhadoop/hdfs 2.8\r\n\r\nN.B. I wonder why the plugin-security.policy file was not detected by default; the JAVA_OPTS entry in the jvm.options file did the trick,\r\n\r\ni.e. the line\r\n-Djava.security.policy=file:///path/to/plugins/repository-hdfs/plugin-security.policy", "created_at": "2017-06-12T09:32:18Z" } ], "number": 22156, "title": "ES-v5.0.1 throw java.lang.SecurityException while snapshot" }
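The thread above is essentially about running HDFS client calls under the Java SecurityManager with only a narrow set of elevated permissions instead of `AllPermission`. The following is a rough, self-contained sketch of that pattern, modeled on the partial `execute` method quoted above and on the restricted-permission approach taken in #23439; the class and interface names are illustrative, not the plugin's actual API:

```java
import java.io.IOException;
import java.lang.reflect.ReflectPermission;
import java.net.SocketPermission;
import java.security.AccessController;
import java.security.Permission;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;
import javax.security.auth.AuthPermission;

final class PrivilegedHdfsCalls {

    // The only elevated permissions the wrapped operation may use; everything else is denied.
    private static final Permission[] RESTRICTED_PERMISSIONS = new Permission[] {
        new SocketPermission("*", "connect"),            // talk to the NameNode / DataNodes
        new ReflectPermission("suppressAccessChecks"),   // Hadoop's dynamic proxies
        new AuthPermission("modifyPrivateCredentials")   // Hadoop adds tokens to the Subject
    };

    interface Operation<V> {
        V run() throws IOException;
    }

    // Runs the operation with only the permissions listed above.
    static <V> V execute(Operation<V> operation) throws IOException {
        try {
            return AccessController.doPrivileged(
                (PrivilegedExceptionAction<V>) operation::run, null, RESTRICTED_PERMISSIONS);
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getException();
        }
    }
}
```

Passing an explicit `Permission[]` to `doPrivileged` means the wrapped operation cannot exercise any grant beyond that list, so a missing grant (such as the `PrivateCredentialPermission` from the stack trace above) surfaces as an `AccessControlException`.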
{ "body": "This PR is meant to address the permission errors that are encountered in the HDFS Repository Plugin as described in https://github.com/elastic/elasticsearch/issues/22156.\r\n\r\nWhen Hadoop security is enabled, the HDFS client requests the current logged in Subject for a hadoop based Credentials object, which trips a missing permission in the plugin's policy file. This is not caught during testing since we neither use the actual HDFS client code nor do we execute with Kerberos security enabled.\r\n\r\nI'm working on testing this on a local environment at the moment since it requires a secured HDFS service to activate the code path. My main concern is that there may be other permissions that have not yet had the chance to trip up the plugin because they have not yet been reached in the code.\r\n\r\nCloses #22156", "number": 23439, "review_comments": [ { "body": "I think that the comment on this method is no longer accurate. Can you adjust it?", "created_at": "2017-03-22T14:17:31Z" }, { "body": "Nit: please fix the indentation here", "created_at": "2017-03-22T14:26:20Z" }, { "body": "I appreciate the thoroughness of this comment.", "created_at": "2017-03-22T14:27:56Z" }, { "body": "Why are we using the old Log4j API here? I understand that Hadoop needs it, but we can use the new Log4j API here, and our Log4j API to acquire a logger, no?", "created_at": "2017-03-22T14:31:46Z" }, { "body": "Can this whole block be replaced by:\r\n\r\n```diff\r\ndiff --git a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java\r\nindex 95619b1d40..d65686e356 100644\r\n--- a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java\r\n+++ b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java\r\n@@ -122,11 +122,7 @@ public final class HdfsRepository extends BlobStoreRepository {\r\n UserGroupInformation.setConfiguration(cfg);\r\n \r\n // Debugging\r\n- if (UserGroupInformation.isSecurityEnabled()) {\r\n- LOGGER.info(\"Hadoop Security Is [ENABLED]\");\r\n- } else {\r\n- LOGGER.info(\"Hadoop Security is [DISABLED]\");\r\n- }\r\n+ LOGGER.info(\"Hadoop security enabled: [{}]\", UserGroupInformation.isSecurityEnabled());\r\n UserGroupInformation.AuthenticationMethod method = SecurityUtil.getAuthenticationMethod(cfg);\r\n LOGGER.info(\"Using Hadoop authentication method : [\" + method + \"]\");\r\n ```\r\n\r\n", "created_at": "2017-03-22T14:33:51Z" }, { "body": "If this is debugging, why we are logging at the info level?", "created_at": "2017-03-22T14:34:01Z" }, { "body": "Is this better on the debug level?", "created_at": "2017-03-22T14:34:14Z" }, { "body": "This will make the logger usage checker unhappy, can you rewrite this to be parameterized?", "created_at": "2017-03-22T14:52:10Z" }, { "body": "I pushed \td973296", "created_at": "2017-03-22T19:06:44Z" }, { "body": "Good catch. Should be fixed in 8845f2d.", "created_at": "2017-03-22T19:07:35Z" }, { "body": "I think that this should be in `config/repository-hdfs/krb5.keytab` rather than in a separate directory. This is the convention for plugin-specific configuration files.", "created_at": "2017-04-06T02:38:03Z" }, { "body": "Is this documentation inaccurate now? 
I think that it should mention the keytab?", "created_at": "2017-04-07T12:57:14Z" }, { "body": "Can we leave a comment here explaining why this should not block as it's a bit indirect (Log4j does a lookup, it's cached by the JVM, default TTL with a security manager is infinite, etc.)?", "created_at": "2017-04-07T13:17:44Z" }, { "body": "Nit: `findHostName` -> `getHostName`", "created_at": "2017-04-07T13:18:31Z" }, { "body": "Nit: `it's` -> `its`", "created_at": "2017-04-07T13:19:07Z" }, { "body": "I think that this can be package-private.", "created_at": "2017-04-07T13:28:54Z" }, { "body": "This drops the original exception on the floor. I wonder if we need to swallow and wrap here at all?", "created_at": "2017-04-07T13:30:39Z" }, { "body": "This can be package-private.", "created_at": "2017-04-07T13:33:08Z" }, { "body": "I think that this class can be package private?", "created_at": "2017-04-07T13:33:24Z" }, { "body": "I do not see where this method is used?", "created_at": "2017-04-07T13:33:58Z" }, { "body": "Can it be package-private?", "created_at": "2017-04-07T13:34:04Z" }, { "body": "Can it be package private?", "created_at": "2017-04-07T13:34:10Z" }, { "body": "The reason should be more explicit about why this needed.", "created_at": "2017-04-07T13:35:15Z" }, { "body": "`UncheckedIOException`?", "created_at": "2017-04-07T13:35:27Z" }, { "body": "`UncheckedIOException`?", "created_at": "2017-04-07T13:35:44Z" }, { "body": "`UncheckedIOException`?", "created_at": "2017-04-07T13:36:20Z" }, { "body": "Include the path in the exception message? I question whether the logging statement is needed given that the exception message will effectively say the same thing? Can you throw this as a FileNotFoundException from here and let it bubble all the way up to a higher layer for wrapping?", "created_at": "2017-04-07T13:39:02Z" }, { "body": "Commenting here since the line that I want to comment on is not changed here. I see that `doStart` catches an `IOException` and wraps it in an `ElasticsearchGenerationException`. That seems odd to me, I think this should be either `UncheckedIOException` or `RuntimeException`.", "created_at": "2017-04-07T13:40:26Z" }, { "body": "Nit: `permissions.length-1` -> `permissions.length - 1`", "created_at": "2017-04-07T13:40:53Z" }, { "body": "The docs still need some work at this point. None of this is accurate any longer. I'll address this soon.", "created_at": "2017-04-07T14:25:07Z" } ], "title": "Fixing permission errors for `KERBEROS` security mode for HDFS Repository" }
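Several review comments above boil down to the same suggestion: rethrow an `IOException` as `UncheckedIOException` rather than dropping it or wrapping it in an unrelated type such as `ElasticsearchGenerationException`. A minimal illustration of that idea (the method, path handling, and message are hypothetical, not code from this PR):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

final class ConfigReader {

    // Wrap the checked IOException in UncheckedIOException so the original cause,
    // message, and stack trace survive all the way up to the caller.
    static String readConfig(Path path) {
        try {
            return new String(Files.readAllBytes(path), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException("Could not read config at [" + path + "]", e);
        }
    }
}
```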
{ "commits": [ { "message": "Adding missing private credential permission" }, { "message": "Adding permissions to support `SIMPLE` Hadoop login" }, { "message": "Reworking how we obtain a Subject in the plugin.\n\nHadoop jumps through a lot of hoops to make sure all the principals are available on a Subject that it might expect based on the Hadoop configuration. Weird things happen when you slap a User principal on a non authenticated Subject when Kerberos auth is specified." }, { "message": "Adding permissions for Kerberos authentication code" }, { "message": "Adding permission for accessing a Subject's Kerberos information." }, { "message": "Adding `doAs` permission to the restricted security context.\n\nThe HDFS Client requires the ability to operate as the currently logged in user to create the SaslRpcClient wrapper around the client object." }, { "message": "Added some troubleshooting code to help with ServiceLoader issues.\n\nWhen the RPC client tries to communicate with a secured server during the SASL negotiation, the client receives a list of acceptable authentication methods. The client checks to see if the protocol buffer it is using can accept any of the authentication methods, and to do this, it must load a service from the current thread's context classloader to perform the checks. This is an issue, because the plugins are not executing with context class loaders on their threads that include the plugin jars." }, { "message": "Fixing issues with SecurityUtil and its ServiceLoaders\n\nSecurityUtil is used all over the place in Hadoop, and as soon as the class is even loaded it expects to find an installed context class loader that contains all the SecurityInfo service definitions it requires. The plugins in Elasticsearch run on a separate classloader than the system one, and their classloaders are not installed as context loaders in the threads that they execute in. This means that the moment you load this class, or any Hadoop classes that might use this class in their static code (and there's a lot of them that do), then the class loads, creates a ServiceLoader that has nothing in it, stores it in a static field, and from then on there's no way for the client to validate if it's current protocol supports Kerberos. This breaks the SASL negotiation between client and server, thus breaking security.\nTo fix this, we simply hack in the context class loader for our plugin into the thread, load the class eagerly, and then restore the old loader instance in a finally block." }, { "message": "Adding more permissions for Kerberos" }, { "message": "Cleaning up permissions and adding comments for where they're used." }, { "message": "Moved the SecurityUtil static init to the earliest place that it can be called." }, { "message": "Adding permissions for Windows specific code." }, { "message": "Updating comment on blob store execute permissions method." }, { "message": "Fixing indent" }, { "message": "Fixing the logging in the repository." }, { "message": "Adding docs for security support" }, { "message": "Adding principal and keytab authentication and validation code" }, { "message": "Variable name changes, flattening some logging statements, adding some debug lines.\nFixed a logic error in the login validation code.\nRemoved a suppress-forbidden annotation as I don't think it applies any longer." }, { "message": "Don't need to accept Kerberos requests here, only initiate them." 
}, { "message": "Re-grouped the permissions and added comments so that it's easier to see what code needs which permissions." }, { "message": "Adding file permission for keytabs" }, { "message": "Support server principals (common principal format)\n\nRevert back to property for keytab location." }, { "message": "Adding logging information for the retrieved TGT that a user gets on login." }, { "message": "Added a TicketEnforcer to force relogin for purposes of initial testing" }, { "message": "Created logic for picking which elevated permissions are needed for blob store execute code section and moved it to its own class.\nRestricting kerberos initiation logic in the execute method to use the user specified principal only.\nRemoved the TicketEnforcer as it is no longer needed." }, { "message": "Lets not output so much info about the keytab file to any requester who might come along." }, { "message": "Hadoop already logs this in a more complete manner. We don't need to." }, { "message": "Clean up" }, { "message": "Keytab authentication using keytab installed in configuration directory" }, { "message": "Change keytab location in conf" } ], "files": [ { "diff": "@@ -68,3 +68,113 @@ The following settings are supported:\n \n Override the chunk size. (Disabled by default)\n \n+`security.principal`::\n+\n+ Kerberos principal to use when connecting to a secured HDFS cluster.\n+ If you are using a service principal for your elasticsearch node, you may\n+ use the `_HOST` pattern in the principal name and the plugin will replace\n+ the pattern with the hostname of the node at runtime (see\n+ link:repository-hdfs-security-runtime[Creating the Secure Repository]).\n+\n+[[repository-hdfs-security]]\n+==== Hadoop Security\n+\n+The HDFS Repository Plugin integrates seamlessly with Hadoop's authentication model. The following authentication\n+methods are supported by the plugin:\n+\n+[horizontal]\n+`simple`::\n+\n+ Also means \"no security\" and is enabled by default. Uses information from underlying operating system account\n+ running elasticsearch to inform Hadoop of the name of the current user. Hadoop makes no attempts to verify this\n+ information.\n+\n+`kerberos`::\n+\n+ Authenticates to Hadoop through the usage of a Kerberos principal and keytab. Interfacing with HDFS clusters\n+ secured with Kerberos requires a few additional steps to enable (See <<repository-hdfs-security-keytabs>> and\n+ <<repository-hdfs-security-runtime>> for more info)\n+\n+[[repository-hdfs-security-keytabs]]\n+[float]\n+===== Principals and Keytabs\n+Before attempting to connect to a secured HDFS cluster, provision the Kerberos principals and keytabs that the\n+Elasticsearch nodes will use for authenticating to Kerberos. For maximum security and to avoid tripping up the Kerberos\n+replay protection, you should create a service principal per node, following the pattern of\n+`elasticsearch/hostname@REALM`.\n+\n+WARNING: In some cases, if the same principal is authenticating from multiple clients at once, services may reject\n+authentication for those principals under the assumption that they could be replay attacks. 
If you are running the\n+plugin in production with multiple nodes you should be using a unique service principal for each node.\n+\n+On each Elasticsearch node, place the appropriate keytab file in the node's configuration location under the\n+`repository-hdfs` directory using the name `krb5.keytab`:\n+\n+[source, bash]\n+----\n+$> cd elasticsearch/config\n+$> ls\n+elasticsearch.yml jvm.options log4j2.properties repository-hdfs/ scripts/\n+$> cd repository-hdfs\n+$> ls\n+krb5.keytab\n+----\n+// TEST[skip:this is for demonstration purposes only\n+\n+NOTE: Make sure you have the correct keytabs! If you are using a service principal per node (like\n+`elasticsearch/hostname@REALM`) then each node will need its own unique keytab file for the principal assigned to that\n+host!\n+\n+// Setup at runtime (principal name)\n+[[repository-hdfs-security-runtime]]\n+[float]\n+===== Creating the Secure Repository\n+Once your keytab files are in place and your cluster is started, creating a secured HDFS repository is simple. Just\n+add the name of the principal that you will be authenticating as in the repository settings under the\n+`security.principal` option:\n+\n+[source,js]\n+----\n+PUT _snapshot/my_hdfs_repository\n+{\n+ \"type\": \"hdfs\",\n+ \"settings\": {\n+ \"uri\": \"hdfs://namenode:8020/\",\n+ \"path\": \"/user/elasticsearch/respositories/my_hdfs_repository\",\n+ \"security.principal\": \"elasticsearch@REALM\"\n+ }\n+}\n+----\n+// CONSOLE\n+// TEST[skip:we don't have hdfs set up while testing this]\n+\n+If you are using different service principals for each node, you can use the `_HOST` pattern in your principal\n+name. Elasticsearch will automatically replace the pattern with the hostname of the node at runtime:\n+\n+[source,js]\n+----\n+PUT _snapshot/my_hdfs_repository\n+{\n+ \"type\": \"hdfs\",\n+ \"settings\": {\n+ \"uri\": \"hdfs://namenode:8020/\",\n+ \"path\": \"/user/elasticsearch/respositories/my_hdfs_repository\",\n+ \"security.principal\": \"elasticsearch/_HOST@REALM\"\n+ }\n+}\n+----\n+// CONSOLE\n+// TEST[skip:we don't have hdfs set up while testing this]\n+\n+[[repository-hdfs-security-authorization]]\n+[float]\n+===== Authorization\n+Once Elasticsearch is connected and authenticated to HDFS, HDFS will infer a username to use for\n+authorizing file access for the client. By default, it picks this username from the primary part of\n+the kerberos principal used to authenticate to the service. For example, in the case of a principal\n+like `elasticsearch@REALM` or `elasticsearch/hostname@REALM` then the username that HDFS\n+extracts for file access checks will be `elasticsearch`.\n+\n+NOTE: The repository plugin makes no assumptions of what Elasticsearch's principal name is. The main fragment of the\n+Kerberos principal is not required to be `elasticsearch`. 
If you have a principal or service name that works better\n+for you or your organization then feel free to use it instead!\n\\ No newline at end of file", "filename": "docs/plugins/repository-hdfs.asciidoc", "status": "modified" }, { "diff": "@@ -29,23 +29,21 @@\n import org.elasticsearch.common.blobstore.BlobStore;\n \n import java.io.IOException;\n-import java.lang.reflect.ReflectPermission;\n-import java.net.SocketPermission;\n import java.security.AccessController;\n import java.security.PrivilegedActionException;\n import java.security.PrivilegedExceptionAction;\n \n-import javax.security.auth.AuthPermission;\n-\n final class HdfsBlobStore implements BlobStore {\n \n private final Path root;\n private final FileContext fileContext;\n+ private final HdfsSecurityContext securityContext;\n private final int bufferSize;\n private volatile boolean closed;\n \n HdfsBlobStore(FileContext fileContext, String path, int bufferSize) throws IOException {\n this.fileContext = fileContext;\n+ this.securityContext = new HdfsSecurityContext(fileContext.getUgi());\n this.bufferSize = bufferSize;\n this.root = execute(fileContext1 -> fileContext1.makeQualified(new Path(path)));\n try {\n@@ -107,18 +105,19 @@ interface Operation<V> {\n /**\n * Executes the provided operation against this store\n */\n- // we can do FS ops with only two elevated permissions:\n- // 1) hadoop dynamic proxy is messy with access rules\n- // 2) allow hadoop to add credentials to our Subject\n <V> V execute(Operation<V> operation) throws IOException {\n SpecialPermission.check();\n if (closed) {\n throw new AlreadyClosedException(\"HdfsBlobStore is closed: \" + this);\n }\n try {\n return AccessController.doPrivileged((PrivilegedExceptionAction<V>)\n- () -> operation.run(fileContext), null, new ReflectPermission(\"suppressAccessChecks\"),\n- new AuthPermission(\"modifyPrivateCredentials\"), new SocketPermission(\"*\", \"connect\"));\n+ () -> {\n+ securityContext.ensureLogin();\n+ return operation.run(fileContext);\n+ },\n+ null,\n+ securityContext.getRestrictedExecutionPermissions());\n } catch (PrivilegedActionException pae) {\n throw (IOException) pae.getException();\n }", "filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java", "status": "modified" }, { "diff": "@@ -26,6 +26,9 @@\n import java.util.Collections;\n import java.util.Map;\n \n+import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB;\n+import org.apache.hadoop.security.KerberosInfo;\n+import org.apache.hadoop.security.SecurityUtil;\n import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n@@ -40,6 +43,7 @@ public final class HdfsPlugin extends Plugin implements RepositoryPlugin {\n static {\n SpecialPermission.check();\n AccessController.doPrivileged((PrivilegedAction<Void>) HdfsPlugin::evilHadoopInit);\n+ AccessController.doPrivileged((PrivilegedAction<Void>) HdfsPlugin::eagerInit);\n }\n \n @SuppressForbidden(reason = \"Needs a security hack for hadoop on windows, until HADOOP-XXXX is fixed\")\n@@ -79,6 +83,34 @@ private static Void evilHadoopInit() {\n return null;\n }\n \n+ private static Void eagerInit() {\n+ /*\n+ * Hadoop RPC wire serialization uses ProtocolBuffers. 
All proto classes for Hadoop\n+ * come annotated with configurations that denote information about if they support\n+ * certain security options like Kerberos, and how to send information with the\n+ * message to support that authentication method. SecurityUtil creates a service loader\n+ * in a static field during its clinit. This loader provides the implementations that\n+ * pull the security information for each proto class. The service loader sources its\n+ * services from the current thread's context class loader, which must contain the Hadoop\n+ * jars. Since plugins don't execute with their class loaders installed as the thread's\n+ * context class loader, we need to install the loader briefly, allow the util to be\n+ * initialized, then restore the old loader since we don't actually own this thread.\n+ */\n+ ClassLoader oldCCL = Thread.currentThread().getContextClassLoader();\n+ try {\n+ Thread.currentThread().setContextClassLoader(HdfsRepository.class.getClassLoader());\n+ KerberosInfo info = SecurityUtil.getKerberosInfo(ClientNamenodeProtocolPB.class, null);\n+ // Make sure that the correct class loader was installed.\n+ if (info == null) {\n+ throw new RuntimeException(\"Could not initialize SecurityUtil: \" +\n+ \"Unable to find services for [org.apache.hadoop.security.SecurityInfo]\");\n+ }\n+ } finally {\n+ Thread.currentThread().setContextClassLoader(oldCCL);\n+ }\n+ return null;\n+ }\n+\n @Override\n public Map<String, Repository.Factory> getRepositories(Environment env, NamedXContentRegistry namedXContentRegistry) {\n return Collections.singletonMap(\"hdfs\", (metadata) -> new HdfsRepository(metadata, env, namedXContentRegistry));", "filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsPlugin.java", "status": "modified" }, { "diff": "@@ -19,29 +19,31 @@\n package org.elasticsearch.repositories.hdfs;\n \n import java.io.IOException;\n-import java.lang.reflect.Constructor;\n+import java.io.UncheckedIOException;\n+import java.net.InetAddress;\n import java.net.URI;\n+import java.net.UnknownHostException;\n import java.security.AccessController;\n-import java.security.Principal;\n import java.security.PrivilegedAction;\n-import java.util.Collections;\n import java.util.Locale;\n import java.util.Map;\n import java.util.Map.Entry;\n \n-import javax.security.auth.Subject;\n-\n import org.apache.hadoop.conf.Configuration;\n import org.apache.hadoop.fs.AbstractFileSystem;\n import org.apache.hadoop.fs.FileContext;\n import org.apache.hadoop.fs.UnsupportedFileSystemException;\n-import org.elasticsearch.ElasticsearchGenerationException;\n+import org.apache.hadoop.security.SecurityUtil;\n+import org.apache.hadoop.security.UserGroupInformation;\n+import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;\n+import org.apache.logging.log4j.Logger;\n import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.cluster.metadata.RepositoryMetaData;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.blobstore.BlobPath;\n import org.elasticsearch.common.blobstore.BlobStore;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n@@ -51,9 +53,14 @@\n \n public final class HdfsRepository extends BlobStoreRepository {\n \n- private final BlobPath basePath = BlobPath.cleanPath();\n+ private static 
final Logger LOGGER = Loggers.getLogger(HdfsRepository.class);\n+\n+ private static final String CONF_SECURITY_PRINCIPAL = \"security.principal\";\n+\n+ private final Environment environment;\n private final ByteSizeValue chunkSize;\n private final boolean compress;\n+ private final BlobPath basePath = BlobPath.cleanPath();\n \n private HdfsBlobStore blobStore;\n \n@@ -65,6 +72,7 @@ public HdfsRepository(RepositoryMetaData metadata, Environment environment,\n NamedXContentRegistry namedXContentRegistry) throws IOException {\n super(metadata, environment.settings(), namedXContentRegistry);\n \n+ this.environment = environment;\n this.chunkSize = metadata.settings().getAsBytesSize(\"chunk_size\", null);\n this.compress = metadata.settings().getAsBoolean(\"compress\", false);\n }\n@@ -101,49 +109,116 @@ protected void doStart() {\n blobStore = new HdfsBlobStore(fileContext, pathSetting, bufferSize);\n logger.debug(\"Using file-system [{}] for URI [{}], path [{}]\", fileContext.getDefaultFileSystem(), fileContext.getDefaultFileSystem().getUri(), pathSetting);\n } catch (IOException e) {\n- throw new ElasticsearchGenerationException(String.format(Locale.ROOT, \"Cannot create HDFS repository for uri [%s]\", uri), e);\n+ throw new UncheckedIOException(String.format(Locale.ROOT, \"Cannot create HDFS repository for uri [%s]\", uri), e);\n }\n super.doStart();\n }\n \n // create hadoop filecontext\n- @SuppressForbidden(reason = \"lesser of two evils (the other being a bunch of JNI/classloader nightmares)\")\n- private static FileContext createContext(URI uri, Settings repositorySettings) {\n- Configuration cfg = new Configuration(repositorySettings.getAsBoolean(\"load_defaults\", true));\n- cfg.setClassLoader(HdfsRepository.class.getClassLoader());\n- cfg.reloadConfiguration();\n+ private FileContext createContext(URI uri, Settings repositorySettings) {\n+ Configuration hadoopConfiguration = new Configuration(repositorySettings.getAsBoolean(\"load_defaults\", true));\n+ hadoopConfiguration.setClassLoader(HdfsRepository.class.getClassLoader());\n+ hadoopConfiguration.reloadConfiguration();\n \n Map<String, String> map = repositorySettings.getByPrefix(\"conf.\").getAsMap();\n for (Entry<String, String> entry : map.entrySet()) {\n- cfg.set(entry.getKey(), entry.getValue());\n+ hadoopConfiguration.set(entry.getKey(), entry.getValue());\n }\n \n- // create a hadoop user. 
if we want some auth, it must be done different anyway, and tested.\n- Subject subject;\n- try {\n- Class<?> clazz = Class.forName(\"org.apache.hadoop.security.User\");\n- Constructor<?> ctor = clazz.getConstructor(String.class);\n- ctor.setAccessible(true);\n- Principal principal = (Principal) ctor.newInstance(System.getProperty(\"user.name\"));\n- subject = new Subject(false, Collections.singleton(principal), Collections.emptySet(), Collections.emptySet());\n- } catch (ReflectiveOperationException e) {\n- throw new RuntimeException(e);\n- }\n+ // Create a hadoop user\n+ UserGroupInformation ugi = login(hadoopConfiguration, repositorySettings);\n \n- // disable FS cache\n- cfg.setBoolean(\"fs.hdfs.impl.disable.cache\", true);\n+ // Disable FS cache\n+ hadoopConfiguration.setBoolean(\"fs.hdfs.impl.disable.cache\", true);\n \n- // create the filecontext with our user\n- return Subject.doAs(subject, (PrivilegedAction<FileContext>) () -> {\n+ // Create the filecontext with our user information\n+ // This will correctly configure the filecontext to have our UGI as it's internal user.\n+ return ugi.doAs((PrivilegedAction<FileContext>) () -> {\n try {\n- AbstractFileSystem fs = AbstractFileSystem.get(uri, cfg);\n- return FileContext.getFileContext(fs, cfg);\n+ AbstractFileSystem fs = AbstractFileSystem.get(uri, hadoopConfiguration);\n+ return FileContext.getFileContext(fs, hadoopConfiguration);\n } catch (UnsupportedFileSystemException e) {\n- throw new RuntimeException(e);\n+ throw new UncheckedIOException(e);\n }\n });\n }\n \n+ private UserGroupInformation login(Configuration hadoopConfiguration, Settings repositorySettings) {\n+ // Validate the authentication method:\n+ AuthenticationMethod authMethod = SecurityUtil.getAuthenticationMethod(hadoopConfiguration);\n+ if (authMethod.equals(AuthenticationMethod.SIMPLE) == false\n+ && authMethod.equals(AuthenticationMethod.KERBEROS) == false) {\n+ throw new RuntimeException(\"Unsupported authorization mode [\"+authMethod+\"]\");\n+ }\n+\n+ // Check if the user added a principal to use, and that there is a keytab file provided\n+ String kerberosPrincipal = repositorySettings.get(CONF_SECURITY_PRINCIPAL);\n+\n+ // Check to see if the authentication method is compatible\n+ if (kerberosPrincipal != null && authMethod.equals(AuthenticationMethod.SIMPLE)) {\n+ LOGGER.warn(\"Hadoop authentication method is set to [SIMPLE], but a Kerberos principal is \" +\n+ \"specified. Continuing with [KERBEROS] authentication.\");\n+ SecurityUtil.setAuthenticationMethod(AuthenticationMethod.KERBEROS, hadoopConfiguration);\n+ } else if (kerberosPrincipal == null && authMethod.equals(AuthenticationMethod.KERBEROS)) {\n+ throw new RuntimeException(\"HDFS Repository does not support [KERBEROS] authentication without \" +\n+ \"a valid Kerberos principal and keytab. 
Please specify a principal in the repository settings with [\" +\n+ CONF_SECURITY_PRINCIPAL + \"].\");\n+ }\n+\n+ // Now we can initialize the UGI with the configuration.\n+ UserGroupInformation.setConfiguration(hadoopConfiguration);\n+\n+ // Debugging\n+ LOGGER.debug(\"Hadoop security enabled: [{}]\", UserGroupInformation.isSecurityEnabled());\n+ LOGGER.debug(\"Using Hadoop authentication method: [{}]\", SecurityUtil.getAuthenticationMethod(hadoopConfiguration));\n+\n+ // UserGroupInformation (UGI) instance is just a Hadoop specific wrapper around a Java Subject\n+ try {\n+ if (UserGroupInformation.isSecurityEnabled()) {\n+ String principal = preparePrincipal(kerberosPrincipal);\n+ String keytab = HdfsSecurityContext.locateKeytabFile(environment).toString();\n+ LOGGER.debug(\"Using kerberos principal [{}] and keytab located at [{}]\", principal, keytab);\n+ return UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);\n+ }\n+ return UserGroupInformation.getCurrentUser();\n+ } catch (IOException e) {\n+ throw new UncheckedIOException(\"Could not retrieve the current user information\", e);\n+ }\n+ }\n+\n+ // Convert principals of the format 'service/_HOST@REALM' by subbing in the local address for '_HOST'.\n+ private static String preparePrincipal(String originalPrincipal) {\n+ String finalPrincipal = originalPrincipal;\n+ // Don't worry about host name resolution if they don't have the _HOST pattern in the name.\n+ if (originalPrincipal.contains(\"_HOST\")) {\n+ try {\n+ finalPrincipal = SecurityUtil.getServerPrincipal(originalPrincipal, getHostName());\n+ } catch (IOException e) {\n+ throw new UncheckedIOException(e);\n+ }\n+\n+ if (originalPrincipal.equals(finalPrincipal) == false) {\n+ LOGGER.debug(\"Found service principal. Converted original principal name [{}] to server principal [{}]\",\n+ originalPrincipal, finalPrincipal);\n+ }\n+ }\n+ return finalPrincipal;\n+ }\n+\n+ @SuppressForbidden(reason = \"InetAddress.getLocalHost(); Needed for filling in hostname for a kerberos principal name pattern.\")\n+ private static String getHostName() {\n+ try {\n+ /*\n+ * This should not block since it should already be resolved via Log4J and Netty. The\n+ * host information is cached by the JVM and the TTL for the cache entry is infinite\n+ * when the SecurityManager is activated.\n+ */\n+ return InetAddress.getLocalHost().getCanonicalHostName();\n+ } catch (UnknownHostException e) {\n+ throw new RuntimeException(\"Could not locate host information\", e);\n+ }\n+ }\n+\n @Override\n protected BlobStore blobStore() {\n return blobStore;", "filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java", "status": "modified" }, { "diff": "@@ -0,0 +1,145 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.repositories.hdfs;\n+\n+import java.io.IOException;\n+import java.io.UncheckedIOException;\n+import java.lang.reflect.ReflectPermission;\n+import java.net.SocketPermission;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.security.Permission;\n+import java.util.Arrays;\n+import java.util.Locale;\n+import java.util.function.Supplier;\n+import javax.security.auth.AuthPermission;\n+import javax.security.auth.PrivateCredentialPermission;\n+import javax.security.auth.kerberos.ServicePermission;\n+\n+import org.apache.hadoop.security.UserGroupInformation;\n+import org.apache.logging.log4j.Logger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.env.Environment;\n+\n+/**\n+ * Oversees all the security specific logic for the HDFS Repository plugin.\n+ *\n+ * Keeps track of the current user for a given repository, as well as which\n+ * permissions to grant the blob store restricted execution methods.\n+ */\n+class HdfsSecurityContext {\n+\n+ private static final Logger LOGGER = Loggers.getLogger(HdfsSecurityContext.class);\n+\n+ private static final Permission[] SIMPLE_AUTH_PERMISSIONS;\n+ private static final Permission[] KERBEROS_AUTH_PERMISSIONS;\n+ static {\n+ // We can do FS ops with only a few elevated permissions:\n+ SIMPLE_AUTH_PERMISSIONS = new Permission[]{\n+ new SocketPermission(\"*\", \"connect\"),\n+ // 1) hadoop dynamic proxy is messy with access rules\n+ new ReflectPermission(\"suppressAccessChecks\"),\n+ // 2) allow hadoop to add credentials to our Subject\n+ new AuthPermission(\"modifyPrivateCredentials\")\n+ };\n+\n+ // If Security is enabled, we need all the following elevated permissions:\n+ KERBEROS_AUTH_PERMISSIONS = new Permission[] {\n+ new SocketPermission(\"*\", \"connect\"),\n+ // 1) hadoop dynamic proxy is messy with access rules\n+ new ReflectPermission(\"suppressAccessChecks\"),\n+ // 2) allow hadoop to add credentials to our Subject\n+ new AuthPermission(\"modifyPrivateCredentials\"),\n+ // 3) allow hadoop to act as the logged in Subject\n+ new AuthPermission(\"doAs\"),\n+ // 4) Listen and resolve permissions for kerberos server principals\n+ new SocketPermission(\"localhost:0\", \"listen,resolve\"),\n+ // We add the following since hadoop requires the client to re-login when the kerberos ticket expires:\n+ // 5) All the permissions needed for UGI to do its weird JAAS hack\n+ new RuntimePermission(\"getClassLoader\"),\n+ new RuntimePermission(\"setContextClassLoader\"),\n+ // 6) Additional permissions for the login modules\n+ new AuthPermission(\"modifyPrincipals\"),\n+ new PrivateCredentialPermission(\"org.apache.hadoop.security.Credentials * \\\"*\\\"\", \"read\"),\n+ new PrivateCredentialPermission(\"javax.security.auth.kerberos.KerberosTicket * \\\"*\\\"\", \"read\"),\n+ new PrivateCredentialPermission(\"javax.security.auth.kerberos.KeyTab * \\\"*\\\"\", \"read\")\n+ // Included later:\n+ // 7) allow code to initiate kerberos connections as the logged in user\n+ // Still far and away fewer permissions than the original full plugin policy\n+ };\n+ }\n+\n+ /**\n+ * Locates the keytab file in the environment and verifies that it exists.\n+ * Expects keytab file to exist at {@code $CONFIG_DIR$/repository-hdfs/krb5.keytab}\n+ */\n+ static Path locateKeytabFile(Environment environment) {\n+ Path keytabPath = 
environment.configFile().resolve(\"repository-hdfs\").resolve(\"krb5.keytab\");\n+ try {\n+ if (Files.exists(keytabPath) == false) {\n+ throw new RuntimeException(\"Could not locate keytab at [\" + keytabPath + \"].\");\n+ }\n+ } catch (SecurityException se) {\n+ throw new RuntimeException(\"Could not locate keytab at [\" + keytabPath + \"]\", se);\n+ }\n+ return keytabPath;\n+ }\n+\n+ private final UserGroupInformation ugi;\n+ private final Permission[] restrictedExecutionPermissions;\n+\n+ HdfsSecurityContext(UserGroupInformation ugi) {\n+ this.ugi = ugi;\n+ this.restrictedExecutionPermissions = renderPermissions(ugi);\n+ }\n+\n+ private Permission[] renderPermissions(UserGroupInformation ugi) {\n+ Permission[] permissions;\n+ if (ugi.isFromKeytab()) {\n+ // KERBEROS\n+ // Leave room to append one extra permission based on the logged in user's info.\n+ int permlen = KERBEROS_AUTH_PERMISSIONS.length + 1;\n+ permissions = new Permission[permlen];\n+\n+ System.arraycopy(KERBEROS_AUTH_PERMISSIONS, 0, permissions, 0, KERBEROS_AUTH_PERMISSIONS.length);\n+\n+ // Append a kerberos.ServicePermission to only allow initiating kerberos connections\n+ // as the logged in user.\n+ permissions[permissions.length - 1] = new ServicePermission(ugi.getUserName(), \"initiate\");\n+ } else {\n+ // SIMPLE\n+ permissions = Arrays.copyOf(SIMPLE_AUTH_PERMISSIONS, SIMPLE_AUTH_PERMISSIONS.length);\n+ }\n+ return permissions;\n+ }\n+\n+ Permission[] getRestrictedExecutionPermissions() {\n+ return restrictedExecutionPermissions;\n+ }\n+\n+ void ensureLogin() {\n+ if (ugi.isFromKeytab()) {\n+ try {\n+ ugi.checkTGTAndReloginFromKeytab();\n+ } catch (IOException ioe) {\n+ throw new UncheckedIOException(\"Could not re-authenticate\", ioe);\n+ }\n+ }\n+ }\n+}", "filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsSecurityContext.java", "status": "added" }, { "diff": "@@ -25,17 +25,60 @@ grant {\n permission java.lang.RuntimePermission \"accessDeclaredMembers\";\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\n \n+ // Needed so that Hadoop can load the correct classes for SPI and JAAS\n+ // org.apache.hadoop.security.SecurityUtil clinit\n+ // org.apache.hadoop.security.UserGroupInformation.newLoginContext()\n+ permission java.lang.RuntimePermission \"setContextClassLoader\";\n+\n // org.apache.hadoop.util.StringUtils clinit\n permission java.util.PropertyPermission \"*\", \"read,write\";\n \n // org.apache.hadoop.util.ShutdownHookManager clinit\n permission java.lang.RuntimePermission \"shutdownHooks\";\n \n- // JAAS is used always, we use a fake subject, hurts nobody\n+ // JAAS is used by Hadoop for authentication purposes\n+ // The Hadoop Login JAAS module modifies a Subject's private credentials and principals\n+ // The Hadoop RPC Layer must be able to read these credentials, and initiate Kerberos connections\n+\n+ // org.apache.hadoop.security.UserGroupInformation.getCurrentUser()\n permission javax.security.auth.AuthPermission \"getSubject\";\n+\n+ // org.apache.hadoop.security.UserGroupInformation.doAs()\n permission javax.security.auth.AuthPermission \"doAs\";\n+\n+ // org.apache.hadoop.security.UserGroupInformation.getCredentialsInternal()\n+ permission javax.security.auth.PrivateCredentialPermission \"org.apache.hadoop.security.Credentials * \\\"*\\\"\", \"read\";\n+\n+ // Hadoop depends on the Kerberos login module for kerberos authentication\n+ // com.sun.security.auth.module.Krb5LoginModule.login()\n+ permission java.lang.RuntimePermission 
\"accessClassInPackage.sun.security.krb5\";\n+\n+ // com.sun.security.auth.module.Krb5LoginModule.commit()\n permission javax.security.auth.AuthPermission \"modifyPrivateCredentials\";\n+ permission javax.security.auth.AuthPermission \"modifyPrincipals\";\n+ permission javax.security.auth.PrivateCredentialPermission \"javax.security.auth.kerberos.KeyTab * \\\"*\\\"\", \"read\";\n+ permission javax.security.auth.PrivateCredentialPermission \"javax.security.auth.kerberos.KerberosTicket * \\\"*\\\"\", \"read\";\n+\n+ // Hadoop depends on OS level user information for simple authentication\n+ // Unix: UnixLoginModule: com.sun.security.auth.module.UnixSystem.UnixSystem init\n+ permission java.lang.RuntimePermission \"loadLibrary.jaas_unix\";\n+ // Windows: NTLoginModule: com.sun.security.auth.module.NTSystem.loadNative\n+ permission java.lang.RuntimePermission \"loadLibrary.jaas_nt\";\n+ permission javax.security.auth.AuthPermission \"modifyPublicCredentials\";\n+\n+ // org.apache.hadoop.security.SaslRpcServer.init()\n+ permission java.security.SecurityPermission \"putProviderProperty.SaslPlainServer\";\n+\n+ // org.apache.hadoop.security.SaslPlainServer.SecurityProvider.SecurityProvider init\n+ permission java.security.SecurityPermission \"insertProvider.SaslPlainServer\";\n+\n+ // org.apache.hadoop.security.SaslRpcClient.getServerPrincipal -> KerberosPrincipal init\n+ permission javax.security.auth.kerberos.ServicePermission \"*\", \"initiate\";\n \n // hdfs client opens socket connections for to access repository\n permission java.net.SocketPermission \"*\", \"connect\";\n+\n+ // client binds to the address returned from the host name of any principal set up as a service principal\n+ // org.apache.hadoop.ipc.Client.Connection.setupConnection\n+ permission java.net.SocketPermission \"localhost:0\", \"listen,resolve\";\n };", "filename": "plugins/repository-hdfs/src/main/plugin-metadata/plugin-security.policy", "status": "modified" } ] }
{ "body": "The warning header used by Elasticsearch for delivering deprecation warnings has a specific format (RFC 7234, section 5.5). The format specifies that the warning header should be of the form\r\n\r\n warn-code warn-agent warn-text [warn-date]\r\n\r\nHere, the warn-code is a three-digit code which communicates various meanings. The warn-agent is a string used to identify the source of the warning (either a host:port combination, or some other identifier). The warn-text is quoted string which conveys the semantic meaning of the warning. The warn-date is an optional quoted date that can be in a few different formats.\r\n\r\nThis commit corrects the warning header within Elasticsearch to follow this specification. We use the warn-code 299 which means a \"miscellaneous persistent warning.\" For the warn-agent, we use the version of Elasticsearch that produced the warning. The warn-text is unchanged from what we deliver today, but is wrapped in quotes as specified (this is important as a problem that exists today is that multiple warnings can not be split by comma to obtain the individual warnings as the warnings might themselves contain commas). For the warn-date, we use the RFC 1123 format.\r\n\r\nCloses #22986\r\n", "comments": [ { "body": "Thanks for the thorough review @bleskes.", "created_at": "2017-03-01T02:16:38Z" } ], "number": 23275, "title": "Correct warning header to be compliant" }
{ "body": "This commit fixes the date format in warning headers. There is some confusion around whether or not RFC 1123 requires two-digit days. However, the warning header specification very clearly relies on a format that requires two-digit days. This commit removes the usage of RFC 1123 date/time format from Java 8, which allows for one-digit days, in favor of a format that forces two-digit days (it's otherwise identical to RFC 1123 format, it is just fixed width).\r\n\r\nRelates #23275\r\n", "number": 23418, "review_comments": [ { "body": "nit: please cap this at 72 columns.", "created_at": "2017-03-01T01:08:14Z" }, { "body": "I think we should discuss this. :smile:", "created_at": "2017-03-01T01:28:00Z" } ], "title": "Fix date format in warning headers" }
{ "commits": [ { "message": "Fix date format in warning headers\n\nThis commit fixes the date format in warning headers. There is some\nconfusion around whether or not RFC 1123 requires two-digit\ndays. However, the warning header specification very clearly relies on a\nformat that requires two-digit days. This commit removes the usage of\nRFC 1123 date/time format from Java 8, which allows for one-digit days,\nin favor of a format that forces two-digit days (it's otherwise\nidentical to RFC 1123 format, it is just fixed width)." }, { "message": "Fix grammar in comment" } ], "files": [ { "diff": "@@ -29,14 +29,26 @@\n import java.time.ZoneId;\n import java.time.ZonedDateTime;\n import java.time.format.DateTimeFormatter;\n+import java.time.format.DateTimeFormatterBuilder;\n+import java.time.format.SignStyle;\n+import java.util.HashMap;\n import java.util.Iterator;\n import java.util.Locale;\n+import java.util.Map;\n import java.util.Objects;\n import java.util.Set;\n import java.util.concurrent.CopyOnWriteArraySet;\n import java.util.regex.Matcher;\n import java.util.regex.Pattern;\n \n+import static java.time.temporal.ChronoField.DAY_OF_MONTH;\n+import static java.time.temporal.ChronoField.DAY_OF_WEEK;\n+import static java.time.temporal.ChronoField.HOUR_OF_DAY;\n+import static java.time.temporal.ChronoField.MINUTE_OF_HOUR;\n+import static java.time.temporal.ChronoField.MONTH_OF_YEAR;\n+import static java.time.temporal.ChronoField.SECOND_OF_MINUTE;\n+import static java.time.temporal.ChronoField.YEAR;\n+\n /**\n * A logger that logs deprecation notices.\n */\n@@ -128,6 +140,63 @@ public void deprecated(String msg, Object... params) {\n Build.CURRENT.shortHash()) +\n \"\\\"%s\\\" \\\"%s\\\"\";\n \n+ /*\n+ * RFC 7234 section 5.5 specifies that the warn-date is a quoted HTTP-date. HTTP-date is defined in RFC 7234 Appendix B as being from\n+ * RFC 7231 section 7.1.1.1. RFC 7231 specifies an HTTP-date as an IMF-fixdate (or an obs-date referring to obsolete formats). The\n+ * grammar for IMF-fixdate is specified as 'day-name \",\" SP date1 SP time-of-day SP GMT'. Here, day-name is\n+ * (Mon|Tue|Wed|Thu|Fri|Sat|Sun). Then, date1 is 'day SP month SP year' where day is 2DIGIT, month is\n+ * (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec), and year is 4DIGIT. Lastly, time-of-day is 'hour \":\" minute \":\" second' where\n+ * hour is 2DIGIT, minute is 2DIGIT, and second is 2DIGIT. 
Finally, 2DIGIT and 4DIGIT have the obvious definitions.\n+ */\n+ private static final DateTimeFormatter RFC_7231_DATE_TIME;\n+\n+ static {\n+ final Map<Long, String> dow = new HashMap<>();\n+ dow.put(1L, \"Mon\");\n+ dow.put(2L, \"Tue\");\n+ dow.put(3L, \"Wed\");\n+ dow.put(4L, \"Thu\");\n+ dow.put(5L, \"Fri\");\n+ dow.put(6L, \"Sat\");\n+ dow.put(7L, \"Sun\");\n+ final Map<Long, String> moy = new HashMap<>();\n+ moy.put(1L, \"Jan\");\n+ moy.put(2L, \"Feb\");\n+ moy.put(3L, \"Mar\");\n+ moy.put(4L, \"Apr\");\n+ moy.put(5L, \"May\");\n+ moy.put(6L, \"Jun\");\n+ moy.put(7L, \"Jul\");\n+ moy.put(8L, \"Aug\");\n+ moy.put(9L, \"Sep\");\n+ moy.put(10L, \"Oct\");\n+ moy.put(11L, \"Nov\");\n+ moy.put(12L, \"Dec\");\n+ RFC_7231_DATE_TIME = new DateTimeFormatterBuilder()\n+ .parseCaseInsensitive()\n+ .parseLenient()\n+ .optionalStart()\n+ .appendText(DAY_OF_WEEK, dow)\n+ .appendLiteral(\", \")\n+ .optionalEnd()\n+ .appendValue(DAY_OF_MONTH, 2, 2, SignStyle.NOT_NEGATIVE)\n+ .appendLiteral(' ')\n+ .appendText(MONTH_OF_YEAR, moy)\n+ .appendLiteral(' ')\n+ .appendValue(YEAR, 4)\n+ .appendLiteral(' ')\n+ .appendValue(HOUR_OF_DAY, 2)\n+ .appendLiteral(':')\n+ .appendValue(MINUTE_OF_HOUR, 2)\n+ .optionalStart()\n+ .appendLiteral(':')\n+ .appendValue(SECOND_OF_MINUTE, 2)\n+ .optionalEnd()\n+ .appendLiteral(' ')\n+ .appendOffset(\"+HHMM\", \"GMT\")\n+ .toFormatter(Locale.getDefault(Locale.Category.FORMAT));\n+ }\n+\n private static final ZoneId GMT = ZoneId.of(\"GMT\");\n \n /**\n@@ -195,13 +264,13 @@ void deprecated(final Set<ThreadContext> threadContexts, final String message, f\n \n /**\n * Format a warning string in the proper warning format by prepending a warn code, warn agent, wrapping the warning string in quotes,\n- * and appending the RFC 1123 date.\n+ * and appending the RFC 7231 date.\n *\n * @param s the warning string to format\n * @return a warning value formatted according to RFC 7234\n */\n public static String formatWarning(final String s) {\n- return String.format(Locale.ROOT, WARNING_FORMAT, escape(s), DateTimeFormatter.RFC_1123_DATE_TIME.format(ZonedDateTime.now(GMT)));\n+ return String.format(Locale.ROOT, WARNING_FORMAT, escape(s), RFC_7231_DATE_TIME.format(ZonedDateTime.now(GMT)));\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java", "status": "modified" }, { "diff": "@@ -19,8 +19,6 @@\n \n package org.elasticsearch.test.rest.yaml.section;\n \n-import org.elasticsearch.Version;\n-import org.elasticsearch.common.hash.MessageDigests;\n import org.elasticsearch.common.logging.DeprecationLogger;\n import org.elasticsearch.common.xcontent.XContent;\n import org.elasticsearch.common.xcontent.XContentLocation;\n@@ -29,7 +27,6 @@\n import org.hamcrest.MatcherAssert;\n \n import java.io.IOException;\n-import java.io.UnsupportedEncodingException;\n import java.util.Arrays;\n import java.util.Map;\n ", "filename": "test/framework/src/test/java/org/elasticsearch/test/rest/yaml/section/DoSectionTests.java", "status": "modified" } ] }
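To see the one-digit-day problem this PR addresses, the small sketch below (a simplified example of my own, not the code from the diff) compares Java's built-in RFC 1123 formatter with a fixed-width variant that pads the day of month to two digits as IMF-fixdate requires. The real formatter in the diff additionally handles lenient parsing and locale-independent day and month names.

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.SignStyle;
import java.time.temporal.ChronoField;
import java.util.Locale;

public class FixedWidthDaySketch {
    public static void main(String[] args) {
        ZonedDateTime earlyInMonth = ZonedDateTime.of(2017, 3, 1, 9, 5, 0, 0, ZoneId.of("GMT"));

        // Java's RFC 1123 formatter allows one-digit days: "Wed, 1 Mar 2017 09:05:00 GMT"
        System.out.println(DateTimeFormatter.RFC_1123_DATE_TIME.format(earlyInMonth));

        // A fixed-width variant pads the day to two digits: "Wed, 01 Mar 2017 09:05:00 GMT"
        DateTimeFormatter fixedWidthDay = new DateTimeFormatterBuilder()
                .appendPattern("EEE, ")
                .appendValue(ChronoField.DAY_OF_MONTH, 2, 2, SignStyle.NOT_NEGATIVE)
                .appendPattern(" MMM uuuu HH:mm:ss")
                .appendLiteral(" GMT")
                .toFormatter(Locale.ENGLISH);
        System.out.println(fixedWidthDay.format(earlyInMonth));
    }
}
```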
{ "body": "The IndexShardOperationsLock has a mechanism to delay operations if there is currently a block on the lock. These delayed operations are executed when the block is released and are executed by a different thread. When the different thread executes the operations, the ThreadContext is that of the thread that was blocking operations. In order to preserve the ThreadContext, we need to store it and wrap the listener when the operation is delayed.", "comments": [ { "body": "LGTM again. I thought about the case where an executor isn't specified and decided to go with you in the logic that \"no executor means no threading context support\". +1 to being stricter here.", "created_at": "2017-02-24T12:54:03Z" } ], "number": 23349, "title": "Always restore the ThreadContext for operations delayed due to a block" }
{ "body": "The ThreadedActionListener may not be invoked by the same thread that creates it. When this happens, the ThreadContext from the thread that created the ThreadedActionListener should be used. This change makes the ThreadedActionListener always wrap the ActionListener in a ContextPreservingActionListener to prevent accidental loss of the ThreadContext.\r\n\r\nRelates #23349", "number": 23390, "review_comments": [ { "body": "I know we talked about it but reflecting on this, I think it's too implicit to wrap things here. Better be explicit and put this as a parameter type and let the caller be aware and decide.", "created_at": "2017-02-27T17:57:21Z" }, { "body": "hmm.. is this correct? It's debatable what we should do here. Should we restore the context in which the listener was added or should we use the context of the thread that made the request and also keep the response headers. The way this is set up now, consumers don't have a choice. I'm starting to doubt if we should do this... @s1monw wdyt?", "created_at": "2017-02-28T09:21:27Z" }, { "body": "I agree and am beginning to think we should skip this change altogether for now", "created_at": "2017-02-28T12:05:10Z" }, { "body": "> think we should skip this change altogether\r\n\r\nyeah. Let's just drop this. Thanks for trying..", "created_at": "2017-02-28T12:51:18Z" } ], "title": "ThreadedActionListener always wraps listener to preserve context" }
{ "commits": [ { "message": "ThreadedActionListener always wraps listener to preserve context\n\nThe ThreadedActionListener may not be invoked by the same thread that creates it. When this happens, the ThreadContext\nfrom the thread that created the ThreadedActionListener should be used. This change makes the ThreadedActionListener\nalways wrap the ActionListener in a ContextPreservingActionListener to prevent accidental loss of the ThreadContext.\n\nRelates #23349" }, { "message": "remove implicit wrapping of listener and make it explicit" } ], "files": [ { "diff": "@@ -50,7 +50,8 @@ public void addListener(final ActionListener<T> listener) {\n }\n \n public void internalAddListener(ActionListener<T> listener) {\n- listener = new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.LISTENER, listener, false);\n+ listener = new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.LISTENER,\n+ ContextPreservingActionListener.wrap(listener, threadPool.getThreadContext(), false), false);\n boolean executeImmediate = false;\n synchronized (this) {\n if (executedListeners) {", "filename": "core/src/main/java/org/elasticsearch/action/support/AbstractListenableActionFuture.java", "status": "modified" }, { "diff": "@@ -50,4 +50,12 @@ public void onFailure(Exception e) {\n delegate.onFailure(e);\n }\n }\n+\n+ /**\n+ * Wraps the given action listener so that it restores the current context.\n+ */\n+ public static <R> ContextPreservingActionListener<R> wrap(ActionListener<R> listener, ThreadContext context,\n+ boolean restoreResponseHeader) {\n+ return new ContextPreservingActionListener<>(context.newRestorableContext(restoreResponseHeader), listener);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java", "status": "modified" }, { "diff": "@@ -66,17 +66,18 @@ public <Response> ActionListener<Response> wrap(ActionListener<Response> listene\n if (listener instanceof ThreadedActionListener) {\n return listener;\n }\n- return new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.LISTENER, listener, false);\n+ return new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.LISTENER,\n+ ContextPreservingActionListener.wrap(listener, threadPool.getThreadContext(), false), false);\n }\n }\n \n private final Logger logger;\n private final ThreadPool threadPool;\n private final String executor;\n- private final ActionListener<Response> listener;\n+ private final ContextPreservingActionListener<Response> listener;\n private final boolean forceExecution;\n \n- public ThreadedActionListener(Logger logger, ThreadPool threadPool, String executor, ActionListener<Response> listener,\n+ public ThreadedActionListener(Logger logger, ThreadPool threadPool, String executor, ContextPreservingActionListener<Response> listener,\n boolean forceExecution) {\n this.logger = logger;\n this.threadPool = threadPool;", "filename": "core/src/main/java/org/elasticsearch/action/support/ThreadedActionListener.java", "status": "modified" }, { "diff": "@@ -477,7 +477,7 @@ public void onFailure(String source, Exception e) {\n }\n \n private ContextPreservingActionListener<ClusterStateUpdateResponse> wrapPreservingContext(ActionListener<ClusterStateUpdateResponse> listener) {\n- return new ContextPreservingActionListener<>(threadPool.getThreadContext().newRestorableContext(false), listener);\n+ return ContextPreservingActionListener.wrap(listener, threadPool.getThreadContext(), false);\n }\n \n private List<IndexTemplateMetaData> 
findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -289,7 +289,7 @@ public ClusterState execute(ClusterState currentState) {\n }\n \n private ContextPreservingActionListener<ClusterStateUpdateResponse> wrapPreservingContext(ActionListener<ClusterStateUpdateResponse> listener) {\n- return new ContextPreservingActionListener<>(threadPool.getThreadContext().newRestorableContext(false), listener);\n+ return ContextPreservingActionListener.wrap(listener, threadPool.getThreadContext(), false);\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import org.elasticsearch.action.support.ThreadedActionListener;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lease.Releasable;\n-import org.elasticsearch.common.util.concurrent.ThreadContext.StoredContext;\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.Closeable;\n@@ -34,7 +33,6 @@\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.TimeoutException;\n import java.util.concurrent.atomic.AtomicBoolean;\n-import java.util.function.Supplier;\n \n public class IndexShardOperationsLock implements Closeable {\n private final ShardId shardId;\n@@ -129,13 +127,13 @@ public void acquire(ActionListener<Releasable> onAcquired, String executorOnDela\n if (delayedOperations == null) {\n delayedOperations = new ArrayList<>();\n }\n- final Supplier<StoredContext> contextSupplier = threadPool.getThreadContext().newRestorableContext(false);\n+ final ContextPreservingActionListener<Releasable> wrappedListener =\n+ ContextPreservingActionListener.wrap(onAcquired, threadPool.getThreadContext(), false);\n if (executorOnDelay != null) {\n delayedOperations.add(\n- new ThreadedActionListener<>(logger, threadPool, executorOnDelay,\n- new ContextPreservingActionListener<>(contextSupplier, onAcquired), forceExecution));\n+ new ThreadedActionListener<>(logger, threadPool, executorOnDelay, wrappedListener, forceExecution));\n } else {\n- delayedOperations.add(new ContextPreservingActionListener<>(contextSupplier, onAcquired));\n+ delayedOperations.add(wrappedListener);\n }\n return;\n }", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationsLock.java", "status": "modified" }, { "diff": "@@ -0,0 +1,87 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support;\n+\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.common.util.concurrent.ThreadContext;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.threadpool.TestThreadPool;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.threadpool.ThreadPool.Names;\n+import org.junit.After;\n+import org.junit.Before;\n+\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+\n+public class ThreadedActionListenerTests extends ESTestCase {\n+\n+ private ThreadPool threadPool;\n+\n+ @Before\n+ public void createThreadPool() {\n+ threadPool = new TestThreadPool(ThreadedActionListenerTests.class.getName());\n+ }\n+\n+ @After\n+ public void terminateThreadPool() throws InterruptedException {\n+ terminate(threadPool);\n+ }\n+\n+ public void testThreadedActionListenerWithContextPreservingListener() throws InterruptedException {\n+ final ThreadedActionListener<Void> listener;\n+ final ThreadContext context = threadPool.getThreadContext();\n+ final CountDownLatch responseLatch = new CountDownLatch(1);\n+ final CountDownLatch exceptionLatch = new CountDownLatch(1);\n+ try (ThreadContext.StoredContext ignore = context.newStoredContext(false)) {\n+ context.putHeader(\"foo\", \"bar\");\n+ context.putTransient(\"bar\", \"baz\");\n+ listener = new ThreadedActionListener<>(logger, threadPool, Names.GENERIC,\n+ ContextPreservingActionListener.wrap(ActionListener.wrap(r -> {\n+ assertEquals(\"bar\", context.getHeader(\"foo\"));\n+ assertEquals(\"baz\", context.getTransient(\"bar\"));\n+ responseLatch.countDown();\n+ },\n+ e -> {\n+ assertEquals(\"bar\", context.getHeader(\"foo\"));\n+ assertEquals(\"baz\", context.getTransient(\"bar\"));\n+ exceptionLatch.countDown();\n+ }), threadPool.getThreadContext(), false), randomBoolean());\n+ }\n+\n+ assertNull(context.getHeader(\"foo\"));\n+ assertNull(context.getTransient(\"bar\"));\n+ assertEquals(1, responseLatch.getCount());\n+\n+ listener.onResponse(null);\n+ responseLatch.await(1L, TimeUnit.HOURS);\n+ assertEquals(0, responseLatch.getCount());\n+\n+ assertNull(context.getHeader(\"foo\"));\n+ assertNull(context.getTransient(\"bar\"));\n+ assertEquals(1, exceptionLatch.getCount());\n+\n+ listener.onFailure(null);\n+ exceptionLatch.await(1L, TimeUnit.HOURS);\n+ assertEquals(0, exceptionLatch.getCount());\n+ assertNull(context.getHeader(\"foo\"));\n+ assertNull(context.getTransient(\"bar\"));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/action/support/ThreadedActionListenerTests.java", "status": "added" }, { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.ActionResponse;\n import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.ContextPreservingActionListener;\n import org.elasticsearch.action.support.PlainActionFuture;\n import org.elasticsearch.action.support.ThreadedActionListener;\n import org.elasticsearch.action.support.replication.ClusterStateCreationUtils;\n@@ -137,7 +138,8 @@ class Action extends TransportMasterNodeAction<Request, Response> {\n @Override\n protected void doExecute(Task task, final Request request, ActionListener<Response> listener) {\n // remove unneeded threading by wrapping listener with SAME to prevent super.doExecute from wrapping it with LISTENER\n- 
super.doExecute(task, request, new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.SAME, listener, false));\n+ super.doExecute(task, request, new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.SAME,\n+ ContextPreservingActionListener.wrap(listener, threadPool.getThreadContext(), false), false));\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/action/support/master/TransportMasterNodeActionTests.java", "status": "modified" } ] }
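Both of the changes above revolve around the same pattern: capture the caller's thread context when a listener is created, and restore it when some other thread later invokes the listener. The following is a generic, self-contained sketch of that pattern using a plain ThreadLocal as a stand-in; it is illustrative only and does not reflect Elasticsearch's ThreadContext or ContextPreservingActionListener APIs.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class ContextPreservingSketch {
    // Stand-in for a request-scoped thread context (the real class carries headers and transients).
    private static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "empty");

    // Wrap a listener so that whichever thread eventually invokes it sees the context
    // that was active on the thread that created it.
    static <T> Consumer<T> preservingContext(Consumer<T> delegate) {
        final String captured = CONTEXT.get();
        return response -> {
            String previous = CONTEXT.get();
            CONTEXT.set(captured);
            try {
                delegate.accept(response);
            } finally {
                CONTEXT.set(previous);
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CONTEXT.set("user=alice");
        Consumer<String> listener = preservingContext(response ->
                System.out.println("handled '" + response + "' with context " + CONTEXT.get()));
        // The pool thread has no context of its own; the wrapper restores the captured one.
        pool.submit(() -> listener.accept("response")).get();
        pool.shutdown();
    }
}
```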
{ "body": "The test `org.elasticsearch.index.reindex.ReindexFailureTests testResponseOnSearchFailure` has been failing for 2 days (from Feb 23 to the day this issue was opened). This has been failing on CI but I have not been able to reproduce it locally. Example CI output from the failure:\r\nhttps://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=oraclelinux/646/console\r\n\r\nThe issue is that `org.apache.lucene.search.TopDocs$MergeSortQueue.<init>(TopDocs.java:128)` throws a NPE, presumably because `shardHits` is null. The stack trace shows that the calls are initialized from `SearchPhaseController:L265`. However, the `shardTopDocs` is most certainly initialized and non-null when passed into the `TopDocs.merge` call, so its quite puzzling as to what is happening.", "comments": [ { "body": "/cc @jpountz @s1monw ", "created_at": "2017-02-24T16:09:05Z" }, { "body": "I am looking into this. this smells like there is a bug somewhere else that was uncovered by a recent refactoring. We are making stronger assumptions now...", "created_at": "2017-02-27T09:00:02Z" } ], "number": 23357, "title": "NPE in Lucene TopDocs from the SearchPhaseController" }
{ "body": "Previously this code was duplicated across the 3 different topdocs variants\r\nwe have. It also had no real unittest (where we tested with holes in the results)\r\nwhich caused a sneaky bug where the comparison used `result.size()` vs `results.size()`\r\ncausing several NPEs downstream. This change adds a static method to fill the top docs\r\nthat is shared across all variants and adds a unittest that would have caught the issue\r\nvery quickly.\r\n\r\nCloses #19356\r\nCloses #23357", "number": 23380, "review_comments": [ { "body": "the supplier seems to be only useful to save one object allocation, maybe getting rid of it would make things simpler?", "created_at": "2017-02-27T09:59:01Z" }, { "body": "true! I fixed it", "created_at": "2017-02-27T10:06:55Z" } ], "title": "Factor out filling of TopDocs in SearchPhaseController" }
{ "commits": [ { "message": "Factor out filling of TopDocs in SearchPhaseController\n\nPreviously this code was duplicated across the 3 different topdocs variants\nwe have. It also had no real unittest (where we tested with holes in the results)\nwhich caused a sneaky bug where the comparison used `result.size()` vs `results.size()`\ncausing several NPEs downstream. This change adds a static method to fill the top docs\nthat is shared across all variants and adds a unittest that would have caught the issue\nvery quickly.\n\nCloses #19356\nCloses #23357" }, { "message": "simplify API" } ], "files": [ { "diff": "@@ -66,6 +66,7 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n+import java.util.function.Supplier;\n import java.util.stream.Collectors;\n import java.util.stream.StreamSupport;\n \n@@ -233,47 +234,19 @@ public ScoreDoc[] sortDocs(boolean ignoreFrom, AtomicArray<? extends QuerySearch\n if (result.queryResult().topDocs() instanceof CollapseTopFieldDocs) {\n CollapseTopFieldDocs firstTopDocs = (CollapseTopFieldDocs) result.queryResult().topDocs();\n final Sort sort = new Sort(firstTopDocs.fields);\n-\n final CollapseTopFieldDocs[] shardTopDocs = new CollapseTopFieldDocs[numShards];\n- if (result.size() != shardTopDocs.length) {\n- // TopDocs#merge can't deal with null shard TopDocs\n- final CollapseTopFieldDocs empty = new CollapseTopFieldDocs(firstTopDocs.field, 0, new FieldDoc[0],\n- sort.getSort(), new Object[0], Float.NaN);\n- Arrays.fill(shardTopDocs, empty);\n- }\n- for (AtomicArray.Entry<? extends QuerySearchResultProvider> sortedResult : results) {\n- TopDocs topDocs = sortedResult.value.queryResult().topDocs();\n- // the 'index' field is the position in the resultsArr atomic array\n- shardTopDocs[sortedResult.index] = (CollapseTopFieldDocs) topDocs;\n- }\n+ fillTopDocs(shardTopDocs, results, new CollapseTopFieldDocs(firstTopDocs.field, 0, new FieldDoc[0],\n+ sort.getSort(), new Object[0], Float.NaN));\n mergedTopDocs = CollapseTopFieldDocs.merge(sort, from, topN, shardTopDocs);\n } else if (result.queryResult().topDocs() instanceof TopFieldDocs) {\n TopFieldDocs firstTopDocs = (TopFieldDocs) result.queryResult().topDocs();\n final Sort sort = new Sort(firstTopDocs.fields);\n-\n final TopFieldDocs[] shardTopDocs = new TopFieldDocs[resultsArr.length()];\n- if (result.size() != shardTopDocs.length) {\n- // TopDocs#merge can't deal with null shard TopDocs\n- final TopFieldDocs empty = new TopFieldDocs(0, new FieldDoc[0], sort.getSort(), Float.NaN);\n- Arrays.fill(shardTopDocs, empty);\n- }\n- for (AtomicArray.Entry<? extends QuerySearchResultProvider> sortedResult : results) {\n- TopDocs topDocs = sortedResult.value.queryResult().topDocs();\n- // the 'index' field is the position in the resultsArr atomic array\n- shardTopDocs[sortedResult.index] = (TopFieldDocs) topDocs;\n- }\n+ fillTopDocs(shardTopDocs, results, new TopFieldDocs(0, new FieldDoc[0], sort.getSort(), Float.NaN));\n mergedTopDocs = TopDocs.merge(sort, from, topN, shardTopDocs);\n } else {\n final TopDocs[] shardTopDocs = new TopDocs[resultsArr.length()];\n- if (result.size() != shardTopDocs.length) {\n- // TopDocs#merge can't deal with null shard TopDocs\n- Arrays.fill(shardTopDocs, Lucene.EMPTY_TOP_DOCS);\n- }\n- for (AtomicArray.Entry<? 
extends QuerySearchResultProvider> sortedResult : results) {\n- TopDocs topDocs = sortedResult.value.queryResult().topDocs();\n- // the 'index' field is the position in the resultsArr atomic array\n- shardTopDocs[sortedResult.index] = topDocs;\n- }\n+ fillTopDocs(shardTopDocs, results, Lucene.EMPTY_TOP_DOCS);\n mergedTopDocs = TopDocs.merge(from, topN, shardTopDocs);\n }\n \n@@ -314,6 +287,20 @@ public ScoreDoc[] sortDocs(boolean ignoreFrom, AtomicArray<? extends QuerySearch\n return scoreDocs;\n }\n \n+ static <T extends TopDocs> void fillTopDocs(T[] shardTopDocs,\n+ List<? extends AtomicArray.Entry<? extends QuerySearchResultProvider>> results,\n+ T empytTopDocs) {\n+ if (results.size() != shardTopDocs.length) {\n+ // TopDocs#merge can't deal with null shard TopDocs\n+ Arrays.fill(shardTopDocs, empytTopDocs);\n+ }\n+ for (AtomicArray.Entry<? extends QuerySearchResultProvider> resultProvider : results) {\n+ final T topDocs = (T) resultProvider.value.queryResult().topDocs();\n+ assert topDocs != null : \"top docs must not be null in a valid result\";\n+ // the 'index' field is the position in the resultsArr atomic array\n+ shardTopDocs[resultProvider.index] = topDocs;\n+ }\n+ }\n public ScoreDoc[] getLastEmittedDocPerShard(ReducedQueryPhase reducedQueryPhase,\n ScoreDoc[] sortedScoreDocs, int numShards) {\n ScoreDoc[] lastEmittedDocPerShard = new ScoreDoc[numShards];", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.search.ScoreDoc;\n import org.apache.lucene.search.TopDocs;\n+import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.text.Text;\n import org.elasticsearch.common.util.BigArrays;\n@@ -347,4 +348,31 @@ public void testNewSearchPhaseResults() {\n }\n }\n }\n+\n+ public void testFillTopDocs() {\n+ final int maxIters = randomIntBetween(5, 15);\n+ for (int iters = 0; iters < maxIters; iters++) {\n+ TopDocs[] topDocs = new TopDocs[randomIntBetween(2, 100)];\n+ int numShards = topDocs.length;\n+ AtomicArray<QuerySearchResultProvider> resultProviderAtomicArray = generateQueryResults(numShards, Collections.emptyList(),\n+ 2, randomBoolean());\n+ if (randomBoolean()) {\n+ int maxNull = randomIntBetween(1, topDocs.length - 1);\n+ for (int i = 0; i < maxNull; i++) {\n+ resultProviderAtomicArray.set(randomIntBetween(0, resultProviderAtomicArray.length() - 1), null);\n+ }\n+ }\n+ SearchPhaseController.fillTopDocs(topDocs, resultProviderAtomicArray.asList(), Lucene.EMPTY_TOP_DOCS);\n+ for (int i = 0; i < topDocs.length; i++) {\n+ assertNotNull(topDocs[i]);\n+ if (topDocs[i] == Lucene.EMPTY_TOP_DOCS) {\n+ assertNull(resultProviderAtomicArray.get(i));\n+ } else {\n+ assertNotNull(resultProviderAtomicArray.get(i));\n+ assertNotNull(resultProviderAtomicArray.get(i).queryResult());\n+ assertSame(resultProviderAtomicArray.get(i).queryResult().topDocs(), topDocs[i]);\n+ }\n+ }\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/search/SearchPhaseControllerTests.java", "status": "modified" } ] }
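The extracted helper above boils down to a simple rule: allocate one slot per shard, pre-fill every slot with a shared empty placeholder whenever some shards produced no result, then copy each result into its slot. The sketch below restates that rule with plain Java types and hypothetical names; note that the size check compares the list of results against the slot count (the original bug compared the size of a single result instead).

```java
import java.util.Arrays;
import java.util.List;

public class FillShardResultsSketch {
    record ShardResult(int shardIndex, String topDocs) {}

    // One slot per shard; pre-fill with a placeholder when some shards returned nothing,
    // so a downstream merge never sees null and throws an NPE.
    static String[] fillPerShard(int numShards, List<ShardResult> results) {
        String[] slots = new String[numShards];
        if (results.size() != numShards) {
            Arrays.fill(slots, "EMPTY_TOP_DOCS");
        }
        for (ShardResult result : results) {
            slots[result.shardIndex()] = result.topDocs();
        }
        return slots;
    }

    public static void main(String[] args) {
        // Only shards 0 and 3 of 5 produced results.
        List<ShardResult> partial = List.of(new ShardResult(0, "docs-from-shard-0"),
                new ShardResult(3, "docs-from-shard-3"));
        System.out.println(Arrays.toString(fillPerShard(5, partial)));
    }
}
```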
{ "body": "I have tried to fire a silly request to ElastiSearch 5.1.1:\r\n\r\nGET products/_search\r\n```json\r\n{\r\n **\"from\": -2**,\r\n \"size\": 100, \r\n \"query\": {\r\n \"match\": {\r\n \"productName\": \"soy sauce\"\r\n }\r\n },\r\n \"sort\": [\r\n {\r\n \"customerRating\": {\r\n \"order\": \"desc\"\r\n }\r\n },\r\n \"_score\",\r\n {\r\n \"price\" : {\r\n \"order\": \"asc\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nand got this:\r\n\r\n```json\r\n{\r\n \"error\": {\r\n \"root_cause\": [],\r\n \"type\": \"reduce_search_phase_exception\",\r\n \"reason\": \"[reduce] \",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [],\r\n \"caused_by\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n },\r\n \"status\": 503\r\n}\r\n```\r\n\r\nI had expected a 400 (user error) and a meaningful error message, like e.g: from parameter cannot be negative.", "comments": [ { "body": "I agree we should catch this earlier. I just reproduced this on 5.1.2. and there I get an ArrayIndexOutOfBoundsException from\r\n```\r\njava.lang.ArrayIndexOutOfBoundsException: -2\r\n\tat org.elasticsearch.action.search.SearchPhaseController.sortDocs(SearchPhaseController.java:214) ~[elasticsearch-5.1.2.jar:5.1.2]\r\n\tat org.elasticsearch.action.search.SearchQueryThenFetchAsyncAction.moveToSecondPhase(SearchQueryThenFetchAsyncAction.java:80) ~[elasticsearch-5.1.2.jar:5.1.2]\r\n```\r\n\r\nI think we should reject negative `from` values already in SearchSourceBuilder, which would mean that errors would be triggered either on the client side or the coordinating node.", "created_at": "2017-02-24T15:49:12Z" }, { "body": "Closed by #23358", "created_at": "2017-02-27T09:15:12Z" } ], "number": 23324, "title": "NullPointer on from parameter with negative value and 500 code instead of 400 error" }
{ "body": "This prevents later errors like the one reported in #23324 and throws an\r\nIllegalArgumentException early instead.", "number": 23358, "review_comments": [], "title": "Prevent negative `from` parameter in SearchSourceBuilder" }
{ "commits": [ { "message": "Prevent negative `from` parameter in SearchSourceBuilder\n\nThis prevents later errors like the one reported in #23324 and throws an\nIllegalArgumentException early instead." } ], "files": [ { "diff": "@@ -38,11 +38,11 @@\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.script.Script;\n-import org.elasticsearch.search.collapse.CollapseBuilder;\n import org.elasticsearch.search.SearchExtBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n+import org.elasticsearch.search.collapse.CollapseBuilder;\n import org.elasticsearch.search.fetch.StoredFieldsContext;\n import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;\n@@ -314,6 +314,9 @@ public QueryBuilder postFilter() {\n * From index to start the search from. Defaults to <tt>0</tt>.\n */\n public SearchSourceBuilder from(int from) {\n+ if (from < 0) {\n+ throw new IllegalArgumentException(\"[from] parameter cannot be negative\");\n+ }\n this.from = from;\n return this;\n }", "filename": "core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -360,6 +360,11 @@ public void testParseIndicesBoost() throws IOException {\n }\n }\n \n+ public void testNegativeFromErrors() {\n+ IllegalArgumentException expected = expectThrows(IllegalArgumentException.class, () -> new SearchSourceBuilder().from(-2));\n+ assertEquals(\"[from] parameter cannot be negative\", expected.getMessage());\n+ }\n+\n private void assertIndicesBoostParseErrorMessage(String restContent, String expectedErrorMessage) throws IOException {\n try (XContentParser parser = createParser(JsonXContent.jsonXContent, restContent)) {\n ParsingException e = expectThrows(ParsingException.class, () -> SearchSourceBuilder.fromXContent(createParseContext(parser)));", "filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" }, { "diff": "@@ -17,6 +17,14 @@ setup:\n index: test_1\n from: 10000\n \n+---\n+\"Request with negative from value\":\n+ - do:\n+ catch: /\\[from\\] parameter cannot be negative/\n+ search:\n+ index: test_1\n+ from: -2\n+\n ---\n \"Request window limits with scroll\":\n - do:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/30_limits.yaml", "status": "modified" } ] }
{ "body": "After #21123 when Elasticsearch receive a HEAD request it returns the Content-Length of the that it would return for a GET request with an empty response body. Except in the document exists, index exists, and type exists requests which return 0. We should fix them to also return the Content-Length that would be in the response.\n", "comments": [ { "body": "I'm adding the v5.1.0 label too, I think we should target a fix there.\n", "created_at": "2016-10-26T05:16:19Z" }, { "body": "These are all addressed now. Closing.", "created_at": "2017-06-12T12:10:12Z" } ], "number": 21125, "title": "Some endpoints return Content-Length: 0 for HEAD requests" }
{ "body": "Previous changes aligned HEAD requests to be consistent with GET\r\nrequests to the same endpoint. This commit aligns the REST spec for the\r\nimpacted endpoints.\r\n\r\nRelates #21125\r\n", "number": 23313, "review_comments": [], "title": "Align REST specs for HEAD requests" }
{ "commits": [ { "message": "Align REST specs for HEAD requests\n\nPrevious changes aligned HEAD requests to be consistent with GET\nrequests to the same endpoint. This commit aligns the REST spec for the\nimpacted endpoints." } ], "files": [ { "diff": "@@ -0,0 +1,71 @@\n+{\n+ \"exists_source\": {\n+ \"documentation\": \"http://www.elastic.co/guide/en/elasticsearch/reference/master/docs-get.html\",\n+ \"methods\": [\"HEAD\"],\n+ \"url\": {\n+ \"path\": \"/{index}/{type}/{id}/_source\",\n+ \"paths\": [\"/{index}/{type}/{id}/_source\"],\n+ \"parts\": {\n+ \"id\": {\n+ \"type\" : \"string\",\n+ \"required\" : true,\n+ \"description\" : \"The document ID\"\n+ },\n+ \"index\": {\n+ \"type\" : \"string\",\n+ \"required\" : true,\n+ \"description\" : \"The name of the index\"\n+ },\n+ \"type\": {\n+ \"type\" : \"string\",\n+ \"required\" : true,\n+ \"description\" : \"The type of the document; use `_all` to fetch the first document matching the ID across all types\"\n+ }\n+ },\n+ \"params\": {\n+ \"parent\": {\n+ \"type\" : \"string\",\n+ \"description\" : \"The ID of the parent document\"\n+ },\n+ \"preference\": {\n+ \"type\" : \"string\",\n+ \"description\" : \"Specify the node or shard the operation should be performed on (default: random)\"\n+ },\n+ \"realtime\": {\n+ \"type\" : \"boolean\",\n+ \"description\" : \"Specify whether to perform the operation in realtime or search mode\"\n+ },\n+ \"refresh\": {\n+ \"type\" : \"boolean\",\n+ \"description\" : \"Refresh the shard containing the document before performing the operation\"\n+ },\n+ \"routing\": {\n+ \"type\" : \"string\",\n+ \"description\" : \"Specific routing value\"\n+ },\n+ \"_source\": {\n+ \"type\" : \"list\",\n+ \"description\" : \"True or false to return the _source field or not, or a list of fields to return\"\n+ },\n+ \"_source_exclude\": {\n+ \"type\" : \"list\",\n+ \"description\" : \"A list of fields to exclude from the returned _source field\"\n+ },\n+ \"_source_include\": {\n+ \"type\" : \"list\",\n+ \"description\" : \"A list of fields to extract and return from the _source field\"\n+ },\n+ \"version\" : {\n+ \"type\" : \"number\",\n+ \"description\" : \"Explicit version number for concurrency control\"\n+ },\n+ \"version_type\": {\n+ \"type\" : \"enum\",\n+ \"options\" : [\"internal\", \"external\", \"external_gte\", \"force\"],\n+ \"description\" : \"Specific version type\"\n+ }\n+ }\n+ },\n+ \"body\": null\n+ }\n+}", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/exists_source.json", "status": "added" }, { "diff": "@@ -1,27 +1,44 @@\n {\n \"indices.exists\": {\n \"documentation\": \"http://www.elastic.co/guide/en/elasticsearch/reference/master/indices-exists.html\",\n- \"methods\": [\"HEAD\"],\n+ \"methods\": [ \"HEAD\" ],\n \"url\": {\n \"path\": \"/{index}\",\n- \"paths\": [\"/{index}\"],\n+ \"paths\": [ \"/{index}\" ],\n \"parts\": {\n \"index\": {\n- \"type\" : \"list\",\n- \"required\" : true,\n- \"description\" : \"A comma-separated list of indices to check\"\n+ \"type\": \"list\",\n+ \"required\": true,\n+ \"description\": \"A comma-separated list of index names\"\n }\n },\n \"params\": {\n+ \"local\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Return local information, do not retrieve the state from master node (default: false)\"\n+ },\n+ \"ignore_unavailable\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Ignore unavailable indexes (default: false)\"\n+ },\n+ \"allow_no_indices\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Ignore if a wildcard expression resolves to no 
concrete indices (default: false)\"\n+ },\n \"expand_wildcards\": {\n- \"type\" : \"enum\",\n- \"options\" : [\"open\",\"closed\",\"none\",\"all\"],\n- \"default\" : \"open\",\n- \"description\" : \"Whether to expand wildcard expression to concrete indices that are open, closed or both.\"\n+ \"type\": \"enum\",\n+ \"options\": [ \"open\", \"closed\", \"none\", \"all\" ],\n+ \"default\": \"open\",\n+ \"description\": \"Whether wildcard expressions should get expanded to open or closed indices (default: open)\"\n },\n- \"local\": {\n- \"type\": \"boolean\",\n- \"description\": \"Return local information, do not retrieve the state from master node (default: false)\"\n+ \"flat_settings\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Return settings in flat format (default: false)\"\n+ },\n+ \"include_defaults\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Whether to return all default setting for each of the indices.\",\n+ \"default\": false\n }\n }\n },", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists.json", "status": "modified" }, { "diff": "@@ -4,7 +4,7 @@\n \"methods\": [\"HEAD\"],\n \"url\": {\n \"path\": \"/_alias/{name}\",\n- \"paths\": [\"/_alias/{name}\", \"/{index}/_alias/{name}\", \"/{index}/_alias\"],\n+ \"paths\": [\"/_alias/{name}\", \"/{index}/_alias/{name}\"],\n \"parts\": {\n \"index\": {\n \"type\" : \"list\",\n@@ -17,22 +17,22 @@\n },\n \"params\": {\n \"ignore_unavailable\": {\n- \"type\" : \"boolean\",\n- \"description\" : \"Whether specified concrete indices should be ignored when unavailable (missing or closed)\"\n+ \"type\" : \"boolean\",\n+ \"description\" : \"Whether specified concrete indices should be ignored when unavailable (missing or closed)\"\n },\n \"allow_no_indices\": {\n- \"type\" : \"boolean\",\n- \"description\" : \"Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified)\"\n+ \"type\" : \"boolean\",\n+ \"description\" : \"Whether to ignore if a wildcard indices expression resolves into no concrete indices. 
(This includes `_all` string or when no indices have been specified)\"\n },\n \"expand_wildcards\": {\n- \"type\" : \"enum\",\n- \"options\" : [\"open\",\"closed\",\"none\",\"all\"],\n- \"default\" : [\"open\", \"closed\"],\n- \"description\" : \"Whether to expand wildcard expression to concrete indices that are open, closed or both.\"\n+ \"type\" : \"enum\",\n+ \"options\" : [\"open\",\"closed\",\"none\",\"all\"],\n+ \"default\" : \"all\",\n+ \"description\" : \"Whether to expand wildcard expression to concrete indices that are open, closed or both.\"\n },\n \"local\": {\n- \"type\": \"boolean\",\n- \"description\": \"Return local information, do not retrieve the state from master node (default: false)\"\n+ \"type\": \"boolean\",\n+ \"description\": \"Return local information, do not retrieve the state from master node (default: false)\"\n }\n }\n },", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_alias.json", "status": "modified" }, { "diff": "@@ -4,15 +4,19 @@\n \"methods\": [\"HEAD\"],\n \"url\": {\n \"path\": \"/_template/{name}\",\n- \"paths\": [\"/_template/{name}\"],\n+ \"paths\": [ \"/_template/{name}\" ],\n \"parts\": {\n \"name\": {\n- \"type\": \"string\",\n- \"required\": true,\n- \"description\": \"The name of the template\"\n+ \"type\": \"list\",\n+ \"required\": false,\n+ \"description\": \"The comma separated names of the index templates\"\n }\n },\n \"params\": {\n+ \"flat_settings\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Return settings in flat format (default: false)\"\n+ },\n \"master_timeout\": {\n \"type\": \"time\",\n \"description\": \"Explicit operation timeout for connection to master node\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_template.json", "status": "modified" } ] }
{ "body": "When sending the request through the REST layer, the update indices settings api only supports yaml and json format. An error is returned whenever a request in CBOR or SMILE format is sent, while it should also support these two binary formats like every other api.", "comments": [], "number": 23242, "title": "Update indices settings api doesn't support SMILE nor CBOR formats" }
{ "body": "Indices update settings api should support requests in binary formats like cbor or smile, not just json or yaml. Also expanded testing on the different ways to provide index settings and remove dead code around ability to provide settings as query string parameters. The latter didn't work since 5.0 , where we introduced validation of query_string parameters. Given that we received no complaint about that, rather than restoring that ability, I will add a note to the 5.0 migration guide.\r\n\r\nRelates to #23245\r\n\r\nCloses #23242 ", "number": 23309, "review_comments": [], "title": "Update indices settings api to support CBOR and SMILE format" }
{ "commits": [ { "message": "Update indices settings api to support CBOR and SMILE format\n\nAlso expand testing on the different ways to provide index settings and remove dead code around ability to provide settings as query string parameters\n\nCloses #23242" } ], "files": [ { "diff": "@@ -24,30 +24,20 @@\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.rest.BaseRestHandler;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.action.AcknowledgedRestListener;\n \n import java.io.IOException;\n+import java.util.HashMap;\n import java.util.Map;\n import java.util.Set;\n \n-import static java.util.Collections.unmodifiableSet;\n import static org.elasticsearch.client.Requests.updateSettingsRequest;\n-import static org.elasticsearch.common.util.set.Sets.newHashSet;\n \n public class RestUpdateSettingsAction extends BaseRestHandler {\n- private static final Set<String> VALUES_TO_EXCLUDE = unmodifiableSet(newHashSet(\n- \"error_trace\",\n- \"pretty\",\n- \"timeout\",\n- \"master_timeout\",\n- \"index\",\n- \"preserve_existing\",\n- \"expand_wildcards\",\n- \"ignore_unavailable\",\n- \"allow_no_indices\"));\n \n public RestUpdateSettingsAction(Settings settings, RestController controller) {\n super(settings);\n@@ -63,29 +53,22 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n updateSettingsRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", updateSettingsRequest.masterNodeTimeout()));\n updateSettingsRequest.indicesOptions(IndicesOptions.fromRequest(request, updateSettingsRequest.indicesOptions()));\n \n- Settings.Builder updateSettings = Settings.builder();\n- String bodySettingsStr = request.content().utf8ToString();\n- if (Strings.hasText(bodySettingsStr)) {\n- Settings buildSettings = Settings.builder()\n- .loadFromSource(bodySettingsStr, request.getXContentType())\n- .build();\n- for (Map.Entry<String, String> entry : buildSettings.getAsMap().entrySet()) {\n- String key = entry.getKey();\n- String value = entry.getValue();\n+ Map<String, Object> settings = new HashMap<>();\n+ if (request.hasContent()) {\n+ try (XContentParser parser = request.contentParser()) {\n+ Map<String, Object> bodySettings = parser.map();\n+ Object innerBodySettings = bodySettings.get(\"settings\");\n // clean up in case the body is wrapped with \"settings\" : { ... 
}\n- if (key.startsWith(\"settings.\")) {\n- key = key.substring(\"settings.\".length());\n+ if (innerBodySettings instanceof Map) {\n+ @SuppressWarnings(\"unchecked\")\n+ Map<String, Object> innerBodySettingsMap = (Map<String, Object>) innerBodySettings;\n+ settings.putAll(innerBodySettingsMap);\n+ } else {\n+ settings.putAll(bodySettings);\n }\n- updateSettings.put(key, value);\n }\n }\n- for (Map.Entry<String, String> entry : request.params().entrySet()) {\n- if (VALUES_TO_EXCLUDE.contains(entry.getKey())) {\n- continue;\n- }\n- updateSettings.put(entry.getKey(), entry.getValue());\n- }\n- updateSettingsRequest.settings(updateSettings);\n+ updateSettingsRequest.settings(settings);\n \n return channel -> client.admin().indices().updateSettings(updateSettingsRequest, new AcknowledgedRestListener<>(channel));\n }\n@@ -94,5 +77,4 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n protected Set<String> responseParams() {\n return Settings.FORMAT_PARAMS;\n }\n-\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -36,7 +36,7 @@ setup:\n ignore_unavailable: true\n index: test-index, non-existing\n body:\n- number_of_replicas: 1\n+ index.number_of_replicas: 1\n \n - do:\n indices.get_settings: {}\n@@ -81,7 +81,6 @@ setup:\n indices.get_settings:\n flat_settings: false\n \n-\n - match:\n test-index.settings.index.number_of_replicas: \"0\"\n - match:\n@@ -96,8 +95,9 @@ setup:\n preserve_existing: true\n index: test-index\n body:\n- index.translog.durability: \"request\"\n- index.query_string.lenient: \"true\"\n+ settings:\n+ index.translog.durability: \"request\"\n+ index.query_string.lenient: \"true\"\n \n - do:\n indices.get_settings:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_settings/10_basic.yaml", "status": "modified" } ] }
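The rewritten handler above accepts the settings either flat or wrapped in a top-level "settings" object. Below is a self-contained sketch of that unwrapping logic using plain maps; Elasticsearch parses the body with an XContentParser, and the class and method names here are made up.

```java
import java.util.HashMap;
import java.util.Map;

public class UnwrapSettingsSketch {
    // Accept either {"index.number_of_replicas": 1} or {"settings": {"index.number_of_replicas": 1}}.
    @SuppressWarnings("unchecked")
    static Map<String, Object> unwrap(Map<String, Object> body) {
        Object inner = body.get("settings");
        if (inner instanceof Map) {
            return new HashMap<>((Map<String, Object>) inner);
        }
        return new HashMap<>(body);
    }

    public static void main(String[] args) {
        Map<String, Object> flat = Map.of("index.number_of_replicas", 1);
        Map<String, Object> wrapped = Map.of("settings", flat);
        System.out.println(unwrap(flat));    // {index.number_of_replicas=1}
        System.out.println(unwrap(wrapped)); // {index.number_of_replicas=1}
    }
}
```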
{ "body": "**Elasticsearch version**: 5.1.1\r\n\r\n**Plugins installed**: [analysis-icu]\r\n\r\n**JVM version**: Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111-internal/25.111-b14\r\n\r\n**OS version**: Linux/4.8.0-34-generic/amd64\r\n\r\n**Description of the problem including expected versus actual behavior**: \r\n\r\nA NullPointerException is displayed to the user when the Range aggregation is malformed:\r\n\r\n```json\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n ],\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n },\r\n \"status\": 500\r\n}\r\n```\r\n\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Run a badly formatted query:\r\n\r\n```json\r\nGET /_search\r\n{\r\n \"aggs\": {\r\n \"foobar\": {\r\n \"range\": {\r\n \"field\": \"hey\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n**Provide logs (if relevant)**:\r\n\r\nTrace:\r\n\r\n```\r\n[2017-01-31T10:49:11,250][WARN ][r.suppressed ] path: /_search, params: {}\r\njava.lang.NullPointerException: null\r\n\tat org.elasticsearch.search.aggregations.bucket.range.RangeParser.createFactory(RangeParser.java:58) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.search.aggregations.bucket.range.RangeParser.createFactory(RangeParser.java:39) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser.parse(AbstractValuesSourceParser.java:150) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.search.aggregations.support.AbstractValuesSourceParser$NumericValuesSourceParser.parse(AbstractValuesSourceParser.java:48) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:156) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:80) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.search.builder.SearchSourceBuilder.parseXContent(SearchSourceBuilder.java:1018) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.rest.action.search.RestSearchAction.parseSearchRequest(RestSearchAction.java:105) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.rest.action.search.RestSearchAction.prepareRequest(RestSearchAction.java:81) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:66) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.rest.RestController.executeHandler(RestController.java:243) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:200) [elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.http.HttpServer.dispatchRequest(HttpServer.java:113) [elasticsearch-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:507) [transport-netty4-5.1.1.jar:5.1.1]\r\n\tat org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:69) [transport-netty4-5.1.1.jar:5.1.1]\r\n\tat io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:66) [transport-netty4-5.1.1.jar:5.1.1]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [netty-codec-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) [netty-codec-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:536) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:490) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450) [netty-transport-4.1.6.Final.jar:4.1.6.Final]\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873) [netty-common-4.1.6.Final.jar:4.1.6.Final]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_111-internal]\r\n```\r\n\r\n", "comments": [ { "body": "Hello @clintongormley @jpountz, I am looking for my first elasticsearch contribution!\r\n\r\nHowever, I cannot reproduce this bug on my fresh elasticsearch build. 
\r\nThis is the response for `GET /`:\r\n```\r\n{\r\n \"name\": \"node-0\",\r\n \"cluster_name\": \"distribution_run\",\r\n \"cluster_uuid\": \"fQLnWXIERNuBj5vWoNtBlw\",\r\n \"version\": {\r\n \"number\": \"6.0.0-alpha1\",\r\n \"build_hash\": \"a4ac29c\",\r\n \"build_date\": \"2017-01-23T13:54:32.228Z\",\r\n \"build_snapshot\": true,\r\n \"lucene_version\": \"6.4.0\"\r\n },\r\n \"tagline\": \"You Know, for Search\"\r\n}\r\n```\r\n \r\nand this is what I get for the issue's request:\r\n```\r\nGET /_search\r\n{\r\n \"aggs\": {\r\n \"foobar\": {\r\n \"range\": {\r\n \"field\": \"hey\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n```\r\n{\r\n \"took\": 27,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 0,\r\n \"max_score\": null,\r\n \"hits\": []\r\n },\r\n \"aggregations\": {\r\n \"foobar\": {\r\n \"buckets\": []\r\n }\r\n }\r\n}\r\n```\r\nMy `java -version` on `OS X 10.11.6`\r\n```\r\njava version \"1.8.0_60\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_60-b27)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)\r\n```\r\n\r\nI understand that my build is 1 version later, should I build the 5.1.1 version and check the issue?\r\nThanks!", "created_at": "2017-02-01T14:51:09Z" }, { "body": "Did you test on master or the 5.x branch? If it does not reproduce on 5.x, then then 5.1 branch is indeed what I would look at next.", "created_at": "2017-02-01T16:51:56Z" }, { "body": "Yes I was testing on master. I just checked 5.x, 5.1, 5.2 and it does reproduce on 5.1 only.\r\nSo I guess no further action is needed since it is ok on master and 5.x?", "created_at": "2017-02-01T23:18:39Z" }, { "body": "Good to know that issue is already fixed in 5.2! Thank you for investigating @giorgosp, we appreciate! I will close this issue now.", "created_at": "2017-02-02T00:27:53Z" }, { "body": "@giorgosp I don't think it's resolved in 5.x nor master. The only difference with 5.1 is that you need to have at least one document with the range field to fail the query. The following recreation for instance fails in 5.1, 5.2, 5.x and master:\r\n\r\n`````\r\nPUT t/t/1\r\n{\r\n \"number\": 3\r\n}\r\n\r\nGET /_search\r\n{\r\n \"aggs\": {\r\n \"foobar\": {\r\n \"range\": {\r\n \"field\": \"number\"\r\n }\r\n }\r\n }\r\n}\r\n`````", "created_at": "2017-02-02T09:54:32Z" }, { "body": "Ok, I am checking it", "created_at": "2017-02-02T22:58:25Z" }, { "body": "Ok, it fails if the range aggregation query is invalid and the \"field\" value is an existing field inside the document (changing \"field\": \"number\" to \"field\": \"number1123\" doesn't reproduce the issue).\r\n\r\nI will keep working on it in my spare time but since I am also trying to familiarize myself with the codebase, if someone else wants to fix it faster than me then it is ok.", "created_at": "2017-02-03T01:41:07Z" } ], "number": 22881, "title": "Range aggregation with no ranges throw a NullPointerException" }
{ "body": "The issue is reproduced in 5.1, 5.2, 5.3, 5.x and master branch. My fix is on master.\r\n\r\nThis commit just checks the ranges array size at the spot where the exception is raised. I am not sure if this should be fixed at a higher level in the code.\r\n\r\nCloses #22881 \r\n", "number": 23241, "review_comments": [ { "body": "Could we put the check here so that it is run for both the transport and rest APIs? When put here we should make it an `IllegalArgumentException` and we should update the other Range Aggregation Builders too.", "created_at": "2017-03-14T08:42:46Z" } ], "title": "Fix ArrayIndexOutOfBoundsException when no ranges are specified in the query" }
{ "commits": [ { "message": "Fix ArrayIndexOutOfBoundsException in Range Aggregation when no ranges are specified in the query" }, { "message": "Revert \"Fix ArrayIndexOutOfBoundsException in Range Aggregation when no ranges are specified in the query\"\n\nThis reverts commit ad57d8feb3577a64b37de28c6f3df96a3a49fe93." }, { "message": "Fix range aggregation out of bounds exception when there are no ranges in a range or date_range query" }, { "message": "Fix range aggregation out of bounds exception when there are no ranges in the query\n\nThis fix is applied to range queries, date range queries, ip range queries and geo distance aggregation queries" } ], "files": [ { "diff": "@@ -140,6 +140,9 @@ protected RangeAggregatorFactory innerBuild(SearchContext context, ValuesSourceC\n AggregatorFactory<?> parent, Builder subFactoriesBuilder) throws IOException {\n // We need to call processRanges here so they are parsed before we make the decision of whether to cache the request\n Range[] ranges = processRanges(context, config);\n+ if (ranges.length == 0) {\n+ throw new IllegalArgumentException(\"No [ranges] specified for the [\" + this.getName() + \"] aggregation\");\n+ }\n return new RangeAggregatorFactory(name, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder,\n metaData);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -283,9 +283,12 @@ public DateRangeAggregationBuilder addUnboundedFrom(DateTime from) {\n @Override\n protected DateRangeAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig<Numeric> config,\n AggregatorFactory<?> parent, Builder subFactoriesBuilder) throws IOException {\n- // We need to call processRanges here so they are parsed and we know whether `now` has been used before we make \n+ // We need to call processRanges here so they are parsed and we know whether `now` has been used before we make\n // the decision of whether to cache the request\n Range[] ranges = processRanges(context, config);\n+ if (ranges.length == 0) {\n+ throw new IllegalArgumentException(\"No [ranges] specified for the [\" + this.getName() + \"] aggregation\");\n+ }\n return new DateRangeAggregatorFactory(name, config, ranges, keyed, rangeFactory, context, parent, subFactoriesBuilder,\n metaData);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/date/DateRangeAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -384,6 +384,9 @@ public boolean keyed() {\n ValuesSourceConfig<ValuesSource.GeoPoint> config, AggregatorFactory<?> parent, Builder subFactoriesBuilder)\n throws IOException {\n Range[] ranges = this.ranges.toArray(new Range[this.range().size()]);\n+ if (ranges.length == 0) {\n+ throw new IllegalArgumentException(\"No [ranges] specified for the [\" + this.getName() + \"] aggregation\");\n+ }\n return new GeoDistanceRangeAggregatorFactory(name, config, origin, ranges, unit, distanceType, keyed, context, parent,\n subFactoriesBuilder, metaData);\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -369,6 +369,9 @@ private static BytesRef toBytesRef(String ip) {\n AggregatorFactory<?> parent, Builder subFactoriesBuilder)\n throws IOException {\n List<BinaryRangeAggregator.Range> ranges = new ArrayList<>();\n+ if(this.ranges.size() == 0){\n+ throw new IllegalArgumentException(\"No 
[ranges] specified for the [\" + this.getName() + \"] aggregation\");\n+ }\n for (Range range : this.ranges) {\n ranges.add(new BinaryRangeAggregator.Range(range.key, toBytesRef(range.from), toBytesRef(range.to)));\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/ip/IpRangeAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations.bucket;\n \n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.plugins.Plugin;\n@@ -50,6 +51,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.core.IsNull.notNullValue;\n import static org.hamcrest.core.IsNull.nullValue;\n@@ -865,6 +867,19 @@ public void testEmptyAggregation() throws Exception {\n assertThat(buckets.get(0).getAggregations().asList().isEmpty(), is(true));\n }\n \n+ public void testNoRangesInQuery() {\n+ try {\n+ client().prepareSearch(\"idx\")\n+ .addAggregation(dateRange(\"my_date_range_agg\").field(\"value\"))\n+ .execute().actionGet();\n+ fail();\n+ } catch (SearchPhaseExecutionException spee){\n+ Throwable rootCause = spee.getCause().getCause();\n+ assertThat(rootCause, instanceOf(IllegalArgumentException.class));\n+ assertEquals(rootCause.getMessage(), \"No [ranges] specified for the [my_date_range_agg] aggregation\");\n+ }\n+ }\n+\n /**\n * Make sure that a request using a script does not get cached and a request\n * not using a script does get cached.", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/DateRangeIT.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n import org.elasticsearch.Version;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.geo.GeoPoint;\n@@ -51,6 +52,7 @@\n import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.sameInstance;\n import static org.hamcrest.core.IsNull.notNullValue;\n@@ -439,6 +441,19 @@ public void testEmptyAggregation() throws Exception {\n assertThat(buckets.get(0).getDocCount(), equalTo(0L));\n }\n \n+ public void testNoRangesInQuery() {\n+ try {\n+ client().prepareSearch(\"idx\")\n+ .addAggregation(geoDistance(\"geo_dist\", new GeoPoint(52.3760, 4.894)))\n+ .execute().actionGet();\n+ fail();\n+ } catch (SearchPhaseExecutionException spee){\n+ Throwable rootCause = spee.getCause().getCause();\n+ assertThat(rootCause, instanceOf(IllegalArgumentException.class));\n+ assertEquals(rootCause.getMessage(), \"No [ranges] specified for the [geo_dist] aggregation\");\n+ }\n+ }\n+\n public void testMultiValues() throws Exception {\n 
SearchResponse response = client().prepareSearch(\"idx-multi\")\n .addAggregation(geoDistance(\"amsterdam_rings\", new GeoPoint(52.3760, 4.894))", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/GeoDistanceIT.java", "status": "modified" }, { "diff": "@@ -18,18 +18,9 @@\n */\n package org.elasticsearch.search.aggregations.bucket;\n \n-import org.elasticsearch.cluster.health.ClusterHealthStatus;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n-import static org.hamcrest.Matchers.containsString;\n-\n-import java.util.Arrays;\n-import java.util.Collection;\n-import java.util.Collections;\n-import java.util.List;\n-import java.util.Map;\n-\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.common.inject.internal.Nullable;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.plugins.ScriptPlugin;\n@@ -42,6 +33,17 @@\n import org.elasticsearch.search.aggregations.bucket.range.Range;\n import org.elasticsearch.test.ESIntegTestCase;\n \n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.instanceOf;\n+\n @ESIntegTestCase.SuiteScopeTestCase\n public class IpRangeIT extends ESIntegTestCase {\n \n@@ -221,6 +223,20 @@ public void testRejectsValueScript() {\n assertThat(e.getMessage(), containsString(\"[ip_range] does not support scripts\"));\n }\n \n+ public void testNoRangesInQuery() {\n+ try {\n+ client().prepareSearch(\"idx\").addAggregation(\n+ AggregationBuilders.ipRange(\"my_range\")\n+ .field(\"ip\"))\n+ .execute().actionGet();\n+ fail();\n+ } catch (SearchPhaseExecutionException spee){\n+ Throwable rootCause = spee.getCause().getCause();\n+ assertThat(rootCause, instanceOf(IllegalArgumentException.class));\n+ assertEquals(rootCause.getMessage(), \"No [ranges] specified for the [my_range] aggregation\");\n+ }\n+ }\n+\n public static class DummyScriptPlugin extends Plugin implements ScriptPlugin {\n @Override\n public List<NativeScriptFactory> getNativeScripts() {", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/IpRangeIT.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.search.aggregations.bucket;\n \n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.fielddata.ScriptDocValues;\n@@ -52,6 +53,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.core.IsNull.notNullValue;\n import static org.hamcrest.core.IsNull.nullValue;\n@@ 
-660,6 +662,20 @@ public void testEmptyRange() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(0L));\n }\n \n+ public void testNoRangesInQuery() {\n+ try {\n+ client().prepareSearch(\"idx\")\n+ .addAggregation(range(\"foobar\")\n+ .field(SINGLE_VALUED_FIELD_NAME))\n+ .execute().actionGet();\n+ fail();\n+ } catch (SearchPhaseExecutionException spee){\n+ Throwable rootCause = spee.getCause().getCause();\n+ assertThat(rootCause, instanceOf(IllegalArgumentException.class));\n+ assertEquals(rootCause.getMessage(), \"No [ranges] specified for the [foobar] aggregation\");\n+ }\n+ }\n+\n public void testScriptMultiValued() throws Exception {\n Script script =\n new Script(ScriptType.INLINE, CustomScriptPlugin.NAME, \"doc['\" + MULTI_VALUED_FIELD_NAME + \"'].values\", Collections.emptyMap());", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/RangeIT.java", "status": "modified" } ] }
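The integration tests above assert the new failure with the try/fail/catch idiom; note that in their `assertEquals` calls the actual message is passed in the expected-value position, which only affects the wording of a failure report. As an illustration only (not what the PR adds), the same assertion could be written with `expectThrows` from the Lucene test framework that `ESIntegTestCase` builds on:

```java
// Sketch only: assumes this lives in an ESIntegTestCase subclass such as RangeIT,
// with the same static imports (range, instanceOf) the test file already uses.
public void testNoRangesInQuery() {
    SearchPhaseExecutionException e = expectThrows(SearchPhaseExecutionException.class,
            () -> client().prepareSearch("idx")
                    .addAggregation(range("foobar").field(SINGLE_VALUED_FIELD_NAME))
                    .get());
    Throwable rootCause = e.getCause().getCause();
    assertThat(rootCause, instanceOf(IllegalArgumentException.class));
    assertEquals("No [ranges] specified for the [foobar] aggregation", rootCause.getMessage());
}
```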
{ "body": "**Elasticsearch version**: 5.2.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8.0_112\r\n\r\n**OS version**: Linux (Debian 8.6)\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nES doesn't start due to a problem with REGEX matching of entries in `/proc/self/cgroup` (see stacktrace below).\r\n\r\nThe problem seems to be this pattern in OsProbe:191:\r\n```\r\nprivate static final Pattern CONTROL_GROUP_PATTERN = Pattern.compile(\"\\\\d+:([^:,]+(?:,[^:,]+)?):(/.*)\");\r\n```\r\n\r\nIt works for one or two components only.\r\n* `1:cpuset,cpu:/`: This would match.\r\n* `1:cpuset,cpu,cpuacct:/`: This wouldn't.\r\n\r\nUnfortunately my Debian 8 installation has a line with 11 components. Thus I'm unable to start ES on any of our Debian 8 Linux-VMs. Is there a workaround, e.g. a configuration option that skips this test altogether?\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n```\r\n[2017-02-16T18:43:12,635][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]\r\norg.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: No match found\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.2.1.jar:5.2.1]\r\nCaused by: java.lang.IllegalStateException: No match found\r\n\tat java.util.regex.Matcher.group(Matcher.java:536) ~[?:1.8.0_112]\r\n\tat org.elasticsearch.monitor.os.OsProbe.getControlGroups(OsProbe.java:216) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:414) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.monitor.os.OsProbe.osStats(OsProbe.java:466) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.monitor.os.OsService.<init>(OsService.java:45) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.monitor.MonitorService.<init>(MonitorService.java:45) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.node.Node.<init>(Node.java:345) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.node.Node.<init>(Node.java:232) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:241) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:241) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\t... 
6 more\r\n```", "comments": [ { "body": "Thanks for the report @mg-2014 and sorry for this.", "created_at": "2017-02-16T19:33:57Z" }, { "body": "I opened #23219.", "created_at": "2017-02-16T19:51:02Z" }, { "body": "Thanks for fixing it @jasontedor.", "created_at": "2017-02-17T08:12:21Z" }, { "body": "I am also running into this problem. Any ideas when this fix will be released?", "created_at": "2017-02-22T15:44:30Z" }, { "body": "I'm very sorry, but we do not provide release timelines.", "created_at": "2017-02-22T15:45:54Z" }, { "body": "how to resolve this?\r\n", "created_at": "2017-10-24T16:11:36Z" }, { "body": "@mrunalpagnis-RS the fix is available in 5.2.2+, 5.3.0+, and 5.4.0+, if you upgrade it should be resolved.", "created_at": "2017-10-24T16:40:39Z" }, { "body": "Can you post a working solution for all those who encounter this issue?! I just encountered this issue and can't seem to find a fix anywhere... \r\nPlease is there a way to fix this!!!!!", "created_at": "2022-11-17T20:04:17Z" }, { "body": "@D3epDiv3r This should be fixed in all versions since 5.2.2. Are you on an older version?", "created_at": "2022-11-17T20:05:52Z" }, { "body": "Hey @jasontedor, thanks for the quick response! I appreciate it!\r\nI made this post on stackoverflow and on the discussions page on Elastic here: https://discuss.elastic.co/t/elasticsearch-fails-to-start-java-lang-illegal-state-exception-no-match-found/319239\r\n\r\nThis is the issue I am having:\r\n\r\n\r\nI have been trying to get Elastic Stack to run for at least 2 days now, I have been given a project in school that requires me to use Elasticsearch 5.2.0 specifically (Logstash 5.2.0, and Kibana 5.2.0) I have not been able to get past installing Elasticsearch yet because it always fails to run the service. I have Java8 and Java11 (using Java8 for ES 5.2.0). 
The service always fails to run and anytime I check the error logs I keep getting the below:\r\n\r\n```\r\n[2022-11-17T20:37:58,110][INFO ][o.e.p.PluginsService ] [F-2-m97] loaded module [reindex]\r\n[2022-11-17T20:37:58,110][INFO ][o.e.p.PluginsService ] [F-2-m97] loaded module [transport-netty3]\r\n[2022-11-17T20:37:58,110][INFO ][o.e.p.PluginsService ] [F-2-m97] loaded module [transport-netty4]\r\n[2022-11-17T20:37:58,110][INFO ][o.e.p.PluginsService ] [F-2-m97] no plugins loaded\r\n[2022-11-17T20:37:58,779][ERROR][o.e.b.Bootstrap ] Exception\r\njava.lang.IllegalStateException: No match found\r\n at java.util.regex.Matcher.group(Matcher.java:645) ~[?:?]\r\n at org.elasticsearch.monitor.os.OsProbe.getControlGroups(OsProbe.java:216) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:414) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.os.OsProbe.osStats(OsProbe.java:466) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.os.OsService.<init>(OsService.java:45) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.MonitorService.<init>(MonitorService.java:45) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.node.Node.<init>(Node.java:345) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.node.Node.<init>(Node.java:232) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:241) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:241) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) [elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) [elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) [elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) [elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) [elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) [elasticsearch-5.2.0.jar:5.2.0]\r\n[2022-11-17T20:37:58,788][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]\r\norg.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: No match found\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.2.0.jar:5.2.0]\r\nCaused by: java.lang.IllegalStateException: No match found\r\n at java.util.regex.Matcher.group(Matcher.java:645) ~[?:?]\r\n at 
org.elasticsearch.monitor.os.OsProbe.getControlGroups(OsProbe.java:216) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:414) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.os.OsProbe.osStats(OsProbe.java:466) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.os.OsService.<init>(OsService.java:45) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.monitor.MonitorService.<init>(MonitorService.java:45) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.node.Node.<init>(Node.java:345) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.node.Node.<init>(Node.java:232) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:241) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:241) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.2.0.jar:5.2.0]\r\n ... 6 more\r\n```\r\n\r\nI have checked almost all online resources there is, with no luck! No one has a solution to this issue, at least not one I have been able to figure out... Does anyone have a fix for this?\r\n", "created_at": "2022-11-17T21:55:26Z" }, { "body": "Also, I would have used the latest version of the Elastic Stack, however since my school explicitly states to use version 5.2.0 and a lot of configurations have changed since the release of that version to 8.5, I am not sure what to do.\r\nI was told to use the dashboards provided by Microsoft here (https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-intrusion-detection-open-source-tools#create-a-kibana-dashboard) and non of those dashboards are able to be imported into the ELK stack 8.5... Also, it has become too challenging to add index patterns to that version.\r\nDo you think the dashboards will work on 5.2.2 and how similar is that version to 5.2.0 (in terms of layout, functionality, and configurations)?\r\nCan I also install Elasticsearch 5.2.2, Logstash 5.2.2, and Kibana 5.2.2? because I noticed that different versions just do not work together!", "created_at": "2022-11-17T22:02:12Z" }, { "body": "5.2.2 should be backwards compatible with 5.2.0. I encourage you to at least upgrade to that version so that you get a fix for the issue that you're encountering, and pick up any other bug fixes that were delivered in the 5.2 series.", "created_at": "2022-11-18T11:34:19Z" }, { "body": "That makes perfect sense! Will do just that, thank you so much 👍 ", "created_at": "2022-11-18T12:57:07Z" }, { "body": "This is odd... I still face the same issue with v5.2.2\r\n\r\n```\r\n ... 6 more\r\n[2022-11-18T14:57:54,900][INFO ][o.e.n.Node ] [] initializing ...\r\n[2022-11-18T14:57:54,996][INFO ][o.e.e.NodeEnvironment ] [5Yuafc8] using [1] data paths, mounts [[/ (/dev/root)]], net usable_space [26.5gb], net total_space [28.8gb], spins? 
[possibly], types [ext4]\r\n[2022-11-18T14:57:54,996][INFO ][o.e.e.NodeEnvironment ] [5Yuafc8] heap size [1.9gb], compressed ordinary object pointers [true]\r\n[2022-11-18T14:57:54,998][INFO ][o.e.n.Node ] node name [5Yuafc8] derived from node ID [5Yuafc8cTMyJO3VwKScGpA]; set [node.name] to override\r\n[2022-11-18T14:57:55,000][INFO ][o.e.n.Node ] version[5.2.2], pid[9279], build[f9d9b74/2017-02-24T17:26:45.835Z], OS[Linux/5.15.0-1023-azure/amd64], JVM[Private Build/OpenJDK 64-Bit Server VM/1.8.0_352/25.352->\r\n[2022-11-18T14:57:56,097][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [aggs-matrix-stats]\r\n[2022-11-18T14:57:56,098][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [ingest-common]\r\n[2022-11-18T14:57:56,098][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [lang-expression]\r\n[2022-11-18T14:57:56,098][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [lang-groovy]\r\n[2022-11-18T14:57:56,098][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [lang-mustache]\r\n[2022-11-18T14:57:56,099][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [lang-painless]\r\n[2022-11-18T14:57:56,099][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [percolator]\r\n[2022-11-18T14:57:56,099][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [reindex]\r\n[2022-11-18T14:57:56,099][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [transport-netty3]\r\n[2022-11-18T14:57:56,099][INFO ][o.e.p.PluginsService ] [5Yuafc8] loaded module [transport-netty4]\r\n[2022-11-18T14:57:56,100][INFO ][o.e.p.PluginsService ] [5Yuafc8] no plugins loaded\r\n[2022-11-18T14:57:56,993][ERROR][o.e.b.Bootstrap ] Exception\r\njava.lang.IllegalStateException: No match found\r\n at java.util.regex.Matcher.group(Matcher.java:536) ~[?:1.8.0_352]\r\n at org.elasticsearch.monitor.os.OsProbe.getControlGroups(OsProbe.java:216) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:414) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.os.OsProbe.osStats(OsProbe.java:466) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.os.OsService.<init>(OsService.java:45) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.MonitorService.<init>(MonitorService.java:45) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.node.Node.<init>(Node.java:345) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.node.Node.<init>(Node.java:232) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:241) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:241) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) [elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) [elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) [elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) [elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) [elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) 
[elasticsearch-5.2.2.jar:5.2.2]\r\n[2022-11-18T14:57:57,003][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]\r\norg.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: No match found\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.2.2.jar:5.2.2]\r\nCaused by: java.lang.IllegalStateException: No match found\r\n at java.util.regex.Matcher.group(Matcher.java:536) ~[?:1.8.0_352]\r\n at org.elasticsearch.monitor.os.OsProbe.getControlGroups(OsProbe.java:216) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:414) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.os.OsProbe.osStats(OsProbe.java:466) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.os.OsService.<init>(OsService.java:45) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.monitor.MonitorService.<init>(MonitorService.java:45) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.node.Node.<init>(Node.java:345) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.node.Node.<init>(Node.java:232) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:241) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:241) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n ... 6 more\r\n```\r\n\r\nIs there something I am doing wrong?\r\n\r\n", "created_at": "2022-11-18T15:01:20Z" }, { "body": "Hello, apparently the problem was not with the ELK stack version, but with the OS version (see: [Elastisc Support Matrix).](https://www.elastic.co/support/matrix) I had to downgrade from Ubuntu 20.04 to Ubuntu 16.04 because ES 5.2.* works well on 16.04 but with some of its own bugs. Thanks for your help!\r\n", "created_at": "2022-11-19T00:57:58Z" } ], "number": 23218, "title": "ES doesn't start due to a problem with REGEX matching of entries in '/proc/self/cgroup'" }
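The report boils down to the second field of the pattern only tolerating one or two comma-separated controller names. A standalone sketch (using the pattern string quoted in the report, not Elasticsearch itself) reproduces the mismatch and the resulting "No match found":

```java
import java.util.regex.Pattern;

public class CgroupPatternRepro {
    public static void main(String[] args) {
        // Pattern as shipped in 5.2.x, quoted from OsProbe in the report above:
        // the second group accepts at most two comma-separated controller names.
        Pattern buggy = Pattern.compile("\\d+:([^:,]+(?:,[^:,]+)?):(/.*)");

        System.out.println(buggy.matcher("1:cpuset,cpu:/").matches());          // true
        System.out.println(buggy.matcher("1:cpuset,cpu,cpuacct:/").matches());  // false

        // OsProbe ends up calling group() on a matcher that never matched such a
        // line, which is why startup dies with
        // java.lang.IllegalStateException: No match found
        buggy.matcher("1:cpuset,cpu,cpuacct:/").group(1); // throws IllegalStateException
    }
}
```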
{ "body": "The file /proc/self/cgroup lists the control groups to which the process\r\nbelongs. This file is a colon separated list of three fields:\r\n 1. a hierarchy ID number\r\n 2. a comma-separated list of hierarchies\r\n 3. the pathname of the control group in the hierarchy\r\n\r\nThe regex pattern for this contains a bug for the second field. It\r\nallows one or two entries in the comma-separated list, but not\r\nmore. This commit fixes the pattern to allow one or more entires in the\r\ncomma-separated list.\r\n\r\nCloses #23218", "number": 23219, "review_comments": [], "title": "Fix control group pattern" }
{ "commits": [ { "message": "Fix control group pattern\n\nThe file /proc/self/cgroup lists the control groups to which the process\nbelongs. This file is a colon separated list of three fields:\n 1. a hierarchy ID number\n 2. a comma-separated list of hierarchies\n 3. the pathname of the control group in the hierarchy\n\nThe regex pattern for this contains a bug for the second field. It\nallows one or two entries in the comma-separated list, but not\nmore. This commit fixes the pattern to allow one or more entires in the\ncomma-separated list." }, { "message": "Simplify control group pattern" } ], "files": [ { "diff": "@@ -188,7 +188,7 @@ private String readSingleLine(final Path path) throws IOException {\n }\n \n // pattern for lines in /proc/self/cgroup\n- private static final Pattern CONTROL_GROUP_PATTERN = Pattern.compile(\"\\\\d+:([^:,]+(?:,[^:,]+)?):(/.*)\");\n+ private static final Pattern CONTROL_GROUP_PATTERN = Pattern.compile(\"\\\\d+:([^:]+):(/.*)\");\n \n // this property is to support a hack to workaround an issue with Docker containers mounting the cgroups hierarchy inconsistently with\n // respect to /proc/self/cgroup; for Docker containers this should be set to \"/\"", "filename": "core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java", "status": "modified" }, { "diff": "@@ -155,16 +155,15 @@ public void testCgroupProbe() {\n @Override\n List<String> readProcSelfCgroup() {\n return Arrays.asList(\n- \"11:freezer:/\",\n- \"10:net_cls,net_prio:/\",\n- \"9:pids:/\",\n- \"8:cpuset:/\",\n+ \"10:freezer:/\",\n+ \"9:net_cls,net_prio:/\",\n+ \"8:pids:/\",\n \"7:blkio:/\",\n \"6:memory:/\",\n \"5:devices:/user.slice\",\n \"4:hugetlb:/\",\n \"3:perf_event:/\",\n- \"2:cpu,cpuacct:/\" + hierarchy,\n+ \"2:cpu,cpuacct,cpuset:/\" + hierarchy,\n \"1:name=systemd:/user.slice/user-1000.slice/session-2359.scope\");\n }\n ", "filename": "core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java", "status": "modified" } ] }
{ "body": "* Send a non supported document to an ingest pipeline using `ingest-attachment`\r\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\r\n\r\nThis commit removes extracting embedded content from Office XML docs.\r\n\r\nSo elasticsearch is not killed anymore when you run a command like:\r\n\r\n```\r\nGET _ingest/pipeline/_simulate\r\n{\r\n \"pipeline\" : {\r\n \"processors\" : [\r\n {\r\n \"attachment\" : {\r\n \"field\" : \"file\"\r\n }\r\n }\r\n ] \r\n },\r\n \"docs\" : [\r\n {\r\n \"_source\" : {\r\n \"file\" : \"UEsDBBQABgAIAAAAIQC0lAFevwEAAK8IAAATAAgCW0NvbnRlbnRfVHlwZXNdLnhtbCCiBAIooAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADElklv2zAQhe8F8h8EXgOJTg5BUVjOocuxCdAU6JUmRzZRcQFnvP37Dr0IRSBHRh2hFwHSzHvv4xASNX3curZYQ0IbfC3uqokowOtgrF/U4ufLt/KjKJCUN6oNHmqxAxSPs5sP05ddBCxY7bEWS6L4SUrUS3AKqxDBc6UJySni27SQUenfagHyfjJ5kDp4Ak8lZQ8xm36BRq1aKr5u+fGBBFwjis+HvhxVC+uyflvmiuzVJGjxlUjF2FqtiOty7c0rsvJIVbFy34NLG/GWG84k5Mr5gPO6NZrtAJnDcm25uzJJbXj62emJNyZZA8WzSvRdOZbJTUhGmqBXjq2qt4F6Vhyaxmro9NktpqABkTNdW3UVp6w/TaKPQ6+QgvvlWmkJ3HMKEe+uxulMsx8kstDtxtlZ+JWbQ2L69x9GZz0IgbRrAd+f4OA7HA9ELBgD4Og8iLCB+Y/RKP4yHwRpQiAfaIzd6KwHIcCbkRhOzhfNAdL172TvFCBdmH//H/M5T81bGIPgaD0IQXwgwuF6/U7sbd6K5M79h5gP2PQPyz6dhlldxou+wF0iW1+9PsgHpgHTky33vxuzPwAAAP//AwBQSwMEFAAGAAgAAAAhAB6RGrfvAAAATgIAAAsACAJfcmVscy8ucmVscyCiBAIooAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACsksFqwzAMQO+D/YPRvVHawRijTi9j0NsY2QcIW0lME9vYatf+/TzY2AJd6WFHy9LTk9B6c5xGdeCUXfAallUNir0J1vlew1v7vHgAlYW8pTF41nDiDJvm9mb9yiNJKcqDi1kVis8aBpH4iJjNwBPlKkT25acLaSIpz9RjJLOjnnFV1/eYfjOgmTHV1mpIW3sHqj1FvoYdus4ZfgpmP7GXMy2Qj8Lesl3EVOqTuDKNain1LBpsMC8lnJFirAoa8LzR6nqjv6fFiYUsCaEJiS/7fGZcElr+54rmGT827yFZtF/hbxucXUHzAQAA//8DAFBLAwQUAAYACAAAACEAUyT3RoEBAAByBwAAHAAIAXdvcmQvX3JlbHMvZG9jdW1lbnQueG1sLnJlbHMgogQBKKAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC0lctOwzAQRfdI/EPkPXFToDzUtJsKqQs2UBC7yk0mqUVsR/b09fdMKU1TKBYL
s/S1PPfoeuzpD9eqipZgnTQ6ZUncYRHozORSlyl7mTxc3LLIodC5qIyGlG3AseHg/Kz/BJVAOuTmsnYRVdEuZXPE+p5zl81BCRebGjTtFMYqgbS0Ja9F9i5K4N1Op8dtuwYbHNWMxnnK7Dgn/8mmhr/UNkUhMxiZbKFA4wkLLhV5U0FhS8CUKcil2IlJDKpg/DRDchkSAulsC+JzuROTmIr9BhGUweGmoptsIHZrn/1NSHvQuTbYBtgrPoSkG5KhMBonYla1rqKRfBRBIfRCzcDSYztANJI3ipAQ2cKhUW/k1kDE8UHlEkF5W7MX9l4MfmuORvJGEjSTrSXYYwSwXR/AdUj/FcyeAZHaoJVDS/Qm0fn/JLz9cBX0q/oRw17xIdyFRPgaWq3fip5ovp2Rjj/KzBpnCpy+Shqi05EVK9pI4qXL11s6fjQpBx8AAAD//wMAUEsDBBQABgAIAAAAIQCHBIL5pAwAALJAAAARAAAAd29yZC9kb2N1bWVudC54bWzsWFtv4jgUfl9p/0OUd5oLAVJUGLUU2j4NmtK3SiuTGOJtYmdtA0N//Rw7Vy7l1unLaFupCcc+3/nOHfXm288kNpaYC8Joz3SubNPANGAhofOe+TIZNXzTEBLREMWM4p65xsL81v/7r5tVN2TBIsFUGgBBRXeVBj0zkjLtWpYIIpwgcZWQgDPBZvIqYInFZjMSYGvFeGi5tmPrt5SzAAsB9gaILpEwc7jg52loIUcrUFaAnhVEiEv8s8JwzgZpWdeWvw2U7LrGUkzhcMZ4giR85HMrQfxtkTYAN0WSTElM5Bog7XYBw3rmgtNuDtEoqSiVbkYlfxQa/BS7mcp9ng5t0eI4Bg6MioikZUyTS9HgMCpAloecWCZxcW+VOt7nCuI+y0oFeAr9PJVJnDE/jOjYJ2REQZQap1DYtFkwSRChleGLQlMLrtM6D8DdAWgLfB5EK4ewxDqpWmOVzj+X5QfOFmmFRj6H9kTfSiw1r87AyqulXsHic2SeI5RCKydB92lOGUfTGBhB7g1In6EzYKguMfswTacsXKtnCgdeN0UcPYU9s+l57VZzeG9qKUwkqaSd/AekXZjc4Y+eCeybnbZ7V4ru8QwtYqlOmo7rXTvFybh2WRscc/14lusY+HSXKO6ZjxipFeCYVv/GKu9MGXtTQ+5ZwnSEqwS42AqXogQ8+2fCAq/tt52W73eUJhhUf1S4HtdphGmGpoV8D3cll/2JipPBZsaAUQkjSCgdWWkWLIY0LDnkNLW1UOaPjDTPHyNAE6CARECgyAYoJlNOFHuMhLwVBE0gxeBGQiBXj7dU6MNIvVTXtVfTIkzaMHweKOC6JGAxUx5qmWerXwUmlYWBOuuZKpk6KVo4IRRydTfK1MV7oevmgOK9suG6mSxGdF7IMG0MbuvOaNHLcx6ZPAhERUzfb7h2y3Vdx/O9DAy2+Riy+n36b+3TA4pjzNeF0k5mNlRfKPlvgTN7dTSrlgx4gbSNeeZTKHOg3cK3r+2h741UgC4v/GNVPvk+yAu9XudFIW0ScjzXvWu5ynSNUNu/7jiep/qzIuTbrfZIJ2ObkN1puc5hQrrloCjQVOTP4jDGM6lAUwZjyfPywqhd4GQe6RsxOIWhxEJWKVw3W0WbFNh7O2NvE+zrkGEuK5tkr2YgdHnn4jsS5i1E2ZgzNtttFrSQbKcJ9hf8ZnVXY6ocLWOugu62IeYdHfRZHA7gm51Rvk3WKXCb4jns5iPD6UNQQoXkEygItTa6IkUBQKYcC8yX2OwbkFTjlRmm02iaxmtkvL4brwtD2So191g+lbvAqkIlrtOP4ITHhL6ppNIgUuNmczirnBEhGV9DZWv4DZe3KjjjMnDbdnuUXc4DvbkzCqvbGa7mj+w7W8N8n9k/rzZ1z31UX18X7GOb9Ejw63FY4ekjCcNyhZ/q1VlwF7XnJYaOtez49mH4YzgyNppG9e6xrr2M0FcHcN+M+Hzuz+3li+ljGlbMrdp4+3BXu/6gNfBG5sauhm88o9vh3bDstMO7uubA/7v6y3b1sVXlH1pVv3FUutu1/OcF/cNx/RvD+EQlZ+EiUP96OhDRszfLWbonr5GzUDd3xi8AAAD//+xabXOjOBL+KxT39TZB4t23TsXGeJKq2Uoqyd59uaopDLLNBCMOsJPsr79uCTDY2Mlkk6naLc9UOUhqie6nW0+3ZBdZELKhmuWsYPmGqRfK7eiLf+dPlW8PPDQsxyKm4zjKf5fKr+dPgzgtyvyBPZcX2Movfq0/bsWflN/mnM/P8fmJza7iKGLpuZRFiffMeRrMk8hbBrnSPD28ZKBzwbIgD0qmVsI/vnB5QbH1TmMOKsbSaKsTfC6hN0/i9FG0MpyZKU/EGKD+19FQpZPxxPdcSxW9JcCLvbZFDI0S7B3kRRzdDVVNczTTmo6argmbB+ukbI2I1aV62X35kjAQ3QTJUH248QiqBfYGs6L6Ww8mbF7iohkvhqphaI1gLZDHi6WQSFgQsXyoRnw7wdVNMQOxrNauEMqnPC0LkAuKMI4flmwF+KzilOdXo7SIcQUWFOWoiIP2oF/14fgSBXtnhkXZ6h7HUSzVbjsu5AlHBwkjgnXJpUjxR91HqexJgnRR97H0l9/vaxdWcVth2vgSbUrDJQco2jvFFSrHRcnzl6FKhDvw/cJbtzk6yqOWZk3lSIVSx1FX9St2rWkHrn4kcP8moItgau/tT4DxZgOsF7OnP0UDO2r+0NwDFDJjizh9P7E1NK08r5JB8SaOd08c/5kcb42oO/Vd2uV4Q/ccV3fNj+N4/W0cT4h2Ivl3kryr/SySPyNn5ET0HwHldaoUIc+Y8nvBlDAoWHGifFc7Uf6nUr5DHDLR9S7lm9bEHE19wSEnyv+rUD75iZR/LHpPlP/22n5dKnyu3De0751oX8byifY/k/Y1jZrEcewu7du6PRr5U+PjaL/aZ6/RvuOcWP+9rI+ntZ/D+ifO/xAgJ0EZKEEaKddpyRbAJzFPT4zv0hPjfybjm1PDsDxz9/6++tdhfEu3LTpuumToU4uS8VaulQY024SheuS2JSwM7V3mByzLxGcRlR4wDEslkNDsMdGfTjTDQVVeM3GqE2Jv81zLmmpErC49gOEbC86YsTnPQUVxLgrmJWYlKg8sW74WHzM0qxT2ZMGiFT79zqHjkaE7Dir0mubuWHesSZ/mjkN0Iu72Gz9YLhmNJNq9GfoKUiuYJr5z2Vow4/xxFeSP92WQlyAaR5hX4CENkIQ7X7pJohPTEuDyO/Aey1l0C0aPcxY8imG82ihzHq3DHq6r3+YDI9bvqtTpAcr1RpZv7xxXe4GaTAx/tP1iSoYfAeduMWqjNxkbridyaYNetcIHoCfW3UfPbaF38J5/Hx+RyPrxMcYQ//rI+fFd3oKiOyKg0KaWMZWJ7xgU4pR/HAp03T4UblX/VVDUN2G7zNnW3/F005QqHUozx6bTieEaVE6/OHThto99Y+Eh+wSpVPZ94eMgrGqB/aUMOdB1oD6lEKI+huirDrTAS9vCXAb4Tmfbq9T0aHOJL+lBCre
8+j2sPRoC1bK8h9r47DsL0droObiBIh32q+ZSwQ/RS9XjuDZ2gAWyYhVHDtd0bM3D/bQZFMsgY8iPCiLx7RkiTPtW2qBeyHkeFfEfgB8llqb9U3yqCgf/gh0owgfg4znLc5ZA8bQBSTgYZEG5hALx0rg0E/gg5NKVH+YzU5V5nCQM3jRXFSgu+KN8FpqIpvKdY9kBEY1FZmX4ZgB0v1oDr4lnhf0vHarxXIE0yyZ58JQqWfzMkq/Q/E8clUtFBHEjWaxXyqWmkJ5uDf5fCt5oujPgR+WSgrjY3jv9uiJgkC8UL3tN6IqJ81OPQqBSz2TrwJvt428WazqVTNfQnvmHlCJae4XzLvDoWfA5bIV8XUD+4I/Cj1A5RzEEqQgl7CsxMkKephCeMvXm8IQL8kHCw0dlg2tAgRHFIBoUGYyK6hunytc2YdnEaDs+Y6LhBpJr/6MdtFXkPCE8A93Qzpys/NdSGDugjnZGM6EcRyERdfEKEmSEB4FccAHsEAclyrgUMi11UP2br/6N3HayRvJXMxapCtSOi+vJUP13DLicYUgCB58RUOgeZ+JQV3UUGQnDh2pVTqmKXFkIE9Ogumk7Osg2ioksdV5v/OOFDJRgnm+ab8o/tukBf3fpSyPWGAHtoa+OuExKUriHRak1tXXbQB780DRIPaoZk49Ig8LG/TRY/fqlrghat4M/MxX2XULu5y+BfH8Y2CbRPH+680VyL/7Ut6nR1F1t/LsjEv+tpr34f42LcrwGqhcbGgBer6oDabJJaqGq1ICxazRl23feniDMxEsqyMmW/DI1SOFQvJDtloMPVGIuMYyp+6bDyNTSfL8Pgu6IgMC1dWIfKUo/CoLj1tGx7k+0MZ5DXrNuPNZNq7ci6Y4I66qu1zZYUwEf3mCi8t/fYNUlULXB3nAVsx/31uG4Nzwdzgz6TvWm+2Ti+XSLAVpKfGKS7Qm0Op50O1tYdUcEViPdJGOvjVWvryuLO76u8Wsm7J9xKRZdzSnXEQ0sfGBE1HaicbfGbLW9bePrEru/7rx9N5YKyCO3HQp6Axr3ImsBLZvEN3xh9pxz0O4Oa0GWhhgkMjezDUtbCYxU4X5IPJIwt2fUv0LECrUuiCFfxumar4sq4yzu8W7xCepfSg3JEPBsOvUvE7PFb+J2o+QZ9OMPFtESrAm2zRkvS77atiXh1K1ldfNta5hQK/2b5mJdimb1upAneP9aUT/KiO6Ih1/yWJAZeOY2LsNlm8GkK8TjjEcv4gGmrFd40/J/AAAA//8DAFBLAwQUAAYACAAAACEALm4yA1wCAADECAAAEAAAAHdvcmQvZm9vdGVyMS54bWzElttu4jAQhu9X2neIfA9OOHQhaqgolIqbFWrZBzCJc1Djg2wngbffMQmhLNqKg1abixzGM9/89owNj09bljslVToTPEBe10UO5aGIMp4E6Nd60RkhRxvCI5ILTgO0oxo9Tb5/e6z82CgHorn2KxkGKDVG+hjrMKWM6C7LQiW0iE03FAyLOM5CiiuhItxzPXf/JpUIqdaQakZ4STRqcOH2MlqkSAXBFjjAYUqUodsjw7saMsRjPPoTxM6nJiTlMBgLxYiBT5VgRtRHITvAlcRkmyzPzA6Q7sMBIwJUKO43iE4rxYb4tZTmcYhQl+StQ+YiLBjlZp8RK5qDBsF1msl2TdmtNBhMD5Dyq0mULD/4VdIb3NcQ87oqR+Al8ptSsrxW/jXRcy+oiEW0EZdIOM15UMJIxo+Jb1qaT4vrDa8D9M4AD5pehxg2CKx37Lg1KpncV+VXJQp5pGX30Zb8o2XZo+oKVtMtnztY3yfmPSUStjIL/WXChSKbHBRB7R0on7OvgGN3CZrAQSrBMPAlUWQZBchd9GbeeNBHeyucRMZafzQXWH04rKM3cHRns/60P21NcxqTIjfnIytrGr+47ui5TrhS+8e72eWgwy9JHqCFEIYqhO2Iqh3UicOKJPRnwTa1E268cIurb5dHwi9IHs3gtHXat/VOwjJtaAL7pfG8jZxxbdQa1s7W09eShMCVimqqSoomq+nri+NY/9bxnmx/mYemtqaG3jKVyudipYSITxKZSc9+/QuxlEdHnbao553ZG8+92bNn2+o/deah2Wp9eP8fZPIbAAD//wMAUEsDBBQABgAIAAAAIQBRkj/ynwIAADMJAAAQAAAAd29yZC9mb290ZXIyLnhtbLRW227iMBB9X2n/IfJ766QQSKOGigao+rJC2+4HmMQBq/FFtoHy9zvOjVK0XSi7POQynjlz7DMz4e7+jZfehmrDpEhQcO0jj4pM5kwsE/TrZXYVIc9YInJSSkETtKMG3Y++f7vbxoXVHkQLE29VlqCVtSrG2GQryom55izT0sjCXmeSY1kULKN4K3WOb/zAr56Ulhk1BlKlRGyIQQ1c9nYaWq7JFoIdYB9nK6ItfdtjBGeDhPgWRx+B+PHWpKICFgupObHwqpeYE/26VleAq4hlC1YyuwNIf9DCyASttYgbiKuOiguJayrNrY3Qp+StQyYyW3MqbJURa1oCBynMiqnuTPlX0WBx1YJsPtvEhpet31YF/csKYlKrsgc8hX4jJS9r5p8jBv4JijiILuIUCoc5WyacMLFP/KWjeXe4QXgewM0RwMDQ8yDCBgKbHd+3xlYtL1P5Ucu12qOxy9CexGuH5UbVGVhNtbyvYHMZmecVUdDKPIuflkJqsiiBEWjvgXxepYDnugSNYJAqMPRjRTR5ymEG9ydBGvZSVFlhEllnDafDQe8hmII1hmGd/0yQ76dpb9wbd6a5dsbhJIzSYWec0IKsS3vsPnem26nvRw81i7mubs92VwK5eEPKBM2ktFQj7FZ07aAPHOZkSX+s+aJzmklhDawSkzEQNCUlW2jmPcgyd5lXY2E+mqvARX1NDdxxkwx3rOrL3wm0kfB1KvMUJrnXPb3sFEiwoEvoxcbza8hMGKtfQBdXK7FRJANcpamhekPRaD5+nHqe8+8cL8n2h30Y6urF0oOtvK+NyA8Hs3Gl7EnCCTnXUhYHue0ocG//gz8Vle41rtP5uBH60ygK09nNYSMMm99njfCvar6tv5ofrv7yjH4DAAD//wMAUEsDBBQABgAIAAAAIQCOmL+CCwIAADkHAAASAAAAd29yZC9mb290bm90ZXMueG1stJTNjpswEMfvlfoOyPfEkIV8oJCVtlGr3Kpu+wBeY4K12GPZJiRvX0OApJsoIhuVg4Gx5zf/8dizfN6LwtsxbTjIBAVjH3lMUki53Cboz+/voznyjCUyJQVIlqADM+h59fXLsoozACvBMuM5hjRxpWiCcmtVjLGhORPEjAWnGgxkdkxBYMgyThmuQKd44gd+86U0UGaMC/iNyB0xqMXR/TBaqknlnGtgiGlOtGX7EyO4GxLhBZ5/BInL1EAx6SYz0IJY96u3WBD9XqqR4ypi+RsvuD04pD/tMJCgUsu4RYx6KbVLfJTSvjoPPSTu0WUNtBRM2iYi1qxwGkCanKt+T8VnaW4y7yC7W0nsRNGtq1QQPnYg1seqnIBD5LelFMVR+W1i4A+oSI
3oPYZI+Ddmp0QQLk+BP7U1Z5sbRPcBJheAqWH3IaIWgc1BnK5GpbaPVfmHhlKdaPwx2ka+96y6Yd3Bak/L+Qk2j4l5zYlyV1nQeLOVoMlb4RS52nuufF5TAa++JWh11k69KrYH5dYZpogmFjRyJp4maBQ0C5XzDON6buOM0XQ+WyzWEWqsrmXZ2jprn9rV9fb0V4J8P5xH/mLdm9YsI2VhL2d+1qbJdBK8zJqAuh56NXi1xI3NjaoZO+VXs6AgLZdl04teP2bkX0sonM0Xs6fwfyd0Vdit5M5+zOovAAAA//8DAFBLAwQUAAYACAAAACEAo0pYhAgCAAAzBwAAEQAAAHdvcmQvZW5kbm90ZXMueG1stJTbbuIwEIbvV9p3iHwPdlA4RYRKFFFxt9ruPoDrGGI1Psh2CLx9JyEk3YIQFG0unGTs+eYfjz2zp73Mgx23TmiVoLBPUMAV06lQ2wT9/bPqTVDgPFUpzbXiCTpwh57mP3/MypirVGnPXQAI5eLSsARl3psYY8cyLqnrS8Gsdnrj+0xLrDcbwTgutU3xgISk/jJWM+4cxHumakcdanBsfxsttbQE5woYYZZR6/m+Y4R3Q4Z4iidfQfI8NW24gsmNtpJ6+LVbLKl9L0wPuIZ68SZy4Q+AJKMTRieosCpuEL1WSuUSH6U0r5OHvSXu0WWpWSG58nVEbHkOGrRymTDtnsrv0mAyO0F215LYyfy0rjRh9NiBWB6r0gFvkd+UUuZH5deJIbmhIhWi9bhFwr8xT0okFaoL/K2t+bS54fA+wOAMMHL8PsSwQWB3kN3VKM32sSq/WF2YjiYeo63Ve8uq+tUdrOa0fD7B7jExrxk1cJUli9dbpS19y0ER1D6A8gV1BYLqlqB5102DMvYHA8scN9RSry0Ck0gT1AvrdQYco7iaW4ORDKPn8ZSsUG2FjuUr67h5Klfo7OlvWEiiyZBMl61pyTe0yP35zK/KNBgNwsW4DmiroVWD5zNc22A09dgIv5QD08oLVdSN6PVrPuRSOovBahotyP9O56KwK6l1327+AQAA//8DAFBLAwQUAAYACAAAACEAOOeS1EUnAACgOgEAFQAAAHdvcmQvbWVkaWEvaW1hZ2UxLmVtZux9C3xUxfX/bJLdPEl2Qx4b8lrICwmoVMAoJFmzEbGhNIq8igqEdwWMEhQKlfgoyuMH0Qr6R4sI/woSRUVqtUgSLD54+Gvaqj+t/Cw+fqCmFXzzK9r8zvfOPdnNsrvczd2QTdzJ5+TMnXtm7uy958x8z8zcuQYhxDyifCID0cYwISYRcRg5Q4g5Vwthu/wnIyGx+VKDeNMoRLhTRAm1kUIURQjxW8r7SxTkEt4cFyNGNRsFFSAGEtmIqLhCg90gMihuJgozN72LbFNVguxRojdU2Wx7uIhTy7vQHqbkiVCOakv72SPazmXZo9vife2RbfEBdiFyiVvVPGpyu3ghyTjLMbmUY/RYvpYyXesQYXeW35coiqiCiJLFhUR0W5V7g/sAMiuSTa0gA0mB3I/7CVlmKwXcMytljDPIe5Znl2XI+ySE6zHKb1//pjJVTMZrLY76g8nlUXYRvptStgv5jCyqxOuHD5eAz5o1yw5+aGUN8dr8j1qX2++fdnED+OeHDiv8H//9QhP4phNNTZCr3/FKE/KBoxzw0dMvVrgQR4ZBDhz5wFEOOMqFHNXmMXn9UduEITw83IXuDDcYkuyi1izk78R94XtdJeS9wm/H7wZRnYvVn9QuDhMoEPLesDpTUQrPcymnt0t8uJDXSRfyPt8aK8QNJDxMPd6RJY8nUrw/X0iWqwTY1PfZN+SZ+07MNfc15nA+M/2A+8J85zP3LTF8n11I1yhsux6uXyh858uNKRTm6BKDOfq+sLbrRZMu5pzletETc3Njbsi7NfaGPNw0/O4RKodi4L7dK+R9tNC/CWHt7xXfQwq1Ls/iMufVnHGUAd1+mDLWqbqtxfa0yOTb29tEx2zEEIbffqGQNoKza1Wpa98aMAL1Z0JgmTcfmtwaZQ/rXvZFtz/Zpf3VaktTiIrUODgCc5RiVmOTH3qzLRWB9QT5OE7tZG3I1iS3C6mb64TaHqn9N98r6pZr/bU11IPvuXv/cpLoI9G5Nuh6rLUMX3Za/fodjv+66PeX+SqHZTz10RkqR19tVs+hr3a5lx7Lwv1DWZwHIcbtmKCVqDbIe32yVe1g1IC2DoHuQz+HmEY4bS71YTfTfyGWR2aI5eEZ4sG/ZYj0mD1GG9EzJyJesL5LaZNut772whTri3Ru2a3zTbKUPNxbboKUYJ9sje9PMstfOXIjn1vzYdyto/5y8Prl4XuMSWEZlJxhQDVAODdPOTfFutsgz9lnju71xsZPF/QTPxJbBshyv5t0OgbpI97OEDmE+GQ6cEutML9U0quIzkHHrMRxZgbJXVF+pSPq9nvCp1K8T6X1lsqsXyUfOR0RC117+m08AestiCMNeVCfJ5R0hAzD09eXKWUhuNaFw4xblEfYrj1JUePQh0Q1jnJxjPi1D70Rdg3x54lMBqkDq4j3UfO7B5Qx2i2N7bia4qPEjWKRWEjPb4GYLRx0tID+ZorpoobScLRQ0Q9ch5oFgQcHG8AxE9JdjyNVObQTfA7H/DvpptT6sueviD4WnuzZfxviWOfItK/Pw7e8fNb6sEwgbJrLCi6bTr3lP5P63uZaPsLUC0bGQBcnHh4dg+fsyaZZ9mw2/bhHm7ap6f7ZtK2ftGnoG9s04t5s2jylvU1zXTjosWnQatWm1/qwaeT1ZtOVFD9fSBsNtL1p0W+OnQt7e3Dpq2etD8sEwt64rO5gb7UJl8TgMjbLJbrt7WkvfahM98/e6vL8szf3PpTrwkGPvU0kWqfaG/w4b/aGNF/2Np2eIfo8kFabm0rXu8ZwdpvTouMc0yKjZ/wls/+FjonjniuDf1hHKXcJ6R+2PSAK9Tt+3gC+vN9zDW+WryP/7lM6tpWCr79uscKF2FyK8+BSfnPpM7a/7MX5bTt2PAP5rcSR/xHluG4bzlP6dkU+3BnuJP8vyR4WkLGVs42hjCOqdGZu0wH8/tMxFRGnY1aGPx7XErYnviXsdXNL2NLEleFLEysilibGG1831xv3xNcbH4+rN56OiTdCfoRaLo9NPCVk292fLjrZrQ4dGZtYTBnnedSxrhmbcOpPx8YmynYubMXYX50IEt2jayXb2/9+LXo2RWgbd1i40+X2Ce/jDqyXO4kGEQ2gilwnlDaoQ3oZ8tM5kM7+fJTjwXvuLfNVDssEAmNwWf5ijK+6CGPg2oHAGA1eMIZM9w9jnBqkz0/nunDoCMaYNGmSgjHuJ3pNyDY+wSD1wBPGQB5vGGMGxS8TM+jPJm4l37xKVNMTnU2xhW2+NjjKZR/dE8EmAo1PtNgHx7TI6JsfGup4YkCsI9j6iCR7YOZ+9OCTytiKyMrYlaZ/x7UYzQktxhxLi7E5caWpObEisjkxPirHUh9lTqiP+ndcfVRlbHwU5N3xye+IricaRBdd7VaHnoBPnPqjB58EFTYWnTkv4i8+eVZIXHIBVeQ/h
MQnHdHLED7h0FRWO7eS2t/Pfba/LBMIfMJldQd8UnSwOAZ1nJg406oXnxz2gk9kun/45MFh+vAJ14WDXnzSTLSAyEICS9X87uFs+ORyQidzRY1wRyiMPTAXAJ0HR2UZs/B56Bx4oPGJFvvgmBYZ/fhkuCPY+ohgGD+hdj2G+oFo6geovW+hdr8livqBaOoHYqgfiKV+IJb6gVjqB2KpH4iFvCd8gn4a+GSPWx16Dj6B/oTGT4rUODgCc734BLgE+IQqzPjEb70M4RMOTWX27y53jN7f3+GrHJYJBD7hsroDPnk+gPjkb17wiUz3D58UlerDJ1wXDnrwyWaiV4g2CXk36tX87iFeeMcncwSe9UwxX9wobiHujlGAQ6BT0A/X8RJXrMJ4hXFKZ2AVLbbCMS0yerBK1MkXHePX3+9zLGXcuEVKf/H+3Aalvxg37mulvwBHfyCPN5fiPLiU31z6pyffVfqL+h07Bsv+Zcdg2d/guDYP58EVebf+IhjGUjakpls2pFaay9PqE8an1yfMzaxPSMmuNKdkp1tSso9Z5mauThyfvjqxPG114obUYxbIu2OVJXSxfxH/gvh5hp6HVZz60zljKedc94JsLOVWqsD/Ev+SeKFBYpWO6GUIq3BoKhv1wB8d/T5Y5bP9ZZlAYBUuqztglbq6aiv62OYrxuqe6/nYC1aR6f5hlY+u0IdVuC4c9GCVJ4juphOxRB8R4b0XT1jFLLxjlXkCcz3V9HezF7TSfs4Huu86fsJrLl1xjCtBPtC4RYvdcEyLjF7cUj+gwucYi+wLavM62nfsVPuOXWrf8ZxyXLdN9h227Z76jmAYY9mQOjx2Q2p1THnaoejx6Yei52Yeik7Jro5JyR4em5JtiqP+Io76jTjqP+I2pJriIO8JtzwsJG455FaHnoJbpP50zhjLOdc9EVxjLMAtDwmJW14XjFv818sQbuHQVPbxPYccOfde6LP9ZZlA4BYuqzvgljmrq61m4tUx03Xjlm+84BaZ7h9uebhSH27hunDQOwf0AJ14jPhbBjkO6gm3II833DJDONeoyLdIagir4L/tjDUqrvM+sAFXLOPabpwNn3wltK1b12IfHOscGTe8VPJnx9JvS3z7GapMQPwMtazuYK88ZzsnWf+Y6Pce7dWmpvtnr6Ou0feeCNeFgx57XUy0lU7sJn6A+D41v3uATXmz12sF7HUR2eccstabyauoLWW7hB26xqFbAfcZNNgAx7TI6PUZnhiwNDTWqQY3nyGesFkvwmaEwQ4RFjsUR9isF2GzeMJmyhgTxpow5kTYLAHyI9RyXX2Gw0L6DKfd6tBTfAapP6GxziI1Do7AXK/PAF8TPsP3os1n8FsvQz4DB2p/nz7kyL13qm8MosoEBIOoZXUHDNJccVkM+r/nyXfQi0FiC6WcOwaR6f5hkKPDdvvlM7hjEK4LBz0Y5GEhfYa3hPQZPlTzuwdUwBsGmSNc1425ew2trewjuK4d4zTXsU/GKTgfaN9Bi51wrHNkzsRNy75dcFbfATKB8h1QVnew23vWVluhb4GYo0j0aLcD1XT/7Pb+S6XdQt868s4r14VDIHwHtBvwHT5X87sH/3yH9v7CufAdzmYDHNMio3udxLi3gu2d2CCZb6gwbkhdGVGe1hI+Pr0lfG5mS3hK9sqIlOwKY0p2vIkwmokwmokwmmlDarwJ8iPUcl19B8ThO9zgVoee4jtI/Qmt6SxS4+AIzPX6DrgQfIf5gn0H//Uy5DtwaCrbvuQVR36/rT7X1LNMIDAIl9UdMAj7Ds1r9PsONi8YRKb7h0EyHP75Du4YhOvCQQ8G2U70H3TiCuLvG+S+N54wSILwvU6i/ZpOd//BuV7T07rOc+1HaLEZjnWOzJkY6q3PGn3Wh2UC5UegrO5gw0frqq3QvUDMGRZ48f9lun82fGSkf36Eu//PdeEQCD8C/Sz8COBKTzYc7H7E2WyAY1pk9PoR9QPSQ++uq8HNj4givBZJeI1wWQvhsxYT4bVIwmtRhNeiCa9FE16LJrwWTXgtGvIj1HJd/YgFQvoRD7nVoaf4EVJ/Qu+uF6lxcATmev0I+A/wI34j2vwIv/Uy5EdwIJ098LKj970Gn+M4LBMIDMJldQcMEkg/YqgXDCLT/cMgM36qbw6C68JBDwZ5nmgNnbiV+AcGtd1qX7wSUIY3DFJN8XLyGeYRzaZnCf/BRkgEO+AuUp5pjVginGuYXH0H6BpsgdNxzOmuvgTLuLYtgfIrtNgQxzpHxq0+Uw47bvg2wbdNqzIBsWm1rO5g0yen3GaFLn78mv49cEd4GRuQ6f7Z9JCx+uYnuC4c9Nh0HdH/oxO/Iv5X4vep+d0DKubNpmdQvJos+GbqseeSLWN3a7mTdXs/H/rB9gkdY9vmMYFA+xtabINjWmT0+BsELRzj198d2otCDa7+hjludYI57kB8Q6/C+OaEwvgPLYXxdb0PEK1OqOs9xvyhpcXcnNBibujVYjbHjTFDfoRaLvsb+4g+ILrEIPd962n+hlN/QvMWRWocHIG5Hn+jkeh9ouEGuS9PR/Uy5G9woHjZTx2j9y/z3f6qMoHAJlxWd8AmRYeKY9CnBmLd9Y+9YBOZ7h82eWOyvnkLrgsHvf7Gn4i+IMIeu3hInrAJyvCGTaopPpY8iyoFm2CeQr7tiWeKfcT5qxuMU0DQFVwHdgCMwr4IOI5xzt3n4LxaMUwR/Z6BHvuXDtiQGrTI6MEwMx7ZXvbBhi87PHd+4L7xSh80kOI2Nd39eR5oqSmdOiGuEd+ccu8fuF2oEv5hE619xvj7DrSlInjrMyYIeU20IQ6iXcK5j1uxcX1kIPd0hq4MMcg9ys9l269HTzK3/7Ysuiqiw1jlbHryyJdH9z765ztLB69IaPCkJ12pE5VENUS/F06dyDduiLwmdnbA9tHsjjpxLtqOvetvL7Vfdbwk2HQC7QTGzKAT21Q5tBOB3LusO+rEu3+2lH/9yi87rZ0YFdeneHTD9MbPlk0KOp04YZB45koSSDJIufm5CcreMT9kncj7zFJePyCh03QCIfofMxrrd/TxiDG6WifGCqkT96pyNbktUVgn90PXiScHpHSaTowbt34vdGLcuKFBhyegExgr8aATAZvb7I468eJDlvIrIjq3nej9wLLGNX+9Pej6jv8xyH1TR5HA46rclNxPomqyFvyg8cS50In/XTat0WKZXhqM7QTrxDFVDjqB9zN/yDrx0YTXysavX9mpOlFwFOPjq4Ku76gi+pboj0TJBik33FRoCeQ4NXRiIkVGh3UfnSg9dVcZr/3DeoZa3Bt6ruepEi8fu7OkbFt6I/i6dwsa1146tWRM0+DGyXMvUPhPL/y+GHxizM5inAeHPPjXNyco54etNSry4MgPjvJwnstffP2qkrGDWhu2v72uGPybWIPCUYc3Yk8o/LyL31X4oLrhDQkXTS1mPv/lxxrmrVnfxiGzOzYf02EKX5ec3ADeeFmfhsLiASWTb8pqqJ19gcLfuHeIwiv+WKScB4c8OPKP3LGmjf8zbluDxbShjeP6W67Z2cZRP/Bj
MScaci7apPwe/l3t53uUj8fLIJPaDu403G5JsjvXPZ58/0gr65hWGyoQ8llDHuHI+ycVzrqNdE/zkulC6s0TZDhGk/xG5WguWDjHkZHvCeOLxpuMtxsLiN6LuN24NeJFIqNpa8QY03sRq0wFxlWmm4ieMI4hMppQQddrNNI1dpqc36dfEyGPcU38XvdrYlz5qoidJlvEe0RpkZOEa7vglCMbFraI+6JqFPp9ZJ1C9dFpRtDhKJQ/xEM+jFenGfdH2Yx7owqILiAaSsdDKY+vaw2lcl9R6LnIDxS6L6rQBEqL9PVbCk3vmU4bd5oaifjeoE0G53lcGCrKmC4Cs260hiJzOrlt6ljb07H+CPcyyh4eareCod0SGYZke3in9fet7aclvfb3obYsuNoyu5C2Dz6eaKaQa5gDhbn6qXHUBW3cSdH91gZMfTK17NnagT7nNVkmwt52rzq8NoDLwv1DWZwHwdfagC9bu3Dd4uv61y0+oM7Hu69Flun+rQ04uVTfWmSuC4eOrA2wDqxse6cxn+hRouVCfsuZ+0rXkCB8v9Po/hZj+/ejXkhGG8EE/cKD4zivDeBj1ziIfxfdhFpf9nstJVZ6xCj+2wzHtMjo8aFmZsaXf/L6Ooc3LJIy9MaS7NxejeB9h1obH7+6ouTFQX0bH43PVviP1v9PMfjE5t8U4zw45MEze4cp5yOGftEAeXDkB0d5OM/lN534Rcm+quMN4268oxi8IfLTEeCow+Uvva3wf7zyisJ/3ntAw39XVBQz/6zogYbWjXe3ccisqEtRMAT4xRdENoB//vfYhuPX9Sn5//UJDV/EZSv8+z05Cp+yskA5Dw55cOTfl7a8je9f8kDDT6LvaeO4/u96b2rjqB/4gn1vN1yycJ3ye/h3+YNFkuzh7XwobiurhH++lFaf6TgVtDXeN844njst/pncwvgFRAOJvsmZRrSV6BgdZyQsIHqG6HjusfjjuVvj3XHGKbrGiAQnzvggRx7jmrAp92vCJp/IGZFQmzOHaFOCr76/NudL82GFBpmPKfSFpSoXVGBB+d5wRlWuzTI7N92ygGgR0VI6Xkp5fF1rKZX7b4UGmq15oC/NP1NoU4Kv3/KzvDkJ5+eNSDiVOyLBm89UFCbXvo4nbjP0PJ/J2dbo8Zk8j/WE2qlz3E5ZzJbOHCNt7aDPFGrLurYtswtp+8PC5DpsjCn3NYR8JqdMU9nfvoso/+fiB33iP5YJhM/EZeH+BbvPdOG9gdtD4rdefCaZ7p/PlLtSn8/EdeGgx2faTZREGfLo5C+IX6Q+L/dgEb7XU5/pM7l+P8e5r4QWv8nVd2IZ2AvigfaftNgPx7TI6PGfdl4SX146fZDXOagQLjn3uCTJ3n4OitvNKtE5/tNnVFBtpG/M8VluceTzuVGRi4guIvo+p5ioluglOj5F6afo/KnIz3JfIqqNdMccrXSNtCgn5vgkRx7jmrAp92vCNp/NSYu6O6ec6Bc+x07vzmmO/qtC0dH/VOhPMXNzQRExKH+Ih3yw67m530cvyP02ehHRMqI76PgOyuPrWndQuaY8UFR0tkLN0dMU+oUyJuztt0zLK48ampcW1ZqbFuXLf6omPoH4k6Ln+U/Otibwc06hdurcj/ME45xTqC3r2rbMLpz+041C+k9PiZD/5JRpKiv4IqL8ihd8fwOFZQLhP3FZuH/B7j89+OCEGOD+QMw5PevFf5Lpfu5/s0Gf/8R14aDHf2oS0n96lvjSMO/fAQJ+8OY/1Qis8cebqNj3Bl/wwnupC5R9M2romeKNVDn3lFCKBwa7Z/8JcVQebQh+PPtRLAM7YV8K8Y7MRU01aNtbQ4stcUyLjB5fSih7IzyizEXViWDZ38AQ7jofg/vL9x2/h9uUKpc0l3bEaz9eIOQ9YGPkpuJsPsc4okouSOZTAu6ROW51ojnugKWhV6GlOaHQ8qGl0FLX+wDR6sS63mN6f2hp6d2c0NK7oVdLb3PcmN6Qd8fQsIVEg9x/o9TQU/ffgI71kP03wrHfX+eNw3dk/w3su4H9Ny4ztO2/4bdehvAOh6aypiVjHPlVG3y20SwTCLzDZfmLd7pi/w3gHbRRJ4uKdOOdfV7wjkz3D+9c84g+vMN14RCI/Tf6GeReFReoz8s9oAxveKdaoO+pJpzDe/153n8DlcQDY4INMObhsWE+dsVEkEOdWEYrztG6/4YWG+KYFhk9OAfvspQmdXwPqLO9y1K8a1zxmJO3l9706x97fOeN2wV/MYvWPqMj77egz3iVeI6qm3i/5eq42ID1A9CV7vZ+i3H7trLrSjeG3m8Jvd/SNna2jxrHjQm+x+f2ma5KWGFKSSgm+sqYkvC88SqijUQf0XG2udiUbV5BtM/0UcI+08a2OXy+xpt0DYfZOT63xSiPcU38Xvdroh2fZXSYhxkXEj1l9jVmNsxYkLhKodGWxxSy977ABJqWiPKHeMiH/uAC06TEoaaxicVElxNV0HEF5fF1rQoq94hCFZZTChUkXhYJesrs67dcFrnQbI50mN80Ocze5hrGCvkNsEVCvkvZ0+YanG2PnrmGsFC7FTTvt3TenvCtHZxrCLVlXduW2YW0/auIThDdQjTMEJprcMo0lc06sK5s2qdbfPoNLBMI35vLwv1DWZwHoSe/3/KWR997oJrun+/dvFPf3pdcFw565hruIRpFdCn6QcTV5+Ue8Gxd20AEtuOpAt/7cfW2sV5rYdscAs8dsD/NabABf3zpqQZtcwZabIJjWmT0+EiRgwc5xq8/GFz7cStzBu334+b7jt/DbUOVS5qW/rgz5gwOx7SkHo45L3VJ3OqUNfGrUzaZV6eMTjwvdXRiC9FO6yZzRdqa+Iq0JXEVaYdjdloh746FnyCag36DaL2h580ZOHUsNGdQpMbBEZjrmTOoJ5pNlSgwyG+jd1QvQ7iFQ1PZFbV2x6rX9vpso1kmELiFy/IXt3TFnMHbu4tjlDpNvk33nt0fecEtMt0/3GIf+6Au3MJ14aBnzgBt+j6imwxyX8vb1OflHszC93u57ecMKpXV5fIbI/L7QKigt/kC12OeF8AxCLoIGa3Y5iuh7btAWuyGY50j49b3JI52PHd6v8/6sEwg7JjL6g52XLum2moWgXlXpMXL3J9M98+OIyZIO4a+deR7o1wXDnq/C3SYaDmdNBrkd7882TEq5s2OZ1D8SvI5Zikrmua7zPadaafQrWiXND7vjx+idU5Pi21wTIuMHj8kc/vesutKmzt1f7qWbytLxzQ943F/Orb3KuGfT6EV0/k7p3ct0S8Ncu3qFlXnjhlFn0DiNOjKtVR2pUdd6Txspt9ffT+4vlcbju9Htf9erRbd0ed3HiJdiO6zJO4GwvE3pG0y35A2OjG6z+jEQ0Rr0zeZCzPWxBdmLIkrzDgcszYd8rArlOvqd24wSL/zj4ae6ndCV/T4ncG0lhLfiuq8dx464ndivAJ+58sG9jv918uQ38mhqezVquGO/Kq3ffbJLBMIvMpl4f6hLM6DEKzfiuqfqv9bUQJAQZzpd8p0//Dq89P0+Z1cFw568GoT0R6
ix+hkCtGz6vNyD2dbmw+/cx5h1GnkfWKt2s30N5OOsCLfRkh2rvLl2sZwVNTV32Ti9WqwC5Crfwp9RJ0Q53H3zsC3WmyJY1pk9OLbn/38dZ34Ntwrvv2w/qZi4NvI9Q51zVr7foLbhyoRPPh2k0Hi26dUHZX4NnD9AXSl++3J/HDZjE+Oet1PLLT2oyvWrLXfT4x1TKsNuc87+btmbR41jtVW3+s85pnyrYNNLanHjS2pm4imG/Ot043VRC9YNxm/sh4nGmz6yjrP9AJRtRUVdL3GcrqGNc25zmO0UR7jmvi97tdEm20xWtOORlxJdFear7UXRyO+6zPECOrbp1KhlIx3FBqWjvKHeMiH/uAd4/npR439048TnSD6mo6/pjy+rvU1lbvCBLL12aLQd31aFLorzddvaTFdmbbHZE1bTsT3Bu00OPtLY4Qcx72R+EuGnrgnM7c9etasRYTaraBZsxbRab5iawfXrIXasq5ty+xC2j7uPcYXbxJy7Cfkg3NoKrOdf3HZnPs/8ek3sEwgfHAuC/cPZXEehGBcs4Z+79Ql+t8XS4YzI86cM5Lp/vngI27yzwd3nzPiunDQu7/Y+USv0sl1xN9Un5d7sAjf74uNIqSxiPxtvBc/W/Ds7y10PIP3F7sUleT1aq5+t7sPzjKu56GTIK1+t9b9xbTYD8e0yOjxp2ZmppZvHvySwxsuCe3b0xX7M0e086e43awS/vlVWv2nAnLATib5xhwFBU8mfZ6/OGkn0TyiYflPEp0k+lHyvPw5yTuJPicqKPgR0ckkd8xRRNdYmezEHP3y5TGuCZtyvybs8B95K5Ob8nYTfZrsCwc05dlT4/JBS1LyFSq1blGoJhXle8McW/Lnp27Ln526k+h3RHvoeA/l8XWtPVRuSQFoccpEheypGxX6NNnXb9lYsDt5QcHK5CIib/4T2o8rDHLPxbsMPXF/Zm5rQvuL9Yz9mYNvf7FQW9a1bZldSNv/GT2okaQUy4ivMIT8J6dMU9kfmuPLoxwHfeI/lgmE/8Rl4f6hLM6DEGxzmM9vCdz+Yjmqz+LuP8l0//ynZbfp85+4Lhz0+k/DKcMqOrkxTK5p577SNeA+uraBCGzH1ULuzzyf/hYp+4rJ9bKVyhxmDaXAgxpcgIeFtgJr7qArfIzrwR5wjDjS2F/idBDiWv2nqQZt7wdpsR+OaZHR4z+Jpj84JKbxvg5m76BHCQ+8tu3bwf9qeHbJoyV7CYcMGDekFLzhm9UKHzBud+kpOg8OeXDjWHPDXjp/7TtDdkEefBflB9/+6+O7cB5cKd+t3+4ue4pVpk5OqUytTB6atjhpVfripKOZi5P2Z1cm78+enLI/e1bq0cxS66r0UuvQtFJrZeqsVMhzX8O4eZFB4ubvDYHBzUG3p1ibjulZpxURPPqpvB/UeWPv/q7TWmiQWOXfBolVOqqXIYzDoans4oMvOdaOesZnG80ygcA4XNYPDeMM9oJxZLp/GGfq3fowDteFg951WrDFiQb5zfqZ6vNyD/6t05pBsemEd+a3fZMCbxnYxOIaVJTHhBm7eDoG8fsHPHbM+IflteKdr4S2d4a02BLHOkemfX2+3f+y4w9Ln/dZH5YJhG1zWd3Btqsny/mfykv1z/8UebTtgWq6f7ZdtNq/d4bc12ByXTjotW2M4c0iOmaQ/W9HbLuKrHiO8hxnKZaM/9PO2MfAabfso8BGXe3WdY2lq3+DNMh3xJfRugZTiy1xTIuM3rmg8ckvdBhnUlBwJlTFpqa7PdeGv1+yvBFcrsFsjwG5fagS/vklWnGhe/CGCycIec1/UQVmEF0fJvcfgFxTHvBf4LAedKW7rcFcVZ9cvnnwqdCcYbCMxStrMCPa/P2uWIN5MJ/6lXTf4+4H8+en359flH41kZnob3nziZ4m+oKO+2dcTXQ/0cH8L9IP5j+d7j7ufoSucWWGc9z95Tx5jGvi97pfE+33urwrM6bn1RDtyPA1Fj49T2Q9rdAlmQcUas26PB80JAvlD/GQD33C5fmDsiryC7KuJppENIWOp1AeX9eaQuV+oFBR5mmFRFZJAWhHhq/fUlJQk5FYcGXGkfwrM7zNIeLbrninAXN/Jw09bw7R2faEvvEa+sZr58whhtqyrm3L7ELa/jX0oHaRUtxC/AtDaA7RKdNU9sg6c/ldfUW5r3JYJhA+OJeF+4eyOA9CT/bBfwJnRpw5vibT/fPBmzbqG1/junDQM4f4MFEZZfiGTt5DPDzMsw+OCri2gQhsx3ME9g1s/+aj80tFvt9vdF1nifE0PtbqX081aJsr1GInHNMio8dvMtsaHeOTv9KwN0NzPt6Jx1wM3pHH3Ao45mLk8W7lnXlwyIPjnXjMxTy+Y8czkN9BHHMx9cQf+/XxXTi/jbjnvQTb783A9x2/h9uLKpc0lzbinM4VHklNzzySOiljX9qe9JSMPenzsvakV9omZVTa0jMrbe9lzstampWSsTRrX9rSrCOp72VCnvsUxsdY07+HyEiV/NjQ8+YKnTrWOXOF51w/g2yucBlV4A9EkSTwqUG2jx3RyxCW4dBUNrv/q45Vo477HANlmUBgGS6rO2CZurpqK3Ss+YqxurHMZC9YRqb7h2XWbNWHZbguHPTMJzQTraUT+4m+Jfqz+rzcA8rzhmUWC2CZ6fQ3k3CMRDLtkU21sjtZjfKcbe3WO+EBgsMeXN8pAXnCPZzGeTi9M+YOtdgWxzpHpn19tv262fHc0s981odlAmHrXFZ3sPVTQ8ZboaNRs+fp3r+lyqOtD1TT/bP1mm365g65Lhz02Pr9RI/Sib8QvU/0dy+2jjzebH0GxTE3iBUB2Dd0vjJjKPcQZdsE8Twh4rz2kecEOV2rvWqdD9RiHxzTIqPHXyEnq3x88olOnQ/svXQi6tot5gPhIzcb5HzdG6reAeNNzwscnoOudMc9WXYNMZaH9jYIpj1ZItp8+q7ak2Vz5tn2MRifOdiUnnncmJ65iWi6cTzRZqJP6Dg36zjRYFNu1jzTJ5nzTJsz3cfQsY/BqKz2+xjgGNfE73W/Jtpui3FU1tGIJUS7fY5rH40YaBtiBF2VXanQyL7vKDTLhvKHeMiH/uAd4/W2o8aJtuNEJ4i+puOvKY/vfQxG9l1hAlVmb1FooK1Fod3KeL2339JiWpK1xzQqazmRt/nAMULi4RuJJ4f1zD1ZZNsT+HcKQ+3WOW63gvSdwlBb1rVtmV1I28e9B/66iXhKWGg+0CnTVNacfmfZkFXRPuc5WCYQfjWXhfuHsjgPQrDtyfL2tbVW1PHpweN1+9W3wpkRZ46hyXQ/vyP2nL4xNK4LBz3zgXcRQaH6UKZlxPuHefarMVfn2gYisB1PFRhDGycm0X8b8YXKO4RSr0BoH0Cwd5QD/eJxNK1+9FSDtnk/LfbAMS0yevyjzP65DolRgmmv7O7zjuDwmGPZw2Nysj+JXZF1uteKrF7mFVl7LDnZeyzHiLbbeplH9j3da2TfT2JH9h0es90Gee47GAdvFXK+DxYdCBwcbPN+Th3rzHm/c/0NseCZ99si5LsKWQaJPT
oWoatlOq5HAtw7WH7+hbI04IEWKnXAhKzgrrJKTN8kJxH9ZKVXCWZqUTuJ7jMkvgrqFVOmENsYqywmO0bQ36HaudVXg2I42A8mEk4GzDGpOwvlnmJ67aLLksMA8skwZlysVhxtprOT/B+Q7LTeEeip0a6JgCRxn3nqyztnYocCJ3ZnPSYr6xD8HdfetJexhLGPEdvgUZI7QSG1zGHTPOLH3C34m9XvSEYM9nEyscXmnSDnfmbLKhYsll1XA/0RYZcH1qiykNI5WVClPQOBPBVtIsYQvmVMwKqNdOaok13klnIowyTsmIOeqUFjivcsZ2PJ/jdBDHH0ciWagHjFQNec0JpMVdUMfBXuP1kKnYJzoug2XnO84EqNeMRIm/Y7XEFRjnnA7n5M5Sw70A51xPpl4qBy3LcXagoadw3hpcvTMS8D6TC7Y6LNPIlgXtlPPhmjEF0UqcebvoBPaJjISIeZBsp3fLiMV7OS45W/CYzFUbHjPtTWEKMlWcJ7qpS91BmDbM0eoKYR40Nm5MQZMNZ7eu6YR3BFyjhLvUXhDhOpiRDtWsiBpb8KSiuTOGBK41vVYa6wEjHveEvDYa76Qw4hMewxUy5o4nmQrUUU+qYE1kpOJcjINmaXilVkrCs1lpcX+UkdQw1U5lnCszUhT+Tvf8wYTgPVcfmGrMnaBNhd7FcyWOux5cmCXcv/bRCrzzwIjH1uiT6ux8+6lGx1LImnBe5bPLFVNdTMY9SF9sx4v5arg8xAglha2xyYz3nX3THmc1vnEuBmcLwkS8IxCUcTg2hmnzA/I6sGnjnCIY1dl3ZqQ6KJ9AXJphCkgUDzUk0LQF1UFcgNobuDTDOXlgKxGYAqcl7iwEzq9xDA6elwO1N0ynMzCSpMM9VS6PvcUcTWQrlkKyDvv4kLXDUSaUqe2AERnxjkAoKuJ6O9TJ7jpIxXlVqLpzpiRUlzryaSriiB7a1C9CSBSiYk8ehbY4J48cn3HuEqWUndkkRRw1o7ShQPlERRrnYlFxPopn01wCdRAOZ9CHMGJxNhiNtDiPj8TShlJgpHP6jJGKO86RrMZ7u4wQ9qPRao+lHZ3udKWiN1pBy4ocuB3WkMCZPKYtcvKANYQVAZ8ciUlLvHfIiMLVR0xGdvjGOSzuPcVsBM7WGTF4LzRmR/gsWyxK4I4MIzljKXDNhs+yxco1P0aa6uzTJiET7myzM1D4hEqSurMDlThzwT2HJK3BFT8jAdfBSSmBe0JJ6YCr0AnBGUpSxggYzZKijGN9mg7TQZtLxKkIHkNGY4+UOM7hzIERwjGLkVChT0xBEK5/UrAeW0niSg9rSErTMIywG8PryaJjwSlrhT1fysYpTEF2FtdZXCwEBW0hFaPx/lzqniVgJOLKNRWnOjpaOe/FY6pJODonjpq4ZkqNCPfwsxANZxtZksZ7lFm6hnU0K9vZo8xGKrxXfXRiUD7ZWIHrrGzZV0EPm51TuDrMXnmch0wI7qkyUnE3PHOC3aE6ioJPiWZOA3AHPUfOyTEF0UlcNzKicS2TEyV8ZiFnUbDV5yw7/QNmdMRncXI2DfcCciGDM7tcpcAdwFyVwz2uXHvxNFfOKfCYpiquAXMjg71/ETrgDnqRpjg424TgvY8iifDOUJEuY8sqWmacCRXNxTv0fEUbiXveRduMa6ZiOEXA6zGqc9KiGKvxLmkhQRFqbyFlcf+gcCQhqDuFKOPTdIVsLng229sPLtYVvJNS/NQx6iAJV67F64JjcOGsE+ejJZgU8HqibdiHlGQC9kglOdehjaNChwdZ64Z5UEznNkQp1uAsjZGCM9Uq2BjgSqswFttpFa5zS4EdhcFWUtV0ohsjqnhojVWRx9KubHP47HE1UmLJMeJx14PLScJ6zYjHJ0eqMRGf3a9scziiM2Jxr7NOBTL+DvU6THU6OQ+lXYkFBz1Stcbh/K1a6/FOV3VTcgkRz8ky5ptnz9dBnMLZLSOdjlkNbHZ4pcES7k7WyIUo5gFXbTiiV05RcKRlRON9s1plwTurdToNhGXalMT+mhHSWOMbF4fYGpvlBA4hTaqItbcp6y+3PL9CtJT49kDTquE6uGmKePelGVVxZ7tZoXEMblZHfB6Jkc7JnjYdJsDfcULjU/DNmYYjU3O2c4K1Odewr2p+alhhhCLuHjdvOxrPSMV1CSONMNVBWoVpC9LhPl9LIuJKvCVj8e0oVgKP950ZyfhuX8s6YX/dMomOJmaquP/WOHkJmLbiLL7X1qpp+IZLq65zM6g13dlHb6y+pwrs9gTtf/pxfTfdj59uEZ9+Tdd1r9anEXm2fhiXs6u30w362+mNh/F9Wm4u+MPwuB2Hz5H754cLeHNzAvbr2WrVxtn8AhzZtr5bLPe7Mjwef6/ezsZ3r/Oe3xjh08Xw+IdPc01Xv4fxf8bt8+6Efhxnu9M13Msr0pxaJuu75ebwy3J9eb5/fri/jNrMxpfPoOfN4k8fxiOfXtnz8e7wNKyP15l/mR3vAB/fHTY3l8Jkvhrvp6u+w9vZbne6JvzwTr65Xi3fPR3kdMn3wH8tZuP74x8P79QZU0dMnbDjH7P5tDJ++/zj9Zm6PPvsPX15pl+fmcsz8/qMLs/o9Zm9PLPTs6eX3TCulpv3b64//ZyeP25Xq+3HYfG/r/g3j05M2D/NdkM53bdn9dqeHpwv4O+vPtwNvx2Ya4vl4fpqv1su1rPfprv7py2s89ur2cv2+fDFuxM2vbz7cobF7DC7XJv+YvBRxb+iZfp/AOZLVsf7l/XD6/X+H06Er5b7w/2wm42zw3a8YL87YtLcLbbzn9mS+NfZlrgcpHMfStInmE7w37y25EKrN0Su3hhq+SYpF26CjWaqCTO59vezIV7+u46f/gEAAP//AwBQSwMEFAAGAAgAAAAhANqloZ7AGAAAiAwBAA8AAAB3b3JkL3N0eWxlcy54bWzsXV1z27iSfd+q/Q8sP+0+eKxv2VPXc8uW5U12k0xu7Jl5piTIZkKRuqQUJ/PrF58kyAZAgIT8kXimasYiiUOwz+lGowmS//jnt00cfEVZHqXJ+VH/l95RgJJluoqSu/OjP26vj0+PgnwXJqswThN0fvQd5Uf//O0//+MfD7/mu+8xygMMkOS/bpbnR/e73fbXk5N8eY82Yf5LukUJ3rlOs024wz+zu5NNmH3Zb4+X6WYb7qJFFEe77yeDXm9yxGEyG5R0vY6W6Cpd7jco2dH2JxmKMWKa5PfRNhdoDzZoD2m22mbpEuU5vuhNzPA2YZQUMP0RANpEyyzN0/XuF3wxvEcUCjfv9+hfm7gEGLsBDADAJEduEGMOcZJ/36BvR8Fm+evbuyTNwkWMkfAlBbhXAQU++g2zuUqXV2gd7uNdTn5mHzP+k/+i/7tOk10ePPwa5ssousW9wFCb8HOavblI8ugI70FhvrvIo1DsjBJp5z35Q9lsme+kzZfRKjo6IWfM/8Y7v4bx+dFgILbMSA8q2+IwuRPbUHI8u5B7Qjf9cUM2LTDu+VGYHd9ckIYn/MLY/6XL3dZ/0RNvw2VEzxOudwgLFeuEgMYR8YvBdCJ
+fNoTC4f7XcpPQgHY/wvYE2BxrF+s5hvmVHgvWr9Ll1/Q6maHd5wf0XPhjX+8/ZhFaYYd5/zo7IxvvEGb6E20WqFEOjC5j1bor3uU/JGjVbn9X9dU/HzDMt0n+O/htE9VEOer+bcl2hJXwnuTkHDygTSIydH7qDw5bf5vAdbnTKja36OQxJOg3x1iQFrk0tUCTGqSfe3a6VFOJxo+1olGj3Wi8WOdiDrCY5xo+lgnOn2sE1GYQ54oSlboG3NEeBqA2oSj8UZnHI2zOeNofMkZR+MqzjgaT3DG0QjdGUejY2ccjUwdcHbpUqdCSexDjdrNuM1jRDvc5iGhHW7zCNAOtzngt8Ntju/tcJvDeTvc5ujdDrc5WLvjslQreIvdLNl19rJ1mu6SdIeCHfrWHS1MMBadZPnBI4MeyrxcpAcYFtn4QNwZbRnS380KoU7qpDwygQvSdbCO7vYZno137SpKvqIYz4uDcLXCeB4BM7TbZxobtFFxhtYoQ8kS+ZSyP1Ay9wuS/WbhQY3b8M4bFkpWns0nEL2EgULQeMZ8T9wi8iDqTbjM0u5dS0NvEeFdlHe3FQEJLvdxjDxhffAjMYrVfTZAYbpPBihM97kAhek+FZA482UijubJUhzNk8E4mie7MX36shtH82Q3jubJbhytu91uo11MQ7ycZ/TtS22zOCWF8M79uInukhAnAN2HG14lDT6GWXiXhdv7gNSh1bDyNbue5zJdfQ9ufYxpBZKvTJ5KZIavOkr23Q1aQfPlXAWeJ/cq8Dw5WIHX3cXe4zSZJGhv/MxgbvaLndJpjY3CeM9S2O7+Fe66a6qU/HWU5d6Er4b1oNkPJIElBPqIdWUvu3esxOruSPU45LV7HNJDL+N0+cVP4H3zfYsyPBH70hnpOo3j9AGt/CHe7LKUaU128gGlxGpknm+292Ee0dlRBcJ+cBc3zYP34bbzBX2Mwyjxw9v8eBNGceAvZ3hz+/5dcJtuycSSGMYP4GW626Ubb5i82vdff6HFf/vp4AWe9ibfPV3thaeCEAWbRR4GGYaUrjwh4cQySiIvYyjF+z/0fZGG2coP2scMsXUqO+QJ8SbcbFma4cG3cFx8wPHHQ/5D8f4Ms4hUgnw51a0XMKlQmO8Xn9Gye6j7kAZeakG/73e04kiTW9raH1z3NKEC1z1FoGzi4YHo18PFVuC6X2wFztfFzuIwzyPtbdLWeL4uV+D5vt7u0z2Ol8Zptt7H/gwoAL1ZUAB6M2Ea7zdJ7vOKKZ7HC6Z4vq/Xo2QonociHMX7nyxaeSODgvligoL5ooGC+eKAgnkloPsqHAms+1IcCaz7ehwG5ikFkMB86czr8O/pvo4E5ktnFMyXziiYL51RMF86G14FaL3GSbC/IUaC9KU5CdLfQJPs0GabZmH23RPkPEZ3oYcCKUP7mKVr8gBDmrCF2h4gSVU69phsMzhfJP+FFt66RrB89stDRTSM4zT1VFsrBxzaUiocjs8am9GnNTp34WMcLtF9Gq9QprkmfVs8X75hj17Uu29q9S66u98FN/dFfV9uOOk1thRT9Eqz5hOqrDwRT6momr1Hq2i/ER2Fj0hMhvaNqYYrjUfNjcvcodJybNkSnnPS3LLMiystp5Yt4TlPLVtSz6y0NHnAVZh9UQphatJPMavTiG9qUlHRWHlak5CKlioJTk0qqrhKcLFckvsDkB07n9G3t3MefXsXL9KjuLiTHsXar/QQJgf7hL5GZCx3CZP0fMUKifrphjRttrpF9K99yir1lVtMdCWzVfu3OFVKchQocYb2t6oqUUZvR+two4ewjjt6COsApIewikTa5k4hSY9iHZv0ENZBSg/hHK3giOAWrWB7t2gF27eJVhClTbTqkAXoIazTAT2Es6NCCGdH7ZAp6CGcHBU0b+WoEMXZUSGEs6NCCGdHhQmYm6PC9m6OCtu3cVSI0sZRIYqzo0IIZ0eFEM6OCiGcHRVCODtqy9xe27yVo0IUZ0eFEM6OCiGcHZXmix0cFbZ3c1TYvo2jQpQ2jgpRnB0VQjg7KoRwdlQI4eyoEMLZUSGEk6OC5q0cFaI4OyqEcHZUCOHsqOwBwvaOCtu7OSps38ZRIUobR4Uozo4KIZwdFUI4OyqEcHZUCOHsqBDCyVFB81aOClGcHRVCODsqhHB2VHp7sIOjwvZujgrbt3FUiNLGUSGKs6NCCGdHhRDOjgohnB0VQjg7KoRwclTQvJWjQhRnR4UQzo4KIUz65DcldQvr+yYzijqndlW+6fL5iT/Jj2TLjYc2FVZ9a5PZLtP0S6B8SHBoMtVltIijlBaXNbfAZSS6fMHpJuXvM/PTODJ6x5cg8ecW6P1NAD6ybQmqISOT1eWWYHo2MhlebgnyxZEpbsotwQA2MoVL6lFiAQkeSEBjU4CQGvc1zU1xVmoOTWyKrlJDaGFTTJUaQgObQoDUcByQsFpvPba006RYCwoQTHKUEKZ6BJMsIVcikELHsCVNj2DLnh7BlkY9ghOfWhh3YvVQzgzrodpRDd3Mler2jqpHcKUaIrSiGsC0pxpCtaYaQrWjGgZGV6ohgivV7YOzHqEV1QCmPdUQqjXVEKod1XAoc6UaIrhSDRFcqe44IGth2lMNoVpTDaHaUQ2TO1eqIYIr1RDBlWqI0IpqANOeagjVmmoI1Y5qML91phoiuFINEVyphgitqAYw7amGUK2phlAmqmn9o0K1E8NSc7ckTGroNiBLDd2Cs9SwxWxJat1ytiQhtJwtQa4E526zJZk0PYIte3oEWxr1CE58amHcidVDOTOsh2pHtdtsSUV1e0fVI7hS7TZb0lLtNlsyUu02WzJS7TZb0lPtNltSUe02W1JR3T446xFaUe02WzJS7TZbMlLtNlvSU+02W1JR7TZbUlHtNltSUd1xQNbCtKfabbZkpNpttqSn2m22pKLabbakotpttqSi2m22pKXabbZkpNpttmSk2m22pKfabbakotpttqSi2m22pKLabbakpdpttmSk2m22ZKRaM1s6eah8EIlg0++N4YN337eIvCFbetRlxd4Qym8C0gPfrooPF5HGpCcB/0QU30w7zG8Y0r+zHM/q+DG93mAy6F9yssEnoBbk1Uy4F3327jXxRSh2X0x88knz3azzo1kYR4us/CRWuYWebUlMJDoy6pF/yaH0C1nUfOdH5M3b9FrpxtuIfMPp8po1l76ZJbojPnNFLdNgy8J6/HYs+zCUbL/ye070fAl5GaDCtOSleWK7wJrdhxnbS76P9QmtyBu4UAMTk7P+xUWViWS/Kf54WxwqlFPsze+LfcsY4VMH3Lr8+1z45zqK8d75ZH4656fYhQuqN/x/0ThGazot36Y5+WoVO4s4DqpidCqrgv/i3wljXMLvhJGOJYTWfRjz513p1pS9SOnd17gwiIvGgss0XkGhFZuX+IIuskhwtmD/neWsR+GW/yErEjvG5WDOtVZcO911zBQna3DII6783Ta2raMuB1pd8rHCQZeDUpdlKBLH8GBbXW/QINnpZNif8zhfl2wUl1wKFzKrGAhsUBFYRV9TGhZU+oJKomfvGq
1sJDMcTOZiLU9NMnwZgiQYcZwsGGGTLoIZagXDl18sQkzs72TcqmjDWU1D32q6uhrNL7gfmdRko5we91CoBtG6qobqYDS56ImOSMMMF3rl04x0W0fGRlrG+LIXX4yNDOOSHwp7s8F4yoOmiUIqRTOFioxDRadAMjl381c6yz3M4MUVsd/E3eub/GYvNVlR4jvKaqyVFQ9GvmQ19h0IxjgXGvBoZVKRsJJBRdXMlfyiqFFCTkfyHbxxLJYof14K5EW6u9cpTpz1oIpblGIQXZaGnhenxYlWizzM+tLi5KlSHOpTrZLya/pPwc013bgIl1/usnSfrHjuBHUozmhOa27D+3QTSlkN32ClpeFkfH1WyoZrKaRT2nIzWSGNlHJS5DheRsypVk58cuNLTtOnkpOYOhQ7If/ikA7816NOxP5rGWkOoQ4fU/lTrTq44X2p4/Sp1EFVblaHOMSjOl6qHs60euAc+dLD2VPpQcRUgx7EIR308FhxQZGE9C1HjSUmIFzyt7VryqX8O0vFS4ToV5bqCtF8jEnDLR+jy8c51FTr+70jZWNDn2lZ2VjnZZVnrfi4+pp6iPuziJk08B9vaZb8wOdirKerb1wNeP8MxfH7kB2dbvWHkjyb7e336Es6a/sX7HsT2vYZvdmhBTipdob9NOuEfXMyYk/haMvqpKKvMDd9mKurpS01vNzn2DQ35IB6/yrF5nov+c6gH5QxqRbmlH6gC21c4fqK9Yutzz60KZ87RqJmFllpVsfiwBOLfEjTDk760eflF00dCWHVTR0hQ0+E8CKHJSFygfRwhFSqnw2EdKmJOhLCipc6QkaeCOEVV0tC5HKnpzqQoKZjSaeJtw6FHkfeWHVQx9vYE2/8Ci15kwuMfngrcuMfhjdWSdPxNvHEG48z/kYk84xW72PtJi0auurB0CHHsKkGeuealbl0XE89cc0zxcfimrNcq2k9BueWRQtHkli1SUfSqSeSuMUfmaQXTAsr+uhoOfNEi6gePUWcPLzvVCmq0mZb+lGXAMlUnX2rHVBE1+vxfSp+5GKffp6vKKFUuZjP8QyHX5dxWRe4J65aKSUqeJZ2MMiXXP7xZbxHasMcB3SfyjBkf/kKbnqEwzI3U02zYg/hrnwvqF0ykVYTrqmYunTRS3ltSsvULz2OsHkKudJDuGs3yspiFcaQz0j0Bp2PhzPxDW3zukFwU0mxcKZcuFf5UV1lBbjQVGYcWTCoVc9JjQ4XazuJVLG2oEfNw35eYHB+CO+7WHPAjqK/4EGuhi7COKmY3UYbLLwP6CH4hEN34tfiH2iJFK3URVC+UyqFNgcJDzGhpmDHaFhcG3svVP2i2NYmFdncD6JI5cCvuGHABwQnk2wvVyzwiXI5PTTHYuKfgP2b3OYhf2CJCT75JGN0fdq/vNIN11xpHN+8FouvqGkQrnn0gnJGYb67yKNQTK/Dz2k259s4JRxVDvez6fhUXQLmqcUXlBUklrUquOVw1atSCkq5dU0PJc3qVdalrGskJjBVGZ8nVQUx/Dv1dVb4Zo1FxT1qO7+tJCosm26bl7DPjNQ7y7aq1OMasShSqUWLxGTAE+BWIYzfEVQEsAF9/o1HMPrUjIhghYKMoapYKiov8RucqhftGWoDo95Z78wylbSNBaWRlUR2jQWSGvR0+b7F48mEatXzV0CqxV/9Ao8PJ5BPZxq9h11Gb3anXKF8Wfh9OqzKBUJ2fYaRv7F5o9sQd6Gj+pClrKpRvbJwtj+mj6DT2/P8p8rLgKJwzrFnX5gUiuJbmhQ1GI16E9uiw6nneg9Qh1GQXZ25Iv0GHbr79PNioFj9UmZNZAt5/QzMmugDlnSXyrjyQhmN1cTnIMEs+vT6suKxpZfw1NdtaiyvpLlMsxXK2PN2Nv6vdn/9qNnc1jZ0qFuLFTitGkdYYSv0plvzP9s1P6man/1sMxSROiyCPk9WEiK1n5sWC6LMNL5Y1xbHo+G4Vy39KB7rJL7Joht9sHM0YX5ZOYYSXBxyNmQVS2Iu7eOf8qTQxTO6R2LJgnU+2C4fRXfBqp4ePRftUqlO947bpljXaUoMXrfjmm120TVDetV1e11LFqzzwXZ11TUn+8fStXp6/wbvy8i1wxBR7HEzZAc9T8fDYZGsMRtWjNLD/0g3/blR7qsd3Yuj+QjYwelvf5/x+4owv5LeUK+yD9/X8FIERa7aZv37ZDidDKpJ2ReEth/wWek28uMddkiWqFrdRnqwX4pRpiCq4COaNrwsQRFIprSsYVUzrVyEalk/tWrDjEsu3RWLmazqesqWy5yKlG++jFbCzQ/yWIDSDjZ1QWzhsGyIkuM/buTrPj/6HB7/78eOXgTfGrJLl+KNIS2qETY3BG2DjsJ5gCzJzSlJQC0sAN9PQSzAy6QvxwK1emGxVqStWeBbGIhZ1G9geDFmGVVf/ONqlsswjtM0ucWXCKzD9wV0Z5OR5JRGArVJPZsfgHGwVdeMUZtQ6RbnBPy+r7RUpwyFfcUjDmybvxpY3domGrsmq7Jc3FjVU9jd3p2S2YPQJU0fsMEe0Eqf+sIj3Ojx51fj+Xg6uaiQUrHsKc6IxXSue+JrUPS3eAJflEE3NkUh/QU53K+3qF/pi/ZPugBFuE5tDUogp1iKncLB4B7NipaaMMpkWUoC+RopuCq+SALLPuFMcEZ58qoh+IILurGzhn5a0hfsv6p7EEoJNM8DDi0B+FIKuvFlSgBfPx7FLuLoLtmQ91fyDvKS26tGWmoEvpqCbuyskR9hqHkRknsJYxF83wXd+BqIfmZVTHsKVYgv4L6q4hCqeHHD0xSWM+nGzhp5HZ5eA1EhMlgxphtfA9FPrQpYMKcbX1XxbIanp9cIfFEy3dhZI6/D06vkNJJT1YanHmrDrxr5cTSiqv1OX2rt90VoxLBAfXxG/n0slcCV6/TDTGQRS6RYXMW+2iT2qgRisX5dvEygqp/5eHJ6faXRT5d702ChGlOX/oZnZQWPMOFDZTk89aRP6cNlmKxuor8L+/CppjgCw+uPsFlSLz8NKw/apstgrFg+YusAelLtNftZtchHYmcmrHWU5TtsISqhwy08kINtjSpm5yXfsjyoueUHG5KIK/8AHDxUH2QoTlV7QqG2vXj0gG8n1JX2oL94lCiJ1NAah6+s/nisUmfFvaK0GjiwVYgnqAWOnf0/UcZWWzXQryTLv10f7Jffznrk34J19ia06lBDtrH1NUMx2tizRs3zJs3+/unN0za/OWYfN4RV40qeIz6BqK4jP/OEp8Wi5eeZ8jS+AKRdaLZ5r8jPmvZYmNzbEGlxrtfU55XZgzD7mv5Y29VhfL8aXl3P5wXvbHyvDzw/cwLk1UD6FEi91nqWbkh57BNaowwlS/h2kjBJ0l1IPxCdFQcxJT/+auvT4Wgolplxr/KzEF39oAc3jvJBD8kuJFlVmkT3rIeEe4hnPWQrgRHKdSA65Jtn63ao25jv9/IUhsylm70bJfiYjxHLgmd0tKPIKPib/eIzWho1n/NDVIQAU8sPg4GdCpL4+Q/tG+Zx2Y+25UvRyZsfo1e4Ml7o7fbE+
pbt+UzUDqf/79Eq2m/Iyz372tk/O4a+GjTod5/987utVXPPLqbDOeflaW53SAbkyRP/g8r6dabPfponDWI+qHSYW3KaQz+23maOeNjpXIeJmDzFVgTpmrj716OzKegJETddjNfGMs9foC9nivtqd4s5/AuaTT+3ea1ubOdV+wEf3uFqXT68i+r+QIzw6iW8NiM8X4hx6BH+KQbpvs5ryvU2Jp9rbN5h0H2cImxt0Kl93ogHi4X6+0YqN7ILgRZmdy7QWmA+8Q3BcW9UxmE5bogB/blV6yvSkI5yZn+V7kU0+5u/NvrnIt/ys2H+Mp7H8uvCYK/+2iGpPChbz3FthEfTvvT7SKfkX0NkkPfwhQxXtJD1ROnv4/Q3QeSD4TaU/vhJhpvhHgrDqUbx1wTPj+1P+MSM+k7DxI0aeb2P6SfddFVZcRCry3atyooHNn+WqixjRyjUMaTNr64H1/WQpp/R98VLPQuXsCufLhiXhx/wjU5EPwmhcCLbLrQb5c7mw4uh2sS0Q/URYzZzHjHcqqu1DvHLrfeo3puuUVFnfItnE17GtEPYyENO7ALllAO2nL08k3TxBRVKL+fj62qnjF0VH1h1HG/VC4A+hneIfQWRXoY80m7xriBh+xjPj7/iZzobj6/E17I0F6Zey/AJfY3yKE3AZRU76Onupc60u2M+G40m4kCvycLTriuRwgq3u/gr/+3/BQAAAP//AwBQSwMEFAAGAAgAAAAhACEwqPDXCgAAY8AAABIAAAB3b3JkL251bWJlcmluZy54bWzsXd2OozgWvl9p36EUKZedYH4ClKZ6BCHZ6VXPaDTdq72mEqrCNj8RkKqu23mZfYR9rHmFtYEASYzBzm/NnLpJxdjm/Pp8HB+TH378HgZ3L16S+nH0MEAjaXDnRYt46UfPD4N/fZ1/MAZ3aeZGSzeII+9h8Oalgx8//v1vP7zeR5vw0Utwxzs8R5Tev64XD4NVlq3vx+N0sfJCNx2F/iKJ0/gpGy3icBw/PfkLb/waJ8uxLCEp/2+dxAsvTfE8Uzd6cdNBOd3ie7/Zlon7igeTCdXxYuUmmfe9ngNxT6KNzbGxP1F4yFq89iJ88SlOQjfDX5Pncegm3zbrD3jetZv5j37gZ294SmmynSZ+GGyS6L6c4kNFChlyX5BSfmxHJH3uWwxx4sUm9KIsv+M48QJMQxylK39dyTQUnQ1fXG0neWEx8RIG236va6QeZxBOoZV6wj7kl6oMg4Jy9oxI6qERMkU1og8Ju/fcUhK6flTfWEg0DeEijW8C+WCCSerxTaGVU4zTt7B2jdf183Fa/kcSb9b1bP5xs32KvlVzkQWLY67SWpoWnB5HzJeVu8auHC7uPz1HceI+BpgirPs7rL67XAN3xEsGH/Fy6j6mWeIusl824d3Ot0/LhwFelvGQ+8TDa3FCGouV13rKvMROPPcb6UJmiVJ/iYe/uMHDYJ7/GeZgTK6EmyDzP3svXvD1be1t+xAqAy9vLrpl4TrYXtTnaCbb01lxJXghF3z8sb1ZTsy2Myp64agwD6vGx00QeFlxZf0lewuqG3/208xuXMXTfvW+VwP/+P1/Vfs/F9vWwHvaTvZrklOL5VR+bvtgArCw7tcx1p0ykUj3cd3Rj4h4yDzFVfxl5UbPebire5ezJ+XHPI6ylCglXfjYPL+8hY9xkA+1sLx3GvwIT7z0nlws7XKyfJZxzkn+2VBtp96RiN4lU7dUU1ZYel+9PSb+8mdyLWhRvj2zNE1XacrH/2brAMdEJEmmJEnznuaw9BZ+6JY329P4EI36aLypQF3mVGCnenYZJdLfYRSZeQsOizi2vnikRz/Gg/jVSz57GVYbnXmZm3mkqkzu6SzJByzZx7D0Wxy6EZ0jhcZR4j+v2lmS0Z5HIqMHSwrFHMVYYpqnyq0h2TAENKRezug0bpYwBwIsaRczugm/0anK3irSy+gmlzE6nVtDmiSyLOiXMzqDnyWdHajpLBkXMzqT3+gm6t7S0GJ0nHCBrO4CcGFuSwhNC2JF4cJEUS19qudc7CqiARd0Z2LNnIlR0NCKFPcE/IHXYLhxQQnsrMR3cxjnuWlmpb77FWN+bCihj8H7TwTr5dMWoK/qvEgbX8QRBvZbsowoPY2SIa+YV15dUKJVYNN4k/hecveL99qQzF4rkc9+Rz4pNUBLKSXteCn98ft/eeV0gE/6yunfuDdJqaUNKe228QmkAXlKgfRdy5gC6fXIxYNwzvIItSuKBly6ogd14aJre1ADgV3Vgw7A1rU8qIHfrupBXXDtAh7UwH5X9KAukHdtD2rAyat60AFyvJwHcYJREqS4wSiyZFNVLWbOshuMaoo9tWVDroRbqbEBRk3bsPSZPhNQ42Eq81c3cZ8Td7061cIgimFPty4cZL8Am1KkdJBQk4pYy/+Y+efHqqWACsM6sYDeOXatPOzkovkzY9mb8rhbxLY35XG3hXVvyOPeEfa9KY97P1iYrPT8WHhmzHRNUQv+RbHwXNemsqSxsTDs4/ZjHPZxi1bYx+1kB/ZxjzA62MeFfdy/8D4ucTRuuCAjXZWV2ZGpM8tRJNNCRiWKShENuDDXJ1PDsSwR19jNnRW8eUuSQ6NrhR9OELNimFkPOIFBcxzEyfZu7ibLMX1vlHHNvVxJEBNfcS+XtFz7WQI77u3u7Z5BQPz5Mu3qGWnaXu/JRcOfL5NvO0NN2/s9g0Fxe5xi3kiGmrYXfAYBcXucOrl6hpq2N3xy0fDny9TbzlDT9orPYFD8+TJ0tQw1JwAmLikAgGeKgsx5wT8dAOetbbvGaD61bEOtxFoptO9xF3ZejNv/lRwsVdoi3xvaakOhpyB1NKzzhf2f7naoJd87qZVPRe1oWB536U/wftaRfO8k+ASQXiASGJMc+lSkku99/PaU0Ot0ihoN1dFQ45UBfvjP19X6UZs0dCqsb5VOf8JHw9ywuWhH2q4r5w2dtE/OQftoqHOTL5u7vp03dJKvn4n80bDOTogm2fOGTg6KQvWzcDAalokaDiY0Y3cZyBsoTHAGWwLy+IOtos2Rg0pXEM02Sdp0IkmOVYmiEn0j23TVAs2bqsOCMwKtUoIzAnsCgTMClSjgjACcERAQCJwRqEQBZwTgjEAvgXBCTyIxfuhpYVCGlBI0Cr/fwpobkq3Uz3KVGg/rogB63lK5JBwJEFsW4EjAu4KqcCQAjgTAkYBLQls4EgBHAi4GfUlhJj/0tR0DLypH1vghVVENew5HAi5SKAtHAmjYE44EnNXo4EgADX7BkYCzGh0cCaAhkNMcCUBkUn684CBnIqtlpBfFC7ZtWFr1hrimJgAvAF4AvNCtIcALgBcAL+yzBHjhjHhB6NXx8sxy1KmjFtTyl1CbiiPNNYm6qSbgA7tnBX/yXJKNKYfti75u7p+c3ZH7+cqrqWzU2Zc96MNfeq1MlEvVXlNZqUuvD1i5tbrsbmZK229hZjSsr/dFZ/kPADUYaqveFjhzKlSWTWW7rsduYfvGS7apTNXrUDtTN17OTeWrruBm8nXjpd5U1urS7i7WbrwMnMpdXfPdg7sb
KhFHQm+WV2RHnhj6kbsVpjWTdd2wKllUemlkH8zp3LJsieJiZfzZ1RRj0+lib5ufuoGPeSdj+7xvvtF9kWZlR/c/cWL7y6If1PZAbc8V9j13BQS1Pa2igdoeqO2B2p5LehzU9kBtzzVqexAxM364rCFTdnTmDzHB+wt6kQrvL2iFZ/0IhvcXtCsK3l8A7y84hnx4f8GJk1NCb9dWbFVRJkrpx/zR1kLaTJ5to3VT6hBtu6mFaCvgf7w0d4Td4yLlaRFCZ3S8AEaAUAuhFkItO9QKvZlaVQ1TN/RyzYUfsoAqVMaGClSh9jRPqELNoAq1YgmqUEWNDqpQs/NVoQq9yFdVHV2xnWN/BNacG87cZJ9amSmarGqzKz30iP7mAPzIK1R9UPEJVH3wIZ6L7DzsiwaqPrqlBlUfHQKCqo9W0UDVB1R9XKXqgxgeN9jVMPpUFaQWAoCqjyNIhX2oVnjWj2Co+mhXFFR9wFYUbEXd0FaU0LuDJ7opaY6dSxFeiEI+YCuqDuv7LMFWVD/zhK2oHimVtkh/KaODrSjYivorb0UJvXBVV/Cz+dQ6Ei8Y8nw2RfY7Ll3pDwJO5AoQ9CHoQ9CHoA9BH4K+eNDPLZk/6Duqos7sWUEtpOSPIBVS8q1Buh/BkJJvVxSk5CElDyn5i6bkozzKRuUjdc7VTsituMuJGuddD8YVizV9XC6WlnHFmkkdt9UBbVix5FCH5YtHy7DC4elUbo2WNq5wNu7bFUZOv13uaC3jCtOij8tDWcu44kmdOq6KX7RxqMAw1IEssSCWtWzXMOpAUXNBDHthD2RYDHsgy2aYAxlGwx7IMBuWCyKG2WyDIXUcw2wQa6DMMJvt2kMdxzIblj/JLLNh3lFwmZFZVsMcyLIa1jiG0bA8X2bYDMsRZYbN7KxQxWfxpPXx/wAAAP//AwBQSwMEFAAGAAgAAAAhAHQ/OXrCAAAAKAEAAB4ACAFjdXN0b21YbWwvX3JlbHMvaXRlbTEueG1sLnJlbHMgogQBKKAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACMz7GKwzAMBuD94N7BaG+c3FDKEadLKXQ7Sg66GkdJTGPLWGpp377mpit06CiJ//tRu72FRV0xs6dooKlqUBgdDT5OBn77/WoDisXGwS4U0cAdGbbd50d7xMVKCfHsE6uiRDYwi6RvrdnNGCxXlDCWy0g5WCljnnSy7mwn1F91vdb5vwHdk6kOg4F8GBpQ/T3hOzaNo3e4I3cJGOVFhXYXFgqnsPxkKo2qt3lCMeAFw9+qqYoJumv103/dAwAA//8DAFBLAwQUAAYACAAAACEAuC67FeIAAABVAQAAGAAoAGN1c3RvbVhtbC9pdGVtUHJvcHMxLnhtbCCiJAAooCAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACckMFqwzAMhu+DvUPQ3XWapUla4pStzqDXscKuruMkhtgKtjM2xt59Djt1x53EJyF9P6qPH2ZK3pXzGi2D7SaFRFmJnbYDg8vrM6kg8UHYTkxoFQOLcGzu7+rOHzoRhA/o1Dkok8SGjvXMGXydTo8831c5adsyI3lR7sj+Ic/IE2+Lqtxxnm7Lb0ii2sYznsEYwnyg1MtRGeE3OCsbhz06I0JEN1Dsey0VR7kYZQPN0rSgcol682YmaNY8v9svqve3uEZbnP6v5aqvk8bBiXn8BNrU9I9q5ZtXND8AAAD//wMAUEsDBBQABgAIAAAAIQC38a4krwAAAA4BAAATACgAY3VzdG9tWG1sL2l0ZW0xLnhtbCCiJAAooCAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACsj8EKwjAQRH8l7N2mehApbaUgnkSEKnjwkqbbNpDsliSK/r1BxC/wOG/gDVNun86KB/pgmCpYZjkIJM29obGCy3m/2IAIUVGvLBNWQAzbuuyKlu9eYxAtWtQR+za+bKpvzanJru0BxAcclUswMRBph0LRVTDFOBdSBj2hUyHjGSl1A3unYop+lDwMRuOO9d0hRbnK87XsTGcNj17N0+sr+4uqLuXvTP0GAAD//wMAUEsDBBQABgAIAAAAIQAM4rLSagEAALsCAAARAAgBZG9jUHJvcHMvY29yZS54bWwgogQBKKAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB8ktFOwjAUhu9NfIel96MtyMRmG4kQbhRD4ozGu6Y9QOPWLW1h7O3dBhugxsvm/86X078Np4cs9fZgrMp1hOiAIA+0yKXSmwi9JQt/gjzruJY8zTVEqAKLpvHtTSgKJnIDK5MXYJwC69UmbZkoIrR1rmAYW7GFjNtBTeg6XOcm464+mg0uuPjiG8BDQgKcgeOSO44boV/0RnRSStEri51JW4EUGFLIQDuL6YDiM+vAZPbPgTa5IDPlqgL+RLuwpw9W9WBZloNy1KL1/hR/LJ9f26v6SjddCUBxKAUTBrjLTfwEe6W9l82uAh3ii6ApMeXWLeu+1wrkYxXPtkZZb75b5zsT4t95M2JqX/NeMSUt0p8738oo7UDGQ0LHPqE+GSf0gdEhI+Szl3ZQeKrsuBVIr74qOxbTJe+j2TxZoLMvSOiEkfuj78f8WZid1v7fGPh06JO7hAZsPLk2doK4Xfr6u8XfAAAA//8DAFBLAwQUAAYACAAAACEA3JNYaZoCAAB/DAAAEgAAAHdvcmQvZm9udFRhYmxlLnhtbNSWwY7aMBCG75X6DpHvS5wQCKCFFVCQeumhS9WzCQ5Yje3IDrC8fcd2WKCBXUJb1MZCJGPnj/1l5ncen1545m2o0kyKPgoaGHlUJHLBxLKPvs2mDx3k6YKIBcmkoH20oxo9DT5+eNz2UikK7cH9Qvd40kerosh7vq+TFeVEN2ROBXSmUnFSwKVa+pyoH+v8IZE
8JwWbs4wVOz/EuI1KGXWNikxTltBPMllzKgp7v69oBopS6BXL9V5te43aVqpFrmRCtYY188zpccLEq0wQVYQ4S5TUMi0asJhyRlYKbg+wPePZQaBVTyCsCLQ1rSfRKiV8veP0BXk86X1eCqnIPAMlWJIHs/KsMBqUL9Pb9gTh0P2843OZ2XhOhNQ0gK4NyfoIt6AF2Cwyxm34b+EY+WZgsiJKU6PhBoYunBLOst0+qiQnwnXkrEhW+/iGKGZm5ro0W0LHWs8x6JQHcpEAcvQ0ElbGNE8jidXpnEaCozHwTN8BqICYMU6194Vuva925ueIhNDauAkkIviFcBadJ2Kf9PtEJjDncDKdHoiMIRJ3WqMKke5bROxl4HSuJzKEaZ3PjBCPgENkebhWh4PeMq3/CIeoeQ8OY5KxuWIXSExtJlgGkA93ITH8lUQYxXfJiLFcK0aVqZILNGJg0LU0DJWoFg0uF1SdK5CUvdDFv5YV32ELMVunvuCdlaOGd5J1If8j6xwTDvVBLqSEsUpnmcY66xXIbZaJw+OkiEwkeo1cScJeBt3brMIbyWxx0S9it3Pc4Be182Ji0qBCYzg+Q+OKEqlLY0ZW8P7e3EFcYpid5C/7ZnDOK9q4upOG74EIbvBN+Hhdq93FAjEkzNfWPb4p3CI78YHE8SprFgh+NyXKEz34CQAA//8DAFBLAwQUAAYACAAAACEAqYeqztMBAACtCwAAFAAAAHdvcmQvd2ViU2V0dGluZ3MueG1s7JZNa+MwEIbvC/sfjO6Nvz+pUwilpbAsy7b9AbIsJ2IljZGUuOmvX8V2m7TZQ33aHHzSaEbv4xleYXR98yK4s6NKM5Al8hcecqgkUDO5LtHz091VhhxtsKwxB0lLtKca3Sy/f7vuio5Wj9QYe1I7liJ1IUiJNsa0hetqsqEC6wW0VNpiA0pgY7dq7Qqs/mzbKwKixYZVjDOzdwPPS9CIUV+hQNMwQm+BbAWVpte7inJLBKk3rNVvtO4rtA5U3SogVGs7j+ADT2Am3zF+dAYSjCjQ0JiFHWbsqEdZue/1keBHQDwNEJwBEk2nIeIR4eq9oC/IEaR4WEtQuOKWZEdybFdOD0ZLa2nNdnpcna5gdYmiMIrzOMy8vl5Bvb/tazvM7XVB7iFrDf1BG/OW9d6zv9l684/0E7TnyRUYA+JT3vaxqtUhMkeNtBcR2Y1+PZw7BC0mdIwJcLD3B28NDAh+0tk0ZfWho2ladTr5FKl7HHoIP9qReFmaWTfi2Y5LsCMPvSwPgiCZ7bgIO+I0TNMoCmc7LsEOP8zSKPeSfP5bXYQfgR+kfpZncT778b/8GNb+kQWtYYK90jtQKwWdpqr/GuYcul8/7wf9ySN7+RcAAP//AwBQSwMEFAAGAAgAAAAhALpc8WHcAQAA3QMAABAACAFkb2NQcm9wcy9hcHAueG1sIKIEASigAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAnFPBbtswDL0P2D8Yujdygi7NAlnFkGLoYVsDxG3PmkwnwmRJkNig2dePthtP2XaaT4+P1NMjKYvb184WR4jJeFex+axkBTjtG+P2FXusP1+tWJFQuUZZ76BiJ0jsVr5/J7bRB4hoIBUk4VLFDohhzXnSB+hUmlHaUab1sVNIYdxz37ZGw53XLx045IuyXHJ4RXANNFdhEmSj4vqI/yvaeN37S0/1KZCeFDV0wSoE+a0/aWeNx07wiRW1R2Vr04G8uSF+isRW7SHJheAjEM8+NknOS2JGKDYHFZVGGqH8sFoJnsXiUwjWaIU0XPnV6OiTb7F4GBwX/XnB8xJBXexAv0SDJ1kKnofii3Fk4FrwEZCzqPZRhQPZ6e1NkdhpZWFD/ctW2QSC/ybEPah+t1tlen9HXB9Bo49FMj9puwtWfFcJ+qlV7KiiUQ7ZWDYGA7YhYZS1QUvaUzzAvCzH5ro3OYLLwiEYPBC+dDfckB5a6g3/YXaemx08jFYzO7mz8x1/qG58F5Sj+fIJ0YB/pMdQ+7v+bbzN8JLMtv5s8LALStNOlquP+f6zjNgRCw0tdNrJRIh76iDaXp/Ouj0055q/E/2Lehr/Vjlfzkr6hid05ughTL+R/AUAAP//AwBQSwECLQAUAAYACAAAACEAtJQBXr8BAACvCAAAEwAAAAAAAAAAAAAAAAAAAAAAW0NvbnRlbnRfVHlwZXNdLnhtbFBLAQItABQABgAIAAAAIQAekRq37wAAAE4CAAALAAAAAAAAAAAAAAAAAPgDAABfcmVscy8ucmVsc1BLAQItABQABgAIAAAAIQBTJPdGgQEAAHIHAAAcAAAAAAAAAAAAAAAAABgHAAB3b3JkL19yZWxzL2RvY3VtZW50LnhtbC5yZWxzUEsBAi0AFAAGAAgAAAAhAIcEgvmkDAAAskAAABEAAAAAAAAAAAAAAAAA2wkAAHdvcmQvZG9jdW1lbnQueG1sUEsBAi0AFAAGAAgAAAAhAC5uMgNcAgAAxAgAABAAAAAAAAAAAAAAAAAArhYAAHdvcmQvZm9vdGVyMS54bWxQSwECLQAUAAYACAAAACEAUZI/8p8CAAAzCQAAEAAAAAAAAAAAAAAAAAA4GQAAd29yZC9mb290ZXIyLnhtbFBLAQItABQABgAIAAAAIQCOmL+CCwIAADkHAAASAAAAAAAAAAAAAAAAAAUcAAB3b3JkL2Zvb3Rub3Rlcy54bWxQSwECLQAUAAYACAAAACEAo0pYhAgCAAAzBwAAEQAAAAAAAAAAAAAAAABAHgAAd29yZC9lbmRub3Rlcy54bWxQSwECLQAUAAYACAAAACEAOOeS1EUnAACgOgEAFQAAAAAAAAAAAAAAAAB3IAAAd29yZC9tZWRpYS9pbWFnZTEuZW1mUEsBAi0ACgAAAAAAAAAhAKkb6F+KrQAAiq0AAC0AAAAAAAAAAAAAAAAA70cAAHdvcmQvZW1iZWRkaW5ncy9NaWNyb3NvZnRfVmlzaW9fRHJhd2luZzEudnNkeFBLAQItABQABgAIAAAAIQAw3UMpAgYAAKQbAAAVAAAAAAAAAAAAAAAAAMT1AAB3b3JkL3RoZW1lL3RoZW1lMS54bWxQSwECLQAUAAYACAAAACEAYEsj5sUOAAD0QwAAEQAAAAAAAAAAAAAAAAD5+wAAd29yZC9zZXR0aW5ncy54bWxQSwECLQAUAAYACAAAACEA2qWhnsAYAACIDAEADwAAAAAAAAAAAAAAAADtCg
EAd29yZC9zdHlsZXMueG1sUEsBAi0AFAAGAAgAAAAhACEwqPDXCgAAY8AAABIAAAAAAAAAAAAAAAAA2iMBAHdvcmQvbnVtYmVyaW5nLnhtbFBLAQItABQABgAIAAAAIQB0Pzl6wgAAACgBAAAeAAAAAAAAAAAAAAAAAOEuAQBjdXN0b21YbWwvX3JlbHMvaXRlbTEueG1sLnJlbHNQSwECLQAUAAYACAAAACEAuC67FeIAAABVAQAAGAAAAAAAAAAAAAAAAADnMAEAY3VzdG9tWG1sL2l0ZW1Qcm9wczEueG1sUEsBAi0AFAAGAAgAAAAhALfxriSvAAAADgEAABMAAAAAAAAAAAAAAAAAJzIBAGN1c3RvbVhtbC9pdGVtMS54bWxQSwECLQAUAAYACAAAACEADOKy0moBAAC7AgAAEQAAAAAAAAAAAAAAAAAvMwEAZG9jUHJvcHMvY29yZS54bWxQSwECLQAUAAYACAAAACEA3JNYaZoCAAB/DAAAEgAAAAAAAAAAAAAAAADQNQEAd29yZC9mb250VGFibGUueG1sUEsBAi0AFAAGAAgAAAAhAKmHqs7TAQAArQsAABQAAAAAAAAAAAAAAAAAmjgBAHdvcmQvd2ViU2V0dGluZ3MueG1sUEsBAi0AFAAGAAgAAAAhALpc8WHcAQAA3QMAABAAAAAAAAAAAAAAAAAAnzoBAGRvY1Byb3BzL2FwcC54bWxQSwUGAAAAABUAFQBtBQAAsT0BAAAA\" \r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document.\r\n\r\nCloses #22077.", "comments": [ { "body": "@jasontedor Can you tell me what you think of the latest changes? Thanks!", "created_at": "2016-12-12T10:57:54Z" }, { "body": "Why are we doing this instead of adding the missing dependency?", "created_at": "2016-12-12T14:48:52Z" }, { "body": "I totally agree on not catching Throwable and the effort made to change that in our codebase. That said in this specific case it seems pretty awkward that the node crashes. Is it right to let it go down for a missing dependency problem triggered by some specific call? I guess there is no way to check for this earlier when the node is started?", "created_at": "2016-12-12T16:34:14Z" }, { "body": "> Why are we doing this instead of adding the missing dependency?\r\n\r\nWell. We reduced the number of dependencies with this PR: https://github.com/elastic/elasticsearch-mapper-attachments/pull/186.\r\nIIRC we wanted to avoid some Security Manager issues with 3rd party libs and some JarHell issues.\r\n\r\nWe ended up saying that we will only support a subset of files. Here the user is sending a Word document which is supported but within the Word document he is having an embedded Visio Chart, which we don't support.\r\nProbably we can fix the issue for Visio. But then, what will be the next problem? MP3?\r\n\r\nI think that killing a node because a end user is sending a non supported format should be prevented.\r\n\r\n@javanna No we can't check that on startup because it depends on the end user's request.\r\n", "created_at": "2016-12-12T16:55:00Z" }, { "body": "> Is it right to let it go down for a missing dependency\r\n\r\nYes, it is. We (developers) did not setup the system correctly. We didn't find the problem because of inadequate tests. Let's fix the tests, not try to mask the problems.\r\n\r\nThe underlying issue is the OOXMLParser in tika is poorly designed (all static functions) and does not allow overriding which embedded formats are supported. If we are going to handle this by catching (instead of adding the dependency), then it should be completely local and specific to this case, not general purpose for all of tika (so we don't mask other issues). OOXMLParser is subclassable, where we can catch the NCDFE, not throwable. But such a hack should only be done after opening an issue in tika (and referencing in a comment with the hacky catch).\r\n\r\nWe also desperately need more tests, ideally each type of embedded file (and there are a limited number of these that are possible, so we should try to cover each one). 
I believe tika has an extensive collection of files for testing, perhaps we can reuse some of those?\r\n", "created_at": "2016-12-12T17:24:44Z" }, { "body": "> so we don't mask other issues\r\n\r\nWe don't mask it here. We fail the ingestion with an error message which tells what is happening.\r\n\r\n```java\r\nthrow new ElasticsearchParseException(\"Missing classes to parse document in field [{}]\", e, field);\r\n```\r\n\r\nWe just don't kill anymore the node.\r\n\r\nDefinitely I don't want to hide any error.", "created_at": "2016-12-12T17:28:51Z" }, { "body": "I think everybody agrees that we should not blindly catch Throwable in our code. We made efforts to not hide OOM errors and such, and we don't want to go back there for sure. But, maybe I didn't state it clearly, I don't think it is acceptable that a node crashes because of a missing dependency. We are currently exposing an api that can kill a node, that is a big concerning problem to me. I am not confident that this category of errors with the attachment processor can be solved by testing, cause so many things can go wrong with tika and the many formats it supports. For sure catching `NoClassDefFoundError` is not an elegant solution, it's far from ideal, but is there a better alternative that doesn't involve crashing a node then?", "created_at": "2016-12-13T14:31:12Z" }, { "body": "> But, maybe I didn't state it clearly, I don't think it is acceptable that a node crashes because of a missing dependency. We are currently exposing an api that can kill a node, that is a big concerning problem to me.\r\n\r\n+1 ", "created_at": "2016-12-13T14:45:16Z" }, { "body": "I think if we should add the dependency we should add the dependency and consider that the bug that caused the thing to go down, not the lack of catch. If we don't want the dependency we should figure out some way to work around not having it. I don't think we should have a global \"catch NoClassDefError\" across all of our code because we don't want to hide other problems. If the way work around this is by catching the Error then I'm all for adding it, but I agree with @rjernst that we should file an issue with tika. If they intend for us to be able to run without the dependency then they should throw a friendlier exception. If they don't intend for us to be able to run without the dependency then we have more thinking to do....", "created_at": "2016-12-13T14:56:09Z" }, { "body": "> I don't think we should have a global \"catch NoClassDefError\" across all of our code because we don't want to hide other problems.\r\n\r\nDavid is introducing a catch in the `AttachmentProcessor`, that is not a global catch. Anyways, I do see how it is not the most elegant solution. Even if we are going to add this missing jar back and testing this case explicitly and filing an issue to tika, I wonder how long it will take for a similar error to arise due to another missing dependency which we haven't yet tested. Can we really make sure that we test all the possible situations here?", "created_at": "2016-12-13T15:57:53Z" }, { "body": "> David is introducing a catch in the AttachmentProcessor, that is not a global catch.\r\n\r\nRight. I'm cool with it.\r\n\r\n> Can we really make sure that we test all the possible situations here?\r\n\r\nThat argument makes sense to me. Something along the lines of \"Tika throws these instead of UnsupportedOperationException when you are missing an optional dependency.\" is fine reasoning for adding it. I think we should file an issue with Tika about it. 
I'd like to be able to remove it one day and catch something that is less likely to mask other weird stuff.", "created_at": "2016-12-13T16:01:29Z" }, { "body": "So what is the outcome on this? Do we want to merge my PR?\r\nI opened https://issues.apache.org/jira/browse/TIKA-2208 to see if they can do anything better on their side.", "created_at": "2016-12-14T08:31:06Z" }, { "body": "I agree that taking a node down in this situation is bad, but I disagree with the conclusion that we should catch `NoClassDefFoundError` here. Catching `NoClassDefFoundError` is too blunt, it catches more than just a missing dependency for an embedded document format that we do not support (for example, a class initializer that throws can also cause a `NoClassDefFoundError` too; [JLS 12.4.2](http://docs.oracle.com/javase/specs/jls/se8/html/jls-12.html#jls-12.4.2)):\r\n\r\n> 5. If the `Class` object for `C` is in an erroneous state, then initialization is not possible. Release `LC` and throw a `NoClassDefFoundError`.\r\n\r\nLike @rjernst said, we need to work this one on the Tika side.", "created_at": "2016-12-14T13:19:05Z" }, { "body": "I'm going to test the workaround proposed on the JIRA and will update the PR if successful.", "created_at": "2016-12-14T14:01:53Z" }, { "body": "Thanks @dadoonet.", "created_at": "2016-12-14T14:25:13Z" }, { "body": "For information, the proposed workaround does not work in our case as I explained at https://issues.apache.org/jira/browse/TIKA-2208?focusedCommentId=15753790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15753790\r\n\r\nBasically, the Tika checks are made when Tika instance is created not when it is used.\r\n\r\nI can obviously fix the current problem by adding the missing library but I think that it will hide potential other issues with missing JARs.\r\nMay be we can do something on Gradle side which checks that when we add an explicit dependency like `org.apache.poi:poi-ooxml:3.15`, all the needed transitive dependencies are also declared?\r\n\r\n", "created_at": "2016-12-16T08:22:37Z" }, { "body": "I pushed another commit based on the latest discussions I had with Tika team.\r\nI also changed the title and the description of the PR.\r\n\r\nWhat it does now:\r\n\r\n* We still don't support Visio files\r\n* If an embedded Visio content exists in a Word `docx` file, its extraction is now skipped\r\n* Which means that we don't fail anymore, we don't have any exception and cherry on the cake, the text coming from the `docx` file is extracted!\r\n\r\n", "created_at": "2016-12-16T15:02:12Z" }, { "body": "@jasontedor Can you also please update your review? Thanks!", "created_at": "2016-12-17T07:56:19Z" }, { "body": "So this is making our tests failing now.\r\nI need to dig more. 
\r\n\r\n`testPPT.potm` Tika sample file is now returned with an empty content as soon as we add `Collections.singleton(MediaType.application(\"x-tika-ooxml\")`.\r\n\r\n", "created_at": "2016-12-18T13:37:15Z" }, { "body": "I checked the full suite of documents we have and only `testPPT.potm` is now failing.\r\nI asked on [Tika project](https://issues.apache.org/jira/browse/TIKA-2208?focusedCommentId=15758867&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15758867) to see if we have a workaround.\r\n\r\nIf not, we have 2 choices I believe:\r\n\r\n* Revert completely my change and just add missing libraries\r\n* remove support of `potm` files\r\n\r\nFrom http://www.reviversoft.com/file-extensions/potm:\r\n\r\n> The POTM file extension is used to label macro-enabled templates created by Microsoft’s PowerPoint, a well-known software used to create presentations with the use of slide shows. \r\n\r\nSo I think we should better add missing lib. But it will never prevent not parseable embedded content.\r\nWe can just expect that Tika is able to handle such a case instead of throwing a ClassNotFoundError.\r\n", "created_at": "2016-12-18T13:59:09Z" }, { "body": "I tried to add missing libs:\r\n\r\n```gradle\r\n compile \"com.github.virtuald:curvesapi:1.04\"\r\n compile \"com.bbn.poi.visio:ooxml-visio-schemas:2011.1\"\r\n```\r\n\r\nBut it now fails with JarHell:\r\n\r\n```\r\nCaused by: java.lang.IllegalStateException: jar hell!\r\nclass: com.microsoft.schemas.office.visio.x2012.main.CellType$Factory\r\njar1: /Users/dpilato/.gradle/caches/modules-2/files-2.1/org.apache.poi/poi-ooxml-schemas/3.15/de4a50ca39de48a19606b35644ecadb2f733c479/poi-ooxml-schemas-3.15.jar\r\njar2: /Users/dpilato/.gradle/caches/modules-2/files-2.1/com.bbn.poi.visio/ooxml-visio-schemas/2011.1/5c395aefc5c1a33f517c243843c909c1f4d6b3f0/ooxml-visio-schemas-2011.1.jar\r\n```", "created_at": "2016-12-18T14:11:06Z" }, { "body": "@jasontedor Do you have any idea on how I could fix https://github.com/elastic/elasticsearch/pull/22079#issuecomment-267823297?", "created_at": "2017-01-23T14:56:59Z" }, { "body": "> > The POTM file extension is used to label macro-enabled templates created by Microsoft’s PowerPoint, a well-known software used to create presentations with the use of slide shows.\r\n\r\nDo we really need to support this, it seems obscure?", "created_at": "2017-01-23T15:30:46Z" }, { "body": "Hello everybody.\r\nThe error persists?\r\nWe have this problem in production ES 5.2.0\r\n\r\nthis file (.vsdx) is problem https://www.dropbox.com/s/1s92iz94m9oxrn8/vsdx-error.txt?dl=0\r\nprint screen error - https://www.dropbox.com/s/mjeaalri7plk7sm/vsdx-error.jpg?dl=0\r\n\r\nAfter try parsing this .vsdx file elasticsearch process is killed. 
\r\n\r\n```\r\n[2017-02-02T20:04:01,209][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[DpKBPQg][bulk][T#3]], exiting\r\njava.lang.NoClassDefFoundError: com/graphbuilder/curve/Point\r\n at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_91]\r\n at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) ~[?:1.8.0_91]\r\n at java.lang.Class.getConstructor0(Class.java:3075) ~[?:1.8.0_91]\r\n at java.lang.Class.getDeclaredConstructor(Class.java:2178) ~[?:1.8.0_91]\r\n at org.apache.poi.xdgf.util.ObjectFactory.put(ObjectFactory.java:34) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.section.geometry.GeometryRowFactory.<clinit>(GeometryRowFactory.java:39) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.section.GeometrySection.<init>(GeometrySection.java:55) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFSheet.<init>(XDGFSheet.java:77) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFShape.<init>(XDGFShape.java:113) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFShape.<init>(XDGFShape.java:107) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFBaseContents.onDocumentRead(XDGFBaseContents.java:82) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFMasterContents.onDocumentRead(XDGFMasterContents.java:66) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFMasters.onDocumentRead(XDGFMasters.java:101) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XmlVisioDocument.onDocumentRead(XmlVisioDocument.java:106) ~[?:?]\r\n at org.apache.poi.POIXMLDocument.load(POIXMLDocument.java:190) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XmlVisioDocument.<init>(XmlVisioDocument.java:79) ~[?:?]\r\n at org.apache.poi.xdgf.extractor.XDGFVisioExtractor.<init>(XDGFVisioExtractor.java:41) ~[?:?]\r\n at org.apache.poi.extractor.ExtractorFactory.createExtractor(ExtractorFactory.java:207) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.OOXMLExtractorFactory.parse(OOXMLExtractorFactory.java:86) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.OOXMLParser.parse(OOXMLParser.java:87) ~[?:?]\r\n at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) ~[?:?]\r\n at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120) ~[?:?]\r\n at org.apache.tika.Tika.parseToString(Tika.java:568) ~[?:?]\r\n at org.elasticsearch.ingest.attachment.TikaImpl.lambda$parse$0(TikaImpl.java:89) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_91]\r\n at org.elasticsearch.ingest.attachment.TikaImpl.parse(TikaImpl.java:88) ~[?:?]\r\n at org.elasticsearch.ingest.attachment.AttachmentProcessor.execute(AttachmentProcessor.java:86) ~[?:?]\r\n at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:100) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:100) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.ingest.Pipeline.execute(Pipeline.java:58) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.ingest.PipelineExecutionService.innerExecute(PipelineExecutionService.java:164) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.ingest.PipelineExecutionService.access$000(PipelineExecutionService.java:41) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.ingest.PipelineExecutionService$2.doRun(PipelineExecutionService.java:88) 
~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_91]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_91]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]\r\nCaused by: java.lang.ClassNotFoundException: com.graphbuilder.curve.Point\r\n at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_91]\r\n at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_91]\r\n at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:814) ~[?:1.8.0_91]\r\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_91]\r\n```", "created_at": "2017-02-02T18:08:30Z" }, { "body": "@sergeytkachenko About your error here, I think we probably don't want to support indexing Visio files. At least we don't mean to support them. I'm going to try to have a proper rejection when you index a Visio file instead of killing the node like this.\r\nBut the issue we have within this thread is happening when you have a Visio content (not supported) embedded within a Word document (supported).\r\n\r\nThanks for sharing your Visio file BTW!", "created_at": "2017-02-03T10:05:28Z" }, { "body": "Actually I was wrong. This PR removes support for Visio files even embedded. I'm going to update the PR soonish.", "created_at": "2017-02-03T11:56:20Z" }, { "body": "@jasontedor So I removed support for POTM files. I rebased on master and squashed everything. Could you check again this PR?", "created_at": "2017-02-03T12:05:44Z" }, { "body": "I am missing in this pull request where support for POTM files was removed?\r\n\r\nAlso, please do not squash commits in the middle of review cycles. You can merge master into your branch instead of rebasing, too.", "created_at": "2017-02-03T14:44:28Z" }, { "body": "@jasontedor `POTM` is removed in the zip file. That's why you can't see it and why I opened #22958 ", "created_at": "2017-02-03T15:00:21Z" }, { "body": "I'm going to push a new commit to my branch because I just merged #22959 so you will see in a more obvious way the POTM file removal.", "created_at": "2017-02-03T15:13:11Z" } ], "number": 22079, "title": "Remove support for Visio and potm files" }
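For context, the narrowly-scoped catch that was debated in the thread above (and ultimately dropped in favor of excluding the offending formats) would sit around the Tika call inside the processor. This is a minimal sketch only: the variable names `input`, `metadata`, `indexedChars` and `field` are assumed from the surrounding `AttachmentProcessor` code and are not shown here; the exception message is the one quoted in the discussion.

```java
String parsedContent;
try {
    // TikaImpl.parse(byte[], Metadata, int) is the entry point visible in the stack traces above
    parsedContent = TikaImpl.parse(input, metadata, indexedChars);
} catch (NoClassDefFoundError e) {
    // Turn the Error into a per-document parse failure instead of letting it propagate
    // to the uncaught-exception handler and take the whole node down.
    throw new ElasticsearchParseException("Missing classes to parse document in field [{}]", e, field);
}
```

As the reviewers note, this catch is too blunt (a failing class initializer also surfaces as `NoClassDefFoundError`), which is why the merged change excludes the unsupported formats instead.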
{ "body": "Related to #22077\r\n\r\nThis PR comes with 2 changes, one for `ingest-attachment` and the other for `mapper-attachments`.\r\nIt's essentially a backport of #22079 for 5.x series.\r\n\r\n## Ingest Attachment Plugin\r\n\r\n* Send a non supported document to an ingest pipeline using `ingest-attachment`\r\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\r\n\r\nThis commit removes support for Visio and POTM office files.\r\n\r\nSo elasticsearch is not killed anymore when you run a command like:\r\n\r\n```\r\nGET _ingest/pipeline/_simulate\r\n{\r\n \"pipeline\" : {\r\n \"processors\" : [\r\n {\r\n \"attachment\" : {\r\n \"field\" : \"file\"\r\n }\r\n }\r\n ]\r\n },\r\n \"docs\" : [\r\n {\r\n \"_source\" : {\r\n \"file\" : \"BASE64CONTENT\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\r\n\r\n\r\n## Mapper Attachments Plugin\r\n\r\n* Parse a non supported document using `mapper-attachments`\r\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\r\n\r\nThis commit removes support for Visio and POTM office files.\r\n\r\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\r\n\r\nNote that for this one as we did not apply yet #22963 it hides the fact that we removed the potm sample file from the tika big ZIP file.\r\n", "number": 23214, "review_comments": [], "title": "Remove support for Visio and potm files" }
{ "commits": [ { "message": "Remove support for Visio and potm files\n\n* Send a non supported document to an ingest pipeline using `ingest-attachment`\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\n\nThis commit removes support for Visio and POTM office files.\n\nSo elasticsearch is not killed anymore when you run a command like:\n\n```\nGET _ingest/pipeline/_simulate\n{\n \"pipeline\" : {\n \"processors\" : [\n {\n \"attachment\" : {\n \"field\" : \"file\"\n }\n }\n ]\n },\n \"docs\" : [\n {\n \"_source\" : {\n \"file\" : \"BASE64CONTENT\"\n }\n }\n ]\n}\n```\n\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\n\nRelated to #22077\n\nBackport of #22079 in 5.x branch (5.3)" }, { "message": "Remove support for Visio and potm files\n\n* Parse a non supported document using `mapper-attachments`\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\n\nThis commit removes support for Visio and POTM office files.\n\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\n\nRelated to #22077 and #22079 for mapper-attachments plugin" } ], "files": [ { "diff": "@@ -74,9 +74,11 @@ dependencyLicenses {\n }\n \n forbiddenPatterns {\n+ exclude '**/*.doc'\n exclude '**/*.docx'\n exclude '**/*.pdf'\n exclude '**/*.epub'\n+ exclude '**/*.vsdx'\n }\n \n thirdPartyAudit.excludes = [", "filename": "plugins/ingest-attachment/build.gradle", "status": "modified" }, { "diff": "@@ -22,8 +22,10 @@\n import org.apache.tika.Tika;\n import org.apache.tika.exception.TikaException;\n import org.apache.tika.metadata.Metadata;\n+import org.apache.tika.mime.MediaType;\n import org.apache.tika.parser.AutoDetectParser;\n import org.apache.tika.parser.Parser;\n+import org.apache.tika.parser.ParserDecorator;\n import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.bootstrap.JarHell;\n import org.elasticsearch.common.SuppressForbidden;\n@@ -45,7 +47,9 @@\n import java.security.PrivilegedExceptionAction;\n import java.security.ProtectionDomain;\n import java.security.SecurityPermission;\n+import java.util.Collections;\n import java.util.PropertyPermission;\n+import java.util.Set;\n \n /**\n * Runs tika with limited parsers and limited permissions.\n@@ -54,6 +58,9 @@\n */\n final class TikaImpl {\n \n+ /** Exclude some formats */\n+ private static final Set<MediaType> EXCLUDES = Collections.singleton(MediaType.application(\"x-tika-ooxml\"));\n+\n /** subset of parsers for types we support */\n private static final Parser PARSERS[] = new Parser[] {\n // documents\n@@ -63,7 +70,7 @@ final class TikaImpl {\n new org.apache.tika.parser.txt.TXTParser(),\n new org.apache.tika.parser.microsoft.OfficeParser(),\n new org.apache.tika.parser.microsoft.OldExcelParser(),\n- new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(),\n+ ParserDecorator.withoutTypes(new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(), EXCLUDES),\n new org.apache.tika.parser.odf.OpenDocumentParser(),\n new org.apache.tika.parser.iwork.IWorkPackageParser(),\n new org.apache.tika.parser.xml.DcXMLParser(),", "filename": 
"plugins/ingest-attachment/src/main/java/org/elasticsearch/ingest/attachment/TikaImpl.java", "status": "modified" }, { "diff": "@@ -47,6 +47,7 @@\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.not;\n import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.nullValue;\n import static org.hamcrest.core.IsCollectionContaining.hasItem;\n \n public class AttachmentProcessorTests extends ESTestCase {\n@@ -130,6 +131,34 @@ public void testWordDocument() throws Exception {\n is(\"application/vnd.openxmlformats-officedocument.wordprocessingml.document\"));\n }\n \n+ public void testWordDocumentWithVisioSchema() throws Exception {\n+ Map<String, Object> attachmentData = parseDocument(\"issue-22077.docx\", processor);\n+\n+ assertThat(attachmentData.keySet(), containsInAnyOrder(\"content\", \"language\", \"date\", \"author\", \"content_type\",\n+ \"content_length\"));\n+ assertThat(attachmentData.get(\"content\").toString(), containsString(\"Table of Contents\"));\n+ assertThat(attachmentData.get(\"language\"), is(\"en\"));\n+ assertThat(attachmentData.get(\"date\"), is(\"2015-01-06T18:07:00Z\"));\n+ assertThat(attachmentData.get(\"author\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_length\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_type\").toString(),\n+ is(\"application/vnd.openxmlformats-officedocument.wordprocessingml.document\"));\n+ }\n+\n+ public void testLegacyWordDocumentWithVisioSchema() throws Exception {\n+ Map<String, Object> attachmentData = parseDocument(\"issue-22077.doc\", processor);\n+\n+ assertThat(attachmentData.keySet(), containsInAnyOrder(\"content\", \"language\", \"date\", \"author\", \"content_type\",\n+ \"content_length\"));\n+ assertThat(attachmentData.get(\"content\").toString(), containsString(\"Table of Contents\"));\n+ assertThat(attachmentData.get(\"language\"), is(\"en\"));\n+ assertThat(attachmentData.get(\"date\"), is(\"2016-12-16T15:04:00Z\"));\n+ assertThat(attachmentData.get(\"author\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_length\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_type\").toString(),\n+ is(\"application/msword\"));\n+ }\n+\n public void testPdf() throws Exception {\n Map<String, Object> attachmentData = parseDocument(\"test.pdf\", processor);\n assertThat(attachmentData.get(\"content\"),\n@@ -138,6 +167,13 @@ public void testPdf() throws Exception {\n assertThat(attachmentData.get(\"content_length\"), is(notNullValue()));\n }\n \n+ public void testVisioIsExcluded() throws Exception {\n+ Map<String, Object> attachmentData = parseDocument(\"issue-22077.vsdx\", processor);\n+ assertThat(attachmentData.get(\"content\"), nullValue());\n+ assertThat(attachmentData.get(\"content_type\"), is(\"application/vnd.ms-visio.drawing\"));\n+ assertThat(attachmentData.get(\"content_length\"), is(0L));\n+ }\n+\n public void testEncryptedPdf() throws Exception {\n ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> parseDocument(\"encrypted.pdf\", processor));\n assertThat(e.getDetailedMessage(), containsString(\"document is encrypted\"));", "filename": "plugins/ingest-attachment/src/test/java/org/elasticsearch/ingest/attachment/AttachmentProcessorTests.java", "status": "modified" }, { "diff": "", "filename": "plugins/ingest-attachment/src/test/resources/org/elasticsearch/ingest/attachment/test/sample-files/issue-22077.doc", "status": "added" }, { "diff": "", "filename": 
"plugins/ingest-attachment/src/test/resources/org/elasticsearch/ingest/attachment/test/sample-files/issue-22077.docx", "status": "added" }, { "diff": "", "filename": "plugins/ingest-attachment/src/test/resources/org/elasticsearch/ingest/attachment/test/sample-files/issue-22077.vsdx", "status": "added" }, { "diff": "@@ -74,9 +74,11 @@ dependencyLicenses {\n }\n \n forbiddenPatterns {\n+ exclude '**/*.doc'\n exclude '**/*.docx'\n exclude '**/*.pdf'\n exclude '**/*.epub'\n+ exclude '**/*.vsdx'\n }\n \n thirdPartyAudit.excludes = [", "filename": "plugins/mapper-attachments/build.gradle", "status": "modified" }, { "diff": "@@ -22,8 +22,10 @@\n import org.apache.tika.Tika;\n import org.apache.tika.exception.TikaException;\n import org.apache.tika.metadata.Metadata;\n+import org.apache.tika.mime.MediaType;\n import org.apache.tika.parser.AutoDetectParser;\n import org.apache.tika.parser.Parser;\n+import org.apache.tika.parser.ParserDecorator;\n import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.bootstrap.JarHell;\n import org.elasticsearch.common.SuppressForbidden;\n@@ -45,7 +47,9 @@\n import java.security.PrivilegedExceptionAction;\n import java.security.ProtectionDomain;\n import java.security.SecurityPermission;\n+import java.util.Collections;\n import java.util.PropertyPermission;\n+import java.util.Set;\n \n /**\n * Runs tika with limited parsers and limited permissions.\n@@ -54,6 +58,9 @@\n */\n final class TikaImpl {\n \n+ /** Exclude some formats */\n+ private static final Set<MediaType> EXCLUDES = Collections.singleton(MediaType.application(\"x-tika-ooxml\"));\n+\n /** subset of parsers for types we support */\n private static final Parser PARSERS[] = new Parser[] {\n // documents\n@@ -63,7 +70,7 @@ final class TikaImpl {\n new org.apache.tika.parser.txt.TXTParser(),\n new org.apache.tika.parser.microsoft.OfficeParser(),\n new org.apache.tika.parser.microsoft.OldExcelParser(),\n- new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(),\n+ ParserDecorator.withoutTypes(new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(), EXCLUDES),\n new org.apache.tika.parser.odf.OpenDocumentParser(),\n new org.apache.tika.parser.iwork.IWorkPackageParser(),\n new org.apache.tika.parser.xml.DcXMLParser(),", "filename": "plugins/mapper-attachments/src/main/java/org/elasticsearch/mapper/attachments/TikaImpl.java", "status": "modified" }, { "diff": "@@ -44,7 +44,9 @@\n import static org.elasticsearch.mapper.attachments.AttachmentMapper.FieldNames.TITLE;\n import static org.elasticsearch.test.StreamsUtils.copyToBytesFromClasspath;\n import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath;\n+import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.isEmptyOrNullString;\n+import static org.hamcrest.Matchers.isEmptyString;\n import static org.hamcrest.Matchers.not;\n \n /**\n@@ -121,6 +123,40 @@ public void testAsciidocDocument() throws Exception {\n testMapper(\"asciidoc.asciidoc\", false);\n }\n \n+ public void testWordDocumentWithVisioSchema() throws Exception {\n+ assertParseable(\"issue-22077.docx\");\n+ testMapper(\"issue-22077.docx\", false);\n+ }\n+\n+ public void testLegacyWordDocumentWithVisioSchema() throws Exception {\n+ assertParseable(\"issue-22077.doc\");\n+ testMapper(\"issue-22077.doc\", false);\n+ }\n+\n+ public void testVisioIsExcluded() throws Exception {\n+ String filename = \"issue-22077.vsdx\";\n+ try (InputStream is = VariousDocTests.class.getResourceAsStream(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/\" 
+\n+ filename)) {\n+ byte bytes[] = IOUtils.toByteArray(is);\n+ String parsedContent = TikaImpl.parse(bytes, new Metadata(), -1);\n+ assertThat(parsedContent, isEmptyString());\n+ }\n+\n+ byte[] html = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/\" + filename);\n+ BytesReference json = jsonBuilder()\n+ .startObject()\n+ .startObject(\"file\")\n+ .field(\"_name\", filename)\n+ .field(\"_content\", html)\n+ .endObject()\n+ .endObject().bytes();\n+\n+ ParseContext.Document doc = docMapper.parse(\"person\", \"person\", \"1\", json).rootDoc();\n+ assertThat(doc.get(docMapper.mappers().getMapper(\"file.content\").fieldType().name()), isEmptyString());\n+ assertThat(doc.get(docMapper.mappers().getMapper(\"file.content_type\").fieldType().name()), is(\"application/vnd.ms-visio.drawing\"));\n+ assertThat(doc.get(docMapper.mappers().getMapper(\"file.content_length\").fieldType().name()), is(\"210451\"));\n+ }\n+\n void assertException(String filename, String expectedMessage) throws Exception {\n try (InputStream is = VariousDocTests.class.getResourceAsStream(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/\" + filename)) {\n byte bytes[] = IOUtils.toByteArray(is);", "filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/VariousDocTests.java", "status": "modified" }, { "diff": "", "filename": "plugins/mapper-attachments/src/test/resources/org/elasticsearch/index/mapper/attachment/test/sample-files/issue-22077.doc", "status": "added" }, { "diff": "", "filename": "plugins/mapper-attachments/src/test/resources/org/elasticsearch/index/mapper/attachment/test/sample-files/issue-22077.docx", "status": "added" }, { "diff": "", "filename": "plugins/mapper-attachments/src/test/resources/org/elasticsearch/index/mapper/attachment/test/sample-files/issue-22077.vsdx", "status": "added" } ] }
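As a quick, purely illustrative check of the mechanism exercised by the tests above (a hypothetical snippet; the expected output in the comments follows from the diff, not from running the plugin):

```java
import java.util.Collections;

import org.apache.tika.mime.MediaType;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.Parser;
import org.apache.tika.parser.ParserDecorator;
import org.apache.tika.parser.microsoft.ooxml.OOXMLParser;

public class SupportedTypesCheck {
    public static void main(String[] args) {
        MediaType genericOoxml = MediaType.application("x-tika-ooxml");

        Parser plain = new OOXMLParser();
        Parser decorated = ParserDecorator.withoutTypes(plain, Collections.singleton(genericOoxml));

        // Expected: true for the plain parser, false once decorated, which is why embedded
        // parts detected as the generic OOXML type (such as the Visio drawing) are skipped.
        System.out.println(plain.getSupportedTypes(new ParseContext()).contains(genericOoxml));
        System.out.println(decorated.getSupportedTypes(new ParseContext()).contains(genericOoxml));
    }
}
```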
{ "body": "I am getting a fatal error when trying to index the attached stripped down document using the ingest-attachment plugin. This causes my cluster to reboot and does not give me an email notification. It looks to be having a problem with the embedded Visio diagram.\r\n\r\nVersion: 5.1.1 (Elastic Cloud)\r\nClusterId: 7e7501\r\nNode: instance-0000000005\r\nPlugin: ingest-attachment\r\n\r\nLink to problematic doc: https://1drv.ms/w/s!ApTXXtrEV_GGiosenEfoUSk1rRnuYA \r\n\r\nError Information\r\n\r\n```\r\nDec 8 21:42:34 ERROR org.elasticsearch.bootstrap.ElasticsearchUncaughtExceptionHandler i5@z0\r\n```\r\n\r\n```\r\n[2016-12-08T21:42:34,628][ERROR][org.elasticsearch.bootstrap.ElasticsearchUncaughtExceptionHandler] fatal error in thread [elasticsearch[index][T#1]], exiting java.lang.NoClassDefFoundError: com/graphbuilder/curve/Point\r\n at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_72]\r\n at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) ~[?:1.8.0_72]\r\n at java.lang.Class.getConstructor0(Class.java:3075) ~[?:1.8.0_72]\r\n at java.lang.Class.getDeclaredConstructor(Class.java:2178) ~[?:1.8.0_72]\r\n at org.apache.poi.xdgf.util.ObjectFactory.put(ObjectFactory.java:34) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.section.geometry.GeometryRowFactory.(GeometryRowFactory.java:39) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.section.GeometrySection.(GeometrySection.java:55) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFSheet.(XDGFSheet.java:77) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFShape.(XDGFShape.java:113) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFShape.(XDGFShape.java:107) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFBaseContents.onDocumentRead(XDGFBaseContents.java:82) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFMasterContents.onDocumentRead(XDGFMasterContents.java:66) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XDGFMasters.onDocumentRead(XDGFMasters.java:101) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XmlVisioDocument.onDocumentRead(XmlVisioDocument.java:106) ~[?:?]\r\n at org.apache.poi.POIXMLDocument.load(POIXMLDocument.java:190) ~[?:?]\r\n at org.apache.poi.xdgf.usermodel.XmlVisioDocument.(XmlVisioDocument.java:79) ~[?:?]\r\n at org.apache.poi.xdgf.extractor.XDGFVisioExtractor.(XDGFVisioExtractor.java:41) ~[?:?]\r\n at org.apache.poi.extractor.ExtractorFactory.createExtractor(ExtractorFactory.java:207) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.OOXMLExtractorFactory.parse(OOXMLExtractorFactory.java:86) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.OOXMLParser.parse(OOXMLParser.java:87) ~[?:?]\r\n at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) ~[?:?]\r\n at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120) ~[?:?]\r\n at org.apache.tika.parser.DelegatingParser.parse(DelegatingParser.java:72) ~[?:?]\r\n at org.apache.tika.extractor.ParsingEmbeddedDocumentExtractor.parseEmbedded(ParsingEmbeddedDocumentExtractor.java:102) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedFile(AbstractOOXMLExtractor.java:311) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedParts(AbstractOOXMLExtractor.java:202) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.getXHTML(AbstractOOXMLExtractor.java:115) ~[?:?]\r\n at org.apache.tika.parser.microsoft.ooxml.OOXMLExtractorFactory.parse(OOXMLExtractorFactory.java:112) ~[?:?]\r\n at 
org.apache.tika.parser.microsoft.ooxml.OOXMLParser.parse(OOXMLParser.java:87) ~[?:?]\r\n at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) ~[?:?]\r\n at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120) ~[?:?]\r\n at org.apache.tika.Tika.parseToString(Tika.java:568) ~[?:?]\r\n at org.elasticsearch.ingest.attachment.TikaImpl$1.run(TikaImpl.java:94) ~[?:?]\r\n at org.elasticsearch.ingest.attachment.TikaImpl$1.run(TikaImpl.java:91) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_72]\r\n at org.elasticsearch.ingest.attachment.TikaImpl.parse(TikaImpl.java:91) ~[?:?]\r\n at org.elasticsearch.ingest.attachment.AttachmentProcessor.execute(AttachmentProcessor.java:72) ~[?:?]\r\n at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:100) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.ingest.Pipeline.execute(Pipeline.java:58) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.ingest.PipelineExecutionService.innerExecute(PipelineExecutionService.java:166) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.ingest.PipelineExecutionService.access$000(PipelineExecutionService.java:41) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.ingest.PipelineExecutionService$1.doRun(PipelineExecutionService.java:65) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.1.1.jar:5.1.1]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] \r\nCaused by: java.lang.ClassNotFoundException: com.graphbuilder.curve.Point\r\n at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_72]\r\n at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_72]\r\n at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:814) ~[?:1.8.0_72]\r\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_72]\r\n ... 47 more\r\n```\r\n", "comments": [ { "body": "This is due to a missing transitive dependency: `com.github.virtuald:curvesapi:1.04`:\r\n\r\n```\r\n_transitive_org.apache.poi:poi-ooxml:3.15\r\n\\--- org.apache.poi:poi-ooxml:3.15\r\n +--- org.apache.poi:poi:3.15\r\n | +--- commons-codec:commons-codec:1.10\r\n | \\--- org.apache.commons:commons-collections4:4.1\r\n +--- org.apache.poi:poi-ooxml-schemas:3.15\r\n | \\--- org.apache.xmlbeans:xmlbeans:2.6.0\r\n | \\--- stax:stax-api:1.0.1\r\n \\--- com.github.virtuald:curvesapi:1.04\r\n```\r\n\r\nIt's not ideal, but I think that you can work around this for now by dropping this dependency (and any of its transitive dependencies) in plugins/ingest-attachment (note: this might not be a complete solution if there are also security permissions that need to be added too, just trying to see what we can do here in the short term).", "created_at": "2016-12-09T15:27:26Z" }, { "body": "He can't do that. He is on cloud.\r\nI'm currently reproducing it.", "created_at": "2016-12-09T15:28:43Z" }, { "body": "> He can't do that. 
He is on cloud.\r\n\r\nI didn't notice that but that is indeed unfortunate.", "created_at": "2016-12-09T15:32:08Z" }, { "body": "I can totally reproduce the hard failure locally.\r\nThanks a lot @chrduf for providing the file which is causing that.\r\nWorking on a fix ATM. We have to fix 2 things IMO:\r\n\r\n* catch the exception correctly and just fail the ingest pipeline (don't stop the node basically)\r\n* then try to see if we can add safely the missing dependency ", "created_at": "2016-12-09T15:34:38Z" }, { "body": "> This causes my cluster to reboot and does not give me an email notification.\r\n\r\nWe will look into why this is the case (that you're not receiving the email notification).", "created_at": "2016-12-09T15:36:31Z" }, { "body": "@chrduf I'd like to use your file https://1drv.ms/w/s!ApTXXtrEV_GGiosenEfoUSk1rRnuYA as an input for a test case. Do you allow us doing so?", "created_at": "2016-12-09T15:37:13Z" }, { "body": "yes, you can use that document", "created_at": "2016-12-09T19:47:53Z" } ], "number": 22077, "title": "Fatal error with ingest-attachment plugin" }
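The comments in the record above outline two follow-ups: first catch the parse failure so that only the ingest of that one document fails, then decide whether the missing `curvesapi` dependency can be added safely. As a rough illustration of the first point only (this is not the fix that was eventually merged, which excluded the format instead), here is a plain-Tika sketch that turns the fatal `NoClassDefFoundError` into an ordinary per-document error; having Tika on the classpath and the file name are assumptions.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.Tika;

public class SafeParseSketch {
    public static void main(String[] args) {
        Tika tika = new Tika();
        try (InputStream in = Files.newInputStream(Paths.get("problematic.docx"))) {
            System.out.println(tika.parseToString(in));
        } catch (LinkageError e) {
            // A missing parser class (e.g. com/graphbuilder/curve/Point) surfaces here
            // instead of escaping as an unhandled Throwable that takes the process down.
            System.err.println("cannot parse document, missing parser dependency: " + e);
        } catch (Exception e) {
            // IOException, TikaException, etc.: fail this document, keep the process alive.
            System.err.println("cannot parse document: " + e);
        }
    }
}
```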
{ "body": "Related to #22077\r\n\r\nThis PR comes with 2 changes, one for `ingest-attachment` and the other for `mapper-attachments`.\r\nIt's essentially a backport of #22079 for 5.x series.\r\n\r\n## Ingest Attachment Plugin\r\n\r\n* Send a non supported document to an ingest pipeline using `ingest-attachment`\r\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\r\n\r\nThis commit removes support for Visio and POTM office files.\r\n\r\nSo elasticsearch is not killed anymore when you run a command like:\r\n\r\n```\r\nGET _ingest/pipeline/_simulate\r\n{\r\n \"pipeline\" : {\r\n \"processors\" : [\r\n {\r\n \"attachment\" : {\r\n \"field\" : \"file\"\r\n }\r\n }\r\n ]\r\n },\r\n \"docs\" : [\r\n {\r\n \"_source\" : {\r\n \"file\" : \"BASE64CONTENT\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\r\n\r\n\r\n## Mapper Attachments Plugin\r\n\r\n* Parse a non supported document using `mapper-attachments`\r\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\r\n\r\nThis commit removes support for Visio and POTM office files.\r\n\r\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\r\n\r\nNote that for this one as we did not apply yet #22963 it hides the fact that we removed the potm sample file from the tika big ZIP file.\r\n", "number": 23214, "review_comments": [], "title": "Remove support for Visio and potm files" }
{ "commits": [ { "message": "Remove support for Visio and potm files\n\n* Send a non supported document to an ingest pipeline using `ingest-attachment`\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\n\nThis commit removes support for Visio and POTM office files.\n\nSo elasticsearch is not killed anymore when you run a command like:\n\n```\nGET _ingest/pipeline/_simulate\n{\n \"pipeline\" : {\n \"processors\" : [\n {\n \"attachment\" : {\n \"field\" : \"file\"\n }\n }\n ]\n },\n \"docs\" : [\n {\n \"_source\" : {\n \"file\" : \"BASE64CONTENT\"\n }\n }\n ]\n}\n```\n\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\n\nRelated to #22077\n\nBackport of #22079 in 5.x branch (5.3)" }, { "message": "Remove support for Visio and potm files\n\n* Parse a non supported document using `mapper-attachments`\n* If Tika is not able to parse the document because of a missing class (we are not importing all jars needed by Tika), Tika throws a Throwable which is not catch.\n\nThis commit removes support for Visio and POTM office files.\n\nThe good news is that it does not kill the node anymore and allows to extract the text which is in the Office document even if we have a Visio content (which is not extracted anymore).\n\nRelated to #22077 and #22079 for mapper-attachments plugin" } ], "files": [ { "diff": "@@ -74,9 +74,11 @@ dependencyLicenses {\n }\n \n forbiddenPatterns {\n+ exclude '**/*.doc'\n exclude '**/*.docx'\n exclude '**/*.pdf'\n exclude '**/*.epub'\n+ exclude '**/*.vsdx'\n }\n \n thirdPartyAudit.excludes = [", "filename": "plugins/ingest-attachment/build.gradle", "status": "modified" }, { "diff": "@@ -22,8 +22,10 @@\n import org.apache.tika.Tika;\n import org.apache.tika.exception.TikaException;\n import org.apache.tika.metadata.Metadata;\n+import org.apache.tika.mime.MediaType;\n import org.apache.tika.parser.AutoDetectParser;\n import org.apache.tika.parser.Parser;\n+import org.apache.tika.parser.ParserDecorator;\n import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.bootstrap.JarHell;\n import org.elasticsearch.common.SuppressForbidden;\n@@ -45,7 +47,9 @@\n import java.security.PrivilegedExceptionAction;\n import java.security.ProtectionDomain;\n import java.security.SecurityPermission;\n+import java.util.Collections;\n import java.util.PropertyPermission;\n+import java.util.Set;\n \n /**\n * Runs tika with limited parsers and limited permissions.\n@@ -54,6 +58,9 @@\n */\n final class TikaImpl {\n \n+ /** Exclude some formats */\n+ private static final Set<MediaType> EXCLUDES = Collections.singleton(MediaType.application(\"x-tika-ooxml\"));\n+\n /** subset of parsers for types we support */\n private static final Parser PARSERS[] = new Parser[] {\n // documents\n@@ -63,7 +70,7 @@ final class TikaImpl {\n new org.apache.tika.parser.txt.TXTParser(),\n new org.apache.tika.parser.microsoft.OfficeParser(),\n new org.apache.tika.parser.microsoft.OldExcelParser(),\n- new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(),\n+ ParserDecorator.withoutTypes(new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(), EXCLUDES),\n new org.apache.tika.parser.odf.OpenDocumentParser(),\n new org.apache.tika.parser.iwork.IWorkPackageParser(),\n new org.apache.tika.parser.xml.DcXMLParser(),", "filename": 
"plugins/ingest-attachment/src/main/java/org/elasticsearch/ingest/attachment/TikaImpl.java", "status": "modified" }, { "diff": "@@ -47,6 +47,7 @@\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.not;\n import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.nullValue;\n import static org.hamcrest.core.IsCollectionContaining.hasItem;\n \n public class AttachmentProcessorTests extends ESTestCase {\n@@ -130,6 +131,34 @@ public void testWordDocument() throws Exception {\n is(\"application/vnd.openxmlformats-officedocument.wordprocessingml.document\"));\n }\n \n+ public void testWordDocumentWithVisioSchema() throws Exception {\n+ Map<String, Object> attachmentData = parseDocument(\"issue-22077.docx\", processor);\n+\n+ assertThat(attachmentData.keySet(), containsInAnyOrder(\"content\", \"language\", \"date\", \"author\", \"content_type\",\n+ \"content_length\"));\n+ assertThat(attachmentData.get(\"content\").toString(), containsString(\"Table of Contents\"));\n+ assertThat(attachmentData.get(\"language\"), is(\"en\"));\n+ assertThat(attachmentData.get(\"date\"), is(\"2015-01-06T18:07:00Z\"));\n+ assertThat(attachmentData.get(\"author\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_length\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_type\").toString(),\n+ is(\"application/vnd.openxmlformats-officedocument.wordprocessingml.document\"));\n+ }\n+\n+ public void testLegacyWordDocumentWithVisioSchema() throws Exception {\n+ Map<String, Object> attachmentData = parseDocument(\"issue-22077.doc\", processor);\n+\n+ assertThat(attachmentData.keySet(), containsInAnyOrder(\"content\", \"language\", \"date\", \"author\", \"content_type\",\n+ \"content_length\"));\n+ assertThat(attachmentData.get(\"content\").toString(), containsString(\"Table of Contents\"));\n+ assertThat(attachmentData.get(\"language\"), is(\"en\"));\n+ assertThat(attachmentData.get(\"date\"), is(\"2016-12-16T15:04:00Z\"));\n+ assertThat(attachmentData.get(\"author\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_length\"), is(notNullValue()));\n+ assertThat(attachmentData.get(\"content_type\").toString(),\n+ is(\"application/msword\"));\n+ }\n+\n public void testPdf() throws Exception {\n Map<String, Object> attachmentData = parseDocument(\"test.pdf\", processor);\n assertThat(attachmentData.get(\"content\"),\n@@ -138,6 +167,13 @@ public void testPdf() throws Exception {\n assertThat(attachmentData.get(\"content_length\"), is(notNullValue()));\n }\n \n+ public void testVisioIsExcluded() throws Exception {\n+ Map<String, Object> attachmentData = parseDocument(\"issue-22077.vsdx\", processor);\n+ assertThat(attachmentData.get(\"content\"), nullValue());\n+ assertThat(attachmentData.get(\"content_type\"), is(\"application/vnd.ms-visio.drawing\"));\n+ assertThat(attachmentData.get(\"content_length\"), is(0L));\n+ }\n+\n public void testEncryptedPdf() throws Exception {\n ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> parseDocument(\"encrypted.pdf\", processor));\n assertThat(e.getDetailedMessage(), containsString(\"document is encrypted\"));", "filename": "plugins/ingest-attachment/src/test/java/org/elasticsearch/ingest/attachment/AttachmentProcessorTests.java", "status": "modified" }, { "diff": "", "filename": "plugins/ingest-attachment/src/test/resources/org/elasticsearch/ingest/attachment/test/sample-files/issue-22077.doc", "status": "added" }, { "diff": "", "filename": 
"plugins/ingest-attachment/src/test/resources/org/elasticsearch/ingest/attachment/test/sample-files/issue-22077.docx", "status": "added" }, { "diff": "", "filename": "plugins/ingest-attachment/src/test/resources/org/elasticsearch/ingest/attachment/test/sample-files/issue-22077.vsdx", "status": "added" }, { "diff": "@@ -74,9 +74,11 @@ dependencyLicenses {\n }\n \n forbiddenPatterns {\n+ exclude '**/*.doc'\n exclude '**/*.docx'\n exclude '**/*.pdf'\n exclude '**/*.epub'\n+ exclude '**/*.vsdx'\n }\n \n thirdPartyAudit.excludes = [", "filename": "plugins/mapper-attachments/build.gradle", "status": "modified" }, { "diff": "@@ -22,8 +22,10 @@\n import org.apache.tika.Tika;\n import org.apache.tika.exception.TikaException;\n import org.apache.tika.metadata.Metadata;\n+import org.apache.tika.mime.MediaType;\n import org.apache.tika.parser.AutoDetectParser;\n import org.apache.tika.parser.Parser;\n+import org.apache.tika.parser.ParserDecorator;\n import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.bootstrap.JarHell;\n import org.elasticsearch.common.SuppressForbidden;\n@@ -45,7 +47,9 @@\n import java.security.PrivilegedExceptionAction;\n import java.security.ProtectionDomain;\n import java.security.SecurityPermission;\n+import java.util.Collections;\n import java.util.PropertyPermission;\n+import java.util.Set;\n \n /**\n * Runs tika with limited parsers and limited permissions.\n@@ -54,6 +58,9 @@\n */\n final class TikaImpl {\n \n+ /** Exclude some formats */\n+ private static final Set<MediaType> EXCLUDES = Collections.singleton(MediaType.application(\"x-tika-ooxml\"));\n+\n /** subset of parsers for types we support */\n private static final Parser PARSERS[] = new Parser[] {\n // documents\n@@ -63,7 +70,7 @@ final class TikaImpl {\n new org.apache.tika.parser.txt.TXTParser(),\n new org.apache.tika.parser.microsoft.OfficeParser(),\n new org.apache.tika.parser.microsoft.OldExcelParser(),\n- new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(),\n+ ParserDecorator.withoutTypes(new org.apache.tika.parser.microsoft.ooxml.OOXMLParser(), EXCLUDES),\n new org.apache.tika.parser.odf.OpenDocumentParser(),\n new org.apache.tika.parser.iwork.IWorkPackageParser(),\n new org.apache.tika.parser.xml.DcXMLParser(),", "filename": "plugins/mapper-attachments/src/main/java/org/elasticsearch/mapper/attachments/TikaImpl.java", "status": "modified" }, { "diff": "@@ -44,7 +44,9 @@\n import static org.elasticsearch.mapper.attachments.AttachmentMapper.FieldNames.TITLE;\n import static org.elasticsearch.test.StreamsUtils.copyToBytesFromClasspath;\n import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath;\n+import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.isEmptyOrNullString;\n+import static org.hamcrest.Matchers.isEmptyString;\n import static org.hamcrest.Matchers.not;\n \n /**\n@@ -121,6 +123,40 @@ public void testAsciidocDocument() throws Exception {\n testMapper(\"asciidoc.asciidoc\", false);\n }\n \n+ public void testWordDocumentWithVisioSchema() throws Exception {\n+ assertParseable(\"issue-22077.docx\");\n+ testMapper(\"issue-22077.docx\", false);\n+ }\n+\n+ public void testLegacyWordDocumentWithVisioSchema() throws Exception {\n+ assertParseable(\"issue-22077.doc\");\n+ testMapper(\"issue-22077.doc\", false);\n+ }\n+\n+ public void testVisioIsExcluded() throws Exception {\n+ String filename = \"issue-22077.vsdx\";\n+ try (InputStream is = VariousDocTests.class.getResourceAsStream(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/\" 
+\n+ filename)) {\n+ byte bytes[] = IOUtils.toByteArray(is);\n+ String parsedContent = TikaImpl.parse(bytes, new Metadata(), -1);\n+ assertThat(parsedContent, isEmptyString());\n+ }\n+\n+ byte[] html = copyToBytesFromClasspath(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/\" + filename);\n+ BytesReference json = jsonBuilder()\n+ .startObject()\n+ .startObject(\"file\")\n+ .field(\"_name\", filename)\n+ .field(\"_content\", html)\n+ .endObject()\n+ .endObject().bytes();\n+\n+ ParseContext.Document doc = docMapper.parse(\"person\", \"person\", \"1\", json).rootDoc();\n+ assertThat(doc.get(docMapper.mappers().getMapper(\"file.content\").fieldType().name()), isEmptyString());\n+ assertThat(doc.get(docMapper.mappers().getMapper(\"file.content_type\").fieldType().name()), is(\"application/vnd.ms-visio.drawing\"));\n+ assertThat(doc.get(docMapper.mappers().getMapper(\"file.content_length\").fieldType().name()), is(\"210451\"));\n+ }\n+\n void assertException(String filename, String expectedMessage) throws Exception {\n try (InputStream is = VariousDocTests.class.getResourceAsStream(\"/org/elasticsearch/index/mapper/attachment/test/sample-files/\" + filename)) {\n byte bytes[] = IOUtils.toByteArray(is);", "filename": "plugins/mapper-attachments/src/test/java/org/elasticsearch/mapper/attachments/VariousDocTests.java", "status": "modified" }, { "diff": "", "filename": "plugins/mapper-attachments/src/test/resources/org/elasticsearch/index/mapper/attachment/test/sample-files/issue-22077.doc", "status": "added" }, { "diff": "", "filename": "plugins/mapper-attachments/src/test/resources/org/elasticsearch/index/mapper/attachment/test/sample-files/issue-22077.docx", "status": "added" }, { "diff": "", "filename": "plugins/mapper-attachments/src/test/resources/org/elasticsearch/index/mapper/attachment/test/sample-files/issue-22077.vsdx", "status": "added" } ] }
{ "body": "**Elasticsearch version**:5.2\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**:1.8.0\r\n\r\n**OS version**:macOS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nThere is a missing `\\n` after `/_cat/thread_pool/{thread_pools}` in `_cat` documentation\r\n\r\n**Steps to reproduce**:\r\n 1.curl -XGET 127.0.0.1:9200/_cat\r\n\r\n**Provide logs (if relevant)**:\r\npart of output:\r\n...\r\n/_cat/thread_pool\r\n**/_cat/thread_pool/{thread_pools}/_cat/plugins**\r\n/_cat/fielddata\r\n...\r\n\r\nexpected output:\r\n...\r\n/_cat/thread_pool\r\n**/_cat/thread_pool/{thread_pools}**\r\n**/_cat/plugins**\r\n/_cat/fielddata\r\n...", "comments": [ { "body": "Thanks for reporting @biyuhao . Would you like to send a PR?", "created_at": "2017-02-16T16:04:19Z" }, { "body": "Working on it, just a minute.", "created_at": "2017-02-16T16:07:46Z" } ], "number": 23211, "title": "Minor fix of _cat documentation" }
{ "body": "Close #23211 ", "number": 23213, "review_comments": [], "title": "Minor fix of _cat documentation (#23211)" }
{ "commits": [ { "message": "Minor fix of _cat documentation (#23211)" } ], "files": [ { "diff": "@@ -60,7 +60,7 @@ public RestThreadPoolAction(Settings settings, RestController controller) {\n @Override\n protected void documentation(StringBuilder sb) {\n sb.append(\"/_cat/thread_pool\\n\");\n- sb.append(\"/_cat/thread_pool/{thread_pools}\");\n+ sb.append(\"/_cat/thread_pool/{thread_pools}\\n\");\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestThreadPoolAction.java", "status": "modified" } ] }
{ "body": "Get HEAD requests incorrectly return a content-length header of 0. This\r\ncommit addresses this by removing the special handling for get HEAD\r\nrequests, and just relying on the general mechanism that exists for\r\nhandling HEAD requests in the REST layer.\r\n\r\nRelates #21125\r\n", "comments": [ { "body": "retest this please", "created_at": "2017-02-15T17:37:41Z" }, { "body": "Thanks @nik9000.", "created_at": "2017-02-15T18:11:35Z" } ], "number": 23186, "title": "Fix get HEAD requests" }
{ "body": "A previous change aligned the handling of the GET document and HEAD document APIs. This commit aligns the specification for these two APIs as well, and fixes a failing test.\r\n\r\nRelates #23186\r\n\r\n", "number": 23196, "review_comments": [ { "body": "Would you mind to add a REST test in core for at least the `version` parameter? If we had one, we would have caught the change in #23186", "created_at": "2017-02-16T07:45:09Z" }, { "body": "Should we consider this a breaking change? The Exists response is changed with #23186", "created_at": "2017-02-16T07:46:34Z" }, { "body": "I discussed this via another channel with @tlrx and we agreed that #23186 is not a breaking change, but rather a bug fix.", "created_at": "2017-02-16T13:27:49Z" }, { "body": "@tlrx I pushed 0c63bf3ef687813524a88bb4fcb1b676e3dd83d2. This test would have failed before #23186, and passes after it.", "created_at": "2017-02-16T13:35:40Z" }, { "body": "sorry for being late to the party, but I think these params weren't previously accepted by the exists api. I am wondering what their meaning is in the context of an exists request, Version makes sense to me, but the exclude, include and source don't. Note that we have a separate endpoint (`/{index}/{type}/{id}/_source`) to check whether the source exists or not (which is not in our spec and that is a bug that I will fix). Thoughts?", "created_at": "2017-02-17T15:21:34Z" }, { "body": "HEAD and GET should have exactly the same behavior, just HEAD returns no body but otherwise all the headers that GET would return (including the content-length that GET would return).", "created_at": "2017-02-17T15:34:16Z" }, { "body": "sure but what does it mean to get back e.g. 404 when I provide source_include? that the document is not there? or that the source is not there? or that the required fields are not there?", "created_at": "2017-02-17T15:36:19Z" }, { "body": "The response means exactly the same as if you were to make exactly the same request with the GET verb instead of the HEAD verb.", "created_at": "2017-02-17T15:48:29Z" } ], "title": "Fix REST spec for exists" }
{ "commits": [ { "message": "Fix REST spec for exists\n\nA previous change aligned the handling of the GET document and HEAD\ndocument APIs. This commit aligns the specification for these two APIs\nas well, and fixes a failing test." }, { "message": "Add REST test for exists with version" } ], "files": [ { "diff": "@@ -44,7 +44,6 @@\n \n public class CrudIT extends ESRestHighLevelClientTestCase {\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/pull/23196\")\n public void testExists() throws IOException {\n {\n GetRequest getRequest = new GetRequest(\"index\", \"type\", \"id\");\n@@ -64,10 +63,7 @@ public void testExists() throws IOException {\n }\n {\n GetRequest getRequest = new GetRequest(\"index\", \"type\", \"does_not_exist\").version(1);\n- ElasticsearchException exception = expectThrows(ElasticsearchException.class,\n- () -> execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync));\n- assertEquals(RestStatus.BAD_REQUEST, exception.status());\n- assertThat(exception.getMessage(), containsString(\"/index/type/does_not_exist?version=1: HTTP/1.1 400 Bad Request\"));\n+ assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync));\n }\n }\n ", "filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java", "status": "modified" }, { "diff": "@@ -23,6 +23,10 @@\n }\n },\n \"params\": {\n+ \"stored_fields\": {\n+ \"type\": \"list\",\n+ \"description\" : \"A comma-separated list of stored fields to return in the response\"\n+ },\n \"parent\": {\n \"type\" : \"string\",\n \"description\" : \"The ID of the parent document\"\n@@ -42,6 +46,27 @@\n \"routing\": {\n \"type\" : \"string\",\n \"description\" : \"Specific routing value\"\n+ },\n+ \"_source\": {\n+ \"type\" : \"list\",\n+ \"description\" : \"True or false to return the _source field or not, or a list of fields to return\"\n+ },\n+ \"_source_exclude\": {\n+ \"type\" : \"list\",\n+ \"description\" : \"A list of fields to exclude from the returned _source field\"\n+ },\n+ \"_source_include\": {\n+ \"type\" : \"list\",\n+ \"description\" : \"A list of fields to extract and return from the _source field\"\n+ },\n+ \"version\" : {\n+ \"type\" : \"number\",\n+ \"description\" : \"Explicit version number for concurrency control\"\n+ },\n+ \"version_type\": {\n+ \"type\" : \"enum\",\n+ \"options\" : [\"internal\", \"external\", \"external_gte\", \"force\"],\n+ \"description\" : \"Specific version type\"\n }\n }\n },", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/exists.json", "status": "modified" }, { "diff": "@@ -25,3 +25,12 @@\n id: 1\n \n - is_true: ''\n+\n+ - do:\n+ exists:\n+ index: test_1\n+ type: test\n+ id: 1\n+ version: 1\n+\n+ - is_true: ''", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/exists/10_basic.yaml", "status": "modified" } ] }
{ "body": "After #21123 when Elasticsearch receive a HEAD request it returns the Content-Length of the that it would return for a GET request with an empty response body. Except in the document exists, index exists, and type exists requests which return 0. We should fix them to also return the Content-Length that would be in the response.\n", "comments": [ { "body": "I'm adding the v5.1.0 label too, I think we should target a fix there.\n", "created_at": "2016-10-26T05:16:19Z" }, { "body": "These are all addressed now. Closing.", "created_at": "2017-06-12T12:10:12Z" } ], "number": 21125, "title": "Some endpoints return Content-Length: 0 for HEAD requests" }
{ "body": "Get mappings HEAD requests incorrectly return a content-length header of 0. This commit addresses this by removing the special handling for get mappings HEAD requests, and just relying on the general mechanism that exists for handling HEAD requests in the REST layer.\r\n\r\nRelates #21125\r\n", "number": 23192, "review_comments": [], "title": "Fix get mappings HEAD requests" }
{ "commits": [ { "message": "Fix get mappings HEAD requests\n\nGet mappings HEAD requests incorrectly return a content-length header of\n0. This commit addresses this by removing the special handling for get\nmappings HEAD requests, and just relying on the general mechanism that\nexists for handling HEAD requests in the REST layer." }, { "message": "Merge branch 'master' into fix-get-mapping-head\n\n* master: (1210 commits)\n Add support for clear scroll to high level REST client (#25038)\n Tiny correction in inner-hits.asciidoc (#25066)\n Added release notes for 6.0.0-alpha2\n Expand index expressions against indices only when managing aliases (#23997)\n Collapse inner hits rest test should not skip 5.x\n Settings: Fix secure settings by prefix (#25064)\n add `exclude_keys` option to KeyValueProcessor (#24876)\n Test: update missing body tests to run against versions >= 5.5.0\n Track EWMA[1] of task execution time in search threadpool executor\n Removes an invalid assert in resizing big arrays which does not always hold (resizing can result in a smaller size than the current size, while the assert attempted to verify the new size is always greater than the current).\n Fixed NPEs caused by requests without content. (#23497)\n Plugins can register pre-configured char filters (#25000)\n Build: Allow preserving shared dir (#24962)\n Tests: Make secure settings available from settings builder for tests (#25037)\n [TEST] Skip wildcard expansion test due to breaking change\n Test that gradle and Java version types match (#24943)\n Include duplicate jar when jarhell check fails\n Change ScriptContexts to use needs instead of uses$. (#25036)\n Change `has_child`, `has_parent` queries and `childen` aggregation to work with the new join field type and at the same time maintaining support for the `_parent` meta field type.\n Remove comma-separated feature parsing for GetIndicesAction\n ..." }, { "message": "Merge branch 'master' into fix-get-mapping-head\n\n* master: (80 commits)\n Test: remove faling test that relies on merge order\n Log checkout so SHA is known\n Add link to community Rust Client (#22897)\n \"shard started\" should show index and shard ID (#25157)\n await fix testWithRandomException\n Change BWC versions on create index response\n Return the index name on a create index response\n Remove incorrect bwc branch logic from master\n Correctly format arrays in output\n [Test] Extending parsing checks for SearchResponse (#25148)\n Scripting: Change keys for inline/stored scripts to source/id (#25127)\n [Test] Add test for custom requests in High Level Rest Client (#25106)\n nested: In case of a single type the _id field should be added to the nested document instead of _uid field.\n `type` and `id` are lost upon serialization of `Translog.Delete`. (#24586)\n fix highlighting docs\n Fix NPE in token_count datatype with null value (#25046)\n Remove the postings highlighter and make unified the default highlighter choice (#25028)\n [Test] Adding test for parsing SearchShardFailure leniently (#25144)\n Fix typo in shards.asciidoc (#25143)\n List Hibernate Search (#25145)\n ..." 
}, { "message": "Handle not found" }, { "message": "Merge branch 'master' into fix-get-mapping-head\n\n* master:\n Fix handling of exceptions thrown on HEAD requests\n Fix comment formatting in EvilLoggerTests\n Remove unneeded weak reference from prefix logger" }, { "message": "More tests" }, { "message": "Fix test" } ], "files": [ { "diff": "@@ -275,7 +275,6 @@\n import org.elasticsearch.rest.action.admin.indices.RestRolloverIndexAction;\n import org.elasticsearch.rest.action.admin.indices.RestShrinkIndexAction;\n import org.elasticsearch.rest.action.admin.indices.RestSyncedFlushAction;\n-import org.elasticsearch.rest.action.admin.indices.RestTypesExistsAction;\n import org.elasticsearch.rest.action.admin.indices.RestUpdateSettingsAction;\n import org.elasticsearch.rest.action.admin.indices.RestUpgradeAction;\n import org.elasticsearch.rest.action.admin.indices.RestValidateQueryAction;\n@@ -547,7 +546,6 @@ public void initRestHandlers(Supplier<DiscoveryNodes> nodesInCluster) {\n registerHandler.accept(new RestGetAllAliasesAction(settings, restController));\n registerHandler.accept(new RestGetAllMappingsAction(settings, restController));\n registerHandler.accept(new RestGetAllSettingsAction(settings, restController, indexScopedSettings, settingsFilter));\n- registerHandler.accept(new RestTypesExistsAction(settings, restController));\n registerHandler.accept(new RestGetIndicesAction(settings, restController, indexScopedSettings, settingsFilter));\n registerHandler.accept(new RestIndicesStatsAction(settings, restController));\n registerHandler.accept(new RestIndicesSegmentsAction(settings, restController));", "filename": "core/src/main/java/org/elasticsearch/action/ActionModule.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.rest.action.admin.indices;\n \n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n \n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsRequest;\n@@ -28,7 +29,9 @@\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.indices.TypeMissingException;\n@@ -37,21 +40,33 @@\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.RestResponse;\n+import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.rest.action.RestBuilderListener;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Set;\n+import java.util.SortedSet;\n+import java.util.stream.Collectors;\n \n import static org.elasticsearch.rest.RestRequest.Method.GET;\n+import static org.elasticsearch.rest.RestRequest.Method.HEAD;\n import static org.elasticsearch.rest.RestStatus.OK;\n \n public class RestGetMappingAction extends BaseRestHandler {\n- public RestGetMappingAction(Settings settings, RestController controller) {\n+\n+ public RestGetMappingAction(final Settings settings, final RestController controller) {\n super(settings);\n controller.registerHandler(GET, 
\"/{index}/{type}/_mapping\", this);\n controller.registerHandler(GET, \"/{index}/_mappings\", this);\n controller.registerHandler(GET, \"/{index}/_mapping\", this);\n controller.registerHandler(GET, \"/{index}/_mappings/{type}\", this);\n controller.registerHandler(GET, \"/{index}/_mapping/{type}\", this);\n+ controller.registerHandler(HEAD, \"/{index}/_mapping/{type}\", this);\n controller.registerHandler(GET, \"/_mapping/{type}\", this);\n }\n \n@@ -64,48 +79,87 @@ public String getName() {\n public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {\n final String[] indices = Strings.splitStringByCommaToArray(request.param(\"index\"));\n final String[] types = request.paramAsStringArrayOrEmptyIfAll(\"type\");\n- GetMappingsRequest getMappingsRequest = new GetMappingsRequest();\n+ final GetMappingsRequest getMappingsRequest = new GetMappingsRequest();\n getMappingsRequest.indices(indices).types(types);\n getMappingsRequest.indicesOptions(IndicesOptions.fromRequest(request, getMappingsRequest.indicesOptions()));\n getMappingsRequest.local(request.paramAsBoolean(\"local\", getMappingsRequest.local()));\n return channel -> client.admin().indices().getMappings(getMappingsRequest, new RestBuilderListener<GetMappingsResponse>(channel) {\n @Override\n- public RestResponse buildResponse(GetMappingsResponse response, XContentBuilder builder) throws Exception {\n-\n- ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> mappingsByIndex = response.getMappings();\n- if (mappingsByIndex.isEmpty()) {\n- if (indices.length != 0 && types.length != 0) {\n- return new BytesRestResponse(OK, builder.startObject().endObject());\n- } else if (indices.length != 0) {\n+ public RestResponse buildResponse(final GetMappingsResponse response, final XContentBuilder builder) throws Exception {\n+ final ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> mappingsByIndex = response.getMappings();\n+ if (mappingsByIndex.isEmpty() && (indices.length != 0 || types.length != 0)) {\n+ if (indices.length != 0 && types.length == 0) {\n builder.close();\n- return new BytesRestResponse(channel, new IndexNotFoundException(indices[0]));\n- } else if (types.length != 0) {\n- builder.close();\n- return new BytesRestResponse(channel, new TypeMissingException(\"_all\", types[0]));\n+ return new BytesRestResponse(channel, new IndexNotFoundException(String.join(\",\", indices)));\n } else {\n- return new BytesRestResponse(OK, builder.startObject().endObject());\n+ builder.close();\n+ return new BytesRestResponse(channel, new TypeMissingException(\"_all\", String.join(\",\", types)));\n }\n }\n \n- builder.startObject();\n- for (ObjectObjectCursor<String, ImmutableOpenMap<String, MappingMetaData>> indexEntry : mappingsByIndex) {\n- builder.startObject(indexEntry.key);\n- builder.startObject(Fields.MAPPINGS);\n- for (ObjectObjectCursor<String, MappingMetaData> typeEntry : indexEntry.value) {\n- builder.field(typeEntry.key);\n- builder.map(typeEntry.value.sourceAsMap());\n+ final Set<String> typeNames = new HashSet<>();\n+ for (final ObjectCursor<ImmutableOpenMap<String, MappingMetaData>> cursor : mappingsByIndex.values()) {\n+ for (final ObjectCursor<String> inner : cursor.value.keys()) {\n+ typeNames.add(inner.value);\n+ }\n+ }\n+\n+ final SortedSet<String> difference = Sets.sortedDifference(Arrays.stream(types).collect(Collectors.toSet()), typeNames);\n+\n+ // now remove requested aliases that contain wildcards that are simple matches\n+ final 
List<String> matches = new ArrayList<>();\n+ outer:\n+ for (final String pattern : difference) {\n+ if (pattern.contains(\"*\")) {\n+ for (final String typeName : typeNames) {\n+ if (Regex.simpleMatch(pattern, typeName)) {\n+ matches.add(pattern);\n+ continue outer;\n+ }\n+ }\n }\n- builder.endObject();\n- builder.endObject();\n }\n+ difference.removeAll(matches);\n+\n+ final RestStatus status;\n+ builder.startObject();\n+ {\n+ if (difference.isEmpty()) {\n+ status = RestStatus.OK;\n+ } else {\n+ status = RestStatus.NOT_FOUND;\n+ final String message;\n+ if (difference.size() == 1) {\n+ message = String.format(Locale.ROOT, \"type [%s] missing\", toNamesString(difference.iterator().next()));\n+ } else {\n+ message = String.format(Locale.ROOT, \"types [%s] missing\", toNamesString(difference.toArray(new String[0])));\n+ }\n+ builder.field(\"error\", message);\n+ builder.field(\"status\", status.getStatus());\n+ }\n \n+ for (final ObjectObjectCursor<String, ImmutableOpenMap<String, MappingMetaData>> indexEntry : mappingsByIndex) {\n+ builder.startObject(indexEntry.key);\n+ {\n+ builder.startObject(\"mappings\");\n+ {\n+ for (final ObjectObjectCursor<String, MappingMetaData> typeEntry : indexEntry.value) {\n+ builder.field(typeEntry.key, typeEntry.value.sourceAsMap());\n+ }\n+ }\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ }\n+ }\n builder.endObject();\n- return new BytesRestResponse(OK, builder);\n+ return new BytesRestResponse(status, builder);\n }\n });\n }\n \n- static class Fields {\n- static final String MAPPINGS = \"mappings\";\n+ private static String toNamesString(final String... names) {\n+ return Arrays.stream(names).collect(Collectors.joining(\",\"));\n }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetMappingAction.java", "status": "modified" }, { "diff": "@@ -23,9 +23,9 @@ following are some examples:\n \n [source,js]\n --------------------------------------------------\n-GET /_mapping/tweet,kimchy\n+GET /_mapping/tweet\n \n-GET /_all/_mapping/tweet,book\n+GET /_all/_mapping/tweet\n --------------------------------------------------\n // CONSOLE\n // TEST[setup:twitter]", "filename": "docs/reference/indices/get-mapping.asciidoc", "status": "modified" }, { "diff": "@@ -76,8 +76,14 @@ public void testIndexExists() throws IOException {\n \n public void testTypeExists() throws IOException {\n createTestDoc();\n- headTestCase(\"/test/test\", emptyMap(), equalTo(0));\n- headTestCase(\"/test/test\", singletonMap(\"pretty\", \"true\"), equalTo(0));\n+ headTestCase(\"/test/_mapping/test\", emptyMap(), greaterThan(0));\n+ headTestCase(\"/test/_mapping/test\", singletonMap(\"pretty\", \"true\"), greaterThan(0));\n+ }\n+\n+ public void testTypeDoesNotExist() throws IOException {\n+ createTestDoc();\n+ headTestCase(\"/test/_mapping/does-not-exist\", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0));\n+ headTestCase(\"/text/_mapping/test,does-not-exist\", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0));\n }\n \n public void testAliasExists() throws IOException {", "filename": "modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java", "status": "modified" }, { "diff": "@@ -1,5 +1,8 @@\n ---\n-\"Return empty response when type doesn't exist\":\n+\"Non-existent type returns 404\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing types\n - do:\n indices.create:\n index: test_index\n@@ -12,11 +15,91 @@\n analyzer: whitespace\n \n - do:\n+ catch: missing\n 
indices.get_mapping:\n index: test_index\n type: not_test_type\n- \n- - match: { '': {}}\n+\n+ - match: { status: 404 }\n+ - match: { error.reason: 'type[[not_test_type]] missing' }\n+\n+---\n+\"No type matching pattern returns 404\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing types\n+ - do:\n+ indices.create:\n+ index: test_index\n+ body:\n+ mappings:\n+ test_type:\n+ properties:\n+ text:\n+ type: text\n+ analyzer: whitespace\n+\n+ - do:\n+ catch: missing\n+ indices.get_mapping:\n+ index: test_index\n+ type: test*,not*\n+\n+ - match: { status: 404 }\n+ - match: { error: 'type [not*] missing' }\n+ - is_true: test_index.mappings.test_type\n+\n+---\n+\"Existent and non-existent type returns 404 and the existing type\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing types\n+ - do:\n+ indices.create:\n+ index: test_index\n+ body:\n+ mappings:\n+ test_type:\n+ properties:\n+ text:\n+ type: text\n+ analyzer: whitespace\n+\n+ - do:\n+ catch: missing\n+ indices.get_mapping:\n+ index: test_index\n+ type: test_type,not_test_type\n+\n+ - match: { status: 404 }\n+ - match: { error: 'type [not_test_type] missing' }\n+ - is_true: test_index.mappings.test_type\n+\n+---\n+\"Existent and non-existent types returns 404 and the existing type\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: Previous versions did not 404 on missing types\n+ - do:\n+ indices.create:\n+ index: test_index\n+ body:\n+ mappings:\n+ test_type:\n+ properties:\n+ text:\n+ type: text\n+ analyzer: whitespace\n+\n+ - do:\n+ catch: missing\n+ indices.get_mapping:\n+ index: test_index\n+ type: test_type,not_test_type,another_not_test_type\n+\n+ - match: { status: 404 }\n+ - match: { error: 'types [another_not_test_type,not_test_type] missing' }\n+ - is_true: test_index.mappings.test_type\n \n ---\n \"Type missing when no types exist\":", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_mapping/20_missing_type.yml", "status": "modified" } ] }
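The reworked `RestGetMappingAction` in the record above decides between 200 and 404 by computing which requested type names are missing: literal names that exist are fine, and wildcard patterns count as satisfied if they match at least one existing type; anything left over is reported in the error message. Here is a standalone, plain-JDK sketch of that selection logic (the class and method names are illustrative, and `simpleMatch` is a stand-in for Elasticsearch's internal `Regex.simpleMatch`, supporting only the `*` wildcard).

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.regex.Pattern;

public class MissingTypesSketch {

    // '*' is the only wildcard; everything else is matched literally.
    static boolean simpleMatch(String pattern, String value) {
        String regex = ("\\Q" + pattern + "\\E").replace("*", "\\E.*\\Q");
        return Pattern.matches(regex, value);
    }

    static SortedSet<String> missingTypes(Set<String> requested, Set<String> existing) {
        SortedSet<String> missing = new TreeSet<>(requested);
        missing.removeAll(existing);                           // literal names that exist are not missing
        missing.removeIf(pattern -> pattern.contains("*")      // neither are wildcards that match something
                && existing.stream().anyMatch(type -> simpleMatch(pattern, type)));
        return missing;                                        // anything left drives the 404
    }

    public static void main(String[] args) {
        Set<String> existing = new HashSet<>(Arrays.asList("test_type"));
        // Prints [not_test_type]: respond 404, but the existing type's mapping is still returned.
        System.out.println(missingTypes(new HashSet<>(Arrays.asList("test_type", "not_test_type")), existing));
        // Prints []: the wildcard matched an existing type, so respond 200.
        System.out.println(missingTypes(new HashSet<>(Arrays.asList("test*")), existing));
    }
}
```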