issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 262k ⌀ | issue_title stringlengths 1 1.02k | issue_comments_url stringlengths 53 116 | issue_comments_count int64 0 2.49k | issue_created_at stringdate 1999-03-17 02:06:42 2025-06-23 11:41:49 | issue_updated_at stringdate 2000-02-10 06:43:57 2025-06-23 11:43:00 | issue_html_url stringlengths 34 97 | issue_github_id int64 132 3.17B | issue_number int64 1 215k |
|---|---|---|---|---|---|---|---|---|---|
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper's S3 source can lose connections from the AWS SDK connection pool. See the following error.
```
2023-11-30T22:35:53.601 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Error reading from S3 object: s3ObjectReference=[bucketName=my-bucket, key=not-valid-gzip.gz]. Unable to execute HTTP request: Timeout waiting for connection from pool
2023-11-30T22:35:53.601 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Error processing from S3: Unable to execute HTTP request: Timeout waiting for connection from pool. Retrying with exponential backoff.
```
After this happens, Data Prepper's `s3` source is unable to reclaim the connection from the connection pool. Data Prepper can run for hours or days without recovering these connections.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure an S3 bucket with SQS queue
2. Create a pipeline with an `s3` source. Configure it to use `automatic` compression (using `gzip` would probably work too)
3. Upload a file which is uncompressed but has the `.gz` extension, e.g. `not-valid-gzip.gz`
4. Wait a while - maybe 30 minutes.
5. You see the error.
6. Upload a valid file. You will see the same error because the S3 AWS SDK is out of connections in the pool.
**Expected behavior**
Data Prepper should not exhaust the connection pool in this situation.
**Environment (please complete the following information):**
Data Prepper 2.6.0
**Additional context**
Here are the logs from when the transition occurred. The `GZIP encoding specified but data did contain gzip magic header` log is expected since this is not a Gzip file.
```
2023-11-30T22:34:21.343 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Received 1 messages from SQS. Processing 1 messages.
2023-11-30T22:34:21.387 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Error processing from S3: GZIP encoding specified but data did contain gzip magic header. Retrying with exponential backoff.
2023-11-30T22:34:21.387 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Error reading from S3 object: s3ObjectReference=[bucketName=my-bucket, key=not-valid-gzip.gz]. GZIP encoding specified but data did contain gzip magic header
2023-11-30T22:34:21.387 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Pausing SQS processing for 21.3 seconds due to an error in processing.
2023-11-30T22:34:51.343 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Read S3 object: [bucketName=my-bucket, key=not-valid-gzip.gz]
2023-11-30T22:34:51.343 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Received 1 messages from SQS. Processing 1 messages.
2023-11-30T22:35:53.601 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Error reading from S3 object: s3ObjectReference=[bucketName=my-bucket, key=not-valid-gzip.gz]. Unable to execute HTTP request: Timeout waiting for connection from pool
2023-11-30T22:35:53.601 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Error processing from S3: Unable to execute HTTP request: Timeout waiting for connection from pool. Retrying with exponential backoff.
2023-11-30T22:35:53.601 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Pausing SQS processing for 18.477 seconds due to an error in processing.
2023-11-30T22:36:12.134 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Read S3 object: [bucketName=my-bucket, key=not-valid-gzip.gz]
2023-11-30T22:36:12.134 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Received 1 messages from SQS. Processing 1 messages.
2023-11-30T22:37:13.656 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Error processing from S3: Unable to execute HTTP request: Timeout waiting for connection from pool. Retrying with exponential backoff.
```
Also, here are the logs from the first GZip failure. This shows the timestamps for the expected errors.
```
2023-11-30T22:09:51.178 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Received 1 messages from SQS. Processing 1 messages.
2023-11-30T22:09:51.179 [Thread-11] INFO org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Read S3 object: [bucketName=my-bucket, key=not-valid-gzip.gz]
2023-11-30T22:09:51.234 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Error reading from S3 object: s3ObjectReference=[bucketName=my-bucket, key=not-valid-gzip.gz]. GZIP encoding specified but data did contain gzip magic header
2023-11-30T22:09:51.234 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Error processing from S3: GZIP encoding specified but data did contain gzip magic header. Retrying with exponential backoff.
``` | [BUG] Data Prepper is losing connections from S3 pool | https://api.github.com/repos/opensearch-project/data-prepper/issues/3809/comments | 1 | 2023-12-05T23:08:49Z | 2023-12-11T23:07:53Z | https://github.com/opensearch-project/data-prepper/issues/3809 | 2,027,286,697 | 3,809 |
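The failure pattern in this issue — a decompression error leaving the pooled HTTP connection unreleased — can be illustrated outside the AWS SDK. The following is a minimal Python sketch (the real component is the Java `S3ObjectWorker`; the `TrackedStream` class is invented for illustration) showing that closing the raw stream in a context manager releases it even when the gzip magic-header check fails:

```python
import gzip
import io

class TrackedStream(io.BytesIO):
    """Stand-in for the pooled HTTP response stream; records whether close() ran."""
    def __init__(self, data):
        super().__init__(data)
        self.was_closed = False

    def close(self):
        self.was_closed = True
        super().close()

def read_object(raw_stream):
    # Closing the raw stream in a `with` block releases it even if the
    # gzip magic-header check raises -- the analogue of returning the
    # connection to the SDK pool on every code path, including failures.
    with raw_stream:
        with gzip.GzipFile(fileobj=raw_stream) as gz:
            return gz.read()

stream = TrackedStream(b"this is not gzip data")
try:
    read_object(stream)
except gzip.BadGzipFile:
    pass  # expected: invalid magic header

print(stream.was_closed)  # True: the underlying "connection" was released
```

The analogous Java fix would be to read the S3 response stream inside try-with-resources so the connection returns to the pool whether or not decoding succeeds.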
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently the opensearch sink supports dynamic indexes, but the `documents*` metrics such as `documentsWritten`, `documentsSuccess`, etc. do not have resolution by index name.
**Describe the solution you'd like**
Add index name as a dimension into the metrics
| Add index resolution into opensearch sink indexing metrics | https://api.github.com/repos/opensearch-project/data-prepper/issues/3807/comments | 0 | 2023-12-05T17:17:12Z | 2023-12-05T17:17:28Z | https://github.com/opensearch-project/data-prepper/issues/3807 | 2,026,748,843 | 3,807 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
At present, Data Prepper seems to continually retry access to an S3 bucket, no matter what error is returned. This means that when using SQS, messages may be continually returned to the queue and ultimately backlog the entire pipeline.
**Describe the solution you'd like**
I would like the ability to specify the action to be taken based on the message returned from S3 when attempting to access the object.
E.g. a 403 might mean retry x times and, if still failing, put the message back onto the queue, which with DLQ enabled should get the message into the DLQ. A "no object found" should result in deleting the message and creating a log line, e.g. `object x not found in bucket y, message deleted`
Note: I haven't thought through all the possible failure scenarios
**Additional context**
Sourced from https://github.com/opensearch-project/data-prepper/discussions/3726
| Allow different actions based off of different S3 error codes | https://api.github.com/repos/opensearch-project/data-prepper/issues/3804/comments | 0 | 2023-12-04T08:31:28Z | 2023-12-05T20:33:58Z | https://github.com/opensearch-project/data-prepper/issues/3804 | 2,023,300,305 | 3,804 |
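As a sketch of what such configurable handling could look like, the policy below maps S3 error codes to actions. All action names (`retry_then_dlq`, `delete_and_log`) are hypothetical — nothing like this exists in Data Prepper today:

```python
# Hypothetical policy table: HTTP status code -> action name.
DEFAULT_ACTION = "retry"

def resolve_action(status_code, policy, attempts, max_retries=3):
    """Decide what to do with an SQS message after an S3 access error."""
    action = policy.get(status_code, DEFAULT_ACTION)
    if action == "retry_then_dlq":
        # Exhausted retries: route the message toward the DLQ.
        return "dlq" if attempts >= max_retries else "retry"
    return action

policy = {
    403: "retry_then_dlq",  # access denied: retry a few times, then DLQ
    404: "delete_and_log",  # object not found: delete the SQS message and log
}

print(resolve_action(403, policy, attempts=1))  # retry
print(resolve_action(403, policy, attempts=3))  # dlq
print(resolve_action(404, policy, attempts=1))  # delete_and_log
print(resolve_action(500, policy, attempts=1))  # retry (default)
```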
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The update and upsert actions always pass the full Event to the bulk actions, instead of passing only the part of the Event that results from applying `document_root_key`, `exclude_keys`, `include_keys`, etc.
For update we are passing the original jsonNode here (https://github.com/opensearch-project/data-prepper/blob/979a0045b4970c7d31909a722a753adc3d2666ca/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L307), while for index we pass the document without the root key / exclude_keys here (https://github.com/opensearch-project/data-prepper/blob/979a0045b4970c7d31909a722a753adc3d2666ca/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L334).
**To Reproduce**
Steps to reproduce the behavior:
Send two items with the following structure and sink config
```
{"operation": "index", "item": { "doc_id": 110, "name": "Original Name" }}
{"operation": "update", "item": { "doc_id": 110, "name": "Updated Name" }}
```
```
document_id_field: "item/doc_id"
index: "actions-test-index"
document_root_key: "item"
actions:
- type: "update"
when: '/operation == "update"'
- type: "index"
```
and the final document in OpenSearch will look like
```
{
"_index": "actions-test-index",
"_id": "110",
"_score": 1,
"_source": {
"doc_id": 110,
"name": "Original Name",
"item": {
"name": "Updated Name",
"doc_id": 110
},
"operation": "update"
}
}
```
**Expected behavior**
The updated document should look like
```
{
"_index": "actions-test-index",
"_id": "110",
"_score": 1,
"_source": {
"doc_id": 110,
"name": "Updated Name"
}
}
```
| [BUG] update and upsert bulk actions do not include changes from document_root_key, exclude_keys, etc | https://api.github.com/repos/opensearch-project/data-prepper/issues/3745/comments | 2 | 2023-12-01T16:57:29Z | 2023-12-06T21:21:06Z | https://github.com/opensearch-project/data-prepper/issues/3745 | 2,021,257,982 | 3,745 |
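The expected behavior amounts to running the same root-key extraction before building any bulk action, not only `index`. A small Python sketch of that invariant (helper names are invented; the real code paths are the `OpenSearchSink` lines linked above):

```python
def apply_document_root_key(event, root_key=None):
    """Return the document that should be sent in the bulk request."""
    return event[root_key] if root_key else event

def build_bulk_document(event, action, root_key=None):
    # The bug: the update/upsert path passed the full event directly.
    # Applying the same filter for every action yields the expected document.
    return apply_document_root_key(event, root_key)

index_event = {"operation": "index", "item": {"doc_id": 110, "name": "Original Name"}}
update_event = {"operation": "update", "item": {"doc_id": 110, "name": "Updated Name"}}

print(build_bulk_document(index_event, "index", root_key="item"))
print(build_bulk_document(update_event, "update", root_key="item"))
# Both emit only the contents of "item", matching the expected document above.
```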
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
At present the maximum back-off delay (for SQS; not sure about other sources) is hard-coded. We would like to be able to set it to a user-defined value.
**Describe the solution you'd like**
A configuration option to set the maximum back-off delay
**Additional context**
Sourced from https://github.com/opensearch-project/data-prepper/discussions/3726
| Make maximum back-off delay a configuration option | https://api.github.com/repos/opensearch-project/data-prepper/issues/3744/comments | 1 | 2023-12-01T10:20:24Z | 2025-02-05T11:14:03Z | https://github.com/opensearch-project/data-prepper/issues/3744 | 2,020,562,559 | 3,744 |
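For illustration, a capped exponential backoff is a one-liner; the `max_backoff` parameter below is the hypothetical user-defined value this request asks for:

```python
def backoff_delay(attempt, base=2.0, max_backoff=30.0):
    """Exponential backoff delay in seconds, capped at a user-defined maximum."""
    return min(base * (2 ** attempt), max_backoff)

print([backoff_delay(a) for a in range(6)])
# [2.0, 4.0, 8.0, 16.0, 30.0, 30.0] -- capped at max_backoff
```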
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
For my use case I have nested Ion documents in my input. For example:
```
{
"event": "{id:\"foo...\", status: ACTIVE, timestamp: 2023-11-30T21:05:23.383Z, amount: dollars::100.0}"
}
```
I would like to parse these into fields so that I can index and search them in OpenSearch.
**Describe the solution you'd like**
A processor for parsing Ion documents, `parse_ion`, similar to `parse_json` and `csv`.
The implementation would likely be very similar to `parse_json`; perhaps under the hood they can share most of their logic, supplying different `ObjectMapper` implementations for each as well as any language-specific configuration.
**Describe alternatives you've considered (Optional)**
It's possible to preprocess simple, well-formatted Ion documents by converting them to JSON with regular expressions (`substitute_string`) to prepare them for `parse_json`, but this is hacky, probably slow, and very prone to bugs.
I have also considered creating a new intermediary service that converts the Ion to JSON before submitting to data-prepper, but this adds complexity and defeats the purpose of data-prepper in general.
**Additional context**
I'm willing to submit a PR for this, would like to get feedback on the idea & approach though.
| Processor for parsing Amazon Ion documents | https://api.github.com/repos/opensearch-project/data-prepper/issues/3730/comments | 2 | 2023-11-30T21:24:25Z | 2023-12-11T19:16:57Z | https://github.com/opensearch-project/data-prepper/issues/3730 | 2,019,532,496 | 3,730 |
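The "share most of their logic" idea can be sketched as a generic parse processor parameterized by a deserializer. Here Python's `json.loads` stands in for the JSON `ObjectMapper`; an Ion implementation would supply an Ion reader instead. All names are illustrative:

```python
import json

def make_parse_processor(deserialize, source="message", destination=None):
    """Generic parse processor: only the deserializer differs per format."""
    def process(event):
        parsed = deserialize(event[source])
        if destination:
            event[destination] = parsed   # nest parsed fields under a key
        else:
            event.update(parsed)          # merge parsed fields into the event root
        return event
    return process

# parse_json is one instantiation; parse_ion would pass an Ion loads() instead.
parse_json = make_parse_processor(json.loads)

event = {"message": '{"id": "foo", "status": "ACTIVE"}'}
print(parse_json(event))
```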
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-6378 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>logback-classic-1.2.12.jar</b></p></summary>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /performance-test/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.2.12/d4dee19148dccb177a0736eb2027bd195341da78/logback-classic-1.2.12.jar</p>
<p>
Dependency Hierarchy:
- gatling-charts-highcharts-3.9.5.jar (Root Library)
- gatling-app-3.9.5.jar
- gatling-core-3.9.5.jar
- gatling-commons-3.9.5.jar
- :x: **logback-classic-1.2.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A serialization vulnerability in logback receiver component part of
logback version 1.4.11 allows an attacker to mount a Denial-Of-Service
attack by sending poisoned data.
<p>Publish Date: 2023-11-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-6378>CVE-2023-6378</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://logback.qos.ch/news.html#1.3.12">https://logback.qos.ch/news.html#1.3.12</a></p>
<p>Release Date: 2023-11-29</p>
<p>Fix Resolution: ch.qos.logback:logback-classic:1.3.12,1.4.12</p>
</p>
</details>
<p></p>
| CVE-2023-6378 (High) detected in logback-classic-1.2.12.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3729/comments | 0 | 2023-11-30T16:19:39Z | 2023-12-06T21:20:52Z | https://github.com/opensearch-project/data-prepper/issues/3729 | 2,019,015,421 | 3,729 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
There is confusion among users of the `s3` source when the source receives an SQS notification for the creation of an S3 folder. The S3 source processes these notifications and tries to get the object for the folder key, which results in 0 records being created, a warning being logged, and the `s3ObjectNoRecordsFound` metric being incremented.
`Failed to find any records in S3 object: s3ObjectReference=[bucketName=bucket-name, key=folder-name/].`
**To Reproduce**
Steps to reproduce the behavior:
1. Go to S3
2. Create a folder
3. Check notification in SQS
4. Start a pipeline with that notification
5. You will see the following warning log
`Failed to find any records in S3 object: s3ObjectReference=[bucketName=bucket-name, key=folder-name/].`
**Expected behavior**
There shouldn't be a log saying that no records were found for that key; instead we should skip the `getObject` call for this key, or log a different message saying that it's a folder.
There are a couple of ways we can achieve this:
1. Check the object size; it's 0 for folders and zero-byte objects. We already deserialize the SQS notification to `S3EventNotification`, which contains the size.
https://github.com/opensearch-project/data-prepper/blob/b6e38a77a7aacba7280728890df72197bd7ccd8d/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/s3/S3EventNotification.java#L90
2. Validate whether the key ends with `/`; I don't see that you can create a regular object whose key ends with `/`.
We can achieve this by doing the following [here](https://github.com/opensearch-project/data-prepper/blob/6dc1d12a4b84ade389d7cc311799363e3ea3114d/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/s3/SqsWorker.java#L218)
```
if (s3SourceConfig.getNotificationSource().equals(NotificationSourceOption.S3)
&& !parsedMessage.isEmptyNotification()
&& isS3EventNameCreated(parsedMessage)
&& !parsedMessage.getObjectKey().endsWith("/")) {
```
| [BUG] S3 source processes SQS notification when S3 folder is created | https://api.github.com/repos/opensearch-project/data-prepper/issues/3727/comments | 1 | 2023-11-29T21:35:45Z | 2023-12-05T19:38:20Z | https://github.com/opensearch-project/data-prepper/issues/3727 | 2,017,456,067 | 3,727 |
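Both proposed checks can be combined into a single predicate; a sketch of the idea (the real change would live in `SqsWorker` as shown above):

```python
def is_folder_notification(object_key, object_size):
    """True when an S3 event refers to a 'folder' placeholder object.

    Folder placeholders created in the S3 console are zero-byte objects
    whose key ends with '/'.
    """
    return object_size == 0 and object_key.endswith("/")

print(is_folder_notification("folder-name/", 0))          # True: skip getObject
print(is_folder_notification("folder-name/file.log", 0))  # False: real (empty) object
print(is_folder_notification("data.gz", 1024))            # False
```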
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The process workers crash when there are insufficient permissions to access the source cluster of an OpenSearch source pipeline. This causes the pipeline to shut down.
```
2023-11-28T20:19:42.499 [opensearch-migration-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [****] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Unable to call info API using the elasticsearch client
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1129) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.RuntimeException: Unable to call info API using the elasticsearch client
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getDistributionAndVersionNumber(SearchAccessorStrategy.java:191) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getSearchAccessor(SearchAccessorStrategy.java:107) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.startProcess(OpenSearchSource.java:74) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.start(OpenSearchSource.java:64) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.startSourceAndProcessors(Pipeline.java:215) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:260) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
... 2 more
Caused by: co.elastic.clients.elasticsearch._types.ElasticsearchException: [es/info] failed: [security_exception] no permissions for [cluster:monitor/main] and User [name=<IAM role ARN>, backend_roles=[<IAM role ARN>], requestedTenant=null]
at co.elastic.clients.transport.rest_client.RestClientTransport.getHighLevelResponse(RestClientTransport.java:281) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:147) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.elasticsearch.ElasticsearchClient.info(ElasticsearchClient.java:983) ~[elasticsearch-java-7.17.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getDistributionAndVersionNumber(SearchAccessorStrategy.java:188) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getSearchAccessor(SearchAccessorStrategy.java:107) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.startProcess(OpenSearchSource.java:74) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.start(OpenSearchSource.java:64) ~[opensearch-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.startSourceAndProcessors(Pipeline.java:215) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:260) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
... 2 more
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create an FGAC enabled OpenSearch/Elasticsearch cluster
2. Don't provide FGAC permissions to the pipeline role
3. Create an opensearch source pipeline
**Expected behavior**
Similar to the OpenSearch sink, the pipeline should keep retrying until the permissions issue is resolved rather than crashing
| [BUG] OpenSearch source pipeline crashes when there are insufficient permissions to the source cluster | https://api.github.com/repos/opensearch-project/data-prepper/issues/3725/comments | 2 | 2023-11-29T18:56:39Z | 2024-03-20T18:58:22Z | https://github.com/opensearch-project/data-prepper/issues/3725 | 2,017,221,253 | 3,725 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.6.0
**BUILD NUMBER**: 75
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/7022025728
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 7022025728: Release Data Prepper : 2.6.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3713/comments | 3 | 2023-11-28T17:26:02Z | 2023-11-28T17:27:38Z | https://github.com/opensearch-project/data-prepper/issues/3713 | 2,014,940,941 | 3,713 |
[
"opensearch-project",
"data-prepper"
Every time I restart the Data Prepper container/service, the output file configured in my file sink is deleted.
I think this is a big problem if you are collecting that results file with another program to send it to another platform, because a file collector can make mistakes when the file it is collecting is deleted and a new one with the same name takes its place.
While Data Prepper is running, it appends every new line to the file. Would it be possible for Data Prepper to continue appending new lines to the file after a restart of the container/service, without deleting it first?
Maybe you could add a config option to the file sink to control this.
[
"opensearch-project",
"data-prepper"
] | ## Is your feature request related to a problem? Please describe.
Today, I frequently need to repeat specific pieces of information in my config which are all the same. When I update them, I have to check that I caught every setting. For example, if I'm deploying my template in a new region, I have at least 3 region settings to update, plus the region names in the relevant ARNs. This can lead to bugs when I don't check carefully. I would like a way to declare something once and reuse it multiple times.
## Describe the solution you'd like
- I should be able to declare variables in the YAML
```yaml
version: "2"
my-pipeline:
variables:
ACCOUNT_ID: "123456789012"
REGION: "us-west-2"
APP_NAME: "Foo"
```
- I should be able to access these variables with a simple semantic like `${{VARIABLE_NAME}}`. For example:
```YAML
- opensearch:
hosts: [ "https://${{APP_NAME}}-${{ENV}}.${{REGION}}.es.amazonaws.com" ]
```
- I should be able to compose variables. For example:
```YAML
variables:
ACCOUNT_ID: "123456789012"
ROLE_NAME: osis-role
ROLE_ARN: "arn:aws:iam::${{ACCOUNT_ID}}:role/${{ROLE_NAME}}"
```
<details>
<summary> End-to-end example </summary>
```YAML
version: "2"
my-pipeline:
variables:
ACCOUNT_ID: "123456789012"
REGION: "us-west-2"
APP_NAME: "Foo"
ENV: "dev-andercj"
IS_SLS: true
ROLE_NAME: osis-role
ROLE_ARN: "arn:aws:iam::${{ACCOUNT_ID}}:role/${{ROLE_NAME}}"
source:
dynamodb:
tables:
- table_arn: "arn:aws:dynamodb:${{REGION}}:${{ACCOUNT_ID}}:table/${{APP_NAME}}-${{ENV}}"
stream:
start_position: "LATEST"
export:
s3_bucket: "osis-${{APP_NAME}}-${{ENV}}"
s3_prefix: "/ddbexports/${{APP_NAME}}/${{ENV}}"
aws:
region: "${{REGION}}"
sts_role_arn: "${{ROLE_ARN}}"
sink:
- opensearch:
hosts: [ "https://${{APP_NAME}}-${{ENV}}.${{REGION}}.es.amazonaws.com" ]
index: ${{APP_NAME}}
action: ${getMetadata("opensearch_action")}
document_id: ${getMetadata("primary_key")}
aws:
sts_role_arn: ${{ROLE_ARN}}
region: ${{REGION}}
serverless: ${{IS_SLS}}
dlq:
s3:
bucket: "osis-${{APP_NAME}}-${{ENV}}"
key_path_prefix: "dlq"
region: ${{REGION}}
sts_role_arn: ${{ROLE_ARN}}
```
</details>
**Open questions/other thoughts:**
- I should be able to declare variables outside the scope of a pipeline and it is then "global"
- I should be able to declare variables within a given pipeline's scope and another pipeline cannot access those variables
- I should be able to set variables with a processor (i.e. `set_variable`). These would only be set for that event's processing; there should be no side effects for other events.
- Today, you can kind of hack this by adding entries, but then I have to remember to not include them in the actual index.
- I should be able to specify a separate config file which can I can override my variables with (not in the primary YAML), so that I can avoid having to modify YAML in CI systems.
## Describe alternatives you've considered (Optional)
### Option 1: CloudFormation-style parameters
Use CloudFormation-like syntax [0]. In this, you'd specify Parameters at the top and reference them with a `Ref:` child-property.
My personal opinion is that I don't find that this fits the current style of DataPrepper config and it solves for a slightly different problem set, but I do believe it would effectively solve the problem.
### Option 2: `getVariable()` function
Similar to proposal, but would use a new function, `getVariable(string variableName)` to access a variables constants.
```YAML
- opensearch:
hosts: [ "https://${getVariable(\"APP_NAME\")}-${getVariable(\"ENV\")}.${getVariable(\"REGION\")}.es.amazonaws.com" ]
```
I think we may want to support this as well, but `getVariable` is verbose and would be less readable than the `${{VARIABLE_NAME}}` syntax proposed.
### Option 3: $ prefix
Use `getMetadata()` to access the variables, but use a `$` prefix to signal that it is a variable - `getMetadata("$VARIABLE_NAME")`
```YAML
- opensearch:
hosts: [ "https://${getMetadata(\"$APP_NAME\")}-${getMetadata(\"$ENV\")}.${getMetadata(\"$REGION\")}.es.amazonaws.com" ]
```
If we support the "@" prefix [1] for autogenerated metadata, then I think we may want to support this as well, but this form is verbose and would be less readable than the `${{VARIABLE_NAME}}` syntax proposed.
Options 2 & 3 could be useful in the future if there is support for ternary operators where you'd need to access the variable within the scope of a `${}`
## Additional context
- [0]: [CloudFormation Parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html)
- [1]\: https://github.com/opensearch-project/data-prepper/issues/3630
| User configurable "variables" for repeatedly used elements | https://api.github.com/repos/opensearch-project/data-prepper/issues/3684/comments | 7 | 2023-11-20T17:46:42Z | 2023-11-21T21:56:19Z | https://github.com/opensearch-project/data-prepper/issues/3684 | 2,002,705,839 | 3,684 |
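The proposed `${{VARIABLE_NAME}}` semantics, including composition, can be prototyped with a few lines of iterative substitution. This is a sketch of the requested behavior, not an existing Data Prepper feature:

```python
import re

VAR_PATTERN = re.compile(r"\$\{\{(\w+)\}\}")

def resolve_variables(variables, max_passes=10):
    """Expand ${{NAME}} references, allowing variables to reference each other."""
    resolved = dict(variables)
    for _ in range(max_passes):
        changed = False
        for name, value in resolved.items():
            if isinstance(value, str):
                new_value = VAR_PATTERN.sub(lambda m: str(resolved[m.group(1)]), value)
                if new_value != value:
                    resolved[name] = new_value
                    changed = True
        if not changed:
            return resolved
    raise ValueError("circular variable reference")

variables = {
    "ACCOUNT_ID": "123456789012",
    "ROLE_NAME": "osis-role",
    "ROLE_ARN": "arn:aws:iam::${{ACCOUNT_ID}}:role/${{ROLE_NAME}}",
}
print(resolve_variables(variables)["ROLE_ARN"])
# arn:aws:iam::123456789012:role/osis-role
```

A real implementation would also enforce the proposed scoping rules (global vs. per-pipeline) before expansion.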
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The following code has a bug: it releases event handles without first trying to push the events to the DLQ.
```
https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/kafka-plugins/src/main/java/org/opensearch/dataprepper/plugins/kafka/producer/KafkaCustomProducer.java#L198-L210
```
It should be fixed so that, if the DLQ is not configured or the DLQ push fails, events are released with a negative acknowledgement; otherwise they are released with a positive acknowledgement.
| [BUG] Kafka Producer Callback releases events incorrectly | https://api.github.com/repos/opensearch-project/data-prepper/issues/3680/comments | 0 | 2023-11-16T18:02:47Z | 2023-11-16T18:03:06Z | https://github.com/opensearch-project/data-prepper/issues/3680 | 1,997,451,146 | 3,680 |
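The desired release logic can be stated as a small decision function. This Python sketch only mirrors the described acknowledgement semantics; the helper names are invented and the real fix belongs in the Java `KafkaCustomProducer`:

```python
def release_failed_events(events, dlq_push=None):
    """Release event handles after a Kafka send failure.

    An event gets a positive acknowledgement (True) only if it was safely
    handed off to the DLQ; otherwise it must be negatively acknowledged
    (False) so it can be retried.
    """
    results = {}
    for event in events:
        acked = False
        if dlq_push is not None:
            try:
                dlq_push(event)
                acked = True   # event is safe in the DLQ
            except Exception:
                acked = False  # DLQ push failed: negative ack
        results[event] = acked
    return results

print(release_failed_events(["e1", "e2"]))                     # no DLQ configured
print(release_failed_events(["e1"], dlq_push=lambda e: None))  # DLQ push succeeds
```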
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper's new feature to provide [protection against premature visibility timeouts](https://github.com/opensearch-project/data-prepper/issues/2485) is very useful for avoiding duplicate values, and it should be the default. However, enabling it by default today may not work for existing users since it requires a new `sqs` permission.
**Describe the solution you'd like**
Change the default value to enabled. Do this in Data Prepper 3.0.
**Describe alternatives you've considered (Optional)**
Make this the default now. But this may result in many failed requests since it requires new permissions.
**Additional context**
See #2485 for the initial feature.
| Enable visibility duplication prevention by default | https://api.github.com/repos/opensearch-project/data-prepper/issues/3679/comments | 0 | 2023-11-16T17:06:19Z | 2023-11-16T17:06:50Z | https://github.com/opensearch-project/data-prepper/issues/3679 | 1,997,334,546 | 3,679 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We were recently concerned that Data Prepper was sending invalid data to OpenSearch. It turns out we don't have test cases to verify many different data combinations are correctly sent to OpenSearch.
**Describe the solution you'd like**
Create new tests in the OpenSearch integration tests (`OpenSearchIT`) to test various data combinations.
Examples:
**long**
* largest long value
* smallest long value
* other extreme values
* `1234567891011121314` (this was the value that originally initiated this; see https://github.com/opensearch-project/OpenSearch-Dashboards/issues/5485)
* `null`
**int**
* largest int value
* smallest int value
* other extreme values
* `null`
**boolean**
* `true`
* `false`
* `null`
**string**
* Very large strings
* Strings with special characters
* Strings with alternate character sets (e.g. non-Latin languages)
**double**
* large double values
* smallest double values (fractional of 0)
* both positive and negative
* other extreme values
* `null`
**float**
* large float values
* smallest float values (fractional part of 0)
* both positive and negative
* other extreme values
* `null`
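The long cases above can also be probed programmatically. A pure-JDK sketch (no OpenSearch client) that flags values which cannot survive a round-trip through `double` — the failure mode behind `1234567891011121314` in Dashboards, since JavaScript numbers are doubles:

```java
import java.util.List;

public class NumericBoundaryCases {
    /** Long values worth exercising, drawn from the list above. */
    static final List<Long> LONG_CASES = List.of(
            0L, Long.MAX_VALUE, Long.MIN_VALUE, 1234567891011121314L);

    /**
     * True if the value survives a round-trip through double. Values beyond
     * 2^53 generally do not, which is what originally surfaced the Dashboards
     * rendering issue. Note: Long.MAX_VALUE reports true here only because
     * the cast back to long saturates at Long.MAX_VALUE.
     */
    static boolean survivesDoubleRoundTrip(long value) {
        return (long) (double) value == value;
    }

    public static void main(String[] args) {
        for (long v : LONG_CASES) {
            System.out.println(v + " survives double round-trip: " + survivesDoubleRoundTrip(v));
        }
    }
}
```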
Any other values that may possibly have issues being serialized or sent to OpenSearch. | Integration tests to validate data going to OpenSearch | https://api.github.com/repos/opensearch-project/data-prepper/issues/3678/comments | 0 | 2023-11-16T16:12:29Z | 2023-11-28T14:24:36Z | https://github.com/opensearch-project/data-prepper/issues/3678 | 1,997,220,761 | 3,678 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I am trying to create a data prepper pipeline that consumes data from kafka and ingest it into opensearch.
I need to consume from a set of topics whose names are generated dynamically over time, so I don't know them in
advance. I only know that the pattern is always "my-topic-*".
**Describe the solution you'd like**
I saw that KafkaConsumer supports the subscription to a wildcard topic using [patterns](https://kafka.apache.org/26/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html),
but it seems data prepper only uses a [list of fixed topics](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/kafka-plugins/src/main/java/org/opensearch/dataprepper/plugins/kafka/consumer/KafkaCustomConsumer.java#L321).
Is it possible to add support for consuming wildcard topics as well?
**Describe alternatives you've considered (Optional)**
I saw that logstash supports this with the [topics_pattern](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html#plugins-inputs-kafka-topics_pattern) key.
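For reference, the upstream `KafkaConsumer.subscribe(java.util.regex.Pattern)` overload already accepts such a pattern. A small pure-JDK sketch of which dynamically created topic names the wildcard would cover (the consumer wiring itself is omitted since it needs a broker):

```java
import java.util.List;
import java.util.regex.Pattern;

public class TopicPatternDemo {
    // regex form of the "my-topic-*" wildcard from the request
    static final Pattern TOPIC_PATTERN = Pattern.compile("my-topic-.*");

    static boolean subscribes(String topic) {
        return TOPIC_PATTERN.matcher(topic).matches();
    }

    public static void main(String[] args) {
        // with a real consumer this would be: consumer.subscribe(TOPIC_PATTERN);
        for (String topic : List.of("my-topic-2023-11", "my-topic-orders", "other-topic")) {
            System.out.println(topic + " -> " + subscribes(topic));
        }
    }
}
```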
| Support wildcard pattern in kafka source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3677/comments | 2 | 2023-11-16T15:01:47Z | 2025-06-07T10:16:44Z | https://github.com/opensearch-project/data-prepper/issues/3677 | 1,997,068,100 | 3,677 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Provide a configuration option to allow ingesting the S3 data by file timestamp or some other attribute.
**Describe the solution you'd like**
Our batch process creates the Parquet files in batches; we would like to enforce some order when ingesting the source files.
**Describe alternatives you've considered (Optional)**
We have to create a cumbersome workaround: dump files in batches, and wait for the first batch to be ingested before dumping the second batch.
**Additional context**
Feel free to reach out to me internally for clarification.
| Provide a configuration to allow ingesting the s3 data by file timestamp or some other configurable attributes. | https://api.github.com/repos/opensearch-project/data-prepper/issues/3674/comments | 0 | 2023-11-16T02:28:19Z | 2023-11-21T20:39:19Z | https://github.com/opensearch-project/data-prepper/issues/3674 | 1,995,923,454 | 3,674 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The snapshot speed for a SINGLE table/collection from an existing MongoDB is not fast enough (current speed: 3-4 Mb per second per table/collection in local tests).
This is a limitation of the Debezium tool.
Debezium only parallelizes the snapshot task at the collection/table level. This means that no matter how you scale horizontally, the smallest unit of work is ONE node snapshotting all data from ONE MongoDB collection.
**Describe the solution you'd like**
For the snapshot (initial load), we will not use Debezium. Instead, OSDP will connect to MongoDB, create partitions for each collection, and then process each partition.
For change data, Debezium will still be used.
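As a rough sketch of the partitioning idea (the numeric key range and names here are hypothetical — a real implementation would page by `_id` ranges from MongoDB):

```java
import java.util.ArrayList;
import java.util.List;

public class CollectionPartitioner {
    /** Split the key range [min, max] into at most n contiguous partitions. */
    static List<long[]> partition(long min, long max, int n) {
        List<long[]> partitions = new ArrayList<>();
        long span = max - min + 1;
        long size = (span + n - 1) / n;  // ceiling division so nothing is dropped
        for (long start = min; start <= max; start += size) {
            partitions.add(new long[] {start, Math.min(start + size - 1, max)});
        }
        return partitions;
    }

    public static void main(String[] args) {
        // each partition can then be snapshotted by a different worker node
        for (long[] p : partition(0, 999, 4)) {
            System.out.println(p[0] + " .. " + p[1]);
        }
    }
}
```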
| Speed Up initial snapshot from MongoDB | https://api.github.com/repos/opensearch-project/data-prepper/issues/3673/comments | 0 | 2023-11-16T01:39:46Z | 2024-01-10T20:03:24Z | https://github.com/opensearch-project/data-prepper/issues/3673 | 1,995,875,607 | 3,673 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Sending traces with compression from the OTel collector to the OTel trace source in a pipeline results in a 500 error.
**To Reproduce**
Steps to reproduce the behavior:
1. Change `otel-collector-config.yaml` in jaeger-hotrod example to use otlphttp exporter:
```
receivers:
  jaeger:
    protocols:
      grpc:
exporters:
  otlphttp:
    traces_endpoint: http://data-prepper:21890/v1/traces
    # compression: none
    tls:
      insecure: false
      insecure_skip_verify: true
  logging:
service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging, otlphttp]
```
and change the corresponding OTel trace source config in `trace_analytics_no_ssl_2x.yml` to:
```
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      path: "/v1/traces"
      ssl: false
      unframed_requests: true
      ... ...
```
2. Run the example
3. With the `compression: none` option (compression disabled), everything works as expected; without it, we can see error messages in the OTel collector like this:
```
2023-11-15T17:33:54.261Z info exporterhelper/queued_retry.go:426 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "traces", "name": "otlphttp", "error": "error exporting items, request to http://data-prepper:21890/v1/traces responded with HTTP Status Code 500", "interval": "42.666367768s"}
```
With the recent improvement in logging (#3658), we can now see the exception in Data Prepper logs as well:
```
2023-11-15T17:34:51,616 [armeria-common-worker-epoll-3-8] ERROR org.opensearch.dataprepper.GrpcRequestExceptionHandler - Unexpected exception handling gRPC request
io.grpc.StatusRuntimeException: INTERNAL: Invalid protobuf byte sequence
......
Caused by: com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: Protocol message tag had invalid wire type.
```
**Expected behavior**
OTel sources should work with compressed data from the OTel collector.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
This might be a regression. We had compression supported through https://github.com/opensearch-project/data-prepper/pull/2702
| [BUG] Sending traces with compression from OTel collector to OTel traces source in a pipeline results in 500 error | https://api.github.com/repos/opensearch-project/data-prepper/issues/3670/comments | 1 | 2023-11-15T17:44:36Z | 2023-11-15T18:12:45Z | https://github.com/opensearch-project/data-prepper/issues/3670 | 1,995,248,459 | 3,670 |
[
"opensearch-project",
"data-prepper"
] | On this line, I believe the intention is indeed to print the `exceptionFromRequest` passed to this LOG.warn; however, there is no second `{}` in the log string for it to be interpolated into. So instead, the underlying error is silently discarded, making it difficult for users to debug this step. Adding a second `{}` somewhere in the log message should fix the problem.
https://github.com/opensearch-project/data-prepper/blob/1ffb57240f80fc171ef30c64436d8e4c13a0fbfe/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java#L244C3-L244C3
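To illustrate why the argument vanishes: SLF4J-style interpolation consumes one argument per `{}` and ignores extras (real SLF4J additionally special-cases a trailing `Throwable`). A self-contained mimic of that behavior, not the real SLF4J formatter:

```java
public class PlaceholderDemo {
    /** Naive sketch of SLF4J-style interpolation: each "{}" consumes one argument. */
    static String format(String msg, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < msg.length()) {
            if (i + 1 < msg.length() && msg.charAt(i) == '{' && msg.charAt(i + 1) == '}'
                    && argIdx < args.length) {
                sb.append(args[argIdx++]);
                i += 2;
            } else {
                sb.append(msg.charAt(i++));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Exception e = new RuntimeException("connection reset");
        // one placeholder, two arguments: the exception is silently dropped
        System.out.println(format("Bulk Operation Failed. Number of retries {}. Retrying...", 5, e));
        // the fix: a second placeholder, so the exception is printed
        System.out.println(format("Bulk Operation Failed. Number of retries {}. Retrying... {}", 5, e));
    }
}
```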
Thank you! | Log detail is passed but not printed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3669/comments | 1 | 2023-11-15T16:47:18Z | 2023-11-15T16:53:52Z | https://github.com/opensearch-project/data-prepper/issues/3669 | 1,995,155,026 | 3,669 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The leader partition will time out when errors occur during initialization, such as permissions not being properly set or PITR not being enabled. The save-progress call is never invoked, hence the lease on the partition times out after 10 minutes by default, and another node can take the leader lease. So there will be more than 1 leader at a time.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a table without PITR enabled
2. Create a pipeline with multiple nodes with exports using DynamoDB source
3. Wait for at least 10 minutes.
4. Check the backend log, there will be more than 1 leader running.
**Expected behavior**
There should be only 1 leader at a time.
| [BUG] Leader partition time out due when exception occured in DynamoDB source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3665/comments | 0 | 2023-11-15T12:02:22Z | 2023-11-15T15:42:12Z | https://github.com/opensearch-project/data-prepper/issues/3665 | 1,994,644,548 | 3,665 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
If the source item contains a string with `'`, the final document in OpenSearch will have an empty body.
After debugging, an exception is thrown when converting the stream event to a JSON event using the Jackson library.
```
com.fasterxml.jackson.core.JsonParseException: Unrecognized character escape ''' (code 39)
at [Source: (String)"{"Content":"I\'m sorry, but I don\'t have access to that.","UserId":"1234","StartTime":"2023-10-13T08:00:29.818246","SessionId":"123"}"; line: 1, column: 16]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:2477)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:750)
at com.fasterxml.jackson.core.base.ParserBase._handleUnrecognizedCharacterEscape(ParserBase.java:1353)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._decodeEscaped(ReaderBasedJsonParser.java:2713)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._finishString2(ReaderBasedJsonParser.java:2233)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._finishString(ReaderBasedJsonParser.java:2206)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.getText(ReaderBasedJsonParser.java:323)
at com.fasterxml.jackson.databind.deser.std.UntypedObjectDeserializerNR.deserialize(UntypedObjectDeserializerNR.java:82)
at com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringKeyMap(MapDeserializer.java:623)
at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:449)
at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:32)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:323)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4825)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3772)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3755)
at org.opensearch.dataprepper.plugins.source.dynamodb.converter.StreamRecordConverter.convertData(StreamRecordConverter.java:107)
at org.opensearch.dataprepper.plugins.source.dynamodb.converter.StreamRecordConverter.writeToBuffer(StreamRecordConverter.java:74)
at org.opensearch.dataprepper.plugins.source.dynamodb.stream.ShardConsumer.run(ShardConsumer.java:247)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1736)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```
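The parse failure happens because `\'` is not a legal JSON escape (RFC 8259 only allows `\"`, `\\`, `\/`, `\b`, `\f`, `\n`, `\r`, `\t`, and `\uXXXX`), and a single quote needs no escaping in JSON at all. One possible pre-parse workaround, shown purely as an illustration and not as the project's actual fix:

```java
public class SingleQuoteEscapeFix {
    /**
     * Strip the invalid \' escape before handing the string to Jackson.
     * Note: a blind replace can mis-handle a legitimate escaped backslash
     * followed by a quote; a real fix belongs in the Ion-to-JSON conversion.
     */
    static String sanitize(String json) {
        return json.replace("\\'", "'");
    }

    public static void main(String[] args) {
        String broken = "{\"Content\":\"I\\'m sorry, but I don\\'t have access to that.\"}";
        System.out.println(sanitize(broken));
    }
}
```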
**To Reproduce**
Steps to reproduce the behavior:
For example, the content attribute contains `'`.
```
{
"SessionId": "123",
"UserId": "1234",
"Content": "I'm sorry, but I don't have access to that.",
"StartTime": "2023-10-13T08:00:29.818246"
}
```
And the doc in OpenSearch will be
```
{
"_index": "test-complex",
"_id": "123|1234",
"_score": 1,
"_source": {}
}
```
**Expected behavior**
The doc in OpenSearch should match the DynamoDB source.
**Screenshots**
N/A
**Additional context**
No issue in export.
| [BUG] Unrecognized character escape issue for stream data in DynamoDB source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3664/comments | 6 | 2023-11-15T07:09:51Z | 2023-11-16T01:03:05Z | https://github.com/opensearch-project/data-prepper/issues/3664 | 1,994,183,016 | 3,664 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the OpenSearch source, it is not clear to me how long the search requests are taking to be fulfilled by OpenSearch.
**Describe the solution you'd like**
A `searchLatency` metric that provides information on how long the actual search requests take. This metric may not be strictly necessary, as OpenSearch commonly provides a `searchLatency` metric of its own, but this one would identify the latency specifically coming from Data Prepper searches.
There is already an `indexProcessingTime` metric that tracks how long particular indices take to process, but not one that shows the search requests themselves to identify if OpenSearch is a bottleneck in the migration.
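To sketch what such a metric would measure, here is a plain-Java stand-in for a Micrometer `Timer` (not the plugin's actual code; in the plugin it would presumably be registered via `PluginMetrics`, and the metric name is the proposed one):

```java
import java.time.Duration;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class SearchLatencyRecorder {
    // minimal stand-in for a Micrometer Timer named "searchLatency"
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong count = new AtomicLong();

    <T> T record(Supplier<T> search) {
        long start = System.nanoTime();
        try {
            return search.get();  // the actual search request
        } finally {
            totalNanos.addAndGet(System.nanoTime() - start);
            count.incrementAndGet();
        }
    }

    Duration mean() {
        long n = count.get();
        return n == 0 ? Duration.ZERO : Duration.ofNanos(totalNanos.get() / n);
    }

    public static void main(String[] args) {
        SearchLatencyRecorder recorder = new SearchLatencyRecorder();
        int hits = recorder.record(() -> 3);  // stand-in for a search call
        System.out.println("hits=" + hits + " mean=" + recorder.mean());
    }
}
```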
| Emit a searchLatency metric on the OpenSearch source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3663/comments | 0 | 2023-11-15T05:22:45Z | 2023-11-15T05:25:51Z | https://github.com/opensearch-project/data-prepper/issues/3663 | 1,994,069,560 | 3,663 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The OpenSearch source currently supports an `index_name_regex` parameter, but users may be more familiar with, or prefer to pass, an index pattern such as `index-*`, which in regex is `index-.*`.
This would be achieved by adding an `index_pattern` parameter as an alternative to `index_name_regex`
**Describe alternatives you've considered (Optional)**
Replacing `*` with `.*` internally in `index_name_regex` to prevent misuse with the * pattern.
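The translation in the alternative is small, but it should escape regex metacharacters first so that a literal `.` in an index name is not treated as "any character". A sketch (illustrative, not the plugin's implementation):

```java
import java.util.regex.Pattern;

public class IndexPatternSupport {
    /** Convert an index pattern like "index-*" to an equivalent regex. */
    static String patternToRegex(String indexPattern) {
        // quote every literal segment, then splice ".*" in place of each "*"
        String[] literals = indexPattern.split("\\*", -1);
        StringBuilder regex = new StringBuilder();
        for (int i = 0; i < literals.length; i++) {
            if (i > 0) {
                regex.append(".*");
            }
            if (!literals[i].isEmpty()) {
                regex.append(Pattern.quote(literals[i]));
            }
        }
        return regex.toString();
    }

    public static void main(String[] args) {
        String regex = patternToRegex("index-*");
        System.out.println(regex);
        System.out.println("index-2023.11".matches(regex));
    }
}
```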
| Support index_pattern option in the OpenSearch source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3662/comments | 0 | 2023-11-15T05:19:33Z | 2023-11-15T05:20:10Z | https://github.com/opensearch-project/data-prepper/issues/3662 | 1,994,066,566 | 3,662 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I want to ensure that eventsRead and eventsWritten match, but if I'm using routes to filter events by not specifying a sink, I cannot tell what part of the delta is from my route-specific filtering vs. potential data loss from the writers.
**Describe the solution you'd like**
Emit a metric for events that are no-op'd from an empty route
**Describe alternatives you've considered (Optional)**
Don't emit eventsRead for these scenarios. This is less desirable because I want to match eventsRead to the number of writes/records in my source system.
Emit as eventsWritten. This solves the problem of "correctness", but could be confusing if the number of items in OpenSearch actually differs from this amount.
**Additional context**
N/A
| Count events that are read by not written because there's no sink defined for that route | https://api.github.com/repos/opensearch-project/data-prepper/issues/3659/comments | 1 | 2023-11-14T22:43:55Z | 2023-11-21T20:39:35Z | https://github.com/opensearch-project/data-prepper/issues/3659 | 1,993,700,806 | 3,659 |
[
"opensearch-project",
"data-prepper"
] | Coming from #3620, include metadata about encryption in the Kafka buffer message
```
message BufferedData {
/* The format of the message as it was written.
*/
MessageFormat message_format = 1;
/* The actual data. This is encrypted if key_id is present. Otherwise, it
* is unencrypted data.
*/
bytes data = 2;
/* Indicates if data is encrypted or not.
*/
optional bool encrypted = 3;
/* The data key which encrypted the data field. This will be encrypted.
* The consuming Data Prepper node must have the ability to decrypt this key.
*/
optional bytes encrypted_data_key = 4;
}
``` | Include encrypted data key in Kafka buffer message. | https://api.github.com/repos/opensearch-project/data-prepper/issues/3655/comments | 1 | 2023-11-14T20:55:51Z | 2024-02-10T16:24:04Z | https://github.com/opensearch-project/data-prepper/issues/3655 | 1,993,556,711 | 3,655 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When the call to extend the visibility timeout fails, no exception is thrown and nothing is logged. This is because we don't `get` the future for the callback at any point to surface the ExecutionException.
A side effect of this is that the expiry metric for the ack set is not published even though the visibility timeout has expired.
Callback invocations:
https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-core/src/main/java/org/opensearch/dataprepper/acknowledgements/DefaultAcknowledgementSet.java#L147
https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-core/src/main/java/org/opensearch/dataprepper/acknowledgements/DefaultAcknowledgementSet.java#L174
**To Reproduce**
Create an S3-SQS source pipeline with extend visibility timeout enabled. Don't provide the ChangeMessageVisibility permission to the role used for the source.
**Expected behavior**
Failures should be surfaced in the logs and potentially via metrics
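A minimal, self-contained sketch of the failure mode and one way to surface it (illustrative only; the real callbacks live in `DefaultAcknowledgementSet`):

```java
import java.util.concurrent.CompletableFuture;

public class CallbackExceptionDemo {
    public static void main(String[] args) {
        // stands in for the scheduled visibility-extension call failing,
        // e.g. because sqs:ChangeMessageVisibility is denied to the role
        CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
            throw new RuntimeException("sqs:ChangeMessageVisibility denied");
        });

        // without future.get() (or a completion handler) the exception above
        // is swallowed; attaching whenComplete surfaces it for logs/metrics
        future.whenComplete((result, throwable) -> {
            if (throwable != null) {
                System.err.println("visibility extension failed: "
                        + throwable.getCause().getMessage());
            }
        });

        future.exceptionally(t -> null).join();
    }
}
```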
| [BUG] ProgressCheck callback silently fails on exception | https://api.github.com/repos/opensearch-project/data-prepper/issues/3653/comments | 2 | 2023-11-14T19:29:41Z | 2023-11-15T17:47:26Z | https://github.com/opensearch-project/data-prepper/issues/3653 | 1,993,423,464 | 3,653 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The Data Prepper `opensearch` sink can write empty DLQ objects to S3 in certain cases.
From what I can tell, this mostly occurs with high-level errors from the `_bulk` request which do not produce per-item failures in the underlying `items`.
**To Reproduce**
Steps to reproduce the behavior:
Configure an index that you do not have permission to write to.
Run Data Prepper and provide input.
I see the following errors:
```
2023-11-13T18:09:23,902 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - Bulk Operation Failed. Number of retries 5. Retrying...
2023-11-13T18:09:23,906 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation = Index, error = OpenSearch exception [type=authorization_exception, reason=User does not have permissions for the requested resource]
2023-11-13T18:09:23,909 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation = Index, error = OpenSearch exception [type=authorization_exception, reason=User does not have permissions for the requested resource]
2023-11-13T18:09:50,577 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - Bulk Operation Failed. Number of retries 10. Retrying...
2023-11-13T18:09:50,579 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation = Index, error = OpenSearch exception [type=authorization_exception, reason=User does not have permissions for the requested resource]
2023-11-13T18:09:50,579 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation = Index, error = OpenSearch exception [type=authorization_exception, reason=User does not have permissions for the requested resource]
```
I also got 8 S3 objects in the DLQ. Each looked like:
```
{"dlqObjects":[]}
```
**Expected behavior**
The DLQ should include one item per failed document. It should include the high-level error instead of the individual item error.
**Environment (please complete the following information):**
Data Prepper `main` working toward 2.6.0.
**Additional context**
N/A
| [BUG] Data Prepper is writting empty DLQ objects | https://api.github.com/repos/opensearch-project/data-prepper/issues/3644/comments | 1 | 2023-11-13T18:37:20Z | 2023-11-15T00:58:57Z | https://github.com/opensearch-project/data-prepper/issues/3644 | 1,991,233,884 | 3,644 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Currently, in the DynamoDB source, the DynamoDB partition key and sort key are stored in event metadata, which can then be used as the doc id in OpenSearch.
However, for events that come from an export data file, the metadata will be incorrect if the key is in numeric format.
For example, suppose I have a table with partition key `pk` and sort key `sk`, and an item like below:
```
{
"pk": 39370,
"sk": 25130455
}
```
If we set up to use `document_id: "${getMetadata(\"primary_key\")}"` in the configuration, the result in OpenSearch will be something like:
```
{
"_index": "...",
"_id": "3.937E+4|25130455",
"_score": 1,
"_source": {
"pk": 39370,
"sk": 25130455
}
}
```
The pk in the doc id is in E notation (`3.937E+4|25130455`). The impact is that, if any change to this item happens, it will create a new doc with id `39370|25130455`.
**To Reproduce**
Steps to reproduce the behavior:
Create a table with a numeric partition key or sort key, run the pipeline with doc id mapping, and check the output in OpenSearch.
If you want to verify in code, here is the Ion output for that item: `" $ion_1_0 {Item:{pk:3937d1,sk:25130455.}}"`
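The E notation comes from `BigDecimal.toString()`, which switches to scientific notation when the scale is negative (as it is for the Ion value `3937d1`); `toPlainString()` avoids it. A quick check:

```java
import java.math.BigDecimal;

public class PlainStringDemo {
    public static void main(String[] args) {
        BigDecimal pk = new BigDecimal("3.937E+4");
        System.out.println(pk.toString());       // scientific notation: 3.937E+4
        System.out.println(pk.toPlainString());  // plain notation: 39370
    }
}
```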
**Expected behavior**
The primary key in metadata should contain the correct numeric format (without E notation) for exports.
**Additional context**
No such issue in streaming.
| [BUG] The primary key in metadata for export data is incorrect for DynamoDB source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3642/comments | 0 | 2023-11-13T08:06:13Z | 2023-11-14T16:35:11Z | https://github.com/opensearch-project/data-prepper/issues/3642 | 1,990,086,810 | 3,642 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We have a requirement to produce reports showing the time difference between a file being actioned upon (generally, uploaded) in S3 and the time that we process it in Data Prepper.
This page outlines the data provided to "SQS" https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-content-structure.html
From that, my understanding at present is that only `Records.s3.bucket.name` and `Records.s3.object.key` are used and exported to the event.
Ideally we would want the entire S3 data event message to be available as metadata "attached" to the record. This would allow us to extend our requirement to say, for example: this file was uploaded by this "person/process/thing" at time y, we "received" it at time x, and the event had an original timestamp of z.
So if this event ended up in Opensearch we may have an extra set of data along the lines of
```
s3: {
eventTime: <time_object_created> (Records.eventTime)
eventName: <put_object> (Records.eventName)
bucketName: <bucket_name> (Records.s3.bucket.name)
object: <object> (Records.s3.object.key)
}
internal: {
original_time: <timestamp_of_log_line> (Origin time written by App)
s3_time: <s3.eventTime> (read from S3 Data event Notification message)
dp_time: <time_ingest_by_dataprepper> (added by dataprepper)
os_time: <time_ingest_into_opensearch> (via a ingest pipeline or similar)
}
```
With the above we would be able to work out total latency from event emitted to event ingested into OpenSearch in this example. If this time were prolonged, we would hopefully be able to determine where the latency was introduced. This would help us measure our SLAs/SLOs accurately.
Please note I may have mixed terms etc., but hopefully I have got the gist across of what we are looking for and why.
**Describe the solution you'd like**
We would like the S3 data event notification message to be attached as "metadata" to the records processed from an S3-sourced file.
**Describe alternatives you've considered (Optional)**
We would need to build some solution to read our final output, search the S3 bucket, get the object time, and then update the record in the final output.
**Additional context**
Off the back of this very brief discussion here https://github.com/opensearch-project/data-prepper/discussions/3626
| Add S3 SQS Data Event Notification message as metadata to records | https://api.github.com/repos/opensearch-project/data-prepper/issues/3641/comments | 1 | 2023-11-12T20:21:10Z | 2023-11-14T20:49:21Z | https://github.com/opensearch-project/data-prepper/issues/3641 | 1,989,555,574 | 3,641 |
[
"opensearch-project",
"data-prepper"
] | Hi. I am testing Data Prepper's new OpenSearch source functionality with this config:
```
opensearch-to-s3:
  delay: "10"
  workers: 4
  source:
    opensearch:
      hosts: [ "https://my-aws-opensearch-2.7:443" ]
      username: "***"
      password: "***"
      indices:
        include:
          - index_name_regex: "myindex"
      scheduling:
        interval: "PT1H"
        index_read_count: 5
      search_options:
        search_context_type: "point_in_time"
        batch_size: 10000
  buffer:
    bounded_blocking:
      buffer_size: 12800
      batch_size: 200
  sink:
    - stdout:
```
I use AWS managed OpenSearch 2.7,
and I face this error: `ElasticsearchVersionInfo.buildFlavor`
```
dataprepper-data-prepper-1 | 2023-11-08T02:42:28,268 [opensearch-to-s3-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [opensearch-to-s3] process worker encountered a fatal exception, cannot proceed further
dataprepper-data-prepper-1 | java.util.concurrent.ExecutionException: java.lang.RuntimeException: Unable to call info API using the elasticsearch client
dataprepper-data-prepper-1 | at java.util.concurrent.FutureTask.report(Unknown Source) ~[?:?]
dataprepper-data-prepper-1 | at java.util.concurrent.FutureTask.get(Unknown Source) ~[?:?]
dataprepper-data-prepper-1 | at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) [data-prepper-core-2.5.0.jar:?]
dataprepper-data-prepper-1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
dataprepper-data-prepper-1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
dataprepper-data-prepper-1 | at java.lang.Thread.run(Unknown Source) [?:?]
dataprepper-data-prepper-1 | Caused by: java.lang.RuntimeException: Unable to call info API using the elasticsearch client
dataprepper-data-prepper-1 | at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getDistributionAndVersionNumber(SearchAccessorStrategy.java:181) ~[opensearch-source-2.5.0.jar:?]
dataprepper-data-prepper-1 | at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getSearchAccessor(SearchAccessorStrategy.java:97) ~[opensearch-source-2.5.0.jar:?]
dataprepper-data-prepper-1 | at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.startProcess(OpenSearchSource.java:69) ~[opensearch-source-2.5.0.jar:?]
dataprepper-data-prepper-1 | at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.start(OpenSearchSource.java:58) ~[opensearch-source-2.5.0.jar:?]
dataprepper-data-prepper-1 | at org.opensearch.dataprepper.pipeline.Pipeline.startSourceAndProcessors(Pipeline.java:210) ~[data-prepper-core-2.5.0.jar:?]
dataprepper-data-prepper-1 | at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:251) ~[data-prepper-core-2.5.0.jar:?]
dataprepper-data-prepper-1 | at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
dataprepper-data-prepper-1 | at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
dataprepper-data-prepper-1 | ... 3 more
dataprepper-data-prepper-1 | Caused by: co.elastic.clients.util.MissingRequiredPropertyException: Missing required property 'ElasticsearchVersionInfo.buildFlavor'
dataprepper-data-prepper-1 | at co.elastic.clients.util.ApiTypeHelper.requireNonNull(ApiTypeHelper.java:76) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo.<init>(ElasticsearchVersionInfo.java:74) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo.<init>(ElasticsearchVersionInfo.java:50) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo$Builder.build(ElasticsearchVersionInfo.java:300) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo$Builder.build(ElasticsearchVersionInfo.java:200) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.ObjectBuilderDeserializer.deserialize(ObjectBuilderDeserializer.java:80) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.DelegatingDeserializer$SameType.deserialize(DelegatingDeserializer.java:43) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.ObjectDeserializer$FieldObjectDeserializer.deserialize(ObjectDeserializer.java:72) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.ObjectDeserializer.deserialize(ObjectDeserializer.java:176) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.ObjectDeserializer.deserialize(ObjectDeserializer.java:137) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.JsonpDeserializer.deserialize(JsonpDeserializer.java:75) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.ObjectBuilderDeserializer.deserialize(ObjectBuilderDeserializer.java:79) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.json.DelegatingDeserializer$SameType.deserialize(DelegatingDeserializer.java:43) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.transport.rest_client.RestClientTransport.decodeResponse(RestClientTransport.java:328) ~[elasticsearch-java-7.17.0.jar:?]
dataprepper-data-prepper-1 | at co.elastic.clients.transport.rest_client.RestClientTransport.getHighLevelResponse(RestClientTransport.java:294) ~[elasticsearch-java-7.17.0.jar:?]
```
It's AWS managed OpenSearch 2.7 (`override_main_response_version: true`).
Why does it try to use elasticsearch-java-7.17.0 instead of
opensearch-java-2.5.0 ? | [BUG] AWS openseach source error: ElasticsearchVersionInfo.buildFlavor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3640/comments | 4 | 2023-11-12T19:53:04Z | 2024-06-21T18:48:22Z | https://github.com/opensearch-project/data-prepper/issues/3640 | 1,989,543,870 | 3,640 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
There is a problem with the DynamoDB source metadata. We are currently using `dynamodb_item_version` as the key for the metadata field holding a nanosecond-precision version of the record timestamp, the equivalent of the `dynamodb_timestamp` field, which currently has only second precision. This is a potential naming conflict if DynamoDB ever supports an item version concept.
There is a similar field, `opensearch_action`, which is a remapping of the `dynamodb_event_name` field to the recommended action on OpenSearch. However, `opensearch_version` also collides with the version of the OpenSearch engine, which customers might confuse it with. Just plain `action` could be confusing.
**Describe the solution you'd like**
#### Option 1: `@<setting>` convention (Recommended)
Use the `@<setting>` convention for naming generated/remapped metadata fields. This would look like this for DynamoDB+OpenSearch:
```yaml
action: "${getMetadata('@action')}"
document_version: "${getMetadata('@document_version')}"
```
This has the benefit of not needing to "read the docs" to understand the recommended defaults. The default is always the name of the setting with an `@` prefix. The `@` also always signals it is an artificial field (as in `@rtificial`).
However, this could set expectations we use `@<setting>` for every setting, such as `@document_version_type`, which is not currently the plan. I believe this is acceptable because, if we do get that feedback, it is probably fine to do that for most settings (e.g. `@document_version_type` would always just be `external`).
**Describe alternatives you've considered (Optional)**
#### Option 2: `<setting>` convention
Use the `<setting>` convention for naming remapped metadata fields. This would look like this for DynamoDB+OpenSearch:
```yaml
action: "${getMetadata('action')}"
document_version: "${getMetadata('document_version')}"
```
This has the benefit of not needing to "read the docs" to understand the recommended defaults.
However, it can open the door to naming collisions where the convention couldn't be followed. For example, for a potential integration with DocumentDB, `document_version` may be a field we'd get from DocumentDB that needs to be remapped to be appropriate for OpenSearch's `document_version`. `@document_version` could still cause some confusion in this case, but the `@` makes it clear which one is autogenerated versus which comes from the source.
#### Option 3: Only rename `dynamodb_item_version` to `document_version`
Rename `dynamodb_item_version` to `document_version` for now. For DynamoDB+OpenSearch, this would look like:
```yaml
action: "${getMetadata('opensearch_action')}"
document_version: "${getMetadata('document_version')}"
```
This solves the bare minimum requirement of avoiding name conflict with a possible future DynamoDB feature.
However, it is less clear which prefix to use when and whether the field is autogenerated or not.
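For context, here is how these metadata keys are typically wired into a full `opensearch` sink today, before any rename; the hosts and index name below are placeholders:

```yaml
sink:
  - opensearch:
      hosts: ["https://search-domain.example.com"]
      index: "ddb-table-index"
      action: "${getMetadata('opensearch_action')}"
      document_version: "${getMetadata('dynamodb_item_version')}"
      document_version_type: "external"
```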
**Additional context**
- [Metadata key names](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/dynamodb-source/src/main/java/org/opensearch/dataprepper/plugins/source/dynamodb/converter/MetadataKeyAttributes.java#L17)
| Alternative metadata naming for version | https://api.github.com/repos/opensearch-project/data-prepper/issues/3630/comments | 2 | 2023-11-10T21:35:16Z | 2023-11-20T19:24:25Z | https://github.com/opensearch-project/data-prepper/issues/3630 | 1,988,435,279 | 3,630 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, if the DynamoDB source role does not have the `dynamodb:DescribeTable` permission, the pipeline will shut down during initialization. We can catch this as a retryable error, as we did for the OpenSearch sink, to allow the user to correct the permissions on the role.
The permission error is like this:
```
[dynamodb-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [dynamodb-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: software.amazon.awssdk.services.dynamodb.model.DynamoDbException: User: arn:aws:sts::123456789012:assumed-role/pipelineRole/Data-Prepper-0233fade-6af1-4da2-a36b-923d9802c8cb is not authorized to perform: dynamodb:DescribeTable on resource: arn:aws:dynamodb:us-east-1:123456789012:table/test-source-table because no identity-based policy allows the dynamodb:DescribeTable action (Service: DynamoDb, Status Code: 400, Request ID: 839GJP2AL1PSOC8225BKVCULM3VV4KQNSO5AEMVJF66Q9ASUAAJG)
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1129) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: software.amazon.awssdk.services.dynamodb.model.DynamoDbException: User: arn:aws:sts::193777858833:assumed-role/pipelineRole/Data-Prepper-0233fade-6af1-4da2-a36b-923d9802c8cb is not authorized to perform: dynamodb:DescribeTable on resource: arn:aws:dynamodb:us-east-1:193777858833:table/gameday-test-source-table because no identity-based policy allows the dynamodb:DescribeTable action (Service: DynamoDb, Status Code: 400, Request ID: 839GJP2AL1PSOC8225BKVCULM3VV4KQNSO5AEMVJF66Q9ASUAAJG)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:125) ~[sdk-core-2.20.67.jar:?]
...
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56) ~[aws-core-2.20.67.jar:?]
at software.amazon.awssdk.services.dynamodb.DefaultDynamoDbClient.describeTable(DefaultDynamoDbClient.java:2344) ~[dynamodb-2.20.67.jar:?]
at org.opensearch.dataprepper.plugins.source.dynamodb.DynamoDBService.getTableInfo(DynamoDBService.java:271) ~[dynamodb-source-2.6.0-SNAPSHOT.jar:?]
...
```
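The retry-with-backoff behavior this would need could be sketched as follows. The class and method names are illustrative, not Data Prepper's actual code, and a real implementation would retry only on specific SDK exceptions (such as the access-denied error shown above) rather than on everything:

```java
import java.util.function.Supplier;

// Illustrative retry helper with exponential backoff.
public class RetrySketch {
    public static <T> T withRetries(final Supplier<T> call, final int maxAttempts,
                                    final long baseDelayMillis) throws InterruptedException {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (final RuntimeException e) {
                lastFailure = e;
                if (attempt < maxAttempts) {
                    // Exponential backoff: base, 2*base, 4*base, ...
                    Thread.sleep(baseDelayMillis << (attempt - 1));
                }
            }
        }
        throw lastFailure;
    }

    public static void main(final String[] args) throws InterruptedException {
        final int[] calls = {0};
        // Simulates DescribeTable failing twice before the role's permissions are corrected.
        final String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("not authorized to perform: dynamodb:DescribeTable");
            }
            return "table-info";
        }, 5, 1L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```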
| Retry when DynamoDB source fails to initialize due to permission issue | https://api.github.com/repos/opensearch-project/data-prepper/issues/3623/comments | 4 | 2023-11-09T23:35:42Z | 2023-11-13T19:29:25Z | https://github.com/opensearch-project/data-prepper/issues/3623 | 1,986,580,950 | 3,623 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Right now it is not easy to debug 500 errors in any OTel source. If there is an exception that is not an instance of `RequestTimeoutException`, `TimeoutException`, `SizeOverflowException`, `BadRequestException`, or `RequestCancelledException`, we increment the `internalServerError` metric. With only this metric and no logging, it is not possible to debug the failures.
https://github.com/opensearch-project/data-prepper/blob/259fea1ba1ecd0a223499e7c9932457425f08079/data-prepper-plugins/armeria-common/src/main/java/org/opensearch/dataprepper/GrpcRequestExceptionHandler.java#L65
We also return a description with the status, but it looks like `e.getMessage()` can be null.
https://github.com/opensearch-project/data-prepper/blob/259fea1ba1ecd0a223499e7c9932457425f08079/data-prepper-plugins/armeria-common/src/main/java/org/opensearch/dataprepper/GrpcRequestExceptionHandler.java#L74
**Describe the solution you'd like**
Add a log statement when incrementing the `internalServerError` metric.
The log should include the `getCause` of the throwable if `getMessage` of the exception is null.
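A small helper along these lines (illustrative only, not the actual Data Prepper handler code) could produce a usable log message even when `getMessage` is null:

```java
// Hypothetical helper for building a log-friendly description of a throwable.
public class ThrowableMessages {
    // Returns the most descriptive message available: the throwable's own message,
    // else its cause's message, else class names as a last resort.
    public static String describe(final Throwable t) {
        if (t.getMessage() != null) {
            return t.getMessage();
        }
        final Throwable cause = t.getCause();
        if (cause != null && cause.getMessage() != null) {
            return cause.getMessage();
        }
        return cause != null
                ? t.getClass().getName() + " caused by " + cause.getClass().getName()
                : t.getClass().getName();
    }

    public static void main(final String[] args) {
        // A gRPC-layer exception often has no message of its own, only a cause.
        final Throwable e = new RuntimeException((String) null, new IllegalStateException("stream reset"));
        System.out.println(describe(e));
    }
}
```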
| Improve GRPC request exception logging | https://api.github.com/repos/opensearch-project/data-prepper/issues/3621/comments | 1 | 2023-11-09T18:56:36Z | 2023-11-15T01:26:19Z | https://github.com/opensearch-project/data-prepper/issues/3621 | 1,986,223,398 | 3,621 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `kafka` buffer supports data encryption with envelope encryption. In this situation, Data Prepper should be able to write the event data alongside the encrypted data key.
Additionally, the `kafka` buffer only supports input from `byte[]`. With additional metadata we could track whether it was serialized using bytes or as JSON from the `Event` model.
**Describe the solution you'd like**
Update the `kafka` buffer to write and read using an internal binary protocol. This can include metadata that may not always be serialized along with an event. This would include the encrypted data key.
```
enum MessageFormat {
MESSAGE_FORMAT_UNSPECIFIED = 0;
MESSAGE_FORMAT_BYTES = 1;
MESSAGE_FORMAT_JSON = 2;
}
message BufferedData {
/* The format of the message as it was written.
*/
MessageFormat message_format = 1;
/* The actual data. This is encrypted if encrypted_data_key is present.
 * Otherwise, it is unencrypted data.
 */
bytes data = 2;
/* Indicates if data is encrypted or not.
 */
optional bool encrypted = 3;
/* The data key which encrypted the data field. This will be encrypted.
* The consuming Data Prepper node must have the ability to decrypt this key.
*/
optional bytes encrypted_data_key = 4;
}
```
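The envelope flow that the `encrypted_data_key` field implies can be sketched as follows. This is a self-contained illustration, not Data Prepper code: in the real buffer the data key would be wrapped and unwrapped by AWS KMS, while here a local AES key stands in for the KEK so the example runs:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Minimal envelope-encryption sketch mirroring the proto fields above.
public class EnvelopeSketch {
    private static final SecureRandom RANDOM = new SecureRandom();

    private static byte[] aesGcm(final int mode, final byte[] keyBytes, final byte[] iv, final byte[] input) {
        try {
            final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(mode, new SecretKeySpec(keyBytes, "AES"), new GCMParameterSpec(128, iv));
            return cipher.doFinal(input);
        } catch (final GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Returns {iv, data (encrypted), encrypted_data_key}.
    public static byte[][] encrypt(final byte[] data, final byte[] kek) {
        try {
            final KeyGenerator generator = KeyGenerator.getInstance("AES");
            generator.init(256);
            final byte[] dataKey = generator.generateKey().getEncoded();
            final byte[] iv = new byte[12];
            RANDOM.nextBytes(iv);
            return new byte[][] {
                iv,
                aesGcm(Cipher.ENCRYPT_MODE, dataKey, iv, data),
                aesGcm(Cipher.ENCRYPT_MODE, kek, iv, dataKey) // stand-in for kms:Encrypt
            };
        } catch (final GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // The consuming node unwraps the data key first, then decrypts the payload.
    public static byte[] decrypt(final byte[][] envelope, final byte[] kek) {
        final byte[] dataKey = aesGcm(Cipher.DECRYPT_MODE, kek, envelope[0], envelope[2]);
        return aesGcm(Cipher.DECRYPT_MODE, dataKey, envelope[0], envelope[1]);
    }

    public static void main(final String[] args) {
        final byte[] kek = new byte[32];
        RANDOM.nextBytes(kek);
        final byte[] message = "{\"message\":\"hello\"}".getBytes();
        System.out.println(Arrays.equals(message, decrypt(encrypt(message, kek), kek)));
    }
}
```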
**Describe alternatives you've considered (Optional)**
I chose Protobuf for the data format in this design.
There are possible alternatives.
Avro is compact like Protobuf. In order to support change in the format over time, each Avro record would need to have a schema id attached to it. This would be done using a schema registry. Using Avro would thus require a schema registry, which adds complexity to the overall architecture.
Protobuf supports change to the schema over time using field numbers. This will be embedded in the binary data and within Data Prepper.
I think Protobuf is preferable to other binary formats such as Thrift because Data Prepper is already making significant use of Protobuf for OTel data.
Non-binary formats are not considered because they would require base64 encoding the binary data that is embedded.
| Write Kafka buffer data in a consistent format with metadata | https://api.github.com/repos/opensearch-project/data-prepper/issues/3620/comments | 1 | 2023-11-09T18:55:16Z | 2023-11-14T20:56:32Z | https://github.com/opensearch-project/data-prepper/issues/3620 | 1,986,221,601 | 3,620 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Race condition in DefaultEventHandle causing crash with the following stack trace
```
java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 0
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.FutureHelper.awaitFuturesIndefinitely(FutureHelper.java:29) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.postToSink(ProcessWorker.java:158) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:139) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 0
at java.base/java.util.ArrayList.add(ArrayList.java:487) ~[?:?]
at java.base/java.util.ArrayList.add(ArrayList.java:499) ~[?:?]
at org.opensearch.dataprepper.model.event.DefaultEventHandle.onRelease(DefaultEventHandle.java:70) ~[data-prepper-api-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.updateLatencyMetrics(AbstractSink.java:88) ~[data-prepper-api-2.6.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:348) ~[data-prepper-core-2.6.0-SNAPSHOT.jar:?]
... 5 more
```
**To Reproduce**
Steps to reproduce the behavior:
I think it can happen with any image with the latest code, especially when there are multiple sinks
**Expected behavior**
No crash
| [BUG] Race condition in DefaultEventHandle | https://api.github.com/repos/opensearch-project/data-prepper/issues/3617/comments | 0 | 2023-11-09T16:52:03Z | 2023-11-09T18:30:17Z | https://github.com/opensearch-project/data-prepper/issues/3617 | 1,986,033,125 | 3,617 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `kafka` buffer is currently subject to the heap circuit breaker on receipt of data. Since this buffer writes directly to a Kafka topic, and thus leaves the JVM heap, it shouldn't have the heap circuit-breaker impact it.
**Describe the solution you'd like**
Provide a new function on `Buffer` to indicate if the buffer uses off-heap data.
```
boolean isWrittenOffHeapOnly()
```
When this is `true`, then do not apply the `CircuitBreaker`. This initial logic may not be ideal long-term as we could possibly support other types of circuit-breakers (say based on local disk-space, a CPU-based threshold, or the current pipeline latency). However, this will work for an initial implementation.
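The proposed check could be sketched as follows; the class names are illustrative and do not match Data Prepper's actual implementation, but they show an off-heap buffer opting out of the heap circuit breaker via `isWrittenOffHeapOnly()`:

```java
import java.util.function.BooleanSupplier;

public class CircuitBreakerSketch {
    public interface Buffer {
        void write(String record);
        // Buffers that leave the JVM heap (e.g. Kafka) return true.
        default boolean isWrittenOffHeapOnly() { return false; }
    }

    public static final class CircuitBreakingBuffer implements Buffer {
        private final Buffer delegate;
        private final BooleanSupplier heapBreakerOpen;

        public CircuitBreakingBuffer(final Buffer delegate, final BooleanSupplier heapBreakerOpen) {
            this.delegate = delegate;
            this.heapBreakerOpen = heapBreakerOpen;
        }

        @Override
        public void write(final String record) {
            // Only on-heap buffers are subject to the heap circuit breaker.
            if (!delegate.isWrittenOffHeapOnly() && heapBreakerOpen.getAsBoolean()) {
                throw new IllegalStateException("Heap circuit breaker is open");
            }
            delegate.write(record);
        }

        @Override
        public boolean isWrittenOffHeapOnly() {
            return delegate.isWrittenOffHeapOnly();
        }
    }

    public static void main(final String[] args) {
        final Buffer kafkaLike = new Buffer() { // data leaves the heap immediately
            @Override public void write(final String record) { }
            @Override public boolean isWrittenOffHeapOnly() { return true; }
        };
        // Succeeds even though the heap breaker reports open.
        new CircuitBreakingBuffer(kafkaLike, () -> true).write("event");
        System.out.println("kafka-like write accepted");
    }
}
```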
**Additional context**
The `kafka` buffer already uses the circuit-breaker when reading from the topic to place into the processor chains - #3578.
| Allow the Kafka buffer (and others that do not require the heap) to bypass the heap circuit breaker | https://api.github.com/repos/opensearch-project/data-prepper/issues/3616/comments | 0 | 2023-11-09T15:10:46Z | 2023-11-10T16:45:07Z | https://github.com/opensearch-project/data-prepper/issues/3616 | 1,985,831,153 | 3,616 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
DataPrepper crashes when following [quickstart instructions](https://opensearch.org/docs/latest/data-prepper/getting-started/)
The stack trace is quite long, but the relevant part is (not terribly decipherable, but anyway)...
> Reading pipelines and data-prepper configuration files from Data Prepper home directory.
/opt/java/openjdk/bin/java
Found openjdk version of 17.0
2023-11-08T19:02:11,203 [main] INFO org.opensearch.dataprepper.DataPrepperArgumentConfiguration - Command line args: /usr/share/data-prepper/pipelines,/usr/share/data-prepper/config/data-prepper-config.yaml
2023-11-08T19:02:11,440 [main] INFO org.opensearch.dataprepper.parser.PipelinesDataflowModelParser - Reading pipeline configuration from pipelines.yaml
2023-11-08T19:02:11,782 [main] WARN org.springframework.context.support.AbstractApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dataPrepper' defined in URL [jar:file:/usr/share/data-prepper/lib/data-prepper-core-2.5.0.jar!/org/opensearch/dataprepper/DataPrepper.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'pipelineParser' defined in org.opensearch.dataprepper.parser.config.PipelineParserConfiguration: Unsatisfied dependency expressed through method 'pipelineParser' parameter 1; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'extensionsApplier' defined in URL [jar:file:/usr/share/data-prepper/lib/data-prepper-core-2.5.0.jar!/org/opensearch/dataprepper/plugin/ExtensionsApplier.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'extensionLoader' defined in URL [jar:file:/usr/share/data-prepper/lib/data-prepper-core-2.5.0.jar!/org/opensearch/dataprepper/plugin/ExtensionLoader.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'extensionPluginConfigurationConverter' defined in URL [jar:file:/usr/share/data-prepper/lib/data-prepper-core-2.5.0.jar!/org/opensearch/dataprepper/plugin/ExtensionPluginConfigurationConverter.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 
'extensionPluginConfigurationResolver' defined in URL [jar:file:/usr/share/data-prepper/lib/data-prepper-core-2.5.0.jar!/org/opensearch/dataprepper/plugin/ExtensionPluginConfigurationResolver.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataPrepperConfiguration' defined in org.opensearch.dataprepper.parser.config.DataPrepperAppConfiguration: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.opensearch.dataprepper.parser.model.DataPrepperConfiguration]: Factory method 'dataPrepperConfiguration' threw exception; nested exception is java.lang.IllegalArgumentException: Invalid DataPrepper configuration file.
**To Reproduce**
Follow the instructions: https://opensearch.org/docs/latest/data-prepper/getting-started/
**pipelines.yaml**
```
simple-sample-pipeline:
workers: 2
delay: "5000"
source:
random:
sink:
- stdout:
```
**data-prepper-config.yaml**
This is empty since it doesn't seem to require anything by default
```
```
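As additional context: replacing the empty file with any valid setting, for example the one below, appears to avoid the "Invalid DataPrepper configuration file" error. This suggests an empty YAML file deserializes to null and fails validation, though that is an assumption based on the error message:

```yaml
ssl: false
```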
**docker command**
```bash
docker run --name data-prepper -p 4900:4900 -v ${PWD}/pipelines.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml -v ${PWD}/data-prepper-config.yaml:/usr/share/data-prepper/config/data-prepper-config.yaml opensearchproject/data-prepper:latest
```
**Expected behavior**
Should have the output of UUIDs as described. | [BUG] DataPrepper crashes when following quickstart instructions | https://api.github.com/repos/opensearch-project/data-prepper/issues/3612/comments | 2 | 2023-11-08T19:25:12Z | 2024-06-19T12:24:02Z | https://github.com/opensearch-project/data-prepper/issues/3612 | 1,984,255,921 | 3,612 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The ${pipelineName} variable does not get replaced in this field:
```
sink:
- opensearch:
...
dlq:
s3:
bucket: "our_dlq_bucket"
key_path_prefix: "${pipelineName}/logs/dlq"
```
**To Reproduce**
Steps to reproduce the behavior:
Create a pipeline with a DLQ for the sink and use the ${pipelineName} variable to construct an object key prefix.
Object keys will contain "${pipelineName}" instead of the pipeline name.
**Expected behavior**
Object key should contain the actual pipeline name
**Screenshots**
n/a
**Environment (please complete the following information):**
n/a
**Additional context**
This is trivial to hard-code the pipeline name in each pipeline definition, but most users will expect this to work like it does for the ingestion path: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/creating-pipeline.html#pipeline-path
| [BUG] In OpenSearch pipelines, ${pipelineName} variable does not get replaced in s3 key_path_prefix | https://api.github.com/repos/opensearch-project/data-prepper/issues/3609/comments | 1 | 2023-11-08T18:21:07Z | 2023-11-14T20:33:50Z | https://github.com/opensearch-project/data-prepper/issues/3609 | 1,984,166,117 | 3,609 |
[
"opensearch-project",
"data-prepper"
] | There are projects which we are not supporting yet. We need to remove them in the `settings.gradle` in the `2.6` branch once created. | Remove all unnecessary projects in the 2.6 branch | https://api.github.com/repos/opensearch-project/data-prepper/issues/3605/comments | 1 | 2023-11-08T15:43:36Z | 2023-11-27T23:54:49Z | https://github.com/opensearch-project/data-prepper/issues/3605 | 1,983,897,290 | 3,605 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I want to test pushing more data through Data Prepper. I'd like to make the `random` source support a configurable wait time.
**Describe the solution you'd like**
Add a new `wait_delay` field to the `random` source that takes a duration value.
```
simple-test-pipeline:
workers: 2
delay: "5000"
source:
random:
wait_delay: 1ms
sink:
- stdout:
```
| Configure the delay in the random string source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3601/comments | 0 | 2023-11-08T14:44:05Z | 2023-11-14T22:53:31Z | https://github.com/opensearch-project/data-prepper/issues/3601 | 1,983,773,052 | 3,601 |
[
"opensearch-project",
"data-prepper"
Vector search is a feature of OpenSearch that is gaining prominence with the recent emergence of generative AI and other ML use cases. Currently, users of OpenSearch either use external embedding methods or use OpenSearch ingest pipelines to generate text embeddings. The processors currently present in the [ingest pipelines](https://opensearch.org/docs/latest/ingest-pipelines/index/) include Append, Bytes, Convert, CSV, Date, IP2Geo, Lowercase, and Text embedding. However, using these processors, especially text embedding, causes additional CPU usage on OpenSearch. Data Prepper helps with preparing and ingesting data into OpenSearch. Having a text embedding processor in Data Prepper would help reduce this CPU need on OpenSearch and can help with the emerging use cases for OpenSearch.
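A hypothetical pipeline configuration for such a processor might look like the following; the processor name and every option shown are illustrative only, since nothing here exists in Data Prepper yet:

```yaml
processor:
  - text_embedding:                   # hypothetical processor name
      source_key: "message"           # field containing the text to embed
      target_key: "message_embedding" # field to hold the resulting vector
      model_id: "all-MiniLM-L6-v2"    # illustrative embedding model identifier
```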
| Vector embedding processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3597/comments | 13 | 2023-11-07T20:59:04Z | 2023-12-12T20:24:25Z | https://github.com/opensearch-project/data-prepper/issues/3597 | 1,982,217,619 | 3,597 |
[
"opensearch-project",
"data-prepper"
] | The GitHub Actions tests often fail from flaky tests. This makes checking PRs difficult. Also, some true failures have made it through by ignoring the build failures.
I'm creating this issue to track the flaky tests. When you see a test fail, paste the failure as a comment.
### Tasks
- [x] #3470
- [ ] #1374 | [BUG] Data Prepper flaky tests | https://api.github.com/repos/opensearch-project/data-prepper/issues/3589/comments | 2 | 2023-11-04T16:48:32Z | 2023-11-09T21:03:16Z | https://github.com/opensearch-project/data-prepper/issues/3589 | 1,977,425,449 | 3,589 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-4043 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parsson-1.1.2.jar</b></p></summary>
<p>Jakarta JSON Processing provider</p>
<p>Library home page: <a href="https://github.com/eclipse-ee4j/parsson">https://github.com/eclipse-ee4j/parsson</a></p>
<p>Path to dependency file: /data-prepper-plugins/otel-trace-group-processor/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.parsson/parsson/1.1.2/f446d3339aa16da8b8891aa11544bc1291a6232e/parsson-1.1.2.jar</p>
<p>
Dependency Hierarchy:
- opensearch-java-2.5.0.jar (Root Library)
- :x: **parsson-1.1.2.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Parsson before versions 1.1.4 and 1.0.5, Parsing JSON from untrusted sources can lead malicious actors to exploit the fact that the built-in support for parsing numbers with large scale in Java has a number of edge cases where the input text of a number can lead to much larger processing time than one would expect.
To mitigate the risk, parsson put in place a size limit for the numbers as well as their scale.
<p>Publish Date: 2023-11-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-4043>CVE-2023-4043</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://gitlab.eclipse.org/security/vulnerability-reports/-/issues/13">https://gitlab.eclipse.org/security/vulnerability-reports/-/issues/13</a></p>
<p>Release Date: 2023-11-03</p>
<p>Fix Resolution (org.eclipse.parsson:parsson): 1.1.4</p>
<p>Direct dependency fix Resolution (org.opensearch.client:opensearch-java): 2.7.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2023-4043 (High) detected in parsson-1.1.2.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3588/comments | 0 | 2023-11-04T06:14:26Z | 2023-11-27T16:45:39Z | https://github.com/opensearch-project/data-prepper/issues/3588 | 1,977,192,450 | 3,588 |
[
"opensearch-project",
"data-prepper"
] | Since the certainty of an Anomaly Detection model increases as more samples are available, it would be good if all the information that the Anomaly Detector plugin has calculated since it was started would be persisted on disk (or other mechanism) between service restarts, because that would improve model fidelity over time.
It would also be good to be able to configure a limit of historical information to be saved, per processor.
I think Data Prepper Anomaly Detector Processor is a big differential compared to other similar tools, it is an amazing feature
| Add persistence to Data Prepper Anomaly Detector Processor between service restarts | https://api.github.com/repos/opensearch-project/data-prepper/issues/3582/comments | 0 | 2023-11-02T19:55:50Z | 2023-11-21T01:58:22Z | https://github.com/opensearch-project/data-prepper/issues/3582 | 1,974,943,265 | 3,582 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When using the DynamoDB source with export and stream, after the export completed, I received the following error continuously:
```
ERROR org.opensearch.dataprepper.sourcecoordination.enhanced.EnhancedLeaseBasedSourceCoordinator - Global state arn:aws:dynamodb:us-west-2:123456789012:table/SomeTable/stream/2023-11-01T21:46:45.879 is not found.
```
The only global items for this pipeline in the coordination store were
```
arn:aws:dynamodb:us-west-2:123456789012:table/SomeTable/export/01698897694319-d7dc03b8
```
and
```
arn:aws:dynamodb:us-west-2:123456789012:table/SomeTable
```
and there was no item for streams.
The export still went through successfully and then shards started. This may be expected behavior, and the log should just not be an ERROR log.
**To Reproduce**
Not sure how to reproduce
| [BUG] DynamoDb source global state not found for export | https://api.github.com/repos/opensearch-project/data-prepper/issues/3579/comments | 0 | 2023-11-02T04:15:03Z | 2023-12-12T16:13:19Z | https://github.com/opensearch-project/data-prepper/issues/3579 | 1,973,472,341 | 3,579 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The Kafka buffer reads from Kafka and then writes to an internal buffer. The Kafka buffer creates this itself by calling `new`, which bypasses the circuit breaker. This means that the Kafka reads could be fast enough to cause an out-of-memory situation.
**Describe the solution you'd like**
Use the circuit breaker while reading from Kafka in the Kafka buffer.
**Additional context**
Original feature for circuit breakers: #2150
| Support the circuit breaker when reading from Kafka as a buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/3578/comments | 0 | 2023-11-01T18:28:01Z | 2023-11-28T14:24:20Z | https://github.com/opensearch-project/data-prepper/issues/3578 | 1,972,882,648 | 3,578 |
[
"opensearch-project",
"data-prepper"
] | Provide configurations in the `opensearch` sink which can configure an Amazon OpenSearch Serverless network policy. | Create or update Amazon OpenSearch Serverless network policy | https://api.github.com/repos/opensearch-project/data-prepper/issues/3577/comments | 1 | 2023-11-01T14:53:46Z | 2023-11-01T14:54:53Z | https://github.com/opensearch-project/data-prepper/issues/3577 | 1,972,535,827 | 3,577 |
[
"opensearch-project",
"data-prepper"
] | null | Update release workflow to use end-to-end tests with pre-released Docker image | https://api.github.com/repos/opensearch-project/data-prepper/issues/3568/comments | 0 | 2023-10-31T16:28:41Z | 2023-10-31T20:02:32Z | https://github.com/opensearch-project/data-prepper/issues/3568 | 1,970,847,551 | 3,568 |
[
"opensearch-project",
"data-prepper"
] | null | Make end-to-end tests configurable such that they can run from configurable Docker images | https://api.github.com/repos/opensearch-project/data-prepper/issues/3567/comments | 0 | 2023-10-31T16:28:37Z | 2025-06-04T22:08:30Z | https://github.com/opensearch-project/data-prepper/issues/3567 | 1,970,847,442 | 3,567 |
[
"opensearch-project",
"data-prepper"
] | The current end-to-end tests build custom Docker images and run using those. Update these tests to use the release Docker image. It provides a few benefits:
* These tests run on PRs and can thus help ensure that our release Docker image remains stable.
* Helps consolidate our end-to-end tests with our smoke tests.
* Fewer Docker images to maintain. | Update end-to-end tests to run from the released Docker image | https://api.github.com/repos/opensearch-project/data-prepper/issues/3566/comments | 0 | 2023-10-31T16:03:38Z | 2023-11-03T18:26:55Z | https://github.com/opensearch-project/data-prepper/issues/3566 | 1,970,801,340 | 3,566 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
DynamoDB allows customers to choose their own customer managed key (CMK) when exporting data; the key is used to encrypt the exported data files.
Currently, there is no option for customers to provide a CMK, so exports default to S3-SSE encryption, which may not align with customers' security policies.
**Describe the solution you'd like**
The ask is to add CMK support, the proposed solution is to add an extra option under export.
```
- table_arn: "arn:aws:dynamodb:us-west-2:xxx:table/xxx-table"
export:
s3_bucket: "xxx-bucket"
s3_prefix: "export/"
s3_sse_kms_key_id: "key_id"
```
**Describe alternatives you've considered (Optional)**
n/a
**Additional context**
Additional permissions will be required for the role to read and decrypt the data files via the CMK.
| Add CMK support to DynamoDB source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3564/comments | 1 | 2023-10-30T23:01:11Z | 2023-11-07T22:26:26Z | https://github.com/opensearch-project/data-prepper/issues/3564 | 1,969,356,593 | 3,564 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the OpenSearch sink, I would like to conditionally update documents by comparing the existing document in OpenSearch with the updated document.
**Describe the solution you'd like**
An option to specify a `script` parameter that will apply to update and upsert operation in the sink (https://opensearch.org/docs/latest/api-reference/document-apis/update-document/#script-example).
This requires that the incoming update documents contain the fields that are referenced in the `script`.
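For illustration, the request body such an option might produce, using the real Update API `script` and `upsert` fields but a hypothetical `version` field as the condition:

```python
import json

def conditional_update_body(script_source, new_doc):
    """Build an OpenSearch Update API body that applies `script_source`,
    exposing the incoming document's fields as script params."""
    return {
        "script": {
            "lang": "painless",
            "source": script_source,
            "params": new_doc,
        },
        # fall back to indexing the document if it does not exist yet
        "upsert": new_doc,
    }

body = conditional_update_body(
    "if (ctx._source.version < params.version) { ctx._source.putAll(params) }",
    {"version": 7, "status": "shipped"},
)
print(json.dumps(body))
```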
**Describe alternatives you've considered (Optional)**
One alternative to having OpenSearch handle this would be to have Data Prepper handle it. That would mean an extra read of each document with an `update` or `upsert` operation, relying on a Data Prepper expression as the condition instead of the `script`. The benefit of this approach would be that we could support comparisons on Data Prepper metadata, instead of requiring that the data be in the Event.
| Support conditional script update of documents in the OpenSearch sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3563/comments | 0 | 2023-10-30T21:16:32Z | 2025-06-10T23:36:57Z | https://github.com/opensearch-project/data-prepper/issues/3563 | 1,969,240,338 | 3,563 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Logical types are used to extend the types that Parquet can store by specifying how the primitive types should be interpreted. This keeps the set of primitive types to a minimum and reuses Parquet's efficient encodings. Currently, logical types are ignored during the ingestion process.
**Describe the solution you'd like**
A configuration option to allow OSI to ingest data with logical types.
**Describe alternatives you've considered (Optional)**
Right now, I have to create another field so OSI can ingest it with the correct logical type (date).
| Allow Option to ingest file with Parquet Logical types | https://api.github.com/repos/opensearch-project/data-prepper/issues/3562/comments | 2 | 2023-10-30T14:41:35Z | 2023-11-01T13:28:26Z | https://github.com/opensearch-project/data-prepper/issues/3562 | 1,968,540,728 | 3,562 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the OpenSearch sink, I would like to roll over my indices with custom time durations, rather than just those provided by the `JavaDateTime` format (https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html). For example, I would like to either roll over indices every 2 days, or every 6 hours, instead of being forced to use granularity of 1 day or 1 hour only.
**Describe the solution you'd like**
The ability to create dynamic indices and roll them over with specific time patterns, without being restricted to a granularity of exactly one hour or one day.
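One way to implement such buckets is to floor the event timestamp to a window of arbitrary width since the epoch. A sketch (the suffix format is illustrative):

```python
from datetime import datetime, timedelta, timezone

def rollover_suffix(ts, bucket):
    """Floor `ts` to the start of its `bucket`-sized window since the epoch,
    e.g. bucket=timedelta(hours=6) yields four index windows per day."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    elapsed = ts - epoch
    # integer number of whole buckets elapsed, then back to a timestamp
    floored = epoch + timedelta(seconds=(elapsed // bucket) * bucket.total_seconds())
    return floored.strftime("%Y-%m-%d-%H")

ts = datetime(2023, 10, 30, 14, 22, tzinfo=timezone.utc)
print(rollover_suffix(ts, timedelta(hours=6)))  # → 2023-10-30-12
print(rollover_suffix(ts, timedelta(days=2)))
```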
| Support dynamic indices with more granular time-based partitions | https://api.github.com/repos/opensearch-project/data-prepper/issues/3558/comments | 1 | 2023-10-30T00:56:24Z | 2023-11-07T20:32:31Z | https://github.com/opensearch-project/data-prepper/issues/3558 | 1,967,269,056 | 3,558 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
DynamoDB and OpenSearch mappings have the potential for conflicts. One example of this being hit is the `Number` type in DynamoDB, which supports both float and long values.
With dynamic mappings in OpenSearch, the first document that gets created will determine the mappings. Subsequent documents with different types for the same field will result in mapping failures.
For example, the DynamoDB `Number` type can represent `float`, `long`, etc., while OpenSearch has explicit types for float and long.
Testing earlier with a field containing a mix of `float` and `long` values resulted in this exception when sending to a serverless collection:
```
2023-10-27T15:58:48,210 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document failed to write to OpenSearch with error code 400. Configure a DLQ to save failed documents. Error: mapper [cs_ship_customer_sk] cannot be changed from type [long] to [float]
2023-10-27T15:58:48,210 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document failed to write to OpenSearch with error code 400. Configure a DLQ to save failed documents. Error: mapper [cs_ship_customer_sk] cannot be changed from type [long] to [float]
2023-10-27T15:58:48,210 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document failed to write to OpenSearch with error code 400. Configure a DLQ to save failed documents. Error: mapper [cs_net_paid_inc_ship] cannot be changed from type [float] to [long]
2023-10-27T15:58:48,210 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document failed to write to OpenSearch with error code 400. Configure a DLQ to save failed documents. Error: mapper [cs_quantity] cannot be changed from type [long] to [float]
```
We need to define an index mapping template that can handle all types of DynamoDB types (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html) to OpenSearch types (https://opensearch.org/docs/2.4/opensearch/supported-field-types/numeric/)
**Expected behavior**
Provide a guide for creating index mappings for ddb to opensearch. We can supply this as `template_content`.
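As a hedged starting point for such a guide, a `template_content` mappings fragment could use a dynamic template so that dynamically-detected integers are stored as `double`, avoiding long/float conflicts for DynamoDB `Number` fields (the template name is illustrative):

```json
{
  "template": {
    "mappings": {
      "dynamic_templates": [
        {
          "ddb_numbers_as_double": {
            "match_mapping_type": "long",
            "mapping": { "type": "double" }
          }
        }
      ]
    }
  }
}
```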
| [BUG] DynamoDB source has potential for invalid mappings to OpenSearch | https://api.github.com/repos/opensearch-project/data-prepper/issues/3557/comments | 1 | 2023-10-27T21:52:33Z | 2024-01-05T21:12:47Z | https://github.com/opensearch-project/data-prepper/issues/3557 | 1,966,233,482 | 3,557 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When running a Data Prepper pipeline without any Kafka Connect plugins, I see logs for the Kafka extension.
```
2023-10-27T14:44:40.828 [main] INFO org.opensearch.dataprepper.plugins.kafka.extension.KafkaClusterConfigExtension - Applying Kafka Cluster Config Extension.
2023-10-27T14:44:40.829 [main] INFO org.opensearch.dataprepper.plugins.kafkaconnect.extension.KafkaConnectConfigExtension - Applying Kafka Connect Config Extension.
```
**To Reproduce**
Run a pipeline.
**Expected behavior**
If the Kafka extension is not needed, then don't load it. This should happen for all extensions.
| [BUG] Kafka Connect extension loaded even if not needed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3555/comments | 1 | 2023-10-27T18:26:29Z | 2024-04-02T19:53:22Z | https://github.com/opensearch-project/data-prepper/issues/3555 | 1,966,003,190 | 3,555 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-46136 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Werkzeug-2.2.3-py3-none-any.whl</b></p></summary>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl">https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Werkzeug-2.2.3-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Werkzeug is a comprehensive WSGI web application library. If an upload of a file that starts with CR or LF and then is followed by megabytes of data without these characters: all of these bytes are appended chunk by chunk into internal bytearray and lookup for boundary is performed on growing buffer. This allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it. The amount of CPU time required can block worker processes from handling legitimate requests. This vulnerability has been patched in version 3.0.1.
<p>Publish Date: 2023-10-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-46136>CVE-2023-46136</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-hrfv-mqp8-q5rw">https://github.com/pallets/werkzeug/security/advisories/GHSA-hrfv-mqp8-q5rw</a></p>
<p>Release Date: 2023-10-25</p>
<p>Fix Resolution: 3.0.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2023-46136 (High) detected in Werkzeug-2.2.3-py3-none-any.whl | https://api.github.com/repos/opensearch-project/data-prepper/issues/3552/comments | 0 | 2023-10-26T12:17:53Z | 2023-11-27T16:52:32Z | https://github.com/opensearch-project/data-prepper/issues/3552 | 1,963,434,537 | 3,552 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-46122 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>io_2.13-1.9.1.jar</b></p></summary>
<p>IO module for sbt</p>
<p>Library home page: <a href="https://github.com/sbt/io">https://github.com/sbt/io</a></p>
<p>Path to dependency file: /performance-test/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.scala-sbt/io_2.13/1.9.1/ea166891cd1713dd95289fbfb791e60a5decaf3c/io_2.13-1.9.1.jar</p>
<p>
Dependency Hierarchy:
- zinc_2.13-1.9.3.jar (Root Library)
- zinc-compile-core_2.13-1.9.3.jar
- :x: **io_2.13-1.9.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
sbt is a build tool for Scala, Java, and others. Given a specially crafted zip or JAR file, `IO.unzip` allows writing of arbitrary file. This would have potential to overwrite `/root/.ssh/authorized_keys`. Within sbt's main code, `IO.unzip` is used in `pullRemoteCache` task and `Resolvers.remote`; however many projects use `IO.unzip(...)` directly to implement custom tasks. This vulnerability has been patched in version 1.9.7.
<p>Publish Date: 2023-10-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-46122>CVE-2023-46122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-46122">https://www.cve.org/CVERecord?id=CVE-2023-46122</a></p>
<p>Release Date: 2023-10-23</p>
<p>Fix Resolution: org.scala-sbt:io:1.9.7</p>
</p>
</details>
<p></p>
| CVE-2023-46122 (High) detected in io_2.13-1.9.1.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3547/comments | 1 | 2023-10-25T00:16:02Z | 2023-11-27T17:04:53Z | https://github.com/opensearch-project/data-prepper/issues/3547 | 1,960,285,158 | 3,547 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The switch to the Docker base image "eclipse-temurin:17-jre-jammy" introduced in #3355 adds Berkeley DB to the image (libdb5.3_5.3.28+dfsg1-0.8ubuntu3_amd64.deb). Berkeley DB is dual-licensed, under either an Oracle commercial license or the GNU AGPL v3 [1]. Both license types are problematic for distribution and SaaS deployments.
**To Reproduce**
Either use a package scanning tool, e.g. Mend, or ssh into a running container and use the package tool.
**Environment (please complete the following information):**
- Version 2.5.0
**Additional context**
The change was introduced by #3355.
[1] https://en.wikipedia.org/wiki/Berkeley_DB#Licensing | [BUG] Docker image jre-jammy contains Berkeley DB | https://api.github.com/repos/opensearch-project/data-prepper/issues/3543/comments | 2 | 2023-10-24T07:24:03Z | 2023-11-21T17:17:06Z | https://github.com/opensearch-project/data-prepper/issues/3543 | 1,958,665,626 | 3,543 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the `s3` and `opensearch` sources have an internal acknowledgment timeout of 1 or 2 hours. This is because they are mainly used for historical migrations of data, which are less time sensitive. However, since DynamoDB streams is a streaming use case where order, as well as any delay, really matters, it is not as simple to provide a similarly high default timeout.
**Describe the solution you'd like**
The simple approach would be to just make the timeout user-configurable with an `acknowledgment_timeout` parameter. However, this is not ideal on its own, since the right value can be hard to gauge for streams (especially with OpenSearch backpressure), and it would still require a fairly large default to make sure that timeouts don't occur right before the data reaches OpenSearch. We need a way to lower the `acknowledgment_timeout` so that streams recover quickly, but to do that we need to lower the amount of data per acknowledgment set: a shard could take a long time to process, which means a longer time to detect a failure and time out (and we want to save progress instead of restarting the entire shard when only its last part has an acknowledgment timeout).
To handle these complications, I would propose that we chunk the `AcknowledgmentSet`s for the DynamoDB source to a general amount of data (in bytes). We can do this by grouping by sequence numbers within a DDB stream shard. For example, with a configurable
```
acknowledgment_checkpoint_size: "10mb"
acknowledgment_timeout: "30s"
```
we keep reading from the shard until we hit a sequence number that reaches (or passes) this point; we don't care if it's a little overestimated. When we receive an acknowledgment for one of these chunks (we should receive them in order), we can extend the partition ownership timeout for that partition by a small amount, potentially as low as 30 seconds, or however long the `acknowledgment_checkpoint_size` takes to travel through the pipeline (#3494 could be used to tune these values). It is wise for us to make this configurable as well, since there is always sink backpressure. For a well-scaled sink, low values of `acknowledgment_timeout` may be possible, which would allow nodes to pick up on crashes very quickly and continue processing the shard.
When an acknowledgment callback is received, we not only extend the ownership timeout by `acknowledgment_timeout`, we also update the partition progress state with the last sequence number for which we received an ack callback. This will allow us to pick up right where we left off in the case of failure, instead of having to restart the processing of the entire shard.
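The chunking described above can be sketched as follows (sequence numbers and sizes are illustrative):

```python
def checkpoint_chunks(records, checkpoint_size_bytes):
    """Group an ordered stream of (sequence_number, size_bytes) records into
    acknowledgment sets of roughly `checkpoint_size_bytes` each. Returns a list
    of (last_sequence_number, total_bytes) checkpoints; slight overshoot is fine."""
    chunks = []
    current_seq, current_bytes = None, 0
    for seq, size in records:
        current_seq = seq
        current_bytes += size
        if current_bytes >= checkpoint_size_bytes:
            chunks.append((current_seq, current_bytes))
            current_bytes = 0
    if current_bytes > 0:  # flush the partial tail chunk
        chunks.append((current_seq, current_bytes))
    return chunks

records = [("s1", 4_000_000), ("s2", 4_000_000), ("s3", 4_000_000), ("s4", 1_000_000)]
print(checkpoint_chunks(records, 10_000_000))
```

Each returned sequence number is the point to store in the partition progress state once its acknowledgment callback arrives.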
| End to end acknowledgments for Dynamo source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3538/comments | 1 | 2023-10-21T00:04:34Z | 2023-11-02T16:14:44Z | https://github.com/opensearch-project/data-prepper/issues/3538 | 1,955,147,306 | 3,538 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the date processor, I am unable to control the format of the string produced when my dates are converted. The result is always the full ISO 8601 timestamp string (`2023-07-23T15:28:15.081-05:00`).
**Describe the solution you'd like**
An optional parameter in the `date` processor named `destination_format`. Like the OpenSearch index format, it can be a https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html pattern, and it will take the ISO 8601 timestamp string generated by the date processor and reformat it according to the pattern specified.
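Conceptually, the option reformats the already-parsed timestamp. A sketch using Python's `strftime` as a stand-in for `DateTimeFormatter` patterns:

```python
from datetime import datetime

def apply_destination_format(iso_timestamp, destination_format):
    """Reformat the ISO 8601 string produced by the date processor
    using a destination pattern."""
    return datetime.fromisoformat(iso_timestamp).strftime(destination_format)

print(apply_destination_format("2023-07-23T15:28:15.081-05:00", "%Y-%m-%d %H:%M"))  # → 2023-07-23 15:28
```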
| Support a destination_format option in the date processor to format result with DateTimeFormatter | https://api.github.com/repos/opensearch-project/data-prepper/issues/3537/comments | 0 | 2023-10-20T23:02:34Z | 2023-10-20T23:42:14Z | https://github.com/opensearch-project/data-prepper/issues/3537 | 1,955,105,921 | 3,537 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We deprecated the `Record` class, but have not provided any alternatives to users. Thus, developers ignore deprecation warnings, which is probably not what we want.
Also, might `Record` be useful for binary data?
**Describe the solution you'd like**
For now, remove the `@Deprecated` annotation from `Record`. We can circle back to this when we have an interest in consolidating entirely on `Event`.
**Describe alternatives you've considered (Optional)**
Keep this `@Deprecated`. But this is confusing to new developers.
**Additional context**
#944
| Remove the @Deprecated from Record | https://api.github.com/repos/opensearch-project/data-prepper/issues/3536/comments | 0 | 2023-10-20T21:25:02Z | 2023-10-23T21:52:28Z | https://github.com/opensearch-project/data-prepper/issues/3536 | 1,955,030,346 | 3,536 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It seems that during the ingestion stage, OSI filters based on certain criteria like `start_time` and `prefix`. We would like to see those in the logging so we can understand which criteria are applied during the ingestion process.
https://github.com/opensearch-project/data-prepper/blob/c63dea39545dc408d0f7f173cfbd98a6ffab0969/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/s3/S3ScanPartitionCreationSupplier.java#L108-L117
**Describe the solution you'd like**
Output a log line explaining the filtering criteria used.
| Improve the logging in the OSI filtering stage | https://api.github.com/repos/opensearch-project/data-prepper/issues/3531/comments | 3 | 2023-10-19T14:43:43Z | 2024-03-28T17:08:50Z | https://github.com/opensearch-project/data-prepper/issues/3531 | 1,952,436,560 | 3,531 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
NullPointerException in the DefaultKafkaClusterConfigSupplier get API
**To Reproduce**
Start a pipeline with the following YAML file; it crashes because `kafkaClusterConfig` is null:
```
kafka-pipeline:
source:
kafka:
bootstrap_servers: ["localhost:9092"]
acknowledgments: true
encryption:
type: none
topics:
- name: "TestTopic"
group_id: "TestGroup"
sink:
- stdout:
```
**Expected behavior**
The pipeline should start successfully.
| [BUG] NullPointer exception in DefaultKafkaClusterConfigSupplier get API | https://api.github.com/repos/opensearch-project/data-prepper/issues/3528/comments | 0 | 2023-10-18T23:16:17Z | 2023-10-23T21:50:23Z | https://github.com/opensearch-project/data-prepper/issues/3528 | 1,950,859,645 | 3,528 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Having the ability to do complex schema transformations in DataPrepper is an important component of log observability and index management. For example, if an organization wants to standardize on the SS4O schema without rewriting logging in all their apps, today they would need to insert a middleware between the log source and the DataPrepper processors to perform the transformation. This feels less than ideal; in a perfect world, DataPrepper could provide mechanisms to do schema transformation and enrichment.
**Describe the solution you'd like**
Configurations to support schema re-mapping. For example, a processor config could specify the explicit remapping logic:
```
remap:
# Accessing key/value pairs in a map object by key name, moving the keys
- source: pets/<name>
target: $name
# Simple move from key to key
- source: event/context/region
target: aws/region
# Explode a list into many documents (maybe required as a separate pipeline)
- source: event/messages
operation: explode
on_duplicate_keys: merge
...
```
Or possibly a UDF?
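To make the `remap` intent concrete, a minimal sketch of the simple move operation above (these semantics are proposed, not existing Data Prepper behavior):

```python
def get_path(doc, path):
    """Walk a slash-delimited path down to the value it addresses."""
    for key in path.split("/"):
        doc = doc[key]
    return doc

def move(doc, source, target):
    """Move the value at `source` to `target`, creating intermediate
    objects and deleting the original key."""
    value = get_path(doc, source)
    # delete from the old location
    parts = source.split("/")
    parent = get_path(doc, "/".join(parts[:-1])) if len(parts) > 1 else doc
    del parent[parts[-1]]
    # write to the new location
    tparts = target.split("/")
    node = doc
    for key in tparts[:-1]:
        node = node.setdefault(key, {})
    node[tparts[-1]] = value
    return doc

doc = {"event": {"context": {"region": "us-west-2"}}}
print(move(doc, "event/context/region", "aws/region"))
```

Operations like `explode` would need pipeline-level support, since they turn one event into many.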
**Describe alternatives you've considered (Optional)**
I've considered using add_entries, copy_values, delete_entries, etc. but they're too simple for the types of transformations customers need. If a customer wanted to map e.g. raw CloudTrail logs to SS4O for use with OpenSearch Integrations, it would be nearly impossible to use the existing transformation logic.
| Improved schema transformation utilities in DatPrepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/3526/comments | 1 | 2023-10-18T16:01:40Z | 2023-10-19T21:52:19Z | https://github.com/opensearch-project/data-prepper/issues/3526 | 1,950,102,303 | 3,526 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper currently has both end-to-end tests and a set of smoke tests. The end-to-end tests cover more cases than the smoke tests.
The primary advantage of the smoke tests is that they use the actual Docker image or tar.gz. We can do the same thing with the end-to-end tests. Then we'd only have one set of tests to maintain.
**Describe the solution you'd like**
Update the end-to-end tests to build their Docker image from the actual release Docker image instead of making a custom Docker image.
Additionally, support building the end-to-end tests using the tar.gz releases.
**Describe alternatives you've considered (Optional)**
Continue to have these two tests split. But, this ends up requiring more maintenance to ensure that both tests run.
**Additional context**
The smoke tests are not running per #2579. As I looked into these failures, I realized that the end-to-end tests are nearly what we need for smoke testing.
#### Tasks
- [x] #3566
- [x] #3567
- [ ] #3568
| Use end-to-end tests for smoke tests | https://api.github.com/repos/opensearch-project/data-prepper/issues/3524/comments | 0 | 2023-10-18T14:56:21Z | 2025-06-04T22:08:31Z | https://github.com/opensearch-project/data-prepper/issues/3524 | 1,949,960,235 | 3,524 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-5072 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-20230618.jar</b></p></summary>
<p>JSON is a light-weight, language independent, data interchange format.
See http://www.JSON.org/
The files in this package implement JSON encoders/decoders in Java.
It also includes the capability to convert between JSON and XML, HTTP
headers, Cookies, and CDL.
This is a reference implementation. There is a large number of JSON packages
in Java. Perhaps someday the Java community will standardize on one. Until
then, choose carefully.</p>
<p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20230618/1ae16df7d556d02713e241086f878399e99260d6/json-20230618.jar</p>
<p>
Dependency Hierarchy:
- anomaly-detector-processor-2.6.0-SNAPSHOT (Root Library)
- :x: **json-20230618.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/5b822f31bcf20d963c76d9b2319604252b9fa5d1">5b822f31bcf20d963c76d9b2319604252b9fa5d1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Denial of Service in JSON-Java versions up to and including 20230618. A bug in the parser means that an input string of modest size can lead to indefinite amounts of memory being used.
<p>Publish Date: 2023-10-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-5072>CVE-2023-5072</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rm7j-f5g5-27vv">https://github.com/advisories/GHSA-rm7j-f5g5-27vv</a></p>
<p>Release Date: 2023-10-12</p>
<p>Fix Resolution: org.json:json:20231013</p>
</p>
</details>
<p></p>
| CVE-2023-5072 (High) detected in json-20230618.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3522/comments | 1 | 2023-10-18T13:32:30Z | 2023-11-27T17:04:50Z | https://github.com/opensearch-project/data-prepper/issues/3522 | 1,949,769,867 | 3,522 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
When using the Kafka buffer, we may wish to have different AWS credentials for KMS than for the actual Amazon MSK configuration provided in the root of the configuration.
**Describe the solution you'd like**
Add new configurations for the AWS credentials:
```
buffer:
kafka:
topics:
- name: MyTopic
encryption_key: AQIDAHhBQ4iH7RP28kWDRU1yN2K73qYEE2d8i06EBly7HoDSIwFXoO+oiW+HOlam8lfIUFwLAAAAfjB8BgkqhkiG9w0BBwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM/j9Uf9cxYv/poV0FAgEQgDuVG9jfls3Ys7dR/cRKmdkcYDJw/XzR/ZEnZwcT9e+XB1T+SxC0YHLtc33lRwoD/UV0Ot+y8oUBqMvaXg==
kms:
key_id: alias/ExampleAlias
sts_role_arn: 'arn:aws:iam::123456789012:role/KmsAccessRole'
region: us-east-2
```
If these are not provided, use the configurations from the `kafka` buffer's `aws` configuration. | Support KMS-specific AWS configurations in the Kafka buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/3516/comments | 0 | 2023-10-17T16:50:17Z | 2023-11-28T14:24:24Z | https://github.com/opensearch-project/data-prepper/issues/3516 | 1,947,850,537 | 3,516 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When using a pointer and a destination together with the `parse_json` processor, the output data in the destination contains an incorrect nested key.
**To Reproduce**
Steps to reproduce the behavior:
pipelines.yaml:
```
parse-json-pipeline:
source:
http:
sink:
- stdout:
processor:
- parse_json:
source: "message"
destination: "target"
pointer: "outer_key/inner_key"
```
Input:
```
JSON='[{"message": "{\"outer_key\": {\"inner_key\": \"1220\"}}"}]'
curl -k -XPOST -H "Content-Type: application/json" -d "${JSON}" http://localhost:2021/log/ingest
```
Output:
```
{"message":"{\"outer_key\": {\"inner_key\": \"1220\"}}","target":{"inner_key":"1220"}}
```
**Expected behavior**
Expected / desired output:
```
{"message":"{\"outer_key\": {\"inner_key\": \"1220\"}}","target":"1220"}
```
**Environment (please complete the following information):**
opensearchproject/data-prepper:latest
tag:2c474f8f05fb
data-prepper 2.5
**Additional context**
When doing complex conversions (e.g. input JSON to SS4O or OTEL schema), users need the ability to perform standard JSON re-mapping from a source schema into a destination schema. This currently isn't supported by Data Prepper.
| [BUG] parse_json processor doesn't handle pointers with destinations | https://api.github.com/repos/opensearch-project/data-prepper/issues/3515/comments | 2 | 2023-10-17T16:36:23Z | 2024-05-16T15:48:16Z | https://github.com/opensearch-project/data-prepper/issues/3515 | 1,947,825,888 | 3,515 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Trying to use a regex in a Data Prepper expression with a `$` in the Regex is causing this error:
```
2023-10-17T16:24:04,064 [simple-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.pipeline.router.RouteEventEvaluator - Failed to evaluate route. This route will not be applied to any events.
org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "/app =~ "-service$""
at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:41) ~[data-prepper-expression-2.5.0.jar:?]
at org.opensearch.dataprepper.expression.ExpressionEvaluator.evaluateConditional(ExpressionEvaluator.java:28) ~[data-prepper-api-2.5.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.RouteEventEvaluator.findMatchedRoutes(RouteEventEvaluator.java:64) [data-prepper-core-2.5.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.RouteEventEvaluator.evaluateEventRoutes(RouteEventEvaluator.java:45) [data-prepper-core-2.5.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.Router.route(Router.java:39) [data-prepper-core-2.5.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.publishToSinks(Pipeline.java:335) [data-prepper-core-2.5.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.postToSink(ProcessWorker.java:151) [data-prepper-core-2.5.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:133) [data-prepper-core-2.5.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:60) [data-prepper-core-2.5.0.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:?]
at java.util.concurrent.FutureTask.run(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
at java.lang.Thread.run(Unknown Source) [?:?]
Caused by: org.opensearch.dataprepper.expression.ParseTreeCompositeException
at org.opensearch.dataprepper.expression.ParseTreeParser.createParseTree(ParseTreeParser.java:78) ~[data-prepper-expression-2.5.0.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeParser.parse(ParseTreeParser.java:101) ~[data-prepper-expression-2.5.0.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeParser.parse(ParseTreeParser.java:27) ~[data-prepper-expression-2.5.0.jar:?]
at org.opensearch.dataprepper.expression.MultiThreadParser.parse(MultiThreadParser.java:35) ~[data-prepper-expression-2.5.0.jar:?]
at org.opensearch.dataprepper.expression.MultiThreadParser.parse(MultiThreadParser.java:20) ~[data-prepper-expression-2.5.0.jar:?]
at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:37) ~[data-prepper-expression-2.5.0.jar:?]
... 13 more
Caused by: org.opensearch.dataprepper.expression.ExceptionOverview: Multiple exceptions (2)
|-- org.antlr.v4.runtime.InputMismatchException: null
at org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270)
|-- org.antlr.v4.runtime.LexerNoViableAltException: null
at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309)
```
In this case the condition was:
```
app-logs: "/app =~ \"-service$\""
```
This is similar to the example in the documentation:
https://github.com/opensearch-project/data-prepper/blob/main/docs/expression_syntax.md#reference-table
**To Reproduce**
Steps to reproduce the behavior:
Create a pipeline with a configuration similar to the following:
```
simple-pipeline:
workers: 2
delay: "5000"
source:
http:
path: "/ingest"
route:
- app-logs: "/app =~ \"-service$\""
sink:
- stdout:
routes:
- app-logs
```
Send data to the pipeline
```
curl -k -XPOST -H "Content-Type: application/json" -d '[{"app": "-service"}]' http://localhost:2021/ingest
```
Check logs
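As a sanity check on the intended semantics (independent of Data Prepper's expression parser), `-service$` is a standard regex that matches values ending in `-service`, so the route should select the event sent above:

```python
import re

# Same pattern as in the route condition: match values that end with "-service"
pattern = re.compile(r"-service$")

print(bool(pattern.search("user-service")))   # True: ends with "-service"
print(bool(pattern.search("service-user")))   # False: "-service" is not at the end
```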
**Expected behavior**
The provided expression should be able to be parsed. | [BUG] Regex matching with Data Prepper Expression throws error when using $ | https://api.github.com/repos/opensearch-project/data-prepper/issues/3514/comments | 2 | 2023-10-17T16:29:14Z | 2024-02-03T18:00:23Z | https://github.com/opensearch-project/data-prepper/issues/3514 | 1,947,813,657 | 3,514 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The Data Prepper pipeline configuration uses YAML which supports both single and double quotes for strings. And in many cases, you don't even need quotes.
The Data Prepper expression language uses double quotes to represent a string within an expression. For example, `/status == "ok"`.
Users have been using double quotes for their YAML strings, which results in lots of unnecessary escapes for the double quotes inside expressions.
For example:
```
sink:
- opensearch:
my_route: "/my_field =~ \"suffix\$\""
```
This would be more readable if the YAML quotes were single quotes.
```
sink:
- opensearch:
my_route: '/my_field =~ "suffix\$"'
```
**Describe the solution you'd like**
We should recommend not using quotes for YAML values whenever possible. And when necessary, use single quotes.
I propose:
* Adding documentation clarifying the different quotes.
* Updating our examples and documentations to avoid YAML quotes when possible and use YAML single quotes when quotes are necessary.
| Provide better documentation around YAML quotes | https://api.github.com/repos/opensearch-project/data-prepper/issues/3513/comments | 0 | 2023-10-17T16:23:19Z | 2025-01-21T20:59:11Z | https://github.com/opensearch-project/data-prepper/issues/3513 | 1,947,804,019 | 3,513 |
[
"opensearch-project",
"data-prepper"
] | Any chance that support for sourcing from RabbitMQ might be considered (rabbitmq-source plugin)?
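A hypothetical pipeline configuration for such a plugin might look like the following (the plugin name and every option name here are assumptions, sketched after the style of the existing `kafka` source):

```
rabbitmq-pipeline:
  source:
    rabbitmq:                          # hypothetical plugin name
      hosts: ["amqp://rabbitmq:5672"]  # hypothetical option
      queue: my-queue                  # hypothetical option
  sink:
    - stdout:
```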
| RabbitMQ Source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3511/comments | 0 | 2023-10-17T00:32:29Z | 2023-10-31T19:53:07Z | https://github.com/opensearch-project/data-prepper/issues/3511 | 1,946,313,831 | 3,511 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Missing permissions when writing to serverless collections result in infinite retries (or up to `max_retries`) before an exception message is shown to the user, so the permission issue may never be seen. This may occur for more than just serverless collections, but I have only personally hit the issue with serverless.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a serverless collection
2. Configure an OpenSearch sink with this serverless collection, with an `sts_role_arn` that does not have permissions on the data access policy
**Expected behavior**
Log permission exceptions immediately when they occur, instead of after the max_retries limit is reached. The easiest solution would be for us to log on top of `Bulk Operation Failed. Number of retries {}. Retrying...` with one of the actual bulkResponse exceptions
| [BUG] No permissions for writing to OpenSearch serverless collection only shows errors after max_retries limit is reached | https://api.github.com/repos/opensearch-project/data-prepper/issues/3508/comments | 0 | 2023-10-16T20:24:46Z | 2023-10-31T20:23:57Z | https://github.com/opensearch-project/data-prepper/issues/3508 | 1,946,030,351 | 3,508 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When a bulk request fails to write to OpenSearch, failures are handled after `max_retries` has been exhausted. However, when logging the failure or sending it to the DLQ, the bulk-level message of `Number of retries reached the limit of max retries (configured value %d)` is used instead of the document's bulkResponse with the error code and the exception. As a result, the root cause of document-level failures is hidden by the code here (https://github.com/opensearch-project/data-prepper/blob/910f45161db66e536cf2e5e5efd9fa7bed68d9f9/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java#L251).
**Expected behavior**
Given how cluttered the code here is, I think it is simplest for us to add both the bulk-level error message (if it exists) and the document-level failure at all times to the failure message that is logged or sent to the DLQ.
| [BUG] Document level bulk request error messages are overridden by bulk level error message when max limit is reached | https://api.github.com/repos/opensearch-project/data-prepper/issues/3507/comments | 5 | 2023-10-16T20:17:37Z | 2024-05-16T21:30:34Z | https://github.com/opensearch-project/data-prepper/issues/3507 | 1,946,019,555 | 3,507 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
In the Trace Analytics OpenSearch setup, index rollovers for the `otel-v1-apm-span-*` indices are not working with Data Prepper's official index template.
**To Reproduce**
* Prerequisite is a running OpenSearch cluster.
* Configure Data Prepper for the [trace analytics use case](https://opensearch.org/docs/latest/data-prepper/common-use-cases/trace-analytics/#pipeline-configuration). At least the raw-pipeline should be connected to an OpenSearch Sink.
* When Data Prepper has been started and is running, it initializes the OpenSearch sink. In this step it creates the `otel-v1-apm-span-000001` index and an ISM policy `raw-span-policy` targeting those kinds of indices.
* In OpenSearch Index Management -> Managed Indices you should see this index. It takes some time until the ISM policy is initialized for this index.
* When the first index rollover operation is performed by the ISM policy (after 24h), this operation fails with the error `"message": "Missing rollover_alias index setting [index=otel-v1-apm-span-000001]"` and after the retry operations were performed the Job Status will also be "failed". (In order to test this within multiple minutes you can manually change the `raw-span-policy` to perform the rollover after e.g. 3 minutes. Do not forget that policies don't get applied to existing indices, so you would either need to delete the `otel-v1-apm-span-000001` index and restart Data Prepper or apply this policy to the existing index.)
<img width="3179" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/121185951/d0bdacb3-f535-44ba-999d-4519c94c0e06">
**Expected behavior**
Rollover actions from the ISM policy should not fail. According to [this](https://opensearch.org/docs/latest/im-plugin/ism/error-prevention/resolutions/#the-rollover-policy-misses-rollover_alias-index-setting) OpenSearch documentation the error message is clear and you need to add the `"plugins.index_state_management.rollover_alias": "otel-v1-apm-span"` index setting to the `otel-v1-apm-span-index-template`. I tested this for our setup and this solved the problem.
I know that the rollover actions worked in previous setups (older Data Prepper and OpenSearch versions) even though this index setting was not set. I could not figure out the reason so far. I would also assume that this is a problem for newer Data Prepper / OpenSearch versions (compared to our setup) because I do not see this index setting configured anywhere.
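A sketch of the fix described above: the setting would need to be merged into Data Prepper's actual `otel-v1-apm-span-index-template` (the `index_patterns` shown here is a placeholder for the existing template body, and older setups may use the legacy `_template` API instead of a composable template):

```
PUT _index_template/otel-v1-apm-span-index-template
{
  "index_patterns": ["otel-v1-apm-span-*"],
  "template": {
    "settings": {
      "plugins.index_state_management.rollover_alias": "otel-v1-apm-span"
    }
  }
}
```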
**Environment (please complete the following information):**
- Data Prepper version: 2.3.2
- OpenSearch version: 1.3.12 | [BUG] ISM index rollover actions fail because of missing setting for otel-v1-apm-span-* indices | https://api.github.com/repos/opensearch-project/data-prepper/issues/3506/comments | 6 | 2023-10-16T16:22:24Z | 2023-11-08T14:38:51Z | https://github.com/opensearch-project/data-prepper/issues/3506 | 1,945,632,200 | 3,506 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper 2.5 started using Ubuntu as the base image (from the Temurin Java Docker image). This introduces quite a few packages and through these introduced a number of CVEs in the base image.
**Describe the solution you'd like**
Use [Amazon Linux 2023](https://docs.aws.amazon.com/linux/al2023/ug/what-is-amazon-linux.html) as the base Docker image for Data Prepper. Amazon Linux favors stability and security over features which is quite sufficient for running Data Prepper. Additionally, OpenSearch [builds Docker images](https://github.com/opensearch-project/opensearch-build/blob/28790484492f63b83db1644b2fcee34b58494f03/docker/release/dockerfiles/opensearch.al2023.dockerfile#L53) on Amazon Linux.
**Describe alternatives you've considered (Optional)**
Going back to Alpine, but Temurin's Alpine image does not support ARM.
**Additional context**
This was added to help support ARM images. #640
| Use Amazon Linux as base Docker image | https://api.github.com/repos/opensearch-project/data-prepper/issues/3505/comments | 0 | 2023-10-16T14:50:31Z | 2023-11-17T20:34:53Z | https://github.com/opensearch-project/data-prepper/issues/3505 | 1,945,405,084 | 3,505 |
[
"opensearch-project",
"data-prepper"
] | I'm trying to debug an issue in an OS Integration Pipeline, but the only message I get is:
```
WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - Bulk Operation Failed. Number of retries 5. Retrying...
```
This makes it impossible to debug the root cause of the issue. It would be very useful to get more information about what the failure was in order to debug such issues.
Having a quick look at the code, it seems like this [line](https://github.com/opensearch-project/data-prepper/blob/ccbe50c83916e34bde4f8cb0358ea70d9be3488d/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java#L240) prints the cause if the issue is an exception, but nothing is printed if the error was part of the BulkResponse.
Please note, it is the first time ever I look at this code, so my previous analysis may be wrong, though
| Bulk Operation Retry Strategy should print cause of error | https://api.github.com/repos/opensearch-project/data-prepper/issues/3504/comments | 1 | 2023-10-16T13:00:33Z | 2023-11-27T17:40:06Z | https://github.com/opensearch-project/data-prepper/issues/3504 | 1,945,168,125 | 3,504 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
For some use cases, ingest pipelines in OpenSearch are useful. From what I gather from the docs and code, the opensearch sink does not support an ingest pipeline option.
**Describe the solution you'd like**
Implement an ingest pipeline config option and pass it to OpenSearch.
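A minimal sketch of what such a configuration could look like (the `pipeline` option name is an assumption; it would map to the `pipeline` query parameter that OpenSearch's bulk API already accepts):

```
sink:
  - opensearch:
      hosts: ["https://opensearch:9200"]
      index: my-index
      pipeline: my-ingest-pipeline  # hypothetical option: name of an existing ingest pipeline
```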
**Describe alternatives you've considered (Optional)**
Using processors in Data Prepper itself, but e.g. Filebeat ships ingest pipelines that can be used out of the box.
**Additional context**
N/A | Opensearch sink: support ingest pipeline config | https://api.github.com/repos/opensearch-project/data-prepper/issues/3502/comments | 4 | 2023-10-15T21:54:39Z | 2023-11-01T18:41:21Z | https://github.com/opensearch-project/data-prepper/issues/3502 | 1,944,071,752 | 3,502 |
[
"opensearch-project",
"data-prepper"
I get an `x_content_parse_exception` while trying `.\examples\jaeger-hotrod`.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to localhost:8080 and click some buttons on the Hot ROD page.
2. Open http://127.0.0.1:5601/app/observability-traces , select Data-Prepper and click the refresh button.
3. See the error (details in the opensearch-dashboards container):
```
{"statusCode":200,"responseTime":8,"contentLength":9},"message":"POST /api/observability/trace_analytics/query 200 8ms - 9.0B"}
StatusCodeError: [parsing_exception] [term] query does not support [dispatchConfig], with { line=1 & col=328 }
    at respond (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:349:15)
    at checkRespForFailure (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:306:7)
    at HttpConnector. (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
    at IncomingMessage.wrapper (/usr/share/opensearch-dashboards/node_modules/lodash/lodash.js:4991:19)
    at IncomingMessage.emit (node:events:525:35)
    at IncomingMessage.emit (node:domain:489:12)
    at endReadableNT (node:internal/streams/readable:1359:12)
    at processTicksAndRejections (node:internal/process/task_queues:82:21) {
  status: 400,
  displayName: 'BadRequest',
  path: '/otel-v1-apm-span-*/_search',
  query: { size: 0 },
  body: {
    error: {
      root_cause: [Array],
      type: 'x_content_parse_exception',
      reason: '[1:328] [bool] failed to parse field [must]',
      caused_by: [Object]
    },
    status: 400
  },
  statusCode: 400,
  response: '{"error":{"root_cause":[{"type":"parsing_exception","reason":"[term] query does not support [dispatchConfig]","line":1,"col":328}],"type":"x_content_parse_exception","reason":"[1:328] [bool] failed to parse field [must]","caused_by":{"type":"parsing_exception","reason":"[term] query does not support [dispatchConfig]","line":1,"col":328}},"status":400}',
  toString: [Function (anonymous)],
  toJSON: [Function (anonymous)]
}
```
**Expected behavior**
I expected to see some data on this page.

**Environment (please complete the following information):**
- OS: Windows 10 (security features disabled in the configurations)

**Additional context**
I attached all related files as a zip file.
| [BUG] Cannot get view in DashBoard - jaeger-hotrod | https://api.github.com/repos/opensearch-project/data-prepper/issues/3499/comments | 4 | 2023-10-12T22:00:07Z | 2024-10-26T13:25:04Z | https://github.com/opensearch-project/data-prepper/issues/3499 | 1,940,860,537 | 3,499 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The key value processor can throw an NPE here (https://github.com/opensearch-project/data-prepper/blob/b4b4a98d10a5cec0d8c0331e7551bc272403c71f/data-prepper-plugins/key-value-processor/src/main/java/org/opensearch/dataprepper/plugins/processor/keyvalue/KeyValueProcessor.java#L239) if the `source` key does not exist in the Event.
**Expected behavior**
Utilize the `tags_on_failure` to tag events that don't have the key, but otherwise will be a no-op and be flushed downstream.
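A sketch of the proposed behavior as a pipeline configuration (the `source` value and tag name here are illustrative, not prescribed by the issue):

```
processor:
  - key_value:
      source: "message"                       # Event key that may be missing
      tags_on_failure: ["keyvalueprocessor_failure"]  # applied instead of throwing an NPE
```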
| [BUG] Key value processor will throw NPE if source key does not exist in the Event | https://api.github.com/repos/opensearch-project/data-prepper/issues/3496/comments | 1 | 2023-10-12T19:11:04Z | 2024-01-17T19:13:17Z | https://github.com/opensearch-project/data-prepper/issues/3496 | 1,940,627,928 | 3,496 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Users of Data Prepper are not given an accurate look at the time it takes for Events to be processed through a Data Prepper pipeline, from the time the Event was received by the source, to the time that it is successfully sent to the sink.
Additionally, users would like to know the delay from a timestamp that is already in the Events, to get an idea of the end-to-end latency from a source in an external system, through Data Prepper, to being successfully sent to the sink.
**Describe the solution you'd like**
A sink-level metric (ideally this full metric name would include plugin-id but since we don't have that it will have to be aggregated for now between sinks of the same type in the same sub-pipeline) named `pipelineEventLatency`. This metric will track the time between when the Event was created (the `from_time_received` timestamp in the EventMetadata) to the time that the Event was successfully sent to the sink. The metric will be calculated by the `releaseEvent` call made by the sink to acknowledge that it was sent successfully to the sink.
Additionally, for the use case of tracking latency for an external source from a timestamp in the Event, we will add another parameter to the common `SinkContext` for `end_to_end_latency_metric_start_time_key` (any way to shorten this name?) which can take the name of a key in the Event to use at the starting time calculation for latency. My thought is that this would result in a second metric, `endToEndEventLatency`, which will track the latency from the user's external source timestamp to the time that the Event was successfully received by the sink.
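To illustrate the proposal, a sink configuration using the new option might look like this (the option name is as proposed above; `event_timestamp` is a hypothetical key present in the incoming Events):

```
sink:
  - opensearch:
      end_to_end_latency_metric_start_time_key: "event_timestamp"
```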
| Sink level metric for end to end latency | https://api.github.com/repos/opensearch-project/data-prepper/issues/3494/comments | 10 | 2023-10-12T16:47:37Z | 2023-11-01T20:53:00Z | https://github.com/opensearch-project/data-prepper/issues/3494 | 1,940,393,345 | 3,494 |
[
"opensearch-project",
"data-prepper"
] | Update the minimum version for running Data Prepper to JDK 21 from JDK 11.
(The original proposal from Oct 2023 was to update to JDK 17. Releasing with JDK 21 as the minimum now that it is the latest LTS will give us more language features and fewer versions to support) | Update the minimum version of Data Prepper to 21 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3493/comments | 1 | 2023-10-12T15:48:55Z | 2024-09-06T18:21:54Z | https://github.com/opensearch-project/data-prepper/issues/3493 | 1,940,279,504 | 3,493 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-44981 - Critical Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>zookeeper-3.7.1.jar</b>, <b>zookeeper-3.6.4.jar</b>, <b>zookeeper-3.6.3.jar</b></p></summary>
<p>
<details><summary><b>zookeeper-3.7.1.jar</b></p></summary>
<p>ZooKeeper server</p>
<p>Library home page: <a href="http://zookeeper.apache.org">http://zookeeper.apache.org</a></p>
<p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.1/b598708e153ff74dd2ebb800419df96df8a01c2d/zookeeper-3.7.1.jar</p>
<p>
Dependency Hierarchy:
- kafka_2.13-7.4.0-ccs.jar (Root Library)
- :x: **zookeeper-3.7.1.jar** (Vulnerable Library)
</details>
<details><summary><b>zookeeper-3.6.4.jar</b></p></summary>
<p>ZooKeeper server</p>
<p>Library home page: <a href="http://zookeeper.apache.org">http://zookeeper.apache.org</a></p>
<p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.4/7c035d7130a0dcb55e9566ba304ebda4763b002d/zookeeper-3.6.4.jar</p>
<p>
Dependency Hierarchy:
- data-prepper-plugins-2.6.0-SNAPSHOT (Root Library)
- kafka-connect-plugins-2.6.0-SNAPSHOT
- kafka-schema-registry-7.5.0.jar
- :x: **zookeeper-3.6.4.jar** (Vulnerable Library)
</details>
<details><summary><b>zookeeper-3.6.3.jar</b></p></summary>
<p>ZooKeeper server</p>
<p>Library home page: <a href="http://zookeeper.apache.org">http://zookeeper.apache.org</a></p>
<p>Path to dependency file: /data-prepper-plugins/parquet-codecs/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.3/a6e74f826db85ff8c51c15ef0fa2ea0b462aef25/zookeeper-3.6.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.3/a6e74f826db85ff8c51c15ef0fa2ea0b462aef25/zookeeper-3.6.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.3/a6e74f826db85ff8c51c15ef0fa2ea0b462aef25/zookeeper-3.6.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.6.3/a6e74f826db85ff8c51c15ef0fa2ea0b462aef25/zookeeper-3.6.3.jar</p>
<p>
Dependency Hierarchy:
- hadoop-common-3.3.6.jar (Root Library)
- :x: **zookeeper-3.6.3.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key vulnerability in Apache ZooKeeper. If SASL Quorum Peer authentication is enabled in ZooKeeper (quorum.auth.enableSasl=true), the authorization is done by verifying that the instance part in SASL authentication ID is listed in zoo.cfg server list. The instance part in SASL auth ID is optional and if it's missing, like 'eve@EXAMPLE.COM', the authorization check will be skipped. As a result an arbitrary endpoint could join the cluster and begin propagating counterfeit changes to the leader, essentially giving it complete read-write access to the data tree. Quorum Peer authentication is not enabled by default.
Users are recommended to upgrade to version 3.9.1, 3.8.3, 3.7.2, which fixes the issue.
Alternately ensure the ensemble election/quorum communication is protected by a firewall as this will mitigate the issue.
See the documentation for more details on correct cluster administration.
<p>Publish Date: 2023-10-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-44981>CVE-2023-44981</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread/wf0yrk84dg1942z1o74kd8nycg6pgm5b">https://lists.apache.org/thread/wf0yrk84dg1942z1o74kd8nycg6pgm5b</a></p>
<p>Release Date: 2023-10-11</p>
<p>Fix Resolution: org.apache.zookeeper:zookeeper:3.7.2,3.8.3,3.9.1</p>
</p>
</details>
<p></p>
| CVE-2023-44981 (Critical) detected in multiple libraries - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3491/comments | 4 | 2023-10-12T03:29:11Z | 2023-11-27T17:04:56Z | https://github.com/opensearch-project/data-prepper/issues/3491 | 1,939,109,349 | 3,491 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-36478 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>http2-hpack-11.0.12.jar</b>, <b>jetty-http-11.0.12.jar</b></p></summary>
<p>
<details><summary><b>http2-hpack-11.0.12.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty.http2/http2-hpack/11.0.12/975eab2eb71a46ccb061e00a6c77e0a56f6501cc/http2-hpack-11.0.12.jar</p>
<p>
Dependency Hierarchy:
- jetty-bom-11.0.12.pom (Root Library)
- :x: **http2-hpack-11.0.12.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-http-11.0.12.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-http/11.0.12/bf07349f47ab6b11f1329600f37dffb136d5d7c/jetty-http-11.0.12.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.0-beta-8.jar (Root Library)
- jetty-server-11.0.12.jar
- :x: **jetty-http-11.0.12.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/5b822f31bcf20d963c76d9b2319604252b9fa5d1">5b822f31bcf20d963c76d9b2319604252b9fa5d1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Eclipse Jetty provides a web server and servlet container. In versions 11.0.0 through 11.0.15, 10.0.0 through 10.0.15, and 9.0.0 through 9.4.52, an integer overflow in `MetaDataBuilder.checkSize` allows for HTTP/2 HPACK header values to
exceed their size limit. `MetaDataBuilder.java` determines if a header name or value exceeds the size limit, and throws an exception if the limit is exceeded. However, when length is very large and huffman is true, the multiplication by 4 in line 295
will overflow, and length will become negative. `(_size+length)` will now be negative, and the check on line 296 will not be triggered. Furthermore, `MetaDataBuilder.checkSize` allows for user-entered HPACK header value sizes to be negative, potentially leading to a very large buffer allocation later on when the user-entered size is multiplied by 2. This means that if a user provides a negative length value (or, more precisely, a length value which, when multiplied by the 4/3 fudge factor, is negative), and this length value is a very large positive number when multiplied by 2, then the user can cause a very large buffer to be allocated on the server. Users of HTTP/2 can be impacted by a remote denial of service attack. The issue has been fixed in versions 11.0.16, 10.0.16, and 9.4.53. There are no known workarounds.
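The arithmetic failure described above can be sketched in a few lines. The following is an illustrative 32-bit model of the flawed check, not Jetty's actual code; the names and the table-size constant are chosen here for illustration:

```python
HPACK_MAX_SIZE = 4096  # illustrative size limit


def to_int32(value):
    """Wrap an integer to Java's 32-bit signed int semantics."""
    value &= 0xFFFFFFFF
    return value - 0x1_0000_0000 if value >= 0x8000_0000 else value


def check_size_flawed(current_size, length, huffman):
    """Model of the broken check: length is scaled *before* validation,
    so a 32-bit overflow can make it negative and slip past the limit."""
    if huffman:
        length = to_int32(length * 4)  # wraps negative for length >= 2**29
    return current_size + length <= HPACK_MAX_SIZE


# A huge huffman length wraps negative and the limit check passes:
print(check_size_flawed(0, 0x2000_0001, True))   # True: overflowed value accepted
print(check_size_flawed(0, 0x2000_0001, False))  # False: without scaling it is rejected
```

The negative length then flows into a later buffer allocation, which is what makes the overflow exploitable for denial of service.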
<p>Publish Date: 2023-10-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-36478>CVE-2023-36478</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-wgh7-54f2-x98r">https://github.com/eclipse/jetty.project/security/advisories/GHSA-wgh7-54f2-x98r</a></p>
<p>Release Date: 2023-10-10</p>
<p>Fix Resolution: org.eclipse.jetty.http2:http2-hpack:9.4.53.v20231009,10.0.16,11.0.16;org.eclipse.jetty.http3:http3-qpack:10.0.16,11.0.16;org.eclipse.jetty:jetty-http:9.4.53.v20231009,10.0.16,11.0.16</p>
</p>
</details>
<p></p>
| CVE-2023-36478 (High) detected in http2-hpack-11.0.12.jar, jetty-http-11.0.12.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3490/comments | 2 | 2023-10-12T03:28:52Z | 2023-10-26T18:28:40Z | https://github.com/opensearch-project/data-prepper/issues/3490 | 1,939,109,117 | 3,490 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Users often format the `index` with values from the Event or EventMetadata, and this can commonly lead to invalid index names that don't follow the guidelines below
```
OpenSearch Service indexes have the following naming restrictions:
All letters must be lowercase.
Index names cannot begin with _ or -.
Index names can't contain spaces, commas, :, ", *, +, /, \, |, ?, #, >, or <.
```
**Describe the solution you'd like**
Have the `index` parameter always conform to valid index naming rules. The sink would always take the given index name and perform the following operations. This could be enabled with a separate flag such as `enforce_valid_index: true` or `normalize_index: true`
1. Make lowercase
2. Replace a starting `_` or `-` with the empty string (should this character be configurable?)
3. Replace invalid characters with `_` (should this character be configurable?)
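A minimal sketch of the three operations above (the function name and the regex are illustrative, not an existing Data Prepper API):

```python
import re

# Characters OpenSearch forbids in index names, per the restrictions above.
_INVALID = re.compile(r'[\s,:"*+/\\|?#><]')


def normalize_index(name: str) -> str:
    name = name.lower()              # 1. make lowercase
    name = name.lstrip('_-')         # 2. drop a leading '_' or '-'
    return _INVALID.sub('_', name)   # 3. replace invalid characters with '_'


print(normalize_index('_My Service/Logs:2023'))  # my_service_logs_2023
```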
**Alternatives**
* Create expression functions that will do this, such as `getValidIndex` or `getValidIndexMetadata`
| Using index in expressions has a common chance of creating an invalid index | https://api.github.com/repos/opensearch-project/data-prepper/issues/3487/comments | 2 | 2023-10-11T19:57:05Z | 2023-11-11T21:54:31Z | https://github.com/opensearch-project/data-prepper/issues/3487 | 1,938,621,617 | 3,487 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The Kafka buffer can now decrypt an envelope encryption key using KMS. However, sometimes, we want to decrypt with an encryption context.
**Describe the solution you'd like**
Add support for KMS encryption context in the configuration. Use this value when sending the `kms:Decrypt` request.
```
buffer:
kafka:
topics:
- name: MyTopic
encryption_key: AQIDAHhBQ4iH7RP28kWDRU1yN2K73qYEE2d8i06EBly7HoDSIwFXoO+oiW+HOlam8lfIUFwLAAAAfjB8BgkqhkiG9w0BBwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM/j9Uf9cxYv/poV0FAgEQgDuVG9jfls3Ys7dR/cRKmdkcYDJw/XzR/ZEnZwcT9e+XB1T+SxC0YHLtc33lRwoD/UV0Ot+y8oUBqMvaXg==
kms:
key_id: alias/ExampleAlias
encryption_context:
mykey1: myvalue1
mykey2: myvalue2
mykey3: myvalue3
```
Additionally, we can move the `kms_key_id` into a new `kms` section.
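As an illustrative sketch (not Data Prepper's actual implementation, which uses the AWS SDK for Java), the configured context could be passed straight through to the `kms:Decrypt` request. The helper below shows the request shape; the function name is hypothetical:

```python
import base64


def build_decrypt_request(encryption_key_b64, key_id, encryption_context=None):
    """Assemble the parameters for a kms:Decrypt call.

    The context map must match the one supplied at encryption time exactly,
    or KMS rejects the request.
    """
    request = {
        "CiphertextBlob": base64.b64decode(encryption_key_b64),
        "KeyId": key_id,
    }
    if encryption_context:
        request["EncryptionContext"] = dict(encryption_context)
    return request


req = build_decrypt_request(
    "AQIDAA==",  # placeholder ciphertext, not a real encrypted key
    "alias/ExampleAlias",
    {"mykey1": "myvalue1", "mykey2": "myvalue2"},
)
```

With boto3, for example, this maps to `kms_client.decrypt(**req)`; the same three parameters exist in the AWS SDK for Java's `DecryptRequest`.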
**Describe alternatives you've considered (Optional)**
Add a new field: `kms_encryption_key`. But, this list of `kms_` prefix options could grow.
**Additional context**
Kafka buffer issue for encryption/decryption and KMS: #3422
| Support KMS encryption context in the Kafka buffer's encryption | https://api.github.com/repos/opensearch-project/data-prepper/issues/3484/comments | 0 | 2023-10-11T17:03:58Z | 2023-11-28T14:24:27Z | https://github.com/opensearch-project/data-prepper/issues/3484 | 1,938,281,282 | 3,484 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.5.0
**BUILD NUMBER**: 74
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/6485073769
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 6485073769: Release Data Prepper : 2.5.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3483/comments | 3 | 2023-10-11T16:48:47Z | 2023-10-11T16:54:57Z | https://github.com/opensearch-project/data-prepper/issues/3483 | 1,938,249,381 | 3,483 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Running the Data Prepper build and GitHub Actions frequently hits failing flaky tests, and often requires retrying the tests multiple times before they succeed. This issue is meant to allow easy tracking of these flaky tests by adding an item to the checklist whenever a flaky test is seen.
### Flaky tests
- [ ] ```> Task :data-prepper-core:integrationTest PipelinesWithAcksIT > simple_pipeline_with_multiple_records() FAILED
java.lang.NullPointerException at PipelinesWithAcksIT.java:85```
- [ ] ```testcase name="three_pipelines_with_route_and_multiple_records()" classname="org.opensearch.dataprepper.integration.PipelinesWithAcksIT" time="11.003">
<failure message="java.lang.NullPointerException" type="java.lang.NullPointerException">java.lang.NullPointerException
at org.opensearch.dataprepper.integration.PipelinesWithAcksIT.three_pipelines_with_route_and_multiple_records(PipelinesWithAcksIT.java:130)```
- [x] #3470
- [ ] #1374
- [ ] Router_ThreeRoutesIT https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-1793501480
- [ ] AwsCloudMapPeerListProviderTest.java:345 - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-1795279643
- [ ] OTelTraceRawProcessorTest.java:270 - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-1796339462
- [ ] OTelTraceRawProcessorTest.java:193 - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-1806598375
- [x] #4298
- [ ] HistogramAggregateActionTests.java:370 - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2207244965
- [x] LambdaSinkServiceTest.java:316 - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2221378060; fixed by #4723
- [ ] RDS ExportSchedulerTest - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2393877779
- [ ] RDS StreamScheduler - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2394058183. (added sleep time in #5016, will check again see if this helps)
- [ ] Truncate processor - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2405626969
- [ ] RDS DataFileSchedulerTest - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2581524346
- [ ] PipelinesWithAcksIT simple_pipeline_with_single_record - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2581526898
- [ ] GrpcRetryInfoCalculatorTest - testResetAfterDelayWearsOff - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2581528755
- [ ] ExportSchedulerTest test_export_run - https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2581533044 | [BUG] Flaky tests high-level tracking | https://api.github.com/repos/opensearch-project/data-prepper/issues/3481/comments | 25 | 2023-10-11T15:15:26Z | 2025-01-14T20:55:06Z | https://github.com/opensearch-project/data-prepper/issues/3481 | 1,938,065,731 | 3,481 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-44487 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>netty-codec-http2-4.1.96.Final.jar</b>, <b>http2-common-11.0.16.jar</b>, <b>http2-server-11.0.16.jar</b></p></summary>
<p>
<details><summary><b>netty-codec-http2-4.1.96.Final.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /data-prepper-plugins/grok-processor/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http2/4.1.96.Final/cc8baf4ff67c1bcc0cde60bc5c2bb9447d92d9e6/netty-codec-http2-4.1.96.Final.jar (same path repeated once per dependent module)</p>
<p>
Dependency Hierarchy:
- data-prepper-core-2.6.0-SNAPSHOT (Root Library)
- armeria-1.25.2.jar
- :x: **netty-codec-http2-4.1.96.Final.jar** (Vulnerable Library)
</details>
<details><summary><b>http2-common-11.0.16.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://eclipse.dev/jetty">https://eclipse.dev/jetty</a></p>
<p>Path to dependency file: /release/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty.http2/http2-common/11.0.16/4d9ca033da05cdaa6658cb31467bd2f3aef67d8b/http2-common-11.0.16.jar (same path repeated once per dependent module)</p>
<p>
Dependency Hierarchy:
- jetty-bom-11.0.16.pom (Root Library)
- :x: **http2-common-11.0.16.jar** (Vulnerable Library)
</details>
<details><summary><b>http2-server-11.0.16.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://eclipse.dev/jetty">https://eclipse.dev/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty.http2/http2-server/11.0.16/e16959d693580c0d5d162a65d495a237a8603258/http2-server-11.0.16.jar (same path repeated once per dependent module)</p>
<p>
Dependency Hierarchy:
- kafka-plugins-2.6.0-SNAPSHOT (Root Library)
- kafka-schema-registry-7.3.3.jar
- rest-utils-7.3.3.jar
- :x: **http2-server-11.0.16.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The HTTP/2 protocol allows a denial of service (server resource consumption) because request cancellation can reset many streams quickly, as exploited in the wild in August through October 2023.
<p>Publish Date: 2023-10-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-44487>CVE-2023-44487</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-44487">https://www.cve.org/CVERecord?id=CVE-2023-44487</a></p>
<p>Release Date: 2023-10-10</p>
<p>Fix Resolution: Upgrade to version org.eclipse.jetty.http2:http2-server:9.4.53.v20231009,10.0.17,11.0.17, org.eclipse.jetty.http2:jetty-http2-server:12.0.2, org.eclipse.jetty.http2:http2-common:9.4.53.v20231009,10.0.17,11.0.17, org.eclipse.jetty.http2:jetty-http2-common:12.0.2, nghttp - v1.57.0</p>
</p>
</details>
<p></p>
| CVE-2023-44487 (High) detected in multiple libraries | https://api.github.com/repos/opensearch-project/data-prepper/issues/3474/comments | 0 | 2023-10-10T20:33:21Z | 2023-10-10T23:33:28Z | https://github.com/opensearch-project/data-prepper/issues/3474 | 1,936,165,922 | 3,474 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Fix disabled E2E ack integration tests in PipelinesWithAcksIT.java
**To Reproduce**
The problem occurs somewhat frequently in the GitHub Actions environment but not in local environments.
**Expected behavior**
Tests should pass in all environments.
| [BUG] Fix disabled E2E ack integration tests in PipelinesWithAcksIT.java | https://api.github.com/repos/opensearch-project/data-prepper/issues/3472/comments | 0 | 2023-10-10T17:25:56Z | 2023-10-11T20:00:49Z | https://github.com/opensearch-project/data-prepper/issues/3472 | 1,935,869,693 | 3,472 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
```
> Task :data-prepper-plugins:dynamodb-source:test
DataFileLoaderTest > test_run_loadFile_correctly() FAILED
org.mockito.exceptions.verification.WantedButNotInvoked at DataFileLoaderTest.java:139
```
Example failure:
https://github.com/opensearch-project/data-prepper/actions/runs/6470850267/job/17567944366?pr=3469
**To Reproduce**
This is flaky, so this might require some investigation.
**Expected behavior**
This test should consistently pass.
| [BUG] Flaky test in DataFileLoaderTest | https://api.github.com/repos/opensearch-project/data-prepper/issues/3470/comments | 7 | 2023-10-10T14:55:52Z | 2023-11-09T21:03:15Z | https://github.com/opensearch-project/data-prepper/issues/3470 | 1,935,581,274 | 3,470 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
There are a lot of test failures on the Windows system.
1. SinkModelTest. It hardcodes "\n" in the tests, but the line separator is "\r\n" on Windows.
2. DataPrepperArgsTest. It uses a hardcoded "/" path separator to compare paths, but on Windows the separator is "\\".
3. DateProcessorTests. The test uses the wrong time API, comparing one local time in UTC with one in the local timezone.
4. InMemorySourceCoordinationStoreTest. There is a mistaken assumption that if Java code calls Instant.now() twice, the second call will return a later time than the first. This is not correct; on my Windows system it sometimes returns the same time for both calls.
5. QueuedPartitionsItemTest. Similar issue to the one above.
6. RSSSourceTest. It uses a real-world RSS feed, which requires an internet connection when running this test.
7. ParquetOutputCodecTest. The test doesn't close all outputStream objects, which causes a file-locked exception on Windows when deleting.
8. InMemoryBufferTest. Similar Instant.now() issue to the ones above.
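The Instant.now() items above share one root cause: two successive clock reads can be identical on a coarse timer. A common fix, sketched here in Python for brevity (the names are illustrative), is to break ties with a monotonic sequence number rather than assume strictly increasing timestamps:

```python
from itertools import count

_tiebreaker = count()


def ordered_stamp(timestamp):
    """Pair a (possibly repeated) clock reading with a sequence number
    so later stamps always sort after earlier ones, even on clock ties."""
    return (timestamp, next(_tiebreaker))


first = ordered_stamp(1_700_000_000)
second = ordered_stamp(1_700_000_000)  # identical clock reading
print(first < second)  # True: the sequence number breaks the tie
```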
**To Reproduce**
Steps to reproduce the behavior:
Just run `gradlew test` and meet a lot of errors.
**Expected behavior**
No unit test errors on the main branch.
**Environment (please complete the following information):**
- OS: Windows 10, 21H2
- Version: main branch
**Additional context**
I understand that Windows may not be the application runtime target. However, it seems there are several test code issues in the project.
| [BUG] Unit tests fail on Windows machine | https://api.github.com/repos/opensearch-project/data-prepper/issues/3459/comments | 0 | 2023-10-08T07:33:08Z | 2023-11-01T14:56:17Z | https://github.com/opensearch-project/data-prepper/issues/3459 | 1,931,687,294 | 3,459 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Changed the compression in the Logstash producer to snappy. After that, Data Prepper stopped processing messages without giving an error. Removing the compression made everything work again.
**To Reproduce**
See the description above.
**Expected behavior**
Dataprepper processes compressed messages
**Screenshots**
--
**Environment (please complete the following information):**
Dataprepper 2.4.1
**Additional context**
-- | [BUG] Kafka input snappy compression not supported? | https://api.github.com/repos/opensearch-project/data-prepper/issues/3452/comments | 1 | 2023-10-06T16:41:05Z | 2023-10-06T18:37:15Z | https://github.com/opensearch-project/data-prepper/issues/3452 | 1,930,597,044 | 3,452 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-4586 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-handler-4.1.100.Final.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /data-prepper-plugins/otel-trace-raw-processor/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler/4.1.100.Final/4c0acdb8bb73647ebb3847ac2d503d53d72c02b4/netty-handler-4.1.100.Final.jar</p>
<p>
Dependency Hierarchy:
  - data-prepper-core-2.6.0-SNAPSHOT (Root Library)
    - armeria-1.25.2.jar
      - netty-resolver-dns-4.1.100.Final.jar
        - :x: **netty-handler-4.1.100.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
After conducting further research, Mend has determined that versions 4.1.x before stable releases of 5.x of io.netty:netty-handler are vulnerable to CVE-2023-4586.
<p>Publish Date: 2023-10-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-4586>CVE-2023-4586</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-4586">https://nvd.nist.gov/vuln/detail/CVE-2023-4586</a></p>
<p>Release Date: 2023-10-04</p>
<p>Fix Resolution: io.netty:netty-handler - 5.0.0.Alpha1</p>
</p>
</details>
<p></p>
| CVE-2023-4586 (High) detected in netty-handler-4.1.100.Final.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3443/comments | 7 | 2023-10-05T16:58:38Z | 2023-11-07T18:03:48Z | https://github.com/opensearch-project/data-prepper/issues/3443 | 1,928,711,408 | 3,443 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, basic auth (username/password) support in pipeline.yaml does not include refreshing secrets through polling.
**Describe the solution you'd like**
Support refreshing secrets through polling (#3415)
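For illustration, a polling-based refresh could look roughly like the sketch below. This is a hypothetical Python outline, not Data Prepper's actual implementation; the `fetch_secret` callable and the interval are assumptions:

```python
import threading
import time

class PollingSecretProvider:
    """Re-fetches a secret on a schedule so long-lived clients pick up rotations.

    Illustrative sketch only -- not Data Prepper's implementation. The
    fetch_secret callable and the default interval are hypothetical.
    """

    def __init__(self, fetch_secret, interval_seconds=60):
        self._fetch = fetch_secret
        self._interval = interval_seconds
        self._lock = threading.Lock()
        self._secret = fetch_secret()  # initial fetch at startup

    def get(self):
        with self._lock:
            return self._secret

    def refresh(self):
        # One polling iteration: re-read the secret and swap it in atomically.
        new_secret = self._fetch()
        with self._lock:
            self._secret = new_secret

    def start(self):
        # Daemon thread running the polling loop in the background.
        def loop():
            while True:
                time.sleep(self._interval)
                self.refresh()
        threading.Thread(target=loop, daemon=True).start()
```

Consumers would call `get()` on every connection attempt rather than caching the credential, so a rotated password takes effect on the next poll.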
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| [Enhancement] Support secrets refreshment for basic auth in OpenSearch Source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3436/comments | 0 | 2023-10-05T02:53:15Z | 2023-10-09T15:24:49Z | https://github.com/opensearch-project/data-prepper/issues/3436 | 1,927,287,282 | 3,436 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We would like to support Hadoop partitions in the S3 sink. For example, we may wish to write an object to the following key path:
```
events/year=2023/month=10/day=04
```
**Describe the solution you'd like**
Provide a new Data Prepper expression method to format a date-time as desired.
```
date_time_format(EventKey, FormatString)
```
You could achieve the above example using a configuration like the following:
```
path_prefix: "events/year=${date_time_format(eventTime, \"yyyy\")}/month=${date_time_format(eventTime, \"MM\")}/day=${date_time_format(eventTime, \"dd\")}/"
```
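To illustrate the intended result, here is a rough Python sketch of what the proposed function might produce for the key path in the opening example. The pattern-to-strftime mapping is hypothetical and only covers the patterns used here:

```python
from datetime import datetime, timezone

# Hypothetical stand-in for the proposed date_time_format expression function.
# Maps a few Java-style patterns onto Python strftime equivalents.
_PATTERNS = {"yyyy": "%Y", "MM": "%m", "dd": "%d"}

def date_time_format(event_time, pattern):
    return event_time.strftime(_PATTERNS[pattern])

event_time = datetime(2023, 10, 4, tzinfo=timezone.utc)
path_prefix = "events/year={}/month={}/day={}/".format(
    date_time_format(event_time, "yyyy"),
    date_time_format(event_time, "MM"),
    date_time_format(event_time, "dd"),
)
# path_prefix == "events/year=2023/month=10/day=04/"
```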
**Describe alternatives you've considered (Optional)**
One alternative is to update the `date` processor to create formatted strings. This would mean writing to the Event, but the user doesn't necessarily want the data in the S3 object. It could write to the metadata instead.
```
path_prefix: "events/year=${getMetadata(\"year\")}/month=${getMetadata(\"month\")}/day=${getMetadata(\"day\")}/"
```
Another alternative is to make a new special syntax. But, we already have a functions concept. So I think we should build on this rather than try to make some new syntax.
**Additional context**
See https://github.com/opensearch-project/data-prepper/issues/3310#issuecomment-1747508402 from #3310.
| Data Prepper function to format a date-time | https://api.github.com/repos/opensearch-project/data-prepper/issues/3434/comments | 4 | 2023-10-04T19:34:37Z | 2024-09-17T19:46:05Z | https://github.com/opensearch-project/data-prepper/issues/3434 | 1,926,852,311 | 3,434 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Index templates are being created with a truncated index pattern, derived from the index patterns passed in the config.
This bug occurs when there is no `ism_policy_file` passed to the source. This is here in the code (https://github.com/opensearch-project/data-prepper/blob/463fa4590d5f9d7f0a397430de0dbd634384dd03/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/IndexManagerFactory.java#L103) where the decision to use `NoIsmPolicyManagement` class is made. This class has the following method which does not add a pattern back (https://github.com/opensearch-project/data-prepper/blob/463fa4590d5f9d7f0a397430de0dbd634384dd03/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/NoIsmPolicyManagement.java#L43), which is called here (after removing the pattern suffix) (https://github.com/opensearch-project/data-prepper/blob/463fa4590d5f9d7f0a397430de0dbd634384dd03/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/AbstractIndexManager.java#L230). With an ism policy file, this method will be called instead, which adds back the pattern (https://github.com/opensearch-project/data-prepper/blob/463fa4590d5f9d7f0a397430de0dbd634384dd03/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/IsmPolicyManagement.java#L145)
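The two code paths can be sketched as follows. This is a simplified Python illustration of the behavior, not the actual Java code:

```python
def strip_pattern_suffix(index_pattern):
    # AbstractIndexManager removes the trailing wildcard suffix before
    # delegating to the policy-management strategy (simplified).
    return index_pattern[:-2] if index_pattern.endswith("-*") else index_pattern

def ism_index_patterns(stripped):
    # IsmPolicyManagement adds the wildcard back, so the template keeps "test-*".
    return [stripped + "-*"]

def no_ism_index_patterns(stripped):
    # NoIsmPolicyManagement returns the name as-is, leaving the truncated "test".
    return [stripped]

stripped = strip_pattern_suffix("test-*")
assert ism_index_patterns(stripped) == ["test-*"]   # with an ISM policy file
assert no_ism_index_patterns(stripped) == ["test"]  # without one: the bug
```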
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pipeline with the following opensearch sink configuration
```yaml
sink:
  - opensearch:
      index: "test-index-1"
      template_type: "index-template"
      template_content: |
        {
          "index_patterns": [
            "test-*"
          ],
          "template": {
            "aliases": {
              "my_test_logs": {}
            },
            "settings": {
              "number_of_shards": 5,
              "number_of_replicas": 2,
              "refresh_interval": -1
            },
            "mappings": {
              "properties": {
                "timestamp": {
                  "type": "date",
                  "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
                },
                "value": {
                  "type": "double"
                }
              }
            }
          }
        }
```
2. Run Data Prepper, then inspect the created index template in OpenSearch Dashboards:
```
GET _index_template/test-index-template
```
You will see that the index pattern in the template is only `test` and not `test-*`.
**Expected behavior**
To not truncate the index pattern
**Additional context**
| [BUG] Index template_file and template_content truncates index pattern incorrectly | https://api.github.com/repos/opensearch-project/data-prepper/issues/3432/comments | 1 | 2023-10-03T19:00:32Z | 2025-03-04T20:49:56Z | https://github.com/opensearch-project/data-prepper/issues/3432 | 1,924,747,017 | 3,432 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-39410 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>avro-1.11.0.jar</b></p></summary>
<p>Avro core components</p>
<p>Library home page: <a href="https://avro.apache.org">https://avro.apache.org</a></p>
<p>Path to dependency file: /data-prepper-plugins/kafka-connect-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.avro/avro/1.11.0/2b0c58e5b450d4f4931456952ad9520cae9c896c/avro-1.11.0.jar</p>
<p>
Dependency Hierarchy:
  - kafka-plugins-2.6.0-SNAPSHOT (Root Library)
    - kafka-schema-registry-client-7.5.0.jar
      - :x: **avro-1.11.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
When deserializing untrusted or corrupted data, it is possible for a reader to consume memory beyond the allowed constraints and thus lead to out of memory on the system.
This issue affects Java applications using Apache Avro Java SDK up to and including 1.11.2. Users should update to apache-avro version 1.11.3 which addresses this issue.
<p>Publish Date: 2023-09-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-39410>CVE-2023-39410</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rhrv-645h-fjfh">https://github.com/advisories/GHSA-rhrv-645h-fjfh</a></p>
<p>Release Date: 2023-09-29</p>
<p>Fix Resolution: org.apache.avro:avro:1.11.3;org.apache.avro:avro-android:1.11.3;org.apache.avro:avro-tools:1.11.3;avro - 1.11.3</p>
</p>
</details>
<p></p>
| CVE-2023-39410 (High) detected in avro-1.11.0.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3430/comments | 2 | 2023-10-03T16:59:09Z | 2023-11-27T18:58:52Z | https://github.com/opensearch-project/data-prepper/issues/3430 | 1,924,562,417 | 3,430 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.4.1
**BUILD NUMBER**: 73
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/6384570326
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 6384570326: Release Data Prepper : 2.4.1 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3423/comments | 4 | 2023-10-02T19:52:59Z | 2023-10-02T20:08:37Z | https://github.com/opensearch-project/data-prepper/issues/3423 | 1,922,574,138 | 3,423 |