issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 262k ⌀ | issue_title stringlengths 1 1.02k | issue_comments_url stringlengths 53 116 | issue_comments_count int64 0 2.49k | issue_created_at stringdate 1999-03-17 02:06:42 2025-06-23 11:41:49 | issue_updated_at stringdate 2000-02-10 06:43:57 2025-06-23 11:43:00 | issue_html_url stringlengths 34 97 | issue_github_id int64 132 3.17B | issue_number int64 1 215k |
|---|---|---|---|---|---|---|---|---|---|
[
"opensearch-project",
"data-prepper"
] | ## Is your feature request related to a problem? Please describe.
For some situations, we want to encrypt each Kafka topic with a different encryption key.
## Describe the solution you'd like
Update Data Prepper's Kafka buffer to support an optional encryption key.
```
buffer:
kafka:
topics:
- name: MyTopic
encryption_key: gEa68HffrNhFtJkNoY0UsD6D6W4w8KUFtYJTnte+eiY=
```
Additionally, this key could be encrypted by Amazon KMS so that we can support envelope encryption.
```
buffer:
kafka:
topics:
- name: MyTopic
encryption_key: AQIDAHhBQ4iH7RP28kWDRU1yN2K73qYEE2d8i06EBly7HoDSIwFXoO+oiW+HOlam8lfIUFwLAAAAfjB8BgkqhkiG9w0BBwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM/j9Uf9cxYv/poV0FAgEQgDuVG9jfls3Ys7dR/cRKmdkcYDJw/XzR/ZEnZwcT9e+XB1T+SxC0YHLtc33lRwoD/UV0Ot+y8oUBqMvaXg==
kms_key_id: alias/ExampleAlias
```
### Data Prepper initialization
When Data Prepper starts, it reads the pipeline configuration file. If the user provided a KMS key in the configuration, the Kafka Buffer decrypts the data encryption key using KMS.
Data Prepper holds the decrypted data key in memory for future processing.

### Receiving data and writing to Kafka
As users provide their data to a Data Prepper source, that source writes to the Kafka buffer. The Kafka Buffer encrypts each record using the decrypted data key; this is the same key decrypted during initialization. The Kafka Buffer sends the encrypted record to the Kafka topic.

### Processing data and reading from Kafka
As Data Prepper runs, the Pipeline Worker reads from the Kafka Buffer. The Kafka Buffer polls the Kafka topic for data. Each Kafka ConsumerRecord is already encrypted as described above. Thus, the Kafka Buffer decrypts this data using the same data encryption key it loaded at initialization.
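A minimal sketch of this encrypt-on-write / decrypt-on-read flow, assuming AES-GCM with the decrypted data key (class and method names here are hypothetical illustrations, not Data Prepper APIs):

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical sketch: encrypt each record with the decrypted data key,
// prepending a random IV so the consumer side can decrypt.
public class RecordCrypto {
    private static final int IV_BYTES = 12;   // recommended GCM nonce size
    private static final int TAG_BITS = 128;  // GCM authentication tag length

    public static byte[] encrypt(byte[] dataKey, byte[] record) {
        try {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(dataKey, "AES"),
                    new GCMParameterSpec(TAG_BITS, iv));
            byte[] ciphertext = cipher.doFinal(record);
            return ByteBuffer.allocate(iv.length + ciphertext.length)
                    .put(iv).put(ciphertext).array();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static byte[] decrypt(byte[] dataKey, byte[] encrypted) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            // The IV occupies the first IV_BYTES of the stored record.
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(dataKey, "AES"),
                    new GCMParameterSpec(TAG_BITS, encrypted, 0, IV_BYTES));
            return cipher.doFinal(encrypted, IV_BYTES, encrypted.length - IV_BYTES);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The round trip is symmetric: whatever key decrypts the KMS envelope at initialization is reused for every record.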

## Describe alternatives you've considered (Optional)
N/A
## Additional context
This builds on the work being done for #3322. The design could be extended for generic sink and source as well.
| Support topic-based encryption for Kafka buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/3422/comments | 1 | 2023-10-02T19:29:32Z | 2023-11-28T14:24:30Z | https://github.com/opensearch-project/data-prepper/issues/3422 | 1,922,530,169 | 3,422 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.4.1
**BUILD NUMBER**: 71
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/6383496851
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 6383496851: Release Data Prepper : 2.4.1 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3420/comments | 3 | 2023-10-02T18:00:05Z | 2023-10-02T18:04:23Z | https://github.com/opensearch-project/data-prepper/issues/3420 | 1,922,374,256 | 3,420 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
No, it is a new feature.
**Describe the solution you'd like**
Currently, ingesting data from S3 requires fixed prefixes and schemas. However, larger enterprises have data lakes for large-scale data sets (~800M records daily per data set in my case). In AWS, Glue keeps track of all the catalog tables and metadata. If we could set up our catalog as a source and maintain sync, we could remove unnecessary infrastructure and complexity in the indexing process at large scale.
As a source, a user should be able to provide Data Prepper with:
* Glue database name
* Glue table name
* Primary key fields of the table
* Shard count
* Replica count
And should then follow a process similar to this:
<img width="426" alt="Screenshot 2023-09-28 at 3 46 17 PM" src="https://github.com/opensearch-project/data-prepper/assets/30414016/45fdffbb-e06b-40cf-8d26-50f11c372fec">
Plant UML code for reference:
```
@startuml
skinparam maxMessageSize 150
autonumber
participant "OSIS" as osi
participant "Glue API" as ga
participant "OpenSearch" as os
participant "S3" as s3
osi -> ga: Get table metadata
osi -> os: Get all indices in the alias
osi -> osi: Compare indices in alias vs table partitions
loop For every missing index
osi -> os: Create index
osi -> os: Set refresh interval to -1
osi -> os: Add index to alias
osi -> s3: Get the data from the partition
osi -> os: Index the data
osi -> os: Set refresh interval back to original value
end
loop For every partition
osi -> s3: Check row count on the partition
osi -> os: Check record count in index
osi -> os: If record mismatch, set refresh interval to -1
osi -> s3: If record mismatch, get the data from the partition
osi -> os: If record mismatch, index the data
osi -> os: If record mismatch, set refresh interval back to original value
osi -> os: Purge deleted records
end
@enduml
```
As a result, in the OS cluster, we will have:
* An alias called `<database_name>_<table_name>`
* If the table is partitioned, indices that point to the alias for each partition in this format:
```
<database_name>_<table_name>_<partition1_value>_<partition2_value>_<partitionN_value>
```
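The naming scheme above could be sketched as a small helper (all names are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative helper for the proposed alias/index naming scheme:
// alias = <database>_<table>, index = alias plus partition values.
public class GlueIndexNaming {
    public static String alias(String database, String table) {
        return database + "_" + table;
    }

    // <database>_<table>_<partition1_value>_..._<partitionN_value>
    public static String partitionIndex(String database, String table,
                                        List<String> partitionValues) {
        return Stream.concat(Stream.of(alias(database, table)), partitionValues.stream())
                .collect(Collectors.joining("_"));
    }
}
```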
**Describe alternatives you've considered (Optional)**
I currently do this:
https://github.com/aws-samples/aws-s3-to-opensearch-pipeline
It's fast and it works, but an out of the box solution would be preferable.
| Allow for sync from AWS Glue Catalog to AWS OpenSearch | https://api.github.com/repos/opensearch-project/data-prepper/issues/3405/comments | 0 | 2023-09-28T22:55:02Z | 2023-10-04T19:38:47Z | https://github.com/opensearch-project/data-prepper/issues/3405 | 1,918,371,430 | 3,405 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.4.1
**BUILD NUMBER**: 70
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/6342953194
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 6342953194: Release Data Prepper : 2.4.1 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3402/comments | 3 | 2023-09-28T19:08:30Z | 2023-09-28T19:19:33Z | https://github.com/opensearch-project/data-prepper/issues/3402 | 1,918,110,655 | 3,402 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
As part of the sink initialization process, DataPrepper will attempt to create static indices. This also serves as a permissions check for the DataPrepper -> AOS integration. However, if a dynamic index pattern is used, no indices are created during initialization and instead permissions gaps are exposed at runtime by shutting down the pipeline:
```
ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [***] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: org.opensearch.client.opensearch._types.OpenSearchException: Request failed: [security_exception] no permissions for [cluster:monitor/state] and User [name=***, backend_roles=[***], requestedTenant=null]
```
**To Reproduce**
Create a pipeline with an FGAC-enabled AOS sink and a dynamic index pattern. Don't provide sufficient permissions to the pipeline role to ingest data to AOS. Ingest data into the DataPrepper pipeline to cause a crash.
**Expected behavior**
Similar to the behavior of static indices, DataPrepper should avoid crashing when there is a permissions gap and instead loop while providing helpful messaging to the user so that the issue can be resolved. | [BUG] Insufficient permissions to AOS shuts down DataPrepper when a dynamic index pattern is used | https://api.github.com/repos/opensearch-project/data-prepper/issues/3393/comments | 1 | 2023-09-27T16:01:04Z | 2023-12-11T19:24:27Z | https://github.com/opensearch-project/data-prepper/issues/3393 | 1,915,892,751 | 3,393 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
A scenario was identified where incorrectly specifying `json` instead of `newline` for the S3 scan codec resulted in
```
2023-09-27T14:15:47,361 [Thread-2] ERROR org.opensearch.dataprepper.plugins.source.ScanObjectWorker - Received an exception while processing S3 objects, backing off and retrying
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:111) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:47) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:223) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:83) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:171) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:82) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:179) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:76) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56) ~[aws-core-2.20.103.jar:?]
at software.amazon.awssdk.services.s3.DefaultS3Client.headObject(DefaultS3Client.java:5541) ~[s3-2.20.67.jar:?]
at org.opensearch.dataprepper.plugins.source.S3InputFile.getMetadata(S3InputFile.java:72) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.S3InputFile.getLength(S3InputFile.java:46) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.S3ObjectWorker.doParseObject(S3ObjectWorker.java:94) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.S3ObjectWorker.lambda$parseS3Object$0(S3ObjectWorker.java:64) ~[s3-source-2.4.0.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.recordCallable(CompositeTimer.java:129) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.plugins.source.S3ObjectWorker.parseS3Object(S3ObjectWorker.java:63) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.ScanObjectWorker.processS3Object(ScanObjectWorker.java:187) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.ScanObjectWorker.startProcessingObject(ScanObjectWorker.java:160) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.ScanObjectWorker.run(ScanObjectWorker.java:106) [s3-source-2.4.0.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Suppressed: software.amazon.awssdk.core.exception.SdkClientException: Request attempt 1 failure: Unable to execute HTTP request: Timeout waiting for connection from pool
Suppressed: software.amazon.awssdk.core.exception.SdkClientException: Request attempt 2 failure: Unable to execute HTTP request: Timeout waiting for connection from pool
Suppressed: software.amazon.awssdk.core.exception.SdkClientException: Request attempt 3 failure: Unable to execute HTTP request: Timeout waiting for connection from pool
Suppressed: software.amazon.awssdk.core.exception.SdkClientException: Request attempt 4 failure: Unable to execute HTTP request: Timeout waiting for connection from pool
Suppressed: software.amazon.awssdk.core.exception.SdkClientException: Request attempt 5 failure: Unable to execute HTTP request: Timeout waiting for connection from pool
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:316) ~[httpclient-4.5.14.jar:4.5.14]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:282) ~[httpclient-4.5.14.jar:4.5.14]
at software.amazon.awssdk.http.apache.internal.conn.ClientConnectionRequestFactory$DelegatingConnectionRequest.get(ClientConnectionRequestFactory.java:92) ~[apache-client-2.20.103.jar:?]
at software.amazon.awssdk.http.apache.internal.conn.ClientConnectionRequestFactory$InstrumentedConnectionRequest.get(ClientConnectionRequestFactory.java:69) ~[apache-client-2.20.103.jar:?]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:190) ~[httpclient-4.5.14.jar:4.5.14]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[httpclient-4.5.14.jar:4.5.14]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.14.jar:4.5.14]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.14.jar:4.5.14]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.14.jar:4.5.14]
at software.amazon.awssdk.http.apache.internal.impl.ApacheSdkHttpClient.execute(ApacheSdkHttpClient.java:72) ~[apache-client-2.20.103.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient.execute(ApacheHttpClient.java:254) ~[apache-client-2.20.103.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient.access$500(ApacheHttpClient.java:104) ~[apache-client-2.20.103.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:231) ~[apache-client-2.20.103.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:228) ~[apache-client-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:63) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.executeHttpRequest(MakeHttpRequestStage.java:77) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:56) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:39) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:72) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:78) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:40) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:52) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:37) ~[sdk-core-2.20.103.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:81) ~[sdk-core-2.20.103.jar:?]
... 32 more
```
This occurred during the 51st file read, which correlates with the default connection pool size of 50 on the S3 client.
**To Reproduce**
I was not able to narrow down the exact parameters that cause this issue, but it looks to be related to medium-large sized files. I can repro it with an 8MB uncompressed newline json file.
Create a pipeline with S3 scan and codec = json. Upload a relatively large file with newline json contents over 50 times.
**Expected behavior**
The connection pool should never be exceeded. This same behavior is not seen with smaller files (5kb or so). Ideally there would be a helpful error message indicating that the codec is likely incorrect when no records are identified but a file is larger than XX bytes.
| [BUG] Incorrect codec with S3 scan results in Timeout waiting for connection from pool | https://api.github.com/repos/opensearch-project/data-prepper/issues/3390/comments | 0 | 2023-09-27T14:24:13Z | 2023-10-31T19:45:50Z | https://github.com/opensearch-project/data-prepper/issues/3390 | 1,915,709,713 | 3,390 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
DataPrepper does not allow time patterns to be in dynamic index patterns unless they are the suffix of the pattern: https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/AbstractIndexManager.java#L127
This causes the pipeline to shut down at runtime when data is sent to a sink whose index pattern violates this condition.
```
ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [***] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Time pattern can only be a suffix of an index
```
**To Reproduce**
Create a pipeline with this index pattern in an opensearch sink:
```
prefix-%{yyyy.MM.dd}-suffix
```
**Expected behavior**
It's unclear why DataPrepper does not support non-suffix time patterns. If this is not a hard restriction then I would expect this validation to be removed. If it is a hard restriction then I would expect this to be validated during initialization rather than throwing an exception and shutting down the pipeline at runtime.
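If the suffix restriction were lifted, resolving `%{...}` date-time patterns at any position in the index pattern could look like this minimal sketch (not the actual Data Prepper implementation):

```java
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: resolve %{...} date-time patterns anywhere in a dynamic index pattern,
// rather than only as a suffix.
public class IndexPatternResolver {
    private static final Pattern TIME_PATTERN = Pattern.compile("%\\{([^}]+)\\}");

    public static String resolve(String indexPattern, ZonedDateTime time) {
        Matcher matcher = TIME_PATTERN.matcher(indexPattern);
        StringBuilder result = new StringBuilder();
        while (matcher.find()) {
            String formatted = DateTimeFormatter.ofPattern(matcher.group(1)).format(time);
            matcher.appendReplacement(result, Matcher.quoteReplacement(formatted));
        }
        matcher.appendTail(result);
        return result.toString();
    }
}
```

With this approach, `prefix-%{yyyy.MM.dd}-suffix` resolves the same way a suffix pattern does today.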
| [BUG] Invalid time pattern placement in dynamic index pattern shuts down pipeline | https://api.github.com/repos/opensearch-project/data-prepper/issues/3386/comments | 1 | 2023-09-26T20:17:41Z | 2023-10-31T19:44:36Z | https://github.com/opensearch-project/data-prepper/issues/3386 | 1,914,237,149 | 3,386 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Current AWS secret extension only retrieves secret value at launching Data Prepper application while AWS secrets allows for secret value rotation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html.
**Describe the solution you'd like**
Secret value refreshing will be supported while Data Prepper is running. The secrets manager extension plugin will provide the refresher. Data Prepper core will provide the subscriber and publisher interfaces as well as the publisher implementation to be fed into individual Data Prepper pipeline plugins. Each plugin will decide whether to implement the subscriber interface and register its subscriber for component refreshment.

**Describe alternatives you've considered (Optional)**
None
**Additional context**
None
| Enhancement: Support AWS secret refreshment in data-prepper-core/data-prepper-api | https://api.github.com/repos/opensearch-project/data-prepper/issues/3382/comments | 0 | 2023-09-22T18:41:19Z | 2023-10-06T14:49:05Z | https://github.com/opensearch-project/data-prepper/issues/3382 | 1,909,366,613 | 3,382 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
S3 scan waits for the acknowledgment to be received for an object before it starts processing another. This leads to lower performance and does not utilize the Data Prepper buffer to its full extent.
**Describe the solution you'd like**
S3 Scan should continue grabbing objects and writing them to the buffer instead of waiting for the acknowledgment before processing the next object.
This will require some modifications to source coordination to allow for partitions to be put into a certain state when an ack is being waited on to ensure that no other nodes pick it up, and still allows the current node to mark the partition as completed.
A potential solution for this would be to introduce another partition status, WAITING_FOR_ACK.
When a node finishes writing the entirety of a partition to the buffer, it would put the partition in the WAITING_FOR_ACK state with an hour timeout. Once this timeout is reached, the partition becomes available for other nodes to pick up. The node that put the partition in the WAITING_FOR_ACK state would have an hour of ownership over it to complete it within the acknowledgment timeout.
The acknowledgment callback would then mark the partition as completed and do whatever else is needed. This will require new methods on the SourceCoordinator interface, such as:
```
markAsWaitingForAck(final String partitionKey, final Duration acknowledgmentTimeout)
```
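The state transitions described above could be sketched like this (illustrative only; not the actual source coordination code):

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the proposed WAITING_FOR_ACK partition lifecycle.
public class PartitionState {
    public enum Status { ASSIGNED, WAITING_FOR_ACK, COMPLETED, UNASSIGNED }

    private Status status = Status.ASSIGNED;
    private Instant reopenAt;  // when other nodes may reclaim the partition

    // Called after the entire partition has been written to the buffer.
    public void markAsWaitingForAck(Instant now, Duration acknowledgmentTimeout) {
        status = Status.WAITING_FOR_ACK;
        reopenAt = now.plus(acknowledgmentTimeout);
    }

    // The acknowledgment callback marks the partition as completed.
    public void complete() {
        status = Status.COMPLETED;
    }

    // Other nodes may only pick this partition up after the timeout elapses.
    public boolean availableToOtherNodes(Instant now) {
        return status == Status.WAITING_FOR_ACK && now.isAfter(reopenAt);
    }

    public Status getStatus() {
        return status;
    }
}
```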
| S3 Scan with acknowledgments waits for acknowledgment before processing another object | https://api.github.com/repos/opensearch-project/data-prepper/issues/3381/comments | 2 | 2023-09-22T18:06:31Z | 2023-09-26T20:34:55Z | https://github.com/opensearch-project/data-prepper/issues/3381 | 1,909,323,730 | 3,381 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Opensearch sink failure log messaging is not descriptive enough to root cause issues.
Sample log message:
```
org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - Bulk Operation Failed. Number of retries 25. Retrying...
```
**Describe the solution you'd like**
More details should be added to the log; a few suggestions:
- The reason for the operation failure.
- The response code from the OpenSearch domain.
- How many individual records failed and how many succeeded in the bulk request.
- Which index the bulk operation failed for.
- The OpenSearch domain URL.
- Request duration.
- Number of records/bytes sent in the bulk operation that failed
| Improve sink failure log messages | https://api.github.com/repos/opensearch-project/data-prepper/issues/3379/comments | 2 | 2023-09-22T00:51:15Z | 2023-12-13T21:25:26Z | https://github.com/opensearch-project/data-prepper/issues/3379 | 1,907,981,025 | 3,379 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The KeyValue processor currently writes parsed entries to a user-specified `destination` (defaulting to `"parsed_message"`) but doesn't support writing those entries directly to the event root.
**Describe the solution you'd like**
When `destination` is set to null, we write parsed entries to root. The config would look like this:
```yaml
...
processor:
key_value:
destination:
```
or
```yaml
...
processor:
key_value:
destination: null
```
**Describe alternatives you've considered (Optional)**
Unless the user specifies a destination, write to the root as the default behavior. But this would be a breaking change.
**Additional context**
`parse_json` has a similar configuration: when `target` is null, the parsed results are written to the root.
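The proposed null-destination behavior could be sketched as follows, assuming parsed entries arrive as a map (names are illustrative, not the processor's internals):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: write parsed key-value entries to a destination field, or to the
// event root when destination is null (the proposed behavior).
public class KeyValueWriter {
    public static void write(Map<String, Object> eventRoot, String destination,
                             Map<String, Object> parsedEntries) {
        if (destination == null) {
            eventRoot.putAll(parsedEntries);          // write entries to root
        } else {
            eventRoot.put(destination, new HashMap<>(parsedEntries));
        }
    }
}
```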
| Allow KeyValue processor to write parsed entries to event root | https://api.github.com/repos/opensearch-project/data-prepper/issues/3378/comments | 3 | 2023-09-21T23:17:36Z | 2023-09-25T17:04:18Z | https://github.com/opensearch-project/data-prepper/issues/3378 | 1,907,906,287 | 3,378 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The KeyValue processor writes its output maps to a `destination` field that is not the root of the event. As far as I'm aware there is not a way to move a nested field like that to the root of the event.
Example event
```
{message: "key1=val1 key2=val2"}
```
Current behavior of KeyValue processor
```
{
message: "key1=val1 key2=val2",
parsed_message: {
key1: "val1",
key2: "val2"
}
}
```
Desired output event
```
{
message: "key1=val1 key2=val2",
key1: "val1",
key2: "val2"
}
```
There are other cases beyond the KeyValue processor where moving a nested field to the event root is needed, so building it into a processor would be useful.
**Describe the solution you'd like**
The CopyValue processor should provide an optional parameter like `to_root`, which can copy a field to the event root. It should also support a way of moving all fields within a JSON object.
The configuration to get the desired output from above could look something like this:
```
processor:
- copy_values:
entries:
- from_object_key: "parsed_message"
to_root: true
```
**Describe alternatives you've considered (Optional)**
Data Prepper could support something like a Flatten Object processor to flatten nested JSON.
That config might look something like this
```
processor:
- flatten:
- key: "parsed_message"
```
And change the input to be like this
```
{
message: "key1=val1 key2=val2",
parsed_message.key1: "val1",
parsed_message.key2: "val2"
}
```
The RenameKey Processor could then be used to get the desired output
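The flatten alternative could be sketched like this (hypothetical; no such processor exists yet):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the alternative: flatten one nested object into dotted keys,
// e.g. parsed_message.key1, parsed_message.key2.
public class Flattener {
    public static Map<String, Object> flatten(Map<String, Object> event, String key) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : event.entrySet()) {
            if (entry.getKey().equals(key) && entry.getValue() instanceof Map) {
                Map<?, ?> nested = (Map<?, ?>) entry.getValue();
                for (Map.Entry<?, ?> n : nested.entrySet()) {
                    result.put(key + "." + n.getKey(), n.getValue());
                }
            } else {
                result.put(entry.getKey(), entry.getValue());
            }
        }
        return result;
    }
}
```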
**Additional context**
N/A | Support moving nested json to event root | https://api.github.com/repos/opensearch-project/data-prepper/issues/3377/comments | 2 | 2023-09-21T22:15:06Z | 2023-10-31T19:41:05Z | https://github.com/opensearch-project/data-prepper/issues/3377 | 1,907,839,713 | 3,377 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Due to a Jackson dependabot update, deserializing enums no longer works in `data-prepper-core` for any plugin.
I noticed this when Enum deserialization for the opensearch source mysteriously started failing with
```
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Cannot construct instance of `java.lang.Enum` (no Creators, like default constructor, exist): abstract types either need to be mapped to concrete types, have custom deserializer, or contain additional type information
```
for the `search_context_type`. I manually fixed that but the http sink `http_method` also does not work (not sure if it worked before). There may be other places where enums would fail to deserialize too now
**To Reproduce**
Steps to reproduce the behavior:
Create a pipeline with an http sink; `http_method` will not deserialize:
```
sink:
- http:
http_method: "POST"
url: "test-url"
```
Same fails for sqs `notification_type`
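One common workaround, independent of the Jackson version issue, is a factory that resolves enum constants case-insensitively, which a `@JsonCreator` method could delegate to. A minimal sketch without Jackson (names are illustrative):

```java
// Sketch: case-insensitive enum resolution, the kind of logic a Jackson
// @JsonCreator factory method could delegate to.
public class EnumResolver {
    public static <T extends Enum<T>> T fromString(Class<T> enumType, String value) {
        for (T constant : enumType.getEnumConstants()) {
            if (constant.name().equalsIgnoreCase(value)) {
                return constant;
            }
        }
        throw new IllegalArgumentException(
                "Unknown value '" + value + "' for " + enumType.getSimpleName());
    }

    // Hypothetical example enum, standing in for http_method's type.
    public enum HttpMethod { POST, PUT }
}
```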
| [BUG] Enums are not being deserialized correctly for all plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/3376/comments | 10 | 2023-09-21T15:31:21Z | 2023-10-05T17:06:00Z | https://github.com/opensearch-project/data-prepper/issues/3376 | 1,907,254,646 | 3,376 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Sensitive logging is performed to keep user data from being logged in Data Prepper. However, many of these sensitive logs censor more than they should, including important error messages. For example, the error message will be censored out here https://github.com/opensearch-project/data-prepper/blob/542b4517896f1f074a32575b9ee98fb737065ee2/data-prepper-plugins/failures-common/src/main/java/org/opensearch/dataprepper/plugins/dlq/s3/S3DlqWriter.java#L132
**Describe the solution you'd like**
A way to selectively censor arguments in the SENSITIVE logs by only censoring arguments in `[]` brackets. That way we can remove the brackets around arguments like error messages so they are not censored
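Censoring only bracketed arguments could be sketched with a regex replacement (illustrative; not the actual logging filter):

```java
import java.util.regex.Pattern;

// Sketch: censor only arguments wrapped in [] brackets, leaving other
// formatted arguments (e.g. error messages) readable.
public class SelectiveCensor {
    private static final Pattern BRACKETED = Pattern.compile("\\[[^\\]]*\\]");

    public static String censor(String formattedMessage) {
        return BRACKETED.matcher(formattedMessage).replaceAll("[******]");
    }
}
```

An error message left outside brackets would then survive the SENSITIVE filter intact.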
| SENSITIVE logging should selectively censor arguments instead of censoring all arguments | https://api.github.com/repos/opensearch-project/data-prepper/issues/3375/comments | 1 | 2023-09-21T15:26:07Z | 2024-05-13T21:25:43Z | https://github.com/opensearch-project/data-prepper/issues/3375 | 1,907,244,766 | 3,375 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-36479 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-servlets-11.0.12.jar</b></p></summary>
<p>Utility Servlets from Jetty</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-servlets/11.0.12/4ffa7970f9bf5df062beadff6ad7bbf669c4a2e3/jetty-servlets-11.0.12.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.0-beta-8.jar (Root Library)
- :x: **jetty-servlets-11.0.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Eclipse Jetty Canonical Repository is the canonical repository for the Jetty project. Users of the CgiServlet with a very specific command structure may have the wrong command executed. If a user sends a request to a org.eclipse.jetty.servlets.CGI Servlet for a binary with a space in its name, the servlet will escape the command by wrapping it in quotation marks. This wrapped command, plus an optional command prefix, will then be executed through a call to Runtime.exec. If the original binary name provided by the user contains a quotation mark followed by a space, the resulting command line will contain multiple tokens instead of one. This issue was patched in version 9.4.52, 10.0.16, 11.0.16 and 12.0.0-beta2.
<p>Publish Date: 2023-09-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-36479>CVE-2023-36479</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-3gh6-v5v9-6v9j">https://github.com/eclipse/jetty.project/security/advisories/GHSA-3gh6-v5v9-6v9j</a></p>
<p>Release Date: 2023-09-15</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-servlets:9.4.52.v20230823,10.0.16,11.0.16</p>
</p>
</details>
<p></p>
| CVE-2023-36479 (Medium) detected in jetty-servlets-11.0.12.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3367/comments | 4 | 2023-09-20T14:07:05Z | 2023-10-26T18:28:53Z | https://github.com/opensearch-project/data-prepper/issues/3367 | 1,905,083,413 | 3,367 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper currently has two built-in index templates - OTel trace and OTel service map. If you wish to bring your own, you currently have two options:
1. Add this as a file on the local file system and tell Data Prepper to use it.
2. Perform a PUT/POST directly on the OpenSearch cluster and not use Data Prepper to add the index template.
For some index templates, it would be nice to have the mapping in the YAML file similar to what we did for Avro schemas in the S3 sink.
**Describe the solution you'd like**
Provide a new field on the `opensearch` sink for providing the index template within the pipeline YAML itself. This new field can be called `template_content` to fit along with the existing `template_file` property.
```
sink:
  - opensearch:
      index: logs-%{yyyy-MM-dd}
      template_content: |
        {
          "template": {
            "aliases": {
              "my_logs": {}
            },
            "settings": {
              "number_of_shards": 2,
              "number_of_replicas": 1
            },
            "mappings": {
              "properties": {
                "timestamp": {
                  "type": "date",
                  "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
                },
                "value": {
                  "type": "double"
                }
              }
            }
          }
        }
```
(The above example is using a mapping from the docs: https://opensearch.org/docs/latest/im-plugin/index-templates/)
The `template_content` will be sent as either a composable index template or a v1 template. The `opensearch` sink will use the existing `template_type` property to decide.
**Describe alternatives you've considered (Optional)**
The templates could be provided as a list of templates. However, this could be problematic:
1. It could become too difficult to read.
2. It would require much more refactoring within the sink.
3. How would the `opensearch` sink know which index patterns to match on?
4. It does not match the existing `template_file` which uses a single file.
5. Similarly, it would not fit with the existing `ism_policy_file` which uses a single file.
An alternative would be to route data to different `opensearch` sinks and have a template per sink.
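The routing alternative could look roughly like this (a hypothetical pipeline fragment; route names, index names, and file paths are placeholders):

```
sink:
  - opensearch:
      routes: [ "service-a" ]
      index: service-a-logs
      template_file: /usr/share/data-prepper/templates/service-a.json
  - opensearch:
      routes: [ "service-b" ]
      index: service-b-logs
      template_file: /usr/share/data-prepper/templates/service-b.json
```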
Also, we could add support for a list as a future enhancement. | Support inline index templates in OpenSearch sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3365/comments | 0 | 2023-09-20T13:27:52Z | 2023-10-06T16:17:24Z | https://github.com/opensearch-project/data-prepper/issues/3365 | 1,905,001,990 | 3,365 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A dissect processor (similar to https://www.elastic.co/guide/en/logstash/current/plugins-filters-dissect.html#plugins-filters-dissect) would be useful when extracting fields from log messages that follow a repeated pattern. It is simpler to use and can be faster than grok in certain use cases.
| Dissect processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3362/comments | 0 | 2023-09-19T17:44:15Z | 2023-09-26T03:30:49Z | https://github.com/opensearch-project/data-prepper/issues/3362 | 1,903,459,831 | 3,362 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Many users migrating their data from one OpenSearch cluster to another will not want system indices to be moved, as there may be incompatibilities between system indices on different versions of OpenSearch. Users will still want the option to include system indices so they can migrate dashboards, etc.
**Describe the solution you'd like**
A flag in the opensearch source named `include_system_indices`. This flag will default to false; when set to true, system indices will be treated as normal indices and can be included and excluded specifically with the regex patterns as usual.
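A sketch of what the proposed configuration might look like (hypothetical; `include_system_indices` does not exist yet, and the host is a placeholder):

```
source:
  opensearch:
    hosts: ["https://localhost:9200"]
    include_system_indices: true
```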
**Describe alternatives you've considered (Optional)**
Users exclude system indices manually with
```
indices:
  exclude:
    - index_name_regex: "\..*"
```
| Exclude system indices by default and make system indices configurable for the OpenSearch Source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3360/comments | 0 | 2023-09-19T16:19:58Z | 2023-09-25T19:57:33Z | https://github.com/opensearch-project/data-prepper/issues/3360 | 1,903,333,513 | 3,360 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-40167 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-http-11.0.12.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-http/11.0.12/bf07349f47ab6b11f1329600f37dffb136d5d7c/jetty-http-11.0.12.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.0-beta-8.jar (Root Library)
- jetty-server-11.0.12.jar
- :x: **jetty-http-11.0.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/ebd3e757c341c1d9c1352431bbad7bf5db2ea939">ebd3e757c341c1d9c1352431bbad7bf5db2ea939</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Jetty is a Java based web server and servlet engine. Prior to versions 9.4.52, 10.0.16, 11.0.16, and 12.0.1, Jetty accepts the `+` character preceding the content-length value in a HTTP/1 header field. This is more permissive than allowed by the RFC and other servers routinely reject such requests with 400 responses. There is no known exploit scenario, but it is conceivable that request smuggling could result if jetty is used in combination with a server that does not close the connection after sending such a 400 response. Versions 9.4.52, 10.0.16, 11.0.16, and 12.0.1 contain a patch for this issue. There is no workaround as there is no known exploit scenario.
<p>Publish Date: 2023-09-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-40167>CVE-2023-40167</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-hmr7-m48g-48f6">https://github.com/eclipse/jetty.project/security/advisories/GHSA-hmr7-m48g-48f6</a></p>
<p>Release Date: 2023-09-15</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-http:9.4.52.v20230823,10.0.16,11.0.16,12.0.1</p>
</p>
</details>
<p></p>
| CVE-2023-40167 (Medium) detected in jetty-http-11.0.12.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3359/comments | 4 | 2023-09-19T05:07:38Z | 2023-10-26T18:28:37Z | https://github.com/opensearch-project/data-prepper/issues/3359 | 1,902,213,177 | 3,359 |
[
"opensearch-project",
"data-prepper"
] | I launched a data-prepper pod on an EKS cluster to send traces from an otel-collector to an AWS managed OpenSearch.
The traces are being sent fine, but the Data Prepper log keeps printing an error for each record that seems related to JSON parsing:
```
2023-09-18T19:48:33,795 [raw-pipeline-processor-worker-5-thread-1] DEBUG org.opensearch.dataprepper.pipeline.ProcessWorker - Pipeline Worker: Submitting 7 processed records to sinks
line 1:0 token recognition error at: 'spanId'
line 1:6 mismatched input '<EOF>' expecting {Function, Integer, Float, Boolean, 'null', JsonPointer, EscapedJsonPointer, VariableIdentifier, String, NOT, '-', '(', '{', OTHER}
```
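For what it's worth, these two lines resemble errors from Data Prepper's conditional-expression grammar, which requires event keys to be written as JSON pointers. Purely as an illustration (not a confirmed root cause of this report), a condition written as a bare `spanId` fails to parse, while the pointer form does:

```
route:
  - error-spans: '/spanId != null'   # JSON-pointer form parses; a bare "spanId" does not
```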
**Steps to reproduce the behavior:**
1. Installed Istio as a service mesh.
2. Installed otel-collector otel/opentelemetry-collector:0.85.0 with a zipkin receiver
3. Configured Istio to send trace data to the otel-collector.
4. Otel-collector has a pipeline with zipkin as a receiver and data-prepper as exporter.
5. Data-prepper has a pipelines config as described in this project (https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics.yml) configured to the managed OpenSearch.
**Expected behavior**
No error logs or a log that allows me to fix the problem.
**Screenshots**
otel-collector config
<img width="901" alt="Screenshot 2023-09-18 at 14 09 02" src="https://github.com/opensearch-project/data-prepper/assets/145393929/aae1aba6-5974-40db-a73e-af16d30a7048">
data-prepper config
<img width="974" alt="Screenshot 2023-09-18 at 14 09 45" src="https://github.com/opensearch-project/data-prepper/assets/145393929/4c5f42b0-75f5-4030-8f81-155d21d19879">
**Environment (please complete the following information):**
- EKS. Kubernetes version: 1.27.
- Node OS: Amazon Linux 2. Kernel: 5.10.186-179.751.amzn2.x86_64
- Data-prepper version: 2.4.0
- Otel-collector version: 0.85.0
- Istio version: 1.18.2
| [BUG] JSON parsing errors in raw processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3357/comments | 6 | 2023-09-18T20:13:25Z | 2023-10-20T20:50:19Z | https://github.com/opensearch-project/data-prepper/issues/3357 | 1,901,667,306 | 3,357 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It appears that the Data Prepper image has grown quite large.
```
ls -l release/archives/linux/build/distributions/ | awk '{print $5,$9}'
664652152 opensearch-data-prepper-2.5.0-SNAPSHOT-linux-x64.tar.gz
857129267 opensearch-data-prepper-jdk-2.5.0-SNAPSHOT-linux-x64.tar.gz
```
Even without the JDK, it comes to 633 MB.
**Describe the solution you'd like**
Find why it is so large and see how we can reduce the size.
## Tasks
- [x] #4035
- [ ] Others
| Reduce the Data Prepper tar.gz and Docker image sizes | https://api.github.com/repos/opensearch-project/data-prepper/issues/3356/comments | 6 | 2023-09-18T19:52:14Z | 2024-03-21T16:22:55Z | https://github.com/opensearch-project/data-prepper/issues/3356 | 1,901,638,669 | 3,356 |
[
"opensearch-project",
"data-prepper"
] | The current Dockerfile builds from `eclipse-temurin:17-jdk-alpine` which is not available for ARM. Update to use an ARM image.
The `eclipse-temurin:17-jdk-jammy` image works for ARM and I've tested it locally. | Support a local ARM image build | https://api.github.com/repos/opensearch-project/data-prepper/issues/3352/comments | 1 | 2023-09-18T18:59:54Z | 2023-09-26T16:10:20Z | https://github.com/opensearch-project/data-prepper/issues/3352 | 1,901,563,870 | 3,352 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Split from #3340. AWS extension plugins usually require IAM credentials, which might be pulled from the AWSPlugin in the same way as pipeline plugins.
**Describe the solution you'd like**
Proper refactoring of the ExtensionPlugin constructor and methods.
| Use AWSPlugin IAM credentials in other extension plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/3350/comments | 0 | 2023-09-18T14:40:37Z | 2023-10-06T14:53:23Z | https://github.com/opensearch-project/data-prepper/issues/3350 | 1,901,096,970 | 3,350 |
[
"opensearch-project",
"data-prepper"
] | ### Is your feature request related to a problem?
Yes. The S3 log pipeline listens for Amazon SQS notifications generated via **EventBridge** and pulls data from S3 buckets. I am getting an invalid body that cannot be parsed into S3EventNotification: `Unrecognized field "version" (class org.opensearch.dataprepper.plugins.source.S3EventNotification), not marked as ignorable (one known property: "Records"])`
### What solution would you like?
I want any SQS event generated via S3-SQS, S3-SNS, or **S3-EventBridge-SQS** to be parsed by the `s3` source in the ingest pipeline.
### What alternatives have you considered?
Right now I don't have any alternative.
### Do you have any additional context?
```
2023-09-16T11:58:29.321 [Thread-11] ERROR org.opensearch.dataprepper.plugins.source.parser.S3EventNotificationParser - SQS message with message ID:414afe99-8914-4fb4-b9ed-782c228a0ddb has invalid body which cannot be parsed into S3EventNotification. Unrecognized field "version" (class org.opensearch.dataprepper.plugins.source.S3EventNotification), not marked as ignorable (one known property: "Records"])
 at [Source: UNKNOWN; byte offset: #UNKNOWN] (through reference chain: org.opensearch.dataprepper.plugins.source.S3EventNotification["version"]).
```
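For context, the two notification shapes differ at the top level; the S3 source currently expects the classic `Records` form, while EventBridge wraps the event differently. Abbreviated examples with illustrative bucket/key values:

```
// Classic S3 -> SQS notification (what the source parses today)
{ "Records": [ { "s3": { "bucket": { "name": "my-bucket" }, "object": { "key": "logs/app.log" } } } ] }

// S3 -> EventBridge -> SQS notification (fails with the error above)
{
  "version": "0",
  "detail-type": "Object Created",
  "source": "aws.s3",
  "detail": {
    "bucket": { "name": "my-bucket" },
    "object": { "key": "logs/app.log" }
  }
}
```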
| [FEATURE] Support various EventBridge messages in S3 source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3426/comments | 4 | 2023-09-16T15:30:38Z | 2024-06-11T09:04:39Z | https://github.com/opensearch-project/data-prepper/issues/3426 | 1,923,256,893 | 3,426 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-42503 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.23.0.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: /data-prepper-plugins/grok-processor/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.23.0/4af2060ea9b0c8b74f1854c6cafe4d43cfc161fc/commons-compress-1.23.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.23.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Improper Input Validation, Uncontrolled Resource Consumption vulnerability in Apache Commons Compress in TAR parsing. This issue affects Apache Commons Compress: from 1.22 before 1.24.0.
Users are recommended to upgrade to version 1.24.0, which fixes the issue.
A third party can create a malformed TAR file by manipulating file modification times headers, which when parsed with Apache Commons Compress, will cause a denial of service issue via CPU consumption.
In version 1.22 of Apache Commons Compress, support was added for file modification times with higher precision (issue # COMPRESS-612 [1]). The format for the PAX extended headers carrying this data consists of two numbers separated by a period [2], indicating seconds and subsecond precision (for example “1647221103.5998539”). The impacted fields are “atime”, “ctime”, “mtime” and “LIBARCHIVE.creationtime”. No input validation is performed prior to the parsing of header values.
Parsing of these numbers uses the BigDecimal [3] class from the JDK which has a publicly known algorithmic complexity issue when doing operations on large numbers, causing denial of service (see issue # JDK-6560193 [4]). A third party can manipulate file time headers in a TAR file by placing a number with a very long fraction (300,000 digits) or a number with exponent notation (such as “9e9999999”) within a file modification time header, and the parsing of files with these headers will take hours instead of seconds, leading to a denial of service via exhaustion of CPU resources. This issue is similar to CVE-2012-2098 [5].
[1]: https://issues.apache.org/jira/browse/COMPRESS-612
[2]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/pax.html#tag_20_92_13_05
[3]: https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html
[4]: https://bugs.openjdk.org/browse/JDK-6560193
[5]: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-2098
Only applications using CompressorStreamFactory class (with auto-detection of file types), TarArchiveInputStream and TarFile classes to parse TAR files are impacted. Since this code was introduced in v1.22, only that version and later versions are impacted.
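As an illustration of the parsing the advisory describes, a well-formed PAX time value decomposes into seconds and nanoseconds via BigDecimal roughly like this (a sketch, not the commons-compress source):

```java
import java.math.BigDecimal;

// Sketch: splitting a PAX "seconds.subseconds" time value with BigDecimal.
public class PaxTimeSketch {
    public static void main(String[] args) {
        BigDecimal ts = new BigDecimal("1647221103.5998539");
        long seconds = ts.longValue();                       // whole seconds
        long nanos = ts.remainder(BigDecimal.ONE)            // fractional part
                       .movePointRight(9)                    // scale to nanoseconds
                       .longValue();
        System.out.println(seconds + "s + " + nanos + "ns");
        // -> 1647221103s + 599853900ns
        // A crafted value such as "9e9999999" or a 300,000-digit fraction makes
        // this kind of BigDecimal arithmetic extremely expensive, which is the
        // CPU-exhaustion vector described above.
    }
}
```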
<p>Publish Date: 2023-09-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-42503>CVE-2023-42503</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread/5xwcyr600mn074vgxq92tjssrchmc93c">https://lists.apache.org/thread/5xwcyr600mn074vgxq92tjssrchmc93c</a></p>
<p>Release Date: 2023-09-14</p>
<p>Fix Resolution: 1.24.0</p>
</p>
</details>
<p></p>
| CVE-2023-42503 (Medium) detected in commons-compress-1.23.0.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3347/comments | 0 | 2023-09-15T19:14:50Z | 2023-09-25T15:19:57Z | https://github.com/opensearch-project/data-prepper/issues/3347 | 1,898,948,823 | 3,347 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper pods cannot (re)start (specifically the opensearch plugin) when there is an index named like an alias that is managed by Data Prepper. This bug occurs when no index alias exists (for example, if all OTel span indices were deleted), ingestion to Data Prepper is still ongoing, and Data Prepper then restarts.
**To Reproduce**
Steps to reproduce the behavior:
1. Setup opensearch and data prepper
2. Ingest OTel traces/spans
3. Delete all otel-v1-apm-span-.* indices (otel-v1-apm-span alias is removed automatically when deleting the indices)
4. Ingest OTel traces/spans (which creates otel-v1-apm-span index)
5. Try to restart data prepper pods. Pods fail to start with the following error log (full stack trace below)
`An index exists with the same name as the reserved index alias name [otel-v1-apm-span], please delete or migrate the existing index`
**Expected behavior**
Data Prepper should be able to start in situations where customers did not do anything obviously wrong.
One mitigation idea would be to automatically rename the otel-v1-apm-span index to otel-v1-apm-span-000001 and add an otel-v1-apm-span alias pointing to the renamed index.
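That mitigation could also be applied by hand; a sketch using the OpenSearch `_reindex` and `_aliases` APIs (index names follow this report, and `is_write_index` is an assumption about the desired rollover setup):

```
POST _reindex
{ "source": { "index": "otel-v1-apm-span" }, "dest": { "index": "otel-v1-apm-span-000001" } }

DELETE otel-v1-apm-span

POST _aliases
{ "actions": [ { "add": { "index": "otel-v1-apm-span-000001", "alias": "otel-v1-apm-span", "is_write_index": true } } ] }
```

Note the original index must be deleted before the alias is added, since an alias cannot share a name with an existing index.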
**Stack trace**
```
2023-09-13T12:55:45,750 [raw-pipeline-sink-worker-8-thread-1] ERROR org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink due to a configuration error.
org.opensearch.dataprepper.model.plugin.InvalidPluginConfigurationException: An index exists with the same name as the reserved index alias name [otel-v1-apm-span], please delete or migrate the existing index
at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndex(AbstractIndexManager.java:258) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.setupIndex(AbstractIndexManager.java:198) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitializeInternal(OpenSearchSink.java:180) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:145) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:49) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.isReady(Pipeline.java:195) ~[data-prepper-core-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:243) ~[data-prepper-core-2.3.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
2023-09-13T12:55:45,756 [raw-pipeline-sink-worker-8-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [raw-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.RuntimeException: An index exists with the same name as the reserved index alias name [otel-v1-apm-span], please delete or migrate the existing index
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) [data-prepper-core-2.3.2.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.lang.RuntimeException: An index exists with the same name as the reserved index alias name [otel-v1-apm-span], please delete or migrate the existing index
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:152) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:49) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.isReady(Pipeline.java:195) ~[data-prepper-core-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:243) ~[data-prepper-core-2.3.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
... 2 more
Caused by: org.opensearch.dataprepper.model.plugin.InvalidPluginConfigurationException: An index exists with the same name as the reserved index alias name [otel-v1-apm-span], please delete or migrate the existing index
at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndex(AbstractIndexManager.java:258) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.setupIndex(AbstractIndexManager.java:198) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitializeInternal(OpenSearchSink.java:180) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:145) ~[opensearch-2.3.2.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:49) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.isReady(Pipeline.java:195) ~[data-prepper-core-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:243) ~[data-prepper-core-2.3.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
... 2 more
```
**Environment:**
- Version 2.1.1 (data prepper)
| [BUG] Data prepper cannot start if otel-v1-apm-span index exists | https://api.github.com/repos/opensearch-project/data-prepper/issues/3342/comments | 3 | 2023-09-15T08:44:37Z | 2023-11-21T22:48:07Z | https://github.com/opensearch-project/data-prepper/issues/3342 | 1,897,982,449 | 3,342 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
S3 source is not stopped after the pipeline shutdown call. This will cause loss of SQS messages if end-to-end acknowledgements are not enabled.
**To Reproduce**
Steps to reproduce the behavior:
1. Create an S3 pipeline
2. Send SQS notifications to the pipeline
3. Make the pipeline fail by not having permissions to write to sink
4. S3 source still polls SQS notifications after pipeline shutdown is called (check logs) even after processors and sink are shutdown.
**Expected behavior**
The S3 source should stop before the processors or sink shut down.
**Additional context**
Catching `InterruptedException` clears the thread's interrupt status, so we should handle it gracefully (restore the status) so the source won't remain active after shutdown.
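A minimal sketch of the graceful handling described above: restore the interrupt flag after catching `InterruptedException` so the worker loop can observe the shutdown request. The method and timing values here are illustrative, not the actual `SqsWorker` code.

```java
public class BackoffInterruptSketch {
    // Illustrative stand-in for SqsWorker.applyBackoff: sleep, but if
    // interrupted, restore the flag instead of swallowing it.
    static boolean applyBackoff(long millis) {
        try {
            Thread.sleep(millis);
            return true; // backoff completed normally
        } catch (InterruptedException e) {
            // Restore the interrupt status so the polling loop's
            // isInterrupted() check sees the shutdown request.
            Thread.currentThread().interrupt();
            return false; // backoff aborted by shutdown
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            boolean completed = applyBackoff(10_000);
            System.out.println("completed=" + completed
                    + " interrupted=" + Thread.currentThread().isInterrupted());
        });
        worker.start();
        Thread.sleep(200);  // let the worker enter its backoff sleep
        worker.interrupt(); // simulate pipeline shutdown
        worker.join();
    }
}
```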
```
2023-08-04T18:56:40.761 [s3-log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [s3-log-pipeline] - Shutting down processor process workers.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-2] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Processor shutdown phase 1 complete.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-2] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Beginning processor shutdown phase 2, iterating until buffers empty.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-2] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Processor shutdown phase 2 complete.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-2] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Beginning processor shutdown phase 3, iterating until 1691175580761.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-1] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Processor shutdown phase 1 complete.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-1] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Beginning processor shutdown phase 2, iterating until buffers empty.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-1] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Processor shutdown phase 2 complete.
2023-08-04T18:56:40.761 [s3-log-pipeline-processor-worker-1-thread-1] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Beginning processor shutdown phase 3, iterating until 1691175580761.
2023-08-04T18:57:00.636 [Thread-10] ERROR org.opensearch.dataprepper.plugins.source.SqsWorker - Unable to process SQS messages. Processing error due to: Thread was interrupted
2023-08-04T18:57:00.637 [Thread-10] INFO org.opensearch.dataprepper.plugins.source.SqsWorker - Pausing SQS processing for 16.212 seconds due to an error in processing.
2023-08-04T18:57:00.637 [Thread-10] ERROR org.opensearch.dataprepper.plugins.source.SqsWorker - Thread is interrupted while polling SQS with retry.
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method) ~[?:?]
at org.opensearch.dataprepper.plugins.source.SqsWorker.applyBackoff(SqsWorker.java:165) ~[s3-source-2.3.2.jar:?]
```
| [BUG] S3 source does not stop on pipeline shutdown | https://api.github.com/repos/opensearch-project/data-prepper/issues/3341/comments | 0 | 2023-09-14T18:58:29Z | 2023-09-15T18:26:11Z | https://github.com/opensearch-project/data-prepper/issues/3341 | 1,897,136,435 | 3,341 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
In OpenSearch you can use ingest pipelines to process data. Fluent Bit, Logstash, etc. support this via a property on the output/sink configuration; Data Prepper does not.
**Describe the solution you'd like**
ingest_pipeline option for the opensearch sink
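A sketch of what the requested option might look like on the sink. The `ingest_pipeline` parameter name is taken from this request and is not an existing option; other values are placeholders:

```yaml
sink:
  - opensearch:
      hosts: ["https://opensearch:9200"]
      index: application-logs
      ingest_pipeline: my-ingest-pipeline  # requested option: ingest pipeline applied on the cluster side
```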
**Describe alternatives you've considered (Optional)**
Index templates through dataprepper, but this is overkill imho.
**Additional context**
N/A | Opensearch sink: support ingest pipeline configuration option | https://api.github.com/repos/opensearch-project/data-prepper/issues/3336/comments | 2 | 2023-09-14T15:01:12Z | 2023-10-24T20:51:10Z | https://github.com/opensearch-project/data-prepper/issues/3336 | 1,896,765,436 | 3,336 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Other log forwarders support the use of environment variables in configs. Logstash and Fluent Bit both support the `${ENV_VAR}` notation in their config files. This is useful in Docker/Kubernetes deployments where several instances might share the same config. The Data Prepper docs don't mention this, so I presume it's not supported yet.
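For illustration, here is the requested notation as it appears in Fluent Bit and Logstash, applied to a Data Prepper pipeline file. This is hypothetical: Data Prepper does not document this substitution, and the variable names are placeholders:

```yaml
log-pipeline:
  source:
    http:
  sink:
    - opensearch:
        hosts: ["${OPENSEARCH_HOST}"]
        username: "${OPENSEARCH_USERNAME}"
        password: "${OPENSEARCH_PASSWORD}"
```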
**Describe the solution you'd like**
Use environment variables in the configs like Fluent-bit and Logstash do
**Describe alternatives you've considered (Optional)**
None
**Additional context**
None
| Use environment variables in data prepper configs | https://api.github.com/repos/opensearch-project/data-prepper/issues/3335/comments | 9 | 2023-09-14T11:54:22Z | 2025-01-20T14:57:40Z | https://github.com/opensearch-project/data-prepper/issues/3335 | 1,896,409,596 | 3,335 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some other log forwarders support flexible scripting; for example, Fluent Bit supports Lua for operations more complex than the pipeline syntax allows. This would let users build complex handling and extend the processing capabilities. We have used this to implement more complex conditional logic.
**Describe the solution you'd like**
Lua/Javascript/whatever language would be really nice to be able to use. It needs to be performant and needs to fit the data prepper architecture.
**Describe alternatives you've considered (Optional)**
Using fluent-bit itself - but I have found it less reliable than data prepper or logstash in some circumstances.
**Additional context**
None
| Processor scripting language support | https://api.github.com/repos/opensearch-project/data-prepper/issues/3334/comments | 0 | 2023-09-14T11:49:10Z | 2023-10-24T19:53:39Z | https://github.com/opensearch-project/data-prepper/issues/3334 | 1,896,401,329 | 3,334 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The lumberjack protocol is used in Filebeat/Logstash setups to forward logs between instances. This is very useful for larger architectures. It would be nice to implement this for Data Prepper as well so Data Prepper can be used as an intermediate or endpoint before sending the data to OpenSearch and can fully replace Logstash.
**Describe the solution you'd like**
Implement input/sink
**Describe alternatives you've considered (Optional)**
http - overhead might be an issue
**Additional context**
none
| Implement lumberjack protocol for input/sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3333/comments | 2 | 2023-09-14T10:07:07Z | 2023-10-24T20:53:14Z | https://github.com/opensearch-project/data-prepper/issues/3333 | 1,896,216,876 | 3,333 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The forward protocol is used in Fluentd/Fluent Bit setups to forward logs between instances. This is very useful for larger architectures. It would be nice to implement this for Data Prepper as well so Data Prepper can be used as an intermediate or endpoint before sending the data to OpenSearch.
**Describe the solution you'd like**
Implement input/sink
**Describe alternatives you've considered (Optional)**
http - overhead might be an issue
**Additional context**
none
| Implement fluent forward protocol for input/sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3332/comments | 6 | 2023-09-14T10:07:02Z | 2023-10-24T19:50:59Z | https://github.com/opensearch-project/data-prepper/issues/3332 | 1,896,216,745 | 3,332 |
[
"opensearch-project",
"data-prepper"
] | Update to Gradle 8.4 or higher. Gradle 8.4 adds support for Java 21.
https://docs.gradle.org/8.4/release-notes.html | Update to the Gradle 8.x version which supports Java 21. Gradle 8.3 is supporting up to Java 20. | https://api.github.com/repos/opensearch-project/data-prepper/issues/3330/comments | 0 | 2023-09-13T14:57:29Z | 2023-11-01T14:56:08Z | https://github.com/opensearch-project/data-prepper/issues/3330 | 1,894,718,305 | 3,330 |
[
"opensearch-project",
"data-prepper"
] | Java 21 is the latest LTS release and available Sep 2023. Let's build Data Prepper on this version to ensure compatibility. | Start building Data Prepper on Java 21 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3329/comments | 0 | 2023-09-13T14:57:24Z | 2023-11-28T14:24:33Z | https://github.com/opensearch-project/data-prepper/issues/3329 | 1,894,718,078 | 3,329 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
All plugin types are currently in one directory: https://github.com/opensearch-project/data-prepper/tree/main/data-prepper-plugins
It would be nice to split them into subdirectories for input, processor, sink, etc. The goal is to give users a better overview.
**Describe the solution you'd like**
Create multiple directories
**Describe alternatives you've considered (Optional)**
None
**Additional context**
None | Split plugins in input, processor, sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3328/comments | 4 | 2023-09-13T14:52:39Z | 2023-10-28T11:24:35Z | https://github.com/opensearch-project/data-prepper/issues/3328 | 1,894,708,710 | 3,328 |
[
"opensearch-project",
"data-prepper"
] | ## Use-case
Currently, the only buffer available with Data Prepper is the `bounded_blocking` buffer, which stores events in memory. This can lead to data loss if a pipeline crashes or the buffer overflows. A disk-based buffer is required to prevent this data loss.
This proposal is to implement a Kafka buffer. Kafka offers robust buffering capabilities by persistently storing data on disk across multiple nodes, ensuring high availability and fault tolerance.
## Basic Configuration
The buffer will:
- write incoming bytes to Kafka
- consume from Kafka
- call back to the pipeline source to deserialize bytes
## Sample configuration
```
buffer:
kafka_buffer:
bootstrap_servers:
- 127.0.0.1:9093
acknowledgments: true
topic:
name: "pipeline-buffer"
group_id: "kafka-group"
workers: 2
authentication:
sasl_plaintext:
username: admin
password: admin-secret
```
The configuration will be similar to that of the Kafka source and sink. Notably, only one topic will be provided, and `serde_format` will not be configurable, as the buffer will read and write raw bytes. Attributes that were previously set per topic, such as `workers`, will become attributes of the plugin rather than of the topic.
## Detailed Process
- **Producer and Consumer Logic**: Reuse the logic from Kafka Source/Sink for both writing to and reading from the Kafka buffer.
- **Data Writing**: Sources will write data to the buffer in raw bytes format.
- **Deserialization**: To optimize performance and avoid re-serializing events, sources will implement the `RawByteHandler` interface. This interface will include a `deserializeBytes()` function, which the Kafka buffer will call back to when reading data.
- **Compatibility**: Only push-based sources will be compatible with the Kafka buffer. Pull-based sources, like S3, will not be supported. Incompatible configurations will trigger an error message during pipeline startup.
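The deserialization callback above might look roughly like this. This is a self-contained sketch: the event type is simplified to `String`, the newline-delimited format is just an example, and the real interface name and signature are not final:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class RawByteHandlerSketch {
    // Hypothetical shape of the proposed callback: the buffer hands raw bytes
    // back to the source, which knows how to turn them into events.
    interface RawByteHandler {
        List<String> deserializeBytes(byte[] bytes);
    }

    public static void main(String[] args) {
        // Example: a source whose wire format is newline-delimited JSON.
        RawByteHandler jsonLinesSource = bytes ->
                Arrays.asList(new String(bytes, StandardCharsets.UTF_8).split("\n"));

        byte[] consumedFromKafka = "{\"a\":1}\n{\"b\":2}".getBytes(StandardCharsets.UTF_8);
        System.out.println(jsonLinesSource.deserializeBytes(consumedFromKafka));
    }
}
```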
## Encryption
The Kafka buffer will offer optional encryption via KMS:
- Before writing to the buffer, the KMS `GenerateDataKeyPair` API will be invoked to obtain a data key pair.
- The KMS `Encrypt` API will then encrypt the private key, which will be sent to Kafka alongside the encrypted data.
- During data reading, the KMS `Decrypt` API will decrypt the private key, which will then decrypt the data.
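The encrypt/decrypt round trip can be sketched locally. Here the KMS calls are replaced by a locally generated AES data key purely for illustration — the proposal uses KMS-managed keys, and the algorithm, mode, and key size below are assumptions (a real implementation would use an authenticated cipher mode, not ECB):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class EnvelopeEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the KMS-provided data key.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey dataKey = keyGen.generateKey();

        byte[] record = "buffered event bytes".getBytes(StandardCharsets.UTF_8);

        // Producer side: encrypt the record before writing to the topic.
        Cipher encrypt = Cipher.getInstance("AES");
        encrypt.init(Cipher.ENCRYPT_MODE, dataKey);
        byte[] onTopic = encrypt.doFinal(record);

        // Consumer side: decrypt with the same data key after polling.
        Cipher decrypt = Cipher.getInstance("AES");
        decrypt.init(Cipher.DECRYPT_MODE, dataKey);
        byte[] roundTrip = decrypt.doFinal(onTopic);

        System.out.println(new String(roundTrip, StandardCharsets.UTF_8));
    }
}
```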
## Metrics
The Kafka buffer will incorporate the standard buffer metrics, as well as the metrics reported by Kafka Source/Sink:
- numberOfPositiveAcknowledgements
- numberOfNegativeAcknowledgements
- numberOfRecordsFailedToParse
- numberOfBufferSizeOverflows
- numberOfPollAuthErrors
- numberOfRecordsCommitted
- numberOfRecordsConsumed
- numberOfBytesConsumed | Use Kafka as a buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/3322/comments | 5 | 2023-09-11T21:30:07Z | 2023-12-01T11:52:58Z | https://github.com/opensearch-project/data-prepper/issues/3322 | 1,891,295,906 | 3,322 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
It is possible for an NPE to occur when the global state Map for S3 scan contains a key for the bucket but the value is null (https://github.com/opensearch-project/data-prepper/blob/cd194c167233287b9be7fc83699a925cc6e44409/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/S3ScanPartitionCreationSupplier.java#L104)
```
2023-09-08T15:42:56.313 [Thread-10] ERROR org.opensearch.dataprepper.plugins.source.ScanObjectWorker - Received an exception while processing S3 objects, backing off and retrying
java.lang.NullPointerException: text
at java.util.Objects.requireNonNull(Objects.java:246) ~[?:?]
at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1945) ~[?:?]
at java.time.Instant.parse(Instant.java:395) ~[?:?]
at org.opensearch.dataprepper.plugins.source.S3ScanPartitionCreationSupplier.listFilteredS3ObjectsForBucket(S3ScanPartitionCreationSupplier.java:104) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.S3ScanPartitionCreationSupplier.apply(S3ScanPartitionCreationSupplier.java:87) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.S3ScanPartitionCreationSupplier.apply(S3ScanPartitionCreationSupplier.java:32) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.sourcecoordination.LeaseBasedSourceCoordinator.getNextPartition(LeaseBasedSourceCoordinator.java:153) ~[data-prepper-core-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.ScanObjectWorker.startProcessingObject(ScanObjectWorker.java:128) ~[s3-source-2.4.0.jar:?]
at org.opensearch.dataprepper.plugins.source.ScanObjectWorker.run(ScanObjectWorker.java:106) ~[s3-source-2.4.0.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
```
**Expected behavior**
No NullPointerException
** Steps to reproduce **
Configure an s3 scan pipeline with the same bucket duplicated twice
```
- bucket:
name: "my-bucket"
- bucket:
name: "my-bucket"
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] S3 Scan hits NPE when bucket key has a null value | https://api.github.com/repos/opensearch-project/data-prepper/issues/3316/comments | 0 | 2023-09-08T18:45:27Z | 2023-09-12T15:09:48Z | https://github.com/opensearch-project/data-prepper/issues/3316 | 1,888,161,028 | 3,316 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The Gatling performance tests should be able to run against Amazon HTTP endpoints that are authenticated by SigV4. This would allow running against Amazon OpenSearch Ingestion, since that service makes use of Data Prepper. It can also set the groundwork for running tests against other authentication mechanisms such as HTTP Basic Auth and mTLS (if supported in the future).
**Describe the solution you'd like**
Provide a new property to configure the authentication:
```
-Dauthentication=aws_sigv4
```
When set to SigV4, make use of two other properties to configure the region and service name: `-Daws_region` and `-Daws_service`.
| Support Gatling tests using AWS sigV4 signing | https://api.github.com/repos/opensearch-project/data-prepper/issues/3311/comments | 0 | 2023-09-07T01:41:09Z | 2023-09-07T16:29:16Z | https://github.com/opensearch-project/data-prepper/issues/3311 | 1,884,970,363 | 3,311 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently it seems like all objects from the s3 sink are sent using the same prefix, with only date-time being configurable. This means in order to retrieve a subset of events, e.g. logs from a specific hostname, you need to query all events for the time period.
**Describe the solution you'd like**
We would like to send events to different s3 object prefixes based on specific event fields, for example, hostname. This makes searching events in s3 simpler and cheaper as you can directly query the relevant subset of events.
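A sketch of the requested configuration. The `${/hostname}` placeholder is hypothetical — it borrows Data Prepper's format-string style for reading an event field — and the other values are placeholders:

```yaml
sink:
  - s3:
      bucket: my-log-bucket
      object_key:
        # hypothetical: substitute the event's hostname field into the prefix
        path_prefix: "logs/${/hostname}/%{yyyy}/%{MM}/%{dd}/"
```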
**Describe alternatives you've considered (Optional)**
We could potentially use separate sinks for each subset of logs but this is not really dynamic or scalable.
**Additional context**
N/A
| Allow using event fields in s3 sink object_key | https://api.github.com/repos/opensearch-project/data-prepper/issues/3310/comments | 13 | 2023-09-07T01:11:19Z | 2024-07-09T06:40:31Z | https://github.com/opensearch-project/data-prepper/issues/3310 | 1,884,947,715 | 3,310 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
When Data Prepper is starting, if a sink fails to initialize, it does not start the source. This is by design to avoid taking requests if the server may not come up. However, one downside with the current implementation is that the server itself does not respond. Ideally, the server could be up, but unable to accept requests.
**Describe the solution you'd like**
When Data Prepper is starting, start servers but do not allow them to accept requests. Instead, they could respond with 503 Service Unavailable.
To implement, this might require some API changes to `Source` to allow the pipeline to start before it is ready. A new `void start();` method could be used to perform any start-up work before the `Buffer` is available.
This can apply for all the HTTP/gRPC sources.
**Describe alternatives you've considered (Optional)**
Perhaps the server could return a 4xx response, but none seem quite right.
| Data Prepper servers do not process requests at all if pipeline is inoperable at startup | https://api.github.com/repos/opensearch-project/data-prepper/issues/3309/comments | 0 | 2023-09-06T23:06:16Z | 2023-09-07T01:34:35Z | https://github.com/opensearch-project/data-prepper/issues/3309 | 1,884,868,030 | 3,309 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The Gatling performance tests cannot run against an HTTPS endpoint. They also always use the `/log/ingest` path, even though this is now configurable.
**Describe the solution you'd like**
Continue the current pattern of using Java system properties to configure these values.
* `-Dprotocol=https`
* `-Dpath=/my/path`
| Enable Gatling HTTPS support and path configuration | https://api.github.com/repos/opensearch-project/data-prepper/issues/3308/comments | 0 | 2023-09-06T18:38:07Z | 2023-09-07T16:29:16Z | https://github.com/opensearch-project/data-prepper/issues/3308 | 1,884,552,629 | 3,308 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
According to https://github.com/opensearch-project/data-prepper/tree/main/data-prepper-plugins/otel-trace-source#authentication-configurations, the server supports HTTP basic authentication. We would like to use alternatives such as OAuth2 or mTLS.
**Describe alternatives you've considered (Optional)**
Using OAuth2 or mTLS.
| Using oauth2 or MTLS for authentication | https://api.github.com/repos/opensearch-project/data-prepper/issues/3306/comments | 8 | 2023-09-05T15:26:23Z | 2023-09-13T18:36:09Z | https://github.com/opensearch-project/data-prepper/issues/3306 | 1,882,203,969 | 3,306 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The [`dependabot.yml`](https://github.com/opensearch-project/data-prepper/blob/main/.github/dependabot.yml) file is far out of date. It does not scan all projects.
Some examples of missing projects:
* `data-prepper-main`
* `data-prepper-test-common`
* `anomaly-detection-processor`
* `avro-codecs`
And many others.
**Expected behavior**
Each project should be included in the `dependabot.yml` file.
| [BUG] Dependabot updates are not configured for all projects | https://api.github.com/repos/opensearch-project/data-prepper/issues/3301/comments | 0 | 2023-09-01T16:57:18Z | 2023-12-01T20:01:38Z | https://github.com/opensearch-project/data-prepper/issues/3301 | 1,877,758,617 | 3,301 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently (as of release v2.4), the anomaly detection processor supports cardinality keys via the `identification_keys` property. This property creates up to 5000 models, one for each key/value combination. After the limit is reached, the plugin stops creating new models, which can be observed through the `CardinalityOverflow` metric.
**Describe the solution you'd like**
As an alternative to the existing behavior, it would be beneficial to introduce a configuration where the least recently used models are purged and new ones are created automatically. This way, the implementation becomes more dynamic for handling newly arriving cardinality keys.
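The purge behavior can be sketched with a `LinkedHashMap` in access order. Capacity, keys, and the model type are placeholders, not the plugin's actual internals:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruModelCacheSketch {
    // Evicts the least recently used model once maxModels is exceeded,
    // so newly arriving cardinality keys always get a model.
    static <K, V> Map<K, V> lruCache(int maxModels) {
        return new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxModels;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> models = lruCache(2);
        models.put("host-a", "model-a");
        models.put("host-b", "model-b");
        models.get("host-a");              // touch host-a so host-b is now LRU
        models.put("host-c", "model-c");   // evicts host-b
        System.out.println(models.keySet());
    }
}
```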
**Describe alternatives you've considered (Optional)**
Other mechanisms for purging unused models can be considered.
| Introduce configuration for purging least recently used models in anomaly detection plugin | https://api.github.com/repos/opensearch-project/data-prepper/issues/3293/comments | 1 | 2023-08-31T19:57:49Z | 2023-09-23T21:46:51Z | https://github.com/opensearch-project/data-prepper/issues/3293 | 1,876,214,617 | 3,293 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Prometheus sink, posting to AMP: `FailedHttpResponseInterceptor` wrongly interprets an HTTP 200 response as an error and throws an exception, which breaks the pipeline.
**To Reproduce**
Steps to reproduce the behavior:
1. Run Dataprepper
2. Generate metrics by running ArmeriaExportMetrics.java
3. Data is successfully posted to the AMP endpoint, but the stack trace still appears
4. See error
java.io.IOException: url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-a13b8ab8-b903-4b6a-8fe9-6722053c2469/api/v1/remote_write **, status code: 200**
at org.opensearch.dataprepper.plugins.sink.prometheus.FailedHttpResponseInterceptor.process(**FailedHttpResponseInterceptor.java:29**) ~[prometheus-sink-2.5.0-SNAPSHOT.jar:?]
at org.apache.hc.core5.http.protocol.DefaultHttpProcessor.process(DefaultHttpProcessor.java:117) ~[httpcore5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.MainClientExec.execute(MainClientExec.java:119) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ConnectExec.execute(ConnectExec.java:188) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ProtocolExec.execute(ProtocolExec.java:192) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.HttpRequestRetryExec.execute(HttpRequestRetryExec.java:96) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ContentCompressionExec.execute(ContentCompressionExec.java:152) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.RedirectExec.execute(RedirectExec.java:115) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.InternalHttpClient.doExecute(InternalHttpClient.java:170) ~[httpclient5-5.2.jar:5.2]
at org.apache.hc.client5.http.impl.classic.CloseableHttpClient.execute(CloseableHttpClient.java:106) ~[httpclient5-5.2.jar:5.2]
at org.opensearch.dataprepper.plugins.sink.prometheus.service.PrometheusSinkService.pushToEndPoint(PrometheusSinkService.java:310) ~[prometheus-sink-2.5.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.prometheus.service.PrometheusSinkService.lambda$output$0(PrometheusSinkService.java:180) ~[prometheus-sink-2.5.0-SNAPSHOT.jar:?]
at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
at org.opensearch.dataprepper.plugins.sink.prometheus.service.PrometheusSinkService.output(PrometheusSinkService.java:145) ~[prometheus-sink-2.5.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.prometheus.PrometheusSink.doOutput(PrometheusSink.java:113) ~[prometheus-sink-2.5.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:64) ~[data-prepper-api-2.5.0-SNAPSHOT.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:64) ~[data-prepper-api-2.5.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:336) ~[data-prepper-core-2.5.0-SNAPSHOT.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
**Expected behavior**
The HTTP client should handle the status code properly and not throw an exception.
| [BUG] Prometheus Sink - Posting to AMP : FailedHttpResponseInterceptor is wrongly interpreting 200 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3291/comments | 0 | 2023-08-30T22:07:40Z | 2023-09-06T21:04:36Z | https://github.com/opensearch-project/data-prepper/issues/3291 | 1,874,440,377 | 3,291 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the user is allowed to distribute pipeline configuration bodies across multiple YAML files, and Data Prepper merges them together before converting them into the internal PipelineDataflowModel. This causes any shared pipeline configuration to be applied to all pipeline config files even if it is only defined in one of them, blurring the distinction between pipeline config files and the Data Prepper config file, which is supposed to hold global settings.
**Describe the solution you'd like**
Instead of merging the pipeline config body from files together, we should first parse each file into its own PipelineDataflowModel and then merge them into a single PipelineDataflowModel.
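A minimal sketch of the proposed merge step, assuming each file parses into a map of pipeline name to pipeline body (the class and method names are illustrative, not the actual Data Prepper types):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative merge of per-file pipeline models into a single model.
// A pipeline defined in more than one file is treated as a configuration error.
public class PipelineModelMerger {
    static Map<String, Object> merge(final List<Map<String, Object>> perFileModels) {
        final Map<String, Object> merged = new HashMap<>();
        for (final Map<String, Object> model : perFileModels) {
            for (final Map.Entry<String, Object> pipeline : model.entrySet()) {
                if (merged.putIfAbsent(pipeline.getKey(), pipeline.getValue()) != null) {
                    throw new IllegalArgumentException(
                            "Pipeline defined in multiple files: " + pipeline.getKey());
                }
            }
        }
        return merged;
    }
}
```

With per-file parsing done first, any file-level setting stays scoped to its own file, and only fully parsed models are combined.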
| [core] Merging PipelineDataflowModel instead of pipeline YAML files | https://api.github.com/repos/opensearch-project/data-prepper/issues/3289/comments | 1 | 2023-08-29T21:27:58Z | 2023-08-31T20:38:16Z | https://github.com/opensearch-project/data-prepper/issues/3289 | 1,872,503,813 | 3,289 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It would be nice to have a feature that reads multiline logs, such as error stack traces or logs containing database queries where parts of the query appear on different lines. For example, I have this sample log:

```
2023-06-11T12:17:06,027 INFO [deft-faye-4dd0-Scheduler_Worker-22] {} c.t.w.a.s.j.ExtractTeradataJob$ExtractTask.executeTask(1429) - Executing sql: locking row for access
sel
s.object_id
,cast(null as varchar(128)) as databasename
,cast(null as varchar(128)) as tablename
,s.sum_currentperm
,c.blccompratio
,current_timestamp(6) as thetimestamp
from (
sel
hashrow(t.databasename,t.tablename) as object_id
,t.databasename
,t.tablename
,sum(t.currentperm) as sum_currentperm
from dbc.tablesizev t
qualify row_number() over (order by sum_currentperm desc) <= 100
group by 2,3
) s
left outer join dbc.statsv c
on s.databasename = c.databasename
and s.tablename = c.tablename
and c.statstype = 'T'
and c.validstats = 'T'
and c.columnname is null
```

I have used the grok pattern `%{TIMESTAMP_ISO8601:log_timestamp} %{LOGLEVEL:loglevel} \[%{DATA:thread}\] \{\} %{GREEDYDATA:logmessage}` to extract the log into different fields, but the logmessage field only captures the text up to "Executing sql: locking row for access"; everything below that line is not extracted into the logmessage field. I understand that GREEDYDATA only matches until a newline is encountered. Also, instead of the query being treated as a single log event, each new line is treated as a new event.
**Describe the solution you'd like**
I would like to have a `multiline` codec parameter like the one in Logstash:

```
codec:
  multiline:
    pattern:
    what:
    negate:
```

These parameters make it possible to read multiline logs, but Data Prepper does not have them. If there is an existing solution, please let me know. I have attached a screenshot which shows how multiline logs are currently being treated.
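Until such a codec exists, the desired behavior can be approximated upstream of ingestion with the usual stateful approach: buffer lines until the next line matching the event-start pattern. This is a hypothetical helper, not a Data Prepper API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Joins physical lines into logical events: a new event starts at each line
// matching startPattern; continuation lines are appended to the current event.
public class MultilineJoiner {
    private final Pattern startPattern;

    public MultilineJoiner(final String startRegex) {
        this.startPattern = Pattern.compile(startRegex);
    }

    public List<String> join(final List<String> lines) {
        final List<String> events = new ArrayList<>();
        StringBuilder current = null;
        for (final String line : lines) {
            if (startPattern.matcher(line).find()) {
                if (current != null) {
                    events.add(current.toString());
                }
                current = new StringBuilder(line);
            } else if (current != null) {
                current.append('\n').append(line);
            }
        }
        if (current != null) {
            events.add(current.toString());
        }
        return events;
    }
}
```

For the sample log above, a start regex anchored on the ISO-8601 timestamp (for example `^\d{4}-\d{2}-\d{2}T`) would keep the whole SQL statement inside one event.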
**Additional context**
<img width="715" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/126722476/b8d87992-4903-4edb-9750-b0a8a069af98">
| Support multiline logs like error stack trace, logs with db queries | https://api.github.com/repos/opensearch-project/data-prepper/issues/3284/comments | 1 | 2023-08-29T16:12:36Z | 2024-05-08T07:56:30Z | https://github.com/opensearch-project/data-prepper/issues/3284 | 1,871,996,126 | 3,284 |
[
"opensearch-project",
"data-prepper"
] | I want Data Prepper to scan S3 objects that reside in sub-directories within the bucket.
This is my pipeline config:
```
s3-logs-pipeline-payment-transactions:
delay: 100
workers: 4
buffer:
bounded_blocking:
buffer_size: 25600
batch_size: 3200
source:
s3:
acknowledgments: true
compression: gzip
codec:
json:
s3_select:
expression: "select * from s3object s LIMIT 10000"
expression_type: SQL
input_serialization: json
compression_type: gzip
json:
type: document
aws:
region: us-east-1
sts_role_arn: <>
scan:
start_time: 2023-08-01T05:31:00
end_time: 2023-08-01T06:31:00
buckets:
- bucket:
name: logs-dev-s3
sink:
- opensearch:
hosts: <>
aws:
sts_role_arn: <>
region: us-east-1
index: sampled-logs-dev
bulk_size: 4
```
How can I specify the folder path within the bucket?
This config works when the log files are in the root of the bucket,
but when the bucket has sub-directories I see the following in the logs:
```2023-08-29T11:30:33,827 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - aws_sigv4 is set, will sign requests using AWSRequestSigningApacheInterceptor
Exception in thread "main" java.lang.IllegalStateException: Problem loading keystore to create SSLContext
at org.opensearch.dataprepper.pipeline.server.SslUtil.createSslContext(SslUtil.java:35)
at org.opensearch.dataprepper.pipeline.server.HttpServerProvider.get(HttpServerProvider.java:39)
at org.opensearch.dataprepper.pipeline.server.DataPrepperServer.createServer(DataPrepperServer.java:65)
at org.opensearch.dataprepper.pipeline.server.DataPrepperServer.start(DataPrepperServer.java:58)
at org.opensearch.dataprepper.pipeline.server.DataPrepperServer$$FastClassBySpringCGLIB$$5d4867b0.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy.invokeMethod(CglibAopProxy.java:386)
at org.springframework.aop.framework.CglibAopProxy.access$000(CglibAopProxy.java:85)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:704)
at org.opensearch.dataprepper.pipeline.server.DataPrepperServer$$EnhancerBySpringCGLIB$$b82f3ac1.start(<generated>)
at org.opensearch.dataprepper.DataPrepper.execute(DataPrepper.java:89)
at org.opensearch.dataprepper.DataPrepperExecute.main(DataPrepperExecute.java:42)
Caused by: java.io.IOException: Is a directory
at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at java.base/sun.nio.ch.FileDispatcherImpl.read(FileDispatcherImpl.java:48)
at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:330)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:296)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:273)
at java.base/sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:232)
at java.base/sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:65)
at java.base/sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:107)
at java.base/sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:101)
at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:244)
at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:263)
at java.base/sun.security.util.DerValue.<init>(DerValue.java:440)
at java.base/sun.security.util.DerValue.<init>(DerValue.java:487)
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2012)
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:221)
at java.base/java.security.KeyStore.load(KeyStore.java:1473)
at org.opensearch.dataprepper.pipeline.server.SslUtil.createSslContext(SslUtil.java:24)
... 11 more
2023-08-29T11:30:34,253 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2023-08-29T11:30:34,898 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initialized OpenSearch sink
2023-08-29T11:30:34,898 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [s3-logs-pipeline-payment-transactions] Sink is ready, starting source...
2023-08-29T11:30:34,901 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.source.S3ClientBuilderFactory - Creating S3 client
2023-08-29T11:30:34,966 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.source.S3ClientBuilderFactory - Creating S3 Async client
2023-08-29T11:30:35,140 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sourcecoordinator.inmemory.InMemorySourceCoordinationStore - The in_memory source coordination store is not recommended for production workloads. It is only effective in single node environments of Data Prepper, and can run into memory limitations over time if the number of partitions is too great.
2023-08-29T11:30:35,142 [s3-logs-pipeline-payment-transactions-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [s3-logs-pipeline-payment-transactions] - Submitting request to initiate the pipeline processin```
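One possible approach, assuming the S3 scan source in your Data Prepper version supports a per-bucket `filter` with `include_prefix` (check this against your release's documentation — both the option name and the prefix path below are examples to adapt):

```yaml
scan:
  start_time: 2023-08-01T05:31:00
  end_time: 2023-08-01T06:31:00
  buckets:
    - bucket:
        name: logs-dev-s3
        filter:
          include_prefix:
            - "payment-transactions/2023/08/"
```

As an aside, the `Problem loading keystore` stack trace above comes from the core server's TLS setup (`SslUtil.createSslContext` being pointed at a directory), not from the S3 scan, so it is likely a separate configuration issue.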
| How can we scan sub-directories in S3 source? | https://api.github.com/repos/opensearch-project/data-prepper/issues/3282/comments | 1 | 2023-08-29T11:42:30Z | 2023-09-08T17:56:29Z | https://github.com/opensearch-project/data-prepper/issues/3282 | 1,871,492,170 | 3,282 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper provides support for the OpenTelemetry log format. It parses incoming logs correctly except for the [SeverityText](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#field-severitytext) field.
On incoming messages with a severity text, the field is ignored and its content is dropped during parsing.
This issue has been hinted at in an [OpenTelemetry Collector Issue 7535](https://github.com/open-telemetry/opentelemetry-collector/issues/7535).
**Describe the solution you'd like**
Support the SeverityText field just like SeverityNumber.
**Describe alternatives you've considered (Optional)**
Currently, the SevertityText would need to be mapped to a log attribute. But this is not in the spirit of the [OpenTelemetry log data model](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md).
**Additional context**
The code is just missing in the parsing at the [OTelProtoCodec](https://github.com/opensearch-project/data-prepper/blob/930a382492970f48a76310e59fc9dc0fd2a7e559/data-prepper-plugins/otel-proto-common/src/main/java/org/opensearch/dataprepper/plugins/otel/codec/OTelProtoCodec.java#L254) and the [JacksonOtelLog](https://github.com/opensearch-project/data-prepper/blob/930a382492970f48a76310e59fc9dc0fd2a7e559/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/log/JacksonOtelLog.java#L253-L263)
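For context, the OpenTelemetry log data model defines SeverityNumber ranges alongside SeverityText, so a parser supporting both could fall back to a range-derived text when SeverityText is absent. A sketch using the spec's ranges (illustrative names, not the JacksonOtelLog API):

```java
public class LogSeverity {
    public final int number;
    public final String text;

    private LogSeverity(final int number, final String text) {
        this.number = number;
        this.text = text;
    }

    // SeverityNumber ranges from the OpenTelemetry log data model:
    // 1-4 TRACE, 5-8 DEBUG, 9-12 INFO, 13-16 WARN, 17-20 ERROR, 21-24 FATAL.
    public static LogSeverity of(final int number, final String severityText) {
        if (severityText != null && !severityText.isEmpty()) {
            return new LogSeverity(number, severityText);
        }
        final String derived;
        if (number >= 21) derived = "FATAL";
        else if (number >= 17) derived = "ERROR";
        else if (number >= 13) derived = "WARN";
        else if (number >= 9) derived = "INFO";
        else if (number >= 5) derived = "DEBUG";
        else if (number >= 1) derived = "TRACE";
        else derived = "UNSPECIFIED";
        return new LogSeverity(number, derived);
    }
}
```

The key point is that when the exporter does supply a SeverityText, it should be carried through verbatim rather than dropped.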
| Support OpenTelemetry SeverityText for Logs | https://api.github.com/repos/opensearch-project/data-prepper/issues/3280/comments | 1 | 2023-08-29T05:38:23Z | 2023-09-08T17:57:34Z | https://github.com/opensearch-project/data-prepper/issues/3280 | 1,870,921,910 | 3,280 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Acknowledgements are not received by sources (such as S3/Kafka) if a pipeline has a drop processor to conditionally drop events and has "acknowledgements" enabled.
**Additional context**
This was tested with the following pipeline config. The Kafka source did not receive acknowledgements for dropped events and did not commit offsets for the consumed records.
```
version: "2"
kafka-pipeline:
source:
kafka:
acknowledgments: true
topics:
- name: "test-drop"
group_id: "Mac-group"
serde_format: "json"
auto_offset_reset: "earliest"
aws:
msk:
arn: "arn:aws:kafka:us-west-2:388303208821:cluster/gameday-cluster/8cae81cb-3b38-4b59-8f47-cadee5a804f0-6"
broker_connection_type: "public"
sts_role_arn: "arn:aws:iam::388303208821:role/pipeline-to-domain-role"
region: "us-west-2"
authentication:
sasl:
aws_msk_iam: "role"
buffer:
bounded_blocking:
batch_size: 125000
buffer_size: 1000000
processor:
- grok:
match:
message: ['%{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] %{NUMBER:response_status:int}']
- drop_events:
drop_when: '/response_status >= 900'
sink:
- stdout:
```
Sample logs in kafka topic:
```
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 400"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 300"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
{"message": "127.0.0.1 198.126.12 [10/Oct/2000:13:55:36 -0700] 200"}
```
| Acknowledgements support in Drop processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3279/comments | 0 | 2023-08-29T02:24:00Z | 2023-08-30T21:05:12Z | https://github.com/opensearch-project/data-prepper/issues/3279 | 1,870,767,629 | 3,279 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.4.0
**BUILD NUMBER**: 69
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/6001122581
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 6001122581: Release Data Prepper : 2.4.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3268/comments | 3 | 2023-08-28T14:54:00Z | 2023-08-28T15:25:57Z | https://github.com/opensearch-project/data-prepper/issues/3268 | 1,869,926,051 | 3,268 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper still builds on Gradle 7.x though Gradle 8 has been out for a few months. Also, Gradle 8.3 introduced support for Java 20.
**Describe the solution you'd like**
Fix Gradle warnings for 8.x migration. Update to 8.3 (current latest).
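The wrapper bump itself is a one-line change in `gradle/wrapper/gradle-wrapper.properties` (equivalently, `./gradlew wrapper --gradle-version 8.3`); the remaining work is fixing the deprecation warnings:

```properties
distributionUrl=https\://services.gradle.org/distributions/gradle-8.3-bin.zip
```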
| Build with Gradle 8 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3267/comments | 0 | 2023-08-28T14:29:38Z | 2023-09-01T16:36:27Z | https://github.com/opensearch-project/data-prepper/issues/3267 | 1,869,881,833 | 3,267 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The Kafka source plugin retries every 30s when it hits an AWSSchemaRegistryException, which covers both access-denied and deserialization errors. As a result, deserialization errors are also retried instead of skipping the records that failed to be deserialized.
**Expected behavior**
- Kafka source plugin should retry only if the rootCause of AWSSchemaRegistryException is AccessDenied.
- In all other cases, records that failed to get deserialized should be skipped (dlq in future) to avoid blocking stream processing.
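A sketch of the suggested check, walking the cause chain to its root (the string predicate on the root cause is illustrative; the real check would test for the actual access-denied exception type):

```java
public class SchemaRegistryErrorClassifier {
    // Walk getCause() to the deepest non-null cause, guarding against self-cycles.
    static Throwable rootCause(final Throwable throwable) {
        Throwable current = throwable;
        while (current.getCause() != null && current.getCause() != current) {
            current = current.getCause();
        }
        return current;
    }

    // Retry only when the root cause looks like an access-denied error;
    // all other failures should skip the record instead of retrying.
    static boolean isRetryable(final Exception exception) {
        final String message = String.valueOf(rootCause(exception).getMessage());
        return message.contains("AccessDenied");
    }
}
```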
| [BUG] Exception handling for aws glue schema registry | https://api.github.com/repos/opensearch-project/data-prepper/issues/3264/comments | 0 | 2023-08-26T21:16:46Z | 2023-08-28T13:55:00Z | https://github.com/opensearch-project/data-prepper/issues/3264 | 1,868,270,418 | 3,264 |
[
"opensearch-project",
"data-prepper"
] | Multiple unit tests in KafkaSourceCustomConsumerTest use Thread.sleep(10s), which increases the Data Prepper build time. The sleep appears to be present so that the acknowledgements framework can complete acknowledgements. This issue tracks improving the unit tests to mock the acknowledgement framework APIs and remove Thread.sleep(). | [Unit Test] Remove sleeps from KafkaSourceCustomConsumerTest | https://api.github.com/repos/opensearch-project/data-prepper/issues/3263/comments | 2 | 2023-08-26T00:04:12Z | 2023-10-06T15:36:37Z | https://github.com/opensearch-project/data-prepper/issues/3263 | 1,867,802,035 | 3,263 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
PR #3238 uses the Avro/Parquet schema and ignores include/exclude keys in the S3 sink. With this new change, Data Prepper should disallow configurations that have both a user-defined schema and include/exclude keys. Keeping this combination would create confusion because what the user sees in the config is not what actually happens.
**Describe the solution you'd like**
Verify that include/exclude are not used when the user defines a schema for Avro/Parquet codecs.
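A minimal sketch of the proposed validation (illustrative names; the real implementation would live in the codec configuration classes and throw the plugin's configuration exception):

```java
import java.util.List;

public class CodecConfigValidator {
    // Reject configurations that combine a user-defined schema with
    // include/exclude keys, since the schema alone decides the output fields.
    static void validate(final String schema,
                         final List<String> includeKeys,
                         final List<String> excludeKeys) {
        final boolean hasKeyFilter =
                (includeKeys != null && !includeKeys.isEmpty())
                        || (excludeKeys != null && !excludeKeys.isEmpty());
        if (schema != null && hasKeyFilter) {
            throw new IllegalArgumentException(
                    "include_keys/exclude_keys cannot be used with a user-defined schema");
        }
    }
}
```

Failing fast at pipeline parse time keeps the invalid combination from silently behaving differently than the config suggests.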
**Describe alternatives you've considered (Optional)**
Allow the combination, but that can cause confusion.
| Disallow include/exclude keys with user-defined schema | https://api.github.com/repos/opensearch-project/data-prepper/issues/3253/comments | 0 | 2023-08-25T18:47:26Z | 2023-08-25T22:42:17Z | https://github.com/opensearch-project/data-prepper/issues/3253 | 1,867,536,265 | 3,253 |
[
"opensearch-project",
"data-prepper"
] | We've made a few changes to the S3 sink behavior that need to be documented.
* The Parquet/Avro schemas now take precedence over include/exclude.
* Changes to the Parquet codec - it must use `in_memory` buffer.
Additionally, the following can help users:
* More details on auto-schema generation
* Recommendations for schemas: nullable
| [Documentation] Update documentation for S3 sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3252/comments | 2 | 2023-08-25T18:40:02Z | 2023-08-29T20:45:57Z | https://github.com/opensearch-project/data-prepper/issues/3252 | 1,867,526,891 | 3,252 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
With acknowledgments enabled, if the KafkaConsumer encounters a deserialization error while consuming a record, the Kafka source plugin stops committing offsets for that partition. Subsequent records are consumed and pushed to the sink, but the committed offset remains stuck at the offset just before the record that encountered the deserialization exception.
**To Reproduce**
- Create kafka topic and ingest json records.
- Create pipeline with kafka source and "json" as serde_format
- Send one text (non-json) record to the topic.
- Deserialization exception will be thrown in pipeline.
- Describe kafka consumer group on kafka cluster to observe partition LAG/committed offsets.
- Any further records sent to the error partition will not result in new offsets being committed and LAG will remain non-zero.
**Expected behavior**
Failed records should not result in stuck committed offsets; commits should resume after accounting for error records in the NumberOfDeserializationErrors metric. (In the future, failed records can be sent to a source/pipeline-level DLQ if available.) | [BUG] Kafka source stops committing offsets after consuming a record that results in deserialization error | https://api.github.com/repos/opensearch-project/data-prepper/issues/3247/comments | 0 | 2023-08-25T04:40:10Z | 2023-08-25T23:58:01Z | https://github.com/opensearch-project/data-prepper/issues/3247 | 1,866,269,397 | 3,247 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The Kafka source plugin does not support the 'acknowledgements_timeout' config option; it should be removed from the plugin's config guide: https://opensearch.org/docs/2.9/data-prepper/pipelines/configuration/sources/kafka/ | [Documentation] Remove unsupported 'acknowledgements_timeout' option from kafka source config guide | https://api.github.com/repos/opensearch-project/data-prepper/issues/3246/comments | 2 | 2023-08-24T21:48:08Z | 2023-08-28T18:00:47Z | https://github.com/opensearch-project/data-prepper/issues/3246 | 1,865,933,350 | 3,246 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The OpenTelemetry Collector exports metrics data to Data Prepper, which receives it through the otel_metrics_source.
I would like to copy a value from an existing field into a new field.
I therefore used the copy_values processor, but the Data Prepper pod errored out and failed.
**To Reproduce**
```
otel-metrics-pipeline:
workers: 1
delay: 3000
source:
otel_metrics_source:
health_check_service: true
ssl: false
buffer:
bounded_blocking:
processor:
- copy_values:
entries:
- from_key: "Time"
to_key: "@timestamp"
overwrite_if_to_key_exists: true
- otel_metrics:
sink:
- opensearch:
```
**Expected behavior**
A new field called "@timestamp" should be added to the output data with the value copied from "time".
**Actual Behavior**
ExportMetricsServiceRequest cannot be cast to class event
**Environment (please complete the following information):**
- Kubernetes
**Additional context**
I would like to introduce a custom field with the value from an existing field. If this is not possible via copy_values, kindly let me know about other alternatives.
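One likely fix, assuming the cast error means copy_values ran before the raw ExportMetricsServiceRequest was converted into events: place copy_values after the otel_metrics processor so it operates on parsed metric events. The field name `time` below is an assumption to verify against your actual documents:

```yaml
processor:
  - otel_metrics:
  - copy_values:
      entries:
        - from_key: "time"
          to_key: "@timestamp"
          overwrite_if_to_key_exists: true
```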
| [BUG] Unable to use copy_values processor with otel_metrics_source | https://api.github.com/repos/opensearch-project/data-prepper/issues/3240/comments | 4 | 2023-08-24T20:15:12Z | 2023-08-31T20:59:11Z | https://github.com/opensearch-project/data-prepper/issues/3240 | 1,865,820,542 | 3,240 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The HTTP and Prometheus sinks do not work when SigV4 is enabled. The HTTP plugin throws a 408 exception, and with Prometheus we see a 403 error.
**To Reproduce**
Steps to reproduce the behavior:
With aws_sigv4 enabled, I don't see the code sending requests to the HTTP endpoint:
1. Exceptions are not seen in the Data Prepper console.
2. The pipeline is not posting any data to the endpoints.
3. CloudWatch logs are clean and show no data at all.
**Expected behavior**
Expected the data to be posted with SigV4 to Lambda and AMP.
**Screenshots**


**Environment (please complete the following information):**
Local testing | [BUG] HTTP and Prometheus not working when SigV4 Enabled | https://api.github.com/repos/opensearch-project/data-prepper/issues/3237/comments | 0 | 2023-08-24T15:26:03Z | 2023-08-30T21:14:34Z | https://github.com/opensearch-project/data-prepper/issues/3237 | 1,865,401,680 | 3,237 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I would like to use Data Prepper to ingest CDC from a MongoDB database, including AWS DocumentDB.
**Describe the solution you'd like**
Support Kafka Connect with Debezium MongoDB connector plugins.
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
```
connect-pipeline:
source:
kafka_connect:
worker_properties:
group_id: group
config_storage_topic: pipeline-configs
offset_storage_topic: pipeline-offsets
status_storage_topic: pipeline-status
mongodb:
hostname: localhost
credentials:
type: plaintext
username: username
password: password
collections:
- topic_prefix: prefix1
collection_name: dbname.collection1
- topic_prefix: prefix2
collection_name: dbname.collection2
``` | support CDC from MongoDB | https://api.github.com/repos/opensearch-project/data-prepper/issues/3235/comments | 0 | 2023-08-24T07:25:18Z | 2023-08-30T21:11:23Z | https://github.com/opensearch-project/data-prepper/issues/3235 | 1,864,568,925 | 3,235 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I would like to use data-prepper to ingest CDC from Postgresql database, including Aurora DB
**Describe the solution you'd like**
Support Kafka Connect with Debezium Postgresql connector plugins
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
```
connect-pipeline:
source:
kafka_connect:
worker_properties:
group_id: group
config_storage_topic: pipeline-configs
offset_storage_topic: pipeline-offsets
status_storage_topic: pipeline-status
postgresql:
hostname: localhost
credentials:
type: aws
region: us-east-1
secretId: secretId
tables:
- topic_prefix: prefix1
database_name: dbname
table_name: public.tableName1
```
| Support CDC from Postgresql | https://api.github.com/repos/opensearch-project/data-prepper/issues/3234/comments | 0 | 2023-08-24T07:23:57Z | 2023-08-30T21:11:49Z | https://github.com/opensearch-project/data-prepper/issues/3234 | 1,864,566,980 | 3,234 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I would like to use data-prepper to ingest CDC from MySQL database, including Aurora DB
**Describe the solution you'd like**
Support Kafka Connect with Debezium MySQL connector plugins
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
```
connect-pipeline:
source:
kafka_connect:
worker_properties:
group_id: group
config_storage_topic: pipeline-configs
offset_storage_topic: pipeline-offsets
status_storage_topic: pipeline-status
mysql:
hostname: localhost
credentials:
type: plaintext
username: username
password: password
tables:
- topic_prefix: prefix1
table_name: dbname.tableName1
- topic_prefix: prefix2
table_name: dbname.tableName2
```
| Support CDC from MySQL | https://api.github.com/repos/opensearch-project/data-prepper/issues/3233/comments | 1 | 2023-08-24T07:22:59Z | 2023-10-12T03:10:37Z | https://github.com/opensearch-project/data-prepper/issues/3233 | 1,864,565,538 | 3,233 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
If a Kafka topic partition has a non-zero beginningOffset, then the Kafka source plugin does not commit offsets when acknowledgements are enabled.
e.g. issue was seen with following kafka partition:
```
Assigned partition test-topic5-12. beginningOffset: 833, endOffset: 417499, committedOffset: -
```
Also, the numberOfRecordsCommitted metric does not reflect the correct value. In the above scenario, numberOfRecordsCommitted was incremented by the number of consumed records rather than the number of committed records, which was 0.
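The metric fix can be reduced to a small accounting rule, sketched here with illustrative names: count committed records relative to the previous commit point, falling back to the partition's beginning offset when nothing has been committed yet:

```java
public class CommitAccounting {
    // previousCommitted < 0 means no offset has been committed yet for the
    // partition (shown as "-" in the consumer group description), so the
    // partition's beginning offset is the baseline.
    static long committedRecordsDelta(final long beginningOffset,
                                      final long previousCommitted,
                                      final long newCommitted) {
        final long baseline = previousCommitted < 0 ? beginningOffset : previousCommitted;
        return Math.max(0, newCommitted - baseline);
    }
}
```

For the partition above (beginningOffset 833, no committed offset), committing offset 1000 should count 167 records, not the total number of records consumed.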
| [BUG] Kafka source plugin does not commit offsets when "acknowledgments" are enabled | https://api.github.com/repos/opensearch-project/data-prepper/issues/3231/comments | 0 | 2023-08-24T02:51:52Z | 2023-08-24T19:42:57Z | https://github.com/opensearch-project/data-prepper/issues/3231 | 1,864,297,757 | 3,231 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.4.0
**BUILD NUMBER**: 66
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/5957678214
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 5957678214: Release Data Prepper : 2.4.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3230/comments | 2 | 2023-08-24T00:27:03Z | 2023-08-24T17:00:36Z | https://github.com/opensearch-project/data-prepper/issues/3230 | 1,864,183,398 | 3,230 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.5.0-SNAPSHOT
**BUILD NUMBER**: 65
**RELEASE MAJOR TAG**: false
**RELEASE LATEST TAG**: false
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/5957289626
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 5957289626: Release Data Prepper : 2.5.0-SNAPSHOT | https://api.github.com/repos/opensearch-project/data-prepper/issues/3229/comments | 3 | 2023-08-23T23:31:43Z | 2023-08-23T23:35:06Z | https://github.com/opensearch-project/data-prepper/issues/3229 | 1,864,146,724 | 3,229 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.5.0-SNAPSHOT
**BUILD NUMBER**: 64
**RELEASE MAJOR TAG**: false
**RELEASE LATEST TAG**: false
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/5957255754
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 5957255754: Release Data Prepper : 2.5.0-SNAPSHOT | https://api.github.com/repos/opensearch-project/data-prepper/issues/3228/comments | 3 | 2023-08-23T23:22:08Z | 2023-08-23T23:34:36Z | https://github.com/opensearch-project/data-prepper/issues/3228 | 1,864,140,040 | 3,228 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
A misdefined Parquet schema led to an exception that shut down the sink threads:
```
2023-08-21T18:35:22.570 [log-pipeline-sink-worker-2-thread-2] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [log-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: org.apache.avro.SchemaParseException: "s3" is not a defined name. The type of the "s3" field must be a defined name or a {"type": ...} expression.
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1129) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.apache.avro.SchemaParseException: "s3" is not a defined name. The type of the "s3" field must be a defined name or a {"type": ...} expression.
at org.apache.avro.Schema.parse(Schema.java:1716) ~[avro-1.11.1.jar:1.11.1]
at org.apache.avro.Schema.parse(Schema.java:1805) ~[avro-1.11.1.jar:1.11.1]
at org.apache.avro.Schema$Parser.parse(Schema.java:1469) ~[avro-1.11.1.jar:1.11.1]
at org.apache.avro.Schema$Parser.parse(Schema.java:1457) ~[avro-1.11.1.jar:1.11.1]
at org.opensearch.dataprepper.plugins.codec.parquet.ParquetOutputCodec.parseSchema(ParquetOutputCodec.java:142) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.codec.parquet.ParquetOutputCodec.buildSchemaAndKey(ParquetOutputCodec.java:81) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.codec.parquet.ParquetOutputCodec.start(ParquetOutputCodec.java:69) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.s3.S3SinkService.output(S3SinkService.java:113) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.s3.S3Sink.doOutput(S3Sink.java:109) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:64) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:64) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:336) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
... 2 more
```
**To Reproduce**
S3 sink config
```
codec:
parquet:
schema: >
[{
"type" : "record",
"namespace" : "org.opensearch.dataprepper.examples",
"name" : "S3",
"fields" : [
{ "name" : "bucket", "type" : "string"},
{ "name" : "key", "type": "string"}
]
},
{
"type" : "record",
"namespace" : "org.opensearch.dataprepper.examples",
"name" : "Parquet",
"fields" : [
...
{ "name" : "s3", "type": "s3"}, # This was the problem, the type should be S3
{ "name" : "@timestamp", "type" : "string"}
]
}]
```
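For reference, here is the second record with the reporter's own fix applied (Avro name references are case-sensitive, so the field type must match the defined name `S3` exactly):
```
{
  "type" : "record",
  "namespace" : "org.opensearch.dataprepper.examples",
  "name" : "Parquet",
  "fields" : [
    ...
    { "name" : "s3", "type": "S3"},
    { "name" : "@timestamp", "type" : "string"}
  ]
}
```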
**Expected behavior**
An error/warning should be logged or a metric emitted rather than the thread encountering a fatal exception
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] Sink Worker Threads encounter fatal exception on unknown type | https://api.github.com/repos/opensearch-project/data-prepper/issues/3202/comments | 0 | 2023-08-21T18:45:38Z | 2023-08-22T21:40:55Z | https://github.com/opensearch-project/data-prepper/issues/3202 | 1,859,941,051 | 3,202 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
DataPrepper throws an exception on start up if `path_prefix` is not defined under the parquet codec.
```
Caused by: org.opensearch.dataprepper.model.plugin.InvalidPluginConfigurationException: Plugin parquet in pipeline null is configured incorrectly: pathPrefix must not be null
at org.opensearch.dataprepper.plugin.PluginConfigurationConverter.convert(PluginConfigurationConverter.java:73) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugin.DefaultPluginFactory.getConstructionContext(DefaultPluginFactory.java:115) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugin.DefaultPluginFactory.loadPlugin(DefaultPluginFactory.java:74) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.s3.S3Sink.<init>(S3Sink.java:63) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
... 41 more
```
The S3 sink configuration has `path_prefix` defined under the `object_key_options` object. I think the correct behavior would be to remove `path_prefix` from the parquet codec configuration.
**To Reproduce**
Create an S3 sink pipeline using the parquet codec without path_prefix defined in the codec config
Example S3 sink config
```
sink:
- s3:
aws:
region: "us-west-2"
sts_role_arn: "<my role>"
bucket: "my-sink-bucket"
object_key:
path_prefix: "perf-sink"
threshold:
event_collect_timeout: 600s
maximum_size: 128mb
codec:
parquet:
schema: ...
```
| [BUG] Parquet Codec requires path_prefix | https://api.github.com/repos/opensearch-project/data-prepper/issues/3201/comments | 0 | 2023-08-21T18:00:34Z | 2023-08-22T14:40:34Z | https://github.com/opensearch-project/data-prepper/issues/3201 | 1,859,880,407 | 3,201 |
[
"opensearch-project",
"data-prepper"
] | ## Background
Data Prepper uses the Temurin releases for packages with bundled JDKs. Temurin has active LTS support for both Java 11 and 17. Java 21 is a planned LTS release coming in September 2023.
## Proposal
Update Data Prepper's supported Java versions to 21 and 17. As a breaking change, Data Prepper can stop supporting Java 11 entirely. This also means that developers can start using features which are available only in Java 17 and above.
We can also run some performance tests against Java 21 after it is released. This can help us determine whether the default bundled JDK should be 21 instead of 17. Previously, we found a performance gain running on Java 17 over 11, which motivated the update to bundle 17 instead.
Additionally, the Data Prepper builds can start to run against Java 21 to ensure forward compatibility before it becomes the official version.
## Tasks
- [x] #3267
- [x] #3329
- [x] #3330
- [ ] Performance evaluation of Data Prepper on Java 21 (once available)
- [ ] Bundle Data Prepper with Java 21 (once available; possibly breaking change)
- [ ] #3493 (breaking change)
## Reference
https://adoptium.net/support/
| Update Java support - JDK 17 and 21 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3192/comments | 4 | 2023-08-18T13:45:08Z | 2024-07-09T19:02:25Z | https://github.com/opensearch-project/data-prepper/issues/3192 | 1,856,753,448 | 3,192 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Plugin authors must use strings and manually convert them to `ByteCount` in their configuration files.
**Describe the solution you'd like**
Support `ByteCount` deserialization in the plugin parser. This would be similar to `data-prepper-config.yaml`, which already supports it, and would give behavior similar to the duration deserializer.
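For illustration, a minimal stdlib sketch of the string parsing such a deserializer would perform (class and method names here are hypothetical; the real `ByteCount` type lives in `data-prepper-api`):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: parse strings like "128mb" into a byte count,
// mirroring what a Jackson deserializer for ByteCount would do internally.
public class ByteCountSketch {
    private static final Pattern BYTE_COUNT_PATTERN = Pattern.compile("^(\\d+)(b|kb|mb|gb)$");
    private static final Map<String, Long> UNIT_MULTIPLIERS = Map.of(
            "b", 1L,
            "kb", 1024L,
            "mb", 1024L * 1024L,
            "gb", 1024L * 1024L * 1024L);

    public static long parseBytes(final String value) {
        final Matcher matcher = BYTE_COUNT_PATTERN.matcher(value.trim().toLowerCase());
        if (!matcher.matches()) {
            throw new IllegalArgumentException("Invalid byte count string: " + value);
        }
        return Long.parseLong(matcher.group(1)) * UNIT_MULTIPLIERS.get(matcher.group(2));
    }

    public static void main(final String[] args) {
        System.out.println(parseBytes("128mb")); // prints 134217728
    }
}
```

A plugin-parser deserializer would call this kind of logic from Jackson's deserialization hook instead of forcing each plugin to do the conversion itself.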
**Additional context**
Needs to be added here:
https://github.com/opensearch-project/data-prepper/blob/59c48e29a25e15b16e38f4c118477a041e458121/data-prepper-core/src/main/java/org/opensearch/dataprepper/plugin/PluginConfigurationConverter.java#L37
Compare with:
https://github.com/opensearch-project/data-prepper/blob/35e37c671ccc4b736dbebea5fcfbfd95f6ef590d/data-prepper-core/src/main/java/org/opensearch/dataprepper/parser/config/DataPrepperAppConfiguration.java#L34-L36
| Support ByteCount in plugin parser | https://api.github.com/repos/opensearch-project/data-prepper/issues/3191/comments | 4 | 2023-08-17T22:06:07Z | 2024-01-30T19:13:00Z | https://github.com/opensearch-project/data-prepper/issues/3191 | 1,855,766,903 | 3,191 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently Data Prepper sinks support DLQ for writing failed events but the S3 sink doesn't support it.
**Describe the solution you'd like**
Support `dlq` and `dlq_file` option in S3 sink similar to OpenSearch sink
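For reference, the OpenSearch sink's existing S3 DLQ configuration looks roughly like this (host, bucket, and role values are placeholders); presumably the S3 sink would accept the same shape:
```
sink:
  - opensearch:
      hosts: ["https://my-domain.example.com"]
      dlq:
        s3:
          bucket: "my-dlq-bucket"
          key_path_prefix: "dlq-files/"
          region: "us-west-2"
          sts_role_arn: "arn:aws:iam::123456789012:role/dlq-role"
```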
| Support S3 DLQ for S3 sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3185/comments | 0 | 2023-08-17T15:00:42Z | 2023-12-13T20:47:38Z | https://github.com/opensearch-project/data-prepper/issues/3185 | 1,855,181,582 | 3,185 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
In a pipeline with an OTEL log source and a parse_json processor, it's possible for the processor to overwrite the `attributes` field in the event, changing its value from a map to a string. This subsequently causes a ClassCastException in the sink when it calls `JacksonOtelLog.toJsonString()`. The exception is currently not caught and will trigger a pipeline shutdown.
**To Reproduce**
With this pipeline config:
```yaml
version: "2"
otel-logs-pipeline:
source:
otel_logs_source:
path: "/${pipelineName}/v1/logs"
unframed_requests: true
ssl: false
processor:
- parse_json:
source: "body"
sink:
- stdout:
```
and this test data:
```
{
"resourceLogs": [
{
"resource": {
"attributes": [
{
"key": "resource-attr",
"value": {
"stringValue": "resource-attr-val-01"
}
}
]
},
"scopeLogs": [
{
"scope": {},
"logRecords": [
{
"name": "logA",
"body": {
"stringValue": "{\"attributes\": \"hello\"}"
},
"attributes": [
{
"key": "app",
"value": {
"stringValue": "server"
}
},
{
"key": "instance_num",
"value": {
"intValue": "1"
}
}
]
}
]
}
]
}
]
}
```
Pipeline shutdown will be triggered:
```
2023-08-16T23:38:15,731 [otel-logs-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [otel-logs-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.ClassCastException: class com.fasterxml.jackson.databind.node.TextNode cannot be cast to class com.fasterxml.jackson.databind.node.ObjectNode (com.fasterxml.jackson.databind.node.TextNode and com.fasterxml.jackson.databind.node.ObjectNode are in unnamed module of loader 'app')
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) [data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.lang.ClassCastException: class com.fasterxml.jackson.databind.node.TextNode cannot be cast to class com.fasterxml.jackson.databind.node.ObjectNode (com.fasterxml.jackson.databind.node.TextNode and com.fasterxml.jackson.databind.node.ObjectNode are in unnamed module of loader 'app')
at org.opensearch.dataprepper.model.log.JacksonOtelLog.toJsonString(JacksonOtelLog.java:112) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.model.event.JacksonEvent$JsonStringBuilder.toJsonString(JacksonEvent.java:578) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.StdOutSink.checkTypeAndPrintObject(StdOutSink.java:56) ~[common-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.StdOutSink.output(StdOutSink.java:48) ~[common-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:336) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
... 2 more
```
**Expected behavior**
Pipeline should continue to run in the above situation.
| [BUG] ParseJson processor overwrites fields in Otel Log Event causing pipeline to shut down | https://api.github.com/repos/opensearch-project/data-prepper/issues/3184/comments | 1 | 2023-08-17T14:06:03Z | 2023-08-21T15:15:27Z | https://github.com/opensearch-project/data-prepper/issues/3184 | 1,855,081,922 | 3,184 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
While my pipeline is idle, the Kafka source plugin continuously reports byte rate data even though there is no data flowing from the Kafka cluster to my pipeline. This is seen for both `incomingByteRate` and `outgoingByteRate`.
Ideally I would expect this to be zero during idle periods.
| [BUG] Kafka `*ByteRate` metrics are non-zero for idle pipelines | https://api.github.com/repos/opensearch-project/data-prepper/issues/3180/comments | 1 | 2023-08-16T20:03:43Z | 2023-11-07T20:41:07Z | https://github.com/opensearch-project/data-prepper/issues/3180 | 1,853,833,532 | 3,180 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper 2.4 supports deleting S3 objects when using S3 scan with `delete_s3_objects_on_read` configured. SQS messages received by the S3 source are processed, but the underlying S3 objects are not deleted even after successful processing. Delete only works with end-to-end acknowledgements for S3 scan.
**Describe the solution you'd like**
Support deleting underlying S3 objects for SQS case in S3 source.
**Additional context**
The current delete functionality is performed by calling [S3ObjectDeleteWorker](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/S3ObjectDeleteWorker.java) from [S3ObjectWorker](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/S3ObjectWorker.java). The delete logic can be moved to `S3ObjectWorker`.
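For comparison, the existing scan-based delete is configured like this today (a sketch based on the 2.4 option named above; the bucket name is a placeholder):
```
source:
  s3:
    acknowledgments: true
    delete_s3_objects_on_read: true
    scan:
      buckets:
        - bucket:
            name: "my-scan-bucket"
```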
| Support deleting s3 objects in S3 source when using SQS | https://api.github.com/repos/opensearch-project/data-prepper/issues/3177/comments | 0 | 2023-08-16T15:39:56Z | 2025-01-27T19:23:13Z | https://github.com/opensearch-project/data-prepper/issues/3177 | 1,853,490,844 | 3,177 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Several `OutputCodec` configurations have include or exclude key options. These should not exist.
**Expected behavior**
Remove old configurations from code, READMEs, and documentation (if applicable). Do for all `OutputCodec` implementations.
| [BUG] Remove include/exclude keys from codecs | https://api.github.com/repos/opensearch-project/data-prepper/issues/3176/comments | 2 | 2023-08-16T15:30:44Z | 2023-08-21T16:30:46Z | https://github.com/opensearch-project/data-prepper/issues/3176 | 1,853,475,029 | 3,176 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I noticed my pipeline has `max.poll.interval.ms=300000000` according to my logs; I used default values in my Kafka source. That's roughly 3.5 days. This interval is too large IMO. I would expect Data Prepper to poll on a smaller interval for more efficient data processing and to prevent pipelines from appearing stuck while waiting.
```
INFO org.opensearch.dataprepper.plugins.kafka.source.KafkaSource - Starting consumer with the properties : {max.poll.records=500, group.id=group1, reconnect.backoff.ms=10000, max.partition.fetch.bytes=1048576, bootstrap.servers=xxxxxxxxxxx, auto.commit.interval.ms=5000, heartbeat.interval.ms=5000, retry.backoff.ms=10000, security.protocol=SASL_SSL, enable.auto.commit=false, [xxxxxxxxxxxx], fetch.max.wait.ms=500, sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler, fetch.min.bytes=1, fetch.max.bytes=52428800, max.poll.interval.ms=300000000, session.timeout.ms=45000, auto.offset.reset=earliest}
```
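As a quick sanity check of the "3.5 days" figure (plain unit conversion, no assumptions about the source code):

```java
// Convert the logged max.poll.interval.ms value into days.
public class PollIntervalCheck {
    public static double millisToDays(final long millis) {
        return millis / (1000.0 * 60 * 60 * 24);
    }

    public static void main(final String[] args) {
        // 300,000,000 ms is roughly 3.47 days, versus the Kafka consumer
        // client default of 300,000 ms (5 minutes), i.e. a factor of 1000 larger.
        System.out.printf("%.2f days%n", millisToDays(300_000_000L));
    }
}
```

The factor-of-1000 difference from the Kafka client default suggests a possible milliseconds-vs-seconds mix-up in the default value.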
| [BUG] Kafka Source default max.poll.interval.ms is too large | https://api.github.com/repos/opensearch-project/data-prepper/issues/3169/comments | 1 | 2023-08-15T17:12:29Z | 2023-08-16T21:20:37Z | https://github.com/opensearch-project/data-prepper/issues/3169 | 1,851,819,011 | 3,169 |
[
"opensearch-project",
"data-prepper"
] | For some reason I'm unable to create the metrics pipeline.
Here's my config
```
otel-trace-pipeline:
# workers is the number of threads processing data in each pipeline.
# We recommend same value for all pipelines.
# default value is 1, set a value based on the machine you are running Data Prepper
workers: 1
# delay in milliseconds is how often the worker threads should process data.
# Recommend not to change this config as we want the otel-trace-pipeline to process as quick as possible
# default value is 3_000 ms
delay: "100"
source:
otel_trace_source:
#record_type: event # Add this when using Data Prepper 1.x. This option is removed in 2.0
ssl: false # Change this to enable encryption in transit
authentication:
unauthenticated:
health_check_service: true
proto_reflection_service: true
port: ${otel_trace_source_port}
buffer:
bounded_blocking:
# buffer_size is the number of ExportTraceRequest from otel-collector the data prepper should hold in memory.
# We recommend to keep the same buffer_size for all pipelines.
# Make sure you configure sufficient heap
# default value is 12800
buffer_size: 25600
# This is the maximum number of request each worker thread will process within the delay.
# Default is 200.
# Make sure buffer_size >= workers * batch_size
batch_size: 400
sink:
- pipeline:
name: "raw-pipeline"
- pipeline:
name: "service-map-pipeline"
raw-pipeline:
# Configure same as the otel-trace-pipeline
workers: 8
# We recommend using the default value for the raw-pipeline.
delay: "3000"
source:
pipeline:
name: "otel-trace-pipeline"
buffer:
bounded_blocking:
# Configure the same value as in otel-trace-pipeline
# Make sure you configure sufficient heap
# default value is 12800
buffer_size: 25600
# This is the maximum number of request each worker thread will process within the delay.
# Default is 200.
# Make sure buffer_size >= workers * batch_size
# The raw processor does bulk request to your OpenSearch sink, so configure the batch_size higher.
batch_size: 3200
processor:
- otel_trace_raw:
- otel_trace_group:
hosts:
- ${opensearch_endpoint}
# Change to your credentials
# username: "admin"
# password: "admin"
# Add a certificate file if you are accessing an OpenSearch cluster with a self-signed certificate
#cert: /path/to/cert
# If you are connecting to an Amazon OpenSearch Service domain without
# Fine-Grained Access Control, enable these settings. Comment out the
# username and password above.
#aws_sigv4: true
#aws_region: us-east-1
sink:
- opensearch:
hosts:
- ${opensearch_endpoint}
index_type: trace-analytics-raw
# Change to your credentials
# username: "admin"
# password: "admin"
# Add a certificate file if you are accessing an OpenSearch cluster with a self-signed certificate
#cert: /path/to/cert
# If you are connecting to an Amazon OpenSearch Service domain without
# Fine-Grained Access Control, enable these settings. Comment out the
# username and password above.
#aws_sigv4: true
#aws_region: us-east-1
service-map-pipeline:
workers: 8
delay: "100"
source:
pipeline:
name: "otel-trace-pipeline"
processor:
- service_map_stateful:
# The window duration is the maximum length of time the data prepper stores the most recent trace data to evaluate service-map relationships.
# The default is 3 minutes, this means we can detect relationships between services from spans reported in last 3 minutes.
# Set higher value if your applications have higher latency.
window_duration: 180
buffer:
bounded_blocking:
# buffer_size is the number of ExportTraceRequest from otel-collector the data prepper should hold in memory.
# We recommend to keep the same buffer_size for all pipelines.
# Make sure you configure sufficient heap
# default value is 12800
buffer_size: 25600
# This is the maximum number of request each worker thread will process within the delay.
# Default is 200.
# Make sure buffer_size >= workers * batch_size
batch_size: 400
sink:
- opensearch:
hosts:
- ${opensearch_endpoint}
index_type: trace-analytics-service-map
# Change to your credentials
# username: "admin"
# password: "admin"
# Add a certificate file if you are accessing an OpenSearch cluster with a self-signed certificate
#cert: /path/to/cert
# If you are connecting to an Amazon OpenSearch Service domain without
# Fine-Grained Access Control, enable these settings. Comment out the
# username and password above.
#aws_sigv4: true
#aws_region: us-east-1
metrics-pipeline:
source:
otel_metrics_source:
ssl: false
authentication:
unauthenticated:
processor:
- otel_metrics_raw_processor:
sink:
# - stdout:
- opensearch:
hosts:
- ${opensearch_endpoint}
# # username: username
# # password: password
# index: metrics-otel-v1
```
the error I'm getting
```
2023-08-15T07:24:10,205 [main] ERROR org.opensearch.dataprepper.plugin.PluginCreator - Encountered exception while instantiating the plugin OpenSearchSink
java.lang.reflect.InvocationTargetException: null
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:40) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.plugin.DefaultPluginFactory.loadPlugin(DefaultPluginFactory.java:75) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineParser.buildSinkOrConnector(PipelineParser.java:310) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineParser.buildRoutedSinkOrConnector(PipelineParser.java:296) ~[data-prepper-core-2.3.0.jar:?]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.opensearch.dataprepper.parser.PipelineParser.buildPipelineFromConfiguration(PipelineParser.java:216) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineParser.parseConfiguration(PipelineParser.java:122) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.DataPrepper.<init>(DataPrepper.java:67) ~[data-prepper-core-2.3.0.jar:2.3.0]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:211) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:311) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:296) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1391) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:541) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1391) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:229) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:920) [spring-context-5.3.27.jar:5.3.27]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) [spring-context-5.3.27.jar:5.3.27]
at org.opensearch.dataprepper.AbstractContextManager.start(AbstractContextManager.java:59) [data-prepper-core-2.3.0.jar:2.3.0]
at org.opensearch.dataprepper.AbstractContextManager.getDataPrepperBean(AbstractContextManager.java:45) [data-prepper-core-2.3.0.jar:2.3.0]
at org.opensearch.dataprepper.DataPrepperExecute.main(DataPrepperExecute.java:39) [data-prepper-main-2.3.0.jar:2.3.0]
Caused by: java.lang.IllegalArgumentException: hosts cannot be null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145) ~[guava-31.1-jre.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration$Builder.<init>(ConnectionConfiguration.java:467) ~[opensearch-2.3.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration.readConnectionConfiguration(ConnectionConfiguration.java:170) ~[opensearch-2.3.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSinkConfiguration.readESConfig(OpenSearchSinkConfiguration.java:44) ~[opensearch-2.3.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.<init>(OpenSearchSink.java:119) ~[opensearch-2.3.0.jar:?]
... 70 more
2023-08-15T07:24:10,267 [main] ERROR org.opensearch.dataprepper.parser.PipelineParser - Construction of pipeline components failed, skipping building of pipeline [metrics-pipeline] and its connected pipelines
org.opensearch.dataprepper.model.plugin.PluginInvocationException: Exception throw from the plugin'OpenSearchSink'.
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:47) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.plugin.DefaultPluginFactory.loadPlugin(DefaultPluginFactory.java:75) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineParser.buildSinkOrConnector(PipelineParser.java:310) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineParser.buildRoutedSinkOrConnector(PipelineParser.java:296) ~[data-prepper-core-2.3.0.jar:?]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.opensearch.dataprepper.parser.PipelineParser.buildPipelineFromConfiguration(PipelineParser.java:216) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineParser.parseConfiguration(PipelineParser.java:122) ~[data-prepper-core-2.3.0.jar:?]
at org.opensearch.dataprepper.DataPrepper.<init>(DataPrepper.java:67) ~[data-prepper-core-2.3.0.jar:2.3.0]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:211) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:311) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:296) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1391) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:541) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1391) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:229) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) [spring-beans-5.3.27.jar:5.3.27]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:920) [spring-context-5.3.27.jar:5.3.27]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) [spring-context-5.3.27.jar:5.3.27]
at org.opensearch.dataprepper.AbstractContextManager.start(AbstractContextManager.java:59) [data-prepper-core-2.3.0.jar:2.3.0]
at org.opensearch.dataprepper.AbstractContextManager.getDataPrepperBean(AbstractContextManager.java:45) [data-prepper-core-2.3.0.jar:2.3.0]
at org.opensearch.dataprepper.DataPrepperExecute.main(DataPrepperExecute.java:39) [data-prepper-main-2.3.0.jar:2.3.0]
Caused by: java.lang.reflect.InvocationTargetException
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:40) ~[data-prepper-core-2.3.0.jar:?]
... 64 more
Caused by: java.lang.IllegalArgumentException: hosts cannot be null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145) ~[guava-31.1-jre.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration$Builder.<init>(ConnectionConfiguration.java:467) ~[opensearch-2.3.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration.readConnectionConfiguration(ConnectionConfiguration.java:170) ~[opensearch-2.3.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSinkConfiguration.readESConfig(OpenSearchSinkConfiguration.java:44) ~[opensearch-2.3.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.<init>(OpenSearchSink.java:119) ~[opensearch-2.3.0.jar:?]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:40) ~[data-prepper-core-2.3.0.jar:?]
... 64 more
```
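The root cause above ("hosts cannot be null") indicates the `opensearch` sink configuration was missing its `hosts` setting. A minimal sink block that supplies it might look like the following; the endpoint, credentials, and index name are placeholders, not values from this report:

```
metrics-pipeline:
  sink:
    - opensearch:
        hosts: ["https://opensearch-node1:9200"]
        username: "admin"
        password: "admin"
        index: "metrics"
```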
the traces are working fine. | Encountered exception while instantiating the plugin OpenSearchSink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3166/comments | 2 | 2023-08-15T07:28:06Z | 2024-06-27T01:20:52Z | https://github.com/opensearch-project/data-prepper/issues/3166 | 1,851,016,593 | 3,166 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Add Exemplars to metrics generated in the aggregate processor. See https://opentelemetry.io/docs/specs/otel/metrics/data-model/#exemplars for a description of Exemplars.
**Describe the solution you'd like**
Use one of the events from the Count aggregation as an exemplar in the generated count metric.
Use the min and max events from the Histogram aggregation as exemplars in the generated histogram metric.
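A minimal sketch of the min/max tracking this would require; the class and method names are invented for illustration and are not Data Prepper's actual aggregation code (which would retain the corresponding events, not just the values):

```
import java.util.Optional;

final class ExemplarTracker {
    private Double min;
    private Double max;

    // Called once per aggregated value; remembers the extremes so they can
    // later be attached to the generated histogram metric as exemplars.
    void record(final double value) {
        if (min == null || value < min) {
            min = value;
        }
        if (max == null || value > max) {
            max = value;
        }
    }

    Optional<Double> minExemplar() {
        return Optional.ofNullable(min);
    }

    Optional<Double> maxExemplar() {
        return Optional.ofNullable(max);
    }
}
```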
| Add Exemplars to metrics generated in aggregate processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3164/comments | 0 | 2023-08-15T06:36:30Z | 2023-08-17T16:44:00Z | https://github.com/opensearch-project/data-prepper/issues/3164 | 1,850,971,959 | 3,164 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We want to build a SIEM based on OpenSearch. So we ingest a ton of data, but that data needs to be enriched based on data from, e.g., our CMDB or personnel records. This makes correlation and deciding on threat impact a lot easier.
Elastic has the enrich ingest pipeline processor to do something like this: https://www.elastic.co/guide/en/elasticsearch/reference/7.17/ingest-enriching-data.html
**Describe the solution you'd like**
Implement something similar to what Elastic has so data can be enriched at ingest time based on source indexes that will be kept up to date.
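Purely as an illustration of what such a feature could look like in a Data Prepper pipeline, a hypothetical processor configuration is sketched below; the `enrich` processor name and every option shown are invented, not an existing plugin:

```
log-pipeline:
  processor:
    - enrich:
        lookup_index: "cmdb-hosts"     # index holding the enrichment data
        match_field: "host.name"       # field in the incoming event
        lookup_field: "hostname"       # field in the lookup index
        target: "host.metadata"        # where enriched fields are written
```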
**Describe alternatives you've considered**
I can script this out or do this in a Logstash pipeline, but then I need to implement it many times. The ingest pipelines seem to be the best solution.
**Additional context**
none | [Feature]Enrich ingest pipeline functionality | https://api.github.com/repos/opensearch-project/data-prepper/issues/3167/comments | 7 | 2023-08-15T01:33:52Z | 2023-10-18T18:24:04Z | https://github.com/opensearch-project/data-prepper/issues/3167 | 1,851,657,043 | 3,167 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Specifying include/exclude_keys in the S3 sink with the Avro schema enabled ignores the include/exclude_keys when creating entries from the schema:
```
Caused by: java.lang.RuntimeException: The event has a key ('s3') which is not included in the schema.
```
With sink config:
```
codec:
avro:
schema: >
{
"type" : "record",
"namespace" : "org.opensearch.dataprepper.examples",
"name" : "VpcFlowLog",
"fields" : [
{ "name" : "version", "type" : ["null", "string"]},
{ "name" : "srcport", "type": ["null", "int"]},
{ "name" : "dstport", "type": ["null", "int"]},
{ "name" : "accountId", "type" : ["null", "string"]},
{ "name" : "interfaceId", "type" : ["null", "string"]},
{ "name" : "srcaddr", "type" : ["null", "string"]},
{ "name" : "dstaddr", "type" : ["null", "string"]},
{ "name" : "start", "type": ["null", "int"]},
{ "name" : "end", "type": ["null", "int"]},
{ "name" : "protocol", "type": ["null", "int"]},
{ "name" : "packets", "type": ["null", "int"]},
{ "name" : "bytes", "type": ["null", "int"]},
{ "name" : "action", "type": ["null", "string"]},
{ "name" : "logStatus", "type" : ["null", "string"]}
]
}
exclude_keys:
- s3
```
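A minimal sketch of the expected behavior, i.e. removing the configured exclude_keys from the event before schema-based encoding; the class and method names here are illustrative, not Data Prepper's actual code:

```
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

final class ExcludeKeysFilter {
    // Drop excluded keys from the event data before building the Avro
    // record, so a key like "s3" never reaches schema validation.
    static Map<String, Object> apply(final Map<String, Object> eventData,
                                     final Set<String> excludeKeys) {
        final Map<String, Object> result = new HashMap<>(eventData);
        result.keySet().removeAll(excludeKeys);
        return result;
    }
}
```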
| [BUG] Include/exclude_keys not working with Avro codec output in S3 Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3163/comments | 6 | 2023-08-14T23:10:22Z | 2023-08-18T18:46:29Z | https://github.com/opensearch-project/data-prepper/issues/3163 | 1,850,684,707 | 3,163 |
[
"opensearch-project",
"data-prepper"
] | There are a few gaps in the documentation on include/exclude_keys:
1. The [Avro README](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/avro-codecs/README.md) shows an example with exclude_keys, but it is not a valid option in the codec. Instead it should be specified as a parameter of the sink.
2. The [S3 sink README](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/s3-sink/README.md) does not have a section on include/exclude_keys | [Enhancement] exclude_keys documentation gaps | https://api.github.com/repos/opensearch-project/data-prepper/issues/3162/comments | 0 | 2023-08-14T22:56:23Z | 2023-08-22T22:17:12Z | https://github.com/opensearch-project/data-prepper/issues/3162 | 1,850,672,470 | 3,162 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When the Avro schema and the actual data do not match, a RuntimeException is thrown, which shuts down the pipeline:
```
2023-08-14T17:19:53.568 [log-pipeline-sink-worker-2-thread-2] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [log-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.RuntimeException: The event has a key ('s3') which is not included in the schema.
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1129) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.RuntimeException: The event has a key ('s3') which is not included in the schema.
at org.opensearch.dataprepper.plugins.codec.avro.AvroOutputCodec.buildAvroRecord(AvroOutputCodec.java:181) ~[avro-codecs-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.codec.avro.AvroOutputCodec.writeEvent(AvroOutputCodec.java:152) ~[avro-codecs-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.s3.S3SinkService.output(S3SinkService.java:115) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.s3.S3Sink.doOutput(S3Sink.java:116) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:64) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:64) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:336) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
... 2 more
```
Exception comes from here: https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/avro-codecs/src/main/java/org/opensearch/dataprepper/plugins/codec/avro/AvroOutputCodec.java#L181
Potential solutions for undefined schema keys:
- Automap to a type and include
- Ignore the field (essentially adds to exclude_keys list)
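A sketch of the "ignore the field" option, keeping only event keys declared in the schema instead of throwing; the names are illustrative, not the actual AvroOutputCodec code:

```
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

final class SchemaKeyFilter {
    // Keep only event keys declared in the Avro schema rather than
    // throwing a RuntimeException for an unknown key.
    static Map<String, Object> keepDeclared(final Map<String, Object> eventData,
                                            final Set<String> schemaFields) {
        final Map<String, Object> kept = new HashMap<>();
        eventData.forEach((key, value) -> {
            if (schemaFields.contains(key)) {
                kept.put(key, value);
            }
        });
        return kept;
    }
}
```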
**To Reproduce**
Steps to reproduce the behavior:
Create an S3 sink pipeline using an Avro schema as the output codec. Ingest data with one or more fields that do not match the defined schema.
**Expected behavior**
The pipeline should not shut down (it is debatable what the correct way to handle the mismatch is).
| [BUG] Avro schema shuts down pipeline when field is not defined in the schema | https://api.github.com/repos/opensearch-project/data-prepper/issues/3161/comments | 2 | 2023-08-14T22:54:44Z | 2023-08-21T16:20:54Z | https://github.com/opensearch-project/data-prepper/issues/3161 | 1,850,671,068 | 3,161 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The S3 sink throws this exception part-way through processing the data:
```
2023-08-14T20:31:16.966 [log-pipeline-sink-worker-2-thread-2] ERROR org.opensearch.dataprepper.plugins.sink.s3.S3SinkService - Exception while write event into buffer :
java.io.IOException: Cannot write more data, the end of the compressed data stream has been reached
at org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream.write(GzipCompressorOutputStream.java:178) ~[commons-compress-1.23.0.jar:1.23.0]
at org.apache.avro.file.DataFileWriter$BufferedFileOutputStream$PositionFilter.write(DataFileWriter.java:476) ~[avro-1.11.1.jar:1.11.1]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81) ~[?:?]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142) ~[?:?]
at org.apache.avro.file.DataFileWriter$BufferedFileOutputStream.flush(DataFileWriter.java:493) ~[avro-1.11.1.jar:1.11.1]
at org.apache.avro.io.DirectBinaryEncoder.flush(DirectBinaryEncoder.java:63) ~[avro-1.11.1.jar:1.11.1]
at org.apache.avro.file.DataFileWriter.create(DataFileWriter.java:175) ~[avro-1.11.1.jar:1.11.1]
at org.apache.avro.file.DataFileWriter.create(DataFileWriter.java:145) ~[avro-1.11.1.jar:1.11.1]
at org.opensearch.dataprepper.plugins.codec.avro.AvroOutputCodec.start(AvroOutputCodec.java:75) ~[avro-codecs-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.s3.S3SinkService.output(S3SinkService.java:111) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.s3.S3Sink.doOutput(S3Sink.java:116) ~[s3-sink-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:64) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:64) ~[data-prepper-api-2.4.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:336) ~[data-prepper-core-2.4.0-SNAPSHOT.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
```
I am not sure if all of the data was processed before this or not. It did not generate a file after this exception was encountered and prevented the E2E ack callback from executing.
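The "end of the compressed data stream has been reached" message suggests a finished gzip stream is being written to again when a new Avro file starts. A generic stdlib sketch, using `java.util.zip` rather than the commons-compress classes in the trace, of creating one fresh compressor per output object:

```
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

final class GzipPerObject {
    // Compress one object's payload with its own, freshly created gzip
    // stream; once the stream is closed it must not be written to again.
    static byte[] compress(final byte[] payload) {
        final ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(payload);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return out.toByteArray();
    }

    // Round-trip helper used to sanity-check the compressed bytes.
    static byte[] decompress(final byte[] compressed) {
        try (GZIPInputStream gzip =
                     new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gzip.readAllBytes();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```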
**To Reproduce**
Steps to reproduce the behavior:
Sink config:
```
sink:
- s3:
aws:
region: "us-west-2"
sts_role_arn: "<my role>"
bucket: "my-sink-bucket"
object_key:
path_prefix: "s3-sink"
threshold:
event_collect_timeout: 600s
compression: "gzip"
codec:
avro:
schema: >
{
"type" : "record",
"namespace" : "org.opensearch.dataprepper.examples",
"name" : "VpcFlowLog",
"fields" : [
{ "name" : "version", "type" : ["null", "string"]},
{ "name" : "srcport", "type": ["null", "int"]},
{ "name" : "dstport", "type": ["null", "int"]},
{ "name" : "accountId", "type" : ["null", "string"]},
{ "name" : "interfaceId", "type" : ["null", "string"]},
{ "name" : "srcaddr", "type" : ["null", "string"]},
{ "name" : "dstaddr", "type" : ["null", "string"]},
{ "name" : "start", "type": ["null", "int"]},
{ "name" : "end", "type": ["null", "int"]},
{ "name" : "protocol", "type": ["null", "int"]},
{ "name" : "packets", "type": ["null", "int"]},
{ "name" : "bytes", "type": ["null", "int"]},
{ "name" : "action", "type": ["null", "string"]},
{ "name" : "logStatus", "type" : ["null", "string"]}
]
}
``` | [BUG] Exception writing S3 file - S3 Sink + Avro Codec | https://api.github.com/repos/opensearch-project/data-prepper/issues/3160/comments | 2 | 2023-08-14T22:51:47Z | 2023-08-21T14:46:56Z | https://github.com/opensearch-project/data-prepper/issues/3160 | 1,850,668,378 | 3,160 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
A few bugs found for S3 Sink Avro output codecs:
1. With `compression: gzip` specified, the files are still uploaded with just a `.avro` suffix. This should be `.avro.gz`
2. The contents of the avro file show null for all fields:
```
{"version":null,"srcport":null,"dstport":null,"accountId":null,"interfaceId":null,"srcaddr":null,"dstaddr":null,"start":null,"end":null,"protocol":null,"packets":null,"bytes":null,"action":null,"logStatus":null}
{"version":null,"srcport":null,"dstport":null,"accountId":null,"interfaceId":null,"srcaddr":null,"dstaddr":null,"start":null,"end":null,"protocol":null,"packets":null,"bytes":null,"action":null,"logStatus":null}
{"version":null,"srcport":null,"dstport":null,"accountId":null,"interfaceId":null,"srcaddr":null,"dstaddr":null,"start":null,"end":null,"protocol":null,"packets":null,"bytes":null,"action":null,"logStatus":null}
{"version":null,"srcport":null,"dstport":null,"accountId":null,"interfaceId":null,"srcaddr":null,"dstaddr":null,"start":null,"end":null,"protocol":null,"packets":null,"bytes":null,"action":null,"logStatus":null}
```
3. The timestamp in the file name is 12 hours behind UTC. I am not sure what the "correct" time is but UTC seems logical
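Issues 1 and 3 above could be addressed along these lines; this is a hypothetical helper with invented names, not the sink's actual object-key code:

```
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

final class ObjectKeyHelper {
    // Issue 1: append ".gz" to the codec extension when gzip is configured.
    static String suffix(final String codecExtension, final String compression) {
        return "gzip".equals(compression) ? codecExtension + ".gz" : codecExtension;
    }

    // Issue 3: take the key timestamp explicitly in UTC rather than the
    // JVM's default time zone.
    static String utcTimestamp() {
        return ZonedDateTime.now(ZoneOffset.UTC)
                .format(DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss"));
    }
}
```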
**To Reproduce**
Sink config:
```
sink:
- s3:
aws:
region: "us-west-2"
sts_role_arn: "<my role>"
bucket: "my-sink-bucket"
object_key:
path_prefix: "s3-sink"
threshold:
event_collect_timeout: 600s
compression: "gzip"
codec:
avro:
schema: >
{
"type" : "record",
"namespace" : "org.opensearch.dataprepper.examples",
"name" : "VpcFlowLog",
"fields" : [
{ "name" : "version", "type" : ["null", "string"]},
{ "name" : "srcport", "type": ["null", "int"]},
{ "name" : "dstport", "type": ["null", "int"]},
{ "name" : "accountId", "type" : ["null", "string"]},
{ "name" : "interfaceId", "type" : ["null", "string"]},
{ "name" : "srcaddr", "type" : ["null", "string"]},
{ "name" : "dstaddr", "type" : ["null", "string"]},
{ "name" : "start", "type": ["null", "int"]},
{ "name" : "end", "type": ["null", "int"]},
{ "name" : "protocol", "type": ["null", "int"]},
{ "name" : "packets", "type": ["null", "int"]},
{ "name" : "bytes", "type": ["null", "int"]},
{ "name" : "action", "type": ["null", "string"]},
{ "name" : "logStatus", "type" : ["null", "string"]}
]
}
```
**Screenshots**
File name times example:

| [BUG] S3 Sink Avro Output issues | https://api.github.com/repos/opensearch-project/data-prepper/issues/3158/comments | 5 | 2023-08-14T22:48:55Z | 2023-08-21T15:47:07Z | https://github.com/opensearch-project/data-prepper/issues/3158 | 1,850,665,873 | 3,158 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Kafka ReadMe does not detail configuration options like:
1. `broker_configuration_type`
2. `aws_msk_iam`
3. `ssl_endpoint_identification_algorithm`
4. `AwsIamAuthConfig.role`
5. `key_mode`
This is not an inclusive list. I only looked through a handful of configuration options and found these. There may be others.
**Additional context**
[Documentation Website](https://github.com/opensearch-project/documentation-website/pull/4737) is lacking details as well. It has some of these but is missing some as well and needs to be thoroughly reviewed.
| [BUG] Missing Configuration details in Kafka documentation | https://api.github.com/repos/opensearch-project/data-prepper/issues/3157/comments | 5 | 2023-08-14T22:24:36Z | 2023-12-16T07:26:27Z | https://github.com/opensearch-project/data-prepper/issues/3157 | 1,850,638,220 | 3,157 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Writing compressed files in Snappy can help some analytics scenarios.
**Describe the solution you'd like**
Provide a new `snappy` compression option for the S3 sink.
```
sink:
- s3:
compression: snappy
bucket: my_bucket
```
**Additional context**
Expands upon: #3130
| Support Snappy compression in the S3 sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3154/comments | 0 | 2023-08-14T20:32:42Z | 2023-08-16T15:39:37Z | https://github.com/opensearch-project/data-prepper/issues/3154 | 1,850,497,164 | 3,154 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As an administrator configuring collectors that write to OpenSearch, you want to configure index management policies scoped to a particular index-name pattern.
**Describe the solution you'd like**
Define index_type for "otel_logs_source" and "otel_metrics_source" (and others) similar to ones defined for "[trace-analytics-raw](https://github.com/opensearch-project/data-prepper/blob/28fdf903b791ec7365e5783022b859a7910040eb/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/IndexType.java#L15)" and "[trace-analytics-service-map](https://github.com/opensearch-project/data-prepper/blob/28fdf903b791ec7365e5783022b859a7910040eb/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/IndexType.java#L16C1-L16C5)"
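For illustration only, a sink configuration using such a built-in type might look like this; the `log-analytics` value below is invented by analogy with the existing trace types and is not an implemented option:

```
sink:
  - opensearch:
      hosts: ["https://opensearch:9200"]
      index_type: log-analytics
```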
**Describe alternatives you've considered (Optional)**
Without index_type defined, an index needs to be explicitly named, which complicates creation of a scoped-down policy in OpenSearch.
| Add additional index_types | https://api.github.com/repos/opensearch-project/data-prepper/issues/3148/comments | 5 | 2023-08-12T17:57:01Z | 2024-10-30T15:42:07Z | https://github.com/opensearch-project/data-prepper/issues/3148 | 1,848,152,802 | 3,148 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The new `json` codec writes all values as strings, even if they are integers.
| [BUG] JSON codec uses string values for all values | https://api.github.com/repos/opensearch-project/data-prepper/issues/3146/comments | 0 | 2023-08-11T21:15:27Z | 2023-08-18T21:44:42Z | https://github.com/opensearch-project/data-prepper/issues/3146 | 1,847,436,016 | 3,146 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `s3` source and `s3` sink require the bucket name to be supplied only as a bucket name. Some users have provided the `s3://` scheme in front and then hit errors.
**Describe the solution you'd like**
Allow the `s3://` prefix and remove it from the bucket name supplied to the AWS SDK if it is supplied.
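A sketch of that normalization, stripping the scheme before the name is handed to the AWS SDK; the class and method names are hypothetical, not existing Data Prepper code:

```
final class BucketNameNormalizer {
    private static final String S3_SCHEME = "s3://";

    // Accept either "my-bucket" or "s3://my-bucket" and return the bare
    // bucket name expected by the AWS SDK.
    static String normalize(final String bucketName) {
        if (bucketName != null && bucketName.startsWith(S3_SCHEME)) {
            return bucketName.substring(S3_SCHEME.length());
        }
        return bucketName;
    }
}
```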
**Describe alternatives you've considered (Optional)**
An alternative is to have a specific validation on `s3://` and tell the user very clearly that they should remove it.
| Support s3:// prefix in S3 bucket names | https://api.github.com/repos/opensearch-project/data-prepper/issues/3143/comments | 2 | 2023-08-11T15:25:10Z | 2023-08-23T13:53:23Z | https://github.com/opensearch-project/data-prepper/issues/3143 | 1,847,027,361 | 3,143 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I'm using the Docker version of Data Prepper (`opensearch-data-prepper:2.4.0-SNAPSHOT`) built from source (`./gradlew :dockerSolution:buildDockerImages`). When using a pipeline that includes an Opensearch source, invoking Data Prepper's shutdown API appears to terminate the Data Prepper server but does not stop the Docker container.
Snippet of Data Prepper logs:
```
...
Found openjdk version of 17.0
2023-08-10T23:16:46,589 [main] INFO org.opensearch.dataprepper.parser.PipelineParser - Building [opensearch] as source component for the pipeline [test-pipeline]
....
2023-08-10T23:16:46,940 [main] INFO org.opensearch.dataprepper.parser.PipelineParser - Building [opensearch] as sink component
....
2023-08-10T23:16:47,223 [main] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [test-pipeline] - Initiating pipeline execution
....
2023-08-10T23:16:47,488 [main] INFO org.opensearch.dataprepper.pipeline.server.DataPrepperServer - Data Prepper server running at :4900
....
2023-08-10T23:16:48,143 [test-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [test-pipeline] - Submitting request to initiate the pipeline processing
....
2023-08-10T23:16:51,810 [pool-4-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [test-pipeline] - Received shutdown signal with processor shutdown timeout PT30S and sink shutdown timeout PT30S. Initiating the shutdown process
....
2023-08-10T23:17:21,813 [pool-4-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [test-pipeline] - Shutting down processor process workers.
....
2023-08-10T23:17:23,026 [pool-4-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [test-pipeline] - Shutting down sink process workers.
2023-08-10T23:17:23,027 [pool-4-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [test-pipeline] - Pipeline fully shutdown.
2023-08-10T23:17:23,057 [pool-4-thread-1] INFO org.opensearch.dataprepper.pipeline.server.DataPrepperServer - Data Prepper server stopped
```
Command line:
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
08319e699cae opensearch-data-prepper:2.4.0-SNAPSHOT "/entrypoint.sh bin/…" 23 minutes ago Up 22 minutes 0.0.0.0:4900->4900/tcp epic_jackson
$ docker top 0831
UID PID PPID C STIME TTY TIME CMD
root 48665 48639 3 23:16 ? 00:00:43 java -Dlog4j.configurationFile=/usr/share/data-prepper/config/log4j2-rolling.properties -Ddata-prepper.dir=/usr/share/data-prepper -cp /usr/share/data-prepper/lib/* org.opensearch.dataprepper.DataPrepperExecute
root 49046 48639 0 23:19 ? 00:00:00 /bin/bash
```
Pipeline configuration:
```
test-pipeline:
sink:
- opensearch:
bulk_size: 2
document_id_field: getMetadata("opensearch-document_id")
hosts:
- [redacted]
index: ${getMetadata("opensearch-index")}
password: [redacted]
username: [redacted]
source:
opensearch:
disable_authentication: true
hosts:
- [redacted]
indices:
exclude:
- index_name_regex: \.*
include:
- index_name_regex: [redacted]
- index_name_regex: [redacted]
```
This is possibly an issue only with the opensearch source plugin - I used the following pipeline and the Docker container stopped correctly (the sink endpoint in both cases is the same).
```
http-test-pipeline:
source:
http:
sink:
- opensearch:
hosts: [redacted]
username: [redacted]
password: [redacted]
index: "httptest"
```
**To Reproduce**
See above
**Expected behavior**
The Docker container stops after the Data Prepper server has shut down
**Screenshots**
N/A
**Environment (please complete the following information):**
- OS: MacOS 12.6.6
- Version: Data Prepper 2.4.0-SNAPSHOT
**Additional context**
N/A
| [BUG] Docker container does not stop running after Data Prepper shuts down | https://api.github.com/repos/opensearch-project/data-prepper/issues/3141/comments | 3 | 2023-08-10T23:40:49Z | 2024-01-18T21:07:52Z | https://github.com/opensearch-project/data-prepper/issues/3141 | 1,846,022,410 | 3,141 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The conditional routing integration test failed.
```
Router_SingleRouteIT > sending_alpha_events_sends_to_the_sink_with_alpha_only_routes() FAILED
java.util.ConcurrentModificationException at Router_SingleRouteIT.java:79
```
https://github.com/opensearch-project/data-prepper/actions/runs/5824982516/job/15795716699
| [BUG] Conditional routing test failure | https://api.github.com/repos/opensearch-project/data-prepper/issues/3139/comments | 0 | 2023-08-10T19:27:06Z | 2023-08-14T21:02:03Z | https://github.com/opensearch-project/data-prepper/issues/3139 | 1,845,767,454 | 3,139 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The processor threads are shutting down due to an IllegalArgumentException thrown for an invalid type in the StringConverter.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a data prepper pipeline to write data to OpenSearch
2. Send some data that is not in the supported format https://github.com/opensearch-project/data-prepper/blob/47a9bc0b99590d4b6554371a38860964947a06fd/data-prepper-api/src/main/java/org/opensearch/dataprepper/typeconverter/StringConverter.java#L8
3. Notice that the opensearch.documentsSuccess metric is at 0
4. Now send correctly formatted data and note that opensearch.documentsSuccess is still at zero, which indicates that the pipeline is no longer processing data
**Expected behavior**
The pipeline should drop invalid messages and emit the appropriate metrics instead of shutting down.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
```
2023-07-30T20:00:04.944 [apache-log-pipeline-processor-worker-1-thread-2] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - Encountered exception during pipeline apache-log-pipeline processing
java.lang.IllegalArgumentException: Unsupported type conversion
at org.opensearch.dataprepper.typeconverter.StringConverter.convert(StringConverter.java:28) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.typeconverter.StringConverter.convert(StringConverter.java:8) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.processor.mutateevent.ConvertEntryTypeProcessor.doExecute(ConvertEntryTypeProcessor.java:52) ~[mutate-event-processors-2.3.2.jar:?]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.3.2.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:127) ~[data-prepper-core-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:60) ~[data-prepper-core-2.3.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2023-07-30T20:04:06.922 [apache-log-pipeline-sink-worker-2-thread-2] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document [******] has failure. DLQ not configured
2023-07-30T20:05:04.844 [apache-log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - Encountered exception during pipeline apache-log-pipeline processing
java.lang.IllegalArgumentException: Unsupported type conversion
at org.opensearch.dataprepper.typeconverter.StringConverter.convert(StringConverter.java:28) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.typeconverter.StringConverter.convert(StringConverter.java:8) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.processor.mutateevent.ConvertEntryTypeProcessor.doExecute(ConvertEntryTypeProcessor.java:52) ~[mutate-event-processors-2.3.2.jar:?]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.3.2.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:127) ~[data-prepper-core-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:60) ~[data-prepper-core-2.3.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
```
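A minimal sketch of the requested behavior — catching the conversion failure per record so one bad value increments a failure metric instead of killing the worker thread. The classes below are illustrative stand-ins, not Data Prepper's actual `StringConverter` or processor API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: convert each value, dropping (and counting) records whose
// type is unsupported instead of letting the exception escape the worker thread.
public class SafeConvert {
    static long failures = 0;

    // Loosely mirrors StringConverter: only a few source types are supported.
    static String convert(final Object source) {
        if (source instanceof String || source instanceof Number || source instanceof Boolean) {
            return source.toString();
        }
        throw new IllegalArgumentException("Unsupported type conversion");
    }

    public static List<String> convertAll(final List<Object> records) {
        final List<String> out = new ArrayList<>();
        for (final Object record : records) {
            try {
                out.add(convert(record));
            } catch (final IllegalArgumentException e) {
                failures++; // emit a recordsFailed-style metric here instead of rethrowing
            }
        }
        return out;
    }
}
```

With this shape, an unsupported record is skipped and counted while the rest of the batch still reaches the sink.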
| [BUG] Processor threads shutting down due to IllegalArgumentException thrown by StringConverter | https://api.github.com/repos/opensearch-project/data-prepper/issues/3135/comments | 0 | 2023-08-10T15:53:52Z | 2023-08-21T19:06:15Z | https://github.com/opensearch-project/data-prepper/issues/3135 | 1,845,466,843 | 3,135 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper's `s3` sink does not support any compression. Yet writing compressed files to S3 can reduce network transfer and save on storage costs.
**Describe the solution you'd like**
Add a new `compression` option in the S3 sink similar to the S3 source. (However, it will not support `automatic` since this is not relevant.)
```
sink:
- s3:
compression: gzip
bucket: my_bucket
```
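For illustration only (not the sink's actual implementation), producing a gzip-compressed object body with the JDK's `java.util.zip` before handing the bytes to the S3 client might look like this; `none` would simply pass the raw bytes through:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch: compress the serialized events before upload.
public class GzipBody {
    public static byte[] compress(final byte[] raw) {
        final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
            gzip.write(raw);
        } catch (final IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    public static byte[] decompress(final byte[] compressed) {
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gzip.readAllBytes();
        } catch (final IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```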
New option: `compression`. Values:
* `gzip`
* `none` (default) | Support gzip compression on the S3 sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3130/comments | 0 | 2023-08-09T20:38:58Z | 2023-08-14T20:30:50Z | https://github.com/opensearch-project/data-prepper/issues/3130 | 1,843,978,568 | 3,130 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-32731 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>grpc-protobuf-1.45.0.jar</b></p></summary>
<p>gRPC: Protobuf</p>
<p>Path to dependency file: /data-prepper-plugins/otel-logs-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.grpc/grpc-protobuf/1.45.0/f41a3849091a95af98d009294cd8572b3d152a43/grpc-protobuf-1.45.0.jar</p>
<p>
Dependency Hierarchy:
- armeria-grpc-1.15.0.jar (Root Library)
- :x: **grpc-protobuf-1.45.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
When gRPC HTTP2 stack raised a header size exceeded error, it skipped parsing the rest of the HPACK frame. This caused any HPACK table mutations to also be skipped, resulting in a desynchronization of HPACK tables between sender and receiver. If leveraged, say, between a proxy and a backend, this could lead to requests from the proxy being interpreted as containing headers from different proxy clients - leading to an information leak that can be used for privilege escalation or data exfiltration. We recommend upgrading beyond the commit contained in https://github.com/grpc/grpc/pull/33005 https://github.com/grpc/grpc/pull/33005
<p>Publish Date: 2023-06-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-32731>CVE-2023-32731</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cfgp-2977-2fmm">https://github.com/advisories/GHSA-cfgp-2977-2fmm</a></p>
<p>Release Date: 2023-06-09</p>
<p>Fix Resolution: grpc- 1.53.0;grpcio- 1.53.0;io.grpc:grpc-protobuf:1.53.0</p>
</p>
</details>
<p></p>
| CVE-2023-32731 (High) detected in grpc-protobuf-1.45.0.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3129/comments | 1 | 2023-08-09T18:21:48Z | 2023-09-26T11:00:40Z | https://github.com/opensearch-project/data-prepper/issues/3129 | 1,843,784,649 | 3,129 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We need integration test coverage when `distribution_version` is set to `es6` in the `opensearch` sink
**Describe the solution you'd like**
Use the existing matrix and make a [JUnit condition](https://junit.org/junit5/docs/current/user-guide/#writing-tests-conditional-execution) using the DeclaredOpenSearchVersion class or something similar.
**Describe alternatives you've considered (Optional)**
Create a new GHA specifically for ODFE 0.10.0.
**Additional context**
Add any other context or screenshots about the feature request here.
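As a sketch of how such a JUnit condition might gate tests, the predicate below is hypothetical and merely stands in for whatever `DeclaredOpenSearchVersion` actually exposes; an `@EnabledIf`-style annotation could delegate to it:

```java
// Hypothetical predicate for a JUnit 5 @EnabledIf-style condition; the class
// name, method, and version semantics are assumptions, not the real API.
public class DistributionGate {
    public static boolean runsAgainst(final String distribution, final int majorVersion) {
        if ("es".equals(distribution)) {
            // ES 6 (ODFE 0.10.0-era) clusters lack APIs some sink tests exercise
            return majorVersion >= 7;
        }
        return "opensearch".equals(distribution);
    }
}
```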
| Integration test on ES 6 coverage in opensearch sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3126/comments | 0 | 2023-08-09T14:37:37Z | 2023-08-10T00:29:15Z | https://github.com/opensearch-project/data-prepper/issues/3126 | 1,843,408,798 | 3,126 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The OpenTelemetry KVLIST_VALUE is recursively converted to a single unreadable string
**To Reproduce**
Steps to reproduce the behavior:
Use the OpenTelemetry k8sobjectsreceiver source to collect Kubernetes events and export them to Data Prepper.
The body field in OpenSearch then looks something like
```
{"type":"ADDED","object":"{\"reason\":\"Created\",\"metadata\":\"{\\\"uid\\\":\\\"69a08c03-9043-4194-9e2a-a7994686232f\\\",\\\"managedFields\\\":\\\"[\\\\\\\"{\\\\\\\\\\\\\\\"apiVersion\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\"v1\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"manager\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\"kubelet\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fieldsV1\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\"{\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"f:firstTimestamp\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"{}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"f:message\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"{}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"f:count\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"{}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"f:type\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\":\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
```
**Expected behavior**
It would be better to parse KVLIST_VALUE correctly and insert it into OpenSearch as separate field keys.
Failing that, it should be serialized as a plain JSON string so the next pipeline can parse it back.
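A hedged sketch of the desired recursion, with plain `Map`/`List` values standing in for the OTLP `KeyValueList`/`AnyValue` types (an assumption here): nested values are recursed into rather than stringified at each level, so the sink can index real fields instead of a doubly-escaped string.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative: unwrap a kvlist-shaped value into nested Maps recursively.
public class KvListFlatten {
    @SuppressWarnings("unchecked")
    public static Object unwrap(final Object value) {
        if (value instanceof Map) {
            final Map<String, Object> out = new LinkedHashMap<>();
            ((Map<String, Object>) value).forEach((k, v) -> out.put(k, unwrap(v)));
            return out;
        }
        if (value instanceof List) {
            return ((List<Object>) value).stream()
                    .map(KvListFlatten::unwrap)
                    .collect(Collectors.toList());
        }
        return value; // scalar: keep as-is rather than re-serializing to a string
    }
}
```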
**Screenshots**
<img width="1204" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/1520380/58179c48-bfe1-4eed-8f20-2fd5ad9e5e0a">
**Environment (please complete the following information):**
data-prepper image opensearchproject/data-prepper:2.3.2
**Additional context**
data-prepper is installed in an AWS eks and use AWS managed opensearch service
| [BUG] opentelemetry KVLIST_VALUE is not readable in open search | https://api.github.com/repos/opensearch-project/data-prepper/issues/3123/comments | 0 | 2023-08-09T02:44:36Z | 2023-08-09T12:09:54Z | https://github.com/opensearch-project/data-prepper/issues/3123 | 1,842,364,248 | 3,123 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The DeleteEntry processor uses [GenericExpressionEvaluator](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-expression/src/main/java/org/opensearch/dataprepper/expression/GenericExpressionEvaluator.java), which does not support deleting a key when the `delete_when` expression path refers to a JSON object.
```
log-pipeline:
source:
http:
path: "/test/path"
processor:
- delete_entries:
with_keys: [ "responseElements" ]
delete_when: '/responseElements == null'
- date:
from_time_received: true
destination: "@timestamp"
sink:
- stdout:
```
```
curl -X POST http://localhost:2021/test/path -H 'Content-Type: application/json; charset=utf-8' -d '[{"log":"sample log", "responseElements": {"a":"a", "b": "b" }}]'
```
```
2023-08-07T11:10:58,081 [log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - Encountered exception during pipeline log-pipeline processing
org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "/responseElements == null"
at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:41) ~[data-prepper-expression-2.3.2.jar:?]
at org.opensearch.dataprepper.expression.ExpressionEvaluator.evaluateConditional(ExpressionEvaluator.java:28) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.plugins.processor.mutateevent.DeleteEntryProcessor.doExecute(DeleteEntryProcessor.java:40) ~[mutate-event-processors-2.3.2.jar:?]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.3.2.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) ~[micrometer-core-1.10.5.jar:1.10.5]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) ~[data-prepper-api-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:115) ~[data-prepper-core-2.3.2.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:50) [data-prepper-core-2.3.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
```
**Describe the solution you'd like**
Add support for parsing a JSON object in [LiteralTypeConversionsConfiguration](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-expression/src/main/java/org/opensearch/dataprepper/expression/LiteralTypeConversionsConfiguration.java#L16), which is used by GenericExpressionEvaluator.
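The behavior `"/responseElements == null"` needs can be sketched as a null-check that resolves a JSON-pointer-style path against nested maps and treats an object-valued result as non-null. The event is modeled as a plain `Map` here, not Data Prepper's actual `Event` API:

```java
import java.util.Map;

// Sketch only: resolve "/a/b"-style pointers against nested Maps for a
// null comparison; an object-valued result counts as non-null.
public class PointerNullCheck {
    @SuppressWarnings("unchecked")
    public static boolean isNull(final Map<String, Object> event, final String pointer) {
        Object current = event;
        for (final String part : pointer.substring(1).split("/")) {
            if (!(current instanceof Map)) {
                return true; // path descends into a scalar: nothing there
            }
            current = ((Map<String, Object>) current).get(part);
            if (current == null) {
                return true;
            }
        }
        return false; // resolved to a value (scalar OR object) -> not null
    }
}
```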
| DeleteEntry Processor support for parsing json object | https://api.github.com/repos/opensearch-project/data-prepper/issues/3119/comments | 0 | 2023-08-07T16:24:10Z | 2023-08-09T21:14:07Z | https://github.com/opensearch-project/data-prepper/issues/3119 | 1,839,828,264 | 3,119 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As noted [here](https://github.com/opensearch-project/data-prepper/pull/3115#discussion_r1284743547), the Kafka source and sink express byte sizes as raw numbers rather than with the Data Prepper byte type.
**Describe the solution you'd like**
Update all byte configurations to use the byte type - `ByteCount`.
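For illustration, the kind of parsing a byte-count type performs might look like the sketch below; the accepted units and the actual `ByteCount` API are assumptions here, not the real implementation:

```java
// Hypothetical sketch: accept values like "512b", "10kb", "2mb", "1gb"
// and yield a total number of bytes. Invalid input is not handled here.
public class ByteCountSketch {
    public static long parse(final String value) {
        final String v = value.trim().toLowerCase();
        long multiplier = 1;
        String digits = v;
        if (v.endsWith("kb")) { multiplier = 1024; digits = v.substring(0, v.length() - 2); }
        else if (v.endsWith("mb")) { multiplier = 1024 * 1024; digits = v.substring(0, v.length() - 2); }
        else if (v.endsWith("gb")) { multiplier = 1024L * 1024 * 1024; digits = v.substring(0, v.length() - 2); }
        else if (v.endsWith("b")) { digits = v.substring(0, v.length() - 1); }
        return Long.parseLong(digits) * multiplier;
    }
}
```

A configuration like `max_request_size: 10mb` would then replace a bare `10485760`.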
| Update Kafka source/sink to use ByteCount | https://api.github.com/repos/opensearch-project/data-prepper/issues/3116/comments | 0 | 2023-08-04T19:52:01Z | 2023-08-17T21:24:32Z | https://github.com/opensearch-project/data-prepper/issues/3116 | 1,837,263,196 | 3,116 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When collecting metrics from the CloudWatch Logs sink, if there are no event handles in the source type, the sink does not properly update the `LogEventsSucceededCounter`.
**To Reproduce**
Steps to reproduce the behavior:
1. Set up a Data Prepper config with an `http` source and a `cloudwatch_logs` sink, with CloudWatch metrics configured.
2. Sent POST requests with log data to the Data Prepper server instance.
**Expected behavior**
When reading the collected metrics from the sink, a positive value for the events succeeded was expected as log events were being received. But instead received 0 for the entire logging period.
**Screenshots**

**Environment (please complete the following information):**
- OS: macOS Ventura
- Version 13.4 | [BUG] CloudWatch Logs Sink not generating LogEventSuccessMetrics | https://api.github.com/repos/opensearch-project/data-prepper/issues/3113/comments | 1 | 2023-08-04T16:49:48Z | 2023-08-09T22:35:12Z | https://github.com/opensearch-project/data-prepper/issues/3113 | 1,837,070,410 | 3,113 |