| column | type | min | max |
| --- | --- | --- | --- |
| issue_owner_repo | listlengths | 2 | 2 |
| issue_body | stringlengths | 0 | 262k |
| issue_title | stringlengths | 1 | 1.02k |
| issue_comments_url | stringlengths | 53 | 116 |
| issue_comments_count | int64 | 0 | 2.49k |
| issue_created_at | stringdate | 1999-03-17 02:06:42 | 2025-06-23 11:41:49 |
| issue_updated_at | stringdate | 2000-02-10 06:43:57 | 2025-06-23 11:43:00 |
| issue_html_url | stringlengths | 34 | 97 |
| issue_github_id | int64 | 132 | 3.17B |
| issue_number | int64 | 1 | 215k |
[ "opensearch-project", "data-prepper" ]
## CVE-2024-35195 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>requests-2.31.0-py3-none-any.whl</b></p></summary> <p>Python HTTP for Humans.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl">https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl</a></p> <p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p> <p>Path to vulnerable library: /release/smoke-tests/otel-span-exporter/requirements.txt,/examples/trace-analytics-sample-app/sample-app/requirements.txt</p> <p> Dependency Hierarchy: - :x: **requests-2.31.0-py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Requests is a HTTP library. Prior to 2.32.2, when making requests through a Requests `Session`, if the first request is made with `verify=False` to disable cert verification, all subsequent requests to the same host will continue to ignore cert verification regardless of changes to the value of `verify`. This behavior will continue for the lifecycle of the connection in the connection pool. This vulnerability is fixed in 2.32.2. 
<p>Publish Date: 2024-05-20 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-35195>CVE-2024-35195</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: High - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/psf/requests/security/advisories/GHSA-9wx4-h78v-vm56">https://github.com/psf/requests/security/advisories/GHSA-9wx4-h78v-vm56</a></p> <p>Release Date: 2024-05-20</p> <p>Fix Resolution: requests - 2.32.2</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
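Per the advisory, any `Session` whose first request to a host used `verify=False` keeps skipping certificate verification on pooled connections to that host until requests 2.32.2. A minimal sketch for flagging the affected pin in a requirements file (hypothetical helper names; assumes plain `name==x.y.z` pins, unlike full requirement specifiers):

```python
def parse_version(v: str) -> tuple:
    """Turn 'x.y.z' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def vulnerable_requests_pins(requirements: str) -> list:
    """Return the lines that pin requests below the fixed 2.32.2 release."""
    fixed = parse_version("2.32.2")
    flagged = []
    for line in requirements.splitlines():
        line = line.strip()
        if line.startswith("requests=="):
            pinned = line.split("==", 1)[1]
            if parse_version(pinned) < fixed:
                flagged.append(line)
    return flagged
```

Running this over the two `requirements.txt` paths listed above would flag `requests==2.31.0`; bumping the pin to 2.32.2 or later resolves the finding.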
CVE-2024-35195 (Medium) detected in requests-2.31.0-py3-none-any.whl
https://api.github.com/repos/opensearch-project/data-prepper/issues/4562/comments
0
2024-05-22T17:58:16Z
2024-06-07T21:07:06Z
https://github.com/opensearch-project/data-prepper/issues/4562
2,311,157,163
4,562
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a Data Prepper user, I would like to have an `rds` source to load existing data and stream change events from RDS MySQL databases. **Describe the solution you'd like** For export (loading existing data), we can create a snapshot, export it to S3, and read the data from S3. For stream (streaming change events), we can connect to MySQL's binary log stream to receive change events. **Describe alternatives you've considered (Optional)** Run SQL queries periodically through a JDBC driver to load existing and incremental data from the source database. **Additional context** The feature shares similar ideas with the existing `dynamodb` and `documentdb` sources. ## Tasks - [x] Project setup, source configurations, skeleton code - [x] Export implementation - create snapshot and export to S3 - [x] Export implementation - read exported data files in S3 - [x] Stream implementation - [x] Checkpointing in both export and stream - [x] Pipeline configuration transformation template - [x] Secret rotation support - [x] Add E2E acknowledge support - [x] Add data type mapping - [x] Add plugin metrics - [x] Add aggregate metrics - [ ] Add integration tests
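The export-plus-binlog approach above could surface as a pipeline configuration along these lines. Every key below is hypothetical, modeled loosely on the existing `documentdb` source; the actual `rds` source options may differ:

```yaml
rds-pipeline:
  source:
    rds:                                # hypothetical source name and keys
      db_identifier: "<<my-rds-instance>>"
      export: true                      # snapshot -> S3 export -> read from S3
      stream: true                      # MySQL binary log change events
      s3_bucket: "<<bucket-name>>"
      s3_region: "<<bucket-region>>"
      aws:
        sts_role_arn: "<<arn:aws:iam::123456789012:role/Example-Role>>"
  sink:
    - opensearch:
        hosts: [ "https://opensearch:9200" ]
```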
Support AWS Aurora/RDS MySQL as source
https://api.github.com/repos/opensearch-project/data-prepper/issues/4561/comments
0
2024-05-22T16:19:50Z
2025-05-27T18:54:08Z
https://github.com/opensearch-project/data-prepper/issues/4561
2,310,950,932
4,561
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a user of the DynamoDB source, I would like to differentiate between items deleted via TTL in DynamoDB streams and items that were explicitly deleted. DynamoDB Streams provides this information in the `userIdentity` field of the stream record (https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_Record.html). **Describe the solution you'd like** An additional metadata field in DynamoDB stream events:

```
deleted_with_ttl: true
```

If the record was not the result of a TTL deletion, this metadata key will not get applied to the Event at all.
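A sketch of the proposed detection (hypothetical helpers, not Data Prepper code). The DynamoDB Streams documentation describes TTL deletions as REMOVE records whose `userIdentity` has type `Service` and principal `dynamodb.amazonaws.com`; the field casing below follows the Lambda stream-event shape and may differ in other API surfaces:

```python
def deleted_with_ttl(stream_record: dict) -> bool:
    """True when a REMOVE record was produced by a TTL expiry."""
    if stream_record.get("eventName") != "REMOVE":
        return False
    identity = stream_record.get("userIdentity") or {}
    return (identity.get("type") == "Service"
            and identity.get("principalId") == "dynamodb.amazonaws.com")

def apply_metadata(event_metadata: dict, stream_record: dict) -> dict:
    """Attach the proposed key only for TTL deletes, as the issue requests."""
    if deleted_with_ttl(stream_record):
        event_metadata["deleted_with_ttl"] = True
    return event_metadata
```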
Add metadata to DynamoDB stream Events to indicate if they are deleted via TTL
https://api.github.com/repos/opensearch-project/data-prepper/issues/4560/comments
0
2024-05-21T20:56:30Z
2024-10-10T20:11:28Z
https://github.com/opensearch-project/data-prepper/issues/4560
2,309,100,432
4,560
[ "opensearch-project", "data-prepper" ]
Please approve or deny the release of Data Prepper. **VERSION**: 2.8.0 **BUILD NUMBER**: 82 **RELEASE MAJOR TAG**: true **RELEASE LATEST TAG**: true Workflow is pending manual review. URL: https://github.com/opensearch-project/data-prepper/actions/runs/9117735331 Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed KarstenSchnitter dlvenable oeyh] Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel.
Manual approval required for workflow run 9117735331: Release Data Prepper : 2.8.0
https://api.github.com/repos/opensearch-project/data-prepper/issues/4551/comments
3
2024-05-16T19:29:37Z
2024-05-16T20:20:01Z
https://github.com/opensearch-project/data-prepper/issues/4551
2,301,186,459
4,551
[ "opensearch-project", "data-prepper" ]
**Describe the bug** The GitHub workflow has failed consistently due to some regression change.
[BUG] AWS secrets e2e test is broken
https://api.github.com/repos/opensearch-project/data-prepper/issues/4549/comments
0
2024-05-16T15:31:30Z
2025-03-04T20:55:07Z
https://github.com/opensearch-project/data-prepper/issues/4549
2,300,725,515
4,549
[ "opensearch-project", "data-prepper" ]
**Describe the bug** When both export and stream are enabled, and the DocumentDB database or collection is invalid or the collection doesn't contain any records, the DocumentDB source fails to start stream processing, or fails to recover when the database or collection is created. Since the collection has no records, all new updates should be synced through the stream. Currently the export partition worker job is stuck waiting for partitions, and the stream job is stuck waiting for export to complete. This is a corner case we should handle. **Expected behavior** The stream job should not wait if there are no records to export, and it should process new change events as they become available.
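The expected behavior amounts to a small gating rule; a sketch with hypothetical names, not the actual source code:

```python
def stream_may_start(export_enabled: bool,
                     export_partition_count: int,
                     export_complete: bool) -> bool:
    """An empty export (zero partitions) must not block streaming;
    otherwise the stream waits for the export to finish, as today."""
    if not export_enabled:
        return True
    if export_partition_count == 0:
        return True   # nothing to export; sync everything via the stream
    return export_complete
```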
[BUG] DocumentDB source streaming failed to start when collection is empty
https://api.github.com/repos/opensearch-project/data-prepper/issues/4544/comments
0
2024-05-15T14:31:24Z
2024-05-20T18:17:01Z
https://github.com/opensearch-project/data-prepper/issues/4544
2,298,105,387
4,544
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** I would like to use Data Prepper to full-load/export and ingest change data capture events from AWS DocumentDB. **Describe the solution you'd like** Support a DocumentDB source that does a full scan of an AWS DocumentDB collection and exports the entire collection data to the OpenSearch sink. The DocumentDB source will also read the DocumentDB stream data and ingest any change data capture events to the OpenSearch sink. For the full load, the source will implement a partition supplier that partitions the collection into multiple query partitions and scans them in parallel. **Describe alternatives you've considered (Optional)** Support Kafka Connect with the Debezium MongoDB connector plugin. **Additional context** Sample DocumentDB source configuration:

```
documentdb-pipeline:
  source:
    documentdb:
      acknowledgments: true
      host: "<<docdb-2024-01-03-20-31-17.cluster-abcdef.us-east-1.docdb.amazonaws.com>>"
      port: 27017
      authentication:
        username: ${{aws_secrets:secret:username}}
        password: ${{aws_secrets:secret:password}}
      aws:
        sts_role_arn: "<<arn:aws:iam::123456789012:role/Example-Role>>"
      # If id_key is specified, new key with docdb_id that matches the data from _id will be created
      # id_key: "docdb_id"
      s3_bucket: "<<bucket-name>>"
      s3_region: "<<bucket-region>>"
      # optional s3_prefix for Opensearch ingestion to write the temporary data
      s3_prefix: "<<path_prefix>>"
      collections:
        # collection format: <databaseName>.<collectionName>
        - collection: "<<dbname.collection1>>"
          export: true
          stream: true
```
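The partition-supplier idea above can be sketched over a numeric key space (illustrative only; real DocumentDB `_id` values need an order-aware splitter):

```python
def make_query_partitions(min_id: int, max_id: int, parts: int) -> list:
    """Split the inclusive key range [min_id, max_id] into up to `parts`
    contiguous (lo, hi) ranges that workers can scan in parallel."""
    span = max_id - min_id + 1
    step = -(-span // parts)  # ceiling division, so no range is empty
    ranges = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + step - 1, max_id)
        ranges.append((lo, hi))
        lo = hi + 1
    return ranges
```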
Support Full load and CDC from AWS DocumentDB
https://api.github.com/repos/opensearch-project/data-prepper/issues/4534/comments
0
2024-05-14T16:24:14Z
2024-05-14T16:27:29Z
https://github.com/opensearch-project/data-prepper/issues/4534
2,295,896,291
4,534
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** The default behavior of Data Prepper core is to shut down only when no sub-pipelines are valid. This can lead to a confusing user experience for someone who expects Data Prepper to start only when all sub-pipelines are running correctly. **Describe the solution you'd like** An optional parameter in the `data-prepper-config.yaml` to change this behavior so that Data Prepper fails to start when any of the sub-pipelines is not configured properly. This would throw an exception that stops Data Prepper when any sub-pipeline is misconfigured.

```
fail_on_any_sub_pipeline_failure: true # defaults to false
```
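A sketch of the proposed behavior (hypothetical function and exception names): with the flag unset, startup fails only when no sub-pipeline is valid; with it set, any invalid sub-pipeline aborts startup.

```python
class PipelineStartupError(Exception):
    """Raised when startup validation decides Data Prepper must stop."""

def check_startup(pipeline_validity: dict,
                  fail_on_any_sub_pipeline_failure: bool = False) -> list:
    """pipeline_validity maps sub-pipeline name -> whether it validated.
    Returns the names allowed to run, or raises PipelineStartupError."""
    invalid = [name for name, ok in pipeline_validity.items() if not ok]
    if fail_on_any_sub_pipeline_failure and invalid:
        raise PipelineStartupError(f"Invalid sub-pipelines: {invalid}")
    if len(invalid) == len(pipeline_validity):
        raise PipelineStartupError("No valid sub-pipelines")
    return [name for name, ok in pipeline_validity.items() if ok]
```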
Add option to shut down and fail data prepper when any sub-pipelines are invalid
https://api.github.com/repos/opensearch-project/data-prepper/issues/4530/comments
1
2024-05-13T15:52:14Z
2024-05-14T19:33:24Z
https://github.com/opensearch-project/data-prepper/issues/4530
2,293,196,481
4,530
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** DataPrepper does an aggregation of all spans with the same trace id. If it encounters a span with a null parent span id, it assigns that span's name as the trace group to all spans. This allows classification in the OpenSearch Dashboards observability plugin. However, this approach fails if the global parent span does not arrive in time or at all. Consider the following situation: ![otel-partial-trace drawio](https://github.com/opensearch-project/data-prepper/assets/5429709/f1000b02-7101-4158-81f7-9405ae0ce7f4) In this picture, DataPrepper receives all coloured spans. It does not receive the gray spans. This might be because they are created in another system outside of the reach of the observability infrastructure the DataPrepper instance belongs to. This can be another vendor or a client system where the coloured spans are generated within a SaaS solution. Currently, DataPrepper will not create a trace group entry for the spans, since the global trace parent is never received. **Describe the solution you'd like** It would be great if, in that case, DataPrepper followed the connection along the parent span ids until it can no longer resolve the parent. If this leads to a unique span, this span should be used as the trace parent instead of the original global trace parent. The picture shows a conflict situation where no unique parent can be determined. In that case, no trace group should be issued, keeping the current behaviour. **Additional context** For the implementation, this feature could be an option in the OTelTraceRawProcessor where the [detection of a parent span](https://github.com/opensearch-project/data-prepper/blob/f5b0fee2fbb9b829de633a05e53fd6fe5d012f93/data-prepper-plugins/otel-trace-raw-processor/src/main/java/org/opensearch/dataprepper/plugins/processor/oteltrace/OTelTraceRawProcessor.java#L105-L113) needs to be changed.
Alternatively, in the OTelTraceGroupProcessor the [search query](https://github.com/opensearch-project/data-prepper/blob/f5b0fee2fbb9b829de633a05e53fd6fe5d012f93/data-prepper-plugins/otel-trace-group-processor/src/main/java/org/opensearch/dataprepper/plugins/processor/oteltracegroup/OTelTraceGroupProcessor.java#L144-L161) could be changed. It would also be possible to create a new processor or action in the aggregate processor, that fills in empty trace groups if possible.
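The suggested fallback can be sketched as follows (a hypothetical helper in plain Python, not the Java processors named above): follow each received span's parent links while the parent was also received; if exactly one top-most received span remains, use it as the trace group root, otherwise keep the current behaviour.

```python
def fallback_trace_group_root(spans: dict):
    """spans maps span_id -> parent_span_id for one trace (received spans
    only; assumes acyclic parent links). Returns the unique span whose
    parent chain leaves the received set, or None on a conflict."""
    roots = set()
    for span_id in spans:
        current = span_id
        while spans.get(current) in spans:   # parent was also received
            current = spans[current]
        roots.add(current)
    return roots.pop() if len(roots) == 1 else None
```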
Allow trace group for partial traces
https://api.github.com/repos/opensearch-project/data-prepper/issues/4517/comments
4
2024-05-08T19:37:48Z
2024-10-17T21:04:04Z
https://github.com/opensearch-project/data-prepper/issues/4517
2,286,307,936
4,517
[ "opensearch-project", "data-prepper" ]
## CVE-2024-34069 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Werkzeug-2.2.3-py3-none-any.whl</b></p></summary> <p>The comprehensive WSGI web application library.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl">https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl</a></p> <p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p> <p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p> <p> Dependency Hierarchy: - :x: **Werkzeug-2.2.3-py3-none-any.whl** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Werkzeug is a comprehensive WSGI web application library. The debugger in affected versions of Werkzeug can allow an attacker to execute code on a developer's machine under some circumstances. This requires the attacker to get the developer to interact with a domain and subdomain they control, and enter the debugger PIN, but if they are successful it allows access to the debugger even if it is only running on localhost. This also requires the attacker to guess a URL in the developer's application that will trigger the debugger. This vulnerability is fixed in 3.0.3. 
<p>Publish Date: 2024-05-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-34069>CVE-2024-34069</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-2g68-c3qc-8985">https://github.com/pallets/werkzeug/security/advisories/GHSA-2g68-c3qc-8985</a></p> <p>Release Date: 2024-05-06</p> <p>Fix Resolution: Werkzeug - 3.0.3</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
CVE-2024-34069 (High) detected in Werkzeug-2.2.3-py3-none-any.whl
https://api.github.com/repos/opensearch-project/data-prepper/issues/4515/comments
0
2024-05-08T06:56:11Z
2024-05-15T22:24:20Z
https://github.com/opensearch-project/data-prepper/issues/4515
2,284,830,318
4,515
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** I need to be able to collect raw syslog traffic from endpoints such as network devices. Today this would require some sort of log collector beyond Data Prepper, such as Logstash. **Describe the solution you'd like** Add a syslog source plugin similar to Logstash's: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html **Describe alternatives you've considered (Optional)** Using Logstash or Fluent Bit instead :(
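The first thing any syslog source has to do is split the RFC 3164 `<PRI>` prefix into facility and severity. A minimal sketch (hypothetical helper, not a proposed plugin API):

```python
import re

_PRI = re.compile(r"^<(\d{1,3})>")

def parse_priority(message: str):
    """Return (facility, severity) from a leading '<PRI>', or None if the
    message has no valid priority (PRI = facility * 8 + severity <= 191)."""
    match = _PRI.match(message)
    if not match:
        return None
    pri = int(match.group(1))
    if pri > 191:
        return None
    return divmod(pri, 8)   # facility = pri // 8, severity = pri % 8
```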
Add Syslog Source
https://api.github.com/repos/opensearch-project/data-prepper/issues/4511/comments
5
2024-05-07T13:45:29Z
2024-12-11T17:17:40Z
https://github.com/opensearch-project/data-prepper/issues/4511
2,283,402,612
4,511
[ "opensearch-project", "data-prepper" ]
**Describe the bug** Similar to https://github.com/opensearch-project/data-prepper/issues/3514, the regex parser fails on the example from documentation: https://github.com/opensearch-project/data-prepper/blob/main/docs/expression_syntax.md#reference-table. **To Reproduce** Steps to reproduce the behavior: 1. Create a pipeline with the following configuration ``` log-pipeline: source: http: ssl: false processor: - parse_json: source: message parse_when: '/message=~"^\w*$"' # Fails # parse_when: '/message=~"^\w*\ $"' # Also fails # parse_when: '/message =~ "^(\\{.*\\}|\\[.*\\])$"' # Also fails sink: - opensearch: hosts: [ 'https://opensearch:9200' ] insecure: true ``` 2. Send in a log message (any). 3. See the error log: ``` 2024-05-07T12:07:25,039 [log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.plugins.processor.parse.AbstractParseProcessor - An exception occurred while using the parse_json processor on Event [org.opensearch.dataprepper.model.log.JacksonLog@25c210a1] org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "/message=~"^\w*$"" at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:42) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.ExpressionEvaluator.evaluateConditional(ExpressionEvaluator.java:28) ~[data-prepper-api-2.7.0.jar:?] at org.opensearch.dataprepper.plugins.processor.parse.AbstractParseProcessor.doExecute(AbstractParseProcessor.java:70) ~[parse-json-processor-2.7.0.jar:?] at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.7.0.jar:?] at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) [micrometer-core-1.11.5.jar:1.11.5] at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) [data-prepper-api-2.7.0.jar:?] 
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:135) [data-prepper-core-2.7.0.jar:?] at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.7.0.jar:?] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?] at java.base/java.lang.Thread.run(Thread.java:840) [?:?] Caused by: org.opensearch.dataprepper.expression.ParseTreeCompositeException at org.opensearch.dataprepper.expression.ParseTreeParser.createParseTree(ParseTreeParser.java:78) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.ParseTreeParser.parse(ParseTreeParser.java:101) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.ParseTreeParser.parse(ParseTreeParser.java:27) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.MultiThreadParser.parse(MultiThreadParser.java:35) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.MultiThreadParser.parse(MultiThreadParser.java:20) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:38) ~[data-prepper-expression-2.7.0.jar:?] ... 
12 more Caused by: org.opensearch.dataprepper.expression.ExceptionOverview: Multiple exceptions (5) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.InputMismatchException: null at org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) line 1:11 token recognition error at: '^' line 1:12 token recognition error at: '\' line 1:13 token recognition error at: 'w*' line 1:15 token recognition error at: '$"' line 1:10 mismatched input '"' expecting {JsonPointer, EscapedJsonPointer, String} 2024-05-07T12:07:25,042 [log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.plugins.processor.parse.AbstractParseProcessor - An exception occurred while using the parse_json processor on Event [org.opensearch.dataprepper.model.log.JacksonLog@2c0f1eaf] org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "/message=~"^\w*$"" ``` 4. 
If you escape the dollar sign, you still get an error: ``` line 1:11 token recognition error at: '^' line 1:12 token recognition error at: '\' line 1:13 token recognition error at: 'w*' line 1:15 token recognition error at: '\' line 1:16 token recognition error at: '$"' line 1:10 mismatched input '"' expecting {JsonPointer, EscapedJsonPointer, String} 2024-05-07T12:16:50,189 [log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.plugins.processor.parse.AbstractParseProcessor - An exception occurred while using the parse_json processor on Event [org.opensearch.dataprepper.model.log.JacksonLog@686b21ea] org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "/message=~"^\w*\$"" at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:42) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.ExpressionEvaluator.evaluateConditional(ExpressionEvaluator.java:28) ~[data-prepper-api-2.7.0.jar:?] at org.opensearch.dataprepper.plugins.processor.parse.AbstractParseProcessor.doExecute(AbstractParseProcessor.java:70) ~[parse-json-processor-2.7.0.jar:?] at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.7.0.jar:?] at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) [micrometer-core-1.11.5.jar:1.11.5] at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) [data-prepper-api-2.7.0.jar:?] at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:135) [data-prepper-core-2.7.0.jar:?] at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.7.0.jar:?] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?] 
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?] at java.base/java.lang.Thread.run(Thread.java:840) [?:?] Caused by: org.opensearch.dataprepper.expression.ParseTreeCompositeException at org.opensearch.dataprepper.expression.ParseTreeParser.createParseTree(ParseTreeParser.java:78) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.ParseTreeParser.parse(ParseTreeParser.java:101) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.ParseTreeParser.parse(ParseTreeParser.java:27) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.MultiThreadParser.parse(MultiThreadParser.java:35) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.MultiThreadParser.parse(MultiThreadParser.java:20) ~[data-prepper-expression-2.7.0.jar:?] at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:38) ~[data-prepper-expression-2.7.0.jar:?] ... 
12 more Caused by: org.opensearch.dataprepper.expression.ExceptionOverview: Multiple exceptions (6) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.LexerNoViableAltException: null at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309) |-- org.antlr.v4.runtime.InputMismatchException: null at org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270) line 1:11 token recognition error at: '^' line 1:12 token recognition error at: '\' line 1:13 token recognition error at: 'w*' line 1:15 token recognition error at: '\' line 1:16 token recognition error at: '$"' line 1:10 mismatched input '"' expecting {JsonPointer, EscapedJsonPointer, String} 2024-05-07T12:16:50,191 [log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.plugins.processor.parse.AbstractParseProcessor - An exception occurred while using the parse_json processor on Event [org.opensearch.dataprepper.model.log.JacksonLog@1c5deeb4] org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "/message=~"^\w*\$"" ``` 5. Parsing also fails when checking if message is a JSON string or array with the following regex: parse_when: '/message =~ "^(\\{.*\\}|\\[.*\\])$"' **Expected behavior** Regex should be parsed correctly. **Environment (please complete the following information):** - Data Prepper 2.7. 
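For comparison, the reported patterns are ordinary regex syntax and evaluate fine in a standard engine; the failure is specific to the Data Prepper expression grammar's lexer. For example, in Python:

```python
import re

def matches(pattern: str, value: str) -> bool:
    """Evaluate the same patterns the expression parser rejects."""
    return re.search(pattern, value) is not None

# "^\w*$"            -> the whole value is word characters (or empty)
# "^(\{.*\}|\[.*\])$" -> the value looks like a JSON object or array
```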
[BUG] Failure to process "reserved" chars in regular expressions
https://api.github.com/repos/opensearch-project/data-prepper/issues/4510/comments
2
2024-05-07T12:20:18Z
2024-05-07T20:48:09Z
https://github.com/opensearch-project/data-prepper/issues/4510
2,283,124,202
4,510
[ "opensearch-project", "data-prepper" ]
**Describe the bug** We have a Data Prepper & OpenTelemetry setup in our Kubernetes ecosystem running to collect metrics and traces and send the data to OpenSearch. This setup was working perfectly fine when we were running with the below versions. Data Prepper - 2.6.0 OpenTelemetry Collector - 0.83.0 Recently we performed a version upgrade of the Data Prepper component only, to remove certain vulnerabilities associated with the image. The new versions we are on are as below. Data Prepper - 2.7.0 OpenTelemetry Collector - 0.83.0 However, post this upgrade we are encountering the below error in Data Prepper, and metrics / traces are not reaching OpenSearch. [armeria-common-worker-epoll-3-2] ERROR org.opensearch.dataprepper.GrpcRequestExceptionHandler - Unexpected exception handling gRPC request com.linecorp.armeria.common.stream.ClosedStreamException: received a RST_STREAM frame: CANCEL [pool-9-thread-94] ERROR org.opensearch.dataprepper.plugins.source.otelmetrics.OTelMetricsGrpcService - Failed to write the request of size 120068 due to: java.util.concurrent.TimeoutException: Pipeline [otel-metrics-pipeline] - Buffer does not have enough capacity left for the number of records: 286, timed out waiting for slots. at org.opensearch.dataprepper.plugins.buffer.blockingbuffer.BlockingBuffer.doWriteAll(BlockingBuffer.java:127) ~ [blocking-buffer-2.7.0.jar:?] at org.opensearch.dataprepper.model.buffer.AbstractBuffer.writeAll(AbstractBuffer.java:107) ~[data-prepper-api-2.7.0.jar:?] at org.opensearch.dataprepper.model.buffer.DelegatingBuffer.writeAll(DelegatingBuffer.java:48) ~[data-prepper-api-2.7.0.jar:?] at org.opensearch.dataprepper.model.buffer.DelegatingBuffer.writeAll(DelegatingBuffer.java:48) ~[data-prepper-api-2.7.0.jar:?] at org.opensearch.dataprepper.parser.CircuitBreakingBuffer.writeAll(CircuitBreakingBuffer.java:50) ~[data-prepper-core-2.7.0.jar:?]
at org.opensearch.dataprepper.plugins.source.otelmetrics.OTelMetricsGrpcService.processRequest(OTelMetricsGrpcService.java:97) ~[otel-metrics-source-2.7.0.jar:?] at org.opensearch.dataprepper.plugins.source.otelmetrics.OTelMetricsGrpcService.lambda$export$0(OTelMetricsGrpcService.java:83) ~[otel-metrics-source-2.7.0.jar:?] at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.11.5.jar:1.11.5] at org.opensearch.dataprepper.plugins.source.otelmetrics.OTelMetricsGrpcService.export(OTelMetricsGrpcService.java:83) ~[otel-metrics-source-2.7.0.jar:?] at io.opentelemetry.proto.collector.metrics.v1.MetricsServiceGrpc$MethodHandlers.invoke(MetricsServiceGrpc.java:246) ~[opentelemetry-proto-0.16.0-alpha.jar:0.16.0] at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182) ~[grpc-stub-1.58.0.jar:1.58.0] at com.linecorp.armeria.internal.server.grpc.AbstractServerCall.invokeOnMessage(AbstractServerCall.java:387) ~[armeria-grpc-1.26.4.jar:?] at com.linecorp.armeria.internal.server.grpc.AbstractServerCall.lambda$onRequestMessage$2(AbstractServerCall.java:351) ~[armeria-grpc-1.26.4.jar:?] at com.linecorp.armeria.internal.shaded.guava.util.concurrent.SequentialExecutor$1.run(SequentialExecutor.java:125) [armeria-1.26.4.jar:?] at com.linecorp.armeria.internal.shaded.guava.util.concurrent.SequentialExecutor$QueueWorker.workOnQueue(SequentialExecutor.java:237) [armeria-1.26.4.jar:?] at com.linecorp.armeria.internal.shaded.guava.util.concurrent.SequentialExecutor$QueueWorker.run(SequentialExecutor.java:182) [armeria-1.26.4.jar:?] at com.linecorp.armeria.common.DefaultContextAwareRunnable.run(DefaultContextAwareRunnable.java:45) [armeria-1.26.4.jar:?] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?] 
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?] at java.base/java.lang.Thread.run(Thread.java:840) [?:?] [armeria-common-worker-epoll-3-1] WARN io.netty.util.concurrent.AbstractEventExecutor - A task raised an exception. Task: com.linecorp.armeria.common.DefaultContextAwareRunnable@23d5df32 java.lang.IllegalStateException: call already closed. status: Status{code=RESOURCE_EXHAUSTED, description=Pipeline [otel-metrics-pipeline] - Buffer does not have enough capacity left for the number of records: 286, timed out waiting for slots., cause=null}, exception: org.opensearch.dataprepper.exceptions.BufferWriteException: Pipeline [otel-metrics- pipeline] - Buffer does not have enough capacity left for the number of records: 286, timed out waiting for slots. at com.linecorp.armeria.internal.shaded.guava.base.Preconditions.checkState(Preconditions.java:835) ~[armeria-1.26.4.jar:?] at com.linecorp.armeria.internal.server.grpc.AbstractServerCall.doClose(AbstractServerCall.java:245) ~[armeria-grpc-1.26.4.jar:?] at com.linecorp.armeria.internal.server.grpc.AbstractServerCall.lambda$close$1(AbstractServerCall.java:227) ~[armeria-grpc-1.26.4.jar:?] at com.linecorp.armeria.common.DefaultContextAwareRunnable.run(DefaultContextAwareRunnable.java:45) ~[armeria-1.26.4.jar:?] 
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.100.Final.jar:4.1.100.Final] at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166) [netty-common-4.1.100.Final.jar:4.1.100.Final] at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) [netty-common-4.1.100.Final.jar:4.1.100.Final] at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:413) [netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.100.Final.jar:4.1.100.Final] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.100.Final.jar:4.1.100.Final] at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.100.Final.jar:4.1.100.Final] at java.base/java.lang.Thread.run(Thread.java:840) [?:?] **To Reproduce** Bring up similar setup on kubernetes ecosystem to collect and send metrics & traces to Opensearch. Data Prepper - 2.7.0 OpenTelemetry Collector - 0.83.0 **Expected behavior** Metrics and traces collected from kubernetes need to flow to opensearch seamleslly,
ERROR org.opensearch.dataprepper.GrpcRequestExceptionHandler in dataprepper version 2.7.0
https://api.github.com/repos/opensearch-project/data-prepper/issues/4502/comments
5
2024-05-06T09:47:08Z
2024-06-04T06:42:03Z
https://github.com/opensearch-project/data-prepper/issues/4502
2,280,470,636
4,502
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
Currently, opensearch as a source supports `indices` but not `aliases`.

**Describe the solution you'd like**
Support `aliases` as well.

**Describe alternatives you've considered (Optional)**
`index_name_regex` is supported, so one can provide a regex pattern. But if the names of the indices change, the user has to modify the `index_name_regex` accordingly. With support for `aliases`, one can change the indexes an alias points to at any time without modifying the configuration.
Support Index `aliases` in Opensearch as a data source.
https://api.github.com/repos/opensearch-project/data-prepper/issues/4501/comments
1
2024-05-04T06:54:46Z
2024-05-07T20:03:15Z
https://github.com/opensearch-project/data-prepper/issues/4501
2,278,798,628
4,501
[ "opensearch-project", "data-prepper" ]
Example DocumentDB document:
```
{
  _id: Timestamp(time=1713536835, ordinal=12),
  url: "https://github.com/opensearch-project/OpenSearch"
}
```
The source outputs:
```
{
  docdb_id: 1713536835,
  url: "https://github.com/opensearch-project/OpenSearch"
}
metadata {
  "primary_key" : "1713536835-12",
  "documentdb_id_bson_type" : "timestamp"
}
```
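A minimal Python sketch (illustrative only, not actual Data Prepper code) of how a source could emit the document above together with conversion metadata; the field and metadata names follow the example output:

```python
def to_event(doc_id_time, doc_id_ordinal, fields):
    """Map a BSON Timestamp _id to an event document plus conversion metadata."""
    event = {"docdb_id": doc_id_time, **fields}
    metadata = {
        # primary_key keeps the full identity, including the ordinal
        "primary_key": f"{doc_id_time}-{doc_id_ordinal}",
        # record the original BSON type so a later processor can convert back
        "documentdb_id_bson_type": "timestamp",
    }
    return event, metadata
```

With the type name preserved in metadata, a downstream processor could reconstruct the original `Timestamp(time, ordinal)` from `primary_key` instead of only seeing the lossy `docdb_id` integer.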
Include BSON type information in the Event metadata to allow for conversions
https://api.github.com/repos/opensearch-project/data-prepper/issues/4491/comments
1
2024-05-01T20:06:29Z
2024-05-14T00:53:40Z
https://github.com/opensearch-project/data-prepper/issues/4491
2,274,113,373
4,491
[ "opensearch-project", "data-prepper" ]
Tracking items to add to the DocumentDb support.

- [ ] #4461
DocumentDb Mapping Follow-on
https://api.github.com/repos/opensearch-project/data-prepper/issues/4490/comments
0
2024-05-01T19:34:40Z
2024-05-01T19:41:11Z
https://github.com/opensearch-project/data-prepper/issues/4490
2,274,070,213
4,490
[ "opensearch-project", "data-prepper" ]
## Background
We only want to run some processors when a field has a certain type.

## Proposal
I propose creating a new operator within the Data Prepper expression language: `typeof`. This could be used to determine whether a given field has a certain type. For example:
```
/myfield typeof string
```
Additionally, this approach could support inheritance. For example, the following could all be true:
```
25 typeof integer
25 typeof number
25.5 typeof double
25.5 typeof number
```
### Types
Data Prepper doesn't have a consistent concept of types. However, we should have some core types. We can base these on what we already have for the convert entries processor.

https://github.com/opensearch-project/data-prepper/blob/24bbbf1576849f38671b9e4a03de347c9e89c28f/data-prepper-plugins/mutate-event-processors/src/main/java/org/opensearch/dataprepper/plugins/processor/mutateevent/TargetType.java#L21-L25

## Alternative
One alternative is to add a new function:
```
$type(/myfield) == 'string'
```
However, this would not work as well with inheritance.
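A rough sketch of how a `typeof` evaluation with inheritance could work, using the core type names discussed above (this is illustrative Python, not the proposed Java implementation):

```python
def typeof(value, type_name):
    """Return True if value's type is type_name or inherits from it."""
    # Determine the concrete core type (bool must be checked before int,
    # since Python booleans are a subclass of int).
    concrete = (
        "boolean" if isinstance(value, bool) else
        "integer" if isinstance(value, int) else
        "double" if isinstance(value, float) else
        "string" if isinstance(value, str) else
        "map" if isinstance(value, dict) else
        "array" if isinstance(value, list) else
        "null"
    )
    # Inheritance: integer and double are both numbers.
    parents = {"integer": {"number"}, "double": {"number"}}
    return type_name == concrete or type_name in parents.get(concrete, set())
```

Under this scheme, `25 typeof integer` and `25 typeof number` both hold, matching the inheritance examples above, while `"25" typeof number` does not.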
Support conditional expression to evaluate based on the data type for a given field
https://api.github.com/repos/opensearch-project/data-prepper/issues/4478/comments
2
2024-04-30T19:42:47Z
2024-05-14T00:24:02Z
https://github.com/opensearch-project/data-prepper/issues/4478
2,272,329,632
4,478
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
As a user of the OpenSearch source, I would like to do only a one-time listing of the indices rather than periodically pull new indices.

**Describe the solution you'd like**
An option in the OpenSearch source to configure the number of `cat/indices` calls to make:
```
source:
  opensearch:
    scheduling:
      list_index_count: 1
```
This would list the indexes once, process them, and then stop listing and working completely. This can be done in a similar way as the S3 source, which has a configurable scan count that blocks subsequent scans once that count is reached (https://github.com/opensearch-project/data-prepper/blob/2f9bed8ae28868c33f50e7e929b502970cb23aff/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/s3/S3ScanPartitionCreationSupplier.java#L73). This would be applied to the global state item and supplier for the OpenSearch source (https://github.com/opensearch-project/data-prepper/blob/2f9bed8ae28868c33f50e7e929b502970cb23aff/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/source/opensearch/worker/OpenSearchIndexPartitionCreationSupplier.java#L65).
Stop listing indexes after one time pull from the OpenSearch indices
https://api.github.com/repos/opensearch-project/data-prepper/issues/4471/comments
0
2024-04-26T20:08:11Z
2024-04-26T20:09:27Z
https://github.com/opensearch-project/data-prepper/issues/4471
2,266,435,630
4,471
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
Currently, I do not see any way to have a single pipeline consume an s3 source (with sqs) for S3 buckets that are in different regions. It would be nice to have this ability. Example scenario:
- two S3 buckets, one in us-west-2 and one in us-east-1
- each bucket has event notifications configured with an SNS topic in its respective region
- a single SQS queue in us-east-1 that is subscribed to both topics above (one topic in us-east-1, another in us-west-2)
- configuration yaml:
```
source:
  s3:
    notification_type: "sqs"
    codec:
      newline:
    sqs:
      queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/asdf"
    bucket_owners:
      my-bucket-in-us-west-2: 210987654321
      my-bucket-in-us-east-1: 123456789012
    aws:
      sts_role_arn: "arn:aws:iam::123456789012:role/asdf"
      region: us-east-1
```
With the above configuration, everything goes smoothly for us-east-1. However, the pipeline fails to get objects from the us-west-2 bucket because the S3 client is configured for us-east-1. The (not very informative) error log is:

`[s3-source-sqs-1] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Error processing from S3: null (Service: S3, Status Code: 400, Request ID: xxxx, Extended Request ID: xxxx)`

**Describe the solution you'd like**
Enable (or add an option to enable) cross-region access on the S3 client so it is able to download objects from buckets in regions other than the one defined in the yaml config. See https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/s3-cross-region.html.
A potential solution: add `.crossRegionAccessEnabled()` to `createS3Client` in `S3ClientBuilderFactory`:
```
public S3Client createS3Client() {
    LOG.info("Creating S3 client");
    return S3Client.builder()
            .crossRegionAccessEnabled(true)
            .region(s3SourceConfig.getAwsAuthenticationOptions().getAwsRegion())
            .credentialsProvider(credentialsProvider)
            .overrideConfiguration(ClientOverrideConfiguration.builder()
                    .retryPolicy(retryPolicy -> retryPolicy.numRetries(5).build())
                    .build())
            .build();
}
```
**Describe alternatives you've considered (Optional)**
Using a pipeline and SQS queue for each bucket that is in a different region. But this feels silly: an extra SQS queue, an extra pipeline, and duplicated configuration.
S3 source - allow cross region access
https://api.github.com/repos/opensearch-project/data-prepper/issues/4470/comments
7
2024-04-26T18:37:22Z
2024-12-29T17:37:12Z
https://github.com/opensearch-project/data-prepper/issues/4470
2,266,314,846
4,470
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
The `s3` source supports S3's [bucket ownership verification](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-owner-condition.html) to protect against reading from buckets in unexpected accounts. The `s3` sink does not have this feature.

**Describe the solution you'd like**
Provide the same configurations for S3 bucket ownership as are provided in the `s3` source. Use those to define the `ExpectedBucketOwner` parameter when writing to S3.
```
sink:
  - s3:
      default_bucket_owner: 000000000000
      bucket_owners:
        my-bucket-01: 123456789012
        my-bucket-02: 999999999999
```
Conceptual `PutObjectRequest`:
```
PutObjectRequest.builder().bucket(defaultBucket).key(objectKey).expectedBucketOwner(bucketOwner).build()
```
Additionally, this check should occur when either of those fields is set. If they are not set, then there is no check. This is the current default, so this will not break anything. As a result, I don't see any need for a disable flag.

**Additional context**
Issue to add these configurations to the `s3` source: #2012. Original PR adding the check to the s3 source: #1526
Support S3 bucket ownership validation on the S3 sink
https://api.github.com/repos/opensearch-project/data-prepper/issues/4468/comments
1
2024-04-26T02:20:33Z
2024-05-08T18:28:36Z
https://github.com/opensearch-project/data-prepper/issues/4468
2,264,822,548
4,468
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
Data Prepper supports CloudWatch as a metric destination for its own metrics. The data-prepper-core project includes a [cloudwatch.properties](https://github.com/opensearch-project/data-prepper/blob/eb94f4abbd4f83ce3b3ebe4d4478af7531f9bc36/data-prepper-core/src/main/resources/cloudwatch.properties) file. It is good that Data Prepper is pre-configured for CloudWatch so that no additional configuration is needed. However, the current approach presents some difficulties:

* It is difficult to override these configurations.
* When creating a custom Data Prepper distribution, the file in this jar may conflict with other similar files that teams provide.

**Describe the solution you'd like**
Move the `cloudwatch.properties` file out of `data-prepper-core.jar`. Instead, include it in the `config` directory of Data Prepper at `config/cloudwatch.properties`. With this, users can change this file and get their own custom configurations.

**Describe alternatives you've considered (Optional)**
There may still be some value in including the `cloudwatch.properties` file in the jars. If so, we may wish to move it into `data-prepper-main.jar` since that jar is more opinionated on how Data Prepper is run.

**Additional context**
See #4464 for another file that is in `data-prepper-core.jar` and causing difficulty for custom distributions.
Support alternate CloudWatch configurations for monitoring Data Prepper
https://api.github.com/repos/opensearch-project/data-prepper/issues/4465/comments
0
2024-04-25T16:16:02Z
2024-04-30T19:38:03Z
https://github.com/opensearch-project/data-prepper/issues/4465
2,263,994,507
4,465
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
The `data-prepper-core.jar` file contains Log4J configuration files:

* log4j.properties
* log4j2-rolling.properties
* log4j2.properties

**To Reproduce**
1. Download Data Prepper (2.7.0)
2. Extract
3. Extract jar: `jar xf opensearch-data-prepper-2.7.0-linux-x64/lib/data-prepper-core-2.7.0.jar`
4. Perform an `ls` to see

**Expected behavior**
This jar file should not have any Log4J files. The correct location for any Log4J configurations would be within the directory structure of Data Prepper:
```
config/
```
This directory actually already includes `log4j2-rolling.properties`.
[BUG] data-prepper-core contains log4j configuration files
https://api.github.com/repos/opensearch-project/data-prepper/issues/4464/comments
0
2024-04-25T16:10:25Z
2024-04-30T19:38:32Z
https://github.com/opensearch-project/data-prepper/issues/4464
2,263,985,133
4,464
[ "opensearch-project", "data-prepper" ]
Every DocumentDb object must have an `_id` field. The Data Prepper `documentdb` source should do two things with this value:

1. Create a metadata field `primary_key` which is a string representation of that value.
2. Create a new field `docdb_id` which matches the data from `_id`. This should use the same mappings as found in #4458.

OpenSearch does not allow sending documents with an `_id` field. So using `docdb_id` allows saving all the data from a DocumentDb object while also using a valid OpenSearch Id.

### Examples
For example, say we have the following DocumentDb documents:
```
{
  _id: { category: "repository", title: "OpenSearch" },
  url: "https://github.com/opensearch-project/OpenSearch"
}
```
and
```
{
  _id: { category: "project", title: "opensearch-project" },
  url: "https://github.com/opensearch-project/"
}
```
We'd like to retain the original data inside the OpenSearch index. So the `documentdb` source should output the following:
```
{
  docdb_id: { category: "repository", title: "OpenSearch" },
  url: "https://github.com/opensearch-project/OpenSearch"
}
metadata {
  "primary_key" : "{category:"repository",title:"OpenSearch"}"
}
```
and
```
{
  docdb_id: { category: "project", title: "opensearch-project" },
  url: "https://github.com/opensearch-project/"
}
metadata {
  "primary_key" : "{category:"project",title:"opensearch-project"}"
}
```
Say the user has the following sink configuration:
```
sink:
  - opensearch:
      hosts: ["https://localhost:9200"]
      document_id: "${getMetadata(\"primary_key\")}"
      action: "${getMetadata(\"opensearch_action\")}"
```
Then, in OpenSearch, this will look like:
```
{
  _id: "{category:"project",title:"opensearch-project"}",
  docdb_id: { category: "project", title: "opensearch-project" },
  url: "https://github.com/opensearch-project/"
}
```
### Configuration
Also, we should allow users to configure the field name used in the source:
```
documentdb:
  id_key: docdb_id
```
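The mapping could be sketched as follows (illustrative Python, not the actual source code; the string form of the primary key here is compact JSON, which differs slightly from the pseudo-format shown in the examples above):

```python
import json


def map_document(doc, id_key="docdb_id"):
    """Rename _id to the configured id_key and build a string primary_key."""
    _id = doc["_id"]
    event = {id_key: _id, **{k: v for k, v in doc.items() if k != "_id"}}
    # String form of the _id, usable as a valid OpenSearch document id.
    if isinstance(_id, str):
        primary_key = _id
    else:
        primary_key = json.dumps(_id, separators=(",", ":"))
    return event, {"primary_key": primary_key}
```

The `id_key` parameter mirrors the proposed `id_key` configuration option, so `map_document(doc, id_key="docdb_id")` reproduces the default behavior.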
Create a doc_id field for data exported from the DocumentDb source
https://api.github.com/repos/opensearch-project/data-prepper/issues/4463/comments
1
2024-04-24T21:49:34Z
2024-05-14T00:51:18Z
https://github.com/opensearch-project/data-prepper/issues/4463
2,262,246,725
4,463
[ "opensearch-project", "data-prepper" ]
DocumentDB and BSON have a decimal type. This can be output from the `documentdb` source as a Java `BigDecimal`.
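In the MongoDB Java driver, `org.bson.types.Decimal128` exposes `bigDecimalValue()` for this conversion. A rough Python analogue using `decimal.Decimal` illustrates why a decimal type matters here (a binary `float` would lose exactness):

```python
from decimal import Decimal


def to_big_decimal(decimal128_string):
    """Parse the string form of a BSON Decimal128 without losing precision."""
    return Decimal(decimal128_string)
```

Unlike binary floating point, decimal arithmetic keeps values like `0.1 + 0.2` exact, which is the reason to emit `BigDecimal` rather than `double` from the source.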
Export BSON Decimal as BigDecimal from DocumentDb source
https://api.github.com/repos/opensearch-project/data-prepper/issues/4462/comments
0
2024-04-24T21:48:59Z
2024-05-14T16:11:48Z
https://github.com/opensearch-project/data-prepper/issues/4462
2,262,245,586
4,462
[ "opensearch-project", "data-prepper" ]
## Problem/Background
Data Prepper has an upcoming `documentdb` source. The issue #4458 proposes a simple data type solution. However, sometimes we need to get all the data that is available.

## Solution
Provide options for complex and extended types coming out of DocumentDb. Data Prepper can support the following mapping options for types:

* `simple` - For complex BSON types, lose some subtype information. See #4458 for more details.
* `relaxed` - Uses the MongoDB [relaxed JSON format](https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/data-formats/document-data-format-extended-json/#extended-json-formats)
* `extended` - Uses the MongoDB [extended/canonical JSON format](https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/data-formats/document-data-format-extended-json/#extended-json-formats)
* `complex` - An alternative mapping provided by Data Prepper which does not include BSON type information, but does include all the data from complex objects.

If you use the `relaxed` mapping, then we will use `extended` for any type that does not support `relaxed`. The `relaxed` form is more closely related to `extended` than to `simple` types.

### Options
We will add a new `type_mappings` option group within the `documentdb` source. It will have the following options:

* `default` - Can be `simple`, `extended`, or `relaxed`. All types will use this form.
* `object_id` - Can be `simple`, `extended`, or `complex`. This configures how BSON ObjectIds are mapped. When configured, this overrides `default` for BSON ObjectIds.
* `bindata` - Can be `simple`, `extended`, or `complex`. This configures how BSON BinData is mapped. When configured, this overrides `default` for BSON BinData fields.
* `timestamp` - Can be `simple`, `extended`, or `complex`. This configures how BSON Timestamps are mapped. When configured, this overrides `default` for BSON Timestamps.
```
source:
  documentdb:
    host: 'https://my-docdb.docdb.amazonaws.com'
    type_mappings:
      default: relaxed
      bindata: extended
      timestamp: simple
      object_id: simple
```
### Complex types
```
source:
  documentdb:
    host: 'https://my-docdb.docdb.amazonaws.com'
    type_mappings:
      object_id: complex
      bindata: complex
      timestamp: complex
```
### ObjectId
For BSON ObjectId, the complex form would include the timestamp.

Input:
```
{
  "name" : "Star Wars",
  "directorId" : ObjectId("fdd898945")
}
```
Output:
```
{
  "name" : "Star Wars",
  "directorId" : {
    "id" : "fdd898945",       # The Id string
    "timestamp" : 1713536423  # Linux time, extracted from _id
  }
}
```
### BinData
The complex BinData will include the subtype. It solves this by making the field an object, which will translate into a nested field in OpenSearch.

Input:
```
{
  "filepath" : "/usr/share/doc1",
  "myBinary" : BinData(binary="[0x88]", subtype="MD5") # Not actual format; just a conceptual representation
}
```
Output:
```
{
  "filepath" : "/usr/share/doc1",
  "myBinary" : {
    "binary" : "X7ah==",
    "subtype" : "MD5"
  }
}
```
### Timestamp
The complex Timestamp will include the ordinal. It solves this by making the field an object, which will translate into a nested field in OpenSearch.

Input:
```
{
  "name" : "Star Wars",
  "lastUpdatedAt" : Timestamp(time=1713536835, ordinal=12) # Not actual format
}
```
Output:
```
{
  "_id" : "abcdef12345",
  "name" : "Star Wars",
  "lastUpdatedAt" : {
    "timestamp" : 1713536835, # The time part
    "ordinal" : 12            # The ordinal value
  }
}
```
### Relaxed types
Configuring the relaxed types will also provide BSON type information. These mappings will look similar to the MongoDB [relaxed format](https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/data-formats/document-data-format-extended-json/#extended-json-formats).
```
source:
  documentdb:
    host: 'https://my-docdb.docdb.amazonaws.com'
    type_mappings:
      object_id: relaxed
      bindata: relaxed
      timestamp: relaxed
```
### BinData
Input:
```
{
  "filepath" : "/usr/share/doc1",
  "myBinary" : BinData(binary="[0x88]", subtype="MD5") # Not actual format; just a conceptual representation
}
```
Output:
```
"myBinary": {
  "$binary": {
    "base64": "X7ah==",
    "subType": "05"
  }
}
```
### Timestamp
Input:
```
{
  "name" : "Star Wars",
  "lastUpdatedAt" : Timestamp(time=1713536835, ordinal=12) # Not actual format
}
```
Output:
```
{
  "_id" : "abcdef12345",
  "name" : "Star Wars",
  "lastUpdatedAt" : {
    "$timestamp": {
      "timestamp": 1713536835,
      "ordinal": 12
    }
  }
}
```
### Extended
Additionally, we can include `extended` as an option to include all type information.

### Alternative
The original proposal had `complex_` boolean fields:
```
source:
  documentdb:
    host: 'https://my-docdb.docdb.amazonaws.com'
    mappings:
      complex_object_id: true
      complex_bindata: true
      complex_timestamp: true
```
However, I've changed the proposal to use an enum option since we want to have three options: `simple`, `complex`, and `relaxed`.

### References
* [Extended JSON](https://github.com/mongodb/specifications/blob/master/source/extended-json.rst)
Support alternative representations of DocumentDb data types.
https://api.github.com/repos/opensearch-project/data-prepper/issues/4461/comments
0
2024-04-24T21:47:33Z
2024-05-03T15:56:24Z
https://github.com/opensearch-project/data-prepper/issues/4461
2,262,242,176
4,461
[ "opensearch-project", "data-prepper" ]
This is a high-level task for tracking changes to the `documentdb` source to improve the type mappings.

- [x] #4458
- [x] #4459
- [x] #4462
- [x] #4463
- [x] #4478
- [x] #832
- [x] #4491
- [x] Investigation into whether Data Prepper retains map order (using LinkedHashMap over HashMap).

See #4490 for mappings that will not be available at the first release of 2.8.
DocumentDb Mapping (for initial release)
https://api.github.com/repos/opensearch-project/data-prepper/issues/4460/comments
0
2024-04-24T19:29:27Z
2024-05-14T16:13:14Z
https://github.com/opensearch-project/data-prepper/issues/4460
2,262,025,857
4,460
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
The upcoming `documentdb` source will read BSON data. BSON includes four types that need mapping:

* Null
* Undefined
* MinKey
* MaxKey

**Describe the solution you'd like**
Map all of these values to Java `null`. This will result in an OpenSearch `null`.

Using `null` for the Null type is easy. The other three are more complicated. But I believe the mapping to `null` is worth it because no other type in OpenSearch would help provide what these DocumentDb types provide.
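As a sketch, the proposed mapping is just a fold of all four types to `null` (the type names below are placeholder strings, not the BSON driver's actual classes):

```python
# BSON types that the source would collapse into a single null value.
NULL_LIKE_BSON_TYPES = {"null", "undefined", "minKey", "maxKey"}


def map_null_like(bson_type_name, value):
    """Return None for Null/Undefined/MinKey/MaxKey; pass other values through."""
    return None if bson_type_name in NULL_LIKE_BSON_TYPES else value
```
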
Support DocumentDb types: Null, Undefined, MinKey, MaxKey
https://api.github.com/repos/opensearch-project/data-prepper/issues/4459/comments
1
2024-04-24T19:28:25Z
2024-05-14T00:47:28Z
https://github.com/opensearch-project/data-prepper/issues/4459
2,262,024,345
4,459
[ "opensearch-project", "data-prepper" ]
## Problem/Background
The upcoming `documentdb` source is putting BSON types directly into the `Event` model. These types are not supported by downstream processors or sinks. So at best, they will get the `toString()` representations.

## Solution
When constructing the Data Prepper event in the `documentdb` source, create simple representations of BSON types.

### ObjectId
For the BSON ObjectId, make this the string representation.

Input:
```
{
  "name" : "Star Wars",
  "directorId" : ObjectId("fdd898945")
}
```
Output from `documentdb` source:
```
{
  "name" : "Star Wars",
  "directorId" : "fdd898945"
}
```
### BinData
For the BSON BinData type, make the simple representation a base64-encoded string of the binary data. The subtype will be lost.

Input:
```
{
  "filepath" : "/usr/share/doc1",
  "myBinary" : BinData(binary="[0x88]", subtype="MD5") # Not actual format; just a conceptual representation
}
```
Output:
```
{
  "filepath" : "/usr/share/doc1",
  "myBinary" : "X7ah=="
}
```
### Timestamp
For the BSON Timestamp type, create an integer with the epoch seconds. The ordinal will be lost.

Input:
```
{
  "name" : "Star Wars",
  "lastUpdatedAt" : Timestamp(time=1713536835, ordinal=12) # Not actual format
}
```
Output:
```
{
  "_id" : "abcdef12345",
  "name" : "Star Wars",
  "lastUpdatedAt" : 1713536835 # The time part. The ordinal is lost
}
```
### Regular Expressions
The BSON RegEx type has both a pattern and an options part. I'm unsure whether the pattern would be sufficient on its own. So perhaps this one must have nested fields.

Input:
```
{
  "use-case" : "email_address",
  "myRegex" : RegEx(pattern="[A-Za-z0-9+_.-]+@([\w-]+\.)+[\w-]{2,4}", options="i") # Not actual format
}
```
Output:
```
{
  "use-case" : "email_address",
  "myRegex" : {
    "pattern" : "[A-Za-z0-9+_.-]+@([\w-]+\.)+[\w-]{2,4}",
    "options" : "i"
  }
}
```
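To make the mappings above concrete, here is an illustrative Python sketch (plain strings and tuples stand in for the real BSON driver classes; this is not the source's actual code):

```python
import base64


def simplify(bson_type, value):
    """Collapse a BSON value into the 'simple' form described above."""
    if bson_type == "objectId":
        # ObjectId becomes its string representation.
        return str(value)
    if bson_type == "binData":
        raw, _subtype = value  # the subtype is dropped in simple mode
        return base64.b64encode(raw).decode("ascii")
    if bson_type == "timestamp":
        seconds, _ordinal = value  # the ordinal is dropped
        return seconds
    if bson_type == "regex":
        pattern, options = value  # regex keeps both parts, as nested fields
        return {"pattern": pattern, "options": options}
    return value
```

Note that the BinData and Timestamp mappings are deliberately lossy (subtype and ordinal are discarded), which is why #4461 proposes `complex` and `relaxed` alternatives.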
DocumentDb simple representations of BSON types
https://api.github.com/repos/opensearch-project/data-prepper/issues/4458/comments
1
2024-04-24T19:25:18Z
2024-05-14T00:47:13Z
https://github.com/opensearch-project/data-prepper/issues/4458
2,262,018,135
4,458
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
There has been duplication of code between KafkaSource and KafkaCustomConsumerFactory, which do similar things with subtle differences.

**Describe the solution you'd like**
Refactor out the duplicated code.

**Describe alternatives you've considered (Optional)**
N/A
Use KafkaCustomConsumerFactory in configuring consumer in KafkaSource
https://api.github.com/repos/opensearch-project/data-prepper/issues/4456/comments
0
2024-04-24T15:19:44Z
2024-04-24T15:23:35Z
https://github.com/opensearch-project/data-prepper/issues/4456
2,261,556,168
4,456
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
As a user of the s3 source with scan, I would like to be able to exclude objects from processing based on a regex pattern instead of a static string. This would allow me to filter out all objects from folders, for example:
```
exclude_suffix:
  - "folder-name/.*"
```
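A sketch of the filtering behavior this would enable (illustrative Python; `filter_objects` is a hypothetical helper, not an existing Data Prepper function):

```python
import re


def filter_objects(object_keys, exclude_patterns):
    """Drop any S3 object key that fully matches an exclude regex pattern."""
    compiled = [re.compile(p) for p in exclude_patterns]
    return [
        key for key in object_keys
        if not any(pattern.fullmatch(key) for pattern in compiled)
    ]
```

Using `fullmatch` means `"folder-name/.*"` excludes everything under that folder while leaving other prefixes untouched, which is what a static suffix string cannot express today.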
Support regex patterns for the exclude_suffix in s3 scan
https://api.github.com/repos/opensearch-project/data-prepper/issues/4454/comments
1
2024-04-23T21:29:48Z
2024-04-30T19:44:15Z
https://github.com/opensearch-project/data-prepper/issues/4454
2,259,800,442
4,454
[ "opensearch-project", "data-prepper" ]
Hi Team,

An upgrade for CVE-2024-22201 affecting http2-common 9.4.51 is available in http2-common 9.4.54. Our application is deployed on Jetty 9.4.51, JDK 11, and http2-common 9.4.51. Can we use http2-common 9.4.54 with Jetty 9.4.51? Kindly suggest.

Thanks,
CVE-2024-22201 on http2-common 9.4.51 version - autoclosed
https://api.github.com/repos/opensearch-project/data-prepper/issues/4452/comments
1
2024-04-23T16:34:44Z
2024-05-02T19:44:12Z
https://github.com/opensearch-project/data-prepper/issues/4452
2,259,305,347
4,452
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
On a fresh start (Data Prepper against a freshly initialized, empty OpenSearch DB) we expect Data Prepper to inject our custom ISM policy + custom index template. From the Data Prepper logs I can see that the index template is managed, but the ISM policy is not.

**Expected behavior**
If a custom ISM policy is used, it must show up in the ISM management GUI and ISM must manage the OTel indices.

**Environment (please complete the following information):**
- DataPrepper Docker image v2.6.2
- OpenSearch Docker image v2.11.1

**Additional context**
`/usr/share/data-prepper/config/data-prepper-config.yaml`
```yaml
ssl: false
```
`/usr/share/data-prepper/otel-span-index-template.json`
```json
{
  "index_patterns": ["otel-v1-apm-span-*"],
  "version": 1,
  "template": {
    "settings": {
      "plugins.index_state_management.rollover_alias": "otel-v1-apm-span"
    }
  }
}
```
`/usr/share/data-prepper/otel-span-ism-policy.json`
```json
{
  "policy": {
    "policy_id": "otel-span",
    "description": "Managing raw spans for trace analytics",
    "default_state": "current_write_index",
    "states": [
      {
        "name": "current_write_index",
        "actions": [
          {
            "rollover": {
              "min_size": "10gb",
              "min_index_age": "24h"
            }
          }
        ],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "3d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          { "delete": {} }
        ]
      }
    ],
    "ism_template": [
      {
        "index_patterns": ["otel-v1-apm-span-*"]
      }
    ]
  }
}
```
`/usr/share/data-prepper/pipelines/pipelines.yaml`
```yaml
entry-pipeline:
  workers: 4
  delay: "100"
  source:
    otel_trace_source:
      ssl: true
      sslKeyCertChainFile: "/usr/share/data-prepper/server.crt"
      sslKeyFile: "/usr/share/data-prepper/server.key"
      port: 21890
      authentication:
        http_basic:
          username: REDACTED
          password: REDACTED
  buffer:
    bounded_blocking:
      buffer_size: 1024
      batch_size: 256
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - otel_traces:
  sink:
    - opensearch:
        #index_type: trace-analytics-raw
        index_type: custom
        index: otel-v1-apm-span
        hosts: [ https://os-endpoint.REDACTED:9200 ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        template_file: "/usr/share/data-prepper/otel-span-index-template.json"
        ism_policy_file: "/usr/share/data-prepper/otel-span-ism-policy.json"
        username: REDACTED
        password: REDACTED
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - service_map:
  sink:
    - opensearch:
        index_type: trace-analytics-service-map
        hosts: [ https://os-endpoint.REDACTED:9200 ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: REDACTED
        password: REDACTED
```
[BUG] Custom ISM policy isn't injected
https://api.github.com/repos/opensearch-project/data-prepper/issues/4450/comments
5
2024-04-23T09:34:18Z
2024-10-26T19:15:58Z
https://github.com/opensearch-project/data-prepper/issues/4450
2,258,403,370
4,450
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
The KeyValue processor tries to find key-value pairs in the entire `source` field. But it is possible that the key-value pairs are only present in a small substring of the source field. We need a way to reduce the scope of finding the key-value pairs in a given field.

**Describe the solution you'd like**
1. Enhance the KeyValue processor to take a start index (default: 0) and an end index (default: end of string) into the input field string for finding key-value pairs. For example:
```
key_value:
  source: message
  start_idx: 5
  end_idx: 15
```
2. Enhance the KeyValue processor to take a starting string pattern and an ending string pattern, finding key-value pairs only between the two patterns. For example:
```
key_value:
  source: message
  start_pattern: "("
  end_pattern: ")"
```
3. Enhance the KeyValue processor to find only given keys, i.e. only the specified keys in the source. This can be combined with either of the above two start/end index/pattern options. For example:
```
key_value:
  source: message
  include_keys: [ "key1", "key2", "key3" ]
```
4. Start processing key-values based on a separator like "=", picking the key from the left side and the value from the right side, with an option to group values like `[value1, value2]` (so a comma cannot be used as the field separator blindly). Also, some values like `url` should ignore HTTP arguments after "?". For example, `url=<something>?x=y` should become just `url=<something>`.

Some examples:
1. Text like `<some text> k1=v1,k2=v2 <other text>` should only extract key-values k1 and k2, like `{"k1": "v1", "k2": "v2"}`
2. If the value has a group of values, they should be treated as a single value. For example, `k1=[v1, v2],k3=v3` should generate `{"k1": ["v1", "v2"], "k3": "v3"}`
3. Values should be able to ignore some parts of themselves. For example, `url=http://abcd.com?k1=v1&k2=v2` should have an option to generate just `{"url": "http://abcd.com"}` and not include the text after "?"
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
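The scoping, grouping, and filtering options described in this issue could be sketched as below. This is an illustrative stand-in, not the real processor: the parameter names mirror the proposal, the regex treats a value as either a bracketed group or a comma/space-free run, and `value_split` is a hypothetical name for the "ignore everything after '?'" option.

```python
import re

def extract_key_values(text, start_pattern=None, end_pattern=None,
                       include_keys=None, value_split=None):
    """Sketch of the requested key_value options: start/end patterns bound
    the scanned substring, include_keys filters the result, bracketed groups
    become lists, and value_split trims values (e.g. value_split='?' drops
    HTTP arguments from URLs)."""
    # Reduce the scope of the scan to the substring between the two patterns.
    if start_pattern and start_pattern in text:
        text = text.split(start_pattern, 1)[1]
    if end_pattern and end_pattern in text:
        text = text.split(end_pattern, 1)[0]
    # A value is either a bracketed group like "[v1, v2]" or a comma/space-free run.
    result = {}
    for key, value in re.findall(r"(\w+)=(\[[^\]]*\]|[^,\s]+)", text):
        if include_keys and key not in include_keys:
            continue
        if value.startswith("["):
            result[key] = [v.strip() for v in value[1:-1].split(",")]
        elif value_split:
            result[key] = value.split(value_split, 1)[0]
        else:
            result[key] = value
    return result
```

With this sketch the three examples from the issue behave as described: surrounding free text is ignored, `[v1, v2]` stays one (list) value, and `value_split="?"` truncates the URL.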
Enhance Key Value Processor
https://api.github.com/repos/opensearch-project/data-prepper/issues/4447/comments
13
2024-04-22T06:30:23Z
2024-05-13T23:20:05Z
https://github.com/opensearch-project/data-prepper/issues/4447
2,255,693,713
4,447
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
Current Data Prepper configurations are very verbose and tedious to write manually. With the increasing number of processors and functionality in Data Prepper, it would be difficult for customers to know the correct processors to use, with the appropriate config options, in the correct sequence. Secondly, some customer use cases might require multiple Data Prepper pipelines even though their high-level requirement is not very complex.

**Describe the solution you'd like**
Support transformation of pipeline configuration from a user-provided configuration to a transformed configuration based on a template and rules.

**User config**
```yaml
simple-pipeline:
  source:
    someSource:
      hostname: "database.example.com"
      port: "27017"
  sink:
    - opensearch:
        hosts: ["https://search-service.example.com"]
        index: "my_index"
```

**Template**
```yaml
"<<pipeline-name>>-transformed":
  source: "<<$.*.someSource>>"
  sink:
    - opensearch:
        hosts: "<<$.*.sink[?(@.opensearch)].opensearch.hosts>>"
        port: "<<$.*.someSource.documentdb.port>>"
        index: "<<$.*.sink[0].opensearch.index>>"
        aws:
          sts_role_arn: "arn123"
          region: "us-test-1"
        dlq:
          s3:
            bucket: "test-bucket"
```

**Rule**
```
apply_when:
  - "$..source.someSource"
```

**Expected Transformed Config**
```yaml
simple-pipeline-transformed:
  source:
    someSource:
      hostname: "database.example.com"
      port: "27017"
  sink:
    - opensearch:
        hosts: ["https://search-service.example.com"]
        port: "27017"
        index: "my_index"
        aws:
          sts_role_arn: "arn123"
          region: "us-test-1"
        dlq:
          s3:
            bucket: "test-bucket"
```
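The template-substitution mechanism proposed here could be sketched as follows. This is a simplified stand-in: the real proposal uses JSONPath expressions like `$.*.someSource`, while the sketch resolves a reduced dotted-path form where `*` matches the single top-level pipeline name; the function names are hypothetical.

```python
def resolve(config, path):
    """Resolve a simplified dotted path against the user config;
    '*' stands in for the JSONPath '$.*' (the one pipeline name)."""
    node = config
    for part in path.split("."):
        node = next(iter(node.values())) if part == "*" else node[part]
    return node

def transform(user_config, template):
    """Recursively replace '<<path>>' placeholder strings in the template
    with the values they point at inside the user-provided config."""
    if isinstance(template, dict):
        return {k: transform(user_config, v) for k, v in template.items()}
    if isinstance(template, list):
        return [transform(user_config, v) for v in template]
    if isinstance(template, str) and template.startswith("<<") and template.endswith(">>"):
        return resolve(user_config, template[2:-2])
    return template
```

A rule engine on top of this would only need to check its `apply_when` paths against the same config before choosing a template.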
Pipeline Configuration Transformation
https://api.github.com/repos/opensearch-project/data-prepper/issues/4444/comments
0
2024-04-19T09:00:20Z
2024-04-25T03:13:37Z
https://github.com/opensearch-project/data-prepper/issues/4444
2,252,464,528
4,444
[ "opensearch-project", "data-prepper" ]
Hello everyone,

Can I have a hand with this error? For some reason, I started having this problem below:

```
2024-04-18T13:38:49,018 [armeria-boss-http-*:21890] ERROR io.netty.util.concurrent.DefaultPromise - Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
```

I have no idea where to start investigating. This is my pipeline:

```yaml
pipelines.yaml: |
  log-pipeline:
    source:
      http:
        # Explicitly disable SSL
        ssl: false
        # Explicitly disable authentication
        authentication:
          unauthenticated:
        # The default port that will listen for incoming logs
        port: 21890
    # https://github.com/opensearch-project/data-prepper/issues/2147
    buffer:
      bounded_blocking:
        buffer_size: 2000000 # max number of records the buffer accepts
        batch_size: 400 # max number of records the buffer drains after each read
    processor:
      - grok:
          match:
            # This will match logs with a "log" key against the COMMONAPACHELOG pattern (ex: { "log": "actual apache log..." } )
            # You should change this to match what your logs look like. See the grok documentation to get started.
            log: [ "%{COMMONAPACHELOG}" ]
    sink:
      #- stdout: # print in the console
      - opensearch:
          hosts: [ "https://opensearch-cluster-master:9200" ]
          # Change to your credentials
          username: admin
          password: admin
          # Add a certificate file if you are accessing an OpenSearch cluster with a self-signed certificate
          #cert: /path/to/cert
          # If you are connecting to an Amazon OpenSearch Service domain without
          # Fine-Grained Access Control, enable these settings. Comment out the
          # username and password above.
          #aws_sigv4: true
          #aws_region: us-east-1
          # Since we are grok matching for apache logs, it makes sense to send them to an OpenSearch index named apache_logs.
          # You should change this to correspond with how your OpenSearch indices are set up.
          index: devopsv2-rico-index
          insecure: true
data-prepper-config.yaml: |
  ssl: false
```

And this is the full stack error:

```java
2024-04-18T13:38:49,018 [armeria-boss-http-*:21890] ERROR io.netty.util.concurrent.DefaultPromise - Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
	at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:934) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:351) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:344) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:836) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.execute0(SingleThreadEventExecutor.java:827) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:817) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:862) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:500) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:30) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at com.linecorp.armeria.internal.common.util.ChannelUtil.close(ChannelUtil.java:189) ~[armeria-1.26.4.jar:?]
	at com.linecorp.armeria.server.Server$ServerStartStopSupport.lambda$doStop$11(Server.java:651) ~[armeria-1.26.4.jar:?]
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934) ~[?:?]
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911) ~[?:?]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
	at com.linecorp.armeria.internal.common.util.ChannelUtil.lambda$close$0(ChannelUtil.java:184) ~[armeria-1.26.4.jar:?]
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:990) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:756) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1352) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:749) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$1200(AbstractChannelHandlerContext.java:61) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannelHandlerContext$11.run(AbstractChannelHandlerContext.java:732) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:413) [netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.100.Final.jar:4.1.100.Final]
	at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
```

Just in case, here is the complete [data-prepper.log](https://github.com/opensearch-project/data-prepper/files/15027913/data-prepper.log)

I appreciate any help in advance.
[BUG] Failed to submit a listener notification task. Event loop shut down?
https://api.github.com/repos/opensearch-project/data-prepper/issues/4441/comments
1
2024-04-18T16:33:38Z
2024-04-18T21:01:05Z
https://github.com/opensearch-project/data-prepper/issues/4441
2,251,118,590
4,441
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
aws msk (no auth / disabled TLS) --> data prepper --> aws opensearch

When data prepper is deployed:

```
2024-04-17T14:11:41,339 [pool-6-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-host-1, groupId=host] Connection to node -1 (kafka.data-prepper/172.27.91.183:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
```

```yaml
data:
  pipelines.yaml: |
    log-pipeline:
      source:
        kafka:
          bootstrap_servers:
            - b-1xxxxxxx:9092
          topics:
            - name: "host-test"
              group_id: "host"
              serde_format: "plaintext"
            - name: "kube-test"
              group_id: "kube"
              serde_format: "plaintext"
      sink:
        - opensearch:
            hosts: [ "https://vpc-devops-xxxxxx-xxxxast-1.es.amazonaws.com" ]
            username: "admin"
            password: "KfExxxxxxxxxxxxxc"
            index: sample_app_logs
  data-prepper-config.yaml: |
    ssl: false
```

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
[BUG] 2024-04-17T14:11:41,339 [pool-6-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-host-1, groupId=host] Connection to node -1 (kafka.data-prepper/172.27.91.183:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
https://api.github.com/repos/opensearch-project/data-prepper/issues/4431/comments
2
2024-04-17T14:16:21Z
2024-04-18T07:00:37Z
https://github.com/opensearch-project/data-prepper/issues/4431
2,248,415,155
4,431
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
The HTTP source config does not support an API REST endpoint. It does not provide any field for host or URI.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
[BUG] HTTP source config does not support API REST endpoint
https://api.github.com/repos/opensearch-project/data-prepper/issues/4430/comments
1
2024-04-17T07:50:20Z
2024-04-30T19:58:10Z
https://github.com/opensearch-project/data-prepper/issues/4430
2,247,625,946
4,430
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
The naming pipeline_configurations within pipeline.yaml can cause confusion with the actual pipeline definition.

```
pipeline_configurations:
  aws:
    secrets:
      qchea-test-secret:
        secret_id: "kafka-secrets"
        region: "us-west-2"
        sts_role_arn: "arn:aws:iam::xxxxxxx:role/os-os-test-role"
      schema-secret:
        secret_id: "self-managed-kafka-schema"
        region: "us-west-2"
        sts_role_arn: "arn:aws:iam::xxxxx:role/os-os-test-role"
kafka-pipeline:
  source:
    kafka:
      encryption:
        type: "none"
      topics:
        - name: "quickstart-events"
          group_id: "groupdID1"
      bootstrap_servers:
        - "10.125.24.235:9092"
      authentication:
        sasl:
          plaintext:
            username: "${{aws_secrets:qchea-test-secret:username}}"
            password: "${{aws_secrets:qchea-test-secret:password}}"
      schema:
        type: confluent
        registry_url: https://psrc-m5k9x.us-west-2.aws.confluent.cloud
        version: 1
        schema_registry_api_key: "${{aws_secrets:schema-secret:schema_registry_api_key}}"
        schema_registry_api_secret: "${{aws_secrets:schema-secret:schema_registry_api_secret}}"
        basic_auth_credentials_source: USER_INFO
```

**Describe the solution you'd like**
Rename it into extension

```
extension:
  aws:
    secrets:
      qchea-test-secret:
        secret_id: "kafka-secrets"
        region: "us-west-2"
        sts_role_arn: "arn:aws:iam::xxxxxxxxx:role/os-os-test-role"
      schema-secret:
        secret_id: "self-managed-kafka-schema"
        region: "us-west-2"
        sts_role_arn: "arn:aws:iam::xxxxxxxxx:role/os-os-test-role"
kafka-pipeline:
  source:
    kafka:
      encryption:
        type: "none"
      topics:
        - name: "quickstart-events"
          group_id: "groupdID1"
      bootstrap_servers:
        - "10.125.24.235:9092"
      authentication:
        sasl:
          plaintext:
            username: "${{aws_secrets:qchea-test-secret:username}}"
            password: "${{aws_secrets:qchea-test-secret:password}}"
      schema:
        type: confluent
        registry_url: https://psrc-m5k9x.us-west-2.aws.confluent.cloud
        version: 1
        schema_registry_api_key: "${{aws_secrets:schema-secret:schema_registry_api_key}}"
        schema_registry_api_secret: "${{aws_secrets:schema-secret:schema_registry_api_secret}}"
        basic_auth_credentials_source: USER_INFO
```

**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
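A deprecation like the one proposed here usually implies a transition period where both keys are accepted. A minimal sketch of such a backward-compatible loader (function name is hypothetical, not Data Prepper code):

```python
def extract_extension_settings(pipeline_yaml):
    """Prefer the proposed 'extension' key, but keep accepting the legacy
    'pipeline_configurations' key during the deprecation window. The chosen
    block is removed from the dict so only pipeline definitions remain."""
    for key in ("extension", "pipeline_configurations"):
        if key in pipeline_yaml:
            return pipeline_yaml.pop(key)
    return {}
```

A loader built this way lets users migrate at their own pace while the docs steer them toward `extension`.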
Deprecate pipeline_configurations with extension in pipeline.yaml
https://api.github.com/repos/opensearch-project/data-prepper/issues/4427/comments
1
2024-04-16T20:02:03Z
2024-04-19T14:27:17Z
https://github.com/opensearch-project/data-prepper/issues/4427
2,246,795,505
4,427
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
With the addition of DynamoDB resource policies, Data Prepper can now assume a role in account A and access a table in account B. However, the Data Prepper DynamoDB source only passes the table name to some requests, instead of the full table ARN (https://github.com/opensearch-project/data-prepper/blob/11b18cd5debc1ff04087f20a595c2e5637e5fdde/data-prepper-plugins/dynamodb-source/src/main/java/org/opensearch/dataprepper/plugins/source/dynamodb/leader/LeaderScheduler.java#L240). Passing the full table ARN in all the calls that accept it would enable this support.

**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
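The gist of the requested change can be sketched as below. The ARN and function names are illustrative; the key point is that DynamoDB request parameters that accept an ARN in place of a bare table name should be given the full ARN, since the ARN carries the owning account id that cross-account access needs.

```python
def table_name_from_arn(table_arn):
    # What effectively happens today: only the table name survives,
    # dropping the account id that identifies the owning account.
    # ARN shape: arn:aws:dynamodb:<region>:<account>:table/<name>
    return table_arn.split("/")[1]

def describe_table_request(table_arn, use_full_arn=True):
    # Proposed behavior: keep the full ARN so a role assumed in account A
    # can address a table owned by account B via its resource policy.
    return {"TableName": table_arn if use_full_arn else table_name_from_arn(table_arn)}
```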
Support cross account DynamoDB source tables
https://api.github.com/repos/opensearch-project/data-prepper/issues/4424/comments
2
2024-04-16T18:32:41Z
2025-01-14T18:35:51Z
https://github.com/opensearch-project/data-prepper/issues/4424
2,246,661,292
4,424
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
We currently update source coordination ownership for partitions synchronously in pull-based sources like S3, OpenSearch, and DynamoDB. This happens in a loop approximately every 2 minutes, but when the buffer is very full, we spend time retrying writes to the buffer, which leads to expiring ownership of the partition and reprocessing of that partition by another node of Data Prepper.

**Expected behavior**
Asynchronously update ownership every 2 minutes without depending on the primary loop. For example, this is done here for DynamoDB (https://github.com/opensearch-project/data-prepper/blob/a20756cc13e6f7b7f088544df03bb1230a88af8f/data-prepper-plugins/dynamodb-source/src/main/java/org/opensearch/dataprepper/plugins/source/dynamodb/export/DataFileLoader.java#L206). We should update ownership in a timely manner regardless of how long it takes to write to the buffer.

**Alternative consideration**
Increase the ownership timeout to a higher value, or check ownership updates in between attempts to write to the buffer.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
[BUG] Ownership can timeout on full buffer for pull based sources
https://api.github.com/repos/opensearch-project/data-prepper/issues/4422/comments
3
2024-04-16T17:01:16Z
2024-04-19T19:01:11Z
https://github.com/opensearch-project/data-prepper/issues/4422
2,246,514,653
4,422
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
DynamoDB and DocDB sources do not override areAcknowledgementsEnabled. It is mandatory for sources supporting acknowledgments to override this API.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
Override the areAcknowledgementsEnabled method so that acknowledgements are properly handled by core Data Prepper.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
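The contract being violated here can be illustrated with a small Python stand-in for the Java source interface (class names are hypothetical): the base default answers "no", so a source that accepts an acknowledgments setting but never overrides the method silently disables end-to-end acknowledgements in core Data Prepper.

```python
class Source:
    # Core only wires up end-to-end acknowledgements when the source
    # reports support; the inherited default is "not supported".
    def are_acknowledgements_enabled(self):
        return False

class AcknowledgingSource(Source):
    """A source that accepts an 'acknowledgments' config flag must surface
    it through the override, as the issue asks DynamoDB/DocDB to do."""
    def __init__(self, config):
        self._acknowledgments_enabled = config.get("acknowledgments", False)

    def are_acknowledgements_enabled(self):
        return self._acknowledgments_enabled
```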
[BUG] All sources supporting acknowledgements must override areAcknowledgementsEnabled
https://api.github.com/repos/opensearch-project/data-prepper/issues/4420/comments
0
2024-04-15T21:52:44Z
2024-04-15T23:45:59Z
https://github.com/opensearch-project/data-prepper/issues/4420
2,244,697,950
4,420
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
[Here](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/failures-common/src/main/java/org/opensearch/dataprepper/plugins/dlq/README.md) it would be great to allow users to define compression for the objects that will be stored in S3 buckets, similar to the S3 sink plugin.

**Describe the solution you'd like**
A user could define the compression like below

```
pipeline:
  ...
  sink:
    opensearch:
      dlq:
        s3:
          bucket: "my-dlq-bucket"
          key_path_prefix: "dlq-files/"
          compression: gzip
          region: "us-west-2"
          sts_role_arn: "arn:aws:iam::123456789012:role/dlq-role"
```

**Describe alternatives you've considered (Optional)**
N/A

**Additional context**
N/A
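On the implementation side, the requested option amounts to compressing the serialized DLQ payload before the S3 `PutObject` call and adjusting the key suffix. A minimal stdlib sketch (function and key names assumed, not the real plugin code):

```python
import gzip
import json

def serialize_dlq_object(failed_events, compression=None):
    """Serialize failed events as newline-delimited JSON and, when
    'compression: gzip' is configured, gzip the payload before upload.
    Returns (body_bytes, suggested_key_suffix)."""
    payload = "\n".join(json.dumps(e) for e in failed_events).encode("utf-8")
    if compression == "gzip":
        return gzip.compress(payload), "dlq-file.json.gz"
    return payload, "dlq-file.json"
```

Mirroring the S3 sink's `compression` values would keep the two plugins' configuration surfaces consistent.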
Adding compression support for DLQ events
https://api.github.com/repos/opensearch-project/data-prepper/issues/4418/comments
1
2024-04-15T12:15:01Z
2024-04-16T21:21:32Z
https://github.com/opensearch-project/data-prepper/issues/4418
2,243,525,737
4,418
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
The index prefix is not parsed correctly when the index name contains a dynamic date pattern and the date pattern is not prefixed or suffixed with a hyphen. For example, if the index is configured as `index-prefix.%{yyyy-MM-dd}`, the parsed index prefix will be `index-prefix.%{yyyy-MM-dd}` instead of `index-prefix`. This bug can manifest itself as a misleading `[security_exception] authentication/authorization failure` when the pipeline tries to check if a corresponding index template exists. See details below.

**To Reproduce**
Steps to reproduce the behavior:
1. Configure a pipeline with this opensearch sink config pointing to a public AOSS collection:
```yaml
  sink:
    - opensearch:
        hosts:
          - https://xxxx.us-east-1.aoss.amazonaws.com
        index_type: custom
        index: "prefix.%{yyyy-MM-dd}"
        aws:
          sts_role_arn: arn:aws:iam::xxx:role/OpenSearchServerlessPipelineRole
          region: us-east-1
          serverless: true
        template_type: index-template
        template_content: '{"template":{"mappings":{"date_detection":false}}}'
```
2. Start the pipeline; the pipeline fails to init the sink with this error message:
```
org.opensearch.client.opensearch._types.OpenSearchException: Request failed: [security_exception] authentication/authorization failure
	at org.opensearch.client.transport.aws.AwsSdk2Transport.parseResponse(AwsSdk2Transport.java:473) ~[opensearch-java-2.8.1.jar:?]
	at org.opensearch.client.transport.aws.AwsSdk2Transport.executeSync(AwsSdk2Transport.java:392) ~[opensearch-java-2.8.1.jar:?]
	at org.opensearch.client.transport.aws.AwsSdk2Transport.performRequest(AwsSdk2Transport.java:192) ~[opensearch-java-2.8.1.jar:?]
	at org.opensearch.client.opensearch.indices.OpenSearchIndicesClient.existsIndexTemplate(OpenSearchIndicesClient.java:571) ~[opensearch-java-2.8.1.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.index.ComposableTemplateAPIWrapper.getTemplate(ComposableTemplateAPIWrapper.java:45) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.index.ComposableIndexTemplateStrategy.getExistingTemplateVersion(ComposableIndexTemplateStrategy.java:28) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.shouldCreateTemplate(AbstractIndexManager.java:292) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndexTemplate(AbstractIndexManager.java:246) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndexTemplate(AbstractIndexManager.java:234) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.setupIndex(AbstractIndexManager.java:224) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitializeInternal(OpenSearchSink.java:235) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:193) ~[opensearch-2.7.0-SNAPSHOT.jar:?]
	at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:52) ~[data-prepper-api-2.7.0-SNAPSHOT.jar:?]
	...
```

**Expected behavior**
The sink should initialize correctly.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]

**Additional context**
Relevant code: https://github.com/opensearch-project/data-prepper/blob/1e5e0d06b825923ccce47689d3cbf5bccd365a29/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/AbstractIndexManager.java#L127-L129
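The expected parsing behavior can be sketched with a regex that removes the dynamic `%{...}` pattern wherever it appears and then strips any trailing separator character, not just a hyphen. This is an illustrative sketch of the desired semantics, not the Java code in `AbstractIndexManager`.

```python
import re

# Matches a dynamic date pattern such as %{yyyy-MM-dd} anywhere in the index name.
DATE_PATTERN = re.compile(r"%\{[^}]*\}")

def index_prefix(index_alias):
    """Drop the %{...} date pattern and a trailing separator ('.', '-' or
    '_'), whatever character precedes the pattern."""
    return DATE_PATTERN.sub("", index_alias).rstrip(".-_")
```

Under this sketch, `index-prefix.%{yyyy-MM-dd}` yields `index-prefix`, matching what the template-existence check should look up.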
[BUG] Index prefix parsing bug in OpenSearch sink
https://api.github.com/repos/opensearch-project/data-prepper/issues/4415/comments
0
2024-04-12T18:57:38Z
2024-05-14T14:47:30Z
https://github.com/opensearch-project/data-prepper/issues/4415
2,240,733,989
4,415
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
As a user of Data Prepper, I would like to be able to set different property values in the configuration depending on certain conditions.

**Describe the solution you'd like**
Similar to how the `opensearch` sink supports the `actions` parameter for conditionally choosing an action

```
actions:
  - when: "condition"
    type: "delete"
  - when: "condition_two"
    type: "update"
  - type: "index" # default case
```

I would like to extract this out for common use with any parameter, without requiring duplicate code to evaluate the result. This could be done by specifying this common type in the config like this.

In data-prepper-api:

```
public class SwitchWhen<T> {
    private List<T> configList;
    private List<String> whenConditions;

    private T getConfigForEvent(final Event event);
}
```

```
public class SomePluginConfig {
    @JsonProperty("actions")
    private SwitchWhen<ActionClass> actionsWhen;
}

public class ActionClass {
    @JsonProperty("type")
    String type;
}
```

This would make it easy to add support for other parameters that need this same functionality, for example for s3 sink thresholds

```
public class S3SinkConfig {
    @JsonProperty("thresholds")
    private SwitchWhen<ThresholdOptions> thresholdOptions;
}
```

**Describe alternatives you've considered (Optional)**
Only support expressions to fit this use case, but this requires add_entries processors with when conditions writing to some sort of metadata or part of the Event data, and then using that in the expression value. This is a more confusing user experience.

**Additional context**
Add any other context or screenshots about the feature request here.
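The proposed `SwitchWhen` selection logic can be sketched in a few lines. This Python stand-in uses plain callables where the real feature would evaluate Data Prepper expressions; a case with a `None` condition plays the role of the trailing default.

```python
class SwitchWhen:
    """Ordered (condition, config) cases; the first matching case wins, and a
    case with condition None acts as the default (like the bare 'index' action)."""

    def __init__(self, cases):
        self._cases = cases  # list of (condition_callable_or_None, config)

    def get_config_for_event(self, event):
        for condition, config in self._cases:
            if condition is None or condition(event):
                return config
        return None  # no case matched and no default was configured
```

Because the evaluation lives in one generic type, plugins like the s3 sink could reuse it for thresholds without re-implementing the first-match-wins logic.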
A common type for plugin configurations to support if / else if / else format
https://api.github.com/repos/opensearch-project/data-prepper/issues/4408/comments
0
2024-04-10T16:03:01Z
2024-04-10T17:31:46Z
https://github.com/opensearch-project/data-prepper/issues/4408
2,235,951,375
4,408
[ "opensearch-project", "data-prepper" ]
Hello

**Is your feature request related to a problem? Please describe.**
The documentation is not clear on whether we can use GeoIP commercial databases (other than the GeoIP enterprise database). Today I have a subscription for GeoIP2-Country, GeoIP2-ISP and GeoIP2-Anonymous. I wanted to use at least GeoIP2-Country, but it didn't work. It seems that the geoip processor only works with GeoLite2. I have the following configuration:

```
extensions:
  geoip_service:
    maxmind:
      databases:
        country: "/usr/share/GeoIP/GeoIP2-Country.mmdb"
      database_refresh_interval: PT1H
```

In case you need them, you can access database samples here: <https://github.com/maxmind/MaxMind-DB/tree/main/test-data>

**Describe the solution you'd like**
Have the possibility to use GeoIP2 databases with a configuration like:

```
extensions:
  geoip_service:
    maxmind:
      databases:
        geoip2-country: "/usr/share/GeoIP/GeoIP2-Country.mmdb"
        geoip2-isp: "/usr/share/GeoIP/GeoIP2-ISP.mmdb"
      database_refresh_interval: PT1H
```

**Describe alternatives you've considered (Optional)**
Or, similar to what is done in Logstash, determine the database in the processor, like:

```
processor:
  - geoip:
      entries:
        - source: "/clientIp"
          database: "/usr/share/GeoIP/GeoIP2-Country.mmdb"
```

**Additional context**
It seems not to work even when providing GeoLite2-Country.mmdb by local path directly.

data-prepper-config.yaml:
```
extensions:
  geoip_service:
    maxmind:
      databases:
        country: "/usr/share/GeoIP/GeoLite2-Country.mmdb"
      database_refresh_interval: PT1H
```

pipeline.yaml:
```
version: "2"
test-pipeline:
  source:
    http:
  processor:
    - parse_json:
        source: "message"
    - geoip:
        entries:
          - source: "/clientIp"
  sink:
    - stdout:
```

input data sample:
```
{"clientIP":"185.126.231.50"}
```
Add GeoIP commercial databases
https://api.github.com/repos/opensearch-project/data-prepper/issues/4407/comments
1
2024-04-10T15:13:12Z
2024-04-16T19:41:58Z
https://github.com/opensearch-project/data-prepper/issues/4407
2,235,849,743
4,407
[ "opensearch-project", "data-prepper" ]
## Configuration File:
```
s3-ingestion-pipeline:
  source:
    s3:
      codec:
        newline:
      compression: "snappy"
      aws:
        region: "cn-north-1"
      scan:
        buckets:
          - bucket:
              name: "bucket"
              filter:
                include_prefix:
                  - "etl/snappy_input/"
  sink:
    - opensearch:
```

## Error msg:
```
2024-04-10T14:06:10,222 [Thread-2] ERROR org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker - Error while process the parseS3Object.
java.io.IOException: FAILED_TO_UNCOMPRESS(5)
	at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:112) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.xerial.snappy.SnappyNative.rawUncompress(Native Method) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:504) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.xerial.snappy.Snappy.uncompress(Snappy.java:543) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.xerial.snappy.SnappyInputStream.readFully(SnappyInputStream.java:165) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.xerial.snappy.SnappyInputStream.readHeader(SnappyInputStream.java:117) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:77) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:61) ~[snappy-java-1.1.10.5.jar:1.1.10.5]
	at org.opensearch.dataprepper.plugins.codec.SnappyDecompressionEngine.createInputStream(SnappyDecompressionEngine.java:18) ~[common-2.7.0.jar:?]
	at org.opensearch.dataprepper.model.codec.InputCodec.parse(InputCodec.java:42) ~[data-prepper-api-2.7.0.jar:?]
	at org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker.doParseObject(S3ObjectWorker.java:99) ~[s3-source-2.7.0.jar:?]
	at org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker.lambda$parseS3Object$0(S3ObjectWorker.java:66) ~[s3-source-2.7.0.jar:?]
	at io.micrometer.core.instrument.composite.CompositeTimer.recordCallable(CompositeTimer.java:129) ~[micrometer-core-1.11.5.jar:1.11.5]
	at org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker.parseS3Object(S3ObjectWorker.java:65) ~[s3-source-2.7.0.jar:?]
	at org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker.processS3Object(ScanObjectWorker.java:178) [s3-source-2.7.0.jar:?]
	at org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker.startProcessingObject(ScanObjectWorker.java:153) [s3-source-2.7.0.jar:?]
	at org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker.run(ScanObjectWorker.java:101) [s3-source-2.7.0.jar:?]
	at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
2024-04-10T14:06:10,227 [Thread-2] INFO org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker - Single S3 scan has already been completed
```

But the file is generated and produced by Glue Studio, so I don't think there is anything wrong with the snappy file.

## Version used
https://gallery.ecr.aws/opensearchproject/data-prepper
[BUG] Failed to decompress snappy file in S3 source
https://api.github.com/repos/opensearch-project/data-prepper/issues/4406/comments
1
2024-04-10T14:09:30Z
2024-04-16T19:44:21Z
https://github.com/opensearch-project/data-prepper/issues/4406
2235700156
4406
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
A Data Prepper Event currently includes the Event data (which is the user's data), but Data Prepper Events also have other attributes, for example the Event Metadata or Event tags, and this could grow in the future.

**Describe the solution you'd like**
A standard `codec` that can be used to represent a full Data Prepper Event, including the metadata and tags. I am proposing
```
codec:
  event_json:
```
as the identifier for this codec.

As an input codec for use in sources like S3, the event_json would be read in as a Data Prepper Event, where the data is written to the Event data, the metadata to the Event Metadata, and the tags to the Event tags.

As an output codec, the structure of the json representing the full Event would contain something like the following format:
```
{
  "event_data": { EVENT_DATA_JSON },
  "event_metadata": { EVENT_METADATA_JSON },
  "event_tags": [ "EVENT_TAGS" ]
}
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
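To make the proposed envelope concrete, here is a rough Python sketch — not an actual Data Prepper codec, just the field names from the proposal above — that round-trips an event's data, metadata, and tags through the `event_json` structure:

```python
import json

def to_event_json(event_data, event_metadata, event_tags):
    # Serialize a full event into the proposed envelope.
    return json.dumps({
        "event_data": event_data,
        "event_metadata": event_metadata,
        "event_tags": list(event_tags),
    })

def from_event_json(serialized):
    # Parse the envelope back into its three parts.
    doc = json.loads(serialized)
    return doc["event_data"], doc["event_metadata"], doc["event_tags"]
```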
Data Prepper Event Json Codec
https://api.github.com/repos/opensearch-project/data-prepper/issues/4404/comments
4
2024-04-09T21:23:13Z
2024-04-22T23:22:18Z
https://github.com/opensearch-project/data-prepper/issues/4404
2234322237
4404
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
There are times when this error is received by the DynamoDB source when it is getting a shard iterator for a shard (https://github.com/opensearch-project/data-prepper/blob/9f778dde31ca25ce69ebd61bcd97b6a6c1c4cd3b/data-prepper-plugins/dynamodb-source/src/main/java/org/opensearch/dataprepper/plugins/source/dynamodb/stream/ShardConsumerFactory.java#L170)
```
2024-04-01T21:32:04.221 [pool-13-thread-4] ERROR org.opensearch.dataprepper.plugins.source.dynamodb.stream.ShardConsumerFactory - Exception when trying to get the shard iterator due to Requested resource not found: Shard does not exist (Service: DynamoDbStreams, Status Code: 400, Request ID: CBIRSDQC1HK3F07CNNNAJ1EOBBVV4KQNSO5AEMVJF66Q9ASUAAJG)
```
**To Reproduce**
Have not recreated, but suspicion is that this is due to acknowledgments not being received for shards, and leads to retrying the shard for 24 hours until it expires. However, the pipeline should recover after this, but it does not, even though the partition is marked as completed when this error is hit (https://github.com/opensearch-project/data-prepper/blob/9f778dde31ca25ce69ebd61bcd97b6a6c1c4cd3b/data-prepper-plugins/dynamodb-source/src/main/java/org/opensearch/dataprepper/plugins/source/dynamodb/stream/StreamScheduler.java#L124)

**Expected behavior**
Move on to the next shards and continue processing

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
[BUG] DynamoDB source shard does not exist results in stuck pipeline
https://api.github.com/repos/opensearch-project/data-prepper/issues/4401/comments
0
2024-04-09T19:15:34Z
2024-04-09T19:23:59Z
https://github.com/opensearch-project/data-prepper/issues/4401
2234146472
4401
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
Users can provide values that are not AWS account Ids in the S3 source.
```
bucket_owners:
  my-bucket: abc
```
**Describe the solution you'd like**
Validate AWS account Ids wherever they are supplied. They should be 12 digits. Optionally, we can also trim out any spaces and hyphens (even from the middle) in case users provide them in the form of `1234-5678-9012`. I've not seen this happen, but the AWS Console shows the account Id in that form.
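A minimal sketch of the validation described above (a hypothetical helper, not existing Data Prepper code): accept a 12-digit account Id while tolerating spaces and hyphens anywhere in the value:

```python
import re

def normalize_aws_account_id(raw):
    """Return a bare 12-digit AWS account Id, stripping spaces and
    hyphens (so '1234-5678-9012' normalizes to '123456789012');
    raise on anything that is not 12 digits after cleanup."""
    cleaned = re.sub(r"[\s-]", "", str(raw))
    if not re.fullmatch(r"\d{12}", cleaned):
        raise ValueError(f"{raw!r} is not a valid 12-digit AWS account Id")
    return cleaned
```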
Provide validations of AWS accountIds
https://api.github.com/repos/opensearch-project/data-prepper/issues/4398/comments
0
2024-04-08T20:47:40Z
2024-04-11T19:14:15Z
https://github.com/opensearch-project/data-prepper/issues/4398
2232053796
4398
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
Currently my team uses `logstash` for our log shipping solution. In its current form (OSS version), it's riddled with CVEs, and I would like to switch to `data-prepper`, but I have noticed that it's lacking some of the essential plugins we use. I am wondering, can we add these? `Redis`, `Apache Pulsar`, `Azure Event Hubs` sources.

**Describe the solution you'd like**
`Redis`, `Apache Pulsar`, `Azure Event Hubs` source plugin support

**Describe alternatives you've considered (Optional)**
FluentD

**Additional context**
none
Feature(s) Request: More source plugins
https://api.github.com/repos/opensearch-project/data-prepper/issues/4392/comments
1
2024-04-04T15:12:28Z
2024-04-09T19:34:54Z
https://github.com/opensearch-project/data-prepper/issues/4392
2225783948
4392
[ "opensearch-project", "data-prepper" ]
Hello

**Is your feature request related to a problem?**
It may already be possible, but I would like a mutate to retrieve the last item of a list.

For example:
```
{
  "my-list": [
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
  ]
}
```
Would become:
```
{
  ...
  "lastkey": "value3"
}
```
**Additional context**
I tried with `add_entries` and using `-1` to get it:
```
processor:
  - add_entries:
      entries:
        - key: "my-key"
          value_expression: /my-list/-1
```
But it didn't work; I had the following error:
```
2024-03-28T16:30:37,711 [waf-log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.expression.ParseTreeEvaluator - Unable to evaluate event
org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate the part of input statement: /my-list/-1
	at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.exitEveryRule(ParseTreeEvaluatorListener.java:91) ~[data-prepper-expression-2.7.0.jar:?]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:63) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:38) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:36) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:36) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:37) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:17) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:39) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.plugins.processor.mutateevent.AddEntryProcessor.doExecute(AddEntryProcessor.java:61) ~[mutate-event-processors-2.7.0.jar:?]
	at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.7.0.jar:?]
	at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) [micrometer-core-1.11.5.jar:1.11.5]
	at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) [data-prepper-api-2.7.0.jar:?]
	at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:135) [data-prepper-core-2.7.0.jar:?]
	at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.7.0.jar:?]
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: java.lang.IllegalArgumentException: DIVIDE requires left operand to be either Float or Integer.
	at org.opensearch.dataprepper.expression.ArithmeticBinaryOperator.evaluate(ArithmeticBinaryOperator.java:46) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ArithmeticBinaryOperator.evaluate(ArithmeticBinaryOperator.java:16) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.performSingleOperation(ParseTreeEvaluatorListener.java:104) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.exitEveryRule(ParseTreeEvaluatorListener.java:88) ~[data-prepper-expression-2.7.0.jar:?]
	... 18 more
2024-03-28T16:30:37,717 [waf-log-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.plugins.processor.mutateevent.AddEntryProcessor - Error adding entry to record [org.opensearch.dataprepper.model.log.JacksonLog@50405800] with key [my-key], metadataKey [null], value_expression [/my-list/-1] format [null], value [null]
org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "/my-list/-1"
	at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:42) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.plugins.processor.mutateevent.AddEntryProcessor.doExecute(AddEntryProcessor.java:61) ~[mutate-event-processors-2.7.0.jar:?]
	at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.7.0.jar:?]
	at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) [micrometer-core-1.11.5.jar:1.11.5]
	at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) [data-prepper-api-2.7.0.jar:?]
	at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:135) [data-prepper-core-2.7.0.jar:?]
	at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.7.0.jar:?]
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate the part of input statement: /my-list/-1
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:41) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:17) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:39) ~[data-prepper-expression-2.7.0.jar:?]
	... 11 more
Caused by: org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate the part of input statement: /my-list/-1
	at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.exitEveryRule(ParseTreeEvaluatorListener.java:91) ~[data-prepper-expression-2.7.0.jar:?]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:63) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:38) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:36) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:36) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:37) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:17) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:39) ~[data-prepper-expression-2.7.0.jar:?]
	... 11 more
Caused by: java.lang.IllegalArgumentException: DIVIDE requires left operand to be either Float or Integer.
	at org.opensearch.dataprepper.expression.ArithmeticBinaryOperator.evaluate(ArithmeticBinaryOperator.java:46) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ArithmeticBinaryOperator.evaluate(ArithmeticBinaryOperator.java:16) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.performSingleOperation(ParseTreeEvaluatorListener.java:104) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.exitEveryRule(ParseTreeEvaluatorListener.java:88) ~[data-prepper-expression-2.7.0.jar:?]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:63) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:38) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:36) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:36) ~[antlr4-runtime-4.10.1.jar:4.10.1]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:37) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:17) ~[data-prepper-expression-2.7.0.jar:?]
	at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:39) ~[data-prepper-expression-2.7.0.jar:?]
	... 11 more
```
Retrieve last item of a list
https://api.github.com/repos/opensearch-project/data-prepper/issues/4354/comments
2
2024-03-28T16:35:43Z
2024-04-03T07:57:43Z
https://github.com/opensearch-project/data-prepper/issues/4354
2213661262
4354
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
S3 Sink users would like to be able to group Events together and send them to dynamic partitions of their S3 bucket. Users would like to track different groups based on what is in the Events to group them into different S3 "folders".

**Describe the solution you'd like**
Proposal with path_prefix and object_name:
```
sink:
  - s3:
      object_key:
        path_prefix: "folder-${/partition_key}/"
        object_name: "my-object-${/some_key}"
      threshold:
        # We flush and remove groups when they reach 1000 events
        event_count: 1000
        # Alternatively, flush and remove groups on time when event_count isn't reached
        event_collect_timeout: "30s"
```
When dynamic values are used in either the path_prefix or the object_name, we will group Events together based on the expression evaluation result.

**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
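The grouping behavior in the proposal could look roughly like this sketch (hypothetical, in Python rather than the actual S3 sink code); `key_fn` stands in for evaluating the `${/partition_key}`-style expressions against each Event:

```python
from collections import defaultdict

def group_events(events, key_fn, event_count_threshold):
    """Group events by the evaluated object-key expression; flush a
    group as soon as it reaches the event_count threshold. Returns
    (flushed_groups, still_pending_groups)."""
    groups = defaultdict(list)
    flushed = []
    for event in events:
        key = key_fn(event)                   # e.g. "folder-a/my-object-x"
        groups[key].append(event)
        if len(groups[key]) >= event_count_threshold:
            flushed.append((key, groups.pop(key)))
    return flushed, dict(groups)
```

A real sink would also flush pending groups on `event_collect_timeout`, which this sketch omits.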
Support dynamically grouping Events together in the S3 sink
https://api.github.com/repos/opensearch-project/data-prepper/issues/4345/comments
1
2024-03-27T18:36:24Z
2024-03-27T23:44:03Z
https://github.com/opensearch-project/data-prepper/issues/4345
2211610965
4345
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
Data Prepper reports a `documentErrors` metric which counts all document errors. However, it may be difficult to know the exact cause. Data Prepper also has metrics related to specific known errors that happen at the `_bulk` response. Neither of these lets us analyze all the document status codes.

**Describe the solution you'd like**
Provide a dynamic metric which includes the status from the response item. I'd also like to do this with tags/dimensions (see https://github.com/opensearch-project/data-prepper/issues/3051#issuecomment-1901270539 for a broader proposal on this)
```
<pipeline-name>.opensearch.documentStatus | status=<status>
```
**Describe alternatives you've considered (Optional)**
We could choose specific codes and have discrete metrics for those. But, this may miss some errors.

**Additional context**
See #4343 for another metric proposal to help with similar problems.
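A sketch of the proposed dynamic metric shape (illustrative only — not the Micrometer-based implementation Data Prepper would actually use), tallying one `documentStatus` count per distinct status tag taken from `_bulk` response items:

```python
from collections import Counter

def record_document_statuses(bulk_response_items, registry=None):
    """Increment a documentStatus counter tagged with each response
    item's status, so every status code becomes analyzable."""
    registry = registry if registry is not None else Counter()
    for item in bulk_response_items:
        # Each _bulk item is keyed by its action (index/create/...).
        result = item.get("index") or item.get("create") or {}
        registry[f"documentStatus|status={result.get('status')}"] += 1
    return registry
```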
Better metrics on OpenSearch document errors
https://api.github.com/repos/opensearch-project/data-prepper/issues/4344/comments
0
2024-03-27T15:36:43Z
2024-03-27T22:51:02Z
https://github.com/opensearch-project/data-prepper/issues/4344
2211130528
4344
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
We have had some difficulty adding up metrics when some events are duplicates. With the current metrics, we cannot tell how many documents were counted as duplicates in OpenSearch. This can make it impossible to reconcile the number of events sent with the actual documents in OpenSearch.

Goal:
```
documentsSuccess - documentsDuplicates == count on OpenSearch
```
**Describe the solution you'd like**
Add a metric for duplicate documents. To determine if a document is a duplicate, we can check `_seq_no > 0`. If this is true, then we can consider this document to be a duplicate.
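The proposed duplicate check could be sketched like this (a hypothetical helper over a parsed `_bulk` response, not the actual sink code):

```python
def count_duplicates(bulk_response_items):
    """Count documents OpenSearch reports with _seq_no > 0, the
    duplicate heuristic proposed above."""
    duplicates = 0
    for item in bulk_response_items:
        result = item.get("index") or item.get("create") or {}
        if result.get("_seq_no", 0) > 0:
            duplicates += 1
    return duplicates
```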
Better metrics for OpenSearch duplicate documents
https://api.github.com/repos/opensearch-project/data-prepper/issues/4343/comments
0
2024-03-27T15:31:57Z
2024-03-27T22:51:02Z
https://github.com/opensearch-project/data-prepper/issues/4343
2211119948
4343
[ "opensearch-project", "data-prepper" ]
Please approve or deny the release of Data Prepper. **VERSION**: 2.7.0 **BUILD NUMBER**: 81 **RELEASE MAJOR TAG**: true **RELEASE LATEST TAG**: true Workflow is pending manual review. URL: https://github.com/opensearch-project/data-prepper/actions/runs/8453845571 Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh] Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel.
Manual approval required for workflow run 8453845571: Release Data Prepper : 2.7.0
https://api.github.com/repos/opensearch-project/data-prepper/issues/4342/comments
3
2024-03-27T15:21:17Z
2024-03-27T15:26:53Z
https://github.com/opensearch-project/data-prepper/issues/4342
2211091790
4342
[ "opensearch-project", "data-prepper" ]
**Describe the bug**
The issue arises when utilizing the predefined pattern "%{CREDIT_CARD_NUMBER}" with the obfuscate processor in the OSI pipeline. The expected behavior is for the processor to exclusively mask credit card information within logs while leaving non-personally identifiable information (non-PII) fields untouched. However, in our current environment, we have observed that the obfuscate processor is erroneously masking non-PII fields such as trackingId and sdsStayGuid. This unintended behavior complicates troubleshooting efforts for application teams as critical data points become obscured.

Attaching some screenshots where the data has been masked:
<img width="1178" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/165096742/f7436014-7282-4e8e-9b33-81ae615343ba">
<img width="1172" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/165096742/c196a26f-06c0-4941-b090-1c4340084bb9">

**Expected behavior**
When employing the patterns configuration option, users expect seamless integration with a predefined set of obfuscation patterns for common fields. Specifically, the obfuscate processor should implement the predefined pattern "%{CREDIT_CARD_NUMBER}" without encountering errors. It is imperative that this processor selectively masks only credit card values within logs, while abstaining from obscuring any other field values that may resemble credit card patterns.

The trackingIds should not be masked, as shown in this screenshot:
![image](https://github.com/opensearch-project/data-prepper/assets/165096742/8e857186-5e34-4e2b-b546-16ede025e066)

**Resolution:**
To rectify this issue, the implementation of the obfuscate processor requires refinement. The processor should be updated to accurately discern and mask solely credit card numbers within logs, adhering strictly to the predefined "%{CREDIT_CARD_NUMBER}" pattern. This necessitates a thorough review and potential adjustment of the pattern matching algorithm employed by the processor. Furthermore, comprehensive testing is essential to validate the updated processor's efficacy across diverse log scenarios, ensuring that it effectively safeguards credit card information while preserving the integrity of non-PII fields.

**Steps to Reproduce:**
1. Configure the obfuscate processor within the OSI pipeline, utilizing the predefined pattern "%{CREDIT_CARD_NUMBER}".
2. Analyze logs containing a mixture of credit card numbers and non-PII fields.
3. Observe whether non-PII fields are erroneously masked alongside credit card numbers, impeding the troubleshooting process for application teams.

Example configuration:
```
- obfuscate:
    source: 'data'
    patterns:
      - '%{CREDIT_CARD_NUMBER}'
    action:
      mask:
        mask_character: "&"
        mask_character_length: 10
```
**Environment (please complete the following information):**
- OS: Amazon EC2 - Linux/UNIX
- Version: AML 2.0

**Additional context**
Add any other context about the problem here.
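One way the pattern could be tightened — a hedged sketch, not the processor's actual implementation — is to require digit-run boundaries plus a Luhn checksum before masking, which rejects look-alike values such as trackingIds (roughly 9 in 10 arbitrary digit runs fail the checksum). The `mask_character`/`mask_character_length` names mirror the configuration above:

```python
import re

# Digit runs of plausible card length, not embedded in a longer digit run.
CARD_RE = re.compile(r"(?<!\d)\d{13,16}(?!\d)")

def luhn_valid(digits: str) -> bool:
    # Standard Luhn checksum: double every second digit from the right.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def mask_cards(text, mask_character="&", mask_character_length=10):
    def _mask(match):
        run = match.group()
        return mask_character * mask_character_length if luhn_valid(run) else run
    return CARD_RE.sub(_mask, text)
```

With this, `4111111111111111` (a Luhn-valid test number) is masked, while a 16-digit trackingId that fails the checksum is left intact.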
[BUG]Incorrect Behavior of Obfuscate Processor with Predefined Pattern "%{CREDIT_CARD_NUMBER}"
https://api.github.com/repos/opensearch-project/data-prepper/issues/4340/comments
7
2024-03-26T22:56:03Z
2024-05-16T16:37:45Z
https://github.com/opensearch-project/data-prepper/issues/4340
2209486852
4340
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.**
When using DDB source, some of the incoming data gets typed as BigDecimal. This type is not supported by the ConvertEntries processor, which results in the below exception:
```
Unable to convert key: ****** with value: ****** to ******
java.lang.IllegalArgumentException: Unsupported type conversion. Source class: class java.math.BigDecimal
	at org.opensearch.dataprepper.typeconverter.IntegerConverter.convert(IntegerConverter.java:22) ~[data-prepper-api-2.6.1.jar:?]
	at org.opensearch.dataprepper.typeconverter.IntegerConverter.convert(IntegerConverter.java:8) ~[data-prepper-api-2.6.1.jar:?]
	at org.opensearch.dataprepper.plugins.processor.mutateevent.ConvertEntryTypeProcessor.doExecute(ConvertEntryTypeProcessor.java:68) ~[mutate-event-processors-2.6.1.jar:?]
```
**Describe the solution you'd like**
BigDecimal should be added to the supported conversion types.

**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
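In Python terms (using `decimal.Decimal` as a stand-in for `java.math.BigDecimal`), the missing branch in the converter amounts to something like this sketch — illustrative only, not the actual `IntegerConverter` code:

```python
from decimal import Decimal

def convert_to_integer(value):
    """Convert supported source types to int, including the
    Decimal/BigDecimal case that the exception above is missing."""
    if isinstance(value, bool):
        return int(value)
    if isinstance(value, (int, float, Decimal)):
        return int(value)  # truncates toward zero for Decimal/float
    raise TypeError(f"Unsupported type conversion. Source class: {type(value).__name__}")
```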
Add support for BigDecimal in ConvertType processor
https://api.github.com/repos/opensearch-project/data-prepper/issues/4316/comments
4
2024-03-21T21:32:23Z
2024-04-24T18:00:28Z
https://github.com/opensearch-project/data-prepper/issues/4316
2201229617
4316
[ "opensearch-project", "data-prepper" ]
## CVE-2024-29133 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-configuration2-2.8.0.jar</b></p></summary> <p>Tools to assist in the reading of configuration/preferences files in various formats</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-configuration/">https://commons.apache.org/proper/commons-configuration/</a></p> <p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-configuration2/2.8.0/6a76acbe14d2c01d4758a57171f3f6a150dbd462/commons-configuration2-2.8.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-configuration2/2.8.0/6a76acbe14d2c01d4758a57171f3f6a150dbd462/commons-configuration2-2.8.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-configuration2/2.8.0/6a76acbe14d2c01d4758a57171f3f6a150dbd462/commons-configuration2-2.8.0.jar</p> <p> Dependency Hierarchy: - hadoop-common-3.3.6.jar (Root Library) - :x: **commons-configuration2-2.8.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Out-of-bounds Write vulnerability in Apache Commons Configuration.This issue affects Apache Commons Configuration: from 2.0 before 2.10.1. Users are recommended to upgrade to version 2.10.1, which fixes the issue. 
<p>Publish Date: 2024-03-21 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-29133>CVE-2024-29133</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread/ccb9w15bscznh6tnp3wsvrrj9crbszh2">https://lists.apache.org/thread/ccb9w15bscznh6tnp3wsvrrj9crbszh2</a></p> <p>Release Date: 2024-03-21</p> <p>Fix Resolution: org.apache.commons:commons-configuration2:2.10.1</p> </p> </details> <p></p>
CVE-2024-29133 (Medium) detected in commons-configuration2-2.8.0.jar - autoclosed
https://api.github.com/repos/opensearch-project/data-prepper/issues/4314/comments
1
2024-03-21T20:53:34Z
2024-03-21T21:16:26Z
https://github.com/opensearch-project/data-prepper/issues/4314
2201165761
4314
[ "opensearch-project", "data-prepper" ]
## CVE-2024-29131 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-configuration2-2.8.0.jar</b></p></summary> <p>Tools to assist in the reading of configuration/preferences files in various formats</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-configuration/">https://commons.apache.org/proper/commons-configuration/</a></p> <p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-configuration2/2.8.0/6a76acbe14d2c01d4758a57171f3f6a150dbd462/commons-configuration2-2.8.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-configuration2/2.8.0/6a76acbe14d2c01d4758a57171f3f6a150dbd462/commons-configuration2-2.8.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-configuration2/2.8.0/6a76acbe14d2c01d4758a57171f3f6a150dbd462/commons-configuration2-2.8.0.jar</p> <p> Dependency Hierarchy: - hadoop-common-3.3.6.jar (Root Library) - :x: **commons-configuration2-2.8.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/2f4c8c9c7f8d4ec6e76c3653ef8446fcee35cd50">2f4c8c9c7f8d4ec6e76c3653ef8446fcee35cd50</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Out-of-bounds Write vulnerability in Apache Commons Configuration.This issue affects Apache Commons Configuration: from 2.0 before 2.10.1. Users are recommended to upgrade to version 2.10.1, which fixes the issue. 
<p>Publish Date: 2024-03-21 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-29131>CVE-2024-29131</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread/03nzzzjn4oknyw5y0871tw7ltj0t3r37">https://lists.apache.org/thread/03nzzzjn4oknyw5y0871tw7ltj0t3r37</a></p> <p>Release Date: 2024-03-21</p> <p>Fix Resolution: org.apache.commons:commons-configuration2:2.10.1</p> </p> </details> <p></p>
CVE-2024-29131 (Medium) detected in commons-configuration2-2.8.0.jar - autoclosed
https://api.github.com/repos/opensearch-project/data-prepper/issues/4313/comments
1
2024-03-21T20:53:32Z
2024-03-21T21:16:24Z
https://github.com/opensearch-project/data-prepper/issues/4313
2201165682
4313
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** The GeoIP database manifest contains a `sha256_hash` value. Data Prepper should verify this against the data downloaded to enhance security. **Describe the solution you'd like** After downloading a database from the URL provided by a `manifest.json`, calculate the SHA-256 of the download. After that, verify it matches the expectation. If not, do not use the database.
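The verification step amounts to hashing the downloaded bytes and comparing against the manifest's `sha256_hash` — sketched below in Python (Data Prepper itself is Java; this only illustrates the check):

```python
import hashlib
import hmac

def database_matches_manifest(payload: bytes, expected_sha256_hex: str) -> bool:
    """Return True only if the downloaded database bytes hash to the
    sha256_hash advertised in manifest.json; on False, the database
    should be discarded rather than loaded."""
    actual = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(actual, expected_sha256_hex.lower())
```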
Verify the SHA-256 for downloaded MaxMind databases
https://api.github.com/repos/opensearch-project/data-prepper/issues/4312/comments
0
2024-03-21T18:46:38Z
2025-01-21T20:59:04Z
https://github.com/opensearch-project/data-prepper/issues/4312
2200915906
4312
[ "opensearch-project", "data-prepper" ]
A lot has changed since Data Prepper was initially introduced to the public in December 2020. First, when Data Prepper launched there was no OpenSearch project. Shortly after the release of Data Prepper, the [OpenSearch project launched](https://opensearch.org/blog/opensearch-general-availability-announcement/) with OpenSearch Core and OpenSearch Dashboards. Data Prepper joined the project shortly after. Second, Data Prepper has become a key component of the [OpenSearch Toolbox.](https://opensearch.org/platform/index.html) Along with OpenSearch Core and OpenSearch Dashboards, Data Prepper is the third part of the whole platform, and is the recommended way to ingest data into OpenSearch. Third, Data Prepper itself has grown as a product. When Data Prepper first launched, it was focused on supporting trace analytics. It quickly grew to support log analytics from a variety of sources. Data Prepper has also grown to supporting search use-cases through sources such as [S3](https://opensearch.org/docs/latest/data-prepper/common-use-cases/s3-logs/) and [DynamoDB](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/dynamo-db/). And with the addition of [OpenSearch as a source,](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/opensearch/) users can migrate data between OpenSearch clusters. With these changes, Data Prepper should clearly be part of the OpenSearch ecosystem. To that end, I propose that we rename Data Prepper to OpenSearch Ingestion. This name reflects two important aspects of this project. 1) Data Prepper is part of the OpenSearch Toolbox. The new name conveys this to users, clarifying that this is part of OpenSearch. 2) This name conveys the goal of Data Prepper to provide ingestion into OpenSearch. One concern that may arise with this name is that may indicate that the product only sends data to OpenSearch. 
The maintainers add sinks that help complement the primary product use-case of ingesting data into OpenSearch. For example, [writing to S3](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sinks/s3/) to reduce the volume of data going to OpenSearch. These continue to be important offerings of the product. But, they are complementary to the primary goal of ingesting data into OpenSearch. ### Process for Renaming An important [principle](https://opensearch.org/faq/#q3.28) in the OpenSearch project is to support [semver](https://semver.org/) and avoid breaking changes. The maintainers of Data Prepper follow this principle and will continue to do so with this change. Renaming a product can be a disruptive change. But we will take care to follow semver and reduce friction. Here is a sketch of the process for renaming. 1. We will make text changes to the repository, product pages, and documentation with the new name. This change just updates the names that readers see. 2. Update the URLs for the product and documentation pages with the name `opensearch-ingestion` and add redirects from the existing data-prepper URLs. 3. Data Prepper currently deploys [artifacts](https://opensearch.org/downloads.html#data-prepper) for Docker images and archive files. For the remainder of the 2.x versions, we can retain the Data Prepper name to avoid breaking any automation. Additionally, we could add the new artifact names in parallel. This way, users can update their automation to use the new name at their convenience. When 3.0 releases, we would remove any artifacts named Data Prepper. 4. Data Prepper itself has no APIs with the name Data Prepper. So there are no API changes needed. 5. Code changes can come in over time as they don’t have as much impact on users. We would create a new root package - `org.opensearch.ingestion`. New plugins can start to use this package. We can migrate existing code to this package over time. 6. 
Rename the project in GitHub to `OpenSearch-Ingestion`. GitHub supports [renaming](https://docs.github.com/en/enterprise-cloud@latest/repositories/creating-and-managing-repositories/renaming-a-repository) a project to support redirects on the URL.
[RFC] OpenSearch Ingestion: A New Name for the Next Steps of Data Prepper
https://api.github.com/repos/opensearch-project/data-prepper/issues/4309/comments
8
2024-03-21T15:46:05Z
2025-04-07T14:31:56Z
https://github.com/opensearch-project/data-prepper/issues/4309
2,200,540,124
4,309
[ "opensearch-project", "data-prepper" ]
**Describe the bug** - Pipeline with `dynamodb` as the source and `OpenSearch Serverless` sink is creating empty dlqObjects ```{"dlqObjects":[]}``` - non-empty dlqObjects are created even though data is loaded into OpenSearch. Seeing messages like these ```"status":0,"message":"Number of retries reached the limit of max retries (configured value 10)``` **To Reproduce** Steps to reproduce the behavior: 1. Define a pipeline with a dynamodb table as the source (ideally with at least 10M records) 2. Define an OpenSearch Serverless sink 3. Define an S3 bucket and prefix for the DLQ 4. Run the pipeline 5. The DLQ S3 bucket will have several empty S3 objects that are 17.0 bytes in size (```{"dlqObjects":[]}```) 6. Some DLQ S3 objects have data, but those items are loaded in OpenSearch **Expected behavior** - No DLQ objects are created if the data has been loaded successfully. - If the data load is not successful and a DLQ S3 object is created, then `dlqObjects` should be populated with relevant data. - If data is ingested in OpenSearch, a DLQ object with that id should not be created **Additional context** - max_retries is set to 10 - Pipeline has min 1 OCU and max 20 OCU - dynamodb table has ~100M records - OpenSearch Serverless sink
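One way to quantify the impact described above is to scan the DLQ prefix and classify each object by its parsed contents rather than by byte size. A minimal sketch (the function name and the parse-instead-of-size-check approach are illustrative, not part of Data Prepper):

```python
import json

def is_empty_dlq_object(body: bytes) -> bool:
    """Return True when a DLQ S3 object carries no entries, i.e. {"dlqObjects": []}.

    Parsing is more robust than checking for the 17-byte size mentioned above,
    since whitespace or encoding differences would change the object size.
    """
    try:
        payload = json.loads(body)
    except ValueError:
        return False  # not JSON at all; treat as non-empty so it gets inspected
    return payload.get("dlqObjects") == []
```

Paired with an S3 listing (e.g. boto3 `list_objects_v2` over the DLQ prefix), this can count or delete the empty objects during cleanup.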
[BUG] Empty DLQ Objects and DLQ objects with data even though data is loaded correctly
https://api.github.com/repos/opensearch-project/data-prepper/issues/4304/comments
1
2024-03-20T17:10:53Z
2024-05-16T15:43:06Z
https://github.com/opensearch-project/data-prepper/issues/4304
2,198,082,410
4,304
[ "opensearch-project", "data-prepper" ]
I have configured the otel collector with Prometheus as the receiver and Data Prepper as the exporter. So, while I can retrieve data from the OpenSearch dashboard, it appears that all data is grouped under the name field. To create a dashboard, I believe the fields need to be distinguished. What additional settings should I add to the otel collector configmap to differentiate fields? I am attaching the yaml file of the otel collector configmap that I have written. Thank you. [otel collector configmap yaml] data: relay: | exporters: otlp/metrics: endpoint: data-prepper-headless.opensearch.svc:21891 tls: insecure: true extensions: health_check: endpoint: ${env:K8S_POD_IP}:13133 processors: memory_limiter: check_interval: 5s limit_percentage: 80 spike_limit_percentage: 25 receivers: prometheus/internal: config: scrape_configs: - job_name: apps kubernetes_sd_configs: - role: pod selectors: - role: pod # only scrape data from pods running on the same node as collector field: "spec.nodeName=${NODE_NAME}" relabel_configs: # scrape pods annotated with "prometheus.io/scrape: true" - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] regex: "true" action: keep # read the port from "prometheus.io/port: <port>" annotation and update scraping address accordingly - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace target_label: __address__ regex: ([^:]+)(?::\d+)?;(\d+) # escaped $1:$2 replacement: $$1:$$2 - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name - job_name: 'otel-collector' scrape_interval: 5s static_configs: - targets: ['prometheus-k8s.monitoring.svc:9090'] metric_relabel_configs: - source_labels: [ __name__ ] regex: '.*grpc_io.*' action: drop otlp: protocols: grpc: endpoint: ${env:K8S_POD_IP}:4317 http: endpoint: ${env:K8S_POD_IP}:4318 service: extensions: - 
health_check pipelines: metrics/internal: exporters: - otlp/metrics processors: - memory_limiter receivers: - prometheus/internal - otlp
When I configured the Prometheus receiver, the fields were not differentiated.
https://api.github.com/repos/opensearch-project/data-prepper/issues/4302/comments
1
2024-03-20T04:07:27Z
2024-03-26T19:40:04Z
https://github.com/opensearch-project/data-prepper/issues/4302
2,196,627,121
4,302
[ "opensearch-project", "data-prepper" ]
**Describe the bug** An empty DLQ object is created when there are failures shipping to OpenSearch but all of the failures are due to a version conflict. The code here filters out version conflicts from the DLQ entries: https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java#L353-L356 But then there is no empty check before sending the doc list to the DLQ: https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java#L369 **Expected behavior** When every failure in a bulk response is a version conflict, no DLQ object should be written.
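The fix amounts to guarding the DLQ write when the version-conflict filter leaves nothing behind. A language-neutral sketch of that logic in Python (the real code is the Java `BulkRetryStrategy` linked above; the function name, and using HTTP 409 to stand in for the version-conflict check, are illustrative):

```python
def dlq_entries_after_conflict_filter(failed_entries):
    """Drop version conflicts (HTTP 409) as the filter above does, then
    return None instead of an empty list so no empty DLQ object is written."""
    remaining = [e for e in failed_entries if e.get("status") != 409]
    return remaining if remaining else None
```

The caller would skip the DLQ write entirely when this returns None, which is the missing empty check.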
[BUG] Empty DLQ entries when version conflicts occur
https://api.github.com/repos/opensearch-project/data-prepper/issues/4301/comments
1
2024-03-19T20:07:29Z
2024-04-11T19:14:22Z
https://github.com/opensearch-project/data-prepper/issues/4301
2,195,895,755
4,301
[ "opensearch-project", "data-prepper" ]
See: https://github.com/opensearch-project/data-prepper/issues/3481#issuecomment-2007578931
ExportPartitionWorkerTest testProcessPartitionSuccess(String) failure
https://api.github.com/repos/opensearch-project/data-prepper/issues/4298/comments
2
2024-03-19T16:06:49Z
2024-03-22T21:07:55Z
https://github.com/opensearch-project/data-prepper/issues/4298
2,195,384,171
4,298
[ "opensearch-project", "data-prepper" ]
I configured the opentelemetry collector with Prometheus as the receiver and data prepper as the exporter. However, I'm encountering no error in the collector, and no data is showing up in the OpenSearch dashboard. What should I do? What might I have done wrong? I'll attach the Data Prepper configmap YAML and the OpenTelemetry Collector configmap YAML below. [data prepper configmap yaml] ``` otel-metrics-pipeline-2: # workers: 5 delay: 10 source: http_source: ssl: false port: 21891 buffer: bounded_blocking: buffer_size: 12800 batch_size: 1024 sink: - opensearch: hosts: ["https://opensearch-cluster-master.opensearch.svc:9200"] insecure: true username: admin password: admin index_type: custom index: proms-%{yyyy.MM.dd} ``` [otel collector configmap yaml] ``` data: relay: | exporters: debug: {} logging: {} otlp/metrics: endpoint: data-prepper-headless.opensearch.svc.cluster.local:21891 tls: insecure: true extensions: health_check: endpoint: ${env:K8S_POD_IP}:13133 processors: memory_limiter: check_interval: 5s limit_percentage: 80 spike_limit_percentage: 25 receivers: prometheus/internal: config: scrape_configs: - job_name: apps kubernetes_sd_configs: - role: pod selectors: - role: pod # only scrape data from pods running on the same node as collector field: "spec.nodeName=${NODE_NAME}" relabel_configs: # scrape pods annotated with "prometheus.io/scrape: true" - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] regex: "true" action: keep # read the port from "prometheus.io/port: <port>" annotation and update scraping address accordingly - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace target_label: __address__ regex: ([^:]+)(?::\d+)?;(\d+) # escaped $1:$2 replacement: $$1:$$2 - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name - job_name: 'otel-collector' 
scrape_interval: 5s static_configs: - targets: ['prometheus-k8s.monitoring.svc:9090'] metric_relabel_configs: - source_labels: [ __name__ ] regex: '.*grpc_io.*' action: drop otlp: protocols: grpc: endpoint: ${env:K8S_POD_IP}:4317 http: endpoint: ${env:K8S_POD_IP}:4318 service: extensions: - health_check pipelines: metrics/internal: exporters: - debug - logging - otlp/metrics processors: - memory_limiter ```
Errors in the collector when configuring Prometheus as the receiver and Data Prepper as the exporter.
https://api.github.com/repos/opensearch-project/data-prepper/issues/4299/comments
4
2024-03-19T02:28:14Z
2024-03-20T02:31:14Z
https://github.com/opensearch-project/data-prepper/issues/4299
2,195,617,243
4,299
[ "opensearch-project", "data-prepper" ]
## CVE-2023-52428 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>nimbus-jose-jwt-9.37.1.jar</b>, <b>nimbus-jose-jwt-9.8.1.jar</b></p></summary> <p> <details><summary><b>nimbus-jose-jwt-9.37.1.jar</b></p></summary> <p>Java library for Javascript Object Signing and Encryption (JOSE) and JSON Web Tokens (JWT)</p> <p>Library home page: <a href="https://bitbucket.org/connect2id/nimbus-jose-jwt">https://bitbucket.org/connect2id/nimbus-jose-jwt</a></p> <p>Path to dependency file: /data-prepper-plugins/s3-sink/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.nimbusds/nimbus-jose-jwt/9.37.1/940fec997571d75f391afe3988b7fbef7b04b3be/nimbus-jose-jwt-9.37.1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.nimbusds/nimbus-jose-jwt/9.37.1/940fec997571d75f391afe3988b7fbef7b04b3be/nimbus-jose-jwt-9.37.1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.nimbusds/nimbus-jose-jwt/9.37.1/940fec997571d75f391afe3988b7fbef7b04b3be/nimbus-jose-jwt-9.37.1.jar</p> <p> Dependency Hierarchy: - :x: **nimbus-jose-jwt-9.37.1.jar** (Vulnerable Library) </details> <details><summary><b>nimbus-jose-jwt-9.8.1.jar</b></p></summary> <p>Java library for Javascript Object Signing and Encryption (JOSE) and JSON Web Tokens (JWT)</p> <p>Library home page: <a href="https://bitbucket.org/connect2id/nimbus-jose-jwt">https://bitbucket.org/connect2id/nimbus-jose-jwt</a></p> <p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.nimbusds/nimbus-jose-jwt/9.8.1/2af7f734313320e4b156522d22ce32b775633909/nimbus-jose-jwt-9.8.1.jar</p> <p> Dependency Hierarchy: - hadoop-common-3.3.6.jar (Root Library) - hadoop-auth-3.3.6.jar - :x: **nimbus-jose-jwt-9.8.1.jar** (Vulnerable Library) </details> <p>Found 
in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In Connect2id Nimbus JOSE+JWT before 9.37.2, an attacker can cause a denial of service (resource consumption) via a large JWE p2c header value (aka iteration count) for the PasswordBasedDecrypter (PBKDF2) component. <p>Publish Date: 2024-02-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-52428>CVE-2023-52428</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-52428">https://www.cve.org/CVERecord?id=CVE-2023-52428</a></p> <p>Release Date: 2024-02-11</p> <p>Fix Resolution: 9.37.2</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
CVE-2023-52428 (High) detected in nimbus-jose-jwt-9.37.1.jar, nimbus-jose-jwt-9.8.1.jar
https://api.github.com/repos/opensearch-project/data-prepper/issues/4296/comments
1
2024-03-18T22:05:32Z
2024-03-21T20:40:58Z
https://github.com/opensearch-project/data-prepper/issues/4296
2,193,464,178
4,296
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** I have an OpenSearch cluster that intermittently has specific indexes write blocked due to miscellaneous failures. To maintain the ingestion throughput for the non blocked indexes during this time I set the `opensearch.max_retries` to a low value. This way data for the blocked index gets sent to the DLQ quickly and the rest of the data continues being written to my sink. After I resolve the index write blocks I want to redrive the data from my DLQ using a separate instance of Data Prepper with an S3 Scan source. As far as I can tell the existing codecs/processors for Data Prepper do not allow for directly processing the JSON object that's written to the DLQ and splitting the `dlqObjects` array into multiple documents. Here's an example of the data that was written to my DLQ bucket ``` { "dlqObjects": [{ "pluginId": "opensearch", "pluginName": "opensearch", "pipelineName": "apache-log-pipeline", "failedData": { "index": "logs", "indexId": null, "status": 0, "message": "Number of retries reached the limit of max retries (configured value 1)", "document": { "time": "2014-08-11T11:40:13+00:00", "remote_addr": "xxx.xxx.xx", "status": "404", "request": "GET http://www.k2proxy.com//hello.html HTTP/1.1", "http_user_agent": "Mozilla/4.0 (compatible; WOW64; SLCC2;)", "@timestamp": "2024-03-14T21:24:37.498Z" } }, "timestamp": "2024-03-14T21:24:37.572Z" }] } ``` **Describe the solution you'd like** I would like to be able to enable my source codec to parse the elements out of the `dlqObjects` array. The S3 json codec configuration could add something like an `array_source` key. The JSON array at the `array_source` would then be processed by the codec and the rest of the data in the original JSON object would be ignored/dropped. 
An S3 scan pipeline for redriving DLQ data might look something like this: ``` dlq-pipeline: source: s3: acknowledgments: true codec: json: array_source: "dlqObjects" # NEW scan: scheduling: interval: PT1H buckets: - bucket: name: "mydlq" ``` **Describe alternatives you've considered (Optional)** Add a new configuration for specifying the format of the data that's written to the DLQ. This might allow Data Prepper to write the DLQ as a JSON array rather than a JSON object. This might look something like this: ``` dlq: s3: bucket: "mydlq" codec: json: ``` or ``` dlq: format: json: s3: bucket: "mydlq" ``` The data written to the DLQ would then be the same as the `dlqObjects` JSON array. In my case it might look like this: ``` [{ "pluginId": "opensearch", "pluginName": "opensearch", "pipelineName": "apache-log-pipeline", "failedData": { "index": "logs", "indexId": null, "status": 0, "message": "Number of retries reached the limit of max retries (configured value 1)", "document": { "time": "2014-08-11T11:40:13+00:00", "remote_addr": "xxx.xxx.xx", "status": "404", "request": "GET http://www.k2proxy.com//hello.html HTTP/1.1", "http_user_agent": "Mozilla/4.0 (compatible; WOW64; SLCC2;)", "@timestamp": "2024-03-14T21:24:37.498Z" } }, "timestamp": "2024-03-14T21:24:37.572Z" }] ``` **Additional context** None
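Until an `array_source` codec option exists, the redrive can be approximated outside Data Prepper by unwrapping the `dlqObjects` envelope before re-ingesting. A sketch under that assumption (the function name is mine; the envelope shape matches the DLQ example above):

```python
import json

def redrive_documents(dlq_object_body: str):
    """Yield the original documents from a Data Prepper DLQ S3 object,
    emulating the proposed `array_source: "dlqObjects"` codec behavior."""
    payload = json.loads(dlq_object_body)
    for entry in payload.get("dlqObjects", []):
        doc = entry.get("failedData", {}).get("document")
        if doc is not None:
            yield doc
```

Each yielded document could then be re-sent through an HTTP source or written to an S3 prefix that an S3 scan pipeline already understands.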
Improve retryability of DLQ data
https://api.github.com/repos/opensearch-project/data-prepper/issues/4295/comments
9
2024-03-18T15:06:46Z
2024-06-22T19:48:46Z
https://github.com/opensearch-project/data-prepper/issues/4295
2,192,477,591
4,295
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** Hi, I have a question about the ingestion pipeline: is there a standard way to process arrays of data in an ingestion processor that is recommended by Data Prepper? For example, if I have `{'category': ['a', 'b', 'c']}`, how can I break the elements in the array down? **Describe the solution you'd like** In the example `{'category': ['a', 'b', 'c']}`, ideally it can be broken down into separate documents such as `{'category': 'a'}`, `{'category': 'b'}`, and `{'category': 'c'}`. **Additional context** A common use case is the transition from NoSQL database arrays to a SQL database to support table joins in data warehousing.
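The requested fan-out — one event per array element — can be expressed as a small transform. This sketch shows the intended semantics outside of any Data Prepper processor (the function name is illustrative):

```python
def explode_field(event, key):
    """Split an event whose value at `key` is a list into one event per
    element, copying all other fields unchanged; non-list values pass
    through as a single event."""
    values = event.get(key)
    if not isinstance(values, list):
        return [event]
    return [{**event, key: v} for v in values]
```

A processor with these semantics would let each element be indexed (or joined) as its own row, matching the NoSQL-to-SQL use case above.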
Support for data normalization in arrays
https://api.github.com/repos/opensearch-project/data-prepper/issues/4291/comments
10
2024-03-15T22:40:49Z
2024-03-21T01:54:07Z
https://github.com/opensearch-project/data-prepper/issues/4291
2,189,573,913
4,291
[ "opensearch-project", "data-prepper" ]
## CVE-2024-23944 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>zookeeper-3.7.2.jar</b>, <b>zookeeper-3.8.3.jar</b></p></summary> <p> <details><summary><b>zookeeper-3.7.2.jar</b></p></summary> <p>ZooKeeper server</p> <p>Library home page: <a href="http://zookeeper.apache.org">http://zookeeper.apache.org</a></p> <p>Path to dependency file: /release/archives/linux/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7
.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.7.2/3b7c2c26b697094fc29e5a78a522cffa1a55b26d/zookeeper-3.7.2.jar</p> <p> Dependency Hierarchy: - :x: **zookeeper-3.7.2.jar** (Vulnerable Library) </details> <details><summary><b>zookeeper-3.8.3.jar</b></p></summary> <p>ZooKeeper server</p> <p>Library home page: <a href="http://zookeeper.apache.org">http://zookeeper.apache.org</a></p> <p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.8.3/97bb82af5b529ec14e9c2d44b96884544f0db743/zookeeper-3.8.3.jar</p> <p> Dependency Hierarchy: - :x: **zookeeper-3.8.3.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/774fa213614252c4772b018731452f020cafa16a">774fa213614252c4772b018731452f020cafa16a</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Information disclosure in persistent watchers handling in Apache ZooKeeper due to missing ACL check. It allows an attacker to monitor child znodes by attaching a persistent watcher (addWatch command) to a parent which the attacker has already access to. ZooKeeper server doesn't do ACL check when the persistent watcher is triggered and as a consequence, the full path of znodes that a watch event gets triggered upon is exposed to the owner of the watcher. It's important to note that only the path is exposed by this vulnerability, not the data of znode, but since znode path can contain sensitive information like user name or login ID, this issue is potentially critical. Users are recommended to upgrade to version 3.9.2, 3.8.4 which fixes the issue. 
<p>Publish Date: 2024-03-15 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-23944>CVE-2024-23944</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://seclists.org/oss-sec/2024/q1/229">https://seclists.org/oss-sec/2024/q1/229</a></p> <p>Release Date: 2024-03-15</p> <p>Fix Resolution: 3.8.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
CVE-2024-23944 (Medium) detected in zookeeper-3.7.2.jar, zookeeper-3.8.3.jar
https://api.github.com/repos/opensearch-project/data-prepper/issues/4290/comments
1
2024-03-15T15:00:16Z
2024-03-21T20:40:47Z
https://github.com/opensearch-project/data-prepper/issues/4290
2,188,758,580
4,290
[ "opensearch-project", "data-prepper" ]
## CVE-2023-51775 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jose4j-0.9.3.jar</b></p></summary> <p>The jose.4.j library is a robust and easy to use open source implementation of JSON Web Token (JWT) and the JOSE specification suite (JWS, JWE, and JWK). It is written in Java and relies solely on the JCA APIs for cryptography. Please see https://bitbucket.org/b_c/jose4j/wiki/Home for more info, examples, etc..</p> <p>Library home page: <a href="https://bitbucket.org/b_c/jose4j/">https://bitbucket.org/b_c/jose4j/</a></p> <p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.bitbucket.b_c/jose4j/0.9.3/9670e11587194cb6b1b2edcaa688a3fab85b4148/jose4j-0.9.3.jar</p> <p> Dependency Hierarchy: - :x: **jose4j-0.9.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/8827ebf9e6d6c55ade13e9cf7a6e39bc507c5afd">8827ebf9e6d6c55ade13e9cf7a6e39bc507c5afd</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The jose4j component before 0.9.4 for Java allows attackers to cause a denial of service (CPU consumption) via a large p2c (aka PBES2 Count) value. 
<p>Publish Date: 2024-02-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-51775>CVE-2023-51775</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-51775">https://www.cve.org/CVERecord?id=CVE-2023-51775</a></p> <p>Release Date: 2024-02-29</p> <p>Fix Resolution: 0.9.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
CVE-2023-51775 (High) detected in jose4j-0.9.3.jar
https://api.github.com/repos/opensearch-project/data-prepper/issues/4282/comments
0
2024-03-14T18:31:52Z
2024-03-21T20:34:06Z
https://github.com/opensearch-project/data-prepper/issues/4282
2,187,013,654
4,282
[ "opensearch-project", "data-prepper" ]
Versions: OpenSearch & OpenSearch Dashboards 2.11.1, Data Prepper 2.6.1, opentelemetry-collector-contrib 0.92.0. When utilizing Data Prepper's metrics pipeline to transmit data collected through OpenTelemetry to OpenSearch, the data is not stored in the format of `<metric_name> : <value>` in OpenSearch. I use the OTel collector to collect kubelet data. If you check in OpenSearch Dashboards, the metric name is stored in the name field. ``` { name: <metric_name> value: <value> } ``` I would like it saved like this so that I can easily use the visualization features you provide: ``` { <metric_name>: <value> } ``` Is it possible to change it to something like this? [name field] <img width="508" alt="스크린샷 2024-03-14 오후 3 40 48" src="https://github.com/opensearch-project/data-prepper/assets/83107898/dcd317a3-02ea-4652-b8fa-90399369c59f"> [data-prepper config] ![스크린샷 2024-03-14 오후 3 43 34](https://github.com/opensearch-project/data-prepper/assets/83107898/3d359f7a-6764-4d51-8299-2077b87e94ad)
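The transformation being asked for is a simple pivot of the `name` field into a key. As a sketch of the desired mapping only (not an existing Data Prepper feature — and note that dynamic field names can cause mapping explosion in OpenSearch indexes):

```python
def pivot_metric(doc):
    """Turn {"name": m, "value": v, ...} into {m: v, ...} so each metric
    becomes its own field; all other fields are copied unchanged."""
    out = {k: v for k, v in doc.items() if k not in ("name", "value")}
    out[doc["name"]] = doc["value"]
    return out
```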
data is not stored in the format of [<metric_name>: <value>] in OpenSearch.
https://api.github.com/repos/opensearch-project/data-prepper/issues/4281/comments
6
2024-03-14T07:48:23Z
2024-08-19T20:23:01Z
https://github.com/opensearch-project/data-prepper/issues/4281
2,185,662,044
4,281
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a user of Data Prepper and tags within Data Prepper, I would like to explicitly add or remove tags from Events with conditional expressions **Describe the solution you'd like** A `tag_manager` processor that can explicitly add or remove tags with conditions ``` processor: - tag_manager: add_tags: [ "tag_one", "tag_two" ] add_when: "/key == null" remove_tags: [ "tag_three", "tag_four" ] remove_when: "/key == null" ``` **Additional context** Originally mentioned in https://github.com/opensearch-project/data-prepper/issues/629
Tag manager processor to add and remove tags manually
https://api.github.com/repos/opensearch-project/data-prepper/issues/4272/comments
1
2024-03-12T19:50:20Z
2024-05-16T15:43:53Z
https://github.com/opensearch-project/data-prepper/issues/4272
2,182,554,915
4,272
[ "opensearch-project", "data-prepper" ]
After creating the `2.7` branch, disable the `mongodb` source.
Disable the new mongodb source in 2.7
https://api.github.com/repos/opensearch-project/data-prepper/issues/4270/comments
1
2024-03-12T17:15:04Z
2024-03-22T22:09:18Z
https://github.com/opensearch-project/data-prepper/issues/4270
2,182,242,950
4,270
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** It's difficult to track bulk retry failures if Data Prepper is writing to multiple indexes. **Describe the solution you'd like** 1. Log the bulk failure the first time that there is a failure instead of only doing it every 5 times. 2. Update logging to show the OpenSearch index for which the bulk request failed. https://github.com/opensearch-project/data-prepper/blob/ed3f75ed3beedfb6d72af1f4b63ae0d085f72a0b/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java#L243-L244 **Describe alternatives you've considered (Optional)** N/A **Additional context** N/A
Improve OpenSearch bulk retry logging
https://api.github.com/repos/opensearch-project/data-prepper/issues/4268/comments
1
2024-03-12T16:15:36Z
2024-03-12T19:46:04Z
https://github.com/opensearch-project/data-prepper/issues/4268
2182082452
4268
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** At present, the DLQ only supports S3 sources. To my understanding possible sinks are currently: text files (`dlq_file`), S3 storage and opensearch. **Describe the solution you’d like** We would like to use a DLQ with Kafka as a source. Ideally, the DLQ sink would be Kafka as well, i.e., the entire failed message would be copied into a separate DLQ-Kafka topic. Alternatively, only the Key of the failed Kafka-message could be written to the sink (in which case it could also be written nicely to opensearch, `dlq_file` or S3 storage, even if the message is larger or has a broken json structure).
DLQ: Kafka sources and/or sinks
https://api.github.com/repos/opensearch-project/data-prepper/issues/4267/comments
1
2024-03-12T11:21:47Z
2024-03-12T19:42:58Z
https://github.com/opensearch-project/data-prepper/issues/4267
2181370762
4267
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a user of the DynamoDB source using a custom `document_id` or `document_version` from my DynamoDB items, I am not able to keep deletions on my DDB table in sync with OpenSearch, because Data Prepper only reads the `newImage` of DDB stream records, and does not write the old image **Describe the solution you'd like** A new parameter under the `stream` block of DDB source ``` source: dynamodb: stream: use_old_image_for_deletes: true ``` When this is set to true, Data Prepper would check stream records for the REMOVE action, and then it would construct the Event from the old image, rather than the new empty image. **Describe alternatives you've considered (Optional)** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
Support reading of old image for delete events on DynamoDB source
https://api.github.com/repos/opensearch-project/data-prepper/issues/4261/comments
1
2024-03-11T20:22:43Z
2024-03-19T17:35:31Z
https://github.com/opensearch-project/data-prepper/issues/4261
2180148979
4261
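The behavior requested above could look roughly like this. A hypothetical sketch: the record shape loosely follows DynamoDB Streams (`eventName`, `OldImage`, `NewImage`), and the `use_old_image_for_deletes` flag is the proposed option, not an existing one:

```python
# Sketch of the requested behavior: for REMOVE stream records, build the
# event from the old image so a delete can still resolve a custom
# document_id or document_version. Field names follow DynamoDB Streams.

def image_for_event(record, use_old_image_for_deletes=False):
    dynamodb = record["dynamodb"]
    if record["eventName"] == "REMOVE" and use_old_image_for_deletes:
        return dynamodb.get("OldImage", {})
    return dynamodb.get("NewImage", {})

remove_record = {
    "eventName": "REMOVE",
    # REMOVE records carry no NewImage, which is why deletes currently
    # produce an empty event.
    "dynamodb": {"OldImage": {"doc_id": {"S": "abc-123"}}},
}
```

With the flag off, the REMOVE record yields an empty image; with it on, the old image is available to construct the delete action.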
[ "opensearch-project", "data-prepper" ]
**Describe the bug** Unable to get data prepper to follow s3 sink configured thresholds. When setting the thresholds to very low values (1 or even 0) data is still written to s3. **To Reproduce** Configuration to reproduce: ``` test-pipeline: workers: 2 delay: "5000" source: kafka: bootstrap_servers: - localhost:29092 topics: - name: <test-topic> group_id: connect-<test-topic> encryption: type: none insecure: true sink: - s3: aws: region: us-east-1 sts_header_overrides: max_retries: 0 threshold: event_count: 1 maximum_size: 1mb event_collect_timeout: 15s bucket: <bucket name> object_key: path_prefix: testing-s3-data-loss-v2/ codec: ndjson: ``` **Expected behavior** Would like the pipeline to fail when s3 thresholds are exceeded. Or if this is not the designed behavior would appreciate clarification on what should be happening in this scenario. **Environment (please complete the following information):** - Data prepper: we are building on top of the opensource version [here](https://github.com/opensearch-project/data-prepper) pulled 2/6 **Additional Context** Have also tried setting `event_count`/`maximum_size` to 0 and see the same behavior.
S3 sink thresholds are not honored [BUG]
https://api.github.com/repos/opensearch-project/data-prepper/issues/4242/comments
5
2024-03-06T19:17:10Z
2024-03-06T20:05:30Z
https://github.com/opensearch-project/data-prepper/issues/4242
2172238408
4242
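One possible explanation, not confirmed in the record above: in buffering sinks, thresholds typically control when a buffered batch is flushed rather than whether data is accepted, so `event_count: 1` still writes (one event per object) instead of failing the pipeline. A minimal sketch of that flush-trigger pattern, with illustrative names:

```python
import time

# Sketch of typical flush-threshold logic in a buffering sink: reaching ANY
# threshold (count, size, or collect timeout) triggers a flush of the batch;
# nothing is rejected. All names here are illustrative.

def should_flush(event_count, byte_size, first_event_time, *,
                 max_events, max_bytes, collect_timeout_s, now=None):
    now = time.monotonic() if now is None else now
    return (event_count >= max_events
            or byte_size >= max_bytes
            or (first_event_time is not None
                and now - first_event_time >= collect_timeout_s))
```

Under this reading, `event_count: 1` makes every single event its own S3 object rather than suppressing writes.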
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** Currently, the Kafka source only supports the SASL/PLAIN authentication mechanism, but apparently not SASL/SCRAM-SHA-256 or SASL/SCRAM-SHA-512. **Describe the solution you'd like** Extend Data Prepper's authentication options to include mechanisms such as SCRAM-SHA-512. Example: ```yaml pipeline: name: kafka-pipeline source: kafka: bootstrap_servers: - 127.0.0.1:9093 topics: - name: topic1 group_id: groupID1 authentication: sasl: plaintext: mechanism: SCRAM-SHA-512 # or SCRAM-SHA-256 or plain username: your_kafka_username password: your_kafka_password ``` **Additional context** Many Kafka deployments rely on SCRAM mechanisms for improved security. Users who require SCRAM-SHA-512 authentication need this feature to seamlessly integrate Data Prepper into their existing Kafka infrastructure.
Kafka source: support SASL/SCRAM mechanisms
https://api.github.com/repos/opensearch-project/data-prepper/issues/4241/comments
6
2024-03-06T09:38:56Z
2024-09-17T19:44:11Z
https://github.com/opensearch-project/data-prepper/issues/4241
2171058785
4241
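A sketch of how the proposed YAML options could map onto standard Kafka client SASL properties. SCRAM reuses the PLAIN-style credentials with a different `sasl.mechanism` and login module; the mapping function itself is hypothetical:

```python
# Sketch: translate the proposed mechanism/username/password options into
# standard Kafka client SASL properties. Property names and login modules
# are the stock Apache Kafka ones; the function is illustrative.

SUPPORTED = {"PLAIN", "SCRAM-SHA-256", "SCRAM-SHA-512"}

def sasl_properties(mechanism, username, password):
    mechanism = mechanism.upper()
    if mechanism not in SUPPORTED:
        raise ValueError(f"unsupported SASL mechanism: {mechanism}")
    module = ("org.apache.kafka.common.security.plain.PlainLoginModule"
              if mechanism == "PLAIN"
              else "org.apache.kafka.common.security.scram.ScramLoginModule")
    return {
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanism": mechanism,
        "sasl.jaas.config": (
            f'{module} required username="{username}" password="{password}";'),
    }
```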
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a user of a pipeline with many grok processors and patterns, it is difficult for me to debug the performance of my grok processors. The only metric is the `grokProcessingTime`, and this is shared/aggregated between all grok processor instances. The only way to know which Events are spending a lot of time in grok is if the grok match times out and tags the event with `tags_on_timeout`. However, there can still be very slow patterns that do not match any pattern, and these can be optimized to improve performance. **Describe the solution you'd like** An option to create metadata on Events that contains important debug information related to grok matching for this Event. ``` - grok: include_performance_metadata: true // defaults to false match: log: - %{PATTERN_1} - %{PATTERN_2} ``` When the `include_performance_metadata` flag is set to true, the grok processor can add metadata fields to the Event. To start, these metadata fields can be ``` _total_grok_processing_time: 2500 // in milliseconds _total_grok_patterns_attempted: 10 // The number of individual patterns this Event attempted to match on ``` These same metadata fields will be shared between all grok processors. So given this configuration ``` - grok: include_performance_metadata: true match: log: - %{PATTERN_1} // mismatch after 1000 ms - %{PATTERN_2} // matches after 1000 ms - grok: include_performance_metadata: true match: log: - %{PATTERN_3} // mismatch after 1000 ms - %{PATTERN_4} // mismatch after 1000 ms ``` If an Event takes the path indicated by the comments, the end result of the metadata fields would be ``` _total_grok_processing_time: 4000 _total_grok_patterns_attempted: 4 ``` This metadata can then be used with the `getMetadata` function of Data Prepper expressions as needed (such as copying it over to the Event with `add_entries`) ``` - add_entries: entries: - add_when: 'getMetadata("_total_grok_processing_time") != null' key: "grok_processing_time" value_expression: 'getMetadata("_total_grok_processing_time")' ``` **Describe alternatives you've considered (Optional)** Add this metadata to Events by default without the need for configuring the `include_performance_metadata` parameter. While minimal, this change could add memory unnecessarily. Another alternative is to keep the parameter and default it to true, allowing users to disable it if desired. **Additional context** Add any other context or screenshots about the feature request here.
Add support for tracking performance of individual Events in the grok processor
https://api.github.com/repos/opensearch-project/data-prepper/issues/4196/comments
1
2024-02-28T17:52:47Z
2024-03-06T17:34:00Z
https://github.com/opensearch-project/data-prepper/issues/4196
2159511706
4196
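The accumulation described in the proposal above can be sketched in a few lines. This is an illustration of the proposed semantics, not the grok processor's actual code; the metadata keys come from the proposal:

```python
# Sketch of the proposed shared performance metadata: each grok processor
# adds its elapsed time and attempted-pattern count onto the same metadata
# keys, so totals accumulate across all processors in the pipeline.

def record_grok_attempt(metadata, elapsed_ms, patterns_attempted):
    metadata["_total_grok_processing_time"] = (
        metadata.get("_total_grok_processing_time", 0) + elapsed_ms)
    metadata["_total_grok_patterns_attempted"] = (
        metadata.get("_total_grok_patterns_attempted", 0) + patterns_attempted)
    return metadata

meta = {}
record_grok_attempt(meta, 2000, 2)  # first grok processor: one miss, one match
record_grok_attempt(meta, 2000, 2)  # second grok processor: two misses
```

This reproduces the worked example from the proposal: 4000 ms total across 4 attempted patterns.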
[ "opensearch-project", "data-prepper" ]
**Describe the bug** When configuring an OpenSearch sink with a user, the [permissions documented](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/opensearch_security.md) result in failure to write data to OpenSearch **To Reproduce** Configure pipeline sink: ``` entry-pipeline: buffer: bounded_blocking: batch_size: 160 buffer_size: 10240 delay: "100" sink: - pipeline: name: raw-pipeline raw-pipeline: buffer: bounded_blocking: batch_size: 160 buffer_size: 10240 processor: - otel_traces: trace_flush_interval: 1 sink: - opensearch: hosts: - https://vpc-opensearch-dev-nevn7hqedhhppf7rw4c674cbne.eu-west-1.es.amazonaws.com index_type: trace-analytics-raw password: REDACTED username: observability - stdout: null source: pipeline: name: entry-pipeline ``` Configure role: ![image](https://github.com/opensearch-project/data-prepper/assets/121832558/437a840c-958c-4b90-aaa3-5273dc120980) Map role to user: ![image](https://github.com/opensearch-project/data-prepper/assets/121832558/3ebba6ff-8f35-49f9-9416-43aa06a44fd5) **Expected behavior** Data prepper starts without error, listens on the appropriate port, and writes to OpenSearch **Screenshots** As above. [log gist](https://gist.github.com/arichtman-srt/2ca01c38de45776e87860f4653ef1911) **Environment (please complete the following information):** - AWS-managed OpenSearch: v2.9.0 - Data Prepper: v2.6.2 - AWS EKS: v1.29 **Additional context** When I map the same user to `all_access` role, Data Prepper behaves as expected, and is able to set up the pipeline, listens on the configured port, and successfully writes to OpenSearch. If I add index permissions of `indices_all` on pattern `*`, it still doesn't work. So it's maybe cluster level index permissions? I've set `JAVA_OPTS=-Dlog4j2.debug=true` so hopefully the logs show something useful, but not the failing API call that I can see.
[BUG] [Docs] OpenSearch sink documented permissions insufficient
https://api.github.com/repos/opensearch-project/data-prepper/issues/4194/comments
1
2024-02-28T07:25:15Z
2024-06-18T19:45:55Z
https://github.com/opensearch-project/data-prepper/issues/4194
2158289764
4194
[ "opensearch-project", "data-prepper" ]
See also: https://github.com/opensearch-project/opensearch-java/issues/473 Looks like we maybe fixed this in the java client, but not in the Python client? Or maybe this is a different code path? I haven't been able to diagnose exactly what's going on and where the failure is. Here's what's in CloudWatch Logs for the OpenSearch sink initialization ``` 2024-02-27T19:37:23.594 [log-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink with a retryable exception. org.opensearch.client.opensearch._types.OpenSearchException: Request failed: [security_exception] authentication/authorization failure at org.opensearch.client.transport.aws.AwsSdk2Transport.parseResponse(AwsSdk2Transport.java:473) ~[opensearch-java-2.8.1.jar:?] at org.opensearch.client.transport.aws.AwsSdk2Transport.executeSync(AwsSdk2Transport.java:392) ~[opensearch-java-2.8.1.jar:?] at org.opensearch.client.transport.aws.AwsSdk2Transport.performRequest(AwsSdk2Transport.java:192) ~[opensearch-java-2.8.1.jar:?] at org.opensearch.client.opensearch.indices.OpenSearchIndicesClient.exists(OpenSearchIndicesClient.java:507) ~[opensearch-java-2.8.1.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.index.NoIsmPolicyManagement.checkIfIndexExistsOnServer(NoIsmPolicyManagement.java:50) ~[opensearch-2.6.1.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndex(AbstractIndexManager.java:268) ~[opensearch-2.6.1.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.setupIndex(AbstractIndexManager.java:225) ~[opensearch-2.6.1.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitializeInternal(OpenSearchSink.java:231) ~[opensearch-2.6.1.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:193) ~[opensearch-2.6.1.jar:?] 
at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:52) ~[data-prepper-api-2.6.1.jar:?] at org.opensearch.dataprepper.pipeline.Pipeline.isReady(Pipeline.java:200) ~[data-prepper-core-2.6.1.jar:?] at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:252) ~[data-prepper-core-2.6.1.jar:?] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.base/java.lang.Thread.run(Thread.java:829) [?:?] ``` FWIW, my pipeline role has: ``` { "Action": [ "es:DescribeDomain", "es:*" ], "Resource": "arn:aws:es:us-west-2:OBSCURED:domain/OBSCURED", "Effect": "Allow" }, ```
[BUG] Unhelpful error message initializing OpenSearch Ingestion, OpenSearch sink
https://api.github.com/repos/opensearch-project/data-prepper/issues/4195/comments
16
2024-02-27T19:46:37Z
2024-03-12T16:50:28Z
https://github.com/opensearch-project/data-prepper/issues/4195
2159189978
4195
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a Data Prepper user, I'd like to use a custom index type in the opensearch sink to set up an index template when using a VPC collection as the sink. The current issue with this setup is that when the opensearch sink initializes, it tries to [set up the index and index template](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L231) before [creating or updating the serverless network policy](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L261), which results in 401 errors. The traceback looks like this: ``` 2024-03-07T21:04:55,273 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager - Index template vpc-aoss-test-000003-index-template does not exist and should be created 2024-03-07T21:04:55,285 [log-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink with a retryable exception. org.opensearch.client.opensearch._types.OpenSearchException: Request failed: [http_exception] server returned 401 at org.opensearch.client.transport.aws.AwsSdk2Transport.parseResponse(AwsSdk2Transport.java:494) ~[opensearch-java-2.8.1.jar:?] at org.opensearch.client.transport.aws.AwsSdk2Transport.executeSync(AwsSdk2Transport.java:392) ~[opensearch-java-2.8.1.jar:?] at org.opensearch.client.transport.aws.AwsSdk2Transport.performRequest(AwsSdk2Transport.java:192) ~[opensearch-java-2.8.1.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.index.ComposableTemplateAPIWrapper.putTemplate(ComposableTemplateAPIWrapper.java:34) ~[opensearch-2.7.0-SNAPSHOT.jar:?] 
at org.opensearch.dataprepper.plugins.sink.opensearch.index.ComposableIndexTemplateStrategy.createTemplate(ComposableIndexTemplateStrategy.java:47) ~[opensearch-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndexTemplate(AbstractIndexManager.java:258) ~[opensearch-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndexTemplate(AbstractIndexManager.java:234) ~[opensearch-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.setupIndex(AbstractIndexManager.java:224) ~[opensearch-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitializeInternal(OpenSearchSink.java:231) ~[opensearch-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:193) ~[opensearch-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:52) ~[data-prepper-api-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.pipeline.Pipeline.isReady(Pipeline.java:200) ~[data-prepper-core-2.7.0-SNAPSHOT.jar:?] at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:252) ~[data-prepper-core-2.7.0-SNAPSHOT.jar:?] ``` **Describe the solution you'd like** Support custom index type for private opensearch serverless collection as sink. This can probably be resolved by just moving network policy update code ahead of setting up index. **Describe alternatives you've considered (Optional)** N/A **Additional context** N/A
[BUG] Support custom index type for private opensearch serverless collection as sink
https://api.github.com/repos/opensearch-project/data-prepper/issues/4188/comments
0
2024-02-27T04:15:52Z
2024-03-08T16:02:52Z
https://github.com/opensearch-project/data-prepper/issues/4188
2155615163
4188
[ "opensearch-project", "data-prepper" ]
## CVE-2024-22201 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http2-common-11.0.18.jar</b></p></summary> <p></p> <p>Library home page: <a href="https://eclipse.dev/jetty">https://eclipse.dev/jetty</a></p> <p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty.http2/http2-common/11.0.18/c63d6f334b58e61b8e0c2d6bb769b1bd31ab736d/http2-common-11.0.18.jar</p> <p> Dependency Hierarchy: - jetty-bom-11.0.18.pom (Root Library) - :x: **http2-common-11.0.18.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/2f4c8c9c7f8d4ec6e76c3653ef8446fcee35cd50">2f4c8c9c7f8d4ec6e76c3653ef8446fcee35cd50</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Jetty is a Java based web server and servlet engine. An HTTP/2 SSL connection that is established and TCP congested will be leaked when it times out. An attacker can cause many connections to end up in this state, and the server may run out of file descriptors, eventually causing the server to stop accepting new connections from valid clients. The vulnerability is patched in 9.4.54, 10.0.20, 11.0.20, and 12.0.6. 
<p>Publish Date: 2024-02-26 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-22201>CVE-2024-22201</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jetty/jetty.project/security/advisories/GHSA-rggv-cv7r-mw98">https://github.com/jetty/jetty.project/security/advisories/GHSA-rggv-cv7r-mw98</a></p> <p>Release Date: 2024-02-26</p> <p>Fix Resolution: org.eclipse.jetty.http2:http2-common:9.4.54,10.0.20,11.0.20, org.eclipse.jetty.http2:jetty-http2-common:12.0.6, org.eclipse.jetty.http3:http3-common:10.0.20,11.0.20, org.eclipse.jetty.http3:jetty-http3-common:12.0.6</p> </p> </details> <p></p>
CVE-2024-22201 (High) detected in http2-common-11.0.18.jar - autoclosed
https://api.github.com/repos/opensearch-project/data-prepper/issues/4186/comments
1
2024-02-26T23:10:59Z
2024-03-07T16:02:16Z
https://github.com/opensearch-project/data-prepper/issues/4186
2155333673
4186
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** I'd like to be able to point existing OpenSearch clients that perform ingestion to Data Prepper instead. **Describe the solution you'd like** Create a new source - `opensearch_api` which supports the [Document APIs](https://opensearch.org/docs/latest/api-reference/document-apis/index/) that perform ingestion. Minimally, this can be: ``` source: opensearch_api: ``` Some more options: ``` source: opensearch_api: port: 9200 ssl: true ssl_certificate_file: /path/to/certificate ssl_key_file: /path/to/private.key thread_count: 200 path_prefix: opensearch/ authentication: http_basic: username: ingest_client password: ${{aws_secrets:ingest:password}} ``` **Additional context** This can start with only the `_bulk` API and this could supersede #248. Over time we can add the other endpoints as well. This would be available for `_bulk` using the path: ``` https://localhost:9200/_bulk ``` ## Tasks - [x] #248 - [ ] Support other ingestion APIs
Provide an OpenSearch API source
https://api.github.com/repos/opensearch-project/data-prepper/issues/4180/comments
2
2024-02-23T17:15:59Z
2024-10-11T16:54:07Z
https://github.com/opensearch-project/data-prepper/issues/4180
2151493861
4180
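The `_bulk` body the proposed `opensearch_api` source would accept is newline-delimited JSON, where an action line may be followed by a document line. A small sketch of that pairing (a stand-in parser, not the source's implementation; `delete` is the one action that carries no document):

```python
import json

# Sketch of parsing an OpenSearch _bulk body: NDJSON action lines, each
# followed by a document line except for delete actions.

def parse_bulk(body):
    lines = [json.loads(l) for l in body.strip().splitlines() if l.strip()]
    ops, i = [], 0
    while i < len(lines):
        action_type, meta = next(iter(lines[i].items()))
        if action_type == "delete":   # delete actions have no document line
            ops.append((action_type, meta, None))
            i += 1
        else:                         # index/create/update take a doc line
            ops.append((action_type, meta, lines[i + 1]))
            i += 2
    return ops

body = '{"index": {"_index": "logs", "_id": "1"}}\n{"message": "hello"}\n'
```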
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** These processors have similar overwrite_if_key_exists options: - add_entries - copy_values - rename_keys - parse_json - parse_ion - key_value grok processor has a keys_to_overwrite option so that you can determine to overwrite or append (default) for each grok generated field. There are 3 basic options when adding fields but the key already exists: overwrite, append or skip. We should consider making the options available and consistent throughout processors. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered (Optional)** A clear and concise description of any alternative solutions or features you've considered. **Additional context** See comments from https://github.com/opensearch-project/data-prepper/pull/4143
Consolidate processor configurations for actions when there are key conflicts
https://api.github.com/repos/opensearch-project/data-prepper/issues/4177/comments
0
2024-02-22T21:33:53Z
2024-02-27T20:39:03Z
https://github.com/opensearch-project/data-prepper/issues/4177
2149978243
4177
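The three key-conflict behaviors named above (overwrite, append, skip) could sit behind one shared option. A hypothetical sketch of such a unified policy, with illustrative names:

```python
# Sketch of a unified "key already exists" policy that the listed processors
# could share: overwrite the value, append into a list, or skip.

def set_key(event, key, value, on_conflict="skip"):
    if key not in event:
        event[key] = value
    elif on_conflict == "overwrite":
        event[key] = value
    elif on_conflict == "append":
        existing = event[key]
        event[key] = (existing if isinstance(existing, list) else [existing]) + [value]
    # on_conflict == "skip": leave the existing value untouched
    return event
```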
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** We would like to have a custom field created with the value of another field. For example, below is a sample doc: ``` "_source": { "name": "system.cpu.used.pct", "value": 0.767015 } ``` Here `name` is high-cardinality and can have hundreds of unique values, such as memory, disk, etc. **Describe the solution you'd like** As a solution, we would like a new field to be created where the key is the value of `name` and the value is the value of `value`. So the output would be: ``` "_source": { "system.cpu.used.pct": 0.767015, "name": "system.cpu.used.pct", "value": 0.767015 } ``` Additionally, it would be good to have an option to replace `.` with `_`. For example: ``` "_source": { "system.cpu.used.pct": 0.767015, "system_cpu_used_pct": 0.767015, "name": "system.cpu.used.pct", "value": 0.767015 } ```
Dynamic field creation using value of other field
https://api.github.com/repos/opensearch-project/data-prepper/issues/4176/comments
4
2024-02-22T20:58:49Z
2024-03-08T22:08:23Z
https://github.com/opensearch-project/data-prepper/issues/4176
2149925383
4176
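The requested transform is straightforward to sketch. Function and option names below are illustrative, not an existing processor API:

```python
# Sketch of the requested transform: promote the value of `name` to a new
# key on the document, with an optional underscore variant of that key.

def promote_name_value(doc, *, name_key="name", value_key="value",
                       add_underscore_variant=True):
    key = doc.get(name_key)
    if not isinstance(key, str):
        return doc  # nothing to promote
    doc[key] = doc[value_key]
    if add_underscore_variant:
        doc[key.replace(".", "_")] = doc[value_key]
    return doc

doc = promote_name_value({"name": "system.cpu.used.pct", "value": 0.767015})
```

This produces exactly the desired `_source` shape from the request, including the `system_cpu_used_pct` variant.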
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a user, I'd like to use the value of some other field to construct the key of a new field. For example, if I have an event {"param_name": "cpu", "param_value": 50} and `param_name` field is dynamic across events, but I'd like to add a new field with the actual param name as key and param value as value. Current processors only support literal strings as keys. **Describe the solution you'd like** In `add_entries` processor, similar to `format` and `value_expression` options for specifying values, add `format_key` or `key_expression` option for specifying keys. **Describe alternatives you've considered (Optional)** N/A **Additional context** N/A
Support adding event fields with dynamic keys
https://api.github.com/repos/opensearch-project/data-prepper/issues/4175/comments
3
2024-02-22T20:55:53Z
2024-02-23T23:30:51Z
https://github.com/opensearch-project/data-prepper/issues/4175
2149919423
4175
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** My organization is providing me access to an S3-compatible storage service (Ceph). I would like to be able to fetch and push logs to this storage service. Unfortunately, I do not see any option to pass a custom S3 endpoint to the current implementation of the S3 source and sink. **Describe the solution you'd like** I don't know how "compatible" the Ceph S3 implementation is with the original AWS one, but being able to provide a custom endpoint, key and key_id would be a good start. **Describe alternatives you've considered (Optional)** Currently, I don't see much of an alternative outside switching back to Logstash. **Additional context** It looks like in the OpenSearch ecosystem none of the S3-compatible services like Ceph and MinIO can be used. OpenSearch data sources rely on additional AWS solutions, and both the Data Prepper source and sink do too.
Allow S3 compatible sources and sinks
https://api.github.com/repos/opensearch-project/data-prepper/issues/4171/comments
1
2024-02-22T09:35:50Z
2025-02-03T16:58:45Z
https://github.com/opensearch-project/data-prepper/issues/4171
2148625250
4171
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** Pipeline users want to send events to AWS Lambda. **Describe the solution you'd like** Create a new sink in Data Prepper which outputs data to lambda using codec. It should support - Retries - Different codecs - Buffering capabilities - DLQ Without Batching: ``` lambda-pipeline: ... sink: - lambda: aws: region: us-east-1 sts_role_arn: <arn> sts_overrides: function_name: "uploadToS3Lambda" max_retries: 3 sync: False dlq: s3: bucket: test-bucket key_path_prefix: dlq/ ``` With Batching: ``` lambda-pipeline: ... sink: - lambda: aws: region: us-east-1 sts_role_arn: <arn> sts_overrides: function_name: "uploadToS3Lambda" max_retries: 3 batch: batch_key: "user_key" threshold: event_count: 3 maximum_size: 6mb event_collect_timeout: 15s sync: False dlq: s3: bucket: test-bucket key_path_prefix: dlq/ ``` **Additional context** [Add any other context or screenshots about the feature request here.](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-lambda.html)
Lambda as Sink
https://api.github.com/repos/opensearch-project/data-prepper/issues/4170/comments
0
2024-02-21T22:08:05Z
2024-10-07T14:05:19Z
https://github.com/opensearch-project/data-prepper/issues/4170
2147796155
4170
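The `batch` block in the proposal above (group by `batch_key`, cut a batch at `event_count`) can be sketched without any AWS calls. This is a hypothetical illustration of the grouping only; the size and timeout thresholds from the proposal are omitted for brevity:

```python
from collections import defaultdict

# Sketch of the proposed Lambda-sink batching: group events by batch_key
# and emit a batch once event_count is reached for that key. Remaining
# partial groups stay pending (they would flush on size/timeout thresholds).

def batch_events(events, *, batch_key, event_count):
    pending, ready = defaultdict(list), []
    for event in events:
        group = pending[event.get(batch_key)]
        group.append(event)
        if len(group) >= event_count:
            ready.append(list(group))  # snapshot the full batch
            group.clear()
    return ready, dict(pending)

events = [{"user_key": "a"}] * 3 + [{"user_key": "b"}]
ready, pending = batch_events(events, batch_key="user_key", event_count=3)
```

Each entry in `ready` would become one Lambda invocation payload; `pending` holds events still waiting on a threshold.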
[ "opensearch-project", "data-prepper" ]
The integration tests for Kafka running 2.8.1 fail consistently on GitHub. "Kafka plugin integration tests / integration-tests (11, 2.8.1) (pull_request_target) " Example run: https://github.com/opensearch-project/data-prepper/actions/runs/7976394878/job/21776875103?pr=4161
[BUG] Integration tests for Kafka 2.8.1 fail consistently
https://api.github.com/repos/opensearch-project/data-prepper/issues/4168/comments
0
2024-02-21T17:32:12Z
2024-02-27T20:40:19Z
https://github.com/opensearch-project/data-prepper/issues/4168
2147318911
4168
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a user who sends XML in specific fields in Events to my Data Prepper pipelines, I would like to be able to parse the xml **Describe the solution you'd like** A new processor to parse xml. **Describe alternatives you've considered (Optional)** A clear and concise description of any alternative solutions or features you've considered.
Support parsing of xml fields in Events
https://api.github.com/repos/opensearch-project/data-prepper/issues/4165/comments
2
2024-02-20T20:48:17Z
2024-03-07T15:48:40Z
https://github.com/opensearch-project/data-prepper/issues/4165
2145254632
4165
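What a parse-XML processor could do with a source field can be sketched with the standard library. Names (`parse_xml_field`, `source`, `destination`) are illustrative, mirroring the `parse_json` processor's options rather than any existing XML processor:

```python
import xml.etree.ElementTree as ET

# Sketch of a parse_xml processor: parse the XML string held in `source`
# and replace it with a dict of child tag -> text content.

def parse_xml_field(event, source, destination=None):
    root = ET.fromstring(event[source])
    event[destination or source] = {child.tag: child.text for child in root}
    return event

event = parse_xml_field(
    {"message": "<log><level>ERROR</level><code>42</code></log>"},
    source="message")
```

A real processor would also need `tags_on_failure` handling for malformed XML and a policy for nested or repeated elements; this sketch flattens only one level.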
[ "opensearch-project", "data-prepper" ]
## CVE-2024-25710 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>commons-compress-1.24.0.jar</b>, <b>commons-compress-1.22.jar</b></p></summary> <p> <details><summary><b>commons-compress-1.24.0.jar</b></p></summary> <p>Apache Commons Compress defines an API for working with compression and archive formats. These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4, Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p> <p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar</p>
scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/cach
es/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar</p> <p> Dependency Hierarchy: - :x: **commons-compress-1.24.0.jar** (Vulnerable Library) </details> <details><summary><b>commons-compress-1.22.jar</b></p></summary> <p>Apache Commons Compress software defines an API for working with compression and archive formats. These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4, Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p> <p>Path to dependency file: /data-prepper-plugins/parquet-codecs/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.22/691a8b4e6cf4248c3bc72c8b719337d5cb7359fa/commons-compress-1.22.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.22/691a8b4e6cf4248c3bc72c8b719337d5cb7359fa/commons-compress-1.22.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.22/691a8b4e6cf4248c3bc72c8b719337d5cb7359fa/commons-compress-1.22.jar</p> <p> Dependency Hierarchy: - kafka-schema-registry-client-7.4.0.jar (Root Library) - :x: **commons-compress-1.22.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/774fa213614252c4772b018731452f020cafa16a">774fa213614252c4772b018731452f020cafa16a</a></p> <p>Found in base 
branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Loop with Unreachable Exit Condition ('Infinite Loop') vulnerability in Apache Commons Compress. This issue affects Apache Commons Compress: from 1.3 through 1.25.0. Users are recommended to upgrade to version 1.26.0, which fixes the issue. <p>Publish Date: 2024-02-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-25710>CVE-2024-25710</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2024-25710">https://www.cve.org/CVERecord?id=CVE-2024-25710</a></p> <p>Release Date: 2024-02-19</p> <p>Fix Resolution: org.apache.commons:commons-compress:1.26.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
CVE-2024-25710 (High) detected in commons-compress-1.24.0.jar, commons-compress-1.22.jar
https://api.github.com/repos/opensearch-project/data-prepper/issues/4164/comments
0
2024-02-20T18:07:57Z
2024-02-28T21:06:23Z
https://github.com/opensearch-project/data-prepper/issues/4164
2,144,978,539
4,164
[ "opensearch-project", "data-prepper" ]
## CVE-2024-26308 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>commons-compress-1.24.0.jar</b>, <b>commons-compress-1.22.jar</b></p></summary> <p> <details><summary><b>commons-compress-1.24.0.jar</b></p></summary> <p>Apache Commons Compress defines an API for working with compression and archive formats. These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4, Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p> <p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.24.0/b4b1b5a3d9573b2970fddab236102c0a4d27d35e/commons-compress-1.24.0.jar</p> <p> Dependency Hierarchy: - :x: **commons-compress-1.24.0.jar** (Vulnerable Library) </details> <details><summary><b>commons-compress-1.22.jar</b></p></summary> <p>Apache Commons Compress software defines an API for working with compression and archive formats. These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4, Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p> <p>Path to dependency file: /data-prepper-plugins/parquet-codecs/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.22/691a8b4e6cf4248c3bc72c8b719337d5cb7359fa/commons-compress-1.22.jar</p> <p> Dependency Hierarchy: - kafka-schema-registry-client-7.4.0.jar (Root Library) - :x: **commons-compress-1.22.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/e1f316767d00b59c5369866b82cbbae76f6ba24b">e1f316767d00b59c5369866b82cbbae76f6ba24b</a></p> <p>Found in base
branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Allocation of Resources Without Limits or Throttling vulnerability in Apache Commons Compress. This issue affects Apache Commons Compress: from 1.21 before 1.26. Users are recommended to upgrade to version 1.26, which fixes the issue. <p>Publish Date: 2024-02-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-26308>CVE-2024-26308</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2024-26308">https://www.cve.org/CVERecord?id=CVE-2024-26308</a></p> <p>Release Date: 2024-02-19</p> <p>Fix Resolution: org.apache.commons:commons-compress:1.26.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
CVE-2024-26308 (Medium) detected in commons-compress-1.24.0.jar, commons-compress-1.22.jar
https://api.github.com/repos/opensearch-project/data-prepper/issues/4163/comments
0
2024-02-20T18:07:53Z
2024-02-28T21:06:22Z
https://github.com/opensearch-project/data-prepper/issues/4163
2,144,978,442
4,163
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** As a user of Data Prepper's DLQ, I would like to also get the metadata of Events in the case that I was using those values in my pipeline for the failed DLQ data. This would allow me to replay DLQ data in the exact same way as the previous pipeline attempted. **Describe the solution you'd like** A configurable option at the `dlq` level for `include_event_metadata`, which would default to false. ``` dlq: include_event_metadata: true s3: bucket: "dlq-bucket-ddb" key_path_prefix: "osi-test-dlq-pipeline/logs/dlq/%{yyyy}/%{MM}/%{dd}" region: "us-west-2" sts_role_arn: "arn:aws:iam::870201406020:role/osis-to-domain-ingestion-role" ``` When set to true, the DLQ documents will contain an extra field for `eventMetadata`, which contains the EventMetadata map. **Describe alternatives you've considered (Optional)** Default all DLQ documents to contain the Event metadata **Additional context** Add any other context or screenshots about the feature request here.
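The proposed behavior can be sketched in a few lines (illustrative Python only, not Data Prepper code; the `include_event_metadata` option name and the `eventMetadata` field come from the proposal above):

```python
# Hypothetical sketch of the proposed DLQ option, assuming the option name
# `include_event_metadata` and output field `eventMetadata` from this issue.

def build_dlq_document(event_data, event_metadata, include_event_metadata=False):
    """Build a DLQ document, optionally embedding the event's metadata map."""
    doc = dict(event_data)
    if include_event_metadata:
        doc["eventMetadata"] = dict(event_metadata)
    return doc
```

With the option off (the proposed default), DLQ documents are unchanged; with it on, the full metadata map rides along, which is what makes an exact replay possible.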
Have the option to write Event Metadata for failed documents to the DLQ
https://api.github.com/repos/opensearch-project/data-prepper/issues/4158/comments
0
2024-02-19T20:54:18Z
2024-02-19T22:10:49Z
https://github.com/opensearch-project/data-prepper/issues/4158
2,143,129,301
4,158
[ "opensearch-project", "data-prepper" ]
Please approve or deny the release of Data Prepper. **VERSION**: 2.6.2 **BUILD NUMBER**: 77 **RELEASE MAJOR TAG**: true **RELEASE LATEST TAG**: true Workflow is pending manual review. URL: https://github.com/opensearch-project/data-prepper/actions/runs/7963900111 Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh] Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel.
Manual approval required for workflow run 7963900111: Release Data Prepper : 2.6.2
https://api.github.com/repos/opensearch-project/data-prepper/issues/4154/comments
3
2024-02-19T18:56:49Z
2024-02-19T19:04:07Z
https://github.com/opensearch-project/data-prepper/issues/4154
2,142,979,905
4,154
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** S3 supports object checksums to verify the content sent to S3. With this, S3 will reject files that do not match the checksum. This can prevent invalid objects. Additionally, S3 Object Lock requires checksums. **Describe the solution you'd like** Provide a configuration to specify the checksums for objects uploaded. New configuration: `checksum` Valid values: `crc32`, `crc32c`, `sha1`, `sha256` ``` sink: - s3: bucket: mybucket checksum: crc32 compression: gzip object_key: path_prefix: logs/%{yyyy}/%{MM}/%{dd}/%{HH}/ codec: ndjson: ``` **Describe alternatives you've considered (Optional)** Data Prepper could support MD5 as well. We'd need to determine if the MD5 should be another option entirely or an additional entry in the list. This probably depends on whether the settings are mutually exclusive. Perhaps you can have both? However, just adding the current checksum should be sufficient. **Additional context** Current error when writing to S3 with object lock: ``` data-prepper | software.amazon.awssdk.services.s3.model.S3Exception: Content-MD5 OR x-amz-checksum- HTTP header is required for Put Object requests with Object Lock parameters (Service: S3, Status Code: 400, Request ID: XYZ, Extended Request ID: XYZ) ``` https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html#object-lock-put-object https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/s3-checksums.html
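For context on what each `checksum` value would entail: per the S3 object-integrity documentation, the `x-amz-checksum-*` headers carry the base64-encoded checksum bytes (big-endian for CRC32). A minimal sketch of the algorithms named in the proposed option, using only the Python standard library (so `crc32c` is omitted, as it needs a third-party library; the AWS SDK normally computes all of these itself):

```python
import base64
import hashlib
import zlib

# Sketch only: models the checksum values S3 verifies, assuming the
# base64/big-endian encoding described in the S3 object-integrity docs.
def s3_checksum(data: bytes, algorithm: str) -> str:
    if algorithm == "crc32":
        return base64.b64encode(zlib.crc32(data).to_bytes(4, "big")).decode()
    if algorithm == "sha1":
        return base64.b64encode(hashlib.sha1(data).digest()).decode()
    if algorithm == "sha256":
        return base64.b64encode(hashlib.sha256(data).digest()).decode()
    raise ValueError(f"unsupported algorithm: {algorithm}")
```

In practice the sink would not compute these by hand: the AWS SDK for Java lets the request builder select a `ChecksumAlgorithm`, so the new configuration would mostly map the `checksum` string onto that enum.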
Support object checksums in the S3 sink
https://api.github.com/repos/opensearch-project/data-prepper/issues/4146/comments
1
2024-02-18T03:01:04Z
2024-10-07T14:01:27Z
https://github.com/opensearch-project/data-prepper/issues/4146
2,140,710,935
4,146
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** Initially, I'd like to be able to test the circuit breaker behavior using manual controls. Additionally, there may be value in stopping a pipeline temporarily. **Describe the solution you'd like** Provide an HTTP endpoint for opening and closing a circuit breaker. First, it should be enabled in data-prepper-config.yaml: ``` circuit_breakers: http_forced: enabled: true ``` Then to open the circuit breaker: ``` curl -X POST http://localhost:4900/circuit_breaker/forced ``` Then to close the circuit breaker: ``` curl -X DELETE http://localhost:4900/circuit_breaker/forced ``` **Describe alternatives you've considered (Optional)** Provide a full override of all circuit breakers. But, this could result in running out of memory. I'd like to see if there is a better name. Right now, I'm calling it "forced," but I think we could do better. Another idea would be to set the circuit breaker with a timeout. I tend to think this might be a completely different circuit breaker so that the forced one is not interfered with.
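The intended semantics can be modeled with a tiny sketch (hypothetical names, not Data Prepper code): the POST/DELETE endpoints above would simply flip a flag, and the forced breaker augments, never overrides, the existing heap circuit breaker, so closing it cannot cause an out-of-memory condition:

```python
# Illustrative model of the proposed "forced" circuit breaker.
class ForcedCircuitBreaker:
    def __init__(self):
        self._open = False

    def open(self):       # POST /circuit_breaker/forced
        self._open = True

    def close(self):      # DELETE /circuit_breaker/forced
        self._open = False

    def is_open(self):
        return self._open


def should_block(forced, heap_breaker_open):
    """Buffers stop accepting data if ANY breaker is open."""
    return forced.is_open() or heap_breaker_open
```

This OR-composition is why the sketch treats "forced open" as safe but a "forced closed" override as out of scope, matching the alternative rejected above.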
Support an additional manual circuit breaker
https://api.github.com/repos/opensearch-project/data-prepper/issues/4145/comments
0
2024-02-17T16:17:51Z
2024-02-20T20:49:57Z
https://github.com/opensearch-project/data-prepper/issues/4145
2,140,249,313
4,145
[ "opensearch-project", "data-prepper" ]
**Describe the bug** When ingesting OpenTelemetry metrics into Data Prepper, certain fields from the OTel data model are not supported yet. When trying to ingest them (e.g. via grpcurl) I get an error returned. Below is an example for the scope attributes field: ``` Error invoking method "opentelemetry.proto.collector.metrics.v1.MetricsService/Export": error getting request data: message type opentelemetry.proto.common.v1.InstrumentationScope has no known field named attributes ``` Here is the first aspect of this issue: From the [OTLP JSON protobuf encoding](https://opentelemetry.io/docs/specs/otlp/#json-protobuf-encoding) doc page I get that the OTLP receiver should not reject those requests. > OTLP/JSON receivers MUST ignore message fields with unknown names and MUST unmarshal the message as if the unknown field was not present in the payload. This aligns with the behavior of the Binary Protobuf unmarshaler and ensures that adding new fields to OTLP messages does not break existing receivers. --- The second aspect of this issue deals with adding the unsupported fields. I collected all fields I was aware of: The following fields **will produce an error message** like above, hence are unsupported. 
- `attributes` within the `scope` field ([Link](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/common/v1/common.proto#L76-L79)) - `metadata` within the `metrics` ([Link](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/metrics/v1/metrics.proto#L192-L199)) - `droppedAttributesCount` within the `scope` field ([Link](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/common/v1/common.proto#L80)) - the `min` and `max` fields within the `dataPoints` for the histogram / exponential histogram kind ([Link Histogram](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/metrics/v1/metrics.proto#L468-L472), [Link Exponential Histogram](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/metrics/v1/metrics.proto#L576-L580)) - the `zeroThreshold` field within the `dataPoints` for the exponential histogram kind ([Link](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/metrics/v1/metrics.proto#L582-L588)) Next, the fields below **do not produce an error, but they are not included in the resulting documents** in the DataPrepper sink (stdout / opensearch). It could also be that this is on purpose, but I still wanted to point it out. 
- `droppedAttributesCount` within the `resource` field ([Link](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/resource/v1/resource.proto#L34-L36)) - `schemaUrl` within the `scopeMetrics` ([Link](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/metrics/v1/metrics.proto#L76-L80)) (Same for `scopeLogs`, `scopeSpans` and `resourceSpans`) - There is also the `schemaUrl` within the `resourceMetrics` ([Link](https://github.com/open-telemetry/opentelemetry-proto/blob/9d139c87b52669a3e2825b835dd828b57a455a55/opentelemetry/proto/metrics/v1/metrics.proto#L58-L63)) which is included in the resulting sink document. --- **To Reproduce** Steps to reproduce the behavior: - Run Data Prepper with an OTel metrics pipeline. Here is an example configuration file: ``` metrics-pipeline: workers: 4 source: otel_metrics_source: proto_reflection_service: true ssl: false buffer: bounded_blocking: buffer_size: 512 batch_size: 8 processor: - otel_metrics: sink: - stdout: ``` - Prepare the JSON payload. For example, here for an exponential histogram kind (I am choosing this kind because it can contain all the mentioned unsupported fields). I just put some example numbers and did not think too much about whether the values make sense. The main goal was to have a complete payload specifying every field there is in the [OpenTelemetry Proto Specifications](https://github.com/open-telemetry/opentelemetry-proto/tree/main/opentelemetry/proto). 
``` { "resourceMetrics": [ { "resource": { "attributes": [ { "key": "service.name", "value": { "stringValue": "basic-metric-service" } } ], "droppedAttributesCount": 5 }, "scopeMetrics": [ { "metrics": [ { "description": "Example of n exponential histogram", "name": "requests", "exponentialHistogram": { "aggregationTemporality": 2, "dataPoints": [ { "attributes": [ { "key": "datapoint.myattribute", "value": { "stringValue": "metricAttribute" } } ], "startTimeUnixNano": 1660736598000000000, "timeUnixNano": 1660736598000001000, "count": 4, "sum": 5, "scale": 5, "zeroCount": 2, "positive": { "offset": 3, "bucketCounts": [ 2, 2 ] }, "negative":{ "offset": 3, "bucketCounts": [ 2, 2 ] }, "exemplars": [ { "filteredAttributes": [ { "key": "datapoint.exemplarattribute", "value": { "stringValue": "exemplarAttributeValue" } } ], "timeUnixNano": 1660736598000001000, "asDouble": 1, "traceId": "428264014a59a9a29b7053279f687e9f", "spanId": "9bc01dfad9f631ff" } ], "flags": 1, "min": 7.6, "max": 9.6, "zeroThreshold": 99 } ] }, "unit": "1", "metadata": [ { "key": "myMetadata", "value": { "stringValue": "exampleValue" } } ] } ], "scope": { "name": "example-exporter-collector", "version": "MyVersion", "attributes": [ { "key": "myScopeAttribute", "value": { "stringValue": "my-scope-attribute-value" } } ], "droppedAttributesCount": 1 }, "schemaUrl": "https://opentelemetry.io/schemas/1.24.0" } ], "schemaUrl": "https://opentelemetry.io/schemas/1.24.0" } ] } ``` - Send this data to your Data Prepper endpoint. - e.g. using grpcurl: `grpcurl -insecure -d @ < otel-metric-exponential-histogram.json <data_prepper_endpoint>:<data_prepper_metrics_port> opentelemetry.proto.collector.metrics.v1.MetricsService/Export` - There will be errors like the one mentioned in the beginning. You can remove the fields which are producing the error from the payload until it works. Then you will get something like the following document in the stdout sink of Data Prepper. 
It misses the two fields mentioned in the second list above. ``` {"kind":"SUM","flags":1,"description":"Example of a Counter","serviceName":"basic-metric-service","schemaUrl":"https://opentelemetry.io/schemas/1.24.0","isMonotonic":true,"unit":"1","aggregationTemporality":"AGGREGATION_TEMPORALITY_CUMULATIVE","exemplars":[{"time":"2022-08-17T11:43:18.000001Z","value":1.0,"attributes":{"exemplar.attributes.datapoint@exemplarattribute":"exemplarAttributeValue"},"spanId":"f5b734d5d7da77d7fadf57df","traceId":"e36f36eb8d35e1ae7d6bd6b6f5bef4e77dbbf5febcedef5f"}],"name":"requests","startTime":"2022-08-17T11:43:18Z","time":"2022-08-17T11:43:18.000001Z","value":1.0,"instrumentationScope.name":"example-exporter-collector","metric.attributes.datapoint@myattribute":"metricAttribute","resource.attributes.service@name":"basic-metric-service", "instrumentationScope.version":"MyVersion"} ``` Note: When I try to send the metric via normal curl to an OTel collector first, which then sends the metric via gRPC to Data Prepper, I do not get the error messages like with grpcurl. However, the fields are also not contained in the resulting documents in the Data Prepper sink. I was using grpcurl to have the direct feedback of Data Prepper without some in-between component like the OTel collector. **Expected behavior** - If there are unsupported fields, the request should not fail. - Unsupported fields like the scope attributes should also be included in the resulting documents. **Screenshots** If applicable, add screenshots to help explain your problem. **Environment (please complete the following information):** - Data Prepper 2.6.1 - grpcurl 1.8.9 **Additional context** Add any other context about the problem here.
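The spec-mandated behavior quoted above (ignore unknown fields, never reject) can be illustrated with a small sketch. The field names and "lenient vs. strict" helpers below are hypothetical, purely to contrast the required behavior with what the grpcurl error shows happening today:

```python
# Illustrative only: models lenient (spec-compliant) vs. strict unmarshaling
# for one message type. "attributes" stands in for any not-yet-supported field.
KNOWN_SCOPE_FIELDS = {"name", "version"}

def parse_scope_lenient(scope: dict) -> dict:
    """Spec behavior: unmarshal as if unknown fields were absent."""
    return {k: v for k, v in scope.items() if k in KNOWN_SCOPE_FIELDS}

def parse_scope_strict(scope: dict) -> dict:
    """Observed behavior: reject the whole request on an unknown field."""
    unknown = sorted(set(scope) - KNOWN_SCOPE_FIELDS)
    if unknown:
        raise ValueError(f"no known field named {unknown[0]}")
    return dict(scope)
```

In protobuf terms, the lenient path corresponds to parsing OTLP/JSON with unknown-field tolerance enabled rather than failing the Export call.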
[BUG] Unsupported OTel metric, instrumentation scope and resource fields
https://api.github.com/repos/opensearch-project/data-prepper/issues/4137/comments
7
2024-02-16T14:48:44Z
2025-04-22T00:57:14Z
https://github.com/opensearch-project/data-prepper/issues/4137
2,138,776,609
4,137
[ "opensearch-project", "data-prepper" ]
**Is your feature request related to a problem? Please describe.** Append is a common operation for lists. As a Data Prepper user, I would like to be able to append values to a list in the event. **Describe the solution you'd like** `add_entries` processor can be enhanced to achieve this by adding an `append_if_key_exists` option: ``` - add_entries: entries: - key: "fruits" value: "banana" append_if_key_exists: true ``` Example1: Input: ```json {"fruits": "apple"} ``` output: ```json {"fruits": ["apple", "banana"]} ``` Example2: Input: ```json {"fruits": ["apple", "watermelon"]} ``` output: ```json {"fruits": ["apple", "watermelon", "banana"]} ``` **Describe alternatives you've considered (Optional)** A separate `list_append` processor. **Additional context** #3967
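The two examples above can be captured in a short sketch of the proposed semantics (illustrative Python, not Data Prepper's `add_entries` implementation; the option name comes from this proposal):

```python
# Hypothetical model of `add_entries` with `append_if_key_exists: true`.
def add_entry(event: dict, key: str, value, append_if_key_exists=False):
    if append_if_key_exists and key in event:
        existing = event[key]
        if isinstance(existing, list):
            existing.append(value)          # Example 2: extend the list
        else:
            event[key] = [existing, value]  # Example 1: promote scalar to list
    else:
        event[key] = value                  # key absent: behave as today
    return event
```

Note the scalar-to-list promotion in Example 1: it keeps the option usable on events where the field was ingested as a single value, at the cost of the field's type changing between documents.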
Append values to lists in an event
https://api.github.com/repos/opensearch-project/data-prepper/issues/4129/comments
2
2024-02-14T20:50:20Z
2024-02-20T17:34:14Z
https://github.com/opensearch-project/data-prepper/issues/4129
2,135,182,379
4,129