issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 262k ⌀ | issue_title stringlengths 1 1.02k | issue_comments_url stringlengths 53 116 | issue_comments_count int64 0 2.49k | issue_created_at stringdate 1999-03-17 02:06:42 2025-06-23 11:41:49 | issue_updated_at stringdate 2000-02-10 06:43:57 2025-06-23 11:43:00 | issue_html_url stringlengths 34 97 | issue_github_id int64 132 3.17B | issue_number int64 1 215k |
|---|---|---|---|---|---|---|---|---|---|
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
S3 Scan supports `start_time`, `end_time`, and `range` only at the global scan level, not for each individual bucket.
**Describe the solution you'd like**
Change the current configuration from this:
```
s3:
codec:
newline:
aws:
region: "us-west-2"
sts_role_arn: "arn:aws:iam::123456789012:role/s3-role"
scan:
start_time: 2023-01-21T18:00:00
end_time: 2023-08-21T18:00:00
buckets:
- bucket:
name: "my-bucket"
- bucket:
name: "other-bucket"
```
to this:
```
s3:
codec:
newline:
aws:
region: "us-west-2"
sts_role_arn: "arn:aws:iam::123456789012:role/s3-role"
scan:
buckets:
- bucket:
name: "my-bucket"
start_time: 2023-01-21T18:00:00
end_time: 2023-08-21T18:00:00
- bucket:
name: "other-bucket"
start_time: 2023-02-21T18:00:00
end_time: 2023-09-21T18:00:00
```
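A sketch of how per-bucket values could fall back to scan-level defaults (hypothetical class and field names, not the actual Data Prepper implementation):

```java
import java.time.LocalDateTime;

// Hypothetical sketch: resolve a bucket's effective scan window by falling
// back to the global scan-level values when the bucket does not set its own.
class ScanTimeRange {
    final LocalDateTime startTime;
    final LocalDateTime endTime;

    ScanTimeRange(LocalDateTime startTime, LocalDateTime endTime) {
        this.startTime = startTime;
        this.endTime = endTime;
    }

    static ScanTimeRange effectiveRange(ScanTimeRange global, ScanTimeRange bucket) {
        LocalDateTime start = bucket != null && bucket.startTime != null
                ? bucket.startTime : (global != null ? global.startTime : null);
        LocalDateTime end = bucket != null && bucket.endTime != null
                ? bucket.endTime : (global != null ? global.endTime : null);
        return new ScanTimeRange(start, end);
    }
}
```

This keeps the existing global-level configuration working unchanged while letting any bucket override part or all of the window.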
| S3 Scan Configuration Improvement | https://api.github.com/repos/opensearch-project/data-prepper/issues/2737/comments | 1 | 2023-05-23T15:09:43Z | 2023-06-26T16:11:42Z | https://github.com/opensearch-project/data-prepper/issues/2737 | 1,722,280,463 | 2,737 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Pull-based sources (S3, OpenSearch) collect data in a temporary buffer (`BufferAccumulator`) prior to writing the data to the Data Prepper buffer. This code is an exact copy in each source, which leaves us maintaining the same class in multiple places. We will likely need it again for other pull-based sources on the roadmap.
**Describe the solution you'd like**
A common class that can be leveraged in pull source plugins to temporarily hold data prior to writing to the buffer.
**Describe alternatives you've considered (Optional)**
- Writing to the buffer directly and eliminating this buffer.
**Additional context**
- https://github.com/opensearch-project/data-prepper/pull/2734
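As a sketch of what the shared class could look like (hypothetical names; the real `BufferAccumulator` API may differ):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of a shared accumulator: collects records up to a
// batch size, then hands the batch to a flush callback (e.g. a buffer write).
class BatchAccumulator<T> {
    private final int batchSize;
    private final Consumer<Collection<T>> flushAction;
    private final List<T> pending = new ArrayList<>();

    BatchAccumulator(int batchSize, Consumer<Collection<T>> flushAction) {
        this.batchSize = batchSize;
        this.flushAction = flushAction;
    }

    void add(T record) {
        pending.add(record);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    void flush() {
        if (!pending.isEmpty()) {
            flushAction.accept(new ArrayList<>(pending));
            pending.clear();
        }
    }
}
```

Each pull source would only supply the batch size and the flush action, removing the duplicated accumulation logic.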
| Common Buffer Accumulator | https://api.github.com/repos/opensearch-project/data-prepper/issues/2736/comments | 0 | 2023-05-23T15:08:14Z | 2023-05-23T16:50:50Z | https://github.com/opensearch-project/data-prepper/issues/2736 | 1,722,277,831 | 2,736 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The S3 Scan configuration requires both a start time and an end time in order to use the scan feature:
```
s3:
codec:
newline:
aws:
region: "us-west-2"
sts_role_arn: "arn:aws:iam::123456789012:role/s3-role"
scan:
start_time: 2023-01-21T18:00:00
end_time: 2023-08-21T18:00:00
buckets:
- bucket:
name: "my-bucket"
```
**Describe the solution you'd like**
By default, S3 scan should scan the entire bucket and not require `start_time` or `end_time`.
| S3 Scan Requires start and end time | https://api.github.com/repos/opensearch-project/data-prepper/issues/2735/comments | 2 | 2023-05-23T15:06:41Z | 2023-06-26T16:11:43Z | https://github.com/opensearch-project/data-prepper/issues/2735 | 1,722,275,117 | 2,735 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The new S3 sink should be able to notify the end-to-end acknowledgement framework that an event is complete.
**Describe the solution you'd like**
Notify the end-to-end acknowledgements when events are flushed to S3.
| Support end-to-end acknowledgements in the S3 sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/2732/comments | 0 | 2023-05-22T23:14:37Z | 2023-05-31T21:30:44Z | https://github.com/opensearch-project/data-prepper/issues/2732 | 1,720,785,623 | 2,732 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A Data Prepper administrator should be able to change the Log4j configuration file, but it is currently not possible to override its location.
**Describe the solution you'd like**
Allow setting the configuration file via `JAVA_OPTS`. See the following:
```
env JAVA_OPTS="-Dlog4j.configurationFile=custom/path/to/log4j2-rolling.properties" ./bin/data-prepper
```
Data Prepper already reads the `JAVA_OPTS` environment variable. However, setting `-Dlog4j.configurationFile` there has no effect because Data Prepper overrides that value.
**Describe alternatives you've considered (Optional)**
There could be a `data-prepper-config.yaml` configuration. However, this would come much later and isn't how the current Log4j configuration works.
| Support overriding the Log4j configuration file location | https://api.github.com/repos/opensearch-project/data-prepper/issues/2720/comments | 1 | 2023-05-19T19:03:29Z | 2023-05-31T21:29:27Z | https://github.com/opensearch-project/data-prepper/issues/2720 | 1,717,692,014 | 2,720 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Users of Data Prepper should be able to reference or key off elements (fields & metadata) within an event in a unified way through pipeline configurations. Currently, users can reference fields within an event like `${foo}`, while expressions reference fields via `/foo`. Additionally, there is no way to access metadata at this time.
**Describe the solution you'd like**
A unified standard for all plugins to follow and leverage for keying off of elements (fields & metadata) within pipeline configurations.
**Additional context**
I have also observed users attempting to access elements via `.foo`, even though this is not supported. There is likely an additional documentation gap.
| Unified mechanism to reference elements within an Event from the pipeline configuration | https://api.github.com/repos/opensearch-project/data-prepper/issues/2719/comments | 3 | 2023-05-19T18:23:28Z | 2023-05-19T19:24:06Z | https://github.com/opensearch-project/data-prepper/issues/2719 | 1,717,647,336 | 2,719 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Address comments in PR #2673:
https://github.com/opensearch-project/data-prepper/pull/2673
- Consumers
- Indentation issues
- Remove group_name
- Pipeline name - we can get that from PipelineDescription
- Broker URL – this is not working when the URL is changed to Confluent Cloud; requests time out.
- Schema Registry – unable to connect to the Schema Registry; 404 errors result in incorrect processing of Avro messages.
log:
2023-05-31T11:18:01,130 [log-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.plugins.kafka.source.KafkaSource - GET request failed while fetching the schema registry details : 404
2023-05-31T11:18:01,251 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - Submitting request to initiate the pipeline processing
So the consumed messages are:
Config@b030ee1 :: 1 executed on : 2023-05-31T11:18:01.251693
{"message":"\u0000\u0000\u0000\u0000\u0002\u0006id0\u0000\u0000\u0000\u0000\u0000@�@"}
{"message":"\u0000\u0000\u0000\u0000\u0002\u0006id1\u0000\u0000\u0000\u0000\u0000@�@"}
- `auth_type` – can you move this under `authentication`?
| [BUG] Address comments in PR #2673 | https://api.github.com/repos/opensearch-project/data-prepper/issues/2713/comments | 1 | 2023-05-18T15:39:23Z | 2023-06-28T11:10:58Z | https://github.com/opensearch-project/data-prepper/issues/2713 | 1,715,849,291 | 2,713 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The S3 Scan integration tests do not work.
**To Reproduce**
Run:
```
./gradlew :data-prepper-plugins:s3-source:integrationTest -Dtests.s3source.region=YOUR_REGION -Dtests.s3source.bucket=YOUR_BUCKET -Dtests.s3source.account=YOUR_ACCOUNT -Dtests.s3source.queue.url=YOUR_QUEUE_URL
```
See this error:
```
java.nio.file.NoSuchFileException: /full/path/to/opensearch/data-prepper/data-prepper-plugins/s3-source\src\main\resources\IntegrationTest.parquet
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
at java.base/java.nio.file.Files.newByteChannel(Files.java:371)
at java.base/java.nio.file.Files.newByteChannel(Files.java:422)
at java.base/java.nio.file.Files.readAllBytes(Files.java:3206)
at org.opensearch.dataprepper.plugins.source.ParquetRecordsGenerator.write(ParquetRecordsGenerator.java:28)
at org.opensearch.dataprepper.plugins.source.S3ObjectGenerator.writeToOutputStream(S3ObjectGenerator.java:55)
at org.opensearch.dataprepper.plugins.source.S3ObjectGenerator.write(S3ObjectGenerator.java:31)
at org.opensearch.dataprepper.plugins.source.S3ScanObjectWorkerIT.parseS3Object_parquet_correctly_with_bucket_scan_and_loads_data_into_Buffer(S3ScanObjectWorkerIT.java:179)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:139)
at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.lambda$execute$2(TestTemplateTestDescriptor.java:107)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:107)
at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:42)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.processAllTestClasses(JUnitPlatformTestClassProcessor.java:99)
at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.access$000(JUnitPlatformTestClassProcessor.java:79)
at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor.stop(JUnitPlatformTestClassProcessor.java:75)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:61)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy5.stop(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker$3.run(TestWorker.java:193)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.executeAndMaintainThreadName(TestWorker.java:129)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.execute(TestWorker.java:100)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.execute(TestWorker.java:60)
at org.gradle.process.internal.worker.child.ActionExecutionWorker.execute(ActionExecutionWorker.java:56)
at org.gradle.process.internal.worker.child.SystemApplicationClassLoaderWorker.call(SystemApplicationClassLoaderWorker.java:133)
at org.gradle.process.internal.worker.child.SystemApplicationClassLoaderWorker.call(SystemApplicationClassLoaderWorker.java:71)
at worker.org.gradle.process.internal.worker.GradleWorkerMain.run(GradleWorkerMain.java:69)
at worker.org.gradle.process.internal.worker.GradleWorkerMain.main(GradleWorkerMain.java:74)
```
**Expected behavior**
These tests should pass.
| [BUG] S3 Scan integration tests do not run | https://api.github.com/repos/opensearch-project/data-prepper/issues/2709/comments | 3 | 2023-05-17T22:57:02Z | 2023-07-24T18:18:30Z | https://github.com/opensearch-project/data-prepper/issues/2709 | 1,714,753,908 | 2,709 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
For pull-based sources that perform bulk reading, such as S3 scan or the OpenSearch source currently in PR: as a user, I would like a mechanism to track which data has been read and processed. This includes cases where data is dropped or a node in my Data Prepper cluster becomes unresponsive.
**Describe the solution you'd like**
An audit log comes to mind. This log would contain a list of data processing events related to documents, indices, or other metadata determined by the source. These logs could be used to determine the exact time frame in which a set of data was pulled into Data Prepper.
**Describe alternatives you've considered (Optional)**
* Metrics tracking the completion percentage for a scan
* Improving existing logs by adding an Audit tag to the message which tracks relevant data processing events.
* Not including audit logs. I am not sure if this makes sense in Data Prepper. Audit logging would be a new requirement that we may have to enforce on every plugin if we wanted to track data through a pipeline. | Audit Logs | https://api.github.com/repos/opensearch-project/data-prepper/issues/2705/comments | 1 | 2023-05-17T15:09:39Z | 2023-05-17T21:05:46Z | https://github.com/opensearch-project/data-prepper/issues/2705 | 1,714,126,576 | 2,705 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, `ExpressionEvaluator` is defined with a generic type, with the evaluator function returning different types (see below):
```
public interface ExpressionEvaluator<T> {
...
T evaluate(final String statement, final Event context);
}
```
But this is not correct, because it means the same statement needs to be evaluated multiple times to get the correct evaluation result, which is very inefficient.
**Describe the solution you'd like**
The solution is to evaluate the statement and return the result as an `Object`, then compare the returned type with the expected type as needed. One way to do this is to modify `ExpressionEvaluator` as follows:
```
public interface ExpressionEvaluator {
/**
* @since 1.3
* Parse and evaluate the statement string, resolving external references with the provided context. Return type is
* ambiguous until statement evaluation is complete
*
* @param statement string to be parsed and evaluated
* @param context event used to resolve external references in the statement
* @return result of statement evaluation
*/
Object evaluate(final String statement, final Event context);
default Boolean evaluateConditional(final String statement, final Event context) {
final Object result = evaluate(statement, context);
if (result instanceof Boolean) {
return (Boolean) result;
} else {
throw new ClassCastException("Unexpected expression return type of " + result.getClass());
}
}
}
```
This allows for a generic `evaluate` and a default implementation for conditional expressions.
Then provide a `GenericExpressionEvaluator` as the default implementation of `ExpressionEvaluator`:
```
class GenericExpressionEvaluator implements ExpressionEvaluator {
private final Parser<ParseTree> parser;
private final Evaluator<ParseTree, Event> evaluator;
@Inject
public GenericExpressionEvaluator(final Parser<ParseTree> parser, final Evaluator<ParseTree, Event> evaluator) {
this.parser = parser;
this.evaluator = evaluator;
}
/**
* {@inheritDoc}
*
* @throws ExpressionEvaluationException if unable to evaluate or coerce the statement result to type T
*/
@Override
public Object evaluate(final String statement, final Event context) {
try {
final ParseTree parseTree = parser.parse(statement);
return evaluator.evaluate(parseTree, context);
}
catch (final Exception exception) {
throw new ExpressionEvaluationException("Unable to evaluate statement \"" + statement + "\"", exception);
}
}
}
```
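The type check in the default `evaluateConditional` above can be demonstrated in isolation (a minimal sketch; `SimpleEvaluator` is a hypothetical stand-in for the real interface, which also takes an `Event` context):

```java
// Hypothetical stand-alone demo of the Object-returning evaluate() plus a
// type-checked conditional helper, mirroring the interface proposed above.
interface SimpleEvaluator {
    Object evaluate(String statement);

    default Boolean evaluateConditional(String statement) {
        Object result = evaluate(statement);
        if (result instanceof Boolean) {
            return (Boolean) result;
        }
        throw new ClassCastException("Unexpected expression return type of " + result.getClass());
    }
}
```

Callers that need a boolean get a clear `ClassCastException` when the statement evaluates to something else, while generic callers can inspect the raw `Object`.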
**Describe alternatives you've considered (Optional)**
An alternative way of doing this is to have a `ConditionalExpressionEvaluator` as a derived interface:
`interface ConditionalExpressionEvaluator extends ExpressionEvaluator<Boolean>`
and then have an implementation for this as
`class DefaultConditionalExpressionEvaluator implements ConditionalExpressionEvaluator`
This approach may reduce the number of code changes but it is not as clean as the above approach.
| DataPrepper ExpressionEvaluator should not be type specific | https://api.github.com/repos/opensearch-project/data-prepper/issues/2703/comments | 0 | 2023-05-16T23:36:21Z | 2023-05-19T17:02:58Z | https://github.com/opensearch-project/data-prepper/issues/2703 | 1,712,871,303 | 2,703 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The current `json` codec supports parsing JSON arrays. However, many users would like to parse ndjson-formatted files:
```
{"key": "value1"}
{"key": "value2"}
{"key": "value3"}
{"key": "value4"}
```
**Describe the solution you'd like**
Create a new codec: `ndjson`.
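A minimal sketch of the line framing such a codec would perform (hypothetical class name; a real codec would also decode each line as JSON):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an ndjson codec treats each non-blank line as one
// JSON document. Only the line framing is shown here, not JSON parsing.
class NdjsonFraming {
    static List<String> frame(String input) {
        List<String> documents = new ArrayList<>();
        for (String line : input.split("\n")) {
            if (!line.isBlank()) {
                documents.add(line);
            }
        }
        return documents;
    }
}
```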
**Describe alternatives you've considered (Optional)**
We can already read ndjson files using a combination of:
* Parsing with the newline codec.
* Using the `parse_json` processor.
| Support ndjson with a codec | https://api.github.com/repos/opensearch-project/data-prepper/issues/2700/comments | 1 | 2023-05-16T18:27:09Z | 2024-05-14T19:10:02Z | https://github.com/opensearch-project/data-prepper/issues/2700 | 1,712,525,544 | 2,700 |
[
"opensearch-project",
"data-prepper"
**Describe the bug**
The S3 source silently drops data when a file contains a single JSON element that is not wrapped in an array.
**To Reproduce**
Steps to reproduce the behavior:
1. Create an S3 pipeline with the json codec.
2. Upload a test file containing a single JSON object not wrapped in an array, e.g. `{"foo": "bar"}`
3. Verify in the logs that the S3 source read the data:
```
INFO org.opensearch.dataprepper.plugins.source.SqsWorker - Received 1 messages from SQS. Processing 1 messages.
INFO org.opensearch.dataprepper.plugins.source.S3ObjectWorker - Read S3 object: [bucketName=****, key=****.json]
INFO org.opensearch.dataprepper.plugins.source.SqsWorker - Deleted 1 messages from SQS. [****]
```
4. View the metrics. No data is written to OpenSearch or to the buffer, and the logs do not indicate that the data was dropped or is missing.
**Expected behavior**
An indication that there was a failure reading the data
| [BUG] S3 source silently drops data with files contain a single JSON element | https://api.github.com/repos/opensearch-project/data-prepper/issues/2699/comments | 9 | 2023-05-16T15:15:09Z | 2023-05-30T18:34:56Z | https://github.com/opensearch-project/data-prepper/issues/2699 | 1,712,235,395 | 2,699 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
There is a lot of activity in Data Prepper, and not all of it is ready for production. I'd like to be able to get these features out to users in an experimental form rather than trying to hide or remove them, or using feature branches.
**Describe the solution you'd like**
Introduce a concept of "experimental" features. These are not production-ready. They may change between minor versions.
This will mostly be a feature of documentation. However, we can also have an annotation to mark a plugin as experimental.
```
@DataPrepperPlugin(name="experimental_source")
@Experimental
public class ExperimentalSource implements Source
```
Some initial candidates:
* S3 scan
* RSS source
### Disabling experimental features
By default, experimental features are disabled so that they do not run. We could enable them using `data-prepper-config.yaml`.
The simplest option is to allow enabling all experimental features:
```
experimental:
enable_all: true
```
This structure could be expanded to allow enabling specific plugins.
```
experimental:
enable:
source:
- rss
- neptune
``` | Add experimental feature concept | https://api.github.com/repos/opensearch-project/data-prepper/issues/2695/comments | 0 | 2023-05-15T19:57:43Z | 2025-01-15T17:27:36Z | https://github.com/opensearch-project/data-prepper/issues/2695 | 1,710,734,602 | 2,695 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
DataPrepper expression fails to recognize some valid floating point numbers.
For example, `12345.678` is not recognized as a float.
**Expected behavior**
The expected behavior is to recognize it as a valid floating point number.
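For illustration, a float-literal pattern that does accept such numbers (an assumption for demonstration only; the actual DataPrepper expression grammar is defined in ANTLR, not as a regex):

```java
import java.util.regex.Pattern;

// Hypothetical sketch of a float-literal pattern that accepts a non-empty
// integer part before the decimal point (e.g. 12345.678) as well as .5.
class FloatLiteral {
    static final Pattern FLOAT = Pattern.compile("[0-9]+\\.[0-9]+|\\.[0-9]+");

    static boolean isFloat(String token) {
        return FLOAT.matcher(token).matches();
    }
}
```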
| DataPrepper expression fails to recognize some valid floating point numbers | https://api.github.com/repos/opensearch-project/data-prepper/issues/2691/comments | 0 | 2023-05-13T21:38:16Z | 2023-05-15T17:59:27Z | https://github.com/opensearch-project/data-prepper/issues/2691 | 1,708,747,510 | 2,691 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the `add_entries` processor adds new fields to an `Event`. We need the ability to add entries to an event's metadata.
**Describe the solution you'd like**
Suggest the following change to `add_entries` processor
```
processor:
- add_entries:
entries:
- metadata_key: "message_len"
value: 10
```
or
```
processor:
- add_entries:
entries:
- metadata_key: "message_len"
         value_expression: "length(/message)"
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add support for adding metadata entries as part of add_entries processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/2687/comments | 0 | 2023-05-12T21:34:40Z | 2023-05-19T17:39:02Z | https://github.com/opensearch-project/data-prepper/issues/2687 | 1,708,241,514 | 2,687 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Add support for expressions that return the String type. Also add string operations, such as string concatenation using `+` as the operator.
**Describe the solution you'd like**
Suggested grammar change for supporting String Expressions
```
stringExpression
: stringExpression PLUS stringExpression
| function
| JsonPointer
| String
```
This allows for following expressions
```
getMetadata("key") + "suffix"
/key + "suffix"
```
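Evaluating the `PLUS` operator over strings amounts to left-associative concatenation of the resolved operands. A minimal sketch, assuming operand resolution (JSON pointer lookup, function calls) has already happened:

```java
import java.util.List;

public class StringConcat {
    // Left-associative concatenation of already-resolved operands,
    // e.g. the resolved value of /key followed by the literal "suffix".
    public static String concat(List<Object> operands) {
        StringBuilder result = new StringBuilder();
        for (Object operand : operands) {
            result.append(String.valueOf(operand));
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat(List.of("value-of-key", "suffix"))); // value-of-keysuffix
    }
}
```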
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add support for String Expressions in DataPrepper Expression | https://api.github.com/repos/opensearch-project/data-prepper/issues/2686/comments | 2 | 2023-05-12T21:30:27Z | 2023-06-05T20:42:43Z | https://github.com/opensearch-project/data-prepper/issues/2686 | 1,708,235,485 | 2,686 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Add support for Arithmetic expressions in DataPrepper expression.
Some discussion is in previous RFC https://github.com/opensearch-project/data-prepper/issues/1005.
**Describe the solution you'd like**
Suggested grammar for supporting full Arithmetic expressions in DataPrepper expressions
```
arithmeticExpression
: arithmeticExpression (PLUS | MINUS) arithmeticTerm
| arithmeticTerm
;
arithmeticTerm
: arithmeticTerm (MULTIPLY | DIVIDE) arithmeticFactor
| arithmeticFactor
;
arithmeticFactor
: Function
| JsonPointer
| Integer
| Float
| LPAREN arithmeticExpression RPAREN
| SUBTRACT arithmeticFactor
;
```
This allows for the `+`, `-`, `*`, `/` arithmetic operators and also functions as operands.
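For illustration, a hand-written recursive-descent evaluator mirroring this grammar's precedence (terms bind tighter than expressions, unary minus applies to factors) could look like the sketch below. It handles only numeric literals; the real implementation would also resolve functions and JSON pointers:

```java
public class ArithmeticEvaluator {
    private final String input;
    private int pos;

    private ArithmeticEvaluator(String input) {
        this.input = input.replaceAll("\\s+", "");
    }

    public static double eval(String expression) {
        ArithmeticEvaluator e = new ArithmeticEvaluator(expression);
        double value = e.expression();
        if (e.pos != e.input.length()) throw new IllegalArgumentException("trailing input");
        return value;
    }

    // arithmeticExpression : term ((PLUS | MINUS) term)*
    private double expression() {
        double value = term();
        while (pos < input.length() && (peek() == '+' || peek() == '-')) {
            char op = input.charAt(pos++);
            double rhs = term();
            value = (op == '+') ? value + rhs : value - rhs;
        }
        return value;
    }

    // arithmeticTerm : factor ((MULTIPLY | DIVIDE) factor)*
    private double term() {
        double value = factor();
        while (pos < input.length() && (peek() == '*' || peek() == '/')) {
            char op = input.charAt(pos++);
            double rhs = factor();
            value = (op == '*') ? value * rhs : value / rhs;
        }
        return value;
    }

    // arithmeticFactor : number | LPAREN expression RPAREN | unary minus
    private double factor() {
        if (peek() == '-') { pos++; return -factor(); }
        if (peek() == '(') {
            pos++;
            double value = expression();
            pos++; // consume ')'
            return value;
        }
        int start = pos;
        while (pos < input.length() && (Character.isDigit(peek()) || peek() == '.')) pos++;
        return Double.parseDouble(input.substring(start, pos));
    }

    private char peek() { return input.charAt(pos); }

    public static void main(String[] args) {
        System.out.println(eval("1 + 2 * 3"));    // 7.0
        System.out.println(eval("-(2 + 3) * 2")); // -10.0
    }
}
```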
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add support for Arithmetic expressions in DataPrepper expression | https://api.github.com/repos/opensearch-project/data-prepper/issues/2685/comments | 1 | 2023-05-12T21:27:14Z | 2023-05-26T00:37:22Z | https://github.com/opensearch-project/data-prepper/issues/2685 | 1,708,231,405 | 2,685 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Pipeline users want to read data from an SQS queue.
**Describe the solution you'd like**
Create an SQS Source plugin which will read all events from the configured SQS queue URLs with the provided batch size. This plugin should be able to accept AWS keys and work with AWS SigV4 authentication. The plugin should have:
- A back-off mechanism, which could kick in, say, when the number of messages per batch request reaches the batch size mentioned in the configuration.
- The capability to spin off a configurable number of threads.
- The sqs:DeleteMessageBatch, sqs:DeleteMessage, and sqs:ReceiveMessage IAM privileges associated with the instance profile or AWS access key ID.
- Multi-node support.
- End-to-end acknowledgments.
```
log-pipeline:
source:
sqs:
queues:
- urls: ['https://sqs.us-east-1.amazonaws.com/myQueue/dev', 'https://sqs.us-east-1.amazonaws.com/myQueue2/dev']
polling_frequency: 5m
batch_size: 10
number_of_threads: 2
- urls: ['https://sqs.us-east-1.amazonaws.com/myQueue/dev2']
polling_frequency: 1m
batch_size: 10
number_of_threads: 3
aws:
access_key_id:
role_arn:
region:
secret_access_key:
session_token:
```
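The back-off bullet above could be realized as a small policy object that grows the wait between polls while a trigger condition holds and resets otherwise. A sketch, with the trigger condition itself (e.g. a full batch) left to the caller:

```java
public class PollBackoff {
    private final long baseDelayMillis;
    private final long maxDelayMillis;
    private long currentDelayMillis;

    public PollBackoff(long baseDelayMillis, long maxDelayMillis) {
        this.baseDelayMillis = baseDelayMillis;
        this.maxDelayMillis = maxDelayMillis;
        this.currentDelayMillis = baseDelayMillis;
    }

    // Called after each ReceiveMessage batch; returns the delay before the next poll.
    public long nextDelayMillis(boolean backoffConditionMet) {
        if (backoffConditionMet) {
            currentDelayMillis = Math.min(currentDelayMillis * 2, maxDelayMillis);
        } else {
            currentDelayMillis = baseDelayMillis;
        }
        return currentDelayMillis;
    }

    public static void main(String[] args) {
        PollBackoff backoff = new PollBackoff(100, 1000);
        System.out.println(backoff.nextDelayMillis(true));  // 200
        System.out.println(backoff.nextDelayMillis(true));  // 400
        System.out.println(backoff.nextDelayMillis(false)); // 100 (reset)
    }
}
```

The cap would naturally come from the configured `polling_frequency`.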
**Additional context**
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-sqs.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
https://github.com/plumbee/flume-sqs-source
| SQS Source / Input plugin | https://api.github.com/repos/opensearch-project/data-prepper/issues/2679/comments | 6 | 2023-05-11T15:14:39Z | 2023-06-28T15:02:27Z | https://github.com/opensearch-project/data-prepper/issues/2679 | 1,706,030,208 | 2,679 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper supports end-to-end acknowledgments, but sinks must honor these.
The new S3 sink being worked by #1048 should acknowledge when events are sent to S3.
**Describe the solution you'd like**
Implement end-to-end acknowledgements for the S3 sink.
| Support end-to-end acknowledgments in the S3 sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/2674/comments | 0 | 2023-05-10T15:18:39Z | 2023-05-10T15:24:09Z | https://github.com/opensearch-project/data-prepper/issues/2674 | 1,704,167,023 | 2,674 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `add_entries` processor should support adding entries based on expressions, especially expressions with functions.
For example,
```
processor:
- add_entries:
entries:
- key: "requestLength"
- value: "length(/request)"
```
or
```
processor:
- add_entries:
entries:
- key: "newKey"
- value: "getMetadata('newKey')"
```
or
```
processor:
- add_entries:
entries:
- key: "newLength"
- value: "length(/request) + getMetadata('padding_length')"
```
**Describe the solution you'd like**
We need to support Data Prepper expressions as values. They may be supported under the existing `value:` config option, or via a new config option like `value_expression:` which takes an expression, evaluates it, and stores the result as the value for the given key.
```
processor:
- add_entries:
entries:
- key: "newLength"
- value_expression: "length(/request) + getMetadata('padding_length')"
```
**Describe alternatives you've considered (Optional)**
An alternative is to have a special character as the first character in the value to indicate that what follows is an expression. For example:
```
processor:
- add_entries:
entries:
- key: "newLength"
- value: "@(length(/request) + getMetadata('padding_length'))"
```
**Additional context**
Add any other context or screenshots about the feature request here.
| Add support for expressions in add_entries processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/2672/comments | 1 | 2023-05-10T04:55:18Z | 2023-05-23T20:31:03Z | https://github.com/opensearch-project/data-prepper/issues/2672 | 1,703,123,279 | 2,672 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2021-22096 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-core-5.3.0.jar</b></p></summary>
<p>Spring Core</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/5.3.0/b6dda23ac18fa6db58093638cfc7a62f1c50b808/spring-core-5.3.0.jar</p>
<p>
Dependency Hierarchy:
- spring-test-5.3.0.jar (Root Library)
- :x: **spring-core-5.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
<p>Publish Date: 2021-10-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-22096>CVE-2021-22096</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22096">https://tanzu.vmware.com/security/cve-2021-22096</a></p>
<p>Release Date: 2021-10-28</p>
<p>Fix Resolution (org.springframework:spring-core): 5.3.12</p>
<p>Direct dependency fix Resolution (org.springframework:spring-test): 5.3.12</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2021-22096 (Medium) detected in spring-core-5.3.0.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/2671/comments | 0 | 2023-05-09T21:15:58Z | 2023-05-22T17:34:54Z | https://github.com/opensearch-project/data-prepper/issues/2671 | 1,702,767,281 | 2,671 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2021-22060 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-core-5.3.0.jar</b></p></summary>
<p>Spring Core</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/5.3.0/b6dda23ac18fa6db58093638cfc7a62f1c50b808/spring-core-5.3.0.jar</p>
<p>
Dependency Hierarchy:
- spring-test-5.3.0.jar (Root Library)
- :x: **spring-core-5.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.13, 5.2.0 - 5.2.18, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries. This is a follow-up to CVE-2021-22096 that protects against additional types of input and in more places of the Spring Framework codebase.
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-22060>CVE-2021-22060</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-6gf2-pvqw-37ph">https://github.com/advisories/GHSA-6gf2-pvqw-37ph</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution (org.springframework:spring-core): 5.3.14</p>
<p>Direct dependency fix Resolution (org.springframework:spring-test): 5.3.14</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2021-22060 (Medium) detected in spring-core-5.3.0.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/2670/comments | 0 | 2023-05-09T21:15:56Z | 2023-05-22T17:34:53Z | https://github.com/opensearch-project/data-prepper/issues/2670 | 1,702,767,227 | 2,670 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Change JsonStringBuilder in JacksonEvent to be non static for ease-of-use
**Describe the solution you'd like**
From an ease-of-use perspective, it is easier to write
`event.jsonBuilder().includeTags("tags").toJsonString()`
instead of current way of
`JacksonEvent.jsonBuilder().withEvent(event).includeTags("tags").toJsonString()`
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Event Tags : Change JsonStringBuilder in JacksonEvent to be non static for ease-of-use | https://api.github.com/repos/opensearch-project/data-prepper/issues/2665/comments | 0 | 2023-05-09T15:19:44Z | 2023-05-09T22:45:22Z | https://github.com/opensearch-project/data-prepper/issues/2665 | 1,702,258,848 | 2,665 |
[
"opensearch-project",
"data-prepper"
] | ## Background
Data Prepper currently has a `stdout` sink, which is quite rudimentary. It lacks a few things that would be helpful for a more robust solution.
* The output is not formatted like Data Prepper log lines and thus can be hard to grok.
* You cannot tell the difference between output from different pipelines.
* It includes everything.
## Solution
Provide a `log` sink. This `log` sink should write events to the SLF4J logger (which uses Log4j in the end). It should write events with an `INFO` log level to a dynamic logger.
```
2023-05-09T13:39:17,399 [raw-pipeline-sink-worker-6-thread-1] INFO org.opensearch.dataprepper.events.raw-pipeline - {"traceId":"00000000000000007798e1c61318236b","droppedLinksCount":0,"kind":"SPAN_KIND_UNSPECIFIED","droppedEventsCount":0,"traceGroupFields":{"endTime":"2023-05-09T13:47:23.527907Z","durationInNanos":1249634000,"statusCode":0},"traceGroup":"HTTP GET /dispatch","serviceName":"frontend","parentSpanId":"7798e1c61318236b","spanId":"636334d5cf67afbf","traceState":"","name":"HTTP GET: /route","startTime":"2023-05-09T13:47:23.430116Z","links":[],"endTime":"2023-05-09T13:47:23.471633Z","droppedAttributesCount":0,"durationInNanos":41517000,"events":[],"resource.attributes.client-uuid":"2c6596275f2b7601","resource.attributes.ip":"172.29.0.5","resource.attributes.host@name":"d93e9be7cf2d","resource.attributes.opencensus@exporterversion":"Jaeger-Go-2.29.1","resource.attributes.service@name":"frontend","status.code":0}
```
I propose the following logger:
```
org.opensearch.dataprepper.events.${name}
```
The value of `${name}` can be configured by the pipeline author via the `name` configuration. If not specified, it will default to the pipeline name. By putting all the logs under the `org.opensearch.dataprepper.events` group, the loggers can be highly configured. Additionally, I think we should allow dots in the `name` so that users could configure sub-groups if they desire.
The logger should include a simple sampler. One approach would be to allow the user to configure the number of events (N) allowed in any given period of time. The sampler would output the first N events in each time period.
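That sampler could be a small stateful gate keyed on wall-clock periods. A sketch, with timestamps injected so it stays testable:

```java
public class FirstNSampler {
    private final long periodMillis;
    private final int countPerPeriod;
    private boolean initialized;
    private long periodStartMillis;
    private int emittedInPeriod;

    public FirstNSampler(long periodMillis, int countPerPeriod) {
        this.periodMillis = periodMillis;
        this.countPerPeriod = countPerPeriod;
    }

    // Returns true if the event arriving at nowMillis should be logged.
    public boolean allow(long nowMillis) {
        if (!initialized || nowMillis - periodStartMillis >= periodMillis) {
            initialized = true;
            periodStartMillis = nowMillis;
            emittedInPeriod = 0;
        }
        return emittedInPeriod++ < countPerPeriod;
    }

    public static void main(String[] args) {
        FirstNSampler sampler = new FirstNSampler(1000, 2);
        System.out.println(sampler.allow(0));    // true  (1st in period)
        System.out.println(sampler.allow(10));   // true  (2nd in period)
        System.out.println(sampler.allow(20));   // false (over the count)
        System.out.println(sampler.allow(1500)); // true  (new period)
    }
}
```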
Parameters:
* `name` - Configure the logger name to differentiate pipelines.
* `sampling` - Provide configurations for sampling events.
  * `period` - A duration for the sampling time.
  * `count` - The number of events for the sampling time.
| Log sink for logging events in the normal logger | https://api.github.com/repos/opensearch-project/data-prepper/issues/2662/comments | 0 | 2023-05-09T13:55:38Z | 2023-05-12T18:49:34Z | https://github.com/opensearch-project/data-prepper/issues/2662 | 1,702,093,393 | 2,662 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Running Data Prepper with a bad username/password combination yields the following error.
```
2023-05-08T19:29:16,393 [main] ERROR org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink.
2023-05-08T19:29:16,393 [Thread-2] ERROR org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink.
Exception in thread "main" jakarta.json.stream.JsonParsingException: Jackson exception: Unrecognized token 'Unauthorized': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (ByteArrayInputStream); line: 1, column: 13]
at org.opensearch.client.json.jackson.JacksonJsonpParser.convertException(JacksonJsonpParser.java:97)
at org.opensearch.client.json.jackson.JacksonJsonpParser.fetchNextToken(JacksonJsonpParser.java:104)
at org.opensearch.client.json.jackson.JacksonJsonpParser.next(JacksonJsonpParser.java:131)
at org.opensearch.client.json.JsonpDeserializer.deserialize(JsonpDeserializer.java:82)
at org.opensearch.client.json.ObjectBuilderDeserializer.deserialize(ObjectBuilderDeserializer.java:92)
at org.opensearch.client.json.DelegatingDeserializer$SameType.deserialize(DelegatingDeserializer.java:56)
at org.opensearch.client.transport.rest_client.RestClientTransport.getHighLevelResponse(RestClientTransport.java:271)
at org.opensearch.client.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:143)
at org.opensearch.client.opensearch.cluster.OpenSearchClusterClient.getSettings(OpenSearchClusterClient.java:282)
at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkISMEnabled(AbstractIndexManager.java:187)
at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.checkAndCreateIndexTemplate(AbstractIndexManager.java:208)
at org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager.setupIndex(AbstractIndexManager.java:203)
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitializeInternal(OpenSearchSink.java:174)
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:139)
at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:39)
at org.opensearch.dataprepper.pipeline.Pipeline.isReady(Pipeline.java:194)
at org.opensearch.dataprepper.DataPrepper.execute(DataPrepper.java:93)
at org.opensearch.dataprepper.DataPrepperExecute.main(DataPrepperExecute.java:42)
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'Unauthorized': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (ByteArrayInputStream); line: 1, column: 13]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:2418)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:759)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3693)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2781)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:907)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:793)
at org.opensearch.client.json.jackson.JacksonJsonpParser.fetchNextToken(JacksonJsonpParser.java:102)
... 16 more
```
**To Reproduce**
Steps to reproduce the behavior:
1. Change the `examples/trace_analytics_no_ssl_2x.yml` file by giving `badpassword` in the `password` fields.
2. Run the Jaeger Hotrod example
3. Check Data Prepper logs (`docker logs -f data-prepper`)
4. See error
**Expected behavior**
Ideally, the error from OpenSearch should be visible.
Additionally, I'd like to see the username I provided. For example: "Unable to authenticate with username `admin`. Check your username and password."
**Additional context**
This is similar to #2655, but for open-source OpenSearch domains.
| [BUG] Failing to connect to an OpenSearch cluster with a bad username/password gives unhelpful error | https://api.github.com/repos/opensearch-project/data-prepper/issues/2657/comments | 2 | 2023-05-08T19:33:22Z | 2023-06-05T15:16:47Z | https://github.com/opensearch-project/data-prepper/issues/2657 | 1,700,810,346 | 2,657 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Pipeline creation should succeed even when sinks are not ready.
Currently, DataPrepper waits up to 10 minutes (hard-coded) for all sinks to be ready, and if all sinks are not ready in that time, pipeline creation fails.
We should allow pipeline creation to succeed and let the sinks get initialized later. The source should wait for the sinks to be ready before starting, which means the ingestion pipeline does not start accepting any input until all sinks are ready.
**Describe the solution you'd like**
Solution:
1. Remove the current wait loop in the DataPrepper `execute()` that waits for all sinks to get ready
2. Add a check in the Pipeline `execute()` that waits for the sinks' readiness asynchronously, using `sinkExecutorService.submit(() -> { while (!isReady()) {...}; source.start(); })`
3. The Pipeline `execute()` continues to initialize processors and sinks. And this happens for all pipelines
4. When a pipeline's source starts after all of its sinks are ready, it should set itself as "ready" if the source is a PipelineConnector type. This way the sinks of the parent pipeline become ready, making the entire pipeline chain ready in reverse order.
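Step 2 above can be sketched as a small helper that polls sink readiness off the pipeline-creation thread and only then starts the source. The `Sink` and `Source` interfaces below are stand-ins for the real Data Prepper types:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class DeferredSourceStart {
    interface Sink { boolean isReady(); }
    interface Source { void start(); }

    // Poll sink readiness asynchronously; start the source only when all sinks are ready.
    public static void startWhenReady(ExecutorService executor, List<Sink> sinks, Source source) {
        executor.submit(() -> {
            while (!sinks.stream().allMatch(Sink::isReady)) {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            source.start();
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        AtomicBoolean sinkReady = new AtomicBoolean(false);
        CountDownLatch sourceStarted = new CountDownLatch(1);

        Sink sink = sinkReady::get;
        Source source = sourceStarted::countDown;
        startWhenReady(executor, List.of(sink), source);

        sinkReady.set(true); // the sink initializes later; only then does the source start
        System.out.println(sourceStarted.await(5, TimeUnit.SECONDS)); // true
        executor.shutdown();
    }
}
```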
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Pipeline creation failures due to sink config issues is the main driving factor for this feature request.
| Pipeline creation should succeed even when sinks are not ready | https://api.github.com/repos/opensearch-project/data-prepper/issues/2656/comments | 2 | 2023-05-08T18:19:42Z | 2023-05-22T17:03:19Z | https://github.com/opensearch-project/data-prepper/issues/2656 | 1,700,707,220 | 2,656 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper is showing an error that looks like the following:
```
[security_exception] authentication/authorization failure
```
**Expected behavior**
Data Prepper used to have a clearer error:
```
[security_exception] no permissions for [indices:admin/get] and User [name=arn:aws:iam::123456789012:role/FullAccess, backend_roles=[arn:aws:iam::123456789012:role/FullAccess], requestedTenant=null]
```
Additionally, the OpenSearch username or STS role should always be included in the response.
For example:
> The role "arn:aws:iam::123456789012:role/FullAccess" was unable to authenticate with the OpenSearch domain. Please check your permissions. | [BUG] Unhelpful error message when failing to authenticate with Amazon OpenSearch Service | https://api.github.com/repos/opensearch-project/data-prepper/issues/2655/comments | 4 | 2023-05-08T15:27:29Z | 2023-06-05T15:16:47Z | https://github.com/opensearch-project/data-prepper/issues/2655 | 1,700,472,256 | 2,655 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When running multiple workers, there is a bug which occurs occasionally in 2.2.0.
```
2023-05-05T17:08:33,800 [raw-pipeline-sink-worker-6-thread-3] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [raw-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot invoke "java.lang.Integer.intValue()" because the return value of "java.util.Map.get(Object)" is null
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) [data-prepper-core-2.2.0-SNAPSHOT.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.lang.NullPointerException: Cannot invoke "java.lang.Integer.intValue()" because the return value of "java.util.Map.get(Object)" is null
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:195) ~[opensearch-2.2.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.execute(BulkRetryStrategy.java:145) ~[opensearch-2.2.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.lambda$flushBatch$6(OpenSearchSink.java:237) ~[opensearch-2.2.0-SNAPSHOT.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.10.3.jar:1.10.3]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.flushBatch(OpenSearchSink.java:234) ~[opensearch-2.2.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doOutput(OpenSearchSink.java:214) ~[opensearch-2.2.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:54) ~[data-prepper-api-2.2.0-SNAPSHOT.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.10.3.jar:1.10.3]
at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:54) ~[data-prepper-api-2.2.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$3(Pipeline.java:262) ~[data-prepper-core-2.2.0-SNAPSHOT.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
... 2 more
```
| [BUG] Receiving exception in OpenSearch sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/2654/comments | 1 | 2023-05-08T15:22:12Z | 2023-05-12T18:55:04Z | https://github.com/opensearch-project/data-prepper/issues/2654 | 1,700,464,810 | 2,654 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The OpenSearch sink retry mechanism uses hard-coded values for the initial delay and max delay, and it does not use jitter. Allow these parameters to be configurable.
**Describe the solution you'd like**
Provide options under opensearch to configure these values
```
sink:
- opensearch:
hosts: ["..."]
username:
password:
index:
max_retries:
initial_retry_delay:
max_retry_delay:
retry_jitter:
```
**Describe alternatives you've considered (Optional)**
**Additional context**
| Allow open search sink retry mechanism configurable | https://api.github.com/repos/opensearch-project/data-prepper/issues/2650/comments | 0 | 2023-05-05T21:31:42Z | 2023-05-10T21:05:04Z | https://github.com/opensearch-project/data-prepper/issues/2650 | 1,698,217,881 | 2,650 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper users have the ability to load objects from S3 that do not have an SQS notification, but end-to-end acknowledgment is missing for this path.
**Describe the solution you'd like**
The pipeline author should be able to use end-to-end acknowledgments with S3 Scan.
| S3 Source - Implement End to End Ack | https://api.github.com/repos/opensearch-project/data-prepper/issues/2649/comments | 1 | 2023-05-05T21:11:42Z | 2023-08-16T22:26:50Z | https://github.com/opensearch-project/data-prepper/issues/2649 | 1,698,202,766 | 2,649 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, a user can configure the Grok processor with `patterns_directories` to load pattern files only from the local file system.
```
processor:
- grok:
patterns_directories: ["path/to/patterns_folder", "path/to/extra_patterns_folder"]
match:
      message: ["%{CUSTOM_PATTERN_FROM_FILE:my_pattern}"]
```
**Describe the solution you'd like**
Allow users to load the pattern directories from S3. This would allow specifying S3 URIs in `patterns_directories`.
This change would use [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) API and require `s3:ListBucket` permission.
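With that change, a pipeline author could mix local paths and S3 URIs in the same list. An illustrative configuration (the `s3://` URI form is a proposal, not released syntax):

```
processor:
  - grok:
      patterns_directories: ["path/to/patterns_folder", "s3://my-bucket/grok-patterns/"]
      match:
        message: ["%{CUSTOM_PATTERN_FROM_FILE:my_pattern}"]
```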
**Additional context**
This is similar to loading index template and ISM policy from S3: https://github.com/opensearch-project/data-prepper/issues/2120
| Support loading Grok patterns from S3 | https://api.github.com/repos/opensearch-project/data-prepper/issues/2646/comments | 3 | 2023-05-05T15:05:19Z | 2023-10-23T18:35:32Z | https://github.com/opensearch-project/data-prepper/issues/2646 | 1,697,777,822 | 2,646 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the OpenSearch sink supports exponential backoff with an optional max-retries limit as its retry mechanism. This is not very useful because the sleep time between retries becomes exceedingly high after some retries. Here are the wait times (in milliseconds) before each retry:
```
1 50
2 60
3 80
4 150
5 280
6 580
7 1250
8 2740
9 6050
10 13430
11 29840
12 66380
13 147680
14 328630
15 731340
16 1627580
17 3622210
18 8061330
19 17940780
20 39927900
21 88861140
22 197764060
23 440131970
24 979531670
```
At iteration 21, the value of 88,861,140 ms is more than one day of wait time. And iteration 25 would result in an integer overflow of the wait time.
**Describe the solution you'd like**
Provide a more reasonable wait scheme for retries: a hybrid model of exponential backoff followed by constant backoff. It would have four parameters:
1. Initial delay (enforce min and max values)
2. Max wait time with exponential backoff (enforce a maximum of less than one hour, or even less, such as 15 minutes)
3. Constant wait time for the constant backoff phase (for example, 15 minutes)
4. Max wait time with constant backoff
The first retry is done after the initial delay.
The next "n" retries are done with exponential backoff until the max exponential backoff wait time is reached.
The next "m" retries are done at the constant configured interval until the max constant backoff wait time is reached.
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Improve OpenSearch retry mechanism | https://api.github.com/repos/opensearch-project/data-prepper/issues/2641/comments | 2 | 2023-05-04T21:23:25Z | 2023-06-05T20:41:48Z | https://github.com/opensearch-project/data-prepper/issues/2641 | 1,696,723,877 | 2,641 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I'd like to create conditionals based on the length of a string.
**Describe the solution you'd like**
Provide a function to get the length of a string from a key.
```
length("/log_level") > 10
```
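A rough sketch of how such a function could resolve against an event (Python for illustration; the key-path handling is simplified):

```python
def length(event, key_path):
    """Return the length of the string found at key_path (e.g. "/log_level")."""
    value = event
    for part in key_path.strip('/').split('/'):
        value = value[part]
    return len(str(value))
```

With an event like `{"log_level": "INFORMATION"}`, `length(event, "/log_level")` returns 11, so the expression above would evaluate to true.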
**Describe alternatives you've considered (Optional)**
We could have a length processor. But, this approach would be more compact.
**Additional context**
Base this on #2626
| Support getting the length of a string in expressions | https://api.github.com/repos/opensearch-project/data-prepper/issues/2639/comments | 2 | 2023-05-04T19:13:21Z | 2023-06-05T20:43:53Z | https://github.com/opensearch-project/data-prepper/issues/2639 | 1,696,559,176 | 2,639 |
[
"opensearch-project",
"data-prepper"
] | null | Allow injecting configurations into data-prepper-config.yaml | https://api.github.com/repos/opensearch-project/data-prepper/issues/2638/comments | 0 | 2023-05-04T19:01:25Z | 2023-08-30T17:31:07Z | https://github.com/opensearch-project/data-prepper/issues/2638 | 1,696,542,034 | 2,638 |
[
"opensearch-project",
"data-prepper"
] | null | Create support for extension-related objects which can be loaded by other Data Prepper plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/2637/comments | 0 | 2023-05-04T19:01:23Z | 2023-05-24T21:22:56Z | https://github.com/opensearch-project/data-prepper/issues/2637 | 1,696,541,982 | 2,637 |
[
"opensearch-project",
"data-prepper"
] | null | Create starting point interfaces for extensions | https://api.github.com/repos/opensearch-project/data-prepper/issues/2636/comments | 0 | 2023-05-04T19:01:20Z | 2023-05-24T21:22:55Z | https://github.com/opensearch-project/data-prepper/issues/2636 | 1,696,541,918 | 2,636 |
[
"opensearch-project",
"data-prepper"
] | Not sure where to ask this as it seems the engagement is low on forums like StackOverflow.
I am looking to make the move from ELK to OpenSearch. The purpose is solely for application log aggregation. I cannot find a simple example of how to do this, however... the example discussed here https://opensearch.org/docs/latest/observing-your-data/log-ingestion/ does not work, and it seems that documentation is out of date based on what the docker-compose.yml it refers to looks like. In particular, the repo does not show any 'data-prepper' in the docker-compose.yml. Additionally, while the example does build and start 3 containers, it does not seem to push any data to OpenSearch when you make modifications to the test.log file.
So, here I am... asking really just for something that actually works. Anything out there? I have been warned that the documentation for this project is very lacking, but I feel like there should be one basic example that new users could follow to get working... just one.
[
"opensearch-project",
"data-prepper"
] | Having a hard time building an example that ships and parses application logs. This example in the documentation seems old:
https://opensearch.org/docs/1.2/observability-plugin/log-analytics/
The docker-compose.yml no longer includes a data-prepper component. It's unclear what else needs to be updated, but the compose file in examples/log-ingestion/, while it sets up the services/containers nicely, does not seem to feed logs into OpenSearch.
[
"opensearch-project",
"data-prepper"
] | ## Background
We have several issues that call for using functions in Data Prepper. These include the following:
* A `getTags()` function - #629
* #1998
* Possibly a CIDR function as noted in #2625
* #2639
## Solution
Provide an ability to use functions in Data Prepper expressions. Functions should be able to take parameters.
## Design
When parsing an expression, Data Prepper should have two distinct steps:
1. Parse the grammar for what looks like a function. The parser should have no specific knowledge of the functions available, but should instead parse that a function is being used.
2. Look-up available functions based on the name of the function and the parameters provided.
For evaluating an expression, Data Prepper will run the function with the given event as well as the parameters supplied.
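A minimal Python sketch of the two-step design: grammar-level parsing first, then a registry lookup (all names here are illustrative):

```python
import re

# Step 1: recognize anything shaped like name(arg, ...) without knowing
# which functions actually exist.
FUNCTION_CALL = re.compile(r'(?P<name>\w+)\(\s*(?P<args>[^)]*?)\s*\)')

def parse_function_call(expression):
    match = FUNCTION_CALL.fullmatch(expression.strip())
    if match is None:
        return None
    raw_args = match.group('args')
    args = [a.strip() for a in raw_args.split(',')] if raw_args else []
    return match.group('name'), args

# Step 2: resolve the parsed name against a registry of available functions.
FUNCTION_REGISTRY = {
    'getTags': lambda event, *args: event.get('tags', []),
}

def evaluate_function(expression, event):
    parsed = parse_function_call(expression)
    if parsed is None:
        raise ValueError('not a function call: ' + expression)
    name, args = parsed
    if name not in FUNCTION_REGISTRY:
        raise ValueError('unknown function: ' + name)
    return FUNCTION_REGISTRY[name](event, *args)
```

Because step 1 is purely syntactic, new functions can be registered without touching the parser.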
| Support functions in Data Prepper expressions | https://api.github.com/repos/opensearch-project/data-prepper/issues/2626/comments | 0 | 2023-05-02T22:31:56Z | 2023-05-11T19:47:06Z | https://github.com/opensearch-project/data-prepper/issues/2626 | 1,693,178,814 | 2,626 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
There are use cases where the IP address associated with an event needs to be checked against some given CIDR blocks to determine the further processing actions for the event.
**Describe the solution you'd like**
Support an `ip_in` operator in a Data Prepper expression like this:
```
/source_ip ip_in {"192.0.2.0/24", "10.10.0.0/16"}
```
to check if an IP address from the event on the left side matches any of the network blocks on the right side.
With the conditional support proposed in #2613, we can then use this expression in many common processors to process events based on the result of the IP check.
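Python's standard `ipaddress` module illustrates the check the operator would perform:

```python
import ipaddress

def ip_in(address, networks):
    """Return True if the address falls inside any of the given CIDR blocks."""
    ip = ipaddress.ip_address(address)
    return any(ip in ipaddress.ip_network(block) for block in networks)
```

Here `ip_in("10.10.3.4", ["192.0.2.0/24", "10.10.0.0/16"])` is true, while an address outside both blocks returns false.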
**Describe alternatives you've considered (Optional)**
Support a `when` option in existing processors (e.g. add_entries processors) to perform CIDR check:
```
processor:
  - add_entries:
      entries:
        - key: "valid_source"
          value: true
      when:
        cidr:
          address_key: "source_ip"
          network: ["192.0.2.0/24", "10.10.0.0/16"]
```
This will add an entry {"valid_source": true} to the event if source_ip is within the range of the network blocks.
| Support checking if an IP address is in a CIDR block | https://api.github.com/repos/opensearch-project/data-prepper/issues/2625/comments | 1 | 2023-05-02T20:49:14Z | 2023-05-24T15:56:17Z | https://github.com/opensearch-project/data-prepper/issues/2625 | 1,693,081,466 | 2,625 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Many use-cases seek to parse out User-Agent headers to get useful metadata before saving to OpenSearch. This can include the type of browser, version information of the browser, the operation system, version information for the operating system, and even device information.
**Describe the solution you'd like**
Provide a processor for parsing User-Agent headers.
```
useragent:
source_key: /headers/user_agent
target_key: /user_agent
exclude_original: false
```
Following along with https://github.com/opensearch-project/observability/issues/1398, this should support ECS-compatibility.
In the example above, I might have output which looks like the following:
```
user_agent : {
name: "Safari",
version: "16.4",
device: {
name: "iPhone"
},
original: "..."
}
```
Fields:
* `source_key` - The key path to use for the User-Agent string. Required.
* `target_key` - The key path which will be the object containing the User-Agent metadata. Optional: if not specified, defaults to `user_agent` to match the ECS schema.
* `exclude_original` - By default, the source_key will be copied. Set this to `true` to not copy it. Optional; defaults to `false`.
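Production parsers match against a maintained rules database (e.g. the uap-core project), but the basic token extraction such a processor performs can be illustrated naively:

```python
import re

# Naive illustration only: pull product/version tokens out of a User-Agent
# string. Real parsers use curated rules to map tokens onto browser, OS,
# and device fields.
TOKEN = re.compile(r'(?P<name>[A-Za-z][\w.]*)/(?P<version>\d[\w.]*)')

def extract_tokens(user_agent):
    return [(m.group('name'), m.group('version')) for m in TOKEN.finditer(user_agent)]
```

For `"Mozilla/5.0 (iPhone) Version/16.4 Mobile/15E148 Safari/604.1"` this yields pairs such as `("Version", "16.4")` and `("Safari", "604.1")`, which the processor could then map onto the ECS `user_agent` fields shown above.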
**Additional context**
ECS reference for User-Agent: https://www.elastic.co/guide/en/ecs/current/ecs-user_agent.html#field-user-agent-device-name
| Support parsing User-Agent fields | https://api.github.com/repos/opensearch-project/data-prepper/issues/2618/comments | 2 | 2023-05-01T19:16:05Z | 2023-05-19T20:23:00Z | https://github.com/opensearch-project/data-prepper/issues/2618 | 1,691,230,296 | 2,618 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2022-45688 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-20180130.jar</b></p></summary>
<p>JSON is a light-weight, language independent, data interchange format.
See http://www.JSON.org/
The files in this package implement JSON encoders/decoders in Java.
It also includes the capability to convert between JSON and XML, HTTP
headers, Cookies, and CDL.
This is a reference implementation. There is a large number of JSON packages
in Java. Perhaps someday the Java community will standardize on one. Until
then, choose carefully.
The license includes this restriction: "The software shall be used for good,
not evil." If your conscience cannot live with that, then choose a different
package.</p>
<p>Library home page: <a href="https://github.com/douglascrockford/JSON-java">https://github.com/douglascrockford/JSON-java</a></p>
<p>Path to dependency file: /data-prepper-plugins/avro-codecs/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.json/json/20180130/26ba2ec0e791a32ea5dfbedfcebf36447ee5b12c/json-20180130.jar</p>
<p>
Dependency Hierarchy:
- :x: **json-20180130.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/ebd3e757c341c1d9c1352431bbad7bf5db2ea939">ebd3e757c341c1d9c1352431bbad7bf5db2ea939</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A stack overflow in the XML.toJSONObject component of hutool-json v5.8.10 allows attackers to cause a Denial of Service (DoS) via crafted JSON or XML data.
<p>Publish Date: 2022-12-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-45688>CVE-2022-45688</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-3vqj-43w4-2q58">https://github.com/advisories/GHSA-3vqj-43w4-2q58</a></p>
<p>Release Date: 2022-12-13</p>
<p>Fix Resolution: 20230227</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
| CVE-2022-45688 (High) detected in json-20180130.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/2615/comments | 0 | 2023-05-01T16:15:11Z | 2023-05-04T15:18:16Z | https://github.com/opensearch-project/data-prepper/issues/2615 | 1,691,020,701 | 2,615 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of processors, I would like to run processors conditionally, based on the events, using the Data Prepper expression syntax.
**Describe the solution you'd like**
Add conditional support to the following processors
```
* grok
* parse_json
* mutate event processors
* mutate string processors
```
Following the existing naming scheme for processors that already support a conditional parameter (the `aggregate` processor has `aggregate_when`, the `drop` processor has `drop_when`), the conditional statement parameters for these processors will follow the same schema (i.e. `grok_when` for grok, `parse_when` for parse_json, `add_when` for add_entries, and so on).
The alternative to this is using a common parameter for all the conditional statement parameters, that being `when`. The benefit of this would be to eliminate breaking changes being made in the case that we add support in the future for conditional statements being at the processor level with a common parameter name of `when`.
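However it is spelled, the `*_when` option amounts to gating processor execution on a per-event condition, roughly like this (Python sketch; the evaluator stands in for the Data Prepper expression engine):

```python
def apply_when(processor, condition, events, evaluate):
    """Run the processor only on events whose condition evaluates true;
    pass other events through unchanged."""
    output = []
    for event in events:
        if condition is None or evaluate(condition, event):
            output.extend(processor(event))
        else:
            output.append(event)
    return output
```

Events that fail the condition flow downstream untouched, which is what lets one pipeline branch its processing per event.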
**Describe alternatives you've considered (Optional)**
Adding support through data-prepper-core to support filtering events based on a `when` parameter that can automatically be used for all processors. This could still be something we do in the future, but as it is a core change, it comes with extra complexity
| Add conditional support to commonly used processors | https://api.github.com/repos/opensearch-project/data-prepper/issues/2613/comments | 0 | 2023-05-01T16:07:58Z | 2023-05-03T15:50:05Z | https://github.com/opensearch-project/data-prepper/issues/2613 | 1,691,012,191 | 2,613 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
OpenSearch ingestion failures are being obscured by the OpenSearch sink logging. I encountered this due to a mappings exception from OpenSearch. DataPrepper only logged:
```
WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document [******] has failure.
```
The OpenSearch logs had to be inspected to determine the true cause of failure.
**Expected behavior**
The exception that prevented the document from being ingested into OpenSearch should be logged by DataPrepper for ease of debugging by users.
**Additional context**
Log is generated from here: https://github.com/opensearch-project/data-prepper/blob/74dbff0835298cee3ddf2cef9fb20be8baa755b2/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L321
| [BUG] OpenSearch sink obscuring ingestion exceptions | https://api.github.com/repos/opensearch-project/data-prepper/issues/2612/comments | 2 | 2023-05-01T15:30:21Z | 2024-02-12T11:11:33Z | https://github.com/opensearch-project/data-prepper/issues/2612 | 1,690,962,891 | 2,612 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Presently, the `s3` source parses SQS messages for the case where S3 writes directly to the SQS queue. However, in some cases (e.g. the fan-out pattern), an architecture has S3 events go to SNS, and then SNS to SQS. When this happens, the message is wrapped in another layer of formatting.
Presently, we get messages like the following:
```
ERROR org.opensearch.dataprepper.plugins.source.SqsWorker - SQS message with message ID:*** has invalid body which cannot be parsed into S3EventNotification. Unrecognized field "Type" (class org.opensearch.dataprepper.plugins.source.S3EventNotification), not marked as ignorable (one known property: "Records"])
at [Source: (String)"{
"Type" : "Notification",
"MessageId" : "***",
"TopicArn" : "arn:aws:sns:us-east-1:123456789012:my-topic",
"Subject" : "Amazon S3 Notification",
"Message" : "{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"us-east-1\",\"eventTime\":\"2023-04-27T12:32:12.908Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"***\"[truncated 1060 chars]; line: 12, column: 2] (through reference chain: org.opensearch.dataprepper.plugins.source.S3EventNotification["Type"])
```
**Describe the solution you'd like**
Automatically detect when the input is an SNS message. Extract the `Message` from that. Then parse the body of `Message` just as we currently do.
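The detection can key off the SNS envelope shape, since an SNS notification body carries `"Type": "Notification"` with the S3 event JSON nested in `Message`. A Python sketch:

```python
import json

def unwrap_s3_notification(sqs_body):
    """Return the S3 event notification, unwrapping an SNS envelope if present."""
    parsed = json.loads(sqs_body)
    if parsed.get('Type') == 'Notification' and 'Message' in parsed:
        # Scenario 2 (S3 -> SNS -> SQS): the S3 event is a JSON string in Message.
        return json.loads(parsed['Message'])
    # Scenario 1 (S3 -> SQS): the body is already the S3 event notification.
    return parsed
```

Both scenarios then feed the same downstream S3 event parsing.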
**Describe alternatives you've considered (Optional)**
We could provide a user configuration here. But, it seems that there are only two common scenarios that need to be supported, and they can be automatically detected.
Scenario 1: S3 -> SQS (already supported)
Scenario 2: S3 -> SNS -> SQS
| Support SNS messages in S3 SQS queues | https://api.github.com/repos/opensearch-project/data-prepper/issues/2604/comments | 1 | 2023-04-28T21:44:00Z | 2023-05-08T18:27:40Z | https://github.com/opensearch-project/data-prepper/issues/2604 | 1,689,163,431 | 2,604 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I see that the OpenSearch sink handles retries recursively.
```
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.logFailure(OpenSearchSink.java:310)
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink$$Lambda$1222/0x0000000800829c40.accept(Unknown Source)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleFailures(BulkRetryStrategy.java:322)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:195)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:186)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:235)
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.execute(BulkRetryStrategy.java:154)
```
**Expectation**
This should happen in a loop. The current behavior could potentially hit a stack overflow.
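An iterative structure along these lines avoids growing the stack (illustrative Python sketch, not the actual sink code):

```python
def execute_with_retries(request, send, max_retries):
    """Retry iteratively instead of recursively so that long retry chains
    cannot deepen the call stack."""
    last_error = None
    for _ in range(max_retries):
        try:
            return send(request)
        except Exception as error:  # in practice, only retryable failures
            last_error = error
    raise RuntimeError('reached the limit of max retries (%d)' % max_retries) from last_error
```

The retry count is bounded by the loop rather than by recursion depth, so exhausting retries raises one exception with the last failure chained as its cause.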
Also, it adds a lot of redundant calls in the stack traces which are produced. | [BUG] Retries should not happen recursively | https://api.github.com/repos/opensearch-project/data-prepper/issues/2599/comments | 0 | 2023-04-26T23:15:38Z | 2023-05-02T23:05:30Z | https://github.com/opensearch-project/data-prepper/issues/2599 | 1,685,863,458 | 2,599 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper is no longer processing data. There appears to be data stuck in the buffer. Buffer metrics are reporting bufferUsage of 75%. RecordsInFlight are 250,000 and records in buffer are 687,000+. The last message in my logs is
```
2023-04-25T20:42:43.344 WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document [******] has failure.
java.lang.RuntimeException: Number of retries reached the limit of max retries(configured value 10)
```
Data prepper appears to be hung. I no longer see the source polling or any data flowing through the pipeline via the metrics.
**To Reproduce**
I have 2 out of 5 pipelines with this issue. All are pointing to the same domain. Around the time this issue started, there was a brief write block on my domain. It is unclear if this issue is reproducible, as the other 3 were able to continue to process data after the write block was lifted.
**Expected behavior**
Data prepper does not hang with data in the buffer.
**Environment (please complete the following information):**
- Version [e.g. 2.2]
**Additional context**
Partial Pipeline Configuration
```
version: "2"
my-pipeline:
source:
s3:
notification_type: "sqs"
buffer_timeout: "60s"
codec:
newline:
skip_lines: 1
sqs:
queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/****-queue"
compression: "gzip"
aws:
region: "us-east-1"
sts_role_arn: "arn:aws:iam::123456789012:role/osis-pipeline-role"
buffer:
bounded_blocking:
batch_size: 125000
buffer_size: 1000000
...
sink:
- opensearch:
max_retries: 10
hosts:
- "https://*****"
index: "vpc-flow-logs-%{yyyy.MM.dd}"
bulk_size: 20
aws:
region: "us-east-1"
sts_role_arn: "arn:aws:iam::123456789012:role/*****-role"
workers: 2
delay: 0
```
| [BUG] Data Prepper hung with data stuck in buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/2598/comments | 1 | 2023-04-26T18:42:58Z | 2023-04-26T23:10:32Z | https://github.com/opensearch-project/data-prepper/issues/2598 | 1,685,529,426 | 2,598 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Writing to the OpenSearch sink may encounter failures. These were originally logged after the final failure occurred. The recent change to the failure handling overwrites the error message, masking it from the operator. This makes it difficult to debug failure scenarios:
```
WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document [******] has failure.
java.lang.RuntimeException: Number of retries reached the limit of max retries(configured value 10)
```
**Expected behavior**
Log the original failure associated with the failed request.
**Additional context**
- [Overwriting the original failure](https://github.com/opensearch-project/data-prepper/blame/74dbff0835298cee3ddf2cef9fb20be8baa755b2/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java#L190)
- [PR associated with the change](https://github.com/opensearch-project/data-prepper/pull/2339)
| [BUG] OpenSearch Sink failures are masked by max retry in the logs | https://api.github.com/repos/opensearch-project/data-prepper/issues/2597/comments | 2 | 2023-04-26T18:02:14Z | 2023-05-02T23:05:29Z | https://github.com/opensearch-project/data-prepper/issues/2597 | 1,685,480,128 | 2,597 |
[
"opensearch-project",
"data-prepper"
] | There does not seem to be a way to specify an s3 source to delete files after processing.
Currently Logstash supports a parameter in the s3 input called [delete](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-s3.html#plugins-inputs-s3-delete) that you can set to `true`, and then the s3 source file is deleted once fully ingested. We are currently using Logstash and investigating switching to data-prepper, and the lack of this feature would be a major blocker for us.
It would be good to have this feature parity if possible. | s3 source: delete after processing | https://api.github.com/repos/opensearch-project/data-prepper/issues/2596/comments | 5 | 2023-04-26T11:59:22Z | 2023-07-20T19:28:27Z | https://github.com/opensearch-project/data-prepper/issues/2596 | 1,684,885,192 | 2,596 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
At present we are testing data-prepper in a k8s environment, for a new piece of work we are looking at.
The current issue we are having is a heap issue (I will raise another issue for this): when we run out of heap space, all our pipelines shut down, which is fine. However, we would like the option that once the pipelines have shut down, the data-prepper process is terminated. This is not happening at present, which means our pods continue to run although they process nothing.
**Describe the solution you'd like**
On pipeline shutdown, provide the option to kill the process, and in turn allow the pod to be restarted.
**Describe alternatives you've considered (Optional)**
We will investigate health checks, but this is a belt-and-braces type thing
**Additional context**
This would be an extension to this PR (I think) https://github.com/opensearch-project/data-prepper/pull/2540/files
| Process Termination on Pipeline Shutdown | https://api.github.com/repos/opensearch-project/data-prepper/issues/2595/comments | 4 | 2023-04-25T19:31:40Z | 2023-04-25T21:48:38Z | https://github.com/opensearch-project/data-prepper/issues/2595 | 1,683,741,762 | 2,595 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-2251 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yaml-1.10.2.tgz</b></p></summary>
<p>JavaScript parser and stringifier for YAML</p>
<p>Library home page: <a href="https://registry.npmjs.org/yaml/-/yaml-1.10.2.tgz">https://registry.npmjs.org/yaml/-/yaml-1.10.2.tgz</a></p>
<p>Path to dependency file: /release/staging-resources-cdk/package.json</p>
<p>Path to vulnerable library: /release/staging-resources-cdk/node_modules/aws-cdk-lib/node_modules/yaml/package.json</p>
<p>
Dependency Hierarchy:
- aws-cdk-lib-2.13.0.tgz (Root Library)
- :x: **yaml-1.10.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/ebd3e757c341c1d9c1352431bbad7bf5db2ea939">ebd3e757c341c1d9c1352431bbad7bf5db2ea939</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Uncaught Exception in GitHub repository eemeli/yaml prior to 2.0.0-4.
<p>Publish Date: 2023-04-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-2251>CVE-2023-2251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-f9xv-q969-pqx4">https://github.com/advisories/GHSA-f9xv-q969-pqx4</a></p>
<p>Release Date: 2023-04-24</p>
<p>Fix Resolution: yaml - 2.2.2</p>
</p>
</details>
<p></p>
| CVE-2023-2251 (High) detected in yaml-1.10.2.tgz - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/2594/comments | 1 | 2023-04-25T16:36:36Z | 2023-05-01T16:15:15Z | https://github.com/opensearch-project/data-prepper/issues/2594 | 1,683,504,768 | 2,594 |
[
"opensearch-project",
"data-prepper"
] | ## Problem
Data Prepper pipeline configurations require configuring OpenSearch sinks in multiple locations.
For example, in the standard trace analytics pipeline, I may have the following configured twice
```
sink:
- opensearch:
hosts: ["https://localhost:9200"]
username: admin
password: admin
index_type: trace-analytics-raw
...
sink:
- opensearch:
hosts: ["https://localhost:9200"]
username: admin
password: admin
index_type: trace-analytics-service-map
```
## Proposal
I'd like to be able to optionally configure an OpenSearch connection once and re-use it.
```
pipeline_configurations:
opensearch:
connections:
trace_cluster:
hosts: ["https://localhost:9200"]
username: admin
password: admin
```
Then, I can re-use it with the following.
```
sink:
- opensearch:
connection: trace_cluster
index_type: trace-analytics-raw
...
sink:
- opensearch:
connection: trace_cluster
index_type: trace-analytics-service-map
```
## Related enhancements
This requires a common OpenSearch extension. It would be used along with #2589. Thus, this requires the plugin extensions capability described by #2588. | Shared OpenSearch configurations | https://api.github.com/repos/opensearch-project/data-prepper/issues/2590/comments | 0 | 2023-04-24T19:07:57Z | 2023-04-24T19:09:00Z | https://github.com/opensearch-project/data-prepper/issues/2590 | 1,681,863,886 | 2,590 |
[
"opensearch-project",
"data-prepper"
] | # Problem
Currently, two `opensearch` sinks which write to the same OpenSearch cluster have no coordination between them. If one of the sinks is experiencing increased error rates, there is no way to notify the other sinks to attempt to back off as well.
Additionally, other pipeline components which use OpenSearch do not coordinate among themselves or with other OpenSearch components. This includes `otel_trace_group` and the OpenSearch source proposed in #1985.
## Proposal
Provide a Data Prepper extension for OpenSearch clusters to use for coordinating requests. This can add backoffs at the cluster-level and not just at the sink level.
## Plugin support
This proposal is to use the plugin extensions proposed in #2588 to provide a common set of code for coordinating. | OpenSearch sink coordination | https://api.github.com/repos/opensearch-project/data-prepper/issues/2589/comments | 2 | 2023-04-24T18:59:15Z | 2023-09-19T16:35:43Z | https://github.com/opensearch-project/data-prepper/issues/2589 | 1,681,848,962 | 2,589 |
[
"opensearch-project",
"data-prepper"
] | ## Problem
The term pipeline component refers to a source, buffer, processor, or sink. Today, Data Prepper plugins only support creating pipeline components. (Or in some cases, sub-configurations within a component.) The `opensearch` sink is an example of a component plugin. Data Prepper creates an instance of this class for each sink declared in a pipeline configuration.
However, Data Prepper has no solution for extending existing functionality beyond pipeline components. Nor does it have a solution to add new functionality to the overall system.
One use-case for this is described in #2570. This proposes:
* Extending the `data-prepper-config.yaml` to support AWS credential configurations.
* Adding new functionality for other plugins to use which allow them to get credentials from a common mechanism.
Another use-case is supporting variable injection (through environment variables or AWS Secrets). For example, see #2780.
Another problem extensions could help with is sharing resources. For example, we could possibly share data across multiple `opensearch` sinks through the use of extensions. See #2589 for more details.
## Solution
Provide a mechanism for extending Data Prepper functionality. Extensions will provide this mechanism.
An extension is a global component that can be used across all of Data Prepper. Other pipeline components (sources, processors, sinks) can use extensions. And even Data Prepper core functionality can be modified by using extensions.
These extensions should allow:
* Adding new configurations in `data-prepper-config.yaml`
* Creating new types that can be shared within the plugin framework. These types can themselves be registered as plugins.
* Add new configurations within a pipeline YAML file
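For example, the AWS credentials use-case above might surface in `data-prepper-config.yaml` as something like the following. The `extensions` key and its layout are purely illustrative; nothing like it exists yet:

```yaml
# Hypothetical layout contributed by an extension; every key under
# "extensions" here is an assumption, not an existing configuration.
extensions:
  aws:
    default:
      region: us-west-2
      sts_role_arn: "arn:aws:iam::123456789012:role/MyRole"
```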
## Tasks
- [x] #2636
- [x] #2637
- [x] #2638
- [ ] #2824
- [x] #2826
- [ ] #2825
| Plugin extensions beyond pipeline components | https://api.github.com/repos/opensearch-project/data-prepper/issues/2588/comments | 0 | 2023-04-24T18:47:06Z | 2024-01-19T22:38:28Z | https://github.com/opensearch-project/data-prepper/issues/2588 | 1,681,832,469 | 2,588 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some Data Prepper APIs continue to use either the low-level rest-client or the rest high-level client. These are incompatible with Amazon OpenSearch Serverless.
**Describe the solution you'd like**
Support OpenSearch Serverless by updating all API calls to use the `opensearch-java` client.
**Additional context**
This builds on top of #2169.
| Support OpenSearch Serverless in all requests | https://api.github.com/repos/opensearch-project/data-prepper/issues/2587/comments | 0 | 2023-04-24T18:33:00Z | 2023-05-10T21:02:19Z | https://github.com/opensearch-project/data-prepper/issues/2587 | 1,681,815,506 | 2,587 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently the Log source has a [Throttling Strategy](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/http-source/src/main/java/org/opensearch/dataprepper/plugins/source/loghttp/LogThrottlingStrategy.java) that only accepts a request while `queue.size() < maxPendingRequests`; that is, requests are rejected once the number of HTTP source requests queued (because all HTTP source threads are busy executing requests) reaches the maximum allowed number of queued requests. To be consistent, this throttling strategy should be applied to all the sources.
**Describe the solution you'd like**
Move the [Throttling Strategy](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/http-source/src/main/java/org/opensearch/dataprepper/plugins/source/loghttp/LogThrottlingStrategy.java) to a common package and apply it to all the sources.
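For reference, these are the throttling-related settings the `http` source exposes today, which the OTel sources could adopt once the strategy is shared. The `otel_trace_source` block and the shown values are illustrative, not an existing configuration:

```yaml
# Illustrative only: throttling-related settings mirrored from the http source.
source:
  otel_trace_source:
    thread_count: 200          # worker threads serving requests
    max_connection_count: 500  # open connections allowed
    max_pending_requests: 1024 # queued requests before throttling kicks in
```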
| Add Throttling Strategy to all the OTEL sources | https://api.github.com/repos/opensearch-project/data-prepper/issues/2582/comments | 0 | 2023-04-23T16:54:26Z | 2023-04-25T15:54:52Z | https://github.com/opensearch-project/data-prepper/issues/2582 | 1,680,092,735 | 2,582 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I have added the DLQ to the OpenSearch sink. When I used "error/" as the `key_path_prefix`, the full output key of the DLQ file is 'error//dlq-v2-test-pipeline-opensearch-2023-...', which contains an extra, unexpected `/` in between.
Note that `/` is still a valid key character in Amazon S3.
**To Reproduce**
Steps to reproduce the behavior:
1. Add below dlq config to opensearch sink
```
sink:
- opensearch:
dlq:
s3:
bucket: "xxx"
key_path_prefix: "error/"
region: "us-west-2"
```
2. Run the pipeline and try get some errors
3. Check the dlq bucket for error files.
**Expected behavior**
The current output key is `error//dlq-v2-...`, expected output key is `error/dlq-v2-...`
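A minimal sketch of the prefix normalization that would avoid the duplicated separator. The class and method names are hypothetical, not the actual Data Prepper code:

```java
// Hypothetical sketch: join a user-supplied key_path_prefix with the DLQ file
// name without producing a double slash. Not the actual Data Prepper classes.
public class DlqKeyBuilder {
    static String buildKey(final String keyPathPrefix, final String fileName) {
        if (keyPathPrefix == null || keyPathPrefix.isEmpty()) {
            return fileName;
        }
        // Strip any trailing slash before re-adding exactly one separator.
        final String normalized = keyPathPrefix.endsWith("/")
                ? keyPathPrefix.substring(0, keyPathPrefix.length() - 1)
                : keyPathPrefix;
        return normalized + "/" + fileName;
    }
}
```

With this, both `error/` and `error` as the configured prefix produce keys of the form `error/dlq-v2-...`.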
**Screenshots**
N/A
**Environment (please complete the following information):**
- OS: MacOS (local)
- Version: 2.2
**Additional context**
N/A
| [BUG] Invalid key prefix format for dlq | https://api.github.com/repos/opensearch-project/data-prepper/issues/2581/comments | 12 | 2023-04-23T06:35:43Z | 2023-05-25T18:08:22Z | https://github.com/opensearch-project/data-prepper/issues/2581 | 1,679,884,401 | 2,581 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The release build's smoke tests fail consistently.
However, I can run them locally with success.
**To Reproduce**
Steps to reproduce the behavior:
1. Run a release build GHA
**Expected behavior**
The tests pass (perhaps after re-running failed jobs)
**Sample run**
https://github.com/opensearch-project/data-prepper/actions/runs/4758053910/jobs/8455993252
```
Ready to begin smoke tests. Running cURL commands.
Test: Verify logs received via HTTP were processed
Open Search is receiving logs from Data Prepper
Found at least 78 hits
Test passed
Open Search successfully received logs from Data Prepper!
Test: Verify metrics received via grpc were processed
No hits found with query url https://localhost:9200/otel-v1-apm-span-000001/_search?q=PythonService
Smoke test failed
Stopping smoke-tests_otel-span-exporter_1 ...
Stopping smoke-tests_otel-collector_1 ...
Stopping smoke-tests_data-prepper_1 ...
Stopping node-0.example.com ...
Stopping smoke-tests_http-log-generation_1 ...
Stopping smoke-tests_http-log-generation_1 ... done
Stopping smoke-tests_otel-span-exporter_1 ... done
Stopping smoke-tests_otel-collector_1 ... done
Stopping smoke-tests_data-prepper_1 ... done
Stopping node-0.example.com ... done
Removing smoke-tests_otel-span-exporter_1 ...
Removing smoke-tests_otel-collector_1 ...
Removing smoke-tests_data-prepper_1 ...
Removing node-0.example.com ...
Removing smoke-tests_http-log-generation_1 ...
Removing node-0.example.com ... done
Removing smoke-tests_otel-collector_1 ... done
Removing smoke-tests_otel-span-exporter_1 ... done
Removing smoke-tests_http-log-generation_1 ... done
Removing smoke-tests_data-prepper_1 ... done
Removing network smoke-tests_default
Smoke tests failed
Error: Process completed with exit code 1.
``` | [BUG] Smoke tests fail in GitHub Actions | https://api.github.com/repos/opensearch-project/data-prepper/issues/2579/comments | 0 | 2023-04-23T00:42:34Z | 2023-04-23T01:10:10Z | https://github.com/opensearch-project/data-prepper/issues/2579 | 1,679,798,099 | 2,579 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of data prepper, I would like to be able to validate my pipeline configurations without having to start the entirety of data prepper with multiple servers and without having to use real permissions for opensearch, s3, etc.
**Describe the solution you'd like**
A new module within data prepper called `data-prepper-validation-api`. This module would provide a library to validate pipeline configuration without starting the entirety of data prepper. It will utilize data-prepper-core code to convert the pipeline configuration into the model for each data prepper plugin, and run the jsr380 validations for those plugins. Additionally, this module would be responsible for constructing instances of plugins with a dependency on only the configuration that the plugin uses.
In order to achieve this, the data prepper directory structure will need to change to the structure proposed in https://github.com/opensearch-project/data-prepper/issues/1503 for data-prepper-core.
After these directory structure changes are complete (only the changes that split out data-prepper-core are needed), we will add the `data-prepper-validation-api` module.
The `data-prepper-validation-api` module will take a dependency on all the data-prepper-plugins that are configured in the pipeline configuration, as well as some of the libraries extracted from data-prepper-core (`data-prepper-pipeline`, `data-prepper-plugin-framework`), and will provide a library to run these validations given a pipeline configuration YAML string, returning error messages for invalid configurations. The following dependency hierarchy will be the end result:
```
data-prepper-core
+---- data-prepper-validations
+---- data-prepper-plugin-framework
+---- data-prepper-pipeline
data-prepper-validations
+---- data-prepper-pipeline
+---- data-prepper-plugin-framework
data-prepper-validation-api
+---- data-prepper-validations
+---- opensearch
+---- s3-source
+---- grok-processor
... This can have all plugins so that it can perform actual validations
```
In order to validate more than just the jsr380 validations, plugins will need to be instantiated just with the configuration model associated with that plugin. Additionally, some plugins do not use jsr380 and the `@DataPrepperPluginConstructor` annotation for their configurations. For example, to run validations that the grok patterns configured in a grok processor are valid, the grok processor will need to be instantiated because it takes a `PluginSetting` object in its constructor and will validate that PluginSetting itself, rather than having data-prepper-core validate it.
While we could start with just validating plugin models with jsr380, I am proposing that we add an optional annotation to `data-prepper-api` that can be used by all plugins, that being `@DataPrepperValidateApi`. This annotation could be added to either an existing or new constructor that only requires the configuration, whether is it a `PluginSetting` or a custom config that is converted by the `data-prepper-plugin-framework`. The `data-prepper-validation-api` would then look for this annotation on plugins (if it is not found then no extra validations are run), and use it to instantiate the plugin with its configuration. The plugin would then be able to run any validations that it reasonably can in this constructor without creating all of the additional dependencies (servers, clients, etc.), and would be able to provide error messages for this. For example, the grok processor could have the following constructor,
```
@DataPrepperValidateApi
GrokProcessor(final PluginSetting pluginSetting, final List<String> errors) {
final GrokProcessorConfig grokConfig = buildConfig(pluginSetting);
errors.add(validateGrokPatterns());
}
```
**Additional context**
Data Prepper directory structure changes proposal (https://github.com/opensearch-project/data-prepper/issues/1503) | Validate data prepper configurations without running Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/2573/comments | 4 | 2023-04-21T22:03:51Z | 2025-04-17T19:15:58Z | https://github.com/opensearch-project/data-prepper/issues/2573 | 1,679,149,967 | 2,573 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Need a processor that does something similar to the OTel tail sampling processor. More details can be found at https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md
The basic functionality is that the decision to sample or not is taken after a trace is "complete". When it is complete, if the trace is an error trace then it is always allowed (no sampling); if it is not an error trace then it is allowed at a configured percentage using probabilistic sampling.
A trace is considered "complete" if there are no events/spans in the trace for a configured time period called the "wait_period".
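The decision described above can be sketched as follows. This is a simplified illustration of the sampling rule only, not tied to the aggregate processor framework:

```java
import java.util.Random;

// Simplified tail-sampling decision: once a trace is "complete", error traces
// are always kept; all other traces are kept with a configured probability.
public class TailSamplingDecision {
    static boolean shouldKeep(final boolean hasError, final double percent, final Random random) {
        if (hasError) {
            return true; // error traces bypass probabilistic sampling entirely
        }
        // Keep the trace if a uniform draw in [0, 100) lands under the percent.
        return random.nextDouble() * 100.0 < percent;
    }
}
```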
**Describe the solution you'd like**
The solution is to support the tail sampling behavior as implemented by the OTel tail sampling processor. We can use the existing aggregate processor framework in Data Prepper to support this functionality. A new action with tail sampling functionality is the easiest way to add this functionality to Data Prepper.
**Describe alternatives you've considered (Optional)**
I can't think of alternative approaches in the current Data Prepper. In the future we could have a better way to do tail sampling without using the aggregate processor framework.
| Add tail sampling processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/2572/comments | 3 | 2023-04-21T21:07:42Z | 2023-05-04T20:26:14Z | https://github.com/opensearch-project/data-prepper/issues/2572 | 1,679,103,478 | 2,572 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The Data Prepper tar.gz distributions are only for x86 architectures currently. As Data Prepper is written in Java it should be able to run on ARM. The distribution which is bundled with a JDK needs to have an ARM distribution.
**Describe the solution you'd like**
Provide two new distributions for ARM64:
* Data Prepper for ARM
* Data Prepper with JDK for ARM
**Describe alternatives you've considered (Optional)**
The Data Prepper (without JDK) distribution should work for either x86 or ARM64. However, I propose creating an ARM distribution for the following reasons:
1. This is what other projects in OpenSearch appear to do. See the [artifacts page](https://opensearch.org/artifacts).
2. This will be clearer for users. They can have more confidence for using the ARM64 distribution.
**Additional context**
This is related to #640.
| ARM distributions via tar.gz | https://api.github.com/repos/opensearch-project/data-prepper/issues/2571/comments | 4 | 2023-04-21T20:51:40Z | 2024-04-17T23:02:02Z | https://github.com/opensearch-project/data-prepper/issues/2571 | 1,679,082,674 | 2,571 |
[
"opensearch-project",
"data-prepper"
] | ## Problem
Presently Data Prepper pipeline definitions must have AWS IAM credential configurations for most AWS authentication.
This presents a few problems:
* Different pipeline components have copied-and-pasted configurations resulting in duplicate configuration.
* Pipeline authors may have to change multiple locations to perform updates.
* AWS STS roles may need to be assumed in multiple locations even though they can be shared.
* The pipeline configuration can be somewhat clutter with these configurations.
## Solution
I'd like to have three options available for configuring AWS IAM credentials in pipeline configurations.
1. Use a default AWS configuration configured in `data-prepper-config.yaml`.
2. Specify a named AWS configuration which is configured in `data-prepper-config.yaml`.
3. Configure the AWS configuration in the pipeline configuration as Data Prepper already supports.
### Default AWS configuration
In `data-prepper-config.yaml`, I'd like to have something like the following.
```
aws:
default:
region: us-west-2
sts_role_arn: "arn:aws:iam::123456789012:role/MyRole"
```
Now, I can configure my `opensearch` sink with just:
```
- opensearch:
hosts: [ "https://search-my-amazon-opensearch-domain.us-west-2.es.amazonaws.com" ]
index: my_index
```
It will use that `sts_role_arn` and `region` as specified above.
### Named AWS configurations
In `data-prepper-config.yaml`, I'd like to have something like the following.
```
aws:
configurations:
my_configuration:
region: us-west-2
sts_role_arn: "arn:aws:iam::123456789012:role/MyRole"
```
Now, I can configure my `opensearch` sink with just:
```
- opensearch:
hosts: [ "https://search-my-amazon-opensearch-domain.us-west-2.es.amazonaws.com" ]
index: my_index
aws:
configuration: my_configuration
```
It will use that `sts_role_arn` and `region` as defined in `my_configuration`.
### Additional configurations
Additionally, a few other options could be provided to customize how Data Prepper authenticates.
* `role_session_name_prefix` - Now that credentials can be shared, a default STS session name would be `DataPrepper-${random}`. Instead, the session name prefix can be configured. Thus, the session name can be `${role_session_name_prefix}-${random}`.
* `role_session_name` - Provide the full name for role sessions.
* `endpoint` - Configure a specific endpoint for STS requests.
```
aws:
default:
region: us-west-2
sts_role_arn: "arn:aws:iam::123456789012:role/MyRole"
role_session_name_prefix: dp1
endpoint: https://mysts.example.org
```
## Alternative considered
An alternative is to make use of AWS profiles and the default provider chain. However, this can be confusing because it relies on changes to user paths or environment variables. It also requires making configuration changes in other files beyond Data Prepper, which may be challenging in certain environments.
Also, since Data Prepper supports these role configurations, it makes sense to support this with Data Prepper itself.
## Plugin support
While I'd like to have this available in `data-prepper-config.yaml`, I do not think Data Prepper core should have this AWS functionality. Instead, I'd like to have the ability to create plugins which are not pipeline components, but are instead able to extend Data Prepper's core functionality.
That is, I want this AWS feature to be a plugin which adds these configurations to `data-prepper-config.yaml`. And it will provide classes to other plugins that need AWS support.
# Tasks
- [x] #2751
- [x] #2764
- [x] Provide an AWS configuration with support for default credentials
- [ ] #4637 | Make AWS credential management available in data-prepper-config.yaml | https://api.github.com/repos/opensearch-project/data-prepper/issues/2570/comments | 3 | 2023-04-21T20:30:37Z | 2024-06-18T19:38:18Z | https://github.com/opensearch-project/data-prepper/issues/2570 | 1,679,048,737 | 2,570 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Failures in the `s3` source related to authentication tend to require user intervention. Because of this, the backoff for situations with authentication issues is more aggressive than it needs to be.
This produces a number of unnecessary logs before the user can respond.
**Expected behavior**
Have a longer delay for STS and unauthenticated errors.
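One way to express that distinction in code, as an illustrative sketch rather than Data Prepper's actual retry logic:

```java
import java.time.Duration;

// Illustrative backoff: authentication failures need user intervention, so
// they get a long fixed delay; transient errors get capped exponential backoff.
public class BackoffSketch {
    static Duration nextDelay(final boolean isAuthError, final int attempt) {
        if (isAuthError) {
            // Retrying quickly only floods the logs until a human fixes the role.
            return Duration.ofMinutes(5);
        }
        // Exponential backoff for transient errors, capped at 30 seconds.
        final long seconds = Math.min(30, 1L << Math.min(attempt, 5));
        return Duration.ofSeconds(seconds);
    }
}
```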
| [BUG] S3 source backoff is still quite aggressive | https://api.github.com/repos/opensearch-project/data-prepper/issues/2568/comments | 5 | 2023-04-21T16:22:27Z | 2023-04-24T14:16:55Z | https://github.com/opensearch-project/data-prepper/issues/2568 | 1,678,783,495 | 2,568 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I was following the log ingestion example in the repo and found that a few missing details make the readme incomplete. They are:
1. There are no instructions on how to set a custom OpenSearch config. This is useful if I want to play around with other OpenSearch config options.
2. The default auth credentials are not mentioned. Once I start up the compose file, I am not able to log in to the Dashboard; it turns out the credentials are `admin:admin`.
3. Once I log in, I am not able to find the sample log index `apache-logs` in the index management view. Adding more logs to the `test.log` file shows that DNS resolution for the default Fluent Bit container fails.
4. The common Apache log examples do not map the timestamp as a date field in OpenSearch, making time-based exploration impossible without adding a custom mapping. Given that the logs are ingested automatically once data is added, the mapping has to happen as soon as the cluster is up; if not, a reindex is needed just to get the time fields mapped correctly for the demo. All this adds friction for a user who just wants to quickly set up a demo they can play with to get a feel for the log ingestion flow. The `timestamp` field can be manually mapped and is part of the common Apache log format. Mapping it to a `date` type by default would demonstrate the capability of Data Prepper a lot better.
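For item 4, an index mapping along the following lines would make the field usable in time-based views. The exact `format` string for Apache common log timestamps is illustrative and may need adjusting:

```json
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "dd/MMM/yyyy:HH:mm:ss Z"
      }
    }
  }
}
```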
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the [log-ingestion example](https://github.com/opensearch-project/data-prepper/tree/main/examples/log-ingestion)
2. Observe the issues mentioned above
**Expected behavior**
None of the mentioned issues
| [BUG] log ingestion example missing information | https://api.github.com/repos/opensearch-project/data-prepper/issues/2567/comments | 3 | 2023-04-21T06:38:19Z | 2024-01-22T16:54:45Z | https://github.com/opensearch-project/data-prepper/issues/2567 | 1,677,922,368 | 2,567 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The S3 Select functionality runs out of memory on large files.
**To Reproduce**
Configure the source to read a large CSV file that does not support batching, for example one with compression.
**Expected behavior**
Data Prepper should add data to buffer as received rather than populate a large list.
**Observations**
The S3 Select code makes an asynchronous request and then blocks for it to complete.
https://github.com/opensearch-project/data-prepper/blob/bcae5e1a4c548590fe1a14d4f8b3ff6e49a15274/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/S3SelectObjectWorker.java#L179
The asynchronous response handler then immediately adds the data to a list.
https://github.com/opensearch-project/data-prepper/blob/bcae5e1a4c548590fe1a14d4f8b3ff6e49a15274/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/S3SelectResponseHandler.java#L28
Thus, this will fill the entire `List` before writing anything to the buffer.
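The fix amounts to flushing records downstream in bounded batches as events arrive rather than accumulating the whole response. A self-contained sketch of that pattern follows; the types here are stand-ins, not the actual AWS SDK or Data Prepper classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Flush records in bounded batches as they arrive instead of buffering the
// entire response in one List. Returns the number of flushes performed.
public class BatchedFlush {
    static int flushInBatches(final Iterable<String> events, final int batchSize,
                              final Consumer<List<String>> buffer) {
        final List<String> batch = new ArrayList<>();
        int flushes = 0;
        for (final String event : events) {
            batch.add(event);
            if (batch.size() >= batchSize) {
                buffer.accept(new ArrayList<>(batch));
                batch.clear();
                flushes++;
            }
        }
        if (!batch.isEmpty()) { // flush any trailing partial batch
            buffer.accept(new ArrayList<>(batch));
            flushes++;
        }
        return flushes;
    }
}
```

Memory use is then bounded by the batch size rather than the size of the S3 object.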
| [BUG] S3 Select for large files | https://api.github.com/repos/opensearch-project/data-prepper/issues/2559/comments | 1 | 2023-04-20T16:31:18Z | 2023-04-24T14:43:54Z | https://github.com/opensearch-project/data-prepper/issues/2559 | 1,677,054,403 | 2,559 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
We had a GitHub Action fail with:
```
PipelinesWithAcksIT > two_pipelines_with_multiple_records() FAILED
java.lang.NullPointerException at PipelinesWithAcksIT.java:100
```
https://github.com/opensearch-project/data-prepper/actions/runs/4749652084/jobs/8437118038?pr=2549
This corresponds to:
https://github.com/opensearch-project/data-prepper/blob/a1978d353e9eae16ae6a7c09e65544ebd4c671f7/data-prepper-core/src/integrationTest/java/org/opensearch/dataprepper/integration/PipelinesWithAcksIT.java#L100
| [BUG] End-to-end acknowledgement integration test is failing | https://api.github.com/repos/opensearch-project/data-prepper/issues/2551/comments | 1 | 2023-04-20T02:22:16Z | 2023-05-01T17:39:07Z | https://github.com/opensearch-project/data-prepper/issues/2551 | 1,675,886,338 | 2,551 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper GitHub Actions builds are failing with:
```
Could not determine the dependencies of task ':data-prepper-api:spotlessCheck'.
> Could not create task ':data-prepper-api:spotlessJavaCheck'.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
> Could not create task ':data-prepper-api:spotlessJava'.
> Could not resolve all files for configuration ':data-prepper-api:spotless865455342'.
See https://docs.gradle.org/7.5.1/userguide/command_line_interface.html#sec:command_line_warnings
> Could not resolve com.google.googlejavaformat:google-java-format:1.15.0.
Required by:
project :data-prepper-api
> Could not resolve com.google.googlejavaformat:google-java-format:1.15.0.
> Could not get resource 'https://repo.maven.apache.org/maven2/com/google/googlejavaformat/google-java-format/1.15.0/google-java-format-1.15.0.pom'.
> Could not GET 'https://repo.maven.apache.org/maven2/com/google/googlejavaformat/google-java-format/1.15.0/google-java-format-1.15.0.pom'.
> repo.maven.apache.org
```
https://github.com/opensearch-project/data-prepper/actions/runs/4749448000/jobs/8436723881
**To Reproduce**
Run any build in GitHub Actions.
**Additional context**
I can run the build locally, even with `--refresh-dependencies`.
| [BUG] Building failing unable to find Maven repository artifacts | https://api.github.com/repos/opensearch-project/data-prepper/issues/2548/comments | 1 | 2023-04-20T01:27:48Z | 2023-04-20T15:26:18Z | https://github.com/opensearch-project/data-prepper/issues/2548 | 1,675,848,082 | 2,548 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-26048 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-server-11.0.12.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-server/11.0.12/29c82ff7e059ee1e454af6d391834abadf24e60/jetty-server-11.0.12.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.0-beta-8.jar (Root Library)
- :x: **jetty-server-11.0.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/ebd3e757c341c1d9c1352431bbad7bf5db2ea939">ebd3e757c341c1d9c1352431bbad7bf5db2ea939</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Jetty is a java based web server and servlet engine. In affected versions servlets with multipart support (e.g. annotated with `@MultipartConfig`) that call `HttpServletRequest.getParameter()` or `HttpServletRequest.getParts()` may cause `OutOfMemoryError` when the client sends a multipart request with a part that has a name but no filename and very large content. This happens even with the default settings of `fileSizeThreshold=0` which should stream the whole part content to disk. An attacker client may send a large multipart request and cause the server to throw `OutOfMemoryError`. However, the server may be able to recover after the `OutOfMemoryError` and continue its service -- although it may take some time. This issue has been patched in versions 9.4.51, 10.0.14, and 11.0.14. Users are advised to upgrade. Users unable to upgrade may set the multipart parameter `maxRequestSize` which must be set to a non-negative value, so the whole multipart content is limited (although still read into memory).
<p>Publish Date: 2023-04-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26048>CVE-2023-26048</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-qw69-rqj8-6qw8">https://github.com/eclipse/jetty.project/security/advisories/GHSA-qw69-rqj8-6qw8</a></p>
<p>Release Date: 2023-04-18</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-server:9.4.51.v20230217,10.0.14,11.0.14;org.eclipse.jetty:jetty-runner:9.4.51.v20230217,10.0.14,11.0.14</p>
</p>
</details>
<p></p>
| CVE-2023-26048 (Medium) detected in jetty-server-11.0.12.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/2533/comments | 5 | 2023-04-19T16:12:11Z | 2023-10-26T18:28:50Z | https://github.com/opensearch-project/data-prepper/issues/2533 | 1,675,222,254 | 2,533 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-26049 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jetty-http-11.0.12.jar</b>, <b>jetty-server-11.0.12.jar</b></p></summary>
<p>
<details><summary><b>jetty-http-11.0.12.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-http/11.0.12/bf07349f47ab6b11f1329600f37dffb136d5d7c/jetty-http-11.0.12.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.0-beta-8.jar (Root Library)
- jetty-server-11.0.12.jar
- :x: **jetty-http-11.0.12.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-server-11.0.12.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-server/11.0.12/29c82ff7e059ee1e454af6d391834abadf24e60/jetty-server-11.0.12.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.0-beta-8.jar (Root Library)
- :x: **jetty-server-11.0.12.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/38c0aad8c814ffe1bddae84012ebe80e54225ec2">38c0aad8c814ffe1bddae84012ebe80e54225ec2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Jetty is a java based web server and servlet engine. Nonstandard cookie parsing in Jetty may allow an attacker to smuggle cookies within other cookies, or otherwise perform unintended behavior by tampering with the cookie parsing mechanism. If Jetty sees a cookie VALUE that starts with `"` (double quote), it will continue to read the cookie string until it sees a closing quote -- even if a semicolon is encountered. So, a cookie header such as: `DISPLAY_LANGUAGE="b; JSESSIONID=1337; c=d"` will be parsed as one cookie, with the name DISPLAY_LANGUAGE and a value of b; JSESSIONID=1337; c=d instead of 3 separate cookies. This has security implications because if, say, JSESSIONID is an HttpOnly cookie, and the DISPLAY_LANGUAGE cookie value is rendered on the page, an attacker can smuggle the JSESSIONID cookie into the DISPLAY_LANGUAGE cookie and thereby exfiltrate it. This is significant when an intermediary is enacting some policy based on cookies, so a smuggled cookie can bypass that policy yet still be seen by the Jetty server or its logging system. This issue has been addressed in versions 9.4.51, 10.0.14, 11.0.14, and 12.0.0.beta0 and users are advised to upgrade. There are no known workarounds for this issue.
<p>Publish Date: 2023-04-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26049>CVE-2023-26049</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p26g-97m4-6q7c">https://github.com/advisories/GHSA-p26g-97m4-6q7c</a></p>
<p>Release Date: 2023-04-18</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-http:9.4.51.v20230217,10.0.14,11.0.14, org.eclipse.jetty:jetty-runner:9.4.51.v20230217,10.0.14,11.0.14, org.eclipse.jetty:jetty-server:9.4.51.v20230217,10.0.14,11.0.14</p>
</p>
</details>
<p></p>
| CVE-2023-26049 (Medium) detected in jetty-http-11.0.12.jar, jetty-server-11.0.12.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/2532/comments | 5 | 2023-04-19T16:12:09Z | 2023-10-26T18:28:43Z | https://github.com/opensearch-project/data-prepper/issues/2532 | 1,675,222,191 | 2,532 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, users configure the `route` component and `routes` in the sink in the pipeline YAML. This may be unclear to users about what is actually happening with routes, since both `route` and `routes` are lists and can be plural. For example,
```
route:
- my-route: "/a==b"
sink:
- testSink:
routes:
- "my-route"
```
**Describe the solution you'd like**
One proposal is to rename the top-level `route` component to `routes` for readability and understandability.
```
routes:
- my-route: "/a==b"
sink:
- testSink:
routes:
- "my-route"
```
The second proposal is to have `match` inside the sink. This is not required and might not add much value, as there are no other options inside `routes`. If needed, we can rename `match` in the sink.
```
routes:
- my-route: "/a==b"
sink:
- testSink:
routes:
match:
- "my-route"
```
**Additional context**
We originally changed `routes` to `route` based on PR feedback, and we are planning to go back to `routes`.
Please feel free to comment any other suggestions you might have. | Make conditional routing yaml configurations user friendly | https://api.github.com/repos/opensearch-project/data-prepper/issues/2520/comments | 0 | 2023-04-18T19:51:40Z | 2023-04-20T02:01:01Z | https://github.com/opensearch-project/data-prepper/issues/2520 | 1,673,740,996 | 2,520 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper currently has multiple plugins related to OTel, but we don't follow any standard naming convention.
For example, `otel_trace_source` and `otel_metrics_source`.
Also, having the processor keyword in a processor plugin's name is redundant.
For example, `otel_metrics_raw_processor`
**Describe the solution you'd like**
All the plugins should follow a common naming convention, and we can leverage https://github.com/opensearch-project/data-prepper/issues/2504 to make the changes without breaking anything.
Sources list,
`otel_trace_source` -> `otel_traces`
`otel_metrics_source` -> `otel_metrics`
`otel_logs_source` -> `otel_logs`
Processors list,
`otel_trace_raw` -> `otel_traces`
`otel_metrics_raw_processor` -> `otel_metrics`
`service_map_stateful` -> `service_map`
While it is allowed to have the same name for a source and a processor, I think it's a confusing user experience. Please comment if you have any suggestions.
| Consistency in OTel plugin names | https://api.github.com/repos/opensearch-project/data-prepper/issues/2515/comments | 4 | 2023-04-17T23:00:43Z | 2023-04-20T15:54:45Z | https://github.com/opensearch-project/data-prepper/issues/2515 | 1,672,083,831 | 2,515 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The CSV codec will skip some rows of data without emitting any metrics; the failure is only logged.
**Expected behavior**
I'd like more robust error handling. In the meantime, though, failing fast is better than difficult-to-detect data drops.
| [BUG] S3 source CSV codec drops data without metrics | https://api.github.com/repos/opensearch-project/data-prepper/issues/2512/comments | 0 | 2023-04-17T21:27:14Z | 2023-04-19T01:32:40Z | https://github.com/opensearch-project/data-prepper/issues/2512 | 1,671,994,170 | 2,512 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user, I would like to be able to determine the key of the event to be written to the OpenSearch Sink.
**Describe the solution you'd like**
The OpenSearch Sink will provide an option `document_root_key` which will determine the contents of the event to write to the sink. The document will be built based on the object at the `document_root_key`.
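A minimal sketch of the idea (illustrative Python; the `/`-separated path semantics and the event field names below are assumptions until the feature is designed):

```python
def select_document(event: dict, document_root_key: str):
    """Build the sink document from the sub-object at document_root_key,
    assuming '/'-separated event paths as used elsewhere in Data Prepper."""
    node = event
    for part in document_root_key.strip("/").split("/"):
        node = node[part]
    return node


# Hypothetical DLQ-style event; field names are illustrative only.
event = {"pluginId": "opensearch", "failedData": {"document": {"status": 503}}}
print(select_document(event, "/failedData/document"))  # {'status': 503}
```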
**Describe alternatives you've considered (Optional)**
This can be achieved by mutating events and deleting keys to ensure only the data I want is written to the opensearch sink. However, this is complex and adds complexity to my pipeline definition.
**Additional context**
This is relevant for processing DLQ Objects written to S3. If I create a DLQ pipeline which reads DLQ events written to S3, the entire DLQ object is then written to my OpenSearch sink unless I mutate and remove fields. Having a key to provide would simplify my pipeline configuration and reduce the amount of data stored in my opensearch sink.
| Indexing into the event to create the document for the OpenSearch Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/2511/comments | 0 | 2023-04-17T21:26:23Z | 2023-04-17T21:27:12Z | https://github.com/opensearch-project/data-prepper/issues/2511 | 1,671,992,992 | 2,511 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, if you want to rename a plugin, it has to be a breaking change, and there is no easy way to do it.
**Describe the solution you'd like**
We can achieve this by adding `deprecatedName` to existing `DataPrepperPlugin` annotation (credits: @dlvenable). For example,
```
@DataPrepperPlugin(name = "otel_trace", deprecatedName="otel_trace_raw", pluginType = Processor.class, pluginConfigurationType = OtelTraceRawProcessorConfig.class)
```
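A rough sketch of how name resolution with a deprecated alias could work (illustrative Python; the real plugin framework is Java, and the class and method names here are hypothetical):

```python
import warnings


class PluginRegistry:
    """Resolves a plugin by its current name or a deprecated alias."""

    def __init__(self):
        self._factories = {}
        self._deprecated = {}

    def register(self, name, factory, deprecated_name=None):
        self._factories[name] = factory
        if deprecated_name:
            self._deprecated[deprecated_name] = name

    def create(self, name):
        if name in self._factories:
            return self._factories[name]()
        if name in self._deprecated:
            current = self._deprecated[name]
            warnings.warn(
                f"Plugin name '{name}' is deprecated; use '{current}' instead.")
            return self._factories[current]()
        raise KeyError(f"No plugin named '{name}'")


registry = PluginRegistry()
registry.register("otel_trace", dict, deprecated_name="otel_trace_raw")
registry.create("otel_trace_raw")  # still resolves, with a deprecation warning
```

Emitting a warning when the deprecated name is used gives users a migration window before the old name is eventually removed.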
**Describe alternatives you've considered (Optional)**
1. Duplicate the entire plugin package.
2. Duplicate the plugin class with `DataPrepperPlugin` annotation and create a new class with new name in `DataPrepperPlugin`. | Allow changing plugin names without breaking changes | https://api.github.com/repos/opensearch-project/data-prepper/issues/2504/comments | 1 | 2023-04-17T03:33:27Z | 2023-04-18T02:10:08Z | https://github.com/opensearch-project/data-prepper/issues/2504 | 1,670,342,385 | 2,504 |
[
"opensearch-project",
"data-prepper"
] | ### Discussed in https://github.com/opensearch-project/data-prepper/discussions/2499
<div type='discussions-op-text'>
<sup>Originally posted by **ccntechgit** April 15, 2023</sup>
OCP 4.12
Deployed data-prepper:2 in my namespace using the unchanged k8s deployment-template file "data-prepper-k8s.yaml"
Container started and crashes with the following errors:
Reading pipelines and data-prepper configuration files from Data Prepper home directory.
/opt/java/openjdk/bin/java
Found openjdk version of 17.0
2023-04-15 23:35:46,808 main ERROR Unable to create file log/data-prepper/data-prepper.log java.io.IOException: Could not create directory /usr/share/data-prepper/log/data-prepper
<lines skipped>
023-04-15 23:35:46,814 main ERROR Could not create plugin of type class org.apache.logging.log4j.core.appender.RollingFileAppender for element RollingFile: java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory@672872e1] unable to create manager for [log/data-prepper/data-prepper.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$FactoryData@32910148[pattern=logs/data-prepper.log.%d{MM-dd-yy-HH}-%i.gz, append=true, bufferedIO=true, bufferSize=8192, policy=CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=true), SizeBasedTriggeringPolicy(size=104857600)]), strategy=DefaultRolloverStrategy(min=1, max=168, useMax=true), advertiseURI=null, layout=%d{ISO8601} [%t] %-5p %40C - %m%n, filePermissions=null, fileOwner=null]] java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory@672872e1] unable to create manager for [log/data-prepper/data-prepper.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$FactoryData@32910148[pattern=logs/data-prepper.log.%d{MM-dd-yy-HH}-%i.gz, append=true, bufferedIO=true, bufferSize=8192, policy=CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=true), SizeBasedTriggeringPolicy(size=104857600)]), strategy=DefaultRolloverStrategy(min=1, max=168, useMax=true), advertiseURI=null, layout=%d{ISO8601} [%t] %-5p %40C - %m%n, filePermissions=null, fileOwner=null]]
<lines skipped>
2023-04-15 23:35:46,816 main ERROR Unable to invoke factory method in class org.apache.logging.log4j.core.appender.RollingFileAppender for element RollingFile: java.lang.IllegalStateException: No factory method found for class org.apache.logging.log4j.core.appender.RollingFileAppender java.lang.IllegalStateException: No factory method found for class org.apache.logging.log4j.core.appender.RollingFileAppender
If you need full see attached
[console-output.log](https://github.com/opensearch-project/data-prepper/files/11240901/console-output.log)
</div> | data-prepper crashing on openshift | https://api.github.com/repos/opensearch-project/data-prepper/issues/2500/comments | 0 | 2023-04-16T16:15:01Z | 2023-04-24T17:07:41Z | https://github.com/opensearch-project/data-prepper/issues/2500 | 1,670,032,504 | 2,500 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, when end-to-end acknowledgements are enabled for the S3 source, an acknowledgement set is created per SQS message (which is good/correct), but each SQS message is also deleted separately when its acknowledgements are received.
The deletions (upon receiving acknowledgements) can be batched to improve performance and reduce costs for users.
**Describe the solution you'd like**
Keep track of the acknowledged SQS messages and delete them when a pre-configured amount of messages are collected or upon a timeout.
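One possible shape for such a batcher (an illustrative Python sketch against a boto3-style client; the class name and defaults are assumptions):

```python
import time


class SqsDeleteBatcher:
    """Collect acknowledged receipt handles and delete them in batches of
    up to 'max_batch' (SQS allows at most 10 entries per DeleteMessageBatch
    call) or once 'flush_interval' seconds have passed since the last flush.
    Note: the time-based flush here only fires when acknowledge() is called;
    a real implementation would also flush from a background timer."""

    def __init__(self, client, queue_url, max_batch=10, flush_interval=30,
                 clock=time.monotonic):
        self.client, self.queue_url = client, queue_url
        self.max_batch, self.flush_interval = max_batch, flush_interval
        self.clock = clock
        self.pending = []
        self.last_flush = clock()

    def acknowledge(self, receipt_handle):
        self.pending.append(receipt_handle)
        if (len(self.pending) >= self.max_batch
                or self.clock() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self):
        if self.pending:
            entries = [{"Id": str(i), "ReceiptHandle": rh}
                       for i, rh in enumerate(self.pending)]
            self.client.delete_message_batch(QueueUrl=self.queue_url,
                                             Entries=entries)
            self.pending.clear()
        self.last_flush = self.clock()
```

Batching the deletes reduces the number of SQS API calls, which is where the performance and cost improvement comes from.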
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Improve SQS message delete performance when e2e acks enabled | https://api.github.com/repos/opensearch-project/data-prepper/issues/2498/comments | 0 | 2023-04-15T00:26:23Z | 2023-04-17T14:40:31Z | https://github.com/opensearch-project/data-prepper/issues/2498 | 1,669,074,762 | 2,498 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper deletes SQS messages for S3 events after writing to the buffer. However, if the events are not sent to the OpenSearch sink, this is not ideal behavior. It means that Data Prepper will not retry these objects and the messages will not go to the SQS DLQ.
**Describe the solution you'd like**
Provide an optional feature on the `s3` source so that it does not delete from the SQS queue until either:
* The OpenSearch sink commits to OpenSearch
* The OpenSearch sink writes to its DLQ
Related to #851, but specifically for the S3 source.
| End-to-end acknowledgements for S3 source | https://api.github.com/repos/opensearch-project/data-prepper/issues/2496/comments | 1 | 2023-04-14T20:01:22Z | 2023-04-21T16:24:30Z | https://github.com/opensearch-project/data-prepper/issues/2496 | 1,668,884,467 | 2,496 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-20863 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-expression-5.3.26.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /data-prepper-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-expression/5.3.26/75ccfb9a99560d6a6b2654eae88896ed58b3e428/spring-expression-5.3.26.jar</p>
<p>
Dependency Hierarchy:
- data-prepper-expression-2.2.0-SNAPSHOT (Root Library)
- spring-context-5.3.26.jar
- :x: **spring-expression-5.3.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions prior to 5.2.24.RELEASE, 5.3.27, and 6.0.8, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial-of-service (DoS) condition.
<p>Publish Date: 2023-04-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20863>CVE-2023-20863</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-20863">https://nvd.nist.gov/vuln/detail/CVE-2023-20863</a></p>
<p>Release Date: 2023-04-13</p>
<p>Fix Resolution: org.springframework:spring-expression - 5.2.24.RELEASE,6.0.8,6.0.8</p>
</p>
</details>
<p></p>
| CVE-2023-20863 (High) detected in spring-expression-5.3.26.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/2492/comments | 0 | 2023-04-14T11:57:59Z | 2023-04-20T14:46:01Z | https://github.com/opensearch-project/data-prepper/issues/2492 | 1,668,118,410 | 2,492 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the S3 source with SQS, I have messages/objects that differ in size. This means that there is no single optimal visibility timeout for the SQS queue, as too small a timeout causes issues with large messages, and too large a timeout could delay the reprocessing of files if Data Prepper were to crash.
**Describe the solution you'd like**
Making timely calls to the `ChangeMessageVisibility` API of SQS from the S3 source. This could be an optional parameter for the SQS queue.
```
source:
s3:
sqs:
visibility_timeout: "dynamic"
```
The S3 source would be responsible for keeping track of the time that it has been processing a message, and would make an API call if it couldn't process the message in time. For example, if the visibility timeout of the queue is 2 minutes, and the S3 source pulls this message and finds it won't be able to process it in time, an API call to `ChangeMessageVisibility` would be made to increase the visibility timeout for the message by another 2 minutes. This would continue until the message is fully processed, or until the instance of Data Prepper crashes, which means the visibility timeout would not be increased again, and another instance of Data Prepper could grab the message as intended.
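A sketch of that renewal loop (illustrative Python against a boto3-style client; the real implementation would live in the Java S3 source, and the chunked `process_chunk` contract is an assumption):

```python
import time


def process_with_visibility_heartbeat(sqs_client, queue_url, receipt_handle,
                                      visibility_timeout, process_chunk,
                                      clock=time.monotonic):
    """Extend the message's visibility while processing is still underway.
    process_chunk() does a bounded unit of work and returns True when done."""
    deadline = clock() + visibility_timeout
    while not process_chunk():
        # Renew once less than half the window remains, leaving a margin
        # so the message is not redelivered mid-processing.
        if clock() > deadline - visibility_timeout / 2:
            sqs_client.change_message_visibility(
                QueueUrl=queue_url,
                ReceiptHandle=receipt_handle,
                VisibilityTimeout=visibility_timeout,
            )
            deadline = clock() + visibility_timeout
```

If the process crashes, the heartbeat simply stops, the current visibility window expires, and another consumer can receive the message.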
**Describe alternatives you've considered (Optional)**
Defaulting the visibility timeout to a much larger value (maybe even the max of 12 hr), and then if Data Prepper is going to shut down, calling `ChangeMessageVisibility` with a value of 0 to allow another instance of Data Prepper to immediately receive the message.
| Support dynamically changing the visibility timeout for S3 Source with SQS queue | https://api.github.com/repos/opensearch-project/data-prepper/issues/2485/comments | 2 | 2023-04-13T18:21:49Z | 2023-11-08T17:10:32Z | https://github.com/opensearch-project/data-prepper/issues/2485 | 1,666,922,300 | 2,485 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The S3 source visibility timeout defaults to 30 seconds in both SQS and Data Prepper. However, if the visibility timeout is changed on the SQS queue and not configured in the S3 source, Data Prepper's 30-second default still overrides the queue's setting in the call that pulls messages from SQS. This is confusing and can lead to unexpected timeouts when processing larger messages from the queue.
**Describe the solution you'd like**
If no `visibility_timeout` is set on the sqs queue in the data prepper pipeline configuration, the SQS queue's visibility timeout should be used, rather than the default visibility timeout set by data prepper.
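The fix amounts to only sending the parameter when it is explicitly configured. A minimal sketch (illustrative Python against a boto3-style client; the function name is hypothetical):

```python
def receive_messages(sqs_client, queue_url, visibility_timeout=None):
    """Only pass VisibilityTimeout when the pipeline explicitly configures
    it; otherwise the SQS queue's own setting applies."""
    kwargs = {"QueueUrl": queue_url, "MaxNumberOfMessages": 10}
    if visibility_timeout is not None:
        kwargs["VisibilityTimeout"] = visibility_timeout
    return sqs_client.receive_message(**kwargs)
```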
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| S3 Source SQS queue visibility timeout is overridden when it is not configured | https://api.github.com/repos/opensearch-project/data-prepper/issues/2484/comments | 4 | 2023-04-13T17:01:05Z | 2023-06-13T18:52:29Z | https://github.com/opensearch-project/data-prepper/issues/2484 | 1,666,791,576 | 2,484 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. cd examples/trace-analytics-sample-app
2. docker-compose up
3. The app stopped, error message is `Waiting for databaseService to be ready`
**Expected behavior**
databaseService start successfully.
**Screenshots**
**Environment (please complete the following information):**
- OS: Windows 10, with wsl2
**Additional context**
When I get into the container and run `python3 databaseService.py`, I got this error.
```
You are using Python 3.6. This version does not support timestamps with nanosecond precision and the OpenTelemetry SDK will use millisecond precision instead. Please refer to PEP 564 for more information. Please upgrade to Python 3.7 or newer to use nanosecond precision.
Traceback (most recent call last):
File "databaseService.py", line 11, in <module>
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
File "/usr/lib/python3.6/site-packages/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py", line 22, in <module>
from opentelemetry.exporter.otlp.proto.grpc.exporter import (
File "/usr/lib/python3.6/site-packages/opentelemetry/exporter/otlp/proto/grpc/exporter.py", line 29, in <module>
from google.rpc.error_details_pb2 import RetryInfo
File "/usr/lib/python3.6/site-packages/google/rpc/error_details_pb2.py", line 39, in <module>
_RETRYINFO = DESCRIPTOR.message_types_by_name["RetryInfo"]
AttributeError: 'NoneType' object has no attribute 'message_types_by_name'
```
Seems related to this issue.
https://github.com/protocolbuffers/protobuf/issues/10075
| [BUG] AttributeError: 'NoneType' object has no attribute 'message_types_by_name' with Trace Analytics Sample App | https://api.github.com/repos/opensearch-project/data-prepper/issues/2477/comments | 0 | 2023-04-12T09:40:44Z | 2023-04-13T02:03:15Z | https://github.com/opensearch-project/data-prepper/issues/2477 | 1,664,210,560 | 2,477 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The S3 source with the CSV codec is not parsing the CSV file accurately. It appends an additional space before the first key.
**To Reproduce**
Steps to reproduce the behavior:
1. create pipeline with S3 source and csv codec
2. ingest some csv data
Pipeline config:
```
log-pipeline:
source:
s3:
notification_type: "sqs"
compression: none
codec:
csv:
sqs:
queue_url: "https://sqs.us-east-1.amazonaws.com/<account-id>/<queue-name>"
aws:
region: "us-east-1"
sts_role_arn: "arn:aws:iam::<account-id>:role/<role>"
on_error: "retain_messages"
processor:
- date:
from_time_received: true
destination: "@timestamp"
- delete_entries:
with_keys: ["col1"]
sink:
- stdout:
```
Input:
```
col1,col2
a,101
b,102
```
Output:
```
{" col1":"a","col2":"101","s3":{"bucket":"bucket-name","key":"Book1.csv"},"@timestamp":"2023-04-10T14:48:53.411-05:00"}
{" col1":"b","col2":"102","s3":{"bucket":"bucket-name","key":"Book1.csv"},"@timestamp":"2023-04-10T14:48:53.580-05:00"}
```
**Expected behavior**
- There shouldn't be a space before `col1` in the above output, and `delete_entries` should delete the key.
| [BUG] S3 source csv codec parsing | https://api.github.com/repos/opensearch-project/data-prepper/issues/2471/comments | 1 | 2023-04-10T19:54:20Z | 2023-04-11T05:42:52Z | https://github.com/opensearch-project/data-prepper/issues/2471 | 1,661,273,254 | 2,471 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When integrating with fluent-bit, metric data points are occasionally missing.
**To Reproduce**
Steps to reproduce the behavior:
A cloudformation template:
```
AWSTemplateFormatVersion: "2010-09-09"
Description: "Template to deploy and monitor Data Prepper on ECS Fargate tasks"
Resources:
FargateTask:
Type: AWS::ECS::TaskDefinition
Properties:
NetworkMode: awsvpc
Family: data-prepper-emf-metrics
TaskRoleArn: arn:aws:iam::531516510575:role/ecsTaskExecutionRole
ExecutionRoleArn: arn:aws:iam::531516510575:role/ecsTaskExecutionRole
Cpu: 1024
Memory: 8192
RequiresCompatibilities:
- "FARGATE"
ContainerDefinitions:
- Name: data-prepper-poc
Image: 531516510575.dkr.ecr.us-east-1.amazonaws.com/data-prepper-poc:EMF-METRICS-LOGGING-POC
MemoryReservation: 2048
Essential: true
PortMappings:
- ContainerPort: 21890
HostPort: 21890
Protocol: tcp
- ContainerPort: 2021
HostPort: 2021
Protocol: tcp
- ContainerPort: 4900
HostPort: 4900
Protocol: tcp
LogConfiguration:
LogDriver: awsfirelens
Options:
Name: kinesis_streams
Match: emf-DataPrepper
log_key: log
stream: MetricLogDataStream-dev-us-east-1
region: us-east-1
Command:
- /bin/bash
- -c
- |
echo $PIPELINES_YAML | base64 -d - | tee /usr/share/data-prepper/pipelines/pipelines.yaml
echo $DATA_PREPPER_CONFIG_YAML | base64 -d - | tee /usr/share/data-prepper/config/data-prepper-config.yaml
bin/data-prepper
Environment:
- Name: PIPELINES_YAML
Value:
Fn::Base64: |
log-pipeline:
source:
http:
health_check_service: true
processor:
- date:
from_time_received: true
destination: "@timestamp"
sink:
- stdout:
- Name: DATA_PREPPER_CONFIG_YAML
Value:
Fn::Base64: |
ssl: false
metricRegistries:
- "EmbeddedMetricsFormat"
- "CloudWatch"
metricTags:
serviceName: test-emf
accountId: 531516510575
- Name: log_router
Image: public.ecr.aws/aws-observability/aws-for-fluent-bit:init-latest
MemoryReservation: 50
Essential: true
FirelensConfiguration:
Type: fluentbit
Options:
config-file-type: "file"
config-file-value: "/fluent-bit.conf"
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group: 531516510575/test-log-pipeline-firelens-event-logs
awslogs-region: us-east-1
awslogs-stream-prefix: 531516510575/test-log-pipeline-firelens-event-logs
awslogs-create-group: true
Environment:
- Name: FLUENT_BIT_CONF
Value:
Fn::Base64: |
[INPUT]
Name tcp
Listen 0.0.0.0
Port 25888
Chunk_Size 32
Buffer_Size 64
Format none
Tag emf-DataPrepper
[FILTER]
Name record_modifier
Match data-prepper-poc*-firelens*
Record resource_id 531516510575/test-log-pipeline
[OUTPUT]
Name cloudwatch
Match data-prepper-poc*-firelens*
region us-east-1
log_key log
log_group_name pepper-poc-logs
log_stream_prefix pepper-poc-logs
auto_create_group true
[OUTPUT]
Name cloudwatch
Match emf-DataPrepper
region us-east-1
log_key log
log_group_name pepper-metric-logs
log_stream_prefix pepper-metric-logs
auto_create_group true
log_format json/emf
Command:
- /bin/bash
- -c
- |
echo $FLUENT_BIT_CONF | base64 -d - | tee /fluent-bit.conf
echo -n "AWS for Fluent Bit Container Image Version "
cat /AWS_FOR_FLUENT_BIT_VERSION
exec /fluent-bit/bin/fluent-bit -e /fluent-bit/firehose.so -e /fluent-bit/cloudwatch.so -e /fluent-bit/kinesis.so -c /fluent-bit/etc/fluent-bit.conf
```
deploy the template yields ECS task definition that one can create a service in an ECS cluster
**Expected behavior**
No missing data
**Screenshots**

**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] Missing datapoints in EMFLoggingMeterRegistry | https://api.github.com/repos/opensearch-project/data-prepper/issues/2469/comments | 3 | 2023-04-10T17:43:45Z | 2023-04-21T16:32:44Z | https://github.com/opensearch-project/data-prepper/issues/2469 | 1,661,115,927 | 2,469 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I receive the below error
```
2023-04-07T14:45:33,589 [main] INFO org.opensearch.dataprepper.DataPrepperArgumentConfiguration - Command line args: /usr/share/data-prepper/pipelines,/usr/share/data-prepper/config/data-prepper-config.yaml
2023-04-07T14:45:35,496 [main] ERROR org.opensearch.dataprepper.parser.PipelineParser - Pipelines configuration file not found at /usr/share/data-prepper/pipelines
```
the docker-compose file I am using
```
data-prepper:
container_name: data-prepper
image: docker.io/opensearchproject/data-prepper:latest
volumes:
- ./log-pipeline.yaml:/usr/share/data-prepper/pipelines.yaml:rw
- ./data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml:rw
environment:
ENV_PIPELINE_FILEPATH: '["/usr/share/data-prepper-config/pipelines.yaml"]'
ENV_CONFIG_FILEPATH: '["/usr/share/data-prepper-config/data-prepper-config.yaml"]'
ports:
- 4900:4900
networks:
- opensearch-net
```
**Note**: I added the environment section at the very end of my attempts; I get the same error without it.
the Pipelines file I am using
```
simple-sample-pipeline:
workers: 2
delay: "5000"
source:
random:
sink:
- stdout:
```
if I try to change the ```pipelines.yaml``` file to just ```pipelines```, I receive an SELinux error
```
Error response from daemon: runc: runc create failed: unable to start container process: error during container init: error mounting "/home/admin/docker/log-pipeline.yaml" to rootfs at "/usr/share/data-prepper/pipelines": mount /home/admin/docker/log-pipeline.yaml:/usr/share/data-prepper/pipelines (via /proc/self/fd/6), flags: 0x5006, data: context="system_u:object_r:container_file_t:s0:c560,c842": not a directory: OCI runtime error
```
**Expected behavior**
A docker container using data-prepper should be running
.
**Environment (please complete the following information):**
- OS: RHEL 8.7
| Data-Prepper not using pipelines.yaml | https://api.github.com/repos/opensearch-project/data-prepper/issues/2461/comments | 4 | 2023-04-07T15:11:33Z | 2023-09-01T15:48:15Z | https://github.com/opensearch-project/data-prepper/issues/2461 | 1,658,913,466 | 2,461 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. It would be nice to have [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add exemplars into metrics generated for ingesting traces | https://api.github.com/repos/opensearch-project/data-prepper/issues/2456/comments | 4 | 2023-04-06T20:08:00Z | 2023-04-24T16:05:01Z | https://github.com/opensearch-project/data-prepper/issues/2456 | 1,657,975,602 | 2,456 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper should report metrics when SQS messages fail to delete.
Additionally, SQS can return a success code while some individual messages failed to delete. Data Prepper should handle this.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteMessageBatch.html
**Describe the solution you'd like**
Provide a new counter metric: `sqsMessagesDeleteFailed`
| Improve error reporting with S3 source and SQS deletes | https://api.github.com/repos/opensearch-project/data-prepper/issues/2449/comments | 0 | 2023-04-05T14:56:52Z | 2023-04-05T20:58:07Z | https://github.com/opensearch-project/data-prepper/issues/2449 | 1,655,789,967 | 2,449 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I want to be able to read [Avro](https://avro.apache.org/docs/) files from S3 objects and create new Data Prepper events from them.
**Describe the solution you'd like**
Implement a new input codec using the work planned for #1532.
| Support parsing Avro files from S3 (and other sources) | https://api.github.com/repos/opensearch-project/data-prepper/issues/2446/comments | 1 | 2023-04-04T13:28:03Z | 2023-08-18T18:00:51Z | https://github.com/opensearch-project/data-prepper/issues/2446 | 1,653,894,756 | 2,446 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I may need to read [Parquet](https://github.com/apache/parquet-format)-formatted files from S3 buckets.
**Describe the solution you'd like**
Using the generic source codecs provided in #1532, have a `parquet` codec.
**Describe alternatives you've considered (Optional)**
Using the #1971 behavior, let S3 Select decode Parquet. However, this requires using this additional feature and may impact cost. | Support parsing Parquet formatted files from S3 (and other sources) | https://api.github.com/repos/opensearch-project/data-prepper/issues/2445/comments | 2 | 2023-04-04T13:26:25Z | 2023-07-26T16:48:51Z | https://github.com/opensearch-project/data-prepper/issues/2445 | 1,653,890,749 | 2,445 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
OpenSearch Dashboards is not displaying the traces in the Observability traces window when the raw-pipeline is configured with `index_type: custom`.
Versions of software's in use are
Dataprepper: 2.1
OpenSearch and OpenSearch-Dashboards: 2.6
Below is the configured pipelines.yml
```
entry-pipeline:
workers: 12
source:
otel_trace_source:
ssl: true
sslKeyCertChainFile: "/opt/nsp/os/ssl/certs/nsp/nsp_internal.pem"
sslKeyFile: "/opt/nsp/os/ssl/nsp_internal.key"
buffer:
bounded_blocking:
buffer_size: 6000
batch_size: 125
sink:
- pipeline:
name: "raw-pipeline"
- pipeline:
name: "service-map-pipeline"
raw-pipeline:
workers: 12
source:
pipeline:
name: "entry-pipeline"
buffer:
bounded_blocking:
buffer_size: 6000
batch_size: 125
processor:
- otel_trace_raw:
sink:
- opensearch:
hosts: [ "https://{{ .Values.opensearch.service.name }}:{{ .Values.opensearch.service.port }}" ]
cert: "/opt/nsp/os/ssl/internal_ca_cert.pem"
username: "admin"
password: "admin"
index: otel-v1-apm-span-%{yyyy.MM.dd}
index_type: custom
service-map-pipeline:
workers: 12
source:
pipeline:
name: "entry-pipeline"
buffer:
bounded_blocking:
buffer_size: 6000
batch_size: 125
processor:
- service_map_stateful:
sink:
- opensearch:
hosts: [ "https://{{ .Values.opensearch.service.name }}:{{ .Values.opensearch.service.port }}" ]
cert: "/opt/nsp/os/ssl/internal_ca_cert.pem"
username: "admin"
password: "admin"
index_type: trace-analytics-service-map
```
When we try to view/refresh/pull the traces from OpenSearch Dashboards, I see the errors below.
```
StatusCodeError: [illegal_argument_exception] Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [serviceName] in order to load field data by uninverting the inverted index. Note that this can use significant memory.
at respond (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:349:15)
at checkRespForFailure (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:306:7)
at HttpConnector.<anonymous> (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
at IncomingMessage.wrapper (/usr/share/opensearch-dashboards/node_modules/lodash/lodash.js:4991:19)
at IncomingMessage.emit (events.js:412:35)
at IncomingMessage.emit (domain.js:475:12)
at endReadableNT (internal/streams/readable.js:1333:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
status: 400,
displayName: 'BadRequest',
path: '/otel-v1-apm-service-map*/_search',
query: { size: 0 },
body: {
error: {
root_cause: [Array],
type: 'search_phase_execution_exception',
reason: 'all shards failed',
phase: 'query',
grouped: true,
failed_shards: [Array],
caused_by: [Object]
},
status: 400
},
statusCode: 400,
response: '{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [name] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"otel-v1-apm-span-2023.04.03","node":"2bptfcmUTZS-fnTg1vp9jg","reason":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [name] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}],"caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [name] in order to load field data by uninverting the inverted index. Note that this can use significant memory.","caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [name] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}},"status":400}',
toString: [Function (anonymous)],
toJSON: [Function (anonymous)]
}
{"type":"response","@timestamp":"2023-04-03T17:53:53Z","tags":[],"pid":1,"method":"post","statusCode":200,"req":{"url":"/api/observability/stats","method":"post","headers":{"host":"135.249.148.155","x-request-id":"617b5ef19130c595c6092c005310f8c9","x-real-ip":"10.143.82.108","x-forwarded-for":"10.143.82.108","x-forwarded-host":"135.249.148.155","x-forwarded-port":"443","x-forwarded-proto":"https","x-forwarded-scheme":"https","x-scheme":"https","content-length":"44","sec-ch-ua":"\"Google Chrome\";v=\"111\", \"Not(A:Brand\";v=\"8\", \"Chromium\";v=\"111\"","content-type":"application/json","dnt":"1","sec-ch-ua-mobile":"?0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36","osd-version":"2.6.0","sec-ch-ua-platform":"\"Windows\"","accept":"*/*","origin":"https://135.249.148.155","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://135.249.148.155/logviewer/app/observability-dashboards","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9"},"remoteAddress":"10.233.121.35","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36","referer":"https://135.249.148.155/logviewer/app/observability-dashboards"},"res":{"statusCode":200,"responseTime":24,"contentLength":9},"message":"POST /api/observability/stats 200 24ms - 9.0B"}
{"type":"response","@timestamp":"2023-04-03T17:53:53Z","tags":[],"pid":1,"method":"post","statusCode":400,"req":{"url":"/api/observability/trace_analytics/query","method":"post","headers":{"host":"135.249.148.155","x-request-id":"7df90927d894aef041d43ce2593c39c1","x-real-ip":"10.143.82.108","x-forwarded-for":"10.143.82.108","x-forwarded-host":"135.249.148.155","x-forwarded-port":"443","x-forwarded-proto":"https","x-forwarded-scheme":"https","x-scheme":"https","content-length":"640","sec-ch-ua":"\"Google Chrome\";v=\"111\", \"Not(A:Brand\";v=\"8\", \"Chromium\";v=\"111\"","content-type":"application/json","dnt":"1","sec-ch-ua-mobile":"?0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36","osd-version":"2.6.0","sec-ch-ua-platform":"\"Windows\"","accept":"*/*","origin":"https://135.249.148.155","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://135.249.148.155/logviewer/app/observability-dashboards","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9"},"remoteAddress":"10.233.121.35","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36","referer":"https://135.249.148.155/logviewer/app/observability-dashboards"},"res":{"statusCode":400,"responseTime":29,"contentLength":9},"message":"POST /api/observability/trace_analytics/query 400 29ms - 9.0B"}
```
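The `illegal_argument_exception` in the logs indicates that fields such as `serviceName` and `name` are mapped as `text` in the custom index. With `index_type: custom`, Data Prepper does not apply its built-in trace-analytics index template, so the custom index would need `keyword` mappings along these lines (a sketch; the field list is illustrative, not exhaustive):

```json
{
  "mappings": {
    "properties": {
      "serviceName": { "type": "keyword" },
      "name":        { "type": "keyword" },
      "traceId":     { "type": "keyword" }
    }
  }
}
```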
Can you please guide me on any additional configuration needed in Data Prepper or OpenSearch/OpenSearch Dashboards to have the traces displayed in the observability Traces window? | [BUG] Opensearch dashboards not displaying the traces at the observability traces window when the raw-pipeline is configured with index_type: custom | https://api.github.com/repos/opensearch-project/data-prepper/issues/2443/comments | 4 | 2023-04-04T05:13:22Z | 2023-10-15T20:31:31Z | https://github.com/opensearch-project/data-prepper/issues/2443 | 1,653,186,078 | 2,443 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The running Java process does not get killed when Data Prepper itself terminates. This is problematic when running Data Prepper inside a Docker container with a Kubernetes HorizontalPodAutoscaler controlling it. Right now Data Prepper terminates for some reason; however, the container/pod does not terminate and sticks around as a zombie pod, holding onto resources but unable to function. I believe this is because PID 1 (the java command running Data Prepper) does not get terminated when the Data Prepper process stops.
**To Reproduce**
Really you just need Data Prepper to try to use more resources than are available in a container/pod. Here is one way:
1. Run data prepper in kubernetes
1. Set the pod resource limit to something low like [cpu 100m](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#example-1) to force it to run into resource issues
1. Configure data prepper to accept data
1. Fire a lot of data at Data Prepper really fast. (One way is to build up a buffer in your source and then try to send it to Data Prepper all at once.) What I did was force-restart Data Prepper a few times while simultaneously generating a bunch of source data to send to it. Data Prepper then gets overwhelmed, tries to get more resources, throws an error, and terminates.
Sample error
```
2023-04-03T17:18:30,799 [pool-7-thread-83] ERROR com.amazon.dataprepper.plugins.source.otellogs.OTelLogsGrpcService - Failed to write the request of size 2380052 due to:
java.util.concurrent.TimeoutException: Pipeline [log-pipeline] - Buffer does not have enough capacity left for the size of records: 2611, timed out waiting for slots.
at org.opensearch.dataprepper.plugins.buffer.blockingbuffer.BlockingBuffer.doWriteAll(BlockingBuffer.java:123) ~[blocking-buffer-2.1.0.jar:?]
at org.opensearch.dataprepper.model.buffer.AbstractBuffer.writeAll(AbstractBuffer.java:100) ~[data-prepper-api-2.1.0.jar:?]
at org.opensearch.dataprepper.plugins.MultiBufferDecorator.writeAll(MultiBufferDecorator.java:39) ~[data-prepper-core-2.1.0.jar:?]
at com.amazon.dataprepper.plugins.source.otellogs.OTelLogsGrpcService.processRequest(OTelLogsGrpcService.java:106) ~[otel-logs-source-2.1.0.jar:?]
at com.amazon.dataprepper.plugins.source.otellogs.OTelLogsGrpcService.lambda$export$0(OTelLogsGrpcService.java:80) ~[otel-logs-source-2.1.0.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.10.3.jar:1.10.3]
at com.amazon.dataprepper.plugins.source.otellogs.OTelLogsGrpcService.export(OTelLogsGrpcService.java:80) ~[otel-logs-source-2.1.0.jar:?]
at io.opentelemetry.proto.collector.logs.v1.LogsServiceGrpc$MethodHandlers.invoke(LogsServiceGrpc.java:246) ~[opentelemetry-proto-0.16.0-alpha.jar:0.16.0]
at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182) ~[grpc-stub-1.49.1.jar:1.49.1]
at com.linecorp.armeria.server.grpc.AbstractServerCall.invokeOnMessage(AbstractServerCall.java:384) ~[armeria-grpc-1.22.1.jar:?]
at com.linecorp.armeria.server.grpc.AbstractServerCall.lambda$onRequestMessage$2(AbstractServerCall.java:348) ~[armeria-grpc-1.22.1.jar:?]
at com.linecorp.armeria.internal.shaded.guava.util.concurrent.SequentialExecutor$1.run(SequentialExecutor.java:123) ~[armeria-1.22.1.jar:?]
at com.linecorp.armeria.internal.shaded.guava.util.concurrent.SequentialExecutor$QueueWorker.workOnQueue(SequentialExecutor.java:235) ~[armeria-1.22.1.jar:?]
at com.linecorp.armeria.internal.shaded.guava.util.concurrent.SequentialExecutor$QueueWorker.run(SequentialExecutor.java:180) ~[armeria-1.22.1.jar:?]
at com.linecorp.armeria.common.RequestContext.lambda$makeContextAware$3(RequestContext.java:566) ~[armeria-1.22.1.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
2023-04-03T17:18:30,800 [log-pipeline-sink-worker-4-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - Shutting down process workers
2023-04-03T17:19:01,806 [log-pipeline-sink-worker-4-thread-1] WARN org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - Workers did not terminate in time, forcing termination
2023-04-03T17:19:05,278 [log-pipeline-processor-worker-3-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.FutureHelper - FutureTask is interrupted or timed out
2023-04-03T17:19:06,287 [log-pipeline-processor-worker-3-thread-1] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Processor shutdown phase 1 complete.
2023-04-03T17:19:07,300 [log-pipeline-processor-worker-3-thread-1] INFO org.opensearch.dataprepper.pipeline.ProcessWorker - Beginning processor shutdown phase 2, iterating until buffers empty.
2023-04-03T17:19:07,300 [log-pipeline-sink-worker-4-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - Shutting down process workers
Exception in thread "HTTP-Dispatcher" Exception in thread "I/O dispatcher 1" java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
2023-04-03T17:20:00,693 [log-pipeline-sink-worker-4-thread-1] WARN org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - Workers did not terminate in time, forcing termination
2023-04-03T17:20:00,693 [metrics-pipeline-processor-worker-5-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [metrics-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space: failed reallocation of scalar replaced objects
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) ~[data-prepper-core-2.1.0.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: java.lang.OutOfMemoryError: Java heap space: failed reallocation of scalar replaced objects
2023-04-03T17:20:00,693 [metrics-pipeline-processor-worker-5-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [metrics-pipeline] - Received shutdown signal with processor shutdown timeout PT30S and sink shutdown timeout PT30S. Initiating the shutdown process
2023-04-03T17:20:00,693 [log-pipeline-processor-worker-3-thread-1] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - Encountered exception during pipeline log-pipeline processing
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@416a9cb0[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@706b5ab7[Wrapped task = org.opensearch.dataprepper.pipeline.Pipeline$$Lambda$1099/0x00000008014068d8@6dccc3d5]] rejected from org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor@6f8299f9[Shutting down, pool size = 1, active threads = 1, queued tasks = 1, completed tasks = 109]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[?:?]
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134) ~[?:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$4(Pipeline.java:262) ~[data-prepper-core-2.1.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.DataFlowComponentRouter.route(DataFlowComponentRouter.java:45) ~[data-prepper-core-2.1.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.Router.route(Router.java:42) ~[data-prepper-core-2.1.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.publishToSinks(Pipeline.java:261) ~[data-prepper-core-2.1.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.postToSink(ProcessWorker.java:117) ~[data-prepper-core-2.1.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:98) ~[data-prepper-core-2.1.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:52) ~[data-prepper-core-2.1.0.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
2023-04-03T17:20:00,694 [metrics-pipeline-processor-worker-5-thread-1] INFO org.opensearch.dataprepper.plugins.source.otelmetrics.OTelMetricsSource - Stopped otel_metrics_source.
2023-04-03T17:20:00,694 [metrics-pipeline-processor-worker-5-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [metrics-pipeline] - Shutting down process workers
2023-04-03T17:20:00,694 [metrics-pipeline-processor-worker-5-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [metrics-pipeline] - Encountered interruption terminating the pipeline execution, Attempting to force the termination
2023-04-03T17:20:00,694 [metrics-pipeline-processor-worker-5-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [metrics-pipeline] - Shutting down process workers
2023-04-03T17:20:00,694 [pool-4-thread-1] ERROR org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1 - I/O reactor terminated abnormally
org.apache.http.nio.reactor.IOReactorException: I/O dispatch worker terminated abnormally
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:359) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221) ~[httpasyncclient-4.1.5.jar:4.1.5]
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.5.jar:4.1.5]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: java.lang.OutOfMemoryError: Java heap space
```
**Expected behavior**
PID 1 to be terminated, and then the container/pod to terminate because PID 1 (the init cmd for the container) was terminated.
I ran this after it crashed, and you can see PID 1 was not terminated:
```
data-prepper-775db9b7b6-msg2v:/usr/share/data-prepper# ps -A
PID USER TIME COMMAND
1 root 5:23 java -Ddata-prepper.dir=/usr/share/data-prepper -Dlog4j.configurationFile=/usr/share/data-prepper/config/log4j2-rolling.properties -cp /u
482 root 0:00 sh -c clear; (bash || ash || sh)
489 root 0:00 bash
490 root 0:00 ps -A
```
**Environment (please complete the following information):**
- Docker
- Kubernetes
- Data prepper v2.1.0
**Additional context**
Note: data prepper does not crash when the resource cpu/mem limit is gone and I can view the resources used spike way over my preset limit if I run the same test.
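One possible mitigation, sketched below as a hypothetical container entrypoint (the main class name and flag choice are assumptions, not a confirmed fix), is to run the JVM with `exec` so it stays PID 1 and receives signals directly, and to make it exit as soon as the heap is exhausted so Kubernetes can restart the pod:

```sh
#!/bin/sh
# hypothetical entrypoint sketch; classpath elided ("...") as in the ps output above
# -XX:+ExitOnOutOfMemoryError terminates the JVM on OOM instead of leaving a wedged process
exec java -XX:+ExitOnOutOfMemoryError \
    -Ddata-prepper.dir=/usr/share/data-prepper \
    -Dlog4j.configurationFile=/usr/share/data-prepper/config/log4j2-rolling.properties \
    -cp ... org.opensearch.dataprepper.DataPrepperExecute
```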
| [BUG] Java process does not get killed when data prepper itself terminates | https://api.github.com/repos/opensearch-project/data-prepper/issues/2441/comments | 3 | 2023-04-03T19:57:28Z | 2023-05-05T02:15:42Z | https://github.com/opensearch-project/data-prepper/issues/2441 | 1,652,682,721 | 2,441 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Adding metrics based on the response code for the S3 PutObject call would be useful. This would give easier insight into the types of failures the DLQ Writer is experiencing.
**Describe the solution you'd like**
Metrics for:
- Auth issues
- Access Denied
- Bad Request
- Timeout
**Describe alternatives you've considered (Optional)**
There are a lot of exceptions that can be thrown. Are there any we should prioritize over the ones listed?
| S3 DLQ Writer response code metrics | https://api.github.com/repos/opensearch-project/data-prepper/issues/2440/comments | 0 | 2023-04-03T17:26:28Z | 2023-04-05T21:01:45Z | https://github.com/opensearch-project/data-prepper/issues/2440 | 1,652,475,406 | 2,440 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
`jacocoTestCoverageVerification` is only run on [coreProjects](https://github.com/opensearch-project/data-prepper/blob/main/build-resources.gradle#L6). This leads to a false sense of high test coverage for our plugins. For example, the opensearch plugin has a code coverage of 75% against a requirement of [90%](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/build.gradle#L127).
**To Reproduce**
Steps to reproduce the behavior:
1. `./gradlew build` succeeds when it should fail given the current code coverage for `opensearch` plugin
To verify:
1. Temporarily add `opensearch` to list of [coreProjects](https://github.com/opensearch-project/data-prepper/blob/main/build-resources.gradle#L6)
2. `./gradlew build` results in 11 classes with less than 90% code coverage. Overall coverage is 75% for the plugin. Output:
```
> Task :data-prepper-plugins:opensearch:jacocoTestCoverageVerification FAILED
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.bulk.SerializedJsonImpl: instructions covered ratio is 0.73, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.bulk.PreSerializedJsonpMapper: instructions covered ratio is 0.89, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.bulk.PreSerializedJsonpMapper.PreSerializedJsonProvider: instructions covered ratio is 0.60, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.bulk.BulkOperationWriter: instructions covered ratio is 0.00, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.AwsRequestSigningApacheInterceptor: instructions covered ratio is 0.09, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.X509TrustAllManager: instructions covered ratio is 0.42, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.index.IndexConfiguration.Builder: instructions covered ratio is 0.88, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.index.AbstractIndexManager: instructions covered ratio is 0.78, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.index.IndexConstants: instructions covered ratio is 0.86, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.index.IndexConfiguration: instructions covered ratio is 0.89, but expected minimum is 0.90
[ant:jacocoReport] Rule violated for class org.opensearch.dataprepper.plugins.sink.opensearch.index.IndexManagerFactory.DefaultIndexManager: instructions covered ratio is 0.75, but expected minimum is 0.90
```
HTML Report:
Element | Missed Instructions | Cov. | Missed Branches | Cov. | Missed | Cxty | Missed | Lines | Missed | Methods | Missed | Classes
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
Total | 1,561 of 6,339 | 75% | 158 of 466 | 66% | 164 | 527 | 345 | 1,531 | 55 | 293 | 2 | 45
org.opensearch.dataprepper.plugins.sink.opensearch | | 66% | | 62% | 96 | 246 | 240 | 759 | 35 | 121 | 1 | 11
org.opensearch.dataprepper.plugins.sink.opensearch.index | | 88% | | 74% | 47 | 192 | 58 | 542 | 10 | 106 | 0 | 18
org.opensearch.dataprepper.plugins.sink.opensearch.bulk | | 71% | | 81% | 9 | 42 | 32 | 107 | 7 | 34 | 1 | 8
org.opensearch.dataprepper.plugins.sink.opensearch.s3 | | 96% | | 92% | 1 | 22 | 3 | 82 | 0 | 15 | 0 | 6
**Expected behavior**
Individual plugins should enforce their code coverage requirement.
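One possible direction for a fix (a sketch; the exact task wiring is an assumption) is to make `check` depend on the verification task in every subproject that applies jacoco, instead of only the `coreProjects` list:

```groovy
// hypothetical root build.gradle fragment: enforce coverage everywhere,
// not just for the projects listed in build-resources.gradle
subprojects {
    plugins.withId('jacoco') {
        check.dependsOn jacocoTestCoverageVerification
    }
}
```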
| [BUG] jacocoTestCoverageVerification is not enforced for plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/2427/comments | 0 | 2023-03-31T18:24:36Z | 2023-04-04T13:49:29Z | https://github.com/opensearch-project/data-prepper/issues/2427 | 1,649,826,205 | 2,427 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of data prepper, I would like the ability to add new Event entries dynamically from existing Event entries. Given the following Event
```
{
"key_one": "value_one",
"key_two": "value_two"
}
```
I would like to transform the Event with a new key that combines these values into the following Event
```
{
"key_one": "value_one",
"key_two": "value_two",
"key_three": "value_one-value_two"
}
```
**Describe the solution you'd like**
Support for this can be added to the existing `add_entries` processor (https://github.com/opensearch-project/data-prepper/tree/main/data-prepper-plugins/mutate-event-processors#addentryprocessor). The current configuration does not allow the `value` to be dynamic.
I propose that we allow dynamic values similar to the opensearch sink `index` parameter, which supports injecting values from the Event into the index with `my-${some_event_key}-name`.
The add_entries processor would be configured to look like the following to complete the use case described above.
```
processor:
- add_entries:
entries:
- key: "key_three"
value: "${key_one}-${key_two}"
```
which would result in the expected Event. The Event class has existing interface functionality to support this injection from the `add_entries` processor (https://github.com/opensearch-project/data-prepper/blob/5110699c993d4e17a56ae7d1f8c11def0429c0d9/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/event/Event.java#L106), although it will only result in the value being created as a String, which can be mitigated with the `convert_type` processor if needed.
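The proposed `${key}` interpolation can be sketched in isolation (hypothetical code, a simplified model of the `Event.formatString` semantics, not Data Prepper's actual implementation):

```python
import re

def interpolate(event: dict, template: str) -> str:
    """Replace each ${key} in the template with the event's value for that key.

    The result is always a string, which is why a convert_type processor
    may be needed afterwards for non-string values.
    """
    return re.sub(r"\$\{([^}]+)\}", lambda m: str(event[m.group(1)]), template)

event = {"key_one": "value_one", "key_two": "value_two"}
event["key_three"] = interpolate(event, "${key_one}-${key_two}")
print(event["key_three"])  # value_one-value_two
```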
**Describe alternatives you've considered (Optional)**
An ability to automatically support injection of Event values into any parameters by specifying that the value is of type `DynamicEventValue<T>`
This could be added as a data-prepper-plugin, and then in the plugin configuration for a processor, source, or sink, the configuration could use this plugin to automatically support injecting of values. For example, the add_entries config would change to
```
private final String key;
private final DynamicEventValue<Object> value;
```
| Ability to combine multiple Event keys into one new Event key | https://api.github.com/repos/opensearch-project/data-prepper/issues/2424/comments | 5 | 2023-03-30T19:44:37Z | 2023-04-14T20:28:05Z | https://github.com/opensearch-project/data-prepper/issues/2424 | 1,648,209,788 | 2,424 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The [Grok Debugger](https://grokdebug.herokuapp.com/) is down.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://grokdebug.herokuapp.com/
**Expected behavior**
I expect the Grok Debugger to be available to use.
**Screenshots**
Not necessary.
**Environment (please complete the following information):**
Not applicable. | [BUG] Grok Debugger is down | https://api.github.com/repos/opensearch-project/data-prepper/issues/2423/comments | 2 | 2023-03-30T19:18:10Z | 2023-04-05T21:02:45Z | https://github.com/opensearch-project/data-prepper/issues/2423 | 1,648,178,434 | 2,423 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
SQS throws an AccessDeniedException when KMS SSE is enabled.
`2023-03-23T06:56:51,165 [Thread-7] ERROR org.opensearch.dataprepper.plugins.source.SqsWorker - Error reading from SQS: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access. (Service: AWSKMS; Status Code: 400; Error Code: AccessDeniedException; Request ID: b284a4f4-e245-4843-96dc-52922101df67; Proxy: null) (Service: Sqs, Status Code: 400, Request ID: 2f14d052-7b67-5e76-89a8-8cc7810c3906). Retrying with exponential backoff.`
**Describe the solution you'd like**
I think having `kms:decrypt` permission should allow users to use encryption with SQS.
**Additional context**
Current permissions required to use S3 source with SQS.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3policy",
"Effect": "Allow",
"Action": [
"sqs:DeleteMessage",
"s3:GetObject",
"sqs:ReceiveMessage"
],
"Resource": "*"
}
]
}
```
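With SSE-KMS on the queue, the consumer also needs permission to decrypt with the queue's key. A policy including it might look like this (a sketch; in practice `Resource` should be scoped to the actual key, queue, and bucket ARNs rather than `"*"`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "s3policy",
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage",
        "s3:GetObject",
        "kms:Decrypt"
      ],
      "Resource": "*"
    }
  ]
}
```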
| Support KMS encryption for SQS | https://api.github.com/repos/opensearch-project/data-prepper/issues/2422/comments | 1 | 2023-03-30T16:58:20Z | 2023-03-30T19:12:23Z | https://github.com/opensearch-project/data-prepper/issues/2422 | 1,647,984,490 | 2,422 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A pipeline author wants to read Parquet files on AWS S3 with Snappy compression. Currently, the S3 source (via the abstract files source) only supports gzip.
**Describe the solution you'd like**
Build in support for other compression formats, such as:
- Snappy
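For example, the existing `compression` option on the `s3` source could gain a new value (a sketch; `snappy` is the proposed value, not an existing one):

```yaml
source:
  s3:
    compression: snappy   # proposed; today only values such as gzip are supported
```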
| Snappy Compression support | https://api.github.com/repos/opensearch-project/data-prepper/issues/2420/comments | 0 | 2023-03-30T15:17:12Z | 2023-04-05T20:58:25Z | https://github.com/opensearch-project/data-prepper/issues/2420 | 1,647,834,811 | 2,420 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-1370 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-smart-2.4.7.jar</b></p></summary>
<p>JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.</p>
<p>Library home page: <a href="https://urielch.github.io/">https://urielch.github.io/</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/net.minidev/json-smart/2.4.7/8d7f4c1530c07c54930935f3da85f48b83b3c109/json-smart-2.4.7.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.0-beta-5.jar (Root Library)
- json-path-2.7.0.jar
- :x: **json-smart-2.4.7.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
[Json-smart](https://netplex.github.io/json-smart/) is a performance focused, JSON processor lib. When reaching a ‘[‘ or ‘{‘ character in the JSON input, the code parses an array or an object respectively. It was discovered that the code does not have any limit to the nesting of such arrays or objects. Since the parsing of nested arrays and objects is done recursively, nesting too many of them can cause a stack exhaustion (stack overflow) and crash the software.
<p>Publish Date: 2023-03-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1370>CVE-2023-1370</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
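The 7.5 reported above can be reproduced from the listed metrics using the CVSS v3.0 base-score formula (this worked computation is not part of the original report; the weights are the standard ones from the CVSS v3.0 specification):

```python
import math

def roundup(x):
    # CVSS v3 "Roundup": smallest value with one decimal place that is >= x
    return math.ceil(x * 10) / 10

# Spec weights for the metrics in this report:
# AV:N = 0.85, AC:L = 0.77, PR:N = 0.85, UI:N = 0.85, Scope: Unchanged
# C:None = 0, I:None = 0, A:High = 0.56
iss = 1 - (1 - 0) * (1 - 0) * (1 - 0.56)            # Impact Sub-Score = 0.56
impact = 6.42 * iss                                  # scope unchanged
exploitability = 8.22 * 0.85 * 0.77 * 0.85 * 0.85
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5
```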
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://research.jfrog.com/vulnerabilities/stack-exhaustion-in-json-smart-leads-to-denial-of-service-when-parsing-malformed-json-xray-427633/">https://research.jfrog.com/vulnerabilities/stack-exhaustion-in-json-smart-leads-to-denial-of-service-when-parsing-malformed-json-xray-427633/</a></p>
<p>Release Date: 2023-03-22</p>
<p>Fix Resolution: net.minidev:json-smart:2.4.9</p>
</p>
</details>
<p></p>
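Not part of the report, but one way the suggested fix could be applied in the affected Gradle build — a sketch assuming the standard `resolutionStrategy` mechanism; the actual placement in this repository's `build.gradle` may differ:

```gradle
// Hypothetical: force the patched transitive dependency pulled in via wiremock -> json-path
configurations.all {
    resolutionStrategy {
        force 'net.minidev:json-smart:2.4.9'
    }
}
```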
| CVE-2023-1370 (High) detected in json-smart-2.4.7.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/2418/comments | 0 | 2023-03-30T14:51:23Z | 2023-04-20T14:46:01Z | https://github.com/opensearch-project/data-prepper/issues/2418 | 1,647,789,019 | 2,418 |
["opensearch-project", "data-prepper"] | **Describe the bug**
Trying to copy the value from a key with a "/" in the name.
**To Reproduce**
Steps to reproduce the behavior:
Use the following processor with docker image 2.1.1:
```
processor:
- copy_values:
entries:
- from_key: "/kubernetes/labels/app/kubernetes/io~1name"
to_key: "app_name"
overwrite_if_to_key_exists: true
```
**Expected behavior**
I would expect the "~1" to be replaced with "/", as described in the expression documentation.
**Screenshots**
`java.lang.IllegalArgumentException: key /kubernetes/labels/app/kubernetes/io~1name must contain only alphanumeric chars with .-_ and must follow JsonPointer (ie. 'field/to/key')`
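For reference, RFC 6901 defines `~1` as the escape for `/` inside a pointer token (and `~0` for `~`, decoded in that order). A minimal sketch of the expected decoding — illustration only, not Data Prepper's actual implementation:

```python
def unescape_json_pointer_token(token: str) -> str:
    # Per RFC 6901, transform "~1" to "/" first, then "~0" to "~"
    return token.replace("~1", "/").replace("~0", "~")

# The final token of the from_key in the processor config above:
print(unescape_json_pointer_token("io~1name"))  # io/name
```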
| [BUG] Use Json Pointer syntax on copy_values processor "from_key" | https://api.github.com/repos/opensearch-project/data-prepper/issues/2416/comments | 3 | 2023-03-29T17:18:32Z | 2023-06-05T19:48:57Z | https://github.com/opensearch-project/data-prepper/issues/2416 | 1,646,219,929 | 2,416 |
["opensearch-project", "data-prepper"] | **Is your feature request related to a problem? Please describe.**
Data Prepper can be run either as a single instance or as two or more instances that together make up a cluster of data prepper instances.
When configuring the `data-prepper-config.yaml`, certain settings only apply in a multi-node scenario. For example, the peer forwarder (https://github.com/opensearch-project/data-prepper/blob/main/docs/peer_forwarder.md) is only relevant when there are multiple nodes. The peer forwarder is configured with `service_discovery` settings that allow peers to communicate and forward Events. However, the service discovery configuration is too tightly coupled to peer forwarding, since service discovery can apply to other situations where data prepper nodes need to be aware of other data prepper nodes.
**Describe the solution you'd like**
I am proposing we break the service discovery configuration out from under the `peer_forwarder` (of course, we can do this without making breaking changes, by deprecating the old peer forwarder service discovery config). It would mean going from this:
```
peer_forwarder:
discovery_mode: static
static_endpoints: ["dataprepper1", "dataprepper2"]
... other peer forwarder configurations ...
```
to configuring `service_discovery` at the top level, like this:
```
some_other_feature_that_uses_service_discovery:
peer_forwarder:
... other peer forwarder configurations ...
service_discovery:
discovery_mode: static
static_endpoints: ["dataprepper1", "dataprepper2"]
```
Additionally, a configuration for `discovery_mode` could be added to `service_discovery`, which will be useful for all plugins or features of data prepper that might behave differently in a multi-node vs. single-node setting (for example, source coordination with a single node does not require a distributed store #2412).
This configuration would be a `discovery_mode` of type `single_instance` (or `single_node`), a way to explicitly say that this will always be just a single instance of data prepper, not part of a cluster. There is a case to be made for this being the default `discovery_mode` when no `service_discovery` is found in the `data-prepper-config.yaml`. If `single_node` is not the default, then a multi-node scenario is always assumed, which will not work properly for stateful aggregations without a peer forwarder and service discovery configuration. Defaulting to `single_node` seems like the safer option, since that is technically how data prepper works today (no peer forwarder configuration essentially acts as single node). The only other alternative is to make the setting required, but that is a breaking change. The final configuration would look like this, and would indicate to anyone who can use the information that this is only a single instance of data prepper working on its own:
```
service_discovery:
discovery_mode: single_node
```
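A hypothetical sketch of the proposed defaulting behavior (Python for illustration only — Data Prepper itself is Java, and the function name here is invented): when no `service_discovery` block is present in the parsed `data-prepper-config.yaml`, the node is treated as `single_node`.

```python
def resolve_discovery_mode(config: dict) -> str:
    # Missing service_discovery block => assume a lone instance (the safer default)
    sd = config.get("service_discovery") or {}
    return sd.get("discovery_mode", "single_node")

print(resolve_discovery_mode({}))  # single_node
print(resolve_discovery_mode({"service_discovery": {"discovery_mode": "static"}}))  # static
```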
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| No clear indication of single node vs. multi node data prepper cluster | https://api.github.com/repos/opensearch-project/data-prepper/issues/2413/comments | 0 | 2023-03-29T05:24:37Z | 2023-03-29T05:27:19Z | https://github.com/opensearch-project/data-prepper/issues/2413 | 1,645,057,851 | 2,413 |
["opensearch-project", "data-prepper"] | **Is your feature request related to a problem? Please describe.**
Data Prepper has many push-based sources, such as `http` and `otel_trace_source`. Distributing data between multiple instances of data prepper can easily be solved with a load balancer.
However, Data Prepper's pull-based sources have no internal way to coordinate which work is done by which instance in a multi-node scenario. For example, pulling data from something like an OpenSearch cluster with 5 nodes of Data Prepper would result in all 5 nodes pulling the entirety of the data and processing it 5 times in total.
**Describe the solution you'd like**
A core data prepper solution for pull-based sources to distribute work between multiple instances of data prepper, and a way to track the progress of the data that is pulled so that duplicate data is skipped.
This solution could use a distributed store to coordinate work and track progress of the data. The store could be pluggable and configured in the `data-prepper-config.yaml`. The store type could range from a remote/local file DB to Apache ZooKeeper, MySQL, DynamoDB, and more.
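A hypothetical, in-memory sketch of the claim-and-complete protocol such a store could implement (Python for illustration; all names are invented, and a real deployment would back this with one of the distributed stores mentioned above):

```python
import time

class SourceCoordinationStore:
    """In-memory stand-in for the pluggable coordination store."""
    def __init__(self, lease_seconds=60):
        self.items = {}  # partition_key -> {"owner", "lease_until", "done"}
        self.lease_seconds = lease_seconds

    def register(self, partition_key):
        self.items.setdefault(
            partition_key, {"owner": None, "lease_until": 0.0, "done": False})

    def try_claim(self, partition_key, node_id):
        item = self.items[partition_key]
        now = time.monotonic()
        if item["done"] or (item["owner"] and item["lease_until"] > now):
            return False  # already processed, or leased by a live peer
        item["owner"] = node_id
        item["lease_until"] = now + self.lease_seconds
        return True

    def complete(self, partition_key, node_id):
        item = self.items[partition_key]
        if item["owner"] == node_id:
            item["done"] = True  # all peers will now skip this partition

store = SourceCoordinationStore()
store.register("s3://my-bucket/2023/03/part-0001.json")
print(store.try_claim("s3://my-bucket/2023/03/part-0001.json", "node-a"))  # True
print(store.try_claim("s3://my-bucket/2023/03/part-0001.json", "node-b"))  # False
```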
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
| Source Coordination in Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/2412/comments | 0 | 2023-03-29T04:51:16Z | 2023-08-10T23:06:09Z | https://github.com/opensearch-project/data-prepper/issues/2412 | 1,645,028,458 | 2,412 |