| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
There are use cases where nested JSON objects need to be flattened. For example:
From:
```json
{
  "key1": "val1",
  "key2": {
    "key3": {
      "key4": "val2"
    }
  },
  "list1": [
    {
      "list2": [
        {
          "name": "name1",
          "value": "value1"
        },
        {
          "name": "name2",
          "value": "value2"
        }
      ]
    }
  ]
}
```
to
```json
{
  "key1": "val1",
  "key2.key3.key4": "val2",
  "list1[0].list2[0].name": "name1",
  "list1[0].list2[0].value": "value1",
  "list1[0].list2[1].name": "name2",
  "list1[0].list2[1].value": "value2"
}
```
In some specific use cases (https://github.com/opensearch-project/data-prepper/issues/3965), users want to remove list indices from the flattened keys and combine leaf values in lists:
```json
{
  "key1": "val1",
  "key2.key3.key4": "val2",
  "list1[].list2[].name": ["name1","name2"],
  "list1[].list2[].value": ["value1","value2"]
}
```
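Not part of the proposal itself, but for illustration, the core flattening logic could look roughly like this sketch (the class and method names here are made up; the real processor would operate on Data Prepper's `Event` model):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the proposed flattening behavior.
public class FlattenSketch {

    // Flattens nested maps/lists into dotted keys. When removeListIndices is
    // true, list indices become "[]" and leaf values sharing the same
    // index-free key are combined into a list, as in the second example above.
    public static Map<String, Object> flatten(final Map<String, Object> input,
                                              final boolean removeListIndices) {
        final Map<String, Object> out = new LinkedHashMap<>();
        walk("", input, removeListIndices, out);
        return out;
    }

    @SuppressWarnings("unchecked")
    private static void walk(final String prefix, final Object node,
                             final boolean removeListIndices, final Map<String, Object> out) {
        if (node instanceof Map) {
            for (final Map.Entry<String, Object> entry : ((Map<String, Object>) node).entrySet()) {
                final String key = prefix.isEmpty() ? entry.getKey() : prefix + "." + entry.getKey();
                walk(key, entry.getValue(), removeListIndices, out);
            }
        } else if (node instanceof List) {
            final List<Object> list = (List<Object>) node;
            for (int i = 0; i < list.size(); i++) {
                walk(prefix + (removeListIndices ? "[]" : "[" + i + "]"),
                        list.get(i), removeListIndices, out);
            }
        } else if (removeListIndices && prefix.contains("[]")) {
            // Combine leaf values that share the same index-free key.
            ((List<Object>) out.computeIfAbsent(prefix, k -> new ArrayList<>())).add(node);
        } else {
            out.put(prefix, node);
        }
    }
}
```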
**Describe the solution you'd like**
A new flatten processor with these configurations:
```yaml
processor:
  - flatten:
      source: ""
      target: ""
      remove_processed_fields: true
      remove_list_indices: false
```
* `source`: the source key of the object to flatten
* `target`: the target key under which to put the flattened object
* `remove_processed_fields`: boolean, whether to remove processed fields, only keeping flattened fields
* `remove_list_indices`: boolean, whether to remove list indices from the flattened keys and combine leaf values
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
A specific use case is in #3965
| Flatten json processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4128/comments | 4 | 2024-02-14T19:15:12Z | 2024-06-11T12:15:09Z | https://github.com/opensearch-project/data-prepper/issues/4128 | 2,135,023,521 | 4,128 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `kafka` buffer and `kafka` source share a significant number of classes. Many of these classes have their own loggers. They also use `org.apache.kafka` classes, which have loggers of their own.
I'd like to have more logging control over the Kafka components depending on whether the logger is used by the source or the buffer. This solution should also extend to the upcoming `kafka` sink.
**Describe the solution you'd like**
I'd like to have the Kafka plugins set an MDC context. This way, we can know the context for the different loggers.
```java
try {
    MDC.put("kafkaPluginType", "buffer");
    // All existing work
} finally {
    MDC.remove("kafkaPluginType");
}
```
**Describe alternatives you've considered (Optional)**
1. One idea I had is to attempt to set MDC in Data Prepper core so that plugins get MDC automatically. This could be a nice feature, but it may take a significant amount of effort. I think we can start with the simple approach proposed above and then see how we'd like it to evolve.
2. Another approach would be to have different loggers. This would likely require a base class and then sub-classes.
```java
public abstract class KafkaCustomProducer<T> {
    protected abstract Logger getLogger();
}
```
```java
public class KafkaBufferCustomProducer<T> extends KafkaCustomProducer<T> {
    private static final Logger LOG = LoggerFactory.getLogger(KafkaBufferCustomProducer.class);

    @Override
    protected Logger getLogger() {
        return LOG;
    }
}
```
This approach adds quite a bit of code complexity. And it will not allow controlling logging in the `org.apache.kafka` package.
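With the MDC in place, standard logging configuration can key off of it. For example, a logback `SiftingAppender` sketch (the file names and `defaultValue` below are illustrative) that routes buffer and source logs — including `org.apache.kafka` loggers on the same threads — to separate files:

```xml
<configuration>
  <appender name="KAFKA_SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <!-- Discriminate on the MDC key set by the Kafka plugins -->
    <discriminator>
      <key>kafkaPluginType</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${kafkaPluginType}" class="ch.qos.logback.core.FileAppender">
        <file>logs/kafka-${kafkaPluginType}.log</file>
        <encoder>
          <pattern>%d %-5level %logger - %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>
  <logger name="org.apache.kafka" additivity="false">
    <appender-ref ref="KAFKA_SIFT"/>
  </logger>
</configuration>
```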
**Additional context**
N/A
## Tasks
- [x] Add MDC context for buffer plugin
- [ ] Add MDC context for source plugin
| Support enhanced configuration of the Kafka source and buffer loggers | https://api.github.com/repos/opensearch-project/data-prepper/issues/4126/comments | 0 | 2024-02-14T01:49:09Z | 2024-06-26T19:25:59Z | https://github.com/opensearch-project/data-prepper/issues/4126 | 2,133,411,497 | 4,126 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A data source (such as a DynamoDB table) may use [Client-Side Encryption](https://docs.aws.amazon.com/database-encryption-sdk/latest/devguide/client-server-side.html), which encrypts specified fields and stores them as binary data. It would be nice to have the encrypted fields decrypted before being indexed into the data sink (an OpenSearch cluster), so that the data is searchable.
**Describe the solution you'd like**
An [Index Processor](https://opensearch.org/docs/latest/ingest-pipelines/processors/index-processors/) that takes the field name and key ARN as input, and decrypts the field using the specified KMS key.
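Purely as a sketch of what the pipeline configuration might look like (the processor name and option names below are hypothetical, not an existing Data Prepper API):

```yaml
processor:
  - kms_decrypt:                 # hypothetical processor name
      source: "encrypted_field"  # field containing the client-side-encrypted binary data
      key_arn: "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
      target: "decrypted_field"  # optional: where to write the decrypted value
```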
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
This is currently a blocker of leveraging [DynamoDB zero-ETL integration with Amazon OpenSearch Service](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/OpenSearchIngestionForDynamoDB.html). Without the kms decrypt processor, we won't be able to replace our customized lambda functions with the ingestion pipeline. | Add a kms decrypt processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4125/comments | 0 | 2024-02-14T01:20:50Z | 2024-04-16T19:53:35Z | https://github.com/opensearch-project/data-prepper/issues/4125 | 2,133,389,545 | 4,125 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Scheduled S3 Scan filters out objects by checking a global state item that contains the timestamp of the most recent object that was processed during the previous scan. (https://github.com/opensearch-project/data-prepper/blob/503b7741fe63596ad994198e513404c1143f9e1d/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/s3/S3ScanPartitionCreationSupplier.java#L168)
This has the potential to skip objects that are missed in the scan, but created with the same timestamp.
**Expected behavior**
Do not filter out objects that have a timestamp equal to the recorded last-modified timestamp.
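A minimal sketch of the intended comparison (illustrative only; the actual logic lives in `S3ScanPartitionCreationSupplier`): include any object whose last-modified time is *not strictly before* the recorded timestamp, rather than only those strictly after it.

```java
import java.time.Instant;

// Illustrative comparison fix: keep any object that is not strictly older
// than the timestamp recorded from the previous scan, so that objects
// sharing that exact last-modified time are not skipped.
public class ScanFilterSketch {
    public static boolean shouldInclude(final Instant objectLastModified,
                                        final Instant lastScannedTimestamp) {
        // A strict isAfter() check here would skip same-timestamp objects.
        return !objectLastModified.isBefore(lastScannedTimestamp);
    }
}
```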
| [BUG] S3 Scan has potential to filter out objects with the same timestamp | https://api.github.com/repos/opensearch-project/data-prepper/issues/4123/comments | 1 | 2024-02-13T19:20:32Z | 2024-02-14T17:07:23Z | https://github.com/opensearch-project/data-prepper/issues/4123 | 2,132,987,794 | 4,123 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Extended from #4102. Currently, a secret retrieval failure blocks the pipeline and server from spinning up, since the extension loader is a prerequisite for plugin loading.
**Describe the solution you'd like**
Some ideas to change this behavior for better user experience:
* Create sub-pipelines that do not depend on secrets.
* Spin up the Data Prepper server, since it does not depend on secrets.
* Keep retrying secret retrieval for blocked plugin creation.
| Unblock pipeline and server creation when secret retrieval fails | https://api.github.com/repos/opensearch-project/data-prepper/issues/4122/comments | 0 | 2024-02-13T18:13:13Z | 2024-02-13T20:50:05Z | https://github.com/opensearch-project/data-prepper/issues/4122 | 2,132,896,526 | 4,122 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, our trace ingestion index aliases are reserved to be:
* spans: `otel-v1-apm-span`
* service map: `otel-v1-service-map`
There are user requests to allow a custom suffix to be appended to the above.
**Describe the solution you'd like**
Add an index alias suffix setting for the trace analytics indexes.
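As a sketch of what the configuration could look like (the `index_alias_suffix` option below is hypothetical):

```yaml
sink:
  - opensearch:
      index_type: trace-analytics-raw
      # hypothetical option: would produce an alias like otel-v1-apm-span-my-suffix
      index_alias_suffix: "my-suffix"
```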
| Allow custom suffix for trace ingestion index alias | https://api.github.com/repos/opensearch-project/data-prepper/issues/4121/comments | 4 | 2024-02-13T15:35:51Z | 2024-02-23T16:56:19Z | https://github.com/opensearch-project/data-prepper/issues/4121 | 2,132,600,066 | 4,121 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the `date` processor parses the timestamp and stores it in the `destination` field, which is of `String` type.
**Describe the solution you'd like**
In scenarios where the numeric value of the `destination` field is needed, the `convert_entry_type` processor could be used to convert the `destination` field to `long`; `integer` is not feasible because epoch timestamps (particularly in milliseconds) overflow a 32-bit integer. | Add `long` as a target type for `convert_entry_type` processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4120/comments | 3 | 2024-02-13T13:59:32Z | 2024-04-04T15:15:01Z | https://github.com/opensearch-project/data-prepper/issues/4120 | 2,132,391,672 | 4,120 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Ingestion of events via OpenTelemetry over gRPC can fail due to a full buffer or an open circuit breaker.
In that case, an error response is sent. When testing with the OpenTelemetry Collector, this leads to the collector dropping data.
The reason is that the [OTLP/gRPC Throttling specification](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc-throttling) is apparently not implemented properly: the DataPrepper gRPC response needs to contain a proper `RetryInfo`.
**Describe the solution you'd like**
DataPrepper should implement proper OTLP/gRPC throttling, when the ingress buffers are full or the circuit breaker is open.
**Describe alternatives you've considered (Optional)**
DataPrepper can document that, on full buffers or open circuit breakers, requests are rejected as non-retryable. This can be done for both cases, or separately for just one, e.g. the open circuit breaker.
**Additional context**
Currently, on an open circuit breaker, the error message from the [CircuitBreakingBuffer](https://github.com/opensearch-project/data-prepper/blob/3da1696aa1ac3b8d12c3c8960c21bbda722474d2/data-prepper-core/src/main/java/org/opensearch/dataprepper/parser/CircuitBreakingBuffer.java#L62) is contained in the DataPrepper response.
This is managed by the [GrpcRequestExceptionHandler](https://github.com/opensearch-project/data-prepper/blob/3da1696aa1ac3b8d12c3c8960c21bbda722474d2/data-prepper-plugins/armeria-common/src/main/java/org/opensearch/dataprepper/GrpcRequestExceptionHandler.java#L56), which generates a `RESOURCE_EXHAUSTED` response status. This is only retryable if a `RetryInfo` is present according to <https://opentelemetry.io/docs/specs/otlp/#failures>. That is currently not the case, hence the OpenTelemetry Collector drops the events and does not retry.
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When running the Kafka buffer with `create_topic: false`, I get the following error:
```
2024-02-12T11:57:08,901 [main] ERROR org.opensearch.dataprepper.plugins.kafka.service.TopicService - Caught exception creating topic with name: test-encrypted
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicExistsException: Topic 'test-encrypted' already exists.
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2005) ~[?:?]
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165) ~[kafka-clients-7.5.0-ccs.jar:?]
at org.opensearch.dataprepper.plugins.kafka.service.TopicService.createTopic(TopicService.java:37) ~[kafka-plugins-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.kafka.producer.KafkaCustomProducerFactory.checkTopicCreationCriteriaAndCreateTopic(KafkaCustomProducerFactory.java:111) ~[kafka-plugins-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.kafka.producer.KafkaCustomProducerFactory.prepareTopicAndSchema(KafkaCustomProducerFactory.java:87) ~[kafka-plugins-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.kafka.producer.KafkaCustomProducerFactory.createProducer(KafkaCustomProducerFactory.java:69) ~[kafka-plugins-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.kafka.buffer.KafkaBuffer.<init>(KafkaBuffer.java:75) ~[kafka-plugins-2.7.0-SNAPSHOT.jar:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:53) ~[data-prepper-core-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugin.DefaultPluginFactory.loadPlugin(DefaultPluginFactory.java:75) ~[data-prepper-core-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.buildPipelineFromConfiguration(PipelineTransformer.java:117) ~[data-prepper-core-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.transformConfiguration(PipelineTransformer.java:97) ~[data-prepper-core-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.DataPrepper.<init>(DataPrepper.java:67) ~[data-prepper-core-2.7.0-SNAPSHOT.jar:2.7.0-SNAPSHOT]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:211) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:311) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:296) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1391) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:541) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1391) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:229) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:920) [spring-context-5.3.28.jar:5.3.28]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) [spring-context-5.3.28.jar:5.3.28]
at org.opensearch.dataprepper.AbstractContextManager.start(AbstractContextManager.java:59) [data-prepper-core-2.7.0-SNAPSHOT.jar:2.7.0-SNAPSHOT]
at org.opensearch.dataprepper.AbstractContextManager.getDataPrepperBean(AbstractContextManager.java:45) [data-prepper-core-2.7.0-SNAPSHOT.jar:2.7.0-SNAPSHOT]
at org.opensearch.dataprepper.DataPrepperExecute.main(DataPrepperExecute.java:39) [data-prepper-main-2.7.0-SNAPSHOT.jar:2.7.0-SNAPSHOT]
Caused by: org.apache.kafka.common.errors.TopicExistsException: Topic 'test-encrypted' already exists.
```
**To Reproduce**
Run Data Prepper with `create_topic: false`.
```yaml
buffer:
  kafka:
    ...
    topics:
      - name: test-encrypted
        group_id: data-prepper
        create_topic: false
```
**Expected behavior**
Data Prepper does not attempt to create the Kafka topic. No error is thrown.
**Environment (please complete the following information):**
Data Prepper 2.6.0 and current `main`.
| [BUG] Kafka buffer attempts to create a topic when disabled | https://api.github.com/repos/opensearch-project/data-prepper/issues/4111/comments | 2 | 2024-02-12T18:07:32Z | 2024-02-15T16:42:41Z | https://github.com/opensearch-project/data-prepper/issues/4111 | 2,130,681,396 | 4,111 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Evaluating a statement like
```yaml
when: '/my_field == null'
```
will throw an exception if the value of `my_field` cannot be coerced into a type supported by the expression (ANTLR) grammar — for example, a `List<String>`.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pipeline with a when condition equal to `/my_field == null`
2. Send an Event where my_field is an array, such as `{ "my_field": ["2024-02-09 19:26:25.308", "2024-02-09 19:26:25.308"] }`
3. Observe the exception
```
2024-02-12T10:28:13,789 [test-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.expression.ParseTreeEvaluator - Unable to evaluate event
org.opensearch.dataprepper.expression.ExpressionCoercionException: Unsupported type for value [2024-02-09 19:26:25.308, 2024-02-09 19:26:25.308]
at org.opensearch.dataprepper.expression.ParseTreeCoercionService.lambda$new$0(ParseTreeCoercionService.java:33) ~[data-prepper-expression-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeCoercionService.resolveJsonPointerValue(ParseTreeCoercionService.java:103) ~[data-prepper-expression-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeCoercionService.coercePrimaryTerminalNode(ParseTreeCoercionService.java:69) ~[data-prepper-expression-2.7.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.visitTerminal(ParseTreeEvaluatorListener.java:69) ~[data-prepper-expression-2.7.0-SNAPSHOT.jar:?]
at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:29) ~[antlr4-runtime-4.10.1.jar:4.10.1]
```
**Expected behavior**
For checks of `/my_field != null` or `/my_field == null`, we should resolve the value of `my_field` to a plain `Object` and check whether it is null, and should not rely on the ANTLR grammar's type coercion for these conditional expressions.
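As an illustration of the suggested fix (a simplified sketch, not the actual `ParseTreeCoercionService` code, and using a plain map in place of the `Event` model):

```java
import java.util.Map;

// Sketch of the suggested handling: resolve the field to a plain Object and
// compare against null directly, with no type coercion, so values like
// List<String> never reach the coercion service at all.
public class NullCheckSketch {
    public static boolean isNull(final Map<String, Object> event, final String key) {
        return event.get(key) == null;
    }
}
```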
| [BUG] Attempting to evaluate if a key is null throws an Exception if the value is a List<String> for conditional expressions | https://api.github.com/repos/opensearch-project/data-prepper/issues/4109/comments | 0 | 2024-02-12T16:06:23Z | 2024-02-20T16:54:07Z | https://github.com/opensearch-project/data-prepper/issues/4109 | 2,130,451,923 | 4,109 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `date` processor does not catch any exceptions during processing. This means that (in the current state of `data-prepper-core`) the pipeline will be shut down, and the remainder of the Events in a batch will be skipped once an exception is hit.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pipeline with date processor and a `date_when`
2. Send an Event that will trigger an exception evaluating the conditional statement against that Event
3. Observe the exception get thrown and not caught by the date processor.
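For reference, one way to avoid losing the rest of the batch is to isolate failures per event — a generic sketch (using plain strings in place of Data Prepper's actual `Record`/`Event` types):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Sketch of per-event error isolation: a failure processing one event should
// not abort the remainder of the batch.
public class SafeBatchSketch {
    public interface EventAction {
        void apply(String event) throws Exception;
    }

    // Applies the action to every event; returns the events that failed so
    // they can be tagged or routed instead of killing the processor thread.
    public static List<String> processBatch(final Collection<String> events,
                                            final EventAction action) {
        final List<String> failed = new ArrayList<>();
        for (final String event : events) {
            try {
                action.apply(event);
            } catch (final Exception e) {
                failed.add(event);   // record the failure; keep processing
            }
        }
        return failed;
    }
}
```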
| [BUG] Date Processor throws Exception and skips processing the remainder of a batch of Events | https://api.github.com/repos/opensearch-project/data-prepper/issues/4107/comments | 0 | 2024-02-12T15:58:07Z | 2024-02-12T18:07:02Z | https://github.com/opensearch-project/data-prepper/issues/4107 | 2,130,435,592 | 4,107 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When a processor throws an exception, the current Data Prepper processor thread stops processing. Data Prepper is not shutting down the pipeline, nor is it attempting to restart the thread.
**To Reproduce**
Create a pipeline with a processor that will throw an exception. The `date` processor throws an exception when the `date_when` value contains an invalid expression.
Run Data Prepper and send data through.
The threads stop.
**Expected behavior**
There are two things I'd expect:
* The whole pipeline shuts down
* Data Prepper attempts to restart the thread
**Analysis**
The exact cause is this line:
https://github.com/opensearch-project/data-prepper/blob/2be8166ac8ac1f80784201f2a6e33302145cb5a8/data-prepper-core/src/main/java/org/opensearch/dataprepper/pipeline/ProcessWorker.java#L91-L93
The `ProcessWorker` catches the exception. Then the `ProcessWorker::run` method exits. After this, the thread remains, but it is waiting on a task to run.
You can see that the thread remains with a thread dump:
```
"my-pipeline-pipeline-processor-worker-1-thread-1" #41 prio=5 os_prio=0 cpu=2889321.30ms elapsed=70643.67s tid=0x0000aaaadebd3310 nid=0xa8 waiting on condition [0x0000ffff08bdf000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.22/Native Method)
- parking to wait for <0x00000005e1d1c890> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.22/LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.22/AbstractQueuedSynchronizer.java:2081)
at java.util.concurrent.LinkedBlockingQueue.take(java.base@11.0.22/LinkedBlockingQueue.java:433)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.22/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.22/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.22/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.22/Thread.java:829)
```
**Environment (please complete the following information):**
Data Prepper 2.6.1
| [BUG] Data Prepper process threads stop when processors throw exceptions | https://api.github.com/repos/opensearch-project/data-prepper/issues/4103/comments | 1 | 2024-02-10T16:15:53Z | 2024-02-20T21:05:19Z | https://github.com/opensearch-project/data-prepper/issues/4103 | 2,128,587,312 | 4,103 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The s3 sink's object key `file_pattern` defaults to the value defined at https://github.com/opensearch-project/data-prepper/blob/c45ddb15f9fc8803655e4879d3d3494ede937b0d/data-prepper-plugins/s3-sink/src/main/java/org/opensearch/dataprepper/plugins/sink/s3/configuration/ObjectKeyOptions.java#L14 and is not configurable.
**Describe the solution you'd like**
As a user of Data Prepper's s3 sink, I would like to configure the file_pattern to be different than the default value of `events-%{yyyy-MM-dd'T'HH-mm-ss'Z'}`
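For illustration, the configuration could look like this (the `file_pattern` option shown is the proposed addition, not an existing one; `object_key` and `path_prefix` are existing s3 sink options):

```yaml
sink:
  - s3:
      object_key:
        path_prefix: "logs/%{yyyy}/%{MM}/%{dd}/"
        file_pattern: "my-events-%{yyyy-MM-dd'T'HH-mm-ss'Z'}"  # proposed option
```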
| Allow configurable file_pattern in s3 sink object_keys | https://api.github.com/repos/opensearch-project/data-prepper/issues/4099/comments | 4 | 2024-02-09T17:33:13Z | 2025-01-31T15:06:40Z | https://github.com/opensearch-project/data-prepper/issues/4099 | 2,127,572,910 | 4,099 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When using an OpenSearch Serverless collection as the `opensearch` sink in a pipeline, the configured index is not created until a document is sent to the pipeline and indexed into the collection.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pipeline with an opensearch serverless collection as a sink
2. Observe the index configured in the sink is not created when starting Data Prepper
3. Send an Event to the pipeline
4. Observe the document in the collection in the specified index
**Expected behavior**
The index should be created on startup like it is for non-serverless opensearch sinks
| [BUG] OpenSearch Serverless Sink does not create index on startup | https://api.github.com/repos/opensearch-project/data-prepper/issues/4097/comments | 2 | 2024-02-08T22:56:45Z | 2024-02-13T20:38:06Z | https://github.com/opensearch-project/data-prepper/issues/4097 | 2,126,202,075 | 4,097 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper has sources which can pull binary data (mostly in base64 format). And we are adding some new processors which can decompress binary data. It would be good to handle binary data consistently so that we don't have too much code spread across the project, which would result in some processor combinations breaking a pipeline.
I'd like Data Prepper's sources and sinks to know their own encodings as much as possible.
**Describe the solution you'd like**
Create a new `BinaryData` model in `data-prepper-api`. Allow this to be set and retrieved from the `Event` model. This model can also be designed to avoid unnecessary encoding/decoding.
When a Data Prepper source gets binary data, it wraps it in the `BinaryData` model. Similarly, when writing to a sink use that same model.
There are some situations where the source cannot know the encoding. For example, JSON could carry binary data encoded as base64 or in some other encoding. In such cases, the pipeline author will need to know the encoding and convert it accordingly.
```
class BinaryData {
    public byte[] getBinaryData();
    public static BinaryData fromBase64Data(String base64) { ... }
}
```
There may also be a good way to decouple the binary data from the encoding itself.
**Describe alternatives you've considered (Optional)**
There may be some useful third-party libraries with a similar solution that we could make use of. Though, I'd still propose we keep our own interface and use that for the internals.
**Additional context**
Coming from this comment: https://github.com/opensearch-project/data-prepper/issues/4016#issuecomment-1934998917
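To make the sketch above concrete, here is a minimal, runnable version of the proposed model using only the JDK's `Base64` decoder. The class and method names mirror the proposal, but the constructor and field layout are illustrative assumptions, not a committed API:

```java
import java.util.Base64;

// Hypothetical sketch of the proposed BinaryData model (not a committed API).
// It decodes base64 input once and hands out the raw bytes thereafter,
// avoiding repeated encode/decode cycles downstream.
class BinaryData {
    private final byte[] data;

    private BinaryData(final byte[] data) {
        this.data = data;
    }

    // Returns the raw, decoded bytes.
    public byte[] getBinaryData() {
        return data;
    }

    // Wraps base64-encoded text, decoding it into the raw byte form.
    public static BinaryData fromBase64Data(final String base64) {
        return new BinaryData(Base64.getDecoder().decode(base64));
    }
}
```

A source that knows its payload is base64 would call `fromBase64Data` once at ingestion; processors and sinks would then work against `getBinaryData()` without caring how the bytes originally arrived.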
| Create a model for binary data | https://api.github.com/repos/opensearch-project/data-prepper/issues/4096/comments | 0 | 2024-02-08T22:08:21Z | 2024-02-13T20:36:37Z | https://github.com/opensearch-project/data-prepper/issues/4096 | 2,126,148,746 | 4,096 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `kafka` buffer does not include event metadata, such as the receipt timestamp, when serializing events. Thus, this information is lost when events are sent across the Kafka topic.
**Expected behavior**
All metadata from the source should be retained in the `Event` when it is pulled by the `kafka` buffer.
| [BUG] Kafka buffer does not retain metadata, including the receipt timestamp | https://api.github.com/repos/opensearch-project/data-prepper/issues/4092/comments | 1 | 2024-02-08T20:21:40Z | 2024-06-26T19:25:58Z | https://github.com/opensearch-project/data-prepper/issues/4092 | 2,126,007,543 | 4,092 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-1932 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hibernate-validator-6.1.7.Final.jar</b></p></summary>
<p>Hibernate's Jakarta Bean Validation reference implementation.</p>
<p>Library home page: <a href="http://hibernate.org/validator">http://hibernate.org/validator</a></p>
<p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.hibernate.validator/hibernate-validator/6.1.7.Final/8d10290c5b23b7d061c79ad804dca107b335cb36/hibernate-validator-6.1.7.Final.jar</p>
<p>
Dependency Hierarchy:
- kafka-schema-registry-7.4.0.jar (Root Library)
- :x: **hibernate-validator-6.1.7.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/1b9789c3a834a557324612fb449548de8f8d4980">1b9789c3a834a557324612fb449548de8f8d4980</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in hibernate-validator version 6.1.2.Final, where the method 'isValid' in the class org.hibernate.validator.internal.constraintvalidators.hv.SafeHtmlValidator can be bypassed by omitting the tag end (less than sign). Browsers typically still render the invalid html which leads to attacks like HTML injection and Cross-Site-Scripting.
<p>Publish Date: 2023-04-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1932>CVE-2023-1932</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1809444">https://bugzilla.redhat.com/show_bug.cgi?id=1809444</a></p>
<p>Release Date: 2023-04-07</p>
<p>Fix Resolution: org.hibernate.validator:hibernate-validator:6.2.0.Final</p>
</p>
</details>
<p></p>
| CVE-2023-1932 (Medium) detected in hibernate-validator-6.1.7.Final.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4091/comments | 1 | 2024-02-08T19:41:43Z | 2024-03-21T21:16:22Z | https://github.com/opensearch-project/data-prepper/issues/4091 | 2,125,947,596 | 4,091 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some users want to split an input event into multiple events by splitting a field from the input event.
Say I have the following two events:
```
{"query" : "open source", "some_other_field" : "abc" }
{"query" : "data prepper documentation", "some_other_field" : "xyz" }
```
I'd like to get the following events:
```
{"query" : "open", "some_other_field" : "abc" }
{"query" : "source", "some_other_field" : "abc" }
{"query" : "data", "some_other_field" : "xyz" }
{"query" : "prepper", "some_other_field" : "xyz" }
{"query" : "documentation", "some_other_field" : "xyz" }
```
**Describe the solution you'd like**
Create a split event processor.
It will require a `field`, which is the field we are splitting on. The value of that field could be either a string or an array. When it is a string, the user must provide a delimiter. This could be expressed as a concrete value or a regex.
Example 1: Split events from `query` based on a regex:
```
processor:
  - split_event:
      field: query
      regex_delimiter: '\\s+'
```
Example 2: Split events from `query` based on a delimiter:
```
processor:
  - split_event:
      field: query
      delimiter: ' '
```
Example 3: Split events from an array `query`. In this example, we use `split_string` first purely to produce an array; this conveys how the processor works in the case of arrays.
```
processor:
  - split_string:
      entries:
        - source: "query"
          delimiter_regex: "\\s+"
  - split_event:
      field: query
```
| Create a split event processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4089/comments | 4 | 2024-02-07T22:50:30Z | 2024-02-28T20:28:26Z | https://github.com/opensearch-project/data-prepper/issues/4089 | 2,124,031,682 | 4,089 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the mutate event processors do not emit metrics like those we have in the grok, aggregate, and other processors.
**Describe the solution you'd like**
Add metrics to mutate event processors
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
N/A
| Add metrics to mutate event processors | https://api.github.com/repos/opensearch-project/data-prepper/issues/4088/comments | 0 | 2024-02-07T19:35:54Z | 2024-02-13T20:31:39Z | https://github.com/opensearch-project/data-prepper/issues/4088 | 2,123,736,429 | 4,088 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2024-21485 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>dash_core_components-2.0.0-py3-none-any.whl</b>, <b>dash_html_components-2.0.0-py3-none-any.whl</b></p></summary>
<p>
<details><summary><b>dash_core_components-2.0.0-py3-none-any.whl</b></p></summary>
<p>Core component suite for Dash</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/9e/a29f726e84e531a36d56cff187e61d8c96d2cc253c5bcef9a7695acb7e6a/dash_core_components-2.0.0-py3-none-any.whl">https://files.pythonhosted.org/packages/00/9e/a29f726e84e531a36d56cff187e61d8c96d2cc253c5bcef9a7695acb7e6a/dash_core_components-2.0.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **dash_core_components-2.0.0-py3-none-any.whl** (Vulnerable Library)
</details>
<details><summary><b>dash_html_components-2.0.0-py3-none-any.whl</b></p></summary>
<p>Vanilla HTML components for Dash</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/75/65/1b16b853844ef59b2742a7de74a598f376ac0ab581f0dcc34db294e5c90e/dash_html_components-2.0.0-py3-none-any.whl">https://files.pythonhosted.org/packages/75/65/1b16b853844ef59b2742a7de74a598f376ac0ab581f0dcc34db294e5c90e/dash_html_components-2.0.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **dash_html_components-2.0.0-py3-none-any.whl** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/2f4c8c9c7f8d4ec6e76c3653ef8446fcee35cd50">2f4c8c9c7f8d4ec6e76c3653ef8446fcee35cd50</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the package dash-core-components before 2.13.0; versions of the package dash-core-components before 2.0.0; versions of the package dash before 2.15.0; versions of the package dash-html-components before 2.0.0; versions of the package dash-html-components before 2.0.16 are vulnerable to Cross-site Scripting (XSS) when the href of the a tag is controlled by an adversary. An authenticated attacker who stores a view that exploits this vulnerability could steal the data that's visible to another user who opens that view - not just the data already included on the page, but they could also, in theory, make additional requests and access other data accessible to this user. In some cases, they could also steal the access tokens of that user, which would allow the attacker to act as that user, including viewing other apps and resources hosted on the same server.
**Note:**
This is only exploitable in Dash apps that include some mechanism to store user input to be reloaded by a different user.
<p>Publish Date: 2024-02-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-21485>CVE-2024-21485</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2024-21485">https://www.cve.org/CVERecord?id=CVE-2024-21485</a></p>
<p>Release Date: 2024-02-02</p>
<p>Fix Resolution: dash - 2.15.0, dash-core-components - 2.13.0, dash-html-components - 2.0.16</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2024-21485 (Medium) detected in dash_core_components-2.0.0-py3-none-any.whl, dash_html_components-2.0.0-py3-none-any.whl - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4083/comments | 2 | 2024-02-05T17:48:42Z | 2024-04-15T23:53:38Z | https://github.com/opensearch-project/data-prepper/issues/4083 | 2,119,125,717 | 4,083 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Creating an OpenSearch Ingestion pipeline in AWS is not possible using the following YAML:
```yaml
version: '2'
kafka-pipeline:
  source:
    kafka:
      bootstrap_servers:
        - '100.100.100.100:9092'
      topics:
        - name: Topic1
          group_id: groupID1
  sink:
    - opensearch:
        hosts: [ "https://XXXX.eu-central-1.amazonaws.com" ]
        aws:
          sts_role_arn: "arn:aws:iam::XXXXXXX:role/AWSROLE"
          region: "eu-central-1"
        index: "index_123"
```
It returns an error with the following message:
**"kafka.bootstrap_servers" is not configurable parameter for Amazon OpenSearch Ingestion pipelines using an aws kafka source: "$['log-pipeline']['source']['kafka']".**
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Opensearch Ingestion Pipelines
2. Create a new pipeline
3. Paste the yaml and validate
4. See error
**Expected behavior**
It should be valid and start a pipeline
**Screenshots**

**Environment (please complete the following information):**
- AWS eu-central-1
- OpenSearch Cluster Version 2.11 (AWS Latest)
- Service Software Version: OpenSearch_2_11_R20231113-P2 (latest)
| [BUG] bootstrap_servers is not configurable for Kafka Source in AWS | https://api.github.com/repos/opensearch-project/data-prepper/issues/4081/comments | 1 | 2024-02-03T17:05:34Z | 2024-02-06T20:45:04Z | https://github.com/opensearch-project/data-prepper/issues/4081 | 2,116,595,923 | 4,081 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Really large requests that are sent to an HTTP source pipeline can lead to Armeria exceptions which are not correctly classified by Data Prepper. Metrics show this as a 500 error. Logs included below:
```
2024-01-31T05:55:22.912 [armeria-common-worker-epoll-3-4] ERROR com.amazon.dataprepper.plugins.source.auth.HttpAuthDecorator - Error in Armeria request handling
com.linecorp.armeria.common.ContentTooLargeException: maxContentLength: 1048576, contentLength: 2070408, transferred: 1052672
at com.linecorp.armeria.common.ContentTooLargeExceptionBuilder.build(ContentTooLargeExceptionBuilder.java:93) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.Http1RequestDecoder.channelRead(Http1RequestDecoder.java:312) ~[armeria-1.26.4.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1471) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1345) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1385) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.flush.FlushConsolidationHandler.channelRead(FlushConsolidationHandler.java:152) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:509) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create an HTTP source pipeline.
2. Send data larger than `1048576` bytes
**Expected behavior**
This type of error should result in a 413 or similar 4XX error code
| [BUG] Armeria ContentTooLargeExceptions are misclassified as Internal Exceptions | https://api.github.com/repos/opensearch-project/data-prepper/issues/4080/comments | 2 | 2024-02-02T21:34:24Z | 2024-04-03T18:40:42Z | https://github.com/opensearch-project/data-prepper/issues/4080 | 2,115,836,251 | 4,080 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We support `epoch_second`, `epoch_milli`, and `epoch_nano`, but we have noticed some users need `epoch_micro` as well.
We have a workaround using `value_expression` in `add_entries` to convert microseconds to another format we support, but we might as well add an `epoch_micro` option to remove the need for an additional processor.
**Describe the solution you'd like**
Support epoch_micro in date processor patterns
**Describe alternatives you've considered (Optional)**
Use `value_expression` in `add_entries` to convert microseconds to another format we support
**Additional context**
N/A
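As a sketch of that workaround, the pipeline might look like the following. The arithmetic in `value_expression` and the key names are illustrative assumptions about the expression language, not a verified configuration:

```yaml
processor:
  - add_entries:
      entries:
        # Assumed: integer division in value_expression converts
        # a hypothetical timestamp_micros field to milliseconds,
        # which the date processor can then parse as epoch_milli.
        - key: "timestamp_millis"
          value_expression: "/timestamp_micros / 1000"
```

A native `epoch_micro` pattern would let the date processor consume the field directly and drop this extra step.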
| Support epoch_micro in date processor patterns | https://api.github.com/repos/opensearch-project/data-prepper/issues/4076/comments | 3 | 2024-02-02T16:44:50Z | 2024-02-15T16:16:17Z | https://github.com/opensearch-project/data-prepper/issues/4076 | 2,115,326,222 | 4,076 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As mentioned on the [initial feature request](https://github.com/opensearch-project/data-prepper/issues/2298) for the S3 DLQ, it would be great to add support for compressing the objects before storing them in the S3 bucket.
**Describe the solution you'd like**
The ability to have the failed objects compressed into common formats instead of storing them as plain JSON files, similar to the S3 sink plugin. An example config could be:
```
sink:
  - opensearch:
      dlq:
        s3:
          bucket: "my-dlq-bucket"
          key_path_prefix: "dlq-files/"
          region: "us-west-2"
          sts_role_arn: "arn:aws:iam::123456789012:role/dlq-role"
          compression: gzip
```
| Support compression for the DLQ S3 objects | https://api.github.com/repos/opensearch-project/data-prepper/issues/4074/comments | 0 | 2024-02-01T14:22:03Z | 2024-02-06T20:37:35Z | https://github.com/opensearch-project/data-prepper/issues/4074 | 2,112,650,347 | 4,074 |
[
"opensearch-project",
"data-prepper"
] | null | Run KMS-related integration tests as part of the GitHub Actions | https://api.github.com/repos/opensearch-project/data-prepper/issues/4040/comments | 0 | 2024-01-31T22:03:20Z | 2024-02-10T16:24:38Z | https://github.com/opensearch-project/data-prepper/issues/4040 | 2,111,049,703 | 4,040 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
While performing some load testing on our fluentbit -> data-prepper -> opensearch stack, we discovered that past a certain http request size (i.e. once a fluent-bit instance is under high enough load), data-prepper begins to throw the following errors:
```
2024-01-30T02:56:29.331 [armeria-common-worker-epoll-3-3] WARN com.linecorp.armeria.server.DefaultUnhandledExceptionsReporter - Observed 1 exception(s) that didn't reach a LoggingService in the last 10000ms(10000000000ns). Please consider adding a LoggingService as the outermost decorator to get detailed error logs. One of the thrown exceptions:
com.linecorp.armeria.server.HttpStatusException: 413 Request Entity Too Large
at com.linecorp.armeria.server.HttpStatusException.of0(HttpStatusException.java:105) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.HttpStatusException.of(HttpStatusException.java:99) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.Http1RequestDecoder.channelRead(Http1RequestDecoder.java:327) ~[armeria-1.26.4.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1471) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1345) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1385) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.flush.FlushConsolidationHandler.channelRead(FlushConsolidationHandler.java:152) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:509) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.linecorp.armeria.common.ContentTooLargeException: maxContentLength: 10485760, contentLength: 23500850, transferred: 10487808
at com.linecorp.armeria.common.ContentTooLargeExceptionBuilder.build(ContentTooLargeExceptionBuilder.java:93) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.Http1RequestDecoder.channelRead(Http1RequestDecoder.java:312) ~[armeria-1.26.4.jar:?]
... 38 more
```
```
2024-01-30T02:56:29.448 [armeria-common-worker-epoll-3-3] ERROR com.amazon.osis.HttpAuthorization - Unable to process the request: maxContentLength: 10485760, contentLength: 23500850, transferred: 10487808
```
Unfortunately fluent-bit doesn't seem to have any way to limit the size of chunks sent to an output (see https://github.com/fluent/fluent-bit/issues/1938). We tried to mitigate this by using gzip, but that just produced different (but similar) errors:
```
2024-01-30T23:14:22.778 [armeria-common-worker-epoll-3-1] ERROR org.opensearch.dataprepper.HttpRequestExceptionHandler - Unexpected exception handling HTTP request
com.linecorp.armeria.common.ContentTooLargeException: maxContentLength: 10485760
at com.linecorp.armeria.common.ContentTooLargeExceptionBuilder.build(ContentTooLargeExceptionBuilder.java:93) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.encoding.AbstractStreamDecoder.decode(AbstractStreamDecoder.java:55) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.HttpDecodedRequest.filter(HttpDecodedRequest.java:55) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.HttpDecodedRequest.filter(HttpDecodedRequest.java:38) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.stream.FilteredStreamMessage.lambda$collect$0(FilteredStreamMessage.java:166) ~[armeria-1.26.4.jar:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2272) ~[?:?]
at com.linecorp.armeria.common.stream.FilteredStreamMessage.collect(FilteredStreamMessage.java:142) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.stream.AggregationSupport.aggregate(AggregationSupport.java:126) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.FilteredHttpRequest.aggregate(FilteredHttpRequest.java:61) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.HttpRequest.aggregate(HttpRequest.java:565) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.HttpRequest.aggregate(HttpRequest.java:547) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve1(AnnotatedService.java:314) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve0(AnnotatedService.java:298) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve(AnnotatedService.java:268) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve(AnnotatedService.java:79) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.DecodingService.serve(DecodingService.java:118) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.DecodingService.serve(DecodingService.java:49) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService$ExceptionHandlingHttpService.serve(AnnotatedService.java:554) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:112) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:75) ~[armeria-1.26.4.jar:?]
at com.amazon.dataprepper.plugins.source.auth.HttpAuthDecorator.lambda$serveRequest$2(HttpAuthDecorator.java:125) ~[FizzyDrPepper-2.6.jar:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2272) ~[?:?]
at com.amazon.dataprepper.plugins.source.auth.HttpAuthDecorator.serveRequest(HttpAuthDecorator.java:93) ~[FizzyDrPepper-2.6.jar:?]
at com.amazon.dataprepper.plugins.source.auth.HttpAuthDecorator.serve(HttpAuthDecorator.java:89) ~[FizzyDrPepper-2.6.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:112) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:75) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.throttling.AbstractThrottlingService.lambda$serve$0(AbstractThrottlingService.java:63) ~[armeria-1.26.4.jar:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478) ~[?:?]
at com.linecorp.armeria.common.DefaultContextAwareRunnable.run(DefaultContextAwareRunnable.java:45) ~[armeria-1.26.4.jar:?]
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:413) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: io.netty.handler.codec.compression.DecompressionException: Decompression buffer has reached maximum size: 10485760
at io.netty.handler.codec.compression.ZlibDecoder.prepareDecompressBuffer(ZlibDecoder.java:80) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.compression.JdkZlibDecoder.decode(JdkZlibDecoder.java:265) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:344) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at com.linecorp.armeria.common.encoding.AbstractStreamDecoder.decode(AbstractStreamDecoder.java:48) ~[armeria-1.26.4.jar:?]
... 41 more
```
```
2024-01-30T23:14:22.779 [armeria-common-worker-epoll-3-1] ERROR com.amazon.osis.HttpAuthorization - Http request failed. Response code: 500
```
**To Reproduce**
Steps to reproduce the behavior:
Send large quantities of logs from a single fluent-bit instance to Data Prepper over HTTP.
**Expected behavior**
There should be a way to handle high volumes of logs coming from a single source.
**Environment (please complete the following information):**
Running on AWS Opensearch Ingestion Service (OSIS) with persistent buffering (Kafka).
**Additional context**
Any suggestions or workarounds would be much appreciated. I acknowledge that sending such huge volumes of logs from a single source isn't ideal but it is sometimes unavoidable in our environment.
| [BUG] HTTP input unable to handle large requests from fluent-bit | https://api.github.com/repos/opensearch-project/data-prepper/issues/4037/comments | 2 | 2024-01-30T23:18:29Z | 2024-02-06T20:36:05Z | https://github.com/opensearch-project/data-prepper/issues/4037 | 2,108,976,102 | 4,037 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `upsert` action is supposed to insert a document if it does not exist, and update a document if it does exist. However, the `upsert` action as part of a bulk request from the `opensearch` sink receives a `document missing` error from OpenSearch.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure and start a pipeline with an opensearch sink with `action: upsert`
2. Send an Event that does not have an existing `document_id` for that document in the OpenSearch index
3. Observe the error
```
WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation
= Update, error = [2]: document missing
```
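For reference, OpenSearch's bulk API treats an `update` action as a plain update unless the request body opts into upsert behavior; a payload shaped like the following (index and id are illustrative) inserts the document when it is missing:
```json
{ "update": { "_index": "my-index", "_id": "my-doc-id" } }
{ "doc": { "status": "active" }, "doc_as_upsert": true }
```
The error above suggests the sink is emitting the update body without `doc_as_upsert` (or an `upsert` document).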
**Expected behavior**
The `upsert` action should insert the document when no document with the given `document_id` exists, rather than failing with a `document missing` error.
| [BUG] Upsert action requires existing document in OpenSearch | https://api.github.com/repos/opensearch-project/data-prepper/issues/4036/comments | 2 | 2024-01-30T21:04:02Z | 2024-03-06T17:39:51Z | https://github.com/opensearch-project/data-prepper/issues/4036 | 2,108,773,751 | 4,036 |
[
"opensearch-project",
"data-prepper"
] | See #3356 for details on unnecessary dependencies in geoip. There may be others as well. | Remove unnecessary dependencies from the geoip-processor. | https://api.github.com/repos/opensearch-project/data-prepper/issues/4035/comments | 1 | 2024-01-30T20:58:54Z | 2024-03-21T16:22:55Z | https://github.com/opensearch-project/data-prepper/issues/4035 | 2,108,765,939 | 4,035 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
There are several exceptions which the `grok` processor catches that do not tag the event.
**To Reproduce**
Run a pipeline with a grok pattern that times out. The event is not tagged.
**Expected behavior**
I expect that any failure to match will result in a tag.
| [BUG] Many Grok failures do not tag events | https://api.github.com/repos/opensearch-project/data-prepper/issues/4031/comments | 1 | 2024-01-30T00:36:27Z | 2024-01-30T20:04:36Z | https://github.com/opensearch-project/data-prepper/issues/4031 | 2,106,730,774 | 4,031 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When running Data Prepper's `grok` processor with a timeout configured, the thread running the grok match continues even after the timeout elapses.
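This is the usual semantics of waiting on a future with a timeout: the wait gives up, but the worker keeps going unless it is explicitly interrupted. A minimal Python analogue of the behavior described above:

```python
import concurrent.futures
import time

def slow_match():
    time.sleep(0.5)  # stand-in for a grok pattern that matches catastrophically slowly
    return "matched"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = pool.submit(slow_match)
try:
    # The caller stops waiting after 100 ms...
    future.result(timeout=0.1)
except concurrent.futures.TimeoutError:
    pass

# ...but the worker thread keeps running: a future that has already started
# cannot be cancelled, and the work completes anyway.
print(future.cancel())   # False
pool.shutdown(wait=True)
print(future.result())   # matched
```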
| [BUG] Grok processor match requests continue after timeout | https://api.github.com/repos/opensearch-project/data-prepper/issues/4026/comments | 0 | 2024-01-29T21:07:02Z | 2024-01-29T22:44:56Z | https://github.com/opensearch-project/data-prepper/issues/4026 | 2,106,429,743 | 4,026 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We want to be able to parse XML documents in log fields to make them more easily searchable in OpenSearch. This would mean we wouldn't have to map the field as a `keyword` and use expensive wildcard search queries to search for values within the document.
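For illustration, a pipeline entry for such a processor might look like the following. This is purely a hypothetical sketch; the processor and its option names do not exist in Data Prepper today and simply mirror the shape of the existing `parse_json` processor:
```yaml
processor:
  - parse_xml:                     # hypothetical processor name
      source: "message"            # field containing the raw XML document
      destination: "parsed_xml"    # where to write the resulting key/value map
      tags_on_failure: ["_xml_parse_failure"]
```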
**Describe the solution you'd like**
Implement an XML filter, similar to what is available in Logstash - https://www.elastic.co/guide/en/logstash/current/plugins-filters-xml.html. | XML Filter | https://api.github.com/repos/opensearch-project/data-prepper/issues/4024/comments | 3 | 2024-01-29T05:30:35Z | 2024-03-07T17:17:48Z | https://github.com/opensearch-project/data-prepper/issues/4024 | 2,104,682,129 | 4,024 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When sending data to an otel source, both the `otel-collector` and the source of the pipeline must align on the `compression` used. This compression defaults to `none` in the sources.
When sending compressed data to a source with compression `none`, an error like below is currently shown
```
2024-01-24T17:04:23.769 [armeria-common-worker-epoll-3-1] ERROR org.opensearch.dataprepper.GrpcRequestExceptionHandler - Unexpected exception handling gRPC request
io.grpc.StatusRuntimeException: INTERNAL: Invalid protobuf byte sequence
at io.grpc.Status.asRuntimeException(Status.java:529) ~[grpc-api-1.58.0.jar:1.58.0]
at com.linecorp.armeria.internal.common.grpc.GrpcMessageMarshaller.deserializeProto(GrpcMessageMarshaller.java:253) ~[armeria-grpc-1.26.4.jar:?]
at com.linecorp.armeria.internal.common.grpc.GrpcMessageMarshaller.deserializeRequest(GrpcMessageMarshaller.java:118) ~[armeria-grpc-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.grpc.AbstractServerCall.onRequestMessage(AbstractServerCall.java:343) ~[armeria-grpc-1.26.4.jar:?]
at com.linecorp.armeria.server.grpc.UnaryServerCall.lambda$startDeframing$0(UnaryServerCall.java:107) ~[armeria-grpc-1.26.4.jar:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2079) ~[?:?]
at com.linecorp.armeria.internal.common.stream.FixedStreamMessage.collect(FixedStreamMessage.java:235) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.common.stream.FixedStreamMessage.lambda$collect$2(FixedStreamMessage.java:203) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.DefaultContextAwareRunnable.run(DefaultContextAwareRunnable.java:45) ~[armeria-1.26.4.jar:?]
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:413) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: Protocol message tag had invalid wire type.
at com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:142) ~[protobuf-java-3.24.3.jar:?]
at com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:526) ~[protobuf-java-3.24.3.jar:?]
at com.google.protobuf.GeneratedMessageV3.parseUnknownField(GeneratedMessageV3.java:332) ~[protobuf-java-3.24.3.jar:?]
at io.opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest.<init>(ExportLogsServiceRequest.java:63) ~[opentelemetry-proto-0.16.0-alpha.jar:0.16.0]
at io.opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest.<init>(ExportLogsServiceRequest.java:9) ~[opentelemetry-proto-0.16.0-alpha.jar:0.16.0]
at io.opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest$1.parsePartialFrom(ExportLogsServiceRequest.java:935) ~[opentelemetry-proto-0.16.0-alpha.jar:0.16.0]
at io.opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest$1.parsePartialFrom(ExportLogsServiceRequest.java:929) ~[opentelemetry-proto-0.16.0-alpha.jar:0.16.0]
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:86) ~[protobuf-java-3.24.3.jar:?]
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:91) ~[protobuf-java-3.24.3.jar:?]
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) ~[protobuf-java-3.24.3.jar:?]
at com.linecorp.armeria.internal.common.grpc.GrpcMessageMarshaller.deserializeProto(GrpcMessageMarshaller.java:243) ~[armeria-grpc-1.26.4.jar:?]
... 18 more
```
**Expected behavior**
Return a 400 bad request with a message indicating that it could be related to compression mismatch.
**Additional context**
It may be possible to support compression dynamically based on what the otel-collector tries to send: the collector asks the pipeline source whether it supports a given compression, the source answers yes or no, and the pipeline then handles the data for that compression type if it is supported; otherwise it returns a 400 Bad Request.
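In the meantime, aligning the two sides explicitly avoids the error. A sketch (endpoint and port are illustrative):
```yaml
# Data Prepper pipeline source: accept gzip-compressed OTLP
source:
  otel_logs_source:
    compression: gzip

# otel-collector exporter: send gzip-compressed OTLP
exporters:
  otlp:
    endpoint: "data-prepper:21892"
    compression: gzip
```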
| [BUG] otel sources should show a more clear exception when receiving data that cannot be processed based on the configured compression type | https://api.github.com/repos/opensearch-project/data-prepper/issues/4022/comments | 0 | 2024-01-25T21:27:51Z | 2024-04-09T22:30:33Z | https://github.com/opensearch-project/data-prepper/issues/4022 | 2,101,197,303 | 4,022 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Sometimes creating a small pipeline that reads from a file helps test a pipeline before transitioning to S3.
**Describe the solution you'd like**
Support input codecs on the `file` source.
e.g.
```
source:
file:
path: my-file.log
codec:
newline:
```
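With a `newline` codec, each line of the file would become one event; roughly (an illustrative sketch, assuming the codec's default `message` key):

```python
import tempfile

def read_events(path):
    """One event per line, as a newline codec would emit (sketch only)."""
    with open(path) as f:
        return [{"message": line.rstrip("\n")} for line in f]

with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("first line\nsecond line\n")
    path = f.name

events = read_events(path)
print(events)  # [{'message': 'first line'}, {'message': 'second line'}]
```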
| Support codec on the file source to help with testing | https://api.github.com/repos/opensearch-project/data-prepper/issues/4018/comments | 0 | 2024-01-25T15:42:20Z | 2024-02-09T23:16:48Z | https://github.com/opensearch-project/data-prepper/issues/4018 | 2,100,662,461 | 4,018 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper should use the AWS SDK v2 exclusively. This will reduce the number of dependencies it pulls in and allow all projects to use the AWS extension.
We did this once before: #1562
**Describe the solution you'd like**
First, remove all existing dependencies:
https://github.com/opensearch-project/data-prepper/blob/d501af69c10b93efbb0c68abeeda4f5023323e18/data-prepper-plugins/translate-processor/build.gradle#L14-L15
https://github.com/opensearch-project/data-prepper/blob/334239bb662f6176c2ab6c3423b8f30214355488/data-prepper-plugins/kafka-plugins/build.gradle#L49
https://github.com/opensearch-project/data-prepper/blob/a9e419af24810cad29a950205376bc5fc2b89391/data-prepper-plugins/http-sink/build.gradle#L29
Second, I'd like a rule in Gradle that disallows using the AWS SDK v1.
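One way to express such a rule (a sketch; AWS SDK v1 artifacts all use the `com.amazonaws` group coordinate) is a resolution-time check in the root `build.gradle`:
```groovy
// Fail any build that resolves an AWS SDK v1 artifact (group com.amazonaws).
allprojects {
    configurations.all {
        resolutionStrategy.eachDependency { details ->
            if (details.requested.group == 'com.amazonaws') {
                throw new GradleException(
                    "AWS SDK v1 dependency detected: ${details.requested}. " +
                    "Use software.amazon.awssdk (v2) instead.")
            }
        }
    }
}
```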
| Remove the AWS SDK v1 (again) | https://api.github.com/repos/opensearch-project/data-prepper/issues/4017/comments | 0 | 2024-01-24T23:16:56Z | 2024-01-24T23:18:15Z | https://github.com/opensearch-project/data-prepper/issues/4017 | 2,099,283,753 | 4,017 |
[
"opensearch-project",
"data-prepper"
] | Add a decompress processor:
```
processor:
- decompress:
keys: [my_gzip_key]
type: gzip
```
Replace the value of the existing key with the decompressed value. Users likely don't want to save compressed values in OpenSearch.
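The intended in-place replacement can be sketched as follows. This is illustrative Python, assuming compressed values arrive base64-encoded, which is a common way to carry binary data in JSON:

```python
import base64
import gzip

def decompress_keys(event, keys, encoding="base64"):
    """Replace each key's compressed value with its decompressed string, in place."""
    for key in keys:
        raw = base64.b64decode(event[key]) if encoding == "base64" else event[key]
        event[key] = gzip.decompress(raw).decode("utf-8")
    return event

# Round-trip: compress a value, then run it through the sketch.
payload = base64.b64encode(gzip.compress(b'{"status": 200}')).decode("ascii")
event = {"my_gzip_key": payload}
print(decompress_keys(event, ["my_gzip_key"])["my_gzip_key"])  # {"status": 200}
```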
See #3841 for more details.
| Decompress processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4016/comments | 3 | 2024-01-24T20:36:39Z | 2024-02-14T20:08:38Z | https://github.com/opensearch-project/data-prepper/issues/4016 | 2,099,049,668 | 4,016 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We need to differentiate requests from an instance of Data Prepper that our solution is using and from the rest of a cluster's clients.
To migrate data, our [solution](https://github.com/opensearch-project/opensearch-migrations) performs a bulk move of data from a source cluster to a target cluster. Independently, individual requests are recorded from the source cluster and replayed to the target, both to keep the target cluster in sync and to compare the behavior of the two clusters.
When we capture traffic, depending on the order in which a customer performs each step, there may be overlap with the Data Prepper requests to the source. We'd like to be able to mask those requests out of our replay. At the very least, they would create noisier data for users, and they could cause confusion, since users would see updates replayed onto data that was already migrated with Data Prepper. Allowing the customer (or us) to set a unique value that we can easily filter on the capture side would eliminate this problem and be much more efficient (much lower costs).
**Describe the solution you'd like**
I'd like to have a command line flag to set the user-agent HTTP header for all requests that Data Prepper sends. A default value of something different than the ES/OS user-agent may be beneficial too.
**Describe alternatives you've considered (Optional)**
Other HTTP header values could work too, but user-agent seems like the most natural and easiest one to explain. For our greater solution, dealing with the duplicate data better is possible, but it 1) would take considerable effort to mitigate and 2) would still be expensive, as we aren't able to remove the data passively.
**Additional context**
N/A
| Allow users to override the user-agent | https://api.github.com/repos/opensearch-project/data-prepper/issues/4015/comments | 2 | 2024-01-24T15:37:41Z | 2024-02-08T20:30:50Z | https://github.com/opensearch-project/data-prepper/issues/4015 | 2,098,540,404 | 4,015 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
End-to-end acknowledgements in Data Prepper do not support the aggregate processor because events may be forwarded to a remote peer. But if local aggregation is used (`discovery_mode: local_node` in the peer forwarder configuration), events are not forwarded to a remote peer, and end-to-end acknowledgements may be supported in that case.
For example, if an aggregate processor is configured with an aggregation action and N events are sent to the processor, the individual events are not sent to the sink, yet acknowledgements for each of the individual events are sent back to the source. The source then considers the data durable even though the aggregated event itself has not (yet) been sent to the sink. It is also possible that, due to some unexpected failure after the aggregation period concludes, Data Prepper fails to send the aggregated event to the sink.
**Describe the solution you'd like**
The solution is to create an aggregate event handle and register it with every AcknowledgementSet that contributes to the aggregate event (before it is concluded). Upon conclusion of the aggregation, if no aggregated events are generated, release the aggregate event handle. If any events are generated, associate the event handle with all generated events.
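To make the lifecycle concrete, here is a small Python model of the proposal. These are NOT Data Prepper's actual classes; names and shapes are illustrative only:

```python
class AcknowledgementSet:
    def __init__(self):
        self.handles = set()
        self.released = set()

    def add(self, handle):
        self.handles.add(handle)

    def release(self, handle):
        self.released.add(handle)

    def complete(self):
        # The set acknowledges back to the source only once every
        # registered handle has been released.
        return self.handles == self.released


def conclude_aggregation(ack_sets, aggregate_handle, aggregated_events):
    if not aggregated_events:
        # Nothing was produced: release the handle so the contributing
        # acknowledgement sets can complete.
        for ack_set in ack_sets:
            ack_set.release(aggregate_handle)
    else:
        # Attach the handle to every generated event; the sink releases it
        # after a successful write.
        for event in aggregated_events:
            event["handle"] = aggregate_handle


# The aggregate handle is registered with every contributing set up front.
sets = [AcknowledgementSet() for _ in range(3)]
handle = object()
for ack_set in sets:
    ack_set.add(handle)

conclude_aggregation(sets, handle, aggregated_events=[])
print(all(ack_set.complete() for ack_set in sets))  # True: empty conclusion releases

sets2 = [AcknowledgementSet() for _ in range(2)]
handle2 = object()
for ack_set in sets2:
    ack_set.add(handle2)
events = [{"data": 1}, {"data": 2}]
conclude_aggregation(sets2, handle2, events)
print(all(e["handle"] is handle2 for e in events))  # True: handle travels with events
```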
| Add acknowledgements support to aggregate processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4010/comments | 0 | 2024-01-23T22:59:57Z | 2024-01-30T20:33:07Z | https://github.com/opensearch-project/data-prepper/issues/4010 | 2,097,126,571 | 4,010 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When building the project locally on the `main` branch, multiple date-processor tests fail. The same tests appear to pass in GitHub Actions. Results from the [`Gradle Build`](https://github.com/opensearch-project/data-prepper/blob/main/.github/workflows/gradle.yml) workflow:
```
<testsuite name="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" tests="39" skipped="0" failures="0" errors="0" timestamp="2024-01-22T19:03:45" hostname="fv-az585-104" time="1.235">
<properties/>
<testcase name="match_with_missing_hours_minutes_seconds_adds_zeros_test()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.676"/>
<testcase name="[1] en-US" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.022"/>
<testcase name="[2] zh-CN" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.102"/>
<testcase name="[3] it-IT" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.078"/>
<testcase name="match_with_wrong_patterns_return_same_record_test_without_timestamp()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.009"/>
<testcase name="[1] yyyy MM dd HH mm ss" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.01"/>
<testcase name="date_when_does_not_run_date_processor_for_event_with_date_when_as_false()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.007"/>
<testcase name="[1] MMM/dd" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.014"/>
<testcase name="[2] MM dd" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.009"/>
<testcase name="[1] epoch_second, 1705950226, epoch_milli, 1705950226000" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.012"/>
<testcase name="[2] epoch_second, 1705950226, epoch_nano, 1705950226000000000" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[3] epoch_second, 1705950226, yyyy-MMM-dd HH:mm:ss.SSS, 2024-Jan-22 19:03:46.000" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[4] epoch_second, 1705950226, yyyy-MM-dd'T'HH:mm:ss.SSSXXX, 2024-01-22T19:03:46.000Z" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[5] epoch_milli, 1705950226581, epoch_second, 1705950226" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[6] epoch_milli, 1705950226581, epoch_nano, 1705950226581000000" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[7] epoch_milli, 1705950226581, yyyy-MMM-dd HH:mm:ss.SSS, 2024-Jan-22 19:03:46.581" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.01"/>
<testcase name="[8] epoch_milli, 1705950226581, yyyy-MM-dd'T'HH:mm:ss.SSSXXX, 2024-01-22T19:03:46.581Z" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.007"/>
<testcase name="[9] epoch_nano, 1705950226818194066, epoch_second, 1705950226" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[10] epoch_nano, 1705950226818194066, epoch_milli, 1705950226818" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[11] epoch_nano, 1705950226818194066, yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan-22 19:03:46.818194066Z" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[12] epoch_nano, 1705950226818194066, yyyy-MM-dd'T'HH:mm:ss.SSSXXX, 2024-01-22T19:03:46.818Z" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[13] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan-22 19:03:46.818194066Z, epoch_second, 1705950226" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.009"/>
<testcase name="[14] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan-22 19:03:46.818194066Z, epoch_milli, 1705950226818" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[15] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan-22 19:03:46.818194066Z, epoch_nano, 1705950226818194066" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.009"/>
<testcase name="[16] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan-22 19:03:46.818194066Z, yyyy-MM-dd'T'HH:mm:ss.SSSXXX, 2024-01-22T19:03:46.818Z" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.009"/>
<testcase name="match_with_epoch_second_pattern()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.006"/>
<testcase name="[1] en_US" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.01"/>
<testcase name="[2] fr_FR" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.015"/>
<testcase name="[3] ja_JP" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.015"/>
<testcase name="match_with_default_destination_test()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.005"/>
<testcase name="[1] MMM/dd/uuuu" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[2] yyyy MM dd" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.007"/>
<testcase name="[1] America/New_York" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.01"/>
<testcase name="[2] America/Los_Angeles" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.008"/>
<testcase name="[3] Australia/Adelaide" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.007"/>
<testcase name="[4] Japan" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.007"/>
<testcase name="from_time_received_with_custom_destination_test()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.005"/>
<testcase name="from_time_received_with_default_destination_test()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.003"/>
<testcase name="match_with_custom_destination_test()" classname="org.opensearch.dataprepper.plugins.processor.date.DateProcessorTests" time="0.005"/>
<system-out>
<![CDATA[ ]]>
</system-out>
<system-err>
<![CDATA[ SLF4J: No SLF4J providers were found. SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See https://www.slf4j.org/codes.html#noProviders for further details. ]]>
</system-err>
</testsuite>
```
**To Reproduce**
Steps to reproduce the behavior:
1. Clone the repo
2. Checkout `main` branch
3. Run `./gradlew build`
4. See error:
```
> Task :data-prepper-plugins:date-processor:test
DateProcessorTests > match_without_year_test(String) > [1] MMM/dd FAILED
java.lang.NullPointerException at DateProcessorTests.java:510
DateProcessorTests > match_with_different_input_output_formats(String, Object, String, Object) > [13] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan.-22 11:39:29.367126542-08:00, epoch_second, 1705952369 FAILED
java.lang.AssertionError at DateProcessorTests.java:334
DateProcessorTests > match_with_different_input_output_formats(String, Object, String, Object) > [14] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan.-22 11:39:29.367126542-08:00, epoch_milli, 1705952369367 FAILED
java.lang.AssertionError at DateProcessorTests.java:334
DateProcessorTests > match_with_different_input_output_formats(String, Object, String, Object) > [15] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan.-22 11:39:29.367126542-08:00, epoch_nano, 1705952369367126542 FAILED
java.lang.AssertionError at DateProcessorTests.java:334
DateProcessorTests > match_with_different_input_output_formats(String, Object, String, Object) > [16] yyyy-MMM-dd HH:mm:ss.nnnnnnnnnXXX, 2024-Jan.-22 11:39:29.367126542-08:00, yyyy-MM-dd'T'HH:mm:ss.SSSXXX, 2024-01-22T11:39:29.367-08:00 FAILED
java.lang.AssertionError at DateProcessorTests.java:334
DateProcessorTests > match_with_default_destination_test() FAILED
java.lang.NullPointerException at DateProcessorTests.java:535
DateProcessorTests > match_with_different_year_formats_test(String) > [1] MMM/dd/uuuu FAILED
java.lang.NullPointerException at DateProcessorTests.java:453
DateProcessorTests > match_with_custom_destination_test() FAILED
java.lang.NullPointerException at DateProcessorTests.java:535
70 tests completed, 8 failed
> Task :data-prepper-plugins:date-processor:test FAILED
```
**Expected behavior**
All tests should pass locally.
**Environment (please complete the following information):**
- OS: macOS Sonoma 14.2.1
- main branch on commit 41eab7326a1ef1e68110fe97e39c7e7b4f132dc6
- Java version 17.0.4 | [BUG] Failing DateProcessorTests when building | https://api.github.com/repos/opensearch-project/data-prepper/issues/3999/comments | 4 | 2024-01-22T22:27:54Z | 2025-01-15T00:12:26Z | https://github.com/opensearch-project/data-prepper/issues/3999 | 2,094,882,232 | 3,999 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Peer Forwarder sends some events to a remote peer based on the hash function. If the inner processor of Peer Forwarder has a `when` condition, an event may be forwarded to a remote peer first and then get dropped because the `when` condition evaluates to false. This is very sub-optimal. Also, in some cases an option to force local aggregation may be needed.
**Describe the solution you'd like**
Add a new API to `RequiresPeerForwarding.java`, something like
```
Boolean shouldForwardToRemotePeer(final Event event);
```
This will allow innerProcessor to evaluate `when` condition and also check if `localOnly` option is configured.
For example, aggregate processor could implement the new API as follows
```
@Override
public Boolean shouldForwardToRemotePeer(Event event) {
if (localOnly || (whenCondition != null && !expressionEvaluator.evaluateConditional(whenCondition, event))) {
return false;
}
return true;
}
```
| Allow peer forwarder to skip sending events to remote peer | https://api.github.com/repos/opensearch-project/data-prepper/issues/3996/comments | 3 | 2024-01-22T06:07:23Z | 2024-01-30T20:53:24Z | https://github.com/opensearch-project/data-prepper/issues/3996 | 2,093,158,527 | 3,996 |
[
"opensearch-project",
"data-prepper"
] | This is part of the solution toward #1025.
Generation algorithm:
```
${pluginType}${incrementedCount > 1 ? incrementedCount : ''}
```
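For illustration, a minimal Python sketch of this scheme (a hypothetical helper, not Data Prepper's actual code), assuming one counter is kept per plugin type:

```python
from collections import defaultdict

def make_id_generator():
    # one counter per plugin type, e.g. "opensearch"
    counts = defaultdict(int)

    def next_id(plugin_type):
        counts[plugin_type] += 1
        count = counts[plugin_type]
        # ${pluginType}${incrementedCount > 1 ? incrementedCount : ''}
        return plugin_type if count == 1 else f"{plugin_type}{count}"

    return next_id

next_id = make_id_generator()
ids = [next_id("opensearch"), next_id("opensearch"), next_id("grok")]
# ids == ["opensearch", "opensearch2", "grok"]
```

The first instance of a type keeps the bare type name, so pipelines with a single processor of each type are unaffected.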
Where `incrementedCount` is unique per plugin type (e.g. `opensearch`). | Auto-generate unique component Ids | https://api.github.com/repos/opensearch-project/data-prepper/issues/3995/comments | 0 | 2024-01-19T22:49:46Z | 2024-10-07T14:01:31Z | https://github.com/opensearch-project/data-prepper/issues/3995 | 2,091,583,330 | 3,995 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently Data Prepper does not support multiple instances of processors that require peer-forwarding in a single pipeline.
```
processor:
- aggregate:
- aggregate:
```
This is primarily because Data Prepper does not currently distinguish between different plugins of the same type. So when forwarding events, the event would go back to the first `aggregate` processor.
https://github.com/opensearch-project/data-prepper/blob/db3325484be26d9abc6d7f5977695e6ac8ebefc5/data-prepper-core/src/main/java/org/opensearch/dataprepper/peerforwarder/PeerForwarderProvider.java#L39-L42
There is also currently a restriction on sharing the same identification keys.
https://github.com/opensearch-project/data-prepper/blob/6d74ec488306992961b7685d86fb30522272bd4e/data-prepper-core/src/main/java/org/opensearch/dataprepper/peerforwarder/PeerForwardingProcessorDecorator.java#L54-L62
**Describe the solution you'd like**
Open up this functionality.
When routing events between peers, Data Prepper includes a `destinationPluginId` property.
https://github.com/opensearch-project/data-prepper/blob/5bc6a2c79cbb6afa21b0b4a0b450b08da7d3031e/data-prepper-core/src/main/java/org/opensearch/dataprepper/peerforwarder/model/PeerForwardingEvents.java#L21
Currently, pluginIds are not unique. Thus, if we forwarded events Data Prepper would not know which `aggregate` processor to use - the first or second. We should update Data Prepper by:
1. Supporting dynamic pluginIds as noted in #1025 and #3995. We do not need configurable plugin Ids for this.
2. When forwarding events, use the pluginId instead of the plugin type.
By generating component Ids such as `aggregate` and `aggregate2`, Data Prepper can forward events to peers and target the correct processor in the processor chain.
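As a rough illustration (hypothetical names, not the actual Data Prepper classes), routing forwarded events by the generated plugin id rather than the plugin type could look like:

```python
class Processor:
    """Stand-in for a peer-forwarding processor instance."""
    def __init__(self, plugin_id):
        self.plugin_id = plugin_id

    def execute(self, events):
        return [f"{self.plugin_id}:{event}" for event in events]

# generated ids distinguish two processors of the same type in one pipeline
registry = {
    "aggregate": Processor("aggregate"),
    "aggregate2": Processor("aggregate2"),
}

def route_forwarded_events(events, destination_plugin_id):
    # the unique id (not the plugin type) selects the correct instance
    return registry[destination_plugin_id].execute(events)
```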
## Tasks
- [ ] #3995
- [ ] Investigate the identification keys restriction and solution
- [ ] Disable the restriction on the number of processors of the same type
| Support multiple peer-forwarding components in the same pipeline | https://api.github.com/repos/opensearch-project/data-prepper/issues/3994/comments | 0 | 2024-01-19T22:49:31Z | 2024-06-18T19:39:37Z | https://github.com/opensearch-project/data-prepper/issues/3994 | 2,091,582,883 | 3,994 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I removed some try-catch lines in the `ParquetOutputCodecTest` test. These tests began to fail.
Something is definitely wrong as the lines in question are part of the setup. It may be that the tests are not actually testing what they claim to test.
**To Reproduce**
Remove these lines:
https://github.com/opensearch-project/data-prepper/blob/c4acf28be243b2fad676ba9863250c673be06d70/data-prepper-plugins/s3-sink/src/test/java/org/opensearch/dataprepper/plugins/codec/parquet/ParquetOutputCodecTest.java#L576-L578
https://github.com/opensearch-project/data-prepper/blob/c4acf28be243b2fad676ba9863250c673be06d70/data-prepper-plugins/s3-sink/src/test/java/org/opensearch/dataprepper/plugins/codec/parquet/ParquetOutputCodecTest.java#L584-L585
Run the tests.
Quite a few of them fail.
**Expected behavior**
I should be able to run tests without this code throwing exceptions at all.
**Additional context**
This was found while removing `ex.printStackTrace()` lines:
https://github.com/opensearch-project/data-prepper/pull/3991/files#r1459559030 | [BUG] ParquetOutputCodecTest may not be testing correctly | https://api.github.com/repos/opensearch-project/data-prepper/issues/3992/comments | 2 | 2024-01-19T18:59:01Z | 2024-07-02T16:26:17Z | https://github.com/opensearch-project/data-prepper/issues/3992 | 2,091,151,299 | 3,992 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
It seems that the put_all aggregate function only works with "root" keys and does not add new nested keys.
For example, I have these 2 postfix events that I need to aggregate, based on /host/id and /email/local_id :
```json
{
"@timestamp": "2024-01-19T13:25:35.503436Z",
"postfix": {
"nrcpt": "1",
"size": "17603"
},
"application": "postfix",
"host": {
"id": "95e0888c1df24d79a4bc827f9636cca7"
},
"email": {
"from": {
"address": "EMAIL_FROM"
},
"local_id": "7A20930009B"
},
"message": "queue active"
}
```
```json
{
"@timestamp": "2024-01-19T13:25:35.527242Z",
"postfix": {
"relay": "localhost[::1]:24",
"delays": "0/0/0/0.02",
"dsn": "2.1.5",
"delay": "0.03",
"status": "sent"
},
"application": "postfix",
"host": {
"id": "95e0888c1df24d79a4bc827f9636cca7"
},
"email": {
"local_id": "7A20930009B",
"to": {
"address": "EMAIL_TO"
}
},
"test": "OK"
}
```
The put_all function only adds keys that are at the '/' (root) level of the JSON, and drops the other keys:
```json
{
"@timestamp": "2024-01-19T13:25:35.503436Z",
"postfix": {
"nrcpt": "1",
"size": "17603"
},
"application": "postfix",
"host": {
"id": "95e0888c1df24d79a4bc827f9636cca7"
},
"email": {
"from": {
"address": "EMAIL_FROM"
},
"local_id": "7A20930009B"
},
"message": "queue active",
"test": "ok"
}
```
**To Reproduce**
Steps to reproduce the behavior:
```yaml
pipeline:
processor:
- aggregate:
identification_keys:
- "/host/id"
- "/email/local_id"
action:
put_all:
group_duration: "10s"
aggregate_when: '/email/local_id != null'
```
**Expected behavior**
It needs to create the missing entries in the nested keys :
```json
{
"@timestamp": "2024-01-19T13:25:35.503436Z",
"postfix": {
"nrcpt": "1",
"size": "17603",
"relay": "localhost[::1]:24",
"delays": "0/0/0/0.02",
"dsn": "2.1.5",
"delay": "0.03",
"status": "sent"
},
"application": "postfix",
"host": {
"id": "95e0888c1df24d79a4bc827f9636cca7"
},
"email": {
"from": {
"address": "EMAIL_FROM"
},
"to": {
"address": "EMAIL_TO"
    },
"local_id": "7A20930009B"
},
"message": "queue active",
"test": "ok"
}
```
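For reference, the expected behavior amounts to a recursive (deep) merge of the two events rather than a top-level merge. An illustrative Python sketch, not Data Prepper's implementation:

```python
def deep_merge(base, incoming):
    """Merge `incoming` into `base`: nested maps are merged recursively,
    other values overwrite."""
    for key, value in incoming.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

first = {"email": {"from": {"address": "EMAIL_FROM"}, "local_id": "7A20930009B"}}
second = {"email": {"local_id": "7A20930009B", "to": {"address": "EMAIL_TO"}}, "test": "OK"}
merged = deep_merge(first, second)
# merged["email"] now contains "from", "to", and "local_id"
```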
**Environment (please complete the following information):**
- latest dataprepper docker image (2.6.1)
| [BUG] aggregate put_all not working with nested keys | https://api.github.com/repos/opensearch-project/data-prepper/issues/3989/comments | 0 | 2024-01-19T14:54:21Z | 2024-04-16T19:52:17Z | https://github.com/opensearch-project/data-prepper/issues/3989 | 2,090,727,465 | 3,989 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
`update`, `upsert`, and `delete` bulk actions in the opensearch sink require knowing the `document_id` of the document to update or delete. However, if `document_id` is not provided in the `opensearch` sink when one of these actions is used, the following NPE is hit when converting the failed document to a DLQ object
```
Caused by: java.lang.NullPointerException
at org.opensearch.dataprepper.plugins.sink.opensearch.dlq.FailedBulkOperationConverter.convertDocumentToGenericMap(FailedBulkOperationConverter.java:64) ~[opensearch-2.6.1.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.dlq.FailedBulkOperationConverter.convertToDlqObject(FailedBulkOperationConverter.java:38) ~[opensearch-2.6.1.jar:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
```
**Expected behavior**
Do not crash the pipeline and send the document to DLQ without any document_id
**Additional context**
Related to #3933
| [BUG] Using update, upsert, or delete actions without specifying document_id crashes the pipeline with NPE | https://api.github.com/repos/opensearch-project/data-prepper/issues/3988/comments | 2 | 2024-01-18T23:21:42Z | 2024-03-12T19:36:33Z | https://github.com/opensearch-project/data-prepper/issues/3988 | 2,089,253,450 | 3,988 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Sometimes Data Prepper shuts down without a clear reason as to why.
```
2024-01-18T06:23:32.523 [pool-33-thread-1] INFO org.opensearch.dataprepper.DataPrepper - Shutting down pipeline: my-pipeline
2024-01-18T06:23:32.524 [pool-33-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [my-pipeline] - Received shutdown signal with buffer drain timeout PT0S, processor shutdown timeout PT5M, and sink shutdown timeout PT30S. Initiating the shutdown process
2024-01-18T06:23:34.564 [pool-33-thread-1] INFO org.opensearch.dataprepper.plugins.source.loghttp.HTTPSource - Stopped http source.
```
**Expected behavior**
I'd like to see clearer logging when the pipeline shuts down. For example, was it from a call to `POST /shutdown`?
**Environment (please complete the following information):**
- Data Prepper: 2.6.1
| [BUG] Understanding what caused a pipeline shutdown can be difficult. | https://api.github.com/repos/opensearch-project/data-prepper/issues/3986/comments | 0 | 2024-01-18T22:07:23Z | 2024-01-18T22:20:26Z | https://github.com/opensearch-project/data-prepper/issues/3986 | 2,089,123,141 | 3,986 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
When Data Prepper receives a SIGTERM or SIGINT (ctrl-C), Data Prepper should begin a graceful shutdown.
**Describe the solution you'd like**
Handle a SIGTERM or SIGINT by adding a Java shutdown hook.
```
getRuntime().addShutdownHook( ... thread ...);
```
This shutdown should be similar to the `POST :4900/shutdown` API.
| Shutdown gracefully on SIGTERM or SIGINT | https://api.github.com/repos/opensearch-project/data-prepper/issues/3984/comments | 0 | 2024-01-18T21:13:07Z | 2024-04-16T19:51:58Z | https://github.com/opensearch-project/data-prepper/issues/3984 | 2,089,057,640 | 3,984 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Peer forwarding is unable to serialize events.
```
2024-01-17T23:51:34.293 [test-pipeline-processor-worker-1-thread-1] WARN org.opensearch.dataprepper.peerforwarder.RemotePeerForwarder - Unable to submit request for forwarding, processing locally.
java.lang.RuntimeException: java.io.NotSerializableException: java.lang.ref.WeakReference
at org.opensearch.dataprepper.peerforwarder.client.PeerForwarderClient.getSerializedJsonBytes(PeerForwarderClient.java:87) ~[data-prepper-core-2.6.1.jar:?]
at org.opensearch.dataprepper.peerforwarder.client.PeerForwarderClient.serializeRecordsAndSendHttpRequest(PeerForwarderClient.java:71) ~[data-prepper-core-2.6.1.jar:?]
at org.opensearch.dataprepper.peerforwarder.RemotePeerForwarder.forwardRecordsForIp(RemotePeerForwarder.java:267) ~[data-prepper-core-2.6.1.jar:?]
Caused by: java.io.NotSerializableException: java.lang.ref.WeakReference
at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1175) ~[?:?]
at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1543) ~[?:?]
at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1500) ~[?:?]
at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1423) ~[?:?]
at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1169) ~[?:?]
at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1543) ~[?:?]
at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1500) ~[?:?]
... 14 more
```
**To Reproduce**
1. Configure Data Prepper to use peer-forwarding
2. Create a pipeline with an `aggregate` processor.
3. Run multiple Data Prepper instances
4. Inject data that will need to route across peers
**Expected behavior**
One Data Prepper node should be able to serialize and send the data to the other Data Prepper node.
**Environment (please complete the following information):**
- Data Prepper: 2.6.1
| [BUG] Serialization error during peer-forwarding | https://api.github.com/repos/opensearch-project/data-prepper/issues/3981/comments | 1 | 2024-01-18T19:36:36Z | 2024-01-18T22:41:40Z | https://github.com/opensearch-project/data-prepper/issues/3981 | 2,088,933,187 | 3,981 |
[
"opensearch-project",
"data-prepper"
] | null | Create Kafka buffer integration tests for KMS | https://api.github.com/repos/opensearch-project/data-prepper/issues/3980/comments | 0 | 2024-01-18T19:17:45Z | 2024-01-30T16:15:11Z | https://github.com/opensearch-project/data-prepper/issues/3980 | 2,088,907,905 | 3,980 |
[
"opensearch-project",
"data-prepper"
] | ## Summary
The `kafka` buffer should support rotating the `encryption_key`. This is a high-level issue for supporting key rotation. Data Prepper is not doing any key rotation on KMS. Instead, it will support rotation of the `encryption_key` which is encrypted by a KMS key.
## Tasks
- [x] #3655
- [ ] Use the embedded `encrypted_data_key` to decrypt data when present
- [x] #3980
- [x] #4040 | Support KMS encryption key rotation in Kafka buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/3979/comments | 0 | 2024-01-18T19:15:59Z | 2024-04-16T19:51:52Z | https://github.com/opensearch-project/data-prepper/issues/3979 | 2,088,905,175 | 3,979 |
[
"opensearch-project",
"data-prepper"
] | ## Background
Previously, when installing the security plugin demo configuration, the cluster was spun up with the default admin credentials, admin:admin. A change was made in main and backported to 2.x for the 2.12.0 release, which now requires an initial admin password to be passed in via the environment variable OPENSEARCH_INITIAL_ADMIN_PASSWORD. This will break some CI/testing that relies on OpenSearch to come up without setting this environment variable. This tracking issue is to ensure compliance with the new changes.
Coming from: https://github.com/opensearch-project/security/issues/3624
## Acceptance Criteria
- [ ] All documentation references to the old default credentials admin:admin are removed
- [ ] Ensure that CI/testing is working with main and 2.x branches | [v2.12.0] Ensure CI/documentation reflect changes to default admin credentials | https://api.github.com/repos/opensearch-project/data-prepper/issues/3978/comments | 4 | 2024-01-18T16:53:34Z | 2025-03-11T19:52:54Z | https://github.com/opensearch-project/data-prepper/issues/3978 | 2,088,692,942 | 3,978 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper pipelines often expand events using processors like `grok` or `parse_json`. Processors like these output events which are larger than the input event. Each event now takes up more memory.
Data Prepper processors work in batches: each processor is invoked with the whole batch of events. So when executing a pipeline that expands events, Data Prepper expands the entire batch in the expansion processor, and downstream processors which remove redundant data (e.g. the field which was parsed) only run after the whole batch has been expanded.
Say you have a pipeline where you grok a field to get data from it. Then you delete that input field. With a batch size of 100,000, Data Prepper will expand all 100,000 events. Then it removes the messages for all 100,000 events.
```
my-pipeline:
buffer:
bounded_blocking:
batch_size: 100000
processor:
- grok:
match:
message: ["..."]
- delete_entries:
with_keys: ["message"]
```
**Describe the solution you'd like**
I'd like to be able to remove the `message` field sooner, rather than wait for all 100,000 events to be groked.
For these processors which have large expansion potential, provide an additional configuration to delete the input source. This way, the processor itself can remove the input data sooner and reduce the memory needs of Data Prepper.
```
processor:
- grok:
match:
message: ["..."]
delete_source: true
- parse_json:
source: message
delete_source: true
```
These would only delete the source key if the match or parsing succeeded.
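The proposed semantics could look like the following Python sketch (illustrative only; `delete_source` is the proposed option, not an existing one):

```python
import json

def parse_json_entry(event, source="message", delete_source=True):
    """Parse `source` as JSON into the event; drop `source` only on success."""
    try:
        parsed = json.loads(event[source])
    except (KeyError, TypeError, ValueError):
        return event  # parsing failed: keep the source field untouched
    if not isinstance(parsed, dict):
        return event
    event.update(parsed)
    if delete_source:
        del event[source]  # free the (often large) raw field right away
    return event

event = {"message": '{"status": 200, "path": "/health"}'}
parse_json_entry(event)
# event now has "status" and "path"; "message" has been removed
```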
**Describe alternatives you've considered (Optional)**
I've also considered a concept of allowing processors to declare that they can work on smaller batches. This would work well for these processors that just loop over the events, just like grok and parse_json.
This would be nice because pipeline authors won't need to think about the memory usage and Data Prepper could take care of reducing the batch sizes. However, this work could be complicated to implement. It could be as simple as running a single Event through each processor. But, we would need to have points at which we re-batch for the processors where it matters.
| Delete input for processors which expand the event | https://api.github.com/repos/opensearch-project/data-prepper/issues/3968/comments | 1 | 2024-01-17T03:08:51Z | 2024-07-04T18:11:29Z | https://github.com/opensearch-project/data-prepper/issues/3968 | 2,085,306,022 | 3,968 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We have several related requests for transforming maps and lists. We also have the `list_to_map` and `map_to_list` processors. Can we combine them somehow?
**Describe the solution you'd like**
Create a cohesive processor (or set of processors) for supporting array and map transformations.
**Issues**
- [x] #3961
- [x] #3962
- [x] #3963
- [x] #3964
- [x] #3965
- [x] #3867
| Enhanced support for lists, maps, strings | https://api.github.com/repos/opensearch-project/data-prepper/issues/3967/comments | 1 | 2024-01-16T20:52:06Z | 2024-02-28T21:22:52Z | https://github.com/opensearch-project/data-prepper/issues/3967 | 2,084,900,447 | 3,967 |
[
"opensearch-project",
"data-prepper"
] | {
"src_key_0.1": "val_0.1",
"src_key_0.2": "val_0.2",
"src_key_0.3": "val_0.3",
"src_key_0.4": {
"src_key_0.5": {
"src_key_0.6": "val_0.4"
}
},
"some_source": [
{
"nested_array": [
{
"src_key_1": "val_1",
"src_key_2": "val_2",
"src_key_3": "val_3"
},
{
"src_key_1": "val_5",
"src_key_2": "val_6",
"src_key_3": "val_7"
}
]
}
]
}
#TO
{
  "somekey": [
    [
      "src_key_0.1",
      "val_0.1"
    ],
    [
      "src_key_0.2",
      "val_0.2"
    ],
    [
      "src_key_0.3",
      "val_0.3"
    ],
    [
      "src_key_0.4.src_key_0.5.src_key_0.6",
      "val_0.4"
    ],
    [
      "some_source[].nested_array[].src_key_1",
      "val_1, val_5"
    ],
    [
      "some_source[].nested_array[].src_key_2",
      "val_2, val_6"
    ],
    [
      "some_source[].nested_array[].src_key_3",
      "val_3, val_7"
    ]
  ]
}
#MAPPING
- set_default_map_to: "somekey"
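A Python sketch of the requested flattening (illustrative only; it collects leaf values under `[]`-style keys into lists, and joining them with a delimiter instead would be a small variation):

```python
def flatten(value, prefix="", out=None):
    """Flatten nested maps/lists into dotted keys; list indices collapse to
    '[]' and leaf values sharing a flattened key are collected in order."""
    if out is None:
        out = {}
    if isinstance(value, dict):
        for key, child in value.items():
            flatten(child, f"{prefix}.{key}" if prefix else key, out)
    elif isinstance(value, list):
        for item in value:
            flatten(item, f"{prefix}[]", out)
    else:
        out.setdefault(prefix, []).append(value)
    return out

source = {"some_source": [{"nested_array": [
    {"src_key_1": "val_1"}, {"src_key_1": "val_5"}]}]}
flat = flatten(source)
# flat == {"some_source[].nested_array[].src_key_1": ["val_1", "val_5"]}
```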
| [Feature Request] Dynamic Auto-Map | https://api.github.com/repos/opensearch-project/data-prepper/issues/3965/comments | 3 | 2024-01-16T15:07:15Z | 2024-02-28T21:22:52Z | https://github.com/opensearch-project/data-prepper/issues/3965 | 2,084,212,660 | 3,965 |
[
"opensearch-project",
"data-prepper"
] | This is a feature request for a processor that implements the following ETL functionality to handle mapping a string to an array.
#SOURCE
"some_source": [
{
"src_key_1": "val_1"
}
]
#TO
"some_dest": {
"dest_key_1": ["val_1"]
}
#MAPPING
- from_key: "some_source[]/src_key_1"
to_key: "some_dest/dest_key_1[]"
overwrite_if_to_key_exists: true | [Feature Request] Mapping String to Array | https://api.github.com/repos/opensearch-project/data-prepper/issues/3964/comments | 2 | 2024-01-16T15:01:20Z | 2024-02-12T16:13:27Z | https://github.com/opensearch-project/data-prepper/issues/3964 | 2,084,196,972 | 3,964 |
[
"opensearch-project",
"data-prepper"
] | This is a feature request for a processor that implements the following ETL functionality to handle mapping an array to a string.
#SOURCE
"some_source": [
{
"src_key_1": "val_1"
},
{
"src_key_1": "val_2"
}
]
#TO
"some_dest": {
"dest_key_1": "val_1, val_2"
}
#MAPPING
- from_key: "some_source[]/src_key_1"
  to_key: "some_dest/dest_key_1"
dest_type: "string" #Can also be "Array"
delim: "," | [Feature Request] Mapping Array to string | https://api.github.com/repos/opensearch-project/data-prepper/issues/3963/comments | 10 | 2024-01-16T14:59:52Z | 2024-02-12T16:18:36Z | https://github.com/opensearch-project/data-prepper/issues/3963 | 2,084,192,939 | 3,963 |
[
"opensearch-project",
"data-prepper"
] | This is a feature request for a processor that implements the following ETL functionality to handle mapping records within a record array.
#REQUIRED RECORD ARRAY MANIPULATION
"some_source": [
{
"nested_array": [
{
"src_key_1": "val_1",
"src_key_2": "val_2",
"src_key_3": "val_3"
},
{
"src_key_1": "val_5",
"src_key_2": "val_6",
"src_key_3": "val_7"
      }
    ]
  }
]
#TO
"some_dest": {
"nested_object": {
"key_4": "val_8",
"nested_array": [
{
"dest_key_1": "val_1"
},
{
"dest_key_1": "val_5"
      }
    ]
}
}
#MAPPING
- from_key: "some_source[0]/nested_array[]/src_key_1"
to_key: "some_dest/nested_object/nested_array[]/dest_key_1" | [Feature Request] Mapping Records within a Record Array | https://api.github.com/repos/opensearch-project/data-prepper/issues/3962/comments | 3 | 2024-01-16T14:57:28Z | 2024-02-09T21:09:55Z | https://github.com/opensearch-project/data-prepper/issues/3962 | 2,084,186,374 | 3,962 |
[
"opensearch-project",
"data-prepper"
] | This is a feature request for a processor that implements the following ETL functionality to handle conditional logic on record arrays:
#REQUIRED RECORD ARRAY MANIPULATION
"src_object_1": {
"src_key_1": "src_val_1",
"src_key_2": "src_val_2",
"src_nested_array": [
{
"src_name": "src_val_3",
"src_value": "src_val_4"
},
{
"src_name": "src_val_5",
"src_value": "src_val_6"
},
{
"src_name": "src_val_7",
"src_value": "src_val_8"
}
]
}
#TO
"dest_object_1": {
"dest_name": "src_val_4",
"dest_nested_array": [
{
"src_name": "src_val_3",
"src_value": "src_val_4"
},
{
"src_name": "src_val_5",
"src_value": "src_val_6"
},
{
"src_name": "src_val_7",
"src_value": "src_val_8"
}
]
}
#MAPPING
- from_key: "/src_object_1/src_nested_array[]/src_value"
  add_when: '/src_object_1/src_nested_array[]/src_name == "src_val_3"'
  to_key: "/dest_object_1/dest_name"
# preserve nested array
- from_key: "/src_object_1/src_nested_array"
to_key: "/dest_object_1/dest_nested_array"
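A Python sketch of the conditional assignment (a hypothetical helper; key names are taken from the example above):

```python
def first_value_where(items, match_key, match_value, value_key):
    """Return `value_key` from the first item whose `match_key` equals `match_value`."""
    for item in items:
        if item.get(match_key) == match_value:
            return item.get(value_key)
    return None

src_object_1 = {"src_nested_array": [
    {"src_name": "src_val_3", "src_value": "src_val_4"},
    {"src_name": "src_val_5", "src_value": "src_val_6"},
    {"src_name": "src_val_7", "src_value": "src_val_8"},
]}

dest_object_1 = {
    "dest_name": first_value_where(
        src_object_1["src_nested_array"], "src_name", "src_val_3", "src_value"),
    "dest_nested_array": src_object_1["src_nested_array"],  # preserved as-is
}
# dest_object_1["dest_name"] == "src_val_4"
```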
| [Feature Request] Conditionally Assign New Field Logic on Record Arrays | https://api.github.com/repos/opensearch-project/data-prepper/issues/3961/comments | 4 | 2024-01-16T14:52:38Z | 2024-02-12T16:14:06Z | https://github.com/opensearch-project/data-prepper/issues/3961 | 2,084,172,155 | 3,961 |
[
"opensearch-project",
"data-prepper"
] | **Describe the issue**
When using the OpenSearch sink with timestamp formats such as `index-%{yyyyMMdd}-test`, the sink throws an IllegalArgumentException on startup, because validation only allows the timestamp pattern as a suffix, such as `index-test-%{yyyyMMdd}`
```
2024-01-11T18:56:53,018 [log-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [log-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Time pattern can only be a suffix of an index.
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
```
There is no reason/limitation that requires this timestamp injection to be a suffix, so injecting the timestamp at any point in the index name should be supported.
This validation occurs here (https://github.com/opensearch-project/data-prepper/blob/787064edf5e0ee455d96ea577129cb3dee66dd94/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/AbstractIndexManager.java#L138), and should be removed. Additionally, the formatter should be expanded to not be specific to suffixes like it is here (https://github.com/opensearch-project/data-prepper/blob/787064edf5e0ee455d96ea577129cb3dee66dd94/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/AbstractIndexManager.java#L175)
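An illustrative sketch of suffix-agnostic substitution (Python, with only a minimal mapping of the date tokens used in the example; not the actual Java formatter):

```python
import re
from datetime import datetime

PATTERN = re.compile(r"%\{(.+?)\}")

def resolve_index(index_template, now):
    """Replace every %{...} date pattern wherever it appears in the name."""
    def substitute(match):
        # translate just the Java date tokens from the example above
        py_pattern = (match.group(1)
                      .replace("yyyy", "%Y")
                      .replace("MM", "%m")
                      .replace("dd", "%d"))
        return now.strftime(py_pattern)
    return PATTERN.sub(substitute, index_template)

resolve_index("index-%{yyyyMMdd}-test", datetime(2024, 1, 11))
# → "index-20240111-test"
```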
| Injecting timestamp in index name that is not a suffix throws IllegalArgumentException | https://api.github.com/repos/opensearch-project/data-prepper/issues/3957/comments | 1 | 2024-01-12T01:01:15Z | 2024-01-24T22:18:09Z | https://github.com/opensearch-project/data-prepper/issues/3957 | 2,077,857,782 | 3,957 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Feature request is to add support for data prepper internal tags which can be used for different purposes.
**Describe the solution you'd like**
This feature would allow adding Data Prepper internal tags, identified by a fixed internal prefix (like `_data_prepper_internal_`). These tags may be added by any part of Data Prepper (API, core, sources, sinks, processors, etc.) and may be used either internally or by pipeline configuration for various use cases:
1. Routing logic in Data Prepper core could tag routed events (with a tag like `_data_prepper_internal_routed`), and pipeline code could check for events that were not routed and take action on them (like dropping them or sending them to the DLQ)
2. Failures at any part of the pipeline could add an internal tag (like `_data_prepper_internal_to_dlq`) to send the failed events to pipeline DLQ.
To prevent these tags from reaching the sink, all tags with the `_data_prepper_internal_` prefix can be filtered out.
**Describe alternatives you've considered (Optional)**
Alternate solution would be to introduce a new field in EventMetadata like `internalTags` (need not even be called tags, it could be flags or something like that), which can be used for the same purpose. But it would require all the expressions, conditions support that event tags already have. I think this would result in a lot of duplicate/similar code and double the effort for any functionality we add for tags.
| Add Data Prepper internal tags | https://api.github.com/repos/opensearch-project/data-prepper/issues/3956/comments | 0 | 2024-01-11T20:58:16Z | 2024-01-11T21:57:01Z | https://github.com/opensearch-project/data-prepper/issues/3956 | 2,077,573,864 | 3,956 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-41329 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>wiremock-3.0.1.jar</b></p></summary>
<p>A web service test double for all occasions</p>
<p>Library home page: <a href="http://wiremock.org">http://wiremock.org</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.wiremock/wiremock/3.0.1/d2d53be1e1710812e3fca3f437c277928e60fdf4/wiremock-3.0.1.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.0.1.pom (Root Library)
- :x: **wiremock-3.0.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/912705b534753259fbcac811ff514b7644403abb">912705b534753259fbcac811ff514b7644403abb</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
WireMock is a tool for mocking HTTP services. The proxy mode of WireMock, can be protected by the network restrictions configuration, as documented in Preventing proxying to and recording from specific target addresses. These restrictions can be configured using the domain names, and in such a case the configuration is vulnerable to the DNS rebinding attacks. A similar patch was applied in WireMock 3.0.0-beta-15 for the WireMock Webhook Extensions. The root cause of the attack is a defect in the logic which allows for a race condition triggered by a DNS server whose address expires in between the initial validation and the outbound network request that might go to a domain that was supposed to be prohibited. Control over a DNS service is required to exploit this attack, so it has high execution complexity and limited impact. This issue has been addressed in version 2.35.1 of wiremock-jre8 and wiremock-jre8-standalone, version 3.0.3 of wiremock and wiremock-standalone, version 2.6.1 of the python version of wiremock, and versions 2.35.1-1 and 3.0.3-1 of the wiremock/wiremock Docker container. Users are advised to upgrade. Users unable to upgrade should either configure firewall rules to define the list of permitted destinations or to configure WireMock to use IP addresses instead of the domain names.
<p>Publish Date: 2023-09-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-41329>CVE-2023-41329</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/wiremock/wiremock/security/advisories/GHSA-pmxq-pj47-j8j4">https://github.com/wiremock/wiremock/security/advisories/GHSA-pmxq-pj47-j8j4</a></p>
<p>Release Date: 2023-09-06</p>
<p>Fix Resolution: com.tomakehurst.wiremock:wiremock-jre8-standalone:2.35.1, com.tomakehurst.wiremock:wiremock-jre8:2.35.1, org.wiremock:wiremock-standalone:3.0.3, org.wiremock:wiremock:3.0.3, wiremock - 2.6.1</p>
</p>
</details>
<p></p>
| CVE-2023-41329 (Medium) detected in wiremock-3.0.1.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3954/comments | 0 | 2024-01-11T20:21:52Z | 2024-01-17T18:25:35Z | https://github.com/opensearch-project/data-prepper/issues/3954 | 2,077,521,854 | 3,954 |
[
"opensearch-project",
"data-prepper"
] | Use a public CDN endpoint to improve the user experience and to provide the MaxMind GeoLite2 database by default.
Configuration for using geoip can be as simple as this:
```
processor:
- geoip:
keys:
- key:
source: "/peer/ip"
target: "target1"
``` | Use a public CDN endpoint to get MaxMind GeoLite2 Data by default | https://api.github.com/repos/opensearch-project/data-prepper/issues/3942/comments | 1 | 2024-01-10T22:13:36Z | 2024-03-21T21:05:37Z | https://github.com/opensearch-project/data-prepper/issues/3942 | 2,075,304,546 | 3,942 |
[
"opensearch-project",
"data-prepper"
] | Add GeoIP service configuration to the `data-prepper-config.yaml` file.
This can be done by adding it to the extensions section so it can be shared by plugins, instead of configuring it in each plugin instance.
```
extensions:
geoip_service:
maxmind:
database_paths: ["path1", "path2"]
database_refresh_interval: "P10D"
cache_size: 2048
``` | Add GeoIP extensions in Data Prepper Config | https://api.github.com/repos/opensearch-project/data-prepper/issues/3941/comments | 0 | 2024-01-10T22:13:34Z | 2024-01-17T17:56:22Z | https://github.com/opensearch-project/data-prepper/issues/3941 | 2,075,304,510 | 3,941 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Sometimes we want to test pipelines with a delay after pulling events from the buffer. I found this helpful for working on #3937 for example.
**Describe the solution you'd like**
Create a new `delay` processor.
```
processor:
- delay:
for: 500ms
```
| Provide a delay processor to put a delay in the processor for debugging and testing | https://api.github.com/repos/opensearch-project/data-prepper/issues/3938/comments | 0 | 2024-01-10T18:32:32Z | 2024-03-19T18:30:57Z | https://github.com/opensearch-project/data-prepper/issues/3938 | 2,074,947,907 | 3,938 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `BlockingBuffer.bufferUsage` metric is inaccurate. It indicates that it is the percentage of the buffer used. However, it is only the percentage of the buffer used for messages that are waiting. It does not include in-flight messages.
**Expected behavior**
I expect this metric to include both the records waiting and the records-in-flight when calculating the usage.
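A minimal sketch of the expected calculation (illustrative Python with hypothetical names; the actual implementation is the Java gauge in `BlockingBuffer`):

```python
def buffer_usage_percent(records_in_buffer: int, records_in_flight: int, capacity: int) -> float:
    """Usage should count both waiting and in-flight records against capacity."""
    if capacity == 0:
        return 0.0
    return (records_in_buffer + records_in_flight) * 100.0 / capacity

# 700,000 waiting plus 300,000 in flight against a 1,000,000-record capacity
# is full (100%), even though the waiting-only calculation reports just 70%.
```

This matches the metrics described below: the sum of `recordsInBuffer` and `recordsInFlight` reaches the configured maximum while `bufferUsage` reads about 70%.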
**Screenshots**
We sent these metrics to AWS CloudWatch. You can see in the screenshot below that the sum of `recordsInBuffer` and `recordsInFlight` (the line in blue) reaches near the maximum defined size of 1,000,000 records. However, the `bufferUsage` metric is around 70% at the time that happens.

**Additional context**
This can lead to confusion when trying to see why events are not writing to the buffer. The metrics indicate that there is capacity, but we fail to write to the buffer due to a timeout exception.
You can see that below.
https://github.com/opensearch-project/data-prepper/blob/b0d253c810e9ae657a8fd66608a3543296257ab2/data-prepper-plugins/blocking-buffer/src/main/java/org/opensearch/dataprepper/plugins/buffer/blockingbuffer/BlockingBuffer.java#L120-L125
This code is what appears to be incorrect. It does not account for the records in-flight.
https://github.com/opensearch-project/data-prepper/blob/b0d253c810e9ae657a8fd66608a3543296257ab2/data-prepper-plugins/blocking-buffer/src/main/java/org/opensearch/dataprepper/plugins/buffer/blockingbuffer/BlockingBuffer.java#L202-L205 | [BUG] BlockingBuffer.bufferUsage metric does not include records in-flight | https://api.github.com/repos/opensearch-project/data-prepper/issues/3936/comments | 4 | 2024-01-09T19:26:05Z | 2024-01-10T22:10:54Z | https://github.com/opensearch-project/data-prepper/issues/3936 | 2,073,022,287 | 3,936 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A processor that converts a map of key-value pairs:
```json
{
"my-map": {
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
}
```
to a list of objects with key and value under separate fields:
```json
{
"my-list": [
{
"key": "key1",
"value": "value1"
},
{
"key": "key2",
"value": "value2"
},
{
"key": "key3",
"value": "value3"
}
]
}
```
This can be used when `my-map` contains a large number of fields to avoid mapping explosion in OpenSearch while keeping the key-value pairs searchable.
**Describe the solution you'd like**
This is the reverse of the operation that the [ListToMap processor](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/list-to-map/) performs. The configuration options would be like this:
```yaml
- map_to_list:
source: "my-map"
target: "my-list"
key_name: "key"
value_name: "value"
```
* `source` (required): the source map to perform the operation on
* `target` (required): the target list
* `key_name`: the key name of the field that holds the original key; default is "key"
* `value_name`: the key name of the field that holds the original value; default is "value"
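The intended transformation can be sketched as follows (a Python illustration of the semantics only — the processor itself would be Java, and the function name is hypothetical):

```python
def map_to_list(event: dict, source: str, target: str,
                key_name: str = "key", value_name: str = "value") -> dict:
    """Convert the map at `source` into a list of {key, value} objects at `target`."""
    event[target] = [
        {key_name: k, value_name: v} for k, v in event[source].items()
    ]
    return event

event = {"my-map": {"key1": "value1", "key2": "value2", "key3": "value3"}}
result = map_to_list(event, source="my-map", target="my-list")
```

Whether the original `my-map` field is kept or removed after conversion could be another configuration option.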
| MapToList processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3935/comments | 2 | 2024-01-09T18:06:11Z | 2024-01-11T20:59:04Z | https://github.com/opensearch-project/data-prepper/issues/3935 | 2,072,897,053 | 3,935 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
My sink configuration is specified as:
```
sink:
- opensearch:
hosts: [ "https://search-opport/..." ]
index: "poc"
document_id_field: "opportunity_id"
routing_field: "opportunity_id"
action: "upsert"
...
```
However, when it gets documents with a new `opportunity_id` value, it throws
```
2024-01-06T03:45:00.550 [s3-log-pipeline-sink-worker-2-thread-2] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation = Update, error = [marketplaceId-ATVPDKIKX0DER#merchantId-47624524402#ruleId-ssep_new_keyword_suggestion]: document missing
```
**To Reproduce**
Steps to reproduce the behavior:
1. Set up an OpenSearch Ingestion pipeline with an S3 source, the newline codec, and the parse_json processor.
2. Specify `action: "upsert"` for OpenSearch sink
3. Add a file to S3 with one JSON document per line.
4. Check data prepper logs in Cloudwatch
**Expected behavior**
JSON documents provided in source S3 file should be ingested in OpenSearch as expected.
**Screenshots**
N/A
**Environment (please complete the following information):**
- Amazon OpenSearch Ingestion Service
**Additional context**
_Possible root cause:_
I did a deep dive into the GitHub repository and came across [this commit](https://github.com/opensearch-project/data-prepper/commit/36b0b9c95006697ece9ad678b9553bf43526b655) where the functionality was added. I see it uses UpdateOperation [here](https://github.com/opensearch-project/data-prepper/blob/e6df3eb2cd46ebd13dd1c7b808d288c2f3d6ee51/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L311C19-L311C34) to create the requests added to the bulk operation. This might not be the correct approach, as the OpenSearch `_bulk` API expects `doc_as_upsert` to be `true` when using upsert with an update action ([doc ref](https://opensearch.org/docs/latest/api-reference/document-apis/bulk/#request-body)). I see that this is also supported in the opensearch-java library used by Data Prepper ([ref](https://www.javadoc.io/doc/org.opensearch.client/opensearch-java/latest/org/opensearch/client/opensearch/core/bulk/UpdateOperation.Builder.html)).
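To illustrate the suggested fix: each update action in the `_bulk` request body needs `doc_as_upsert: true` so that a missing document is created rather than rejected with `document missing`. A sketch of the NDJSON payload (the index and document id values here are examples):

```python
import json

def bulk_upsert_lines(index: str, doc_id: str, doc: dict) -> str:
    """Build the two NDJSON lines for one update-with-upsert bulk action."""
    action = {"update": {"_index": index, "_id": doc_id}}
    # doc_as_upsert tells OpenSearch to create the document if it does not exist.
    body = {"doc": doc, "doc_as_upsert": True}
    return json.dumps(action) + "\n" + json.dumps(body) + "\n"

payload = bulk_upsert_lines("poc", "opp-1", {"opportunity_id": "opp-1"})
```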
| [BUG] OpenSearch Sink upsert action fails to create new document if it doesn't exist already | https://api.github.com/repos/opensearch-project/data-prepper/issues/3934/comments | 1 | 2024-01-09T17:40:17Z | 2024-03-12T19:37:43Z | https://github.com/opensearch-project/data-prepper/issues/3934 | 2,072,856,250 | 3,934 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
There is a potential NullPointerException when `document` is null here, which causes the pipeline to crash (https://github.com/opensearch-project/data-prepper/blob/35a69489c2f8621c8aa258ddd8dda105cd67a9e4/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/dlq/FailedBulkOperationConverter.java#L64).
```
2024-01-07T01:22:08.271 [dynamodb-pipeline-processor-worker-1-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.FutureHelper - FutureTask failed due to:
java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.FutureHelper.awaitFuturesIndefinitely(FutureHelper.java:29) ~[data-prepper-core-2.6.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.postToSink(ProcessWorker.java:158) ~[data-prepper-core-2.6.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:139) ~[data-prepper-core-2.6.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) ~[data-prepper-core-2.6.0.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.NullPointerException
at org.opensearch.dataprepper.plugins.sink.opensearch.dlq.FailedBulkOperationConverter.convertDocumentToGenericMap(FailedBulkOperationConverter.java:64) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.dlq.FailedBulkOperationConverter.convertToDlqObject(FailedBulkOperationConverter.java:38) ~[opensearch-2.6.0.jar:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) ~[?:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.logFailureForBulkRequests(OpenSearchSink.java:501) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleFailures(BulkRetryStrategy.java:369) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleFailures(BulkRetryStrategy.java:269) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetriesAndFailures(BulkRetryStrategy.java:256) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:291) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.execute(BulkRetryStrategy.java:195) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.lambda$flushBatch$12(OpenSearchSink.java:487) ~[opensearch-2.6.0.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.11.3.jar:1.11.3]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.flushBatch(OpenSearchSink.java:484) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doOutput(OpenSearchSink.java:453) ~[opensearch-2.6.0.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:67) ~[data-prepper-api-2.6.0.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.11.3.jar:1.11.3]
at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:67) ~[data-prepper-api-2.6.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:349) ~[data-prepper-core-2.6.0.jar:?]
... 5 more
```
**To Reproduce**
Not sure how to reproduce
**Expected behavior**
Do not crash the pipeline when document is null
| [BUG] Potential for NullPointerException when converting OpenSearch documents to be sent to DLQ | https://api.github.com/repos/opensearch-project/data-prepper/issues/3933/comments | 2 | 2024-01-09T17:21:28Z | 2024-08-08T14:27:30Z | https://github.com/opensearch-project/data-prepper/issues/3933 | 2,072,823,621 | 3,933 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some configurations will not work if a request size is too large. For example, an `http` source with a `kafka` buffer will fail to write to Kafka if the request size is greater than 1MB. Data Prepper does not check the request size up-front leading to a confusing error.
**Describe the solution you'd like**
Provide new configurations on the HTTP/gRPC sources (this includes all three OTel sources) to configure a maximum request length.
e.g.
```
apache-log-pipeline:
source:
http:
path: "/${pipelineName}/logs"
max_request_length: 1mb
```
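The intent can be sketched as a fail-fast check at the source (illustrative Python; the function name and status handling are hypothetical):

```python
def check_request_length(body: bytes, max_request_length: int) -> tuple:
    """Reject oversize payloads up front instead of failing later in the buffer."""
    if len(body) > max_request_length:
        return 413, "Payload Too Large"
    return 200, "OK"

# A 2 MB payload against a 1 MB limit is rejected before any buffer write.
```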
**Describe alternatives you've considered (Optional)**
Have different exceptions thrown from the buffer to indicate the size is too large.
**Additional context**
This is somewhat related to the PR to increase the Kafka message size in the `kafka` buffer:
https://github.com/opensearch-project/data-prepper/pull/3916
| Support maximum request length configurations in the HTTP and OTel sources | https://api.github.com/repos/opensearch-project/data-prepper/issues/3931/comments | 2 | 2024-01-09T17:08:44Z | 2024-01-13T20:42:59Z | https://github.com/opensearch-project/data-prepper/issues/3931 | 2,072,802,890 | 3,931 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
S3-SQS source pipelines can get stuck when SQS is reading from multiple buckets and one bucket has permissions issues.
**To Reproduce**
Steps to reproduce the behavior:
1. Create an SQS queue
2. Create 2 S3 buckets and configure event notifications for the SQS queue in step 1
3. Create an IAM role with permissions needed to get objects from one of the S3 buckets (call it Bucket1)
4. Create an S3-SQS pipeline using the SQS queue and IAM role
5. Upload objects to Bucket1, the pipeline should process them correctly
6. Upload objects to Bucket 2, the pipeline will get 403 errors and retry
7. Upload objects to Bucket1, these will not be processed because the pipeline is still trying to process objects from Step 6
**Expected behavior**
If a pipeline hits access denied errors with one bucket it should still process objects from the other bucket
| [BUG] S3-SQS source pipelines can get stuck when SQS is reading from multiple buckets and one bucket has permissions issues | https://api.github.com/repos/opensearch-project/data-prepper/issues/3930/comments | 1 | 2024-01-09T16:00:54Z | 2024-01-16T21:06:29Z | https://github.com/opensearch-project/data-prepper/issues/3930 | 2,072,671,435 | 3,930 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
A NullPointerException occurs in the Key Value processor with the following stack trace:
```
2023-12-29T11:03:28.730 [osi-logs-dev-pipeline-processor-worker-1-thread-2] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - Encountered exception during pipeline osi-logs-dev-pipeline processing
java.lang.NullPointerException: null
at java.base/java.util.regex.Matcher.getTextLength(Matcher.java:1770) ~[?:?]
at java.base/java.util.regex.Matcher.reset(Matcher.java:416) ~[?:?]
at java.base/java.util.regex.Matcher.<init>(Matcher.java:253) ~[?:?]
at java.base/java.util.regex.Pattern.matcher(Pattern.java:1134) ~[?:?]
at java.base/java.util.regex.Pattern.split(Pattern.java:1262) ~[?:?]
at org.opensearch.dataprepper.plugins.processor.keyvalue.KeyValueProcessor.doExecute(KeyValueProcessor.java:239) ~[key-value-processor-2.6.0.jar:?]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.6.0.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) ~[micrometer-core-1.11.3.jar:1.11.3]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) ~[data-prepper-api-2.6.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:133) ~[data-prepper-core-2.6.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) ~[data-prepper-core-2.6.0.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
```
**To Reproduce**
To reproduce the issue, send input that does not contain the configured "source" key for the key_value processor.
**Expected behavior**
Input without the configured source should be ignored and should not crash Data Prepper.
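The expected behavior can be sketched as a guard before the regex split (a Python illustration of the semantics; the actual processor is Java, and the parameter names only mirror key_value options):

```python
import re

def parse_key_values(event: dict, source: str = "message",
                     field_split: str = "&", value_split: str = "=") -> dict:
    """Skip events that lack the configured source key instead of raising."""
    group = event.get(source)
    if group is None:
        # Absent source: pass the event through untouched rather than crash.
        return event
    event["parsed_message"] = dict(
        pair.split(value_split, 1) for pair in re.split(field_split, group)
    )
    return event
```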
| [BUG] Null Pointer Exception in Key Value Processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3928/comments | 0 | 2024-01-09T06:55:40Z | 2024-01-17T19:12:01Z | https://github.com/opensearch-project/data-prepper/issues/3928 | 2,071,742,231 | 3,928 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2024-21634 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ion-java-1.0.2.jar</b>, <b>ion-java-1.9.5.jar</b></p></summary>
<p>
<details><summary><b>ion-java-1.0.2.jar</b></p></summary>
<p>A Java implementation of the Amazon Ion data notation.</p>
<p>Library home page: <a href="https://github.com/amznlabs/ion-java/">https://github.com/amznlabs/ion-java/</a></p>
<p>Path to dependency file: /data-prepper-plugins/kafka-plugins/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/software.amazon.ion/ion-java/1.0.2/ee9dacea7726e495f8352b81c12c23834ffbc564/ion-java-1.0.2.jar</p>
<p>
Dependency Hierarchy:
- schema-registry-serde-1.1.15.jar (Root Library)
- aws-java-sdk-sts-1.12.151.jar
- aws-java-sdk-core-1.12.151.jar
- :x: **ion-java-1.0.2.jar** (Vulnerable Library)
</details>
<details><summary><b>ion-java-1.9.5.jar</b></p></summary>
<p>A Java implementation of the Amazon Ion data notation.</p>
<p>Path to dependency file: /release/archives/linux/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.amazon.ion/ion-java/1.9.5/42becac25189163db3393f93c455a4db92d3029b/ion-java-1.9.5.jar</p>
<p>
Dependency Hierarchy:
- jackson-dataformat-ion-2.15.3.jar (Root Library)
- :x: **ion-java-1.9.5.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/774fa213614252c4772b018731452f020cafa16a">774fa213614252c4772b018731452f020cafa16a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Amazon Ion is a Java implementation of the Ion data notation. Prior to version 1.10.5, a potential denial-of-service issue exists in `ion-java` for applications that use `ion-java` to deserialize Ion text encoded data, or deserialize Ion text or binary encoded data into the `IonValue` model and then invoke certain `IonValue` methods on that in-memory representation. An actor could craft Ion data that, when loaded by the affected application and/or processed using the `IonValue` model, results in a `StackOverflowError` originating from the `ion-java` library. The patch is included in `ion-java` 1.10.5. As a workaround, do not load data which originated from an untrusted source or that could have been tampered with.
<p>Publish Date: 2024-01-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-21634>CVE-2024-21634</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/amazon-ion/ion-java/security/advisories/GHSA-264p-99wq-f4j6">https://github.com/amazon-ion/ion-java/security/advisories/GHSA-264p-99wq-f4j6</a></p>
<p>Release Date: 2024-01-03</p>
<p>Fix Resolution: com.amazon.ion:ion-java:1.10.5</p>
</p>
</details>
<p></p>
| CVE-2024-21634 (High) detected in ion-java-1.0.2.jar, ion-java-1.9.5.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3926/comments | 1 | 2024-01-09T00:26:21Z | 2024-02-28T21:05:55Z | https://github.com/opensearch-project/data-prepper/issues/3926 | 2,071,430,821 | 3,926 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The string truncate processor should truncate a string to a user-configured length, optionally starting at a configured offset.
**Describe the solution you'd like**
The solution would be to support a string truncate processor with the following config:
```
processor:
- truncate:
entries:
- source: "message"
start_at: 2
length: 5
truncate_when: <condition>
```
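The configuration above maps to simple string slicing; a sketch of the semantics (illustrative Python, not the Java implementation):

```python
def truncate(value: str, start_at: int = 0, length: int = None) -> str:
    """Keep `length` characters of `value` beginning at index `start_at`."""
    end = None if length is None else start_at + length
    return value[start_at:end]

# With start_at: 2 and length: 5, "hello world" becomes "llo w".
```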
| Add string truncate processor to the family of mutate string processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/3925/comments | 0 | 2024-01-08T19:27:57Z | 2024-01-11T17:14:44Z | https://github.com/opensearch-project/data-prepper/issues/3925 | 2,071,089,181 | 3,925 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-22102 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-j-8.0.33.jar</b></p></summary>
<p>JDBC Type 4 driver for MySQL.</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>
Dependency Hierarchy:
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.1.0 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker and while the vulnerability is in MySQL Connectors, attacks may significantly impact additional products (scope change). Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.1 Base Score 8.3 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:H/A:H).
<p>Publish Date: 2023-10-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-22102>CVE-2023-22102</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-22102">https://nvd.nist.gov/vuln/detail/CVE-2023-22102</a></p>
<p>Release Date: 2023-10-17</p>
<p>Fix Resolution: com.mysql:mysql-connector-j:8.2.0</p>
</p>
</details>
<p></p>
| CVE-2023-22102 (High) detected in mysql-connector-j-8.0.33.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3920/comments | 3 | 2024-01-05T17:55:12Z | 2024-02-15T16:52:01Z | https://github.com/opensearch-project/data-prepper/issues/3920 | 2,067,797,128 | 3,920 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-51074 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-path-2.8.0.jar</b></p></summary>
<p>A library to query and verify JSON</p>
<p>Library home page: <a href="https://github.com/jayway/JsonPath">https://github.com/jayway/JsonPath</a></p>
<p>Path to dependency file: /data-prepper-plugins/s3-source/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.jayway.jsonpath/json-path/2.8.0/b4ab3b7a9e425655a0ca65487bbbd6d7ddb75160/json-path-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- wiremock-3.3.1.jar (Root Library)
- :x: **json-path-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
json-path v2.8.0 was discovered to contain a stack overflow via the Criteria.parse() method.
<p>Publish Date: 2023-12-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-51074>CVE-2023-51074</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-51074">https://www.cve.org/CVERecord?id=CVE-2023-51074</a></p>
<p>Release Date: 2023-12-27</p>
<p>Fix Resolution: com.jayway.jsonpath:json-path:2.9.0</p>
</p>
</details>
<p></p>
| CVE-2023-51074 (Medium) detected in json-path-2.8.0.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3919/comments | 0 | 2024-01-05T17:55:11Z | 2024-02-15T16:54:47Z | https://github.com/opensearch-project/data-prepper/issues/3919 | 2,067,797,082 | 3,919 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
**To Reproduce**
I followed the example from https://github.com/opensearch-project/data-prepper/tree/main/examples/log-ingestion,
and all fields are parsed as strings, even fields that were expected to be numbers (such as the date).
**Expected behavior**
The date should be parsed as a timestamp, the size as a number, and so on.
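One possible workaround sketch (hedged — the field names depend on the grok output, and this assumes Data Prepper's `convert_entry_type` and `date` processors are available) is to cast the numeric fields and parse the timestamp after grok:

```yaml
processor:
  - grok:
      match:
        log: [ "%{COMMONAPACHELOG}" ]
  - convert_entry_type:
      key: "response"
      type: "integer"
  - convert_entry_type:
      key: "bytes"
      type: "integer"
  - date:
      match:
        - key: "timestamp"
          patterns: [ "dd/MMM/yyyy:HH:mm:ss Z" ]
      destination: "@timestamp"
```

Grok patterns may also support inline casts such as `%{NUMBER:response:int}` when writing a custom pattern instead of `COMMONAPACHELOG`.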
**Screenshots**


**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 22.04 LTS]
- opensearch : 2.11.1
- fluent-bit : 2.2.1
- prepper: 2.6.1
| [BUG] all parsed field with pattern COMMONAPACHELOG are string | https://api.github.com/repos/opensearch-project/data-prepper/issues/3918/comments | 2 | 2024-01-05T17:43:50Z | 2024-01-06T05:04:29Z | https://github.com/opensearch-project/data-prepper/issues/3918 | 2,067,777,969 | 3,918 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
AWS Opensearch Ingestion pipeline with DynamoDB
1. DynamoDB export
2. DynamoDB stream
3. Opensearch sink
4. DDB table more than 1.7 gigabytes with 2.1m records
5. Issue: no records are sent to the sink, and stopping the pipeline and increasing the OCU min and max does not fix the issue.
The pipeline circuit breaker flips open and closed.
```
2024-01-04T01:29:28.016 [pool-3-thread-1] INFO org.opensearch.dataprepper.breaker.HeapCircuitBreaker - Circuit breaker tripped and open. 6442470568 used memory bytes > 6442450944 configured
2024-01-04T01:29:30.342 [pool-3-thread-1] INFO org.opensearch.dataprepper.breaker.HeapCircuitBreaker - Circuit breaker closed. 6423596448 used memory bytes <= 6442450944 configured
```
After stopping the pipeline and re-configuring it to have more min and max OCUs (6-10 Ingestion-OCU) the flip flop of the circuit breaker no longer happens but it gets stuck trying to process the shards and continuously reports 0 shards found or no shards acquired.
```
2024-01-04T05:58:20.517 [pool-13-thread-1] INFO org.opensearch.dataprepper.plugins.source.dynamodb.leader.ShardManager - Listing shards (DescribeStream call) took 25 milliseconds with 0 shards found
2024-01-04T05:58:41.941 [pool-13-thread-4] INFO org.opensearch.dataprepper.plugins.source.dynamodb.stream.StreamScheduler - No new shards acquired after 250 attempts. This means that all shards are currently being consumed, or that the export is still in progress. New shards will not be consumed until the export is fully processed.
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create DDB table with 1.7 gigabytes of data and 2.1m records.
2. Create pipeline with min OCUs of 1 and max OCUs of 4:
```yml
version: "2"
dynamodb-pipeline:
source:
dynamodb:
tables:
- table_arn: 'arn:aws:dynamodb:us-east-2:123456789:table/MyTable'
stream:
start_position: "LATEST"
export:
s3_bucket: "opensearch-ingestion"
s3_region: "us-east-2"
s3_prefix: "export/"
aws:
sts_role_arn: "arn:aws:iam::123456789:role/opensearch-ingestion-pipeline"
region: "us-east-2"
sink:
- opensearch:
hosts:
[
"https://123456789.us-east-2.aoss.amazonaws.com"
]
index: "mytableindex"
index_type: "custom"
document_id: "${getMetadata(\"primary_key\")}"
action: "${getMetadata(\"opensearch_action\")}"
document_version: "${getMetadata(\"document_version\")}"
document_version_type: "external"
aws:
sts_role_arn: "arn:aws:iam::123456789:role/opensearch-ingestion-pipeline"
region: "us-east-2"
serverless: true
network_policy_name: "easy-my-data"
```
**Expected behavior**
The pipeline recovers and processes all records.
**Environment (please complete the following information):**
- OS: AWS Opensearch ingestion pipeline
- Version: pipeline created on Jan 4th, 2024, so presumably the latest 2.x as of that date.
| [BUG] dynamodb source hangs if initial OCUs are not enough in AWS ingestion pipeline | https://api.github.com/repos/opensearch-project/data-prepper/issues/3914/comments | 5 | 2024-01-04T07:28:15Z | 2024-12-17T16:11:14Z | https://github.com/opensearch-project/data-prepper/issues/3914 | 2,065,142,292 | 3,914 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Some pipelines using an `http` source have had the following error:
```
2023-12-27T23:16:48.364 [armeria-common-worker-epoll-3-2] ERROR io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:402)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:2353)
io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1487)
io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1345)
io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1385)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
io.netty.handler.flush.FlushConsolidationHandler.channelRead(FlushConsolidationHandler.java:152)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:509)
io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base/java.lang.Thread.run(Thread.java:829)
```
| [BUG] Possible memory leak related to Armeria/Netty | https://api.github.com/repos/opensearch-project/data-prepper/issues/3912/comments | 1 | 2024-01-03T19:23:37Z | 2024-01-04T21:45:37Z | https://github.com/opensearch-project/data-prepper/issues/3912 | 2,064,534,200 | 3,912 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-50572 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jline-3.9.0.jar</b>, <b>jline-3.22.0.jar</b></p></summary>
<p>
<details><summary><b>jline-3.9.0.jar</b></p></summary>
<p>JLine</p>
<p>Library home page: <a href="http://nexus.sonatype.org/oss-repository-hosting.html/jline-parent/jline">http://nexus.sonatype.org/oss-repository-hosting.html/jline-parent/jline</a></p>
<p>Path to dependency file: /release/archives/linux/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.jline/jline/3.9.0/da6eb8ebdd131ec41f7e42e7e77b257868279698/jline-3.9.0.jar</p>
<p>
Dependency Hierarchy:
- parquet-codecs-2.7.0-SNAPSHOT (Root Library)
- hadoop-mapreduce-client-core-3.3.6.jar
- hadoop-yarn-client-3.3.6.jar
- :x: **jline-3.9.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jline-3.22.0.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /performance-test/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.jline/jline/3.22.0/512dde71f1ba9cb87f318e4e1e3acc77dc67a712/jline-3.22.0.jar</p>
<p>
Dependency Hierarchy:
- zinc_2.13-1.9.3.jar (Root Library)
- zinc-compile-core_2.13-1.9.3.jar
- zinc-classpath_2.13-1.9.3.jar
- scala-compiler-2.13.11.jar
- :x: **jline-3.22.0.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue in the component GroovyEngine.execute of jline-groovy v3.24.1 allows attackers to cause an OOM (OutOfMemory) error.
<p>Publish Date: 2023-12-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-50572>CVE-2023-50572</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-12-29</p>
<p>Fix Resolution: org.jline:jline-console:3.25.0,org.jline:jline:3.25.0</p>
</p>
</details>
<p></p>
| CVE-2023-50572 (Medium) detected in jline-3.9.0.jar, jline-3.22.0.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3871/comments | 0 | 2024-01-01T00:46:51Z | 2024-01-30T00:30:01Z | https://github.com/opensearch-project/data-prepper/issues/3871 | 2,061,212,515 | 3,871 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-50570 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ipaddress-5.4.0.jar</b></p></summary>
<p>Library for handling IP addresses, both IPv4 and IPv6</p>
<p>Library home page: <a href="https://seancfoley.github.io/IPAddress/">https://seancfoley.github.io/IPAddress/</a></p>
<p>Path to dependency file: /data-prepper-expression/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.github.seancfoley/ipaddress/5.4.0/10920b3aeb1696b410a4ac572a72553a78c9a234/ipaddress-5.4.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **ipaddress-5.4.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue in the component IPAddressBitsDivision of IPAddress v5.1.0 leads to an infinite loop. This is disputed because an infinite loop occurs only for cases in which the developer supplies invalid arguments. The product is not intended to always halt for contrived inputs.
<p>Publish Date: 2023-12-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-50570>CVE-2023-50570</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-12-29</p>
<p>Fix Resolution: com.github.seancfoley:ipaddress:5.4.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2023-50570 (Medium) detected in ipaddress-5.4.0.jar - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/3870/comments | 1 | 2024-01-01T00:46:49Z | 2024-01-30T16:26:33Z | https://github.com/opensearch-project/data-prepper/issues/3870 | 2,061,212,501 | 3,870 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper throws `java.lang.IllegalStateException: Duplicate key xxx` when ingesting through the log pipeline.
See the error below; it is caused by the OpenTelemetry exporter exporting a duplicate **ConnectionId** attribute key. I am not sure whether this is expected behavior. Please consider fixing this issue.
```js
2023-12-29T07:28:02,516 [pool-13-thread-1] ERROR org.opensearch.dataprepper.plugins.source.otellogs.OTelLogsGrpcService - Failed to parse the request resource_logs {
resource {
...truncate...
scope_logs {
scope {
name: "Microsoft.Extensions.Hosting.Internal.Host"
}
log_records {
time_unix_nano: 1703834880595497100
severity_number: SEVERITY_NUMBER_DEBUG
severity_text: "Debug"
body {
string_value: "Connection id \"{ConnectionId}\" sending FIN because: \"{Reason}\""
}
attributes {
key: "ConnectionId"
value {
string_value: "0HN087UNTCNA9"
}
}
attributes {
key: "Reason"
value {
string_value: "The Socket transport\'s send loop completed gracefully."
}
}
attributes {
key: "ConnectionId"
value {
string_value: "0HN087UNTCNA9"
}
}
observed_time_unix_nano: 1703834880595497100
}
}
}
java.lang.IllegalStateException: Duplicate key log.attributes.ConnectionId (attempted merging values 0HN087UNTCNA9 and 0HN087UNTCNA9)
at java.base/java.util.stream.Collectors.duplicateKeyException(Collectors.java:135) ~[?:?]
at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:182) ~[?:?]
at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) ~[?:?]
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.opensearch.dataprepper.plugins.otel.codec.OTelProtoCodec.unpackKeyValueListLog(OTelProtoCodec.java:1094) ~[otel-proto-common-2.6.1.jar:?]
at org.opensearch.dataprepper.plugins.otel.codec.OTelProtoCodec$OTelProtoDecoder.lambda$processLogsList$7(OTelProtoCodec.java:376) ~[otel-proto-common-2.6.1.jar:?]
```
**To Reproduce**
Steps to reproduce the behavior:
1. I have pushed a sample repository; please visit [here](https://github.com/hoangthanh28/data-prepper)
2. Run `docker-compose -f docker-compose.yml up -d`
3. `curl --location 'http://localhost:5000'`
4. Wait a couple of seconds, then check the data-prepper log: `docker logs -f data-prepper`
**Expected behavior**
Remove the duplicate key and ingest the log without error.
I am not sure whether this is the right place to fix it (forgive me if I'm wrong; I'm a .NET developer, FYI).
[Code](https://github.com/opensearch-project/data-prepper/blob/f19de03d5418925935e019837fc4824fb250820c/data-prepper-plugins/otel-proto-common/src/main/java/org/opensearch/dataprepper/plugins/otel/codec/OTelProtoCodec.java#L376)
```java
protected List<OpenTelemetryLog> processLogsList(final List<LogRecord> logsList,
final String serviceName,
final Map<String, Object> ils,
final Map<String, Object> resourceAttributes,
final String schemaUrl) {
return logsList.stream()
.map(log -> JacksonOtelLog.builder()
.withTime(OTelProtoCodec.convertUnixNanosToISO8601(log.getTimeUnixNano()))
.withObservedTime(OTelProtoCodec.convertUnixNanosToISO8601(log.getObservedTimeUnixNano()))
.withServiceName(serviceName)
.withAttributes(OTelProtoCodec.mergeAllAttributes(
Arrays.asList(
                                // de-duplicating log.getAttributesList() first here would solve the problem
OTelProtoCodec.unpackKeyValueListLog(log.getAttributesList()),
resourceAttributes,
ils
)
))
.withSchemaUrl(schemaUrl)
.withFlags(log.getFlags())
.withTraceId(OTelProtoCodec.convertByteStringToString(log.getTraceId()))
.withSpanId(OTelProtoCodec.convertByteStringToString(log.getSpanId()))
.withSeverityNumber(log.getSeverityNumberValue())
.withSeverityText(log.getSeverityText())
.withDroppedAttributesCount(log.getDroppedAttributesCount())
.withBody(OTelProtoCodec.convertAnyValue(log.getBody()))
.build())
.collect(Collectors.toList());
}
```
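The comment in the snippet above points at one possible fix: tolerate duplicate attribute keys instead of failing. A minimal sketch (not the project's actual implementation; the class and method names are hypothetical) collects the attributes with `Collectors.toMap` and a merge function so the first value wins:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AttributeDeduper {
    // Collect key/value attributes into a map, keeping the first value seen
    // for a duplicate key instead of throwing IllegalStateException.
    public static Map<String, String> dedupe(final List<Map.Entry<String, String>> attributes) {
        return attributes.stream()
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        Map.Entry::getValue,
                        (existing, duplicate) -> existing, // merge function: first value wins
                        LinkedHashMap::new));
    }

    public static void main(String[] args) {
        final List<Map.Entry<String, String>> attrs = List.of(
                Map.entry("ConnectionId", "0HN087UNTCNA9"),
                Map.entry("Reason", "The Socket transport's send loop completed gracefully."),
                Map.entry("ConnectionId", "0HN087UNTCNA9")); // duplicate key from the exporter
        System.out.println(dedupe(attrs)); // duplicates collapsed, two entries remain
    }
}
```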
**Environment (please complete the following information):**
- OS: Debian
- Version: 12
**Additional context**
N/A
| [BUG] Duplicate key exception when integrate with open-telemetry exporter | https://api.github.com/repos/opensearch-project/data-prepper/issues/3868/comments | 3 | 2023-12-29T07:56:49Z | 2025-06-19T15:21:30Z | https://github.com/opensearch-project/data-prepper/issues/3868 | 2,059,172,800 | 3,868 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user, I would like the ability to combine a list of maps into a single map of lists, merging the values of entries that share the same key.
Example input:
```
{
"mylist": [
{ "a": "b" },
{ "a": "c" },
{ "x": "y" },
]
}
```
Example output:
```
{
"a": ["b", "c"],
"x": ["y"]
}
```
**Describe the solution you'd like**
A new processor implementing the functionality described above.
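The core of the requested transform can be sketched as follows (class and method names are hypothetical, not a proposed API): accumulate every value under its key, in encounter order.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ListToMapSketch {
    // Flatten a list of maps into one map whose values are lists of every
    // value seen under each key, preserving encounter order.
    public static Map<String, List<Object>> combine(final List<Map<String, Object>> maps) {
        final Map<String, List<Object>> result = new LinkedHashMap<>();
        for (final Map<String, Object> map : maps) {
            for (final Map.Entry<String, Object> kv : map.entrySet()) {
                result.computeIfAbsent(kv.getKey(), k -> new ArrayList<>()).add(kv.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Mirrors the example input from the feature request.
        final List<Map<String, Object>> mylist = List.of(
                Map.of("a", "b"), Map.of("a", "c"), Map.of("x", "y"));
        System.out.println(combine(mylist)); // {a=[b, c], x=[y]}
    }
}
```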
| Combine values under the same key from a list of key-value pairs | https://api.github.com/repos/opensearch-project/data-prepper/issues/3867/comments | 6 | 2023-12-28T21:00:09Z | 2024-02-05T20:36:53Z | https://github.com/opensearch-project/data-prepper/issues/3867 | 2,058,816,984 | 3,867 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
This is not technically a bug, but it is easy to miss and leads to unexpected behavior. If a user has acknowledgements enabled for their source and routes enabled, but the routes do not cover all possibilities of the incoming data, a portion of the data will never end up in a sink or DLQ. This prevents the acknowledgement set from completing, and the pipeline permanently re-processes the same set of incoming data.
Example:
* S3-SQS source pipeline with acks enabled
* A single route is defined where `'/myfield == "my value"'`
If one of the S3 objects contains `/myfield` with a value of `not my value`, then it is not routed to a sink so the acknowledgement set cannot complete successfully. The ack timeout expires and the object re-enters the queue to be processed again.
**To Reproduce**
Steps to reproduce the behavior: Documented above
**Expected behavior**
There are a few ways this could be handled; I am not sure which is correct.
1. Any records not matching a route end up in a DLQ
2. Acknowledgements consider records with no route being dropped as intentional and still send a positive ack in this case
3. Validation prevents a pipeline from being created if acks are enabled and the routes form an incomplete set of all possibilities
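Until one of these options is implemented, a possible workaround is to make the route set exhaustive yourself by adding a complementary catch-all route. This is only a sketch: the condition must be the exact negation of the other routes, and how events that lack `/myfield` entirely evaluate should be verified against your Data Prepper version.

```yaml
route:
  - matched: '/myfield == "my value"'
  - unmatched: '/myfield != "my value"'
sink:
  - opensearch:
      routes: [ "matched" ]
      # ... normal sink configuration ...
  - s3:
      routes: [ "unmatched" ]
      # ... overflow sink so the acknowledgement set can complete ...
```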
| [BUG] Incomplete route set leads to duplicates when E2E ack is enabled. | https://api.github.com/repos/opensearch-project/data-prepper/issues/3866/comments | 5 | 2023-12-28T17:26:02Z | 2024-01-17T21:11:40Z | https://github.com/opensearch-project/data-prepper/issues/3866 | 2,058,667,465 | 3,866 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
After running for a while, Data Prepper starts to generate these lines in the log:
```
2023-12-21T17:50:19,079 [pool-9-thread-71] ERROR org.opensearch.dataprepper.plugins.source.loghttp.LogHTTPService - Failed to parse the request of size 56244 due to: Unrecognized token 'inf': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (com.linecorp.armeria.internal.shaded.fastutil.io.FastByteArrayInputStream); line: 1, column: 25221] (through reference chain: java.util.ArrayList[215])
2023-12-21T17:50:28,018 [pool-9-thread-141] ERROR org.opensearch.dataprepper.plugins.source.loghttp.LogHTTPService - Failed to parse the request of size 56244 due to: Unrecognized token 'inf': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (com.linecorp.armeria.internal.shaded.fastutil.io.FastByteArrayInputStream); line: 1, column: 25221] (through reference chain: java.util.ArrayList[215])
2023-12-26T12:10:19,062 [pool-9-thread-125] ERROR org.opensearch.dataprepper.plugins.source.loghttp.LogHTTPService - Failed to parse the request of size 38988 due to: Unrecognized token 'inf': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (com.linecorp.armeria.internal.shaded.fastutil.io.FastByteArrayInputStream); line: 1, column: 27896] (through reference chain: java.util.ArrayList[240])
2023-12-26T12:10:29,030 [pool-9-thread-63] ERROR org.opensearch.dataprepper.plugins.source.loghttp.LogHTTPService - Failed to parse the request of size 38988 due to: Unrecognized token 'inf': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (com.linecorp.armeria.internal.shaded.fastutil.io.FastByteArrayInputStream); line: 1, column: 27896] (through reference chain: java.util.ArrayList[240])
```
**To Reproduce**
yaml config file:
```
dns-ip-pipeline:
#example event: {"count_source_ip":102,"source_ip":"10.199.0.150","tag":"dns_metrics_query_by_ip_5m"}
source:
http:
ssl: false
port: 2023
buffer:
kafka:
bootstrap_servers:
- redpanda-0:9092
topics:
- name: dns-ip-pipeline-buffer
group_id: data-prepper-group
encryption:
type: none
processor:
- anomaly_detector:
keys: ["count_source_ip"]
mode:
random_cut_forest:
sink:
- file:
path: /logs/dns_metrics_ip_anomalies.json
```
**Expected behavior**
After those errors appear in the log, the file is no longer written.
**Environment (please complete the following information):**
- OS: Ubuntu 20.04 LTS, Docker Container
- Version: 2.6.0
Can anyone explain what that error means, and how I can correct it? | [BUG] ERROR org.opensearch.dataprepper.plugins.source.loghttp.LogHTTPService - Failed to parse the request of size XXX due to: Unrecognized token | https://api.github.com/repos/opensearch-project/data-prepper/issues/3865/comments | 5 | 2023-12-27T00:34:26Z | 2024-03-04T12:27:28Z | https://github.com/opensearch-project/data-prepper/issues/3865 | 2,056,791,624 | 3,865 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Rollover index is not working:
```
{
"cause": "no permissions for [indices:admin/rollover] and associated roles [dataprepper, own_index]",
"message": "Failed to rollover index [index=otel-v1-apm-span-000001]"
}
```
**To Reproduce**
Install dataprepper and use a specific user with the following role
```
# DataPrepper Role
dataprepper:
reserved: true
cluster_permissions:
- cluster_all
- indices:admin/template/get
- indices:admin/template/put
index_permissions:
- index_patterns:
- 'otel-v1*'
- '.opendistro-ism-config'
- 'events-*'
- 'metrics-*'
allowed_actions:
- 'indices_all'
- index_patterns:
- '*'
allowed_actions:
- 'manage_aliases'
```
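The role above grants `indices_all` on the index patterns, yet the rollover is still rejected. A likely fix (a sketch only; verify the exact action name, taken from the error message, against your OpenSearch Security version) is to grant the rollover action explicitly. Note that rollover operates through the write alias, so the pattern must cover the alias as well as the backing indices:

```yaml
index_permissions:
  - index_patterns:
      - 'otel-v1*'
    allowed_actions:
      - 'indices_all'
      - 'indices:admin/rollover'
```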
**Expected behavior**
Rolling policy should work
**Environment (please complete the following information):**
- OpenSearch: 2.11.0
- Data Prepper: 2.6.0
| [BUG] Failed to rollover index | https://api.github.com/repos/opensearch-project/data-prepper/issues/3864/comments | 3 | 2023-12-21T16:08:15Z | 2024-04-18T15:02:53Z | https://github.com/opensearch-project/data-prepper/issues/3864 | 2,052,699,621 | 3,864 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Without any separate process deleting the SQS message, users found Data Prepper could still encounter the following error:
```
2023-12-15T15:42:44.320 [acknowledgement-callback-2] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Failed to set visibility timeout for message 5df184d5-a97e-4031-af20-ab91800adbcc to 30
software.amazon.awssdk.services.sqs.model.SqsException: Value xxx for parameter ReceiptHandle is invalid. Reason: Message does not exist or is not available for visibility timeout change. (Service: Sqs, Status Code: 400, Request ID: dd3f6a59-96f7-54d3-94fe-37afe0c3a4b6)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:125) ~[sdk-core-2.21.23.jar:?]
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:82) ~[sdk-core-2.21.23.jar:?]
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:60) ~[sdk-core-2.21.23.jar:?]
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:41) ~[sdk-core-2.21.23.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40) ~[sdk-core-2.21.23.jar:?]
....
```
There could be race condition between acknowledgement and [change message visibility](https://github.com/opensearch-project/data-prepper/blob/fa413a4b10b6a7678e3e942faf4e7bb1a350d28c/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/s3/SqsWorker.java#L290)
**To Reproduce**
We do not have the exact input files, but the S3 source configuration prototype is as follows:
```
source:
s3:
notification_type: "sqs"
buffer_timeout: "60s"
codec:
newline: null
sqs:
queue_url: ...
visibility_timeout: "60s"
visibility_duplication_protection: true
acknowledgments: true
compression: "gzip"
```
**Expected behavior**
The error message should not show up.
| [BUG] Failed to reset visibility timeout due to message not existing | https://api.github.com/repos/opensearch-project/data-prepper/issues/3861/comments | 0 | 2023-12-19T21:49:51Z | 2023-12-26T20:30:17Z | https://github.com/opensearch-project/data-prepper/issues/3861 | 2,049,489,205 | 3,861 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
**To Reproduce**
Enable Kafka buffer for pipeline:
```yaml
service-map-pipeline:
...
buffer:
kafka:
bootstrap_servers: []
topics:
- name: topic-name
group_id: kafka_group_id
create_topic: true
authentication:
sasl:
aws_msk_iam: default
aws:
region: aws_region
msk:
arn: msk_arn
```
**Expected behavior**
Should not fail.
**Screenshots**
If applicable, add screenshots to help explain your problem.
N/A
**Environment (please complete the following information):**
- Version: 2.6.1
**Additional context**
```
2023-12-14T14:43:03,300 [main] INFO org.opensearch.dataprepper.parser.PipelineTransformer - Building buffer for the pipeline [service-map-pipeline]
2023-12-14T14:43:03,302 [main] ERROR org.opensearch.dataprepper.parser.PipelineTransformer - Construction of pipeline components failed, skipping building of pipeline [service-map-pipeline] and its connected pipelines
org.opensearch.dataprepper.model.plugin.InvalidPluginDefinitionException: Unable to create an argument for required plugin parameter type: interface org.opensearch.dataprepper.model.codec.ByteDecoder
at org.opensearch.dataprepper.plugin.ComponentPluginArgumentsContext.lambda$createBeanSupplier$10(ComponentPluginArgumentsContext.java:117) ~[data-prepper-core-2.6.1.jar:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622) ~[?:?]
at org.opensearch.dataprepper.plugin.ComponentPluginArgumentsContext.createArguments(ComponentPluginArgumentsContext.java:87) ~[data-prepper-core-2.6.1.jar:?]
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:46) ~[data-prepper-core-2.6.1.jar:?]
at org.opensearch.dataprepper.plugin.DefaultPluginFactory.loadPlugin(DefaultPluginFactory.java:75) ~[data-prepper-core-2.6.1.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.buildPipelineFromConfiguration(PipelineTransformer.java:117) ~[data-prepper-core-2.6.1.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.transformConfiguration(PipelineTransformer.java:97) ~[data-prepper-core-2.6.1.jar:?]
at org.opensearch.dataprepper.DataPrepper.<init>(DataPrepper.java:67) ~[data-prepper-core-2.6.1.jar:2.6.1]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:211) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:311) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:296) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:920) [spring-context-5.3.28.jar:5.3.28]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) [spring-context-5.3.28.jar:5.3.28]
at org.opensearch.dataprepper.AbstractContextManager.start(AbstractContextManager.java:59) [data-prepper-core-2.6.1.jar:2.6.1]
at org.opensearch.dataprepper.AbstractContextManager.getDataPrepperBean(AbstractContextManager.java:45) [data-prepper-core-2.6.1.jar:2.6.1]
at org.opensearch.dataprepper.DataPrepperExecute.main(DataPrepperExecute.java:39) [data-prepper-main-2.6.1.jar:2.6.1]
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.opensearch.dataprepper.model.codec.ByteDecoder' available
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:351) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:342) ~[spring-beans-5.3.28.jar:5.3.28]
at org.opensearch.dataprepper.plugin.ComponentPluginArgumentsContext.lambda$createBeanSupplier$10(ComponentPluginArgumentsContext.java:115) ~[data-prepper-core-2.6.1.jar:?]
... 38 more
``` | [BUG] Error when building buffer for pipeline using Kafka | https://api.github.com/repos/opensearch-project/data-prepper/issues/3858/comments | 4 | 2023-12-14T14:43:42Z | 2024-01-18T14:55:40Z | https://github.com/opensearch-project/data-prepper/issues/3858 | 2,041,847,797 | 3,858 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Provide a way to send all failed events to a global/pipeline-level DLQ. Failed events anywhere in the pipeline (sources, processors, and sinks) are sent directly to this DLQ. This will eventually replace the sink-level DLQs we have today.
**Describe the solution you'd like**
Preferred solution (based on @dlvenable's initial thoughts and a discussion meeting)
1. Option to define failure pipeline in the YAML file like
```
my-failure-pipeline:
type: failure
sink:
- s3:
bucket: "..."
codec:
ndjson:
```
And each sub-pipeline in the yaml may have an entry pointing to this as follows
```
sample-pipeline:
failure-pipeline: my-failure-pipeline
source:
...
processor:
...
sink:
...
```
In addition, there may be an option to have a default pipeline which is used if no failure pipeline is mentioned in a sub-pipeline
```
default-failure-pipeline:
type: failure
sink:
- s3:
bucket: "..."
codec:
ndjson:
```
And finally, there is an implicit failure pipeline, which is created without any entries in the YAML file. The implicit failure pipeline will send all failed events to `stdout`.
This requires changes to code in many places, so it is better to introduce a new API (for example, an `executeWithFailures()` API in processors that returns both output records and failed records). Data Prepper core code can then take the failed records and send them to the appropriate failure pipeline (the configured failure pipeline, the default failure pipeline, or the implicit failure pipeline). Similar new APIs may be added at the source and sink level. Once the API is added, code may be migrated gradually so that all sources/sinks/processors use the new API.
Having a separate pipeline for failures allows the same failure pipeline to be used by multiple pipelines. It also makes it possible to write sub-pipelines under it, do conditional routing, etc.
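A minimal sketch of the proposed `executeWithFailures()` contract (Python pseudocode, not the actual Data Prepper Java API; all names here are assumptions): the processor returns its successful output records and its failed records separately, and the core can then route the failures to the configured, default, or implicit failure pipeline.

```python
# Sketch only: illustrates the proposed executeWithFailures() contract,
# not the real Data Prepper Java API. All names here are assumptions.

def execute_with_failures(records, process_one):
    """Apply process_one to each record; collect failures instead of dropping them."""
    output, failed = [], []
    for record in records:
        try:
            output.append(process_one(record))
        except Exception as error:
            failed.append({"record": record, "error": str(error)})
    return output, failed

# The core would hand `failed` to the failure pipeline's sink
# (configured, default, or the implicit stdout pipeline).
ok, failed = execute_with_failures(
    [{"msg": "a"}, {"msg": None}],
    lambda r: {"msg": r["msg"].upper()},
)
# ok == [{"msg": "A"}]; failed holds the record whose processing raised
```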
**Describe alternatives you've considered (Optional)**
Instead of a new API in processors/sources/sinks, we could have a global singleton for DLQ events managed by a `DLQEventsManager`, which each source uses in its constructor; failed events are handed over to this `DLQEventsManager`, which routes them to the failure pipeline. I think this approach is also OK.
| Pipeline DLQ | https://api.github.com/repos/opensearch-project/data-prepper/issues/3857/comments | 5 | 2023-12-13T22:18:29Z | 2024-10-23T18:38:08Z | https://github.com/opensearch-project/data-prepper/issues/3857 | 2,040,553,912 | 3,857 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I am seeing that parquet records are sometimes not completely ingested into the OpenSearch Serverless sink.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to AWS console
2. Click on OpenSearch Ingestion Pipeline
3. Check the document metrics: no documents failed to ingest and the DLQ is empty; however, some parquet records were still not completely ingested.
**Expected behavior**
The parquet record read count needs to match the document write count. I would also like to see a metric that reflects how many parquet records were ingested, so I can be confident that all records have been successfully read.
**Additional context**
Opened an internal ticket as well; will link this issue to the internal ticket.
| [BUG] parquet records are not completely ingested into the OpenSearch Serverless sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3856/comments | 2 | 2023-12-13T21:34:06Z | 2023-12-22T15:35:36Z | https://github.com/opensearch-project/data-prepper/issues/3856 | 2,040,500,774 | 3,856 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
In #1005, there is a syntax for supporting sets in Data Prepper expressions as well as operations on sets. However, these do not actually work in Data Prepper.
**Describe the solution you'd like**
Implement support for sets in Data Prepper.
| Support sets and set operations in Data Prepper expressions | https://api.github.com/repos/opensearch-project/data-prepper/issues/3854/comments | 1 | 2023-12-12T20:31:38Z | 2024-08-16T14:41:47Z | https://github.com/opensearch-project/data-prepper/issues/3854 | 2,038,507,750 | 3,854 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of Data Prepper, I would like to send pipeline processing messages as notifications to different external systems, so that I can quickly be alerted and notified of any issues or state changes within my pipeline.
**Describe the solution you'd like**
Support for instrumentation of Data Prepper messaging to different external systems
| Data Prepper messaging system to report messages about the pipeline processing | https://api.github.com/repos/opensearch-project/data-prepper/issues/3851/comments | 0 | 2023-12-12T16:31:55Z | 2024-01-30T19:46:08Z | https://github.com/opensearch-project/data-prepper/issues/3851 | 2,038,148,748 | 3,851 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper's `dynamodb` source does not have integration tests. Thus, we may introduce breaking changes without knowing in the PRs.
**Describe the solution you'd like**
Data Prepper's `dynamodb` source should have integration tests. Additionally, we should run these as part of GitHub Actions on PRs to this project.
We should have a new CDK stack for the DynamoDB resources.
**Additional context**
PR for the testing resources CDK: #3501
Issue for the S3 source integration tests: #3847
| Create DynamoDB source integration tests and run with GitHub | https://api.github.com/repos/opensearch-project/data-prepper/issues/3849/comments | 0 | 2023-12-12T15:08:05Z | 2024-06-18T19:40:46Z | https://github.com/opensearch-project/data-prepper/issues/3849 | 2,037,967,639 | 3,849 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `s3` sink integration tests do not run automatically. Thus, PRs may be breaking behavior without our knowing.
**Describe the solution you'd like**
Run the `s3` sink integration tests as part of the GitHub Action. To support this, create an S3 bucket in the testing CDK.
**Describe alternatives you've considered (Optional)**
We could consider using the same bucket as the S3 source (see #3847) and rely on the key-prefix. However, I think it would be better to have a dedicated bucket. This could also be good for testing writing without any key prefix.
| Run S3 sink integration tests as GHA | https://api.github.com/repos/opensearch-project/data-prepper/issues/3848/comments | 0 | 2023-12-12T15:04:55Z | 2024-05-21T19:42:37Z | https://github.com/opensearch-project/data-prepper/issues/3848 | 2,037,961,621 | 3,848 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `s3` source integration tests do not run automatically. Thus, PRs may be breaking behavior without our knowing.
**Describe the solution you'd like**
Run the `s3` source integration tests as part of the GitHub Action. To support this, create an S3 bucket and SQS queue in the testing CDK.
It may also be advantageous to update the tests to write to a key prefix depending on the use-case: SQS notifications versus S3 scan. Then the SQS queue can trigger only for events under a known key prefix. This way we don't need a bucket per use-case.
| Run S3 source integration tests as GHA | https://api.github.com/repos/opensearch-project/data-prepper/issues/3847/comments | 0 | 2023-12-12T15:02:42Z | 2024-05-21T19:42:28Z | https://github.com/opensearch-project/data-prepper/issues/3847 | 2,037,955,101 | 3,847 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of Data Prepper, my Events contain keys whose values are encoded in different formats, such as `gzip`, `base64`, and `protobuf` (https://protobuf.dev/programming-guides/encoding/).
Sample Event
```
{
"my_protobuf_key": "",
"my_gzip_key": "H4sIAAAAAAAAA/NIzcnJVyjPL8pJAQBSntaLCwAAAA==",
"my_base64_key": "SGVsbG8gd29ybGQ="
}
```
**Describe the solution you'd like**
A new processor called a `decoder` processor that can decode various encodings. The following configuration example would decode the three values in the example Event above
```
processor:
- decoder:
key: "my_base64_key"
# Can be one of gzip, base64, or protobuf
base64:
- decoder:
key: "my_gzip_key"
gzip:
- decoder:
key: "my_protobuf_key"
protobuf:
message_definition_file: "/path/to/proto_definition.proto"
```
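As an illustration of the intended behavior (standard library only; the sample values come from the Event above), the base64 and gzip values can be decoded as follows. Protobuf decoding is omitted here because it additionally needs the compiled message definition.

```python
import base64
import gzip

event = {
    "my_gzip_key": "H4sIAAAAAAAAA/NIzcnJVyjPL8pJAQBSntaLCwAAAA==",
    "my_base64_key": "SGVsbG8gd29ybGQ=",
}

# base64: decode straight to text
decoded_base64 = base64.b64decode(event["my_base64_key"]).decode("utf-8")

# gzip: the compressed bytes are base64-wrapped for transport,
# so decode base64 first, then gunzip
decoded_gzip = gzip.decompress(base64.b64decode(event["my_gzip_key"])).decode("utf-8")

# both sample values decode to "Hello world"
```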
**Tasks**
- [x] #4016
- [ ] OTel decode processor
- [ ] Decode encoded data (base64)
| Create a decoder processor to decode Event keys | https://api.github.com/repos/opensearch-project/data-prepper/issues/3841/comments | 6 | 2023-12-11T15:44:46Z | 2024-02-14T20:08:38Z | https://github.com/opensearch-project/data-prepper/issues/3841 | 2,035,967,166 | 3,841 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Given an item with a Number type ending in 0, such as `1702062202420`, the DynamoDB source will convert it to scientific notation for export items.
```
{"pk": "my_partition_key", "sk":"my_sort_key", "my_number_ending_in_0": 1702062202420 }
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pipeline with a dynamodb source with export and an opensearch
2. Once the export is complete, the document will be sent to the OpenSearch sink as the following with the `my_number_ending_in_0` key converted to scientific notation
```
{"pk": "my_partition_key", "sk":"my_sort_key", "my_number_ending_in_0": 1.70206220242E+12 }
```
**Expected behavior**
Numbers ending in 0 should not be manipulated, and the above example should result in
```
{"pk": "my_partition_key", "sk":"my_sort_key", "my_number_ending_in_0": 1702062202420 }
```
**Additional context**
The conversion only happens for export values when converting from the Ion line here (https://github.com/opensearch-project/data-prepper/blob/91ff22d6da2b14d8a27ade89ee516341181c8bd6/data-prepper-plugins/dynamodb-source/src/main/java/org/opensearch/dataprepper/plugins/source/dynamodb/converter/ExportRecordConverter.java#L82), but Data Prepper's `JacksonEvent` also converts to scientific notation when converting to a JSON string (https://github.com/opensearch-project/data-prepper/blob/91ff22d6da2b14d8a27ade89ee516341181c8bd6/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/event/JacksonEvent.java#L621). I initially created a custom deserializer that iterated over all decimals of this format and converted them to avoid scientific notation; however, this may not be the best approach.
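The notation issue can be reproduced with Python's `Decimal` as a stand-in for Java's `BigDecimal`: the default string form keeps scientific notation, while fixed-point formatting (the analogue of `BigDecimal.toPlainString()`) does not.

```python
from decimal import Decimal

value = Decimal("1.70206220242E+12")

# Default string conversion keeps the scientific notation
print(str(value))          # 1.70206220242E+12

# Fixed-point formatting yields the plain integer form
print(format(value, "f"))  # 1702062202420
```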
| [BUG] DynamoDB source export converts Numbers ending in 0 to scientific notation | https://api.github.com/repos/opensearch-project/data-prepper/issues/3840/comments | 6 | 2023-12-08T21:20:52Z | 2025-06-03T19:30:53Z | https://github.com/opensearch-project/data-prepper/issues/3840 | 2,033,325,728 | 3,840 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
OpenSearch has a security advisory. It is related to the `_search` API, but we should update to avoid the CVE.
https://github.com/advisories/GHSA-6g3j-p5g6-992f
| [BUG] GHSA-6g3j-p5g6-992f | https://api.github.com/repos/opensearch-project/data-prepper/issues/3837/comments | 2 | 2023-12-08T17:30:35Z | 2025-02-10T04:52:12Z | https://github.com/opensearch-project/data-prepper/issues/3837 | 2,033,056,206 | 3,837 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.6.1
**BUILD NUMBER**: 76
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://github.com/opensearch-project/data-prepper/actions/runs/7132559423
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka asifsmohammed dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 7132559423: Release Data Prepper : 2.6.1 | https://api.github.com/repos/opensearch-project/data-prepper/issues/3834/comments | 3 | 2023-12-07T19:01:22Z | 2023-12-07T19:17:35Z | https://github.com/opensearch-project/data-prepper/issues/3834 | 2,031,354,446 | 3,834 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the opensearch sink and the `routing_field`, I would like to dynamically format my _routing id using one or more metadata keys or keys in the Event
**Describe the solution you'd like**
A new `routing_id` parameter that supports format expressions. This would follow a similar path to the movement from `document_id_field` to `document_id`.
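A hypothetical configuration sketch: the `routing_id` parameter does not exist yet, and the `${...}` format-expression syntax is assumed from the existing `document_id` support.

```yaml
sink:
  - opensearch:
      hosts: [ "https://opensearch:9200" ]
      index: "my-index"
      # hypothetical parameter: build the _routing value from Event keys
      routing_id: "${/tenant_id}-${/region}"
```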
**Describe alternatives you've considered (Optional)**
Configuring extra `add_entries` processors to create format expressions in the actual Event, and then using `exclude_keys` in the sink to remove the custom routing field.
| Support format expressions for routing in the opensearch sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3833/comments | 0 | 2023-12-07T17:44:42Z | 2024-01-11T17:15:08Z | https://github.com/opensearch-project/data-prepper/issues/3833 | 2,031,240,936 | 3,833 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2023-6481 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>logback-core-1.4.12.jar</b></p></summary>
<p>logback-core module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /performance-test/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-core/1.4.12/670c77fc6e71cbb24dfabc9fc125f7536ed7a4ab/logback-core-1.4.12.jar</p>
<p>
Dependency Hierarchy:
- :x: **logback-core-1.4.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A serialization vulnerability in logback receiver component part of logback version 1.4.13, 1.3.13 and 1.2.12 allows an attacker to mount a Denial-Of-Service attack by sending poisoned data.
<p>Publish Date: 2023-12-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-6481>CVE-2023-6481</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-6481">https://www.cve.org/CVERecord?id=CVE-2023-6481</a></p>
<p>Release Date: 2023-12-04</p>
<p>Fix Resolution: 1.4.14</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
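For the Gradle build flagged in the path above, one possible way to apply the suggested fix is a dependency constraint (a sketch, not verified against this project's build layout):

```gradle
dependencies {
    constraints {
        implementation('ch.qos.logback:logback-core:1.4.14') {
            because 'CVE-2023-6481: DoS via poisoned data in the logback receiver'
        }
    }
}
```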
| CVE-2023-6481 (High) detected in logback-core-1.4.12.jar | https://api.github.com/repos/opensearch-project/data-prepper/issues/3817/comments | 0 | 2023-12-06T21:34:59Z | 2023-12-06T22:35:38Z | https://github.com/opensearch-project/data-prepper/issues/3817 | 2,029,423,379 | 3,817 |
[
"opensearch-project",
"data-prepper"
I have made two files: a docker-compose.yml file with the following content
```yaml
version: '3'
services:
  data-prepper:
    container_name: data-prepper
    image: opensearchproject/data-prepper:latest
    volumes:
      - ./log_pipeline.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml
    ports:
      - 2021:2021
    networks:
      - opensearch-net
networks:
  opensearch-net:
```
and log_pipeline.yaml with the following content
```yaml
log-pipeline:
  source:
    http:
      ssl: false
  processor:
    - grok:
        match:
          log: [ "%{COMMONAPACHELOG}" ]
  sink:
    - opensearch:
        hosts: [ "https://opensearch.stack.com:9200" ]
        insecure: true
        username: admin
        password: admin
        index: nginx_logs
```
Upon running the Docker Compose file, the container is running, but when I check its logs I see the following problem:
```
2023-12-06T07:13:10,583 [Thread-2] INFO org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2023-12-06T07:13:10,583 [Thread-2] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2023-12-06T07:13:10,583 [Thread-2] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2023-12-06T07:13:10,587 [Thread-2] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink, retrying: opensearch.stack.com
2023-12-06T07:13:11,582 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - sink is not ready for execution, retrying
2023-12-06T07:13:11,582 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2023-12-06T07:13:11,582 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2023-12-06T07:13:11,582 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2023-12-06T07:13:11,587 [log-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink, retrying: opensearch.stack.com
```
I am pulling the image using the `latest` tag; could this be causing the error?
| Failed Initialization of Opensearch Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/3810/comments | 2 | 2023-12-06T07:40:40Z | 2024-08-18T12:13:56Z | https://github.com/opensearch-project/data-prepper/issues/3810 | 2,027,877,867 | 3,810 |