issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 262k ⌀ | issue_title stringlengths 1 1.02k | issue_comments_url stringlengths 53 116 | issue_comments_count int64 0 2.49k | issue_created_at stringdate 1999-03-17 02:06:42 2025-06-23 11:41:49 | issue_updated_at stringdate 2000-02-10 06:43:57 2025-06-23 11:43:00 | issue_html_url stringlengths 34 97 | issue_github_id int64 132 3.17B | issue_number int64 1 215k |
|---|---|---|---|---|---|---|---|---|---|
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
At present, the OpenSearch API source uses the Blocking Buffer as its default buffer. However, the default buffer for the OpenSearch API source should pass Data Prepper events from source to sink synchronously.
**Describe the solution you'd like**
Set the default buffer for OpenSearch API source to [ZeroBuffer](#5416 )
**Additional context**
- [Open Search API source](#248) | Set Zero Buffer as the default buffer for OpenSearch API source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5443/comments | 1 | 2025-02-19T07:36:53Z | 2025-02-25T20:55:44Z | https://github.com/opensearch-project/data-prepper/issues/5443 | 2,862,470,534 | 5,443 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the OpenSearch sink with OpenSearch Serverless vector and time series collections, which do not support custom document ids, I would like to prevent duplicate data from entering OpenSearch.
**Describe the solution you'd like**
Configuration options in the OpenSearch sink that enable querying OpenSearch for documents that may already exist, in order to prevent duplicate documents.
```
- opensearch:
    ....
    query_for_existing_document:
      query_when: 'getMetadata("potential_duplicate") == true'
      query_term: 'id'
      action_on_found: drop  # only option currently
      query_duration: PT3M
```
| Support options in OpenSearch sink to prevent duplicates by querying OpenSearch | https://api.github.com/repos/opensearch-project/data-prepper/issues/5442/comments | 1 | 2025-02-18T22:07:55Z | 2025-03-31T15:07:56Z | https://github.com/opensearch-project/data-prepper/issues/5442 | 2,861,733,509 | 5,442 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The feature request is related to the offline batch ingestion integration with ml-commons. https://github.com/opensearch-project/ml-commons/issues/2891
The S3-Source Scan currently reads the full content of S3 objects. However, for offline ML batch job processing, only object metadata (such as name and size) is required. Reading the entire content can lead to duplicate batch job executions, adding unnecessary overhead.
**Describe the solution you'd like**
Enhance the S3-Source Scan by introducing an option to retrieve only metadata (e.g., object name and size) instead of reading full object contents. This ensures that batch job processors receive only the required information, preventing redundant job triggers.
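A rough sketch of how the proposed option could appear in a pipeline definition (the `metadata_only` flag name and its placement are assumptions for illustration, not an existing setting):

```yaml
source:
  s3:
    scan:
      buckets:
        - bucket:
            name: my-batch-input-bucket   # hypothetical bucket name
    # Hypothetical option proposed by this issue; not an existing setting.
    metadata_only: true
```

With such a flag set, each emitted event would carry only the object key, size, and similar metadata instead of the parsed object contents.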
**Additional context**
This improvement would optimize offline batch ingestion while also benefiting future async processors that require metadata without processing full object content.
| Support reading S3 object meta data only | https://api.github.com/repos/opensearch-project/data-prepper/issues/5433/comments | 2 | 2025-02-13T00:28:16Z | 2025-03-14T18:58:25Z | https://github.com/opensearch-project/data-prepper/issues/5433 | 2,849,649,953 | 5,433 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper produces many metrics, and not all of them are necessary for all users. I'd like a way to disable metrics in Data Prepper.
**Describe the solution you'd like**
Add a `disabled_metrics` list to the `data-prepper-config.yaml` file. This will disable metrics by name. It can also support Ant-style glob patterns to control which metrics are disabled.
```
disabled_metrics:
  - "**.add_entries.recordsOut.count"
  - jvm.gc.max.data.size.value
  - "**.BlockingBuffer.readLatency.count"
```
**Describe alternatives you've considered (Optional)**
I considered adding this into a new file. But, the metrics are all configured in `data-prepper-config.yaml`, so I think we should keep it there for consistency.
We could also have some heuristic to know when a metric is part of a pipeline or not. Then we could avoid the `*` and `**`. But, using `*` and `**` would be more flexible. And probably clearer.
For example, this could attempt to determine that the metric is associated with a pipeline and then disable it.
```
disabled_metrics:
  - add_entries.recordsOut.count
```
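To illustrate how Ant-style glob matching against dotted metric names could behave, here is a sketch under assumed semantics (`*` matches within one dot-separated segment, `**` crosses segments); this is not Data Prepper's implementation, just an illustration of the idea:

```python
import re

def _glob_to_regex(pattern: str):
    # Assumed Ant-style semantics: '*' matches within one dot-separated
    # segment, '**' matches across segments.
    parts = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            parts.append(".*")
            i += 2
        elif pattern[i] == "*":
            parts.append("[^.]*")
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile("".join(parts) + r"\Z")

def is_disabled(metric_name: str, disabled_patterns: list) -> bool:
    # A metric is disabled when any configured pattern matches its full name.
    return any(_glob_to_regex(p).match(metric_name) for p in disabled_patterns)
```

For instance, `**.add_entries.recordsOut.count` would match that metric regardless of the pipeline-name prefix.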
**Additional context**
N/A
| Allow disabling metrics | https://api.github.com/repos/opensearch-project/data-prepper/issues/5431/comments | 1 | 2025-02-11T17:14:47Z | 2025-05-01T10:05:16Z | https://github.com/opensearch-project/data-prepper/issues/5431 | 2,846,008,160 | 5,431 |
[
"opensearch-project",
"data-prepper"
] | ### Problem
At present, the functionality for running a pipeline is encapsulated inside [Process Worker](data-prepper-core/src/main/java/org/opensearch/dataprepper/core/pipeline/ProcessWorker.java). While this provides encapsulation, it does not give a thread, such as in the case of the [Zero buffer](https://github.com/opensearch-project/data-prepper/pull/5416), a way to execute processors and publish to sinks synchronously without holding a Process Worker instance, which carries additional responsibilities such as shutting down the pipeline.
### Describe the solution you'd like
Implement a Pipeline Runner by moving the responsibilities of reading from the buffer, executing processors, and publishing to sinks from the Process Worker into a Pipeline Runner class, which can then be used by the Process Worker and other threads.
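A minimal sketch of the proposed split (class and method names are illustrative stand-ins, not the actual Java classes):

```python
class PipelineRunner:
    """Owns one read-process-publish cycle so any thread can drive it."""
    def __init__(self, buffer, processors, sinks):
        self.buffer = buffer
        self.processors = processors
        self.sinks = sinks

    def run_all_processors_and_publish_to_sinks(self):
        # The cycle formerly embedded in the worker: read, process, publish.
        records = self.buffer.read()
        for processor in self.processors:
            records = processor(records)
        for sink in self.sinks:
            sink.extend(records)

class ProcessWorker:
    """Keeps lifecycle duties (e.g. shutdown); delegates the cycle to the runner."""
    def __init__(self, runner):
        self.runner = runner

    def run_once(self):
        self.runner.run_all_processors_and_publish_to_sinks()
```

Other callers, such as a synchronous buffer, could invoke the runner directly without constructing a worker.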
### Additional context
[Zero Buffer](https://github.com/opensearch-project/data-prepper/issues/5415) | Implement Pipeline Runner | https://api.github.com/repos/opensearch-project/data-prepper/issues/5429/comments | 0 | 2025-02-10T21:37:02Z | 2025-03-09T21:09:13Z | https://github.com/opensearch-project/data-prepper/issues/5429 | 2,843,719,968 | 5,429 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
convert_entry_type provides the capability to convert the type of a key's value. Currently, the processor requires an exact-match key name in the `key` or `keys` entry. Using regex-based key patterns with the type check operator [1] would offer a more flexible type-conversion capability, as follows:
- key_patterns: `*version*`
- type: `integer`
- convert_when: `/*version*/ typeof float`
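As an illustration of the idea, here is a sketch of pattern-based key conversion (the function name and semantics are assumptions for illustration; the real processor would evaluate `convert_when` with Data Prepper expressions):

```python
import fnmatch

def convert_matching_keys(event: dict, key_pattern: str, target_type=int) -> dict:
    # Convert the value of every key whose name matches the glob pattern,
    # but only when the current value is a float (mirroring the
    # `typeof float` condition in the example above).
    for key, value in event.items():
        if fnmatch.fnmatch(key, key_pattern) and isinstance(value, float):
            event[key] = target_type(value)
    return event
```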
| Add support for key_patterns parameter of convert_entry_type | https://api.github.com/repos/opensearch-project/data-prepper/issues/5427/comments | 2 | 2025-02-10T03:01:00Z | 2025-03-26T01:50:54Z | https://github.com/opensearch-project/data-prepper/issues/5427 | 2,841,204,295 | 5,427 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
If the configuration uses an index template that has a field with the `flat_object` type, the pipeline fails during the deserialization phase.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new data-prepper (osi) pipeline with source as DynamoDb and Sink as Opensearch collection.
2. In data-prepper (osi) configuration add a new index-template in sink with a field of type `flat_object`.
```
index: "sample-test-index"
index_type: custom
template_type: index-template
template_content: >
  {
    "template": {
      "mappings": {
        "properties": {
          "description": {"type": "keyword"},
          "configuration": {"type": "flat_object"},
          "update_time": {"type": "date", "format": "strict_date_time||epoch_millis"}
        }
      }
    }
  }
```
3. Above is the sample config template with the `configuration` field of type `flat_object`. The index `sample-test-index` does not already exist in OpenSearch; it will be created by data-prepper.
4. Save the config and create the data-prepper (osi) pipeline.
5. Confirm that index is created in opensearch and validate that index mapping is identical to the index template passed in config file.
6. now restart the data-prepper (osi) pipeline.
7. After the pipeline is restarted, check the data-prepper (osi) pipeline logs. You will find the exception `Failed to initialize OpenSearch sink with a retryable exception.` in the logs, and the data-prepper (osi) pipeline is broken.
8. See error :
```
org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink with a retryable exception.
org.opensearch.client.util.MissingRequiredPropertyException: Missing required property 'Builder.<variant kind>'
at org.opensearch.client.util.ApiTypeHelper.requireNonNull(ApiTypeHelper.java:89) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.opensearch._types.mapping.Property.<init>(Property.java:187) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.opensearch._types.mapping.Property$Builder.build(Property.java:1410) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.json.BuildFunctionDeserializer.deserialize(BuildFunctionDeserializer.java:60) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.json.DelegatingDeserializer$SameType.deserialize(DelegatingDeserializer.java:55) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.json.JsonpDeserializerBase$StringMapDeserializer.deserialize(JsonpDeserializerBase.java:369) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.json.JsonpDeserializerBase$StringMapDeserializer.deserialize(JsonpDeserializerBase.java:355) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.json.JsonpDeserializer.deserialize(JsonpDeserializer.java:87) ~[opensearch-java-2.8.1.jar:?]
```
**Expected behavior**
Even when a `configuration` field of type `flat_object` is used, the sink should work normally as expected.
**More Explanation on Flow**
This "field_type" is not a problem during the initial index creation but it's a problem after data-prepper (osi) pipeline is restarted because :
First Time (Index Creation):
```
Initial Flow:
Data Prepper starts
↓
Reads template from its config (has flat_object)
↓
Checks if template exists in OpenSearch (doesn't exist)
↓
Creates template using REST API
→ Just sends JSON template as is
→ No deserialization needed
↓
Creates index based on template
↓
Success! Index created with mappings
```
On Restart :
```
Restart Flow:
Data Prepper starts
↓
Reads template from its config (has flat_object)
↓
Checks if template exists in OpenSearch (exists)
↓
Gets existing template from OpenSearch
→ Makes GET request to OpenSearch
→ OpenSearch returns template JSON
→ Java client tries to convert JSON response into Java objects
→ Attempts to create Property objects for each mapping field
→ Fails when trying to deserialize flat_object type
→ Throws MissingRequiredPropertyException
↓
Fails before reaching template comparison step
```
**Solution**
From the execution flow above, we have seen that the opensearch-java client is used to deserialize the `flat_object` type and that it fails. The problem is that data-prepper (osi) currently uses opensearch-java client version `2.8.1`, which does not support the `flat_object` type. Support for `flat_object` was added in opensearch-java client version `2.8.2`: https://github.com/opensearch-project/opensearch-java/pull/735
To resolve this, the opensearch-java client version must be upgraded from `2.8.1` to at least `2.8.2`.
| [BUG] Index Template with flat_object type field fails during deserialization | https://api.github.com/repos/opensearch-project/data-prepper/issues/5425/comments | 0 | 2025-02-09T09:08:06Z | 2025-02-18T20:50:38Z | https://github.com/opensearch-project/data-prepper/issues/5425 | 2,840,530,275 | 5,425 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Add a vector normalization processor that can normalize a vector field in the event.
Normalizing a vector is described here https://www.wikihow.com/Normalize-a-Vector
**Describe the solution you'd like**
A new processor that can normalize the vector as described in https://www.wikihow.com/Normalize-a-Vector.
Proposed configuration
```
processor:
  - vector_normalization:
      keys: ["key1", "key2"]
```
The above config performs vector normalization on the vector value of each of the listed keys.
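The underlying math is straightforward: divide each component by the vector's Euclidean norm. A sketch of what the proposed processor would compute per configured key (function names are illustrative):

```python
import math

def normalize_vector(vector):
    # Divide each component by the Euclidean norm to produce a unit vector.
    norm = math.sqrt(sum(c * c for c in vector))
    if norm == 0:
        raise ValueError("cannot normalize a zero vector")
    return [c / norm for c in vector]

def normalize_event_keys(event, keys):
    # Sketch of the per-key behavior the proposed processor would apply.
    for key in keys:
        if key in event:
            event[key] = normalize_vector(event[key])
    return event
```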
| Add vector normalize processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5424/comments | 0 | 2025-02-08T19:04:13Z | 2025-02-11T19:01:05Z | https://github.com/opensearch-project/data-prepper/issues/5424 | 2,840,213,684 | 5,424 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I have a pipeline that reads from an OpenSearch index and writes in another one. That OpenSearch server is behind a proxy, so I need to configure it in the source and the sink
**Describe the solution you'd like**
I think the best solution is adding a `proxy` to the `opensearch` source:
```yaml
source:
  opensearch:
    ...
    proxy: https://myproxy:8080
```
Or within the `connection` field:
```yaml
source:
  opensearch:
    ...
    connection:
      proxy: https://myproxy:8080
```
**Describe alternatives you've considered (Optional)**
- `proxy` field in the `opensearch` source, or in the `connection` subfield
- `proxy` field in the `data-prepper-config.yaml`
- Environment variable
**Additional context**
The proxy configuration is already supported for the `opensearch` sink. It was reported in #300 and implemented in #479
Thank you very much in advance!
I know how hard maintaining an open source project is, so if you need help, I can try to implement this. | Support proxy in OpenSearch source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5422/comments | 3 | 2025-02-07T14:23:25Z | 2025-03-14T09:02:07Z | https://github.com/opensearch-project/data-prepper/issues/5422 | 2,838,290,867 | 5,422 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
At present, the buffer implementations provide asynchronous behavior, but there is a need for a synchronous buffer mechanism to guarantee immediate processing and a direct, synchronous flow of data from source to sink without buffering delays.
An example is the OpenSearch API source, where Data Prepper events need to be passed synchronously from source to sink.
**Describe the solution you'd like**
- Provides synchronous, immediate pass-through behavior
- Eliminates buffering delays by directly transferring data from write to read operations
- Implements the Buffer interface to maintain compatibility with existing Data Prepper components
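Conceptually, a zero buffer hands each write straight to the reader with no intermediate storage. A minimal sketch of the pass-through behavior (not the actual implementation):

```python
class ZeroBuffer:
    """Pass-through buffer: writes are handed directly to a consumer."""
    def __init__(self, consumer):
        self._consumer = consumer  # called synchronously on every write

    def write(self, record):
        # No queueing: the caller blocks until the consumer
        # (processors + sinks) has handled the record.
        self._consumer(record)
```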
**Additional context**
- [Opensearch API source](https://github.com/opensearch-project/data-prepper/issues/248)
### Tasks
- [x] #5416
- [x] #5429 | Zero Buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/5415/comments | 2 | 2025-02-05T20:52:08Z | 2025-03-09T21:09:14Z | https://github.com/opensearch-project/data-prepper/issues/5415 | 2,833,938,802 | 5,415 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
We have a pipeline which appears to be losing events on the `s3` sink. We see metrics that indicate a significant difference between `s3.recordsIn.count` and `s3.s3SinkObjectsEventsSucceeded.count`. However, I do not see any error logs.
**To Reproduce**
We do not yet have steps to reproduce.
**Expected behavior**
These metrics should align and all data should be in the S3 bucket.
**Environment (please complete the following information):**
Data Prepper 2.10.1
**Additional context**
This may be related to #5412. At the very least, that issue may be causing us to acknowledge the events even though they were not actually written.
| [BUG] Events are missing from S3 sink even with end-to-end acknowledgements | https://api.github.com/repos/opensearch-project/data-prepper/issues/5413/comments | 2 | 2025-02-05T01:41:07Z | 2025-06-17T19:58:22Z | https://github.com/opensearch-project/data-prepper/issues/5413 | 2,831,652,256 | 5,413 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper end-to-end acknowledgements have expiration times. These times are set to 10 minutes by default.
The current implementation of DynamoDB streams will hold a single AcknowledgementSet until the whole shard is processed. A DynamoDB shard can remain open for up to four hours with the source coordination. So it is quite easy for the AcknowledgementSet to take up to four hours. This means that the acknowledgement set will expire.
**Expected behavior**
Acknowledgement sets will not expire while processing a shard.
**Proposed Solution**
Update the DynamoDB source to extend the AcknowledgementSet expiryTime by an additional 10 minutes on each progress check.
This can be done by updating the [`ProgressCheck`](https://github.com/opensearch-project/data-prepper/blob/6681e75d8b8cfa3985e4a11f9fa9d6238562e462/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/acknowledgements/ProgressCheck.java#L8-L16) interface and implementing this behavior to change the `expiryTime`.
```
public interface ProgressCheck {
/**
* Returns the pending ratio
*
* @return returns the ratio of pending to the total acknowledgements
* @since 2.6
*/
Double getRatio();
/**
* Increases the expiry time of the acknowledgement set by the given duration
*
* @param additionalTime additional time to be added to the expiry time
* @since 2.11
*/
void increaseExpiry(Duration additionalTime);
}
```
After this, subscribe the `dynamodb` source to progress checks and use the callback to increase the expiry.
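To illustrate the intended interaction, here is a small simulation of a progress-check callback extending an acknowledgement set's expiry (these are Python stand-ins for the Java types; the names are illustrative only):

```python
import datetime

class AcknowledgementSet:
    """Toy stand-in for the real acknowledgement set."""
    def __init__(self, expiry: datetime.timedelta):
        self.expiry = expiry
        self._progress_callbacks = []

    def add_progress_check(self, callback):
        self._progress_callbacks.append(callback)

    def on_progress_check(self):
        # Periodically invoked while acknowledgements are still pending.
        for callback in self._progress_callbacks:
            callback(self)

    def increase_expiry(self, additional_time: datetime.timedelta):
        self.expiry += additional_time

def extend_while_processing(ack_set):
    # The source's callback: keep the set alive while the shard is still open.
    ack_set.increase_expiry(datetime.timedelta(minutes=10))
```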
**Environment (please complete the following information):**
Data Prepper 2.10.1
**Additional context**
A fuller solution could be provided by #4764.
| [BUG] DynamoDB source with acknowledgements expires frequently | https://api.github.com/repos/opensearch-project/data-prepper/issues/5412/comments | 0 | 2025-02-05T01:33:48Z | 2025-02-13T20:34:33Z | https://github.com/opensearch-project/data-prepper/issues/5412 | 2,831,644,346 | 5,412 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Allow the `map_to_list` processor to convert multiple sources (map of key-value pairs) to lists of objects under different `map_to_list_when` conditions.
**Describe the solution you'd like**
Modify the `map_to_list` processor to support a new configuration that allows multiple entries of `source`, `target`, and `map_to_list_when` (optional), as shown below. The existing configuration is still supported; this new configuration under `entries` needs to be added.
```
processor:
  - map_to_list:
      entries:
        - source: "my-map"
          target: "my-list"
          map_to_list_when: "some-key == 'test'"
        - source: "my-map2"
          target: "my-list2"
          map_to_list_when: "some-key2 == 'test2'"
```
| Support multiple entries of source + target + map_to_list_when condition in map_to_list processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5380/comments | 0 | 2025-01-31T23:55:13Z | 2025-02-03T20:05:24Z | https://github.com/opensearch-project/data-prepper/issues/5380 | 2,824,596,551 | 5,380 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Unable to run data prepper locally in a docker container pointing to a glue schema registry in another docker container (motoserver)
**Describe the solution you'd like**
All that is necessary for this support is to allow specifying the AWS Glue registry URL and populating `AWSSchemaRegistryConstants.AWS_ENDPOINT` in the configuration when it is specified.
**Describe alternatives you've considered (Optional)**
At this point I don't know of any way to be able to consume messages from a local kafka with a local aws glue schema registry
| Kafka local aws glue registry support | https://api.github.com/repos/opensearch-project/data-prepper/issues/5377/comments | 3 | 2025-01-30T22:48:23Z | 2025-03-18T19:49:30Z | https://github.com/opensearch-project/data-prepper/issues/5377 | 2,821,975,405 | 5,377 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `documentVersionConflictErrors` in the OpenSearch sink are not sent to the DLQ and are not true document errors, since a version conflict means the latest document is already in OpenSearch. Because of this, they should not be included in the `documentErrors` metric, as including them is a confusing user experience.
**Expected behavior**
Do not include `documentVersionConflictError` in the overall `documentErrors` metric
| [BUG] Do not report documentVersionConflictErrors in documentErrors metric in the OpenSearch sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/5376/comments | 2 | 2025-01-30T22:03:18Z | 2025-03-06T17:07:06Z | https://github.com/opensearch-project/data-prepper/issues/5376 | 2,821,908,073 | 5,376 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
In containerized environments, direct communication between the control plane and the data plane (Data Prepper) is restricted. For example, when running Data Prepper as a container in ECS Fargate, the control plane can initiate requests (curl commands) to Data Prepper's admin APIs using [ecs execute-command](https://docs.aws.amazon.com/cli/latest/reference/ecs/execute-command.html), but it faces an inherent limitation:
* Can execute commands in the container
* Cannot capture or return the API response
* Command execution succeeds but response is lost
**Describe the solution you'd like**
Asynchronous responses solve this by inverting the communication flow:
1. Control plane initiates request with callback URL
2. Data Prepper receives request
3. Data Prepper independently calls callback URL with results
4. No need to capture response through execute-command
Here's a high-level overview of supporting both synchronous and asynchronous responses in Data Prepper's admin API:
Request Pattern:
1. Sync request:
   - `GET /pipelines`
   - Response: immediate result
2. Async request:
   - `GET /pipelines` with header `X-Callback-Url: http://callback-endpoint`
   - Response: immediate acknowledgment, then a callback with the result
Implementation Strategy:
1. Handler Detection:
* Check for X-Callback-Url header or query parameter
* Choose sync/async processing based on presence
* Same endpoint supports both patterns
2. Response Flow:
* Sync: Request -> Process -> Return Result
* Async: Request -> Return Ack -> Process -> Callback with Result
3. Response Types:
* Sync Response: { data: actual_result }
* Async Response: { requestId: "uuid", status: "ACCEPTED" }
* Callback: { requestId: "uuid", data: actual_result }
This approach maintains backward compatibility while adding async support through optional callback URLs.
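A sketch of the handler-detection logic described above (the function name and payload shapes are illustrative; a real implementation would run the work and the callback POST asynchronously on another thread):

```python
import uuid

def handle_admin_request(headers, execute, post_callback):
    """Choose sync or async processing based on the X-Callback-Url header."""
    callback_url = headers.get("X-Callback-Url")
    if callback_url is None:
        # Sync path: run the handler and return the result directly.
        return {"data": execute()}
    # Async path: acknowledge immediately, deliver the result via the
    # callback URL (here called inline; in reality on a worker thread).
    request_id = str(uuid.uuid4())
    post_callback(callback_url, {"requestId": request_id, "data": execute()})
    return {"requestId": request_id, "status": "ACCEPTED"}
```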
**Describe alternatives you've considered (Optional)**
Run a control plane proxy along with Data Prepper that has direct access to data prepper container
| Asynchronous Response support for Data Prepper Admin Apis | https://api.github.com/repos/opensearch-project/data-prepper/issues/5374/comments | 1 | 2025-01-30T19:23:26Z | 2025-02-04T20:39:10Z | https://github.com/opensearch-project/data-prepper/issues/5374 | 2,821,636,505 | 5,374 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
For OpenTelemetry spans, Data Prepper uses the span id as the document id in OpenSearch. The OpenTelemetry span id is supposed to be an 8-byte array with at least one non-zero value. Data Prepper encodes the array in hex and uses the result as the document id for indexing in OpenSearch. This can create collisions between different traces when spans share the same span id.
**To Reproduce**
Run one of the tracing examples and ingest some spans. Query the span data from OpenSearch and compare the fields `_id` and `spanId`.
**Expected behavior**
The document id should uniquely determine a span without collisions across different traces. The document id used should either be random or incorporate both the `traceId` and `spanId` of the corresponding span.
**Screenshots**
<img width="1406" alt="Image" src="https://github.com/user-attachments/assets/85a4a8e8-397e-4b98-9662-719b633a1c96" />
**Environment (please complete the following information):**
- Docker setup from examples folder
- Version 2.10
**Additional context**
There is a work-around: specify a `document_id` in the OpenSearch sink, e.g. `traceId-spanId`. I did not find out where the current behavior of using just `spanId` is implemented.
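For reference, the work-around could look roughly like the following sink configuration (the exact `document_id` expression syntax is an assumption here and should be verified against your Data Prepper version):

```yaml
sink:
  - opensearch:
      # Work-around: derive the document id from both the trace id and the span id.
      document_id: "${/traceId}-${/spanId}"
```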
| [BUG] OpenTelemetry Spans are indexed using the span id causing collisions | https://api.github.com/repos/opensearch-project/data-prepper/issues/5370/comments | 5 | 2025-01-29T12:47:01Z | 2025-05-08T16:11:29Z | https://github.com/opensearch-project/data-prepper/issues/5370 | 2,818,118,684 | 5,370 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Allow the `list-to-map` processor to convert multiple sources (a list of objects from an event) to maps of keys to objects under different conditions.
**Describe the solution you'd like**
Modify the `list-to-map` processor to support a new configuration that allows multiple entries of `source` and `list_to_map_when` conditions, as shown below. The existing configuration is still supported; this new configuration under `entries` needs to be added.
```
processor:
  - list_to_map:
      entries:
        - source: "mylist"
          key: "name"
          value_key: "value"
          flatten: true
          list_to_map_when: "some-key == 'test'"
        - source: "mylist2"
          key: "name2"
          value_key: "value2"
          flatten: true
          list_to_map_when: "some-key2 == 'test2'"
```
| Support multiple entries of source + list_to_map_when condition in list-to-map processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5368/comments | 0 | 2025-01-28T22:27:02Z | 2025-01-29T21:42:19Z | https://github.com/opensearch-project/data-prepper/issues/5368 | 2,816,838,331 | 5,368 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
This is a new feature request. We'd like Data Prepper to have website-crawling capabilities that can crawl websites and facilitate the ingestion of web pages into OpenSearch.
**Describe the solution you'd like**
Introduce a "webcrawler source" that would provide ability to crawl a public website on a periodic basis (on-demand or a schedule), respecting the configuration of the website and rate-limiting of the requests, filtering (including/excluding pages), etc... On pages that were acquired, the ability to store content in OpenSearch for search and discovery.
**Describe alternatives you've considered (Optional)**
Use of a Selenium web crawler and a Chromium driver, then filtering, enriching the content, and storing it in OpenSearch.
**Additional context**
N/A | website crawler - "source" | https://api.github.com/repos/opensearch-project/data-prepper/issues/5355/comments | 1 | 2025-01-24T15:52:23Z | 2025-01-28T20:57:55Z | https://github.com/opensearch-project/data-prepper/issues/5355 | 2,809,775,135 | 5,355 |
[
"opensearch-project",
"data-prepper"
] | ## Summary
Currently, the LambdaSink sends all incoming records immediately to AWS Lambda, causing multiple small invocations if thresholds are not met. We want to keep a stateful (persistent) buffer in the sink so it only flushes full batches immediately (when thresholds are exceeded) and persists any partial (incomplete) batch until more events arrive or until the sink shuts down. This ensures fewer, larger Lambda invocations and avoids prematurely flushing partial data.
The last incomplete batch should be persisted until it
(a) becomes full, or
(b) the sink shuts down.
## Details
### Persistent Buffer
A single buffer (persistentBuffer) accumulates events across multiple doOutput() calls.
Only when size or event-count thresholds are reached do we treat that batch as “full” and flush it to Lambda.
The partial batch remains in memory until it either becomes full or the sink shuts down.
N-1 “full” buffers get flushed immediately; the Nth (partial) buffer remains in memory until the next doOutput() call or shutdown().
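The batching behavior described above can be sketched as follows (class and method names are illustrative, not the actual sink code; the lock addresses the thread-safety concern below):

```python
import threading

class StatefulBatcher:
    """Accumulate events, flush only full batches, retain the partial one."""
    def __init__(self, max_events, flush):
        self._max_events = max_events
        self._flush = flush            # e.g. one Lambda invocation per full batch
        self._buffer = []
        self._lock = threading.Lock()  # guard concurrent doOutput()-style calls

    def add_all(self, events):
        with self._lock:
            self._buffer.extend(events)
            # Flush every full batch; the remainder stays buffered.
            while len(self._buffer) >= self._max_events:
                batch = self._buffer[:self._max_events]
                self._buffer = self._buffer[self._max_events:]
                self._flush(batch)

    def shutdown(self):
        with self._lock:
            if self._buffer:
                self._flush(self._buffer)  # flush the last partial batch
                self._buffer = []
```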
### Ensure Thread-Safety
When using the persistent buffer, make sure we don't hit race conditions when multiple threads write to the buffer. | [Enhancement] Add Stateful Buffering to LambdaSink | https://api.github.com/repos/opensearch-project/data-prepper/issues/5353/comments | 0 | 2025-01-24T02:55:31Z | 2025-02-11T01:45:24Z | https://github.com/opensearch-project/data-prepper/issues/5353 | 2,808,375,946 | 5,353 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We are using Data Prepper to forward OTel trace data to AWS managed OpenSearch. Our security team would like to enforce a Kubernetes OPA requirement for a container-level security context with `readOnlyRootFilesystem`.
Data Prepper shows the below error in the log:
ERROR StatusConsoleListener Unable to create file log/data-prepper/data-prepper.log
java.io.IOException: Could not create directory /usr/share/data-prepper/log/data-prepper
at org.apache.logging.log4j.core.util.FileUtils.mkdir(FileUtils.java:128)
at org.apache.logging.log4j.core.util.FileUtils.makeParentDirs(FileUtils.java:141)
Once I mounted an emptyDir volume for the log path, Data Prepper then failed on:
Caused by: java.lang.RuntimeException: Unable to create the directory at the provided path: service-map
**Describe the solution you'd like**
A separate mountable filesystem to minimize the security risk
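One possible shape for this is writable emptyDir mounts for the paths Data Prepper writes to (the mount paths below are inferred from the error messages above and should be verified for your deployment):

```yaml
containers:
  - name: data-prepper
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: logs
        mountPath: /usr/share/data-prepper/log            # log4j file appender output
      - name: service-map
        mountPath: /usr/share/data-prepper/service-map    # service-map working directory (path assumed)
volumes:
  - name: logs
    emptyDir: {}
  - name: service-map
    emptyDir: {}
```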
| Support Security Context "readonlyRootFilesystem" | https://api.github.com/repos/opensearch-project/data-prepper/issues/5346/comments | 1 | 2025-01-21T17:20:31Z | 2025-01-21T20:50:26Z | https://github.com/opensearch-project/data-prepper/issues/5346 | 2,802,411,725 | 5,346 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Allow the source field to be deleted if the `dissect` processor was successful.
**Describe the solution you'd like**
Modify the `dissect` processor to support a new configuration option that allows the source to be deleted when the `dissect` processor was successful, along the lines of the `delete_source` option of the `parse_json` processor.
Example:
```yaml
processor:
  - dissect:
      map:
        message: "%{Date} %{Time} %{Log_Type}: %{Message}"
      delete_source: true
```
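For illustration, with the map above and `delete_source: true`, a successful dissect would remove the original `message` field after extracting the new fields (the input/output values here are made-up examples):

Input:
```json
{ "message": "07-25-2023 10:00:00 ERROR: Something went wrong" }
```
Output:
```json
{ "Date": "07-25-2023", "Time": "10:00:00", "Log_Type": "ERROR", "Message": "Something went wrong" }
```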
| Allow source field to be deleted if dissect processor was successful | https://api.github.com/repos/opensearch-project/data-prepper/issues/5345/comments | 2 | 2025-01-21T09:40:40Z | 2025-01-21T20:44:52Z | https://github.com/opensearch-project/data-prepper/issues/5345 | 2,801,249,813 | 5,345 |
[
"opensearch-project",
"data-prepper"
] | ## Description
Currently, when a rule in our RuleEvaluator matches, the entire pipeline configuration is replaced using the template. This means that all processors before and after the matched plugin get lost, and we only end up with the new pipeline from the template. We want to support partial transformations in which only the matched plugin is replaced (or expanded) while retaining all other parts of the pipeline configuration.
## UseCase
The OCSF processor currently uses transformation to address different types of logs, but it comes with a limitation: no other processors can be used alongside it.
## Requirement
Based on rule and template yaml, Transform from:
```
processors:
processorA:
processorB:
processorC:
```
to
```
processors:
processorA:
processorB1:
processorB2:
processorB3:
processorC:
```
## Proposed Solution
1. Extend RuleEvaluator, add partial transformation:
- Add fields in the rule YAML to indicate partial transformations (e.g., partialTransformation: true, transformOperation: "replaceProcessorWithList").
rule snippet:
```
pluginName: "ocsf"
applyWhen:
- "$.pipelines.*.processors[?(@.ocsf)].ocsf"
partialTransformation: true
transformOperation: "replaceProcessorWithList"
templateFile: "new_processors_template.yaml"
```
template snippet:
```
processors:
processorA1:
processorA2:
processorA3:
```
2. Store Plugin-Level Template Snippet
- Instead of holding a full pipeline in pipelineTemplateModel, load or parse a snippet containing only the replacement/expanded processors for the matched plugin.
- Keep placeholders or special logic in that snippet if needed.
3. Update DynamicConfigTransformer
   - Locate the exact matched processor in the pipeline (by plugin name or JSON Path).
   - Remove the old processor.
   - Insert the new processor(s) from the partial template snippet at the same position.
   - Keep all other processors (and sources/sinks) as-is.
4. Backward Compatibility
   - If `partialTransformation` is not specified (or is `false`), continue with the existing "full replacement" logic.
   - Existing users who rely on full pipeline replacement remain unaffected.
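As a rough sketch of the core `replaceProcessorWithList` operation (operating on the processor list as parsed YAML maps — the class and method names here are illustrative, not the actual `DynamicConfigTransformer` API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of "replaceProcessorWithList": swap only the matched processor entry
// for the template's processor list, preserving everything before and after it.
class PartialTransformer {
    static List<Map<String, Object>> replaceProcessor(
            final List<Map<String, Object>> processors,
            final String matchedPluginName,
            final List<Map<String, Object>> replacement) {
        final List<Map<String, Object>> result = new ArrayList<>();
        for (final Map<String, Object> processor : processors) {
            if (processor.containsKey(matchedPluginName)) {
                result.addAll(replacement); // expand the matched plugin in place
            } else {
                result.add(processor);      // keep all other processors as-is
            }
        }
        return result;
    }
}
```

Transforming `[processorA, ocsf, processorC]` with a two-processor template would then yield `[processorA, ocsfA, ocsfB, processorC]`, which is exactly the partial-transformation behavior described above.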
## Acceptance Criteria
- Partial transformation replaces exactly one matched plugin with new processors from the template while preserving all other processors.
- If multiple plugins match, transform them all
- Full transformation logic remains intact for rules that do not specify partialTransformation: true.
| [Feature Enhancement] Support Partial Plugin-Level Transformations in DynamicConfigTransformer | https://api.github.com/repos/opensearch-project/data-prepper/issues/5343/comments | 0 | 2025-01-17T21:52:51Z | 2025-01-21T20:42:41Z | https://github.com/opensearch-project/data-prepper/issues/5343 | 2,796,366,680 | 5,343 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
MapToList processor throws NPE when input map contains null values
**To Reproduce**
For example, if the input data is:
```json
{
"my-map": {
"key1": "value1",
"key2": null
}
}
```
and processor config is:
```yaml
- map_to_list:
source: "my-map"
target: "my-list"
```
or
```yaml
- map_to_list:
source: "my-map"
target: "my-list"
convert_field_to_list: true
```
It will throw NPE.
**Expected behavior**
Processor should be able to process the record without errors.
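Presumably (based on the processor's default `key`/`value` output format), a fixed processor would emit something like the following for the first configuration, carrying the null value through rather than throwing:

```json
{
  "my-map": {
    "key1": "value1",
    "key2": null
  },
  "my-list": [
    { "key": "key1", "value": "value1" },
    { "key": "key2", "value": null }
  ]
}
```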
| [BUG] MapToList processor throws NPE when input map contains null values | https://api.github.com/repos/opensearch-project/data-prepper/issues/5341/comments | 0 | 2025-01-16T21:40:06Z | 2025-01-17T04:07:43Z | https://github.com/opensearch-project/data-prepper/issues/5341 | 2,793,827,446 | 5,341 |
[
"opensearch-project",
"data-prepper"
] | The current implementation of the Lambda Processor in Data Prepper does not support conditional retries based on specific exception types. This enhancement will add the capability to configure the Lambda Processor to retry only when specific exceptions are encountered during invocation.
This feature will improve the reliability and efficiency of the processor by reducing unnecessary retries for non-recoverable errors and focusing on recoverable ones.
Retry Logic Enhancement:
- Introduce a mechanism to classify exceptions (e.g., transient, recoverable).
- Allow retries for specific exception classes (e.g., `TooManyRequestsException`, `ServiceUnavailableException`).
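A minimal sketch of what such classification-based retry logic could look like (the exception names, the name-based classification, and the retry limit are illustrative assumptions, not the processor's actual API; a real implementation would also back off between attempts):

```java
import java.util.Set;
import java.util.concurrent.Callable;

// Illustrative sketch: retry the Lambda invocation only for exception types
// classified as transient/recoverable; fail fast on everything else.
class ConditionalRetryInvoker {
    // Assumed set of retryable exception class names (configurable in practice).
    private static final Set<String> RETRYABLE = Set.of(
            "TooManyRequestsException", "ServiceUnavailableException");

    static boolean isRetryable(final Exception e) {
        return RETRYABLE.contains(e.getClass().getSimpleName());
    }

    static <T> T invokeWithRetry(final Callable<T> invocation, final int maxRetries) throws Exception {
        Exception lastException = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return invocation.call();
            } catch (final Exception e) {
                if (!isRetryable(e)) {
                    throw e; // non-recoverable: do not retry
                }
                lastException = e; // recoverable: loop and retry
            }
        }
        throw lastException;
    }
}
```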
| Enhance Lambda processor to retry based on certain class of exception | https://api.github.com/repos/opensearch-project/data-prepper/issues/5340/comments | 1 | 2025-01-16T20:53:47Z | 2025-03-04T20:51:45Z | https://github.com/opensearch-project/data-prepper/issues/5340 | 2,793,748,426 | 5,340 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, users have to explicitly know or keep track of any metadata in order to use it when working with records in OpenSearch. This makes it difficult to access and utilize important information about how records were sourced, processed, and sinked to OpenSearch.
**Describe the solution you'd like**
Implement a feature that allows users to easily view or dump all metadata associated with a record in OpenSearch. This should include information about how the record was sourced, processed, and sinked to OpenSearch. The solution should provide a simple and intuitive way to access this metadata without requiring users to manually track or remember specific metadata fields.
**Additional context**
This feature would greatly improve the traceability and auditability of data in OpenSearch. It would be particularly useful for debugging, data lineage tracking, and ensuring compliance with data governance policies. The ability to easily access and view all metadata associated with a record would enhance the overall usability and value of the OpenSearch system for data management and analysis tasks.
| Feature Request: View/Dump All Metadata Associated with OpenSearch Records | https://api.github.com/repos/opensearch-project/data-prepper/issues/5338/comments | 3 | 2025-01-16T01:53:26Z | 2025-04-08T19:47:47Z | https://github.com/opensearch-project/data-prepper/issues/5338 | 2,791,490,494 | 5,338 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I'd like to be able to get encrypted data keys from an S3 bucket. I want to list the objects in the bucket and then have them all available for encryption. However, I'd like to use the latest one by default.
**Describe the solution you'd like**
I'd like to build off of #5335 and support this as part of an encryption engine feature.
```
encryption:
encryption_key_directory: s3://my-bucket/path/to/
```
**Describe alternatives you've considered (Optional)**
This could be moved to be controlled under the `kafka` buffer, but I'd rather do this in the proposed extension.
```
buffer:
kafka:
topics:
- name: topicname
encryption_key_directory: s3://data-plane-bucket/accountId/pipelineName/internalPipelineId/
```
**Additional context**
N/A
| Support loading encryption keys from an S3 bucket | https://api.github.com/repos/opensearch-project/data-prepper/issues/5336/comments | 0 | 2025-01-15T21:36:20Z | 2025-05-29T20:58:17Z | https://github.com/opensearch-project/data-prepper/issues/5336 | 2,790,970,605 | 5,336 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper's `kafka` buffer supports encryption. But, this feature is isolated only to this buffer. Other situations may warrant client-side encryption such as reading or writing to S3.
Additionally, the current solution somewhat combines two different encryption providers. We should make use of the [AWS Encryption SDK for Java](https://github.com/aws/aws-encryption-sdk-java) in the implementation.
**Describe the solution you'd like**
Data Prepper could provide an extension which supports client-side encryption. This approach would allow for the extensions to be pluggable so that different encryption providers could be available for use by different pipeline components.
Take the example of using KMS encryption in the Kafka buffer. Right now you need a configuration like the following.
```
buffer:
kafka:
topics:
- name: topicname
encryption_key: ABCD...
kms:
key_id: arn:aws:kms:us-east-2:123456789012:key/MyKmsKey
region: us-east-2
```
This provides an encrypted data key (`encryption_key`) and KMS information for decrypting this encryption key.
This could instead become.
```
buffer:
kafka:
topics:
- name: topicname
client_side_encryption: default
```
In this situation we are stating that it will use the default encryption provider. We could also have named encryption providers to allow different topics to use different ones.
```
buffer:
kafka:
topics:
- name: topicname
client_side_encryption: kms1
- name: topicname
client_side_encryption: kms2
```
To support this, we will allow configuring this encryption in the `data-prepper-config.yaml` file.
```
encryption:
default:
kms:
key_id: arn:aws:kms:us-east-2:123456789012:key/MyKmsKey
region: us-east-2
```
It could also support named configurations:
```
encryption:
kms1:
kms:
key_id: arn:aws:kms:us-east-2:123456789012:key/MyKmsKey1
region: us-east-2
kms2:
kms:
key_id: arn:aws:kms:us-east-2:123456789012:key/MyKmsKey2
region: us-east-2
```
Data Prepper would provide a new interface for encryption.
```
interface EncryptionEngine {
EncryptionEnvelope encrypt(byte[] data);
byte[] decrypt(EncryptionEnvelope encryptionInfo);
}
```
```
class EncryptionEnvelope {
/**
* The raw data such as the Data Prepper Event.
*/
String getData();
/**
* The envelope encryption key. It must be encrypted.
*/
String getEncryptedDataKey();
}
```
The existing [`EncryptionSerializer`](https://github.com/opensearch-project/data-prepper/blob/cb765d8478448482dffcfe16df30bdc970025ed3/data-prepper-plugins/kafka-plugins/src/main/java/org/opensearch/dataprepper/plugins/kafka/common/serialization/EncryptionSerializer.java) will need some additional design and refactoring. This is because it assumes a single data key.
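For illustration only, here is a minimal local sketch of the envelope-encryption flow such an engine would implement. It uses a locally held AES key-encryption key as a stand-in for KMS, and the class names are hypothetical — it is not Data Prepper's proposed API nor the AWS Encryption SDK:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.Key;
import java.security.SecureRandom;

// Holds the ciphertext plus the wrapped (encrypted) per-message data key.
class LocalEnvelope {
    final byte[] encryptedData;
    final byte[] encryptedDataKey;
    final byte[] iv;

    LocalEnvelope(final byte[] encryptedData, final byte[] encryptedDataKey, final byte[] iv) {
        this.encryptedData = encryptedData;
        this.encryptedDataKey = encryptedDataKey;
        this.iv = iv;
    }
}

// Stand-in engine: a real implementation would call KMS (or the AWS Encryption
// SDK) to wrap/unwrap the data key instead of holding a local key-encryption key.
class LocalEnvelopeEncryptionEngine {
    private final SecretKeySpec keyEncryptionKey;

    LocalEnvelopeEncryptionEngine(final byte[] kekBytes) {
        this.keyEncryptionKey = new SecretKeySpec(kekBytes, "AES");
    }

    LocalEnvelope encrypt(final byte[] data) throws Exception {
        final KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(128);
        final SecretKey dataKey = keyGenerator.generateKey(); // fresh data key per message
        final byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, iv));
        final byte[] ciphertext = cipher.doFinal(data);
        final Cipher wrapCipher = Cipher.getInstance("AESWrap"); // KMS wrap in the proposal
        wrapCipher.init(Cipher.WRAP_MODE, keyEncryptionKey);
        return new LocalEnvelope(ciphertext, wrapCipher.wrap(dataKey), iv);
    }

    byte[] decrypt(final LocalEnvelope envelope) throws Exception {
        final Cipher unwrapCipher = Cipher.getInstance("AESWrap");
        unwrapCipher.init(Cipher.UNWRAP_MODE, keyEncryptionKey);
        final Key dataKey = unwrapCipher.unwrap(envelope.encryptedDataKey, "AES", Cipher.SECRET_KEY);
        final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, dataKey, new GCMParameterSpec(128, envelope.iv));
        return cipher.doFinal(envelope.encryptedData);
    }
}
```

The key design point the sketch shows is that only the encrypted data key travels with the payload, so swapping the key-wrapping step for a KMS call gives the pluggable-provider behavior described above.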
**Describe alternatives you've considered (Optional)**
There are some alternative ways to express the `data-prepper-config.yaml`, but I chose the one I did because it looks most like the AWS plugin feature for named credentials.
**Additional context**
This would modify some of the behavior from #3486.
I'm following the convention of named AWS credentials as defined in #2570.
| Encryption extension for client-side encryption | https://api.github.com/repos/opensearch-project/data-prepper/issues/5335/comments | 1 | 2025-01-15T21:33:05Z | 2025-05-29T20:58:17Z | https://github.com/opensearch-project/data-prepper/issues/5335 | 2,790,963,215 | 5,335 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, updating processors in a Data Prepper pipeline requires stopping and restarting the pipeline, which can lead to data loss or processing delays. We need a mechanism to update processor configurations or swap out processors dynamically without interrupting the data flow.
**Describe the solution you'd like**
Implement a system for hot-swapping processors and updating their configurations on-the-fly, ensuring continuous data processing without pipeline restarts.
1. ProcessorRegistry
```
public class ProcessorRegistry {
private volatile List<Processor> processors;
public ProcessorRegistry(List<Processor> initialProcessors) {
this.processors = new ArrayList<>(initialProcessors);
}
// Atomic swap of entire processor list
public void swapProcessors(List<Processor> newProcessors) {
Objects.requireNonNull(newProcessors, "New processors list cannot be null");
this.processors = new ArrayList<>(newProcessors);
}
// Get current processors for execution
public List<Processor> getProcessors() {
return processors;
}
}
```
2. ProcessWorker class that uses the registry
```
public class ProcessWorker {
private final ProcessorRegistry processorRegistry;
private final Buffer readBuffer;
// ... other fields
private void doRun() {
final Map.Entry<Collection, CheckpointState> readResult = readBuffer.read(pipeline.getReadBatchTimeoutInMillis());
Collection records = readResult.getKey();
final CheckpointState checkpointState = readResult.getValue();
// Get current processor list from registry
List<Processor> currentProcessors = processorRegistry.getProcessors();
for (final Processor processor : currentProcessors) {
List<Event> inputEvents = null;
if (acknowledgementsEnabled) {
inputEvents = ((List<Record<Event>>) records).stream()
.map(Record::getData)
.collect(Collectors.toList());
}
try {
records = processor.execute(records);
if (inputEvents != null) {
processAcknowledgements(inputEvents, records);
}
} catch (final Exception e) {
LOG.error("Processor threw an exception. This batch of Events will be dropped.", e);
if (inputEvents != null) {
processAcknowledgements(inputEvents, Collections.emptyList());
}
records = Collections.emptyList();
break;
}
}
postToSink(records);
readBuffer.checkpoint(checkpointState);
}
}
```
3. Manager class to handle processor updates
```
public class PipelineManager {
private final ProcessorRegistry processorRegistry;
public void updateProcessors(List<Processor> newProcessors) {
try {
validateProcessors(newProcessors);
processorRegistry.swapProcessors(newProcessors);
LOG.info("Successfully updated processors");
} catch (Exception e) {
LOG.error("Failed to update processors", e);
throw new ProcessorUpdateException("Failed to update processors", e);
}
}
private void validateProcessors(List<Processor> processors) {
if (processors == null || processors.isEmpty()) {
throw new IllegalArgumentException("Processors list cannot be null or empty");
}
// Add any additional validation logic
}
}
```
**Describe alternatives you've considered (Optional)**
* Implementing a copy-on-write approach for the entire processor chain.
* Using a message queue/buffering between processors to allow for more flexible updates.
| Swap out processors dynamically without interrupting the data flow | https://api.github.com/repos/opensearch-project/data-prepper/issues/5327/comments | 3 | 2025-01-13T23:58:54Z | 2025-05-22T17:44:22Z | https://github.com/opensearch-project/data-prepper/issues/5327 | 2,785,769,307 | 5,327 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Implement Dynamic Adjustment of flush_timeout and bulk_size for OpenSearch Sink.
Currently, the OpenSearch sink in Data Prepper uses static values for flush_timeout and bulk_size. This can lead to suboptimal performance in varying traffic conditions. We need to implement a dynamic adjustment mechanism for these properties based on the observed traffic pattern.
**Describe the solution you'd like**
Implement Adaptive Batching:
Instead of adjusting flush_timeout and bulk_size, implement an adaptive batching mechanism that dynamically adjusts the batch size based on the current traffic rate. This could involve starting with smaller batches and gradually increasing the size as traffic increases and vice-versa.
1. Monitor the incoming traffic rate to the OpenSearch sink.
2. For high TPS (Transactions Per Second) scenarios:
* Use default values for flush_timeout and bulk_size to optimize for throughput.
3. For low TPS or sporadic request patterns:
* Set flush_timeout to -1 for immediate flushing.
* Potentially reduce bulk_size to ensure timely processing of smaller batches.
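As a rough illustration of the monitoring loop above (the thresholds, growth factor, and timeout values below are arbitrary assumptions, not proposed defaults):

```java
// Illustrative adaptive policy: grow the batch size multiplicatively under high
// traffic, reset it for sporadic traffic, and flush immediately (timeout -1)
// when TPS is low. All threshold values here are placeholders.
class AdaptiveBatchPolicy {
    static final int MIN_BATCH_SIZE = 25;
    static final int MAX_BATCH_SIZE = 1000;
    static final double HIGH_TPS_THRESHOLD = 500.0;
    static final double LOW_TPS_THRESHOLD = 10.0;

    private int batchSize = MIN_BATCH_SIZE;

    // Called periodically with the observed events-per-second.
    void observe(final double tps) {
        if (tps >= HIGH_TPS_THRESHOLD) {
            batchSize = Math.min(MAX_BATCH_SIZE, batchSize * 2);
        } else if (tps <= LOW_TPS_THRESHOLD) {
            batchSize = MIN_BATCH_SIZE;
        }
        // Between the thresholds, keep the current size to avoid oscillation.
    }

    int batchSize() {
        return batchSize;
    }

    // -1 mirrors the proposed "flush immediately" setting for low traffic.
    long flushTimeoutMillis(final double tps) {
        return tps <= LOW_TPS_THRESHOLD ? -1 : 10_000;
    }
}
```

Keeping the size unchanged between the two thresholds is one simple way to address the "frequent oscillations" challenge noted below.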
Benefits:
1. Improved performance across varying traffic patterns.
2. Reduced latency for low-traffic scenarios.
3. Better resource utilization during high-traffic periods.
4. Enhanced user experience without manual configuration changes.
Potential Challenges:
1. Determining optimal thresholds and adjustment strategies.
2. Ensuring thread-safety in dynamic property adjustments.
3. Avoiding frequent oscillations in settings.
**Describe alternatives you've considered (Optional)**
1. Traffic-Based Worker Scaling:
Implement a system that scales the number of worker threads processing the sink based on the incoming traffic. This could help manage both high and low traffic scenarios without changing the flush_timeout or bulk_size.
2. Time-Based Flushing with Backpressure:
Instead of using a fixed flush_timeout, implement a time-based flushing mechanism with backpressure. This would flush based on time for low traffic, but could delay flushing if the system is under high load, effectively adapting to traffic patterns.
3. Machine Learning-Based Prediction:
Implement a machine learning model that predicts traffic patterns and adjusts sink parameters proactively, rather than reactively. This could be particularly effective for systems with recurring traffic patterns.
4. Hybrid Approach:
Combine multiple strategies, such as using adaptive batching for high-traffic scenarios and immediate flushing for low-traffic periods, switching between modes based on observed patterns.
| Dynamic Adjustment of flush_timeout and bulk_size for OpenSearch Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/5326/comments | 0 | 2025-01-13T16:15:22Z | 2025-01-14T20:39:36Z | https://github.com/opensearch-project/data-prepper/issues/5326 | 2,784,494,647 | 5,326 |
[
"opensearch-project",
"data-prepper"
] | **Setup**:
Otel Agents -> Otel collector -> Jaeger / DataPrepper -> Opensearch -> OpensearchDashboards
**Versions**:
Opensearch Helm Chart version: 2.27.1, appVersion: 2.18.0
Opensearch-Dashboards Helm Chart version: 2.25.0, appVersion: 2.18.0
Jaeger Helm Chart version: 3.3.3, appVersion: 1.53.0
DataPrepper Helm Chart version: 0.1.0, appVersion: 2.8.0
**Describe the issue**:
I have a setup with instrumented applications using OpenTelemetry (Otel) agents, which push traces to an Otel collector. The Otel collector sends data to both Jaeger and DataPrepper. However, I am noticing a difference in the behavior of the same traces when viewed in OpenSearch Dashboards depending on the data source selected (Jaeger vs. DataPrepper).
Specifically, when I select DataPrepper as the data source, I do not see the entire trace being marked as a trace with errors, and the errors are not displayed on the dashboard. In contrast, when using Jaeger as the data source, the errors are correctly visualized, and the entire trace is marked as an "error trace" if any span within the trace contains an error.
**Configuration**:
Jaeger:
```
jaeger:
agent:
enabled: false
provisionDataStore:
cassandra: false
elasticsearch: false
collector:
enabled: true
annotations: {}
image:
registry: ""
repository: jaegertracing/jaeger-collector
tag: ""
digest: ""
envFrom: []
cmdlineParams: {}
basePath: /
replicaCount: 1
service:
otlp:
grpc:
name: "otlp-grpc"
port: 4317
http:
name: "otlp-http"
port: 4318
serviceAccount:
create: true
storage:
type: elasticsearch
elasticsearch:
scheme: http
host: opensearch-cluster-master.opensearch-otel.svc.cluster.local
port: 9200
anonymous: true
usePassword: false
- name: SPAN_STORAGE_TYPE
value: "opensearch"
- name: ES_TAGS_AS_FIELDS_ALL
value: "true"
tls:
enabled: false
```
DataPrepper:
```
config:
otel-trace-pipeline:
delay: "1000"
source:
otel_trace_source:
ssl: false
buffer:
bounded_blocking:
buffer_size: 10240
batch_size: 160
sink:
- pipeline:
name: "raw-traces-pipeline"
- pipeline:
name: "otel-service-map-pipeline"
raw-traces-pipeline:
source:
pipeline:
name: "otel-trace-pipeline"
buffer:
bounded_blocking:
buffer_size: 10240
batch_size: 160
processor:
- otel_trace_raw:
- otel_trace_group:
hosts: [ "http://opensearch-cluster-master:9200" ]
insecure: true
sink:
- opensearch:
hosts: [ "http://opensearch-cluster-master:9200" ]
insecure: true
index_type: trace-analytics-raw
otel-service-map-pipeline:
delay: "1000"
source:
pipeline:
name: "otel-trace-pipeline"
buffer:
bounded_blocking:
buffer_size: 10240
batch_size: 160
processor:
- service_map_stateful:
window_duration: 300
sink:
- opensearch:
hosts: [ "http://opensearch-cluster-master:9200" ]
insecure: true
index_type: trace-analytics-service-map
index: otel-v1-apm-span-%{yyyy.MM.dd}
#max_retries: 20
bulk_size: 4
```
**Relevant Logs or Screenshots**:
With the DataPrepper source: the error is present in the span, but the whole trace is not marked as an error, and no statistics are observed.
<img width="1376" alt="Screenshot 2024-12-24 at 16 31 59" src="https://github.com/user-attachments/assets/1abbe9df-4f56-44fc-b715-3d315add7a3d" />
<img width="1540" alt="Screenshot 2024-12-24 at 16 30 49" src="https://github.com/user-attachments/assets/bfe2c24b-7239-4793-a075-e46edf460899" />
Here is the Jaeger source: the error is observed in the span and the whole trace is marked with an error (in the top right corner, next capture).
<img width="1392" alt="Screenshot 2024-12-24 at 16 31 43" src="https://github.com/user-attachments/assets/50255e13-3a06-4a4a-b237-a9e87e900d6d" />
<img width="1551" alt="Screenshot 2024-12-24 at 16 31 07" src="https://github.com/user-attachments/assets/ffe7e720-b5e1-4942-be87-3654f6c1d089" />
Please share your suggestions on how to fix it. TraceID is the same for both cases.
Thanks
| [BUG] Opensearch trace not marking by Error at the parent level | https://api.github.com/repos/opensearch-project/data-prepper/issues/5325/comments | 0 | 2025-01-13T10:07:31Z | 2025-01-14T20:38:22Z | https://github.com/opensearch-project/data-prepper/issues/5325 | 2,783,526,632 | 5,325 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Allow multiple keys to be deleted under different conditions when using `delete_entries` processor
**Describe the solution you'd like**
Modify the `delete_entries` processor to support a new configuration that allows multiple `with_keys` and `delete_when` combinations, as shown below. The existing configuration is still supported; the new `entries` option is additive.
```yaml
processor:
  - delete_entries:
      entries:
        - with_keys: ["key1", "key2"]
          delete_when: "condition1"
        - with_keys: ["key3", "key4"]
          delete_when: "condition2"
        - with_keys: ["key5", "key6", "key7"]
          delete_when: "condition3"
```
| Support multiple delete_when condition in delete_entries processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5315/comments | 1 | 2025-01-08T17:12:56Z | 2025-04-21T15:31:26Z | https://github.com/opensearch-project/data-prepper/issues/5315 | 2,775,906,907 | 5,315 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The palantir gradle docker plugin is end of life, see https://github.com/palantir/gradle-docker :
> This repo is on life support only - although we will keep it working, no new features are accepted;
While they still do releases with updated dependency versions, we don't know how long this will last.
**Describe the solution you'd like**
Research alternative plugins that are better maintained. Here is the result of a quick search:
* https://github.com/lamba92/gradle-docker-plugin
* https://github.com/gesellix/gradle-docker-plugin
* https://github.com/rcw3bb/simple-docker
**Describe alternatives you've considered (Optional)**
Continue using https://github.com/palantir/gradle-docker and hope it remains maintained.
| Use a different gradle plugin for docker | https://api.github.com/repos/opensearch-project/data-prepper/issues/5313/comments | 2 | 2025-01-08T13:04:59Z | 2025-01-21T21:04:26Z | https://github.com/opensearch-project/data-prepper/issues/5313 | 2,775,349,896 | 5,313 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some use-cases, such as GenAI, could be solved through running external programs. For example, to run data from Data Prepper into a Python program running LangChain, we could send events in and then enrich those events with the output from the program.
**Describe the solution you'd like**
Create a `pipe` processor. This processor will call an external command with arguments to modify events.
It operates similarly to a Linux pipe (really two pipes). It would put each event on its own line as JSON into the program's `stdin`, then read from `stdout` to get changes to merge back into the event.
Conceptually it is similar to `data-prepper | command | data-prepper`
```
processor:
- pipe:
command: python3
arguments:
- /path/to/my/script
```
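A minimal sketch of the plumbing such a processor could use (not an actual implementation — it batches for simplicity, and a production version would stream to avoid pipe-buffer deadlocks and merge the returned JSON back into the event):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.util.ArrayList;
import java.util.List;

// Sketch of the pipe mechanics: write one JSON event per line to the child
// process's stdin, then read one transformed JSON line back per event.
class PipeRunner {
    static List<String> pipeThrough(final List<String> jsonLines, final String... command) throws Exception {
        final Process process = new ProcessBuilder(command).start();
        final List<String> output = new ArrayList<>();
        try (BufferedWriter childStdin = new BufferedWriter(
                     new OutputStreamWriter(process.getOutputStream()));
             BufferedReader childStdout = new BufferedReader(
                     new InputStreamReader(process.getInputStream()))) {
            for (final String line : jsonLines) {
                childStdin.write(line);
                childStdin.newLine();
            }
            childStdin.close(); // EOF tells the child the batch is complete
            String line;
            while ((line = childStdout.readLine()) != null) {
                output.add(line);
            }
        }
        process.waitFor();
        return output;
    }
}
```

For example, `pipeThrough(events, "python3", "/path/to/my/script")` would match the configuration above; a script that echoes its input unchanged behaves exactly like `cat`.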
**Describe alternatives you've considered (Optional)**
The AWS Lambda processor is quite similar. However, this requires the use of AWS Lambda and does not work for fully open-source use-cases.
I also considered creating both a Python processor and a LangChain processor. But, a general `pipe` processor would be more generic. And creating processors for these specific tools would not have much value if they don't already include the dependencies, but then we may get into some dependency issues.
**Additional context**
N/A
| Provide a pipe processor for arbitrary execution of input and output | https://api.github.com/repos/opensearch-project/data-prepper/issues/5312/comments | 0 | 2025-01-07T19:41:02Z | 2025-01-08T19:31:43Z | https://github.com/opensearch-project/data-prepper/issues/5312 | 2,773,599,907 | 5,312 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Many users enforce policies that Docker containers should not run as the root user. There is no reason to run as root by default.
**Describe the solution you'd like**
Create a user and set `USER` in Dockerfile.
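For example, assuming a Debian-based base image (the `groupadd`/`useradd` flags differ on Alpine), the Dockerfile could add something like:

```dockerfile
# Create an unprivileged user and group, then drop root for runtime.
RUN groupadd --system data-prepper && \
    useradd --system --gid data-prepper --no-create-home data-prepper
USER data-prepper
```

Writable paths (logs, buffer/state directories) would also need to be `chown`ed to this user or mounted as volumes.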
**Additional context**
A sample warning from a k8s deploy is like this:
> policy require-run-as-nonroot/run-as-non-root fail: validation error: Running as root is not allowed. Either the field spec.securityContext.runAsNonRoot must be set to `true`, or the fields spec.containers[*].securityContext.runAsNonRoot, spec.initContainers[*].securityContext.runAsNonRoot, and spec.ephemeralContainers[*].securityContext.runAsNonRoot must be set to `true`. rule run-as-non-root[0] failed at path /spec/securityContext/runAsNonRoot/ rule run-as-non-root[1] failed at path /spec/containers/0/securityContext/runAsNonRoot/ | [docker] Run as non root | https://api.github.com/repos/opensearch-project/data-prepper/issues/5311/comments | 1 | 2025-01-07T14:54:23Z | 2025-01-15T15:47:44Z | https://github.com/opensearch-project/data-prepper/issues/5311 | 2,773,041,327 | 5,311 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a Data Prepper user, I would like to have an rds source to load existing data and stream change events from RDS MySQL databases.
**Describe the solution you'd like**
For export (loading existing data), we can create a snapshot, export it to S3 and read the data from S3
For stream (streaming change events), we can create a Postgres logical replication stream to receive change events.
**Describe alternatives you've considered (Optional)**
Run SQL queries periodically through a JDBC driver to load existing and incremental data from the source database.
**Additional context**
Relevant issue for MySQL: https://github.com/opensearch-project/data-prepper/issues/4561
## Tasks
- [x] Export implementation
- [x] Stream implementation
- [x] Checkpointing in both export and stream
- [x] Secret rotation support
- [x] Add E2E acknowledge support
- [x] Add data type mapping
- [x] Add plugin metrics
- [ ] Add aggregate metrics
- [ ] Add integration tests
| Support AWS Aurora/RDS PostgreSQL as source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5309/comments | 0 | 2025-01-06T14:42:12Z | 2025-04-17T14:59:36Z | https://github.com/opensearch-project/data-prepper/issues/5309 | 2,770,807,694 | 5,309 |
[
"opensearch-project",
"data-prepper"
] | Issue:
We're implementing condition-based routing for trace data to create dynamic indexes (multiple sinks) based on one of the fields present in the dataset (tenant_name). The conditional routing works for traces, but not for the service-map data. The service-map template doesn't contain additional fields, and adding custom fields is also not possible. As a result, the service-map data does not go to the relevant dynamic indexes like the trace data does, which causes issues when stitching the data together at the user-interface level.
Reproduce:
1. Configure multiple sinks for trace & service-map data depending on the tenant_name field (tenant1, tenant2, tenant3 ... tenantN).
2. Observe that the service-map data does not follow the same routing as the trace data.
**Expected behavior**
The trace & service-map data need to follow the similar kind of dynamic routing.
Config: pipelines.yaml for trace & service-map data
################################################################################
```
traces-entry-pipeline:
workers: 1
delay: "100"
source:
otel_trace_source:
ssl: false
buffer:
bounded_blocking:
route:
- tenant1_traces_svcmaps: '/attributes/span.attributes.tenant_name == "tenant1"'
- tenant2_traces_svcmaps: '/attributes/span.attributes.tenant_name == "tenant2"'
- tenant3_traces_svcmaps: '/attributes/span.attributes.tenant_name == "tenant3"'
sink:
- stdout:
- pipeline:
name: "traces-svcmaps-pipeline-t1"
routes: [tenant1_traces_svcmaps]
- pipeline:
name: "traces-svcmaps-pipeline-t2"
routes: [tenant2_traces_svcmaps]
- pipeline:
name: "traces-svcmaps-pipeline-t3"
routes: [tenant3_traces_svcmaps]
traces-svcmaps-pipeline-t1:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-entry-pipeline"
buffer:
bounded_blocking:
sink:
- stdout:
- pipeline:
name: "trace-raws-pipeline-t1"
- pipeline:
name: "service-maps-pipeline-t1"
traces-svcmaps-pipeline-t2:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-entry-pipeline"
buffer:
bounded_blocking:
sink:
- stdout:
- pipeline:
name: "trace-raws-pipeline-t2"
- pipeline:
name: "service-maps-pipeline-t2"
traces-svcmaps-pipeline-t3:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-entry-pipeline"
buffer:
bounded_blocking:
sink:
- stdout:
- pipeline:
name: "trace-raws-pipeline-t3"
- pipeline:
name: "service-maps-pipeline-t3"
trace-raws-pipeline-t1:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-svcmaps-pipeline-t1"
buffer:
bounded_blocking:
processor:
- otel_traces:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
# cert: "/usr/share/data-prepper/opensearch.crt"
ssl_verification_enabled: false
insecure: true
username:
password:
index: acn_tenant1_traces-%{yyyy.MM.dd} # Dynamically creates index based on route
service-maps-pipeline-t1:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-svcmaps-pipeline-t1"
buffer:
bounded_blocking:
processor:
- service_map:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
# cert: "/usr/share/data-prepper/opensearch.crt"
ssl_verification_enabled: false
insecure: true
username:
password:
index: acn_tenant1_servicemap-%{yyyy.MM.dd} # Dynamically creates index based on route
trace-raws-pipeline-t2:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-svcmaps-pipeline-t2"
buffer:
bounded_blocking:
# buffer_size: 128 # 10240
# batch_size: 8 # 160
processor:
- otel_traces:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
# cert: "/usr/share/data-prepper/opensearch.crt"
ssl_verification_enabled: false
insecure: true
username:
password:
index: acn_tenant2_traces-%{yyyy.MM.dd} # Dynamically creates index based on route
service-maps-pipeline-t2:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-svcmaps-pipeline-t2"
buffer:
bounded_blocking:
processor:
- service_map:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
# cert: "/usr/share/data-prepper/opensearch.crt"
ssl_verification_enabled: false
insecure: true
username:
password:
index: acn_tenant2_servicemap-%{yyyy.MM.dd} # Dynamically creates index based on route
trace-raws-pipeline-t3:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-svcmaps-pipeline-t3"
buffer:
bounded_blocking:
processor:
- otel_traces:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
# cert: "/usr/share/data-prepper/opensearch.crt"
ssl_verification_enabled: false
insecure: true
username:
password:
index: acn_tenant3_traces-%{yyyy.MM.dd} # Dynamically creates index based on route
service-maps-pipeline-t3:
workers: 1
delay: "100"
source:
pipeline:
name: "traces-svcmaps-pipeline-t3"
buffer:
bounded_blocking:
processor:
- service_map:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
# cert: "/usr/share/data-prepper/opensearch.crt"
ssl_verification_enabled: false
insecure: true
username:
password:
index: acn_tenant3_servicemap-%{yyyy.MM.dd} # Dynamically creates index based on route
```
################################################################################
| Conditional Routing for service-map data is not working. | https://api.github.com/repos/opensearch-project/data-prepper/issues/5280/comments | 7 | 2024-12-27T16:25:33Z | 2025-02-03T08:35:00Z | https://github.com/opensearch-project/data-prepper/issues/5280 | 2,761,064,192 | 5,280 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `add_entries` processor `add_when` is not working when checking for fields inside `log.attributes` or `resource.attributes`. It is working when I check for a top-level field such as `severityText`.
I just set up the Data Prepper Docker container as described [here](https://opensearch.org/docs/latest/data-prepper/getting-started/)
```bash
docker run --name data-prepper -p 2021:2021 -v /${PWD}/pipelines.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml opensearchproject/data-prepper:latest
```
This is my data prepper config:
```yaml
otel-opensearch-pipeline:
workers: 1
delay: "5000"
source:
otel_logs_source:
ssl: false
port: 2021
#compression: gzip
processor:
- add_entries:
entries:
- key: "add_entry_test"
value: "done"
add_when: /severityText == "Info"
- key: "add_entry_dot_test"
value: done
add_when: /log.attributes.foo == "bar"
- date:
from_time_received: true
destination: "@timestamp"
sink:
- stdout:
```
Also, on the same machine I have set up `opentelemetry-collector-contrib:0.116.1` to receive OTEL logs and forward them to Data Prepper
```bash
docker run -p 4317:4317 -v $(pwd)/config.yaml:/etc/otelcol-contrib/config.yaml otel/opentelemetry-collector-contrib:0.116.1
```
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
batch:
exporters:
otlp:
endpoint: "myhost:2021"
tls:
insecure: true
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlp]
```
I am using [Telemetry generator for OpenTelemetry](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/cmd/telemetrygen) to generate OTEL logs
```bash
./telemetrygen logs --body "2024-12-12:00:00:00 INFO This is a test message" --otlp-attributes host.name=\"mydevhost\" --telemetry-attributes foo=\"bar\" --trace-id ae87dadd90e9935a4bc9660628efd569 --span-id 5828fa4960140870 --duration 1s --otlp-insecure
```
__OUTPUT__
```json
{
"traceId": "ae87dadd90e9935a4bc9660628efd569",
"spanId": "5828fa4960140870",
"severityText": "Info",
"flags": 0,
"time": "2024-12-27T10:01:38.195840064Z",
"severityNumber": 9,
"droppedAttributesCount": 0,
"serviceName": null,
"body": "2024-12-12:00:00:00 INFO This is a test message",
"observedTime": "1970-01-01T00:00:00Z",
"schemaUrl": "https://opentelemetry.io/schemas/1.4.0",
"add_entry_test": "done",
"@timestamp": "2024-12-27T10:01:38.330Z",
"log.attributes.app": "server",
"log.attributes.foo": "bar",
"resource.attributes.host@name": "mydevhost"
}
```
**Expected behavior**
I am expecting a new field `add_entry_dot_test` to be added based on the condition `add_when: /log.attributes.foo == "bar"`. I am getting the new field `add_entry_test` based on the condition `add_when: /severityText == "Info"`. It looks like Data Prepper's expression syntax is unable to read fields inside `log.attributes` or `resource.attributes`.
**Environment (please complete the following information):**
- OS: [Ubuntu 22.04.5 LTS]
- Docker [27.3.1]
- Data Prepper [Latest]
| [BUG] add_entries processor add_when is not working when checking for fields within log.attributes or resource.attributes | https://api.github.com/repos/opensearch-project/data-prepper/issues/5279/comments | 7 | 2024-12-27T10:30:23Z | 2025-01-09T15:05:03Z | https://github.com/opensearch-project/data-prepper/issues/5279 | 2,760,706,638 | 5,279 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When processing files larger than 2GB, the system throws an error indicating that the numeric value is out of range for an integer.
**To Reproduce**
Steps to reproduce the behavior:
1. Upload a file larger than 2GB to the specified S3 bucket.
2. Ensure that the S3EventBridgeNotification is configured to trigger when a new file is uploaded.
3. Once the Data Prepper pipeline is triggered, the error shows up in the logs.
**Expected behavior**
The system should handle files larger than 2GB without throwing an error.
**Additional context**
```
404 [s3-source-sqs-1] ERROR org.opensearch.dataprepper.plugins.source.s3.parser.S3EventBridgeNotificationParser - SQS message with message ID:8947a26b-2500-4859-bfd1-05353e0fc232 has invalid body which cannot be parsed into EventBridgeNotification. Numeric value (2409161669) out of range of int (-2147483648 - 2147483647)
at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 1, column: 523] (through reference chain: org.opensearch.dataprepper.plugins.source.s3.S3EventBridgeNotification["detail"]->org.opensearch.dataprepper.plugins.source.s3.S3EventBridgeNotification$Detail["object"]->org.opensearch.dataprepper.plugins.source.s3.S3EventBridgeNotification$Object["size"])
```
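The failing value in the log sits just above Java's signed 32-bit range, which suggests the `size` field in the `S3EventBridgeNotification$Object` POJO is declared as an `int` rather than a `long` (my reading of the stack trace, not a confirmed diagnosis). A quick check of the boundary:

```python
# "size" value taken from the error log vs. Java's integer ranges.
JAVA_INT_MAX = 2**31 - 1        # 2147483647
JAVA_LONG_MAX = 2**63 - 1
failing_size = 2409161669       # ~2.4 GB object size from the SQS message

print(failing_size > JAVA_INT_MAX)    # True: cannot fit in a Java int
print(failing_size <= JAVA_LONG_MAX)  # True: fits comfortably in a long
```

If that reading is right, declaring the field as `long` in the notification model would be the likely fix.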
| [BUG] Error Processing Files Larger Than 2GB Due to Integer Overflow in Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/5276/comments | 4 | 2024-12-19T23:12:08Z | 2025-02-11T19:38:44Z | https://github.com/opensearch-project/data-prepper/issues/5276 | 2,751,592,184 | 5,276 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Copied content from my [comment](https://github.com/opensearch-project/data-prepper/issues/2268#issuecomment-2553468283).
When sending data without the observedTimeUnixNano and timeUnixNano [fields](https://github.com/open-telemetry/opentelemetry-proto/blob/2bd940b2b77c1ab57c27166af21384906da7bb2b/opentelemetry/proto/logs/v1/logs.proto#L139-L159) directly to Data Prepper, it will create documents in OpenSearch with the time and observedTime fields set to epoch 0 (Jan 1, 1970).
This makes logs very hard to find; sometimes users are under the impression that the logs weren't ingested at all, since they would only check a recent time range.
Based on the spec of the [observedTime](https://opentelemetry.io/docs/specs/otel/logs/data-model/#field-observedtimestamp) field, it "is the time when OpenTelemetry’s code observed the event measured by the clock of the OpenTelemetry code", so Data Prepper should set it to the current time.
Based on the spec of the [time](https://opentelemetry.io/docs/specs/otel/logs/data-model/#field-timestamp) field, setting it to epoch 0 seems wrong. Either the field should be dropped (because it is optional) or set to the value of observedTime. The latter would make sense, since the spec mentions: "Use Timestamp if it is present, otherwise use ObservedTimestamp".
**To Reproduce**
First, create a "otel-log-without-time.json" file, e.g.:
```
{
"resourceLogs": [
{
"resource": {
"attributes": [
{
"key": "service.name",
"value": { "stringValue": "my-application" }
}
],
"droppedAttributesCount": 0
},
"scopeLogs": [
{
"scope": {
"name": "scopeName",
"version": "version1"
},
"logRecords": [
{
"severityNumber": 9,
"severityText": "Info",
"body": { "stringValue": "This is a log message" },
"attributes": [],
"droppedAttributesCount": 0,
"traceId": "08040201000000000000000000000000",
"spanId": "0102040800000000"
}
],
"schemaUrl": "foo"
}
],
"schemaUrl": "bar"
}
]
}
```
Second, send it via grpcurl to Data Prepper, e.g.:
```
grpcurl -insecure -d @ < otel-log-without-time.json <dp_endpoint>:<dp_otel_log_port> opentelemetry.proto.collector.logs.v1.LogsService/Export
```
The Data Prepper log pipeline looks something like this (note that the `proto_reflection_service` is enabled for grpcurl):
```
logs-pipeline:
source:
otel_logs_source:
ssl: false
proto_reflection_service: true
buffer:
bounded_blocking:
buffer_size: 12800
batch_size: 200
processor:
sink:
- opensearch:
hosts: [ "<opensearch_endpoint>" ]
insecure: true
username: <os_username>
password: <os_user_password>
index: logs-otel-v1-%{yyyy.MM.dd}
```
Resulting OpenSearch doc:
```
{
"_index": "logs-otel-v1-2024.12.19",
"_type": "_doc",
"_id": "GB2n3pMB0Mc1_i72fE8Y",
"_version": 1,
"_score": null,
"_source": {
"traceId": "d3cd38d36d35d34d34d34d34d34d34d34d34d34d34d34d34",
"spanId": "d35d36d38d3cd34d34d34d34",
"severityText": "Info",
"flags": 0,
"time": "1970-01-01T00:00:00Z",
"severityNumber": 9,
"droppedAttributesCount": 0,
"serviceName": "my-application",
"body": "This is a log message",
"observedTime": "1970-01-01T00:00:00Z",
"schemaUrl": "bar",
"instrumentationScope.name": "scopeName",
"resource.attributes.service@name": "my-application",
"instrumentationScope.version": "version1"
},
"fields": {
"observedTime": [
"1970-01-01T00:00:00.000Z"
],
"time": [
"1970-01-01T00:00:00.000Z"
]
},
"sort": [
0
]
}
```
**Expected behavior**
The OpenSearch document should have the `observedTime` set to the current time when it was ingested.
Optionally, it could be considered to set the `time` to the `observedTime`, in case it does not exist. This allows to always use the `time` field in an OpenSearch index pattern as _time field_.
Alternatively, users could achieve this behavior with a Data Prepper processor.
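As a sketch of that processor-based workaround (my assumption about how it could look, not a tested configuration), the `date` processor already used elsewhere in this report can write the ingestion time into `observedTime`:

```yaml
processor:
  - date:
      from_time_received: true
      destination: "observedTime"
```

This overwrites `observedTime` unconditionally; overwriting only when the field is at epoch 0 would need a conditional, which I have not verified the `date` processor supports.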
**Screenshots**

=> This happens when no time field is contained. In this case the `logs-otel-v1-*` index pattern uses `time` as the _time field_. All affected logs land at epoch 0 and are therefore hard to find.
**Environment (please complete the following information):**
- DP version: 2.9.0
| [BUG] Set observedTime and time to current time instead of epoch 0 (Jan 1, 1970) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5275/comments | 1 | 2024-12-19T15:20:22Z | 2025-01-06T17:26:03Z | https://github.com/opensearch-project/data-prepper/issues/5275 | 2,750,669,781 | 5,275 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We are planning to leverage the HTTP sink to index records into ClickHouse. I see the HTTP sink is not enabled in the distribution, and I understand that the code is not production ready.
**Describe the solution you'd like**
1. Configurable retryable status codes.
2. Configurable connect timeout, max conns, request timeout, socket timeout, keepalive, etc., to PoolingHttpClientConnectionManager
3. Connection leak needs to be fixed by making use of closable response
4. Http compression for outgoing requests.
5. DLQ as optional setting
6. Removal of http client building from hot path.
7. A final flush after iterating over the records is required; it is currently missing.
**Describe alternatives you've considered (Optional)**
We built custom Data Prepper distributions that include all of the above.
| When will Production ready Http Sink be included in distribution? | https://api.github.com/repos/opensearch-project/data-prepper/issues/5270/comments | 4 | 2024-12-17T15:22:38Z | 2024-12-20T11:35:52Z | https://github.com/opensearch-project/data-prepper/issues/5270 | 2,745,220,926 | 5,270 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper's documentation says that `consumer_strategy` can be set to either `polling` or `fan-out` in the Kinesis source config. If `polling` is given, a polling config must be given too. However, even when the polling config is provided and `consumer_strategy` is set to `polling`, only the Kinesis fan-out publisher is initialized.
**To Reproduce**
Steps to reproduce the behavior:
``` yaml
source:
kinesis:
streams:
- stream_name: "test_stream"
initial_position: LATEST
compression: gzip
codec:
ndjson:
consumer_strategy: "polling"
records_to_accumulate: 100
polling:
max_polling_records: 10000
idle_time_between_reads: 1s
```
**Expected behavior**
The Kinesis source should make use of the polling retrieval strategy.
**Additional context**
Passing a newly created retrieval config during KCL scheduler instantiation [here](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/kinesis-source/src/main/java/org/opensearch/dataprepper/plugins/kinesis/source/KinesisService.java#L206) causes this issue.
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `insecure` option should override `cert`, not the other way around.
**To Reproduce**
Steps to reproduce the behavior:
1. Set up an OpenSearch instance with demo certificates
2. Generate a TLS certificate pem file for the configuration which does not match the hostname of OpenSearch
3. Configure prepper with the opensearch sink having both `cert` and `insecure` in the config:
```yaml
sink:
opensearch:
hosts: ["https://localhost:9200"]
cert: path/to/wrong/hostname/cert.pem
insecure: true
```
4. Observe that prepper cannot connect to Opensearch, it complains about hostname validation, even if `insecure` is set.
5. Remove the `cert` line and try again
6. Now prepper can connect to your opensearch
**Expected behavior**
The documentation says the following for the two settings:
> **cert (Optional)** : CA certificate that is pem encoded. Accepts both .pem or .crt. This enables the client to trust the CA that has signed the certificate that the OpenSearch cluster is using. Default is null.
> **insecure (Optional)**: A boolean flag to turn off SSL certificate verification. If set to true, CA certificate verification will be turned off and insecure HTTP requests will be sent. Default to false.
These two are by definition mutually exclusive. But the documentation does not describe their relationship or the fact that `cert` will override `insecure`. The principle of least surprise would be that `insecure: true` overrides the presence of `cert`, not the other way around. You'd also expect a warning in the log whenever both are configured.
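To make the expected precedence concrete, here is an illustrative sketch in plain Python (pseudologic only, not Data Prepper's actual code):

```python
def resolve_tls_mode(cert, insecure):
    """Expected precedence: `insecure` overrides `cert`."""
    if insecure:
        if cert:
            # Surface the conflicting configuration instead of silently ignoring it.
            print("warning: 'cert' is ignored because 'insecure' is true")
        return "no-verification"
    if cert:
        return f"verify-against:{cert}"
    return "system-default"

print(resolve_tls_mode("path/to/wrong/hostname/cert.pem", insecure=True))
# prints a warning, then: no-verification
```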
**Screenshots**
Code that disregards `insecure` flag:
https://github.com/opensearch-project/data-prepper/blob/956a89a4bc2057b4f13ee424d27f62c4bdb6fc4a/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/ConnectionConfiguration.java#L276-L283
**Environment (please complete the following information):**
- Kubernetes
- Opensearch helm chart
- Data prepper helm chart
- Opensearch demo certificates
**Additional context**
The k8s service has a different name than the CN in the auto-generated demo certificates. Since Data Prepper is configured to talk to the service name, there is a hostname verification error when using the `pem` cert from OpenSearch.
| [BUG][opensearch sink] Config option 'insecure' not honored when 'cert' is configured | https://api.github.com/repos/opensearch-project/data-prepper/issues/5267/comments | 2 | 2024-12-17T09:29:06Z | 2025-01-10T14:32:29Z | https://github.com/opensearch-project/data-prepper/issues/5267 | 2,744,391,858 | 5,267 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
DataPrepper doesn't support changing its configuration dynamically. Any change to the config requires Data Prepper to be stopped and restarted. This is not very convenient.
**Describe the solution you'd like**
In most cases we should be able to re-initialize Data Prepper components (like source, processor, sink) with a new config. If a new processor is added or an existing processor is deleted, the change can be applied without having to shut down and restart DataPrepper.
There may still be some cases where dynamic reconfiguration is not possible, in which case shutdown and restart is the only option.
Here are some simple cases where shutdown/restart of the entire pipeline may not be necessary
1. Changing a config option of a source, sink, or processor. For example, the `max_retries` option of the OpenSearch sink. We should be able to change this value without causing any disruption to pipeline operation.
2. Changing the `_when` condition of a processor. Since the condition is evaluated for each event, it should be easy to replace the old condition with the new one without any disruption to pipeline operation.
3. Fixing a grok pattern. This would require re-initializing the grok processor object so that `compileMatchPatterns` is done again. This may disrupt traffic a little because we need to pause the workers from consuming events from the buffer and then re-initialize the grok processor. This type of dynamic reconfiguration is more disruptive than the previous two examples, but still a much better option than shutting down the pipeline and restarting.
4. In the case of push-based sources, it may not be possible to support dynamic reconfiguration. For example, changing `max_message_size` in the http source configuration might not be possible dynamically because it would require temporarily stopping the http source. This may result in http clients getting 4xx/5xx errors. So, in this case, the only option may be to shut down and restart Data Prepper.
5. In the case of pull-based sources, it should be possible to apply even source config changes dynamically.
Overall, the suggestion is to provide a new API to DataPrepperServer, something like `/update_config`, which sends a yaml file to the server. DataPrepper will do some validation on the yaml to see whether the dynamic update can be performed. If it cannot, a rejection error should be returned. If it can, success is returned after performing the update. If some error occurs during the reconfiguration, a failure error should be returned and the pipeline config should be restored to the previous configuration; it should not be left partially updated.
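The validate-then-apply-with-rollback flow described above could be sketched as follows (all names here are hypothetical, for illustration only):

```python
def update_config(pipeline, proposed, validate, apply_fn):
    """Validate a proposed config, apply it, and restore the old one on failure."""
    errors = validate(proposed)
    if errors:
        return ("rejected", errors)
    previous = pipeline["config"]
    try:
        apply_fn(pipeline, proposed)
        return ("updated", None)
    except Exception as exc:
        pipeline["config"] = previous  # restore; never leave a partial update
        return ("failed", str(exc))

# Trivial stand-in validate/apply callbacks:
pipe = {"config": {"workers": 1}}
result = update_config(pipe, {"workers": 2},
                       validate=lambda c: [],
                       apply_fn=lambda p, c: p.update(config=c))
print(result)  # ('updated', None)
print(pipe)    # {'config': {'workers': 2}}
```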
| Support dynamic config update | https://api.github.com/repos/opensearch-project/data-prepper/issues/5261/comments | 1 | 2024-12-13T01:18:11Z | 2024-12-17T20:51:21Z | https://github.com/opensearch-project/data-prepper/issues/5261 | 2,737,173,954 | 5,261 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, all OTEL sources (the OTEL trace source, OTEL metric source, and OTEL log source) perform some transformations while creating events from OTEL data.
1. In all sources, the keys are created by replacing "." with "@" (dedotting)
2. In all sources, the attributes are "flattened" by moving them to the root of the event instead of nesting under "attributes"
The dedotting is done to make the data compatible with OpenSearch.
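To make the second transformation concrete, it roughly amounts to the following (a simplified sketch modeled on the output seen in OpenSearch documents, e.g. `resource.attributes.host@name`; this is not the actual source code):

```python
def flatten_attributes(attributes, prefix):
    """Move attributes to the event root under a dotted prefix,
    de-dotting each attribute key ('.' -> '@')."""
    return {
        f"{prefix}.{key.replace('.', '@')}": value
        for key, value in attributes.items()
    }

print(flatten_attributes({"service.name": "checkout"}, "resource.attributes"))
# {'resource.attributes.service@name': 'checkout'}
```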
**Describe the solution you'd like**
I think the transformations should be outside of the OTEL sources because the sink is not always OpenSearch. Also, users are not given any option to skip the transformations. We should remove the transformations from the OTEL sources and let users do this explicitly via a processor or an OpenSearch sink option
```
processor:
- opensearch_compatibility_transform:
flatten_attributes: true
dedotting: true
```
or Alternatively
```
sink:
- opensearch:
opensearch_compatibility_transformation: true
```
| OTEL sources should create events without any transformations | https://api.github.com/repos/opensearch-project/data-prepper/issues/5259/comments | 11 | 2024-12-12T19:54:26Z | 2025-03-20T00:24:52Z | https://github.com/opensearch-project/data-prepper/issues/5259 | 2,736,760,269 | 5,259 |
[
"opensearch-project",
"data-prepper"
] | Since migrating from 2.7.0 to 2.10.1, the index ID has stayed at the same number (000056) and the index is no longer rolling over. You can see below that indexes previously rolled over at around 500 MB, but this latest one is now at 108.5 GB. The number of documents used to be around 300,000, but is now more than 73 million. Did I miss some configuration?
```
| otel-v1-apm-span-000056 | green | Yes | open | 108.5gb | 54.1gb | 73591140 | 4806948 | 1 | 1
| otel-v1-apm-span-000055 | green | Yes | open | 471mb | 235.5mb | 231704 | 24212 | 1 | 1
| otel-v1-apm-span-000054 | green | Yes | open | 478.3mb | 239.1mb | 299849 | 20501 | 1 | 1
| otel-v1-apm-span-000053 | green | Yes | open | 504mb | 252mb | 310190 | 26766 |
..
```
Data prepper config:
```
raw-pipeline:
workers: 2
delay: "3000"
source:
pipeline:
name: "otel-trace-pipeline"
buffer:
bounded_blocking:
buffer_size: 10240
batch_size: 160
processor:
- delete_entries:
with_keys: ['command_args']
- otel_traces:
- otel_trace_group:
hosts: ["https://opensearch-node1:9200"]
username: admin
password: ------
insecure: true
sink:
- opensearch:
hosts: ["https://opensearch-node1:9200"]
index_type: trace-analytics-raw
username: admin
password: --------
insecure: true
```
| OpenSearch sink not rolling over the index after upgrading from 2.7 to 2.10 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5258/comments | 4 | 2024-12-12T19:31:12Z | 2024-12-19T17:07:03Z | https://github.com/opensearch-project/data-prepper/issues/5258 | 2,736,721,324 | 5,258 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.10.2
**BUILD NUMBER**: 90
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/12262492222
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 12262492222: Release Data Prepper : 2.10.2 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5253/comments | 3 | 2024-12-10T18:55:33Z | 2024-12-10T21:09:05Z | https://github.com/opensearch-project/data-prepper/issues/5253 | 2,730,944,270 | 5,253 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some Data Prepper configurations accept byte values in the Data Prepper byte count format; however, some still take values in other formats. For example, the `opensearch` sink's `bulk_size` accepts an integer in megabytes.
As a pipeline author or Data Prepper administrator I am not sure what value to use.
**Describe the solution you'd like**
Use the Data Prepper byte count format consistently for any byte-based configuration.
Existing configurations may need to be replaced so some of these changes may be breaking.
| Support ByteCount for all byte-based configurations | https://api.github.com/repos/opensearch-project/data-prepper/issues/5248/comments | 0 | 2024-12-09T15:46:41Z | 2024-12-10T20:51:17Z | https://github.com/opensearch-project/data-prepper/issues/5248 | 2,727,500,350 | 5,248 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I am ingesting pfSense firewall logs (syslog RFC 3164) with Fluent Bit:
```
[SERVICE]
Flush 1
Parsers_File parsers.conf
[INPUT]
Name syslog
Parser syslog-rfc3164
Listen 0.0.0.0
Port 5140
Mode udp
[OUTPUT]
Name http
Match *
Host data-prepper
Port 2021
URI /log/ingest
HTTP_User admin
HTTP_Passwd admin
```
Here is how Fluent Bit parses the log before sending it to Data Prepper:
https://raw.githubusercontent.com/fluent/fluent-bit/refs/heads/master/conf/parsers.conf
```
[PARSER]
Name syslog-rfc3164
Format regex
Regex /^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$/
Time_Key time
Time_Format %b %d %H:%M:%S
Time_Keep On
```
Here is the Data Prepper pipeline:
```
log-pipeline:
source:
http:
ssl: false
processor:
- date:
from_time_received: true
destination: "@timestamp"
- grok:
patterns_directories: ["/usr/share/data-prepper/patterns"]
match:
message: ['%{PFSENSE_LOG_ENTRY}']
sink:
- opensearch:
hosts: [ "https://opensearch:9200" ]
insecure: true
username: admin
password: Developer@123
index: pfsense
```
Here is the grok pattern:
Originally taken from https://gist.githubusercontent.com/Caligatio/878002ab4aa591747a3dcdbd1101db41/raw/4c0d33b75a6f064dc4b4ae3359fa24d77f2a7fa3/pfsense2-3.grok
I had to make some adjustments because the original was giving me errors when starting Data Prepper.
```
PFSENSE_LOG_ENTRY %{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}?
PFSENSE_LOG_DATA ,%{INT:sub_rule}?,,%{INT:tracker},%{WORD:iface},%{WORD:reason},%{WORD:action},%{WORD:direction},
PFSENSE_IP_SPECIFIC_DATA %{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA}
PFSENSE_IPv4_SPECIFIC_DATA (4:ip_ver),%{BASE16NUM:tos},%{WORD:ecn}?,%{INT:ttl},%{INT:id},%{INT:offset},%{WORD:flags},%{INT:proto_id},%{WORD:proto},
PFSENSE_IPv6_SPECIFIC_DATA (6:ip_ver),%{BASE16NUM:class},%{DATA:flow_label},%{INT:hop_limit},%{WORD:proto},%{INT:proto_id},
PFSENSE_IP_DATA %{INT:length},%{IP:src_ip},%{IP:dest_ip},
PFSENSE_PROTOCOL_DATA %{PFSENSE_TCP_DATA}|%{PFSENSE_UDP_DATA}|%{PFSENSE_ICMP_DATA}|%{PFSENSE_CARP_DATA}|%{PFSENSE_IGMP_DATA}
PFSENSE_TCP_DATA %{INT:src_port},%{INT:dest_port},%{INT:data_length},%{WORD:tcp_flags},%{INT:sequence_number},%{INT:ack_number},%{INT:tcp_window},%{DATA:urg_data},%{DATA:tcp_options}
PFSENSE_UDP_DATA %{INT:src_port},%{INT:dest_port},%{INT:data_length}
PFSENSE_IGMP_DATA datalength=%{INT:data_length}
PFSENSE_ICMP_DATA %{PFSENSE_ICMP_TYPE}%{PFSENSE_ICMP_RESPONSE}
PFSENSE_ICMP_TYPE ((request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply):imcp_type),
PFSENSE_ICMP_RESPONSE %{PFSENSE_ICMP_ECHO_REQ_REPLY}|%{PFSENSE_ICMP_UNREACHPORT}| %{PFSENSE_ICMP_UNREACHPROTO}|%{PFSENSE_ICMP_UNREACHABLE}|%{PFSENSE_ICMP_NEED_FLAG}|%{PFSENSE_ICMP_TSTAMP}|%{PFSENSE_ICMP_TSTAMP_REPLY}
PFSENSE_ICMP_ECHO_REQ_REPLY %{INT:icmp_echo_id},%{INT:icmp_echo_sequence}
PFSENSE_ICMP_UNREACHPORT %{IP:icmp_unreachport_dest_ip},%{WORD:icmp_unreachport_protocol},%{INT:icmp_unreachport_port}
PFSENSE_ICMP_UNREACHPROTO %{IP:icmp_unreach_dest_ip},%{WORD:icmp_unreachproto_protocol}
PFSENSE_ICMP_UNREACHABLE %{GREEDYDATA:icmp_unreachable}
PFSENSE_ICMP_NEED_FLAG %{IP:icmp_need_flag_ip},%{INT:icmp_need_flag_mtu}
PFSENSE_ICMP_TSTAMP %{INT:icmp_tstamp_id},%{INT:icmp_tstamp_sequence}
PFSENSE_ICMP_TSTAMP_REPLY %{INT:icmp_tstamp_reply_id},%{INT:icmp_tstamp_reply_sequence},%{INT:icmp_tstamp_reply_otime},%{INT:icmp_tstamp_reply_rtime},%{INT:icmp_tstamp_reply_ttime}
PFSENSE_CARP_DATA %{WORD:carp_type},%{INT:carp_ttl},%{INT:carp_vhid},%{INT:carp_version},%{INT:carp_advbase},%{INT:carp_advskew}
```
Here is what I get in OpenSearch:

The `message` field is not being parsed.
**To Reproduce**
Steps to reproduce the behavior:
1. Using the sample from https://github.com/opensearch-project/data-prepper/blob/main/examples/log-ingestion/README.md
2. And make the adjustments above.
**Expected behavior**
I expected the message part of the log to be split into fields as specified in the grok pattern.
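As a debugging step (my suggestion, not part of the original report), temporarily replacing the pattern with a catch-all can confirm whether the grok processor runs at all; if a `raw` field then appears in the document, the problem is in the pfSense patterns rather than in the pipeline wiring:

```yaml
processor:
  - grok:
      match:
        message: ['%{GREEDYDATA:raw}']
```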
**Environment (please complete the following information):**
- NAME="Rocky Linux"
- VERSION="9.4 (Blue Onyx)"
- opensearch, dashboard, fluentbit originally taken from:
https://github.com/opensearch-project/data-prepper/blob/main/examples/log-ingestion/README.md
| [BUG] message field not been parsed with Grok | https://api.github.com/repos/opensearch-project/data-prepper/issues/5247/comments | 1 | 2024-12-07T16:44:19Z | 2024-12-07T23:17:21Z | https://github.com/opensearch-project/data-prepper/issues/5247 | 2,724,720,795 | 5,247 |
[
"opensearch-project",
"data-prepper"
] | All plugins should configure the `pluginConfigurationType` attribute in `@DataPrepperPlugin` with a custom POJO configuration class. Some older plugins still use `PluginSettings`. These should be updated to use a POJO configuration class.
For example, the `s3` source does this correctly.
https://github.com/opensearch-project/data-prepper/blob/f6a06a0a43b1912cf9b37a14353e48dc19fb85a0/data-prepper-plugins/s3-source/src/main/java/org/opensearch/dataprepper/plugins/source/s3/S3Source.java#L36
The `opensearch` sink does this using `PluginSettings`, which we do not want.
https://github.com/opensearch-project/data-prepper/blob/1ddebf68f50b730d3e495f50d53ea6fe9129c7b2/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L89
- [ ] Update the OpenSearch sink to use a POJO configuration class.
- [ ] Look for any other plugins that use the `PluginSettings`. | Migrate existing plugins to use POJO configuration classes. | https://api.github.com/repos/opensearch-project/data-prepper/issues/5246/comments | 2 | 2024-12-06T17:19:16Z | 2025-02-24T17:48:25Z | https://github.com/opensearch-project/data-prepper/issues/5246 | 2,723,555,618 | 5,246 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
In order to develop ingestion pipelines for AWS infrastructure component logs (ALB, CloudFront, ...), I am testing my pipelines with files coming from this infrastructure, which are gzip-compressed files.
However, Data Prepper does not support compression in the file source plugin, which adds an extra step of decompressing the files manually.
In general this is a feature that would be welcomed as compressed log files are a common occurrence.
**Describe the solution you'd like**
The `file` source should have a `compression` field, set to `none` by default. When the plugin is started, the input stream that reads the file comes from the `DecompressionEngine` that corresponds to the `compression` field value.
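A sketch of what the proposed configuration could look like (the field name and values are my assumption, mirroring the `compression` option of other sources such as `s3`):

```yaml
source:
  file:
    path: "/var/log/alb/access-log.gz"
    compression: gzip   # proposed field; defaults to none
```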
**Describe alternatives you've considered (Optional)**
N/A
| Support compressed file sources | https://api.github.com/repos/opensearch-project/data-prepper/issues/5245/comments | 2 | 2024-12-05T10:21:48Z | 2024-12-11T17:00:34Z | https://github.com/opensearch-project/data-prepper/issues/5245 | 2,719,990,524 | 5,245 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
If I disable and re-enable the streams on my DynamoDB table, Data Prepper will continue looking for the old stream ARN, which no longer exists, and does not rediscover the new stream ARN.
**To Reproduce**
Steps to reproduce the behavior:
1. Start a DynamoDB pipeline reading from streams
2. Disable and re-enable DynamoDB streams on the table
3. Observe the error from Data Prepper that the stream does not exist
**Expected behavior**
Data Prepper could gracefully handle the case where the stream does not exist, and rediscover the new Stream ARN
**Additional context**
Add any other context about the problem here.
| [BUG] Disabling and Enabling DynamoDB streams requires a restart of Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/5243/comments | 1 | 2024-12-04T17:57:17Z | 2025-03-06T17:34:37Z | https://github.com/opensearch-project/data-prepper/issues/5243 | 2,718,440,618 | 5,243 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I have a pipeline to ingest logs in opensearch, and I use the aggregation processor with the `put_map` action.
At the moment, the only way a group can close with this action is to wait for the `group_duration` to expire.
That means that all records that have been merged but whose group is not yet closed still lives in memory in the data-prepper nodes.
For high-throughput or high-latency pipelines where you have to specify a large `group_duration`, or both, that means a lot of memory will be wasted on already-merged records that are just waiting for the group to expire.
There should be a way to terminate a group and flush the result to the next processor or sink if you know you do not need to wait.
**Describe the solution you'd like**
The solution could work in two steps:
1. Configure an expression to evaluate when the aggregation *action* is executed. This expression could, for instance, evaluate to a tag that is added to the metadata of the aggregation group.
2. Attach an expression to evaluate or a list of tags that must be present on the aggregation group when it is mutated, that if true marks the group for finalization and flushes it immediately.
The pipeline configuration could look like:
```yaml
my_pipeline:
source:
file:
path: somefile.log
processors:
- aggregate:
action:
put_all: {}
identification_keys:
- common_key
tag_on_aggregate: /log_type
terminate_when: hasTags("type_1", "type_2")
```
**Describe alternatives you've considered (Optional)**
Other option: add a `close_when` expression common to all `AggregateAction` that provides the custom expression that guards the closure of the group.
This expression can be evaluated when `AggregateGroupManager.getGroupsToConclude()` is called, so the changes in `AggregateProcessor` are minimal.
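Under this alternative, the pipeline configuration could attach the expression directly to the processor. This is a sketch of the proposed `close_when` option, not existing syntax:

```yaml
processor:
  - aggregate:
      identification_keys:
        - common_key
      action:
        put_all: {}
      # proposed option, evaluated when getGroupsToConclude() is called
      close_when: 'hasTags("type_1", "type_2")'
```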
**Additional context**
The aggregate processor first checks for groups to conclude and then processes the current batch. This logic should be reversed so the events are flushed immediately after the aggregation.
| Provide a way to terminate an aggregation group early in the aggregation processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5240/comments | 4 | 2024-12-03T15:53:04Z | 2024-12-17T09:01:48Z | https://github.com/opensearch-project/data-prepper/issues/5240 | 2,715,396,272 | 5,240 |
[
"opensearch-project",
"data-prepper"
] | **Bug Description**
`opensearchproject/data-prepper` container image incorrectly handles UTF-8 characters when streaming data from DynamoDB to S3 buckets in NDJSON format. Non-ASCII characters are replaced with question marks (?) in the output files.
**Steps to Reproduce**
1. Set up data-prepper using the `opensearchproject/data-prepper` container image
2. Create a DynamoDB table with items containing strings with non-ASCII characters (e.g., Mandarin, Tamil)
3. Configure data-prepper to stream changes from the DynamoDB table to an S3 bucket using NDJSON format
4. Observe the resulting S3 objects
**Actual Behavior**
All non-ASCII characters in the original DynamoDB data are replaced with question marks (?) in the S3 output files.
**Expected Behavior**
All UTF-8 characters, including non-ASCII characters, should be preserved in the output NDJSON files exactly as they appear in the source DynamoDB table.
**Workaround**
Adding the environment variable `LC_ALL=C.UTF-8` to the container configuration resolves the issue. This environment variable should be set by default in the container image to ensure proper UTF-8 handling. | [BUG] UTF-8 Character Encoding Issues in opensearchproject/data-prepper container | https://api.github.com/repos/opensearch-project/data-prepper/issues/5238/comments | 2 | 2024-12-02T20:31:27Z | 2025-02-07T20:23:17Z | https://github.com/opensearch-project/data-prepper/issues/5238 | 2,713,280,192 | 5,238 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
No, but it would lower the surface area for CVEs.
**Describe the solution you'd like**
Stop using Amazon Linux and migrate to Alpine Linux. The Amazon Linux image is huge, with far more than is necessary to run Data Prepper.
**Describe alternatives you've considered (Optional)**
I could build my own container and maintain a fork but this is not desirable
**Additional context**
This would potentially help the image start faster and lower the attack surface from a vulnerability perspective. There is a closed issue regarding this; it looked like someone had challenges building on ARM, and the project declined to build on Alpine because of that.
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I have a nested JSON document that contains an **attributes** field of type map. I want to transform that map into keys and values arrays.
For example,
```
{
"attributes":{
"field1":"value1",
"field2":"value2"
}
}
```
can be processed and the values below extracted from it:
```
"keys":["field1","field2"]
"values":["value1","value2"]
```
**Describe the solution you'd like**
Expose `getKeys` and `getValues` functions in the expression language.
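If these functions existed, a pipeline could populate the arrays with `add_entries`. This is hypothetical syntax — neither `getKeys` nor `getValues` exists today:

```yaml
processor:
  - add_entries:
      entries:
        - key: keys
          value_expression: 'getKeys(/attributes)'
        - key: values
          value_expression: 'getValues(/attributes)'
```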
**Describe alternatives you've considered (Optional)**
No alternatives found.
**Additional context**
Add any other context or screenshots about the feature request here.
| getKeys and getValues function for map type | https://api.github.com/repos/opensearch-project/data-prepper/issues/5210/comments | 2 | 2024-11-21T16:56:27Z | 2024-11-22T08:36:51Z | https://github.com/opensearch-project/data-prepper/issues/5210 | 2,680,215,199 | 5,210 |
[
"opensearch-project",
"data-prepper"
] | Java version: 11
gradle version: Gradle 8.11.1
> Task :data-prepper-pipeline-parser:test
PipelineConfigurationFileReaderTest > getPipelineConfigurationInputStreams_with_a_configuration_file_exists_and_is_not_loadable_should_throw() FAILED
org.opentest4j.AssertionFailedError: Expected java.lang.RuntimeException to be thrown, but nothing was thrown.
151 tests completed, 1 failed | [BUG] Error building proyect | https://api.github.com/repos/opensearch-project/data-prepper/issues/5209/comments | 1 | 2024-11-20T21:30:54Z | 2024-12-09T17:11:14Z | https://github.com/opensearch-project/data-prepper/issues/5209 | 2,677,253,165 | 5,209 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the DynamoDB source will grab up to 150 active shards in one Data Prepper container and continue to hold onto those shards until each shard is closed and the end of its shard iterator is reached, which happens either after 4 hours or after the shard has accumulated a certain amount of data.
This means that for DynamoDB tables with a large number of shards on their streams, regardless of how much data is being sent to the streams, many Data Prepper containers (a minimum of `shard count / 150`) must be used to achieve low latency on the DDB stream data.
**Describe the solution you'd like**
A single Data Prepper container should grab ownership of a shard, process it for some time, then checkpoint it with a sequence number before giving up that shard and moving to the next one. This would allow one Data Prepper container to process all of the shards in a DynamoDB stream in a somewhat timely manner, with the trade-off that latency may be slightly higher when using a large number of Data Prepper containers.
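The rotation described above can be sketched as follows. This is illustrative only — the function name, the record-count slicing policy, and the checkpoint structure are assumptions, not Data Prepper's actual implementation:

```python
from collections import deque

def rotate_shards(shards, checkpoints, process_fn, max_slice_records=3):
    """Rotate through shards: acquire one, process a bounded slice,
    checkpoint the last sequence number, then release the shard."""
    pending = deque(shards)
    while pending:
        shard_id = pending.popleft()               # acquire ownership of one shard
        records = shards[shard_id]
        start = checkpoints.get(shard_id, -1) + 1  # resume from the checkpoint
        end = min(start + max_slice_records, len(records))
        for seq in range(start, end):
            process_fn(shard_id, records[seq])
        if end > start:
            checkpoints[shard_id] = end - 1        # checkpoint with a sequence number
        if end < len(records):
            pending.append(shard_id)               # give up the shard; revisit later
    return checkpoints
```

One container can thus make progress on every shard, at the cost of revisiting each shard several times.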
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Checkpoint shards and rotate through them for DynamoDB streams source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5208/comments | 1 | 2024-11-20T21:25:50Z | 2024-12-09T17:11:01Z | https://github.com/opensearch-project/data-prepper/issues/5208 | 2,677,220,451 | 5,208 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I am trying out Data Prepper locally and comparing the filtering/processor functionality between Logstash and Data Prepper. As part of my test setup, I would like to connect Data Prepper to a local Kinesis stream as the source. Currently, I don't see any option to set the Kinesis endpoint and port in the Kinesis source.
**Describe the solution you'd like**
Additional attributes in the Kinesis source to connect to a LocalStack Kinesis stream locally.
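For example, a hypothetical `endpoint_override` attribute (not currently supported) would make local testing possible:

```yaml
kinesis-pipeline:
  source:
    kinesis:
      streams:
        - stream_name: my-local-stream
      aws:
        region: us-east-1
      # hypothetical attribute proposed by this request
      endpoint_override: "http://localhost:4566"
  sink:
    - stdout:
```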
**Describe alternatives you've considered (Optional)**
**Additional context**
| Support for connecting to localstack's Kinesis locally. | https://api.github.com/repos/opensearch-project/data-prepper/issues/5206/comments | 1 | 2024-11-20T07:03:44Z | 2024-12-09T17:10:40Z | https://github.com/opensearch-project/data-prepper/issues/5206 | 2,674,697,525 | 5,206 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The AWS-managed Data Prepper (OSIS) does not seem to consistently honor the filtering that I have configured.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable Security Lake with cloudtrail and eks audit logs
2. Create Ingestion for Security Lake -> OpenSearch
3. Update pipeline to include routes
```
route:
- eks-logs: '/metadata/product/name == "amazon_eks"'
- cloudtrail-logs: '/metadata/product/name == "cloudtrail"'
```
5. Have the following sink configuration
```
sink:
- opensearch:
# Provide an AWS OpenSearch Service domain endpoint
hosts: [ <redacted> ]
routes: [eks-logs]
aws:
# Provide a Role ARN with access to the domain. This role should have a trust relationship with osis-pipelines.amazonaws.com
sts_role_arn: "<redacted>"
# Provide the region of the domain.
region: "us-east-1"
# Enable the 'serverless' flag if the sink is an Amazon OpenSearch Serverless collection
serverless: false
index: "ocsf-${/metadata/version}-${/class_uid}-${/class_name}-eks-${/accountid}-%{yyyy.MM.dd}"
- opensearch:
# Provide an AWS OpenSearch Service domain endpoint
hosts: [ <redacted> ]
routes: [cloudtrail-logs]
aws:
# Provide a Role ARN with access to the domain. This role should have a trust relationship with osis-pipelines.amazonaws.com
sts_role_arn: "<redacted>"
# Provide the region of the domain.
region: "us-east-1"
# Enable the 'serverless' flag if the sink is an Amazon OpenSearch Serverless collection
serverless: false
index: "ocsf-${/metadata/version}-${/class_uid}-${/class_name}-cloudtrail-${/accountid}-%{yyyy.MM.dd}"
```
**Expected behavior**
To create two indexes: one for 'eks' and one for 'cloudtrail'. Instead, it creates three indexes: one for eks, one for cloudtrail, and a third that is unlabeled.
The documents in the unlabeled index contain 'metadata.product.name', and the product name is either 'cloudtrail' or 'amazon_eks', so they should have matched one of the configured routes but did not.
**Screenshots**

**Environment (please complete the following information):**
- AWS Managed OSIS
| [BUG] OSIS Filter Not Honored | https://api.github.com/repos/opensearch-project/data-prepper/issues/5200/comments | 0 | 2024-11-18T19:41:24Z | 2024-11-19T20:35:58Z | https://github.com/opensearch-project/data-prepper/issues/5200 | 2,669,686,403 | 5,200 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The OpenSearch sink in Data Prepper v2.10 & v2.10.1 does not recognise and use refreshed AWS STS credentials to sign requests upon expiration of existing credentials. Data Prepper v2.9 does.
**To Reproduce**
Steps to reproduce the behaviour:
1. Create an AWS profile in `aws/credentials` containing the credentials:
```
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
aws_session_token = ...
```
2. Create the file `pipeline.yaml`:
```
log-forwarding-pipeline:
source:
random:
processor:
- date:
from_time_received: true
destination: "timestamp"
sink:
- opensearch:
aws:
region: ...
hosts:
- ...
insecure: true
index: "data.prepper.random"
```
3. Create the file `env`:
```
AWS_CONFIG_FILE=/usr/share/data-prepper/.aws/config
AWS_REGION=...
AWS_PROFILE=default
AWS_SHARED_CREDENTIALS_FILE=/usr/share/data-prepper/.aws/credentials
```
5. Launch a container using the Data Prepper image:
```
podman run -d \
--env-file $PWD/env \
-v $PWD/pipeline.yaml:/usr/share/data-prepper/pipelines/pipeline.yaml \
-v $PWD/aws:/usr/share/data-prepper/.aws \
public.ecr.aws/opensearchproject/data-prepper:2.10.1
```
6. Replace the credentials in `aws/credentials` with new ones shortly before the existing ones expire.
7. Upon expiration of the original credentials, Data Prepper will begin reporting:
```
WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - Bulk Operation Failed. Number of retries 5. Retrying...
org.opensearch.client.opensearch._types.OpenSearchException: Request failed: [security_exception] The security token included in the request is expired
at org.opensearch.client.transport.aws.AwsSdk2Transport.parseResponse(AwsSdk2Transport.java:473) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.transport.aws.AwsSdk2Transport.executeSync(AwsSdk2Transport.java:392) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.transport.aws.AwsSdk2Transport.performRequest(AwsSdk2Transport.java:192) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.client.opensearch.OpenSearchClient.bulk(OpenSearchClient.java:215) ~[opensearch-java-2.8.1.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.bulk.OpenSearchDefaultBulkApiWrapper.bulk(OpenSearchDefaultBulkApiWrapper.java:19) ~[opensearch-2.10.1.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.lambda$doInitializeInternal$6(OpenSearchSink.java:276) ~[opensearch-2.10.1.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:302) ~[opensearch-2.10.1.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.execute(BulkRetryStrategy.java:205) ~[opensearch-2.10.1.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.lambda$flushBatch$17(OpenSearchSink.java:532) ~[opensearch-2.10.1.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.13.0.jar:1.13.0]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.flushBatch(OpenSearchSink.java:529) ~[opensearch-2.10.1.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doOutput(OpenSearchSink.java:478) ~[opensearch-2.10.1.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:69) ~[data-prepper-api-2.10.1.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.13.0.jar:1.13.0]
at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:69) ~[data-prepper-api-2.10.1.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:360) ~[data-prepper-core-2.10.1.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
```
**Expected behavior**
Data Prepper continues to forward to OpenSearch.
**Screenshots**
**Environment (please complete the following information):**
- Container image: public.ecr.aws/opensearchproject/data-prepper
- 2.10, 2.10.1
**Additional context**
In my operational environment, AWS STS credentials are provided and refreshed by an external process. Data Prepper v2.9 recognises the refreshed credentials without the need for a restart. | [BUG] Data Prepper v2.10 & v2.10.1 sink do not use refreshed AWS credentials | https://api.github.com/repos/opensearch-project/data-prepper/issues/5198/comments | 0 | 2024-11-18T13:22:13Z | 2024-11-19T20:34:14Z | https://github.com/opensearch-project/data-prepper/issues/5198 | 2,668,544,016 | 5,198 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We have a number of inconsistent names in our configurations, and we recently added a plugin name with hyphens instead of underscores (#5138).
**Describe the solution you'd like**
I'd like to have automation in Data Prepper that verifies that plugin names meet our conventions.
I think this could be done by creating a testing project that has built-in unit tests for scanning for plugins and verifying the naming convention. Additionally, we can have this added automatically to all plugin projects in Data Prepper.
To accomplish this, we could have a new project:
```
data-prepper-plugin-test
```
We apply it to all plugins by adding something like the following to [`data-prepper-plugins/build.gradle`](https://github.com/opensearch-project/data-prepper/blob/675864d120e8f88deeee2b341254353c827e06de/data-prepper-plugins/build.gradle)
https://github.com/opensearch-project/data-prepper/blob/675864d120e8f88deeee2b341254353c827e06de/data-prepper-plugins/build.gradle#L10-L13
Update the above to the following.
```
subprojects {
apply plugin: 'data-prepper.publish'
group = 'org.opensearch.dataprepper.plugins'
dependencies {
implementation project(':data-prepper-plugin-test')
}
}
```
At this point, the `data-prepper-plugin-test` will be part of every plugin project. Now, we need to have tests in there.
The project `data-prepper-plugin-test` can take a dependency on `data-prepper-plugin-framework` so that it can use `ClasspathPluginProvider`. We would also need to add a new method to `ClasspathPluginProvider` to get all plugin names found. It might look somewhat like the following.
```
Set<String> listPlugins() {
if (nameToSupportedTypeToPluginType == null) {
nameToSupportedTypeToPluginType = scanForPlugins();
}
return nameToSupportedTypeToPluginType.keySet();
}
```
Now we can write a unit test that lists all plugins and verifies that they meet our naming conventions. Some things we should verify:
* They should be all lowercase
* They should have only `_` as a special character
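The check inside such a unit test could look like the following sketch. The override structure mirrors the proposed `conventions-configurations.yaml` and is an assumption, not an existing API:

```python
import re

# Conventions from the proposal: lowercase, with '_' as the only special character.
STRICT = re.compile(r'[a-z0-9_]+')
RELAXED = re.compile(r'[a-z0-9_-]+')   # applied when require_underscores is ignored

def naming_violations(plugin_names, overrides=None):
    """Return the plugin names that break the naming conventions.

    `overrides` maps plugin name -> {'ignore_conventions': [...]}, mirroring
    the proposed YAML override file.
    """
    overrides = overrides or {}
    bad = []
    for name in plugin_names:
        ignored = overrides.get(name, {}).get('ignore_conventions', [])
        pattern = RELAXED if 'require_underscores' in ignored else STRICT
        if not pattern.fullmatch(name):
            bad.append(name)
    return bad
```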
One additional change we'd need is some way to override the conventions. Some of our plugins already have some bad conventions. I propose that plugin authors can add a file to indicate ignores. This can go in `src/test/resources/org.opensearch.dataprepper.plugin.test.conventions-configurations.yaml`. It might look somewhat like the following:
```
overrides:
plugin_names:
kinesis-data-streams:
ignore_conventions: [require_underscores]
```
This would tell the framework to ignore the convention to require underscores for the `kinesis-data-streams` plugin. We would still perform the other validations on it.
**Describe alternatives you've considered (Optional)**
Alternative 1: We could create a Gradle plugin in `buildSrc` which looks for annotations of `@DataPrepperPlugin` and verifies the name. This would be a good way to verify the plugin names, but would be quite difficult to extend beyond the plugin names to use on the configurations themselves.
Alternative 2: We could try to use Checkstyle to verify the plugin names. This could be quite simple. But, the downside is that it would only cover the `@DataPrepperPlugin` annotation.
**Additional context**
Add any other context or screenshots about the feature request here.
| Verify plugin conventions as part of the build | https://api.github.com/repos/opensearch-project/data-prepper/issues/5191/comments | 0 | 2024-11-15T15:29:03Z | 2024-11-19T20:31:09Z | https://github.com/opensearch-project/data-prepper/issues/5191 | 2,662,309,927 | 5,191 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>source-map-support-0.5.21.tgz</b></p></summary>
<p>Fixes stack traces for files with source maps</p>
<p>Library home page: <a href="https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz">https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz</a></p>
<p>Path to dependency file: /testing/aws-testing-cdk/package.json</p>
<p>Path to vulnerable library: /testing/aws-testing-cdk/package.json,/release/staging-resources-cdk/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/675864d120e8f88deeee2b341254353c827e06de">675864d120e8f88deeee2b341254353c827e06de</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (source-map-support version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-21540](https://www.mend.io/vulnerability-database/CVE-2024-21540) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | source-map-support-0.5.21.tgz | Direct | N/A | ❌ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2024-21540</summary>
### Vulnerable Library - <b>source-map-support-0.5.21.tgz</b></p>
<p>Fixes stack traces for files with source maps</p>
<p>Library home page: <a href="https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz">https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz</a></p>
<p>Path to dependency file: /testing/aws-testing-cdk/package.json</p>
<p>Path to vulnerable library: /testing/aws-testing-cdk/package.json,/release/staging-resources-cdk/package.json</p>
<p>
Dependency Hierarchy:
- :x: **source-map-support-0.5.21.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/675864d120e8f88deeee2b341254353c827e06de">675864d120e8f88deeee2b341254353c827e06de</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
All versions of the package source-map-support are vulnerable to Directory Traversal in the retrieveSourceMap function.
<p>Publish Date: 2024-11-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-21540>CVE-2024-21540</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details> | source-map-support-0.5.21.tgz: 1 vulnerabilities (highest severity is: 7.5) - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/5188/comments | 1 | 2024-11-13T15:25:38Z | 2024-11-18T17:20:50Z | https://github.com/opensearch-project/data-prepper/issues/5188 | 2,655,870,353 | 5,188 |
[
"opensearch-project",
"data-prepper"
] | > [!warning]
> ### Artifacts v3 brownouts
>Artifact actions v3 will be closing down by December 5, 2024. To raise awareness of the upcoming removal, we will temporarily fail jobs using v3 of actions/upload-artifact or actions/download-artifact. Builds that are scheduled to run during the brownout periods will fail. The brownouts are scheduled for the following dates and times:
> – November 14, 12pm – 1pm EST
> – November 21, 9am – 5pm EST
_[Notice of Breaking Changes](https://github.blog/changelog/2024-11-05-notice-of-breaking-changes-for-github-actions/#artifacts-v3-brownouts) published Nov 5, 2024_
### This repository will be impacted by this brownout and deprecation, [based on this query.](https://github.com/search?q=org%3Aopensearch-project%20actions%2Fupload-artifact%40v3&type=code)
Want to avoid tracking this manually? Setup [dependabot](https://docs.github.com/en/code-security/dependabot/dependabot-version-updates) to automatically bump version numbers of dependencies including GitHub Actions.
#### Add a .github/dependabot.yml configuration
```yaml
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
```
_[Example Source](https://github.com/opensearch-project/security/blob/main/.github/dependabot.yml#L11-L14)_ | Github Action Deprecation: actions/upload-artifact@v3 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5173/comments | 0 | 2024-11-06T18:48:59Z | 2024-11-12T20:36:36Z | https://github.com/opensearch-project/data-prepper/issues/5173 | 2,638,941,421 | 5,173 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
We are using AWS OpenSearch Ingestion Service (OSIS) and have opened an enterprise support case to mirror this one. What we are seeing is that Data Prepper isn't properly handling load and enters an out of memory (OOM) state and crashes.
We are able to see that document processing is successful until the OOM state is reached at maximum memory, and then the service hangs. This results in SQS being backed up with messages. If you restart, the messages will begin to dequeue and process; however, the service will then crash shortly after.
The error message received is:
```
INFO org.opensearch.dataprepper.core.breaker.HeapCircuitBreaker - Circuit breaker tripped and open. 6598691536 used memory bytes > 6442450944 configured
```
**To Reproduce**
Steps to reproduce the behavior:
1. Have an S3 -> SNS -> SQS Queue with a large volume of events.
2. Start OpenSearch Ingestion Service with a set number of OCU's. (Ours was set to 8-12 however changing this doesn't appear to impact anything).
3. Wait for memory to spike until maximum allowed memory is reached.
**Expected behavior**
The expected behavior is that OpenSearch Ingestion Service will load balance based on the queued messages. Large quantities of queued messages should not dictate memory usage; rather, they should reduce the dequeue rate. I expect the service to operate like Logstash, which handles messages within a limit and only pulls from the SQS queue when it is able to.
What seems to be happening is that Data Prepper is overloading itself by pulling more SQS messages than it is able to handle.
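The expected behavior is essentially bounded-buffer polling, sketched below. This is illustrative pseudocode of the desired behavior — `poll_when_capacity` is a made-up name, not Data Prepper's actual SQS code:

```python
import queue

def poll_when_capacity(receive_batch, buffer, batch_size=10):
    """Fetch a batch from the queue service only when the in-memory buffer
    has room for the whole batch; otherwise skip this poll entirely."""
    free = buffer.maxsize - buffer.qsize()
    if free < batch_size:
        return 0                      # apply backpressure instead of filling memory
    messages = receive_batch(batch_size)
    for message in messages:
        buffer.put_nowait(message)
    return len(messages)
```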
**Screenshots**
Memory spiked
<img width="762" alt="Screenshot 2024-11-06 at 10 01 32 AM" src="https://github.com/user-attachments/assets/6cc09500-1856-43a3-95a6-fd31da6ae45f">
Document write spikes on start, then stops.
<img width="806" alt="Screenshot 2024-11-06 at 1 11 18 PM" src="https://github.com/user-attachments/assets/65f1a1ac-8edc-4e0d-8e2d-1aadc7800394">
**Environment (please complete the following information):**
AWS Managed OpenSearch Ingestion Service
**Additional context**
Add any other context about the problem here.
| [BUG] Data Prepper Enters OOM State and Stops Processing | https://api.github.com/repos/opensearch-project/data-prepper/issues/5172/comments | 2 | 2024-11-06T18:12:12Z | 2024-11-18T19:32:01Z | https://github.com/opensearch-project/data-prepper/issues/5172 | 2,638,859,513 | 5,172 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I'm using docker compose which starts an opensearch node and a data-prepper server.
My pipeline uses DynamoDB (AWS, not DynamoDB Local) as the `source` and the created OpenSearch node as the `sink`.
The error is about casting a value from the LeaderPartition type to the StreamPartition type.
**To Reproduce**
Steps to reproduce the behavior:
1. Execute `docker compose up -d --build` to starts all needed services
2. Execute `docker logs -f docker-dynamodb-opensearch-data-prepper-opensearch-data-prepper-1`
**Expected behavior**
The Data Prepper server is ready to consume records from the DynamoDB stream, then write them to the OpenSearch index.
**Screenshots**
<img width="1437" alt="image" src="https://github.com/user-attachments/assets/30c3574d-7675-4291-ad90-8e31f6da9908">
**Environment (please complete the following information):**
- OS: MacOS Sequoia
- Version 15.1 Beta (24B5055e)
**Additional context**
docker-compose.yml
```yaml
version: '3'
services:
opensearch-node:
image: opensearchproject/opensearch
ports:
- "9200:9200"
- "9600:9600"
environment:
- "discovery.type=single-node"
- "plugins.security.disabled=true"
- "OPENSEARCH_INITIAL_ADMIN_PASSWORD=<OPENSEARCH_INITIAL_ADMIN_PASSWORD>"
networks:
- dynamodb_opensearch
opensearch-dashboard:
image: opensearchproject/opensearch-dashboards:2.11.0
ports:
- "5601:5601"
environment:
- "OPENSEARCH_HOSTS=http://opensearch-node:9200"
- "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
networks:
- dynamodb_opensearch
opensearch-data-prepper:
image: opensearchproject/data-prepper:latest
environment:
- AWS_PROFILE=my-profile
- AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
- AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
- AWS_DEFAULT_REGION=eu-west-1
- AWS_REGION=eu-west-1
volumes:
- ~/.aws:/root/.aws
- ./pipelines.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml
- ./data-prepper-config.yaml:/usr/share/data-prepper/config/data-prepper-config.yaml
ports:
- "21890:21890"
networks:
- dynamodb_opensearch
networks:
dynamodb_opensearch:
driver: bridge
```
data-prepper-config.yaml
```yaml
ssl: false
```
pipelines.yaml
```yaml
version: "2"
cdc-pipeline:
source:
dynamodb:
acknowledgments: true
tables:
- table_arn: "arn:aws:dynamodb:eu-west-1:123456789:table/my-table"
stream:
start_position: LATEST
view_on_remove: OLD_IMAGE
aws:
region: "eu-west-1"
sink:
- opensearch:
hosts: ["http://opensearch-node:9200"]
index: my-index
index_type: custom
document_id: '${/id}'
action: '${getMetadata("opensearch_action")}'
document_version: '${getMetadata("document_version")}'
document_version_type: external
``` | [BUG] LeaderPartition cannot be cast to StreamPartition | https://api.github.com/repos/opensearch-project/data-prepper/issues/5167/comments | 4 | 2024-11-02T03:13:31Z | 2025-02-17T14:05:00Z | https://github.com/opensearch-project/data-prepper/issues/5167 | 2,630,153,971 | 5,167 |
[
"opensearch-project",
"data-prepper"
] | Currently, the consumer code in `KafkaCustomConsumer` is grabbing the topic/partition/timestamp info from the source `ConsumerRecord` and adding them as attributes in the event metadata. It would be helpful to also have access to the `offset` field.
This would allow us to add calls to `getMetadata("kafka_offset")` in our pipelines, as we use this for internal tracking/auditing. Looking at the code, it seems like it would be relatively easy to add this.
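With the offset attached, a pipeline could copy it into the event, for example for auditing. This assumes the proposed `kafka_offset` metadata key, which does not exist yet:

```yaml
processor:
  - add_entries:
      entries:
        - key: kafka_offset
          value_expression: 'getMetadata("kafka_offset")'
```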
| Support consumer offset metadata from Kafka source records | https://api.github.com/repos/opensearch-project/data-prepper/issues/5164/comments | 3 | 2024-11-01T15:19:40Z | 2025-01-14T18:57:48Z | https://github.com/opensearch-project/data-prepper/issues/5164 | 2,629,256,765 | 5,164 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We want to utilize data prepper to ingest S3 files with json lines format like below. This is related to the offline batch ingestion that has been released in https://opensearch.org/docs/latest/ml-commons-plugin/api/async-batch-ingest/.
```
{ "content": ["Chapter 1", "Introduction"], "SageMakerOutput": ["Embedding1", "Embedding2"], "id": 1 }
...
...
```
We need to ingest the above data into the KNN index with the following format:
```
{
    "chapter": "$.content[0]",
    "title": "$.content[1]",
    "chapter_embedding": "$.SageMakerOutput[0]",
    "title_embedding": "$.SageMakerOutput[1]",
    "id": "$.id"
}
```
However, there isn't a processor or combination of processors that could convert the source data into the final data format because currently data prepper does not support "json keys mapping to an array element".
**Describe the solution you'd like**
Enhance the current list-to-map or map-to-list processors to allow mapping a key to an array element in the source. Or use a new processor to handle this request.
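To make the requested mapping concrete, here is a plain Python sketch (not an existing processor) of the transform the enhanced processor would perform, using the fields from the example above:

```python
def map_array_elements(source):
    # Desired output described above: map JSON-path-like selectors
    # that index into arrays onto flat document fields.
    return {
        "chapter": source["content"][0],
        "title": source["content"][1],
        "chapter_embedding": source["SageMakerOutput"][0],
        "title_embedding": source["SageMakerOutput"][1],
        "id": source["id"],
    }

line = {"content": ["Chapter 1", "Introduction"],
        "SageMakerOutput": ["Embedding1", "Embedding2"], "id": 1}
doc = map_array_elements(line)
```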
**Describe alternatives you've considered (Optional)**
https://opensearch.org/docs/latest/ml-commons-plugin/api/model-apis/batch-predict/
**Additional context**
https://opensearch.org/docs/latest/ml-commons-plugin/api/async-batch-ingest/ | Support ingesting json format data with keys mapping to the array element from source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5134/comments | 3 | 2024-10-30T22:05:19Z | 2024-11-06T00:34:14Z | https://github.com/opensearch-project/data-prepper/issues/5134 | 2,625,456,868 | 5,134 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
This feature aims to enhance Data Prepper to support Retrieval-Augmented Generation (RAG) use cases by implementing the following components:
1. A **Vector Embedding Processor**: that leverages services like AWS Bedrock, OpenAI etc for generating vector embeddings, facilitating integration with vector databases.
2. **Codecs for Unstructured Data**: that enables Data Prepper to ingest additional file formats like PDF, HTML, etc.
3. **Advanced Chunking Strategies**: based on the 5 Levels of Text Splitting to improve the relevance of text chunks for embedding and retrieval tasks. For details of the chunking strategy refer - [link](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb)
These enhancements will enable Data Prepper to handle complex unstructured data pipelines, allowing users to implement sophisticated retrieval and generation workflows for various applications.
**Describe the solution you'd like**
Sub-Issues
[Issue 1: Implement Vector Embedding Processor - Use Bedrock, OpenAI, etc., for Embedding Generation](https://github.com/opensearch-project/data-prepper/issues/new?assignees=&labels=untriaged&projects=&template=feature_request.md&title=#)
Objective: Create a processor that generates vector embeddings for input text chunks. The processor will support multiple embedding services, such as AWS Bedrock and OpenAI, and will leverage asynchronous processing for improved throughput.
Key Features:
Configurable embedding source selection.
Batch processing support with individual chunk embeddings.
Error handling and scalability optimizations.
Implementation:
Similar to the AWS Lambda processor in Data Prepper, we will need to handle calls to external services (Bedrock, OpenAI, Hugging Face, etc.) to get vector embeddings.
[Issue 2: Add Codec for Unstructured Data (PDF, HTML, and Other Formats)](https://github.com/opensearch-project/data-prepper/issues/new?assignees=&labels=untriaged&projects=&template=feature_request.md&title=#)
Objective: Expand Data Prepper’s ingestion capabilities to handle additional unstructured data formats like PDF and HTML, which are commonly used in RAG use cases.
Key Features:
Modular codec design for flexibility.
Basic in-memory text extraction for standard documents.
Integration with AWS Textract for advanced text extraction needs (e.g., handling scanned documents, tables, images).
Implementation:
For basic PDF documents, we can use libraries like Apache Tika or Apache PDFBox.
For advanced parsing of documents, such as reading tabular data, receipts, or forms, it is best to use external services like AWS Textract or Aryn's partitioning service, which use OCR + ML solutions to read unstructured data.
[Issue 3: Add Chunking Strategies Based on the 5 Levels of Text Splitting](https://github.com/opensearch-project/data-prepper/issues/new?assignees=&labels=untriaged&projects=&template=feature_request.md&title=#)
Objective: Implement advanced chunking strategies inspired by the 5 Levels of Text Splitting - [link](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) This will improve the semantic relevance of text chunks, enabling higher-quality embeddings and more effective retrieval.
Key Features:
Support for five levels of chunking: character, word, sentence, paragraph, and semantic unit.
Overlap handling to maintain context across chunks.
Configurable chunking parameters for customization.
Implementation:
Use existing frameworks like langchain to leverage the chunking strategy implementations. - [langchain on java](https://github.com/langchain4j/langchain4j)
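For reference, the simplest of the five levels — fixed-size character splitting with overlap — can be sketched as follows (chunk size and overlap values are illustrative):

```python
def split_with_overlap(text, chunk_size, overlap):
    # Level-1 splitting: fixed-size character windows, each window
    # starting (chunk_size - overlap) characters after the previous one.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_with_overlap("abcdefghij", chunk_size=4, overlap=2)
```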
To implement this feature, we will need to address the following sub-issues:
[ ] [Issue 1: Implement Vector Embedding Processor - Use Bedrock, OpenAI, etc., for Embedding Generation](https://github.com/your-repo/issues/1)
[ ] [Issue 2: Add Codec for Unstructured Data (PDF, HTML, and Other Formats)](https://github.com/your-repo/issues/2)
[ ] [Issue 3: Add Chunking Strategies Based on the 5 Levels of Text Splitting](https://github.com/your-repo/issues/3)
**Additional context**
RAG - https://aws.amazon.com/what-is/retrieval-augmented-generation/
| Feature Request: Add RAG Support in Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/5126/comments | 0 | 2024-10-28T23:18:04Z | 2024-10-29T19:47:49Z | https://github.com/opensearch-project/data-prepper/issues/5126 | 2,619,760,511 | 5,126 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>werkzeug-3.0.3-py3-none-any.whl</b></p></summary>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9d/6e/e792999e816d19d7fcbfa94c730936750036d65656a76a5a688b57a656c4/werkzeug-3.0.3-py3-none-any.whl">https://files.pythonhosted.org/packages/9d/6e/e792999e816d19d7fcbfa94c730936750036d65656a76a5a688b57a656c4/werkzeug-3.0.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/675864d120e8f88deeee2b341254353c827e06de">675864d120e8f88deeee2b341254353c827e06de</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (werkzeug version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-49767](https://www.mend.io/vulnerability-database/CVE-2024-49767) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | werkzeug-3.0.3-py3-none-any.whl | Direct | quart - 0.19.7;werkzeug - 3.0.6 | ✅ |
| [CVE-2024-49766](https://www.mend.io/vulnerability-database/CVE-2024-49766) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low | 3.7 | werkzeug-3.0.3-py3-none-any.whl | Direct | Werkzeug - 3.0.6 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2024-49767</summary>
### Vulnerable Library - <b>werkzeug-3.0.3-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9d/6e/e792999e816d19d7fcbfa94c730936750036d65656a76a5a688b57a656c4/werkzeug-3.0.3-py3-none-any.whl">https://files.pythonhosted.org/packages/9d/6e/e792999e816d19d7fcbfa94c730936750036d65656a76a5a688b57a656c4/werkzeug-3.0.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **werkzeug-3.0.3-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/675864d120e8f88deeee2b341254353c827e06de">675864d120e8f88deeee2b341254353c827e06de</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a Web Server Gateway Interface web application library. Applications using `werkzeug.formparser.MultiPartParser` corresponding to a version of Werkzeug prior to 3.0.6 to parse `multipart/form-data` requests (e.g. all flask applications) are vulnerable to a relatively simple but effective resource exhaustion (denial of service) attack. A specifically crafted form submission request can cause the parser to allocate and block 3 to 8 times the upload size in main memory. There is no upper limit; a single upload at 1 Gbit/s can exhaust 32 GB of RAM in less than 60 seconds. Werkzeug version 3.0.6 fixes this issue.
<p>Publish Date: 2024-10-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-49767>CVE-2024-49767</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-q34m-jh98-gwm2">https://github.com/pallets/werkzeug/security/advisories/GHSA-q34m-jh98-gwm2</a></p>
<p>Release Date: 2024-10-25</p>
<p>Fix Resolution: quart - 0.19.7;werkzeug - 3.0.6</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> CVE-2024-49766</summary>
### Vulnerable Library - <b>werkzeug-3.0.3-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9d/6e/e792999e816d19d7fcbfa94c730936750036d65656a76a5a688b57a656c4/werkzeug-3.0.3-py3-none-any.whl">https://files.pythonhosted.org/packages/9d/6e/e792999e816d19d7fcbfa94c730936750036d65656a76a5a688b57a656c4/werkzeug-3.0.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **werkzeug-3.0.3-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/675864d120e8f88deeee2b341254353c827e06de">675864d120e8f88deeee2b341254353c827e06de</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a Web Server Gateway Interface web application library. On Python < 3.11 on Windows, os.path.isabs() does not catch UNC paths like //server/share. Werkzeug's safe_join() relies on this check, and so can produce a path that is not safe, potentially allowing unintended access to data. Applications using Python >= 3.11, or not using Windows, are not vulnerable. Werkzeug version 3.0.6 contains a patch.
<p>Publish Date: 2024-10-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-49766>CVE-2024-49766</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-f9vj-2wh5-fj8j">https://github.com/pallets/werkzeug/security/advisories/GHSA-f9vj-2wh5-fj8j</a></p>
<p>Release Date: 2024-10-25</p>
<p>Fix Resolution: Werkzeug - 3.0.6</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | werkzeug-3.0.3-py3-none-any.whl: 2 vulnerabilities (highest severity is: 7.5) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5122/comments | 1 | 2024-10-28T17:48:06Z | 2024-12-05T21:48:13Z | https://github.com/opensearch-project/data-prepper/issues/5122 | 2,619,108,382 | 5,122 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The [escaped syntax](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/#escaped-syntax) for JSON pointers defines how to build JSON pointers for fields that include special characters.
However, the [`isValidKey()`](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/event/JacksonEventKey.java#L151) method in `JacksonEventKey` only checks the basic character set, so keys defined with the escaped syntax are rejected.
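A sketch of what a relaxed check could look like — keeping the basic character set but also accepting the double-quoted escaped form. The regexes are illustrative approximations, not the actual `JacksonEventKey` implementation:

```python
import re

# Illustrative approximation of the existing character-set check
# ("alphanumeric chars with .-_@/", per the error message below).
PLAIN_KEY = re.compile(r'^[A-Za-z0-9.\-_@/]+$')
# Escaped syntax: a segment wrapped in double quotes, e.g. "cs(host)".
QUOTED_KEY = re.compile(r'^"[^"]+"$')

def is_valid_key(key):
    return bool(PLAIN_KEY.match(key) or QUOTED_KEY.match(key))
```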
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pipeline with a `rename_keys` processor using an escaped syntax:
```yaml
my-file-pipeline:
source:
file:
path: run/data/events.jsonl
record_type: event
format: json
sink:
- file:
path: "run/data/result.jsonl"
processor:
- rename_keys:
entries:
- from_key: host
to_key: '"cs(host)"'
```
2. Run data-prepper
3. data-prepper cannot start with the error:
> 2024-10-28T16:35:39,367 [main] ERROR org.opensearch.dataprepper.core.validation.LoggingPluginErrorsHandler - 1. rp-pipeline-file.processor.rename_keys: caused by: Parameter "entries.null.to_key" for plugin "rename_keys" is invalid: key "cs(host)" must contain only alphanumeric chars with .-_@/ and must follow JsonPointer (ie. 'field/to/key')
**Expected behavior**
The `to_key` argument `"cs(host)"` should be accepted as it conforms to the documented syntax.
**Screenshots**
N/A
**Environment (please complete the following information):**
- OS: macOs
- Version 14.5
**Additional context**
N/A
| [BUG] rename_keys processor: json pointers with escaped syntax fail to validate | https://api.github.com/repos/opensearch-project/data-prepper/issues/5121/comments | 4 | 2024-10-28T15:38:05Z | 2025-03-11T19:57:57Z | https://github.com/opensearch-project/data-prepper/issues/5121 | 2,618,790,677 | 5,121 |
[
"opensearch-project",
"data-prepper"
] | ### Describe the bug
I'm trying to use OpenSearch Ingestion and Data Prepper for logging, visualized with OpenSearch.
The log has a `time` field in a format like "yy/MM/dd HH:mm:ss.SSS",
EX) 24/10/28 13:35:45.721
I want to convert this to a timestamp, so I use the date processor like below:
```
- date:
match:
- key: time
patterns: ["yy/MM/dd HH:mm:ss.SSS"]
destination: "@timestamp"
```
Expected result:
time and @timestamp hold the same value.
Actual result:
The @timestamp field has disappeared.
As I understand it, the timezone is optional and its default value is the system time zone. So I searched the logs within 3 days; it didn't work.
But this log doesn't have an @timestamp field anymore, and I can't see any histogram on the dashboard.
So I tried another method:
I set the source and destination timezones.
```
- date:
match:
- key: time
patterns: ["yy/MM/dd HH:mm:ss.SSS"]
destination: "@timestamp"
source_timezone: "Asia/Seoul"
destination_timezone: "Asia/Seoul"
```
Finally, it works.
But I wonder why this happens.
I always appreciate your work.
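For anyone hitting the same thing: the pattern `yy/MM/dd HH:mm:ss.SSS` carries no zone information, so a zone has to be assumed before the value can become an absolute `@timestamp`. A Python sketch of the same parsing (the Java pattern roughly corresponds to `%y/%m/%d %H:%M:%S.%f`):

```python
from datetime import datetime, timezone, timedelta

raw = "24/10/28 13:35:45.721"
naive = datetime.strptime(raw, "%y/%m/%d %H:%M:%S.%f")  # no zone attached yet

# Only after a zone is assumed (here Asia/Seoul, UTC+9) does the value
# become an unambiguous instant that a dashboard time filter can match.
aware = naive.replace(tzinfo=timezone(timedelta(hours=9)))
```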
Related component
Plugins
### To Reproduce
use any date type field to @timestamp without setting timezone.
In that case, you'll never get any logs on dashboard.
### Expected behavior
Expect
time : 24/10/28 21:23:25.083
-> @timestamp : 2024/10/28 21:23:25.083
###
Additional Details
Plugins
Data Prepper
### Environment
Opensearch 2.11
fluent-bit 3.11 (Log Source through HTTP)
| [BUG] Date Processor can't map @timestamp correctly | https://api.github.com/repos/opensearch-project/data-prepper/issues/5119/comments | 2 | 2024-10-28T12:31:02Z | 2024-10-30T08:35:57Z | https://github.com/opensearch-project/data-prepper/issues/5119 | 2,618,293,150 | 5,119 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, Data Prepper manages dynamic policy creation and template creation, but not ISM policies. When the user supplies a custom policy, there is currently no way to update its index patterns. Users can't supply dynamic index patterns unless they manage policies manually, which removes the benefit of using Data Prepper.
**Describe the solution you'd like**
Using Data Prepper, we should create or update the index pattern when the user supplies an ISM policy based on `indexAlias*`, so that even if another dynamic index is created, this will take care of the policy, template, and index (including rollover).
| Support dynamic indexAlias through dynamic index patterns in ism policy using DP | https://api.github.com/repos/opensearch-project/data-prepper/issues/5117/comments | 0 | 2024-10-26T19:15:40Z | 2024-10-29T19:41:03Z | https://github.com/opensearch-project/data-prepper/issues/5117 | 2,616,077,057 | 5,117 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
A clear and concise description of what the bug is.
- The data schema named `Data Prepper` in the `observability->traces->services` page does not display services properly. I created a minimal reproducible example at https://github.com/linghengqian/data-prepper-logs-test .
**To Reproduce**
Steps to reproduce the behavior:
1. Execute the following command on the Ubuntu 22.04.5 instance with `SDKMAN!` and `Docker Engine` installed. Refer to https://github.com/linghengqian/data-prepper-logs-test . This uses OpenJDK 23.
```shell
sdk install java 23-open
sdk use java 23-open
git clone git@github.com:linghengqian/data-prepper-logs-test.git
cd ./data-prepper-logs-test/
docker compose --file ./opensearch/docker-compose.yml up -d
./mvnw clean dependency:get -Dartifact=io.opentelemetry.javaagent:opentelemetry-javaagent:2.9.0
./mvnw clean spring-boot:run \
-Dspring-boot.run.agents="$HOME/.m2/repository/io/opentelemetry/javaagent/opentelemetry-javaagent/2.9.0/opentelemetry-javaagent-2.9.0.jar" \
-Dspring-boot.run.jvmArguments="\
-Dotel.service.name='1-linghengqian-smoke-tests' \
-Dotel.exporter.otlp.endpoint='http://localhost:24321'\
"
```
This occupies host ports `14321` and `24321`.
2. Open `http://localhost:14321/app/observability-traces#/services` in Microsoft Edge browser and log in with the
account `admin` and the password `opensearchNode1Test`.
The data schema named `Data Prepper` will not have any `services`. Although `traces` do collect APM data.
- 
- 
3. However, the data schema named `Custom source` will display `1-linghengqian-smoke-tests` normally. From beginning to end, the Spring Boot application did not throw any exceptions.
- 
**Expected behavior**
A clear and concise description of what you expected to happen.
- The data schema named `Data Prepper` in the `observability->traces->services` page can display services normally.
**Screenshots**
If applicable, add screenshots to help explain your problem.
- Already mentioned above.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS] `Ubuntu WSL 22.04.5 LTS`
- Version [e.g. 22] `opensearchproject/data-prepper:2.10.1`
**Additional context**
Add any other context about the problem here.
- An interesting point is that neither https://github.com/opensearch-project/data-prepper/tree/main/examples/jaeger-hotrod nor https://github.com/opensearch-project/data-prepper/pull/4972 have this problem. Am I supposed to understand that for a data schema named `Data Prepper`, the service map is only available when there are multiple services?
- Also, the data schema named `Data Prepper` seems to be able to directly accept the grpc data sent by the otlp exporter, so why does the OpenSearch website require the additional deployment of data prepper? `otel/opentelemetry-collector-contrib` can actually directly convert the Opentelemetry SDK data into `opensearch documents`. Another additional topic is, wouldn’t it be better if the opensearch dashboard directly integrates the `logs`, `metrics`, and `traces` sent by the Opentelemetry SDK into a single web page? Why split `logs`, `metrics`, and `traces` into different web pages?
- 
| [BUG] The data schema named `Data Prepper` in the `observability->traces->services` page does not display services properly | https://api.github.com/repos/opensearch-project/data-prepper/issues/5116/comments | 1 | 2024-10-26T15:23:25Z | 2024-10-29T19:39:51Z | https://github.com/opensearch-project/data-prepper/issues/5116 | 2,615,934,202 | 5,116 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Currently, data prepper does not validate that routes exist on startup, and even during runtime will not throw any errors.
**To Reproduce**
Steps to reproduce the behavior:
Create a config like
```
routes:
- route_one: '/my_route == 10'
- route_two: '/my_route == 11'
sink:
- pipeline:
name: "sub-pipeline-1"
routes:
- FIRST_ROUTE
- pipeline:
name: "sub-pipeline-2"
routes:
- SECOND_ROUTE
```
None of the data will be routed to either pipeline, and there will be no errors or validations
**Expected behavior**
Throw a validation error when a route is defined in the sink that does not exist. For the above, this may look like
```
Route "FIRST_ROUTE" does not exist. Configured routes include [route_one, route_two]
```
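The requested check is straightforward; a sketch of the validation logic, independent of Data Prepper's actual configuration model:

```python
def validate_sink_routes(defined_routes, sink_routes):
    # Raise at startup when a sink references a route that was never defined.
    unknown = [r for r in sink_routes if r not in defined_routes]
    if unknown:
        raise ValueError(
            f'Route "{unknown[0]}" does not exist. '
            f"Configured routes include {sorted(defined_routes)}"
        )

defined = {"route_one", "route_two"}
validate_sink_routes(defined, ["route_one"])  # passes silently

try:
    validate_sink_routes(defined, ["FIRST_ROUTE"])
    error = None
except ValueError as e:
    error = str(e)
```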
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] Validate that routes configured in the sink exist on startup of Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/5106/comments | 4 | 2024-10-24T19:55:34Z | 2025-04-18T04:40:18Z | https://github.com/opensearch-project/data-prepper/issues/5106 | 2,612,448,562 | 5,106 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
In my pipeline.yaml I am hoping to extract the value "foo"; however, the field's key contains "/" slashes and should be treated as one key, as seen below.
{ "kubernetes": { "labels": { "app.kubernetes.io/component": "foo" } } }
Data Prepper uses "/" for json pointers. So it would look like so:
'/kubernetes/labels/app.kubernetes.io/component'
However, this does not capture the value, so it is skipped. I think it is because the "/" in component needs to be escaped. How do you escape values in Data Prepper?
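For comparison, standard JSON Pointer (RFC 6901) handles this case by escaping `/` as `~1` (and `~` as `~0`) inside a single key, so the path would be `/kubernetes/labels/app.kubernetes.io~1component`. Data Prepper's own escaping rules may differ, but the resolution logic looks like this sketch:

```python
def resolve_pointer(doc, pointer):
    # RFC 6901: split on "/", then unescape ~1 -> "/" and ~0 -> "~"
    # (in that order, so the escaped literal "~1" round-trips correctly).
    for token in pointer.lstrip("/").split("/"):
        token = token.replace("~1", "/").replace("~0", "~")
        doc = doc[token]
    return doc

event = {"kubernetes": {"labels": {"app.kubernetes.io/component": "foo"}}}
value = resolve_pointer(event, "/kubernetes/labels/app.kubernetes.io~1component")
```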
**To Reproduce**
1. Publish an event like the one structured above.
2. Attempt to parse out the event value "foo" and store it into another field.
**Expected behavior**
I was expecting to be able to escape the "/" with a "\/" or one of the escape patterns specified in the documentation. However, this did not work.
**Environment (please complete the following information):**
- OS: MacOSX
**Additional context**
AWS Managed OpenSearch Ingestion Service | [BUG] Escaping of "/" in json pointers | https://api.github.com/repos/opensearch-project/data-prepper/issues/5101/comments | 1 | 2024-10-23T18:34:15Z | 2024-10-29T22:25:33Z | https://github.com/opensearch-project/data-prepper/issues/5101 | 2,609,517,998 | 5,101 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.10.1
**BUILD NUMBER**: 89
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/11446182629
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 11446182629: Release Data Prepper : 2.10.1 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5094/comments | 3 | 2024-10-21T18:40:45Z | 2024-10-21T18:41:48Z | https://github.com/opensearch-project/data-prepper/issues/5094 | 2,603,371,342 | 5,094 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
A pipeline with a `kinesis` source fails on start up
```
2024-10-17T19:57:33,161 [kinesis-pipeline-local-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.plugins.kinesis.source.KinesisService - Caught exception when initializing KCL Scheduler. Will retry
2024-10-17T19:57:35,108 [kinesis-pipeline-local-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.plugins.kinesis.source.KinesisService - Caught exception when initializing KCL Scheduler. Will retry
````
**Environment (please complete the following information):**
Data Prepper 2.10.0
| [BUG] Kinesis source is failing on startup | https://api.github.com/repos/opensearch-project/data-prepper/issues/5084/comments | 0 | 2024-10-17T21:54:03Z | 2024-10-17T23:01:05Z | https://github.com/opensearch-project/data-prepper/issues/5084 | 2,595,831,742 | 5,084 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Right now our examples for configuration fields are embedded in the description (`@JsonPropertyDescription`). This makes it difficult to include multiple examples.
**Describe the solution you'd like**
Provide an annotation for `@DataPrepperExampleValues` to support providing one or more example values for a field.
For example, with the `date` processor:
```
@JsonProperty("output_format")
@JsonPropertyDescription("Determines the format of the timestamp added to an event.")
@DataPrepperExampleValues("yyyy-MM-dd'T'HH:mm:ss.SSSXXX")
private String outputFormat = DEFAULT_OUTPUT_FORMAT;
```
This would produce:
```
"output_format" : {
"exampleValues" : [
"yyyy-MM-dd'T'HH:mm:ss.SSSXXX"
],
"description" : "Determines the format of the timestamp added to an event."
},
```
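A small sketch of assembling that fragment programmatically (the shape mirrors the JSON above; this is not Data Prepper's actual schema generator):

```python
import json

def field_doc(description, example_values=None):
    # Build one per-field documentation entry, with optional example values.
    doc = {}
    if example_values:
        doc["exampleValues"] = list(example_values)
    doc["description"] = description
    return doc

fragment = {
    "output_format": field_doc(
        "Determines the format of the timestamp added to an event.",
        ["yyyy-MM-dd'T'HH:mm:ss.SSSXXX"],
    )
}
rendered = json.dumps(fragment, indent=2)
```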
| Support examples in documentation | https://api.github.com/repos/opensearch-project/data-prepper/issues/5077/comments | 0 | 2024-10-16T18:49:14Z | 2024-10-23T15:57:27Z | https://github.com/opensearch-project/data-prepper/issues/5077 | 2,592,762,751 | 5,077 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
OpenSearch users are often familiar with [Painless script](https://opensearch.org/docs/latest/api-reference/script-apis/exec-script/). Also, many users are looking for generic scripting as well.
**Describe the solution you'd like**
Create a `painless` processor:
```
processor:
- painless:
source: |
event['total_time'] = event['connection_time'] + event['response_time']
```
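For illustration, the script above would be semantically equivalent to this Python sketch applied per event (the proposal itself targets Painless, not Python):

```python
def apply_script(event):
    # Equivalent of: event['total_time'] = event['connection_time'] + event['response_time']
    event["total_time"] = event["connection_time"] + event["response_time"]
    return event

event = apply_script({"connection_time": 120, "response_time": 80})
```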
**Describe alternatives you've considered (Optional)**
This could also exist alongside an expression script using existing Data Prepper expression languages.
**Additional context**
N/A
| Support Painless scripts as a processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5073/comments | 1 | 2024-10-15T21:47:09Z | 2024-10-22T19:36:50Z | https://github.com/opensearch-project/data-prepper/issues/5073 | 2,590,002,408 | 5,073 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.10.0
**BUILD NUMBER**: 88
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/11353033417
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 11353033417: Release Data Prepper : 2.10.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5071/comments | 3 | 2024-10-15T19:49:20Z | 2024-10-15T19:57:48Z | https://github.com/opensearch-project/data-prepper/issues/5071 | 2,589,730,066 | 5,071 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The dataprepper processor "**service_maps**" only works with the source "**otel_traces_source**".
**Describe the solution you'd like**
Make the "**service_maps**" processor work with "**otel_metrics_source**" as well.
**Describe alternatives you've considered (Optional)**
First a little about the config i worked with:
1: "gRPC oteldata" is sent to an -> **otel collector**.
2: then the "**open telemetry collector**" -> sends data to -> **dataprepper**.
I've tried moving the service-graph data from metrics to traces in this "otel collector", but the open telemetry collector doesn't seem to be able to do this. It can only move service-graph data "from traces to metrics", but not reverse. Its then called a "Service graph exporter", and sends the service_graph via metrics. The application in question only sends metrics and therefore its not possible to get traces (and it shouldn't be necessary to get traces). Therefore I cannot get the servicemaps as it is now into opensearch.
For those experienced with OTEL know that It is considered somewhat of a standard that the service-graph to be visualized from metrics, (this is the reason why the otelcollector lets you "transport" service_graph data to metrics exporters). Tempo for instance assumes all service-graph data to be sent via the metrics channel.
I've also tried to just manually re-construct the "force diagram" (the default one found in the observability page) using only metrics data, but it took too much time figuring out how to do it with VEGA and opensearch.
As of now, I am forced to use alternative applications to get the desired service-graphs, such as Prometheus+Tempo+grafana. However, when using this solution I cannot use the anomaly detection in opensearch, since opensearch anomaly detection on works with indexes (not with external prometheus sources) (which sucks too!). Therfore, the servicemaps must be in opensearch indexes by the way opensearch works now! The only way to fix this as I see it, is for dataprepper to handle service_maps in metrics (as is done in industry standards). In industry strandard deployments, connecting services is usually done by the "open telemetry collector" itself. Therefore it might be better if we start expecting potential service_graphs to be in the metrics channel.
**Additional context**
In other words: at the moment, Data Prepper can only "connect" services if their "service map data" can be linked through the traces. And only traces can be used to "connect the services"! This service-graph data cannot be "pre-configured" in the metrics object the way Data Prepper works now!
So, the processor "service_maps" only works for traces. It might be desirable to make this work with metrics as well.
Letting the OtelCollector connect the services, (instead of dataprepper doing it) might increase the performance and stability for dataprepper and the overall architecture performance.
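For reference, here is a minimal sketch of how the OpenTelemetry Collector is typically wired to generate service-graph metrics from traces with its `servicegraph` connector. The endpoints and the exporter target below are illustrative assumptions, not a tested configuration:

```
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
connectors:
  # Consumes traces, emits service-graph metrics.
  servicegraph:
exporters:
  otlp/metrics:
    endpoint: dataprepper:21891   # hypothetical otel_metrics_source endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [servicegraph]
    metrics/servicegraph:
      receivers: [servicegraph]
      exporters: [otlp/metrics]
```

A metrics-capable service-map feature in Data Prepper would then consume the output of the `metrics/servicegraph` pipeline.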
Links:
[service_maps processor](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/service-map/)
[otel_metrics_source](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/otel-metrics-source/)
Data Prepper can't create service maps from metrics alone as of now, but it's probably possible to implement this. Here is an example of how it looks when Tempo visualizes service maps from metrics alone, proving it's possible:
 | [FEATURE] generate service_maps from otel_metrics_source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5055/comments | 3 | 2024-10-12T18:18:39Z | 2024-10-16T13:32:24Z | https://github.com/opensearch-project/data-prepper/issues/5055 | 2,583,309,939 | 5,055 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We want to be able to decode CloudWatch Logs subscription filters.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
```
{
"owner": "111111111111",
"logGroup": "CloudTrail/logs",
"logStream": "111111111111_CloudTrail/logs_us-east-1",
"subscriptionFilters": [
"Destination"
],
"messageType": "DATA_MESSAGE",
"logEvents": [
{
"id": "31953106606966983378809025079804211143289615424298221568",
"timestamp": 1432826855000,
"message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
},
{
"id": "31953106606966983378809025079804211143289615424298221569",
"timestamp": 1432826855000,
"message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
},
{
"id": "31953106606966983378809025079804211143289615424298221570",
"timestamp": 1432826855000,
"message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
}
]
}
```
There are a few things the existing `json` codec will have trouble with here.
1. It picks the first JSON array. In this case it will be `subscriptionFilters` rather than `logEvents`.
2. There is useful metadata that never appears in the generated events. For example, `logStream`.
**Describe the solution you'd like**
I'd like the existing `json` codec to support a few new features.
1. Allow the pipeline author to choose the key within the JSON to parse.
```
codec:
json:
key_name: logEvents
```
The `key_name` matches the `json` output codec's similar configuration.
https://github.com/opensearch-project/data-prepper/blob/76d76fc19a4cde6031195663b5d6375655fe53aa/data-prepper-plugins/parse-json-processor/src/main/java/org/opensearch/dataprepper/plugins/codec/json/JsonOutputCodecConfig.java#L13-L15
2. Allow selecting data from the root of the JSON to include in each object.
```
codec:
json:
    include_keys: ['owner', 'logGroup', 'logStream']
```
This would output events like the following:
```
{
"id": "31953106606966983378809025079804211143289615424298221568",
"timestamp": 1432826855000,
"message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}",
"owner": "111111111111",
"logGroup": "CloudTrail/logs",
"logStream": "111111111111_CloudTrail/logs_us-east-1",
}
```
3. Allow selecting data from the root of the JSON to include in the metadata for each object.
```
codec:
json:
    include_keys_metadata: ['owner', 'logGroup', 'logStream']
```
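A plain Python sketch may help illustrate the proposed semantics of items 1 and 2 (this is not Data Prepper code; `decode_with_json_codec` is a made-up name):

```python
import json

def decode_with_json_codec(raw, key_name, include_keys):
    """Sketch of the proposed codec: parse the array under key_name and
    copy include_keys from the root of the JSON into each event."""
    root = json.loads(raw)
    events = []
    for item in root.get(key_name, []):
        event = dict(item)
        for key in include_keys:
            if key in root:
                event[key] = root[key]
        events.append(event)
    return events

raw = json.dumps({
    "owner": "111111111111",
    "logGroup": "CloudTrail/logs",
    "logStream": "111111111111_CloudTrail/logs_us-east-1",
    "subscriptionFilters": ["Destination"],
    "logEvents": [
        {"id": "1", "timestamp": 1432826855000, "message": "m1"},
        {"id": "2", "timestamp": 1432826855000, "message": "m2"},
    ],
})

events = decode_with_json_codec(raw, "logEvents", ["owner", "logGroup", "logStream"])
```

Item 3 (`include_keys_metadata`) would behave the same way, but write the copied keys into each event's metadata rather than its body.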
| Support for reading CloudWatch Logs JSON using json codec | https://api.github.com/repos/opensearch-project/data-prepper/issues/5045/comments | 1 | 2024-10-10T19:31:12Z | 2024-10-15T17:02:25Z | https://github.com/opensearch-project/data-prepper/issues/5045 | 2,579,666,617 | 5,045 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently an OSIS pipeline seems to require either manual intervention or downtime when updating the mappings for an index; this includes adding subfields to an already existing field or adding a brand-new field entirely.
**Existing Configuration For Mapping**
```JSON
"field_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
```
**New Configuration For Mapping**
```JSON
"field_name": {
"type": "text",
"fields": {
"english": {
"type": "text",
"analyzer": "english"
},
"keyword": {
"type": "keyword"
}
}
}
```
The `english` subfield is not reflected in the cluster and requires downtime or manual changes to be used.
**Describe the solution you'd like**
It would be nice for OSIS pipelines to have the ability to update index mappings when they are updated in configuration. Once the updates are made, run something like an `update_by_query` call (or similar) to populate the new fields.
**Describe alternatives you've considered (Optional)**
a) A manual change to the mapping with an invocation of the update_by_query API to backfill records
b) Take some downtime to stop the pipeline, delete the index, then restart the pipeline to re-sync data
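For reference, alternative (a) could look roughly like the following OpenSearch requests (`my-index` is a hypothetical index name). Note that `_update_by_query` re-indexes existing documents in place, which is why it only helps for new subfields of already-mapped fields:

```
PUT my-index/_mapping
{
  "properties": {
    "field_name": {
      "type": "text",
      "fields": {
        "english": { "type": "text", "analyzer": "english" },
        "keyword": { "type": "keyword" }
      }
    }
  }
}

POST my-index/_update_by_query?conflicts=proceed
```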
**Additional context**
The solution suggested is mainly concerned with updating subfields, as `update_by_query` will only populate subfields of already existing fields and won't work for brand-new fields introduced to the mapping. For entirely new fields you would need to run something else (such as a Glue job) to have the documents updated reliably. | Index Mapping Updates Through OSIS Pipeline Configuration YAML | https://api.github.com/repos/opensearch-project/data-prepper/issues/5038/comments | 5 | 2024-10-09T18:13:25Z | 2024-10-21T17:54:12Z | https://github.com/opensearch-project/data-prepper/issues/5038 | 2,576,607,574 | 5,038 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It would be beneficial to have the ability to offload tasks asynchronously to AWS Lambda functions, especially when handling large volumes of data in Data Prepper. Currently, the synchronous Lambda invocation can limit concurrency and performance. Having an async client will allow Data Prepper to handle Lambda invocations concurrently, improving throughput and scalability.
**Describe the solution you'd like**
I propose adding support for
1. AWS Lambda Async Client by default in Data Prepper's Lambda-related components. This will enable non-blocking Lambda invocations for more efficient handling of high throughput data streams. The LambdaAsyncClient from the AWS SDK will be integrated for all Lambda invocations, making the system more scalable.
2. The SDK defaults the connection timeout to 60 seconds. This means that if the Lambda processing takes longer than 60 seconds, the requests would fail, causing all the records to be dropped. We should expose this as a tunable parameter to the user.
3. Address acknowledgements for the processor and sink. For the processor, we will need to handle the response cardinality:
3.1. When a batch of N events configured by the user (N could be <= the pipeline batch size) is sent, i.e., the request to Lambda contains N events, Lambda could return N responses or M responses (N != M). When N responses are returned as a JSON array, we reuse the original records, clear the old event data, and populate it with the response from Lambda; that way the acknowledgement set need not be changed.
3.2. When M responses are returned (N != M), we create new events and add them to the original acknowledgement set. The older events are also retained in the ack set but will be released by core later.
4. On failure to process the events, the events in the processor will be tagged and forwarded. This processor will NOT drop events on failure.
5. The Lambda sink will send to a DLQ on failure and will acknowledge as true. If a DLQ is not set up, we will send a negative acknowledgement.
6. Add a JSON codec for request and response.
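As a sketch only, the tunable from item 2 might surface in the pipeline configuration like this. The option names below (in particular `connection_timeout`) are hypothetical illustrations of the proposal, not an existing Data Prepper API:

```
processor:
  - aws_lambda:
      function_name: "my-function"          # hypothetical function name
      invocation_type: "request-response"
      # Item 2: raise the client connection timeout above the SDK
      # default of 60s so long-running functions don't fail the batch.
      connection_timeout: "120s"
      # Item 4: tag and forward failed events instead of dropping them.
      tags_on_failure: ["lambda_failure"]
```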
**Additional context**
This enhancement allows for improved scalability, better error handling, and non-blocking invocations of Lambda functions, which is crucial for high-throughput systems. | Address Scale Items for Lambda Processor and Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/5031/comments | 0 | 2024-10-08T06:04:23Z | 2024-10-29T00:52:29Z | https://github.com/opensearch-project/data-prepper/issues/5031 | 2,572,175,276 | 5,031 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It would be nice to provide native support for OTLP JSON logs in the [S3 source](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/s3/). OTLP JSON logs conform to the [OTLP specification](https://opentelemetry.io/docs/specs/otlp/) and are stored as JSON files ([sample file](https://github.com/open-telemetry/opentelemetry-proto/blob/main/examples/logs.json)). When the original log messages are in JSON format, the OTel collector stores the log messages in escaped JSON format under logRecords/body/stringValue.
For example:
"logRecords": [
{
"body": {
"stringValue": "{\\"key1\\":\\"val1\\",\\"key2\\":\\"val2\\"}"
}
}
]
The log format is currently supported in [otel_logs_source](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/otel-logs-source/) when the OTel JSON logs are ingested through HTTP endpoints. However, if the OTLP JSON logs are stored in files in an S3 bucket, there isn't a way to parse the escaped JSON.
**Describe the solution you'd like**
Add a new codec option `otel_logs` in the [S3 source](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/s3/) configuration. With the following pipeline configuration, the OTLP JSON logs will be ingested into OpenSearch with one document per log record; all the fields in the original log messages (e.g. key1, key2) and the attributes will be stored in each document.
For example:
version : "2"
s3-log-pipeline:
source:
s3:
acknowledgments : true
notification_type : "sqs"
compression : "none"
codec:
otel_logs:
format: "json"
workers : 3
sqs:
queue_url : ""
maximum_messages : 10
visibility_timeout : "60s"
visibility_duplication_protection : true
aws:
region : "us-west-2"
sts_role_arn : ""
processor:
- parse_json:
source : "/body"
- delete_entries:
with_keys : ["body"]
**Expected result**
- Each log record is stored in a separate document in an OpenSearch index
- Each document contains all the attributes that are shared by all the log records, i.e. the attributes under resourceLogs/resource
For example:
```
{
  "_index": "my-index",
  "_id": "qyamVZIBXzpFSvHE1m14",
  "_score": 1,
  "_source": {
    "traceId": "",
    "spanId": "",
    "schemaUrl": "https://opentelemetry.io/schemas/1.6.1",
    "key1": "val1",
    "key2": "val2",
    "key3": "ke3",
    "resource.attributes.service.name": "my.service"
  }
}
```
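The two processors in the configuration above amount to the following (a plain Python sketch for illustration, assuming the codec surfaces each log record's body string under a top-level `body` key):

```python
import json

# A log record as the OTel Collector stores it when the original
# message was JSON: the message is an escaped JSON string.
record = {"spanId": "", "body": "{\"key1\":\"val1\",\"key2\":\"val2\"}"}

# parse_json with source "/body": parse the escaped string and merge
# the resulting fields into the event.
record.update(json.loads(record["body"]))

# delete_entries with_keys ["body"]: drop the raw string afterwards.
del record["body"]
```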
| Support OpenTelemetry logs in S3 source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5028/comments | 1 | 2024-10-07T23:40:54Z | 2024-11-08T01:20:27Z | https://github.com/opensearch-project/data-prepper/issues/5028 | 2,571,732,917 | 5,028 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The DynamoDB Source doesn't support parsing data with Control Characters
Object deserialization/serialization doesn't handle Control Characters:
https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/dynamodb-source/src/main/java/org/opensearch/dataprepper/plugins/source/dynamodb/converter/StreamRecordConverter.java#L93
```
[pool-18-thread-104] ERROR org.opensearch.dataprepper.plugins.source.dynamodb.converter.StreamRecordConverter - Failed to parse and convert data from stream due to Illegal unquoted character ((CTRL-CHAR, code 4)): has to be escaped using backslash to be included in string value
at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 1, column: 720]
```
**Expected behavior**
DynamoDb source should escape the control characters and process the data.
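For illustration, Python's `json` module shows the two behaviors side by side: strict parsing rejects raw control characters inside strings (mirroring the failure above), while a lenient mode accepts them. A comparable fix on the Jackson side would presumably be enabling an allow-unescaped-control-characters read feature; that is an assumption about a possible fix, not a description of the current code.

```python
import json

# A string value containing a raw control character (EOT, code 4),
# like the one in the error message above.
payload = '{"field": "value\x04tail"}'

# Strict parsing (the default) rejects the raw control character,
# mirroring the stream converter's failure.
try:
    json.loads(payload)
    strict_ok = True
except json.JSONDecodeError:
    strict_ok = False

# Lenient parsing accepts control characters inside string values.
lenient = json.loads(payload, strict=False)
```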
| [BUG] DynamoDB Source doesn't support parsing data with Control Characters | https://api.github.com/repos/opensearch-project/data-prepper/issues/5027/comments | 4 | 2024-10-07T22:58:10Z | 2025-02-18T18:08:59Z | https://github.com/opensearch-project/data-prepper/issues/5027 | 2,571,681,387 | 5,027 |
[
"opensearch-project",
"data-prepper"
] | **Summary**
OpenSearch API source should support end to end acknowledgements. This should help to track requests which have been persisted in the sinks.
Reference: https://github.com/opensearch-project/data-prepper/issues/248 | OpenSearch API source should support end to end acknowledgements | https://api.github.com/repos/opensearch-project/data-prepper/issues/5022/comments | 1 | 2024-10-04T04:14:21Z | 2024-10-08T19:42:48Z | https://github.com/opensearch-project/data-prepper/issues/5022 | 2,565,391,807 | 5,022 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Spring Framework v5.3.39 reached end of life. It would be better to upgrade to the latest Spring Framework, v6.1.
https://spring.io/blog/2024/03/01/support-timeline-announcement-for-spring-framework-6-0-x-and-5-3-x
**Describe the solution you'd like**
Upgrade Spring v5.3.39 to v6.1
https://github.com/spring-projects/spring-framework/wiki/Upgrading-to-Spring-Framework-6.x
Please let me know the current plan (if any) for upgrading to Spring Framework 6 and the future plans for Spring Framework in the Data Prepper project.
| Support Spring Framework 6 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5018/comments | 1 | 2024-10-03T13:13:41Z | 2024-10-08T19:42:04Z | https://github.com/opensearch-project/data-prepper/issues/5018 | 2,564,037,185 | 5,018 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
It could potentially be possible to do this; however, I have not been able to find anything in the documentation that covers it.
If you have CloudWatch Logs -> Data Firehose -> S3 and want to pull that into DataPrepper it brings in the multi line event.
The structure seems to be like so:
```
{ "messageType": "DATA_MESSAGE", "owner": "123456789", "logGroup": "foo", "logStream": "bar", "logEvents": [{"id": "123456", "message": "some log message here", "timestamp" 1727880215114}, {"id": "789102", "message": "another log message here", "timestamp" 1727880215114}, {"id": "99999", "message": "yet another log message here", "timestamp" 1727880215114}]}
```
What I was hoping to do was use Data Prepper to read the log message from S3 (shaped like the above), parse out `logEvents`, and treat each entry as an individual log message to publish to both S3 and OpenSearch.
S3 matters because it would allow me to create a neat structure of a prefix with accountid/log-group/YYYY/MM/DD/HH.
However, I am not sure it is possible to extract the `logEvents` key, which contains a list of objects, and treat its entries as separate events.
**To Reproduce**
Steps to reproduce the behavior:
1. Create fake log event like above.
2. Write to DataPrepper
3. Try to parse
**Expected behavior**
I was expecting a feature within DataPrepper to support something like so:
```
processor:
- parse_json:
- split_string:
entries:
- source: "/logEvents[0]"
delimiter: ","
```
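As a plain Python sketch of the desired outcome (not an existing Data Prepper feature), each entry of `logEvents` becomes its own event, and the event timestamp can drive the accountid/log-group/YYYY/MM/DD/HH prefix:

```python
import json
from datetime import datetime, timezone

raw = json.dumps({
    "messageType": "DATA_MESSAGE",
    "owner": "123456789",
    "logGroup": "foo",
    "logStream": "bar",
    "logEvents": [
        {"id": "123456", "message": "some log message here", "timestamp": 1727880215114},
        {"id": "789102", "message": "another log message here", "timestamp": 1727880215114},
    ],
})

bundle = json.loads(raw)
events = []
for entry in bundle["logEvents"]:
    # CloudWatch timestamps are epoch milliseconds.
    ts = datetime.fromtimestamp(entry["timestamp"] / 1000, tz=timezone.utc)
    prefix = f"{bundle['owner']}/{bundle['logGroup']}/{ts:%Y/%m/%d/%H}"
    events.append({**entry, "s3_prefix": prefix})
```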
**Environment (please complete the following information):**
- OS: macOSX
**Additional context**
AWS Managed OpenSearch & AWS Managed OSIS are being used. I set up a local container deployment to expedite testing and still see the same issue.
| [BUG] Processing A Nested List as Individual Log Events | https://api.github.com/repos/opensearch-project/data-prepper/issues/5015/comments | 11 | 2024-10-02T20:51:32Z | 2025-04-17T13:36:55Z | https://github.com/opensearch-project/data-prepper/issues/5015 | 2,562,619,391 | 5,015 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper now produces schemas with property descriptions.
These are currently using Markdown because that is what we used on the documentation website. But, this can be useful for other tools. Some tools support reading HTML directly, but not Markdown.
**Describe the solution you'd like**
Use HTML in the property descriptions rather than Markdown to support more systems.
| Use HTML in JsonPropertyDescription instead of Markdown | https://api.github.com/repos/opensearch-project/data-prepper/issues/4984/comments | 0 | 2024-09-27T17:16:03Z | 2024-10-02T16:52:27Z | https://github.com/opensearch-project/data-prepper/issues/4984 | 2,553,391,097 | 4,984 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The [current OpenTelemetry specification](https://opentelemetry.io/docs/specs/otlp/) describes two transport channels:
- [OTLP/gRPC](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc), which is currently supported by DataPrepper for logs, metrics, and traces;
- [OTLP/HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp), using a binary (protobuf) or JSON format.
All formats use the same [protobuf scheme](https://github.com/open-telemetry/opentelemetry-proto/tree/main/opentelemetry/proto). OTLP/HTTP is currently not supported by DataPrepper. The OpenTelemetry specification has changed since the original implementation of the DataPrepper OTel sources. It now recommends OTLP/HTTP + protobuf to be the default protocol per [opentelemetry-specification#1885](https://github.com/open-telemetry/opentelemetry-specification/issues/1885#issuecomment-934435972). Many OpenTelemetry SDKs nowadays use OTLP/HTTP that way and not all OpenTelemetry instrumentations even support OTLP/gRPC. Connecting such solutions to DataPrepper is not possible without a protocol translation.
**Describe the solution you'd like**
DataPrepper should add support for OTLP/HTTP to the Otel*Sources. The configuration should enable a selection of the protocol to be supported. Ideally, it is possible to use both protocols simultaneously. This enables connecting different services to the same DataPrepper instance.
**Describe alternatives you've considered (Optional)**
1. The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) can be used to translate between OTLP/gRPC and OTLP/HTTP and vice versa. However, this requires an additional component in the signal stream. Direct support by DataPrepper would be a better approach.
2. The HTTP source could be used for the OTLP/HTTP with JSON format. It would require a complex JSON parsing configuration due to the nested arrays in the OTel data structures. In this configuration, the data would also not pass through the OTel processors easily.
**Additional context**
There has been a previous issue about OTLP/HTTP support in the OpenDistro project: <https://github.com/opendistro-for-elasticsearch/data-prepper/issues/283>. | Support OpenTelemetry OTLP/HTTP as addition to OTLP/gRPC | https://api.github.com/repos/opensearch-project/data-prepper/issues/4983/comments | 5 | 2024-09-27T12:34:18Z | 2024-10-14T07:30:09Z | https://github.com/opensearch-project/data-prepper/issues/4983 | 2,552,826,304 | 4,983 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Since both the OpenSearch and OpenSearch Dashboards official Docker images have ARM64 variants, it would be nice if data-prepper did too
**Describe the solution you'd like**
Ideally official builds would push a multi-architecture manifest containing both amd64 and arm64 variants to Docker Hub
| Official Docker images for ARM64 | https://api.github.com/repos/opensearch-project/data-prepper/issues/4981/comments | 2 | 2024-09-26T08:51:08Z | 2024-09-27T17:11:01Z | https://github.com/opensearch-project/data-prepper/issues/4981 | 2,549,941,980 | 4,981 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Keys with "." in them are not able to be processed.
When ingesting logs from FluentBit -> S3 -> SQS -> Data Prepper / OSIS -> OpenSearch any key that has a dot "." in it is throwing an error on ingestion, see below error from OSIS. I believe this is because the Kubernetes metadata in labels contains dots.
```
2024-09-24T14:08:46.611 [s3-log-pipeline-sink-worker-2-thread-2] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation = Index, status = 400, error = can't merge a non object mapping [kubernetes.labels.app] with an object mapping
```
The JSON blob looks as such
```
"labels": {
"app": "fooservice",
"app.kubernetes.io/component": "foo",
"app.kubernetes.io/instance": "foo-in-cluster",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/name": "fooservice",
"app.kubernetes.io/version": "somelonghash",
```
If these labels aren't in the log ingestion succeeds. One challenge is that the labels vary from service to service so predicting what they will be is difficult. It would be preferable if there was a way to say "If the key found has a "." (or some other char) substitute it with "_" or whatever the user chooses.
It is possible that this is able to be done and I am unaware on how to do so.
**To Reproduce**
Attempt to process and ingest a log file to OpenSearch with Data Prepper with a log that has Keys that contain dots "."
Such as:
```
"labels": {
"app": "fooservice",
"app.kubernetes.io/component": "foo",
"app.kubernetes.io/instance": "foo-in-cluster",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/name": "fooservice",
"app.kubernetes.io/version": "somelonghash",
```
**Expected behavior**
The key in double quotes is processed as a key even when dots are present.
**Environment (please complete the following information):**
- AWS Managed OpenSearch Ingestion Service
**Additional context**
Seems this is related and was merged with a Fix. But it is unclear on how to resolve this issue.
https://github.com/opensearch-project/data-prepper/issues/450
| [BUG] Dots Discovered Key Names | https://api.github.com/repos/opensearch-project/data-prepper/issues/4977/comments | 4 | 2024-09-24T17:00:51Z | 2024-11-05T20:49:25Z | https://github.com/opensearch-project/data-prepper/issues/4977 | 2,545,916,597 | 4,977 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. It would be nice to have [...]
- No.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
- The `processor` documentation is missing sub-documentation for `otel_trace_raw` and `service_map_stateful`. I hope the documentation can add some explanation.
- In https://opensearch.org/docs/latest/data-prepper/common-use-cases/trace-analytics/#processor, it is mentioned that there are `processors` named `service_map_stateful` and `otel_traces_raw`.
- 
- Interestingly, https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/processors/ does not mention the existence of `service_map_stateful` and `otel_traces_raw` at all.
- 
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
- Null.
**Additional context**
Add any other context or screenshots about the feature request here.
- Null. | The `processor` documentation is missing sub-documentation for `otel_trace_raw` and `service_map_stateful` | https://api.github.com/repos/opensearch-project/data-prepper/issues/4976/comments | 2 | 2024-09-24T07:51:48Z | 2024-10-03T20:32:22Z | https://github.com/opensearch-project/data-prepper/issues/4976 | 2,544,633,850 | 4,976 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. It would be nice to have [...]
- No.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
- Attribute `index_type` of `opensearch` in `sink` lacks further explanation. See https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sinks/opensearch/#configuration-options .
> Tells the sink plugin what type of data it is handling. Valid values are `custom`, `trace-analytics-raw`, `trace-analytics-service-map`, or `management-disabled`. Default is `custom`.
- This brings up a question: what are the functions of `custom`, `trace-analytics-raw`, `trace-analytics-service-map` and `management-disabled`? I can't find any documentation explaining them.
- 
- I hope the documentation can add some explanation.
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
- Null.
**Additional context**
Add any other context or screenshots about the feature request here.
- Null.
| Attribute `index_type` of `opensearch` in `sink` lacks further explanation | https://api.github.com/repos/opensearch-project/data-prepper/issues/4975/comments | 0 | 2024-09-24T07:32:22Z | 2024-10-01T19:40:20Z | https://github.com/opensearch-project/data-prepper/issues/4975 | 2,544,589,724 | 4,975 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I would like to use data-prepper to export AWS Neptune graph database data and also ingest stream data changes from Neptune to OpenSearch Service domains and collections.
**Describe the solution you'd like**
Support a new Neptune source plugin to
- integrate with `neptune-export` tool to export the entire Neptune graph database to Opensearch Sink.
- read the Neptune stream data and ingest any change data capture events to Opensearch Sink.
**Additional context**
- Neptune export tool https://docs.aws.amazon.com/neptune/latest/userguide/neptune-export.html
- Neptune Streams https://docs.aws.amazon.com/neptune/latest/userguide/streams.html
| Support Full load and CDC from AWS Neptune | https://api.github.com/repos/opensearch-project/data-prepper/issues/4973/comments | 0 | 2024-09-23T12:17:29Z | 2024-10-01T19:41:08Z | https://github.com/opensearch-project/data-prepper/issues/4973 | 2,542,457,107 | 4,973 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
When running Data Prepper, we sometimes want to shut down Data Prepper faster than it is configured for.
Presently, the Data Prepper shutdown process will wait for the buffers to drain.
```
buffer:
kafka:
drain_timeout: 30m
```
In some cases, we want to drain faster, say in 10 minutes.
**Describe the solution you'd like**
Update the shutdown API to allow for an alternate drain time.
```
curl -X POST http://localhost:4900/shutdown?drainTime=10m
```
| Variable drain time when shutting down via shutdown API | https://api.github.com/repos/opensearch-project/data-prepper/issues/4966/comments | 1 | 2024-09-20T21:35:18Z | 2024-09-23T23:03:22Z | https://github.com/opensearch-project/data-prepper/issues/4966 | 2,539,716,067 | 4,966 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user with data in the format of
```
{
"array_one": [
{
"array_two": [
{
"field_one": {
"json_string": "{ \"key\": \"value\" }"
}
},
{
"field_one": {
"json_string": { \"key_two\": \"value-two\" }
}
}
]
},
{
"array_two": [
{
"field_one": {
"json_string": "{ \"key\": \"value\" }"
}
},
{
"field_one": {
"json_string": { \"key_two\": \"value-two\" }
}
}
]
}
]
}
```
I would like to parse the nested JSON under `json_string` in all objects of the nested arrays. This is not currently possible with the `parse_json` processor, as it only supports JSON Pointer.
**Describe the solution you'd like**
A new configuration option to support this use case. One idea would be to utilize JSONPath, as the above example can be represented with this JSONPath expression:
```
- parse_json:
source_path: "array_one[*].array_two[*].field_one.json_string"
```
And this would convert all of the nested JSON string fields to structured JSON.
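A plain Python sketch of the proposed behavior for that path (for illustration only; a real implementation would presumably use a proper JSONPath library):

```python
import json

def parse_nested_json_strings(node, path):
    """Walk the path segments; '*' fans out over list elements. At the
    end of the path, replace the JSON string with its parsed value."""
    key, rest = path[0], path[1:]
    if key == "*":
        for item in node:
            parse_nested_json_strings(item, rest)
    elif rest:
        parse_nested_json_strings(node[key], rest)
    else:
        node[key] = json.loads(node[key])

doc = {
    "array_one": [
        {"array_two": [
            {"field_one": {"json_string": "{ \"key\": \"value\" }"}},
            {"field_one": {"json_string": "{ \"key_two\": \"value-two\" }"}},
        ]}
    ]
}

# Equivalent of source_path "array_one[*].array_two[*].field_one.json_string"
parse_nested_json_strings(
    doc, ["array_one", "*", "array_two", "*", "field_one", "json_string"]
)
```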
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add support for parse_json on complex json structures with arrays and nested arrays | https://api.github.com/repos/opensearch-project/data-prepper/issues/4961/comments | 1 | 2024-09-19T17:37:01Z | 2025-03-27T05:39:46Z | https://github.com/opensearch-project/data-prepper/issues/4961 | 2,536,993,850 | 4,961 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I have data-prepper creating templates and sending data with no issue
This is the config
```
sink:
- opensearch:
hosts: ["$${OS_HOST}"]
username: $${OS_USER}
password: $${OS_PASS}
index_type: trace-analytics-raw
template_type: index-template
max_retries: 10
```
My cluster sets `number_of_shards=5` as the default value for new indices being rolled over.
So, I tried adding an extra line to change that setting as stated [here in the docs](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sinks/opensearch/#configuration-options)
```
number_of_shards: 1
```
But as soon as I restart the service, it starts to fail with the following error
```
Request failed: [x_content_parse_exception] [1:1118] [index_template] unknown field [settings]
```
**To Reproduce**
Steps to reproduce the behavior:
1. Set a sink to use OpenSearch as shown above using `number_of_shards: N`
2. Restart the service
**Expected behavior**
Templates should be created with the configured number of shards.
**Environment (please complete the following information):**
- Amazon OpenSearch service
- Version [2.11]
| [BUG] data-prepper sink fails to create templates when `number_of_shards` is passed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4960/comments | 1 | 2024-09-19T13:41:19Z | 2024-10-01T19:44:13Z | https://github.com/opensearch-project/data-prepper/issues/4960 | 2,536,394,553 | 4,960 |