issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 262k ⌀ | issue_title stringlengths 1 1.02k | issue_comments_url stringlengths 53 116 | issue_comments_count int64 0 2.49k | issue_created_at stringdate 1999-03-17 02:06:42 2025-06-23 11:41:49 | issue_updated_at stringdate 2000-02-10 06:43:57 2025-06-23 11:43:00 | issue_html_url stringlengths 34 97 | issue_github_id int64 132 3.17B | issue_number int64 1 215k |
|---|---|---|---|---|---|---|---|---|---|
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `data-prepper-config.yaml` supports passing a default `sts_role_arn`. However, some plugins still check for this role specifically and treat it as required. For example, the S3 source uses the `sts_role_arn` in some cases to determine the bucket owner.
**Describe the solution you'd like**
Add support for plugins to access the default pipeline role via `AwsCredentialsSupplier`, similar to how the default region is accessible.
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
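To make the ask concrete, here is a minimal sketch of what the accessor could look like. The `getDefaultStsRoleArn()` method name, the fallback helper, and the ARN value are all assumptions for illustration; today's `AwsCredentialsSupplier` only exposes the default region this way.

```java
import java.util.Optional;

// Hypothetical extension of AwsCredentialsSupplier: mirrors the existing
// default-region accessor with a default-role accessor. The method name
// getDefaultStsRoleArn() is an assumption, not the actual interface.
interface AwsCredentialsSupplierSketch {
    Optional<String> getDefaultStsRoleArn();
}

public class DefaultRoleDemo {
    // A plugin could fall back to the pipeline-level role when its own
    // sts_role_arn is unset, instead of treating the setting as required.
    static String resolveRole(String pluginRole, AwsCredentialsSupplierSketch supplier) {
        return pluginRole != null ? pluginRole : supplier.getDefaultStsRoleArn().orElse(null);
    }

    public static void main(String[] args) {
        AwsCredentialsSupplierSketch supplier =
                () -> Optional.of("arn:aws:iam::123456789012:role/pipeline-role");
        // No plugin-level role configured, so the pipeline default is used.
        System.out.println(resolveRole(null, supplier));
    }
}
```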
| Allow plugins to access the default pipeline role | https://api.github.com/repos/opensearch-project/data-prepper/issues/4958/comments | 2 | 2024-09-18T21:54:18Z | 2025-06-10T15:19:15Z | https://github.com/opensearch-project/data-prepper/issues/4958 | 2,534,815,362 | 4,958 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I'm seeing duplicate documents being ingested into my sink when using an `opensearch` source that points to an Amazon OpenSearch domain running Elasticsearch 7.10. After looking into the Elastic documentation and the code, I believe this is the root cause of the duplicates:
> We recommend you include a tiebreaker field in your sort. This tiebreaker field should contain a unique value for each document. If you don’t include a tiebreaker field, your paged results could miss or duplicate hits.
https://www.elastic.co/guide/en/elasticsearch/reference/7.10/paginate-search-results.html
The scroll request that's being created in the ElasticsearchAccessor only sorts based on a single field: https://github.com/opensearch-project/data-prepper/blob/2919e9942e51dcb02547b209d6ee3a3fe420944f/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/source/opensearch/worker/client/ElasticsearchAccessor.java#L159-L164
**Describe the solution you'd like**
Add a secondary sort field to the Elasticsearch Scroll request.
Something similar is already done for OpenSearch PointInTime requests: https://github.com/opensearch-project/data-prepper/blob/87c560a3964175231b35f87fa6ab0cbc626271bb/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/source/opensearch/worker/client/OpenSearchAccessor.java#L114-L131
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
Add any other context or screenshots about the feature request here.
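As a sanity check on why the tiebreaker matters, the behavior can be sketched with plain Java comparators (the field names here are illustrative, not the actual index mapping):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TiebreakerDemo {
    record Doc(String id, long timestamp) {}

    // Sort by the primary field, then by a unique tiebreaker (analogous to
    // adding a _doc/_id sort), so the order is total and page boundaries
    // between scroll requests are stable.
    static String sortedIds(List<Doc> docs) {
        List<Doc> copy = new ArrayList<>(docs);
        copy.sort(Comparator.comparingLong(Doc::timestamp).thenComparing(Doc::id));
        StringBuilder order = new StringBuilder();
        for (Doc d : copy) order.append(d.id());
        return order.toString();
    }

    public static void main(String[] args) {
        // Two docs tie on the primary sort field; without a tiebreaker their
        // relative order is unspecified, which is what lets paged results
        // miss or duplicate hits.
        List<Doc> docs = List.of(new Doc("b", 100), new Doc("a", 100), new Doc("c", 50));
        System.out.println(sortedIds(docs)); // cab
    }
}
```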
| Reduce duplicates from the opensearch source for Scroll searches | https://api.github.com/repos/opensearch-project/data-prepper/issues/4956/comments | 0 | 2024-09-18T17:53:57Z | 2024-10-01T19:45:42Z | https://github.com/opensearch-project/data-prepper/issues/4956 | 2,534,335,601 | 4,956 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It would be nice to have a `ReplaceStringProcessor` that uses the Java `String.replace()` method, which is highly optimized for simple string replacements. It uses a more efficient algorithm than the `Matcher.replaceAll()` method used in [SubstituteStringProcessor](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/mutate-string-processors/src/main/java/org/opensearch/dataprepper/plugins/processor/mutatestring/SubstituteStringProcessor.java#L57)
**Describe the solution you'd like**
ReplaceStringProcessor implementation
```java
final String newValue = value.replace(entry.getFrom(), entry.getTo());
recordEvent.put(entry.getSource(), newValue);
```
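Beyond performance, there's a correctness difference worth noting: `replace()` treats its argument as a literal string, while `replaceAll()` interprets it as a regex. A minimal sketch:

```java
public class ReplaceDemo {
    // Literal replacement, as the proposed ReplaceStringProcessor would do.
    static String literal(String value, String from, String to) {
        return value.replace(from, to);
    }

    // Regex replacement, as SubstituteStringProcessor does today.
    static String regex(String value, String from, String to) {
        return value.replaceAll(from, to);
    }

    public static void main(String[] args) {
        // '.' is a plain character for replace(), but "match any character"
        // for replaceAll(), so the two produce very different results.
        System.out.println(literal("a.b.c", ".", "_")); // a_b_c
        System.out.println(regex("a.b.c", ".", "_"));   // _____
    }
}
```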
| Add ReplaceStringProcessor for simple string substitution that doesn't involve regex | https://api.github.com/repos/opensearch-project/data-prepper/issues/4953/comments | 1 | 2024-09-17T19:46:52Z | 2024-09-19T02:56:19Z | https://github.com/opensearch-project/data-prepper/issues/4953 | 2,531,977,324 | 4,953 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When events are sent to opensearch, the index is usually created if it doesn't exist. This happens for all data except when dataprepper receives traces. When dataprepper starts up, it creates the necessary tracing indexes for spans and servicemaps once, but never again unless restarted.
If the index is removed while dataprepper is running, an "index is missing" error shows up extremely often, possibly filling up the buffer and eventually causing packet drops.
**To Reproduce**
Send traces to dataprepper as usual, and you will see the trace index on the management/index page in the "opensearch dashboards gui".
However, if you delete the index, it never gets recreated, even if new traces are still being sent to dataprepper! Only restarting dataprepper seems to recreate the index. This can probably be easily fixed so that indexes are recreated if they don't exist in opensearch.
**Expected behavior**
1: the Span Index needs to be re-created if it doesn't exist when new events come to dataprepper. (And when they are sent to opensearch).
2: the serviceMap index needs to be re-created if it doesn't exist.
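A sketch of the expected check-then-create flow, modeling the cluster as a plain set (the index name and the flow are illustrative, not the actual sink code):

```java
import java.util.HashSet;
import java.util.Set;

public class IndexRecreateDemo {
    final Set<String> cluster = new HashSet<>(); // stand-in for the opensearch cluster

    // Expected behavior: check for the index on every write path, not only at
    // startup, and recreate it if it has been deleted in the meantime.
    void write(String index, String document) {
        if (!cluster.contains(index)) {
            cluster.add(index); // stand-in for a create-index call
        }
        // ... send the document to the index ...
    }

    public static void main(String[] args) {
        IndexRecreateDemo sink = new IndexRecreateDemo();
        sink.write("otel-v1-apm-span-000001", "{}");
        sink.cluster.remove("otel-v1-apm-span-000001"); // simulate manual index deletion
        sink.write("otel-v1-apm-span-000001", "{}");    // index gets recreated, not skipped
        System.out.println(sink.cluster.contains("otel-v1-apm-span-000001"));
    }
}
```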
**Environment (please complete the following information):**
I tried this with dataprepper on Kubernetes using the dataprepper Helm chart.
**Additional context**
I tried this using the otel demo apps. It seems pretty consistent with all their traces. If the index is removed, it never gets re-created again unless dataprepper is restarted. Neither the "service-map index" nor the "span index" get recreated. | [BUG] Tracing index is not re-created in opensearch. Dataprepper needs restart? | https://api.github.com/repos/opensearch-project/data-prepper/issues/4951/comments | 3 | 2024-09-17T10:07:46Z | 2024-10-14T08:51:01Z | https://github.com/opensearch-project/data-prepper/issues/4951 | 2,530,698,419 | 4,951 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
During a load test of an RDS source pipeline, which uses the s3 sink to send data to the S3 buffer, I noticed this error:
```
2024-09-09T17:26:14.680 [sdk-async-response-5-20] ERROR org.opensearch.dataprepper.plugins.sink.s3.S3SinkService - Exception occurred while uploading records to s3 bucket: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Acquire operation took longer than the configured maximum time. This indicates that a request cannot get a connection from the pool within the specified maximum time. This can be due to high request rate.
Consider taking any of the following actions to mitigate the issue: increase max connections, increase acquire timeout, or slowing the request rate.
Increasing the max connections can increase client throughput (unless the network interface is already fully utilized), but can eventually start to hit operation system limitations on the number of file descriptors used by the process. If you already are fully utilizing your network interface or cannot further increase your connection count, increasing the acquire timeout gives extra time for requests to acquire a connection before timing out. If the connections doesn't free up, the subsequent requests will still timeout.
If the above mechanisms are not able to fix the issue, try smoothing out your requests so that large traffic bursts cannot overload the client, being more efficient with the number of times you need to call AWS, or by increasing the number of hosts sending requests.
```
The default value for max connections is 50 and acquire timeout is 10s.
**Describe the solution you'd like**
Make max connections and acquire timeout configurable in the pipeline config on S3 sink client
```yaml
...
sink:
- s3:
s3_client:
max_connections: 100
acquire_timeout: 10s
...
```
It's also good to have the sdk metrics enabled on the client.
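Conceptually, the pool behaves like a semaphore with `max_connections` permits and a bounded wait of `acquire_timeout`. This stdlib-only sketch (not the actual SDK client code) shows why an exhausted pool surfaces as an acquire timeout; in the AWS SDK for Java v2 these knobs should correspond to the async HTTP client's `maxConcurrency` and `connectionAcquisitionTimeout` settings:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class AcquireTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        // max_connections = 1: a single permit models one pooled connection.
        Semaphore pool = new Semaphore(1);

        // First request acquires the only connection immediately.
        System.out.println(pool.tryAcquire(50, TimeUnit.MILLISECONDS)); // true

        // Second request waits up to acquire_timeout, then gives up -- the
        // analogue of "Acquire operation took longer than the configured
        // maximum time" under a high request rate.
        System.out.println(pool.tryAcquire(50, TimeUnit.MILLISECONDS)); // false
    }
}
```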
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
N/A | Make max connections and acquire timeout configurable on S3 sink client | https://api.github.com/repos/opensearch-project/data-prepper/issues/4949/comments | 1 | 2024-09-16T18:06:09Z | 2024-09-20T03:28:04Z | https://github.com/opensearch-project/data-prepper/issues/4949 | 2,529,129,791 | 4,949 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>setuptools-68.0.0-py3-none-any.whl</b></p></summary>
<p>Easily download, build, install, upgrade, and uninstall Python packages</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c7/42/be1c7bbdd83e1bfb160c94b9cafd8e25efc7400346cf7ccdbdb452c467fa/setuptools-68.0.0-py3-none-any.whl">https://files.pythonhosted.org/packages/c7/42/be1c7bbdd83e1bfb160c94b9cafd8e25efc7400346cf7ccdbdb452c467fa/setuptools-68.0.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (setuptools version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-6345](https://www.mend.io/vulnerability-database/CVE-2024-6345) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.0 | setuptools-68.0.0-py3-none-any.whl | Direct | setuptools - 70.0.0 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2024-6345</summary>
### Vulnerable Library - <b>setuptools-68.0.0-py3-none-any.whl</b></p>
<p>Easily download, build, install, upgrade, and uninstall Python packages</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c7/42/be1c7bbdd83e1bfb160c94b9cafd8e25efc7400346cf7ccdbdb452c467fa/setuptools-68.0.0-py3-none-any.whl">https://files.pythonhosted.org/packages/c7/42/be1c7bbdd83e1bfb160c94b9cafd8e25efc7400346cf7ccdbdb452c467fa/setuptools-68.0.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **setuptools-68.0.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability in the package_index module of pypa/setuptools versions up to 69.1.1 allows for remote code execution via its download functions. These functions, which are used to download packages from URLs provided by users or retrieved from package index servers, are susceptible to code injection. If these functions are exposed to user-controlled inputs, such as package URLs, they can execute arbitrary commands on the system. The issue is fixed in version 70.0.
<p>Publish Date: 2024-07-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-6345>CVE-2024-6345</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2024-6345">https://www.cve.org/CVERecord?id=CVE-2024-6345</a></p>
<p>Release Date: 2024-07-15</p>
<p>Fix Resolution: setuptools - 70.0.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | setuptools-68.0.0-py3-none-any.whl: 1 vulnerabilities (highest severity is: 7.0) - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4940/comments | 2 | 2024-09-11T21:01:46Z | 2024-10-14T18:18:09Z | https://github.com/opensearch-project/data-prepper/issues/4940 | 2,520,765,190 | 4,940 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>requests-2.31.0-py3-none-any.whl</b></p></summary>
<p>Python HTTP for Humans.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl">https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (requests version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-35195](https://www.mend.io/vulnerability-database/CVE-2024-35195) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.6 | requests-2.31.0-py3-none-any.whl | Direct | requests - 2.32.0 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2024-35195</summary>
### Vulnerable Library - <b>requests-2.31.0-py3-none-any.whl</b></p>
<p>Python HTTP for Humans.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl">https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **requests-2.31.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Requests is a HTTP library. Prior to 2.32.0, when making requests through a Requests `Session`, if the first request is made with `verify=False` to disable cert verification, all subsequent requests to the same host will continue to ignore cert verification regardless of changes to the value of `verify`. This behavior will continue for the lifecycle of the connection in the connection pool. This vulnerability is fixed in 2.32.0.
<p>Publish Date: 2024-05-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-35195>CVE-2024-35195</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/psf/requests/security/advisories/GHSA-9wx4-h78v-vm56">https://github.com/psf/requests/security/advisories/GHSA-9wx4-h78v-vm56</a></p>
<p>Release Date: 2024-05-20</p>
<p>Fix Resolution: requests - 2.32.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | requests-2.31.0-py3-none-any.whl: 1 vulnerabilities (highest severity is: 5.6) - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4939/comments | 2 | 2024-09-11T21:01:44Z | 2024-10-14T18:18:13Z | https://github.com/opensearch-project/data-prepper/issues/4939 | 2,520,765,135 | 4,939 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Werkzeug-2.2.3-py3-none-any.whl</b></p></summary>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl">https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (Werkzeug version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-46136](https://www.mend.io/vulnerability-database/CVE-2023-46136) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 8.0 | Werkzeug-2.2.3-py3-none-any.whl | Direct | 2.3.8 | ✅ |
| [CVE-2024-34069](https://www.mend.io/vulnerability-database/CVE-2024-34069) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | Werkzeug-2.2.3-py3-none-any.whl | Direct | 3.0.3 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-46136</summary>
### Vulnerable Library - <b>Werkzeug-2.2.3-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl">https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Werkzeug-2.2.3-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a comprehensive WSGI web application library. If an upload of a file that starts with CR or LF and then is followed by megabytes of data without these characters: all of these bytes are appended chunk by chunk into internal bytearray and lookup for boundary is performed on growing buffer. This allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it. The amount of CPU time required can block worker processes from handling legitimate requests. This vulnerability has been patched in version 3.0.1.
<p>Publish Date: 2023-10-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-46136>CVE-2023-46136</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-hrfv-mqp8-q5rw">https://github.com/pallets/werkzeug/security/advisories/GHSA-hrfv-mqp8-q5rw</a></p>
<p>Release Date: 2023-10-24</p>
<p>Fix Resolution: 2.3.8</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2024-34069</summary>
### Vulnerable Library - <b>Werkzeug-2.2.3-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl">https://files.pythonhosted.org/packages/f6/f8/9da63c1617ae2a1dec2fbf6412f3a0cfe9d4ce029eccbda6e1e4258ca45f/Werkzeug-2.2.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Werkzeug-2.2.3-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a comprehensive WSGI web application library. The debugger in affected versions of Werkzeug can allow an attacker to execute code on a developer's machine under some circumstances. This requires the attacker to get the developer to interact with a domain and subdomain they control, and enter the debugger PIN, but if they are successful it allows access to the debugger even if it is only running on localhost. This also requires the attacker to guess a URL in the developer's application that will trigger the debugger. This vulnerability is fixed in 3.0.3.
<p>Publish Date: 2024-05-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-34069>CVE-2024-34069</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/pallets/werkzeug/security/advisories/GHSA-2g68-c3qc-8985">https://github.com/pallets/werkzeug/security/advisories/GHSA-2g68-c3qc-8985</a></p>
<p>Release Date: 2024-05-06</p>
<p>Fix Resolution: 3.0.3</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | Werkzeug-2.2.3-py3-none-any.whl: 2 vulnerabilities (highest severity is: 8.0) - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4938/comments | 2 | 2024-09-11T21:01:42Z | 2024-10-14T18:18:05Z | https://github.com/opensearch-project/data-prepper/issues/4938 | 2,520,765,089 | 4,938 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-2.0.7-py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/b2/b157855192a68541a91ba7b2bbcb91f1b4faa51f8bae38d8005c034be524/urllib3-2.0.7-py3-none-any.whl">https://files.pythonhosted.org/packages/d2/b2/b157855192a68541a91ba7b2bbcb91f1b4faa51f8bae38d8005c034be524/urllib3-2.0.7-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (urllib3 version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-37891](https://www.mend.io/vulnerability-database/CVE-2024-37891) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 4.4 | urllib3-2.0.7-py3-none-any.whl | Direct | 2.2.2 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2024-37891</summary>
### Vulnerable Library - <b>urllib3-2.0.7-py3-none-any.whl</b></p>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/b2/b157855192a68541a91ba7b2bbcb91f1b4faa51f8bae38d8005c034be524/urllib3-2.0.7-py3-none-any.whl">https://files.pythonhosted.org/packages/d2/b2/b157855192a68541a91ba7b2bbcb91f1b4faa51f8bae38d8005c034be524/urllib3-2.0.7-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-2.0.7-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
urllib3 is a user-friendly HTTP client library for Python. When using urllib3's proxy support with `ProxyManager`, the `Proxy-Authorization` header is only sent to the configured proxy, as expected. However, when sending HTTP requests *without* using urllib3's proxy support, it's possible to accidentally configure the `Proxy-Authorization` header even though it won't have any effect as the request is not using a forwarding proxy or a tunneling proxy. In those cases, urllib3 doesn't treat the `Proxy-Authorization` HTTP header as one carrying authentication material and thus doesn't strip the header on cross-origin redirects. Because this is a highly unlikely scenario, we believe the severity of this vulnerability is low for almost all users. Out of an abundance of caution urllib3 will automatically strip the `Proxy-Authorization` header during cross-origin redirects to avoid the small chance that users are doing this on accident. Users should use urllib3's proxy support or disable automatic redirects to achieve safe processing of the `Proxy-Authorization` header, but we still decided to strip the header by default in order to further protect users who aren't using the correct approach. We believe the number of usages affected by this advisory is low. It requires all of the following to be true to be exploited: 1. Setting the `Proxy-Authorization` header without using urllib3's built-in proxy support. 2. Not disabling HTTP redirects. 3. Either not using an HTTPS origin server or for the proxy or target origin to redirect to a malicious origin. Users are advised to update to either version 1.26.19 or version 2.2.2. Users unable to upgrade may use the `Proxy-Authorization` header with urllib3's `ProxyManager`, disable HTTP redirects using `redirects=False` when sending requests, or not user the `Proxy-Authorization` header as mitigations.
<p>Publish Date: 2024-06-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-37891>CVE-2024-37891</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.4</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-34jh-p97f-mpxf">https://github.com/urllib3/urllib3/security/advisories/GHSA-34jh-p97f-mpxf</a></p>
<p>Release Date: 2024-06-17</p>
<p>Fix Resolution: 2.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | urllib3-2.0.7-py3-none-any.whl: 1 vulnerabilities (highest severity is: 4.4) - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4937/comments | 2 | 2024-09-11T21:01:40Z | 2024-10-14T18:18:08Z | https://github.com/opensearch-project/data-prepper/issues/4937 | 2,520,765,033 | 4,937 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zipp-3.15.0-py3-none-any.whl</b></p></summary>
<p>Backport of pathlib-compatible object wrapper for zip files</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/5b/fa/c9e82bbe1af6266adf08afb563905eb87cab83fde00a0a08963510621047/zipp-3.15.0-py3-none-any.whl">https://files.pythonhosted.org/packages/5b/fa/c9e82bbe1af6266adf08afb563905eb87cab83fde00a0a08963510621047/zipp-3.15.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (zipp version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-5569](https://www.mend.io/vulnerability-database/CVE-2024-5569) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low | 3.3 | zipp-3.15.0-py3-none-any.whl | Direct | 3.19.1 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> CVE-2024-5569</summary>
### Vulnerable Library - <b>zipp-3.15.0-py3-none-any.whl</b></p>
<p>Backport of pathlib-compatible object wrapper for zip files</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/5b/fa/c9e82bbe1af6266adf08afb563905eb87cab83fde00a0a08963510621047/zipp-3.15.0-py3-none-any.whl">https://files.pythonhosted.org/packages/5b/fa/c9e82bbe1af6266adf08afb563905eb87cab83fde00a0a08963510621047/zipp-3.15.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **zipp-3.15.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/400713b650608dc2d58750953403a3f5814062c2">400713b650608dc2d58750953403a3f5814062c2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Denial of Service (DoS) vulnerability exists in the jaraco/zipp library, affecting all versions prior to 3.19.1. The vulnerability is triggered when processing a specially crafted zip file that leads to an infinite loop. This issue also impacts the zipfile module of CPython, as features from the third-party zipp library are later merged into CPython, and the affected code is identical in both projects. The infinite loop can be initiated through the use of functions affecting the `Path` module in both zipp and zipfile, such as `joinpath`, the overloaded division operator, and `iterdir`. Although the infinite loop is not resource exhaustive, it prevents the application from responding. The vulnerability was addressed in version 3.19.1 of jaraco/zipp.
<p>Publish Date: 2024-07-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-5569>CVE-2024-5569</a></p>
</p>
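The affected entry points (`joinpath`, the overloaded division operator, and `iterdir`) mirror the stdlib `zipfile.Path` API that zipp backports. A usage sketch on a well-formed in-memory archive (this demonstrates the API surface only, not the crafted archive that triggers the loop):

```python
import io
import zipfile

# Build a tiny in-memory archive and walk it with the Path API.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docs/readme.txt", "hello")

root = zipfile.Path(zipfile.ZipFile(buf))
docs = root.joinpath("docs")              # one of the affected entry points
names = [p.name for p in docs.iterdir()]  # another affected entry point
print(names)
```

A crafted archive can drive these same calls into an infinite loop on vulnerable versions, which is why upgrading to zipp 3.19.1 (or a patched CPython) is the recommended fix.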
<p></p>
### CVSS 3 Score Details (<b>3.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.com/bounties/be898306-11f9-46b4-b28c-f4c4aa4ffbae">https://huntr.com/bounties/be898306-11f9-46b4-b28c-f4c4aa4ffbae</a></p>
<p>Release Date: 2024-07-09</p>
<p>Fix Resolution: 3.19.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | zipp-3.15.0-py3-none-any.whl: 1 vulnerabilities (highest severity is: 3.3) - autoclosed | https://api.github.com/repos/opensearch-project/data-prepper/issues/4936/comments | 2 | 2024-09-11T21:01:38Z | 2024-10-14T18:18:02Z | https://github.com/opensearch-project/data-prepper/issues/4936 | 2,520,764,989 | 4,936 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We have introduced some custom annotations for generating JSON schema for plugins, e.g. `@AlsoRequires`. However, a validation mechanism has been missing, which could allow plugin configurations that violate these annotations to go undetected.
**Describe the solution you'd like**
Validate plugin config instances against the custom annotations
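As a sketch of what such validation could check (Python for illustration; representing `@AlsoRequires`-style rules as a plain map is an assumption, not Data Prepper's actual annotation API):

```python
def validate_also_requires(config: dict, rules: dict) -> list:
    """For each configured field, verify that every field it declares a
    dependency on (AlsoRequires-style) is also present in the config."""
    errors = []
    for field, required_fields in rules.items():
        if field in config:
            for dep in required_fields:
                if dep not in config:
                    errors.append(f"'{field}' also requires '{dep}'")
    return errors
```

For example, `validate_also_requires({"sts_role_arn": "arn:..."}, {"sts_role_arn": ["region"]})` would report that `region` is missing.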
| Introduce validation mechanism for custom json schema annotation | https://api.github.com/repos/opensearch-project/data-prepper/issues/4935/comments | 0 | 2024-09-11T18:29:25Z | 2024-09-17T19:32:32Z | https://github.com/opensearch-project/data-prepper/issues/4935 | 2,520,452,936 | 4,935 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
We have noticed that failures writing to the OpenSearch sink may result in the `opensearch` sink failing to continue to write, even after the issue is resolved.
**To Reproduce**
1. Create this pipeline
```
opensearch-retry-pipeline:
workers: 2
delay: 100
source:
http:
sink:
- opensearch:
hosts: [ "https://opensearch:9200" ]
insecure: true
username: admin
password: admin
index: test_forbidden
```
2. Write a message.
```
curl http://localhost:2021/log/ingest -X POST -H 'Content-Type: application/json' -d '[{"test": "hello"}]'
```
3. Search - the new document is available
```
GET test_forbidden/_search
{
"query": {
"match_all": {}
}
}
```
4. Add a write block
```
PUT test_forbidden/_block/write?timeout=30m
```
5. Write more messages
```
curl http://localhost:2021/log/ingest -X POST -H 'Content-Type: application/json' -d '[{"test": "hello2"}]'
curl http://localhost:2021/log/ingest -X POST -H 'Content-Type: application/json' -d '[{"test": "hello3"}]'
curl http://localhost:2021/log/ingest -X POST -H 'Content-Type: application/json' -d '[{"test": "hello4"}]'
```
6. Search - no new documents
```
GET test_forbidden/_search
{
"query": {
"match_all": {}
}
}
```
Also, see Data Prepper logs
```
data-prepper | 2024-09-10T18:24:11,693 [opensearch-retry-pipeline-sink-worker-2-thread-2] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - operation = Index, error = index [test_forbidden] blocked by: [FORBIDDEN/8/index write (api)];
```
7. Remove the write block
```
PUT test_forbidden/_settings
{
"index.blocks.read_only_allow_delete": null
}
```
8. Search
```
GET test_forbidden/_search
{
"query": {
"match_all": {}
}
}
```
At this point, we should see new documents. But, they are not present.
**Expected behavior**
At step 8, we should see the documents.
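One way to express the expected behavior is that a write block is a transient, retryable failure rather than a permanent one. A hypothetical classifier (the phrase list is an illustrative assumption, not Data Prepper's actual retry logic):

```python
# Error fragments that indicate a transient condition worth retrying.
RETRYABLE_PHRASES = ("blocked by", "too_many_requests", "timeout")

def is_retryable(error_message: str) -> bool:
    """A write block such as 'blocked by: [FORBIDDEN/8/index write (api)]'
    should be retried until the block is lifted, not treated as fatal."""
    msg = error_message.lower()
    return any(phrase in msg for phrase in RETRYABLE_PHRASES)
```

Under this classification, the `FORBIDDEN/8/index write` error in the logs above would keep being retried, so documents would flow again once the block is removed.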
**Environment (please complete the following information):**
Data Prepper 2.9.0
| [BUG] OpenSearch sink is not resuming after failures | https://api.github.com/repos/opensearch-project/data-prepper/issues/4932/comments | 2 | 2024-09-10T18:39:03Z | 2024-10-07T19:35:42Z | https://github.com/opensearch-project/data-prepper/issues/4932 | 2,517,346,377 | 4,932 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper plugins have Gradle module names such as `opensearch`, `common`, `kafka-plugins`, etc. These names work well in Gradle, but in the Docker image (and eventually on Maven) they can be confusing.
In particular, I noticed this with the `opensearch-2.9.0` jar file. I thought this was OpenSearch 2.9.0. This also caused vulnerability scans to detect that this had a vulnerability which is valid for OpenSearch 2.9.0, not Data Prepper's `opensearch` sink/source.
```
docker run opensearchproject/data-prepper:2.9.0 ls /usr/share/data-prepper/lib/opensearch-2.9.0.jar
/usr/share/data-prepper/lib/opensearch-2.9.0.jar
```
Running a jar listing reveals that this is Data Prepper's `opensearch` plugin.
```
docker run opensearchproject/data-prepper:2.9.0 jar tvf /usr/share/data-prepper/lib/opensearch-2.9.0.jar | grep Sink
3406 Wed Aug 28 15:35:36 UTC 2024 org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSinkConfiguration.class
47509 Wed Aug 28 15:35:36 UTC 2024 org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.class
```
**Describe the solution you'd like**
I'd like to rename the jar files that Data Prepper produces to start all Data Prepper plugins with `data-prepper-plugin-`.
At the same time, I'd like to keep the module names in Gradle the same. We can configure Gradle to change the way it produces output to help with this.
| Improve jar names for Data Prepper plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/4931/comments | 0 | 2024-09-10T15:48:18Z | 2025-06-04T22:08:10Z | https://github.com/opensearch-project/data-prepper/issues/4931 | 2,516,835,556 | 4,931 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Recent bug fix for this with acknowledgments enabled (https://github.com/opensearch-project/data-prepper/pull/4918#discussion_r1750930332)
**Expected behavior**
Without acknowledgments enabled, this issue can still occur. A separate thread should make the `renewPartitionOwnership` call when acknowledgments are not enabled.
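A sketch of the proposed fix (Data Prepper is Java; this Python snippet only illustrates the dedicated-thread pattern, and the function names are assumptions):

```python
import threading

def start_ownership_renewal(renew, interval_seconds: float = 30.0):
    """Renew partition ownership on a dedicated daemon thread so a full
    buffer on the processing path cannot let the lease expire.
    Returns an Event; call set() on it to stop the renewal loop."""
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            renew()                      # e.g. renewPartitionOwnership()
            stop.wait(interval_seconds)  # sleep, but wake promptly on stop

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

Because renewal no longer runs on the processing thread, a blocked buffer cannot starve the ownership heartbeat.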
| [BUG] S3 Scan partition ownership cannot expire with full buffer without acknowledgments enabled | https://api.github.com/repos/opensearch-project/data-prepper/issues/4928/comments | 0 | 2024-09-09T21:22:53Z | 2024-09-09T21:24:57Z | https://github.com/opensearch-project/data-prepper/issues/4928 | 2,514,946,112 | 4,928 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Many of our logs are noisy in that they will be repeated within processor loops. This results in creating far too many logs in many situations. I'd like to use the [Log4J burst filter](https://logging.apache.org/log4j/2.x/manual/filters.html#BurstFilter) in such a way that we can filter out only these noisy logs.
e.g.
```
<Filters>
<MarkerFilter marker="NOISY" onMatch="NEUTRAL" onMismatch="ACCEPT"/>
<BurstFilter level="INFO" rate="10" maxBurst="40"/>
</Filters>
```
**Describe the solution you'd like**
Create a new Data Prepper SLF4J marker in [DataPrepperMarkers](https://github.com/opensearch-project/data-prepper/blob/3dfad0b88bca23d6c09b97702f3a19285bef6ad0/data-prepper-api/src/main/java/org/opensearch/dataprepper/logging/DataPrepperMarkers.java#L6) named `NOISY`.
Additionally, we should apply this to existing logs that are in `Processor` loops. Because we use other markers, we probably need to start using [SLF4J fluent logging](https://www.slf4j.org/manual.html#fluent) as well.
e.g.
Change
https://github.com/opensearch-project/data-prepper/blob/cf24b895b8c4a42fbbe4596e5836c714336244b0/data-prepper-plugins/parse-json-processor/src/main/java/org/opensearch/dataprepper/plugins/processor/parse/json/ParseJsonProcessor.java#L55
to:
```
LOG.atError()
.addMarker(SENSITIVE)
.addMarker(NOISY)
.setMessage("An exception occurred due to invalid JSON while parsing [{}] due to {}")
.addArgument(message)
.addArgument(e.getMessage())
.log();
```
We don't need any implementations for this. We can leave it to individual teams to add any rate limiting.
**Tasks**
- [ ] Add the `NOISY` marker
- [ ] Update logs that are in `Processor` loops to include the `NOISY` marker. | Add a NOISY SLF4J marker | https://api.github.com/repos/opensearch-project/data-prepper/issues/4927/comments | 0 | 2024-09-09T16:24:46Z | 2024-09-18T16:31:50Z | https://github.com/opensearch-project/data-prepper/issues/4927 | 2,514,392,192 | 4,927 |
[
"opensearch-project",
"data-prepper"
] | The [`EVENT` mask](https://github.com/opensearch-project/data-prepper/blob/3dfad0b88bca23d6c09b97702f3a19285bef6ad0/data-prepper-api/src/main/java/org/opensearch/dataprepper/logging/DataPrepperMarkers.java#L7) is intended to be a signal to the logger to mask only Data Prepper Event objects. However, it currently operates like the `SENSITIVE` marker, which is meant to mask all fields (except exceptions; see #3375).
For example, `add_entries` has this exception.
https://github.com/opensearch-project/data-prepper/blob/aa1c5c592131dc43d167111bd7a34a66eb554593/data-prepper-plugins/mutate-event-processors/src/main/java/org/opensearch/dataprepper/plugins/processor/mutateevent/AddEntryProcessor.java#L95-L96
It results in logs like the following:
```
Error adding entry to record [******] with key [******], metadataKey [******], value_expression [******] format [******], value [******] java.lang.ClassCastException: null
```
Solution:
Fix the [SensitiveArgumentMaskingConverter](https://github.com/opensearch-project/data-prepper/blob/3dfad0b88bca23d6c09b97702f3a19285bef6ad0/data-prepper-core/src/main/java/org/opensearch/dataprepper/logging/SensitiveArgumentMaskingConverter.java#L41) to only mask `Event` objects when the marker is `EVENT` rather than `SENSITIVE`.
Also, we should audit our usages of `EVENT`. For each usage, we should consider 1) Can we keep it as `EVENT` as is? 2) Do we need to remove some fields to keep as `EVENT`; or 3) Do we need to make it `SENSITIVE`?
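The intended distinction between the two markers can be sketched as follows (a Python illustration of the desired Java behavior; the `Event` stand-in class is an assumption):

```python
MASK = "******"

class Event:
    """Stand-in for Data Prepper's Event type."""
    def __init__(self, data):
        self.data = data

def render_arg(arg, markers: set) -> str:
    """SENSITIVE masks every argument; EVENT should mask only Event objects."""
    if "SENSITIVE" in markers:
        return MASK
    if "EVENT" in markers and isinstance(arg, Event):
        return MASK
    return str(arg)

def format_log(template: str, args, markers: set) -> str:
    out = template
    for arg in args:
        out = out.replace("{}", render_arg(arg, markers), 1)
    return out
```

With only the `EVENT` marker, a log like `key [{}] record [{}]` would keep the plain key visible and mask only the Event argument, which is the behavior the fix should restore.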
**Tasks**
- [ ] Fix the SensitiveArgumentMaskingConverter
- [ ] Audit and update usages of the `DataPrepperMarkers.EVENT` marker | [BUG] EVENT logging masks all inputs | https://api.github.com/repos/opensearch-project/data-prepper/issues/4926/comments | 0 | 2024-09-09T16:11:31Z | 2024-09-09T19:11:57Z | https://github.com/opensearch-project/data-prepper/issues/4926 | 2,514,366,818 | 4,926 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
- See https://github.com/opensearch-project/data-prepper/issues/2273#issuecomment-1464092972 .
- Starting from https://github.com/jaegertracing/jaeger/pull/4187, Jaeger HotROD no longer uses `jaegertracing/jaeger-agent` to receive `jaegertracing/example-hotrod` data, but directly uses `otel/opentelemetry-collector-contrib`. According to https://www.jaegertracing.io/docs/1.60/deployment/#agent , `jaegertracing/jaeger-agent` is considered a deprecated component, so there is no reason to keep using it in the demo.
- According to https://opentelemetry.io/docs/collector/installation/ and https://opentelemetry.io/docs/collector/deployment/agent/ , removing the use of the Jaeger Agent is very simple.
```yaml
services:
opensearch-node1:
hostname: node-0.example.com
image: opensearchproject/opensearch:2.16.0
environment:
discovery.type: single-node
OPENSEARCH_INITIAL_ADMIN_PASSWORD: opensearchNode1Test
volumes:
- opensearch-config1:/usr/share/opensearch/config/
expose:
- 9200
- 9600
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:2.16.0
ports:
- "14321:5601"
environment:
OPENSEARCH_HOSTS: '["https://opensearch-node1:9200"]'
depends_on:
- opensearch-node1
data-prepper:
restart: always
image: opensearchproject/data-prepper:2.9.0
volumes:
- ./pipelines.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml
- ./data-prepper-config.yaml:/usr/share/data-prepper/config/data-prepper-config.yaml
- opensearch-config1:/usr/share/opensearch-test/:ro
expose:
- 21890
depends_on:
- opensearch-node1
otel-collector:
image: otel/opentelemetry-collector-contrib:0.108.0
volumes:
- ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
expose:
- 1888
- 8888
- 8889
- 13133
- 4317
- 4318
- 55679
depends_on:
- data-prepper
jaeger-hot-rod:
image: jaegertracing/example-hotrod:1.60.0
command: [ "all" ]
environment:
- OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
ports:
- "8080:8080"
expose:
- 8083
depends_on:
- otel-collector
volumes:
opensearch-config1:
```
**Describe the solution you'd like**
- Removes Jaeger Agent from Jaeger Hotrod Demo.
**Additional context**
- Null.
| Removes Jaeger Agent from Jaeger Hotrod Demo | https://api.github.com/repos/opensearch-project/data-prepper/issues/4923/comments | 3 | 2024-09-08T05:35:54Z | 2024-09-22T10:02:09Z | https://github.com/opensearch-project/data-prepper/issues/4923 | 2,512,208,643 | 4,923 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
- `Sources` documentation should add `pipeline` subdocumentation.
- 
- The document content should be similar to https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sinks/pipeline/ .
- 
- Adding subdocuments to `Source` makes sense because the following usages are 100% likely to exist.
```yaml
entry-pipeline:
source:
otel_trace_source:
ssl: "false"
sink:
- pipeline:
name: raw-pipeline
raw-pipeline:
source:
pipeline:
name: "entry-pipeline"
processor:
- otel_trace_raw:
sink:
- opensearch:
hosts: [ "https://node-0.example.com:9200" ]
cert: "/usr/share/opensearch-test/root-ca.pem"
username: "admin"
password: "opensearchNode1Test"
index_type: trace-analytics-raw
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
- Null.
| `Sources` documentation should add `pipeline` subdocumentation | https://api.github.com/repos/opensearch-project/data-prepper/issues/4922/comments | 3 | 2024-09-08T04:02:02Z | 2024-09-13T13:19:45Z | https://github.com/opensearch-project/data-prepper/issues/4922 | 2,512,183,938 | 4,922 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I implemented a geoip processor in my AWS OpenSearch Domain Data Prepper Pipeline for VPC flow logs as seen here:
- geoip:
entries:
- source: srcaddr
target: srclocation
- source: dstaddr
target: dstlocation
tags_on_ip_not_found: ['no_ip_geo']
tags_on_no_valid_ip: ['invalid_ip']
and it works great, but the latitude and longitude fields are separated and not together in one field like [lat, long].
<img width="577" alt="Screenshot 2024-09-05 at 1 12 58 PM" src="https://github.com/user-attachments/assets/c84db981-61af-47b8-9689-26886487d993">
I tried looking to see if there is a way to concatenate these into one field and then convert the field data type to geo_point so I can use it in map visualizations, but I have not been able to find anything. I tried using an index template to convert the latitude and longitude fields into geo_point data types, but when I do this the logs with that information disappear and I do not see them anywhere. Not sure if I just did something wrong or because, since they are separated, it struggles to convert to the geo_point data type and the data disappears. For the record, when I use index templates to, say, convert a field to a string it works fine, it's just with the geo_point data type when something goes wrong.
I saw that some other users were having this issue too, the links are on Logstash not Data Prepper but still a similar issue:
https://stackoverflow.com/questions/72539041/opensearch-on-aws-does-not-recognise-geoips-location-as-geojson-type
https://github.com/opensearch-project/OpenSearch/issues/3546
**Describe the solution you'd like**
The ability to concatenate fields would be cool. Either with a processor in Data Prepper or in Opensearch itself. I want just one location field, something like location:[lat,long] that I can convert to the geo_point format so I can use it in the map visualizations.
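As a sketch of the requested transformation: OpenSearch's `geo_point` type accepts a `"lat,lon"` string, so combining the two fields could look like this (field names follow the screenshot above and are otherwise assumptions; this is not an existing Data Prepper processor):

```python
def add_geo_point(doc: dict, location_key: str) -> dict:
    """Combine separate latitude/longitude values under `location_key`
    into a single field mappable as geo_point ("lat,lon" string form)."""
    loc = doc.get(location_key) or {}
    lat, lon = loc.get("latitude"), loc.get("longitude")
    if lat is not None and lon is not None:
        loc["location"] = f"{lat},{lon}"
    return doc

doc = {"srclocation": {"latitude": 39.0438, "longitude": -77.4874}}
print(add_geo_point(doc, "srclocation")["srclocation"]["location"])
```

The resulting `srclocation.location` field could then be mapped as `geo_point` in an index template and used in map visualizations.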
**Describe alternatives you've considered (Optional)**
I tried using an index template to convert the lat and long fields to a geo_point and seeing if I could add them to a map visualization incrementally and get something to work with, but the log data disappeared after the index template was implemented. I tried seeing if any other processor in Data Prepper could work, and I couldn't find anything. I also tried seeing if I could chain the Data Prepper pipeline to an ingest pipeline and use any of those processors, but I couldn't figure out how to do this.
**Additional context**
Any help is appreciated, I am a novice and new to Opensearch so if there is a solution I am not seeing here sorry but thank you in advance!
| Concatenation for geo fields in Data Prepper? | https://api.github.com/repos/opensearch-project/data-prepper/issues/4916/comments | 3 | 2024-09-05T17:32:23Z | 2024-09-11T15:59:11Z | https://github.com/opensearch-project/data-prepper/issues/4916 | 2,508,365,863 | 4,916 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently our ETL job runs every 30 minutes and inserts a file into S3, triggering the OpenSearch ingestion pipeline. Due to varying ETL completion time, it's challenging to determine a suitable `refresh_interval` at the index level that works consistently for all scenarios.
As a result of this behavior, there is a delay before the data becomes available, even though ingestion into OpenSearch is complete.
**Describe the solution you'd like**
We propose to add a new configuration option for HTTP post-processor hooks in the Data Prepper pipeline definition, which will allow us to specify an HTTP POST endpoint and make a refresh API call (`/index-name/_refresh`) after pipeline ingestion is completed.
Currently the processor available in the pipeline definition only works before ingesting data to OpenSearch.
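The proposed hook boils down to issuing one HTTP call after ingestion completes. A minimal stdlib sketch of constructing that call (host and index names are placeholders; this is not an existing Data Prepper option):

```python
import urllib.request

def build_refresh_request(host: str, index: str) -> urllib.request.Request:
    """POST /<index>/_refresh so newly written documents become searchable
    without waiting for the index-level refresh_interval."""
    url = f"{host.rstrip('/')}/{index}/_refresh"
    return urllib.request.Request(url, method="POST")

req = build_refresh_request("https://search.example.com", "my-index")
print(req.get_method(), req.full_url)
```

Sending the request (with the cluster's authentication) once the last batch is written would make the data visible immediately.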
**Describe alternatives you've considered (Optional)**
Provide refresh option at pipeline index settings which will internally refresh the index after the execution of pipeline.
**Additional context**
N/A | Implement Post-Processor Hooks for Refreshing Index after Ingestion | https://api.github.com/repos/opensearch-project/data-prepper/issues/4885/comments | 4 | 2024-08-28T22:19:12Z | 2024-11-05T00:38:28Z | https://github.com/opensearch-project/data-prepper/issues/4885 | 2,493,080,250 | 4,885 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
If the routes for a sink fail, such as when the expression is invalid, the process worker will stop running. This will lead to Data Prepper running without any process workers.
The buffer will fill up and Data Prepper will have effectively been shutdown.
**To Reproduce**
1. Create a pipeline with conditional routes
2. Make one of the routes have an invalid expression
3. Run Data Prepper
4. Ingest data
```
2024-08-26T23:13:58.480 [test-pipeline-processor-worker-5-thread-2] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - Encountered exception during pipeline test-pipeline processing
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@10ec1e4d[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@198669bf[Wrapped task = org.opensearch.dataprepper.pipeline.Pipeline$$Lambda$1477/0x000000080136e230@138d51a1]] rejected from org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor@5af5a6fd[Shutting down, pool size = 2, active threads = 2, queued tasks = 0, completed tasks = 2018]
at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[?:?]
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134) ~[?:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$6(Pipeline.java:347) ~[data-prepper-core.jar:?]
at org.opensearch.dataprepper.pipeline.router.Router.lambda$route$0(Router.java:64) ~[data-prepper-core.jar:?]
at org.opensearch.dataprepper.pipeline.router.DataFlowComponentRouter.route(DataFlowComponentRouter.java:48) ~[data-prepper-core.jar:?]
at org.opensearch.dataprepper.pipeline.router.Router.route(Router.java:58) ~[data-prepper-core.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.publishToSinks(Pipeline.java:346) ~[data-prepper-core.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.postToSink(ProcessWorker.java:168) ~[data-prepper-core.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:150) ~[data-prepper-core.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:68) ~[data-prepper-core.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.base/java.lang.Thread.run(Thread.java:833) [?:?]
```
**Expected behavior**
I expect that Data Prepper will continue to run. One difficulty is what to do with the data. We could drop it or send it incorrectly somewhere.
Ideally, we can use the new `_default` route if available.
**Environment (please complete the following information):**
Data Prepper 2.8
**Additional context**
This is a very similar issue to #4103, but is manifest through failures in the router and/or sinks.
| [BUG] Data Prepper processor workers stop running when an error from the routes occurs | https://api.github.com/repos/opensearch-project/data-prepper/issues/4883/comments | 2 | 2024-08-28T20:45:03Z | 2024-09-11T09:30:59Z | https://github.com/opensearch-project/data-prepper/issues/4883 | 2,492,960,130 | 4,883 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.9.0
**BUILD NUMBER**: 87
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/10599810682
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 10599810682: Release Data Prepper : 2.9.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/4882/comments | 3 | 2024-08-28T15:52:14Z | 2024-08-28T16:01:58Z | https://github.com/opensearch-project/data-prepper/issues/4882 | 2,492,471,275 | 4,882 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
When we create a new major/minor release, we have to update the `CURRENT_VERSION` in the `DataPrepperVersion` file to the new version. We do this in preparation for the next release version.
https://github.com/opensearch-project/data-prepper/blob/d6465efc63f98380605b90ddeab90343bb469dbf/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/configuration/DataPrepperVersion.java#L9
**Describe the solution you'd like**
This should come from the version expressed in the `gradle.properties` that we release from.
https://github.com/opensearch-project/data-prepper/blob/d6465efc63f98380605b90ddeab90343bb469dbf/gradle.properties#L8
We can probably accomplish this by reading the version from a resource file. By including some new resource in the jar file, we can read that resource to determine the major/minor version. Gradle could easily produce this resource file as part of the build.
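The derivation itself is simple: read the full version as it appears in `gradle.properties` (e.g. `2.10.0-SNAPSHOT`) and reduce it to the major.minor form that `DataPrepperVersion` uses. A Python sketch of that reduction (the real code would be Java reading a classpath resource; the input format is the assumption here):

```python
def shorten_version(full_version: str) -> str:
    """'2.10.0-SNAPSHOT' -> '2.10': strip any qualifier, keep major.minor."""
    base = full_version.split("-", 1)[0]
    major, minor = base.split(".")[:2]
    return f"{major}.{minor}"
```

Gradle would write the full version string into the resource at build time, so no manual update of `CURRENT_VERSION` is needed.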
**Describe alternatives you've considered (Optional)**
It may be possible to have Gradle modify the compiled Java instead of using a resource file. We could investigate this as well.
**Additional context**
N/A
| Automate the DataPrepperVersion from Gradle | https://api.github.com/repos/opensearch-project/data-prepper/issues/4877/comments | 0 | 2024-08-27T13:52:39Z | 2024-08-27T19:57:54Z | https://github.com/opensearch-project/data-prepper/issues/4877 | 2,489,445,712 | 4,877 |
[
"opensearch-project",
"data-prepper"
] | I'm describing an `add_entries` processor. One of the entries refers to a nested field (`/channel/name`). I want to append that value to a new `passage_text_elements` array, which is initialized with some string. The relevant code looks like this:
```yaml
processor:
- add_entries:
entries:
- key: passage_text_elements
format: 'Content type: ${/type}'
...
- key: passage_text_elements
append_if_key_exists: true
add_when: >-
/channel/name != null and (/type ==
"channel-linear" or /type == "channel-xtra")
format: 'name: ${/channel/name}'
- key: passage_text
value_expression: |-
join("
", /passage_text_elements)
add_when: /passage_text_elements typeof array
```
when something like this is ingested (from dynamodb):
```json
{
"channel": {
"M": {
"name": {
"S": "some name"
},
...
}
},
...
"type": {
"S": "channel-linear"
}
}
```
I see this warning/error in the ingestor logs:
```
WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document failed to write to OpenSearch with error code 400. Configure a DLQ to save failed documents. Error: failed to parse field [channel] of type [text] in document with id '<doc-id>'. Preview of field's value: '{name = some name, ...}' caused by Can't get text on a START_OBJECT at 1:127
```
so it seems that somewhere, the path `channel` is being interpreted as a string, as opposed to an object. The only entry that uses the channel field is the one I included above.
**To Reproduce**
Steps to reproduce the behavior:
1. create an add_entries processor similar to the described above
2. insert a document on the source side that has a `channel` object field with a `name` string field.
**Expected behavior**
Expecting for the formatted string `"name: <channel.name>"` to be appended to the `passage_text_elements` field
**Environment (please complete the following information):**
- Version: 2.11
| [BUG] Expression tries to evaluate something as a string when it should evaluate it as an object | https://api.github.com/repos/opensearch-project/data-prepper/issues/4859/comments | 1 | 2024-08-22T14:14:48Z | 2024-08-22T16:23:23Z | https://github.com/opensearch-project/data-prepper/issues/4859 | 2,480,907,055 | 4,859 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I have a JSON object in S3 with two fields, shown below, and I want to limit the entries of those fields while uploading to OpenSearch using an ingestion pipeline.
```
{
"file" : { # file is a hash-map and keys in the file attribute are pre-defined
"read": ["value1","value2", ......"value1000"],
"write": ["value1","value2", ......"value1000"],
"delete": ["value1","value2", ......"value1000"],
}
"Scan": { # scan is also a hash-map and keys in the Scan attribute are dynamic and they are not fixed
"8080": ["1.2.3.4:8080", "5.6.7.8:8080", ..... "value1000"]
"450": ["1.2.34.5:450", ...."value1000"]
}
}
```
I want to limit the entries of the attributes (file.read, file.write, file.delete, scan.key1, scan.key2) to the first 10 elements due to latency issues, as Elasticsearch has latency issues while querying large arrays.
**Describe the solution you'd like**
The data should look like the following after processing with a Data Prepper plugin:
```
file.read: ["value1",... "value10"],
file.write: ["value1",... "value10"],
file.delete: ["value1",... "value10"],
scan.8080: ["value1",... "value10"],
scan.450: ["value1",... "value10"]
```
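The requested truncation can be sketched in plain Python (a hypothetical illustration of the desired processor logic; no such Data Prepper processor currently exists):

```python
# Cap every array value inside the "file" and "Scan" hash-maps
# at its first 10 elements, leaving non-array values untouched.
MAX_ENTRIES = 10

def limit_entries(doc: dict, fields=("file", "Scan")) -> dict:
    for field in fields:
        mapping = doc.get(field)
        if isinstance(mapping, dict):
            for key, value in mapping.items():
                if isinstance(value, list):
                    mapping[key] = value[:MAX_ENTRIES]
    return doc

doc = {"file": {"read": [f"value{i}" for i in range(1, 1001)]},
       "Scan": {"8080": [f"1.2.3.{i}:8080" for i in range(1, 1001)]}}
limit_entries(doc)
print(len(doc["file"]["read"]), len(doc["Scan"]["8080"]))  # 10 10
```

Because the `Scan` keys are dynamic, the sketch iterates all keys of the map rather than naming them up front.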
| Support Limiting Array Entries in Hash-Map Values using Data Prepper Plugin | https://api.github.com/repos/opensearch-project/data-prepper/issues/4858/comments | 4 | 2024-08-22T00:52:53Z | 2024-08-27T20:10:50Z | https://github.com/opensearch-project/data-prepper/issues/4858 | 2,479,485,476 | 4,858 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Add roles and users to OpenSearch:
```json
"log_collector_prepper" : {
"reserved" : false,
"hidden" : false,
"cluster_permissions" : [
"cluster_monitor"
],
"index_permissions" : [
{
"index_patterns" : [
"logs-*",
"nodes-*",
"kube-*",
"logstash-*",
"pacman-*"
],
"fls" : [ ],
"masked_fields" : [ ],
"allowed_actions" : [
"crud",
"create_index"
]
},
{
"index_patterns" : [
".ds-logs-*",
".ds-nodes-*",
".ds-kube-*",
".ds-logstash-*",
".ds-pacman-*"
],
"fls" : [ ],
"masked_fields" : [ ],
"allowed_actions" : [
"indices:admin/mapping/put"
]
}
],
"tenant_permissions" : [
{
"tenant_patterns" : [
"*"
],
"allowed_actions" : [
"kibana_all_write"
]
}
],
"static" : false
}
```
Install Data Prepper via Helm:
```yaml
- chart: opensearch/data-prepper
version: 0.1.0
name: data-prepper
namespace: logging
values:
- pipelineConfig:
config:
simple-sample-pipeline:
source:
http:
buffer:
bounded_blocking:
buffer_size: 1024 # max number of records the buffer accepts
batch_size: 256 # max number of records the buffer drains after each read
processor:
route:
sink:
- opensearch:
hosts: ["https://opensearch-cluster-master:9200"]
cert: /usr/share/data-prepper/config/logstash-tls/ca.crt
username: dataprepper
password: dataprepper
index_type: custom
index: kube-2
max_retries: 4
```
and got these errors:
```
2024-08-21T09:37:16,656 [simple-sample-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2024-08-21T09:37:16,656 [simple-sample-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2024-08-21T09:37:16,656 [simple-sample-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the cert provided in the config.
2024-08-21T09:37:16,659 [simple-sample-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2024-08-21T09:37:16,659 [simple-sample-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the cert provided in the config.
2024-08-21T09:37:16,678 [simple-sample-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink, retrying: Forbidden access
```
**To Reproduce**
I've tested the permissions from the container, and the index was created:
```bash
curl -XPUT "https://opensearch-cluster-master:9200/kube-011" -u 'dataprepper:dataprepper' --cacert /usr/share/data-prepper/config/logstash-tls/ca.crt
```
So after some searching I found this [article](https://github.com/opensearch-project/data-prepper/blob/25968dd9c63f35b1881013f35a6eed64439be278/data-prepper-plugins/opensearch/opensearch_security.md)
and added this role:
```json
"data_prepper" : {
"reserved" : false,
"hidden" : false,
"cluster_permissions" : [
"cluster_all",
"indices:admin/template/put",
"indices:admin/template/get"
],
"index_permissions" : [
{
"index_patterns" : [
"otel-v1*"
],
"dls" : "",
"fls" : [ ],
"masked_fields" : [ ],
"allowed_actions" : [
"indices_all"
]
},
{
"index_patterns" : [
".opendistro-ism-config"
],
"dls" : "",
"fls" : [ ],
"masked_fields" : [ ],
"allowed_actions" : [
"indices_all"
]
},
{
"index_patterns" : [
"*"
],
"dls" : "",
"fls" : [ ],
"masked_fields" : [ ],
"allowed_actions" : [
"manage_aliases"
]
}
],
"tenant_permissions" : [
{
"tenant_patterns" : [
"*"
],
"allowed_actions" : [
"kibana_all_write"
]
}
],
"static" : false
}
```
but I see the same behavior. If I add the `all_access` role, it works fine.
**Expected behavior**
The sink should work under a suitably limited role, or the documentation should state which role needs to be created.
**Environment (please complete the following information):**
- Version 0.1.0
| [BUG] Failed to initialize OpenSearch sink, retrying: Forbidden access | https://api.github.com/repos/opensearch-project/data-prepper/issues/4856/comments | 2 | 2024-08-21T10:12:02Z | 2024-09-03T19:54:07Z | https://github.com/opensearch-project/data-prepper/issues/4856 | 2,477,638,725 | 4,856 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A custom class will be needed to assemble the messages, which might include truncation if necessary.
| Using Custom class to handle collection of error messages from PluginErrorCollector | https://api.github.com/repos/opensearch-project/data-prepper/issues/4854/comments | 0 | 2024-08-20T22:04:11Z | 2024-08-27T16:16:56Z | https://github.com/opensearch-project/data-prepper/issues/4854 | 2,476,609,885 | 4,854 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
In our DynamoDB (DDB) table, we have documents that have fields like this:
```
delivered_at|d87e56e8-f52f-474f-ad18-155b2a08f680: 1722622017.993797
```
Where the string behind the `|` is a random string (UUID) and the value is a float (representing a timestamp). We'd like to extract the value from this field in DDB and index it in OpenSearch as simply `delivered_at: 1722622017.993797`
**Describe the solution you'd like**
Is it possible to plug in custom code that will
```
rename 'delivered_at|d87e56e8-f52f-474f-ad18-155b2a08f680: 1722622017.993797' to 'delivered_at: 1722622017.993797' ?
```
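As an illustration, the desired dynamic rename can be sketched in plain Python (the regex and helper below are hypothetical, not an existing processor):

```python
import re

# Strip a "|<uuid>" suffix from any matching key,
# e.g. "delivered_at|d87e56e8-..." -> "delivered_at".
KEY_PATTERN = re.compile(r"^(?P<base>[a-z_]+)\|[0-9a-f-]{36}$")

def rename_dynamic_keys(doc: dict) -> dict:
    renamed = {}
    for key, value in doc.items():
        match = KEY_PATTERN.match(key)
        renamed[match.group("base") if match else key] = value
    return renamed

doc = {"delivered_at|d87e56e8-f52f-474f-ad18-155b2a08f680": 1722622017.993797}
print(rename_dynamic_keys(doc))  # {'delivered_at': 1722622017.993797}
```

A regex-driven `from_key_pattern` option on `rename_keys` would cover this case without custom code.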
**Describe alternatives you've considered (Optional)**
There is a [`rename_keys` processor](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/rename-keys/), but it currently only handles static key names. With the following config, the pipeline can rename the key, but if the UUID changes, the rename won't work:
```
processor:
  - rename_keys:
      entries:
        - from_key: "delivered_at|d87e56e8-f52f-474f-ad18-155b2a08f680"
          to_key: "delivered_at"
```
I also tried a number of different configurations with the `grok` processor, and none of them worked. The challenge is that some of the pipeline processors support [Pipeline Expression](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/)s, while others do not, and it's not well-documented which ones do and which ones don't.
| Data Prepper support for dynamic renaming of keys | https://api.github.com/repos/opensearch-project/data-prepper/issues/4849/comments | 5 | 2024-08-20T15:45:00Z | 2024-10-30T14:48:33Z | https://github.com/opensearch-project/data-prepper/issues/4849 | 2,475,975,251 | 4,849 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper plugins can depend on other Data Prepper plugins. They load these plugins manually using the `PluginModel`.
This results in duplicated logic such as the following:
https://github.com/opensearch-project/data-prepper/blob/91b6666512805bf502186bd683ae800b0943ba10/data-prepper-plugins/aggregate-processor/src/main/java/org/opensearch/dataprepper/plugins/processor/aggregate/AggregateProcessor.java#L83-L87
More importantly, it is also not compatible with our new support for schema and documentation generation.
**Describe the solution you'd like**
Provide a feature in Data Prepper core for loading plugins on behalf of other plugins. Data Prepper can have a new `@UsesDataPrepperPlugin` annotation which states that the type is a Data Prepper plugin.
In plugin configurations, use the desired plugin interface type. For example, replace
https://github.com/opensearch-project/data-prepper/blob/91b6666512805bf502186bd683ae800b0943ba10/data-prepper-plugins/aggregate-processor/src/main/java/org/opensearch/dataprepper/plugins/processor/aggregate/AggregateProcessorConfig.java#L31-L34
with:
```java
@JsonProperty("action")
@NotNull
@UsesDataPrepperPlugin
private AggregateAction aggregateAction;
```
When loading plugins in Data Prepper core, detect this annotation and load the plugin.
| Support automatic plugin loading in Data Prepper core | https://api.github.com/repos/opensearch-project/data-prepper/issues/4838/comments | 1 | 2024-08-15T15:00:40Z | 2024-09-17T19:50:43Z | https://github.com/opensearch-project/data-prepper/issues/4838 | 2,468,252,833 | 4,838 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We would like to see a feature in the Data Prepper pipeline, and eventually in OpenSearch Ingestion, to ingest documents into a specific time-based index based on the timestamp field of an incoming record.
For example, let's say you have the following 2 records coming from a source:
```
{
"message": "hello",
"created_at": "2024-08-13",
"type": "greeting"
}
```
```
{
"message": "how are you",
"created_at": "2024-08-14",
"type": "greeting"
}
```
and you have the following sink
```
sink:
- opensearch:
hosts: [ "https://your-domain" ]
aws:
sts_role_arn: "arn:aws:iam::<acc_no>:role/test-role"
region: "us-east-1"
serverless: false
index: "testindex-%{yyyy-MM-dd}"
document_id_field: "id"
```
Irrespective of when the above records are consumed by the sink, it should index the docs into the index matching the timestamp of the `created_at` field.
Therefore the first record would be indexed into `testindex-2024-08-13` and the second record into `testindex-2024-08-14`.
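The requested routing can be sketched in plain Python (a hypothetical illustration of what the sink would compute per record, assuming the `created_at` format shown above):

```python
from datetime import datetime

# Derive the bulk target index from the record's own "created_at" field
# instead of the wall-clock ingestion time.
def index_for(record: dict, prefix: str = "testindex") -> str:
    created = datetime.strptime(record["created_at"], "%Y-%m-%d")
    return f"{prefix}-{created:%Y-%m-%d}"

print(index_for({"message": "hello", "created_at": "2024-08-13"}))
# testindex-2024-08-13
```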
**Why is this feature important?**
In the best case of indexing real-time data, this feature would not be required, but systems sometimes fail. Consider a scenario where the pipeline is writing to `testindex-2024-08-13` on August 13 and the OpenSearch cluster fails at 11:30 PM; the pipeline sink will back up on events, or the pipeline needs to be stopped. The cluster comes back online at 2:00 AM on August 14 after some manual ops. Upon resuming the pipeline, the records between 11:30 and 11:59 PM should still be indexed into `testindex-2024-08-13` and the remaining ones into `testindex-2024-08-14`. Therefore the pipeline needs some built-in intelligence to read the timestamp from a field of the incoming record and direct the `_bulk` request to that index.
Or consider having to replay the last 7 days of data from Kafka, expecting 7 different daily indices.
In the majority of cases, ISM policies would be used to enforce retention (a delete state) on the data, and if the data is indexed into the incorrect index, we would keep the data longer than expected.
**Describe the solution you'd like**
Some type of processor, or a sink field similar to the `document_id` field, that would identify (or create) the index name based on the timestamp from the incoming record and index the doc into that specific index.
In simple terms, when creating the bulk object, the logic should be able to determine the index name based on the timestamp from the field of the incoming record.
**Describe alternatives you've considered (Optional)**
I tried out the [date processor](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/date/), but it does not help; it adds a timestamp to the doc but does not route the doc to an index based on that timestamp.
I am also trying out the [date index name processor from ingest pipelines](https://opensearch.org/docs/latest/ingest-pipelines/processors/date-index-name/) but haven't had success yet. Maybe it will work, maybe it won't, but having the requested feature in Data Prepper will definitely make it easier to configure something like this in a single place.
**Note:** If someone has already figured this out, pointers would be helpful.
| Pipeline processor/feature to ingest document in Opensearch's time based indices based on time field of incoming record | https://api.github.com/repos/opensearch-project/data-prepper/issues/4832/comments | 2 | 2024-08-14T04:02:15Z | 2024-08-27T19:45:48Z | https://github.com/opensearch-project/data-prepper/issues/4832 | 2,464,850,764 | 4,832 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Initial investigation shows it is due to a missing root span. Further investigation is needed to reproduce the issue.
| [BUG] Missing traceGroup when using opentelemetry-demo app | https://api.github.com/repos/opensearch-project/data-prepper/issues/4827/comments | 1 | 2024-08-12T18:17:43Z | 2024-08-15T04:23:50Z | https://github.com/opensearch-project/data-prepper/issues/4827 | 2,461,622,624 | 4,827 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The service-map relationships are skipped when `traceGroupName` is null:
https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/service-map-stateful/src/main/java/org/opensearch/dataprepper/plugins/processor/ServiceMapStatefulProcessor.java#L245
**Expected behavior**
We should still create the relationship even when `traceGroupName` is missing.
| [BUG] Service-map relationship should be created regardless of missing traceGroupName | https://api.github.com/repos/opensearch-project/data-prepper/issues/4821/comments | 0 | 2024-08-09T22:04:33Z | 2024-08-12T18:24:52Z | https://github.com/opensearch-project/data-prepper/issues/4821 | 2,458,718,837 | 4,821 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We use the BigDecimal data type in some sources (like DynamoDB) and support it in the `convert_entry_type` processor, but it doesn't seem to work with expressions.
For example, if I run a pipeline with this processor combo:
```
- convert_entry_type:
key: /outer/inner
type: "big_decimal"
- add_entries:
entries:
- key: new_field
value_expression: /outer/inner
```
with input `{"outer":{"inner": -0.31}}`
I get this error:
```
org.opensearch.dataprepper.expression.ExpressionCoercionException: Unsupported type for value -0.31
at org.opensearch.dataprepper.expression.ParseTreeCoercionService.lambda$new$0(ParseTreeCoercionService.java:33) ~[data-prepper-expression-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeCoercionService.resolveJsonPointerValue(ParseTreeCoercionService.java:106) ~[data-prepper-expression-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeCoercionService.coercePrimaryTerminalNode(ParseTreeCoercionService.java:69) ~[data-prepper-expression-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeEvaluatorListener.visitTerminal(ParseTreeEvaluatorListener.java:69) ~[data-prepper-expression-2.9.0-SNAPSHOT.jar:?]
```
**Describe the solution you'd like**
It seems we just need to add `BigDecimal` here: https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-expression/src/main/java/org/opensearch/dataprepper/expression/LiteralTypeConversionsConfiguration.java
| Support BigDecimal data type in expressions | https://api.github.com/repos/opensearch-project/data-prepper/issues/4817/comments | 0 | 2024-08-08T22:17:28Z | 2024-09-27T16:04:11Z | https://github.com/opensearch-project/data-prepper/issues/4817 | 2,456,732,457 | 4,817 |
[
"opensearch-project",
"data-prepper"
] | Date Processor in Data Prepper is unable to parse the date time given the below patterns defined in the pipeline YAML configuration.
Consider, `testMessage.log` has a line ```{"message": "Jul 30, 2024 3:28:55 PM"}```, data prepper is unable to inject the `@timestamp` in the output event after parsing the date time from `message` key.
It looks like if we remove the line to default parsing for hour `parseDefaulting(ChronoField.HOUR_OF_DAY, 0)` from [here](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/date-processor/src/main/java/org/opensearch/dataprepper/plugins/processor/date/DateProcessor.java#L129), it is matching the expected behavior.
```
version: "2"
test-pipeline:
source:
file:
path: "./testMessage.log"
format: "json"
record_type: "event"
processor:
- date:
match:
- key: "message"
patterns: ["MMM dd, yyyy HH:mm:ss a",
"MMM dd, yyyy H:mm:ss a", "MMM dd, yyyy hh:mm:ss a", "MMM dd, yyyy h:mm:ss a", "MMM d, yyyy h:mm:ss a" ]
destination: "@timestamp"
destination_timezone: "UTC"
to_origination_metadata: true
sink:
- stdout:
```
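For reference, the sample timestamp is a 12-hour clock value, so the 12-hour pattern letter (Java's `h`/`hh`, together with the AM/PM marker) is the one that should match; a quick Python analogue of the same distinction:

```python
from datetime import datetime

# %I is Python's 12-hour directive (like Java's "h"); %H is the 24-hour
# one (like Java's "H") and conflicts with an AM/PM marker like "PM".
parsed = datetime.strptime("Jul 30, 2024 3:28:55 PM", "%b %d, %Y %I:%M:%S %p")
print(parsed.isoformat())  # 2024-07-30T15:28:55
```

This is consistent with the observed fix: defaulting `HOUR_OF_DAY` to 0 conflicts with a time already resolved from the 12-hour fields.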
| [BUG] Unable to parse date time using defined patterns in Date Processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4815/comments | 2 | 2024-08-08T20:09:08Z | 2024-08-21T06:08:10Z | https://github.com/opensearch-project/data-prepper/issues/4815 | 2,456,541,439 | 4,815 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I'd like the ability to leverage environment variable references within the pipeline configuration file. This would allow me to set values that need to be configurable during deployment or runtime.
**Describe the solution you'd like**
* Example of desired sink configuration
```
sink:
- s3:
aws:
region: ${AWS_REGION}
bucket: ${SINK_BUCKET}
buffer_type: in_memory
...
```
* Example of source configuration
```
source:
s3:
aws:
region: ${AWS_REGION}
codec:
ndjson: null
compression: gzip
default_bucket_owner: ${AWS_ACCOUNT}
notification_type: sqs
sqs:
queue_url: ${QUEUE_URL}
visibility_duplication_protection: true
visibility_timeout: 900s
```
In the examples provided, AWS_REGION, SINK_BUCKET, AWS_ACCOUNT, and QUEUE_URL would all be represented as environment variables.
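A minimal sketch of such substitution, in Python, assuming `${VAR}` syntax applied to the raw configuration text before parsing (unresolved references are left untouched; this is an illustration, not the proposed implementation):

```python
import os
import re

# Expand ${VAR} references from the process environment.
VAR_PATTERN = re.compile(r"\$\{(\w+)\}")

def expand_env(text: str) -> str:
    return VAR_PATTERN.sub(lambda m: os.environ.get(m.group(1), m.group(0)), text)

os.environ["AWS_REGION"] = "us-east-1"
os.environ.pop("SINK_BUCKET", None)  # ensure it is unset for the demo
print(expand_env("region: ${AWS_REGION}, bucket: ${SINK_BUCKET}"))
# region: us-east-1, bucket: ${SINK_BUCKET}
```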
| Allow pipeline configuration files to reference and utilize environment variables | https://api.github.com/repos/opensearch-project/data-prepper/issues/4813/comments | 1 | 2024-08-08T02:18:32Z | 2024-08-12T15:41:13Z | https://github.com/opensearch-project/data-prepper/issues/4813 | 2,454,731,779 | 4,813 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When ingesting large files using S3-SQS processing, the OpenSearch Ingestion pipeline failed to prevent duplication even though `visibility_duplication_protection` was set to true. The SQS queue's ApproximateNumberOfMessagesVisible metric kept growing. Confirmed duplicated documents existed in the OpenSearch index. There were many errors in the pipeline logs. For example:
ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Failed to set visibility timeout for message [foo] to 60
software.amazon.awssdk.services.sqs.model.SqsException: Value [bar] for parameter ReceiptHandle is invalid. Reason: Message does not exist or is not available for visibility timeout change. (Service: Sqs, Status Code: 400, Request ID: [baz])
The issue happened when OSI received 10 messages (or close to 10) from the SQS queue in a single request. It worked fine when OSI received one message in a single request.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure a S3 bucket and a SQS queue according to the documentation
2. Set the following parameters in the pipeline configuration
```yaml
source:
  s3:
    acknowledgments: true
    notification_type: "sqs"
    compression: "gzip"
    codec:
      ndjson:
    workers: 5
    sqs:
      queue_url:
      maximum_messages: 10
      visibility_timeout: "60s"
      visibility_duplication_protection: true
```
3. Provision the right amount of OCUs for the pipeline
4. Start the pipeline
5. Send large gzipped files, e.g. 80MB, to the S3 bucket continuously
6. Check the pipeline's logs using CloudWatch Log Insights with the following query
```
fields @timestamp, @message
| filter @message like /ReceiptHandle is invalid/
| sort @timestamp desc
```
7. Check the pipeline's logs using CloudWatch Log Insights with the following query
```
fields @timestamp, @message
| filter @message like /10 messages from SQS. Processing/
| sort @timestamp desc
```
8. Check the queue's ApproximateNumberOfMessagesVisible metric
9. Check OpenSearch index for duplicated documents
**Expected behavior**
No log messages are found from step 6. Log messages are found from step 7. The ApproximateNumberOfMessagesVisible metric doesn't keep growing. There are no duplicated documents in the index. | [BUG] Visibility duplication protection fails when using S3 source for large files and receiving 10 messages from SQS queue | https://api.github.com/repos/opensearch-project/data-prepper/issues/4812/comments | 0 | 2024-08-07T03:28:27Z | 2024-08-23T18:59:14Z | https://github.com/opensearch-project/data-prepper/issues/4812 | 2,452,209,015 | 4,812 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The current S3 scan does not support buckets in other regions when using IAM credentials:
```
version: "2"
log-pipeline:
source:
s3:
codec:
newline: # Other options "json", "csv", "parquet"
compression: "none"
aws:
region: "us-east-1"
sts_role_arn: "arn:aws:iam::253613578708:role/FullAccess"
scan:
buckets:
- bucket:
name: "test-s3-gd-bucket" # bucket in us-west-2
```
**Describe the solution you'd like**
The user should be allowed to configure a region for each bucket in the buckets list.
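One possible configuration shape (the per-bucket `region` key and the second bucket name are hypothetical; the per-bucket value would override the top-level `aws.region`):

```yaml
scan:
  buckets:
    - bucket:
        name: "test-s3-gd-bucket"
        region: "us-west-2"     # hypothetical per-bucket override
    - bucket:
        name: "another-bucket"  # falls back to aws.region (us-east-1)
```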
| Support s3 scan for bucket across regions | https://api.github.com/repos/opensearch-project/data-prepper/issues/4811/comments | 2 | 2024-08-06T00:48:29Z | 2024-09-06T19:26:38Z | https://github.com/opensearch-project/data-prepper/issues/4811 | 2,449,709,063 | 4,811 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
For workloads that are smaller and want durability, using S3 as a buffer can be a good solution.
**Describe the solution you'd like**
Data Prepper already has a few things that we can combine to create an S3 buffer.
1. An S3 source
2. An S3 sink
3. Pipeline transformations
I propose that we have a new buffer - `pipeline_s3` which is implemented only as a pipeline transformation.
```
my-pipeline:
source:
http:
buffer:
pipeline_s3:
bucket: mybucket
sink:
- opensearch:
```
This would transform into:
```
my-pipeline-source:
source:
http:
buffer:
bounded_blocking:
sink:
- s3:
bucket: mybucket
my-pipeline-sink:
source:
s3:
scan:
buckets:
- bucket:
name: mybucket
buffer:
bounded_blocking:
sink:
- opensearch:
```
**Describe alternatives you've considered (Optional)**
We could implement an S3 buffer similar to the Kafka buffer that does not require splitting the pipeline. But, creating this would be quite a bit faster.
Also, I think we should leave room for a possible S3 buffer that is implemented directly, without splitting the pipeline. My proposal is to alter the name of this buffer to make it distinct from such an S3 buffer, and also to avoid confusion with other buffers such as Kafka. Thus, I called this `pipeline_s3`.
One alternative to changing the name is to use a flag instead - `split_pipeline: true` or `asynchronous_buffer: true`.
**Additional context**
N/A
| S3 buffer using pipeline transformations | https://api.github.com/repos/opensearch-project/data-prepper/issues/4809/comments | 1 | 2024-08-02T16:39:26Z | 2024-08-13T20:23:13Z | https://github.com/opensearch-project/data-prepper/issues/4809 | 2,445,378,053 | 4,809 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
given the following snippet in pipeline YAML:
```
opensearch:
index: "${/index}"
dlq:
s3:
bucket: ...
```
The dlq object will end up having the following key-value:
```
"failedData":{"index":"${/index}"
```
in which the `/failedData/index` value is the literal value from the pipeline config.
**Expected behavior**
The expected behavior should be the runtime value extracted from the individual event:
```
"failedData":{"index":"<<runtime value from individual event under /index json path>>"
```
| [BUG] dlq object should include the dynamic index runtime value instead of the literal value in pipeline configuration | https://api.github.com/repos/opensearch-project/data-prepper/issues/4808/comments | 0 | 2024-08-02T15:12:51Z | 2024-08-02T15:13:18Z | https://github.com/opensearch-project/data-prepper/issues/4808 | 2,445,186,841 | 4,808 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.8.1
**BUILD NUMBER**: 86
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/10207697004
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 10207697004: Release Data Prepper : 2.8.1 | https://api.github.com/repos/opensearch-project/data-prepper/issues/4806/comments | 3 | 2024-08-02T00:42:10Z | 2024-08-02T00:43:24Z | https://github.com/opensearch-project/data-prepper/issues/4806 | 2,443,750,510 | 4,806 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Enable users to send events to Amazon Personalize
**Describe the solution you'd like**
Create a new sink in Data Prepper which sends data to Amazon Personalize. It should support
- End-to-end acknowledgements
- Retries
- Metrics
- DLQ
- Ingestion into Personalize datasets (items, users, interactions)
Sample YAML for items dataset:
```
pipeline:
...
sink:
- personalize:
aws:
region: us-west-2
sts_role_arn: arn:aws:iam::123456789012:role/Data-Prepper
max_retries: 5
dataset_type: items
dataset_arn: arn:aws:personalize:us-west-2:123456789012:dataset/dataset-name
```
Sample YAML for users dataset:
```
pipeline:
...
sink:
- personalize:
aws:
region: us-west-2
sts_role_arn: arn:aws:iam::123456789012:role/Data-Prepper
max_retries: 5
dataset_type: users
dataset_arn: arn:aws:personalize:us-west-2:123456789012:dataset/dataset-name
```
Sample YAML for interactions dataset:
```
pipeline:
...
sink:
- personalize:
aws:
region: us-west-2
sts_role_arn: arn:aws:iam::123456789012:role/Data-Prepper
max_retries: 5
dataset_type: interactions
tracking_id: 1234567890
```
| Personalize as Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/4804/comments | 0 | 2024-08-01T23:51:59Z | 2024-08-08T14:25:43Z | https://github.com/opensearch-project/data-prepper/issues/4804 | 2,443,699,606 | 4,804 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.8.0
**BUILD NUMBER**: 85
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/10207064949
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 10207064949: Release Data Prepper : 2.8.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/4802/comments | 3 | 2024-08-01T23:32:27Z | 2024-08-02T00:09:20Z | https://github.com/opensearch-project/data-prepper/issues/4802 | 2,443,673,200 | 4,802 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Add sinks under `extensions` so that they can be used as a sink for the pipeline DLQ, as a sink for live debug event capture, or to reduce duplication when the same sink is used multiple times (customers find it unnecessary to repeat the same sink configuration when multiple sub-pipelines are used in the configuration).
**Describe the solution you'd like**
Allow sink configuration like the following under `extensions` section
```
extensions:
sinks:
opensearch-sink1:
hosts: [ <OPENSEARCH-ENDPOINT> ]
aws:
region: "<region>"
sts_role_arn: "<ARN>"
index: <index1>
pipeline:
source:
processor:
sink:
- opensearch:
use: opensearch-sink1
route:
- route1
- pipeline:
name: pipeline2
route:
- route2
pipeline2:
source:
...
processor:
sink:
- opensearch:
use: opensearch-sink1
index: <index2> # override some config as needed
```
Using dynamic sink as pipeline DLQ or live debug capture sink
```
extensions:
sinks:
s3-sink1:
bucket: <bucket>
object_key:
path_prefix: pfx
threshold:
event_collect_timeout: 5s
event_count: 10
aws:
region: "<region>"
sts_role_arn: "<arn>"
dlq: # This is RESERVED name
s3:
use: s3-sink1
live_capture: # This is RESERVED name
s3:
use: s3-sink1
```
**Describe alternatives you've considered (Optional)**
Keywords used in the above proposal, like "sinks" and "use", can be different; more appropriate names may be used.
| Add sinks under extensions | https://api.github.com/repos/opensearch-project/data-prepper/issues/4799/comments | 2 | 2024-08-01T15:29:53Z | 2024-08-01T16:25:51Z | https://github.com/opensearch-project/data-prepper/issues/4799 | 2,442,767,336 | 4,799 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The DynamoDB pipeline's leader thread can crash and exit if an error is thrown on this line while trying to update state. While this should not happen very often, a 5xx error from the source coordination store results in this thread crashing, which blocks discovery of new DynamoDB stream shards.
**Expected behavior**
Handle exceptions properly in this thread and continue running shard discovery if a `PartitionUpdateException` is thrown.
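A generic sketch of the intended behavior (illustrative Python, not the actual Data Prepper code): one failing iteration, such as a `PartitionUpdateException` caused by a transient 5xx from the coordination store, is caught and the discovery loop continues.

```python
# Illustrative sketch: a leader loop where a single failed iteration
# (e.g., a transient 5xx while saving leader partition state) must not
# end the thread; the error is swallowed and shard discovery continues.
def run_iterations(work, iterations):
    failures = 0
    for _ in range(iterations):
        try:
            work()  # shard discovery + saving state for the leader partition
        except Exception:
            failures += 1  # log and continue instead of crashing the loop
    return failures

calls = []
def flaky():
    calls.append(1)
    if len(calls) == 2:
        raise RuntimeError("transient 5xx from the coordination store")

print(run_iterations(flaky, 3), len(calls))  # 1 3
```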
**Additional context**
Add any other context about the problem here.
| [BUG] DynamoDB Leader Thread can crash if it fails to save state for the leader partition | https://api.github.com/repos/opensearch-project/data-prepper/issues/4775/comments | 1 | 2024-07-30T21:33:52Z | 2024-08-21T00:43:11Z | https://github.com/opensearch-project/data-prepper/issues/4775 | 2,438,720,036 | 4,775 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `s3` source has a configuration `on_error`. This can be set to `delete_messages` which signals an intent to delete messages which failed to parse.
Presently, this feature only works when failing to parse the SQS message itself. If the S3 object fails to parse, then the message is retained.
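For reference, the configuration in question looks roughly like this (sketch; queue URL and other details omitted):

```
source:
  s3:
    notification_type: sqs
    on_error: delete_messages   # today this only applies to SQS message parse failures
    sqs:
      queue_url: <queue-url>
```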
**Expected behavior**
I would expect that any error (SQS or S3) results in deleting the message when `or_error: delete_messages` is set.
| [BUG] S3 source on_error: delete_messages retains for S3 failures | https://api.github.com/repos/opensearch-project/data-prepper/issues/4773/comments | 0 | 2024-07-30T16:32:54Z | 2024-07-30T19:55:39Z | https://github.com/opensearch-project/data-prepper/issues/4773 | 2,438,240,790 | 4,773 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Strings in Data Prepper do not allow a `$` character. This can be problematic in any situation, but especially when using a regex pattern to match the end of a line.
**To Reproduce**
Create a pipeline with a route like:
```
/some_field =~ "^prefix-[a-zA-Z0-9-]+$"
```
Run Data Prepper.
You will get some cryptic errors.
```
Caused by: org.opensearch.dataprepper.expression.ExceptionOverview: Multiple exceptions (37)
|-- org.antlr.v4.runtime.LexerNoViableAltException: null
at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309)
|-- org.antlr.v4.runtime.InputMismatchException: null
at org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270)
```
Change the regex by removing the `$`. The route works as expected.
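For reference, the matching semantics the route intends are sketched below in Python (since Data Prepper's expression parser rejects the `$`); the `$` anchor is exactly what rejects values with trailing content, so dropping it changes behavior.

```python
import re

# The same pattern the route uses; '$' anchors the match to the end of the value.
pattern = re.compile(r"^prefix-[a-zA-Z0-9-]+$")

print(bool(pattern.match("prefix-abc-123")))       # True
print(bool(pattern.match("prefix-abc-123 tail")))  # False -- without '$' this would match
```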
**Expected behavior**
I should be able to run Data Prepper with `$` in the strings.
| [BUG] Data Prepper strings do not support $ | https://api.github.com/repos/opensearch-project/data-prepper/issues/4772/comments | 0 | 2024-07-30T15:35:27Z | 2024-07-30T19:50:01Z | https://github.com/opensearch-project/data-prepper/issues/4772 | 2,438,131,097 | 4,772 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When initialization of the Data Prepper pipeline's OpenSearch sink fails, Data Prepper continuously retries recreating the OpenSearch client and the OpenSearchClientRefresher (which refreshes credentials). It looks like on initialization failure we only close the OpenSearch client and not the OpenSearchClientRefresher (which holds another reference to an OpenSearch client). These unclosed OpenSearch clients consume threads and eventually result in an OOM.
**To Reproduce**
If the OpenSearch sink credentials or the data/network access policy are not configured correctly, Data Prepper continuously retries recreating the OpenSearch client. The OOM exception below is seen in the logs.
```
ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [cdc-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
```
**Expected behavior**
The OpenSearch sink/source should gracefully close any opened connections.
**Code Reference**
A `close` method should be added to [OpenSearchClientRefresher](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchClientRefresher.java) to close the OpenSearchClient, and it should be called [here](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/OpenSearchSink.java#L589).
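A minimal model of the proposed fix (illustrative Python; the class names mirror the Java ones but this is not the actual plugin code): the refresher exposes `close()`, and the failure path calls it so the underlying client's threads are released.

```python
# Illustrative model: on initialization failure, close the refresher,
# which in turn closes the client it holds, releasing its threads.
class FakeClient:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class ClientRefresher:
    def __init__(self, client):
        self.client = client
    def close(self):
        self.client.close()  # this is the call the current code is missing

refresher = ClientRefresher(FakeClient())
try:
    raise RuntimeError("sink initialization failed")
except RuntimeError:
    refresher.close()  # previously the client leaked here, eventually causing OOM
print(refresher.client.closed)  # True
```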
| [BUG] Close Opensearch RestHighLevelClient in OpenSearchClientRefresher on shutdown and initialization failure | https://api.github.com/repos/opensearch-project/data-prepper/issues/4770/comments | 2 | 2024-07-29T17:54:08Z | 2024-10-07T15:44:04Z | https://github.com/opensearch-project/data-prepper/issues/4770 | 2,436,012,154 | 4,770 |
[
"opensearch-project",
"data-prepper"
] | Not sure if this is a bug or something that is not supported.
I defined an `add_entries` processor in order to build a `passage_text_elements` array that will later be joined, and one of the entries defines a format expression that uses `join`:
```yaml
...
- key: passage_text_elements
  append_if_key_exists: true
  add_when: >-
    /episode != null and /episode/name != null
  format: 'name: ${/episode/name}'
...
- key: passage_text_elements
  append_if_key_exists: true
  add_when: >-
    /episode != null and /episode/guests != null
  format: 'guests: ${join(/episode/guests)}'
...
- key: passage_text
  value_expression: join("\n", /passage_text_elements)
```
This evaluates the expression to null and appends it:
```json
"passage_text_elements": [
"Content type: episode-video", // this is ok
"name: <name>", // this is ok
"description: <description>", // this is ok
null // this should have been "guests: <guest1,guest2>"
]
```
is it possible to use `join` in a format expression? My idea for a workaround is to do this in two steps (an entry with `value_expression: join(/episode/guests)` setting a `guests_joined` field and then another entry with `format: "guests: ${/guests_joined}"`).
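The two-step workaround described above might look like this (hypothetical sketch, untested; `guests_joined` is an illustrative intermediate field name):

```yaml
- key: guests_joined
  add_when: '/episode != null and /episode/guests != null'
  value_expression: join(/episode/guests)
- key: passage_text_elements
  append_if_key_exists: true
  add_when: '/guests_joined != null'
  format: 'guests: ${/guests_joined}'
```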
| Use `join` inside a format expression | https://api.github.com/repos/opensearch-project/data-prepper/issues/4768/comments | 2 | 2024-07-26T14:46:45Z | 2024-07-26T18:41:56Z | https://github.com/opensearch-project/data-prepper/issues/4768 | 2,432,402,147 | 4,768 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I added the conditional grok processor from the documentation to my Data Prepper pipeline and I get an error:
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
... 72 more
Caused by: com.fasterxml.jackson.databind.JsonMappingException: while parsing a block mapping
in 'reader', line 17, column 10:
grok_when: '/type == "ipv4"'
^
expected <block end>, but found '<block mapping start>'
in 'reader', line 18, column 12:
match:
^
reference: https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/grok/#conditional-grok
**To Reproduce**
Steps to reproduce the behavior:
1. Go to dataprepper yml pipeline
2. Add https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/grok/#conditional-grok
3. Restart dataprepper
4. See error
**Expected behavior**
Dataprepper is running, conditions work
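The YAML error suggests an indentation problem rather than a missing feature: `expected <block end>, but found '<block mapping start>'` typically means `match:` is indented deeper than `grok_when:`, while they must be sibling keys under `grok:`. A corrected fragment might look like this (sketch; the pattern itself is illustrative):

```yaml
processor:
  - grok:
      grok_when: '/type == "ipv4"'
      match:
        message: [ '%{IPV4:clientip} %{WORD:action}' ]
```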
**Environment (please complete the following information):**
- OS: RHEL 8
- Version: 2.8.0
<img width="610" alt="123123" src="https://github.com/user-attachments/assets/5df52c17-0072-430a-a719-222faaf35e11">
| [BUG] Cannot add conditionals to grok processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/4767/comments | 3 | 2024-07-26T12:41:18Z | 2024-08-01T15:59:58Z | https://github.com/opensearch-project/data-prepper/issues/4767 | 2,432,158,527 | 4,767 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the dynamodb source, when I have acknowledgments enabled, shards are not checkpointed by sequence number in the coordination store. This means that when data prepper stops and then starts, it will process the full shard again, rather than only processing after the checkpoint from the acknowledgments. This results in some duplicate processing and higher end to end latency reporting.
**Describe the solution you'd like**
Checkpoint the shards by sequence number with acknowledgments to start reading shards from the acknowledgment checkpoint to prevent duplicate processing
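A minimal model of the requested behavior (illustrative Python; not the actual source coordination API): the stored checkpoint advances only when an acknowledgment arrives, so a restart resumes after the last acknowledged sequence number instead of the start of the shard.

```python
# Illustrative model: checkpoint on acknowledgment so restarts resume
# after the last acknowledged sequence number.
class ShardProgress:
    def __init__(self):
        self.checkpoint = None  # None -> read from the start of the shard

    def on_acknowledgment(self, sequence_number):
        self.checkpoint = sequence_number  # persist to the coordination store

    def resume_position(self):
        return self.checkpoint or "TRIM_HORIZON"

progress = ShardProgress()
print(progress.resume_position())   # TRIM_HORIZON
progress.on_acknowledgment("seq-100")
print(progress.resume_position())   # seq-100
```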
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Checkpoint acknowledgments for DynamoDB pipelines | https://api.github.com/repos/opensearch-project/data-prepper/issues/4764/comments | 0 | 2024-07-25T15:35:03Z | 2024-07-25T15:35:31Z | https://github.com/opensearch-project/data-prepper/issues/4764 | 2,430,330,814 | 4,764 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
create pipeline with a route:
```
route:
  - dev-nginx-logs: '/kubernetes/pod_name =~ ".*nginx.*" and /k8scluster == "dev-cluster"'
  - dev-app-logs: '/kubernetes/namespace_name == "my-app-dev"'
```
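A possible workaround until the underlying issue is resolved (untested sketch): guard the regex so `=~` only ever sees a string left operand, since the root `IllegalArgumentException` indicates the field is missing or non-string on some events.

```
route:
  - dev-nginx-logs: '/kubernetes/pod_name != null and /kubernetes/pod_name =~ ".*nginx.*" and /k8scluster == "dev-cluster"'
  - dev-app-logs: '/kubernetes/namespace_name == "my-app-dev"'
```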
but when DP starts I see:
```
2024-07-24T14:52:59,685 [logs-pipeline-processor-worker-1-thread-3] ERROR org.opensearch.dataprepper.pipeline.router.RouteEventEvaluator - Failed to evaluate route. This route will not be applied to any events.
org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate statement "".*nginx.*" and /k8scluster == "dev-cluster""
at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:44) ~[data-prepper-expression-2.8.0.jar:?]
at org.opensearch.dataprepper.expression.ExpressionEvaluator.evaluateConditional(ExpressionEvaluator.java:30) ~[data-prepper-api-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.RouteEventEvaluator.findMatchedRoutes(RouteEventEvaluator.java:64) [data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.RouteEventEvaluator.evaluateEventRoutes(RouteEventEvaluator.java:45) [data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.router.Router.route(Router.java:44) [data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.publishToSinks(Pipeline.java:346) [data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.postToSink(ProcessWorker.java:168) [data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:150) [data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.8.0.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: org.opensearch.dataprepper.expression.ExpressionEvaluationException: Unable to evaluate the part of input statement: /kubernetes/pod_name =~ ".*nginx.*"
at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:41) ~[data-prepper-expression-2.8.0.jar:?]
at org.opensearch.dataprepper.expression.ParseTreeEvaluator.evaluate(ParseTreeEvaluator.java:17) ~[data-prepper-expression-2.8.0.jar:?]
at org.opensearch.dataprepper.expression.GenericExpressionEvaluator.evaluate(GenericExpressionEvaluator.java:41) ~[data-prepper-expression-2.8.0.jar:?]
... 13 more
.......
Caused by: java.lang.IllegalArgumentException: '=~' requires left operand to be String.
```
**To Reproduce**
1. Create pipeline with route provided above
2. Check
**Expected behavior**
Events are routed to the correct indexes according to the route rules.
**Environment (please complete the following information):**
- AWS EKS
- Version 2.8.0
| [BUG] Routes: regex doesn't work | https://api.github.com/repos/opensearch-project/data-prepper/issues/4763/comments | 1 | 2024-07-24T15:32:33Z | 2024-08-12T15:20:50Z | https://github.com/opensearch-project/data-prepper/issues/4763 | 2,427,891,825 | 4,763 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a customer, I would like to move all my Jira ticket details to OpenSearch to use advanced search capabilities like full-text and vector search on the jira ticketing content without writing any integration code. The connector brings all ticketing system data with a simple configuration in the pipeline yaml. The connector also takes care of synchronizing the data between my Jira Cloud instance and OpenSearch instance in near real-time.
**Describe the solution you'd like**
Introducing a new source plugin for Jira that takes jira login credentials. The connector makes use of Jira APIs to fetch all the ticket details and converts them to OpenSearch event stream that gets processed with in the OpenSearch context. Rest of the OpenSearch processors and sink options should work as they are.
```
simple-jira-pipeline:
  aws:
    secrets:
      secret1:
        secret_id: "<<jira-login-credentials>>"
        region: "<<us-east-1>>"
        refresh_interval: PT1H
jira-pipeline:
  source:
    jira:
      host: https://<client-subdomain>.atlassian.net
      authentication-type: "api-key"
      user_name: "${{aws_secrets:secret:email}}"
      api_token: "${{aws_secrets:secret:api_token}}"
      entity_types: [project, user, issues, comments]
  processor:
  sink:
    - opensearch:
```
For synchronizing the data changes between the Jira Cloud instance and OpenSearch, the connector also registers a webhook to get notified when there is a change and extracts the new version of those tickets and triggers the data prepper flow for those events related to ticket modifications.
**Additional context**
Webhook registration with Jira Cloud should use a secret so that the content pushed to the registered endpoint is signed and secured.
| Jira Connector - to seamlessly sync all the ticket details to OpenSearch | https://api.github.com/repos/opensearch-project/data-prepper/issues/4754/comments | 0 | 2024-07-18T01:41:17Z | 2025-03-04T21:01:48Z | https://github.com/opensearch-project/data-prepper/issues/4754 | 2,414,989,008 | 4,754 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper currently waits a period of time to flush the buffer on shutdown. The current logic is to wait for the [entire buffer to drain](https://github.com/opensearch-project/data-prepper/blob/a85e05e60f9fd2d2a269aaf2c7d5291c59946b87/data-prepper-core/src/main/java/org/opensearch/dataprepper/pipeline/ProcessWorker.java#L66-L70) or for the drain timeout to expire.
This logic does not account for end-to-end acknowledgements. If a sink is taking a while to send acknowledgements, but the buffer is empty, Data Prepper will think that the pipeline is ready for shutdown.
Because of this, Data Prepper may produce duplicate data when shutdown in the middle of reading an S3 object (e.g. half the file is sent to the sink, but we shutdown before the second half is completed).
**Describe the solution you'd like**
Update Data Prepper to track the acknowledgement sets for a given pipeline. Consider this when performing the shutdown to ensure that it is completed.
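A sketch of the proposed shutdown condition (illustrative Python; names are hypothetical): the pipeline is ready for shutdown only when the buffer is drained *and* no acknowledgement sets remain outstanding.

```python
# Illustrative shutdown check: an empty buffer alone is not enough;
# outstanding acknowledgement sets must also reach zero.
def ready_for_shutdown(buffer_empty, outstanding_ack_sets):
    return buffer_empty and outstanding_ack_sets == 0

print(ready_for_shutdown(buffer_empty=True, outstanding_ack_sets=2))  # False
print(ready_for_shutdown(buffer_empty=True, outstanding_ack_sets=0))  # True
```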
**Describe alternatives you've considered (Optional)**
None
**Additional context**
I was working toward a solution to #4575 which would allow the S3 source to continue to keep the message visibility open while the sink flushed. Then, I found that the sink doesn't wait at all.
I used this pipeline and a local `hold_forever` sink (see #4737) to demonstrate:
```
sqs-pipeline:
  workers: 2
  delay: 100
  source:
    s3:
      notification_type: sqs
      compression: gzip
      acknowledgments: true
      codec:
        csv:
          delimiter: ' '
      sqs:
        queue_url: QUEUE
      aws:
        region: us-east-2
        sts_role_arn: ROLE
  processor:
  sink:
    - hold_forever:
        output_frequency: 5s
```
data-prepper-config.yaml:
```
ssl: false
serverPort: 4900
processor_shutdown_timeout: 'PT5M'
```
Data Prepper shut down immediately. It should wait 5 minutes because the `hold_forever` sink is not sending any acknowledgements. | Support flushing sinks and completing acknowledgements on shutdown | https://api.github.com/repos/opensearch-project/data-prepper/issues/4740/comments | 0 | 2024-07-16T14:30:19Z | 2024-07-16T19:54:24Z | https://github.com/opensearch-project/data-prepper/issues/4740 | 2,411,320,851 | 4,740
[
"opensearch-project",
"data-prepper"
] | **Problem Description**
Currently, whenever a new KCL application is launched, it automatically creates a DynamoDB table for lease coordination and management for the specific application.
It would be nice to have KCL provide multi-tenant support such that multiple independent KCL applications (hosted in a single account), each consuming multiple Kinesis data streams, are able to share a single DynamoDB table for lease coordination and management.
**Proposed Solution**
The proposal is to have KCL support multi-tenancy for lease coordination and management through a single DynamoDB table with the relevant partition keys, range keys, and indexes defined as required. Moreover, clients should also be able to provide additional properties like SSE, TTL, etc.
The following table schema could be used to support multi-tenancy.
Table partition details:
* Partition Key -
* ***applicationName*** - Uniquely identifies the application.
* Sort Key
* ***LeaseKey*** - To allow for get/updates of the specific lease.
Additionally, introduce a Global Secondary Index (GSI) to allow for listing all shards per stream per application.
* Partition Key -
* ***applicationName*** - Uniquely identifies the pipeline
* Sort Key
* ***streamName*** - Uniquely identifies a stream.
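An illustrative shape for a multi-tenant lease record under this schema (Python sketch; the attribute names follow the proposal above, everything else is hypothetical):

```python
# Illustrative multi-tenant lease record: partition key = applicationName,
# sort key = leaseKey, with (applicationName, streamName) served by the GSI
# to list all shards per stream per application.
def lease_record(application_name, stream_name, shard_id, checkpoint):
    return {
        "applicationName": application_name,       # table partition key
        "leaseKey": f"{stream_name}:{shard_id}",   # table sort key
        "streamName": stream_name,                 # GSI sort key
        "checkpoint": checkpoint,
    }

rec = lease_record("my-kcl-app", "stream-1", "shardId-000000000000", "49590338...")
print(rec["leaseKey"])  # stream-1:shardId-000000000000
```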
***New classes and overrides***
* MultiTenantDynamoDBLeaseRefresher extends [DynamoDBLeaseRefresher](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseRefresher.java#L73)
* Methods:
* [public boolean createLeaseTableIfNotExists()](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseRefresher.java#L235) - to support table creation if it does not exist with the required GSI, enable encryption, point-in-time-recovery.
* [private List<Lease> list(Integer limit, Integer maxPages, StreamIdentifier streamIdentifier)](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseRefresher.java#L428) - use dynamoDb query on the GSI to list all the shards for a stream and application.
* MultiTenantDynamoDBLeaseSerializer extends [DynamoDBMultiStreamLeaseSerializer](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseSerializer.java#L46)
* Methods:
* [public Map<String, AttributeValue> toDynamoRecord(final Lease lease)](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseSerializer.java#L46) - to support serialization of MultiTenantLease for insertion in the lease table
* [public Lease fromDynamoRecord(final Map<String, AttributeValue> dynamoRecord)](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseSerializer.java#L117) - to support deserialization of the record from the lease table into a MultiTenantLease
* [public Map<String, AttributeValue> getDynamoHashKey(final Lease lease)](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseSerializer.java#L162) - defines the hash key or partition key which is the application name and lease key as the sort key
* [public Collection<KeySchemaElement> getKeySchema()](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseSerializer.java#L377C5-L377C55) - defines the schema of the partition key and sort keys
* New method
* public Collection<KeySchemaElement> getIndexSchema() - defines the schema for the partition key (application name) and the sort key as the stream id
* MultiTenantDynamoDBLeaseManagementFactory extends [DynamoDBLeaseManagementFactory](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseManagementFactory.java#L53)
* Methods:
* [public DynamoDBLeaseRefresher createLeaseRefresher()](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/dynamodb/DynamoDBLeaseManagementFactory.java#L1084) - to create the Multitenant DynamoDB lease refresher.
* [LeaseSerializer](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/LeaseSerializer.java#L29) interface
* New Method
* Collection<KeySchemaElement> getIndexSchema();
* MultiTenantHierarchicalShardSyncer extends [HierarchicalShardSyncer](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/HierarchicalShardSyncer.java#L71)
* Methods:
* [public synchronized Lease createLeaseForChildShard](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/HierarchicalShardSyncer.java#L654) - create a MultiTenantLease from a child shard
* MultiTenantLeaseManagementConfig extends [LeaseManagementConfig](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/leases/LeaseManagementConfig.java#L51) - lease management configuration to support MultiTenant lease management functionality
* MultiTenantMultiStreamConfigsBuilder extends [ConfigsBuilder](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/common/ConfigsBuilder.java#L52)
* Methods:
* [public LeaseManagementConfig leaseManagementConfig()](https://github.com/awslabs/amazon-kinesis-client/blob/57dcddf10b8160736ef392f77376fb83d83e03d6/amazon-kinesis-client/src/main/java/software/amazon/kinesis/common/ConfigsBuilder.java#L258C5-L258C57) - return the MultiTenantLeaseManagementConfig object
* MultiTenantLease - store lease object model
| Add support for multi-tenancy for lease coordination and management in KCL | https://api.github.com/repos/opensearch-project/data-prepper/issues/4739/comments | 1 | 2024-07-15T22:16:30Z | 2024-07-15T22:18:31Z | https://github.com/opensearch-project/data-prepper/issues/4739 | 2,409,735,391 | 4,739 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2024-6345 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>setuptools-68.0.0-py3-none-any.whl</b></p></summary>
<p>Easily download, build, install, upgrade, and uninstall Python packages</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c7/42/be1c7bbdd83e1bfb160c94b9cafd8e25efc7400346cf7ccdbdb452c467fa/setuptools-68.0.0-py3-none-any.whl">https://files.pythonhosted.org/packages/c7/42/be1c7bbdd83e1bfb160c94b9cafd8e25efc7400346cf7ccdbdb452c467fa/setuptools-68.0.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **setuptools-68.0.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability in the package_index module of pypa/setuptools versions up to 69.1.1 allows for remote code execution via its download functions. These functions, which are used to download packages from URLs provided by users or retrieved from package index servers, are susceptible to code injection. If these functions are exposed to user-controlled inputs, such as package URLs, they can execute arbitrary commands on the system. The issue is fixed in version 70.0.
<p>Publish Date: 2024-07-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-6345>CVE-2024-6345</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2024-6345">https://www.cve.org/CVERecord?id=CVE-2024-6345</a></p>
<p>Release Date: 2024-07-15</p>
<p>Fix Resolution: setuptools - 70.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2024-6345 (High) detected in setuptools-68.0.0-py3-none-any.whl | https://api.github.com/repos/opensearch-project/data-prepper/issues/4738/comments | 1 | 2024-07-15T15:13:02Z | 2024-08-26T15:05:23Z | https://github.com/opensearch-project/data-prepper/issues/4738 | 2,408,989,602 | 4,738 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Sometimes when running data through Data Prepper, I'd like to test the behavior when data is not sent to a sink.
**Describe the solution you'd like**
Provide a new sink - `hold_forever`.
```
sink:
- hold_forever:
```
This sink would not send the data, would not acknowledge to acknowledgements.
This sink could provide a warning that this is only for testing purposes.
**Describe alternatives you've considered (Optional)**
It would be interesting to consider having a CLI flag for `debugMode`. Then when this flag is provided, this sink would be available. If it isn't provided, the sink is unavailable. This could prevent misuse of it.
However, Data Prepper's current arguments are not very robust. So I'd like to propose that something like this come later.
**Additional context**
I already have this implemented and keep it in my development environment. But, it may be useful beyond this, so I'd like to make it part of the standard sinks.
| Provide hold_forever sink to help with testing and experimenting | https://api.github.com/repos/opensearch-project/data-prepper/issues/4737/comments | 0 | 2024-07-15T14:32:17Z | 2024-07-16T19:52:26Z | https://github.com/opensearch-project/data-prepper/issues/4737 | 2,408,897,412 | 4,737 |
[
"opensearch-project",
"data-prepper"
] | Hi all,
I'm not sure if I'm missing something or if I'm doing something wrong on my end.
In my current Data Prepper setup there are traces being dropped and written to the logs of Data Prepper itself, which pollutes its own logs, so important messages get swallowed among all the dropped/logged traces. I would like to investigate those dropped traces further, which is why I added the local DLQ path to my pipeline config under the sink section, so that those traces are written/exported to this file for further troubleshooting instead of to the logs.
The problem I'm facing now is that the DLQ file stays empty and is not being filled with the dropped traces. They are still getting dropped into the logs of Data Prepper. Data Prepper is also not complaining about a misconfigured DLQ config in the logs. The DLQ file just stays empty for some reason.
I found the config for the DLQ path here: https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/README.md
This is my current pipeline.yaml file:
```
otel-collector-pipeline:
  workers: 8
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
      port: 9000
      health_check_service: true
  buffer:
    bounded_blocking:
      buffer_size: 25600
      batch_size: 400
  sink:
    - pipeline:
        name: raw-pipeline
    - pipeline:
        name: service-map-pipeline
raw-pipeline:
  workers: 8
  delay: "3000"
  source:
    pipeline:
      name: otel-collector-pipeline
  buffer:
    bounded_blocking:
      buffer_size: 25600
      batch_size: 3200
  processor:
    - otel_traces:
    - otel_trace_group:
        hosts: [ "https://api.of.our.opensearch.instance.com" ]
        username: dataprepper-user
        password: xxxxxxxxxxxxxxx
  sink:
    - opensearch:
        hosts: [ "https://api.of.our.opensearch.instance.com" ]
        username: dataprepper-user
        password: xxxxxxxxxxxxxxx
        index_type: trace-analytics-raw
        dlq_file: /usr/share/data-prepper/log/dlq-file
service-map-pipeline:
  workers: 8
  delay: "100"
  source:
    pipeline:
      name: otel-collector-pipeline
  processor:
    - service_map:
        window_duration: 180
  buffer:
    bounded_blocking:
      buffer_size: 25600
      batch_size: 400
  sink:
    - opensearch:
        hosts: [ "https://api.of.our.opensearch.instance.com" ]
        username: dataprepper-user
        password: xxxxxxxxxxxxxxx
        index_type: trace-analytics-service-map
        dlq_file: /usr/share/data-prepper/log/dlq-file
```
Maybe I misunderstand what the DLQ is or should be used for, and I only thought it would be suitable for my use case. I also don't know if it is expected behaviour for Data Prepper to write the complete traces into its own logs.
I'm using the newest release of data prepper (2.8.0)
I hope someone can shed some light on my problem and help me fix it.
If you need any additional information that I did not provide or forgot to provide, just ask and I will try my best to give you that information.
Cheers! | Local dataprepper DLQ file is not getting filled with dropped traces | https://api.github.com/repos/opensearch-project/data-prepper/issues/4736/comments | 14 | 2024-07-15T09:09:15Z | 2024-09-27T05:55:38Z | https://github.com/opensearch-project/data-prepper/issues/4736 | 2,408,238,513 | 4,736 |
[
"opensearch-project",
"data-prepper"
] |
Attaching the Docker Compose files for Fluent Bit, OpenSearch & OpenSearch Dashboards:
```
version: ‘3’
services:
fluent-bit:
container_name: fluent-bit
image: fluent/fluent-bit
volumes:
- ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
- ./test.log:/var/log/test.log
networks:
- dscnet
opensearch:
container_name: opensearch
image: opensearchproject/opensearch:latest
environment:
- discovery.type=single-node
- bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
- OPENSEARCH_INITIAL_ADMIN_PASSWORD=Developer@123
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
hard: 65536
ports:
- 9200:9200
- 9600:9600 # required for Performance Analyzer
networks:
- dscnet
dashboards:
image: opensearchproject/opensearch-dashboards:latest
container_name: opensearch-dashboards
ports:
- 5601:5601
expose:
      - "5601"
environment:
      OPENSEARCH_HOSTS: '["https://10.177.164.51:9200/"]'
depends_on:
- opensearch
networks:
- dscnet
networks:
dscnet:
external: true
driver: overlay
name: test-net
```
fluent-bit.conf
```
[INPUT]
name tail
refresh_interval 5
path ./test.log
read_from_head true
[OUTPUT]
Name http
Match *
Host data-prepper
Port 2021
URI /log/ingest
Format json
```
Docker Compose file for Data Prepper:
```
version: '3.7'
services:
data-prepper:
image: opensearchproject/data-prepper:2.0.0
container_name: data-prepper
volumes:
- ./log_pipeline.yaml:/usr/share/data-prepper/pipelines/log_pipeline.yaml
ports:
- 2021:2021
networks:
- test-net
networks:
test-net:
external: true
```
log_pipeline.yaml (if I use `depends_on` here, I get an error):
```
log-pipeline:
source:
http:
ssl: false
processor:
- grok:
match:
      log: [ "%{COMMONAPACHELOG}" ]
sink:
- opensearch:
      hosts: ["https://10.177.164.51:9200/"]
insecure: true
username: admin
password: Developer@123
index: test_logs
```
The test.log file has the necessary permissions.
Fluent Bit, OpenSearch, OpenSearch Dashboards, and Data Prepper are all running without any errors, but Fluent Bit is not able to read the test.log file.
| Fluent-bit is unable to read logs and send it to data prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/4735/comments | 7 | 2024-07-13T12:34:19Z | 2024-07-20T15:21:41Z | https://github.com/opensearch-project/data-prepper/issues/4735 | 2,406,882,647 | 4,735 |
[
"opensearch-project",
"data-prepper"
] | The path option was added long ago to the http and otel sources (#2277 and #2297), but it is not documented on the OpenSearch website. See e.g. https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/http/. This option should be documented so people know it exists. The only reason I found it was because I stumbled into #3670 while trying to figure out how to get the OTel collector's otlphttp exporter to work. | path option in otel and http sources is not documented | https://api.github.com/repos/opensearch-project/data-prepper/issues/4732/comments | 5 | 2024-07-12T13:49:23Z | 2025-04-19T16:32:41Z | https://github.com/opensearch-project/data-prepper/issues/4732 | 2,405,660,805 | 4,732 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper runs into heap OOM issues. This was observed when ingesting OTel metrics via Data Prepper into OpenSearch (~400 metric data points per second).
<img width="512" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/121185951/e01956f3-34db-4f4a-8851-f4d4957038e3">
(The picture shows the summed up heap memory of 2 Data Prepper instances. The instances do not crash, since circuit breakers are configured and constantly open.)
The memory is consumed by objects sitting in the _Old Gen space_.
Possible trigger: **The issue started to occur when updating from DP version 2.7.0 to 2.8.0.**
I created a heap dump:
<img width="677" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/121185951/db5fdb4c-2d75-4389-b19d-2bfd1e8cec6b">
The `org.opensearch.dataprepper.pipeline.Pipeline` object is taking away almost all the memory. Within the dominator tree, I can trace back the memory consumption to the jackson _LockFreePool_:
<img width="1287" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/121185951/369c8327-1a5c-4587-ada4-1b11337de940">
----
There are some known issues with the LockFreePool, e.g. see
- https://github.com/FasterXML/jackson-core/issues/1260
- https://github.com/FasterXML/jackson-databind/issues/4500
> I discovered a memory leak while performing performance test in a multi threaded environment. This seems to be due to the switch to the LockFreePool introduced in 2.17.0.
> if you search around, you'll see that this pool is not working well. Stick with Jackson 2.16 or override the recycler pool to use the thread local one. 2.17.1 goes back to that pool as the default.
I am not sure exactly which Jackson version is used within the OpenSearch sink, but at least we can see that the LockFreePool is used.
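As an illustration of the workaround mentioned in the linked issues ("override the recycler pool to use the thread local one"), here is a minimal sketch using Jackson 2.17's `JsonRecyclerPools` API. Whether and where Data Prepper's OpenSearch sink constructs its `ObjectMapper` this way is an assumption for illustration, not something shown above:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.util.JsonRecyclerPools;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class RecyclerPoolWorkaround {
    public static void main(String[] args) throws Exception {
        // Build a JsonFactory that uses the pre-2.17 thread-local buffer
        // recycler instead of the LockFreePool that became the default in 2.17.0.
        JsonFactory factory = JsonFactory.builder()
                .recyclerPool(JsonRecyclerPools.threadLocalPool())
                .build();
        ObjectMapper mapper = new ObjectMapper(factory);
        System.out.println(mapper.writeValueAsString(Map.of("a", 1)));  // prints {"a":1}
    }
}
```

The other option mentioned in the quoted comment is pinning jackson-core to 2.17.1 or later, which reverts the default pool.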
**To Reproduce**
Steps to reproduce the behavior:
1. Setup Data Prepper with otel metrics source and processor and the opensearch sink.
2. ingest OTel metrics
3. wait (I could not reproduce it reliably in my dev setup, however for some Data Prepper instances in our environment it happens frequently.)
**Expected behavior**
For comparison this is how the heap utilization looks without this issue (same ingestion workload):
<img width="516" alt="image" src="https://github.com/opensearch-project/data-prepper/assets/121185951/3f702af0-09cb-493d-bb03-3c63e7a1ecd4">
**Environment (please complete the following information):**
- Data Prepper 2.8.0
| [BUG] Jackson 2.17.0 LockFreePool causes memory issues | https://api.github.com/repos/opensearch-project/data-prepper/issues/4729/comments | 10 | 2024-07-11T09:09:33Z | 2024-08-02T14:38:11Z | https://github.com/opensearch-project/data-prepper/issues/4729 | 2,402,666,715 | 4,729 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
My pipeline sink/ingestion is failing to initialize, with the following cloudwatch error logs:
```
2024-07-09T22:19:44.802 [dynamodb-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink with a retryable exception.
java.lang.IllegalStateException: org.apache.http.nio.reactor.IOReactorException: Failure opening selector
at org.apache.http.impl.nio.client.IOReactorUtils.create(IOReactorUtils.java:45) ~[httpasyncclient-4.1.5.jar:4.1.5]
at org.apache.http.impl.nio.client.HttpAsyncClientBuilder.build(HttpAsyncClientBuilder.java:686) ~[httpasyncclient-4.1.5.jar:4.1.5]
at java.base/java.security.AccessController.doPrivileged(Native Method) ~[?:?]
at org.opensearch.client.RestClientBuilder.createHttpClient(RestClientBuilder.java:318) ~[opensearch-rest-client-2.7.0.jar:2.7.0]
at java.base/java.security.AccessController.doPrivileged(Native Method) ~[?:?]
at org.opensearch.client.RestClientBuilder.build(RestClientBuilder.java:261) ~[opensearch-rest-client-2.7.0.jar:2.7.0]
at org.opensearch.client.RestHighLevelClient.<init>(RestHighLevelClient.java:284) ~[opensearch-rest-high-level-client-1.3.14.jar:2.7.0]
at org.opensearch.client.RestHighLevelClient.<init>(RestHighLevelClient.java:276) ~[opensearch-rest-high-level-client-1.3.14.jar:2.7.0]
at org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration.createClient(ConnectionConfiguration.java:328) ~[opensearch-2.x.138.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitializeInternal(OpenSearchSink.java:222) ~[opensearch-2.x.138.jar:?]
at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doInitialize(OpenSearchSink.java:201) ~[opensearch-2.x.138.jar:?]
at org.opensearch.dataprepper.model.sink.AbstractSink.initialize(AbstractSink.java:52) ~[data-prepper-api-2.x.138.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.isReady(Pipeline.java:200) ~[data-prepper-core-2.x.138.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:252) ~[data-prepper-core-2.x.138.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.apache.http.nio.reactor.IOReactorException: Failure opening selector
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.<init>(AbstractMultiworkerIOReactor.java:144) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.<init>(DefaultConnectingIOReactor.java:82) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.client.IOReactorUtils.create(IOReactorUtils.java:43) ~[httpasyncclient-4.1.5.jar:4.1.5]
... 18 more
Caused by: java.io.IOException: Too many open files
at java.base/sun.nio.ch.IOUtil.makePipe(Native Method) ~[?:?]
at java.base/sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:83) ~[?:?]
at java.base/sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36) ~[?:?]
at java.base/java.nio.channels.Selector.open(Selector.java:295) ~[?:?]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.<init>(AbstractMultiworkerIOReactor.java:142) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.<init>(DefaultConnectingIOReactor.java:82) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.client.IOReactorUtils.create(IOReactorUtils.java:43) ~[httpasyncclient-4.1.5.jar:4.1.5]
... 18 more
```
There is a similar issue (#4195), but that is an auth issue. Seems like "Failure opening selector" can be a file io issue, but idk how to dive into that.
**To Reproduce**
I'm basically following the [AWS Amplify Opensearch guide](https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/search-and-aggregate-queries/), but have an "Events" model instead of "Todos":
```typescript
// backend.ts
// Define OpenSearch index mappings
// https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/search-and-aggregate-queries/#step-3b-opensearch-service-pipeline
const indexName = "event";
const indexMapping = {
settings: {
number_of_shards: 1,
number_of_replicas: 0,
},
mappings: {
properties: {
id: {
type: "keyword",
},
name: {
type: "text",
},
...
times: { type: "date", format: "time " },
},
},
};
// OpenSearch template definition
const openSearchTemplate = `
version: "2"
dynamodb-pipeline:
source:
dynamodb:
acknowledgments: true
tables:
- table_arn: "${tableArn}"
stream:
start_position: "LATEST"
export:
s3_bucket: "${s3BucketName}"
s3_region: "${region}"
s3_prefix: "${tableName}/"
aws:
sts_role_arn: "${openSearchIntegrationPipelineRole.roleArn}"
region: "${region}"
sink:
- opensearch:
hosts:
- "https://${openSearchDomain.domainEndpoint}"
index: "${indexName}"
index_type: "custom"
template_content: |
${JSON.stringify(indexMapping)}
document_id: '\${getMetadata("primary_key")}'
action: '\${getMetadata("opensearch_action")}'
document_version: '\${getMetadata("document_version")}'
document_version_type: "external"
bulk_size: 4
aws:
sts_role_arn: "${openSearchIntegrationPipelineRole.roleArn}"
region: "${region}"
`;
// Create a CloudWatch log group
const logGroupName = "/aws/vendedlogs/OpenSearchService/pipelines/dev";
[...omitted for brevity]
// Create an OpenSearch Integration Service pipeline
const cfnPipeline = new osis.CfnPipeline(
dataStack,
"OpenSearchIntegrationPipeline",
{
maxUnits: 4,
minUnits: 1,
pipelineConfigurationBody: openSearchTemplate,
pipelineName: "opensearch-ddb-integration",
logPublishingOptions: {
isLoggingEnabled: true,
cloudWatchLogDestination: {
logGroup: logGroupName,
},
},
}
);
// Add OpenSearch data source
// https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/search-and-aggregate-queries/#step-4-expose-new-queries-on-opensearch
const osDataSource = backend.data.addOpenSearchDataSource(
"osDataSource",
openSearchDomain
);
```
**Expected behavior**
I'd expect the pipeline to work, or more involved logging.
| [BUG] Unhelpful error message initializing OpenSearch Ingestion & sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/4717/comments | 5 | 2024-07-09T22:40:19Z | 2024-08-13T16:07:57Z | https://github.com/opensearch-project/data-prepper/issues/4717 | 2,399,346,897 | 4,717 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The S3 DLQ does not use the `AwsCredentialsSupplier` class, so it will not use the default role if it is configured in the `data-prepper-config.yaml`
**Describe the solution you'd like**
Migrate the credentials in the S3 DLQ to use the `AwsCredentialsSupplier` extension plugi
| S3 DLQ Plugin does not use AwsCredentialsSupplier | https://api.github.com/repos/opensearch-project/data-prepper/issues/4716/comments | 0 | 2024-07-09T21:46:43Z | 2024-07-09T21:47:49Z | https://github.com/opensearch-project/data-prepper/issues/4716 | 2,399,275,985 | 4,716 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2024-39689 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>certifi-2023.7.22-py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/4c/dd/2234eab22353ffc7d94e8d13177aaa050113286e93e7b40eae01fbf7c3d9/certifi-2023.7.22-py3-none-any.whl">https://files.pythonhosted.org/packages/4c/dd/2234eab22353ffc7d94e8d13177aaa050113286e93e7b40eae01fbf7c3d9/certifi-2023.7.22-py3-none-any.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **certifi-2023.7.22-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi starting in 2021.05.30 and prior to 2024.07.4 recognized root certificates from `GLOBALTRUST`. Certifi 2024.07.04 removes root certificates from `GLOBALTRUST` from the root store. These are in the process of being removed from Mozilla's trust store. `GLOBALTRUST`'s root certificates are being removed pursuant to an investigation which identified "long-running and unresolved compliance issues."
<p>Publish Date: 2024-07-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-39689>CVE-2024-39689</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/certifi/python-certifi/security/advisories/GHSA-248v-346w-9cwc">https://github.com/certifi/python-certifi/security/advisories/GHSA-248v-346w-9cwc</a></p>
<p>Release Date: 2024-07-05</p>
<p>Fix Resolution: certifi - 2024.07.04</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2024-39689 (High) detected in certifi-2023.7.22-py3-none-any.whl | https://api.github.com/repos/opensearch-project/data-prepper/issues/4715/comments | 0 | 2024-07-09T21:06:18Z | 2024-07-15T18:55:13Z | https://github.com/opensearch-project/data-prepper/issues/4715 | 2,399,210,989 | 4,715 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2024-5569 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zipp-3.15.0-py3-none-any.whl</b></p></summary>
<p>Backport of pathlib-compatible object wrapper for zip files</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/5b/fa/c9e82bbe1af6266adf08afb563905eb87cab83fde00a0a08963510621047/zipp-3.15.0-py3-none-any.whl">https://files.pythonhosted.org/packages/5b/fa/c9e82bbe1af6266adf08afb563905eb87cab83fde00a0a08963510621047/zipp-3.15.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt,/release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **zipp-3.15.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A Denial of Service (DoS) vulnerability exists in the jaraco/zipp library, affecting all versions prior to 3.19.1. The vulnerability is triggered when processing a specially crafted zip file that leads to an infinite loop. This issue also impacts the zipfile module of CPython, as features from the third-party zipp library are later merged into CPython, and the affected code is identical in both projects. The infinite loop can be initiated through the use of functions affecting the `Path` module in both zipp and zipfile, such as `joinpath`, the overloaded division operator, and `iterdir`. Although the infinite loop is not resource exhaustive, it prevents the application from responding. The vulnerability was addressed in version 3.19.1 of jaraco/zipp.
<p>Publish Date: 2024-07-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-5569>CVE-2024-5569</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.com/bounties/be898306-11f9-46b4-b28c-f4c4aa4ffbae">https://huntr.com/bounties/be898306-11f9-46b4-b28c-f4c4aa4ffbae</a></p>
<p>Release Date: 2024-07-09</p>
<p>Fix Resolution: 3.19.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2024-5569 (Low) detected in zipp-3.15.0-py3-none-any.whl | https://api.github.com/repos/opensearch-project/data-prepper/issues/4714/comments | 0 | 2024-07-09T21:06:16Z | 2024-07-15T18:55:14Z | https://github.com/opensearch-project/data-prepper/issues/4714 | 2,399,210,935 | 4,714 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2024-3651 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>idna-3.3-py3-none-any.whl</b></p></summary>
<p>Internationalized Domain Names in Applications (IDNA)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/04/a2/d918dcd22354d8958fe113e1a3630137e0fc8b44859ade3063982eacd2a4/idna-3.3-py3-none-any.whl">https://files.pythonhosted.org/packages/04/a2/d918dcd22354d8958fe113e1a3630137e0fc8b44859ade3063982eacd2a4/idna-3.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **idna-3.3-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/1d259cff3a8d8a529c40142676c9be06e931b38d">1d259cff3a8d8a529c40142676c9be06e931b38d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was identified in the kjd/idna library, specifically within the `idna.encode()` function, affecting version 3.6. The issue arises from the function's handling of crafted input strings, which can lead to quadratic complexity and consequently, a denial of service condition. This vulnerability is triggered by a crafted input that causes the `idna.encode()` function to process the input with considerable computational load, significantly increasing the processing time in a quadratic manner relative to the input size.
<p>Publish Date: 2024-07-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-3651>CVE-2024-3651</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2024-3651">https://www.cve.org/CVERecord?id=CVE-2024-3651</a></p>
<p>Release Date: 2024-07-07</p>
<p>Fix Resolution: idna - 3.7</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2024-3651 (High) detected in idna-3.3-py3-none-any.whl | https://api.github.com/repos/opensearch-project/data-prepper/issues/4713/comments | 0 | 2024-07-09T21:06:14Z | 2024-07-15T18:55:14Z | https://github.com/opensearch-project/data-prepper/issues/4713 | 2,399,210,882 | 4,713 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, DataPrepper users do not have a way to see how an event is created and transformed in the pipeline until the event is written to a sink.
It would help users to see how an event gets created and transformed by different components in the pipeline
**Describe the solution you'd like**
Solution is to allow capturing of a sampled event transformations at various places in the DataPrepper and send it a user configured endpoint (any sink that is currently supported).
The final live capture event would be a collection of events that are captured at the various stages of the pipeline, like, at the time of event creation, after every processor, after routing decisions, after codecs, and so on.
The live capturing event state can be disabled by default and be enabled by a control command. When enabled, one of the events received is "marked" for tracking periodically (for example, 1 event per second) and the marked event's metadata is populated with the live capture information. When the event is finally, released, the captured data is sent to the configured endpoint. If an event gets copied as part of routing (and sub-pipelines), then the entire live capture information accumulated so far is also copied to the new event. The final live capture event may look like this
```
{"LiveCapture":
[
{
"Version": "1.0",
"Time": Time in ISO-8601 format,
"description": "Source XYZ created event",
"event": {
"key1" : "value1",
"key2" : "value2"
}
},
{
{
"Version": "1.0",
"Time": Time in ISO-8601 format,
"description": " after add entries processor execution",
"event": {
"key1" : "value1",
"key2" : "value2",
"key3": "value3"
}
},
{
{
"Version": "1.0",
"Time": Time in ISO-8601 format,
"description": " Event matched the route <route-name>",
"event": {
"key1" : "value1",
"key2" : "value2",
"key3": "value3"
}
},
{
"Version": "1.0",
"Time": Time in ISO-8601 format,
"description": " Received by sink <sink-name>",
"event": {
"key1" : "value1",
"key2" : "value2",
"key3": "value3"
}
},
{
"Version": "1.0",
"Time": Time in ISO-8601 format,
"description": " sink <sink-name> modified the event with include_tags",
"event": {
"key1" : "value1",
"key2" : "value2",
"key3" : "value3",
"tags" : [tag1, tag2]
}
}
]
}
```
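For completeness, a purely hypothetical pipeline-level configuration for enabling this feature could look like the following sketch. All option names here are invented for illustration; none of them exist today:

```yaml
# hypothetical sketch only; option names are invented for illustration
live_capture:
  enabled: false        # toggled at runtime via a control command
  sample_rate: 1        # marked events per second
  sink:
    - stdout:           # any currently supported sink could receive the captures
```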
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Support live tracking of event transformation | https://api.github.com/repos/opensearch-project/data-prepper/issues/4711/comments | 0 | 2024-07-08T22:51:12Z | 2024-07-08T22:51:29Z | https://github.com/opensearch-project/data-prepper/issues/4711 | 2,396,705,455 | 4,711 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The data prepper S3 source only supports `gzip` decompression
**Describe the solution you'd like**
Support `.zip` decompression
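As a sketch of what the configuration could look like once supported (the `zip` value below is hypothetical; today the S3 source's `compression` option accepts values such as `none`, `gzip`, and `automatic`):

```yaml
source:
  s3:
    compression: zip   # hypothetical new value alongside none/gzip/automatic
    codec:
      newline:
```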
| Support .zip decompression in S3 source | https://api.github.com/repos/opensearch-project/data-prepper/issues/4710/comments | 1 | 2024-07-08T15:31:37Z | 2025-03-04T20:54:18Z | https://github.com/opensearch-project/data-prepper/issues/4710 | 2,395,953,268 | 4,710 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user with an S3 bucket that contains a mix of json and csv objects, I would like to use a single data prepper pipeline to process these objects based on the file extension, rather than having to create multiple pipelines
**Describe the solution you'd like**
I would like a new codec `automatic` that dynamically checks the object extension to determine which codec to use. For example, when `automatic` is set, objects with the `.csv` extension would use `csv` codec, objects with `.json` extension would use the `json` codec, and so on. | Support dynamically applying codecs in the S3 source | https://api.github.com/repos/opensearch-project/data-prepper/issues/4709/comments | 2 | 2024-07-08T15:28:17Z | 2024-07-16T19:36:03Z | https://github.com/opensearch-project/data-prepper/issues/4709 | 2,395,946,460 | 4,709 |
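A hypothetical configuration sketch for the requested codec (the `automatic` codec does not exist today; the name and behavior are taken from the description above):

```yaml
source:
  s3:
    codec:
      automatic:   # hypothetical: .csv objects use the csv codec, .json objects use the json codec, etc.
```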
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Pipeline users want to send events to AWS Lambda.
**Describe the solution you'd like**
Create a processor in Data Prepper which uses Lambda as a remote processor. It should support:
- Retries
- Buffering capabilities
Without Batching:
```
lambda-pipeline:
...
processor:
- lambda:
aws:
region: us-east-1
sts_role_arn: <arn>
sts_overrides:
function_name: "uploadToS3Lambda"
mode: synchronous
max_retries: 3
dlq:
s3:
bucket: test-bucket
key_path_prefix: dlq/
```
With Batching:
```
lambda-pipeline:
...
processor:
- lambda:
aws:
region: us-east-1
sts_role_arn: <arn>
sts_overrides:
function_name: "uploadToS3Lambda"
mode: synchronous
max_retries: 3
batch:
batch_key: "user_key"
threshold:
event_count: 3
maximum_size: 6mb
event_collect_timeout: 15s
...
```
**Additional context**
Add any other context or screenshots about the feature request here.
| AWS Lambda as Processor and Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/4699/comments | 0 | 2024-07-01T22:57:54Z | 2024-11-04T23:15:45Z | https://github.com/opensearch-project/data-prepper/issues/4699 | 2,384,874,425 | 4,699 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Nested json parsing can introduce challenges with OpenSearch field limits and type conflicts between object and concrete field types. It would be helpful to be able to only parse the top level of JSON fields, leaving subfields as strings.
**Describe the solution you'd like**
Add an option to only parse the top level of a JSON-encoded string field.
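To make the requested behaviour concrete, here is a rough, hypothetical sketch (not Data Prepper's actual implementation) of top-level-only parsing using Jackson, where nested objects and arrays are kept as raw JSON strings instead of being expanded into subfields:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class TopLevelJsonParse {
    // Parse only the top level of a JSON document; container values
    // (objects and arrays) are kept as their raw JSON string form.
    static Map<String, Object> parseTopLevel(String json) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(json);
        Map<String, Object> out = new LinkedHashMap<>();
        Iterator<Map.Entry<String, JsonNode>> it = root.fields();
        while (it.hasNext()) {
            Map.Entry<String, JsonNode> e = it.next();
            JsonNode v = e.getValue();
            out.put(e.getKey(),
                    v.isContainerNode() ? v.toString() : mapper.convertValue(v, Object.class));
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // The nested object under "b" stays a single string value.
        System.out.println(parseTopLevel("{\"a\":1,\"b\":{\"c\":2}}"));
    }
}
```

Keeping subfields as strings means OpenSearch sees a single concrete field for the nested value instead of a tree of mapped subfields, sidestepping field-limit and object-vs-concrete type conflicts.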
**Describe alternatives you've considered (Optional)**
Perform JSON parsing elsewhere, e.g. fluent-bit.
**Additional context**
N/A
| parse_json: add ability to only parse top level fields | https://api.github.com/repos/opensearch-project/data-prepper/issues/4695/comments | 2 | 2024-07-01T04:29:04Z | 2024-10-11T15:54:28Z | https://github.com/opensearch-project/data-prepper/issues/4695 | 2,382,781,256 | 4,695 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I have a data pipeline built as a combination of AOSS pipeline and AOSS collection. This pipeline is a real time monitor for logs.
We recently had an outage, so the source did not move logs for a few days. When we finally unblocked the pipeline and restarted ingestion, all the days were moved at once and the AOSS pipeline started to ingest oldest to newest. This behavior does not work for us, since we prioritize fresher data over older data because we want a real-time monitor.
**Describe the solution you'd like**
I propose to introduce a a new behavior where the pipeline can discard data in the queue that are older than XX (days, hours,minutes). In this way users may choose to prioritize fresher data over older data without causing the queue to grow indefinitely. In my case I may just set this flag on 1H and only ingest fresh data (at least for some time) forgetting about the past.
For example:
`max_retention: 1h`
`max_retention: 1d`
`max_retention: 1w`
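For example, placed in an S3 scan source configuration, the hypothetical option (name and placement invented for illustration) might look like:

```yaml
source:
  s3:
    scan:
      buckets:
        - bucket:
            name: my-bucket   # placeholder bucket name
    # hypothetical: queued objects older than this are discarded instead of ingested
    max_retention: 1h
```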
**Describe alternatives you've considered (Optional)**
I dont have any.
**Additional context**
Related to https://github.com/opensearch-project/data-prepper/issues/4666
| Enable pipeline to discard data older than XX | https://api.github.com/repos/opensearch-project/data-prepper/issues/4667/comments | 4 | 2024-06-29T16:25:25Z | 2024-07-03T03:00:02Z | https://github.com/opensearch-project/data-prepper/issues/4667 | 2,381,842,988 | 4,667 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I have a data pipeline built as a combination of AOSS pipeline and AOSS collection. This pipeline is a real time monitor for logs.
We recently had an outage, so the source did not move logs for a few days. When we finally unblocked the pipeline and restarted ingestion, all the days were moved at once and the AOSS pipeline started to ingest oldest to newest. This behavior does not work for us, since we prioritize fresher data over older data because we want a real-time monitor.
**Describe the solution you'd like**
The request is to implement an alternative behavior controlled by a setting (e.g., `order: (newer_first|older_first)`) where the user can control the order of ingestion. In particular, it should be:
`older_first`: (FIFO) older records are ingested first. Any new record added to the ingestion queue does not change the order (current behavior)
`newer_first`: (LIFO) newer records are added to the top of the ingestion queue and comes first changing the order of the ingestion.
**Describe alternatives you've considered (Optional)**
I have no alternatives for now.
**Additional context**
I think this feature should also be coupled with another feature: discarding data older than XX if it is still waiting to be ingested. That would also alleviate the problem above.
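For illustration, a conceptual sketch of the two orderings (this is not Data Prepper code; the object keys and timestamps below are made up):

```python
import heapq
from datetime import datetime

objects = [
    ("logs/2024-06-25.gz", datetime(2024, 6, 25)),
    ("logs/2024-06-29.gz", datetime(2024, 6, 29)),
    ("logs/2024-06-27.gz", datetime(2024, 6, 27)),
]

def ingestion_order(objs, order="older_first"):
    """Return object keys in the order a scan would hand them to the pipeline."""
    # Negating the timestamp turns the min-heap into a max-heap (LIFO by age).
    sign = 1 if order == "older_first" else -1
    heap = [(sign * ts.timestamp(), key) for key, ts in objs]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(ingestion_order(objects))                 # oldest first (current behavior)
print(ingestion_order(objects, "newer_first"))  # newest first (proposed)
```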
| Enable ingestion priority in S3 scan | https://api.github.com/repos/opensearch-project/data-prepper/issues/4666/comments | 1 | 2024-06-29T16:20:55Z | 2024-07-02T19:44:37Z | https://github.com/opensearch-project/data-prepper/issues/4666 | 2,381,837,965 | 4,666 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Creating a stateful processor that requires peer forwarding does not work with multiple workers.
```
2024-06-25T00:02:23,474 [main] ERROR org.opensearch.dataprepper.parser.PipelineTransformer - Construction of pipeline components failed, skipping building of pipeline [service-map-pipeline] and its connected pipelines
java.lang.RuntimeException: Data Prepper 2.0 will only support a single peer-forwarder per pipeline/plugin type
at org.opensearch.dataprepper.peerforwarder.DefaultPeerForwarderProvider.register(DefaultPeerForwarderProvider.java:42) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.peerforwarder.LocalModePeerForwarderProvider.register(LocalModePeerForwarderProvider.java:32) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.peerforwarder.PeerForwardingProcessorDecorator.lambda$decorateProcessors$1(PeerForwardingProcessorDecorator.java:73) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.opensearch.dataprepper.peerforwarder.PeerForwardingProcessorDecorator.decorateProcessors(PeerForwardingProcessorDecorator.java:76) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.lambda$buildPipelineFromConfiguration$3(PipelineTransformer.java:132) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.buildPipelineFromConfiguration(PipelineTransformer.java:137) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.transformConfiguration(PipelineTransformer.java:99) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.DataPrepper.<init>(DataPrepper.java:69) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:2.9.0-SNAPSHOT]
```
**To Reproduce**
1. Edit [trace_analytics_no_ssl_2x.yml](https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics_no_ssl_2x.yml) by adding `workers: 2` to the `service-map-pipeline` pipeline.
2. Go to `examples/jaeger-hotrod`
3. `docker compose up -d`
4. `docker logs -f data-prepper`
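The change in step 1 is a single added line; everything else in the linked pipeline file stays unchanged and is elided here:

```yaml
service-map-pipeline:
  workers: 2   # adding this line triggers the failure below
  # ...source/processor/sink exactly as in trace_analytics_no_ssl_2x.yml...
```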
**Expected behavior**
This should work just like it used to.
**Environment (please complete the following information):**
Data Prepper running from `main`.
| [BUG] Unable to create stateful processors with multiple workers. | https://api.github.com/repos/opensearch-project/data-prepper/issues/4660/comments | 1 | 2024-06-25T00:06:01Z | 2024-08-28T14:57:49Z | https://github.com/opensearch-project/data-prepper/issues/4660 | 2,371,334,061 | 4,660 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Today the blueprint contains a `template` field that allows adding an inline definition of the index template that the ingestion should write into.
```yaml
....
template_content: >
{
"index_patterns": [
"ss4o_metrics-*-*"
],
"template": {
"mappings": {
"_meta": {
....
```
It would make sense to add a similar capability for creating a dashboard that visualizes the ingested data.
**Describe the solution you'd like**
In a similar manner to the `template` field, add a `dashboard_content` field that is an embedded `ndjson` file.
This would either call the saved objects API of the Dashboards server or store the ndjson documents directly in the `.kibana` index as dashboard docs.
```yaml
....
dashboard_content: >
{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"query\":{\"query\":\"\",\"language\":\"kuery\"},\"filter\":[],\"indexRefName\":\"kibanaSavedObjectMeta.searchSourceJSON.index\"}"},"title":"ingest-services-metrics-count","uiStateJSON":"{}","version":1,"visState":"{\"title\":\"ingest-services-metrics-count\",\"type\":\"table\",\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"params\":{\"customLabel\":\"count\"},\"schema\":\"metric\"},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"params\":{\"field\":\"serviceName\",\"orderBy\":\"1\",\"order\":\"desc\",\"size\":25,\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"service\"},\"schema\":\"bucket\"}],\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"showTotal\":false,\"totalFunc\":\"sum\",\"percentageCol\":\"\"}}"},"id":"cd23dc20-2f60-11ef-9514-4fea472f4f07","migrationVersion":{"visualization":"7.10.0"},"references":[{"id":"12a67740-2f60-11ef-9514-4fea472f4f07","name":"kibanaSavedObjectMeta.searchSourceJSON.index","type":"index-pattern"}],"type":"visualization","updated_at":"2024-06-20T23:57:07.682Z","version":"WzU1LDFd"}
{"attributes":{"description":"count services events","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"query\":{\"query\":\"\",\"language\":\"kuery\"},\"filter\":[],\"indexRefName\":\"kibanaSavedObjectMeta.searchSourceJSON.index\"}"},"title":"ingest-services-events-count","uiStateJSON":"{}","version":1,"visState":"{\"title\":\"ingest-services-events-count\",\"type\":\"table\",\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"params\":{\"customLabel\":\"count\"},\"schema\":\"metric\"},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"params\":{\"field\":\"serviceName.keyword\",\"orderBy\":\"1\",\"order\":\"desc\",\"size\":25,\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"service\"},\"schema\":\"bucket\"}],\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"showTotal\":false,\"totalFunc\":\"sum\",\"percentageCol\":\"\"}}"},"id":"a7d3d1a0-14f3-11ef-8c27-a723ded8020e","migrationVersion":{"visualization":"7.10.0"},"references":[{"id":"2ba34950-14f1-11ef-8c27-a723ded8020e","name":"kibanaSavedObjectMeta.searchSourceJSON.index","type":"index-pattern"}],"type":"visualization","updated_at":"2024-06-20T23:17:22.417Z","version":"WzE4LDFd"}
....
```
**Additional context**
```
# import saved objects (multipart upload of an ndjson export file)
# note: the JSON body {"type":[...],"includeReferencesDeep":true} belongs to
# the _export API; _import expects a multipart form, and the multipart
# Content-Type is set automatically, so no application/json header is needed
curl \
  -snk \
  --netrc-file ~/.netrc \
  -H "osd-xsrf: true" \
  -XPOST \
  https://my.opensearch-dashboards.url/api/saved_objects/_import \
  --form file=@dashboards.ndjson
```
| Add dashboard inline asset to the blueprint | https://api.github.com/repos/opensearch-project/data-prepper/issues/4657/comments | 0 | 2024-06-24T19:44:52Z | 2024-06-25T19:31:38Z | https://github.com/opensearch-project/data-prepper/issues/4657 | 2,370,969,422 | 4,657 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The pipeline fails to connect to OpenSearch when `build_flavor` is not returned from the root OpenSearch API. The source seems to point to this line - https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/source/opensearch/worker/client/SearchAccessorStrategy.java#L198
Do we need to get buildFlavor?
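For illustration, a minimal sketch of a null-safe distribution check that does not require `build_flavor`. The field names match the root-API response shown under Additional context; the decision logic itself is an assumption, not the actual Data Prepper code:

```python
def detect_distribution(info: dict) -> str:
    """Classify a cluster from its root-API response without requiring build_flavor."""
    version = info.get("version", {})
    if "OpenSearch" in info.get("tagline", "") or version.get("distribution") == "opensearch":
        return "opensearch"
    # Some Elasticsearch 7.10.x distributions omit build_flavor entirely,
    # so fall back to a default instead of failing on the missing field.
    return version.get("build_flavor", "default")

wazuh_response = {
    "version": {"number": "7.10.2", "build_type": "rpm"},
    "tagline": "The OpenSearch Project: https://opensearch.org/",
}
print(detect_distribution(wazuh_response))  # -> opensearch
```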
**To Reproduce**
Steps to reproduce the behavior:
Pipeline:
```yaml
version: '2'
opensearch-source-pipeline:
source:
opensearch:
hosts: ['https://192.168.1.100:9200']
username: 'admin'
password: 'somepass'
indices:
include:
- index_name_regex: 'wazuh-alerts-4.x*'
scheduling:
interval: 'PT5M'
connection:
insecure: true
sink:
- stdout:
```
**Expected behavior**
Pipeline connects
**Additional context**
OpenSearch API Return Data:
```
{
"name": "node-1",
"cluster_name": "wazuh-cluster",
"cluster_uuid": "EzI18fqRRsqyp_bqBT0xyQ",
"version": {
"number": "7.10.2",
"build_type": "rpm",
"build_hash": "eee49cb340edc6c4d489bcd9324dda571fc8dc03",
"build_date": "2023-09-20T23:54:29.889267151Z",
"build_snapshot": false,
"lucene_version": "9.7.0",
"minimum_wire_compatibility_version": "7.10.0",
"minimum_index_compatibility_version": "7.0.0"
},
"tagline": "The OpenSearch Project: https://opensearch.org/"
}
```
Error:
```
2024-06-21T18:14:35,773 [opensearch-source-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [opensearch-source-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Unable to call info API using the elasticsearch client
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) [data-prepper-core-2.8.0.jar:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: java.lang.RuntimeException: Unable to call info API using the elasticsearch client
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getDistributionAndVersionNumber(SearchAccessorStrategy.java:199) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getSearchAccessor(SearchAccessorStrategy.java:115) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.startProcess(OpenSearchSource.java:75) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.start(OpenSearchSource.java:65) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.startSourceAndProcessors(Pipeline.java:215) ~[data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:260) ~[data-prepper-core-2.8.0.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
... 2 more
Caused by: co.elastic.clients.util.MissingRequiredPropertyException: Missing required property 'ElasticsearchVersionInfo.buildFlavor'
at co.elastic.clients.util.ApiTypeHelper.requireNonNull(ApiTypeHelper.java:76) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo.<init>(ElasticsearchVersionInfo.java:74) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo.<init>(ElasticsearchVersionInfo.java:50) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo$Builder.build(ElasticsearchVersionInfo.java:300) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.elasticsearch._types.ElasticsearchVersionInfo$Builder.build(ElasticsearchVersionInfo.java:200) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.ObjectBuilderDeserializer.deserialize(ObjectBuilderDeserializer.java:80) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.DelegatingDeserializer$SameType.deserialize(DelegatingDeserializer.java:43) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.ObjectDeserializer$FieldObjectDeserializer.deserialize(ObjectDeserializer.java:72) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.ObjectDeserializer.deserialize(ObjectDeserializer.java:176) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.ObjectDeserializer.deserialize(ObjectDeserializer.java:137) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.JsonpDeserializer.deserialize(JsonpDeserializer.java:75) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.ObjectBuilderDeserializer.deserialize(ObjectBuilderDeserializer.java:79) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.json.DelegatingDeserializer$SameType.deserialize(DelegatingDeserializer.java:43) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.transport.rest_client.RestClientTransport.decodeResponse(RestClientTransport.java:328) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.transport.rest_client.RestClientTransport.getHighLevelResponse(RestClientTransport.java:294) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:147) ~[elasticsearch-java-7.17.0.jar:?]
at co.elastic.clients.elasticsearch.ElasticsearchClient.info(ElasticsearchClient.java:983) ~[elasticsearch-java-7.17.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getDistributionAndVersionNumber(SearchAccessorStrategy.java:196) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.worker.client.SearchAccessorStrategy.getSearchAccessor(SearchAccessorStrategy.java:115) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.startProcess(OpenSearchSource.java:75) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.source.opensearch.OpenSearchSource.start(OpenSearchSource.java:65) ~[opensearch-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.startSourceAndProcessors(Pipeline.java:215) ~[data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:260) ~[data-prepper-core-2.8.0.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
... 2 more
```
| [BUG] Can't connect to OpenSearch if Build Flavor is not returned | https://api.github.com/repos/opensearch-project/data-prepper/issues/4654/comments | 5 | 2024-06-21T18:24:27Z | 2024-07-02T15:44:55Z | https://github.com/opensearch-project/data-prepper/issues/4654 | 2,367,048,054 | 4,654 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The Confluent Schema Registry supports the PROTOBUF schema type; when consuming data from a topic with a PROTOBUF schema, an NPE occurs at runtime.
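For illustration, a sketch of the graceful failure being requested. This is not the actual `KafkaSource` code, and the set of supported formats here is an assumption:

```python
SUPPORTED_FORMATS = {"AVRO", "JSON", "PLAINTEXT"}

def resolve_schema_format(registry_type):
    """Fail fast with a clear error instead of a NullPointerException-style crash."""
    if registry_type is None:
        raise ValueError(
            "Schema type could not be resolved from the Confluent registry; "
            "PROTOBUF topics are not supported by the kafka source."
        )
    fmt = registry_type.upper()
    if fmt not in SUPPORTED_FORMATS:
        raise ValueError(f"Unsupported schema type from registry: {registry_type}")
    return fmt
```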
```
2024-06-19T17:38:03,209 [main] INFO org.opensearch.dataprepper.pipeline.server.DataPrepperServer - Data Prepper server running at :4900
2024-06-19T17:38:03,506 [kafka-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.kafka.source.KafkaSource - Starting consumer with the properties : {value.deserializer=class org.apache.kafka.common.serialization.StringDeserializer, auto.register.schemas=false, basic.auth.credentials.source=USER_INFO, group.id=topic_3_local7, reconnect.backoff.ms=10000, max.partition.fetch.bytes=1048576, bootstrap.servers=pkc-rgm37.us-west-2.aws.confluent.cloud:9092, retry.backoff.ms=10000, schema.registry.url=https://psrc-e8157.us-east-2.aws.confluent.cloud, enable.auto.commit=false, sasl.mechanism=PLAIN, sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="2PYKP5A5VDPZDJZU" password="40NGkZ6JYgehKBfDHa2AoLjmIzWwn19HMXthIr/wYankBxPQhFGMI2oQpJ83a0qc";, fetch.max.wait.ms=500, sasl.client.callback.handler.class=class org.opensearch.dataprepper.plugins.kafka.authenticator.DynamicSaslClientCallbackHandler, session.timeout.ms=45000, client.id=AWSOpenSearchIngestion-1540FD4C-C1E6-4F02-880A-4FE8CA226E1E, key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, max.poll.records=500, auto.commit.interval.ms=5000, heartbeat.interval.ms=5000, security.protocol=SASL_SSL, basic.auth.user.info=XRG6T73EHFA32DJU:bxbOivHjQZfw3k88orcxHmeGQz3Xvv5hkwYeQ6urXIPCgC8lxMtiAH9vNrY2py6p, fetch.min.bytes=1, fetch.max.bytes=52428800, max.poll.interval.ms=300000, auto.offset.reset=earliest}
2024-06-19T17:38:03,508 [kafka-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.plugins.kafka.source.KafkaSource - Failed to setup the Kafka Source Plugin.
java.lang.NullPointerException: Cannot invoke "org.opensearch.dataprepper.plugins.kafka.util.MessageFormat.ordinal()" because "schema" is null
at org.opensearch.dataprepper.plugins.kafka.source.KafkaSource.createKafkaConsumer(KafkaSource.java:166) ~[kafka-plugins-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.plugins.kafka.source.KafkaSource.lambda$start$0(KafkaSource.java:129) ~[kafka-plugins-2.9.0-SNAPSHOT.jar:?]
at java.base/java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:104) ~[?:?]
at java.base/java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:617) ~[?:?]
at org.opensearch.dataprepper.plugins.kafka.source.KafkaSource.lambda$start$1(KafkaSource.java:126) ~[kafka-plugins-2.9.0-SNAPSHOT.jar:?]
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
at org.opensearch.dataprepper.plugins.kafka.source.KafkaSource.start(KafkaSource.java:116) ~[kafka-plugins-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.startSourceAndProcessors(Pipeline.java:215) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:260) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
2024-06-19T17:38:03,516 [kafka-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor - Pipeline [kafka-pipeline] process worker encountered a fatal exception, cannot proceed further
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.opensearch.dataprepper.plugins.kafka.util.MessageFormat.ordinal()" because "schema" is null
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.opensearch.dataprepper.pipeline.common.PipelineThreadPoolExecutor.afterExecute(PipelineThreadPoolExecutor.java:70) [data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.opensearch.dataprepper.plugins.kafka.util.MessageFormat.ordinal()" because "schema" is null
at org.opensearch.dataprepper.plugins.kafka.source.KafkaSource.lambda$start$1(KafkaSource.java:159) ~[kafka-plugins-2.9.0-SNAPSHOT.jar:?]
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
at org.opensearch.dataprepper.plugins.kafka.source.KafkaSource.start(KafkaSource.java:116) ~[kafka-plugins-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.startSourceAndProcessors(Pipeline.java:215) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.pipeline.Pipeline.lambda$execute$2(Pipeline.java:260) ~[data-prepper-core-2.9.0-SNAPSHOT.jar:?]
Caused by: java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.opensearch.dataprepper.plugins.kafka.util.MessageFormat.ordinal()" because "schema" is null
```
**To Reproduce**
Run data prepper with the kafka source and confluent schema registry with protobuf topic
```
kafka-pipeline:
source:
kafka:
bootstrap_servers:
- "pkc-rgm37.us-west-2.aws.confluent.cloud:9092"
authentication:
sasl:
plain:
username: <username>
password: <password>
schema:
type: "confluent"
registry_url: "https://psrc-e8157.us-east-2.aws.confluent.cloud"
api_key: <api_key>
api_secret: <api_secret>
basic_auth_credentials_source: "USER_INFO"
encryption:
type: "ssl"
topics:
- name: <protobuf topic>
group_id: <group id>
workers: 1
client_id: "AWSOpenSearchIngestion-1540FD4C-C1E6-4F02-880A-4FE8CA226E1E"
```
| [BUG] Handle unsupported PROTOBUF schema type gracefully from Confluent Schema registry in kafka source | https://api.github.com/repos/opensearch-project/data-prepper/issues/4648/comments | 0 | 2024-06-19T22:40:11Z | 2024-06-20T16:51:40Z | https://github.com/opensearch-project/data-prepper/issues/4648 | 2,363,212,044 | 4,648 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
here is the Otel spec we are expecting
```
"exemplar": {
"properties": {
"time": {
"type": "date"
},
"traceId": {
"ignore_above": 256,
"type": "keyword"
},
"serviceName": {
"ignore_above": 256,
"type": "keyword"
},
"spanId": {
"ignore_above": 256,
"type": "keyword"
}
}
}
```
which is based on the [otel protocol definition](https://github.com/open-telemetry/opentelemetry-proto/blob/bd7cf55b6d45f3c587d2131b68a7e5a501bdb10c/opentelemetry/proto/metrics/v1/metrics.proto#L660)
But this is the JSON currently being generated:
```
"exemplars": [
{
"time": "2024-06-18T05:11:33.182223836Z",
"value": 1,
"attributes": {
"clientip": "1.1.1.1",
"id": "id1",
"serviceName": "svc0",
"traceId": "trace0",
"spanId": "span1"
},
"spanId": null,
"traceId": null
}
],
```
**Expected behavior**
1) please rename `exemplars` to `exemplar`
2) populate the `spanId`, `traceId` in the root level
3) add `serviceName` field at the root level of the exemplar
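For illustration, a sketch of the requested transformation applied to the JSON above; the promotion logic (falling back to `attributes` when the root fields are null) is an assumption about the desired behavior:

```python
def fix_exemplars(metric: dict) -> dict:
    """Rename 'exemplars' to 'exemplar' and promote ids/serviceName to the root."""
    fixed = []
    for ex in metric.pop("exemplars", []):
        attrs = ex.get("attributes", {})
        # Promote identifiers out of attributes so they land in the mapped root fields.
        ex["spanId"] = ex.get("spanId") or attrs.get("spanId")
        ex["traceId"] = ex.get("traceId") or attrs.get("traceId")
        ex["serviceName"] = attrs.get("serviceName")
        fixed.append(ex)
    metric["exemplar"] = fixed
    return metric

metric = {
    "exemplars": [{
        "time": "2024-06-18T05:11:33.182223836Z",
        "value": 1,
        "attributes": {"serviceName": "svc0", "traceId": "trace0", "spanId": "span1"},
        "spanId": None,
        "traceId": None,
    }]
}
print(fix_exemplars(metric)["exemplar"][0]["serviceName"])  # -> svc0
```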
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] update Metrics exemplar field schema | https://api.github.com/repos/opensearch-project/data-prepper/issues/4647/comments | 8 | 2024-06-19T22:31:54Z | 2024-06-21T21:09:26Z | https://github.com/opensearch-project/data-prepper/issues/4647 | 2,363,206,528 | 4,647 |
[
"opensearch-project",
"data-prepper"
] | This issue is for tracking migration of processors to use the new `EventKey`. See #1916 for more details.
- [x] `mutate-event-processors` delete_entries/rename_keys - #4636
- [ ] `mutate-event-processors` all others
- [x] `mutate-string-processors` - #4649
- [x] `user-agent-processor` - #4628
- [x] `drop-events-processor` (does not need any keys)
- [ ] `grok-processor`
- [ ] `key-value-processor`
- [x] `parse-json-processor` - #4842
- [ ] `split-event-processor`
- [ ] `aggregate-processor`
- [ ] `csv-processor`
- [ ] `decompress-processor`
- [ ] `dissect-processor`
- [ ] `flatten-processor`
- [ ] `geoip-processor`
- [ ] `obfuscate-processor`
- [ ] `translate-processor`
- [ ] `truncate-processor`
- [ ] `write-json-processor` | Migrate to use EventKey in processors | https://api.github.com/repos/opensearch-project/data-prepper/issues/4646/comments | 0 | 2024-06-19T22:26:38Z | 2025-02-03T22:17:09Z | https://github.com/opensearch-project/data-prepper/issues/4646 | 2,363,202,967 | 4,646 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I am not able to find a link or document that explains how to store data in an S3 bucket after some period of time.
**Expected behavior**
I expect to be able to store logs in an S3 bucket after some period of time.
**Chart Name**
opensearch 2.20.0 | Store Logs at S3 bucket | https://api.github.com/repos/opensearch-project/data-prepper/issues/4651/comments | 2 | 2024-06-19T12:39:58Z | 2024-06-25T19:40:33Z | https://github.com/opensearch-project/data-prepper/issues/4651 | 2,365,352,248 | 4,651 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The aggregate processor's `count` action counts events with the same `identification_keys`. There are some cases where we need to count based on secondary keys. For example, when OTel traces with a service name and traceId are sent through the aggregate processor, counting the number of traces in each service is not possible, because using both `serviceName` and `traceId` as keys would send unique values of the identification keys to different nodes. If we use just `serviceName` as the identification key, there is no action currently implemented in AggregateProcessor that can send ALL events of a service to one node.
**Describe the solution you'd like**
The solution is to have an option like `unique_keys` under the `count` aggregate action that counts the number of unique values of the specified keys within each group of `identification_keys`.
A configuration like this
```
processor:
- aggregate:
identification_keys: ["serviceName"]
action:
count:
unique_keys : ["traceId"]
```
The above config will count the number of unique `traceId` values per `serviceName`.
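For illustration, a sketch of the counting semantics this option would implement. An exact set is used here; a production implementation might instead use a probabilistic sketch such as HyperLogLog:

```python
from collections import defaultdict

unique_traces = defaultdict(set)

def on_event(event: dict) -> None:
    # identification_keys: ["serviceName"], unique_keys: ["traceId"]
    unique_traces[event["serviceName"]].add(event["traceId"])

for e in [
    {"serviceName": "svc0", "traceId": "t1"},
    {"serviceName": "svc0", "traceId": "t1"},  # duplicate trace; not double-counted
    {"serviceName": "svc0", "traceId": "t2"},
    {"serviceName": "svc1", "traceId": "t3"},
]:
    on_event(e)

counts = {svc: len(ids) for svc, ids in unique_traces.items()}
print(counts)  # -> {'svc0': 2, 'svc1': 1}
```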
**Describe alternatives you've considered (Optional)**
Alternative is to have an `action` like `all_events` which passes all events matching `identification_keys` of `serviceName` and then have another aggregate processor with `identification_keys` as `traceId`. Currently, two aggregate processors of "remote peer" type are not allowed, which makes this solution infeasible.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add an option to count unique values of specified key(s) to CountAggregateAction | https://api.github.com/repos/opensearch-project/data-prepper/issues/4644/comments | 3 | 2024-06-19T07:27:11Z | 2024-06-24T22:46:44Z | https://github.com/opensearch-project/data-prepper/issues/4644 | 2,361,555,906 | 4,644 |
[
"opensearch-project",
"data-prepper"
] | ## CVE-2024-37891 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-2.0.7-py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/b2/b157855192a68541a91ba7b2bbcb91f1b4faa51f8bae38d8005c034be524/urllib3-2.0.7-py3-none-any.whl">https://files.pythonhosted.org/packages/d2/b2/b157855192a68541a91ba7b2bbcb91f1b4faa51f8bae38d8005c034be524/urllib3-2.0.7-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-2.0.7-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
urllib3 is a user-friendly HTTP client library for Python. When using urllib3's proxy support with `ProxyManager`, the `Proxy-Authorization` header is only sent to the configured proxy, as expected. However, when sending HTTP requests *without* using urllib3's proxy support, it's possible to accidentally configure the `Proxy-Authorization` header even though it won't have any effect as the request is not using a forwarding proxy or a tunneling proxy. In those cases, urllib3 doesn't treat the `Proxy-Authorization` HTTP header as one carrying authentication material and thus doesn't strip the header on cross-origin redirects. Because this is a highly unlikely scenario, we believe the severity of this vulnerability is low for almost all users. Out of an abundance of caution urllib3 will automatically strip the `Proxy-Authorization` header during cross-origin redirects to avoid the small chance that users are doing this on accident. Users should use urllib3's proxy support or disable automatic redirects to achieve safe processing of the `Proxy-Authorization` header, but we still decided to strip the header by default in order to further protect users who aren't using the correct approach. We believe the number of usages affected by this advisory is low. It requires all of the following to be true to be exploited: 1. Setting the `Proxy-Authorization` header without using urllib3's built-in proxy support. 2. Not disabling HTTP redirects. 3. Either not using an HTTPS origin server or for the proxy or target origin to redirect to a malicious origin. Users are advised to update to either version 1.26.19 or version 2.2.2. Users unable to upgrade may use the `Proxy-Authorization` header with urllib3's `ProxyManager`, disable HTTP redirects using `redirects=False` when sending requests, or not user the `Proxy-Authorization` header as mitigations.
<p>Publish Date: 2024-06-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-37891>CVE-2024-37891</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-34jh-p97f-mpxf">https://github.com/urllib3/urllib3/security/advisories/GHSA-34jh-p97f-mpxf</a></p>
<p>Release Date: 2024-06-17</p>
<p>Fix Resolution: 2.2.2</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| CVE-2024-37891 (Medium) detected in urllib3-2.0.7-py3-none-any.whl | https://api.github.com/repos/opensearch-project/data-prepper/issues/4641/comments | 1 | 2024-06-18T23:59:22Z | 2024-07-03T14:58:25Z | https://github.com/opensearch-project/data-prepper/issues/4641 | 2,360,935,920 | 4,641 |
[
"opensearch-project",
"data-prepper"
] | null | Support named credentials in the AWS extension in data-prepper-config.yaml | https://api.github.com/repos/opensearch-project/data-prepper/issues/4637/comments | 0 | 2024-06-18T19:37:46Z | 2024-06-25T19:41:53Z | https://github.com/opensearch-project/data-prepper/issues/4637 | 2,360,564,968 | 4,637 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of the S3 DLQ, I would like to dynamically determine the prefix to send failed documents to with Data Prepper expressions, similar to how the S3 sink `path_prefix` can be dynamic
**Describe the solution you'd like**
```
sink:
- opensearch:
s3:
dlq:
key_path_prefix: "my-${/key}-prefix/"
```
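To illustrate the intended semantics: the `${/key}` expression would be evaluated per event to build the object key prefix. A minimal sketch of that substitution (Python; this is not the actual Data Prepper expression engine, and only single-level JSON pointers are handled for brevity):

```python
import re

def resolve_prefix(template: str, event: dict) -> str:
    """Replace each ${/field} placeholder in the prefix template with the
    event's value for that field."""
    return re.sub(
        r"\$\{/([^}]+)\}",
        lambda m: str(event.get(m.group(1), "")),
        template,
    )

print(resolve_prefix("my-${/key}-prefix/", {"key": "service-a"}))
# my-service-a-prefix/
```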
**Alternative Solution**
A more useful alternative would be to create a pipeline-level DLQ that can send to the S3 sink, which already supports a dynamic path prefix | Dynamic key_path_prefix in S3 DLQ | https://api.github.com/repos/opensearch-project/data-prepper/issues/4634/comments | 1 | 2024-06-18T17:47:18Z | 2024-06-18T19:34:40Z | https://github.com/opensearch-project/data-prepper/issues/4634 | 2,360,387,978 | 4,634
[
"opensearch-project",
"data-prepper"
] | Currently a log message in dataprepper looks like this, after being ingested via the [otel-logs-source](https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sources/otel-logs-source/):
```
{"traceId":"","spanId":"","severityText":"","flags":0,"time":"2024-06-18T10:11:45.351322226Z","severityNumber":0,"droppedAttributesCount":0,"serviceName":null,"body":"time=\"2024-06-18T10:11:45Z\" level=info msg=\"Reconciliation completed\" application=observability/grafana dedup_ms=0 dest-name=in-cluster dest-namespace=observability dest-server=\"https://kubernetes.default.svc\" diff_ms=105 fields.level=1 git_ms=806 health_ms=0 live_ms=1 patch_ms=0 setop_ms=0 settings_ms=0 sync_ms=0 time_ms=1007","observedTime":"2024-06-18T10:11:45.438489611Z","schemaUrl":"https://opentelemetry.io/schemas/1.6.1","resource.attributes.source@environment":"dev","log.attributes.time":"2024-06-18T10:11:45.351322226Z","resource.attributes.k8s@namespace@name":"argocd","resource.attributes.cloud@account@id":"1234","resource.attributes.k8s@container@name":"application-controller","log.attributes.logtag":"F","resource.attributes.host@name":"ip-172-17-8-50.eu-central-1.compute.internal","resource.attributes.host@id":"i-1234","resource.attributes.k8s@pod@start_time":"2024-06-17T10:43:03Z","resource.attributes.cloud@region":"eu-central-1","log.attributes.log@iostream":"stderr","resource.attributes.host@image@id":"ami-1234","resource.attributes.source@project":"1234","resource.attributes.k8s@pod@name":"argocd-application-controller-0","log.attributes.log@file@path":"/var/log/pods/argocd_argocd-application-controller-0_0b9c78a1-2fed-4548-bd0f-f80ffe67faba/application-controller/0.log","resource.attributes.k8s@pod@uid":"0b9c78a1-2fed-4548-bd0f-f80ffe67faba","resource.attributes.k8s@node@name":"ip-172-17-8-50.eu-central-1.compute.internal","resource.attributes.cloud@platform":"aws_ec2","resource.attributes.k8s@container@restart_count":"0","resource.attributes.k8s@cluster@name":"1234","resource.attributes.ec2@tag@karpenter@sh/nodepool":"fallback-spot","resource.attributes.cloud@availability_zone":"eu-central-1a","resource.attributes.host@type":"c6gn.xlarge","resource.attributes.cloud@provider":"aws"
}
```
The log itself was collected via [otel/receiver/filelog](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
According to the OpenTelemetry documentation, for example the [resource.attributes.cloud](https://opentelemetry.io/docs/specs/semconv/attributes-registry/cloud/) should contain keys that are separated by a dot, instead of the `@` sign.
Instead they seem to be replaced [here](https://github.com/opensearch-project/data-prepper/blob/2c53d7e8ddef713fd040a22cd51e22af9c964892/data-prepper-plugins/otel-proto-common/src/main/java/org/opensearch/dataprepper/plugins/otel/codec/OTelProtoCodec.java#L1115).
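As a downstream workaround, the substitution can be reversed after ingestion; a minimal sketch (Python, with a hypothetical helper name — not part of Data Prepper) that renames the `@`-separated attribute keys back to dot-separated ones:

```python
def restore_dotted_keys(event: dict) -> dict:
    """Rename keys like 'resource.attributes.cloud@region' back to
    'resource.attributes.cloud.region'. Only attribute keys are touched;
    values are left as-is."""
    restored = {}
    for key, value in event.items():
        if key.startswith(("resource.attributes.", "log.attributes.")):
            prefix, _, attr = key.partition(".attributes.")
            key = f"{prefix}.attributes.{attr.replace('@', '.')}"
        restored[key] = value
    return restored

event = {
    "resource.attributes.cloud@region": "eu-central-1",
    "log.attributes.log@iostream": "stderr",
    "body": "...",
}
print(restore_dotted_keys(event))
```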
Is there a reason that the keys under `resource.attributes` and `log.attributes` are separated with `@` instead of being kept in their original (OTel) format? | [Question] Opentelemetry format for `resource.attributes` (replacing dot with at sign) | https://api.github.com/repos/opensearch-project/data-prepper/issues/4632/comments | 5 | 2024-06-18T11:02:12Z | 2025-03-11T19:56:14Z | https://github.com/opensearch-project/data-prepper/issues/4632 | 2,359,579,440 | 4,632
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I'm seeing a number of errors such as:
```
2024-06-05T20:54:54.420 [my-pipeline-processor-worker-1-thread-2] ERROR org.opensearch.dataprepper.plugins.processor.useragent.UserAgentProcessor - An exception occurred when parsing user agent data from event [******] with source key [******]
java.lang.IllegalStateException: Entry.next=null, data[removeIndex]=***
previous=***
key=***
size=1000 maxSize=1000 This should not occur if your keys are immutable, and you have used synchronization properly.
at org.apache.commons.collections4.map.LRUMap.reuseMapping(LRUMap.java:384) ~[commons-collections4-4.4.jar:4.4]
at org.apache.commons.collections4.map.LRUMap.addMapping(LRUMap.java:349) ~[commons-collections4-4.4.jar:4.4]
at org.apache.commons.collections4.map.AbstractHashedMap.put(AbstractHashedMap.java:289) ~[commons-collections4-4.4.jar:4.4]
at ua_parser.CachingParser.parseUserAgent(CachingParser.java:92) ~[uap-java-1.6.1.jar:?]
at ua_parser.Parser.parse(Parser.java:83) ~[uap-java-1.6.1.jar:?]
at ua_parser.CachingParser.parse(CachingParser.java:72) ~[uap-java-1.6.1.jar:?]
at org.opensearch.dataprepper.plugins.processor.useragent.UserAgentProcessor.doExecute(UserAgentProcessor.java:51) ~[user-agent-processor-2.x.88.jar:?]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.x.88.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) ~[micrometer-core-1.13.0.jar:1.13.0]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) ~[data-prepper-api-2.x.88.jar:?]
```
**To Reproduce**
It is not very easy to reproduce this consistently.
1. Create a pipeline with the `user_agent` processor.
2. Be sure to have multiple workers in the pipeline.
3. Ingest a significant volume of data
4. These errors should occur.
**Expected behavior**
No errors, and cache works.
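The `LRUMap` inside uap-java's `CachingParser` is mutated without synchronization, so concurrent `put` calls can corrupt its internal linked list (which is exactly what the "This should not occur if ... you have used synchronization properly" message hints at). A lock-guarded LRU — sketched in Python purely for illustration, not the actual Java fix — shows the shape of one common mitigation:

```python
import threading
from collections import OrderedDict

class ThreadSafeLRU:
    """LRU cache whose mutations are serialized by a lock, so concurrent
    writers cannot corrupt the eviction order."""
    def __init__(self, max_size: int):
        self._max_size = max_size
        self._data = OrderedDict()
        self._lock = threading.Lock()

    def get_or_compute(self, key, compute):
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)   # mark as most recently used
                return self._data[key]
        value = compute(key)   # compute outside the lock (may race, but safely)
        with self._lock:
            self._data[key] = value
            self._data.move_to_end(key)
            while len(self._data) > self._max_size:
                self._data.popitem(last=False)  # evict least recently used
        return value

cache = ThreadSafeLRU(max_size=2)
cache.get_or_compute("ua-1", len)
cache.get_or_compute("ua-2", len)
cache.get_or_compute("ua-3", len)   # evicts "ua-1"
print("ua-1" in cache._data)        # False
```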
| [BUG] The user_agent processor throws exceptions with multiple threads. | https://api.github.com/repos/opensearch-project/data-prepper/issues/4618/comments | 2 | 2024-06-11T23:23:03Z | 2024-06-13T15:22:16Z | https://github.com/opensearch-project/data-prepper/issues/4618 | 2,347,461,907 | 4,618 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of Data Prepper, my Events look like this
```
{
"key-one": "value-one",
"key-two": "value-two"
}
```
and I would like them to look like this
```
{
"new-key": {
"key-one": "value-one",
"key-two": "value-two"
}
}
```
**Describe the solution you'd like**
A way to pass the root (or entire existing document) of an Event to a JSON pointer expression. For example, this could be done with `/`, allowing `add_entries` to nest the entire existing document under a new top-level key
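In plain terms the requested transform is (illustrative Python, not Data Prepper internals):

```python
def nest_under(event: dict, new_key: str) -> dict:
    """Move the entire existing document under `new_key`."""
    return {new_key: dict(event)}

event = {"key-one": "value-one", "key-two": "value-two"}
print(nest_under(event, "new-key"))
# {'new-key': {'key-one': 'value-one', 'key-two': 'value-two'}}
```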
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Support nesting the entire document under a different key | https://api.github.com/repos/opensearch-project/data-prepper/issues/4617/comments | 4 | 2024-06-11T21:49:49Z | 2024-06-12T19:02:09Z | https://github.com/opensearch-project/data-prepper/issues/4617 | 2,347,365,977 | 4,617 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently the flatten processor adds empty square brackets to the keys of list elements, even when the `remove_list_indices` option is set to `true`. So something like
```json
{
"key1": {
"key2": [
{
"key3": "value1",
"key4": "value2"
},
{
"key3": "value3",
"key4": "value4"
}
]
}
}
```
turns into
```json
{
"key1.key2[].key3": ["value1", "value3"],
"key1.key2[].key4": ["value2", "value4"]
}
```
When the mentioned option is set to `false`, the brackets are needed to reference list indices, but in this case they could easily be omitted. It would be nice if the keys kept their original names and the output were this instead:
```json
{
"key1.key2.key3": ["value1", "value3"],
"key1.key2.key4": ["value2", "value4"]
}
```
**Describe the solution you'd like**
Either remove the brackets from keys when `remove_list_indices` is set to `true`, or add an option (`original_key` or `remove_brackets`) that has the same effect when used together with `remove_list_indices` set to `true`.
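A sketch of the proposed behavior (Python for illustration; `remove_brackets` is the hypothetical option name suggested above, and for brevity every leaf value is collected into a list):

```python
def flatten(obj, remove_brackets=True):
    """Flatten nested dicts/lists into dotted keys; values that share a
    flattened key are collected into a list under that key."""
    out = {}

    def walk(node, key):
        if isinstance(node, dict):
            for k, v in node.items():
                walk(v, f"{key}.{k}" if key else k)
        elif isinstance(node, list):
            # '[]' marks list traversal unless the option removes it
            suffix = "" if remove_brackets else "[]"
            for item in node:
                walk(item, key + suffix)
        else:
            out.setdefault(key, []).append(node)

    walk(obj, "")
    return out

doc = {"key1": {"key2": [{"key3": "value1", "key4": "value2"},
                         {"key3": "value3", "key4": "value4"}]}}
print(flatten(doc))                         # keys without brackets
print(flatten(doc, remove_brackets=False))  # keys like key1.key2[].key3
```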
**Additional context**
For some automated processes (e.g. creating mappings via index templates before writing documents) it could be valuable to retain the original key names. | flatten processor: option for keys wihout brackets | https://api.github.com/repos/opensearch-project/data-prepper/issues/4616/comments | 5 | 2024-06-11T12:13:17Z | 2024-06-21T15:08:19Z | https://github.com/opensearch-project/data-prepper/issues/4616 | 2,346,266,388 | 4,616 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of data prepper, I would like to be able to configure a default route for when my Events do not match any of my other routes. Currently, I would have to add an expression manually to handle this case.
**Describe the solution you'd like**
An option to designate one route as the default via a special route value. Routes with a value of `default` or `DEFAULT` would act as this default route:
```
routes:
    - ROUTE_ONE: '/my_key == null'
- DEFAULT_ROUTE: default
sink:
- opensearch:
routes:
- ROUTE_ONE
- opensearch:
routes:
- DEFAULT_ROUTE
```
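The evaluation order this implies — try every conditional route first, and fall back to the default only when nothing matched — can be sketched as (Python, hypothetical; not the actual router implementation):

```python
DEFAULT = object()  # sentinel standing in for the 'default' route value

def resolve_routes(event, routes):
    """routes: list of (name, condition) pairs where condition is either a
    predicate on the event or the DEFAULT sentinel. Returns the names of
    the routes the event should take."""
    matched = [name for name, cond in routes
               if cond is not DEFAULT and cond(event)]
    if not matched:  # nothing matched: fall back to the default route(s)
        matched = [name for name, cond in routes if cond is DEFAULT]
    return matched

routes = [
    ("ROUTE_ONE", lambda e: e.get("my_key") is None),
    ("DEFAULT_ROUTE", DEFAULT),
]
print(resolve_routes({}, routes))              # ['ROUTE_ONE']
print(resolve_routes({"my_key": 1}, routes))   # ['DEFAULT_ROUTE']
```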
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Support default route option for Events that match no other route | https://api.github.com/repos/opensearch-project/data-prepper/issues/4615/comments | 1 | 2024-06-10T18:50:05Z | 2024-08-15T16:16:57Z | https://github.com/opensearch-project/data-prepper/issues/4615 | 2,344,611,285 | 4,615 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper is currently pulling in Hadoop dependencies. These add some CVEs and many other dependencies that we may not need.
Hadoop is mostly (exclusively?) used for Parquet support.
**Describe the solution you'd like**
Remove Hadoop dependencies while still supporting Parquet
| Remove Hadoop dependencies | https://api.github.com/repos/opensearch-project/data-prepper/issues/4612/comments | 1 | 2024-06-07T21:31:40Z | 2024-06-12T17:51:01Z | https://github.com/opensearch-project/data-prepper/issues/4612 | 2,341,192,986 | 4,612 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of s3 scan, I have a bucket with 100 million objects. The current s3 scan source is not able to handle this many objects, as it is bottlenecked by returning all objects as a list of partitions in the supplier, which can lead to out of memory errors. Additionally, if there are any failures in s3 scan supplier, no partitions will get created because all partitions are returned from the supplier before they are created in the coordination store.
**Describe the solution you'd like**
I would like the PartitionSupplier functions to be able to pass partitions back to the source coordinator for creation. So as objects are found during a scan, instead of holding them all in memory, the call to create the partition would be made right after the object is found from scanning.
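The difference between the two shapes can be sketched abstractly (Python for illustration; the names are assumptions, not the actual `PartitionSupplier`/source coordinator API):

```python
from typing import Callable, Iterable

def supply_partitions_buffered(list_objects: Callable[[], Iterable[str]]):
    """Current behavior: every discovered object is held in memory and
    returned in one batch — a failure mid-scan loses all of them."""
    return [f"partition:{key}" for key in list_objects()]

def supply_partitions_streaming(list_objects, create_partition):
    """Proposed behavior: hand each partition to the coordinator as soon
    as the object is discovered, keeping memory flat and persisting
    progress incrementally."""
    created = 0
    for key in list_objects():
        create_partition(f"partition:{key}")  # persisted immediately
        created += 1
    return created

store = []
n = supply_partitions_streaming(lambda: iter(["a", "b", "c"]), store.append)
print(n, store)   # 3 ['partition:a', 'partition:b', 'partition:c']
```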
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| S3 Scan Source partition supplier creates partitions in memory and a failure causes no partitions to be created | https://api.github.com/repos/opensearch-project/data-prepper/issues/4608/comments | 3 | 2024-06-06T14:56:50Z | 2024-10-09T18:15:54Z | https://github.com/opensearch-project/data-prepper/issues/4608 | 2,338,470,894 | 4,608 |
[
"opensearch-project",
"data-prepper"
] | ### Is your feature request related to a problem? Please describe
Set `data-prepper` plugin 3.0.0 baseline JDK version to JDK-21
### Describe the solution you'd like
See please https://github.com/opensearch-project/OpenSearch/issues/10745
### Related component
Build
### Describe alternatives you've considered
N/A
### Additional context
See please https://github.com/opensearch-project/OpenSearch/issues/14011 | [FEATURE] Set `data-prepper` plugin 3.0.0 baseline JDK version to JDK-21 | https://api.github.com/repos/opensearch-project/data-prepper/issues/4605/comments | 3 | 2024-06-05T18:54:35Z | 2024-06-12T19:02:47Z | https://github.com/opensearch-project/data-prepper/issues/4605 | 2,336,616,178 | 4,605 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I am trying to build a pipeline to ingest cloudfront logs to opensearch. I found [in the grok plugin code](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/grok-processor/src/main/resources/grok-patterns/patterns#L17) that there is a `CLOUDFRONT_ACCESS_LOG` pattern for the grok processor, which is exactly what I need.
However, when using it, data-prepper fails with the following exception trace:
```
2024-06-05T16:41:15,109 [main] ERROR org.opensearch.dataprepper.parser.PipelineTransformer - Construction of pipeline components failed, skipping building of pipeline [cloudfront-pipeline] and its connected pipelines
org.opensearch.dataprepper.model.plugin.PluginInvocationException: Exception throw from the plugin'GrokProcessor'.
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:60) ~[data-prepper-plugin-framework-2.8.0.jar:?]
at org.opensearch.dataprepper.plugin.DefaultPluginFactory.loadPlugins(DefaultPluginFactory.java:105) ~[data-prepper-plugin-framework-2.8.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.newProcessor(PipelineTransformer.java:170) ~[data-prepper-core-2.8.0.jar:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.buildPipelineFromConfiguration(PipelineTransformer.java:126) ~[data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.parser.PipelineTransformer.transformConfiguration(PipelineTransformer.java:99) ~[data-prepper-core-2.8.0.jar:?]
at org.opensearch.dataprepper.DataPrepper.<init>(DataPrepper.java:69) ~[data-prepper-core-2.8.0.jar:2.8.0]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:211) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:311) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:296) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1372) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1222) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) [spring-beans-5.3.28.jar:5.3.28]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:920) [spring-context-5.3.28.jar:5.3.28]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) [spring-context-5.3.28.jar:5.3.28]
at org.opensearch.dataprepper.AbstractContextManager.start(AbstractContextManager.java:59) [data-prepper-core-2.8.0.jar:2.8.0]
at org.opensearch.dataprepper.AbstractContextManager.getDataPrepperBean(AbstractContextManager.java:45) [data-prepper-core-2.8.0.jar:2.8.0]
at org.opensearch.dataprepper.DataPrepperExecute.main(DataPrepperExecute.java:39) [data-prepper-main-2.8.0.jar:2.8.0]
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:53) ~[data-prepper-plugin-framework-2.8.0.jar:?]
... 35 more
Caused by: java.util.regex.PatternSyntaxException: Unclosed group near index 2322
(?:(?<timestamp>(?:(?>\d\d){1,2})-(?:(?:0?[1-9]|1[0-2]))-(?:(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]))\s(?:(?!<[0-9])(?:(?:2[0123]|[01]?[0-9])):(?:(?:[0-5][0-9]))(?::(?:(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)))(?![0-9])))\s(?<name8>\S+)\s(?:-|(?<name9>(?:(?<name10>(?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))))))\s(?<name11>(?:(?<name12>\b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b))|(?<name13>(?:(?<name14>((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?)|(?<name15>(?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9]))))))\s(?<name16>\b\w+\b)\s(?<name17>\b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b))\s(?<name18>\S+)\s(?:-|(?<name19>(?:(?<name20>(?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))))))\s(?<name21>.*)\s(?<name22
>.*)\s(?<name23>.*)\s(?<name24>.*)\s(?<name25>\b\w+\b)\s(?<name26>\S+)\s(?<name27>\b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b))\s(?<name28>[A-Za-z]+(\+[A-Za-z+]+)?)\s(?:-|(?<name29>(?:[+-]?(?:[0-9]+))))\s(?:-|(?<name30>.*)\s(?<name31>.*)\s(?<name32>.*)\s(?<name33>.*)\s(?<name34>.*))
at java.base/java.util.regex.Pattern.error(Pattern.java:2028) ~[?:?]
at java.base/java.util.regex.Pattern.accept(Pattern.java:1878) ~[?:?]
at java.base/java.util.regex.Pattern.group0(Pattern.java:3053) ~[?:?]
at java.base/java.util.regex.Pattern.sequence(Pattern.java:2124) ~[?:?]
at java.base/java.util.regex.Pattern.expr(Pattern.java:2069) ~[?:?]
at java.base/java.util.regex.Pattern.compile(Pattern.java:1783) ~[?:?]
at java.base/java.util.regex.Pattern.<init>(Pattern.java:1430) ~[?:?]
at java.base/java.util.regex.Pattern.compile(Pattern.java:1069) ~[?:?]
at io.krakens.grok.api.Grok.<init>(Grok.java:69) ~[java-grok-0.1.9.jar:?]
at io.krakens.grok.api.GrokCompiler.compile(GrokCompiler.java:197) ~[java-grok-0.1.9.jar:?]
at io.krakens.grok.api.GrokCompiler.compile(GrokCompiler.java:124) ~[java-grok-0.1.9.jar:?]
at org.opensearch.dataprepper.plugins.processor.grok.GrokProcessor.lambda$compileMatchPatterns$3(GrokProcessor.java:240) ~[grok-processor-2.8.0.jar:?]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
at org.opensearch.dataprepper.plugins.processor.grok.GrokProcessor.compileMatchPatterns(GrokProcessor.java:241) ~[grok-processor-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.processor.grok.GrokProcessor.<init>(GrokProcessor.java:113) ~[grok-processor-2.8.0.jar:?]
at org.opensearch.dataprepper.plugins.processor.grok.GrokProcessor.<init>(GrokProcessor.java:93) ~[grok-processor-2.8.0.jar:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
at org.opensearch.dataprepper.plugin.PluginCreator.newPluginInstance(PluginCreator.java:53) ~[data-prepper-plugin-framework-2.8.0.jar:?]
... 35 more
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create the following pipeline
```
cloudfront-pipeline:
source:
file:
path: /input/sample.log
format: plain
codec:
newline:
processor:
- drop_events:
drop_when: '/message =~ "^#(Version|Fields).*"'
- grok:
keep_empty_captures: true
match:
message: ["%{CLOUDFRONT_ACCESS_LOG}"]
sink:
- stdout:
```
2. Run the pipeline with a decompressed cloudfront access log file:
```
docker run \
-v ${HOME}/tmp/cdn_ingest/input_logs/short_sample.log:/input/sample.log \
-v ${PWD}/cloudfront-pipeline.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml \
opensearchproject/data-prepper:latest
```
3. The pipeline fails to start with the exception above
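The underlying failure class is easy to reproduce in isolation: a regex with an unclosed group is rejected at compile time. (Python shown for brevity; Java's `Pattern.compile` raises `PatternSyntaxException` analogously.)

```python
import re

bad_pattern = r"(?P<name>unclosed"   # group opened but never closed
try:
    re.compile(bad_pattern)
    compiled = True
except re.error as exc:
    compiled = False
    print(f"compile failed: {exc}")

# The same pattern with the group closed compiles fine:
assert re.compile(r"(?P<name>closed)").groupindex == {"name": 1}
```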
**Expected behavior**
The pipeline should start and process the cloudfront log lines
**Screenshots**
N/A
**Environment (please complete the following information):**
- OS: MacOS Sonoma 14.1.1
- Version: 2.8.0
**Additional context**
N/A
| [BUG] Grok plugin CLOUDFRONT_ACCESS_LOG pattern does not compile | https://api.github.com/repos/opensearch-project/data-prepper/issues/4604/comments | 0 | 2024-06-05T16:47:34Z | 2024-06-12T17:45:39Z | https://github.com/opensearch-project/data-prepper/issues/4604 | 2,336,382,812 | 4,604 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. It would be nice to have [...]
As a user of the DataPrepper obfuscate processor, I would like to have the option to hash sensitive data fields, so that the same field value always produces a predictable hashed value that can be searched on without revealing the data in the clear.
Note: This is different from masking, which is available as part of the existing obfuscate processor.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Update the existing obfuscate processor (or create a new one) so that it can take an optional seed/salt and produce a one-way hash using common hash functions (SHA-* and others)
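A deterministic, salted one-way hash of this kind can be sketched with the standard library (Python for illustration; the function and option names are hypothetical, not the processor's API):

```python
import hmac

def obfuscate_field(value: str, salt: bytes, algorithm: str = "sha256") -> str:
    """Keyed one-way hash: the same (value, salt) pair always yields the
    same digest, so hashed fields remain searchable by exact match, while
    the clear value is not recoverable from the digest."""
    return hmac.new(salt, value.encode("utf-8"), algorithm).hexdigest()

salt = b"per-deployment-secret"
a = obfuscate_field("alice@example.com", salt)
b = obfuscate_field("alice@example.com", salt)
print(a == b)                                         # True: searchable
print(a == obfuscate_field("bob@example.com", salt))  # False
```

Using HMAC (salt as key) rather than plain `sha256(salt + value)` avoids length-extension issues while keeping the output deterministic per deployment.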
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
There is a possibility of invoking a remote function, but it will be expensive and not performant for processing/hashing messages at volume.
**Additional context**
Add any other context or screenshots about the feature request here.
| Ability to one way hash specific attributes in payload | https://api.github.com/repos/opensearch-project/data-prepper/issues/4602/comments | 6 | 2024-06-05T16:25:14Z | 2025-02-03T22:21:18Z | https://github.com/opensearch-project/data-prepper/issues/4602 | 2,336,345,804 | 4,602 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Dynamic Rule Detection for pipeline configuration transformation.
**Describe the solution you'd like**
Scan the rule folder and apply each rule. Check whether each rule is valid; if it is, return the corresponding template.
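A sketch of that flow (Python for illustration; the rule and template names are assumptions, not the actual transformation rules):

```python
def detect_templates(rules, is_valid, template_for):
    """Apply each rule in order; every rule that validates contributes
    its corresponding template."""
    return [template_for(rule) for rule in rules if is_valid(rule)]

rules = ["documentdb", "unknown", "rds"]
known = {"documentdb": "documentdb-template.yaml", "rds": "rds-template.yaml"}
print(detect_templates(rules, known.__contains__, known.__getitem__))
# ['documentdb-template.yaml', 'rds-template.yaml']
```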
**Additional context**
Extension to issue #4446
| Dynamic Rule Detection | https://api.github.com/repos/opensearch-project/data-prepper/issues/4600/comments | 0 | 2024-06-05T05:05:58Z | 2024-08-17T01:10:54Z | https://github.com/opensearch-project/data-prepper/issues/4600 | 2,334,908,461 | 4,600 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The SqsClient gets closed early while it is still making a changeVisibilityTimeout call within the acknowledgment callback thread:
```
2024-05-31T20:29:16.968 [acknowledgement-callback-10] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Failed to set visibility timeout for message xxxxxxxxxx to 60
java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34) ~[Apache-HttpComponents-HttpCore-4.4.x.jar:?]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:269) ~[Apache-HttpComponents-HttpClient-4.5.x.jar:?]
at software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$DelegatingHttpClientConnectionManager.requestConnection(ClientConnectionManagerFactory.java:75) ~[apache-client-2.23.3.jar:?]
at software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$InstrumentedHttpClientConnectionManager.requestConnection(ClientConnectionManagerFactory.java:57) ~[apache-client-2.23.3.jar:?]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176) ~[Apache-HttpComponents-HttpClient-4.5.x.jar:?]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[Apache-HttpComponents-HttpClient-4.5.x.jar:?]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[Apache-HttpComponents-HttpClient-4.5.x.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[Apache-HttpComponents-HttpClient-4.5.x.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[Apache-HttpComponents-HttpClient-4.5.x.jar:?]
at software.amazon.awssdk.http.apache.internal.impl.ApacheSdkHttpClient.execute(ApacheSdkHttpClient.java:72) ~[apache-client-2.23.3.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient.execute(ApacheHttpClient.java:254) ~[apache-client-2.23.3.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient.access$500(ApacheHttpClient.java:104) ~[apache-client-2.23.3.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:231) ~[apache-client-2.23.3.jar:?]
at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:228) ~[apache-client-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:99) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.executeHttpRequest(MakeHttpRequestStage.java:79) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:57) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:40) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:72) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute([TimeoutExceptionHandlingStage.java:78](http://timeoutexceptionhandlingstage.java:78/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute([TimeoutExceptionHandlingStage.java:40](http://timeoutexceptionhandlingstage.java:40/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute([ApiCallAttemptMetricCollectionStage.java:55](http://apicallattemptmetriccollectionstage.java:55/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute([ApiCallAttemptMetricCollectionStage.java:39](http://apicallattemptmetriccollectionstage.java:39/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute([RetryableStage.java:81](http://retryablestage.java:81/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute([RetryableStage.java:36](http://retryablestage.java:36/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute([RequestPipelineBuilder.java:206](http://requestpipelinebuilder.java:206/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute([StreamManagingStage.java:56](http://streammanagingstage.java:56/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute([StreamManagingStage.java:36](http://streammanagingstage.java:36/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer([ApiCallTimeoutTrackingStage.java:80](http://apicalltimeouttrackingstage.java/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute([ApiCallTimeoutTrackingStage.java:60](http://apicalltimeouttrackingstage.java:60/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute([ApiCallTimeoutTrackingStage.java:42](http://apicalltimeouttrackingstage.java:42/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute([ApiCallMetricCollectionStage.java:50](http://apicallmetriccollectionstage.java:50/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute([ApiCallMetricCollectionStage.java:32](http://apicallmetriccollectionstage.java:32/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute([RequestPipelineBuilder.java:206](http://requestpipelinebuilder.java:206/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute([RequestPipelineBuilder.java:206](http://requestpipelinebuilder.java:206/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute([ExecutionFailureExceptionReportingStage.java:37](http://executionfailureexceptionreportingstage.java:37/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute([ExecutionFailureExceptionReportingStage.java:26](http://executionfailureexceptionreportingstage.java:26/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute([AmazonSyncHttpClient.java:224](http://amazonsynchttpclient.java:224/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke([BaseSyncClientHandler.java:103](http://basesyncclienthandler.java:103/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute([BaseSyncClientHandler.java:173](http://basesyncclienthandler.java:173/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1([BaseSyncClientHandler.java:80](http://basesyncclienthandler.java/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess([BaseSyncClientHandler.java:182](http://basesyncclienthandler.java:182/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute([BaseSyncClientHandler.java:74](http://basesyncclienthandler.java:74/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute([SdkSyncClientHandler.java:45](http://sdksyncclienthandler.java:45/)) ~[sdk-core-2.23.3.jar:?]
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute([AwsSyncClientHandler.java:53](http://awssyncclienthandler.java:53/)) ~[aws-core-2.23.3.jar:?]
at software.amazon.awssdk.services.sqs.DefaultSqsClient.changeMessageVisibility([DefaultSqsClient.java:544](http://defaultsqsclient.java:544/)) ~[sqs-2.23.3.jar:?]
at org.opensearch.dataprepper.plugins.source.s3.SqsWorker.lambda$processS3EventNotificationRecords$1([SqsWorker.java:286](http://sqsworker.java:286/)) ~[s3-source-2.8.0-SNAPSHOT.jar:?]
at org.opensearch.dataprepper.acknowledgements.DefaultAcknowledgementSet.checkProgress([DefaultAcknowledgementSet.java:73](http://defaultacknowledgementset.java:73/)) ~[data-prepper-core-2.8.0-SNAPSHOT.jar:?]
at java.base/java.util.concurrent.Executors$[RunnableAdapter.call](http://runnableadapter.call/)([Executors.java:515](http://executors.java:515/)) ~[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset([FutureTask.java:305](http://futuretask.java:305/)) ~[?:?]
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$[ScheduledFutureTask.run](http://scheduledfuturetask.run/)([ScheduledThreadPoolExecutor.java:305](http://scheduledthreadpoolexecutor.java:305/)) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker([ThreadPoolExecutor.java:1128](http://threadpoolexecutor.java:1128/)) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$[Worker.run](http://worker.run/)([ThreadPoolExecutor.java:628](http://threadpoolexecutor.java:628/)) ~[?:?]
at java.base/java.lang.Thread.run([Thread.java:829](http://thread.java:829/)) [?:?]
```
**To Reproduce**
Steps to reproduce the behavior:
1. Set up an S3 pipeline with acknowledgments and a visibility timeout
2. Shut down the pipeline
**Expected behavior**
The root cause of this error (connection pool shut down) should not surface during pipeline shutdown.
**Additional context**
Approaches:
1. Check the shutdown flag before changing the visibility timeout in the acknowledgment thread. This approach may increase duplicate messages during shutdown.
2. Shut down the SQS threads first and wait longer for all acknowledgments to finish before shutting down the SQS client and the pipeline. This approach can mitigate message duplication during shutdown, but it extends shutdown latency.
| [BUG] ChangeVisibilityTimeout call failure during pipeline shutdown. | https://api.github.com/repos/opensearch-project/data-prepper/issues/4575/comments | 2 | 2024-05-31T22:42:53Z | 2024-07-19T18:40:59Z | https://github.com/opensearch-project/data-prepper/issues/4575 | 2,328,675,843 | 4,575 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We need a way to effectively monitor data flowing from a Kafka source to a Data Prepper sink. The purpose is to track the origin of requests by a logical application name that is included in server-side Kafka request logging.
**Describe the solution you'd like**
Implement support for [client.id](https://kafka.apache.org/documentation/#consumerconfigs_client.id) as a topic configuration option when constructing the Kafka consumer.
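For illustration, a pipeline sketch showing where such an option might live. The `client_id` key under the topic is hypothetical — it is the proposed option, not one that exists in the Kafka source today:

```yaml
kafka-pipeline:
  source:
    kafka:
      bootstrap_servers: ["localhost:9092"]
      topics:
        - name: "my-topic"
          group_id: "data-prepper-group"
          client_id: "log-ingest-app"  # would appear in server-side Kafka request logs
  sink:
    - stdout:
```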
| Track the source of request for Kafka server | https://api.github.com/repos/opensearch-project/data-prepper/issues/4571/comments | 0 | 2024-05-28T21:40:26Z | 2024-07-02T19:51:01Z | https://github.com/opensearch-project/data-prepper/issues/4571 | 2,321,966,029 | 4,571 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The Data Prepper S3 source pauses SQS processing with exponential backoff when there is an issue reading an S3 object, such as a corrupted Parquet file.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a Data Prepper pipeline with an S3 source
2. Upload corrupted S3 objects to the bucket
3. Observe Data Prepper logs with the message "Pausing SQS processing for XXX seconds due to an error in processing."
**Expected behavior**
The S3 source plugin should skip corrupted objects and process the next object without delay. The S3 source should back off only when there is an error with SQS processing itself.
**Additional context**
Add any other context about the problem here.
```
[s3-source-sqs-2] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Error processing from S3: java.io.IOException: can not read class org.apache.parquet.format.FileMetaData: Required field 'num_rows' was not found in serialized data! Struct: org.apache.parquet.format.FileMetaData$FileMetaDataStandardScheme@62202421. Retrying with exponential backoff.
[s3-source-sqs-2] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Pausing SQS processing for 19.858 seconds due to an error in processing.
```
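For reference, a minimal pipeline sketch of the reproduction setup. The queue URL, region, and role are placeholders, and exact option names may differ by Data Prepper version:

```yaml
s3-pipeline:
  source:
    s3:
      notification_type: "sqs"
      codec:
        parquet:
      sqs:
        queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
      aws:
        region: "us-east-1"
        sts_role_arn: "arn:aws:iam::123456789012:role/pipeline-role"
  sink:
    - stdout:
```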
| [BUG] [S3 source] Pause in SQS processing when there is an issue in reading S3 object | https://api.github.com/repos/opensearch-project/data-prepper/issues/4569/comments | 2 | 2024-05-28T20:13:50Z | 2024-11-05T20:54:39Z | https://github.com/opensearch-project/data-prepper/issues/4569 | 2,321,841,555 | 4,569 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I had a running Amazon OpenSearch Ingestion pipeline and change the config from
```"index": "cloudtrail"```
to
```"index": "cloudtrail-${yyyy.MM.dd}"```
As documented here: https://opensearch.org/docs/latest/data-prepper/pipelines/configuration/sinks/opensearch/
"The name of the export index. Only required when the index_type is custom. The index can be a plain string, such as my-index-name, contain [Java date-time patterns](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html), such as my-index-${yyyy.MM.dd} or my-${yyyy-MM-dd-HH}-index"
I see errors in CloudWatch Logs like this:
```2024-05-26T18:14:07.600 [log-pipeline-sink-worker-2-thread-1] ERROR org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - There was an exception when constructing the index name. Check the dlq if configured to see details about the affected Event: The key yyyy.MM.dd could not be found in the Event when formatting```
My problem may be that I didn't set index_type. But the docs specify that default index_type is `custom`. I'll try that next.
"Tells the sink plugin what type of data it is handling. Valid values are custom, trace-analytics-raw, trace-analytics-service-map, or management-disabled. Default is custom."
**To Reproduce**
Steps to reproduce the behavior:
Create a data prepper config with a sink like this:
```
"sink": [
{
"opensearch": {
"hosts": [
"https://<hostname>"
],
"index": "cloudtrail-${yyyy.MM.dd}",
"aws": {
"sts_role_arn": "<role arn>",
"region": "us-west-2",
"serverless": false,
"max_retries": 10
}
}
}
]
```
and send data. You should see the error in the data prepper logs. If not, create an OpenSearch Ingestion pipe and try the above sink.
**Expected behavior**
I should see data in an index named "cloudtrail-2024.05.26".
I would like a way to specify a time zone so that indices roll over in my local time zone. I don't see that in the parameters for the sink. I assume indices will be named based on UTC instead.
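One thing worth trying (an assumption based on more recent Data Prepper documentation, not verified against OpenSearch Ingestion): newer docs write date-time patterns with `%{...}`, while `${...}` is treated as an event-key lookup — which would explain the "key yyyy.MM.dd could not be found in the Event" error:

```yaml
sink:
  - opensearch:
      hosts: ["https://<hostname>"]
      index: "cloudtrail-%{yyyy.MM.dd}"  # %{...} for date-time patterns (assumption)
```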
**Environment (please complete the following information):**
- OS: Amazon OpenSearch Ingestion
| [BUG] Parameters in the index name for OpenSearch sink are not working as documented | https://api.github.com/repos/opensearch-project/data-prepper/issues/4568/comments | 4 | 2024-05-26T18:27:39Z | 2024-05-27T17:50:09Z | https://github.com/opensearch-project/data-prepper/issues/4568 | 2,317,878,004 | 4,568 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A default role can now be configured in `data-prepper-config.yaml` (https://github.com/opensearch-project/data-prepper/pull/4559) for all plugins that use `AwsCredentialsSupplier` to get credentials. This covers all plugins except the secrets manager plugin, since that plugin is an additional extension that is decoupled from `AwsCredentialsSupplier`.
**Describe the solution you'd like**
A way to use the default role for secrets manager configuration
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
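A sketch of how this might look in `data-prepper-config.yaml`. The key names here are illustrative, and the fallback behavior shown in the comment is the requested feature, not current behavior:

```yaml
extensions:
  aws:
    default:
      sts_role_arn: "arn:aws:iam::123456789012:role/pipeline-role"
      region: "us-east-1"
  aws_secrets:
    my-secret-config:
      secret_id: "my-secret-id"
      region: "us-east-1"
      # sts_role_arn omitted — would fall back to the default role above
```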
| Use default role in secrets manager configuration if it exists | https://api.github.com/repos/opensearch-project/data-prepper/issues/4567/comments | 1 | 2024-05-23T19:52:10Z | 2024-06-04T19:37:55Z | https://github.com/opensearch-project/data-prepper/issues/4567 | 2,313,712,049 | 4,567 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Kafka messages may be produced with headers. Those headers should be extracted from the message.
**Describe the solution you'd like**
The Kafka source in Data Prepper should read the headers from the consumer record, if present, and include them in the event.
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
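For illustration, a sketch of how a record's headers might surface in the resulting event. The `kafka.headers` key is hypothetical — no such field exists yet:

```yaml
# Hypothetical event produced from a Kafka record carrying headers
message: "original record value"
kafka:
  topic: "my-topic"
  partition: 0
  offset: 42
  headers:
    trace-id: "abc123"
    content-type: "application/json"
```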
| Kafka Source should support message headers | https://api.github.com/repos/opensearch-project/data-prepper/issues/4565/comments | 1 | 2024-05-22T22:52:27Z | 2024-05-30T16:31:48Z | https://github.com/opensearch-project/data-prepper/issues/4565 | 2,311,626,175 | 4,565 |