| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"JPressProjects",
"jpress"
] | Do not commit the eclipse .classpath, .project, and .setting files
 | Do not commit Eclipse's default files | https://api.github.com/repos/JPressProjects/jpress/issues/10/comments | 1 | 2016-07-08T07:26:23Z | 2016-07-11T10:51:22Z | https://github.com/JPressProjects/jpress/issues/10 | 164,473,902 | 10 |
[
"JPressProjects",
"jpress"
] | How do you upgrade this? After deploying a new war package, you have to reinstall again. What happens to the existing articles?
 | How to upgrade | https://api.github.com/repos/JPressProjects/jpress/issues/9/comments | 2 | 2016-06-24T08:25:08Z | 2016-06-24T10:05:26Z | https://github.com/JPressProjects/jpress/issues/9 | 162,099,238 | 9 |
[
"JPressProjects",
"jpress"
] | The project, once cloned, is not a maven project
 | What about the promised maven? | https://api.github.com/repos/JPressProjects/jpress/issues/8/comments | 1 | 2016-06-16T08:06:45Z | 2016-06-30T11:08:36Z | https://github.com/JPressProjects/jpress/issues/8 | 160,601,259 | 8 |
[
"JPressProjects",
"jpress"
] | The current features still have quite a few BUGs, permissions and the like are not implemented yet, and it does not feel usable. Please complete the features as soon as possible; I am optimistic about this project.
Strongly suggest not using maven to manage the project.
 | When will the new version come out? Don't use maven! | https://api.github.com/repos/JPressProjects/jpress/issues/7/comments | 3 | 2016-06-14T07:47:23Z | 2016-07-01T09:51:23Z | https://github.com/JPressProjects/jpress/issues/7 | 160,117,486 | 7 |
[
"JPressProjects",
"jpress"
] | After adding a comment to an article, the article can no longer be accessed and the backend reports an error
 | Article becomes inaccessible after a comment is added | https://api.github.com/repos/JPressProjects/jpress/issues/6/comments | 2 | 2016-06-12T08:54:17Z | 2016-06-12T10:29:24Z | https://github.com/JPressProjects/jpress/issues/6 | 159,816,343 | 6 |
[
"JPressProjects",
"jpress"
] | Migrate the project to maven and use maven to manage dependencies, so it is compatible with different development environments
 | Use maven as the build tool | https://api.github.com/repos/JPressProjects/jpress/issues/4/comments | 3 | 2016-06-12T04:22:34Z | 2016-06-30T11:09:17Z | https://github.com/JPressProjects/jpress/issues/4 | 159,807,138 | 4 |
[
"JPressProjects",
"jpress"
] | After switching to the the3 theme (switching directly in the application does not seem to work; you have to edit the configuration file first), the newly created page can no longer be found:

 | Newly created page cannot be found after switching themes | https://api.github.com/repos/JPressProjects/jpress/issues/3/comments | 2 | 2016-06-12T03:21:09Z | 2018-11-08T03:43:49Z | https://github.com/JPressProjects/jpress/issues/3 | 159,805,317 | 3 |
[
"JPressProjects",
"jpress"
] | After uploading an image, the path is wrong, as shown in the screenshot:

 | Problem with the uploaded image path | https://api.github.com/repos/JPressProjects/jpress/issues/2/comments | 2 | 2016-06-12T03:02:14Z | 2016-06-30T11:09:59Z | https://github.com/JPressProjects/jpress/issues/2 | 159,804,704 | 2 |
[
"JPressProjects",
"jpress"
] | Suggest converting the project into a maven project to make project management easier.
 | Convert the project to a maven project | https://api.github.com/repos/JPressProjects/jpress/issues/1/comments | 3 | 2016-06-12T02:34:02Z | 2016-06-30T11:09:43Z | https://github.com/JPressProjects/jpress/issues/1 | 159,803,825 | 1 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</b></summary>
<p>No project description provided</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7c/68/3bc155728fe545fdf0f8f4b2ba2214486e8c868970733ca8c0db210c1304/protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl">https://files.pythonhosted.org/packages/7c/68/3bc155728fe545fdf0f8f4b2ba2214486e8c868970733ca8c0db210c1304/protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210007261/env/lib/python3.9/site-packages/protobuf-3.19.5.dist-info</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/11737630e7f3cd436ea02ab02582b0fab4a69e83">11737630e7f3cd436ea02ab02582b0fab4a69e83</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (protobuf version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2025-4565](https://www.mend.io/vulnerability-database/CVE-2025-4565) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | Direct | 6.31.1 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2025-4565</summary>
### Vulnerable Library - <b>protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</b>
<p>No project description provided</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7c/68/3bc155728fe545fdf0f8f4b2ba2214486e8c868970733ca8c0db210c1304/protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl">https://files.pythonhosted.org/packages/7c/68/3bc155728fe545fdf0f8f4b2ba2214486e8c868970733ca8c0db210c1304/protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210007261/env/lib/python3.9/site-packages/protobuf-3.19.5.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/11737630e7f3cd436ea02ab02582b0fab4a69e83">11737630e7f3cd436ea02ab02582b0fab4a69e83</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Any project that uses Protobuf Pure-Python backend to parse untrusted Protocol Buffers data containing an arbitrary number of recursive groups, recursive messages or a series of SGROUP tags can be corrupted by exceeding the Python recursion limit. This can result in a Denial of service by crashing the application with a RecursionError. We recommend upgrading to version =>6.31.1 or beyond commit 17838beda2943d08b8a9d4df5b68f5f04f26d901
<p>Publish Date: 2025-06-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-4565>CVE-2025-4565</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2025-06-16</p>
<p>Fix Resolution: 6.31.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | protobuf-3.19.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl: 1 vulnerabilities (highest severity is: 7.5) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5802/comments | 0 | 2025-06-19T21:08:49Z | 2025-06-21T00:08:57Z | https://github.com/opensearch-project/data-prepper/issues/5802 | 3,161,336,224 | 5,802 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-2.2.2-py3-none-any.whl</b></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ca/1c/89ffc63a9605b583d5df2be791a27bc1a42b7c32bab68d3c8f2f73a98cd4/urllib3-2.2.2-py3-none-any.whl">https://files.pythonhosted.org/packages/ca/1c/89ffc63a9605b583d5df2be791a27bc1a42b7c32bab68d3c8f2f73a98cd4/urllib3-2.2.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/urllib3-2.2.2.dist-info</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (urllib3 version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2025-50182](https://www.mend.io/vulnerability-database/CVE-2025-50182) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | urllib3-2.2.2-py3-none-any.whl | Direct | 2.5.0 | ✅ |
| [CVE-2025-50181](https://www.mend.io/vulnerability-database/CVE-2025-50181) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | urllib3-2.2.2-py3-none-any.whl | Direct | 2.5.0 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2025-50182</summary>
### Vulnerable Library - <b>urllib3-2.2.2-py3-none-any.whl</b>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ca/1c/89ffc63a9605b583d5df2be791a27bc1a42b7c32bab68d3c8f2f73a98cd4/urllib3-2.2.2-py3-none-any.whl">https://files.pythonhosted.org/packages/ca/1c/89ffc63a9605b583d5df2be791a27bc1a42b7c32bab68d3c8f2f73a98cd4/urllib3-2.2.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/urllib3-2.2.2.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-2.2.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, urllib3 does not control redirects in browsers and Node.js. urllib3 supports being used in a Pyodide runtime utilizing the JavaScript Fetch API or falling back on XMLHttpRequest. This means Python libraries can be used to make HTTP requests from a browser or Node.js. Additionally, urllib3 provides a mechanism to control redirects, but the retries and redirect parameters are ignored with Pyodide; the runtime itself determines redirect behavior. This issue has been patched in version 2.5.0.
<p>Publish Date: 2025-06-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-50182>CVE-2025-50182</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2025-06-19</p>
<p>Fix Resolution: 2.5.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2025-50181</summary>
### Vulnerable Library - <b>urllib3-2.2.2-py3-none-any.whl</b>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ca/1c/89ffc63a9605b583d5df2be791a27bc1a42b7c32bab68d3c8f2f73a98cd4/urllib3-2.2.2-py3-none-any.whl">https://files.pythonhosted.org/packages/ca/1c/89ffc63a9605b583d5df2be791a27bc1a42b7c32bab68d3c8f2f73a98cd4/urllib3-2.2.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/urllib3-2.2.2.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-2.2.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, it is possible to disable redirects for all requests by instantiating a PoolManager and specifying retries in a way that disable redirects. By default, requests and botocore users are not affected. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level will remain vulnerable. This issue has been patched in version 2.5.0.
<p>Publish Date: 2025-06-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-50181>CVE-2025-50181</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2025-06-19</p>
<p>Fix Resolution: 2.5.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | urllib3-2.2.2-py3-none-any.whl: 2 vulnerabilities (highest severity is: 5.3) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5801/comments | 0 | 2025-06-19T21:08:46Z | 2025-06-21T00:08:53Z | https://github.com/opensearch-project/data-prepper/issues/5801 | 3,161,336,151 | 5,801 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl</b></summary>
<p>No project description provided</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c7/df/ec3ecb8c940b36121c7b77c10acebf3d1c736498aa2f1fe3b6231ee44e76/protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/c7/df/ec3ecb8c940b36121c7b77c10acebf3d1c736498aa2f1fe3b6231ee44e76/protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/protobuf-3.20.3.dist-info</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (protobuf version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2025-4565](https://www.mend.io/vulnerability-database/CVE-2025-4565) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl | Direct | 6.31.1 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2025-4565</summary>
### Vulnerable Library - <b>protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl</b>
<p>No project description provided</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c7/df/ec3ecb8c940b36121c7b77c10acebf3d1c736498aa2f1fe3b6231ee44e76/protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/c7/df/ec3ecb8c940b36121c7b77c10acebf3d1c736498aa2f1fe3b6231ee44e76/protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/protobuf-3.20.3.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Any project that uses Protobuf Pure-Python backend to parse untrusted Protocol Buffers data containing an arbitrary number of recursive groups, recursive messages or a series of SGROUP tags can be corrupted by exceeding the Python recursion limit. This can result in a Denial of service by crashing the application with a RecursionError. We recommend upgrading to version =>6.31.1 or beyond commit 17838beda2943d08b8a9d4df5b68f5f04f26d901
<p>Publish Date: 2025-06-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-4565>CVE-2025-4565</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2025-06-16</p>
<p>Fix Resolution: 6.31.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | protobuf-3.20.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl: 1 vulnerabilities (highest severity is: 7.5) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5800/comments | 0 | 2025-06-19T21:08:43Z | 2025-06-21T00:08:50Z | https://github.com/opensearch-project/data-prepper/issues/5800 | 3,161,336,076 | 5,800 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.26.19-py2.py3-none-any.whl</b></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ae/6a/99eaaeae8becaa17a29aeb334a18e5d582d873b6f084c11f02581b8d7f7f/urllib3-1.26.19-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/ae/6a/99eaaeae8becaa17a29aeb334a18e5d582d873b6f084c11f02581b8d7f7f/urllib3-1.26.19-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210007261/env/lib/python3.9/site-packages/urllib3-1.26.19.dist-info</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (urllib3 version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2025-50182](https://www.mend.io/vulnerability-database/CVE-2025-50182) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | urllib3-1.26.19-py2.py3-none-any.whl | Direct | 2.5.0 | ✅ |
| [CVE-2025-50181](https://www.mend.io/vulnerability-database/CVE-2025-50181) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | urllib3-1.26.19-py2.py3-none-any.whl | Direct | 2.5.0 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2025-50182</summary>
### Vulnerable Library - <b>urllib3-1.26.19-py2.py3-none-any.whl</b>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ae/6a/99eaaeae8becaa17a29aeb334a18e5d582d873b6f084c11f02581b8d7f7f/urllib3-1.26.19-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/ae/6a/99eaaeae8becaa17a29aeb334a18e5d582d873b6f084c11f02581b8d7f7f/urllib3-1.26.19-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210007261/env/lib/python3.9/site-packages/urllib3-1.26.19.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.26.19-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, urllib3 does not control redirects in browsers and Node.js. urllib3 supports being used in a Pyodide runtime utilizing the JavaScript Fetch API or falling back on XMLHttpRequest. This means Python libraries can be used to make HTTP requests from a browser or Node.js. Additionally, urllib3 provides a mechanism to control redirects, but the retries and redirect parameters are ignored with Pyodide; the runtime itself determines redirect behavior. This issue has been patched in version 2.5.0.
<p>Publish Date: 2025-06-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-50182>CVE-2025-50182</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2025-06-19</p>
<p>Fix Resolution: 2.5.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2025-50181</summary>
### Vulnerable Library - <b>urllib3-1.26.19-py2.py3-none-any.whl</b>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ae/6a/99eaaeae8becaa17a29aeb334a18e5d582d873b6f084c11f02581b8d7f7f/urllib3-1.26.19-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/ae/6a/99eaaeae8becaa17a29aeb334a18e5d582d873b6f084c11f02581b8d7f7f/urllib3-1.26.19-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210007261/env/lib/python3.9/site-packages/urllib3-1.26.19.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.26.19-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, it is possible to disable redirects for all requests by instantiating a PoolManager and specifying retries in a way that disable redirects. By default, requests and botocore users are not affected. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level will remain vulnerable. This issue has been patched in version 2.5.0.
<p>Publish Date: 2025-06-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-50181>CVE-2025-50181</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2025-06-19</p>
<p>Fix Resolution: 2.5.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | urllib3-1.26.19-py2.py3-none-any.whl: 2 vulnerabilities (highest severity is: 5.3) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5799/comments | 0 | 2025-06-19T21:08:41Z | 2025-06-21T00:08:48Z | https://github.com/opensearch-project/data-prepper/issues/5799 | 3,161,336,034 | 5,799 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some members of the community would like `SNAPSHOT` builds of Maven artifacts
**Describe the solution you'd like**
Produce nightly snapshots.
**Describe alternatives you've considered (Optional)**
Weekly snapshots could be fine too.
**Additional context**
https://github.com/opensearch-project/opensearch-build/issues/5587
| Publish Maven artifacts as snapshots | https://api.github.com/repos/opensearch-project/data-prepper/issues/5796/comments | 1 | 2025-06-19T20:09:38Z | 2025-06-19T20:21:16Z | https://github.com/opensearch-project/data-prepper/issues/5796 | 3,161,250,840 | 5,796 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper extensions provide an `ExtensionProvider` which other plugins or Data Prepper core can use. The `ExtensionPoint` is linked to a specific class by the [`supportedClass` property](https://github.com/opensearch-project/data-prepper/blob/3ce5b53507312d06fe2edc90e64fe8224b357978/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/plugin/ExtensionProvider.java#L29-L34).
Data Prepper currently registers beans for the class type as extensions are loaded. However, there is no way to control which `ExtensionProvider` is used if multiple are provided.
We are looking to have alternative extension implementations which would provide the same supported class for the `ExtensionProvider`.
**Describe the solution you'd like**
Update Data Prepper's extension loading to support alternative extension implementations. For example, we could use a different `geoip` or `aws` extension.
To do this, I propose a well-defined process for choosing which `ExtensionProvider` to use for a given supported class. The key part of this process would be making the decision based on what is configured in the `data-prepper-config.yaml`. So, any extension with a `rootKeyJsonPath` that is present in the `data-prepper-config.yaml` takes priority.
Beyond that, we may want to consider some other ways to prioritize. Some ideas would include: preferring non-`org.opensearch.dataprepper` extensions, alphabetical loading, and/or a priority definition that could be provided.
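As an illustration, a minimal `data-prepper-config.yaml` sketch is below; the `geoip_service` extension and its keys are used purely as an example, and the selection behavior described in the comments is the proposal above, not existing functionality.
```
# Hypothetical sketch: two extensions both provide an ExtensionProvider for the
# same supported class. Under this proposal, the extension whose rootKeyJsonPath
# (here, /geoip_service) appears in data-prepper-config.yaml would be selected.
extensions:
  geoip_service:
    maxmind:
      database_refresh_interval: PT6H
```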
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
Ultimately, an extension is providing an `ExtensionProvider`. However, this could be expanded in the future.
The main consideration for implementing this is changing how `addExtensionProvider` registers beans. It would need to load the correct one.
https://github.com/opensearch-project/data-prepper/blob/7cbfb0a44651b2a3e453723ee8c4d8a3a484eb71/data-prepper-plugin-framework/src/main/java/org/opensearch/dataprepper/plugin/DataPrepperExtensionPoints.java#L33-L43
| Support alternative extension implementations | https://api.github.com/repos/opensearch-project/data-prepper/issues/5792/comments | 0 | 2025-06-18T20:22:38Z | 2025-06-18T20:22:47Z | https://github.com/opensearch-project/data-prepper/issues/5792 | 3,158,067,537 | 5,792 |
[
"opensearch-project",
"data-prepper"
] | Hi @kkondaka , @KarstenSchnitter , @dlvenable , @rafael-gumiero
I would like to raise this issue to provide some information (issues) I observed trying to ingest OTLP data into opensearch using SS4O.
I have quite some experience with the `elastic-stack`, but have never worked before with `opensearch` and `data-prepper`.
As mentioned, my goal was (is) to ingest OTLP data into opensearch while keeping the structure as close as possible to OTLP (which, I think, is also what SS4O is trying to do. Correct me if I'm wrong).
And, finally, this is not meant to tell you how to do it better. I simply hope that my information might be useful to improve the stack and get new users (like myself) started more quickly.
### otel_logs_source, otel_metrics_source, otel_traces_source
I tried to follow the example pipelines mentioned at https://docs.opensearch.org/docs/latest/data-prepper/common-use-cases/trace-analytics/#example-trace-analytics-pipeline or https://docs.opensearch.org/docs/latest/data-prepper/common-use-cases/log-analytics/#pipeline-configuration and was later also wondering where that `log.attributes` field came from, which is also mentioned at
https://github.com/opensearch-project/data-prepper/issues/3098 and
https://github.com/opensearch-project/data-prepper/issues/5455
I tried *rename_keys*, *add_entries* + *delete_entries* within `data-prepper`, but nothing worked. In the end, I always had that `log.attributes` object and was wondering where it came from.
So this comes from https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/otel-proto-common/src/main/java/org/opensearch/dataprepper/plugins/otel/codec/OTelProtoOpensearchCodec.java
The *additional problem* is that this *transformation* seems to be done *AFTER* all `processors` have been run. So there "seemed" to be no way of getting rid of those fields.
But then I found the - until now undocumented? - option `output_format` within those 3 `sources`. Once I have set this to `output_format: otel`, the payload is - basically - kept as it is. Great!
So I'm running something similar to...
```
source:
  otel_logs_source:
    ssl: false
    output_format: otel
    ...
sink:
  - opensearch:
      ...
      index_type: management_disabled
      index: ss4o_logs-demo-poc
      action: create
```
Of course, I needed to prepare the `data_stream` before.
### ss4o index templates
I first found the index-templates in the [notebook.json](https://github.com/opensearch-project/opensearch-catalog/blob/main/integrations/observability/otel-schema-setup/info/notebook.json) and used those to create my templates.
Note: Each `PUT` statement is missing the leading `_`. The API is `PUT _index_template/` and not `PUT index_template/`
But using those *templates*, I pretty quickly got a lot of ingestion errors caused by `mapping_conflicts`, mostly referring to `resource.attributes`. As `output_format: otel` does not *flatten* (nor replace `.` with `@` in key names), the `objects` that often end up inside `resource.attributes` conflicted with the `dynamic_template` that types all those fields as `keyword`.
Then, I finally found the *different* `index-templates` at https://github.com/opensearch-project/opensearch-catalog/tree/main/schema/observability and noticed that those templates *differ* from the templates mentioned within the `notebook.json`. Especially, PR https://github.com/opensearch-project/opensearch-catalog/pull/132 removed those `keyword` dynamic template for `resource.*`.
I then started to use the `data_stream` versions: So [logs](https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/logs/logs-datastream-1.0.0.mapping), [metrics](https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/metrics/metrics-datastream-1.0.0.mapping) and [traces](https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/traces/traces-datastream-1.0.0.mapping)
#### traces
The current [traces](https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/traces/traces-datastream-1.0.0.mapping) template is not valid JSON! This is caused by a missing `properties: {` line at https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/traces/traces-datastream-1.0.0.mapping#L111
Once I fixed that, the *raw* `logs` and `traces` seem to get ingested just fine (without any special `trace analytics` or similar).
#### metrics
Because of issues observed in `data-prepper` (I also have an otel-demo running), I changed the field `value` to be only of type `double`, as all data that I saw were of type `double`. I found https://github.com/opensearch-project/opensearch-catalog/blob/main/docs/schema/observability/metrics/metrics.md, where `value.int` and `value.double` are defined....but for my current testing, I changed that.
```
"value@int": {
"type": "integer"
},
"value@double": {
"type": "double"
}
```
into
```
"value": {
"type": "double"
}
```
### bucketCount(s)?
I found https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/metric/JacksonHistogram.java#L32 writing `bucketCounts`, but https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/metrics/metrics-datastream-1.0.0.mapping#L131 and https://github.com/opensearch-project/opensearch-catalog/blob/main/docs/schema/observability/metrics/metrics.md are using `bucketCount` (without `s`)
Not sure, which way is easier to fix.
### exemplar(s)
https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/metric/JacksonMetric.java#L38 and https://opentelemetry.io/docs/specs/otel/metrics/data-model/#exemplars use the plural `exemplars`, while the template is using singular `exemplar` (https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/metrics/metrics-datastream-1.0.0.mapping#L35 , https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/metrics/metrics-datastream-1.0.0.mapping#L208 and https://github.com/opensearch-project/opensearch-catalog/blob/main/docs/schema/observability/metrics/metrics.md)
With all those changes, at least, data can be ingested into my 3 signal indices.
Again...I hope, that this information is useful for you.
Some screenshots:



| Inconsistencies regarding ss4o in docs, code, index-templates | https://api.github.com/repos/opensearch-project/data-prepper/issues/5791/comments | 1 | 2025-06-18T09:28:15Z | 2025-06-18T09:53:41Z | https://github.com/opensearch-project/data-prepper/issues/5791 | 3,156,145,990 | 5,791 |
[
"opensearch-project",
"data-prepper"
[Here](https://github.com/opensearch-project/data-prepper/tree/main/data-prepper-plugins/aggregate-processor#append) I see an `append` action for aggregate.
But in the latest [Doc](https://docs.opensearch.org/docs/latest/data-prepper/pipelines/configuration/processors/aggregate/#available-aggregate-actions), the available actions are quite different.
I am wondering whether `append` and the other actions are still available in the latest version of data prepper, or whether there is an alternative way to get the same behavior as `append`. I can update the document for this part.
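For reference, below is a minimal pipeline sketch based on the README's `append` action; the `identification_keys` and `group_duration` values are illustrative, and whether `append` still exists in the latest release is exactly the question above.
```
# Sketch assuming the append action from the README is still available;
# the identification keys and group_duration below are illustrative only.
append-pipeline:
  processor:
    - aggregate:
        identification_keys: ["sourceIp", "destinationIp"]
        group_duration: "30s"
        action:
          append:
```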
| Aggregate action documentation fix | https://api.github.com/repos/opensearch-project/data-prepper/issues/5790/comments | 0 | 2025-06-17T23:32:09Z | 2025-06-17T23:32:18Z | https://github.com/opensearch-project/data-prepper/issues/5790 | 3,154,998,347 | 5,790 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Need a processor similar to OTEL's lookup processor, or a memcached-like processor, which can look up a key and cache the value.
**Describe the solution you'd like**
Enhance the Lambda processor to cache the results of a lambda execution for a short TTL. The lambda input configuration should identify the key and send just the key as input to the lambda processor, instead of the entire event.
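A rough sketch of what that configuration could look like is below; the `aws_lambda` processor name and the `function_name`/`aws` keys reflect the existing Lambda processor, while the `key_input` and `cache` options are hypothetical settings proposed here and do not exist today.
```
processor:
  - aws_lambda:
      function_name: "customer-lookup"   # illustrative function name
      aws:
        region: "us-east-1"
      key_input: "/customer_id"          # hypothetical: send only this key to Lambda
      cache:                             # hypothetical: cache Lambda results locally
        ttl: "60s"
        maximum_size: 10000
```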
**Describe alternatives you've considered (Optional)**
Alternatively,
1. support lookup processor similar to OTEL
2. add memcached processor which interacts with external memcache server
3. lookup of S3 with key as S3 object name and value as contents of the object
In all these cases the data is cached locally for a short duration
**Additional context**
Add any other context or screenshots about the feature request here.
| Add Lookup Processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5788/comments | 0 | 2025-06-17T14:49:41Z | 2025-06-17T14:49:51Z | https://github.com/opensearch-project/data-prepper/issues/5788 | 3,153,791,048 | 5,788 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
S3 sink currently supports the following config where the bucket name and account id are static
```
bucket_owners:
  my-bucket-01: 123456789012
  my-bucket-02: 999999999999
```
The feature request is to allow dynamic values for the bucket names and account IDs.
**Describe the solution you'd like**
Extend the config to allow Data Prepper expressions in the above config like
```
bucket_owners:
  "{/bucket-field1}": "{/account-field1}"
  "{/bucket-field2}": "{/account-field2}"
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Provide option to use expressions in S3 sink bucket_owners config | https://api.github.com/repos/opensearch-project/data-prepper/issues/5787/comments | 0 | 2025-06-17T14:43:50Z | 2025-06-17T14:44:01Z | https://github.com/opensearch-project/data-prepper/issues/5787 | 3,153,770,102 | 5,787 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. It would be nice to have [...]
Currently, the csv processor doesn't support multi-line input. It would be good to support multi-line CSV input where the first line is the header line.
**Describe the solution you'd like**
Add an option to CSV processor to parse multi-line input. The suggested config is
```
processor:
  - csv:
      <existing options>
      multi_line: true  # default false
```
When `multi_line` is specified, `key` and `keys` are optional.
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add option to process multi-line CSV data in csv processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5783/comments | 0 | 2025-06-16T18:09:39Z | 2025-06-16T22:44:28Z | https://github.com/opensearch-project/data-prepper/issues/5783 | 3,150,851,986 | 5,783 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>aws-cdk-lib-2.88.0.tgz</b></summary>
<p></p>
<p>Path to dependency file: /release/staging-resources-cdk/package.json</p>
<p>Path to vulnerable library: /release/staging-resources-cdk/package.json,/testing/aws-testing-cdk/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (aws-cdk-lib version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2025-5889](https://www.mend.io/vulnerability-database/CVE-2025-5889) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low | 3.1 | brace-expansion-1.1.11.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p><p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> CVE-2025-5889</summary>
### Vulnerable Library - <b>brace-expansion-1.1.11.tgz</b>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz</a></p>
<p>Path to dependency file: /release/staging-resources-cdk/package.json</p>
<p>Path to vulnerable library: /release/staging-resources-cdk/package.json,/testing/aws-testing-cdk/package.json</p>
<p>
Dependency Hierarchy:
- aws-cdk-lib-2.88.0.tgz (Root Library)
- minimatch-3.1.2.tgz
- :x: **brace-expansion-1.1.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was found in juliangruber brace-expansion up to 1.1.11. It has been rated as problematic. Affected by this issue is the function expand of the file index.js. The manipulation leads to inefficient regular expression complexity. The attack may be launched remotely. The complexity of an attack is rather high. The exploitation is known to be difficult. The exploit has been disclosed to the public and may be used. The name of the patch is a5b98a4f30d7813266b221435e1eaaf25a1b0ac5. It is recommended to apply a patch to fix this issue.
<p>Publish Date: 2025-06-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-5889>CVE-2025-5889</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details> | aws-cdk-lib-2.88.0.tgz: 1 vulnerabilities (highest severity is: 3.1) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5780/comments | 0 | 2025-06-13T20:11:42Z | 2025-06-13T20:11:50Z | https://github.com/opensearch-project/data-prepper/issues/5780 | 3,144,655,645 | 5,780 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>aws-cdk-lib-2.100.0.tgz</b></summary>
<p></p>
<p>Path to dependency file: /testing/aws-testing-cdk/package.json</p>
<p>Path to vulnerable library: /release/staging-resources-cdk/package.json,/testing/aws-testing-cdk/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (aws-cdk-lib version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2025-5889](https://www.mend.io/vulnerability-database/CVE-2025-5889) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low | 3.1 | brace-expansion-1.1.11.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p><p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> CVE-2025-5889</summary>
### Vulnerable Library - <b>brace-expansion-1.1.11.tgz</b>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz</a></p>
<p>Path to dependency file: /release/staging-resources-cdk/package.json</p>
<p>Path to vulnerable library: /release/staging-resources-cdk/package.json,/testing/aws-testing-cdk/package.json</p>
<p>
Dependency Hierarchy:
- aws-cdk-lib-2.100.0.tgz (Root Library)
- minimatch-3.1.2.tgz
- :x: **brace-expansion-1.1.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was found in juliangruber brace-expansion up to 1.1.11. It has been rated as problematic. Affected by this issue is the function expand of the file index.js. The manipulation leads to inefficient regular expression complexity. The attack may be launched remotely. The complexity of an attack is rather high. The exploitation is known to be difficult. The exploit has been disclosed to the public and may be used. The name of the patch is a5b98a4f30d7813266b221435e1eaaf25a1b0ac5. It is recommended to apply a patch to fix this issue.
<p>Publish Date: 2025-06-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-5889>CVE-2025-5889</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details> | aws-cdk-lib-2.100.0.tgz: 1 vulnerabilities (highest severity is: 3.1) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5779/comments | 0 | 2025-06-13T20:11:40Z | 2025-06-13T20:11:49Z | https://github.com/opensearch-project/data-prepper/issues/5779 | 3,144,655,607 | 5,779 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Implementing compression for the Kafka buffer could improve effective bandwidth, requiring fewer compute units for a push source.
**Describe the solution you'd like**
I would implement zSTD compression of data that is sent to the Kafka buffer. This would be implemented in the `KafkaCustomProducer` and `KafkaCustomConsumer` classes. | Enable Compression for Kafka Buffer - Json type | https://api.github.com/repos/opensearch-project/data-prepper/issues/5777/comments | 1 | 2025-06-13T18:01:10Z | 2025-06-20T23:53:01Z | https://github.com/opensearch-project/data-prepper/issues/5777 | 3,144,327,701 | 5,777 |
[
"opensearch-project",
"data-prepper"
] | **Description:**
As part of the changes introduced in [#5434](https://github.com/opensearch-project/data-prepper/pull/5434), the OTEL protobuf specification was updated. While the system remains **backward compatible when using Protobuf**, issues arise when the data is sent in **JSON format**.
When sample JSON data includes `instrumentationLibraryMetrics` with an **empty `instrumentationLibrary`** and a list of metrics like this:
```json
"instrumentationLibraryMetrics": [
{
"instrumentationLibrary": {},
"metrics": [
{
"name": "counter-int"
}
]
}
]
```
The following exception is thrown:
```
org.opensearch.dataprepper.GrpcRequestExceptionHandler - Unexpected exception handling gRPC request
com.google.protobuf.InvalidProtocolBufferException: Expected start of object, got: [
at org.curioswitch.common.protobuf.json.TypeSpecificMarshaller.mergeValue(TypeSpecificMarshaller.java:66) ~[protobuf-jackson-2.5.0.jar:?]
```
**Expected Behavior:**
Data Prepper should handle empty `instrumentationLibrary` objects gracefully in JSON mode, without throwing an exception. At a minimum, a more user-friendly error message would help in identifying and correcting malformed input.
**Steps to Reproduce:**
1. Send OTEL metrics in JSON format with an empty `instrumentationLibrary` object.
2. Observe the thrown `InvalidProtocolBufferException`.
Sample pipeline configuration:
```
test-pipeline:
  source:
    otel_metrics_source:
      ssl: false
      port: 4317
      path: /log/ingest
      unframed_requests: true
  sink:
    - stdout:
```
```
curl -X POST http://localhost:4317/log/ingest \
  -H "Content-Type: application/json" \
  -u admin:securepass \
  -d '{
    "resourceMetrics": [
      {
        "instrumentationLibraryMetrics": [
          {
            "instrumentationLibrary": {},
            "metrics": [
              {
                "name": "cpu_usage",
                "sum": {
                  "dataPoints": [
                    {
                      "asDouble": 75.5
                    }
                  ]
                }
              }
            ]
          }
        ]
      }
    ]
  }'
```
**Environment:**
* Data Prepper version: \[version that includes PR #5434]
* Data format: JSON
* Libraries: `protobuf-jackson-2.5.0`
**Suggested Fix:**
Investigate JSON deserialization logic for `instrumentationLibrary` and handle cases where the field is present but empty.
| [BUG] Exception when OTel metrics data is sent in JSON format with `instrumentationLibrary` field | https://api.github.com/repos/opensearch-project/data-prepper/issues/5769/comments | 1 | 2025-06-10T16:55:17Z | 2025-06-10T19:46:31Z | https://github.com/opensearch-project/data-prepper/issues/5769 | 3,134,271,538 | 5,769 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I updated a data prepper installation running 2.10.3 to 2.11.0 and the new instance fails to start after displaying an error message like:
```
2025-06-05T17:15:44,937 [main] WARN org.springframework.context.support.AbstractApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataPrepper' defined in URL [jar:file:/usr/share/data-prepper/lib/data-prepper-core-2.11.0.jar!/org/opensearch/dataprepper/core/DataPrepper.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.opensearch.dataprepper.core.DataPrepper]: Constructor threw exception; nested exception is org.opensearch.dataprepper.pipeline.parser.InvalidPipelineConfigurationException: The following routes do not exist in pipeline "entry-pipeline-us-west-2": [_default]. Configured routes include [usw2dev-test-dp-ingestion-us-west-2_route-us-west-2]
```
It seems like this feature addition mentioned in the 2.11.0 release notes could be related. https://github.com/opensearch-project/data-prepper/issues/5106
**To Reproduce**
This data-prepper-config.yaml file works under 2.10.3 but not under 2.11.0:
```
entry-pipeline-us-west-2:
  source:
    s3:
      acknowledgments: true
      notification_type: "sqs"
      compression: gzip
      codec:
        newline:
      sqs:
        queue_url: "https://sqs.us-west-2.amazonaws.com/1234567890/data-prepper-ingress-us-west-2-test"
        maximum_messages: 10
        visibility_timeout: "60s"
      bucket_owners:
        usw2dev-test-dp-ingestion-us-west-2: 1234567890
      aws:
        region: "us-west-2"
  processor:
  route:
    - usw2dev-test-dp-ingestion-us-west-2_route-us-west-2: '/s3/bucket == "usw2dev-test-dp-ingestion-us-west-2"'
  sink:
    - pipeline:
        name: usw2dev-test-dp-ingestion-us-west-2-pipeline-us-west-2
        routes:
          - usw2dev-test-dp-ingestion-us-west-2_route-us-west-2
    - pipeline:
        name: default-pipeline-us-west-2
        routes:
          - _default
default-pipeline-us-west-2:
  source:
    pipeline:
      name: entry-pipeline-us-west-2
  sink:
    - opensearch:
        hosts: [ "https://redacted.us-west-2.es.amazonaws.com" ]
        aws:
          region: "us-west-2"
        index: "data_prepper_unsorted"
usw2dev-test-dp-ingestion-us-west-2-pipeline-us-west-2:
  source:
    pipeline:
      name: entry-pipeline-us-west-2
  processor:
    - date:
        from_time_received: true
        destination: "@dp_timestamp"
    - parse_json:
    - date:
        match:
          - key: date
            patterns: ["yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"]
        destination: "@timestamp"
        source_timezone: "Etc/UTC"
        destination_timezone: "Etc/UTC"
    - delete_entries:
        with_keys: ["date", "s3"]
  sink:
    - opensearch:
        hosts: [ "https://redacted.us-west-2.es.amazonaws.com" ]
        aws:
          region: "us-west-2"
        index: "usw2dev"
        action: create
        max_retries: 10
        dlq:
          s3:
            bucket: "data-prepper-ingress-us-west-2-test-dlq"
            key_path_prefix: "/%{yyyy}/%{MM}/%{dd}"
            region: "us-west-2"
```
(Note that several facets like AWS account IDs have been changed.)
The `_default` route is described in https://docs.opensearch.org/blog/Data-Prepper-2.9.0-is-ready-for-download/
**Environment (please complete the following information):**
- OS: docker image opensearchproject/data-prepper:2.11.0
- Version 2.11.0
| [BUG] _default route no longer seems to exist in 2.11.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5763/comments | 1 | 2025-06-05T19:19:11Z | 2025-06-10T19:41:11Z | https://github.com/opensearch-project/data-prepper/issues/5763 | 3,122,367,421 | 5,763 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>requests-2.32.0-py3-none-any.whl</b></summary>
<p>Python HTTP for Humans.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/24/e8/09e8d662a9675a4e4f5dd7a8e6127b463a091d2703ed931a64aa66d00065/requests-2.32.0-py3-none-any.whl">https://files.pythonhosted.org/packages/24/e8/09e8d662a9675a4e4f5dd7a8e6127b463a091d2703ed931a64aa66d00065/requests-2.32.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/requests-2.32.0.dist-info</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (requests version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-47081](https://www.mend.io/vulnerability-database/CVE-2024-47081) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | requests-2.32.0-py3-none-any.whl | Direct | 2.32.4 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2024-47081</summary>
### Vulnerable Library - <b>requests-2.32.0-py3-none-any.whl</b>
<p>Python HTTP for Humans.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/24/e8/09e8d662a9675a4e4f5dd7a8e6127b463a091d2703ed931a64aa66d00065/requests-2.32.0-py3-none-any.whl">https://files.pythonhosted.org/packages/24/e8/09e8d662a9675a4e4f5dd7a8e6127b463a091d2703ed931a64aa66d00065/requests-2.32.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/requests-2.32.0.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **requests-2.32.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Requests is a HTTP library. Due to a URL parsing issue, Requests releases prior to 2.32.4 may leak .netrc credentials to third parties for specific maliciously-crafted URLs. Users should upgrade to version 2.32.4 to receive a fix. For older versions of Requests, use of the .netrc file can be disabled with "trust_env=False" on one's Requests Session.
Mend Note: The description of this vulnerability differs from MITRE.
<p>Publish Date: 2025-06-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-47081>CVE-2024-47081</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-9hjg-9r4m-mvj7">https://github.com/advisories/GHSA-9hjg-9r4m-mvj7</a></p>
<p>Release Date: 2025-06-09</p>
<p>Fix Resolution: 2.32.4</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | requests-2.32.0-py3-none-any.whl: 1 vulnerabilities (highest severity is: 5.3) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5762/comments | 1 | 2025-06-04T22:23:27Z | 2025-06-21T00:08:51Z | https://github.com/opensearch-project/data-prepper/issues/5762 | 3,119,278,624 | 5,762 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>requests-2.32.3-py3-none-any.whl</b></summary>
<p>Python HTTP for Humans.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl">https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210007261/env/lib/python3.9/site-packages/requests-2.32.3.dist-info</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (requests version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2024-47081](https://www.mend.io/vulnerability-database/CVE-2024-47081) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | requests-2.32.3-py3-none-any.whl | Direct | 2.32.4 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2024-47081</summary>
### Vulnerable Library - <b>requests-2.32.3-py3-none-any.whl</b>
<p>Python HTTP for Humans.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl">https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl</a></p>
<p>Path to dependency file: /release/smoke-tests/otel-span-exporter/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210007261/env/lib/python3.9/site-packages/requests-2.32.3.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **requests-2.32.3-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/eed21ce11a1f5e4c1671583fc8d6f340caaa6b71">eed21ce11a1f5e4c1671583fc8d6f340caaa6b71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Requests is a HTTP library. Due to a URL parsing issue, Requests releases prior to 2.32.4 may leak .netrc credentials to third parties for specific maliciously-crafted URLs. Users should upgrade to version 2.32.4 to receive a fix. For older versions of Requests, use of the .netrc file can be disabled with "trust_env=False" on one's Requests Session.
Mend Note: The description of this vulnerability differs from MITRE.
<p>Publish Date: 2025-06-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2024-47081>CVE-2024-47081</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-9hjg-9r4m-mvj7">https://github.com/advisories/GHSA-9hjg-9r4m-mvj7</a></p>
<p>Release Date: 2025-06-09</p>
<p>Fix Resolution: 2.32.4</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | requests-2.32.3-py3-none-any.whl: 1 vulnerabilities (highest severity is: 5.3) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5761/comments | 1 | 2025-06-04T22:23:26Z | 2025-06-21T00:08:46Z | https://github.com/opensearch-project/data-prepper/issues/5761 | 3,119,278,584 | 5,761 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, Data Prepper expression functions operate independently and cannot be nested or chained together. This limitation forces users to create multiple processing steps or complex pipeline configurations to achieve what could be expressed as a single, readable expression.
**Describe the solution you'd like**
I would like to add support for function chaining/nesting in Data Prepper expressions, allowing functions to accept the results of other functions as arguments. This would enable expressions like
- `contains(join(/error_codes, "|"), "404")` - Check if any error code in array equals 404
- `length(getMetadata("timestamp")) == 13` - Validate timestamp format length
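For illustration, a hypothetical pipeline fragment using a chained expression directly in a processor condition might look like the following (a sketch only; this does not work today, since nesting is exactly what is being requested):

```yaml
processor:
  - add_entries:
      entries:
        - key: "has_404"
          value: true
          # hypothetical: nested function calls inside a conditional expression
          add_when: 'contains(join(/error_codes, "|"), "404")'
```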
**Additional context**
Since we are adding new functions like `subList`, and more (such as `split`) will be added in the future, function chaining will make the transformation and filtering parts of Data Prepper much simpler to implement. | Add Function Chaining Support to Expression Functions | https://api.github.com/repos/opensearch-project/data-prepper/issues/5760/comments | 0 | 2025-06-04T07:17:53Z | 2025-06-10T19:39:17Z | https://github.com/opensearch-project/data-prepper/issues/5760 | 3,116,773,738 | 5,760 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of Fluent Bit and its http output plugin, which supports gzip compression (https://docs.fluentbit.io/manual/pipeline/outputs/http), I cannot use that compression because Data Prepper's http source does not support receiving gzip-compressed requests.
**Describe the solution you'd like**
An option to enable gzip compression in the http source
```
source:
http:
compression: gzip # default to none
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Support receiving gzip compressed data in http source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5759/comments | 1 | 2025-06-02T15:02:53Z | 2025-06-10T19:38:50Z | https://github.com/opensearch-project/data-prepper/issues/5759 | 3,110,567,823 | 5,759 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The parse_json processor should provide options for key manipulation.
**Describe the solution you'd like**
Data Prepper doesn't support some characters in keys; for example, the space character is not supported. To avoid failures during JSON parsing, provide an option to replace certain characters/strings with other characters or strings. Taking the "translate" processor config as an example, provide a `replace_map` option that maps strings to replacement strings, as shown below:
```
processor:
- parse_json:
<other existing config options>
replace_map:
" ": "_"
"-": "_"
```
Optionally, parse_json could also provide an option to convert all keys to lowercase/uppercase.
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| parse_json processor enhancements | https://api.github.com/repos/opensearch-project/data-prepper/issues/5758/comments | 0 | 2025-06-01T20:17:29Z | 2025-06-10T19:38:25Z | https://github.com/opensearch-project/data-prepper/issues/5758 | 3,107,793,676 | 5,758 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Support simpler and faster renaming of keys.
**Describe the solution you'd like**
Support the rename_keys processor renaming every key to lowercase/uppercase, and add support for faster literal replacements (like replacing "-" with "_") without regex.
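A possible configuration shape for these options (a sketch only; the `convert_case` and `replace` option names are hypothetical and do not exist today):

```yaml
processor:
  - rename_keys:
      # hypothetical options for this feature request
      convert_case: "lower"     # or "upper"
      replace:                  # plain string replacements, no regex
        "-": "_"
        " ": "_"
```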
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Rename keys processor enhancements | https://api.github.com/repos/opensearch-project/data-prepper/issues/5757/comments | 0 | 2025-06-01T20:03:25Z | 2025-06-10T19:38:14Z | https://github.com/opensearch-project/data-prepper/issues/5757 | 3,107,773,324 | 5,757 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `convert_entry_type` processor should detect the type of data in the string and auto convert it.
**Describe the solution you'd like**
For all string-type fields in the event, the processor should auto-detect the type and convert the value to the appropriate type.
For example
`"flag":"true"` should be converted to `"flag":true` (boolean)
`"value":"10"` should be converted to `"value":10` (integer or long depending on the value)
`"delta": "1.2e6"` should be converted to `"delta": 1.2e6` (double or float)
`"date":"2025-05-30 14:22:07"` optionally converted to epoch_millis or epoch_nanos
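One possible way to express this in configuration (a sketch only; the `type: automatic` and `convert_timestamps_to` options are hypothetical):

```yaml
processor:
  - convert_entry_type:
      # hypothetical "automatic" mode: detect boolean/integer/long/double per string field
      type: "automatic"
      # hypothetical option for date strings
      convert_timestamps_to: "epoch_millis"
```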
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| convert_entry_type processor should support "automatic" mode | https://api.github.com/repos/opensearch-project/data-prepper/issues/5733/comments | 1 | 2025-05-31T00:53:15Z | 2025-06-17T21:20:27Z | https://github.com/opensearch-project/data-prepper/issues/5733 | 3,104,563,794 | 5,733 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
ParseJson processor should provide an option to generate the json without nesting. The nested fields should be flattened with a configurable separator like "."
**Describe the solution you'd like**
Provide a config option under `parse_json` processor to flatten.
```
processor:
- parse_json:
source:
destination:
flatten_with: "." # new config option indicating flatten and the character/string to use when flattening
```
The above config will parse a string like
`{"k1":"v1", "k2" : { "k3": "v3", "k4" : { "k5": "v5"}, "k6": "v6"}, "k7" : "v7"}`
to
```
{
"k1": "v1",
"k2.k3": "v3",
"k2.k4.k5": "v5",
"k2.k6": "v6",
"k7":"v7"
}
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| support flattening in ParseJson processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5732/comments | 1 | 2025-05-30T23:55:15Z | 2025-06-01T20:00:23Z | https://github.com/opensearch-project/data-prepper/issues/5732 | 3,104,487,041 | 5,732 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The event data received may have fields in varying formats. For example, OTel log data contains a "Body" field, which is a string; its value may contain JSON, plain text, XML, CSV, key-value pairs, etc. A way to detect the format of the data in a given field is needed.
**Describe the solution you'd like**
Add a new processor `format_detector` which looks at the content of a given `source` field and detects the format and stores the format as a string in the event data or metadata
```
processor:
- format_detector:
source:
# one of the following 2 is a must
target_key: (optional)
target_metadata_key: (optional)
(other config options as needed)
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add content format detector processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5731/comments | 2 | 2025-05-30T21:58:22Z | 2025-06-13T19:54:24Z | https://github.com/opensearch-project/data-prepper/issues/5731 | 3,104,355,027 | 5,731 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
We are encountering CappedPositionLost exceptions in our DocumentDB change stream processing, indicating potential data loss scenarios that need to be addressed.
## Current Behavior
When processing DocumentDB change streams, we're receiving CappedPositionLost exceptions indicating that our position in the capped collection has been deleted, potentially causing us to miss change events.
## Error Details
```java
Error: CappedPositionLost: CollectionScan died due to position in capped collection being deleted
Code: 136
Stack Trace:
org.opensearch.dataprepper.plugins.mongo.stream.StreamScheduler - Received an exception during stream processing
Location: StreamWorker.processStream(StreamWorker.java:287)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create data prepper pipeline on DocumentDb Collection with default 3 hrs change stream retention.
2. While this is happening, update other collections so that updates are written to the change stream; after a few days the change stream entries should be purged (3 hours retention by default and a 50 GB size limit).
3. The Data Prepper stream worker will start getting errors because its resume token is now from before the data was purged.
**Expected behavior**
Change stream processing should handle the recovery strategy and be able to resume the change stream.
Implement resumption strategy:
```
// Pseudo-code for consideration
try {
processChangeStream();
} catch (CappedPositionLost e) {
// Options:
// 1. Resume from last known good position
// 2. Use startAt with current timestamp
// 3. Implement full resync if needed
}
``` | [BUG] Mongo/DocumentDB CappedPositionLost Exception in Change Stream Processing Requires Recovery Strategy | https://api.github.com/repos/opensearch-project/data-prepper/issues/5730/comments | 1 | 2025-05-30T19:11:13Z | 2025-06-10T19:37:48Z | https://github.com/opensearch-project/data-prepper/issues/5730 | 3,104,043,706 | 5,730 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
In the Helm chart's deployment.yaml template, port 4900 is opened by default; this is OK:
```
- name: server
containerPort: {{ (.Values.config).serverPort | default 4900 }}
protocol: TCP
```
However, the Helm chart's service.yaml template does not open port 4900.
So, without changing the default values.yaml file, port 4900 is missing from the ClusterIP service.
If port 4900 is added to `.Values.ports` manually, the deployment will report port 4900 being opened twice:
`W0528 11:33:20.601128 17742 warnings.go:70] spec.template.spec.containers[0].ports[5]: duplicate port definition with spec.template.spec.containers[0].ports[4]`
see the deployment.yaml template:
```
ports:
{{- range .Values.ports }}
- name: {{ .name }}
containerPort: {{ .port }}
protocol: TCP
{{- end }}
- name: server
containerPort: {{ (.Values.config).serverPort | default 4900 }}
protocol: TCP
```
**To Reproduce**
Steps to reproduce the behavior:
see above
**Expected behavior**
The service.yaml template should expose the server port (default 4900) alongside the ports from `.Values.ports`, and the deployment template should not end up declaring port 4900 twice when users add it to `.Values.ports` as a workaround.
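For example, the service.yaml template could add the server port the same way deployment.yaml does (a sketch only, not the actual chart code):

```yaml
# service.yaml (sketch)
ports:
  {{- range .Values.ports }}
  - name: {{ .name }}
    port: {{ .port }}
    targetPort: {{ .port }}
    protocol: TCP
  {{- end }}
  - name: server
    port: {{ (.Values.config).serverPort | default 4900 }}
    targetPort: {{ (.Values.config).serverPort | default 4900 }}
    protocol: TCP
```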
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
HELM version: 0.3.1
**Additional context**
Add any other context about the problem here.
| [BUG] HELM template service.yaml missing open 4900 port | https://api.github.com/repos/opensearch-project/data-prepper/issues/5724/comments | 1 | 2025-05-28T03:30:50Z | 2025-06-10T19:44:52Z | https://github.com/opensearch-project/data-prepper/issues/5724 | 3,095,873,332 | 5,724 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When authentication is enabled in data-prepper-config.yaml, the readinessProbe and livenessProbe fail with a 401 error:
```
config:
data-prepper-config.yaml: |
ssl: false
serverPort: 4900
authentication:
http_basic:
username: "data-prepper"
password: "mys3cr3t"
```
**To Reproduce**
see above
**Expected behavior**
In the Helm chart, the data-prepper container's readinessProbe and livenessProbe should take into account whether http_basic authentication is enabled in the configuration.
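For example, when `http_basic` authentication is enabled, the probes could send the credentials in an Authorization header (a sketch only; the path and port are placeholders for whatever the chart's probes already target, and the base64 value is the `data-prepper:mys3cr3t` pair from the example config above):

```yaml
livenessProbe:
  httpGet:
    path: /health        # whichever path the chart's probes currently use
    port: 4900
    httpHeaders:
      - name: Authorization
        value: "Basic ZGF0YS1wcmVwcGVyOm15czNjcjN0"  # base64 of data-prepper:mys3cr3t
```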
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Helm chart version: 0.3.1
**Additional context**
Add any other context about the problem here.
| [BUG] readinessProbe and livenessProbe fails when enable the | https://api.github.com/repos/opensearch-project/data-prepper/issues/5723/comments | 1 | 2025-05-28T03:26:58Z | 2025-06-10T19:36:38Z | https://github.com/opensearch-project/data-prepper/issues/5723 | 3,095,867,290 | 5,723 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The AWS Lambda processor in Data Prepper is intermittently skipping record processing when running in AWS OSIS. This occurs in a pipeline configuration where:
* Source: DynamoDB (containing base objects with S3 paths to vector data)
* Processor: AWS Lambda (meant to read S3 files and append vector data)
* Sink: OpenSearch
The processor appears to be bypassed sometimes, with records going directly to the sink without Lambda processing, particularly under high traffic and memory pressure conditions.
Sample pipeline configuration:
```yaml
version: "2"
ddb-integ-pipeline:
source:
dynamodb:
acknowledgments: true
tables:
- table_arn: xxx
stream:
start_position: LATEST
aws: xxx
sink:
- opensearch:
hosts: xxx
index: xxx
index_type: custom
action: index
document_version: ${document_version}
document_version_type: external
document_id: ${my_id}
flush_timeout: -1
aws: xxx
dlq: xxx
processor:
- aws_lambda:
function_name: VectorIngestProcessor
batch:
key_name: events
threshold:
event_count: 10
maximum_size: 5mb
circuit_breaker_retries: 30
circuit_breaker_wait_interval: 1000
client:
max_concurrency: 600
aws:
xxx
```
Sample dlq message:
```json
{
"dlqObjects": [
{
"pluginId": "opensearch",
"pluginName": "opensearch",
"pipelineName": "ddb-integ-pipeline",
"failedData": {
"index": "xxx",
"indexId": null,
"status": 0,
"message": "There was an exception when evaluating the document_version '${document_version}': The key document_version could not be found in the Event when formatting",
"document": "{\"my_id\":\"xxx\",\"vectors\":\"s3://xxx.json\"}"
},
"timestamp": "2025-05-15T11:38:51.127Z"
}
]
}
```
Sample lambda logic:
```python
def handle_requests(self, event: Dict[str, List[Dict]], context: Any) -> List[Dict]:
...
# read s3 file path from field:vectors, and replace this field with actual vector data
...
output = []
while not result_queue.empty():
item = result_queue.get()
item["document_version"] = int(time.time() * 1000)
output.append(item)
return output
```
Observed Metrics during the issue:
* DynamoDB bytes processed: 220 MB/min (avg)
* JVM memory used: 12.23 GB (max)
* Lambda processor metrics:
* Request payload size: 81 KB (avg)
* Response payload size: 1.6 MB (avg)
Relevant Error Logs:
```
ERROR org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - There was an exception when evaluating the document_version '${document_version}': The key document_version could not be found in the Event when formatting Check the dlq if configured to see more details about the affected Event
```
```
ERROR org.opensearch.dataprepper.plugins.lambda.processor.LambdaProcessor - software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Acquire operation took longer than the configured maximum time. This indicates that a request cannot get a connection from the pool within the specified maximum time. This can be due to high request rate.
```
**To Reproduce**
Issue occurs frequently under the following conditions:
* High ingestion traffic
* High memory pressure (near JVM memory limits)
**Expected behavior**
All records should be processed by the Lambda function before being sent to OpenSearch, with each record containing:
* Vector data from S3 (10 vectors × 768 dimensions)
* Document version (timestamp)
**Screenshots**
No
**Environment (please complete the following information):**
Platform: AWS OSIS
Not deployed locally
**Additional context**
1. The vector data structure in processed records should contain 10 vectors, each with 768 dimensions, suggesting significant memory requirements for processing.
2. No Lambda errors were detected during this issue, suggesting the processor is being bypassed rather than failing
3. When DynamoDB write speed is low, all records are successfully processed by Lambda and ingested into OpenSearch, with complete vector data and document_version included in the records
| [BUG] AWS Lambda processor skips processing records under high load in OSIS | https://api.github.com/repos/opensearch-project/data-prepper/issues/5719/comments | 3 | 2025-05-25T17:28:23Z | 2025-06-16T22:58:45Z | https://github.com/opensearch-project/data-prepper/issues/5719 | 3,089,451,210 | 5,719 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
A flexible, rule-based approach is needed to decide whether it is feasible to dynamically update a pipeline config, given the new version.
**Describe the solution you'd like**
A simple ruleset that is extensible in the future to add/remove any rules that controls this decision.
```
public interface PipelineUpdatableRule<T> {
boolean evaluate(T input);
String getDescription();
}
```
Define all the rules that we would like to apply and execute them to make the decision.
**Describe alternatives you've considered (Optional)**
Define a new method in the Plugin interface called `IsDynamicallyUpdatable`
Make every plugin implement `IsDynamicallyUpdatable` method and let each plugin define their own logic of whether the given new state is acceptable to update dynamically or not.
```
boolean IsDynamicallyUpdatable(T input);
```
For all of the sources, processors, and sinks in the original pipeline, we will invoke this method to make the decision. Any new addition of processors should be acceptable unconditionally.
**Additional context**
Add any other context or screenshots about the feature request here.
| Ruleset to decide a Dynamic Pipeline Config update is acceptable/possible given the new version of pipeline config | https://api.github.com/repos/opensearch-project/data-prepper/issues/5717/comments | 0 | 2025-05-22T19:10:23Z | 2025-05-27T19:55:03Z | https://github.com/opensearch-project/data-prepper/issues/5717 | 3,084,318,807 | 5,717 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, Data Prepper takes the pipeline YAML as input at startup. To modify the pipeline YAML dynamically while Data Prepper is running, we need some way to provide the updated YAML to the Data Prepper core. This feature request is to add a new Data Prepper core API that reads the pipeline YAML from an S3 location.
**Describe the solution you'd like**
Addition of a new core api that takes S3 path to load the yaml.
```
curl --location 'localhost:4900/updatePipelineConfig' \
--header 'Content-Type: application/json' \
--data '{
"s3path": [ "s3://your-bucket/path-to-updated-1.yaml", "s3://your-bucket/path-to-updated-2.yaml"]
}'
```
Returns 200 OK if Data Prepper is able to read the new YAML from the given location and update the pipeline state.
Returns 4xx if the input to the API is not valid or not as expected.
Returns 5xx if Data Prepper is unable to read the given S3 location, for example because it could not find default AWS credentials authorized to read that location.
**Additional context**
To be able to achieve Dynamic Pipeline updates, this feature is required
| DataPrepper core API to load the pipeline yaml from a given S3 path | https://api.github.com/repos/opensearch-project/data-prepper/issues/5716/comments | 2 | 2025-05-22T18:44:23Z | 2025-06-02T20:02:07Z | https://github.com/opensearch-project/data-prepper/issues/5716 | 3,084,260,865 | 5,716 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
At present, the buffer implementations provide asynchronous behavior, but there is a need for a synchronous mechanism where the source thread waits until the events are published to the sink. This allows the source to ensure that the events have been processed.
Similar functionality was implemented in the Zero Buffer (#5415), which allows the source thread to execute the processors and then publish to the sink, but it does not leverage the existing process workers.
**Describe the solution you'd like**
- Implement a synchronous buffer which will block the source thread once it writes events to the synchronous buffer, until those events have been published to the sink.
- The worker threads will asynchronously read the events from the synchronous buffer and process these events and publish them to sink (async).
- Once these events are processed and published to sink, the buffer unblocks the source thread.
**Additional context**
This aims to utilize all the process workers while also ensuring synchronous flow between source and sink.
| [Feature] Synchronous Buffer | https://api.github.com/repos/opensearch-project/data-prepper/issues/5712/comments | 0 | 2025-05-21T23:07:10Z | 2025-05-27T19:52:46Z | https://github.com/opensearch-project/data-prepper/issues/5712 | 3,081,606,785 | 5,712 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Some community members have been looking to use Data Prepper plugins without running Data Prepper core itself.
**Describe the solution you'd like**
Produce more Maven artifacts for Data Prepper:
* Data Prepper plugins: `org.opensearch.dataprepper.plugins`
* Data Prepper core: `org.opensearch.dataprepper.core`
For example, I could use grok in Gradle:
```
implementation "org.opensearch.dataprepper.plugins:data-prepper-plugins-grok-processor:2.12.0"
```
**Describe alternatives you've considered (Optional)**
None
**Additional context**
None
## Tasks/Dependencies
- [x] #4931
| Provide Data Prepper plugins and core as jars in Maven | https://api.github.com/repos/opensearch-project/data-prepper/issues/5710/comments | 0 | 2025-05-21T17:09:10Z | 2025-06-04T22:08:10Z | https://github.com/opensearch-project/data-prepper/issues/5710 | 3,080,909,117 | 5,710 |
[
"opensearch-project",
"data-prepper"
] | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>setuptools-70.0.0-py3-none-any.whl</b></summary>
<p>Easily download, build, install, upgrade, and uninstall Python packages</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/de/88/70c5767a0e43eb4451c2200f07d042a4bcd7639276003a9c54a68cfcc1f8/setuptools-70.0.0-py3-none-any.whl">https://files.pythonhosted.org/packages/de/88/70c5767a0e43eb4451c2200f07d042a4bcd7639276003a9c54a68cfcc1f8/setuptools-70.0.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/setuptools-70.0.0.dist-info</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/11737630e7f3cd436ea02ab02582b0fab4a69e83">11737630e7f3cd436ea02ab02582b0fab4a69e83</a></p></details>
## Vulnerabilities
| Vulnerability | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (setuptools version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2025-47273](https://www.mend.io/vulnerability-database/CVE-2025-47273) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 8.8 | setuptools-70.0.0-py3-none-any.whl | Direct | 78.1.1 | ✅ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2025-47273</summary>
### Vulnerable Library - <b>setuptools-70.0.0-py3-none-any.whl</b>
<p>Easily download, build, install, upgrade, and uninstall Python packages</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/de/88/70c5767a0e43eb4451c2200f07d042a4bcd7639276003a9c54a68cfcc1f8/setuptools-70.0.0-py3-none-any.whl">https://files.pythonhosted.org/packages/de/88/70c5767a0e43eb4451c2200f07d042a4bcd7639276003a9c54a68cfcc1f8/setuptools-70.0.0-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/trace-analytics-sample-app/sample-app/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-ua_20250621000336_ZPRIWG/python_GTJGWW/202506210003391/env/lib/python3.9/site-packages/setuptools-70.0.0.dist-info</p>
<p>
Dependency Hierarchy:
- :x: **setuptools-70.0.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/11737630e7f3cd436ea02ab02582b0fab4a69e83">11737630e7f3cd436ea02ab02582b0fab4a69e83</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
setuptools is a package that allows users to download, build, install, upgrade, and uninstall Python packages. A path traversal vulnerability in "PackageIndex" is present in setuptools prior to version 78.1.1. An attacker would be allowed to write files to arbitrary locations on the filesystem with the permissions of the process running the Python code, which could escalate to remote code execution depending on the context. Version 78.1.1 fixes the issue.
Mend Note: The description of this vulnerability differs from MITRE.
<p>Publish Date: 2025-05-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2025-47273>CVE-2025-47273</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5rjg-fvgr-3xxf">https://github.com/advisories/GHSA-5rjg-fvgr-3xxf</a></p>
<p>Release Date: 2025-05-17</p>
<p>Fix Resolution: 78.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation will be attempted for this issue.
</details>
***
<p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p> | setuptools-70.0.0-py3-none-any.whl: 1 vulnerabilities (highest severity is: 8.8) | https://api.github.com/repos/opensearch-project/data-prepper/issues/5709/comments | 1 | 2025-05-21T15:44:16Z | 2025-06-21T00:08:55Z | https://github.com/opensearch-project/data-prepper/issues/5709 | 3,080,699,117 | 5,709 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper only supports a single source per pipeline, but the configuration will pass if multiple are provided. It appears that the first one is chosen.
**To Reproduce**
Steps to reproduce the behavior:
1) Create a pipeline like this:
```
pipeline:
workers: 2
source:
http:
otel_trace:
sink:
- stdout:
```
2) Run Data Prepper
See that the `http` source starts.
```
2025-05-20T19:17:13,762 [pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.source.loghttp.HTTPSource - Started http source on port 2021...
```
**Expected behavior**
Data Prepper should fail indicating that only one source may be defined.
**Screenshots**
N/A
**Environment (please complete the following information):**
Data Prepper 2.11.0
**Additional context**
N/A
| [BUG] Data Prepper allows creating pipelines with multiple sources | https://api.github.com/repos/opensearch-project/data-prepper/issues/5708/comments | 0 | 2025-05-20T19:17:52Z | 2025-05-20T19:54:53Z | https://github.com/opensearch-project/data-prepper/issues/5708 | 3,078,004,143 | 5,708 |
[
"opensearch-project",
"data-prepper"
] |
**Is your feature request related to a problem? Please describe.**
Currently, Data Prepper lacks a dedicated processor to iterate over array elements and apply transformations to each element individually. This limitation makes it challenging to process array fields efficiently, especially when dealing with nested data structures or when needing to apply consistent transformations across all elements in an array.
For example, when processing log data with arrays of strings or objects, users currently need to create complex workarounds or multiple processors to handle array transformations, which is neither efficient nor maintainable.
It would also be good to split each element into an individual event.
**Describe the solution you'd like**
Add a new "foreach" processor to Data Prepper that would:
1. Accept an array field path as input
2. Allow defining a set of sub-processors to be applied to each element
3. Support options for:
- Transforming elements in-place
- Creating new events for each element (array splitting)
- Handling null or missing values
- Controlling parallel processing of elements
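For illustration, the configuration might look something like this (a sketch only; the foreach processor and all of the options shown are hypothetical):

```yaml
processor:
  - foreach:
      # hypothetical options
      source: "/items"            # array field to iterate over
      split_into_events: false    # or true, to emit one event per element
      processors:                 # sub-processors applied to each element
        - uppercase_string:
            with_keys: ["name"]
```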
**Describe alternatives you've considered (Optional)**
Can't find any
**Additional context**
| Support foreach (looping) over input arrays to split event | https://api.github.com/repos/opensearch-project/data-prepper/issues/5707/comments | 4 | 2025-05-20T18:25:21Z | 2025-05-28T18:28:22Z | https://github.com/opensearch-project/data-prepper/issues/5707 | 3,077,881,124 | 5,707 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, Data Prepper lacks the ability to execute custom scripts during data processing, which limits the flexibility in data transformation and manipulation. Users need a way to perform complex transformations that may not be covered by existing processors, similar to OpenSearch's script processor functionality.
**Describe the solution you'd like**
Add a new "script" processor to Data Prepper that allows users to:
1. Execute inline scripts (written in a supported scripting language like Painless)
2. Reference and execute stored scripts
3. Modify or transform document fields during processing
4. Support script
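For illustration, the configuration could resemble the OpenSearch ingest-pipeline script processor (a sketch only; the processor and its options are hypothetical):

```yaml
processor:
  - script:
      # hypothetical options
      lang: "painless"
      source: |
        ctx.total = ctx.price * ctx.quantity;
```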
**Describe alternatives you've considered (Optional)**
Using the lambda processor.
**Additional context**
Ingest Pipelines support this: https://docs.opensearch.org/docs/latest/ingest-pipelines/processors/script/ | Support script processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5706/comments | 1 | 2025-05-20T18:21:04Z | 2025-05-20T19:50:01Z | https://github.com/opensearch-project/data-prepper/issues/5706 | 3,077,869,368 | 5,706 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The parse_json processor in Data Prepper fails to parse valid JSON arrays. When attempting to parse a JSON string that contains an array of objects, it throws an error: `org.opensearch.dataprepper.plugins.processor.parse.json.ParseJsonProcessor - An exception occurred due to invalid JSON while parsing [******] due to ******`.
**To Reproduce**
1. Set up a Data Prepper pipeline with DynamoDB as source
2. Configure parse_json processor to parse a field containing a JSON array string
3. Example input:
```json
{
"tags": "[{\"type\":\"foo1\",\"value\":\"bar1\"},{\"type\":\"foo2\",\"value\":\"bar2\"}]"
}
```
4. Run the pipeline
5. Observe the error
**Expected behavior**
The parse_json processor should successfully parse valid JSON strings, including arrays, into their corresponding JSON objects/arrays. In this case, it should parse the string into an array of objects.
**Screenshots**
N/A
**Environment**
- Source: DynamoDB
- Sink: OpenSearch Serverless Collection
- Component: Data Prepper parse_json processor
- Amazon OpenSearch Ingestion Pipeline
**Additional context**
- Current workaround involves using substitute_string processor to remove array brackets or wrapping the array in an object using add_entries
- The JSON strings are valid and can be parsed using standard JSON.parse() in JavaScript
- This appears to be a limitation in the JSON codec implementation
**Example Workaround**
```
processor:
- add_entries:
entries:
- key: "tagsJSON"
format: '{"tags":${tags}}'
- parse_json:
source: "tagsJSON"
destination: "parsedTags"
overwrite_if_destination_exists: true
- add_entries:
entries:
- key: "tags"
value_expression: "/parsedTags/tags"
overwrite_if_key_exists: true
``` | [BUG] parse_json not being able to parse JSON arrays | https://api.github.com/repos/opensearch-project/data-prepper/issues/5705/comments | 2 | 2025-05-20T18:14:36Z | 2025-05-21T20:44:33Z | https://github.com/opensearch-project/data-prepper/issues/5705 | 3,077,856,139 | 5,705 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Say we have a sink A defined like:
```
sink:
- opensearch:
document_root_key: info # My testing involved a document root key
include_keys: ["foo", "name"]
```
And sink B defined like:
```
sink:
- opensearch:
document_root_key: info # My testing involved a document root key
include_keys: ["name", "foo"]
```
The two have different outputs. Sink A results in documents retaining both `foo` and `name`, but Sink B results in documents only retaining `name`. Based on larger inputs, I can confirm that alphabetical order is required.
**To Reproduce**
It's easy to see if you modify the unit test [testJsonStringBuilderWithIncludeKeys](https://github.com/opensearch-project/data-prepper/blob/a08cd7b82f63d4d107bd9469620099fa45af0297/data-prepper-api/src/test/java/org/opensearch/dataprepper/model/event/JacksonEventTest.java#L971) by appending the following lines:
```
// Test order independence
List<String> includeKeys8a = List.of("foo", "name");
List<String> includeKeys8b = List.of("name", "foo");
final String expectedJsonString8 = "{\"name\":\"hello\",\"foo\":\"bar\"}";
// Succeeds
assertThat(event.jsonBuilder().rootKey("info").includeKeys(includeKeys8a).toJsonString(), equalTo(expectedJsonString8));
// Fails
assertThat(event.jsonBuilder().rootKey("info").includeKeys(includeKeys8b).toJsonString(), equalTo(expectedJsonString8));
```
**Of note**
So when the `include_keys` functionality was first added (MR: https://github.com/opensearch-project/data-prepper/pull/2989), the `SinkModel` had a method `preprocessingKeys` that sorted the keys. At some point that code went away but the `searchAndFilter` function within `JacksonEvent` still expects a sorted input.
Whether the fix should be in `JacksonEvent` or `SinkModel` is beyond my knowledge of this repo.
**Expected behavior**
The sink's `include_keys` feature should not depend on the order of its inputs.
**Environment (please complete the following information):**
1st environment:
- OS: unknown.
- Using AWS's OpenSearch serverless pipeline on May 14th, 2025.
2nd environment in which I ran the above unit test changes:
- OS: OSX Sequoia 15.5
- Using main (commit a08cd7b)
| [BUG] The sink's include_keys feature won't work unless inputs are alphabetically ordered | https://api.github.com/repos/opensearch-project/data-prepper/issues/5695/comments | 1 | 2025-05-14T22:21:34Z | 2025-05-20T19:40:20Z | https://github.com/opensearch-project/data-prepper/issues/5695 | 3,064,396,318 | 5,695 |
[
"opensearch-project",
"data-prepper"
] | **Description**
#5429 introduced an issue where the @single-thread processors were executed multiple times for an event. This was fixed as part of #5545 (Fixes an issue where processors are executing multiple times for an event), but additional tests need to be added to verify that an event is processed exactly once by any processor and that there is no duplication of event processing.
**Proposed Solution**
- Add integration tests
- Add end to end test coverage | [Test Coverage] Add Test coverage for validating that events are processed exactly once by a processor | https://api.github.com/repos/opensearch-project/data-prepper/issues/5690/comments | 0 | 2025-05-12T09:55:16Z | 2025-05-13T19:35:07Z | https://github.com/opensearch-project/data-prepper/issues/5690 | 3,056,381,507 | 5,690 |
[
"opensearch-project",
"data-prepper"
] | There are various Data Prepper expressions and operators for AND/NOT/OR, adding binaries, etc. It would be helpful if Data Prepper supported the mod operation as well.
Example format:
`<Integer | JSON pointer> % <Integer | JSON pointer>` | [ENH] Modulus operator | https://api.github.com/repos/opensearch-project/data-prepper/issues/5685/comments | 2 | 2025-05-07T19:47:52Z | 2025-05-30T20:46:09Z | https://github.com/opensearch-project/data-prepper/issues/5685 | 3,047,003,244 | 5,685 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The current default for the scroll timeout to process a batch is 1 minute and this is not configurable (https://github.com/opensearch-project/data-prepper/blob/b42225043c774e5b6e33988db2cf55d79192a0b2/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/source/opensearch/worker/ScrollWorker.java#L52).
If this times out due to the buffer being full or the circuit breaker blocking writes to the buffer, it will result in the exception below and the index will be reprocessed:
```
2025-05-05T16:10:34.168 [pool-12-thread-1] ERROR org.opensearch.dataprepper.plugins.source.opensearch.worker.ScrollWorker - Unknown exception while processing index 'my-index':
org.opensearch.client.opensearch._types.OpenSearchException: Request failed: [search_phase_execution_exception] all shards failed
```
**Describe the solution you'd like**
Let this be configurable under search_options
```
search_options:
search_context_type: "scroll"
scroll_time_per_batch: "15m" // increase default to 10m
```
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Make opensearch source scroll timeout configurable and increase default value | https://api.github.com/repos/opensearch-project/data-prepper/issues/5679/comments | 1 | 2025-05-05T16:55:59Z | 2025-05-21T18:13:31Z | https://github.com/opensearch-project/data-prepper/issues/5679 | 3,040,245,186 | 5,679 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper added the `@Experimental` annotation in 2.11. Presently, experimental plugins are either all enabled or all disabled.
I'd like to enable specific experimental plugins.
**Describe the solution you'd like**
Add this configuration:
```
experimental:
enabled:
source:
- rss
- neptune
processor:
- ml_inference
```
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
Original issue for experimental plugins: #2695
| Support enabling specific experimental plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/5675/comments | 0 | 2025-05-03T20:11:34Z | 2025-05-07T18:58:03Z | https://github.com/opensearch-project/data-prepper/issues/5675 | 3,037,592,426 | 5,675 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The opensearch source attempts to auto-detect whether the source is an OpenSearch or Elasticsearch cluster by calling the `GET /` API. Sometimes clusters have spoofed responses that impersonate something they are not, and this can lead to failures when the distribution version is not set.
**Describe the solution you'd like**
To handle this more gracefully, if distribution_version is not set, and the attempts to call the `GET /` API fail for both the opensearch and elasticsearch clients, the opensearch source should act as if the `distribution_version` is set to OpenSearch, and continue
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Assume distribution_version is set to opensearch for OpenSearch source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5673/comments | 2 | 2025-05-01T17:26:30Z | 2025-05-21T18:15:16Z | https://github.com/opensearch-project/data-prepper/issues/5673 | 3,034,326,417 | 5,673 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper currently lacks a native OTLP-compatible sink plugin for sending traces to AWS X-Ray or other OTLP-compliant backends.
**Describe the solution you'd like**
Add a new otlp sink plugin that accepts OpenTelemetry trace payloads (ExportTraceServiceRequest) and forwards them via HTTP to OTLP endpoints, with support for:
* AWS SigV4 signing (for sending to X-Ray)
* gzip compression
* Retry logic with backoff
* Configurable thresholds for batch size, event count, and flush timeout
The initial release supports exporting spans to AWS X-Ray. Future releases will support sending spans, metrics, and logs to any OTLP Protobuf-compatible endpoint.
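For illustration, a pipeline entry for the proposed sink might look like the following (a sketch only; the option names, endpoint, and role ARN are illustrative, not final):

```yaml
sink:
  - otlp:
      # hypothetical configuration
      endpoint: "https://xray.us-west-2.amazonaws.com/v1/traces"
      compression: gzip
      aws:
        region: "us-west-2"
        sts_role_arn: "arn:aws:iam::123456789012:role/osis-xray-export-role"
      threshold:
        event_count: 100
        maximum_size: 1mb
        flush_timeout: 5s
```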
**Describe alternatives you've considered (Optional)**
* Using the http sink to forward traces: This approach requires additional enhancements that are not natively supported, such as OTLP Protobuf encoding, AWS SigV4 signing, and retry handling. As a result, it increases configuration complexity and risks incorrect behavior under production loads.
**Additional context**
The implementation follows the Data Prepper plugin model and supports configuration via YAML. It is tested against high-throughput trace ingestion scenarios and shows strong performance characteristics under load.
| Add otlp sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/5663/comments | 1 | 2025-04-30T16:01:57Z | 2025-06-03T18:29:45Z | https://github.com/opensearch-project/data-prepper/issues/5663 | 3,031,748,317 | 5,663 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
```
Exception in thread "file-source" java.lang.NullPointerException: Cannot invoke "String.equals(Object)" because "nodeName" is null
at org.opensearch.dataprepper.model.codec.JsonDecoder.parse(JsonDecoder.java:75)
at org.opensearch.dataprepper.plugins.codec.json.JsonInputCodec.parse(JsonInputCodec.java:32)
at org.opensearch.dataprepper.plugins.source.file.FileSource$CodecFileStrategy.start(FileSource.java:175)
at org.opensearch.dataprepper.plugins.source.file.FileSource.lambda$start$0(FileSource.java:82)
at java.base/java.lang.Thread.run(Thread.java:840)
```
**To Reproduce**
This bug can be reproduced with the following pipeline config
```
version: "2"
test-pipeline:
source:
file:
codec:
json:
key_name: "key"
path: "/Users/qchea/Documents/test-config-std/test1.json"
sink:
- stdout:
```
where test1.json contains a JSON array:
```
[
{"key": "value"}
]
```
**Expected behavior**
If key_name is specified, a JSON array input should be skipped by the codec rather than causing a NullPointerException.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] JSON codec hits NPE with key_name specified when input is a JSON array | https://api.github.com/repos/opensearch-project/data-prepper/issues/5658/comments | 0 | 2025-04-25T21:58:45Z | 2025-04-26T01:02:13Z | https://github.com/opensearch-project/data-prepper/issues/5658 | 3,021,110,358 | 5,658 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Escape characters aren't trimmed from `regexPattern` in the antlr grammar. Data prepper requires that the pattern `^\w*$` is escaped. It produces the following error message when the pattern is left unescaped.
```
/data-prepper/tree/main/data-prepper-plugins/http-source#authentication-configurations
line 1:9 token recognition error at: '^'
line 1:10 token recognition error at: '\'
line 1:11 token recognition error at: 'w*'
line 1:13 token recognition error at: '$"'
line 1:8 mismatched input '"' expecting {JsonPointer, EscapedJsonPointer, String}
```
When escaping `\` and `$`, the parser no longer errors out, but the expression produced contains the escape sequences (`^\\w*\$`).
**To Reproduce**
Run data-prepper 2.11.0 with the following pipeline.yml
```yaml
# pipeline.yml
main:
source:
http:
processor:
- add_entries:
entries:
- add_when: /msg =~ "^\\w*\$" # Here's the problematic expression
key: "matched"
value: "true"
sink:
- stdout:
```
Send it JSON with a `msg` field.
**Expected behavior**
The following `msg` value should satisfy the `add_when` expression using the regex operator:
```json
{"msg":"word"}
```
**Environment (please complete the following information):**
- OS: [Debian Bookworm]
- Version [2.11]
**Additional context**
This value satisfies the regex used in the pipeline above.
```json
{"msg":"\\wwww$"}
```
Adding a print statement to this lambda outputs the RegEx pattern with escape characters included.
https://github.com/opensearch-project/data-prepper/blob/52a1e6d912a8e1265b3ae212507e6edf6992a15f/data-prepper-expression/src/main/java/org/opensearch/dataprepper/expression/OperatorConfiguration.java#L23
I'm not super familiar with antrl, but from my understanding you'd want to remove escape characters within the [listener](https://github.com/opensearch-project/data-prepper/blob/52a1e6d912a8e1265b3ae212507e6edf6992a15f/data-prepper-expression/src/main/java/org/opensearch/dataprepper/expression/ParseTreeEvaluatorListener.java). I'll spend a bit more time hacking away at the source, and submit a PR with whatever fix I find. | [BUG] Escape sequences aren't removed from regex | https://api.github.com/repos/opensearch-project/data-prepper/issues/5652/comments | 3 | 2025-04-24T21:28:13Z | 2025-05-14T20:54:05Z | https://github.com/opensearch-project/data-prepper/issues/5652 | 3,018,497,676 | 5,652 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It was found during testing that the `otel-logs-source` plugin does not support the `health_check_service` option with HTTP authentication (specifically, health checks issued with `curl` from a terminal). Additionally, the `otel-logs-source` plugin does not support the `unauthenticated_health_check` option.
In contrast, both `otel-metrics-source` and `otel-trace-source` sources support these features.
**Describe the solution you'd like**
Enhance the `otel-logs-source` plugin to support:
- The `health_check_service` option with HTTP authentication, so that users can check health via `curl` requests.
- The `unauthenticated_health_check` option, following the same implementation as `otel-metrics-source` and `otel-trace-source`.
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
1. `health_check_service` in `otel_logs_source`,
**Example** of pipeline.yaml:
```
test-pipeline:
source:
otel_logs_source:
ssl: false
port: 2021
unframed_requests: true
health_check_service: true
proto_reflection_service: true
sink:
- stdout:
```
Using grpcurl works:
```
grpcurl -plaintext localhost:2021 grpc.health.v1.Health/Check
{
"status": "SERVING"
}
```
Using an HTTP health check with `curl` fails:
```
curl -X GET http://localhost:2021/health
Status: 404
Description: Not Found
```
2. `unauthenticated_health_check` in `otel_logs_source`,
**Example** of pipeline.yaml:
```
test-pipeline:
source:
otel_logs_source:
ssl: false
port: 2021
unframed_requests: true
health_check_service: true
unauthenticated_health_check: true
sink:
- stdout:
```
this is the error detail:
`[main] ERROR org.opensearch.dataprepper.core.validation.LoggingPluginErrorsHandler - 1. test-pipeline.source.otel_logs_source: caused by: Parameter "unauthenticated_health_check" for plugin "otel_logs_source" does not exist.` | Support health_check_service and unauthenticatedHealthCheck options in otel-logs-source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5651/comments | 1 | 2025-04-24T18:45:47Z | 2025-05-06T20:43:06Z | https://github.com/opensearch-project/data-prepper/issues/5651 | 3,018,164,090 | 5,651 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.11.0
**BUILD NUMBER**: 92
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/14644185998
Required approvers: [sb2k16 chenqi0805 engechas san81 srikanthjg graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 14644185998: Release Data Prepper : 2.11.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5649/comments | 3 | 2025-04-24T15:22:05Z | 2025-04-24T16:05:25Z | https://github.com/opensearch-project/data-prepper/issues/5649 | 3,017,646,472 | 5,649 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
When a given event is a list of objects with some attributes, I would like to apply the `convert_entry_type` processor to a field in all of these objects. For example, if my event looks like the one below, I would like to convert all `key1` attributes using the `convert_entry_type` processor:
```
{
[
{ "key1": "value1", "key2": "value2"},
{ "key1": "value3", "key2": "value4"},
{ "key1": "value5", "key2": "value6"}
]
}
```
**Describe the solution you'd like**
Similar to https://github.com/opensearch-project/data-prepper/issues/2853 having an optional "iterate-on" functionality on convert_entry_type processor will help apply the conversion on multiple fields of a given event.
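A hypothetical configuration sketch of what this could look like (the `iterate_on` option name and shape are assumptions, not an existing Data Prepper feature; it assumes the list lives under a key such as `items`):
```yaml
processor:
  - convert_entry_type:
      iterate_on: "items"   # assumed option: apply the conversion to each element of the list
      key: "key1"           # field inside each list element
      type: "integer"
```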
**Describe alternatives you've considered (Optional)**
Support for providing Json Path instead of Json Pointer that points to a specific field.
**Additional context**
Add any other context or screenshots about the feature request here.
| Enhance convert_entry_type processor to convert multiple fields in an event | https://api.github.com/repos/opensearch-project/data-prepper/issues/5643/comments | 0 | 2025-04-22T17:50:22Z | 2025-04-22T19:49:31Z | https://github.com/opensearch-project/data-prepper/issues/5643 | 3,011,739,639 | 5,643 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Need a way to send messages to SQS from Data Prepper
**Describe the solution you'd like**
Support SQS as sink in Data Prepper. The sink should be able to batch Data Prepper events and send to SQS
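A hypothetical sketch of what such a sink configuration could look like (option names are assumptions; the 10-message and 256 KB limits come from SQS itself):
```yaml
sink:
  - sqs:
      queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
      aws:
        region: "us-east-1"
        sts_role_arn: "arn:aws:iam::123456789012:role/pipeline-role"
      threshold:
        event_count: 10        # SQS allows at most 10 messages per SendMessageBatch call
        maximum_size: "256kb"  # SQS maximum message size
        event_collect_timeout: PT5S
```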
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Add SQS sink to Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/5634/comments | 0 | 2025-04-21T19:05:18Z | 2025-04-29T21:59:36Z | https://github.com/opensearch-project/data-prepper/issues/5634 | 3,009,093,915 | 5,634 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
I have a use case where I need to create/update multiple documents in OpenSearch for a single DDB item. The item basically contains a map, out of which only some of the entries generally get updated. Right now I need to generate all the documents for the item irrespective of which particular map entry got updated. This is pretty inefficient and results in multiple unnecessary update calls to OpenSearch.
**Describe the solution you'd like**
In case the NEW_AND_OLD_IMAGES stream view is enabled in DDB, there should be an option in OSI to extract both images separately. This will allow me to compare the new and old images, find the impacted entries, and only update those in OpenSearch. This feature should allow us to extract both images in the aws_lambda processor.
| Support to compare new and old images for DDB streams | https://api.github.com/repos/opensearch-project/data-prepper/issues/5622/comments | 1 | 2025-04-17T18:09:22Z | 2025-04-22T19:55:57Z | https://github.com/opensearch-project/data-prepper/issues/5622 | 3,003,148,593 | 5,622 |
[
"opensearch-project",
"data-prepper"
] | ## Describe the bug
When ingesting OTel traces using Data Prepper into OpenSearch, we’re seeing document ingestion failures with the following error messages:
```
Document failed to write to OpenSearch with error code 400. Configure a DLQ to save failed documents. Error: can't merge a non object mapping [attributes.upstream_cluster] with an object mapping
Document failed to write to OpenSearch with error code 400. Configure a DLQ to save failed documents. Error: object mapping for [attributes.user_agent] tried to parse field [user_agent] as object, but found a concrete value
```
These errors originate from OpenSearch's field mapping behavior when dynamic keys in attributes contain dots (.), such as:
```
{
"key": "upstream_cluster",
"value": "frontend"
},
{
"key": "upstream_cluster.name",
"value": "frontend"
}
```
OpenSearch interprets dotted keys like upstream_cluster.name as nested fields, which causes a mapping conflict when the base key upstream_cluster is already defined as a concrete value.
## Root Cause
The current mapping template in Data Prepper sets attributes.* as dynamic fields of type keyword, long, or double using match_mapping_type without protecting against field path collisions caused by dot notation.
The issue arises because OpenSearch:
* Treats attributes.upstream_cluster as a concrete value (string, long, etc.)
* Then encounters attributes.upstream_cluster.name, which is interpreted as a nested object, resulting in: `can't merge a non object mapping with an object mapping`
* Similar issues happen when a field like attributes.user_agent appears as both a string and an object.
## Current Mapping Snippet
link to mappings: https://github.com/opensearch-project/data-prepper/blob/bc7bfecf1444b4c82d49423680d1d550b068f804/data-prepper-plugins/opensearch/src/main/resources/otel-v1-apm-span-index-standard-template.json#L8-L93
This setup assumes flat string (or other relevant data type) fields under attributes, but does not prevent dotted keys from being treated as object paths.
## Suggested fix
As the goal is to store attributes as a map but support dotted keys, using the [flat_object field](https://docs.opensearch.org/docs/latest/field-types/supported-field-types/flat-object/) type would help:
```
"dynamic_templates": [
{
"resource_attributes_map": {
"mapping": {
"type": "flat_object"
},
"path_match": "resource.attributes.*"
}
},
{
"attributes_map": {
"mapping": {
"type": "flat_object"
},
"path_match": "attributes.*"
}
}
],
```
## Setup
### Data prepper pipeline config
version: 2.11-snapshot from latest main
```
entry-pipeline:
delay: "100"
source:
otel_trace_source:
output_format: otel
ssl: false
processor:
- trace_peer_forwarder:
sink:
- pipeline:
name: "raw-pipeline"
- pipeline:
name: "service-map-pipeline"
raw-pipeline:
source:
pipeline:
name: "entry-pipeline"
processor:
- otel_traces:
sink:
- opensearch:
hosts: OPENSEARCH_HOSTS
username: OPENSEARCH_USER
password: OPENSEARCH_PASSWORD
insecure: true
index_type: trace-analytics-plain-raw
service-map-pipeline:
delay: "100"
source:
pipeline:
name: "entry-pipeline"
processor:
- service_map_stateful:
sink:
- opensearch:
hosts: OPENSEARCH_HOSTS
username: OPENSEARCH_USER
password: OPENSEARCH_PASSWORD
insecure: true
index_type: trace-analytics-service-map
```
| [BUG] Otel Trace Source - Field Mapping Conflict for OTel Attributes with Dotted Keys | https://api.github.com/repos/opensearch-project/data-prepper/issues/5616/comments | 3 | 2025-04-16T21:32:41Z | 2025-04-30T05:58:05Z | https://github.com/opensearch-project/data-prepper/issues/5616 | 3,000,794,017 | 5,616 |
[
"opensearch-project",
"data-prepper"
] | Please approve or deny the release of Data Prepper.
**VERSION**: 2.10.3
**BUILD NUMBER**: 91
**RELEASE MAJOR TAG**: true
**RELEASE LATEST TAG**: true
Workflow is pending manual review.
URL: https://api.github.com/opensearch-project/data-prepper/actions/runs/14501613808
Required approvers: [chenqi0805 engechas graytaylor0 dinujoh kkondaka KarstenSchnitter dlvenable oeyh]
Respond "approved", "approve", "lgtm", "yes" to continue workflow or "denied", "deny", "no" to cancel. | Manual approval required for workflow run 14501613808: Release Data Prepper : 2.10.3 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5614/comments | 3 | 2025-04-16T20:30:59Z | 2025-04-16T20:49:54Z | https://github.com/opensearch-project/data-prepper/issues/5614 | 3,000,675,516 | 5,614 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
OpenSearch source creates index partitions in source coordination store, and blindly grabs index partitions to process. So removing indexes from the configuration will result in these indexes being processed still, because there is no check for an index on whether it matches one of the patterns in the actual configuration
**Expected behavior**
When an index partition is acquired by OpenSearch source, cross reference the index name with the current configuration to see if it should be processed. If it should not be processed, delete the partition from the source coordination store.
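A rough sketch of the kind of check that could run when an index partition is acquired (class and method names are illustrative, not the actual source code):
```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

class IndexPartitionFilter {
    private final List<Pattern> includePatterns;

    IndexPartitionFilter(final List<String> configuredIncludePatterns) {
        this.includePatterns = configuredIncludePatterns.stream()
                .map(Pattern::compile)
                .collect(Collectors.toList());
    }

    // Returns true only if the acquired partition's index still matches the current configuration
    boolean shouldProcess(final String indexName) {
        return includePatterns.stream().anyMatch(pattern -> pattern.matcher(indexName).matches());
    }
}
```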
**Additional context**
Add any other context about the problem here.
| [BUG] OpenSearch source will continue processing indexes after these are removed from the configuration when scheduling happens | https://api.github.com/repos/opensearch-project/data-prepper/issues/5605/comments | 0 | 2025-04-15T19:03:52Z | 2025-04-15T19:36:48Z | https://github.com/opensearch-project/data-prepper/issues/5605 | 2,997,333,416 | 5,605 |
[
"opensearch-project",
"data-prepper"
] | ### Overview
We need to enhance the current BlockingBuffer implementation to have a more dynamic capacity or buffer size that adapts based on available system memory and also keeps track of the average size of events
### Current Limitations
The current BlockingBuffer implementation uses a fixed capacity defined at initialization time. This static approach can lead to:
• Inefficient memory usage when processing events of varying sizes
• Potential out-of-memory situations during high load
#### Dynamic Capacity Management
• Monitor system memory usage and adjust capacity or buffer size accordingly.
• Implement configurable thresholds for dynamic sizing decisions (a rough sketch follows below)
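A rough sketch of a memory-aware capacity check, assuming a single configurable high watermark (all names and thresholds here are illustrative):
```java
class DynamicCapacityAdvisor {
    private static final double HIGH_WATERMARK = 0.85; // assumed configurable threshold

    // Suggests a reduced capacity when heap usage crosses the watermark
    int adviseCapacity(final int configuredCapacity) {
        final Runtime runtime = Runtime.getRuntime();
        final long usedBytes = runtime.totalMemory() - runtime.freeMemory();
        final double usedFraction = (double) usedBytes / runtime.maxMemory();
        if (usedFraction > HIGH_WATERMARK) {
            return Math.max(1, configuredCapacity / 2); // back off while memory is tight
        }
        return configuredCapacity;
    }
}
```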
| Enhance Buffer to use Dynamic Buffer Size | https://api.github.com/repos/opensearch-project/data-prepper/issues/5601/comments | 1 | 2025-04-14T21:04:16Z | 2025-05-30T21:23:59Z | https://github.com/opensearch-project/data-prepper/issues/5601 | 2,994,208,415 | 5,601 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `date` processor does not always produce the correct format string as output. I can see this happens when the date is from a previous year. But, I have not yet looked to see if other situations could cause this.
**To Reproduce**
1) Create a pipeline with a date processor similar to the following:
```
- date:
match:
- key: "calculated_at"
patterns: [ "YYYY-MM-dd" ]
destination: "calculated_at"
output_format: "yyyy-MM-ddXXX"
source_timezone: "UTC"
destination_timezone: "UTC"
locale: "en_US"
```
2) Run the pipeline.
3) Ingest data with the current year and last year:
```
curl http://localhost:2021/log/ingest -X POST -H 'Content-Type: application/json' -d '[{"calculated_at": "2024-12-31"},{"calculated_at": "2025-01-01"}]'
```
See the incorrect results:
```
data-prepper | {"id":"year2024","calculated_at":"2024-12-31","calculated_at_text":"2024-12-31"}
data-prepper | {"id":"year2025","calculated_at":"2025-01-01Z","calculated_at_text":"2025-01-01Z"}
```
**Expected behavior**
The date from 2024 should include the `Z` in the format.
**Screenshots**
N/A
**Environment (please complete the following information):**
Data Prepper 2.10.2
**Additional context**
This also prevents ingestion when using an `upsert` and a document Id. Interestingly, OpenSearch will accept the data when using `insert`.
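One thing that may be worth ruling out (an observation, not a confirmed root cause): the match pattern uses uppercase `YYYY`, which Java's `DateTimeFormatter` treats as the week-based year, and 2024-12-31 falls in week-based year 2025. A small illustration:
```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class WeekBasedYearDemo {
    public static void main(String[] args) {
        final LocalDate lastDayOf2024 = LocalDate.of(2024, 12, 31);
        // yyyy is the year-of-era, YYYY is the week-based year
        System.out.println(lastDayOf2024.format(DateTimeFormatter.ofPattern("yyyy-MM-dd", Locale.US))); // 2024-12-31
        System.out.println(lastDayOf2024.format(DateTimeFormatter.ofPattern("YYYY-MM-dd", Locale.US))); // 2025-12-31
    }
}
```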
Full pipeline sample:
```
date-test:
delay: 10
source:
http:
processor:
# curl http://localhost:2021/log/ingest -X POST -H 'Content-Type: application/json' -d '[{"calculated_at": "2024-12-31"},{"calculated_at": "2025-01-01"}]'
- date:
match:
- key: "calculated_at"
patterns: [ "YYYY-MM-dd" ]
destination: "calculated_at"
output_format: "yyyy-MM-ddXXX"
source_timezone: "UTC"
destination_timezone: "UTC"
locale: "en_US"
# - add_entries:
# entries:
# - key: calculated_at
# format: '${calculated_at}Z'
# overwrite_if_key_exists: true
# add_when: length(/calculated_at) == 10
- copy_values:
entries:
- from_key: calculated_at
to_key: calculated_at_text
sink:
- opensearch:
hosts: [ "https://opensearch:9200" ]
insecure: true
username: admin
password: myStrongPassword123!
document_id: "${/id}"
action: upsert
index: test_date_with_timezone
flush_timeout: -1
template_type: index-template
template_content: >
{
"template" : {
"mappings" : {
"properties" : {
"calculated_at" : {
"type" : "date",
"format": "yyyy-MM-ddXXX"
},
"calculated_at_text" : {
"type" : "text"
}
}
}
}
}
- stdout:
```
| [BUG] Date processor does not always include timezone | https://api.github.com/repos/opensearch-project/data-prepper/issues/5600/comments | 0 | 2025-04-14T19:30:48Z | 2025-04-15T19:35:43Z | https://github.com/opensearch-project/data-prepper/issues/5600 | 2,994,017,913 | 5,600 |
[
"opensearch-project",
"data-prepper"
] | ## Overview
Today, Data Prepper has three different sources for logs, traces and metrics. These three different sources need individual pipelines (source, buffer, processor [optional] and sink). Maintaining different endpoints/ports/pipelines can be difficult. For ease of initial setup and use, we propose an OTel telemetry source in Data Prepper that accepts logs, traces and metrics through a single OTLP source, separated by path suffixes as per OTel standards.
## Background
### OTel logs source
Supports OTel logs to be ingested via exposed OTLP (gRPC) or HTTP endpoints. This source decodes the OTel logs format into a flattened structure that can be ingested in OpenSearch and other compatible sinks. Defaults to the `/v1/logs` http path
### Otel traces source
Supports OTel traces to be ingested via exposed OTLP (gRPC) or HTTP endpoints. This source consumes OTel trace data and converts it into a format compatible with OpenSearch and other analysis platforms. The source decodes spans, span events, and other trace data for later enrichment and analysis. Defaults to /v1/traces http path.
### Otel metrics source
Supports OTel metrics to be ingested via exposed OTLP (gRPC) or HTTP endpoints. This source accepts metrics data including counters, gauges, histograms, and summary statistics. It transforms the metrics into formats suitable for aggregation, visualization, and long-term storage in OpenSearch and other time-series databases. Defaults to /v1/metrics http path.
### Existing Otel source signals setup
Below is the current setup that users use to send Otel based logs, traces and metrics to OpenSearch indexes. The three signals land onto separate sources. The sources are differentiated by ports on the same endpoint with different path suffixes based on OTel exporter SDK as mentioned [here](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/). Each source ends up in separate sinks
```
otel-logs-pipeline:
source:
otel_logs_source:
ssl: false
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
insecure: false
index_type: custom
index: ss4o_logs-%{yyyy.MM.dd}
bulk_size: 4
otel-traces-pipeline:
delay: "100"
source:
otel_trace_source:
ssl: false
sink:
- pipeline:
name: "traces-raw-pipeline"
- pipeline:
name: "service-map-pipeline"
traces-raw-pipeline:
source:
pipeline:
name: "otel-traces-pipeline"
processor:
- otel_trace_raw:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
insecure: false
index_type: trace-analytics-raw
service-map-pipeline:
delay: "100"
source:
pipeline:
name: "otel-traces-pipeline"
processor:
- service_map_stateful:
sink:
- opensearch:
hosts: ["https://opensearch:9200"]
insecure: false
index_type: trace-analytics-service-map
otel-metrics-pipeline:
source:
otel_metrics_source:
processor:
- otel_metrics:
calculate_histogram_buckets: true
calculate_exponential_histogram_buckets: true
exponential_histogram_max_allowed_scale: 10
flatten_attributes: false
sink:
- opensearch:
hosts: ["https://opensearch:9200"]
insecure: false
index_type: custom
index: ss4o_metrics-otel-%{yyyy.MM.dd}
bulk_size: 4
```
## Proposal unified source for Logs, Traces and Metrics
We want to propose a single unified source for all OTel telemetry signals, starting with logs, traces and metrics, maintaining OTel SDK standards. The three signals would be separated based on endpoint path suffixes and would share the same host and port for Data Prepper. The routing to downstream pipelines would be based on metadata routing.
This source would also have support for gRPC and HTTP endpoints. The source would have a similar auth model and config options to the existing sources.
```
# Main telemetry pipeline with unified source
otel-telemetry-pipeline:
source:
otel_telemetry_source:
ssl: false
route:
- logs: "getMetadata(\"eventType\") == \"LOG\""
- traces: "getMetadata(\"eventType\") == \"TRACE\""
- metrics: "getMetadata(\"eventType\") == \"METRIC\""
sink:
- pipeline:
name: "logs-pipeline"
routes:
- "logs"
- pipeline:
name: "traces-pipeline"
routes:
- "traces"
- pipeline:
name: "metrics-pipeline"
routes:
- "metrics"
# Logs pipeline
logs-pipeline:
source:
pipeline:
name: "otel-telemetry-pipeline"
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
insecure: false
index_type: custom
index: ss4o_logs-%{yyyy.MM.dd}
bulk_size: 4
# Traces pipeline
traces-pipeline:
delay: "100"
source:
pipeline:
name: "otel-telemetry-pipeline"
sink:
- pipeline:
name: "traces-raw-pipeline"
- pipeline:
name: "service-map-pipeline"
# Traces raw processing pipeline
traces-raw-pipeline:
source:
pipeline:
name: "traces-pipeline"
processor:
- otel_trace_raw:
sink:
- opensearch:
hosts: ["http://opensearch:9200"]
insecure: false
index_type: trace-analytics-raw
# Service map processing pipeline
service-map-pipeline:
delay: "100"
source:
pipeline:
name: "traces-pipeline"
processor:
- service_map_stateful:
sink:
- opensearch:
hosts: ["https://opensearch:9200"]
insecure: false
index_type: trace-analytics-service-map
# Metrics pipeline
metrics-pipeline:
source:
pipeline:
name: "otel-telemetry-pipeline"
processor:
- otel_metrics:
calculate_histogram_buckets: true
calculate_exponential_histogram_buckets: true
exponential_histogram_max_allowed_scale: 10
flatten_attributes: false
sink:
- opensearch:
hosts: ["https://opensearch:9200"]
insecure: false
index_type: custom
index: ss4o_metrics-otel-%{yyyy.MM.dd}
bulk_size: 4
```
### Benefits of this new approach
1. **Simplified Configuration -** Data Prepper uses a single endpoint with standard OTel paths, making the config file simpler and easier to manage.
2. **Easier Setup -** Users can get started with fewer pipelines and a smoother setup process.
3. **Consistent with Industry Standards -** It follows the OTel spec, making Data Prepper plug-and-play with other OpenTelemetry tools.
4. **Reduced Resource Usage -** One server handles all signals, saving memory and simplifying deployment.
5. **Better Integration -** All telemetry types work together in one flexible pipeline model.
6. **Future Extensibility -** The design makes it easy to add new telemetry types down the line.
### Current PoC for Otel telemetry source
* https://github.com/ps48/data-prepper/tree/otel-telemetry-source
| [RFC] OTel telemetry unified source | https://api.github.com/repos/opensearch-project/data-prepper/issues/5596/comments | 1 | 2025-04-11T19:53:40Z | 2025-04-21T17:00:14Z | https://github.com/opensearch-project/data-prepper/issues/5596 | 2,989,617,255 | 5,596 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
We have seen the following error in the `otel_logs` source:
```
2025-03-10T17:43:22.030 [armeria-common-worker-epoll-3-4] WARN com.linecorp.armeria.server.DefaultUnloggedExceptionsReporter - Observed 1 exception(s) that didn't reach a LoggingService in the last 10000ms(10000000000ns). Please consider adding a LoggingService as the outermost decorator to get detailed error logs. One of the thrown exceptions:
com.linecorp.armeria.common.ContentTooLargeException: maxContentLength: 1048576
...
at java.base/java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: io.netty.handler.codec.compression.DecompressionException: Decompression buffer has reached maximum size: 1048576
```
This is not resulting in a metric for `requestsTooLarge`.
**To Reproduce**
N/A
**Expected behavior**
1. There should be no stack trace exception. Maybe a single line.
2. The metric should report that there was a request too large (a rough sketch follows below).
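A rough sketch of counting these failures instead of surfacing a full stack trace (the exception type is taken from the log above; the counter name and wiring are assumptions):
```java
import com.linecorp.armeria.common.ContentTooLargeException;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Metrics;

class RequestTooLargeReporter {
    private final Counter requestsTooLarge = Metrics.counter("requestsTooLargeErrors");

    // Called with the failure cause instead of letting it bubble up as an unlogged exception
    void report(final Throwable failure) {
        if (failure instanceof ContentTooLargeException) {
            requestsTooLarge.increment();
        }
    }
}
```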
**Screenshots**
N/A
**Environment (please complete the following information):**
N/A
**Additional context**
N/A
| [BUG] Some request too long errors do not produce metrics. | https://api.github.com/repos/opensearch-project/data-prepper/issues/5594/comments | 1 | 2025-04-10T17:55:30Z | 2025-04-15T19:34:51Z | https://github.com/opensearch-project/data-prepper/issues/5594 | 2,986,452,980 | 5,594 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Key-value processor does not work as intended for this sample input
```
{
"log_message": "level=info key=test-key msg=\"upload test\" other_field=12345"
}
```
The msg is cut off with this config
```
- key_value:
source: "log_message"
field_split_characters: null
field_delimiter_regex: "(\\s)"
value_split_characters: "="
destination: null
include_keys: ["level", "msg", "key", "other_field"]
```
and it creates this output
```
{
"log_message": "level=info key=test-key msg=\"upload test\" other_field=12345",
"msg": "\"upload"
"level": "info"
}
```
**Expected behavior**
Output should be
```
{
"log_message": "level=info key=test-key msg=\"upload test\" other_field=12345",
"msg": "\"upload test\"",
"level": "info",
"other_field": 12345
}
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Also tried using value grouping but it cuts off after the group
| [BUG] Key-value processor has incorrect behavior with spaces or quotes | https://api.github.com/repos/opensearch-project/data-prepper/issues/5584/comments | 1 | 2025-04-03T21:53:16Z | 2025-04-22T20:02:27Z | https://github.com/opensearch-project/data-prepper/issues/5584 | 2,970,762,783 | 5,584 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, data prepper either uses a certificate or hard coded user credentials to authorize writing data to OpenSearch sink. In this [feature branch](https://github.com/opensearch-project/security/tree/feature/api-tokens) I have added API tokens as an additional way to support authentication and authorization. I would like to be able to use those newly added api tokens as an additional way to create connections to the OpenSearch sink. API tokens will be defined with an immutable list of cluster permissions and index permissions, which can prevent elevation of privileges if the user is mapped to a role, which is subsequently changed, and is an alternative to passing certificates to authorize requests to OpenSearch.
**Describe the solution you'd like**
I would like to add/see api token auth as an additional way to authorize data writing to OpenSearch
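A hypothetical sketch of what the sink configuration could look like with such an option (the `api_token` option name and shape are assumptions, not an existing setting):
```yaml
sink:
  - opensearch:
      hosts: ["https://opensearch:9200"]
      index: "my-index"
      authentication:
        api_token: "${{aws_secrets:opensearch-api-token}}"   # proposed option
```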
**Describe alternatives you've considered (Optional)**
None
**Additional context**
Add any other context or screenshots about the feature request here.
Issue for creating api token auth method in security plugin: https://github.com/opensearch-project/security/issues/1504 | Add API tokens as an Authc/z method for OpenSearch Sink | https://api.github.com/repos/opensearch-project/data-prepper/issues/5549/comments | 5 | 2025-03-26T20:41:24Z | 2025-05-06T20:05:00Z | https://github.com/opensearch-project/data-prepper/issues/5549 | 2,950,746,661 | 5,549 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The `dissect` processor should be thread-safe. We are using `@SingleThread` on it as a short-term mitigation. But, this is not ideal as it requires additional objects.
**Describe the solution you'd like**
Make `dissect` thread-safe and remove `@SingleThread`.
**Additional context**
In this PR, we added the `@SingleThread` annotation: #5463
| Make dissect processor thread-safe | https://api.github.com/repos/opensearch-project/data-prepper/issues/5546/comments | 2 | 2025-03-25T17:54:02Z | 2025-03-27T17:34:16Z | https://github.com/opensearch-project/data-prepper/issues/5546 | 2,947,332,468 | 5,546 |
[
"opensearch-project",
"data-prepper"
] | ## Problem
When using the AWS Lambda processor at scale, we're experiencing out-of-memory (OOM) errors. The processor appears to be holding onto a large number of records during processing, which
leads to memory exhaustion.
## Proposed Solution
Leverage the existing heap-based circuit breaker in Data Prepper to pause Lambda invocations when memory usage is high. This will prevent the processor from continuing to send requests to
Lambda when the system is under memory pressure, allowing garbage collection to reclaim memory before resuming operations. | Implement circuit breaker support in AWS Lambda processor to prevent OOM errors | https://api.github.com/repos/opensearch-project/data-prepper/issues/5536/comments | 0 | 2025-03-20T17:24:24Z | 2025-03-25T17:47:52Z | https://github.com/opensearch-project/data-prepper/issues/5536 | 2,936,184,279 | 5,536 |
[
"opensearch-project",
"data-prepper"
] | It would be nice to have an option for otel_logs_source to allow an unauthenticated health check (similar to what already exists for otel_trace_source).
Currently, once authentication is enabled on the Data Prepper otel_logs_source ports, the health check automatically requires authentication as well.
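A sketch of the desired configuration, assuming the option mirrors the existing `unauthenticated_health_check` on `otel_trace_source` (the option does not exist on `otel_logs_source` today):
```yaml
source:
  otel_logs_source:
    ssl: false
    authentication:
      http_basic:
        username: "admin"
        password: "example-password"
    health_check_service: true
    unauthenticated_health_check: true   # proposed option
```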
| otel_logs_source unauthenticated health check | https://api.github.com/repos/opensearch-project/data-prepper/issues/5534/comments | 2 | 2025-03-19T23:17:41Z | 2025-03-25T22:24:08Z | https://github.com/opensearch-project/data-prepper/issues/5534 | 2,933,433,835 | 5,534 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When S3 scans multiple buckets, it doesn't load the newly added S3DataSelection config parameter correctly at the bucket level.
In this case, the first bucket should be scanned only for metadata_only, and the second bucket scanned only for data_only.
~~~
s3:
codec:
ndjson:
compression: none
aws:
region: "us-east-1"
default_bucket_owner: 802041417063
scan:
scheduling:
interval: PT6M
buckets:
- bucket:
name: "offlinebatch"
data_selection: metadata_only
filter:
include_prefix:
- sagemaker/sagemaker_djl_batch_input
- bucket:
name: "offlinebatch"
data_selection: data_only
filter:
include_prefix:
- sagemaker/output/
~~~
But in reality, from my testing the S3Scan scans the data content for the first bucket too. So it becomes something similar to this
~~~
buckets:
- bucket:
name: "offlinebatch"
data_selection: data_only
filter:
include_prefix:
- sagemaker/sagemaker_djl_batch_input
- sagemaker/output/
~~~
So overall this is not the intention of the newly added "data_selection" config, which should be applied to each bucket. I think there is a bug somewhere in the pipeline creation for this multiple-bucket source.
**To Reproduce**
Steps to reproduce the behavior:
1. Use the above pipeline source of S3 scanning multiple buckets with different "data_selection"
2. Run the pipeline
3. Check the pipeline results
4. You will see that the content of the first bucket in the input folder is flowing through the pipeline processors which shouldn't happen.
**Expected behavior**
Each S3 bucket with a different "data_selection" value should be scanned based on that configured value.
**Screenshots**
Below is the full pipeline for reference.
~~~
ml-batch-job-pipeline:
source:
s3:
codec:
ndjson:
compression: none
aws:
region: "us-east-1"
default_bucket_owner: <your account>
scan:
scheduling:
interval: PT6M
buckets:
- bucket:
name: "offlinebatch"
data_selection: metadata_only
filter:
include_prefix:
- sagemaker/sagemaker_djl_batch_input
- bucket:
name: "offlinebatch"
data_selection: data_only
filter:
include_prefix:
- sagemaker/output/
buffer:
bounded_blocking:
buffer_size: 2048 # max number of records the buffer accepts
batch_size: 512 # max number of records the buffer drains after each read
processor:
- ml:
host: "<your host url>"
aws_sigv4: true
action_type: "batch_predict"
service_name: "sagemaker"
model_id: "<your model id>"
output_path: "s3://offlinebatch/sagemaker/output"
aws:
region: "us-east-1"
ml_when: /bucket == "offlinebatch"
- copy_values:
entries:
- to_key: chapter
from_key: /content/0
- to_key: title
from_key: /content/1
- to_key: chapter_embedding
from_key: /SageMakerOutput/0
- to_key: title_embedding
from_key: /SageMakerOutput/1
- delete_entries:
with_keys: [content, SageMakerOutput]
route:
- ml-ingest-route: "/chapter != null and /title != null"
sink:
- stdout:
- opensearch:
hosts: ["<your host url>"]
aws_sigv4: true
index: "test-nlp-index"
routes: [ml-ingest-route]
username: "<your user name>"
password: "<your password>"
~~~
**Environment (please complete the following information):**
- OS: Mac OS latest
- Version: Data Prepper running from main branch
**Additional context**
Add any other context about the problem here.
| [BUG] S3 Scan with multiple buckets does not read the S3DataSelection correctly | https://api.github.com/repos/opensearch-project/data-prepper/issues/5531/comments | 0 | 2025-03-18T20:12:33Z | 2025-03-21T16:53:29Z | https://github.com/opensearch-project/data-prepper/issues/5531 | 2,929,740,057 | 5,531 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, individual plugins have an sts_header_overrides option, and this must be duplicated across all plugins
**Describe the solution you'd like**
Support sts_header_overrides in the default role configuration in the data-prepper-config.yaml
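A sketch of what this could look like (exact existing key names may differ; `sts_header_overrides` under the default AWS extension settings is the proposed addition):
```yaml
# data-prepper-config.yaml
extensions:
  aws:
    default_role: "arn:aws:iam::123456789012:role/pipeline-role"
    sts_header_overrides:                  # proposed addition
      x-my-custom-header: "example-value"
```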
**Additional context**
Add any other context or screenshots about the feature request here.
| Support sts_header_overrides in default role configuration in data-prepper-config.yaml | https://api.github.com/repos/opensearch-project/data-prepper/issues/5530/comments | 4 | 2025-03-18T18:53:42Z | 2025-05-29T15:54:17Z | https://github.com/opensearch-project/data-prepper/issues/5530 | 2,929,563,803 | 5,530 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user with a field in my Events that is a List, I would like to be able to turn that list into a subList
```
my_list: [ 0, 1, 2, 3, 4, 5, 6]
```
to
```
my_list: [ 4, 5, 6]
```
or
```
my_list: [ 0, 1, 2 ]
```
or
```
my_list: [ 2, 3, 4]
```
**Describe the solution you'd like**
I would like a new function support in Data Prepper expressions to convert to a subList
```
subList(<key>, <start_index, inclusive>, <end_index, exclusive>)
```
An end index value of -1, could signal to extract from start_index to the end of the list.
This function could then be used in the `add_entries` processor like this
```
add_entries:
entries:
- key: "my_list"
value_expression: '/subList(/my_list, 0, 2)'
```
**Describe alternatives you've considered (Optional)**
Truncate processor could support this instead
```
- truncate:
entries:
- source_keys: ["my_list"]
length: 3
start_at: 0
```
**Additional context**
Add any other context or screenshots about the feature request here.
| Support extracting subLists from a list | https://api.github.com/repos/opensearch-project/data-prepper/issues/5529/comments | 3 | 2025-03-18T16:53:14Z | 2025-04-08T06:02:36Z | https://github.com/opensearch-project/data-prepper/issues/5529 | 2,929,196,441 | 5,529 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I'm encountering a validation error when starting a new ingestion pipeline that uses the same configuration as a previously successful pipeline. One of my pipelines has been running successfully since February 26th, but a new pipeline using the identical action configuration fails with the following error:
`2025-03-14T13:41:50.091 [main] ERROR org.opensearch.dataprepper.core.validation.LoggingPluginErrorsHandler - 1. my-pipeline.sink.opensearch: caused by: HV000030: No validator could be found for constraint 'jakarta.validation.constraints.Size' validating type 'org.opensearch.dataprepper.model.opensearch.OpenSearchBulkActions'. Check configuration for 'actions[0].type'
`
**To Reproduce**
Steps to reproduce the behavior:
1. Use the following actions configuration in your ingestion pipeline (OpenSearch sink):
```
"actions": [
{
"type": "delete",
"when": "/operation == \"delete\""
},
{
"type": "index"
}
],
```
2. Start the pipeline and observe the validation error during startup in CloudWatch.
**Expected behavior**
The pipeline should start successfully.
**Environment (please complete the following information):**
* OS: AWS OpenSearch Service (managed environment)
* Version: 7.10.2
**Additional context**
* The enum OpenSearchBulkActions is defined with the following values: "create", "upsert", "update", "delete", and "index".
* The error seems to suggest that the @Size validation constraint is being applied to a field of type OpenSearchBulkActions (an enum), which isn't supported since @Size is intended for Strings, collections, arrays, or maps.
* Removing the actions section (or the "delete" action) from the configuration allows the pipeline to start without errors, which further indicates that the issue might be related to how the delete action is being validated.
**Impact:**
This issue prevents the deployment of new pipelines that rely on the "delete" action with a conditional statement, forcing us to work around the problem by modifying the pipeline configuration.
Please let me know if you need further logs or configuration details to help reproduce and investigate this issue.
| [BUG] Validation error with @Size constraint on OpenSearchBulkActions in new ingestion pipelines | https://api.github.com/repos/opensearch-project/data-prepper/issues/5526/comments | 0 | 2025-03-18T08:09:46Z | 2025-03-18T16:07:48Z | https://github.com/opensearch-project/data-prepper/issues/5526 | 2,927,551,259 | 5,526 |
[
"opensearch-project",
"data-prepper"
] | Fix flaky lambda plugin test: https://github.com/opensearch-project/data-prepper/actions/runs/13704305804/job/38325764563 | Fix Flaky Lambda test | https://api.github.com/repos/opensearch-project/data-prepper/issues/5522/comments | 0 | 2025-03-14T22:59:05Z | 2025-03-15T03:32:03Z | https://github.com/opensearch-project/data-prepper/issues/5522 | 2,921,448,037 | 5,522 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
As a user of the OpenSearch sink using the create and delete actions, I occasionally get 404 errors for document not found for the delete action, even though it came after the create. This results in an extra document in OpenSearch, and the delete event getting sent to the DLQ. This is because the bulk size batching logic results in the create and delete being in the same bulk request, which means the delete may be processed by OpenSearch before the create. While I considered lowering the flush_timeout, with large quantities of data, batching the bulk requests to be larger will provide better indexing performance
**Expected behavior**
404's for deletes that have a create in the same bulk request should not go to DLQ.
Instead, the pipeline should retry the failed 404 deletes a couple more times before sending to DLQ. This will allow the creates to be processed by OpenSearch, and then the deletes would follow
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] Creates and deletes in the same bulk request can result in 404 | https://api.github.com/repos/opensearch-project/data-prepper/issues/5521/comments | 11 | 2025-03-14T22:11:28Z | 2025-06-18T13:16:02Z | https://github.com/opensearch-project/data-prepper/issues/5521 | 2,921,385,746 | 5,521 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
When the given pipeline role doesn't have access to the AWS secrets, the plugin currently fails to read them, and if this happens at the start of the pipeline, it crashes the pipeline start activity.
**Describe the solution you'd like**
The Secrets plugin should notify the source either by sending null values or with an error code (or maybe both) so that the source can decide its behavior. This way, the source has an opportunity to retry instead of crashing the pipeline start. The Secrets plugin should also continue to poll to check whether it has gained access to the secret and is able to read and initialize the values. The existing secrets refresh logic should help implement this functionality.
| AWS Secrets plugin should notify when it is unable to read secrets | https://api.github.com/repos/opensearch-project/data-prepper/issues/5517/comments | 0 | 2025-03-11T21:45:49Z | 2025-03-18T19:38:43Z | https://github.com/opensearch-project/data-prepper/issues/5517 | 2,911,956,240 | 5,517 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I have a Lambda processor that receives records (N records) to process and outputs records (M records) to ingest into OpenSearch
The Lambda returns the list of documents, this is the code to replicate the issue:
```
import json
def lambda_handler(event, context):
return [
{'a': 1},
{'a': 2},
{'a': 3},
]
```
My expectation is that 3 events will flow to my pipeline after the Lambda for further processing/to be ingested. The reality I see right now is that Data Prepper receives the array and treats it as a single event.
This is the output from Data Prepper showing this:
```
2025-03-08T12:14:12,590 [s3-source-sqs-1] INFO org.opensearch.dataprepper.plugins.source.s3.S3ObjectWorker - Read S3 object: [bucketName=7sybr6i6-search-indexer-replication, key=cdc/public/content_artist/20250308-121411717.csv]
2025-03-08T12:14:16,803 [simple-sample-pipeline-processor-worker-1-thread-1] INFO org.opensearch.dataprepper.plugins.lambda.processor.LambdaProcessor - Flush to Lambda check: currentBuffer.size=176, currentBuffer.events=1, currentBuffer.duration=PT58.743S
[{"a":1},{"a":2},{"a":3}]
2025-03-08T12:14:18,008 [acknowledgement-callback-4] INFO org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Deleted 1 messages from SQS. [e42e8b50-bea9-4753-9832-cf6092173e88]
2025-03-08T12:14:29,118 [simple-sample-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - Bulk Operation Failed.
org.opensearch.client.opensearch._types.OpenSearchException: Request failed: [x_content_parse_exception] [1:8] [UpdateRequest] doc doesn't support values of type: START_ARRAY
```
You can see in the output `[{"a":1},{"a":2},{"a":3}]` being seen as one item (this is the `stdout` output), and then you can see OS failing to import the object because it sees an array
My pipeline:
```
simple-sample-pipeline:
workers: 1
source:
s3:
acknowledgments: true
notification_type: "sqs"
compression: "none"
codec:
csv:
sqs:
queue_url: "queue_url"
maximum_messages: 10
visibility_timeout: "30s"
visibility_duplication_protection: true
aws:
region: "region"
sts_role_arn: "role"
processor:
- aws_lambda:
function_name: "function_name"
invocation_type: "request-response"
aws:
region: "region"
sts_role_arn: "role"
max_retries: 3
batch:
key_name: "records"
threshold:
event_count: 5
maximum_size: "5mb"
event_collect_timeout: PT10S
sink:
- stdout:
- opensearch:
hosts: ["http://127.0.0.1:9200"]
username: username
password: password
# aws_sigv4: true
insecure: true
index_type: management_disabled
index: test-index
document_id: "${/id}"
max_retries: 20
bulk_size: 4
action: upsert
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create a lambda function with the code I wrote above
2. Create a pipeline using my template, replace the vars with yours
3. Start it
4. See error
**Expected behavior**
Data Prepper should see 3 events
```
{"a": 1}
{"a": 2}
{"a": 3}
```
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS] Ubuntu 22.04.3 LTS
- Version [e.g. 22] Latest opensearchproject/data-prepper:latest
**Additional context**
In my lambda I will need the ability to return a dynamic number of documents (that will likely be less than the number of records I receive in input)
I will also need it to return "delete actions" and not just upsert ones, but shouldn't matter for this bug
| [BUG] Lambda list output is treated as a single document | https://api.github.com/repos/opensearch-project/data-prepper/issues/5510/comments | 5 | 2025-03-08T12:20:53Z | 2025-03-16T20:46:53Z | https://github.com/opensearch-project/data-prepper/issues/5510 | 2,904,756,052 | 5,510 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
[ML Commons](https://opensearch.org/docs/latest/ml-commons-plugin/) is an OpenSearch plugin that manages Machine Learning models to enhance search relevance through semantic understanding. You can deploy models directly within your OpenSearch cluster or connect to externally hosted models.
For neural search, a language model converts text into vector embeddings. During ingestion, OpenSearch generates vector embeddings for text fields in incoming requests. At search time, the same model transforms query text into vector embeddings, enabling vector similarity search. It is crucial to use the same ML model for both ingestion and search to ensure consistency.
To support offline batch ingestion, Data Prepper is proposed as the ingestion engine for transforming text into vector embeddings. This processor will also support streaming mode data transformation.
**Describe the solution you'd like**
Build a new processor that integrates the ml-commons ML model Predict/batch_predict APIs into the Data Prepper pipelines.
**Describe alternatives you've considered (Optional)**
The model management and predict/batch_predict APIs have already been launched in ml-commons. This feature only integrates them into Data Prepper.
**Additional context**
https://github.com/opensearch-project/data-prepper/issues/5433
| Integrate OpenSearch Ml-Commons into Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/5509/comments | 3 | 2025-03-07T18:38:03Z | 2025-04-16T20:17:35Z | https://github.com/opensearch-project/data-prepper/issues/5509 | 2,903,681,529 | 5,509 |
[
"opensearch-project",
"data-prepper"
] | I am trying to use OpenSearch Ingestion to consume data from Kafka, but my Kafka cluster is configured to use SSL/TLS with mutual authentication (mTLS). This means I only have a client certificate and key for authentication, without a username/password.
Could you confirm whether SSL/TLS authentication with a client certificate (without username/password) is not supported?
Thanks in advance for your help
| Support for Kafka authentication using SSL/TLS | https://api.github.com/repos/opensearch-project/data-prepper/issues/5505/comments | 4 | 2025-03-05T18:08:40Z | 2025-06-17T14:00:24Z | https://github.com/opensearch-project/data-prepper/issues/5505 | 2,898,052,336 | 5,505 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Using a dynamic index with an index template results in the pipeline continuously trying to create index templates, even for the same index value, which is apparent by this log repeating for the same indexes as documents are written
https://github.com/opensearch-project/data-prepper/blob/10984216cf845f9e0ffe8226ddd566d7db61cfc7/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/index/AbstractIndexManager.java#L314
**To Reproduce**
Steps to reproduce the behavior:
Create a pipeline with the following configuration where `dynamic_field` has high granularity (50+)
```
...
sink:
- opensearch:
index: "dynamic-index-${/dynamic_field}"
index_type: "custom"
template_type: "index-template"
template_content: "... template content ...."
```
Send data and observe the pipeline continue to check and log that the index template has not been created for the same index
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| [BUG] Dynamic Index Manager keeps trying to create index templates | https://api.github.com/repos/opensearch-project/data-prepper/issues/5504/comments | 2 | 2025-03-05T18:08:04Z | 2025-03-11T19:39:54Z | https://github.com/opensearch-project/data-prepper/issues/5504 | 2,898,050,650 | 5,504 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
In my setup I receive CDC records from AWS DMS that get put in S3. Those records contain data from the DB that might contain a field that has newlines in it
An example of a possible CSV file is:
```
D,metadata,public,Lyrics,"I don’t love you like I love my dog
I don’t love you like I love my dog
My dog wanders with me all day
My dog wanders with me all day
People leave you and let you down
People leave you and let you down
But my dog will always stay
Just a footstep away
People leave you and they let you down
When my dog goes I will go too
When my dog goes I will go too
Over that bridge into the blue
I’m not going anywhere with you
Cause when my dog goes I will go too
I don’t love you like I love my dog
I don’t love you like I love my dog",122474,exiftool,2338974,,,
U,track,public,122474,dog,dog,dog,2025-03-04 10:11:49.924523,0,80370,,web_upload:///dog.mp3,,false,,6,2025-03-03 16:48:30.271660,31274,audio,true,155,internal,,false,0,2025-03-03 16:48:34.372470,1,someurl,false,10,,,,1100,,false,,true,31942,,,true,,,,,model123,,,,,,,,,,
```
In this case the lyrics have multiple lines.
Due to the nature of CDC updates, each line will have a different number of columns, which causes the S3 source CSV codec to crash and suggest using the csv processor.
Unfortunately this means setting the codec to `newline`, which means Data Prepper will see the newlines in the lyrics as separate rows, so incorrect events will be sent to my pipeline, crashing it:
```yaml
simple-sample-pipeline:
workers: 1
source:
s3:
acknowledgments: true
notification_type: "sqs"
compression: "none"
codec:
newline:
sqs:
queue_url: ""
maximum_messages: 10
visibility_timeout: "30s"
visibility_duplication_protection: true
aws:
region: ""
sts_role_arn: ""
processor:
- csv:
column_names: ["op", "table", "schema"]
# Here I would do Lambda processing of each record but will receive incorrect data
sink:
- stdout
# ...
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create a CSV file on S3 with the text above
2. Trigger data prepper and look at it printing incorrect records
**Expected behavior**
Ability to handle cases like this. Probably means S3 codec is able to handle dynamic columns so we don't need to split the file?
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS] Ubuntu 22.04.3 LTS
- Version [e.g. 22] Latest (opensearchproject/data-prepper:latest)
**Additional context**
I should be able to fix it by having DMS only expose some columns, but I am not sure whether this should be seen as a bug or as a feature request for the S3 source CSV codec to handle dynamic columns so that I wouldn't have to do this. | Processing s3 source csv with newlines | https://api.github.com/repos/opensearch-project/data-prepper/issues/5503/comments | 3 | 2025-03-05T17:51:02Z | 2025-03-11T20:49:43Z | https://github.com/opensearch-project/data-prepper/issues/5503 | 2,898,004,887 | 5,503 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Currently, the S3 DLQ does not set an expected bucket owner when it makes a PutObject request (https://github.com/opensearch-project/data-prepper/blob/76ba065601cff606f336b1d676c243a52c538f27/data-prepper-plugins/failures-common/src/main/java/org/opensearch/dataprepper/plugins/dlq/s3/S3DlqWriter.java#L104), which puts it at risk of bucket sniping
**Describe the solution you'd like**
A new `bucket_owner` option in the DLQ configuration
Pass the `expectedBucketOwner` to the PutObject request by following these rules (see the sketch after this list):
1. Pass the `bucket_owner` if configured
2. Pass the account id of the sts_role_arn if supplied
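A minimal sketch using the AWS SDK v2 builder (the surrounding wiring is assumed; `expectedBucketOwner` is an existing SDK option):
```java
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

class DlqRequestBuilder {
    // bucketOwner resolved from dlq.bucket_owner if set, otherwise the account id of sts_role_arn
    PutObjectRequest buildRequest(final String bucket, final String key, final String bucketOwner) {
        final PutObjectRequest.Builder builder = PutObjectRequest.builder()
                .bucket(bucket)
                .key(key);
        if (bucketOwner != null) {
            builder.expectedBucketOwner(bucketOwner); // S3 rejects the write if the bucket owner differs
        }
        return builder.build();
    }
}
```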
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| S3 DLQ should pass expected bucket owner to PutObject request | https://api.github.com/repos/opensearch-project/data-prepper/issues/5498/comments | 0 | 2025-03-04T21:57:04Z | 2025-03-18T17:56:36Z | https://github.com/opensearch-project/data-prepper/issues/5498 | 2,895,530,568 | 5,498 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
OpenSearch Serverless now supports point in time (https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-pit.html), which is the preferred way to paginate data in OpenSearch.
**Describe the solution you'd like**
Allow point in time search context to be used for OpenSearch Serverless (and verify it works end to end)
**Describe alternatives you've considered (Optional)**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| Support reading via point in time for OpenSearch Serverless | https://api.github.com/repos/opensearch-project/data-prepper/issues/5493/comments | 0 | 2025-03-03T15:55:32Z | 2025-03-04T20:32:31Z | https://github.com/opensearch-project/data-prepper/issues/5493 | 2,891,638,059 | 5,493 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
I have a pipeline that reads data from S3, processes it via CSV and then uses the Lambda processor to create the documents to ingest into OpenSearch
In one case, the Lambda invocation failed, and the failure message got ingested into OpenSearch for some reason:
```
{
"took": 0,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 1,
"hits": [
{
"_index": "test-index",
"_id": "_id",
"_score": 1,
"_source": {
"errorMessage": "'table'",
"errorType": "KeyError",
"requestId": "d303b986-8f1c-4be1-9723-3b0184f60ffd",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 33, in handler\n tracks_ids, playlists_ids, artists_ids = parse_records(records, datadog_env)\n",
" File \"/var/task/code/records_parser.py\", line 15, in parse_records\n table_name = record['table']\n"
]
}
}
]
}
}
```
**To Reproduce**
This is my pipeline (I am evaluating Data Prepper so I just have a basic test pipeline)
```
simple-sample-pipeline:
workers: 2
source:
s3:
# Prevent data loss by only considering logs to be processed successfully after they are received by the opensearch sink
acknowledgments: true
notification_type: "sqs"
# Provide compression property, can be "none", "gzip", or "automatic"
compression: "none"
codec:
newline:
sqs:
# Provide a SQS Queue URL to read from
queue_url: ""
# Lower maximum_messages depending on the size of your S3 objects
maximum_messages: 10
# Modify the visibility_timeout of the sqs messages depending on the size of your access log S3 objects.
# Objects that are small (< 0.5 GB) and evenly distributed in size will result in the best performance
# It is recommended to allocate a minimum of 30 seconds, and to add 30 seconds for every 0.25 GB of data in each S3 Object
visibility_timeout: "30s"
# Enable this flag to allow the visibility timeout to be extended if an object has not yet finished processing.
# This helps prevent duplicate processing of SQS messages when the visibility timeout is lower than the amount of time required to process an object.
visibility_duplication_protection: true
aws:
# Provide the region to use for aws credentials
region: "ap-southeast-2"
# Provide the role to assume for requests to SQS and S3
sts_role_arn: ""
processor:
- csv:
column_names: ["op", "table", "schema"]
- aws_lambda:
function_name: "function_name"
invocation_type: "request-response"
aws:
region: "ap-southeast-2"
sts_role_arn: ""
max_retries: 3
batch:
key_name: "records"
threshold:
event_count: 50
maximum_size: "5mb"
event_collect_timeout: PT10S
sink:
- opensearch:
hosts: ["http://127.0.0.1:9200"]
username: empty
password: empty
insecure: true
index_type: management_disabled
index: test-index
document_id: _id
max_retries: 20
bulk_size: 4
```
**Expected behavior**
The failed batch of records is ignored/sent to DLQ
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 20.04 LTS] Ubuntu 22.04.3 LTS
- Version [e.g. 22] Latest version (`opensearchproject/data-prepper:latest`)
| [BUG] Lambda processing failure stored as document | https://api.github.com/repos/opensearch-project/data-prepper/issues/5491/comments | 6 | 2025-03-02T00:01:07Z | 2025-03-07T10:02:59Z | https://github.com/opensearch-project/data-prepper/issues/5491 | 2,889,227,133 | 5,491 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
As a user of condition statements in my role to access AWS secrets manager, I am not able to pass `sts_header_overrides` to the secrets configuration (https://github.com/opensearch-project/data-prepper/blob/06018285a72e25139175bc8539f6846a8d947b4f/data-prepper-plugins/aws-plugin/src/main/java/org/opensearch/dataprepper/plugins/aws/AwsSecretManagerConfiguration.java#L41)
**Describe the solution you'd like**
Options to pass `sts_header_overrides` to the assume role request for secrets
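A sketch of what the secrets configuration could look like with the proposed option (existing key names are taken from the AWS secrets extension; `sts_header_overrides` is the new part, and exact names may differ):
```yaml
# data-prepper-config.yaml
extensions:
  aws:
    secrets:
      my-credentials:
        secret_id: "my-pipeline-secret"
        region: "us-east-1"
        sts_role_arn: "arn:aws:iam::123456789012:role/secrets-role"
        sts_header_overrides:                # proposed addition
          x-my-custom-header: "example-value"
```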
| AWS Secrets Manager Plugin does not support sts_header_overrides | https://api.github.com/repos/opensearch-project/data-prepper/issues/5475/comments | 0 | 2025-02-28T18:28:43Z | 2025-03-06T00:15:38Z | https://github.com/opensearch-project/data-prepper/issues/5475 | 2,887,820,760 | 5,475 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Noticed the following issues with the lambda processor
1. When batching multiple records based on size, we seem to be adding more than the limit mentioned in the batch threshold configuration. Specifically, we seem to be one over the limit.
2. When there is a lambda invocation failure, we seem to fail all the records that the processor is receiving. We should only fail that particular batch and not all the records that the processor receives.
**To Reproduce**
Try to execute the processor with 3 records of 1 MB, 7 MB, and 1 MB, in that sequence. We should ideally see 3 batches, i.e. 3 requests to Lambda, but currently we will see only 2 batches - the first 2 records in one batch and the last 1 MB record in another. | [BUG] Lambda Plugin Batching | https://api.github.com/repos/opensearch-project/data-prepper/issues/5473/comments | 0 | 2025-02-28T06:32:57Z | 2025-03-03T20:03:49Z | https://github.com/opensearch-project/data-prepper/issues/5473 | 2,886,329,414 | 5,473 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The S3 source with folder partitioning enabled will quickly acquire and give up a partition if that folder has no objects to process. With the dynamodb source coordination store, this can lead to a failure to give up the partition because the Global Secondary Index on the table has not replicated the change, which leads to the folder partition not getting acquired again until its ownership times out
```
org.opensearch.dataprepper.model.source.coordinator.exceptions.PartitionNotOwnedException: The partition is no longer owned by this instance of Data Prepper. The partition ownership timeout most likely expired and was grabbed by another instance of Data Prepper
at org.opensearch.dataprepper.core.sourcecoordination.LeaseBasedSourceCoordinator.validatePartitionOwnership(LeaseBasedSourceCoordinator.java:437) ~[data-prepper-core-2.x.418.jar:?]
at org.opensearch.dataprepper.core.sourcecoordination.LeaseBasedSourceCoordinator.giveUpPartitionInternal(LeaseBasedSourceCoordinator.java:362) ~[data-prepper-core-2.x.418.jar:?]
at org.opensearch.dataprepper.core.sourcecoordination.LeaseBasedSourceCoordinator.giveUpPartition(LeaseBasedSourceCoordinator.java:351) ~[data-prepper-core-2.x.418.jar:?]
at org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker.processFolderPartition(ScanObjectWorker.java:287) ~[s3-source-2.x.418.jar:?]
at org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker.startProcessingObject(ScanObjectWorker.java:189) ~[s3-source-2.x.418.jar:?]
at org.opensearch.dataprepper.plugins.source.s3.ScanObjectWorker.run(ScanObjectWorker.ja
```
**Expected behavior**
The S3 source with folder partitioning should handle race conditions better. The simplest way to do this would be to add a new method `giveUpPartitionWithRetries()` to `LeaseBasedSourceCoordinator`, which attempts to give up the partition up to X times (calling the regular `giveUpPartition` method with back-off between retries), as sketched below.
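A minimal sketch of such a retry wrapper, assuming the existing give-up call is passed in as a `Runnable` (this is not the actual `LeaseBasedSourceCoordinator` code, and the real method signature may differ):
```java
// Sketch only: retry the ownership release a bounded number of times with exponential
// back-off, so a stale read from the coordination store's GSI does not strand the partition.
public final class GiveUpPartitionRetry {
    public static void giveUpPartitionWithRetries(final Runnable giveUpPartitionCall, final int maxRetries) {
        long backoffMillis = 500;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                giveUpPartitionCall.run();   // delegate to the existing giveUpPartition(...) logic
                return;                      // the store accepted the ownership release
            } catch (final RuntimeException e) {
                if (attempt == maxRetries) {
                    throw e;                 // retries exhausted; surface the original failure
                }
                try {
                    Thread.sleep(backoffMillis);   // give the index time to replicate the change
                } catch (final InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
                backoffMillis *= 2;          // exponential back-off between attempts
            }
        }
    }
}
```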
| [BUG] S3 source folder partitioning has noisy log that causes delay in processor folders | https://api.github.com/repos/opensearch-project/data-prepper/issues/5468/comments | 0 | 2025-02-26T23:28:54Z | 2025-04-23T20:46:05Z | https://github.com/opensearch-project/data-prepper/issues/5468 | 2,883,036,405 | 5,468 |
[
"opensearch-project",
"data-prepper"
] | When creating a common server builder and auth module for push-based plugins in this PR: https://github.com/opensearch-project/data-prepper/pull/5423/ (merged into the [common-server-builder-and-auth-module branch](https://github.com/opensearch-project/data-prepper/tree/common-server-builder-and-auth-module)), a new ServerConfiguration class was made.
For the http source and the otel trace, logs, and metrics sources, the configuration had to be converted to ServerConfiguration. See this comment: https://github.com/opensearch-project/data-prepper/pull/5423#discussion_r1972147888. It is probably better to make ServerConfiguration an interface or abstract class, so the configurations for the http source and the otel trace, logs, and metrics sources could extend or implement it; one possible shape is sketched below. Alternatively, change CreateServer.java and the source configurations to use BaseHttpServerConfig implementations instead of ServerConfiguration.
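One possible shape for that refactor, shown only as a sketch (the method names below are illustrative assumptions, not the actual `http-common` API):
```java
// Sketch only: ServerConfiguration as an interface that each push-based source's
// configuration class (or BaseHttpServerConfig) could implement directly,
// removing the need to convert source configs into a separate class.
public interface ServerConfiguration {
    String getPath();
    int getPort();
    boolean isSslEnabled();
    int getRequestTimeoutInMillis();
}
```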
The branch also needs to be merged to main when changes are complete. | Make ServerConfiguration in http-common an interface or abstract class, or user interface BaseHttpServerConfig | https://api.github.com/repos/opensearch-project/data-prepper/issues/5467/comments | 0 | 2025-02-26T23:19:07Z | 2025-03-04T20:39:55Z | https://github.com/opensearch-project/data-prepper/issues/5467 | 2,883,024,609 | 5,467 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
The default value for Jackson's stream read constraints is 20,000,000 characters. This means that if any single Event being read by the JSON codec exceeds this size, an error is thrown because the value cannot be parsed:
```
com.fasterxml.jackson.core.exc.StreamConstraintsException: String value length (20054016) exceeds the maximum allowed (20000000, from `StreamReadConstraints.getMaxStringLength()`)
```
**Describe the solution you'd like**
A configurable option in the JSON codec
```
source:
s3:
codec:
json:
max_event_length: 30000000
```
And this could then be wired into the JSON decoder here (https://github.com/opensearch-project/data-prepper/blob/837d1a9a4bee8d1cb05e1f539bb41fa37c6fc070/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/codec/JsonDecoder.java#L33)
```
public JsonDecoder(String keyName, Collection<String> includeKeys, Collection<String> includeKeysMetadata, int maxEventLength) {
this.keyName = keyName;
this.includeKeys = includeKeys;
this.includeKeysMetadata = includeKeysMetadata;
jsonFactory.setStreamReadConstraints(StreamReadConstraints.builder()
.maxStringLength(maxEventLength)
.build());
}
```
| Support configurable stream read constraints max length in the JSON input codec | https://api.github.com/repos/opensearch-project/data-prepper/issues/5466/comments | 4 | 2025-02-26T22:53:46Z | 2025-04-09T01:13:09Z | https://github.com/opensearch-project/data-prepper/issues/5466 | 2,882,991,181 | 5,466 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The dissect processor behaves unexpectedly at high traffic with multiple processor worker threads.
**To Reproduce**
Steps to reproduce the behavior (a minimal config sketch follows the list):
1. Upload a file for the dissect processor containing a large amount of data with the same record repeated
2. Set "workers" for the pipeline to 2
3. Observe inconsistent parsing for the same record
4. Set "workers" for the pipeline to 1
5. Observe consistent results for the same record
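A minimal pipeline sketch for the repro (the file path, key name, and dissect pattern are illustrative and not taken from the original report):
```yaml
dissect-repro-pipeline:
  workers: 2                            # set to 1 and the parsed output becomes consistent
  source:
    file:
      path: "/tmp/dissect-repro.log"    # a file containing many copies of the same record
      format: "plain"
  processor:
    - dissect:
        map:
          message: "%{client_ip} %{request} %{status}"   # illustrative pattern and source key
  sink:
    - stdout:
```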
**Expected behavior**
Consistent parsing for the same record with more than 1 worker
| [BUG] Dissect processor is not thread safe | https://api.github.com/repos/opensearch-project/data-prepper/issues/5462/comments | 0 | 2025-02-26T05:52:19Z | 2025-03-04T20:15:42Z | https://github.com/opensearch-project/data-prepper/issues/5462 | 2,880,296,682 | 5,462 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
For a DynamoDB pipeline with export enabled, it looks like the S3 connection is dropping during the export process, resulting in reprocessing of the entire file. This prevents the pipeline from completing the export and starting the streaming ingestion.
This is the error we can see:
```
ERROR org.opensearch.dataprepper.plugins.source.dynamodb.export.DataFileScheduler - There was an exception while processing an S3 data file: java.util.concurrent.CompletionException: java.lang.RuntimeException: Loading of s3://bucket/export/AWSDynamoDB/7zzb546dhkx2v6jq.ion.gz completed with Exception: Read timed out
```
Currently, the code calls the `getObject` API, which returns a stream. Instead, we can use the range option on the `getObject` API to retrieve the S3 object in chunks until all chunks/partitions have been read, as sketched below.
This behavior becomes more frequent when the S3 bucket is in a different region than the region where Data Prepper is configured to run.
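A rough sketch of the ranged-read idea with the AWS SDK for Java v2 (the class, chunk size, and the downstream `process` step are illustrative, not the actual export data-file loader code):
```java
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;

// Sketch only: read the export file in bounded chunks via ranged GetObject calls instead of
// holding a single long-lived response stream open for the whole object.
public class RangedS3ObjectReader {
    private static final long CHUNK_SIZE_BYTES = 8 * 1024 * 1024; // 8 MiB per request (placeholder)

    public void readInChunks(final S3Client s3Client, final String bucket, final String key) {
        final long objectSize = s3Client.headObject(
                HeadObjectRequest.builder().bucket(bucket).key(key).build()).contentLength();

        long position = 0;
        while (position < objectSize) {
            final long end = Math.min(position + CHUNK_SIZE_BYTES, objectSize) - 1;
            final GetObjectRequest request = GetObjectRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    .range("bytes=" + position + "-" + end)    // fetch only this chunk
                    .build();
            final ResponseBytes<GetObjectResponse> chunk = s3Client.getObjectAsBytes(request);
            process(chunk.asByteArray());                      // feed the chunk to the existing parser
            position = end + 1;
        }
    }

    private void process(final byte[] chunkBytes) {
        // placeholder: stream these bytes into the existing gzip/ion record parsing
    }
}
```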
**Expected behavior**
The connections to S3 from the pipeline should be reliable.
| [BUG] Issue with dynamoDb export where S3 connections are dropping | https://api.github.com/repos/opensearch-project/data-prepper/issues/5461/comments | 0 | 2025-02-25T20:44:38Z | 2025-03-06T17:36:34Z | https://github.com/opensearch-project/data-prepper/issues/5461 | 2,879,523,953 | 5,461 |
[
"opensearch-project",
"data-prepper"
] | ### Bug Description
Data Prepper's `otel_logs_source` pipeline is generating documents that don't follow the OpenTelemetry [log data model specification](https://opentelemetry.io/docs/specs/otel/logs/data-model/#log-and-event-record-definition). The current implementation incorrectly maps fields and doesn't maintain the standard OTEL log record structure.
### Current Behavior
Currently, the Data Prepper pipeline generates documents with this incorrect structure:
```json
{
"_index": "otel-logs-osi-classic",
"_id": "3ewMKZUBlpG2jc_nugaw",
"_source": {
"traceId": "",
"spanId": "",
"severityText": "INFO",
"flags": 0,
"time": "2025-02-21T15:07:20.691914Z",
"severityNumber": 9,
"droppedAttributesCount": 1,
"serviceName": null,
"body": "the message",
"observedTime": "1970-01-01T00:00:00Z",
"schemaUrl": "",
"log.attributes.app": "server"
},
"fields": {
"observedTime": [
"1970-01-01T00:00:00.000Z"
],
"time": [
"2025-02-21T15:07:20.691Z"
]
}
}
```
### Expected Behavior
The OpenTelemetry Collector's [OpenSearch Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/opensearchexporter) already implements the correct structure, following both the OTEL specification and [ss4o](https://opensearch.org/docs/latest/observing-your-data/ss4o/). Here's an example of the same log being correctly formatted by the OpenSearch Exporter:
```json
{
"_index": "teste-logs-osi",
"_id": "LzYMKZUBRGZ4BbSsiBQY",
"_source": {
"attributes": {
"app": "server",
"data_stream": {
"dataset": "default",
"namespace": "namespace",
"type": "record"
}
},
"body": "the message",
"instrumentationScope": {},
"observedTimestamp": "2025-02-21T15:07:22.004464639Z",
"severity": {
"text": "INFO",
"number": 9
},
"@timestamp": "2025-02-21T15:07:20.691914Z"
}
}
```
Key differences and issues (a partial workaround sketch follows this list):
1. Field Names:
- Wrong: "time" → Should be: "timestamp"
- Wrong: "observedTime" → Should be: "observedTimestamp"
2. Structure Issues:
- Resource attributes are flattened instead of being nested under "resource"
- Instrumentation scope is missing
- Attributes are incorrectly prefixed with "log.attributes." instead of being nested
- Invalid observed_timestamp defaulting to epoch
3. Missing Standard Fields:
- Proper resource context
- Instrumentation scope information
- Structured attribute mapping
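A partial workaround sketch for the field-name mismatches above, using a `rename_keys` processor in the Data Prepper pipeline (it only realigns the timestamp field names and does not restore the nested `resource`/`attributes` structure or the instrumentation scope):
```yaml
processor:
  - rename_keys:
      entries:
        - from_key: "time"
          to_key: "@timestamp"             # align with the OTEL/ss4o timestamp field name
        - from_key: "observedTime"
          to_key: "observedTimestamp"      # align with the OTEL observed timestamp field name
```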
### Steps to Reproduce
1. Configure Data Prepper with an `otel_logs_source` pipeline
2. Send OTLP format logs through OpenTelemetry Collector
3. Examine the resulting documents in OpenSearch
4. Compare with the OTEL log data model specification
### Pipeline Configuration
```yaml
version: "2"
otel-logs-pipeline:
source:
otel_logs_source:
# Provide the path for ingestion. ${pipelineName} will be replaced with sub-pipeline name, i.e. otel-logs-pipeline, configured for this pipeline.
# In this case it would be "/otel-logs-pipeline/v1/logs".
path: "/${pipelineName}/v1/logs"
sink:
- opensearch:
# Provide an AWS OpenSearch Service domain endpoint
hosts: [ "https://XXXXX.us-east-1.es.amazonaws.com" ]
aws:
# Provide a Role ARN with access to the domain. This role should have a trust relationship with osis-pipelines.amazonaws.com
sts_role_arn: "arn:aws:iam::XXXXXX:role/role-osi-pipeline-otel-logs-aos"
# Provide the region of the domain.
region: "us-east-1"
# Enable the 'serverless' flag if the sink is an Amazon OpenSearch Serverless collection
serverless: false
# serverless_options:
# Specify a name here to create or update network policy for the serverless collection
# network_policy_name: "network-policy-name"
index: "otel-logs-osi"
# Enable the 'distribution_version' setting if the AWS OpenSearch Service domain is of version Elasticsearch 6.x
# distribution_version: "es6"
# Enable and switch the 'enable_request_compression' flag if the default compression setting is changed in the domain. See https://docs.aws.amazon.com/opensearch-service/latest/developerguide/gzip.html
# enable_request_compression: true/false
# Optional: Enable the S3 DLQ to capture any failed requests in an S3 bucket. Delete this entire block if you don't want a DLQ.
dlq:
s3:
# Provide an S3 bucket
bucket: "aws-s3-XXXX"
# Provide a key path prefix for the failed requests
key_path_prefix: "otel-logs-pipeline/logs/dlq"
# Provide the region of the bucket.
region: "us-east-1"
# Provide a Role ARN with access to the bucket. This role should have a trust relationship with osis-pipelines.amazonaws.com
sts_role_arn: "arn:aws:iam::XXXXX:role/role-osi-pipeline-otel-logs-aos"
```
### Environment
- Data Prepper Version: [2.10.2] / OSI
- OpenSearch Version: [2.17] AOS
- OpenTelemetry Collector Version: [v0.120.0]
### Impact
1. Inconsistency with OTEL specification
2. Difficult to implement standard log queries
3. Poor integration with OTEL ecosystem
4. Compromised observability capabilities
5. Extra development effort for workarounds
### Questions
1. Is there a plan to align the implementation with the OTEL specification or ss4o?
2. Can custom processors be used to transform the data into the correct format? | [Bug] Data-Prepper `otel_logs_source`: Incorrect OTLP Log Structure Mapping | https://api.github.com/repos/opensearch-project/data-prepper/issues/5455/comments | 14 | 2025-02-21T19:01:14Z | 2025-05-21T13:44:35Z | https://github.com/opensearch-project/data-prepper/issues/5455 | 2,869,797,273 | 5,455 |
[
"opensearch-project",
"data-prepper"
] | Routing is not working; am I doing anything incorrect?
```yaml
log-routing-pipeline:
workers: 5
delay: 10
source:
otel_logs_source:
ssl: false
buffer:
bounded_blocking:
processor:
- rename_keys:
entries:
- from_key: "log.attributes.fluent_tag"
to_key: "fluent_tag"
route:
- gitlab-logs: '/fluent_tag == "gitlab-project"'
- harbor-logs: '/fluent_tag == "harbor-project"'
- default-logs: 'true'
sink:
- opensearch:
routes: ["gitlab-logs"]
hosts:
- https://opensearch-node1:9200
username: "admin"
password: "{{password}}"
insecure: true
index: gitlab-logs-%{yyyy.MM.dd}
bulk_size: 4
- opensearch:
routes: ["harbor-logs"]
hosts:
- https://opensearch-node1:9200
username: "admin"
password: "{{password}}"
insecure: true
index: harbor-logs-%{yyyy.MM.dd}
bulk_size: 4
- opensearch:
routes: ["default-logs"]
hosts:
- https://opensearch-node1:9200
username: "admin"
password: "{{password}}"
insecure: true
index: all-no-tag-logs-%{yyyy.MM.dd}
bulk_size: 4
``` | [BUG] data prepper routing not working | https://api.github.com/repos/opensearch-project/data-prepper/issues/5452/comments | 7 | 2025-02-21T18:33:41Z | 2025-04-13T06:01:31Z | https://github.com/opensearch-project/data-prepper/issues/5452 | 2,869,748,302 | 5,452 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
When processing S3 delete events, data-prepper throws a NullPointerException because the object size is null.
**To Reproduce**
1. Configure an S3 bucket with SQS Event Notifications and ObjectDelete enabled.
2. Delete a file from the monitored S3 bucket.
3. The following error will occur:
`[s3-source-sqs-1] ERROR org.opensearch.dataprepper.plugins.source.s3.SqsWorker - Unable to process SQS messages. Processing error due to: Cannot invoke "java.lang.Long.longValue()" because the return value of "org.opensearch.dataprepper.plugins.source.s3.S3EventNotification$S3ObjectEntity.getSizeAsLong()" is null`
Pipeline config:
```
S3-pipeline:
source:
s3:
notification_type: sqs
codec:
csv:
compression: none
sqs:
queue_url: "https://sqs.us-east-2.amazonaws.com/someQueue"
aws:
region: "us-east-2"
sts_role_arn: "someArn"
sink:
- stdout:
```
**Expected behavior**
The application should gracefully handle S3 delete events without throwing an exception. The object size should default to 0 when not present in the event notification.
**Proposed Fix**
- Add a null check for `getSizeAsLong()` in the `ParsedMessage` class.
- Default object size to `0L` if the value is null (a minimal sketch follows below).
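A minimal sketch of the proposed check (the accessor name follows the issue text; the surrounding `ParsedMessage` constructor is not reproduced here):
```java
// Sketch only: S3 ObjectRemoved (delete) event notifications carry no object size,
// so default to 0L instead of unboxing a null Long and throwing a NullPointerException.
final class ObjectSizeDefaults {
    static long sizeOrZero(final Long sizeAsLong) {
        return sizeAsLong != null ? sizeAsLong : 0L;
    }
    // Illustrative use inside ParsedMessage:
    // this.objectSize = sizeOrZero(s3ObjectEntity.getSizeAsLong());
}
```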
**Environment (please complete the following information):**
- OS: MacOS 14.7 Sonoma
- Data Prepper 2.10.2
| [BUG] NullPointerException on S3 Delete Event Due to Null Object Size | https://api.github.com/repos/opensearch-project/data-prepper/issues/5448/comments | 0 | 2025-02-20T18:38:31Z | 2025-02-26T20:42:13Z | https://github.com/opensearch-project/data-prepper/issues/5448 | 2,866,983,914 | 5,448 |
[
"opensearch-project",
"data-prepper"
] | I have set up tracing with the flow Otelcol -> DataPrepper -> OpenSearch, where Kafka is used as a buffer in DataPrepper. This setup works correctly and there are no issues with it.
After that, I needed to remove the Kafka buffer and use Kafka as the trace source instead, with traces written to Kafka by otelcol.
Configuration:
```
entry-pipeline:
source:
kafka:
bootstrap_servers:
- kafka_host:9092
topics:
- name: test_trace_topic
group_id: trace-test
encryption:
type: none
sink:
- pipeline:
name: "raw-trace-pipeline"
- pipeline:
name: "service-map-pipeline"
raw-trace-pipeline:
source:
pipeline:
name: "entry-pipeline"
buffer:
bounded_blocking:
buffer_size: 65536
batch_size: 8
processor:
- otel_trace_raw:
- otel_trace_group:
hosts: ["http://OpenSearch_host"]
insecure: true
username: admin
password: password
sink:
- opensearch:
hosts: ["http://OpenSearch_host"]
insecure: true
username: admin
password: password
index_type: trace-analytics-raw
service-map-pipeline:
delay: "100"
source:
pipeline:
name: "entry-pipeline"
buffer:
bounded_blocking:
buffer_size: 65536
batch_size: 8
processor:
- service_map_stateful:
sink:
- opensearch:
hosts: ["http://OpenSearch_host"]
insecure: true
username: admin
password: password
index_type: trace-analytics-service-map
```
At the execution stage of the `otel_trace_raw` processor, DataPrepper reports processing errors related to parsing.
Log of the error:
```
2025-02-20T15:56:09,339 [raw-trace-pipeline-processor-worker-3-thread-1] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - A processor threw an exception. This batch of Events will be dropped, and their EventHandles will be released:
java.lang.ClassCastException: class org.opensearch.dataprepper.model.log.JacksonLog cannot be cast to class org.opensearch.dataprepper.model.trace.Span (org.opensearch.dataprepper.model.log.JacksonLog and org.opensearch.dataprepper.model.trace.Span are in unnamed module of loader 'app')
at org.opensearch.dataprepper.plugins.processor.oteltrace.OTelTraceRawProcessor.doExecute(OTelTraceRawProcessor.java:89) ~[otel-trace-raw-processor-2.10.1.jar:?]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.10.1.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) ~[micrometer-core-1.13.0.jar:1.13.0]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) ~[data-prepper-api-2.10.1.jar:?]
at org.opensearch.dataprepper.peerforwarder.PeerForwardingProcessorDecorator.execute(PeerForwardingProcessorDecorator.java:103) ~[data-prepper-core-2.10.1.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:139) [data-prepper-core-2.10.1.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.10.1.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
2025-02-20T15:56:09,426 [service-map-pipeline-processor-worker-5-thread-1] ERROR org.opensearch.dataprepper.pipeline.ProcessWorker - A processor threw an exception. This batch of Events will be dropped, and their EventHandles will be released:
java.lang.ClassCastException: class org.opensearch.dataprepper.model.event.JacksonEvent cannot be cast to class org.opensearch.dataprepper.model.trace.Span (org.opensearch.dataprepper.model.event.JacksonEvent and org.opensearch.dataprepper.model.trace.Span are in unnamed module of loader 'app')
at org.opensearch.dataprepper.plugins.processor.ServiceMapStatefulProcessor.lambda$doExecute$5(ServiceMapStatefulProcessor.java:152) ~[service-map-stateful-2.10.1.jar:?]
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
at org.opensearch.dataprepper.plugins.processor.ServiceMapStatefulProcessor.doExecute(ServiceMapStatefulProcessor.java:152) ~[service-map-stateful-2.10.1.jar:?]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.lambda$execute$0(AbstractProcessor.java:54) ~[data-prepper-api-2.10.1.jar:?]
at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:69) ~[micrometer-core-1.13.0.jar:1.13.0]
at org.opensearch.dataprepper.model.processor.AbstractProcessor.execute(AbstractProcessor.java:54) ~[data-prepper-api-2.10.1.jar:?]
at org.opensearch.dataprepper.peerforwarder.PeerForwardingProcessorDecorator.execute(PeerForwardingProcessorDecorator.java:103) ~[data-prepper-core-2.10.1.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:139) [data-prepper-core-2.10.1.jar:?]
at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.10.1.jar:?]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
```
Is there a solution to this problem? DataPrepper can work with data obtained from Kafka when Kafka is used at the buffer stage, but it cannot work with the same data when Kafka is used at the source stage. As a result, the situation looks like a bug.
**Environment (please complete the following information):**
- OS: Debian 12
- Data Prepper version: 2.10.1
| OTel Trace from kafka source with otel_trace_raw | https://api.github.com/repos/opensearch-project/data-prepper/issues/5446/comments | 4 | 2025-02-20T15:28:41Z | 2025-03-13T19:13:41Z | https://github.com/opensearch-project/data-prepper/issues/5446 | 2,866,515,186 | 5,446 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Data Prepper will produce a stack overflow error when writing to a dynamic index if the value of the field looks like a Data Prepper expression. After it evaluates the index name, it thinks the result is still a dynamic index and tries to evaluate it again.
**To Reproduce**
1. Create a pipeline that sets a value to look like a Data Prepper expression. Use that value to write to an index.
Here is an example
```
opensearch-stackoverflow-pipeline:
workers: 20
source:
http:
processor:
- add_entries:
entries:
- key: "/indexName"
value: "${/indexName}"
sink:
- stdout:
- opensearch:
hosts: [ "https://opensearch:9200" ]
insecure: true
username: admin
password: admin
index: "${/indexName}"
refresh_interval: -1
```
2. Run Data Prepper
3. Ingest data
```
curl http://localhost:2021/log/ingest -X POST -H 'Content-Type: application/json' -d '[{"value": "xyz"}]'
```
4. You get a stack overflow
```
data-prepper | {"value":"xyz","indexName":"${/indexName}"}
data-prepper | 2025-02-19T14:56:40,127 [opensearch-stackoverflow-pipeline-processor-worker-1-thread-6] ERROR org.opensearch.dataprepper.pipeline.common.FutureHelper - FutureTask failed due to:
data-prepper | java.util.concurrent.ExecutionException: java.lang.StackOverflowError
data-prepper | at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
data-prepper | at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
data-prepper | at org.opensearch.dataprepper.pipeline.common.FutureHelper.awaitFuturesIndefinitely(FutureHelper.java:29) [data-prepper-core-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.pipeline.ProcessWorker.postToSink(ProcessWorker.java:181) [data-prepper-core-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.pipeline.ProcessWorker.doRun(ProcessWorker.java:154) [data-prepper-core-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.pipeline.ProcessWorker.run(ProcessWorker.java:61) [data-prepper-core-2.10.2.jar:?]
data-prepper | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
data-prepper | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
data-prepper | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
data-prepper | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
data-prepper | at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
data-prepper | Caused by: java.lang.StackOverflowError
data-prepper | at com.github.benmanes.caffeine.cache.BoundedBuffer$RingBuffer.<init>(BoundedBuffer.java:65) ~[caffeine-3.1.8.jar:3.1.8]
data-prepper | at com.github.benmanes.caffeine.cache.BoundedBuffer.create(BoundedBuffer.java:54) ~[caffeine-3.1.8.jar:3.1.8]
data-prepper | at com.github.benmanes.caffeine.cache.StripedBuffer.expandOrRetry(StripedBuffer.java:204) ~[caffeine-3.1.8.jar:3.1.8]
data-prepper | at com.github.benmanes.caffeine.cache.StripedBuffer.offer(StripedBuffer.java:133) ~[caffeine-3.1.8.jar:3.1.8]
data-prepper | at com.github.benmanes.caffeine.cache.BoundedLocalCache.afterRead(BoundedLocalCache.java:1286) ~[caffeine-3.1.8.jar:3.1.8]
data-prepper | at com.github.benmanes.caffeine.cache.BoundedLocalCache.getIfPresent(BoundedLocalCache.java:2206) ~[caffeine-3.1.8.jar:3.1.8]
data-prepper | at com.github.benmanes.caffeine.cache.LocalManualCache.getIfPresent(LocalManualCache.java:56) ~[caffeine-3.1.8.jar:3.1.8]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:83) ~[opensearch-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:90) ~[opensearch-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:90) ~[opensearch-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:90) ~[opensearch-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:90) ~[opensearch-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:90) ~[opensearch-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:90) ~[opensearch-2.10.2.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.index.DynamicIndexManager.getIndexName(DynamicIndexManager.java:90) ~[opensearch-2.10.2.jar:?]
```
5. Data Prepper also shuts down.
**Expected behavior**
These events should go into the DLQ successfully.
**Environment (please complete the following information):**
Data Prepper 2.10.2
**Additional context**
N/A
| [BUG] OpenSearch sink stack overflow when dynamic index name does not evaluate | https://api.github.com/repos/opensearch-project/data-prepper/issues/5444/comments | 1 | 2025-02-19T15:01:59Z | 2025-04-10T08:24:37Z | https://github.com/opensearch-project/data-prepper/issues/5444 | 2,863,619,276 | 5,444 |