| column | type | min | max |
|--------|------|-----|-----|
| issue_owner_repo | listlengths | 2 | 2 |
| issue_body | stringlengths | 0 | 262k |
| issue_title | stringlengths | 1 | 1.02k |
| issue_comments_url | stringlengths | 53 | 116 |
| issue_comments_count | int64 | 0 | 2.49k |
| issue_created_at | stringdate | 1999-03-17 02:06:42 | 2025-06-23 11:41:49 |
| issue_updated_at | stringdate | 2000-02-10 06:43:57 | 2025-06-23 11:43:00 |
| issue_html_url | stringlengths | 34 | 97 |
| issue_github_id | int64 | 132 | 3.17B |
| issue_number | int64 | 1 | 215k |
[ "libjxl", "libjxl" ]
I was asked to create this issue from Discord. The libjxl encoder API produces substantially larger files than the cjxl tool in lossless mode because it uses 14-bit XYB, which besides being larger is also not truly lossless. Code used:

```cpp
m_opt = JxlEncoderOptionsCreate(m_enc, nullptr);
if (quality < 0)
    quality = 0;
JXLEE(JxlEncoderOptionsSetDistance(m_opt, quality));
JXLEE(JxlEncoderOptionsSetLossless(m_opt, quality == 0 ? JXL_TRUE : JXL_FALSE));
JXLEE(JxlEncoderOptionsSetEffort(m_opt, 3));

static constexpr JxlPixelFormat PFMT {
    .num_channels = 3,
    .data_type = JXL_TYPE_UINT8,
    .endianness = JXL_NATIVE_ENDIAN,
    .align = 0
};

m_info.xsize = width;
m_info.ysize = height;
m_info.bits_per_sample = 8;
m_info.num_color_channels = 3;
m_info.intensity_target = 255;
m_info.orientation = JXL_ORIENT_IDENTITY;
JXLEE(JxlEncoderSetBasicInfo(m_enc, &m_info));

JxlColorEncoding color_profile;
JxlColorEncodingSetToSRGB(&color_profile, JXL_FALSE);
JXLEE(JxlEncoderSetColorEncoding(m_enc, &color_profile));

JXLEE(JxlEncoderAddImageFrame(m_opt, &PFMT, buf, width * height * 3));
JxlEncoderCloseInput(m_enc);
```

(`quality` is set to 0 in this circumstance.)
14-bit XYB encoding incorrectly used when JxlEncoderOptionsSetLossless is set to JXL_TRUE
https://api.github.com/repos/libjxl/libjxl/issues/257/comments
5
2021-06-30T20:26:25Z
2022-03-25T16:22:35Z
https://github.com/libjxl/libjxl/issues/257
934,099,251
257
[ "libjxl", "libjxl" ]
**Describe the bug**

```
1576 - DecodeTest.PixelTestOpaqueSrgbLossyNoise (Failed)
1637 - DecodeTest/DecodeTestParam.PixelTest/301x33RGBtoRGBu8#GetParam()=301x33RGBtoRGBu8 (Failed)
1661 - DecodeTest/DecodeTestParam.PixelTest/301x33RGBtoRGBAu8#GetParam()=301x33RGBtoRGBAu8 (Failed)
1957 - RoundtripTest.Uint8FrameRoundtripTest (Failed)
1960 - RoundtripTest.TestICCProfile (Failed)
```

**To Reproduce**
Steps to reproduce the behavior: `./ci.sh asan`
Tests fail on ASAN build
https://api.github.com/repos/libjxl/libjxl/issues/254/comments
1
2021-06-30T16:42:55Z
2021-07-01T07:39:13Z
https://github.com/libjxl/libjxl/issues/254
933,900,822
254
[ "libjxl", "libjxl" ]
**Describe the bug**
Encoding `011.png` with modular mode crashes, lossy and lossless. It also happens when encoding `.ppm` and `.pgx` files created from the `.png` file. "Works" after transcoding to `.pfm`, but this changes the image and the resulting `.jxl` is larger than the original `.png`.

**To Reproduce**
```
~ $ cjxl -m 011.png 011.jxl
JPEG XL encoder v0.3.7 [AVX2]
build/libjxl-git/src/libjxl/lib/extras/codec_png.cc:497: PNG: no color_space/icc_pathname given, assuming sRGB
Read 822x1168 image, 38.3 MP/s
Encoding [Modular, lossless, squirrel], 8 threads.
build/libjxl-git/src/libjxl/lib/jxl/enc_ans.cc:220: JXL_DASSERT: n <= 255
fish: Job 1, 'cjxl -m 011.png 011.jxl' terminated by signal SIGILL (Illegal instruction)
```

**Expected behavior**
It should be possible to encode this image in modular mode.

**Environment**
- OS: Arch Linux
- Compiler version: clang 12.0.0
- CPU type: ryzen 2700x x86_64
- cjxl/djxl version string: 8193d7b370c36f3df0528f1b22c481c182627aa5

**Additional context**
Compiled with https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=libjxl-git and
```
CFLAGS="-march=native -O3 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -fno-plt -pthread -Wno-error -w"
CXXFLAGS="$CFLAGS"
```
https://user-images.githubusercontent.com/7374061/123955891-707f7100-d9aa-11eb-9ff4-d8af1667466f.png
Modular mode crashes with SIGILL; JXL_DASSERT: n <= 255
https://api.github.com/repos/libjxl/libjxl/issues/251/comments
3
2021-06-30T11:58:30Z
2022-03-29T18:42:04Z
https://github.com/libjxl/libjxl/issues/251
933,629,377
251
[ "libjxl", "libjxl" ]
**Describe the bug**
When decoding 003.jxl, which was losslessly transcoded from a jpg, the resulting png file is only 1953 bytes and all black, even though djxl didn't throw any errors. Decoding to the original jpg file is still possible, but only because `--strip` was not used; otherwise it would be impossible to decode this file at all.

**To Reproduce**
(unpack the attached zip file to get 003.jxl)
```
~ $ djxl 003.jxl 003.png
JPEG XL decoder v0.3.7 [AVX2]
Read 274618 compressed bytes.
Decoded to pixels.
752 x 1080, 14.64 MP/s [14.64, 14.64], 1 reps, 8 threads.
Allocations: 476 (max bytes in use: 3.297596E+07)
~ $ djxl 003.jxl 003.jpg
JPEG XL decoder v0.3.7 [AVX2]
Read 274618 compressed bytes.
Reconstructed to JPEG.
752 x 1080, 16.33 MP/s [16.33, 16.33], 6.64 MB/s [6.64, 6.64], 1 reps, 8 threads.
Allocations: 429 (max bytes in use: 3.297534E+07)
~ $ du -b 003.*
330011  003.jpg
274618  003.jxl
1953    003.png
```

**Expected behavior**
It should be possible to decode this file to png.

**Environment**
- OS: Arch Linux
- Compiler version: clang 12.0.0
- CPU type: ryzen 2700x x86_64
- cjxl/djxl version string: 8193d7b370c36f3df0528f1b22c481c182627aa5

**Additional context**
Compiled with https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=libjxl-git and
```
CFLAGS="-march=native -O3 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -fno-plt -pthread -Wno-error -w"
CXXFLAGS="$CFLAGS"
```
003.jxl is inside a zip file, otherwise github wouldn't accept it. [003.zip](https://github.com/libjxl/libjxl/files/6736434/003.zip)
Decoding to png results in a small black image.
https://api.github.com/repos/libjxl/libjxl/issues/248/comments
3
2021-06-29T21:12:23Z
2021-07-05T15:36:08Z
https://github.com/libjxl/libjxl/issues/248
933,107,249
248
[ "libjxl", "libjxl" ]
Hello, I got an error when I tried to decode a JXL file after truncating it to 1 KB. I have attached below the PNG image that I encoded. It looks like this issue happens with PNG images that have transparency.

Error message:
```
JPEG XL decoder v0.3.7 [AVX2,SSE4,Scalar]
Read 1024 compressed bytes.
Failed to decompress to pixels.
```

My steps:
1. `cjxl <input_png> <output_jxl> -p --distance=15`
2. truncate the output jxl file (using `std::filesystem::file_size` in C++)
3. `djxl <input_jxl> <output_png> --allow_partial_files --allow_more_progressive_steps`

Environment:
- OS: Windows 10
- Compiler version: Clang x64, Visual Studio 2019
- CPU type: x86_64
- cjxl/djxl version string: decoder v0.3.7 [AVX2,SSE4,Scalar], encoder v0.3.7 [AVX2,SSE4,Scalar]

Image:
![Rough1](https://user-images.githubusercontent.com/29024201/123817090-859fc580-d900-11eb-9800-1d10081e1d08.png)
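For reference, the truncation in step 2 can be reproduced with a few lines of script. This is a hedged sketch (file names are hypothetical stand-ins, and dummy bytes replace a real encoded file), not the reporter's actual C++ code:

```python
import os
import tempfile

def truncate_copy(src: str, dst: str, keep_bytes: int = 1024) -> int:
    """Copy the first keep_bytes of src to dst; return the resulting size."""
    with open(src, "rb") as f:
        head = f.read(keep_bytes)
    with open(dst, "wb") as f:
        f.write(head)
    return os.path.getsize(dst)

# Stand-in for an encoded .jxl file (4096 bytes of dummy data).
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "output.jxl")
dst = os.path.join(tmp, "truncated.jxl")
with open(src, "wb") as f:
    f.write(bytes(range(256)) * 16)

size = truncate_copy(src, dst)
print(size)  # 1024
```

The truncated copy is then what gets handed to djxl with `--allow_partial_files`.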
Fail decoding of truncated JXL file
https://api.github.com/repos/libjxl/libjxl/issues/245/comments
8
2021-06-29T14:37:13Z
2021-11-19T15:56:54Z
https://github.com/libjxl/libjxl/issues/245
932,766,253
245
[ "libjxl", "libjxl" ]
**Describe the bug**
`-e 3` produces a smaller file than `-e 9` when losslessly compressing the attached jpg.

**To Reproduce**
```
cjxl -e 3 a.jpg a-3.jxl
cjxl -e 9 a.jpg a-9.jxl
du -b *

JPEG XL encoder v0.3.7 [AVX2]
Read 1200x1000 image, 112.8 MP/s
Encoding [Container | JPEG, lossless transcode, falcon | JPEG reconstruction data], 4 threads.
Compressed to 127967 bytes (0.853 bpp).
1200 x 1000, 30.81 MP/s [30.81, 30.81], 1 reps, 4 threads.
Including container: 128487 bytes (0.857 bpp).
JPEG XL encoder v0.3.7 [AVX2]
Read 1200x1000 image, 104.5 MP/s
Encoding [Container | JPEG, lossless transcode, tortoise | JPEG reconstruction data], 4 threads.
Compressed to 128733 bytes (0.858 bpp).
1200 x 1000, 0.42 MP/s [0.42, 0.42], 1 reps, 4 threads.
Including container: 129253 bytes (0.862 bpp).
128487  a-3.jxl
129253  a-9.jxl
161639  a.jpg
```

**Expected behavior**
`-e 9` should produce the smallest files out of all the presets.

**Environment**
- OS: Arch Linux
- Compiler version: clang 12.0.0
- CPU type: ryzen 3500U x86_64
- cjxl/djxl version string: f8790509f3588413413682b611cf11f18f4498b1

**Additional context**
Compiled with https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=libjxl-git and
```
CFLAGS="-march=native -O3 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -fno-plt -pthread -Wno-error -w"
CXXFLAGS="$CFLAGS"
```
https://user-images.githubusercontent.com/7374061/123633993-2cf5fd00-d81a-11eb-8be1-150ac3fbdd95.jpg
Lossless JPEG -e 3 smaller than -e 9
https://api.github.com/repos/libjxl/libjxl/issues/235/comments
4
2021-06-28T12:20:23Z
2021-11-18T09:41:58Z
https://github.com/libjxl/libjxl/issues/235
931,512,361
235
[ "libjxl", "libjxl" ]
Arch Linux
GCC 11.1.0
JPEG XL encoder v0.3.7 [AVX2,SSE4,Scalar]

I tried to load an assortment of JXL images and there seems to be a bug specifically with JXLs that were transcoded from JPEG; it only happens in the GIMP plugin. The issue does not occur with lossless JXL or JXLs encoded with -q < 100, only with JPEG -> JXL. I get a GIMP error: "Opening '...' failed: JPEG XL image plug-in could not open image" and then the following output in the console:

```
/home/.cache/yay/libjxl-git/src/libjxl/lib/jxl/image_metadata.cc:354: JXL_FAILURE: invalid min 0.001234 vs max 0.001001
/home/.cache/yay/libjxl-git/src/libjxl/lib/jxl/fields.cc:697: JXL_RETURN_IF_ERROR code=1: visitor.Visit(fields, PrintVisitors() ? "-- Read\n" : "")
/home/.cache/yay/libjxl-git/src/libjxl/lib/jxl/dec_file.cc:33: JXL_RETURN_IF_ERROR code=1: ReadImageMetadata(reader, &io->metadata.m)
/home/.cache/yay/libjxl-git/src/libjxl/lib/jxl/dec_file.cc:108: JXL_RETURN_IF_ERROR code=1: DecodeHeaders(&reader, io)
/home/.cache/yay/libjxl-git/src/libjxl/plugins/gimp/file-jxl-load.cc:95: JXL_RETURN_IF_ERROR code=1: DecodeFile(dparams, compressed, &io, &pool)
```
GIMP plugin fails to open JXLs which were transcoded from JPEG
https://api.github.com/repos/libjxl/libjxl/issues/229/comments
5
2021-06-26T00:30:34Z
2021-08-25T08:55:46Z
https://github.com/libjxl/libjxl/issues/229
930,575,447
229
[ "libjxl", "libjxl" ]
Hello honored devs, cjxl keeps the alpha channel even when it is fully opaque. That's not a huge deal regarding file size but it negatively affects RAM consumption and especially decoding performance. The encoder should automatically drop such unnecessary channels and maybe have a new parameter `--always-keep-alpha` if someone really wants to do that. Thanks!
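As a sketch of the heuristic the encoder would need, checking for a fully opaque alpha plane on an 8-bit interleaved RGBA buffer is cheap. The function below is a hypothetical illustration, not part of the libjxl API:

```python
def alpha_is_fully_opaque(rgba: bytes, opaque: int = 255) -> bool:
    """True if every alpha sample (every 4th byte of interleaved RGBA) is opaque."""
    return all(rgba[i] == opaque for i in range(3, len(rgba), 4))

opaque_buf = bytes([10, 20, 30, 255]) * 4              # four fully opaque pixels
mixed_buf = bytes([10, 20, 30, 255, 10, 20, 30, 128])  # one translucent pixel

print(alpha_is_fully_opaque(opaque_buf))  # True
print(alpha_is_fully_opaque(mixed_buf))   # False
```

An encoder could run a check like this once over the input and encode three channels instead of four when it returns True, unless the proposed `--always-keep-alpha` flag is given.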
Ability to drop fully opaque alpha channel
https://api.github.com/repos/libjxl/libjxl/issues/219/comments
3
2021-06-24T12:36:34Z
2022-03-29T10:37:48Z
https://github.com/libjxl/libjxl/issues/219
929,184,869
219
[ "libjxl", "libjxl" ]
**Is your feature request related to a problem? Please describe.**
JPEG XL seems to have several features that allow it to be a PSD replacement, but all the necessary metadata is currently encapsulated in the internal API. The only thing that is possible with the current API is to convert a PSD to JXL (the inverse conversion is not implemented). In any other case, only the final blended image is accessible. This feature request aims to make it possible to manipulate layered JXL images in image editor applications. To do that, the API needs to support reading and writing unblended images with their crop coordinates.

**Describe the solution you'd like**
The encoder can already take the input frame by frame. Therefore this could be implemented by allowing additional metadata (crop coordinates, blend mode) when calling `JxlEncoderAddImageFrame`. The decoder can handle the output frame by frame as well, but the API is currently intended for full-sized frames. I propose to add a decoder option that, when enabled, outputs unblended, cropped frames. The current `JXL_DEC_FRAME` and `JXL_DEC_NEED_IMAGE_OUT_BUFFER` events could be reused to inform the application to resize the buffer when necessary.

**Describe alternatives you've considered**
TBD. The proposal is subject to change.

**Unresolved questions**
- Should we expose internal frames? For PSD-like use cases they are likely not used, but they might be useful for parsing animation files.
- Should we allow the application to have control over reference frames? This could be used to control the "preferred final image", where some layers are hidden by default. It could also be used to create images where there's one "base" layer and multiple switchable "patch" layers, and the switchable variants would only depend on the base frame instead of sequentially depending on previous frames.
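A minimal sketch of what the proposed per-frame metadata could look like; every name here is hypothetical and nothing below exists in the current libjxl API:

```python
from dataclasses import dataclass
from enum import Enum

class BlendMode(Enum):
    REPLACE = 0   # frame replaces the pixels under its crop
    BLEND = 1     # frame alpha-blends onto the canvas

@dataclass
class LayerFrameInfo:
    """Hypothetical metadata a layered-frame encoder call would accept per frame."""
    crop_x: int
    crop_y: int
    width: int
    height: int
    blend_mode: BlendMode = BlendMode.BLEND
    save_as_reference: bool = False  # keep as a base for later "patch" layers

layer = LayerFrameInfo(crop_x=16, crop_y=32, width=128, height=64)
print(layer.blend_mode.name)  # BLEND
```

On the decoder side, the same record could be surfaced alongside each `JXL_DEC_FRAME` event when the proposed unblended-output option is enabled.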
Expose APIs for crop coordinates/unblended frames
https://api.github.com/repos/libjxl/libjxl/issues/217/comments
2
2021-06-24T06:59:28Z
2022-03-29T10:33:31Z
https://github.com/libjxl/libjxl/issues/217
928,911,482
217
[ "libjxl", "libjxl" ]
I packaged `libjxl` for nixpkgs; the `x86_64-linux` build works fine, but the `aarch64-linux` build fails because apparently `skcms` triggers a GCC 9.3.0 `internal compiler error`: https://github.com/NixOS/nixpkgs/pull/103160#issuecomment-866388610 Since this is just a free-time experiment for me and I don't have time to chase this up to Skia or GCC, I'm reporting it here in case you are interested in following up on it.
build failure on aarch64-linux due to `skcms` triggering GCC internal compiler error
https://api.github.com/repos/libjxl/libjxl/issues/213/comments
1
2021-06-22T22:57:51Z
2022-03-29T10:31:42Z
https://github.com/libjxl/libjxl/issues/213
927,694,502
213
[ "libjxl", "libjxl" ]
**Describe the bug**
As in the title: when trying to run cjxl or djxl, an error shows up: "cjxl: error while loading shared libraries: libOpenEXR-3_0.so.27: cannot open shared object file: No such file or directory". Seeing as this was a problem with OpenEXR, I have tried both the repository version 3.0.4 and the git version from the AUR, but this still happens with either version.

**To Reproduce**
Steps to reproduce the behavior:
1. Try to run cjxl or djxl, with either openexr 3.0.4 or -git, with any options.

**Expected behavior**
cjxl or djxl should print a list of options.

**Environment**
- OS: Manjaro
- Compiler version: clang 12.0.0
- CPU type: x86_64
- cjxl/djxl version string: unable to retrieve, as the error stops the program from running, but the installed version is 3.7.r80-git
when using cjxl or djxl error:"cjxl: error while loading shared libraries: libOpenEXR-3_0.so.27: cannot open shared object file: No such file or directory" happens
https://api.github.com/repos/libjxl/libjxl/issues/204/comments
8
2021-06-18T08:42:17Z
2021-11-22T06:24:00Z
https://github.com/libjxl/libjxl/issues/204
924,682,850
204
[ "libjxl", "libjxl" ]
**Is your feature request related to a problem? Please describe.**
It would be good to evaluate butteraugli on the CLIC-2021 perceptual quality task. This should provide additional information to the community with respect to its performance characteristics when compared to other potentially usable perceptual quality metrics that JPEG XL could optimize for.

**Describe the solution you'd like**
Please use the test data from the CLIC 2021 perceptual challenge to generate a CSV file with the decisions (see link below for the exact instructions):
https://github.com/fab-jul/clic2021-devkit/blob/main/README.md#perceptual-challenge

Please email them to me to get the final results / ranks. We'll publish these at:
http://compression.cc/leaderboard/perceptual/test/
...and of course update this bug tracker.
Evaluate butteraugli on the CLIC-2021 perceptual quality task
https://api.github.com/repos/libjxl/libjxl/issues/202/comments
6
2021-06-17T20:54:26Z
2022-03-29T10:30:30Z
https://github.com/libjxl/libjxl/issues/202
924,325,924
202
[ "libjxl", "libjxl" ]
`cjxl.exe image_21447_24bit.png image_21447_24bit.jxl -v -m -q 100 -s 5 -C 1 --num_threads=4`

```
JPEG XL encoder v0.3.7 0.3.7-13649d2b [Scalar]
codec_png.cc:496: PNG: no color_space/icc_pathname given, assuming sRGB
Read 1563x1558 image, 20.7 MP/s
Encoding [Modular, lossless, hare], 4 threads.
transform.cc:70: JXL_FAILURE: Invalid channel range
```

https://github.com/libjxl/libjxl/commit/30dae3b5b0cb70134dd08964a6bb1f247f2ac412
JXL_FAILURE: Invalid channel range
https://api.github.com/repos/libjxl/libjxl/issues/184/comments
3
2021-06-16T10:32:08Z
2021-06-17T10:17:08Z
https://github.com/libjxl/libjxl/issues/184
922,434,216
184
[ "libjxl", "libjxl" ]
I followed the "build for Windows" documentation.

![errors](https://user-images.githubusercontent.com/55511549/122160378-1d68d280-ce70-11eb-8404-ba95b01af37d.png)

A lot of the errors seem to be in test_util-int.h. How can I skip the test build to see if the rest works? Or how else can I build it for Windows?
Build for windows (advanced) doesn't build
https://api.github.com/repos/libjxl/libjxl/issues/180/comments
15
2021-06-16T04:59:16Z
2022-03-29T10:29:49Z
https://github.com/libjxl/libjxl/issues/180
922,095,994
180
[ "libjxl", "libjxl" ]
Hello, I built cjxl with the command `BUILD_TARGET=x86_64-w64-mingw32 SKIP_TEST=1 CC=clang-7 CXX=clang++-7 ./ci.sh release` on Docker. The build worked and I got cjxl.exe, but when I run it from a command line nothing happens: it returns immediately without doing anything or printing errors. What dependencies should I install on Windows to run the exe? How can I modify the build command line so that all the dependencies are embedded in the exe? Thank you
docker build for windows, can't run exe
https://api.github.com/repos/libjxl/libjxl/issues/179/comments
8
2021-06-16T04:48:53Z
2021-06-16T20:53:56Z
https://github.com/libjxl/libjxl/issues/179
922,088,884
179
[ "libjxl", "libjxl" ]
```
JPEG XL decoder v0.3.7 [AVX2,SSE4,Scalar]
/C/msys64/home/eustas/clients/libjxl/tools/cpu/cpu.cc:415: JXL_FAILURE: Unable to detect processor topology
Failed to choose default num_threads; you can avoid this error by specifying a --num_threads N argument.
```
MSYS2 build fails to detect architecture (and number of threads)
https://api.github.com/repos/libjxl/libjxl/issues/168/comments
3
2021-06-14T14:32:51Z
2021-11-08T20:44:10Z
https://github.com/libjxl/libjxl/issues/168
920,476,110
168
[ "libjxl", "libjxl" ]
Hello, djxl crashes on some (not all) JXL files during decoding.

```
$ djxl jpegxl-logo.jxl output.png --num_threads=1
Segmentation fault
```

Test file: [jpegxl-logo.jxl](https://github.com/novomesk/qt-jpegxl-image-plugin/blob/main/testfiles/jpegxl-logo.jxl)

```
(gdb) run
Starting program: C:\msys64\mingw64\bin\djxl.exe jpegxl-logo.jxl output.png "--num_threads=1"
[New Thread 10912.0x3630]
[New Thread 10912.0x32d0]
[New Thread 10912.0x1a7c]
[New Thread 10912.0xc94]

Thread 5 received signal SIGSEGV, Segmentation fault.
[Switching to Thread 10912.0xc94]
0x00007ff64ad95967 in jxl::N_AVX2::ComputePixelChannel<hwy::N_AVX2::Simd<float, 8ull> >(hwy::N_AVX2::Simd<float, 8ull>, float, float const*, float const*, float const*, decltype (Zero((hwy::N_AVX2::Simd<float, 8ull>)()))*, decltype (Zero((hwy::N_AVX2::Simd<float, 8ull>)()))*, decltype (Zero((hwy::N_AVX2::Simd<float, 8ull>)()))*, unsigned long long) (x=8, gap=0xfddbdfb7e0, sm=0xfddbdfb840, mc=0xfddbdfb8a0, row_bottom=0x2414425fb80, row=0x2414425f980, row_top=0x2414425f780, dc_factor=0.000206910816, d=...)
    at C:/msys64/home/daniel/libjxl/lib/jxl/compressed_dc.cc:96
96          *gap = MaxWorkaround(*gap, Abs((*mc - *sm) / dc_quant));
(gdb) bt
#0  0x00007ff64ad95967 in jxl::N_AVX2::ComputePixelChannel<hwy::N_AVX2::Simd<float, 8ull> >(hwy::N_AVX2::Simd<float, 8ull>, float, float const*, float const*, float const*, decltype (Zero((hwy::N_AVX2::Simd<float, 8ull>)()))*, decltype (Zero((hwy::N_AVX2::Simd<float, 8ull>)()))*, decltype (Zero((hwy::N_AVX2::Simd<float, 8ull>)()))*, unsigned long long) (x=8, gap=0xfddbdfb7e0, sm=0xfddbdfb840, mc=0xfddbdfb8a0, row_bottom=0x2414425fb80, row=0x2414425f980, row_top=0x2414425f780, dc_factor=0.000206910816, d=...)
    at C:/msys64/home/daniel/libjxl/lib/jxl/compressed_dc.cc:96
#1  jxl::N_AVX2::ComputePixel<hwy::N_AVX2::Simd<float, 8ull> > (x=8, out_rows=0xfddbdfd310, rows_bottom=0xfddbdfd330, rows=0xfddbdfd350, rows_top=0xfddbdfd370, dc_factors=0xfddb5fd400) at C:/msys64/home/daniel/libjxl/lib/jxl/compressed_dc.cc:114
#2  operator() (__closure=0xfddb5fc200, y=1) at C:/msys64/home/daniel/libjxl/lib/jxl/compressed_dc.cc:188
#3  0x00007ff64adb4090 in jxl::ThreadPool::RunCallState<jxl::Status(long long unsigned int), jxl::N_AVX2::AdaptiveDCSmoothing(float const*, jxl::Image3F*, jxl::ThreadPool*)::<lambda(int, int)> >::CallDataFunc(void *, uint32_t, size_t) (jpegxl_opaque=0xfddb5fc0b0, value=1, thread_id=0) at C:/msys64/home/daniel/libjxl/lib/jxl/base/data_parallel.h:88
#4  0x00007ff64ae06bb2 in jpegxl::ThreadParallelRunner::RunRange (self=0xfddb5ff930, command=4294967439, thread=0) at C:/msys64/home/daniel/libjxl/lib/threads/thread_parallel_runner_internal.cc:137
#5  0x00007ff64ae06cb8 in jpegxl::ThreadParallelRunner::ThreadFunc (self=0xfddb5ff930, thread=0) at C:/msys64/home/daniel/libjxl/lib/threads/thread_parallel_runner_internal.cc:167
#6  0x00007ff64b4977aa in std::__invoke_impl<void, void (*)(jpegxl::ThreadParallelRunner*, int), jpegxl::ThreadParallelRunner*, unsigned int> (__f=@0x241441e31d8: 0x7ff64ae06bc0 <jpegxl::ThreadParallelRunner::ThreadFunc(jpegxl::ThreadParallelRunner*, int)>) at C:/msys64/mingw64/include/c++/10.3.0/bits/invoke.h:60
#7  0x00007ff64b4ae2c0 in std::__invoke<void (*)(jpegxl::ThreadParallelRunner*, int), jpegxl::ThreadParallelRunner*, unsigned int> (__fn=@0x241441e31d8: 0x7ff64ae06bc0 <jpegxl::ThreadParallelRunner::ThreadFunc(jpegxl::ThreadParallelRunner*, int)>) at C:/msys64/mingw64/include/c++/10.3.0/bits/invoke.h:95
#8  0x00007ff64b45bbc0 in std::thread::_Invoker<std::tuple<void (*)(jpegxl::ThreadParallelRunner*, int), jpegxl::ThreadParallelRunner*, unsigned int> >::_M_invoke<0ull, 1ull, 2ull> (this=0x241441e31c8) at C:/msys64/mingw64/include/c++/10.3.0/thread:264
#9  0x00007ff64b45bbe7 in std::thread::_Invoker<std::tuple<void (*)(jpegxl::ThreadParallelRunner*, int), jpegxl::ThreadParallelRunner*, unsigned int> >::operator() (this=0x241441e31c8) at C:/msys64/mingw64/include/c++/10.3.0/thread:271
#10 0x00007ff64b45b94c in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)(jpegxl::ThreadParallelRunner*, int), jpegxl::ThreadParallelRunner*, unsigned int> > >::_M_run (this=0x241441e31c0) at C:/msys64/mingw64/include/c++/10.3.0/thread:215
#11 0x00007ff81eed06e1 in ?? () from C:\msys64\mingw64\bin\libstdc++-6.dll
#12 0x00007ff821d54f33 in ?? () from C:\msys64\mingw64\bin\libwinpthread-1.dll
#13 0x00007ff8340daf5a in msvcrt!_beginthreadex () from C:\WINDOWS\System32\msvcrt.dll
#14 0x00007ff8340db02c in msvcrt!_endthreadex () from C:\WINDOWS\System32\msvcrt.dll
#15 0x00007ff833827034 in KERNEL32!BaseThreadInitThunk () from C:\WINDOWS\System32\kernel32.dll
#16 0x00007ff834362651 in ntdll!RtlUserThreadStart () from C:\WINDOWS\SYSTEM32\ntdll.dll
#17 0x0000000000000000 in ?? ()
```

configuration: `cmake -G "MSYS Makefiles" -DCMAKE_INSTALL_PREFIX=/mingw64 -DCMAKE_BUILD_TYPE=Debug -DJPEGXL_ENABLE_PLUGINS=OFF -DBUILD_TESTING=OFF -DJPEGXL_WARNINGS_AS_ERRORS=OFF -DJPEGXL_ENABLE_SJPEG=OFF -DJPEGXL_ENABLE_BENCHMARK=OFF -DJPEGXL_ENABLE_EXAMPLES=OFF -DJPEGXL_ENABLE_MANPAGES=OFF -DJPEGXL_FORCE_SYSTEM_BROTLI=ON ..`

I observed that when I added `-DHWY_COMPILE_ONLY_SCALAR` to CXX_FLAGS, djxl worked correctly.
crash on Windows/MSYS2
https://api.github.com/repos/libjxl/libjxl/issues/165/comments
4
2021-06-14T11:44:01Z
2021-11-12T13:57:53Z
https://github.com/libjxl/libjxl/issues/165
920,329,106
165
[ "libjxl", "libjxl" ]
**Describe the bug**
fuzzer_corpus fails on an assert. This failure was introduced by https://github.com/libjxl/libjxl/pull/25

**To Reproduce**
Steps to reproduce the behavior:
```bash
mkdir corpusdir
build/tools/fuzzer_corpus -j 0 corpusdir
```
Result:
```
Generating ImageSpec<size=8x8 * chan=3 depth=8 alpha=0 (premult=0) x frames=1 seed=407987, speed=1, butteraugli=1.5, modular_mode=0, lossy_palette=0, noise=0, preview=0, fuzzer_friendly=0, is_reconstructible_jpeg=1, orientation=4> as a6abe1aab9970bb0771cf78b376749e2
../lib/jxl/enc_frame.cc:1066: JXL_ASSERT: metadata->m.xyb_encoded == (cparams.color_transform == ColorTransform::kXYB)
Illegal instruction
```

**Environment**
ci.sh release build; Linux build.
fuzzer_corpus hits an assert
https://api.github.com/repos/libjxl/libjxl/issues/150/comments
2
2021-06-10T13:32:26Z
2021-06-10T16:24:45Z
https://github.com/libjxl/libjxl/issues/150
917,351,879
150
[ "libjxl", "libjxl" ]
Could you please consider adding a new release tag? I would like to package libjxl for Void Linux (draft PR at https://github.com/void-linux/void-packages/pull/31397), which requires a release version. However, the last release (0.3.7) is outdated (old license, does not work with qt-jpegxl-image-plugin).
Request for new release tag
https://api.github.com/repos/libjxl/libjxl/issues/144/comments
4
2021-06-10T09:28:05Z
2021-08-06T19:52:12Z
https://github.com/libjxl/libjxl/issues/144
917,124,689
144
[ "libjxl", "libjxl" ]
Sorry to be the bearer of bad news, but the libvips fuzzers appear to have something against libjxl today :( This image causes an integer overflow when decoding using the latest code on the `main` branch.

[clusterfuzz-testcase-minimized-6288583458684928.txt](https://github.com/libjxl/libjxl/files/6626055/clusterfuzz-testcase-minimized-6288583458684928.txt)

```
/src/libjxl/lib/jxl/modular/encoding/encoding.cc:228:24: runtime error: signed integer overflow: 1107427842 + 1073742852 cannot be represented in type 'int'
    #0 0x10eedda in jxl::DecodeModularChannelMAANS(jxl::BitReader*, jxl::ANSSymbolReader*, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> > const&, std::__1::vector<jxl::PropertyDecisionNode, std::__1::allocator<jxl::PropertyDecisionNode> > const&, jxl::weighted::Header const&, int, unsigned long, jxl::Image*) libjxl/lib/jxl/modular/encoding/encoding.cc:228:24
    #1 0x10f179c in jxl::ModularDecode(jxl::BitReader*, jxl::Image&, jxl::GroupHeader&, unsigned long, jxl::ModularOptions*, std::__1::vector<jxl::PropertyDecisionNode, std::__1::allocator<jxl::PropertyDecisionNode> > const*, jxl::ANSCode const*, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> > const*, bool) libjxl/lib/jxl/modular/encoding/encoding.cc:487:5
    #2 0x10f1cc8 in jxl::ModularGenericDecompress(jxl::BitReader*, jxl::Image&, jxl::GroupHeader*, unsigned long, jxl::ModularOptions*, int, std::__1::vector<jxl::PropertyDecisionNode, std::__1::allocator<jxl::PropertyDecisionNode> > const*, jxl::ANSCode const*, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> > const*, bool) libjxl/lib/jxl/modular/encoding/encoding.cc:517:21
    #3 0x1456f7b in jxl::ModularFrameDecoder::DecodeGlobalInfo(jxl::BitReader*, jxl::FrameHeader const&, bool) libjxl/lib/jxl/dec_modular.cc:211:23
    #4 0x13a4938 in jxl::FrameDecoder::ProcessDCGlobal(jxl::BitReader*) libjxl/lib/jxl/dec_frame.cc:381:46
    #5 0x13a1191 in jxl::FrameDecoder::ProcessSections(jxl::FrameDecoder::SectionInfo const*, unsigned long,
```

Perhaps further clamping might be required here:
https://github.com/libjxl/libjxl/blob/87ebbe9d5cd7581afbcce650e6879bccf80e3beb/lib/jxl/modular/encoding/encoding.cc#L225-L228
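The clamping suggested above boils down to saturating the sum into the signed 32-bit range instead of letting it wrap. A hedged Python sketch of that idea, for illustration only (not the actual libjxl fix):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def saturating_add_i32(a: int, b: int) -> int:
    """Add two int32 values and clamp the result into the int32 range,
    sidestepping the undefined behavior of signed overflow in C."""
    return max(INT32_MIN, min(INT32_MAX, a + b))

# Exact operands from the sanitizer report: their true sum (2181170694)
# does not fit in a signed 32-bit int, so it clamps to INT32_MAX.
print(saturating_add_i32(1107427842, 1073742852))  # 2147483647
```

In C the same effect needs a widened intermediate type (e.g. int64_t) before the clamp, since the overflow itself is already undefined.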
Possible integer overflow in DecodeModularChannelMAANS
https://api.github.com/repos/libjxl/libjxl/issues/140/comments
1
2021-06-09T18:36:09Z
2021-06-10T07:09:47Z
https://github.com/libjxl/libjxl/issues/140
916,536,180
140
[ "libjxl", "libjxl" ]
Hello, the following file, discovered via fuzz testing, causes a divide by zero when decoded using the latest code on the `main` branch.

[clusterfuzz-testcase-minimized-jpegsave_buffer_fuzzer-5146219264475136.txt](https://github.com/libjxl/libjxl/files/6622107/clusterfuzz-testcase-minimized-jpegsave_buffer_fuzzer-5146219264475136.txt)

```
/src/libjxl/lib/jxl/splines.cc:105:39: runtime error: division by zero
    #0 0x14cb3e8 in jxl::N_AVX2::(anonymous namespace)::DrawGaussian(jxl::Image3<float>*, jxl::Rect const&, jxl::Rect const&, jxl::Spline::Point const&, float, float const*, float, std::__1::vector<int, std::__1::allocator<int> >&, std::__1::vector<int, std::__1::allocator<int> >&, std::__1::vector<float, std::__1::allocator<float> >&) libjxl/lib/jxl/splines.cc:105:39
    #1 0x14c7cbf in jxl::N_AVX2::(anonymous namespace)::DrawFromPoints(jxl::Image3<float>*, jxl::Rect const&, jxl::Rect const&, jxl::Spline const&, bool, std::__1::vector<std::__1::pair<jxl::Spline::Point, float>, std::__1::allocator<std::__1::pair<jxl::Spline::Point, float> > > const&, float) libjxl/lib/jxl/splines.cc:162:5
    #2 0x14c2de5 in jxl::Status jxl::Splines::Apply<true>(jxl::Image3<float>*, jxl::Rect const&, jxl::Rect const&, jxl::ColorCorrelationMap const&) const libjxl/lib/jxl/splines.cc:507:5
    #3 0x1479729 in jxl::FinalizeImageRect(jxl::Image3<float>*, jxl::Rect const&, std::__1::vector<std::__1::pair<jxl::Plane<float>*, jxl::Rect>, std::__1::allocator<std::__1::pair<jxl::Plane<float>*, jxl::Rect> > > const&, jxl::PassesDecoderState*, unsigned long, jxl::ImageBundle*, jxl::Rect const&) libjxl/lib/jxl/dec_reconstruct.cc:900:5
    #4 0x1481683 in operator() libjxl/lib/jxl/dec_reconstruct.cc:1160:12
```

The point of failure is:
https://github.com/libjxl/libjxl/blob/6946efdf32adfde7cb7d715ae06912da5521dac7/lib/jxl/splines.cc#L105

which appears to be due to `sigma` being calculated as zero here:
https://github.com/libjxl/libjxl/blob/6946efdf32adfde7cb7d715ae06912da5521dac7/lib/jxl/splines.cc#L160-L161

I'm happy to help fix this, but am unsure what the right approach is here. Perhaps we should avoid the `DrawGaussian` call when `sigma` is zero, or maybe this is a `JXL_FAILURE` condition?
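Of the two options, skipping the draw when `sigma` is zero can be sketched like this. This is a simplified 1-D stand-in with hypothetical names, not the real vectorized C++ in splines.cc:

```python
import math

def gaussian_weights(center: float, sigma: float, xs):
    """Gaussian weights at positions xs; a zero sigma yields no contribution
    instead of dividing by zero in the 1/sigma terms."""
    if sigma == 0.0:
        return [0.0 for _ in xs]  # guard: nothing to draw
    inv_two_sigma_sq = 1.0 / (2.0 * sigma * sigma)
    return [math.exp(-((x - center) ** 2) * inv_two_sigma_sq) for x in xs]

print(gaussian_weights(0.0, 0.0, [0.0, 1.0]))  # [0.0, 0.0]
print(gaussian_weights(0.0, 1.0, [0.0])[0])    # 1.0
```

The alternative the reporter mentions, treating a zero sigma as a `JXL_FAILURE`, would instead reject the bitstream as invalid rather than drawing nothing.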
Possible divide by zero in DrawGaussian function of splines.cc
https://api.github.com/repos/libjxl/libjxl/issues/129/comments
11
2021-06-09T08:23:49Z
2021-08-20T18:39:56Z
https://github.com/libjxl/libjxl/issues/129
915,932,026
129
[ "libjxl", "libjxl" ]
**Is your feature request related to a problem? Please describe.**
The current settings are not optimal.

**Describe the solution you'd like**
Add `-I 0 -P 0 --palette=0` by default when `--lossy-palette` is enabled and they are not specified.
Change the default settings for --lossy-palette
https://api.github.com/repos/libjxl/libjxl/issues/119/comments
2
2021-06-07T22:57:41Z
2025-04-27T03:11:52Z
https://github.com/libjxl/libjxl/issues/119
914,021,898
119
[ "libjxl", "libjxl" ]
Dear all, Can anyone tell me which class I should change in order to dump/write to file the image predicted by JPEG XL in modular mode for lossless compression? Kind regards, F
Dump image prediction
https://api.github.com/repos/libjxl/libjxl/issues/116/comments
7
2021-06-07T14:55:19Z
2021-06-09T12:22:35Z
https://github.com/libjxl/libjxl/issues/116
913,642,229
116
[ "libjxl", "libjxl" ]
Hello, Can someone help with building cjxl and djxl for macOS? (I managed to build a Debian version with the advanced guide for Debian.) Maybe a documentation page on building for different platforms like macOS, iOS, and Android could be added at some point? Thank you
How to build for mac os
https://api.github.com/repos/libjxl/libjxl/issues/115/comments
9
2021-06-07T05:57:22Z
2022-03-29T10:27:45Z
https://github.com/libjxl/libjxl/issues/115
913,129,113
115
[ "libjxl", "libjxl" ]
It would be great if the plugin showed a dialog with encode options before actual saving. The options could be: Distance, Quality, Effort, Progressive (which corresponds with `cjxl -h`). A brief explanation of the options could be there as well. (The plugin can't easily decide itself whether it should use lossy/lossless in the same way as cjxl because the input are pixels, not image formats.)
GIMP plugin: Add encode options dialog
https://api.github.com/repos/libjxl/libjxl/issues/100/comments
0
2021-06-04T06:00:07Z
2021-08-25T08:54:14Z
https://github.com/libjxl/libjxl/issues/100
911,149,090
100
[ "libjxl", "libjxl" ]
**Describe the bug**
Using a container when losslessly transcoding JPEG to JPEG XL makes the file size reported by cjxl smaller than the one reported by the filesystem. The cjxl-reported size also doesn't change whether you use a container or not, contrary to the reality reported by the filesystem.

**To Reproduce**
(on Linux and PowerShell)
1. `cjxl file.jpg file.jxl --strip; ls -l file.jxl` — without a container the file sizes reported by `cjxl` and `ls -l` are the same...
2. `cjxl file.jpg file.jxl; ls -l file.jxl` — ...but they no longer match if you decide to use a container! Note how the `cjxl`-reported file size is the same as in 1.

**Expected behavior**
File size should be reported properly by cjxl.

**Environment**
- cjxl/djxl version string: dfc730a8dd7d94f0ce9ec32573e7c9c7e178fb15
File size reported by cjxl doesn't take the container cost into account
https://api.github.com/repos/libjxl/libjxl/issues/99/comments
0
2021-06-04T05:28:20Z
2021-06-14T15:38:21Z
https://github.com/libjxl/libjxl/issues/99
911,132,343
99
[ "libjxl", "libjxl" ]
**Describe the bug** When `-Wp,-D_GLIBCXX_ASSERTIONS` is added to CXXFLAGS, many tests fail when their processes unexpectedly abort. In my tests exactly 79 tests fail, a list of these compiled by another user is here: https://aur.archlinux.org/pkgbase/libjxl/#comment-811172 **To Reproduce** With the latest release, add `-Wp,-D_GLIBCXX_ASSERTIONS` to CXXFLAGS. Try to build and run tests with cmake, e.g. cmake --build . -- -j8 cmake --build . -- test **Expected behavior** Tests complete successfully. **Environment** - OS: Arch Linux - Compiler version: clang 11.0.1 - CPU type: x86_64 - cjxl/djxl version string: [v0.3.7 | SIMD supported: SSE4,Scalar] **Additional context** Arch Linux recently added this option to its default CXXFLAGS for building packages. Users trying to build libjxl as a package on Arch Linux are likely to run into this problem. Here is sample output from running one of the tests directly: "/home/adam/Downloads/jxl/libjxl/src/build/lib/tests/butteraugli_test" "--gtest_filter=ButteraugliTest.Distmap" "--gtest_also_run_disabled_tests" Running main() from /build/gtest/src/googletest-release-1.10.0/googletest/src/gtest_main.cc Note: Google Test filter = ButteraugliTest.Distmap [==========] Running 1 test from 1 test suite. [----------] Global test environment set-up. [----------] 1 test from ButteraugliTest [ RUN ] ButteraugliTest.Distmap /usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/11.1.0/../../../../include/c++/11.1.0/bits/atomic_base.h:268: void std::atomic_flag::clear(std::memory_order): Assertion '__b != memory_order_acq_rel' failed. zsh: abort (core dumped) "/home/adam/Downloads/jxl/libjxl/src/build/lib/tests/butteraugli_test" Possibly related bug: https://github.com/libjxl/libjxl/issues/64 (this user also had this compiler option enabled and failed to build libjxl)
Many tests fail when built with -Wp,-D_GLIBCXX_ASSERTIONS
https://api.github.com/repos/libjxl/libjxl/issues/98/comments
2
2021-06-03T19:57:03Z
2021-06-04T03:27:50Z
https://github.com/libjxl/libjxl/issues/98
910,809,777
98
[ "libjxl", "libjxl" ]
It would be good to add a document that keeps track of software that has jxl support. Such a list serves several purposes: - thank/acknowledge other projects for integrating jxl support - point end-users to software that can read/write jxl - keep track of the adoption status of jxl - in case of a (security) bug, it's easier to see who might be affected and check if they are updated (in case they use static linking) Here is a first attempt at making such a list. Please add missing software in the comments! When we have a more or less complete list, I'll make a pull request to add the list as a markdown document. ## Browsers - Chromium: behind a flag since version 91, tracking bug: https://bugs.chromium.org/p/chromium/issues/detail?id=1178058 - Firefox: behind a flag since version 90, tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1539075 - Safari: not supported, tracking bug: https://bugs.webkit.org/show_bug.cgi?id=208235 - Edge: behind a flag since version 91, start with `.\msedge.exe --enable-features=JXL` ## Image libraries - ImageMagick: supported since 7.0.10-54 (https://imagemagick.org/) - libvips: supported since 8.11 (https://libvips.github.io/libvips/) - Imlib2: https://github.com/alistair7/imlib2-jxl ## OS-level support / UI frameworks / file browser plugins - Qt / KDE: plugin available: https://github.com/novomesk/qt-jpegxl-image-plugin - GDK-pixbuf: plugin available in libjxl repo - gThumb: https://ubuntuhandbook.org/index.php/2021/04/gthumb-3-11-3-adds-jpeg-xl-support/ - MacOS viewer/QuickLook plugin: https://github.com/yllan/JXLook - Windows Imaging Component: https://github.com/mirillis/jpegxl-wic - Windows thumbnail handler: https://github.com/saschanaz/jxl-winthumb - OpenMandriva Lx (since 4.3 RC) ## Image editors - GIMP: plugin available in libjxl repo, no official support, tracking bug: https://gitlab.gnome.org/GNOME/gimp/-/issues/4681 - Photoshop: no plugin available yet, no official support yet ## Image viewers - XnView: 
https://www.xnview.com/en/ - ImageGlass: https://imageglass.org/ - Any viewer based on Qt, KDE, GDK-pixbuf, ImageMagick, libvips or imlib2 (see above) - Qt viewers: gwenview, digiKam, KolourPaint, KPhotoAlbum, LXImage-Qt, qimgv, qView, nomacs, VookiImageViewer ## Online tools - Squoosh: https://squoosh.app/ - Cloudinary: https://cloudinary.com/blog/cloudinary_supports_jpeg_xl - MConverter: https://mconverter.eu/
Add list of applications/projects using libjxl
https://api.github.com/repos/libjxl/libjxl/issues/96/comments
6
2021-06-03T13:58:55Z
2021-11-21T14:28:55Z
https://github.com/libjxl/libjxl/issues/96
910,522,993
96
[ "libjxl", "libjxl" ]
**Is your feature request related to a problem? Please describe.** Attempting to losslessly transcode a jpg to a progressively encoded jxl fails. When you add `-p` to a lossless transcode, you get the following message: `Error: progressive lossless JPEG transcode is not yet implemented.` **Describe the solution you'd like** A progressively transcoded jxl should be produced.
Progressive Transcoding
https://api.github.com/repos/libjxl/libjxl/issues/92/comments
1
2021-06-03T00:17:31Z
2022-03-29T10:26:41Z
https://github.com/libjxl/libjxl/issues/92
909,978,904
92
[ "libjxl", "libjxl" ]
**Describe the bug** Attempting to subsample colors causes encoding to fail. **To Reproduce** Add `--resampling=2` (or any other valid value other than 1). An error occurs: `Failed to compress to VarDCT.` **Expected behavior** Produce an image. **Environment** - OS: Windows 10 - Compiler version: various - CPU type: x86_64 - cjxl/djxl version string: [v0.3.7 | SIMD supported: AVX, Scalar]
Subsampling Fails
https://api.github.com/repos/libjxl/libjxl/issues/91/comments
16
2021-06-03T00:07:59Z
2021-07-09T14:53:08Z
https://github.com/libjxl/libjxl/issues/91
909,974,861
91
[ "libjxl", "libjxl" ]
In the case of premultiplied (associated) alpha, `djxl` and other applications ignore that image header field, and treat everything as if it is the usual non-premultiplied (unassociated) alpha like in PNG. It would make sense to do the same thing here as with the Orientation: by default, the decoder should return pixels in a default way (orientation corrected, unassociated alpha), so applications that want to interpret the `alpha_associated` info themselves and get the raw data would have to explicitly ask the decoder to not do the normalization.
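To make the expected default concrete, here is a minimal sketch (illustrative only, not libjxl code) of the unassociated-alpha normalization a decoder would apply before handing pixels to such applications:

```python
def unpremultiply(r, g, b, a):
    """Convert one associated-alpha (premultiplied) RGBA pixel to
    unassociated alpha, with channels as floats in [0, 1].

    When alpha is 0 the color is mathematically undefined, so return
    black by convention.
    """
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)
```

Applications that instead want the raw associated-alpha samples would then have to opt out of this normalization explicitly, mirroring how orientation is handled.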
Premultiplied alpha not rendered correctly
https://api.github.com/repos/libjxl/libjxl/issues/81/comments
0
2021-06-02T07:37:58Z
2021-06-03T12:59:33Z
https://github.com/libjxl/libjxl/issues/81
909,212,792
81
[ "libjxl", "libjxl" ]
The bitstream can do premultiplied alpha, but currently cjxl has no way to select whether to do that or not. Current behavior is that it does whatever the input format does, e.g. non-premultiplied alpha on PNG input and premultiplied alpha on EXR input. It would be nice to have a flag to control this. Also, in the non-premultiplied case, it would be good to have an option in the lossless mode to say "I don't care about the invisible pixels", so they can be made black or whatever else is good for compression (in lossy mode this is already done, but in lossless mode there is currently no way to do it, i.e. we now do what `cwebp -exact` does).
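As a sketch of the requested "I don't care about the invisible pixels" behavior (a hypothetical helper, not cjxl's implementation), clearing fully transparent pixels to black before lossless encoding could look like:

```python
def clear_invisible(pixels):
    # pixels: iterable of (r, g, b, a) tuples. Zero out the color of fully
    # transparent pixels so they compress better; visible pixels are untouched.
    return [(0, 0, 0, a) if a == 0 else (r, g, b, a)
            for (r, g, b, a) in pixels]
```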
Add option to cjxl to clear invisible pixels and/or do premultiplied alpha
https://api.github.com/repos/libjxl/libjxl/issues/76/comments
2
2021-06-01T15:29:08Z
2021-07-26T08:57:40Z
https://github.com/libjxl/libjxl/issues/76
908,416,264
76
[ "libjxl", "libjxl" ]
**Describe the bug** cjxl allows command-line parameters for lossy Modular and lossy palette in the same command despite them being incompatible with each other by JPEG XL's design, thus resulting in buggy images. **To Reproduce** `cjxl smart_ptr.jpg smart_ptr.jxl -j -m -Q 50 --lossy-palette` **Expected behavior** The encoder either ignores one of the parameters or exits with a failure message. **Screenshots** ![lossy_modular_palette](https://user-images.githubusercontent.com/15923635/120330408-15942480-c2ed-11eb-820c-6c059609529f.png) **Environment** - cjxl/djxl version: af6ece2e3b6bdece69cb4ce8f8cb7d630c5d72cb
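The suggested fix — exiting with a failure message — could be sketched like this (a hypothetical validation helper, not actual cjxl code):

```python
def validate_encoder_flags(lossy_modular, lossy_palette):
    # Reject the incompatible combination up front instead of producing
    # a buggy image; this mirrors the behavior the report asks for.
    if lossy_modular and lossy_palette:
        raise ValueError(
            "--lossy-palette cannot be combined with lossy modular mode")
```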
Disable lossy palette when using lossy modular
https://api.github.com/repos/libjxl/libjxl/issues/75/comments
0
2021-06-01T13:28:45Z
2021-06-02T18:41:47Z
https://github.com/libjxl/libjxl/issues/75
908,294,850
75
[ "libjxl", "libjxl" ]
From reading JxlDecoderSetPreviewOutBuffer() in decode.h it's not clear which color space the resulting buffer will have. My first guess was that it's the same as at encode time: sRGB for JXL_TYPE_UINT8 and linear sRGB for JXL_TYPE_FLOAT. But it seems to be whatever JxlDecoderGetColorAsEncodedProfile() reports => could you please add some pointers for the user? Moved this from GitLab https://gitlab.com/wg1/jpeg-xl/-/issues/243 as suggested. If at all possible, it would be nice to have a simple output-color-space option reducing all the possible JxlDecoderGetColorAsEncodedProfile() return values to something easy to use.
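For reference, the difference between the two candidate outputs mentioned above (sRGB for JXL_TYPE_UINT8 vs linear sRGB for JXL_TYPE_FLOAT) is the standard IEC 61966-2-1 transfer function; a sketch of the encoding direction (generic color math, not libjxl's implementation):

```python
def srgb_from_linear(x):
    # Standard sRGB opto-electronic transfer function for x in [0, 1]:
    # a linear segment near black, a 2.4-exponent power curve elsewhere.
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055
```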
Decode API: clarify color space of returned pixels
https://api.github.com/repos/libjxl/libjxl/issues/70/comments
3
2021-06-01T07:21:46Z
2021-11-20T11:34:35Z
https://github.com/libjxl/libjxl/issues/70
907,984,205
70
[ "libjxl", "libjxl" ]
oss-fuzz might have found a nasty write-after-free bug in git master libjxl. This file: http://www.rollthepotato.net/~john/.clusterfuzz-testcase-minimized-jpegsave_file_fuzzer-4933665846067200 Generate this asan error: ``` &nbsp; | ==291946==ERROR: AddressSanitizer: heap-use-after-free on address 0x622000015380 at pc 0x000000525dac bp 0x7fb1ddff69b0 sp 0x7fb1ddff6178 &nbsp; | WRITE of size 32 at 0x622000015380 thread T2 &nbsp; | SCARINESS: 55 (multi-byte-write-heap-use-after-free) &nbsp; | #0 0x525dab in __asan_memset /src/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:26:3 &nbsp; | #1 0x2db1720 in Upsample&lt;2, 2&gt; libjxl/lib/jxl/dec_upsample.cc:106:3 &nbsp; | #2 0x2db1720 in jxl::N_AVX2::UpsampleRect(unsigned long, float const*, jxl::Plane&lt;float&gt; const&amp;, jxl::Rect const&amp;, jxl::Plane&lt;float&gt;*, jxl::Rect const&amp;, long, unsigned long, float*, unsigned long) libjxl/lib/jxl/dec_upsample.cc:245:7 &nbsp; | #3 0x2ddfce0 in UpsampleRect libjxl/lib/jxl/dec_upsample.cc:336:3 &nbsp; | #4 0x2ddfce0 in jxl::Upsampler::UpsampleRect(jxl::Image3&lt;float&gt; const&amp;, jxl::Rect const&amp;, jxl::Image3&lt;float&gt;*, jxl::Rect const&amp;, long, unsigned long, float*) const libjxl/lib/jxl/dec_upsample.cc:347:5 &nbsp; | #5 0x2d8ed4e in jxl::FinalizeImageRect(jxl::Image3&lt;float&gt;*, jxl::Rect const&amp;, std::__1::vector&lt;std::__1::pair&lt;jxl::Plane&lt;float&gt;*, jxl::Rect&gt;, std::__1::allocator&lt;std::__1::pair&lt;jxl::Plane&lt;float&gt;*, jxl::Rect&gt; &gt; &gt; const&amp;, jxl::PassesDecoderState*, unsigned long, jxl::ImageBundle*, jxl::Rect const&amp;) libjxl/lib/jxl/dec_reconstruct.cc:822:24 &nbsp; | #6 0x2d9ad27 in operator() libjxl/lib/jxl/dec_reconstruct.cc:1048:12 &nbsp; | #7 0x2d9ad27 in jxl::ThreadPool::RunCallState&lt;jxl::FinalizeFrameDecoding(jxl::ImageBundle*, jxl::PassesDecoderState*, jxl::ThreadPool*, bool, bool)::$_9, jxl::FinalizeFrameDecoding(jxl::ImageBundle*, jxl::PassesDecoderState*, jxl::ThreadPool*, 
bool, bool)::$_10&gt;::CallDataFunc(void*, unsigned int, unsigned long) libjxl/lib/jxl/base/data_parallel.h:88:14 &nbsp; | #8 0x3177779 in RunRange libjxl/lib/threads/thread_parallel_runner_internal.cc:137:7 ... ``` <details> <summary>Full report</summary> </details>
oss-fuzz reports a possible write-after-free in libjxl
https://api.github.com/repos/libjxl/libjxl/issues/66/comments
3
2021-05-31T18:00:40Z
2021-06-04T21:40:12Z
https://github.com/libjxl/libjxl/issues/66
907,641,254
66
[ "libjxl", "libjxl" ]
**Describe the bug** The build fails on aarch64: ``` In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:36: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h: In lambda function: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:166:53: note: use '-flax-vector-conversions' to permit conversions between vectors with differing element types or numbers of subparts 166 | Vec128<uint8_t, 16>(vreinterpretq_s16_u8(exp16))) | ~~~~~~~~~~~~~~~~~~~~^~~~~~~ /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:166:54: error: cannot convert 'int16x8_t' to 'uint8x16_t' 166 | Vec128<uint8_t, 16>(vreinterpretq_s16_u8(exp16))) | ^~~~~ | | | int16x8_t In file included from /usr/include/hwy/ops/arm_neon-inl.h:18, from /usr/include/hwy/highway.h:282, from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:26: /usr/lib/gcc/aarch64-redhat-linux/11/include/arm_neon.h:5281:34: note: initializing argument 1 of 'int16x8_t vreinterpretq_s16_u8(uint8x16_t)' 5281 | vreinterpretq_s16_u8 (uint8x16_t __a) | ~~~~~~~~~~~^~~ In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:36: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:171:54: error: cannot convert 'int16x8_t' to 'uint8x16_t' 171 | Vec128<uint8_t, 16>(vreinterpretq_s16_u8(exp16))) | ^~~~~ | | | int16x8_t In file included from /usr/include/hwy/ops/arm_neon-inl.h:18, from /usr/include/hwy/highway.h:282, from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:26: /usr/lib/gcc/aarch64-redhat-linux/11/include/arm_neon.h:5281:34: note: initializing argument 1 of 'int16x8_t 
vreinterpretq_s16_u8(uint8x16_t)' 5281 | vreinterpretq_s16_u8 (uint8x16_t __a) | ~~~~~~~~~~~^~~ In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:36: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:174:30: error: cannot convert 'uint8x16_t' to 'int16x8_t' 174 | vreinterpretq_u8_s16(pow_low), vreinterpretq_u8_s16(pow_high), 8)); | ^~~~~~~ | | | uint8x16_t In file included from /usr/include/hwy/ops/arm_neon-inl.h:18, from /usr/include/hwy/highway.h:282, from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:26: /usr/lib/gcc/aarch64-redhat-linux/11/include/arm_neon.h:5631:33: note: initializing argument 1 of 'uint8x16_t vreinterpretq_u8_s16(int16x8_t)' 5631 | vreinterpretq_u8_s16 (int16x8_t __a) | ~~~~~~~~~~^~~ In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:36: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:174:61: error: cannot convert 'uint8x16_t' to 'int16x8_t' 174 | vreinterpretq_u8_s16(pow_low), vreinterpretq_u8_s16(pow_high), 8)); | ^~~~~~~~ | | | uint8x16_t In file included from /usr/include/hwy/ops/arm_neon-inl.h:18, from /usr/include/hwy/highway.h:282, from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:26: /usr/lib/gcc/aarch64-redhat-linux/11/include/arm_neon.h:5631:33: note: initializing argument 1 of 'uint8x16_t vreinterpretq_u8_s16(int16x8_t)' 5631 | vreinterpretq_u8_s16 (int16x8_t __a) | ~~~~~~~~~~^~~ In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:36: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h: In function 'void 
jxl::N_NEON::{anonymous}::FastXYBTosRGB8(const Image3F&, const jxl::Rect&, const jxl::Rect&, uint8_t*, size_t)': /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:296:29: error: cannot convert '__Int16x8_t' to 'uint16x8_t' in initialization 296 | uint16x8_t r = srgb_tf(linear_r16); | ~~~~~~~^~~~~~~~~~~~ | | | __Int16x8_t /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:297:29: error: cannot convert '__Int16x8_t' to 'uint16x8_t' in initialization 297 | uint16x8_t g = srgb_tf(linear_g16); | ~~~~~~~^~~~~~~~~~~~ | | | __Int16x8_t /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:298:29: error: cannot convert '__Int16x8_t' to 'uint16x8_t' in initialization 298 | uint16x8_t b = srgb_tf(linear_b16); | ~~~~~~~^~~~~~~~~~~~ | | | __Int16x8_t /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:301:61: error: cannot convert 'uint16x8_t' to 'int16x8_t' 301 | vqmovun_s16(vrshrq_n_s16(vsubq_s16(r, vshrq_n_s16(r, 8)), 6)); | ^ | | | uint16x8_t In file included from /usr/include/hwy/ops/arm_neon-inl.h:18, from /usr/include/hwy/highway.h:282, from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:26: /usr/lib/gcc/aarch64-redhat-linux/11/include/arm_neon.h:25724:24: note: initializing argument 1 of 'int16x8_t vshrq_n_s16(int16x8_t, int)' 25724 | vshrq_n_s16 (int16x8_t __a, const int __b) | ~~~~~~~~~~^~~ In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:36: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:303:61: error: cannot convert 'uint16x8_t' to 'int16x8_t' 303 | vqmovun_s16(vrshrq_n_s16(vsubq_s16(g, vshrq_n_s16(g, 8)), 6)); | ^ | | | uint16x8_t In file included from 
/usr/include/hwy/ops/arm_neon-inl.h:18, from /usr/include/hwy/highway.h:282, from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:26: /usr/lib/gcc/aarch64-redhat-linux/11/include/arm_neon.h:25724:24: note: initializing argument 1 of 'int16x8_t vshrq_n_s16(int16x8_t, int)' 25724 | vshrq_n_s16 (int16x8_t __a, const int __b) | ~~~~~~~~~~^~~ In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:36: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_xyb-inl.h:305:61: error: cannot convert 'uint16x8_t' to 'int16x8_t' 305 | vqmovun_s16(vrshrq_n_s16(vsubq_s16(b, vshrq_n_s16(b, 8)), 6)); | ^ | | | uint16x8_t In file included from /usr/include/hwy/ops/arm_neon-inl.h:18, from /usr/include/hwy/highway.h:282, from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/lib/jxl/dec_reconstruct.cc:26: /usr/lib/gcc/aarch64-redhat-linux/11/include/arm_neon.h:25724:24: note: initializing argument 1 of 'int16x8_t vshrq_n_s16(int16x8_t, int)' 25724 | vshrq_n_s16 (int16x8_t __a, const int __b) | ~~~~~~~~~~^~~ ``` **To Reproduce** ``` %cmake -DENABLE_CCACHE=1 \ -DBUILD_TESTING=OFF \ -DINSTALL_GTEST:BOOL=OFF \ -DJPEGXL_ENABLE_BENCHMARK:BOOL=OFF \ -DJPEGXL_ENABLE_PLUGINS:BOOL=ON \ -DJPEGXL_FORCE_SYSTEM_BROTLI:BOOL=ON \ -DJPEGXL_FORCE_SYSTEM_GTEST:BOOL=ON \ -DJPEGXL_FORCE_SYSTEM_HWY:BOOL=ON \ -DJPEGXL_WARNINGS_AS_ERRORS:BOOL=OFF \ -DBUILD_SHARED_LIBS:BOOL=OFF %cmake_build -- all doc ``` Flags: ``` CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -mbranch-protection=standard -fasynchronous-unwind-tables -fstack-clash-protection' LDFLAGS='-Wl,-z,relro -Wl,--as-needed 
-Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' ``` **Environment** - OS: Fedora Rawhide - Compiler version: GCC 11.1.1 - CPU type: ``` Architecture: aarch64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 5 On-line CPU(s) list: 0-4 Thread(s) per core: 1 Core(s) per socket: 5 Socket(s): 1 NUMA node(s): 1 Vendor ID: APM Model: 2 Model name: X-Gene Stepping: 0x3 BogoMIPS: 80.00 NUMA node0 CPU(s): 0-4 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Mitigation; PTI Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Mitigation; __user pointer sanitization Vulnerability Spectre v2: Vulnerable Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid ``` - libhwy: version 0.12.1 - cjxl/djxl version string: v0.3.7 Full log available at https://koji.fedoraproject.org/koji/taskinfo?taskID=69038231
Build failure on aarch64 with GCC
https://api.github.com/repos/libjxl/libjxl/issues/64/comments
5
2021-05-31T16:58:34Z
2021-06-01T15:54:46Z
https://github.com/libjxl/libjxl/issues/64
907,613,431
64
[ "libjxl", "libjxl" ]
**Describe the bug** The build fails on armv7hl: ``` In file included from /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/third_party/skcms/skcms.cc:2071: /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/third_party/skcms/src/Transform_inl.h: In function 'baseline::F baseline::F_from_Half(baseline::U16)': /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/third_party/skcms/src/Transform_inl.h:158:26: error: 'float16x4_t' was not declared in this scope; did you mean 'bfloat16x4_t'? 158 | return vcvt_f32_f16((float16x4_t)half); | ^~~~~~~~~~~ | bfloat16x4_t /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/third_party/skcms/src/Transform_inl.h:158:12: error: 'vcvt_f32_f16' was not declared in this scope; did you mean 'vcvt_f32_bf16'? 158 | return vcvt_f32_f16((float16x4_t)half); | ^~~~~~~~~~~~ | vcvt_f32_bf16 /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/third_party/skcms/src/Transform_inl.h: In function 'baseline::U16 baseline::Half_from_F(baseline::F)': /builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/third_party/skcms/src/Transform_inl.h:187:17: error: 'vcvt_f16_f32' was not declared in this scope; did you mean 'vcvt_bf16_f32'? 
187 | return (U16)vcvt_f16_f32(f); | ^~~~~~~~~~~~ | vcvt_bf16_f32 gmake[2]: *** [third_party/CMakeFiles/skcms.dir/build.make:79: third_party/CMakeFiles/skcms.dir/skcms/skcms.cc.o] Error 1 gmake[2]: Leaving directory '/builddir/build/BUILD/jpeg-xl-v0.3.7-9e9bce86164dc4d01c39eeeb3404d6aed85137b2/armv7hl-redhat-linux-gnueabi' gmake[1]: *** [CMakeFiles/Makefile2:345: third_party/CMakeFiles/skcms.dir/all] Error 2 ``` **To Reproduce** ``` %cmake -DENABLE_CCACHE=1 \ -DBUILD_TESTING=OFF \ -DINSTALL_GTEST:BOOL=OFF \ -DJPEGXL_ENABLE_BENCHMARK:BOOL=OFF \ -DJPEGXL_ENABLE_PLUGINS:BOOL=ON \ -DJPEGXL_FORCE_SYSTEM_BROTLI:BOOL=ON \ -DJPEGXL_FORCE_SYSTEM_GTEST:BOOL=ON \ -DJPEGXL_FORCE_SYSTEM_HWY:BOOL=ON \ -DJPEGXL_WARNINGS_AS_ERRORS:BOOL=OFF \ -DBUILD_SHARED_LIBS:BOOL=OFF %cmake_build -- all doc ``` Flags: ``` CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -march=armv7-a -mfpu=vfpv3-d16 -mtune=generic-armv7-a -mabi=aapcs-linux -mfloat-abi=hard' LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' ``` **Environment** - OS: Fedora Rawhide - Compiler version: GCC 11.1.1 - CPU type: ``` CPU info: Architecture: armv7l Byte Order: Little Endian CPU(s): 5 On-line CPU(s) list: 0-4 Thread(s) per core: 1 Core(s) per socket: 5 Socket(s): 1 Vendor ID: APM Model: 2 Model name: X-Gene Stepping: 0x3 BogoMIPS: 80.00 Flags: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm aes pmull sha1 sha2 crc32 ``` - libhwy: version 0.12.1 - cjxl/djxl version string: v0.3.7 Full log available at https://koji.fedoraproject.org/koji/taskinfo?taskID=69038231
Build failure on armv7hl with GCC
https://api.github.com/repos/libjxl/libjxl/issues/63/comments
12
2021-05-31T16:55:52Z
2021-11-21T22:57:43Z
https://github.com/libjxl/libjxl/issues/63
907,612,159
63
[ "libjxl", "libjxl" ]
**Describe the bug** We have a mutex in the LCMS color_management which can probably be removed after an update in LCMS. See https://github.com/libjxl/libjxl/pull/23#discussion_r641966901 for context.
Remove LCMS mutex
https://api.github.com/repos/libjxl/libjxl/issues/53/comments
2
2021-05-31T12:19:50Z
2022-03-11T20:11:25Z
https://github.com/libjxl/libjxl/issues/53
907,419,968
53
[ "libjxl", "libjxl" ]
**Summary** For GIF and JPEG, libjxl will use lossless mode (GIF) / lossless JPEG transcode (JPEG) by default. However, if you explicitly ask libjxl to use lossless mode by `-q 100` or `-d 0`, the results would be different. **Steps to reproduce** * `cjxl -s 3 $infile $outfile` * `cjxl -s 3 -d 0 $infile $outfile` * `cjxl -s 3 -q 100 $infile $outfile` **Observed behavior** For JPEG transcode, no parameter or `-q 100` produces 280305, but `-d 0` produces 280246. For GIF, no parameter produces 47431412, while `-d 0` or `-q 100` produces output 26518874. **Expected behavior** The output files should be the same. **Test files** * https://www.ganganonline.com/contents/slime/img/slime_7cover.gif * https://jpegxl.info/fallbacklogo.jpg **Environment** - OS: Linux - Compiler version: clang - CPU type: x86_64 - cjxl/djxl version string: `cjxl [v0.3.7 | SIMD supported: SSE4,Scalar]` **Additional context** For the test JPEG file, the difference is small, but for the GIF file, the difference is very large.
GIF / JPEG -> Lossless JXL: Different results with and without `-q 100` / `-d 0`
https://api.github.com/repos/libjxl/libjxl/issues/44/comments
0
2021-05-29T08:37:37Z
2021-05-31T12:42:03Z
https://github.com/libjxl/libjxl/issues/44
906,419,615
44
[ "libjxl", "libjxl" ]
**Describe the bug** On [GitLab releases page](https://gitlab.com/wg1/jpeg-xl/-/releases), there is changelog (and notes) for every release, but [GitHub releases page](https://github.com/libjxl/libjxl/releases) does not. **To Reproduce** Go to GitHub releases page, and click "..." to expand the details of 0.3.7. It only contains: > Update JPEG-XL with latest changes. > This includes all changes up to 2021-03-29 10:37:25 +0000. **Expected behavior** For 0.3.7: > * Bump JPEG XL version to 0.3.7. > * Fix a rounding issue in 8-bit decoding. > > Note: This release is for evaluation purposes and may contain bugs, including security bugs, that will not be individually documented when fixed. Always prefer to use the latest release. Please provide feedback and report bugs here.
Releases page has no changelog
https://api.github.com/repos/libjxl/libjxl/issues/43/comments
1
2021-05-29T02:26:52Z
2021-05-31T14:06:22Z
https://github.com/libjxl/libjxl/issues/43
906,301,512
43
[ "libjxl", "libjxl" ]
**Describe the solution you'd like** I hope `cjxl` can offer presets for different types of source material, so that average users can get optimized output (in size and in image quality) without needing to know or try the advanced parameters. It would be best if the presets could be applied in both lossy and mathematically lossless mode. **Additional context** `cwebp` has `default, photo, picture, drawing, icon, text` presets.
Presets for different types of source material
https://api.github.com/repos/libjxl/libjxl/issues/42/comments
0
2021-05-29T02:14:40Z
2022-03-29T10:28:15Z
https://github.com/libjxl/libjxl/issues/42
906,297,162
42
[ "libjxl", "libjxl" ]
Right now, this repo has no tags, but the one on GitLab does: https://gitlab.com/wg1/jpeg-xl/-/tags
Missing git tags from gitlab
https://api.github.com/repos/libjxl/libjxl/issues/40/comments
1
2021-05-27T21:17:25Z
2021-05-27T23:03:43Z
https://github.com/libjxl/libjxl/issues/40
904,199,293
40
[ "libjxl", "libjxl" ]
I think oss-fuzz has found an assert failure in libjxl. This file: http://www.rollthepotato.net/~john/clusterfuzz-testcase-minimized-pngsave_buffer_fuzzer-6695474309496832.fuzz Triggers this: ``` &nbsp; | /src/libjxl/lib/jxl/image_ops.h:25: JXL_ASSERT: SameSize(from, *to) &nbsp; | AddressSanitizer:DEADLYSIGNAL &nbsp; | ================================================================= &nbsp; | ==484==ERROR: AddressSanitizer: ILL on unknown address 0x00000250cd19 (pc 0x00000250cd19 bp 0x7f7603b124d0 sp 0x7f7603b124d0 T4) &nbsp; | #0 0x250cd19 in jxl::Abort() libjxl/lib/jxl/base/status.cc:42:3 &nbsp; | #1 0x28dd2ff in CopyImageTo&lt;float&gt; libjxl/lib/jxl/image_ops.h:25:3 &nbsp; | #2 0x28dd2ff in jxl::ImageBlender::PrepareBlending(jxl::PassesDecoderState*, jxl::FrameOrigin, unsigned long, unsigned long, jxl::ColorEncoding const&amp;, jxl::ImageBundle*) libjxl/lib/jxl/blending.cc:130:9 &nbsp; | #3 0x2d5931d in jxl::FinalizeFrameDecoding(jxl::ImageBundle*, jxl::PassesDecoderState*, jxl::ThreadPool*, bool, bool) libjxl/lib/jxl/dec_reconstruct.cc:1081:5 &nbsp; | #4 0x2bc9b4a in jxl::FrameDecoder::Flush() libjxl/lib/jxl/dec_frame.cc:817:3 &nbsp; | #5 0x2bbe2b2 in jxl::FrameDecoder::FinalizeFrame() libjxl/lib/jxl/dec_frame.cc:849:3 &nbsp; | #6 0x2537b44 in jxl::(anonymous namespace)::JxlDecoderProcessInternal(JxlDecoderStruct*, unsigned char const*, unsigned long) libjxl/lib/jxl/decode.cc:1155:30 &nbsp; | #7 0x2532671 in JxlDecoderProcessInput libjxl/lib/jxl/decode.cc:1668:14 ... ``` With this version of libjxl: https://gitlab.com/wg1/jpeg-xl/-/compare/040eae8105b61b312a67791213091103f4c0d034...30ea86ab4c1f1b98c21967a2e3d72a51fe77e454
oss-fuzz reports an assert failure in libjxl
https://api.github.com/repos/libjxl/libjxl/issues/37/comments
0
2021-05-27T15:30:40Z
2021-06-10T17:04:35Z
https://github.com/libjxl/libjxl/issues/37
903,914,222
37
[ "libjxl", "libjxl" ]
Hi, this image: www.rollthepotato.net/~john/clusterfuzz-testcase-minimized-pngsave_buffer_fuzzer-5360982477111296 Produces this error in oss-fuzz: ``` &nbsp; | /src/jpeg-xl/lib/jxl/dec_modular.cc:471:47: runtime error: shift exponent -5 is negative &nbsp; | #0 0x1434953 in jxl::ModularFrameDecoder::FinalizeDecoding(jxl::PassesDecoderState*, jxl::ThreadPool*, jxl::ImageBundle*) jpeg-xl/lib/jxl/dec_modular.cc:0 &nbsp; | #1 0x1387e5e in jxl::FrameDecoder::Flush() jpeg-xl/lib/jxl/dec_frame.cc:814:3 &nbsp; | #2 0x137f603 in jxl::FrameDecoder::FinalizeFrame() jpeg-xl/lib/jxl/dec_frame.cc:849:3 &nbsp; | #3 0x104eadf in jxl::(anonymous namespace)::JxlDecoderProcessInternal(JxlDecoderStruct*, unsigned char const*, unsigned long) jpeg-xl/lib/jxl/decode.cc:1155:30 &nbsp; | #4 0x104c287 in JxlDecoderProcessInput jpeg-xl/lib/jxl/decode.cc:1668:14 ... ``` Not very important, but it should probably be fixed.
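In C++ a shift by a negative exponent is undefined behavior, so the usual fix is an explicit guard; a sketch of the idea (illustrative only, not the actual dec_modular.cc change):

```python
def shift_left(value, exponent):
    # A negative left-shift exponent really means a right shift; handle it
    # explicitly instead of passing a negative count to `<<`.
    if exponent >= 0:
        return value << exponent
    return value >> -exponent
```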
oss-fuzz has found an undefined shift (by -5) in libjxl
https://api.github.com/repos/libjxl/libjxl/issues/29/comments
4
2021-05-27T09:06:11Z
2021-05-31T11:01:44Z
https://github.com/libjxl/libjxl/issues/29
903,439,945
29
[ "libjxl", "libjxl" ]
Mozilla uses clang-5.0 for some builds, but libjxl doesn't compile in C++17 mode (it is fine in C++11 mode) due to what appears to be a clang-5 compiler bug. We should add a check that libjxl compiles in release mode with clang-5 so we don't regress here. Details: https://gitlab.com/wg1/jpeg-xl/-/issues/227 @saschanaz FYI
Add check that libjxl builds with clang-5.0
https://api.github.com/repos/libjxl/libjxl/issues/28/comments
5
2021-05-26T22:04:42Z
2021-11-22T16:41:24Z
https://github.com/libjxl/libjxl/issues/28
902,963,008
28
[ "libjxl", "libjxl" ]
Hello, I'm on aarch64 and am trying to build a standalone of jpeg-xl to detect breakages. I have already done this with one of your dependencies, thirdparty/highway: https://github.com/google/highway/issues/93 The fix for aarch64 was published in v0.12.1, so can you please pull in the new version with your submodule magic? Thanks :-)
please update thirdparty/highway to v0.12.1 to unbreak aarch64 and possibly armv7
https://api.github.com/repos/libjxl/libjxl/issues/21/comments
10
2021-05-26T13:35:10Z
2021-06-25T22:27:41Z
https://github.com/libjxl/libjxl/issues/21
902,403,536
21
[ "libjxl", "libjxl" ]
Hi I used a test file from here... `$ wget http://www.r0k.us/graphics/kodak/kodak/kodim20.png ` Create a jxl file with speed 9... `$ cjxl kodim20.png kodim20.jxl -s 9 -d 3 ` Create a progressive jxl file with speed 9... `$ cjxl kodim20.png prog_kodim20.jxl -s 9 -d 3 -p ` Compare the file sizes... `$ ls -l kodim20.jxl | awk '{print $5}' && ls -l prog_kodim20.jxl | awk '{print $5}' ` **24893 72467** The progressive file is much larger than the non-progressive file. I'm using `cjxl v0.3.7-30ea86ab`
Big file size with progressive + tortoise.
https://api.github.com/repos/libjxl/libjxl/issues/17/comments
10
2021-05-26T10:41:05Z
2021-05-30T13:15:05Z
https://github.com/libjxl/libjxl/issues/17
902,183,177
17
[ "libjxl", "libjxl" ]
Thanks for starting the move to full open source! However, to understand the codebase better, it would be hugely beneficial to have the full git commit history available, and not the squashed ones from the current public GitLab repo.
Full commit history
https://api.github.com/repos/libjxl/libjxl/issues/8/comments
5
2021-05-26T07:48:49Z
2021-05-26T16:43:29Z
https://github.com/libjxl/libjxl/issues/8
901,928,577
8
[ "libjxl", "libjxl" ]
If the input files are JPEGs, `cjxl` creates `jpe????.tmp` files in the system `%temp%` folder and doesn't delete them after the encoding is finished. _Windows 10 20H2 x64, cjxl v0.3.7-12-g04267a8_
cjxl - temp JPEG files remain
https://api.github.com/repos/libjxl/libjxl/issues/6/comments
18
2021-05-26T03:41:40Z
2021-07-09T06:03:27Z
https://github.com/libjxl/libjxl/issues/6
901,705,435
6
[ "alexw994", "eziod" ]
We found a malicious backdoor in version 0.0.1 of this project; the malicious payload is the `request` package it depends on. Even though the `request` package was removed from PyPI, many mirror sites did not completely delete it, so it can still be installed. When running `pip install eziod==0.0.1 -i http://pypi.doubanio.com/simple --trusted-host pypi.doubanio.com`, the malicious `request` package is successfully installed. ![image](https://user-images.githubusercontent.com/58363074/176373168-6c9c3b16-ce9f-40f0-acdc-0de19e3f4320.png) Repair suggestion: delete version 0.0.1 from PyPI
code execution backdoor
https://api.github.com/repos/alexw994/eziod/issues/1/comments
0
2022-06-29T07:05:34Z
2022-06-29T07:05:34Z
https://github.com/alexw994/eziod/issues/1
1,288,265,493
1
[ "FreeOpcUa", "opcua-asyncio" ]
Hello, I am importing an XML file with asyncua version 1.1.0. The XML contains many elements like the one shown below.
```xml
<Value>
  <uax:ExtensionObject>
    <uax:TypeId>
      <uax:Identifier>ns=2;i=1005</uax:Identifier>
    </uax:TypeId>
    <uax:Body>
      <uax:ByteString>ADKJAKKAGSKGKDUWGKW==</uax:ByteString>
    </uax:Body>
  </uax:ExtensionObject>
</Value>
```
The import gives the error "('Error val should be a list, this is a python-asyncua bug', 'ByteString', <class 'str'>, 'ADKJAKKAGSKGKDUWGKW==')". How can this be solved? Note: only asyncua 1.1.0 is available to be installed on my system.
Import XML with ByteString in ExtensionObject Error
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1846/comments
0
2025-06-23T14:11:28Z
2025-06-23T14:11:28Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1846
3,168,355,898
1,846
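A record like the one above carries the ByteString as base64 text inside `<uax:ByteString>`, so somewhere the importer has to turn that string into bytes before building the ExtensionObject body. A minimal sketch of that conversion, with a hypothetical helper name (this is not asyncua's actual code path, and the payload below is a clean base64 string rather than the one from the report):

```python
import base64

def decode_bytestring(text: str) -> bytes:
    # The <uax:ByteString> element content is base64 text; a ByteString
    # value should become bytes, not stay a str, before the
    # ExtensionObject body is assembled.
    return base64.b64decode(text)

decoded = decode_bytestring("aGVsbG8=")  # → b"hello"
```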
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br /> Neither UaExpert nor TwinCat shows any names for parameters of methods registered with asyncua. This causes major confusion, especially when imported into something like TwinCat, where you can't make heads or tails of which parameter is which. ![Image](https://github.com/user-attachments/assets/593cb3e6-f4c3-4687-9a97-457cfd54829c) **To Reproduce**<br /> Run `examples/server-methods.py`. It produces the screenshot above for `func_async`. **Expected behavior**<br /> Names should be associated with parameters, as with `Server.RequestServerStateChange` in the OPC tree: ![Image](https://github.com/user-attachments/assets/26fbebc3-ba3b-4763-bdfb-13a83cd2a3be) Or `Server.GetMonitoredItems`, which also has named Output Arguments: ![Image](https://github.com/user-attachments/assets/496063ef-0c4d-496e-ad67-2ca25b5a84ea) **Screenshots**<br /> One of my methods as it is imported into TwinCat 3 4026: ![Image](https://github.com/user-attachments/assets/016a1952-61c5-4d15-b8c8-6ba62d2ebc84) **Version**<br /> Python-Version: 3.12.11<br /> opcua-asyncio Version (e.g. master branch, 0.9): 1.1.6
Unable to set parameter names on methods
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1845/comments
3
2025-06-20T19:51:03Z
2025-06-21T21:22:34Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1845
3,164,243,901
1,845
[ "FreeOpcUa", "opcua-asyncio" ]
```python
async def task(loop):
    url = "opc.tcp://myserver:4840"
    try:
        client = Client(url=url)
        client.set_user("User")
        client.set_password("Test")
        await client.connect()
        print("connected to OPC UA Server")
    except Exception:
        _logger.exception("error")
    finally:
        await client.disconnect()
```
The URI and credentials are dummies; I am just using the standard [auth with no security example](https://github.com/FreeOpcUa/opcua-asyncio/blob/master/examples/client-minimal-auth.py). I am able to connect to the server using UaExpert and also using the .NET OPC UA stack. I get the following error:
```
INFO:asyncua.client.client:connect
INFO:asyncua.client.ua_client.UaClient:opening connection
INFO:asyncua.uaprotocol:updating client limits to: TransportLimits(max_recv_buffer=65535, max_send_buffer=65535, max_chunk_count=0, max_message_size=0)
INFO:asyncua.client.ua_client.UASocketProtocol:open_secure_channel
INFO:asyncua.client.ua_client.UaClient:create_session
INFO:asyncua.client.ua_client.UASocketProtocol:close_secure_channel
INFO:asyncua.client.ua_client.UASocketProtocol:Request to close socket received
ERROR:asyncua:error
Traceback (most recent call last):
  File "/Users/lalitm/Work/Python/opcaua/client_connect.py", line 17, in task
    await client.connect()
  File "/Users/lalitm/Work/Python/opcaua/.venv/lib/python3.12/site-packages/asyncua/client/client.py", line 321, in connect
    await self.create_session()
  File "/Users/lalitm/Work/Python/opcaua/.venv/lib/python3.12/site-packages/asyncua/client/client.py", line 510, in create_session
    response = await self.uaclient.create_session(params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/lalitm/Work/Python/opcaua/.venv/lib/python3.12/site-packages/asyncua/client/ua_client.py", line 367, in create_session
    response.ResponseHeader.ServiceResult.check()
  File "/Users/lalitm/Work/Python/opcaua/.venv/lib/python3.12/site-packages/asyncua/ua/uatypes.py", line 383, in check
    raise UaStatusCodeError(self.value)
asyncua.ua.uaerrors._auto.BadCertificateInvalid: The certificate provided as a parameter is not valid.(BadCertificateInvalid)
INFO:asyncua.client.client:disconnect
INFO:asyncua.client.ua_client.UaClient:close_session
WARNING:asyncua.client.ua_client.UaClient:close_session but connection wasn't established
WARNING:asyncua.client.ua_client.UaClient:close_secure_channel was called but connection is closed
INFO:asyncua.client.ua_client.UASocketProtocol:Socket has closed connection
```
Cannot connect for server without security
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1844/comments
6
2025-06-18T12:32:13Z
2025-06-19T04:20:24Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1844
3,156,700,786
1,844
[ "FreeOpcUa", "opcua-asyncio" ]
Hi, I was looking for support of certificate chain for client side both for user authentication and secure channel, but could not find any direct mentions. Is this feature implemented? If not, is it planned or at least known about? Are there any implementations or PoCs? Thank you in advance for the answers.
Client certificate chain support
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1843/comments
0
2025-06-12T12:30:41Z
2025-06-12T12:30:41Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1843
3,140,109,606
1,843
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br /> On several devices I get this warning: _Requested session timeout to be 3600000ms, got 30000ms instead_ This happens while connecting this way: ```python
async with Client(url=opc_url) as client:
    ...
``` Setting `client.session_timeout = 30000` in this context has no effect, of course. **To Reproduce**<br /> Steps to reproduce the behavior, incl. code: Connecting this way to a Siemens S7-1500 device: ```python
from asyncua import Client

async with Client(url=opc_url) as client:
    ...
``` **Expected behavior**<br /> The connection should be established without a warning. To achieve this, explicitly setting an optional parameter would help: ```python
from asyncua import Client

async with Client(url=opc_url, session_timeout=30000) as client:
    ...
``` but this is not supported; in the `Client` class, the `session_timeout` parameter is statically set to: ```python
self.session_timeout = 3600000  # 1 hour
``` **Version**<br /> Python-Version: 3.11<br /> opcua-asyncio Version: 1.1.6
Missing optional session_timeout during connecting in context manager
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1842/comments
2
2025-06-11T08:36:50Z
2025-06-11T13:10:02Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1842
3,135,976,487
1,842
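The optional keyword proposed in the report above can be sketched with a toy stand-in (this is not the real `asyncua.Client`, just the suggested signature in isolation, keeping the current one-hour default):

```python
class ClientSketch:
    """Toy stand-in for asyncua's Client showing the proposed optional
    session_timeout keyword with the existing default preserved."""

    def __init__(self, url: str, session_timeout: int = 3600000):
        self.url = url
        self.session_timeout = session_timeout  # milliseconds

# Default stays 3600000 ms; callers can opt into a shorter timeout:
default_client = ClientSketch("opc.tcp://myserver:4840")
short_client = ClientSketch("opc.tcp://myserver:4840", session_timeout=30000)
```

With such a parameter, the warning-free connection shown in the report (`Client(url=opc_url, session_timeout=30000)`) would work inside the context manager.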
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br /> When creating a Server, some of the ReferenceTypes in the Base Information Model have wrong attribute values. The attributes isAbstract and Symmetric are not consistently set correctly. This is because the default values are set to True for both, even though they should be set to False for both. These default attributes are set in ua/uaprotocol_auto.py, lines 6258 and 6259:
```python
data_type = NodeId(ObjectIds.ReferenceTypeAttributes)
SpecifiedAttributes: UInt32 = 0
DisplayName: LocalizedText = field(default_factory=LocalizedText)
Description: LocalizedText = field(default_factory=LocalizedText)
WriteMask: UInt32 = 0
UserWriteMask: UInt32 = 0
isAbstract: Boolean = True
Symmetric: Boolean = True
InverseName: LocalizedText = field(default_factory=LocalizedText)
```
**To Reproduce**<br /> Run the example server and use a client to inspect the reference "Organizes", for example. According to the OPC UA specification, both isAbstract and Symmetric should be false. ![Image](https://github.com/user-attachments/assets/8bf3f383-ac5e-48dc-97e0-541757f64598) **Version**<br /> Python-Version: 3.10.7 opcua-asyncio Version (e.g. master branch, 0.9): 1.1.6
Default Attributes for isAbstract and Symmetric of ReferenceTypes are wrong
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1840/comments
0
2025-05-22T13:00:36Z
2025-05-22T13:00:36Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1840
3,083,304,945
1,840
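The corrected defaults the report asks for can be illustrated with a small dataclass sketch (this is not the generated `uaprotocol_auto.py` code itself, just the intended default values in isolation):

```python
from dataclasses import dataclass

@dataclass
class ReferenceTypeAttributesSketch:
    # Per the report above: concrete reference types such as Organizes
    # are neither abstract nor symmetric, so both flags should default
    # to False rather than True.
    IsAbstract: bool = False
    Symmetric: bool = False

attrs = ReferenceTypeAttributesSketch()
```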
[ "FreeOpcUa", "opcua-asyncio" ]
Hello, I'm building custom NodeSets for testing different OPC UA stacks for a simulation. I currently have the issue that my recent models fail to load correctly into the server, so the variables needed for the simulation cannot be accessed. I found the models to be working with open62541, the UA ReferenceStack and Node-OPCUA. I would like to confirm whether this is an issue with the model generation (or the models themselves), whether there is an issue with the XML importer, or whether I am missing the proper instantiation required to load the models into the server. **Describe the bug** <br /> The required variables should be exposed as writable and accessible; however, on import through load_xml the UserAccessLevel of the entire session is dropped to "CurrentRead". I believe I am having issues with the XML import of the model, since the value attributes for custom variables are not even accessible after import. Custom Properties are not affected by this for some reason. An example of a broken Variable Type: ![Image](https://github.com/user-attachments/assets/f99ad53a-9721-445f-9d60-4804085a95b5) To run the server I am using Poetry, Docker and the UA Model Compiler to generate my files. For generation I used the schema linked in the submodules: [Ref](https://github.com/OPCFoundation/UA-Nodeset/tree/526c5e59e7ba9e54f9dc7848ce2baa249395d3ef) I tried cross-checking the generated files against the available schema files from the foundation using xmllint:
```
xmllint --noout --schema Opc.Ua.ModelDesign.xsd Model.xml
xmllint --noout --schema UANodeSetv105.xsd Reference.NodeSet2.xml
```
I did get a single warning for the models:
```
/model/input/TankModel.xml:255: element Children: Schemas validity error : Element '{http://opcfoundation.org/UA/ModelDesign.xsd}Children': This element is not expected.
```
For the other OPC stacks this did not cause any issue so far. **To Reproduce**<br /> I ran some of the provided examples to test my import issues, mainly server-woodworking and server-robotics.
I just changed the import to use the custom files. The provided models do not require DI or any other models as a dependency. I added one of the model files+NodeSet2 here: https://gist.github.com/Frozenbitz/1506cfdbd097cc7819275b50aba80d25 A note on the publication date: I did have issues with the publication date generated by UA Model Compiler. Currently the import fails because the original files are missing the publication date key, which i usually manually add. I did not find a way to edit how the compiler generates the required model definitions to use the dates from the provided files. ``` <Model ModelUri="urn:open62541.server.application" Version="1.05.02" PublicationDate="2024-04-12T00:00:00Z" ModelVersion="1.5.2"> <RequiredModel ModelUri="http://opcfoundation.org/UA/" XmlSchemaUri="http://opcfoundation.org/UA/2008/02/Types.xsd" PublicationDate="2024-04-12T00:00:00Z" /> </Model> ``` To setup a simple server: ```python def __init__(self, endpoint, name, model_filepath): self.server = Server() self.model_filepath = model_filepath self.server.set_server_name(name) self.server.set_endpoint(endpoint) async def init(self): await self.server.init() # This need to be imported at the start or else it will overwrite the data #await self.server.import_xml(os.path.join(self.model_filepath, "../nodeset/DI/Opc.Ua.Di.NodeSet2.xml")) await self.server.import_xml( os.path.join(self.model_filepath, "/devel/meta/demo-nodeset2/Tank.NodeSet2.xml") ) .... 
``` Logging on Startup: ``` INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=18979, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=18974, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=18981, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=18980, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=18982, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=18980, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=18983, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=18980, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=18984, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=18980, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=18989, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=18988, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=18990, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=18988, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists ... 
``` ``` INFO:asyncua.common.xmlimporter:Importing XML file /devel/modules/model-compiler/output/KRITIS3M.Reference.NodeSet2.xml INFO:asyncua.common.xmlimporter:namespace map: {1: 2} INFO:asyncua.common.xmlimporter:Importing xml node (QualifiedName(NamespaceIndex=2, Name='DeviceInfoType'), NodeId(Identifier=1, NamespaceIndex=2, NodeIdType=<NodeIdType.Numeric: 2>)) as (QualifiedName(NamespaceIndex=2, Name='DeviceInfoType') NodeId(Identifier=1, NamespaceIndex=2, NodeIdType=<NodeIdType.Numeric: 2>)) INFO:asyncua.common.xmlimporter:Importing xml node (QualifiedName(NamespaceIndex=2, Name='DeviceID'), NodeId(Identifier=2, NamespaceIndex=2, NodeIdType=<NodeIdType.Numeric: 2>)) as (QualifiedName(NamespaceIndex=2, Name='DeviceID') NodeId(Identifier=2, NamespaceIndex=2, NodeIdType=<NodeIdType.Numeric: 2>)) INFO:asyncua.common.xmlimporter:Importing xml node (QualifiedName(NamespaceIndex=2, Name='Location'), NodeId(Identifier=3, NamespaceIndex=2, NodeIdType=<NodeIdType.Numeric: 2>)) as (QualifiedName(NamespaceIndex=2, Name='Location') NodeId(Identifier=3, NamespaceIndex=2, NodeIdType=<NodeIdType.Numeric: 2>)) ... 
``` ``` INFO:asyncua.common.instantiate_util:Instantiate: Skip optional node QualifiedName(NamespaceIndex=0, Name='NamespaceFile') as part of QualifiedName(NamespaceIndex=2, Name='urn:open62541.server.application') INFO:asyncua.common.instantiate_util:Instantiate: Skip optional node QualifiedName(NamespaceIndex=0, Name='DefaultRolePermissions') as part of QualifiedName(NamespaceIndex=2, Name='urn:open62541.server.application') INFO:asyncua.common.instantiate_util:Instantiate: Skip optional node QualifiedName(NamespaceIndex=0, Name='DefaultUserRolePermissions') as part of QualifiedName(NamespaceIndex=2, Name='urn:open62541.server.application') INFO:asyncua.common.instantiate_util:Instantiate: Skip optional node QualifiedName(NamespaceIndex=0, Name='DefaultAccessRestrictions') as part of QualifiedName(NamespaceIndex=2, Name='urn:open62541.server.application') INFO:asyncua.common.instantiate_util:Instantiate: Skip optional node QualifiedName(NamespaceIndex=0, Name='ConfigurationVersion') as part of QualifiedName(NamespaceIndex=2, Name='urn:open62541.server.application') INFO:asyncua.common.instantiate_util:Instantiate: Skip optional node QualifiedName(NamespaceIndex=0, Name='ModelVersion') as part of QualifiedName(NamespaceIndex=2, Name='urn:open62541.server.application') ``` **Expected behavior**<br /> The server should start and accept client sessions with "CurrentRead" and "CurrentWrite" enabled. **Version**<br /> Python-Version: 3.10, 3.11, 3.12 (tested with Docker and python-base) <br /> opcua-asyncio Version master, 1.6, 1.4:
Import of custom NodeSet causes some Nodes to be readonly
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1839/comments
0
2025-05-19T13:40:58Z
2025-05-19T13:40:58Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1839
3,073,899,173
1,839
[ "FreeOpcUa", "opcua-asyncio" ]
Hi, I think there is an error with the typing of function "new_struct" on this line: https://github.com/FreeOpcUa/opcua-asyncio/blob/5b1091795dc7745efb94acd94381db17274779e8/asyncua/common/structures104.py#L61 I suspect the correct code would be: ```python name: Union[ua.QualifiedName, str], ```
Probable typing error in "new_struct" function
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1837/comments
0
2025-05-02T08:03:26Z
2025-05-02T08:03:26Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1837
3,035,431,047
1,837
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br /> Calls to _to_node_id() fail with int arguments > 255. A typical example is a call like `self.get_referenced_nodes()` where the type of the reference is provided in parameter refs as integer argument. Meanwhile a lot of standard OPC UA references have integer IDs >> 255 (see example): **To Reproduce**<br /> The example shows a search for references of type HasDictionaryEntry, a standard reference type of OPC UA. It fails because the integer id of the type is HasDictionaryEntry = 17597.<br /> ``` nodes = await self.get_referenced_nodes(refs = ua.ObjectIds.HasDictionaryEntry, direction = ua.BrowseDirection.Forward, nodeclassmask = ua.NodeClass.Object) ``` The root cause of the failure is, that the current implementation of the function _to_nodeid(), which is utilized when issuing the call in the example, only supports TwoByteNodeIds for input arguments of type int.<br /> **Expected behavior**<br /> The implementation of _to_nodeid() should check the numeric range of the input argument and return an appropriate nodeid object, e.g. as depicted in the following code snippet with additional range check:<br /> ``` def _to_nodeid(nodeid: Union["Node", ua.NodeId, str, int]) -> ua.NodeId: if isinstance(nodeid, int): if nodeid <= 255: return ua.TwoByteNodeId(nodeid) elif nodeid <= 65535: return ua.FourByteNodeId(nodeid) else: return ua.NumericNodeId(nodeid) if isinstance(nodeid, Node): return nodeid.nodeid if isinstance(nodeid, ua.NodeId): return nodeid if isinstance(nodeid, str): return ua.NodeId.from_string(nodeid) raise ua.UaError(f"Could not resolve '{nodeid}' to a type id") ``` **Version**<br /> Python-Version: 3.13.3 <br /> opcua-asyncio Version: master branch, 1.1.6
_to_nodeid() fails with int arguments > 255
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1831/comments
0
2025-04-28T09:11:51Z
2025-04-28T12:36:13Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1831
3,024,266,714
1,831
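The proposed fix above boils down to picking the smallest numeric NodeId encoding that can hold the identifier. A standalone sketch of just that selection logic (plain Python, no asyncua import; the class names mirror the ones in the proposal):

```python
def pick_nodeid_class(identifier: int) -> str:
    # TwoByteNodeId holds identifiers up to 255, FourByteNodeId up to
    # 65535, and NumericNodeId anything larger.
    if identifier <= 0xFF:
        return "TwoByteNodeId"
    if identifier <= 0xFFFF:
        return "FourByteNodeId"
    return "NumericNodeId"

# HasDictionaryEntry (i=17597) from the report needs more than one byte:
chosen = pick_nodeid_class(17597)  # → "FourByteNodeId"
```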
[ "FreeOpcUa", "opcua-asyncio" ]
Hi! I would like to report 3 bugs related to importing and exporting XML and dependent namespaces. I will also post a PR with test cases to reproduce and a proposal to fix them 😊 **Describe the bug** <br /> *No. 1: Exporting a node with a data type from another namespace leads to an invalid XML* The namespace URI of the data type is missing in the exported XML file, since the XML exporter only checks for referenced nodes but not the data type attribute. *No. 2: Exporting a node without a valid namespace URI leads to an invalid XML* If you export a node that was added to the server with a namespace index that doesn't relate to a namespace URI, consequently the namespace URI is also missing in the XML file. *No. 3: Importing such invalid XMLs does not report any errors* Without the knowledge about which URIs these namespace indices from the XMLs belong to, they cannot be added correctly to a server without making mistakes like adding them to the wrong namespace or referencing nodes from an unwanted namespace. I'm not fully sure what the current code does in these cases, if I understood it correctly it looks quite random and depends on the existence of the namespace indices on the server when importing the XML. **To Reproduce**<br /> See test cases in PR. **Expected behavior**<br /> - No. 1: The missing namespace URI should be mentioned in the XML - No. 2: It should not be possible to export nodes without a valid namespace URI - No. 3: An error should be reported if nodes in a XML file cannot be related to a namespace URI For no. 3 I'm not sure how many "strictly speaking invalid" XML files are out there, so I considered one exception that should keep most of them intact: If a XML file has only nodes from ns=1 and a namespace URI is missing, allow them to be added anyway, as they will just be added to the local server's namespace. **Version**<br /> Python-Version: 3.12.3<br /> opcua-asyncio Version: f12f3e19b3e7a3bc3db38add30176eef0cecd300
Issues with missing namespace URIs when importing and exporting XMLs
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1829/comments
1
2025-04-27T14:25:22Z
2025-04-30T09:39:11Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1829
3,023,115,186
1,829
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br /> server.import_xml leads to node duplicates. The function call that causes the duplicates is found in XmlImporter.import_xml(), in this line: ` await self._add_references(remaining_refs)` The nodes already got added, including their refs, through this previously called function: `node = await self._add_node_data(nodedata, no_namespace_migration=True)` With this code in the function, which contains the same nodes again, the duplication happens internally:
```
self.refs, remaining_refs = [], self.refs
await self._add_references(remaining_refs)
```
Why is this function call necessary, if the nodes and their refs already get registered in the address space through the prior function? I cannot determine how the duplication happens, but without the second call of _add_references the nodes are not duplicated under the namespace index. **UAExpert** will **not** show the duplication of nodes. Connect to the server through the library client and read the imported xml nodes. Here the server will report duplicate nodes (identical from my perspective - same namespace index and identifier - which should not be possible at all). The duplicates start right after the imported namespace node (level 2).
```
root = server.get_root_node()
nodes = await (root.get_children())  # root layer
nodes = await (nodes[0].get_children())  # namespace layer / nodes
nodes = await (nodes[3].get_children())  # duplicated nodes
```
I do not have any clue on how to fix the bug. The internal server calls are somewhat challenging to follow. After one day of debugging, I do not know where to start my search, or what is causing the duplication. **To Reproduce**<br /> import an xml.
e.g: ``` <?xml version='1.0' encoding='UTF-8'?> <UANodeSet xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:uax="http://opcfoundation.org/UA/2008/02/Types.xsd" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://opcfoundation.org/UA/2011/03/UANodeSet.xsd"> <NamespaceUris> <Uri>urn:server</Uri> <Uri>SYM:</Uri> </NamespaceUris> <Aliases> <Alias Alias="Boolean">i=1</Alias> <Alias Alias="SByte">i=2</Alias> <Alias Alias="Byte">i=3</Alias> <Alias Alias="Int16">i=4</Alias> <Alias Alias="UInt16">i=5</Alias> <Alias Alias="Int32">i=6</Alias> <Alias Alias="UInt32">i=7</Alias> <Alias Alias="Int64">i=8</Alias> <Alias Alias="UInt64">i=9</Alias> <Alias Alias="Float">i=10</Alias> <Alias Alias="Double">i=11</Alias> <Alias Alias="String">i=12</Alias> <Alias Alias="DateTime">i=13</Alias> <Alias Alias="ByteString">i=15</Alias> <Alias Alias="Organizes">i=35</Alias> <Alias Alias="HasTypeDefinition">i=40</Alias> </Aliases> <UAObject NodeId="ns=1;s=SYM:" BrowseName="1:SYM:"> <DisplayName>SYM:</DisplayName> <References> <Reference ReferenceType="HasTypeDefinition">i=61</Reference> <Reference ReferenceType="Organizes" IsForward="false">i=85</Reference> <Reference ReferenceType="Organizes">ns=2;s=S71500ET200MP-Station_2</Reference> <Reference ReferenceType="Organizes">ns=2;s=S71500ET200MP-Station_1</Reference> </References> </UAObject> <UAObject NodeId="ns=2;s=S71500ET200MP-Station_2" BrowseName="2:S71500ET200MP-Station_2" ParentNodeId="ns=1;s=SYM:"> <DisplayName>S71500ET200MP-Station_2</DisplayName> <References> <Reference ReferenceType="HasTypeDefinition">i=61</Reference> </References> </UAObject> <UAObject NodeId="ns=2;s=S71500ET200MP-Station_1" BrowseName="2:S71500ET200MP-Station_1" ParentNodeId="ns=1;s=SYM:"> <DisplayName>S71500ET200MP-Station_1</DisplayName> <References> <Reference ReferenceType="HasTypeDefinition">i=61</Reference> </References> </UAObject> </UANodeSet> ``` after the import, just do: ``` root = server.get_root_node() nodes = await 
(root.get_children())  # root layer
nodes = await (nodes[0].get_children())  # namespace layer / nodes
nodes = await (nodes[3].get_children())  # duplicated nodes
```
**Expected behavior**<br /> import_xml and _add_references should not add any duplicates if the nodes are already there. **Screenshots**<br /> After importing the above example XML, the following can be observed if the imported nodes are requested: ![Image](https://github.com/user-attachments/assets/ad00770b-7e67-4b64-ad64-a5e2a5a4f87b) **Version**<br /> Python-Version: 3.11 opcua-asyncio Version (e.g. master branch, 0.9): 1.1.6
Server import XML leads to opcua node duplications
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1828/comments
0
2025-04-25T20:30:53Z
2025-04-25T20:39:16Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1828
3,020,976,534
1,828
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br /> Hello everyone, I'm trying to read an array within an ExtensionObject from a B&R PLC. The array on its own can be read correctly, but the array within an ExtensionObject is treated as a single integer. This issue is possibly related to: https://github.com/FreeOpcUa/opcua-asyncio/issues/1388 **To Reproduce**<br /> Python code:
```python
from asyncua.sync import Client

url = "opc.tcp://127.0.0.1:4840"
client = Client(url)
client.connect()
client.load_type_definitions()
# client.load_data_type_definitions()  # Does not load the required definitions
# client.load_enums()
array_node = client.get_node("ns=6;s=::AsGlobalPV:global_structure.nested_Struct1.my_array")
structure_node = client.get_node("ns=6;s=::AsGlobalPV:global_structure")
array_node.get_value()
structure_node.get_value()
```
Output:
```
array_node.get_value()
Out[6]: [11, 12, 13, 14, 15, 16, 17, 0]
structure_node.get_value()
Out[7]: Structure1(bool2=True, bool1=True, bool3=False, nested_Struct1=Nested_struct(bool_nest2=False, bool_nest1=False, bool_nest3=True, _my_array=8, my_array=11))
```
You can see that the array is read correctly if it is accessed directly. If the array is within an extension object, it is not recognized.
The bsd (from OPC Binary -> BR.Default) when converted to text is: ``` <opc:TypeDictionary xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tns="http://br-automation.com/OpcUa/PLC/PV/" DefaultByteOrder="LittleEndian" xmlns:opc="http://opcfoundation.org/BinarySchema/" xmlns:ua="http://opcfoundation.org/UA/" TargetNamespace="http://br-automation.com/OpcUa/PLC/PV/"> <opc:Import Namespace="http://opcfoundation.org/UA/" /> <opc:StructuredType BaseType="ua:ExtensionObject" Name="Structure1"> <opc:Field TypeName="opc:Boolean" Name="bool2"/> <opc:Field TypeName="opc:Boolean" Name="bool1"/> <opc:Field TypeName="opc:Boolean" Name="bool3"/> <opc:Field TypeName="tns:Nested_struct" Name="nested_Struct1"/> </opc:StructuredType> <opc:StructuredType BaseType="ua:ExtensionObject" Name="Nested_struct"> <opc:Field TypeName="opc:Boolean" Name="bool_nest2"/> <opc:Field TypeName="opc:Boolean" Name="bool_nest1"/> <opc:Field TypeName="opc:Boolean" Name="bool_nest3"/> <opc:Field TypeName="opc:Int32" Name="#my_array"/> <opc:Field LengthField="#my_array" TypeName="opc:Int32" Name="my_array"/> </opc:StructuredType> </opc:TypeDictionary> ``` **Screenshots**<br /> The corresponding structure from UaExpert which correctly loads the arrays in the extension object: <img width="395" alt="Image" src="https://github.com/user-attachments/assets/8f5894b8-69f8-4a4f-a928-e59bd4226f00" /><br /> **Version**<br /> Python-Version: 3.11.8 opcua-asyncio Version: 1.1.6 Is there anything else I can supply to troubleshoot this issue? Thanks in advance for your work!
Reading an array within an extension object is not possible.
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1825/comments
2
2025-04-23T07:55:49Z
2025-05-05T11:17:42Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1825
3,013,054,264
1,825
[ "FreeOpcUa", "opcua-asyncio" ]
## Problem Statement I need to model a large namespace that will not support browsing. I would like to create my nodes for only as long as at least one subscription exists. ## What I've tried I've tried subscribing to the `ItemSubscriptionCreated` callback, and to use the callback as a chance to create my nodes. ```python # during setup self.server.subscribe_server_callback( CallbackType.ItemSubscriptionCreated, self._dispatch_item_subscription ) ... # _dispatch_item_subscription # Create a new node, and set the status of the `ItemSubscriptionCreated` response_params. event.response_params[index].StatusCode = ( await component.on_item_subscription(item) ) ``` This almost works. The Good: The nodes are created and the client receives whatever status code I return. The Bad: The client's subscription does not seem to work (i.e. receive values). Only after creating a second subscription does it work as expected. I believe this is because although I'm setting the `StatusCode`, my callback is being run after the subscription creation handling logic, meaning the Node doesn't exist yet to properly create a subscription. ## Questions 1. Does this approach make any sense? 2. Is there another way to accomplish a dynamic address space? I believe Milo supports this in the form of a [ManagedAddressSpace](https://javadoc.io/doc/org.eclipse.milo/sdk-server/0.3.3/org/eclipse/milo/opcua/sdk/server/api/ManagedAddressSpace.html) that gives you exact control over the handling of service calls.
Create Nodes During Subscription
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1824/comments
4
2025-04-23T06:29:38Z
2025-04-25T15:31:30Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1824
3,012,843,125
1,824
[ "FreeOpcUa", "opcua-asyncio" ]
Hi! I'm trying to connect to the Prosys OPC UA server with the example (https://github.com/FreeOpcUa/opcua-asyncio/blob/master/examples/client_to_prosys.py), but I get this error:

```
................
  File "C:\Users\unreg\AppData\Local\Programs\Python\Python311\Lib\enum.py", line 695, in __call__
    return cls.__new__(cls, value)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\unreg\AppData\Local\Programs\Python\Python311\Lib\enum.py", line 1111, in __new__
    raise ve_exc
ValueError: 36 is not a valid NodeIdType
```

Prosys Server version 5.5.2-362, SDK version 5.2.8-159.

This is very strange, because about six months ago the connection was fine (to the old version of the server). The client UaExpert 1.6.3 448 works fine.

How can I fix it? Thanks!
Connect to prosys opc ua simulation server failed
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1823/comments
6
2025-04-19T14:03:33Z
2025-05-12T12:49:18Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1823
3,006,451,745
1,823
[ "FreeOpcUa", "opcua-asyncio" ]
We're using asyncua as the server part for an integration test setup, connecting from an open62541 based client. With 1.1.6, we get an error at this point:

```
Traceback (most recent call last):
  File "c:\workspace\tools\Python312-32\Lib\site-packages\asyncua\server\uaprocessor.py", line 147, in process_message
    return await self._process_message(typeid, requesthdr, seqhdr, body)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\workspace\tools\Python312-32\Lib\site-packages\asyncua\server\uaprocessor.py", line 213, in _process_message
    data = self._connection.security_policy.peer_certificate + params.ClientNonce
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
TypeError: can't concat NoneType to bytes
```

While debugging into it, I found out that the check

```python
if self._connection.security_policy.peer_certificate is None:
    data = params.ClientNonce
else:
    data = self._connection.security_policy.peer_certificate + params.ClientNonce
```

is not working, since `self._connection.security_policy.peer_certificate` is `b''`, not `None`. We're using SecurityPolicyNone, so I assume that, due to the change in the initialization of the security policies between 1.1.5 and 1.1.6, the peer_certificate is accidentally set, while the nonce is always None.

When I add

```python
if not len(self.peer_certificate):
    self.peer_certificate = None
```

after line https://github.com/FreeOpcUa/opcua-asyncio/blob/4a11975af723be18f6889819aa5a161e0c343541/asyncua/crypto/security_policies.py#L519 it works flawlessly again.
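The behavioural difference boils down to a `None` check versus a truthiness check. A minimal standalone sketch of the signature-input logic described above (the function name is hypothetical, not asyncua API):

```python
def session_signature_data(peer_certificate, client_nonce: bytes) -> bytes:
    """Build the data to sign: certificate + nonce. An absent peer
    certificate may arrive as either None or b'' depending on how the
    security policy was initialised; `if not peer_certificate:` covers
    both cases, unlike `is None`."""
    if not peer_certificate:  # True for both None and b''
        return client_nonce
    return peer_certificate + client_nonce
```

With the strict `is None` check, a `b''` certificate falls through to the concatenation branch and fails exactly as in the traceback above.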
Possible regression 1.1.5 -> 1.1.6
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1821/comments
3
2025-04-17T07:35:59Z
2025-04-29T12:41:20Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1821
3,001,694,882
1,821
[ "FreeOpcUa", "opcua-asyncio" ]
Both the `call_method` and `call_method_full` functions throw an exception if the status is not Ok, which means that it is not possible to see the return values of an Uncertain result. As far as I understand the spec, if a method returns an Uncertain status it indicates that the return values *might* be unreliable, but it could still be useful to examine their values.

Since most people probably only care about whether the status is Ok or not, I don't think it makes sense to change `call_method`. But since `call_method_full` returns a `CallMethodResult` object, which contains the status code, maybe it makes sense for that method not to throw an exception on a not-Ok status? Or, if that breaks backwards compatibility, would it be possible to add either an additional parameter (with a default value) to `call_method_full` which disables the check of the return value, or a completely new function to do this?

For a real-world use case of this, see the "Joining Systems Base" companion spec. The methods defined there have `status` and `statusMessage` return values, and in case of error the OPC UA status is set to Uncertain and these return values describe the error in more detail. See e.g. the [EnableAsset](https://reference.opcfoundation.org/IJT/Base/v100/docs/7.4.3) method and the discussion about the [status](https://reference.opcfoundation.org/IJT/Base/v100/docs/7.2.3).

I'm happy to provide a pull request with the above proposed changes, but first want some input about what sort of API the developers would prefer.
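For context, whether a result is Good, Uncertain or Bad is carried in the top two bits of the 32-bit OPC UA status code (00 = Good, 01 = Uncertain, 10 = Bad), so a relaxed check along the lines proposed here could look like this sketch (hypothetical helper, not asyncua API):

```python
GOOD, UNCERTAIN, BAD = 0b00, 0b01, 0b10

def severity(status_code: int) -> int:
    # OPC UA encodes severity in the two most significant bits.
    return (status_code >> 30) & 0b11

def should_raise(status_code: int, raise_on_uncertain: bool = True) -> bool:
    """Sketch: with raise_on_uncertain=False, only Bad results raise,
    leaving Uncertain OutputArguments (e.g. status/statusMessage in the
    Joining Systems spec) available for inspection."""
    sev = severity(status_code)
    if sev == BAD:
        return True
    return sev == UNCERTAIN and raise_on_uncertain
```

Keeping `raise_on_uncertain=True` as the default would preserve backwards compatibility for existing callers.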
Accessing return values when a method returns Uncertain
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1815/comments
0
2025-04-04T12:07:10Z
2025-04-04T12:07:10Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1815
2,972,179,304
1,815
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug**
I'm attempting to import a variable into TwinCat 3 (4026) which is an array of ExtensionObjects. No matter the size of the array, TwinCat 3 refuses to import it and reports a "Something odd happened" message with no further details. This error only occurs when the variable is an array. If the variable is a singleton ExtensionObject, there is no issue.

If I create this ExtensionObject and variable in another TwinCat 3 server, and import it with the same TwinCat 3 client that throws the error for `asyncua`, it works just fine. However, I am able to import this variable (as an array created with `asyncua`) into UaExpert and work with it just fine. Therefore, this appears to be a metadata incompatibility issue between asyncua and TwinCat 3.

Here's a screenshot of the Attributes panel in UaExpert of the variable:

![Image](https://github.com/user-attachments/assets/343c4d05-7989-41ca-adcd-6b2dc5eecd40)

Of the Structure type:

![Image](https://github.com/user-attachments/assets/da5a7159-0e41-4c36-865d-01945960cab2)

Of the Value editor of the variable:

![Image](https://github.com/user-attachments/assets/6faa7943-2a13-4745-b6ca-9c52c55db94e)

I am reaching out to Beckhoff support for their input on this issue. I wanted to also open this report here in case we're able to find the solution while Beckhoff works this from their end. I'll report any notable information from them here as well.

**To Reproduce**
Here is my node adding code. It supports folders, objects, variables, and structure definitions. For variables, it also supports primitives, structures, and arrays.

```python
async def _add_node(self, record: NodeRecord) -> bool:
    # Do not operate on DELETE events.
    if record.operation == SchemaChangeOperation.DELETE:
        return False

    # Do not operate on existing nodes.
    with suppress(BadNodeIdUnknown, UaStringParsingError):
        node = self._server.get_node(record.node_id)
        await node.read_display_name()  # Test if node exists
        return False  # No error was raised, so node exists

    logger.info(
        "Node '{}/{}' does not exist, adding...",
        record.display_name.Text,
        record.node_id.to_string(),
    )

    # Retrieve the parent node.
    parent_node = (
        self._server.get_node(record.parent_node_id)
        if record.parent_node_id
        else self._server.get_objects_node()
    )
    logger.debug(
        "Setting parent of node to -> '{}/{}'",
        (await parent_node.read_display_name()).Text,
        record.parent_node_id.to_string(),
    )

    # Assign a new node id if required.
    if record.type_definition != ObjectIds.StructureDefinition:
        await self._assign_node_id(record)

    # Update the browse name. It is influenced by node id changes.
    record.browse_name = QualifiedName(
        record.browse_name.Name,
        record.node_id.NamespaceIndex,
    )

    # Add the node to the tree.
    match record.node_class:
        case NodeClass.Object:
            if record.type_definition == ObjectIds.FolderType:
                node = await parent_node.add_folder(
                    record.node_id,
                    record.browse_name,
                )
            elif record.type_definition == ObjectIds.BaseObjectType:
                node = await parent_node.add_object(
                    record.node_id,
                    record.browse_name,
                )
        case NodeClass.Variable:
            if record.type_definition == ObjectIds.StructureDefinition:
                # Construct the structure.
                # Structures are registered with the `ua` module, and
                # can be accessed with `ua.<structure browse name>`.
                fields = []
                for field_dict in record.fields:
                    fields.append(
                        new_struct_field(
                            name=field_dict["name"],
                            dtype=field_dict["data_type"],
                            array=field_dict["is_array"],
                        )
                    )
                await new_struct(
                    self._server,
                    self._namespace_idx,
                    record.browse_name.Name,  # Namespace may be set to 999, so just use 'Name'.
                    fields,
                )
                await self._server.load_data_type_definitions()
                return
            elif record.type_definition == ObjectIds.StructureType:
                structure_cls = getattr(ua, record.structure_cls_name)
                if record.value.is_array:
                    value = [structure_cls(**value) for value in record.value.Value]
                else:
                    value = structure_cls(**record.value.Value)
                node = await parent_node.add_variable(
                    record.node_id,
                    record.browse_name,
                    Variant(
                        Value=value,
                        VariantType=VariantType.ExtensionObject,
                        Dimensions=record.value.Dimensions,
                        is_array=record.value.is_array,
                    ),
                )
            elif record.type_definition == ObjectIds.BaseDataVariableType:
                node = await parent_node.add_variable(
                    record.node_id,
                    record.browse_name,
                    record.value,
                )

    # Enable writability.
    await node.set_writable(writable=record.writable)
    logger.debug(
        "Node '{}/{}' writable: {}",
        record.display_name.Text,
        record.node_id.to_string(),
        record.writable,
    )

    # Configure array dimensions.
    if isinstance(record.value.Value, (list, tuple, set)):
        await node.write_array_dimensions([len(record.value.Value)])
        await node.write_value_rank(ValueRank.OneDimension)

    # Enable historization.
    if record.historizing:
        with suppress(UaNodeAlreadyHistorizedError, UaError):
            await self._enable_historization(node)
        logger.debug(
            "Historization enabled for node '{}/{}': {}",
            record.display_name.Text,
            record.node_id.to_string(),
            record.historizing,
        )

    # Set display name.
    await node.write_attribute(
        AttributeIds.DisplayName,
        DataValue(Value=Variant(Value=record.display_name)),
    )
    logger.success(
        "Node '{}/{}' added successfully",
        record.display_name.Text,
        record.node_id.to_string(),
    )
    return True
```

```python
# NodeRecord definition used above. It is essentially a dataclass.
class NodeRecord:
    """
    Configuration class for managing nodes in an asyncua OPC UA server.

    Supports add, change, and delete events for folders, variables,
    objects, and custom extension objects, with child node tracking
    and array support.
    """

    # Operation
    operation: SchemaChangeOperation = SchemaChangeOperation.ADD

    # Database Identification
    database_id: int | None = None

    # Node Identification
    node_id: NodeId | None = None  # Specific NodeId, autogenerated if None
    parent_node_id: NodeId = NodeId.from_string("i=85")  # Objects folder
    namespace_idx: int = 0  # Namespace index

    # Node Metadata
    display_name: LocalizedText = LocalizedText("UnnamedNode")
    browse_name: QualifiedName = QualifiedName("Unnamed", 0)
    description: LocalizedText | None = None

    # Node Type and Structure
    node_class: NodeClass = NodeClass.Object
    type_definition: ObjectIds | None = None  # e.g., FolderType (i=61)

    # Variable-specific Settings
    value: Variant | list[Any] | None = None  # Scalar or array value
    writable: bool = False  # UserWriteMask bit
    historizing: bool = False  # Enable history

    # Custom Extension Object Settings
    structure_cls_name: str | None = None
    fields: list[dict] | None = None

    # Child Nodes
    children: set[str] = set()  # Direct child nodes by NodeId
```

**Expected behavior**
No warning/error from TwinCat 3.

**Version**
Python-Version: 3.12.9
opcua-asyncio Version (e.g. master branch, 0.9): 1.1.5
Arrays of ExtensionObjects incompatible with TwinCat 3
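One thing worth double-checking in interop cases like the report above is the consistency of `ValueRank`, `ArrayDimensions` and the actual value, since some clients validate these strictly. A standalone sanity-check sketch (hypothetical helper; per OPC UA Part 3, ValueRank -1 = Scalar, 1 = OneDimension, and an ArrayDimensions entry of 0 means "no fixed size"):

```python
def array_attrs_consistent(value, value_rank, array_dimensions) -> bool:
    """Check the attribute combination written by node-adding code
    like _add_node above."""
    if not isinstance(value, (list, tuple, set)):
        # Scalar value: expect Scalar (-1) or one of the 'any' ranks (-2, -3).
        return value_rank in (-1, -2, -3) and not array_dimensions
    if value_rank != 1:  # OneDimension
        return False
    if not array_dimensions or len(array_dimensions) != 1:
        return False
    # 0 means unlimited; otherwise the dimension must match the length.
    return array_dimensions[0] in (0, len(value))
```

Running such a check over the exported variables can quickly rule out (or confirm) an attribute-metadata mismatch as the cause of the import failure.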
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1813/comments
7
2025-04-03T16:37:25Z
2025-05-28T12:01:57Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1813
2,970,151,384
1,813
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug**
Cannot export nodes whose NodeIds contain German umlauts / vowel mutations (ä, ö, ü).

In the old library python-opcua there was an issue where German umlauts in the NodeId would break the library; this was somehow fixed with this merge (here https://github.com/FreeOpcUa/opcua-asyncio/issues/621 and here https://github.com/FreeOpcUa/opcua-asyncio/issues/1207#issue-1585979228). However, this fix was only a "read"-related fix: if you try to get the display name, browse name or some other information from a node whose NodeId contains a mutated vowel (e.g. ä, ü, ö), the readAttribute call to the server results in a BadNodeIdUnknown, because the converted NodeId with 0x"hex" in it does not exist after the quick'n'dirty conversion introduced by that read fix.

If I try to use the well-coded XmlExporter, the additional calls to the server to get every piece of information for nodes that contain umlauts in their NodeId result in the aforementioned BadNodeIdUnknown error, because the NodeId is practically wrong after the conversion.

**To Reproduce**
1. Add a node containing umlauts "äüö" in the NodeId.
2. Read the node with the opcua-asyncio client, e.g. root.getChildren().
3. Create the XmlExporter(client).
4. Use await exporter.build_etree([node]) <- a lot of BadNodeIdUnknown errors will appear.

**Expected behavior**
Convert the NodeId meaningfully, in such a way that it remains possible to use it for requests. After that, every subsequent request for any information should be successful.

**Version**
Python-Version: 3.11.9
opcua-asyncio Version: 1.1.5
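The request above amounts to a reversible conversion, so that an escaped NodeId can be turned back into the original before sending requests. A standalone sketch of such a round-trip (a hypothetical scheme for illustration, not what the library currently does; restricted to BMP characters and strings without literal `%` escapes):

```python
import re

def escape_nodeid_text(s: str) -> str:
    """Replace each non-ASCII character by %XXXX (4-hex-digit code
    point). Unlike a one-way 0xHH substitution, this is reversible."""
    return "".join(c if ord(c) < 128 else f"%{ord(c):04X}" for c in s)

def unescape_nodeid_text(s: str) -> str:
    """Inverse of escape_nodeid_text."""
    return re.sub(r"%([0-9A-Fa-f]{4})",
                  lambda m: chr(int(m.group(1), 16)), s)
```

The important property is `unescape(escape(s)) == s`, which is exactly what the current one-way conversion lacks and why subsequent requests fail with BadNodeIdUnknown.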
Cannot export nodes with German umlauts / vowel mutations: BadNodeIdUnknown if the converted NodeId is used for requests
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1811/comments
0
2025-04-03T08:34:03Z
2025-04-03T08:38:26Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1811
2,968,754,144
1,811
[ "FreeOpcUa", "opcua-asyncio" ]
Hi all,

Can you please help me to understand this situation?

I have 3 custom enums:

opcua_server | root:Enum trouvée : enumSungrowInverterDeviceTypeCode
opcua_server | root:Enum trouvée : enumSungrowInverterOutputType
opcua_server | root:Enum trouvée : enumSungrowInverterWorkState

They are linked to this class:

opcua_server | class typSungrowInverter:
opcua_server |
opcua_server | '''
opcua_server | typSungrowInverter structure autogenerated from StructureDefinition object
opcua_server | '''
opcua_server |
opcua_server | data_type = ua.NodeId.from_string('''ns=4;s=|tprop|wago_opcua.Application.typSungrowInverter''')
opcua_server | sSN: ua.String = ua.String()
opcua_server | uiDeviceTypeCode: ua.Enumeration = **field(default_factory=ua.Enumeration)**
opcua_server | rP_Nominal: ua.Float = ua.Float(0)
opcua_server | uiOutputType: ua.Enumeration = **field(default_factory=ua.Enumeration)**
opcua_server | rYieldDaily: ua.Float = ua.Float(0)
opcua_server | rYieldTotal: ua.Float = ua.Float(0)
opcua_server | udiRunningTime: ua.UInt32 = ua.UInt32(0)
opcua_server | rTempeInternal: ua.Float = ua.Float(0)
opcua_server | rS: ua.Float = ua.Float(0)
opcua_server | arrrMPPT_U: typing.List[ua.Float] = field(default_factory=list)
opcua_server | arrrMPPT_I: typing.List[ua.Float] = field(default_factory=list)
opcua_server | rP_DC: ua.Float = ua.Float(0)
opcua_server | rU_L1_L2: ua.Float = ua.Float(0)
opcua_server | rU_L2_L3: ua.Float = ua.Float(0)
opcua_server | rU_L3_L1: ua.Float = ua.Float(0)
opcua_server | rI_L1: ua.Float = ua.Float(0)
opcua_server | rI_L2: ua.Float = ua.Float(0)
opcua_server | rI_L3: ua.Float = ua.Float(0)
opcua_server | rP: ua.Float = ua.Float(0)
opcua_server | rQ: ua.Float = ua.Float(0)
opcua_server | rPF: ua.Float = ua.Float(0)
opcua_server | rFreq: ua.Float = ua.Float(0)
opcua_server | uiWorkState: ua.Enumeration = **field(default_factory=ua.Enumeration)**
opcua_server | uiAlarmCode: ua.UInt16 = ua.UInt16(0)

The problem is in bold. I don't understand why the generated class refers to ua.Enumeration here? It generates the following error when I want to access ua.typSungrowInverter():

opcua_server | AttributeError: module 'asyncua.ua' has no attribute 'Enumeration'

It seems logical, as Enumeration is not created; only the other enums are, but they are not correctly linked to the class. Do I miss something?

Thank you in advance for your help
Michael
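For comparison, here is what a field with a resolved enum type would look like in plain Python (this is hand-written for illustration, not what the library generates): the annotation references a concrete enum class and the default is a concrete member, instead of the non-existent generic `ua.Enumeration`.

```python
from dataclasses import dataclass
from enum import IntEnum

class enumSungrowInverterWorkState(IntEnum):
    # Hypothetical members; the real values would come from the
    # server's EnumStrings/EnumValues definition.
    Stopped = 0
    Running = 1

@dataclass
class typSungrowInverterSketch:
    sSN: str = ""
    # Resolved enum reference with a concrete default:
    uiWorkState: enumSungrowInverterWorkState = enumSungrowInverterWorkState.Stopped
```

The error suggests the generator emitted a placeholder (`ua.Enumeration`) because it could not resolve which of the three custom enums each field refers to.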
load_data_type_definitions() does not correctly generate a custom class
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1809/comments
1
2025-04-01T11:37:39Z
2025-04-06T01:37:42Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1809
2,963,176,104
1,809
[ "FreeOpcUa", "opcua-asyncio" ]
**Description**
When I start the script examples/statemachine-example.py, I encounter multiple errors related to node addition and type mismatches. The script fails to complete successfully.

**To Reproduce**
Start the script examples/statemachine-example.py and observe the following errors in the console output:

```
INFO:asyncua.server.internal_server:No user manager specified. Using default permissive manager instead.
INFO:asyncua.server.internal_session:Created internal session Internal
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=11715, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15958, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15959, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15960, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15961, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15962, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15963, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15964, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=16134, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=16135, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=16136, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='FiniteStateMachineType') as part of QualifiedName(NamespaceIndex=2, Name='StateMachine')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='InitialStateType') as part of QualifiedName(NamespaceIndex=2, Name='Idle')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='ChoiceStateType') as part of QualifiedName(NamespaceIndex=2, Name='Idle')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='InitialStateType') as part of QualifiedName(NamespaceIndex=2, Name='Loading')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='ChoiceStateType') as part of QualifiedName(NamespaceIndex=2, Name='Loading')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='InitialStateType') as part of QualifiedName(NamespaceIndex=2, Name='Initializing')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='ChoiceStateType') as part of QualifiedName(NamespaceIndex=2, Name='Initializing')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='InitialStateType') as part of QualifiedName(NamespaceIndex=2, Name='Processing')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='ChoiceStateType') as part of QualifiedName(NamespaceIndex=2, Name='Processing')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='InitialStateType') as part of QualifiedName(NamespaceIndex=2, Name='Finished')
INFO:asyncua.common.instantiate_util:Instantiate: Skip node without modelling rule QualifiedName(NamespaceIndex=0, Name='ChoiceStateType') as part of QualifiedName(NamespaceIndex=2, Name='Finished')
WARNING:asyncua.server.address_space:Write refused: Variant: Variant(Value=NodeId(Identifier=6, NamespaceIndex=2, NodeIdType=<NodeIdType.FourByte: 1>), VariantType=<VariantType.NodeId: 17>, Dimensions=None, is_array=False) with type 17 does not have expected type: 0
Traceback (most recent call last):
  File "C:\workspace_examples\opcua\statemachine-example.py", line 88, in <module>
    asyncio.run(main())
  File "C:\Python311\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\base_events.py", line 650, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\workspace_examples\opcua\statemachine-example.py", line 55, in main
    await mystatemachine.change_state(state1, trans1, f"{mystatemachine._name}: Idle", 300)
  File "C:\workspace_examples\opcua\.venv\Lib\site-packages\asyncua\common\statemachine.py", line 183, in change_state
    await self._write_state(state)
  File "C:\workspace_examples\opcua\.venv\Lib\site-packages\asyncua\common\statemachine.py", line 206, in _write_state
    await self._current_state_id_node.write_value(state.node.nodeid, varianttype=ua.VariantType.NodeId)
  File "C:\workspace_examples\opcua\.venv\Lib\site-packages\asyncua\common\node.py", line 269, in write_value
    await self.write_attribute(ua.AttributeIds.Value, dv)
  File "C:\workspace_examples\opcua\.venv\Lib\site-packages\asyncua\common\node.py", line 323, in write_attribute
    result[0].check()
  File "C:\workspace_examples\opcua\.venv\Lib\site-packages\asyncua\ua\uatypes.py", line 377, in check
    raise UaStatusCodeError(self.value)
asyncua.ua.uaerrors._auto.BadTypeMismatch: The value supplied for the attribute is not of the same type as the attribute's value.(BadTypeMismatch)

Process finished with exit code 1
```

**Version**
Python-Version: 3.11
opcua-asyncio Version: 1.1.5
Problems running statemachine-example.py
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1807/comments
2
2025-03-31T20:38:35Z
2025-04-01T07:03:14Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1807
2,961,527,067
1,807
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug**
When I subscribe using this library to my equipment's OPC/UA server (which is not based on FreeOpcUa), it sends a `PublishResponse` with a `NotificationData` containing an `ExtensionObject` of length -1 (0xffffffff). As a result, I get an exception: `asyncua.common.utils.NotEnoughData: Not enough data left in buffer, request for 4, we have 0`.

I believe the equipment's OPC/UA server is non-compliant: a length of -1 is not valid. However, it appears to be a common enough bug with OPC/UA servers that the [OPC Foundation's reference implementation](https://github.com/OPCFoundation/UA-.NETStandard) has implemented an [interop fix](https://github.com/OPCFoundation/UA-.NETStandard/commit/a887f909f1d314cfa7c32989628afa754984c4f1).

Can we apply the same interop fix here? I believe the change is simply to change [ua_binary.py;546-547](https://github.com/FreeOpcUa/opcua-asyncio/blob/e6d646d12f1db1e59229034439f9b0baf9d2da6e/asyncua/ua/ua_binary.py#L546-L547) from:

```python
if length < 1:
    body = Buffer(b"")
```

...to...

```python
if length == -1:
    # Interop fix for old OPC/UA implementations that omit to fill in the length
    body = data
elif length < 1:
    body = Buffer(b"")
```

**To Reproduce**
I can reproduce this using the built-in `uasubscribe` tool. The OPC/UA server sends back an `ExtensionObject` of the following form:

```
0000   01 00 2b 03 01 ff ff ff ff 05 00 00 00 c9 00 00
0010   00 0d 0a 54 37 8e 3c 80 29 12 ab 53 95 db 01 80
0020   29 12 ab 53 95 db 01 c9 00 00 00 0d 0b 00 00 00
0030   00 98 40 8b 3f 40 15 25 ab 53 95 db 01 40 15 25
0040   ab 53 95 db 01 c9 00 00 00 0d 0b 00 00 00 a0 b7
0050   b3 91 3f d0 13 4b ab 53 95 db 01 d0 13 4b ab 53
0060   95 db 01 c9 00 00 00 0d 0b 00 00 00 a0 09 15 8c
0070   3f 60 9a 84 ab 53 95 db 01 60 9a 84 ab 53 95 db
0080   01 c9 00 00 00 0d 0b 00 00 00 e0 43 a6 91 3f d0
0090   86 a0 ab 53 95 db 01 d0 86 a0 ab 53 95 db 01 00
00a0   00 00 00
```

Octets 0-3 are the `TypeId`, octet 4 is the `EncodingMask` and octets 5-8 are the length (-1).

I see the following exception:

```
WARNING:asyncua.client.client:Requested session timeout to be 3600000ms, got 100000ms instead
WARNING:asyncua.client.client:Revised values returned differ from subscription values: CreateSubscriptionResult(SubscriptionId=4, RevisedPublishingInterval=500.0, RevisedLifetimeCount=9900, RevisedMaxKeepAliveCount=150)
Type Ctr-C to exit
ERROR:asyncua.client.ua_client.UaClient:Error parsing notification from server
Traceback (most recent call last):
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/client/ua_client.py", line 579, in publish
    response = struct_from_binary(ua.PublishResponse, data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 696, in struct_from_binary
    return _create_dataclass_deserializer(objtype)(data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 687, in decode
    kwargs[field.name] = deserialize_field(data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 687, in decode
    kwargs[field.name] = deserialize_field(data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 687, in decode
    kwargs[field.name] = deserialize_field(data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 254, in deserialize
    return list(unpack_array(data, length))
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 247, in <genexpr>
    return (deserialize_element(data) for _ in range(length))
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 542, in extensionobject_from_binary
    return from_binary(cls, body)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 629, in from_binary
    return _create_type_deserializer(uatype, type(None))(data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 687, in decode
    kwargs[field.name] = deserialize_field(data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 589, in _deserialize
    size = Primitives.Int32.unpack(data)
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/ua/ua_binary.py", line 131, in unpack
    return struct.unpack(self.format, data.read(self.size))[0]
  File "/home/mattwilliams/.local/lib/python3.8/site-packages/asyncua/common/utils.py", line 62, in read
    raise NotEnoughData(f"Not enough data left in buffer, request for {size}, we have {self._size}")
asyncua.common.utils.NotEnoughData: Not enough data left in buffer, request for 4, we have 0
```

**Expected behavior**
I expect the `ExtensionObject` to be parsed correctly, by assuming that -1 means the rest of the data buffer.

**Version**
Python-Version: 3.8
opcua-asyncio Version (e.g. master branch, 0.9): 1.1.5
asyncua.common.utils.NotEnoughData parsing an ExtensionObject with length -1
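The effect of the interop branch proposed in this report can be demonstrated standalone on a raw byte buffer (plain `struct`, mirroring the logic rather than using asyncua's `Buffer` class):

```python
import struct

def read_extensionobject_body(data: bytes, offset: int) -> bytes:
    """Read the length-prefixed ExtensionObject body starting at
    `offset`. Spec-compliant servers write the real byte length;
    the interop fix treats -1 as 'the rest of the buffer'."""
    (length,) = struct.unpack_from("<i", data, offset)
    offset += 4
    if length == -1:
        return data[offset:]      # non-compliant server: consume what's left
    if length < 1:
        return b""                # empty body
    return data[offset:offset + length]
```

Without the `length == -1` branch, the -1 case falls into the empty-body path and the remaining payload bytes are left unread, which is what later triggers the `NotEnoughData` exception.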
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1801/comments
1
2025-03-29T06:17:47Z
2025-03-31T14:02:43Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1801
2,957,806,022
1,801
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug**
When a remote server returns a value whose type is a Structure subtype that is encoded by a non-root `DataTypeEncodingType` (like `GDS`'s `ApplicationRecordDataType`, which idiotically defines its own `Default Binary` (`id=134`)), and asyncua has already previously loaded that nodeset with a different `NamespaceIndex` (for example if you start a server and then start a client), then asyncua will fail to decode that value and leave it as an unparseable ExtensionObject, like:

```
ExtensionObject(TypeId=NodeId(Identifier=134, NamespaceIndex=3, NodeIdType=NodeIdType.FourByte), Body=b'\x04\x02\x00\x07\xf7A\xc8\xba[\x10K\x85\x01\xaa\xe1\xe4\xe2\xf7\x11 \x00\x00\x00urn:app-asr-01:Codesys:eUAServer\x00\x00\x00\x00\x01\x00\x00\x00\x02\x11\x00\x00\x00eUAServer@Codesys\x15\x00\x00\x00urn:Codesys:eUAServer\x01\x00\x00\x00\x19\x00\x00\x00opc.tcp://10.10.4.78:4840\x01\x00\x00\x00\x02\x00\x00\x00DA')
```

There is now no way in asyncua's API to decode this, even if we constructed a new `ExtensionObject` with the correct `TypeId` with `NamespaceIndex=5`.

**To Reproduce**
1. Create a `Server` and import a few nodeset XMLs including GDS (`http://opcfoundation.org/UA/GDS/`).
   * Let's say the resulting GDS `NamespaceIndex` is `5`.
2. `server.load_data_type_definitions()`
3. Create a `Client` and connect to an OPC UA service with GDS, which has a **different** `NamespaceIndex` for its `http://opcfoundation.org/UA/GDS/` (let's say `3`).
   * You can call `client.load_data_type_definitions()` here if you want, or not; it makes no difference.
4. Invoke `call_method` `FindApplications` (GDS `i=143`) on the `Directory` (GDS `i=141`) node, with an argument that should return at least one value (of type `ApplicationRecordDataType`, GDS `i=1`).

**Expected behavior**
You should get a (list of) `ApplicationRecordDataType`.

**Actual Behavior**
asyncua fails to decode the `ExtensionObject` (like above).

If you skip steps 1 & 2 and instead do call `client.load_data_type_definitions()`, it will work correctly.

The fundamental problem seems to be that asyncua uses one global set of namespaces for encoders, and does not wire up data type encoding definitions from different clients/servers correctly.

**Version**
Python-Version: 3.13.2
opcua-asyncio Version (e.g. master branch, 0.9): 1.1.5
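Conceptually, the decoder would need to remap the wire `NamespaceIndex` through the namespace URI before looking up the locally registered struct class. A standalone sketch of that translation (hypothetical helper, not asyncua API):

```python
def remap_namespace_index(wire_index: int, remote_ns_array, local_ns_array) -> int:
    """Translate a NamespaceIndex received from the remote server into
    the index the same URI has in the locally loaded definitions,
    e.g. GDS at index 3 on the server but index 5 locally.
    Raises ValueError if the URI is unknown locally."""
    uri = remote_ns_array[wire_index]
    return local_ns_array.index(uri)
```

NamespaceIndex values are only meaningful relative to a NamespaceArray, which is why a TypeId with index 3 from the server cannot be matched directly against definitions loaded under index 5.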
Failure to decode ExtensionObject if remote encoding NodeId does not match previously loaded local definition
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1800/comments
3
2025-03-26T21:31:09Z
2025-03-27T21:30:45Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1800
2,950,827,425
1,800
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
Using the opcua-asyncio GitHub repo with tag v1.1.5. We are trying to run the example scripts `server-with-encryption.py` and `client-with-encryption.py`. We are getting `ValueError: Decryption failed` when the client tries to connect to the server. Below are the logs:

```
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=16136, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.internal_server:starting internal server
INFO:asyncua.server.binary_server_asyncio:Listening on 0.0.0.0:4840
INFO:asyncua.server.binary_server_asyncio:New connection from ('127.0.0.1', 44910)
INFO:asyncua.uaprotocol:updating server limits to: TransportLimits(max_recv_buffer=65535, max_send_buffer=65535, max_chunk_count=1601, max_message_size=104857600)
ERROR:asyncua.server.binary_server_asyncio:Exception raised while processing message from client
Traceback (most recent call last):
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/server/binary_server_asyncio.py", line 99, in _process_received_message_loop
    await self._process_one_msg(header, buf)
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/server/binary_server_asyncio.py", line 105, in _process_one_msg
    ret = await self.processor.process(header, buf)
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/server/uaprocessor.py", line 104, in process
    msg = self._connection.receive_from_header_and_body(header, body)
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/common/connection.py", line 415, in receive_from_header_and_body
    chunk = MessageChunk.from_header_and_body(self.security_policy, header, body, use_prev_key=False)
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/common/connection.py", line 125, in from_header_and_body
    decrypted = crypto.decrypt(data.read(len(data)))
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/crypto/security_policies.py", line 198, in decrypt
    return self.Decryptor.decrypt(data)
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/crypto/security_policies.py", line 309, in decrypt
    decrypted += self.decryptor(self.client_pk,
  File "/home/eiid-d1-l3t015/opcua/examples/../asyncua/crypto/uacrypto.py", line 200, in decrypt_rsa_oaep
    text = private_key.decrypt(
ValueError: Decryption failed
INFO:asyncua.server.binary_server_asyncio:Lost connection from ('127.0.0.1', 44910), None
INFO:asyncua.server.uaprocessor:Cleanup client connection: ('127.0.0.1', 44910)
```

**To Reproduce**<br />
Go to the examples directory and run the Python scripts `server-with-encryption.py` and `client-with-encryption.py`:

```
python3 server-with-encryption.py
```

```
python3 client-with-encryption.py
```

**Expected behavior**<br />
We should not get the Decryption failed error; the client should be able to connect to the server.

**Version**<br />
Python-Version: 3.10<br />
opcua-asyncio Version (e.g. master branch, 0.9): v1.1.5
server-with-encryption.py giving ValueError: Decryption failed
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1797/comments
0
2025-03-17T09:09:49Z
2025-03-17T09:09:49Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1797
2,924,286,729
1,797
[ "FreeOpcUa", "opcua-asyncio" ]
With prior versions of asyncua (<1.0.x) I was able to access nodes as ``var = client.get_node(ua.NodeId(1002, 2))`` or ``var = client.get_node("ns=3;i=2002")``. Now these always return ``BadNodeIdUnknown``! My server has many layers of nodes and using ``get_child()`` is not convenient... TIA
How to access nodes without using get_child method?
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1795/comments
3
2025-03-10T14:47:17Z
2025-03-10T16:33:07Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1795
2,907,528,971
1,795
[ "FreeOpcUa", "opcua-asyncio" ]
I am trying to understand how OPC compatibility works; maybe you can help us with this. In our organization, we have a simulator that uses [Traeger](https://www.traeger.de/en/products/development/opcua/opcua-sdk) as the OPC UA server. When we try to connect to it using this library, it throws a `TimeoutError`. The error happens in the `open_secure_channel()` method of the `ua_client.py` file.

Traeger OPC Server: https://www.traeger.de/en/products/development/opcua/opcua-sdk

As I understand it, as long as the OPC server speaks the same UA protocol, this library should work, regardless of the language the OPC server is written in. Does this library support Traeger as the OPC UA server?

![Image](https://github.com/user-attachments/assets/7e68b22e-924c-47a4-814b-841456beaf80)

Can you help us understand or verify whether this library can support them? Thanks a lot.
Traeger as the OPC Server
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1793/comments
9
2025-03-06T09:29:52Z
2025-03-12T09:52:44Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1793
2,899,856,519
1,793
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
There are the following problems with the processing of from_string/to_string of NodeId:

- Missing percent encoding for NamespaceUri in from_string/to_string
- Identifier cannot be parsed correctly when it contains ";"
- Also, unlike the specification, nsu= and srv= are written after the Identifier, so if the Identifier happens to contain a confusing string like ;nsu=, parsing will be impossible.

https://reference.opcfoundation.org/Core/Part6/v105/docs/5.1.12

**To Reproduce**<br />

```py
from asyncua import ua

before_node = ua.ExpandedNodeId(Identifier="foo;nsu=bar", NamespaceUri="http://example.com/q?=foo;bar")
print(before_node)
# ExpandedNodeId(Identifier='foo;nsu=bar', NamespaceIndex=0, NodeIdType=<NodeIdType.String: 3>, NamespaceUri='http://example.com/q?=foo;bar', ServerIndex=0)

after_node = ua.NodeId.from_string(before_node.to_string())
print(after_node)
# UaStringParsingError
```

**Expected behavior**<br />
Probably the fix will be something like this:

```py
from urllib.parse import unquote, quote


class NodeId:
    @staticmethod
    def _from_string(string):
        elements = string
        identifier = None
        namespace = 0
        ntype = None
        srv = None
        nsu = None
        while elements:
            k, v = elements.split("=", 1)
            k = k.strip()
            # Split only if not a string
            elements = ""
            if k != "s":
                if ";" in v:
                    v, elements = v.split(";", 1)
                v = v.strip()
            if k == "ns":
                namespace = int(v)
            elif k == "i":
                ntype = NodeIdType.Numeric
                identifier = int(v)
            elif k == "s":
                ntype = NodeIdType.String
                identifier = v
            elif k == "g":
                ntype = NodeIdType.Guid
                identifier = uuid.UUID(f"urn:uuid:{v}")
            elif k == "b":
                ntype = NodeIdType.ByteString
                identifier = bytes(v, 'utf-8')
            elif k == "srv":
                srv = int(v)
            elif k == "nsu":
                nsu = unquote(v)  # Decode percent encoded string
        if identifier is None:
            raise UaStringParsingError(f"Could not find identifier in string: {string}")
        if nsu is not None or srv is not None:
            return ExpandedNodeId(identifier, namespace, ntype, NamespaceUri=nsu, ServerIndex=srv)
        return NodeId(identifier, namespace, ntype)


class ExpandedNodeId(NodeId):
    def to_string(self):
        string = []
        if self.ServerIndex:
            string.append(f"srv={self.ServerIndex}")
        if self.NamespaceUri:
            string.append(f"nsu={quote(self.NamespaceUri)}")  # Percent encode
        string.append(NodeId.to_string(self))  # Add after processing
        return ";".join(string)
```
Bad from_string/to_string result for NodeId
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1792/comments
1
2025-03-06T04:14:32Z
2025-03-06T04:30:24Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1792
2,899,284,390
1,792
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br /> Callling run_until_complete() consistently causes a TimeoutError after around ~22 minutes of function running. The exact code is: ``` # ------------------------------------------------------------------------------ def exec_loop(opcua_obj, file_name): ''' Asyncio Loop wrapper function. ''' loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) try: loop.set_debug(True) loop.run_until_complete(task(opcua_obj, file_name)) except TimeoutError as exp: print('TimeoutError') except KeyboardInterrupt as exp: print('KeyboardInterrupt') finally: loop.close() ``` This function should be running until complete so it doesn't make sense that it is causing a TimeoutError. Why would it do that? Is it because when traversing using node.get_children() the connection is lost and it can't recover? If so is there a way configure the asyncua Client to automatically reconnect? [example_opcua_async_client.py.txt](https://github.com/user-attachments/files/19091050/example_opcua_async_client.py.txt) **To Reproduce**<br /> Run the attached Python app with a OPC-UA server that is running. **Expected behavior**<br /> Expecting not to TimeoutError. **Screenshots**<br /> The output of the program shows it exits with TimeoutError exception. **Version**<br /> Python-Version: Python 3.10.12<br /> opcua-asyncio Version (e.g. master branch, 0.9): asyncua==1.1.5
Call to run_until_complete returning TimeoutError
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1791/comments
1
2025-03-05T14:45:48Z
2025-03-05T15:38:54Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1791
2,897,537,269
1,791
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
I am using the "asyncua" library to connect to an OPC-UA server and get a list of all of the tags available on that server. The Python application takes almost 4 hours to traverse the OPC-UA server; here is the output of the app:

```
Starting OPC-UA Client Application
Started at: 2025-02-09 06:28:42.764038
URL: opc.tcp://172.16.1.5:62541
File (Output): Tags_2025-02-09_062842_UTC.txt
Client(url=url)
Set User
Set Password
Set Application_URI
Client set_security
USE_TRUST_STORE: True
if USE_TRUST_STORE
async with client -> get nodes
Called client.node.root
Called to get_child 0:Objects
Call asyncio.wait_for()
Runtime Information
Start Time: 2025-02-09 06:28:42.764038
End Time: 2025-02-09 10:22:53.466855
Duration: 3:54:10.702817
```

The code calls: `await extract_node_paths(objects, "/Objects", output_file)` and the main function looks like this:

```
async def extract_node_paths(node, path, output_file):
    children = await node.get_children()
    for child in children:
        browse_name = await child.read_browse_name()
        child_path = f"{path}/{browse_name.Name}"
        if await child.read_node_class() == 2:  # Variable node class
            with open(output_file, 'a') as f:
                f.write(child_path + '\n')
        await extract_node_paths(child, child_path, output_file)
```

The output has the following syntax:

```
/Objects/Tag Providers/ZLT/ZLT/Dispatcher/Configuration/Limits/LimitBESS
/Objects/Tag Providers/ZLT/ZLT/Dispatcher/Configuration/Limits/LimitEnergyBESS
/Objects/Tag Providers/ZLT/ZLT/Dispatcher/Configuration/Limits/LimitPOI
```

The output file has 104,529 lines, so 104,529 tags. The questions I have are:

1. Is it possible to speed this up so it can do this work in 20 minutes?
2. Is there a built-in API that can get a tree list of the tags in much less time?

**To Reproduce**<br />
Code is shown above.

**Expected behavior**<br />
I was expecting that there would be a relatively fast (minutes, not hours) way to get an OPC-UA server tags list.

**Screenshots**<br />
None, but can get some if needed.

**Version**<br />
Python-Version: Python 3.10.12<br />
opcua-asyncio Version (e.g. master branch, 0.9): asyncua==1.1.5
Speeding up OPC-UA node traversal from hours to minutes
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1787/comments
14
2025-02-12T22:08:10Z
2025-03-17T17:36:15Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1787
2,849,454,516
1,787
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
A server method decorated with `@uamethod` does not convert to the dtypes specified when registering the method at the server. Integers are always converted to `Int64` and floats to `Double`.

**To Reproduce**<br />
Adapted from the sync examples:

```python
# %%
from asyncua.sync import Server
from asyncua import ua, uamethod


def print_result(func):
    def wrapped(*args, **kwargs):
        result = func(*args, **kwargs)
        print(result)
        return result
    return wrapped


@print_result
@uamethod
def multiply(parent, x, y):
    print("multiply method call with parameters: ", x, y)
    return x * y


# set up our server
server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/freeopcua/server/")

# set up our own namespace, not really necessary but should as spec
uri = "http://examples.freeopcua.github.io"
idx = server.register_namespace(uri)

# populating our address space
myobj = server.nodes.objects.add_method(
    idx,
    "multiply",
    multiply,
    [ua.VariantType.Int16, ua.VariantType.Int16],
    [ua.VariantType.Int16],
)

# starting!
server.start()

# %%
from asyncua.sync import Client, ThreadLoop

with ThreadLoop() as tloop:
    with Client("opc.tcp://localhost:4840/freeopcua/server/", tloop=tloop) as client:
        uri = "http://examples.freeopcua.github.io"
        idx = client.get_namespace_index(uri)
        obj = client.nodes.objects
        res = obj.call_method(f"{idx}:multiply", 3, 2)
```

Printed output:

```
multiply method call with parameters: 3 2
[Variant(Value=6, VariantType=<VariantType.Int64: 8>, Dimensions=None, is_array=False)]
```

**Expected behavior**<br />
The return type should be `Int16` as in `add_method`, instead of `Int64`.

**Problem**<br/>
The conversion in `uamethod` is performed by calling `_format_call_outputs` and `to_variant`; if no `Variant` is returned by the method, a new `Variant` with `VariantType=None` is initialized. Thereby, the types are guessed and the types from the method registration are ignored.
From methods.py:

```python
def uamethod(func):
    """
    Method decorator to automatically convert arguments and output to and from variants
    """
    if iscoroutinefunction(func):
        async def wrapper(parent, *args):
            func_args = _format_call_inputs(parent, *args)
            result = await func(*func_args)
            return _format_call_outputs(result)
    else:
        def wrapper(parent, *args):
            func_args = _format_call_inputs(parent, *args)
            result = func(*func_args)
            return _format_call_outputs(result)
    return wrapper


def _format_call_outputs(result):
    if result is None:
        return []
    elif isinstance(result, ua.CallMethodResult):
        result.OutputArguments = to_variant(*result.OutputArguments)
        return result
    elif isinstance(result, ua.StatusCode):
        return result
    elif isinstance(result, tuple):
        return to_variant(*result)
    else:
        return to_variant(result)


def to_variant(*args: Iterable) -> List[ua.Variant]:
    """Create a list of ua.Variants from a given iterable of arguments."""
    uaargs: List[ua.Variant] = []
    for arg in args:
        if not isinstance(arg, ua.Variant):
            arg = ua.Variant(arg)
        uaargs.append(arg)
    return uaargs
```

**Version**<br />
Python-Version: 3.11
opcua-asyncio Version (e.g. master branch, 0.9): 1.1.5
Server method decorated with uamethod does not convert to correct dtypes
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1784/comments
4
2025-02-04T14:58:50Z
2025-02-06T13:05:23Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1784
2,830,519,717
1,784
[ "FreeOpcUa", "opcua-asyncio" ]
Hello everyone, I'm trying to write a variable to a tag/node of the PLC with asyncua and I can't do it... I don't know what could be happening; if you know something that could help me I would appreciate it. I can connect, read values, get nodes, etc., but I can't write, and the tag is writable.

I've tried both with write_values([nodeIds], [valuesToWrite]) and with write_value(nodeId, valueToWrite) and it's not working... Here is the part of the code, let me explain:

```
# results = await globals.opc_client.write_values(nodes_to_write, nodesValues_to_write)
for node, value in zip(nodes_to_write, nodesValues_to_write):
    print("node in zip", node)
    print("value in zip", value)
    result = await node.write_value(value)
    # for result in results:
    if result.StatusCode.is_good():
        print("Write successful")
    else:
        print("Error writing tag inside result:", result.StatusCode)
```

**node in zip:** [ns=3;s="EQ11_PH1"."TO"."TEST1"]

**value in zip:** DataValue(Value=Variant(Value='1', VariantType=<VariantType.Int16: 4>, Dimensions=None, is_array=False), StatusCode_=StatusCode(value=0), SourceTimestamp=None, ServerTimestamp=None, SourcePicoseconds=None, ServerPicoseconds=None)

**And I get that error:** Error writing tag: required argument is not an integer

**I also tried to write into the node directly like this:** result = await node.write_value(85)

**And then I get that error:** "Error writing tag: The server does not support writing the combination of value, status and timestamps provided.(BadWriteNotSupported)"

It seems something happens in the DataValue; that's my code to build it:

`valorOK2 = ua.DataValue(ua.Variant(value, ua.VariantType(switch_opc_data_type(tag['DATATYPE']))))`

where datatype is FLOAT, INT, DEC, etc...
and I switch to a valid opc_data_type:

```
from asyncua import ua

def switch_opc_data_type(datatype):
    datatype = datatype.lower()
    if datatype == "int":
        return ua.VariantType.Int16.value
    elif datatype == "dint":
        return ua.VariantType.Int32.value
    elif datatype == "str":
        return ua.VariantType.String.value
    elif datatype == "dec":
        return ua.VariantType.Float.value
    elif datatype == "ddec":
        return ua.VariantType.Double.value
    elif datatype == "dat":
        return ua.VariantType.DateTime.value
    else:
        return ua.VariantType.String.value
```

That's all I think... so if someone can help me I would appreciate it! Thanks!
Can't write values... datatype error?
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1783/comments
3
2025-01-31T10:39:13Z
2025-01-31T11:16:56Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1783
2,823,020,646
1,783
[ "FreeOpcUa", "opcua-asyncio" ]
Hi! I have an extension object of the type MyObject Array[32]:

```
[0]
  element1 : BOOL;
  element2 : FLOAT;
  element3 : INT;
[1]
  element1 : BOOL;
  element2 : FLOAT;
  element3 : INT;
[2]
  element1 : BOOL;
  element2 : FLOAT;
  element3 : INT;
...
[31]
  element1 : BOOL;
  element2 : FLOAT;
  element3 : INT;
```

I have managed to access the node and store it as 'MyNode'; however, I can't access any of the data by name. If I want to read element1 of the third item, I can do that with `MyObject[2].Body[0]`. I would rather be able to access it as `MyObject[2].element1`, or at least be able to read the names of all the elements with a method like read_display_name() or similar. I have been trying to access the names of the elements in some way or another, but with no luck unfortunately. If any of you could help, I would greatly appreciate it :)
Reading ExtensionObjects
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1778/comments
7
2025-01-23T09:15:25Z
2025-01-28T13:31:17Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1778
2,806,349,008
1,778
[ "FreeOpcUa", "opcua-asyncio" ]
I'm trying to install the package in my virtual environment using the command line given in your documentation. I get the following error:

![Image](https://github.com/user-attachments/assets/7dad5e9d-872b-49df-8866-f7e492d1ba38)

I thought Python 3.8 was enough to install your package, as python>=3.7 is required according to your docs. Am I wrong?

Thx in advance
Can't install with python 3.8
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1776/comments
1
2025-01-21T16:48:30Z
2025-01-22T10:45:07Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1776
2,802,344,512
1,776
[ "FreeOpcUa", "opcua-asyncio" ]
Hi, I'm trying to share a value between two nodes. I have a "Parent" node which is a custom structure with a value "Acknowledge":

![Image](https://github.com/user-attachments/assets/82be1023-2469-4548-af6f-91ee76f5f65d)

I want to create a node Acknowledge as a child of Parent which shares the value of Acknowledge in the parent, so that when I update Acknowledge in the Parent node it also updates the value of Acknowledge in the node below. As you can see in the screenshots below, the values are not synced:

![Image](https://github.com/user-attachments/assets/b8b86d20-1b69-4606-bb49-77d6e788aef1)
![Image](https://github.com/user-attachments/assets/3d519f9c-9773-4d46-a743-f3c0e653295e)

Here is my code:

![Image](https://github.com/user-attachments/assets/ceaf86d5-00ea-43d9-a47c-5017bdf85dee)
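One way to keep the two values in sync (an editor's sketch; treating `Acknowledge` as a field of the decoded parent struct is an assumption based on the screenshots) is a server-internal subscription: subscribe to the parent node and copy each change onto the child.

```python
import asyncio


class MirrorHandler:
    """Subscription handler that copies each data change onto a target node."""

    def __init__(self, target, pick=lambda value: value):
        self.target = target
        self.pick = pick  # extracts the part of the value to mirror

    async def datachange_notification(self, node, val, data):
        await self.target.write_value(self.pick(val))


async def install_mirror(server, parent_node, child_ack_node, period_ms=100):
    # asyncua's Server object supports create_subscription just like a client,
    # so the server itself can watch the parent node
    handler = MirrorHandler(child_ack_node, pick=lambda struct: struct.Acknowledge)
    sub = await server.create_subscription(period_ms, handler)
    await sub.subscribe_data_change(parent_node)
    return sub
```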
Sharing values between 2 nodes
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1775/comments
1
2025-01-21T14:06:19Z
2025-01-21T14:22:11Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1775
2,801,953,605
1,775
[ "FreeOpcUa", "opcua-asyncio" ]
**Issue**:

- When a client creates a subscription to my server, deletes it, waits ~60 seconds, and tries to create it again, the server gets stuck in an infinite loop of trying to publish an answer.
- If the client immediately tries to create the subscription again, or within ~60 seconds, it is able to without issue.
- Is there some advice on what could be going wrong here?

**Server Console:**

```
Monitored Item Node NodeId(Identifier=UUID('000103e9-3001-0001-0000-000000000000'), NamespaceIndex=2, NodeIdType=<NodeIdType.Guid: 4>) was created
INFO:asyncua.server.uaprocessor:Server wants to send publish answer but no publish request is available,enqueuing publish results callback, length of queue is 1
INFO:asyncua.server.uaprocessor:Server wants to send publish answer but no publish request is available,enqueuing publish results callback, length of queue is 1
INFO:asyncua.server.uaprocessor:Server wants to send publish answer but no publish request is available,enqueuing publish results callback, length of queue is 1
INFO:asyncua.server.uaprocessor:Server wants to send publish answer but no publish request is available,enqueuing publish results callback, length of queue is 1
... continues until I kill the server....
```
Server wants to send publish answer but no publish request is available
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1774/comments
1
2025-01-17T23:25:06Z
2025-01-20T08:45:20Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1774
2,796,466,031
1,774
[ "FreeOpcUa", "opcua-asyncio" ]
I would like to warn you that at least the latest 3 versions deployed on PyPI are not aligned with the GitHub code (I checked from 1.1.5 to 1.1.3). I discovered this because, after installing the latest version of the library from pip, authentication with certificates doesn't validate the corresponding private key. Checking the source code released on PyPI for the aforementioned versions, you can see in internal_session.py:122 that the certificate validation check is missing (it was fixed in https://github.com/FreeOpcUa/opcua-asyncio/commit/3c6317be7b1f1e8942ecf9feaab7495b946b70c5#diff-11331c0757144766f90517604c255b8cd7be5f482d28634b997042a5bbffc399R129). I don't know if it is because of the failing GitHub actions. Could you check it out? Thanks.
Latest versions on PyPI are not updated
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1773/comments
1
2025-01-15T16:40:54Z
2025-01-15T19:09:48Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1773
2,790,322,718
1,773
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug**
When calling `asyncua_server.delete_nodes([self.node], recursive=True)`, the following message appears in the terminal:

```
.local/lib/python3.12/site-packages/asyncua/server/address_space.py:428: RuntimeWarning: coroutine 'MonitoredItemService.datachange_callback' was never awaited
  callback(handle, None, ua.StatusCode(ua.StatusCodes.BadNodeIdUnknown))
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```

It seems there are some sync/async issues; maybe some functionality doesn't work properly.

**To Reproduce**
Call `asyncua_server.delete_nodes([self.node], recursive=True)`

**Expected behavior**
No warnings

**Version**
Python-Version: 3.12
opcua-asyncio Version: 1.1.5
Coroutine not awaited while deleting node
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1771/comments
2
2025-01-11T14:21:36Z
2025-01-12T04:52:52Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1771
2,781,903,572
1,771
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
I am trying to rename an object on a running OPC UA server, from the old DisplayName 'SNMP' to the new DisplayName 'SNMP changed'. But the object name in the tree does not change. The code receives the new name from a Redis subscription.

**To Reproduce**<br />
Here is a code example:

```python
async def object_changed(jval):
    org_id = jval["args"]["id"]
    query = "SELECT po.id_p, po.address, p.name FROM page_object AS po LEFT JOIN page AS p ON "
    query += f"p.id_p=po.id_p WHERE po.id_p={org_id}"
    org, = await db.select(query, f"Database error on object {org_id}:")
    node_ = server.get_node(ua.NodeId(org_id, 1))
    display_name = await node_.read_display_name()
    if org["name"] != display_name.Text:
        now = datetime.datetime.now()
        await node_.write_attribute(ua.AttributeIds.DisplayName,
            ua.DataValue(ua.uatypes.LocalizedText(org["name"].replace(':', ' ')),
                ua.StatusCode(0), now.replace(tzinfo=tz_local).astimezone(tz_utc)))
        await node_.write_attribute(ua.AttributeIds.BrowseName,
            ua.DataValue(ua.uatypes.QualifiedName(org["name"].replace(':', ' ')),
                ua.StatusCode(0), now.replace(tzinfo=tz_local).astimezone(tz_utc)))
        await node_.write_attribute(ua.AttributeIds.Description,
            ua.DataValue(ua.uatypes.LocalizedText(org["name"].replace(':', ' ')),
                ua.StatusCode(0), now.replace(tzinfo=tz_local).astimezone(tz_utc)))
```

**Screenshots**<br />
Screenshot attached
![snmp_plc](https://github.com/user-attachments/assets/faf6ff62-2c46-4f4a-bf89-fb733b28b72d)

**Version**<br />
Python-Version: 3.11.2 x64<br />
opcua-asyncio Version (e.g. master branch, 0.9): asyncua-1.1.5
How to rename an object on a running OPC UA server?
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1766/comments
4
2024-12-23T15:23:48Z
2025-05-28T17:43:37Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1766
2,756,275,126
1,766
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
I'm running an OPC UA server on a Raspberry Pi 4B. When I closed the connection from my laptop acting as client, an error got logged. It shows that the watchdog should handle cleaning up the sessions, but it did not this time for unknown reasons, resulting in a crash in `BinaryServer._close_tasks`.

**To Reproduce**<br />
Not entirely clear. The only thing I know for sure is that it happened when I closed the client session. It never happened before.

**Expected behavior**<br />
I expected the client session to close properly like the logs show:
![image](https://github.com/user-attachments/assets/d504ab83-182f-4cb5-b789-ca9e2ecd7bbd)

**Screenshots**<br />
![image](https://github.com/user-attachments/assets/04aa30c1-491d-4957-a409-bb02b11d8621)

**Version**<br />
Python-Version: 3.11.2<br />
opcua-asyncio Version: 1.1.5
Watchdog didn't clean up sessions properly resulting in crash in BinaryServer._close_tasks
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1763/comments
0
2024-12-19T08:27:22Z
2024-12-19T08:27:22Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1763
2,749,595,262
1,763
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
The namespace index of exported nodes will not match up with what is observed in UaExpert. This is because the nodeset only contains the namespaces of nodes in the NodeSet.

**To Reproduce**<br />
Connect to a server and export the XML using a subset of all nodes, e.g.:

```python
client = Client(url)
print("Connecting to server...")
async with client as client:
    print("Connected to server")
    objects_node = client.get_objects_node()
    all_nodes: List[Node] = []
    await recursively_populate_children(objects_node, all_nodes)
    filtered_nodes: List[Node] = []
    for node in all_nodes:
        if await is_in_namespace(node, ns=3):
            filtered_nodes.append(node)
    await client.export_xml(filtered_nodes, output_file_path)
```

There will only be one element in the `NamespaceUris` array, e.g.

```xml
<NamespaceUris>
    <Uri>http://www.helloworld.com/OPCUA/MyNamespace</Uri>
</NamespaceUris>
```

And the namespace index of nodes will all be `1`, and not `3` like they should be. (I think they are not `0`, because the `NamespaceUris` implicitly contains `http://opcfoundation.org/UA/` as index `0`, even if it looks like the `0` index is `http://www.helloworld.com/OPCUA/MyNamespace`.)

I think the issue is related to this function:

```python
async def _add_namespaces(self, nodes):
    ns_array = await self.server.get_namespace_array()
    idxs = await self._get_ns_idxs_of_nodes(nodes)
    # now create a dict of idx_in_address_space to idx_in_exported_file
    self._addr_idx_to_xml_idx = self._make_idx_dict(idxs, ns_array)
    ns_to_export = [ns_array[i] for i in sorted(list(self._addr_idx_to_xml_idx.keys())) if i != 0]
    # write namespaces to xml
    self._add_namespace_uri_els(ns_to_export)
```

**Version**<br />
Python-Version: 3
opcua-asyncio Version: 1.1.5
Nodeset exporter should include the full namespace array, not just the namespaces of the nodes that are exported
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1758/comments
3
2024-12-05T10:02:31Z
2025-04-27T10:26:20Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1758
2,719,936,409
1,758
[ "FreeOpcUa", "opcua-asyncio" ]
**Description:**
I am trying to delete a node with a custom type using the asyncua framework. I have reviewed the client-deleting.py example provided in the repository and attempted to implement node deletion in my application. However, I am encountering an issue: after calling delete_nodes, I am still able to access and read the node's attributes like BrowseName.

**Code Example:**

```
# Attempt to delete a batch of nodes
await self.client.delete_nodes(self.obj_list)

# Attempt to delete a single node
node = self.client.get_node(self.obj_nodeid)
results = await self.client.delete_nodes([node])  # Delete one node

# Verify if the node still exists
node_after = self.client.get_node(self.obj_nodeid)
try:
    browse_name = await node_after.read_browse_name()
except Exception:
    logger.info("Delete Successfully")
```

**Observed Behavior:**
Even after calling delete_nodes, the node remains accessible. I am still able to retrieve the BrowseName using read_browse_name() without any exceptions.

**Expected Behavior:**
After calling delete_nodes, the node should no longer exist, and attempting to access it should raise an exception.

**Additional Context:**
- The node being deleted has a custom type derived from BaseObjectType.
- asyncua version: 1.1.5
- python version: 3.8.17
- I have stored the nodes in a list and attempted both batch deletion and single-node deletion.

Is there something I am missing in the deletion process? Any guidance or example code would be greatly appreciated.
How to Delete a Node with Custom Type
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1757/comments
0
2024-12-05T07:10:23Z
2024-12-05T07:10:23Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1757
2,719,549,270
1,757
[ "FreeOpcUa", "opcua-asyncio" ]
Hi all, I'm writing an OPC UA server using the latest version, 1.1.5. The server defines a new structure using:

```python
from asyncua.common.structures104 import new_struct, new_enum, new_struct_field
```

**Method to create and add the new structure type to the OPC UA server**

```python
async def define_work_order_datatype(server):
    res, _ = await new_struct(
        server,
        idx,
        "WorkOrderDataType",
        [
            new_struct_field("order_locked", ua.VariantType.Boolean),
            new_struct_field("order_priority", ua.VariantType.Byte),
            new_struct_field("job_order_code", ua.VariantType.String),
            new_struct_field("customer_code", ua.VariantType.String),
            new_struct_field("item_code", ua.VariantType.String),
            new_struct_field("material_code", ua.VariantType.String),
            new_struct_field("order_notes", ua.VariantType.String),
            new_struct_field("used_deadline_datetime", ua.VariantType.Boolean),
            new_struct_field("deadline_datetime", ua.VariantType.DateTime),
            new_struct_field("file_name_1", ua.VariantType.String),
            new_struct_field("pieces_per_file_1", ua.VariantType.Int32),
            new_struct_field("requested_pieces_1", ua.VariantType.Int32),
            new_struct_field("file_name_2", ua.VariantType.String),
            new_struct_field("pieces_per_file_2", ua.VariantType.Int32),
            new_struct_field("requested_pieces_2", ua.VariantType.Int32),
            new_struct_field("file_name_3", ua.VariantType.String),
            new_struct_field("pieces_per_file_3", ua.VariantType.Int32),
            new_struct_field("requested_pieces_3", ua.VariantType.Int32),
            new_struct_field("file_name_4", ua.VariantType.String),
            new_struct_field("pieces_per_file_4", ua.VariantType.Int32),
            new_struct_field("requested_pieces_4", ua.VariantType.Int32),
            new_struct_field("file_name_5", ua.VariantType.String),
            new_struct_field("pieces_per_file_5", ua.VariantType.Int32),
            new_struct_field("requested_pieces_5", ua.VariantType.Int32),
            new_struct_field("file_name_6", ua.VariantType.String),
            new_struct_field("pieces_per_file_6", ua.VariantType.Int32),
            new_struct_field("requested_pieces_6", ua.VariantType.Int32),
            new_struct_field("file_name_7", ua.VariantType.String),
            new_struct_field("pieces_per_file_7", ua.VariantType.Int32),
            new_struct_field("requested_pieces_7", ua.VariantType.Int32),
            new_struct_field("file_name_8", ua.VariantType.String),
            new_struct_field("pieces_per_file_8", ua.VariantType.Int32),
            new_struct_field("requested_pieces_8", ua.VariantType.Int32),
        ],
    )
    return res
```

**Create the new structure data type and a method which uses it as an argument**

```python
# create new structure datatype and add it to the OPC UA Struct node
work_order_datatype = await define_work_order_datatype(self.server)

# create and add a uamethod which uses the new structure datatype as argument
a_work_order_order_code = ua.Argument()
a_work_order_order_code.Name = "order_code"
a_work_order_order_code.DataType = ua.NodeId(ua.ObjectIds.String)
a_work_order_order_code.ValueRank = -1
a_work_order_order_code.Description = ua.LocalizedText("work order: order code")

a_work_order_data = ua.Argument()
a_work_order_data.Name = "data"
a_work_order_data.DataType = work_order_datatype.nodeid
a_work_order_data.ValueRank = -1
a_work_order_data.Description = ua.LocalizedText("work order: data")

await self.server.nodes.objects.add_method(
    idx,
    "work_order_add",
    self.api_work_order_add,
    [a_work_order_order_code, a_work_order_data],
    [ua.VariantType.Boolean],
)
```

**uamethod implementation**

```python
@uamethod
async def api_work_order_add(self, parent, order_code: str, data):
    # ??? HOW TO GET STRUCTURE DATA ???
    return True
```

The data content is:

```
[Dbg]>>> data
ExtensionObject(TypeId=NodeId(Identifier=2, NamespaceIndex=2, NodeIdType=<NodeIdType.FourByte: 1>), Body=b'\x00\x00\x05\x00\x00\x0012341\x04\x00\x00\x001234\x01\x00\x00\x003\x05\x00\x00\x00sderg\x07\x00\x00\x00dfgsdfg\x00\x00@m%\xebS\xbf\x01\x08\x00\x00\x00sdfgsdfg\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00')
```

I was able, in some way, to get the data using `struct`, but the code is terrible and complex to maintain:

```python
import struct
from asyncua import ua


@uamethod
async def api_work_order_add(self, parent, order_code: str, data):
    try:
        # Print debug information
        print("ExtensionObject TypeId:", data.TypeId)
        print("Body hex:", data.Body.hex())
        print("Body length:", len(data.Body))

        # Starting offset for reading
        offset = 0

        # Helper function to read strings
        def read_string(body, start):
            # Read the string length (4 bytes)
            str_len = struct.unpack('<i', body[start:start + 4])[0]
            start += 4
            # If the length is -1, it is an empty string
            if str_len == -1:
                return '', start
            # Read the string
            string_value = body[start:start + str_len].decode('utf-8')
            return string_value, start + str_len

        # Read boolean and byte fields
        order_locked = data.Body[offset] == 1
        offset += 1
        order_priority = data.Body[offset]
        offset += 1

        # Read strings
        job_order_code, offset = read_string(data.Body, offset)
        customer_code, offset = read_string(data.Body, offset)
        item_code, offset = read_string(data.Body, offset)
        material_code, offset = read_string(data.Body, offset)
        order_notes, offset = read_string(data.Body, offset)

        # Read the deadline boolean
        used_deadline_datetime = data.Body[offset] == 1
        offset += 1

        # Read the datetime (64-bit integer representing Windows ticks)
        deadline_datetime_raw = struct.unpack('<Q', data.Body[offset:offset + 8])[0]
        # Convert the Windows ticks to a Python datetime if needed
        deadline_datetime = ua.datetime_from_win_filetime(deadline_datetime_raw) if deadline_datetime_raw != 0 else None
        offset += 8

        # Read the file details
        file_details = []
        for _ in range(8):
            file_name, offset = read_string(data.Body, offset)
            pieces_per_file = struct.unpack('<i', data.Body[offset:offset + 4])[0]
            offset += 4
            requested_pieces = struct.unpack('<i', data.Body[offset:offset + 4])[0]
            offset += 4
            file_details.append({
                'file_name': file_name,
                'pieces_per_file': pieces_per_file,
                'requested_pieces': requested_pieces
            })

        # Print the decoded data
        print("Order Locked:", order_locked)
        print("Order Priority:", order_priority)
        print("Job Order Code:", job_order_code)
        print("Customer Code:", customer_code)
        print("Item Code:", item_code)
        print("Material Code:", material_code)
        print("Order Notes:", order_notes)
        print("Used Deadline:", used_deadline_datetime)
        print("Deadline Datetime:", deadline_datetime)

        # Print the file details
        for i, file_info in enumerate(file_details, 1):
            if file_info['file_name']:
                print(f"File {i}:")
                print(f"  Name: {file_info['file_name']}")
                print(f"  Pieces per File: {file_info['pieces_per_file']}")
                print(f"  Requested Pieces: {file_info['requested_pieces']}")

        return True

    except Exception as e:
        print(f"Error while decoding the data: {e}")
        import traceback
        traceback.print_exc()
        return False
```

Is there some better, ready-to-use way to get the info from `data` using library functions?
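For reference, the hand-rolled `read_string` in the snippet above follows the standard UA Binary string layout: a 4-byte little-endian `Int32` length prefix (with `-1` encoding a null string) followed by that many UTF-8 bytes. A minimal standalone sketch of that layout, with hypothetical helper names, looks like this:

```python
import struct


def read_ua_string(body: bytes, offset: int):
    """Read one UA Binary string: a little-endian Int32 length prefix,
    then that many UTF-8 bytes. A length of -1 encodes a null string."""
    (length,) = struct.unpack_from("<i", body, offset)
    offset += 4
    if length == -1:
        return None, offset
    return body[offset:offset + length].decode("utf-8"), offset + length


def read_ua_int32(body: bytes, offset: int):
    """Read one little-endian Int32."""
    (value,) = struct.unpack_from("<i", body, offset)
    return value, offset + 4


# Demo on a hand-crafted payload: the string "abcd", a null string, then Int32 42
payload = b"\x04\x00\x00\x00abcd" + b"\xff\xff\xff\xff" + b"\x2a\x00\x00\x00"
text, pos = read_ua_string(payload, 0)
null_text, pos = read_ua_string(payload, pos)
number, pos = read_ua_int32(payload, pos)
print(text, null_text, number)  # abcd None 42
```

(Depending on the asyncua version, calling `await server.load_data_type_definitions()` after `new_struct` registers the generated Python class, so the method may then receive an already-decoded object instead of a raw `ExtensionObject` — worth checking before hand-decoding.)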
How to get data from a new_struct datatype received in a @uamethod
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1756/comments
1
2024-12-04T15:08:30Z
2024-12-04T16:42:54Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1756
2,718,045,190
1,756
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
When subscribing to a node component on an OPC UA server, the SourceTimestamp is identical to the ServerTimestamp. When viewing the data with UaExpert or similar software, these two are different. The same goes for logging these values with those tools.

**To Reproduce**<br />
* Set up a server with a node that receives data with a source timestamp (Double data type) in batch. The time difference between each data point is 2 minutes while the transmission frequency is 1 hour (so every hour 30 values are sent).
![bilde](https://github.com/user-attachments/assets/b8ee7b53-5f44-41e1-88b3-900f436861ce)
* Subscribe to the node using the example subscribe client supplied here (with username and password in my case)
* View the logged output.

The used code:

```python
#%%
import asyncio
import logging
from asyncua import Client, Node, ua

logging.basicConfig(level=logging.INFO)
_logger = logging.getLogger("asyncua")


class SubscriptionHandler:
    """
    The SubscriptionHandler is used to handle the data that is received for the subscription.
    """

    def datachange_notification(self, node: Node, val, data):
        """
        Callback for asyncua Subscription.
        This method will be called when the Client received a data change message from the Server.
        """
        server_timestamp = data.monitored_item.Value.ServerTimestamp
        source_timestamp = data.monitored_item.Value.SourceTimestamp
        print(f"Server Timestamp: {server_timestamp}")
        print(f"Source Timestamp: {source_timestamp}")
        _logger.info("datachange_notification %r %s", node, val)


async def main(url, username, password):
    """
    Main task of this Client-Subscription example.
    """
    node_addresses = ["ns=2;s=DL_01.NI_0094.HValue"]
    client = Client(url)
    client.set_user(username)
    client.set_password(password)
    async with client:
        nodes = [client.get_node(address) for address in node_addresses]
        var = nodes[0]
        handler = SubscriptionHandler()
        # We create a Client Subscription.
        subscription = await client.create_subscription(500, handler)
        nodes = [
            var,
            client.get_node(ua.ObjectIds.Server_ServerStatus_CurrentTime),
        ]
        # We subscribe to data changes for two nodes (variables).
        await subscription.subscribe_data_change(nodes)
        # We let the subscription run for ten seconds
        await asyncio.sleep(4)
        # We delete the subscription (this un-subscribes from the data changes of the two variables).
        # This is optional since closing the connection will also delete all subscriptions.
        await subscription.delete()
        # After one second we exit the Client context manager - this will close the connection.
        await asyncio.sleep(1)

#%%
if __name__ == "__main__":
    url = "opc.tcp://servername:4840"
    username = "*******"
    password = "*******"
    await main(url, username, password)
```

Some example output from my testing:

```
INFO:asyncua.common.subscription:Publish callback called with result: PublishResult(SubscriptionId=3367183, AvailableSequenceNumbers=[7], MoreNotifications=False, NotificationMessage_=NotificationMessage(SequenceNumber=7, PublishTime=datetime.datetime(2024, 12, 3, 10, 53, 0, 834848, tzinfo=datetime.timezone.utc), NotificationData=[DataChangeNotification(MonitoredItems=[MonitoredItemNotification(ClientHandle=202, Value=DataValue(Value=Variant(Value=datetime.datetime(2024, 12, 3, 10, 53, 0, 802766, tzinfo=datetime.timezone.utc), VariantType=<VariantType.DateTime: 13>, Dimensions=None, is_array=False), StatusCode_=StatusCode(value=0), SourceTimestamp=datetime.datetime(2024, 12, 3, 10, 53, 0, 802766, tzinfo=datetime.timezone.utc), ServerTimestamp=datetime.datetime(2024, 12, 3, 10, 53, 0, 802766, tzinfo=datetime.timezone.utc), SourcePicoseconds=None, ServerPicoseconds=None))], DiagnosticInfos=[])]), Results=[StatusCode(value=0)], DiagnosticInfos=[])
INFO:asyncua:datachange_notification Node(NodeId(Identifier=2258, NamespaceIndex=0, NodeIdType=<NodeIdType.FourByte: 1>)) 2024-12-03 10:53:00.802766+00:00 2024-12-03 10:53:00.802766+00:00
```

**Expected behavior**<br />
The data received by the server has source timestamps with a 2-minute difference between them, but the data received with the Python client shows the SourceTimestamp as identical to the ServerTimestamp.

**Screenshots**<br />
The data logged using UaExpert's data logger view shows the logged timestamps being different (expected behavior).
![bilde](https://github.com/user-attachments/assets/de50c03c-a2b7-4d7e-a49a-88dc6c735be1)
Printout of the Python OPC UA client's subscribe timestamps:
![bilde](https://github.com/user-attachments/assets/ade112fa-dc19-4f68-bc82-3f5b48bff0ac)

**Version**<br />
Python-Version: 3.11.7<br />
opcua-asyncio Version: asyncua == 1.1.5
SourceTimeStamp is identical to ServerTimeStamp
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1754/comments
1
2024-12-03T11:54:49Z
2024-12-03T12:47:08Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1754
2,714,815,068
1,754
[ "FreeOpcUa", "opcua-asyncio" ]
Why are we unable to get a data change notification when nodes become inactive and then active again? The handle does not change, and the item is still present in the monitored items dictionary.
Unable to get data change notification
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1751/comments
0
2024-11-21T08:44:28Z
2024-11-21T08:44:28Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1751
2,678,563,887
1,751
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
Unable to get client subscribe to work.

**To Reproduce**<br />
See code [1].

**Expected behavior**<br />
If data is changed on the server, the client should be notified via the subscription.

**Version**<br />
Python-Version: 3.11.9<br />
opcua-asyncio Version (e.g. master branch, 0.9): 1.1.5

[1]
```python
# server.py
import asyncio
import logging
from asyncua import Server, Client, Node
from datetime import datetime


class OPCUAServer:
    def __init__(self, endpoint: str = "opc.tcp://0.0.0.0:4840/freeopcua/server/"):
        self.endpoint = endpoint
        self.server = Server()
        self.trigger_var = None
        self.part_id_var = None
        self.namespace_idx = None

    async def init(self):
        await self.server.init()
        self.server.set_endpoint(self.endpoint)
        # Set server name
        await self.server.set_application_uri("urn:example:opcua:server")
        # Get Objects node
        objects = self.server.get_objects_node()
        # Create custom namespace
        uri = "http://examples.freeopcua.github.io"
        self.namespace_idx = await self.server.register_namespace(uri)
        # Load XML nodeset
        await self.server.import_xml("UA_NodeSet.xml")
        # Create a new object
        myobj = await objects.add_object(self.namespace_idx, "Process")
        # Create variables
        self.trigger_var = await myobj.add_variable(
            self.namespace_idx, "Trigger", False
        )
        self.part_id_var = await myobj.add_variable(self.namespace_idx, "PartID", "")
        # Set variables writable
        await self.trigger_var.set_writable()
        await self.part_id_var.set_writable()
        print(f"Server namespace index: {self.namespace_idx}")
        print(f"Trigger node id: {self.trigger_var.nodeid}")
        print(f"PartID node id: {self.part_id_var.nodeid}")

    async def start(self):
        async with self.server:
            while True:
                # Simulate trigger and part ID updates
                current_time = datetime.now().strftime("%H:%M:%S")
                await self.trigger_var.write_value(True)
                name = (await self.trigger_var.read_browse_name()).Name
                value = await self.trigger_var.read_value()
                print(f"{name} = {value}")
                await self.part_id_var.write_value(f"PART_{current_time}")
                name = (await self.part_id_var.read_browse_name()).Name
                value = await self.part_id_var.read_value()
                print(f"{name} = {value}")
                # Wait for 5 seconds before next update
                await asyncio.sleep(5)
                await self.trigger_var.write_value(False)
                name = (await self.trigger_var.read_browse_name()).Name
                value = await self.trigger_var.read_value()
                print(f"{name} = {value}")
                await asyncio.sleep(5)


# client.py
class SubscriptionHandler:
    def datachange_notification(self, node: Node, val, data):
        try:
            node_name = node
            print(f"New value for {node_name}: {val} {data=}")
        except Exception as e:
            print(f"Error in notification handler: {e}")


class OPCUAClient:
    def __init__(self, url: str = "opc.tcp://localhost:4840/freeopcua/server/"):
        self.url = url
        self.client = Client(url=self.url)

    async def subscribe_to_variables(self):
        async with self.client:
            try:
                # Find the namespace index
                uri = "http://examples.freeopcua.github.io"
                nsidx = await self.client.get_namespace_index(uri)
                print(f"Client namespace index: {nsidx}")
                # Get the Process node first
                objects = self.client.get_objects_node()
                process_node = await objects.get_child(f"{nsidx}:Process")
                # Get variables using their browse paths
                trigger_node = await process_node.get_child(f"{nsidx}:Trigger")
                part_id_node = await process_node.get_child(f"{nsidx}:PartID")
                print(f"Found trigger node: {trigger_node.nodeid}")
                print(f"Found part_id node: {part_id_node.nodeid}")
                # Create subscription
                handler = SubscriptionHandler()
                subscription = await self.client.create_subscription(100, handler=handler)
                await subscription.subscribe_data_change(
                    [trigger_node, part_id_node], sampling_interval=0
                )
                # Keep the client running
                while True:
                    print("ZZzzzZZzzZ!")
                    await asyncio.sleep(1)
            except Exception as e:
                print(f"Error in client: {e}")
                raise


# Example XML configuration (UA_NodeSet.xml)
XML_CONTENT = """<?xml version="1.0" encoding="utf-8"?>
<UANodeSet xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:uax="http://opcfoundation.org/UA/2008/02/Types.xsd"
           xmlns="http://opcfoundation.org/UA/2011/03/UANodeSet.xsd">
  <NamespaceUris>
    <Uri>http://examples.freeopcua.github.io</Uri>
  </NamespaceUris>
  <UAObject NodeId="ns=1;i=1" BrowseName="1:Process">
    <DisplayName>Process</DisplayName>
    <References>
      <Reference ReferenceType="HasComponent" IsForward="false">i=85</Reference>
    </References>
  </UAObject>
  <UAVariable NodeId="ns=1;i=2" BrowseName="1:Trigger" DataType="Boolean">
    <DisplayName>Trigger</DisplayName>
    <References>
      <Reference ReferenceType="HasComponent" IsForward="false">ns=1;i=1</Reference>
    </References>
  </UAVariable>
  <UAVariable NodeId="ns=1;i=3" BrowseName="1:PartID" DataType="String">
    <DisplayName>PartID</DisplayName>
    <References>
      <Reference ReferenceType="HasComponent" IsForward="false">ns=1;i=1</Reference>
    </References>
  </UAVariable>
</UANodeSet>
"""


# main.py
async def main():
    # Save XML configuration
    with open("UA_NodeSet.xml", "w") as f:
        f.write(XML_CONTENT)
    # Create and start server
    server = OPCUAServer()
    await server.init()
    # Create and start client
    client = OPCUAClient()
    # Run both server and client concurrently
    await asyncio.gather(server.start(), client.subscribe_to_variables())


if __name__ == "__main__":
    # logging.basicConfig(level=logging.INFO)
    asyncio.run(main())
```
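As an aside, the embedded nodeset XML can be sanity-checked offline before `import_xml` ever sees it. A throwaway sketch using only the standard library (the nodeset string here is trimmed to the essentials of the one above):

```python
import xml.etree.ElementTree as ET

# Trimmed version of the UA_NodeSet.xml content from the report above
NODESET = """<?xml version="1.0" encoding="utf-8"?>
<UANodeSet xmlns="http://opcfoundation.org/UA/2011/03/UANodeSet.xsd">
  <NamespaceUris>
    <Uri>http://examples.freeopcua.github.io</Uri>
  </NamespaceUris>
  <UAObject NodeId="ns=1;i=1" BrowseName="1:Process"/>
  <UAVariable NodeId="ns=1;i=2" BrowseName="1:Trigger" DataType="Boolean"/>
  <UAVariable NodeId="ns=1;i=3" BrowseName="1:PartID" DataType="String"/>
</UANodeSet>"""

NS = "{http://opcfoundation.org/UA/2011/03/UANodeSet.xsd}"

# Parsing proves the document is well-formed XML
root = ET.fromstring(NODESET)

# Collect the NodeIds of every UA* node element (skipping NamespaceUris)
node_ids = [el.attrib["NodeId"] for el in root if el.tag.startswith(NS + "UA")]
print(node_ids)  # ['ns=1;i=1', 'ns=1;i=2', 'ns=1;i=3']
```

This does not validate against the UANodeSet schema, but it catches malformed XML and missing `NodeId` attributes before the server import is in the loop.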
Unable to get subscribe example working
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1746/comments
2
2024-11-11T16:01:16Z
2024-11-14T13:54:30Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1746
2,649,750,336
1,746
[ "FreeOpcUa", "opcua-asyncio" ]
I took the very basic server example and changed the protocol from `opc.tcp` to `opc.http`. I want to deploy the server to Heroku but that apparently does not support tcp endpoints and requires http/s. This is the server:

```python
import asyncio
from asyncua import Server


async def main():
    server = Server()
    await server.init()
    server.set_endpoint('opc.http://0.0.0.0:4840/')
    uri = "http://examples.freeopcua.github.io"
    idx = await server.register_namespace(uri)
    myobj = await server.nodes.objects.add_object(idx, "MyObject")
    myvar = await myobj.add_variable(idx, "MyVariable", 6.7)
    await myvar.set_writable()
    async with server:
        while True:
            await asyncio.sleep(1)
            new_val = await myvar.get_value() + 0.1
            await myvar.write_value(new_val)


if __name__ == '__main__':
    print('starting server')
    asyncio.run(main())
```

When connecting with a basic client:

```python
from asyncua import Client

url = "opc.http://0.0.0.0:4840/freeopcua/server/"
async with Client(url=url) as client:
    _logger.info("Root node is: %r", client.nodes.root)
```

I'm getting this error:

```
[...]
INFO:asyncua.client.client:find_endpoint [EndpointDescription(EndpointUrl='opc.http://0.0.0.0:4840/', Server=ApplicationDescription(ApplicationUri='urn:freeopcua:python:server', ProductUri='urn:freeopcua.github.io:python:server', ApplicationName=LocalizedText(Locale=None, Text='FreeOpcUa Python Server'), ApplicationType_=<ApplicationType.ClientAndServer: 2>, GatewayServerUri=None, DiscoveryProfileUri=None, DiscoveryUrls=['opc.http://0.0.0.0:4840/']), ServerCertificate=None, SecurityMode=<MessageSecurityMode.None_: 1>, SecurityPolicyUri='http://opcfoundation.org/UA/SecurityPolicy#None', UserIdentityTokens=[UserTokenPolicy(PolicyId='anonymous', TokenType=<UserTokenType.Anonymous: 0>, IssuedTokenType=None, IssuerEndpointUrl=None, SecurityPolicyUri=None), UserTokenPolicy(PolicyId='certificate_basic256sha256', TokenType=<UserTokenType.Certificate: 2>, IssuedTokenType=None, IssuerEndpointUrl=None, SecurityPolicyUri=None), UserTokenPolicy(PolicyId='username', TokenType=<UserTokenType.UserName: 1>, IssuedTokenType=None, IssuerEndpointUrl=None, SecurityPolicyUri=None)], TransportProfileUri='http://opcfoundation.org/UA-Profile/Transport/uatcp-uasc-uabinary', SecurityLevel=0)] <MessageSecurityMode.None_: 1> 'http://opcfoundation.org/UA/SecurityPolicy#None'
INFO:asyncua.client.ua_client.UASocketProtocol:close_secure_channel
INFO:asyncua.client.ua_client.UASocketProtocol:Request to close socket received
INFO:asyncua.client.ua_client.UASocketProtocol:Socket has closed connection
Traceback (most recent call last):
  File "cloud_demo_app/client/main.py", line 76, in <module>
    asyncio.run(main())
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "cloud_demo_app/client/main.py", line 26, in main
    async with Client(url=url) as client:
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 95, in __aenter__
    await self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 311, in connect
    await self.create_session()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 511, in create_session
    ep = Client.find_endpoint(response.ServerEndpoints, self.security_policy.Mode, self.security_policy.URI)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 130, in find_endpoint
    raise ua.UaError(f"No matching endpoints: {security_mode}, {policy_uri}")
asyncua.ua.uaerrors._base.UaError: No matching endpoints: 1, http://opcfoundation.org/UA/SecurityPolicy#None
```

Do you have a hint how to work with http over tcp?
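For context on the final exception: `Client.find_endpoint` matches the returned endpoints on security mode and policy URI, and (at least in the asyncua sources I have read) it also requires the endpoint URL to use the `opc.tcp` scheme, so an otherwise identical `opc.http` endpoint is skipped. A simplified standalone model of that check (not the library code itself):

```python
OPC_TCP_SCHEME = "opc.tcp"


def find_endpoint(endpoints, security_mode, policy_uri):
    """Simplified model: each endpoint is a (url, mode, policy_uri) triple."""
    for url, mode, policy in endpoints:
        # Note the scheme check: anything that is not opc.tcp is skipped,
        # even when mode and policy match exactly.
        if url.startswith(OPC_TCP_SCHEME) and mode == security_mode and policy == policy_uri:
            return url
    raise ValueError(f"No matching endpoints: {security_mode}, {policy_uri}")


NONE_POLICY = "http://opcfoundation.org/UA/SecurityPolicy#None"
try:
    find_endpoint([("opc.http://0.0.0.0:4840/", 1, NONE_POLICY)], 1, NONE_POLICY)
except ValueError as exc:
    print(exc)  # No matching endpoints: 1, http://opcfoundation.org/UA/SecurityPolicy#None
```

That would explain why the error appears even though mode and policy in the log match the client's request exactly.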
Use http instead of tcp
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1745/comments
0
2024-11-11T09:08:23Z
2024-11-11T09:08:36Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1745
2,648,662,914
1,745
[ "FreeOpcUa", "opcua-asyncio" ]
I took the very basic server example and changed the protocol from `opc.tcp` to `opc.http`. I want to deploy the server to Heroku but that apparently does not support tcp endpoints and requires http/s. This is the server:

```python
import asyncio
from asyncua import Server


async def main():
    server = Server()
    await server.init()
    server.set_endpoint('opc.http://0.0.0.0:4840/freeopcua/server/')
    uri = "http://examples.freeopcua.github.io"
    idx = await server.register_namespace(uri)
    myobj = await server.nodes.objects.add_object(idx, "MyObject")
    myvar = await myobj.add_variable(idx, "MyVariable", 6.7)
    await myvar.set_writable()
    async with server:
        while True:
            await asyncio.sleep(1)
            new_val = await myvar.get_value() + 0.1
            await myvar.write_value(new_val)


if __name__ == '__main__':
    print('starting server')
    asyncio.run(main())
```

When connecting with a basic client:

```python
from asyncua import Client

url = "opc.http://0.0.0.0:4840/freeopcua/server/"
async with Client(url=url) as client:
    _logger.info("Root node is: %r", client.nodes.root)
```

I'm getting this error:

```
[...]
INFO:asyncua.client.client:find_endpoint [EndpointDescription(EndpointUrl='opc.http://0.0.0.0:4840/', Server=ApplicationDescription(ApplicationUri='urn:freeopcua:python:server', ProductUri='urn:freeopcua.github.io:python:server', ApplicationName=LocalizedText(Locale=None, Text='FreeOpcUa Python Server'), ApplicationType_=<ApplicationType.ClientAndServer: 2>, GatewayServerUri=None, DiscoveryProfileUri=None, DiscoveryUrls=['opc.http://0.0.0.0:4840/']), ServerCertificate=None, SecurityMode=<MessageSecurityMode.None_: 1>, SecurityPolicyUri='http://opcfoundation.org/UA/SecurityPolicy#None', UserIdentityTokens=[UserTokenPolicy(PolicyId='anonymous', TokenType=<UserTokenType.Anonymous: 0>, IssuedTokenType=None, IssuerEndpointUrl=None, SecurityPolicyUri=None), UserTokenPolicy(PolicyId='certificate_basic256sha256', TokenType=<UserTokenType.Certificate: 2>, IssuedTokenType=None, IssuerEndpointUrl=None, SecurityPolicyUri=None), UserTokenPolicy(PolicyId='username', TokenType=<UserTokenType.UserName: 1>, IssuedTokenType=None, IssuerEndpointUrl=None, SecurityPolicyUri=None)], TransportProfileUri='http://opcfoundation.org/UA-Profile/Transport/uatcp-uasc-uabinary', SecurityLevel=0)] <MessageSecurityMode.None_: 1> 'http://opcfoundation.org/UA/SecurityPolicy#None'
INFO:asyncua.client.ua_client.UASocketProtocol:close_secure_channel
INFO:asyncua.client.ua_client.UASocketProtocol:Request to close socket received
INFO:asyncua.client.ua_client.UASocketProtocol:Socket has closed connection
Traceback (most recent call last):
  File "cloud_demo_app/client/main.py", line 76, in <module>
    asyncio.run(main())
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "cloud_demo_app/client/main.py", line 26, in main
    async with Client(url=url) as client:
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 95, in __aenter__
    await self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 311, in connect
    await self.create_session()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 511, in create_session
    ep = Client.find_endpoint(response.ServerEndpoints, self.security_policy.Mode, self.security_policy.URI)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/asyncua/client/client.py", line 130, in find_endpoint
    raise ua.UaError(f"No matching endpoints: {security_mode}, {policy_uri}")
asyncua.ua.uaerrors._base.UaError: No matching endpoints: 1, http://opcfoundation.org/UA/SecurityPolicy#None
```

Do you have a hint how to work with http over tcp?
Use http instead of tcp
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1744/comments
2
2024-11-11T09:08:22Z
2024-11-11T12:26:58Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1744
2,648,662,883
1,744
[ "FreeOpcUa", "opcua-asyncio" ]
README.md has this command for running the tests: `python -m pip install -r dev_requirements.txt`. However, dev_requirements.txt is not in the repo.
dev_requirements.py is missing
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1740/comments
2
2024-10-29T16:39:03Z
2024-10-31T09:42:10Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1740
2,621,795,494
1,740
[ "FreeOpcUa", "opcua-asyncio" ]
**Description** <br />
I'm working with a nodeset generated by CODESYS for a PLC, which has been causing several issues. While most seem to stem from the CODESYS side, one issue appears to be an unnecessary limitation in the asyncua library itself. Specifically, the nodeset generated uses string-based identifiers for ExtensionObjects, as shown in the abbreviated XML example below. When exporting the nodeset to XML, the check for integer identifiers in <code>xmlexporter.py/_value_to_etree()</code> raises a UaInvalidParameterError:

```xml
<UAVariable/>
<DisplayName/>
<Description/>
<References/>
<Value>
  <uax:ExtensionObject>
    <uax:TypeId>
      <uax:Identifier>ns=3;s=|type|My_String_Identifier</uax:Identifier>
    </uax:TypeId>
    <uax:Body/>
  </uax:ExtensionObject>
</Value>
</UAVariable>
```

From UA Part 6: Mappings (V. 1.05.03) / section 5.2.2.15:

> "Server implementers should use namespace-qualified numeric NodeIds for any DataTypeEncoding Objects they define. This will minimize the overhead introduced by packing Structured DataType values into an ExtensionObject."

My understanding of this part of the standard is that it recommends—but does not strictly require—the use of numeric NodeIds for ExtensionObjects. The standard only mandates numeric identifiers explicitly for ExtensionObjects in the UA namespace (section 5.2.2.15). Currently, my nodeset exports correctly if I disable the numeric identifier check.

**To Reproduce**<br />
Unfortunately I can't provide the complete nodeset, but any ExtensionObject with a String TypeId should trigger the behavior.

**Expected behavior**<br />
Allow string IDs for the types of ExtensionObjects.

**Screenshots**<br />
![image](https://github.com/user-attachments/assets/007bf3f2-4501-43fa-8166-1695ddb9898d)
UaExpert doesn't have any problems with string IDs for the types of ExtensionObjects.

**Versions**<br />
Python-Version: 3.12.3<br />
opcua-asyncio Version: 1.1.5 (installed via Anaconda)
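For illustration, the distinction the exporter's check draws is between the identifier kinds in NodeId string notation (`i=` numeric vs `s=` string, plus `g=` guid and `b=` bytestring). A minimal, purely hypothetical classifier:

```python
def identifier_kind(nodeid: str) -> str:
    """Classify the identifier part of a NodeId string such as
    'ns=3;s=|type|Foo' (string) or 'ns=0;i=15957' (numeric)."""
    ident = nodeid.split(";")[-1]          # drop the optional 'ns=<n>;' prefix
    prefix = ident.split("=", 1)[0]        # 'i', 's', 'g' or 'b'
    return {"i": "numeric", "s": "string", "g": "guid", "b": "bytestring"}[prefix]


print(identifier_kind("ns=3;s=|type|My_String_Identifier"))  # string
print(identifier_kind("i=15957"))                            # numeric
```

A CODESYS-generated TypeId like the one in the XML above classifies as "string", which is exactly what the exporter currently rejects.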
ExtensionObjects with String TypeIds
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1739/comments
0
2024-10-29T09:03:51Z
2024-10-29T09:07:11Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1739
2,620,587,743
1,739
[ "FreeOpcUa", "opcua-asyncio" ]
Hi, I tried starting `./examples/server_minimal.py` and got a wall of INFO messages about non-existing nodes. I tried starting it on the current version from master and on version 1.1.5, on both Debian 12 and Ubuntu 22.04 — same result. Using Python 3.10.

```
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=11715, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15958, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15959, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15960, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15961, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15962, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15963, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=15964, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=16134, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=16135, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
INFO:asyncua.server.address_space:add_node: while adding node NumericNodeId(Identifier=16136, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>) does not exists
```
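The messages all follow one pattern; a throwaway parser (purely illustrative) extracts the child/parent identifier pairs, which makes it easy to see that nearly all of them hang off the same missing parent, 15957:

```python
import re

# One representative line from the log wall above
LOG_LINE = (
    "INFO:asyncua.server.address_space:add_node: while adding node "
    "NumericNodeId(Identifier=15958, NamespaceIndex=0, NodeIdType=<NodeIdType.Numeric: 2>), "
    "requested parent node NumericNodeId(Identifier=15957, NamespaceIndex=0, "
    "NodeIdType=<NodeIdType.Numeric: 2>) does not exists"
)

# Capture the child identifier, then (lazily) skip ahead to the parent identifier
PATTERN = re.compile(
    r"adding node NumericNodeId\(Identifier=(\d+).*?"
    r"parent node NumericNodeId\(Identifier=(\d+)"
)

child, parent = PATTERN.search(LOG_LINE).groups()
print(child, parent)  # 15958 15957
```

Running this over the full log would reduce the wall to a short list of (child, parent) pairs, which is handier when reporting which parent nodes are actually missing.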
Requested parent node does not exist in server-minimal.py
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1733/comments
2
2024-10-25T11:06:25Z
2025-06-16T09:09:59Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1733
2,613,794,137
1,733
[ "FreeOpcUa", "opcua-asyncio" ]
**Describe the bug** <br />
Initializing a server with `server.init(Path('cache_file'))` will create a `cache_file.db` in the directory the script was executed from. Running the script a second time will recreate the same cache file instead of loading it.

**Solution** <br />
In the `internal_server.py` file, the `load_standard_address_space` function must be modified so that `Path(...).with_suffix('.db')` is recognized.

Current code snippet:

```python
async def load_standard_address_space(self, shelf_file: Optional[Path] = None):
    if shelf_file:
        is_file = await asyncio.get_running_loop().run_in_executor(
            None, Path.is_file, shelf_file
        )
```

Working code snippet:

```python
async def load_standard_address_space(self, shelf_file: Optional[Path] = None):
    if shelf_file:
        is_file = await asyncio.get_running_loop().run_in_executor(
            None, Path.is_file, shelf_file
        ) or await asyncio.get_running_loop().run_in_executor(
            None, Path.is_file, shelf_file.with_suffix('.db')
        )
```

**To Reproduce**<br />
Call this script twice:

```python
import asyncio
from pyinstrument import Profiler
from asyncua import Server
from pathlib import Path


async def opcua_startup():
    server = Server()
    await server.init(Path('cache_file'))
    print("Server initialized")
    server.set_endpoint('opc.tcp://localhost:4840/freeopcua/server/')
    server.set_server_name('FreeOpcUa Example Server')
    await server.start()
    print("Server started")
    return True


async def main():
    p = Profiler()
    with p:
        await opcua_startup()
    p.print()


if __name__ == '__main__':
    asyncio.run(main())
```

**Expected behavior**<br />
If this script is called twice, the existing cache file should be detected and used, which is much faster.

**Version**<br />
Python-Version: 3.9<br />
opcua-asyncio Version: 1.1.5

Hardware: RevolutionPi / RaspberryPi
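The fix hinges on `pathlib.Path.with_suffix`, which builds the `.db` name that some `shelve`/`dbm` backends append to the file name they are given. A quick standalone illustration of why the bare path never matches the file on disk:

```python
from pathlib import Path

shelf_file = Path("cache_file")       # path handed to server.init()
on_disk = Path("cache_file.db")       # name a dbm backend may actually write

# The bare path does not equal the file that ends up on disk...
print(shelf_file == on_disk)                     # False
# ...but with_suffix() appends the extension and matches it:
print(shelf_file.with_suffix(".db") == on_disk)  # True
```

So checking both `shelf_file` and `shelf_file.with_suffix('.db')`, as in the working snippet, covers both backend naming behaviors.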
Could not load shelf file on server init if it already exists
https://api.github.com/repos/FreeOpcUa/opcua-asyncio/issues/1732/comments
0
2024-10-25T08:34:29Z
2024-10-25T08:36:15Z
https://github.com/FreeOpcUa/opcua-asyncio/issues/1732
2,613,457,147
1,732