repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
seleniumbase/SeleniumBase | web-scraping | 3,128 | Looks like CF shipped a new update, only minutes after my last release | ### Looks like CF shipped a new update, only minutes after my last release
Let the games begin. They must've been waiting, thinking that I don't ship more than one release in one day.
Currently, `sb.uc_gui_click_captcha()` is not clicking at the correct coordinates anymore. I'm on it! | closed | 2024-09-12T17:40:35Z | 2024-09-12T21:50:15Z | https://github.com/seleniumbase/SeleniumBase/issues/3128 | [
"UC Mode / CDP Mode",
"Fun"
] | mdmintz | 2 |
yt-dlp/yt-dlp | python | 12,116 | [reddit] video downloading failure | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
[f.txt](https://github.com/user-attachments/files/18448725/f.txt)
reddit downloading problem in latest yt-dlp nightly and master when using --http-chunk-size
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-U', '--config-location', '/data/data/com.termux/files/home/.config/yt-dlp/video/config', 'https://www.reddit.com/r/sports/comments/1i2h287/arsenals_leandro_trossard_with_with_ridiculous/'] [debug] | Config "/data/data/com.termux/files/home/.config/yt-dlp/video/config": ['--paths', 'storage/shared/Youtube', '-N', '8', '--retries', '6', '--fragment-retries', '1', '-r', '75K', '--buffer-size', '40K', '--no-resize-buffer', '--socket-timeout', '30', '--sponsorblock-api', 'https://sponsor.ajay.app', '-4', '--cookies', 'storage/shared/Download/cookies1.txt', '--trim-filenames', '183', '--write-thumbnail', '--convert-thumbnails', 'jpg', '--no-mtime', '--sponsorblock-remove', 'sponsor', '--write-description', '--extractor-args', 'youtube:player_client=-ios,mweb,web_creator,-web,-tv,web_safari,-web_embedded,-mweb_music,-web_music;po_token=mweb+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_creator+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,tv+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_safari+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_embedded+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,mweb_music+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_music+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO;formats=dashy;skip=translated_subs,hls;comment_sort=top;max_comments=7000,all,7000,500', '--sponsorblock-mark', 'all', '--embed-chapters', '--merge-output-format', 'mkv', '--video-multistreams', '--audio-multistreams', '-S', '+hasaud,vcodec:vp9,vext:mkv', '-f', 'bv*+bv*.2+ba+ba.2/bv*+bv*.2/bv*+ba', '--write-subs', '--write-auto-subs', '--sub-langs', 'en,en-US,en-us,en-gb,en-GB,vi,vi-VN,vi-vn,vi-en-GB,.*-orig', '-P', 'temp:storage/shared/Android/data/com.termux/files', '-o', '%(title).120B %(resolution)s %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s.%(ext)s', '--parse-metadata', '%(title)s:%(meta_title)s', '--parse-metadata', '%(uploader)s:%(artist)s', '--no-embed-info-json', '--replace-in-metadata', 'video:title', ' #.+? ', ' ', '--replace-in-metadata', 'video:title', '^#.+? 
', '', '--replace-in-metadata', 'video:title', ' #.+?$', '', '--write-annotations', '--mark-watched', '--no-windows-filenames', '--http-chunk-size', '524288', '--print-to-file', '%(thumbnails_table)+#l', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 0.txt', '--print-to-file', '%(playlist:thumbnails_table)+#l', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 1.txt', '--retry-sleep', 'exp=1:8', '--retry-sleep', 'extractor:exp=1:5', '--retry-sleep', 'fragment:exp=1:20', '--print-to-file', '%(filename)s', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 2.txt', '--print-to-file', '%(formats_table)+#l', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 3.txt', '--print-to-file', '%(comments)+#j', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 4.json', '--print-to-file', '%(duration>%H:%M:%S)+j', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 5.json', '--abort-on-unavailable-fragments', '--sleep-subtitles', '3', '--sleep-requests', '1', '--sleep-interval', '5', '--max-sleep-interval', '70', '--verbose', '--write-comments', '--fixup', 'warn', '--embed-metadata', '--remux-video', 'mkv', '--no-check-certificate', '--add-headers', 'Accept:*/*', '--add-headers', 'Accept-Encoding:gzip, deflate', '--add-headers', 'User-Agent:Mozilla/5.0 (X11; Linux x86_64; rv:133.0) Gecko/20100101 Firefox/133.0', '--replace-in-metadata', 'video:title', ' ?[\\U00010000-\\U0010ffff]+', '', '--no-quiet', '--parse-metadata', '%(uploader,channel,creator,artist|null)s:^(?P<uploader>.*?)(?:(?= - Topic)|$)']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds [dade5e35c] (zip)
[debug] Python 3.12.8 (CPython aarch64 64bit) - Linux-4.4.248-hadesKernel-v2.0-greatlte-aarch64-with-libc (OpenSSL 3.3.2 3 Sep 2024, libc)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.2, urllib3-2.3.0, websockets-14.1 [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2025.01.16.232854/SHA2-256SUMS
Current version: nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds Latest version: nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds Current Build Hash: 3d89f1b3c060659dc01b492147eed3388da96e14a515a467116403abbfc04e3c
Updating to nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds ... [debug] Downloading yt-dlp from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2025.01.16.232854/yt-dlp
Updated yt-dlp to nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds
[debug] Restarting: python3 /data/data/com.termux/files/home/.local/bin/yt-dlp -U --config-location /data/data/com.termux/files/home/.config/yt-dlp/video/config https://www.reddit.com/r/sports/comments/1i2h287/arsenals_leandro_trossard_with_with_ridiculous/
[debug] Command-line config: ['-U', '--config-location', '/data/data/com.termux/files/home/.config/yt-dlp/video/config', 'https://www.reddit.com/r/sports/comments/1i2h287/arsenals_leandro_trossard_with_with_ridiculous/']
[debug] | Config "/data/data/com.termux/files/home/.config/yt-dlp/video/config": ['--paths', 'storage/shared/Youtube', '-N', '8', '--retries', '6', '--fragment-retries', '1', '-r', '75K', '--buffer-size', '40K', '--no-resize-buffer', '--socket-timeout', '30', '--sponsorblock-api', 'https://sponsor.ajay.app', '-4', '--cookies', 'storage/shared/Download/cookies1.txt', '--trim-filenames', '183', '--write-thumbnail', '--convert-thumbnails', 'jpg', '--no-mtime', '--sponsorblock-remove', 'sponsor', '--write-description', '--extractor-args', 'youtube:player_client=-ios,mweb,web_creator,-web,-tv,web_safari,-web_embedded,-mweb_music,-web_music;po_token=mweb+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_creator+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,tv+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_safari+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_embedded+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,mweb_music+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO,web_music+MlsHypOFoTEUBAfyOPOxe4gMsfJiYsLXMj1Z7C9TTJJ115xCndEDrI-lwue0FPQ1d8uenjdEvq_ffY_G6hQKmJjDG0SsknBVMq1RjjVSl0vE4nXMUYJ81PA1KYRO;formats=dashy;skip=translated_subs,hls;comment_sort=top;max_comments=7000,all,7000,500', '--sponsorblock-mark', 'all', '--embed-chapters', '--merge-output-format', 'mkv', '--video-multistreams', '--audio-multistreams', '-S', '+hasaud,vcodec:vp9,vext:mkv', '-f', 'bv*+bv*.2+ba+ba.2/bv*+bv*.2/bv*+ba', '--write-subs', '--write-auto-subs', '--sub-langs', 'en,en-US,en-us,en-gb,en-GB,vi,vi-VN,vi-vn,vi-en-GB,.*-orig', '-P', 'temp:storage/shared/Android/data/com.termux/files', '-o', '%(title).120B %(resolution)s %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s.%(ext)s', '--parse-metadata', '%(title)s:%(meta_title)s', '--parse-metadata', '%(uploader)s:%(artist)s', '--no-embed-info-json', '--replace-in-metadata', 'video:title', ' #.+? ', ' ', '--replace-in-metadata', 'video:title', '^#.+? 
', '', '--replace-in-metadata', 'video:title', ' #.+?$', '', '--write-annotations', '--mark-watched', '--no-windows-filenames', '--http-chunk-size', '524288', '--print-to-file', '%(thumbnails_table)+#l', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 0.txt', '--print-to-file', '%(playlist:thumbnails_table)+#l', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 1.txt', '--retry-sleep', 'exp=1:8', '--retry-sleep', 'extractor:exp=1:5', '--retry-sleep', 'fragment:exp=1:20', '--print-to-file', '%(filename)s', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 2.txt', '--print-to-file', '%(formats_table)+#l', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 3.txt', '--print-to-file', '%(comments)+#j', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 4.json', '--print-to-file', '%(duration>%H:%M:%S)+j', '%(title).120B %(id).30B %(uploader).29B %(availability)s %(upload_date>%d/%m/%Y)s 5.json', '--abort-on-unavailable-fragments', '--sleep-subtitles', '3', '--sleep-requests', '1', '--sleep-interval', '5', '--max-sleep-interval', '70', '--verbose', '--write-comments', '--fixup', 'warn', '--embed-metadata', '--remux-video', 'mkv', '--no-check-certificate', '--add-headers', 'Accept:*/*', '--add-headers', 'Accept-Encoding:gzip, deflate', '--add-headers', 'User-Agent:Mozilla/5.0 (X11; Linux x86_64; rv:133.0) Gecko/20100101 Firefox/133.0', '--replace-in-metadata', 'video:title', ' ?[\\U00010000-\\U0010ffff]+', '', '--no-quiet', '--parse-metadata', '%(uploader,channel,creator,artist|null)s:^(?P<uploader>.*?)(?:(?= - Topic)|$)']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds [164368610] (zip) [debug] Python 3.12.8 (CPython aarch64 64bit) - Linux-4.4.248-hadesKernel-v2.0-greatlte-aarch64-with-libc (OpenSSL 3.3.2 3 Sep 2024, libc) [debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.2, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest Latest version: nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds)
[Reddit] Extracting URL: https://www.reddit.com/r/sports/comments/1i2h287/arsenals_leandro_trossard_with_with_ridiculous/ [Reddit] 1i2h287: Downloading JSON metadata
[Reddit] Sleeping 1.0 seconds ...
[Reddit] 1i2h287: Downloading m3u8 information [Reddit] Sleeping 1.0 seconds ...
[Reddit] 1i2h287: Downloading MPD manifest
[info] sqcrgf2cbade1: Downloading subtitles: en [debug] Sort order given by user: +hasaud, vcodec:vp9, vext:mkv
[debug] Formats sorted by: hasvid, ie_pref, +hasaud, vcodec:vp9(9), vext:mkv(2), lang, quality, res, fps, hdr:12(7), channels, acodec, size, br, asr, proto, aext, source, id
[debug] Searching for '(?P<meta_title>.+)' in '%(title)s'
[MetadataParser] Parsed meta_title from '%(title)s': 'Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win '
[debug] Searching for '(?P<artist>.+)' in '%(uploader)s' [MetadataParser] Parsed artist from '%(uploader)s': 'Mahatma_Gone_D'
[debug] Searching for '^(?P<uploader>.*?)(?:(?= - Topic)|$)' in '%(uploader,channel,creator,artist|null)s'
[MetadataParser] Parsed uploader from '%(uploader,channel,creator,artist|null)s': 'Mahatma_Gone_D' [SponsorBlock] SponsorBlock is not supported for Reddit
[info] sqcrgf2cbade1: Downloading 1 format(s): hls-957+hls-911+dash-5+dash-4 [debug] Replacing all ' #.+? ' in title with ' '
[MetadataParser] Did not find ' #.+? ' in title
[debug] Replacing all '^#.+? ' in title with '' [MetadataParser] Did not find '^#.+? ' in title
[debug] Replacing all ' #.+?$' in title with ''
[MetadataParser] Did not find ' #.+?$' in title [debug] Replacing all ' ?[\\U00010000-\\U0010ffff]+' in title with ''
[MetadataParser] Did not find ' ?[\\U00010000-\\U0010ffff]+' in title
[info] Writing '%(thumbnails_table)+#l' to: storage/shared/Youtube/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025 0.txt
[info] Writing '%(playlist:thumbnails_table)+#l' to: storage/shared/Youtube/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025 1.txt
[info] Writing '%(filename)s' to: storage/shared/Youtube/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025 2.txt
[info] Writing '%(formats_table)+#l' to: storage/shared/Youtube/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025 3.txt
[info] Writing '%(comments)+#j' to: storage/shared/Youtube/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025 4.json
[info] Writing '%(duration>%H:%M:%S)+j' to: storage/shared/Youtube/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025 5.json
[info] There's no video description to write
[info] Writing video subtitles to: storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.en.vtt [debug] Invoking hlsnative downloader on "https://v.redd.it/sqcrgf2cbade1/wh_ben_en/index.m3u8"
[download] Sleeping 3.00 seconds ... [hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 2
[download] Destination: storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.en.vtt [download] 50.0% of ~ 14.59KiB at 6.97KiB/s ETA Unknown (frag[download] 25.0% of ~ 29.17KiB at 6.97KiB/s ETA Unknown (frag[download] 100.0% of ~ 7.73KiB at 6.73KiB/s ETA 00:00 (frag 1[download] 94.7% of ~ 8.17KiB at 6.73KiB/s ETA 00:00 (frag 2[download] 100% of 8.20KiB in 00:00:01 at 6.38KiB/s
[info] Downloading video thumbnail 4 ... [info] Writing video thumbnail 4 to: storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.png
[info] Writing video metadata as JSON to: storage/shared/Youtube/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.info.json WARNING: There are no annotations to write.
[ThumbnailsConvertor] Converting thumbnail "storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.png" to jpg [debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i 'file:storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.png' -update 1 -bsf:v mjpeg2jpeg -movflags +faststart 'file:storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.jpg'
[debug] ffmpeg version 7.1 Copyright (c) 2000-2024 the FFmpeg developers built with Android (12470979, +pgo, +bolt, +lto, +mlgo, based on r522817c) clang version 18.0.3 (https://android.googlesource.com/toolchain/llvm-project d8003a456d14a3deb8054cdaa529ffbf02d9b262) configuration: --arch=aarch64 --as=aarch64-linux-android-clang --cc=aarch64-linux-android-clang --cxx=aarch64-linux-android-clang++ --nm=llvm-nm --ar=llvm-ar --ranlib=llvm-ranlib --pkg-config=/home/builder/.termux-build/_cache/android-r27c-api-24-v1/bin/pkg-config --strip=llvm-strip --cross-prefix=aarch64-linux-android- --disable-indevs --disable-outdevs --enable-indev=lavfi --disable-static --disable-symver --enable-cross-compile --enable-gnutls --enable-gpl --enable-version3 --enable-jni --enable-lcms2 --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libharfbuzz --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenmpt --enable-libopus --enable-librav1e --enable-librubberband --enable-libsoxr --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-mediacodec --enable-opencl --enable-shared --prefix=/data/data/com.termux/files/usr --target-os=android --extra-libs=-landroid-glob --disable-vulkan --enable-neon --disable-libfdk-aac
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.100 / 61. 19.100 libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100 libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100 [png @ 0x7fa70b6380] Invalid PNG signature 0xFFD8FFDB00430005.
[image2 @ 0x7fa7123280] Could not find codec parameters for stream 0 (Video: png, none): unspecified size Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'file:storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.png': Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0:0: Video: png, none, 25 fps, 25 tbr, 25 tbn
Stream mapping: Stream #0:0 -> #0:0 (png (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[png @ 0x7fa70b7180] Invalid PNG signature 0xFFD8FFDB00430005. [vist#0:0/png @ 0x7fa70b3300] [dec:png @ 0x7fa715e780] Decoding error: Invalid data found when processing input
[vist#0:0/png @ 0x7fa70b3300] [dec:png @ 0x7fa715e780] Decode error rate 1 exceeds maximum 0.666667
[vist#0:0/png @ 0x7fa70b3300] [dec:png @ 0x7fa715e780] Task finished with error code: -1145393733 (Unknown error 1145393733) Cannot determine format of input 0:0 after EOF
[vist#0:0/png @ 0x7fa70b3300] [dec:png @ 0x7fa715e780] Terminating thread with return code -1145393733 (Unknown error 1145393733) [vf#0:0 @ 0x7fa7177ca0] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 0x7fa7177ca0] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[vost#0:0/mjpeg @ 0x7fa70aac00] Could not open encoder before EOF
[vost#0:0/mjpeg @ 0x7fa70aac00] Task finished with error code: -22 (Invalid argument)
[vost#0:0/mjpeg @ 0x7fa70aac00] Terminating thread with return code -22 (Invalid argument) [out#0/image2 @ 0x7fa713d940] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A
Conversion failed!
ERROR: Preprocessing: Conversion failed!
Traceback (most recent call last):
File "/data/data/com.termux/files/home/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3743, in pre_process
info = self.run_all_pps(key, info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/home/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3736, in run_all_pps info = self.run_pp(pp, info)
^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/home/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3714, in run_pp
files_to_delete, infodict = pp.run(infodict)
^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/home/.local/bin/yt-dlp/yt_dlp/postprocessor/common.py", line 22, in run
ret = func(self, info, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/home/.local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 1130, in run thumbnail_dict['filepath'] = self.convert_thumbnail(original_thumbnail, target_ext)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/home/.local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 1107, in convert_thumbnail self.real_run_ffmpeg(
File "/data/data/com.termux/files/home/.local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 367, in real_run_ffmpeg raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
[debug] Invoking hlsnative downloader on "https://v.redd.it/sqcrgf2cbade1/HLS_360.m3u8"
[download] Sleeping 8.61 seconds ...
[hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 8
[download] Destination: storage/shared/Youtube/storage/shared/Android/data/com.termux/files/Arsenal’s Leandro Trossard with with ridiculous ankle breaker against Pedro Porro in a North London derby win NA sqcrgf2cbade1 Mahatma_Gone_D NA 16⧸01⧸2025.fhls-957.mp4
[download] 1.6% of ~ 2.52MiB at 16.78KiB/s ETA Unknown (frag[download] 1.9% of ~ 4.16MiB at 35.52KiB/s ETA 00:52 (frag 0[download] 3.0% of ~ 3.95MiB at 55.80KiB/s ETA 00:51 (frag 0[download] 4.4% of ~ 3.55MiB at 55.80KiB/s ETA 00:51 (frag 0[download] 5.2% of ~ 3.77MiB at 86.97KiB/s ETA 00:48 (frag 0[download] 9.3% of ~ 2.53MiB at 105.69KiB/s ETA 00:45 (frag 0[download] 5.3% of ~ 5.20MiB at 123.24KiB/s ETA 00:43 (frag 0[download] 6.5% of ~ 4.80MiB at 123.24KiB/s ETA 00:43 (frag 0[download] 7.0% of ~ 5.02MiB at 146.76KiB/s ETA 00:41 (frag 0[download] 10.3% of ~ 3.78MiB at 146.76KiB/s ETA 00:41 (frag 0[download] 6.2% of ~ 6.97MiB at 174.03KiB/s ETA 00:40 (frag 0[download] 6.4% of ~ 7.28MiB at 192.49KiB/s ETA 00:39 (frag 0[download] 7.2% of ~ 7.07MiB at 204.35KiB/s ETA 00:38 (frag 0[download] 8.2% of ~ 6.68MiB at 204.35KiB/s ETA 00:38 (frag 0[download] 11.0% of ~ 5.34MiB at 220.54KiB/s ETA 00:36 (frag 0[download] 8.7% of ~ 7.20MiB at 220.54KiB/s ETA 00:36 (frag 0[download] 10.5% of ~ 5.97MiB at 235.48KiB/s ETA 00:34 (frag 0/8)[download] Got error: Conflicting range. (start=2989764 > end=2989763). Retrying (1/6)... Sleeping 1.00 seconds ...
[download] 8.1% of ~ 8.27MiB at 244.64KiB/s ETA 00:34 (frag 0[download] 8.5% of ~ 8.31MiB at 253.50KiB/s ETA 00:33 (frag 0[download] 8.3% of ~ 8.99MiB at 253.50KiB/s ETA 00:33 (frag 0[download] 8.4% of ~ 9.29MiB at 264.00KiB/s ETA 00:33 (frag 0[download] 8.1% of ~ 10.12MiB at 273.76KiB/s ETA 00:33 (frag 0[download] 9.6% of ~ 9.42MiB at 273.76KiB/s ETA 00:33 (frag 0[download] 9.4% of ~ 9.21MiB at 273.76KiB/s ETA 00:33 (frag 0[download] 9.0% of ~ 10.46MiB at 306.05KiB/s ETA 00:32 (frag 0[download] 8.6% of ~ 11.37MiB at 306.05KiB/s ETA 00:32 (frag 0[download] 9.4% of ~ 10.81MiB at 335.37KiB/s ETA 00:31 (frag 0[download] 9.2% of ~ 11.49MiB at 335.37KiB/s ETA 00:31 (frag 0[download] 9.3% of ~ 11.79MiB at 359.24KiB/s ETA 00:31 (frag 0[download] 10.0% of ~ 11.39MiB at 376.55KiB/s ETA 00:30 (frag 0[download] 10.1% of ~ 11.61MiB at 376.55KiB/s ETA 00:30 (frag 0[download] 9.2% of ~ 13.25MiB at 392.57KiB/s ETA 00:30 (frag 0[download] 9.9% of ~ 12.69MiB at 392.57KiB/s ETA 00:30 (frag 0[download] 9.7% of ~ 13.37MiB at 409.85KiB/s ETA 00:30 (frag 0[download] 9.7% of ~ 13.67MiB at 422.10KiB/s ETA 00:29 (frag 0[download] 10.3% of ~ 13.27MiB at 430.52KiB/s ETA 00:29 (frag 0[download] 10.5% of ~ 13.48MiB at 430.52KiB/s ETA 00:29 (frag 0[download] 9.9% of ~ 14.62MiB at 443.27KiB/s ETA 00:29 (frag 0[download] 10.2% of ~ 14.56MiB at 443.27KiB/s ETA 00:29 (frag 0[download] 10.1% of ~ 15.15MiB at 455.91KiB/s ETA 00:29 (frag 0[download] 10.1% of ~ 15.46MiB at 455.91KiB/s ETA 00:29 (frag 0[download] 10.1% of ~ 15.85MiB at 470.20KiB/s ETA 00:29 (frag 0[download] 10.6% of ~ 15.46MiB at 470.20KiB/s ETA 00:29 (frag 0[download] 10.7% of ~ 15.67MiB at 485.08KiB/s ETA 00:29 (frag 0[download] 10.0% of ~ 17.31MiB at 485.08KiB/s ETA 00:29 (frag 0[download] 10.6% of ~ 16.30MiB at 485.08KiB/s ETA 00:29 (frag 0/8)[download] Got error: Conflicting range. (start=833404 > end=833403). Retrying (1/6)... Sleeping 1.00 seconds ...
[download] 10.4% of ~ 17.04MiB at 496.95KiB/s ETA 00:29 (frag 0[download] 10.0% of ~ 17.95MiB at 500.32KiB/s ETA 00:29 (frag 0[download] 10.6% of ~ 17.39MiB at 500.32KiB/s ETA 00:29 (frag 0[download] 10.4% of ~ 18.07MiB at 502.10KiB/s ETA 00:29 (frag 0[download] 10.5% of ~ 18.37MiB at 505.83KiB/s ETA 00:30 (frag 0[download] 10.9% of ~ 17.97MiB at 505.83KiB/s ETA 00:30 (frag 0[download] 10.7% of ~ 18.64MiB at 514.16KiB/s ETA 00:30 (frag 0[download] 10.5% of ~ 19.32MiB at 514.16KiB/s ETA 00:30 (frag 0[download] 11.3% of ~ 17.26MiB at 514.16KiB/s ETA 00:30 (frag 0[download] 10.8% of ~ 18.25MiB at 514.17KiB/s ETA 00:30 (frag 0/8)[download] Got error: Conflicting range. (start=1175940 > end=1175939). Retrying (1/6)...
Sleeping 1.00 seconds ...
[download] 10.6% of ~ 19.00MiB at 509.16KiB/s ETA 00:30 (frag 0[download] 10.6% of ~ 19.39MiB at 503.39KiB/s ETA 00:31 (frag 0[download] 10.8% of ~ 19.35MiB at 492.26KiB/s ETA 00:32 (frag 0[download] 10.6% of ~ 20.03MiB at 492.26KiB/s ETA 00:32 (frag 0[download] 10.4% of ~ 20.85MiB at 489.37KiB/s ETA 00:32 (frag 0[download] 10.4% of ~ 21.16MiB at 487.99KiB/s ETA 00:33 (frag 0[download] 10.7% of ~ 20.87MiB at 483.94KiB/s ETA 00:34 (frag 0[download] 10.7% of ~ 21.26MiB at 483.94KiB/s ETA 00:34 (frag 0[download] 10.8% of ~ 21.50MiB at 473.83KiB/s ETA 00:35 (frag 0[download] 10.8% of ~ 22.21MiB at 469.20KiB/s ETA 00:36 (frag 0[download] 11.0% of ~ 21.54MiB at 473.83KiB/s ETA 00:35 (frag 0[download] 10.8% of ~ 22.51MiB at 463.88KiB/s ETA 00:37 (frag 0/8)[download] Got error: Conflicting range. (start=2437796 > end=2437795). Retrying (1/6)...
Sleeping 1.00 seconds ...
``` | closed | 2025-01-17T04:37:25Z | 2025-01-18T10:55:03Z | https://github.com/yt-dlp/yt-dlp/issues/12116 | [
"cant-reproduce",
"site-bug",
"triage"
] | error-reporting | 6 |
aeon-toolkit/aeon | scikit-learn | 2,262 | [ajb/feature_selection] is STALE | @TonyBagnall,
ajb/feature_selection has had no activity for 142 days.
This branch will be automatically deleted in 33 days. | closed | 2024-10-28T01:28:12Z | 2024-10-28T16:10:16Z | https://github.com/aeon-toolkit/aeon/issues/2262 | [
"stale branch"
] | aeon-actions-bot[bot] | 1 |
laughingman7743/PyAthena | sqlalchemy | 22 | na | closed | 2017-12-05T19:08:44Z | 2017-12-06T09:41:00Z | https://github.com/laughingman7743/PyAthena/issues/22 | [] | ghost | 0 |
|
langmanus/langmanus | automation | 128 | Help 🆘: "Enter your query" throws an error | I'm using the latest main-branch code; conf.yaml is configured as follows:
model: "deepseek-ai/DeepSeek-V3"
api_key: "sk-xxxx"
api_base: "https://api.siliconflow.cn/v1/chat/completions"
After running `uv run main.py` and typing `123` at the "Enter your query" prompt, it errors out immediately:
File "/Users/admin/Desktop/code/langmanus/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 333, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek-ai/DeepSeek-V3
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
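For reference, the error quoted above is LiteLLM's provider routing: a bare model name such as `deepseek-ai/DeepSeek-V3` carries no provider prefix, so LiteLLM cannot tell which backend to call (the trailing `During task ...` line below is just LangGraph context for the same failure). A minimal sketch of a prefixed call against an OpenAI-compatible endpoint; whether LangManus wants the base URL without the `/chat/completions` suffix in conf.yaml is an assumption here, not something confirmed by the log:
```python
import litellm

# Sketch: OpenAI-compatible endpoints are addressed through the "openai/" prefix.
# The stripped api_base (no /chat/completions) is an assumption for illustration.
response = litellm.completion(
    model="openai/deepseek-ai/DeepSeek-V3",
    api_key="sk-xxxx",
    api_base="https://api.siliconflow.cn/v1",
    messages=[{"role": "user", "content": "123"}],
)
print(response.choices[0].message.content)
```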
During task with name 'coordinator' and id '439c9eb1-9ea1-0fd0-60e8-1ca399a0d8a8' | open | 2025-03-24T10:49:43Z | 2025-03-24T16:18:59Z | https://github.com/langmanus/langmanus/issues/128 | [
"enhancement"
] | yangyl568 | 2 |
roboflow/supervision | machine-learning | 1,633 | labels or label isn't an existing parameter for box_annotate() | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
`labels` (or `label`) isn't an existing parameter for `box_annotate()`. What do we replace it with? Is it even needed? For now I just removed it.
Notebook = https://github.com/roboflow/notebooks/blob/main/notebooks/how-to-auto-train-yolov8-model-with-autodistill.ipynb
box_annotator = sv.BoxAnnotator()
image = cv2.imread(f"C:/Users/jmorde02/DATA/2024-10-18_15-13-59\sample\left_3070_2024-10-18 15-17-30.481540.png")
mask_annotator = sv.MaskAnnotator()
results = base_model.predict(image)
annotated_image = mask_annotator.annotate(
    image.copy(), detections=results
)
images = []
for image_name in image_names:
    image = dataset.images[image_name]
    annotations = dataset.annotations[image_name]
    labels = [
        dataset.classes[class_id]
        for class_id
        in annotations.class_id]
    annotates_image = mask_annotator.annotate(
        scene=image.copy(),
        detections=annotations)
    annotates_image = box_annotator.annotate(
        scene=annotates_image,
        detections=annotations,
        labels=labels)
    images.append(annotates_image)
As you can see, these are the only parameters the method accepts:
(method) def annotate(
    scene: ImageType@annotate,
    detections: Detections,
    custom_color_lookup: ndarray | None = None
) -> ImageType@annotate
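For context, in recent supervision releases the label text is drawn by a separate annotator rather than by `BoxAnnotator.annotate()`. A minimal sketch of that split, assuming `sv.LabelAnnotator` is available in the installed 0.24.0 (the `image`, `annotations`, and `labels` names are taken from the snippet above):
```python
box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

# Boxes and label text are drawn in two passes; the `labels=` argument now
# belongs to the label annotator rather than the box annotator.
annotated = box_annotator.annotate(scene=image.copy(), detections=annotations)
annotated = label_annotator.annotate(scene=annotated, detections=annotations, labels=labels)
```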
### Environment
supervision==0.24.0
Windows 11
Python3.11
Cuda 11.8
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-10-30T11:27:25Z | 2024-10-30T14:59:27Z | https://github.com/roboflow/supervision/issues/1633 | [
"question"
] | Jarradmorden | 2 |
predict-idlab/plotly-resampler | data-visualization | 302 | [BUG] `renderer="png"` - rendering png images in notebooks - does not work | I am trying to replicate some of the [basic examples](https://github.com/predict-idlab/plotly-resampler/blob/main/examples/basic_example.ipynb). Especially, I'd like to produce some *png* figures when I share notebooks with others, but when I use `fig.show(renderer="png")`, it never returns anything (I waited for up to 20 minutes).
If I use `USE_PNG = False`, the code works as expected and is super fast.
Here is the code I use:
``` python
import numpy as np
import pandas as pd
import plotly.graph_objects as go
from plotly_resampler import register_plotly_resampler, unregister_plotly_resampler
USE_PNG = True # Set to false to use dynamic plots
n = 20000
x = np.arange(n)
x_time = pd.date_range("2020-01-01", freq="1s", periods=len(x))
noisy_sine = (3 + np.sin(x / 2000) + np.random.randn(n) / 10) * x / (n / 4)
register_plotly_resampler(mode="auto", default_n_shown_samples=4500)
fig = go.Figure()
fig.add_traces(
[
{"y": noisy_sine + 2, "name": "yp2", "type": "scattergl"},
{"y": noisy_sine - 3, "name": "ym1", "type": "scatter"},
]
)
if USE_PNG:
unregister_plotly_resampler()
go.Figure(fig).show(renderer="png")
else:
fig.show()
```
**Environment information**:
- OS: Windows 11
- Python environment:
- Python version: 3.9.12
- plotly-resampler environment: I tried in VSCode 1.87.1, Jupyter Notebook 6.4.12 and JupyterLab '4.1.0'. Same behavior (never returns)
- plotly-resampler version: '0.9.2'
- kaleido: '0.2.1'
- ipywidgets: '8.1.1'
Am I doing something wrong ?
Any help would be much appreciated!
| closed | 2024-03-13T16:29:04Z | 2024-03-14T16:26:50Z | https://github.com/predict-idlab/plotly-resampler/issues/302 | [
"bug",
"works-on-main"
] | etiennedemontalivet | 6 |
ivy-llc/ivy | tensorflow | 28,448 | Extend ivy-lint to everything instead of just the frontends | We're currently using a custom pre-commit lint hook specifically designed for formatting Python files in Ivy's frontend, as detailed in our documentation and implemented in our [lint hook repository](https://github.com/unifyai/lint-hook). This formatter organizes the code into two main sections: `helpers` and `main`, sorting functions alphabetically within these sections based on a [specific regex pattern](https://github.com/unifyai/lint-hook/blob/main/ivy_lint/formatters/function_ordering.py#L15).
The task at hand is to adapt and extend this formatter to cater to Ivy's backend and the Ivy Stateful API. Unlike the frontend, where the division is simply between `helpers` and `main` functions, the backend and stateful API require a more nuanced approach to accommodate various headers that categorize functions into either the "Array API Standard" or "Extras", among others. This distinction is crucial as it helps in segregating functions that adhere to the standard from those that do not, with new functions being added regularly.
For the backend and Ivy Stateful API, the goal is to maintain the integrity of these headers, such as "Array API Standard", "Autograd", "Optimizer Steps", "Optimizer Updates", "Array Printing", "Retrieval", "Conversions", "Memory", etc., ensuring they remain unchanged. The proposed approach involves sorting functions alphabetically within each section defined by these headers, thereby preserving the organizational structure and clarity regarding the functionalities of each section.
The desired structure for updating the formatter should adhere to the following template, ensuring a clear and organized codebase:
```py
# global imports
# local imports
# Global declarations
<Global variables, mode stacks, initializers, postponed evaluation typehints, etc.>
# Helpers #
# -------- #
<Private helper functions specific to the submodule>
# Classes
<Class definitions within the submodule>
# <function section header 1>
<Alphabetical listing of functions in section 1, including relevant assignment statements>
# <function section header 2>
<Alphabetical listing of functions in section 2>
...
# <function section header n>
<Alphabetical listing of functions in section n>
```
This structure not only ensures functions are easily locatable and the code remains clean but also respects the categorization of functionalities as per Ivy's standards. The approach was previously attempted in a pull request ([#22830](https://github.com/unifyai/ivy/pull/22830)), which serves as a reference for implementing these changes.
If you have any questions feel free to reach out to @NripeshN or @KareemMAX | closed | 2024-02-28T08:07:29Z | 2024-05-06T10:47:45Z | https://github.com/ivy-llc/ivy/issues/28448 | [
"Bounty"
] | vedpatwardhan | 1 |
numpy/numpy | numpy | 27,658 | BUG: np.cov with rowvar=False returns the wrong shape for N=1 | ### Describe the issue:
For one observation of three variables, `np.cov` with `rowvar=True` returns the following:
```python
>>> x = np.ones((3, 1))
>>> np.cov(x, ddof=0, rowvar=True)
array([[0., 0., 0.],
       [0., 0., 0.],
       [0., 0., 0.]])
```
For the same computation laid-out with `rowvar=False`, the return value is a scalar:
```python
>>> np.cov(x.T, ddof=0, rowvar=False)
array(0.)
```
This should return a 3x3 matrix of zeros, same as the computation above.
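To make the expected result concrete, here is the same covariance computed by hand (a small sketch; with `ddof=0` the divisor is just the number of observations, which is 1 here):
```python
>>> X = x.T                      # one observation (row) of three variables
>>> Xc = X - X.mean(axis=0)      # centre each variable
>>> Xc.T @ Xc / X.shape[0]       # the (3, 3) covariance matrix, all zeros
array([[0., 0., 0.],
       [0., 0., 0.],
       [0., 0., 0.]])
```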
The problem is the special-casing of `X.shape[0] == 1` in this line, which seems aimed at handling the `x.ndim == 1` case: https://github.com/numpy/numpy/blob/70fde29fdd4d8fcc6098df7ef8a34c84844e347f/numpy/lib/_function_base_impl.py#L2739-L2740
It looks like this behavior was introduced in 959f36c04ce8ca0b7bc44bb6438bddf162ad2db9, 19 years ago.
### Python and NumPy Versions:
```
$ python -c "import sys, numpy; print(numpy.__version__); print(sys.version)"
2.1.1
3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)]
```
### Runtime Environment:
```
$ python -c "import numpy; numpy.show_runtime()"
[{'numpy_version': '2.1.1',
'python': '3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 '
'(clang-1300.0.29.30)]',
'uname': uname_result(system='Darwin', node='jmdg-macbookpro.roam.internal', release='23.6.0', version='Darwin Kernel Version 23.6.0: Wed Jul 31 20:49:39 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6000', machine='arm64')},
{'simd_extensions': {'baseline': ['NEON', 'NEON_FP16', 'NEON_VFPV4', 'ASIMD'],
'found': ['ASIMDHP'],
'not_found': ['ASIMDFHM']}}]
``` | closed | 2024-10-28T19:48:03Z | 2024-10-29T19:16:28Z | https://github.com/numpy/numpy/issues/27658 | [
"00 - Bug"
] | jakevdp | 2 |
nteract/papermill | jupyter | 387 | pass aws credentials as to the executor | Hey, I'm trying to use the [papermill airflow](https://airflow.readthedocs.io/en/latest/howto/operator/papermill.html) operator to read and execute notebooks with Airflow.
In our company we use a central Airflow instance, and we cannot rely on having AWS credentials in `~/.aws/credentials`, nor can we use environment variables.
Because our Airflow instance is hosted in Kubernetes, we cannot rely on EC2 roles either.
Our most feasible option is to use [airflow connections](https://airflow.apache.org/concepts.html?highlight=connection#connections) and explicitly pass AWS credentials to papermill.
I assume different users will use different credentials, so this would be a good addition to have.
Currently, `S3Handler` does not take arguments, so there is no way to pass those to the `boto3` instance it abstracts.
IMO it's a good addition to have
| open | 2019-06-26T13:21:50Z | 2021-06-16T05:10:42Z | https://github.com/nteract/papermill/issues/387 | [] | Liorba | 3 |
davidteather/TikTok-Api | api | 264 | More of a question than an issue. | Is there any way, with this API or otherwise, to get a user's username history? Kind of like how Steam has username history? | closed | 2020-09-16T02:21:07Z | 2020-09-16T16:25:42Z | https://github.com/davidteather/TikTok-Api/issues/264 | [
"question",
"installation_help"
] | KauzDs | 3 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,166 | TypeError: load() takes 1 positional argument but 2 were given | Hello,
I have done everything so far, but when I try to load one of the LibriSpeech samples, this is the error I get.
Arguments:
datasets_root: datasets_root
models_dir: saved_models
cpu: False
seed: None
Traceback (most recent call last):
File "/Users/XYZ/Documents/real-time-voice-cloning/toolbox/__init__.py", line 76, in <lambda>
self.ui.browser_load_button.clicked.connect(lambda: self.load_from_browser())
File "/Users/XYZ/Documents/real-time-voice-cloning/toolbox/__init__.py", line 157, in load_from_browser
wav = Synthesizer.load_preprocess_wav(fpath)
File "/Users/XYZ/Documents/real-time-voice-cloning/synthesizer/inference.py", line 136, in load_preprocess_wav
wav = librosa.load(str(fpath), hparams.sample_rate)[0]
TypeError: load() takes 1 positional argument but 2 were given
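For context, this signature error matches the change in librosa 0.10+, where every argument to `load()` other than the path became keyword-only. A minimal sketch of the keyword form of that call in `synthesizer/inference.py`, assuming `hparams.sample_rate` is the intended target rate:
```python
# load_preprocess_wav(), sketch of the keyword-only form accepted by newer librosa
wav = librosa.load(str(fpath), sr=hparams.sample_rate)[0]
```
Alternatively, pinning an older librosa release that still accepts a positional sample rate avoids touching the code.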
What should I do? | open | 2023-02-20T23:36:06Z | 2024-12-10T19:03:44Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1166 | [] | enlightenight | 11 |
Yorko/mlcourse.ai | data-science | 661 | Jupyter Images are not rendering | Hi
Thanks for launching such an awesome course. I am really enjoying working with this material. However, I am facing a small issue right now.
After cloning the repo and running the notebooks, it seems that the images are not being rendered correctly. I have not moved the images, and I have verified that they exist at the location referenced in each image's `img src`.


I have attached a list of the pictures, which might help you. I am using Jupyter Notebook to work with the notebooks locally, along with Python 3.6.9. | closed | 2020-04-05T13:45:45Z | 2020-04-16T17:34:26Z | https://github.com/Yorko/mlcourse.ai/issues/661 | [] | blaine12100 | 7 |
ijl/orjson | numpy | 557 | Default function not applied before checking that dict keys are str | ```python
import json
import orjson
def default(obj):
    if isinstance(obj, dict):
        return {str(k): v for k, v in obj.items()}
    raise TypeError
print(orjson.dumps({1: 2}, default=default).decode('utf-8'))
# -> TypeError: Dict key must be str
print(orjson.dumps({"1": 2}, default=default).decode('utf-8'))
# -> {"1":2}
print(json.dumps({1: 2}, default=default))
# -> {"1": 2}
```
When serializing a `dict` using `orjson`, it appears that the keys need to be strings *before* they are passed to the `dumps` function, even if the specified default does the conversion and works in the standard `json` library.
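For what it's worth, orjson does expose an opt-in flag for this situation; a minimal sketch, assuming `orjson.OPT_NON_STR_KEYS` behaves as documented (it stringifies non-string keys itself, without consulting `default`):
```python
# Sketch: let orjson handle the int key directly instead of relying on default.
print(orjson.dumps({1: 2}, option=orjson.OPT_NON_STR_KEYS).decode('utf-8'))
# -> {"1":2}
```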
This seems like an undesired behaviour, I would expect the same output as with the standard library. | closed | 2025-02-27T19:14:07Z | 2025-03-09T08:02:54Z | https://github.com/ijl/orjson/issues/557 | [
"Stale"
] | RaphaelRobidas | 0 |
blb-ventures/strawberry-django-plus | graphql | 59 | Have a relay.connection field on a django.type? | I think this should be allowed? We had a similar schema with graphene:
```
@gql.django.type(SkillModule)
class SkillModuleType(gql.relay.Node):
    id: gql.relay.GlobalID
    order: int
    name: str
    description: str

    @gql.relay.connection
    def active_skill_list(self: SkillModule) -> List["SkillType"]:
        return self.skill_set.filter(is_active=True)

    @gql.relay.connection
    def published_skill_list(self: SkillModule) -> List["SkillType"]:
        return self.skill_set.filter(is_published=True, is_active=True)
```
```
File "/code/.../types.py", line 48, in <module>
class SkillModuleType(gql.relay.Node):
File "/usr/local/lib/python3.9/site-packages/strawberry_django_plus/type.py", line 396, in wrapper
return _process_type(
File "/usr/local/lib/python3.9/site-packages/strawberry_django_plus/type.py", line 281, in _process_type
fields = list(_get_fields(django_type).values())
File "/usr/local/lib/python3.9/site-packages/strawberry_django_plus/type.py", line 233, in _get_fields
fields[name] = _from_django_type(django_type, name)
File "/usr/local/lib/python3.9/site-packages/strawberry_django_plus/type.py", line 173, in _from_django_type
elif field.django_name or field.is_auto:
AttributeError: 'ConnectionField' object has no attribute 'django_name'
``` | open | 2022-06-09T02:42:47Z | 2022-06-15T13:21:04Z | https://github.com/blb-ventures/strawberry-django-plus/issues/59 | [
"documentation"
] | eloff | 2 |
microsoft/nni | pytorch | 5,638 | DARTS experiment does not save | Hi,
I was able to run the darts tutorial. However, when I want to save the experiment, although it doesn't show any error, it doesn't save the experiments:

Other multi-trial experiments are saved without any problem, and I use the exact same two lines of code. Is there a different way of saving one-shot methods?

Thank you! | closed | 2023-07-19T00:00:27Z | 2023-07-28T04:55:54Z | https://github.com/microsoft/nni/issues/5638 | [] | ekurtgl | 2 |
ageitgey/face_recognition | machine-learning | 1,114 | Face Recognition of an Original Person | How can we improve the code, or add some additional functionality, so that a face is recognized only when the actual person is standing in front of the camera and not from a photo of that person?
Is there a way to do so?
Ex: If I want Barack Obama's face to be recognized, he should be standing there in front of the camera, and I must not be able to trigger recognition by showing his image on my phone.
Thanks.
| closed | 2020-04-17T14:13:45Z | 2020-04-21T02:08:11Z | https://github.com/ageitgey/face_recognition/issues/1114 | [] | KaramveerSidhu | 4 |
graphdeco-inria/gaussian-splatting | computer-vision | 730 | Performance difference without shuffle camera and randomly pop viewpoint_cam | hello,
Thanks for the great work! I have a question about camera randomness; any reply would be greatly appreciated.
I did an experiment with the random viewpoint_cam popping disabled:
1. set shuffle to False in scene.init
2. get the viewpoint_cam sequentially from viewpoint_stack
3. compute PSNR using all the input images
I find that without random cameras there is a significant PSNR decline.
In my case, which has 187 images as input:
**7000 iterations: without random, PSNR is around 18; with random, PSNR is around 35**
**30000 iterations: without random, PSNR is around 31; with random, PSNR is around 42**
I want to figure out why there is such a big difference in PSNR without random cameras.
BTW, if we shuffle the cameras every time we fill the viewpoint_stack, why do we need to pop cameras randomly again? | open | 2024-03-28T03:12:31Z | 2024-03-28T03:12:31Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/730 | [] | TopGun34 | 0 |
opengeos/leafmap | plotly | 431 | Coordinate transformation parameter | Kindly add a `transform` parameter to
tms_to_geotiff(). | closed | 2023-04-23T19:08:06Z | 2023-04-23T20:29:12Z | https://github.com/opengeos/leafmap/issues/431 | [
"Feature Request"
] | ravishbapna | 1 |
joeyespo/grip | flask | 66 | Extra blank lines in --gfm | Markdown code: https://gist.github.com/vejuhust/70bc97c829c7ba0f0a58
As you see, the last block quote contains **no extra blank** lines on Github.
When it comes to grip, the first output HTML contains **no extra blank** lines, just like GitHub, whereas the second contains **extra blank lines after each**, which is different from GitHub.
```
grip --export gfm_issue.md ../Dropbox/issue_normal.html
grip --gfm --export gfm_issue.md ../Dropbox/issue_gfm.html
```
| closed | 2014-07-20T05:34:26Z | 2014-07-20T16:36:41Z | https://github.com/joeyespo/grip/issues/66 | [
"not-a-bug"
] | vejuhust | 8 |
pydata/bottleneck | numpy | 188 | Support tuple axis keyword argument | It would be nice for the bottleneck functions to support tuples for the `axis` argument, as NumPy does:
```python
In [7]: bt.nansum(np.random.random((3, 4, 2)), axis=(1, 2))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-ecd7e19320a5> in <module>()
----> 1 bt.nansum(np.random.random((3, 4, 2)), axis=(1, 2))
TypeError: `axis` must be an integer or None
```
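In the meantime, here is a minimal sketch of a wrapper that emulates tuple-axis support by reducing one axis at a time (the wrapper name is made up for illustration; for `nansum` the sequential reduction matches NumPy's tuple-axis result, since NaNs are treated as zero either way):
```python
import numpy as np
import bottleneck as bn

def bn_nansum(a, axis=None):
    # Reduce the highest axis first so the remaining axis indices stay valid.
    if axis is None or np.isscalar(axis):
        return bn.nansum(a, axis=axis)
    out = a
    for ax in sorted(axis, reverse=True):
        out = bn.nansum(out, axis=ax)
    return out

bn_nansum(np.random.random((3, 4, 2)), axis=(1, 2))  # shape (3,)
```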
This would make it easier to use the bottleneck functions as a drop-in replacement. | open | 2018-04-04T16:09:03Z | 2019-09-23T04:19:30Z | https://github.com/pydata/bottleneck/issues/188 | [
"enhancement"
] | astrofrog | 1 |
google-research/bert | tensorflow | 838 | how can I get a six layer bert? | Can I just take the embedding layer and the first six layers from the pretrained model to get a six-layer BERT, change the config to `num_hidden_layers=6`, and change the `get_assignment_map_from_checkpoint` function to drop the variable names for layers 7-12?
Or do I need to train a six-layer BERT?
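For what it's worth, here is a rough sketch of the checkpoint-loading side of that idea; the helper below is hypothetical and not part of the BERT repo. After building the usual assignment map, it drops every variable whose scope names a transformer layer at index 6 or above, so only the embeddings and layers 0-5 are initialised from the 12-layer checkpoint:
```python
import re

def keep_first_n_layers(assignment_map, num_hidden_layers=6):
    # Hypothetical post-filter for the map returned by
    # modeling.get_assignment_map_from_checkpoint(): variables scoped as
    # bert/encoder/layer_<k> with k >= num_hidden_layers are simply not loaded.
    filtered = {}
    for ckpt_name, var_name in assignment_map.items():
        match = re.search(r"layer_(\d+)/", ckpt_name)
        if match and int(match.group(1)) >= num_hidden_layers:
            continue
        filtered[ckpt_name] = var_name
    return filtered
```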
| open | 2019-09-04T08:44:40Z | 2019-09-04T15:20:14Z | https://github.com/google-research/bert/issues/838 | [] | RyanHuangNLP | 0 |
aio-libs-abandoned/aioredis-py | asyncio | 713 | Support client name part of Pool & Connection | I'd like to set a client name for my connections in my application. This is supported via the commands, but when using the Pool/Connection object it sometimes reconnects/creates new connections, hence losing the set name.
This feature also exists in py-redis and makes a lot of sense - you set it once in the initialization of the Pool/Connection and it sets it after connecting.
PR incoming | closed | 2020-03-12T10:33:57Z | 2020-11-15T17:17:21Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/713 | [
"pr-available"
] | aviramha | 0 |
skypilot-org/skypilot | data-science | 4,754 | [k8s] Better error messages when allowed_contexts is set and remote API server does not have context | Repro:
The remote API server is set up with only in-cluster auth.
My local config.yaml:
```
kubernetes:
allowed_contexts:
- myctx
```
```
(base) ➜ ~ sky check kubernetes
Checking credentials to enable clouds for SkyPilot.
Kubernetes: disabled
Reason: No available context found in kubeconfig. Check if you have a valid kubeconfig file and check "allowed_contexts" in your /tmp/skypilot_configniuukzgv file.
To enable a cloud, follow the hints above and rerun: sky check Kubernetes
If any problems remain, refer to detailed docs at: https://docs.skypilot.co/en/latest/getting-started/installation.html
🎉 Enabled clouds 🎉
AWS
GCP
Using SkyPilot API server: http://....:30050
```
Above error should clearly state which contexts are available, and the path to the config file should be the local path (not remote API server path). | open | 2025-02-19T04:19:12Z | 2025-02-19T04:19:12Z | https://github.com/skypilot-org/skypilot/issues/4754 | [] | romilbhardwaj | 0 |
kennethreitz/records | sqlalchemy | 7 | Import from csv, excel, etc | I do a lot data work which requires importing and exporting csv's. This library looks extremely useful, but it doesn't import from csv. If it did, it'd probably be a tool I used every day.
Logistically, I'm not even sure what this would look like, but if it's something possible, it'd be great!
Thanks for another great tool.
| closed | 2016-02-07T19:36:19Z | 2018-04-28T23:02:54Z | https://github.com/kennethreitz/records/issues/7 | [
"enhancement",
"wontfix"
] | mlissner | 7 |
erdewit/ib_insync | asyncio | 281 | Change to raising exceptions on errors | Hi, currently when an operation doesn't succeed, most of the time an error is logged but the code otherwise keeps executing as normal. This is a bit annoying since I have to add additional checks that operations succeed. For example, if a call to `qualifyContracts` fails, it logs as much but continues as normal, so I have to check that the contract is actually qualified afterwards.
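For reference, this is roughly the manual guard that is needed today (a sketch; the wrapper name is made up): `qualifyContracts` returns only the contracts it managed to qualify, so an empty result or a missing `conId` has to be turned into an exception by hand.
```python
def qualify_or_raise(ib, contract):
    # Manual check: raise instead of silently continuing with an unqualified contract.
    qualified = ib.qualifyContracts(contract)
    if not qualified or not qualified[0].conId:
        raise ValueError(f'Could not qualify contract: {contract!r}')
    return qualified[0]
```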
Can you instead throw exception when something fails, so I can handle it properly? | closed | 2020-07-21T13:49:28Z | 2020-07-30T18:06:53Z | https://github.com/erdewit/ib_insync/issues/281 | [] | Rizhiy | 5 |
kizniche/Mycodo | automation | 891 | Output GPIO not set as output when Pi / Mycodo restarts | Hello,
I have 4 outputs in my Mycodo, set up as "ON/OFF GPIO" type on pins 6, 13, 19 and 26.
When I restart the Pi, the outputs are not working because the GPIOs are not set as outputs until I open each of them and click "Save".
Is this normal behavior?
- Mycodo Version: 8.8.8
- Raspberry Pi Version: Pi 4B Rev 1.2
- Raspbian OS Version: Buster
Thanks for help.
Lilian | closed | 2020-11-22T07:16:02Z | 2020-11-24T13:48:31Z | https://github.com/kizniche/Mycodo/issues/891 | [] | LilianV | 4 |
psf/black | python | 3,708 | Black Github action causes subsequent actions to fail on self-hosted runner | I'm running a couple of GHA jobs, one of them being the black formatter inside an Ubuntu container on a self-hosted runner. I've noticed that after running the black formatting action there are files leftover in the `_actions` directory that aren't owned by the runner's user but rather by `root`:
```
ne or more errors occurred. (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/black-23.3.0.dist-info/INSTALLER' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/pyvenv.cfg' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib64' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/mypy_extensions.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/black-23.3.0.dist-info/RECORD' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/black-23.3.0.dist-info/licenses/AUTHORS.md' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/2ec0e72aa72355e6eccf__mypyc.cpython-310-x86_64-linux-gnu.so' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/_black_version.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/black-23.3.0.dist-info/REQUESTED' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/colorama/winterm.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/colorama/__init__.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/colorama/win32.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/colorama/ansitowin32.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/colorama/ansi.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/colorama/tests/ansi_test.py' is denied.) (Access to the path '/home/ubuntu/actions-runner/_work/_actions/psf/black/stable/.black-env/lib/python3.10/site-packages/colorama/tests/initialise_test.py' is denied.)
```
If the GitHub Actions runner isn't running as `root` (which it isn't by default), subsequent actions will fail with the error above because they run as the runner's user and not as `root`. I can fix this by running the GitHub Actions runner as `root`, but that feels a bit suboptimal.
Has anyone else seen this behaviour? Appreciate your insight. | closed | 2023-05-26T09:02:42Z | 2023-08-08T18:12:12Z | https://github.com/psf/black/issues/3708 | [
"T: bug",
"C: integrations"
] | cjproud | 4 |
NullArray/AutoSploit | automation | 1,273 | Unhandled Exception (5b424631f) | Autosploit version: `3.0`
OS information: `Linux-5.6.0-kali1-amd64-x86_64-with-debian-kali-rolling`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/root/Downloads/AutoSploit-master/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/root/Downloads/AutoSploit-master/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
| open | 2020-05-29T16:27:37Z | 2020-05-29T16:27:37Z | https://github.com/NullArray/AutoSploit/issues/1273 | [] | AutosploitReporter | 0 |
TencentARC/GFPGAN | deep-learning | 164 | How can I use gfpgan for video? | Hello, how can I use GFPGAN for video? | open | 2022-02-23T14:47:56Z | 2024-01-18T18:59:22Z | https://github.com/TencentARC/GFPGAN/issues/164 | [] | osmankaya | 8 |
CorentinJ/Real-Time-Voice-Cloning | python | 803 | Errors trying to execute demo_toolbox.py | 
I have tried to
• Reinstall python
• Reinstall requirements
• Reinstall other versions | closed | 2021-07-18T11:12:36Z | 2021-08-25T09:18:03Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/803 | [] | IbaiMtnz05 | 5 |
CPJKU/madmom | numpy | 354 | Is normalization during the computation of features necessary? | @superbock How crucial do you think the normalization of the STFT window and the normalization of the triangular filters are during the computation of the feature vector (Nx120)?
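For concreteness, a minimal sketch of the simpler alternative (plain per-dimension standardization of the (N, 120) feature matrix; the function name and the small epsilon are illustrative):
```python
import numpy as np

def standardize(features: np.ndarray) -> np.ndarray:
    # features: (N, 120) matrix computed without the internal normalization steps
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8  # avoid division by zero for constant dimensions
    return (features - mean) / std
```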
Can't we just subtract the mean from the feature vector and divide by the std before we give it to the neural networks? | closed | 2018-01-31T01:44:00Z | 2018-01-31T06:17:48Z | https://github.com/CPJKU/madmom/issues/354 | [] | ghost | 1 |
wkentaro/labelme | deep-learning | 1,515 | Too many annotation points make the program prone to lagging. | <img width="1286" alt="微信图片_20241119205501" src="https://github.com/user-attachments/assets/06c22dcf-b49e-4e44-9920-368ba70c2ac6">
[demo.zip](https://github.com/user-attachments/files/17815444/demo.zip)
Too many annotation points make the program prone to lagging. What optimization solutions are available? | open | 2024-11-19T12:57:28Z | 2024-11-28T03:31:12Z | https://github.com/wkentaro/labelme/issues/1515 | [] | monkeycc | 1 |
shibing624/text2vec | nlp | 94 | Running eval_model every 100 batches gives results different from before | Training on all.jsonl yields 1360 batches. I tried running eval_model on it every 100 batches, adding only a single line of conditional code, but the results are inconsistent with those saved per epoch, including the loss curve and the final results. What could be the reason? In principle, since eval_model sets model.eval() and uses with torch.no_grad(), there should be no gradient changes that affect training. | closed | 2023-07-10T07:21:09Z | 2023-08-17T13:17:58Z | https://github.com/shibing624/text2vec/issues/94 | [
"question"
] | programmeguru | 5 |
slackapi/bolt-python | fastapi | 909 | Action Listener Unresponsive to Interaction in App Home | I'm working on creating an app that will allow users to select an account from a dropdown list, and the homepage will update with information about the account upon selection. I've gotten interactions to work through a modal, but cannot get any response on the Home Tab of the App.
I have enabled Interactivity for my application and am testing locally using ngrok. I have the same URL handling requests from the Event Subscriptions and Interactivity tabs, with both URLs ending in `slack/events`.
### Reproducible in:
#### The `slack_bolt` version
`slack-bolt==1.18.0`
`slack-sdk==3.21.3`
#### Python runtime version
`Python 3.11.2`
#### OS info
ProductName: macOS
ProductVersion: 13.3.1
ProductVersionExtra: (a)
BuildVersion: 22E772610a
#### Steps to reproduce:
```python
app = App(
token=os.environ.get("SLACK_BOT_TOKEN"),
signing_secret=os.environ.get("SLACK_SIGNING_SECRET"),
)
@app.action("account-select-action")
def handle_action(body, ack):
ack()
print(body)
@app.event("app_home_opened")
def update_home_tab(client, event, logger):
try:
user = event["user"]
client.views_publish(
user_id=user,
view={
"type": "home",
"blocks": [
{
"type": "header",
"text": {"type": "plain_text", "text": "Account Summary"},
},
{
"type": "input",
"element": {
"type": "static_select",
"action_id": "account-select-action",
"placeholder": {
"type": "plain_text",
"text": "Select an item",
},
"options": [
{
"text": {"type": "plain_text", "text": "Option 1"},
"value": "value_1",
},
{
"text": {"type": "plain_text", "text": "Option 2"},
"value": "value_2",
},
{
"text": {"type": "plain_text", "text": "Option 3"},
"value": "value_3",
},
],
},
"label": {
"type": "plain_text",
"text": "Please Select an Account:",
},
},
],
},
)
except Exception as e:
logger.error(f"Error publishing home tab: {e}")
if __name__ == "__main__":
port = int(os.environ.get("PORT", 3000))
app.start(port=port)
```
### Expected result:
I would expect the body of the response to be printed when I change the selection in the Home tab of the application. I would like to confirm that this interaction is working so that I can update a markdown data table with information related to the account selected in the static dropdown menu.
### Actual result:
There is no response, nothing printed, and nothing to prove to me that this interactivity is functioning properly. There is nothing on the ngrok page either to indicate that this interaction ever happened.
## Requirements
| closed | 2023-06-07T22:17:32Z | 2023-06-08T13:06:04Z | https://github.com/slackapi/bolt-python/issues/909 | [
"question"
] | dglindner2 | 5 |
Kitware/trame | data-visualization | 57 | "Exception: no view provided: -1" on trame 1.19.1 | **Describe the bug**
On the latest version of `trame` (1.19.1), the VTK view does not work as on version 1.18.0. The viewer doesn't show up in the UI and I am getting an error from `vtkmodules` saying:
```
ERROR:root:Exception raised
ERROR:root:Exception('no view provided: -1')
ERROR:root:Traceback (most recent call last):
File ".../lib/python3.8/site-packages/wslink/backends/aiohttp/__init__.py", line 371, in onMessage
results = await asyncio.coroutine(func)(*args, **kwargs)
File "/usr/lib/python3.8/asyncio/coroutines.py", line 124, in coro
res = func(*args, **kw)
File ".../lib/python3.8/site-packages/vtkmodules/web/protocols.py", line 425, in imagePush
sView = self.getView(options["view"])
File ".../lib/python3.8/site-packages/vtkmodules/web/protocols.py", line 80, in getView
raise Exception("no view provided: %s" % vid)
Exception: no view provided: -1
```
I think it may be because of this commit? https://github.com/Kitware/trame/commit/ad23c4d88884319c05391b7e195cd8bcc85f9738
**To Reproduce**
Minimal code to reproduce:
```python
import vtk
from trame import change, state
from trame.layouts import SinglePageWithDrawer
from trame.html import vuetify
from trame.html.vtk import VtkRemoteLocalView
# -----------------------------------------------------------------------------
# VTK pipeline
# -----------------------------------------------------------------------------
renderer = vtk.vtkRenderer()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindowInteractor = vtk.vtkRenderWindowInteractor()
renderWindowInteractor.SetRenderWindow(renderWindow)
renderWindowInteractor.GetInteractorStyle().SetCurrentStyleToTrackballCamera()
# -----------------------------------------------------------------------------
# Callbacks
# -----------------------------------------------------------------------------
@change("viewMode")
def update_view(viewMode, flush=True, **kwargs):
html_view.update_image()
if viewMode == "local":
html_view.update_geometry()
if flush:
# can only flush once protocol is initialized (publish)
state.flush("viewScene")
# -----------------------------------------------------------------------------
# GUI
# -----------------------------------------------------------------------------
html_view = VtkRemoteLocalView(renderWindow, namespace="view")
def on_ready(**kwargs):
update_view("local", flush=False)
html_view.update()
renderer.ResetCamera()
layout = SinglePageWithDrawer("Trame App", on_ready=on_ready)
with layout.content:
vuetify.VContainer(
fluid=True,
classes="pa-0 fill-height",
children=[html_view],
)
if __name__ == "__main__":
layout.start()
``` | closed | 2022-04-20T15:48:59Z | 2022-04-20T16:54:57Z | https://github.com/Kitware/trame/issues/57 | [] | DavidBerger98 | 2 |
explosion/spaCy | deep-learning | 12,751 | ERROR when loading spacy model from local file | Hello, I am trying to load the en_core_web_sm 3.5.0 from its folder, but I keep getting an error E912:
```
poetry run app-start
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/luisgt/.asdf/installs/ivm-python/3.8.11/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/luisgt/dev/mic-mecseourlgen/code/mecseourlgen/__init__.py", line 1, in <module>
from mecseourlgen import __main__
File "/home/luisgt/dev/mic-mecseourlgen/code/mecseourlgen/__main__.py", line 1, in <module>
from mecseourlgen.application.LaunchRestServerUseCase import LaunchRestServerUseCase
File "/home/luisgt/dev/mic-mecseourlgen/code/mecseourlgen/application/LaunchRestServerUseCase.py", line 4, in <module>
from mecseourlgen.infrastructure.rest.server.AppRouter import AppRouter
File "/home/luisgt/dev/mic-mecseourlgen/code/mecseourlgen/infrastructure/rest/server/AppRouter.py", line 1, in <module>
from mecseourlgen.infrastructure.rest.server import seo_generator, status
File "/home/luisgt/dev/mic-mecseourlgen/code/mecseourlgen/infrastructure/rest/server/seo_generator.py", line 5, in <module>
from mecseourlgen.application.GenerateSeoComponentUseCase import (
File "/home/luisgt/dev/mic-mecseourlgen/code/mecseourlgen/application/GenerateSeoComponentUseCase.py", line 5, in <module>
from mecseourlgen.domain.services.SeoComponentGeneratorService import (
File "/home/luisgt/dev/mic-mecseourlgen/code/mecseourlgen/domain/services/SeoComponentGeneratorService.py", line 13, in <module>
nlp = spacy.load(spacy_model_path)
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/__init__.py", line 54, in load
return util.load_model(
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/util.py", line 434, in load_model
return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type]
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/util.py", line 514, in load_model_from_path
return nlp.from_disk(model_path, exclude=exclude, overrides=overrides)
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/language.py", line 2125, in from_disk
util.from_disk(path, deserializers, exclude) # type: ignore[arg-type]
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/util.py", line 1352, in from_disk
reader(path / key)
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/language.py", line 2119, in <lambda>
deserializers[name] = lambda p, proc=proc: proc.from_disk( # type: ignore[misc]
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/pipeline/lemmatizer.py", line 304, in from_disk
self._validate_tables()
File "/home/luisgt/.cache/pypoetry/virtualenvs/mecseourlgen-qGBwjy5r-py3.8/lib/python3.8/site-packages/spacy/pipeline/lemmatizer.py", line 173, in _validate_tables
raise ValueError(
ValueError: [E912] Failed to initialize lemmatizer. Missing lemmatizer table(s) found for mode 'rule'. Required tables: ['lemma_rules']. Found: [].
```
Also, as I am working with Poetry, I'll include the environment info for spaCy using `poetry show spacy`.
## How to reproduce the behaviour
Just pass a path to the spacy.load() method.
```
import spacy
spacy_model_path = (
get_st_files_path() + "/en_core_web_sm-3.5.0/en_core_web_sm/en_core_web_sm-3.5.0"
)
nlp = spacy.load(spacy_model_path)
```
## Your Environment
* Operating System: WSL2
* Python Version Used: 3.8.11
* spaCy Version Used: 3.5.0
* Environment Information:
```
poetry show spacy
name : spacy
version : 3.5.0
description : Industrial-strength Natural Language Processing (NLP) in Python
dependencies
- catalogue >=2.0.6,<2.1.0
- cymem >=2.0.2,<2.1.0
- jinja2 *
- langcodes >=3.2.0,<4.0.0
- murmurhash >=0.28.0,<1.1.0
- numpy >=1.15.0
- packaging >=20.0
- pathy >=0.10.0
- preshed >=3.0.2,<3.1.0
- pydantic >=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0
- requests >=2.13.0,<3.0.0
- setuptools *
- smart-open >=5.2.1,<7.0.0
- spacy-legacy >=3.0.11,<3.1.0
- spacy-loggers >=1.0.0,<2.0.0
- srsly >=2.4.3,<3.0.0
- thinc >=8.1.0,<8.2.0
- tqdm >=4.38.0,<5.0.0
- typer >=0.3.0,<0.8.0
- wasabi >=0.9.1,<1.2.0
```
The thing is, I don't know if this is something related to Poetry, because I tried the same workflow in a Jupyter notebook and it worked as intended.
## [EDIT 1]
It seems that excluding "lemmatizer" from the .load() method lets the code run smoothly but, of course, now my NLP pipeline does not have a lemmatizer...
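For reference, a minimal sketch of that workaround, reusing the `spacy_model_path` from the reproduction code above:
```python
import spacy

# workaround only: load the local pipeline without the lemmatizer component
nlp = spacy.load(spacy_model_path, exclude=["lemmatizer"])
```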
| closed | 2023-06-26T10:00:14Z | 2023-06-27T08:26:05Z | https://github.com/explosion/spaCy/issues/12751 | [] | LGTiscar | 0 |
microsoft/qlib | deep-learning | 1,493 | Confused terms for beginner | I saw that you use datasets like Alpha360 and Alpha158, and some Chinese stock instruments, but without a clear explanation or references. It's really difficult for any beginner who wants to explore the Qlib framework.
https://github.com/kyhoolee/qlib/blob/main/examples/benchmarks/README.md
Can anyone give me an easy and intuitive starting point for using the Qlib framework?
| closed | 2023-04-17T10:50:33Z | 2023-08-13T06:02:05Z | https://github.com/microsoft/qlib/issues/1493 | [
"stale"
] | kyhoolee | 3 |
agronholm/anyio | asyncio | 816 | return in finally swallows exceptions | ### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
master branch
### Python version
NA
### What happened?
In https://github.com/agronholm/anyio/blob/3a62738be0ba8d7934bd447e3833d644d414d49a/src/anyio/from_thread.py#L119 there is a `return` statement in a `finally` block, which would swallow any in-flight exception.
This means that if an unhandled exception (including a `BaseException` such as `KeyboardInterrupt`) is raised from the `try` body, it will not propagate as expected.
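A minimal, anyio-independent sketch of the Python behaviour being described:
```python
def swallow() -> str:
    try:
        raise KeyboardInterrupt  # an in-flight BaseException
    finally:
        return "done"  # the return in finally discards the pending exception

print(swallow())  # prints "done"; the KeyboardInterrupt never propagates
```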
If the intention is to suppress all exceptions, I would propose to make this clear by using `except BaseException`.
See also https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions.
### How can we reproduce the bug?
NA | open | 2024-10-29T00:02:10Z | 2024-10-31T19:09:49Z | https://github.com/agronholm/anyio/issues/816 | [
"bug"
] | iritkatriel | 1 |
plotly/plotly.py | plotly | 4,687 | Scatter3d.on_click doesn't receive InputDeviceState object | Here is the code:
```python
import plotly.graph_objects as go
import numpy as np
import ipywidgets
np.random.seed(1)
scatter_1 = go.Scatter3d(x=np.random.rand(100), y=np.random.rand(100), z=np.random.rand(100), mode='markers', marker={"size":2},)
f = go.FigureWidget()
f.add_trace(scatter_1)
o = ipywidgets.Output()
@o.capture(clear_output=True)
def callback(trace, points, selector):
print(points)
print(selector)
f.data[0].on_click(callback)
ipywidgets.HBox([f, o])
```
Click on any point, and the output is as follows; the selector argument is None:
```
Points(point_inds=[69],
xs=[0.5865550405019929],
ys=[0.5688514370864813],
trace_name='trace 0',
trace_index=0)
None
``` | open | 2024-07-23T07:08:18Z | 2024-08-13T13:26:02Z | https://github.com/plotly/plotly.py/issues/4687 | [
"bug",
"P3"
] | ruoyu0088 | 0 |
huggingface/pytorch-image-models | pytorch | 1,933 | [FEATURE] Support shufflenet | Thanks for your wonderful work! I found that `timm` does not support ShuffleNet v1 and v2 yet; is there any plan to support them? The official repository and related papers are [here](https://github.com/megvii-model/ShuffleNet-Series).
| open | 2023-08-28T09:56:38Z | 2023-08-29T03:24:32Z | https://github.com/huggingface/pytorch-image-models/issues/1933 | [
"enhancement"
] | flywheel1412 | 3 |
JaidedAI/EasyOCR | deep-learning | 1,038 | `recognize` returns gibberish when `batch_size` > 1 | When I set the batch size to 1 I get the correct results, but when batch_size > 1, it returns gibberish
```
results = self._easyocr_reader.recognize(
img,
detail=0,
free_list=oriented_bboxes,
horizontal_list=[],
batch_size=1
)
```
returns
```
['0',
'2',
'1',
'6',
'8',
'10',
'12',
'14',
'16',
'18',
'20',
'22',
'24',
'40.0',
'J5,0',
'80.0',
'25.0',
'20,0',
'15.0',
'10,0',
'6.0',
'0.0']
```
```
results = self._easyocr_reader.recognize(
img,
detail=0,
free_list=oriented_bboxes,
horizontal_list=[],
batch_size=64
)
```
returns
```
['EI',
'0f%l}r?GQ&MH4VY_`VpP0Fr:- Jy&e6D2{0/8G|T&n',
"[3sGqYwOyjw0€?6+crJlUI1,0 ^|p%1{./, xo'R)Po",
"%MG`5Wmn|`eikA|gP#mcx >?5yn'@y+x ,[Jl[;@8",
"K-@MV12(q34'bxQ[T#p;~%]xmhP 8. J@r? p`",
'9Bq[4rLjveFeHM9&btanI1yJi0 1',
'WK0[Q4SGyW+Bk$rGsp"m:IB ~|LBFE_g.C~',
'Wh&MQjIMx}/uqIVTKCm:yG#zxr/]xa/y{_yHCt!W',
'FwP4EQMf€KP@an@*€Hd Jg,4.#I ]SG[5a`[ [):j(',
'd3W(Q0krKw$Gm<wBFJ&VN>6d$/.5JjoWTjli_/6u{',
'16PQ%HKVm}6%l&JBOm@J#T|_ C` /q9e[ T:L',
'`TbcJl6e836g:KSV2G7 jx Y T)Kzy IJ',
'6}b1"MVwMHSk`g:9TgkL](-[)-2I}x[e2> .j|.',
'Wk0[.V`eCMH,0$MfWKCcTh:B_ j{[/M|/I ,<8',
"]6n8as`3d-'|G[f@MDfh%M,9Jzet37O^W~@vI_",
'FsF)U0fm1dw&NXt|gGRMsQ._jFq7T .m|,[ ',
"aFf'Q=TK^J]d01:hevr*;]>ae0`fx831m]4`T",
"k%,MHUtVJyL&n$mQ%jkD'gZd6JS/$t J$lbi<",
"Wi_N8Ng[}Q<&eH*!fk*lL_'*^t M_po_ y[! v",
']IyPBlJi(GWVRkm#Tpm QH|3_Ff.<gze-Vgx, {a',
">fAhUgwCFdfSiwBq4W+aHmA_pu[$H[a~V :_K'",
'xu(`VrmI/&k@@-eMcg`(J#h(/K i]_Pxuuy]8J']
``` | open | 2023-05-31T12:24:20Z | 2023-08-01T18:05:51Z | https://github.com/JaidedAI/EasyOCR/issues/1038 | [] | aamster | 4 |
Significant-Gravitas/AutoGPT | python | 8,812 | Search Results - All, Agents or Creators filter needs hooking up | closed | 2024-11-27T13:03:41Z | 2024-12-09T16:30:51Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8812 | [
"bug",
"UI",
"platform/frontend"
] | Swiftyos | 0 |
|
ploomber/ploomber | jupyter | 1,114 | remove `ploomber cloud` | follow up of: #1111
we also need to remove it from our docs (see `hooks.py`) and delete the notebooks from https://github.com/ploomber/projects
| closed | 2023-06-21T21:18:17Z | 2023-11-20T22:28:01Z | https://github.com/ploomber/ploomber/issues/1114 | [] | edublancas | 2 |
tatsu-lab/stanford_alpaca | deep-learning | 275 | Why the model I got after finetune is not good | Has anyone reproduced this result? Why is the alpaca obtained after combining weight_diff much better than the alpaca obtained by my own reproduction? Hope someone can answer me, thanks! | open | 2023-06-07T14:58:05Z | 2023-06-07T14:58:05Z | https://github.com/tatsu-lab/stanford_alpaca/issues/275 | [] | wyzhhhh | 0 |
MentatInnovations/datastream.io | jupyter | 38 | can't find where to put novelty =true | AttributeError: decision_function is not available when novelty=False. Use novelty=True if you want to use LOF for novelty detection and compute decision_function for new unseen data. Note that the opposite LOF of the training samples is always available by considering the negative_outlier_factor_ attribute. | open | 2023-01-04T05:50:21Z | 2023-01-04T05:50:21Z | https://github.com/MentatInnovations/datastream.io/issues/38 | [] | arya-STARK-Z | 0 |
matplotlib/mplfinance | matplotlib | 653 | Parabolic Sar and supertrend for renko | Hello, how do I plot Parabolic SAR and Supertrend for a Renko chart? | open | 2023-12-12T13:06:31Z | 2023-12-12T13:06:31Z | https://github.com/matplotlib/mplfinance/issues/653 | [
"question"
] | RVGITUHUB | 0 |
GibbsConsulting/django-plotly-dash | plotly | 362 | Graph not visible on older Kindle Fire Generations | Hi Guys!
I built a Dash application that so far has run on all my browsers without problems, including Safari and Chrome (both mobile and desktop) as well as newer Kindle Fires. For Kindle, I always use a fullscreen browser such as Fully Kiosk.
However, I have problems with older Kindle versions. For these, **only the native Kindle browser works, but all other browsers don't show the graph component** (headlines, checkboxes etc. are shown, as is the loading screen). I tried several browsers and versions without success.
The Kindles that don't work are, for example, the HD Fire 8 (6th generation), OS 5.6.8.0 (626542120). Interestingly, for the **newer generations (e.g. HD Fire 8 (7th generation) running OS 5.6.9.0) the graph loads just fine**.
I didn't find anything on Google and would be happy about any suggestions. What might be the problem? Any tips to allow backward compatibility with older browsers?
I am using the following dash packages:
dash 1.21.0
core components 1.17.1
dash html components 1-1-4
dash renderer 1.9.1
django (3.1.7) using django-plotly-dash (1.6.5)
Thank you!
| closed | 2021-10-19T15:46:20Z | 2022-04-26T12:44:27Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/362 | [] | janvv | 3 |
cvat-ai/cvat | pytorch | 8,204 | Revert change to "Finish the job" behavior on app.cvat.com | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
When finishing a job it would previously change the job stage to acceptance instead of annotation. Today when you press "Finish the job" it will change the state to completed instead.
There is however still a dropdown where you can change the job state to e.g. completed, but now you will need a lot of clicks to change the stage of a video.

### Describe the solution you'd like
It would be nice to either revert how this behaves or to add a drop-down menu that allows you to easily change the stage of a job.
### Describe alternatives you've considered
Drop-down menu to easily change the job stage

### Additional context
I'm using the web interface app.cvat.com | closed | 2024-07-22T06:03:36Z | 2024-12-10T18:16:00Z | https://github.com/cvat-ai/cvat/issues/8204 | [
"enhancement"
] | ChristianIngwersen | 8 |
deepset-ai/haystack | nlp | 8,784 | Create a CSV Document splitter | **Is your feature request related to a problem? Please describe.**
This is related to this issue https://github.com/deepset-ai/haystack/issues/8783 to make it easier to work with csv style documents in Haystack.
We've been working with more clients who have large and sometimes complicated Excel and CSV files that often contain multiple tables within one spreadsheet.
We've found that keeping the document size manageable is necessary in RAG use cases, so we would ideally be able to split these spreadsheets into their separate tables. Otherwise, we find the single massive table is too large to be effectively retrieved and often takes up too much space in the LLM context window.
**Describe the solution you'd like**
Therefore, it would be great to have a component that could split these single massive tables into multiple smaller tables. I think it would make the most sense to create a separate CSV Document splitter to handle this rather than expand our existing DocumentSplitter, but I'm open to discussion.
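To make the intent concrete, here is a rough sketch of the kind of splitting I mean, assuming sub-tables are separated by fully empty rows (the function name and heuristic are illustrative, not a proposed API):
```python
import pandas as pd

def split_into_tables(df: pd.DataFrame) -> list[pd.DataFrame]:
    """Split one sheet into sub-tables wherever a fully empty row occurs."""
    empty_rows = df.isna().all(axis=1)
    tables, start = [], 0
    for i, is_empty in enumerate(empty_rows):
        if is_empty:
            if i > start:
                tables.append(df.iloc[start:i].reset_index(drop=True))
            start = i + 1
    if start < len(df):
        tables.append(df.iloc[start:].reset_index(drop=True))
    return tables
```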
**Additional context**
Here is an example csv I created that has two tables combined into a single large table.
[two-tables-in-one.csv](https://github.com/user-attachments/files/18584186/two-tables-in-one.csv)
| closed | 2025-01-29T07:44:56Z | 2025-02-10T17:10:20Z | https://github.com/deepset-ai/haystack/issues/8784 | [
"type:feature",
"P2"
] | sjrl | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 752 | Teacher, please advise: several models, no download list | Teacher, please advise: Isn’t Demucs_ft and htdemucs_ft the same model? Isn’t UVR-MDX-VOC-FT Fullband SRS and UVR-MDX-VOC-FT the same model? Didn't see Demucs_ft, UVR-MDX-VOC-FT Fullband SRS, MDXv3 demo models in the uvr5 download, or they have another name in the download list? | open | 2023-08-21T09:54:01Z | 2023-08-21T09:54:01Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/752 | [] | ybhka2022 | 0 |
albumentations-team/albumentations | deep-learning | 2,400 | [New feature] Add apply_to_images to Equalize | open | 2025-03-11T01:01:43Z | 2025-03-11T01:01:59Z | https://github.com/albumentations-team/albumentations/issues/2400 | [
"enhancement",
"good first issue"
] | ternaus | 0 |
|
flairNLP/flair | pytorch | 3,482 | [Feature]: upgrade urllib3 | ### Problem statement
flair requires `urllib3<2.0.0` which is quite old (latest is 2.2, the 2.0 alpha was released in late 2022)
https://github.com/flairNLP/flair/blob/59bd7053a1a73da03293b0bdb68113cf383d6b9e/requirements.txt#L26C1-L26C75
I have other dependencies that require urllib3 >= 2.0 (e.g. `tritonclient[http] (>=2.47.0,<3.0.0) requires urllib3 (>=2.0.7)`).
### Solution
I'm not sure that flair directly uses `urllib3`; I suspect the requirement was added (as per the comment) because third-party libraries did not cap urllib3 below 2, resulting in broken installations when urllib3 v2 was released. I would hope this is no longer a constraint and it can be removed?
### Additional Context
In general I think flair imports too much by default. For example, I have to downgrade `scipy` because, even though I'm not using any `gensim`-based features, gensim uses deprecated scipy features that are now removed (`triu`), so I get import errors (gensim has fixed this but not released the fix yet). Also note that a recent "fix" added by gensim pins `numpy < 2`, which flair will inherit and which could cause issues down the line.
I think most third-party (gensim, spacy, huggingface) functionality should be "optional", i.e. `pip install flair[gensim]` to enable gensim features. I'm not a big fan of getting errors due to flair importing gensim when I'm training a tagger with transformer embeddings.
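For illustration, this is the kind of optional-extras layout I mean (a sketch only; the extra names and version pins are made up):
```python
# setup.py sketch: heavy third-party integrations become opt-in extras
extras_require = {
    "gensim": ["gensim>=4.3"],  # only needed for gensim-based embeddings
    "spacy": ["spacy>=3.0"],    # only needed for the spaCy integration
}
# passed to setuptools.setup(extras_require=...), so that
# `pip install flair[gensim]` enables gensim features on demand
```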
| closed | 2024-07-02T03:58:15Z | 2024-07-19T14:08:47Z | https://github.com/flairNLP/flair/issues/3482 | [
"feature"
] | david-waterworth | 5 |
open-mmlab/mmdetection | pytorch | 11,334 | DINO FPS measured in testing differs from the original paper by 3x! | Hello maintainers, when I use `benchmark.py` to test DINO's FPS, the measured inference speed does not match the paper. I am running inference on a 3090 Ti here, and the FPS I get is even higher than the **A100** numbers reported in the paper. I would like to ask for your help!
- Original paper:

- My results:

My benchmark command line:
```python
python tools/analysis_tools/benchmark.py /CV/xhr_project/Paper/mmdetection/work_dirs/Dino_r50_24_SODA2/Dino_r50_24_SODA2.py --checkpoint /CV/xhr_project/Paper/mmdetection/work_dirs/Dino_r50_24_SODA2/best_coco/bbox_mAP_epoch_24.pth
``` | open | 2024-01-02T08:47:25Z | 2024-01-02T09:20:31Z | https://github.com/open-mmlab/mmdetection/issues/11334 | [] | Hongru0306 | 2 |
explosion/spaCy | deep-learning | 12,259 | Documentation Update - Fix incorrect filename found in Entity Ruler usage page | <!-- Describe the problem or suggestion here. If you've found a mistake and you know the answer, feel free to submit a pull request straight away: https://github.com/explosion/spaCy/pulls -->
The filename mentioned in the documentation is not correct.
## Which page or section is this issue related to?
<!-- Please include the URL and/or source. -->
In the last paragraph for the section [Using Pattern files](https://spacy.io/usage/rule-based-matching#entityruler-files), the documentation [mentions](https://spacy.io/usage/rule-based-matching#entityruler-files:~:text=the%20pipeline%20directory%20contains%20a%20file%20entityruler.jsonl) that the file saved in the `entity_ruler` folder is `entityruler.jsonl` when it should be `patterns.jsonl`
| closed | 2023-02-08T17:09:15Z | 2023-03-12T00:02:19Z | https://github.com/explosion/spaCy/issues/12259 | [
"docs"
] | SethDocherty | 2 |
lepture/authlib | django | 49 | response_mode=form_post | - [x] https://openid.net/specs/oauth-v2-form-post-response-mode-1_0.html | closed | 2018-04-25T10:17:36Z | 2018-06-13T11:53:25Z | https://github.com/lepture/authlib/issues/49 | [] | lepture | 0 |
paperless-ngx/paperless-ngx | django | 9,483 | Relative path resolution is broken (update on #2433 and clarification, with patch example). | ### Description
Path finding for configuration files uses relative paths from the current working directory, not the source directory.
#2433 was closed as being an extra environment issue. This is not the case.
I was able to reproduce this by setting up an environment:
* `/opt/paperless/paperless-2.14.7` # latest stable.
* `/opt/paperless/paperless-2.14.7/paperless.conf` # valid configuration (runs in other same-version instances).
* `/var/venv/paperless` # python3 environment, same as other instances.
Reproduction:
* `cd /` - change to any directory outside of `src/`.
* `/var/venv/paperless/bin/python3 /opt/paperless/paperless/src/manage.py migrate` - execute migrate (or any command)
* Error is expressed from #2433
Suggested fix:
Always fully resolve file paths with libraries, even if relative:
[settings.py#22](https://github.com/paperless-ngx/paperless-ngx/blob/7a07f1e81ddf826476a2add2f1262661f7daad03/src/paperless/settings.py#L22)
``` diff
- elif os.path.exists("../paperless.conf"):
+elif os.path.exists(os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "paperless.conf")):
```
* `../paperless.conf` actually refers to `CWD/../paperless.conf`, which resolves to `/paperless.conf` for the reproduction case.
Alternatively, you can import `pathlib` and resolve paths that way.
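For example, a `pathlib` variant of the same idea (sketch only, mirroring the `os.path` version above):
```python
from pathlib import Path

# resolve paperless.conf relative to the source tree, independent of the current working directory
candidate = Path(__file__).resolve().parent.parent / "paperless.conf"
if candidate.exists():
    ...  # use `candidate` wherever the relative "../paperless.conf" was used before
```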
This needs to be applied to all path resolutions for all files.
Work around:
* `cd /opt/paperless/paperless-2.14.7/src`
* `/var/venv/paperless/bin/python3 /opt/paperless/paperless/src/manage.py migrate`
* Works fine (basically, execute everything from the source directory).
### Steps to reproduce
See above.
### Webserver logs
```bash
# Executed outside of the src/ directory
$ /var/venv/paperless/bin/python3 /opt/paperless/paperless/src/manage.py migrate
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is set but doesn't exist.
HINT: Create a directory at /opt/paperless/paperless-2.14.7/consume
?: PAPERLESS_MEDIA_ROOT is set but doesn't exist.
HINT: Create a directory at /opt/paperless/paperless-2.14.7/media
# Executed within the src/ directory
$ /var/venv/paperless/bin/python3 /opt/paperless/paperless/src/manage.py migrate
$
```
### Browser logs
```bash
N/A
```
### Paperless-ngx version
1.11.3 to 2.14.7 (likely all existing versions).
### Host OS
Debian bookworm.
### Installation method
Bare metal
### System status
```json
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-03-24T17:31:48Z | 2025-03-24T18:17:22Z | https://github.com/paperless-ngx/paperless-ngx/issues/9483 | [
"not a bug"
] | r-pufky | 2 |
encode/databases | asyncio | 138 | Problem with parsing JSON/JSONB | I want to use some columns to store JSON data inside my Postgres json or jsonb columns, but when data is fetched it is not converted back to its original basic `dict` type.
Let's have a column with definition:
```python
Column("foo", postgres.ARRAY(postgres.JSONB), nullable=False, server_default="{}")
```
When I save a dictionary with the following structure, the data is stored correctly in the database using the internal converter:
```python
data = [{"foo": "bar"}]
```
When reading the data back, it comes back as an array of strings that I have to deserialize back into dictionaries:
```python
foo = row["foo"] # This equals: ['{"foo":"bar"}']
```
I tried to create my own custom type according to the SQLAlchemy docs: https://docs.sqlalchemy.org/en/13/core/custom_types.html and came up with:
```python
class JsonDictType(types.TypeDecorator):
impl = postgres.JSONB
def coerce_compared_value(self, op, value):
return self.impl.coerce_compared_value(op, value)
def process_result_value(self, value, dialect):
return json.loads(value)
def result_processor(self, dialect, coltype):
return functools.partial(self.process_result_value, dialect=dialect)
```
and redefined my column to:
```python
Column("foo", postgres.ARRAY(JsonDictType), nullable=False, server_default="{}")
```
This, however, doesn't solve anything. The `result_processor` and `process_result_value` are never called, and I have to manually iterate through the list and deserialize the JSON strings outside the query. | open | 2019-08-29T10:13:51Z | 2021-10-20T10:06:42Z | https://github.com/encode/databases/issues/138 | [] | LKay | 9 |
explosion/spaCy | data-science | 13,059 | Dependency sentence segmenter handles newlines inconsistently between languages | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
[Colab notebook demonstrating problem](https://colab.research.google.com/drive/14FFYKqjRVRbN7aAVmHUYEao9CwahY0We?usp=sharing)
When parsing a sentence that contains newlines, the Italian parser sometimes assigns the newline to a sentence by itself, for example:
>Ma regolamenta solo un settore, a differenza dell’azione a largo raggio dell’Inflation Act. \nI tentativi di legiferare per stimolare l’industria non hanno avuto molto successo.
Produces 3 sentences:
```
'Ma regolamenta solo un settore, a differenza dell’azione a largo raggio dell’Inflation Act (dalla sanità all’industria pesante).'
'\n'
'I tentativi di legiferare per stimolare l’industria non hanno avuto molto successo.'
```
There are various experiments with different combinations of punctuation in the notebook.
Looking at the tokens and their `is_sent_start` property, it seems under some circumstances the `\n` and `I` tokens are both assigned as the start of a new sentence.
I have not been able to cause this problem with `en_core_web_sm`, which always correctly identifies 2 sentences.
Although I understand that sentence segmentation based on the dependency parser is probabilistic and not always correct, it seems there's some inconsistency between languages here, and I don't think it would ever be correct for a whitespace token to be assigned as the start of a sentence.
## Your Environment
- **spaCy version:** 3.6.1
- **Platform:** Linux-5.15.120+-x86_64-with-glibc2.35
- **Python version:** 3.10.12
- **Pipelines:** it_core_news_sm (3.6.0), en_core_web_sm (3.6.0) | open | 2023-10-11T15:25:36Z | 2023-10-13T11:17:23Z | https://github.com/explosion/spaCy/issues/13059 | [
"lang / it",
"feat / senter"
] | freddyheppell | 3 |
STVIR/pysot | computer-vision | 221 | Would single gpu will affect the results? | Thanks for your good work! Now,I reperformance this project with two gpus ,EAO=0.41;In the second time, I set nproc_per_node=1,I find this project can work with single gpu,I want to know:would single gpu will affect the results? | closed | 2019-10-29T07:14:58Z | 2019-12-20T02:19:23Z | https://github.com/STVIR/pysot/issues/221 | [] | smile-hahah | 6 |
K3D-tools/K3D-jupyter | jupyter | 318 | Volume rendering comparison | I'm trying out volume rendering in different libraries, namely this one and `yt` (https://yt-project.org/doc/visualizing/volume_rendering.html)
When rendering the same dataset with the same transfer function, I get different results.
Dataset is created manually, and is supposed to represent a spherical volume:
```
import k3d
import yt
import numpy as np
import ipywidgets
nx, ny, nz = 110, 110, 110
volume = np.zeros((nx, ny, nz))
center = np.array([55, 55, 55])
for i in range(nx):
for j in range(ny):
for k in range(nz):
if np.linalg.norm(np.array([i, j ,k]) - center) < 40:
volume[i, j, k] = 1.0
```
Transfer function is initialized via the `yt` libary:
```
tf = yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction(
x_bounds=(np.min(volume), np.max(volume) + 0.1) # 0.1 offset needed to prevent rendering issues in yt
)
tf.map_to_colormap(
tf.x_bounds[0],
tf.x_bounds[1],
colormap='Blues',
scale_func=lambda vals, minval, maxval: (vals - vals.min()) / (vals.max() - vals.min()),
)
```
Which results in the following transfer function:

Where the opacity increases linearly from 0.0 to 1.0 and the end color is blue.
I convert the transfer function into a colormap and opacity function suitable for k3d
```
# create colormap from tf
colormap = np.zeros((256, 4), np.float32)
colormap[:, 0] = tf.alpha.x / max(tf.alpha.x) # rescale back between 0.0 and 1.0
colormap[:, 1] = tf.red.y
colormap[:, 2] = tf.green.y
colormap[:, 3] = tf.blue.y
# create opacity func
opacity_func = np.zeros((256, 2), dtype=np.float32)
opacity_func[:, 0] = tf.alpha.x / max(tf.alpha.x) # rescale back between 0.0 and 1.0
opacity_func[:, 1] = np.linspace(0, 1, 256)
color_range = (np.min(volume), np.max(volume))
```
And feed to `k3d.volume`
```
out = ipywidgets.Output(layout={'width': '600px', 'height': '600px'})
plot = k3d.plot(background_color=0, grid_visible=False, lighting=0)
plot += k3d.volume(
volume=volume,
color_range=color_range,
opacity_function=opacity_func,
color_map=colormap,
alpha_coef=1.0
)
with out:
display(plot)
display(out)
plot.camera_reset(factor=0.5)
```

I also feed the same data into `yt` (setting their camera lens to 'perspective' to match k3d)
```
data = dict(density = (volume, "g"))
ds = yt.load_uniform_grid(data, volume.shape, length_unit="m", nprocs=1)
sc = yt.create_scene(ds, 'density')
dd = ds.all_data()
sc[0].log_field = False
sc[0].set_transfer_function(tf)
cam = sc.add_camera(ds, lens_type="perspective")
cam.resolution = [600, 600]
sc.show(sigma_clip=0)
```

The images look very different, and I have some questions:
- Why does `k3d` not saturate the rays that pass through the middle of the sphere? I would expect the rays to be oversaturated with blue there, and mixing many contributions of the darker blue eventually leads to light blue?
- Why is there almost no gradient in color between the edges of the sphere (where the rays pass only through a small section of the volume) and the middle
- Why does the volume seem to consist of little cubes? Even if I set the lighting to 0, the volume seems to be made of little cubes that have a 'dark' and a 'bright' side
I found out that if I use the `sigma_clip` option in yt, I can make the renders look more similar:

> sigma_clip = N can address this by removing values that are more than N standard deviations brighter than the mean of your image
Is this something that is built-in to k3d as well?
I hope this question is not too long and complicated. I included a notebook that can render the examples as well. Any help would be much appreciated!
[k3d_vs_yt.ipynb.gz](https://github.com/K3D-tools/K3D-jupyter/files/7505949/k3d_vs_yt.ipynb.gz)
| closed | 2021-11-09T15:26:39Z | 2022-12-19T21:35:52Z | https://github.com/K3D-tools/K3D-jupyter/issues/318 | [] | ghost | 13 |
streamlit/streamlit | python | 10,199 | Page Reload on Cache Miss | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
I am noticing behaviour very similar to the behaviour described in [this issue](https://github.com/streamlit/streamlit/issues/9036#issuecomment-2594452089) when using v1.41.1. In particular, I'm noticing that if I cause a rerun by pressing "R" and there is a cache miss, then sometimes the page reloads, element status is lost (e.g. active element, but not state like selected items) and scroll position is lost (the page is scrolled to the very top).
I first noticed this on a large page with many elements and several cached_data references. I was able to narrow it down to a single simple page shown in the reproducible code.
The problem seems to come up when there are multiple cached_data functions that all expire together and all need to be run again as part of the same app rerun.
The attached video shows the reload happening consistently on the cache miss, i.e. every 3 seconds. I've found that if I reduce the number of cached_data functions, then this is less likely to happen.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10199)
```Python
import streamlit as st
import pandas as pd
@st.cache_data(ttl=3, show_spinner=True)
def get_table(table_name: str) -> list:
return [{"name": "AUD"}, {"name": "USD"}, {"name": "EUR"}, {"name": "GBP"}]
def get_table_as_dataframe(table_name: str) -> pd.DataFrame:
return pd.DataFrame(get_table(table_name=table_name))
currencies1 = get_table_as_dataframe("1")
currencies2 = get_table_as_dataframe("2")
currencies3 = get_table_as_dataframe("3")
currencies4 = get_table_as_dataframe("4")
currencies5 = get_table_as_dataframe("5")
currencies6 = get_table_as_dataframe("6")
currencies7 = get_table_as_dataframe("7")
currencies8 = get_table_as_dataframe("8")
currencies9 = get_table_as_dataframe("9")
currencies10 = get_table_as_dataframe("10")
currencies11 = get_table_as_dataframe("11")
currencies12 = get_table_as_dataframe("12")
currencies13 = get_table_as_dataframe("13")
currencies14 = get_table_as_dataframe("14")
currencies15 = get_table_as_dataframe("15")
st.markdown("### Top of the page")
st.table(currencies1)
st.table(currencies1)
st.table(currencies1)
st.table(currencies1)
st.table(currencies1)
st.multiselect(
"What are your favorite currencies?",
currencies1["name"],
)
st.text_input("Text")
```
### Steps To Reproduce
- Scroll to the bottom of the page
- Make a currency selection
- Hit "R" until the cache expires
- the page should reload and be scrolled to the top
- the previous selection is STILL in place, i.e. not all element state is lost
- Scroll to the bottom again
- Enter text into the text box and press "enter" until the cache expires
- the page should reload and be scrolled to the top
- the active selection of the text input IS LOST
### Expected Behavior
The page should not be scrolled to the top and the active element should remain active.
### Current Behavior
There is nothing in the browser console.
The following appears in the python logs
```
2025-01-17 01:55:30.907 DEBUG streamlit.web.server.browser_websocket_handler: Received the following back message:
rerun_script {
widget_states {
widgets {
id: "$$ID-811df22d59c27c532e48ee256fa80d39-None"
int_array_value {
data: 0
}
}
widgets {
id: "$$ID-882dd42dbbd4c338093ec615a29077f2-None"
string_value: "1111111"
}
}
page_script_hash: "397c4a328c8f95ca5967dac095deeca0"
}
2025-01-17 01:55:30.908 DEBUG streamlit.runtime.scriptrunner.script_runner: Beginning script thread
2025-01-17 01:55:30.908 DEBUG streamlit.runtime.scriptrunner.script_runner: Running script RerunData(widget_states=widgets {
id: "$$ID-811df22d59c27c532e48ee256fa80d39-None"
int_array_value {
data: 0
}
}
widgets {
id: "$$ID-882dd42dbbd4c338093ec615a29077f2-None"
string_value: "1111111"
}
, page_script_hash='397c4a328c8f95ca5967dac095deeca0')
2025-01-17 01:55:30.908 DEBUG streamlit.runtime.media_file_manager: Disconnecting files for session with ID 31855fe9-2f06-4d2d-ac09-4f726f8e571d
2025-01-17 01:55:30.908 DEBUG streamlit.runtime.media_file_manager: Sessions still active: dict_keys([])
2025-01-17 01:55:30.909 DEBUG streamlit.runtime.media_file_manager: Files: 0; Sessions with files: 0
2025-01-17 01:55:30.911 DEBUG streamlit.runtime.caching.cache_utils: Cache key: e070e0160f18db9df205936508f0591e
2025-01-17 01:55:30.911 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: e070e0160f18db9df205936508f0591e
2025-01-17 01:55:30.913 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: e070e0160f18db9df205936508f0591e
2025-01-17 01:55:30.915 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 2b3e18e75707bbd9478816e21a3fdd4a
2025-01-17 01:55:30.915 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 2b3e18e75707bbd9478816e21a3fdd4a
2025-01-17 01:55:30.916 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 2b3e18e75707bbd9478816e21a3fdd4a
2025-01-17 01:55:30.917 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 894c9b69f51fa13841a2dcc57ebe1e2f
2025-01-17 01:55:30.917 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 894c9b69f51fa13841a2dcc57ebe1e2f
2025-01-17 01:55:30.917 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 894c9b69f51fa13841a2dcc57ebe1e2f
2025-01-17 01:55:30.918 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 33e415fced6619a2f5e91dacaf64325a
2025-01-17 01:55:30.918 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 33e415fced6619a2f5e91dacaf64325a
2025-01-17 01:55:30.919 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 33e415fced6619a2f5e91dacaf64325a
2025-01-17 01:55:30.919 DEBUG streamlit.runtime.caching.cache_utils: Cache key: a312bf67afb8375c5e54246304b9be68
2025-01-17 01:55:30.919 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: a312bf67afb8375c5e54246304b9be68
2025-01-17 01:55:30.919 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: a312bf67afb8375c5e54246304b9be68
2025-01-17 01:55:30.920 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 800425b80fd72720a71947f102c81fa7
2025-01-17 01:55:30.920 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 800425b80fd72720a71947f102c81fa7
2025-01-17 01:55:30.920 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 800425b80fd72720a71947f102c81fa7
2025-01-17 01:55:30.920 DEBUG streamlit.runtime.caching.cache_utils: Cache key: df8691c87869548ae28c4d67b0ce2d83
2025-01-17 01:55:30.920 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: df8691c87869548ae28c4d67b0ce2d83
2025-01-17 01:55:30.921 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: df8691c87869548ae28c4d67b0ce2d83
2025-01-17 01:55:30.921 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 7107b569f00e427014580e7313504cdb
2025-01-17 01:55:30.921 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 7107b569f00e427014580e7313504cdb
2025-01-17 01:55:30.922 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 7107b569f00e427014580e7313504cdb
2025-01-17 01:55:30.922 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 3f9e6bde62be5ab0e489d70075bdaa66
2025-01-17 01:55:30.922 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 3f9e6bde62be5ab0e489d70075bdaa66
2025-01-17 01:55:30.923 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 3f9e6bde62be5ab0e489d70075bdaa66
2025-01-17 01:55:30.923 DEBUG streamlit.runtime.caching.cache_utils: Cache key: b4cb12d1434538fc9d88cd57c4b1df1b
2025-01-17 01:55:30.923 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: b4cb12d1434538fc9d88cd57c4b1df1b
2025-01-17 01:55:30.923 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: b4cb12d1434538fc9d88cd57c4b1df1b
2025-01-17 01:55:30.924 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 12924a20c03721afc1c1baa93686041f
2025-01-17 01:55:30.924 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 12924a20c03721afc1c1baa93686041f
2025-01-17 01:55:30.924 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 12924a20c03721afc1c1baa93686041f
2025-01-17 01:55:30.924 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 1d705635a20fe314007382c505be4d33
2025-01-17 01:55:30.924 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 1d705635a20fe314007382c505be4d33
2025-01-17 01:55:30.925 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 1d705635a20fe314007382c505be4d33
2025-01-17 01:55:30.925 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 9acdbddfe3cd5c9978ada8d8672c814f
2025-01-17 01:55:30.925 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 9acdbddfe3cd5c9978ada8d8672c814f
2025-01-17 01:55:30.926 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 9acdbddfe3cd5c9978ada8d8672c814f
2025-01-17 01:55:30.926 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 9f30edaed642107dec187dba84761bd0
2025-01-17 01:55:30.926 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 9f30edaed642107dec187dba84761bd0
2025-01-17 01:55:30.927 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 9f30edaed642107dec187dba84761bd0
2025-01-17 01:55:30.927 DEBUG streamlit.runtime.caching.cache_utils: Cache key: 0ad19e637e4a0d0899835e9b1f55ed1b
2025-01-17 01:55:30.927 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 0ad19e637e4a0d0899835e9b1f55ed1b
2025-01-17 01:55:30.928 DEBUG streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper: Memory cache MISS: 0ad19e637e4a0d0899835e9b1f55ed1b
2025-01-17 01:55:30.933 DEBUG streamlit.runtime.media_file_manager: Removing orphaned files...
2025-01-17 01:55:30.980 DEBUG streamlit.runtime.runtime: Script run finished successfully; removing expired entries from MessageCache (max_age=2)
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.1
- Python version: 3.12
- Operating System: Linux 6.6.65-1-MANJARO
- Browser: Chromium, Firefox
### Additional Information
https://github.com/user-attachments/assets/049ee61f-2be8-4434-a851-94b95f4cd87c | open | 2025-01-17T02:00:51Z | 2025-01-21T18:26:46Z | https://github.com/streamlit/streamlit/issues/10199 | [
"type:bug",
"feature:cache",
"status:confirmed",
"priority:P3",
"feature:st.spinner"
] | stefanadelbert | 7 |
TracecatHQ/tracecat | pydantic | 70 | Update autocomplete for tags only | # Motivation
Don't need to update context and actions, just add AI tags (v0 - autocompleted from our curated list of tracecat-owned tags) | closed | 2024-04-20T05:26:15Z | 2024-04-20T06:58:23Z | https://github.com/TracecatHQ/tracecat/issues/70 | [] | daryllimyt | 0 |
jupyterlab/jupyter-ai | jupyter | 339 | Enable chat editing | <!-- Welcome! Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
<!--
Thanks for thinking of a way to improve JupyterLab. If this solves a problem for you, then it probably solves that problem for lots of people! So the whole community will benefit from this request.
Before creating a new feature request please search the issues for relevant feature requests.
-->
### Problem
<!-- Provide a clear and concise description of what problem this feature will solve. For example:
* I'm always frustrated when [...] because [...]
* I would like it if [...] happened when I [...] because [...]
-->
It is not good to clear the chat or continue to prompt in a somewhat "dirty" chat history, both from a cost and quality perspective.
### Proposed Solution
<!-- Provide a clear and concise description of a way to accomplish what you want. For example:
* Add an option so that when [...] [...] will happen
-->
In that vein, I would like to edit any message in the chat history. A simple edit button that enables me to alter my previous prompt should be enough.
### Additional context
<!-- Add any other context or screenshots about the feature request here. You can also include links to examples of other programs that have something similar to your request. For example:
* Another project [...] solved this by [...]
-->
Just like the ChatGPT default UI, I think.
---
Thanks for the project! | open | 2023-08-12T20:11:38Z | 2024-10-23T22:12:28Z | https://github.com/jupyterlab/jupyter-ai/issues/339 | [
"enhancement"
] | vitalwarley | 3 |
slackapi/bolt-python | fastapi | 423 | Using AWS S3 for persistent storage of OAuth tokens giving ClientError (Access Denied) | Hello again! 😅 I have had some success implementing the S3 storage for OAuth tokens and installation data, however I have run into an issue I'm not quite sure how to solve. I came across this issue when trying to navigate to the `.../slack/install` link: `An error occurred (AccessDenied) when calling the PutObject operation: Access Denied`
I thought this might be a permissions issue so I added these permissions to the bucket:
```
{
"Version": "2012-10-17",
"Id": "redacted",
"Statement": [
{
"Sid": "redacted",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::user-num:my-user-name"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-bucket-name"
}
]
}
```
I also tried setting actions to:
```
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
```
And this allowed me to manually add and remove files and folders from the bucket, but the slack application was still giving me an access denied error.
I have my code setup like this:
```
s3_client = boto3.client("s3", aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
state_store = AmazonS3OAuthStateStore(
s3_client=s3_client,
bucket_name=os.environ["SLACK_STATE_S3_BUCKET_NAME"],
expiration_seconds=600,
)
oauth_settings = ...,
installation_store=AmazonS3InstallationStore(
s3_client=s3_client,
bucket_name=os.environ["SLACK_INSTALLATION_S3_BUCKET_NAME"],
client_id=os.environ["SLACK_CLIENT_ID"],
),
...
app = App(
process_before_response=True,
signing_secret=os.environ.get("SLACK_SIGNING_SECRET"),
oauth_settings=oauth_settings,
oauth_flow=LambdaS3OAuthFlow(),
)
```
My `settings.py` file looks like this (though I only reference a few things from here; I normally reference straight from the `.env` file):
```
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID", None)
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY", None)
SLACK_STATE_S3_BUCKET_NAME = os.getenv("SLACK_STATE_S3_BUCKET_NAME", None)
SLACK_INSTALLATION_S3_BUCKET_NAME = os.getenv(
"SLACK_INSTALLATION_S3_BUCKET_NAME", None
)
SLACK_LAMBDA_PATH = os.getenv("SLACK_LAMBDA_PATH", None)
```
The slack application `.settings` portion of the error page shows that all of these variables are being initialized correctly.
Any help understanding my issue would be much appreciated!
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
slack-bolt==1.6.1
slack-sdk==3.8.0
slackclient==2.9.3
slackeventsapi==2.2.1
#### Python runtime version
Python 3.9.6
#### OS info
Darwin Kernel Version 20.5.0: Sat May 8 05:10:31 PDT 2021; root:xnu-7195.121.3~9/RELEASE_ARM64_T8101
#### Steps to reproduce:
Currently this is running on a docker container, but I imagine any slack application would reproduce the issue if the code setup was the same as I have pasted above.
### Expected result:
The application should be able to store OAuth tokens and installation data to the AWS S3 bucket.
### Actual result:
:(
<img width="817" alt="Screen Shot 2021-07-27 at 12 43 46 PM" src="https://user-images.githubusercontent.com/85132374/127217720-290e7021-0afa-4870-b8bf-73f725805d8b.png">
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-07-27T19:56:52Z | 2021-07-28T23:46:23Z | https://github.com/slackapi/bolt-python/issues/423 | [
"question"
] | JasonDykstraSelectStar | 10 |
miguelgrinberg/microblog | flask | 226 | Chapter 15.4: Unit Testing improvements - ImportWarning: PyEnchant is unavailable | Hi when I run `$ python3 tests.py` unit testing for **Microblog version 15** example code I get the below **import warning** in terminal. But I don't get this warning when running **Microblog version 8** example code. When comparing the files `models.py` and `tests.py` between these two versions I don't see much difference for the **avatar testing** code, besides for the application structure changes.
Is there a way to get rid of this import warning message when running unit testing?
.
**microblog-v0.15**
```
microblog-0.15$ python3 tests.py
test_avatar (__main__.UserModelCase)
... /home/billgates/.local/lib/python3.6/site-packages/guess_language/__init__.py:529:
ImportWarning: PyEnchant is unavailable
warnings.warn("PyEnchant is unavailable", ImportWarning)
ok
test_follow (__main__.UserModelCase) ... ok
test_follow_posts (__main__.UserModelCase) ... ok
test_password_hashing (__main__.UserModelCase) ... ok
----------------------------------------------------------------------
Ran 4 tests in 0.373s
OK
```
.
**microblog-v0.8**
```
microblog-0.8$ python3 tests.py
[2020-04-25 12:44:41,940] INFO in __init__: Microblog startup
test_avatar (__main__.UserModelCase) ... ok
test_follow (__main__.UserModelCase) ... ok
test_follow_posts (__main__.UserModelCase) ... ok
test_password_hashing (__main__.UserModelCase) ... ok
----------------------------------------------------------------------
Ran 4 tests in 0.255s
OK
```
**Chaper 15: A better Application Structure**
https://github.com/miguelgrinberg/microblog/releases/tag/v0.15
**Chaper 8: Followers**
https://github.com/miguelgrinberg/microblog/releases/tag/v0.8 | closed | 2020-04-25T11:49:33Z | 2020-04-25T16:09:27Z | https://github.com/miguelgrinberg/microblog/issues/226 | [
"question"
] | mrbiggleswirth | 2 |
httpie/http-prompt | api | 210 | how to remove pager? | I don't know python but I do know several other languages so could someone tell me how do I remove the pager? what specific file or line would be appreciative as I use my own scrollbar and buffer and do NOT want to use more or less or anything I just want it to display the results at the bottom | open | 2021-12-12T00:32:40Z | 2021-12-30T09:59:35Z | https://github.com/httpie/http-prompt/issues/210 | [] | gittyup2018 | 2 |
quantumlib/Cirq | api | 6,373 | Test failures in `_compat_test.py/test_deprecated_module` | I'm seeing failures in CI in `_compat_test.py/test_deprecated_module`: https://github.com/quantumlib/Cirq/actions/runs/7097897242/job/19318827306?pr=6372. The tests are not seeing expected log messages from a deprecation.
| closed | 2023-12-05T16:29:32Z | 2023-12-05T19:29:45Z | https://github.com/quantumlib/Cirq/issues/6373 | [
"kind/health"
] | maffoo | 1 |
mwaskom/seaborn | data-science | 2,973 | Rename layout(algo=) to layout(engine=) | Matplotlib has settled on this term with the new `set_layout_engine` method in 3.6 so might as well be consistent with them.
The new API also ha some implications for how the parameter should be documented / typed. | closed | 2022-08-23T22:47:53Z | 2022-09-05T00:36:45Z | https://github.com/mwaskom/seaborn/issues/2973 | [
"api",
"objects-plot"
] | mwaskom | 0 |
sinaptik-ai/pandas-ai | data-science | 841 | Conversational/Follow Up questions for Agent still takes the base dataframe for code generation | ### System Info
python==3.10.13
pandasai==1.5.11
Windows OS
### 🐛 Describe the bug
I initialized a PandasAI Agent for conversational capabilities and gave it the base dataframe and an Azure OpenAI LLM. The agent answers the first question well, but when I ask a follow-up question it builds and runs the Python code against the base dataframe rather than the dataframe from the previous answer. Below is my code (excluding imports):
```
nl_course_agent = Agent([course_data], memory_size=10, config={
"llm": llm, "response_parser": CustomPandasDataFrameResponse,
"generate_python_code": MyCustomPrompt()
# "custom_instructions": "Ensure to include all queries in the conversation while you generate the response"
}
)
question1 = "what are HR case management related courses?"
question2 = "show me only greater than 4 rating"
nl_course_agent.start_new_conversation()
nl_course_agent.chat(question1) ## returns right answer
### Follow up questions
nl_course_agent.chat(question2) ## Returns wrong answer
print(nl_course_agent.last_prompt)
<conversation>
Q: what are HR case management related courses?
A: Check it out: <dataframe>
</conversation>
<query>
show me only greater than 4 rating
</query>
Is the query somehow related to the previous conversation? Answer only "true" or "false" (lowercase).
nl_course_agent.check_if_related_to_conversation(question2) ## Returns True
```
I also tried a custom prompt but no change in response.
```
class MyCustomPrompt(AbstractPrompt):
template = """You are given a dataframe with number if rows equal to {dfs[0].shape[0]} and number of columns equal to {dfs[0].shape[1]}
Here's the conversation:
{conversation}
If the question is related to conversation, then use entire conversation to filter the dataframe. Not just the recent question.
"""
```
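For option (2) below, what I have in mind is roughly the following sketch; it only reuses the `Agent` and `chat` calls from my code above and assumes the custom response parser returns the answer as a pandas DataFrame:
```
# Rough idea only: feed the first answer back in as the data for the follow-up
last_df = nl_course_agent.chat(question1)  # assumed to be a DataFrame via the response parser

followup_agent = Agent([last_df], config={"llm": llm})
followup_agent.chat(question2)
```
I don't know whether this is the intended pattern or whether the agent is supposed to handle it internally.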
How can I make the agent:
1) Take the entire conversation into account when building the Python code if it recognizes a follow-up question? OR
2) Pass filtered dataframe as input to follow up question? | closed | 2023-12-29T05:11:32Z | 2024-06-01T00:20:42Z | https://github.com/sinaptik-ai/pandas-ai/issues/841 | [] | chaituValKanO | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 119 | Error when merging the 13B model | Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: /home/sd/miniconda3/envs/textgen1/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/sd/miniconda3/envs/textgen1/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Loading checkpoint shards: 100%|██████████████████████████████████████████| 3/3 [03:19<00:00, 66.39s/it]
Traceback (most recent call last):
File "/home/sd/cctext2023/chinese-LLaMA/Chinese-LLaMA-Alpaca/scripts/merge_llama_with_chinese_lora.py", line 41, in <module>
base_model = LlamaForCausalLM.from_pretrained(
File "/home/sd/miniconda3/envs/textgen1/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2787, in from_pretrained
dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index)
TypeError: dispatch_model() got an unexpected keyword argument 'offload_index'
I specified offload_dir; I suspect this is caused by the machine not having enough memory (it has 32 GB of RAM). | closed | 2023-04-11T06:25:14Z | 2023-05-30T11:08:37Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/119 | [
"stale"
] | jayecho51 | 7 |
assafelovic/gpt-researcher | automation | 440 | 如何使用Langchain Adapter 支持的开源模型,且并没有找到config/config.py文件 | 由于特殊原因,无法使用OpenAI GPT,想使用开源llm进行测试?能否提供具体操作步骤呢?感谢! | closed | 2024-04-11T02:12:04Z | 2024-04-30T14:50:39Z | https://github.com/assafelovic/gpt-researcher/issues/440 | [] | YinSonglin1997 | 3 |
widgetti/solara | fastapi | 755 | Default app layout | According to the docs, we can specify a default app layout by creating a `Layout` object in an `__init__.py` file, in the same directory as the main file of the solara app.
Maybe I am doing something wrong, but this file does not seem to be picked up.
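For reference, this is roughly what I have in the `__init__.py` (a minimal sketch; the use of `solara.AppLayout` here is just my reading of the docs):
```python
# __init__.py, placed next to the app's main page file
import solara


@solara.component
def Layout(children=[]):
    # Default layout that should wrap every page of the app
    return solara.AppLayout(children=children)
```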
Here is a [pycafe example](https://py.cafe/snippet/solara/v1#c=H4sIAH9vzmYAA51W227cNhD9FUH7kKRYqbrtzYCAtkGBPrQIEKTtQzYwuBJ3xVgiFZKyvQn87z1Dai927LiJFzDA4fDM_Yy-hJWqeXgRrqXoeqVtYFTLNFtL-v3iD3GlcCe5tGtZ823QqI6_fHWxlgH-RpV3_Na-fPHHm79-f_HqG0_ZRg328be__vbm73f0mG6-AXHNtRFKPgqyffFlPF5ejnqXl3feo7XUsM1NUAbv7718S-KXPbNNuQ5_XofT4GixpFCnQcs2vMUlnaJ0Hb6aPg0wmr0PMwpPSP7lIZZvI7qc3cdzohOaO0YZYNbyg491Ekwmk-BdI0xwo_SVCaJgkADo8DywKjCck9IjOZ4ElOU_2Z4qVTWirTWXlOwJORjcCNsc3Py170e90_1DndeqHTpfrZPCw7qtw9eDsapDRATnAnlUuxamh87Ln46O-WDPS-sf_nBxSUJOlD40qszTiD9e7acxv6_ek-BDOA01_zQIzam6BtN8GGLc2H1P8-0lOLO-_0fwm_Biy1rDpyGvhf1dsk0LLasHSPq9bZTEm36valHz6DqJs1mc4rHPTHjx5RB4eJHBuFL2rQLkl4M1jdM0PNQovHh_vLFsY7jF5Y2obRNepLNkGnZC_uuPuT_9wcWugR06ihrPtqLlvwHVcP1aScuE5PoJC6QabbwuVCipkIZ3H-6mj3gxGlqu4myeLopiNZ8XRbaaPen9CRKZjPt9eB_2dH1ektje2icVLy-FFPby0oGde-mzeHAxTZN4kS-z2SzJ89lskTzto4_t-ayOWXouo4dkPp7C5uDfAg49b9NyDTFrnzN60COr9Lubusqiu6EjWUc655kDChCRbMi_WmVPrJJHSO7bFPcMwX0Pvf0vckPwGDpVXVHolDEf3yRomGkugjqZLxOerdJNkaVbtplv6jRfpklRLLPlbLFdSyb3QpVlERdxgpPeKZlF1XYryjLL4_SBMNoIWQu5M7hN4yxOgknQwTopoQfKMo1z98RYq664JD0gp5BYq-k0Ym5YdVWxti1LUIcTcDZYsR1ao4a-IIfSLM4gbzmrmrKcxwmpVThwq1QLrBlsAbniGg_J3wToizjDKx9AGqdgJZwaptGKkVS6Y634zHVZ5nhMiq2orspyCa8WOGH9eY9wVfPNsOv3BIPrY6A1r5RmVgED4ASPRhkMr287F82CRPyWVwhH7ij-hCRbpOSjUdLA_46ROHMBUd2ofuQRvC0g-VRLMupcb9KUQFNXnWbomIT7LjvutaglsHJ6Jvr9FdcS9I9cgY0RAURE1C68OaGNgmjHJbyjHI7Zx8X1AFzE40LyZ6TVhT_a6veg5B23eEb5mkFkVD0gGeB5Sj89huJHbAzn9MqdhPzIMhce-UQ56JVAo7oiOOTzvCCyRZxDNsBXriMUiPYbLCKqM7HScHdG5T4J-TVRKZmen0OAmbCMKOUUyLGQ96-jA52492itowJ2atTvd46m6c41-NndMSkUTYp3HdNXtbpBStCOp2MkLHCOQZN46A3bIg54RgY7ZvtW2VZsIiFbUJ_LosNAml13uRx2wtiBbgkKZ7mpWmaMqFyxzkKkC58-vHRWIVES4UKERp1T48jNluYCEnRc4iTc2IiZvayIGlJkk14qyzdKoVHnSM-ZjVEemUaMw0MQQ-dHJ3M2FExqfCkgSwvUDHA9k7Wq0P3IIqRju_cYVJhMMHHIR89ve17BsSJGKx0t9hhZ8DyG2iVoQZ3Yi7Z1_IMoHXzLLIVVC0Iv_ET3Gl9wtuGDObYV3PX6uOptRMxyJSCnzBZIfG9oUCg1q3h58gArSKuKG9cQPp5BczQgGxmNvN9XFI7vvIyCOzYR6u3msd93rjOwAsCVmCLKRBKv6MZ_WTXW9h7RqeOTyhDFj-PlrLiJrpnl3tMMuaNYvZxGK2rVbufdgLN0tWfEVUSpDvVz9wmXoJizJGvQrqWxJvaDHfpYQVs453M35npb5Xm-ihCzgHkiROpPVJtuVsv5wxvY0oK4PM2BSQg4USWI_Q5m-tq4KcED6BzdMVzWmdVYakdVMPrQj6tgToSEqI24dTPgOtZIgTXgOtgtCr9E3dHRhj8f6eGBeHAL5CDCpArQy3GdGYsFRlkH3kg3EOmWW-uaMne0OnJK7Voa8eAhlsK-MgaEOK5KpMdruEFBiMICxa03sD5w6XOHcvJVnTET6N6sSJbUXVDDvrnXSoVfRoMWYLeORsJRvDMLPgHP5J576HwtwKmoN3xHQGt5w2zV1Aob7H5jOLn70HJ9eU43N5X7Zh_7Fa7fcNBNq9ziL3zNIeLYs-NHBDEtPITQAAEr-jCXVGOv7S9oMJzfI9vie-EQqHeQzBnT4t-nVliOU3j3H79SE_C8EAAA)
The example also shows a way I figured out to pass a custom Layout, which does work.
Do you think this is a bug, or have I misunderstood or misapplied the documentation?
Thank you.
P.S.: There is also some funny rendering bug of the test app, but I assume that is more of a pycafe thing rather than a solara thing, since it renders correctly locally - so can be ignored. | closed | 2024-08-28T00:38:50Z | 2024-08-28T20:21:56Z | https://github.com/widgetti/solara/issues/755 | [] | JovanVeljanoski | 2 |
deepset-ai/haystack | nlp | 8,065 | docs: clean up docstrings of DocumentWriter | closed | 2024-07-24T10:36:59Z | 2024-07-24T11:54:40Z | https://github.com/deepset-ai/haystack/issues/8065 | [] | agnieszka-m | 0 |
|
huggingface/transformers | nlp | 36,646 | Hybrid models | ### Feature request
HymbaForCausalLM
### Motivation
Hybrid models not supported:
Support:
[Hymba](https://developer.nvidia.com/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/)
A hybrid attention mechanism combining local sliding window attention and global attention.
Grouped-query attention (GQA).
A mix of global and local rotary embeddings.
### Your contribution
No | open | 2025-03-11T11:50:36Z | 2025-03-11T11:50:36Z | https://github.com/huggingface/transformers/issues/36646 | [
"Feature request"
] | johnnynunez | 0 |
sinaptik-ai/pandas-ai | pandas | 1,184 | why so slow. compare langchaindatabase and vanna .... | ### System Info
Version 2.0.44, same database, same question: LangChain takes 7 s and Vanna takes 8 s (2 s of which is for the plot). I spent 48 hours testing to find out why PandasAI is so slow, and found that it all comes down to generate_python_code.tmpl and generate_python_code_with_sql.tmpl.
Why use one huge prompt instead of splitting it up and using different prompts for different stages? Giving the LLM less to read and reason over at each step could save a lot of time and tokens.
### 🐛 Describe the bug
2024-05-30 00:50:25,773 - logger.py[line:75] - INFO: Persisting Agent Training data in E:\LANGCHAT\Langchain-Chatchat\chromadb
2024-05-30 00:50:25,810 - segment.py[line:189] - INFO: Collection pandasai-qa is not created.
2024-05-30 00:50:25,811 - segment.py[line:189] - INFO: Collection pandasai-docs is not created.
2024-05-30 00:50:25,811 - logger.py[line:75] - INFO: Successfully initialized collection pandasai
2024-05-30 00:50:26,037 - logger.py[line:75] - INFO: Question: 列出12号楼所有电表中度数最高的前5个,生成图表
2024-05-30 00:50:26,037 - logger.py[line:75] - INFO: Running PandasAI with langchain_tongyi LLM...
2024-05-30 00:50:26,037 - logger.py[line:75] - INFO: Prompt ID: c2c97c1c-fc13-4fd8-87d1-a5f647ce100f
2024-05-30 00:50:26,038 - logger.py[line:75] - INFO: Executing Pipeline: GenerateChatPipeline
2024-05-30 00:50:28,044 - logger.py[line:75] - INFO: Executing Step 0: ValidatePipelineInput
2024-05-30 00:50:28,045 - logger.py[line:75] - INFO: Executing Step 1: CacheLookup
2024-05-30 00:50:28,045 - logger.py[line:75] - INFO: Executing Step 2: PromptGeneration
2024-05-30 00:50:30,261 - logger.py[line:75] - INFO: Executing Step 3: CodeGenerator
2024-05-30 00:50:45,749 - logger.py[line:75] - INFO: Executing Step 4: CachePopulation
2024-05-30 00:50:45,750 - logger.py[line:75] - INFO: Executing Step 5: CodeCleaning
2024-05-30 00:50:45,750 - logger.py[line:75] - INFO: Saving charts to exports\charts\c2c97c1c-fc13-4fd8-87d1-a5f647ce100f.png
2024-05-30 00:50:45,752 - logger.py[line:75] - INFO: | closed | 2024-05-29T17:10:09Z | 2024-05-29T17:22:35Z | https://github.com/sinaptik-ai/pandas-ai/issues/1184 | [] | colorwlof | 2 |
mckinsey/vizro | plotly | 871 | Does Google Gemini support Vizro-AI? | ### Question
I'm trying to follow the example given on [Vizro-AI homepage](https://vizro.readthedocs.io/projects/vizro-ai/en/vizro-ai-0.3.2/pages/tutorials/quickstart/), but instead of using the **OpenAI model** I switched to the **Google Gemini**.
But I'm getting the error below.
**Error**:
```
ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[imports].items: missing field.
```
**Stacktrace**:
```
Traceback (most recent call last):
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
result = func()
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 579, in code_to_exec
exec(code, module.__dict__)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/graphaito.py", line 42, in <module>
fig = vizro_ai.plot(
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/vizro_ai/_vizro_ai.py", line 34, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/vizro_ai/_vizro_ai.py", line 87, in plot
response = _get_pydantic_model(
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/vizro_ai/dashboard/_pydantic_output.py", line 84, in _get_pydantic_model
res = pydantic_llm.invoke(message_content)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2879, in invoke
input = context.run(step.invoke, input, config)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 5093, in invoke
return self.bound.invoke(
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 277, in invoke
self.generate_prompt(
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
raise e
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
self._generate_with_cache(
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
result = self._generate(
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_google_genai/chat_models.py", line 975, in _generate
response: GenerateContentResponse = _chat_with_retry(
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_google_genai/chat_models.py", line 198, in _chat_with_retry
return _chat_with_retry(**kwargs)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 336, in wrapped_f
return copy(f, *args, **kw)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 376, in iter
result = action(retry_state)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
File "/home/ubuntu/Projects/DataAnalysis/graphaito/.venv/lib/python3.10/site-packages/langchain_google_genai/chat_models.py", line 192, in _chat_with_retry
raise ChatGoogleGenerativeAIError(
langchain_google_genai.chat_models.ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[imports].items: missing field.
```
**Python:** 3.10
**vizro**: 0.1.26
**vizro_ai**: 0.3.2
**langchain**: 0.2.17
**langchain_core**: 0.2.43
**langchain_google_genai**: 1.0.10
**langgraph**: 0.2.16
**langsmith**: 0.1.142
### Code/Examples
```python
from langchain_google_genai import (
ChatGoogleGenerativeAI,
HarmBlockThreshold,
HarmCategory,
)
from vizro_ai import VizroAI
import vizro.plotly.express as px
safety_settings = {
HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE
}
llm = ChatGoogleGenerativeAI(
model='gemini-1.5-flash',
api_key='XXXX',
safety_settings=safety_settings)
vizro_ai = VizroAI(model=llm)
df = px.data.gapminder()
fig = vizro_ai.plot(
df,
"""
Create a line graph for GDP per capita since 1950 for each continent.
Mark the x axis as Year, y axis as GDP Per Cap and don't include a title.
Make sure to take average over continent.
"""
)
```
### Which package?
vizro-ai
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-11-13T16:47:34Z | 2025-02-07T22:43:48Z | https://github.com/mckinsey/vizro/issues/871 | [
"Bug Report :bug:",
"General Question :question:"
] | richizo | 4 |
idealo/image-super-resolution | computer-vision | 59 | Any chance on getting the sample weights? Drive, Dropbox... | Can someone please share the old weights files?
rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5
and
rdn-C6-D20-G64-G064-x2/PSNR-driven/rdn-C6-D20-G64-G064-x2_PSNR_epoch086.hdf5
| closed | 2019-08-29T11:50:38Z | 2020-01-09T14:44:28Z | https://github.com/idealo/image-super-resolution/issues/59 | [] | talvasconcelos | 13 |
alteryx/featuretools | data-science | 2,419 | Standardize Regular Expressions for Natural Language Primitives | Multiple primitives make use of regular expressions. Some of them define what punctuation to delimit on. We should standardize these. This would give users the confidence that Primitive A does not consider a string to be one word, while Primitive B considers it to be two. We could define the regexes in a common file and import them in the primitives.
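A rough sketch of what that common module could look like (the module path and the exact regex below are only illustrative, not a proposal for the final delimiters):
```python
# e.g. featuretools/primitives/standard/natural_language/constants.py (hypothetical location)
import re
import string

# Single source of truth for what counts as a token delimiter
DELIMITERS_REGEX = re.compile(rf"[\s{re.escape(string.punctuation)}]+")


def split_words(text):
    """Split text on the shared whitespace/punctuation delimiters."""
    return [token for token in DELIMITERS_REGEX.split(text) if token]
```
Each primitive would import `DELIMITERS_REGEX`/`split_words` instead of defining its own pattern, so word counts stay consistent across primitives.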
This was originally an issue on `nlp-primitives` before the primitives were moved here. | closed | 2022-12-20T00:20:38Z | 2023-01-03T17:35:13Z | https://github.com/alteryx/featuretools/issues/2419 | [
"refactor"
] | sbadithe | 0 |
aimhubio/aim | data-visualization | 2,619 | UI crashes when opening Metrics, Params, or Scatters Explorer | ## 🐛 Bug
UI crashes when opening Metrics, Params, or Scatters Explorer with empty chart data
### To reproduce
1. need to have empty chart data
2. open one of the Explorers: Metrics, Params or Scatters
3. UI crashes when trying to open the Explorer page
### Expected behavior
Expected to open the Explorer page without crashing the UI
### Environment
- Aim Version (e.g., 3.17.0) | closed | 2023-03-24T16:26:45Z | 2023-03-27T11:19:09Z | https://github.com/aimhubio/aim/issues/2619 | [
"type / bug",
"area / Web-UI",
"phase / shipped",
"priority / critical-urgent"
] | KaroMourad | 0 |
databricks/koalas | pandas | 2,015 | koalas Series not showing after apply (java.lang.IllegalArgumentException) | I'm trying to apply a custom function to a koalas Series using the `apply` method. However, when I perform some operations on the resulting Series, such as `.loc` or `.value_counts()`, I get an error similar to #1858.
I tried to set manually `ARROW_PRE_0_15_IPC_FORMAT` to `'1'` as suggested in the cited issue but with no success.
Below is a small reproducible example along with the error message.
Versions:
- `pyspark==2.4.0`
- `pyarrow==2.0.0`
- `koalas==1.4.0`
```
import os
os.environ['ARROW_PRE_0_15_IPC_FORMAT'] = '1'
os.environ['PYARROW_IGNORE_TIMEZONE'] = '1'
import databricks.koalas as ks
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('test').enableHiveSupport().getOrCreate()
s = ks.Series(['45.123', '0.123', '3.5323', '6554.0', None, '42'])
def get_number_decimal_digits(x) -> int:
if x is None:
return -1
else:
return x[::-1].find('.')
ss = s.apply(get_number_decimal_digits)
ss.value_counts()
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
/opt/venv/geocoding/lib/python3.6/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
/opt/venv/geocoding/lib/python3.6/site-packages/IPython/lib/pretty.py in pretty(self, obj)
392 if cls is not object \
393 and callable(cls.__dict__.get('__repr__')):
--> 394 return _repr_pprint(obj, self, cycle)
395
396 return _default_pprint(obj, self, cycle)
/opt/venv/geocoding/lib/python3.6/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
698 """A pprint that just redirects to the normal repr function."""
699 # Find newlines and replace them with p.break_()
--> 700 output = repr(obj)
701 lines = output.splitlines()
702 with p.group():
/opt/venv/geocoding/lib/python3.6/site-packages/databricks/koalas/series.py in __repr__(self)
5839 return self._to_internal_pandas().to_string(name=self.name, dtype=self.dtype)
5840
-> 5841 pser = self._kdf._get_or_create_repr_pandas_cache(max_display_count)[self.name]
5842 pser_length = len(pser)
5843 pser = pser.iloc[:max_display_count]
/opt/venv/geocoding/lib/python3.6/site-packages/databricks/koalas/frame.py in _get_or_create_repr_pandas_cache(self, n)
10606 def _get_or_create_repr_pandas_cache(self, n):
10607 if not hasattr(self, "_repr_pandas_cache") or n not in self._repr_pandas_cache:
> 10608 self._repr_pandas_cache = {n: self.head(n + 1)._to_internal_pandas()}
10609 return self._repr_pandas_cache[n]
10610
/opt/venv/geocoding/lib/python3.6/site-packages/databricks/koalas/frame.py in _to_internal_pandas(self)
10602 This method is for internal use only.
10603 """
> 10604 return self._internal.to_pandas_frame
10605
10606 def _get_or_create_repr_pandas_cache(self, n):
/opt/venv/geocoding/lib/python3.6/site-packages/databricks/koalas/utils.py in wrapped_lazy_property(self)
514 def wrapped_lazy_property(self):
515 if not hasattr(self, attr_name):
--> 516 setattr(self, attr_name, fn(self))
517 return getattr(self, attr_name)
518
/opt/venv/geocoding/lib/python3.6/site-packages/databricks/koalas/internal.py in to_pandas_frame(self)
807 """ Return as pandas DataFrame. """
808 sdf = self.to_internal_spark_frame
--> 809 pdf = sdf.toPandas()
810 if len(pdf) == 0 and len(sdf.schema) > 0:
811 pdf = pdf.astype(
/opt/venv/geocoding/lib/python3.6/site-packages/pyspark/sql/dataframe.py in toPandas(self)
2140
2141 # Below is toPandas without Arrow optimization.
-> 2142 pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
2143
2144 dtype = {}
/opt/venv/geocoding/lib/python3.6/site-packages/pyspark/sql/dataframe.py in collect(self)
531 """
532 with SCCallSiteSync(self._sc) as css:
--> 533 sock_info = self._jdf.collectToPython()
534 return list(_load_from_socket(sock_info, BatchedSerializer(PickleSerializer())))
535
/opt/venv/geocoding/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
/opt/venv/geocoding/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/opt/venv/geocoding/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o324.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 8, miceslmcw02.2irgdc.lan, executor 1): java.lang.IllegalArgumentException
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543)
at org.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:58)
at org.apache.arrow.vector.ipc.ArrowStreamReader.readSchema(ArrowStreamReader.java:132)
at org.apache.arrow.vector.ipc.ArrowReader.initialize(ArrowReader.java:181)
at org.apache.arrow.vector.ipc.ArrowReader.ensureInitialized(ArrowReader.java:172)
at org.apache.arrow.vector.ipc.ArrowReader.getVectorSchemaRoot(ArrowReader.java:65)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:162)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at org.apache.spark.sql.execution.python.ArrowEvalPythonExec$$anon$2.<init>(ArrowEvalPythonExec.scala:98)
at org.apache.spark.sql.execution.python.ArrowEvalPythonExec.evaluate(ArrowEvalPythonExec.scala:96)
at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:127)
at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:89)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1315)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2088)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2107)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2132)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:299)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3263)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3260)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3260)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543)
at org.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:58)
at org.apache.arrow.vector.ipc.ArrowStreamReader.readSchema(ArrowStreamReader.java:132)
at org.apache.arrow.vector.ipc.ArrowReader.initialize(ArrowReader.java:181)
at org.apache.arrow.vector.ipc.ArrowReader.ensureInitialized(ArrowReader.java:172)
at org.apache.arrow.vector.ipc.ArrowReader.getVectorSchemaRoot(ArrowReader.java:65)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:162)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at org.apache.spark.sql.execution.python.ArrowEvalPythonExec$$anon$2.<init>(ArrowEvalPythonExec.scala:98)
at org.apache.spark.sql.execution.python.ArrowEvalPythonExec.evaluate(ArrowEvalPythonExec.scala:96)
at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:127)
at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:89)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1315)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
```
---
Also, I tried to run the checks you mentioned in [this comment](https://github.com/databricks/koalas/issues/1858#issuecomment-714636658) after the code pasted above and I obtain:
```
os.environ.get('ARROW_PRE_0_15_IPC_FORMAT', 'None')
'1'
```
```
from pyspark.sql.functions import udf
@udf('string')
def check(x):
return os.environ.get('ARROW_PRE_0_15_IPC_FORMAT', 'None')
spark.range(1).select(check('id')).head()[0]
'None'
```
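For completeness, a sketch of how the variable could be pushed to the executor side through Spark configuration rather than only the driver process (`spark.executorEnv.*` and `spark.yarn.appMasterEnv.*` are standard Spark settings; whether pyarrow 2.0 still honors the legacy `ARROW_PRE_0_15_IPC_FORMAT` flag at all is a separate question):
```
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName('test')
    # propagate the flag to executor-side Python workers
    .config('spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT', '1')
    # and to the application master when running on YARN
    .config('spark.yarn.appMasterEnv.ARROW_PRE_0_15_IPC_FORMAT', '1')
    .enableHiveSupport()
    .getOrCreate()
)
```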
| closed | 2021-01-21T10:24:22Z | 2021-01-22T21:06:31Z | https://github.com/databricks/koalas/issues/2015 | [
"not a koalas issue"
] | RicSpd | 3 |
smiley/steamapi | rest-api | 48 | Easy way to access a game's tags | Hi, first off I want to thank you for this great tool you have created; it has helped me immensely with the project I am currently working on! However, I noticed that you are missing a feature that, while very specific, was a major roadblock in the development of my project. I needed a way to quickly access the tags for a specified list of games, and I couldn't find any way to do that with your interface or anywhere else on the web. To fix this, I created a Python script that builds a database file of all games on Steam and their tags. I plan on hosting this database, either off my own computer or on a free hosting service, in an easily accessible way for use in my project, and I think being able to add a game's tags to its SteamApp object could be a valuable addition to your tool. Here is a link to the code that creates the db: https://github.com/BronxBombers/Steam-Tags-Database. If you are interested in adding this to your project, let me know and I can give you a link to wherever I decide to host it, and I could also code the implementation into your system for you if you wanted. | closed | 2017-10-02T00:22:45Z | 2019-04-09T16:00:27Z | https://github.com/smiley/steamapi/issues/48 | [
"question",
"question-answered"
] | zach-morgan | 1 |
Ehco1996/django-sspanel | django | 75 | Node list shows node as offline even though it works and the host can be pinged | The node list shows the node as offline, but the node can still be used normally and the host responds to ping. How can I fix this? | closed | 2018-01-25T01:44:12Z | 2018-01-30T00:15:28Z | https://github.com/Ehco1996/django-sspanel/issues/75 | [] | syssfo | 3
allenai/allennlp | data-science | 4,904 | Prepare data loading for 2.0 | - [x] Remove `PyTorchDataLoader`. I wouldn't recommend anyone use this, and it doesn't provide any benefit over `MultiProcessDataLoader`. (https://github.com/allenai/allennlp/pull/4907)
- [x] Make submodule names consistent. Currently we have `multi_process_data_loader.py` and `multitask_*.py`. We should either rename `multi_process_*` to `multiprocess_*` or rename `multitask_*` to `multi_task_*`. (https://github.com/allenai/allennlp/pull/4906)
- [x] `data_path` param should accept `PathLike`. (https://github.com/allenai/allennlp/pull/4908)
- [x] Do more extensive testing on `MultiProcessDataLoader`. Potentially improve error handling. (https://github.com/allenai/allennlp/pull/4912)
- [x] Document best practices for `MultiProcessDataLoader`. (https://github.com/allenai/allennlp/pull/4909)
- [ ] ~~❓Improve where `TokenIndexers` fit it. Maybe they are part of `DatasetReader`. This would actual require minimal changes.~~
- [x] Update `allennlp-models@vision` for these changes. (https://github.com/allenai/allennlp-models/pull/192) | closed | 2021-01-08T16:47:26Z | 2021-01-15T15:27:27Z | https://github.com/allenai/allennlp/issues/4904 | [] | epwalsh | 1 |
Miserlou/Zappa | flask | 2,050 | Zappa Deployment to AWS Lambda not working with MSSQL | Django apps that use MSSQL as the database and are deployed with Zappa fail with the error ImproperlyConfigured("Error loading pyodbc module: %s" % e), where the underlying error appears to be No module named 'pyodbc',
even though pyodbc is installed and everything works properly in the local environment.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: zappa 0.50.0
* Operating System and Python version: Linux| python 3.7
| closed | 2020-02-28T16:09:41Z | 2020-03-06T05:16:54Z | https://github.com/Miserlou/Zappa/issues/2050 | [] | ORC-1 | 5 |
keras-team/autokeras | tensorflow | 1,258 | On model.fit, at Starting New Trial include trial number | I hope that this would be easy to implement. When training a model, in the output along with "Starting new trial" include the trial number which would count up to the "max_trials" parameter passed in the model definition. Would look like "Starting new trial- #1/50" and increment up to the max-trials (in this case 50).
This would allow monitoring of the progress of the training, just like the output currently displays the count of the epochs ("Epoch 1/10", "Epoch 2/10", etc.)
Thank you for your consideration and thank you for the tool. | closed | 2020-07-29T20:28:34Z | 2020-08-06T19:56:48Z | https://github.com/keras-team/autokeras/issues/1258 | [] | dcohron | 2 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 659 | lgfntveceig | closed | 2018-12-16T20:16:54Z | 2020-12-05T20:46:22Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/659 | [] | seanmcfeely | 0 |
|
albumentations-team/albumentations | deep-learning | 1,997 | [Performance] Update cv2.lut with stringzilla lut | closed | 2024-10-18T19:04:24Z | 2024-10-19T01:34:24Z | https://github.com/albumentations-team/albumentations/issues/1997 | [
"Speed Improvements"
] | ternaus | 0 |
|
manbearwiz/youtube-dl-server | rest-api | 141 | What to do if filename is too long? | Just installed youtube-dl-server and tried pasting a random YouTube link into the web form. I could see a`.part` file and `.ytdl` file got created immediately, but nothing else seemed to happen. So I looked in the logs of the running Docker image and saw it had this error (title changed to XXXXX):
```
yt_dlp.utils.DownloadError: ERROR: unable to open for writing: [Errno 36] Filename too long: '/youtube-dl/XXXXX[YYYYYYYY].f248.webm.part-Frag1.part'
```
What am I supposed to do here? The filename seems to be derived from the YouTube title, and there is no field to specify the filename. | open | 2023-03-15T12:25:57Z | 2023-03-15T12:25:57Z | https://github.com/manbearwiz/youtube-dl-server/issues/141 | [] | callumlocke | 0 |
MaartenGr/BERTopic | nlp | 1,348 | Passing different text to model vs. embeddings | Hi there,
I'm working with a dataset of social media messages, but some of the messages are incredibly short and, without the context of the message they are replying to, they are pretty uninformative. I wanted to try and use a pre-trained model to embed the messages using the messages along with their "parent" message so to speak, but I want to avoid supplying the topic model with the same messages twice because this will amplify the importance of repeated messages.
For example, would the following achieve what I'm describing where `docs_with_replies` is used to create the embedding and then only `docs` is supplied to the model?
`sentence_model = SentenceTransformer("all-MiniLM-L6-v2")`
`embeddings = sentence_model.encode(docs_with_replies, show_progress_bar=False)`
`topic_model = BERTopic( )`
`topics, probs = topic_model.fit_transform(docs, embeddings)`
Thank you in advance! | closed | 2023-06-17T15:00:20Z | 2023-09-27T09:12:23Z | https://github.com/MaartenGr/BERTopic/issues/1348 | [] | bexy-b | 2 |
deezer/spleeter | tensorflow | 94 | [Feature] offline app or a web based service | ## Description
It would be a nice feature to have a simple web interface for the project, making it easier for non-technical users to benefit from it.
## Additional information
I noticed a docker folder in the project; this should already make deployment easier. | closed | 2019-11-14T15:16:37Z | 2019-11-28T22:14:32Z | https://github.com/deezer/spleeter/issues/94 | [
"invalid",
"wontfix"
] | MohamedAliRashad | 10 |
newpanjing/simpleui | django | 362 | actions.js addEventListener error | **Bug description**
Briefly describe the bug encountered:
Uncaught TypeError: Cannot read property 'addEventListener' of null
at window.Actions (actions.js:128)
at HTMLDocument.<anonymous> (actions.js:167)
**Steps to reproduce**
1. Install the latest version of SimpleUI
2. Open a browser and visit a data list page
3. The console reports the error above; it is unclear whether it affects any functionality
**Environment**
1. Operating System:
(Windows/Linux/MacOS)
Windows 10, 64-bit
2. Python Version:
3.8
3. Django Version:
3.2
4. SimpleUI Version:
2021.4.1
**Description**
| closed | 2021-04-15T03:46:25Z | 2021-05-11T09:12:27Z | https://github.com/newpanjing/simpleui/issues/362 | [
"bug"
] | goeswell | 1 |
MagicStack/asyncpg | asyncio | 908 | UndefinedTableError: relation "pg_temp.session_vars" does not exist | I can successfully connect to a PostgreSQL database and execute queries against a database to view the data the tables contain.
However, when I try to execute an UPDATE, I get the following error:
```
UndefinedTableError: relation "pg_temp.session_vars" does not exist
```
I am creating a connection to the database by doing the following:
(nothing special)
```
conn = await asyncpg.connect(user='username',
password='password',
database='databasename',
host='localhost')
```
I was wondering what, if anything, you might be aware of that would resolve this issue. Might there be something additional I need to do when setting up the connection?
(I can connect to the database directly via the shell and issue UPDATE's without any problems.) | open | 2022-04-19T21:48:33Z | 2022-04-19T23:09:48Z | https://github.com/MagicStack/asyncpg/issues/908 | [] | eric-g-97477 | 3 |
fastapi/sqlmodel | pydantic | 20 | Setting PostgreSQL schema | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import datetime
from typing import Optional, List
from sqlmodel import Field, SQLModel
class Campaign(SQLModel, table=True):
campaign_id: Optional[int] = Field(
default=None,
primary_key=True,
schema_extra={'schema': 'test'},
)
created_at: Optional[datetime] = Field(
default=None,
schema_extra={'schema': 'test'}
)
updated_at: Optional[datetime] = Field(
default=None,
schema_extra={'schema': 'test'}
)
name: str = Field(
schema_extra={'schema': 'test'}
)
description: Optional[str]= Field(
default=None,
schema_extra={'schema': 'test'}
)
template_id: int = Field(
schema_extra={'schema': 'test'}
)
orders_needed: int = Field(
schema_extra={'schema': 'test'}
)
postcode_areas: List[str] = Field(
schema_extra={'schema': 'test'}
)
```
### Description
I've created a model and I'm trying to connect to an existing PostgreSQL database.
But I have my tables in a different schema called `test` and not `public`.
After looking through the code I was hoping the input `schema_extra` would do the trick
https://github.com/tiangolo/sqlmodel/blob/02da85c9ec39b0ebb7ff91de3045e0aea28dc998/sqlmodel/main.py#L158
But, I must be wrong.
This is the SQL being outputted:
```
SELECT campaign.campaign_id, campaign.created_at, campaign.updated_at, campaign.name, campaign.description, campaign.template_id, campaign.orders_needed, campaign.postcode_areas
FROM campaign
```
Ideally I want it to say:
```
SELECT campaign.campaign_id, campaign.created_at, campaign.updated_at, campaign.title, campaign.description, campaign.template_id, campaign.orders_needed, campaign.postcode_areas
FROM test.campaign AS campaign
```
Is there a way to map a model to a different schema other than the default `public` schema?
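For context, the behaviour I would expect (assuming SQLModel forwards `__table_args__` to the underlying SQLAlchemy declarative table, as plain SQLAlchemy models do) is something like:
```python
from typing import Optional
from sqlmodel import Field, SQLModel


class Campaign(SQLModel, table=True):
    # Assumption: __table_args__ is passed through to SQLAlchemy
    __table_args__ = {"schema": "test"}

    campaign_id: Optional[int] = Field(default=None, primary_key=True)
    name: str
```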
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.3
### Python Version
3.8.10
### Additional Context
_No response_ | closed | 2021-08-25T15:46:10Z | 2024-11-14T02:51:28Z | https://github.com/fastapi/sqlmodel/issues/20 | [
"question"
] | Chunkford | 8 |
pykaldi/pykaldi | numpy | 74 | How to access a public member of a class that is not defined in the clif file ? | I am trying to access plda.psi_ vector, which is a public member in Plda class defined in kaldi ivector.h. And I write the following code that fails.
```python
from kaldi.ivector import Plda  # import path assumed from pykaldi's package layout

i = 0
plda = Plda()
print(plda.psi_[i])
```
The error is:
```
AttributeError: '_plda.Plda' object has no attribute 'psi_'
```
It seems the plda.clif file does not define this psi_ member. Does that mean I should revise the clif file? Since I installed pykaldi via conda, I am not sure where the file is. | closed | 2019-01-11T14:05:27Z | 2019-01-12T11:02:17Z | https://github.com/pykaldi/pykaldi/issues/74 | [] | JerryPeng21cuhk | 5
explosion/spacy-course | jupyter | 4 | Chapter 1, section 10-11: "ORTH" vs "TEXT" in pattern matching | Hi Ines!
This is :heart_eyes:
Great UI _and_ content. Really great work!
So far (Chapter 1, section 11), I've only been confused twice: Once when I had to install the `en_core_web_sm` _myself_ (I don't mind, though, it was easy to find out how), and now in section 11.
In section 10, we learned to use the pattern key `ORTH` to do exact text matching, but section 11 expects the [newer v2.1 `TEXT`](https://spacy.io/usage/rule-based-matching#adding-patterns-attributes) key (nice docs by the way).
I think the two sections should be aligned, or we/you should tell students that they can use either. Having not used spacy before, I strongly prefer the new `TEXT` key (what is `ORTH` even?) :grin:
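For anyone comparing the two while working through the exercises, my understanding is that both keys match the exact, verbatim token text, so these two patterns are equivalent:
```python
# TEXT is the v2.1+ spelling; ORTH is the older name for the same attribute
pattern_orth = [{"ORTH": "iPhone"}, {"ORTH": "X"}]
pattern_text = [{"TEXT": "iPhone"}, {"TEXT": "X"}]
```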
My initial investigation used the very neat `spacy.explain` feature, which does not yet know either word - I'm not sure if it is meant to also explain the pattern matching keys.
At any rate, thank you for your very nice and accessible work! | closed | 2019-04-20T14:48:34Z | 2019-04-20T20:42:57Z | https://github.com/explosion/spacy-course/issues/4 | [] | thorbjornwolf | 2 |
unit8co/darts | data-science | 2,638 | [BUG] TCN model cannot be saved when used with callbacks | **Describe the bug**
For some reason, defining callbacks in pl_trainer_kwargs makes the torch.save call inside TCNModel.save end up serializing the Lightning module as well. Because the TCN contains parametrized layers (weight_norm), torch raises `RuntimeError: Serialization of parametrized modules is only supported through state_dict().`
Not setting any custom callbacks will not trigger the bug.
**To Reproduce**
```python
from darts.utils import timeseries_generation as tg
from darts.models.forecasting.tcn_model import TCNModel
def test_save(self):
large_ts = tg.constant_timeseries(length=100, value=1000)
model = TCNModel(
input_chunk_length=6,
output_chunk_length=2,
n_epochs=10,
num_layers=2,
kernel_size=3,
dilation_base=3,
weight_norm=True,
dropout=0.1,
**{
"pl_trainer_kwargs": {
"accelerator": "cpu",
"enable_progress_bar": False,
"enable_model_summary": False,
"callbacks": [LiveMetricsCallback()],
}
},
)
model.fit(large_ts[:98])
model.save("model.pt")
import pytorch_lightning as pl
from pytorch_lightning.callbacks import Callback
class LiveMetricsCallback(Callback):
def __init__(self):
self.is_sanity_checking = True
def on_train_epoch_end(
self, trainer: "pl.Trainer", pl_module: "pl.LightningModule"
) -> None:
print()
print("train", trainer.current_epoch, self.get_metrics(trainer, pl_module))
def on_validation_epoch_end(
self, trainer: "pl.Trainer", pl_module: "pl.LightningModule"
) -> None:
if self.is_sanity_checking and trainer.num_sanity_val_steps != 0:
self.is_sanity_checking = False
return
print()
print("val", trainer.current_epoch, self.get_metrics(trainer, pl_module))
@staticmethod
def get_metrics(trainer, pl_module):
"""Computes and returns metrics and losses at the current state."""
losses = {
"train_loss": trainer.callback_metrics.get("train_loss"),
"val_loss": trainer.callback_metrics.get("val_loss"),
}
return dict(
losses,
**pl_module.train_metrics.compute(),
**pl_module.val_metrics.compute(),
)
```
will output
```
darts/models/forecasting/torch_forecasting_model.py:1679: in save
torch.save(self, f_out)
../o2_ml_2/.venv/lib/python3.10/site-packages/torch/serialization.py:629: in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol, _disable_byteorder_record)
../o2_ml_2/.venv/lib/python3.10/site-packages/torch/serialization.py:841: in _save
pickler.dump(obj)
RuntimeError: Serialization of parametrized modules is only supported through state_dict(). See: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-a-general-checkpoint-for-inference-and-or-resuming-training
```
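A torch-only sketch of the underlying limitation, independent of darts and Lightning (it uses `register_parametrization` directly instead of darts' `weight_norm=True` path):
```python
import torch
import torch.nn as nn
from torch.nn.utils import parametrize


class Identity(nn.Module):
    def forward(self, X):
        return X


layer = nn.Linear(3, 3)
parametrize.register_parametrization(layer, "weight", Identity())

# Pickling the whole module (which torch.save(obj, f) does) is rejected for
# parametrized modules; only state_dict() serialization is supported.
try:
    torch.save(layer, "layer.pt")
except RuntimeError as err:
    print(err)
```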
**System (please complete the following information):**
- Python version: 3.10
- darts version 0.32.0 (the bug does not exist in 0.31)
**Additional context**
It likely comes from https://github.com/unit8co/darts/pull/2593
| closed | 2025-01-07T17:12:33Z | 2025-02-04T13:10:10Z | https://github.com/unit8co/darts/issues/2638 | [
"bug",
"improvement"
] | MarcBresson | 5 |
dpgaspar/Flask-AppBuilder | flask | 1,512 | Template not found | I want to add multiple entries to a model at a single time, for this I was experimenting with different settings.
In a class derived from ModelView, I mentioned `add_widget = ListAddWidget`.
I got the error below:
> jinja2.exceptions.TemplateNotFound: appbuilder/general/widgets/list_add.html
Also, I found there is a `lnk_add(my_href)` macro which can enable multi-form entries into a model; can you guide me on how to include it in a custom add template? | closed | 2020-11-05T16:26:45Z | 2021-07-09T12:34:36Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1512 | [
"stale"
] | mswastik | 2 |
kizniche/Mycodo | automation | 547 | Identifying Input devices across reboots | This issue was briefly discussed with @Theoi-Meteoroi. Essentially, USB devices are assigned as /dev/ttyUSB*, where * is a an integer assigned by the kernel upon detection. If there are several USB devices connected, the linux device for the physical device may change across reboots. For instance, after plugging in two USB devices, the linux devices may be as follows:
Device 1: /dev/ttyUSB1
Device 2: /dev/ttyUSB2
However, after a reboot, the devices may now be:
Device 1: /dev/ttyUSB2
Device 2: /dev/ttyUSB1
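For reference, outside of Mycodo the kernel already exposes stable paths under `/dev/serial/by-id/`, and a udev rule can pin a fixed name to a particular adapter (the vendor/product/serial values below are placeholders):
```
# /etc/udev/rules.d/99-usb-serial.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A12345", SYMLINK+="mycodo_sensor_1"
```
Either of those could serve as the identifier Mycodo saves instead of the raw /dev/ttyUSB* name.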
This will obviously cause an issue if the location is set based on the linux device. What is needed is either a way to preserve the linux device designation across reboots or a way to probe the devices for some sort of identifier that can be used to save the location in the Mycodo settings. | closed | 2018-10-10T16:48:48Z | 2018-11-26T22:29:37Z | https://github.com/kizniche/Mycodo/issues/547 | [
"manual"
] | kizniche | 2 |
jmcnamara/XlsxWriter | pandas | 380 | Issue with add_table | add_table throws a misleading warning in the variant that uses integer rows and columns when the ranges are set up incorrectly. An error is the expected output here, but the warning actually emitted caused me undue headaches, because it says nothing about why the call was really failing in a much more complicated context.
This is on Python 2.7.6 using xlsxwriter 0.9.3 and Excel 2016.
Demo code:
``` python
import xlsxwriter
workbook = xlsxwriter.Workbook("test.xlsx")
sheet = workbook.add_worksheet()
table = {"name": "Test", "data": [], "columns": [ {"header": "One"} ]}
# The table range is incorrect, so an error should be thrown but the wrong one is thrown
sheet.add_table(0, 0, 1, 1, table)
workbook.close()
```
This is what's thrown:
> /usr/local/lib/python2.7/dist-packages/xlsxwriter/worksheet.py:2361: UserWarning: Duplicate header name in add_table(): 'one'
> % force_unicode(name))
> -1
What it should throw (I'm just filling in what I think it should say):
> /usr/local/lib/python2.7/dist-packages/xlsxwriter/worksheet.py:2361: UserWarning: Manually specified table bounds in add_table doesn't line up with the dimensions of the data given!
| closed | 2016-09-20T16:34:53Z | 2016-12-03T00:51:09Z | https://github.com/jmcnamara/XlsxWriter/issues/380 | [
"bug",
"ready to close"
] | Ragora | 3 |